Merge branch 'yt-dlp:master' into vidio-live

commit fb127cccd2
Author: MinePlayersPE
Date: 2021-12-29 13:55:11 +07:00 (committed via GitHub)
389 changed files with 27586 additions and 11792 deletions

@@ -1,70 +0,0 @@
---
name: Broken site support
about: Report a broken or malfunctioning site
title: "[Broken]"
labels: Broken
assignees: ''
---
<!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
- First off, make sure you are using the latest version of yt-dlp. Run `yt-dlp --version` and ensure your version is 2021.08.10. If it's not, see https://github.com/yt-dlp/yt-dlp on how to update. Issues with an outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that all URLs and arguments with special characters are properly quoted or escaped as explained in https://github.com/yt-dlp/yt-dlp.
- Search the bugtracker for similar issues: https://github.com/yt-dlp/yt-dlp. DO NOT post duplicates.
- Finally, put an x into all relevant boxes like this: [x] (Don't forget to delete the empty space)
-->
- [ ] I'm reporting broken site support
- [ ] I've verified that I'm running yt-dlp version **2021.08.10**
- [ ] I've checked that all provided URLs are alive and playable in a browser
- [ ] I've checked that all URLs and arguments with special characters are properly quoted or escaped
- [ ] I've searched the bugtracker for similar issues including closed ones
## Verbose log
<!--
Provide the complete verbose output of yt-dlp that clearly demonstrates the problem.
Add the `-v` flag to the command line you run yt-dlp with (`yt-dlp -v <your command line>`), copy the WHOLE output and insert it below. It should look similar to this:
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKc']
[debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
[debug] yt-dlp version 2021.08.10
[debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2
[debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
[debug] Proxy map: {}
<more lines>
-->
```
PASTE VERBOSE LOG HERE
```
<!--
Do not remove the above ```
-->
## Description
<!--
Provide an explanation of your issue in an arbitrary form. Provide any additional information, suggested solution and as much context and examples as possible.
If work on your issue requires account credentials, please provide them or explain how one can obtain them.
-->
WRITE DESCRIPTION HERE

@@ -0,0 +1,63 @@
name: Broken site support
description: Report a broken or malfunctioning site
labels: [triage, site-bug]
body:
- type: checkboxes
id: checklist
attributes:
label: Checklist
description: |
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
options:
- label: I'm reporting a broken site
required: true
- label: I've verified that I'm running yt-dlp version **2021.12.27**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))
required: true
- label: I've checked that all provided URLs are alive and playable in a browser
required: true
- label: I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/ytdl-org/youtube-dl#video-url-contains-an-ampersand-and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
required: true
- label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates
required: true
- label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
required: true
- label: I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share them if required
- type: input
id: region
attributes:
label: Region
description: "Enter the region the site is accessible from"
placeholder: "India"
- type: textarea
id: description
attributes:
label: Description
description: |
Provide an explanation of your issue in an arbitrary form.
Provide any additional information, any suggested solutions, and as much context and examples as possible
placeholder: WRITE DESCRIPTION HERE
validations:
required: true
- type: textarea
id: log
attributes:
label: Verbose log
description: |
Provide the complete verbose output of yt-dlp **that clearly demonstrates the problem**.
Add the `-Uv` flag to the command line you run yt-dlp with (`yt-dlp -Uv <your command line>`), copy the WHOLE output and insert it below.
It should look similar to this:
placeholder: |
[debug] Command-line config: ['-Uv', 'http://www.youtube.com/watch?v=BaW_jenozKc']
[debug] Portable config file: yt-dlp.conf
[debug] Portable config: ['-i']
[debug] Encodings: locale cp1252, fs utf-8, stdout utf-8, stderr utf-8, pref cp1252
[debug] yt-dlp version 2021.12.27 (exe)
[debug] Python version 3.8.8 (CPython 64bit) - Windows-10-10.0.19041-SP0
[debug] exe versions: ffmpeg 3.0.1, ffprobe 3.0.1
[debug] Optional libraries: Cryptodome, keyring, mutagen, sqlite, websockets
[debug] Proxy map: {}
yt-dlp is up to date (2021.12.27)
<more lines>
render: shell
validations:
required: true

@@ -1,57 +0,0 @@
---
name: Site support request
about: Request support for a new site
title: "[Site Request]"
labels: Request
assignees: ''
---
<!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
- First off, make sure you are using the latest version of yt-dlp. Run `yt-dlp --version` and ensure your version is 2021.08.10. If it's not, see https://github.com/yt-dlp/yt-dlp on how to update. Issues with an outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that the site you are requesting is not dedicated to copyright infringement; see https://github.com/yt-dlp/yt-dlp. yt-dlp does not support such sites. For a site support request to be accepted, none of the provided example URLs may violate any copyrights.
- Search the bugtracker for similar site support requests: https://github.com/yt-dlp/yt-dlp. DO NOT post duplicates.
- Finally, put an x into all relevant boxes like this: [x] (Don't forget to delete the empty space)
-->
- [ ] I'm reporting a new site support request
- [ ] I've verified that I'm running yt-dlp version **2021.08.10**
- [ ] I've checked that all provided URLs are alive and playable in a browser
- [ ] I've checked that none of the provided URLs violate any copyrights
- [ ] The provided URLs do not contain any DRM to the best of my knowledge
- [ ] I've searched the bugtracker for similar site support requests including closed ones
## Example URLs
<!--
Provide all kinds of example URLs for which support should be included. Replace the following example URLs with yours.
-->
- Single video: https://www.youtube.com/watch?v=BaW_jenozKc
- Single video: https://youtu.be/BaW_jenozKc
- Playlist: https://www.youtube.com/playlist?list=PL4lCao7KL_QFVb7Iudeipvc2BCavECqzc
## Description
<!--
Provide any additional information.
If work on your issue requires account credentials, please provide them or explain how one can obtain them.
-->
WRITE DESCRIPTION HERE

@@ -0,0 +1,74 @@
name: Site support request
description: Request support for a new site
labels: [triage, site-request]
body:
- type: checkboxes
id: checklist
attributes:
label: Checklist
description: |
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
options:
- label: I'm reporting a new site support request
required: true
- label: I've verified that I'm running yt-dlp version **2021.12.27**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))
required: true
- label: I've checked that all provided URLs are alive and playable in a browser
required: true
- label: I've checked that none of the provided URLs [violate any copyrights](https://github.com/ytdl-org/youtube-dl#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free) or contain any [DRM](https://en.wikipedia.org/wiki/Digital_rights_management) to the best of my knowledge
required: true
- label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates
required: true
- label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
required: true
- label: I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and am willing to share them if required
- type: input
id: region
attributes:
label: Region
description: "Enter the region the site is accessible from"
placeholder: "India"
- type: textarea
id: example-urls
attributes:
label: Example URLs
description: |
Provide all kinds of example URLs for which support should be added
placeholder: |
- Single video: https://www.youtube.com/watch?v=BaW_jenozKc
- Single video: https://youtu.be/BaW_jenozKc
- Playlist: https://www.youtube.com/playlist?list=PL4lCao7KL_QFVb7Iudeipvc2BCavECqzc
validations:
required: true
- type: textarea
id: description
attributes:
label: Description
description: |
Provide any additional information
placeholder: WRITE DESCRIPTION HERE
validations:
required: true
- type: textarea
id: log
attributes:
label: Verbose log
description: |
Provide the complete verbose output **using one of the example URLs provided above**.
Add the `-Uv` flag to the command line you run yt-dlp with (`yt-dlp -Uv <your command line>`), copy the WHOLE output and insert it below.
It should look similar to this:
placeholder: |
[debug] Command-line config: ['-Uv', 'http://www.youtube.com/watch?v=BaW_jenozKc']
[debug] Portable config file: yt-dlp.conf
[debug] Portable config: ['-i']
[debug] Encodings: locale cp1252, fs utf-8, stdout utf-8, stderr utf-8, pref cp1252
[debug] yt-dlp version 2021.12.27 (exe)
[debug] Python version 3.8.8 (CPython 64bit) - Windows-10-10.0.19041-SP0
[debug] exe versions: ffmpeg 3.0.1, ffprobe 3.0.1
[debug] Optional libraries: Cryptodome, keyring, mutagen, sqlite, websockets
[debug] Proxy map: {}
yt-dlp is up to date (2021.12.27)
<more lines>
render: shell
validations:
required: true

@@ -1,40 +0,0 @@
---
name: Site feature request
about: Request a new functionality for a site
title: "[Site Request]"
labels: Request
assignees: ''
---
<!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
- First off, make sure you are using the latest version of yt-dlp. Run `yt-dlp --version` and ensure your version is 2021.08.10. If it's not, see https://github.com/yt-dlp/yt-dlp on how to update. Issues with an outdated version will be REJECTED.
- Search the bugtracker for similar site feature requests: https://github.com/yt-dlp/yt-dlp. DO NOT post duplicates.
- Finally, put an x into all relevant boxes like this: [x] (Don't forget to delete the empty space)
-->
- [ ] I'm reporting a site feature request
- [ ] I've verified that I'm running yt-dlp version **2021.08.10**
- [ ] I've searched the bugtracker for similar site feature requests including closed ones
## Description
<!--
Provide an explanation of your site feature request in an arbitrary form. Please make sure the description is worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient. Provide any additional information, suggested solution and as much context and examples as possible.
-->
WRITE DESCRIPTION HERE

@@ -0,0 +1,72 @@
name: Site feature request
description: Request a new functionality for a supported site
labels: [triage, site-enhancement]
body:
- type: checkboxes
id: checklist
attributes:
label: Checklist
description: |
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
options:
- label: I'm reporting a site feature request
required: true
- label: I've verified that I'm running yt-dlp version **2021.12.27**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))
required: true
- label: I've checked that all provided URLs are alive and playable in a browser
required: true
- label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates
required: true
- label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
required: true
- label: I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share them if required
- type: input
id: region
attributes:
label: Region
description: "Enter the region the site is accessible from"
placeholder: "India"
- type: textarea
id: example-urls
attributes:
label: Example URLs
description: |
Example URLs that can be used to demonstrate the requested feature
value: |
https://www.youtube.com/watch?v=BaW_jenozKc
validations:
required: true
- type: textarea
id: description
attributes:
label: Description
description: |
Provide an explanation of your site feature request in an arbitrary form.
Please make sure the description is worded well enough to be understood, see [is-the-description-of-the-issue-itself-sufficient](https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient).
Provide any additional information, any suggested solutions, and as much context and examples as possible
placeholder: WRITE DESCRIPTION HERE
validations:
required: true
- type: textarea
id: log
attributes:
label: Verbose log
description: |
Provide the complete verbose output of yt-dlp that demonstrates the need for the enhancement.
Add the `-Uv` flag to the command line you run yt-dlp with (`yt-dlp -Uv <your command line>`), copy the WHOLE output and insert it below.
It should look similar to this:
placeholder: |
[debug] Command-line config: ['-Uv', 'http://www.youtube.com/watch?v=BaW_jenozKc']
[debug] Portable config file: yt-dlp.conf
[debug] Portable config: ['-i']
[debug] Encodings: locale cp1252, fs utf-8, stdout utf-8, stderr utf-8, pref cp1252
[debug] yt-dlp version 2021.12.27 (exe)
[debug] Python version 3.8.8 (CPython 64bit) - Windows-10-10.0.19041-SP0
[debug] exe versions: ffmpeg 3.0.1, ffprobe 3.0.1
[debug] Optional libraries: Cryptodome, keyring, mutagen, sqlite, websockets
[debug] Proxy map: {}
yt-dlp is up to date (2021.12.27)
<more lines>
render: shell
validations:
required: true

@@ -1,73 +0,0 @@
---
name: Bug report
about: Report a bug unrelated to any particular site or extractor
title: ''
labels: ''
assignees: ''
---
<!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
- First off, make sure you are using the latest version of yt-dlp. Run `yt-dlp --version` and ensure your version is 2021.08.10. If it's not, see https://github.com/yt-dlp/yt-dlp on how to update. Issues with an outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that all URLs and arguments with special characters are properly quoted or escaped as explained in https://github.com/yt-dlp/yt-dlp.
- Search the bugtracker for similar issues: https://github.com/yt-dlp/yt-dlp. DO NOT post duplicates.
- Read the bugs section in the FAQ: https://github.com/yt-dlp/yt-dlp
- Finally, put an x into all relevant boxes like this: [x] (Don't forget to delete the empty space)
-->
- [ ] I'm reporting a bug unrelated to a specific site
- [ ] I've verified that I'm running yt-dlp version **2021.08.10**
- [ ] I've checked that all provided URLs are alive and playable in a browser
- [ ] The provided URLs do not contain any DRM to the best of my knowledge
- [ ] I've checked that all URLs and arguments with special characters are properly quoted or escaped
- [ ] I've searched the bugtracker for similar bug reports including closed ones
- [ ] I've read the bugs section in the FAQ
## Verbose log
<!--
Provide the complete verbose output of yt-dlp that clearly demonstrates the problem.
Add the `-v` flag to the command line you run yt-dlp with (`yt-dlp -v <your command line>`), copy the WHOLE output and insert it below. It should look similar to this:
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKc']
[debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
[debug] yt-dlp version 2021.08.10
[debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2
[debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
[debug] Proxy map: {}
<more lines>
-->
```
PASTE VERBOSE LOG HERE
```
<!--
Do not remove the above ```
-->
## Description
<!--
Provide an explanation of your issue in an arbitrary form. Please make sure the description is worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient. Provide any additional information, suggested solution and as much context and examples as possible.
If work on your issue requires account credentials, please provide them or explain how one can obtain them.
-->
WRITE DESCRIPTION HERE

.github/ISSUE_TEMPLATE/4_bug_report.yml (new file, 57 lines)

@@ -0,0 +1,57 @@
name: Bug report
description: Report a bug unrelated to any particular site or extractor
labels: [triage, bug]
body:
- type: checkboxes
id: checklist
attributes:
label: Checklist
description: |
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
options:
- label: I'm reporting a bug unrelated to a specific site
required: true
- label: I've verified that I'm running yt-dlp version **2021.12.27**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))
required: true
- label: I've checked that all provided URLs are alive and playable in a browser
required: true
- label: I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/ytdl-org/youtube-dl#video-url-contains-an-ampersand-and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
required: true
- label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates
required: true
- label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
required: true
- type: textarea
id: description
attributes:
label: Description
description: |
Provide an explanation of your issue in an arbitrary form.
Please make sure the description is worded well enough to be understood, see [is-the-description-of-the-issue-itself-sufficient](https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient).
Provide any additional information, any suggested solutions, and as much context and examples as possible
placeholder: WRITE DESCRIPTION HERE
validations:
required: true
- type: textarea
id: log
attributes:
label: Verbose log
description: |
Provide the complete verbose output of yt-dlp **that clearly demonstrates the problem**.
Add the `-Uv` flag to the command line **you** run yt-dlp with (`yt-dlp -Uv <your command line>`), copy the WHOLE output and insert it below.
It should look similar to this:
placeholder: |
[debug] Command-line config: ['-Uv', 'http://www.youtube.com/watch?v=BaW_jenozKc']
[debug] Portable config file: yt-dlp.conf
[debug] Portable config: ['-i']
[debug] Encodings: locale cp1252, fs utf-8, stdout utf-8, stderr utf-8, pref cp1252
[debug] yt-dlp version 2021.12.27 (exe)
[debug] Python version 3.8.8 (CPython 64bit) - Windows-10-10.0.19041-SP0
[debug] exe versions: ffmpeg 3.0.1, ffprobe 3.0.1
[debug] Optional libraries: Cryptodome, keyring, mutagen, sqlite, websockets
[debug] Proxy map: {}
yt-dlp is up to date (2021.12.27)
<more lines>
render: shell
validations:
required: true

@@ -1,40 +0,0 @@
---
name: Feature request
about: Request a new functionality unrelated to any particular site or extractor
title: "[Feature Request]"
labels: Request
assignees: ''
---
<!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
- First off, make sure you are using the latest version of yt-dlp. Run `yt-dlp --version` and ensure your version is 2021.08.10. If it's not, see https://github.com/yt-dlp/yt-dlp on how to update. Issues with an outdated version will be REJECTED.
- Search the bugtracker for similar feature requests: https://github.com/yt-dlp/yt-dlp. DO NOT post duplicates.
- Finally, put an x into all relevant boxes like this: [x] (Don't forget to delete the empty space)
-->
- [ ] I'm reporting a feature request
- [ ] I've verified that I'm running yt-dlp version **2021.08.10**
- [ ] I've searched the bugtracker for similar feature requests including closed ones
## Description
<!--
Provide an explanation of your issue in an arbitrary form. Please make sure the description is worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient. Provide any additional information, suggested solution and as much context and examples as possible.
-->
WRITE DESCRIPTION HERE

@@ -0,0 +1,30 @@
name: Feature request
description: Request a new functionality unrelated to any particular site or extractor
labels: [triage, enhancement]
body:
- type: checkboxes
id: checklist
attributes:
label: Checklist
description: |
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
options:
- label: I'm reporting a feature request
required: true
- label: I've verified that I'm running yt-dlp version **2021.12.27**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))
required: true
- label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates
required: true
- label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
required: true
- type: textarea
id: description
attributes:
label: Description
description: |
Provide an explanation of your feature request in an arbitrary form.
Please make sure the description is worded well enough to be understood, see [is-the-description-of-the-issue-itself-sufficient](https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient).
Provide any additional information, any suggested solutions, and as much context and examples as possible
placeholder: WRITE DESCRIPTION HERE
validations:
required: true

@@ -1,40 +0,0 @@
---
name: Ask question
about: Ask a youtube-dl-related question
title: "[Question]"
labels: question
assignees: ''
---
<!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
- Look through the README (https://github.com/yt-dlp/yt-dlp) and FAQ (https://github.com/yt-dlp/yt-dlp) for similar questions
- Search the bugtracker for similar questions: https://github.com/yt-dlp/yt-dlp
- Finally, put an x into all relevant boxes like this: [x] (Don't forget to delete the empty space)
-->
- [ ] I'm asking a question
- [ ] I've looked through the README and FAQ for similar questions
- [ ] I've searched the bugtracker for similar questions including closed ones
## Question
<!--
Ask your question in an arbitrary form. Please make sure it's worded well enough to be understood, see https://github.com/yt-dlp/yt-dlp.
-->
WRITE QUESTION HERE

.github/ISSUE_TEMPLATE/6_question.yml (new file, 52 lines)

@@ -0,0 +1,52 @@
name: Ask question
description: Ask a yt-dlp-related question
labels: [question]
body:
- type: checkboxes
id: checklist
attributes:
label: Checklist
description: |
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
options:
- label: I'm asking a question and **not** reporting a bug/feature request
required: true
- label: I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
required: true
- label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
required: true
- label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions including closed ones
required: true
- type: textarea
id: question
attributes:
label: Question
description: |
Ask your question in an arbitrary form.
Please make sure it's worded well enough to be understood, see [is-the-description-of-the-issue-itself-sufficient](https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient).
Provide any additional information and as much context and examples as possible.
If your question contains "isn't working" or "can you add", this is most likely the wrong template
placeholder: WRITE QUESTION HERE
validations:
required: true
- type: textarea
id: log
attributes:
label: Verbose log
description: |
If your question involves a yt-dlp command, provide the complete verbose output of that command.
Add the `-Uv` flag to the command line **you** run yt-dlp with (`yt-dlp -Uv <your command line>`), copy the WHOLE output and insert it below.
It should look similar to this:
placeholder: |
[debug] Command-line config: ['-Uv', 'http://www.youtube.com/watch?v=BaW_jenozKc']
[debug] Portable config file: yt-dlp.conf
[debug] Portable config: ['-i']
[debug] Encodings: locale cp1252, fs utf-8, stdout utf-8, stderr utf-8, pref cp1252
[debug] yt-dlp version 2021.12.01 (exe)
[debug] Python version 3.8.8 (CPython 64bit) - Windows-10-10.0.19041-SP0
[debug] exe versions: ffmpeg 3.0.1, ffprobe 3.0.1
[debug] Optional libraries: Cryptodome, keyring, mutagen, sqlite, websockets
[debug] Proxy map: {}
yt-dlp is up to date (2021.12.01)
<more lines>
render: shell

.github/ISSUE_TEMPLATE/config.yml (new file, 5 lines)

@@ -0,0 +1,5 @@
blank_issues_enabled: false
contact_links:
- name: Get help from the community on Discord
url: https://discord.gg/H5MNcFW63r
about: Join the yt-dlp Discord for community-powered support!

@@ -1,70 +0,0 @@
---
name: Broken site support
about: Report a broken or malfunctioning site
title: "[Broken]"
labels: Broken
assignees: ''
---
<!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
- First off, make sure you are using the latest version of yt-dlp. Run `yt-dlp --version` and ensure your version is %(version)s. If it's not, see https://github.com/yt-dlp/yt-dlp on how to update. Issues with an outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that all URLs and arguments with special characters are properly quoted or escaped as explained in https://github.com/yt-dlp/yt-dlp.
- Search the bugtracker for similar issues: https://github.com/yt-dlp/yt-dlp. DO NOT post duplicates.
- Finally, put an x into all relevant boxes like this: [x] (Don't forget to delete the empty space)
-->
- [ ] I'm reporting broken site support
- [ ] I've verified that I'm running yt-dlp version **%(version)s**
- [ ] I've checked that all provided URLs are alive and playable in a browser
- [ ] I've checked that all URLs and arguments with special characters are properly quoted or escaped
- [ ] I've searched the bugtracker for similar issues including closed ones
## Verbose log
<!--
Provide the complete verbose output of yt-dlp that clearly demonstrates the problem.
Add the `-v` flag to the command line you run yt-dlp with (`yt-dlp -v <your command line>`), copy the WHOLE output and insert it below. It should look similar to this:
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKc']
[debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
[debug] yt-dlp version %(version)s
[debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2
[debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
[debug] Proxy map: {}
<more lines>
-->
```
PASTE VERBOSE LOG HERE
```
<!--
Do not remove the above ```
-->
## Description
<!--
Provide an explanation of your issue in an arbitrary form. Provide any additional information, suggested solution and as much context and examples as possible.
If work on your issue requires account credentials, please provide them or explain how one can obtain them.
-->
WRITE DESCRIPTION HERE

@@ -0,0 +1,63 @@
name: Broken site support
description: Report a broken or malfunctioning site
labels: [triage, site-bug]
body:
- type: checkboxes
id: checklist
attributes:
label: Checklist
description: |
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
options:
- label: I'm reporting a broken site
required: true
- label: I've verified that I'm running yt-dlp version **%(version)s**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))
required: true
- label: I've checked that all provided URLs are alive and playable in a browser
required: true
- label: I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/ytdl-org/youtube-dl#video-url-contains-an-ampersand-and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
required: true
- label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates
required: true
- label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
required: true
- label: I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share them if required
- type: input
id: region
attributes:
label: Region
description: "Enter the region the site is accessible from"
placeholder: "India"
- type: textarea
id: description
attributes:
label: Description
description: |
Provide an explanation of your issue in an arbitrary form.
Provide any additional information, any suggested solutions, and as much context and examples as possible
placeholder: WRITE DESCRIPTION HERE
validations:
required: true
- type: textarea
id: log
attributes:
label: Verbose log
description: |
Provide the complete verbose output of yt-dlp **that clearly demonstrates the problem**.
Add the `-Uv` flag to the command line you run yt-dlp with (`yt-dlp -Uv <your command line>`), copy the WHOLE output and insert it below.
It should look similar to this:
placeholder: |
[debug] Command-line config: ['-Uv', 'http://www.youtube.com/watch?v=BaW_jenozKc']
[debug] Portable config file: yt-dlp.conf
[debug] Portable config: ['-i']
[debug] Encodings: locale cp1252, fs utf-8, stdout utf-8, stderr utf-8, pref cp1252
[debug] yt-dlp version %(version)s (exe)
[debug] Python version 3.8.8 (CPython 64bit) - Windows-10-10.0.19041-SP0
[debug] exe versions: ffmpeg 3.0.1, ffprobe 3.0.1
[debug] Optional libraries: Cryptodome, keyring, mutagen, sqlite, websockets
[debug] Proxy map: {}
yt-dlp is up to date (%(version)s)
<more lines>
render: shell
validations:
required: true
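
A note on the `%(version)s` fields in this and the following templates: they are printf-style Python placeholders, and the `make issuetemplates` step added to the Build workflow at the end of this commit suggests the dated templates earlier in the diff (version **2021.12.27**) are generated from these masters at release time. A minimal sketch of that substitution, with hypothetical paths and a hypothetical `render()` helper rather than the project's actual devscript:

```python
# Sketch: fill the %(version)s placeholders of a master issue template.
# The directory layout and render() helper are assumptions for illustration;
# the real generation is driven by `make issuetemplates` in the repository.
from pathlib import Path

VERSION = '2021.12.27'  # in the real flow, produced by the version-bump step

def render(template: Path, out: Path, version: str) -> None:
    # Printf-style substitution: assumes %(version)s is the only %-field
    # in the template, so any literal '%' would need escaping as '%%'.
    text = template.read_text(encoding='utf-8')
    out.write_text(text % {'version': version}, encoding='utf-8')

if __name__ == '__main__':
    for name in ('1_broken_site.yml', '2_site_support_request.yml'):
        render(Path('templates') / name, Path('.github/ISSUE_TEMPLATE') / name, VERSION)
```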

@@ -1,57 +0,0 @@
---
name: Site support request
about: Request support for a new site
title: "[Site Request]"
labels: Request
assignees: ''
---
<!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
- First off, make sure you are using the latest version of yt-dlp. Run `yt-dlp --version` and ensure your version is %(version)s. If it's not, see https://github.com/yt-dlp/yt-dlp on how to update. Issues with an outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that the site you are requesting is not dedicated to copyright infringement; see https://github.com/yt-dlp/yt-dlp. yt-dlp does not support such sites. For a site support request to be accepted, none of the provided example URLs may violate any copyrights.
- Search the bugtracker for similar site support requests: https://github.com/yt-dlp/yt-dlp. DO NOT post duplicates.
- Finally, put an x into all relevant boxes like this: [x] (Don't forget to delete the empty space)
-->
- [ ] I'm reporting a new site support request
- [ ] I've verified that I'm running yt-dlp version **%(version)s**
- [ ] I've checked that all provided URLs are alive and playable in a browser
- [ ] I've checked that none of the provided URLs violate any copyrights
- [ ] The provided URLs do not contain any DRM to the best of my knowledge
- [ ] I've searched the bugtracker for similar site support requests including closed ones
## Example URLs
<!--
Provide all kinds of example URLs for which support should be included. Replace the following example URLs with yours.
-->
- Single video: https://www.youtube.com/watch?v=BaW_jenozKc
- Single video: https://youtu.be/BaW_jenozKc
- Playlist: https://www.youtube.com/playlist?list=PL4lCao7KL_QFVb7Iudeipvc2BCavECqzc
## Description
<!--
Provide any additional information.
If work on your issue requires account credentials, please provide them or explain how one can obtain them.
-->
WRITE DESCRIPTION HERE

@@ -0,0 +1,74 @@
name: Site support request
description: Request support for a new site
labels: [triage, site-request]
body:
- type: checkboxes
id: checklist
attributes:
label: Checklist
description: |
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
options:
- label: I'm reporting a new site support request
required: true
- label: I've verified that I'm running yt-dlp version **%(version)s**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))
required: true
- label: I've checked that all provided URLs are alive and playable in a browser
required: true
- label: I've checked that none of the provided URLs [violate any copyrights](https://github.com/ytdl-org/youtube-dl#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free) or contain any [DRM](https://en.wikipedia.org/wiki/Digital_rights_management) to the best of my knowledge
required: true
- label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates
required: true
- label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
required: true
- label: I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and am willing to share them if required
- type: input
id: region
attributes:
label: Region
description: "Enter the region the site is accessible from"
placeholder: "India"
- type: textarea
id: example-urls
attributes:
label: Example URLs
description: |
Provide all kinds of example URLs for which support should be added
placeholder: |
- Single video: https://www.youtube.com/watch?v=BaW_jenozKc
- Single video: https://youtu.be/BaW_jenozKc
- Playlist: https://www.youtube.com/playlist?list=PL4lCao7KL_QFVb7Iudeipvc2BCavECqzc
validations:
required: true
- type: textarea
id: description
attributes:
label: Description
description: |
Provide any additional information
placeholder: WRITE DESCRIPTION HERE
validations:
required: true
- type: textarea
id: log
attributes:
label: Verbose log
description: |
Provide the complete verbose output **using one of the example URLs provided above**.
Add the `-Uv` flag to the command line you run yt-dlp with (`yt-dlp -Uv <your command line>`), copy the WHOLE output and insert it below.
It should look similar to this:
placeholder: |
[debug] Command-line config: ['-Uv', 'http://www.youtube.com/watch?v=BaW_jenozKc']
[debug] Portable config file: yt-dlp.conf
[debug] Portable config: ['-i']
[debug] Encodings: locale cp1252, fs utf-8, stdout utf-8, stderr utf-8, pref cp1252
[debug] yt-dlp version %(version)s (exe)
[debug] Python version 3.8.8 (CPython 64bit) - Windows-10-10.0.19041-SP0
[debug] exe versions: ffmpeg 3.0.1, ffprobe 3.0.1
[debug] Optional libraries: Cryptodome, keyring, mutagen, sqlite, websockets
[debug] Proxy map: {}
yt-dlp is up to date (%(version)s)
<more lines>
render: shell
validations:
required: true

@@ -1,40 +0,0 @@
---
name: Site feature request
about: Request a new functionality for a site
title: "[Site Request]"
labels: Request
assignees: ''
---
<!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
- First off, make sure you are using the latest version of yt-dlp. Run `yt-dlp --version` and ensure your version is %(version)s. If it's not, see https://github.com/yt-dlp/yt-dlp on how to update. Issues with an outdated version will be REJECTED.
- Search the bugtracker for similar site feature requests: https://github.com/yt-dlp/yt-dlp. DO NOT post duplicates.
- Finally, put an x into all relevant boxes like this: [x] (Don't forget to delete the empty space)
-->
- [ ] I'm reporting a site feature request
- [ ] I've verified that I'm running yt-dlp version **%(version)s**
- [ ] I've searched the bugtracker for similar site feature requests including closed ones
## Description
<!--
Provide an explanation of your site feature request in an arbitrary form. Please make sure the description is worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient. Provide any additional information, suggested solution and as much context and examples as possible.
-->
WRITE DESCRIPTION HERE

@@ -0,0 +1,72 @@
name: Site feature request
description: Request a new functionality for a supported site
labels: [triage, site-enhancement]
body:
- type: checkboxes
id: checklist
attributes:
label: Checklist
description: |
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
options:
- label: I'm reporting a site feature request
required: true
- label: I've verified that I'm running yt-dlp version **%(version)s**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))
required: true
- label: I've checked that all provided URLs are alive and playable in a browser
required: true
- label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates
required: true
- label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
required: true
- label: I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share them if required
- type: input
id: region
attributes:
label: Region
description: "Enter the region the site is accessible from"
placeholder: "India"
- type: textarea
id: example-urls
attributes:
label: Example URLs
description: |
Example URLs that can be used to demonstrate the requested feature
value: |
https://www.youtube.com/watch?v=BaW_jenozKc
validations:
required: true
- type: textarea
id: description
attributes:
label: Description
description: |
Provide an explanation of your site feature request in an arbitrary form.
Please make sure the description is worded well enough to be understood, see [is-the-description-of-the-issue-itself-sufficient](https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient).
Provide any additional information, any suggested solutions, and as much context and examples as possible
placeholder: WRITE DESCRIPTION HERE
validations:
required: true
- type: textarea
id: log
attributes:
label: Verbose log
description: |
Provide the complete verbose output of yt-dlp that demonstrates the need for the enhancement.
Add the `-Uv` flag to the command line you run yt-dlp with (`yt-dlp -Uv <your command line>`), copy the WHOLE output and insert it below.
It should look similar to this:
placeholder: |
[debug] Command-line config: ['-Uv', 'http://www.youtube.com/watch?v=BaW_jenozKc']
[debug] Portable config file: yt-dlp.conf
[debug] Portable config: ['-i']
[debug] Encodings: locale cp1252, fs utf-8, stdout utf-8, stderr utf-8, pref cp1252
[debug] yt-dlp version %(version)s (exe)
[debug] Python version 3.8.8 (CPython 64bit) - Windows-10-10.0.19041-SP0
[debug] exe versions: ffmpeg 3.0.1, ffprobe 3.0.1
[debug] Optional libraries: Cryptodome, keyring, mutagen, sqlite, websockets
[debug] Proxy map: {}
yt-dlp is up to date (%(version)s)
<more lines>
render: shell
validations:
required: true

@@ -1,73 +0,0 @@
---
name: Bug report
about: Report a bug unrelated to any particular site or extractor
title: ''
labels: ''
assignees: ''
---
<!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
- First off, make sure you are using the latest version of yt-dlp. Run `yt-dlp --version` and ensure your version is %(version)s. If it's not, see https://github.com/yt-dlp/yt-dlp on how to update. Issues with an outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that all URLs and arguments with special characters are properly quoted or escaped as explained in https://github.com/yt-dlp/yt-dlp.
- Search the bugtracker for similar issues: https://github.com/yt-dlp/yt-dlp. DO NOT post duplicates.
- Read the bugs section in the FAQ: https://github.com/yt-dlp/yt-dlp
- Finally, put an x into all relevant boxes like this: [x] (Don't forget to delete the empty space)
-->
- [ ] I'm reporting a bug unrelated to a specific site
- [ ] I've verified that I'm running yt-dlp version **%(version)s**
- [ ] I've checked that all provided URLs are alive and playable in a browser
- [ ] The provided URLs do not contain any DRM to the best of my knowledge
- [ ] I've checked that all URLs and arguments with special characters are properly quoted or escaped
- [ ] I've searched the bugtracker for similar bug reports including closed ones
- [ ] I've read the bugs section in the FAQ
## Verbose log
<!--
Provide the complete verbose output of yt-dlp that clearly demonstrates the problem.
Add the `-v` flag to the command line you run yt-dlp with (`yt-dlp -v <your command line>`), copy the WHOLE output and insert it below. It should look similar to this:
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKc']
[debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
[debug] yt-dlp version %(version)s
[debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2
[debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
[debug] Proxy map: {}
<more lines>
-->
```
PASTE VERBOSE LOG HERE
```
<!--
Do not remove the above ```
-->
## Description
<!--
Provide an explanation of your issue in an arbitrary form. Please make sure the description is worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient. Provide any additional information, suggested solution and as much context and examples as possible.
If work on your issue requires account credentials, please provide them or explain how one can obtain them.
-->
WRITE DESCRIPTION HERE

@@ -0,0 +1,57 @@
name: Bug report
description: Report a bug unrelated to any particular site or extractor
labels: [triage, bug]
body:
- type: checkboxes
id: checklist
attributes:
label: Checklist
description: |
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
options:
- label: I'm reporting a bug unrelated to a specific site
required: true
- label: I've verified that I'm running yt-dlp version **%(version)s**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))
required: true
- label: I've checked that all provided URLs are alive and playable in a browser
required: true
- label: I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/ytdl-org/youtube-dl#video-url-contains-an-ampersand-and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
required: true
- label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates
required: true
- label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
required: true
- type: textarea
id: description
attributes:
label: Description
description: |
Provide an explanation of your issue in an arbitrary form.
Please make sure the description is worded well enough to be understood, see [is-the-description-of-the-issue-itself-sufficient](https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient).
Provide any additional information, any suggested solutions, and as much context and examples as possible
placeholder: WRITE DESCRIPTION HERE
validations:
required: true
- type: textarea
id: log
attributes:
label: Verbose log
description: |
Provide the complete verbose output of yt-dlp **that clearly demonstrates the problem**.
Add the `-Uv` flag to the command line **you** run yt-dlp with (`yt-dlp -Uv <your command line>`), copy the WHOLE output and insert it below.
It should look similar to this:
placeholder: |
[debug] Command-line config: ['-Uv', 'http://www.youtube.com/watch?v=BaW_jenozKc']
[debug] Portable config file: yt-dlp.conf
[debug] Portable config: ['-i']
[debug] Encodings: locale cp1252, fs utf-8, stdout utf-8, stderr utf-8, pref cp1252
[debug] yt-dlp version %(version)s (exe)
[debug] Python version 3.8.8 (CPython 64bit) - Windows-10-10.0.19041-SP0
[debug] exe versions: ffmpeg 3.0.1, ffprobe 3.0.1
[debug] Optional libraries: Cryptodome, keyring, mutagen, sqlite, websockets
[debug] Proxy map: {}
yt-dlp is up to date (%(version)s)
<more lines>
render: shell
validations:
required: true

@@ -1,40 +0,0 @@
---
name: Feature request
about: Request a new functionality unrelated to any particular site or extractor
title: "[Feature Request]"
labels: Request
assignees: ''
---
<!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
- First off, make sure you are using the latest version of yt-dlp. Run `yt-dlp --version` and ensure your version is %(version)s. If it's not, see https://github.com/yt-dlp/yt-dlp on how to update. Issues with an outdated version will be REJECTED.
- Search the bugtracker for similar feature requests: https://github.com/yt-dlp/yt-dlp. DO NOT post duplicates.
- Finally, put an x into all relevant boxes like this: [x] (Don't forget to delete the empty space)
-->
- [ ] I'm reporting a feature request
- [ ] I've verified that I'm running yt-dlp version **%(version)s**
- [ ] I've searched the bugtracker for similar feature requests including closed ones
## Description
<!--
Provide an explanation of your issue in an arbitrary form. Please make sure the description is worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient. Provide any additional information, suggested solution and as much context and examples as possible.
-->
WRITE DESCRIPTION HERE

@@ -0,0 +1,30 @@
name: Feature request
description: Request a new functionality unrelated to any particular site or extractor
labels: [triage, enhancement]
body:
- type: checkboxes
id: checklist
attributes:
label: Checklist
description: |
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
options:
- label: I'm reporting a feature request
required: true
- label: I've verified that I'm running yt-dlp version **%(version)s**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))
required: true
- label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates
required: true
- label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
required: true
- type: textarea
id: description
attributes:
label: Description
description: |
Provide an explanation of your feature request in an arbitrary form.
Please make sure the description is worded well enough to be understood, see [is-the-description-of-the-issue-itself-sufficient](https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient).
Provide any additional information, any suggested solutions, and as much context and examples as possible
placeholder: WRITE DESCRIPTION HERE
validations:
required: true

@@ -0,0 +1,52 @@
name: Ask question
description: Ask a yt-dlp-related question
labels: [question]
body:
- type: checkboxes
id: checklist
attributes:
label: Checklist
description: |
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
options:
- label: I'm asking a question and **not** reporting a bug/feature request
required: true
- label: I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
required: true
- label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
required: true
- label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions including closed ones
required: true
- type: textarea
id: question
attributes:
label: Question
description: |
Ask your question in an arbitrary form.
Please make sure it's worded well enough to be understood, see [is-the-description-of-the-issue-itself-sufficient](https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient).
Provide any additional information and as much context and examples as possible.
If your question contains "isn't working" or "can you add", this is most likely the wrong template
placeholder: WRITE QUESTION HERE
validations:
required: true
- type: textarea
id: log
attributes:
label: Verbose log
description: |
If your question involves a yt-dlp command, provide the complete verbose output of that command.
Add the `-Uv` flag to the command line **you** run yt-dlp with (`yt-dlp -Uv <your command line>`), copy the WHOLE output and insert it below.
It should look similar to this:
placeholder: |
[debug] Command-line config: ['-Uv', 'http://www.youtube.com/watch?v=BaW_jenozKc']
[debug] Portable config file: yt-dlp.conf
[debug] Portable config: ['-i']
[debug] Encodings: locale cp1252, fs utf-8, stdout utf-8, stderr utf-8, pref cp1252
[debug] yt-dlp version 2021.12.01 (exe)
[debug] Python version 3.8.8 (CPython 64bit) - Windows-10-10.0.19041-SP0
[debug] exe versions: ffmpeg 3.0.1, ffprobe 3.0.1
[debug] Optional libraries: Cryptodome, keyring, mutagen, sqlite, websockets
[debug] Proxy map: {}
yt-dlp is up to date (2021.12.01)
<more lines>
render: shell

@@ -7,11 +7,11 @@
 ---
 ### Before submitting a *pull request* make sure you have:
-- [ ] At least skimmed through [adding new extractor tutorial](https://github.com/ytdl-org/youtube-dl#adding-support-for-a-new-site) and [youtube-dl coding conventions](https://github.com/ytdl-org/youtube-dl#youtube-dl-coding-conventions) sections
+- [ ] At least skimmed through [contributing guidelines](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#developer-instructions) including [yt-dlp coding conventions](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#yt-dlp-coding-conventions)
 - [ ] [Searched](https://github.com/yt-dlp/yt-dlp/search?q=is%3Apr&type=Issues) the bugtracker for similar pull requests
 - [ ] Checked the code with [flake8](https://pypi.python.org/pypi/flake8)
-### In order to be accepted and merged into youtube-dl each piece of code must be in public domain or released under [Unlicense](http://unlicense.org/). Check one of the following options:
+### In order to be accepted and merged into yt-dlp each piece of code must be in public domain or released under [Unlicense](http://unlicense.org/). Check one of the following options:
 - [ ] I am the original author of this code and I am willing to release it under [Unlicense](http://unlicense.org/)
 - [ ] I am not the original author of this code but it is in public domain or released under [Unlicense](http://unlicense.org/) (provide reliable evidence)

.github/workflows/build.yml

@@ -1,35 +1,116 @@
name: Build
-on:
-push:
-branches:
-- release
+on: workflow_dispatch
jobs:
build_unix:
runs-on: ubuntu-latest
outputs:
+version_suffix: ${{ steps.version_suffix.outputs.version_suffix }}
ytdlp_version: ${{ steps.bump_version.outputs.ytdlp_version }}
upload_url: ${{ steps.create_release.outputs.upload_url }}
-sha256_unix: ${{ steps.sha256_file.outputs.sha256_unix }}
-sha512_unix: ${{ steps.sha512_file.outputs.sha512_unix }}
+sha256_bin: ${{ steps.sha256_bin.outputs.sha256_bin }}
+sha512_bin: ${{ steps.sha512_bin.outputs.sha512_bin }}
+sha256_tar: ${{ steps.sha256_tar.outputs.sha256_tar }}
+sha512_tar: ${{ steps.sha512_tar.outputs.sha512_tar }}
steps:
- uses: actions/checkout@v2
+with:
+fetch-depth: 0
- name: Set up Python
uses: actions/setup-python@v2
with:
python-version: '3.8'
- name: Install packages
run: sudo apt-get -y install zip pandoc man
+- name: Set version suffix
+id: version_suffix
+env:
+PUSH_VERSION_COMMIT: ${{ secrets.PUSH_VERSION_COMMIT }}
+if: "env.PUSH_VERSION_COMMIT == ''"
+run: echo ::set-output name=version_suffix::$(date -u +"%H%M%S")
- name: Bump version
id: bump_version
-run: python devscripts/update-version.py
-- name: Print version
-run: echo "${{ steps.bump_version.outputs.ytdlp_version }}"
+run: |
+python devscripts/update-version.py ${{ steps.version_suffix.outputs.version_suffix }}
+make issuetemplates
- name: Push to release
id: push_release
run: |
git config --global user.name github-actions
git config --global user.email github-actions@example.com
git add -u
git commit -m "[version] update" -m "Created by: ${{ github.event.sender.login }}" -m ":ci skip all"
git push origin --force ${{ github.event.ref }}:release
echo ::set-output name=head_sha::$(git rev-parse HEAD)
- name: Update master
id: push_master
env:
PUSH_VERSION_COMMIT: ${{ secrets.PUSH_VERSION_COMMIT }}
if: "env.PUSH_VERSION_COMMIT != ''"
run: git push origin ${{ github.event.ref }}
- name: Get Changelog
id: get_changelog
run: |
changelog=$(cat Changelog.md | grep -oPz '(?s)(?<=### ${{ steps.bump_version.outputs.ytdlp_version }}\n{2}).+?(?=\n{2,3}###)') || true
echo "changelog<<EOF" >> $GITHUB_ENV
echo "$changelog" >> $GITHUB_ENV
echo "EOF" >> $GITHUB_ENV
- name: Build lazy extractors
id: lazy_extractors
run: python devscripts/make_lazy_extractors.py
- name: Run Make
run: make all tar
- name: Get SHA2-256SUMS for yt-dlp
id: sha256_bin
run: echo "::set-output name=sha256_bin::$(sha256sum yt-dlp | awk '{print $1}')"
- name: Get SHA2-256SUMS for yt-dlp.tar.gz
id: sha256_tar
run: echo "::set-output name=sha256_tar::$(sha256sum yt-dlp.tar.gz | awk '{print $1}')"
- name: Get SHA2-512SUMS for yt-dlp
id: sha512_bin
run: echo "::set-output name=sha512_bin::$(sha512sum yt-dlp | awk '{print $1}')"
- name: Get SHA2-512SUMS for yt-dlp.tar.gz
id: sha512_tar
run: echo "::set-output name=sha512_tar::$(sha512sum yt-dlp.tar.gz | awk '{print $1}')"
- name: Install dependencies for pypi
env:
PYPI_TOKEN: ${{ secrets.PYPI_TOKEN }}
if: "env.PYPI_TOKEN != ''"
run: |
python -m pip install --upgrade pip
pip install setuptools wheel twine
- name: Build and publish on pypi
env:
TWINE_USERNAME: __token__
TWINE_PASSWORD: ${{ secrets.PYPI_TOKEN }}
if: "env.TWINE_PASSWORD != ''"
run: |
rm -rf dist/*
python setup.py sdist bdist_wheel
twine upload dist/*
- name: Install SSH private key
env:
BREW_TOKEN: ${{ secrets.BREW_TOKEN }}
if: "env.BREW_TOKEN != ''"
uses: webfactory/ssh-agent@v0.5.3
with:
ssh-private-key: ${{ env.BREW_TOKEN }}
- name: Update Homebrew Formulae
env:
BREW_TOKEN: ${{ secrets.BREW_TOKEN }}
if: "env.BREW_TOKEN != ''"
run: |
git clone git@github.com:yt-dlp/homebrew-taps taps/
python3 devscripts/update-formulae.py taps/Formula/yt-dlp.rb "${{ steps.bump_version.outputs.ytdlp_version }}"
git -C taps/ config user.name github-actions
git -C taps/ config user.email github-actions@example.com
git -C taps/ commit -am 'yt-dlp: ${{ steps.bump_version.outputs.ytdlp_version }}'
git -C taps/ push
- name: Create Release
id: create_release
uses: actions/create-release@v1
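The `Get Changelog` step added above pulls the new version's section out of `Changelog.md` with a Perl-regex `grep` and hands it to the release body through `$GITHUB_ENV` using a heredoc. If the lookaround syntax is hard to follow, here is a rough Python equivalent of that extraction; an illustrative sketch only (the `extract_changelog` helper is hypothetical, not part of the commit):

```python
# Sketch of what the workflow's `grep -oPz` invocation extracts: the text
# between the "### <version>" heading (followed by a blank line) and the
# next "###" heading. The (?s) flag lets ".+?" run across newlines.
import re

def extract_changelog(changelog_text, version):
    pattern = r'(?s)(?<=### %s\n\n).+?(?=\n{2,3}###)' % re.escape(version)
    match = re.search(pattern, changelog_text)
    return match.group(0) if match else ''

# e.g. extract_changelog(open('Changelog.md').read(), '2021.12.27')
```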
@@ -38,9 +119,14 @@ jobs:
with:
tag_name: ${{ steps.bump_version.outputs.ytdlp_version }}
release_name: yt-dlp ${{ steps.bump_version.outputs.ytdlp_version }}
+commitish: ${{ steps.push_release.outputs.head_sha }}
body: |
-Changelog:
-PLACEHOLDER
+#### [A description of the various files]((https://github.com/yt-dlp/yt-dlp#release-files)) are in the README
+---
+### Changelog:
+${{ env.changelog }}
draft: false
prerelease: false
- name: Upload yt-dlp Unix binary
@@ -62,36 +148,82 @@ jobs:
asset_path: ./yt-dlp.tar.gz
asset_name: yt-dlp.tar.gz
asset_content_type: application/gzip
-- name: Get SHA2-256SUMS for yt-dlp
-id: sha256_file
-run: echo "::set-output name=sha256_unix::$(sha256sum yt-dlp | awk '{print $1}')"
-- name: Get SHA2-512SUMS for yt-dlp
-id: sha512_file
-run: echo "::set-output name=sha512_unix::$(sha512sum yt-dlp | awk '{print $1}')"
-- name: Install dependencies for pypi
-env:
-PYPI_TOKEN: ${{ secrets.PYPI_TOKEN }}
-if: "env.PYPI_TOKEN != ''"
-run: |
-python -m pip install --upgrade pip
-pip install setuptools wheel twine
-- name: Build and publish on pypi
-env:
-TWINE_USERNAME: __token__
-TWINE_PASSWORD: ${{ secrets.PYPI_TOKEN }}
-if: "env.TWINE_PASSWORD != ''"
-run: |
-rm -rf dist/*
-python setup.py sdist bdist_wheel
-twine upload dist/*
+build_macos:
+runs-on: macos-11
+needs: build_unix
+outputs:
+sha256_macos: ${{ steps.sha256_macos.outputs.sha256_macos }}
+sha512_macos: ${{ steps.sha512_macos.outputs.sha512_macos }}
+sha256_macos_zip: ${{ steps.sha256_macos_zip.outputs.sha256_macos_zip }}
+sha512_macos_zip: ${{ steps.sha512_macos_zip.outputs.sha512_macos_zip }}
+steps:
+- uses: actions/checkout@v2
+# In order to create a universal2 application, the version of python3 in /usr/bin has to be used
+# Pyinstaller is pinned to 4.5.1 because the builds are failing in 4.6, 4.7
+- name: Install Requirements
+run: |
+brew install coreutils
+/usr/bin/python3 -m pip install -U --user pip Pyinstaller==4.5.1 mutagen pycryptodomex websockets
+- name: Bump version
+id: bump_version
+run: /usr/bin/python3 devscripts/update-version.py
+- name: Build lazy extractors
+id: lazy_extractors
+run: /usr/bin/python3 devscripts/make_lazy_extractors.py
+- name: Run PyInstaller Script
+run: /usr/bin/python3 pyinst.py --target-architecture universal2 --onefile
+- name: Upload yt-dlp MacOS binary
+id: upload-release-macos
+uses: actions/upload-release-asset@v1
+env:
+GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+with:
+upload_url: ${{ needs.build_unix.outputs.upload_url }}
+asset_path: ./dist/yt-dlp_macos
+asset_name: yt-dlp_macos
+asset_content_type: application/octet-stream
+- name: Get SHA2-256SUMS for yt-dlp_macos
id: sha256_macos
run: echo "::set-output name=sha256_macos::$(sha256sum dist/yt-dlp_macos | awk '{print $1}')"
- name: Get SHA2-512SUMS for yt-dlp_macos
id: sha512_macos
run: echo "::set-output name=sha512_macos::$(sha512sum dist/yt-dlp_macos | awk '{print $1}')"
- name: Run PyInstaller Script with --onedir
run: /usr/bin/python3 pyinst.py --target-architecture universal2 --onedir
- uses: papeloto/action-zip@v1
with:
files: ./dist/yt-dlp_macos
dest: ./dist/yt-dlp_macos.zip
- name: Upload yt-dlp MacOS onedir
id: upload-release-macos-zip
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ needs.build_unix.outputs.upload_url }}
asset_path: ./dist/yt-dlp_macos.zip
asset_name: yt-dlp_macos.zip
asset_content_type: application/zip
- name: Get SHA2-256SUMS for yt-dlp_macos.zip
id: sha256_macos_zip
run: echo "::set-output name=sha256_macos_zip::$(sha256sum dist/yt-dlp_macos.zip | awk '{print $1}')"
- name: Get SHA2-512SUMS for yt-dlp_macos
id: sha512_macos_zip
run: echo "::set-output name=sha512_macos_zip::$(sha512sum dist/yt-dlp_macos.zip | awk '{print $1}')"
build_windows:
runs-on: windows-latest
needs: build_unix
outputs:
-sha256_windows: ${{ steps.sha256_file_win.outputs.sha256_windows }}
-sha512_windows: ${{ steps.sha512_file_win.outputs.sha512_windows }}
+sha256_win: ${{ steps.sha256_win.outputs.sha256_win }}
+sha512_win: ${{ steps.sha512_win.outputs.sha512_win }}
+sha256_py2exe: ${{ steps.sha256_py2exe.outputs.sha256_py2exe }}
+sha512_py2exe: ${{ steps.sha512_py2exe.outputs.sha512_py2exe }}
+sha256_win_zip: ${{ steps.sha256_win_zip.outputs.sha256_win_zip }}
+sha512_win_zip: ${{ steps.sha512_win_zip.outputs.sha512_win_zip }}
steps:
- uses: actions/checkout@v2
@@ -100,18 +232,21 @@ jobs:
uses: actions/setup-python@v2
with:
python-version: '3.8'
-- name: Upgrade pip and enable wheel support
-run: python -m pip install --upgrade pip setuptools wheel
- name: Install Requirements
# Custom pyinstaller built with https://github.com/yt-dlp/pyinstaller-builds
-run: pip install "https://yt-dlp.github.io/pyinstaller-builds/x86_64/pyinstaller-4.5.1-py3-none-any.whl" mutagen pycryptodome websockets
+run: |
+python -m pip install --upgrade pip setuptools wheel py2exe
+pip install "https://yt-dlp.github.io/Pyinstaller-Builds/x86_64/pyinstaller-4.5.1-py3-none-any.whl" mutagen pycryptodomex websockets
- name: Bump version
id: bump_version
-run: python devscripts/update-version.py
-- name: Print version
-run: echo "${{ steps.bump_version.outputs.ytdlp_version }}"
+env:
+version_suffix: ${{ needs.build_unix.outputs.version_suffix }}
+run: python devscripts/update-version.py ${{ env.version_suffix }}
+- name: Build lazy extractors
+id: lazy_extractors
+run: python devscripts/make_lazy_extractors.py
- name: Run PyInstaller Script
-run: python pyinst.py 64
+run: python pyinst.py
- name: Upload yt-dlp.exe Windows binary
id: upload-release-windows
uses: actions/upload-release-asset@v1
@@ -123,19 +258,61 @@ jobs:
asset_name: yt-dlp.exe
asset_content_type: application/vnd.microsoft.portable-executable
- name: Get SHA2-256SUMS for yt-dlp.exe
-id: sha256_file_win
-run: echo "::set-output name=sha256_windows::$((Get-FileHash dist\yt-dlp.exe -Algorithm SHA256).Hash.ToLower())"
+id: sha256_win
+run: echo "::set-output name=sha256_win::$((Get-FileHash dist\yt-dlp.exe -Algorithm SHA256).Hash.ToLower())"
- name: Get SHA2-512SUMS for yt-dlp.exe
-id: sha512_file_win
-run: echo "::set-output name=sha512_windows::$((Get-FileHash dist\yt-dlp.exe -Algorithm SHA512).Hash.ToLower())"
+id: sha512_win
+run: echo "::set-output name=sha512_win::$((Get-FileHash dist\yt-dlp.exe -Algorithm SHA512).Hash.ToLower())"
- name: Run PyInstaller Script with --onedir
run: python pyinst.py --onedir
- uses: papeloto/action-zip@v1
with:
files: ./dist/yt-dlp
dest: ./dist/yt-dlp_win.zip
- name: Upload yt-dlp Windows onedir
id: upload-release-windows-zip
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ needs.build_unix.outputs.upload_url }}
asset_path: ./dist/yt-dlp_win.zip
asset_name: yt-dlp_win.zip
asset_content_type: application/zip
- name: Get SHA2-256SUMS for yt-dlp_win.zip
id: sha256_win_zip
run: echo "::set-output name=sha256_win_zip::$((Get-FileHash dist\yt-dlp_win.zip -Algorithm SHA256).Hash.ToLower())"
- name: Get SHA2-512SUMS for yt-dlp_win.zip
id: sha512_win_zip
run: echo "::set-output name=sha512_win_zip::$((Get-FileHash dist\yt-dlp_win.zip -Algorithm SHA512).Hash.ToLower())"
- name: Run py2exe Script
run: python setup.py py2exe
- name: Upload yt-dlp_min.exe Windows binary
id: upload-release-windows-py2exe
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ needs.build_unix.outputs.upload_url }}
asset_path: ./dist/yt-dlp.exe
asset_name: yt-dlp_min.exe
asset_content_type: application/vnd.microsoft.portable-executable
- name: Get SHA2-256SUMS for yt-dlp_min.exe
id: sha256_py2exe
run: echo "::set-output name=sha256_py2exe::$((Get-FileHash dist\yt-dlp.exe -Algorithm SHA256).Hash.ToLower())"
- name: Get SHA2-512SUMS for yt-dlp_min.exe
id: sha512_py2exe
run: echo "::set-output name=sha512_py2exe::$((Get-FileHash dist\yt-dlp.exe -Algorithm SHA512).Hash.ToLower())"
build_windows32:
runs-on: windows-latest
-needs: [build_unix, build_windows]
+needs: build_unix
outputs:
-sha256_windows32: ${{ steps.sha256_file_win32.outputs.sha256_windows32 }}
-sha512_windows32: ${{ steps.sha512_file_win32.outputs.sha512_windows32 }}
+sha256_win32: ${{ steps.sha256_win32.outputs.sha256_win32 }}
+sha512_win32: ${{ steps.sha512_win32.outputs.sha512_win32 }}
steps:
- uses: actions/checkout@v2
@@ -145,17 +322,20 @@ jobs:
with:
python-version: '3.7'
architecture: 'x86'
-- name: Upgrade pip and enable wheel support
-run: python -m pip install --upgrade pip setuptools wheel
- name: Install Requirements
-run: pip install "https://yt-dlp.github.io/pyinstaller-builds/i686/pyinstaller-4.5.1-py3-none-any.whl" mutagen pycryptodome websockets
+run: |
+python -m pip install --upgrade pip setuptools wheel
+pip install "https://yt-dlp.github.io/Pyinstaller-Builds/i686/pyinstaller-4.5.1-py3-none-any.whl" mutagen pycryptodomex websockets
- name: Bump version
id: bump_version
-run: python devscripts/update-version.py
-- name: Print version
-run: echo "${{ steps.bump_version.outputs.ytdlp_version }}"
+env:
+version_suffix: ${{ needs.build_unix.outputs.version_suffix }}
+run: python devscripts/update-version.py ${{ env.version_suffix }}
+- name: Build lazy extractors
+id: lazy_extractors
+run: python devscripts/make_lazy_extractors.py
- name: Run PyInstaller Script for 32 Bit
-run: python pyinst.py 32
+run: python pyinst.py
- name: Upload Executable yt-dlp_x86.exe
id: upload-release-windows32
uses: actions/upload-release-asset@v1
@@ -167,28 +347,36 @@ jobs:
asset_name: yt-dlp_x86.exe
asset_content_type: application/vnd.microsoft.portable-executable
- name: Get SHA2-256SUMS for yt-dlp_x86.exe
-id: sha256_file_win32
-run: echo "::set-output name=sha256_windows32::$((Get-FileHash dist\yt-dlp_x86.exe -Algorithm SHA256).Hash.ToLower())"
+id: sha256_win32
+run: echo "::set-output name=sha256_win32::$((Get-FileHash dist\yt-dlp_x86.exe -Algorithm SHA256).Hash.ToLower())"
- name: Get SHA2-512SUMS for yt-dlp_x86.exe
-id: sha512_file_win32
-run: echo "::set-output name=sha512_windows32::$((Get-FileHash dist\yt-dlp_x86.exe -Algorithm SHA512).Hash.ToLower())"
+id: sha512_win32
+run: echo "::set-output name=sha512_win32::$((Get-FileHash dist\yt-dlp_x86.exe -Algorithm SHA512).Hash.ToLower())"
finish:
runs-on: ubuntu-latest
-needs: [build_unix, build_windows, build_windows32]
+needs: [build_unix, build_windows, build_windows32, build_macos]
steps:
- name: Make SHA2-256SUMS file
env:
-SHA256_WINDOWS: ${{ needs.build_windows.outputs.sha256_windows }}
-SHA256_WINDOWS32: ${{ needs.build_windows32.outputs.sha256_windows32 }}
-SHA256_UNIX: ${{ needs.build_unix.outputs.sha256_unix }}
-YTDLP_VERSION: ${{ needs.build_unix.outputs.ytdlp_version }}
+SHA256_BIN: ${{ needs.build_unix.outputs.sha256_bin }}
+SHA256_TAR: ${{ needs.build_unix.outputs.sha256_tar }}
+SHA256_WIN: ${{ needs.build_windows.outputs.sha256_win }}
+SHA256_PY2EXE: ${{ needs.build_windows.outputs.sha256_py2exe }}
+SHA256_WIN_ZIP: ${{ needs.build_windows.outputs.sha256_win_zip }}
+SHA256_WIN32: ${{ needs.build_windows32.outputs.sha256_win32 }}
+SHA256_MACOS: ${{ needs.build_macos.outputs.sha256_macos }}
+SHA256_MACOS_ZIP: ${{ needs.build_macos.outputs.sha256_macos_zip }}
run: |
-echo "version:${{ env.YTDLP_VERSION }}" >> SHA2-256SUMS
-echo "yt-dlp.exe:${{ env.SHA256_WINDOWS }}" >> SHA2-256SUMS
-echo "yt-dlp_x86.exe:${{ env.SHA256_WINDOWS32 }}" >> SHA2-256SUMS
-echo "yt-dlp:${{ env.SHA256_UNIX }}" >> SHA2-256SUMS
+echo "${{ env.SHA256_BIN }} yt-dlp" >> SHA2-256SUMS
+echo "${{ env.SHA256_TAR }} yt-dlp.tar.gz" >> SHA2-256SUMS
+echo "${{ env.SHA256_WIN }} yt-dlp.exe" >> SHA2-256SUMS
+echo "${{ env.SHA256_PY2EXE }} yt-dlp_min.exe" >> SHA2-256SUMS
+echo "${{ env.SHA256_WIN32 }} yt-dlp_x86.exe" >> SHA2-256SUMS
+echo "${{ env.SHA256_WIN_ZIP }} yt-dlp_win.zip" >> SHA2-256SUMS
+echo "${{ env.SHA256_MACOS }} yt-dlp_macos" >> SHA2-256SUMS
+echo "${{ env.SHA256_MACOS_ZIP }} yt-dlp_macos.zip" >> SHA2-256SUMS
- name: Upload 256SUMS file
id: upload-sums
uses: actions/upload-release-asset@v1
@@ -201,13 +389,23 @@ jobs:
asset_content_type: text/plain
- name: Make SHA2-512SUMS file
env:
-SHA512_WINDOWS: ${{ needs.build_windows.outputs.sha512_windows }}
-SHA512_WINDOWS32: ${{ needs.build_windows32.outputs.sha512_windows32 }}
-SHA512_UNIX: ${{ needs.build_unix.outputs.sha512_unix }}
+SHA512_BIN: ${{ needs.build_unix.outputs.sha512_bin }}
+SHA512_TAR: ${{ needs.build_unix.outputs.sha512_tar }}
+SHA512_WIN: ${{ needs.build_windows.outputs.sha512_win }}
+SHA512_PY2EXE: ${{ needs.build_windows.outputs.sha512_py2exe }}
+SHA512_WIN_ZIP: ${{ needs.build_windows.outputs.sha512_win_zip }}
+SHA512_WIN32: ${{ needs.build_windows32.outputs.sha512_win32 }}
+SHA512_MACOS: ${{ needs.build_macos.outputs.sha512_macos }}
+SHA512_MACOS_ZIP: ${{ needs.build_macos.outputs.sha512_macos_zip }}
run: |
-echo "${{ env.SHA512_WINDOWS }} yt-dlp.exe" >> SHA2-512SUMS
-echo "${{ env.SHA512_WINDOWS32 }} yt-dlp_x86.exe" >> SHA2-512SUMS
-echo "${{ env.SHA512_UNIX }} yt-dlp" >> SHA2-512SUMS
+echo "${{ env.SHA512_BIN }} yt-dlp" >> SHA2-512SUMS
+echo "${{ env.SHA512_TAR }} yt-dlp.tar.gz" >> SHA2-512SUMS
+echo "${{ env.SHA512_WIN }} yt-dlp.exe" >> SHA2-512SUMS
+echo "${{ env.SHA512_WIN_ZIP }} yt-dlp_win.zip" >> SHA2-512SUMS
+echo "${{ env.SHA512_PY2EXE }} yt-dlp_min.exe" >> SHA2-512SUMS
+echo "${{ env.SHA512_WIN32 }} yt-dlp_x86.exe" >> SHA2-512SUMS
+echo "${{ env.SHA512_MACOS }} yt-dlp_macos" >> SHA2-512SUMS
+echo "${{ env.SHA512_MACOS_ZIP }} yt-dlp_macos.zip" >> SHA2-512SUMS
- name: Upload 512SUMS file
id: upload-512sums
uses: actions/upload-release-asset@v1
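The `finish` job writes its checksum manifests in the conventional `<hash> <filename>` layout, one release asset per line, so a downloaded file can be verified with `sha256sum`-style tools or a few lines of Python. A minimal sketch (the `verify_asset` helper is hypothetical, and it assumes the asset sits next to the sums file):

```python
# Sketch: check a downloaded release asset against the SHA2-256SUMS file
# assembled by the `finish` job ("<hash> <filename>" per line).
import hashlib

def verify_asset(sums_path, asset_name):
    expected = None
    with open(sums_path) as f:
        for line in f:
            parts = line.split()
            if len(parts) == 2 and parts[1] == asset_name:
                expected = parts[0]
    if expected is None:
        raise ValueError('%s not listed in %s' % (asset_name, sums_path))
    with open(asset_name, 'rb') as f:
        actual = hashlib.sha256(f.read()).hexdigest()
    return actual == expected

# e.g. verify_asset('SHA2-256SUMS', 'yt-dlp') -> True if the download is intact
```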

.github/workflows/quick-test.yml

@@ -12,7 +12,7 @@ jobs:
with:
python-version: 3.9
- name: Install test requirements
-run: pip install pytest pycryptodome
+run: pip install pytest pycryptodomex
- name: Run tests
run: ./devscripts/run_tests.sh core
flake8:
@@ -28,6 +28,6 @@ jobs:
- name: Install flake8
run: pip install flake8
- name: Make lazy extractors
-run: python devscripts/make_lazy_extractors.py yt_dlp/extractor/lazy_extractors.py
+run: python devscripts/make_lazy_extractors.py
- name: Run flake8
run: flake8 .

.gitignore

@@ -1,46 +1,56 @@
# Config
*.conf
-*.spec
cookies
-cookies.txt
+*cookies.txt
+.netrc
# Downloaded
-*.srt
-*.ttml
-*.sbv
-*.vtt
-*.flv
-*.mp4
-*.m4a
-*.m4v
-*.mp3
-*.3gp
-*.webm
-*.wav
-*.ape
-*.mkv
-*.swf
-*.part
-*.part-*
-*.ytdl
+*.annotations.xml
+*.aria2
+*.description
*.dump
*.frag
+*.frag.aria2
*.frag.urls
-*.aria2
-*.swp
-*.ogg
-*.opus
*.info.json
*.live_chat.json
-*.jpg
+*.part*
+*.unknown_video
+*.ytdl
+.cache/
+*.3gp
+*.ape
+*.avi
+*.desktop
+*.flac
+*.flv
*.jpeg
+*.jpg
+*.m4a
+*.m4v
+*.mhtml
+*.mkv
+*.mov
+*.mp3
+*.mp4
+*.ogg
+*.opus
*.png
+*.sbv
+*.srt
+*.swf
+*.swp
+*.ttml
+*.url
+*.vtt
+*.wav
+*.webloc
+*.webm
*.webp
-*.annotations.xml
-*.description
# Allow config/media files in testdata
-!test/testdata/**
+!test/**
# Python
*.pyc
@@ -76,7 +86,6 @@ README.txt
*.1
*.bash-completion
*.fish
-*.exe
*.tar.gz
*.zsh
*.spec

CONTRIBUTING.md

@@ -1,26 +1,60 @@
-**Please include the full output of youtube-dl when run with `-v`**, i.e. **add** `-v` flag to **your command line**, copy the **whole** output and post it in the issue body wrapped in \`\`\` for better formatting. It should look similar to this:
+# CONTRIBUTING TO YT-DLP
- [OPENING AN ISSUE](#opening-an-issue)
- [Is the description of the issue itself sufficient?](#is-the-description-of-the-issue-itself-sufficient)
- [Are you using the latest version?](#are-you-using-the-latest-version)
- [Is the issue already documented?](#is-the-issue-already-documented)
- [Why are existing options not enough?](#why-are-existing-options-not-enough)
- [Have you read and understood the changes, between youtube-dl and yt-dlp](#have-you-read-and-understood-the-changes-between-youtube-dl-and-yt-dlp)
- [Is there enough context in your bug report?](#is-there-enough-context-in-your-bug-report)
- [Does the issue involve one problem, and one problem only?](#does-the-issue-involve-one-problem-and-one-problem-only)
- [Is anyone going to need the feature?](#is-anyone-going-to-need-the-feature)
- [Is your question about yt-dlp?](#is-your-question-about-yt-dlp)
- [Are you willing to share account details if needed?](#are-you-willing-to-share-account-details-if-needed)
- [DEVELOPER INSTRUCTIONS](#developer-instructions)
- [Adding new feature or making overarching changes](#adding-new-feature-or-making-overarching-changes)
- [Adding support for a new site](#adding-support-for-a-new-site)
- [yt-dlp coding conventions](#yt-dlp-coding-conventions)
- [Mandatory and optional metafields](#mandatory-and-optional-metafields)
- [Provide fallbacks](#provide-fallbacks)
- [Regular expressions](#regular-expressions)
- [Long lines policy](#long-lines-policy)
- [Inline values](#inline-values)
- [Collapse fallbacks](#collapse-fallbacks)
- [Trailing parentheses](#trailing-parentheses)
- [Use convenience conversion and parsing functions](#use-convenience-conversion-and-parsing-functions)
- [EMBEDDING YT-DLP](README.md#embedding-yt-dlp)
# OPENING AN ISSUE
Bugs and suggestions should be reported at: [yt-dlp/yt-dlp/issues](https://github.com/yt-dlp/yt-dlp/issues). Unless you were prompted to or there is another pertinent reason (e.g. GitHub fails to accept the bug report), please do not send bug reports via personal email. For discussions, join us in our [discord server](https://discord.gg/H5MNcFW63r).
**Please include the full output of yt-dlp when run with `-Uv`**, i.e. **add** `-Uv` flag to **your command line**, copy the **whole** output and post it in the issue body wrapped in \`\`\` for better formatting. It should look similar to this:
```
-$ youtube-dl -v <your command line>
-[debug] System config: []
-[debug] User config: []
-[debug] Command-line args: [u'-v', u'https://www.youtube.com/watch?v=BaW_jenozKc']
-[debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
-[debug] youtube-dl version 2015.12.06
-[debug] Git HEAD: 135392e
-[debug] Python version 2.6.6 - Windows-2003Server-5.2.3790-SP2
-[debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
+$ yt-dlp -Uv <your command line>
+[debug] Command-line config: ['-v', 'demo.com']
+[debug] Encodings: locale UTF-8, fs utf-8, out utf-8, pref UTF-8
+[debug] yt-dlp version 2021.09.25 (zip)
+[debug] Python version 3.8.10 (CPython 64bit) - Linux-5.4.0-74-generic-x86_64-with-glibc2.29
+[debug] exe versions: ffmpeg 4.2.4, ffprobe 4.2.4
[debug] Proxy map: {}
+Current Build Hash 25cc412d1d3c0725a1f2f5b7e4682f6fb40e6d15f7024e96f7afd572e9919535
+yt-dlp is up to date (2021.09.25)
...
```
**Do not post screenshots of verbose logs; only plain text is acceptable.**
-The output (including the first lines) contains important debugging information. Issues without the full output are often not reproducible and therefore do not get solved in short order, if ever.
+The output (including the first lines) contains important debugging information. Issues without the full output are often not reproducible and therefore will be closed as `incomplete`.
+The templates provided for the issues should be completed and **not removed**; this helps aid the resolution of the issue.
Please re-read your issue once again to avoid a couple of common mistakes (you can and should use this as a checklist):
### Is the description of the issue itself sufficient?
-We often get issue reports that we cannot really decipher. While in most cases we eventually get the required information after asking back multiple times, this poses an unnecessary drain on our resources. Many contributors, including myself, are also not native speakers, so we may misread some parts.
+We often get issue reports that we cannot really decipher. While in most cases we eventually get the required information after asking back multiple times, this poses an unnecessary drain on our resources.
So please elaborate on what feature you are requesting, or what bug you want to be fixed. Make sure that it's obvious
@@ -28,25 +62,31 @@ So please elaborate on what feature you are requesting, or what bug you want to
- How it could be fixed
- How your proposed solution would look like
-If your report is shorter than two lines, it is almost certainly missing some of these, which makes it hard for us to respond to it. We're often too polite to close the issue outright, but the missing info makes misinterpretation likely. As a committer myself, I often get frustrated by these issues, since the only possible way for me to move forward on them is to ask for clarification over and over.
+If your report is shorter than two lines, it is almost certainly missing some of these, which makes it hard for us to respond to it. We're often too polite to close the issue outright, but the missing info makes misinterpretation likely. We often get frustrated by these issues, since the only possible way for us to move forward on them is to ask for clarification over and over.
-For bug reports, this means that your report should contain the *complete* output of youtube-dl when called with the `-v` flag. The error message you get for (most) bugs even says so, but you would not believe how many of our bug reports do not contain this information.
+For bug reports, this means that your report should contain the **complete** output of yt-dlp when called with the `-Uv` flag. The error message you get for (most) bugs even says so, but you would not believe how many of our bug reports do not contain this information.
-If your server has multiple IPs or you suspect censorship, adding `--call-home` may be a good idea to get more diagnostics. If the error is `ERROR: Unable to extract ...` and you cannot reproduce it from multiple countries, add `--dump-pages` (warning: this will yield a rather large output, redirect it to the file `log.txt` by adding `>log.txt 2>&1` to your command-line) or upload the `.dump` files you get when you add `--write-pages` [somewhere](https://gist.github.com/).
+If the error is `ERROR: Unable to extract ...` and you cannot reproduce it from multiple countries, add `--write-pages` and upload the `.dump` files you get [somewhere](https://gist.github.com).
**Site support requests must contain an example URL**. An example URL is a URL you might want to download, like `https://www.youtube.com/watch?v=BaW_jenozKc`. There should be an obvious video present. Except under very special circumstances, the main page of a video service (e.g. `https://www.youtube.com/`) is *not* an example URL.
### Are you using the latest version?
-Before reporting any issue, type `youtube-dl -U`. This should report that you're up-to-date. About 20% of the reports we receive are already fixed, but people are using outdated versions. This goes for feature requests as well.
+Before reporting any issue, type `yt-dlp -U`. This should report that you're up-to-date. This goes for feature requests as well.
### Is the issue already documented?
-Make sure that someone has not already opened the issue you're trying to open. Search at the top of the window or browse the [GitHub Issues](https://github.com/ytdl-org/youtube-dl/search?type=Issues) of this repository. If there is an issue, feel free to write something along the lines of "This affects me as well, with version 2015.01.01. Here is some more information on the issue: ...". While some issues may be old, a new post into them often spurs rapid activity.
+Make sure that someone has not already opened the issue you're trying to open. Search at the top of the window or browse the [GitHub Issues](https://github.com/yt-dlp/yt-dlp/search?type=Issues) of this repository. If there is an issue, feel free to write something along the lines of "This affects me as well, with version 2021.01.01. Here is some more information on the issue: ...". While some issues may be old, a new post into them often spurs rapid activity.
+Additionally, it is also helpful to see if the issue has already been documented in the [youtube-dl issue tracker](https://github.com/ytdl-org/youtube-dl/issues). If similar issues have already been reported in youtube-dl (but not in our issue tracker), links to them can be included in your issue report here.
### Why are existing options not enough?
-Before requesting a new feature, please have a quick peek at [the list of supported options](https://github.com/ytdl-org/youtube-dl/blob/master/README.md#options). Many feature requests are for features that actually exist already! Please, absolutely do show off your work in the issue report and detail how the existing similar options do *not* solve your problem.
+Before requesting a new feature, please have a quick peek at [the list of supported options](README.md#usage-and-options). Many feature requests are for features that actually exist already! Please, absolutely do show off your work in the issue report and detail how the existing similar options do *not* solve your problem.
+### Have you read and understood the changes, between youtube-dl and yt-dlp
+There are many changes between youtube-dl and yt-dlp [(changes to default behavior)](README.md#differences-in-default-behavior), and some of the options available have a different behaviour in yt-dlp, or have been removed all together [(list of changes to options)](README.md#deprecated-options). Make sure you have read and understand the differences in the options and how this may impact your downloads before opening an issue.
### Is there enough context in your bug report?
@@ -58,23 +98,40 @@ We are then presented with a very complicated request when the original problem
Some of our users seem to think there is a limit of issues they can or should open. There is no limit of issues they can or should open. While it may seem appealing to be able to dump all your issues into one ticket, that means that someone who solves one of your issues cannot mark the issue as closed. Typically, reporting a bunch of issues leads to the ticket lingering since nobody wants to attack that behemoth, until someone mercifully splits the issue into multiple ones.
-In particular, every site support request issue should only pertain to services at one site (generally under a common domain, but always using the same backend technology). Do not request support for vimeo user videos, White house podcasts, and Google Plus pages in the same issue. Also, make sure that you don't post bug reports alongside feature requests. As a rule of thumb, a feature request does not include outputs of youtube-dl that are not immediately related to the feature at hand. Do not post reports of a network error alongside the request for a new video service.
+In particular, every site support request issue should only pertain to services at one site (generally under a common domain, but always using the same backend technology). Do not request support for vimeo user videos, White house podcasts, and Google Plus pages in the same issue. Also, make sure that you don't post bug reports alongside feature requests. As a rule of thumb, a feature request does not include outputs of yt-dlp that are not immediately related to the feature at hand. Do not post reports of a network error alongside the request for a new video service.
### Is anyone going to need the feature?
Only post features that you (or an incapacitated friend you can personally talk to) require. Do not post features because they seem like a good idea. If they are really useful, they will be requested by someone who requires them.
-### Is your question about youtube-dl?
+### Is your question about yt-dlp?
Some bug reports are completely unrelated to yt-dlp and relate to a different, or even the reporter's own, application. Please make sure that you are actually using yt-dlp. If you are using a UI for yt-dlp, report the bug to the maintainer of the actual application providing the UI. In general, if you are unable to provide the verbose log, you should not be opening the issue here.
If the issue is with `youtube-dl` (the upstream fork of yt-dlp) and not with yt-dlp, the issue should be raised in the youtube-dl project.
### Are you willing to share account details if needed?
The maintainers and potential contributors of the project often do not have an account for the website you are asking support for. So any developer interested in solving your issue may ask you for account details. It is your personal discretion whether you are willing to share the account in order for the developer to try and solve your issue. However, if you are unwilling or unable to provide details, they obviously cannot work on the issue and it cannot be solved unless some developer who both has an account and is willing/able to contribute decides to solve it.
By sharing an account with anyone, you agree to bear all risks associated with it. The maintainers and yt-dlp can't be held responsible for any misuse of the credentials.
While these steps won't necessarily ensure that no misuse of the account takes place, these are still some good practices to follow.
- Look for people with `Member` (maintainers of the project) or `Contributor` (people who have previously contributed code) tag on their messages.
- Change the password before sharing the account to something random (use [this](https://passwordsgenerator.net/) if you don't have a random password generator).
- Change the password after receiving the account back.
-It may sound strange, but some bug reports we receive are completely unrelated to youtube-dl and relate to a different, or even the reporter's own, application. Please make sure that you are actually using youtube-dl. If you are using a UI for youtube-dl, report the bug to the maintainer of the actual application providing the UI. On the other hand, if your UI for youtube-dl fails in some way you believe is related to youtube-dl, by all means, go ahead and report the bug.
# DEVELOPER INSTRUCTIONS
-Most users do not need to build youtube-dl and can [download the builds](https://ytdl-org.github.io/youtube-dl/download.html) or get them from their distribution.
+Most users do not need to build yt-dlp and can [download the builds](https://github.com/yt-dlp/yt-dlp/releases) or get them via [the other installation methods](README.md#installation).
-To run youtube-dl as a developer, you don't need to build anything either. Simply execute
+To run yt-dlp as a developer, you don't need to build anything either. Simply execute
-python -m youtube_dl
+python -m yt_dlp
To run the test, simply invoke your favorite test runner, or execute a test file directly; any of the following work:
@@ -85,42 +142,42 @@ To run the test, simply invoke your favorite test runner, or execute a test file
See item 6 of [new extractor tutorial](#adding-support-for-a-new-site) for how to run extractor specific test cases.
-If you want to create a build of youtube-dl yourself, you'll need
-* python3
-* make (only GNU make is supported)
-* pandoc
-* zip
-* pytest
+If you want to create a build of yt-dlp yourself, you can follow the instructions [here](README.md#compile).
-### Adding support for a new site
-If you want to add support for a new site, first of all **make sure** this site is **not dedicated to [copyright infringement](README.md#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free)**. youtube-dl does **not support** such sites thus pull requests adding support for them **will be rejected**.
+## Adding new feature or making overarching changes
+Before you start writing code for implementing a new feature, open an issue explaining your feature request and at least one use case. This allows the maintainers to decide whether such a feature is desired for the project in the first place, and will provide an avenue to discuss some implementation details. If you open a pull request for a new feature without discussing with us first, do not be surprised when we ask for large changes to the code, or even reject it outright.
+The same applies for changes to the documentation, code style, or overarching changes to the architecture.
## Adding support for a new site
If you want to add support for a new site, first of all **make sure** this site is **not dedicated to [copyright infringement](https://www.github.com/ytdl-org/youtube-dl#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free)**. yt-dlp does **not support** such sites thus pull requests adding support for them **will be rejected**.
After you have ensured this site is distributing its content legally, you can follow this quick list (assuming your service is called `yourextractor`):
-1. [Fork this repository](https://github.com/ytdl-org/youtube-dl/fork)
+1. [Fork this repository](https://github.com/yt-dlp/yt-dlp/fork)
-2. Check out the source code with:
+1. Check out the source code with:
-git clone git@github.com:YOUR_GITHUB_USERNAME/youtube-dl.git
+git clone git@github.com:YOUR_GITHUB_USERNAME/yt-dlp.git
-3. Start a new git branch with
+1. Start a new git branch with
-cd youtube-dl
+cd yt-dlp
git checkout -b yourextractor
-4. Start with this simple template and save it to `youtube_dl/extractor/yourextractor.py`:
+1. Start with this simple template and save it to `yt_dlp/extractor/yourextractor.py`:
```python
# coding: utf-8
-from __future__ import unicode_literals
from .common import InfoExtractor
class YourExtractorIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?yourextractor\.com/watch/(?P<id>[0-9]+)'
-_TEST = {
+_TESTS = [{
'url': 'https://yourextractor.com/watch/42',
'md5': 'TODO: md5 sum of the first 10241 bytes of the video file (use --test)',
'info_dict': {
@@ -134,12 +191,12 @@ After you have ensured this site is distributing its content legally, you can fo
# * A regular expression; start the string with re:
# * Any Python type (for example int or float)
}
-}
+}]
def _real_extract(self, url):
video_id = self._match_id(url)
webpage = self._download_webpage(url, video_id)
# TODO more code goes here, for example ...
title = self._html_search_regex(r'<h1>(.+?)</h1>', webpage, 'title')
@@ -148,45 +205,55 @@ After you have ensured this site is distributing its content legally, you can fo
'title': title,
'description': self._og_search_description(webpage),
'uploader': self._search_regex(r'<div[^>]+id="uploader"[^>]*>([^<]+)<', webpage, 'uploader', fatal=False),
-# TODO more properties (see youtube_dl/extractor/common.py)
+# TODO more properties (see yt_dlp/extractor/common.py)
}
```
-5. Add an import in [`youtube_dl/extractor/extractors.py`](https://github.com/ytdl-org/youtube-dl/blob/master/youtube_dl/extractor/extractors.py).
+1. Add an import in [`yt_dlp/extractor/extractors.py`](yt_dlp/extractor/extractors.py).
-6. Run `python test/test_download.py TestDownload.test_YourExtractor`. This *should fail* at first, but you can continually re-run it until you're done. If you decide to add more than one test, then rename ``_TEST`` to ``_TESTS`` and make it into a list of dictionaries. The tests will then be named `TestDownload.test_YourExtractor`, `TestDownload.test_YourExtractor_1`, `TestDownload.test_YourExtractor_2`, etc. Note that tests with `only_matching` key in test's dict are not counted in.
+1. Run `python test/test_download.py TestDownload.test_YourExtractor`. This *should fail* at first, but you can continually re-run it until you're done. If you decide to add more than one test, the tests will then be named `TestDownload.test_YourExtractor`, `TestDownload.test_YourExtractor_1`, `TestDownload.test_YourExtractor_2`, etc. Note that tests with `only_matching` key in test's dict are not counted in. You can also run all the tests in one go with `TestDownload.test_YourExtractor_all`
-7. Have a look at [`youtube_dl/extractor/common.py`](https://github.com/ytdl-org/youtube-dl/blob/master/youtube_dl/extractor/common.py) for possible helper methods and a [detailed description of what your extractor should and may return](https://github.com/ytdl-org/youtube-dl/blob/7f41a598b3fba1bcab2817de64a08941200aa3c8/youtube_dl/extractor/common.py#L94-L303). Add tests and code for as many as you want.
+1. Make sure you have at least one test for your extractor. Even if all videos covered by the extractor are expected to be inaccessible for automated testing, tests should still be added with a `skip` parameter indicating why the particular test is disabled from running.
-8. Make sure your code follows [youtube-dl coding conventions](#youtube-dl-coding-conventions) and check the code with [flake8](https://flake8.pycqa.org/en/latest/index.html#quickstart):
+1. Have a look at [`yt_dlp/extractor/common.py`](yt_dlp/extractor/common.py) for possible helper methods and a [detailed description of what your extractor should and may return](yt_dlp/extractor/common.py#L91-L426). Add tests and code for as many as you want.
1. Make sure your code follows [yt-dlp coding conventions](#yt-dlp-coding-conventions) and check the code with [flake8](https://flake8.pycqa.org/en/latest/index.html#quickstart):
-$ flake8 youtube_dl/extractor/yourextractor.py
+$ flake8 yt_dlp/extractor/yourextractor.py
-9. Make sure your code works under all [Python](https://www.python.org/) versions claimed supported by youtube-dl, namely 2.6, 2.7, and 3.2+.
+1. Make sure your code works under all [Python](https://www.python.org/) versions supported by yt-dlp, namely CPython and PyPy for Python 3.6 and above. Backward compatibility is not required for even older versions of Python.
-10. When the tests pass, [add](https://git-scm.com/docs/git-add) the new files and [commit](https://git-scm.com/docs/git-commit) them and [push](https://git-scm.com/docs/git-push) the result, like this:
+1. When the tests pass, [add](https://git-scm.com/docs/git-add) the new files, [commit](https://git-scm.com/docs/git-commit) them and [push](https://git-scm.com/docs/git-push) the result, like this:
-$ git add youtube_dl/extractor/extractors.py
+$ git add yt_dlp/extractor/extractors.py
-$ git add youtube_dl/extractor/yourextractor.py
+$ git add yt_dlp/extractor/yourextractor.py
-$ git commit -m '[yourextractor] Add new extractor'
+$ git commit -m '[yourextractor] Add extractor'
$ git push origin yourextractor
-11. Finally, [create a pull request](https://help.github.com/articles/creating-a-pull-request). We'll then review and merge it.
+1. Finally, [create a pull request](https://help.github.com/articles/creating-a-pull-request). We'll then review and merge it.
In any case, thank you very much for your contributions!
-## youtube-dl coding conventions
+**Tip:** To test extractors that require login information, create a file `test/local_parameters.json` and add `"usenetrc": true` or your username and password in it:
```json
{
"username": "your user name",
"password": "your password"
}
```
## yt-dlp coding conventions
This section introduces guidelines for writing idiomatic, robust and future-proof extractor code.
-Extractors are very fragile by nature since they depend on the layout of the source data provided by 3rd party media hosters out of your control and this layout tends to change. As an extractor implementer your task is not only to write code that will extract media links and metadata correctly but also to minimize dependency on the source's layout and even to make the code foresee potential future changes and be ready for that. This is important because it will allow the extractor not to break on minor layout changes thus keeping old youtube-dl versions working. Even though this breakage issue is easily fixed by emitting a new version of youtube-dl with a fix incorporated, all the previous versions become broken in all repositories and distros' packages that may not be so prompt in fetching the update from us. Needless to say, some non rolling release distros may never receive an update at all.
+Extractors are very fragile by nature since they depend on the layout of the source data provided by 3rd party media hosters out of your control and this layout tends to change. As an extractor implementer your task is not only to write code that will extract media links and metadata correctly but also to minimize dependency on the source's layout and even to make the code foresee potential future changes and be ready for that. This is important because it will allow the extractor not to break on minor layout changes thus keeping old yt-dlp versions working. Even though this breakage issue may be easily fixed by a new version of yt-dlp, this could take some time, during which the extractor will remain broken.
### Mandatory and optional metafields
-For extraction to work youtube-dl relies on metadata your extractor extracts and provides to youtube-dl expressed by an [information dictionary](https://github.com/ytdl-org/youtube-dl/blob/7f41a598b3fba1bcab2817de64a08941200aa3c8/youtube_dl/extractor/common.py#L94-L303) or simply *info dict*. Only the following meta fields in the *info dict* are considered mandatory for a successful extraction process by youtube-dl:
+For extraction to work yt-dlp relies on metadata your extractor extracts and provides to yt-dlp expressed by an [information dictionary](yt_dlp/extractor/common.py#L91-L426) or simply *info dict*. Only the following meta fields in the *info dict* are considered mandatory for a successful extraction process by yt-dlp:
- `id` (media identifier)
- `title` (media title)
- `url` (media download URL) or `formats`
-In fact only the last option is technically mandatory (i.e. if you can't figure out the download location of the media the extraction does not make any sense). But by convention youtube-dl also treats `id` and `title` as mandatory. Thus the aforementioned metafields are the critical data that the extraction does not make any sense without and if any of them fail to be extracted then the extractor is considered completely broken.
+The aforementioned metafields are the critical data that the extraction does not make any sense without and if any of them fail to be extracted then the extractor is considered completely broken. While, in fact, only `id` is technically mandatory, due to compatibility reasons, yt-dlp also treats `title` as mandatory. The extractor is allowed to return the info dict without url or formats in some special cases if it allows the user to extract useful information with `--ignore-no-formats-error`, e.g. when the video is a live stream that has not started yet.
-[Any field](https://github.com/ytdl-org/youtube-dl/blob/7f41a598b3fba1bcab2817de64a08941200aa3c8/youtube_dl/extractor/common.py#L188-L303) apart from the aforementioned ones are considered **optional**. That means that extraction should be **tolerant** to situations when sources for these fields can potentially be unavailable (even if they are always available at the moment) and **future-proof** in order not to break the extraction of general purpose mandatory fields.
+[Any field](yt_dlp/extractor/common.py#219-L426) apart from the aforementioned ones are considered **optional**. That means that extraction should be **tolerant** to situations when sources for these fields can potentially be unavailable (even if they are always available at the moment) and **future-proof** in order not to break the extraction of general purpose mandatory fields.
#### Example
@@ -200,8 +267,10 @@ Assume at this point `meta`'s layout is:
```python
{
+...
"summary": "some fancy summary text",
+"user": {
+"name": "uploader name"
+},
...
}
```
@@ -220,6 +289,30 @@ description = meta['summary']  # incorrect
The latter will break extraction process with `KeyError` if `summary` disappears from `meta` at some later time but with the former approach extraction will just go ahead with `description` set to `None` which is perfectly fine (remember `None` is equivalent to the absence of data).
If the data is nested, do not use `.get` chains, but instead make use of the utility functions `try_get` or `traverse_obj`
Considering the above `meta` again, assume you want to extract `["user"]["name"]` and put it in the resulting info dict as `uploader`
```python
uploader = try_get(meta, lambda x: x['user']['name']) # correct
```
or
```python
uploader = traverse_obj(meta, ('user', 'name')) # correct
```
and not like:
```python
uploader = meta['user']['name'] # incorrect
```
or
```python
uploader = meta.get('user', {}).get('name') # incorrect
```
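Both helpers come from `yt_dlp.utils` and return `None` when any key along the path is missing, instead of raising. A small runnable sketch (assuming yt-dlp is installed) that contrasts them with plain indexing on the `meta` example above:

```python
# Sketch: try_get/traverse_obj degrade gracefully when 'user' is absent.
from yt_dlp.utils import try_get, traverse_obj

meta = {'summary': 'some fancy summary text'}  # no 'user' key

print(try_get(meta, lambda x: x['user']['name']))  # None, no KeyError
print(traverse_obj(meta, ('user', 'name')))        # None, no KeyError
# meta['user']['name'] would raise KeyError and break the extraction
```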
Similarly, you should pass `fatal=False` when extracting optional data from a webpage with `_search_regex`, `_html_search_regex` or similar methods, for instance:
```python ```python
@@ -239,11 +332,36 @@
```
On failure this code will silently continue the extraction with `description` set to `None`. That is useful for metafields that may or may not be present.
Another thing to remember is not to try to iterate over `None`
Say you extracted a list of thumbnails into `thumbnail_data` using `try_get` and now want to iterate over them
```python
thumbnail_data = try_get(...)
thumbnails = [{
'url': item['url']
} for item in thumbnail_data or []] # correct
```
and not like:
```python
thumbnail_data = try_get(...)
thumbnails = [{
'url': item['url']
} for item in thumbnail_data] # incorrect
```
In the latter case, `thumbnail_data` will be `None` if the field was not found and this will cause the loop `for item in thumbnail_data` to raise a fatal error. Using `for item in thumbnail_data or []` avoids this error and results in setting an empty list in `thumbnails` instead.
### Provide fallbacks
When extracting metadata try to do so from multiple sources. For example if `title` is present in several places, try extracting from at least some of them. This makes it more future-proof in case some of the sources become unavailable.
#### Example
Say `meta` from the previous example has a `title` and you are about to extract it. Since `title` is a mandatory meta field you should end up with something like:
@ -262,6 +380,7 @@ title = meta.get('title') or self._og_search_title(webpage)
This code will try to extract from `meta` first and if it fails it will try extracting `og:title` from a `webpage`. This code will try to extract from `meta` first and if it fails it will try extracting `og:title` from a `webpage`.
### Regular expressions
#### Don't capture groups you don't use
Capturing groups must be an indication that they are used somewhere in the code. Any group that is not used must be non-capturing.
##### Example
Don't capture the id attribute name here since you can't use it for anything anyway.
Correct:
```python
r'(?:id|ID)=(?P<id>\d+)'
```
Incorrect:
```python
r'(id|ID)=(?P<id>\d+)'
```
#### Make regular expressions relaxed and flexible
When using regular expressions try to write them fuzzy, relaxed and flexible, skipping insignificant parts that are more likely to change, allowing both single and double quotes for quoted values and so on.
##### Example
Say you need to extract `title` from the following HTML code:
```html
<span style="position: absolute; left: 910px; width: 90px; float: right; z-index: 9999;" class="title">some fancy title</span>
```
The code for that task should look similar to:
```python
title = self._search_regex(  # correct
    r'<span[^>]+class="title"[^>]*>([^<]+)', webpage, 'title')
```
Or even better:
```python
title = self._search_regex(  # correct
    r'<span[^>]+class=(["\'])title\1[^>]*>(?P<title>[^<]+)',
    webpage, 'title', group='title')
```
Note how you tolerate potential changes in the `style` attribute's value or switch from using double quotes to single for the `class` attribute:
The code definitely should not look like:
```python
title = self._search_regex(  # incorrect
    r'<span style="position: absolute; left: 910px; width: 90px; float: right; z-index: 9999;" class="title">(.*?)</span>',
    webpage, 'title', group='title')
```
or even
```python
title = self._search_regex(  # incorrect
r'<span style=".*?" class="title">(.*?)</span>',
webpage, 'title', group='title')
```
Here the presence or absence of other attributes, including `style`, is irrelevant for the data we need, and so the regex must not depend on it.
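As an aside, for simple cases like this you can often avoid hand-writing the regex altogether: `yt_dlp/utils.py` provides HTML helpers such as `get_element_by_class` and `clean_html`. A sketch of the same extraction:
```python
from yt_dlp.utils import clean_html, get_element_by_class

# Take the inner HTML of the first element with class="title",
# regardless of any other attributes, and strip the markup
title = clean_html(get_element_by_class('title', webpage))
```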
### Long lines policy
There is a soft limit to keep lines of code under 100 characters long. This means it should be respected if possible and if it does not make readability and code maintenance worse. Sometimes, it may be reasonable to go up to 120 characters and sometimes even 80 can be unreadable. Keep in mind that this is not a hard limit and is just one of many tools to make the code more readable.
For example, you should **never** split long string literals like URLs or some other often copied entities over multiple lines to fit this limit:
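A sketch of the intent (both the URL and the call are illustrative): keep the literal whole and instead break long calls at argument boundaries:
```python
# Correct: the long string literal stays on one line, even past the limit
manifest_url = 'https://example.com/api/v2/videos/metadata?fields=title,description,duration,thumbnails'

# Long calls, on the other hand, wrap naturally at argument boundaries
formats = self._extract_m3u8_formats(
    manifest_url, video_id, 'mp4', entry_protocol='m3u8_native',
    m3u8_id='hls', fatal=False)
```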
### Inline values
Extracting variables is acceptable for reducing code duplication and improving readability of complex expressions. However, you should avoid extracting variables used only once and moving them to opposite parts of the extractor file, which makes code difficult to read and modify.
#### Example
Correct:
```python
title = self._html_search_regex(r'<title>([^<]+)</title>', webpage, 'title')
```
Incorrect:
```python
TITLE_RE = r'<title>([^<]+)</title>'
# ...some lines of code...
title = self._html_search_regex(TITLE_RE, webpage, 'title')
```
### Collapse fallbacks
Multiple fallback values can quickly become unwieldy. Collapse multiple fallback values into a single expression via a list of patterns.
#### Example
Good:
```python
description = self._html_search_meta(
    ['og:description', 'description', 'twitter:description'],
    webpage, 'description', default=None)
```
Unwieldy:
```python
description = (
    self._og_search_description(webpage, default=None)
    or self._html_search_meta('description', webpage, default=None)
    or self._html_search_meta('twitter:description', webpage, default=None))
```
Methods supporting list of patterns are: `_search_regex`, `_html_search_regex`, `_og_search_property`, `_html_search_meta`.
### Trailing parentheses
Always move trailing parentheses after the last argument.
Note that this *does not* apply to braces `}` or square brackets `]`, both of which should be closed on a new line.
#### Example
Correct:
```python
    lambda x: x['ResultSet']['Result'][0]['VideoUrlSet']['VideoUrl'],
    list)
```
Incorrect:
```python
    lambda x: x['ResultSet']['Result'][0]['VideoUrlSet']['VideoUrl'],
    list,
)
```
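Putting this together with the note on braces and brackets above: a brace-delimited literal closes on its own line, while call parentheses hug the last argument. A minimal sketch:
```python
info = {
    'id': video_id,
    'title': self._search_regex(
        r'<h1>([^<]+)</h1>', webpage, 'title'),
}
```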
### Use convenience conversion and parsing functions
Wrap all extracted numeric data into safe functions from [`yt_dlp/utils.py`](yt_dlp/utils.py): `int_or_none`, `float_or_none`. Use them for string to number conversions as well.
Use `url_or_none` for safe URL processing.
Use `try_get`, `dict_get` and `traverse_obj` for safe metadata extraction from parsed JSON.
Use `unified_strdate` for uniform `upload_date` or any `YYYYMMDD` meta field extraction, `unified_timestamp` for uniform `timestamp` extraction, `parse_filesize` for `filesize` extraction, `parse_count` for count meta fields extraction, `parse_resolution`, `parse_duration` for `duration` extraction, `parse_age_limit` for `age_limit` extraction.
Explore [`yt_dlp/utils.py`](yt_dlp/utils.py) for more useful convenience functions.
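A sketch combining several of the helpers above (`url_or_none`, `unified_strdate`, `unified_timestamp`, `parse_duration`, `parse_count`); the `data` dict and its keys are hypothetical:
```python
thumbnail = url_or_none(data.get('thumb'))            # None unless the value looks like a URL
upload_date = unified_strdate(data.get('published'))  # normalized to YYYYMMDD
timestamp = unified_timestamp(data.get('published'))  # UNIX epoch seconds
duration = parse_duration(data.get('length'))         # accepts '4:35', '1h2m3s', '3 min', ...
view_count = parse_count(data.get('views'))           # accepts '1.2M', '1,234', ...
```
Each helper returns `None` on missing or unparseable input instead of raising, matching the treatment of optional fields described earlier.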
#### More examples
##### Safely extract optional description from parsed JSON
```python
description = traverse_obj(response, ('result', 'video', 0, 'summary'), expected_type=str)
```
##### Safely extract more optional metadata
```python
video = traverse_obj(response, ('result', 'video', 0), default={}, expected_type=dict)
description = video.get('summary')
duration = float_or_none(video.get('durationMs'), scale=1000)
view_count = int_or_none(video.get('views'))
```
# EMBEDDING YT-DLP
See [README.md#embedding-yt-dlp](README.md#embedding-yt-dlp) for instructions on how to embed yt-dlp in another Python program
Zocker1999NET
nao20010128nao
kurumigi
bbepis
animelover1984/horahoradev
Pccode66
RobinD42
hseg
pgaig
PSlava
stdedos
u-spec-png
Sipherdrakon
kidonng
smege1001
tandy1000
IONECarter
capntrips
mrfade
ParadoxGBB
wlritchi
NeroBurner
mahanstreamer
alerikaisattera
Derkades
BunnyHelp
i6t
std-move
Chocobozzz
ouwou
korli
octotherp
CeruleanSky
zootedb0t
chao813
ChillingPepper
ConquerorDopy
dalanmiller
DigitalDJ
f4pp3rk1ng
gesa
Jules-A
makeworld-the-better-one
MKSherbini
mrx23dot
poschi3
raphaeldore
renalid
sleaux-meaux
sulyi
tmarki
Vangelis66
AjaxGb
ajj8
jakubadamw
jfogelman
timethrow
sarnoud
Bojidarist
18928172992817182/gustaf
nixklai
smplayer-dev
Zirro
CrypticSignal
flashdagger
fractalf
frafra
kaz-us
ozburo
rhendric
sdomi
selfisekai
stanoarn
0xA7404A/Aurora
4a1e2y5
aarubui
chio0hai
cntrl-s
Deer-Spangle
DEvmIb
Grabien
j54vc1bk
mpeter50
mrpapersonic
pabs3
staubichsauger
xenova
Yakabuff
zulaport
ehoogeveen-medweb
PilzAdam
zmousm
iw0nderhow
unit193
TwoThousandHedgehogs
Jertzukka
cypheron
Hyeeji
bwildenhain
C0D3D3V
kebianizao
Lapin0t
abdullah-if
DavidSkrundz
mkubecek
raleeper
YuenSzeHong
Sematre
jaller94
r5d
julien-hadleyjack
git-anony-mouse
* Run `make doc`
* Update Changelog.md and CONTRIBUTORS
* Change "Based on ytdl" version in Readme.md if needed
* Commit as `Release <version>` and push to master
* Dispatch the workflow https://github.com/yt-dlp/yt-dlp/actions/workflows/build.yml on master
-->
### 2021.12.27
* Avoid recursion error when re-extracting info
* [ffmpeg] Fix position of `--ppa`
* [aria2c] Don't show progress when `--no-progress`
* [cookies] Support other keyrings by [mbway](https://github.com/mbway)
* [EmbedThumbnail] Prefer AtomicParsley over ffmpeg if available
* [generic] Fix HTTP KVS Player by [git-anony-mouse](https://github.com/git-anony-mouse)
* [ThumbnailsConvertor] Fix for when there are no thumbnails
* [docs] Add examples for using `TYPES:` in `-P`/`-o`
* [PixivSketch] Add extractors by [nao20010128nao](https://github.com/nao20010128nao)
* [tiktok] Add music, sticker and tag IEs by [MinePlayersPE](https://github.com/MinePlayersPE)
* [BiliIntl] Fix extractor by [MinePlayersPE](https://github.com/MinePlayersPE)
* [CBC] Fix URL regex
* [tiktok] Fix `extractor_key` used in archive
* [youtube] **End `live-from-start` properly when stream ends with 403**
* [Zee5] Fix VALID_URL for tv-shows by [Ashish0804](https://github.com/Ashish0804)
### 2021.12.25
* [dash,youtube] **Download live from start to end** by [nao20010128nao](https://github.com/nao20010128nao), [pukkandan](https://github.com/pukkandan)
* Add option `--live-from-start` to enable downloading live videos from start
* Add key `is_from_start` in formats to identify formats (of live videos) that downloads from start
* [dash] Create protocol `http_dash_segments_generator` that allows a function to be passed instead of fragments
* [fragment] Allow multiple live dash formats to download simultaneously
* [youtube] Implement fragment re-fetching for the live dash formats
* [youtube] Re-extract dash manifest every 5 hours (manifest expires in 6hrs)
* [postprocessor/ffmpeg] Add `FFmpegFixupDuplicateMoovPP` to fixup duplicated moov atoms
* Known issues:
* Ctrl+C doesn't work on Windows when downloading multiple formats
* If video becomes private, download hangs
* [SponsorBlock] Add `Filler` and `Highlight` categories by [nihil-admirari](https://github.com/nihil-admirari), [pukkandan](https://github.com/pukkandan)
* Change `--sponsorblock-cut all` to `--sponsorblock-cut default` if you do not want filler sections to be removed
* Add field `webpage_url_domain`
* Add interactive format selection with `-f -`
* Add option `--file-access-retries` by [ehoogeveen-medweb](https://github.com/ehoogeveen-medweb)
* [outtmpl] Add alternate forms `S`, `D` and improve `id` detection
* [outtmpl] Add operator `&` for replacement text by [PilzAdam](https://github.com/PilzAdam)
* [EmbedSubtitle] Disable duration check temporarily
* [extractor] Add `_search_nuxt_data` by [nao20010128nao](https://github.com/nao20010128nao)
* [extractor] Ignore errors in comment extraction when `-i` is given
* [extractor] Standardize `_live_title`
* [FormatSort] Prevent incorrect deprecation warning
* [generic] Extract m3u8 formats from JSON-LD
* [postprocessor/ffmpeg] Always add `faststart`
* [utils] Fix parsing `YYYYMMDD` dates in Nov/Dec by [wlritchi](https://github.com/wlritchi)
* [utils] Improve `parse_count`
* [utils] Update `std_headers` by [kikuyan](https://github.com/kikuyan), [fstirlitz](https://github.com/fstirlitz)
* [lazy_extractors] Fix for search IEs
* [extractor] Support default implicit graph in JSON-LD by [zmousm](https://github.com/zmousm)
* Allow `--no-write-thumbnail` to override `--write-all-thumbnail`
* Fix `--throttled-rate`
* Fix control characters being printed to `--console-title`
* Fix PostProcessor hooks not registered for some PPs
* Pre-process when using `--flat-playlist`
* Remove known invalid thumbnails from `info_dict`
* Add warning when using `-f best`
* Use `parse_duration` for `--wait-for-video` and some minor fix
* [test/download] Add more fields
* [test/download] Ignore field `webpage_url_domain` by [std-move](https://github.com/std-move)
* [compat] Suppress errors in enabling VT mode
* [docs] Improve manpage format by [iw0nderhow](https://github.com/iw0nderhow), [pukkandan](https://github.com/pukkandan)
* [docs,cleanup] Minor fixes and cleanup
* [cleanup] Fix some typos by [unit193](https://github.com/unit193)
* [ABC:iview] Add show extractor by [pabs3](https://github.com/pabs3)
* [dropout] Add extractor by [TwoThousandHedgehogs](https://github.com/TwoThousandHedgehogs), [pukkandan](https://github.com/pukkandan)
* [GameJolt] Add extractors by [MinePlayersPE](https://github.com/MinePlayersPE)
* [gofile] Add extractor by [Jertzukka](https://github.com/Jertzukka), [Ashish0804](https://github.com/Ashish0804)
* [hse] Add extractors by [cypheron](https://github.com/cypheron), [pukkandan](https://github.com/pukkandan)
* [NateTV] Add NateIE and NateProgramIE by [Ashish0804](https://github.com/Ashish0804), [Hyeeji](https://github.com/Hyeeji)
* [OpenCast] Add extractors by [bwildenhain](https://github.com/bwildenhain), [C0D3D3V](https://github.com/C0D3D3V)
* [rtve] Add `RTVEAudioIE` by [kebianizao](https://github.com/kebianizao)
* [Rutube] Add RutubeChannelIE by [Ashish0804](https://github.com/Ashish0804)
* [skeb] Add extractor by [nao20010128nao](https://github.com/nao20010128nao)
* [soundcloud] Add related tracks extractor by [Lapin0t](https://github.com/Lapin0t)
* [toggo] Add extractor by [nyuszika7h](https://github.com/nyuszika7h)
* [TrueID] Add extractor by [MinePlayersPE](https://github.com/MinePlayersPE)
* [audiomack] Update album and song VALID_URL by [abdullah-if](https://github.com/abdullah-if), [dirkf](https://github.com/dirkf)
* [CBC Gem] Extract 1080p formats by [DavidSkrundz](https://github.com/DavidSkrundz)
* [ceskatelevize] Fetch iframe from nextJS data by [mkubecek](https://github.com/mkubecek)
* [crackle] Look for non-DRM formats by [raleeper](https://github.com/raleeper)
* [dplay] Temporary fix for `discoveryplus.com/it`
* [DiscoveryPlusShowBaseIE] yield actual video id by [Ashish0804](https://github.com/Ashish0804)
* [Facebook] Handle redirect URLs
* [fujitv] Extract 1080p from `tv_android` m3u8 by [YuenSzeHong](https://github.com/YuenSzeHong)
* [gronkh] Support new URL pattern by [Sematre](https://github.com/Sematre)
* [instagram] Expand valid URL by [u-spec-png](https://github.com/u-spec-png)
* [Instagram] Try bypassing login wall with embed page by [MinePlayersPE](https://github.com/MinePlayersPE)
* [Jamendo] Fix use of `_VALID_URL_RE` by [jaller94](https://github.com/jaller94)
* [LBRY] Support livestreams by [Ashish0804](https://github.com/Ashish0804), [pukkandan](https://github.com/pukkandan)
* [NJPWWorld] Extract formats from m3u8 by [aarubui](https://github.com/aarubui)
* [NovaEmbed] update player regex by [std-move](https://github.com/std-move)
* [npr] Make SMIL extraction non-fatal by [r5d](https://github.com/r5d)
* [ntvcojp] Extract NUXT data by [nao20010128nao](https://github.com/nao20010128nao)
* [ok.ru] add mobile fallback by [nao20010128nao](https://github.com/nao20010128nao)
* [olympics] Add uploader and cleanup by [u-spec-png](https://github.com/u-spec-png)
* [ondemandkorea] Update `jw_config` regex by [julien-hadleyjack](https://github.com/julien-hadleyjack)
* [PlutoTV] Expand `_VALID_URL`
* [RaiNews] Fix extractor by [nixxo](https://github.com/nixxo)
* [RCTIPlusSeries] Lazy extraction and video type selection by [MinePlayersPE](https://github.com/MinePlayersPE)
* [redtube] Handle formats delivered inside a JSON by [dirkf](https://github.com/dirkf), [nixxo](https://github.com/nixxo)
* [SonyLiv] Add OTP login support by [Ashish0804](https://github.com/Ashish0804)
* [Steam] Fix extractor by [u-spec-png](https://github.com/u-spec-png)
* [TikTok] Pass cookies to mobile API by [MinePlayersPE](https://github.com/MinePlayersPE)
* [trovo] Fix inheritance of `TrovoChannelBaseIE`
* [TVer] Extract better thumbnails by [YuenSzeHong](https://github.com/YuenSzeHong)
* [vimeo] Extract chapters
* [web.archive:youtube] Improve metadata extraction by [coletdjnz](https://github.com/coletdjnz)
* [youtube:comments] Add more options for limiting number of comments extracted by [coletdjnz](https://github.com/coletdjnz)
* [youtube:tab] Extract more metadata from feeds/channels/playlists by [coletdjnz](https://github.com/coletdjnz)
* [youtube:tab] Extract video thumbnails from playlist by [coletdjnz](https://github.com/coletdjnz), [pukkandan](https://github.com/pukkandan)
* [youtube:tab] Ignore query when redirecting channel to playlist and cleanup of related code
* [youtube] Fix `ytsearchdate`
* [zdf] Support videos with different ptmd location by [iw0nderhow](https://github.com/iw0nderhow)
* [zee5] Support /episodes in URL
### 2021.12.01
* **Add option `--wait-for-video` to wait for scheduled streams**
* Add option `--break-per-input` to apply --break-on... to each input URL
* Add option `--embed-info-json` to embed info.json in mkv
* Add compat-option `embed-metadata`
* Allow using a custom format selector through API
* [AES] Add ECB mode by [nao20010128nao](https://github.com/nao20010128nao)
* [build] Fix MacOS Build
* [build] Save Git HEAD at release alongside version info
* [build] Use `workflow_dispatch` for release
* [downloader/ffmpeg] Fix for direct videos inside mpd manifests
* [downloader] Add colors to download progress
* [EmbedSubtitles] Slightly relax duration check and related cleanup
* [ExtractAudio] Fix conversion to `wav` and `vorbis`
* [ExtractAudio] Support `alac`
* [extractor] Extract `average_rating` from JSON-LD
* [FixupM3u8] Fixup MPEG-TS in MP4 container
* [generic] Support mpd manifests without extension by [shirt](https://github.com/shirt-dev)
* [hls] Better FairPlay DRM detection by [nyuszika7h](https://github.com/nyuszika7h)
* [jsinterp] Fix splice to handle float (for youtube js player f1ca6900)
* [utils] Allow alignment in `render_table` and add tests
* [utils] Fix `PagedList`
* [utils] Fix error when copying `LazyList`
* Clarify video/audio-only formats in -F
* Ensure directory exists when checking formats
* Ensure path for link files exists by [Zirro](https://github.com/Zirro)
* Ensure same config file is not loaded multiple times
* Fix `postprocessor_hooks`
* Fix `--break-on-archive` when pre-checking
* Fix `--check-formats` for `mhtml`
* Fix `--load-info-json` of playlists with failed entries
* Fix `--trim-filename` when filename has `.`
* Fix bug in parsing `--add-header`
* Fix error in `report_unplayable_conflict` by [shirt](https://github.com/shirt-dev)
* Fix writing playlist infojson with `--no-clean-infojson`
* Validate --get-bypass-country
* [blogger] Add extractor by [pabs3](https://github.com/pabs3)
* [breitbart] Add extractor by [Grabien](https://github.com/Grabien)
* [CableAV] Add extractor by [j54vc1bk](https://github.com/j54vc1bk)
* [CanalAlpha] Add extractor by [Ashish0804](https://github.com/Ashish0804)
* [CozyTV] Add extractor by [Ashish0804](https://github.com/Ashish0804)
* [CPTwentyFour] Add extractor by [Ashish0804](https://github.com/Ashish0804)
* [DiscoveryPlus] Add `DiscoveryPlusItalyShowIE` by [Ashish0804](https://github.com/Ashish0804)
* [ESPNCricInfo] Add extractor by [Ashish0804](https://github.com/Ashish0804)
* [LinkedIn] Add extractor by [u-spec-png](https://github.com/u-spec-png)
* [mixch] Add extractor by [nao20010128nao](https://github.com/nao20010128nao)
* [nebula] Add `NebulaCollectionIE` and rewrite extractor by [hheimbuerger](https://github.com/hheimbuerger)
* [OneFootball] Add extractor by [Ashish0804](https://github.com/Ashish0804)
* [peer.tv] Add extractor by [u-spec-png](https://github.com/u-spec-png)
* [radiozet] Add extractor by [0xA7404A](https://github.com/0xA7404A) (Aurora)
* [redgifs] Add extractor by [chio0hai](https://github.com/chio0hai)
* [RedGifs] Add Search and User extractors by [Deer-Spangle](https://github.com/Deer-Spangle)
* [rtrfm] Add extractor by [pabs3](https://github.com/pabs3)
* [Streamff] Add extractor by [cntrl-s](https://github.com/cntrl-s)
* [Stripchat] Add extractor by [zulaport](https://github.com/zulaport)
* [Aljazeera] Fix extractor by [u-spec-png](https://github.com/u-spec-png)
* [AmazonStoreIE] Fix regex to not match vdp urls by [Ashish0804](https://github.com/Ashish0804)
* [ARDBetaMediathek] Handle new URLs
* [bbc] Get all available formats by [nyuszika7h](https://github.com/nyuszika7h)
* [Bilibili] Fix title extraction by [u-spec-png](https://github.com/u-spec-png)
* [CBC Gem] Fix for shows that don't have all seasons by [makeworld-the-better-one](https://github.com/makeworld-the-better-one)
* [curiositystream] Add more metadata
* [CuriosityStream] Fix series
* [DiscoveryPlus] Rewrite extractors by [Ashish0804](https://github.com/Ashish0804), [pukkandan](https://github.com/pukkandan)
* [HotStar] Set language field from tags by [Ashish0804](https://github.com/Ashish0804)
* [instagram, cleanup] Refactor extractors
* [Instagram] Display more login errors by [MinePlayersPE](https://github.com/MinePlayersPE)
* [itv] Fix extractor by [staubichsauger](https://github.com/staubichsauger), [pukkandan](https://github.com/pukkandan)
* [mediaklikk] Expand valid URL
* [MTV] Improve mgid extraction by [Sipherdrakon](https://github.com/Sipherdrakon), [kikuyan](https://github.com/kikuyan)
* [nexx] Better error message for unsupported format
* [NovaEmbed] Fix extractor by [pukkandan](https://github.com/pukkandan), [std-move](https://github.com/std-move)
* [PatreonUser] Do not capture RSS URLs
* [Reddit] Add support for 1080p videos by [xenova](https://github.com/xenova)
* [RoosterTeethSeries] Fix for multiple pages by [MinePlayersPE](https://github.com/MinePlayersPE)
* [sbs] Fix for movies and livestreams
* [Senate.gov] Add SenateGovIE and fix SenateISVPIE by [Grabien](https://github.com/Grabien), [pukkandan](https://github.com/pukkandan)
* [soundcloud:search] Fix pagination
* [tiktok:user] Set `webpage_url` correctly
* [Tokentube] Fix description by [u-spec-png](https://github.com/u-spec-png)
* [trovo] Fix extractor by [nyuszika7h](https://github.com/nyuszika7h)
* [tv2] Expand valid URL
* [Tvplayhome] Fix extractor by [pukkandan](https://github.com/pukkandan), [18928172992817182](https://github.com/18928172992817182)
* [Twitch:vod] Add chapters by [mpeter50](https://github.com/mpeter50)
* [twitch:vod] Extract live status by [DEvmIb](https://github.com/DEvmIb)
* [VidLii] Add 720p support by [mrpapersonic](https://github.com/mrpapersonic)
* [vimeo] Add fallback for config URL
* [vimeo] Sort http formats higher
* [WDR] Expand valid URL
* [willow] Add extractor by [aarubui](https://github.com/aarubui)
* [xvideos] Detect embed URLs by [4a1e2y5](https://github.com/4a1e2y5)
* [xvideos] Fix extractor by [Yakabuff](https://github.com/Yakabuff)
* [youtube, cleanup] Reorganize Tab and Search extractor inheritances
* [youtube:search_url] Add playlist/channel support
* [youtube] Add `default` player client by [coletdjnz](https://github.com/coletdjnz)
* [youtube] Add storyboard formats
* [youtube] Decrypt n-sig for URLs with `ratebypass`
* [youtube] Minor improvement to format sorting
* [cleanup] Add deprecation warnings
* [cleanup] Refactor `JSInterpreter._seperate`
* [Cleanup] Remove some unnecessary groups in regexes by [Ashish0804](https://github.com/Ashish0804)
* [cleanup] Misc cleanup
### 2021.11.10.1
* Temporarily disable MacOS Build
### 2021.11.10
* [youtube] **Fix throttling by decrypting n-sig**
* Merging extractors from [haruhi-dl](https://git.sakamoto.pl/laudom/haruhi-dl) by [selfisekai](https://github.com/selfisekai)
* [extractor] Add `_search_nextjs_data`
* [tvp] Fix extractors
* [tvp] Add TVPStreamIE
* [wppilot] Add extractors
* [polskieradio] Add extractors
* [radiokapital] Add extractors
* [polsatgo] Add extractor by [selfisekai](https://github.com/selfisekai), [sdomi](https://github.com/sdomi)
* Separate `--check-all-formats` from `--check-formats`
* Approximate filesize from bitrate
* Don't create console in `windows_enable_vt_mode`
* Fix bug in `--load-infojson` of playlists
* [minicurses] Add colors to `-F` and standardize color-printing code
* [outtmpl] Add type `link` for internet shortcut files
* [outtmpl] Add alternate forms for `q` and `j`
* [outtmpl] Do not traverse `None`
* [fragment] Fix progress display in fragmented downloads
* [downloader/ffmpeg] Fix vtt download with ffmpeg
* [ffmpeg] Detect presence of setts and libavformat version
* [ExtractAudio] Rescale `--audio-quality` correctly by [CrypticSignal](https://github.com/CrypticSignal), [pukkandan](https://github.com/pukkandan)
* [ExtractAudio] Use `libfdk_aac` if available by [CrypticSignal](https://github.com/CrypticSignal)
* [FormatSort] `eac3` is better than `ac3`
* [FormatSort] Fix some fields' defaults
* [generic] Detect more json_ld
* [generic] parse jwplayer with only the json URL
* [extractor] Add keyword automatically to SearchIE descriptions
* [extractor] Fix some errors being converted to `ExtractorError`
* [utils] Add `join_nonempty`
* [utils] Add `jwt_decode_hs256` by [Ashish0804](https://github.com/Ashish0804)
* [utils] Create `DownloadCancelled` exception
* [utils] Parse `vp09` as vp9
* [utils] Sanitize URL when determining protocol
* [test/download] Fallback test to `bv`
* [docs] Minor documentation improvements
* [cleanup] Improvements to error and debug messages
* [cleanup] Minor fixes and cleanup
* [3speak] Add extractors by [Ashish0804](https://github.com/Ashish0804)
* [AmazonStore] Add extractor by [Ashish0804](https://github.com/Ashish0804)
* [Gab] Add extractor by [u-spec-png](https://github.com/u-spec-png)
* [mediaset] Add playlist support by [nixxo](https://github.com/nixxo)
* [MLSSoccer] Add extractor by [Ashish0804](https://github.com/Ashish0804)
* [N1] Add support for nova.rs by [u-spec-png](https://github.com/u-spec-png)
* [PlanetMarathi] Add extractor by [Ashish0804](https://github.com/Ashish0804)
* [RaiplayRadio] Add extractors by [frafra](https://github.com/frafra)
* [roosterteeth] Add series extractor
* [sky] Add `SkyNewsStoryIE` by [ajj8](https://github.com/ajj8)
* [youtube] Fix sorting for some videos
* [youtube] Populate `thumbnail` with the best "known" thumbnail
* [youtube] Refactor itag processing
* [youtube] Remove unnecessary no-playlist warning
* [youtube:tab] Add Invidious list for playlists/channels by [rhendric](https://github.com/rhendric)
* [Bilibili:comments] Fix infinite loop by [u-spec-png](https://github.com/u-spec-png)
* [ceskatelevize] Fix extractor by [flashdagger](https://github.com/flashdagger)
* [Coub] Fix media format identification by [wlritchi](https://github.com/wlritchi)
* [crunchyroll] Add extractor-args `language` and `hardsub`
* [DiscoveryPlus] Allow language codes in URL
* [imdb] Fix thumbnail by [ozburo](https://github.com/ozburo)
* [instagram] Add IOS URL support by [u-spec-png](https://github.com/u-spec-png)
* [instagram] Improve login code by [u-spec-png](https://github.com/u-spec-png)
* [Instagram] Improve metadata extraction by [u-spec-png](https://github.com/u-spec-png)
* [iPrima] Fix extractor by [stanoarn](https://github.com/stanoarn)
* [itv] Add support for ITV News by [ajj8](https://github.com/ajj8)
* [la7] Fix extractor by [nixxo](https://github.com/nixxo)
* [linkedin] Don't login multiple times
* [mtv] Fix some videos by [Sipherdrakon](https://github.com/Sipherdrakon)
* [Newgrounds] Fix description by [u-spec-png](https://github.com/u-spec-png)
* [Nrk] Minor fixes by [fractalf](https://github.com/fractalf)
* [Olympics] Fix extractor by [u-spec-png](https://github.com/u-spec-png)
* [piksel] Fix sorting
* [twitter] Do not sort by codec
* [viewlift] Add cookie-based login and series support by [Ashish0804](https://github.com/Ashish0804), [pukkandan](https://github.com/pukkandan)
* [vimeo] Detect source extension and misc cleanup by [flashdagger](https://github.com/flashdagger)
* [vimeo] Fix ondemand videos and direct URLs with hash
* [vk] Fix login and add subtitles by [kaz-us](https://github.com/kaz-us)
* [VLive] Add upload_date and thumbnail by [Ashish0804](https://github.com/Ashish0804)
* [VRT] Fix login by [pgaig](https://github.com/pgaig)
* [Vupload] Fix extractor by [u-spec-png](https://github.com/u-spec-png)
* [wakanim] Add support for MPD manifests by [nyuszika7h](https://github.com/nyuszika7h)
* [wakanim] Detect geo-restriction by [nyuszika7h](https://github.com/nyuszika7h)
* [ZenYandex] Fix extractor by [u-spec-png](https://github.com/u-spec-png)
### 2021.10.22
* [build] Improvements
* Build standalone MacOS packages by [smplayer-dev](https://github.com/smplayer-dev)
* Release windows exe built with `py2exe`
* Enable lazy-extractors in releases.
* Set env var `YTDLP_NO_LAZY_EXTRACTORS` to forcefully disable this (experimental)
* Clean up error reporting in update
* Refactor `pyinst.py`, misc cleanup and improve docs
* [docs] Migrate issues to use forms by [Ashish0804](https://github.com/Ashish0804)
* [downloader] **Fix slow progress hooks**
* This was causing HLS/DASH downloads to be extremely slow in some situations
* [downloader/ffmpeg] Improve simultaneous download and merge
* [EmbedMetadata] Allow overwriting all default metadata with `meta_default` key
* [ModifyChapters] Add ability for `--remove-chapters` to remove sections by timestamp
* [utils] Allow duration strings in `--match-filter`
* Add HDR information to formats
* Add negative option `--no-batch-file` by [Zirro](https://github.com/Zirro)
* Calculate more fields for merged formats
* Do not verify thumbnail URLs unless `--check-formats` is specified
* Don't create console for subprocesses on Windows
* Fix `--restrict-filename` when used with default template
* Fix `check_formats` output being written to stdout when `-qv`
* Fix bug in storyboards
* Fix conflict b/w id and ext in format selection
* Fix verbose head not showing custom configs
* Load archive only after printing verbose head
* Make `duration_string` and `resolution` available in --match-filter
* Re-implement deprecated option `--id`
* Reduce default `--socket-timeout`
* Write verbose header to logger
* [outtmpl] Fix bug in expanding environment variables
* [cookies] Local State should be opened as utf-8
* [extractor,utils] Detect more codecs/mimetypes
* [extractor] Detect `EXT-X-KEY` Apple FairPlay
* [utils] Use `importlib` to load plugins by [sulyi](https://github.com/sulyi)
* [http] Retry on socket timeout and show the last encountered error
* [fragment] Print error message when skipping fragment
* [aria2c] Fix `--skip-unavailable-fragment`
* [SponsorBlock] Obey `extractor-retries` and `sleep-requests`
* [Merger] Do not add `aac_adtstoasc` to non-hls audio
* [ModifyChapters] Do not mutate original chapters by [nihil-admirari](https://github.com/nihil-admirari)
* [devscripts/run_tests] Use markers to filter tests by [sulyi](https://github.com/sulyi)
* [7plus] Add cookie based authentication by [nyuszika7h](https://github.com/nyuszika7h)
* [AdobePass] Fix RCN MSO by [jfogelman](https://github.com/jfogelman)
* [CBC] Fix Gem livestream by [makeworld-the-better-one](https://github.com/makeworld-the-better-one)
* [CBC] Support CBC Gem member content by [makeworld-the-better-one](https://github.com/makeworld-the-better-one)
* [crunchyroll] Add season to flat-playlist
* [crunchyroll] Add support for `beta.crunchyroll` URLs and fix series URLs with language code
* [EUScreen] Add Extractor by [Ashish0804](https://github.com/Ashish0804)
* [Gronkh] Add extractor by [Ashish0804](https://github.com/Ashish0804)
* [hidive] Fix typo
* [Hotstar] Mention Dynamic Range in `format_id` by [Ashish0804](https://github.com/Ashish0804)
* [Hotstar] Raise appropriate error for DRM
* [instagram] Add login by [u-spec-png](https://github.com/u-spec-png)
* [instagram] Show appropriate error when login is needed
* [microsoftstream] Add extractor by [damianoamatruda](https://github.com/damianoamatruda), [nixklai](https://github.com/nixklai)
* [on24] Add extractor by [damianoamatruda](https://github.com/damianoamatruda)
* [patreon] Fix vimeo player regex by [zenerdi0de](https://github.com/zenerdi0de)
* [SkyNewsAU] Add extractor by [Ashish0804](https://github.com/Ashish0804)
* [tagesschau] Fix extractor by [u-spec-png](https://github.com/u-spec-png)
* [tbs] Add tbs live streams by [llacb47](https://github.com/llacb47)
* [tiktok] Fix typo and update tests
* [trovo] Support channel clips and VODs by [Ashish0804](https://github.com/Ashish0804)
* [Viafree] Add support for Finland by [18928172992817182](https://github.com/18928172992817182)
* [vimeo] Fix embedded `player.vimeo`
* [vlive:channel] Fix extraction by [kikuyan](https://github.com/kikuyan), [pukkandan](https://github.com/pukkandan)
* [youtube] Add auto-translated subtitles
* [youtube] Expose different formats with same itag
* [youtube:comments] Fix for new layout by [coletdjnz](https://github.com/coletdjnz)
* [cleanup] Cleanup bilibili code by [pukkandan](https://github.com/pukkandan), [u-spec-png](https://github.com/u-spec-png)
* [cleanup] Remove broken youtube login code
* [cleanup] Standardize timestamp formatting code
* [cleanup] Generalize `getcomments` implementation for extractors
* [cleanup] Simplify search extractors code
* [cleanup] misc
### 2021.10.10
* [downloader/ffmpeg] Fix bug in initializing `FFmpegPostProcessor`
* [minicurses] Fix when printing to file
* [downloader] Fix throttledratelimit
* [francetv] Fix extractor by [fstirlitz](https://github.com/fstirlitz), [sarnoud](https://github.com/sarnoud)
* [NovaPlay] Add extractor by [Bojidarist](https://github.com/Bojidarist)
* [ffmpeg] Revert "Set max probesize" - No longer needed
* [docs] Remove incorrect dependency on VC++10
* [build] Allow to release without changelog
### 2021.10.09
* Improved progress reporting
* Separate `--console-title` and `--no-progress`
* Add option `--progress` to show progress-bar even in quiet mode
* Fix and refactor `minicurses` and use it for all progress reporting
* Standardize use of terminal sequences and enable color support for windows 10
* Add option `--progress-template` to customize progress-bar and console-title
* Add postprocessor hooks and progress reporting
* [postprocessor] Add plugin support with option `--use-postprocessor`
* [extractor] Extract storyboards from SMIL manifests by [fstirlitz](https://github.com/fstirlitz)
* [outtmpl] Alternate form of format type `l` for `\n` delimited list
* [outtmpl] Format type `U` for unicode normalization
* [outtmpl] Allow empty output template to skip a type of file
* Merge webm formats into mkv if thumbnails are to be embedded
* [adobepass] Add RCN as MSO by [jfogelman](https://github.com/jfogelman)
* [ciscowebex] Add extractor by [damianoamatruda](https://github.com/damianoamatruda)
* [Gettr] Add extractor by [i6t](https://github.com/i6t)
* [GoPro] Add extractor by [i6t](https://github.com/i6t)
* [N1] Add extractor by [u-spec-png](https://github.com/u-spec-png)
* [Theta] Add video extractor by [alerikaisattera](https://github.com/alerikaisattera)
* [Veo] Add extractor by [i6t](https://github.com/i6t)
* [Vupload] Add extractor by [u-spec-png](https://github.com/u-spec-png)
* [bbc] Extract better quality videos by [ajj8](https://github.com/ajj8)
* [Bilibili] Add subtitle converter by [u-spec-png](https://github.com/u-spec-png)
* [CBC] Cleanup tests by [makeworld-the-better-one](https://github.com/makeworld-the-better-one)
* [Douyin] Rewrite extractor by [MinePlayersPE](https://github.com/MinePlayersPE)
* [Funimation] Fix for /v/ urls by [pukkandan](https://github.com/pukkandan), [Jules-A](https://github.com/Jules-A)
* [Funimation] Sort formats according to the relevant extractor-args
* [Hidive] Fix duplicate and incorrect formats
* [HotStarSeries] Fix cookies by [Ashish0804](https://github.com/Ashish0804)
* [LinkedInLearning] Add subtitles by [Ashish0804](https://github.com/Ashish0804)
* [Mediaite] Relax valid url by [coletdjnz](https://github.com/coletdjnz)
* [Newgrounds] Add age_limit and fix duration by [u-spec-png](https://github.com/u-spec-png)
* [Newgrounds] Fix view count on songs by [u-spec-png](https://github.com/u-spec-png)
* [parliamentlive.tv] Fix extractor by [u-spec-png](https://github.com/u-spec-png)
* [PolskieRadio] Fix extractors by [jakubadamw](https://github.com/jakubadamw), [u-spec-png](https://github.com/u-spec-png)
* [reddit] Add embedded url by [u-spec-png](https://github.com/u-spec-png)
* [reddit] Fix 429 by generating a random `reddit_session` by [AjaxGb](https://github.com/AjaxGb)
* [Rumble] Add RumbleChannelIE by [Ashish0804](https://github.com/Ashish0804)
* [soundcloud:playlist] Detect last page correctly
* [SovietsCloset] Add duration from m3u8 by [ChillingPepper](https://github.com/ChillingPepper)
* [Streamable] Add codecs by [u-spec-png](https://github.com/u-spec-png)
* [vidme] Remove extractor by [alerikaisattera](https://github.com/alerikaisattera)
* [youtube:tab] Fallback to API when webpage fails to download by [coletdjnz](https://github.com/coletdjnz)
* [youtube] Fix non-fatal errors in fetching player
* Fix `--flat-playlist` when neither IE nor id is known
* Fix `-f mp4` behaving differently from youtube-dl
* Workaround for bug in `ssl.SSLContext.load_default_certs`
* [aes] Improve performance slightly by [sulyi](https://github.com/sulyi)
* [cookies] Fix keyring fallback by [mbway](https://github.com/mbway)
* [embedsubtitle] Fix error when duration is unknown
* [ffmpeg] Fix error when subtitle file is missing
* [ffmpeg] Set max probesize to workaround AAC HLS stream issues by [shirt](https://github.com/shirt-dev)
* [FixupM3u8] Remove redundant run if merged is needed
* [hls] Fix decryption issues by [shirt](https://github.com/shirt-dev), [pukkandan](https://github.com/pukkandan)
* [http] Respect user-provided chunk size over extractor's
* [utils] Let traverse_obj accept functions as keys
* [docs] Add note about our custom ffmpeg builds
* [docs] Write embedding and contributing documentation by [pukkandan](https://github.com/pukkandan), [timethrow](https://github.com/timethrow)
* [update] Check for new version even if not updateable
* [build] Add more files to the tarball
* [build] Allow building with py2exe (and misc fixes)
* [build] Use pycryptodomex by [shirt](https://github.com/shirt-dev), [pukkandan](https://github.com/pukkandan)
* [cleanup] Some minor refactoring, improve docs and misc cleanup
### 2021.09.25
* Add new option `--netrc-location`
* [outtmpl] Allow alternate fields using `,`
* [outtmpl] Add format type `B` to treat the value as bytes (eg: to limit the filename to a certain number of bytes)
* Separate the options `--ignore-errors` and `--no-abort-on-error`
* Basic framework for simultaneous download of multiple formats by [nao20010128nao](https://github.com/nao20010128nao)
* [17live] Add 17.live extractor by [nao20010128nao](https://github.com/nao20010128nao)
* [bilibili] Add BiliIntlIE and BiliIntlSeriesIE by [Ashish0804](https://github.com/Ashish0804)
* [CAM4] Add extractor by [alerikaisattera](https://github.com/alerikaisattera)
* [Chingari] Add extractors by [Ashish0804](https://github.com/Ashish0804)
* [CGTN] Add extractor by [chao813](https://github.com/chao813)
* [damtomo] Add extractor by [nao20010128nao](https://github.com/nao20010128nao)
* [gotostage] Add extractor by [poschi3](https://github.com/poschi3)
* [Koo] Add extractor by [Ashish0804](https://github.com/Ashish0804)
* [Mediaite] Add Extractor by [Ashish0804](https://github.com/Ashish0804)
* [Mediaklikk] Add Extractor by [tmarki](https://github.com/tmarki), [mrx23dot](https://github.com/mrx23dot), [coletdjnz](https://github.com/coletdjnz)
* [MuseScore] Add Extractor by [Ashish0804](https://github.com/Ashish0804)
* [Newgrounds] Add NewgroundsUserIE and improve extractor by [u-spec-png](https://github.com/u-spec-png)
* [nzherald] Add NZHeraldIE by [coletdjnz](https://github.com/coletdjnz)
* [Olympics] Add replay extractor by [Ashish0804](https://github.com/Ashish0804)
* [Peertube] Add channel and playlist extractors by [u-spec-png](https://github.com/u-spec-png)
* [radlive] Add extractor by [nyuszika7h](https://github.com/nyuszika7h)
* [SovietsCloset] Add extractor by [ChillingPepper](https://github.com/ChillingPepper)
* [Streamanity] Add Extractor by [alerikaisattera](https://github.com/alerikaisattera)
* [Theta] Add extractor by [alerikaisattera](https://github.com/alerikaisattera)
* [Yandex] Add ZenYandexIE and ZenYandexChannelIE by [Ashish0804](https://github.com/Ashish0804)
* [9Now] handle episodes of series by [dalanmiller](https://github.com/dalanmiller)
* [AnimalPlanet] Fix extractor by [Sipherdrakon](https://github.com/Sipherdrakon)
* [Arte] Improve description extraction by [renalid](https://github.com/renalid)
* [atv.at] Use jwt for API by [NeroBurner](https://github.com/NeroBurner)
* [brightcove] Extract subtitles from manifests
* [CBC] Fix CBC Gem extractors by [makeworld-the-better-one](https://github.com/makeworld-the-better-one)
* [cbs] Report appropriate error for DRM
* [comedycentral] Support `collection-playlist` by [nixxo](https://github.com/nixxo)
* [DIYNetwork] Support new format by [Sipherdrakon](https://github.com/Sipherdrakon)
* [downloader/niconico] Pass custom headers by [nao20010128nao](https://github.com/nao20010128nao)
* [dw] Fix extractor
* [Fancode] Fix live streams by [zenerdi0de](https://github.com/zenerdi0de)
* [funimation] Fix for locations outside US by [Jules-A](https://github.com/Jules-A), [pukkandan](https://github.com/pukkandan)
* [globo] Fix GloboIE by [Ashish0804](https://github.com/Ashish0804)
* [HiDive] Fix extractor by [Ashish0804](https://github.com/Ashish0804)
* [Hotstar] Add referer for subs by [Ashish0804](https://github.com/Ashish0804)
* [itv] Fix extractor, add subtitles and thumbnails by [coletdjnz](https://github.com/coletdjnz), [sleaux-meaux](https://github.com/sleaux-meaux), [Vangelis66](https://github.com/Vangelis66)
* [lbry] Show error message from API response
* [Mxplayer] Use mobile API by [Ashish0804](https://github.com/Ashish0804)
* [NDR] Rewrite NDRIE by [Ashish0804](https://github.com/Ashish0804)
* [Nuvid] Fix extractor by [u-spec-png](https://github.com/u-spec-png)
* [Oreilly] Handle new web url by [MKSherbini](https://github.com/MKSherbini)
* [pbs] Fix subtitle extraction by [coletdjnz](https://github.com/coletdjnz), [gesa](https://github.com/gesa), [raphaeldore](https://github.com/raphaeldore)
* [peertube] Update instances by [u-spec-png](https://github.com/u-spec-png)
* [plutotv] Fix extractor for URLs with `/en`
* [reddit] Workaround for 429 by redirecting to old.reddit.com
* [redtube] Fix exts
* [soundcloud] Make playlist extraction lazy
* [soundcloud] Retry playlist pages on `502` error and update `_CLIENT_ID`
* [southpark] Fix SouthParkDE by [coletdjnz](https://github.com/coletdjnz)
* [SovietsCloset] Fix playlists for games with only named categories by [ConquerorDopy](https://github.com/ConquerorDopy)
* [SpankBang] Fix uploader by [f4pp3rk1ng](https://github.com/f4pp3rk1ng), [coletdjnz](https://github.com/coletdjnz)
* [tiktok] Use API to fetch higher quality video by [MinePlayersPE](https://github.com/MinePlayersPE), [llacb47](https://github.com/llacb47)
* [TikTokUser] Fix extractor using mobile API by [MinePlayersPE](https://github.com/MinePlayersPE), [llacb47](https://github.com/llacb47)
* [videa] Fix some extraction errors by [nyuszika7h](https://github.com/nyuszika7h)
* [VrtNU] Handle login errors by [llacb47](https://github.com/llacb47)
* [vrv] Don't raise error when thumbnails are missing
* [youtube] Cleanup authentication code by [coletdjnz](https://github.com/coletdjnz)
* [youtube] Fix `--mark-watched` with `--cookies-from-browser`
* [youtube] Improvements to JS player extraction and add extractor-args to skip it by [coletdjnz](https://github.com/coletdjnz)
* [youtube] Retry on 'Unknown Error' by [coletdjnz](https://github.com/coletdjnz)
* [youtube] Return full URL instead of just ID
* [youtube] Warn when trying to download clips
* [zdf] Improve format sorting
* [zype] Extract subtitles from the m3u8 manifest by [fstirlitz](https://github.com/fstirlitz)
* Allow `--force-write-archive` to work with `--flat-playlist`
* Download subtitles in order of `--sub-langs`
* Allow `0` in `--playlist-items`
* Handle more playlist errors with `-i`
* Fix `--no-get-comments`
* Fix `extra_info` being reused across runs
* Fix compat options `no-direct-merge` and `playlist-index`
* Dump files should obey `--trim-filename` by [sulyi](https://github.com/sulyi)
* [aes] Add `aes_gcm_decrypt_and_verify` by [sulyi](https://github.com/sulyi), [pukkandan](https://github.com/pukkandan)
* [aria2c] Fix IV for some AES-128 streams by [shirt](https://github.com/shirt-dev)
* [compat] Don't ignore `HOME` (if set) on windows
* [cookies] Make browser names case insensitive
* [cookies] Print warning for cookie decoding error only once
* [extractor] Fix root-relative URLs in MPD by [DigitalDJ](https://github.com/DigitalDJ)
* [ffmpeg] Add `aac_adtstoasc` when merging if needed
* [fragment,aria2c] Generalize and refactor some code
* [fragment] Avoid repeated request for AES key
* [fragment] Fix range header when using `-N` and media sequence by [shirt](https://github.com/shirt-dev)
* [hls,aes] Fallback to native implementation for AES-CBC and detect `Cryptodome` in addition to `Crypto`
* [hls] Byterange + AES128 is supported by native downloader
* [ModifyChapters] Improve sponsor chapter merge algorithm by [nihil-admirari](https://github.com/nihil-admirari)
* [ModifyChapters] Minor fixes
* [WebVTT] Adjust parser to accommodate PBS subtitles
* [utils] Improve `extract_timezone` by [dirkf](https://github.com/dirkf)
* [options] Fix `--no-config` and refactor reading of config files
* [options] Strip spaces and ignore empty entries in list-like switches
* [test/cookies] Improve logging
* [build] Automate more of the release process by [animelover1984](https://github.com/animelover1984), [pukkandan](https://github.com/pukkandan)
* [build] Fix sha256 by [nihil-admirari](https://github.com/nihil-admirari)
* [build] Bring back brew taps by [nao20010128nao](https://github.com/nao20010128nao)
* [build] Provide `--onedir` zip for windows by [pukkandan](https://github.com/pukkandan)
* [cleanup,docs] Add deprecation warning in docs for some counter intuitive behaviour
* [cleanup] Fix line endings for `nebula.py` by [glenn-slayden](https://github.com/glenn-slayden)
* [cleanup] Improve `make clean-test` by [sulyi](https://github.com/sulyi)
* [cleanup] Misc
### 2021.09.02
* **Native SponsorBlock** implementation by [nihil-admirari](https://github.com/nihil-admirari), [pukkandan](https://github.com/pukkandan)
* `--sponsorblock-remove CATS` removes specified chapters from file
* `--sponsorblock-mark CATS` marks the specified sponsor sections as chapters
* `--sponsorblock-chapter-title TMPL` to specify sponsor chapter template
* `--sponsorblock-api URL` to use a different API
* No re-encoding is done unless `--force-keyframes-at-cuts` is used
* The fetched sponsor sections are written to the infojson
* Deprecates: `--sponskrub`, `--no-sponskrub`, `--sponskrub-cut`, `--no-sponskrub-cut`, `--sponskrub-force`, `--no-sponskrub-force`, `--sponskrub-location`, `--sponskrub-args`
* Split `--embed-chapters` from `--embed-metadata` (it still implies the former by default)
* Add option `--remove-chapters` to remove arbitrary chapters by [nihil-admirari](https://github.com/nihil-admirari), [pukkandan](https://github.com/pukkandan)
* Add option `--force-keyframes-at-cuts` for more accurate cuts when removing and splitting chapters by [nihil-admirari](https://github.com/nihil-admirari)
* Let `--match-filter` reject entries early
* Makes redundant: `--match-title`, `--reject-title`, `--min-views`, `--max-views`
* [lazy_extractor] Improvements (It now passes all tests)
* Bugfix for when plugin directory doesn't exist by [kidonng](https://github.com/kidonng)
* Create instance only after pre-checking archive
* Import actual class if an attribute is accessed
* Fix `suitable` and add flake8 test
* [downloader/ffmpeg] Experimental support for DASH manifests (including live)
* Your ffmpeg must have [this patch](https://github.com/FFmpeg/FFmpeg/commit/3249c757aed678780e22e99a1a49f4672851bca9) applied for YouTube DASH to work
* [downloader/ffmpeg] Allow passing custom arguments before `-i`
* [BannedVideo] Add extractor by [smege1001](https://github.com/smege1001), [blackjack4494](https://github.com/blackjack4494), [pukkandan](https://github.com/pukkandan)
* [bilibili] Add category extractor by [animelover1984](https://github.com/animelover1984)
* [Epicon] Add extractors by [Ashish0804](https://github.com/Ashish0804)
* [filmmodu] Add extractor by [mzbaulhaque](https://github.com/mzbaulhaque)
* [GabTV] Add extractor by [Ashish0804](https://github.com/Ashish0804)
* [Hungama] Fix `HungamaSongIE` and add `HungamaAlbumPlaylistIE` by [Ashish0804](https://github.com/Ashish0804)
* [ManotoTV] Add new extractors by [tandy1000](https://github.com/tandy1000)
* [Niconico] Add Search extractors by [animelover1984](https://github.com/animelover1984), [pukkandan](https://github.com/pukkandan)
* [Patreon] Add `PatreonUserIE` by [zenerdi0de](https://github.com/zenerdi0de)
* [peloton] Add extractor by [IONECarter](https://github.com/IONECarter), [capntrips](https://github.com/capntrips), [pukkandan](https://github.com/pukkandan)
* [ProjectVeritas] Add extractor by [Ashish0804](https://github.com/Ashish0804)
* [radiko] Add extractors by [nao20010128nao](https://github.com/nao20010128nao)
* [StarTV] Add extractor for `startv.com.tr` by [mrfade](https://github.com/mrfade), [coletdjnz](https://github.com/coletdjnz)
* [tiktok] Add `TikTokUserIE` by [Ashish0804](https://github.com/Ashish0804), [pukkandan](https://github.com/pukkandan)
* [Tokentube] Add extractor by [u-spec-png](https://github.com/u-spec-png)
* [TV2Hu] Fix `TV2HuIE` and add `TV2HuSeriesIE` by [Ashish0804](https://github.com/Ashish0804)
* [voicy] Add extractor by [nao20010128nao](https://github.com/nao20010128nao)
* [adobepass] Fix Verizon SAML login by [nyuszika7h](https://github.com/nyuszika7h), [ParadoxGBB](https://github.com/ParadoxGBB)
* [afreecatv] Fix adult VODs by [wlritchi](https://github.com/wlritchi)
* [afreecatv] Tolerate failure to parse date string by [wlritchi](https://github.com/wlritchi)
* [aljazeera] Fix extractor by [MinePlayersPE](https://github.com/MinePlayersPE)
* [ATV.at] Fix extractor for ATV.at by [NeroBurner](https://github.com/NeroBurner), [coletdjnz](https://github.com/coletdjnz)
* [bitchute] Fix test by [mahanstreamer](https://github.com/mahanstreamer)
* [camtube] Remove obsolete extractor by [alerikaisattera](https://github.com/alerikaisattera)
* [CDA] Add more formats by [u-spec-png](https://github.com/u-spec-png)
* [eroprofile] Fix page skipping in albums by [jhwgh1968](https://github.com/jhwgh1968)
* [facebook] Fix format sorting
* [facebook] Fix metadata extraction by [kikuyan](https://github.com/kikuyan)
* [facebook] Update onion URL by [Derkades](https://github.com/Derkades)
* [HearThisAtIE] Fix extractor by [Ashish0804](https://github.com/Ashish0804)
* [instagram] Add referrer to prevent throttling by [u-spec-png](https://github.com/u-spec-png), [kikuyan](https://github.com/kikuyan)
* [iwara.tv] Extract more metadata by [BunnyHelp](https://github.com/BunnyHelp)
* [iwara] Add thumbnail by [i6t](https://github.com/i6t)
* [kakao] Fix extractor
* [mediaset] Fix extraction for some videos by [nyuszika7h](https://github.com/nyuszika7h)
* [Motherless] Fix extractor by [coletdjnz](https://github.com/coletdjnz)
* [Nova] fix extractor by [std-move](https://github.com/std-move)
* [ParamountPlus] Fix geo verification by [shirt](https://github.com/shirt-dev)
* [peertube] handle new video URL format by [Chocobozzz](https://github.com/Chocobozzz)
* [pornhub] Separate and fix playlist extractor by [mzbaulhaque](https://github.com/mzbaulhaque)
* [reddit] Fix for quarantined subreddits by [ouwou](https://github.com/ouwou)
* [ShemarooMe] Fix extractor by [Ashish0804](https://github.com/Ashish0804)
* [soundcloud] Refetch `client_id` on 403
* [tiktok] Fix metadata extraction
* [TV2] Fix extractor by [Ashish0804](https://github.com/Ashish0804)
* [tv5mondeplus] Fix extractor by [korli](https://github.com/korli)
* [VH1,TVLand] Fix extractors by [Sipherdrakon](https://github.com/Sipherdrakon)
* [Viafree] Fix extractor and extract subtitles by [coletdjnz](https://github.com/coletdjnz)
* [XHamster] Extract `uploader_id` by [octotherp](https://github.com/octotherp)
* [youtube] Add `shorts` to `_VALID_URL`
* [youtube] Add av01 itags to known formats list by [blackjack4494](https://github.com/blackjack4494)
* [youtube] Extract error messages from HTTPError response by [coletdjnz](https://github.com/coletdjnz)
* [youtube] Fix subtitle names
* [youtube] Prefer audio stream that YouTube considers default
* [youtube] Remove annotations and deprecate `--write-annotations` by [coletdjnz](https://github.com/coletdjnz)
* [Zee5] Fix extractor and add subtitles by [Ashish0804](https://github.com/Ashish0804)
* [aria2c] Obey `--rate-limit`
* [EmbedSubtitle] Continue even if some files are missing
* [extractor] Better error message for DRM
* [extractor] Common function `_match_valid_url`
* [extractor] Show video id in error messages if possible
* [FormatSort] Remove priority of `lang`
* [options] Add `_set_from_options_callback`
* [SubtitleConvertor] Fix bug during subtitle conversion
* [utils] Add `parse_qs`
* [webvtt] Fix timestamp overflow adjustment by [fstirlitz](https://github.com/fstirlitz)
* Bugfix for `--replace-in-metadata`
* Don't try to merge with final extension
* Fix `--force-overwrites` when using `-k`
* Fix `--no-prefer-free-formats` by [CeruleanSky](https://github.com/CeruleanSky)
* Fix `-F` for extractors that directly return url
* Fix `-J` when there are failed videos
* Fix `extra_info` being reused across runs
* Fix `playlist_index` not obeying `playlist_start` and add tests
* Fix resuming of single formats when using `--no-part`
* Revert erroneous use of the `Content-Length` header by [fstirlitz](https://github.com/fstirlitz)
* Use `os.replace` where applicable by [paulwrubel](https://github.com/paulwrubel)
* [build] Add homebrew taps `yt-dlp/taps/yt-dlp` by [nao20010128nao](https://github.com/nao20010128nao)
* [build] Fix bug in making `yt-dlp.tar.gz`
* [docs] Fix some typos by [pukkandan](https://github.com/pukkandan), [zootedb0t](https://github.com/zootedb0t)
* [cleanup] Replace improper use of tab in trovo by [glenn-slayden](https://github.com/glenn-slayden)
### 2021.08.10
* Add option `--replace-in-metadata`
### 2021.08.02
* Add logo, banner and donate links
* [outtmpl] Expand and escape environment variables
* [outtmpl] Add format types `j` (json), `l` (comma delimited list), `q` (quoted for terminal)
* [downloader] Allow streaming some unmerged formats to stdout using ffmpeg
* [youtube] **Age-gate bypass**
* Add `agegate` clients by [pukkandan](https://github.com/pukkandan), [MinePlayersPE](https://github.com/MinePlayersPE)
### 2021.06.09
* Fix bug where `%(field)d` in filename template throws error
* [outtmpl] Improve offset parsing
* [test] More rigorous tests for `prepare_filename`
### 2021.06.08 ### 2021.06.08
@ -917,7 +1591,7 @@
* Cleaned up the fork for public use * Cleaned up the fork for public use
**PS**: All uncredited changes above this point are authored by [pukkandan](https://github.com/pukkandan) **Note**: All uncredited changes above this point are authored by [pukkandan](https://github.com/pukkandan)
### Unreleased changes in [blackjack4494/yt-dlc](https://github.com/blackjack4494/yt-dlc) ### Unreleased changes in [blackjack4494/yt-dlc](https://github.com/blackjack4494/yt-dlc)
* Updated to youtube-dl release 2020.11.26 by [pukkandan](https://github.com/pukkandan) * Updated to youtube-dl release 2020.11.26 by [pukkandan](https://github.com/pukkandan)
@ -942,8 +1616,110 @@
* [spreaker] fix SpreakerShowIE test URL by [pukkandan](https://github.com/pukkandan) * [spreaker] fix SpreakerShowIE test URL by [pukkandan](https://github.com/pukkandan)
* [Vlive] Fix playlist handling when downloading a channel by [kyuyeunk](https://github.com/kyuyeunk) * [Vlive] Fix playlist handling when downloading a channel by [kyuyeunk](https://github.com/kyuyeunk)
* [tmz] Fix extractor by [diegorodriguezv](https://github.com/diegorodriguezv) * [tmz] Fix extractor by [diegorodriguezv](https://github.com/diegorodriguezv)
* [ITV] BTCC URL update by [WolfganP](https://github.com/WolfganP)
* [generic] Detect embedded bitchute videos by [pukkandan](https://github.com/pukkandan) * [generic] Detect embedded bitchute videos by [pukkandan](https://github.com/pukkandan)
* [generic] Extract embedded youtube and twitter videos by [diegorodriguezv](https://github.com/diegorodriguezv) * [generic] Extract embedded youtube and twitter videos by [diegorodriguezv](https://github.com/diegorodriguezv)
* [ffmpeg] Ensure all streams are copied by [pukkandan](https://github.com/pukkandan) * [ffmpeg] Ensure all streams are copied by [pukkandan](https://github.com/pukkandan)
* [embedthumbnail] Fix for os.rename error by [pukkandan](https://github.com/pukkandan) * [embedthumbnail] Fix for os.rename error by [pukkandan](https://github.com/pukkandan)
* make_win.bat: don't use UPX to pack vcruntime140.dll by [jbruchon](https://github.com/jbruchon) * make_win.bat: don't use UPX to pack vcruntime140.dll by [jbruchon](https://github.com/jbruchon)
### Changelog of [blackjack4494/yt-dlc](https://github.com/blackjack4494/yt-dlc) till release 2020.11.11-3
**Note**: This was constructed from the merge commit messages and may not be entirely accurate
* [bandcamp] fix failing test. remove subclass hack by [insaneracist](https://github.com/insaneracist)
* [bandcamp] restore album downloads by [insaneracist](https://github.com/insaneracist)
* [francetv] fix extractor by [Surkal](https://github.com/Surkal)
* [gdcvault] fix extractor by [blackjack4494](https://github.com/blackjack4494)
* [hotstar] Move to API v1 by [theincognito-inc](https://github.com/theincognito-inc)
* [hrfernsehen] add extractor by [blocktrron](https://github.com/blocktrron)
* [kakao] new apis by [blackjack4494](https://github.com/blackjack4494)
* [la7] fix missing protocol by [nixxo](https://github.com/nixxo)
* [mailru] removed escaped braces, use urljoin, added tests by [nixxo](https://github.com/nixxo)
* [MTV/Nick] universal mgid extractor + fix nick.de feed by [blackjack4494](https://github.com/blackjack4494)
* [mtv] Fix a missing match_id by [nixxo](https://github.com/nixxo)
* [Mtv] updated extractor logic & more by [blackjack4494](https://github.com/blackjack4494)
* [ndr] support Daserste ndr by [blackjack4494](https://github.com/blackjack4494)
* [Netzkino] Only use video id to find metadata by [TobiX](https://github.com/TobiX)
* [newgrounds] fix: video download by [insaneracist](https://github.com/insaneracist)
* [nitter] Add new extractor by [B0pol](https://github.com/B0pol)
* [soundcloud] Resolve audio/x-wav by [tfvlrue](https://github.com/tfvlrue)
* [soundcloud] sets pattern and tests by [blackjack4494](https://github.com/blackjack4494)
* [SouthparkDE/MTV] another mgid extraction (mtv_base) feed url updated by [blackjack4494](https://github.com/blackjack4494)
* [StoryFire] Add new extractor by [sgstair](https://github.com/sgstair)
* [twitch] by [geauxlo](https://github.com/geauxlo)
* [videa] Adapt to updates by [adrianheine](https://github.com/adrianheine)
* [Viki] subtitles, formats by [blackjack4494](https://github.com/blackjack4494)
* [vlive] fix extractor for revamped website by [exwm](https://github.com/exwm)
* [xtube] fix extractor by [insaneracist](https://github.com/insaneracist)
* [youtube] Convert subs when download is skipped by [blackjack4494](https://github.com/blackjack4494)
* [youtube] Fix age gate detection by [random-nick](https://github.com/random-nick)
* [youtube] fix yt-only playback when age restricted/gated - requires cookies by [blackjack4494](https://github.com/blackjack4494)
* [youtube] fix: extract artist metadata from ytInitialData by [insaneracist](https://github.com/insaneracist)
* [youtube] fix: extract mix playlist ids from ytInitialData by [insaneracist](https://github.com/insaneracist)
* [youtube] fix: mix playlist title by [insaneracist](https://github.com/insaneracist)
* [youtube] fix: Youtube Music playlists by [insaneracist](https://github.com/insaneracist)
* [Youtube] Fixed problem with new youtube player by [peet1993](https://github.com/peet1993)
* [zoom] Fix url parsing for url's containing /share/ and dots by [Romern](https://github.com/Romern)
* [zoom] new extractor by [insaneracist](https://github.com/insaneracist)
* abc by [adrianheine](https://github.com/adrianheine)
* Added Comcast_SSO fix by [merval](https://github.com/merval)
* Added DRM logic to brightcove by [merval](https://github.com/merval)
* Added regex for ABC.com site. by [kucksdorfs](https://github.com/kucksdorfs)
* alura by [hugohaa](https://github.com/hugohaa)
* Arbitrary merges by [fstirlitz](https://github.com/fstirlitz)
* ard.py_add_playlist_support by [martin54](https://github.com/martin54)
* Bugfix/youtube/chapters fix extractor by [gschizas](https://github.com/gschizas)
* bugfix_youtube_like_extraction by [RedpointsBots](https://github.com/RedpointsBots)
* Create build workflow by [blackjack4494](https://github.com/blackjack4494)
* deezer by [LucBerge](https://github.com/LucBerge)
* Detect embedded bitchute videos by [pukkandan](https://github.com/pukkandan)
* Don't install tests by [l29ah](https://github.com/l29ah)
* Don't try to embed/convert json subtitles generated by [youtube](https://github.com/youtube) livechat by [pukkandan](https://github.com/pukkandan)
* Doodstream by [sxvghd](https://github.com/sxvghd)
* duboku by [lkho](https://github.com/lkho)
* elonet by [tpikonen](https://github.com/tpikonen)
* ext/remuxe-video by [Zocker1999NET](https://github.com/Zocker1999NET)
* fall-back to the old way to fetch subtitles, if needed by [RobinD42](https://github.com/RobinD42)
* feature_subscriber_count by [RedpointsBots](https://github.com/RedpointsBots)
* Fix external downloader when there is no http_header by [pukkandan](https://github.com/pukkandan)
* Fix issue triggered by [tubeup](https://github.com/tubeup) by [nsapa](https://github.com/nsapa)
* Fix YoutubePlaylistsIE by [ZenulAbidin](https://github.com/ZenulAbidin)
* fix-mitele' by [DjMoren](https://github.com/DjMoren)
* fix/google-drive-cookie-issue by [legraphista](https://github.com/legraphista)
* fix_tiktok by [mervel-mervel](https://github.com/mervel-mervel)
* Fixed problem with JS player URL by [peet1993](https://github.com/peet1993)
* fixYTSearch by [xarantolus](https://github.com/xarantolus)
* FliegendeWurst-3sat-zdf-merger-bugfix-feature
* gilou-bandcamp_update
* implement ThisVid extractor by [rigstot](https://github.com/rigstot)
* JensTimmerman-patch-1 by [JensTimmerman](https://github.com/JensTimmerman)
* Keep download archive in memory for better performance by [jbruchon](https://github.com/jbruchon)
* la7-fix by [iamleot](https://github.com/iamleot)
* magenta by [adrianheine](https://github.com/adrianheine)
* Merge 26564 from [adrianheine](https://github.com/adrianheine)
* Merge code from [ddland](https://github.com/ddland)
* Merge code from [nixxo](https://github.com/nixxo)
* Merge code from [ssaqua](https://github.com/ssaqua)
* Merge code from [zubearc](https://github.com/zubearc)
* mkvthumbnail by [MrDoritos](https://github.com/MrDoritos)
* myvideo_ge by [fonkap](https://github.com/fonkap)
* naver by [SeonjaeHyeon](https://github.com/SeonjaeHyeon)
* ondemandkorea by [julien-hadleyjack](https://github.com/julien-hadleyjack)
* rai-update by [iamleot](https://github.com/iamleot)
* RFC: youtube: Polymer UI and JSON endpoints for playlists by [wlritchi](https://github.com/wlritchi)
* rutv by [adrianheine](https://github.com/adrianheine)
* Sc extractor web auth by [blackjack4494](https://github.com/blackjack4494)
* Switch from binary search tree to Python sets by [jbruchon](https://github.com/jbruchon)
* tiktok by [skyme5](https://github.com/skyme5)
* tvnow by [TinyToweringTree](https://github.com/TinyToweringTree)
* twitch-fix by [lel-amri](https://github.com/lel-amri)
* Twitter shortener by [blackjack4494](https://github.com/blackjack4494)
* Update README.md by [JensTimmerman](https://github.com/JensTimmerman)
* Update to reflect website changes. by [amigatomte](https://github.com/amigatomte)
* use webarchive to fix a dead link in README by [B0pol](https://github.com/B0pol)
* Viki the second by [blackjack4494](https://github.com/blackjack4494)
* wdr-subtitles by [mrtnmtth](https://github.com/mrtnmtth)
* Webpfix by [alexmerkel](https://github.com/alexmerkel)
* Youtube live chat by [siikamiika](https://github.com/siikamiika)

@@ -28,6 +28,7 @@ You can also find lists of all [contributors of yt-dlp](CONTRIBUTORS) and [autho
[![gh-sponsor](https://img.shields.io/badge/_-Sponsor-red.svg?logo=githubsponsors&labelColor=555555&style=for-the-badge)](https://github.com/sponsors/coletdjnz)
* YouTube improvements including: age-gate bypass, private playlists, multiple-clients (to avoid throttling) and a lot of under-the-hood improvements
+* Added support for downloading YoutubeWebArchive videos

@@ -1,4 +1,4 @@
-all: yt-dlp doc pypi-files
+all: lazy-extractors yt-dlp doc pypi-files
clean: clean-test clean-dist clean-cache
completions: completion-bash completion-fish completion-zsh
doc: README.md CONTRIBUTING.md issuetemplates supportedsites
@@ -13,9 +13,13 @@ pypi-files: AUTHORS Changelog.md LICENSE README.md README.txt supportedsites com
.PHONY: all clean install test tar pypi-files completions ot offlinetest codetest supportedsites
clean-test:
-    rm -rf *.dump *.part* *.ytdl *.info.json *.mp4 *.m4a *.flv *.mp3 *.avi *.mkv *.webm *.3gp *.wav *.ape *.swf *.jpg *.png *.frag *.frag.urls *.frag.aria2 test/testdata/player-*.js *.opus *.webp *.ttml *.vtt *.jpeg
+    rm -rf test/testdata/player-*.js tmp/ *.annotations.xml *.aria2 *.description *.dump *.frag \
+        *.frag.aria2 *.frag.urls *.info.json *.live_chat.json *.part* *.unknown_video *.ytdl \
+        *.3gp *.ape *.avi *.desktop *.flac *.flv *.jpeg *.jpg *.m4a *.m4v *.mhtml *.mkv *.mov *.mp3 \
+        *.mp4 *.ogg *.opus *.png *.sbv *.srt *.swf *.swp *.ttml *.url *.vtt *.wav *.webloc *.webm *.webp
clean-dist:
-    rm -rf yt-dlp.1.temp.md yt-dlp.1 README.txt MANIFEST build/ dist/ .coverage cover/ yt-dlp.tar.gz completions/ yt_dlp/extractor/lazy_extractors.py *.spec CONTRIBUTING.md.tmp yt-dlp yt-dlp.exe yt_dlp.egg-info/ AUTHORS .mailmap
+    rm -rf yt-dlp.1.temp.md yt-dlp.1 README.txt MANIFEST build/ dist/ .coverage cover/ yt-dlp.tar.gz completions/ \
+        yt_dlp/extractor/lazy_extractors.py *.spec CONTRIBUTING.md.tmp yt-dlp yt-dlp.exe yt_dlp.egg-info/ AUTHORS .mailmap
clean-cache:
    find . -name "*.pyc" -o -name "*.class" -delete
@@ -29,7 +33,6 @@ DESTDIR ?= .
BINDIR ?= $(PREFIX)/bin
MANDIR ?= $(PREFIX)/man
SHAREDIR ?= $(PREFIX)/share
-# make_supportedsites.py doesnot work correctly in python2
PYTHON ?= /usr/bin/env python3
# set SYSCONFDIR to /etc if PREFIX=/usr or PREFIX=/usr/local
@@ -38,9 +41,9 @@ SYSCONFDIR = $(shell if [ $(PREFIX) = /usr -o $(PREFIX) = /usr/local ]; then ech
# set markdown input format to "markdown-smart" for pandoc version 2 and to "markdown" for pandoc prior to version 2
MARKDOWN = $(shell if [ `pandoc -v | head -n1 | cut -d" " -f2 | head -c1` = "2" ]; then echo markdown-smart; else echo markdown; fi)
-install: yt-dlp yt-dlp.1 completions
+install: lazy-extractors yt-dlp yt-dlp.1 completions
-    install -Dm755 yt-dlp $(DESTDIR)$(BINDIR)
+    install -Dm755 yt-dlp $(DESTDIR)$(BINDIR)/yt-dlp
-    install -Dm644 yt-dlp.1 $(DESTDIR)$(MANDIR)/man1
+    install -Dm644 yt-dlp.1 $(DESTDIR)$(MANDIR)/man1/yt-dlp.1
    install -Dm644 completions/bash/yt-dlp $(DESTDIR)$(SHAREDIR)/bash-completion/completions/yt-dlp
    install -Dm644 completions/zsh/_yt-dlp $(DESTDIR)$(SHAREDIR)/zsh/site-functions/_yt-dlp
    install -Dm644 completions/fish/yt-dlp.fish $(DESTDIR)$(SHAREDIR)/fish/vendor_completions.d/yt-dlp.fish
@@ -76,12 +79,13 @@ README.md: yt_dlp/*.py yt_dlp/*/*.py
CONTRIBUTING.md: README.md
    $(PYTHON) devscripts/make_contributing.py README.md CONTRIBUTING.md
-issuetemplates: devscripts/make_issue_template.py .github/ISSUE_TEMPLATE_tmpl/1_broken_site.md .github/ISSUE_TEMPLATE_tmpl/2_site_support_request.md .github/ISSUE_TEMPLATE_tmpl/3_site_feature_request.md .github/ISSUE_TEMPLATE_tmpl/4_bug_report.md .github/ISSUE_TEMPLATE_tmpl/5_feature_request.md yt_dlp/version.py
+issuetemplates: devscripts/make_issue_template.py .github/ISSUE_TEMPLATE_tmpl/1_broken_site.yml .github/ISSUE_TEMPLATE_tmpl/2_site_support_request.yml .github/ISSUE_TEMPLATE_tmpl/3_site_feature_request.yml .github/ISSUE_TEMPLATE_tmpl/4_bug_report.yml .github/ISSUE_TEMPLATE_tmpl/5_feature_request.yml yt_dlp/version.py
-    $(PYTHON) devscripts/make_issue_template.py .github/ISSUE_TEMPLATE_tmpl/1_broken_site.md .github/ISSUE_TEMPLATE/1_broken_site.md
-    $(PYTHON) devscripts/make_issue_template.py .github/ISSUE_TEMPLATE_tmpl/2_site_support_request.md .github/ISSUE_TEMPLATE/2_site_support_request.md
-    $(PYTHON) devscripts/make_issue_template.py .github/ISSUE_TEMPLATE_tmpl/3_site_feature_request.md .github/ISSUE_TEMPLATE/3_site_feature_request.md
-    $(PYTHON) devscripts/make_issue_template.py .github/ISSUE_TEMPLATE_tmpl/4_bug_report.md .github/ISSUE_TEMPLATE/4_bug_report.md
-    $(PYTHON) devscripts/make_issue_template.py .github/ISSUE_TEMPLATE_tmpl/5_feature_request.md .github/ISSUE_TEMPLATE/5_feature_request.md
+    $(PYTHON) devscripts/make_issue_template.py .github/ISSUE_TEMPLATE_tmpl/1_broken_site.yml .github/ISSUE_TEMPLATE/1_broken_site.yml
+    $(PYTHON) devscripts/make_issue_template.py .github/ISSUE_TEMPLATE_tmpl/2_site_support_request.yml .github/ISSUE_TEMPLATE/2_site_support_request.yml
+    $(PYTHON) devscripts/make_issue_template.py .github/ISSUE_TEMPLATE_tmpl/3_site_feature_request.yml .github/ISSUE_TEMPLATE/3_site_feature_request.yml
+    $(PYTHON) devscripts/make_issue_template.py .github/ISSUE_TEMPLATE_tmpl/4_bug_report.yml .github/ISSUE_TEMPLATE/4_bug_report.yml
+    $(PYTHON) devscripts/make_issue_template.py .github/ISSUE_TEMPLATE_tmpl/5_feature_request.yml .github/ISSUE_TEMPLATE/5_feature_request.yml
+    $(PYTHON) devscripts/make_issue_template.py .github/ISSUE_TEMPLATE_tmpl/6_question.yml .github/ISSUE_TEMPLATE/6_question.yml
supportedsites:
    $(PYTHON) devscripts/make_supportedsites.py supportedsites.md
@@ -110,7 +114,7 @@ _EXTRACTOR_FILES = $(shell find yt_dlp/extractor -iname '*.py' -and -not -iname
yt_dlp/extractor/lazy_extractors.py: devscripts/make_lazy_extractors.py devscripts/lazy_load_template.py $(_EXTRACTOR_FILES)
    $(PYTHON) devscripts/make_lazy_extractors.py $@
-yt-dlp.tar.gz: README.md yt-dlp.1 completions Changelog.md AUTHORS
+yt-dlp.tar.gz: all
    @tar -czf $(DESTDIR)/yt-dlp.tar.gz --transform "s|^|yt-dlp/|" --owner 0 --group 0 \
        --exclude '*.DS_Store' \
        --exclude '*.kate-swp' \
@@ -119,12 +123,12 @@ yt-dlp.tar.gz: README.md yt-dlp.1 completions Changelog.md AUTHORS
        --exclude '*~' \
        --exclude '__pycache__' \
        --exclude '.git' \
+        --exclude 'docs/_build' \
        -- \
-        devscripts test \
-        Changelog.md AUTHORS LICENSE README.md supportedsites.md \
-        Makefile MANIFEST.in yt-dlp.1 completions \
-        setup.py setup.cfg yt-dlp
+        README.md supportedsites.md Changelog.md LICENSE \
+        CONTRIBUTING.md Collaborators.md CONTRIBUTORS AUTHORS \
+        Makefile MANIFEST.in yt-dlp.1 README.txt completions \
+        setup.py setup.cfg yt-dlp yt_dlp requirements.txt \
+        devscripts test tox.ini pytest.ini
AUTHORS: .mailmap
    git shortlog -s -n | cut -f2 | sort > AUTHORS

README.md: 886 changes (diff too large to display)

@@ -1,9 +1,15 @@
# coding: utf-8
import re
+from ..utils import bug_reports_message, write_string


class LazyLoadMetaClass(type):
    def __getattr__(cls, name):
+        if '_real_class' not in cls.__dict__:
+            write_string(
+                f'WARNING: Falling back to normal extractor since lazy extractor '
+                f'{cls.__name__} does not have attribute {name}{bug_reports_message()}')
        return getattr(cls._get_real_class(), name)
@@ -13,10 +19,10 @@ class LazyLoadExtractor(metaclass=LazyLoadMetaClass):
    @classmethod
    def _get_real_class(cls):
-        if '__real_class' not in cls.__dict__:
+        if '_real_class' not in cls.__dict__:
            mod = __import__(cls._module, fromlist=(cls.__name__,))
-            cls.__real_class = getattr(mod, cls.__name__)
+            cls._real_class = getattr(mod, cls.__name__)
-        return cls.__real_class
+        return cls._real_class

    def __new__(cls, *args, **kwargs):
        real_cls = cls._get_real_class()
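
The template above is what `make_lazy_extractors.py` renders into the generated `lazy_extractors.py`: a placeholder class that only imports the real extractor on first use. A minimal, self-contained sketch of the same pattern (toy module/class names, not the actual yt-dlp code):

```python
import importlib


class LazyMeta(type):
    def __getattr__(cls, name):
        # Attribute missing on the placeholder: load the real class and delegate
        return getattr(cls._get_real_class(), name)


class LazyThing(metaclass=LazyMeta):
    _module = 'json'        # stand-in for the real implementation's module
    _name = 'JSONDecoder'   # stand-in for the real class name

    @classmethod
    def _get_real_class(cls):
        # Import lazily, exactly once, and cache on the class
        if '_real_class' not in cls.__dict__:
            cls._real_class = getattr(importlib.import_module(cls._module), cls._name)
        return cls._real_class

    def __new__(cls, *args, **kwargs):
        # Instantiating the placeholder yields an instance of the real class
        return cls._get_real_class()(*args, **kwargs)


print(LazyThing().decode('{"a": 1}'))  # the real json.JSONDecoder does the work
```

Note that the cache key is the plain string `'_real_class'`: the earlier double-underscore attribute was name-mangled by Python inside the class body, so the `in cls.__dict__` test never matched and the import was redone on every access, which is what the rename in the hunk above fixes.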

@@ -1,33 +1,34 @@
#!/usr/bin/env python3
from __future__ import unicode_literals

-# import io
+import io
import optparse
-# import re
+import re


def main():
-    return  # This is unused in yt-dlp
    parser = optparse.OptionParser(usage='%prog INFILE OUTFILE')
    options, args = parser.parse_args()
    if len(args) != 2:
        parser.error('Expected an input and an output filename')
-    infile, outfile = args
-    """
+
+    infile, outfile = args
+
    with io.open(infile, encoding='utf-8') as inf:
        readme = inf.read()
+
-    bug_text = re.search(
-    #     r'(?s)#\s*BUGS\s*[^\n]*\s*(.*?)#\s*COPYRIGHT', readme).group(1)
-    # dev_text = re.search(
-    #     r'(?s)(#\s*DEVELOPER INSTRUCTIONS.*?)#\s*EMBEDDING yt-dlp',
-    """
-    readme).group(1)
+    bug_text = re.search(
+        r'(?s)#\s*BUGS\s*[^\n]*\s*(.*?)#\s*COPYRIGHT', readme).group(1)
+    dev_text = re.search(
+        r'(?s)(#\s*DEVELOPER INSTRUCTIONS.*?)#\s*EMBEDDING yt-dlp', readme).group(1)

    out = bug_text + dev_text

    with io.open(outfile, 'w', encoding='utf-8') as outf:
        outf.write(out)
-    """


if __name__ == '__main__':
    main()

@@ -7,11 +7,9 @@ import os
from os.path import dirname as dirn
import sys

-print('WARNING: Lazy loading extractors is an experimental feature that may not always work', file=sys.stderr)
sys.path.insert(0, dirn(dirn((os.path.abspath(__file__)))))

-lazy_extractors_filename = sys.argv[1]
+lazy_extractors_filename = sys.argv[1] if len(sys.argv) > 1 else 'yt_dlp/extractor/lazy_extractors.py'
if os.path.exists(lazy_extractors_filename):
    os.remove(lazy_extractors_filename)
@@ -41,12 +39,6 @@ class {name}({bases}):
    _module = '{module}'
'''

-make_valid_template = '''
-    @classmethod
-    def _make_valid_url(cls):
-        return {valid_url!r}
-'''

def get_base_name(base):
    if base is InfoExtractor:
@@ -63,15 +55,14 @@ def build_lazy_ie(ie, name):
        bases=', '.join(map(get_base_name, ie.__bases__)),
        module=ie.__module__)
    valid_url = getattr(ie, '_VALID_URL', None)
+    if not valid_url and hasattr(ie, '_make_valid_url'):
+        valid_url = ie._make_valid_url()
    if valid_url:
        s += f'    _VALID_URL = {valid_url!r}\n'
    if not ie._WORKING:
        s += '    _WORKING = False\n'
    if ie.suitable.__func__ is not InfoExtractor.suitable.__func__:
        s += f'\n{getsource(ie.suitable)}'
-    if hasattr(ie, '_make_valid_url'):
-        # search extractors
-        s += make_valid_template.format(valid_url=ie._make_valid_url())
    return s

@@ -29,6 +29,9 @@ def main():
            continue
        if ie_desc is not None:
            ie_md += ': {0}'.format(ie.IE_DESC)
+        search_key = getattr(ie, 'SEARCH_KEY', None)
+        if search_key is not None:
+            ie_md += f'; "{ie.SEARCH_KEY}:" prefix'
        if not ie.working():
            ie_md += ' (Currently broken)'
        yield ie_md

@@ -13,12 +13,14 @@ PREFIX = r'''%yt-dlp(1)

# NAME

-youtube\-dl \- download videos from youtube.com or other video platforms
+yt\-dlp \- A youtube-dl fork with additional features and patches

# SYNOPSIS

**yt-dlp** \[OPTIONS\] URL [URL...]

+# DESCRIPTION
+
'''
@@ -33,47 +35,63 @@ def main():
    with io.open(README_FILE, encoding='utf-8') as f:
        readme = f.read()

-    readme = re.sub(r'(?s)^.*?(?=# DESCRIPTION)', '', readme)
-    readme = re.sub(r'\s+yt-dlp \[OPTIONS\] URL \[URL\.\.\.\]', '', readme)
-    readme = PREFIX + readme
+    readme = filter_excluded_sections(readme)
+    readme = move_sections(readme)
    readme = filter_options(readme)

    with io.open(outfile, 'w', encoding='utf-8') as outf:
-        outf.write(readme)
+        outf.write(PREFIX + readme)


+def filter_excluded_sections(readme):
+    EXCLUDED_SECTION_BEGIN_STRING = re.escape('<!-- MANPAGE: BEGIN EXCLUDED SECTION -->')
+    EXCLUDED_SECTION_END_STRING = re.escape('<!-- MANPAGE: END EXCLUDED SECTION -->')
+    return re.sub(
+        rf'(?s){EXCLUDED_SECTION_BEGIN_STRING}.+?{EXCLUDED_SECTION_END_STRING}\n',
+        '', readme)
+
+
+def move_sections(readme):
+    MOVE_TAG_TEMPLATE = '<!-- MANPAGE: MOVE "%s" SECTION HERE -->'
+    sections = re.findall(r'(?m)^%s$' % (
+        re.escape(MOVE_TAG_TEMPLATE).replace(r'\%', '%') % '(.+)'), readme)
+
+    for section_name in sections:
+        move_tag = MOVE_TAG_TEMPLATE % section_name
+        if readme.count(move_tag) > 1:
+            raise Exception(f'There is more than one occurrence of "{move_tag}". This is unexpected')
+
+        sections = re.findall(rf'(?sm)(^# {re.escape(section_name)}.+?)(?=^# )', readme)
+        if len(sections) < 1:
+            raise Exception(f'The section {section_name} does not exist')
+        elif len(sections) > 1:
+            raise Exception(f'There are multiple occurrences of section {section_name}, this is unhandled')
+        readme = readme.replace(sections[0], '', 1).replace(move_tag, sections[0], 1)
+    return readme
+
+
def filter_options(readme):
-    ret = ''
-    in_options = False
-    for line in readme.split('\n'):
-        if line.startswith('# '):
-            if line[2:].startswith('OPTIONS'):
-                in_options = True
-            else:
-                in_options = False
-
-        if in_options:
-            if line.lstrip().startswith('-'):
-                split = re.split(r'\s{2,}', line.lstrip())
-                # Description string may start with `-` as well. If there is
-                # only one piece then it's a description bit not an option.
-                if len(split) > 1:
-                    option, description = split
-                    split_option = option.split(' ')
-
-                    if not split_option[-1].startswith('-'):  # metavar
-                        option = ' '.join(split_option[:-1] + ['*%s*' % split_option[-1]])
-
-                    # Pandoc's definition_lists. See http://pandoc.org/README.html
-                    # for more information.
-                    ret += '\n%s\n: %s\n' % (option, description)
-                    continue
-            ret += line.lstrip() + '\n'
-        else:
-            ret += line + '\n'
-
-    return ret
+    section = re.search(r'(?sm)^# USAGE AND OPTIONS\n.+?(?=^# )', readme).group(0)
+    options = '# OPTIONS\n'
+    for line in section.split('\n')[1:]:
+        if line.lstrip().startswith('-'):
+            split = re.split(r'\s{2,}', line.lstrip())
+            # Description string may start with `-` as well. If there is
+            # only one piece then it's a description bit not an option.
+            if len(split) > 1:
+                option, description = split
+                split_option = option.split(' ')
+
+                if not split_option[-1].startswith('-'):  # metavar
+                    option = ' '.join(split_option[:-1] + [f'*{split_option[-1]}*'])
+
+                # Pandoc's definition_lists. See http://pandoc.org/README.html
+                options += f'\n{option}\n: {description}\n'
+                continue
+        options += line.lstrip() + '\n'
+
+    return readme.replace(section, options, 1)


if __name__ == '__main__':

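To make the rewrite concrete, here is what `filter_options` produces for a single, made-up option line: the two-column option text becomes a Pandoc definition list, with the metavar italicised (sample input, not taken from the real README):

```python
import re

line = '-f, --format FORMAT             Video format code'
option, description = re.split(r'\s{2,}', line.lstrip())
split_option = option.split(' ')
if not split_option[-1].startswith('-'):  # trailing piece is a metavar
    option = ' '.join(split_option[:-1] + [f'*{split_option[-1]}*'])
print(f'{option}\n: {description}')
# -f, --format *FORMAT*
# : Video format code
```
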
@@ -3,11 +3,11 @@
cd /d %~dp0..

if ["%~1"]==[""] (
-    set "test_set="
+    set "test_set="test""
) else if ["%~1"]==["core"] (
-    set "test_set=-k "not download""
+    set "test_set="-m not download""
) else if ["%~1"]==["download"] (
-    set "test_set=-k download"
+    set "test_set="-m download""
) else (
    echo.Invalid test type "%~1". Use "core" ^| "download"
    exit /b 1

@@ -3,12 +3,12 @@
if [ -z $1 ]; then
    test_set='test'
elif [ $1 = 'core' ]; then
-    test_set='not download'
+    test_set="-m not download"
elif [ $1 = 'download' ]; then
-    test_set='download'
+    test_set="-m download"
else
    echo 'Invalid test type "'$1'". Use "core" | "download"'
    exit 1
fi

-python3 -m pytest -k "$test_set"
+python3 -m pytest "$test_set"
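
Both scripts now select tests with `-m` (pytest markers) instead of `-k` (test-name matching). For reference, a test opts into the `download` marker roughly like this (illustrative test, not from the repo):

```python
import pytest


@pytest.mark.download
def test_download_smoke():
    # selected by `pytest -m download`, skipped by `pytest -m "not download"`
    assert True
```
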

devscripts/update-formulae.py (new file)

@@ -0,0 +1,37 @@
#!/usr/bin/env python3
from __future__ import unicode_literals
import json
import os
import re
import sys
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from yt_dlp.compat import compat_urllib_request
# usage: python3 ./devscripts/update-formulae.py <path-to-formulae-rb> <version>
# version can be either 0-aligned (yt-dlp version) or normalized (PyPI version)
filename, version = sys.argv[1:]
normalized_version = '.'.join(str(int(x)) for x in version.split('.'))
pypi_release = json.loads(compat_urllib_request.urlopen(
'https://pypi.org/pypi/yt-dlp/%s/json' % normalized_version
).read().decode('utf-8'))
tarball_file = next(x for x in pypi_release['urls'] if x['filename'].endswith('.tar.gz'))
sha256sum = tarball_file['digests']['sha256']
url = tarball_file['url']
with open(filename, 'r') as r:
formulae_text = r.read()
formulae_text = re.sub(r'sha256 "[0-9a-f]*?"', 'sha256 "%s"' % sha256sum, formulae_text)
formulae_text = re.sub(r'url "[^"]*?"', 'url "%s"' % url, formulae_text)
with open(filename, 'w') as w:
w.write(formulae_text)
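
The "0-aligned vs normalized" remark in the usage comment is the only subtle step: yt-dlp tags may zero-pad the date components, while PyPI strips leading zeros, which is what the `'.'.join(...)` line handles. Concretely (hypothetical version string):

```python
version = '2021.12.01'  # 0-aligned yt-dlp version
normalized = '.'.join(str(int(x)) for x in version.split('.'))
print(normalized)  # -> 2021.12.1 (the form PyPI uses in its URLs)
```
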

@@ -1,33 +1,42 @@
#!/usr/bin/env python3
-from __future__ import unicode_literals

from datetime import datetime
-# import urllib.request
+import sys
+import subprocess

-# response = urllib.request.urlopen('https://blackjack4494.github.io/youtube-dlc/update/LATEST_VERSION')
-# old_version = response.read().decode('utf-8')
-exec(compile(open('yt_dlp/version.py').read(), 'yt_dlp/version.py', 'exec'))
+with open('yt_dlp/version.py', 'rt') as f:
+    exec(compile(f.read(), 'yt_dlp/version.py', 'exec'))
old_version = locals()['__version__']

-old_version_list = old_version.split(".", 4)
+old_version_list = old_version.split('.')
old_ver = '.'.join(old_version_list[:3])
old_rev = old_version_list[3] if len(old_version_list) > 3 else ''

ver = datetime.utcnow().strftime("%Y.%m.%d")
-rev = str(int(old_rev or 0) + 1) if old_ver == ver else ''
+
+rev = (sys.argv[1:] or [''])[0]  # Use first argument, if present as revision number
+if not rev:
+    rev = str(int(old_rev or 0) + 1) if old_ver == ver else ''

VERSION = '.'.join((ver, rev)) if rev else ver
-# VERSION_LIST = [(int(v) for v in ver.split(".") + [rev or 0])]

+try:
+    sp = subprocess.Popen(['git', 'rev-parse', '--short', 'HEAD'], stdout=subprocess.PIPE)
+    GIT_HEAD = sp.communicate()[0].decode().strip() or None
+except Exception:
+    GIT_HEAD = None
+
+VERSION_FILE = f'''\
+# Autogenerated by devscripts/update-version.py
+__version__ = {VERSION!r}
+RELEASE_GIT_HEAD = {GIT_HEAD!r}
+'''
+
+with open('yt_dlp/version.py', 'wt') as f:
+    f.write(VERSION_FILE)

print('::set-output name=ytdlp_version::' + VERSION)
+print(f'\nVersion = {VERSION}, Git HEAD = {GIT_HEAD}')

-file_version_py = open('yt_dlp/version.py', 'rt')
-data = file_version_py.read()
-data = data.replace(old_version, VERSION)
-file_version_py.close()
-file_version_py = open('yt_dlp/version.py', 'wt')
-file_version_py.write(data)
-file_version_py.close()
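
The revision logic above, in isolation: versions are date-based, and a second build on the same day gets an incrementing numeric suffix (sample values, not real releases):

```python
old_ver, old_rev = '2021.12.27', ''   # previously released version
ver = '2021.12.27'                    # today's date-based version

rev = str(int(old_rev or 0) + 1) if old_ver == ver else ''
VERSION = '.'.join((ver, rev)) if rev else ver
print(VERSION)  # -> 2021.12.27.1; a build on a later day would print just the date
```
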

docs/Contributing.md (new file, 5 lines)

@@ -0,0 +1,5 @@
---
orphan: true
---
```{include} ../Contributing.md
```

pyinst.py (189 changes)

@@ -1,82 +1,135 @@
#!/usr/bin/env python3
# coding: utf-8
-from __future__ import unicode_literals

-import sys
-# import os
+import os
import platform
+import sys

from PyInstaller.utils.hooks import collect_submodules
-from PyInstaller.utils.win32.versioninfo import (
-    VarStruct, VarFileInfo, StringStruct, StringTable,
-    StringFileInfo, FixedFileInfo, VSVersionInfo, SetVersion,
-)
-import PyInstaller.__main__

-arch = sys.argv[1] if len(sys.argv) > 1 else platform.architecture()[0][:2]
-assert arch in ('32', '64')
-print('Building %sbit version' % arch)
-_x86 = '_x86' if arch == '32' else ''
-
-FILE_DESCRIPTION = 'yt-dlp%s' % (' (32 Bit)' if _x86 else '')
-
-# root_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), '..'))
-# print('Changing working directory to %s' % root_dir)
-# os.chdir(root_dir)
-
-exec(compile(open('yt_dlp/version.py').read(), 'yt_dlp/version.py', 'exec'))
-VERSION = locals()['__version__']
-
-VERSION_LIST = VERSION.split('.')
-VERSION_LIST = list(map(int, VERSION_LIST)) + [0] * (4 - len(VERSION_LIST))
-
-print('Version: %s%s' % (VERSION, _x86))
-print('Remember to update the version using devscipts\\update-version.py')
-
-VERSION_FILE = VSVersionInfo(
-    ffi=FixedFileInfo(
-        filevers=VERSION_LIST,
-        prodvers=VERSION_LIST,
-        mask=0x3F,
-        flags=0x0,
-        OS=0x4,
-        fileType=0x1,
-        subtype=0x0,
-        date=(0, 0),
-    ),
-    kids=[
-        StringFileInfo([
-            StringTable(
-                '040904B0', [
-                    StringStruct('Comments', 'yt-dlp%s Command Line Interface.' % _x86),
-                    StringStruct('CompanyName', 'https://github.com/yt-dlp'),
-                    StringStruct('FileDescription', FILE_DESCRIPTION),
-                    StringStruct('FileVersion', VERSION),
-                    StringStruct('InternalName', 'yt-dlp%s' % _x86),
-                    StringStruct(
-                        'LegalCopyright',
-                        'pukkandan.ytdlp@gmail.com | UNLICENSE',
-                    ),
-                    StringStruct('OriginalFilename', 'yt-dlp%s.exe' % _x86),
-                    StringStruct('ProductName', 'yt-dlp%s' % _x86),
-                    StringStruct(
-                        'ProductVersion',
-                        '%s%s on Python %s' % (VERSION, _x86, platform.python_version())),
-                ])]),
-        VarFileInfo([VarStruct('Translation', [0, 1200])])
-    ]
-)
-
-dependancies = ['Crypto', 'mutagen'] + collect_submodules('websockets')
-excluded_modules = ['test', 'ytdlp_plugins', 'youtube-dl', 'youtube-dlc']
-
-PyInstaller.__main__.run([
-    '--name=yt-dlp%s' % _x86,
-    '--onefile',
-    '--icon=devscripts/logo.ico',
-    *[f'--exclude-module={module}' for module in excluded_modules],
-    *[f'--hidden-import={module}' for module in dependancies],
-    '--upx-exclude=vcruntime140.dll',
-    'yt_dlp/__main__.py',
-])
-SetVersion('dist/yt-dlp%s.exe' % _x86, VERSION_FILE)
+
+OS_NAME = platform.system()
+if OS_NAME == 'Windows':
+    from PyInstaller.utils.win32.versioninfo import (
+        VarStruct, VarFileInfo, StringStruct, StringTable,
+        StringFileInfo, FixedFileInfo, VSVersionInfo, SetVersion,
+    )
+elif OS_NAME == 'Darwin':
+    pass
+else:
+    raise Exception('{OS_NAME} is not supported')
+
+ARCH = platform.architecture()[0][:2]
+
+
+def main():
+    opts = parse_options()
+    version = read_version()
+
+    suffix = '_macos' if OS_NAME == 'Darwin' else '_x86' if ARCH == '32' else ''
+    final_file = 'dist/%syt-dlp%s%s' % (
+        'yt-dlp/' if '--onedir' in opts else '', suffix, '.exe' if OS_NAME == 'Windows' else '')
+
+    print(f'Building yt-dlp v{version} {ARCH}bit for {OS_NAME} with options {opts}')
+    print('Remember to update the version using "devscripts/update-version.py"')
+    if not os.path.isfile('yt_dlp/extractor/lazy_extractors.py'):
+        print('WARNING: Building without lazy_extractors. Run '
+              '"devscripts/make_lazy_extractors.py" to build lazy extractors', file=sys.stderr)
+    print(f'Destination: {final_file}\n')
+
+    opts = [
+        f'--name=yt-dlp{suffix}',
+        '--icon=devscripts/logo.ico',
+        '--upx-exclude=vcruntime140.dll',
+        '--noconfirm',
+        *dependency_options(),
+        *opts,
+        'yt_dlp/__main__.py',
+    ]
+    print(f'Running PyInstaller with {opts}')
+
+    import PyInstaller.__main__
+    PyInstaller.__main__.run(opts)
+
+    set_version_info(final_file, version)
+
+
+def parse_options():
+    # Compatability with older arguments
+    opts = sys.argv[1:]
+    if opts[0:1] in (['32'], ['64']):
+        if ARCH != opts[0]:
+            raise Exception(f'{opts[0]}bit executable cannot be built on a {ARCH}bit system')
+        opts = opts[1:]
+    return opts or ['--onefile']
+
+
+def read_version():
+    exec(compile(open('yt_dlp/version.py').read(), 'yt_dlp/version.py', 'exec'))
+    return locals()['__version__']
+
+
+def version_to_list(version):
+    version_list = version.split('.')
+    return list(map(int, version_list)) + [0] * (4 - len(version_list))
+
+
+def dependency_options():
+    dependencies = [pycryptodome_module(), 'mutagen'] + collect_submodules('websockets')
+    excluded_modules = ['test', 'ytdlp_plugins', 'youtube-dl', 'youtube-dlc']
+
+    yield from (f'--hidden-import={module}' for module in dependencies)
+    yield from (f'--exclude-module={module}' for module in excluded_modules)
+
+
+def pycryptodome_module():
+    try:
+        import Cryptodome  # noqa: F401
+    except ImportError:
+        try:
+            import Crypto  # noqa: F401
+            print('WARNING: Using Crypto since Cryptodome is not available. '
+                  'Install with: pip install pycryptodomex', file=sys.stderr)
+            return 'Crypto'
+        except ImportError:
+            pass
+    return 'Cryptodome'
+
+
+def set_version_info(exe, version):
+    if OS_NAME == 'Windows':
+        windows_set_version(exe, version)
+
+
+def windows_set_version(exe, version):
+    version_list = version_to_list(version)
+    suffix = '_x86' if ARCH == '32' else ''
+    SetVersion(exe, VSVersionInfo(
+        ffi=FixedFileInfo(
+            filevers=version_list,
+            prodvers=version_list,
+            mask=0x3F,
+            flags=0x0,
+            OS=0x4,
+            fileType=0x1,
+            subtype=0x0,
+            date=(0, 0),
+        ),
+        kids=[
+            StringFileInfo([StringTable('040904B0', [
+                StringStruct('Comments', 'yt-dlp%s Command Line Interface.' % suffix),
+                StringStruct('CompanyName', 'https://github.com/yt-dlp'),
+                StringStruct('FileDescription', 'yt-dlp%s' % (' (32 Bit)' if ARCH == '32' else '')),
+                StringStruct('FileVersion', version),
+                StringStruct('InternalName', f'yt-dlp{suffix}'),
+                StringStruct('LegalCopyright', 'pukkandan.ytdlp@gmail.com | UNLICENSE'),
+                StringStruct('OriginalFilename', f'yt-dlp{suffix}.exe'),
+                StringStruct('ProductName', f'yt-dlp{suffix}'),
+                StringStruct(
+                    'ProductVersion', f'{version}{suffix} on Python {platform.python_version()}'),
+            ])]), VarFileInfo([VarStruct('Translation', [0, 1200])])
+        ]
+    ))


+if __name__ == '__main__':
+    main()
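
A small re-implementation of `parse_options()` to show how the legacy `32`/`64` positional argument interacts with the pass-through PyInstaller options (`ARCH` is hardcoded as a parameter here; the real script detects it):

```python
def parse_options(argv, arch='64'):
    opts = argv[1:]
    if opts[0:1] in (['32'], ['64']):  # legacy positional architecture argument
        if arch != opts[0]:
            raise Exception(f'{opts[0]}bit executable cannot be built on a {arch}bit system')
        opts = opts[1:]
    return opts or ['--onefile']


print(parse_options(['pyinst.py']))                    # ['--onefile']
print(parse_options(['pyinst.py', '64', '--onedir']))  # ['--onedir']
```
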

@@ -1,3 +1,3 @@
mutagen
-pycryptodome
+pycryptodomex
websockets
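
The swap matters because `pycryptodome` installs into the `Crypto` namespace while `pycryptodomex` installs the separate `Cryptodome` package, which is why `pyinst.py` above probes for `Cryptodome` first:

```python
# With pycryptodomex installed:
from Cryptodome.Cipher import AES  # the namespace pyinst.py prefers

# With plain pycryptodome, the same module lives under Crypto:
# from Crypto.Cipher import AES
```
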

@@ -1,52 +1,86 @@
#!/usr/bin/env python3
# coding: utf-8
-from setuptools import setup, Command, find_packages
import os.path
import warnings
import sys
-from distutils.spawn import spawn
+
+try:
+    from setuptools import setup, Command, find_packages
+    setuptools_available = True
+except ImportError:
+    from distutils.core import setup, Command
+    setuptools_available = False
+from distutils.spawn import spawn

# Get the version from yt_dlp/version.py without importing the package
exec(compile(open('yt_dlp/version.py').read(), 'yt_dlp/version.py', 'exec'))

-DESCRIPTION = 'Command-line program to download videos from YouTube.com and many other other video platforms.'
+DESCRIPTION = 'A youtube-dl fork with additional features and patches'

LONG_DESCRIPTION = '\n\n'.join((
    'Official repository: <https://github.com/yt-dlp/yt-dlp>',
    '**PS**: Some links in this document will not work since this is a copy of the README.md from Github',
    open('README.md', 'r', encoding='utf-8').read()))

-REQUIREMENTS = ['mutagen', 'pycryptodome', 'websockets']
+REQUIREMENTS = ['mutagen', 'pycryptodomex', 'websockets']

if sys.argv[1:2] == ['py2exe']:
-    raise NotImplementedError('py2exe is not currently supported; instead, use "pyinst.py" to build with pyinstaller')
+    import py2exe
+    warnings.warn(
+        'py2exe builds do not support pycryptodomex and needs VC++14 to run. '
+        'The recommended way is to use "pyinst.py" to build using pyinstaller')
+    params = {
+        'console': [{
+            'script': './yt_dlp/__main__.py',
+            'dest_base': 'yt-dlp',
+            'version': __version__,
+            'description': DESCRIPTION,
+            'comments': LONG_DESCRIPTION.split('\n')[0],
+            'product_name': 'yt-dlp',
+            'product_version': __version__,
+        }],
+        'options': {
+            'py2exe': {
+                'bundle_files': 0,
+                'compressed': 1,
+                'optimize': 2,
+                'dist_dir': './dist',
+                'excludes': ['Crypto', 'Cryptodome'],  # py2exe cannot import Crypto
+                'dll_excludes': ['w9xpopen.exe', 'crypt32.dll'],
+            }
+        },
+        'zipfile': None
+    }
+else:
+    files_spec = [
+        ('share/bash-completion/completions', ['completions/bash/yt-dlp']),
+        ('share/zsh/site-functions', ['completions/zsh/_yt-dlp']),
+        ('share/fish/vendor_completions.d', ['completions/fish/yt-dlp.fish']),
+        ('share/doc/yt_dlp', ['README.txt']),
+        ('share/man/man1', ['yt-dlp.1'])
+    ]
+    root = os.path.dirname(os.path.abspath(__file__))
+    data_files = []
+    for dirname, files in files_spec:
+        resfiles = []
+        for fn in files:
+            if not os.path.exists(fn):
+                warnings.warn('Skipping file %s since it is not present. Try running `make pypi-files` first' % fn)
+            else:
+                resfiles.append(fn)
+        data_files.append((dirname, resfiles))

-files_spec = [
-    ('share/bash-completion/completions', ['completions/bash/yt-dlp']),
-    ('share/zsh/site-functions', ['completions/zsh/_yt-dlp']),
-    ('share/fish/vendor_completions.d', ['completions/fish/yt-dlp.fish']),
-    ('share/doc/yt_dlp', ['README.txt']),
-    ('share/man/man1', ['yt-dlp.1'])
-]
-root = os.path.dirname(os.path.abspath(__file__))
-data_files = []
-for dirname, files in files_spec:
-    resfiles = []
-    for fn in files:
-        if not os.path.exists(fn):
-            warnings.warn('Skipping file %s since it is not present. Try running `make pypi-files` first' % fn)
-        else:
-            resfiles.append(fn)
-    data_files.append((dirname, resfiles))
-
-params = {
-    'data_files': data_files,
-}
-params['entry_points'] = {'console_scripts': ['yt-dlp = yt_dlp:main']}
+    params = {
+        'data_files': data_files,
+    }
+
+    if setuptools_available:
+        params['entry_points'] = {'console_scripts': ['yt-dlp = yt_dlp:main']}
+    else:
+        params['scripts'] = ['yt-dlp']


class build_lazy_extractors(Command):
@@ -64,7 +98,11 @@ class build_lazy_extractors(Command):
        dry_run=self.dry_run)

-packages = find_packages(exclude=('youtube_dl', 'test', 'ytdlp_plugins'))
+if setuptools_available:
+    packages = find_packages(exclude=('youtube_dl', 'youtube_dlc', 'test', 'ytdlp_plugins'))
+else:
+    packages = ['yt_dlp', 'yt_dlp.downloader', 'yt_dlp.extractor', 'yt_dlp.postprocessor']

setup(
    name='yt-dlp',
@@ -81,7 +119,7 @@ setup(
    'Documentation': 'https://yt-dlp.readthedocs.io',
    'Source': 'https://github.com/yt-dlp/yt-dlp',
    'Tracker': 'https://github.com/yt-dlp/yt-dlp/issues',
-    #'Funding': 'https://donate.pypi.org',
+    'Funding': 'https://github.com/yt-dlp/yt-dlp/blob/master/Collaborators.md#collaborators',
},
classifiers=[
    'Topic :: Multimedia :: Video',

@ -1,4 +1,6 @@
# Supported sites # Supported sites
- **17live**
- **17live:clip**
- **1tv**: Первый канал - **1tv**: Первый канал
- **20min** - **20min**
- **220.ro** - **220.ro**
@ -19,6 +21,7 @@
- **9now.com.au** - **9now.com.au**
- **abc.net.au** - **abc.net.au**
- **abc.net.au:iview** - **abc.net.au:iview**
- **abc.net.au:iview:showseries**
- **abcnews** - **abcnews**
- **abcnews:video** - **abcnews:video**
- **abcotvs**: ABC Owned Television Stations - **abcotvs**: ABC Owned Television Stations
@ -46,10 +49,12 @@
- **Alura** - **Alura**
- **AluraCourse** - **AluraCourse**
- **Amara** - **Amara**
- **AmazonStore**
- **AMCNetworks** - **AMCNetworks**
- **AmericasTestKitchen** - **AmericasTestKitchen**
- **AmericasTestKitchenSeason** - **AmericasTestKitchenSeason**
- **anderetijden**: npo.nl, ntr.nl, omroepwnl.nl, zapp.nl and npo3.nl - **anderetijden**: npo.nl, ntr.nl, omroepwnl.nl, zapp.nl and npo3.nl
- **AnimalPlanet**
- **AnimeLab** - **AnimeLab**
- **AnimeLabShows** - **AnimeLabShows**
- **AnimeOnDemand** - **AnimeOnDemand**
@ -97,6 +102,7 @@
- **Bandcamp:weekly** - **Bandcamp:weekly**
- **BandcampMusic** - **BandcampMusic**
- **bangumi.bilibili.com**: BiliBili番剧 - **bangumi.bilibili.com**: BiliBili番剧
- **BannedVideo**
- **bbc**: BBC - **bbc**: BBC
- **bbc.co.uk**: BBC iPlayer - **bbc.co.uk**: BBC iPlayer
- **bbc.co.uk:article**: BBC articles - **bbc.co.uk:article**: BBC articles
@ -118,11 +124,14 @@
- **Bigflix** - **Bigflix**
- **Bild**: Bild.de - **Bild**: Bild.de
- **BiliBili** - **BiliBili**
- **Bilibili category extractor**
- **BilibiliAudio** - **BilibiliAudio**
- **BilibiliAudioAlbum** - **BilibiliAudioAlbum**
- **BilibiliChannel** - **BilibiliChannel**
- **BiliBiliPlayer** - **BiliBiliPlayer**
- **BiliBiliSearch**: Bilibili video search, "bilisearch" keyword - **BiliBiliSearch**: Bilibili video search; "bilisearch:" prefix
- **BiliIntl**
- **BiliIntlSeries**
- **BioBioChileTV** - **BioBioChileTV**
- **Biography** - **Biography**
- **BIQLE** - **BIQLE**
@ -133,6 +142,7 @@
- **BlackboardCollaborate** - **BlackboardCollaborate**
- **BleacherReport** - **BleacherReport**
- **BleacherReportCMS** - **BleacherReportCMS**
- **blogger.com**
- **Bloomberg** - **Bloomberg**
- **BokeCC** - **BokeCC**
- **BongaCams** - **BongaCams**
@ -142,6 +152,7 @@
- **BR**: Bayerischer Rundfunk - **BR**: Bayerischer Rundfunk
- **BravoTV** - **BravoTV**
- **Break** - **Break**
- **BreitBart**
- **brightcove:legacy** - **brightcove:legacy**
- **brightcove:new** - **brightcove:new**
- **BRMediathek**: Bayerischer Rundfunk Mediathek - **BRMediathek**: Bayerischer Rundfunk Mediathek
@ -150,11 +161,13 @@
- **BusinessInsider** - **BusinessInsider**
- **BuzzFeed** - **BuzzFeed**
- **BYUtv** - **BYUtv**
- **CableAV**
- **CAM4**
- **Camdemy** - **Camdemy**
- **CamdemyFolder** - **CamdemyFolder**
- **CamModels** - **CamModels**
- **CamTube**
- **CamWithHer** - **CamWithHer**
- **CanalAlpha**
- **canalc2.tv** - **canalc2.tv**
- **Canalplus**: mycanal.fr and piwiplus.fr - **Canalplus**: mycanal.fr and piwiplus.fr
- **Canvas** - **Canvas**
@ -163,10 +176,7 @@
- **CarambaTVPage** - **CarambaTVPage**
- **CartoonNetwork** - **CartoonNetwork**
- **cbc.ca** - **cbc.ca**
- **cbc.ca:olympics**
- **cbc.ca:player** - **cbc.ca:player**
- **cbc.ca:watch**
- **cbc.ca:watch:video**
- **CBS** - **CBS**
- **CBSInteractive** - **CBSInteractive**
- **CBSLocal** - **CBSLocal**
@ -180,11 +190,13 @@
- **CCTV**: 央视网 - **CCTV**: 央视网
- **CDA** - **CDA**
- **CeskaTelevize** - **CeskaTelevize**
- **CeskaTelevizePorady** - **CGTN**
- **channel9**: Channel 9 - **channel9**: Channel 9
- **CharlieRose** - **CharlieRose**
- **Chaturbate** - **Chaturbate**
- **Chilloutzone** - **Chilloutzone**
- **Chingari**
- **ChingariUser**
- **chirbit** - **chirbit**
- **chirbit:profile** - **chirbit:profile**
- **cielotv.it** - **cielotv.it**
@ -192,6 +204,7 @@
- **Cinemax** - **Cinemax**
- **CiscoLiveSearch** - **CiscoLiveSearch**
- **CiscoLiveSession** - **CiscoLiveSession**
- **ciscowebex**: Cisco Webex
- **CJSW** - **CJSW**
- **cliphunter** - **cliphunter**
- **Clippit** - **Clippit**
@ -214,26 +227,32 @@
- **CONtv** - **CONtv**
- **Corus** - **Corus**
- **Coub** - **Coub**
- **CozyTV**
- **cp24**
- **Cracked** - **Cracked**
- **Crackle** - **Crackle**
- **CrooksAndLiars** - **CrooksAndLiars**
- **crunchyroll** - **crunchyroll**
- **crunchyroll:beta**
- **crunchyroll:playlist** - **crunchyroll:playlist**
- **crunchyroll:playlist:beta**
- **CSpan**: C-SPAN - **CSpan**: C-SPAN
- **CtsNews**: 華視新聞 - **CtsNews**: 華視新聞
- **CTV** - **CTV**
- **CTVNews** - **CTVNews**
- **cu.ntv.co.jp**: Nippon Television Network - **cu.ntv.co.jp**: Nippon Television Network
- **Culturebox**
- **CultureUnplugged** - **CultureUnplugged**
- **curiositystream** - **curiositystream**
- **curiositystream:collection** - **curiositystream:collections**
- **curiositystream:series**
- **CWTV** - **CWTV**
- **DagelijkseKost**: dagelijksekost.een.be - **DagelijkseKost**: dagelijksekost.een.be
- **DailyMail** - **DailyMail**
- **dailymotion** - **dailymotion**
- **dailymotion:playlist** - **dailymotion:playlist**
- **dailymotion:user** - **dailymotion:user**
- **damtomo:record**
- **damtomo:video**
- **daum.net** - **daum.net**
- **daum.net:clip** - **daum.net:clip**
- **daum.net:playlist** - **daum.net:playlist**
@ -255,8 +274,11 @@
- **DiscoveryPlus** - **DiscoveryPlus**
- **DiscoveryPlusIndia** - **DiscoveryPlusIndia**
- **DiscoveryPlusIndiaShow** - **DiscoveryPlusIndiaShow**
- **DiscoveryPlusItaly**
- **DiscoveryPlusItalyShow**
- **DiscoveryVR** - **DiscoveryVR**
- **Disney** - **Disney**
- **DIYNetwork**
- **dlive:stream** - **dlive:stream**
- **dlive:vod** - **dlive:vod**
- **DoodStream** - **DoodStream**
@ -267,6 +289,8 @@
- **DPlay** - **DPlay**
- **DRBonanza** - **DRBonanza**
- **Dropbox** - **Dropbox**
- **Dropout**
- **DropoutSeason**
- **DrTuber** - **DrTuber**
- **drtv** - **drtv**
- **drtv:live** - **drtv:live**
@ -295,14 +319,18 @@
- **Embedly** - **Embedly**
- **EMPFlix** - **EMPFlix**
- **Engadget** - **Engadget**
- **Epicon**
- **EpiconSeries**
- **Eporner** - **Eporner**
- **EroProfile** - **EroProfile**
- **EroProfile:album** - **EroProfile:album**
- **Escapist** - **Escapist**
- **ESPN** - **ESPN**
- **ESPNArticle** - **ESPNArticle**
- **ESPNCricInfo**
- **EsriVideo** - **EsriVideo**
- **Europa** - **Europa**
- **EUScreen**
- **EWETV** - **EWETV**
- **ExpoTV** - **ExpoTV**
- **Expressen** - **Expressen**
@ -316,6 +344,7 @@
- **fc2** - **fc2**
- **fc2:embed** - **fc2:embed**
- **Fczenit** - **Fczenit**
- **Filmmodu**
- **filmon** - **filmon**
- **filmon:channel** - **filmon:channel**
- **Filmweb** - **Filmweb**
@ -332,13 +361,10 @@
- **foxnews**: Fox News and Fox Business Video - **foxnews**: Fox News and Fox Business Video
- **foxnews:article** - **foxnews:article**
- **FoxSports** - **FoxSports**
- **france2.fr:generation-what**
- **FranceCulture** - **FranceCulture**
- **FranceInter** - **FranceInter**
- **FranceTV** - **FranceTV**
- **FranceTVEmbed**
- **francetvinfo.fr** - **francetvinfo.fr**
- **FranceTVJeunesse**
- **FranceTVSite** - **FranceTVSite**
- **Freesound** - **Freesound**
- **freespeech.org** - **freespeech.org**
@ -353,15 +379,27 @@
- **Funk** - **Funk**
- **Fusion** - **Fusion**
- **Fux** - **Fux**
- **Gab**
- **GabTV**
- **Gaia** - **Gaia**
- **GameInformer** - **GameInformer**
- **GameJolt**
- **GameJoltCommunity**
- **GameJoltGame**
- **GameJoltGameSoundtrack**
- **GameJoltSearch**
- **GameJoltUser**
- **GameSpot** - **GameSpot**
- **GameStar** - **GameStar**
- **Gaskrank** - **Gaskrank**
- **Gazeta** - **Gazeta**
- **GDCVault** - **GDCVault**
- **GediDigital** - **GediDigital**
- **gem.cbc.ca**
- **gem.cbc.ca:live**
- **gem.cbc.ca:playlist**
- **generic**: Generic downloader that works on some sites - **generic**: Generic downloader that works on some sites
- **Gettr**
- **Gfycat** - **Gfycat**
- **GiantBomb** - **GiantBomb**
- **Giga** - **Giga**
@ -371,12 +409,16 @@
- **GloboArticle** - **GloboArticle**
- **Go** - **Go**
- **GodTube** - **GodTube**
- **Gofile**
- **Golem** - **Golem**
- **google:podcasts** - **google:podcasts**
- **google:podcasts:feed** - **google:podcasts:feed**
- **GoogleDrive** - **GoogleDrive**
- **GoPro**
- **Goshgay** - **Goshgay**
- **GoToStage**
- **GPUTechConf** - **GPUTechConf**
- **Gronkh**
- **Groupon** - **Groupon**
- **hbo** - **hbo**
- **HearThisAt** - **HearThisAt**
@ -405,9 +447,12 @@
- **hrfernsehen** - **hrfernsehen**
- **HRTi** - **HRTi**
- **HRTiPlaylist** - **HRTiPlaylist**
- **HSEProduct**
- **HSEShow**
- **Huajiao**: 花椒直播 - **Huajiao**: 花椒直播
- **HuffPost**: Huffington Post - **HuffPost**: Huffington Post
- **Hungama** - **Hungama**
- **HungamaAlbumPlaylist**
- **HungamaSong** - **HungamaSong**
- **Hypem** - **Hypem**
- **ign.com** - **ign.com**
@ -425,11 +470,13 @@
- **IndavideoEmbed** - **IndavideoEmbed**
- **InfoQ** - **InfoQ**
- **Instagram** - **Instagram**
- **instagram:tag**: Instagram hashtag search - **instagram:tag**: Instagram hashtag search URLs
- **instagram:user**: Instagram user profile - **instagram:user**: Instagram user profile
- **InstagramIOS**: IOS instagram:// URL
- **Internazionale** - **Internazionale**
- **InternetVideoArchive** - **InternetVideoArchive**
- **IPrima** - **IPrima**
- **IPrimaCNN**
- **iqiyi**: 爱奇艺 - **iqiyi**: 爱奇艺
- **Ir90Tv** - **Ir90Tv**
- **ITTF** - **ITTF**
@ -460,6 +507,7 @@
- **KinjaEmbed** - **KinjaEmbed**
- **KinoPoisk** - **KinoPoisk**
- **KonserthusetPlay** - **KonserthusetPlay**
- **Koo**
- **KrasView**: Красвью - **KrasView**: Красвью
- **Ku6** - **Ku6**
- **KUSI** - **KUSI**
@ -498,6 +546,7 @@
- **LineLive** - **LineLive**
- **LineLiveChannel** - **LineLiveChannel**
- **LineTV** - **LineTV**
- **LinkedIn**
- **linkedin:learning** - **linkedin:learning**
- **linkedin:learning:course** - **linkedin:learning:course**
- **LinuxAcademy** - **LinuxAcademy**
@ -520,6 +569,9 @@
- **MallTV** - **MallTV**
- **mangomolo:live** - **mangomolo:live**
- **mangomolo:video** - **mangomolo:video**
- **ManotoTV**: Manoto TV (Episode)
- **ManotoTVLive**: Manoto TV (Live)
- **ManotoTVShow**: Manoto TV (Show)
- **ManyVids** - **ManyVids**
- **MaoriTV** - **MaoriTV**
- **Markiza** - **Markiza**
@ -530,8 +582,11 @@
- **MedalTV** - **MedalTV**
- **media.ccc.de** - **media.ccc.de**
- **media.ccc.de:lists** - **media.ccc.de:lists**
- **Mediaite**
- **MediaKlikk**
- **Medialaan** - **Medialaan**
- **Mediaset** - **Mediaset**
- **MediasetShow**
- **Mediasite** - **Mediasite**
- **MediasiteCatalog** - **MediasiteCatalog**
- **MediasiteNamedCatalog** - **MediasiteNamedCatalog**
@ -546,6 +601,7 @@
- **Mgoon** - **Mgoon**
- **MGTV**: 芒果TV - **MGTV**: 芒果TV
- **MiaoPai** - **MiaoPai**
- **microsoftstream**: Microsoft Stream
- **mildom**: Record ongoing live by specific user in Mildom - **mildom**: Record ongoing live by specific user in Mildom
- **mildom:user:vod**: Download all VODs from specific user in Mildom - **mildom:user:vod**: Download all VODs from specific user in Mildom
- **mildom:vod**: Download a VOD in Mildom - **mildom:vod**: Download a VOD in Mildom
@ -558,11 +614,13 @@
- **mirrativ** - **mirrativ**
- **mirrativ:user** - **mirrativ:user**
- **MiTele**: mitele.es - **MiTele**: mitele.es
- **mixch**
- **mixcloud** - **mixcloud**
- **mixcloud:playlist** - **mixcloud:playlist**
- **mixcloud:user** - **mixcloud:user**
- **MLB** - **MLB**
- **MLBVideo** - **MLBVideo**
- **MLSSoccer**
- **Mnet** - **Mnet**
- **MNetTV** - **MNetTV**
- **MoeVideo**: LetitBit video services: moevideo.net, playreplay.net and videochart.net - **MoeVideo**: LetitBit video services: moevideo.net, playreplay.net and videochart.net
@ -588,6 +646,7 @@
- **mtvservices:embedded** - **mtvservices:embedded**
- **MTVUutisetArticle** - **MTVUutisetArticle**
- **MuenchenTV**: münchen.tv - **MuenchenTV**: münchen.tv
- **MuseScore**
- **mva**: Microsoft Virtual Academy videos - **mva**: Microsoft Virtual Academy videos
- **mva:course**: Microsoft Virtual Academy courses - **mva:course**: Microsoft Virtual Academy courses
- **Mwave** - **Mwave**
@ -604,6 +663,10 @@
- **MyviEmbed** - **MyviEmbed**
- **MyVisionTV** - **MyVisionTV**
- **n-tv.de** - **n-tv.de**
- **N1Info:article**
- **N1InfoAsset**
- **Nate**
- **NateProgram**
- **natgeo:video** - **natgeo:video**
- **NationalGeographicTV** - **NationalGeographicTV**
- **Naver** - **Naver**
@ -626,6 +689,7 @@
- **ndr:embed:base** - **ndr:embed:base**
- **NDTV** - **NDTV**
- **Nebula** - **Nebula**
- **nebula:collection**
- **NerdCubedFeed** - **NerdCubedFeed**
- **netease:album**: 网易云音乐 - 专辑 - **netease:album**: 网易云音乐 - 专辑
- **netease:djradio**: 网易云音乐 - 电台 - **netease:djradio**: 网易云音乐 - 电台
@ -637,7 +701,8 @@
- **NetPlus** - **NetPlus**
- **Netzkino** - **Netzkino**
- **Newgrounds** - **Newgrounds**
- **NewgroundsPlaylist** - **Newgrounds:playlist**
- **Newgrounds:user**
- **Newstube** - **Newstube**
- **NextMedia**: 蘋果日報 - **NextMedia**: 蘋果日報
- **NextMediaActionNews**: 蘋果日報 - 動新聞 - **NextMediaActionNews**: 蘋果日報 - 動新聞
@ -658,6 +723,9 @@
- **niconico**: ニコニコ動画 - **niconico**: ニコニコ動画
- **NiconicoPlaylist** - **NiconicoPlaylist**
- **NiconicoUser** - **NiconicoUser**
- **nicovideo:search**: Nico video search; "nicosearch:" prefix
- **nicovideo:search:date**: Nico video search, newest first; "nicosearchdate:" prefix
- **nicovideo:search_url**: Nico video search URLs
- **Nintendo** - **Nintendo**
- **Nitter** - **Nitter**
- **njoy**: N-JOY - **njoy**: N-JOY
@ -670,6 +738,7 @@
- **NosVideo** - **NosVideo**
- **Nova**: TN.cz, Prásk.tv, Nova.cz, Novaplus.cz, FANDA.tv, Krásná.cz and Doma.cz - **Nova**: TN.cz, Prásk.tv, Nova.cz, Novaplus.cz, FANDA.tv, Krásná.cz and Doma.cz
- **NovaEmbed** - **NovaEmbed**
- **NovaPlay**
- **nowness** - **nowness**
- **nowness:playlist** - **nowness:playlist**
- **nowness:series** - **nowness:series**
@ -695,12 +764,16 @@
- **NYTimes** - **NYTimes**
- **NYTimesArticle** - **NYTimesArticle**
- **NYTimesCooking** - **NYTimesCooking**
- **nzherald**
- **NZZ** - **NZZ**
- **ocw.mit.edu** - **ocw.mit.edu**
- **OdaTV** - **OdaTV**
- **Odnoklassniki** - **Odnoklassniki**
- **OktoberfestTV** - **OktoberfestTV**
- **OlympicsReplay**
- **on24**: ON24
- **OnDemandKorea** - **OnDemandKorea**
- **OneFootball**
- **onet.pl** - **onet.pl**
- **onet.tv** - **onet.tv**
- **onet.tv:channel** - **onet.tv:channel**
@ -708,6 +781,8 @@
- **OnionStudios** - **OnionStudios**
- **Ooyala** - **Ooyala**
- **OoyalaExternal** - **OoyalaExternal**
- **Opencast**
- **OpencastPlaylist**
- **openrec** - **openrec**
- **openrec:capture** - **openrec:capture**
- **OraTV** - **OraTV**
@ -740,9 +815,14 @@
- **parliamentlive.tv**: UK parliament videos - **parliamentlive.tv**: UK parliament videos
- **Parlview** - **Parlview**
- **Patreon** - **Patreon**
- **PatreonUser**
- **pbs**: Public Broadcasting Service (PBS) and member stations: PBS: Public Broadcasting Service, APT - Alabama Public Television (WBIQ), GPB/Georgia Public Broadcasting (WGTV), Mississippi Public Broadcasting (WMPN), Nashville Public Television (WNPT), WFSU-TV (WFSU), WSRE (WSRE), WTCI (WTCI), WPBA/Channel 30 (WPBA), Alaska Public Media (KAKM), Arizona PBS (KAET), KNME-TV/Channel 5 (KNME), Vegas PBS (KLVX), AETN/ARKANSAS ETV NETWORK (KETS), KET (WKLE), WKNO/Channel 10 (WKNO), LPB/LOUISIANA PUBLIC BROADCASTING (WLPB), OETA (KETA), Ozarks Public Television (KOZK), WSIU Public Broadcasting (WSIU), KEET TV (KEET), KIXE/Channel 9 (KIXE), KPBS San Diego (KPBS), KQED (KQED), KVIE Public Television (KVIE), PBS SoCal/KOCE (KOCE), ValleyPBS (KVPT), CONNECTICUT PUBLIC TELEVISION (WEDH), KNPB Channel 5 (KNPB), SOPTV (KSYS), Rocky Mountain PBS (KRMA), KENW-TV3 (KENW), KUED Channel 7 (KUED), Wyoming PBS (KCWC), Colorado Public Television / KBDI 12 (KBDI), KBYU-TV (KBYU), Thirteen/WNET New York (WNET), WGBH/Channel 2 (WGBH), WGBY (WGBY), NJTV Public Media NJ (WNJT), WLIW21 (WLIW), mpt/Maryland Public Television (WMPB), WETA Television and Radio (WETA), WHYY (WHYY), PBS 39 (WLVT), WVPT - Your Source for PBS and More! (WVPT), Howard University Television (WHUT), WEDU PBS (WEDU), WGCU Public Media (WGCU), WPBT2 (WPBT), WUCF TV (WUCF), WUFT/Channel 5 (WUFT), WXEL/Channel 42 (WXEL), WLRN/Channel 17 (WLRN), WUSF Public Broadcasting (WUSF), ETV (WRLK), UNC-TV (WUNC), PBS Hawaii - Oceanic Cable Channel 10 (KHET), Idaho Public Television (KAID), KSPS (KSPS), OPB (KOPB), KWSU/Channel 10 & KTNW/Channel 31 (KWSU), WILL-TV (WILL), Network Knowledge - WSEC/Springfield (WSEC), WTTW11 (WTTW), Iowa Public Television/IPTV (KDIN), Nine Network (KETC), PBS39 Fort Wayne (WFWA), WFYI Indianapolis (WFYI), Milwaukee Public Television (WMVS), WNIN (WNIN), WNIT Public Television (WNIT), WPT (WPNE), WVUT/Channel 22 (WVUT), WEIU/Channel 51 (WEIU), WQPT-TV (WQPT), WYCC PBS Chicago (WYCC), WIPB-TV (WIPB), WTIU (WTIU), CET (WCET), ThinkTVNetwork (WPTD), WBGU-TV (WBGU), WGVU TV (WGVU), NET1 (KUON), Pioneer Public Television (KWCM), SDPB Television (KUSD), TPT (KTCA), KSMQ (KSMQ), KPTS/Channel 8 (KPTS), KTWU/Channel 11 (KTWU), East Tennessee PBS (WSJK), WCTE-TV (WCTE), WLJT, Channel 11 (WLJT), WOSU TV (WOSU), WOUB/WOUC (WOUB), WVPB (WVPB), WKYU-PBS (WKYU), KERA 13 (KERA), MPBN (WCBB), Mountain Lake PBS (WCFE), NHPTV (WENH), Vermont PBS (WETK), witf (WITF), WQED Multimedia (WQED), WMHT Educational Telecommunications (WMHT), Q-TV (WDCQ), WTVS Detroit Public TV (WTVS), CMU Public Television (WCMU), WKAR-TV (WKAR), WNMU-TV Public TV 13 (WNMU), WDSE - WRPT (WDSE), WGTE TV (WGTE), Lakeland Public Television (KAWE), KMOS-TV - Channels 6.1, 6.2 and 6.3 (KMOS), MontanaPBS (KUSM), KRWG/Channel 22 (KRWG), KACV (KACV), KCOS/Channel 13 (KCOS), WCNY/Channel 24 (WCNY), WNED (WNED), WPBS (WPBS), WSKG Public TV (WSKG), WXXI (WXXI), WPSU (WPSU), WVIA Public Media Studios (WVIA), WTVI (WTVI), Western Reserve PBS (WNEO), WVIZ/PBS ideastream (WVIZ), KCTS 9 (KCTS), Basin PBS (KPBT), KUHT / Channel 8 (KUHT), KLRN (KLRN), KLRU (KLRU), WTJX Channel 12 (WTJX), WCVE PBS (WCVE), KBTC Public Television (KBTC)
- **PearVideo** - **PearVideo**
- **peer.tv**
- **PeerTube** - **PeerTube**
- **PeerTube:Playlist**
- **peloton**
- **peloton:live**: Peloton Live
- **People** - **People**
- **PerformGroup** - **PerformGroup**
- **periscope**: Periscope - **periscope**: Periscope
@ -756,7 +836,10 @@
- **Pinkbike** - **Pinkbike**
- **Pinterest** - **Pinterest**
- **PinterestCollection** - **PinterestCollection**
- **pixiv:sketch**
- **pixiv:sketch:user**
- **Pladform** - **Pladform**
- **PlanetMarathi**
- **Platzi** - **Platzi**
- **PlatziCourse** - **PlatziCourse**
- **play.fm** - **play.fm**
@ -773,7 +856,12 @@
- **podomatic** - **podomatic**
- **Pokemon** - **Pokemon**
- **PokemonWatch** - **PokemonWatch**
- **PolsatGo**
- **PolskieRadio** - **PolskieRadio**
- **polskieradio:kierowcow**
- **polskieradio:player**
- **polskieradio:podcast**
- **polskieradio:podcast:list**
- **PolskieRadioCategory** - **PolskieRadioCategory**
- **Popcorntimes** - **Popcorntimes**
- **PopcornTV** - **PopcornTV**
@ -783,6 +871,7 @@
- **PornHd** - **PornHd**
- **PornHub**: PornHub and Thumbzilla - **PornHub**: PornHub and Thumbzilla
- **PornHubPagedVideoList** - **PornHubPagedVideoList**
- **PornHubPlaylist**
- **PornHubUser** - **PornHubUser**
- **PornHubUserVideosUpload** - **PornHubUserVideosUpload**
- **Pornotube** - **Pornotube**
@ -790,6 +879,7 @@
- **PornoXO** - **PornoXO**
- **PornTube** - **PornTube**
- **PressTV** - **PressTV**
- **ProjectVeritas**
- **prosiebensat1**: ProSiebenSat.1 Digital - **prosiebensat1**: ProSiebenSat.1 Digital
- **puhutv** - **puhutv**
- **puhutv:serie** - **puhutv:serie**
@ -806,16 +896,26 @@
- **QuicklineLive** - **QuicklineLive**
- **R7** - **R7**
- **R7Article** - **R7Article**
- **Radiko**
- **RadikoRadio**
- **radio.de** - **radio.de**
- **radiobremen** - **radiobremen**
- **radiocanada** - **radiocanada**
- **radiocanada:audiovideo** - **radiocanada:audiovideo**
- **radiofrance** - **radiofrance**
- **RadioJavan** - **RadioJavan**
- **radiokapital**
- **radiokapital:show**
- **RadioZetPodcast**
- **radlive**
- **radlive:channel**
- **radlive:season**
- **Rai** - **Rai**
- **RaiPlay** - **RaiPlay**
- **RaiPlayLive** - **RaiPlayLive**
- **RaiPlayPlaylist** - **RaiPlayPlaylist**
- **RaiPlayRadio**
- **RaiPlayRadioPlaylist**
- **RayWenderlich** - **RayWenderlich**
- **RayWenderlichCourse** - **RayWenderlichCourse**
- **RBMARadio** - **RBMARadio**
@ -831,7 +931,9 @@
- **RedBullTV** - **RedBullTV**
- **RedBullTVRrnContent** - **RedBullTVRrnContent**
- **Reddit** - **Reddit**
- **RedditR** - **RedGifs**
- **RedGifsSearch**: Redgifs search
- **RedGifsUser**: Redgifs user
- **RedTube** - **RedTube**
- **RegioTV** - **RegioTV**
- **RENTV** - **RENTV**
@ -843,6 +945,7 @@
- **RMCDecouverte** - **RMCDecouverte**
- **RockstarGames** - **RockstarGames**
- **RoosterTeeth** - **RoosterTeeth**
- **RoosterTeethSeries**
- **RottenTomatoes** - **RottenTomatoes**
- **Roxwel** - **Roxwel**
- **Rozhlas** - **Rozhlas**
@ -854,21 +957,25 @@
- **rtl2:you** - **rtl2:you**
- **rtl2:you:series** - **rtl2:you:series**
- **RTP** - **RTP**
- **RTRFM**
- **RTS**: RTS.ch - **RTS**: RTS.ch
- **rtve.es:alacarta**: RTVE a la carta - **rtve.es:alacarta**: RTVE a la carta
- **rtve.es:audio**: RTVE audio
- **rtve.es:infantil**: RTVE infantil - **rtve.es:infantil**: RTVE infantil
- **rtve.es:live**: RTVE.es live streams - **rtve.es:live**: RTVE.es live streams
- **rtve.es:television** - **rtve.es:television**
- **RTVNH** - **RTVNH**
- **RTVS** - **RTVS**
- **RUHD** - **RUHD**
- **RumbleChannel**
- **RumbleEmbed** - **RumbleEmbed**
- **rutube**: Rutube videos - **rutube**: Rutube videos
- **rutube:channel**: Rutube channels - **rutube:channel**: Rutube channel
- **rutube:embed**: Rutube embedded videos - **rutube:embed**: Rutube embedded videos
- **rutube:movie**: Rutube movies - **rutube:movie**: Rutube movies
- **rutube:person**: Rutube person videos - **rutube:person**: Rutube person videos
- **rutube:playlist**: Rutube playlists - **rutube:playlist**: Rutube playlists
- **rutube:tags**: Rutube tags
- **RUTV**: RUTV.RU - **RUTV**: RUTV.RU
- **Ruutu** - **Ruutu**
- **Ruv** - **Ruv**
@ -884,7 +991,7 @@
- **SBS**: sbs.com.au - **SBS**: sbs.com.au
- **schooltv** - **schooltv**
- **ScienceChannel** - **ScienceChannel**
- **screen.yahoo:search**: Yahoo screen search - **screen.yahoo:search**: Yahoo screen search; "yvsearch:" prefix
- **Screencast** - **Screencast**
- **ScreencastOMatic** - **ScreencastOMatic**
- **ScrippsNetworks** - **ScrippsNetworks**
@ -892,6 +999,7 @@
- **SCTE** - **SCTE**
- **SCTECourse** - **SCTECourse**
- **Seeker** - **Seeker**
- **SenateGov**
- **SenateISVP** - **SenateISVP**
- **SendtoNews** - **SendtoNews**
- **Servus** - **Servus**
@ -907,14 +1015,17 @@
- **simplecast:episode** - **simplecast:episode**
- **simplecast:podcast** - **simplecast:podcast**
- **Sina** - **Sina**
- **Skeb**
- **sky.it** - **sky.it**
- **sky:news** - **sky:news**
- **sky:news:story**
- **sky:sports** - **sky:sports**
- **sky:sports:news** - **sky:sports:news**
- **skyacademy.it** - **skyacademy.it**
- **SkylineWebcams** - **SkylineWebcams**
- **skynewsarabia:article** - **skynewsarabia:article**
- **skynewsarabia:video** - **skynewsarabia:video**
- **SkyNewsAU**
- **Slideshare** - **Slideshare**
- **SlidesLive** - **SlidesLive**
- **Slutload** - **Slutload**
@ -924,7 +1035,8 @@
- **SonyLIVSeries** - **SonyLIVSeries**
- **soundcloud** - **soundcloud**
- **soundcloud:playlist** - **soundcloud:playlist**
- **soundcloud:search**: Soundcloud search - **soundcloud:related**
- **soundcloud:search**: Soundcloud search; "scsearch:" prefix
- **soundcloud:set** - **soundcloud:set**
- **soundcloud:trackstation** - **soundcloud:trackstation**
- **soundcloud:user** - **soundcloud:user**
@ -936,11 +1048,12 @@
- **southpark.de** - **southpark.de**
- **southpark.nl** - **southpark.nl**
- **southparkstudios.dk** - **southparkstudios.dk**
- **SovietsCloset**
- **SovietsClosetPlaylist**
- **SpankBang** - **SpankBang**
- **SpankBangPlaylist** - **SpankBangPlaylist**
- **Spankwire** - **Spankwire**
- **Spiegel** - **Spiegel**
- **sport.francetvinfo.fr**
- **Sport5** - **Sport5**
- **SportBox** - **SportBox**
- **SportDeutschland** - **SportDeutschland**
@ -956,6 +1069,7 @@
- **SRGSSR** - **SRGSSR**
- **SRGSSRPlay**: srf.ch, rts.ch, rsi.ch, rtr.ch and swissinfo.ch play sites - **SRGSSRPlay**: srf.ch, rts.ch, rsi.ch, rtr.ch and swissinfo.ch play sites
- **stanfordoc**: Stanford Open ClassRoom - **stanfordoc**: Stanford Open ClassRoom
- **startv**
- **Steam** - **Steam**
- **Stitcher** - **Stitcher**
- **StitcherShow** - **StitcherShow**
@ -963,10 +1077,13 @@
- **StoryFireSeries** - **StoryFireSeries**
- **StoryFireUser** - **StoryFireUser**
- **Streamable** - **Streamable**
- **Streamanity**
- **streamcloud.eu** - **streamcloud.eu**
- **StreamCZ** - **StreamCZ**
- **StreamFF**
- **StreetVoice** - **StreetVoice**
- **StretchInternet** - **StretchInternet**
- **Stripchat**
- **stv:player** - **stv:player**
- **SunPorno** - **SunPorno**
- **sverigesradio:episode** - **sverigesradio:episode**
@ -980,7 +1097,6 @@
- **SztvHu** - **SztvHu**
- **t-online.de** - **t-online.de**
- **Tagesschau** - **Tagesschau**
- **tagesschau:player**
- **Tass** - **Tass**
- **TBS** - **TBS**
- **TDSLifeway** - **TDSLifeway**
@ -1018,16 +1134,27 @@
- **TheScene** - **TheScene**
- **TheStar** - **TheStar**
- **TheSun** - **TheSun**
- **ThetaStream**
- **ThetaVideo**
- **TheWeatherChannel** - **TheWeatherChannel**
- **ThisAmericanLife** - **ThisAmericanLife**
- **ThisAV** - **ThisAV**
- **ThisOldHouse** - **ThisOldHouse**
- **ThreeSpeak**
- **ThreeSpeakUser**
- **TikTok** - **TikTok**
- **tiktok:effect**
- **tiktok:sound**
- **tiktok:tag**
- **tiktok:user**
- **tinypic**: tinypic.com videos - **tinypic**: tinypic.com videos
- **TMZ** - **TMZ**
- **TNAFlix** - **TNAFlix**
- **TNAFlixNetworkEmbed** - **TNAFlixNetworkEmbed**
- **toggle** - **toggle**
- **toggo**
- **Tokentube**
- **Tokentube:channel**
- **ToonGoggles** - **ToonGoggles**
- **tou.tv** - **tou.tv**
- **Toypics**: Toypics video - **Toypics**: Toypics video
@ -1035,7 +1162,10 @@
- **TrailerAddict** (Currently broken) - **TrailerAddict** (Currently broken)
- **Trilulilu** - **Trilulilu**
- **Trovo** - **Trovo**
- **TrovoChannelClip**: All Clips of a trovo.live channel; "trovoclip:" prefix
- **TrovoChannelVod**: All VODs of a trovo.live channel; "trovovod:" prefix
- **TrovoVod** - **TrovoVod**
- **TrueID**
- **TruNews** - **TruNews**
- **TruTV** - **TruTV**
- **Tube8** - **Tube8**
@ -1050,10 +1180,11 @@
- **Turbo** - **Turbo**
- **tv.dfb.de** - **tv.dfb.de**
- **TV2** - **TV2**
- **tv2.hu**
- **TV2Article** - **TV2Article**
- **TV2DK** - **TV2DK**
- **TV2DKBornholmPlay** - **TV2DKBornholmPlay**
- **tv2play.hu**
- **tv2playseries.hu**
- **TV4**: tv4.se and tv4play.se - **TV4**: tv4.se and tv4play.se
- **TV5MondePlus**: TV5MONDE+ - **TV5MondePlus**: TV5MONDE+
- **tv5unis** - **tv5unis**
@ -1079,6 +1210,7 @@
- **tvp**: Telewizja Polska - **tvp**: Telewizja Polska
- **tvp:embed**: Telewizja Polska - **tvp:embed**: Telewizja Polska
- **tvp:series** - **tvp:series**
- **tvp:stream**
- **TVPlayer** - **TVPlayer**
- **TVPlayHome** - **TVPlayHome**
- **Tweakers** - **Tweakers**
@ -1122,6 +1254,7 @@
- **Varzesh3** - **Varzesh3**
- **Vbox7** - **Vbox7**
- **VeeHD** - **VeeHD**
- **Veo**
- **Veoh** - **Veoh**
- **Vesti**: Вести.Ru - **Vesti**: Вести.Ru
- **Vevo** - **Vevo**
@ -1137,7 +1270,7 @@
- **Viddler** - **Viddler**
- **Videa** - **Videa**
- **video.arnes.si**: Arnes Video - **video.arnes.si**: Arnes Video
- **video.google:search**: Google Video search - **video.google:search**: Google Video search; "gvsearch:" prefix (Currently broken)
- **video.sky.it** - **video.sky.it**
- **video.sky.it:live** - **video.sky.it:live**
- **VideoDetective** - **VideoDetective**
@ -1150,9 +1283,6 @@
- **VidioLive** - **VidioLive**
- **VidioPremier** - **VidioPremier**
- **VidLii** - **VidLii**
- **vidme**
- **vidme:user**
- **vidme:user:likes**
- **vier**: vier.be and vijf.be - **vier**: vier.be and vijf.be
- **vier:videos** - **vier:videos**
- **viewlift** - **viewlift**
@ -1187,6 +1317,8 @@
- **VODPl** - **VODPl**
- **VODPlatform** - **VODPlatform**
- **VoiceRepublic** - **VoiceRepublic**
- **voicy**
- **voicy:channel**
- **Voot** - **Voot**
- **VootSeries** - **VootSeries**
- **VoxMedia** - **VoxMedia**
@ -1202,6 +1334,7 @@
- **VTXTV** - **VTXTV**
- **vube**: Vube.com - **vube**: Vube.com
- **VuClip** - **VuClip**
- **Vupload**
- **VVVVID** - **VVVVID**
- **VVVVIDShow** - **VVVVIDShow**
- **VyboryMos** - **VyboryMos**
@ -1227,11 +1360,14 @@
- **WeiboMobile** - **WeiboMobile**
- **WeiqiTV**: WQTV - **WeiqiTV**: WQTV
- **whowatch** - **whowatch**
- **Willow**
- **WimTV** - **WimTV**
- **Wistia** - **Wistia**
- **WistiaPlaylist** - **WistiaPlaylist**
- **wnl**: npo.nl, ntr.nl, omroepwnl.nl, zapp.nl and npo3.nl - **wnl**: npo.nl, ntr.nl, omroepwnl.nl, zapp.nl and npo3.nl
- **WorldStarHipHop** - **WorldStarHipHop**
- **wppilot**
- **wppilot:channels**
- **WSJ**: Wall Street Journal - **WSJ**: Wall Street Journal
- **WSJArticle** - **WSJArticle**
- **WWE** - **WWE**
@ -1279,19 +1415,19 @@
- **YouPorn** - **YouPorn**
- **YourPorn** - **YourPorn**
- **YourUpload** - **YourUpload**
- **youtube**: YouTube.com - **youtube**: YouTube
- **youtube:favorites**: YouTube.com liked videos, ":ytfav" for short (requires authentication) - **youtube:favorites**: YouTube liked videos; ":ytfav" keyword (requires cookies)
- **youtube:history**: Youtube watch history, ":ythis" for short (requires authentication) - **youtube:history**: Youtube watch history; ":ythis" keyword (requires cookies)
- **youtube:playlist**: YouTube.com playlists - **youtube:playlist**: YouTube playlists
- **youtube:recommended**: YouTube.com recommended videos, ":ytrec" for short (requires authentication) - **youtube:recommended**: YouTube recommended videos; ":ytrec" keyword
- **youtube:search**: YouTube.com searches, "ytsearch" keyword - **youtube:search**: YouTube search; "ytsearch:" prefix (see the search sketch after this list)
- **youtube:search:date**: YouTube.com searches, newest videos first, "ytsearchdate" keyword - **youtube:search:date**: YouTube search, newest videos first; "ytsearchdate:" prefix
- **youtube:search_url**: YouTube.com search URLs - **youtube:search_url**: YouTube search URLs with sorting and filter support
- **youtube:subscriptions**: YouTube.com subscriptions feed, ":ytsubs" for short (requires authentication) - **youtube:subscriptions**: YouTube subscriptions feed; ":ytsubs" keyword (requires cookies)
- **youtube:tab**: YouTube.com tab - **youtube:tab**: YouTube Tabs
- **youtube:watchlater**: Youtube watch later list, ":ytwatchlater" for short (requires authentication) - **youtube:watchlater**: Youtube watch later list; ":ytwatchlater" keyword (requires cookies)
- **YoutubeYtBe**: youtu.be - **YoutubeYtBe**: youtu.be
- **YoutubeYtUser**: YouTube.com user videos, URL or "ytuser" keyword - **YoutubeYtUser**: YouTube user videos; "ytuser:" prefix
- **Zapiks** - **Zapiks**
- **Zattoo** - **Zattoo**
- **ZattooLive** - **ZattooLive**
@ -1299,6 +1435,8 @@
- **ZDFChannel** - **ZDFChannel**
- **Zee5** - **Zee5**
- **zee5:series** - **zee5:series**
- **ZenYandex**
- **ZenYandexChannel**
- **Zhihu** - **Zhihu**
- **zingmp3**: mp3.zing.vn - **zingmp3**: mp3.zing.vn
- **zingmp3:album** - **zingmp3:album**
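
Several of the entries added above are search extractors that are driven by a prefix rather than a URL ("ytsearch:", "scsearch:", "nicosearch:", "gvsearch:", "yvsearch:", "trovoclip:" and so on). A minimal sketch of prefix-based search from Python, assuming this build of yt-dlp is importable; the query string is arbitrary:

```python
from yt_dlp import YoutubeDL

# 'ytsearch5:' asks the youtube:search extractor for the first 5 results;
# the other prefixes listed above follow the same '<prefix><count>:<query>' shape
with YoutubeDL({'quiet': True}) as ydl:
    result = ydl.extract_info('ytsearch5:open source video downloader', download=False)
    for entry in result['entries']:
        print(entry.get('title'))
```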


@ -22,7 +22,7 @@ from yt_dlp.utils import (
) )
if "pytest" in sys.modules: if 'pytest' in sys.modules:
import pytest import pytest
is_download_test = pytest.mark.download is_download_test = pytest.mark.download
else: else:
@ -32,9 +32,9 @@ else:
def get_params(override=None): def get_params(override=None):
PARAMETERS_FILE = os.path.join(os.path.dirname(os.path.abspath(__file__)), PARAMETERS_FILE = os.path.join(os.path.dirname(os.path.abspath(__file__)),
"parameters.json") 'parameters.json')
LOCAL_PARAMETERS_FILE = os.path.join(os.path.dirname(os.path.abspath(__file__)), LOCAL_PARAMETERS_FILE = os.path.join(os.path.dirname(os.path.abspath(__file__)),
"local_parameters.json") 'local_parameters.json')
with io.open(PARAMETERS_FILE, encoding='utf-8') as pf: with io.open(PARAMETERS_FILE, encoding='utf-8') as pf:
parameters = json.load(pf) parameters = json.load(pf)
if os.path.exists(LOCAL_PARAMETERS_FILE): if os.path.exists(LOCAL_PARAMETERS_FILE):
@ -194,6 +194,51 @@ def expect_dict(self, got_dict, expected_dict):
expect_value(self, got, expected, info_field) expect_value(self, got, expected, info_field)
def sanitize_got_info_dict(got_dict):
IGNORED_FIELDS = (
# Format keys
'url', 'manifest_url', 'format', 'format_id', 'format_note', 'width', 'height', 'resolution',
'dynamic_range', 'tbr', 'abr', 'acodec', 'asr', 'vbr', 'fps', 'vcodec', 'container', 'filesize',
'filesize_approx', 'player_url', 'protocol', 'fragment_base_url', 'fragments', 'preference',
'language', 'language_preference', 'quality', 'source_preference', 'http_headers',
'stretched_ratio', 'no_resume', 'has_drm', 'downloader_options',
# RTMP formats
'page_url', 'app', 'play_path', 'tc_url', 'flash_version', 'rtmp_live', 'rtmp_conn', 'rtmp_protocol', 'rtmp_real_time',
# Lists
'formats', 'thumbnails', 'subtitles', 'automatic_captions', 'comments', 'entries',
# Auto-generated
'autonumber', 'playlist', 'format_index', 'video_ext', 'audio_ext', 'duration_string', 'epoch',
'fulltitle', 'extractor', 'extractor_key', 'filepath', 'infojson_filename', 'original_url',
# Only live_status needs to be checked
'is_live', 'was_live',
)
IGNORED_PREFIXES = ('', 'playlist', 'requested', 'webpage')
def sanitize(key, value):
if isinstance(value, str) and len(value) > 100:
return f'md5:{md5(value)}'
elif isinstance(value, list) and len(value) > 10:
return f'count:{len(value)}'
return value
test_info_dict = {
key: sanitize(key, value) for key, value in got_dict.items()
if value is not None and key not in IGNORED_FIELDS and not any(
key.startswith(f'{prefix}_') for prefix in IGNORED_PREFIXES)
}
# display_id may be generated from id
if test_info_dict.get('display_id') == test_info_dict['id']:
test_info_dict.pop('display_id')
return test_info_dict
def expect_info_dict(self, got_dict, expected_dict): def expect_info_dict(self, got_dict, expected_dict):
expect_dict(self, got_dict, expected_dict) expect_dict(self, got_dict, expected_dict)
# Check for the presence of mandatory fields # Check for the presence of mandatory fields
@ -207,10 +252,8 @@ def expect_info_dict(self, got_dict, expected_dict):
for key in ['webpage_url', 'extractor', 'extractor_key']: for key in ['webpage_url', 'extractor', 'extractor_key']:
self.assertTrue(got_dict.get(key), 'Missing field: %s' % key) self.assertTrue(got_dict.get(key), 'Missing field: %s' % key)
# Are checkable fields missing from the test case definition? test_info_dict = sanitize_got_info_dict(got_dict)
test_info_dict = dict((key, value if not isinstance(value, compat_str) or len(value) < 250 else 'md5:' + md5(value))
for key, value in got_dict.items()
if value and key in ('id', 'title', 'description', 'uploader', 'upload_date', 'timestamp', 'uploader_id', 'location', 'age_limit'))
missing_keys = set(test_info_dict.keys()) - set(expected_dict.keys()) missing_keys = set(test_info_dict.keys()) - set(expected_dict.keys())
if missing_keys: if missing_keys:
def _repr(v): def _repr(v):
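
The new `sanitize_got_info_dict` replaces the ad-hoc filtering that `expect_info_dict` used to do inline: long strings collapse to an `md5:` marker, long lists to a `count:` marker, and format-level or auto-generated keys are dropped entirely. A standalone sketch of the same normalization idea (the `md5` helper and sample fields here are illustrative, not the real test helpers):

```python
import hashlib

def md5(s):
    return hashlib.md5(s.encode('utf-8')).hexdigest()

def sanitize(value):
    # same rule as sanitize_got_info_dict: hash long strings, count long lists
    if isinstance(value, str) and len(value) > 100:
        return f'md5:{md5(value)}'
    if isinstance(value, list) and len(value) > 10:
        return f'count:{len(value)}'
    return value

info = {'id': 'abc', 'description': 'x' * 500, 'tags': [str(i) for i in range(50)]}
print({k: sanitize(v) for k, v in info.items()})
# -> {'id': 'abc', 'description': 'md5:...', 'tags': 'count:50'}
```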


@ -9,7 +9,7 @@
"forcetitle": false, "forcetitle": false,
"forceurl": false, "forceurl": false,
"force_write_download_archive": false, "force_write_download_archive": false,
"format": "best", "format": "b/bv",
"ignoreerrors": false, "ignoreerrors": false,
"listformats": null, "listformats": null,
"logtostderr": false, "logtostderr": false,
@ -44,6 +44,5 @@
"writesubtitles": false, "writesubtitles": false,
"allsubtitles": false, "allsubtitles": false,
"listsubtitles": false, "listsubtitles": false,
"socket_timeout": 20,
"fixup": "never" "fixup": "never"
} }
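
The default test format moves from `best` to `b/bv`: `b` is the best pre-merged format, and the `/` falls back to the best video-only format when no pre-merged one exists. A one-line sketch of the equivalent programmatic option (the same as `-f 'b/bv'` on the command line), assuming this build is importable:

```python
from yt_dlp import YoutubeDL

# try the best combined (audio+video) format first, else the best video-only format
ydl = YoutubeDL({'format': 'b/bv'})
```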


@ -99,10 +99,10 @@ class TestInfoExtractor(unittest.TestCase):
self.assertRaises(RegexNotFoundError, ie._html_search_meta, ('z', 'x'), html, None, fatal=True) self.assertRaises(RegexNotFoundError, ie._html_search_meta, ('z', 'x'), html, None, fatal=True)
def test_search_json_ld_realworld(self): def test_search_json_ld_realworld(self):
# https://github.com/ytdl-org/youtube-dl/issues/23306 _TESTS = [
expect_dict( # https://github.com/ytdl-org/youtube-dl/issues/23306
self, (
self.ie._search_json_ld(r'''<script type="application/ld+json"> r'''<script type="application/ld+json">
{ {
"@context": "http://schema.org/", "@context": "http://schema.org/",
"@type": "VideoObject", "@type": "VideoObject",
@ -135,17 +135,86 @@ class TestInfoExtractor(unittest.TestCase):
"name": "Kleio Valentien", "name": "Kleio Valentien",
"url": "https://www.eporner.com/pornstar/kleio-valentien/" "url": "https://www.eporner.com/pornstar/kleio-valentien/"
}]} }]}
</script>''', None), </script>''',
{ {
'title': '1 On 1 With Kleio', 'title': '1 On 1 With Kleio',
'description': 'Kleio Valentien', 'description': 'Kleio Valentien',
'url': 'https://gvideo.eporner.com/xN49A1cT3eB/xN49A1cT3eB.mp4', 'url': 'https://gvideo.eporner.com/xN49A1cT3eB/xN49A1cT3eB.mp4',
'timestamp': 1449347075, 'timestamp': 1449347075,
'duration': 743.0, 'duration': 743.0,
'view_count': 1120958, 'view_count': 1120958,
'width': 1920, 'width': 1920,
'height': 1080, 'height': 1080,
}) },
{},
),
(
r'''<script type="application/ld+json">
{
"@context": "https://schema.org",
"@graph": [
{
"@type": "NewsArticle",
"mainEntityOfPage": {
"@type": "WebPage",
"@id": "https://www.ant1news.gr/Society/article/620286/symmoria-anilikon-dikigoros-thymaton-ithelan-na-toys-apoteleiosoyn"
},
"headline": "Συμμορία ανηλίκων δικηγόρος θυμάτων: ήθελαν να τους αποτελειώσουν",
"name": "Συμμορία ανηλίκων δικηγόρος θυμάτων: ήθελαν να τους αποτελειώσουν",
"description": "Τα παιδιά δέχθηκαν την επίθεση επειδή αρνήθηκαν να γίνουν μέλη της συμμορίας, ανέφερε ο Γ. Ζαχαρόπουλος.",
"image": {
"@type": "ImageObject",
"url": "https://ant1media.azureedge.net/imgHandler/1100/a635c968-be71-447c-bf9c-80d843ece21e.jpg",
"width": 1100,
"height": 756 },
"datePublished": "2021-11-10T08:50:00+03:00",
"dateModified": "2021-11-10T08:52:53+03:00",
"author": {
"@type": "Person",
"@id": "https://www.ant1news.gr/",
"name": "Ant1news",
"image": "https://www.ant1news.gr/images/logo-e5d7e4b3e714c88e8d2eca96130142f6.png",
"url": "https://www.ant1news.gr/"
},
"publisher": {
"@type": "Organization",
"@id": "https://www.ant1news.gr#publisher",
"name": "Ant1news",
"url": "https://www.ant1news.gr",
"logo": {
"@type": "ImageObject",
"url": "https://www.ant1news.gr/images/logo-e5d7e4b3e714c88e8d2eca96130142f6.png",
"width": 400,
"height": 400 },
"sameAs": [
"https://www.facebook.com/Ant1news.gr",
"https://twitter.com/antennanews",
"https://www.youtube.com/channel/UC0smvAbfczoN75dP0Hw4Pzw",
"https://www.instagram.com/ant1news/"
]
},
"keywords": "μαχαίρωμα,συμμορία ανηλίκων,ΕΙΔΗΣΕΙΣ,ΕΙΔΗΣΕΙΣ ΣΗΜΕΡΑ,ΝΕΑ,Κοινωνία - Ant1news",
"articleSection": "Κοινωνία"
}
]
}
</script>''',
{
'timestamp': 1636523400,
'title': 'md5:91fe569e952e4d146485740ae927662b',
},
{'expected_type': 'NewsArticle'},
),
]
for html, expected_dict, search_json_ld_kwargs in _TESTS:
expect_dict(
self,
self.ie._search_json_ld(html, None, **search_json_ld_kwargs),
expected_dict
)
def test_download_json(self): def test_download_json(self):
uri = encode_data_uri(b'{"foo": "blah"}', 'application/json') uri = encode_data_uri(b'{"foo": "blah"}', 'application/json')
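
`test_search_json_ld_realworld` is reworked from a single hard-coded call into a table of `(html, expected_dict, kwargs)` cases, which lets the new case exercise `expected_type` against an `@graph` array. A rough sketch of the call shape; the `DummyIE` subclass and the sample markup are illustrative, not the test's own fixtures:

```python
from yt_dlp.extractor.common import InfoExtractor

class DummyIE(InfoExtractor):  # hypothetical stand-in for the test's self.ie
    pass

html = '''<script type="application/ld+json">
{"@context": "https://schema.org",
 "@graph": [{"@type": "NewsArticle",
             "headline": "Example headline",
             "datePublished": "2021-11-10T08:50:00+03:00"}]}
</script>'''
# expected_type picks the matching node out of the @graph array
print(DummyIE()._search_json_ld(html, None, expected_type='NewsArticle'))
```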


@ -137,7 +137,7 @@ class TestFormatSelection(unittest.TestCase):
test('webm/mp4', '47') test('webm/mp4', '47')
test('3gp/40/mp4', '35') test('3gp/40/mp4', '35')
test('example-with-dashes', 'example-with-dashes') test('example-with-dashes', 'example-with-dashes')
test('all', '35', 'example-with-dashes', '45', '47', '2') # Order doesn't actually matter for this test('all', '2', '47', '45', 'example-with-dashes', '35')
test('mergeall', '2+47+45+example-with-dashes+35', multi=True) test('mergeall', '2+47+45+example-with-dashes+35', multi=True)
def test_format_selection_audio(self): def test_format_selection_audio(self):
@ -520,7 +520,7 @@ class TestFormatSelection(unittest.TestCase):
ydl = YDL({'format': 'all[width>=400][width<=600]'}) ydl = YDL({'format': 'all[width>=400][width<=600]'})
ydl.process_ie_result(info_dict) ydl.process_ie_result(info_dict)
downloaded_ids = [info['format_id'] for info in ydl.downloaded_info_dicts] downloaded_ids = [info['format_id'] for info in ydl.downloaded_info_dicts]
self.assertEqual(downloaded_ids, ['B', 'C', 'D']) self.assertEqual(downloaded_ids, ['D', 'C', 'B'])
ydl = YDL({'format': 'best[height<40]'}) ydl = YDL({'format': 'best[height<40]'})
try: try:
@ -649,12 +649,14 @@ class TestYoutubeDL(unittest.TestCase):
'title2': '%PATH%', 'title2': '%PATH%',
'title3': 'foo/bar\\test', 'title3': 'foo/bar\\test',
'title4': 'foo "bar" test', 'title4': 'foo "bar" test',
'title5': 'áéí 𝐀',
'timestamp': 1618488000, 'timestamp': 1618488000,
'duration': 100000, 'duration': 100000,
'playlist_index': 1, 'playlist_index': 1,
'playlist_autonumber': 2,
'_last_playlist_index': 100, '_last_playlist_index': 100,
'n_entries': 10, 'n_entries': 10,
'formats': [{'id': 'id1'}, {'id': 'id2'}, {'id': 'id3'}] 'formats': [{'id': 'id 1'}, {'id': 'id 2'}, {'id': 'id 3'}]
} }
def test_prepare_outtmpl_and_filename(self): def test_prepare_outtmpl_and_filename(self):
@ -664,8 +666,7 @@ class TestYoutubeDL(unittest.TestCase):
ydl._num_downloads = 1 ydl._num_downloads = 1
self.assertEqual(ydl.validate_outtmpl(tmpl), None) self.assertEqual(ydl.validate_outtmpl(tmpl), None)
outtmpl, tmpl_dict = ydl.prepare_outtmpl(tmpl, info or self.outtmpl_info) out = ydl.evaluate_outtmpl(tmpl, info or self.outtmpl_info)
out = ydl.escape_outtmpl(outtmpl) % tmpl_dict
fname = ydl.prepare_filename(info or self.outtmpl_info) fname = ydl.prepare_filename(info or self.outtmpl_info)
if not isinstance(expected, (list, tuple)): if not isinstance(expected, (list, tuple)):
@ -689,6 +690,7 @@ class TestYoutubeDL(unittest.TestCase):
test('%(duration_string)s', ('27:46:40', '27-46-40')) test('%(duration_string)s', ('27:46:40', '27-46-40'))
test('%(resolution)s', '1080p') test('%(resolution)s', '1080p')
test('%(playlist_index)s', '001') test('%(playlist_index)s', '001')
test('%(playlist_autonumber)s', '02')
test('%(autonumber)s', '00001') test('%(autonumber)s', '00001')
test('%(autonumber+2)03d', '005', autonumber_start=3) test('%(autonumber+2)03d', '005', autonumber_start=3)
test('%(autonumber)s', '001', autonumber_size=3) test('%(autonumber)s', '001', autonumber_size=3)
@ -715,6 +717,7 @@ class TestYoutubeDL(unittest.TestCase):
test('%(id)s', '.abcd', info={'id': '.abcd'}) test('%(id)s', '.abcd', info={'id': '.abcd'})
test('%(id)s', 'ab__cd', info={'id': 'ab__cd'}) test('%(id)s', 'ab__cd', info={'id': 'ab__cd'})
test('%(id)s', ('ab:cd', 'ab -cd'), info={'id': 'ab:cd'}) test('%(id)s', ('ab:cd', 'ab -cd'), info={'id': 'ab:cd'})
test('%(id.0)s', '-', info={'id': '--'})
# Invalid templates # Invalid templates
self.assertTrue(isinstance(YoutubeDL.validate_outtmpl('%(title)'), ValueError)) self.assertTrue(isinstance(YoutubeDL.validate_outtmpl('%(title)'), ValueError))
@ -735,6 +738,7 @@ class TestYoutubeDL(unittest.TestCase):
test(NA_TEST_OUTTMPL, 'NA-NA-def-1234.mp4') test(NA_TEST_OUTTMPL, 'NA-NA-def-1234.mp4')
test(NA_TEST_OUTTMPL, 'none-none-def-1234.mp4', outtmpl_na_placeholder='none') test(NA_TEST_OUTTMPL, 'none-none-def-1234.mp4', outtmpl_na_placeholder='none')
test(NA_TEST_OUTTMPL, '--def-1234.mp4', outtmpl_na_placeholder='') test(NA_TEST_OUTTMPL, '--def-1234.mp4', outtmpl_na_placeholder='')
test('%(non_existent.0)s', 'NA')
# String formatting # String formatting
FMT_TEST_OUTTMPL = '%%(height)%s.%%(ext)s' FMT_TEST_OUTTMPL = '%%(height)%s.%%(ext)s'
@ -760,17 +764,32 @@ class TestYoutubeDL(unittest.TestCase):
test('a%(width|)d', 'a', outtmpl_na_placeholder='none') test('a%(width|)d', 'a', outtmpl_na_placeholder='none')
FORMATS = self.outtmpl_info['formats'] FORMATS = self.outtmpl_info['formats']
sanitize = lambda x: x.replace(':', ' -').replace('"', "'") sanitize = lambda x: x.replace(':', ' -').replace('"', "'").replace('\n', ' ')
# Custom type casting # Custom type casting
test('%(formats.:.id)l', 'id1, id2, id3') test('%(formats.:.id)l', 'id 1, id 2, id 3')
test('%(formats.:.id)#l', ('id 1\nid 2\nid 3', 'id 1 id 2 id 3'))
test('%(ext)l', 'mp4') test('%(ext)l', 'mp4')
test('%(formats.:.id) 15l', ' id1, id2, id3') test('%(formats.:.id) 18l', ' id 1, id 2, id 3')
test('%(formats)j', (json.dumps(FORMATS), sanitize(json.dumps(FORMATS)))) test('%(formats)j', (json.dumps(FORMATS), sanitize(json.dumps(FORMATS))))
test('%(formats)#j', (json.dumps(FORMATS, indent=4), sanitize(json.dumps(FORMATS, indent=4))))
test('%(title5).3B', 'á')
test('%(title5)U', 'áéí 𝐀')
test('%(title5)#U', 'a\u0301e\u0301i\u0301 𝐀')
test('%(title5)+U', 'áéí A')
test('%(title5)+#U', 'a\u0301e\u0301i\u0301 A')
test('%(height)D', '1K')
test('%(height)5.2D', ' 1.08K')
test('%(title4)#S', 'foo_bar_test')
test('%(title4).10S', ('foo \'bar\' ', 'foo \'bar\'' + ('#' if compat_os_name == 'nt' else ' ')))
if compat_os_name == 'nt': if compat_os_name == 'nt':
test('%(title4)q', ('"foo \\"bar\\" test"', "'foo _'bar_' test'")) test('%(title4)q', ('"foo \\"bar\\" test"', "'foo _'bar_' test'"))
test('%(formats.:.id)#q', ('"id 1" "id 2" "id 3"', "'id 1' 'id 2' 'id 3'"))
test('%(formats.0.id)#q', ('"id 1"', "'id 1'"))
else: else:
test('%(title4)q', ('\'foo "bar" test\'', "'foo 'bar' test'")) test('%(title4)q', ('\'foo "bar" test\'', "'foo 'bar' test'"))
test('%(formats.:.id)#q', "'id 1' 'id 2' 'id 3'")
test('%(formats.0.id)#q', "'id 1'")
# Internal formatting # Internal formatting
test('%(timestamp-1000>%H-%M-%S)s', '11-43-20') test('%(timestamp-1000>%H-%M-%S)s', '11-43-20')
@ -788,6 +807,17 @@ class TestYoutubeDL(unittest.TestCase):
test('%(formats.0.id.-1+id)f', '1235.000000') test('%(formats.0.id.-1+id)f', '1235.000000')
test('%(formats.0.id.-1+formats.1.id.-1)d', '3') test('%(formats.0.id.-1+formats.1.id.-1)d', '3')
# Alternates
test('%(title,id)s', '1234')
test('%(width-100,height+20|def)d', '1100')
test('%(width-100,height+width|def)s', 'def')
test('%(timestamp-x>%H\\,%M\\,%S,timestamp>%H\\,%M\\,%S)s', '12,00,00')
# Replacement
test('%(id&foo)s.bar', 'foo.bar')
test('%(title&foo)s.bar', 'NA.bar')
test('%(title&foo|baz)s.bar', 'baz.bar')
# Laziness # Laziness
def gen(): def gen():
yield from range(5) yield from range(5)
@ -803,6 +833,12 @@ class TestYoutubeDL(unittest.TestCase):
compat_setenv('__yt_dlp_var', 'expanded') compat_setenv('__yt_dlp_var', 'expanded')
envvar = '%__yt_dlp_var%' if compat_os_name == 'nt' else '$__yt_dlp_var' envvar = '%__yt_dlp_var%' if compat_os_name == 'nt' else '$__yt_dlp_var'
test(envvar, (envvar, 'expanded')) test(envvar, (envvar, 'expanded'))
if compat_os_name == 'nt':
test('%s%', ('%s%', '%s%'))
compat_setenv('s', 'expanded')
test('%s%', ('%s%', 'expanded')) # %s% should be expanded before escaping %s
compat_setenv('(test)s', 'expanded')
test('%(test)s%', ('NA%', 'expanded')) # Environment should take priority over template
# Path expansion and escaping # Path expansion and escaping
test('Hello %(title1)s', 'Hello $PATH') test('Hello %(title1)s', 'Hello $PATH')
@ -992,6 +1028,7 @@ class TestYoutubeDL(unittest.TestCase):
test_selection({'playlist_items': '2-4'}, [2, 3, 4]) test_selection({'playlist_items': '2-4'}, [2, 3, 4])
test_selection({'playlist_items': '2,4'}, [2, 4]) test_selection({'playlist_items': '2,4'}, [2, 4])
test_selection({'playlist_items': '10'}, []) test_selection({'playlist_items': '10'}, [])
test_selection({'playlist_items': '0'}, [])
# Tests for https://github.com/ytdl-org/youtube-dl/issues/10591 # Tests for https://github.com/ytdl-org/youtube-dl/issues/10591
test_selection({'playlist_items': '2-4,3-4,3'}, [2, 3, 4]) test_selection({'playlist_items': '2-4,3-4,3'}, [2, 3, 4])


@ -7,7 +7,22 @@ import sys
import unittest import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from yt_dlp.aes import aes_decrypt, aes_encrypt, aes_cbc_decrypt, aes_cbc_encrypt, aes_decrypt_text from yt_dlp.aes import (
aes_decrypt,
aes_encrypt,
aes_ecb_encrypt,
aes_ecb_decrypt,
aes_cbc_decrypt,
aes_cbc_decrypt_bytes,
aes_cbc_encrypt,
aes_ctr_decrypt,
aes_ctr_encrypt,
aes_gcm_decrypt_and_verify,
aes_gcm_decrypt_and_verify_bytes,
aes_decrypt_text,
BLOCK_SIZE_BYTES,
)
from yt_dlp.compat import compat_pycrypto_AES
from yt_dlp.utils import bytes_to_intlist, intlist_to_bytes from yt_dlp.utils import bytes_to_intlist, intlist_to_bytes
import base64 import base64
@ -27,18 +42,43 @@ class TestAES(unittest.TestCase):
self.assertEqual(decrypted, msg) self.assertEqual(decrypted, msg)
def test_cbc_decrypt(self): def test_cbc_decrypt(self):
data = bytes_to_intlist( data = b'\x97\x92+\xe5\x0b\xc3\x18\x91ky9m&\xb3\xb5@\xe6\x27\xc2\x96.\xc8u\x88\xab9-[\x9e|\xf1\xcd'
b"\x97\x92+\xe5\x0b\xc3\x18\x91ky9m&\xb3\xb5@\xe6'\xc2\x96.\xc8u\x88\xab9-[\x9e|\xf1\xcd" decrypted = intlist_to_bytes(aes_cbc_decrypt(bytes_to_intlist(data), self.key, self.iv))
)
decrypted = intlist_to_bytes(aes_cbc_decrypt(data, self.key, self.iv))
self.assertEqual(decrypted.rstrip(b'\x08'), self.secret_msg) self.assertEqual(decrypted.rstrip(b'\x08'), self.secret_msg)
if compat_pycrypto_AES:
decrypted = aes_cbc_decrypt_bytes(data, intlist_to_bytes(self.key), intlist_to_bytes(self.iv))
self.assertEqual(decrypted.rstrip(b'\x08'), self.secret_msg)
def test_cbc_encrypt(self): def test_cbc_encrypt(self):
data = bytes_to_intlist(self.secret_msg) data = bytes_to_intlist(self.secret_msg)
encrypted = intlist_to_bytes(aes_cbc_encrypt(data, self.key, self.iv)) encrypted = intlist_to_bytes(aes_cbc_encrypt(data, self.key, self.iv))
self.assertEqual( self.assertEqual(
encrypted, encrypted,
b"\x97\x92+\xe5\x0b\xc3\x18\x91ky9m&\xb3\xb5@\xe6'\xc2\x96.\xc8u\x88\xab9-[\x9e|\xf1\xcd") b'\x97\x92+\xe5\x0b\xc3\x18\x91ky9m&\xb3\xb5@\xe6\'\xc2\x96.\xc8u\x88\xab9-[\x9e|\xf1\xcd')
def test_ctr_decrypt(self):
data = bytes_to_intlist(b'\x03\xc7\xdd\xd4\x8e\xb3\xbc\x1a*O\xdc1\x12+8Aio\xd1z\xb5#\xaf\x08')
decrypted = intlist_to_bytes(aes_ctr_decrypt(data, self.key, self.iv))
self.assertEqual(decrypted.rstrip(b'\x08'), self.secret_msg)
def test_ctr_encrypt(self):
data = bytes_to_intlist(self.secret_msg)
encrypted = intlist_to_bytes(aes_ctr_encrypt(data, self.key, self.iv))
self.assertEqual(
encrypted,
b'\x03\xc7\xdd\xd4\x8e\xb3\xbc\x1a*O\xdc1\x12+8Aio\xd1z\xb5#\xaf\x08')
def test_gcm_decrypt(self):
data = b'\x159Y\xcf5eud\x90\x9c\x85&]\x14\x1d\x0f.\x08\xb4T\xe4/\x17\xbd'
authentication_tag = b'\xe8&I\x80rI\x07\x9d}YWuU@:e'
decrypted = intlist_to_bytes(aes_gcm_decrypt_and_verify(
bytes_to_intlist(data), self.key, bytes_to_intlist(authentication_tag), self.iv[:12]))
self.assertEqual(decrypted.rstrip(b'\x08'), self.secret_msg)
if compat_pycrypto_AES:
decrypted = aes_gcm_decrypt_and_verify_bytes(
data, intlist_to_bytes(self.key), authentication_tag, intlist_to_bytes(self.iv[:12]))
self.assertEqual(decrypted.rstrip(b'\x08'), self.secret_msg)
def test_decrypt_text(self): def test_decrypt_text(self):
password = intlist_to_bytes(self.key).decode('utf-8') password = intlist_to_bytes(self.key).decode('utf-8')
@ -57,6 +97,19 @@ class TestAES(unittest.TestCase):
decrypted = (aes_decrypt_text(encrypted, password, 32)) decrypted = (aes_decrypt_text(encrypted, password, 32))
self.assertEqual(decrypted, self.secret_msg) self.assertEqual(decrypted, self.secret_msg)
def test_ecb_encrypt(self):
data = bytes_to_intlist(self.secret_msg)
data += [0x08] * (BLOCK_SIZE_BYTES - len(data) % BLOCK_SIZE_BYTES)
encrypted = intlist_to_bytes(aes_ecb_encrypt(data, self.key, self.iv))
self.assertEqual(
encrypted,
b'\xaa\x86]\x81\x97>\x02\x92\x9d\x1bR[[L/u\xd3&\xd1(h\xde{\x81\x94\xba\x02\xae\xbd\xa6\xd0:')
def test_ecb_decrypt(self):
data = bytes_to_intlist(b'\xaa\x86]\x81\x97>\x02\x92\x9d\x1bR[[L/u\xd3&\xd1(h\xde{\x81\x94\xba\x02\xae\xbd\xa6\xd0:')
decrypted = intlist_to_bytes(aes_ecb_decrypt(data, self.key, self.iv))
self.assertEqual(decrypted.rstrip(b'\x08'), self.secret_msg)
if __name__ == '__main__': if __name__ == '__main__':
unittest.main() unittest.main()
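
The added tests cover the CTR, GCM and ECB modes of the pure-Python AES implementation. A round-trip sketch using the same helpers the tests import; the all-zero key and IV are placeholders only:

```python
from yt_dlp.aes import aes_ctr_decrypt, aes_ctr_encrypt
from yt_dlp.utils import bytes_to_intlist, intlist_to_bytes

key = bytes_to_intlist(b'\x00' * 16)  # placeholder 128-bit key
iv = bytes_to_intlist(b'\x00' * 16)
msg = b'Secret message goes here'

ciphertext = aes_ctr_encrypt(bytes_to_intlist(msg), key, iv)
plaintext = intlist_to_bytes(aes_ctr_decrypt(ciphertext, key, iv))
assert plaintext == msg  # CTR encryption and decryption are the same keystream XOR
```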


@ -38,7 +38,6 @@ class TestAllURLsMatching(unittest.TestCase):
assertTab('https://www.youtube.com/AsapSCIENCE') assertTab('https://www.youtube.com/AsapSCIENCE')
assertTab('https://www.youtube.com/embedded') assertTab('https://www.youtube.com/embedded')
assertTab('https://www.youtube.com/playlist?list=UUBABnxM4Ar9ten8Mdjj1j0Q') assertTab('https://www.youtube.com/playlist?list=UUBABnxM4Ar9ten8Mdjj1j0Q')
assertTab('https://www.youtube.com/course?list=ECUl4u3cNGP61MdtwGTqZA0MreSaDybji8')
assertTab('https://www.youtube.com/playlist?list=PLwP_SiAcdui0KVebT0mU9Apz359a4ubsC') assertTab('https://www.youtube.com/playlist?list=PLwP_SiAcdui0KVebT0mU9Apz359a4ubsC')
assertTab('https://www.youtube.com/watch?v=AV6J6_AeFEQ&playnext=1&list=PL4023E734DA416012') # 668 assertTab('https://www.youtube.com/watch?v=AV6J6_AeFEQ&playnext=1&list=PL4023E734DA416012') # 668
self.assertFalse('youtube:playlist' in self.matching_ies('PLtS2H6bU1M')) self.assertFalse('youtube:playlist' in self.matching_ies('PLtS2H6bU1M'))


@ -3,16 +3,30 @@ from datetime import datetime, timezone
from yt_dlp import cookies from yt_dlp import cookies
from yt_dlp.cookies import ( from yt_dlp.cookies import (
CRYPTO_AVAILABLE,
LinuxChromeCookieDecryptor, LinuxChromeCookieDecryptor,
MacChromeCookieDecryptor, MacChromeCookieDecryptor,
WindowsChromeCookieDecryptor, WindowsChromeCookieDecryptor,
YDLLogger,
parse_safari_cookies, parse_safari_cookies,
pbkdf2_sha1, pbkdf2_sha1,
_get_linux_desktop_environment,
_LinuxDesktopEnvironment,
) )
class Logger:
def debug(self, message):
print(f'[verbose] {message}')
def info(self, message):
print(message)
def warning(self, message, only_once=False):
self.error(message)
def error(self, message):
raise Exception(message)
class MonkeyPatch: class MonkeyPatch:
def __init__(self, module, temporary_values): def __init__(self, module, temporary_values):
self._module = module self._module = module
@ -30,6 +44,37 @@ class MonkeyPatch:
class TestCookies(unittest.TestCase): class TestCookies(unittest.TestCase):
def test_get_desktop_environment(self):
""" based on https://chromium.googlesource.com/chromium/src/+/refs/heads/main/base/nix/xdg_util_unittest.cc """
test_cases = [
({}, _LinuxDesktopEnvironment.OTHER),
({'DESKTOP_SESSION': 'gnome'}, _LinuxDesktopEnvironment.GNOME),
({'DESKTOP_SESSION': 'mate'}, _LinuxDesktopEnvironment.GNOME),
({'DESKTOP_SESSION': 'kde4'}, _LinuxDesktopEnvironment.KDE),
({'DESKTOP_SESSION': 'kde'}, _LinuxDesktopEnvironment.KDE),
({'DESKTOP_SESSION': 'xfce'}, _LinuxDesktopEnvironment.XFCE),
({'GNOME_DESKTOP_SESSION_ID': 1}, _LinuxDesktopEnvironment.GNOME),
({'KDE_FULL_SESSION': 1}, _LinuxDesktopEnvironment.KDE),
({'XDG_CURRENT_DESKTOP': 'X-Cinnamon'}, _LinuxDesktopEnvironment.CINNAMON),
({'XDG_CURRENT_DESKTOP': 'GNOME'}, _LinuxDesktopEnvironment.GNOME),
({'XDG_CURRENT_DESKTOP': 'GNOME:GNOME-Classic'}, _LinuxDesktopEnvironment.GNOME),
({'XDG_CURRENT_DESKTOP': 'GNOME : GNOME-Classic'}, _LinuxDesktopEnvironment.GNOME),
({'XDG_CURRENT_DESKTOP': 'Unity', 'DESKTOP_SESSION': 'gnome-fallback'}, _LinuxDesktopEnvironment.GNOME),
({'XDG_CURRENT_DESKTOP': 'KDE', 'KDE_SESSION_VERSION': '5'}, _LinuxDesktopEnvironment.KDE),
({'XDG_CURRENT_DESKTOP': 'KDE'}, _LinuxDesktopEnvironment.KDE),
({'XDG_CURRENT_DESKTOP': 'Pantheon'}, _LinuxDesktopEnvironment.PANTHEON),
({'XDG_CURRENT_DESKTOP': 'Unity'}, _LinuxDesktopEnvironment.UNITY),
({'XDG_CURRENT_DESKTOP': 'Unity:Unity7'}, _LinuxDesktopEnvironment.UNITY),
({'XDG_CURRENT_DESKTOP': 'Unity:Unity8'}, _LinuxDesktopEnvironment.UNITY),
]
for env, expected_desktop_environment in test_cases:
self.assertEqual(_get_linux_desktop_environment(env), expected_desktop_environment)
def test_chrome_cookie_decryptor_linux_derive_key(self): def test_chrome_cookie_decryptor_linux_derive_key(self):
key = LinuxChromeCookieDecryptor.derive_key(b'abc') key = LinuxChromeCookieDecryptor.derive_key(b'abc')
self.assertEqual(key, b'7\xa1\xec\xd4m\xfcA\xc7\xb19Z\xd0\x19\xdcM\x17') self.assertEqual(key, b'7\xa1\xec\xd4m\xfcA\xc7\xb19Z\xd0\x19\xdcM\x17')
@ -42,32 +87,30 @@ class TestCookies(unittest.TestCase):
with MonkeyPatch(cookies, {'_get_linux_keyring_password': lambda *args, **kwargs: b''}): with MonkeyPatch(cookies, {'_get_linux_keyring_password': lambda *args, **kwargs: b''}):
encrypted_value = b'v10\xccW%\xcd\xe6\xe6\x9fM" \xa7\xb0\xca\xe4\x07\xd6' encrypted_value = b'v10\xccW%\xcd\xe6\xe6\x9fM" \xa7\xb0\xca\xe4\x07\xd6'
value = 'USD' value = 'USD'
decryptor = LinuxChromeCookieDecryptor('Chrome', YDLLogger()) decryptor = LinuxChromeCookieDecryptor('Chrome', Logger())
self.assertEqual(decryptor.decrypt(encrypted_value), value) self.assertEqual(decryptor.decrypt(encrypted_value), value)
def test_chrome_cookie_decryptor_linux_v11(self): def test_chrome_cookie_decryptor_linux_v11(self):
with MonkeyPatch(cookies, {'_get_linux_keyring_password': lambda *args, **kwargs: b'', with MonkeyPatch(cookies, {'_get_linux_keyring_password': lambda *args, **kwargs: b''}):
'KEYRING_AVAILABLE': True}):
encrypted_value = b'v11#\x81\x10>`w\x8f)\xc0\xb2\xc1\r\xf4\x1al\xdd\x93\xfd\xf8\xf8N\xf2\xa9\x83\xf1\xe9o\x0elVQd' encrypted_value = b'v11#\x81\x10>`w\x8f)\xc0\xb2\xc1\r\xf4\x1al\xdd\x93\xfd\xf8\xf8N\xf2\xa9\x83\xf1\xe9o\x0elVQd'
value = 'tz=Europe.London' value = 'tz=Europe.London'
decryptor = LinuxChromeCookieDecryptor('Chrome', YDLLogger()) decryptor = LinuxChromeCookieDecryptor('Chrome', Logger())
self.assertEqual(decryptor.decrypt(encrypted_value), value) self.assertEqual(decryptor.decrypt(encrypted_value), value)
@unittest.skipIf(not CRYPTO_AVAILABLE, 'cryptography library not available')
def test_chrome_cookie_decryptor_windows_v10(self): def test_chrome_cookie_decryptor_windows_v10(self):
with MonkeyPatch(cookies, { with MonkeyPatch(cookies, {
'_get_windows_v10_key': lambda *args, **kwargs: b'Y\xef\xad\xad\xeerp\xf0Y\xe6\x9b\x12\xc2<z\x16]\n\xbb\xb8\xcb\xd7\x9bA\xc3\x14e\x99{\xd6\xf4&' '_get_windows_v10_key': lambda *args, **kwargs: b'Y\xef\xad\xad\xeerp\xf0Y\xe6\x9b\x12\xc2<z\x16]\n\xbb\xb8\xcb\xd7\x9bA\xc3\x14e\x99{\xd6\xf4&'
}): }):
encrypted_value = b'v10T\xb8\xf3\xb8\x01\xa7TtcV\xfc\x88\xb8\xb8\xef\x05\xb5\xfd\x18\xc90\x009\xab\xb1\x893\x85)\x87\xe1\xa9-\xa3\xad=' encrypted_value = b'v10T\xb8\xf3\xb8\x01\xa7TtcV\xfc\x88\xb8\xb8\xef\x05\xb5\xfd\x18\xc90\x009\xab\xb1\x893\x85)\x87\xe1\xa9-\xa3\xad='
value = '32101439' value = '32101439'
decryptor = WindowsChromeCookieDecryptor('', YDLLogger()) decryptor = WindowsChromeCookieDecryptor('', Logger())
self.assertEqual(decryptor.decrypt(encrypted_value), value) self.assertEqual(decryptor.decrypt(encrypted_value), value)
def test_chrome_cookie_decryptor_mac_v10(self): def test_chrome_cookie_decryptor_mac_v10(self):
with MonkeyPatch(cookies, {'_get_mac_keyring_password': lambda *args, **kwargs: b'6eIDUdtKAacvlHwBVwvg/Q=='}): with MonkeyPatch(cookies, {'_get_mac_keyring_password': lambda *args, **kwargs: b'6eIDUdtKAacvlHwBVwvg/Q=='}):
encrypted_value = b'v10\xb3\xbe\xad\xa1[\x9fC\xa1\x98\xe0\x9a\x01\xd9\xcf\xbfc' encrypted_value = b'v10\xb3\xbe\xad\xa1[\x9fC\xa1\x98\xe0\x9a\x01\xd9\xcf\xbfc'
value = '2021-06-01-22' value = '2021-06-01-22'
decryptor = MacChromeCookieDecryptor('', YDLLogger()) decryptor = MacChromeCookieDecryptor('', Logger())
self.assertEqual(decryptor.decrypt(encrypted_value), value) self.assertEqual(decryptor.decrypt(encrypted_value), value)
def test_safari_cookie_parsing(self): def test_safari_cookie_parsing(self):
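
The `test_get_desktop_environment` cases are ported from Chromium's `xdg_util` unit tests; they work because `_get_linux_desktop_environment` takes the environment as an explicit mapping instead of reading `os.environ`. A sketch of the same call:

```python
from yt_dlp.cookies import _LinuxDesktopEnvironment, _get_linux_desktop_environment

# a synthetic environment, as in the test cases above
env = {'XDG_CURRENT_DESKTOP': 'KDE', 'KDE_SESSION_VERSION': '5'}
assert _get_linux_desktop_environment(env) == _LinuxDesktopEnvironment.KDE
```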

test/test_download.py Normal file → Executable file

@ -112,6 +112,71 @@ class TestJSInterpreter(unittest.TestCase):
''') ''')
self.assertEqual(jsi.call_function('z'), 5) self.assertEqual(jsi.call_function('z'), 5)
def test_for_loop(self):
jsi = JSInterpreter('''
function x() { a=0; for (i=0; i-10; i++) {a++} a }
''')
self.assertEqual(jsi.call_function('x'), 10)
def test_switch(self):
jsi = JSInterpreter('''
function x(f) { switch(f){
case 1:f+=1;
case 2:f+=2;
case 3:f+=3;break;
case 4:f+=4;
default:f=0;
} return f }
''')
self.assertEqual(jsi.call_function('x', 1), 7)
self.assertEqual(jsi.call_function('x', 3), 6)
self.assertEqual(jsi.call_function('x', 5), 0)
def test_switch_default(self):
jsi = JSInterpreter('''
function x(f) { switch(f){
case 2: f+=2;
default: f-=1;
case 5:
case 6: f+=6;
case 0: break;
case 1: f+=1;
} return f }
''')
self.assertEqual(jsi.call_function('x', 1), 2)
self.assertEqual(jsi.call_function('x', 5), 11)
self.assertEqual(jsi.call_function('x', 9), 14)
def test_try(self):
jsi = JSInterpreter('''
function x() { try{return 10} catch(e){return 5} }
''')
self.assertEqual(jsi.call_function('x'), 10)
def test_for_loop_continue(self):
jsi = JSInterpreter('''
function x() { a=0; for (i=0; i-10; i++) { continue; a++ } a }
''')
self.assertEqual(jsi.call_function('x'), 0)
def test_for_loop_break(self):
jsi = JSInterpreter('''
function x() { a=0; for (i=0; i-10; i++) { break; a++ } a }
''')
self.assertEqual(jsi.call_function('x'), 0)
def test_literal_list(self):
jsi = JSInterpreter('''
function x() { [1, 2, "asdf", [5, 6, 7]][3] }
''')
self.assertEqual(jsi.call_function('x'), [5, 6, 7])
def test_comma(self):
jsi = JSInterpreter('''
function x() { a=5; a -= 1, a+=3; return a }
''')
self.assertEqual(jsi.call_function('x'), 7)
if __name__ == '__main__': if __name__ == '__main__':
unittest.main() unittest.main()
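
The new interpreter tests cover `for` loops, `switch` fall-through, `try`, `continue`/`break`, list literals and the comma operator. A minimal sketch of driving the interpreter directly; the function body is arbitrary:

```python
from yt_dlp.jsinterp import JSInterpreter

# same loop shape as test_for_loop: 'i - n' is truthy until i == n
jsi = JSInterpreter('function f(n) { a = 0; for (i = 0; i - n; i++) { a += 2 } return a }')
print(jsi.call_function('f', 5))  # -> 10
```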


@ -6,6 +6,7 @@ from __future__ import unicode_literals
import os import os
import sys import sys
import unittest import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from yt_dlp import YoutubeDL from yt_dlp import YoutubeDL
@ -15,6 +16,7 @@ from yt_dlp.postprocessor import (
FFmpegThumbnailsConvertorPP, FFmpegThumbnailsConvertorPP,
MetadataFromFieldPP, MetadataFromFieldPP,
MetadataParserPP, MetadataParserPP,
ModifyChaptersPP
) )
@ -68,3 +70,493 @@ class TestExec(unittest.TestCase):
self.assertEqual(pp.parse_cmd('echo', info), cmd) self.assertEqual(pp.parse_cmd('echo', info), cmd)
self.assertEqual(pp.parse_cmd('echo {}', info), cmd) self.assertEqual(pp.parse_cmd('echo {}', info), cmd)
self.assertEqual(pp.parse_cmd('echo %(filepath)q', info), cmd) self.assertEqual(pp.parse_cmd('echo %(filepath)q', info), cmd)
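
`parse_cmd` accepts a bare command (the filepath is appended), `{}` placeholders, or full output-template syntax such as `%(filepath)q`; all three expand to the same shell-quoted invocation. A sketch, assuming the `ExecPP` class this test file constructs:

```python
from yt_dlp import YoutubeDL
from yt_dlp.postprocessor import ExecPP  # export name assumed from this test file

pp = ExecPP(YoutubeDL(), '')
info = {'filepath': 'test/video file.mp4'}
print(pp.parse_cmd('echo', info))               # filepath appended, quoted
print(pp.parse_cmd('echo {}', info))            # {} replaced with quoted filepath
print(pp.parse_cmd('echo %(filepath)q', info))  # output-template quoting
```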
class TestModifyChaptersPP(unittest.TestCase):
def setUp(self):
self._pp = ModifyChaptersPP(YoutubeDL())
@staticmethod
def _sponsor_chapter(start, end, cat, remove=False):
c = {'start_time': start, 'end_time': end, '_categories': [(cat, start, end)]}
if remove:
c['remove'] = True
return c
@staticmethod
def _chapter(start, end, title=None, remove=False):
c = {'start_time': start, 'end_time': end}
if title is not None:
c['title'] = title
if remove:
c['remove'] = True
return c
def _chapters(self, ends, titles):
self.assertEqual(len(ends), len(titles))
start = 0
chapters = []
for e, t in zip(ends, titles):
chapters.append(self._chapter(start, e, t))
start = e
return chapters
def _remove_marked_arrange_sponsors_test_impl(
self, chapters, expected_chapters, expected_removed):
actual_chapters, actual_removed = (
self._pp._remove_marked_arrange_sponsors(chapters))
for c in actual_removed:
c.pop('title', None)
c.pop('_categories', None)
actual_chapters = [{
'start_time': c['start_time'],
'end_time': c['end_time'],
'title': c['title'],
} for c in actual_chapters]
self.assertSequenceEqual(expected_chapters, actual_chapters)
self.assertSequenceEqual(expected_removed, actual_removed)
def test_remove_marked_arrange_sponsors_CanGetThroughUnaltered(self):
chapters = self._chapters([10, 20, 30, 40], ['c1', 'c2', 'c3', 'c4'])
self._remove_marked_arrange_sponsors_test_impl(chapters, chapters, [])
def test_remove_marked_arrange_sponsors_ChapterWithSponsors(self):
chapters = self._chapters([70], ['c']) + [
self._sponsor_chapter(10, 20, 'sponsor'),
self._sponsor_chapter(30, 40, 'preview'),
self._sponsor_chapter(50, 60, 'filler')]
expected = self._chapters(
[10, 20, 30, 40, 50, 60, 70],
['c', '[SponsorBlock]: Sponsor', 'c', '[SponsorBlock]: Preview/Recap',
'c', '[SponsorBlock]: Filler Tangent', 'c'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
def test_remove_marked_arrange_sponsors_UniqueNamesForOverlappingSponsors(self):
chapters = self._chapters([120], ['c']) + [
self._sponsor_chapter(10, 45, 'sponsor'), self._sponsor_chapter(20, 40, 'selfpromo'),
self._sponsor_chapter(50, 70, 'sponsor'), self._sponsor_chapter(60, 85, 'selfpromo'),
self._sponsor_chapter(90, 120, 'selfpromo'), self._sponsor_chapter(100, 110, 'sponsor')]
expected = self._chapters(
[10, 20, 40, 45, 50, 60, 70, 85, 90, 100, 110, 120],
['c', '[SponsorBlock]: Sponsor', '[SponsorBlock]: Sponsor, Unpaid/Self Promotion',
'[SponsorBlock]: Sponsor',
'c', '[SponsorBlock]: Sponsor', '[SponsorBlock]: Sponsor, Unpaid/Self Promotion',
'[SponsorBlock]: Unpaid/Self Promotion',
'c', '[SponsorBlock]: Unpaid/Self Promotion', '[SponsorBlock]: Unpaid/Self Promotion, Sponsor',
'[SponsorBlock]: Unpaid/Self Promotion'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
def test_remove_marked_arrange_sponsors_ChapterWithCuts(self):
cuts = [self._chapter(10, 20, remove=True),
self._sponsor_chapter(30, 40, 'sponsor', remove=True),
self._chapter(50, 60, remove=True)]
chapters = self._chapters([70], ['c']) + cuts
self._remove_marked_arrange_sponsors_test_impl(
chapters, self._chapters([40], ['c']), cuts)
def test_remove_marked_arrange_sponsors_ChapterWithSponsorsAndCuts(self):
chapters = self._chapters([70], ['c']) + [
self._sponsor_chapter(10, 20, 'sponsor'),
self._sponsor_chapter(30, 40, 'selfpromo', remove=True),
self._sponsor_chapter(50, 60, 'interaction')]
expected = self._chapters([10, 20, 40, 50, 60],
['c', '[SponsorBlock]: Sponsor', 'c',
'[SponsorBlock]: Interaction Reminder', 'c'])
self._remove_marked_arrange_sponsors_test_impl(
chapters, expected, [self._chapter(30, 40, remove=True)])
def test_remove_marked_arrange_sponsors_ChapterWithSponsorCutInTheMiddle(self):
cuts = [self._sponsor_chapter(20, 30, 'selfpromo', remove=True),
self._chapter(40, 50, remove=True)]
chapters = self._chapters([70], ['c']) + [self._sponsor_chapter(10, 60, 'sponsor')] + cuts
expected = self._chapters(
[10, 40, 50], ['c', '[SponsorBlock]: Sponsor', 'c'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, cuts)
def test_remove_marked_arrange_sponsors_ChapterWithCutHidingSponsor(self):
cuts = [self._sponsor_chapter(20, 50, 'selfpromo', remove=True)]
chapters = self._chapters([60], ['c']) + [
self._sponsor_chapter(10, 20, 'intro'),
self._sponsor_chapter(30, 40, 'sponsor'),
self._sponsor_chapter(50, 60, 'outro'),
] + cuts
expected = self._chapters(
[10, 20, 30], ['c', '[SponsorBlock]: Intermission/Intro Animation', '[SponsorBlock]: Endcards/Credits'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, cuts)
def test_remove_marked_arrange_sponsors_ChapterWithAdjacentSponsors(self):
chapters = self._chapters([70], ['c']) + [
self._sponsor_chapter(10, 20, 'sponsor'),
self._sponsor_chapter(20, 30, 'selfpromo'),
self._sponsor_chapter(30, 40, 'interaction')]
expected = self._chapters(
[10, 20, 30, 40, 70],
['c', '[SponsorBlock]: Sponsor', '[SponsorBlock]: Unpaid/Self Promotion',
'[SponsorBlock]: Interaction Reminder', 'c'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
def test_remove_marked_arrange_sponsors_ChapterWithAdjacentCuts(self):
chapters = self._chapters([70], ['c']) + [
self._sponsor_chapter(10, 20, 'sponsor'),
self._sponsor_chapter(20, 30, 'interaction', remove=True),
self._chapter(30, 40, remove=True),
self._sponsor_chapter(40, 50, 'selfpromo', remove=True),
self._sponsor_chapter(50, 60, 'interaction')]
expected = self._chapters([10, 20, 30, 40],
['c', '[SponsorBlock]: Sponsor',
'[SponsorBlock]: Interaction Reminder', 'c'])
self._remove_marked_arrange_sponsors_test_impl(
chapters, expected, [self._chapter(20, 50, remove=True)])
def test_remove_marked_arrange_sponsors_ChapterWithOverlappingSponsors(self):
chapters = self._chapters([70], ['c']) + [
self._sponsor_chapter(10, 30, 'sponsor'),
self._sponsor_chapter(20, 50, 'selfpromo'),
self._sponsor_chapter(40, 60, 'interaction')]
expected = self._chapters(
[10, 20, 30, 40, 50, 60, 70],
['c', '[SponsorBlock]: Sponsor', '[SponsorBlock]: Sponsor, Unpaid/Self Promotion',
'[SponsorBlock]: Unpaid/Self Promotion', '[SponsorBlock]: Unpaid/Self Promotion, Interaction Reminder',
'[SponsorBlock]: Interaction Reminder', 'c'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
def test_remove_marked_arrange_sponsors_ChapterWithOverlappingCuts(self):
chapters = self._chapters([70], ['c']) + [
self._sponsor_chapter(10, 30, 'sponsor', remove=True),
self._sponsor_chapter(20, 50, 'selfpromo', remove=True),
self._sponsor_chapter(40, 60, 'interaction', remove=True)]
self._remove_marked_arrange_sponsors_test_impl(
chapters, self._chapters([20], ['c']), [self._chapter(10, 60, remove=True)])
def test_remove_marked_arrange_sponsors_ChapterWithRunsOfOverlappingSponsors(self):
chapters = self._chapters([170], ['c']) + [
self._sponsor_chapter(0, 30, 'intro'),
self._sponsor_chapter(20, 50, 'sponsor'),
self._sponsor_chapter(40, 60, 'selfpromo'),
self._sponsor_chapter(70, 90, 'sponsor'),
self._sponsor_chapter(80, 100, 'sponsor'),
self._sponsor_chapter(90, 110, 'sponsor'),
self._sponsor_chapter(120, 140, 'selfpromo'),
self._sponsor_chapter(130, 160, 'interaction'),
self._sponsor_chapter(150, 170, 'outro')]
expected = self._chapters(
[20, 30, 40, 50, 60, 70, 110, 120, 130, 140, 150, 160, 170],
['[SponsorBlock]: Intermission/Intro Animation', '[SponsorBlock]: Intermission/Intro Animation, Sponsor', '[SponsorBlock]: Sponsor',
'[SponsorBlock]: Sponsor, Unpaid/Self Promotion', '[SponsorBlock]: Unpaid/Self Promotion', 'c',
'[SponsorBlock]: Sponsor', 'c', '[SponsorBlock]: Unpaid/Self Promotion',
'[SponsorBlock]: Unpaid/Self Promotion, Interaction Reminder',
'[SponsorBlock]: Interaction Reminder',
'[SponsorBlock]: Interaction Reminder, Endcards/Credits', '[SponsorBlock]: Endcards/Credits'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
def test_remove_marked_arrange_sponsors_ChapterWithRunsOfOverlappingCuts(self):
chapters = self._chapters([170], ['c']) + [
self._chapter(0, 30, remove=True),
self._sponsor_chapter(20, 50, 'sponsor', remove=True),
self._chapter(40, 60, remove=True),
self._sponsor_chapter(70, 90, 'sponsor', remove=True),
self._chapter(80, 100, remove=True),
self._chapter(90, 110, remove=True),
self._sponsor_chapter(120, 140, 'sponsor', remove=True),
self._sponsor_chapter(130, 160, 'selfpromo', remove=True),
self._chapter(150, 170, remove=True)]
expected_cuts = [self._chapter(0, 60, remove=True),
self._chapter(70, 110, remove=True),
self._chapter(120, 170, remove=True)]
self._remove_marked_arrange_sponsors_test_impl(
chapters, self._chapters([20], ['c']), expected_cuts)
def test_remove_marked_arrange_sponsors_OverlappingSponsorsDifferentTitlesAfterCut(self):
chapters = self._chapters([60], ['c']) + [
self._sponsor_chapter(10, 60, 'sponsor'),
self._sponsor_chapter(10, 40, 'intro'),
self._sponsor_chapter(30, 50, 'interaction'),
self._sponsor_chapter(30, 50, 'selfpromo', remove=True),
self._sponsor_chapter(40, 50, 'interaction'),
self._sponsor_chapter(50, 60, 'outro')]
expected = self._chapters(
[10, 30, 40], ['c', '[SponsorBlock]: Sponsor, Intermission/Intro Animation', '[SponsorBlock]: Sponsor, Endcards/Credits'])
self._remove_marked_arrange_sponsors_test_impl(
chapters, expected, [self._chapter(30, 50, remove=True)])
def test_remove_marked_arrange_sponsors_SponsorsNoLongerOverlapAfterCut(self):
chapters = self._chapters([70], ['c']) + [
self._sponsor_chapter(10, 30, 'sponsor'),
self._sponsor_chapter(20, 50, 'interaction'),
            self._sponsor_chapter(30, 50, 'selfpromo', remove=True),
self._sponsor_chapter(40, 60, 'sponsor'),
self._sponsor_chapter(50, 60, 'interaction')]
expected = self._chapters(
[10, 20, 40, 50], ['c', '[SponsorBlock]: Sponsor',
'[SponsorBlock]: Sponsor, Interaction Reminder', 'c'])
self._remove_marked_arrange_sponsors_test_impl(
chapters, expected, [self._chapter(30, 50, remove=True)])
def test_remove_marked_arrange_sponsors_SponsorsStillOverlapAfterCut(self):
chapters = self._chapters([70], ['c']) + [
self._sponsor_chapter(10, 60, 'sponsor'),
self._sponsor_chapter(20, 60, 'interaction'),
self._sponsor_chapter(30, 50, 'selfpromo', remove=True)]
expected = self._chapters(
[10, 20, 40, 50], ['c', '[SponsorBlock]: Sponsor',
'[SponsorBlock]: Sponsor, Interaction Reminder', 'c'])
self._remove_marked_arrange_sponsors_test_impl(
chapters, expected, [self._chapter(30, 50, remove=True)])
def test_remove_marked_arrange_sponsors_ChapterWithRunsOfOverlappingSponsorsAndCuts(self):
chapters = self._chapters([200], ['c']) + [
self._sponsor_chapter(10, 40, 'sponsor'),
self._sponsor_chapter(10, 30, 'intro'),
self._chapter(20, 30, remove=True),
self._sponsor_chapter(30, 40, 'selfpromo'),
self._sponsor_chapter(50, 70, 'sponsor'),
self._sponsor_chapter(60, 80, 'interaction'),
self._chapter(70, 80, remove=True),
self._sponsor_chapter(70, 90, 'sponsor'),
self._sponsor_chapter(80, 100, 'interaction'),
self._sponsor_chapter(120, 170, 'selfpromo'),
self._sponsor_chapter(130, 180, 'outro'),
self._chapter(140, 150, remove=True),
self._chapter(150, 160, remove=True)]
expected = self._chapters(
[10, 20, 30, 40, 50, 70, 80, 100, 110, 130, 140, 160],
['c', '[SponsorBlock]: Sponsor, Intermission/Intro Animation', '[SponsorBlock]: Sponsor, Unpaid/Self Promotion',
'c', '[SponsorBlock]: Sponsor', '[SponsorBlock]: Sponsor, Interaction Reminder',
'[SponsorBlock]: Interaction Reminder', 'c', '[SponsorBlock]: Unpaid/Self Promotion',
'[SponsorBlock]: Unpaid/Self Promotion, Endcards/Credits', '[SponsorBlock]: Endcards/Credits', 'c'])
expected_cuts = [self._chapter(20, 30, remove=True),
self._chapter(70, 80, remove=True),
self._chapter(140, 160, remove=True)]
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, expected_cuts)
def test_remove_marked_arrange_sponsors_SponsorOverlapsMultipleChapters(self):
chapters = (self._chapters([20, 40, 60, 80, 100], ['c1', 'c2', 'c3', 'c4', 'c5'])
+ [self._sponsor_chapter(10, 90, 'sponsor')])
expected = self._chapters([10, 90, 100], ['c1', '[SponsorBlock]: Sponsor', 'c5'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
def test_remove_marked_arrange_sponsors_CutOverlapsMultipleChapters(self):
cuts = [self._chapter(10, 90, remove=True)]
chapters = self._chapters([20, 40, 60, 80, 100], ['c1', 'c2', 'c3', 'c4', 'c5']) + cuts
expected = self._chapters([10, 20], ['c1', 'c5'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, cuts)
def test_remove_marked_arrange_sponsors_SponsorsWithinSomeChaptersAndOverlappingOthers(self):
chapters = (self._chapters([10, 40, 60, 80], ['c1', 'c2', 'c3', 'c4'])
+ [self._sponsor_chapter(20, 30, 'sponsor'),
self._sponsor_chapter(50, 70, 'selfpromo')])
expected = self._chapters([10, 20, 30, 40, 50, 70, 80],
['c1', 'c2', '[SponsorBlock]: Sponsor', 'c2', 'c3',
'[SponsorBlock]: Unpaid/Self Promotion', 'c4'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
def test_remove_marked_arrange_sponsors_CutsWithinSomeChaptersAndOverlappingOthers(self):
cuts = [self._chapter(20, 30, remove=True), self._chapter(50, 70, remove=True)]
chapters = self._chapters([10, 40, 60, 80], ['c1', 'c2', 'c3', 'c4']) + cuts
expected = self._chapters([10, 30, 40, 50], ['c1', 'c2', 'c3', 'c4'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, cuts)
def test_remove_marked_arrange_sponsors_ChaptersAfterLastSponsor(self):
chapters = (self._chapters([20, 40, 50, 60], ['c1', 'c2', 'c3', 'c4'])
+ [self._sponsor_chapter(10, 30, 'music_offtopic')])
expected = self._chapters(
[10, 30, 40, 50, 60],
['c1', '[SponsorBlock]: Non-Music Section', 'c2', 'c3', 'c4'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
def test_remove_marked_arrange_sponsors_ChaptersAfterLastCut(self):
cuts = [self._chapter(10, 30, remove=True)]
chapters = self._chapters([20, 40, 50, 60], ['c1', 'c2', 'c3', 'c4']) + cuts
expected = self._chapters([10, 20, 30, 40], ['c1', 'c2', 'c3', 'c4'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, cuts)
def test_remove_marked_arrange_sponsors_SponsorStartsAtChapterStart(self):
chapters = (self._chapters([10, 20, 40], ['c1', 'c2', 'c3'])
+ [self._sponsor_chapter(20, 30, 'sponsor')])
expected = self._chapters([10, 20, 30, 40], ['c1', 'c2', '[SponsorBlock]: Sponsor', 'c3'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
def test_remove_marked_arrange_sponsors_CutStartsAtChapterStart(self):
cuts = [self._chapter(20, 30, remove=True)]
chapters = self._chapters([10, 20, 40], ['c1', 'c2', 'c3']) + cuts
expected = self._chapters([10, 20, 30], ['c1', 'c2', 'c3'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, cuts)
def test_remove_marked_arrange_sponsors_SponsorEndsAtChapterEnd(self):
chapters = (self._chapters([10, 30, 40], ['c1', 'c2', 'c3'])
+ [self._sponsor_chapter(20, 30, 'sponsor')])
expected = self._chapters([10, 20, 30, 40], ['c1', 'c2', '[SponsorBlock]: Sponsor', 'c3'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
def test_remove_marked_arrange_sponsors_CutEndsAtChapterEnd(self):
cuts = [self._chapter(20, 30, remove=True)]
chapters = self._chapters([10, 30, 40], ['c1', 'c2', 'c3']) + cuts
expected = self._chapters([10, 20, 30], ['c1', 'c2', 'c3'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, cuts)
def test_remove_marked_arrange_sponsors_SponsorCoincidesWithChapters(self):
chapters = (self._chapters([10, 20, 30, 40], ['c1', 'c2', 'c3', 'c4'])
+ [self._sponsor_chapter(10, 30, 'sponsor')])
expected = self._chapters([10, 30, 40], ['c1', '[SponsorBlock]: Sponsor', 'c4'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
def test_remove_marked_arrange_sponsors_CutCoincidesWithChapters(self):
cuts = [self._chapter(10, 30, remove=True)]
chapters = self._chapters([10, 20, 30, 40], ['c1', 'c2', 'c3', 'c4']) + cuts
expected = self._chapters([10, 20], ['c1', 'c4'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, cuts)
def test_remove_marked_arrange_sponsors_SponsorsAtVideoBoundaries(self):
chapters = (self._chapters([20, 40, 60], ['c1', 'c2', 'c3'])
+ [self._sponsor_chapter(0, 10, 'intro'), self._sponsor_chapter(50, 60, 'outro')])
expected = self._chapters(
[10, 20, 40, 50, 60], ['[SponsorBlock]: Intermission/Intro Animation', 'c1', 'c2', 'c3', '[SponsorBlock]: Endcards/Credits'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
def test_remove_marked_arrange_sponsors_CutsAtVideoBoundaries(self):
cuts = [self._chapter(0, 10, remove=True), self._chapter(50, 60, remove=True)]
chapters = self._chapters([20, 40, 60], ['c1', 'c2', 'c3']) + cuts
expected = self._chapters([10, 30, 40], ['c1', 'c2', 'c3'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, cuts)
def test_remove_marked_arrange_sponsors_SponsorsOverlapChaptersAtVideoBoundaries(self):
chapters = (self._chapters([10, 40, 50], ['c1', 'c2', 'c3'])
+ [self._sponsor_chapter(0, 20, 'intro'), self._sponsor_chapter(30, 50, 'outro')])
expected = self._chapters(
[20, 30, 50], ['[SponsorBlock]: Intermission/Intro Animation', 'c2', '[SponsorBlock]: Endcards/Credits'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
def test_remove_marked_arrange_sponsors_CutsOverlapChaptersAtVideoBoundaries(self):
cuts = [self._chapter(0, 20, remove=True), self._chapter(30, 50, remove=True)]
chapters = self._chapters([10, 40, 50], ['c1', 'c2', 'c3']) + cuts
expected = self._chapters([10], ['c2'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, cuts)
def test_remove_marked_arrange_sponsors_EverythingSponsored(self):
chapters = (self._chapters([10, 20, 30, 40], ['c1', 'c2', 'c3', 'c4'])
+ [self._sponsor_chapter(0, 20, 'intro'), self._sponsor_chapter(20, 40, 'outro')])
expected = self._chapters([20, 40], ['[SponsorBlock]: Intermission/Intro Animation', '[SponsorBlock]: Endcards/Credits'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
def test_remove_marked_arrange_sponsors_EverythingCut(self):
cuts = [self._chapter(0, 20, remove=True), self._chapter(20, 40, remove=True)]
chapters = self._chapters([10, 20, 30, 40], ['c1', 'c2', 'c3', 'c4']) + cuts
self._remove_marked_arrange_sponsors_test_impl(
chapters, [], [self._chapter(0, 40, remove=True)])
def test_remove_marked_arrange_sponsors_TinyChaptersInTheOriginalArePreserved(self):
chapters = self._chapters([0.1, 0.2, 0.3, 0.4], ['c1', 'c2', 'c3', 'c4'])
self._remove_marked_arrange_sponsors_test_impl(chapters, chapters, [])
def test_remove_marked_arrange_sponsors_TinySponsorsAreIgnored(self):
chapters = [self._sponsor_chapter(0, 0.1, 'intro'), self._chapter(0.1, 0.2, 'c1'),
self._sponsor_chapter(0.2, 0.3, 'sponsor'), self._chapter(0.3, 0.4, 'c2'),
self._sponsor_chapter(0.4, 0.5, 'outro')]
self._remove_marked_arrange_sponsors_test_impl(
chapters, self._chapters([0.3, 0.5], ['c1', 'c2']), [])
def test_remove_marked_arrange_sponsors_TinyChaptersResultingFromCutsAreIgnored(self):
cuts = [self._chapter(1.5, 2.5, remove=True)]
chapters = self._chapters([2, 3, 3.5], ['c1', 'c2', 'c3']) + cuts
self._remove_marked_arrange_sponsors_test_impl(
chapters, self._chapters([2, 2.5], ['c1', 'c3']), cuts)
def test_remove_marked_arrange_sponsors_SingleTinyChapterIsPreserved(self):
cuts = [self._chapter(0.5, 2, remove=True)]
chapters = self._chapters([2], ['c']) + cuts
self._remove_marked_arrange_sponsors_test_impl(
chapters, self._chapters([0.5], ['c']), cuts)
def test_remove_marked_arrange_sponsors_TinyChapterAtTheStartPrependedToTheNext(self):
cuts = [self._chapter(0.5, 2, remove=True)]
chapters = self._chapters([2, 4], ['c1', 'c2']) + cuts
self._remove_marked_arrange_sponsors_test_impl(
chapters, self._chapters([2.5], ['c2']), cuts)
def test_remove_marked_arrange_sponsors_TinyChaptersResultingFromSponsorOverlapAreIgnored(self):
chapters = self._chapters([1, 3, 4], ['c1', 'c2', 'c3']) + [
self._sponsor_chapter(1.5, 2.5, 'sponsor')]
self._remove_marked_arrange_sponsors_test_impl(
chapters, self._chapters([1.5, 2.5, 4], ['c1', '[SponsorBlock]: Sponsor', 'c3']), [])
def test_remove_marked_arrange_sponsors_TinySponsorsOverlapsAreIgnored(self):
chapters = self._chapters([2, 3, 5], ['c1', 'c2', 'c3']) + [
self._sponsor_chapter(1, 3, 'sponsor'),
self._sponsor_chapter(2.5, 4, 'selfpromo')
]
self._remove_marked_arrange_sponsors_test_impl(
chapters, self._chapters([1, 3, 4, 5], [
'c1', '[SponsorBlock]: Sponsor', '[SponsorBlock]: Unpaid/Self Promotion', 'c3']), [])
def test_remove_marked_arrange_sponsors_TinySponsorsPrependedToTheNextSponsor(self):
chapters = self._chapters([4], ['c']) + [
self._sponsor_chapter(1.5, 2, 'sponsor'),
self._sponsor_chapter(2, 4, 'selfpromo')
]
self._remove_marked_arrange_sponsors_test_impl(
chapters, self._chapters([1.5, 4], ['c', '[SponsorBlock]: Unpaid/Self Promotion']), [])
def test_remove_marked_arrange_sponsors_SmallestSponsorInTheOverlapGetsNamed(self):
self._pp._sponsorblock_chapter_title = '[SponsorBlock]: %(name)s'
chapters = self._chapters([10], ['c']) + [
self._sponsor_chapter(2, 8, 'sponsor'),
self._sponsor_chapter(4, 6, 'selfpromo')
]
self._remove_marked_arrange_sponsors_test_impl(
chapters, self._chapters([2, 4, 6, 8, 10], [
'c', '[SponsorBlock]: Sponsor', '[SponsorBlock]: Unpaid/Self Promotion',
'[SponsorBlock]: Sponsor', 'c'
]), [])
def test_make_concat_opts_CommonCase(self):
sponsor_chapters = [self._chapter(1, 2, 's1'), self._chapter(10, 20, 's2')]
expected = '''ffconcat version 1.0
file 'file:test'
outpoint 1.000000
file 'file:test'
inpoint 2.000000
outpoint 10.000000
file 'file:test'
inpoint 20.000000
'''
opts = self._pp._make_concat_opts(sponsor_chapters, 30)
self.assertEqual(expected, ''.join(self._pp._concat_spec(['test'] * len(opts), opts)))
def test_make_concat_opts_NoZeroDurationChunkAtVideoStart(self):
sponsor_chapters = [self._chapter(0, 1, 's1'), self._chapter(10, 20, 's2')]
expected = '''ffconcat version 1.0
file 'file:test'
inpoint 1.000000
outpoint 10.000000
file 'file:test'
inpoint 20.000000
'''
opts = self._pp._make_concat_opts(sponsor_chapters, 30)
self.assertEqual(expected, ''.join(self._pp._concat_spec(['test'] * len(opts), opts)))
def test_make_concat_opts_NoZeroDurationChunkAtVideoEnd(self):
sponsor_chapters = [self._chapter(1, 2, 's1'), self._chapter(10, 20, 's2')]
expected = '''ffconcat version 1.0
file 'file:test'
outpoint 1.000000
file 'file:test'
inpoint 2.000000
outpoint 10.000000
'''
opts = self._pp._make_concat_opts(sponsor_chapters, 20)
self.assertEqual(expected, ''.join(self._pp._concat_spec(['test'] * len(opts), opts)))
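The expected strings above are ffmpeg concat-demuxer syntax: every entry re-reads the same input file, and `inpoint`/`outpoint` select the region to keep between cuts. A minimal sketch of how cut ranges could translate into such keep-regions (an illustration, not the actual `_make_concat_opts`):

def concat_keep_regions(cuts, duration):
    # cuts: sorted, non-overlapping (start, end) ranges to drop
    regions, pos = [], 0
    for start, end in cuts:
        if start > pos:  # keep everything before this cut
            regions.append({'inpoint': pos, 'outpoint': start})
        pos = end
    if pos < duration:  # keep the tail after the last cut
        regions.append({'inpoint': pos})
    return regions

# Cuts at 1-2 and 10-20 of a 30s file keep 0-1, 2-10 and 20-30,
# mirroring the three 'file' entries in test_make_concat_opts_CommonCase
print(concat_keep_regions([(1, 2), (10, 20)], 30))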
def test_quote_for_concat_RunsOfQuotes(self):
self.assertEqual(
r"'special '\'' '\'\''characters'\'\'\''galore'",
self._pp._quote_for_ffmpeg("special ' ''characters'''galore"))
def test_quote_for_concat_QuotesAtStart(self):
self.assertEqual(
r"\'\'\''special '\'' characters '\'' galore'",
self._pp._quote_for_ffmpeg("'''special ' characters ' galore"))
def test_quote_for_concat_QuotesAtEnd(self):
self.assertEqual(
r"'special '\'' characters '\'' galore'\'\'\'",
self._pp._quote_for_ffmpeg("special ' characters ' galore'''"))

View file

@ -19,6 +19,7 @@ from yt_dlp.extractor import (
    CeskaTelevizeIE,
    LyndaIE,
    NPOIE,
    PBSIE,
    ComedyCentralIE,
    NRKTVIE,
    RaiPlayIE,
@ -372,5 +373,42 @@ class TestDemocracynowSubtitles(BaseTestSubtitles):
        self.assertEqual(md5(subtitles['en']), 'acaca989e24a9e45a6719c9b3d60815c')
@is_download_test
class TestPBSSubtitles(BaseTestSubtitles):
url = 'https://www.pbs.org/video/how-fantasy-reflects-our-world-picecq/'
IE = PBSIE
def test_allsubtitles(self):
self.DL.params['writesubtitles'] = True
self.DL.params['allsubtitles'] = True
subtitles = self.getSubtitles()
self.assertEqual(set(subtitles.keys()), set(['en']))
def test_subtitles_dfxp_format(self):
self.DL.params['writesubtitles'] = True
self.DL.params['subtitlesformat'] = 'dfxp'
subtitles = self.getSubtitles()
self.assertIn(md5(subtitles['en']), ['643b034254cdc3768ff1e750b6b5873b'])
def test_subtitles_vtt_format(self):
self.DL.params['writesubtitles'] = True
self.DL.params['subtitlesformat'] = 'vtt'
subtitles = self.getSubtitles()
self.assertIn(
md5(subtitles['en']), ['937a05711555b165d4c55a9667017045', 'f49ea998d6824d94959c8152a368ff73'])
def test_subtitles_srt_format(self):
self.DL.params['writesubtitles'] = True
self.DL.params['subtitlesformat'] = 'srt'
subtitles = self.getSubtitles()
self.assertIn(md5(subtitles['en']), ['2082c21b43759d9bf172931b2f2ca371'])
def test_subtitles_sami_format(self):
self.DL.params['writesubtitles'] = True
self.DL.params['subtitlesformat'] = 'sami'
subtitles = self.getSubtitles()
self.assertIn(md5(subtitles['en']), ['4256b16ac7da6a6780fafd04294e85cd'])
if __name__ == '__main__':
    unittest.main()

View file

@ -37,6 +37,7 @@ from yt_dlp.utils import (
    ExtractorError,
    find_xpath_attr,
    fix_xml_ampersands,
    format_bytes,
    float_or_none,
    get_element_by_class,
    get_element_by_attribute,
@ -848,30 +849,52 @@ class TestUtil(unittest.TestCase):
        self.assertEqual(parse_codecs('avc1.77.30, mp4a.40.2'), {
            'vcodec': 'avc1.77.30',
            'acodec': 'mp4a.40.2',
            'dynamic_range': None,
        })
        self.assertEqual(parse_codecs('mp4a.40.2'), {
            'vcodec': 'none',
            'acodec': 'mp4a.40.2',
            'dynamic_range': None,
        })
        self.assertEqual(parse_codecs('mp4a.40.5,avc1.42001e'), {
            'vcodec': 'avc1.42001e',
            'acodec': 'mp4a.40.5',
            'dynamic_range': None,
        })
        self.assertEqual(parse_codecs('avc3.640028'), {
            'vcodec': 'avc3.640028',
            'acodec': 'none',
            'dynamic_range': None,
        })
        self.assertEqual(parse_codecs(', h264,,newcodec,aac'), {
            'vcodec': 'h264',
            'acodec': 'aac',
            'dynamic_range': None,
        })
        self.assertEqual(parse_codecs('av01.0.05M.08'), {
            'vcodec': 'av01.0.05M.08',
            'acodec': 'none',
            'dynamic_range': None,
        })
        self.assertEqual(parse_codecs('vp9.2'), {
            'vcodec': 'vp9.2',
            'acodec': 'none',
            'dynamic_range': 'HDR10',
        })
        self.assertEqual(parse_codecs('av01.0.12M.10.0.110.09.16.09.0'), {
            'vcodec': 'av01.0.12M.10',
            'acodec': 'none',
            'dynamic_range': 'HDR10',
        })
        self.assertEqual(parse_codecs('dvhe'), {
            'vcodec': 'dvhe',
            'acodec': 'none',
            'dynamic_range': 'DV',
        })
        self.assertEqual(parse_codecs('theora, vorbis'), {
            'vcodec': 'theora',
            'acodec': 'vorbis',
            'dynamic_range': None,
        })
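The three HDR-related assertions above pin down simple heuristics: Dolby Vision codec tags map to `'DV'`, while VP9 profile 2 and 10-bit AV1 map to `'HDR10'`. A sketch of that logic (an approximation, not the `parse_codecs` source):

def guess_dynamic_range(vcodec):
    parts = vcodec.split('.')
    if parts[0] in ('dvhe', 'dvh1'):            # Dolby Vision
        return 'DV'
    if parts[0] in ('vp9', 'vp09') and parts[1:2] == ['2']:
        return 'HDR10'                          # VP9 profile 2
    if parts[0] == 'av01' and len(parts) > 3 and parts[3] == '10':
        return 'HDR10'                          # 10-bit AV1
    return None

assert guess_dynamic_range('vp9.2') == 'HDR10'
assert guess_dynamic_range('av01.0.12M.10') == 'HDR10'
assert guess_dynamic_range('dvhe') == 'DV'
assert guess_dynamic_range('avc1.77.30') is None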
        self.assertEqual(parse_codecs('unknownvcodec, unknownacodec'), {
            'vcodec': 'unknownvcodec',
@ -1134,19 +1157,29 @@ class TestUtil(unittest.TestCase):
        self.assertEqual(parse_count('1000'), 1000)
        self.assertEqual(parse_count('1.000'), 1000)
        self.assertEqual(parse_count('1.1k'), 1100)
        self.assertEqual(parse_count('1.1 k'), 1100)
        self.assertEqual(parse_count('1,1 k'), 1100)
        self.assertEqual(parse_count('1.1kk'), 1100000)
        self.assertEqual(parse_count('1.1kk '), 1100000)
        self.assertEqual(parse_count('1,1kk'), 1100000)
        self.assertEqual(parse_count('100 views'), 100)
        self.assertEqual(parse_count('1,100 views'), 1100)
        self.assertEqual(parse_count('1.1kk views'), 1100000)
        self.assertEqual(parse_count('10M views'), 10000000)
        self.assertEqual(parse_count('has 10M views'), 10000000)

    def test_parse_resolution(self):
        self.assertEqual(parse_resolution(None), {})
        self.assertEqual(parse_resolution(''), {})
        self.assertEqual(parse_resolution(' 1920x1080'), {'width': 1920, 'height': 1080})
        self.assertEqual(parse_resolution('1920×1080 '), {'width': 1920, 'height': 1080})
        self.assertEqual(parse_resolution('1920 x 1080'), {'width': 1920, 'height': 1080})
        self.assertEqual(parse_resolution('720p'), {'height': 720})
        self.assertEqual(parse_resolution('4k'), {'height': 2160})
        self.assertEqual(parse_resolution('8K'), {'height': 4320})
        self.assertEqual(parse_resolution('pre_1920x1080_post'), {'width': 1920, 'height': 1080})
        self.assertEqual(parse_resolution('ep1x2'), {})
        self.assertEqual(parse_resolution('1920, 1080'), {'width': 1920, 'height': 1080})

    def test_parse_bitrate(self):
        self.assertEqual(parse_bitrate(None), None)
@ -1197,12 +1230,49 @@ ffmpeg version 2.4.4 Copyright (c) 2000-2014 the FFmpeg ...'''), '2.4.4')
    def test_render_table(self):
        self.assertEqual(
            render_table(
                ['a', 'empty', 'bcd'],
                [[123, '', 4], [9999, '', 51]]),
            'a empty bcd\n'
            '123 4\n'
            '9999 51')

        self.assertEqual(
            render_table(
                ['a', 'empty', 'bcd'],
                [[123, '', 4], [9999, '', 51]],
                hide_empty=True),
            'a bcd\n'
            '123 4\n'
            '9999 51')

        self.assertEqual(
            render_table(
                ['\ta', 'bcd'],
                [['1\t23', 4], ['\t9999', 51]]),
            ' a bcd\n'
            '1 23 4\n'
            '9999 51')

        self.assertEqual(
            render_table(
                ['a', 'bcd'],
                [[123, 4], [9999, 51]],
                delim='-'),
            'a bcd\n'
            '--------\n'
            '123 4\n'
            '9999 51')

        self.assertEqual(
            render_table(
                ['a', 'bcd'],
                [[123, 4], [9999, 51]],
                delim='-', extra_gap=2),
            'a bcd\n'
            '----------\n'
            '123 4\n'
            '9999 51')

    def test_match_str(self):
        # Unary
        self.assertFalse(match_str('xy', {'x': 1200}))
@ -1231,6 +1301,7 @@ ffmpeg version 2.4.4 Copyright (c) 2000-2014 the FFmpeg ...'''), '2.4.4')
        self.assertFalse(match_str('x>2K', {'x': 1200}))
        self.assertTrue(match_str('x>=1200 & x < 1300', {'x': 1200}))
        self.assertFalse(match_str('x>=1100 & x < 1200', {'x': 1200}))
        self.assertTrue(match_str('x > 1:0:0', {'x': 3700}))

        # String
        self.assertFalse(match_str('y=a212', {'y': 'foobar42'}))
@ -1367,21 +1438,21 @@ The first line
</body>
    </tt>'''.encode('utf-8')
        srt_data = '''1
00:00:02,080 --> 00:00:05,840
<font color="white" face="sansSerif" size="16">default style<font color="red">custom style</font></font>

2
00:00:02,080 --> 00:00:05,840
<b><font color="cyan" face="sansSerif" size="16"><font color="lime">part 1
</font>part 2</font></b>

3
00:00:05,840 --> 00:00:09,560
<u><font color="lime">line 3
part 3</font></u>

4
00:00:09,560 --> 00:00:12,360
<i><u><font color="yellow"><font color="lime">inner
</font>style</font></u></i>
@ -1594,9 +1665,9 @@ Line 1
        self.assertEqual(repr(LazyList(it)), repr(it))
        self.assertEqual(str(LazyList(it)), str(it))

        self.assertEqual(list(LazyList(it, reverse=True)), it[::-1])
        self.assertEqual(list(reversed(LazyList(it))[::-1]), it)
        self.assertEqual(list(reversed(LazyList(it))[1:3:7]), it[::-1][1:3:7])

    def test_LazyList_laziness(self):
@ -1609,15 +1680,27 @@ Line 1
        test(ll, 5, 5, range(6))
        test(ll, -3, 7, range(10))

        ll = LazyList(range(10), reverse=True)
        test(ll, -1, 0, range(1))
        test(ll, 3, 6, range(10))

        ll = LazyList(itertools.count())
        test(ll, 10, 10, range(11))
        ll = reversed(ll)
        test(ll, -15, 14, range(15))
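Per the updated assertions, the in-place `.reverse()` method is gone: reversal is now requested at construction time or through the `reversed()` builtin, and both forms stay lazy. A quick illustration (using `yt_dlp.utils.LazyList` as exercised above):

import itertools
from yt_dlp.utils import LazyList

ll = LazyList(itertools.count())                 # infinite iterator
print(ll[10])                                    # 10 - realizes only the first 11 items
print(list(reversed(LazyList(range(10))))[:3])   # [9, 8, 7]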
def test_format_bytes(self):
self.assertEqual(format_bytes(0), '0.00B')
self.assertEqual(format_bytes(1000), '1000.00B')
self.assertEqual(format_bytes(1024), '1.00KiB')
self.assertEqual(format_bytes(1024**2), '1.00MiB')
self.assertEqual(format_bytes(1024**3), '1.00GiB')
self.assertEqual(format_bytes(1024**4), '1.00TiB')
self.assertEqual(format_bytes(1024**5), '1.00PiB')
self.assertEqual(format_bytes(1024**6), '1.00EiB')
self.assertEqual(format_bytes(1024**7), '1.00ZiB')
self.assertEqual(format_bytes(1024**8), '1.00YiB')

if __name__ == '__main__':
    unittest.main()

View file

@ -26,29 +26,31 @@ class TestYoutubeLists(unittest.TestCase):
    def test_youtube_playlist_noplaylist(self):
        dl = FakeYDL()
        dl.params['noplaylist'] = True
        ie = YoutubeTabIE(dl)
        result = ie.extract('https://www.youtube.com/watch?v=FXxLjLQi3Fg&list=PLwiyx1dc3P2JR9N8gQaQN_BCvlSlap7re')
        self.assertEqual(result['_type'], 'url')
        self.assertEqual(YoutubeIE.extract_id(result['url']), 'FXxLjLQi3Fg')

    def test_youtube_course(self):
        print('Skipping: Course URLs no longer exists')
        return
        dl = FakeYDL()
        ie = YoutubePlaylistIE(dl)
        # TODO find a > 100 (paginating?) videos course
        result = ie.extract('https://www.youtube.com/course?list=ECUl4u3cNGP61MdtwGTqZA0MreSaDybji8')
        entries = list(result['entries'])
        self.assertEqual(YoutubeIE.extract_id(entries[0]['url']), 'j9WZyLZCBzs')
        self.assertEqual(len(entries), 25)
        self.assertEqual(YoutubeIE.extract_id(entries[-1]['url']), 'rYefUsYuEp0')

    def test_youtube_mix(self):
        dl = FakeYDL()
        ie = YoutubeTabIE(dl)
        result = ie.extract('https://www.youtube.com/watch?v=tyITL_exICo&list=RDCLAK5uy_kLWIr9gv1XLlPbaDS965-Db4TrBoUTxQ8')
        entries = list(result['entries'])
        self.assertTrue(len(entries) >= 50)
        original_video = entries[0]
        self.assertEqual(original_video['id'], 'tyITL_exICo')

    def test_youtube_toptracks(self):
        print('Skipping: The playlist page gives error 500')
@ -68,10 +70,10 @@ class TestYoutubeLists(unittest.TestCase):
        entries = list(result['entries'])
        self.assertTrue(len(entries) == 1)
        video = entries[0]
        self.assertEqual(video['_type'], 'url')
        self.assertEqual(video['ie_key'], 'Youtube')
        self.assertEqual(video['id'], 'BaW_jenozKc')
        self.assertEqual(video['url'], 'https://www.youtube.com/watch?v=BaW_jenozKc')
        self.assertEqual(video['title'], 'youtube-dl test video "\'/\\ä↭𝕐')
        self.assertEqual(video['duration'], 10)
        self.assertEqual(video['uploader'], 'Philipp Hagemeister')

View file

@ -14,9 +14,10 @@ import string
from test.helper import FakeYDL, is_download_test
from yt_dlp.extractor import YoutubeIE
from yt_dlp.jsinterp import JSInterpreter
from yt_dlp.compat import compat_str, compat_urlretrieve

_SIG_TESTS = [
    (
        'https://s.ytimg.com/yts/jsbin/html5player-vflHOr_nV.js',
        86,
@ -64,6 +65,29 @@ _TESTS = [
    )
]
_NSIG_TESTS = [
(
'https://www.youtube.com/s/player/9216d1f7/player_ias.vflset/en_US/base.js',
'SLp9F5bwjAdhE9F-', 'gWnb9IK2DJ8Q1w',
),
(
'https://www.youtube.com/s/player/f8cb7a3b/player_ias.vflset/en_US/base.js',
'oBo2h5euWy6osrUt', 'ivXHpm7qJjJN',
),
(
'https://www.youtube.com/s/player/2dfe380c/player_ias.vflset/en_US/base.js',
'oBo2h5euWy6osrUt', '3DIBbn3qdQ',
),
(
'https://www.youtube.com/s/player/f1ca6900/player_ias.vflset/en_US/base.js',
'cu3wyu6LQn2hse', 'jvxetvmlI9AN9Q',
),
(
'https://www.youtube.com/s/player/8040e515/player_ias.vflset/en_US/base.js',
'wvOFaY-yjgDuIEg5', 'HkfBFDHmgw4rsw',
),
]
@is_download_test
class TestPlayerInfo(unittest.TestCase):
@ -97,35 +121,49 @@ class TestSignature(unittest.TestCase):
            os.mkdir(self.TESTDATA_DIR)
def t_factory(name, sig_func, url_pattern):
    def make_tfunc(url, sig_input, expected_sig):
        m = url_pattern.match(url)
        assert m, '%r should follow URL format' % url
        test_id = m.group('id')

        def test_func(self):
            basename = f'player-{name}-{test_id}.js'
            fn = os.path.join(self.TESTDATA_DIR, basename)

            if not os.path.exists(fn):
                compat_urlretrieve(url, fn)
            with io.open(fn, encoding='utf-8') as testf:
                jscode = testf.read()
            self.assertEqual(sig_func(jscode, sig_input), expected_sig)

        test_func.__name__ = f'test_{name}_js_{test_id}'
        setattr(TestSignature, test_func.__name__, test_func)
    return make_tfunc


def signature(jscode, sig_input):
    func = YoutubeIE(FakeYDL())._parse_sig_js(jscode)
    src_sig = (
        compat_str(string.printable[:sig_input])
        if isinstance(sig_input, int) else sig_input)
    return func(src_sig)


def n_sig(jscode, sig_input):
    funcname = YoutubeIE(FakeYDL())._extract_n_function_name(jscode)
    return JSInterpreter(jscode).call_function(funcname, sig_input)


make_sig_test = t_factory(
    'signature', signature, re.compile(r'.*-(?P<id>[a-zA-Z0-9_-]+)(?:/watch_as3|/html5player)?\.[a-z]+$'))
for test_spec in _SIG_TESTS:
    make_sig_test(*test_spec)

make_nsig_test = t_factory(
    'nsig', n_sig, re.compile(r'.+/player/(?P<id>[a-zA-Z0-9_-]+)/.+.js$'))
for test_spec in _NSIG_TESTS:
    make_nsig_test(*test_spec)
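`n_sig` exercises the pure-Python JS interpreter on real player code to descramble YouTube's throttling parameter. The same `JSInterpreter` API can be tried on a toy function of the shape signature functions usually take (the function body here is made up for illustration):

from yt_dlp.jsinterp import JSInterpreter

jsi = JSInterpreter('function f(a){a=a.split("");a.reverse();return a.join("")}')
print(jsi.call_function('f', 'abc'))  # -> 'cba'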
if __name__ == '__main__': if __name__ == '__main__':

File diff suppressed because it is too large

View file

@ -1,7 +1,7 @@
#!/usr/bin/env python3
# coding: utf-8

f'You are using an unsupported version of Python. Only Python versions 3.6 and above are supported by yt-dlp'  # noqa: F541

__license__ = 'Public Domain'
@ -13,28 +13,30 @@ import random
import re
import sys

from .options import (
    parseOpts,
)
from .compat import (
    compat_getpass,
    compat_os_name,
    compat_shlex_quote,
    workaround_optparse_bug9161,
)
from .cookies import SUPPORTED_BROWSERS, SUPPORTED_KEYRINGS
from .utils import (
    DateRange,
    decodeOption,
    DownloadCancelled,
    DownloadError,
    error_to_compat_str,
    expand_path,
    GeoUtils,
    float_or_none,
    int_or_none,
    match_filter_func,
    parse_duration,
    preferredencoding,
    read_batch_urls,
    render_table,
    SameFileError,
    setproctitle,
@ -71,7 +73,7 @@ def _real_main(argv=None):
    setproctitle('yt-dlp')

    parser, opts, args = parseOpts(argv)
    warnings, deprecation_warnings = [], []

    # Set user agent
    if opts.user_agent is not None:
@ -94,6 +96,8 @@ def _real_main(argv=None):
    if opts.batchfile is not None:
        try:
            if opts.batchfile == '-':
                write_string('Reading URLs from stdin - EOF (%s) to end:\n' % (
                    'Ctrl+Z' if compat_os_name == 'nt' else 'Ctrl+D'))
                batchfd = sys.stdin
            else:
                batchfd = io.open(
@ -122,10 +126,10 @@ def _real_main(argv=None):
            desc = getattr(ie, 'IE_DESC', ie.IE_NAME)
            if desc is False:
                continue
            if getattr(ie, 'SEARCH_KEY', None) is not None:
                _SEARCHES = ('cute kittens', 'slithering pythons', 'falling cat', 'angry poodle', 'purple fish', 'running tortoise', 'sleeping bunny', 'burping cow')
                _COUNTS = ('', '5', '10', 'all')
                desc += f'; "{ie.SEARCH_KEY}:" prefix (Example: "{ie.SEARCH_KEY}{random.choice(_COUNTS)}:{random.choice(_SEARCHES)}")'
            write_string(desc + '\n', out=sys.stdout)
        sys.exit(0)
    if opts.ap_list_mso:
@ -134,6 +138,11 @@ def _real_main(argv=None):
        sys.exit(0)

    # Conflicting, missing and erroneous options
if opts.format == 'best':
warnings.append('.\n '.join((
'"-f best" selects the best pre-merged format which is often not the best option',
'To let yt-dlp download and merge the best available formats, simply do not pass any format selection',
'If you know what you are doing and want only the best pre-merged format, use "-f b" instead to suppress this warning')))
    if opts.usenetrc and (opts.username is not None or opts.password is not None):
        parser.error('using .netrc conflicts with giving username/password')
    if opts.password is not None and opts.username is None:
@ -193,7 +202,14 @@ def _real_main(argv=None):
    if opts.overwrites:  # --yes-overwrites implies --no-continue
        opts.continue_dl = False
    if opts.concurrent_fragment_downloads <= 0:
        parser.error('Concurrent fragments must be positive')
if opts.wait_for_video is not None:
min_wait, max_wait, *_ = map(parse_duration, opts.wait_for_video.split('-', 1) + [None])
if min_wait is None or (max_wait is None and '-' in opts.wait_for_video):
parser.error('Invalid time range to wait')
elif max_wait is not None and max_wait < min_wait:
parser.error('Minimum time range to wait must not be longer than the maximum')
opts.wait_for_video = (min_wait, max_wait)
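For illustration, the value is split at most once on `-`, so a bare value sets only the minimum wait while `min-max` sets both (a sketch of the parsing above, using `parse_duration` from `yt_dlp.utils`):

from yt_dlp.utils import parse_duration

for spec in ('30', '1:00-5:00'):
    min_wait, max_wait, *_ = map(parse_duration, spec.split('-', 1) + [None])
    print(min_wait, max_wait)   # 30.0 None, then 60.0 300.0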

    def parse_retries(retries, name=''):
        if retries in ('inf', 'infinite'):
@ -206,6 +222,8 @@ def _real_main(argv=None):
        return parsed_retries
    if opts.retries is not None:
        opts.retries = parse_retries(opts.retries)
if opts.file_access_retries is not None:
opts.file_access_retries = parse_retries(opts.file_access_retries, 'file access ')
    if opts.fragment_retries is not None:
        opts.fragment_retries = parse_retries(opts.fragment_retries, 'fragment ')
    if opts.extractor_retries is not None:
@ -221,15 +239,17 @@ def _real_main(argv=None):
            parser.error('invalid http chunk size specified')
        opts.http_chunk_size = numeric_chunksize
    if opts.playliststart <= 0:
        raise parser.error('Playlist start must be positive')
    if opts.playlistend not in (-1, None) and opts.playlistend < opts.playliststart:
        raise parser.error('Playlist end must be greater than playlist start')
    if opts.extractaudio:
        opts.audioformat = opts.audioformat.lower()
        if opts.audioformat not in ['best'] + list(FFmpegExtractAudioPP.SUPPORTED_EXTS):
            parser.error('invalid audio format specified')
    if opts.audioquality:
        opts.audioquality = opts.audioquality.strip('k').strip('K')
        audioquality = int_or_none(float_or_none(opts.audioquality))  # int_or_none prevents inf, nan
        if audioquality is None or audioquality < 0:
            parser.error('invalid audio quality specified')
    if opts.recodevideo is not None:
        opts.recodevideo = opts.recodevideo.replace(' ', '')
@ -245,12 +265,27 @@ def _real_main(argv=None):
    if opts.convertthumbnails is not None:
        if opts.convertthumbnails not in FFmpegThumbnailsConvertorPP.SUPPORTED_EXTS:
            parser.error('invalid thumbnail format specified')

    if opts.cookiesfrombrowser is not None:
        mobj = re.match(r'(?P<name>[^+:]+)(\s*\+\s*(?P<keyring>[^:]+))?(\s*:(?P<profile>.+))?', opts.cookiesfrombrowser)
        if mobj is None:
            parser.error(f'invalid cookies from browser arguments: {opts.cookiesfrombrowser}')
        browser_name, keyring, profile = mobj.group('name', 'keyring', 'profile')
        browser_name = browser_name.lower()
        if browser_name not in SUPPORTED_BROWSERS:
            parser.error(f'unsupported browser specified for cookies: "{browser_name}". '
                         f'Supported browsers are: {", ".join(sorted(SUPPORTED_BROWSERS))}')
        if keyring is not None:
            keyring = keyring.upper()
            if keyring not in SUPPORTED_KEYRINGS:
                parser.error(f'unsupported keyring specified for cookies: "{keyring}". '
                             f'Supported keyrings are: {", ".join(sorted(SUPPORTED_KEYRINGS))}')
        opts.cookiesfrombrowser = (browser_name, profile, keyring)
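The regex above accepts `BROWSER[+KEYRING][:PROFILE]` specs; for example (a hypothetical argument value):

import re

mobj = re.match(r'(?P<name>[^+:]+)(\s*\+\s*(?P<keyring>[^:]+))?(\s*:(?P<profile>.+))?',
                'chrome+gnomekeyring:Profile 1')
print(mobj.group('name', 'keyring', 'profile'))
# -> ('chrome', 'gnomekeyring', 'Profile 1')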
geo_bypass_code = opts.geo_bypass_ip_block or opts.geo_bypass_country
if geo_bypass_code is not None:
try:
GeoUtils.random_ipv4(geo_bypass_code)
except Exception:
parser.error('unsupported geo-bypass country or ip-block')
    if opts.date is not None:
        date = DateRange.day(opts.date)
@ -259,6 +294,9 @@ def _real_main(argv=None):
    compat_opts = opts.compat_opts
def report_conflict(arg1, arg2):
warnings.append(f'{arg2} is ignored since {arg1} was given')
    def _unused_compat_opt(name):
        if name not in compat_opts:
            return False
@ -280,9 +318,14 @@ def _real_main(argv=None):
            setattr(opts, opt_name, default)
        return None

    set_default_compat('abort-on-error', 'ignoreerrors', 'only_download')
    set_default_compat('no-playlist-metafiles', 'allow_playlist_files')
    set_default_compat('no-clean-infojson', 'clean_infojson')
if 'no-attach-info-json' in compat_opts:
if opts.embed_infojson:
_unused_compat_opt('no-attach-info-json')
else:
opts.embed_infojson = False
    if 'format-sort' in compat_opts:
        opts.format_sort.extend(InfoExtractor.FormatSort.ytdl_default)
    _video_multistreams_set = set_default_compat('multistreams', 'allow_multiple_video_streams', False, remove_compat=False)
@ -290,10 +333,14 @@ def _real_main(argv=None):
    if _video_multistreams_set is False and _audio_multistreams_set is False:
        _unused_compat_opt('multistreams')

    outtmpl_default = opts.outtmpl.get('default')
if opts.useid:
if outtmpl_default is None:
outtmpl_default = opts.outtmpl['default'] = '%(id)s.%(ext)s'
else:
report_conflict('--output', '--id')
    if 'filename' in compat_opts:
        if outtmpl_default is None:
            outtmpl_default = opts.outtmpl['default'] = '%(title)s-%(id)s.%(ext)s'
        else:
            _unused_compat_opt('filename')
@ -303,10 +350,14 @@ def _real_main(argv=None):
            parser.error('invalid %s %r: %s' % (msg, tmpl, error_to_compat_str(err)))

    for k, tmpl in opts.outtmpl.items():
        validate_outtmpl(tmpl, f'{k} output template')

    opts.forceprint = opts.forceprint or []
    for tmpl in opts.forceprint or []:
        validate_outtmpl(tmpl, 'print template')
validate_outtmpl(opts.sponsorblock_chapter_title, 'SponsorBlock chapter title')
for k, tmpl in opts.progress_template.items():
k = f'{k[:-6]} console title' if '-title' in k else f'{k} progress'
validate_outtmpl(tmpl, f'{k} template')
    if opts.extractaudio and not opts.keepvideo and opts.format is None:
        opts.format = 'bestaudio/best'
@ -353,47 +404,67 @@ def _real_main(argv=None):
    if opts.getcomments and not printing_json:
        opts.writeinfojson = True
    if opts.no_sponsorblock:
        opts.sponsorblock_mark = set()
        opts.sponsorblock_remove = set()
    sponsorblock_query = opts.sponsorblock_mark | opts.sponsorblock_remove

    opts.remove_chapters = opts.remove_chapters or []

    if (opts.remove_chapters or sponsorblock_query) and opts.sponskrub is not False:
        if opts.sponskrub:
            if opts.remove_chapters:
                report_conflict('--remove-chapters', '--sponskrub')
            if opts.sponsorblock_mark:
                report_conflict('--sponsorblock-mark', '--sponskrub')
            if opts.sponsorblock_remove:
                report_conflict('--sponsorblock-remove', '--sponskrub')
        opts.sponskrub = False
    if opts.sponskrub_cut and opts.split_chapters and opts.sponskrub is not False:
        report_conflict('--split-chapter', '--sponskrub-cut')
        opts.sponskrub_cut = False

    if opts.remuxvideo and opts.recodevideo:
        report_conflict('--recode-video', '--remux-video')
        opts.remuxvideo = False

    if opts.allow_unplayable_formats:
        def report_unplayable_conflict(opt_name, arg, default=False, allowed=None):
            val = getattr(opts, opt_name)
            if (not allowed and val) or (allowed and not allowed(val)):
                report_conflict('--allow-unplayable-formats', arg)
                setattr(opts, opt_name, default)

        report_unplayable_conflict('extractaudio', '--extract-audio')
        report_unplayable_conflict('remuxvideo', '--remux-video')
        report_unplayable_conflict('recodevideo', '--recode-video')
        report_unplayable_conflict('addmetadata', '--embed-metadata')
        report_unplayable_conflict('addchapters', '--embed-chapters')
        report_unplayable_conflict('embed_infojson', '--embed-info-json')
        opts.embed_infojson = False
        report_unplayable_conflict('embedsubtitles', '--embed-subs')
        report_unplayable_conflict('embedthumbnail', '--embed-thumbnail')
        report_unplayable_conflict('xattrs', '--xattrs')
        report_unplayable_conflict('fixup', '--fixup', default='never', allowed=lambda x: x in (None, 'never', 'ignore'))
        opts.fixup = 'never'
        report_unplayable_conflict('remove_chapters', '--remove-chapters', default=[])
        report_unplayable_conflict('sponsorblock_remove', '--sponsorblock-remove', default=set())
        report_unplayable_conflict('sponskrub', '--sponskrub', default=set())
        opts.sponskrub = False

    if (opts.addmetadata or opts.sponsorblock_mark) and opts.addchapters is None:
        opts.addchapters = True

    # PostProcessors
    postprocessors = list(opts.add_postprocessors)
    if sponsorblock_query:
        postprocessors.append({
            'key': 'SponsorBlock',
            'categories': sponsorblock_query,
            'api': opts.sponsorblock_api,
            # Run this immediately after extraction is complete
            'when': 'pre_process'
        })
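The same post-processor can be requested when embedding yt-dlp; a minimal sketch using the documented `YoutubeDL` options dictionary (the URL is just an example):

import yt_dlp

ydl_opts = {
    'postprocessors': [{
        'key': 'SponsorBlock',
        'categories': ['sponsor', 'selfpromo'],
        'when': 'pre_process',   # run right after extraction, as above
    }],
}
with yt_dlp.YoutubeDL(ydl_opts) as ydl:
    ydl.download(['https://www.youtube.com/watch?v=BaW_jenozKc'])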
    if opts.parse_metadata:
        postprocessors.append({
            'key': 'MetadataParser',
@ -439,16 +510,7 @@ def _real_main(argv=None):
            'key': 'FFmpegVideoConvertor',
            'preferedformat': opts.recodevideo,
        })
    # If ModifyChapters is going to remove chapters, subtitles must already be in the container.
    if opts.embedsubtitles:
        already_have_subtitle = opts.writesubtitles and 'no-keep-subs' not in compat_opts
        postprocessors.append({
@ -462,6 +524,44 @@ def _real_main(argv=None):
        # this was the old behaviour if only --all-sub was given.
        if opts.allsubtitles and not opts.writeautomaticsub:
            opts.writesubtitles = True
# ModifyChapters must run before FFmpegMetadataPP
remove_chapters_patterns, remove_ranges = [], []
for regex in opts.remove_chapters:
if regex.startswith('*'):
dur = list(map(parse_duration, regex[1:].split('-')))
if len(dur) == 2 and all(t is not None for t in dur):
remove_ranges.append(tuple(dur))
continue
parser.error(f'invalid --remove-chapters time range {regex!r}. Must be of the form *start-end')
try:
remove_chapters_patterns.append(re.compile(regex))
except re.error as err:
parser.error(f'invalid --remove-chapters regex {regex!r} - {err}')
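So a pattern starting with `*` is parsed as a raw time range rather than a title regex; for instance (a sketch of the branch above):

from yt_dlp.utils import parse_duration

spec = '*10:15-15:00'
dur = list(map(parse_duration, spec[1:].split('-')))
print(tuple(dur))   # (615.0, 900.0) - seconds to remove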
if opts.remove_chapters or sponsorblock_query:
postprocessors.append({
'key': 'ModifyChapters',
'remove_chapters_patterns': remove_chapters_patterns,
'remove_sponsor_segments': opts.sponsorblock_remove,
'remove_ranges': remove_ranges,
'sponsorblock_chapter_title': opts.sponsorblock_chapter_title,
'force_keyframes': opts.force_keyframes_at_cuts
})
# FFmpegMetadataPP should be run after FFmpegVideoConvertorPP and
# FFmpegExtractAudioPP as containers before conversion may not support
# metadata (3gp, webm, etc.)
# By default ffmpeg preserves metadata applicable for both
# source and target containers. From this point the container won't change,
# so metadata can be added here.
if opts.addmetadata or opts.addchapters or opts.embed_infojson:
if opts.embed_infojson is None:
opts.embed_infojson = 'if_exists'
postprocessors.append({
'key': 'FFmpegMetadata',
'add_chapters': opts.addchapters,
'add_metadata': opts.addmetadata,
'add_infojson': opts.embed_infojson,
})
# Deprecated
    # This should be above EmbedThumbnail since sponskrub removes the thumbnail attachment
    # but must be below EmbedSubtitle and FFmpegMetadata
    # See https://github.com/yt-dlp/yt-dlp/issues/204 , https://github.com/faissaloo/SponSkrub/issues/29
@ -474,18 +574,22 @@ def _real_main(argv=None):
            'cut': opts.sponskrub_cut,
            'force': opts.sponskrub_force,
            'ignoreerror': opts.sponskrub is None,
            '_from_cli': True,
        })
    if opts.embedthumbnail:
        postprocessors.append({
            'key': 'EmbedThumbnail',
            # already_have_thumbnail = True prevents the file from being deleted after embedding
            'already_have_thumbnail': opts.writethumbnail
        })
        if not opts.writethumbnail:
            opts.writethumbnail = True
            opts.outtmpl['pl_thumbnail'] = ''
    if opts.split_chapters:
        postprocessors.append({
            'key': 'FFmpegSplitChapters',
            'force_keyframes': opts.force_keyframes_at_cuts,
        })
    # XAttrMetadataPP should be run after post-processors that may change file contents
    if opts.xattrs:
        postprocessors.append({'key': 'XAttrMetadata'})
@ -509,6 +613,19 @@ def _real_main(argv=None):
opts.postprocessor_args.setdefault('sponskrub', []) opts.postprocessor_args.setdefault('sponskrub', [])
opts.postprocessor_args['default'] = opts.postprocessor_args['default-compat'] opts.postprocessor_args['default'] = opts.postprocessor_args['default-compat']
def report_deprecation(val, old, new=None):
if not val:
return
deprecation_warnings.append(
f'{old} is deprecated and may be removed in a future version. Use {new} instead' if new
else f'{old} is deprecated and may not work as expected')
report_deprecation(opts.sponskrub, '--sponskrub', '--sponsorblock-mark or --sponsorblock-remove')
report_deprecation(not opts.prefer_ffmpeg, '--prefer-avconv', 'ffmpeg')
report_deprecation(opts.include_ads, '--include-ads')
# report_deprecation(opts.call_home, '--call-home') # We may re-implement this in future
# report_deprecation(opts.writeannotations, '--write-annotations') # It's just that no website has it
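For illustration, `report_deprecation` only collects messages; they are surfaced later through the `_deprecation_warnings` key passed to `YoutubeDL` below. A sketch of the text it produces:

```python
deprecation_warnings = []

def report_deprecation(val, old, new=None):
    if not val:
        return
    deprecation_warnings.append(
        f'{old} is deprecated and may be removed in a future version. Use {new} instead' if new
        else f'{old} is deprecated and may not work as expected')

report_deprecation(True, '--include-ads')
print(deprecation_warnings[0])  # --include-ads is deprecated and may not work as expected
```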
     final_ext = (
         opts.recodevideo if opts.recodevideo in FFmpegVideoConvertorPP.SUPPORTED_EXTS
         else opts.remuxvideo if opts.remuxvideo in FFmpegVideoRemuxerPP.SUPPORTED_EXTS
@@ -521,6 +638,7 @@ def _real_main(argv=None):
     ydl_opts = {
         'usenetrc': opts.usenetrc,
+        'netrc_location': opts.netrc_location,
         'username': opts.username,
         'password': opts.password,
         'twofactor': opts.twofactor,
@@ -567,6 +685,7 @@ def _real_main(argv=None):
         'throttledratelimit': opts.throttledratelimit,
         'overwrites': opts.overwrites,
         'retries': opts.retries,
+        'file_access_retries': opts.file_access_retries,
         'fragment_retries': opts.fragment_retries,
         'extractor_retries': opts.extractor_retries,
         'skip_unavailable_fragments': opts.skip_unavailable_fragments,
@@ -576,8 +695,9 @@ def _real_main(argv=None):
         'noresizebuffer': opts.noresizebuffer,
         'http_chunk_size': opts.http_chunk_size,
         'continuedl': opts.continue_dl,
-        'noprogress': opts.noprogress,
+        'noprogress': opts.quiet if opts.noprogress is None else opts.noprogress,
         'progress_with_newline': opts.progress_with_newline,
+        'progress_template': opts.progress_template,
         'playliststart': opts.playliststart,
         'playlistend': opts.playlistend,
         'playlistreverse': opts.playlist_reverse,
@@ -593,8 +713,8 @@ def _real_main(argv=None):
         'allow_playlist_files': opts.allow_playlist_files,
         'clean_infojson': opts.clean_infojson,
         'getcomments': opts.getcomments,
-        'writethumbnail': opts.writethumbnail,
-        'write_all_thumbnails': opts.write_all_thumbnails,
+        'writethumbnail': opts.writethumbnail is True,
+        'write_all_thumbnails': opts.writethumbnail == 'all',
         'writelink': opts.writelink,
         'writeurllink': opts.writeurllink,
         'writewebloclink': opts.writewebloclink,
@@ -626,6 +746,7 @@ def _real_main(argv=None):
         'download_archive': download_archive_fn,
         'break_on_existing': opts.break_on_existing,
         'break_on_reject': opts.break_on_reject,
+        'break_per_url': opts.break_per_url,
         'skip_playlist_after_errors': opts.skip_playlist_after_errors,
         'cookiefile': opts.cookiefile,
         'cookiesfrombrowser': opts.cookiesfrombrowser,
@@ -644,6 +765,8 @@ def _real_main(argv=None):
         'youtube_include_hls_manifest': opts.youtube_include_hls_manifest,
         'encoding': opts.encoding,
         'extract_flat': opts.extract_flat,
+        'live_from_start': opts.live_from_start,
+        'wait_for_video': opts.wait_for_video,
         'mark_watched': opts.mark_watched,
         'merge_output_format': opts.merge_output_format,
         'final_ext': final_ext,
@@ -672,16 +795,13 @@ def _real_main(argv=None):
         'geo_bypass': opts.geo_bypass,
         'geo_bypass_country': opts.geo_bypass_country,
         'geo_bypass_ip_block': opts.geo_bypass_ip_block,
-        'warnings': warnings,
+        '_warnings': warnings,
+        '_deprecation_warnings': deprecation_warnings,
         'compat_opts': compat_opts,
-        # just for deprecation check
-        'autonumber': opts.autonumber or None,
-        'usetitle': opts.usetitle or None,
-        'useid': opts.useid or None,
     }

     with YoutubeDL(ydl_opts) as ydl:
-        actual_use = len(all_urls) or opts.load_info_filename
+        actual_use = all_urls or opts.load_info_filename

         # Remove cache dir
         if opts.rm_cachedir:
@@ -710,7 +830,7 @@ def _real_main(argv=None):
                 retcode = ydl.download_with_info_file(expand_path(opts.load_info_filename))
             else:
                 retcode = ydl.download(all_urls)
-        except (MaxDownloadsReached, ExistingVideoReached, RejectedVideoReached):
+        except DownloadCancelled:
             ydl.to_screen('Aborting remaining downloads')
             retcode = 101
@@ -722,15 +842,15 @@ def main(argv=None):
         _real_main(argv)
     except DownloadError:
         sys.exit(1)
-    except SameFileError:
-        sys.exit('ERROR: fixed output name but more than one file to download')
+    except SameFileError as e:
+        sys.exit(f'ERROR: {e}')
     except KeyboardInterrupt:
         sys.exit('\nERROR: Interrupted by user')
-    except BrokenPipeError:
+    except BrokenPipeError as e:
         # https://docs.python.org/3/library/signal.html#note-on-sigpipe
         devnull = os.open(os.devnull, os.O_WRONLY)
         os.dup2(devnull, sys.stdout.fileno())
-        sys.exit(r'\nERROR: {err}')
+        sys.exit(f'\nERROR: {e}')


 __all__ = ['main', 'YoutubeDL', 'gen_extractors', 'list_extractors']
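The narrowed `except DownloadCancelled` above is equivalent to the old tuple because, in `yt_dlp.utils` of this release, `MaxDownloadsReached`, `ExistingVideoReached` and `RejectedVideoReached` all derive from `DownloadCancelled`. A quick check, assuming a yt-dlp of this vintage is importable:

```python
from yt_dlp.utils import (
    DownloadCancelled, ExistingVideoReached, MaxDownloadsReached, RejectedVideoReached)

for exc in (MaxDownloadsReached, ExistingVideoReached, RejectedVideoReached):
    assert issubclass(exc, DownloadCancelled)
```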
View file
@@ -2,36 +2,110 @@ from __future__ import unicode_literals
 from math import ceil

-from .compat import compat_b64decode
+from .compat import compat_b64decode, compat_pycrypto_AES
 from .utils import bytes_to_intlist, intlist_to_bytes

+
+if compat_pycrypto_AES:
+    def aes_cbc_decrypt_bytes(data, key, iv):
+        """ Decrypt bytes with AES-CBC using pycryptodome """
+        return compat_pycrypto_AES.new(key, compat_pycrypto_AES.MODE_CBC, iv).decrypt(data)
+
+    def aes_gcm_decrypt_and_verify_bytes(data, key, tag, nonce):
+        """ Decrypt bytes with AES-GCM using pycryptodome """
+        return compat_pycrypto_AES.new(key, compat_pycrypto_AES.MODE_GCM, nonce).decrypt_and_verify(data, tag)
+
+else:
+    def aes_cbc_decrypt_bytes(data, key, iv):
+        """ Decrypt bytes with AES-CBC using native implementation since pycryptodome is unavailable """
+        return intlist_to_bytes(aes_cbc_decrypt(*map(bytes_to_intlist, (data, key, iv))))
+
+    def aes_gcm_decrypt_and_verify_bytes(data, key, tag, nonce):
+        """ Decrypt bytes with AES-GCM using native implementation since pycryptodome is unavailable """
+        return intlist_to_bytes(aes_gcm_decrypt_and_verify(*map(bytes_to_intlist, (data, key, tag, nonce))))
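Either branch exposes the same byte-level API, so callers need not care whether pycryptodome is present. A round-trip sketch, assuming pycryptodome is installed to produce the ciphertext (a 16-byte message avoids padding concerns):

```python
from Crypto.Cipher import AES  # pycryptodome, used here only to create test data
from yt_dlp.aes import aes_cbc_decrypt_bytes

key, iv = b'0123456789abcdef', b'\x00' * 16
ciphertext = AES.new(key, AES.MODE_CBC, iv).encrypt(b'sixteen byte msg')
assert aes_cbc_decrypt_bytes(ciphertext, key, iv) == b'sixteen byte msg'
```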
 BLOCK_SIZE_BYTES = 16


-def aes_ctr_decrypt(data, key, counter):
-    """
-    Decrypt with aes in counter mode
-    @param {int[]} data                     cipher
-    @param {int[]} key                      16/24/32-Byte cipher key
-    @param {instance} counter               Instance whose next_value function (@returns {int[]} 16-Byte block)
-                                            returns the next counter block
-    @returns {int[]}                        decrypted data
-    """
-    expanded_key = key_expansion(key)
-    block_count = int(ceil(float(len(data)) / BLOCK_SIZE_BYTES))
-
-    decrypted_data = []
-    for i in range(block_count):
-        counter_block = counter.next_value()
-        block = data[i * BLOCK_SIZE_BYTES: (i + 1) * BLOCK_SIZE_BYTES]
-        block += [0] * (BLOCK_SIZE_BYTES - len(block))
-
-        cipher_counter_block = aes_encrypt(counter_block, expanded_key)
-        decrypted_data += xor(block, cipher_counter_block)
-    decrypted_data = decrypted_data[:len(data)]
-
-    return decrypted_data
+def aes_ecb_encrypt(data, key, iv=None):
+    """
+    Encrypt with aes in ECB mode
+    @param {int[]} data                     cleartext
+    @param {int[]} key                      16/24/32-Byte cipher key
+    @param {int[]} iv                       Unused for this mode
+    @returns {int[]}                        encrypted data
+    """
+    expanded_key = key_expansion(key)
+    block_count = int(ceil(float(len(data)) / BLOCK_SIZE_BYTES))
+
+    encrypted_data = []
+    for i in range(block_count):
+        block = data[i * BLOCK_SIZE_BYTES: (i + 1) * BLOCK_SIZE_BYTES]
+        encrypted_data += aes_encrypt(block, expanded_key)
+    encrypted_data = encrypted_data[:len(data)]
+
+    return encrypted_data
+
+
+def aes_ecb_decrypt(data, key, iv=None):
+    """
+    Decrypt with aes in ECB mode
+    @param {int[]} data                     cleartext
+    @param {int[]} key                      16/24/32-Byte cipher key
+    @param {int[]} iv                       Unused for this mode
+    @returns {int[]}                        decrypted data
+    """
+    expanded_key = key_expansion(key)
+    block_count = int(ceil(float(len(data)) / BLOCK_SIZE_BYTES))
+
+    encrypted_data = []
+    for i in range(block_count):
+        block = data[i * BLOCK_SIZE_BYTES: (i + 1) * BLOCK_SIZE_BYTES]
+        encrypted_data += aes_decrypt(block, expanded_key)
+    encrypted_data = encrypted_data[:len(data)]
+
+    return encrypted_data
+
+
+def aes_ctr_decrypt(data, key, iv):
+    """
+    Decrypt with aes in counter mode
+    @param {int[]} data                     cipher
+    @param {int[]} key                      16/24/32-Byte cipher key
+    @param {int[]} iv                       16-Byte initialization vector
+    @returns {int[]}                        decrypted data
+    """
+    return aes_ctr_encrypt(data, key, iv)
+
+
+def aes_ctr_encrypt(data, key, iv):
+    """
+    Encrypt with aes in counter mode
+    @param {int[]} data                     cleartext
+    @param {int[]} key                      16/24/32-Byte cipher key
+    @param {int[]} iv                       16-Byte initialization vector
+    @returns {int[]}                        encrypted data
+    """
+    expanded_key = key_expansion(key)
+    block_count = int(ceil(float(len(data)) / BLOCK_SIZE_BYTES))
+    counter = iter_vector(iv)
+
+    encrypted_data = []
+    for i in range(block_count):
+        counter_block = next(counter)
+        block = data[i * BLOCK_SIZE_BYTES: (i + 1) * BLOCK_SIZE_BYTES]
+        block += [0] * (BLOCK_SIZE_BYTES - len(block))
+
+        cipher_counter_block = aes_encrypt(counter_block, expanded_key)
+        encrypted_data += xor(block, cipher_counter_block)
+    encrypted_data = encrypted_data[:len(data)]
+
+    return encrypted_data
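Since CTR mode just XORs the data with an AES-encrypted keystream, decryption is the same operation as encryption, which is why `aes_ctr_decrypt` simply delegates. A self-inverse round trip on the int-list representation:

```python
from yt_dlp.aes import aes_ctr_encrypt, aes_ctr_decrypt
from yt_dlp.utils import bytes_to_intlist, intlist_to_bytes

data = bytes_to_intlist(b'any length works here')
key = bytes_to_intlist(b'0123456789abcdef')  # 16/24/32-byte keys are accepted
iv = bytes_to_intlist(b'\x00' * 16)

round_trip = aes_ctr_decrypt(aes_ctr_encrypt(data, key, iv), key, iv)
assert intlist_to_bytes(round_trip) == b'any length works here'
```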
 def aes_cbc_decrypt(data, key, iv):
@@ -88,39 +162,47 @@ def aes_cbc_encrypt(data, key, iv):
     return encrypted_data


-def key_expansion(data):
-    """
-    Generate key schedule
-    @param {int[]} data                     16/24/32-Byte cipher key
-    @returns {int[]}                        176/208/240-Byte expanded key
-    """
-    data = data[:]  # copy
-    rcon_iteration = 1
-    key_size_bytes = len(data)
-    expanded_key_size_bytes = (key_size_bytes // 4 + 7) * BLOCK_SIZE_BYTES
-
-    while len(data) < expanded_key_size_bytes:
-        temp = data[-4:]
-        temp = key_schedule_core(temp, rcon_iteration)
-        rcon_iteration += 1
-        data += xor(temp, data[-key_size_bytes: 4 - key_size_bytes])
-
-    for _ in range(3):
-        temp = data[-4:]
-        data += xor(temp, data[-key_size_bytes: 4 - key_size_bytes])
-
-    if key_size_bytes == 32:
-        temp = data[-4:]
-        temp = sub_bytes(temp)
-        data += xor(temp, data[-key_size_bytes: 4 - key_size_bytes])
-
-    for _ in range(3 if key_size_bytes == 32 else 2 if key_size_bytes == 24 else 0):
-        temp = data[-4:]
-        data += xor(temp, data[-key_size_bytes: 4 - key_size_bytes])
-    data = data[:expanded_key_size_bytes]
-
-    return data
+def aes_gcm_decrypt_and_verify(data, key, tag, nonce):
+    """
+    Decrypt with aes in GCM mode and checks authenticity using tag
+    @param {int[]} data                     cipher
+    @param {int[]} key                      16-Byte cipher key
+    @param {int[]} tag                      authentication tag
+    @param {int[]} nonce                    IV (recommended 12-Byte)
+    @returns {int[]}                        decrypted data
+    """
+
+    # XXX: check aes, gcm param
+
+    hash_subkey = aes_encrypt([0] * BLOCK_SIZE_BYTES, key_expansion(key))
+
+    if len(nonce) == 12:
+        j0 = nonce + [0, 0, 0, 1]
+    else:
+        fill = (BLOCK_SIZE_BYTES - (len(nonce) % BLOCK_SIZE_BYTES)) % BLOCK_SIZE_BYTES + 8
+        ghash_in = nonce + [0] * fill + bytes_to_intlist((8 * len(nonce)).to_bytes(8, 'big'))
+        j0 = ghash(hash_subkey, ghash_in)
+
+    # TODO: add nonce support to aes_ctr_decrypt
+
+    # nonce_ctr = j0[:12]
+    iv_ctr = inc(j0)
+
+    decrypted_data = aes_ctr_decrypt(data, key, iv_ctr + [0] * (BLOCK_SIZE_BYTES - len(iv_ctr)))
+    pad_len = len(data) // 16 * 16
+    s_tag = ghash(
+        hash_subkey,
+        data
+        + [0] * (BLOCK_SIZE_BYTES - len(data) + pad_len)            # pad
+        + bytes_to_intlist((0 * 8).to_bytes(8, 'big')               # length of associated data
+                           + ((len(data) * 8).to_bytes(8, 'big')))  # length of data
+    )
+
+    if tag != aes_ctr_encrypt(s_tag, key, j0):
+        raise ValueError("Mismatching authentication tag")
+
+    return decrypted_data
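To sanity-check the pure-Python GCM path against a known-good implementation, one can generate the ciphertext and tag with pycryptodome and verify them here (a sketch, assuming pycryptodome is available in the test environment):

```python
from Crypto.Cipher import AES  # pycryptodome, used here only to create test data
from yt_dlp.aes import aes_gcm_decrypt_and_verify
from yt_dlp.utils import bytes_to_intlist, intlist_to_bytes

key, nonce = b'0123456789abcdef', b'\x00' * 12
ciphertext, tag = AES.new(key, AES.MODE_GCM, nonce).encrypt_and_digest(b'secret')

plaintext = aes_gcm_decrypt_and_verify(*map(bytes_to_intlist, (ciphertext, key, tag, nonce)))
assert intlist_to_bytes(plaintext) == b'secret'  # a corrupted tag would raise ValueError
```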
 def aes_encrypt(data, expanded_key):
@@ -138,7 +220,7 @@ def aes_encrypt(data, expanded_key):
         data = sub_bytes(data)
         data = shift_rows(data)
         if i != rounds:
-            data = mix_columns(data)
+            data = list(iter_mix_columns(data, MIX_COLUMN_MATRIX))
         data = xor(data, expanded_key[i * BLOCK_SIZE_BYTES: (i + 1) * BLOCK_SIZE_BYTES])

     return data
@@ -157,7 +239,7 @@ def aes_decrypt(data, expanded_key):
     for i in range(rounds, 0, -1):
         data = xor(data, expanded_key[i * BLOCK_SIZE_BYTES: (i + 1) * BLOCK_SIZE_BYTES])
         if i != rounds:
-            data = mix_columns_inv(data)
+            data = list(iter_mix_columns(data, MIX_COLUMN_MATRIX_INV))
         data = shift_rows_inv(data)
         data = sub_bytes_inv(data)
     data = xor(data, expanded_key[:BLOCK_SIZE_BYTES])
@@ -189,15 +271,7 @@ def aes_decrypt_text(data, password, key_size_bytes):
     nonce = data[:NONCE_LENGTH_BYTES]
     cipher = data[NONCE_LENGTH_BYTES:]

-    class Counter(object):
-        __value = nonce + [0] * (BLOCK_SIZE_BYTES - NONCE_LENGTH_BYTES)
-
-        def next_value(self):
-            temp = self.__value
-            self.__value = inc(self.__value)
-            return temp
-
-    decrypted_data = aes_ctr_decrypt(cipher, key, Counter())
+    decrypted_data = aes_ctr_decrypt(cipher, key, nonce + [0] * (BLOCK_SIZE_BYTES - NONCE_LENGTH_BYTES))
     plaintext = intlist_to_bytes(decrypted_data)

     return plaintext
@@ -278,6 +352,47 @@ RIJNDAEL_LOG_TABLE = (0x00, 0x00, 0x19, 0x01, 0x32, 0x02, 0x1a, 0xc6, 0x4b, 0xc7
                       0x67, 0x4a, 0xed, 0xde, 0xc5, 0x31, 0xfe, 0x18, 0x0d, 0x63, 0x8c, 0x80, 0xc0, 0xf7, 0x70, 0x07)


+def key_expansion(data):
+    """
+    Generate key schedule
+    @param {int[]} data                     16/24/32-Byte cipher key
+    @returns {int[]}                        176/208/240-Byte expanded key
+    """
+    data = data[:]  # copy
+    rcon_iteration = 1
+    key_size_bytes = len(data)
+    expanded_key_size_bytes = (key_size_bytes // 4 + 7) * BLOCK_SIZE_BYTES
+
+    while len(data) < expanded_key_size_bytes:
+        temp = data[-4:]
+        temp = key_schedule_core(temp, rcon_iteration)
+        rcon_iteration += 1
+        data += xor(temp, data[-key_size_bytes: 4 - key_size_bytes])
+
+    for _ in range(3):
+        temp = data[-4:]
+        data += xor(temp, data[-key_size_bytes: 4 - key_size_bytes])
+
+    if key_size_bytes == 32:
+        temp = data[-4:]
+        temp = sub_bytes(temp)
+        data += xor(temp, data[-key_size_bytes: 4 - key_size_bytes])
+
+    for _ in range(3 if key_size_bytes == 32 else 2 if key_size_bytes == 24 else 0):
+        temp = data[-4:]
+        data += xor(temp, data[-key_size_bytes: 4 - key_size_bytes])
+    data = data[:expanded_key_size_bytes]
+
+    return data
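The expanded-key length follows directly from `(key_size_bytes // 4 + 7) * 16`: 176 bytes for AES-128, 208 for AES-192, 240 for AES-256. For instance:

```python
from yt_dlp.aes import key_expansion
from yt_dlp.utils import bytes_to_intlist

assert len(key_expansion(bytes_to_intlist(b'0123456789abcdef'))) == 176                  # AES-128
assert len(key_expansion(bytes_to_intlist(b'0123456789abcdef01234567'))) == 208          # AES-192
assert len(key_expansion(bytes_to_intlist(b'0123456789abcdef0123456789abcdef'))) == 240  # AES-256
```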
+def iter_vector(iv):
+    while True:
+        yield iv
+        iv = inc(iv)
+
+
 def sub_bytes(data):
     return [SBOX[x] for x in data]
@@ -302,48 +417,36 @@ def xor(data1, data2):
     return [x ^ y for x, y in zip(data1, data2)]


-def rijndael_mul(a, b):
-    if(a == 0 or b == 0):
-        return 0
-    return RIJNDAEL_EXP_TABLE[(RIJNDAEL_LOG_TABLE[a] + RIJNDAEL_LOG_TABLE[b]) % 0xFF]
-
-
-def mix_column(data, matrix):
-    data_mixed = []
-    for row in range(4):
-        mixed = 0
-        for column in range(4):
-            # xor is (+) and (-)
-            mixed ^= rijndael_mul(data[column], matrix[row][column])
-        data_mixed.append(mixed)
-    return data_mixed
-
-
-def mix_columns(data, matrix=MIX_COLUMN_MATRIX):
-    data_mixed = []
-    for i in range(4):
-        column = data[i * 4: (i + 1) * 4]
-        data_mixed += mix_column(column, matrix)
-    return data_mixed
-
-
-def mix_columns_inv(data):
-    return mix_columns(data, MIX_COLUMN_MATRIX_INV)
+def iter_mix_columns(data, matrix):
+    for i in (0, 4, 8, 12):
+        for row in matrix:
+            mixed = 0
+            for j in range(4):
+                # xor is (+) and (-)
+                mixed ^= (0 if data[i:i + 4][j] == 0 or row[j] == 0 else
+                          RIJNDAEL_EXP_TABLE[(RIJNDAEL_LOG_TABLE[data[i + j]] + RIJNDAEL_LOG_TABLE[row[j]]) % 0xFF])
+            yield mixed


 def shift_rows(data):
-    data_shifted = []
-    for column in range(4):
-        for row in range(4):
-            data_shifted.append(data[((column + row) & 0b11) * 4 + row])
-    return data_shifted
+    return [data[((column + row) & 0b11) * 4 + row] for column in range(4) for row in range(4)]


 def shift_rows_inv(data):
-    data_shifted = []
-    for column in range(4):
-        for row in range(4):
-            data_shifted.append(data[((column - row) & 0b11) * 4 + row])
-    return data_shifted
+    return [data[((column - row) & 0b11) * 4 + row] for column in range(4) for row in range(4)]
+
+
+def shift_block(data):
+    data_shifted = []
+
+    bit = 0
+    for n in data:
+        if bit:
+            n |= 0x100
+        bit = n & 1
+        n >>= 1
+        data_shifted.append(n)
+
+    return data_shifted
@@ -358,4 +461,50 @@ def inc(data):
     return data


-__all__ = ['aes_encrypt', 'key_expansion', 'aes_ctr_decrypt', 'aes_cbc_decrypt', 'aes_decrypt_text']
+def block_product(block_x, block_y):
+    # NIST SP 800-38D, Algorithm 1
+    if len(block_x) != BLOCK_SIZE_BYTES or len(block_y) != BLOCK_SIZE_BYTES:
+        raise ValueError("Length of blocks need to be %d bytes" % BLOCK_SIZE_BYTES)
+
+    block_r = [0xE1] + [0] * (BLOCK_SIZE_BYTES - 1)
+    block_v = block_y[:]
+    block_z = [0] * BLOCK_SIZE_BYTES
+
+    for i in block_x:
+        for bit in range(7, -1, -1):
+            if i & (1 << bit):
+                block_z = xor(block_z, block_v)
+            do_xor = block_v[-1] & 1
+            block_v = shift_block(block_v)
+            if do_xor:
+                block_v = xor(block_v, block_r)
+
+    return block_z
+
+
+def ghash(subkey, data):
+    # NIST SP 800-38D, Algorithm 2
+    if len(data) % BLOCK_SIZE_BYTES:
+        raise ValueError("Length of data should be a multiple of %d bytes" % BLOCK_SIZE_BYTES)
+
+    last_y = [0] * BLOCK_SIZE_BYTES
+    for i in range(0, len(data), BLOCK_SIZE_BYTES):
+        block = data[i : i + BLOCK_SIZE_BYTES]  # noqa: E203
+        last_y = block_product(xor(last_y, block), subkey)
+
+    return last_y
+
+
+__all__ = [
+    'aes_ctr_decrypt',
+    'aes_cbc_decrypt',
+    'aes_cbc_decrypt_bytes',
+    'aes_decrypt_text',
+    'aes_encrypt',
+    'aes_gcm_decrypt_and_verify',
+    'aes_gcm_decrypt_and_verify_bytes',
+    'key_expansion'
+]
View file
@@ -50,6 +50,7 @@ class Cache(object):
         except OSError as ose:
             if ose.errno != errno.EEXIST:
                 raise
+            self._ydl.write_debug(f'Saving {section}.{key} to cache')
             write_json_file(data, fn)
         except Exception:
             tb = traceback.format_exc()
@@ -66,6 +67,7 @@ class Cache(object):
         try:
             try:
                 with io.open(cache_fn, 'r', encoding='utf-8') as cachef:
+                    self._ydl.write_debug(f'Loading {section}.{key} from cache')
                     return json.load(cachef)
             except ValueError:
                 try:
View file
@@ -19,6 +19,7 @@ import shlex
 import shutil
 import socket
 import struct
+import subprocess
 import sys
 import tokenize
 import urllib
@@ -33,6 +34,8 @@ class compat_HTMLParseError(Exception):
     pass


+# compat_ctypes_WINFUNCTYPE = ctypes.WINFUNCTYPE
+# will not work since ctypes.WINFUNCTYPE does not exist in UNIX machines
 def compat_ctypes_WINFUNCTYPE(*args, **kwargs):
     return ctypes.WINFUNCTYPE(*args, **kwargs)
@@ -130,6 +133,49 @@ except AttributeError:
     asyncio.run = compat_asyncio_run


+# Python 3.8+ does not honor %HOME% on windows, but this breaks compatibility with youtube-dl
+# See https://github.com/yt-dlp/yt-dlp/issues/792
+# https://docs.python.org/3/library/os.path.html#os.path.expanduser
+if compat_os_name in ('nt', 'ce') and 'HOME' in os.environ:
+    _userhome = os.environ['HOME']
+
+    def compat_expanduser(path):
+        if not path.startswith('~'):
+            return path
+        i = path.replace('\\', '/', 1).find('/')  # ~user
+        if i < 0:
+            i = len(path)
+        userhome = os.path.join(os.path.dirname(_userhome), path[1:i]) if i > 1 else _userhome
+        return userhome + path[i:]
+else:
+    compat_expanduser = os.path.expanduser
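The `~user` handling above replaces only the last path component of `%HOME%`; the shim exists because `os.path.expanduser` on Python 3.8+ ignores `HOME` on Windows in favour of `USERPROFILE`. A hedged illustration (values hypothetical; the Windows branch is only selected when `compat_os_name` is `nt`/`ce` and `HOME` is set):

```python
# with os.environ['HOME'] == r'C:\Users\alice':
#   compat_expanduser('~/.config/yt-dlp')  ->  r'C:\Users\alice' + '/.config/yt-dlp'
#   compat_expanduser('~bob/music')        ->  os.path.join(r'C:\Users', 'bob') + '/music'
#   compat_expanduser('no-tilde')          ->  'no-tilde' (returned unchanged)
```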
+try:
+    from Cryptodome.Cipher import AES as compat_pycrypto_AES
+except ImportError:
+    try:
+        from Crypto.Cipher import AES as compat_pycrypto_AES
+    except ImportError:
+        compat_pycrypto_AES = None
+
+
+WINDOWS_VT_MODE = False if compat_os_name == 'nt' else None
+
+
+def windows_enable_vt_mode():  # TODO: Do this the proper way https://bugs.python.org/issue30075
+    if compat_os_name != 'nt':
+        return
+    global WINDOWS_VT_MODE
+    startupinfo = subprocess.STARTUPINFO()
+    startupinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW
+    try:
+        subprocess.Popen('', shell=True, startupinfo=startupinfo)
+        WINDOWS_VT_MODE = True
+    except Exception:
+        pass
+
+
 # Deprecated

 compat_basestring = str
@@ -152,7 +198,6 @@ compat_cookies = http.cookies
 compat_cookies_SimpleCookie = compat_cookies.SimpleCookie
 compat_etree_Element = etree.Element
 compat_etree_register_namespace = etree.register_namespace
-compat_expanduser = os.path.expanduser
 compat_get_terminal_size = shutil.get_terminal_size
 compat_getenv = os.getenv
 compat_getpass = getpass.getpass
@@ -189,6 +234,7 @@ compat_xml_parse_error = etree.ParseError

 # Set public objects

 __all__ = [
+    'WINDOWS_VT_MODE',
     'compat_HTMLParseError',
     'compat_HTMLParser',
     'compat_HTTPError',
@@ -224,6 +270,7 @@ __all__ = [
     'compat_os_name',
     'compat_parse_qs',
     'compat_print',
+    'compat_pycrypto_AES',
     'compat_realpath',
     'compat_setenv',
     'compat_shlex_quote',
@@ -252,5 +299,6 @@ __all__ = [
     'compat_xml_parse_error',
     'compat_xpath',
     'compat_zip',
+    'windows_enable_vt_mode',
     'workaround_optparse_bug9161',
 ]
View file
@@ -1,3 +1,4 @@
+import contextlib
 import ctypes
 import json
 import os
@@ -7,19 +8,17 @@ import subprocess
 import sys
 import tempfile
 from datetime import datetime, timedelta, timezone
+from enum import Enum, auto
 from hashlib import pbkdf2_hmac

-from yt_dlp.aes import aes_cbc_decrypt
-from yt_dlp.compat import (
+from .aes import aes_cbc_decrypt_bytes, aes_gcm_decrypt_and_verify_bytes
+from .compat import (
     compat_b64decode,
     compat_cookiejar_Cookie,
 )
-from yt_dlp.utils import (
-    bug_reports_message,
-    bytes_to_intlist,
+from .utils import (
     expand_path,
-    intlist_to_bytes,
-    process_communicate_or_kill,
+    Popen,
     YoutubeDLCookieJar,
 )
@@ -33,25 +32,16 @@ except ImportError:
 try:
-    from Crypto.Cipher import AES
-    CRYPTO_AVAILABLE = True
+    import secretstorage
+    SECRETSTORAGE_AVAILABLE = True
 except ImportError:
-    CRYPTO_AVAILABLE = False
-
-
-try:
-    import keyring
-    KEYRING_AVAILABLE = True
-    KEYRING_UNAVAILABLE_REASON = f'due to unknown reasons{bug_reports_message()}'
-except ImportError:
-    KEYRING_AVAILABLE = False
-    KEYRING_UNAVAILABLE_REASON = (
-        'as the `keyring` module is not installed. '
-        'Please install by running `python3 -m pip install keyring`. '
-        'Depending on your platform, additional packages may be required '
-        'to access the keyring; see https://pypi.org/project/keyring')
+    SECRETSTORAGE_AVAILABLE = False
+    SECRETSTORAGE_UNAVAILABLE_REASON = (
+        'as the `secretstorage` module is not installed. '
+        'Please install by running `python3 -m pip install secretstorage`.')
 except Exception as _err:
-    KEYRING_AVAILABLE = False
-    KEYRING_UNAVAILABLE_REASON = 'as the `keyring` module could not be initialized: %s' % _err
+    SECRETSTORAGE_AVAILABLE = False
+    SECRETSTORAGE_UNAVAILABLE_REASON = f'as the `secretstorage` module could not be initialized. {_err}'


 CHROMIUM_BASED_BROWSERS = {'brave', 'chrome', 'chromium', 'edge', 'opera', 'vivaldi'}
@@ -82,8 +72,8 @@ class YDLLogger:
 def load_cookies(cookie_file, browser_specification, ydl):
     cookie_jars = []
     if browser_specification is not None:
-        browser_name, profile = _parse_browser_specification(*browser_specification)
-        cookie_jars.append(extract_cookies_from_browser(browser_name, profile, YDLLogger(ydl)))
+        browser_name, profile, keyring = _parse_browser_specification(*browser_specification)
+        cookie_jars.append(extract_cookies_from_browser(browser_name, profile, YDLLogger(ydl), keyring=keyring))

     if cookie_file is not None:
         cookie_file = expand_path(cookie_file)
@@ -95,13 +85,13 @@ def load_cookies(cookie_file, browser_specification, ydl):
     return _merge_cookie_jars(cookie_jars)


-def extract_cookies_from_browser(browser_name, profile=None, logger=YDLLogger()):
+def extract_cookies_from_browser(browser_name, profile=None, logger=YDLLogger(), *, keyring=None):
     if browser_name == 'firefox':
         return _extract_firefox_cookies(profile, logger)
     elif browser_name == 'safari':
         return _extract_safari_cookies(profile, logger)
     elif browser_name in CHROMIUM_BASED_BROWSERS:
-        return _extract_chrome_cookies(browser_name, profile, logger)
+        return _extract_chrome_cookies(browser_name, profile, keyring, logger)
     else:
         raise ValueError('unknown browser: {}'.format(browser_name))
@@ -123,9 +113,9 @@ def _extract_firefox_cookies(profile, logger):
     cookie_database_path = _find_most_recently_used_file(search_root, 'cookies.sqlite')
     if cookie_database_path is None:
         raise FileNotFoundError('could not find firefox cookies database in {}'.format(search_root))
-    logger.debug('extracting from: "{}"'.format(cookie_database_path))
+    logger.debug('Extracting cookies from: "{}"'.format(cookie_database_path))

-    with tempfile.TemporaryDirectory(prefix='youtube_dl') as tmpdir:
+    with tempfile.TemporaryDirectory(prefix='yt_dlp') as tmpdir:
         cursor = None
         try:
             cursor = _open_database_copy(cookie_database_path, tmpdir)
@@ -215,7 +205,7 @@ def _get_chromium_based_browser_settings(browser_name):
     }


-def _extract_chrome_cookies(browser_name, profile, logger):
+def _extract_chrome_cookies(browser_name, profile, keyring, logger):
     logger.info('Extracting cookies from {}'.format(browser_name))

     if not SQLITE_AVAILABLE:
@@ -240,11 +230,11 @@ def _extract_chrome_cookies(browser_name, profile, logger):
     cookie_database_path = _find_most_recently_used_file(search_root, 'Cookies')
     if cookie_database_path is None:
         raise FileNotFoundError('could not find {} cookies database in "{}"'.format(browser_name, search_root))
-    logger.debug('extracting from: "{}"'.format(cookie_database_path))
+    logger.debug('Extracting cookies from: "{}"'.format(cookie_database_path))

-    decryptor = get_cookie_decryptor(config['browser_dir'], config['keyring_name'], logger)
+    decryptor = get_cookie_decryptor(config['browser_dir'], config['keyring_name'], logger, keyring=keyring)

-    with tempfile.TemporaryDirectory(prefix='youtube_dl') as tmpdir:
+    with tempfile.TemporaryDirectory(prefix='yt_dlp') as tmpdir:
         cursor = None
         try:
             cursor = _open_database_copy(cookie_database_path, tmpdir)
@@ -255,6 +245,7 @@ def _extract_chrome_cookies(browser_name, profile, logger):
                            'expires_utc, {} FROM cookies'.format(secure_column))
             jar = YoutubeDLCookieJar()
             failed_cookies = 0
+            unencrypted_cookies = 0
             for host_key, name, value, encrypted_value, path, expires_utc, is_secure in cursor.fetchall():
                 host_key = host_key.decode('utf-8')
                 name = name.decode('utf-8')
@@ -266,6 +257,8 @@ def _extract_chrome_cookies(browser_name, profile, logger):
                     if value is None:
                         failed_cookies += 1
                         continue
+                else:
+                    unencrypted_cookies += 1

                 cookie = compat_cookiejar_Cookie(
                     version=0, name=name, value=value, port=None, port_specified=False,
@@ -278,6 +271,9 @@ def _extract_chrome_cookies(browser_name, profile, logger):
             else:
                 failed_message = ''
             logger.info('Extracted {} cookies from {}{}'.format(len(jar), browser_name, failed_message))
+            counts = decryptor.cookie_counts.copy()
+            counts['unencrypted'] = unencrypted_cookies
+            logger.debug('cookie version breakdown: {}'.format(counts))
             return jar
         finally:
             if cursor is not None:
@@ -313,10 +309,14 @@ class ChromeCookieDecryptor:
     def decrypt(self, encrypted_value):
         raise NotImplementedError

+    @property
+    def cookie_counts(self):
+        raise NotImplementedError
+

-def get_cookie_decryptor(browser_root, browser_keyring_name, logger):
+def get_cookie_decryptor(browser_root, browser_keyring_name, logger, *, keyring=None):
     if sys.platform in ('linux', 'linux2'):
-        return LinuxChromeCookieDecryptor(browser_keyring_name, logger)
+        return LinuxChromeCookieDecryptor(browser_keyring_name, logger, keyring=keyring)
     elif sys.platform == 'darwin':
         return MacChromeCookieDecryptor(browser_keyring_name, logger)
     elif sys.platform == 'win32':
@@ -327,13 +327,12 @@ def get_cookie_decryptor(browser_root, browser_keyring_name, logger):
 class LinuxChromeCookieDecryptor(ChromeCookieDecryptor):
-    def __init__(self, browser_keyring_name, logger):
+    def __init__(self, browser_keyring_name, logger, *, keyring=None):
         self._logger = logger
         self._v10_key = self.derive_key(b'peanuts')
-        if KEYRING_AVAILABLE:
-            self._v11_key = self.derive_key(_get_linux_keyring_password(browser_keyring_name))
-        else:
-            self._v11_key = None
+        password = _get_linux_keyring_password(browser_keyring_name, keyring, logger)
+        self._v11_key = None if password is None else self.derive_key(password)
+        self._cookie_counts = {'v10': 0, 'v11': 0, 'other': 0}

     @staticmethod
     def derive_key(password):
@@ -341,28 +340,36 @@ class LinuxChromeCookieDecryptor(ChromeCookieDecryptor):
         # https://chromium.googlesource.com/chromium/src/+/refs/heads/main/components/os_crypt/os_crypt_linux.cc
         return pbkdf2_sha1(password, salt=b'saltysalt', iterations=1, key_length=16)

+    @property
+    def cookie_counts(self):
+        return self._cookie_counts
+
     def decrypt(self, encrypted_value):
         version = encrypted_value[:3]
         ciphertext = encrypted_value[3:]

         if version == b'v10':
+            self._cookie_counts['v10'] += 1
             return _decrypt_aes_cbc(ciphertext, self._v10_key, self._logger)

         elif version == b'v11':
+            self._cookie_counts['v11'] += 1
             if self._v11_key is None:
-                self._logger.warning(f'cannot decrypt cookie {KEYRING_UNAVAILABLE_REASON}', only_once=True)
+                self._logger.warning('cannot decrypt v11 cookies: no key found', only_once=True)
                 return None
             return _decrypt_aes_cbc(ciphertext, self._v11_key, self._logger)

         else:
+            self._cookie_counts['other'] += 1
             return None


 class MacChromeCookieDecryptor(ChromeCookieDecryptor):
     def __init__(self, browser_keyring_name, logger):
         self._logger = logger
-        password = _get_mac_keyring_password(browser_keyring_name)
+        password = _get_mac_keyring_password(browser_keyring_name, logger)
         self._v10_key = None if password is None else self.derive_key(password)
+        self._cookie_counts = {'v10': 0, 'other': 0}

     @staticmethod
     def derive_key(password):
@@ -370,11 +377,16 @@ class MacChromeCookieDecryptor(ChromeCookieDecryptor):
         # https://chromium.googlesource.com/chromium/src/+/refs/heads/main/components/os_crypt/os_crypt_mac.mm
         return pbkdf2_sha1(password, salt=b'saltysalt', iterations=1003, key_length=16)

+    @property
+    def cookie_counts(self):
+        return self._cookie_counts
+
     def decrypt(self, encrypted_value):
         version = encrypted_value[:3]
         ciphertext = encrypted_value[3:]

         if version == b'v10':
+            self._cookie_counts['v10'] += 1
             if self._v10_key is None:
                 self._logger.warning('cannot decrypt v10 cookies: no key found', only_once=True)
                 return None
@@ -382,6 +394,7 @@ class MacChromeCookieDecryptor(ChromeCookieDecryptor):
             return _decrypt_aes_cbc(ciphertext, self._v10_key, self._logger)

         else:
+            self._cookie_counts['other'] += 1
             # other prefixes are considered 'old data' which were stored as plaintext
             # https://chromium.googlesource.com/chromium/src/+/refs/heads/main/components/os_crypt/os_crypt_mac.mm
             return encrypted_value
@@ -391,20 +404,21 @@ class WindowsChromeCookieDecryptor(ChromeCookieDecryptor):
     def __init__(self, browser_root, logger):
         self._logger = logger
         self._v10_key = _get_windows_v10_key(browser_root, logger)
+        self._cookie_counts = {'v10': 0, 'other': 0}

+    @property
+    def cookie_counts(self):
+        return self._cookie_counts
+
     def decrypt(self, encrypted_value):
         version = encrypted_value[:3]
         ciphertext = encrypted_value[3:]

         if version == b'v10':
+            self._cookie_counts['v10'] += 1
             if self._v10_key is None:
                 self._logger.warning('cannot decrypt v10 cookies: no key found', only_once=True)
                 return None
-            elif not CRYPTO_AVAILABLE:
-                self._logger.warning('cannot decrypt cookie as the `pycryptodome` module is not installed. '
-                                     'Please install by running `python3 -m pip install pycryptodome`',
-                                     only_once=True)
-                return None

             # https://chromium.googlesource.com/chromium/src/+/refs/heads/main/components/os_crypt/os_crypt_win.cc
             #   kNonceLength
@@ -421,6 +435,7 @@ class WindowsChromeCookieDecryptor(ChromeCookieDecryptor):
             return _decrypt_aes_gcm(ciphertext, self._v10_key, nonce, authentication_tag, self._logger)

         else:
+            self._cookie_counts['other'] += 1
             # any other prefix means the data is DPAPI encrypted
             # https://chromium.googlesource.com/chromium/src/+/refs/heads/main/components/os_crypt/os_crypt_win.cc
             return _decrypt_windows_dpapi(encrypted_value, self._logger).decode('utf-8')
@@ -559,7 +574,7 @@ def _parse_safari_cookies_record(data, jar, logger):
             p.skip_to(value_offset)
             value = p.read_cstring()
     except UnicodeDecodeError:
-        logger.warning('failed to parse cookie because UTF-8 decoding failed')
+        logger.warning('failed to parse Safari cookie because UTF-8 decoding failed', only_once=True)
         return record_size

     p.skip_to(record_size, 'space at the end of the record')
@@ -590,37 +605,221 @@ def parse_safari_cookies(data, jar=None, logger=YDLLogger()):
     return jar
-def _get_linux_keyring_password(browser_keyring_name):
-    password = keyring.get_password('{} Keys'.format(browser_keyring_name),
-                                    '{} Safe Storage'.format(browser_keyring_name))
-    if password is None:
-        # this sometimes occurs in KDE because chrome does not check hasEntry and instead
-        # just tries to read the value (which kwallet returns "") whereas keyring checks hasEntry
-        # to verify this:
-        # dbus-monitor "interface='org.kde.KWallet'" "type=method_return"
-        # while starting chrome.
-        # this may be a bug as the intended behaviour is to generate a random password and store
-        # it, but that doesn't matter here.
-        password = ''
-    return password.encode('utf-8')
-
-
-def _get_mac_keyring_password(browser_keyring_name):
-    if KEYRING_AVAILABLE:
-        password = keyring.get_password('{} Safe Storage'.format(browser_keyring_name), browser_keyring_name)
-        return password.encode('utf-8')
-    else:
-        proc = subprocess.Popen(['security', 'find-generic-password',
-                                 '-w',  # write password to stdout
-                                 '-a', browser_keyring_name,  # match 'account'
-                                 '-s', '{} Safe Storage'.format(browser_keyring_name)],  # match 'service'
-                                stdout=subprocess.PIPE,
-                                stderr=subprocess.DEVNULL)
-        try:
-            stdout, stderr = process_communicate_or_kill(proc)
-            return stdout
-        except BaseException:
-            return None
+class _LinuxDesktopEnvironment(Enum):
+    """
+    https://chromium.googlesource.com/chromium/src/+/refs/heads/main/base/nix/xdg_util.h
+    DesktopEnvironment
+    """
+    OTHER = auto()
+    CINNAMON = auto()
+    GNOME = auto()
+    KDE = auto()
+    PANTHEON = auto()
+    UNITY = auto()
+    XFCE = auto()
+
+
+class _LinuxKeyring(Enum):
+    """
+    https://chromium.googlesource.com/chromium/src/+/refs/heads/main/components/os_crypt/key_storage_util_linux.h
+    SelectedLinuxBackend
+    """
+    KWALLET = auto()
+    GNOMEKEYRING = auto()
+    BASICTEXT = auto()
+
+
+SUPPORTED_KEYRINGS = _LinuxKeyring.__members__.keys()
+
+
+def _get_linux_desktop_environment(env):
+    """
+    https://chromium.googlesource.com/chromium/src/+/refs/heads/main/base/nix/xdg_util.cc
+    GetDesktopEnvironment
+    """
+    xdg_current_desktop = env.get('XDG_CURRENT_DESKTOP', None)
+    desktop_session = env.get('DESKTOP_SESSION', None)
+    if xdg_current_desktop is not None:
+        xdg_current_desktop = xdg_current_desktop.split(':')[0].strip()
+
+        if xdg_current_desktop == 'Unity':
+            if desktop_session is not None and 'gnome-fallback' in desktop_session:
+                return _LinuxDesktopEnvironment.GNOME
+            else:
+                return _LinuxDesktopEnvironment.UNITY
+        elif xdg_current_desktop == 'GNOME':
+            return _LinuxDesktopEnvironment.GNOME
+        elif xdg_current_desktop == 'X-Cinnamon':
+            return _LinuxDesktopEnvironment.CINNAMON
+        elif xdg_current_desktop == 'KDE':
+            return _LinuxDesktopEnvironment.KDE
+        elif xdg_current_desktop == 'Pantheon':
+            return _LinuxDesktopEnvironment.PANTHEON
+        elif xdg_current_desktop == 'XFCE':
+            return _LinuxDesktopEnvironment.XFCE
+    elif desktop_session is not None:
+        if desktop_session in ('mate', 'gnome'):
+            return _LinuxDesktopEnvironment.GNOME
+        elif 'kde' in desktop_session:
+            return _LinuxDesktopEnvironment.KDE
+        elif 'xfce' in desktop_session:
+            return _LinuxDesktopEnvironment.XFCE
+    else:
+        if 'GNOME_DESKTOP_SESSION_ID' in env:
+            return _LinuxDesktopEnvironment.GNOME
+        elif 'KDE_FULL_SESSION' in env:
+            return _LinuxDesktopEnvironment.KDE
+        else:
+            return _LinuxDesktopEnvironment.OTHER
+
+
+def _choose_linux_keyring(logger):
+    """
+    https://chromium.googlesource.com/chromium/src/+/refs/heads/main/components/os_crypt/key_storage_util_linux.cc
+    SelectBackend
+    """
+    desktop_environment = _get_linux_desktop_environment(os.environ)
+    logger.debug('detected desktop environment: {}'.format(desktop_environment.name))
+    if desktop_environment == _LinuxDesktopEnvironment.KDE:
+        linux_keyring = _LinuxKeyring.KWALLET
+    elif desktop_environment == _LinuxDesktopEnvironment.OTHER:
+        linux_keyring = _LinuxKeyring.BASICTEXT
+    else:
+        linux_keyring = _LinuxKeyring.GNOMEKEYRING
+    return linux_keyring
+
+
+def _get_kwallet_network_wallet(logger):
+    """ The name of the wallet used to store network passwords.
+
+    https://chromium.googlesource.com/chromium/src/+/refs/heads/main/components/os_crypt/kwallet_dbus.cc
+    KWalletDBus::NetworkWallet
+    which does a dbus call to the following function:
+    https://api.kde.org/frameworks/kwallet/html/classKWallet_1_1Wallet.html
+    Wallet::NetworkWallet
+    """
+    default_wallet = 'kdewallet'
+    try:
+        proc = Popen([
+            'dbus-send', '--session', '--print-reply=literal',
+            '--dest=org.kde.kwalletd5',
+            '/modules/kwalletd5',
+            'org.kde.KWallet.networkWallet'
+        ], stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)
+
+        stdout, stderr = proc.communicate_or_kill()
+        if proc.returncode != 0:
+            logger.warning('failed to read NetworkWallet')
+            return default_wallet
+        else:
+            network_wallet = stdout.decode('utf-8').strip()
+            logger.debug('NetworkWallet = "{}"'.format(network_wallet))
+            return network_wallet
+    except BaseException as e:
+        logger.warning('exception while obtaining NetworkWallet: {}'.format(e))
+        return default_wallet
+
+
+def _get_kwallet_password(browser_keyring_name, logger):
+    logger.debug('using kwallet-query to obtain password from kwallet')
+
+    if shutil.which('kwallet-query') is None:
+        logger.error('kwallet-query command not found. KWallet and kwallet-query '
+                     'must be installed to read from KWallet. kwallet-query should be '
+                     'included in the kwallet package for your distribution')
+        return b''
+
+    network_wallet = _get_kwallet_network_wallet(logger)
+
+    try:
+        proc = Popen([
+            'kwallet-query',
+            '--read-password', '{} Safe Storage'.format(browser_keyring_name),
+            '--folder', '{} Keys'.format(browser_keyring_name),
+            network_wallet
+        ], stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)
+
+        stdout, stderr = proc.communicate_or_kill()
+        if proc.returncode != 0:
+            logger.error('kwallet-query failed with return code {}. Please consult '
+                         'the kwallet-query man page for details'.format(proc.returncode))
+            return b''
+        else:
+            if stdout.lower().startswith(b'failed to read'):
+                logger.debug('failed to read password from kwallet. Using empty string instead')
+                # this sometimes occurs in KDE because chrome does not check hasEntry and instead
+                # just tries to read the value (which kwallet returns "") whereas kwallet-query
+                # checks hasEntry. To verify this:
+                # dbus-monitor "interface='org.kde.KWallet'" "type=method_return"
+                # while starting chrome.
+                # this may be a bug as the intended behaviour is to generate a random password and store
+                # it, but that doesn't matter here.
+                return b''
+            else:
+                logger.debug('password found')
+                if stdout[-1:] == b'\n':
+                    stdout = stdout[:-1]
+                return stdout
+    except BaseException as e:
+        logger.warning(f'exception running kwallet-query: {type(e).__name__}({e})')
+        return b''
+
+
+def _get_gnome_keyring_password(browser_keyring_name, logger):
+    if not SECRETSTORAGE_AVAILABLE:
+        logger.error('secretstorage not available {}'.format(SECRETSTORAGE_UNAVAILABLE_REASON))
+        return b''
+    # the Gnome keyring does not seem to organise keys in the same way as KWallet,
+    # using `dbus-monitor` during startup, it can be observed that chromium lists all keys
+    # and presumably searches for its key in the list. It appears that we must do the same.
+    # https://github.com/jaraco/keyring/issues/556
+    with contextlib.closing(secretstorage.dbus_init()) as con:
+        col = secretstorage.get_default_collection(con)
+        for item in col.get_all_items():
+            if item.get_label() == '{} Safe Storage'.format(browser_keyring_name):
+                return item.get_secret()
+        else:
+            logger.error('failed to read from keyring')
+            return b''
+
+
+def _get_linux_keyring_password(browser_keyring_name, keyring, logger):
+    # note: chrome/chromium can be run with the following flags to determine which keyring backend
+    # it has chosen to use
+    # chromium --enable-logging=stderr --v=1 2>&1 | grep key_storage_
+    # Chromium supports a flag: --password-store=<basic|gnome|kwallet> so the automatic detection
+    # will not be sufficient in all cases.
+
+    keyring = _LinuxKeyring[keyring] if keyring else _choose_linux_keyring(logger)
+    logger.debug(f'Chosen keyring: {keyring.name}')
+
+    if keyring == _LinuxKeyring.KWALLET:
+        return _get_kwallet_password(browser_keyring_name, logger)
+    elif keyring == _LinuxKeyring.GNOMEKEYRING:
+        return _get_gnome_keyring_password(browser_keyring_name, logger)
+    elif keyring == _LinuxKeyring.BASICTEXT:
+        # when basic text is chosen, all cookies are stored as v10 (so no keyring password is required)
+        return None
+    assert False, f'Unknown keyring {keyring}'
+
+
+def _get_mac_keyring_password(browser_keyring_name, logger):
+    logger.debug('using find-generic-password to obtain password from OSX keychain')
+    try:
+        proc = Popen(
+            ['security', 'find-generic-password',
+             '-w',  # write password to stdout
+             '-a', browser_keyring_name,  # match 'account'
+             '-s', '{} Safe Storage'.format(browser_keyring_name)],  # match 'service'
+            stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)

+        stdout, stderr = proc.communicate_or_kill()
+        if stdout[-1:] == b'\n':
+            stdout = stdout[:-1]
+        return stdout
+    except BaseException as e:
+        logger.warning(f'exception running find-generic-password: {type(e).__name__}({e})')
+        return None
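Because `_get_linux_desktop_environment` takes the environment as a plain mapping, it is easy to exercise directly. A few cases that follow from the branches above:

```python
env = {'XDG_CURRENT_DESKTOP': 'KDE'}
assert _get_linux_desktop_environment(env) is _LinuxDesktopEnvironment.KDE

env = {'DESKTOP_SESSION': 'xfce4'}  # matched by the 'xfce' substring check
assert _get_linux_desktop_environment(env) is _LinuxDesktopEnvironment.XFCE

assert _get_linux_desktop_environment({}) is _LinuxDesktopEnvironment.OTHER
```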
 def _get_windows_v10_key(browser_root, logger):
@@ -628,7 +827,7 @@ def _get_windows_v10_key(browser_root, logger):
     if path is None:
         logger.error('could not find local state file')
         return None
-    with open(path, 'r') as f:
+    with open(path, 'r', encoding='utf8') as f:
         data = json.load(f)
     try:
         base64_key = data['os_crypt']['encrypted_key']
@@ -648,29 +847,26 @@ def pbkdf2_sha1(password, salt, iterations, key_length):


 def _decrypt_aes_cbc(ciphertext, key, logger, initialization_vector=b' ' * 16):
-    plaintext = aes_cbc_decrypt(bytes_to_intlist(ciphertext),
-                                bytes_to_intlist(key),
-                                bytes_to_intlist(initialization_vector))
+    plaintext = aes_cbc_decrypt_bytes(ciphertext, key, initialization_vector)
     padding_length = plaintext[-1]
     try:
-        return intlist_to_bytes(plaintext[:-padding_length]).decode('utf-8')
+        return plaintext[:-padding_length].decode('utf-8')
     except UnicodeDecodeError:
-        logger.warning('failed to decrypt cookie because UTF-8 decoding failed. Possibly the key is wrong?')
+        logger.warning('failed to decrypt cookie (AES-CBC) because UTF-8 decoding failed. Possibly the key is wrong?', only_once=True)
         return None


 def _decrypt_aes_gcm(ciphertext, key, nonce, authentication_tag, logger):
-    cipher = AES.new(key, AES.MODE_GCM, nonce)
     try:
-        plaintext = cipher.decrypt_and_verify(ciphertext, authentication_tag)
+        plaintext = aes_gcm_decrypt_and_verify_bytes(ciphertext, key, authentication_tag, nonce)
     except ValueError:
-        logger.warning('failed to decrypt cookie because the MAC check failed. Possibly the key is wrong?')
+        logger.warning('failed to decrypt cookie (AES-GCM) because the MAC check failed. Possibly the key is wrong?', only_once=True)
         return None

     try:
         return plaintext.decode('utf-8')
     except UnicodeDecodeError:
-        logger.warning('failed to decrypt cookie because UTF-8 decoding failed. Possibly the key is wrong?')
+        logger.warning('failed to decrypt cookie (AES-GCM) because UTF-8 decoding failed. Possibly the key is wrong?', only_once=True)
         return None
@@ -698,7 +894,7 @@ def _decrypt_windows_dpapi(ciphertext, logger):
         ctypes.byref(blob_out)  # pDataOut
     )
     if not ret:
-        logger.warning('failed to decrypt with DPAPI')
+        logger.warning('failed to decrypt with DPAPI', only_once=True)
         return None

     result = ctypes.string_at(blob_out.pbData, blob_out.cbData)
@@ -747,9 +943,11 @@ def _is_path(value):
     return os.path.sep in value


-def _parse_browser_specification(browser_name, profile=None):
+def _parse_browser_specification(browser_name, profile=None, keyring=None):
     if browser_name not in SUPPORTED_BROWSERS:
         raise ValueError(f'unsupported browser: "{browser_name}"')
+    if keyring not in (None, *SUPPORTED_KEYRINGS):
+        raise ValueError(f'unsupported keyring: "{keyring}"')
     if profile is not None and _is_path(profile):
         profile = os.path.expanduser(profile)
-    return browser_name, profile
+    return browser_name, profile, keyring
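Taken together, this is what `--cookies-from-browser` specifications reduce to. A usage sketch of the parser above (on POSIX, where `os.path.sep` is `/`, so the second example's profile contains a path separator and is expanded):

```python
_parse_browser_specification('chrome')                     # ('chrome', None, None)
_parse_browser_specification('firefox', '~/profile')       # profile is expanduser'd
_parse_browser_specification('chromium', None, 'KWALLET')  # ('chromium', None, 'KWALLET')
_parse_browser_specification('lynx')                       # raises ValueError: unsupported browser
```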
View file
@@ -10,10 +10,20 @@ from ..utils import (

 def get_suitable_downloader(info_dict, params={}, default=NO_DEFAULT, protocol=None, to_stdout=False):
     info_dict['protocol'] = determine_protocol(info_dict)
     info_copy = info_dict.copy()
-    if protocol:
-        info_copy['protocol'] = protocol
     info_copy['to_stdout'] = to_stdout
-    return _get_suitable_downloader(info_copy, params, default)
+
+    protocols = (protocol or info_copy['protocol']).split('+')
+    downloaders = [_get_suitable_downloader(info_copy, proto, params, default) for proto in protocols]
+
+    if set(downloaders) == {FFmpegFD} and FFmpegFD.can_merge_formats(info_copy, params):
+        return FFmpegFD
+    elif (set(downloaders) == {DashSegmentsFD}
+            and not (to_stdout and len(protocols) > 1)
+            and set(protocols) == {'http_dash_segments_generator'}):
+        return DashSegmentsFD
+    elif len(downloaders) == 1:
+        return downloaders[0]
+    return None


 # Some of these require get_suitable_downloader
@@ -36,6 +46,7 @@ from .external import (
 PROTOCOL_MAP = {
     'rtmp': RtmpFD,
+    'rtmpe': RtmpFD,
     'rtmp_ffmpeg': FFmpegFD,
     'm3u8_native': HlsFD,
     'm3u8': FFmpegFD,
@@ -43,6 +54,7 @@ PROTOCOL_MAP = {
     'rtsp': RtspFD,
     'f4m': F4mFD,
     'http_dash_segments': DashSegmentsFD,
+    'http_dash_segments_generator': DashSegmentsFD,
     'ism': IsmFD,
     'mhtml': MhtmlFD,
     'niconico_dmc': NiconicoDmcFD,
@@ -57,6 +69,7 @@ def shorten_protocol_name(proto, simplify=False):
         'm3u8_native': 'm3u8_n',
         'rtmp_ffmpeg': 'rtmp_f',
         'http_dash_segments': 'dash',
+        'http_dash_segments_generator': 'dash_g',
         'niconico_dmc': 'dmc',
         'websocket_frag': 'WSfrag',
     }
@@ -65,6 +78,7 @@ def shorten_protocol_name(proto, simplify=False):
             'https': 'http',
             'ftps': 'ftp',
             'm3u8_native': 'm3u8',
+            'http_dash_segments_generator': 'dash',
             'rtmp_ffmpeg': 'rtmp',
             'm3u8_frag_urls': 'm3u8',
             'dash_frag_urls': 'dash',
@@ -72,7 +86,7 @@ def shorten_protocol_name(proto, simplify=False):
     return short_protocol_names.get(proto, proto)
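The two new mappings interact as one would expect: the verbose protocol name gets its own short form, and `simplify=True` collapses it into plain `dash`:

```python
shorten_protocol_name('http_dash_segments_generator')        # 'dash_g'
shorten_protocol_name('http_dash_segments_generator', True)  # 'dash'
shorten_protocol_name('m3u8_native')                         # 'm3u8_n'
```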
-def _get_suitable_downloader(info_dict, params, default):
+def _get_suitable_downloader(info_dict, protocol, params, default):
     """Get the downloader class that can handle the info dict."""
     if default is NO_DEFAULT:
         default = HttpFD
@@ -80,7 +94,7 @@ def _get_suitable_downloader(info_dict, params, default):
     # if (info_dict.get('start_time') or info_dict.get('end_time')) and not info_dict.get('requested_formats') and FFmpegFD.can_download(info_dict):
     #     return FFmpegFD

-    protocol = info_dict['protocol']
+    info_dict['protocol'] = protocol
     downloaders = params.get('external_downloader')
     external_downloader = (
         downloaders if isinstance(downloaders, compat_str) or downloaders is None
View file
@@ -1,20 +1,26 @@
 from __future__ import division, unicode_literals

-import copy
 import os
 import re
-import sys
 import time
 import random
+import errno

-from ..compat import compat_os_name
 from ..utils import (
     decodeArgument,
     encodeFilename,
     error_to_compat_str,
     format_bytes,
+    sanitize_open,
     shell_quote,
     timeconvert,
+    timetuple_from_msec,
+)
+from ..minicurses import (
+    MultilineLogger,
+    MultilinePrinter,
+    QuietMultilinePrinter,
+    BreaklineStatusPrinter
 )
@@ -35,12 +41,11 @@ class FileDownloader(object):
     ratelimit:          Download speed limit, in bytes/sec.
     throttledratelimit: Assume the download is being throttled below this speed (bytes/sec)
     retries:            Number of times to retry for HTTP error 5xx
+    file_access_retries:   Number of times to retry on file access error
     buffersize:         Size of download buffer in bytes.
     noresizebuffer:     Do not automatically resize the download buffer.
     continuedl:         Try to continue downloads if possible.
     noprogress:         Do not print the progress bar.
-    logtostderr:        Log messages to stderr instead of stdout.
-    consoletitle:       Display progress in console window's titlebar.
     nopart:             Do not use temporary .part files.
     updatetime:         Use the Last-modified header to set output file timestamps.
     test:               Download only first bytes to test the downloader.
@@ -56,6 +61,7 @@ class FileDownloader(object):
     http_chunk_size:    Size of a chunk for chunk-based HTTP downloading. May be
                         useful for bypassing bandwidth throttling imposed by
                         a webserver (experimental)
+    progress_template:  See YoutubeDL.py

     Subclasses of this one must re-define the real_download method.
     """
@@ -68,18 +74,17 @@ class FileDownloader(object):
         self.ydl = ydl
         self._progress_hooks = []
         self.params = params
+        self._prepare_multiline_status()
         self.add_progress_hook(self.report_progress)

     @staticmethod
     def format_seconds(seconds):
-        (mins, secs) = divmod(seconds, 60)
-        (hours, mins) = divmod(mins, 60)
-        if hours > 99:
+        time = timetuple_from_msec(seconds * 1000)
+        if time.hours > 99:
             return '--:--:--'
-        if hours == 0:
-            return '%02d:%02d' % (mins, secs)
-        else:
-            return '%02d:%02d:%02d' % (hours, mins, secs)
+        if not time.hours:
+            return '%02d:%02d' % time[1:-1]
+        return '%02d:%02d:%02d' % time[:-1]
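The new `format_seconds` leans on `utils.timetuple_from_msec`. A hedged reconstruction of that helper's assumed shape, showing the two formatting branches:

```python
import collections

# Assumed shape of utils.timetuple_from_msec, reconstructed for illustration
TimeTuple = collections.namedtuple('TimeTuple', ('hours', 'minutes', 'seconds', 'milliseconds'))

def timetuple_from_msec(msec):
    secs, msec = divmod(msec, 1000)
    mins, secs = divmod(secs, 60)
    hours, mins = divmod(mins, 60)
    return TimeTuple(hours, mins, secs, msec)

t = timetuple_from_msec(75 * 1000)
assert '%02d:%02d' % t[1:-1] == '01:15'          # under an hour: mm:ss
t = timetuple_from_msec(3725 * 1000)
assert '%02d:%02d:%02d' % t[:-1] == '01:02:05'   # an hour or more: hh:mm:ss
```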
     @staticmethod
     def calc_percent(byte_counter, data_len):
@@ -91,6 +96,8 @@ class FileDownloader(object):
     def format_percent(percent):
         if percent is None:
             return '---.-%'
+        elif percent == 100:
+            return '100%'
         return '%6s' % ('%3.1f%%' % percent)

     @staticmethod
@@ -203,16 +210,28 @@ class FileDownloader(object):
     def ytdl_filename(self, filename):
         return filename + '.ytdl'

+    def sanitize_open(self, filename, open_mode):
+        file_access_retries = self.params.get('file_access_retries', 10)
+        retry = 0
+        while True:
+            try:
+                return sanitize_open(filename, open_mode)
+            except (IOError, OSError) as err:
+                retry = retry + 1
+                if retry > file_access_retries or err.errno not in (errno.EACCES,):
+                    raise
+                self.to_screen(
+                    '[download] Got file access error. Retrying (attempt %d of %s) ...'
+                    % (retry, self.format_retries(file_access_retries)))
+                time.sleep(0.01)
+
     def try_rename(self, old_filename, new_filename):
         if old_filename == new_filename:
             return
         try:
-            if self.params.get('overwrites', False):
-                if os.path.isfile(encodeFilename(new_filename)):
-                    os.remove(encodeFilename(new_filename))
-            os.rename(encodeFilename(old_filename), encodeFilename(new_filename))
+            os.replace(old_filename, new_filename)
         except (IOError, OSError) as err:
-            self.report_error('unable to rename file: %s' % error_to_compat_str(err))
+            self.report_error(f'unable to rename file: {err}')
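The retry loop above only treats `EACCES` as transient (e.g. a file briefly locked by another process, common on Windows). A minimal standalone analogue, not yt-dlp's actual helper:

```python
import errno
import time

def open_with_retries(path, mode, retries=10, delay=0.01):
    # Retry only EACCES; any other OSError is re-raised immediately,
    # as is EACCES once the retry budget is exhausted.
    for attempt in range(retries + 1):
        try:
            return open(path, mode)
        except OSError as err:
            if attempt >= retries or err.errno != errno.EACCES:
                raise
            time.sleep(delay)
```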
     def try_utime(self, filename, last_modified_hdr):
         """Try to set the last-modified time of the given file."""
@@ -239,39 +258,67 @@ class FileDownloader(object):
         """Report destination filename."""
         self.to_screen('[download] Destination: ' + filename)

-    def _report_progress_status(self, msg, is_last_line=False):
-        fullmsg = '[download] ' + msg
-        if self.params.get('progress_with_newline', False):
-            self.to_screen(fullmsg)
+    def _prepare_multiline_status(self, lines=1):
+        if self.params.get('noprogress'):
+            self._multiline = QuietMultilinePrinter()
+        elif self.ydl.params.get('logger'):
+            self._multiline = MultilineLogger(self.ydl.params['logger'], lines)
+        elif self.params.get('progress_with_newline'):
+            self._multiline = BreaklineStatusPrinter(self.ydl._screen_file, lines)
         else:
-            if compat_os_name == 'nt':
-                prev_len = getattr(self, '_report_progress_prev_line_length',
-                                   0)
-                if prev_len > len(fullmsg):
-                    fullmsg += ' ' * (prev_len - len(fullmsg))
-                self._report_progress_prev_line_length = len(fullmsg)
-                clear_line = '\r'
-            else:
-                clear_line = ('\r\x1b[K' if sys.stderr.isatty() else '\r')
-            self.to_screen(clear_line + fullmsg, skip_eol=not is_last_line)
-        self.to_console_title('yt-dlp ' + msg)
+            self._multiline = MultilinePrinter(self.ydl._screen_file, lines, not self.params.get('quiet'))
+        self._multiline.allow_colors = self._multiline._HAVE_FULLCAP and not self.params.get('no_color')
+
+    def _finish_multiline_status(self):
+        self._multiline.end()
+
+    _progress_styles = {
+        'downloaded_bytes': 'light blue',
+        'percent': 'light blue',
+        'eta': 'yellow',
+        'speed': 'green',
+        'elapsed': 'bold white',
+        'total_bytes': '',
+        'total_bytes_estimate': '',
+    }
+
+    def _report_progress_status(self, s, default_template):
+        for name, style in self._progress_styles.items():
+            name = f'_{name}_str'
+            if name not in s:
+                continue
+            s[name] = self._format_progress(s[name], style)
+        s['_default_template'] = default_template % s
+
+        progress_dict = s.copy()
+        progress_dict.pop('info_dict')
+        progress_dict = {'info': s['info_dict'], 'progress': progress_dict}
+
+        progress_template = self.params.get('progress_template', {})
+        self._multiline.print_at_line(self.ydl.evaluate_outtmpl(
+            progress_template.get('download') or '[download] %(progress._default_template)s',
+            progress_dict), s.get('progress_idx') or 0)
+        self.to_console_title(self.ydl.evaluate_outtmpl(
+            progress_template.get('download-title') or 'yt-dlp %(progress._default_template)s',
+            progress_dict))
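Both templates are rendered through `evaluate_outtmpl` with an `info` and a `progress` namespace. A sketch of how this might be driven through the embedding API, assuming the `progress_template` parameter accepts the same `download`/`download-title` keys used above (the URL is a placeholder):

```python
import yt_dlp

ydl_opts = {
    'progress_template': {
        # '%(progress....)s' fields come from the status dict built above;
        # '%(info....)s' fields come from the video's info_dict
        'download': '[%(info.id)s] %(progress._percent_str)s at %(progress._speed_str)s',
        'download-title': 'yt-dlp %(progress._default_template)s',
    },
}
with yt_dlp.YoutubeDL(ydl_opts) as ydl:
    ydl.download(['https://example.com/some-video'])  # placeholder URL
```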
+    def _format_progress(self, *args, **kwargs):
+        return self.ydl._format_text(
+            self._multiline.stream, self._multiline.allow_colors, *args, **kwargs)
+
     def report_progress(self, s):
         if s['status'] == 'finished':
-            if self.params.get('noprogress', False):
+            if self.params.get('noprogress'):
                 self.to_screen('[download] Download completed')
-            else:
-                msg_template = '100%%'
-                if s.get('total_bytes') is not None:
-                    s['_total_bytes_str'] = format_bytes(s['total_bytes'])
-                    msg_template += ' of %(_total_bytes_str)s'
-                if s.get('elapsed') is not None:
-                    s['_elapsed_str'] = self.format_seconds(s['elapsed'])
-                    msg_template += ' in %(_elapsed_str)s'
-                self._report_progress_status(
-                    msg_template % s, is_last_line=True)
-
-        if self.params.get('noprogress'):
+            msg_template = '100%%'
+            if s.get('total_bytes') is not None:
+                s['_total_bytes_str'] = format_bytes(s['total_bytes'])
+                msg_template += ' of %(_total_bytes_str)s'
+            if s.get('elapsed') is not None:
+                s['_elapsed_str'] = self.format_seconds(s['elapsed'])
+                msg_template += ' in %(_elapsed_str)s'
+            s['_percent_str'] = self.format_percent(100)
+            self._report_progress_status(s, msg_template)
             return

         if s['status'] != 'downloading':
@@ -280,7 +327,7 @@ class FileDownloader(object):
         if s.get('eta') is not None:
             s['_eta_str'] = self.format_eta(s['eta'])
         else:
-            s['_eta_str'] = 'Unknown ETA'
+            s['_eta_str'] = 'Unknown'

         if s.get('total_bytes') and s.get('downloaded_bytes') is not None:
             s['_percent_str'] = self.format_percent(100 * s['downloaded_bytes'] / s['total_bytes'])
@@ -312,9 +359,12 @@ class FileDownloader(object):
             else:
                 msg_template = '%(_downloaded_bytes_str)s at %(_speed_str)s'
         else:
-            msg_template = '%(_percent_str)s % at %(_speed_str)s ETA %(_eta_str)s'
-
-        self._report_progress_status(msg_template % s)
+            msg_template = '%(_percent_str)s at %(_speed_str)s ETA %(_eta_str)s'
+        if s.get('fragment_index') and s.get('fragment_count'):
+            msg_template += ' (frag %(fragment_index)s/%(fragment_count)s)'
+        elif s.get('fragment_index'):
+            msg_template += ' (frag %(fragment_index)s)'
+        self._report_progress_status(s, msg_template)

     def report_resuming_byte(self, resume_len):
         """Report attempt to resume at given byte."""
@@ -365,6 +415,7 @@ class FileDownloader(object):
                 'status': 'finished',
                 'total_bytes': os.path.getsize(encodeFilename(filename)),
             }, info_dict)
+            self._finish_multiline_status()
             return True, False

         if subtitle is False:
@@ -386,7 +437,9 @@ class FileDownloader(object):
                         '[download] Sleeping %s seconds ...' % (
                             sleep_interval_sub))
                     time.sleep(sleep_interval_sub)
-        return self.real_download(filename, info_dict), True
+        ret = self.real_download(filename, info_dict)
+        self._finish_multiline_status()
+        return ret, True

     def real_download(self, filename, info_dict):
         """Real download process. Redefine in subclasses."""
@@ -395,13 +448,10 @@ class FileDownloader(object):
     def _hook_progress(self, status, info_dict):
         if not self._progress_hooks:
             return
-        info_dict = dict(info_dict)
-        for key in ('__original_infodict', '__postprocessors'):
-            info_dict.pop(key, None)
+        status['info_dict'] = info_dict
         # youtube-dl passes the same status object to all the hooks.
         # Some third party scripts seems to be relying on this.
         # So keep this behavior if possible
-        status['info_dict'] = copy.deepcopy(info_dict)
         for ph in self._progress_hooks:
             ph(status)
View file
@@ -1,4 +1,5 @@
 from __future__ import unicode_literals
+import time

 from ..downloader import get_suitable_downloader
 from .fragment import FragmentFD
@@ -15,27 +16,53 @@ class DashSegmentsFD(FragmentFD):
     FD_NAME = 'dashsegments'

     def real_download(self, filename, info_dict):
-        if info_dict.get('is_live'):
+        if info_dict.get('is_live') and set(info_dict['protocol'].split('+')) != {'http_dash_segments_generator'}:
             self.report_error('Live DASH videos are not supported')

-        fragment_base_url = info_dict.get('fragment_base_url')
-        fragments = info_dict['fragments'][:1] if self.params.get(
-            'test', False) else info_dict['fragments']
-
+        real_start = time.time()
         real_downloader = get_suitable_downloader(
             info_dict, self.params, None, protocol='dash_frag_urls', to_stdout=(filename == '-'))

-        ctx = {
-            'filename': filename,
-            'total_frags': len(fragments),
-        }
+        requested_formats = [{**info_dict, **fmt} for fmt in info_dict.get('requested_formats', [])]
+        args = []
+        for fmt in requested_formats or [info_dict]:
+            try:
+                fragment_count = 1 if self.params.get('test') else len(fmt['fragments'])
+            except TypeError:
+                fragment_count = None
+            ctx = {
+                'filename': fmt.get('filepath') or filename,
+                'live': 'is_from_start' if fmt.get('is_from_start') else fmt.get('is_live'),
+                'total_frags': fragment_count,
+            }

-        if real_downloader:
-            self._prepare_external_frag_download(ctx)
-        else:
-            self._prepare_and_start_frag_download(ctx, info_dict)
+            if real_downloader:
+                self._prepare_external_frag_download(ctx)
+            else:
+                self._prepare_and_start_frag_download(ctx, fmt)
+            ctx['start'] = real_start
+
+            fragments_to_download = self._get_fragments(fmt, ctx)
+
+            if real_downloader:
+                self.to_screen(
+                    '[%s] Fragment downloads will be delegated to %s' % (self.FD_NAME, real_downloader.get_basename()))
+                info_dict['fragments'] = list(fragments_to_download)
+                fd = real_downloader(self.ydl, self.params)
+                return fd.real_download(filename, info_dict)
+
+            args.append([ctx, fragments_to_download, fmt])
+
+        return self.download_and_append_fragments_multiple(*args)
+
+    def _resolve_fragments(self, fragments, ctx):
+        fragments = fragments(ctx) if callable(fragments) else fragments
+        return [next(iter(fragments))] if self.params.get('test') else fragments
+
+    def _get_fragments(self, fmt, ctx):
+        fragment_base_url = fmt.get('fragment_base_url')
+        fragments = self._resolve_fragments(fmt['fragments'], ctx)

-        fragments_to_download = []
         frag_index = 0
         for i, fragment in enumerate(fragments):
             frag_index += 1
@@ -46,18 +73,8 @@ class DashSegmentsFD(FragmentFD):
                 assert fragment_base_url
                 fragment_url = urljoin(fragment_base_url, fragment['path'])

-            fragments_to_download.append({
+            yield {
                 'frag_index': frag_index,
                 'index': i,
                 'url': fragment_url,
-            })
+            }

-        if real_downloader:
-            self.to_screen(
-                '[%s] Fragment downloads will be delegated to %s' % (self.FD_NAME, real_downloader.get_basename()))
-            info_copy = info_dict.copy()
-            info_copy['fragments'] = fragments_to_download
-            fd = real_downloader(self.ydl, self.params)
-            return fd.real_download(filename, info_copy)
-
-        return self.download_and_append_fragments(ctx, fragments_to_download, info_dict)
View file
@@ -6,13 +6,7 @@ import subprocess
 import sys
 import time

-try:
-    from Crypto.Cipher import AES
-    can_decrypt_frag = True
-except ImportError:
-    can_decrypt_frag = False
-
-from .common import FileDownloader
+from .fragment import FragmentFD
 from ..compat import (
     compat_setenv,
     compat_str,
@@ -27,14 +21,11 @@ from ..utils import (
     encodeArgument,
     handle_youtubedl_headers,
     check_executable,
-    is_outdated_version,
-    process_communicate_or_kill,
-    sanitized_Request,
-    sanitize_open,
+    Popen,
 )


-class ExternalFD(FileDownloader):
+class ExternalFD(FragmentFD):
     SUPPORTED_PROTOCOLS = ('http', 'https', 'ftp', 'ftps')
     can_download_to_stdout = False

@@ -122,73 +113,54 @@ class ExternalFD(FileDownloader):
         self._debug_cmd(cmd)

-        if 'fragments' in info_dict:
-            fragment_retries = self.params.get('fragment_retries', 0)
-            skip_unavailable_fragments = self.params.get('skip_unavailable_fragments', True)
-
-            count = 0
-            while count <= fragment_retries:
-                p = subprocess.Popen(
-                    cmd, stderr=subprocess.PIPE)
-                _, stderr = process_communicate_or_kill(p)
-                if p.returncode == 0:
-                    break
-                # TODO: Decide whether to retry based on error code
-                # https://aria2.github.io/manual/en/html/aria2c.html#exit-status
-                self.to_stderr(stderr.decode('utf-8', 'replace'))
-                count += 1
-                if count <= fragment_retries:
-                    self.to_screen(
-                        '[%s] Got error. Retrying fragments (attempt %d of %s)...'
-                        % (self.get_basename(), count, self.format_retries(fragment_retries)))
-            if count > fragment_retries:
-                if not skip_unavailable_fragments:
-                    self.report_error('Giving up after %s fragment retries' % fragment_retries)
-                    return -1
-
-            dest, _ = sanitize_open(tmpfilename, 'wb')
-            for frag_index, fragment in enumerate(info_dict['fragments']):
-                fragment_filename = '%s-Frag%d' % (tmpfilename, frag_index)
-                try:
-                    src, _ = sanitize_open(fragment_filename, 'rb')
-                except IOError:
-                    if skip_unavailable_fragments and frag_index > 1:
-                        self.to_screen('[%s] Skipping fragment %d ...' % (self.get_basename(), frag_index))
-                        continue
-                    self.report_error('Unable to open fragment %d' % frag_index)
-                    return -1
-                decrypt_info = fragment.get('decrypt_info')
-                if decrypt_info:
-                    if decrypt_info['METHOD'] == 'AES-128':
-                        iv = decrypt_info.get('IV')
-                        decrypt_info['KEY'] = decrypt_info.get('KEY') or self.ydl.urlopen(
-                            self._prepare_url(info_dict, info_dict.get('_decryption_key_url') or decrypt_info['URI'])).read()
-                        encrypted_data = src.read()
-                        decrypted_data = AES.new(
-                            decrypt_info['KEY'], AES.MODE_CBC, iv).decrypt(encrypted_data)
-                        dest.write(decrypted_data)
-                    else:
-                        fragment_data = src.read()
-                        dest.write(fragment_data)
-                else:
-                    fragment_data = src.read()
-                    dest.write(fragment_data)
-                src.close()
-                if not self.params.get('keep_fragments', False):
-                    os.remove(encodeFilename(fragment_filename))
-            dest.close()
-            os.remove(encodeFilename('%s.frag.urls' % tmpfilename))
-        else:
-            p = subprocess.Popen(
-                cmd, stderr=subprocess.PIPE)
-            _, stderr = process_communicate_or_kill(p)
+        if 'fragments' not in info_dict:
+            p = Popen(cmd, stderr=subprocess.PIPE)
+            _, stderr = p.communicate_or_kill()
             if p.returncode != 0:
                 self.to_stderr(stderr.decode('utf-8', 'replace'))
             return p.returncode

-    def _prepare_url(self, info_dict, url):
-        headers = info_dict.get('http_headers')
-        return sanitized_Request(url, None, headers) if headers else url
+        fragment_retries = self.params.get('fragment_retries', 0)
+        skip_unavailable_fragments = self.params.get('skip_unavailable_fragments', True)
+
+        count = 0
+        while count <= fragment_retries:
+            p = Popen(cmd, stderr=subprocess.PIPE)
+            _, stderr = p.communicate_or_kill()
+            if p.returncode == 0:
+                break
+            # TODO: Decide whether to retry based on error code
+            # https://aria2.github.io/manual/en/html/aria2c.html#exit-status
+            self.to_stderr(stderr.decode('utf-8', 'replace'))
+            count += 1
+            if count <= fragment_retries:
+                self.to_screen(
+                    '[%s] Got error. Retrying fragments (attempt %d of %s)...'
+                    % (self.get_basename(), count, self.format_retries(fragment_retries)))
+        if count > fragment_retries:
+            if not skip_unavailable_fragments:
+                self.report_error('Giving up after %s fragment retries' % fragment_retries)
+                return -1
+
+        decrypt_fragment = self.decrypter(info_dict)
+        dest, _ = self.sanitize_open(tmpfilename, 'wb')
+        for frag_index, fragment in enumerate(info_dict['fragments']):
+            fragment_filename = '%s-Frag%d' % (tmpfilename, frag_index)
+            try:
+                src, _ = self.sanitize_open(fragment_filename, 'rb')
+            except IOError as err:
+                if skip_unavailable_fragments and frag_index > 1:
+                    self.report_skip_fragment(frag_index, err)
+                    continue
+                self.report_error(f'Unable to open fragment {frag_index}; {err}')
+                return -1
+            dest.write(decrypt_fragment(fragment, src.read()))
+            src.close()
+            if not self.params.get('keep_fragments', False):
+                os.remove(encodeFilename(fragment_filename))
+        dest.close()
+        os.remove(encodeFilename('%s.frag.urls' % tmpfilename))
+        return 0


 class CurlFD(ExternalFD):
@@ -223,8 +195,8 @@ class CurlFD(ExternalFD):
         self._debug_cmd(cmd)

         # curl writes the progress to stderr so don't capture it.
-        p = subprocess.Popen(cmd)
-        process_communicate_or_kill(p)
+        p = Popen(cmd)
+        p.communicate_or_kill()
         return p.returncode

@@ -288,10 +260,12 @@ class Aria2cFD(ExternalFD):
         if info_dict.get('http_headers') is not None:
             for key, val in info_dict['http_headers'].items():
                 cmd += ['--header', '%s: %s' % (key, val)]
+        cmd += self._option('--max-overall-download-limit', 'ratelimit')
         cmd += self._option('--interface', 'source_address')
         cmd += self._option('--all-proxy', 'proxy')
         cmd += self._bool_option('--check-certificate', 'nocheckcertificate', 'false', 'true', '=')
         cmd += self._bool_option('--remote-time', 'updatetime', 'true', 'false', '=')
+        cmd += self._bool_option('--show-console-readout', 'noprogress', 'false', 'true', '=')
         cmd += self._configuration_args()

         # aria2c strips out spaces from the beginning/end of filenames and paths.
@@ -316,7 +290,7 @@ class Aria2cFD(ExternalFD):
             for frag_index, fragment in enumerate(info_dict['fragments']):
                 fragment_filename = '%s-Frag%d' % (os.path.basename(tmpfilename), frag_index)
                 url_list.append('%s\n\tout=%s' % (fragment['url'], fragment_filename))
-            stream, _ = sanitize_open(url_list_file, 'wb')
+            stream, _ = self.sanitize_open(url_list_file, 'wb')
             stream.write('\n'.join(url_list).encode('utf-8'))
             stream.close()
             cmd += ['-i', url_list_file]
@@ -351,12 +325,16 @@ class FFmpegFD(ExternalFD):
         # Fixme: This may be wrong when --ffmpeg-location is used
         return FFmpegPostProcessor().available

+    @classmethod
+    def supports(cls, info_dict):
+        return all(proto in cls.SUPPORTED_PROTOCOLS for proto in info_dict['protocol'].split('+'))
+
     def on_process_started(self, proc, stdin):
         """ Override this in subclasses """
         pass

     @classmethod
-    def can_merge_formats(cls, info_dict, params={}):
+    def can_merge_formats(cls, info_dict, params):
         return (
             info_dict.get('requested_formats')
             and info_dict.get('protocol')
@@ -465,8 +443,7 @@ class FFmpegFD(ExternalFD):
         if info_dict.get('requested_formats') or protocol == 'http_dash_segments':
             for (i, fmt) in enumerate(info_dict.get('requested_formats') or [info_dict]):
                 stream_number = fmt.get('manifest_stream_number', 0)
-                a_or_v = 'a' if fmt.get('acodec') != 'none' else 'v'
-                args.extend(['-map', f'{i}:{a_or_v}:{stream_number}'])
+                args.extend(['-map', f'{i}:{stream_number}'])

         if self.params.get('test', False):
             args += ['-fs', compat_str(self._TEST_FILE_SIZE)]
@@ -480,7 +457,7 @@ class FFmpegFD(ExternalFD):
                 args += ['-f', 'mpegts']
             else:
                 args += ['-f', 'mp4']
-                if (ffpp.basename == 'ffmpeg' and is_outdated_version(ffpp._versions['ffmpeg'], '3.2', False)) and (not info_dict.get('acodec') or info_dict['acodec'].split('.')[0] in ('aac', 'mp4a')):
+                if (ffpp.basename == 'ffmpeg' and ffpp._features.get('needs_adtstoasc')) and (not info_dict.get('acodec') or info_dict['acodec'].split('.')[0] in ('aac', 'mp4a')):
                     args += ['-bsf:a', 'aac_adtstoasc']
         elif protocol == 'rtmp':
             args += ['-f', 'flv']
@@ -495,7 +472,7 @@ class FFmpegFD(ExternalFD):
         args.append(encodeFilename(ffpp._ffmpeg_filename_argument(tmpfilename), True))

         self._debug_cmd(args)

-        proc = subprocess.Popen(args, stdin=subprocess.PIPE, env=env)
+        proc = Popen(args, stdin=subprocess.PIPE, env=env)
         if url in ('-', 'pipe:'):
             self.on_process_started(proc, proc.stdin)
         try:
@@ -507,7 +484,7 @@ class FFmpegFD(ExternalFD):
             # streams). Note that Windows is not affected and produces playable
             # files (see https://github.com/ytdl-org/youtube-dl/issues/8300).
             if isinstance(e, KeyboardInterrupt) and sys.platform != 'win32' and url not in ('-', 'pipe:'):
-                process_communicate_or_kill(proc, b'q')
+                proc.communicate_or_kill(b'q')
             else:
                 proc.kill()
                 proc.wait()
@@ -522,7 +499,7 @@ class AVconvFD(FFmpegFD):
 _BY_NAME = dict(
     (klass.get_basename(), klass)
     for name, klass in globals().items()
-    if name.endswith('FD') and name != 'ExternalFD'
+    if name.endswith('FD') and name not in ('ExternalFD', 'FragmentFD')
 )
View file
@@ -366,7 +366,7 @@ class F4mFD(FragmentFD):
         ctx = {
             'filename': filename,
             'total_frags': total_frags,
-            'live': live,
+            'live': bool(live),
         }

         self._prepare_frag_download(ctx)
View file
@@ -1,14 +1,10 @@
 from __future__ import division, unicode_literals

+import http.client
+import json
+import math
 import os
 import time
-import json
-
-try:
-    from Crypto.Cipher import AES
-    can_decrypt_frag = True
-except ImportError:
-    can_decrypt_frag = False

 try:
     import concurrent.futures
@@ -18,7 +14,9 @@ except ImportError:

 from .common import FileDownloader
 from .http import HttpFD
+from ..aes import aes_cbc_decrypt_bytes
 from ..compat import (
+    compat_os_name,
     compat_urllib_error,
     compat_struct_pack,
 )
@@ -26,7 +24,6 @@ from ..utils import (
     DownloadError,
     error_to_compat_str,
     encodeFilename,
-    sanitize_open,
     sanitized_Request,
 )

@@ -35,6 +32,10 @@ class HttpQuietDownloader(HttpFD):
     def to_screen(self, *args, **kargs):
         pass

+    def report_retry(self, err, count, retries):
+        super().to_screen(
+            f'[download] Got server HTTP error: {err}. Retrying (attempt {count} of {self.format_retries(retries)}) ...')
+

 class FragmentFD(FileDownloader):
     """
@@ -48,6 +49,7 @@ class FragmentFD(FileDownloader):
                         Skip unavailable fragments (DASH and hlsnative only)
     keep_fragments:     Keep downloaded fragments on disk after downloading is
                         finished
+    concurrent_fragment_downloads:  The number of threads to use for native hls and dash downloads
     _no_ytdl_file:      Don't use .ytdl file

     For each incomplete fragment download yt-dlp keeps on disk a special
@@ -76,8 +78,9 @@ class FragmentFD(FileDownloader):
             '\r[download] Got server HTTP error: %s. Retrying fragment %d (attempt %d of %s) ...'
             % (error_to_compat_str(err), frag_index, count, self.format_retries(retries)))

-    def report_skip_fragment(self, frag_index):
-        self.to_screen('[download] Skipping fragment %d ...' % frag_index)
+    def report_skip_fragment(self, frag_index, err=None):
+        err = f' {err};' if err else ''
+        self.to_screen(f'[download]{err} Skipping fragment {frag_index:d} ...')

     def _prepare_url(self, info_dict, url):
         headers = info_dict.get('http_headers')
@@ -88,11 +91,11 @@ class FragmentFD(FileDownloader):
         self._start_frag_download(ctx, info_dict)

     def __do_ytdl_file(self, ctx):
-        return not ctx['live'] and not ctx['tmpfilename'] == '-' and not self.params.get('_no_ytdl_file')
+        return ctx['live'] is not True and ctx['tmpfilename'] != '-' and not self.params.get('_no_ytdl_file')

     def _read_ytdl_file(self, ctx):
         assert 'ytdl_corrupt' not in ctx
-        stream, _ = sanitize_open(self.ytdl_filename(ctx['filename']), 'r')
+        stream, _ = self.sanitize_open(self.ytdl_filename(ctx['filename']), 'r')
         try:
             ytdl_data = json.loads(stream.read())
             ctx['fragment_index'] = ytdl_data['downloader']['current_fragment']['index']
@@ -104,7 +107,7 @@ class FragmentFD(FileDownloader):
             stream.close()

     def _write_ytdl_file(self, ctx):
-        frag_index_stream, _ = sanitize_open(self.ytdl_filename(ctx['filename']), 'w')
+        frag_index_stream, _ = self.sanitize_open(self.ytdl_filename(ctx['filename']), 'w')
         try:
             downloader = {
                 'current_fragment': {
@@ -125,6 +128,7 @@ class FragmentFD(FileDownloader):
             'url': frag_url,
             'http_headers': headers or info_dict.get('http_headers'),
             'request_data': request_data,
+            'ctx_id': ctx.get('ctx_id'),
         }
         success = ctx['dl'].download(fragment_filename, fragment_info_dict)
         if not success:
@@ -135,7 +139,7 @@ class FragmentFD(FileDownloader):
         return True, self._read_fragment(ctx)

     def _read_fragment(self, ctx):
-        down, frag_sanitized = sanitize_open(ctx['fragment_filename_sanitized'], 'rb')
+        down, frag_sanitized = self.sanitize_open(ctx['fragment_filename_sanitized'], 'rb')
         ctx['fragment_filename_sanitized'] = frag_sanitized
         frag_content = down.read()
         down.close()
@@ -169,7 +173,7 @@ class FragmentFD(FileDownloader):
             self.ydl,
             {
                 'continuedl': True,
-                'quiet': True,
+                'quiet': self.params.get('quiet'),
                 'noprogress': True,
                 'ratelimit': self.params.get('ratelimit'),
                 'retries': self.params.get('retries', 0),
@@ -211,7 +215,7 @@ class FragmentFD(FileDownloader):
             self._write_ytdl_file(ctx)
             assert ctx['fragment_index'] == 0

-        dest_stream, tmpfilename = sanitize_open(tmpfilename, open_mode)
+        dest_stream, tmpfilename = self.sanitize_open(tmpfilename, open_mode)

         ctx.update({
             'dl': dl,
@@ -224,6 +228,7 @@ class FragmentFD(FileDownloader):
     def _start_frag_download(self, ctx, info_dict):
         resume_len = ctx['complete_frags_downloaded_bytes']
         total_frags = ctx['total_frags']
+        ctx_id = ctx.get('ctx_id')
         # This dict stores the download progress, it's updated by the progress
         # hook
         state = {
@@ -238,6 +243,7 @@ class FragmentFD(FileDownloader):
         start = time.time()
         ctx.update({
             'started': start,
+            'fragment_started': start,
             # Amount of fragment's bytes downloaded by the time of the previous
             # frag progress hook invocation
             'prev_frag_downloaded_bytes': 0,
@@ -247,6 +253,12 @@ class FragmentFD(FileDownloader):
             if s['status'] not in ('downloading', 'finished'):
                 return

+            if ctx_id is not None and s.get('ctx_id') != ctx_id:
+                return
+
+            state['max_progress'] = ctx.get('max_progress')
+            state['progress_idx'] = ctx.get('progress_idx')
+
             time_now = time.time()
             state['elapsed'] = time_now - start
             frag_total_bytes = s.get('total_bytes') or 0
@@ -262,6 +274,9 @@ class FragmentFD(FileDownloader):
                 ctx['fragment_index'] = state['fragment_index']
                 state['downloaded_bytes'] += frag_total_bytes - ctx['prev_frag_downloaded_bytes']
                 ctx['complete_frags_downloaded_bytes'] = state['downloaded_bytes']
+                ctx['speed'] = state['speed'] = self.calc_speed(
+                    ctx['fragment_started'], time_now, frag_total_bytes)
+                ctx['fragment_started'] = time.time()
                 ctx['prev_frag_downloaded_bytes'] = 0
             else:
                 frag_downloaded_bytes = s['downloaded_bytes']
@@ -270,8 +285,8 @@ class FragmentFD(FileDownloader):
                 state['eta'] = self.calc_eta(
                     start, time_now, estimated_size - resume_len,
                     state['downloaded_bytes'] - resume_len)
-                state['speed'] = s.get('speed') or ctx.get('speed')
-                ctx['speed'] = state['speed']
+                ctx['speed'] = state['speed'] = self.calc_speed(
+                    ctx['fragment_started'], time_now, frag_downloaded_bytes)
                 ctx['prev_frag_downloaded_bytes'] = frag_downloaded_bytes
             self._hook_progress(state, info_dict)
@@ -306,6 +321,9 @@ class FragmentFD(FileDownloader):
             'filename': ctx['filename'],
             'status': 'finished',
             'elapsed': elapsed,
+            'ctx_id': ctx.get('ctx_id'),
+            'max_progress': ctx.get('max_progress'),
+            'progress_idx': ctx.get('progress_idx'),
         }, info_dict)

     def _prepare_external_frag_download(self, ctx):
@@ -329,15 +347,96 @@ class FragmentFD(FileDownloader):
             'fragment_index': 0,
         })

+    def decrypter(self, info_dict):
+        _key_cache = {}
+
+        def _get_key(url):
+            if url not in _key_cache:
+                _key_cache[url] = self.ydl.urlopen(self._prepare_url(info_dict, url)).read()
+            return _key_cache[url]
+
+        def decrypt_fragment(fragment, frag_content):
+            decrypt_info = fragment.get('decrypt_info')
+            if not decrypt_info or decrypt_info['METHOD'] != 'AES-128':
+                return frag_content
+            iv = decrypt_info.get('IV') or compat_struct_pack('>8xq', fragment['media_sequence'])
+            decrypt_info['KEY'] = decrypt_info.get('KEY') or _get_key(info_dict.get('_decryption_key_url') or decrypt_info['URI'])
+            # Don't decrypt the content in tests since the data is explicitly truncated and it's not to a valid block
+            # size (see https://github.com/ytdl-org/youtube-dl/pull/27660). Tests only care that the correct data downloaded,
+            # not what it decrypts to.
+            if self.params.get('test', False):
+                return frag_content
+            decrypted_data = aes_cbc_decrypt_bytes(frag_content, decrypt_info['KEY'], iv)
+            return decrypted_data[:-decrypted_data[-1]]
+
+        return decrypt_fragment
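The padding strip on the last line works because AES-128 HLS segments are PKCS#7-padded: the final plaintext byte encodes the pad length. A small usage sketch, assuming `yt_dlp.aes` is importable:

```python
from yt_dlp.aes import aes_cbc_decrypt_bytes  # available in yt-dlp of this era

def decrypt_aes128_fragment(frag_content, key, iv):
    # CBC-decrypt the fragment, then drop the PKCS#7 padding: the last
    # byte of the plaintext gives the number of padding bytes.
    decrypted = aes_cbc_decrypt_bytes(frag_content, key, iv)
    return decrypted[:-decrypted[-1]]
```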
+    def download_and_append_fragments_multiple(self, *args, pack_func=None, finish_func=None):
+        '''
+        @params (ctx1, fragments1, info_dict1), (ctx2, fragments2, info_dict2), ...
+        all args must be either tuple or list
+        '''
+        interrupt_trigger = [True]
+        max_progress = len(args)
+        if max_progress == 1:
+            return self.download_and_append_fragments(*args[0], pack_func=pack_func, finish_func=finish_func)
+        max_workers = self.params.get('concurrent_fragment_downloads', 1)
+        if max_progress > 1:
+            self._prepare_multiline_status(max_progress)
+
+        def thread_func(idx, ctx, fragments, info_dict, tpe):
+            ctx['max_progress'] = max_progress
+            ctx['progress_idx'] = idx
+            return self.download_and_append_fragments(
+                ctx, fragments, info_dict, pack_func=pack_func, finish_func=finish_func,
+                tpe=tpe, interrupt_trigger=interrupt_trigger)
+
+        class FTPE(concurrent.futures.ThreadPoolExecutor):
+            # has to stop this or it's going to wait on the worker thread itself
+            def __exit__(self, exc_type, exc_val, exc_tb):
+                pass
+
+        spins = []
+        if compat_os_name == 'nt':
+            self.report_warning('Ctrl+C does not work on Windows when used with parallel threads. '
+                                'This is a known issue and patches are welcome')
+        for idx, (ctx, fragments, info_dict) in enumerate(args):
+            tpe = FTPE(math.ceil(max_workers / max_progress))
+            job = tpe.submit(thread_func, idx, ctx, fragments, info_dict, tpe)
+            spins.append((tpe, job))
+
+        result = True
+        for tpe, job in spins:
+            try:
+                result = result and job.result()
+            except KeyboardInterrupt:
+                interrupt_trigger[0] = False
+            finally:
+                tpe.shutdown(wait=True)
+        if not interrupt_trigger[0]:
+            raise KeyboardInterrupt()
+        return result
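A stripped-down illustration of the submit-then-join pattern above, including the `__exit__` override that keeps the executor's context manager from blocking on its own worker:

```python
import concurrent.futures

class NonWaitingTPE(concurrent.futures.ThreadPoolExecutor):
    # Mirrors FTPE above: __exit__ must not block, otherwise the pool would
    # deadlock waiting on the very worker thread that is using it.
    def __exit__(self, exc_type, exc_val, exc_tb):
        pass

spins = []
for idx in range(2):
    tpe = NonWaitingTPE(max_workers=2)
    spins.append((tpe, tpe.submit(lambda i=idx: i * i)))

results = []
for tpe, job in spins:
    try:
        results.append(job.result())
    finally:
        tpe.shutdown(wait=True)
assert results == [0, 1]
```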
+    def download_and_append_fragments(
+            self, ctx, fragments, info_dict, *, pack_func=None, finish_func=None,
+            tpe=None, interrupt_trigger=None):
+        if not interrupt_trigger:
+            interrupt_trigger = (True, )
+
         fragment_retries = self.params.get('fragment_retries', 0)
-        is_fatal = (lambda idx: idx == 0) if self.params.get('skip_unavailable_fragments', True) else (lambda _: True)
+        is_fatal = (
+            ((lambda _: False) if info_dict.get('is_live') else (lambda idx: idx == 0))
+            if self.params.get('skip_unavailable_fragments', True) else (lambda _: True))
+
         if not pack_func:
             pack_func = lambda frag_content, _: frag_content

         def download_fragment(fragment, ctx):
             frag_index = ctx['fragment_index'] = fragment['frag_index']
-            headers = info_dict.get('http_headers', {})
+            ctx['last_error'] = None
+            if not interrupt_trigger[0]:
+                return False, frag_index
+            headers = info_dict.get('http_headers', {}).copy()
             byte_range = fragment.get('byte_range')
             if byte_range:
                 headers['Range'] = 'bytes=%d-%d' % (byte_range['start'], byte_range['end'] - 1)
@@ -351,12 +450,13 @@ class FragmentFD(FileDownloader):
                     if not success:
                         return False, frag_index
                     break
-                except compat_urllib_error.HTTPError as err:
+                except (compat_urllib_error.HTTPError, http.client.IncompleteRead) as err:
                     # Unavailable (possibly temporary) fragments may be served.
                     # First we try to retry then either skip or abort.
                     # See https://github.com/ytdl-org/youtube-dl/issues/10165,
                     # https://github.com/ytdl-org/youtube-dl/issues/10448).
                     count += 1
+                    ctx['last_error'] = err
                     if count <= fragment_retries:
                         self.report_retry_fragment(err, frag_index, count, fragment_retries)
                 except DownloadError:
@@ -374,24 +474,10 @@ class FragmentFD(FileDownloader):
                 return False, frag_index
             return frag_content, frag_index

-        def decrypt_fragment(fragment, frag_content):
-            decrypt_info = fragment.get('decrypt_info')
-            if not decrypt_info or decrypt_info['METHOD'] != 'AES-128':
-                return frag_content
-            iv = decrypt_info.get('IV') or compat_struct_pack('>8xq', fragment['media_sequence'])
-            decrypt_info['KEY'] = decrypt_info.get('KEY') or self.ydl.urlopen(
-                self._prepare_url(info_dict, info_dict.get('_decryption_key_url') or decrypt_info['URI'])).read()
-            # Don't decrypt the content in tests since the data is explicitly truncated and it's not to a valid block
-            # size (see https://github.com/ytdl-org/youtube-dl/pull/27660). Tests only care that the correct data downloaded,
-            # not what it decrypts to.
-            if self.params.get('test', False):
-                return frag_content
-            return AES.new(decrypt_info['KEY'], AES.MODE_CBC, iv).decrypt(frag_content)
-
         def append_fragment(frag_content, frag_index, ctx):
             if not frag_content:
                 if not is_fatal(frag_index - 1):
-                    self.report_skip_fragment(frag_index)
+                    self.report_skip_fragment(frag_index, 'fragment not found')
                     return True
                 else:
                     ctx['dest_stream'].close()
@@ -401,7 +487,10 @@ class FragmentFD(FileDownloader):
             self._append_fragment(ctx, pack_func(frag_content, frag_index))
             return True

-        max_workers = self.params.get('concurrent_fragment_downloads', 1)
+        decrypt_fragment = self.decrypter(info_dict)
+
+        max_workers = math.ceil(
+            self.params.get('concurrent_fragment_downloads', 1) / ctx.get('max_progress', 1))
         if can_threaded_download and max_workers > 1:

             def _download_fragment(fragment):
@@ -410,8 +499,10 @@ class FragmentFD(FileDownloader):
                 return fragment, frag_content, frag_index, ctx_copy.get('fragment_filename_sanitized')

             self.report_warning('The download speed shown is only of one thread. This is a known issue and patches are welcome')
-            with concurrent.futures.ThreadPoolExecutor(max_workers) as pool:
+            with tpe or concurrent.futures.ThreadPoolExecutor(max_workers) as pool:
                 for fragment, frag_content, frag_index, frag_filename in pool.map(_download_fragment, fragments):
+                    if not interrupt_trigger[0]:
+                        break
                     ctx['fragment_filename_sanitized'] = frag_filename
                     ctx['fragment_index'] = frag_index
                     result = append_fragment(decrypt_fragment(fragment, frag_content), frag_index, ctx)
@@ -419,6 +510,8 @@ class FragmentFD(FileDownloader):
                         return False
         else:
             for fragment in fragments:
+                if not interrupt_trigger[0]:
+                    break
                 frag_content, frag_index = download_fragment(fragment, ctx)
                 result = append_fragment(decrypt_fragment(fragment, frag_content), frag_index, ctx)
                 if not result:
View file
@@ -5,10 +5,11 @@ import io
 import binascii

 from ..downloader import get_suitable_downloader
-from .fragment import FragmentFD, can_decrypt_frag
+from .fragment import FragmentFD
 from .external import FFmpegFD

 from ..compat import (
+    compat_pycrypto_AES,
     compat_urlparse,
 )
 from ..utils import (
@@ -29,7 +30,7 @@ class HlsFD(FragmentFD):
     FD_NAME = 'hlsnative'

     @staticmethod
-    def can_download(manifest, info_dict, allow_unplayable_formats=False, with_crypto=can_decrypt_frag):
+    def can_download(manifest, info_dict, allow_unplayable_formats=False):
         UNSUPPORTED_FEATURES = [
             # r'#EXT-X-BYTERANGE',  # playlists composed of byte ranges of media files [2]
@@ -56,9 +57,6 @@ class HlsFD(FragmentFD):
         def check_results():
             yield not info_dict.get('is_live')
-            is_aes128_enc = '#EXT-X-KEY:METHOD=AES-128' in manifest
-            yield with_crypto or not is_aes128_enc
-            yield not (is_aes128_enc and r'#EXT-X-BYTERANGE' in manifest)
             for feature in UNSUPPORTED_FEATURES:
                 yield not re.search(feature, manifest)
         return all(check_results())
@@ -71,16 +69,29 @@ class HlsFD(FragmentFD):
         man_url = urlh.geturl()
         s = urlh.read().decode('utf-8', 'ignore')

-        if not self.can_download(s, info_dict, self.params.get('allow_unplayable_formats')):
-            if info_dict.get('extra_param_to_segment_url') or info_dict.get('_decryption_key_url'):
-                self.report_error('pycryptodome not found. Please install')
-                return False
-            if self.can_download(s, info_dict, with_crypto=True):
-                self.report_warning('pycryptodome is needed to download this file natively')
+        can_download, message = self.can_download(s, info_dict, self.params.get('allow_unplayable_formats')), None
+        if can_download and not compat_pycrypto_AES and '#EXT-X-KEY:METHOD=AES-128' in s:
+            if FFmpegFD.available():
+                can_download, message = False, 'The stream has AES-128 encryption and pycryptodomex is not available'
+            else:
+                message = ('The stream has AES-128 encryption and neither ffmpeg nor pycryptodomex are available; '
+                           'Decryption will be performed natively, but will be extremely slow')
+        if not can_download:
+            has_drm = re.search('|'.join([
+                r'#EXT-X-FAXS-CM:',  # Adobe Flash Access
+                r'#EXT-X-(?:SESSION-)?KEY:.*?URI="skd://',  # Apple FairPlay
+            ]), s)
+            if has_drm and not self.params.get('allow_unplayable_formats'):
+                self.report_error(
+                    'This video is DRM protected; Try selecting another format with --format or '
+                    'add --check-formats to automatically fallback to the next best format')
+                return False
+            message = message or 'Unsupported features have been detected'
             fd = FFmpegFD(self.ydl, self.params)
-            self.report_warning(
-                '%s detected unsupported features; extraction will be delegated to %s' % (self.FD_NAME, fd.get_basename()))
+            self.report_warning(f'{message}; extraction will be delegated to {fd.get_basename()}')
             return fd.real_download(filename, info_dict)
+        elif message:
+            self.report_warning(message)

         is_webvtt = info_dict['ext'] == 'vtt'
         if is_webvtt:
@@ -172,6 +183,7 @@ class HlsFD(FragmentFD):
                         'byte_range': byte_range,
                         'media_sequence': media_sequence,
                     })
+                    media_sequence += 1

                 elif line.startswith('#EXT-X-MAP'):
                     if format_index and discontinuity_count != format_index:
@@ -196,6 +208,7 @@ class HlsFD(FragmentFD):
                             'byte_range': byte_range,
                             'media_sequence': media_sequence
                         })
+                        media_sequence += 1

                     if map_info.get('BYTERANGE'):
                         splitted_byte_range = map_info.get('BYTERANGE').split('@')
@@ -235,20 +248,18 @@ class HlsFD(FragmentFD):
                 elif line.startswith('#EXT-X-DISCONTINUITY'):
                     discontinuity_count += 1
                 i += 1
-                media_sequence += 1

         # We only download the first fragment during the test
         if self.params.get('test', False):
             fragments = [fragments[0] if fragments else None]

         if real_downloader:
-            info_copy = info_dict.copy()
-            info_copy['fragments'] = fragments
+            info_dict['fragments'] = fragments
             fd = real_downloader(self.ydl, self.params)
             # TODO: Make progress updates work without hooking twice
             # for ph in self._progress_hooks:
             #     fd.add_progress_hook(ph)
-            return fd.real_download(filename, info_copy)
+            return fd.real_download(filename, info_dict)

         if is_webvtt:
             def pack_fragment(frag_content, frag_index):
View file
@@ -16,7 +16,6 @@ from ..utils import (
     ContentTooShortError,
     encodeFilename,
     int_or_none,
-    sanitize_open,
     sanitized_Request,
     ThrottledDownload,
     write_xattr,
@@ -48,8 +47,9 @@ class HttpFD(FileDownloader):

         is_test = self.params.get('test', False)
         chunk_size = self._TEST_FILE_SIZE if is_test else (
-            info_dict.get('downloader_options', {}).get('http_chunk_size')
-            or self.params.get('http_chunk_size') or 0)
+            self.params.get('http_chunk_size')
+            or info_dict.get('downloader_options', {}).get('http_chunk_size')
+            or 0)

         ctx.open_mode = 'wb'
         ctx.resume_len = 0
@@ -57,6 +57,7 @@ class HttpFD(FileDownloader):
         ctx.block_size = self.params.get('buffersize', 1024)
         ctx.start_time = time.time()
         ctx.chunk_size = None
+        throttle_start = None

         if self.params.get('continuedl', True):
             # Establish possible resume length
@@ -189,13 +190,16 @@ class HttpFD(FileDownloader):
                     # Unexpected HTTP error
                     raise
                 raise RetryDownload(err)
-            except socket.error as err:
-                if err.errno != errno.ECONNRESET:
-                    # Connection reset is no problem, just retry
-                    raise
+            except socket.timeout as err:
                 raise RetryDownload(err)
+            except socket.error as err:
+                if err.errno in (errno.ECONNRESET, errno.ETIMEDOUT):
+                    # Connection reset is no problem, just retry
+                    raise RetryDownload(err)
+                raise

         def download():
+            nonlocal throttle_start
             data_len = ctx.data.info().get('Content-length', None)

             # Range HTTP header may be ignored/unsupported by a webserver
@@ -224,7 +228,6 @@ class HttpFD(FileDownloader):
             # measure time over whole while-loop, so slow_down() and best_block_size() work together properly
             now = None  # needed for slow_down() in the first loop run
             before = start  # start measuring
-            throttle_start = None

             def retry(e):
                 to_stdout = ctx.tmpfilename == '-'
@@ -259,7 +262,7 @@ class HttpFD(FileDownloader):
                 # Open destination file just in time
                 if ctx.stream is None:
                     try:
-                        ctx.stream, ctx.tmpfilename = sanitize_open(
+                        ctx.stream, ctx.tmpfilename = self.sanitize_open(
                             ctx.tmpfilename, ctx.open_mode)
                         assert ctx.stream is not None
                         ctx.filename = self.undo_temp_name(ctx.tmpfilename)
@@ -310,6 +313,7 @@ class HttpFD(FileDownloader):
                 'eta': eta,
                 'speed': speed,
                 'elapsed': now - ctx.start_time,
+                'ctx_id': info_dict.get('ctx_id'),
             }, info_dict)

             if data_len is not None and byte_counter == data_len:
@@ -324,7 +328,7 @@ class HttpFD(FileDownloader):
                     if ctx.stream is not None and ctx.tmpfilename != '-':
                         ctx.stream.close()
                     raise ThrottledDownload()
-            else:
+            elif speed:
                 throttle_start = None

             if not is_test and ctx.chunk_size and ctx.data_len is not None and byte_counter < ctx.data_len:
@@ -357,6 +361,7 @@ class HttpFD(FileDownloader):
             'filename': ctx.filename,
             'status': 'finished',
             'elapsed': time.time() - ctx.start_time,
+            'ctx_id': info_dict.get('ctx_id'),
         }, info_dict)

         return True
@@ -369,6 +374,8 @@ class HttpFD(FileDownloader):
             count += 1
             if count <= retries:
                 self.report_retry(e.source_error, count, retries)
+            else:
+                self.to_screen(f'[download] Got server HTTP error: {e.source_error}')
             continue
         except NextFragment:
             continue
View file
@@ -114,8 +114,8 @@ body > figure > img {
         fragment_base_url = info_dict.get('fragment_base_url')
         fragments = info_dict['fragments'][:1] if self.params.get(
             'test', False) else info_dict['fragments']
-        title = info_dict['title']
-        origin = info_dict['webpage_url']
+        title = info_dict.get('title', info_dict['format_id'])
+        origin = info_dict.get('webpage_url', info_dict['url'])

         ctx = {
             'filename': filename,
View file
@@ -6,7 +6,7 @@ import threading
 from .common import FileDownloader
 from ..downloader import get_suitable_downloader
 from ..extractor.niconico import NiconicoIE
-from ..compat import compat_urllib_request
+from ..utils import sanitized_Request


 class NiconicoDmcFD(FileDownloader):
@@ -29,9 +29,11 @@ class NiconicoDmcFD(FileDownloader):
         heartbeat_data = heartbeat_info_dict['data'].encode()
         heartbeat_interval = heartbeat_info_dict.get('interval', 30)

+        request = sanitized_Request(heartbeat_url, heartbeat_data)
+
         def heartbeat():
             try:
-                compat_urllib_request.urlopen(url=heartbeat_url, data=heartbeat_data)
+                self.ydl.urlopen(request).read()
             except Exception:
                 self.to_screen('[%s] Heartbeat failed' % self.FD_NAME)
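A simplified analogue of the heartbeat loop (the real downloader re-arms a daemon timer in much the same way; names here are illustrative):

```python
import threading
import urllib.request

def start_heartbeat(url, data, interval=30):
    # POST `data` every `interval` seconds on a daemon timer, so the
    # server-side session stays alive for as long as the download runs.
    def beat():
        try:
            urllib.request.urlopen(urllib.request.Request(url, data)).read()
        except Exception:
            pass  # a missed beat is non-fatal; the next one may succeed
        timer = threading.Timer(interval, beat)
        timer.daemon = True
        timer.start()
    beat()
```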
View file
@@ -12,6 +12,7 @@ from ..utils import (
     encodeFilename,
     encodeArgument,
     get_exe_version,
+    Popen,
 )

@@ -26,7 +27,7 @@ class RtmpFD(FileDownloader):
         start = time.time()
         resume_percent = None
         resume_downloaded_data_len = None
-        proc = subprocess.Popen(args, stderr=subprocess.PIPE)
+        proc = Popen(args, stderr=subprocess.PIPE)
         cursor_in_new_line = True
         proc_stderr_closed = False
         try:
View file
@@ -183,7 +183,7 @@ class YoutubeLiveChatFD(FragmentFD):
             request_data['currentPlayerState'] = {'playerOffsetMs': str(max(offset - 5000, 0))}
             if click_tracking_params:
                 request_data['context']['clickTracking'] = {'clickTrackingParams': click_tracking_params}
-            headers = ie.generate_api_headers(ytcfg, visitor_data=visitor_data)
+            headers = ie.generate_api_headers(ytcfg=ytcfg, visitor_data=visitor_data)
             headers.update({'content-type': 'application/json'})
             fragment_request_data = json.dumps(request_data, ensure_ascii=False).encode('utf-8') + b'\n'
             success, continuation_id, offset, click_tracking_params = download_and_parse_fragment(
@@ -1,14 +1,15 @@
-from __future__ import unicode_literals
+import os

 from ..utils import load_plugins

-try:
-    from .lazy_extractors import *
-    from .lazy_extractors import _ALL_CLASSES
-    _LAZY_LOADER = True
-    _PLUGIN_CLASSES = []
-except ImportError:
-    _LAZY_LOADER = False
+_LAZY_LOADER = False
+if not os.environ.get('YTDLP_NO_LAZY_EXTRACTORS'):
+    try:
+        from .lazy_extractors import *
+        from .lazy_extractors import _ALL_CLASSES
+        _LAZY_LOADER = True
+    except ImportError:
+        pass

 if not _LAZY_LOADER:
     from .extractors import *
@@ -19,8 +20,8 @@ if not _LAZY_LOADER:
     ]
     _ALL_CLASSES.append(GenericIE)

 _PLUGIN_CLASSES = load_plugins('extractor', 'IE', globals())
-_ALL_CLASSES = _PLUGIN_CLASSES + _ALL_CLASSES
+_ALL_CLASSES = list(_PLUGIN_CLASSES.values()) + _ALL_CLASSES


 def gen_extractor_classes():
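Two behaviours change here: lazy loading can now be skipped via the `YTDLP_NO_LAZY_EXTRACTORS` environment variable, and `load_plugins` now returns a mapping rather than a list, hence `list(_PLUGIN_CLASSES.values())`. A runnable toy of the merge, with hypothetical registry contents:

```
# Assumed shapes only: load_plugins() now yields {name: class}.
_PLUGIN_CLASSES = {'ExamplePluginIE': type('ExamplePluginIE', (), {})}
_ALL_CLASSES = [type('GenericIE', (), {})]

# Plugin classes are merged in front so they take priority over built-ins.
_ALL_CLASSES = list(_PLUGIN_CLASSES.values()) + _ALL_CLASSES
print([cls.__name__ for cls in _ALL_CLASSES])  # ['ExamplePluginIE', 'GenericIE']
```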
@@ -8,6 +8,7 @@ import time
 from .common import InfoExtractor
 from ..compat import compat_str
 from ..utils import (
+    dict_get,
     ExtractorError,
     js_to_json,
     int_or_none,
@@ -233,8 +234,6 @@ class ABCIViewIE(InfoExtractor):
         }]

         is_live = video_params.get('livestream') == '1'
-        if is_live:
-            title = self._live_title(title)

         return {
             'id': video_id,
@@ -255,3 +254,66 @@ class ABCIViewIE(InfoExtractor):
             'subtitles': subtitles,
             'is_live': is_live,
         }
+
+
+class ABCIViewShowSeriesIE(InfoExtractor):
+    IE_NAME = 'abc.net.au:iview:showseries'
+    _VALID_URL = r'https?://iview\.abc\.net\.au/show/(?P<id>[^/]+)(?:/series/\d+)?$'
+    _GEO_COUNTRIES = ['AU']
+
+    _TESTS = [{
+        'url': 'https://iview.abc.net.au/show/upper-middle-bogan',
+        'info_dict': {
+            'id': '124870-1',
+            'title': 'Series 1',
+            'description': 'md5:93119346c24a7c322d446d8eece430ff',
+            'series': 'Upper Middle Bogan',
+            'season': 'Series 1',
+            'thumbnail': r're:^https?://cdn\.iview\.abc\.net\.au/thumbs/.*\.jpg$'
+        },
+        'playlist_count': 8,
+    }, {
+        'url': 'https://iview.abc.net.au/show/upper-middle-bogan',
+        'info_dict': {
+            'id': 'CO1108V001S00',
+            'ext': 'mp4',
+            'title': 'Series 1 Ep 1 I\'m A Swan',
+            'description': 'md5:7b676758c1de11a30b79b4d301e8da93',
+            'series': 'Upper Middle Bogan',
+            'uploader_id': 'abc1',
+            'upload_date': '20210630',
+            'timestamp': 1625036400,
+        },
+        'params': {
+            'noplaylist': True,
+            'skip_download': 'm3u8',
+        },
+    }]
+
+    def _real_extract(self, url):
+        show_id = self._match_id(url)
+        webpage = self._download_webpage(url, show_id)
+        webpage_data = self._search_regex(
+            r'window\.__INITIAL_STATE__\s*=\s*[\'"](.+?)[\'"]\s*;',
+            webpage, 'initial state')
+        video_data = self._parse_json(
+            unescapeHTML(webpage_data).encode('utf-8').decode('unicode_escape'), show_id)
+        video_data = video_data['route']['pageData']['_embedded']
+
+        if self.get_param('noplaylist') and 'highlightVideo' in video_data:
+            self.to_screen('Downloading just the highlight video because of --no-playlist')
+            return self.url_result(video_data['highlightVideo']['shareUrl'], ie=ABCIViewIE.ie_key())
+
+        self.to_screen(f'Downloading playlist {show_id} - add --no-playlist to just download the highlight video')
+        series = video_data['selectedSeries']
+        return {
+            '_type': 'playlist',
+            'entries': [self.url_result(episode['shareUrl'])
+                        for episode in series['_embedded']['videoEpisodes']],
+            'id': series.get('id'),
+            'title': dict_get(series, ('title', 'displaySubtitle')),
+            'description': series.get('description'),
+            'series': dict_get(series, ('showTitle', 'displayTitle')),
+            'season': dict_get(series, ('title', 'displaySubtitle')),
+            'thumbnail': series.get('thumbnail'),
+        }
@@ -15,6 +15,7 @@ from ..compat import (
     compat_ord,
 )
 from ..utils import (
+    ass_subtitles_timecode,
     bytes_to_intlist,
     bytes_to_long,
     ExtractorError,
@@ -68,10 +69,6 @@ class ADNIE(InfoExtractor):
         'end': 4,
     }

-    @staticmethod
-    def _ass_subtitles_timecode(seconds):
-        return '%01d:%02d:%02d.%02d' % (seconds / 3600, (seconds % 3600) / 60, seconds % 60, (seconds % 1) * 100)
-
     def _get_subtitles(self, sub_url, video_id):
         if not sub_url:
             return None
@@ -117,8 +114,8 @@ Format: Marked,Start,End,Style,Name,MarginL,MarginR,MarginV,Effect,Text'''
                 continue
             alignment = self._POS_ALIGN_MAP.get(position_align, 2) + self._LINE_ALIGN_MAP.get(line_align, 0)
             ssa += os.linesep + 'Dialogue: Marked=0,%s,%s,Default,,0,0,0,,%s%s' % (
-                self._ass_subtitles_timecode(start),
-                self._ass_subtitles_timecode(end),
+                ass_subtitles_timecode(start),
+                ass_subtitles_timecode(end),
                 '{\\a%d}' % alignment if alignment != 2 else '',
                 text.replace('\n', '\\N').replace('<i>', '{\\i1}').replace('</i>', '{\\i0}'))
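The helper removed here now lives in `yt_dlp.utils` as `ass_subtitles_timecode`; the formula (taken verbatim from the deleted method) converts float seconds into the `H:MM:SS.cc` centisecond stamps that the SSA/ASS `Dialogue:` lines above expect:

```
def ass_subtitles_timecode(seconds):
    # hours : minutes : seconds . centiseconds
    return '%01d:%02d:%02d.%02d' % (
        seconds / 3600, (seconds % 3600) / 60, seconds % 60, (seconds % 1) * 100)


assert ass_subtitles_timecode(3661.25) == '1:01:01.25'
```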
@@ -31,7 +31,7 @@ class AdobeConnectIE(InfoExtractor):
         return {
             'id': video_id,
-            'title': self._live_title(title) if is_live else title,
+            'title': title,
             'formats': formats,
             'is_live': is_live,
         }
@@ -37,6 +37,11 @@ MSO_INFO = {
         'username_field': 'email',
         'password_field': 'loginpassword',
     },
+    'RCN': {
+        'name': 'RCN',
+        'username_field': 'username',
+        'password_field': 'password',
+    },
     'Rogers': {
         'name': 'Rogers',
         'username_field': 'UserName',
@@ -9,6 +9,7 @@ from ..utils import (
     float_or_none,
     int_or_none,
     ISO639Utils,
+    join_nonempty,
     OnDemandPagedList,
     parse_duration,
     str_or_none,
@@ -263,7 +264,7 @@ class AdobeTVVideoIE(AdobeTVBaseIE):
                 continue
             formats.append({
                 'filesize': int_or_none(source.get('kilobytes') or None, invscale=1000),
-                'format_id': '-'.join(filter(None, [source.get('format'), source.get('label')])),
+                'format_id': join_nonempty(source.get('format'), source.get('label')),
                 'height': int_or_none(source.get('height') or None),
                 'tbr': int_or_none(source.get('bitrate') or None),
                 'width': int_or_none(source.get('width') or None),
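This is the first of several hunks in this diff (see also animeondemand and anvato below) that replace the `'-'.join(filter(None, [...]))` idiom with the new `join_nonempty` utility. A minimal equivalent, assuming the upstream behaviour of dropping falsy items, stringifying the rest, and taking a `delim` keyword:

```
def join_nonempty(*values, delim='-'):
    # Drop falsy entries, coerce the rest to str, and join.
    return delim.join(str(v) for v in values if v)


assert join_nonempty('http', None).lower() == 'http'
assert join_nonempty('hls', 1080) == 'hls-1080'
assert join_nonempty('sd', 'German', delim=', ') == 'sd, German'
```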
@@ -6,9 +6,11 @@ import re
 from .common import InfoExtractor
 from ..compat import compat_xpath
 from ..utils import (
+    date_from_str,
     determine_ext,
     ExtractorError,
     int_or_none,
+    unified_strdate,
     url_or_none,
     urlencode_postdata,
     xpath_text,
@@ -237,6 +239,7 @@ class AfreecaTVIE(InfoExtractor):
             r'nTitleNo\s*=\s*(\d+)', webpage, 'title', default=video_id)

         partial_view = False
+        adult_view = False
         for _ in range(2):
             query = {
                 'nTitleNo': video_id,
@@ -245,6 +248,8 @@ class AfreecaTVIE(InfoExtractor):
             }
             if partial_view:
                 query['partialView'] = 'SKIP_ADULT'
+            if adult_view:
+                query['adultView'] = 'ADULT_VIEW'
             video_xml = self._download_xml(
                 'http://afbbs.afreecatv.com:8080/api/video/get_video_info.php',
                 video_id, 'Downloading video info XML%s'
@@ -264,6 +269,9 @@ class AfreecaTVIE(InfoExtractor):
                 partial_view = True
                 continue
             elif flag == 'ADULT':
+                if not adult_view:
+                    adult_view = True
+                    continue
                 error = 'Only users older than 19 are able to watch this video. Provide account credentials to download this content.'
             else:
                 error = flag
@@ -309,8 +317,15 @@ class AfreecaTVIE(InfoExtractor):
             if not file_url:
                 continue
             key = file_element.get('key', '')
-            upload_date = self._search_regex(
-                r'^(\d{8})_', key, 'upload date', default=None)
+            upload_date = unified_strdate(self._search_regex(
+                r'^(\d{8})_', key, 'upload date', default=None))
+            if upload_date is not None:
+                # sometimes the upload date isn't included in the file name
+                # instead, another random ID is, which may parse as a valid
+                # date but be wildly out of a reasonable range
+                parsed_date = date_from_str(upload_date)
+                if parsed_date.year < 2000 or parsed_date.year >= 2100:
+                    upload_date = None
             file_duration = int_or_none(file_element.get('duration'))
             format_id = key if key else '%s_%s' % (video_id, file_num)
             if determine_ext(file_url) == 'm3u8':
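The new guard accepts an 8-digit key prefix as an upload date only when it parses into a plausible year. A self-contained version of the same check, using `datetime.strptime` in place of yt-dlp's `date_from_str`/`unified_strdate` (that substitution, and the sample key format, are assumptions for illustration):

```
import datetime
import re


def upload_date_from_key(key):
    # Keys often look like '20211203_SOMEID_123456' (hypothetical example);
    # some instead start with a random ID that still parses as a date,
    # so reject anything outside a sane year range.
    m = re.match(r'(\d{8})_', key)
    if not m:
        return None
    try:
        parsed = datetime.datetime.strptime(m.group(1), '%Y%m%d')
    except ValueError:
        return None
    if parsed.year < 2000 or parsed.year >= 2100:
        return None
    return m.group(1)
```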
@@ -1,55 +1,86 @@
+# coding: utf-8
 from __future__ import unicode_literals

 import json

 from .common import InfoExtractor
+from ..utils import (
+    try_get,
+)


 class AlJazeeraIE(InfoExtractor):
-    _VALID_URL = r'https?://(?:www\.)?aljazeera\.com/(?P<type>program/[^/]+|(?:feature|video)s)/\d{4}/\d{1,2}/\d{1,2}/(?P<id>[^/?&#]+)'
+    _VALID_URL = r'https?://(?P<base>\w+\.aljazeera\.\w+)/(?P<type>programs?/[^/]+|(?:feature|video|new)s)?/\d{4}/\d{1,2}/\d{1,2}/(?P<id>[^/?&#]+)'

     _TESTS = [{
-        'url': 'https://www.aljazeera.com/program/episode/2014/9/19/deliverance',
-        'info_dict': {
-            'id': '3792260579001',
-            'ext': 'mp4',
-            'title': 'The Slum - Episode 1: Deliverance',
-            'description': 'As a birth attendant advocating for family planning, Remy is on the frontline of Tondo\'s battle with overcrowding.',
-            'uploader_id': '665003303001',
-            'timestamp': 1411116829,
-            'upload_date': '20140919',
-        },
-        'add_ie': ['BrightcoveNew'],
-        'skip': 'Not accessible from Travis CI server',
-    }, {
-        'url': 'https://www.aljazeera.com/videos/2017/5/11/sierra-leone-709-carat-diamond-to-be-auctioned-off',
-        'only_matching': True,
-    }, {
-        'url': 'https://www.aljazeera.com/features/2017/8/21/transforming-pakistans-buses-into-art',
-        'only_matching': True,
+        'url': 'https://balkans.aljazeera.net/videos/2021/11/6/pojedini-domovi-u-sarajevu-jos-pod-vodom-mjestanima-se-dostavlja-hrana',
+        'info_dict': {
+            'id': '6280641530001',
+            'ext': 'mp4',
+            'title': 'Pojedini domovi u Sarajevu još pod vodom, mještanima se dostavlja hrana',
+            'timestamp': 1636219149,
+            'description': 'U sarajevskim naseljima Rajlovac i Reljevo stambeni objekti, ali i industrijska postrojenja i dalje su pod vodom.',
+            'upload_date': '20211106',
+        }
+    }, {
+        'url': 'https://balkans.aljazeera.net/videos/2021/11/6/djokovic-usao-u-finale-mastersa-u-parizu',
+        'info_dict': {
+            'id': '6280654936001',
+            'ext': 'mp4',
+            'title': 'Đoković ušao u finale Mastersa u Parizu',
+            'timestamp': 1636221686,
+            'description': 'Novak Đoković je u polufinalu Mastersa u Parizu nakon preokreta pobijedio Poljaka Huberta Hurkacza.',
+            'upload_date': '20211106',
+        },
     }]
-    BRIGHTCOVE_URL_TEMPLATE = 'http://players.brightcove.net/%s/%s_default/index.html?videoId=%s'
+    BRIGHTCOVE_URL_RE = r'https?://players.brightcove.net/(?P<account>\d+)/(?P<player_id>[a-zA-Z0-9]+)_(?P<embed>[^/]+)/index.html\?videoId=(?P<id>\d+)'

     def _real_extract(self, url):
-        post_type, name = self._match_valid_url(url).groups()
+        base, post_type, id = self._match_valid_url(url).groups()
+        wp = {
+            'balkans.aljazeera.net': 'ajb',
+            'chinese.aljazeera.net': 'chinese',
+            'mubasher.aljazeera.net': 'ajm',
+        }.get(base) or 'aje'
         post_type = {
             'features': 'post',
             'program': 'episode',
+            'programs': 'episode',
             'videos': 'video',
+            'news': 'news',
         }[post_type.split('/')[0]]
         video = self._download_json(
-            'https://www.aljazeera.com/graphql', name, query={
+            f'https://{base}/graphql', id, query={
+                'wp-site': wp,
                 'operationName': 'ArchipelagoSingleArticleQuery',
                 'variables': json.dumps({
-                    'name': name,
+                    'name': id,
                     'postType': post_type,
                 }),
             }, headers={
-                'wp-site': 'aje',
-            })['data']['article']['video']
-        video_id = video['id']
-        account_id = video.get('accountId') or '665003303001'
-        player_id = video.get('playerId') or 'BkeSH5BDb'
-        return self.url_result(
-            self.BRIGHTCOVE_URL_TEMPLATE % (account_id, player_id, video_id),
-            'BrightcoveNew', video_id)
+                'wp-site': wp,
+            })
+        video = try_get(video, lambda x: x['data']['article']['video']) or {}
+        video_id = video.get('id')
+        account = video.get('accountId') or '911432371001'
+        player_id = video.get('playerId') or 'csvTfAlKW'
+        embed = 'default'
+
+        if video_id is None:
+            webpage = self._download_webpage(url, id)
+
+            account, player_id, embed, video_id = self._search_regex(self.BRIGHTCOVE_URL_RE, webpage, 'video id',
+                                                                     group=(1, 2, 3, 4), default=(None, None, None, None))
+
+            if video_id is None:
+                return {
+                    '_type': 'url_transparent',
+                    'url': url,
+                    'ie_key': 'Generic'
+                }
+
+        return {
+            '_type': 'url_transparent',
+            'url': f'https://players.brightcove.net/{account}/{player_id}_{embed}/index.html?videoId={video_id}',
+            'ie_key': 'BrightcoveNew'
+        }
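The rewrite keys both the GraphQL endpoint and the `wp-site` value off the requested domain. A sketch of the request shape, reconstructed only from the parameters visible above (plain `urllib` stands in for the extractor's helpers, and the live endpoint may well require more headers than shown):

```
import json
import urllib.parse
import urllib.request


def fetch_article_video(base, name, post_type):
    # 'wp-site' selects the per-domain site build; 'aje' is the default.
    wp = {'balkans.aljazeera.net': 'ajb', 'chinese.aljazeera.net': 'chinese',
          'mubasher.aljazeera.net': 'ajm'}.get(base, 'aje')
    params = urllib.parse.urlencode({
        'wp-site': wp,
        'operationName': 'ArchipelagoSingleArticleQuery',
        'variables': json.dumps({'name': name, 'postType': post_type}),
    })
    req = urllib.request.Request(f'https://{base}/graphql?{params}',
                                 headers={'wp-site': wp})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```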
@@ -0,0 +1,53 @@
+# coding: utf-8
+from .common import InfoExtractor
+from ..utils import int_or_none
+
+
+class AmazonStoreIE(InfoExtractor):
+    _VALID_URL = r'https?://(?:www\.)?amazon\.(?:[a-z]{2,3})(?:\.[a-z]{2})?/(?:[^/]+/)?(?:dp|gp/product)/(?P<id>[^/&#$?]+)'
+
+    _TESTS = [{
+        'url': 'https://www.amazon.co.uk/dp/B098XNCHLD/',
+        'info_dict': {
+            'id': 'B098XNCHLD',
+            'title': 'md5:5f3194dbf75a8dcfc83079bd63a2abed',
+        },
+        'playlist_mincount': 1,
+        'playlist': [{
+            'info_dict': {
+                'id': 'A1F83G8C2ARO7P',
+                'ext': 'mp4',
+                'title': 'mcdodo usb c cable 100W 5a',
+                'thumbnail': r're:^https?://.*\.jpg$',
+            },
+        }]
+    }, {
+        'url': 'https://www.amazon.in/Sony-WH-1000XM4-Cancelling-Headphones-Bluetooth/dp/B0863TXGM3',
+        'info_dict': {
+            'id': 'B0863TXGM3',
+            'title': 'md5:b0bde4881d3cfd40d63af19f7898b8ff',
+        },
+        'playlist_mincount': 4,
+    }, {
+        'url': 'https://www.amazon.com/dp/B0845NXCXF/',
+        'info_dict': {
+            'id': 'B0845NXCXF',
+            'title': 'md5:2145cd4e3c7782f1ee73649a3cff1171',
+        },
+        'playlist-mincount': 1,
+    }]
+
+    def _real_extract(self, url):
+        id = self._match_id(url)
+        webpage = self._download_webpage(url, id)
+        data_json = self._parse_json(self._html_search_regex(r'var\s?obj\s?=\s?jQuery\.parseJSON\(\'(.*)\'\)', webpage, 'data'), id)
+        entries = [{
+            'id': video['marketPlaceID'],
+            'url': video['url'],
+            'title': video.get('title'),
+            'thumbnail': video.get('thumbUrl') or video.get('thumb'),
+            'duration': video.get('durationSeconds'),
+            'height': int_or_none(video.get('videoHeight')),
+            'width': int_or_none(video.get('videoWidth')),
+        } for video in (data_json.get('videos') or []) if video.get('isVideo') and video.get('url')]
+        return self.playlist_result(entries, playlist_id=id, playlist_title=data_json['title'])
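The store page embeds its video metadata as a JSON string passed to `jQuery.parseJSON`, which the new extractor recovers with the regex above. A standalone demonstration of that extraction on a made-up page fragment:

```
import json
import re

# Hypothetical page fragment, shaped like what the extractor targets.
webpage = """<script>var obj = jQuery.parseJSON('{"title":"demo","videos":[]}');</script>"""

data_json = json.loads(re.search(
    r"var\s?obj\s?=\s?jQuery\.parseJSON\('(.*)'\)", webpage).group(1))
print(data_json['title'])  # -> demo
```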
@@ -8,6 +8,7 @@ from ..utils import (
     determine_ext,
     extract_attributes,
     ExtractorError,
+    join_nonempty,
     url_or_none,
     urlencode_postdata,
     urljoin,
@@ -140,15 +141,8 @@ class AnimeOnDemandIE(InfoExtractor):
                 kind = self._search_regex(
                     r'videomaterialurl/\d+/([^/]+)/',
                     playlist_url, 'media kind', default=None)
-                format_id_list = []
-                if lang:
-                    format_id_list.append(lang)
-                if kind:
-                    format_id_list.append(kind)
-                if not format_id_list and num is not None:
-                    format_id_list.append(compat_str(num))
-                format_id = '-'.join(format_id_list)
-                format_note = ', '.join(filter(None, (kind, lang_note)))
+                format_id = join_nonempty(lang, kind) if lang or kind else str(num)
+                format_note = join_nonempty(kind, lang_note, delim=', ')
                 item_id_list = []
                 if format_id:
                     item_id_list.append(format_id)
@@ -195,12 +189,10 @@ class AnimeOnDemandIE(InfoExtractor):
                     if not file_:
                         continue
                     ext = determine_ext(file_)
-                    format_id_list = [lang, kind]
-                    if ext == 'm3u8':
-                        format_id_list.append('hls')
-                    elif source.get('type') == 'video/dash' or ext == 'mpd':
-                        format_id_list.append('dash')
-                    format_id = '-'.join(filter(None, format_id_list))
+                    format_id = join_nonempty(
+                        lang, kind,
+                        'hls' if ext == 'm3u8' else None,
+                        'dash' if source.get('type') == 'video/dash' or ext == 'mpd' else None)
                    if ext == 'm3u8':
                         file_formats = self._extract_m3u8_formats(
                             file_, video_id, 'mp4',
@@ -16,6 +16,7 @@ from ..utils import (
     determine_ext,
     intlist_to_bytes,
     int_or_none,
+    join_nonempty,
     strip_jsonp,
     unescapeHTML,
     unsmuggle_url,
@@ -303,13 +304,13 @@ class AnvatoIE(InfoExtractor):
             tbr = int_or_none(published_url.get('kbps'))
             a_format = {
                 'url': video_url,
-                'format_id': ('-'.join(filter(None, ['http', published_url.get('cdn_name')]))).lower(),
-                'tbr': tbr if tbr != 0 else None,
+                'format_id': join_nonempty('http', published_url.get('cdn_name')).lower(),
+                'tbr': tbr or None,
             }

             if media_format == 'm3u8' and tbr is not None:
                 a_format.update({
-                    'format_id': '-'.join(filter(None, ['hls', compat_str(tbr)])),
+                    'format_id': join_nonempty('hls', tbr),
                     'ext': 'mp4',
                 })
             elif media_format == 'm3u8-variant' or ext == 'm3u8':
@@ -3,33 +3,36 @@ from __future__ import unicode_literals
 import re
 import json

 from .common import InfoExtractor
-from .youtube import YoutubeIE
+from .youtube import YoutubeIE, YoutubeBaseInfoExtractor
 from ..compat import (
     compat_urllib_parse_unquote,
     compat_urllib_parse_unquote_plus,
     compat_HTTPError
 )
 from ..utils import (
+    bug_reports_message,
     clean_html,
-    determine_ext,
     dict_get,
     extract_attributes,
     ExtractorError,
+    get_element_by_id,
     HEADRequest,
     int_or_none,
     KNOWN_EXTENSIONS,
     merge_dicts,
     mimetype2ext,
+    orderedSet,
     parse_duration,
     parse_qs,
-    RegexNotFoundError,
     str_to_int,
     str_or_none,
+    traverse_obj,
     try_get,
     unified_strdate,
     unified_timestamp,
+    urlhandle_detect_ext,
+    url_or_none
 )
@@ -262,12 +265,12 @@ class YoutubeWebArchiveIE(InfoExtractor):
     _VALID_URL = r"""(?x)^
                 (?:https?://)?web\.archive\.org/
                     (?:web/)?
-                    (?:[0-9A-Za-z_*]+/)?  # /web and the version index is optional
+                    (?:(?P<date>[0-9]{14})?[0-9A-Za-z_*]*/)?  # /web and the version index is optional
                     (?:https?(?::|%3[Aa])//)?
                     (?:
-                        (?:\w+\.)?youtube\.com/watch(?:\?|%3[fF])(?:[^\#]+(?:&|%26))?v(?:=|%3[dD])  # Youtube URL
-                        |(wayback-fakeurl\.archive\.org/yt/)  # Or the internal fake url
+                        (?:\w+\.)?youtube\.com(?::(?:80|443))?/watch(?:\.php)?(?:\?|%3[fF])(?:[^\#]+(?:&|%26))?v(?:=|%3[dD])  # Youtube URL
+                        |(?:wayback-fakeurl\.archive\.org/yt/)  # Or the internal fake url
                     )
                     (?P<id>[0-9A-Za-z_-]{11})(?:%26|\#|&|$)
                     """
@@ -278,141 +281,391 @@ class YoutubeWebArchiveIE(InfoExtractor):
         'info_dict': {
             'id': 'aYAGB11YrSs',
             'ext': 'webm',
-            'title': 'Team Fortress 2 - Sandviches!'
+            'title': 'Team Fortress 2 - Sandviches!',
+            'description': 'md5:4984c0f9a07f349fc5d8e82ab7af4eaf',
+            'upload_date': '20110926',
+            'uploader': 'Zeurel',
+            'channel_id': 'UCukCyHaD-bK3in_pKpfH9Eg',
+            'duration': 32,
+            'uploader_id': 'Zeurel',
+            'uploader_url': 'http://www.youtube.com/user/Zeurel'
         }
-    },
-    {
+    }, {
         # Internal link
         'url': 'https://web.archive.org/web/2oe/http://wayback-fakeurl.archive.org/yt/97t7Xj_iBv0',
         'info_dict': {
             'id': '97t7Xj_iBv0',
             'ext': 'mp4',
-            'title': 'How Flexible Machines Could Save The World'
+            'title': 'Why Machines That Bend Are Better',
+            'description': 'md5:00404df2c632d16a674ff8df1ecfbb6c',
+            'upload_date': '20190312',
+            'uploader': 'Veritasium',
+            'channel_id': 'UCHnyfMqiRRG1u-2MsSQLbXA',
+            'duration': 771,
+            'uploader_id': '1veritasium',
+            'uploader_url': 'http://www.youtube.com/user/1veritasium'
         }
-    },
-    {
-        # Video from 2012, webm format itag 45.
+    }, {
+        # Video from 2012, webm format itag 45. Newest capture is deleted video, with an invalid description.
+        # Should use the date in the link. Title ends with '- Youtube'. Capture has description in eow-description
         'url': 'https://web.archive.org/web/20120712231619/http://www.youtube.com/watch?v=AkhihxRKcrs&gl=US&hl=en',
         'info_dict': {
             'id': 'AkhihxRKcrs',
             'ext': 'webm',
-            'title': 'Limited Run: Mondo\'s Modern Classic 1 of 3 (SDCC 2012)'
+            'title': 'Limited Run: Mondo\'s Modern Classic 1 of 3 (SDCC 2012)',
+            'upload_date': '20120712',
+            'duration': 398,
+            'description': 'md5:ff4de6a7980cb65d951c2f6966a4f2f3',
+            'uploader_id': 'machinima',
+            'uploader_url': 'http://www.youtube.com/user/machinima'
         }
-    },
-    {
-        # Old flash-only video. Webpage title starts with "YouTube - ".
+    }, {
+        # FLV video. Video file URL does not provide itag information
         'url': 'https://web.archive.org/web/20081211103536/http://www.youtube.com/watch?v=jNQXAC9IVRw',
         'info_dict': {
             'id': 'jNQXAC9IVRw',
-            'ext': 'unknown_video',
-            'title': 'Me at the zoo'
+            'ext': 'flv',
+            'title': 'Me at the zoo',
+            'upload_date': '20050423',
+            'channel_id': 'UC4QobU6STFB0P71PMvOGN5A',
+            'duration': 19,
+            'description': 'md5:10436b12e07ac43ff8df65287a56efb4',
+            'uploader_id': 'jawed',
+            'uploader_url': 'http://www.youtube.com/user/jawed'
         }
-    },
-    {
-        # Flash video with .flv extension (itag 34). Title has prefix "YouTube -"
-        # Title has some weird unicode characters too.
+    }, {
         'url': 'https://web.archive.org/web/20110712231407/http://www.youtube.com/watch?v=lTx3G6h2xyA',
         'info_dict': {
             'id': 'lTx3G6h2xyA',
             'ext': 'flv',
-            'title': 'Madeon - Pop Culture (live mashup)'
+            'title': 'Madeon - Pop Culture (live mashup)',
+            'upload_date': '20110711',
+            'uploader': 'Madeon',
+            'channel_id': 'UCqMDNf3Pn5L7pcNkuSEeO3w',
+            'duration': 204,
+            'description': 'md5:f7535343b6eda34a314eff8b85444680',
+            'uploader_id': 'itsmadeon',
+            'uploader_url': 'http://www.youtube.com/user/itsmadeon'
        }
-    },
-    {  # Some versions of Youtube have have "YouTube" as page title in html (and later rewritten by js).
-        'url': 'https://web.archive.org/web/http://www.youtube.com/watch?v=kH-G_aIBlFw',
-        'info_dict': {
-            'id': 'kH-G_aIBlFw',
-            'ext': 'mp4',
-            'title': 'kH-G_aIBlFw'
-        },
-        'expected_warnings': [
-            'unable to extract title',
-        ]
-    },
-    {
-        # First capture is a 302 redirect intermediary page.
-        'url': 'https://web.archive.org/web/20050214000000/http://www.youtube.com/watch?v=0altSZ96U4M',
-        'info_dict': {
-            'id': '0altSZ96U4M',
-            'ext': 'mp4',
-            'title': '0altSZ96U4M'
-        },
-        'expected_warnings': [
-            'unable to extract title',
-        ]
-    },
-    {
+    }, {
+        # First capture is of dead video, second is the oldest from CDX response.
+        'url': 'https://web.archive.org/https://www.youtube.com/watch?v=1JYutPM8O6E',
+        'info_dict': {
+            'id': '1JYutPM8O6E',
+            'ext': 'mp4',
+            'title': 'Fake Teen Doctor Strikes AGAIN! - Weekly Weird News',
+            'upload_date': '20160218',
+            'channel_id': 'UCdIaNUarhzLSXGoItz7BHVA',
+            'duration': 1236,
+            'description': 'md5:21032bae736421e89c2edf36d1936947',
+            'uploader_id': 'MachinimaETC',
+            'uploader_url': 'http://www.youtube.com/user/MachinimaETC'
+        }
+    }, {
+        # First capture of dead video, capture date in link links to dead capture.
+        'url': 'https://web.archive.org/web/20180803221945/https://www.youtube.com/watch?v=6FPhZJGvf4E',
+        'info_dict': {
+            'id': '6FPhZJGvf4E',
+            'ext': 'mp4',
+            'title': 'WTF: Video Games Still Launch BROKEN?! - T.U.G.S.',
+            'upload_date': '20160219',
+            'channel_id': 'UCdIaNUarhzLSXGoItz7BHVA',
+            'duration': 798,
+            'description': 'md5:a1dbf12d9a3bd7cb4c5e33b27d77ffe7',
+            'uploader_id': 'MachinimaETC',
+            'uploader_url': 'http://www.youtube.com/user/MachinimaETC'
+        },
+        'expected_warnings': [
+            r'unable to download capture webpage \(it may not be archived\)'
+        ]
+    }, {  # Very old YouTube page, has - YouTube in title.
+        'url': 'http://web.archive.org/web/20070302011044/http://youtube.com/watch?v=-06-KB9XTzg',
+        'info_dict': {
+            'id': '-06-KB9XTzg',
+            'ext': 'flv',
+            'title': 'New Coin Hack!! 100% Safe!!'
+        }
+    }, {
+        'url': 'web.archive.org/https://www.youtube.com/watch?v=dWW7qP423y8',
+        'info_dict': {
+            'id': 'dWW7qP423y8',
+            'ext': 'mp4',
+            'title': 'It\'s Bootleg AirPods Time.',
+            'upload_date': '20211021',
+            'channel_id': 'UC7Jwj9fkrf1adN4fMmTkpug',
+            'channel_url': 'http://www.youtube.com/channel/UC7Jwj9fkrf1adN4fMmTkpug',
+            'duration': 810,
+            'description': 'md5:7b567f898d8237b256f36c1a07d6d7bc',
+            'uploader': 'DankPods',
+            'uploader_id': 'UC7Jwj9fkrf1adN4fMmTkpug',
+            'uploader_url': 'http://www.youtube.com/channel/UC7Jwj9fkrf1adN4fMmTkpug'
+        }
+    }, {
+        # player response contains '};' See: https://github.com/ytdl-org/youtube-dl/issues/27093
+        'url': 'https://web.archive.org/web/20200827003909if_/http://www.youtube.com/watch?v=6Dh-RL__uN4',
+        'info_dict': {
+            'id': '6Dh-RL__uN4',
+            'ext': 'mp4',
+            'title': 'bitch lasagna',
+            'upload_date': '20181005',
+            'channel_id': 'UC-lHJZR3Gqxm24_Vd_AJ5Yw',
+            'channel_url': 'http://www.youtube.com/channel/UC-lHJZR3Gqxm24_Vd_AJ5Yw',
+            'duration': 135,
+            'description': 'md5:2dbe4051feeff2dab5f41f82bb6d11d0',
+            'uploader': 'PewDiePie',
+            'uploader_id': 'PewDiePie',
+            'uploader_url': 'http://www.youtube.com/user/PewDiePie'
+        }
+    }, {
+        'url': 'https://web.archive.org/web/http://www.youtube.com/watch?v=kH-G_aIBlFw',
+        'only_matching': True
+    }, {
+        'url': 'https://web.archive.org/web/20050214000000_if/http://www.youtube.com/watch?v=0altSZ96U4M',
+        'only_matching': True
+    }, {
         # Video not archived, only capture is unavailable video page
         'url': 'https://web.archive.org/web/20210530071008/https://www.youtube.com/watch?v=lHJTf93HL1s&spfreload=10',
-        'only_matching': True,
-    },
-    {  # Encoded url
+        'only_matching': True
+    }, {  # Encoded url
         'url': 'https://web.archive.org/web/20120712231619/http%3A//www.youtube.com/watch%3Fgl%3DUS%26v%3DAkhihxRKcrs%26hl%3Den',
-        'only_matching': True,
-    },
-    {
+        'only_matching': True
+    }, {
         'url': 'https://web.archive.org/web/20120712231619/http%3A//www.youtube.com/watch%3Fv%3DAkhihxRKcrs%26gl%3DUS%26hl%3Den',
-        'only_matching': True,
+        'only_matching': True
+    }, {
+        'url': 'https://web.archive.org/web/20060527081937/http://www.youtube.com:80/watch.php?v=ELTFsLT73fA&amp;search=soccer',
+        'only_matching': True
+    }, {
+        'url': 'https://web.archive.org/http://www.youtube.com:80/watch?v=-05VVye-ffg',
+        'only_matching': True
     }
 ]
+    _YT_INITIAL_DATA_RE = r'(?:(?:(?:window\s*\[\s*["\']ytInitialData["\']\s*\]|ytInitialData)\s*=\s*({.+?})\s*;)|%s)' % YoutubeBaseInfoExtractor._YT_INITIAL_DATA_RE
+    _YT_INITIAL_PLAYER_RESPONSE_RE = r'(?:(?:(?:window\s*\[\s*["\']ytInitialPlayerResponse["\']\s*\]|ytInitialPlayerResponse)\s*=[(\s]*({.+?})[)\s]*;)|%s)' % YoutubeBaseInfoExtractor._YT_INITIAL_PLAYER_RESPONSE_RE
+    _YT_INITIAL_BOUNDARY_RE = r'(?:(?:var\s+meta|</script|\n)|%s)' % YoutubeBaseInfoExtractor._YT_INITIAL_BOUNDARY_RE
+
+    _YT_DEFAULT_THUMB_SERVERS = ['i.ytimg.com']  # thumbnails most likely archived on these servers
+    _YT_ALL_THUMB_SERVERS = orderedSet(
+        _YT_DEFAULT_THUMB_SERVERS + ['img.youtube.com', *[f'{c}{n or ""}.ytimg.com' for c in ('i', 's') for n in (*range(0, 5), 9)]])
+
+    _WAYBACK_BASE_URL = 'https://web.archive.org/web/%sif_/'
+    _OLDEST_CAPTURE_DATE = 20050214000000
+    _NEWEST_CAPTURE_DATE = 20500101000000
+
+    def _call_cdx_api(self, item_id, url, filters: list = None, collapse: list = None, query: dict = None, note='Downloading CDX API JSON'):
+        # CDX docs: https://github.com/internetarchive/wayback/blob/master/wayback-cdx-server/README.md
+        query = {
+            'url': url,
+            'output': 'json',
+            'fl': 'original,mimetype,length,timestamp',
+            'limit': 500,
+            'filter': ['statuscode:200'] + (filters or []),
+            'collapse': collapse or [],
+            **(query or {})
+        }
+        res = self._download_json('https://web.archive.org/cdx/search/cdx', item_id, note, query=query)
+        if isinstance(res, list) and len(res) >= 2:
+            # format response to make it easier to use
+            return list(dict(zip(res[0], v)) for v in res[1:])
+        elif not isinstance(res, list) or len(res) != 0:
+            self.report_warning('Error while parsing CDX API response' + bug_reports_message())
+
+    def _extract_yt_initial_variable(self, webpage, regex, video_id, name):
+        return self._parse_json(self._search_regex(
+            (r'%s\s*%s' % (regex, self._YT_INITIAL_BOUNDARY_RE),
+             regex), webpage, name, default='{}'), video_id, fatal=False)
+
+    def _extract_webpage_title(self, webpage):
+        page_title = self._html_search_regex(
+            r'<title>([^<]*)</title>', webpage, 'title', default='')
+        # YouTube video pages appear to always have either 'YouTube -' as prefix or '- YouTube' as suffix.
+        return self._html_search_regex(
+            r'(?:YouTube\s*-\s*(.*)$)|(?:(.*)\s*-\s*YouTube$)',
+            page_title, 'title', default='')
+
+    def _extract_metadata(self, video_id, webpage):
+        search_meta = ((lambda x: self._html_search_meta(x, webpage, default=None)) if webpage else (lambda x: None))
+        player_response = self._extract_yt_initial_variable(
+            webpage, self._YT_INITIAL_PLAYER_RESPONSE_RE, video_id, 'initial player response') or {}
+        initial_data = self._extract_yt_initial_variable(
+            webpage, self._YT_INITIAL_DATA_RE, video_id, 'initial player response') or {}
+
+        initial_data_video = traverse_obj(
+            initial_data, ('contents', 'twoColumnWatchNextResults', 'results', 'results', 'contents', ..., 'videoPrimaryInfoRenderer'),
+            expected_type=dict, get_all=False, default={})
+
+        video_details = traverse_obj(
+            player_response, 'videoDetails', expected_type=dict, get_all=False, default={})
+
+        microformats = traverse_obj(
+            player_response, ('microformat', 'playerMicroformatRenderer'), expected_type=dict, get_all=False, default={})
+
+        video_title = (
+            video_details.get('title')
+            or YoutubeBaseInfoExtractor._get_text(microformats, 'title')
+            or YoutubeBaseInfoExtractor._get_text(initial_data_video, 'title')
+            or self._extract_webpage_title(webpage)
+            or search_meta(['og:title', 'twitter:title', 'title']))
+
+        channel_id = str_or_none(
+            video_details.get('channelId')
+            or microformats.get('externalChannelId')
+            or search_meta('channelId')
+            or self._search_regex(
+                r'data-channel-external-id=(["\'])(?P<id>(?:(?!\1).)+)\1',  # @b45a9e6
+                webpage, 'channel id', default=None, group='id'))
+        channel_url = f'http://www.youtube.com/channel/{channel_id}' if channel_id else None
+
+        duration = int_or_none(
+            video_details.get('lengthSeconds')
+            or microformats.get('lengthSeconds')
+            or parse_duration(search_meta('duration')))
+        description = (
+            video_details.get('shortDescription')
+            or YoutubeBaseInfoExtractor._get_text(microformats, 'description')
+            or clean_html(get_element_by_id('eow-description', webpage))  # @9e6dd23
+            or search_meta(['description', 'og:description', 'twitter:description']))
+
+        uploader = video_details.get('author')
+
+        # Uploader ID and URL
+        uploader_mobj = re.search(
+            r'<link itemprop="url" href="(?P<uploader_url>https?://www\.youtube\.com/(?:user|channel)/(?P<uploader_id>[^"]+))">',  # @fd05024
+            webpage)
+        if uploader_mobj is not None:
+            uploader_id, uploader_url = uploader_mobj.group('uploader_id'), uploader_mobj.group('uploader_url')
+        else:
+            # @a6211d2
+            uploader_url = url_or_none(microformats.get('ownerProfileUrl'))
+            uploader_id = self._search_regex(
+                r'(?:user|channel)/([^/]+)', uploader_url or '', 'uploader id', default=None)
+
+        upload_date = unified_strdate(
+            dict_get(microformats, ('uploadDate', 'publishDate'))
+            or search_meta(['uploadDate', 'datePublished'])
+            or self._search_regex(
+                [r'(?s)id="eow-date.*?>(.*?)</span>',
+                 r'(?:id="watch-uploader-info".*?>.*?|["\']simpleText["\']\s*:\s*["\'])(?:Published|Uploaded|Streamed live|Started) on (.+?)[<"\']'],  # @7998520
+                webpage, 'upload date', default=None))
+
+        return {
+            'title': video_title,
+            'description': description,
+            'upload_date': upload_date,
+            'uploader': uploader,
+            'channel_id': channel_id,
+            'channel_url': channel_url,
+            'duration': duration,
+            'uploader_url': uploader_url,
+            'uploader_id': uploader_id,
+        }
+
+    def _extract_thumbnails(self, video_id):
+        try_all = 'thumbnails' in self._configuration_arg('check_all')
+        thumbnail_base_urls = ['http://{server}/vi{webp}/{video_id}'.format(
+            webp='_webp' if ext == 'webp' else '', video_id=video_id, server=server)
+            for server in (self._YT_ALL_THUMB_SERVERS if try_all else self._YT_DEFAULT_THUMB_SERVERS) for ext in (('jpg', 'webp') if try_all else ('jpg',))]
+
+        thumbnails = []
+        for url in thumbnail_base_urls:
+            response = self._call_cdx_api(
+                video_id, url, filters=['mimetype:image/(?:webp|jpeg)'],
+                collapse=['urlkey'], query={'matchType': 'prefix'})
+            if not response:
+                continue
+            thumbnails.extend(
+                {
+                    'url': (self._WAYBACK_BASE_URL % (int_or_none(thumbnail_dict.get('timestamp')) or self._OLDEST_CAPTURE_DATE)) + thumbnail_dict.get('original'),
+                    'filesize': int_or_none(thumbnail_dict.get('length')),
+                    'preference': int_or_none(thumbnail_dict.get('length'))
+                } for thumbnail_dict in response)
+            if not try_all:
+                break
+
+        self._remove_duplicate_formats(thumbnails)
+        return thumbnails
+
+    def _get_capture_dates(self, video_id, url_date):
+        capture_dates = []
+        # Note: CDX API will not find watch pages with extra params in the url.
+        response = self._call_cdx_api(
+            video_id, f'https://www.youtube.com/watch?v={video_id}',
+            filters=['mimetype:text/html'], collapse=['timestamp:6', 'digest'], query={'matchType': 'prefix'}) or []
+        all_captures = sorted([int_or_none(r['timestamp']) for r in response if int_or_none(r['timestamp']) is not None])
+
+        # Prefer the new polymer UI captures as we support extracting more metadata from them
+        # WBM captures seem to all switch to this layout ~July 2020
+        modern_captures = list(filter(lambda x: x >= 20200701000000, all_captures))
+        if modern_captures:
+            capture_dates.append(modern_captures[0])
+        capture_dates.append(url_date)
+        if all_captures:
+            capture_dates.append(all_captures[0])
+        if 'captures' in self._configuration_arg('check_all'):
+            capture_dates.extend(modern_captures + all_captures)
+
+        # Fallbacks if any of the above fail
+        capture_dates.extend([self._OLDEST_CAPTURE_DATE, self._NEWEST_CAPTURE_DATE])
+        return orderedSet(capture_dates)

     def _real_extract(self, url):
-        video_id = self._match_id(url)
-        title = video_id  # if we are not able get a title
-
-        def _extract_title(webpage):
-            page_title = self._html_search_regex(
-                r'<title>([^<]*)</title>', webpage, 'title', fatal=False) or ''
-            # YouTube video pages appear to always have either 'YouTube -' as suffix or '- YouTube' as prefix.
-            try:
-                page_title = self._html_search_regex(
-                    r'(?:YouTube\s*-\s*(.*)$)|(?:(.*)\s*-\s*YouTube$)',
-                    page_title, 'title', default='')
-            except RegexNotFoundError:
-                page_title = None
-
-            if not page_title:
-                self.report_warning('unable to extract title', video_id=video_id)
-                return
-            return page_title
-
-        # If the video is no longer available, the oldest capture may be one before it was removed.
-        # Setting the capture date in url to early date seems to redirect to earliest capture.
-        webpage = self._download_webpage(
-            'https://web.archive.org/web/20050214000000/http://www.youtube.com/watch?v=%s' % video_id,
-            video_id=video_id, fatal=False, errnote='unable to download video webpage (probably not archived).')
-        if webpage:
-            title = _extract_title(webpage) or title
-
-        # Use link translator mentioned in https://github.com/ytdl-org/youtube-dl/issues/13655
-        internal_fake_url = 'https://web.archive.org/web/2oe_/http://wayback-fakeurl.archive.org/yt/%s' % video_id
+        url_date, video_id = self._match_valid_url(url).groups()
+
+        urlh = None
         try:
-            video_file_webpage = self._request_webpage(
-                HEADRequest(internal_fake_url), video_id,
-                note='Fetching video file url', expected_status=True)
+            urlh = self._request_webpage(
+                HEADRequest('https://web.archive.org/web/2oe_/http://wayback-fakeurl.archive.org/yt/%s' % video_id),
+                video_id, note='Fetching archived video file url', expected_status=True)
         except ExtractorError as e:
             # HTTP Error 404 is expected if the video is not saved.
             if isinstance(e.cause, compat_HTTPError) and e.cause.code == 404:
-                raise ExtractorError(
-                    'HTTP Error %s. Most likely the video is not archived or issue with web.archive.org.' % e.cause.code,
-                    expected=True)
-            raise
-
-        video_file_url = compat_urllib_parse_unquote(video_file_webpage.url)
-        video_file_url_qs = parse_qs(video_file_url)
-
-        # Attempt to recover any ext & format info from playback url
-        format = {'url': video_file_url}
-        itag = try_get(video_file_url_qs, lambda x: x['itag'][0])
-        if itag and itag in YoutubeIE._formats:  # Naughty access but it works
-            format.update(YoutubeIE._formats[itag])
-            format.update({'format_id': itag})
-        else:
-            mime = try_get(video_file_url_qs, lambda x: x['mime'][0])
-            ext = mimetype2ext(mime) or determine_ext(video_file_url)
-            format.update({'ext': ext})
-        return {
-            'id': video_id,
-            'title': title,
-            'formats': [format],
-            'duration': str_to_int(try_get(video_file_url_qs, lambda x: x['dur'][0]))
-        }
+                self.raise_no_formats(
+                    'The requested video is not archived, indexed, or there is an issue with web.archive.org',
+                    expected=True)
+            else:
+                raise
+
+        capture_dates = self._get_capture_dates(video_id, int_or_none(url_date))
+        self.write_debug('Captures to try: ' + ', '.join(str(i) for i in capture_dates if i is not None))
+        info = {'id': video_id}
+        for capture in capture_dates:
+            if not capture:
+                continue
+            webpage = self._download_webpage(
+                (self._WAYBACK_BASE_URL + 'http://www.youtube.com/watch?v=%s') % (capture, video_id),
+                video_id=video_id, fatal=False, errnote='unable to download capture webpage (it may not be archived)',
+                note='Downloading capture webpage')
+            current_info = self._extract_metadata(video_id, webpage or '')
+            # Try avoid getting deleted video metadata
+            if current_info.get('title'):
+                info = merge_dicts(info, current_info)
+                if 'captures' not in self._configuration_arg('check_all'):
+                    break
+
+        info['thumbnails'] = self._extract_thumbnails(video_id)
+
+        if urlh:
+            url = compat_urllib_parse_unquote(urlh.url)
+            video_file_url_qs = parse_qs(url)
+            # Attempt to recover any ext & format info from playback url & response headers
+            format = {'url': url, 'filesize': int_or_none(urlh.headers.get('x-archive-orig-content-length'))}
+            itag = try_get(video_file_url_qs, lambda x: x['itag'][0])
+            if itag and itag in YoutubeIE._formats:
+                format.update(YoutubeIE._formats[itag])
+                format.update({'format_id': itag})
+            else:
+                mime = try_get(video_file_url_qs, lambda x: x['mime'][0])
+                ext = (mimetype2ext(mime)
+                       or urlhandle_detect_ext(urlh)
+                       or mimetype2ext(urlh.headers.get('x-archive-guessed-content-type')))
+                format.update({'ext': ext})
+            info['formats'] = [format]
+            if not info.get('duration'):
+                info['duration'] = str_to_int(try_get(video_file_url_qs, lambda x: x['dur'][0]))

+        if not info.get('title'):
+            info['title'] = video_id
+        return info
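The heavy lifting in this rewrite is the Wayback CDX API: a JSON query returns a header row followed by capture rows, which `_call_cdx_api` zips into dicts. A minimal standalone version of that call, using the same endpoint and fields as above (plain `urllib` stands in for the extractor's `_download_json`):

```
import json
import urllib.parse
import urllib.request


def cdx_captures(url, limit=5):
    # The first row is the column header; zip it onto each capture row.
    query = urllib.parse.urlencode({
        'url': url, 'output': 'json', 'limit': limit,
        'fl': 'original,mimetype,length,timestamp', 'filter': 'statuscode:200',
    })
    with urllib.request.urlopen('https://web.archive.org/cdx/search/cdx?' + query) as resp:
        rows = json.load(resp)
    return [dict(zip(rows[0], row)) for row in rows[1:]] if len(rows) >= 2 else []
```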
@@ -158,7 +158,7 @@ class ArcPublishingIE(InfoExtractor):
         return {
             'id': uuid,
-            'title': self._live_title(title) if is_live else title,
+            'title': title,
             'thumbnail': try_get(video, lambda x: x['promo_image']['url']),
             'description': try_get(video, lambda x: x['subheadlines']['basic']),
             'formats': formats,
@@ -280,7 +280,7 @@ class ARDMediathekIE(ARDMediathekBaseIE):
         info.update({
             'id': video_id,
-            'title': self._live_title(title) if info.get('is_live') else title,
+            'title': title,
             'description': description,
             'thumbnail': thumbnail,
         })
@@ -388,7 +388,13 @@ class ARDIE(InfoExtractor):
 class ARDBetaMediathekIE(ARDMediathekBaseIE):
-    _VALID_URL = r'https://(?:(?:beta|www)\.)?ardmediathek\.de/(?P<client>[^/]+)/(?P<mode>player|live|video|sendung|sammlung)/(?P<display_id>(?:[^/]+/)*)(?P<video_id>[a-zA-Z0-9]+)'
+    _VALID_URL = r'''(?x)https://
+        (?:(?:beta|www)\.)?ardmediathek\.de/
+        (?:(?P<client>[^/]+)/)?
+        (?:player|live|video|(?P<playlist>sendung|sammlung))/
+        (?:(?P<display_id>[^?#]+)/)?
+        (?P<id>(?(playlist)|Y3JpZDovL)[a-zA-Z0-9]+)'''

     _TESTS = [{
         'url': 'https://www.ardmediathek.de/mdr/video/die-robuste-roswita/Y3JpZDovL21kci5kZS9iZWl0cmFnL2Ntcy84MWMxN2MzZC0wMjkxLTRmMzUtODk4ZS0wYzhlOWQxODE2NGI/',
         'md5': 'a1dc75a39c61601b980648f7c9f9f71d',
@@ -403,6 +409,18 @@ class ARDBetaMediathekIE(ARDMediathekBaseIE):
             'upload_date': '20200805',
             'ext': 'mp4',
         },
+        'skip': 'Error',
+    }, {
+        'url': 'https://www.ardmediathek.de/video/tagesschau-oder-tagesschau-20-00-uhr/das-erste/Y3JpZDovL2Rhc2Vyc3RlLmRlL3RhZ2Vzc2NoYXUvZmM4ZDUxMjgtOTE0ZC00Y2MzLTgzNzAtNDZkNGNiZWJkOTll',
+        'md5': 'f1837e563323b8a642a8ddeff0131f51',
+        'info_dict': {
+            'id': '10049223',
+            'ext': 'mp4',
+            'title': 'tagesschau, 20:00 Uhr',
+            'timestamp': 1636398000,
+            'description': 'md5:39578c7b96c9fe50afdf5674ad985e6b',
+            'upload_date': '20211108',
+        },
     }, {
         'url': 'https://beta.ardmediathek.de/ard/video/Y3JpZDovL2Rhc2Vyc3RlLmRlL3RhdG9ydC9mYmM4NGM1NC0xNzU4LTRmZGYtYWFhZS0wYzcyZTIxNGEyMDE',
         'only_matching': True,
@@ -426,6 +444,12 @@ class ARDBetaMediathekIE(ARDMediathekBaseIE):
         # playlist of type 'sammlung'
         'url': 'https://www.ardmediathek.de/ard/sammlung/team-muenster/5JpTzLSbWUAK8184IOvEir/',
         'only_matching': True,
+    }, {
+        'url': 'https://www.ardmediathek.de/video/coronavirus-update-ndr-info/astrazeneca-kurz-lockdown-und-pims-syndrom-81/ndr/Y3JpZDovL25kci5kZS84NzE0M2FjNi0wMWEwLTQ5ODEtOTE5NS1mOGZhNzdhOTFmOTI/',
+        'only_matching': True,
+    }, {
+        'url': 'https://www.ardmediathek.de/ard/player/Y3JpZDovL3dkci5kZS9CZWl0cmFnLWQ2NDJjYWEzLTMwZWYtNGI4NS1iMTI2LTU1N2UxYTcxOGIzOQ/tatort-duo-koeln-leipzig-ihr-kinderlein-kommet',
+        'only_matching': True,
     }]

     def _ARD_load_playlist_snipped(self, playlist_id, display_id, client, mode, pageNumber):
@@ -525,20 +549,12 @@ class ARDBetaMediathekIE(ARDMediathekBaseIE):
         return self.playlist_result(entries, playlist_title=display_id)

     def _real_extract(self, url):
-        mobj = self._match_valid_url(url)
-        video_id = mobj.group('video_id')
-        display_id = mobj.group('display_id')
-        if display_id:
-            display_id = display_id.rstrip('/')
-        if not display_id:
-            display_id = video_id
+        video_id, display_id, playlist_type, client = self._match_valid_url(url).group(
+            'id', 'display_id', 'playlist', 'client')
+        display_id, client = display_id or video_id, client or 'ard'

-        if mobj.group('mode') in ('sendung', 'sammlung'):
-            # this is a playlist-URL
-            return self._ARD_extract_playlist(
-                url, video_id, display_id,
-                mobj.group('client'),
-                mobj.group('mode'))
+        if playlist_type:
+            return self._ARD_extract_playlist(url, video_id, display_id, client, playlist_type)

         player_page = self._download_json(
             'https://api.ardmediathek.de/public-gateway',
@@ -574,7 +590,7 @@ class ARDBetaMediathekIE(ARDMediathekBaseIE):
                     }
                 }
             }
-        }''' % (mobj.group('client'), video_id),
+        }''' % (client, video_id),
         }).encode(), headers={
             'Content-Type': 'application/json'
         })['data']['playerPage']
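The new `_VALID_URL` uses a regex conditional, `(?(playlist)|Y3JpZDovL)`: when the `playlist` group matched (`sendung`/`sammlung`) the id is unconstrained, otherwise it must start with `Y3JpZDovL` (base64 for `crid://`). A quick check of that behaviour with Python's `re`, on one of the test URLs above:

```
import re

pattern = re.compile(r'''(?x)https://
    (?:(?:beta|www)\.)?ardmediathek\.de/
    (?:(?P<client>[^/]+)/)?
    (?:player|live|video|(?P<playlist>sendung|sammlung))/
    (?:(?P<display_id>[^?#]+)/)?
    (?P<id>(?(playlist)|Y3JpZDovL)[a-zA-Z0-9]+)''')

m = pattern.match('https://www.ardmediathek.de/ard/sammlung/team-muenster/5JpTzLSbWUAK8184IOvEir/')
print(m.group('playlist'), m.group('id'))  # -> sammlung 5JpTzLSbWUAK8184IOvEir
```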
@@ -174,7 +174,7 @@ class ArteTVIE(ArteTVBaseIE):
         return {
             'id': player_info.get('VID') or video_id,
             'title': title,
-            'description': player_info.get('VDE'),
+            'description': player_info.get('VDE') or player_info.get('V7T'),
             'upload_date': unified_strdate(upload_date_str),
             'thumbnail': player_info.get('programImage') or player_info.get('VTU', {}).get('IUR'),
             'formats': formats,
@@ -24,9 +24,6 @@ class AtresPlayerIE(InfoExtractor):
             'description': 'md5:7634cdcb4d50d5381bedf93efb537fbc',
             'duration': 3413,
         },
-        'params': {
-            'format': 'bestvideo',
-        },
         'skip': 'This video is only available for registered users'
     },
     {
@@ -1,75 +1,106 @@
 # coding: utf-8
 from __future__ import unicode_literals

+import datetime
+
 from .common import InfoExtractor
 from ..utils import (
-    determine_ext,
-    int_or_none,
-    unescapeHTML,
+    float_or_none,
+    jwt_encode_hs256,
+    try_get,
 )


 class ATVAtIE(InfoExtractor):
-    _VALID_URL = r'https?://(?:www\.)?atv\.at/(?:[^/]+/){2}(?P<id>[dv]\d+)'
+    _VALID_URL = r'https?://(?:www\.)?atv\.at/tv/(?:[^/]+/){2,3}(?P<id>.*)'
     _TESTS = [{
-        'url': 'http://atv.at/aktuell/di-210317-2005-uhr/v1698449/',
-        'md5': 'c3b6b975fb3150fc628572939df205f2',
+        'url': 'https://www.atv.at/tv/bauer-sucht-frau/staffel-18/bauer-sucht-frau/bauer-sucht-frau-staffel-18-folge-3-die-hofwochen',
+        'md5': '3c3b4aaca9f63e32b35e04a9c2515903',
         'info_dict': {
-            'id': '1698447',
+            'id': 'v-ce9cgn1e70n5-1',
             'ext': 'mp4',
-            'title': 'DI, 21.03.17 | 20:05 Uhr 1/1',
+            'title': 'Bauer sucht Frau - Staffel 18 Folge 3 - Die Hofwochen',
         }
     }, {
-        'url': 'http://atv.at/aktuell/meinrad-knapp/d8416/',
+        'url': 'https://www.atv.at/tv/bauer-sucht-frau/staffel-18/episode-01/bauer-sucht-frau-staffel-18-vorstellungsfolge-1',
         'only_matching': True,
     }]

+    # extracted from bootstrap.js function (search for e.encryption_key and use your browser's debugger)
+    _ACCESS_ID = 'x_atv'
+    _ENCRYPTION_KEY = 'Hohnaekeishoogh2omaeghooquooshia'
+
+    def _extract_video_info(self, url, content, video):
+        clip_id = content.get('splitId', content['id'])
+        formats = []
+        clip_urls = video['urls']
+        for protocol, variant in clip_urls.items():
+            source_url = try_get(variant, lambda x: x['clear']['url'])
+            if not source_url:
+                continue
+            if protocol == 'dash':
+                formats.extend(self._extract_mpd_formats(
+                    source_url, clip_id, mpd_id=protocol, fatal=False))
+            elif protocol == 'hls':
+                formats.extend(self._extract_m3u8_formats(
+                    source_url, clip_id, 'mp4', 'm3u8_native',
+                    m3u8_id=protocol, fatal=False))
+            else:
+                formats.append({
+                    'url': source_url,
+                    'format_id': protocol,
+                })
+        self._sort_formats(formats)
+        return {
+            'id': clip_id,
+            'title': content.get('title'),
+            'duration': float_or_none(content.get('duration')),
+            'series': content.get('tvShowTitle'),
+            'formats': formats,
+        }
+
     def _real_extract(self, url):
-        display_id = self._match_id(url)
-        webpage = self._download_webpage(url, display_id)
-        video_data = self._parse_json(unescapeHTML(self._search_regex(
-            [r'flashPlayerOptions\s*=\s*(["\'])(?P<json>(?:(?!\1).)+)\1',
-             r'class="[^"]*jsb_video/FlashPlayer[^"]*"[^>]+data-jsb="(?P<json>[^"]+)"'],
-            webpage, 'player data', group='json')),
-            display_id)['config']['initial_video']
-
-        video_id = video_data['id']
-        video_title = video_data['title']
-
-        parts = []
-        for part in video_data.get('parts', []):
-            part_id = part['id']
-            part_title = part['title']
-
-            formats = []
-            for source in part.get('sources', []):
-                source_url = source.get('src')
-                if not source_url:
-                    continue
-                ext = determine_ext(source_url)
-                if ext == 'm3u8':
-                    formats.extend(self._extract_m3u8_formats(
-                        source_url, part_id, 'mp4', 'm3u8_native',
-                        m3u8_id='hls', fatal=False))
-                else:
-                    formats.append({
-                        'format_id': source.get('delivery'),
-                        'url': source_url,
-                    })
-            self._sort_formats(formats)
-
-            parts.append({
-                'id': part_id,
-                'title': part_title,
-                'thumbnail': part.get('preview_image_url'),
-                'duration': int_or_none(part.get('duration')),
-                'is_live': part.get('is_livestream'),
-                'formats': formats,
-            })
+        video_id = self._match_id(url)
+        webpage = self._download_webpage(url, video_id)
+        json_data = self._parse_json(
+            self._search_regex(r'<script id="state" type="text/plain">(.*)</script>', webpage, 'json_data'),
+            video_id=video_id)
+
+        video_title = json_data['views']['default']['page']['title']
+        contentResource = json_data['views']['default']['page']['contentResource']
+        content_id = contentResource[0]['id']
+        content_ids = [{'id': id, 'subclip_start': content['start'], 'subclip_end': content['end']}
+                       for id, content in enumerate(contentResource)]
+
+        time_of_request = datetime.datetime.now()
+        not_before = time_of_request - datetime.timedelta(minutes=5)
+        expire = time_of_request + datetime.timedelta(minutes=5)
+        payload = {
+            'content_ids': {
+                content_id: content_ids,
+            },
+            'secure_delivery': True,
+            'iat': int(time_of_request.timestamp()),
+            'nbf': int(not_before.timestamp()),
+            'exp': int(expire.timestamp()),
+        }
+        jwt_token = jwt_encode_hs256(payload, self._ENCRYPTION_KEY, headers={'kid': self._ACCESS_ID})
+        videos = self._download_json(
+            'https://vas-v4.p7s1video.net/4.0/getsources',
+            content_id, 'Downloading videos JSON', query={
+                'token': jwt_token.decode('utf-8')
+            })
+
+        video_id, videos_data = list(videos['data'].items())[0]
+        entries = [
+            self._extract_video_info(url, contentResource[video['id']], video)
+            for video in videos_data]

         return {
             '_type': 'multi_video',
             'id': video_id,
             'title': video_title,
-            'entries': parts,
+            'entries': entries,
         }
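The new flow signs a short-lived JWT with the site's static HS256 key and passes it as the `token` query parameter. A dependency-free stand-in for yt-dlp's `jwt_encode_hs256` helper (the hand-rolled encoder is an assumption for illustration; the payload timing fields and key mirror the extractor above):

```
import base64
import datetime
import hashlib
import hmac
import json


def jwt_encode_hs256(payload, key, headers=None):
    # header.payload, each base64url-encoded, signed with HMAC-SHA256.
    def b64(obj):
        return base64.urlsafe_b64encode(json.dumps(obj).encode()).rstrip(b'=')

    header = {'alg': 'HS256', 'typ': 'JWT', **(headers or {})}
    signing_input = b64(header) + b'.' + b64(payload)
    signature = hmac.new(key.encode(), signing_input, hashlib.sha256).digest()
    return signing_input + b'.' + base64.urlsafe_b64encode(signature).rstrip(b'=')


now = datetime.datetime.now()
token = jwt_encode_hs256({
    'iat': int(now.timestamp()),
    'nbf': int((now - datetime.timedelta(minutes=5)).timestamp()),
    'exp': int((now + datetime.timedelta(minutes=5)).timestamp()),
}, 'Hohnaekeishoogh2omaeghooquooshia', headers={'kid': 'x_atv'})
print(token.decode('utf-8'))
```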
@@ -14,7 +14,7 @@ from ..utils import (


 class AudiomackIE(InfoExtractor):
-    _VALID_URL = r'https?://(?:www\.)?audiomack\.com/song/(?P<id>[\w/-]+)'
+    _VALID_URL = r'https?://(?:www\.)?audiomack\.com/(?:song/|(?=.+/song/))(?P<id>[\w/-]+)'
     IE_NAME = 'audiomack'
     _TESTS = [
         # hosted on audiomack
@@ -39,15 +39,16 @@ class AudiomackIE(InfoExtractor):
                 'title': 'Black Mamba Freestyle [Prod. By Danny Wolf]',
                 'uploader': 'ILOVEMAKONNEN',
                 'upload_date': '20160414',
-            }
+            },
+            'skip': 'Song has been removed from the site',
         },
     ]

     def _real_extract(self, url):
-        # URLs end with [uploader name]/[uploader title]
+        # URLs end with [uploader name]/song/[uploader title]
         # this title is whatever the user types in, and is rarely
         # the proper song title.  Real metadata is in the api response
-        album_url_tag = self._match_id(url)
+        album_url_tag = self._match_id(url).replace('/song/', '/')

         # Request the extended version of the api for extra fields like artist and title
         api_response = self._download_json(
@@ -73,13 +74,13 @@ class AudiomackIE(InfoExtractor):


 class AudiomackAlbumIE(InfoExtractor):
-    _VALID_URL = r'https?://(?:www\.)?audiomack\.com/album/(?P<id>[\w/-]+)'
+    _VALID_URL = r'https?://(?:www\.)?audiomack\.com/(?:album/|(?=.+/album/))(?P<id>[\w/-]+)'
     IE_NAME = 'audiomack:album'
     _TESTS = [
         # Standard album playlist
         {
             'url': 'http://www.audiomack.com/album/flytunezcom/tha-tour-part-2-mixtape',
-            'playlist_count': 15,
+            'playlist_count': 11,
             'info_dict':
             {
                 'id': '812251',
@@ -95,24 +96,27 @@ class AudiomackAlbumIE(InfoExtractor):
             },
             'playlist': [{
                 'info_dict': {
-                    'title': 'PPP (Pistol P Project) - 9. Heaven or Hell (CHIMACA) ft Zuse (prod by DJ FU)',
-                    'id': '837577',
+                    'title': 'PPP (Pistol P Project) - 8. Real (prod by SYK SENSE )',
+                    'id': '837576',
+                    'ext': 'mp3',
+                    'uploader': 'Lil Herb a.k.a. G Herbo',
+                }
+            }, {
+                'info_dict': {
+                    'title': 'PPP (Pistol P Project) - 10. 4 Minutes Of Hell Part 4 (prod by DY OF 808 MAFIA)',
+                    'id': '837580',
                     'ext': 'mp3',
                     'uploader': 'Lil Herb a.k.a. G Herbo',
                 }
             }],
-            'params': {
-                'playliststart': 9,
-                'playlistend': 9,
-            }
         }
     ]

     def _real_extract(self, url):
-        # URLs end with [uploader name]/[uploader title]
+        # URLs end with [uploader name]/album/[uploader title]
         # this title is whatever the user types in, and is rarely
         # the proper song title.  Real metadata is in the api response
-        album_url_tag = self._match_id(url)
+        album_url_tag = self._match_id(url).replace('/album/', '/')
         result = {'_type': 'playlist', 'entries': []}
         # There is no one endpoint for album metadata - instead it is included/repeated in each song's metadata
         # Therefore we don't know how many songs the album has and must infi-loop until failure
@@ -134,7 +138,7 @@ class AudiomackAlbumIE(InfoExtractor):
         # Pull out the album metadata and add to result (if it exists)
         for resultkey, apikey in [('id', 'album_id'), ('title', 'album_title')]:
             if apikey in api_response and resultkey not in result:
-                result[resultkey] = api_response[apikey]
+                result[resultkey] = compat_str(api_response[apikey])
         song_id = url_basename(api_response['url']).rpartition('.')[0]
         result['entries'].append({
             'id': compat_str(api_response.get('id', song_id)),
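The widened `_VALID_URL` accepts both the old `/song/artist/title` layout and the newer `/artist/song/title` one (via the lookahead), and the matched tag is then normalized by dropping the `/song/` segment so the API keeps receiving the `artist/title` shape. A quick check with hypothetical URLs:

```
import re

_VALID_URL = r'https?://(?:www\.)?audiomack\.com/(?:song/|(?=.+/song/))(?P<id>[\w/-]+)'

for url in ('https://www.audiomack.com/song/artist/some-title',
            'https://www.audiomack.com/artist/song/some-title'):
    tag = re.match(_VALID_URL, url).group('id').replace('/song/', '/')
    print(tag)  # -> artist/some-title in both cases
```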
@@ -41,7 +41,7 @@ class AWAANBaseIE(InfoExtractor):
         return {
             'id': video_id,
-            'title': self._live_title(title) if is_live else title,
+            'title': title,
             'description': video_data.get('description_en') or video_data.get('description_ar'),
             'thumbnail': 'http://admin.mangomolo.com/analytics/%s' % img if img else None,
             'duration': int_or_none(video_data.get('duration')),

Some files were not shown because too many files have changed in this diff Show more