Mirror of https://github.com/yt-dlp/yt-dlp (synced 2024-12-26 21:59:08 +01:00)

Merge branch 'yt-dlp:master' into pr/2475

Commit 59b4801285: 32 changed files with 523 additions and 335 deletions
@@ -707,3 +707,7 @@ Sakura286
SamDecrock
stratus-ss
subrat-lima
gitninja1234
jkruse
xiaomac
wesson09
Changelog.md (49 changes)
@@ -4,6 +4,55 @@
# To create a release, dispatch the https://github.com/yt-dlp/yt-dlp/actions/workflows/release.yml workflow on master
-->

### 2024.12.06

#### Core changes
- **cookies**: [Add `--cookies-from-browser` support for MS Store Firefox](https://github.com/yt-dlp/yt-dlp/commit/354cb4026cf2191e1a130ec2a627b95cabfbc60a) ([#11731](https://github.com/yt-dlp/yt-dlp/issues/11731)) by [wesson09](https://github.com/wesson09)

#### Extractor changes
- **bilibili**: [Fix HD formats extraction](https://github.com/yt-dlp/yt-dlp/commit/fca3eb5f8be08d5fab2e18b45b7281a12e566725) ([#11734](https://github.com/yt-dlp/yt-dlp/issues/11734)) by [grqz](https://github.com/grqz)
- **soundcloud**: [Fix formats extraction](https://github.com/yt-dlp/yt-dlp/commit/2feb28028ee48f2185d2d95076e62accb09b9e2e) ([#11742](https://github.com/yt-dlp/yt-dlp/issues/11742)) by [bashonly](https://github.com/bashonly)
- **youtube**
    - [Fix `n` sig extraction for player `3bb1f723`](https://github.com/yt-dlp/yt-dlp/commit/a95ee6d8803fca9157adecf63732ab58bf87fd88) ([#11750](https://github.com/yt-dlp/yt-dlp/issues/11750)) by [bashonly](https://github.com/bashonly) (With fixes in [4bd2655](https://github.com/yt-dlp/yt-dlp/commit/4bd2655398aed450456197a6767639114a24eac2))
    - [Fix signature function extraction](https://github.com/yt-dlp/yt-dlp/commit/4c85ccd1366c88cf93982f8350f58eed17355981) ([#11751](https://github.com/yt-dlp/yt-dlp/issues/11751)) by [bashonly](https://github.com/bashonly)
    - [Player client maintenance](https://github.com/yt-dlp/yt-dlp/commit/2e49c789d3eebc39af8910705d65a98bca0e4c4f) ([#11724](https://github.com/yt-dlp/yt-dlp/issues/11724)) by [bashonly](https://github.com/bashonly)

### 2024.12.03

#### Core changes
- [Add `playlist_webpage_url` field](https://github.com/yt-dlp/yt-dlp/commit/7d6c259a03bc4707a319e5e8c6eff0278707874b) ([#11613](https://github.com/yt-dlp/yt-dlp/issues/11613)) by [seproDev](https://github.com/seproDev)

#### Extractor changes
- [Handle fragmented formats in `_remove_duplicate_formats`](https://github.com/yt-dlp/yt-dlp/commit/e0500cbf796323551bbabe5b8ed8c75a511ba47a) ([#11637](https://github.com/yt-dlp/yt-dlp/issues/11637)) by [Grub4K](https://github.com/Grub4K)
- **bilibili**
    - [Always try to extract HD formats](https://github.com/yt-dlp/yt-dlp/commit/dc1687648077c5bf64863b307ecc5ab7e029bd8d) ([#10559](https://github.com/yt-dlp/yt-dlp/issues/10559)) by [grqz](https://github.com/grqz)
    - [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/239f5f36fe04603bec59c8b975f6a792f10246db) ([#11667](https://github.com/yt-dlp/yt-dlp/issues/11667)) by [grqz](https://github.com/grqz) (With fixes in [f05a1cd](https://github.com/yt-dlp/yt-dlp/commit/f05a1cd1492fc98dc8d80d2081d632a1879913d2) by [bashonly](https://github.com/bashonly), [grqz](https://github.com/grqz))
    - [Fix subtitles and chapters extraction](https://github.com/yt-dlp/yt-dlp/commit/a13a336aa6f906812701abec8101b73b73db8ff7) ([#11708](https://github.com/yt-dlp/yt-dlp/issues/11708)) by [xiaomac](https://github.com/xiaomac)
- **chaturbate**: [Fix support for non-public streams](https://github.com/yt-dlp/yt-dlp/commit/4b5eec0aaa7c02627f27a386591b735b90e681a8) ([#11624](https://github.com/yt-dlp/yt-dlp/issues/11624)) by [jkruse](https://github.com/jkruse)
- **dacast**: [Fix HLS AES formats extraction](https://github.com/yt-dlp/yt-dlp/commit/0a0d80800b9350d1a4c4b18d82cfb77ffbc3c507) ([#11644](https://github.com/yt-dlp/yt-dlp/issues/11644)) by [bashonly](https://github.com/bashonly)
- **dropbox**: [Fix password-protected video extraction](https://github.com/yt-dlp/yt-dlp/commit/00dcde728635633eee969ad4d498b9f233c4a94e) ([#11636](https://github.com/yt-dlp/yt-dlp/issues/11636)) by [bashonly](https://github.com/bashonly)
- **duoplay**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/62cba8a1bedbfc0ddde7267ae57b72bf5f7ea7b1) ([#11588](https://github.com/yt-dlp/yt-dlp/issues/11588)) by [bashonly](https://github.com/bashonly), [glensc](https://github.com/glensc)
- **facebook**: [Support more groups URLs](https://github.com/yt-dlp/yt-dlp/commit/e0f1ae813b36e783e2348ba2a1566e12f5cd8f6e) ([#11576](https://github.com/yt-dlp/yt-dlp/issues/11576)) by [grqz](https://github.com/grqz)
- **instagram**: [Support `share` URLs](https://github.com/yt-dlp/yt-dlp/commit/360aed810ad85db950df586282d256516c98cd2d) ([#11677](https://github.com/yt-dlp/yt-dlp/issues/11677)) by [grqz](https://github.com/grqz)
- **microsoftembed**: [Make format extraction non fatal](https://github.com/yt-dlp/yt-dlp/commit/2bea7936323ca4b6f3b9b1fdd892566223e30efa) ([#11654](https://github.com/yt-dlp/yt-dlp/issues/11654)) by [seproDev](https://github.com/seproDev)
- **mitele**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/cd0f934604587ed793e9177f6a127e5dcf99a7dd) ([#11683](https://github.com/yt-dlp/yt-dlp/issues/11683)) by [DarkZeros](https://github.com/DarkZeros)
- **stripchat**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/16336c51d0848a6868a4fa04e749fa03548b4913) ([#11596](https://github.com/yt-dlp/yt-dlp/issues/11596)) by [gitninja1234](https://github.com/gitninja1234)
- **tiktok**: [Deprioritize animated thumbnails](https://github.com/yt-dlp/yt-dlp/commit/910ecc422930bca14e2abe4986f5f92359e3cea8) ([#11645](https://github.com/yt-dlp/yt-dlp/issues/11645)) by [bashonly](https://github.com/bashonly)
- **vk**: [Fix extractors](https://github.com/yt-dlp/yt-dlp/commit/c038a7b187ba24360f14134842a7a2cf897c33b1) ([#11715](https://github.com/yt-dlp/yt-dlp/issues/11715)) by [bashonly](https://github.com/bashonly)
- **youtube**
    - [Adjust player clients for site changes](https://github.com/yt-dlp/yt-dlp/commit/0d146c1e36f467af30e87b7af651bdee67b73500) ([#11663](https://github.com/yt-dlp/yt-dlp/issues/11663)) by [bashonly](https://github.com/bashonly)
    - tab: [Fix playlists tab extraction](https://github.com/yt-dlp/yt-dlp/commit/fe70f20aedf528fdee332131bc9b6710e54e6f10) ([#11615](https://github.com/yt-dlp/yt-dlp/issues/11615)) by [seproDev](https://github.com/seproDev)

#### Networking changes
- **Request Handler**: websockets: [Support websockets 14.0+](https://github.com/yt-dlp/yt-dlp/commit/c7316373c0a886f65a07a51e50ee147bb3294c85) ([#11616](https://github.com/yt-dlp/yt-dlp/issues/11616)) by [coletdjnz](https://github.com/coletdjnz)

#### Misc. changes
- **cleanup**
    - [Bump ruff to 0.8.x](https://github.com/yt-dlp/yt-dlp/commit/d8fb3490863653182864d2a53522f350d67a9ff8) ([#11608](https://github.com/yt-dlp/yt-dlp/issues/11608)) by [seproDev](https://github.com/seproDev)
    - Miscellaneous
        - [ccf0a6b](https://github.com/yt-dlp/yt-dlp/commit/ccf0a6b86b7f68a75463804fe485ec240b8635f0) by [bashonly](https://github.com/bashonly), [pzhlkj6612](https://github.com/pzhlkj6612)
        - [2b67ac3](https://github.com/yt-dlp/yt-dlp/commit/2b67ac300ac8b44368fb121637d1743cea8c5b6b) by [bashonly](https://github.com/bashonly), [seproDev](https://github.com/seproDev)

### 2024.11.18

#### Important changes
@@ -1761,7 +1761,7 @@ $ yt-dlp --replace-in-metadata "title,uploader" "[ _]" "-"

# EXTRACTOR ARGUMENTS

Some extractors accept additional arguments which can be passed using `--extractor-args KEY:ARGS`. `ARGS` is a `;` (semicolon) separated string of `ARG=VAL1,VAL2`. E.g. `--extractor-args "youtube:player-client=mediaconnect,web;formats=incomplete" --extractor-args "funimation:version=uncut"`
Some extractors accept additional arguments which can be passed using `--extractor-args KEY:ARGS`. `ARGS` is a `;` (semicolon) separated string of `ARG=VAL1,VAL2`. E.g. `--extractor-args "youtube:player-client=tv,mweb;formats=incomplete" --extractor-args "funimation:version=uncut"`

Note: In CLI, `ARG` can use `-` instead of `_`; e.g. `youtube:player-client"` becomes `youtube:player_client"`
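For orientation, the same arguments can also be set through the Python API. Below is a minimal sketch, assuming the `extractor_args` option of `yt_dlp.YoutubeDL` (a mapping of extractor key to `{argument: [values]}`) mirrors the CLI flag documented above; the video URL is only a placeholder:

```python
import yt_dlp

# Rough equivalent of:
#   --extractor-args "youtube:player-client=tv,mweb;formats=incomplete"
#   --extractor-args "funimation:version=uncut"
ydl_opts = {
    'extractor_args': {
        'youtube': {'player_client': ['tv', 'mweb'], 'formats': ['incomplete']},
        'funimation': {'version': ['uncut']},
    },
}

with yt_dlp.YoutubeDL(ydl_opts) as ydl:
    info = ydl.extract_info('https://www.youtube.com/watch?v=BaW_jenozKc', download=False)
    print(info.get('title'))
```

Note that the programmatic form uses `_` in argument names (`player_client`), matching the CLI note above, and every value is a list of strings.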
@@ -1770,7 +1770,7 @@ The following extractors use this feature:
#### youtube
* `lang`: Prefer translated metadata (`title`, `description` etc) of this language code (case-sensitive). By default, the video primary language metadata is preferred, with a fallback to `en` translated. See [youtube.py](https://github.com/yt-dlp/yt-dlp/blob/c26f9b991a0681fd3ea548d535919cec1fbbd430/yt_dlp/extractor/youtube.py#L381-L390) for list of supported content language codes
* `skip`: One or more of `hls`, `dash` or `translated_subs` to skip extraction of the m3u8 manifests, dash manifests and [auto-translated subtitles](https://github.com/yt-dlp/yt-dlp/issues/4090#issuecomment-1158102032) respectively
* `player_client`: Clients to extract video data from. The main clients are `web`, `ios` and `android`, with variants `_music` and `_creator` (e.g. `ios_creator`); and `mweb`, `mediaconnect`, `android_vr`, `web_safari`, `web_embedded`, `tv` and `tv_embedded` with no variants. By default, `ios,mweb` is used, and `web_creator` is added as needed for age-gated videos when account age verification is required. Similarly, the `_music` variants are added for `music.youtube.com` URLs. Some clients, such as `web` and `android`, require a `po_token` for their formats to be downloadable. Some clients, such as the `_creator` variants, will only work with authentication. You can use `all` to use all the clients, and `default` for the default clients. You can prefix a client with `-` to exclude it, e.g. `youtube:player_client=all,-web`
* `player_client`: Clients to extract video data from. The main clients are `web`, `ios` and `android`, with variants `_music` and `_creator` (e.g. `ios_creator`); and `mweb`, `android_vr`, `web_safari`, `web_embedded`, `tv` and `tv_embedded` with no variants. By default, `ios,mweb` is used, or `web_creator,mweb` is used when authenticating with cookies. The `_music` variants are added for `music.youtube.com` URLs. Some clients, such as `web` and `android`, require a `po_token` for their formats to be downloadable. Some clients, such as the `_creator` variants, will only work with authentication. Not all clients support authentication via cookies. You can use `all` to use all the clients, and `default` for the default clients. You can prefix a client with `-` to exclude it, e.g. `youtube:player_client=all,-web`
* `player_skip`: Skip some network requests that are generally needed for robust extraction. One or more of `configs` (skip client configs), `webpage` (skip initial webpage), `js` (skip js player). While these options can help reduce the number of requests needed or avoid some rate-limiting, they could cause some issues. See [#860](https://github.com/yt-dlp/yt-dlp/pull/860) for more details
* `player_params`: YouTube player parameters to use for player requests. Will overwrite any default ones set by yt-dlp.
* `comment_sort`: `top` or `new` (default) - choose comment sorting mode (on YouTube's side)
@@ -1860,7 +1860,7 @@ The following extractors use this feature:
* `cdn`: One or more CDN IDs to use with the API call for stream URLs, e.g. `gcp_cdn`, `gs_cdn_pc_app`, `gs_cdn_mobile_web`, `gs_cdn_pc_web`

#### soundcloud
* `formats`: Formats to request from the API. Requested values should be in the format of `{protocol}_{extension}` (omitting the bitrate), e.g. `hls_opus,http_aac`. The `*` character functions as a wildcard, e.g. `*_mp3`, and can be passed by itself to request all formats. Known protocols include `http`, `hls` and `hls-aes`; known extensions include `aac`, `opus` and `mp3`. Original `download` formats are always extracted. Default is `http_aac,hls_aac,http_opus,hls_opus,http_mp3,hls_mp3`
* `formats`: Formats to request from the API. Requested values should be in the format of `{protocol}_{codec}`, e.g. `hls_opus,http_aac`. The `*` character functions as a wildcard, e.g. `*_mp3`, and can be passed by itself to request all formats. Known protocols include `http`, `hls` and `hls-aes`; known codecs include `aac`, `opus` and `mp3`. Original `download` formats are always extracted. Default is `http_aac,hls_aac,http_opus,hls_opus,http_mp3,hls_mp3`

#### orfon (orf:on)
* `prefer_segments_playlist`: Prefer a playlist of program segments instead of a single complete video when available. If individual segments are desired, use `--concat-playlist never --extractor-args "orfon:prefer_segments_playlist"`
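As a further illustration of the `formats` argument described above, a hedged sketch (same assumptions about `extractor_args` as in the earlier example) that restricts SoundCloud extraction to Opus streams; the track URL is just a sample:

```python
import yt_dlp

# Rough equivalent of: --extractor-args "soundcloud:formats=hls_opus,http_opus"
ydl_opts = {
    'extractor_args': {'soundcloud': {'formats': ['hls_opus', 'http_opus']}},
}

with yt_dlp.YoutubeDL(ydl_opts) as ydl:
    ydl.download(['https://soundcloud.com/forss/flickermood'])
```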
@@ -76,7 +76,7 @@ dev = [
]
static-analysis = [
"autopep8~=2.0",
"ruff~=0.7.0",
"ruff~=0.8.0",
]
test = [
"pytest~=8.1",

@@ -186,6 +186,7 @@ ignore = [
"E501", # line-too-long
"E731", # lambda-assignment
"E741", # ambiguous-variable-name
"UP031", # printf-string-formatting
"UP036", # outdated-version-block
"B006", # mutable-argument-default
"B008", # function-call-in-default-argument

@@ -258,9 +259,6 @@ select = [
"A002", # builtin-argument-shadowing
"C408", # unnecessary-collection-call
]
"yt_dlp/jsinterp.py" = [
"UP031", # printf-string-formatting
]

[tool.ruff.lint.isort]
known-first-party = [
@@ -68,6 +68,11 @@ _SIG_TESTS = [
'2aq0aqSyOoJXtK73m-uME_jv7-pT15gOFC02RFkGMqWpzEICs69VdbwQ0LDp1v7j8xx92efCJlYFYb1sUkkBSPOlPmXgIARw8JQ0qOAOAA',
'AOq0QJ8wRAIgXmPlOPSBkkUs1bYFYlJCfe29xx8j7v1pDL2QwbdV96sCIEzpWqMGkFR20CFOg51Tp-7vj_EMu-m37KtXJoOySqa0',
),
(
'https://www.youtube.com/s/player/3bb1f723/player_ias.vflset/en_US/base.js',
'2aq0aqSyOoJXtK73m-uME_jv7-pT15gOFC02RFkGMqWpzEICs69VdbwQ0LDp1v7j8xx92efCJlYFYb1sUkkBSPOlPmXgIARw8JQ0qOAOAA',
'MyOSJXtKI3m-uME_jv7-pT12gOFC02RFkGoqWpzE0Cs69VdbwQ0LDp1v7j8xx92efCJlYFYb1sUkkBSPOlPmXgIARw8JQ0qOAOAA',
),
]

_NSIG_TESTS = [

@@ -183,6 +188,10 @@ _NSIG_TESTS = [
'https://www.youtube.com/s/player/b12cc44b/player_ias.vflset/en_US/base.js',
'keLa5R2U00sR9SQK', 'N1OGyujjEwMnLw',
),
(
'https://www.youtube.com/s/player/3bb1f723/player_ias.vflset/en_US/base.js',
'gK15nzVyaXE9RsMP3z', 'ZFFWFLPWx9DEgQ',
),
]


@@ -254,8 +263,11 @@ def signature(jscode, sig_input):


def n_sig(jscode, sig_input):
funcname = YoutubeIE(FakeYDL())._extract_n_function_name(jscode)
return JSInterpreter(jscode).call_function(funcname, sig_input)
ie = YoutubeIE(FakeYDL())
funcname = ie._extract_n_function_name(jscode)
jsi = JSInterpreter(jscode)
func = jsi.extract_function_from_code(*ie._fixup_n_function_code(*jsi.extract_function_code(funcname)))
return func([sig_input])


make_sig_test = t_factory(
@@ -1116,7 +1116,7 @@ class YoutubeDL:
def raise_no_formats(self, info, forced=False, *, msg=None):
has_drm = info.get('_has_drm')
ignored, expected = self.params.get('ignore_no_formats_error'), bool(msg)
msg = msg or has_drm and 'This video is DRM protected' or 'No video formats found!'
msg = msg or (has_drm and 'This video is DRM protected') or 'No video formats found!'
if forced or not ignored:
raise ExtractorError(msg, video_id=info['id'], ie=info['extractor'],
expected=has_drm or ignored or expected)

@@ -2196,7 +2196,7 @@ class YoutubeDL:
def _default_format_spec(self, info_dict):
prefer_best = (
self.params['outtmpl']['default'] == '-'
or info_dict.get('is_live') and not self.params.get('live_from_start'))
or (info_dict.get('is_live') and not self.params.get('live_from_start')))

def can_merge():
merger = FFmpegMergerPP(self)

@@ -2365,7 +2365,7 @@ class YoutubeDL:
vexts=[f['ext'] for f in video_fmts],
aexts=[f['ext'] for f in audio_fmts],
preferences=(try_call(lambda: self.params['merge_output_format'].split('/'))
or self.params.get('prefer_free_formats') and ('webm', 'mkv')))
or (self.params.get('prefer_free_formats') and ('webm', 'mkv'))))

filtered = lambda *keys: filter(None, (traverse_obj(fmt, *keys) for fmt in formats_info))

@@ -3541,8 +3541,8 @@ class YoutubeDL:
and info_dict.get('container') == 'm4a_dash',
'writing DASH m4a. Only some players support this container',
FFmpegFixupM4aPP)
ffmpeg_fixup(downloader == 'hlsnative' and not self.params.get('hls_use_mpegts')
or info_dict.get('is_live') and self.params.get('hls_use_mpegts') is None,
ffmpeg_fixup((downloader == 'hlsnative' and not self.params.get('hls_use_mpegts'))
or (info_dict.get('is_live') and self.params.get('hls_use_mpegts') is None),
'Possible MPEG-TS in MP4 container or malformed AAC timestamps',
FFmpegFixupM3u8PP)
ffmpeg_fixup(downloader == 'dashsegments'
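Several of the hunks above only add parentheses inside `or` chains. A small illustrative snippet (not part of the diff) showing why this is safe: Python's `and` already binds more tightly than `or`, so the explicit grouping changes readability, not behaviour:

```python
# `and` binds more tightly than `or`, so both spellings evaluate identically;
# the parentheses only make the intended grouping obvious to readers and linters.
msg = None
has_drm = True

implicit = msg or has_drm and 'This video is DRM protected' or 'No video formats found!'
explicit = msg or (has_drm and 'This video is DRM protected') or 'No video formats found!'
assert implicit == explicit == 'This video is DRM protected'
```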
@@ -1062,7 +1062,7 @@ def _real_main(argv=None):
# If we only have a single process attached, then the executable was double clicked
# When using `pyinstaller` with `--onefile`, two processes get attached
is_onefile = hasattr(sys, '_MEIPASS') and os.path.basename(sys._MEIPASS).startswith('_MEI')
if attached_processes == 1 or is_onefile and attached_processes == 2:
if attached_processes == 1 or (is_onefile and attached_processes == 2):
print(parser._generate_error_message(
'Do not double-click the executable, instead call it from a command line.\n'
'Please read the README for further information on how to use yt-dlp: '

@@ -1109,9 +1109,9 @@ def main(argv=None):
from .extractor import gen_extractors, list_extractors

__all__ = [
'main',
'YoutubeDL',
'parse_options',
'gen_extractors',
'list_extractors',
'main',
'parse_options',
]
@@ -534,19 +534,17 @@ def ghash(subkey, data):
__all__ = [
'aes_cbc_decrypt',
'aes_cbc_decrypt_bytes',
'aes_ctr_decrypt',
'aes_decrypt_text',
'aes_decrypt',
'aes_ecb_decrypt',
'aes_gcm_decrypt_and_verify',
'aes_gcm_decrypt_and_verify_bytes',

'aes_cbc_encrypt',
'aes_cbc_encrypt_bytes',
'aes_ctr_decrypt',
'aes_ctr_encrypt',
'aes_decrypt',
'aes_decrypt_text',
'aes_ecb_decrypt',
'aes_ecb_encrypt',
'aes_encrypt',

'aes_gcm_decrypt_and_verify',
'aes_gcm_decrypt_and_verify_bytes',
'key_expansion',
'pad_block',
'pkcs7_padding',
@@ -195,7 +195,10 @@ def _extract_firefox_cookies(profile, container, logger):

def _firefox_browser_dirs():
if sys.platform in ('cygwin', 'win32'):
yield os.path.expandvars(R'%APPDATA%\Mozilla\Firefox\Profiles')
yield from map(os.path.expandvars, (
R'%APPDATA%\Mozilla\Firefox\Profiles',
R'%LOCALAPPDATA%\Packages\Mozilla.Firefox_n80bbvh6b1yt2\LocalCache\Roaming\Mozilla\Firefox\Profiles',
))

elif sys.platform == 'darwin':
yield os.path.expanduser('~/Library/Application Support/Firefox/Profiles')

@@ -1276,8 +1279,8 @@ class YoutubeDLCookieJar(http.cookiejar.MozillaCookieJar):
def _really_save(self, f, ignore_discard, ignore_expires):
now = time.time()
for cookie in self:
if (not ignore_discard and cookie.discard
or not ignore_expires and cookie.is_expired(now)):
if ((not ignore_discard and cookie.discard)
or (not ignore_expires and cookie.is_expired(now))):
continue
name, value = cookie.name, cookie.value
if value is None:
@@ -119,12 +119,12 @@ class HlsFD(FragmentFD):
self.to_screen(f'[{self.FD_NAME}] Fragment downloads will be delegated to {real_downloader.get_basename()}')

def is_ad_fragment_start(s):
return (s.startswith('#ANVATO-SEGMENT-INFO') and 'type=ad' in s
or s.startswith('#UPLYNK-SEGMENT') and s.endswith(',ad'))
return ((s.startswith('#ANVATO-SEGMENT-INFO') and 'type=ad' in s)
or (s.startswith('#UPLYNK-SEGMENT') and s.endswith(',ad')))

def is_ad_fragment_end(s):
return (s.startswith('#ANVATO-SEGMENT-INFO') and 'type=master' in s
or s.startswith('#UPLYNK-SEGMENT') and s.endswith(',segment'))
return ((s.startswith('#ANVATO-SEGMENT-INFO') and 'type=master' in s)
or (s.startswith('#UPLYNK-SEGMENT') and s.endswith(',segment')))

fragments = []
@@ -123,8 +123,8 @@ class YoutubeLiveChatFD(FragmentFD):
data,
lambda x: x['continuationContents']['liveChatContinuation'], dict) or {}

func = (info_dict['protocol'] == 'youtube_live_chat' and parse_actions_live
or frag_index == 1 and try_refresh_replay_beginning
func = ((info_dict['protocol'] == 'youtube_live_chat' and parse_actions_live)
or (frag_index == 1 and try_refresh_replay_beginning)
or parse_actions_replay)
return (True, *func(live_chat_continuation))
except HTTPError as err:
@@ -232,7 +232,7 @@ Format: Marked,Start,End,Style,Name,MarginL,MarginR,MarginV,Effect,Text'''

error = self._parse_json(e.cause.response.read(), video_id)
message = error.get('message')
if e.cause.code == 403 and error.get('code') == 'player-bad-geolocation-country':
if e.cause.status == 403 and error.get('code') == 'player-bad-geolocation-country':
self.raise_geo_restricted(msg=message)
raise ExtractorError(message)
else:
@ -18,7 +18,6 @@ from ..utils import (
|
|||
InAdvancePagedList,
|
||||
OnDemandPagedList,
|
||||
bool_or_none,
|
||||
clean_html,
|
||||
determine_ext,
|
||||
filter_dict,
|
||||
float_or_none,
|
||||
|
@ -63,7 +62,7 @@ class BilibiliBaseIE(InfoExtractor):
|
|||
'support_formats', lambda _, v: v['quality'] not in parsed_qualities))], delim=', ')
|
||||
if missing_formats:
|
||||
self.to_screen(
|
||||
f'Format(s) {missing_formats} are missing; you have to login or '
|
||||
f'Format(s) {missing_formats} are missing; you have to '
|
||||
f'become a premium member to download them. {self._login_hint()}')
|
||||
|
||||
def extract_formats(self, play_info):
|
||||
|
@ -165,14 +164,18 @@ class BilibiliBaseIE(InfoExtractor):
|
|||
params['w_rid'] = hashlib.md5(f'{query}{self._get_wbi_key(video_id)}'.encode()).hexdigest()
|
||||
return params
|
||||
|
||||
def _download_playinfo(self, bvid, cid, headers=None, qn=None):
|
||||
params = {'bvid': bvid, 'cid': cid, 'fnval': 4048}
|
||||
if qn:
|
||||
params['qn'] = qn
|
||||
def _download_playinfo(self, bvid, cid, headers=None, query=None):
|
||||
params = {'bvid': bvid, 'cid': cid, 'fnval': 4048, **(query or {})}
|
||||
if self.is_logged_in:
|
||||
params.pop('try_look', None)
|
||||
if qn := params.get('qn'):
|
||||
note = f'Downloading video format {qn} for cid {cid}'
|
||||
else:
|
||||
note = f'Downloading video formats for cid {cid}'
|
||||
|
||||
return self._download_json(
|
||||
'https://api.bilibili.com/x/player/wbi/playurl', bvid,
|
||||
query=self._sign_wbi(params, bvid), headers=headers,
|
||||
note=f'Downloading video formats for cid {cid} {qn or ""}')['data']
|
||||
query=self._sign_wbi(params, bvid), headers=headers, note=note)['data']
|
||||
|
||||
def json2srt(self, json_data):
|
||||
srt_data = ''
|
||||
|
@ -191,7 +194,7 @@ class BilibiliBaseIE(InfoExtractor):
|
|||
}
|
||||
|
||||
video_info = self._download_json(
|
||||
'https://api.bilibili.com/x/player/v2', video_id,
|
||||
'https://api.bilibili.com/x/player/wbi/v2', video_id,
|
||||
query={'aid': aid, 'cid': cid} if aid else {'bvid': video_id, 'cid': cid},
|
||||
note=f'Extracting subtitle info {cid}', headers=self._HEADERS)
|
||||
if traverse_obj(video_info, ('data', 'need_login_subtitle')):
|
||||
|
@ -207,7 +210,7 @@ class BilibiliBaseIE(InfoExtractor):
|
|||
|
||||
def _get_chapters(self, aid, cid):
|
||||
chapters = aid and cid and self._download_json(
|
||||
'https://api.bilibili.com/x/player/v2', aid, query={'aid': aid, 'cid': cid},
|
||||
'https://api.bilibili.com/x/player/wbi/v2', aid, query={'aid': aid, 'cid': cid},
|
||||
note='Extracting chapters', fatal=False, headers=self._HEADERS)
|
||||
return traverse_obj(chapters, ('data', 'view_points', ..., {
|
||||
'title': 'content',
|
||||
|
@ -286,7 +289,7 @@ class BilibiliBaseIE(InfoExtractor):
|
|||
('data', 'interaction', 'graph_version', {int_or_none}))
|
||||
cid_edges = self._get_divisions(video_id, graph_version, {1: {'cid': cid}}, 1)
|
||||
for cid, edges in cid_edges.items():
|
||||
play_info = self._download_playinfo(video_id, cid, headers=headers)
|
||||
play_info = self._download_playinfo(video_id, cid, headers=headers, query={'try_look': 1})
|
||||
yield {
|
||||
**metainfo,
|
||||
'id': f'{video_id}_{cid}',
|
||||
|
@ -639,40 +642,29 @@ class BiliBiliIE(BilibiliBaseIE):
|
|||
headers['Referer'] = url
|
||||
|
||||
initial_state = self._search_json(r'window\.__INITIAL_STATE__\s*=', webpage, 'initial state', video_id)
|
||||
|
||||
if traverse_obj(initial_state, ('error', 'trueCode')) == -403:
|
||||
self.raise_login_required()
|
||||
if traverse_obj(initial_state, ('error', 'trueCode')) == -404:
|
||||
raise ExtractorError(
|
||||
'This video may be deleted or geo-restricted. '
|
||||
'You might want to try a VPN or a proxy server (with --proxy)', expected=True)
|
||||
|
||||
is_festival = 'videoData' not in initial_state
|
||||
if is_festival:
|
||||
video_data = initial_state['videoInfo']
|
||||
else:
|
||||
play_info_obj = self._search_json(
|
||||
r'window\.__playinfo__\s*=', webpage, 'play info', video_id, fatal=False)
|
||||
if not play_info_obj:
|
||||
if traverse_obj(initial_state, ('error', 'trueCode')) == -403:
|
||||
self.raise_login_required()
|
||||
if traverse_obj(initial_state, ('error', 'trueCode')) == -404:
|
||||
raise ExtractorError(
|
||||
'This video may be deleted or geo-restricted. '
|
||||
'You might want to try a VPN or a proxy server (with --proxy)', expected=True)
|
||||
play_info = traverse_obj(play_info_obj, ('data', {dict}))
|
||||
if not play_info:
|
||||
if traverse_obj(play_info_obj, 'code') == 87007:
|
||||
toast = get_element_by_class('tips-toast', webpage) or ''
|
||||
msg = clean_html(
|
||||
f'{get_element_by_class("belongs-to", toast) or ""},'
|
||||
+ (get_element_by_class('level', toast) or ''))
|
||||
raise ExtractorError(
|
||||
f'This is a supporter-only video: {msg}. {self._login_hint()}', expected=True)
|
||||
raise ExtractorError('Failed to extract play info')
|
||||
video_data = initial_state['videoData']
|
||||
|
||||
video_id, title = video_data['bvid'], video_data.get('title')
|
||||
|
||||
# Bilibili anthologies are similar to playlists but all videos share the same video ID as the anthology itself.
|
||||
page_list_json = not is_festival and traverse_obj(
|
||||
page_list_json = (not is_festival and traverse_obj(
|
||||
self._download_json(
|
||||
'https://api.bilibili.com/x/player/pagelist', video_id,
|
||||
fatal=False, query={'bvid': video_id, 'jsonp': 'jsonp'},
|
||||
note='Extracting videos in anthology', headers=headers),
|
||||
'data', expected_type=list) or []
|
||||
'data', expected_type=list)) or []
|
||||
is_anthology = len(page_list_json) > 1
|
||||
|
||||
part_id = int_or_none(parse_qs(url).get('p', [None])[-1])
|
||||
|
@ -691,8 +683,6 @@ class BiliBiliIE(BilibiliBaseIE):
|
|||
|
||||
festival_info = {}
|
||||
if is_festival:
|
||||
play_info = self._download_playinfo(video_id, cid, headers=headers)
|
||||
|
||||
festival_info = traverse_obj(initial_state, {
|
||||
'uploader': ('videoInfo', 'upName'),
|
||||
'uploader_id': ('videoInfo', 'upMid', {str_or_none}),
|
||||
|
@ -727,62 +717,79 @@ class BiliBiliIE(BilibiliBaseIE):
|
|||
self._get_interactive_entries(video_id, cid, metainfo, headers=headers), **metainfo,
|
||||
duration=traverse_obj(initial_state, ('videoData', 'duration', {int_or_none})),
|
||||
__post_extractor=self.extract_comments(aid))
|
||||
else:
|
||||
formats = self.extract_formats(play_info)
|
||||
|
||||
if not traverse_obj(play_info, ('dash')):
|
||||
# we only have legacy formats and need additional work
|
||||
has_qn = lambda x: x in traverse_obj(formats, (..., 'quality'))
|
||||
for qn in traverse_obj(play_info, ('accept_quality', lambda _, v: not has_qn(v), {int})):
|
||||
formats.extend(traverse_obj(
|
||||
self.extract_formats(self._download_playinfo(video_id, cid, headers=headers, qn=qn)),
|
||||
lambda _, v: not has_qn(v['quality'])))
|
||||
self._check_missing_formats(play_info, formats)
|
||||
flv_formats = traverse_obj(formats, lambda _, v: v['fragments'])
|
||||
if flv_formats and len(flv_formats) < len(formats):
|
||||
# Flv and mp4 are incompatible due to `multi_video` workaround, so drop one
|
||||
if not self._configuration_arg('prefer_multi_flv'):
|
||||
dropped_fmts = ', '.join(
|
||||
f'{f.get("format_note")} ({f.get("format_id")})' for f in flv_formats)
|
||||
formats = traverse_obj(formats, lambda _, v: not v.get('fragments'))
|
||||
if dropped_fmts:
|
||||
self.to_screen(
|
||||
f'Dropping incompatible flv format(s) {dropped_fmts} since mp4 is available. '
|
||||
'To extract flv, pass --extractor-args "bilibili:prefer_multi_flv"')
|
||||
else:
|
||||
formats = traverse_obj(
|
||||
# XXX: Filtering by extractor-arg is for testing purposes
|
||||
formats, lambda _, v: v['quality'] == int(self._configuration_arg('prefer_multi_flv')[0]),
|
||||
) or [max(flv_formats, key=lambda x: x['quality'])]
|
||||
play_info = None
|
||||
if self.is_logged_in:
|
||||
play_info = traverse_obj(
|
||||
self._search_json(r'window\.__playinfo__\s*=', webpage, 'play info', video_id, default=None),
|
||||
('data', {dict}))
|
||||
if not play_info:
|
||||
play_info = self._download_playinfo(video_id, cid, headers=headers, query={'try_look': 1})
|
||||
formats = self.extract_formats(play_info)
|
||||
|
||||
if traverse_obj(formats, (0, 'fragments')):
|
||||
# We have flv formats, which are individual short videos with their own timestamps and metainfo
|
||||
# Binary concatenation corrupts their timestamps, so we need a `multi_video` workaround
|
||||
return {
|
||||
**metainfo,
|
||||
'_type': 'multi_video',
|
||||
'entries': [{
|
||||
'id': f'{metainfo["id"]}_{idx}',
|
||||
'title': metainfo['title'],
|
||||
'http_headers': metainfo['http_headers'],
|
||||
'formats': [{
|
||||
**fragment,
|
||||
'format_id': formats[0].get('format_id'),
|
||||
}],
|
||||
'subtitles': self.extract_subtitles(video_id, cid) if idx == 0 else None,
|
||||
'__post_extractor': self.extract_comments(aid) if idx == 0 else None,
|
||||
} for idx, fragment in enumerate(formats[0]['fragments'])],
|
||||
'duration': float_or_none(play_info.get('timelength'), scale=1000),
|
||||
}
|
||||
else:
|
||||
return {
|
||||
**metainfo,
|
||||
'formats': formats,
|
||||
'duration': float_or_none(play_info.get('timelength'), scale=1000),
|
||||
'chapters': self._get_chapters(aid, cid),
|
||||
'subtitles': self.extract_subtitles(video_id, cid),
|
||||
'__post_extractor': self.extract_comments(aid),
|
||||
}
|
||||
if video_data.get('is_upower_exclusive'):
|
||||
high_level = traverse_obj(initial_state, ('elecFullInfo', 'show_info', 'high_level', {dict})) or {}
|
||||
msg = f'{join_nonempty("title", "sub_title", from_dict=high_level, delim=",")}. {self._login_hint()}'
|
||||
if not formats:
|
||||
raise ExtractorError(f'This is a supporter-only video: {msg}', expected=True)
|
||||
if '试看' in traverse_obj(play_info, ('accept_description', ..., {str})):
|
||||
self.report_warning(
|
||||
f'This is a supporter-only video, only the preview will be extracted: {msg}',
|
||||
video_id=video_id)
|
||||
|
||||
if not traverse_obj(play_info, 'dash'):
|
||||
# we only have legacy formats and need additional work
|
||||
has_qn = lambda x: x in traverse_obj(formats, (..., 'quality'))
|
||||
for qn in traverse_obj(play_info, ('accept_quality', lambda _, v: not has_qn(v), {int})):
|
||||
formats.extend(traverse_obj(
|
||||
self.extract_formats(self._download_playinfo(video_id, cid, headers=headers, query={'qn': qn})),
|
||||
lambda _, v: not has_qn(v['quality'])))
|
||||
self._check_missing_formats(play_info, formats)
|
||||
flv_formats = traverse_obj(formats, lambda _, v: v['fragments'])
|
||||
if flv_formats and len(flv_formats) < len(formats):
|
||||
# Flv and mp4 are incompatible due to `multi_video` workaround, so drop one
|
||||
if not self._configuration_arg('prefer_multi_flv'):
|
||||
dropped_fmts = ', '.join(
|
||||
f'{f.get("format_note")} ({f.get("format_id")})' for f in flv_formats)
|
||||
formats = traverse_obj(formats, lambda _, v: not v.get('fragments'))
|
||||
if dropped_fmts:
|
||||
self.to_screen(
|
||||
f'Dropping incompatible flv format(s) {dropped_fmts} since mp4 is available. '
|
||||
'To extract flv, pass --extractor-args "bilibili:prefer_multi_flv"')
|
||||
else:
|
||||
formats = traverse_obj(
|
||||
# XXX: Filtering by extractor-arg is for testing purposes
|
||||
formats, lambda _, v: v['quality'] == int(self._configuration_arg('prefer_multi_flv')[0]),
|
||||
) or [max(flv_formats, key=lambda x: x['quality'])]
|
||||
|
||||
if traverse_obj(formats, (0, 'fragments')):
|
||||
# We have flv formats, which are individual short videos with their own timestamps and metainfo
|
||||
# Binary concatenation corrupts their timestamps, so we need a `multi_video` workaround
|
||||
return {
|
||||
**metainfo,
|
||||
'_type': 'multi_video',
|
||||
'entries': [{
|
||||
'id': f'{metainfo["id"]}_{idx}',
|
||||
'title': metainfo['title'],
|
||||
'http_headers': metainfo['http_headers'],
|
||||
'formats': [{
|
||||
**fragment,
|
||||
'format_id': formats[0].get('format_id'),
|
||||
}],
|
||||
'subtitles': self.extract_subtitles(video_id, cid) if idx == 0 else None,
|
||||
'__post_extractor': self.extract_comments(aid) if idx == 0 else None,
|
||||
} for idx, fragment in enumerate(formats[0]['fragments'])],
|
||||
'duration': float_or_none(play_info.get('timelength'), scale=1000),
|
||||
}
|
||||
|
||||
return {
|
||||
**metainfo,
|
||||
'formats': formats,
|
||||
'duration': float_or_none(play_info.get('timelength'), scale=1000),
|
||||
'chapters': self._get_chapters(aid, cid),
|
||||
'subtitles': self.extract_subtitles(video_id, cid),
|
||||
'__post_extractor': self.extract_comments(aid),
|
||||
}
|
||||
|
||||
|
||||
class BiliBiliBangumiIE(BilibiliBaseIE):
|
||||
|
@ -860,10 +867,16 @@ class BiliBiliBangumiIE(BilibiliBaseIE):
|
|||
self.raise_login_required('This video is for premium members only')
|
||||
|
||||
headers['Referer'] = url
|
||||
play_info = self._download_json(
|
||||
'https://api.bilibili.com/pgc/player/web/v2/playurl', episode_id,
|
||||
'Extracting episode', query={'fnval': '4048', 'ep_id': episode_id},
|
||||
headers=headers)
|
||||
|
||||
play_info = (
|
||||
self._search_json(
|
||||
r'playurlSSRData\s*=', webpage, 'embedded page info', episode_id,
|
||||
end_pattern='\n', default=None)
|
||||
or self._download_json(
|
||||
'https://api.bilibili.com/pgc/player/web/v2/playurl', episode_id,
|
||||
'Extracting episode', query={'fnval': 12240, 'ep_id': episode_id},
|
||||
headers=headers))
|
||||
|
||||
premium_only = play_info.get('code') == -10403
|
||||
play_info = traverse_obj(play_info, ('result', 'video_info', {dict})) or {}
|
||||
|
||||
|
|
|
@@ -1854,12 +1854,26 @@ class InfoExtractor:

@staticmethod
def _remove_duplicate_formats(formats):
format_urls = set()
seen_urls = set()
seen_fragment_urls = set()
unique_formats = []
for f in formats:
if f['url'] not in format_urls:
format_urls.add(f['url'])
fragments = f.get('fragments')
if callable(fragments):
unique_formats.append(f)

elif fragments:
fragment_urls = frozenset(
fragment.get('url') or urljoin(f['fragment_base_url'], fragment['path'])
for fragment in fragments)
if fragment_urls not in seen_fragment_urls:
seen_fragment_urls.add(fragment_urls)
unique_formats.append(f)

elif f['url'] not in seen_urls:
seen_urls.add(f['url'])
unique_formats.append(f)

formats[:] = unique_formats

def _is_valid_url(self, url, video_id, item='video', headers={}):
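To make the new behaviour above concrete, here is a self-contained sketch (illustrative only, not the extractor code path) of how fragmented formats are now deduplicated by the frozenset of their fragment URLs, so that two formats pointing at the same segments collapse into one:

```python
from urllib.parse import urljoin  # stdlib urljoin stands in for yt-dlp's helper here


def remove_duplicate_formats(formats):
    # Mirrors the logic added above: plain formats are deduped by URL,
    # fragmented formats by the frozenset of their fragment URLs.
    seen_urls = set()
    seen_fragment_urls = set()
    unique_formats = []
    for f in formats:
        fragments = f.get('fragments')
        if callable(fragments):
            unique_formats.append(f)  # lazily generated fragments cannot be compared
        elif fragments:
            fragment_urls = frozenset(
                fragment.get('url') or urljoin(f['fragment_base_url'], fragment['path'])
                for fragment in fragments)
            if fragment_urls not in seen_fragment_urls:
                seen_fragment_urls.add(fragment_urls)
                unique_formats.append(f)
        elif f['url'] not in seen_urls:
            seen_urls.add(f['url'])
            unique_formats.append(f)
    formats[:] = unique_formats


formats = [
    {'format_id': 'a', 'url': 'https://example.com/a.m3u8', 'fragment_base_url': 'https://example.com/',
     'fragments': [{'path': 'seg1.ts'}, {'path': 'seg2.ts'}]},
    {'format_id': 'b', 'url': 'https://example.com/b.m3u8', 'fragment_base_url': 'https://example.com/',
     'fragments': [{'url': 'https://example.com/seg1.ts'}, {'url': 'https://example.com/seg2.ts'}]},
]
remove_duplicate_formats(formats)
assert [f['format_id'] for f in formats] == ['a']  # identical fragment URLs -> only one format kept
```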
@@ -3789,7 +3803,7 @@ class InfoExtractor:
def mark_watched(self, *args, **kwargs):
if not self.get_param('mark_watched', False):
return
if self.supports_login() and self._get_login_info()[0] is not None or self._cookies_passed:
if (self.supports_login() and self._get_login_info()[0] is not None) or self._cookies_passed:
self._mark_watched(*args, **kwargs)

def _mark_watched(self, *args, **kwargs):
@@ -1,7 +1,4 @@
import time

from .common import InfoExtractor
from ..networking import HEADRequest
from ..utils import int_or_none


@@ -31,9 +28,6 @@ class CultureUnpluggedIE(InfoExtractor):
video_id = mobj.group('id')
display_id = mobj.group('display_id') or video_id

# request setClientTimezone.php to get PHPSESSID cookie which is need to get valid json data in the next request
self._request_webpage(HEADRequest(
'http://www.cultureunplugged.com/setClientTimezone.php?timeOffset=%d' % -(time.timezone / 3600)), display_id)
movie_data = self._download_json(
f'http://www.cultureunplugged.com/movie-data/cu-{video_id}.json', display_id)
@ -1,3 +1,4 @@
|
|||
import functools
|
||||
import hashlib
|
||||
import re
|
||||
import time
|
||||
|
@ -51,6 +52,15 @@ class DacastVODIE(DacastBaseIE):
|
|||
'thumbnail': 'https://universe-files.dacast.com/26137208-5858-65c1-5e9a-9d6b6bd2b6c2',
|
||||
},
|
||||
'params': {'skip_download': 'm3u8'},
|
||||
}, { # /uspaes/ in hls_url
|
||||
'url': 'https://iframe.dacast.com/vod/f9823fc6-faba-b98f-0d00-4a7b50a58c5b/348c5c84-b6af-4859-bb9d-1d01009c795b',
|
||||
'info_dict': {
|
||||
'id': '348c5c84-b6af-4859-bb9d-1d01009c795b',
|
||||
'ext': 'mp4',
|
||||
'title': 'pl1-edyta-rubas-211124.mp4',
|
||||
'uploader_id': 'f9823fc6-faba-b98f-0d00-4a7b50a58c5b',
|
||||
'thumbnail': 'https://universe-files.dacast.com/4d0bd042-a536-752d-fc34-ad2fa44bbcbb.png',
|
||||
},
|
||||
}]
|
||||
_WEBPAGE_TESTS = [{
|
||||
'url': 'https://www.dacast.com/support/knowledgebase/how-can-i-embed-a-video-on-my-website/',
|
||||
|
@ -74,6 +84,15 @@ class DacastVODIE(DacastBaseIE):
|
|||
'params': {'skip_download': 'm3u8'},
|
||||
}]
|
||||
|
||||
@functools.cached_property
|
||||
def _usp_signing_secret(self):
|
||||
player_js = self._download_webpage(
|
||||
'https://player.dacast.com/js/player.js', None, 'Downloading player JS')
|
||||
# Rotates every so often, but hardcode a fallback in case of JS change/breakage before rotation
|
||||
return self._search_regex(
|
||||
r'\bUSP_SIGNING_SECRET\s*=\s*(["\'])(?P<secret>(?:(?!\1).)+)', player_js,
|
||||
'usp signing secret', group='secret', fatal=False) or 'odnInCGqhvtyRTtIiddxtuRtawYYICZP'
|
||||
|
||||
def _real_extract(self, url):
|
||||
user_id, video_id = self._match_valid_url(url).group('user_id', 'id')
|
||||
query = {'contentId': f'{user_id}-vod-{video_id}', 'provider': 'universe'}
|
||||
|
@ -94,10 +113,10 @@ class DacastVODIE(DacastBaseIE):
|
|||
if 'DRM_EXT' in hls_url:
|
||||
self.report_drm(video_id)
|
||||
elif '/uspaes/' in hls_url:
|
||||
# From https://player.dacast.com/js/player.js
|
||||
# Ref: https://player.dacast.com/js/player.js
|
||||
ts = int(time.time())
|
||||
signature = hashlib.sha1(
|
||||
f'{10413792000 - ts}{ts}YfaKtquEEpDeusCKbvYszIEZnWmBcSvw').digest().hex()
|
||||
f'{10413792000 - ts}{ts}{self._usp_signing_secret}'.encode()).digest().hex()
|
||||
hls_aes['uri'] = f'https://keys.dacast.com/uspaes/{video_id}.key?s={signature}&ts={ts}'
|
||||
|
||||
for retry in self.RetryManager():
|
||||
|
|
|
@ -48,32 +48,30 @@ class DropboxIE(InfoExtractor):
|
|||
webpage = self._download_webpage(url, video_id)
|
||||
fn = urllib.parse.unquote(url_basename(url))
|
||||
title = os.path.splitext(fn)[0]
|
||||
password = self.get_param('videopassword')
|
||||
content_id = None
|
||||
|
||||
for part in self._yield_decoded_parts(webpage):
|
||||
if '/sm/password' in part:
|
||||
webpage = self._download_webpage(
|
||||
update_url('https://www.dropbox.com/sm/password', query=part.partition('?')[2]), video_id)
|
||||
content_id = self._search_regex(r'content_id=([\w.+=/-]+)', part, 'content ID')
|
||||
break
|
||||
|
||||
if (self._og_search_title(webpage, default=None) == 'Dropbox - Password Required'
|
||||
or 'Enter the password for this link' in webpage):
|
||||
if password:
|
||||
response = self._download_json(
|
||||
'https://www.dropbox.com/sm/auth', video_id, 'POSTing video password',
|
||||
headers={'content-type': 'application/x-www-form-urlencoded; charset=UTF-8'},
|
||||
data=urlencode_postdata({
|
||||
'is_xhr': 'true',
|
||||
't': self._get_cookies('https://www.dropbox.com')['t'].value,
|
||||
'content_id': self._search_regex(r'content_id=([\w.+=/-]+)["\']', webpage, 'content id'),
|
||||
'password': password,
|
||||
'url': url,
|
||||
}))
|
||||
|
||||
if response.get('status') != 'authed':
|
||||
raise ExtractorError('Invalid password', expected=True)
|
||||
elif not self._get_cookies('https://dropbox.com').get('sm_auth'):
|
||||
if content_id:
|
||||
password = self.get_param('videopassword')
|
||||
if not password:
|
||||
raise ExtractorError('Password protected video, use --video-password <password>', expected=True)
|
||||
|
||||
response = self._download_json(
|
||||
'https://www.dropbox.com/sm/auth', video_id, 'POSTing video password',
|
||||
data=urlencode_postdata({
|
||||
'is_xhr': 'true',
|
||||
't': self._get_cookies('https://www.dropbox.com')['t'].value,
|
||||
'content_id': content_id,
|
||||
'password': password,
|
||||
'url': update_url(url, scheme='', netloc=''),
|
||||
}))
|
||||
if response.get('status') != 'authed':
|
||||
raise ExtractorError('Invalid password', expected=True)
|
||||
|
||||
webpage = self._download_webpage(url, video_id)
|
||||
|
||||
formats, subtitles = [], {}
|
||||
|
|
|
@ -5,15 +5,16 @@ from ..utils import (
|
|||
get_element_text_and_html_by_tag,
|
||||
int_or_none,
|
||||
join_nonempty,
|
||||
parse_qs,
|
||||
str_or_none,
|
||||
try_call,
|
||||
unified_timestamp,
|
||||
)
|
||||
from ..utils.traversal import traverse_obj
|
||||
from ..utils.traversal import traverse_obj, value
|
||||
|
||||
|
||||
class DuoplayIE(InfoExtractor):
|
||||
_VALID_URL = r'https?://duoplay\.ee/(?P<id>\d+)/[\w-]+/?(?:\?(?:[^#]+&)?ep=(?P<ep>\d+))?'
|
||||
_VALID_URL = r'https?://duoplay\.ee/(?P<id>\d+)(?:[/?#]|$)'
|
||||
_TESTS = [{
|
||||
'note': 'Siberi võmm S02E12',
|
||||
'url': 'https://duoplay.ee/4312/siberi-vomm?ep=24',
|
||||
|
@ -34,15 +35,16 @@ class DuoplayIE(InfoExtractor):
|
|||
'episode_number': 12,
|
||||
'episode_id': '24',
|
||||
},
|
||||
'skip': 'No video found',
|
||||
}, {
|
||||
'note': 'Empty title',
|
||||
'url': 'https://duoplay.ee/17/uhikarotid?ep=14',
|
||||
'md5': '6aca68be71112314738dd17cced7f8bf',
|
||||
'md5': 'cba9f5dabf2582b224d80ac44fb80e47',
|
||||
'info_dict': {
|
||||
'id': '17_14',
|
||||
'ext': 'mp4',
|
||||
'title': 'Ühikarotid',
|
||||
'thumbnail': r're:https://.+\.jpg(?:\?c=\d+)?$',
|
||||
'title': 'Episode 14',
|
||||
'thumbnail': r're:https?://.+\.jpg',
|
||||
'description': 'md5:4719b418e058c209def41d48b601276e',
|
||||
'upload_date': '20100916',
|
||||
'timestamp': 1284661800,
|
||||
|
@ -52,6 +54,8 @@ class DuoplayIE(InfoExtractor):
|
|||
'season_number': 2,
|
||||
'episode_id': '14',
|
||||
'release_year': 2010,
|
||||
'episode': 'Episode 14',
|
||||
'episode_number': 14,
|
||||
},
|
||||
}, {
|
||||
'note': 'Movie without expiry',
|
||||
|
@ -68,10 +72,32 @@ class DuoplayIE(InfoExtractor):
|
|||
'timestamp': 1671054000,
|
||||
'release_year': 2018,
|
||||
},
|
||||
'skip': 'No video found',
|
||||
}, {
|
||||
'note': 'Episode url without show name',
|
||||
'url': 'https://duoplay.ee/9644?ep=185',
|
||||
'md5': '63f324b4fe2dbd8194dca16a6d52184a',
|
||||
'info_dict': {
|
||||
'id': '9644_185',
|
||||
'ext': 'mp4',
|
||||
'title': 'Episode 185',
|
||||
'thumbnail': r're:https?://.+\.jpg',
|
||||
'description': 'md5:ed25ba4e9e5d54bc291a4a0cdd241467',
|
||||
'upload_date': '20241120',
|
||||
'timestamp': 1732077000,
|
||||
'episode': 'Episode 63',
|
||||
'episode_id': '185',
|
||||
'episode_number': 63,
|
||||
'season': 'Season 2',
|
||||
'season_number': 2,
|
||||
'series': 'Telehommik',
|
||||
'series_id': '9644',
|
||||
},
|
||||
}]
|
||||
|
||||
def _real_extract(self, url):
|
||||
telecast_id, episode = self._match_valid_url(url).group('id', 'ep')
|
||||
telecast_id = self._match_id(url)
|
||||
episode = traverse_obj(parse_qs(url), ('ep', 0, {int_or_none}, {str_or_none}))
|
||||
video_id = join_nonempty(telecast_id, episode, delim='_')
|
||||
webpage = self._download_webpage(url, video_id)
|
||||
video_player = try_call(lambda: extract_attributes(
|
||||
|
@ -79,25 +105,33 @@ class DuoplayIE(InfoExtractor):
|
|||
if not video_player or not video_player.get('manifest-url'):
|
||||
raise ExtractorError('No video found', expected=True)
|
||||
|
||||
manifest_url = video_player['manifest-url']
|
||||
session_token = self._download_json(
|
||||
'https://sts.postimees.ee/session/register', video_id, 'Registering session',
|
||||
'Unable to register session', headers={
|
||||
'Accept': 'application/json',
|
||||
'X-Original-URI': manifest_url,
|
||||
})['session']
|
||||
|
||||
episode_attr = self._parse_json(video_player.get(':episode') or '', video_id, fatal=False) or {}
|
||||
|
||||
return {
|
||||
'id': video_id,
|
||||
'formats': self._extract_m3u8_formats(video_player['manifest-url'], video_id, 'mp4'),
|
||||
'formats': self._extract_m3u8_formats(manifest_url, video_id, 'mp4', query={'s': session_token}),
|
||||
**traverse_obj(episode_attr, {
|
||||
'title': 'title',
|
||||
'description': 'synopsis',
|
||||
'title': ('title', {str}),
|
||||
'description': ('synopsis', {str}),
|
||||
'thumbnail': ('images', 'original'),
|
||||
'timestamp': ('airtime', {lambda x: unified_timestamp(x + ' +0200')}),
|
||||
'cast': ('cast', {lambda x: x.split(', ')}),
|
||||
'cast': ('cast', filter, {lambda x: x.split(', ')}),
|
||||
'release_year': ('year', {int_or_none}),
|
||||
}),
|
||||
**(traverse_obj(episode_attr, {
|
||||
'title': (None, ('subtitle', ('episode_nr', {lambda x: f'Episode {x}' if x else None}))),
|
||||
'series': 'title',
|
||||
'title': (None, (('subtitle', {str}, filter), {value(f'Episode {episode}' if episode else None)})),
|
||||
'series': ('title', {str}),
|
||||
'series_id': ('telecast_id', {str_or_none}),
|
||||
'season_number': ('season_id', {int_or_none}),
|
||||
'episode': 'subtitle',
|
||||
'episode': ('subtitle', {str}, filter),
|
||||
'episode_number': ('episode_nr', {int_or_none}),
|
||||
'episode_id': ('episode_id', {str_or_none}),
|
||||
}, get_all=False) if episode_attr.get('category') != 'movies' else {}),
|
||||
|
|
|
@@ -193,9 +193,9 @@ class FunimationIE(FunimationBaseIE):

for lang, version, fmt in self._get_experiences(episode):
experience_id = str(fmt['experienceId'])
if (only_initial_experience and experience_id != initial_experience_id
or requested_languages and lang.lower() not in requested_languages
or requested_versions and version.lower() not in requested_versions):
if ((only_initial_experience and experience_id != initial_experience_id)
or (requested_languages and lang.lower() not in requested_languages)
or (requested_versions and version.lower() not in requested_versions)):
continue
thumbnails.append({'url': fmt.get('poster')})
duration = max(duration, fmt.get('duration', 0))
@@ -254,7 +254,7 @@ class InstagramIOSIE(InfoExtractor):


class InstagramIE(InstagramBaseIE):
_VALID_URL = r'(?P<url>https?://(?:www\.)?instagram\.com(?:/[^/]+)?/(?:p|tv|reels?(?!/audio/))/(?P<id>[^/?#&]+))'
_VALID_URL = r'(?P<url>https?://(?:www\.)?instagram\.com(?:/(?!share/)[^/?#]+)?/(?:p|tv|reels?(?!/audio/))/(?P<id>[^/?#&]+))'
_EMBED_REGEX = [r'<iframe[^>]+src=(["\'])(?P<url>(?:https?:)?//(?:www\.)?instagram\.com/p/[^/]+/embed.*?)\1']
_TESTS = [{
'url': 'https://instagram.com/p/aye83DjauH/?foo=bar#abc',
@@ -26,6 +26,7 @@ class MicrosoftEmbedIE(InfoExtractor):
'timestamp': 1631658316,
'upload_date': '20210914',
},
'expected_warnings': ['Failed to parse XML: syntax error: line 1, column 0'],
}]
_API_URL = 'https://prod-video-cms-rt-microsoft-com.akamaized.net/vhs/api/videos/'

@@ -36,11 +37,11 @@ class MicrosoftEmbedIE(InfoExtractor):
formats = []
for source_type, source in metadata['streams'].items():
if source_type == 'smooth_Streaming':
formats.extend(self._extract_ism_formats(source['url'], video_id, 'mss'))
formats.extend(self._extract_ism_formats(source['url'], video_id, 'mss', fatal=False))
elif source_type == 'apple_HTTP_Live_Streaming':
formats.extend(self._extract_m3u8_formats(source['url'], video_id, 'mp4'))
formats.extend(self._extract_m3u8_formats(source['url'], video_id, 'mp4', fatal=False))
elif source_type == 'mPEG_DASH':
formats.extend(self._extract_mpd_formats(source['url'], video_id))
formats.extend(self._extract_mpd_formats(source['url'], video_id, fatal=False))
else:
formats.append({
'format_id': source_type,
@@ -80,9 +80,9 @@ class MiTeleIE(TelecincoBaseIE):
def _real_extract(self, url):
display_id = self._match_id(url)
webpage = self._download_webpage(url, display_id)
pre_player = self._parse_json(self._search_regex(
r'window\.\$REACTBASE_STATE\.prePlayer_mtweb\s*=\s*({.+})',
webpage, 'Pre Player'), display_id)['prePlayer']
pre_player = self._search_json(
r'window\.\$REACTBASE_STATE\.prePlayer_mtweb\s*=',
webpage, 'Pre Player', display_id)['prePlayer']
title = pre_player['title']
video_info = self._parse_content(pre_player['video'], url)
content = pre_player.get('content') or {}
@@ -1,4 +1,5 @@
from .common import InfoExtractor
from ..networking.exceptions import HTTPError
from ..utils import (
ExtractorError,
traverse_obj,

@@ -110,8 +111,8 @@ class PixivSketchUserIE(PixivSketchBaseIE):
if not traverse_obj(data, 'is_broadcasting'):
try:
self._call_api(user_id, 'users/current.json', url, 'Investigating reason for request failure')
except ExtractorError as ex:
if ex.cause and ex.cause.code == 401:
except ExtractorError as e:
if isinstance(e.cause, HTTPError) and e.cause.status == 401:
self.raise_login_required(f'Please log in, or use direct link like https://sketch.pixiv.net/@{user_id}/1234567890', method='cookies')
raise ExtractorError('This user is offline', expected=True)
@ -7,7 +7,6 @@ from .common import InfoExtractor, SearchInfoExtractor
|
|||
from ..networking import HEADRequest
|
||||
from ..networking.exceptions import HTTPError
|
||||
from ..utils import (
|
||||
KNOWN_EXTENSIONS,
|
||||
ExtractorError,
|
||||
float_or_none,
|
||||
int_or_none,
|
||||
|
@ -251,50 +250,15 @@ class SoundcloudBaseIE(InfoExtractor):
|
|||
def invalid_url(url):
|
||||
return not url or url in format_urls
|
||||
|
||||
def add_format(f, protocol, is_preview=False):
|
||||
mobj = re.search(r'\.(?P<abr>\d+)\.(?P<ext>[0-9a-z]{3,4})(?=[/?])', stream_url)
|
||||
if mobj:
|
||||
for k, v in mobj.groupdict().items():
|
||||
if not f.get(k):
|
||||
f[k] = v
|
||||
format_id_list = []
|
||||
if protocol:
|
||||
format_id_list.append(protocol)
|
||||
ext = f.get('ext')
|
||||
if ext == 'aac':
|
||||
f.update({
|
||||
'abr': 256,
|
||||
'quality': 5,
|
||||
'format_note': 'Premium',
|
||||
})
|
||||
for k in ('ext', 'abr'):
|
||||
v = str_or_none(f.get(k))
|
||||
if v:
|
||||
format_id_list.append(v)
|
||||
preview = is_preview or re.search(r'/(?:preview|playlist)/0/30/', f['url'])
|
||||
if preview:
|
||||
format_id_list.append('preview')
|
||||
abr = f.get('abr')
|
||||
if abr:
|
||||
f['abr'] = int(abr)
|
||||
if protocol in ('hls', 'hls-aes'):
|
||||
protocol = 'm3u8' if ext == 'aac' else 'm3u8_native'
|
||||
else:
|
||||
protocol = 'http'
|
||||
f.update({
|
||||
'format_id': '_'.join(format_id_list),
'protocol': protocol,
'preference': -10 if preview else None,
})
formats.append(f)

# New API
for t in traverse_obj(info, ('media', 'transcodings', lambda _, v: url_or_none(v['url']))):
for t in traverse_obj(info, ('media', 'transcodings', lambda _, v: url_or_none(v['url']) and v['preset'])):
if extract_flat:
break
format_url = t['url']
preset = t['preset']
preset_base = preset.partition('_')[0]

protocol = traverse_obj(t, ('format', 'protocol', {str}))
protocol = traverse_obj(t, ('format', 'protocol', {str})) or 'http'
if protocol == 'progressive':
protocol = 'http'
if protocol != 'hls' and '/hls' in format_url:

@@ -302,32 +266,54 @@ class SoundcloudBaseIE(InfoExtractor):
if protocol == 'encrypted-hls' or '/encrypted-hls' in format_url:
protocol = 'hls-aes'

ext = None
if preset := traverse_obj(t, ('preset', {str_or_none})):
ext = preset.split('_')[0]
if ext not in KNOWN_EXTENSIONS:
ext = mimetype2ext(traverse_obj(t, ('format', 'mime_type', {str})))

identifier = join_nonempty(protocol, ext, delim='_')
if not self._is_requested(identifier):
self.write_debug(f'"{identifier}" is not a requested format, skipping')
short_identifier = f'{protocol}_{preset_base}'
if preset_base == 'abr':
self.write_debug(f'Skipping broken "{short_identifier}" format')
continue
if not self._is_requested(short_identifier):
self.write_debug(f'"{short_identifier}" is not a requested format, skipping')
continue

# XXX: if not extract_flat, 429 error must be caught where _extract_info_dict is called
stream_url = traverse_obj(self._call_api(
format_url, track_id, f'Downloading {identifier} format info JSON',
format_url, track_id, f'Downloading {short_identifier} format info JSON',
query=query, headers=self._HEADERS), ('url', {url_or_none}))

if invalid_url(stream_url):
continue
format_urls.add(stream_url)
add_format({

mime_type = traverse_obj(t, ('format', 'mime_type', {str}))
codec = self._search_regex(r'codecs="([^"]+)"', mime_type, 'codec', default=None)
ext = {
'mp4a': 'm4a',
'opus': 'opus',
}.get(codec[:4] if codec else None) or mimetype2ext(mime_type, default=None)
if not ext or ext == 'm3u8':
ext = preset_base

is_premium = t.get('quality') == 'hq'
abr = int_or_none(
self._search_regex(r'(\d+)k$', preset, 'abr', default=None)
or self._search_regex(r'\.(\d+)\.(?:opus|mp3)[/?]', stream_url, 'abr', default=None)
or (256 if (is_premium and 'aac' in preset) else None))

is_preview = (t.get('snipped')
or '/preview/' in format_url
or re.search(r'/(?:preview|playlist)/0/30/', stream_url))

formats.append({
'format_id': join_nonempty(protocol, preset, is_preview and 'preview', delim='_'),
'url': stream_url,
'ext': ext,
}, protocol, t.get('snipped') or '/preview/' in format_url)

for f in formats:
f['vcodec'] = 'none'
'acodec': codec,
'vcodec': 'none',
'abr': abr,
'protocol': 'm3u8_native' if protocol in ('hls', 'hls-aes') else 'http',
'container': 'm4a_dash' if ext == 'm4a' else None,
'quality': 5 if is_premium else 0 if (abr and abr >= 160) else -1,
'format_note': 'Premium' if is_premium else None,
'preference': -10 if is_preview else None,
})

if not formats and info.get('policy') == 'BLOCK':
self.raise_geo_restricted(metadata_available=True)
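Reviewer note: the reworked SoundCloud loop derives the container from the `codecs="..."` part of the transcoding's mime type and the bitrate from the preset suffix. A minimal standalone sketch of that mapping, using only the stdlib and a made-up transcoding dict (field names follow the API shape shown above; the sample values are hypothetical):

```python
import re

# Hypothetical transcoding entry shaped like SoundCloud's `media.transcodings` items
transcoding = {
    'preset': 'aac_160k',
    'quality': 'sq',
    'format': {'protocol': 'hls', 'mime_type': 'audio/mp4; codecs="mp4a.40.2"'},
}

mime_type = transcoding['format']['mime_type']
codec_match = re.search(r'codecs="([^"]+)"', mime_type)
codec = codec_match.group(1) if codec_match else None
# Map the codec prefix to a sensible extension, as the extractor does
ext = {'mp4a': 'm4a', 'opus': 'opus'}.get(codec[:4] if codec else None)

abr_match = re.search(r'(\d+)k$', transcoding['preset'])
abr = int(abr_match.group(1)) if abr_match else None

print(ext, abr)  # m4a 160
```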
@@ -413,15 +413,6 @@ class TikTokBaseIE(InfoExtractor):
for f in formats:
self._set_cookie(urllib.parse.urlparse(f['url']).hostname, 'sid_tt', auth_cookie.value)

thumbnails = []
for cover_id in ('cover', 'ai_dynamic_cover', 'animated_cover', 'ai_dynamic_cover_bak',
'origin_cover', 'dynamic_cover'):
for cover_url in traverse_obj(video_info, (cover_id, 'url_list', ...)):
thumbnails.append({
'id': cover_id,
'url': cover_url,
})

stats_info = aweme_detail.get('statistics') or {}
music_info = aweme_detail.get('music') or {}
labels = traverse_obj(aweme_detail, ('hybrid_label', ..., 'text'), expected_type=str)

@@ -467,7 +458,17 @@ class TikTokBaseIE(InfoExtractor):
'formats': formats,
'subtitles': self.extract_subtitles(
aweme_detail, aweme_id, traverse_obj(author_info, 'uploader', 'uploader_id', 'channel_id')),
'thumbnails': thumbnails,
'thumbnails': [
{
'id': cover_id,
'url': cover_url,
'preference': -1 if cover_id in ('cover', 'origin_cover') else -2,
}
for cover_id in (
'cover', 'ai_dynamic_cover', 'animated_cover',
'ai_dynamic_cover_bak', 'origin_cover', 'dynamic_cover')
for cover_url in traverse_obj(video_info, (cover_id, 'url_list', ...))
],
'duration': (traverse_obj(video_info, (
(None, 'download_addr'), 'duration', {int_or_none(scale=1000)}, any))
or traverse_obj(music_info, ('duration', {int_or_none}))),

@@ -600,11 +601,15 @@ class TikTokBaseIE(InfoExtractor):
'repost_count': 'shareCount',
'comment_count': 'commentCount',
}), expected_type=int_or_none),
'thumbnails': traverse_obj(aweme_detail, (
(None, 'video'), ('thumbnail', 'cover', 'dynamicCover', 'originCover'), {
'url': ({url_or_none}, {self._proto_relative_url}),
},
)),
'thumbnails': [
{
'id': cover_id,
'url': self._proto_relative_url(cover_url),
'preference': -2 if cover_id == 'dynamicCover' else -1,
}
for cover_id in ('thumbnail', 'cover', 'dynamicCover', 'originCover')
for cover_url in traverse_obj(aweme_detail, ((None, 'video'), cover_id, {url_or_none}))
],
}
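Reviewer note: both TikTok hunks replace an appended-to `thumbnails` list (and a `traverse_obj` one-liner) with a nested list comprehension that also assigns a per-cover `preference`. A minimal sketch of the same pattern over a hypothetical `video_info` dict, with plain dict access standing in for `traverse_obj`:

```python
# Hypothetical, trimmed-down aweme video object
video_info = {
    'cover': {'url_list': ['https://example.invalid/cover.jpg']},
    'dynamic_cover': {'url_list': ['https://example.invalid/dynamic.webp']},
}

thumbnails = [
    {
        'id': cover_id,
        'url': cover_url,
        # static covers are preferred over animated/AI-generated ones
        'preference': -1 if cover_id in ('cover', 'origin_cover') else -2,
    }
    for cover_id in ('cover', 'ai_dynamic_cover', 'animated_cover',
                     'ai_dynamic_cover_bak', 'origin_cover', 'dynamic_cover')
    for cover_url in (video_info.get(cover_id) or {}).get('url_list') or []
]
# -> two entries: 'cover' with preference -1, 'dynamic_cover' with preference -2
```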
@@ -17,10 +17,10 @@ from ..utils import (
get_element_html_by_id,
int_or_none,
join_nonempty,
parse_qs,
parse_resolution,
str_or_none,
str_to_int,
traverse_obj,
try_call,
unescapeHTML,
unified_timestamp,

@@ -29,6 +29,7 @@ from ..utils import (
urlencode_postdata,
urljoin,
)
from ..utils.traversal import require, traverse_obj

class VKBaseIE(InfoExtractor):

@@ -91,17 +92,17 @@ class VKBaseIE(InfoExtractor):
class VKIE(VKBaseIE):
IE_NAME = 'vk'
IE_DESC = 'VK'
_EMBED_REGEX = [r'<iframe[^>]+?src=(["\'])(?P<url>https?://vk\.com/video_ext\.php.+?)\1']
_EMBED_REGEX = [r'<iframe[^>]+?src=(["\'])(?P<url>https?://vk(?:(?:video)?\.ru|\.com)/video_ext\.php.+?)\1']
_VALID_URL = r'''(?x)
https?://
(?:
(?:
(?:(?:m|new)\.)?vk\.com/video_|
(?:(?:m|new)\.)?vk(?:(?:video)?\.ru|\.com)/video_|
(?:www\.)?daxab\.com/
)
ext\.php\?(?P<embed_query>.*?\boid=(?P<oid>-?\d+).*?\bid=(?P<id>\d+).*)|
(?:
(?:(?:m|new)\.)?vk\.com/(?:.+?\?.*?z=)?(?:video|clip)|
(?:(?:m|new)\.)?vk(?:(?:video)?\.ru|\.com)/(?:.+?\?.*?z=)?(?:video|clip)|
(?:www\.)?daxab\.com/embed/
)
(?P<videoid>-?\d+_\d+)(?:.*\blist=(?P<list_id>([\da-f]+)|(ln-[\da-zA-Z]+)))?

@@ -110,7 +111,7 @@ class VKIE(VKBaseIE):
_TESTS = [
{
'url': 'http://vk.com/videos-77521?z=video-77521_162222515%2Fclub77521',
'url': 'https://vk.com/videos-77521?z=video-77521_162222515%2Fclub77521',
'info_dict': {
'id': '-77521_162222515',
'ext': 'mp4',

@@ -127,7 +128,7 @@ class VKIE(VKBaseIE):
'params': {'skip_download': 'm3u8'},
},
{
'url': 'http://vk.com/video205387401_165548505',
'url': 'https://vk.com/video205387401_165548505',
'info_dict': {
'id': '205387401_165548505',
'ext': 'mp4',

@@ -182,10 +183,10 @@ class VKIE(VKBaseIE):
'ext': 'mp4',
'title': "DSWD Awards 'Children's Joy Foundation, Inc.' Certificate of Registration and License to Operate",
'description': 'md5:bf9c26cfa4acdfb146362682edd3827a',
'duration': 178,
'duration': 179,
'upload_date': '20130117',
'uploader': "Children's Joy Foundation Inc.",
'uploader_id': 'thecjf',
'uploader_id': '@CJFIofficial',
'view_count': int,
'channel_id': 'UCgzCNQ11TmR9V97ECnhi3gw',
'availability': 'public',

@@ -193,7 +194,7 @@ class VKIE(VKBaseIE):
'live_status': 'not_live',
'playable_in_embed': True,
'channel': 'Children\'s Joy Foundation Inc.',
'uploader_url': 'http://www.youtube.com/user/thecjf',
'uploader_url': 'https://www.youtube.com/@CJFIofficial',
'thumbnail': r're:https?://.+\.jpg$',
'tags': 'count:27',
'start_time': 0.0,

@@ -201,6 +202,7 @@ class VKIE(VKBaseIE):
'channel_url': 'https://www.youtube.com/channel/UCgzCNQ11TmR9V97ECnhi3gw',
'channel_follower_count': int,
'age_limit': 0,
'timestamp': 1358394935,
},
},
{

@@ -222,6 +224,7 @@ class VKIE(VKBaseIE):
'thumbnail': r're:https?://.+x1080$',
'tags': list,
},
'skip': 'This video has been deleted and is no longer available.',
},
{
'url': 'https://vk.com/clips-74006511?z=clip-74006511_456247211',

@@ -235,13 +238,13 @@ class VKIE(VKBaseIE):
'timestamp': 1664995597,
'title': 'Clip by @madempress',
'upload_date': '20221005',
'uploader': 'Шальная императрица',
'uploader': 'Шальная Императрица',
'uploader_id': '-74006511',
},
},
{
# video key is extra_data not url\d+
'url': 'http://vk.com/video-110305615_171782105',
'url': 'https://vk.com/video-110305615_171782105',
'md5': 'e13fcda136f99764872e739d13fac1d1',
'info_dict': {
'id': '-110305615_171782105',

@@ -273,6 +276,7 @@ class VKIE(VKBaseIE):
'params': {
'skip_download': True,
},
'skip': 'No formats found',
},
{
# live stream, hls and rtmp links, most likely already finished live

@@ -312,7 +316,16 @@ class VKIE(VKBaseIE):
{
'url': 'https://vk.com/clip30014565_456240946',
'only_matching': True,
}]
},
{
'url': 'https://vkvideo.ru/video-127553155_456242961',
'only_matching': True,
},
{
'url': 'https://vk.ru/video-220754053_456242564',
'only_matching': True,
},
]

def _real_extract(self, url):
mobj = self._match_valid_url(url)

@@ -338,7 +351,7 @@ class VKIE(VKBaseIE):
video_id = '{}_{}'.format(mobj.group('oid'), mobj.group('id'))

info_page = self._download_webpage(
'http://vk.com/video_ext.php?' + mobj.group('embed_query'), video_id)
'https://vk.com/video_ext.php?' + mobj.group('embed_query'), video_id)

error_message = self._html_search_regex(
[r'(?s)<!><div[^>]+class="video_layer_message"[^>]*>(.+?)</div>',

@@ -432,7 +445,7 @@ class VKIE(VKBaseIE):
if m_opts_url:
opts_url = m_opts_url.group(1)
if opts_url.startswith('//'):
opts_url = 'http:' + opts_url
opts_url = 'https:' + opts_url
return self.url_result(opts_url)

data = player['params'][0]

@@ -512,8 +525,11 @@ class VKIE(VKBaseIE):
class VKUserVideosIE(VKBaseIE):
IE_NAME = 'vk:uservideos'
IE_DESC = "VK - User's Videos"
_VALID_URL = r'https?://(?:(?:m|new)\.)?vk\.com/video/(?:playlist/)?(?P<id>[^?$#/&]+)(?!\?.*\bz=video)(?:[/?#&](?:.*?\bsection=(?P<section>\w+))?|$)'
_TEMPLATE_URL = 'https://vk.com/videos'
_BASE_URL_RE = r'https?://(?:(?:m|new)\.)?vk(?:video\.ru|\.com/video)'
_VALID_URL = [
rf'{_BASE_URL_RE}/playlist/(?P<id>-?\d+_\d+)',
rf'{_BASE_URL_RE}/(?P<id>@[^/?#]+)(?:/all)?/?(?!\?.*\bz=video)(?:[?#]|$)',
]
_TESTS = [{
'url': 'https://vk.com/video/@mobidevices',
'info_dict': {

@@ -527,12 +543,20 @@ class VKUserVideosIE(VKBaseIE):
},
'playlist_mincount': 182,
}, {
'url': 'https://vk.com/video/playlist/-174476437_2',
'url': 'https://vkvideo.ru/playlist/-204353299_426',
'info_dict': {
'id': '-174476437_playlist_2',
'title': 'Анонсы',
'id': '-204353299_playlist_426',
},
'playlist_mincount': 108,
'playlist_mincount': 33,
}, {
'url': 'https://vk.com/video/@gorkyfilmstudio/all',
'only_matching': True,
}, {
'url': 'https://vkvideo.ru/@mobidevices',
'only_matching': True,
}, {
'url': 'https://vk.com/video/playlist/-174476437_2',
'only_matching': True,
}]
_VIDEO = collections.namedtuple('Video', ['owner_id', 'id'])

@@ -552,7 +576,7 @@ class VKUserVideosIE(VKBaseIE):
v = self._VIDEO._make(video[:2])
video_id = '%d_%d' % (v.owner_id, v.id)
yield self.url_result(
'http://vk.com/video' + video_id, VKIE.ie_key(), video_id)
'https://vk.com/video' + video_id, VKIE.ie_key(), video_id)
if count >= total:
break
video_list_json = self._download_payload('al_video', page_id, {

@@ -561,23 +585,25 @@ class VKUserVideosIE(VKBaseIE):
'oid': page_id,
'section': section,
})[0][section]
count += video_list_json['count']
new_count = video_list_json['count']
if not new_count:
self.to_screen(f'{page_id}: Skipping {total - count} unavailable videos')
break
count += new_count
video_list = video_list_json['list']

def _real_extract(self, url):
u_id, section = self._match_valid_url(url).groups()
u_id = self._match_id(url)
webpage = self._download_webpage(url, u_id)

if u_id.startswith('@'):
page_id = self._search_regex(r'data-owner-id\s?=\s?"([^"]+)"', webpage, 'page_id')
elif '_' in u_id:
page_id, section = u_id.split('_', 1)
section = f'playlist_{section}'
page_id = traverse_obj(
self._search_json(r'\bvar newCur\s*=', webpage, 'cursor data', u_id),
('oid', {int}, {str_or_none}, {require('page id')}))
section = traverse_obj(parse_qs(url), ('section', 0)) or 'all'
else:
raise ExtractorError('Invalid URL', expected=True)

if not section:
section = 'all'
page_id, _, section = u_id.partition('_')
section = f'playlist_{section}'

playlist_title = clean_html(get_element_by_class('VideoInfoPanel__title', webpage))
return self.playlist_result(self._entries(page_id, section), f'{page_id}_{section}', playlist_title)
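Reviewer note: the reworked `VKUserVideosIE._real_extract` now takes the owner id from the page's `var newCur` JSON blob and reads the playlist section from the `?section=` query parameter, defaulting to `all`. A small stdlib-only sketch of that query handling (the helper name and the `uploaded` value are made up for illustration):

```python
from urllib.parse import parse_qs, urlparse

def playlist_section(url):
    # mirrors `traverse_obj(parse_qs(url), ('section', 0)) or 'all'` from the diff,
    # using the stdlib parse_qs on the query string instead of yt-dlp's helper
    return (parse_qs(urlparse(url).query).get('section') or ['all'])[0]

assert playlist_section('https://vkvideo.ru/@mobidevices') == 'all'
assert playlist_section('https://vk.com/video/@mobidevices?section=uploaded') == 'uploaded'
```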
@@ -717,7 +743,7 @@ class VKWallPostIE(VKBaseIE):

class VKPlayBaseIE(InfoExtractor):
_BASE_URL_RE = r'https?://(?:vkplay\.live|live\.vkplay\.ru)/'
_BASE_URL_RE = r'https?://(?:vkplay\.live|live\.vk(?:play|video)\.ru)/'
_RESOLUTIONS = {
'tiny': '256x144',
'lowest': '426x240',

@@ -797,6 +823,9 @@ class VKPlayIE(VKPlayBaseIE):
}, {
'url': 'https://live.vkplay.ru/lebwa/record/33a4e4ce-e3ef-49db-bb14-f006cc6fabc9/records',
'only_matching': True,
}, {
'url': 'https://live.vkvideo.ru/lebwa/record/33a4e4ce-e3ef-49db-bb14-f006cc6fabc9/records',
'only_matching': True,
}]

def _real_extract(self, url):

@@ -839,6 +868,9 @@ class VKPlayLiveIE(VKPlayBaseIE):
}, {
'url': 'https://live.vkplay.ru/lebwa',
'only_matching': True,
}, {
'url': 'https://live.vkvideo.ru/panterka',
'only_matching': True,
}]

def _real_extract(self, url):

@@ -78,53 +78,58 @@ INNERTUBE_CLIENTS = {
'INNERTUBE_CONTEXT': {
'client': {
'clientName': 'WEB',
'clientVersion': '2.20240726.00.00',
'clientVersion': '2.20241126.01.00',
},
},
'INNERTUBE_CONTEXT_CLIENT_NAME': 1,
'REQUIRE_PO_TOKEN': True,
'SUPPORTS_COOKIES': True,
},
# Safari UA returns pre-merged video+audio 144p/240p/360p/720p/1080p HLS formats
'web_safari': {
'INNERTUBE_CONTEXT': {
'client': {
'clientName': 'WEB',
'clientVersion': '2.20240726.00.00',
'clientVersion': '2.20241126.01.00',
'userAgent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/15.5 Safari/605.1.15,gzip(gfe)',
},
},
'INNERTUBE_CONTEXT_CLIENT_NAME': 1,
'REQUIRE_PO_TOKEN': True,
'SUPPORTS_COOKIES': True,
},
'web_embedded': {
'INNERTUBE_CONTEXT': {
'client': {
'clientName': 'WEB_EMBEDDED_PLAYER',
'clientVersion': '1.20240723.01.00',
'clientVersion': '1.20241201.00.00',
},
},
'INNERTUBE_CONTEXT_CLIENT_NAME': 56,
'SUPPORTS_COOKIES': True,
},
'web_music': {
'INNERTUBE_HOST': 'music.youtube.com',
'INNERTUBE_CONTEXT': {
'client': {
'clientName': 'WEB_REMIX',
'clientVersion': '1.20240724.00.00',
'clientVersion': '1.20241127.01.00',
},
},
'INNERTUBE_CONTEXT_CLIENT_NAME': 67,
'SUPPORTS_COOKIES': True,
},
# This client now requires sign-in for every video
'web_creator': {
'INNERTUBE_CONTEXT': {
'client': {
'clientName': 'WEB_CREATOR',
'clientVersion': '1.20240723.03.00',
'clientVersion': '1.20241203.01.00',
},
},
'INNERTUBE_CONTEXT_CLIENT_NAME': 62,
'REQUIRE_AUTH': True,
'SUPPORTS_COOKIES': True,
},
'android': {
'INNERTUBE_CONTEXT': {

@@ -157,6 +162,7 @@ INNERTUBE_CLIENTS = {
'REQUIRE_JS_PLAYER': False,
'REQUIRE_PO_TOKEN': True,
'REQUIRE_AUTH': True,
'SUPPORTS_COOKIES': True,
},
# This client now requires sign-in for every video
'android_creator': {

@@ -191,6 +197,7 @@ INNERTUBE_CLIENTS = {
},
'INNERTUBE_CONTEXT_CLIENT_NAME': 28,
'REQUIRE_JS_PLAYER': False,
'SUPPORTS_COOKIES': True,
},
# iOS clients have HLS live streams. Setting device model to get 60fps formats.
# See: https://github.com/TeamNewPipe/NewPipeExtractor/issues/680#issuecomment-1002724558

@@ -225,6 +232,7 @@ INNERTUBE_CLIENTS = {
'INNERTUBE_CONTEXT_CLIENT_NAME': 26,
'REQUIRE_JS_PLAYER': False,
'REQUIRE_AUTH': True,
'SUPPORTS_COOKIES': True,
},
# This client now requires sign-in for every video
'ios_creator': {

@@ -249,19 +257,22 @@ INNERTUBE_CLIENTS = {
'INNERTUBE_CONTEXT': {
'client': {
'clientName': 'MWEB',
'clientVersion': '2.20240726.01.00',
'clientVersion': '2.20241202.07.00',
'userAgent': 'Mozilla/5.0 (iPad; CPU OS 16_7_10 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.6 Mobile/15E148 Safari/604.1,gzip(gfe)',
},
},
'INNERTUBE_CONTEXT_CLIENT_NAME': 2,
'SUPPORTS_COOKIES': True,
},
'tv': {
'INNERTUBE_CONTEXT': {
'client': {
'clientName': 'TVHTML5',
'clientVersion': '7.20240724.13.00',
'clientVersion': '7.20241201.18.00',
},
},
'INNERTUBE_CONTEXT_CLIENT_NAME': 7,
'SUPPORTS_COOKIES': True,
},
# This client now requires sign-in for every video
# It was previously an age-gate workaround for videos that were `playable_in_embed`

@@ -275,19 +286,7 @@ INNERTUBE_CLIENTS = {
},
'INNERTUBE_CONTEXT_CLIENT_NAME': 85,
'REQUIRE_AUTH': True,
},
# This client now requires sign-in for every video
# It may be able to receive pre-merged video+audio 720p/1080p streams
'mediaconnect': {
'INNERTUBE_CONTEXT': {
'client': {
'clientName': 'MEDIA_CONNECT_FRONTEND',
'clientVersion': '0.1',
},
},
'INNERTUBE_CONTEXT_CLIENT_NAME': 95,
'REQUIRE_JS_PLAYER': False,
'REQUIRE_AUTH': True,
'SUPPORTS_COOKIES': True,
},
}

@@ -317,6 +316,7 @@ def build_innertube_clients():
ytcfg.setdefault('REQUIRE_JS_PLAYER', True)
ytcfg.setdefault('REQUIRE_PO_TOKEN', False)
ytcfg.setdefault('REQUIRE_AUTH', False)
ytcfg.setdefault('SUPPORTS_COOKIES', False)
ytcfg.setdefault('PLAYER_PARAMS', None)
ytcfg['INNERTUBE_CONTEXT']['client'].setdefault('hl', 'en')
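Reviewer note: the client table above is what the `player_client` extractor argument selects from, and `build_innertube_clients()` now back-fills the new `SUPPORTS_COOKIES` flag. A hedged usage sketch for picking clients when embedding yt-dlp, assuming the `extractor_args` parameter takes the same shape as the CLI option (a dict of lists per extractor); the chosen clients and URL are just examples:

```python
import yt_dlp

# e.g. prefer the Safari web client (pre-merged HLS formats, per the comment above)
# plus mweb as a fallback; roughly equivalent to
#   --extractor-args "youtube:player_client=web_safari,mweb" on the CLI
opts = {
    'extractor_args': {'youtube': {'player_client': ['web_safari', 'mweb']}},
}
with yt_dlp.YoutubeDL(opts) as ydl:
    info = ydl.extract_info('https://www.youtube.com/watch?v=BaW_jenozKc', download=False)
```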
@@ -1357,6 +1357,7 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
}
_SUBTITLE_FORMATS = ('json3', 'srv1', 'srv2', 'srv3', 'ttml', 'vtt')
_DEFAULT_CLIENTS = ('ios', 'mweb')
_DEFAULT_AUTHED_CLIENTS = ('web_creator', 'mweb')

_GEO_BYPASS = False

@@ -2925,7 +2926,7 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
# Obtain from MPD's maximum seq value
old_mpd_url = mpd_url
last_error = ctx.pop('last_error', None)
expire_fast = immediate or last_error and isinstance(last_error, HTTPError) and last_error.status == 403
expire_fast = immediate or (last_error and isinstance(last_error, HTTPError) and last_error.status == 403)
mpd_url, stream_number, is_live = (mpd_feed(format_id, 5 if expire_fast else 18000)
or (mpd_url, stream_number, False))
if not refresh_sequence:

@@ -3118,19 +3119,26 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
self.to_screen('Extracted signature function:\n' + code)

def _parse_sig_js(self, jscode):
# Examples where `sig` is funcname:
# sig=function(a){a=a.split(""); ... ;return a.join("")};
# ;c&&(c=sig(decodeURIComponent(c)),a.set(b,encodeURIComponent(c)));return a};
# {var l=f,m=h.sp,n=sig(decodeURIComponent(h.s));l.set(m,encodeURIComponent(n))}
# sig=function(J){J=J.split(""); ... ;return J.join("")};
# ;N&&(N=sig(decodeURIComponent(N)),J.set(R,encodeURIComponent(N)));return J};
# {var H=u,k=f.sp,v=sig(decodeURIComponent(f.s));H.set(k,encodeURIComponent(v))}
funcname = self._search_regex(
(r'\b[cs]\s*&&\s*[adf]\.set\([^,]+\s*,\s*encodeURIComponent\s*\(\s*(?P<sig>[a-zA-Z0-9$]+)\(',
(r'\b(?P<var>[a-zA-Z0-9$]+)&&\((?P=var)=(?P<sig>[a-zA-Z0-9$]{2,})\(decodeURIComponent\((?P=var)\)\)',
r'(?P<sig>[a-zA-Z0-9$]+)\s*=\s*function\(\s*(?P<arg>[a-zA-Z0-9$]+)\s*\)\s*{\s*(?P=arg)\s*=\s*(?P=arg)\.split\(\s*""\s*\)\s*;\s*[^}]+;\s*return\s+(?P=arg)\.join\(\s*""\s*\)',
r'(?:\b|[^a-zA-Z0-9$])(?P<sig>[a-zA-Z0-9$]{2,})\s*=\s*function\(\s*a\s*\)\s*{\s*a\s*=\s*a\.split\(\s*""\s*\)(?:;[a-zA-Z0-9$]{2}\.[a-zA-Z0-9$]{2}\(a,\d+\))?',
# Old patterns
r'\b[cs]\s*&&\s*[adf]\.set\([^,]+\s*,\s*encodeURIComponent\s*\(\s*(?P<sig>[a-zA-Z0-9$]+)\(',
r'\b[a-zA-Z0-9]+\s*&&\s*[a-zA-Z0-9]+\.set\([^,]+\s*,\s*encodeURIComponent\s*\(\s*(?P<sig>[a-zA-Z0-9$]+)\(',
r'\bm=(?P<sig>[a-zA-Z0-9$]{2,})\(decodeURIComponent\(h\.s\)\)',
r'\bc&&\(c=(?P<sig>[a-zA-Z0-9$]{2,})\(decodeURIComponent\(c\)\)',
r'(?:\b|[^a-zA-Z0-9$])(?P<sig>[a-zA-Z0-9$]{2,})\s*=\s*function\(\s*a\s*\)\s*{\s*a\s*=\s*a\.split\(\s*""\s*\)(?:;[a-zA-Z0-9$]{2}\.[a-zA-Z0-9$]{2}\(a,\d+\))?',
r'(?P<sig>[a-zA-Z0-9$]+)\s*=\s*function\(\s*a\s*\)\s*{\s*a\s*=\s*a\.split\(\s*""\s*\)',
# Obsolete patterns
r'("|\')signature\1\s*,\s*(?P<sig>[a-zA-Z0-9$]+)\(',
r'\.sig\|\|(?P<sig>[a-zA-Z0-9$]+)\(',
r'yt\.akamaized\.net/\)\s*\|\|\s*.*?\s*[cs]\s*&&\s*[adf]\.set\([^,]+\s*,\s*(?:encodeURIComponent\s*\()?\s*(?P<sig>[a-zA-Z0-9$]+)\(',
r'\b[cs]\s*&&\s*[adf]\.set\([^,]+\s*,\s*(?P<sig>[a-zA-Z0-9$]+)\(',
r'\b[a-zA-Z0-9]+\s*&&\s*[a-zA-Z0-9]+\.set\([^,]+\s*,\s*(?P<sig>[a-zA-Z0-9$]+)\(',
r'\bc\s*&&\s*[a-zA-Z0-9]+\.set\([^,]+\s*,\s*\([^)]*\)\s*\(\s*(?P<sig>[a-zA-Z0-9$]+)\('),
jscode, 'Initial JS player signature function name', group='sig')
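Reviewer note: the first of the new patterns targets the minified call sites quoted in the comments above. A quick self-contained check of that pattern against one of those example snippets:

```python
import re

pattern = r'\b(?P<var>[a-zA-Z0-9$]+)&&\((?P=var)=(?P<sig>[a-zA-Z0-9$]{2,})\(decodeURIComponent\((?P=var)\)\)'
snippet = ';N&&(N=sig(decodeURIComponent(N)),J.set(R,encodeURIComponent(N)));return J};'

match = re.search(pattern, snippet)
assert match and match.group('sig') == 'sig'  # the signature function name is extracted
```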
@@ -3204,6 +3212,7 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
# * a.D&&(b="nn"[+a.D],c=a.get(b))&&(c=narray[idx](c),a.set(b,c),narray.length||nfunc("")
# * a.D&&(PL(a),b=a.j.n||null)&&(b=narray[0](b),a.set("n",b),narray.length||nfunc("")
# * a.D&&(b="nn"[+a.D],vL(a),c=a.j[b]||null)&&(c=narray[idx](c),a.set(b,c),narray.length||nfunc("")
# * J.J="";J.url="";J.Z&&(R="nn"[+J.Z],mW(J),N=J.K[R]||null)&&(N=narray[idx](N),J.set(R,N))}};
funcname, idx = self._search_regex(
r'''(?x)
(?:

@@ -3220,7 +3229,7 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
)\)&&\(c=|
\b(?P<var>[a-zA-Z0-9_$]+)=
)(?P<nfunc>[a-zA-Z0-9_$]+)(?:\[(?P<idx>\d+)\])?\([a-zA-Z]\)
(?(var),[a-zA-Z0-9_$]+\.set\("n"\,(?P=var)\),(?P=nfunc)\.length)''',
(?(var),[a-zA-Z0-9_$]+\.set\((?:"n+"|[a-zA-Z0-9_$]+)\,(?P=var)\))''',
jscode, 'n function name', group=('nfunc', 'idx'), default=(None, None))
if not funcname:
self.report_warning(join_nonempty(

@@ -3229,7 +3238,7 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
return self._search_regex(
r'''(?xs)
;\s*(?P<name>[a-zA-Z0-9_$]+)\s*=\s*function\([a-zA-Z0-9_$]+\)
\s*\{(?:(?!};).)+?["']enhanced_except_''',
\s*\{(?:(?!};).)+?return\s*(?P<q>["'])[\w-]+_w8_(?P=q)\s*\+\s*[a-zA-Z0-9_$]+''',
jscode, 'Initial JS player n function name', group='name')
elif not idx:
return funcname

@@ -3238,6 +3247,11 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
rf'var {re.escape(funcname)}\s*=\s*(\[.+?\])\s*[,;]', jscode,
f'Initial JS player n function list ({funcname}.{idx})')))[int(idx)]

def _fixup_n_function_code(self, argnames, code):
return argnames, re.sub(
rf';\s*if\s*\(\s*typeof\s+[a-zA-Z0-9_$]+\s*===?\s*(["\'])undefined\1\s*\)\s*return\s+{argnames[0]};',
';', code)

def _extract_n_function_code(self, video_id, player_url):
player_id = self._extract_player_info(player_url)
func_code = self.cache.load('youtube-nsig', player_id, min_ver='2024.07.09')

@@ -3249,7 +3263,8 @@ class YoutubeIE(YoutubeBaseInfoExtractor):

func_name = self._extract_n_function_name(jscode, player_url=player_url)

func_code = jsi.extract_function_code(func_name)
# XXX: Workaround for the `typeof` gotcha
func_code = self._fixup_n_function_code(*jsi.extract_function_code(func_name))

self.cache.store('youtube-nsig', player_id, func_code)
return jsi, player_id, func_code
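Reviewer note: the new `_fixup_n_function_code` strips the early `typeof ... === "undefined"` guard that would otherwise make the extracted n-function return its argument untouched. A standalone restatement of that substitution with a toy (made-up) function body:

```python
import re

def fixup_n_function_code(argnames, code):
    # drop the `;if(typeof X==="undefined")return <arg>;` early-return guard
    return argnames, re.sub(
        rf';\s*if\s*\(\s*typeof\s+[a-zA-Z0-9_$]+\s*===?\s*(["\'])undefined\1\s*\)\s*return\s+{argnames[0]};',
        ';', code)

_, fixed = fixup_n_function_code(
    ['a'], 'b=a.split("");if(typeof g==="undefined")return a;return b.reverse().join("")')
assert fixed == 'b=a.split("");return b.reverse().join("")'
```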
@@ -3265,7 +3280,7 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
except Exception as e:
raise JSInterpreter.Exception(traceback.format_exc(), cause=e)

if ret.startswith('enhanced_except_'):
if ret.startswith('enhanced_except_') or ret.endswith(s):
raise JSInterpreter.Exception('Signature function returned an exception')
return ret

@@ -3823,12 +3838,13 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
def _get_requested_clients(self, url, smuggled_data):
requested_clients = []
excluded_clients = []
default_clients = self._DEFAULT_AUTHED_CLIENTS if self.is_authenticated else self._DEFAULT_CLIENTS
allowed_clients = sorted(
(client for client in INNERTUBE_CLIENTS if client[:1] != '_'),
key=lambda client: INNERTUBE_CLIENTS[client]['priority'], reverse=True)
for client in self._configuration_arg('player_client'):
if client == 'default':
requested_clients.extend(self._DEFAULT_CLIENTS)
requested_clients.extend(default_clients)
elif client == 'all':
requested_clients.extend(allowed_clients)
elif client.startswith('-'):

@@ -3838,7 +3854,7 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
else:
requested_clients.append(client)
if not requested_clients:
requested_clients.extend(self._DEFAULT_CLIENTS)
requested_clients.extend(default_clients)
for excluded_client in excluded_clients:
if excluded_client in requested_clients:
requested_clients.remove(excluded_client)

@@ -3850,9 +3866,18 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
_, base_client, variant = _split_innertube_client(requested_client)
music_client = f'{base_client}_music' if base_client != 'mweb' else 'web_music'
if variant != 'music' and music_client in INNERTUBE_CLIENTS:
if not INNERTUBE_CLIENTS[music_client]['REQUIRE_AUTH'] or self.is_authenticated:
client_info = INNERTUBE_CLIENTS[music_client]
if not client_info['REQUIRE_AUTH'] or (self.is_authenticated and client_info['SUPPORTS_COOKIES']):
requested_clients.append(music_client)

if self.is_authenticated:
unsupported_clients = [
client for client in requested_clients if not INNERTUBE_CLIENTS[client]['SUPPORTS_COOKIES']
]
for client in unsupported_clients:
self.report_warning(f'Skipping client "{client}" since it does not support cookies', only_once=True)
requested_clients.remove(client)

return orderedSet(requested_clients)
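Reviewer note: the net effect is that the default client set now depends on whether the session is authenticated, and cookie-incompatible clients are dropped when it is. A minimal standalone sketch of that selection logic over a toy client table (names and flags invented for illustration):

```python
# toy stand-in for INNERTUBE_CLIENTS, keyed by client name
CLIENTS = {
    'ios': {'SUPPORTS_COOKIES': False},
    'mweb': {'SUPPORTS_COOKIES': True},
    'web_creator': {'SUPPORTS_COOKIES': True},
}
DEFAULT = ('ios', 'mweb')
DEFAULT_AUTHED = ('web_creator', 'mweb')

def resolve_clients(requested, is_authenticated):
    clients = list(requested) or list(DEFAULT_AUTHED if is_authenticated else DEFAULT)
    if is_authenticated:
        # drop clients that cannot carry the account's cookies
        clients = [c for c in clients if CLIENTS[c]['SUPPORTS_COOKIES']]
    return clients

assert resolve_clients([], is_authenticated=False) == ['ios', 'mweb']
assert resolve_clients([], is_authenticated=True) == ['web_creator', 'mweb']
assert resolve_clients(['ios', 'mweb'], is_authenticated=True) == ['mweb']
```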
@@ -3958,6 +3983,7 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
else:
prs.append(pr)

''' This code is pointless while web_creator is in _DEFAULT_AUTHED_CLIENTS
# EU countries require age-verification for accounts to access age-restricted videos
# If account is not age-verified, _is_agegated() will be truthy for non-embedded clients
if self.is_authenticated and self._is_agegated(pr):

@@ -3965,9 +3991,10 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
f'{video_id}: This video is age-restricted and YouTube is requiring '
'account age-verification; some formats may be missing', only_once=True)
# web_creator can work around the age-verification requirement
# android_vr and mediaconnect may also be able to work around age-verification
# android_vr may also be able to work around age-verification
# tv_embedded may(?) still work around age-verification if the video is embeddable
append_client('web_creator')
'''

prs.extend(deprioritized_prs)

@@ -3983,8 +4010,8 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
return prs, player_url

def _needs_live_processing(self, live_status, duration):
if (live_status == 'is_live' and self.get_param('live_from_start')
or live_status == 'post_live' and (duration or 0) > 2 * 3600):
if ((live_status == 'is_live' and self.get_param('live_from_start'))
or (live_status == 'post_live' and (duration or 0) > 2 * 3600)):
return live_status

def _extract_formats_and_subtitles(self, streaming_data, video_id, player_url, live_status, duration):

@@ -4180,7 +4207,7 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
skip_manifests = set(self._configuration_arg('skip'))
if (not self.get_param('youtube_include_hls_manifest', True)
or needs_live_processing == 'is_live' # These will be filtered out by YoutubeDL anyway
or needs_live_processing and skip_bad_formats):
or (needs_live_processing and skip_bad_formats)):
skip_manifests.add('hls')

if not self.get_param('youtube_include_dash_manifest', True):

@@ -4378,14 +4405,14 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
expected_type=dict)

translated_title = self._get_text(microformats, (..., 'title'))
video_title = (self._preferred_lang and translated_title
video_title = ((self._preferred_lang and translated_title)
or get_first(video_details, 'title') # primary
or translated_title
or search_meta(['og:title', 'twitter:title', 'title']))
translated_description = self._get_text(microformats, (..., 'description'))
original_description = get_first(video_details, 'shortDescription')
video_description = (
self._preferred_lang and translated_description
(self._preferred_lang and translated_description)
# If original description is blank, it will be an empty string.
# Do not prefer translated description in this case.
or original_description if original_description is not None else translated_description)
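Reviewer note: the added parentheses only make the existing `and`/`or` precedence explicit; the fallback chain itself is unchanged. A tiny worked example of that chain (sample titles invented):

```python
preferred_lang = None            # no preferred language configured
translated_title = 'Titre traduit'
primary_title = 'Original title'

video_title = ((preferred_lang and translated_title)
               or primary_title          # primary
               or translated_title)
assert video_title == 'Original title'  # the translation only wins when a language is preferred
```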
@@ -6825,7 +6852,7 @@ class YoutubeTabIE(YoutubeTabBaseInfoExtractor):
tab_url = urljoin(base_url, traverse_obj(
tab, ('endpoint', 'commandMetadata', 'webCommandMetadata', 'url')))

tab_id = (tab_url and self._get_url_mobj(tab_url)['tab'][1:]
tab_id = ((tab_url and self._get_url_mobj(tab_url)['tab'][1:])
or traverse_obj(tab, 'tabIdentifier', expected_type=str))
if tab_id:
return {

@@ -183,4 +183,4 @@ def load_plugins(name, suffix):

sys.meta_path.insert(0, PluginFinder(f'{PACKAGE_NAME}.extractor', f'{PACKAGE_NAME}.postprocessor'))

__all__ = ['directories', 'load_plugins', 'PACKAGE_NAME', 'COMPAT_PACKAGE_NAME']
__all__ = ['COMPAT_PACKAGE_NAME', 'PACKAGE_NAME', 'directories', 'load_plugins']

@@ -44,4 +44,4 @@ def get_postprocessor(key):

globals().update(_PLUGIN_CLASSES)
__all__ = [name for name in globals() if name.endswith('PP')]
__all__.extend(('PostProcessor', 'FFmpegPostProcessor'))
__all__.extend(('FFmpegPostProcessor', 'PostProcessor'))

@@ -637,7 +637,7 @@ class FFmpegEmbedSubtitlePP(FFmpegPostProcessor):
sub_ext = sub_info['ext']
if sub_ext == 'json':
self.report_warning('JSON subtitles cannot be embedded')
elif ext != 'webm' or ext == 'webm' and sub_ext == 'vtt':
elif ext != 'webm' or (ext == 'webm' and sub_ext == 'vtt'):
sub_langs.append(lang)
sub_names.append(sub_info.get('name'))
sub_filenames.append(sub_info['filepath'])
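Reviewer note: the new parentheses here are purely cosmetic; since the right operand is only reached when `ext == 'webm'`, the whole condition is equivalent to `ext != 'webm' or sub_ext == 'vtt'`. A quick exhaustive check:

```python
for ext in ('mp4', 'mkv', 'webm'):
    for sub_ext in ('vtt', 'ass', 'srt'):
        old = ext != 'webm' or ext == 'webm' and sub_ext == 'vtt'
        new = ext != 'webm' or (ext == 'webm' and sub_ext == 'vtt')
        assert old == new == (ext != 'webm' or sub_ext == 'vtt')
```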
@@ -2683,8 +2683,8 @@ def merge_dicts(*dicts):
merged = {}
for a_dict in dicts:
for k, v in a_dict.items():
if (v is not None and k not in merged
or isinstance(v, str) and merged[k] == ''):
if ((v is not None and k not in merged)
or (isinstance(v, str) and merged[k] == '')):
merged[k] = v
return merged
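Reviewer note: again only explicit parentheses; the merge semantics stay the same: the first non-`None` value wins, except that an empty string may later be replaced by a non-empty one. A worked example of that behaviour, using the function body shown above as a standalone sketch:

```python
def merge_dicts(*dicts):
    merged = {}
    for a_dict in dicts:
        for k, v in a_dict.items():
            if ((v is not None and k not in merged)
                    or (isinstance(v, str) and merged[k] == '')):
                merged[k] = v
    return merged

# None never overrides, and the empty string from the first dict is upgraded
assert merge_dicts({'a': None, 'b': ''}, {'a': 1, 'b': 'x'}) == {'a': 1, 'b': 'x'}
```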
@@ -1,8 +1,8 @@
# Autogenerated by devscripts/update-version.py

__version__ = '2024.11.18'
__version__ = '2024.12.06'

RELEASE_GIT_HEAD = '7ea2787920cccc6b8ea30791993d114fbd564434'
RELEASE_GIT_HEAD = '4bd2655398aed450456197a6767639114a24eac2'

VARIANT = None

@@ -12,4 +12,4 @@ CHANNEL = 'stable'

ORIGIN = 'yt-dlp/yt-dlp'

_pkg_version = '2024.11.18'
_pkg_version = '2024.12.06'