90 Commits

Author SHA1 Message Date
Niklas Haas 5d7f234e7e avcodec/hevcdec: apply AOM film grain synthesis
Following the usual logic for H.274 film grain.
2024-03-23 18:55:21 +01:00
Niklas Haas 2e24c12aa1 avcodec/h2645_sei: decode AFGS1 T.35 SEI
I restricted this SEI to HEVC for now, until I see an H.264 sample.
2024-03-23 18:55:21 +01:00
Niklas Haas f50382cba6 avcodec/aom_film_grain: implement AFGS1 parsing
Based on the AOMedia Film Grain Synthesis 1 (AFGS1) spec:
  https://aomediacodec.github.io/afgs1-spec/

The parsing has been changed substantially relative to the AV1 film
grain OBU. In particular:

1. There is the possibility of maintaining multiple independent film
   grain parameter sets, and decoders/players are recommended to pick
   the one most appropriate for the intended display resolution. This
   could also be used to e.g. switch between different grain profiles
   without having to re-signal the appropriate coefficients.

2. Supporting this, it's possible to *predict* the grain coefficients
   from previously signalled parameter sets, transmitting only the
   residual.

3. When not predicting, the parameter sets are now stored as a series of
   increments, rather than being directly transmitted.

4. There are several new AFGS1-exclusive fields.

I placed this parser in its own file, rather than h2645_sei.c, since
nothing in the generic AFGS1 film grain payload is specific to T.35, and
to compartmentalize the code base.
2024-03-23 18:55:21 +01:00
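The multi-parameter-set selection described in point 1 can be sketched as follows. This is a self-contained toy, with a hypothetical GrainParamSet struct standing in for the real AFGS1 payload; it only illustrates the recommendation that players pick the set closest to the intended display resolution:

```c
#include <stdlib.h>

/* Hypothetical, heavily simplified view of one AFGS1 parameter set;
 * the real payload carries many more fields. */
typedef struct GrainParamSet {
    int apply_width;   /* intended display width, in samples  */
    int apply_height;  /* intended display height, in samples */
} GrainParamSet;

/* Pick the set whose intended resolution is closest to the actual
 * display, mirroring the AFGS1 guidance that decoders/players select
 * the most appropriate of the signalled sets. */
static const GrainParamSet *
select_grain_set(const GrainParamSet *sets, int n, int disp_w, int disp_h)
{
    const GrainParamSet *best = NULL;
    long best_diff = 0;

    for (int i = 0; i < n; i++) {
        long diff = labs((long)sets[i].apply_width  - disp_w) +
                    labs((long)sets[i].apply_height - disp_h);
        if (!best || diff < best_diff) {
            best      = &sets[i];
            best_diff = diff;
        }
    }
    return best;
}
```

A real implementation would also weigh grain profile and prediction dependencies, but the distance-based pick above captures the core idea.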
Niklas Haas 1535d33818 avcodec/aom_film_grain: add AOM film grain synthesis
Implementation copied wholesale from dav1d, sans SIMD, under permissive
license. This implementation was extensively verified to be bit-exact,
so it serves as a much better starting point than trying to re-engineer
this from scratch for no reason. (I also authored the original
implementation in dav1d, so any "clean room" implementation would end up
looking much the same, anyway)

The notable changes I had to make while adapting this from the dav1d
code-base to the FFmpeg codebase include:

- reordering variable declarations to avoid triggering warnings
- replacing several inline helpers by avutil equivalents
- changing code that accesses frame metadata
- replacing raw plane copying logic by av_image_copy_plane

Apart from this, the implementation is basically unmodified.
2024-03-23 18:55:21 +01:00
Niklas Haas a9023377b2 avutil/film_grain_params: add av_film_grain_params_select()
Common utility function that can be used by all codecs to select the
right (any valid) film grain parameter set. In particular, this is
useful for AFGS1, which has support for multiple parameter sets.

However, it also performs parameter validation for H274.
2024-03-23 18:55:15 +01:00
Niklas Haas ea147f3b50 avutil/frame: clarify AV_FRAME_DATA_FILM_GRAIN_PARAMS usage
To allow for AFGS1 usage, which can expose multiple parameter sets for
a single frame.
2024-03-23 18:54:36 +01:00
Niklas Haas 1539efaacb avcodec/libdav1d: signal new AVFilmGrainParams members
Not directly signalled by AV1, but we should still set this accordingly
so that users will know what the original intended video characteristics
and chroma resolution were.
2024-03-23 18:54:36 +01:00
Niklas Haas 511f297680 avcodec/av1dec: signal new AVFilmGrainParams members
Not directly signalled by AV1, but we should still set this accordingly
so that users will know what the original intended video characteristics
and chroma resolution were.
2024-03-23 18:54:36 +01:00
Niklas Haas ad7f059180 avcodec/h2645_sei: signal new AVFilmGrainParams members
H.274 specifies that film grain parameters are signalled as intended for
4:4:4 frames, so we always signal this, regardless of the frame's actual
subsampling.
2024-03-23 18:54:36 +01:00
Niklas Haas 6963033590 ffprobe: adapt to new AVFilmGrainParams
Follow the established convention of printing the bit depth metadata
per-component.
2024-03-23 18:54:36 +01:00
Niklas Haas 25cd0e0913 avfilter/vf_showinfo: adapt to new AVFilmGrainParams 2024-03-23 18:54:36 +01:00
Niklas Haas a08f358769 avutil/film_grain_params: initialize VCS to UNSPECIFIED 2024-03-23 18:54:36 +01:00
Niklas Haas 35d2960dcd avutil/film_grain_params: add metadata to common struct
This is needed for AV1 film grain as well, when using AFGS1 streams.
Also add extra width/height and subsampling information, which AFGS1
cares about, as part of the same API bump. (And in principle, H274
should also expose this information, since it is needed downstream to
correctly adjust the chroma grain frequency to the subsampling ratio)

Deprecate the equivalent H274-exclusive fields. To avoid breaking ABI,
add the new fields after the union; but with enough of a paper trail to
hopefully re-order them on the next bump.
2024-03-23 18:54:29 +01:00
Jun Zhao bfbf0f4e82 lavc/vvc_parser: small cleanup for style
Small cleanup for style: remove redundant semicolons and fix goto
label placement; in FFmpeg, we put goto labels at brace level.

Signed-off-by: Jun Zhao <barryjzhao@tencent.com>
2024-03-23 22:49:29 +08:00
Michael Niedermayer 57f252b2d1 avcodec/cbs_h266_syntax_template: Check tile_y
Fixes: out of array access
Fixes: 67021/clusterfuzz-testcase-minimized-ffmpeg_DEMUXER_fuzzer-4883576579489792

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2024-03-23 22:33:21 +08:00
Anton Khirnov e99594812c tests/fate/ffmpeg: evaluate thread count in fate-run.sh rather than make
Fixes fate-ffmpeg-loopback-decoding with THREADS=random*
2024-03-23 14:07:04 +01:00
Leo Izen 83ed18a3ca avformat/jpegxl_anim_dec: set pos for generic index
avpkt->pos needs to be set for generic indexing or features such as the
stream_loop option will not work.

Co-authored-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Signed-off-by: Leo Izen <leo.izen@gmail.com>
2024-03-23 07:29:18 -04:00
Wenbin Chen f34000541a Changelog: add dnn libtorch backend entry
Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
2024-03-23 11:42:13 +01:00
Stefano Sabatini 7852bf02b0 doc/muxers: add hds 2024-03-23 11:42:13 +01:00
Stefano Sabatini 25248c9d75 doc/muxers: add gxf 2024-03-23 11:42:13 +01:00
Stefano Sabatini 5c60be3ab6 lavf/gxfenc: return proper error codes in case of failure 2024-03-23 11:42:13 +01:00
Stefano Sabatini 3733aa7b17 lavf/gxfenc: consistently use snake_case in function names 2024-03-23 11:42:13 +01:00
Matthieu Bouron ad227a41d4 avcodec/mediacodec_wrapper: remove unnecessary NULL checks before calling Delete{Global,Local}Ref()
Delete{Global,Local}Ref already handle NULL.
2024-03-23 11:37:44 +01:00
Matthieu Bouron b1a683a2fd avcodec/mediacodec_wrapper: use an OFFSET() macro where relevant
Slightly reduces the horizontal spacing.
2024-03-23 11:37:44 +01:00
Matthieu Bouron dab4124350 avcodec/jni: remove unnecessary NULL checks before calling DeleteLocalRef()
Delete{Global,Local}Ref() already handle NULL.
2024-03-23 11:37:44 +01:00
Matthieu Bouron 70ba15d2cf avcodec/jni: use size_t to store structure offsets 2024-03-23 11:37:44 +01:00
Matthieu Bouron 6567516a5e avformat: add Android content resolver protocol support
Handles Android content URIs starting with content://.
2024-03-23 11:37:29 +01:00
Matthieu Bouron f17e18d292 avcodec: add av_jni_{get,set}_android_app_ctx() helpers
This will allow users to pass the Android ApplicationContext, which is
mandatory to retrieve the ContentResolver responsible for resolving and
opening Android content URIs.
2024-03-23 11:34:34 +01:00
Andreas Rheinhardt 073251316e avformat: Make init function out of write_header functions if possible
Also mark them as av_cold while at it.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
2024-03-22 23:57:20 +01:00
Andreas Rheinhardt 37f0dbbc39 avformat: Enforce codec_id where appropriate
E.g. chromaprint expects to be fed 16bit signed PCM
in native endianness, yet there was no check for this.
Similarly for other muxers. Use the new
FF_OFMT_FLAG_ONLY_DEFAULT_CODECS to enforce this where
appropriate, e.g. for pcm/raw muxers.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
2024-03-22 23:57:20 +01:00
Andreas Rheinhardt 2ccb45511f avformat/ttmlenc: Avoid unnecessary block
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
2024-03-22 23:57:20 +01:00
Andreas Rheinhardt a24bccc238 avformat/mux: Add flag for "only default codecs allowed"
AVOutputFormat has default codecs for audio, video and subtitle
and often these are the only codecs of this type allowed.
So add a flag to AVOutputFormat so that this can be checked generically.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
2024-03-22 23:57:19 +01:00
Andreas Rheinhardt 03b04eef72 avformat: Enforce one-stream limit where appropriate
Several muxers (e.g. pcm muxers) did not check the number
of streams even though the individual streams were not
recoverable from the muxed files. This commit changes
this by using the FF_OFMT_MAX_ONE_OF_EACH flag
where appropriate.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
2024-03-22 23:57:19 +01:00
Andreas Rheinhardt f4167842c1 avformat/mux: Add flag for "not more than one stream of each type"
More exactly: Not more than one stream of each type for which
a default codec (i.e. AVOutputFormat.(audio|video|subtitle)_codec)
is set; for those types for which no such codec is set (or for
which no designated default codec in AVOutputFormat exists at all)
no streams are permitted.

Given that with this flag set the default codecs become more important,
they are now set explicitly to AV_CODEC_ID_NONE for "unset";
the earlier code relied on AV_CODEC_ID_NONE being equal to zero,
so that default static initialization set it accordingly;
but this is not how one is supposed to use an enum.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
2024-03-22 23:57:19 +01:00
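The point about enum defaults can be shown in miniature. The sketch below uses hypothetical ToyOutputFormat/ToyCodecID types, not the real AVOutputFormat: static initialization zero-fills unset members, so relying on it only works while the NONE value happens to be zero, which is an implementation detail rather than something to lean on:

```c
typedef enum ToyCodecID {
    TOY_CODEC_ID_NONE = 0,   /* happens to be zero today */
    TOY_CODEC_ID_PCM,
} ToyCodecID;

typedef struct ToyOutputFormat {
    const char *name;
    ToyCodecID  audio_codec;  /* default audio codec, if any */
} ToyOutputFormat;

/* Static initialization zero-fills unset members, so audio_codec
 * comes out as TOY_CODEC_ID_NONE only because NONE == 0 ... */
static const ToyOutputFormat fmt_implicit = { .name = "implicit" };

/* ... whereas spelling the default out does not depend on the enum's
 * numeric value at all: */
static const ToyOutputFormat fmt_explicit = {
    .name        = "explicit",
    .audio_codec = TOY_CODEC_ID_NONE,
};
```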
Andreas Rheinhardt c6bc2d4fea fate/filter-audio: Don't use pcm output for channelsplit test
This test muxes two streams into a single pcm file, although
the two streams are of course not recoverable from the output
(unless one has extra information). So use the streamhash muxer
instead (which also provides coverage for it; it was surprisingly
unused in FATE so far). This is in preparation for actually
enforcing a limit of one stream for the PCM muxers.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
2024-03-22 23:57:19 +01:00
Andreas Rheinhardt a48e839a22 avformat/mux_utils: Don't report that AV_CODEC_ID_NONE can be muxed
If AVOutputFormat.video_codec, audio_codec or subtitle_codec
is AV_CODEC_ID_NONE, it means that there is no default codec
for this format and not that it is supported to mux AV_CODEC_ID_NONE.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
2024-03-22 23:57:19 +01:00
Andreas Rheinhardt 789c5b03db avformat/amr: Move write_header closer to muxer definition
Avoids one #if.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
2024-03-22 23:57:19 +01:00
Andreas Rheinhardt 233e13f285 avformat/mux: Rename FF_FMT_ALLOW_FLUSH->FF_OFMT_FLAG_ALLOW_FLUSH
It better reflects that this is a muxer-only flag.
Also document the flag.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
2024-03-22 23:57:19 +01:00
Andreas Rheinhardt b8124fe35e libavformat/westwood_audenc: Use proper logcontext
(AVStream did not have an AVClass when this muxer was added.)

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
2024-03-22 23:57:19 +01:00
Andreas Rheinhardt eb3ee7f141 avformat/mp3enc: Improve query_codec
Signal that anything except MP3 and the ID3V2 attached pic types
is forbidden.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
2024-03-22 23:57:19 +01:00
Andreas Rheinhardt d11b5e6096 avutil/frame: Use av_realloc_array(), improve overflow check
Also use sizeof of the proper type, namely sizeof(**sd)
and not sizeof(*sd).

Reviewed-by: Jan Ekström <jeebjp@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
2024-03-22 23:38:36 +01:00
Andreas Rheinhardt b7bec5d3c9 avutil/frame: Rename av_frame_side_data_get and add wrapper for it
av_frame_side_data_get() has a const AVFrameSideData * const *sd
parameter; so calling it with an AVFrameSideData **sd like
AVCodecContext.decoded_side_data (or with an AVFrameSideData * const
*sd) is safe, but the conversion is not performed automatically
in C. All users of this function therefore resort to a cast.

This commit changes this: av_frame_side_data_get() is renamed
to av_frame_side_data_get_c(); furthermore, a static inline
wrapper for it named av_frame_side_data_get() is added
that accepts an AVFrameSideData * const * and converts this
to const AVFrameSideData * const * in a Wcast-qual safe way.

This also allows removing the casts from the current users.

Reviewed-by: Jan Ekström <jeebjp@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
2024-03-22 23:38:16 +01:00
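A minimal sketch of this Wcast-qual-safe wrapper pattern, using a hypothetical SideData type rather than the real AVFrameSideData:

```c
#include <stddef.h>

typedef struct SideData { int type; } SideData;

/* The "real" lookup takes the most const-qualified pointer type. */
static const SideData *side_data_get_c(const SideData * const *sd,
                                       int nb_sd, int type)
{
    for (int i = 0; i < nb_sd; i++)
        if (sd[i]->type == type)
            return sd[i];
    return NULL;
}

/* Static inline wrapper: accepts SideData * const * and adds the
 * missing const via a cast hidden in exactly one place, so callers
 * holding a plain SideData **sd need no cast of their own. */
static inline const SideData *side_data_get(SideData * const *sd,
                                            int nb_sd, int type)
{
    return side_data_get_c((const SideData * const *)sd, nb_sd, type);
}
```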
Andreas Rheinhardt 26398da8f3 avutil/frame: Constify av_frame_side_data_get()
Reviewed-by: Jan Ekström <jeebjp@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
2024-03-22 23:36:07 +01:00
Andreas Rheinhardt b9fcc135c5 avcodec/libx265: Pass logctx as void*, not AVClass**
The latter need not be safe, because av_log() expects
to get a pointer to an AVClass-enabled structure
and not only a fake object. If this function were actually
called in the following way:

const AVClass *avcl = avctx->av_class;
handle_mdcv(&avcl, );

the AVClass's item_name would expect it to point to an actual
AVCodecContext, potentially leading to a segfault.

Reviewed-by: Jan Ekström <jeebjp@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
2024-03-22 23:32:15 +01:00
Andreas Rheinhardt 244db71037 avcodec/libx265: Don't use AVBPrint unnecessarily
This code uses the AVBPrint API for exactly one av_bprintf()
in a scenario in which a good upper bound for the needed
size of the buffer is available (with said upper bound being
much smaller than sizeof(AVBPrint)). So one can simply use
snprintf() instead. This also avoids the (always-false due to
the current size of the internal AVBPrint buffer) check for
whether the AVBPrint is complete.

Furthermore, the old code used AV_BPRINT_SIZE_AUTOMATIC
which implies that the AVBPrint buffer will never be
(re)allocated and yet it used av_bprint_finalize().
This has of course also been removed.

Reviewed-by: Jan Ekström <jeebjp@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
2024-03-22 23:31:58 +01:00
Andreas Rheinhardt c77164390b fftools/ffmpeg_enc: Don't call frame_data twice
Reviewed-by: Jan Ekström <jeebjp@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
2024-03-22 22:11:04 +01:00
Andreas Rheinhardt 6ecc2f0f6f avcodec/libx264: Remove unused variable
Reviewed-by: Zhao Zhili <quinkblack@foxmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
2024-03-22 22:10:26 +01:00
Andreas Rheinhardt 3fd047ee30 avcodec/librav1e: Don't unnecessarily create new references
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
2024-03-22 17:04:05 +01:00
Andreas Rheinhardt c89f6ae689 avcodec/libdav1d: Stop mangling AVPacket.opaque
Unnecessary since 67e7f0b05e
as there are no longer two opaque fields.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
2024-03-22 16:33:53 +01:00
Niklas Haas f04a2ba302 avcodec/dovi_rpu: fix off-by-one in loop
Otherwise the last VDR would never get copied.
2024-03-22 14:05:30 +01:00
Niklas Haas d5648a806f avcodec/dovi_rpu: use OR instead of addition 2024-03-22 14:05:22 +01:00
Zhao Zhili 4869171aa9 Changelog: mention ffplay with hwaccel decoding support
Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
2024-03-22 20:26:53 +08:00
Zhao Zhili 5229778440 avcodec/libx264: fix extradata when config annexb=0
AVCodecContext extradata should be an AVCDecoderConfigurationRecord
when bitstream format is avcc. Simply concatenating the NALUs output
by x264_encoder_headers does not form a standard
AVCDecoderConfigurationRecord. The following command generates a broken
file before the patch:

ffmpeg -i foo.mp4 -c:v libx264 -x264-params annexb=0 bar.mp4

Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
2024-03-22 20:26:53 +08:00
Zhao Zhili c775163a8c avcodec/decode: log hwaccel name
Many users mistakenly think that hwaccel is an instance of a decoder,
and cannot find the corresponding decoder name in the logs. Log the
hwaccel name so users know the hwaccel has taken effect.

Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
2024-03-22 20:26:53 +08:00
Andreas Rheinhardt ee736ff80e avformat/flvenc: Avoid avio_write(pb, "", 0)
When the compiler chooses to inline put_amf_string(pb, ""),
the avio_write(pb, "", 0) can be avoided. Happens with
Clang-17 with -O1 and higher and GCC 13 with -O2 and higher
here.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
2024-03-22 12:59:50 +01:00
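A rough illustration of the idea, with a hypothetical Sink standing in for AVIOContext and a simplified put_amf_string (AMF strings are a 16-bit big-endian length followed by the bytes). Guarding the body write lets an inlining compiler drop the zero-length call entirely:

```c
#include <string.h>
#include <stdint.h>

/* Hypothetical byte sink standing in for AVIOContext. */
typedef struct Sink {
    uint8_t buf[64];
    int     pos;
    int     writes;   /* number of write calls issued */
} Sink;

static void sink_write(Sink *s, const void *p, int n)
{
    s->writes++;
    memcpy(s->buf + s->pos, p, n);
    s->pos += n;
}

/* AMF string: 16-bit big-endian length, then the bytes.  Skipping the
 * body write for len == 0 avoids the equivalent of
 * avio_write(pb, "", 0) when called as put_amf_string(pb, ""). */
static void put_amf_string(Sink *s, const char *str)
{
    size_t  len    = strlen(str);
    uint8_t hdr[2] = { (uint8_t)(len >> 8), (uint8_t)(len & 0xff) };

    sink_write(s, hdr, 2);
    if (len)
        sink_write(s, str, (int)len);
}
```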
James Almer 535b1a93f5 avcodec/hevc_ps: fix setting HEVCHdrParams fields
These were defined in a way compatible with the Vulkan HEVC acceleration, which
expects bitmasks, yet the fields were being overwritten on each loop
iteration with the latest read value.

Signed-off-by: James Almer <jamrial@gmail.com>
2024-03-21 11:31:32 -03:00
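The bug pattern can be shown in miniature (toy helpers, not the actual hevc_ps code): the buggy variant keeps only the last value read, while the fix ORs each decoded flag into its component's bit of the mask:

```c
#include <stdint.h>

/* Toy version of the per-component loop: each iteration decodes a
 * one-bit flag for component i.  A bitmask consumer expects all bits,
 * not just the last value read. */
static uint8_t collect_flags_buggy(const int *bits, int n)
{
    uint8_t mask = 0;
    for (int i = 0; i < n; i++)
        mask = (uint8_t)bits[i];          /* overwrites previous bits */
    return mask;
}

static uint8_t collect_flags_fixed(const int *bits, int n)
{
    uint8_t mask = 0;
    for (int i = 0; i < n; i++)
        mask |= (uint8_t)(bits[i] << i);  /* OR into bit position i */
    return mask;
}
```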
James Almer 456c8ebe7c avcodec/hevc_ps: allocate only the required HEVCHdrParams within a VPS
Fixes: timeout
Fixes: 64033/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_HEVC_fuzzer-5332101272305664

Signed-off-by: James Almer <jamrial@gmail.com>
2024-03-21 09:59:20 -03:00
James Almer 97d2990ea6 avformat/iamf_reader: propagate avio_skip() error values
Fixes: null pointer dereference
Fixes: 67007/clusterfuzz-testcase-minimized-ffmpeg_dem_IAMF_fuzzer-6522819204677632

Tested-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: James Almer <jamrial@gmail.com>
2024-03-21 09:08:22 -03:00
James Almer e04c638f5f avformat/movenc: only compile avif_write_trailer() when the avif muxer is enabled
Signed-off-by: James Almer <jamrial@gmail.com>
2024-03-20 23:50:55 -03:00
James Almer 5ff0eb34d2 configure: check for C17 by default
Signed-off-by: James Almer <jamrial@gmail.com>
2024-03-20 17:11:18 -03:00
James Almer 6c2ff982dc configure: make the C and C++ standard settable
While ensuring it's at least C11, the minimum supported version.
Also, enforce C11 on the host compiler, same as we already do for C11 on the
target compiler.

Tested-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: James Almer <jamrial@gmail.com>
2024-03-20 17:10:55 -03:00
Jan Ekström d7d2213a6b avcodec/libx265: add support for writing out CLL and MDCV
The newer of these two is the separate integers for content light
level, introduced in 3952bf3e98c76c31594529a3fe34e056d3e3e2ea,
with X265_BUILD 75. As we already require X265_BUILD of at least
89, no further conditions are required.
2024-03-20 19:15:05 +02:00
Jan Ekström 471c0a34c1 avcodec/libx264: add support for writing out CLL and MDCV
Both of these two structures were first available with X264_BUILD
163, so make relevant functionality conditional on the version
being at least such.

Keep handle_side_data available in all cases as this way X264_init
does not require additional version-based conditions within it.

Finally, add a FATE test which verifies that pass-through of the
MDCV/CLL side data is working during encoding.
2024-03-20 19:15:05 +02:00
Jan Ekström f4b89b6e54 avcodec/libsvtav1: add support for writing out CLL and MDCV
These two were added in 28e23d7f348c78d49a726c7469f9d4e38edec341
and 3558c1f2e97455e0b89edef31b9a72ab7fa30550 for version 0.9.0 of
SVT-AV1, which is also our minimum requirement right now.

In other words, no additional version limiting conditions seem
to be required.

Additionally, add a FATE test which verifies that pass-through of
the MDCV/CLL side data is working during encoding.
2024-03-20 19:15:05 +02:00
Jan Ekström 8f4b173029 ffmpeg: pass first video AVFrame's side data to encoder
This enables further configuration of output based on the results
of input decoding and filtering in a similar manner as the color
information.
2024-03-20 19:15:05 +02:00
Jan Ekström 0d36844ddf avcodec: add frame side data array to AVCodecContext
This allows configuring an encoder by using AVFrameSideData.
2024-03-20 19:15:05 +02:00
Jan Ekström d9ade14c5c {avutil/version,APIchanges}: bump, document new AVFrameSideData functions 2024-03-20 19:15:05 +02:00
Jan Ekström f287a285d9 avutil/frame: add helper for getting side data from array 2024-03-20 19:15:05 +02:00
Jan Ekström 3c52f73e25 avutil/frame: add helper for adding existing side data to array 2024-03-20 19:14:02 +02:00
Jan Ekström 53335f6cf4 avutil/frame: add helper for adding side data to array
Additionally, add an API test to check that the no-duplicates
addition works after duplicates have been inserted.
2024-03-20 19:14:02 +02:00
Jan Ekström d2bb22f6d5 avutil/frame: split side data removal out to non-AVFrame function
This will make it possible to reuse logic in further commits.
2024-03-20 19:14:02 +02:00
Jan Ekström 28783896dc avutil/frame: split side_data_from_buf to base and AVFrame func 2024-03-20 19:14:02 +02:00
Jan Ekström 919c9cdbe6 avutil/frame: add helper for freeing arrays of side data 2024-03-20 19:14:02 +02:00
Jan Ekström d5104b3401 avutil/frame: split side data list wiping out to non-AVFrame function
This will make it possible to reuse logic in further commits.
2024-03-20 19:14:02 +02:00
Frank Plowman dfcf5f828d lavc/vvc: Fix check whether QG is in first tile col
The second part of this condition is intended to check whether the
current quantisation group is in the first CTU column of the current
tile.  The issue is that ctb_to_col_bd gives the x-ordinate of the first
column of the current tile *in CTUs*, while xQg gives the x-ordinate of
the quantisation group *in samples*.  Rectify this by shifting xQg by
ctb_log2_size to get xQg in CTUs before comparing.

Fixes FFVVC issues #201 and #203.
2024-03-20 22:27:19 +08:00
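In miniature (hypothetical helper, not the actual FFVVC code), the fix amounts to converting the sample coordinate to CTU units before comparing, since a CTU is 1 << ctb_log2_size luma samples wide:

```c
/* xQg is in luma samples; the tile's first column is in CTUs.
 * Shift xQg down by ctb_log2_size so both sides use CTU units. */
static int qg_in_first_tile_col(int xQg, int ctb_log2_size,
                                int tile_first_ctu_col)
{
    return (xQg >> ctb_log2_size) == tile_first_ctu_col;
}
```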
Andreas Rheinhardt 0b7d4fccce avformat/codec2: Don't allocate Codec2Context for muxer
Only the demuxers use it.

Reviewed-by: Tomas Härdin <git@haerdin.se>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
2024-03-20 12:46:23 +01:00
Andreas Rheinhardt cd8cc3d1b3 avformat/iamfenc: Remove unused headers
Forgotten in c95c8a0158.

Reviewed-by: James Almer <jamrial@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
2024-03-20 10:17:59 +01:00
Andreas Rheinhardt 6a9ddfcd96 avformat/iamfenc: Align check and error message
Reviewed-by: James Almer <jamrial@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
2024-03-20 10:17:52 +01:00
Andreas Rheinhardt a7ad5d4d10 avformat/iamfenc: Remove always-false check
This muxer does not have the AVFMT_NOSTREAMS flag; therefore
it is checked generically that there is at least one stream.

Reviewed-by: James Almer <jamrial@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
2024-03-20 10:17:37 +01:00
Mark Thompson 7f4b8d2f5e ffmpeg: set extra_hw_frames to account for frames held in queues
Since e0da916b8f the ffmpeg utility has
held multiple frames output by the decoder in internal queues without
telling the decoder that it is going to do so.  When the decoder has a
fixed-size pool of frames (common in some hardware APIs where the output
frames must be stored as an array texture) this could lead to the pool
being exhausted and the decoder getting stuck.  Fix this by telling the
decoder to allocate additional frames according to the queue size.
2024-03-19 22:56:56 +00:00
Marton Balint 7251f90972 fftools/ffplay: use correct buffersink channel layout parameters
Regression since 0995e1f1b3.

Signed-off-by: Marton Balint <cus@passwd.hu>
2024-03-19 20:48:22 +01:00
Stefano Sabatini 0cd13ad674 doc/muxers/gif: apply consistency fixes 2024-03-19 17:23:20 +01:00
Stefano Sabatini f7d560e919 doc/muxers/flv: apply misc consistency fixes 2024-03-19 17:23:20 +01:00
Stefano Sabatini 9afd9bb5c5 doc/muxers: add flac 2024-03-19 17:23:05 +01:00
Marth64 0b342a2f15 avcodec/mpeg12dec: extract only one type of CC substream
In MPEG-2 user data, there can be different types of Closed Captions
formats embedded (A53, SCTE-20, or DVD). The current behavior of the
CC extraction code in the MPEG-2 decoder is not aware of multiple
coexisting formats, therefore allowing one format to overwrite the
other during the extraction process, since the CC extraction shares
one output buffer for the normalized bytes.

This causes sources that have two CC formats to produce flawed output.
There exist real-world samples which contain both A53 and SCTE-20 captions
in the same MPEG-2 stream, and that manifest this problem. Example of symptom:
THANK YOU (expected) --> THTHANANK K YOYOUU (actual)

The solution is to pick only the first CC substream observed with valid bytes,
and ignore the other types. Additionally, provide an option for users
to manually "force" a type in the event that this matters for a particular
source.

Signed-off-by: Marth64 <marth64@proxyid.net>
2024-03-19 15:52:05 +01:00
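The pick-first-substream logic can be sketched as follows (hypothetical types; the real decoder option handling and buffer management differ):

```c
typedef enum CCFormat { CC_NONE = 0, CC_A53, CC_SCTE20, CC_DVD } CCFormat;

typedef struct CCState {
    CCFormat selected;   /* first format seen with valid bytes */
} CCState;

/* Accept bytes only from the first format observed (or from a
 * user-forced one), so e.g. A53 and SCTE-20 packets no longer
 * interleave into one shared output buffer. */
static int cc_accept(CCState *s, CCFormat fmt, CCFormat forced)
{
    if (forced != CC_NONE)
        return fmt == forced;
    if (s->selected == CC_NONE)
        s->selected = fmt;       /* lock onto the first substream */
    return fmt == s->selected;
}
```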
James Almer 53dd31497b avformat/matroska: use named constants for ITU-T T.35 metadata
Signed-off-by: James Almer <jamrial@gmail.com>
2024-03-19 10:49:20 -03:00
James Almer 61519cc654 avcodec/libdav1d: use named constants for ITU-T T.35 metadata
Signed-off-by: James Almer <jamrial@gmail.com>
2024-03-19 09:44:59 -03:00
James Almer a1f714d197 avcodec/h2645_sei: use named constants for ITU-T T.35 metadata
Signed-off-by: James Almer <jamrial@gmail.com>
2024-03-19 09:44:59 -03:00
James Almer 4ca5d45193 avcodec/av1dec: use named constants for ITU-T T.35 metadata
Signed-off-by: James Almer <jamrial@gmail.com>
2024-03-19 09:44:59 -03:00
Wenbin Chen f4e0664fd1 libavfi/dnn: add LibTorch as one of DNN backend
PyTorch is an open source machine learning framework that accelerates
the path from research prototyping to production deployment. Official
website: https://pytorch.org/. The C++ library of PyTorch is referred
to as LibTorch below.

To build FFmpeg with LibTorch, take the following steps as
reference:
1. Download the LibTorch C++ library from
 https://pytorch.org/get-started/locally/;
select C++/Java as the language, and other options as needed.
Download the cxx11 ABI version
 (libtorch-cxx11-abi-shared-with-deps-*.zip).
2. Unzip the file to your own directory, with the command
unzip libtorch-shared-with-deps-latest.zip -d your_dir
3. Export libtorch_root/libtorch/include and
libtorch_root/libtorch/include/torch/csrc/api/include to $PATH, and
export libtorch_root/libtorch/lib/ to $LD_LIBRARY_PATH.
4. Configure FFmpeg with ../configure --enable-libtorch \
 --extra-cflag=-I/libtorch_root/libtorch/include \
 --extra-cflag=-I/libtorch_root/libtorch/include/torch/csrc/api/include \
 --extra-ldflags=-L/libtorch_root/libtorch/lib/
5. make

To run FFmpeg DNN inference with LibTorch backend:
./ffmpeg -i input.jpg -vf \
dnn_processing=dnn_backend=torch:model=LibTorch_model.pt -y output.jpg

The LibTorch_model.pt can be generated by Python with the
torch.jit.script() API; see
https://pytorch.org/tutorials/advanced/cpp_export.html, the official
PyTorch guide on how to convert and load a TorchScript model.
Please note, torch.jit.trace() is not recommended, since it does
not support ambiguous input sizes.

Signed-off-by: Ting Fu <ting.fu@intel.com>
Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
Reviewed-by: Guo Yejun <yejun.guo@intel.com>
2024-03-19 14:48:58 +08:00
141 changed files with 4109 additions and 963 deletions

Changelog

@@ -35,6 +35,8 @@ version <next>:
- AEA muxer
- ffmpeg CLI loopback decoders
- Support PacketTypeMetadata of PacketType in enhanced flv format
- ffplay with hwaccel decoding support (depends on vulkan renderer via libplacebo)
- dnn filter libtorch backend
version 6.1:

configure

@@ -281,6 +281,7 @@ External library support:
--enable-libtheora enable Theora encoding via libtheora [no]
--enable-libtls enable LibreSSL (via libtls), needed for https support
if openssl, gnutls or mbedtls is not used [no]
--enable-libtorch enable Torch as one DNN backend [no]
--enable-libtwolame enable MP2 encoding via libtwolame [no]
--enable-libuavs3d enable AVS3 decoding via libuavs3d [no]
--enable-libv4l2 enable libv4l2/v4l-utils [no]
@@ -386,7 +387,9 @@ Toolchain options:
--windres=WINDRES use windows resource compiler WINDRES [$windres_default]
--x86asmexe=EXE use nasm-compatible assembler EXE [$x86asmexe_default]
--cc=CC use C compiler CC [$cc_default]
--stdc=STDC use C standard STDC [$stdc_default]
--cxx=CXX use C compiler CXX [$cxx_default]
--stdcxx=STDCXX use C standard STDCXX [$stdcxx_default]
--objcc=OCC use ObjC compiler OCC [$cc_default]
--dep-cc=DEPCC use dependency generator DEPCC [$cc_default]
--nvcc=NVCC use Nvidia CUDA compiler NVCC or clang [$nvcc_default]
@@ -1453,6 +1456,33 @@ test_cflags_cc(){
EOF
}
check_cflags_cc(){
log check_cflags_cc "$@"
flags=$1
test_cflags_cc "$@" && add_cflags $flags
}
test_cxxflags_cc(){
log test_cxxflags_cc "$@"
flags=$1
header=$2
condition=$3
shift 3
set -- $($cflags_filter "$flags")
test_cxx "$@" <<EOF
#include <$header>
#if !($condition)
#error "unsatisfied condition: $condition"
#endif
EOF
}
check_cxxflags_cc(){
log check_cxxflags_cc "$@"
flags=$1
test_cxxflags_cc "$@" && add_cxxflags $flags
}
check_lib(){
log check_lib "$@"
name="$1"
@@ -1694,6 +1724,27 @@ int x;
EOF
}
test_host_cflags_cc(){
log test_host_cflags_cc "$@"
flags=$1
header=$2
condition=$3
shift 3
set -- $($host_cflags_filter "$flags")
test_host_cc "$@" <<EOF
#include <$header>
#if !($condition)
#error "unsatisfied condition: $condition"
#endif
EOF
}
check_host_cflags_cc(){
log check_host_cflags_cc "$@"
flags=$1
test_host_cflags_cc "$@" && add_host_cflags $flags
}
test_host_cpp_condition(){
log test_host_cpp_condition "$@"
header=$1
@@ -1905,6 +1956,7 @@ EXTERNAL_LIBRARY_LIST="
libtensorflow
libtesseract
libtheora
libtorch
libtwolame
libuavs3d
libv4l2
@@ -2531,6 +2583,7 @@ CONFIG_EXTRA="
jpegtables
lgplv3
libx262
libx264_hdr10
llauddsp
llviddsp
llvidencdsp
@@ -2650,6 +2703,8 @@ CMDLINE_SET="
random_seed
ranlib
samples
stdc
stdcxx
strip
sws_max_filter_size
sysinclude
@@ -2785,7 +2840,7 @@ cbs_vp9_select="cbs"
deflate_wrapper_deps="zlib"
dirac_parse_select="golomb"
dovi_rpu_select="golomb"
dnn_suggest="libtensorflow libopenvino"
dnn_suggest="libtensorflow libopenvino libtorch"
dnn_deps="avformat swscale"
error_resilience_select="me_cmp"
evcparse_select="golomb"
@@ -3484,7 +3539,7 @@ libwebp_encoder_deps="libwebp"
libwebp_anim_encoder_deps="libwebp"
libx262_encoder_deps="libx262"
libx264_encoder_deps="libx264"
libx264_encoder_select="atsc_a53"
libx264_encoder_select="atsc_a53 golomb"
libx264rgb_encoder_deps="libx264"
libx264rgb_encoder_select="libx264_encoder"
libx265_encoder_deps="libx265"
@@ -3656,6 +3711,8 @@ xcbgrab_indev_suggest="libxcb_shm libxcb_shape libxcb_xfixes"
xv_outdev_deps="xlib_xv xlib_x11 xlib_xext"
# protocols
android_content_protocol_deps="jni"
android_content_protocol_select="file_protocol"
async_protocol_deps="threads"
bluray_protocol_deps="libbluray"
ffrtmpcrypt_protocol_conflict="librtmp_protocol"
@@ -3978,6 +4035,8 @@ mandir_default='${prefix}/share/man'
# toolchain
ar_default="ar"
cc_default="gcc"
stdc_default="c17"
stdcxx_default="c++11"
cxx_default="g++"
host_cc_default="gcc"
doxygen_default="doxygen"
@@ -4585,7 +4644,7 @@ if enabled cuda_nvcc; then
fi
set_default arch cc cxx doxygen pkg_config ranlib strip sysinclude \
target_exec x86asmexe metalcc metallib
target_exec x86asmexe metalcc metallib stdc stdcxx
enabled cross_compile || host_cc_default=$cc
set_default host_cc
@@ -4755,7 +4814,7 @@ icl_flags(){
# Despite what Intel's documentation says -Wall, which is supported
# on Windows, does enable remarks so disable them here.
-Wall) echo $flag -Qdiag-disable:remark ;;
-std=c11) echo -Qstd=c11 ;;
-std=$stdc) echo -Qstd=$stdc ;;
-flto*) echo -ipo ;;
esac
done
@ -4803,7 +4862,7 @@ suncc_flags(){
athlon*) echo -xarch=pentium_proa ;;
esac
;;
-std=c11) echo -xc11 ;;
-std=$stdc) echo -x$stdc ;;
-fomit-frame-pointer) echo -xregs=frameptr ;;
-fPIC) echo -KPIC -xcode=pic32 ;;
-W*,*) echo $flag ;;
@ -4892,8 +4951,8 @@ probe_cc(){
_type=suncc
_ident=$($_cc -V 2>&1 | head -n1 | cut -d' ' -f 2-)
_DEPCMD='$(DEP$(1)) $(DEP$(1)FLAGS) $($(1)DEP_FLAGS) $< | sed -e "1s,^.*: ,$@: ," -e "\$$!s,\$$, \\\," -e "1!s,^.*: , ," > $(@:.o=.d)'
_DEPFLAGS='-xM1 -xc11'
_ldflags='-std=c11'
_DEPFLAGS='-xM1 -x$stdc'
_ldflags='-std=$stdc'
_cflags_speed='-O5'
_cflags_size='-O5 -xspace'
_flags_filter=suncc_flags
@ -5524,18 +5583,21 @@ fi
add_cppflags -D_ISOC11_SOURCE
add_cxxflags -D__STDC_CONSTANT_MACROS
check_cxxflags -std=c++11 || check_cxxflags -std=c++0x
check_cxxflags_cc -std=$stdcxx ctype.h "__cplusplus >= 201103L" ||
{ check_cxxflags -std=c++11 && stdcxx="c++11" || { check_cxxflags -std=c++0x && stdcxx="c++0x"; }; }
# some compilers silently accept -std=c11, so we also need to check that the
# version macro is defined properly
test_cflags_cc -std=c11 ctype.h "__STDC_VERSION__ >= 201112L" &&
add_cflags -std=c11 || die "Compiler lacks C11 support"
check_cflags_cc -std=$stdc ctype.h "__STDC_VERSION__ >= 201112L" ||
{ check_cflags_cc -std=c11 ctype.h "__STDC_VERSION__ >= 201112L" && stdc="c11" || die "Compiler lacks C11 support"; }
check_cppflags -D_FILE_OFFSET_BITS=64
check_cppflags -D_LARGEFILE_SOURCE
add_host_cppflags -D_ISOC11_SOURCE
check_host_cflags -std=c11
check_host_cflags_cc -std=$stdc ctype.h "__STDC_VERSION__ >= 201112L" ||
check_host_cflags_cc -std=c11 ctype.h "__STDC_VERSION__ >= 201112L" || die "Host compiler lacks C11 support"
check_host_cflags -Wall
check_host_cflags $host_cflags_speed
@ -6884,6 +6946,7 @@ enabled libtensorflow && require libtensorflow tensorflow/c/c_api.h TF_Versi
enabled libtesseract && require_pkg_config libtesseract tesseract tesseract/capi.h TessBaseAPICreate
enabled libtheora && require libtheora theora/theoraenc.h th_info_init -ltheoraenc -ltheoradec -logg
enabled libtls && require_pkg_config libtls libtls tls.h tls_configure
enabled libtorch && check_cxxflags -std=c++17 && require_cpp libtorch torch/torch.h "torch::Tensor" -ltorch -lc10 -ltorch_cpu -lstdc++ -lpthread
enabled libtwolame && require libtwolame twolame.h twolame_init -ltwolame &&
{ check_lib libtwolame twolame.h twolame_encode_buffer_float32_interleaved -ltwolame ||
die "ERROR: libtwolame must be installed and version must be >= 0.3.10"; }
@ -6925,6 +6988,7 @@ enabled libx264 && require_pkg_config libx264 x264 "stdint.h x264.h" x
require_cpp_condition libx264 x264.h "X264_BUILD >= 122" && {
[ "$toolchain" != "msvc" ] ||
require_cpp_condition libx264 x264.h "X264_BUILD >= 158"; } &&
check_cpp_condition libx264_hdr10 x264.h "X264_BUILD >= 163" &&
check_cpp_condition libx262 x264.h "X264_MPEG2"
enabled libx265 && require_pkg_config libx265 x265 x265.h x265_api_get &&
require_cpp_condition libx265 x265.h "X265_BUILD >= 89"


@ -2,6 +2,32 @@ The last version increases of all libraries were on 2024-03-07
API changes, most recent first:
2024-03-xx - xxxxxxxxxx - lavu 59.6.100 - film_grain_params.h
Add av_film_grain_params_select().
2024-03-xx - xxxxxxxxxx - lavu 59.5.100 - film_grain_params.h
Add AVFilmGrainParams.color_range, color_primaries, color_trc, color_space,
width, height, subsampling_x, subsampling_y, bit_depth_luma and
bit_depth_chroma. Deprecate the corresponding fields from
AVFilmGrainH274Params.
2024-03-xx - xxxxxxxxxx - lavc 61.3.100 - jni.h
Add av_jni_set_android_app_ctx() and av_jni_get_android_app_ctx().
2024-03-22 - xxxxxxxxxx - lavu 59.4.100 - frame.h
Constified the first-level pointee of av_frame_side_data_get()
and renamed it to av_frame_side_data_get_c(). From now on,
av_frame_side_data_get() is a wrapper around av_frame_side_data_get_c()
that accepts AVFrameSideData * const *sd.
2024-03-xx - xxxxxxxxxx - lavc 61.2.100 - avcodec.h
Add AVCodecContext.[nb_]decoded_side_data.
2024-03-xx - xxxxxxxxxx - lavu 59.3.100 - frame.h
Add av_frame_side_data_free(), av_frame_side_data_new(),
av_frame_side_data_clone(), av_frame_side_data_get() as well
as AV_FRAME_SIDE_DATA_FLAG_UNIQUE.
2024-03-xx - xxxxxxxxxx - lavu 59.2.100 - channel_layout.h
Add AV_CHANNEL_LAYOUT_RETYPE_FLAG_CANONICAL.


@ -1576,19 +1576,35 @@ This image format is used to store astronomical data.
For more information regarding the format, visit
@url{https://fits.gsfc.nasa.gov}.
@section flv
@section flac
Raw FLAC audio muxer.
This muxer accepts exactly one FLAC audio stream. Additionally, it is possible to add
images with disposition @samp{attached_pic}.
@subsection Options
@table @option
@item write_header @var{bool}
write the file header if set to @code{true}, default is @code{true}
@end table
@subsection Example
Use @command{ffmpeg} to store the audio stream from an input file,
together with several pictures used with @samp{attached_pic}
disposition:
@example
ffmpeg -i INPUT -i pic1.png -i pic2.jpg -map 0:a -map 1 -map 2 -disposition:v attached_pic OUTPUT
@end example
@section flv
Adobe Flash Video Format muxer.
This muxer accepts the following options:
@subsection Options
@table @option
@item flvflags @var{flags}
Possible values:
@table @samp
@item aac_seq_header_detect
Place AAC sequence header based on audio stream data.
@ -1729,24 +1745,26 @@ See also the @ref{framehash} and @ref{md5} muxers.
@anchor{gif}
@section gif
Animated GIF muxer.
It accepts the following options:
Note that the GIF format has a very large time base: the delay between two frames can
therefore not be smaller than one centisecond.
@subsection Options
@table @option
@item loop
@item loop @var{bool}
Set the number of times to loop the output. Use @code{-1} for no loop, @code{0}
for looping indefinitely (default).
@item final_delay
@item final_delay @var{delay}
Force the delay (expressed in centiseconds) after the last frame. Each frame
ends with a delay until the next frame. The default is @code{-1}, which is a
special value to tell the muxer to re-use the previous delay. In case of a
loop, you might want to customize this value to mark a pause for instance.
@end table
For example, to encode a gif looping 10 times, with a 5 seconds delay between
@subsection Example
Encode a gif looping 10 times, with a 5 seconds delay between
the loops:
@example
ffmpeg -i INPUT -loop 10 -final_delay 500 out.gif
@ -1758,8 +1776,17 @@ force the @ref{image2} muxer:
ffmpeg -i INPUT -c:v gif -f image2 "out%d.gif"
@end example
Note 2: the GIF format has a very large time base: the delay between two frames
can therefore not be smaller than one centisecond.
@section gxf
General eXchange Format (GXF) muxer.
GXF was developed by Grass Valley Group, then standardized by SMPTE as SMPTE
360M and was extended in SMPTE RDD 14-2007 to include high-definition video
resolutions.
It accepts at most one video stream with codec @samp{mjpeg}, or
@samp{mpeg1video}, or @samp{mpeg2video}, or @samp{dvvideo} with resolution
@samp{512x480} or @samp{608x576}, and several audio streams with sample rate
48000Hz and codec @samp{pcm_s16le}.
@anchor{hash}
@section hash
@ -1806,6 +1833,45 @@ ffmpeg -i INPUT -f hash -hash md5 -
See also the @ref{framehash} muxer.
@anchor{hds}
@section hds
HTTP Dynamic Streaming (HDS) muxer.
HTTP dynamic streaming, or HDS, is an adaptive bitrate streaming method
developed by Adobe. HDS delivers MP4 video content over HTTP connections. HDS
can be used for on-demand streaming or live streaming.
This muxer creates an .f4m (Adobe Flash Media Manifest File) manifest, an .abst
(Adobe Bootstrap File) for each stream, and segment files in a directory
specified as the output.
These need to be accessed by an HDS player through HTTPS for it to be able to
perform playback on the generated stream.
@subsection Options
@table @option
@item extra_window_size @var{int}
number of fragments kept outside of the manifest before removing from disk
@item min_frag_duration @var{microseconds}
minimum fragment duration (in microseconds), default value is 10 seconds
(@code{10000000})
@item remove_at_exit @var{bool}
remove all fragments when finished, if set to @code{true}
@item window_size @var{int}
number of fragments kept in the manifest, if set to a value different from
@code{0}. By default all segments are kept in the output directory.
@end table
@subsection Example
Use @command{ffmpeg} to generate HDS files to the @file{output.hds} directory at
real-time rate:
@example
ffmpeg -re -i INPUT -f hds -b:v 200k output.hds
@end example
@anchor{hls}
@section hls


@ -1207,6 +1207,19 @@ static int dec_open(DecoderPriv *dp, AVDictionary **dec_opts,
return ret;
}
if (dp->dec_ctx->hw_device_ctx) {
// Update decoder extra_hw_frames option to account for the
// frames held in queues inside the ffmpeg utility. This is
// called after avcodec_open2() because the user-set value of
// extra_hw_frames becomes valid in there, and we need to add
// this on top of it.
int extra_frames = DEFAULT_FRAME_THREAD_QUEUE_SIZE;
if (dp->dec_ctx->extra_hw_frames >= 0)
dp->dec_ctx->extra_hw_frames += extra_frames;
else
dp->dec_ctx->extra_hw_frames = extra_frames;
}
ret = check_avoptions(*dec_opts);
if (ret < 0)
return ret;


@ -246,6 +246,21 @@ int enc_open(void *opaque, const AVFrame *frame)
enc_ctx->colorspace = frame->colorspace;
enc_ctx->chroma_sample_location = frame->chroma_location;
for (int i = 0; i < frame->nb_side_data; i++) {
ret = av_frame_side_data_clone(
&enc_ctx->decoded_side_data, &enc_ctx->nb_decoded_side_data,
frame->side_data[i], AV_FRAME_SIDE_DATA_FLAG_UNIQUE);
if (ret < 0) {
av_frame_side_data_free(
&enc_ctx->decoded_side_data,
&enc_ctx->nb_decoded_side_data);
av_log(NULL, AV_LOG_ERROR,
"failed to configure video encoder: %s!\n",
av_err2str(ret));
return ret;
}
}
if (enc_ctx->flags & (AV_CODEC_FLAG_INTERLACED_DCT | AV_CODEC_FLAG_INTERLACED_ME) ||
(frame->flags & AV_FRAME_FLAG_INTERLACED)
#if FFMPEG_OPT_TOP
@ -631,7 +646,6 @@ static int encode_frame(OutputFile *of, OutputStream *ost, AVFrame *frame,
if (frame) {
FrameData *fd = frame_data(frame);
fd = frame_data(frame);
if (!fd)
return AVERROR(ENOMEM);


@ -365,7 +365,21 @@ static int queue_alloc(ThreadQueue **ptq, unsigned nb_streams, unsigned queue_si
ThreadQueue *tq;
ObjPool *op;
queue_size = queue_size > 0 ? queue_size : 8;
if (queue_size <= 0) {
if (type == QUEUE_FRAMES)
queue_size = DEFAULT_FRAME_THREAD_QUEUE_SIZE;
else
queue_size = DEFAULT_PACKET_THREAD_QUEUE_SIZE;
}
if (type == QUEUE_FRAMES) {
// This queue length is used in the decoder code to ensure that
// there are enough entries in fixed-size frame pools to account
// for frames held in queues inside the ffmpeg utility. If this
// can ever dynamically change then the corresponding decode
// code needs to be updated as well.
av_assert0(queue_size == DEFAULT_FRAME_THREAD_QUEUE_SIZE);
}
op = (type == QUEUE_PACKETS) ? objpool_alloc_packets() :
objpool_alloc_frames();


@ -233,6 +233,18 @@ int sch_add_filtergraph(Scheduler *sch, unsigned nb_inputs, unsigned nb_outputs,
*/
int sch_add_mux(Scheduler *sch, SchThreadFunc func, int (*init)(void *),
void *ctx, int sdp_auto, unsigned thread_queue_size);
/**
* Default size of a packet thread queue. For muxing this can be overridden by
* the thread_queue_size option as passed to a call to sch_add_mux().
*/
#define DEFAULT_PACKET_THREAD_QUEUE_SIZE 8
/**
* Default size of a frame thread queue.
*/
#define DEFAULT_FRAME_THREAD_QUEUE_SIZE 8
/**
* Add a muxed stream for a previously added muxer.
*


@ -2040,6 +2040,8 @@ static int configure_audio_filters(VideoState *is, const char *afilters, int for
goto end;
if (force_output_format) {
av_bprint_clear(&bp);
av_channel_layout_describe_bprint(&is->audio_tgt.ch_layout, &bp);
sample_rates [0] = is->audio_tgt.freq;
if ((ret = av_opt_set_int(filt_asink, "all_channel_counts", 0, AV_OPT_SEARCH_CHILDREN)) < 0)
goto end;


@ -2402,22 +2402,41 @@ static void print_ambient_viewing_environment(WriterContext *w,
static void print_film_grain_params(WriterContext *w,
const AVFilmGrainParams *fgp)
{
const char *color_range, *color_primaries, *color_trc, *color_space;
const char *const film_grain_type_names[] = {
[AV_FILM_GRAIN_PARAMS_NONE] = "none",
[AV_FILM_GRAIN_PARAMS_AV1] = "av1",
[AV_FILM_GRAIN_PARAMS_H274] = "h274",
};
AVBPrint pbuf;
if (!fgp)
if (!fgp || fgp->type >= FF_ARRAY_ELEMS(film_grain_type_names))
return;
color_range = av_color_range_name(fgp->color_range);
color_primaries = av_color_primaries_name(fgp->color_primaries);
color_trc = av_color_transfer_name(fgp->color_trc);
color_space = av_color_space_name(fgp->color_space);
av_bprint_init(&pbuf, 1, AV_BPRINT_SIZE_UNLIMITED);
print_str("type", film_grain_type_names[fgp->type]);
print_fmt("seed", "%"PRIu64, fgp->seed);
print_int("width", fgp->width);
print_int("height", fgp->height);
print_int("subsampling_x", fgp->subsampling_x);
print_int("subsampling_y", fgp->subsampling_y);
print_str("color_range", color_range ? color_range : "unknown");
print_str("color_primaries", color_primaries ? color_primaries : "unknown");
print_str("color_trc", color_trc ? color_trc : "unknown");
print_str("color_space", color_space ? color_space : "unknown");
switch (fgp->type) {
case AV_FILM_GRAIN_PARAMS_NONE:
print_str("type", "none");
break;
case AV_FILM_GRAIN_PARAMS_AV1: {
const AVFilmGrainAOMParams *aom = &fgp->codec.aom;
const int num_ar_coeffs_y = 2 * aom->ar_coeff_lag * (aom->ar_coeff_lag + 1);
const int num_ar_coeffs_uv = num_ar_coeffs_y + !!aom->num_y_points;
print_str("type", "av1");
print_fmt("seed", "%"PRIu64, fgp->seed);
print_int("chroma_scaling_from_luma", aom->chroma_scaling_from_luma);
print_int("scaling_shift", aom->scaling_shift);
print_int("ar_coeff_lag", aom->ar_coeff_lag);
@ -2431,6 +2450,7 @@ static void print_film_grain_params(WriterContext *w,
if (aom->num_y_points) {
writer_print_section_header(w, NULL, SECTION_ID_FRAME_SIDE_DATA_COMPONENT);
print_int("bit_depth_luma", fgp->bit_depth_luma);
print_list_fmt("y_points_value", "%"PRIu8, aom->num_y_points, 1, aom->y_points[idx][0]);
print_list_fmt("y_points_scaling", "%"PRIu8, aom->num_y_points, 1, aom->y_points[idx][1]);
print_list_fmt("ar_coeffs_y", "%"PRId8, num_ar_coeffs_y, 1, aom->ar_coeffs_y[idx]);
@ -2445,6 +2465,7 @@ static void print_film_grain_params(WriterContext *w,
writer_print_section_header(w, NULL, SECTION_ID_FRAME_SIDE_DATA_COMPONENT);
print_int("bit_depth_chroma", fgp->bit_depth_chroma);
print_list_fmt("uv_points_value", "%"PRIu8, aom->num_uv_points[uv], 1, aom->uv_points[uv][idx][0]);
print_list_fmt("uv_points_scaling", "%"PRIu8, aom->num_uv_points[uv], 1, aom->uv_points[uv][idx][1]);
print_list_fmt("ar_coeffs_uv", "%"PRId8, num_ar_coeffs_uv, 1, aom->ar_coeffs_uv[uv][idx]);
@ -2462,17 +2483,7 @@ static void print_film_grain_params(WriterContext *w,
}
case AV_FILM_GRAIN_PARAMS_H274: {
const AVFilmGrainH274Params *h274 = &fgp->codec.h274;
const char *color_range_str = av_color_range_name(h274->color_range);
const char *color_primaries_str = av_color_primaries_name(h274->color_primaries);
const char *color_trc_str = av_color_transfer_name(h274->color_trc);
const char *color_space_str = av_color_space_name(h274->color_space);
print_str("type", "h274");
print_fmt("seed", "%"PRIu64, fgp->seed);
print_int("model_id", h274->model_id);
print_str("color_range", color_range_str ? color_range_str : "unknown");
print_str("color_primaries", color_primaries_str ? color_primaries_str : "unknown");
print_str("color_trc", color_trc_str ? color_trc_str : "unknown");
print_str("color_space", color_space_str ? color_space_str : "unknown");
print_int("blending_mode_id", h274->blending_mode_id);
print_int("log2_scale_factor", h274->log2_scale_factor);
@ -2483,7 +2494,7 @@ static void print_film_grain_params(WriterContext *w,
continue;
writer_print_section_header(w, NULL, SECTION_ID_FRAME_SIDE_DATA_COMPONENT);
print_int(c ? "bit_depth_chroma" : "bit_depth_luma", c ? h274->bit_depth_chroma : h274->bit_depth_luma);
print_int(c ? "bit_depth_chroma" : "bit_depth_luma", c ? fgp->bit_depth_chroma : fgp->bit_depth_luma);
writer_print_section_header(w, NULL, SECTION_ID_FRAME_SIDE_DATA_PIECE_LIST);
for (int i = 0; i < h274->num_intensity_intervals[c]; i++) {


@ -105,7 +105,7 @@ OBJS-$(CONFIG_H264_SEI) += h264_sei.o h2645_sei.o
OBJS-$(CONFIG_HEVCPARSE) += hevc_parse.o hevc_ps.o hevc_data.o \
h2645data.o h2645_parse.o h2645_vui.o
OBJS-$(CONFIG_HEVC_SEI) += hevc_sei.o h2645_sei.o \
dynamic_hdr_vivid.o
dynamic_hdr_vivid.o aom_film_grain.o
OBJS-$(CONFIG_HPELDSP) += hpeldsp.o
OBJS-$(CONFIG_HUFFMAN) += huffman.o
OBJS-$(CONFIG_HUFFYUVDSP) += huffyuvdsp.o
@ -432,7 +432,7 @@ OBJS-$(CONFIG_HDR_ENCODER) += hdrenc.o
OBJS-$(CONFIG_HEVC_DECODER) += hevcdec.o hevc_mvs.o \
hevc_cabac.o hevc_refs.o hevcpred.o \
hevcdsp.o hevc_filter.o hevc_data.o \
h274.o
h274.o aom_film_grain.o
OBJS-$(CONFIG_HEVC_AMF_ENCODER) += amfenc_hevc.o
OBJS-$(CONFIG_HEVC_CUVID_DECODER) += cuviddec.o
OBJS-$(CONFIG_HEVC_MEDIACODEC_DECODER) += mediacodecdec.o

libavcodec/aom_film_grain.c (new file, 548 lines)

@ -0,0 +1,548 @@
/*
* AOM film grain synthesis
* Copyright (c) 2023 Niklas Haas <ffmpeg@haasn.xyz>
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
/**
* @file
* AOM film grain synthesis.
* @author Niklas Haas <ffmpeg@haasn.xyz>
*/
#include "libavutil/avassert.h"
#include "libavutil/imgutils.h"
#include "aom_film_grain.h"
#include "get_bits.h"
// Common/shared helpers (not dependent on BIT_DEPTH)
static inline int get_random_number(const int bits, unsigned *const state) {
const int r = *state;
unsigned bit = ((r >> 0) ^ (r >> 1) ^ (r >> 3) ^ (r >> 12)) & 1;
*state = (r >> 1) | (bit << 15);
return (*state >> (16 - bits)) & ((1 << bits) - 1);
}
static inline int round2(const int x, const uint64_t shift) {
return (x + ((1 << shift) >> 1)) >> shift;
}
enum {
GRAIN_WIDTH = 82,
GRAIN_HEIGHT = 73,
SUB_GRAIN_WIDTH = 44,
SUB_GRAIN_HEIGHT = 38,
FG_BLOCK_SIZE = 32,
};
static const int16_t gaussian_sequence[2048];
#define BIT_DEPTH 16
#include "aom_film_grain_template.c"
#undef BIT_DEPTH
#define BIT_DEPTH 8
#include "aom_film_grain_template.c"
#undef BIT_DEPTH
int ff_aom_apply_film_grain(AVFrame *out, const AVFrame *in,
const AVFilmGrainParams *params)
{
const AVFilmGrainAOMParams *const data = &params->codec.aom;
const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(out->format);
const int subx = desc->log2_chroma_w, suby = desc->log2_chroma_h;
const int pxstep = desc->comp[0].step;
av_assert0(out->format == in->format);
av_assert0(params->type == AV_FILM_GRAIN_PARAMS_AV1);
// Copy over the non-modified planes
if (!params->codec.aom.num_y_points) {
av_image_copy_plane(out->data[0], out->linesize[0],
in->data[0], in->linesize[0],
out->width * pxstep, out->height);
}
for (int uv = 0; uv < 2; uv++) {
if (!data->num_uv_points[uv]) {
av_image_copy_plane(out->data[1+uv], out->linesize[1+uv],
in->data[1+uv], in->linesize[1+uv],
AV_CEIL_RSHIFT(out->width, subx) * pxstep,
AV_CEIL_RSHIFT(out->height, suby));
}
}
switch (in->format) {
case AV_PIX_FMT_GRAY8:
case AV_PIX_FMT_YUV420P:
case AV_PIX_FMT_YUV422P:
case AV_PIX_FMT_YUV444P:
case AV_PIX_FMT_YUVJ420P:
case AV_PIX_FMT_YUVJ422P:
case AV_PIX_FMT_YUVJ444P:
return apply_film_grain_8(out, in, params);
case AV_PIX_FMT_GRAY9:
case AV_PIX_FMT_YUV420P9:
case AV_PIX_FMT_YUV422P9:
case AV_PIX_FMT_YUV444P9:
return apply_film_grain_16(out, in, params, 9);
case AV_PIX_FMT_GRAY10:
case AV_PIX_FMT_YUV420P10:
case AV_PIX_FMT_YUV422P10:
case AV_PIX_FMT_YUV444P10:
return apply_film_grain_16(out, in, params, 10);
case AV_PIX_FMT_GRAY12:
case AV_PIX_FMT_YUV420P12:
case AV_PIX_FMT_YUV422P12:
case AV_PIX_FMT_YUV444P12:
return apply_film_grain_16(out, in, params, 12);
}
/* The AV1 spec only defines film grain synthesis for these formats */
return AVERROR_INVALIDDATA;
}
int ff_aom_parse_film_grain_sets(AVFilmGrainAFGS1Params *s,
const uint8_t *payload, int payload_size)
{
GetBitContext gbc, *gb = &gbc;
AVFilmGrainAOMParams *aom;
AVFilmGrainParams *fgp, *ref = NULL;
int ret, num_sets, n, i, uv, num_y_coeffs, update_grain, luma_only;
ret = init_get_bits8(gb, payload, payload_size);
if (ret < 0)
return ret;
s->enable = get_bits1(gb);
if (!s->enable)
return 0;
skip_bits(gb, 4); // reserved
num_sets = get_bits(gb, 3) + 1;
for (n = 0; n < num_sets; n++) {
int payload_4byte, payload_size, set_idx, apply_units_log2, vsc_flag;
int predict_scaling, predict_y_scaling, predict_uv_scaling[2];
int payload_bits, start_position;
start_position = get_bits_count(gb);
payload_4byte = get_bits1(gb);
payload_size = get_bits(gb, payload_4byte ? 2 : 8);
set_idx = get_bits(gb, 3);
fgp = &s->sets[set_idx];
aom = &fgp->codec.aom;
fgp->type = get_bits1(gb) ? AV_FILM_GRAIN_PARAMS_AV1 : AV_FILM_GRAIN_PARAMS_NONE;
if (!fgp->type)
continue;
fgp->seed = get_bits(gb, 16);
update_grain = get_bits1(gb);
if (!update_grain)
continue;
apply_units_log2 = get_bits(gb, 4);
fgp->width = get_bits(gb, 12) << apply_units_log2;
fgp->height = get_bits(gb, 12) << apply_units_log2;
luma_only = get_bits1(gb);
if (luma_only) {
fgp->subsampling_x = fgp->subsampling_y = 0;
} else {
fgp->subsampling_x = get_bits1(gb);
fgp->subsampling_y = get_bits1(gb);
}
fgp->bit_depth_luma = fgp->bit_depth_chroma = 0;
fgp->color_primaries = AVCOL_PRI_UNSPECIFIED;
fgp->color_trc = AVCOL_TRC_UNSPECIFIED;
fgp->color_space = AVCOL_SPC_UNSPECIFIED;
fgp->color_range = AVCOL_RANGE_UNSPECIFIED;
vsc_flag = get_bits1(gb); // video_signal_characteristics_flag
if (vsc_flag) {
int cicp_flag;
fgp->bit_depth_luma = get_bits(gb, 3) + 8;
if (!luma_only)
fgp->bit_depth_chroma = fgp->bit_depth_luma;
cicp_flag = get_bits1(gb);
if (cicp_flag) {
fgp->color_primaries = get_bits(gb, 8);
fgp->color_trc = get_bits(gb, 8);
fgp->color_space = get_bits(gb, 8);
fgp->color_range = get_bits1(gb) ? AVCOL_RANGE_JPEG : AVCOL_RANGE_MPEG;
if (fgp->color_primaries > AVCOL_PRI_NB ||
fgp->color_primaries == AVCOL_PRI_RESERVED ||
fgp->color_primaries == AVCOL_PRI_RESERVED0 ||
fgp->color_trc > AVCOL_TRC_NB ||
fgp->color_trc == AVCOL_TRC_RESERVED ||
fgp->color_trc == AVCOL_TRC_RESERVED0 ||
fgp->color_space > AVCOL_SPC_NB ||
fgp->color_space == AVCOL_SPC_RESERVED)
goto error;
}
}
predict_scaling = get_bits1(gb);
if (predict_scaling && (!ref || ref == fgp))
goto error; // prediction must be from valid, different set
predict_y_scaling = predict_scaling ? get_bits1(gb) : 0;
if (predict_y_scaling) {
int y_scale, y_offset, bits_res;
y_scale = get_bits(gb, 9) - 256;
y_offset = get_bits(gb, 9) - 256;
bits_res = get_bits(gb, 3);
if (bits_res) {
int res[14], pred, granularity;
aom->num_y_points = ref->codec.aom.num_y_points;
for (i = 0; i < aom->num_y_points; i++)
res[i] = get_bits(gb, bits_res);
granularity = get_bits(gb, 3);
for (i = 0; i < aom->num_y_points; i++) {
pred = ref->codec.aom.y_points[i][1];
pred = ((pred * y_scale + 8) >> 4) + y_offset;
pred += (res[i] - (1 << (bits_res - 1))) * granularity;
aom->y_points[i][0] = ref->codec.aom.y_points[i][0];
aom->y_points[i][1] = av_clip_uint8(pred);
}
}
} else {
aom->num_y_points = get_bits(gb, 4);
if (aom->num_y_points > 14) {
goto error;
} else if (aom->num_y_points) {
int bits_inc, bits_scaling;
int y_value = 0;
bits_inc = get_bits(gb, 3) + 1;
bits_scaling = get_bits(gb, 2) + 5;
for (i = 0; i < aom->num_y_points; i++) {
y_value += get_bits(gb, bits_inc);
if (y_value > UINT8_MAX)
goto error;
aom->y_points[i][0] = y_value;
aom->y_points[i][1] = get_bits(gb, bits_scaling);
}
}
}
if (luma_only) {
aom->chroma_scaling_from_luma = 0;
aom->num_uv_points[0] = aom->num_uv_points[1] = 0;
} else {
aom->chroma_scaling_from_luma = get_bits1(gb);
if (aom->chroma_scaling_from_luma) {
aom->num_uv_points[0] = aom->num_uv_points[1] = 0;
} else {
for (uv = 0; uv < 2; uv++) {
predict_uv_scaling[uv] = predict_scaling ? get_bits1(gb) : 0;
if (predict_uv_scaling[uv]) {
int uv_scale, uv_offset, bits_res;
uv_scale = get_bits(gb, 9) - 256;
uv_offset = get_bits(gb, 9) - 256;
bits_res = get_bits(gb, 3);
aom->uv_mult[uv] = ref->codec.aom.uv_mult[uv];
aom->uv_mult_luma[uv] = ref->codec.aom.uv_mult_luma[uv];
aom->uv_offset[uv] = ref->codec.aom.uv_offset[uv];
if (bits_res) {
int res[10], pred, granularity;
aom->num_uv_points[uv] = ref->codec.aom.num_uv_points[uv];
for (i = 0; i < aom->num_uv_points[uv]; i++)
res[i] = get_bits(gb, bits_res);
granularity = get_bits(gb, 3);
for (i = 0; i < aom->num_uv_points[uv]; i++) {
pred = ref->codec.aom.uv_points[uv][i][1];
pred = ((pred * uv_scale + 8) >> 4) + uv_offset;
pred += (res[i] - (1 << (bits_res - 1))) * granularity;
aom->uv_points[uv][i][0] = ref->codec.aom.uv_points[uv][i][0];
aom->uv_points[uv][i][1] = av_clip_uint8(pred);
}
}
} else {
int bits_inc, bits_scaling, uv_offset;
int uv_value = 0;
aom->num_uv_points[uv] = get_bits(gb, 4);
if (aom->num_uv_points[uv] > 10)
goto error;
bits_inc = get_bits(gb, 3) + 1;
bits_scaling = get_bits(gb, 2) + 5;
uv_offset = get_bits(gb, 8);
for (i = 0; i < aom->num_uv_points[uv]; i++) {
uv_value += get_bits(gb, bits_inc);
if (uv_value > UINT8_MAX)
goto error;
aom->uv_points[uv][i][0] = uv_value;
aom->uv_points[uv][i][1] = get_bits(gb, bits_scaling) + uv_offset;
}
}
}
}
}
aom->scaling_shift = get_bits(gb, 2) + 8;
aom->ar_coeff_lag = get_bits(gb, 2);
num_y_coeffs = 2 * aom->ar_coeff_lag * (aom->ar_coeff_lag + 1);
if (aom->num_y_points) {
int ar_bits = get_bits(gb, 2) + 5;
for (i = 0; i < num_y_coeffs; i++)
aom->ar_coeffs_y[i] = get_bits(gb, ar_bits) - (1 << (ar_bits - 1));
}
for (uv = 0; uv < 2; uv++) {
if (aom->chroma_scaling_from_luma || aom->num_uv_points[uv]) {
int ar_bits = get_bits(gb, 2) + 5;
for (i = 0; i < num_y_coeffs + !!aom->num_y_points; i++)
aom->ar_coeffs_uv[uv][i] = get_bits(gb, ar_bits) - (1 << (ar_bits - 1));
}
}
aom->ar_coeff_shift = get_bits(gb, 2) + 6;
aom->grain_scale_shift = get_bits(gb, 2);
for (uv = 0; uv < 2; uv++) {
if (aom->num_uv_points[uv] && !predict_uv_scaling[uv]) {
aom->uv_mult[uv] = get_bits(gb, 8) - 128;
aom->uv_mult_luma[uv] = get_bits(gb, 8) - 128;
aom->uv_offset[uv] = get_bits(gb, 9) - 256;
}
}
aom->overlap_flag = get_bits1(gb);
aom->limit_output_range = get_bits1(gb);
// use first set as reference only if it was fully transmitted
if (n == 0)
ref = fgp;
payload_bits = get_bits_count(gb) - start_position;
if (payload_bits > payload_size * 8)
goto error;
skip_bits(gb, payload_size * 8 - payload_bits);
}
return 0;
error:
memset(s, 0, sizeof(*s));
return AVERROR_INVALIDDATA;
}
int ff_aom_attach_film_grain_sets(const AVFilmGrainAFGS1Params *s, AVFrame *frame)
{
AVFilmGrainParams *fgp;
if (!s->enable)
return 0;
for (int i = 0; i < FF_ARRAY_ELEMS(s->sets); i++) {
if (s->sets[i].type != AV_FILM_GRAIN_PARAMS_AV1)
continue;
fgp = av_film_grain_params_create_side_data(frame);
if (!fgp)
return AVERROR(ENOMEM);
memcpy(fgp, &s->sets[i], sizeof(*fgp));
}
return 0;
}
// Taken from the AV1 spec. Range is [-2048, 2047], mean is 0 and stddev is 512
static const int16_t gaussian_sequence[2048] = {
56, 568, -180, 172, 124, -84, 172, -64, -900, 24, 820,
224, 1248, 996, 272, -8, -916, -388, -732, -104, -188, 800,
112, -652, -320, -376, 140, -252, 492, -168, 44, -788, 588,
-584, 500, -228, 12, 680, 272, -476, 972, -100, 652, 368,
432, -196, -720, -192, 1000, -332, 652, -136, -552, -604, -4,
192, -220, -136, 1000, -52, 372, -96, -624, 124, -24, 396,
540, -12, -104, 640, 464, 244, -208, -84, 368, -528, -740,
248, -968, -848, 608, 376, -60, -292, -40, -156, 252, -292,
248, 224, -280, 400, -244, 244, -60, 76, -80, 212, 532,
340, 128, -36, 824, -352, -60, -264, -96, -612, 416, -704,
220, -204, 640, -160, 1220, -408, 900, 336, 20, -336, -96,
-792, 304, 48, -28, -1232, -1172, -448, 104, -292, -520, 244,
60, -948, 0, -708, 268, 108, 356, -548, 488, -344, -136,
488, -196, -224, 656, -236, -1128, 60, 4, 140, 276, -676,
-376, 168, -108, 464, 8, 564, 64, 240, 308, -300, -400,
-456, -136, 56, 120, -408, -116, 436, 504, -232, 328, 844,
-164, -84, 784, -168, 232, -224, 348, -376, 128, 568, 96,
-1244, -288, 276, 848, 832, -360, 656, 464, -384, -332, -356,
728, -388, 160, -192, 468, 296, 224, 140, -776, -100, 280,
4, 196, 44, -36, -648, 932, 16, 1428, 28, 528, 808,
772, 20, 268, 88, -332, -284, 124, -384, -448, 208, -228,
-1044, -328, 660, 380, -148, -300, 588, 240, 540, 28, 136,
-88, -436, 256, 296, -1000, 1400, 0, -48, 1056, -136, 264,
-528, -1108, 632, -484, -592, -344, 796, 124, -668, -768, 388,
1296, -232, -188, -200, -288, -4, 308, 100, -168, 256, -500,
204, -508, 648, -136, 372, -272, -120, -1004, -552, -548, -384,
548, -296, 428, -108, -8, -912, -324, -224, -88, -112, -220,
-100, 996, -796, 548, 360, -216, 180, 428, -200, -212, 148,
96, 148, 284, 216, -412, -320, 120, -300, -384, -604, -572,
-332, -8, -180, -176, 696, 116, -88, 628, 76, 44, -516,
240, -208, -40, 100, -592, 344, -308, -452, -228, 20, 916,
-1752, -136, -340, -804, 140, 40, 512, 340, 248, 184, -492,
896, -156, 932, -628, 328, -688, -448, -616, -752, -100, 560,
-1020, 180, -800, -64, 76, 576, 1068, 396, 660, 552, -108,
-28, 320, -628, 312, -92, -92, -472, 268, 16, 560, 516,
-672, -52, 492, -100, 260, 384, 284, 292, 304, -148, 88,
-152, 1012, 1064, -228, 164, -376, -684, 592, -392, 156, 196,
-524, -64, -884, 160, -176, 636, 648, 404, -396, -436, 864,
424, -728, 988, -604, 904, -592, 296, -224, 536, -176, -920,
436, -48, 1176, -884, 416, -776, -824, -884, 524, -548, -564,
-68, -164, -96, 692, 364, -692, -1012, -68, 260, -480, 876,
-1116, 452, -332, -352, 892, -1088, 1220, -676, 12, -292, 244,
496, 372, -32, 280, 200, 112, -440, -96, 24, -644, -184,
56, -432, 224, -980, 272, -260, 144, -436, 420, 356, 364,
-528, 76, 172, -744, -368, 404, -752, -416, 684, -688, 72,
540, 416, 92, 444, 480, -72, -1416, 164, -1172, -68, 24,
424, 264, 1040, 128, -912, -524, -356, 64, 876, -12, 4,
-88, 532, 272, -524, 320, 276, -508, 940, 24, -400, -120,
756, 60, 236, -412, 100, 376, -484, 400, -100, -740, -108,
-260, 328, -268, 224, -200, -416, 184, -604, -564, -20, 296,
60, 892, -888, 60, 164, 68, -760, 216, -296, 904, -336,
-28, 404, -356, -568, -208, -1480, -512, 296, 328, -360, -164,
-1560, -776, 1156, -428, 164, -504, -112, 120, -216, -148, -264,
308, 32, 64, -72, 72, 116, 176, -64, -272, 460, -536,
-784, -280, 348, 108, -752, -132, 524, -540, -776, 116, -296,
-1196, -288, -560, 1040, -472, 116, -848, -1116, 116, 636, 696,
284, -176, 1016, 204, -864, -648, -248, 356, 972, -584, -204,
264, 880, 528, -24, -184, 116, 448, -144, 828, 524, 212,
-212, 52, 12, 200, 268, -488, -404, -880, 824, -672, -40,
908, -248, 500, 716, -576, 492, -576, 16, 720, -108, 384,
124, 344, 280, 576, -500, 252, 104, -308, 196, -188, -8,
1268, 296, 1032, -1196, 436, 316, 372, -432, -200, -660, 704,
-224, 596, -132, 268, 32, -452, 884, 104, -1008, 424, -1348,
-280, 4, -1168, 368, 476, 696, 300, -8, 24, 180, -592,
-196, 388, 304, 500, 724, -160, 244, -84, 272, -256, -420,
320, 208, -144, -156, 156, 364, 452, 28, 540, 316, 220,
-644, -248, 464, 72, 360, 32, -388, 496, -680, -48, 208,
-116, -408, 60, -604, -392, 548, -840, 784, -460, 656, -544,
-388, -264, 908, -800, -628, -612, -568, 572, -220, 164, 288,
-16, -308, 308, -112, -636, -760, 280, -668, 432, 364, 240,
-196, 604, 340, 384, 196, 592, -44, -500, 432, -580, -132,
636, -76, 392, 4, -412, 540, 508, 328, -356, -36, 16,
-220, -64, -248, -60, 24, -192, 368, 1040, 92, -24, -1044,
-32, 40, 104, 148, 192, -136, -520, 56, -816, -224, 732,
392, 356, 212, -80, -424, -1008, -324, 588, -1496, 576, 460,
-816, -848, 56, -580, -92, -1372, -112, -496, 200, 364, 52,
-140, 48, -48, -60, 84, 72, 40, 132, -356, -268, -104,
-284, -404, 732, -520, 164, -304, -540, 120, 328, -76, -460,
756, 388, 588, 236, -436, -72, -176, -404, -316, -148, 716,
-604, 404, -72, -88, -888, -68, 944, 88, -220, -344, 960,
472, 460, -232, 704, 120, 832, -228, 692, -508, 132, -476,
844, -748, -364, -44, 1116, -1104, -1056, 76, 428, 552, -692,
60, 356, 96, -384, -188, -612, -576, 736, 508, 892, 352,
-1132, 504, -24, -352, 324, 332, -600, -312, 292, 508, -144,
-8, 484, 48, 284, -260, -240, 256, -100, -292, -204, -44,
472, -204, 908, -188, -1000, -256, 92, 1164, -392, 564, 356,
652, -28, -884, 256, 484, -192, 760, -176, 376, -524, -452,
-436, 860, -736, 212, 124, 504, -476, 468, 76, -472, 552,
-692, -944, -620, 740, -240, 400, 132, 20, 192, -196, 264,
-668, -1012, -60, 296, -316, -828, 76, -156, 284, -768, -448,
-832, 148, 248, 652, 616, 1236, 288, -328, -400, -124, 588,
220, 520, -696, 1032, 768, -740, -92, -272, 296, 448, -464,
412, -200, 392, 440, -200, 264, -152, -260, 320, 1032, 216,
320, -8, -64, 156, -1016, 1084, 1172, 536, 484, -432, 132,
372, -52, -256, 84, 116, -352, 48, 116, 304, -384, 412,
924, -300, 528, 628, 180, 648, 44, -980, -220, 1320, 48,
332, 748, 524, -268, -720, 540, -276, 564, -344, -208, -196,
436, 896, 88, -392, 132, 80, -964, -288, 568, 56, -48,
-456, 888, 8, 552, -156, -292, 948, 288, 128, -716, -292,
1192, -152, 876, 352, -600, -260, -812, -468, -28, -120, -32,
-44, 1284, 496, 192, 464, 312, -76, -516, -380, -456, -1012,
-48, 308, -156, 36, 492, -156, -808, 188, 1652, 68, -120,
-116, 316, 160, -140, 352, 808, -416, 592, 316, -480, 56,
528, -204, -568, 372, -232, 752, -344, 744, -4, 324, -416,
-600, 768, 268, -248, -88, -132, -420, -432, 80, -288, 404,
-316, -1216, -588, 520, -108, 92, -320, 368, -480, -216, -92,
1688, -300, 180, 1020, -176, 820, -68, -228, -260, 436, -904,
20, 40, -508, 440, -736, 312, 332, 204, 760, -372, 728,
96, -20, -632, -520, -560, 336, 1076, -64, -532, 776, 584,
192, 396, -728, -520, 276, -188, 80, -52, -612, -252, -48,
648, 212, -688, 228, -52, -260, 428, -412, -272, -404, 180,
816, -796, 48, 152, 484, -88, -216, 988, 696, 188, -528,
648, -116, -180, 316, 476, 12, -564, 96, 476, -252, -364,
-376, -392, 556, -256, -576, 260, -352, 120, -16, -136, -260,
-492, 72, 556, 660, 580, 616, 772, 436, 424, -32, -324,
-1268, 416, -324, -80, 920, 160, 228, 724, 32, -516, 64,
384, 68, -128, 136, 240, 248, -204, -68, 252, -932, -120,
-480, -628, -84, 192, 852, -404, -288, -132, 204, 100, 168,
-68, -196, -868, 460, 1080, 380, -80, 244, 0, 484, -888,
64, 184, 352, 600, 460, 164, 604, -196, 320, -64, 588,
-184, 228, 12, 372, 48, -848, -344, 224, 208, -200, 484,
128, -20, 272, -468, -840, 384, 256, -720, -520, -464, -580,
112, -120, 644, -356, -208, -608, -528, 704, 560, -424, 392,
828, 40, 84, 200, -152, 0, -144, 584, 280, -120, 80,
-556, -972, -196, -472, 724, 80, 168, -32, 88, 160, -688,
0, 160, 356, 372, -776, 740, -128, 676, -248, -480, 4,
-364, 96, 544, 232, -1032, 956, 236, 356, 20, -40, 300,
24, -676, -596, 132, 1120, -104, 532, -1096, 568, 648, 444,
508, 380, 188, -376, -604, 1488, 424, 24, 756, -220, -192,
716, 120, 920, 688, 168, 44, -460, 568, 284, 1144, 1160,
600, 424, 888, 656, -356, -320, 220, 316, -176, -724, -188,
-816, -628, -348, -228, -380, 1012, -452, -660, 736, 928, 404,
-696, -72, -268, -892, 128, 184, -344, -780, 360, 336, 400,
344, 428, 548, -112, 136, -228, -216, -820, -516, 340, 92,
-136, 116, -300, 376, -244, 100, -316, -520, -284, -12, 824,
164, -548, -180, -128, 116, -924, -828, 268, -368, -580, 620,
192, 160, 0, -1676, 1068, 424, -56, -360, 468, -156, 720,
288, -528, 556, -364, 548, -148, 504, 316, 152, -648, -620,
-684, -24, -376, -384, -108, -920, -1032, 768, 180, -264, -508,
-1268, -260, -60, 300, -240, 988, 724, -376, -576, -212, -736,
556, 192, 1092, -620, -880, 376, -56, -4, -216, -32, 836,
268, 396, 1332, 864, -600, 100, 56, -412, -92, 356, 180,
884, -468, -436, 292, -388, -804, -704, -840, 368, -348, 140,
-724, 1536, 940, 372, 112, -372, 436, -480, 1136, 296, -32,
-228, 132, -48, -220, 868, -1016, -60, -1044, -464, 328, 916,
244, 12, -736, -296, 360, 468, -376, -108, -92, 788, 368,
-56, 544, 400, -672, -420, 728, 16, 320, 44, -284, -380,
-796, 488, 132, 204, -596, -372, 88, -152, -908, -636, -572,
-624, -116, -692, -200, -56, 276, -88, 484, -324, 948, 864,
1000, -456, -184, -276, 292, -296, 156, 676, 320, 160, 908,
-84, -1236, -288, -116, 260, -372, -644, 732, -756, -96, 84,
344, -520, 348, -688, 240, -84, 216, -1044, -136, -676, -396,
-1500, 960, -40, 176, 168, 1516, 420, -504, -344, -364, -360,
1216, -940, -380, -212, 252, -660, -708, 484, -444, -152, 928,
-120, 1112, 476, -260, 560, -148, -344, 108, -196, 228, -288,
504, 560, -328, -88, 288, -1008, 460, -228, 468, -836, -196,
76, 388, 232, 412, -1168, -716, -644, 756, -172, -356, -504,
116, 432, 528, 48, 476, -168, -608, 448, 160, -532, -272,
28, -676, -12, 828, 980, 456, 520, 104, -104, 256, -344,
-4, -28, -368, -52, -524, -572, -556, -200, 768, 1124, -208,
-512, 176, 232, 248, -148, -888, 604, -600, -304, 804, -156,
-212, 488, -192, -804, -256, 368, -360, -916, -328, 228, -240,
-448, -472, 856, -556, -364, 572, -12, -156, -368, -340, 432,
252, -752, -152, 288, 268, -580, -848, -592, 108, -76, 244,
312, -716, 592, -80, 436, 360, 4, -248, 160, 516, 584,
732, 44, -468, -280, -292, -156, -588, 28, 308, 912, 24,
124, 156, 180, -252, 944, -924, -772, -520, -428, -624, 300,
-212, -1144, 32, -724, 800, -1128, -212, -1288, -848, 180, -416,
440, 192, -576, -792, -76, -1080, 80, -532, -352, -132, 380,
-820, 148, 1112, 128, 164, 456, 700, -924, 144, -668, -384,
648, -832, 508, 552, -52, -100, -656, 208, -568, 748, -88,
680, 232, 300, 192, -408, -1012, -152, -252, -268, 272, -876,
-664, -648, -332, -136, 16, 12, 1152, -28, 332, -536, 320,
-672, -460, -316, 532, -260, 228, -40, 1052, -816, 180, 88,
-496, -556, -672, -368, 428, 92, 356, 404, -408, 252, 196,
-176, -556, 792, 268, 32, 372, 40, 96, -332, 328, 120,
372, -900, -40, 472, -264, -592, 952, 128, 656, 112, 664,
-232, 420, 4, -344, -464, 556, 244, -416, -32, 252, 0,
-412, 188, -696, 508, -476, 324, -1096, 656, -312, 560, 264,
-136, 304, 160, -64, -580, 248, 336, -720, 560, -348, -288,
-276, -196, -500, 852, -544, -236, -1128, -992, -776, 116, 56,
52, 860, 884, 212, -12, 168, 1020, 512, -552, 924, -148,
716, 188, 164, -340, -520, -184, 880, -152, -680, -208, -1156,
-300, -528, -472, 364, 100, -744, -1056, -32, 540, 280, 144,
-676, -32, -232, -280, -224, 96, 568, -76, 172, 148, 148,
104, 32, -296, -32, 788, -80, 32, -16, 280, 288, 944,
428, -484
};

View File

@@ -0,0 +1,51 @@
/*
* AOM film grain synthesis
* Copyright (c) 2021 Niklas Haas <ffmpeg@haasn.xyz>
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
/**
* @file
* AOM film grain synthesis.
* @author Niklas Haas <ffmpeg@haasn.xyz>
*/
#ifndef AVCODEC_AOM_FILM_GRAIN_H
#define AVCODEC_AOM_FILM_GRAIN_H
#include "libavutil/film_grain_params.h"
typedef struct AVFilmGrainAFGS1Params {
int enable;
AVFilmGrainParams sets[8];
} AVFilmGrainAFGS1Params;
// Synthesizes film grain on top of `in` and stores the result to `out`. `out`
// must already have been allocated and set to the same size and format as `in`.
int ff_aom_apply_film_grain(AVFrame *out, const AVFrame *in,
const AVFilmGrainParams *params);
// Parse AFGS1 parameter sets from an ITU-T T.35 payload. Returns 0 on success,
// or a negative error code.
int ff_aom_parse_film_grain_sets(AVFilmGrainAFGS1Params *s,
const uint8_t *payload, int payload_size);
// Attach all valid film grain param sets to `frame`.
int ff_aom_attach_film_grain_sets(const AVFilmGrainAFGS1Params *s, AVFrame *frame);
#endif /* AVCODEC_AOM_FILM_GRAIN_H */

View File

@@ -0,0 +1,577 @@
/*
* AOM film grain synthesis
* Copyright (c) 2023 Niklas Haas <ffmpeg@haasn.xyz>
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
/*
* Copyright © 2018, Niklas Haas
* Copyright © 2018, VideoLAN and dav1d authors
* Copyright © 2018, Two Orioles, LLC
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* 1. Redistributions of source code must retain the above copyright notice, this
* list of conditions and the following disclaimer.
*
* 2. Redistributions in binary form must reproduce the above copyright notice,
* this list of conditions and the following disclaimer in the documentation
* and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
* WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
* DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
* ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
* (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
* LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
* ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#include "bit_depth_template.c"
#undef entry
#undef bitdepth
#undef bitdepth_max
#undef HBD_DECL
#undef HBD_CALL
#undef SCALING_SIZE
#if BIT_DEPTH > 8
# define entry int16_t
# define bitdepth_max ((1 << bitdepth) - 1)
# define HBD_DECL , const int bitdepth
# define HBD_CALL , bitdepth
# define SCALING_SIZE 4096
#else
# define entry int8_t
# define bitdepth 8
# define bitdepth_max UINT8_MAX
# define HBD_DECL
# define HBD_CALL
# define SCALING_SIZE 256
#endif
static void FUNC(generate_grain_y_c)(entry buf[][GRAIN_WIDTH],
const AVFilmGrainParams *const params
HBD_DECL)
{
const AVFilmGrainAOMParams *const data = &params->codec.aom;
const int bitdepth_min_8 = bitdepth - 8;
unsigned seed = params->seed;
const int shift = 4 - bitdepth_min_8 + data->grain_scale_shift;
const int grain_ctr = 128 << bitdepth_min_8;
const int grain_min = -grain_ctr, grain_max = grain_ctr - 1;
const int ar_pad = 3;
const int ar_lag = data->ar_coeff_lag;
for (int y = 0; y < GRAIN_HEIGHT; y++) {
for (int x = 0; x < GRAIN_WIDTH; x++) {
const int value = get_random_number(11, &seed);
buf[y][x] = round2(gaussian_sequence[ value ], shift);
}
}
for (int y = ar_pad; y < GRAIN_HEIGHT; y++) {
for (int x = ar_pad; x < GRAIN_WIDTH - ar_pad; x++) {
const int8_t *coeff = data->ar_coeffs_y;
int sum = 0, grain;
for (int dy = -ar_lag; dy <= 0; dy++) {
for (int dx = -ar_lag; dx <= ar_lag; dx++) {
if (!dx && !dy)
break;
sum += *(coeff++) * buf[y + dy][x + dx];
}
}
grain = buf[y][x] + round2(sum, data->ar_coeff_shift);
buf[y][x] = av_clip(grain, grain_min, grain_max);
}
}
}
static void
FUNC(generate_grain_uv_c)(entry buf[][GRAIN_WIDTH],
const entry buf_y[][GRAIN_WIDTH],
const AVFilmGrainParams *const params, const intptr_t uv,
const int subx, const int suby HBD_DECL)
{
const AVFilmGrainAOMParams *const data = &params->codec.aom;
const int bitdepth_min_8 = bitdepth - 8;
unsigned seed = params->seed ^ (uv ? 0x49d8 : 0xb524);
const int shift = 4 - bitdepth_min_8 + data->grain_scale_shift;
const int grain_ctr = 128 << bitdepth_min_8;
const int grain_min = -grain_ctr, grain_max = grain_ctr - 1;
const int chromaW = subx ? SUB_GRAIN_WIDTH : GRAIN_WIDTH;
const int chromaH = suby ? SUB_GRAIN_HEIGHT : GRAIN_HEIGHT;
const int ar_pad = 3;
const int ar_lag = data->ar_coeff_lag;
for (int y = 0; y < chromaH; y++) {
for (int x = 0; x < chromaW; x++) {
const int value = get_random_number(11, &seed);
buf[y][x] = round2(gaussian_sequence[ value ], shift);
}
}
for (int y = ar_pad; y < chromaH; y++) {
for (int x = ar_pad; x < chromaW - ar_pad; x++) {
const int8_t *coeff = data->ar_coeffs_uv[uv];
int sum = 0, grain;
for (int dy = -ar_lag; dy <= 0; dy++) {
for (int dx = -ar_lag; dx <= ar_lag; dx++) {
// For the final (current) pixel, we need to add in the
// contribution from the luma grain texture
if (!dx && !dy) {
const int lumaX = ((x - ar_pad) << subx) + ar_pad;
const int lumaY = ((y - ar_pad) << suby) + ar_pad;
int luma = 0;
if (!data->num_y_points)
break;
for (int i = 0; i <= suby; i++) {
for (int j = 0; j <= subx; j++) {
luma += buf_y[lumaY + i][lumaX + j];
}
}
luma = round2(luma, subx + suby);
sum += luma * (*coeff);
break;
}
sum += *(coeff++) * buf[y + dy][x + dx];
}
}
grain = buf[y][x] + round2(sum, data->ar_coeff_shift);
buf[y][x] = av_clip(grain, grain_min, grain_max);
}
}
}
// samples from the correct block of a grain LUT, while taking into account the
// offsets provided by the offsets cache
static inline entry FUNC(sample_lut)(const entry grain_lut[][GRAIN_WIDTH],
const int offsets[2][2],
const int subx, const int suby,
const int bx, const int by,
const int x, const int y)
{
const int randval = offsets[bx][by];
const int offx = 3 + (2 >> subx) * (3 + (randval >> 4));
const int offy = 3 + (2 >> suby) * (3 + (randval & 0xF));
return grain_lut[offy + y + (FG_BLOCK_SIZE >> suby) * by]
[offx + x + (FG_BLOCK_SIZE >> subx) * bx];
}
static void FUNC(fgy_32x32xn_c)(pixel *const dst_row, const pixel *const src_row,
const ptrdiff_t stride,
const AVFilmGrainParams *const params, const size_t pw,
const uint8_t scaling[SCALING_SIZE],
const entry grain_lut[][GRAIN_WIDTH],
const int bh, const int row_num HBD_DECL)
{
const AVFilmGrainAOMParams *const data = &params->codec.aom;
const int rows = 1 + (data->overlap_flag && row_num > 0);
const int bitdepth_min_8 = bitdepth - 8;
const int grain_ctr = 128 << bitdepth_min_8;
const int grain_min = -grain_ctr, grain_max = grain_ctr - 1;
unsigned seed[2];
int offsets[2 /* col offset */][2 /* row offset */];
int min_value, max_value;
if (data->limit_output_range) {
min_value = 16 << bitdepth_min_8;
max_value = 235 << bitdepth_min_8;
} else {
min_value = 0;
max_value = bitdepth_max;
}
// seed[0] contains the current row, seed[1] contains the previous
for (int i = 0; i < rows; i++) {
seed[i] = params->seed;
seed[i] ^= (((row_num - i) * 37 + 178) & 0xFF) << 8;
seed[i] ^= (((row_num - i) * 173 + 105) & 0xFF);
}
av_assert1(stride % (FG_BLOCK_SIZE * sizeof(pixel)) == 0);
// process this row in FG_BLOCK_SIZE^2 blocks
for (unsigned bx = 0; bx < pw; bx += FG_BLOCK_SIZE) {
const int bw = FFMIN(FG_BLOCK_SIZE, (int) pw - bx);
const pixel *src;
pixel *dst;
int noise;
// x/y block offsets to compensate for overlapped regions
const int ystart = data->overlap_flag && row_num ? FFMIN(2, bh) : 0;
const int xstart = data->overlap_flag && bx ? FFMIN(2, bw) : 0;
static const int w[2][2] = { { 27, 17 }, { 17, 27 } };
if (data->overlap_flag && bx) {
// shift previous offsets left
for (int i = 0; i < rows; i++)
offsets[1][i] = offsets[0][i];
}
// update current offsets
for (int i = 0; i < rows; i++)
offsets[0][i] = get_random_number(8, &seed[i]);
#define add_noise_y(x, y, grain) \
src = (const pixel*)((const char*)src_row + (y) * stride) + (x) + bx; \
dst = (pixel*)((char*)dst_row + (y) * stride) + (x) + bx; \
noise = round2(scaling[ *src ] * (grain), data->scaling_shift); \
*dst = av_clip(*src + noise, min_value, max_value);
for (int y = ystart; y < bh; y++) {
// Non-overlapped image region (straightforward)
for (int x = xstart; x < bw; x++) {
int grain = FUNC(sample_lut)(grain_lut, offsets, 0, 0, 0, 0, x, y);
add_noise_y(x, y, grain);
}
// Special case for overlapped column
for (int x = 0; x < xstart; x++) {
int grain = FUNC(sample_lut)(grain_lut, offsets, 0, 0, 0, 0, x, y);
int old = FUNC(sample_lut)(grain_lut, offsets, 0, 0, 1, 0, x, y);
grain = round2(old * w[x][0] + grain * w[x][1], 5);
grain = av_clip(grain, grain_min, grain_max);
add_noise_y(x, y, grain);
}
}
for (int y = 0; y < ystart; y++) {
// Special case for overlapped row (sans corner)
for (int x = xstart; x < bw; x++) {
int grain = FUNC(sample_lut)(grain_lut, offsets, 0, 0, 0, 0, x, y);
int old = FUNC(sample_lut)(grain_lut, offsets, 0, 0, 0, 1, x, y);
grain = round2(old * w[y][0] + grain * w[y][1], 5);
grain = av_clip(grain, grain_min, grain_max);
add_noise_y(x, y, grain);
}
// Special case for doubly-overlapped corner
for (int x = 0; x < xstart; x++) {
int grain = FUNC(sample_lut)(grain_lut, offsets, 0, 0, 0, 0, x, y);
int top = FUNC(sample_lut)(grain_lut, offsets, 0, 0, 0, 1, x, y);
int old = FUNC(sample_lut)(grain_lut, offsets, 0, 0, 1, 1, x, y);
// Blend the top pixel with the top left block
top = round2(old * w[x][0] + top * w[x][1], 5);
top = av_clip(top, grain_min, grain_max);
// Blend the current pixel with the left block
old = FUNC(sample_lut)(grain_lut, offsets, 0, 0, 1, 0, x, y);
grain = round2(old * w[x][0] + grain * w[x][1], 5);
grain = av_clip(grain, grain_min, grain_max);
// Mix the two rows together and apply grain
grain = round2(top * w[y][0] + grain * w[y][1], 5);
grain = av_clip(grain, grain_min, grain_max);
add_noise_y(x, y, grain);
}
}
}
}
static void
FUNC(fguv_32x32xn_c)(pixel *const dst_row, const pixel *const src_row,
const ptrdiff_t stride, const AVFilmGrainParams *const params,
const size_t pw, const uint8_t scaling[SCALING_SIZE],
const entry grain_lut[][GRAIN_WIDTH], const int bh,
const int row_num, const pixel *const luma_row,
const ptrdiff_t luma_stride, const int uv, const int is_id,
const int sx, const int sy HBD_DECL)
{
const AVFilmGrainAOMParams *const data = &params->codec.aom;
const int rows = 1 + (data->overlap_flag && row_num > 0);
const int bitdepth_min_8 = bitdepth - 8;
const int grain_ctr = 128 << bitdepth_min_8;
const int grain_min = -grain_ctr, grain_max = grain_ctr - 1;
unsigned seed[2];
int offsets[2 /* col offset */][2 /* row offset */];
int min_value, max_value;
if (data->limit_output_range) {
min_value = 16 << bitdepth_min_8;
max_value = (is_id ? 235 : 240) << bitdepth_min_8;
} else {
min_value = 0;
max_value = bitdepth_max;
}
// seed[0] contains the current row, seed[1] contains the previous
for (int i = 0; i < rows; i++) {
seed[i] = params->seed;
seed[i] ^= (((row_num - i) * 37 + 178) & 0xFF) << 8;
seed[i] ^= (((row_num - i) * 173 + 105) & 0xFF);
}
av_assert1(stride % (FG_BLOCK_SIZE * sizeof(pixel)) == 0);
// process this row in FG_BLOCK_SIZE^2 blocks (subsampled)
for (unsigned bx = 0; bx < pw; bx += FG_BLOCK_SIZE >> sx) {
const int bw = FFMIN(FG_BLOCK_SIZE >> sx, (int)(pw - bx));
int val, lx, ly, noise;
const pixel *src, *luma;
pixel *dst, avg;
// x/y block offsets to compensate for overlapped regions
const int ystart = data->overlap_flag && row_num ? FFMIN(2 >> sy, bh) : 0;
const int xstart = data->overlap_flag && bx ? FFMIN(2 >> sx, bw) : 0;
static const int w[2 /* sub */][2 /* off */][2] = {
{ { 27, 17 }, { 17, 27 } },
{ { 23, 22 } },
};
if (data->overlap_flag && bx) {
// shift previous offsets left
for (int i = 0; i < rows; i++)
offsets[1][i] = offsets[0][i];
}
// update current offsets
for (int i = 0; i < rows; i++)
offsets[0][i] = get_random_number(8, &seed[i]);
#define add_noise_uv(x, y, grain) \
lx = (bx + x) << sx; \
ly = y << sy; \
luma = (const pixel*)((const char*)luma_row + ly * luma_stride) + lx;\
avg = luma[0]; \
if (sx) \
avg = (avg + luma[1] + 1) >> 1; \
src = (const pixel*)((const char *)src_row + (y) * stride) + bx + (x);\
dst = (pixel *) ((char *) dst_row + (y) * stride) + bx + (x); \
val = avg; \
if (!data->chroma_scaling_from_luma) { \
const int combined = avg * data->uv_mult_luma[uv] + \
*src * data->uv_mult[uv]; \
val = av_clip( (combined >> 6) + \
(data->uv_offset[uv] * (1 << bitdepth_min_8)), \
0, bitdepth_max ); \
} \
noise = round2(scaling[ val ] * (grain), data->scaling_shift); \
*dst = av_clip(*src + noise, min_value, max_value);
for (int y = ystart; y < bh; y++) {
// Non-overlapped image region (straightforward)
for (int x = xstart; x < bw; x++) {
int grain = FUNC(sample_lut)(grain_lut, offsets, sx, sy, 0, 0, x, y);
add_noise_uv(x, y, grain);
}
// Special case for overlapped column
for (int x = 0; x < xstart; x++) {
int grain = FUNC(sample_lut)(grain_lut, offsets, sx, sy, 0, 0, x, y);
int old = FUNC(sample_lut)(grain_lut, offsets, sx, sy, 1, 0, x, y);
grain = round2(old * w[sx][x][0] + grain * w[sx][x][1], 5);
grain = av_clip(grain, grain_min, grain_max);
add_noise_uv(x, y, grain);
}
}
for (int y = 0; y < ystart; y++) {
// Special case for overlapped row (sans corner)
for (int x = xstart; x < bw; x++) {
int grain = FUNC(sample_lut)(grain_lut, offsets, sx, sy, 0, 0, x, y);
int old = FUNC(sample_lut)(grain_lut, offsets, sx, sy, 0, 1, x, y);
grain = round2(old * w[sy][y][0] + grain * w[sy][y][1], 5);
grain = av_clip(grain, grain_min, grain_max);
add_noise_uv(x, y, grain);
}
// Special case for doubly-overlapped corner
for (int x = 0; x < xstart; x++) {
int top = FUNC(sample_lut)(grain_lut, offsets, sx, sy, 0, 1, x, y);
int old = FUNC(sample_lut)(grain_lut, offsets, sx, sy, 1, 1, x, y);
int grain = FUNC(sample_lut)(grain_lut, offsets, sx, sy, 0, 0, x, y);
// Blend the top pixel with the top left block
top = round2(old * w[sx][x][0] + top * w[sx][x][1], 5);
top = av_clip(top, grain_min, grain_max);
// Blend the current pixel with the left block
old = FUNC(sample_lut)(grain_lut, offsets, sx, sy, 1, 0, x, y);
grain = round2(old * w[sx][x][0] + grain * w[sx][x][1], 5);
grain = av_clip(grain, grain_min, grain_max);
// Mix the two rows together and apply to image
grain = round2(top * w[sy][y][0] + grain * w[sy][y][1], 5);
grain = av_clip(grain, grain_min, grain_max);
add_noise_uv(x, y, grain);
}
}
}
}
static void FUNC(generate_scaling)(const uint8_t points[][2], const int num,
uint8_t scaling[SCALING_SIZE] HBD_DECL)
{
const int shift_x = bitdepth - 8;
const int scaling_size = 1 << bitdepth;
int max_value;
av_assert0(scaling_size <= SCALING_SIZE);
if (num == 0) {
memset(scaling, 0, scaling_size);
return;
}
// Only dereference points[num - 1] once num == 0 has been ruled out
max_value = points[num - 1][0] << shift_x;
// Fill up the preceding entries with the initial value
memset(scaling, points[0][1], points[0][0] << shift_x);
// Linearly interpolate the values in the middle
for (int i = 0; i < num - 1; i++) {
const int bx = points[i][0];
const int by = points[i][1];
const int ex = points[i+1][0];
const int ey = points[i+1][1];
const int dx = ex - bx;
const int dy = ey - by;
av_assert1(dx > 0);
const int delta = dy * ((0x10000 + (dx >> 1)) / dx);
for (int x = 0, d = 0x8000; x < dx; x++) {
scaling[(bx + x) << shift_x] = by + (d >> 16);
d += delta;
}
}
// Fill up the remaining entries with the final value
memset(&scaling[max_value], points[num - 1][1], scaling_size - max_value);
#if BIT_DEPTH != 8
for (int i = 0; i < num - 1; i++) {
const int pad = 1 << shift_x, rnd = pad >> 1;
const int bx = points[i][0] << shift_x;
const int ex = points[i+1][0] << shift_x;
const int dx = ex - bx;
for (int x = 0; x < dx; x += pad) {
const int range = scaling[bx + x + pad] - scaling[bx + x];
for (int n = 1, r = rnd; n < pad; n++) {
r += range;
scaling[bx + x + n] = scaling[bx + x] + (r >> shift_x);
}
}
}
#endif
}
static av_always_inline void
FUNC(apply_grain_row)(AVFrame *out, const AVFrame *in,
const int ss_x, const int ss_y,
const uint8_t scaling[3][SCALING_SIZE],
const entry grain_lut[3][GRAIN_HEIGHT+1][GRAIN_WIDTH],
const AVFilmGrainParams *params,
const int row HBD_DECL)
{
// Synthesize grain for the affected planes
const AVFilmGrainAOMParams *const data = &params->codec.aom;
const int cpw = (out->width + ss_x) >> ss_x;
const int is_id = out->colorspace == AVCOL_SPC_RGB;
const int bh = (FFMIN(out->height - row * FG_BLOCK_SIZE, FG_BLOCK_SIZE) + ss_y) >> ss_y;
const ptrdiff_t uv_off = row * FG_BLOCK_SIZE * out->linesize[1] >> ss_y;
pixel *const luma_src = (pixel *)
((char *) in->data[0] + row * FG_BLOCK_SIZE * in->linesize[0]);
if (data->num_y_points) {
const int bh = FFMIN(out->height - row * FG_BLOCK_SIZE, FG_BLOCK_SIZE);
const ptrdiff_t off = row * FG_BLOCK_SIZE * out->linesize[0];
FUNC(fgy_32x32xn_c)((pixel *) ((char *) out->data[0] + off), luma_src,
out->linesize[0], params, out->width, scaling[0],
grain_lut[0], bh, row HBD_CALL);
}
if (!data->num_uv_points[0] && !data->num_uv_points[1] &&
!data->chroma_scaling_from_luma)
{
return;
}
// extend padding pixels
if (out->width & ss_x) {
pixel *ptr = luma_src;
for (int y = 0; y < bh; y++) {
ptr[out->width] = ptr[out->width - 1];
ptr = (pixel *) ((char *) ptr + (in->linesize[0] << ss_y));
}
}
if (data->chroma_scaling_from_luma) {
for (int pl = 0; pl < 2; pl++)
FUNC(fguv_32x32xn_c)((pixel *) ((char *) out->data[1 + pl] + uv_off),
(const pixel *) ((const char *) in->data[1 + pl] + uv_off),
in->linesize[1], params, cpw, scaling[0],
grain_lut[1 + pl], bh, row, luma_src,
in->linesize[0], pl, is_id, ss_x, ss_y HBD_CALL);
} else {
for (int pl = 0; pl < 2; pl++) {
if (data->num_uv_points[pl]) {
FUNC(fguv_32x32xn_c)((pixel *) ((char *) out->data[1 + pl] + uv_off),
(const pixel *) ((const char *) in->data[1 + pl] + uv_off),
in->linesize[1], params, cpw, scaling[1 + pl],
grain_lut[1 + pl], bh, row, luma_src,
in->linesize[0], pl, is_id, ss_x, ss_y HBD_CALL);
}
}
}
}
static int FUNC(apply_film_grain)(AVFrame *out_frame, const AVFrame *in_frame,
const AVFilmGrainParams *params HBD_DECL)
{
entry grain_lut[3][GRAIN_HEIGHT + 1][GRAIN_WIDTH];
uint8_t scaling[3][SCALING_SIZE];
const AVFilmGrainAOMParams *const data = &params->codec.aom;
const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(out_frame->format);
const int rows = AV_CEIL_RSHIFT(out_frame->height, 5); /* log2(FG_BLOCK_SIZE) */
const int subx = desc->log2_chroma_w, suby = desc->log2_chroma_h;
// Generate grain LUTs as needed
FUNC(generate_grain_y_c)(grain_lut[0], params HBD_CALL);
if (data->num_uv_points[0] || data->chroma_scaling_from_luma)
FUNC(generate_grain_uv_c)(grain_lut[1], grain_lut[0], params, 0, subx, suby HBD_CALL);
if (data->num_uv_points[1] || data->chroma_scaling_from_luma)
FUNC(generate_grain_uv_c)(grain_lut[2], grain_lut[0], params, 1, subx, suby HBD_CALL);
// Generate scaling LUTs as needed
if (data->num_y_points || data->chroma_scaling_from_luma)
FUNC(generate_scaling)(data->y_points, data->num_y_points, scaling[0] HBD_CALL);
if (data->num_uv_points[0])
FUNC(generate_scaling)(data->uv_points[0], data->num_uv_points[0], scaling[1] HBD_CALL);
if (data->num_uv_points[1])
FUNC(generate_scaling)(data->uv_points[1], data->num_uv_points[1], scaling[2] HBD_CALL);
for (int row = 0; row < rows; row++) {
FUNC(apply_grain_row)(out_frame, in_frame, subx, suby, scaling, grain_lut,
params, row HBD_CALL);
}
return 0;
}

View File

@@ -34,6 +34,7 @@
#include "decode.h"
#include "hwaccel_internal.h"
#include "internal.h"
#include "itut35.h"
#include "hwconfig.h"
#include "profiles.h"
#include "refstruct.h"
@@ -951,7 +952,7 @@ static int export_itut_t35(AVCodecContext *avctx, AVFrame *frame,
provider_code = bytestream2_get_be16(&gb);
switch (provider_code) {
case 0x31: { // atsc_provider_code
case ITU_T_T35_PROVIDER_CODE_ATSC: {
uint32_t user_identifier = bytestream2_get_be32(&gb);
switch (user_identifier) {
case MKBETAG('G', 'A', '9', '4'): { // closed captions
@@ -975,12 +976,12 @@ static int export_itut_t35(AVCodecContext *avctx, AVFrame *frame,
}
break;
}
case 0x3C: { // smpte_provider_code
case ITU_T_T35_PROVIDER_CODE_SMTPE: {
AVDynamicHDRPlus *hdrplus;
int provider_oriented_code = bytestream2_get_be16(&gb);
int application_identifier = bytestream2_get_byte(&gb);
if (itut_t35->itu_t_t35_country_code != 0xB5 ||
if (itut_t35->itu_t_t35_country_code != ITU_T_T35_COUNTRY_CODE_US ||
provider_oriented_code != 1 || application_identifier != 4)
break;
@@ -994,9 +995,10 @@ static int export_itut_t35(AVCodecContext *avctx, AVFrame *frame,
return ret;
break;
}
case 0x3B: { // dolby_provider_code
case ITU_T_T35_PROVIDER_CODE_DOLBY: {
int provider_oriented_code = bytestream2_get_be32(&gb);
if (itut_t35->itu_t_t35_country_code != 0xB5 || provider_oriented_code != 0x800)
if (itut_t35->itu_t_t35_country_code != ITU_T_T35_COUNTRY_CODE_US ||
provider_oriented_code != 0x800)
break;
ret = ff_dovi_rpu_parse(&s->dovi, gb.buffer, gb.buffer_end - gb.buffer);
@@ -1072,9 +1074,11 @@ static int export_film_grain(AVCodecContext *avctx, AVFrame *frame)
{
AV1DecContext *s = avctx->priv_data;
const AV1RawFilmGrainParams *film_grain = &s->cur_frame.film_grain;
const AVPixFmtDescriptor *pixdesc = av_pix_fmt_desc_get(frame->format);
AVFilmGrainParams *fgp;
AVFilmGrainAOMParams *aom;
av_assert0(pixdesc);
if (!film_grain->apply_grain)
return 0;
@@ -1084,6 +1088,14 @@ static int export_film_grain(AVCodecContext *avctx, AVFrame *frame)
fgp->type = AV_FILM_GRAIN_PARAMS_AV1;
fgp->seed = film_grain->grain_seed;
fgp->width = frame->width;
fgp->height = frame->height;
fgp->color_range = frame->color_range;
fgp->color_primaries = frame->color_primaries;
fgp->color_trc = frame->color_trc;
fgp->color_space = frame->colorspace;
fgp->subsampling_x = pixdesc->log2_chroma_w;
fgp->subsampling_y = pixdesc->log2_chroma_h;
aom = &fgp->codec.aom;
aom->chroma_scaling_from_luma = film_grain->chroma_scaling_from_luma;

View File

@@ -2062,6 +2062,19 @@ typedef struct AVCodecContext {
* Number of entries in side_data_prefer_packet.
*/
unsigned nb_side_data_prefer_packet;
/**
* Array containing static side data, such as HDR10 CLL / MDCV structures.
* Side data entries should be allocated by usage of helpers defined in
* libavutil/frame.h.
*
* - encoding: may be set by user before calling avcodec_open2() for
* encoder configuration. Afterwards owned and freed by the
* encoder.
* - decoding: unused
*/
AVFrameSideData **decoded_side_data;
int nb_decoded_side_data;
} AVCodecContext;
/**

View File

@@ -2072,6 +2072,8 @@ static int FUNC(pps) (CodedBitstreamContext *ctx, RWContext *rw,
tile_x = tile_idx % current->num_tile_columns;
tile_y = tile_idx / current->num_tile_columns;
if (tile_y >= current->num_tile_rows)
return AVERROR_INVALIDDATA;
ctu_x = 0, ctu_y = 0;
for (j = 0; j < tile_x; j++) {

View File

@@ -1326,8 +1326,8 @@ int ff_get_format(AVCodecContext *avctx, const enum AVPixelFormat *fmt)
goto try_again;
}
if (hw_config->hwaccel) {
av_log(avctx, AV_LOG_DEBUG, "Format %s requires hwaccel "
"initialisation.\n", desc->name);
av_log(avctx, AV_LOG_DEBUG, "Format %s requires hwaccel %s "
"initialisation.\n", desc->name, hw_config->hwaccel->p.name);
err = hwaccel_init(avctx, hw_config->hwaccel);
if (err < 0)
goto try_again;

View File

@@ -68,7 +68,7 @@ void ff_dovi_ctx_replace(DOVIContext *s, const DOVIContext *s0)
s->mapping = s0->mapping;
s->color = s0->color;
s->dv_profile = s0->dv_profile;
for (int i = 0; i < DOVI_MAX_DM_ID; i++)
for (int i = 0; i <= DOVI_MAX_DM_ID; i++)
ff_refstruct_replace(&s->vdr[i], s0->vdr[i]);
}
@@ -145,7 +145,7 @@ static inline uint64_t get_ue_coef(GetBitContext *gb, const AVDOVIRpuDataHeader
case RPU_COEFF_FIXED:
ipart = get_ue_golomb_long(gb);
fpart.u32 = get_bits_long(gb, hdr->coef_log2_denom);
return (ipart << hdr->coef_log2_denom) + fpart.u32;
return (ipart << hdr->coef_log2_denom) | fpart.u32;
case RPU_COEFF_FLOAT:
fpart.u32 = get_bits_long(gb, 32);
@ -164,7 +164,7 @@ static inline int64_t get_se_coef(GetBitContext *gb, const AVDOVIRpuDataHeader *
case RPU_COEFF_FIXED:
ipart = get_se_golomb_long(gb);
fpart.u32 = get_bits_long(gb, hdr->coef_log2_denom);
return ipart * (1LL << hdr->coef_log2_denom) + fpart.u32;
return ipart * (1LL << hdr->coef_log2_denom) | fpart.u32;
case RPU_COEFF_FLOAT:
fpart.u32 = get_bits_long(gb, 32);
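Both coefficient hunks above replace `+` with `|` when joining the integer and fractional parts. Because the fractional part occupies only the low `coef_log2_denom` bits and the shifted integer part has zeros there (also for negative values in two's complement), OR and addition yield identical results. A standalone sketch demonstrating the equivalence:

```c
#include <stdint.h>

/* Combine integer and fractional parts of a fixed-point coefficient.
 * fpart must fit in 'denom' bits; the scaled ipart then has zeros in
 * those bits, so bitwise OR and addition give the same result. */
static int64_t fixed_coef(int64_t ipart, uint32_t fpart, int denom)
{
    return ipart * (1LL << denom) | fpart;
}
```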


@ -236,17 +236,9 @@ done:
av_free(name);
av_free(message);
if (class_class) {
(*env)->DeleteLocalRef(env, class_class);
}
if (exception_class) {
(*env)->DeleteLocalRef(env, exception_class);
}
if (string) {
(*env)->DeleteLocalRef(env, string);
}
(*env)->DeleteLocalRef(env, class_class);
(*env)->DeleteLocalRef(env, exception_class);
(*env)->DeleteLocalRef(env, string);
return ret;
}


@ -24,6 +24,7 @@
#define AVCODEC_FFJNI_H
#include <jni.h>
#include <stddef.h>
/*
* Attach permanently a JNI environment to the current thread and retrieve it.
@ -105,7 +106,7 @@ struct FFJniField {
const char *method;
const char *signature;
enum FFJniFieldType type;
int offset;
size_t offset;
int mandatory;
};


@ -40,6 +40,7 @@
#include "get_bits.h"
#include "golomb.h"
#include "h2645_sei.h"
#include "itut35.h"
#define IS_H264(codec_id) (CONFIG_H264_SEI && CONFIG_HEVC_SEI ? codec_id == AV_CODEC_ID_H264 : CONFIG_H264_SEI)
#define IS_HEVC(codec_id) (CONFIG_H264_SEI && CONFIG_HEVC_SEI ? codec_id == AV_CODEC_ID_HEVC : CONFIG_HEVC_SEI)
@ -140,7 +141,8 @@ static int decode_registered_user_data(H2645SEI *h, GetByteContext *gb,
bytestream2_skipu(gb, 1); // itu_t_t35_country_code_extension_byte
}
if (country_code != 0xB5 && country_code != 0x26) { // usa_country_code and cn_country_code
if (country_code != ITU_T_T35_COUNTRY_CODE_US &&
country_code != ITU_T_T35_COUNTRY_CODE_CN) {
av_log(logctx, AV_LOG_VERBOSE,
"Unsupported User Data Registered ITU-T T35 SEI message (country_code = %d)\n",
country_code);
@ -151,7 +153,7 @@ static int decode_registered_user_data(H2645SEI *h, GetByteContext *gb,
provider_code = bytestream2_get_be16u(gb);
switch (provider_code) {
case 0x31: { // atsc_provider_code
case ITU_T_T35_PROVIDER_CODE_ATSC: {
uint32_t user_identifier;
if (bytestream2_get_bytes_left(gb) < 4)
@ -172,7 +174,7 @@ static int decode_registered_user_data(H2645SEI *h, GetByteContext *gb,
break;
}
#if CONFIG_HEVC_SEI
case 0x04: { // cuva_provider_code
case ITU_T_T35_PROVIDER_CODE_CUVA: {
const uint16_t cuva_provider_oriented_code = 0x0005;
uint16_t provider_oriented_code;
@ -188,7 +190,7 @@ static int decode_registered_user_data(H2645SEI *h, GetByteContext *gb,
}
break;
}
case 0x3C: { // smpte_provider_code
case ITU_T_T35_PROVIDER_CODE_SMTPE: {
// A/341 Amendment - 2094-40
const uint16_t smpte2094_40_provider_oriented_code = 0x0001;
const uint8_t smpte2094_40_application_identifier = 0x04;
@ -209,6 +211,24 @@ static int decode_registered_user_data(H2645SEI *h, GetByteContext *gb,
}
break;
}
case 0x5890: { // aom_provider_code
const uint16_t aom_grain_provider_oriented_code = 0x0001;
uint16_t provider_oriented_code;
if (!IS_HEVC(codec_id))
goto unsupported_provider_code;
if (bytestream2_get_bytes_left(gb) < 2)
return AVERROR_INVALIDDATA;
provider_oriented_code = bytestream2_get_byteu(gb);
if (provider_oriented_code == aom_grain_provider_oriented_code) {
return ff_aom_parse_film_grain_sets(&h->aom_film_grain,
gb->buffer,
bytestream2_get_bytes_left(gb));
}
break;
}
unsupported_provider_code:
#endif
default:
@ -641,35 +661,45 @@ int ff_h2645_sei_to_frame(AVFrame *frame, H2645SEI *sei,
h274 = &fgp->codec.h274;
fgp->seed = seed;
fgp->width = frame->width;
fgp->height = frame->height;
/* H.274 mandates film grain be applied to 4:4:4 frames */
fgp->subsampling_x = fgp->subsampling_y = 0;
h274->model_id = fgc->model_id;
if (fgc->separate_colour_description_present_flag) {
h274->bit_depth_luma = fgc->bit_depth_luma;
h274->bit_depth_chroma = fgc->bit_depth_chroma;
h274->color_range = fgc->full_range + 1;
h274->color_primaries = fgc->color_primaries;
h274->color_trc = fgc->transfer_characteristics;
h274->color_space = fgc->matrix_coeffs;
fgp->bit_depth_luma = fgc->bit_depth_luma;
fgp->bit_depth_chroma = fgc->bit_depth_chroma;
fgp->color_range = fgc->full_range + 1;
fgp->color_primaries = fgc->color_primaries;
fgp->color_trc = fgc->transfer_characteristics;
fgp->color_space = fgc->matrix_coeffs;
} else {
h274->bit_depth_luma = bit_depth_luma;
h274->bit_depth_chroma = bit_depth_chroma;
fgp->bit_depth_luma = bit_depth_luma;
fgp->bit_depth_chroma = bit_depth_chroma;
if (vui->video_signal_type_present_flag)
h274->color_range = vui->video_full_range_flag + 1;
else
h274->color_range = AVCOL_RANGE_UNSPECIFIED;
fgp->color_range = vui->video_full_range_flag + 1;
if (vui->colour_description_present_flag) {
h274->color_primaries = vui->colour_primaries;
h274->color_trc = vui->transfer_characteristics;
h274->color_space = vui->matrix_coeffs;
} else {
h274->color_primaries = AVCOL_PRI_UNSPECIFIED;
h274->color_trc = AVCOL_TRC_UNSPECIFIED;
h274->color_space = AVCOL_SPC_UNSPECIFIED;
fgp->color_primaries = vui->colour_primaries;
fgp->color_trc = vui->transfer_characteristics;
fgp->color_space = vui->matrix_coeffs;
}
}
h274->blending_mode_id = fgc->blending_mode_id;
h274->log2_scale_factor = fgc->log2_scale_factor;
#if FF_API_H274_FILM_GRAIN_VCS
FF_DISABLE_DEPRECATION_WARNINGS
h274->bit_depth_luma = fgp->bit_depth_luma;
h274->bit_depth_chroma = fgp->bit_depth_chroma;
h274->color_range = fgp->color_range;
h274->color_primaries = fgp->color_primaries;
h274->color_trc = fgp->color_trc;
h274->color_space = fgp->color_space;
FF_ENABLE_DEPRECATION_WARNINGS
#endif
memcpy(&h274->component_model_present, &fgc->comp_model_present_flag,
sizeof(h274->component_model_present));
memcpy(&h274->num_intensity_intervals, &fgc->num_intensity_intervals,
@ -692,6 +722,12 @@ int ff_h2645_sei_to_frame(AVFrame *frame, H2645SEI *sei,
avctx->properties |= FF_CODEC_PROPERTY_FILM_GRAIN;
}
#if CONFIG_HEVC_SEI
ret = ff_aom_attach_film_grain_sets(&sei->aom_film_grain, frame);
if (ret < 0)
return ret;
#endif
if (sei->ambient_viewing_environment.present) {
H2645SEIAmbientViewingEnvironment *env =
&sei->ambient_viewing_environment;
@ -788,4 +824,5 @@ void ff_h2645_sei_reset(H2645SEI *s)
s->ambient_viewing_environment.present = 0;
s->mastering_display.present = 0;
s->content_light.present = 0;
s->aom_film_grain.enable = 0;
}


@ -23,7 +23,9 @@
#include "libavutil/buffer.h"
#include "libavutil/frame.h"
#include "libavutil/film_grain_params.h"
#include "aom_film_grain.h"
#include "avcodec.h"
#include "bytestream.h"
#include "codec_id.h"
@ -132,6 +134,7 @@ typedef struct H2645SEI {
H2645SEIAmbientViewingEnvironment ambient_viewing_environment;
H2645SEIMasteringDisplay mastering_display;
H2645SEIContentLight content_light;
AVFilmGrainAFGS1Params aom_film_grain;
} H2645SEI;
enum {


@ -370,7 +370,7 @@ static void decode_sublayer_hrd(GetBitContext *gb, unsigned int nb_cpb,
par->bit_rate_du_value_minus1[i] = get_ue_golomb_long(gb);
}
par->cbr_flag = get_bits1(gb);
par->cbr_flag |= get_bits1(gb) << i;
}
}
@ -378,24 +378,24 @@ static int decode_hrd(GetBitContext *gb, int common_inf_present,
HEVCHdrParams *hdr, int max_sublayers)
{
if (common_inf_present) {
hdr->flags.nal_hrd_parameters_present_flag = get_bits1(gb);
hdr->flags.vcl_hrd_parameters_present_flag = get_bits1(gb);
hdr->nal_hrd_parameters_present_flag = get_bits1(gb);
hdr->vcl_hrd_parameters_present_flag = get_bits1(gb);
if (hdr->flags.nal_hrd_parameters_present_flag ||
hdr->flags.vcl_hrd_parameters_present_flag) {
hdr->flags.sub_pic_hrd_params_present_flag = get_bits1(gb);
if (hdr->nal_hrd_parameters_present_flag ||
hdr->vcl_hrd_parameters_present_flag) {
hdr->sub_pic_hrd_params_present_flag = get_bits1(gb);
if (hdr->flags.sub_pic_hrd_params_present_flag) {
if (hdr->sub_pic_hrd_params_present_flag) {
hdr->tick_divisor_minus2 = get_bits(gb, 8);
hdr->du_cpb_removal_delay_increment_length_minus1 = get_bits(gb, 5);
hdr->flags.sub_pic_cpb_params_in_pic_timing_sei_flag = get_bits1(gb);
hdr->sub_pic_cpb_params_in_pic_timing_sei_flag = get_bits1(gb);
hdr->dpb_output_delay_du_length_minus1 = get_bits(gb, 5);
}
hdr->bit_rate_scale = get_bits(gb, 4);
hdr->cpb_size_scale = get_bits(gb, 4);
if (hdr->flags.sub_pic_hrd_params_present_flag)
if (hdr->sub_pic_hrd_params_present_flag)
hdr->cpb_size_du_scale = get_bits(gb, 4);
hdr->initial_cpb_removal_delay_length_minus1 = get_bits(gb, 5);
@ -405,18 +405,22 @@ static int decode_hrd(GetBitContext *gb, int common_inf_present,
}
for (int i = 0; i < max_sublayers; i++) {
hdr->flags.fixed_pic_rate_general_flag = get_bits1(gb);
unsigned fixed_pic_rate_general_flag = get_bits1(gb);
unsigned fixed_pic_rate_within_cvs_flag = 0;
unsigned low_delay_hrd_flag = 0;
hdr->flags.fixed_pic_rate_general_flag |= fixed_pic_rate_general_flag << i;
if (!hdr->flags.fixed_pic_rate_general_flag)
hdr->flags.fixed_pic_rate_within_cvs_flag = get_bits1(gb);
if (!fixed_pic_rate_general_flag)
fixed_pic_rate_within_cvs_flag = get_bits1(gb);
hdr->flags.fixed_pic_rate_within_cvs_flag |= fixed_pic_rate_within_cvs_flag << i;
if (hdr->flags.fixed_pic_rate_within_cvs_flag ||
hdr->flags.fixed_pic_rate_general_flag)
if (fixed_pic_rate_within_cvs_flag || fixed_pic_rate_general_flag)
hdr->elemental_duration_in_tc_minus1[i] = get_ue_golomb_long(gb);
else
hdr->flags.low_delay_hrd_flag = get_bits1(gb);
low_delay_hrd_flag = get_bits1(gb);
hdr->flags.low_delay_hrd_flag |= low_delay_hrd_flag << i;
if (!hdr->flags.low_delay_hrd_flag) {
if (!low_delay_hrd_flag) {
unsigned cpb_cnt_minus1 = get_ue_golomb_long(gb);
if (cpb_cnt_minus1 > 31) {
av_log(NULL, AV_LOG_ERROR, "nb_cpb %d invalid\n",
@ -426,25 +430,32 @@ static int decode_hrd(GetBitContext *gb, int common_inf_present,
hdr->cpb_cnt_minus1[i] = cpb_cnt_minus1;
}
if (hdr->flags.nal_hrd_parameters_present_flag)
if (hdr->nal_hrd_parameters_present_flag)
decode_sublayer_hrd(gb, hdr->cpb_cnt_minus1[i]+1, &hdr->nal_params[i],
hdr->flags.sub_pic_hrd_params_present_flag);
hdr->sub_pic_hrd_params_present_flag);
if (hdr->flags.vcl_hrd_parameters_present_flag)
if (hdr->vcl_hrd_parameters_present_flag)
decode_sublayer_hrd(gb, hdr->cpb_cnt_minus1[i]+1, &hdr->vcl_params[i],
hdr->flags.sub_pic_hrd_params_present_flag);
hdr->sub_pic_hrd_params_present_flag);
}
return 0;
}
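The decode_hrd() changes above pack each per-sublayer single-bit flag into one `uint32_t` bitmask via `flags |= bit << i`, instead of overwriting a single field per iteration. A self-contained sketch of that packing scheme (hypothetical helpers):

```c
#include <stdint.h>

/* Pack one single-bit flag per sublayer into a uint32_t bitmask,
 * mirroring "hdr->flags.x |= bit << i" in the hunk above. */
static void set_layer_flag(uint32_t *mask, int layer, unsigned bit)
{
    *mask |= (bit & 1u) << layer;
}

/* Recover the flag for a given sublayer from the bitmask. */
static unsigned get_layer_flag(uint32_t mask, int layer)
{
    return (mask >> layer) & 1u;
}
```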
static void uninit_vps(FFRefStructOpaque opaque, void *obj)
{
HEVCVPS *vps = obj;
av_freep(&vps->hdr);
}
int ff_hevc_decode_nal_vps(GetBitContext *gb, AVCodecContext *avctx,
HEVCParamSets *ps)
{
int i,j;
int vps_id = 0;
ptrdiff_t nal_size;
HEVCVPS *vps = ff_refstruct_allocz(sizeof(*vps));
HEVCVPS *vps = ff_refstruct_alloc_ext(sizeof(*vps), 0, NULL, uninit_vps);
if (!vps)
return AVERROR(ENOMEM);
@ -533,6 +544,11 @@ int ff_hevc_decode_nal_vps(GetBitContext *gb, AVCodecContext *avctx,
"vps_num_hrd_parameters %d is invalid\n", vps->vps_num_hrd_parameters);
goto err;
}
vps->hdr = av_calloc(vps->vps_num_hrd_parameters, sizeof(*vps->hdr));
if (!vps->hdr)
goto err;
for (i = 0; i < vps->vps_num_hrd_parameters; i++) {
int common_inf_present = 1;


@ -39,18 +39,19 @@ typedef struct HEVCSublayerHdrParams {
uint32_t cbr_flag;
} HEVCSublayerHdrParams;
// flags in bitmask form
typedef struct HEVCHdrFlagParams {
uint32_t nal_hrd_parameters_present_flag;
uint32_t vcl_hrd_parameters_present_flag;
uint32_t sub_pic_hrd_params_present_flag;
uint32_t sub_pic_cpb_params_in_pic_timing_sei_flag;
uint32_t fixed_pic_rate_general_flag;
uint32_t fixed_pic_rate_within_cvs_flag;
uint32_t low_delay_hrd_flag;
uint8_t fixed_pic_rate_general_flag;
uint8_t fixed_pic_rate_within_cvs_flag;
uint8_t low_delay_hrd_flag;
} HEVCHdrFlagParams;
typedef struct HEVCHdrParams {
HEVCHdrFlagParams flags;
uint8_t nal_hrd_parameters_present_flag;
uint8_t vcl_hrd_parameters_present_flag;
uint8_t sub_pic_hrd_params_present_flag;
uint8_t sub_pic_cpb_params_in_pic_timing_sei_flag;
uint8_t tick_divisor_minus2;
uint8_t du_cpb_removal_delay_increment_length_minus1;
@ -152,7 +153,7 @@ typedef struct PTL {
typedef struct HEVCVPS {
unsigned int vps_id;
HEVCHdrParams hdr[HEVC_MAX_LAYER_SETS];
HEVCHdrParams *hdr;
uint8_t vps_temporal_id_nesting_flag;
int vps_max_layers;


@ -35,6 +35,7 @@
#include "libavutil/pixdesc.h"
#include "libavutil/timecode.h"
#include "aom_film_grain.h"
#include "bswapdsp.h"
#include "cabac_functions.h"
#include "codec_internal.h"
@ -388,7 +389,8 @@ static int export_stream_params_from_sei(HEVCContext *s)
avctx->color_trc = s->sei.common.alternative_transfer.preferred_transfer_characteristics;
}
if (s->sei.common.film_grain_characteristics.present)
if (s->sei.common.film_grain_characteristics.present ||
s->sei.common.aom_film_grain.enable)
avctx->properties |= FF_CODEC_PROPERTY_FILM_GRAIN;
return 0;
@ -2885,11 +2887,13 @@ static int hevc_frame_start(HEVCContext *s)
else
s->ref->frame->flags &= ~AV_FRAME_FLAG_KEY;
s->ref->needs_fg = s->sei.common.film_grain_characteristics.present &&
s->ref->needs_fg = (s->sei.common.film_grain_characteristics.present ||
s->sei.common.aom_film_grain.enable) &&
!(s->avctx->export_side_data & AV_CODEC_EXPORT_DATA_FILM_GRAIN) &&
!s->avctx->hwaccel;
if (s->ref->needs_fg &&
s->sei.common.film_grain_characteristics.present &&
!ff_h274_film_grain_params_supported(s->sei.common.film_grain_characteristics.model_id,
s->ref->frame->format)) {
av_log_once(s->avctx, AV_LOG_WARNING, AV_LOG_DEBUG, &s->film_grain_warning_shown,
@ -2934,14 +2938,24 @@ fail:
static int hevc_frame_end(HEVCContext *s)
{
HEVCFrame *out = s->ref;
const AVFrameSideData *sd;
const AVFilmGrainParams *fgp;
av_unused int ret;
if (out->needs_fg) {
sd = av_frame_get_side_data(out->frame, AV_FRAME_DATA_FILM_GRAIN_PARAMS);
av_assert0(out->frame_grain->buf[0] && sd);
ret = ff_h274_apply_film_grain(out->frame_grain, out->frame, &s->h274db,
(AVFilmGrainParams *) sd->data);
av_assert0(out->frame_grain->buf[0]);
fgp = av_film_grain_params_select(out->frame);
switch (fgp->type) {
case AV_FILM_GRAIN_PARAMS_NONE:
av_assert0(0);
return AVERROR_BUG;
case AV_FILM_GRAIN_PARAMS_H274:
ret = ff_h274_apply_film_grain(out->frame_grain, out->frame,
&s->h274db, fgp);
break;
case AV_FILM_GRAIN_PARAMS_AV1:
ret = ff_aom_apply_film_grain(out->frame_grain, out->frame, fgp);
break;
}
av_assert1(ret >= 0);
}
@ -3596,6 +3610,7 @@ static int hevc_update_thread_context(AVCodecContext *dst,
s->sei.common.alternative_transfer = s0->sei.common.alternative_transfer;
s->sei.common.mastering_display = s0->sei.common.mastering_display;
s->sei.common.content_light = s0->sei.common.content_light;
s->sei.common.aom_film_grain = s0->sei.common.aom_film_grain;
ret = export_stream_params_from_sei(s);
if (ret < 0)

libavcodec/itut35.h (new file, 30 lines)

@ -0,0 +1,30 @@
/*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef AVCODEC_ITUT35_H
#define AVCODEC_ITUT35_H
#define ITU_T_T35_COUNTRY_CODE_CN 0x26
#define ITU_T_T35_COUNTRY_CODE_US 0xB5
#define ITU_T_T35_PROVIDER_CODE_ATSC 0x31
#define ITU_T_T35_PROVIDER_CODE_CUVA 0x04
#define ITU_T_T35_PROVIDER_CODE_DOLBY 0x3B
#define ITU_T_T35_PROVIDER_CODE_SMTPE 0x3C
#endif /* AVCODEC_ITUT35_H */
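For illustration, a minimal standalone parser for the leading bytes of an ITU-T T.35 payload, matching how decode_registered_user_data() consumes them: a 1-byte country code, an extension byte skipped when the first byte is 0xFF, then a big-endian 16-bit provider code. The helper name is hypothetical, not an FFmpeg API:

```c
#include <stdint.h>
#include <stddef.h>

/* Parse the T.35 header bytes; returns bytes consumed, or -1 on
 * truncated input. */
static int parse_t35_header(const uint8_t *buf, size_t size,
                            int *country_code, int *provider_code)
{
    size_t pos = 0;
    if (size < 3)
        return -1;
    *country_code = buf[pos++];
    if (*country_code == 0xFF) {
        if (size < 4)
            return -1;
        pos++; /* itu_t_t35_country_code_extension_byte */
    }
    *provider_code = (buf[pos] << 8) | buf[pos + 1];
    return (int)(pos + 2);
}
```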


@ -35,6 +35,7 @@
#include "ffjni.h"
static void *java_vm;
static void *android_app_ctx;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
int av_jni_set_java_vm(void *vm, void *log_ctx)
@ -77,3 +78,45 @@ void *av_jni_get_java_vm(void *log_ctx)
}
#endif
#if defined(__ANDROID__)
int av_jni_set_android_app_ctx(void *app_ctx, void *log_ctx)
{
#if CONFIG_JNI
JNIEnv *env = ff_jni_get_env(log_ctx);
if (!env)
return AVERROR(EINVAL);
jobjectRefType type = (*env)->GetObjectRefType(env, app_ctx);
if (type != JNIGlobalRefType) {
av_log(log_ctx, AV_LOG_ERROR, "Application context must be passed as a global reference");
return AVERROR(EINVAL);
}
pthread_mutex_lock(&lock);
android_app_ctx = app_ctx;
pthread_mutex_unlock(&lock);
return 0;
#else
return AVERROR(ENOSYS);
#endif
}
void *av_jni_get_android_app_ctx(void)
{
#if CONFIG_JNI
void *ctx;
pthread_mutex_lock(&lock);
ctx = android_app_ctx;
pthread_mutex_unlock(&lock);
return ctx;
#else
return NULL;
#endif
}
#endif


@ -43,4 +43,25 @@ int av_jni_set_java_vm(void *vm, void *log_ctx);
*/
void *av_jni_get_java_vm(void *log_ctx);
/*
* Set the Android application context which will be used to retrieve the Android
* content resolver to handle content uris.
*
* This function is only available on Android.
*
* @param app_ctx global JNI reference to the Android application context
* @return 0 on success, < 0 otherwise
*/
int av_jni_set_android_app_ctx(void *app_ctx, void *log_ctx);
/*
* Get the Android application context that has been set with
* av_jni_set_android_app_ctx.
*
* This function is only available on Android.
*
* @return a pointer to the Android application context
*/
void *av_jni_get_android_app_ctx(void);
#endif /* AVCODEC_JNI_H */


@ -37,6 +37,7 @@
#include "decode.h"
#include "dovi_rpu.h"
#include "internal.h"
#include "itut35.h"
#define FF_DAV1D_VERSION_AT_LEAST(x,y) \
(DAV1D_API_VERSION_MAJOR > (x) || DAV1D_API_VERSION_MAJOR == (x) && DAV1D_API_VERSION_MINOR >= (y))
@ -304,10 +305,6 @@ static void libdav1d_flush(AVCodecContext *c)
dav1d_flush(dav1d->c);
}
typedef struct OpaqueData {
void *pkt_orig_opaque;
} OpaqueData;
static void libdav1d_data_free(const uint8_t *data, void *opaque) {
AVBufferRef *buf = opaque;
@ -317,7 +314,6 @@ static void libdav1d_data_free(const uint8_t *data, void *opaque) {
static void libdav1d_user_data_free(const uint8_t *data, void *opaque) {
AVPacket *pkt = opaque;
av_assert0(data == opaque);
av_free(pkt->opaque);
av_packet_free(&pkt);
}
@ -340,8 +336,6 @@ static int libdav1d_receive_frame_internal(AVCodecContext *c, Dav1dPicture *p)
}
if (pkt->size) {
OpaqueData *od = NULL;
res = dav1d_data_wrap(data, pkt->data, pkt->size,
libdav1d_data_free, pkt->buf);
if (res < 0) {
@ -351,21 +345,9 @@ static int libdav1d_receive_frame_internal(AVCodecContext *c, Dav1dPicture *p)
pkt->buf = NULL;
if (pkt->opaque && (c->flags & AV_CODEC_FLAG_COPY_OPAQUE)) {
od = av_mallocz(sizeof(*od));
if (!od) {
av_packet_free(&pkt);
dav1d_data_unref(data);
return AVERROR(ENOMEM);
}
od->pkt_orig_opaque = pkt->opaque;
}
pkt->opaque = od;
res = dav1d_data_wrap_user_data(data, (const uint8_t *)pkt,
libdav1d_user_data_free, pkt);
if (res < 0) {
av_free(pkt->opaque);
av_packet_free(&pkt);
dav1d_data_unref(data);
return res;
@ -404,7 +386,6 @@ static int libdav1d_receive_frame(AVCodecContext *c, AVFrame *frame)
Libdav1dContext *dav1d = c->priv_data;
Dav1dPicture pic = { 0 }, *p = &pic;
AVPacket *pkt;
OpaqueData *od = NULL;
#if FF_DAV1D_VERSION_AT_LEAST(5,1)
enum Dav1dEventFlags event_flags = 0;
#endif
@ -459,16 +440,9 @@ static int libdav1d_receive_frame(AVCodecContext *c, AVFrame *frame)
ff_set_sar(c, frame->sample_aspect_ratio);
pkt = (AVPacket *)p->m.user_data.data;
od = pkt->opaque;
// restore the original user opaque value for
// ff_decode_frame_props_from_pkt()
pkt->opaque = od ? od->pkt_orig_opaque : NULL;
av_freep(&od);
// match timestamps and packet size
res = ff_decode_frame_props_from_pkt(c, frame, pkt);
pkt->opaque = NULL;
if (res < 0)
goto fail;
@ -542,7 +516,7 @@ static int libdav1d_receive_frame(AVCodecContext *c, AVFrame *frame)
provider_code = bytestream2_get_be16(&gb);
switch (provider_code) {
case 0x31: { // atsc_provider_code
case ITU_T_T35_PROVIDER_CODE_ATSC: {
uint32_t user_identifier = bytestream2_get_be32(&gb);
switch (user_identifier) {
case MKBETAG('G', 'A', '9', '4'): { // closed captions
@ -566,12 +540,12 @@ static int libdav1d_receive_frame(AVCodecContext *c, AVFrame *frame)
}
break;
}
case 0x3C: { // smpte_provider_code
case ITU_T_T35_PROVIDER_CODE_SMTPE: {
AVDynamicHDRPlus *hdrplus;
int provider_oriented_code = bytestream2_get_be16(&gb);
int application_identifier = bytestream2_get_byte(&gb);
if (itut_t35->country_code != 0xB5 ||
if (itut_t35->country_code != ITU_T_T35_COUNTRY_CODE_US ||
provider_oriented_code != 1 || application_identifier != 4)
break;
@ -587,9 +561,10 @@ static int libdav1d_receive_frame(AVCodecContext *c, AVFrame *frame)
goto fail;
break;
}
case 0x3B: { // dolby_provider_code
case ITU_T_T35_PROVIDER_CODE_DOLBY: {
int provider_oriented_code = bytestream2_get_be32(&gb);
if (itut_t35->country_code != 0xB5 || provider_oriented_code != 0x800)
if (itut_t35->country_code != ITU_T_T35_COUNTRY_CODE_US ||
provider_oriented_code != 0x800)
break;
res = ff_dovi_rpu_parse(&dav1d->dovi, gb.buffer, gb.buffer_end - gb.buffer);
@ -613,6 +588,8 @@ static int libdav1d_receive_frame(AVCodecContext *c, AVFrame *frame)
if (p->frame_hdr->film_grain.present && (!dav1d->apply_grain ||
(c->export_side_data & AV_CODEC_EXPORT_DATA_FILM_GRAIN))) {
AVFilmGrainParams *fgp = av_film_grain_params_create_side_data(frame);
const AVPixFmtDescriptor *pixdesc = av_pix_fmt_desc_get(frame->format);
av_assert0(pixdesc);
if (!fgp) {
res = AVERROR(ENOMEM);
goto fail;
@ -620,6 +597,14 @@ static int libdav1d_receive_frame(AVCodecContext *c, AVFrame *frame)
fgp->type = AV_FILM_GRAIN_PARAMS_AV1;
fgp->seed = p->frame_hdr->film_grain.data.seed;
fgp->width = frame->width;
fgp->height = frame->height;
fgp->color_range = frame->color_range;
fgp->color_primaries = frame->color_primaries;
fgp->color_trc = frame->color_trc;
fgp->color_space = frame->colorspace;
fgp->subsampling_x = pixdesc->log2_chroma_w;
fgp->subsampling_y = pixdesc->log2_chroma_h;
fgp->codec.aom.num_y_points = p->frame_hdr->film_grain.data.num_y_points;
fgp->codec.aom.chroma_scaling_from_luma = p->frame_hdr->film_grain.data.chroma_scaling_from_luma;
fgp->codec.aom.scaling_shift = p->frame_hdr->film_grain.data.scaling_shift;


@ -472,12 +472,8 @@ static int librav1e_receive_packet(AVCodecContext *avctx, AVPacket *pkt)
if (avctx->flags & AV_CODEC_FLAG_COPY_OPAQUE) {
fd->frame_opaque = frame->opaque;
ret = av_buffer_replace(&fd->frame_opaque_ref, frame->opaque_ref);
if (ret < 0) {
frame_data_free(fd);
av_frame_unref(frame);
return ret;
}
fd->frame_opaque_ref = frame->opaque_ref;
frame->opaque_ref = NULL;
}
rframe = rav1e_frame_new(ctx->ctx);


@ -27,6 +27,8 @@
#include "libavutil/common.h"
#include "libavutil/frame.h"
#include "libavutil/imgutils.h"
#include "libavutil/intreadwrite.h"
#include "libavutil/mastering_display_metadata.h"
#include "libavutil/opt.h"
#include "libavutil/pixdesc.h"
#include "libavutil/avassert.h"
@ -136,6 +138,69 @@ static int alloc_buffer(EbSvtAv1EncConfiguration *config, SvtContext *svt_enc)
}
static void handle_mdcv(struct EbSvtAv1MasteringDisplayInfo *dst,
const AVMasteringDisplayMetadata *mdcv)
{
if (mdcv->has_primaries) {
const struct EbSvtAv1ChromaPoints *const points[] = {
&dst->r,
&dst->g,
&dst->b,
};
for (int i = 0; i < 3; i++) {
const struct EbSvtAv1ChromaPoints *dst = points[i];
const AVRational *src = mdcv->display_primaries[i];
AV_WB16(&dst->x,
av_rescale_q(1, src[0], (AVRational){ 1, (1 << 16) }));
AV_WB16(&dst->y,
av_rescale_q(1, src[1], (AVRational){ 1, (1 << 16) }));
}
AV_WB16(&dst->white_point.x,
av_rescale_q(1, mdcv->white_point[0],
(AVRational){ 1, (1 << 16) }));
AV_WB16(&dst->white_point.y,
av_rescale_q(1, mdcv->white_point[1],
(AVRational){ 1, (1 << 16) }));
}
if (mdcv->has_luminance) {
AV_WB32(&dst->max_luma,
av_rescale_q(1, mdcv->max_luminance,
(AVRational){ 1, (1 << 8) }));
AV_WB32(&dst->min_luma,
av_rescale_q(1, mdcv->min_luminance,
(AVRational){ 1, (1 << 14) }));
}
}
static void handle_side_data(AVCodecContext *avctx,
EbSvtAv1EncConfiguration *param)
{
const AVFrameSideData *cll_sd =
av_frame_side_data_get(avctx->decoded_side_data,
avctx->nb_decoded_side_data, AV_FRAME_DATA_CONTENT_LIGHT_LEVEL);
const AVFrameSideData *mdcv_sd =
av_frame_side_data_get(avctx->decoded_side_data,
avctx->nb_decoded_side_data,
AV_FRAME_DATA_MASTERING_DISPLAY_METADATA);
if (cll_sd) {
const AVContentLightMetadata *cll =
(AVContentLightMetadata *)cll_sd->data;
AV_WB16(&param->content_light_level.max_cll, cll->MaxCLL);
AV_WB16(&param->content_light_level.max_fall, cll->MaxFALL);
}
if (mdcv_sd) {
handle_mdcv(&param->mastering_display,
(AVMasteringDisplayMetadata *)mdcv_sd->data);
}
}
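The SVT-AV1 hunk above stores primaries big-endian via AV_WB16 after rescaling rational chromaticities into 1/65536 units. A simplified standalone stand-in for those two steps (not the libavutil implementation; the rounding here only approximates av_rescale_q's default):

```c
#include <stdint.h>

/* Write a 16-bit value big-endian, as AV_WB16 does in the hunk above. */
static void wb16(uint8_t *p, uint16_t v)
{
    p[0] = (uint8_t)(v >> 8);
    p[1] = (uint8_t)(v & 0xFF);
}

/* Rescale a rational chromaticity num/den into units of 1/65536,
 * rounding to nearest. */
static uint16_t chroma_q16(int64_t num, int64_t den)
{
    return (uint16_t)((num * 65536 + den / 2) / den);
}
```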
static int config_enc_params(EbSvtAv1EncConfiguration *param,
AVCodecContext *avctx)
{
@ -254,6 +319,8 @@ FF_ENABLE_DEPRECATION_WARNINGS
/* 2 = IDR, closed GOP, 1 = CRA, open GOP */
param->intra_refresh_type = avctx->flags & AV_CODEC_FLAG_CLOSED_GOP ? 2 : 1;
handle_side_data(avctx, param);
#if SVT_AV1_CHECK_VERSION(0, 9, 1)
while ((en = av_dict_get(svt_enc->svtav1_opts, "", en, AV_DICT_IGNORE_SUFFIX))) {
EbErrorType ret = svt_av1_enc_parse_parameter(param, en->key, en->value);


@ -25,6 +25,7 @@
#include "libavutil/eval.h"
#include "libavutil/internal.h"
#include "libavutil/opt.h"
#include "libavutil/mastering_display_metadata.h"
#include "libavutil/mem.h"
#include "libavutil/pixdesc.h"
#include "libavutil/stereo3d.h"
@ -38,6 +39,7 @@
#include "packet_internal.h"
#include "atsc_a53.h"
#include "sei.h"
#include "golomb.h"
#include <x264.h>
#include <float.h>
@ -847,12 +849,224 @@ static int convert_pix_fmt(enum AVPixelFormat pix_fmt)
return 0;
}
static int save_sei(AVCodecContext *avctx, x264_nal_t *nal)
{
X264Context *x4 = avctx->priv_data;
av_log(avctx, AV_LOG_INFO, "%s\n", nal->p_payload + 25);
x4->sei_size = nal->i_payload;
x4->sei = av_malloc(x4->sei_size);
if (!x4->sei)
return AVERROR(ENOMEM);
memcpy(x4->sei, nal->p_payload, nal->i_payload);
return 0;
}
#if CONFIG_LIBX264_ENCODER
static int set_avcc_extradata(AVCodecContext *avctx, x264_nal_t *nal, int nnal)
{
x264_nal_t *sps_nal = NULL;
x264_nal_t *pps_nal = NULL;
uint8_t *p, *sps;
int ret;
/* We know it's in the order of SPS/PPS/SEI, but it's not documented in x264 API.
* The x264 param i_sps_id implies there is a single pair of SPS/PPS.
*/
for (int i = 0; i < nnal; i++) {
switch (nal[i].i_type) {
case NAL_SPS:
sps_nal = &nal[i];
break;
case NAL_PPS:
pps_nal = &nal[i];
break;
case NAL_SEI:
ret = save_sei(avctx, &nal[i]);
if (ret < 0)
return ret;
break;
}
}
if (!sps_nal || !pps_nal)
return AVERROR_EXTERNAL;
avctx->extradata_size = sps_nal->i_payload + pps_nal->i_payload + 7;
avctx->extradata = av_mallocz(avctx->extradata_size + AV_INPUT_BUFFER_PADDING_SIZE);
if (!avctx->extradata)
return AVERROR(ENOMEM);
// Now create AVCDecoderConfigurationRecord
p = avctx->extradata;
// Skip size part
sps = sps_nal->p_payload + 4;
*p++ = 1; // version
*p++ = sps[1]; // AVCProfileIndication
*p++ = sps[2]; // profile_compatibility
*p++ = sps[3]; // AVCLevelIndication
*p++ = 0xFF;
*p++ = 0xE0 | 0x01; // 3 bits reserved (111) + 5 bits number of sps
memcpy(p, sps_nal->p_payload + 2, sps_nal->i_payload - 2);
// Make sps has AV_INPUT_BUFFER_PADDING_SIZE padding, so it can be used
// with GetBitContext
sps = p + 2;
p += sps_nal->i_payload - 2;
*p++ = 1;
memcpy(p, pps_nal->p_payload + 2, pps_nal->i_payload - 2);
p += pps_nal->i_payload - 2;
if (sps[3] != 66 && sps[3] != 77 && sps[3] != 88) {
GetBitContext gbc;
int chroma_format_idc;
int bit_depth_luma_minus8, bit_depth_chroma_minus8;
/* It's not possible to have emulation prevention byte before
* bit_depth_chroma_minus8 due to the range of sps id, chroma_format_idc
* and so on. So we can read directly without need to escape emulation
* prevention byte.
*
* +4 to skip until sps id.
*/
init_get_bits8(&gbc, sps + 4, sps_nal->i_payload - 4 - 4);
// Skip sps id
get_ue_golomb_31(&gbc);
chroma_format_idc = get_ue_golomb_31(&gbc);
if (chroma_format_idc == 3)
skip_bits1(&gbc);
bit_depth_luma_minus8 = get_ue_golomb_31(&gbc);
bit_depth_chroma_minus8 = get_ue_golomb_31(&gbc);
*p++ = 0xFC | chroma_format_idc;
*p++ = 0xF8 | bit_depth_luma_minus8;
*p++ = 0xF8 | bit_depth_chroma_minus8;
*p++ = 0;
}
av_assert2(avctx->extradata + avctx->extradata_size >= p);
avctx->extradata_size = p - avctx->extradata;
return 0;
}
#endif
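The fixed head of the AVCDecoderConfigurationRecord built in set_avcc_extradata() can be sketched standalone as follows (illustrative helper, assuming `sps` points at the NAL header byte so that `sps[1..3]` carry profile, compatibility, and level):

```c
#include <stdint.h>
#include <stddef.h>

/* Emit the fixed 6-byte head of an AVCDecoderConfigurationRecord,
 * mirroring the layout written in set_avcc_extradata() above. */
static size_t write_avcc_head(uint8_t *p, const uint8_t *sps)
{
    size_t n = 0;
    p[n++] = 1;        /* configurationVersion */
    p[n++] = sps[1];   /* AVCProfileIndication */
    p[n++] = sps[2];   /* profile_compatibility */
    p[n++] = sps[3];   /* AVCLevelIndication */
    p[n++] = 0xFC | 3; /* 6 bits reserved + lengthSizeMinusOne = 3 */
    p[n++] = 0xE0 | 1; /* 3 bits reserved + numOfSequenceParameterSets */
    return n;
}
```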
static int set_extradata(AVCodecContext *avctx)
{
X264Context *x4 = avctx->priv_data;
x264_nal_t *nal;
uint8_t *p;
int nnal, s;
s = x264_encoder_headers(x4->enc, &nal, &nnal);
if (s < 0)
return AVERROR_EXTERNAL;
#if CONFIG_LIBX264_ENCODER
if (!x4->params.b_annexb)
return set_avcc_extradata(avctx, nal, nnal);
#endif
avctx->extradata = p = av_mallocz(s + AV_INPUT_BUFFER_PADDING_SIZE);
if (!p)
return AVERROR(ENOMEM);
for (int i = 0; i < nnal; i++) {
/* Don't put the SEI in extradata. */
if (nal[i].i_type == NAL_SEI) {
s = save_sei(avctx, &nal[i]);
if (s < 0)
return s;
continue;
}
memcpy(p, nal[i].p_payload, nal[i].i_payload);
p += nal[i].i_payload;
}
avctx->extradata_size = p - avctx->extradata;
return 0;
}
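set_avcc_extradata() above reads chroma_format_idc and the bit depths as unsigned Exp-Golomb (ue(v)) codes via get_ue_golomb_31(). A minimal self-contained bit reader and ue(v) decoder illustrating that coding (not the FFmpeg implementation):

```c
#include <stdint.h>
#include <stddef.h>

/* MSB-first bit reader over a byte buffer. */
typedef struct { const uint8_t *buf; size_t bitpos; } BitReader;

static unsigned read_bit(BitReader *br)
{
    unsigned b = (br->buf[br->bitpos >> 3] >> (7 - (br->bitpos & 7))) & 1;
    br->bitpos++;
    return b;
}

/* Decode one ue(v) code: count leading zeros, then read that many
 * suffix bits after the terminating 1, and subtract one. */
static unsigned read_ue(BitReader *br)
{
    int zeros = 0;
    while (!read_bit(br))
        zeros++;
    unsigned v = 1;
    while (zeros--)
        v = (v << 1) | read_bit(br);
    return v - 1;
}
```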
#define PARSE_X264_OPT(name, var)\
if (x4->var && x264_param_parse(&x4->params, name, x4->var) < 0) {\
av_log(avctx, AV_LOG_ERROR, "Error parsing option '%s' with value '%s'.\n", name, x4->var);\
return AVERROR(EINVAL);\
}
#if CONFIG_LIBX264_HDR10
static void handle_mdcv(x264_param_t *params,
const AVMasteringDisplayMetadata *mdcv)
{
if (!mdcv->has_primaries && !mdcv->has_luminance)
return;
params->mastering_display.b_mastering_display = 1;
if (mdcv->has_primaries) {
int *const points[][2] = {
{
&params->mastering_display.i_red_x,
&params->mastering_display.i_red_y
},
{
&params->mastering_display.i_green_x,
&params->mastering_display.i_green_y
},
{
&params->mastering_display.i_blue_x,
&params->mastering_display.i_blue_y
},
};
for (int i = 0; i < 3; i++) {
const AVRational *src = mdcv->display_primaries[i];
int *dst[2] = { points[i][0], points[i][1] };
*dst[0] = av_rescale_q(1, src[0], (AVRational){ 1, 50000 });
*dst[1] = av_rescale_q(1, src[1], (AVRational){ 1, 50000 });
}
params->mastering_display.i_white_x =
av_rescale_q(1, mdcv->white_point[0], (AVRational){ 1, 50000 });
params->mastering_display.i_white_y =
av_rescale_q(1, mdcv->white_point[1], (AVRational){ 1, 50000 });
}
if (mdcv->has_luminance) {
params->mastering_display.i_display_max =
av_rescale_q(1, mdcv->max_luminance, (AVRational){ 1, 10000 });
params->mastering_display.i_display_min =
av_rescale_q(1, mdcv->min_luminance, (AVRational){ 1, 10000 });
}
}
#endif // CONFIG_LIBX264_HDR10
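The av_rescale_q(1, src, (AVRational){ 1, 50000 }) calls above convert rational chromaticities into the 0.00002-unit integers used for mastering display metadata (SMPTE ST 2086). A simplified standalone equivalent, rounding to nearest as an approximation of av_rescale_q's default:

```c
#include <stdint.h>

/* Convert a rational chromaticity num/den into 0.00002 (1/50000) units. */
static int chroma_to_50000(int64_t num, int64_t den)
{
    return (int)((num * 50000 + den / 2) / den);
}
```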
static void handle_side_data(AVCodecContext *avctx, x264_param_t *params)
{
#if CONFIG_LIBX264_HDR10
const AVFrameSideData *cll_sd =
av_frame_side_data_get(avctx->decoded_side_data,
avctx->nb_decoded_side_data, AV_FRAME_DATA_CONTENT_LIGHT_LEVEL);
const AVFrameSideData *mdcv_sd =
av_frame_side_data_get(avctx->decoded_side_data,
avctx->nb_decoded_side_data,
AV_FRAME_DATA_MASTERING_DISPLAY_METADATA);
if (cll_sd) {
const AVContentLightMetadata *cll =
(AVContentLightMetadata *)cll_sd->data;
params->content_light_level.i_max_cll = cll->MaxCLL;
params->content_light_level.i_max_fall = cll->MaxFALL;
params->content_light_level.b_cll = 1;
}
if (mdcv_sd) {
handle_mdcv(params, (AVMasteringDisplayMetadata *)mdcv_sd->data);
}
#endif // CONFIG_LIBX264_HDR10
}
static av_cold int X264_init(AVCodecContext *avctx)
{
X264Context *x4 = avctx->priv_data;
@ -1153,6 +1367,8 @@ FF_ENABLE_DEPRECATION_WARNINGS
if (avctx->chroma_sample_location != AVCHROMA_LOC_UNSPECIFIED)
x4->params.vui.i_chroma_loc = avctx->chroma_sample_location - 1;
handle_side_data(avctx, &x4->params);
if (avctx->flags & AV_CODEC_FLAG_GLOBAL_HEADER)
x4->params.b_repeat_headers = 0;
@ -1215,30 +1431,9 @@ FF_ENABLE_DEPRECATION_WARNINGS
return AVERROR_EXTERNAL;
if (avctx->flags & AV_CODEC_FLAG_GLOBAL_HEADER) {
x264_nal_t *nal;
uint8_t *p;
int nnal, s, i;
s = x264_encoder_headers(x4->enc, &nal, &nnal);
avctx->extradata = p = av_mallocz(s + AV_INPUT_BUFFER_PADDING_SIZE);
if (!p)
return AVERROR(ENOMEM);
for (i = 0; i < nnal; i++) {
/* Don't put the SEI in extradata. */
if (nal[i].i_type == NAL_SEI) {
av_log(avctx, AV_LOG_INFO, "%s\n", nal[i].p_payload+25);
x4->sei_size = nal[i].i_payload;
x4->sei = av_malloc(x4->sei_size);
if (!x4->sei)
return AVERROR(ENOMEM);
memcpy(x4->sei, nal[i].p_payload, nal[i].i_payload);
continue;
}
memcpy(p, nal[i].p_payload, nal[i].i_payload);
p += nal[i].i_payload;
}
avctx->extradata_size = p - avctx->extradata;
ret = set_extradata(avctx);
if (ret < 0)
return ret;
}
cpb_props = ff_encode_add_cpb_side_data(avctx);


@@ -30,13 +30,12 @@
#include "libavutil/avassert.h"
#include "libavutil/buffer.h"
#include "libavutil/internal.h"
#include "libavutil/common.h"
#include "libavutil/mastering_display_metadata.h"
#include "libavutil/opt.h"
#include "libavutil/pixdesc.h"
#include "avcodec.h"
#include "codec_internal.h"
#include "encode.h"
#include "internal.h"
#include "packet_internal.h"
#include "atsc_a53.h"
#include "sei.h"
@@ -176,6 +175,68 @@ static av_cold int libx265_param_parse_int(AVCodecContext *avctx,
return 0;
}
static int handle_mdcv(void *logctx, const x265_api *api,
x265_param *params,
const AVMasteringDisplayMetadata *mdcv)
{
char buf[10 /* # of PRId64s */ * 20 /* max strlen for %PRId64 */ + sizeof("G(,)B(,)R(,)WP(,)L(,)")];
// G(%hu,%hu)B(%hu,%hu)R(%hu,%hu)WP(%hu,%hu)L(%u,%u)
snprintf(buf, sizeof(buf),
"G(%"PRId64",%"PRId64")B(%"PRId64",%"PRId64")R(%"PRId64",%"PRId64")"
"WP(%"PRId64",%"PRId64")L(%"PRId64",%"PRId64")",
av_rescale_q(1, mdcv->display_primaries[1][0], (AVRational){ 1, 50000 }),
av_rescale_q(1, mdcv->display_primaries[1][1], (AVRational){ 1, 50000 }),
av_rescale_q(1, mdcv->display_primaries[2][0], (AVRational){ 1, 50000 }),
av_rescale_q(1, mdcv->display_primaries[2][1], (AVRational){ 1, 50000 }),
av_rescale_q(1, mdcv->display_primaries[0][0], (AVRational){ 1, 50000 }),
av_rescale_q(1, mdcv->display_primaries[0][1], (AVRational){ 1, 50000 }),
av_rescale_q(1, mdcv->white_point[0], (AVRational){ 1, 50000 }),
av_rescale_q(1, mdcv->white_point[1], (AVRational){ 1, 50000 }),
av_rescale_q(1, mdcv->max_luminance, (AVRational){ 1, 10000 }),
av_rescale_q(1, mdcv->min_luminance, (AVRational){ 1, 10000 }));
if (api->param_parse(params, "master-display", buf) ==
X265_PARAM_BAD_VALUE) {
av_log(logctx, AV_LOG_ERROR,
"Invalid value \"%s\" for param \"master-display\".\n",
buf);
return AVERROR(EINVAL);
}
return 0;
}
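The `handle_mdcv()` hunk above rescales rational chromaticities into x265's `master-display` integer units (0.00002 for primaries and white point, 0.0001 for luminance) and reorders FFmpeg's R,G,B primaries into the G,B,R order the string expects. A minimal Python sketch of the same arithmetic (not FFmpeg code; exact rational inputs assumed):

```python
from fractions import Fraction

def mdcv_to_x265_string(primaries, white_point, max_lum, min_lum):
    """Format mastering-display metadata the way handle_mdcv() does.

    primaries: [(Rx, Ry), (Gx, Gy), (Bx, By)] as Fractions, matching
    FFmpeg's display_primaries[0..2] = R, G, B indexing.
    """
    def scale(frac, den):
        # mirrors av_rescale_q(1, frac, (AVRational){1, den})
        return round(Fraction(frac) * den)

    # x265 wants G(...)B(...)R(...) first, then white point, then luminance
    g, b, r = primaries[1], primaries[2], primaries[0]
    vals = [scale(c, 50000) for p in (g, b, r) for c in p]
    vals += [scale(c, 50000) for c in white_point]
    vals += [scale(max_lum, 10000), scale(min_lum, 10000)]
    return "G(%d,%d)B(%d,%d)R(%d,%d)WP(%d,%d)L(%d,%d)" % tuple(vals)
```

For a P3-D65 display mastered at 1000 / 0.0001 nits this yields the familiar `G(13250,34500)B(7500,3000)R(34000,16000)WP(15635,16450)L(10000000,1)` string.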
static int handle_side_data(AVCodecContext *avctx, const x265_api *api,
x265_param *params)
{
const AVFrameSideData *cll_sd =
av_frame_side_data_get(avctx->decoded_side_data,
avctx->nb_decoded_side_data, AV_FRAME_DATA_CONTENT_LIGHT_LEVEL);
const AVFrameSideData *mdcv_sd =
av_frame_side_data_get(avctx->decoded_side_data,
avctx->nb_decoded_side_data,
AV_FRAME_DATA_MASTERING_DISPLAY_METADATA);
if (cll_sd) {
const AVContentLightMetadata *cll =
(AVContentLightMetadata *)cll_sd->data;
params->maxCLL = cll->MaxCLL;
params->maxFALL = cll->MaxFALL;
}
if (mdcv_sd) {
int ret = handle_mdcv(
avctx, api, params,
(AVMasteringDisplayMetadata *)mdcv_sd->data);
if (ret < 0)
return ret;
}
return 0;
}
static av_cold int libx265_encode_init(AVCodecContext *avctx)
{
libx265Context *ctx = avctx->priv_data;
@@ -336,6 +397,13 @@ FF_ENABLE_DEPRECATION_WARNINGS
return AVERROR_BUG;
}
ret = handle_side_data(avctx, ctx->api, ctx->params);
if (ret < 0) {
av_log(avctx, AV_LOG_ERROR, "Failed handling side data! (%s)\n",
av_err2str(ret));
return ret;
}
if (ctx->crf >= 0) {
char crf[6];


@@ -60,31 +60,33 @@ struct JNIAMediaCodecListFields {
jfieldID level_id;
};
#define OFFSET(x) offsetof(struct JNIAMediaCodecListFields, x)
static const struct FFJniField jni_amediacodeclist_mapping[] = {
{ "android/media/MediaCodecList", NULL, NULL, FF_JNI_CLASS, offsetof(struct JNIAMediaCodecListFields, mediacodec_list_class), 1 },
{ "android/media/MediaCodecList", "<init>", "(I)V", FF_JNI_METHOD, offsetof(struct JNIAMediaCodecListFields, init_id), 0 },
{ "android/media/MediaCodecList", "findDecoderForFormat", "(Landroid/media/MediaFormat;)Ljava/lang/String;", FF_JNI_METHOD, offsetof(struct JNIAMediaCodecListFields, find_decoder_for_format_id), 0 },
{ "android/media/MediaCodecList", NULL, NULL, FF_JNI_CLASS, OFFSET(mediacodec_list_class), 1 },
{ "android/media/MediaCodecList", "<init>", "(I)V", FF_JNI_METHOD, OFFSET(init_id), 0 },
{ "android/media/MediaCodecList", "findDecoderForFormat", "(Landroid/media/MediaFormat;)Ljava/lang/String;", FF_JNI_METHOD, OFFSET(find_decoder_for_format_id), 0 },
{ "android/media/MediaCodecList", "getCodecCount", "()I", FF_JNI_STATIC_METHOD, offsetof(struct JNIAMediaCodecListFields, get_codec_count_id), 1 },
{ "android/media/MediaCodecList", "getCodecInfoAt", "(I)Landroid/media/MediaCodecInfo;", FF_JNI_STATIC_METHOD, offsetof(struct JNIAMediaCodecListFields, get_codec_info_at_id), 1 },
{ "android/media/MediaCodecList", "getCodecCount", "()I", FF_JNI_STATIC_METHOD, OFFSET(get_codec_count_id), 1 },
{ "android/media/MediaCodecList", "getCodecInfoAt", "(I)Landroid/media/MediaCodecInfo;", FF_JNI_STATIC_METHOD, OFFSET(get_codec_info_at_id), 1 },
{ "android/media/MediaCodecInfo", NULL, NULL, FF_JNI_CLASS, offsetof(struct JNIAMediaCodecListFields, mediacodec_info_class), 1 },
{ "android/media/MediaCodecInfo", "getName", "()Ljava/lang/String;", FF_JNI_METHOD, offsetof(struct JNIAMediaCodecListFields, get_name_id), 1 },
{ "android/media/MediaCodecInfo", "getCapabilitiesForType", "(Ljava/lang/String;)Landroid/media/MediaCodecInfo$CodecCapabilities;", FF_JNI_METHOD, offsetof(struct JNIAMediaCodecListFields, get_codec_capabilities_id), 1 },
{ "android/media/MediaCodecInfo", "getSupportedTypes", "()[Ljava/lang/String;", FF_JNI_METHOD, offsetof(struct JNIAMediaCodecListFields, get_supported_types_id), 1 },
{ "android/media/MediaCodecInfo", "isEncoder", "()Z", FF_JNI_METHOD, offsetof(struct JNIAMediaCodecListFields, is_encoder_id), 1 },
{ "android/media/MediaCodecInfo", "isSoftwareOnly", "()Z", FF_JNI_METHOD, offsetof(struct JNIAMediaCodecListFields, is_software_only_id), 0 },
{ "android/media/MediaCodecInfo", NULL, NULL, FF_JNI_CLASS, OFFSET(mediacodec_info_class), 1 },
{ "android/media/MediaCodecInfo", "getName", "()Ljava/lang/String;", FF_JNI_METHOD, OFFSET(get_name_id), 1 },
{ "android/media/MediaCodecInfo", "getCapabilitiesForType", "(Ljava/lang/String;)Landroid/media/MediaCodecInfo$CodecCapabilities;", FF_JNI_METHOD, OFFSET(get_codec_capabilities_id), 1 },
{ "android/media/MediaCodecInfo", "getSupportedTypes", "()[Ljava/lang/String;", FF_JNI_METHOD, OFFSET(get_supported_types_id), 1 },
{ "android/media/MediaCodecInfo", "isEncoder", "()Z", FF_JNI_METHOD, OFFSET(is_encoder_id), 1 },
{ "android/media/MediaCodecInfo", "isSoftwareOnly", "()Z", FF_JNI_METHOD, OFFSET(is_software_only_id), 0 },
{ "android/media/MediaCodecInfo$CodecCapabilities", NULL, NULL, FF_JNI_CLASS, offsetof(struct JNIAMediaCodecListFields, codec_capabilities_class), 1 },
{ "android/media/MediaCodecInfo$CodecCapabilities", "colorFormats", "[I", FF_JNI_FIELD, offsetof(struct JNIAMediaCodecListFields, color_formats_id), 1 },
{ "android/media/MediaCodecInfo$CodecCapabilities", "profileLevels", "[Landroid/media/MediaCodecInfo$CodecProfileLevel;", FF_JNI_FIELD, offsetof(struct JNIAMediaCodecListFields, profile_levels_id), 1 },
{ "android/media/MediaCodecInfo$CodecCapabilities", NULL, NULL, FF_JNI_CLASS, OFFSET(codec_capabilities_class), 1 },
{ "android/media/MediaCodecInfo$CodecCapabilities", "colorFormats", "[I", FF_JNI_FIELD, OFFSET(color_formats_id), 1 },
{ "android/media/MediaCodecInfo$CodecCapabilities", "profileLevels", "[Landroid/media/MediaCodecInfo$CodecProfileLevel;", FF_JNI_FIELD, OFFSET(profile_levels_id), 1 },
{ "android/media/MediaCodecInfo$CodecProfileLevel", NULL, NULL, FF_JNI_CLASS, offsetof(struct JNIAMediaCodecListFields, codec_profile_level_class), 1 },
{ "android/media/MediaCodecInfo$CodecProfileLevel", "profile", "I", FF_JNI_FIELD, offsetof(struct JNIAMediaCodecListFields, profile_id), 1 },
{ "android/media/MediaCodecInfo$CodecProfileLevel", "level", "I", FF_JNI_FIELD, offsetof(struct JNIAMediaCodecListFields, level_id), 1 },
{ "android/media/MediaCodecInfo$CodecProfileLevel", NULL, NULL, FF_JNI_CLASS, OFFSET(codec_profile_level_class), 1 },
{ "android/media/MediaCodecInfo$CodecProfileLevel", "profile", "I", FF_JNI_FIELD, OFFSET(profile_id), 1 },
{ "android/media/MediaCodecInfo$CodecProfileLevel", "level", "I", FF_JNI_FIELD, OFFSET(level_id), 1 },
{ NULL }
};
#undef OFFSET
struct JNIAMediaFormatFields {
@@ -110,29 +112,31 @@ struct JNIAMediaFormatFields {
};
#define OFFSET(x) offsetof(struct JNIAMediaFormatFields, x)
static const struct FFJniField jni_amediaformat_mapping[] = {
{ "android/media/MediaFormat", NULL, NULL, FF_JNI_CLASS, offsetof(struct JNIAMediaFormatFields, mediaformat_class), 1 },
{ "android/media/MediaFormat", NULL, NULL, FF_JNI_CLASS, OFFSET(mediaformat_class), 1 },
{ "android/media/MediaFormat", "<init>", "()V", FF_JNI_METHOD, offsetof(struct JNIAMediaFormatFields, init_id), 1 },
{ "android/media/MediaFormat", "<init>", "()V", FF_JNI_METHOD, OFFSET(init_id), 1 },
{ "android/media/MediaFormat", "containsKey", "(Ljava/lang/String;)Z", FF_JNI_METHOD,offsetof(struct JNIAMediaFormatFields, contains_key_id), 1 },
{ "android/media/MediaFormat", "containsKey", "(Ljava/lang/String;)Z", FF_JNI_METHOD, OFFSET(contains_key_id), 1 },
{ "android/media/MediaFormat", "getInteger", "(Ljava/lang/String;)I", FF_JNI_METHOD, offsetof(struct JNIAMediaFormatFields, get_integer_id), 1 },
{ "android/media/MediaFormat", "getLong", "(Ljava/lang/String;)J", FF_JNI_METHOD, offsetof(struct JNIAMediaFormatFields, get_long_id), 1 },
{ "android/media/MediaFormat", "getFloat", "(Ljava/lang/String;)F", FF_JNI_METHOD, offsetof(struct JNIAMediaFormatFields, get_float_id), 1 },
{ "android/media/MediaFormat", "getByteBuffer", "(Ljava/lang/String;)Ljava/nio/ByteBuffer;", FF_JNI_METHOD, offsetof(struct JNIAMediaFormatFields, get_bytebuffer_id), 1 },
{ "android/media/MediaFormat", "getString", "(Ljava/lang/String;)Ljava/lang/String;", FF_JNI_METHOD, offsetof(struct JNIAMediaFormatFields, get_string_id), 1 },
{ "android/media/MediaFormat", "getInteger", "(Ljava/lang/String;)I", FF_JNI_METHOD, OFFSET(get_integer_id), 1 },
{ "android/media/MediaFormat", "getLong", "(Ljava/lang/String;)J", FF_JNI_METHOD, OFFSET(get_long_id), 1 },
{ "android/media/MediaFormat", "getFloat", "(Ljava/lang/String;)F", FF_JNI_METHOD, OFFSET(get_float_id), 1 },
{ "android/media/MediaFormat", "getByteBuffer", "(Ljava/lang/String;)Ljava/nio/ByteBuffer;", FF_JNI_METHOD, OFFSET(get_bytebuffer_id), 1 },
{ "android/media/MediaFormat", "getString", "(Ljava/lang/String;)Ljava/lang/String;", FF_JNI_METHOD, OFFSET(get_string_id), 1 },
{ "android/media/MediaFormat", "setInteger", "(Ljava/lang/String;I)V", FF_JNI_METHOD, offsetof(struct JNIAMediaFormatFields, set_integer_id), 1 },
{ "android/media/MediaFormat", "setLong", "(Ljava/lang/String;J)V", FF_JNI_METHOD, offsetof(struct JNIAMediaFormatFields, set_long_id), 1 },
{ "android/media/MediaFormat", "setFloat", "(Ljava/lang/String;F)V", FF_JNI_METHOD, offsetof(struct JNIAMediaFormatFields, set_float_id), 1 },
{ "android/media/MediaFormat", "setByteBuffer", "(Ljava/lang/String;Ljava/nio/ByteBuffer;)V", FF_JNI_METHOD, offsetof(struct JNIAMediaFormatFields, set_bytebuffer_id), 1 },
{ "android/media/MediaFormat", "setString", "(Ljava/lang/String;Ljava/lang/String;)V", FF_JNI_METHOD, offsetof(struct JNIAMediaFormatFields, set_string_id), 1 },
{ "android/media/MediaFormat", "setInteger", "(Ljava/lang/String;I)V", FF_JNI_METHOD, OFFSET(set_integer_id), 1 },
{ "android/media/MediaFormat", "setLong", "(Ljava/lang/String;J)V", FF_JNI_METHOD, OFFSET(set_long_id), 1 },
{ "android/media/MediaFormat", "setFloat", "(Ljava/lang/String;F)V", FF_JNI_METHOD, OFFSET(set_float_id), 1 },
{ "android/media/MediaFormat", "setByteBuffer", "(Ljava/lang/String;Ljava/nio/ByteBuffer;)V", FF_JNI_METHOD, OFFSET(set_bytebuffer_id), 1 },
{ "android/media/MediaFormat", "setString", "(Ljava/lang/String;Ljava/lang/String;)V", FF_JNI_METHOD, OFFSET(set_string_id), 1 },
{ "android/media/MediaFormat", "toString", "()Ljava/lang/String;", FF_JNI_METHOD, offsetof(struct JNIAMediaFormatFields, to_string_id), 1 },
{ "android/media/MediaFormat", "toString", "()Ljava/lang/String;", FF_JNI_METHOD, OFFSET(to_string_id), 1 },
{ NULL }
};
#undef OFFSET
static const AVClass amediaformat_class = {
.class_name = "amediaformat",
@@ -202,57 +206,59 @@ struct JNIAMediaCodecFields {
};
#define OFFSET(x) offsetof(struct JNIAMediaCodecFields, x)
static const struct FFJniField jni_amediacodec_mapping[] = {
{ "android/media/MediaCodec", NULL, NULL, FF_JNI_CLASS, offsetof(struct JNIAMediaCodecFields, mediacodec_class), 1 },
{ "android/media/MediaCodec", NULL, NULL, FF_JNI_CLASS, OFFSET(mediacodec_class), 1 },
{ "android/media/MediaCodec", "INFO_TRY_AGAIN_LATER", "I", FF_JNI_STATIC_FIELD, offsetof(struct JNIAMediaCodecFields, info_try_again_later_id), 1 },
{ "android/media/MediaCodec", "INFO_OUTPUT_BUFFERS_CHANGED", "I", FF_JNI_STATIC_FIELD, offsetof(struct JNIAMediaCodecFields, info_output_buffers_changed_id), 1 },
{ "android/media/MediaCodec", "INFO_OUTPUT_FORMAT_CHANGED", "I", FF_JNI_STATIC_FIELD, offsetof(struct JNIAMediaCodecFields, info_output_format_changed_id), 1 },
{ "android/media/MediaCodec", "INFO_TRY_AGAIN_LATER", "I", FF_JNI_STATIC_FIELD, OFFSET(info_try_again_later_id), 1 },
{ "android/media/MediaCodec", "INFO_OUTPUT_BUFFERS_CHANGED", "I", FF_JNI_STATIC_FIELD, OFFSET(info_output_buffers_changed_id), 1 },
{ "android/media/MediaCodec", "INFO_OUTPUT_FORMAT_CHANGED", "I", FF_JNI_STATIC_FIELD, OFFSET(info_output_format_changed_id), 1 },
{ "android/media/MediaCodec", "BUFFER_FLAG_CODEC_CONFIG", "I", FF_JNI_STATIC_FIELD, offsetof(struct JNIAMediaCodecFields, buffer_flag_codec_config_id), 1 },
{ "android/media/MediaCodec", "BUFFER_FLAG_END_OF_STREAM", "I", FF_JNI_STATIC_FIELD, offsetof(struct JNIAMediaCodecFields, buffer_flag_end_of_stream_id), 1 },
{ "android/media/MediaCodec", "BUFFER_FLAG_KEY_FRAME", "I", FF_JNI_STATIC_FIELD, offsetof(struct JNIAMediaCodecFields, buffer_flag_key_frame_id), 0 },
{ "android/media/MediaCodec", "BUFFER_FLAG_CODEC_CONFIG", "I", FF_JNI_STATIC_FIELD, OFFSET(buffer_flag_codec_config_id), 1 },
{ "android/media/MediaCodec", "BUFFER_FLAG_END_OF_STREAM", "I", FF_JNI_STATIC_FIELD, OFFSET(buffer_flag_end_of_stream_id), 1 },
{ "android/media/MediaCodec", "BUFFER_FLAG_KEY_FRAME", "I", FF_JNI_STATIC_FIELD, OFFSET(buffer_flag_key_frame_id), 0 },
{ "android/media/MediaCodec", "CONFIGURE_FLAG_ENCODE", "I", FF_JNI_STATIC_FIELD, offsetof(struct JNIAMediaCodecFields, configure_flag_encode_id), 1 },
{ "android/media/MediaCodec", "CONFIGURE_FLAG_ENCODE", "I", FF_JNI_STATIC_FIELD, OFFSET(configure_flag_encode_id), 1 },
{ "android/media/MediaCodec", "createByCodecName", "(Ljava/lang/String;)Landroid/media/MediaCodec;", FF_JNI_STATIC_METHOD, offsetof(struct JNIAMediaCodecFields, create_by_codec_name_id), 1 },
{ "android/media/MediaCodec", "createDecoderByType", "(Ljava/lang/String;)Landroid/media/MediaCodec;", FF_JNI_STATIC_METHOD, offsetof(struct JNIAMediaCodecFields, create_decoder_by_type_id), 1 },
{ "android/media/MediaCodec", "createEncoderByType", "(Ljava/lang/String;)Landroid/media/MediaCodec;", FF_JNI_STATIC_METHOD, offsetof(struct JNIAMediaCodecFields, create_encoder_by_type_id), 1 },
{ "android/media/MediaCodec", "createByCodecName", "(Ljava/lang/String;)Landroid/media/MediaCodec;", FF_JNI_STATIC_METHOD, OFFSET(create_by_codec_name_id), 1 },
{ "android/media/MediaCodec", "createDecoderByType", "(Ljava/lang/String;)Landroid/media/MediaCodec;", FF_JNI_STATIC_METHOD, OFFSET(create_decoder_by_type_id), 1 },
{ "android/media/MediaCodec", "createEncoderByType", "(Ljava/lang/String;)Landroid/media/MediaCodec;", FF_JNI_STATIC_METHOD, OFFSET(create_encoder_by_type_id), 1 },
{ "android/media/MediaCodec", "getName", "()Ljava/lang/String;", FF_JNI_METHOD, offsetof(struct JNIAMediaCodecFields, get_name_id), 1 },
{ "android/media/MediaCodec", "getName", "()Ljava/lang/String;", FF_JNI_METHOD, OFFSET(get_name_id), 1 },
{ "android/media/MediaCodec", "configure", "(Landroid/media/MediaFormat;Landroid/view/Surface;Landroid/media/MediaCrypto;I)V", FF_JNI_METHOD, offsetof(struct JNIAMediaCodecFields, configure_id), 1 },
{ "android/media/MediaCodec", "start", "()V", FF_JNI_METHOD, offsetof(struct JNIAMediaCodecFields, start_id), 1 },
{ "android/media/MediaCodec", "flush", "()V", FF_JNI_METHOD, offsetof(struct JNIAMediaCodecFields, flush_id), 1 },
{ "android/media/MediaCodec", "stop", "()V", FF_JNI_METHOD, offsetof(struct JNIAMediaCodecFields, stop_id), 1 },
{ "android/media/MediaCodec", "release", "()V", FF_JNI_METHOD, offsetof(struct JNIAMediaCodecFields, release_id), 1 },
{ "android/media/MediaCodec", "configure", "(Landroid/media/MediaFormat;Landroid/view/Surface;Landroid/media/MediaCrypto;I)V", FF_JNI_METHOD, OFFSET(configure_id), 1 },
{ "android/media/MediaCodec", "start", "()V", FF_JNI_METHOD, OFFSET(start_id), 1 },
{ "android/media/MediaCodec", "flush", "()V", FF_JNI_METHOD, OFFSET(flush_id), 1 },
{ "android/media/MediaCodec", "stop", "()V", FF_JNI_METHOD, OFFSET(stop_id), 1 },
{ "android/media/MediaCodec", "release", "()V", FF_JNI_METHOD, OFFSET(release_id), 1 },
{ "android/media/MediaCodec", "getOutputFormat", "()Landroid/media/MediaFormat;", FF_JNI_METHOD, offsetof(struct JNIAMediaCodecFields, get_output_format_id), 1 },
{ "android/media/MediaCodec", "getOutputFormat", "()Landroid/media/MediaFormat;", FF_JNI_METHOD, OFFSET(get_output_format_id), 1 },
{ "android/media/MediaCodec", "dequeueInputBuffer", "(J)I", FF_JNI_METHOD, offsetof(struct JNIAMediaCodecFields, dequeue_input_buffer_id), 1 },
{ "android/media/MediaCodec", "queueInputBuffer", "(IIIJI)V", FF_JNI_METHOD, offsetof(struct JNIAMediaCodecFields, queue_input_buffer_id), 1 },
{ "android/media/MediaCodec", "getInputBuffer", "(I)Ljava/nio/ByteBuffer;", FF_JNI_METHOD, offsetof(struct JNIAMediaCodecFields, get_input_buffer_id), 0 },
{ "android/media/MediaCodec", "getInputBuffers", "()[Ljava/nio/ByteBuffer;", FF_JNI_METHOD, offsetof(struct JNIAMediaCodecFields, get_input_buffers_id), 1 },
{ "android/media/MediaCodec", "dequeueInputBuffer", "(J)I", FF_JNI_METHOD, OFFSET(dequeue_input_buffer_id), 1 },
{ "android/media/MediaCodec", "queueInputBuffer", "(IIIJI)V", FF_JNI_METHOD, OFFSET(queue_input_buffer_id), 1 },
{ "android/media/MediaCodec", "getInputBuffer", "(I)Ljava/nio/ByteBuffer;", FF_JNI_METHOD, OFFSET(get_input_buffer_id), 0 },
{ "android/media/MediaCodec", "getInputBuffers", "()[Ljava/nio/ByteBuffer;", FF_JNI_METHOD, OFFSET(get_input_buffers_id), 1 },
{ "android/media/MediaCodec", "dequeueOutputBuffer", "(Landroid/media/MediaCodec$BufferInfo;J)I", FF_JNI_METHOD, offsetof(struct JNIAMediaCodecFields, dequeue_output_buffer_id), 1 },
{ "android/media/MediaCodec", "getOutputBuffer", "(I)Ljava/nio/ByteBuffer;", FF_JNI_METHOD, offsetof(struct JNIAMediaCodecFields, get_output_buffer_id), 0 },
{ "android/media/MediaCodec", "getOutputBuffers", "()[Ljava/nio/ByteBuffer;", FF_JNI_METHOD, offsetof(struct JNIAMediaCodecFields, get_output_buffers_id), 1 },
{ "android/media/MediaCodec", "releaseOutputBuffer", "(IZ)V", FF_JNI_METHOD, offsetof(struct JNIAMediaCodecFields, release_output_buffer_id), 1 },
{ "android/media/MediaCodec", "releaseOutputBuffer", "(IJ)V", FF_JNI_METHOD, offsetof(struct JNIAMediaCodecFields, release_output_buffer_at_time_id), 0 },
{ "android/media/MediaCodec", "dequeueOutputBuffer", "(Landroid/media/MediaCodec$BufferInfo;J)I", FF_JNI_METHOD, OFFSET(dequeue_output_buffer_id), 1 },
{ "android/media/MediaCodec", "getOutputBuffer", "(I)Ljava/nio/ByteBuffer;", FF_JNI_METHOD, OFFSET(get_output_buffer_id), 0 },
{ "android/media/MediaCodec", "getOutputBuffers", "()[Ljava/nio/ByteBuffer;", FF_JNI_METHOD, OFFSET(get_output_buffers_id), 1 },
{ "android/media/MediaCodec", "releaseOutputBuffer", "(IZ)V", FF_JNI_METHOD, OFFSET(release_output_buffer_id), 1 },
{ "android/media/MediaCodec", "releaseOutputBuffer", "(IJ)V", FF_JNI_METHOD, OFFSET(release_output_buffer_at_time_id), 0 },
{ "android/media/MediaCodec", "setInputSurface", "(Landroid/view/Surface;)V", FF_JNI_METHOD, offsetof(struct JNIAMediaCodecFields, set_input_surface_id), 0 },
{ "android/media/MediaCodec", "signalEndOfInputStream", "()V", FF_JNI_METHOD, offsetof(struct JNIAMediaCodecFields, signal_end_of_input_stream_id), 0 },
{ "android/media/MediaCodec", "setInputSurface", "(Landroid/view/Surface;)V", FF_JNI_METHOD, OFFSET(set_input_surface_id), 0 },
{ "android/media/MediaCodec", "signalEndOfInputStream", "()V", FF_JNI_METHOD, OFFSET(signal_end_of_input_stream_id), 0 },
{ "android/media/MediaCodec$BufferInfo", NULL, NULL, FF_JNI_CLASS, offsetof(struct JNIAMediaCodecFields, mediainfo_class), 1 },
{ "android/media/MediaCodec$BufferInfo", NULL, NULL, FF_JNI_CLASS, OFFSET(mediainfo_class), 1 },
{ "android/media/MediaCodec.BufferInfo", "<init>", "()V", FF_JNI_METHOD, offsetof(struct JNIAMediaCodecFields, init_id), 1 },
{ "android/media/MediaCodec.BufferInfo", "flags", "I", FF_JNI_FIELD, offsetof(struct JNIAMediaCodecFields, flags_id), 1 },
{ "android/media/MediaCodec.BufferInfo", "offset", "I", FF_JNI_FIELD, offsetof(struct JNIAMediaCodecFields, offset_id), 1 },
{ "android/media/MediaCodec.BufferInfo", "presentationTimeUs", "J", FF_JNI_FIELD, offsetof(struct JNIAMediaCodecFields, presentation_time_us_id), 1 },
{ "android/media/MediaCodec.BufferInfo", "size", "I", FF_JNI_FIELD, offsetof(struct JNIAMediaCodecFields, size_id), 1 },
{ "android/media/MediaCodec.BufferInfo", "<init>", "()V", FF_JNI_METHOD, OFFSET(init_id), 1 },
{ "android/media/MediaCodec.BufferInfo", "flags", "I", FF_JNI_FIELD, OFFSET(flags_id), 1 },
{ "android/media/MediaCodec.BufferInfo", "offset", "I", FF_JNI_FIELD, OFFSET(offset_id), 1 },
{ "android/media/MediaCodec.BufferInfo", "presentationTimeUs", "J", FF_JNI_FIELD, OFFSET(presentation_time_us_id), 1 },
{ "android/media/MediaCodec.BufferInfo", "size", "I", FF_JNI_FIELD, OFFSET(size_id), 1 },
{ NULL }
};
#undef OFFSET
static const AVClass amediacodec_class = {
.class_name = "amediacodec",
@@ -543,10 +549,8 @@ char *ff_AMediaCodecList_getCodecNameByType(const char *mime, int profile, int e
goto done;
}
if (codec_name) {
(*env)->DeleteLocalRef(env, codec_name);
codec_name = NULL;
}
(*env)->DeleteLocalRef(env, codec_name);
codec_name = NULL;
/* Skip software decoders */
if (
@@ -610,10 +614,8 @@ char *ff_AMediaCodecList_getCodecNameByType(const char *mime, int profile, int e
found_codec = profile == supported_profile;
if (profile_level) {
(*env)->DeleteLocalRef(env, profile_level);
profile_level = NULL;
}
(*env)->DeleteLocalRef(env, profile_level);
profile_level = NULL;
if (found_codec) {
break;
@@ -621,20 +623,14 @@ char *ff_AMediaCodecList_getCodecNameByType(const char *mime, int profile, int e
}
done_with_type:
if (profile_levels) {
(*env)->DeleteLocalRef(env, profile_levels);
profile_levels = NULL;
}
(*env)->DeleteLocalRef(env, profile_levels);
profile_levels = NULL;
if (capabilities) {
(*env)->DeleteLocalRef(env, capabilities);
capabilities = NULL;
}
(*env)->DeleteLocalRef(env, capabilities);
capabilities = NULL;
if (type) {
(*env)->DeleteLocalRef(env, type);
type = NULL;
}
(*env)->DeleteLocalRef(env, type);
type = NULL;
av_freep(&supported_type);
@@ -644,15 +640,11 @@ done_with_type:
}
done_with_info:
if (info) {
(*env)->DeleteLocalRef(env, info);
info = NULL;
}
(*env)->DeleteLocalRef(env, info);
info = NULL;
if (types) {
(*env)->DeleteLocalRef(env, types);
types = NULL;
}
(*env)->DeleteLocalRef(env, types);
types = NULL;
if (found_codec) {
break;
@@ -662,33 +654,13 @@ done_with_info:
}
done:
if (codec_name) {
(*env)->DeleteLocalRef(env, codec_name);
}
if (info) {
(*env)->DeleteLocalRef(env, info);
}
if (type) {
(*env)->DeleteLocalRef(env, type);
}
if (types) {
(*env)->DeleteLocalRef(env, types);
}
if (capabilities) {
(*env)->DeleteLocalRef(env, capabilities);
}
if (profile_level) {
(*env)->DeleteLocalRef(env, profile_level);
}
if (profile_levels) {
(*env)->DeleteLocalRef(env, profile_levels);
}
(*env)->DeleteLocalRef(env, codec_name);
(*env)->DeleteLocalRef(env, info);
(*env)->DeleteLocalRef(env, type);
(*env)->DeleteLocalRef(env, types);
(*env)->DeleteLocalRef(env, capabilities);
(*env)->DeleteLocalRef(env, profile_level);
(*env)->DeleteLocalRef(env, profile_levels);
av_freep(&supported_type);
@@ -735,9 +707,7 @@ static FFAMediaFormat *mediaformat_jni_new(void)
}
fail:
if (object) {
(*env)->DeleteLocalRef(env, object);
}
(*env)->DeleteLocalRef(env, object);
if (!format->object) {
ff_jni_reset_jfields(env, &format->jfields, jni_amediaformat_mapping, 1, format);
@@ -822,9 +792,7 @@ static char* mediaformat_jni_toString(FFAMediaFormat* ctx)
ret = ff_jni_jstring_to_utf_chars(env, description, format);
fail:
if (description) {
(*env)->DeleteLocalRef(env, description);
}
(*env)->DeleteLocalRef(env, description);
return ret;
}
@@ -861,9 +829,7 @@ static int mediaformat_jni_getInt32(FFAMediaFormat* ctx, const char *name, int32
ret = 1;
fail:
if (key) {
(*env)->DeleteLocalRef(env, key);
}
(*env)->DeleteLocalRef(env, key);
return ret;
}
@@ -900,9 +866,7 @@ static int mediaformat_jni_getInt64(FFAMediaFormat* ctx, const char *name, int64
ret = 1;
fail:
if (key) {
(*env)->DeleteLocalRef(env, key);
}
(*env)->DeleteLocalRef(env, key);
return ret;
}
@@ -939,9 +903,7 @@ static int mediaformat_jni_getFloat(FFAMediaFormat* ctx, const char *name, float
ret = 1;
fail:
if (key) {
(*env)->DeleteLocalRef(env, key);
}
(*env)->DeleteLocalRef(env, key);
return ret;
}
@@ -993,13 +955,8 @@ static int mediaformat_jni_getBuffer(FFAMediaFormat* ctx, const char *name, void
ret = 1;
fail:
if (key) {
(*env)->DeleteLocalRef(env, key);
}
if (result) {
(*env)->DeleteLocalRef(env, result);
}
(*env)->DeleteLocalRef(env, key);
(*env)->DeleteLocalRef(env, result);
return ret;
}
@@ -1043,13 +1000,8 @@ static int mediaformat_jni_getString(FFAMediaFormat* ctx, const char *name, cons
ret = 1;
fail:
if (key) {
(*env)->DeleteLocalRef(env, key);
}
if (result) {
(*env)->DeleteLocalRef(env, result);
}
(*env)->DeleteLocalRef(env, key);
(*env)->DeleteLocalRef(env, result);
return ret;
}
@@ -1075,9 +1027,7 @@ static void mediaformat_jni_setInt32(FFAMediaFormat* ctx, const char* name, int3
}
fail:
if (key) {
(*env)->DeleteLocalRef(env, key);
}
(*env)->DeleteLocalRef(env, key);
}
static void mediaformat_jni_setInt64(FFAMediaFormat* ctx, const char* name, int64_t value)
@@ -1101,9 +1051,7 @@ static void mediaformat_jni_setInt64(FFAMediaFormat* ctx, const char* name, int6
}
fail:
if (key) {
(*env)->DeleteLocalRef(env, key);
}
(*env)->DeleteLocalRef(env, key);
}
static void mediaformat_jni_setFloat(FFAMediaFormat* ctx, const char* name, float value)
@@ -1127,9 +1075,7 @@ static void mediaformat_jni_setFloat(FFAMediaFormat* ctx, const char* name, floa
}
fail:
if (key) {
(*env)->DeleteLocalRef(env, key);
}
(*env)->DeleteLocalRef(env, key);
}
static void mediaformat_jni_setString(FFAMediaFormat* ctx, const char* name, const char* value)
@@ -1159,13 +1105,8 @@ static void mediaformat_jni_setString(FFAMediaFormat* ctx, const char* name, con
}
fail:
if (key) {
(*env)->DeleteLocalRef(env, key);
}
if (string) {
(*env)->DeleteLocalRef(env, string);
}
(*env)->DeleteLocalRef(env, key);
(*env)->DeleteLocalRef(env, string);
}
static void mediaformat_jni_setBuffer(FFAMediaFormat* ctx, const char* name, void* data, size_t size)
@@ -1207,13 +1148,8 @@ static void mediaformat_jni_setBuffer(FFAMediaFormat* ctx, const char* name, voi
}
fail:
if (key) {
(*env)->DeleteLocalRef(env, key);
}
if (buffer) {
(*env)->DeleteLocalRef(env, buffer);
}
(*env)->DeleteLocalRef(env, key);
(*env)->DeleteLocalRef(env, buffer);
}
static int codec_init_static_fields(FFAMediaCodecJni *codec)
@@ -1346,26 +1282,13 @@ static inline FFAMediaCodec *codec_create(int method, const char *arg)
ret = 0;
fail:
if (jarg) {
(*env)->DeleteLocalRef(env, jarg);
}
if (object) {
(*env)->DeleteLocalRef(env, object);
}
if (buffer_info) {
(*env)->DeleteLocalRef(env, buffer_info);
}
(*env)->DeleteLocalRef(env, jarg);
(*env)->DeleteLocalRef(env, object);
(*env)->DeleteLocalRef(env, buffer_info);
if (ret < 0) {
if (codec->object) {
(*env)->DeleteGlobalRef(env, codec->object);
}
if (codec->buffer_info) {
(*env)->DeleteGlobalRef(env, codec->buffer_info);
}
(*env)->DeleteGlobalRef(env, codec->object);
(*env)->DeleteGlobalRef(env, codec->buffer_info);
ff_jni_reset_jfields(env, &codec->jfields, jni_amediacodec_mapping, 1, codec);
av_freep(&codec);
@@ -1686,13 +1609,8 @@ static uint8_t* mediacodec_jni_getInputBuffer(FFAMediaCodec* ctx, size_t idx, si
ret = (*env)->GetDirectBufferAddress(env, buffer);
*out_size = (*env)->GetDirectBufferCapacity(env, buffer);
fail:
if (buffer) {
(*env)->DeleteLocalRef(env, buffer);
}
if (input_buffers) {
(*env)->DeleteLocalRef(env, input_buffers);
}
(*env)->DeleteLocalRef(env, buffer);
(*env)->DeleteLocalRef(env, input_buffers);
return ret;
}
@@ -1734,13 +1652,8 @@ static uint8_t* mediacodec_jni_getOutputBuffer(FFAMediaCodec* ctx, size_t idx, s
ret = (*env)->GetDirectBufferAddress(env, buffer);
*out_size = (*env)->GetDirectBufferCapacity(env, buffer);
fail:
if (buffer) {
(*env)->DeleteLocalRef(env, buffer);
}
if (output_buffers) {
(*env)->DeleteLocalRef(env, output_buffers);
}
(*env)->DeleteLocalRef(env, buffer);
(*env)->DeleteLocalRef(env, output_buffers);
return ret;
}
@@ -1762,9 +1675,7 @@ static FFAMediaFormat* mediacodec_jni_getOutputFormat(FFAMediaCodec* ctx)
ret = mediaformat_jni_newFromObject(mediaformat);
fail:
if (mediaformat) {
(*env)->DeleteLocalRef(env, mediaformat);
}
(*env)->DeleteLocalRef(env, mediaformat);
return ret;
}


@@ -62,6 +62,13 @@
#define A53_MAX_CC_COUNT 2000
enum Mpeg2ClosedCaptionsFormat {
CC_FORMAT_AUTO,
CC_FORMAT_A53_PART4,
CC_FORMAT_SCTE20,
CC_FORMAT_DVD
};
typedef struct Mpeg1Context {
MpegEncContext mpeg_enc_ctx;
int mpeg_enc_ctx_allocated; /* true if decoding context allocated */
@@ -70,6 +77,7 @@ typedef struct Mpeg1Context {
AVStereo3D stereo3d;
int has_stereo3d;
AVBufferRef *a53_buf_ref;
enum Mpeg2ClosedCaptionsFormat cc_format;
uint8_t afd;
int has_afd;
int slice_count;
@@ -1903,12 +1911,27 @@ static int vcr2_init_sequence(AVCodecContext *avctx)
return 0;
}
static void mpeg_set_cc_format(AVCodecContext *avctx, enum Mpeg2ClosedCaptionsFormat format,
const char *label)
{
Mpeg1Context *s1 = avctx->priv_data;
av_assert2(format != CC_FORMAT_AUTO);
if (!s1->cc_format) {
s1->cc_format = format;
av_log(avctx, AV_LOG_DEBUG, "CC: first seen substream is %s format\n", label);
}
}
static int mpeg_decode_a53_cc(AVCodecContext *avctx,
const uint8_t *p, int buf_size)
{
Mpeg1Context *s1 = avctx->priv_data;
if (buf_size >= 6 &&
if ((!s1->cc_format || s1->cc_format == CC_FORMAT_A53_PART4) &&
buf_size >= 6 &&
p[0] == 'G' && p[1] == 'A' && p[2] == '9' && p[3] == '4' &&
p[4] == 3 && (p[5] & 0x40)) {
/* extract A53 Part 4 CC data */
@@ -1927,9 +1950,11 @@ static int mpeg_decode_a53_cc(AVCodecContext *avctx,
memcpy(s1->a53_buf_ref->data + old_size, p + 7, cc_count * UINT64_C(3));
avctx->properties |= FF_CODEC_PROPERTY_CLOSED_CAPTIONS;
mpeg_set_cc_format(avctx, CC_FORMAT_A53_PART4, "A/53 Part 4");
}
return 1;
} else if (buf_size >= 2 &&
} else if ((!s1->cc_format || s1->cc_format == CC_FORMAT_SCTE20) &&
buf_size >= 2 &&
p[0] == 0x03 && (p[1]&0x7f) == 0x01) {
/* extract SCTE-20 CC data */
GetBitContext gb;
@@ -1973,10 +1998,13 @@ static int mpeg_decode_a53_cc(AVCodecContext *avctx,
cap += 3;
}
}
avctx->properties |= FF_CODEC_PROPERTY_CLOSED_CAPTIONS;
mpeg_set_cc_format(avctx, CC_FORMAT_SCTE20, "SCTE-20");
}
return 1;
} else if (buf_size >= 11 &&
} else if ((!s1->cc_format || s1->cc_format == CC_FORMAT_DVD) &&
buf_size >= 11 &&
p[0] == 'C' && p[1] == 'C' && p[2] == 0x01 && p[3] == 0xf8) {
/* extract DVD CC data
*
@@ -2033,7 +2061,9 @@ static int mpeg_decode_a53_cc(AVCodecContext *avctx,
p += 6;
}
}
avctx->properties |= FF_CODEC_PROPERTY_CLOSED_CAPTIONS;
mpeg_set_cc_format(avctx, CC_FORMAT_DVD, "DVD");
}
return 1;
}
@@ -2598,11 +2628,39 @@ const FFCodec ff_mpeg1video_decoder = {
},
};
#define M2V_OFFSET(x) offsetof(Mpeg1Context, x)
#define M2V_PARAM AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_DECODING_PARAM
static const AVOption mpeg2video_options[] = {
{ "cc_format", "extract a specific Closed Captions format",
M2V_OFFSET(cc_format), AV_OPT_TYPE_INT, { .i64 = CC_FORMAT_AUTO },
CC_FORMAT_AUTO, CC_FORMAT_DVD, M2V_PARAM, .unit = "cc_format" },
{ "auto", "pick first seen CC substream", 0, AV_OPT_TYPE_CONST,
{ .i64 = CC_FORMAT_AUTO }, .flags = M2V_PARAM, .unit = "cc_format" },
{ "a53", "pick A/53 Part 4 CC substream", 0, AV_OPT_TYPE_CONST,
{ .i64 = CC_FORMAT_A53_PART4 }, .flags = M2V_PARAM, .unit = "cc_format" },
{ "scte20", "pick SCTE-20 CC substream", 0, AV_OPT_TYPE_CONST,
{ .i64 = CC_FORMAT_SCTE20 }, .flags = M2V_PARAM, .unit = "cc_format" },
{ "dvd", "pick DVD CC substream", 0, AV_OPT_TYPE_CONST,
{ .i64 = CC_FORMAT_DVD }, .flags = M2V_PARAM, .unit = "cc_format" },
{ NULL }
};
static const AVClass mpeg2video_class = {
.class_name = "MPEG-2 video",
.item_name = av_default_item_name,
.option = mpeg2video_options,
.version = LIBAVUTIL_VERSION_INT,
.category = AV_CLASS_CATEGORY_DECODER,
};
const FFCodec ff_mpeg2video_decoder = {
.p.name = "mpeg2video",
CODEC_LONG_NAME("MPEG-2 video"),
.p.type = AVMEDIA_TYPE_VIDEO,
.p.id = AV_CODEC_ID_MPEG2VIDEO,
.p.priv_class = &mpeg2video_class,
.priv_data_size = sizeof(Mpeg1Context),
.init = mpeg_decode_init,
.close = mpeg_decode_end,

View File
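The A/53 Part 4 branch above keys on the ATSC user-data signature: the four bytes `GA94`, a `user_data_type_code` of 0x03, and the `process_cc_data` flag (bit 0x40 of the sixth byte). A minimal sketch of that guard, as a standalone Python check (hypothetical helper name, mirroring the condition only, not the payload extraction):

```python
def looks_like_a53_cc(p: bytes) -> bool:
    # Mirrors the guard in mpeg_decode_a53_cc(): "GA94" identifier,
    # user_data_type_code 0x03, and the process_cc_data flag set.
    return (len(p) >= 6 and p[0:4] == b"GA94"
            and p[4] == 0x03 and (p[5] & 0x40) != 0)

print(looks_like_a53_cc(b"GA94\x03\x40\x00"))  # signature and flag present
print(looks_like_a53_cc(b"GA94\x03\x00\x00"))  # process_cc_data clear -> rejected
```

With the new `cc_format` logic, this check is additionally gated on the first-seen substream format, so a stream carrying both A/53 and SCTE-20 data no longer mixes the two.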

@@ -176,6 +176,8 @@ void avcodec_free_context(AVCodecContext **pavctx)
av_freep(&avctx->inter_matrix);
av_freep(&avctx->rc_override);
av_channel_layout_uninit(&avctx->ch_layout);
av_frame_side_data_free(
&avctx->decoded_side_data, &avctx->nb_decoded_side_data);
av_freep(pavctx);
}

View File

@@ -29,8 +29,8 @@
#include "version_major.h"
#define LIBAVCODEC_VERSION_MINOR 1
#define LIBAVCODEC_VERSION_MICRO 101
#define LIBAVCODEC_VERSION_MINOR 2
#define LIBAVCODEC_VERSION_MICRO 100
#define LIBAVCODEC_VERSION_INT AV_VERSION_INT(LIBAVCODEC_VERSION_MAJOR, \
LIBAVCODEC_VERSION_MINOR, \

View File

@@ -250,10 +250,10 @@ static void set_sps(const HEVCSPS *sps, int sps_idx,
*vksps_vui_header = (StdVideoH265HrdParameters) {
.flags = (StdVideoH265HrdFlags) {
.nal_hrd_parameters_present_flag = sps->hdr.flags.nal_hrd_parameters_present_flag,
.vcl_hrd_parameters_present_flag = sps->hdr.flags.vcl_hrd_parameters_present_flag,
.sub_pic_hrd_params_present_flag = sps->hdr.flags.sub_pic_hrd_params_present_flag,
.sub_pic_cpb_params_in_pic_timing_sei_flag = sps->hdr.flags.sub_pic_cpb_params_in_pic_timing_sei_flag,
.nal_hrd_parameters_present_flag = sps->hdr.nal_hrd_parameters_present_flag,
.vcl_hrd_parameters_present_flag = sps->hdr.vcl_hrd_parameters_present_flag,
.sub_pic_hrd_params_present_flag = sps->hdr.sub_pic_hrd_params_present_flag,
.sub_pic_cpb_params_in_pic_timing_sei_flag = sps->hdr.sub_pic_cpb_params_in_pic_timing_sei_flag,
.fixed_pic_rate_general_flag = sps->hdr.flags.fixed_pic_rate_general_flag,
.fixed_pic_rate_within_cvs_flag = sps->hdr.flags.fixed_pic_rate_within_cvs_flag,
.low_delay_hrd_flag = sps->hdr.flags.low_delay_hrd_flag,
@@ -567,10 +567,10 @@ static void set_vps(const HEVCVPS *vps,
sls_hdr[i] = (StdVideoH265HrdParameters) {
.flags = (StdVideoH265HrdFlags) {
.nal_hrd_parameters_present_flag = src->flags.nal_hrd_parameters_present_flag,
.vcl_hrd_parameters_present_flag = src->flags.vcl_hrd_parameters_present_flag,
.sub_pic_hrd_params_present_flag = src->flags.sub_pic_hrd_params_present_flag,
.sub_pic_cpb_params_in_pic_timing_sei_flag = src->flags.sub_pic_cpb_params_in_pic_timing_sei_flag,
.nal_hrd_parameters_present_flag = src->nal_hrd_parameters_present_flag,
.vcl_hrd_parameters_present_flag = src->vcl_hrd_parameters_present_flag,
.sub_pic_hrd_params_present_flag = src->sub_pic_hrd_params_present_flag,
.sub_pic_cpb_params_in_pic_timing_sei_flag = src->sub_pic_cpb_params_in_pic_timing_sei_flag,
.fixed_pic_rate_general_flag = src->flags.fixed_pic_rate_general_flag,
.fixed_pic_rate_within_cvs_flag = src->flags.fixed_pic_rate_within_cvs_flag,
.low_delay_hrd_flag = src->flags.low_delay_hrd_flag,

View File

@@ -96,7 +96,7 @@ static int get_qp_y_pred(const VVCLocalContext *lc)
if (lc->na.cand_up) {
const int first_qg_in_ctu = !(xQg & ctb_size_mask) && !(yQg & ctb_size_mask);
const int qPy_up = fc->tab.qp[LUMA][x_cb + (y_cb - 1) * min_cb_width];
if (first_qg_in_ctu && pps->ctb_to_col_bd[xQg >> ctb_log2_size] == xQg)
if (first_qg_in_ctu && pps->ctb_to_col_bd[xQg >> ctb_log2_size] == xQg >> ctb_log2_size)
return qPy_up;
}

View File
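The one-line VVC change above fixes a unit mismatch: `ctb_to_col_bd[]` is indexed and stored in CTB units, while `xQg` is a luma-sample position, so the old comparison could only succeed at x == 0. A toy illustration of the two comparisons with hypothetical values (single-tile picture, so the column map is the identity):

```python
ctb_log2_size = 7                 # 128x128 CTUs
# ctb_to_col_bd maps a CTB column to its tile's left boundary, in CTB units.
ctb_to_col_bd = list(range(8))    # identity for a single-tile picture

xQg = 256                         # quant-group x position in luma samples
col = xQg >> ctb_log2_size        # CTB column 2

print(ctb_to_col_bd[col] == xQg)                    # old check: CTBs vs samples
print(ctb_to_col_bd[col] == xQg >> ctb_log2_size)   # fixed check: CTBs vs CTBs
```

The fixed form matches whenever the quant group starts at a tile boundary, which is what the `qPy_up` prediction path requires.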

@@ -173,7 +173,7 @@ static void set_parser_ctx(AVCodecParserContext *s, AVCodecContext *avctx,
h266_sub_width_c[sps->sps_chroma_format_idc];
s->height = pps->pps_pic_height_in_luma_samples -
(pps->pps_conf_win_top_offset + pps->pps_conf_win_bottom_offset) *
h266_sub_height_c[sps->sps_chroma_format_idc];;
h266_sub_height_c[sps->sps_chroma_format_idc];
avctx->profile = sps->profile_tier_level.general_profile_idc;
avctx->level = sps->profile_tier_level.general_level_idc;
@@ -317,7 +317,7 @@ static int get_pu_info(PuInfo *info, const CodedBitstreamH266Context *h266,
}
info->pic_type = get_pict_type(pu);
return 0;
error:
error:
memset(info, 0, sizeof(*info));
return ret;
}
@@ -329,7 +329,7 @@ static int append_au(AVPacket *pkt, const uint8_t *buf, int buf_size)
if ((ret = av_grow_packet(pkt, buf_size)) < 0)
goto end;
memcpy(pkt->data + offset, buf, buf_size);
end:
end:
return ret;
}
@@ -376,7 +376,7 @@ static int parse_nal_units(AVCodecParserContext *s, const uint8_t *buf,
} else {
ret = 1; //not a completed au
}
end:
end:
ff_cbs_fragment_reset(pu);
return ret;
}

View File

@@ -801,5 +801,5 @@ const FFOutputFormat ff_pulse_muxer = {
.p.flags = AVFMT_NOFILE,
#endif
.p.priv_class = &pulse_muxer_class,
.flags_internal = FF_FMT_ALLOW_FLUSH,
.flags_internal = FF_OFMT_FLAG_ALLOW_FLUSH,
};

View File

@@ -6,5 +6,6 @@ OBJS-$(CONFIG_DNN) += dnn/dnn_backend_common.o
DNN-OBJS-$(CONFIG_LIBTENSORFLOW) += dnn/dnn_backend_tf.o
DNN-OBJS-$(CONFIG_LIBOPENVINO) += dnn/dnn_backend_openvino.o
DNN-OBJS-$(CONFIG_LIBTORCH) += dnn/dnn_backend_torch.o
OBJS-$(CONFIG_DNN) += $(DNN-OBJS-yes)

View File

@@ -0,0 +1,597 @@
/*
* Copyright (c) 2024
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
/**
* @file
* DNN Torch backend implementation.
*/
#include <torch/torch.h>
#include <torch/script.h>
extern "C" {
#include "../internal.h"
#include "dnn_io_proc.h"
#include "dnn_backend_common.h"
#include "libavutil/opt.h"
#include "queue.h"
#include "safe_queue.h"
}
typedef struct THOptions{
char *device_name;
int optimize;
} THOptions;
typedef struct THContext {
const AVClass *c_class;
THOptions options;
} THContext;
typedef struct THModel {
THContext ctx;
DNNModel *model;
torch::jit::Module *jit_model;
SafeQueue *request_queue;
Queue *task_queue;
Queue *lltask_queue;
} THModel;
typedef struct THInferRequest {
torch::Tensor *output;
torch::Tensor *input_tensor;
} THInferRequest;
typedef struct THRequestItem {
THInferRequest *infer_request;
LastLevelTaskItem *lltask;
DNNAsyncExecModule exec_module;
} THRequestItem;
#define OFFSET(x) offsetof(THContext, x)
#define FLAGS AV_OPT_FLAG_FILTERING_PARAM
static const AVOption dnn_th_options[] = {
{ "device", "device to run model", OFFSET(options.device_name), AV_OPT_TYPE_STRING, { .str = "cpu" }, 0, 0, FLAGS },
{ "optimize", "turn on graph executor optimization", OFFSET(options.optimize), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, 1, FLAGS},
{ NULL }
};
AVFILTER_DEFINE_CLASS(dnn_th);
static int extract_lltask_from_task(TaskItem *task, Queue *lltask_queue)
{
THModel *th_model = (THModel *)task->model;
THContext *ctx = &th_model->ctx;
LastLevelTaskItem *lltask = (LastLevelTaskItem *)av_malloc(sizeof(*lltask));
if (!lltask) {
av_log(ctx, AV_LOG_ERROR, "Failed to allocate memory for LastLevelTaskItem\n");
return AVERROR(ENOMEM);
}
task->inference_todo = 1;
task->inference_done = 0;
lltask->task = task;
if (ff_queue_push_back(lltask_queue, lltask) < 0) {
av_log(ctx, AV_LOG_ERROR, "Failed to push back lltask_queue.\n");
av_freep(&lltask);
return AVERROR(ENOMEM);
}
return 0;
}
static void th_free_request(THInferRequest *request)
{
if (!request)
return;
if (request->output) {
delete(request->output);
request->output = NULL;
}
if (request->input_tensor) {
delete(request->input_tensor);
request->input_tensor = NULL;
}
return;
}
static inline void destroy_request_item(THRequestItem **arg)
{
THRequestItem *item;
if (!arg || !*arg) {
return;
}
item = *arg;
th_free_request(item->infer_request);
av_freep(&item->infer_request);
av_freep(&item->lltask);
ff_dnn_async_module_cleanup(&item->exec_module);
av_freep(arg);
}
static void dnn_free_model_th(DNNModel **model)
{
THModel *th_model;
if (!model || !*model)
return;
th_model = (THModel *) (*model)->model;
while (ff_safe_queue_size(th_model->request_queue) != 0) {
THRequestItem *item = (THRequestItem *)ff_safe_queue_pop_front(th_model->request_queue);
destroy_request_item(&item);
}
ff_safe_queue_destroy(th_model->request_queue);
while (ff_queue_size(th_model->lltask_queue) != 0) {
LastLevelTaskItem *item = (LastLevelTaskItem *)ff_queue_pop_front(th_model->lltask_queue);
av_freep(&item);
}
ff_queue_destroy(th_model->lltask_queue);
while (ff_queue_size(th_model->task_queue) != 0) {
TaskItem *item = (TaskItem *)ff_queue_pop_front(th_model->task_queue);
av_frame_free(&item->in_frame);
av_frame_free(&item->out_frame);
av_freep(&item);
}
ff_queue_destroy(th_model->task_queue);
delete th_model->jit_model;
av_opt_free(&th_model->ctx);
av_freep(&th_model);
av_freep(model);
}
static int get_input_th(void *model, DNNData *input, const char *input_name)
{
input->dt = DNN_FLOAT;
input->order = DCO_RGB;
input->layout = DL_NCHW;
input->dims[0] = 1;
input->dims[1] = 3;
input->dims[2] = -1;
input->dims[3] = -1;
return 0;
}
static void deleter(void *arg)
{
av_freep(&arg);
}
static int fill_model_input_th(THModel *th_model, THRequestItem *request)
{
LastLevelTaskItem *lltask = NULL;
TaskItem *task = NULL;
THInferRequest *infer_request = NULL;
DNNData input = { 0 };
THContext *ctx = &th_model->ctx;
int ret, width_idx, height_idx, channel_idx;
lltask = (LastLevelTaskItem *)ff_queue_pop_front(th_model->lltask_queue);
if (!lltask) {
ret = AVERROR(EINVAL);
goto err;
}
request->lltask = lltask;
task = lltask->task;
infer_request = request->infer_request;
ret = get_input_th(th_model, &input, NULL);
if ( ret != 0) {
goto err;
}
width_idx = dnn_get_width_idx_by_layout(input.layout);
height_idx = dnn_get_height_idx_by_layout(input.layout);
channel_idx = dnn_get_channel_idx_by_layout(input.layout);
input.dims[height_idx] = task->in_frame->height;
input.dims[width_idx] = task->in_frame->width;
input.data = av_malloc(input.dims[height_idx] * input.dims[width_idx] *
input.dims[channel_idx] * sizeof(float));
if (!input.data)
return AVERROR(ENOMEM);
infer_request->input_tensor = new torch::Tensor();
infer_request->output = new torch::Tensor();
switch (th_model->model->func_type) {
case DFT_PROCESS_FRAME:
input.scale = 255;
if (task->do_ioproc) {
if (th_model->model->frame_pre_proc != NULL) {
th_model->model->frame_pre_proc(task->in_frame, &input, th_model->model->filter_ctx);
} else {
ff_proc_from_frame_to_dnn(task->in_frame, &input, ctx);
}
}
break;
default:
avpriv_report_missing_feature(NULL, "model function type %d", th_model->model->func_type);
break;
}
*infer_request->input_tensor = torch::from_blob(input.data,
{1, input.dims[channel_idx], input.dims[height_idx], input.dims[width_idx]},
deleter, torch::kFloat32);
return 0;
err:
th_free_request(infer_request);
return ret;
}
static int th_start_inference(void *args)
{
THRequestItem *request = (THRequestItem *)args;
THInferRequest *infer_request = NULL;
LastLevelTaskItem *lltask = NULL;
TaskItem *task = NULL;
THModel *th_model = NULL;
THContext *ctx = NULL;
std::vector<torch::jit::IValue> inputs;
torch::NoGradGuard no_grad;
if (!request) {
av_log(NULL, AV_LOG_ERROR, "THRequestItem is NULL\n");
return AVERROR(EINVAL);
}
infer_request = request->infer_request;
lltask = request->lltask;
task = lltask->task;
th_model = (THModel *)task->model;
ctx = &th_model->ctx;
if (ctx->options.optimize)
torch::jit::setGraphExecutorOptimize(true);
else
torch::jit::setGraphExecutorOptimize(false);
if (!infer_request->input_tensor || !infer_request->output) {
av_log(ctx, AV_LOG_ERROR, "input or output tensor is NULL\n");
return DNN_GENERIC_ERROR;
}
inputs.push_back(*infer_request->input_tensor);
*infer_request->output = th_model->jit_model->forward(inputs).toTensor();
return 0;
}
static void infer_completion_callback(void *args) {
THRequestItem *request = (THRequestItem*)args;
LastLevelTaskItem *lltask = request->lltask;
TaskItem *task = lltask->task;
DNNData outputs = { 0 };
THInferRequest *infer_request = request->infer_request;
THModel *th_model = (THModel *)task->model;
torch::Tensor *output = infer_request->output;
c10::IntArrayRef sizes = output->sizes();
outputs.order = DCO_RGB;
outputs.layout = DL_NCHW;
outputs.dt = DNN_FLOAT;
if (sizes.size() == 4) {
// 4 dimensions: [batch_size, channel, height, width]
// this format of data is normally used for video frame SR
outputs.dims[0] = sizes.at(0); // N
outputs.dims[1] = sizes.at(1); // C
outputs.dims[2] = sizes.at(2); // H
outputs.dims[3] = sizes.at(3); // W
} else {
avpriv_report_missing_feature(&th_model->ctx, "Support of this kind of model");
goto err;
}
switch (th_model->model->func_type) {
case DFT_PROCESS_FRAME:
if (task->do_ioproc) {
outputs.scale = 255;
outputs.data = output->data_ptr();
if (th_model->model->frame_post_proc != NULL) {
th_model->model->frame_post_proc(task->out_frame, &outputs, th_model->model->filter_ctx);
} else {
ff_proc_from_dnn_to_frame(task->out_frame, &outputs, &th_model->ctx);
}
} else {
task->out_frame->width = outputs.dims[dnn_get_width_idx_by_layout(outputs.layout)];
task->out_frame->height = outputs.dims[dnn_get_height_idx_by_layout(outputs.layout)];
}
break;
default:
avpriv_report_missing_feature(&th_model->ctx, "model function type %d", th_model->model->func_type);
goto err;
}
task->inference_done++;
av_freep(&request->lltask);
err:
th_free_request(infer_request);
if (ff_safe_queue_push_back(th_model->request_queue, request) < 0) {
destroy_request_item(&request);
av_log(&th_model->ctx, AV_LOG_ERROR, "Unable to push back request_queue when failed to start inference.\n");
}
}
static int execute_model_th(THRequestItem *request, Queue *lltask_queue)
{
THModel *th_model = NULL;
LastLevelTaskItem *lltask;
TaskItem *task = NULL;
int ret = 0;
if (ff_queue_size(lltask_queue) == 0) {
destroy_request_item(&request);
return 0;
}
lltask = (LastLevelTaskItem *)ff_queue_peek_front(lltask_queue);
if (lltask == NULL) {
av_log(NULL, AV_LOG_ERROR, "Failed to get LastLevelTaskItem\n");
ret = AVERROR(EINVAL);
goto err;
}
task = lltask->task;
th_model = (THModel *)task->model;
ret = fill_model_input_th(th_model, request);
if ( ret != 0) {
goto err;
}
if (task->async) {
avpriv_report_missing_feature(&th_model->ctx, "LibTorch async");
} else {
ret = th_start_inference((void *)(request));
if (ret != 0) {
goto err;
}
infer_completion_callback(request);
return (task->inference_done == task->inference_todo) ? 0 : DNN_GENERIC_ERROR;
}
err:
th_free_request(request->infer_request);
if (ff_safe_queue_push_back(th_model->request_queue, request) < 0) {
destroy_request_item(&request);
}
return ret;
}
static int get_output_th(void *model, const char *input_name, int input_width, int input_height,
const char *output_name, int *output_width, int *output_height)
{
int ret = 0;
THModel *th_model = (THModel*) model;
THContext *ctx = &th_model->ctx;
TaskItem task = { 0 };
THRequestItem *request = NULL;
DNNExecBaseParams exec_params = {
.input_name = input_name,
.output_names = &output_name,
.nb_output = 1,
.in_frame = NULL,
.out_frame = NULL,
};
ret = ff_dnn_fill_gettingoutput_task(&task, &exec_params, th_model, input_height, input_width, ctx);
if ( ret != 0) {
goto err;
}
ret = extract_lltask_from_task(&task, th_model->lltask_queue);
if ( ret != 0) {
av_log(ctx, AV_LOG_ERROR, "unable to extract last level task from task.\n");
goto err;
}
request = (THRequestItem*) ff_safe_queue_pop_front(th_model->request_queue);
if (!request) {
av_log(ctx, AV_LOG_ERROR, "unable to get infer request.\n");
ret = AVERROR(EINVAL);
goto err;
}
ret = execute_model_th(request, th_model->lltask_queue);
*output_width = task.out_frame->width;
*output_height = task.out_frame->height;
err:
av_frame_free(&task.out_frame);
av_frame_free(&task.in_frame);
return ret;
}
static THInferRequest *th_create_inference_request(void)
{
THInferRequest *request = (THInferRequest *)av_malloc(sizeof(THInferRequest));
if (!request) {
return NULL;
}
request->input_tensor = NULL;
request->output = NULL;
return request;
}
static DNNModel *dnn_load_model_th(const char *model_filename, DNNFunctionType func_type, const char *options, AVFilterContext *filter_ctx)
{
DNNModel *model = NULL;
THModel *th_model = NULL;
THRequestItem *item = NULL;
THContext *ctx;
model = (DNNModel *)av_mallocz(sizeof(DNNModel));
if (!model) {
return NULL;
}
th_model = (THModel *)av_mallocz(sizeof(THModel));
if (!th_model) {
av_freep(&model);
return NULL;
}
th_model->model = model;
model->model = th_model;
th_model->ctx.c_class = &dnn_th_class;
ctx = &th_model->ctx;
//parse options
av_opt_set_defaults(ctx);
if (av_opt_set_from_string(ctx, options, NULL, "=", "&") < 0) {
av_log(ctx, AV_LOG_ERROR, "Failed to parse options \"%s\"\n", options);
return NULL;
}
c10::Device device = c10::Device(ctx->options.device_name);
if (!device.is_cpu()) {
av_log(ctx, AV_LOG_ERROR, "Not supported device:\"%s\"\n", ctx->options.device_name);
goto fail;
}
try {
th_model->jit_model = new torch::jit::Module;
(*th_model->jit_model) = torch::jit::load(model_filename);
} catch (const c10::Error& e) {
av_log(ctx, AV_LOG_ERROR, "Failed to load torch model\n");
goto fail;
}
th_model->request_queue = ff_safe_queue_create();
if (!th_model->request_queue) {
goto fail;
}
item = (THRequestItem *)av_mallocz(sizeof(THRequestItem));
if (!item) {
goto fail;
}
item->lltask = NULL;
item->infer_request = th_create_inference_request();
if (!item->infer_request) {
av_log(NULL, AV_LOG_ERROR, "Failed to allocate memory for Torch inference request\n");
goto fail;
}
item->exec_module.start_inference = &th_start_inference;
item->exec_module.callback = &infer_completion_callback;
item->exec_module.args = item;
if (ff_safe_queue_push_back(th_model->request_queue, item) < 0) {
goto fail;
}
item = NULL;
th_model->task_queue = ff_queue_create();
if (!th_model->task_queue) {
goto fail;
}
th_model->lltask_queue = ff_queue_create();
if (!th_model->lltask_queue) {
goto fail;
}
model->get_input = &get_input_th;
model->get_output = &get_output_th;
model->options = NULL;
model->filter_ctx = filter_ctx;
model->func_type = func_type;
return model;
fail:
if (item) {
destroy_request_item(&item);
av_freep(&item);
}
dnn_free_model_th(&model);
return NULL;
}
static int dnn_execute_model_th(const DNNModel *model, DNNExecBaseParams *exec_params)
{
THModel *th_model = (THModel *)model->model;
THContext *ctx = &th_model->ctx;
TaskItem *task;
THRequestItem *request;
int ret = 0;
ret = ff_check_exec_params(ctx, DNN_TH, model->func_type, exec_params);
if (ret != 0) {
av_log(ctx, AV_LOG_ERROR, "exec parameter checking fail.\n");
return ret;
}
task = (TaskItem *)av_malloc(sizeof(TaskItem));
if (!task) {
av_log(ctx, AV_LOG_ERROR, "unable to alloc memory for task item.\n");
return AVERROR(ENOMEM);
}
ret = ff_dnn_fill_task(task, exec_params, th_model, 0, 1);
if (ret != 0) {
av_freep(&task);
av_log(ctx, AV_LOG_ERROR, "unable to fill task.\n");
return ret;
}
ret = ff_queue_push_back(th_model->task_queue, task);
if (ret < 0) {
av_freep(&task);
av_log(ctx, AV_LOG_ERROR, "unable to push back task_queue.\n");
return ret;
}
ret = extract_lltask_from_task(task, th_model->lltask_queue);
if (ret != 0) {
av_log(ctx, AV_LOG_ERROR, "unable to extract last level task from task.\n");
return ret;
}
request = (THRequestItem *)ff_safe_queue_pop_front(th_model->request_queue);
if (!request) {
av_log(ctx, AV_LOG_ERROR, "unable to get infer request.\n");
return AVERROR(EINVAL);
}
return execute_model_th(request, th_model->lltask_queue);
}
static DNNAsyncStatusType dnn_get_result_th(const DNNModel *model, AVFrame **in, AVFrame **out)
{
THModel *th_model = (THModel *)model->model;
return ff_dnn_get_result_common(th_model->task_queue, in, out);
}
static int dnn_flush_th(const DNNModel *model)
{
THModel *th_model = (THModel *)model->model;
THRequestItem *request;
if (ff_queue_size(th_model->lltask_queue) == 0)
// no pending task needs flushing
return 0;
request = (THRequestItem *)ff_safe_queue_pop_front(th_model->request_queue);
if (!request) {
av_log(&th_model->ctx, AV_LOG_ERROR, "unable to get infer request.\n");
return AVERROR(EINVAL);
}
return execute_model_th(request, th_model->lltask_queue);
}
extern const DNNModule ff_dnn_backend_torch = {
.load_model = dnn_load_model_th,
.execute_model = dnn_execute_model_th,
.get_result = dnn_get_result_th,
.flush = dnn_flush_th,
.free_model = dnn_free_model_th,
};

View File
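The Torch backend above hands the model one `{1, C, H, W}` float tensor built via `torch::from_blob` over a buffer filled from the frame, i.e. the interleaved per-pixel samples end up in planar NCHW order. A minimal sketch of that HWC-to-NCHW repacking in plain Python (hypothetical helper, illustrating the layout only, not FFmpeg's actual conversion routine):

```python
def hwc_to_nchw(pixels, h, w, c):
    """Repack interleaved HWC samples into a flat batch-1 NCHW buffer,
    matching the {1, C, H, W} shape passed to torch::from_blob above."""
    out = [0.0] * (c * h * w)
    for y in range(h):
        for x in range(w):
            for ch in range(c):
                out[ch * h * w + y * w + x] = float(pixels[(y * w + x) * c + ch])
    return out

# 2x2 RGB frame, pixels stored as (R, G, B) triples
src = [1, 2, 3,  4, 5, 6,  7, 8, 9,  10, 11, 12]
print(hwc_to_nchw(src, 2, 2, 3))  # three contiguous planes: R, then G, then B
```

Planar layout is what `DL_NCHW` in `get_input_th()` advertises, so the model sees one contiguous plane per channel rather than interleaved samples.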

@@ -28,6 +28,7 @@
extern const DNNModule ff_dnn_backend_openvino;
extern const DNNModule ff_dnn_backend_tf;
extern const DNNModule ff_dnn_backend_torch;
const DNNModule *ff_get_dnn_module(DNNBackendType backend_type, void *log_ctx)
{
@@ -40,6 +41,10 @@ const DNNModule *ff_get_dnn_module(DNNBackendType backend_type, void *log_ctx)
case DNN_OV:
return &ff_dnn_backend_openvino;
#endif
#if (CONFIG_LIBTORCH == 1)
case DNN_TH:
return &ff_dnn_backend_torch;
#endif
default:
av_log(log_ctx, AV_LOG_ERROR,
"Module backend_type %d is not supported or enabled.\n",

View File

@@ -53,12 +53,22 @@ static char **separate_output_names(const char *expr, const char *val_sep, int *
int ff_dnn_init(DnnContext *ctx, DNNFunctionType func_type, AVFilterContext *filter_ctx)
{
DNNBackendType backend = ctx->backend_type;
if (!ctx->model_filename) {
av_log(filter_ctx, AV_LOG_ERROR, "model file for network is not specified\n");
return AVERROR(EINVAL);
}
if (ctx->backend_type == DNN_TF) {
if (backend == DNN_TH) {
if (ctx->model_inputname)
av_log(filter_ctx, AV_LOG_WARNING, "LibTorch backend does not require inputname, "\
"inputname will be ignored.\n");
if (ctx->model_outputnames)
av_log(filter_ctx, AV_LOG_WARNING, "LibTorch backend does not require outputname(s), "\
"all outputname(s) will be ignored.\n");
ctx->nb_outputs = 1;
} else if (backend == DNN_TF) {
if (!ctx->model_inputname) {
av_log(filter_ctx, AV_LOG_ERROR, "input name of the model network is not specified\n");
return AVERROR(EINVAL);
@@ -115,7 +125,8 @@ int ff_dnn_get_input(DnnContext *ctx, DNNData *input)
int ff_dnn_get_output(DnnContext *ctx, int input_width, int input_height, int *output_width, int *output_height)
{
char * output_name = ctx->model_outputnames ? ctx->model_outputnames[0] : NULL;
char * output_name = ctx->model_outputnames && ctx->backend_type != DNN_TH ?
ctx->model_outputnames[0] : NULL;
return ctx->model->get_output(ctx->model->model, ctx->model_inputname, input_width, input_height,
(const char *)output_name, output_width, output_height);
}

View File

@@ -32,7 +32,7 @@
#define DNN_GENERIC_ERROR FFERRTAG('D','N','N','!')
typedef enum {DNN_TF = 1, DNN_OV} DNNBackendType;
typedef enum {DNN_TF = 1, DNN_OV, DNN_TH} DNNBackendType;
typedef enum {DNN_FLOAT = 1, DNN_UINT8 = 4} DNNDataType;

View File

@@ -50,6 +50,9 @@ static const AVOption dnn_processing_options[] = {
#endif
#if (CONFIG_LIBOPENVINO == 1)
{ "openvino", "openvino backend flag", 0, AV_OPT_TYPE_CONST, { .i64 = DNN_OV }, 0, 0, FLAGS, .unit = "backend" },
#endif
#if (CONFIG_LIBTORCH == 1)
{ "torch", "torch backend flag", 0, AV_OPT_TYPE_CONST, { .i64 = DNN_TH }, 0, 0, FLAGS, "backend" },
#endif
DNN_COMMON_OPTIONS
{ NULL }

View File

@@ -452,6 +452,11 @@ static void dump_sei_film_grain_params_metadata(AVFilterContext *ctx, const AVFr
[AV_FILM_GRAIN_PARAMS_H274] = "h274",
};
const char *color_range_str = av_color_range_name(fgp->color_range);
const char *color_primaries_str = av_color_primaries_name(fgp->color_primaries);
const char *color_trc_str = av_color_transfer_name(fgp->color_trc);
const char *colorspace_str = av_color_space_name(fgp->color_space);
if (fgp->type >= FF_ARRAY_ELEMS(film_grain_type_names)) {
av_log(ctx, AV_LOG_ERROR, "invalid data\n");
return;
@@ -459,6 +464,16 @@ static void dump_sei_film_grain_params_metadata(AVFilterContext *ctx, const AVFr
av_log(ctx, AV_LOG_INFO, "type %s; ", film_grain_type_names[fgp->type]);
av_log(ctx, AV_LOG_INFO, "seed=%"PRIu64"; ", fgp->seed);
av_log(ctx, AV_LOG_INFO, "width=%d; ", fgp->width);
av_log(ctx, AV_LOG_INFO, "height=%d; ", fgp->height);
av_log(ctx, AV_LOG_INFO, "subsampling_x=%d; ", fgp->subsampling_x);
av_log(ctx, AV_LOG_INFO, "subsampling_y=%d; ", fgp->subsampling_y);
av_log(ctx, AV_LOG_INFO, "color_range=%s; ", color_range_str ? color_range_str : "unknown");
av_log(ctx, AV_LOG_INFO, "color_primaries=%s; ", color_primaries_str ? color_primaries_str : "unknown");
av_log(ctx, AV_LOG_INFO, "color_trc=%s; ", color_trc_str ? color_trc_str : "unknown");
av_log(ctx, AV_LOG_INFO, "color_space=%s; ", colorspace_str ? colorspace_str : "unknown");
av_log(ctx, AV_LOG_INFO, "bit_depth_luma=%d; ", fgp->bit_depth_luma);
av_log(ctx, AV_LOG_INFO, "bit_depth_chroma=%d; ", fgp->bit_depth_chroma);
switch (fgp->type) {
case AV_FILM_GRAIN_PARAMS_NONE:
@@ -504,18 +519,7 @@ static void dump_sei_film_grain_params_metadata(AVFilterContext *ctx, const AVFr
}
case AV_FILM_GRAIN_PARAMS_H274: {
const AVFilmGrainH274Params *h274 = &fgp->codec.h274;
const char *color_range_str = av_color_range_name(h274->color_range);
const char *color_primaries_str = av_color_primaries_name(h274->color_primaries);
const char *color_trc_str = av_color_transfer_name(h274->color_trc);
const char *colorspace_str = av_color_space_name(h274->color_space);
av_log(ctx, AV_LOG_INFO, "model_id=%d; ", h274->model_id);
av_log(ctx, AV_LOG_INFO, "bit_depth_luma=%d; ", h274->bit_depth_luma);
av_log(ctx, AV_LOG_INFO, "bit_depth_chroma=%d; ", h274->bit_depth_chroma);
av_log(ctx, AV_LOG_INFO, "color_range=%s; ", color_range_str ? color_range_str : "unknown");
av_log(ctx, AV_LOG_INFO, "color_primaries=%s; ", color_primaries_str ? color_primaries_str : "unknown");
av_log(ctx, AV_LOG_INFO, "color_trc=%s; ", color_trc_str ? color_trc_str : "unknown");
av_log(ctx, AV_LOG_INFO, "color_space=%s; ", colorspace_str ? colorspace_str : "unknown");
av_log(ctx, AV_LOG_INFO, "blending_mode_id=%d; ", h274->blending_mode_id);
av_log(ctx, AV_LOG_INFO, "log2_scale_factor=%d; ", h274->log2_scale_factor);

View File

@@ -658,6 +658,7 @@ OBJS-$(CONFIG_LIBOPENMPT_DEMUXER) += libopenmpt.o
OBJS-$(CONFIG_VAPOURSYNTH_DEMUXER) += vapoursynth.o
# protocols I/O
OBJS-$(CONFIG_ANDROID_CONTENT_PROTOCOL) += file.o
OBJS-$(CONFIG_ASYNC_PROTOCOL) += async.o
OBJS-$(CONFIG_APPLEHTTP_PROTOCOL) += hlsproto.o
OBJS-$(CONFIG_BLURAY_PROTOCOL) += bluray.o

View File

@@ -65,6 +65,9 @@ const FFOutputFormat ff_a64_muxer = {
.p.long_name = NULL_IF_CONFIG_SMALL("a64 - video for Commodore 64"),
.p.extensions = "a64, A64",
.p.video_codec = AV_CODEC_ID_A64_MULTI,
.p.audio_codec = AV_CODEC_ID_NONE,
.p.subtitle_codec = AV_CODEC_ID_NONE,
.flags_internal = FF_OFMT_FLAG_MAX_ONE_OF_EACH,
.write_header = a64_write_header,
.write_packet = ff_raw_write_packet,
};

View File

@@ -19,7 +19,6 @@
*/
#include "libavcodec/codec_id.h"
#include "libavcodec/codec_par.h"
#include "libavcodec/packet.h"
#include "libavutil/crc.h"
#include "libavutil/opt.h"
@@ -31,18 +30,6 @@ typedef struct AC4Context {
int write_crc;
} AC4Context;
static int ac4_init(AVFormatContext *s)
{
AVCodecParameters *par = s->streams[0]->codecpar;
if (s->nb_streams != 1 || par->codec_id != AV_CODEC_ID_AC4) {
av_log(s, AV_LOG_ERROR, "Only one AC-4 stream can be muxed by the AC-4 muxer\n");
return AVERROR(EINVAL);
}
return 0;
}
static int ac4_write_packet(AVFormatContext *s, AVPacket *pkt)
{
AC4Context *ac4 = s->priv_data;
@@ -95,7 +82,9 @@ const FFOutputFormat ff_ac4_muxer = {
.priv_data_size = sizeof(AC4Context),
.p.audio_codec = AV_CODEC_ID_AC4,
.p.video_codec = AV_CODEC_ID_NONE,
.init = ac4_init,
.p.subtitle_codec = AV_CODEC_ID_NONE,
.flags_internal = FF_OFMT_FLAG_MAX_ONE_OF_EACH |
FF_OFMT_FLAG_ONLY_DEFAULT_CODECS,
.write_packet = ac4_write_packet,
.p.priv_class = &ac4_muxer_class,
.p.flags = AVFMT_NOTIMESTAMPS,

View File

@@ -106,10 +106,6 @@ static int adts_init(AVFormatContext *s)
ADTSContext *adts = s->priv_data;
AVCodecParameters *par = s->streams[0]->codecpar;
if (par->codec_id != AV_CODEC_ID_AAC) {
av_log(s, AV_LOG_ERROR, "Only AAC streams can be muxed by the ADTS muxer\n");
return AVERROR(EINVAL);
}
if (par->extradata_size > 0)
return adts_decode_extradata(s, adts, par->extradata,
par->extradata_size);
@@ -241,6 +237,9 @@ const FFOutputFormat ff_adts_muxer = {
.priv_data_size = sizeof(ADTSContext),
.p.audio_codec = AV_CODEC_ID_AAC,
.p.video_codec = AV_CODEC_ID_NONE,
.p.subtitle_codec = AV_CODEC_ID_NONE,
.flags_internal = FF_OFMT_FLAG_MAX_ONE_OF_EACH |
FF_OFMT_FLAG_ONLY_DEFAULT_CODECS,
.init = adts_init,
.write_header = adts_write_header,
.write_packet = adts_write_packet,

View File

@@ -29,25 +29,14 @@ static int aea_write_header(AVFormatContext *s)
{
const AVDictionaryEntry *title_entry;
size_t title_length = 0;
AVStream *st;
AVStream *st = s->streams[0];
if (s->nb_streams > 1) {
av_log(s, AV_LOG_ERROR, "Got more than one stream to encode. This is not supported.\n");
return AVERROR(EINVAL);
}
st = s->streams[0];
if (st->codecpar->ch_layout.nb_channels != 1 && st->codecpar->ch_layout.nb_channels != 2) {
av_log(s, AV_LOG_ERROR, "Only maximum 2 channels are supported in the audio"
" stream, %d channels were found.\n", st->codecpar->ch_layout.nb_channels);
return AVERROR(EINVAL);
}
if (st->codecpar->codec_id != AV_CODEC_ID_ATRAC1) {
av_log(s, AV_LOG_ERROR, "AEA can only store ATRAC1 streams, %s was found.\n", avcodec_get_name(st->codecpar->codec_id));
return AVERROR(EINVAL);
}
if (st->codecpar->sample_rate != 44100) {
av_log(s, AV_LOG_ERROR, "Invalid sample rate (%d) AEA only supports 44.1kHz.\n", st->codecpar->sample_rate);
return AVERROR(EINVAL);
@@ -108,7 +97,11 @@ const FFOutputFormat ff_aea_muxer = {
.p.long_name = NULL_IF_CONFIG_SMALL("MD STUDIO audio"),
.p.extensions = "aea",
.p.audio_codec = AV_CODEC_ID_ATRAC1,
.p.video_codec = AV_CODEC_ID_NONE,
.p.subtitle_codec = AV_CODEC_ID_NONE,
.flags_internal = FF_OFMT_FLAG_MAX_ONE_OF_EACH |
FF_OFMT_FLAG_ONLY_DEFAULT_CODECS,
.write_header = aea_write_header,
.write_packet = ff_raw_write_packet,
.write_trailer = aea_write_trailer,

View File

@@ -189,19 +189,8 @@ static int alp_write_init(AVFormatContext *s)
alp->type = ALP_TYPE_TUN;
}
if (s->nb_streams != 1) {
av_log(s, AV_LOG_ERROR, "Too many streams\n");
return AVERROR(EINVAL);
}
par = s->streams[0]->codecpar;
if (par->codec_id != AV_CODEC_ID_ADPCM_IMA_ALP) {
av_log(s, AV_LOG_ERROR, "%s codec not supported\n",
avcodec_get_name(par->codec_id));
return AVERROR(EINVAL);
}
if (par->ch_layout.nb_channels > 2) {
av_log(s, AV_LOG_ERROR, "A maximum of 2 channels are supported\n");
return AVERROR(EINVAL);
@@ -298,7 +287,10 @@ const FFOutputFormat ff_alp_muxer = {
.p.extensions = "tun,pcm",
.p.audio_codec = AV_CODEC_ID_ADPCM_IMA_ALP,
.p.video_codec = AV_CODEC_ID_NONE,
.p.subtitle_codec = AV_CODEC_ID_NONE,
.p.priv_class = &alp_muxer_class,
.flags_internal = FF_OFMT_FLAG_MAX_ONE_OF_EACH |
FF_OFMT_FLAG_ONLY_DEFAULT_CODECS,
.init = alp_write_init,
.write_header = alp_write_header,
.write_packet = ff_raw_write_packet,


@@ -51,23 +51,6 @@ static const uint8_t amrwb_packed_size[16] = {
18, 24, 33, 37, 41, 47, 51, 59, 61, 6, 1, 1, 1, 1, 1, 1
};
#if CONFIG_AMR_MUXER
static int amr_write_header(AVFormatContext *s)
{
AVIOContext *pb = s->pb;
AVCodecParameters *par = s->streams[0]->codecpar;
if (par->codec_id == AV_CODEC_ID_AMR_NB) {
avio_write(pb, AMR_header, sizeof(AMR_header)); /* magic number */
} else if (par->codec_id == AV_CODEC_ID_AMR_WB) {
avio_write(pb, AMRWB_header, sizeof(AMRWB_header)); /* magic number */
} else {
return -1;
}
return 0;
}
#endif /* CONFIG_AMR_MUXER */
#if CONFIG_AMR_DEMUXER
static int amr_probe(const AVProbeData *p)
{
@@ -268,6 +251,21 @@ const FFInputFormat ff_amrwb_demuxer = {
#endif
#if CONFIG_AMR_MUXER
static int amr_write_header(AVFormatContext *s)
{
AVIOContext *pb = s->pb;
AVCodecParameters *par = s->streams[0]->codecpar;
if (par->codec_id == AV_CODEC_ID_AMR_NB) {
avio_write(pb, AMR_header, sizeof(AMR_header)); /* magic number */
} else if (par->codec_id == AV_CODEC_ID_AMR_WB) {
avio_write(pb, AMRWB_header, sizeof(AMRWB_header)); /* magic number */
} else {
return -1;
}
return 0;
}
const FFOutputFormat ff_amr_muxer = {
.p.name = "amr",
.p.long_name = NULL_IF_CONFIG_SMALL("3GPP AMR"),
@@ -275,7 +273,9 @@ const FFOutputFormat ff_amr_muxer = {
.p.extensions = "amr",
.p.audio_codec = AV_CODEC_ID_AMR_NB,
.p.video_codec = AV_CODEC_ID_NONE,
.p.subtitle_codec = AV_CODEC_ID_NONE,
.p.flags = AVFMT_NOTIMESTAMPS,
.flags_internal = FF_OFMT_FLAG_MAX_ONE_OF_EACH,
.write_header = amr_write_header,
.write_packet = ff_raw_write_packet,
};


@@ -113,11 +113,7 @@ static av_cold int amv_init(AVFormatContext *s)
return AVERROR(EINVAL);
}
if (ast->codecpar->codec_id != AV_CODEC_ID_ADPCM_IMA_AMV) {
av_log(s, AV_LOG_ERROR, "Second AMV stream must be %s\n",
avcodec_get_name(AV_CODEC_ID_ADPCM_IMA_AMV));
return AVERROR(EINVAL);
}
av_assert1(ast->codecpar->codec_id == AV_CODEC_ID_ADPCM_IMA_AMV);
/* These files are broken-enough as they are. They shouldn't be streamed. */
if (!(s->pb->seekable & AVIO_SEEKABLE_NORMAL)) {
@@ -410,6 +406,9 @@ const FFOutputFormat ff_amv_muxer = {
.priv_data_size = sizeof(AMVContext),
.p.audio_codec = AV_CODEC_ID_ADPCM_IMA_AMV,
.p.video_codec = AV_CODEC_ID_AMV,
.p.subtitle_codec = AV_CODEC_ID_NONE,
.flags_internal = FF_OFMT_FLAG_MAX_ONE_OF_EACH |
FF_OFMT_FLAG_ONLY_DEFAULT_CODECS,
.init = amv_init,
.deinit = amv_deinit,
.write_header = amv_write_header,


@@ -215,20 +215,7 @@ const FFInputFormat ff_apm_demuxer = {
#if CONFIG_APM_MUXER
static int apm_write_init(AVFormatContext *s)
{
AVCodecParameters *par;
if (s->nb_streams != 1) {
av_log(s, AV_LOG_ERROR, "APM files have exactly one stream\n");
return AVERROR(EINVAL);
}
par = s->streams[0]->codecpar;
if (par->codec_id != AV_CODEC_ID_ADPCM_IMA_APM) {
av_log(s, AV_LOG_ERROR, "%s codec not supported\n",
avcodec_get_name(par->codec_id));
return AVERROR(EINVAL);
}
AVCodecParameters *par = s->streams[0]->codecpar;
if (par->ch_layout.nb_channels > 2) {
av_log(s, AV_LOG_ERROR, "APM files only support up to 2 channels\n");
@@ -311,6 +298,9 @@ const FFOutputFormat ff_apm_muxer = {
.p.extensions = "apm",
.p.audio_codec = AV_CODEC_ID_ADPCM_IMA_APM,
.p.video_codec = AV_CODEC_ID_NONE,
.p.subtitle_codec = AV_CODEC_ID_NONE,
.flags_internal = FF_OFMT_FLAG_MAX_ONE_OF_EACH |
FF_OFMT_FLAG_ONLY_DEFAULT_CODECS,
.init = apm_write_init,
.write_header = apm_write_header,
.write_packet = ff_raw_write_packet,


@@ -84,14 +84,6 @@ static int apng_write_header(AVFormatContext *format_context)
APNGMuxContext *apng = format_context->priv_data;
AVCodecParameters *par = format_context->streams[0]->codecpar;
if (format_context->nb_streams != 1 ||
format_context->streams[0]->codecpar->codec_type != AVMEDIA_TYPE_VIDEO ||
format_context->streams[0]->codecpar->codec_id != AV_CODEC_ID_APNG) {
av_log(format_context, AV_LOG_ERROR,
"APNG muxer supports only a single video APNG stream.\n");
return AVERROR(EINVAL);
}
if (apng->last_delay.num > UINT16_MAX || apng->last_delay.den > UINT16_MAX) {
av_reduce(&apng->last_delay.num, &apng->last_delay.den,
apng->last_delay.num, apng->last_delay.den, UINT16_MAX);
@@ -315,6 +307,9 @@ const FFOutputFormat ff_apng_muxer = {
.priv_data_size = sizeof(APNGMuxContext),
.p.audio_codec = AV_CODEC_ID_NONE,
.p.video_codec = AV_CODEC_ID_APNG,
.p.subtitle_codec = AV_CODEC_ID_NONE,
.flags_internal = FF_OFMT_FLAG_MAX_ONE_OF_EACH |
FF_OFMT_FLAG_ONLY_DEFAULT_CODECS,
.write_header = apng_write_header,
.write_packet = apng_write_packet,
.write_trailer = apng_write_trailer,


@@ -288,20 +288,7 @@ const FFInputFormat ff_argo_asf_demuxer = {
static int argo_asf_write_init(AVFormatContext *s)
{
ArgoASFMuxContext *ctx = s->priv_data;
const AVCodecParameters *par;
if (s->nb_streams != 1) {
av_log(s, AV_LOG_ERROR, "ASF files have exactly one stream\n");
return AVERROR(EINVAL);
}
par = s->streams[0]->codecpar;
if (par->codec_id != AV_CODEC_ID_ADPCM_ARGO) {
av_log(s, AV_LOG_ERROR, "%s codec not supported\n",
avcodec_get_name(par->codec_id));
return AVERROR(EINVAL);
}
const AVCodecParameters *par = s->streams[0]->codecpar;
if (ctx->version_major == 1 && ctx->version_minor == 1 && par->sample_rate != 22050) {
av_log(s, AV_LOG_ERROR, "ASF v1.1 files only support a sample rate of 22050\n");
@@ -481,7 +468,10 @@ const FFOutputFormat ff_argo_asf_muxer = {
*/
.p.audio_codec = AV_CODEC_ID_ADPCM_ARGO,
.p.video_codec = AV_CODEC_ID_NONE,
.p.subtitle_codec = AV_CODEC_ID_NONE,
.p.priv_class = &argo_asf_muxer_class,
.flags_internal = FF_OFMT_FLAG_MAX_ONE_OF_EACH |
FF_OFMT_FLAG_ONLY_DEFAULT_CODECS,
.init = argo_asf_write_init,
.write_header = argo_asf_write_header,
.write_packet = argo_asf_write_packet,


@@ -269,20 +269,7 @@ const FFInputFormat ff_argo_cvg_demuxer = {
static int argo_cvg_write_init(AVFormatContext *s)
{
ArgoCVGMuxContext *ctx = s->priv_data;
const AVCodecParameters *par;
if (s->nb_streams != 1) {
av_log(s, AV_LOG_ERROR, "CVG files have exactly one stream\n");
return AVERROR(EINVAL);
}
par = s->streams[0]->codecpar;
if (par->codec_id != AV_CODEC_ID_ADPCM_PSX) {
av_log(s, AV_LOG_ERROR, "%s codec not supported\n",
avcodec_get_name(par->codec_id));
return AVERROR(EINVAL);
}
const AVCodecParameters *par = s->streams[0]->codecpar;
if (par->ch_layout.nb_channels != 1) {
av_log(s, AV_LOG_ERROR, "CVG files only support 1 channel\n");
@@ -408,7 +395,10 @@ const FFOutputFormat ff_argo_cvg_muxer = {
.p.extensions = "cvg",
.p.audio_codec = AV_CODEC_ID_ADPCM_PSX,
.p.video_codec = AV_CODEC_ID_NONE,
.p.subtitle_codec = AV_CODEC_ID_NONE,
.p.priv_class = &argo_cvg_muxer_class,
.flags_internal = FF_OFMT_FLAG_MAX_ONE_OF_EACH |
FF_OFMT_FLAG_ONLY_DEFAULT_CODECS,
.init = argo_cvg_write_init,
.write_header = argo_cvg_write_header,
.write_packet = argo_cvg_write_packet,


@@ -50,10 +50,6 @@ static int write_header(AVFormatContext *s)
ASSContext *ass = s->priv_data;
AVCodecParameters *par = s->streams[0]->codecpar;
if (s->nb_streams != 1 || par->codec_id != AV_CODEC_ID_ASS) {
av_log(s, AV_LOG_ERROR, "Exactly one ASS/SSA stream is needed.\n");
return AVERROR(EINVAL);
}
avpriv_set_pts_info(s->streams[0], 64, 1, 100);
if (par->extradata_size > 0) {
size_t header_size = par->extradata_size;
@@ -237,8 +233,12 @@ const FFOutputFormat ff_ass_muxer = {
.p.long_name = NULL_IF_CONFIG_SMALL("SSA (SubStation Alpha) subtitle"),
.p.mime_type = "text/x-ass",
.p.extensions = "ass,ssa",
.p.audio_codec = AV_CODEC_ID_NONE,
.p.video_codec = AV_CODEC_ID_NONE,
.p.subtitle_codec = AV_CODEC_ID_ASS,
.p.flags = AVFMT_GLOBALHEADER | AVFMT_NOTIMESTAMPS | AVFMT_TS_NONSTRICT,
.flags_internal = FF_OFMT_FLAG_MAX_ONE_OF_EACH |
FF_OFMT_FLAG_ONLY_DEFAULT_CODECS,
.p.priv_class = &ass_class,
.priv_data_size = sizeof(ASSContext),
.write_header = write_header,


@@ -49,16 +49,9 @@ static int ast_write_header(AVFormatContext *s)
{
ASTMuxContext *ast = s->priv_data;
AVIOContext *pb = s->pb;
AVCodecParameters *par;
AVCodecParameters *par = s->streams[0]->codecpar;
unsigned int codec_tag;
if (s->nb_streams == 1) {
par = s->streams[0]->codecpar;
} else {
av_log(s, AV_LOG_ERROR, "only one stream is supported\n");
return AVERROR(EINVAL);
}
if (par->codec_id == AV_CODEC_ID_ADPCM_AFC) {
av_log(s, AV_LOG_ERROR, "muxing ADPCM AFC is not implemented\n");
return AVERROR_PATCHWELCOME;
@@ -204,6 +197,8 @@ const FFOutputFormat ff_ast_muxer = {
.priv_data_size = sizeof(ASTMuxContext),
.p.audio_codec = AV_CODEC_ID_PCM_S16BE_PLANAR,
.p.video_codec = AV_CODEC_ID_NONE,
.p.subtitle_codec = AV_CODEC_ID_NONE,
.flags_internal = FF_OFMT_FLAG_MAX_ONE_OF_EACH,
.write_header = ast_write_header,
.write_packet = ast_write_packet,
.write_trailer = ast_write_trailer,


@@ -291,11 +291,6 @@ static int au_write_header(AVFormatContext *s)
AVCodecParameters *par = s->streams[0]->codecpar;
AVBPrint annotations;
if (s->nb_streams != 1) {
av_log(s, AV_LOG_ERROR, "only one stream is supported\n");
return AVERROR(EINVAL);
}
par->codec_tag = ff_codec_get_tag(codec_au_tags, par->codec_id);
if (!par->codec_tag) {
av_log(s, AV_LOG_ERROR, "unsupported codec\n");
@@ -346,7 +341,9 @@ const FFOutputFormat ff_au_muxer = {
.p.codec_tag = au_codec_tags,
.p.audio_codec = AV_CODEC_ID_PCM_S16BE,
.p.video_codec = AV_CODEC_ID_NONE,
.p.subtitle_codec = AV_CODEC_ID_NONE,
.p.flags = AVFMT_NOTIMESTAMPS,
.flags_internal = FF_OFMT_FLAG_MAX_ONE_OF_EACH,
.priv_data_size = sizeof(AUContext),
.write_header = au_write_header,
.write_packet = ff_raw_write_packet,


@@ -125,11 +125,11 @@ const FFInputFormat ff_bit_demuxer = {
#endif
#if CONFIG_BIT_MUXER
static int write_header(AVFormatContext *s)
static av_cold int init(AVFormatContext *s)
{
AVCodecParameters *par = s->streams[0]->codecpar;
if ((par->codec_id != AV_CODEC_ID_G729) || par->ch_layout.nb_channels != 1) {
if (par->ch_layout.nb_channels != 1) {
av_log(s, AV_LOG_ERROR,
"only codec g729 with 1 channel is supported by this format\n");
return AVERROR(EINVAL);
@@ -167,7 +167,10 @@ const FFOutputFormat ff_bit_muxer = {
.p.extensions = "bit",
.p.audio_codec = AV_CODEC_ID_G729,
.p.video_codec = AV_CODEC_ID_NONE,
.write_header = write_header,
.p.subtitle_codec = AV_CODEC_ID_NONE,
.flags_internal = FF_OFMT_FLAG_MAX_ONE_OF_EACH |
FF_OFMT_FLAG_ONLY_DEFAULT_CODECS,
.init = init,
.write_packet = write_packet,
};
#endif


@@ -118,11 +118,6 @@ static int caf_write_header(AVFormatContext *s)
int64_t chunk_size = 0;
int frame_size = par->frame_size, sample_rate = par->sample_rate;
if (s->nb_streams != 1) {
av_log(s, AV_LOG_ERROR, "CAF files have exactly one stream\n");
return AVERROR(EINVAL);
}
switch (par->codec_id) {
case AV_CODEC_ID_AAC:
av_log(s, AV_LOG_ERROR, "muxing codec currently unsupported\n");
@@ -284,6 +279,8 @@ const FFOutputFormat ff_caf_muxer = {
.priv_data_size = sizeof(CAFContext),
.p.audio_codec = AV_CODEC_ID_PCM_S16BE,
.p.video_codec = AV_CODEC_ID_NONE,
.p.subtitle_codec = AV_CODEC_ID_NONE,
.flags_internal = FF_OFMT_FLAG_MAX_ONE_OF_EACH,
.write_header = caf_write_header,
.write_packet = caf_write_packet,
.write_trailer = caf_write_trailer,


@@ -58,7 +58,7 @@ static void deinit(AVFormatContext *s)
}
}
static int write_header(AVFormatContext *s)
static av_cold int init(AVFormatContext *s)
{
ChromaprintMuxContext *cpr = s->priv_data;
AVStream *st;
@@ -85,11 +85,6 @@ static int write_header(AVFormatContext *s)
#endif
}
if (s->nb_streams != 1) {
av_log(s, AV_LOG_ERROR, "Only one stream is supported\n");
return AVERROR(EINVAL);
}
st = s->streams[0];
if (st->codecpar->ch_layout.nb_channels > 2) {
@@ -182,7 +177,11 @@ const FFOutputFormat ff_chromaprint_muxer = {
.p.long_name = NULL_IF_CONFIG_SMALL("Chromaprint"),
.priv_data_size = sizeof(ChromaprintMuxContext),
.p.audio_codec = AV_NE(AV_CODEC_ID_PCM_S16BE, AV_CODEC_ID_PCM_S16LE),
.write_header = write_header,
.p.video_codec = AV_CODEC_ID_NONE,
.p.subtitle_codec = AV_CODEC_ID_NONE,
.flags_internal = FF_OFMT_FLAG_MAX_ONE_OF_EACH |
FF_OFMT_FLAG_ONLY_DEFAULT_CODECS,
.init = init,
.write_packet = write_packet,
.write_trailer = write_trailer,
.deinit = deinit,


@@ -214,14 +214,7 @@ static int codec2_read_packet(AVFormatContext *s, AVPacket *pkt)
static int codec2_write_header(AVFormatContext *s)
{
AVStream *st;
if (s->nb_streams != 1 || s->streams[0]->codecpar->codec_id != AV_CODEC_ID_CODEC2) {
av_log(s, AV_LOG_ERROR, ".c2 files must have exactly one codec2 stream\n");
return AVERROR(EINVAL);
}
st = s->streams[0];
AVStream *st = s->streams[0];
if (st->codecpar->extradata_size != CODEC2_EXTRADATA_SIZE) {
av_log(s, AV_LOG_ERROR, ".c2 files require exactly %i bytes of extradata (got %i)\n",
@@ -317,8 +310,10 @@ const FFOutputFormat ff_codec2_muxer = {
.p.extensions = "c2",
.p.audio_codec = AV_CODEC_ID_CODEC2,
.p.video_codec = AV_CODEC_ID_NONE,
.p.subtitle_codec = AV_CODEC_ID_NONE,
.p.flags = AVFMT_NOTIMESTAMPS,
.priv_data_size = sizeof(Codec2Context),
.flags_internal = FF_OFMT_FLAG_MAX_ONE_OF_EACH |
FF_OFMT_FLAG_ONLY_DEFAULT_CODECS,
.write_header = codec2_write_header,
.write_packet = ff_raw_write_packet,
};


@@ -62,7 +62,10 @@ const FFOutputFormat ff_daud_muxer = {
.p.extensions = "302",
.p.audio_codec = AV_CODEC_ID_PCM_S24DAUD,
.p.video_codec = AV_CODEC_ID_NONE,
.p.subtitle_codec = AV_CODEC_ID_NONE,
.p.flags = AVFMT_NOTIMESTAMPS,
.flags_internal = FF_OFMT_FLAG_MAX_ONE_OF_EACH |
FF_OFMT_FLAG_ONLY_DEFAULT_CODECS,
.init = daud_init,
.write_packet = daud_write_packet,
};


@@ -728,5 +728,5 @@ const FFOutputFormat ff_fifo_muxer = {
.write_packet = fifo_write_packet,
.write_trailer = fifo_write_trailer,
.deinit = fifo_deinit,
.flags_internal = FF_FMT_ALLOW_FLUSH,
.flags_internal = FF_OFMT_FLAG_ALLOW_FLUSH,
};


@@ -40,6 +40,12 @@
#include <stdlib.h>
#include "os_support.h"
#include "url.h"
#if CONFIG_ANDROID_CONTENT_PROTOCOL
#include <jni.h>
#include "libavcodec/jni.h"
#include "libavcodec/ffjni.c"
#endif
/* Some systems may not have S_ISFIFO */
#ifndef S_ISFIFO
@@ -101,6 +107,21 @@ typedef struct FileContext {
int64_t initial_pos;
} FileContext;
#if CONFIG_ANDROID_CONTENT_PROTOCOL
static const AVOption android_content_options[] = {
{ "blocksize", "set I/O operation maximum block size", offsetof(FileContext, blocksize), AV_OPT_TYPE_INT, { .i64 = INT_MAX }, 1, INT_MAX, AV_OPT_FLAG_ENCODING_PARAM },
{ NULL }
};
static const AVClass android_content_class = {
.class_name = "android_content",
.item_name = av_default_item_name,
.option = android_content_options,
.version = LIBAVUTIL_VERSION_INT,
};
#endif
static const AVOption file_options[] = {
{ "truncate", "truncate existing files on write", offsetof(FileContext, trunc), AV_OPT_TYPE_BOOL, { .i64 = 1 }, 0, 1, AV_OPT_FLAG_ENCODING_PARAM },
{ "blocksize", "set I/O operation maximum block size", offsetof(FileContext, blocksize), AV_OPT_TYPE_INT, { .i64 = INT_MAX }, 1, INT_MAX, AV_OPT_FLAG_ENCODING_PARAM },
@@ -524,3 +545,142 @@ const URLProtocol ff_fd_protocol = {
};
#endif /* CONFIG_FD_PROTOCOL */
#if CONFIG_ANDROID_CONTENT_PROTOCOL
typedef struct JFields {
jclass uri_class;
jmethodID parse_id;
jclass context_class;
jmethodID get_content_resolver_id;
jclass content_resolver_class;
jmethodID open_file_descriptor_id;
jclass parcel_file_descriptor_class;
jmethodID detach_fd_id;
} JFields;
#define OFFSET(x) offsetof(JFields, x)
static const struct FFJniField jfields_mapping[] = {
{ "android/net/Uri", NULL, NULL, FF_JNI_CLASS, OFFSET(uri_class), 1 },
{ "android/net/Uri", "parse", "(Ljava/lang/String;)Landroid/net/Uri;", FF_JNI_STATIC_METHOD, OFFSET(parse_id), 1 },
{ "android/content/Context", NULL, NULL, FF_JNI_CLASS, OFFSET(context_class), 1 },
{ "android/content/Context", "getContentResolver", "()Landroid/content/ContentResolver;", FF_JNI_METHOD, OFFSET(get_content_resolver_id), 1 },
{ "android/content/ContentResolver", NULL, NULL, FF_JNI_CLASS, OFFSET(content_resolver_class), 1 },
{ "android/content/ContentResolver", "openFileDescriptor", "(Landroid/net/Uri;Ljava/lang/String;)Landroid/os/ParcelFileDescriptor;", FF_JNI_METHOD, OFFSET(open_file_descriptor_id), 1 },
{ "android/os/ParcelFileDescriptor", NULL, NULL, FF_JNI_CLASS, OFFSET(parcel_file_descriptor_class), 1 },
{ "android/os/ParcelFileDescriptor", "detachFd", "()I", FF_JNI_METHOD, OFFSET(detach_fd_id), 1 },
{ NULL }
};
#undef OFFSET
static int android_content_open(URLContext *h, const char *filename, int flags)
{
FileContext *c = h->priv_data;
int fd, ret;
struct stat st;
const char *mode_str = "r";
JNIEnv *env;
JFields jfields = { 0 };
jobject application_context = NULL;
jobject url = NULL;
jobject mode = NULL;
jobject uri = NULL;
jobject content_resolver = NULL;
jobject parcel_file_descriptor = NULL;
env = ff_jni_get_env(c);
if (!env) {
return AVERROR(EINVAL);
}
ret = ff_jni_init_jfields(env, &jfields, jfields_mapping, 0, c);
if (ret < 0) {
av_log(c, AV_LOG_ERROR, "failed to initialize jni fields\n");
return ret;
}
application_context = av_jni_get_android_app_ctx();
if (!application_context) {
av_log(c, AV_LOG_ERROR, "application context is not set\n");
ret = AVERROR_EXTERNAL;
goto done;
}
url = ff_jni_utf_chars_to_jstring(env, filename, c);
if (!url) {
ret = AVERROR_EXTERNAL;
goto done;
}
if (flags & AVIO_FLAG_WRITE && flags & AVIO_FLAG_READ)
mode_str = "rw";
else if (flags & AVIO_FLAG_WRITE)
mode_str = "w";
mode = ff_jni_utf_chars_to_jstring(env, mode_str, c);
if (!mode) {
ret = AVERROR_EXTERNAL;
goto done;
}
uri = (*env)->CallStaticObjectMethod(env, jfields.uri_class, jfields.parse_id, url);
ret = ff_jni_exception_check(env, 1, c);
if (ret < 0)
goto done;
content_resolver = (*env)->CallObjectMethod(env, application_context, jfields.get_content_resolver_id);
ret = ff_jni_exception_check(env, 1, c);
if (ret < 0)
goto done;
parcel_file_descriptor = (*env)->CallObjectMethod(env, content_resolver, jfields.open_file_descriptor_id, uri, mode);
ret = ff_jni_exception_check(env, 1, c);
if (ret < 0)
goto done;
fd = (*env)->CallIntMethod(env, parcel_file_descriptor, jfields.detach_fd_id);
ret = ff_jni_exception_check(env, 1, c);
if (ret < 0)
goto done;
if (fstat(fd, &st) < 0) {
close(fd);
return AVERROR(errno);
}
c->fd = fd;
h->is_streamed = !(S_ISREG(st.st_mode) || S_ISBLK(st.st_mode));
done:
(*env)->DeleteLocalRef(env, url);
(*env)->DeleteLocalRef(env, mode);
(*env)->DeleteLocalRef(env, uri);
(*env)->DeleteLocalRef(env, content_resolver);
(*env)->DeleteLocalRef(env, parcel_file_descriptor);
ff_jni_reset_jfields(env, &jfields, jfields_mapping, 0, c);
return ret;
}
URLProtocol ff_android_content_protocol = {
.name = "content",
.url_open = android_content_open,
.url_read = file_read,
.url_write = file_write,
.url_seek = file_seek,
.url_close = file_close,
.url_get_file_handle = file_get_handle,
.url_check = NULL,
.priv_data_size = sizeof(FileContext),
.priv_data_class = &android_content_class,
};
#endif /* CONFIG_ANDROID_CONTENT_PROTOCOL */


@@ -32,7 +32,7 @@
#define RAND_TAG MKBETAG('R','a','n','d')
static int write_header(AVFormatContext *s)
static av_cold int init(AVFormatContext *s)
{
if (s->streams[0]->codecpar->format != AV_PIX_FMT_RGBA) {
av_log(s, AV_LOG_ERROR, "only AV_PIX_FMT_RGBA is supported\n");
@@ -66,7 +66,10 @@ const FFOutputFormat ff_filmstrip_muxer = {
.p.extensions = "flm",
.p.audio_codec = AV_CODEC_ID_NONE,
.p.video_codec = AV_CODEC_ID_RAWVIDEO,
.write_header = write_header,
.p.subtitle_codec = AV_CODEC_ID_NONE,
.flags_internal = FF_OFMT_FLAG_MAX_ONE_OF_EACH |
FF_OFMT_FLAG_ONLY_DEFAULT_CODECS,
.init = init,
.write_packet = ff_raw_write_packet,
.write_trailer = write_trailer,
};


@@ -198,6 +198,9 @@ const FFOutputFormat ff_fits_muxer = {
.p.extensions = "fits",
.p.audio_codec = AV_CODEC_ID_NONE,
.p.video_codec = AV_CODEC_ID_FITS,
.p.subtitle_codec = AV_CODEC_ID_NONE,
.flags_internal = FF_OFMT_FLAG_MAX_ONE_OF_EACH |
FF_OFMT_FLAG_ONLY_DEFAULT_CODECS,
.priv_data_size = sizeof(FITSContext),
.write_header = fits_write_header,
.write_packet = fits_write_packet,


@@ -231,6 +231,9 @@ static void put_amf_string(AVIOContext *pb, const char *str)
{
size_t len = strlen(str);
avio_wb16(pb, len);
// Avoid avio_write() if put_amf_string(pb, "") is inlined.
if (av_builtin_constant_p(len == 0) && len == 0)
return;
avio_write(pb, str, len);
}


@@ -40,16 +40,8 @@ typedef struct GIFContext {
AVPacket *prev_pkt;
} GIFContext;
static int gif_write_header(AVFormatContext *s)
static av_cold int gif_init(AVFormatContext *s)
{
if (s->nb_streams != 1 ||
s->streams[0]->codecpar->codec_type != AVMEDIA_TYPE_VIDEO ||
s->streams[0]->codecpar->codec_id != AV_CODEC_ID_GIF) {
av_log(s, AV_LOG_ERROR,
"GIF muxer supports only a single video GIF stream.\n");
return AVERROR(EINVAL);
}
avpriv_set_pts_info(s->streams[0], 64, 1, 100);
return 0;
@@ -213,7 +205,10 @@ const FFOutputFormat ff_gif_muxer = {
.priv_data_size = sizeof(GIFContext),
.p.audio_codec = AV_CODEC_ID_NONE,
.p.video_codec = AV_CODEC_ID_GIF,
.write_header = gif_write_header,
.p.subtitle_codec = AV_CODEC_ID_NONE,
.flags_internal = FF_OFMT_FLAG_MAX_ONE_OF_EACH |
FF_OFMT_FLAG_ONLY_DEFAULT_CODECS,
.init = gif_init,
.write_packet = gif_write_packet,
.write_trailer = gif_write_trailer,
.p.priv_class = &gif_muxer_class,


@@ -137,7 +137,7 @@ static void gxf_write_padding(AVIOContext *pb, int64_t to_pad)
ffio_fill(pb, 0, to_pad);
}
static int64_t updatePacketSize(AVIOContext *pb, int64_t pos)
static int64_t update_packet_size(AVIOContext *pb, int64_t pos)
{
int64_t curpos;
int size;
@@ -154,7 +154,7 @@ static int64_t updatePacketSize(AVIOContext *pb, int64_t pos)
return curpos - pos;
}
static int64_t updateSize(AVIOContext *pb, int64_t pos)
static int64_t update_size(AVIOContext *pb, int64_t pos)
{
int64_t curpos;
@@ -300,7 +300,7 @@ static int gxf_write_track_description(AVFormatContext *s, GXFStreamContext *sc,
avio_w8(pb, 4);
avio_wb32(pb, sc->fields);
return updateSize(pb, pos);
return update_size(pb, pos);
}
static int gxf_write_material_data_section(AVFormatContext *s)
@@ -351,7 +351,7 @@ static int gxf_write_material_data_section(AVFormatContext *s)
avio_w8(pb, 4);
avio_wb32(pb, avio_size(pb) / 1024);
return updateSize(pb, pos);
return update_size(pb, pos);
}
static int gxf_write_track_description_section(AVFormatContext *s)
@@ -368,7 +368,7 @@ static int gxf_write_track_description_section(AVFormatContext *s)
gxf_write_track_description(s, &gxf->timecode_track, s->nb_streams);
return updateSize(pb, pos);
return update_size(pb, pos);
}
static int gxf_write_map_packet(AVFormatContext *s, int rewrite)
@@ -400,7 +400,7 @@ static int gxf_write_map_packet(AVFormatContext *s, int rewrite)
gxf_write_material_data_section(s);
gxf_write_track_description_section(s);
return updatePacketSize(pb, pos);
return update_packet_size(pb, pos);
}
static int gxf_write_flt_packet(AVFormatContext *s)
@@ -424,7 +424,7 @@ static int gxf_write_flt_packet(AVFormatContext *s)
ffio_fill(pb, 0, (1000 - i) * 4);
return updatePacketSize(pb, pos);
return update_packet_size(pb, pos);
}
static int gxf_write_umf_material_description(AVFormatContext *s)
@@ -643,7 +643,7 @@ static int gxf_write_umf_packet(AVFormatContext *s)
gxf->umf_track_size = gxf_write_umf_track_description(s);
gxf->umf_media_size = gxf_write_umf_media_description(s);
gxf->umf_length = avio_tell(pb) - gxf->umf_start_offset;
return updatePacketSize(pb, pos);
return update_packet_size(pb, pos);
}
static void gxf_init_timecode_track(GXFStreamContext *sc, GXFStreamContext *vsc)
@@ -692,7 +692,7 @@ static int gxf_write_header(AVFormatContext *s)
if (!(pb->seekable & AVIO_SEEKABLE_NORMAL)) {
av_log(s, AV_LOG_ERROR, "gxf muxer does not support streamed output, patch welcome\n");
return -1;
return AVERROR_PATCHWELCOME;
}
gxf->flags |= 0x00080000; /* material is simple clip */
@@ -707,15 +707,15 @@ static int gxf_write_header(AVFormatContext *s)
if (st->codecpar->codec_type == AVMEDIA_TYPE_AUDIO) {
if (st->codecpar->codec_id != AV_CODEC_ID_PCM_S16LE) {
av_log(s, AV_LOG_ERROR, "only 16 BIT PCM LE allowed for now\n");
return -1;
return AVERROR(EINVAL);
}
if (st->codecpar->sample_rate != 48000) {
av_log(s, AV_LOG_ERROR, "only 48000hz sampling rate is allowed\n");
return -1;
return AVERROR(EINVAL);
}
if (st->codecpar->ch_layout.nb_channels != 1) {
av_log(s, AV_LOG_ERROR, "only mono tracks are allowed\n");
return -1;
return AVERROR(EINVAL);
}
ret = ff_stream_add_bitstream_filter(st, "pcm_rechunk", "n="AV_STRINGIFY(GXF_SAMPLES_PER_FRAME));
if (ret < 0)
@@ -733,7 +733,7 @@ static int gxf_write_header(AVFormatContext *s)
} else if (st->codecpar->codec_type == AVMEDIA_TYPE_VIDEO) {
if (i != 0) {
av_log(s, AV_LOG_ERROR, "video stream must be the first track\n");
return -1;
return AVERROR(EINVAL);
}
/* FIXME check from time_base ? */
if (st->codecpar->height == 480 || st->codecpar->height == 512) { /* NTSC or NTSC+VBI */
@@ -750,7 +750,7 @@ static int gxf_write_header(AVFormatContext *s)
} else {
av_log(s, AV_LOG_ERROR, "unsupported video resolution, "
"gxf muxer only accepts PAL or NTSC resolutions currently\n");
return -1;
return AVERROR(EINVAL);
}
if (!tcr)
tcr = av_dict_get(st->metadata, "timecode", NULL, 0);
@@ -823,7 +823,7 @@ static int gxf_write_eos_packet(AVIOContext *pb)
int64_t pos = avio_tell(pb);
gxf_write_packet_header(pb, PKT_EOS);
return updatePacketSize(pb, pos);
return update_packet_size(pb, pos);
}
static int gxf_write_trailer(AVFormatContext *s)
@@ -956,7 +956,7 @@ static int gxf_write_packet(AVFormatContext *s, AVPacket *pkt)
gxf->nb_fields += 2; // count fields
}
updatePacketSize(pb, pos);
update_packet_size(pb, pos);
gxf->packet_count++;
if (gxf->packet_count == 100) {


@@ -3205,7 +3205,7 @@ const FFOutputFormat ff_hls_muxer = {
.p.flags = AVFMT_NOFILE | AVFMT_GLOBALHEADER | AVFMT_NODIMENSIONS,
#endif
.p.priv_class = &hls_class,
.flags_internal = FF_FMT_ALLOW_FLUSH,
.flags_internal = FF_OFMT_FLAG_ALLOW_FLUSH,
.priv_data_size = sizeof(HLSContext),
.init = hls_init,
.write_header = hls_write_header,


@@ -310,10 +310,8 @@ int ff_iamf_read_packet(AVFormatContext *s, IAMFDemuxContext *c,
c->recon_size = 0;
} else {
int64_t offset = avio_skip(pb, obu_size);
if (offset < 0) {
ret = offset;
break;
}
if (offset < 0)
return offset;
}
max_size -= len;
if (max_size < 0)


@@ -21,12 +21,7 @@
#include <stdint.h>
#include "libavutil/avassert.h"
#include "libavutil/common.h"
#include "libavutil/iamf.h"
#include "libavcodec/put_bits.h"
#include "avformat.h"
#include "avio_internal.h"
#include "iamf.h"
#include "iamf_writer.h"
#include "internal.h"
@@ -48,11 +43,6 @@ static int iamf_init(AVFormatContext *s)
int nb_audio_elements = 0, nb_mix_presentations = 0;
int ret;
if (!s->nb_streams) {
av_log(s, AV_LOG_ERROR, "There must be at least one stream\n");
return AVERROR(EINVAL);
}
for (int i = 0; i < s->nb_streams; i++) {
if (s->streams[i]->codecpar->codec_type != AVMEDIA_TYPE_AUDIO ||
(s->streams[i]->codecpar->codec_tag != MKTAG('m','p','4','a') &&
@@ -77,7 +67,7 @@ }
}
}
if (!s->nb_stream_groups) {
if (s->nb_stream_groups <= 1) {
av_log(s, AV_LOG_ERROR, "There must be at least two stream groups\n");
return AVERROR(EINVAL);
}


@@ -66,6 +66,9 @@ const FFOutputFormat ff_roq_muxer = {
.p.extensions = "roq",
.p.audio_codec = AV_CODEC_ID_ROQ_DPCM,
.p.video_codec = AV_CODEC_ID_ROQ,
.p.subtitle_codec = AV_CODEC_ID_NONE,
.flags_internal = FF_OFMT_FLAG_MAX_ONE_OF_EACH |
FF_OFMT_FLAG_ONLY_DEFAULT_CODECS,
.write_header = roq_write_header,
.write_packet = ff_raw_write_packet,
};


@@ -33,18 +33,7 @@ static const char mode30_header[] = "#!iLBC30\n";
static int ilbc_write_header(AVFormatContext *s)
{
AVIOContext *pb = s->pb;
AVCodecParameters *par;
if (s->nb_streams != 1) {
av_log(s, AV_LOG_ERROR, "Unsupported number of streams\n");
return AVERROR(EINVAL);
}
par = s->streams[0]->codecpar;
if (par->codec_id != AV_CODEC_ID_ILBC) {
av_log(s, AV_LOG_ERROR, "Unsupported codec\n");
return AVERROR(EINVAL);
}
AVCodecParameters *par = s->streams[0]->codecpar;
if (par->block_align == 50) {
avio_write(pb, mode30_header, sizeof(mode30_header) - 1);
@@ -127,8 +116,12 @@ const FFOutputFormat ff_ilbc_muxer = {
.p.long_name = NULL_IF_CONFIG_SMALL("iLBC storage"),
.p.mime_type = "audio/iLBC",
.p.extensions = "lbc",
.p.video_codec = AV_CODEC_ID_NONE,
.p.audio_codec = AV_CODEC_ID_ILBC,
.p.subtitle_codec = AV_CODEC_ID_NONE,
.p.flags = AVFMT_NOTIMESTAMPS,
.flags_internal = FF_OFMT_FLAG_MAX_ONE_OF_EACH |
FF_OFMT_FLAG_ONLY_DEFAULT_CODECS,
.write_header = ilbc_write_header,
.write_packet = ff_raw_write_packet,
};


@@ -32,11 +32,6 @@ static int ircam_write_header(AVFormatContext *s)
AVCodecParameters *par = s->streams[0]->codecpar;
uint32_t tag;
if (s->nb_streams != 1) {
av_log(s, AV_LOG_ERROR, "only one stream is supported\n");
return AVERROR(EINVAL);
}
tag = ff_codec_get_tag(ff_codec_ircam_le_tags, par->codec_id);
if (!tag) {
av_log(s, AV_LOG_ERROR, "unsupported codec\n");
@@ -57,6 +52,8 @@ const FFOutputFormat ff_ircam_muxer = {
.p.long_name = NULL_IF_CONFIG_SMALL("Berkeley/IRCAM/CARL Sound Format"),
.p.audio_codec = AV_CODEC_ID_PCM_S16LE,
.p.video_codec = AV_CODEC_ID_NONE,
.p.subtitle_codec = AV_CODEC_ID_NONE,
.flags_internal = FF_OFMT_FLAG_MAX_ONE_OF_EACH,
.write_header = ircam_write_header,
.write_packet = ff_raw_write_packet,
.p.codec_tag = (const AVCodecTag *const []){ ff_codec_ircam_le_tags, 0 },


@@ -29,15 +29,9 @@ typedef struct IVFEncContext {
static int ivf_init(AVFormatContext *s)
{
AVCodecParameters *par;
AVCodecParameters *par = s->streams[0]->codecpar;
if (s->nb_streams != 1) {
av_log(s, AV_LOG_ERROR, "Format supports only exactly one video stream\n");
return AVERROR(EINVAL);
}
par = s->streams[0]->codecpar;
if (par->codec_type != AVMEDIA_TYPE_VIDEO ||
!(par->codec_id == AV_CODEC_ID_AV1 ||
if (!(par->codec_id == AV_CODEC_ID_AV1 ||
par->codec_id == AV_CODEC_ID_VP8 ||
par->codec_id == AV_CODEC_ID_VP9)) {
av_log(s, AV_LOG_ERROR, "Currently only VP8, VP9 and AV1 are supported!\n");
@@ -125,7 +119,9 @@ const FFOutputFormat ff_ivf_muxer = {
.p.extensions = "ivf",
.p.audio_codec = AV_CODEC_ID_NONE,
.p.video_codec = AV_CODEC_ID_VP8,
.p.subtitle_codec = AV_CODEC_ID_NONE,
.p.codec_tag = (const AVCodecTag* const []){ codec_ivf_tags, 0 },
.flags_internal = FF_OFMT_FLAG_MAX_ONE_OF_EACH,
.priv_data_size = sizeof(IVFEncContext),
.init = ivf_init,
.write_header = ivf_write_header,


@@ -36,7 +36,11 @@ const FFOutputFormat ff_jacosub_muxer = {
.p.mime_type = "text/x-jacosub",
.p.extensions = "jss,js",
.p.flags = AVFMT_TS_NONSTRICT,
.p.video_codec = AV_CODEC_ID_NONE,
.p.audio_codec = AV_CODEC_ID_NONE,
.p.subtitle_codec = AV_CODEC_ID_JACOSUB,
.flags_internal = FF_OFMT_FLAG_MAX_ONE_OF_EACH |
FF_OFMT_FLAG_ONLY_DEFAULT_CODECS,
.write_header = jacosub_write_header,
.write_packet = ff_raw_write_packet,
};


@@ -171,6 +171,8 @@ static int jpegxl_anim_read_packet(AVFormatContext *s, AVPacket *pkt)
av_buffer_unref(&ctx->initial);
}
pkt->pos = avio_tell(pb) - offset;
ret = avio_read(pb, pkt->data + offset, size - offset);
if (ret < 0)
return ret;


@@ -129,20 +129,7 @@ const FFInputFormat ff_kvag_demuxer = {
#if CONFIG_KVAG_MUXER
static int kvag_write_init(AVFormatContext *s)
{
AVCodecParameters *par;
if (s->nb_streams != 1) {
av_log(s, AV_LOG_ERROR, "KVAG files have exactly one stream\n");
return AVERROR(EINVAL);
}
par = s->streams[0]->codecpar;
if (par->codec_id != AV_CODEC_ID_ADPCM_IMA_SSI) {
av_log(s, AV_LOG_ERROR, "%s codec not supported\n",
avcodec_get_name(par->codec_id));
return AVERROR(EINVAL);
}
AVCodecParameters *par = s->streams[0]->codecpar;
if (par->ch_layout.nb_channels > 2) {
av_log(s, AV_LOG_ERROR, "KVAG files only support up to 2 channels\n");
@@ -196,6 +183,9 @@ const FFOutputFormat ff_kvag_muxer = {
.p.extensions = "vag",
.p.audio_codec = AV_CODEC_ID_ADPCM_IMA_SSI,
.p.video_codec = AV_CODEC_ID_NONE,
.p.subtitle_codec = AV_CODEC_ID_NONE,
.flags_internal = FF_OFMT_FLAG_MAX_ONE_OF_EACH |
FF_OFMT_FLAG_ONLY_DEFAULT_CODECS,
.init = kvag_write_init,
.write_header = kvag_write_header,
.write_packet = ff_raw_write_packet,

View File

@ -268,6 +268,8 @@ const FFOutputFormat ff_latm_muxer = {
.priv_data_size = sizeof(LATMContext),
.p.audio_codec = AV_CODEC_ID_AAC,
.p.video_codec = AV_CODEC_ID_NONE,
.p.subtitle_codec = AV_CODEC_ID_NONE,
.flags_internal = FF_OFMT_FLAG_MAX_ONE_OF_EACH,
.write_header = latm_write_header,
.write_packet = latm_write_packet,
.p.priv_class = &latm_muxer_class,

View File

@ -37,12 +37,6 @@ static int lrc_write_header(AVFormatContext *s)
{
const AVDictionaryEntry *metadata_item;
if(s->nb_streams != 1 ||
s->streams[0]->codecpar->codec_type != AVMEDIA_TYPE_SUBTITLE) {
av_log(s, AV_LOG_ERROR,
"LRC supports only a single subtitle stream.\n");
return AVERROR(EINVAL);
}
if(s->streams[0]->codecpar->codec_id != AV_CODEC_ID_SUBRIP &&
s->streams[0]->codecpar->codec_id != AV_CODEC_ID_TEXT) {
av_log(s, AV_LOG_ERROR, "Unsupported subtitle codec: %s\n",
@ -131,7 +125,10 @@ const FFOutputFormat ff_lrc_muxer = {
.p.extensions = "lrc",
.p.flags = AVFMT_VARIABLE_FPS | AVFMT_GLOBALHEADER |
AVFMT_TS_NEGATIVE | AVFMT_TS_NONSTRICT,
.p.video_codec = AV_CODEC_ID_NONE,
.p.audio_codec = AV_CODEC_ID_NONE,
.p.subtitle_codec = AV_CODEC_ID_SUBRIP,
.flags_internal = FF_OFMT_FLAG_MAX_ONE_OF_EACH,
.priv_data_size = 0,
.write_header = lrc_write_header,
.write_packet = lrc_write_packet,

View File

@ -54,6 +54,7 @@
#include "libavcodec/bytestream.h"
#include "libavcodec/defs.h"
#include "libavcodec/flac.h"
#include "libavcodec/itut35.h"
#include "libavcodec/mpeg4audio.h"
#include "libavcodec/packet_internal.h"
@ -3884,7 +3885,8 @@ static int matroska_parse_block_additional(MatroskaDemuxContext *matroska,
country_code = bytestream2_get_byteu(&bc);
provider_code = bytestream2_get_be16u(&bc);
if (country_code != 0xB5 || provider_code != 0x3C)
if (country_code != ITU_T_T35_COUNTRY_CODE_US ||
provider_code != ITU_T_T35_PROVIDER_CODE_SMTPE)
break; // ignore
provider_oriented_code = bytestream2_get_be16u(&bc);

View File

@ -63,6 +63,7 @@
#include "libavcodec/codec_desc.h"
#include "libavcodec/codec_par.h"
#include "libavcodec/defs.h"
#include "libavcodec/itut35.h"
#include "libavcodec/xiph.h"
#include "libavcodec/mpeg4audio.h"
@ -2824,8 +2825,8 @@ static int mkv_write_block(void *logctx, MatroskaMuxContext *mkv,
uint8_t *payload = t35_buf;
size_t payload_size = sizeof(t35_buf) - 6;
bytestream_put_byte(&payload, 0xB5); // country_code
bytestream_put_be16(&payload, 0x3C); // provider_code
bytestream_put_byte(&payload, ITU_T_T35_COUNTRY_CODE_US);
bytestream_put_be16(&payload, ITU_T_T35_PROVIDER_CODE_SMTPE);
bytestream_put_be16(&payload, 0x01); // provider_oriented_code
bytestream_put_byte(&payload, 0x04); // application_identifier
@ -3568,7 +3569,7 @@ const FFOutputFormat ff_matroska_muxer = {
.query_codec = mkv_query_codec,
.check_bitstream = mkv_check_bitstream,
.p.priv_class = &matroska_webm_class,
.flags_internal = FF_FMT_ALLOW_FLUSH,
.flags_internal = FF_OFMT_FLAG_ALLOW_FLUSH,
};
#endif
@ -3605,7 +3606,7 @@ const FFOutputFormat ff_webm_muxer = {
AVFMT_TS_NONSTRICT,
#endif
.p.priv_class = &matroska_webm_class,
.flags_internal = FF_FMT_ALLOW_FLUSH,
.flags_internal = FF_OFMT_FLAG_ALLOW_FLUSH,
};
#endif
@ -3635,6 +3636,6 @@ const FFOutputFormat ff_matroska_audio_muxer = {
ff_codec_wav_tags, additional_audio_tags, 0
},
.p.priv_class = &matroska_webm_class,
.flags_internal = FF_FMT_ALLOW_FLUSH,
.flags_internal = FF_OFMT_FLAG_ALLOW_FLUSH,
};
#endif

View File

@ -29,11 +29,6 @@ static int microdvd_write_header(struct AVFormatContext *s)
AVCodecParameters *par = s->streams[0]->codecpar;
AVRational framerate = s->streams[0]->avg_frame_rate;
if (s->nb_streams != 1 || par->codec_id != AV_CODEC_ID_MICRODVD) {
av_log(s, AV_LOG_ERROR, "Exactly one MicroDVD stream is needed.\n");
return -1;
}
if (par->extradata && par->extradata_size > 0) {
avio_write(s->pb, "{DEFAULT}{}", 11);
avio_write(s->pb, par->extradata, par->extradata_size);
@ -62,7 +57,11 @@ const FFOutputFormat ff_microdvd_muxer = {
.p.mime_type = "text/x-microdvd",
.p.extensions = "sub",
.p.flags = AVFMT_NOTIMESTAMPS,
.p.video_codec = AV_CODEC_ID_NONE,
.p.audio_codec = AV_CODEC_ID_NONE,
.p.subtitle_codec = AV_CODEC_ID_MICRODVD,
.flags_internal = FF_OFMT_FLAG_MAX_ONE_OF_EACH |
FF_OFMT_FLAG_ONLY_DEFAULT_CODECS,
.write_header = microdvd_write_header,
.write_packet = microdvd_write_packet,
};

View File

@ -319,6 +319,9 @@ const FFOutputFormat ff_mmf_muxer = {
.priv_data_size = sizeof(MMFContext),
.p.audio_codec = AV_CODEC_ID_ADPCM_YAMAHA,
.p.video_codec = AV_CODEC_ID_NONE,
.p.subtitle_codec = AV_CODEC_ID_NONE,
.flags_internal = FF_OFMT_FLAG_MAX_ONE_OF_EACH |
FF_OFMT_FLAG_ONLY_DEFAULT_CODECS,
.write_header = mmf_write_header,
.write_packet = ff_raw_write_packet,
.write_trailer = mmf_write_trailer,

View File

@ -8038,6 +8038,7 @@ static int mov_check_bitstream(AVFormatContext *s, AVStream *st,
return ret;
}
#if CONFIG_AVIF_MUXER
static int avif_write_trailer(AVFormatContext *s)
{
AVIOContext *pb = s->pb;
@ -8093,6 +8094,7 @@ static int avif_write_trailer(AVFormatContext *s)
return 0;
}
#endif
#if CONFIG_TGP_MUXER || CONFIG_TG2_MUXER
static const AVCodecTag codec_3gp_tags[] = {
@ -8239,7 +8241,7 @@ const FFOutputFormat ff_mov_muxer = {
},
.check_bitstream = mov_check_bitstream,
.p.priv_class = &mov_isobmff_muxer_class,
.flags_internal = FF_FMT_ALLOW_FLUSH,
.flags_internal = FF_OFMT_FLAG_ALLOW_FLUSH,
};
#endif
#if CONFIG_TGP_MUXER
@ -8263,7 +8265,7 @@ const FFOutputFormat ff_tgp_muxer = {
.p.codec_tag = codec_3gp_tags_list,
.check_bitstream = mov_check_bitstream,
.p.priv_class = &mov_isobmff_muxer_class,
.flags_internal = FF_FMT_ALLOW_FLUSH,
.flags_internal = FF_OFMT_FLAG_ALLOW_FLUSH,
};
#endif
#if CONFIG_MP4_MUXER
@ -8289,7 +8291,7 @@ const FFOutputFormat ff_mp4_muxer = {
.p.codec_tag = mp4_codec_tags_list,
.check_bitstream = mov_check_bitstream,
.p.priv_class = &mov_isobmff_muxer_class,
.flags_internal = FF_FMT_ALLOW_FLUSH,
.flags_internal = FF_OFMT_FLAG_ALLOW_FLUSH,
};
#endif
#if CONFIG_PSP_MUXER
@ -8314,7 +8316,7 @@ const FFOutputFormat ff_psp_muxer = {
.p.codec_tag = mp4_codec_tags_list,
.check_bitstream = mov_check_bitstream,
.p.priv_class = &mov_isobmff_muxer_class,
.flags_internal = FF_FMT_ALLOW_FLUSH,
.flags_internal = FF_OFMT_FLAG_ALLOW_FLUSH,
};
#endif
#if CONFIG_TG2_MUXER
@ -8338,7 +8340,7 @@ const FFOutputFormat ff_tg2_muxer = {
.p.codec_tag = codec_3gp_tags_list,
.check_bitstream = mov_check_bitstream,
.p.priv_class = &mov_isobmff_muxer_class,
.flags_internal = FF_FMT_ALLOW_FLUSH,
.flags_internal = FF_OFMT_FLAG_ALLOW_FLUSH,
};
#endif
#if CONFIG_IPOD_MUXER
@ -8363,7 +8365,7 @@ const FFOutputFormat ff_ipod_muxer = {
.p.codec_tag = (const AVCodecTag* const []){ codec_ipod_tags, 0 },
.check_bitstream = mov_check_bitstream,
.p.priv_class = &mov_isobmff_muxer_class,
.flags_internal = FF_FMT_ALLOW_FLUSH,
.flags_internal = FF_OFMT_FLAG_ALLOW_FLUSH,
};
#endif
#if CONFIG_ISMV_MUXER
@ -8389,7 +8391,7 @@ const FFOutputFormat ff_ismv_muxer = {
codec_mp4_tags, codec_ism_tags, 0 },
.check_bitstream = mov_check_bitstream,
.p.priv_class = &mov_isobmff_muxer_class,
.flags_internal = FF_FMT_ALLOW_FLUSH,
.flags_internal = FF_OFMT_FLAG_ALLOW_FLUSH,
};
#endif
#if CONFIG_F4V_MUXER
@ -8414,7 +8416,7 @@ const FFOutputFormat ff_f4v_muxer = {
.p.codec_tag = (const AVCodecTag* const []){ codec_f4v_tags, 0 },
.check_bitstream = mov_check_bitstream,
.p.priv_class = &mov_isobmff_muxer_class,
.flags_internal = FF_FMT_ALLOW_FLUSH,
.flags_internal = FF_OFMT_FLAG_ALLOW_FLUSH,
};
#endif
#if CONFIG_AVIF_MUXER
@ -8437,6 +8439,6 @@ const FFOutputFormat ff_avif_muxer = {
#endif
.p.codec_tag = codec_avif_tags_list,
.p.priv_class = &mov_avif_muxer_class,
.flags_internal = FF_FMT_ALLOW_FLUSH,
.flags_internal = FF_OFMT_FLAG_ALLOW_FLUSH,
};
#endif

View File

@ -495,12 +495,16 @@ static int mp3_write_trailer(struct AVFormatContext *s)
static int query_codec(enum AVCodecID id, int std_compliance)
{
const CodecMime *cm= ff_id3v2_mime_tags;
if (id == AV_CODEC_ID_MP3)
return 1;
while(cm->id != AV_CODEC_ID_NONE) {
if(id == cm->id)
return MKTAG('A', 'P', 'I', 'C');
cm++;
}
return -1;
return 0;
}
static const AVOption options[] = {

View File

@ -2415,6 +2415,6 @@ const FFOutputFormat ff_mpegts_muxer = {
#else
.p.flags = AVFMT_VARIABLE_FPS | AVFMT_NODIMENSIONS,
#endif
.flags_internal = FF_FMT_ALLOW_FLUSH,
.flags_internal = FF_OFMT_FLAG_ALLOW_FLUSH,
.p.priv_class = &mpegts_muxer_class,
};

View File

@ -70,6 +70,8 @@ const FFOutputFormat ff_mpjpeg_muxer = {
.priv_data_size = sizeof(MPJPEGContext),
.p.audio_codec = AV_CODEC_ID_NONE,
.p.video_codec = AV_CODEC_ID_MJPEG,
.p.subtitle_codec = AV_CODEC_ID_NONE,
.flags_internal = FF_OFMT_FLAG_ONLY_DEFAULT_CODECS,
.write_header = mpjpeg_write_header,
.write_packet = mpjpeg_write_packet,
.p.flags = AVFMT_NOTIMESTAMPS,

View File

@ -188,6 +188,12 @@ static int init_muxer(AVFormatContext *s, AVDictionary **options)
AVDictionary *tmp = NULL;
const FFOutputFormat *of = ffofmt(s->oformat);
AVDictionaryEntry *e;
static const unsigned default_codec_offsets[] = {
[AVMEDIA_TYPE_VIDEO] = offsetof(AVOutputFormat, video_codec),
[AVMEDIA_TYPE_AUDIO] = offsetof(AVOutputFormat, audio_codec),
[AVMEDIA_TYPE_SUBTITLE] = offsetof(AVOutputFormat, subtitle_codec),
};
unsigned nb_type[FF_ARRAY_ELEMS(default_codec_offsets)] = { 0 };
int ret = 0;
if (options)
@ -262,6 +268,30 @@ static int init_muxer(AVFormatContext *s, AVDictionary **options)
}
break;
}
if (of->flags_internal & (FF_OFMT_FLAG_MAX_ONE_OF_EACH | FF_OFMT_FLAG_ONLY_DEFAULT_CODECS)) {
enum AVCodecID default_codec_id = AV_CODEC_ID_NONE;
unsigned nb;
if ((unsigned)par->codec_type < FF_ARRAY_ELEMS(default_codec_offsets)) {
nb = ++nb_type[par->codec_type];
if (default_codec_offsets[par->codec_type])
default_codec_id = *(const enum AVCodecID*)((const char*)of + default_codec_offsets[par->codec_type]);
}
if (of->flags_internal & FF_OFMT_FLAG_ONLY_DEFAULT_CODECS &&
default_codec_id != AV_CODEC_ID_NONE && par->codec_id != default_codec_id) {
av_log(s, AV_LOG_ERROR, "%s muxer supports only codec %s for type %s\n",
of->p.name, avcodec_get_name(default_codec_id), av_get_media_type_string(par->codec_type));
ret = AVERROR(EINVAL);
goto fail;
} else if (default_codec_id == AV_CODEC_ID_NONE ||
(of->flags_internal & FF_OFMT_FLAG_MAX_ONE_OF_EACH && nb > 1)) {
const char *type = av_get_media_type_string(par->codec_type);
av_log(s, AV_LOG_ERROR, "%s muxer does not support %s stream of type %s\n",
of->p.name, default_codec_id == AV_CODEC_ID_NONE ? "any" : "more than one",
type ? type : "unknown");
ret = AVERROR(EINVAL);
goto fail;
}
}
#if FF_API_AVSTREAM_SIDE_DATA
FF_DISABLE_DEPRECATION_WARNINGS
@ -1208,10 +1238,10 @@ int av_write_frame(AVFormatContext *s, AVPacket *in)
if (!in) {
#if FF_API_ALLOW_FLUSH || LIBAVFORMAT_VERSION_MAJOR >= 61
// Hint: The pulse audio output device has this set,
// so we can't switch the check to FF_FMT_ALLOW_FLUSH immediately.
// so we can't switch the check to FF_OFMT_FLAG_ALLOW_FLUSH immediately.
if (s->oformat->flags & AVFMT_ALLOW_FLUSH) {
#else
if (ffofmt(s->oformat)->flags_internal & FF_FMT_ALLOW_FLUSH) {
if (ffofmt(s->oformat)->flags_internal & FF_OFMT_FLAG_ALLOW_FLUSH) {
#endif
ret = ffofmt(s->oformat)->write_packet(s, NULL);
flush_if_needed(s);

View File

@ -27,7 +27,36 @@
struct AVDeviceInfoList;
#define FF_FMT_ALLOW_FLUSH (1 << 1)
/**
* This flag indicates that the muxer stores data internally
* and supports flushing it. Flushing is signalled by sending
* a NULL packet to the muxer's write_packet callback;
* without this flag, a muxer never receives NULL packets.
* So the documentation of write_packet below for the semantics
* of the return value in case of flushing.
*/
#define FF_OFMT_FLAG_ALLOW_FLUSH (1 << 1)
/**
* If this flag is set, it indicates that for each codec type
* whose corresponding default codec (i.e. AVOutputFormat.audio_codec,
* AVOutputFormat.video_codec and AVOutputFormat.subtitle_codec)
* is set (i.e. != AV_CODEC_ID_NONE) only one stream of this type
* can be muxed. It furthermore indicates that no stream with
* a codec type that has no default codec or whose default codec
* is AV_CODEC_ID_NONE can be muxed.
* Both of these restrictions are checked generically before
* the actual muxer's init/write_header callbacks.
*/
#define FF_OFMT_FLAG_MAX_ONE_OF_EACH (1 << 2)
/**
* If this flag is set, then the only permitted audio/video/subtitle
* codec ids are AVOutputFormat.audio/video/subtitle_codec;
* if any of the latter is unset (i.e. equal to AV_CODEC_ID_NONE),
* then no stream of the corresponding type is supported.
* In addition, codec types without default codec field
* are disallowed.
*/
#define FF_OFMT_FLAG_ONLY_DEFAULT_CODECS (1 << 3)
typedef struct FFOutputFormat {
/**
@ -40,13 +69,13 @@ typedef struct FFOutputFormat {
int priv_data_size;
/**
* Internal flags. See FF_FMT_* in internal.h and mux.h.
* Internal flags. See FF_OFMT_FLAG_* above and FF_FMT_FLAG_* in internal.h.
*/
int flags_internal;
int (*write_header)(AVFormatContext *);
/**
* Write a packet. If FF_FMT_ALLOW_FLUSH is set in flags_internal,
* Write a packet. If FF_OFMT_FLAG_ALLOW_FLUSH is set in flags_internal,
* pkt can be NULL in order to flush data buffered in the muxer.
* When flushing, return 0 if there still is more data to flush,
* or 1 if everything was flushed and there is no more buffered

View File

@ -39,10 +39,32 @@ int avformat_query_codec(const AVOutputFormat *ofmt, enum AVCodecID codec_id,
return ffofmt(ofmt)->query_codec(codec_id, std_compliance);
else if (ofmt->codec_tag)
return !!av_codec_get_tag2(ofmt->codec_tag, codec_id, &codec_tag);
else if (codec_id == ofmt->video_codec ||
codec_id == ofmt->audio_codec ||
codec_id == ofmt->subtitle_codec)
else if (codec_id != AV_CODEC_ID_NONE &&
(codec_id == ofmt->video_codec ||
codec_id == ofmt->audio_codec ||
codec_id == ofmt->subtitle_codec))
return 1;
else if (ffofmt(ofmt)->flags_internal & FF_OFMT_FLAG_ONLY_DEFAULT_CODECS)
return 0;
else if (ffofmt(ofmt)->flags_internal & FF_OFMT_FLAG_MAX_ONE_OF_EACH) {
enum AVMediaType type = avcodec_get_type(codec_id);
switch (type) {
case AVMEDIA_TYPE_AUDIO:
if (ofmt->audio_codec == AV_CODEC_ID_NONE)
return 0;
break;
case AVMEDIA_TYPE_VIDEO:
if (ofmt->video_codec == AV_CODEC_ID_NONE)
return 0;
break;
case AVMEDIA_TYPE_SUBTITLE:
if (ofmt->subtitle_codec == AV_CODEC_ID_NONE)
return 0;
break;
default:
return 0;
}
}
}
return AVERROR_PATCHWELCOME;
}

View File

@ -777,7 +777,7 @@ const FFOutputFormat ff_ogg_muxer = {
.p.flags = AVFMT_TS_NEGATIVE | AVFMT_TS_NONSTRICT,
#endif
.p.priv_class = &ogg_muxer_class,
.flags_internal = FF_FMT_ALLOW_FLUSH,
.flags_internal = FF_OFMT_FLAG_ALLOW_FLUSH,
};
#endif
@ -800,7 +800,7 @@ const FFOutputFormat ff_oga_muxer = {
.p.flags = AVFMT_TS_NEGATIVE,
#endif
.p.priv_class = &ogg_muxer_class,
.flags_internal = FF_FMT_ALLOW_FLUSH,
.flags_internal = FF_OFMT_FLAG_ALLOW_FLUSH,
};
#endif
@ -826,7 +826,7 @@ const FFOutputFormat ff_ogv_muxer = {
.p.flags = AVFMT_TS_NEGATIVE | AVFMT_TS_NONSTRICT,
#endif
.p.priv_class = &ogg_muxer_class,
.flags_internal = FF_FMT_ALLOW_FLUSH,
.flags_internal = FF_OFMT_FLAG_ALLOW_FLUSH,
};
#endif
@ -849,7 +849,7 @@ const FFOutputFormat ff_spx_muxer = {
.p.flags = AVFMT_TS_NEGATIVE,
#endif
.p.priv_class = &ogg_muxer_class,
.flags_internal = FF_FMT_ALLOW_FLUSH,
.flags_internal = FF_OFMT_FLAG_ALLOW_FLUSH,
};
#endif
@ -872,6 +872,6 @@ const FFOutputFormat ff_opus_muxer = {
.p.flags = AVFMT_TS_NEGATIVE,
#endif
.p.priv_class = &ogg_muxer_class,
.flags_internal = FF_FMT_ALLOW_FLUSH,
.flags_internal = FF_OFMT_FLAG_ALLOW_FLUSH,
};
#endif

Some files were not shown because too many files have changed in this diff Show More