1 @chapter Filtering Introduction
2 @c man begin FILTERING INTRODUCTION
4 Filtering in FFmpeg is enabled through the libavfilter library.
Libavfilter is the filtering API of FFmpeg. It is the replacement for
the now deprecated 'vhooks' and started as a Google Summer of Code
project.
10 Audio filtering integration into the main FFmpeg repository is a work in
progress, so the audio API and ABI should not be considered stable yet.
In libavfilter, it is possible for filters to have multiple inputs and
multiple outputs.
15 To illustrate the sorts of things that are possible, we can
16 use a complex filter graph. For example, the following one:
input --> split --> fifo -----------------------> overlay --> output
            |                                        ^
            |                                        |
            +------> fifo --> crop --> vflip --------+
splits the stream into two streams, sends one stream through the crop filter
26 and the vflip filter before merging it back with the other stream by
27 overlaying it on top. You can use the following command to achieve this:
30 ffmpeg -i input -vf "[in] split [T1], fifo, [T2] overlay=0:H/2 [out]; [T1] fifo, crop=iw:ih/2:0:ih/2, vflip [T2]" output
The result will be that in output the top half of the video is mirrored
onto the bottom half of the output video.
36 Filters are loaded using the @var{-vf} or @var{-af} option passed to
37 @command{ffmpeg} or to @command{ffplay}. Filters in the same linear
38 chain are separated by commas. In our example, @var{split, fifo,
39 overlay} are in one linear chain, and @var{fifo, crop, vflip} are in
40 another. The points where the linear chains join are labeled by names
41 enclosed in square brackets. In our example, that is @var{[T1]} and
42 @var{[T2]}. The special labels @var{[in]} and @var{[out]} are the points
43 where video is input and output.
Some filters take a list of parameters as input: they are specified
after the filter name and an equal sign, and are separated from each other
by a colon.
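For instance, the @code{crop} filter (also used in the example above and
described later in this document) takes its output width, height and
position as colon-separated parameters:
@example
crop=640:480:0:0
@end example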
There exist so-called @var{source filters} that do not have an
audio/video input, and @var{sink filters} that do not have an audio/video
output.
53 @c man end FILTERING INTRODUCTION
56 @c man begin GRAPH2DOT
58 The @file{graph2dot} program included in the FFmpeg @file{tools}
59 directory can be used to parse a filter graph description and issue a
60 corresponding textual representation in the dot language.
Invoke @command{graph2dot -h} to see how to use @file{graph2dot}.
You can then pass the dot description to the @file{dot} program (from
the graphviz suite of programs) and obtain a graphical representation
of the filtergraph.
For example, the sequence of commands:
echo @var{GRAPH_DESCRIPTION} | \
tools/graph2dot -o graph.tmp && \
dot -Tpng graph.tmp -o graph.png && \
display graph.png
81 can be used to create and display an image representing the graph
82 described by the @var{GRAPH_DESCRIPTION} string.
86 @chapter Filtergraph description
87 @c man begin FILTERGRAPH DESCRIPTION
89 A filtergraph is a directed graph of connected filters. It can contain
90 cycles, and there can be multiple links between a pair of
91 filters. Each link has one input pad on one side connecting it to one
92 filter from which it takes its input, and one output pad on the other
93 side connecting it to the one filter accepting its output.
95 Each filter in a filtergraph is an instance of a filter class
96 registered in the application, which defines the features and the
97 number of input and output pads of the filter.
99 A filter with no input pads is called a "source", a filter with no
100 output pads is called a "sink".
102 @anchor{Filtergraph syntax}
103 @section Filtergraph syntax
105 A filtergraph can be represented using a textual representation, which is
106 recognized by the @option{-filter}/@option{-vf} and @option{-filter_complex}
107 options in @command{ffmpeg} and @option{-vf} in @command{ffplay}, and by the
@code{avfilter_graph_parse()}/@code{avfilter_graph_parse2()} functions defined in
109 @file{libavfilter/avfiltergraph.h}.
111 A filterchain consists of a sequence of connected filters, each one
112 connected to the previous one in the sequence. A filterchain is
113 represented by a list of ","-separated filter descriptions.
A filtergraph consists of a sequence of filterchains. A sequence of
filterchains is represented by a list of ";"-separated filterchain
descriptions.
119 A filter is represented by a string of the form:
120 [@var{in_link_1}]...[@var{in_link_N}]@var{filter_name}=@var{arguments}[@var{out_link_1}]...[@var{out_link_M}]
@var{filter_name} is the name of the filter class of which the
described filter is an instance, and has to be the name of one of
the filter classes registered in the program.
The name of the filter class is optionally followed by a string
"=@var{arguments}".
@var{arguments} is a string which contains the parameters used to
initialize the filter instance; they are described in the filter
descriptions below.
132 The list of arguments can be quoted using the character "'" as initial
133 and ending mark, and the character '\' for escaping the characters
134 within the quoted text; otherwise the argument string is considered
135 terminated when the next special character (belonging to the set
136 "[]=;,") is encountered.
138 The name and arguments of the filter are optionally preceded and
139 followed by a list of link labels.
A link label allows one to name a link and associate it with a filter output
or input pad. The preceding labels @var{in_link_1}
... @var{in_link_N} are associated with the filter input pads,
the following labels @var{out_link_1} ... @var{out_link_M} are
associated with the output pads.
When two link labels with the same name are found in the
filtergraph, a link between the corresponding input and output pad is
created.
150 If an output pad is not labelled, it is linked by default to the first
151 unlabelled input pad of the next filter in the filterchain.
152 For example in the filterchain:
154 nullsrc, split[L1], [L2]overlay, nullsink
156 the split filter instance has two output pads, and the overlay filter
157 instance two input pads. The first output pad of split is labelled
158 "L1", the first input pad of overlay is labelled "L2", and the second
159 output pad of split is linked to the second input pad of overlay,
160 which are both unlabelled.
162 In a complete filterchain all the unlabelled filter input and output
163 pads must be connected. A filtergraph is considered valid if all the
164 filter input and output pads of all the filterchains are connected.
166 Libavfilter will automatically insert scale filters where format
167 conversion is required. It is possible to specify swscale flags
168 for those automatically inserted scalers by prepending
169 @code{sws_flags=@var{flags};}
170 to the filtergraph description.
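For example, a sketch that asks the automatically inserted scalers to use
bicubic scaling (@code{bicubic} being one of the usual swscale flag names):
@example
sws_flags=bicubic; [in]format=yuv420p[out]
@end example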
A BNF description of the filtergraph syntax follows:
174 @var{NAME} ::= sequence of alphanumeric characters and '_'
175 @var{LINKLABEL} ::= "[" @var{NAME} "]"
176 @var{LINKLABELS} ::= @var{LINKLABEL} [@var{LINKLABELS}]
@var{FILTER_ARGUMENTS} ::= sequence of chars (possibly quoted)
@var{FILTER} ::= [@var{LINKLABELS}] @var{NAME} ["=" @var{FILTER_ARGUMENTS}] [@var{LINKLABELS}]
179 @var{FILTERCHAIN} ::= @var{FILTER} [,@var{FILTERCHAIN}]
180 @var{FILTERGRAPH} ::= [sws_flags=@var{flags};] @var{FILTERCHAIN} [;@var{FILTERGRAPH}]
183 @c man end FILTERGRAPH DESCRIPTION
185 @chapter Audio Filters
186 @c man begin AUDIO FILTERS
188 When you configure your FFmpeg build, you can disable any of the
189 existing filters using @code{--disable-filters}.
The configure output will show the audio filters included in your build.
193 Below is a description of the currently available audio filters.
197 Convert the input audio format to the specified formats.
199 The filter accepts a string of the form:
200 "@var{sample_format}:@var{channel_layout}".
202 @var{sample_format} specifies the sample format, and can be a string or the
203 corresponding numeric value defined in @file{libavutil/samplefmt.h}. Use 'p'
204 suffix for a planar sample format.
206 @var{channel_layout} specifies the channel layout, and can be a string
207 or the corresponding number value defined in @file{libavutil/audioconvert.h}.
The special parameter "auto" signifies that the filter will
210 automatically select the output format depending on the output filter.
212 Some examples follow.
216 Convert input to float, planar, stereo:
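@example
aconvert=fltp:stereo
@end example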
Convert input to unsigned 8-bit, automatically select output channel layout:
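@example
aconvert=u8:auto
@end example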
230 Convert the input audio to one of the specified formats. The framework will
231 negotiate the most appropriate format to minimize conversions.
233 The filter accepts the following named parameters:
237 A comma-separated list of requested sample formats.
240 A comma-separated list of requested sample rates.
242 @item channel_layouts
243 A comma-separated list of requested channel layouts.
247 If a parameter is omitted, all values are allowed.
249 For example to force the output to either unsigned 8-bit or signed 16-bit stereo:
251 aformat=sample_fmts\=u8\,s16:channel_layouts\=stereo
256 Merge two or more audio streams into a single multi-channel stream.
258 The filter accepts the following named options:
263 Set the number of inputs. Default is 2.
267 If the channel layouts of the inputs are disjoint, and therefore compatible,
268 the channel layout of the output will be set accordingly and the channels
269 will be reordered as necessary. If the channel layouts of the inputs are not
270 disjoint, the output will have all the channels of the first input then all
271 the channels of the second input, in that order, and the channel layout of
the output will be the default value corresponding to the total number of
channels.
275 For example, if the first input is in 2.1 (FL+FR+LF) and the second input
276 is FC+BL+BR, then the output will be in 5.1, with the channels in the
277 following order: a1, a2, b1, a3, b2, b3 (a1 is the first channel of the
278 first input, b1 is the first channel of the second input).
280 On the other hand, if both input are in stereo, the output channels will be
281 in the default order: a1, a2, b1, b2, and the channel layout will be
282 arbitrarily set to 4.0, which may or may not be the expected value.
All inputs must have the same sample rate and format.
If inputs do not have the same duration, the output will stop with the
shortest.
289 Example: merge two mono files into a stereo stream:
291 amovie=left.wav [l] ; amovie=right.mp3 [r] ; [l] [r] amerge
294 Example: multiple merges:
ffmpeg -f lavfi -i "
amovie=input.mkv:si=0 [a0];
298 amovie=input.mkv:si=1 [a1];
299 amovie=input.mkv:si=2 [a2];
300 amovie=input.mkv:si=3 [a3];
301 amovie=input.mkv:si=4 [a4];
302 amovie=input.mkv:si=5 [a5];
303 [a0][a1][a2][a3][a4][a5] amerge=inputs=6" -c:a pcm_s16le output.mkv
308 Mixes multiple audio inputs into a single output.
312 ffmpeg -i INPUT1 -i INPUT2 -i INPUT3 -filter_complex amix=inputs=3:duration=first:dropout_transition=3 OUTPUT
314 will mix 3 input audio streams to a single output with the same duration as the
315 first input and a dropout transition time of 3 seconds.
317 The filter accepts the following named parameters:
321 Number of inputs. If unspecified, it defaults to 2.
324 How to determine the end-of-stream.
328 Duration of longest input. (default)
331 Duration of shortest input.
334 Duration of first input.
338 @item dropout_transition
339 Transition time, in seconds, for volume renormalization when an input
340 stream ends. The default value is 2 seconds.
346 Pass the audio source unchanged to the output.
350 Resample the input audio to the specified sample rate.
352 The filter accepts exactly one parameter, the output sample rate. If not
353 specified then the filter will automatically convert between its input
354 and output sample rates.
356 For example, to resample the input audio to 44100Hz:
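@example
aresample=44100
@end example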
361 @section asetnsamples
Set the number of samples per output audio frame.
The last output packet may contain a different number of samples, as
the filter will flush all the remaining samples when the input audio
signals its end.
The filter accepts parameters as a list of @var{key}=@var{value} pairs,
separated by ":".
374 @item nb_out_samples, n
Set the number of samples per output audio frame. The number is
intended as the number of samples @emph{per channel}.
377 Default value is 1024.
380 If set to 1, the filter will pad the last audio frame with zeroes, so
381 that the last frame will contain the same number of samples as the
382 previous ones. Default value is 1.
385 For example, to set the number of per-frame samples to 1234 and
386 disable padding for the last frame, use:
388 asetnsamples=n=1234:p=0
393 Show a line containing various information for each input audio frame.
394 The input audio is not modified.
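A simple way to inspect a stream is to decode it to the null muxer (a
sketch; the input file name is hypothetical):
@example
ffmpeg -i input.wav -af ashowinfo -f null -
@end example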
396 The shown line contains a sequence of key/value pairs of the form
397 @var{key}:@var{value}.
399 A description of each shown parameter follows:
403 sequential number of the input frame, starting from 0
406 presentation TimeStamp of the input frame, expressed as a number of
407 time base units. The time base unit depends on the filter input pad, and
408 is usually 1/@var{sample_rate}.
presentation TimeStamp of the input frame, expressed as a number of
seconds.
position of the frame in the input stream, -1 if this information is
unavailable and/or meaningless (for example in case of synthetic audio)
422 channel layout description
425 number of samples (per each channel) contained in the filtered frame
428 sample rate for the audio frame
431 Adler-32 checksum (printed in hexadecimal) of all the planes of the input frame
434 Adler-32 checksum (printed in hexadecimal) for each input frame plane,
expressed in the form "[@var{c0} @var{c1} @var{c2} @var{c3} @var{c4} @var{c5}
@var{c6} @var{c7}]"
441 Split input audio into several identical outputs.
443 The filter accepts a single parameter which specifies the number of outputs. If
444 unspecified, it defaults to 2.
448 [in] asplit [out0][out1]
451 will create two separate outputs from the same input.
To create 3 or more outputs, you need to specify the number of
outputs, as in:
456 [in] asplit=3 [out0][out1][out2]
460 ffmpeg -i INPUT -filter_complex asplit=5 OUTPUT
462 will create 5 copies of the input audio.
467 Forward two audio streams and control the order the buffers are forwarded.
469 The argument to the filter is an expression deciding which stream should be
470 forwarded next: if the result is negative, the first stream is forwarded; if
471 the result is positive or zero, the second stream is forwarded. It can use
472 the following variables:
476 number of buffers forwarded so far on each stream
478 number of samples forwarded so far on each stream
480 current timestamp of each stream
483 The default value is @code{t1-t2}, which means to always forward the stream
484 that has a smaller timestamp.
486 Example: stress-test @code{amerge} by randomly sending buffers on the wrong
487 input, while avoiding too much of a desynchronization:
489 amovie=file.ogg [a] ; amovie=file.mp3 [b] ;
[a] [b] astreamsync=(2*random(1))-1+tanh(5*(t1-t2)) [a2] [b2] ;
[a2] [b2] amerge
498 The filter accepts exactly one parameter, the audio tempo. If not
499 specified then the filter will assume nominal 1.0 tempo. Tempo must
500 be in the [0.5, 2.0] range.
502 For example, to slow down audio to 80% tempo:
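@example
atempo=0.8
@end example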
507 For example, to speed up audio to 125% tempo:
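@example
atempo=1.25
@end example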
514 Make audio easier to listen to on headphones.
516 This filter adds `cues' to 44.1kHz stereo (i.e. audio CD format) audio
517 so that when listened to on headphones the stereo image is moved from
518 inside your head (standard for headphones) to outside and in front of
519 the listener (standard for speakers).
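A minimal usage sketch, assuming a 44.1kHz stereo source (the file names
are hypothetical):
@example
ffmpeg -i input.wav -af earwax output.wav
@end example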
Mix channels with specific gain levels. The filter accepts the output
channel layout followed by a set of channel definitions.
This filter is also designed to efficiently remap the channels of an audio
stream.
531 The filter accepts parameters of the form:
532 "@var{l}:@var{outdef}:@var{outdef}:..."
536 output channel layout or number of channels
539 output channel specification, of the form:
540 "@var{out_name}=[@var{gain}*]@var{in_name}[+[@var{gain}*]@var{in_name}...]"
543 output channel to define, either a channel name (FL, FR, etc.) or a channel
544 number (c0, c1, etc.)
547 multiplicative coefficient for the channel, 1 leaving the volume unchanged
550 input channel to use, see out_name for details; it is not possible to mix
551 named and numbered input channels
554 If the `=' in a channel specification is replaced by `<', then the gains for
555 that specification will be renormalized so that the total is 1, thus
556 avoiding clipping noise.
558 @subsection Mixing examples
560 For example, if you want to down-mix from stereo to mono, but with a bigger
561 factor for the left channel:
563 pan=1:c0=0.9*c0+0.1*c1
A customized down-mix to stereo that works automatically for 3-, 4-, 5- and
7-channel surround:
569 pan=stereo: FL < FL + 0.5*FC + 0.6*BL + 0.6*SL : FR < FR + 0.5*FC + 0.6*BR + 0.6*SR
Note that @command{ffmpeg} integrates a default down-mix (and up-mix) system
that should be preferred (see the "-ac" option) unless you have very specific
needs.
576 @subsection Remapping examples
578 The channel remapping will be effective if, and only if:
581 @item gain coefficients are zeroes or ones,
582 @item only one input per channel output,
If all these conditions are satisfied, the filter will notify the user ("Pure
channel mapping detected"), and use an optimized and lossless method to do the
remapping.
589 For example, if you have a 5.1 source and want a stereo audio stream by
590 dropping the extra channels:
592 pan="stereo: c0=FL : c1=FR"
595 Given the same source, you can also switch front left and front right channels
596 and keep the input channel layout:
598 pan="5.1: c0=c1 : c1=c0 : c2=c2 : c3=c3 : c4=c4 : c5=c5"
601 If the input is a stereo audio stream, you can mute the front left channel (and
602 still keep the stereo channel layout) with:
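@example
pan="stereo: c1=c1"
@end example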
607 Still with a stereo audio stream input, you can copy the right channel in both
608 front left and right:
610 pan="stereo: c0=FR : c1=FR"
613 @section silencedetect
615 Detect silence in an audio stream.
This filter logs a message when it detects that the input audio volume is less
than or equal to a noise tolerance value for a duration greater than or equal
to the minimum detected noise duration.
621 The printed times and duration are expressed in seconds.
625 Set silence duration until notification (default is 2 seconds).
628 Set noise tolerance. Can be specified in dB (in case "dB" is appended to the
629 specified value) or amplitude ratio. Default is -60dB, or 0.001.
632 Detect 5 seconds of silence with -50dB noise tolerance:
634 silencedetect=n=-50dB:d=5
637 Complete example with @command{ffmpeg} to detect silence with 0.0001 noise
638 tolerance in @file{silence.mp3}:
640 ffmpeg -f lavfi -i amovie=silence.mp3,silencedetect=noise=0.0001 -f null -
645 Adjust the input audio volume.
647 The filter accepts exactly one parameter @var{vol}, which expresses
648 how the audio volume will be increased or decreased.
650 Output values are clipped to the maximum value.
652 If @var{vol} is expressed as a decimal number, the output audio
653 volume is given by the relation:
655 @var{output_volume} = @var{vol} * @var{input_volume}
658 If @var{vol} is expressed as a decimal number followed by the string
659 "dB", the value represents the requested change in decibels of the
660 input audio power, and the output audio volume is given by the
663 @var{output_volume} = 10^(@var{vol}/20) * @var{input_volume}
666 Otherwise @var{vol} is considered an expression and its evaluated
value is used for computing the output audio volume according to the
first relation.
670 Default value for @var{vol} is 1.0.
676 Half the input audio volume:
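@example
volume=0.5
@end example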
681 The above example is equivalent to:
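@example
volume=1/2
@end example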
687 Decrease input audio power by 12 decibels:
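@example
volume=-12dB
@end example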
693 @section volumedetect
Detect the volume of the input audio stream.
697 The filter has no parameters. The input is not modified. Statistics about
698 the volume will be printed in the log when the input stream end is reached.
In particular it will show the mean volume (root mean square), maximum
volume (on a per-sample basis), and the beginning of a histogram of the
registered volume values (from the maximum value to a cumulated 1/1000 of
the samples).
705 All volumes are in decibels relative to the maximum PCM value.
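The statistics can be gathered without writing any output file by decoding
to the null muxer (a sketch; the input file name is hypothetical):
@example
ffmpeg -i input.mkv -vn -af volumedetect -f null -
@end example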
707 Here is an excerpt of the output:
709 [Parsed_volumedetect_0 @ 0xa23120] mean_volume: -27 dB
710 [Parsed_volumedetect_0 @ 0xa23120] max_volume: -4 dB
711 [Parsed_volumedetect_0 @ 0xa23120] histogram_4db: 6
712 [Parsed_volumedetect_0 @ 0xa23120] histogram_5db: 62
713 [Parsed_volumedetect_0 @ 0xa23120] histogram_6db: 286
714 [Parsed_volumedetect_0 @ 0xa23120] histogram_7db: 1042
715 [Parsed_volumedetect_0 @ 0xa23120] histogram_8db: 2551
716 [Parsed_volumedetect_0 @ 0xa23120] histogram_9db: 4609
717 [Parsed_volumedetect_0 @ 0xa23120] histogram_10db: 8409
723 The mean square energy is approximately -27 dB, or 10^-2.7.
725 The largest sample is at -4 dB, or more precisely between -4 dB and -5 dB.
727 There are 6 samples at -4 dB, 62 at -5 dB, 286 at -6 dB, etc.
730 In other words, raising the volume by +4 dB does not cause any clipping,
731 raising it by +5 dB causes clipping for 6 samples, etc.
734 Synchronize audio data with timestamps by squeezing/stretching it and/or
735 dropping samples/adding silence when needed.
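In an @command{ffmpeg} command line the filter is typically inserted with
@option{-af}; a minimal sketch using the default options (file names are
hypothetical):
@example
ffmpeg -i input.mkv -af asyncts -c:v copy output.mkv
@end example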
737 The filter accepts the following named parameters:
741 Enable stretching/squeezing the data to make it match the timestamps.
744 Minimum difference between timestamps and audio data (in seconds) to trigger
745 adding/dropping samples.
748 Maximum compensation in samples per second.
751 Assume the first pts should be this value.
752 This allows for padding/trimming at the start of stream. By default, no
753 assumption is made about the first frame's expected pts, so no padding or
754 trimming is done. For example, this could be set to 0 to pad the beginning with
755 silence if an audio stream starts after the video stream.
759 @section channelsplit
Split each channel from the input audio stream into a separate output stream.
762 This filter accepts the following named parameters:
765 Channel layout of the input stream. Default is "stereo".
768 For example, assuming a stereo input MP3 file
770 ffmpeg -i in.mp3 -filter_complex channelsplit out.mkv
772 will create an output Matroska file with two audio streams, one containing only
773 the left channel and the other the right channel.
775 To split a 5.1 WAV file into per-channel files
777 ffmpeg -i in.wav -filter_complex
778 'channelsplit=channel_layout=5.1[FL][FR][FC][LFE][SL][SR]'
779 -map '[FL]' front_left.wav -map '[FR]' front_right.wav -map '[FC]'
front_center.wav -map '[LFE]' lfe.wav -map '[SL]' side_left.wav -map '[SR]'
side_right.wav
785 Remap input channels to new locations.
787 This filter accepts the following named parameters:
790 Channel layout of the output stream.
793 Map channels from input to output. The argument is a comma-separated list of
794 mappings, each in the @code{@var{in_channel}-@var{out_channel}} or
795 @var{in_channel} form. @var{in_channel} can be either the name of the input
796 channel (e.g. FL for front left) or its index in the input channel layout.
797 @var{out_channel} is the name of the output channel or its index in the output
798 channel layout. If @var{out_channel} is not given then it is implicitly an
799 index, starting with zero and increasing by one for each mapping.
802 If no mapping is present, the filter will implicitly map input channels to
803 output channels preserving index.
805 For example, assuming a 5.1+downmix input MOV file
807 ffmpeg -i in.mov -filter 'channelmap=map=DL-FL\,DR-FR' out.wav
will create an output WAV file tagged as stereo from the downmix channels of
the input.
812 To fix a 5.1 WAV improperly encoded in AAC's native channel order
814 ffmpeg -i in.wav -filter 'channelmap=1\,2\,0\,5\,3\,4:channel_layout=5.1' out.wav
818 Join multiple input streams into one multi-channel stream.
820 The filter accepts the following named parameters:
824 Number of input streams. Defaults to 2.
827 Desired output channel layout. Defaults to stereo.
830 Map channels from inputs to output. The argument is a comma-separated list of
831 mappings, each in the @code{@var{input_idx}.@var{in_channel}-@var{out_channel}}
832 form. @var{input_idx} is the 0-based index of the input stream. @var{in_channel}
833 can be either the name of the input channel (e.g. FL for front left) or its
index in the specified input stream. @var{out_channel} is the name of the output
channel.
838 The filter will attempt to guess the mappings when those are not specified
839 explicitly. It does so by first trying to find an unused matching input channel
840 and if that fails it picks the first unused input channel.
842 E.g. to join 3 inputs (with properly set channel layouts)
844 ffmpeg -i INPUT1 -i INPUT2 -i INPUT3 -filter_complex join=inputs=3 OUTPUT
847 To build a 5.1 output from 6 single-channel streams:
849 ffmpeg -i fl -i fr -i fc -i sl -i sr -i lfe -filter_complex
850 'join=inputs=6:channel_layout=5.1:map=0.0-FL\,1.0-FR\,2.0-FC\,3.0-SL\,4.0-SR\,5.0-LFE'
855 Convert the audio sample format, sample rate and channel layout. This filter is
856 not meant to be used directly.
858 @c man end AUDIO FILTERS
860 @chapter Audio Sources
861 @c man begin AUDIO SOURCES
863 Below is a description of the currently available audio sources.
867 Buffer audio frames, and make them available to the filter chain.
869 This source is mainly intended for a programmatic use, in particular
870 through the interface defined in @file{libavfilter/asrc_abuffer.h}.
872 It accepts the following mandatory parameters:
873 @var{sample_rate}:@var{sample_fmt}:@var{channel_layout}
878 The sample rate of the incoming audio buffers.
881 The sample format of the incoming audio buffers.
Either a sample format name or its corresponding integer representation from
883 the enum AVSampleFormat in @file{libavutil/samplefmt.h}
886 The channel layout of the incoming audio buffers.
887 Either a channel layout name from channel_layout_map in
888 @file{libavutil/audioconvert.c} or its corresponding integer representation
889 from the AV_CH_LAYOUT_* macros in @file{libavutil/audioconvert.h}
895 abuffer=44100:s16p:stereo
will instruct the source to accept planar 16-bit signed stereo at 44100Hz.
Since the sample format with name "s16p" corresponds to the number
6 and the "stereo" channel layout corresponds to the value 0x3, this is
equivalent to:
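@example
abuffer=44100:6:0x3
@end example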
908 Generate an audio signal specified by an expression.
This source accepts one or more expressions (one for each
channel), which are evaluated and used to generate a corresponding
audio signal.
914 It accepts the syntax: @var{exprs}[::@var{options}].
915 @var{exprs} is a list of expressions separated by ":", one for each
916 separate channel. In case the @var{channel_layout} is not
917 specified, the selected channel layout depends on the number of
918 provided expressions.
@var{options} is an optional sequence of @var{key}=@var{value} pairs,
separated by ":".
923 The description of the accepted options follows.
927 @item channel_layout, c
928 Set the channel layout. The number of channels in the specified layout
929 must be equal to the number of specified expressions.
932 Set the minimum duration of the sourced audio. See the function
933 @code{av_parse_time()} for the accepted format.
934 Note that the resulting duration may be greater than the specified
duration, as the generated audio is always cut at the end of a
complete frame.
938 If not specified, or the expressed duration is negative, the audio is
939 supposed to be generated forever.
Set the number of samples per channel per output frame.
Default is 1024.
Specify the sample rate. Default is 44100.
949 Each expression in @var{exprs} can contain the following constants:
953 number of the evaluated sample, starting from 0
956 time of the evaluated sample expressed in seconds, starting from 0
Generate a sine signal with a frequency of 440 Hz, and set the sample rate to
8000 Hz:
978 aevalsrc="sin(440*2*PI*t)::s=8000"
Generate a two-channel signal, and specify the channel layout (Front
Center + Back Center) explicitly:
985 aevalsrc="sin(420*2*PI*t):cos(430*2*PI*t)::c=FC|BC"
989 Generate white noise:
991 aevalsrc="-2+random(0)"
995 Generate an amplitude modulated signal:
997 aevalsrc="sin(10*2*PI*t)*sin(880*2*PI*t)"
1001 Generate 2.5 Hz binaural beats on a 360 Hz carrier:
1003 aevalsrc="0.1*sin(2*PI*(360-2.5/2)*t) : 0.1*sin(2*PI*(360+2.5/2)*t)"
Null audio source: return unprocessed audio frames. It is mainly useful
as a template and to be employed in analysis / debugging tools, or as
the source for filters which ignore the input data (for example the sox
synth filter).
It accepts an optional sequence of @var{key}=@var{value} pairs,
separated by ":".
1018 The description of the accepted options follows.
1022 @item sample_rate, s
Specify the sample rate. Defaults to 44100.
1025 @item channel_layout, cl
1027 Specify the channel layout, and can be either an integer or a string
representing a channel layout. The default value of @var{channel_layout}
is "stereo".
Check the channel_layout_map definition in
@file{libavutil/audioconvert.c} for the mapping between strings and
channel layout values.
Set the number of samples per requested frame.
Some examples follow:
1042 # set the sample rate to 48000 Hz and the channel layout to AV_CH_LAYOUT_MONO.
1043 anullsrc=r=48000:cl=4
# same as
anullsrc=r=48000:cl=mono
1050 Buffer audio frames, and make them available to the filter chain.
1052 This source is not intended to be part of user-supplied graph descriptions but
1053 for insertion by calling programs through the interface defined in
1054 @file{libavfilter/buffersrc.h}.
1056 It accepts the following named parameters:
1060 Timebase which will be used for timestamps of submitted frames. It must be
1061 either a floating-point number or in @var{numerator}/@var{denominator} form.
1067 Name of the sample format, as returned by @code{av_get_sample_fmt_name()}.
1069 @item channel_layout
1070 Channel layout of the audio data, in the form that can be accepted by
1071 @code{av_get_channel_layout()}.
1074 All the parameters need to be explicitly defined.
1078 Synthesize a voice utterance using the libflite library.
1080 To enable compilation of this filter you need to configure FFmpeg with
1081 @code{--enable-libflite}.
1083 Note that the flite library is not thread-safe.
1085 The source accepts parameters as a list of @var{key}=@var{value} pairs,
1088 The description of the accepted parameters follows.
1093 If set to 1, list the names of the available voices and exit
1094 immediately. Default value is 0.
1097 Set the maximum number of samples per frame. Default value is 512.
1100 Set the filename containing the text to speak.
1103 Set the text to speak.
1106 Set the voice to use for the speech synthesis. Default value is
1107 @code{kal}. See also the @var{list_voices} option.
1110 @subsection Examples
Read from file @file{speech.txt}, and synthesize the text using the
1115 standard flite voice:
1117 flite=textfile=speech.txt
1121 Read the specified text selecting the @code{slt} voice:
1123 flite=text='So fare thee well, poor devil of a Sub-Sub, whose commentator I am':voice=slt
Make @command{ffplay} speak the specified text, using @code{flite} and
1128 the @code{lavfi} device:
ffplay -f lavfi flite=text='No more be grieved for which that thou hast done.'
1134 For more information about libflite, check:
1135 @url{http://www.speech.cs.cmu.edu/flite/}
1137 @c man end AUDIO SOURCES
1139 @chapter Audio Sinks
1140 @c man begin AUDIO SINKS
1142 Below is a description of the currently available audio sinks.
1144 @section abuffersink
Buffer audio frames, and make them available to the end of the filter chain.
1148 This sink is mainly intended for programmatic use, in particular
1149 through the interface defined in @file{libavfilter/buffersink.h}.
1151 It requires a pointer to an AVABufferSinkContext structure, which
1152 defines the incoming buffers' formats, to be passed as the opaque
parameter to @code{avfilter_init_filter} for initialization. This sink
accepts no parameters in the filtergraph description.
1157 Null audio sink, do absolutely nothing with the input audio. It is
mainly useful as a template and to be employed in analysis / debugging
tools.
1168 @c man end AUDIO SINKS
1170 @chapter Video Filters
1171 @c man begin VIDEO FILTERS
1173 When you configure your FFmpeg build, you can disable any of the
1174 existing filters using @code{--disable-filters}.
The configure output will show the video filters included in your build.
1178 Below is a description of the currently available video filters.
1180 @section alphaextract
1182 Extract the alpha component from the input as a grayscale video. This
1183 is especially useful with the @var{alphamerge} filter.
1187 Add or replace the alpha component of the primary input with the
1188 grayscale value of a second input. This is intended for use with
1189 @var{alphaextract} to allow the transmission or storage of frame
sequences that have alpha in a format that doesn't support an alpha
channel.
1193 For example, to reconstruct full frames from a normal YUV-encoded video
1194 and a separate video created with @var{alphaextract}, you might use:
1196 movie=in_alpha.mkv [alpha]; [in][alpha] alphamerge [out]
1199 Since this filter is designed for reconstruction, it operates on frame
1200 sequences without considering timestamps, and terminates when either
1201 input reaches end of stream. This will cause problems if your encoding
1202 pipeline drops frames. If you're trying to apply an image as an
1203 overlay to a video stream, consider the @var{overlay} filter instead.
1207 Draw ASS (Advanced Substation Alpha) subtitles on top of input video
1208 using the libass library.
1210 To enable compilation of this filter you need to configure FFmpeg with
1211 @code{--enable-libass}.
1213 This filter accepts the syntax: @var{ass_filename}[:@var{options}],
1214 where @var{ass_filename} is the filename of the ASS file to read, and
1215 @var{options} is an optional sequence of @var{key}=@var{value} pairs,
1218 A description of the accepted options follows.
1222 Specifies the size of the original video, the video for which the ASS file
1223 was composed. Due to a misdesign in ASS aspect ratio arithmetic, this is
1224 necessary to correctly scale the fonts if the aspect ratio has been changed.
1227 For example, to render the file @file{sub.ass} on top of the input
1228 video, use the command:
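@example
ass=sub.ass
@end example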
Compute the bounding box for the non-black pixels in the input frame
luminance plane.
1238 This filter computes the bounding box containing all the pixels with a
1239 luminance value greater than the minimum allowed value.
The parameters describing the bounding box are printed on the filter
log.
1243 @section blackdetect
1245 Detect video intervals that are (almost) completely black. Can be
1246 useful to detect chapter transitions, commercials, or invalid
recordings. Output lines contain the time for the start, end and
1248 duration of the detected black interval expressed in seconds.
1250 In order to display the output lines, you need to set the loglevel at
1251 least to the AV_LOG_INFO value.
1253 This filter accepts a list of options in the form of
1254 @var{key}=@var{value} pairs separated by ":". A description of the
1255 accepted options follows.
1258 @item black_min_duration, d
1259 Set the minimum detected black duration expressed in seconds. It must
1260 be a non-negative floating point number.
1262 Default value is 2.0.
1264 @item picture_black_ratio_th, pic_th
1265 Set the threshold for considering a picture "black".
1266 Express the minimum value for the ratio:
1268 @var{nb_black_pixels} / @var{nb_pixels}
1271 for which a picture is considered black.
1272 Default value is 0.98.
1274 @item pixel_black_th, pix_th
1275 Set the threshold for considering a pixel "black".
1277 The threshold expresses the maximum pixel luminance value for which a
1278 pixel is considered "black". The provided value is scaled according to
1279 the following equation:
1281 @var{absolute_threshold} = @var{luminance_minimum_value} + @var{pixel_black_th} * @var{luminance_range_size}
1284 @var{luminance_range_size} and @var{luminance_minimum_value} depend on
1285 the input video format, the range is [0-255] for YUV full-range
1286 formats and [16-235] for YUV non full-range formats.
1288 Default value is 0.10.
1291 The following example sets the maximum pixel threshold to the minimum
1292 value, and detects only black intervals of 2 or more seconds:
1294 blackdetect=d=2:pix_th=0.00
1299 Detect frames that are (almost) completely black. Can be useful to
1300 detect chapter transitions or commercials. Output lines consist of
1301 the frame number of the detected frame, the percentage of blackness,
1302 the position in the file if known or -1 and the timestamp in seconds.
1304 In order to display the output lines, you need to set the loglevel at
1305 least to the AV_LOG_INFO value.
1307 The filter accepts the syntax:
1309 blackframe[=@var{amount}:[@var{threshold}]]
1312 @var{amount} is the percentage of the pixels that have to be below the
1313 threshold, and defaults to 98.
1315 @var{threshold} is the threshold below which a pixel value is
1316 considered black, and defaults to 32.
1320 Apply boxblur algorithm to the input video.
1322 This filter accepts the parameters:
1323 @var{luma_radius}:@var{luma_power}:@var{chroma_radius}:@var{chroma_power}:@var{alpha_radius}:@var{alpha_power}
1325 Chroma and alpha parameters are optional, if not specified they default
to the corresponding values set for @var{luma_radius} and
@var{luma_power}.
1329 @var{luma_radius}, @var{chroma_radius}, and @var{alpha_radius} represent
1330 the radius in pixels of the box used for blurring the corresponding
1331 input plane. They are expressions, and can contain the following
1335 the input width and height in pixels
1338 the input chroma image width and height in pixels
1341 horizontal and vertical chroma subsample values. For example for the
1342 pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
1345 The radius must be a non-negative number, and must not be greater than
1346 the value of the expression @code{min(w,h)/2} for the luma and alpha planes,
1347 and of @code{min(cw,ch)/2} for the chroma planes.
1349 @var{luma_power}, @var{chroma_power}, and @var{alpha_power} represent
how many times the boxblur filter is applied to the corresponding
plane.
1353 Some examples follow:
Apply a boxblur filter with luma, chroma, and alpha radius
set to 2:
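@example
boxblur=2:1
@end example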
Set luma radius to 2, alpha and chroma radius to 0:
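@example
boxblur=2:1:0:0:0:0
@end example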
1371 Set luma and chroma radius to a fraction of the video dimension
1373 boxblur=min(h\,w)/10:1:min(cw\,ch)/10:1
1378 @section colormatrix
The colormatrix filter allows conversion between any of the following color
spaces: BT.709 (@var{bt709}), BT.601 (@var{bt601}), SMPTE-240M (@var{smpte240m})
1382 and FCC (@var{fcc}).
1384 The syntax of the parameters is @var{source}:@var{destination}:
1387 colormatrix=bt601:smpte240m
Copy the input source unchanged to the output. Mainly useful for
testing purposes.
1397 Crop the input video to @var{out_w}:@var{out_h}:@var{x}:@var{y}:@var{keep_aspect}
The @var{keep_aspect} parameter is optional; if specified and set to a
non-zero value it will force the output display aspect ratio to be the
same as that of the input, by changing the output sample aspect ratio.
1403 The @var{out_w}, @var{out_h}, @var{x}, @var{y} parameters are
1404 expressions containing the following constants:
the computed values for @var{x} and @var{y}. They are evaluated for
each new frame.
1412 the input width and height
1415 same as @var{in_w} and @var{in_h}
1418 the output (cropped) width and height
1421 same as @var{out_w} and @var{out_h}
1424 same as @var{iw} / @var{ih}
1427 input sample aspect ratio
1430 input display aspect ratio, it is the same as (@var{iw} / @var{ih}) * @var{sar}
1433 horizontal and vertical chroma subsample values. For example for the
1434 pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
the number of the input frame, starting from 0
1440 the position in the file of the input frame, NAN if unknown
1443 timestamp expressed in seconds, NAN if the input timestamp is unknown
1447 The @var{out_w} and @var{out_h} parameters specify the expressions for
1448 the width and height of the output (cropped) video. They are
1449 evaluated just at the configuration of the filter.
1451 The default value of @var{out_w} is "in_w", and the default value of
1452 @var{out_h} is "in_h".
1454 The expression for @var{out_w} may depend on the value of @var{out_h},
1455 and the expression for @var{out_h} may depend on @var{out_w}, but they
1456 cannot depend on @var{x} and @var{y}, as @var{x} and @var{y} are
1457 evaluated after @var{out_w} and @var{out_h}.
1459 The @var{x} and @var{y} parameters specify the expressions for the
1460 position of the top-left corner of the output (non-cropped) area. They
1461 are evaluated for each frame. If the evaluated value is not valid, it
1462 is approximated to the nearest valid value.
1464 The default value of @var{x} is "(in_w-out_w)/2", and the default
1465 value for @var{y} is "(in_h-out_h)/2", which set the cropped area at
1466 the center of the input image.
1468 The expression for @var{x} may depend on @var{y}, and the expression
1469 for @var{y} may depend on @var{x}.
Some examples follow:
1473 # crop the central input area with size 100x100
1476 # crop the central input area with size 2/3 of the input video
1477 "crop=2/3*in_w:2/3*in_h"
1479 # crop the input video central square
1482 # delimit the rectangle with the top-left corner placed at position
1483 # 100:100 and the right-bottom corner corresponding to the right-bottom
1484 # corner of the input image.
1485 crop=in_w-100:in_h-100:100:100
1487 # crop 10 pixels from the left and right borders, and 20 pixels from
1488 # the top and bottom borders
1489 "crop=in_w-2*10:in_h-2*20"
1491 # keep only the bottom right quarter of the input image
1492 "crop=in_w/2:in_h/2:in_w/2:in_h/2"
1494 # crop height for getting Greek harmony
1495 "crop=in_w:1/PHI*in_w"
1498 "crop=in_w/2:in_h/2:(in_w-out_w)/2+((in_w-out_w)/2)*sin(n/10):(in_h-out_h)/2 +((in_h-out_h)/2)*sin(n/7)"
1500 # erratic camera effect depending on timestamp
1501 "crop=in_w/2:in_h/2:(in_w-out_w)/2+((in_w-out_w)/2)*sin(t*10):(in_h-out_h)/2 +((in_h-out_h)/2)*sin(t*13)"
1503 # set x depending on the value of y
1504 "crop=in_w/2:in_h/2:y:10+10*sin(n/10)"
1509 Auto-detect crop size.
Calculate the necessary cropping parameters and print the recommended
1512 parameters through the logging system. The detected dimensions
1513 correspond to the non-black area of the input video.
1515 It accepts the syntax:
1517 cropdetect[=@var{limit}[:@var{round}[:@var{reset}]]]
1523 Threshold, which can be optionally specified from nothing (0) to
1524 everything (255), defaults to 24.
1527 Value which the width/height should be divisible by, defaults to
1528 16. The offset is automatically adjusted to center the video. Use 2 to
1529 get only even dimensions (needed for 4:2:2 video). 16 is best when
1530 encoding to most video codecs.
1533 Counter that determines after how many frames cropdetect will reset
1534 the previously detected largest video area and start over to detect
1535 the current optimal crop area. Defaults to 0.
1537 This can be useful when channel logos distort the video area. 0
indicates 'never reset', and returns the largest area encountered during
playback.
1544 This filter drops frames that do not differ greatly from the previous
1545 frame in order to reduce framerate. The main use of this filter is
1546 for very-low-bitrate encoding (e.g. streaming over dialup modem), but
1547 it could in theory be used for fixing movies that were
1548 inverse-telecined incorrectly.
1550 It accepts the following parameters:
1551 @var{max}:@var{hi}:@var{lo}:@var{frac}.
1556 Set the maximum number of consecutive frames which can be dropped (if
1557 positive), or the minimum interval between dropped frames (if
negative). If the value is 0, the frame is dropped regardless of the
number of previously dropped frames in sequence.
1564 Set the dropping threshold values.
1566 Values for @var{hi} and @var{lo} are for 8x8 pixel blocks and
1567 represent actual pixel value differences, so a threshold of 64
1568 corresponds to 1 unit of difference for each pixel, or the same spread
1569 out differently over the block.
1571 A frame is a candidate for dropping if no 8x8 blocks differ by more
1572 than a threshold of @var{hi}, and if no more than @var{frac} blocks (1
1573 meaning the whole image) differ by more than a threshold of @var{lo}.
1575 Default value for @var{hi} is 64*12, default value for @var{lo} is
1576 64*5, and default value for @var{frac} is 0.33.
1581 Suppress a TV station logo by a simple interpolation of the surrounding
1582 pixels. Just set a rectangle covering the logo and watch it disappear
1583 (and sometimes something even uglier appear - your mileage may vary).
1585 The filter accepts parameters as a string of the form
1586 "@var{x}:@var{y}:@var{w}:@var{h}:@var{band}", or as a list of
1587 @var{key}=@var{value} pairs, separated by ":".
1589 The description of the accepted parameters follows.
Specify the top left corner coordinates of the logo. They must be
specified.
Specify the width and height of the logo to clear. They must be
specified.
1602 Specify the thickness of the fuzzy edge of the rectangle (added to
1603 @var{w} and @var{h}). The default value is 4.
1606 When set to 1, a green rectangle is drawn on the screen to simplify
1607 finding the right @var{x}, @var{y}, @var{w}, @var{h} parameters, and
1608 @var{band} is set to 4. The default value is 0.
1612 Some examples follow.
1617 Set a rectangle covering the area with top left corner coordinates 0,0
1618 and size 100x77, setting a band of size 10:
1620 delogo=0:0:100:77:10
1624 As the previous example, but use named options:
1626 delogo=x=0:y=0:w=100:h=77:band=10
1633 Attempt to fix small changes in horizontal and/or vertical shift. This
1634 filter helps remove camera shake from hand-holding a camera, bumping a
1635 tripod, moving on a vehicle, etc.
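With no parameters the defaults described below are used; a minimal sketch
(file names are hypothetical):
@example
ffmpeg -i shaky.mp4 -vf deshake stabilized.mp4
@end example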
1637 The filter accepts parameters as a string of the form
1638 "@var{x}:@var{y}:@var{w}:@var{h}:@var{rx}:@var{ry}:@var{edge}:@var{blocksize}:@var{contrast}:@var{search}:@var{filename}"
1640 A description of the accepted parameters follows.
Specify a rectangular area within which to limit the search for motion
vectors.
1647 If desired the search for motion vectors can be limited to a
1648 rectangular area of the frame defined by its top left corner, width
1649 and height. These parameters have the same meaning as the drawbox
filter which can be used to visualise the position of the bounding
box.
1653 This is useful when simultaneous movement of subjects within the frame
1654 might be confused for camera motion by the motion vector search.
1656 If any or all of @var{x}, @var{y}, @var{w} and @var{h} are set to -1
1657 then the full frame is used. This allows later options to be set
1658 without specifying the bounding box for the motion vector search.
1660 Default - search the whole frame.
1663 Specify the maximum extent of movement in x and y directions in the
1664 range 0-64 pixels. Default 16.
1667 Specify how to generate pixels to fill blanks at the edge of the
1668 frame. An integer from 0 to 3 as follows:
1671 Fill zeroes at blank locations
1673 Original image at blank locations
1675 Extruded edge value at blank locations
1677 Mirrored edge at blank locations
1680 The default setting is mirror edge at blank locations.
Specify the blocksize to use for motion search. Range 4-128 pixels,
default 8.
1687 Specify the contrast threshold for blocks. Only blocks with more than
1688 the specified contrast (difference between darkest and lightest
1689 pixels) will be considered. Range 1-255, default 125.
Specify the search strategy: 0 = exhaustive search, 1 = less exhaustive
search. Default: exhaustive search.
If set, then a detailed log of the motion search is written to the
specified file.
1703 Draw a colored box on the input image.
1705 It accepts the syntax:
1707 drawbox=@var{x}:@var{y}:@var{width}:@var{height}:@var{color}
Specify the top left corner coordinates of the box. They default to 0.
Specify the width and height of the box; if 0 they are interpreted as
the input width and height. They default to 0.
1720 Specify the color of the box to write, it can be the name of a color
1721 (case insensitive match) or a 0xRRGGBB[AA] sequence.
Some examples follow:
# draw a black box around the edge of the input image
drawbox

# draw a box with color red and an opacity of 50%
drawbox=10:20:200:60:red@@0.5
1735 Draw text string or text from specified file on top of video using the
1736 libfreetype library.
1738 To enable compilation of this filter you need to configure FFmpeg with
1739 @code{--enable-libfreetype}.
1741 The filter also recognizes strftime() sequences in the provided text
1742 and expands them accordingly. Check the documentation of strftime().
The filter accepts parameters as a list of @var{key}=@var{value} pairs,
separated by ":".
1747 The description of the accepted parameters follows.
1752 Used to draw a box around text using background color.
1753 Value should be either 1 (enable) or 0 (disable).
1754 The default value of @var{box} is 0.
1757 The color to be used for drawing box around text.
1758 Either a string (e.g. "yellow") or in 0xRRGGBB[AA] format
1759 (e.g. "0xff00ff"), possibly followed by an alpha specifier.
1760 The default value of @var{boxcolor} is "white".
1763 Set an expression which specifies if the text should be drawn. If the
1764 expression evaluates to 0, the text is not drawn. This is useful for
specifying that the text should be drawn only when specific conditions
are met.
1768 Default value is "1".
1770 See below for the list of accepted constants and functions.
1773 If true, check and fix text coords to avoid clipping.
1776 The color to be used for drawing fonts.
1777 Either a string (e.g. "red") or in 0xRRGGBB[AA] format
1778 (e.g. "0xff000033"), possibly followed by an alpha specifier.
1779 The default value of @var{fontcolor} is "black".
1782 The font file to be used for drawing text. Path must be included.
1783 This parameter is mandatory.
1786 The font size to be used for drawing text.
1787 The default value of @var{fontsize} is 16.
1790 Flags to be used for loading the fonts.
1792 The flags map the corresponding flags supported by libfreetype, and are
1793 a combination of the following values:
1800 @item vertical_layout
1801 @item force_autohint
1804 @item ignore_global_advance_width
1806 @item ignore_transform
1813 Default value is "render".
For more information consult the documentation for the FT_LOAD_*
libfreetype flags.
1819 The color to be used for drawing a shadow behind the drawn text. It
1820 can be a color name (e.g. "yellow") or a string in the 0xRRGGBB[AA]
1821 form (e.g. "0xff00ff"), possibly followed by an alpha specifier.
1822 The default value of @var{shadowcolor} is "black".
1824 @item shadowx, shadowy
1825 The x and y offsets for the text shadow position with respect to the
1826 position of the text. They can be either positive or negative
1827 values. Default value for both is "0".
The size in number of spaces to use for rendering the tab.
Default value is 4.
1834 Set the initial timecode representation in "hh:mm:ss[:;.]ff"
1835 format. It can be used with or without text parameter. @var{timecode_rate}
1836 option must be specified.
1838 @item timecode_rate, rate, r
1839 Set the timecode frame rate (timecode only).
The text string to be drawn. The text must be a sequence of UTF-8
encoded characters.
This parameter is mandatory if no file is specified with the parameter
@var{textfile}.
1848 A text file containing text to be drawn. The text must be a sequence
1849 of UTF-8 encoded characters.
1851 This parameter is mandatory if no text string is specified with the
1852 parameter @var{text}.
1854 If both @var{text} and @var{textfile} are specified, an error is thrown.
1857 The expressions which specify the offsets where text will be drawn
within the video frame. They are relative to the top/left border of the
output image.
1861 The default value of @var{x} and @var{y} is "0".
1863 See below for the list of accepted constants and functions.
1866 The parameters for @var{x} and @var{y} are expressions containing the
1867 following constants and functions:
1871 input display aspect ratio, it is the same as (@var{w} / @var{h}) * @var{sar}
1874 horizontal and vertical chroma subsample values. For example for the
1875 pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
1878 the height of each text line
1886 @item max_glyph_a, ascent
1887 the maximum distance from the baseline to the highest/upper grid
coordinate used to place a glyph outline point, for all the rendered
glyphs.
It is a positive value, due to the grid's orientation with the Y axis
upwards.
1893 @item max_glyph_d, descent
1894 the maximum distance from the baseline to the lowest grid coordinate
1895 used to place a glyph outline point, for all the rendered glyphs.
This is a negative value, due to the grid's orientation, with the Y axis
upwards.
1900 maximum glyph height, that is the maximum height for all the glyphs
contained in the rendered text; it is equivalent to @var{ascent} -
@var{descent}.
1905 maximum glyph width, that is the maximum width for all the glyphs
1906 contained in the rendered text
the number of the input frame, starting from 0
1911 @item rand(min, max)
1912 return a random number included between @var{min} and @var{max}
1915 input sample aspect ratio
1918 timestamp expressed in seconds, NAN if the input timestamp is unknown
1921 the height of the rendered text
1924 the width of the rendered text
1927 the x and y offset coordinates where the text is drawn.
These parameters allow the @var{x} and @var{y} expressions to refer to
each other, so you can for example specify @code{y=x/dar}.
1933 If libavfilter was built with @code{--enable-fontconfig}, then
1934 @option{fontfile} can be a fontconfig pattern or omitted.
1936 Some examples follow.
1941 Draw "Test Text" with font FreeSerif, using the default values for the
1942 optional parameters.
1945 drawtext="fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf: text='Test Text'"
1949 Draw 'Test Text' with font FreeSerif of size 24 at position x=100
1950 and y=50 (counting from the top-left corner of the screen), text is
yellow with a red box around it. Both the text and the box have an
opacity of 20%:
1955 drawtext="fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf: text='Test Text':\
1956 x=100: y=50: fontsize=24: fontcolor=yellow@@0.2: box=1: boxcolor=red@@0.2"
1959 Note that the double quotes are not necessary if spaces are not used
1960 within the parameter list.
1963 Show the text at the center of the video frame:
1965 drawtext="fontsize=30:fontfile=FreeSerif.ttf:text='hello world':x=(w-text_w)/2:y=(h-text_h-line_h)/2"
1969 Show a text line sliding from right to left in the last row of the video
frame. The file @file{LONG_LINE} is assumed to contain a single line
with no newlines.
1973 drawtext="fontsize=15:fontfile=FreeSerif.ttf:text=LONG_LINE:y=h-line_h:x=-50*t"
1977 Show the content of file @file{CREDITS} off the bottom of the frame and scroll up.
1979 drawtext="fontsize=20:fontfile=FreeSerif.ttf:textfile=CREDITS:y=h-20*t"
1983 Draw a single green letter "g", at the center of the input video.
1984 The glyph baseline is placed at half screen height.
1986 drawtext="fontsize=60:fontfile=FreeSerif.ttf:fontcolor=green:text=g:x=(w-max_glyph_w)/2:y=h/2-ascent"
1990 Show text for 1 second every 3 seconds:
1992 drawtext="fontfile=FreeSerif.ttf:fontcolor=white:x=100:y=x/dar:draw=lt(mod(t\\,3)\\,1):text='blink'"
1996 Use fontconfig to set the font. Note that the colons need to be escaped.
1998 drawtext='fontfile=Linux Libertine O-40\\:style=Semibold:text=FFmpeg'
2003 For more information about libfreetype, check:
2004 @url{http://www.freetype.org/}.
2006 For more information about fontconfig, check:
2007 @url{http://freedesktop.org/software/fontconfig/fontconfig-user.html}.
2011 Detect and draw edges. The filter uses the Canny Edge Detection algorithm.
2013 This filter accepts the following optional named parameters:
Set low and high threshold values used by the Canny thresholding
algorithm.
2020 The high threshold selects the "strong" edge pixels, which are then
2021 connected through 8-connectivity with the "weak" edge pixels selected
2022 by the low threshold.
@var{low} and @var{high} threshold values must be chosen in the range
[0,1], and @var{low} should be less than or equal to @var{high}.
2027 Default value for @var{low} is @code{20/255}, and default value for @var{high}
2033 edgedetect=low=0.1:high=0.4
2038 Apply fade-in/out effect to input video.
2040 It accepts the parameters:
2041 @var{type}:@var{start_frame}:@var{nb_frames}[:@var{options}]
@var{type} specifies the effect type, and can be either "in" for a
fade-in, or "out" for a fade-out effect.
2046 @var{start_frame} specifies the number of the start frame for starting
2047 to apply the fade effect.
2049 @var{nb_frames} specifies the number of frames for which the fade
effect has to last. At the end of the fade-in effect the output video
will have the same intensity as the input video; at the end of the
fade-out transition the output video will be completely black.
2054 @var{options} is an optional sequence of @var{key}=@var{value} pairs,
2055 separated by ":". The description of the accepted options follows.
2062 @item start_frame, s
2063 See @var{start_frame}.
2066 See @var{nb_frames}.
2069 If set to 1, fade only alpha channel, if one exists on the input.
A few usage examples follow; they can also be used as test scenarios.
2075 # fade in first 30 frames of video
2078 # fade out last 45 frames of a 200-frame video
2081 # fade in first 25 frames and fade out last 25 frames of a 1000-frame video
2082 fade=in:0:25, fade=out:975:25
2084 # make first 5 frames black, then fade in from frame 5-24
2087 # fade in alpha over first 25 frames of video
2088 fade=in:0:25:alpha=1
2093 Transform the field order of the input video.
2095 It accepts one parameter which specifies the required field order that
2096 the input interlaced video will be transformed to. The parameter can
2097 assume one of the following values:
2101 output bottom field first
2103 output top field first
2106 Default value is "tff".
2108 Transformation is achieved by shifting the picture content up or down
2109 by one line, and filling the remaining line with appropriate picture content.
2110 This method is consistent with most broadcast field order converters.
If the input video is not flagged as being interlaced, or it is already
flagged as being of the required output field order, then this filter does
not alter the incoming video.
2116 This filter is very useful when converting to or from PAL DV material,
2117 which is bottom field first.
2121 ffmpeg -i in.vob -vf "fieldorder=bff" out.dv
2126 Buffer input images and send them when they are requested.
2128 This filter is mainly useful when auto-inserted by the libavfilter
2131 The filter does not take parameters.
2135 Convert the input video to one of the specified pixel formats.
2136 Libavfilter will try to pick one that is supported for the input to
2139 The filter accepts a list of pixel format names, separated by ":",
2140 for example "yuv420p:monow:rgb24".
2142 Some examples follow:
2144 # convert the input video to the format "yuv420p"
2147 # convert the input video to any of the formats in the list
2148 format=yuv420p:yuv444p:yuv410p
Convert the video to the specified constant frame rate by duplicating or
dropping frames as necessary.
2156 This filter accepts the following named parameters:
2160 Desired output framerate.
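For example, assuming the frame rate parameter described above is named
@option{fps}, the following converts the input to a constant 25 frames
per second:
@example
fps=fps=25
@end example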
2166 Select one frame every N.
This filter accepts as input a string representing a positive
integer. The default argument is @code{1}.
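For example, to select one frame out of every 5 input frames:
@example
framestep=5
@end example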
2174 Apply a frei0r effect to the input video.
2176 To enable compilation of this filter you need to install the frei0r
2177 header and configure FFmpeg with @code{--enable-frei0r}.
2179 The filter supports the syntax:
2181 @var{filter_name}[@{:|=@}@var{param1}:@var{param2}:...:@var{paramN}]
@var{filter_name} is the name of the frei0r effect to load. If the
environment variable @env{FREI0R_PATH} is defined, the frei0r effect
is searched for in each of the directories specified by the
colon-separated list in @env{FREI0R_PATH}; otherwise it is searched for
in the standard frei0r paths, which are, in this order:
@file{HOME/.frei0r-1/lib/}, @file{/usr/local/lib/frei0r-1/},
@file{/usr/lib/frei0r-1/}.
2191 @var{param1}, @var{param2}, ... , @var{paramN} specify the parameters
2192 for the frei0r effect.
A frei0r effect parameter can be a boolean (whose values are specified
with "y" and "n"), a double, a color (specified by the syntax
@var{R}/@var{G}/@var{B}, where @var{R}, @var{G}, and @var{B} are float
numbers from 0.0 to 1.0, or by an @code{av_parse_color()} color
description), a position (specified by the syntax @var{X}/@var{Y},
where @var{X} and @var{Y} are float numbers) or a string.
2201 The number and kind of parameters depend on the loaded effect. If an
2202 effect parameter is not specified the default value is set.
2204 Some examples follow:
2208 Apply the distort0r effect, set the first two double parameters:
2210 frei0r=distort0r:0.5:0.01
2214 Apply the colordistance effect, takes a color as first parameter:
2216 frei0r=colordistance:0.2/0.3/0.4
2217 frei0r=colordistance:violet
2218 frei0r=colordistance:0x112233
2222 Apply the perspective effect, specify the top left and top right image
2225 frei0r=perspective:0.2/0.2:0.8/0.2
2229 For more information see:
2230 @url{http://frei0r.dyne.org}
2234 Fix the banding artifacts that are sometimes introduced into nearly flat
2235 regions by truncation to 8bit color depth.
2236 Interpolate the gradients that should go where the bands are, and
2239 This filter is designed for playback only. Do not use it prior to
2240 lossy compression, because compression tends to lose the dither and
2241 bring back the bands.
2243 The filter takes two optional parameters, separated by ':':
2244 @var{strength}:@var{radius}
@var{strength} is the maximum amount by which the filter will change
any one pixel. It is also the threshold for detecting nearly flat
regions. Acceptable values range from .51 to 255; the default value is
1.2, and out-of-range values will be clipped to the valid range.
@var{radius} is the neighborhood to fit the gradient to. A larger
radius makes for smoother gradients, but also prevents the filter from
modifying the pixels near detailed regions. Acceptable values are
8-32; the default value is 16, and out-of-range values will be clipped to the
2258 # default parameters
2267 Flip the input video horizontally.
2269 For example to horizontally flip the input video with @command{ffmpeg}:
2271 ffmpeg -i in.avi -vf "hflip" out.avi
2276 High precision/quality 3d denoise filter. This filter aims to reduce
image noise, producing smooth images and making still images really
still. It should enhance compressibility.
2280 It accepts the following optional parameters:
2281 @var{luma_spatial}:@var{chroma_spatial}:@var{luma_tmp}:@var{chroma_tmp}
2285 a non-negative float number which specifies spatial luma strength,
2288 @item chroma_spatial
2289 a non-negative float number which specifies spatial chroma strength,
2290 defaults to 3.0*@var{luma_spatial}/4.0
2293 a float number which specifies luma temporal strength, defaults to
2294 6.0*@var{luma_spatial}/4.0
2297 a float number which specifies chroma temporal strength, defaults to
2298 @var{luma_tmp}*@var{chroma_spatial}/@var{luma_spatial}
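As an illustrative sketch, the following sets only @var{luma_spatial}
and lets the other strengths assume the derived defaults described above:
@example
hqdn3d=1.5
@end example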
2303 Modify the hue and/or the saturation of the input.
2305 This filter accepts the following optional named options:
2309 Specify the hue angle as a number of degrees. It accepts a float
2310 number or an expression, and defaults to 0.0.
Specify the hue angle as a number of radians. It accepts a float
number or an expression, and defaults to 0.0.
2317 Specify the saturation in the [-10,10] range. It accepts a float number and
2321 The options can also be set using the syntax: @var{hue}:@var{saturation}
2323 In this case @var{hue} is expressed in degrees.
2325 Some examples follow:
2328 Set the hue to 90 degrees and the saturation to 1.0:
2334 Same command but expressing the hue in radians:
2340 Same command without named options, hue must be expressed in degrees:
2346 Note that "h:s" syntax does not support expressions for the values of
2347 h and s, so the following example will issue an error:
2353 @subsection Commands
2355 This filter supports the following command:
2358 Modify the hue and/or the saturation of the input video.
The command accepts the same named options and syntax as when calling the
filter from the command-line.
2362 If a parameter is omitted, it is kept at its current value.
Interlacing detection filter. This filter tries to detect if the input is
interlaced or progressive, and whether it is top or bottom field first.
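A minimal way to try the filter is to run it on an input file (here the
illustrative @file{in.avi}) while discarding the output, and read the
detection information it logs:
@example
ffmpeg -i in.avi -vf idet -f null -
@end example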
2370 @section lut, lutrgb, lutyuv
2372 Compute a look-up table for binding each pixel component input value
2373 to an output value, and apply it to input video.
2375 @var{lutyuv} applies a lookup table to a YUV input video, @var{lutrgb}
2376 to an RGB input video.
2378 These filters accept in input a ":"-separated list of options, which
2379 specify the expressions used for computing the lookup table for the
2380 corresponding pixel component values.
2382 The @var{lut} filter requires either YUV or RGB pixel formats in
2383 input, and accepts the options:
2386 first pixel component
2388 second pixel component
2390 third pixel component
2392 fourth pixel component, corresponds to the alpha component
The exact component associated with each option depends on the format in
2398 The @var{lutrgb} filter requires RGB pixel formats in input, and
2399 accepts the options:
2411 The @var{lutyuv} filter requires YUV pixel formats in input, and
2412 accepts the options:
2415 Y/luminance component
2424 The expressions can contain the following constants and functions:
2428 the input width and height
2431 input value for the pixel component
2434 the input value clipped in the @var{minval}-@var{maxval} range
2437 maximum value for the pixel component
2440 minimum value for the pixel component
the negated value of the pixel component value, clipped in the
@var{minval}-@var{maxval} range; it corresponds to the expression
"maxval-clipval+minval"
2448 the computed value in @var{val} clipped in the
2449 @var{minval}-@var{maxval} range
2451 @item gammaval(gamma)
2452 the computed gamma correction value of the pixel component value
2453 clipped in the @var{minval}-@var{maxval} range, corresponds to the
2455 "pow((clipval-minval)/(maxval-minval)\,@var{gamma})*(maxval-minval)+minval"
2459 All expressions default to "val".
2461 Some examples follow:
2463 # negate input video
2464 lutrgb="r=maxval+minval-val:g=maxval+minval-val:b=maxval+minval-val"
2465 lutyuv="y=maxval+minval-val:u=maxval+minval-val:v=maxval+minval-val"
2467 # the above is the same as
2468 lutrgb="r=negval:g=negval:b=negval"
2469 lutyuv="y=negval:u=negval:v=negval"
2474 # remove chroma components, turns the video into a graytone image
2475 lutyuv="u=128:v=128"
2477 # apply a luma burning effect
2480 # remove green and blue components
2483 # set a constant alpha channel value on input
2484 format=rgba,lutrgb=a="maxval-minval/2"
2486 # correct luminance gamma by a 0.5 factor
2487 lutyuv=y=gammaval(0.5)
2492 Apply an MPlayer filter to the input video.
2494 This filter provides a wrapper around most of the filters of
2497 This wrapper is considered experimental. Some of the wrapped filters
2498 may not work properly and we may drop support for them, as they will
2499 be implemented natively into FFmpeg. Thus you should avoid
2500 depending on them when writing portable scripts.
The filter accepts the parameters:
2503 @var{filter_name}[:=]@var{filter_params}
2505 @var{filter_name} is the name of a supported MPlayer filter,
2506 @var{filter_params} is a string containing the parameters accepted by
2509 The list of the currently supported filters follows:
The parameter syntax and behavior for the listed filters are the same
as those of the corresponding MPlayer filters. For detailed instructions check
2555 the "VIDEO FILTERS" section in the MPlayer manual.
2557 Some examples follow:
2560 Adjust gamma, brightness, contrast:
2566 Add temporal noise to input video:
2572 See also mplayer(1), @url{http://www.mplayerhq.hu/}.
This filter accepts an integer as input; if non-zero it negates the
alpha component (if available). The default value is 0.
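For example, to negate the input and, if an alpha plane is present,
the alpha component too:
@example
negate=1
@end example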
2583 Force libavfilter not to use any of the specified pixel formats for the
2584 input to the next filter.
2586 The filter accepts a list of pixel format names, separated by ":",
2587 for example "yuv420p:monow:rgb24".
2589 Some examples follow:
2591 # force libavfilter to use a format different from "yuv420p" for the
2592 # input to the vflip filter
2593 noformat=yuv420p,vflip
2595 # convert the input video to any of the formats not contained in the list
2596 noformat=yuv420p:yuv444p:yuv410p
2601 Pass the video source unchanged to the output.
2605 Apply video transform using libopencv.
2607 To enable this filter install libopencv library and headers and
2608 configure FFmpeg with @code{--enable-libopencv}.
2610 The filter takes the parameters: @var{filter_name}@{:=@}@var{filter_params}.
2612 @var{filter_name} is the name of the libopencv filter to apply.
2614 @var{filter_params} specifies the parameters to pass to the libopencv
2615 filter. If not specified the default values are assumed.
2617 Refer to the official libopencv documentation for more precise
2619 @url{http://opencv.willowgarage.com/documentation/c/image_filtering.html}
The list of supported libopencv filters follows.
2626 Dilate an image by using a specific structuring element.
2627 This filter corresponds to the libopencv function @code{cvDilate}.
2629 It accepts the parameters: @var{struct_el}:@var{nb_iterations}.
2631 @var{struct_el} represents a structuring element, and has the syntax:
2632 @var{cols}x@var{rows}+@var{anchor_x}x@var{anchor_y}/@var{shape}
2634 @var{cols} and @var{rows} represent the number of columns and rows of
2635 the structuring element, @var{anchor_x} and @var{anchor_y} the anchor
2636 point, and @var{shape} the shape for the structuring element, and
2637 can be one of the values "rect", "cross", "ellipse", "custom".
2639 If the value for @var{shape} is "custom", it must be followed by a
2640 string of the form "=@var{filename}". The file with name
2641 @var{filename} is assumed to represent a binary image, with each
2642 printable character corresponding to a bright pixel. When a custom
@var{shape} is used, @var{cols} and @var{rows} are ignored; the number
of columns and rows of the read file are assumed instead.
2646 The default value for @var{struct_el} is "3x3+0x0/rect".
2648 @var{nb_iterations} specifies the number of times the transform is
2649 applied to the image, and defaults to 1.
Some examples follow:
2653 # use the default values
2656 # dilate using a structuring element with a 5x5 cross, iterate two times
2657 ocv=dilate=5x5+2x2/cross:2
2659 # read the shape from the file diamond.shape, iterate two times
2660 # the file diamond.shape may contain a pattern of characters like this:
2666 # the specified cols and rows are ignored (but not the anchor point coordinates)
2667 ocv=0x0+2x2/custom=diamond.shape:2
2672 Erode an image by using a specific structuring element.
2673 This filter corresponds to the libopencv function @code{cvErode}.
2675 The filter accepts the parameters: @var{struct_el}:@var{nb_iterations},
2676 with the same syntax and semantics as the @ref{dilate} filter.
2680 Smooth the input video.
2682 The filter takes the following parameters:
2683 @var{type}:@var{param1}:@var{param2}:@var{param3}:@var{param4}.
2685 @var{type} is the type of smooth filter to apply, and can be one of
2686 the following values: "blur", "blur_no_scale", "median", "gaussian",
2687 "bilateral". The default value is "gaussian".
2689 @var{param1}, @var{param2}, @var{param3}, and @var{param4} are
parameters whose meanings depend on the smooth type. @var{param1} and
@var{param2} accept positive integer values or 0, while @var{param3} and
@var{param4} accept float values.
2694 The default value for @var{param1} is 3, the default value for the
2695 other parameters is 0.
2697 These parameters correspond to the parameters assigned to the
2698 libopencv function @code{cvSmooth}.
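As an illustrative sketch, the following applies the default "gaussian"
smooth type with @var{param1} and @var{param2} both set to 5 (the values
here are arbitrary):
@example
ocv=smooth=gaussian:5:5
@end example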
2703 Overlay one video on top of another.
It takes two inputs and one output; the first input is the "main"
video on which the second input is overlaid.
2708 It accepts the parameters: @var{x}:@var{y}[:@var{options}].
2710 @var{x} is the x coordinate of the overlayed video on the main video,
2711 @var{y} is the y coordinate. @var{x} and @var{y} are expressions containing
2712 the following parameters:
2715 @item main_w, main_h
2716 main input width and height
2719 same as @var{main_w} and @var{main_h}
2721 @item overlay_w, overlay_h
2722 overlay input width and height
2725 same as @var{overlay_w} and @var{overlay_h}
2728 @var{options} is an optional list of @var{key}=@var{value} pairs,
2731 The description of the accepted options follows.
2735 If set to 1, force the filter to accept inputs in the RGB
2736 color space. Default value is 0.
Be aware that frames are taken from each input video in timestamp
order; hence, if their initial timestamps differ, it is a good idea
to pass the two inputs through a @var{setpts=PTS-STARTPTS} filter to
have them begin with the same zero timestamp, as is done in the example
for the @var{movie} filter.
Some examples follow:
2747 # draw the overlay at 10 pixels from the bottom right
2748 # corner of the main video.
2749 overlay=main_w-overlay_w-10:main_h-overlay_h-10
2751 # insert a transparent PNG logo in the bottom left corner of the input
2752 ffmpeg -i input -i logo -filter_complex 'overlay=10:main_h-overlay_h-10' output
2754 # insert 2 different transparent PNG logos (second logo on bottom
2756 ffmpeg -i input -i logo1 -i logo2 -filter_complex
2757 'overlay=10:H-h-10,overlay=W-w-10:H-h-10' output
2759 # add a transparent color layer on top of the main video,
2760 # WxH specifies the size of the main input to the overlay filter
2761 color=red@.3:WxH [over]; [in][over] overlay [out]
2763 # play an original video and a filtered version (here with the deshake filter)
2765 ffplay input.avi -vf 'split[a][b]; [a]pad=iw*2:ih[src]; [b]deshake[filt]; [src][filt]overlay=w'
2767 # the previous example is the same as:
2768 ffplay input.avi -vf 'split[b], pad=iw*2[src], [b]deshake, [src]overlay=w'
You can chain together more overlays, but the efficiency of such an
approach is yet to be tested.
Add paddings to the input image, and place the original input at the
given coordinates @var{x}, @var{y}.
2779 It accepts the following parameters:
2780 @var{width}:@var{height}:@var{x}:@var{y}:@var{color}.
2782 The parameters @var{width}, @var{height}, @var{x}, and @var{y} are
2783 expressions containing the following constants:
2787 the input video width and height
2790 same as @var{in_w} and @var{in_h}
2793 the output width and height, that is the size of the padded area as
2794 specified by the @var{width} and @var{height} expressions
2797 same as @var{out_w} and @var{out_h}
2800 x and y offsets as specified by the @var{x} and @var{y}
2801 expressions, or NAN if not yet specified
2804 same as @var{iw} / @var{ih}
2807 input sample aspect ratio
2810 input display aspect ratio, it is the same as (@var{iw} / @var{ih}) * @var{sar}
2813 horizontal and vertical chroma subsample values. For example for the
2814 pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
The description of the accepted parameters follows.
2822 Specify the size of the output image with the paddings added. If the
2823 value for @var{width} or @var{height} is 0, the corresponding input size
2824 is used for the output.
2826 The @var{width} expression can reference the value set by the
2827 @var{height} expression, and vice versa.
2829 The default value of @var{width} and @var{height} is 0.
2833 Specify the offsets where to place the input image in the padded area
2834 with respect to the top/left border of the output image.
2836 The @var{x} expression can reference the value set by the @var{y}
2837 expression, and vice versa.
2839 The default value of @var{x} and @var{y} is 0.
Specify the color of the padded area. It can be the name of a color
(case insensitive match) or a 0xRRGGBB[AA] sequence.
2846 The default value of @var{color} is "black".
2854 Add paddings with color "violet" to the input video. Output video
2855 size is 640x480, the top-left corner of the input video is placed at
2858 pad=640:480:0:40:violet
2862 Pad the input to get an output with dimensions increased by 3/2,
2863 and put the input video at the center of the padded area:
2865 pad="3/2*iw:3/2*ih:(ow-iw)/2:(oh-ih)/2"
2869 Pad the input to get a squared output with size equal to the maximum
2870 value between the input width and height, and put the input video at
2871 the center of the padded area:
2873 pad="max(iw\,ih):ow:(ow-iw)/2:(oh-ih)/2"
2877 Pad the input to get a final w/h ratio of 16:9:
2879 pad="ih*16/9:ih:(ow-iw)/2:(oh-ih)/2"
2883 In case of anamorphic video, in order to set the output display aspect
2884 correctly, it is necessary to use @var{sar} in the expression,
2885 according to the relation:
2887 (ih * X / ih) * sar = output_dar
2888 X = output_dar / sar
2891 Thus the previous example needs to be modified to:
2893 pad="ih*16/9/sar:ih:(ow-iw)/2:(oh-ih)/2"
2897 Double output size and put the input video in the bottom-right
2898 corner of the output padded area:
2900 pad="2*iw:2*ih:ow-iw:oh-ih"
2904 @section pixdesctest
2906 Pixel format descriptor test filter, mainly useful for internal
2907 testing. The output video should be equal to the input video.
2911 format=monow, pixdesctest
2914 can be used to test the monowhite pixel format descriptor definition.
2918 Suppress a TV station logo, using an image file to determine which
2919 pixels comprise the logo. It works by filling in the pixels that
2920 comprise the logo with neighboring pixels.
2922 This filter requires one argument which specifies the filter bitmap
2923 file, which can be any image format supported by libavformat. The
2924 width and height of the image file must match those of the video
2925 stream being processed.
2927 Pixels in the provided bitmap image with a value of zero are not
2928 considered part of the logo, non-zero pixels are considered part of
2929 the logo. If you use white (255) for the logo and black (0) for the
rest, you will be safe. To make the filter bitmap, it is
recommended to take a screen capture of a black frame with the logo
visible, and then use a threshold filter followed by the erode
filter once or twice.
2935 If needed, little splotches can be fixed manually. Remember that if
2936 logo pixels are not covered, the filter quality will be much
2937 reduced. Marking too many pixels as part of the logo does not hurt as
2938 much, but it will increase the amount of blurring needed to cover over
2939 the image and will destroy more information than necessary, and extra
2940 pixels will slow things down on a large logo.
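For example, assuming a mask image named @file{logo_mask.png} (a purely
illustrative file name) whose dimensions match those of the video:
@example
removelogo=logo_mask.png
@end example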
2944 Scale the input video to @var{width}:@var{height}[:@var{interl}=@{1|-1@}] and/or convert the image format.
The scale filter forces the output display aspect ratio to be the same
as that of the input, by changing the output sample aspect ratio.
2949 The parameters @var{width} and @var{height} are expressions containing
2950 the following constants:
2954 the input width and height
2957 same as @var{in_w} and @var{in_h}
2960 the output (cropped) width and height
2963 same as @var{out_w} and @var{out_h}
2966 same as @var{iw} / @var{ih}
2969 input sample aspect ratio
2972 input display aspect ratio, it is the same as (@var{iw} / @var{ih}) * @var{sar}
2975 horizontal and vertical chroma subsample values. For example for the
2976 pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
2979 If the input image format is different from the format requested by
2980 the next filter, the scale filter will convert the input to the
2983 If the value for @var{width} or @var{height} is 0, the respective input
2984 size is used for the output.
2986 If the value for @var{width} or @var{height} is -1, the scale filter will
2987 use, for the respective output size, a value that maintains the aspect
2988 ratio of the input image.
2990 The default value of @var{width} and @var{height} is 0.
2992 Valid values for the optional parameter @var{interl} are:
2996 force interlaced aware scaling
2999 select interlaced aware scaling depending on whether the source frames
3000 are flagged as interlaced or not
3003 Unless @var{interl} is set to one of the above options, interlaced scaling will not be used.
3005 Some examples follow:
3007 # scale the input video to a size of 200x100.
3010 # scale the input to 2x
3012 # the above is the same as
3015 # scale the input to 2x with forced interlaced scaling
3016 scale=2*iw:2*ih:interl=1
3018 # scale the input to half size
3021 # increase the width, and set the height to the same size
3024 # seek for Greek harmony
3028 # increase the height, and set the width to 3/2 of the height
3031 # increase the size, but make the size a multiple of the chroma
3032 scale="trunc(3/2*iw/hsub)*hsub:trunc(3/2*ih/vsub)*vsub"
3034 # increase the width to a maximum of 500 pixels, keep the same input aspect ratio
3035 scale='min(500\, iw*3/2):-1'
3039 Select frames to pass in output.
It accepts an expression as input, which is evaluated for each input
frame. If the expression evaluates to a non-zero value, the frame
is selected and passed to the output; otherwise it is discarded.
3045 The expression can contain the following constants:
3049 the sequential number of the filtered frame, starting from 0
3052 the sequential number of the selected frame, starting from 0
3054 @item prev_selected_n
3055 the sequential number of the last selected frame, NAN if undefined
3058 timebase of the input timestamps
3061 the PTS (Presentation TimeStamp) of the filtered video frame,
3062 expressed in @var{TB} units, NAN if undefined
3065 the PTS (Presentation TimeStamp) of the filtered video frame,
3066 expressed in seconds, NAN if undefined
3069 the PTS of the previously filtered video frame, NAN if undefined
3071 @item prev_selected_pts
3072 the PTS of the last previously filtered video frame, NAN if undefined
3074 @item prev_selected_t
3075 the PTS of the last previously selected video frame, NAN if undefined
3078 the PTS of the first video frame in the video, NAN if undefined
3081 the time of the first video frame in the video, NAN if undefined
3084 the type of the filtered frame, can assume one of the following
3096 @item interlace_type
3097 the frame interlace type, can assume one of the following values:
3100 the frame is progressive (not interlaced)
3102 the frame is top-field-first
3104 the frame is bottom-field-first
3108 1 if the filtered frame is a key-frame, 0 otherwise
3111 the position in the file of the filtered frame, -1 if the information
3112 is not available (e.g. for synthetic video)
3115 value between 0 and 1 to indicate a new scene; a low value reflects a low
3116 probability for the current frame to introduce a new scene, while a higher
3117 value means the current frame is more likely to be one (see the example below)
3121 The default value of the select expression is "1".
3123 Some examples follow:
3126 # select all frames in input
3129 # the above is the same as:
3135 # select only I-frames
3136 select='eq(pict_type\,I)'
3138 # select one frame every 100
3139 select='not(mod(n\,100))'
3141 # select only frames contained in the 10-20 time interval
3142 select='gte(t\,10)*lte(t\,20)'
3144 # select only I frames contained in the 10-20 time interval
3145 select='gte(t\,10)*lte(t\,20)*eq(pict_type\,I)'
3147 # select frames with a minimum distance of 10 seconds
3148 select='isnan(prev_selected_t)+gte(t-prev_selected_t\,10)'
3151 Complete example to create a mosaic of the first scenes:
3154 ffmpeg -i video.avi -vf select='gt(scene\,0.4)',scale=160:120,tile -frames:v 1 preview.png
3157 Comparing @var{scene} against a value between 0.3 and 0.5 is generally a sane
3160 @section setdar, setsar
3162 The @code{setdar} filter sets the Display Aspect Ratio for the filter
3165 This is done by changing the specified Sample (aka Pixel) Aspect
3166 Ratio, according to the following equation:
3168 @var{DAR} = @var{HORIZONTAL_RESOLUTION} / @var{VERTICAL_RESOLUTION} * @var{SAR}
3171 Keep in mind that the @code{setdar} filter does not modify the pixel
3172 dimensions of the video frame. Also the display aspect ratio set by
3173 this filter may be changed by later filters in the filterchain,
3174 e.g. in case of scaling or if another "setdar" or a "setsar" filter is
3177 The @code{setsar} filter sets the Sample (aka Pixel) Aspect Ratio for
3178 the filter output video.
3180 Note that as a consequence of the application of this filter, the
3181 output display aspect ratio will change according to the equation
3184 Keep in mind that the sample aspect ratio set by the @code{setsar}
3185 filter may be changed by later filters in the filterchain, e.g. if
3186 another "setsar" or a "setdar" filter is applied.
3188 The @code{setdar} and @code{setsar} filters accept a parameter string
3189 which represents the wanted aspect ratio. The parameter can
3190 be a floating point number string, an expression, or a string of the form
3191 @var{num}:@var{den}, where @var{num} and @var{den} are the numerator
and denominator of the aspect ratio. If the parameter is not
specified, the value "0:1" is assumed.
3195 For example to change the display aspect ratio to 16:9, specify:
3200 The example above is equivalent to:
3205 To change the sample aspect ratio to 10:11, specify:
3212 Force field for the output video frame.
3214 The @code{setfield} filter marks the interlace type field for the
3215 output frames. It does not change the input frame, but only sets the
3216 corresponding property, which affects how the frame is treated by
3217 following filters (e.g. @code{fieldorder} or @code{yadif}).
3219 It accepts a string parameter, which can assume the following values:
3222 Keep the same field property.
3225 Mark the frame as bottom-field-first.
3228 Mark the frame as top-field-first.
3231 Mark the frame as progressive.
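For example, assuming an input that carries no interlacing flags but is
known to be bottom field first, the following chain (using the @code{bff}
and @code{tff} values listed above) marks the frames and then converts
them with the @code{fieldorder} filter:
@example
setfield=bff,fieldorder=tff
@end example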
3236 Show a line containing various information for each input video frame.
3237 The input video is not modified.
3239 The shown line contains a sequence of key/value pairs of the form
3240 @var{key}:@var{value}.
3242 A description of each shown parameter follows:
3246 sequential number of the input frame, starting from 0
3249 Presentation TimeStamp of the input frame, expressed as a number of
3250 time base units. The time base unit depends on the filter input pad.
3253 Presentation TimeStamp of the input frame, expressed as a number of
position of the frame in the input stream, or -1 if this information is
unavailable and/or meaningless (for example in case of synthetic video)
3264 sample aspect ratio of the input frame, expressed in the form
3268 size of the input frame, expressed in the form
3269 @var{width}x@var{height}
3272 interlaced mode ("P" for "progressive", "T" for top field first, "B"
3273 for bottom field first)
3276 1 if the frame is a key frame, 0 otherwise
3279 picture type of the input frame ("I" for an I-frame, "P" for a
3280 P-frame, "B" for a B-frame, "?" for unknown type).
3281 Check also the documentation of the @code{AVPictureType} enum and of
3282 the @code{av_get_picture_type_char} function defined in
3283 @file{libavutil/avutil.h}.
3286 Adler-32 checksum (printed in hexadecimal) of all the planes of the input frame
3288 @item plane_checksum
3289 Adler-32 checksum (printed in hexadecimal) of each plane of the input frame,
3290 expressed in the form "[@var{c0} @var{c1} @var{c2} @var{c3}]"
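For example, to print a line of information for every frame of the
illustrative input @file{in.avi} while discarding the output:
@example
ffmpeg -i in.avi -vf showinfo -f null -
@end example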
Pass the images of the input video on to the next video filter as multiple
3299 ffmpeg -i in.avi -vf "slicify=32" out.avi
3302 The filter accepts the slice height as parameter. If the parameter is
3303 not specified it will use the default value of 16.
3305 Adding this in the beginning of filter chains should make filtering
3306 faster due to better use of the memory cache.
3310 Blur the input video without impacting the outlines.
3312 The filter accepts the following parameters:
3313 @var{luma_radius}:@var{luma_strength}:@var{luma_threshold}[:@var{chroma_radius}:@var{chroma_strength}:@var{chroma_threshold}]
3315 Parameters prefixed by @var{luma} indicate that they work on the
3316 luminance of the pixels whereas parameters prefixed by @var{chroma}
3317 refer to the chrominance of the pixels.
If the chroma parameters are not set, the luma parameters are used for
both the luminance and the chrominance of the pixels.
3322 @var{luma_radius} or @var{chroma_radius} must be a float number in the
3323 range [0.1,5.0] that specifies the variance of the gaussian filter
3324 used to blur the image (slower if larger).
3326 @var{luma_strength} or @var{chroma_strength} must be a float number in
3327 the range [-1.0,1.0] that configures the blurring. A value included in
3328 [0.0,1.0] will blur the image whereas a value included in [-1.0,0.0]
3329 will sharpen the image.
3331 @var{luma_threshold} or @var{chroma_threshold} must be an integer in
3332 the range [-30,30] that is used as a coefficient to determine whether
a pixel should be blurred or not. A value of 0 will filter the whole
image, a value in [0,30] will filter flat areas, and a value
in [-30,0] will filter edges.
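As an illustrative sketch, the following blurs the luma plane of the
whole image with a radius of 1.0 and a strength of 0.6, letting the
chroma parameters default to the luma values:
@example
smartblur=1.0:0.6:0
@end example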
3339 Split input video into several identical outputs.
3341 The filter accepts a single parameter which specifies the number of outputs. If
3342 unspecified, it defaults to 2.
3346 ffmpeg -i INPUT -filter_complex split=5 OUTPUT
3348 will create 5 copies of the input video.
3352 [in] split [splitout1][splitout2];
3353 [splitout1] crop=100:100:0:0 [cropout];
3354 [splitout2] pad=200:200:100:100 [padout];
3357 will create two separate outputs from the same input, one cropped and
3362 Scale the input by 2x and smooth using the Super2xSaI (Scale and
3363 Interpolate) pixel art scaling algorithm.
3365 Useful for enlarging pixel art images without reducing sharpness.
3371 Select the most representative frame in a given sequence of consecutive frames.
3373 It accepts as argument the frames batch size to analyze (default @var{N}=100);
3374 in a set of @var{N} frames, the filter will pick one of them, and then handle
3375 the next batch of @var{N} frames until the end.
3377 Since the filter keeps track of the whole frames sequence, a bigger @var{N}
3378 value will result in a higher memory usage, so a high value is not recommended.
The following example extracts one picture every 50 frames:
3385 Complete example of a thumbnail creation with @command{ffmpeg}:
3387 ffmpeg -i in.avi -vf thumbnail,scale=300:200 -frames:v 1 out.png
3392 Tile several successive frames together.
3394 It accepts as argument the tile size (i.e. the number of lines and columns)
3395 in the form "@var{w}x@var{h}".
3397 For example, produce 8×8 PNG tiles of all keyframes (@option{-skip_frame
3400 ffmpeg -skip_frame nokey -i file.avi -vf 'scale=128:72,tile=8x8' -an -vsync 0 keyframes%03d.png
The @option{-vsync 0} is necessary to prevent @command{ffmpeg} from
duplicating each output frame to accommodate the originally detected frame
3408 Perform various types of temporal field interlacing.
3410 Frames are counted starting from 1, so the first input frame is
3413 This filter accepts a single parameter specifying the mode. Available
3418 Move odd frames into the upper field, even into the lower field,
3419 generating a double height frame at half framerate.
3422 Only output even frames, odd frames are dropped, generating a frame with
3423 unchanged height at half framerate.
3426 Only output odd frames, even frames are dropped, generating a frame with
3427 unchanged height at half framerate.
3430 Expand each frame to full height, but pad alternate lines with black,
3431 generating a frame with double height at the same input framerate.
3433 @item interleave_top, 4
3434 Interleave the upper field from odd frames with the lower field from
3435 even frames, generating a frame with unchanged height at half framerate.
3437 @item interleave_bottom, 5
3438 Interleave the lower field from odd frames with the upper field from
3439 even frames, generating a frame with unchanged height at half framerate.
3441 @item interlacex2, 6
3442 Double frame rate with unchanged height. Frames are inserted each
3443 containing the second temporal field from the previous input frame and
3444 the first temporal field from the next input frame. This mode relies on
3445 the top_field_first flag. Useful for interlaced video displays with no
3446 field synchronisation.
3449 Numeric values are deprecated but are accepted for backward
3450 compatibility reasons.
3452 Default mode is @code{merge}.
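For example, to combine pairs of frames into double-height interlaced
frames at half the frame rate, which is the default @code{merge} mode:
@example
tinterlace=merge
@end example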
3456 Transpose rows with columns in the input video and optionally flip it.
3458 It accepts a parameter representing an integer, which can assume the
3463 Rotate by 90 degrees counterclockwise and vertically flip (default), that is:
3471 Rotate by 90 degrees clockwise, that is:
3479 Rotate by 90 degrees counterclockwise, that is:
3487 Rotate by 90 degrees clockwise and vertically flip, that is:
For values between 4 and 7, transposition is only done if the input video
geometry is portrait and not landscape.
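For example, assuming the 90 degrees clockwise rotation described above
corresponds to the value 1, the input can be rotated clockwise with:
@example
transpose=1
@end example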
3500 Sharpen or blur the input video.
3502 It accepts the following parameters:
3503 @var{luma_msize_x}:@var{luma_msize_y}:@var{luma_amount}:@var{chroma_msize_x}:@var{chroma_msize_y}:@var{chroma_amount}
3505 Negative values for the amount will blur the input video, while positive
3506 values will sharpen. All parameters are optional and default to the
3507 equivalent of the string '5:5:1.0:5:5:0.0'.
3512 Set the luma matrix horizontal size. It can be an integer between 3
3513 and 13, default value is 5.
3516 Set the luma matrix vertical size. It can be an integer between 3
3517 and 13, default value is 5.
3520 Set the luma effect strength. It can be a float number between -2.0
3521 and 5.0, default value is 1.0.
3523 @item chroma_msize_x
3524 Set the chroma matrix horizontal size. It can be an integer between 3
3525 and 13, default value is 5.
3527 @item chroma_msize_y
3528 Set the chroma matrix vertical size. It can be an integer between 3
3529 and 13, default value is 5.
3532 Set the chroma effect strength. It can be a float number between -2.0
3533 and 5.0, default value is 0.0.
3538 # Strong luma sharpen effect parameters
3541 # Strong blur of both luma and chroma parameters
3542 unsharp=7:7:-2:7:7:-2
3544 # Use the default values with @command{ffmpeg}
3545 ffmpeg -i in.avi -vf "unsharp" out.mp4
3550 Flip the input video vertically.
3553 ffmpeg -i in.avi -vf "vflip" out.avi
3558 Deinterlace the input video ("yadif" means "yet another deinterlacing
3561 It accepts the optional parameters: @var{mode}:@var{parity}:@var{auto}.
3563 @var{mode} specifies the interlacing mode to adopt, accepts one of the
3568 output 1 frame for each frame
3570 output 1 frame for each field
3572 like 0 but skips spatial interlacing check
3574 like 1 but skips spatial interlacing check
3579 @var{parity} specifies the picture field parity assumed for the input
3580 interlaced video, accepts one of the following values:
3584 assume top field first
3586 assume bottom field first
3588 enable automatic detection
3591 Default value is -1.
If the interlacing is unknown or the decoder does not export this information,
top field first will be assumed.
@var{auto} specifies whether the deinterlacer should trust the interlaced flag
and only deinterlace frames marked as interlaced
3600 deinterlace all frames
3602 only deinterlace frames marked as interlaced
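For example, to deinterlace @file{in.avi} using the default values for
all parameters:
@example
ffmpeg -i in.avi -vf yadif out.avi
@end example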
3607 @c man end VIDEO FILTERS
3609 @chapter Video Sources
3610 @c man begin VIDEO SOURCES
3612 Below is a description of the currently available video sources.
3616 Buffer video frames, and make them available to the filter chain.
This source is mainly intended for programmatic use, in particular
3619 through the interface defined in @file{libavfilter/vsrc_buffer.h}.
3621 It accepts a list of options in the form of @var{key}=@var{value} pairs
separated by ":". A description of the accepted options follows.
3627 Specify the size (width and height) of the buffered video frames.
3630 A string representing the pixel format of the buffered video frames.
3631 It may be a number corresponding to a pixel format, or a pixel format
3635 Specify the timebase assumed by the timestamps of the buffered frames.
3638 Specify the frame rate expected for the video stream.
3641 Specify the sample aspect ratio assumed by the video frames.
3644 Specify the optional parameters to be used for the scale filter which
3645 is automatically inserted when an input change is detected in the
3646 input size or format.
3651 buffer=size=320x240:pix_fmt=yuv410p:time_base=1/24:pixel_aspect=1/1
3654 will instruct the source to accept video frames with size 320x240 and
with format "yuv410p", assuming 1/24 as the timestamp time base and
3656 square pixels (1:1 sample aspect ratio).
3657 Since the pixel format with name "yuv410p" corresponds to the number 6
3658 (check the enum PixelFormat definition in @file{libavutil/pixfmt.h}),
3659 this example corresponds to:
3661 buffer=size=320x240:pixfmt=6:time_base=1/24:pixel_aspect=1/1
3664 Alternatively, the options can be specified as a flat string, but this
3665 syntax is deprecated:
3667 @var{width}:@var{height}:@var{pix_fmt}:@var{time_base.num}:@var{time_base.den}:@var{pixel_aspect.num}:@var{pixel_aspect.den}[:@var{sws_param}]
3671 Create a pattern generated by an elementary cellular automaton.
The initial state of the cellular automaton can be defined through the
@option{filename} and @option{pattern} options. If such options are
not specified, an initial state is created randomly.
3677 At each new frame a new row in the video is filled with the result of
3678 the cellular automaton next generation. The behavior when the whole
3679 frame is filled is defined by the @option{scroll} option.
3681 This source accepts a list of options in the form of
3682 @var{key}=@var{value} pairs separated by ":". A description of the
3683 accepted options follows.
3687 Read the initial cellular automaton state, i.e. the starting row, from
3689 In the file, each non-whitespace character is considered an alive
3690 cell, a newline will terminate the row, and further characters in the
3691 file will be ignored.
3694 Read the initial cellular automaton state, i.e. the starting row, from
3695 the specified string.
3697 Each non-whitespace character in the string is considered an alive
3698 cell, a newline will terminate the row, and further characters in the
3699 string will be ignored.
3702 Set the video rate, that is the number of frames generated per second.
3705 @item random_fill_ratio, ratio
3706 Set the random fill ratio for the initial cellular automaton row. It
3707 is a floating point number value ranging from 0 to 1, defaults to
3710 This option is ignored when a file or a pattern is specified.
3712 @item random_seed, seed
Set the seed for randomly filling the initial row; it must be an integer
between 0 and UINT32_MAX. If not specified, or if explicitly
set to -1, the filter will try to use a good random seed on a best
Set the cellular automaton rule; it is a number ranging from 0 to 255.
3720 Default value is 110.
3723 Set the size of the output video.
3725 If @option{filename} or @option{pattern} is specified, the size is set
3726 by default to the width of the specified initial state row, and the
3727 height is set to @var{width} * PHI.
3729 If @option{size} is set, it must contain the width of the specified
3730 pattern string, and the specified pattern will be centered in the
3733 If a filename or a pattern string is not specified, the size value
3734 defaults to "320x518" (used for a randomly generated initial state).
If set to 1, scroll the output upward when all the rows in the output
have already been filled. If set to 0, the newly generated row will be
written over the top row just after the bottom row is filled.
3742 @item start_full, full
3743 If set to 1, completely fill the output with generated rows before
3744 outputting the first frame.
This is the default behavior; to disable it, set the value to 0.
3748 If set to 1, stitch the left and right row edges together.
This is the default behavior; to disable it, set the value to 0.
3752 @subsection Examples
3756 Read the initial state from @file{pattern}, and specify an output of
3759 cellauto=f=pattern:s=200x400
3763 Generate a random initial row with a width of 200 cells, with a fill
3766 cellauto=ratio=2/3:s=200x200
Create a pattern generated by rule 18, starting from a single alive cell
centered on an initial row with width 100:
3773 cellauto=p=@@:s=100x400:full=0:rule=18
Specify a more elaborate initial pattern:
3779 cellauto=p='@@@@ @@ @@@@':s=100x400:full=0:rule=18
3786 Generate a Mandelbrot set fractal, and progressively zoom towards the
3787 point specified with @var{start_x} and @var{start_y}.
3789 This source accepts a list of options in the form of
3790 @var{key}=@var{value} pairs separated by ":". A description of the
3791 accepted options follows.
3796 Set the terminal pts value. Default value is 400.
3799 Set the terminal scale value.
3800 Must be a floating point value. Default value is 0.3.
3803 Set the inner coloring mode, that is the algorithm used to draw the
3804 Mandelbrot fractal internal region.
3806 It shall assume one of the following values:
3811 Show time until convergence.
3813 Set color based on point closest to the origin of the iterations.
3818 Default value is @var{mincol}.
3821 Set the bailout value. Default value is 10.0.
3824 Set the maximum of iterations performed by the rendering
3825 algorithm. Default value is 7189.
3828 Set outer coloring mode.
It shall assume one of the following values:
3831 @item iteration_count
Set iteration count mode.
@item normalized_iteration_count
Set normalized iteration count mode.
3836 Default value is @var{normalized_iteration_count}.
3839 Set frame rate, expressed as number of frames per second. Default
3843 Set frame size. Default value is "640x480".
3846 Set the initial scale value. Default value is 3.0.
3849 Set the initial x position. Must be a floating point value between
3850 -100 and 100. Default value is -0.743643887037158704752191506114774.
3853 Set the initial y position. Must be a floating point value between
3854 -100 and 100. Default value is -0.131825904205311970493132056385139.
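For example, the following @command{ffplay} command renders the source
with its default options:
@example
ffplay -f lavfi mandelbrot
@end example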
3859 Generate various test patterns, as generated by the MPlayer test filter.
3861 The size of the generated video is fixed, and is 256x256.
3862 This source is useful in particular for testing encoding features.
3864 This source accepts an optional sequence of @var{key}=@var{value} pairs,
3865 separated by ":". The description of the accepted options follows.
3870 Specify the frame rate of the sourced video, as the number of frames
3871 generated per second. It has to be a string in the format
3872 @var{frame_rate_num}/@var{frame_rate_den}, an integer number, a float
3873 number or a valid video frame rate abbreviation. The default value is
3877 Set the video duration of the sourced video. The accepted syntax is:
3882 See also the function @code{av_parse_time()}.
3884 If not specified, or the expressed duration is negative, the video is
3885 supposed to be generated forever.
3889 Set the number or the name of the test to perform. Supported tests are:
3904 Default value is "all", which will cycle through the list of all tests.
3907 For example the following:
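@example
# assuming the test selection option described above is named "test"
mptestsrc=test=dc_luma
@end example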
3912 will generate a "dc_luma" test pattern.
3916 Provide a frei0r source.
3918 To enable compilation of this filter you need to install the frei0r
3919 header and configure FFmpeg with @code{--enable-frei0r}.
3921 The source supports the syntax:
3923 @var{size}:@var{rate}:@var{src_name}[@{=|:@}@var{param1}:@var{param2}:...:@var{paramN}]
3926 @var{size} is the size of the video to generate, may be a string of the
3927 form @var{width}x@var{height} or a frame size abbreviation.
3928 @var{rate} is the rate of the video to generate, may be a string of
3929 the form @var{num}/@var{den} or a frame rate abbreviation.
@var{src_name} is the name of the frei0r source to load. For more
3931 information regarding frei0r and how to set the parameters read the
3932 section @ref{frei0r} in the description of the video filters.
3934 For example, to generate a frei0r partik0l source with size 200x200
and frame rate 10 which is overlaid on the overlay filter main input:
3937 frei0r_src=200x200:10:partik0l=1234 [overlay]; [in][overlay] overlay
3942 Generate a life pattern.
3944 This source is based on a generalization of John Conway's life game.
The sourced input represents a life grid: each pixel represents a cell
3947 which can be in one of two possible states, alive or dead. Every cell
3948 interacts with its eight neighbours, which are the cells that are
3949 horizontally, vertically, or diagonally adjacent.
At each iteration the grid evolves according to the adopted rule,
which specifies the number of alive neighbor cells that will make a
cell stay alive or be born. The @option{rule} option allows one to specify
3956 This source accepts a list of options in the form of
3957 @var{key}=@var{value} pairs separated by ":". A description of the
3958 accepted options follows.
3962 Set the file from which to read the initial grid state. In the file,
3963 each non-whitespace character is considered an alive cell, and newline
3964 is used to delimit the end of each row.
3966 If this option is not specified, the initial grid is generated
3970 Set the video rate, that is the number of frames generated per second.
3973 @item random_fill_ratio, ratio
3974 Set the random fill ratio for the initial random grid. It is a
3975 floating point number value ranging from 0 to 1, defaults to 1/PHI.
3976 It is ignored when a file is specified.
3978 @item random_seed, seed
Set the seed for filling the initial random grid; it must be an integer
between 0 and UINT32_MAX. If not specified, or if explicitly
set to -1, the filter will try to use a good random seed on a best
3987 A rule can be specified with a code of the kind "S@var{NS}/B@var{NB}",
3988 where @var{NS} and @var{NB} are sequences of numbers in the range 0-8,
3989 @var{NS} specifies the number of alive neighbor cells which make a
3990 live cell stay alive, and @var{NB} the number of alive neighbor cells
which make a dead cell become alive (i.e. be "born").
3992 "s" and "b" can be used in place of "S" and "B", respectively.
Alternatively a rule can be specified by an 18-bit integer. The 9
high order bits are used to encode the next cell state if it is alive
for each number of neighbor alive cells, while the low order bits specify
the rule for giving birth to new cells. Higher order bits encode a
higher number of neighbor cells.
3999 For example the number 6153 = @code{(12<<9)+9} specifies a stay alive
4000 rule of 12 and a born rule of 9, which corresponds to "S23/B03".
Default value is "S23/B3", which is the original Conway's game of life
rule; it will keep a cell alive if it has 2 or 3 alive neighbor
cells, and will give birth to a new cell if there are three alive cells around
4008 Set the size of the output video.
If @option{filename} is specified, the size is set by default to the
same size as the input file. If @option{size} is set, it must contain
4012 the size specified in the input file, and the initial grid defined in
4013 that file is centered in the larger resulting area.
4015 If a filename is not specified, the size value defaults to "320x240"
4016 (used for a randomly generated initial grid).
4019 If set to 1, stitch the left and right grid edges together, and the
4020 top and bottom edges also. Defaults to 1.
4023 Set cell mold speed. If set, a dead cell will go from @option{death_color} to
4024 @option{mold_color} with a step of @option{mold}. @option{mold} can have a
4025 value from 0 to 255.
4028 Set the color of living (or new born) cells.
4031 Set the color of dead cells. If @option{mold} is set, this is the first color
4032 used to represent a dead cell.
4035 Set mold color, for definitely dead and moldy cells.
4038 @subsection Examples
4042 Read a grid from @file{pattern}, and center it on a grid of size
4045 life=f=pattern:s=300x300
4049 Generate a random grid of size 200x200, with a fill ratio of 2/3:
4051 life=ratio=2/3:s=200x200
4055 Specify a custom rule for evolving a randomly generated grid:
4061 Full example with slow death effect (mold) using @command{ffplay}:
4063 ffplay -f lavfi life=s=300x200:mold=10:r=60:ratio=0.1:death_color=#C83232:life_color=#00ff00,scale=1200:800:flags=16
4067 @section color, nullsrc, rgbtestsrc, smptebars, testsrc
The @code{color} source provides a uniformly colored input.
4071 The @code{nullsrc} source returns unprocessed video frames. It is
4072 mainly useful to be employed in analysis / debugging tools, or as the
4073 source for filters which ignore the input data.
4075 The @code{rgbtestsrc} source generates an RGB test pattern useful for
4076 detecting RGB vs BGR issues. You should see a red, green and blue
4077 stripe from top to bottom.
4079 The @code{smptebars} source generates a color bars pattern, based on
4080 the SMPTE Engineering Guideline EG 1-1990.
4082 The @code{testsrc} source generates a test video pattern, showing a
4083 color pattern, a scrolling gradient and a timestamp. This is mainly
4084 intended for testing purposes.
4086 These sources accept an optional sequence of @var{key}=@var{value} pairs,
4087 separated by ":". The description of the accepted options follows.
4092 Specify the color of the source, only used in the @code{color}
4093 source. It can be the name of a color (case insensitive match) or a
4094 0xRRGGBB[AA] sequence, possibly followed by an alpha specifier. The
4095 default value is "black".
4098 Specify the size of the sourced video, it may be a string of the form
4099 @var{width}x@var{height}, or the name of a size abbreviation. The
4100 default value is "320x240".
4103 Specify the frame rate of the sourced video, as the number of frames
4104 generated per second. It has to be a string in the format
4105 @var{frame_rate_num}/@var{frame_rate_den}, an integer number, a float
4106 number or a valid video frame rate abbreviation. The default value is
4110 Set the sample aspect ratio of the sourced video.
4113 Set the video duration of the sourced video. The accepted syntax is:
4115 [-]HH[:MM[:SS[.m...]]]
4118 See also the function @code{av_parse_time()}.
4120 If not specified, or the expressed duration is negative, the video is
4121 supposed to be generated forever.
4124 Set the number of decimals to show in the timestamp, only used in the
4125 @code{testsrc} source.
4127 The displayed timestamp value will correspond to the original
4128 timestamp value multiplied by the power of 10 of the specified
4129 value. Default value is 0.
4132 For example the following:
4134 testsrc=duration=5.3:size=qcif:rate=10
4137 will generate a video with a duration of 5.3 seconds, with size
4138 176x144 and a frame rate of 10 frames per second.
4140 The following graph description will generate a red source
4141 with an opacity of 0.2, with size "qcif" and a frame rate of 10
4144 color=c=red@@0.2:s=qcif:r=10
4147 If the input content is to be ignored, @code{nullsrc} can be used. The
4148 following command generates noise in the luminance plane by employing
4149 the @code{mp=geq} filter:
4151 nullsrc=s=256x256, mp=geq=random(1)*255:128:128
4154 @c man end VIDEO SOURCES
4156 @chapter Video Sinks
4157 @c man begin VIDEO SINKS
4159 Below is a description of the currently available video sinks.
4163 Buffer video frames, and make them available to the end of the filter
This sink is mainly intended for programmatic use, in particular
4167 through the interface defined in @file{libavfilter/buffersink.h}.
4169 It does not require a string parameter in input, but you need to
4170 specify a pointer to a list of supported pixel formats terminated by
4171 -1 in the opaque parameter provided to @code{avfilter_init_filter}
4172 when initializing this sink.
Null video sink: it does absolutely nothing with the input video. It is
mainly useful as a template and to be employed in analysis / debugging
4180 @c man end VIDEO SINKS
4182 @chapter Multimedia Filters
4183 @c man begin MULTIMEDIA FILTERS
4185 Below is a description of the currently available multimedia filters.
4187 @section asetpts, setpts
4189 Change the PTS (presentation timestamp) of the input frames.
4191 @code{asetpts} works on audio frames, @code{setpts} on video frames.