1 @chapter Filtering Introduction
2 @c man begin FILTERING INTRODUCTION
4 Filtering in FFmpeg is enabled through the libavfilter library.
Libavfilter is the filtering API of FFmpeg. It is the substitute for
the now deprecated 'vhooks' and started as a Google Summer of Code
project.
10 Audio filtering integration into the main FFmpeg repository is a work in
11 progress, so audio API and ABI should not be considered stable yet.
In libavfilter, it is possible for filters to have multiple inputs and
multiple outputs.
15 To illustrate the sorts of things that are possible, we can
16 use a complex filter graph. For example, the following one:
@example
input --> split --> fifo -----------------------> overlay --> output
            |                                        ^
            |                                        |
            +------> fifo --> crop --> vflip --------+
@end example
splits the stream into two streams, sends one stream through the crop filter
and the vflip filter before merging it back with the other stream by
overlaying it on top. You can use the following command to achieve this:
@example
ffmpeg -i input -vf "[in] split [T1], fifo, [T2] overlay=0:H/2 [out]; [T1] fifo, crop=iw:ih/2:0:ih/2, vflip [T2]" output
@end example
The result will be that the top half of the video is mirrored
onto the bottom half of the output video.
36 Filters are loaded using the @var{-vf} or @var{-af} option passed to
37 @command{ffmpeg} or to @command{ffplay}. Filters in the same linear
38 chain are separated by commas. In our example, @var{split, fifo,
39 overlay} are in one linear chain, and @var{fifo, crop, vflip} are in
40 another. The points where the linear chains join are labeled by names
41 enclosed in square brackets. In our example, that is @var{[T1]} and
42 @var{[T2]}. The special labels @var{[in]} and @var{[out]} are the points
43 where video is input and output.
Some filters take a list of parameters as input: they are specified
after the filter name and an equal sign, and are separated from each other
by a colon.
There exist so-called @var{source filters} that do not have an
audio/video input, and @var{sink filters} that will not have an
audio/video output.
53 @c man end FILTERING INTRODUCTION
@chapter graph2dot
@c man begin GRAPH2DOT
58 The @file{graph2dot} program included in the FFmpeg @file{tools}
59 directory can be used to parse a filter graph description and issue a
60 corresponding textual representation in the dot language.
Invoke the command @code{graph2dot -h} to see how to use @file{graph2dot}.
You can then pass the dot description to the @file{dot} program (from
the graphviz suite of programs) and obtain a graphical representation
of the filtergraph.
73 For example the sequence of commands:
@example
echo @var{GRAPH_DESCRIPTION} | \
tools/graph2dot -o graph.tmp && \
dot -Tpng graph.tmp -o graph.png && \
display graph.png
@end example
81 can be used to create and display an image representing the graph
82 described by the @var{GRAPH_DESCRIPTION} string. Note that this string must be
83 a complete self-contained graph, with its inputs and outputs explicitly defined.
84 For example if your command line is of the form:
@example
ffmpeg -i infile -vf scale=640:360 outfile
@end example
88 your @var{GRAPH_DESCRIPTION} string will need to be of the form:
@example
nullsrc,scale=640:360,nullsink
@end example
92 you may also need to set the @var{nullsrc} parameters and add a @var{format}
93 filter in order to simulate a specific input file.
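Putting the pieces above together, a complete invocation for rendering
that graph might look like the following (the exact paths depend on your
build tree):

@example
echo "nullsrc,scale=640:360,nullsink" | \
tools/graph2dot -o graph.tmp && \
dot -Tpng graph.tmp -o graph.png
@end example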
97 @chapter Filtergraph description
98 @c man begin FILTERGRAPH DESCRIPTION
100 A filtergraph is a directed graph of connected filters. It can contain
101 cycles, and there can be multiple links between a pair of
102 filters. Each link has one input pad on one side connecting it to one
103 filter from which it takes its input, and one output pad on the other
104 side connecting it to the one filter accepting its output.
106 Each filter in a filtergraph is an instance of a filter class
107 registered in the application, which defines the features and the
108 number of input and output pads of the filter.
110 A filter with no input pads is called a "source", a filter with no
111 output pads is called a "sink".
113 @anchor{Filtergraph syntax}
114 @section Filtergraph syntax
116 A filtergraph can be represented using a textual representation, which is
117 recognized by the @option{-filter}/@option{-vf} and @option{-filter_complex}
118 options in @command{ffmpeg} and @option{-vf} in @command{ffplay}, and by the
119 @code{avfilter_graph_parse()}/@code{avfilter_graph_parse2()} function defined in
120 @file{libavfilter/avfiltergraph.h}.
122 A filterchain consists of a sequence of connected filters, each one
123 connected to the previous one in the sequence. A filterchain is
124 represented by a list of ","-separated filter descriptions.
A filtergraph consists of a sequence of filterchains. A sequence of
filterchains is represented by a list of ";"-separated filterchain
descriptions.
A filter is represented by a string of the form:
@example
[@var{in_link_1}]...[@var{in_link_N}]@var{filter_name}=@var{arguments}[@var{out_link_1}]...[@var{out_link_M}]
@end example
@var{filter_name} is the name of the filter class of which the
described filter is an instance, and has to be the name of one of
the filter classes registered in the program.
The name of the filter class is optionally followed by a string
"=@var{arguments}".
@var{arguments} is a string which contains the parameters used to
initialize the filter instance; they are described in the filter
descriptions below.
143 The list of arguments can be quoted using the character "'" as initial
144 and ending mark, and the character '\' for escaping the characters
145 within the quoted text; otherwise the argument string is considered
146 terminated when the next special character (belonging to the set
147 "[]=;,") is encountered.
149 The name and arguments of the filter are optionally preceded and
150 followed by a list of link labels.
A link label allows one to name a link and associate it with a filter output
or input pad. The preceding labels @var{in_link_1}
... @var{in_link_N} are associated with the filter input pads,
and the following labels @var{out_link_1} ... @var{out_link_M} are
associated with the output pads.
When two link labels with the same name are found in the
filtergraph, a link between the corresponding input and output pad is
created.
161 If an output pad is not labelled, it is linked by default to the first
162 unlabelled input pad of the next filter in the filterchain.
163 For example in the filterchain:
@example
nullsrc, split[L1], [L2]overlay, nullsink
@end example
167 the split filter instance has two output pads, and the overlay filter
168 instance two input pads. The first output pad of split is labelled
169 "L1", the first input pad of overlay is labelled "L2", and the second
170 output pad of split is linked to the second input pad of overlay,
171 which are both unlabelled.
173 In a complete filterchain all the unlabelled filter input and output
174 pads must be connected. A filtergraph is considered valid if all the
175 filter input and output pads of all the filterchains are connected.
177 Libavfilter will automatically insert scale filters where format
178 conversion is required. It is possible to specify swscale flags
179 for those automatically inserted scalers by prepending
180 @code{sws_flags=@var{flags};}
181 to the filtergraph description.
A BNF description of the filtergraph syntax follows:
@example
@var{NAME}             ::= sequence of alphanumeric characters and '_'
@var{LINKLABEL}        ::= "[" @var{NAME} "]"
@var{LINKLABELS}       ::= @var{LINKLABEL} [@var{LINKLABELS}]
@var{FILTER_ARGUMENTS} ::= sequence of chars (possibly quoted)
@var{FILTER}           ::= [@var{LINKLABELS}] @var{NAME} ["=" @var{FILTER_ARGUMENTS}] [@var{LINKLABELS}]
@var{FILTERCHAIN}      ::= @var{FILTER} [,@var{FILTERCHAIN}]
@var{FILTERGRAPH}      ::= [sws_flags=@var{flags};] @var{FILTERCHAIN} [;@var{FILTERGRAPH}]
@end example
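For example, a two-chain graph using the link labels @var{[T1]} and
@var{[T2]} and the special @var{[in]}/@var{[out]} labels looks like:

@example
[in] split [T1], fifo, [T2] overlay=0:H/2 [out]; [T1] fifo, crop=iw:ih/2:0:ih/2, vflip [T2]
@end example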
194 @c man end FILTERGRAPH DESCRIPTION
196 @chapter Audio Filters
197 @c man begin AUDIO FILTERS
199 When you configure your FFmpeg build, you can disable any of the
200 existing filters using @code{--disable-filters}.
The configure output will show the audio filters included in your
build.
204 Below is a description of the currently available audio filters.
@section aconvert

Convert the input audio format to the specified formats.
210 The filter accepts a string of the form:
211 "@var{sample_format}:@var{channel_layout}".
213 @var{sample_format} specifies the sample format, and can be a string or the
214 corresponding numeric value defined in @file{libavutil/samplefmt.h}. Use 'p'
215 suffix for a planar sample format.
217 @var{channel_layout} specifies the channel layout, and can be a string
218 or the corresponding number value defined in @file{libavutil/audioconvert.h}.
The special parameter "auto" signifies that the filter will
automatically select the output format depending on the output filter.
223 Some examples follow.
227 Convert input to float, planar, stereo:
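@example
aconvert=fltp:stereo
@end example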
Convert input to unsigned 8-bit, automatically select output channel layout:
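@example
aconvert=u8:auto
@end example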
@section aformat

Convert the input audio to one of the specified formats. The framework will
negotiate the most appropriate format to minimize conversions.
244 The filter accepts the following named parameters:
248 A comma-separated list of requested sample formats.
251 A comma-separated list of requested sample rates.
253 @item channel_layouts
254 A comma-separated list of requested channel layouts.
258 If a parameter is omitted, all values are allowed.
For example, to force the output to either unsigned 8-bit or signed 16-bit stereo:
@example
aformat=sample_fmts\=u8\,s16:channel_layouts\=stereo
@end example
@section amerge

Merge two or more audio streams into a single multi-channel stream.
269 The filter accepts the following named options:
274 Set the number of inputs. Default is 2.
278 If the channel layouts of the inputs are disjoint, and therefore compatible,
279 the channel layout of the output will be set accordingly and the channels
280 will be reordered as necessary. If the channel layouts of the inputs are not
281 disjoint, the output will have all the channels of the first input then all
282 the channels of the second input, in that order, and the channel layout of
the output will be the default value corresponding to the total number of
channels.
286 For example, if the first input is in 2.1 (FL+FR+LF) and the second input
287 is FC+BL+BR, then the output will be in 5.1, with the channels in the
288 following order: a1, a2, b1, a3, b2, b3 (a1 is the first channel of the
289 first input, b1 is the first channel of the second input).
291 On the other hand, if both input are in stereo, the output channels will be
292 in the default order: a1, a2, b1, b2, and the channel layout will be
293 arbitrarily set to 4.0, which may or may not be the expected value.
All inputs must have the same sample rate and format.

If inputs do not have the same duration, the output will stop with the
shortest.
Example: merge two mono files into a stereo stream:
@example
amovie=left.wav [l] ; amovie=right.mp3 [r] ; [l] [r] amerge
@end example
Example: multiple merges:
@example
ffmpeg -f lavfi -i "
amovie=input.mkv:si=0 [a0];
amovie=input.mkv:si=1 [a1];
amovie=input.mkv:si=2 [a2];
amovie=input.mkv:si=3 [a3];
amovie=input.mkv:si=4 [a4];
amovie=input.mkv:si=5 [a5];
[a0][a1][a2][a3][a4][a5] amerge=inputs=6" -c:a pcm_s16le output.mkv
@end example
@section amix

Mixes multiple audio inputs into a single output.

For example
@example
ffmpeg -i INPUT1 -i INPUT2 -i INPUT3 -filter_complex amix=inputs=3:duration=first:dropout_transition=3 OUTPUT
@end example
325 will mix 3 input audio streams to a single output with the same duration as the
326 first input and a dropout transition time of 3 seconds.
328 The filter accepts the following named parameters:
332 Number of inputs. If unspecified, it defaults to 2.
335 How to determine the end-of-stream.
339 Duration of longest input. (default)
342 Duration of shortest input.
345 Duration of first input.
349 @item dropout_transition
350 Transition time, in seconds, for volume renormalization when an input
351 stream ends. The default value is 2 seconds.
@section anull

Pass the audio source unchanged to the output.
@section aresample

Resample the input audio to the specified sample rate.
363 The filter accepts exactly one parameter, the output sample rate. If not
364 specified then the filter will automatically convert between its input
365 and output sample rates.
367 For example, to resample the input audio to 44100Hz:
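@example
aresample=44100
@end example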
372 @section asetnsamples
374 Set the number of samples per each output audio frame.
The last output packet may contain a different number of samples, as
the filter will flush all the remaining samples when the input audio
signals its end.
The filter accepts parameters as a list of @var{key}=@var{value} pairs,
separated by ":".
385 @item nb_out_samples, n
Set the number of samples per each output audio frame. The number is
387 intended as the number of samples @emph{per each channel}.
388 Default value is 1024.
391 If set to 1, the filter will pad the last audio frame with zeroes, so
392 that the last frame will contain the same number of samples as the
393 previous ones. Default value is 1.
396 For example, to set the number of per-frame samples to 1234 and
397 disable padding for the last frame, use:
@example
asetnsamples=n=1234:p=0
@end example
@section ashowinfo

Show a line containing various information for each input audio frame.
The input audio is not modified.
407 The shown line contains a sequence of key/value pairs of the form
408 @var{key}:@var{value}.
410 A description of each shown parameter follows:
414 sequential number of the input frame, starting from 0
417 Presentation timestamp of the input frame, in time base units; the time base
418 depends on the filter input pad, and is usually 1/@var{sample_rate}.
421 presentation timestamp of the input frame in seconds
position of the frame in the input stream, -1 if this information is
unavailable and/or meaningless (for example in case of synthetic audio)
434 sample rate for the audio frame
437 number of samples (per channel) in the frame
440 Adler-32 checksum (printed in hexadecimal) of the audio data. For planar audio
441 the data is treated as if all the planes were concatenated.
443 @item plane_checksums
444 A list of Adler-32 checksums for each data plane.
@section asplit

Split input audio into several identical outputs.
451 The filter accepts a single parameter which specifies the number of outputs. If
452 unspecified, it defaults to 2.
For example
@example
[in] asplit [out0][out1]
@end example
459 will create two separate outputs from the same input.
To create 3 or more outputs, you need to specify the number of
outputs, like in:
@example
[in] asplit=3 [out0][out1][out2]
@end example
@example
ffmpeg -i INPUT -filter_complex asplit=5 OUTPUT
@end example
470 will create 5 copies of the input audio.
@section astreamsync

Forward two audio streams and control the order the buffers are forwarded.
477 The argument to the filter is an expression deciding which stream should be
478 forwarded next: if the result is negative, the first stream is forwarded; if
479 the result is positive or zero, the second stream is forwarded. It can use
480 the following variables:
484 number of buffers forwarded so far on each stream
486 number of samples forwarded so far on each stream
488 current timestamp of each stream
491 The default value is @code{t1-t2}, which means to always forward the stream
492 that has a smaller timestamp.
494 Example: stress-test @code{amerge} by randomly sending buffers on the wrong
495 input, while avoiding too much of a desynchronization:
@example
amovie=file.ogg [a] ; amovie=file.mp3 [b] ;
[a] [b] astreamsync=(2*random(1))-1+tanh(5*(t1-t2)) [a2] [b2] ;
[a2] [b2] amerge
@end example
@section atempo

Adjust audio tempo.

The filter accepts exactly one parameter, the audio tempo. If not
specified then the filter will assume nominal 1.0 tempo. Tempo must
be in the [0.5, 2.0] range.
510 For example, to slow down audio to 80% tempo:
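@example
atempo=0.8
@end example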
515 For example, to speed up audio to 125% tempo:
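@example
atempo=1.25
@end example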
@section earwax

Make audio easier to listen to on headphones.
524 This filter adds `cues' to 44.1kHz stereo (i.e. audio CD format) audio
525 so that when listened to on headphones the stereo image is moved from
526 inside your head (standard for headphones) to outside and in front of
527 the listener (standard for speakers).
@section pan

Mix channels with specific gain levels. The filter accepts the output
channel layout followed by a set of channels definitions.
536 This filter is also designed to remap efficiently the channels of an audio
539 The filter accepts parameters of the form:
540 "@var{l}:@var{outdef}:@var{outdef}:..."
544 output channel layout or number of channels
547 output channel specification, of the form:
548 "@var{out_name}=[@var{gain}*]@var{in_name}[+[@var{gain}*]@var{in_name}...]"
551 output channel to define, either a channel name (FL, FR, etc.) or a channel
552 number (c0, c1, etc.)
555 multiplicative coefficient for the channel, 1 leaving the volume unchanged
558 input channel to use, see out_name for details; it is not possible to mix
559 named and numbered input channels
562 If the `=' in a channel specification is replaced by `<', then the gains for
563 that specification will be renormalized so that the total is 1, thus
564 avoiding clipping noise.
566 @subsection Mixing examples
568 For example, if you want to down-mix from stereo to mono, but with a bigger
569 factor for the left channel:
@example
pan=1:c0=0.9*c0+0.1*c1
@end example
A customized down-mix to stereo that works automatically for 3-, 4-, 5- and
7-channel surround input:
@example
pan=stereo: FL < FL + 0.5*FC + 0.6*BL + 0.6*SL : FR < FR + 0.5*FC + 0.6*BR + 0.6*SR
@end example
Note that @command{ffmpeg} integrates a default down-mix (and up-mix) system
that should be preferred (see "-ac" option) unless you have very specific
needs.
584 @subsection Remapping examples
The channel remapping will be effective if, and only if:

@itemize
@item gain coefficients are zeroes or ones,
@item only one input per channel output,
@end itemize
If all these conditions are satisfied, the filter will notify the user ("Pure
channel mapping detected"), and use an optimized and lossless method to do the
remapping.
597 For example, if you have a 5.1 source and want a stereo audio stream by
598 dropping the extra channels:
@example
pan="stereo: c0=FL : c1=FR"
@end example
603 Given the same source, you can also switch front left and front right channels
604 and keep the input channel layout:
@example
pan="5.1: c0=c1 : c1=c0 : c2=c2 : c3=c3 : c4=c4 : c5=c5"
@end example
609 If the input is a stereo audio stream, you can mute the front left channel (and
610 still keep the stereo channel layout) with:
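@example
pan="stereo: c1=c1"
@end example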
615 Still with a stereo audio stream input, you can copy the right channel in both
616 front left and right:
@example
pan="stereo: c0=FR : c1=FR"
@end example
621 @section silencedetect
623 Detect silence in an audio stream.
This filter logs a message when it detects that the input audio volume is less
than or equal to a noise tolerance value for a duration greater than or equal
to the minimum detected noise duration.
629 The printed times and duration are expressed in seconds.
633 Set silence duration until notification (default is 2 seconds).
636 Set noise tolerance. Can be specified in dB (in case "dB" is appended to the
637 specified value) or amplitude ratio. Default is -60dB, or 0.001.
640 Detect 5 seconds of silence with -50dB noise tolerance:
@example
silencedetect=n=-50dB:d=5
@end example
645 Complete example with @command{ffmpeg} to detect silence with 0.0001 noise
646 tolerance in @file{silence.mp3}:
@example
ffmpeg -f lavfi -i amovie=silence.mp3,silencedetect=noise=0.0001 -f null -
@end example
@section volume

Adjust the input audio volume.
655 The filter accepts exactly one parameter @var{vol}, which expresses
656 how the audio volume will be increased or decreased.
658 Output values are clipped to the maximum value.
660 If @var{vol} is expressed as a decimal number, the output audio
661 volume is given by the relation:
@example
@var{output_volume} = @var{vol} * @var{input_volume}
@end example
666 If @var{vol} is expressed as a decimal number followed by the string
667 "dB", the value represents the requested change in decibels of the
input audio power, and the output audio volume is given by the
following relation:
@example
@var{output_volume} = 10^(@var{vol}/20) * @var{input_volume}
@end example
Otherwise @var{vol} is considered an expression and its evaluated
value is used for computing the output audio volume according to the
first relation.
678 Default value for @var{vol} is 1.0.
684 Half the input audio volume:
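@example
volume=0.5
@end example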
689 The above example is equivalent to:
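@example
volume=1/2
@end example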
695 Decrease input audio power by 12 decibels:
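@example
volume=-12dB
@end example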
701 @section volumedetect
Detect the volume of the input audio.
705 The filter has no parameters. The input is not modified. Statistics about
706 the volume will be printed in the log when the input stream end is reached.
In particular it will show the mean volume (root mean square), maximum
volume (on a per-sample basis), and the beginning of a histogram of the
registered volume values (from the maximum value to a cumulated 1/1000 of
the samples).
713 All volumes are in decibels relative to the maximum PCM value.
715 Here is an excerpt of the output:
@example
[Parsed_volumedetect_0 @@ 0xa23120] mean_volume: -27 dB
[Parsed_volumedetect_0 @@ 0xa23120] max_volume: -4 dB
[Parsed_volumedetect_0 @@ 0xa23120] histogram_4db: 6
[Parsed_volumedetect_0 @@ 0xa23120] histogram_5db: 62
[Parsed_volumedetect_0 @@ 0xa23120] histogram_6db: 286
[Parsed_volumedetect_0 @@ 0xa23120] histogram_7db: 1042
[Parsed_volumedetect_0 @@ 0xa23120] histogram_8db: 2551
[Parsed_volumedetect_0 @@ 0xa23120] histogram_9db: 4609
[Parsed_volumedetect_0 @@ 0xa23120] histogram_10db: 8409
@end example
731 The mean square energy is approximately -27 dB, or 10^-2.7.
733 The largest sample is at -4 dB, or more precisely between -4 dB and -5 dB.
735 There are 6 samples at -4 dB, 62 at -5 dB, 286 at -6 dB, etc.
738 In other words, raising the volume by +4 dB does not cause any clipping,
739 raising it by +5 dB causes clipping for 6 samples, etc.
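To gather these statistics without writing any output file, a typical
invocation might look like the following (the input name is just a
placeholder):

@example
ffmpeg -i INPUT -af volumedetect -f null -
@end example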
@section asyncts

Synchronize audio data with timestamps by squeezing/stretching it and/or
dropping samples/adding silence when needed.
745 The filter accepts the following named parameters:
749 Enable stretching/squeezing the data to make it match the timestamps. Disabled
750 by default. When disabled, time gaps are covered with silence.
753 Minimum difference between timestamps and audio data (in seconds) to trigger
754 adding/dropping samples. Default value is 0.1. If you get non-perfect sync with
755 this filter, try setting this parameter to 0.
758 Maximum compensation in samples per second. Relevant only with compensate=1.
762 Assume the first pts should be this value.
763 This allows for padding/trimming at the start of stream. By default, no
764 assumption is made about the first frame's expected pts, so no padding or
765 trimming is done. For example, this could be set to 0 to pad the beginning with
766 silence if an audio stream starts after the video stream.
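For instance, based on the description above, padding the start of a
late-starting audio stream with silence could look like:

@example
asyncts=first_pts=0
@end example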
770 @section channelsplit
Split each channel from the input audio stream into a separate output stream.
773 This filter accepts the following named parameters:
776 Channel layout of the input stream. Default is "stereo".
779 For example, assuming a stereo input MP3 file
@example
ffmpeg -i in.mp3 -filter_complex channelsplit out.mkv
@end example
783 will create an output Matroska file with two audio streams, one containing only
784 the left channel and the other the right channel.
786 To split a 5.1 WAV file into per-channel files
@example
ffmpeg -i in.wav -filter_complex
'channelsplit=channel_layout=5.1[FL][FR][FC][LFE][SL][SR]'
-map '[FL]' front_left.wav -map '[FR]' front_right.wav -map '[FC]'
front_center.wav -map '[LFE]' lfe.wav -map '[SL]' side_left.wav -map '[SR]'
side_right.wav
@end example
@section channelmap

Remap input channels to new locations.
798 This filter accepts the following named parameters:
801 Channel layout of the output stream.
804 Map channels from input to output. The argument is a comma-separated list of
805 mappings, each in the @code{@var{in_channel}-@var{out_channel}} or
806 @var{in_channel} form. @var{in_channel} can be either the name of the input
807 channel (e.g. FL for front left) or its index in the input channel layout.
808 @var{out_channel} is the name of the output channel or its index in the output
809 channel layout. If @var{out_channel} is not given then it is implicitly an
810 index, starting with zero and increasing by one for each mapping.
813 If no mapping is present, the filter will implicitly map input channels to
814 output channels preserving index.
816 For example, assuming a 5.1+downmix input MOV file
@example
ffmpeg -i in.mov -filter 'channelmap=map=DL-FL\,DR-FR' out.wav
@end example
will create an output WAV file tagged as stereo from the downmix channels of
the input.

To fix a 5.1 WAV improperly encoded in AAC's native channel order
@example
ffmpeg -i in.wav -filter 'channelmap=1\,2\,0\,5\,3\,4:channel_layout=5.1' out.wav
@end example
@section join

Join multiple input streams into one multi-channel stream.
831 The filter accepts the following named parameters:
835 Number of input streams. Defaults to 2.
838 Desired output channel layout. Defaults to stereo.
841 Map channels from inputs to output. The argument is a comma-separated list of
842 mappings, each in the @code{@var{input_idx}.@var{in_channel}-@var{out_channel}}
843 form. @var{input_idx} is the 0-based index of the input stream. @var{in_channel}
844 can be either the name of the input channel (e.g. FL for front left) or its
index in the specified input stream. @var{out_channel} is the name of the output
channel.
849 The filter will attempt to guess the mappings when those are not specified
850 explicitly. It does so by first trying to find an unused matching input channel
851 and if that fails it picks the first unused input channel.
853 E.g. to join 3 inputs (with properly set channel layouts)
@example
ffmpeg -i INPUT1 -i INPUT2 -i INPUT3 -filter_complex join=inputs=3 OUTPUT
@end example
858 To build a 5.1 output from 6 single-channel streams:
@example
ffmpeg -i fl -i fr -i fc -i sl -i sr -i lfe -filter_complex
'join=inputs=6:channel_layout=5.1:map=0.0-FL\,1.0-FR\,2.0-FC\,3.0-SL\,4.0-SR\,5.0-LFE'
out
@end example
@section resample

Convert the audio sample format, sample rate and channel layout. This filter is
not meant to be used directly.
869 @c man end AUDIO FILTERS
871 @chapter Audio Sources
872 @c man begin AUDIO SOURCES
874 Below is a description of the currently available audio sources.
@section abuffer

Buffer audio frames, and make them available to the filter chain.
880 This source is mainly intended for a programmatic use, in particular
881 through the interface defined in @file{libavfilter/asrc_abuffer.h}.
883 It accepts the following mandatory parameters:
884 @var{sample_rate}:@var{sample_fmt}:@var{channel_layout}
889 The sample rate of the incoming audio buffers.
892 The sample format of the incoming audio buffers.
Either a sample format name or its corresponding integer representation from
894 the enum AVSampleFormat in @file{libavutil/samplefmt.h}
897 The channel layout of the incoming audio buffers.
898 Either a channel layout name from channel_layout_map in
899 @file{libavutil/audioconvert.c} or its corresponding integer representation
900 from the AV_CH_LAYOUT_* macros in @file{libavutil/audioconvert.h}
For example:
@example
abuffer=44100:s16p:stereo
@end example
909 will instruct the source to accept planar 16bit signed stereo at 44100Hz.
Since the sample format with name "s16p" corresponds to the number
6 and the "stereo" channel layout corresponds to the value 0x3, this is
equivalent to:
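@example
abuffer=44100:6:0x3
@end example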
@section aevalsrc

Generate an audio signal specified by an expression.
This source accepts one or more expressions as input (one for each
channel), which are evaluated and used to generate a corresponding
audio signal.
925 It accepts the syntax: @var{exprs}[::@var{options}].
926 @var{exprs} is a list of expressions separated by ":", one for each
927 separate channel. In case the @var{channel_layout} is not
928 specified, the selected channel layout depends on the number of
929 provided expressions.
@var{options} is an optional sequence of @var{key}=@var{value} pairs,
separated by ":".
934 The description of the accepted options follows.
938 @item channel_layout, c
939 Set the channel layout. The number of channels in the specified layout
940 must be equal to the number of specified expressions.
943 Set the minimum duration of the sourced audio. See the function
944 @code{av_parse_time()} for the accepted format.
Note that the resulting duration may be greater than the specified
duration, as the generated audio is always cut at the end of a
complete frame.
949 If not specified, or the expressed duration is negative, the audio is
950 supposed to be generated forever.
Set the number of samples per channel per each output frame;
defaults to 1024.

Specify the sample rate; defaults to 44100.
960 Each expression in @var{exprs} can contain the following constants:
964 number of the evaluated sample, starting from 0
967 time of the evaluated sample expressed in seconds, starting from 0
Generate a sine signal with frequency of 440 Hz, and set the sample rate to
8000 Hz:
@example
aevalsrc="sin(440*2*PI*t)::s=8000"
@end example
Generate a two-channel signal, and specify the channel layout (Front
Center + Back Center) explicitly:
@example
aevalsrc="sin(420*2*PI*t):cos(430*2*PI*t)::c=FC|BC"
@end example
1000 Generate white noise:
@example
aevalsrc="-2+random(0)"
@end example
1006 Generate an amplitude modulated signal:
@example
aevalsrc="sin(10*2*PI*t)*sin(880*2*PI*t)"
@end example
1012 Generate 2.5 Hz binaural beats on a 360 Hz carrier:
@example
aevalsrc="0.1*sin(2*PI*(360-2.5/2)*t) : 0.1*sin(2*PI*(360+2.5/2)*t)"
@end example
@section anullsrc

Null audio source, return unprocessed audio frames. It is mainly useful
as a template and to be employed in analysis / debugging tools, or as
the source for filters which ignore the input data (for example the sox
synth filter).

It accepts an optional sequence of @var{key}=@var{value} pairs,
separated by ":".
1029 The description of the accepted options follows.
1033 @item sample_rate, s
Specify the sample rate. Defaults to 44100.
1036 @item channel_layout, cl
Specify the channel layout, which can be either an integer or a string
representing a channel layout. The default value of @var{channel_layout}
is "stereo".
Check the channel_layout_map definition in
@file{libavutil/audioconvert.c} for the mapping between strings and
channel layout values.
Set the number of samples per requested frame.
Some examples follow:
@example
# set the sample rate to 48000 Hz and the channel layout to AV_CH_LAYOUT_MONO
anullsrc=r=48000:cl=4

# same as above
anullsrc=r=48000:cl=mono
@end example
1061 Buffer audio frames, and make them available to the filter chain.
1063 This source is not intended to be part of user-supplied graph descriptions but
1064 for insertion by calling programs through the interface defined in
1065 @file{libavfilter/buffersrc.h}.
1067 It accepts the following named parameters:
1071 Timebase which will be used for timestamps of submitted frames. It must be
1072 either a floating-point number or in @var{numerator}/@var{denominator} form.
1078 Name of the sample format, as returned by @code{av_get_sample_fmt_name()}.
1080 @item channel_layout
1081 Channel layout of the audio data, in the form that can be accepted by
1082 @code{av_get_channel_layout()}.
1085 All the parameters need to be explicitly defined.
@section flite

Synthesize a voice utterance using the libflite library.
1091 To enable compilation of this filter you need to configure FFmpeg with
1092 @code{--enable-libflite}.
1094 Note that the flite library is not thread-safe.
The source accepts parameters as a list of @var{key}=@var{value} pairs,
separated by ":".
1099 The description of the accepted parameters follows.
1104 If set to 1, list the names of the available voices and exit
1105 immediately. Default value is 0.
1108 Set the maximum number of samples per frame. Default value is 512.
1111 Set the filename containing the text to speak.
1114 Set the text to speak.
1117 Set the voice to use for the speech synthesis. Default value is
1118 @code{kal}. See also the @var{list_voices} option.
1121 @subsection Examples
Read from file @file{speech.txt}, and synthesize the text using the
standard flite voice:
@example
flite=textfile=speech.txt
@end example
1132 Read the specified text selecting the @code{slt} voice:
@example
flite=text='So fare thee well, poor devil of a Sub-Sub, whose commentator I am':voice=slt
@end example
1138 Input text to ffmpeg:
@example
ffmpeg -f lavfi -i flite=text='So fare thee well, poor devil of a Sub-Sub, whose commentator I am':voice=slt
@end example
Make @command{ffplay} speak the specified text, using @code{flite} and
the @code{lavfi} device:
@example
ffplay -f lavfi flite=text='No more be grieved for which that thou hast done.'
@end example
1151 For more information about libflite, check:
1152 @url{http://www.speech.cs.cmu.edu/flite/}
1154 @c man end AUDIO SOURCES
1156 @chapter Audio Sinks
1157 @c man begin AUDIO SINKS
1159 Below is a description of the currently available audio sinks.
1161 @section abuffersink
Buffer audio frames, and make them available to the end of the filter chain.
1165 This sink is mainly intended for programmatic use, in particular
1166 through the interface defined in @file{libavfilter/buffersink.h}.
1168 It requires a pointer to an AVABufferSinkContext structure, which
1169 defines the incoming buffers' formats, to be passed as the opaque
1170 parameter to @code{avfilter_init_filter} for initialization.
@section anullsink

Null audio sink, do absolutely nothing with the input audio. It is
mainly useful as a template and to be employed in analysis / debugging
tools.
1178 @section abuffersink
1179 This sink is intended for programmatic use. Frames that arrive on this sink can
1180 be retrieved by the calling program using the interface defined in
1181 @file{libavfilter/buffersink.h}.
1183 This filter accepts no parameters.
1185 @c man end AUDIO SINKS
1187 @chapter Video Filters
1188 @c man begin VIDEO FILTERS
1190 When you configure your FFmpeg build, you can disable any of the
1191 existing filters using @code{--disable-filters}.
The configure output will show the video filters included in your
build.
1195 Below is a description of the currently available video filters.
1197 @section alphaextract
1199 Extract the alpha component from the input as a grayscale video. This
1200 is especially useful with the @var{alphamerge} filter.
@section alphamerge

Add or replace the alpha component of the primary input with the
grayscale value of a second input. This is intended for use with
@var{alphaextract} to allow the transmission or storage of frame
sequences that have alpha in a format that doesn't support an alpha
channel.
1210 For example, to reconstruct full frames from a normal YUV-encoded video
1211 and a separate video created with @var{alphaextract}, you might use:
@example
movie=in_alpha.mkv [alpha]; [in][alpha] alphamerge [out]
@end example
1216 Since this filter is designed for reconstruction, it operates on frame
1217 sequences without considering timestamps, and terminates when either
1218 input reaches end of stream. This will cause problems if your encoding
1219 pipeline drops frames. If you're trying to apply an image as an
1220 overlay to a video stream, consider the @var{overlay} filter instead.
@section ass

Draw ASS (Advanced Substation Alpha) subtitles on top of input video
using the libass library.
1227 To enable compilation of this filter you need to configure FFmpeg with
1228 @code{--enable-libass}.
1230 This filter accepts the following named options, expressed as a
1231 sequence of @var{key}=@var{value} pairs, separated by ":".
1235 Set the filename of the ASS file to read. It must be specified.
1238 Specify the size of the original video, the video for which the ASS file
1239 was composed. Due to a misdesign in ASS aspect ratio arithmetic, this is
1240 necessary to correctly scale the fonts if the aspect ratio has been changed.
1243 If the first key is not specified, it is assumed that the first value
1244 specifies the @option{filename}.
1246 For example, to render the file @file{sub.ass} on top of the input
1247 video, use the command:
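@example
ass=sub.ass
@end example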
1252 which is equivalent to:
@example
ass=filename=sub.ass
@end example
@section bbox

Compute the bounding box for the non-black pixels in the input frame
luminance plane.
1262 This filter computes the bounding box containing all the pixels with a
1263 luminance value greater than the minimum allowed value.
The parameters describing the bounding box are printed on the filter
log.
1267 @section blackdetect
1269 Detect video intervals that are (almost) completely black. Can be
1270 useful to detect chapter transitions, commercials, or invalid
recordings. Output lines contain the time for the start, end and
duration of the detected black interval expressed in seconds.
1274 In order to display the output lines, you need to set the loglevel at
1275 least to the AV_LOG_INFO value.
1277 This filter accepts a list of options in the form of
1278 @var{key}=@var{value} pairs separated by ":". A description of the
1279 accepted options follows.
1282 @item black_min_duration, d
1283 Set the minimum detected black duration expressed in seconds. It must
1284 be a non-negative floating point number.
1286 Default value is 2.0.
1288 @item picture_black_ratio_th, pic_th
1289 Set the threshold for considering a picture "black".
1290 Express the minimum value for the ratio:
1292 @var{nb_black_pixels} / @var{nb_pixels}
1295 for which a picture is considered black.
1296 Default value is 0.98.
1298 @item pixel_black_th, pix_th
1299 Set the threshold for considering a pixel "black".
1301 The threshold expresses the maximum pixel luminance value for which a
1302 pixel is considered "black". The provided value is scaled according to
1303 the following equation:
1305 @var{absolute_threshold} = @var{luminance_minimum_value} + @var{pixel_black_th} * @var{luminance_range_size}
1308 @var{luminance_range_size} and @var{luminance_minimum_value} depend on
1309 the input video format, the range is [0-255] for YUV full-range
1310 formats and [16-235] for YUV non full-range formats.
1312 Default value is 0.10.
1315 The following example sets the maximum pixel threshold to the minimum
1316 value, and detects only black intervals of 2 or more seconds:
@example
blackdetect=d=2:pix_th=0.00
@end example
@section blackframe

Detect frames that are (almost) completely black. Can be useful to
detect chapter transitions or commercials. Output lines consist of
the frame number of the detected frame, the percentage of blackness,
the position in the file if known or -1, and the timestamp in seconds.
1328 In order to display the output lines, you need to set the loglevel at
1329 least to the AV_LOG_INFO value.
1331 The filter accepts the syntax:
@example
blackframe[=@var{amount}:[@var{threshold}]]
@end example
1336 @var{amount} is the percentage of the pixels that have to be below the
1337 threshold, and defaults to 98.
1339 @var{threshold} is the threshold below which a pixel value is
1340 considered black, and defaults to 32.
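For example, the default behaviour can be requested explicitly with:

@example
blackframe=98:32
@end example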
@section boxblur

Apply boxblur algorithm to the input video.
1346 This filter accepts the parameters:
1347 @var{luma_radius}:@var{luma_power}:@var{chroma_radius}:@var{chroma_power}:@var{alpha_radius}:@var{alpha_power}
1349 Chroma and alpha parameters are optional, if not specified they default
1350 to the corresponding values set for @var{luma_radius} and
1353 @var{luma_radius}, @var{chroma_radius}, and @var{alpha_radius} represent
1354 the radius in pixels of the box used for blurring the corresponding
1355 input plane. They are expressions, and can contain the following
1359 the input width and height in pixels
1362 the input chroma image width and height in pixels
1365 horizontal and vertical chroma subsample values. For example for the
1366 pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
1369 The radius must be a non-negative number, and must not be greater than
1370 the value of the expression @code{min(w,h)/2} for the luma and alpha planes,
1371 and of @code{min(cw,ch)/2} for the chroma planes.
1373 @var{luma_power}, @var{chroma_power}, and @var{alpha_power} represent
1374 how many times the boxblur filter is applied to the corresponding
1377 Some examples follow:
Apply a boxblur filter with luma, chroma, and alpha radius
of 2:
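@example
boxblur=2:1
@end example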
1389 Set luma radius to 2, alpha and chroma radius to 0
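@example
boxblur=2:1:0:0:0:0
@end example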
1395 Set luma and chroma radius to a fraction of the video dimension
@example
boxblur=min(h\,w)/10:1:min(cw\,ch)/10:1
@end example
1402 @section colormatrix
The colormatrix filter allows conversion between any of the following color
spaces: BT.709 (@var{bt709}), BT.601 (@var{bt601}), SMPTE-240M (@var{smpte240m})
and FCC (@var{fcc}).
The syntax of the parameters is @var{source}:@var{destination}. For example,
to convert from BT.601 to SMPTE-240M:
@example
colormatrix=bt601:smpte240m
@end example
@section copy

Copy the input source unchanged to the output. Mainly useful for
testing purposes.
@section crop

Crop the input video to @var{out_w}:@var{out_h}:@var{x}:@var{y}:@var{keep_aspect}
The @var{keep_aspect} parameter is optional; if specified and set to a
non-zero value, it will force the output display aspect ratio to be the
same as that of the input, by changing the output sample aspect ratio.
1427 The @var{out_w}, @var{out_h}, @var{x}, @var{y} parameters are
1428 expressions containing the following constants:
the computed values for @var{x} and @var{y}. They are evaluated for
each new frame.
1436 the input width and height
1439 same as @var{in_w} and @var{in_h}
1442 the output (cropped) width and height
1445 same as @var{out_w} and @var{out_h}
1448 same as @var{iw} / @var{ih}
1451 input sample aspect ratio
1454 input display aspect ratio, it is the same as (@var{iw} / @var{ih}) * @var{sar}
1457 horizontal and vertical chroma subsample values. For example for the
1458 pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
1461 the number of input frame, starting from 0
1464 the position in the file of the input frame, NAN if unknown
1467 timestamp expressed in seconds, NAN if the input timestamp is unknown
1471 The @var{out_w} and @var{out_h} parameters specify the expressions for
1472 the width and height of the output (cropped) video. They are
1473 evaluated just at the configuration of the filter.
1475 The default value of @var{out_w} is "in_w", and the default value of
1476 @var{out_h} is "in_h".
1478 The expression for @var{out_w} may depend on the value of @var{out_h},
1479 and the expression for @var{out_h} may depend on @var{out_w}, but they
1480 cannot depend on @var{x} and @var{y}, as @var{x} and @var{y} are
1481 evaluated after @var{out_w} and @var{out_h}.
1483 The @var{x} and @var{y} parameters specify the expressions for the
1484 position of the top-left corner of the output (non-cropped) area. They
1485 are evaluated for each frame. If the evaluated value is not valid, it
1486 is approximated to the nearest valid value.
1488 The default value of @var{x} is "(in_w-out_w)/2", and the default
1489 value for @var{y} is "(in_h-out_h)/2", which set the cropped area at
1490 the center of the input image.
1492 The expression for @var{x} may depend on @var{y}, and the expression
1493 for @var{y} may depend on @var{x}.
Some examples follow:
@example
# crop the central input area with size 100x100
crop=100:100

# crop the central input area with size 2/3 of the input video
"crop=2/3*in_w:2/3*in_h"

# crop the input video central square
crop=in_h

# delimit the rectangle with the top-left corner placed at position
# 100:100 and the right-bottom corner corresponding to the right-bottom
# corner of the input image.
crop=in_w-100:in_h-100:100:100

# crop 10 pixels from the left and right borders, and 20 pixels from
# the top and bottom borders
"crop=in_w-2*10:in_h-2*20"

# keep only the bottom right quarter of the input image
"crop=in_w/2:in_h/2:in_w/2:in_h/2"

# crop height for getting Greek harmony
"crop=in_w:1/PHI*in_w"

# trembling effect
"crop=in_w/2:in_h/2:(in_w-out_w)/2+((in_w-out_w)/2)*sin(n/10):(in_h-out_h)/2 +((in_h-out_h)/2)*sin(n/7)"

# erratic camera effect depending on timestamp
"crop=in_w/2:in_h/2:(in_w-out_w)/2+((in_w-out_w)/2)*sin(t*10):(in_h-out_h)/2 +((in_h-out_h)/2)*sin(t*13)"

# set x depending on the value of y
"crop=in_w/2:in_h/2:y:10+10*sin(n/10)"
@end example
@section cropdetect

Auto-detect crop size.

Calculate the necessary cropping parameters and print the recommended
parameters through the logging system. The detected dimensions
correspond to the non-black area of the input video.
1539 It accepts the syntax:
@example
cropdetect[=@var{limit}[:@var{round}[:@var{reset}]]]
@end example
1547 Threshold, which can be optionally specified from nothing (0) to
1548 everything (255), defaults to 24.
1551 Value which the width/height should be divisible by, defaults to
1552 16. The offset is automatically adjusted to center the video. Use 2 to
1553 get only even dimensions (needed for 4:2:2 video). 16 is best when
1554 encoding to most video codecs.
1557 Counter that determines after how many frames cropdetect will reset
1558 the previously detected largest video area and start over to detect
1559 the current optimal crop area. Defaults to 0.
This can be useful when channel logos distort the video area. 0
indicates never reset and return the largest area encountered during
playback.
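A typical way to inspect a file is to run the filter and discard the
output, for example (the input name is just a placeholder):

@example
ffmpeg -i INPUT -vf cropdetect -f null -
@end example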
@section decimate

This filter drops frames that do not differ greatly from the previous
frame in order to reduce framerate. The main use of this filter is
for very-low-bitrate encoding (e.g. streaming over dialup modem), but
it could in theory be used for fixing movies that were
inverse-telecined incorrectly.
1574 It accepts the following parameters:
1575 @var{max}:@var{hi}:@var{lo}:@var{frac}.
1580 Set the maximum number of consecutive frames which can be dropped (if
1581 positive), or the minimum interval between dropped frames (if
negative). If the value is 0, the frame is dropped regardless of the
number of previous sequentially dropped frames.
1588 Set the dropping threshold values.
1590 Values for @var{hi} and @var{lo} are for 8x8 pixel blocks and
1591 represent actual pixel value differences, so a threshold of 64
1592 corresponds to 1 unit of difference for each pixel, or the same spread
1593 out differently over the block.
1595 A frame is a candidate for dropping if no 8x8 blocks differ by more
1596 than a threshold of @var{hi}, and if no more than @var{frac} blocks (1
1597 meaning the whole image) differ by more than a threshold of @var{lo}.
1599 Default value for @var{hi} is 64*12, default value for @var{lo} is
1600 64*5, and default value for @var{frac} is 0.33.
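As an illustration, allowing at most 3 consecutive dropped frames while
spelling out the default thresholds (64*12 = 768, 64*5 = 320, 0.33) would
look like:

@example
decimate=3:768:320:0.33
@end example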
@section delogo

Suppress a TV station logo by a simple interpolation of the surrounding
pixels. Just set a rectangle covering the logo and watch it disappear
(and sometimes something even uglier appear - your mileage may vary).
1609 The filter accepts parameters as a string of the form
1610 "@var{x}:@var{y}:@var{w}:@var{h}:@var{band}", or as a list of
1611 @var{key}=@var{value} pairs, separated by ":".
1613 The description of the accepted parameters follows.
Specify the top left corner coordinates of the logo. They must be
specified.

Specify the width and height of the logo to clear. They must be
specified.
1626 Specify the thickness of the fuzzy edge of the rectangle (added to
1627 @var{w} and @var{h}). The default value is 4.
1630 When set to 1, a green rectangle is drawn on the screen to simplify
1631 finding the right @var{x}, @var{y}, @var{w}, @var{h} parameters, and
1632 @var{band} is set to 4. The default value is 0.
1636 Some examples follow.
1641 Set a rectangle covering the area with top left corner coordinates 0,0
1642 and size 100x77, setting a band of size 10:
@example
delogo=0:0:100:77:10
@end example
1648 As the previous example, but use named options:
@example
delogo=x=0:y=0:w=100:h=77:band=10
@end example
@section deshake

Attempt to fix small changes in horizontal and/or vertical shift. This
filter helps remove camera shake from hand-holding a camera, bumping a
tripod, moving on a vehicle, etc.
1661 The filter accepts parameters as a string of the form
1662 "@var{x}:@var{y}:@var{w}:@var{h}:@var{rx}:@var{ry}:@var{edge}:@var{blocksize}:@var{contrast}:@var{search}:@var{filename}"
1664 A description of the accepted parameters follows.
Specify a rectangular area where to limit the search for motion
vectors.
1671 If desired the search for motion vectors can be limited to a
1672 rectangular area of the frame defined by its top left corner, width
1673 and height. These parameters have the same meaning as the drawbox
filter which can be used to visualise the position of the bounding
box.
1677 This is useful when simultaneous movement of subjects within the frame
1678 might be confused for camera motion by the motion vector search.
1680 If any or all of @var{x}, @var{y}, @var{w} and @var{h} are set to -1
1681 then the full frame is used. This allows later options to be set
1682 without specifying the bounding box for the motion vector search.
1684 Default - search the whole frame.
1687 Specify the maximum extent of movement in x and y directions in the
1688 range 0-64 pixels. Default 16.
1691 Specify how to generate pixels to fill blanks at the edge of the
1692 frame. An integer from 0 to 3 as follows:
1695 Fill zeroes at blank locations
1697 Original image at blank locations
1699 Extruded edge value at blank locations
1701 Mirrored edge at blank locations
1704 The default setting is mirror edge at blank locations.
Specify the blocksize to use for motion search. Range 4-128 pixels,
default 8.
1711 Specify the contrast threshold for blocks. Only blocks with more than
1712 the specified contrast (difference between darkest and lightest
1713 pixels) will be considered. Range 1-255, default 125.
1716 Specify the search strategy 0 = exhaustive search, 1 = less exhaustive
1717 search. Default - exhaustive search.
If set then a detailed log of the motion search is written to the
specified file.
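For instance, to apply the filter over the whole frame with its default
settings, an invocation along these lines should work (file names are
placeholders):

@example
ffmpeg -i INPUT -vf deshake OUTPUT
@end example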
@section drawbox

Draw a colored box on the input image.
The filter accepts parameters as a list of @var{key}=@var{value} pairs,
separated by ":".
1736 Specify the top left corner coordinates of the box. Default to 0.
1739 Specify the width and height of the box, if 0 they are interpreted as
1740 the input width and height. Default to 0.
1743 Specify the color of the box to write, it can be the name of a color
1744 (case insensitive match) or a 0xRRGGBB[AA] sequence. If the special
1745 value @code{invert} is used, the box edge color is the same as the
1746 video with inverted luma.
1749 Set the thickness of the box edge. Default value is @code{4}.
If the key of the first option is omitted, the arguments are
interpreted according to the following syntax:
@example
drawbox=@var{x}:@var{y}:@var{width}:@var{height}:@var{color}:@var{thickness}
@end example
1758 Some examples follow:
1761 Draw a black box around the edge of the input image:
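@example
drawbox
@end example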
1767 Draw a box with color red and an opacity of 50%:
@example
drawbox=10:20:200:60:red@@0.5
@end example
1772 The previous example can be specified as:
@example
drawbox=x=10:y=20:w=200:h=60:color=red@@0.5
@end example
1778 Fill the box with pink color:
@example
drawbox=x=10:y=10:w=100:h=100:color=pink@@0.5:t=max
@end example
@section drawtext

Draw a text string or text from a specified file on top of a video, using the
libfreetype library.
1789 To enable compilation of this filter you need to configure FFmpeg with
1790 @code{--enable-libfreetype}.
1792 The filter also recognizes strftime() sequences in the provided text
1793 and expands them accordingly. Check the documentation of strftime().
The filter accepts parameters as a list of @var{key}=@var{value} pairs,
separated by ":".
1803 Used to draw a box around text using background color.
1804 Value should be either 1 (enable) or 0 (disable).
1805 The default value of @var{box} is 0.
1808 The color to be used for drawing box around text.
1809 Either a string (e.g. "yellow") or in 0xRRGGBB[AA] format
1810 (e.g. "0xff00ff"), possibly followed by an alpha specifier.
1811 The default value of @var{boxcolor} is "white".
1814 Set an expression which specifies if the text should be drawn. If the
1815 expression evaluates to 0, the text is not drawn. This is useful for
specifying that the text should be drawn only when specific conditions
are met.

Default value is "1".
1821 See below for the list of accepted constants and functions.
1824 If true, check and fix text coords to avoid clipping.
1827 The color to be used for drawing fonts.
1828 Either a string (e.g. "red") or in 0xRRGGBB[AA] format
1829 (e.g. "0xff000033"), possibly followed by an alpha specifier.
1830 The default value of @var{fontcolor} is "black".
1833 The font file to be used for drawing text. Path must be included.
1834 This parameter is mandatory.
1837 The font size to be used for drawing text.
1838 The default value of @var{fontsize} is 16.
1841 Flags to be used for loading the fonts.
1843 The flags map the corresponding flags supported by libfreetype, and are
1844 a combination of the following values:
1851 @item vertical_layout
1852 @item force_autohint
1855 @item ignore_global_advance_width
1857 @item ignore_transform
1864 Default value is "render".
For more information consult the documentation for the FT_LOAD_*
libfreetype flags.
1870 The color to be used for drawing a shadow behind the drawn text. It
1871 can be a color name (e.g. "yellow") or a string in the 0xRRGGBB[AA]
1872 form (e.g. "0xff00ff"), possibly followed by an alpha specifier.
1873 The default value of @var{shadowcolor} is "black".
1875 @item shadowx, shadowy
1876 The x and y offsets for the text shadow position with respect to the
1877 position of the text. They can be either positive or negative
1878 values. Default value for both is "0".
1881 The size in number of spaces to use for rendering the tab.
1885 Set the initial timecode representation in "hh:mm:ss[:;.]ff"
1886 format. It can be used with or without text parameter. @var{timecode_rate}
1887 option must be specified.
1889 @item timecode_rate, rate, r
1890 Set the timecode frame rate (timecode only).
The text string to be drawn. The text must be a sequence of UTF-8
encoded characters.
This parameter is mandatory if no file is specified with the parameter
@var{textfile}.
1899 A text file containing text to be drawn. The text must be a sequence
1900 of UTF-8 encoded characters.
1902 This parameter is mandatory if no text string is specified with the
1903 parameter @var{text}.
1905 If both @var{text} and @var{textfile} are specified, an error is thrown.
The expressions which specify the offsets where text will be drawn
within the video frame. They are relative to the top/left border of the
output image.

The default value of @var{x} and @var{y} is "0".
1914 See below for the list of accepted constants and functions.
1917 The parameters for @var{x} and @var{y} are expressions containing the
1918 following constants and functions:
1922 input display aspect ratio, it is the same as (@var{w} / @var{h}) * @var{sar}
1925 horizontal and vertical chroma subsample values. For example for the
1926 pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
1929 the height of each text line
1937 @item max_glyph_a, ascent
1938 the maximum distance from the baseline to the highest/upper grid
coordinate used to place a glyph outline point, for all the rendered
glyphs.
It is a positive value, due to the grid's orientation with the Y axis
upwards.
1944 @item max_glyph_d, descent
1945 the maximum distance from the baseline to the lowest grid coordinate
1946 used to place a glyph outline point, for all the rendered glyphs.
This is a negative value, due to the grid's orientation, with the Y axis
upwards.
maximum glyph height, that is the maximum height for all the glyphs
contained in the rendered text; it is equivalent to @var{ascent} -
@var{descent}.
1956 maximum glyph width, that is the maximum width for all the glyphs
1957 contained in the rendered text
1960 the number of input frame, starting from 0
1962 @item rand(min, max)
1963 return a random number included between @var{min} and @var{max}
1966 input sample aspect ratio
1969 timestamp expressed in seconds, NAN if the input timestamp is unknown
1972 the height of the rendered text
1975 the width of the rendered text
1978 the x and y offset coordinates where the text is drawn.
These parameters allow the @var{x} and @var{y} expressions to refer
to each other, so you can for example specify @code{y=x/dar}.
1984 If libavfilter was built with @code{--enable-fontconfig}, then
1985 @option{fontfile} can be a fontconfig pattern or omitted.
1987 Some examples follow.
1992 Draw "Test Text" with font FreeSerif, using the default values for the
1993 optional parameters.
@example
drawtext="fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf: text='Test Text'"
@end example
2000 Draw 'Test Text' with font FreeSerif of size 24 at position x=100
2001 and y=50 (counting from the top-left corner of the screen), text is
yellow with a red box around it. Both the text and the box have an
opacity of 20%:

@example
drawtext="fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf: text='Test Text':\
          x=100: y=50: fontsize=24: fontcolor=yellow@@0.2: box=1: boxcolor=red@@0.2"
@end example
2010 Note that the double quotes are not necessary if spaces are not used
2011 within the parameter list.
2014 Show the text at the center of the video frame:
@example
drawtext="fontsize=30:fontfile=FreeSerif.ttf:text='hello world':x=(w-text_w)/2:y=(h-text_h-line_h)/2"
@end example
2020 Show a text line sliding from right to left in the last row of the video
frame. The file @file{LONG_LINE} is assumed to contain a single line
with no newlines.
@example
drawtext="fontsize=15:fontfile=FreeSerif.ttf:text=LONG_LINE:y=h-line_h:x=-50*t"
@end example
2028 Show the content of file @file{CREDITS} off the bottom of the frame and scroll up.
2030 drawtext="fontsize=20:fontfile=FreeSerif.ttf:textfile=CREDITS:y=h-20*t"
2034 Draw a single green letter "g", at the center of the input video.
2035 The glyph baseline is placed at half screen height.
2037 drawtext="fontsize=60:fontfile=FreeSerif.ttf:fontcolor=green:text=g:x=(w-max_glyph_w)/2:y=h/2-ascent"
2041 Show text for 1 second every 3 seconds:
2043 drawtext="fontfile=FreeSerif.ttf:fontcolor=white:x=100:y=x/dar:draw=lt(mod(t\,3)\,1):text='blink'"
2047 Use fontconfig to set the font. Note that the colons need to be escaped.
2049 drawtext='fontfile=Linux Libertine O-40\:style=Semibold:text=FFmpeg'
2054 For more information about libfreetype, check:
2055 @url{http://www.freetype.org/}.
2057 For more information about fontconfig, check:
2058 @url{http://freedesktop.org/software/fontconfig/fontconfig-user.html}.
2062 Detect and draw edges. The filter uses the Canny Edge Detection algorithm.
2064 This filter accepts the following optional named parameters:
2068 Set low and high threshold values used by the Canny thresholding algorithm.
2071 The high threshold selects the "strong" edge pixels, which are then
2072 connected through 8-connectivity with the "weak" edge pixels selected
2073 by the low threshold.
2075 @var{low} and @var{high} threshold values must be chosen in the range
2076 [0,1], and @var{low} should be less than or equal to @var{high}.
2078 Default value for @var{low} is @code{20/255}, and default value for @var{high} is @code{50/255}.
2084 edgedetect=low=0.1:high=0.4
2089 Apply fade-in/out effect to input video.
2091 It accepts the parameters:
2092 @var{type}:@var{start_frame}:@var{nb_frames}[:@var{options}]
2094 @var{type} specifies the effect type, and can be either "in" for a
2095 fade-in, or "out" for a fade-out effect.
2097 @var{start_frame} specifies the number of the start frame for starting
2098 to apply the fade effect.
2100 @var{nb_frames} specifies the number of frames for which the fade
2101 effect has to last. At the end of the fade-in effect the output video
2102 will have the same intensity as the input video; at the end of the
2103 fade-out transition the output video will be completely black.
2105 @var{options} is an optional sequence of @var{key}=@var{value} pairs,
2106 separated by ":". The description of the accepted options follows.
2113 @item start_frame, s
2114 See @var{start_frame}.
2117 See @var{nb_frames}.
2120 If set to 1, fade only alpha channel, if one exists on the input.
2124 A few usage examples follow; they can also be used as test scenarios.
2126 # fade in first 30 frames of video
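fade=in:0:30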
2129 # fade out last 45 frames of a 200-frame video
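fade=out:155:45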
2132 # fade in first 25 frames and fade out last 25 frames of a 1000-frame video
2133 fade=in:0:25, fade=out:975:25
2135 # make first 5 frames black, then fade in from frame 5-24
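fade=in:5:20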
2138 # fade in alpha over first 25 frames of video
2139 fade=in:0:25:alpha=1
2144 Extract a single field from an interlaced image using stride
2145 arithmetic to avoid wasting CPU time. The output frames are marked as non-interlaced.
2148 This filter accepts the following named options:
2151 Specify whether to extract the top (if the value is @code{0} or
2152 @code{top}) or the bottom field (if the value is @code{1} or @code{bottom}).
2156 If the option key is not specified, the first value sets the @var{type}
2157 option. For example:
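field=bottom
which is equivalent to:
field=type=bottom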
2169 Transform the field order of the input video.
2171 It accepts one parameter which specifies the required field order that
2172 the input interlaced video will be transformed to. The parameter can
2173 assume one of the following values:
2177 output bottom field first
2179 output top field first
2182 Default value is "tff".
2184 Transformation is achieved by shifting the picture content up or down
2185 by one line, and filling the remaining line with appropriate picture content.
2186 This method is consistent with most broadcast field order converters.
2188 If the input video is not flagged as being interlaced, or it is already
2189 flagged as being of the required output field order, then this filter does
2190 not alter the incoming video.
2192 This filter is very useful when converting to or from PAL DV material,
2193 which is bottom field first.
2197 ffmpeg -i in.vob -vf "fieldorder=bff" out.dv
2202 Buffer input images and send them when they are requested.
2204 This filter is mainly useful when auto-inserted by the libavfilter framework.
2207 The filter does not take parameters.
2211 Convert the input video to one of the specified pixel formats.
2212 Libavfilter will try to pick one that is supported for the input to the next filter.
2215 The filter accepts a list of pixel format names, separated by ":",
2216 for example "yuv420p:monow:rgb24".
2218 Some examples follow:
2220 # convert the input video to the format "yuv420p"
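format=yuv420p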
2223 # convert the input video to any of the formats in the list
2224 format=yuv420p:yuv444p:yuv410p
2229 Convert the video to a specified constant frame rate by duplicating or dropping
2230 frames as necessary.
2232 This filter accepts the following named parameters:
2236 Desired output framerate.
2239 Rounding method. The default is @code{near}.
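As a minimal sketch of the named-parameter syntax above (the option keys @option{fps} and @option{round} are assumed here), the following converts the input to a constant 25 frames per second using the default rounding method:
fps=fps=25:round=near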
2245 Select one frame every N.
2247 This filter accepts as input a string representing a positive
2248 integer. The default argument is @code{1}.
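For example, the following (a minimal sketch) passes only one frame out of every 5 to the output:
framestep=5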
2253 Apply a frei0r effect to the input video.
2255 To enable compilation of this filter you need to install the frei0r
2256 header and configure FFmpeg with @code{--enable-frei0r}.
2258 The filter supports the syntax:
2260 @var{filter_name}[@{:|=@}@var{param1}:@var{param2}:...:@var{paramN}]
2263 @var{filter_name} is the name of the frei0r effect to load. If the
2264 environment variable @env{FREI0R_PATH} is defined, the frei0r effect
2265 is searched for in each one of the directories specified by the colon (or
2266 semicolon on Windows platforms) separated list in @env{FREI0R_PATH},
2267 otherwise in the standard frei0r paths, which are in this order:
2268 @file{HOME/.frei0r-1/lib/}, @file{/usr/local/lib/frei0r-1/},
2269 @file{/usr/lib/frei0r-1/}.
2271 @var{param1}, @var{param2}, ... , @var{paramN} specify the parameters
2272 for the frei0r effect.
2274 A frei0r effect parameter can be a boolean (whose values are specified
2275 with "y" and "n"), a double, a color (specified by the syntax
2276 @var{R}/@var{G}/@var{B}, where @var{R}, @var{G}, and @var{B} are float
2277 numbers from 0.0 to 1.0, or by an @code{av_parse_color()} color
2278 description), a position (specified by the syntax @var{X}/@var{Y},
2279 where @var{X} and @var{Y} are float numbers) or a string.
2281 The number and kind of parameters depend on the loaded effect. If an
2282 effect parameter is not specified, its default value is used.
2284 Some examples follow:
2288 Apply the distort0r effect, set the first two double parameters:
2290 frei0r=distort0r:0.5:0.01
2294 Apply the colordistance effect, which takes a color as its first parameter:
2296 frei0r=colordistance:0.2/0.3/0.4
2297 frei0r=colordistance:violet
2298 frei0r=colordistance:0x112233
2302 Apply the perspective effect, specifying the top left and top right image positions:
2305 frei0r=perspective:0.2/0.2:0.8/0.2
2309 For more information see:
2310 @url{http://frei0r.dyne.org}
2314 Fix the banding artifacts that are sometimes introduced into nearly flat
2315 regions by truncation to 8-bit color depth.
2316 Interpolate the gradients that should go where the bands are, and dither them.
2319 This filter is designed for playback only. Do not use it prior to
2320 lossy compression, because compression tends to lose the dither and
2321 bring back the bands.
2323 The filter takes two optional parameters, separated by ':':
2324 @var{strength}:@var{radius}
2326 @var{strength} is the maximum amount by which the filter will change
2327 any one pixel. It is also the threshold for detecting nearly flat
2328 regions. Acceptable values range from .51 to 255; the default value is
2329 1.2, and out-of-range values will be clipped to the valid range.
2331 @var{radius} is the neighborhood to fit the gradient to. A larger
2332 radius makes for smoother gradients, but also prevents the filter from
2333 modifying the pixels near detailed regions. Acceptable values are
2334 8-32; the default value is 16, and out-of-range values will be clipped to the valid range.
2338 # default parameters
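gradfun=1.2:16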
2347 Flip the input video horizontally.
2349 For example to horizontally flip the input video with @command{ffmpeg}:
2351 ffmpeg -i in.avi -vf "hflip" out.avi
2356 High precision/quality 3d denoise filter. This filter aims to reduce
2357 image noise, producing smooth images and making still images really
2358 still. It should enhance compressibility.
2360 It accepts the following optional parameters:
2361 @var{luma_spatial}:@var{chroma_spatial}:@var{luma_tmp}:@var{chroma_tmp}
2365 a non-negative float number which specifies spatial luma strength, defaults to 4.0
2368 @item chroma_spatial
2369 a non-negative float number which specifies spatial chroma strength,
2370 defaults to 3.0*@var{luma_spatial}/4.0
2373 a float number which specifies luma temporal strength, defaults to
2374 6.0*@var{luma_spatial}/4.0
2377 a float number which specifies chroma temporal strength, defaults to
2378 @var{luma_tmp}*@var{chroma_spatial}/@var{luma_spatial}
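A minimal sketch using the positional syntax above, giving only a spatial luma strength of 8 and letting the remaining parameters take their derived defaults:
hqdn3d=8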
2383 Modify the hue and/or the saturation of the input.
2385 This filter accepts the following optional named options:
2389 Specify the hue angle as a number of degrees. It accepts a float
2390 number or an expression, and defaults to 0.0.
2393 Specify the hue angle as a number of radians. It accepts a float
2394 number or an expression, and defaults to 0.0.
2397 Specify the saturation in the [-10,10] range. It accepts a float number and defaults to 1.0.
2401 The @var{h}, @var{H} and @var{s} parameters are expressions containing the
2402 following constants:
2406 frame count of the input frame starting from 0
2409 presentation timestamp of the input frame expressed in time base units
2412 frame rate of the input video, NAN if the input frame rate is unknown
2415 timestamp expressed in seconds, NAN if the input timestamp is unknown
2418 time base of the input video
2421 The options can also be set using the syntax: @var{hue}:@var{saturation}
2423 In this case @var{hue} is expressed in degrees.
2425 Some examples follow:
2428 Set the hue to 90 degrees and the saturation to 1.0:
2434 Same command but expressing the hue in radians:
2440 Same command without named options, hue must be expressed in degrees:
2446 Note that "h:s" syntax does not support expressions for the values of
2447 h and s, so the following example will issue an error:
2453 Rotate hue and make the saturation swing between 0
2454 and 2 over a period of 1 second:
2456 hue="H=2*PI*t: s=sin(2*PI*t)+1"
2460 Apply a 3-second saturation fade-in effect starting at 0:
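hue="s=min(t/3\,1)"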
2465 The general fade-in expression can be written as:
2467 hue="s=max(0\, min(1\, (t-START)/DURATION))"
2471 Apply a 3-second saturation fade-out effect starting at 5 seconds:
2473 hue="s=max(0\, min(1\, (8-t)/3))"
2476 The general fade-out expression can be written as:
2478 hue="s=max(0\, min(1\, (START+DURATION-t)/DURATION))"
2483 @subsection Commands
2485 This filter supports the following command:
2488 Modify the hue and/or the saturation of the input video.
2489 The command accepts the same named options and syntax as when calling the
2490 filter from the command-line.
2492 If a parameter is omitted, it is kept at its current value.
2497 Interlacing detection filter. This filter tries to detect whether the input is
2498 interlaced or progressive, and whether it is top or bottom field first.
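For example, the following (a minimal sketch; the @code{null} muxer is used only to discard the decoded frames) runs the detection over a file without writing any output:
ffmpeg -i INPUT -vf idet -f null -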
2500 @section lut, lutrgb, lutyuv
2502 Compute a look-up table for binding each pixel component input value
2503 to an output value, and apply it to input video.
2505 @var{lutyuv} applies a lookup table to a YUV input video, @var{lutrgb}
2506 to an RGB input video.
2508 These filters accept as input a ":"-separated list of options, which
2509 specify the expressions used for computing the lookup table for the
2510 corresponding pixel component values.
2512 The @var{lut} filter requires either YUV or RGB pixel formats in
2513 input, and accepts the options:
2515 @item @var{c0} (first pixel component)
2516 @item @var{c1} (second pixel component)
2517 @item @var{c2} (third pixel component)
2518 @item @var{c3} (fourth pixel component, corresponds to the alpha component)
2521 The exact component associated with each option depends on the format in input.
2524 The @var{lutrgb} filter requires RGB pixel formats in input, and
2525 accepts the options:
2527 @item @var{r} (red component)
2528 @item @var{g} (green component)
2529 @item @var{b} (blue component)
2530 @item @var{a} (alpha component)
2533 The @var{lutyuv} filter requires YUV pixel formats in input, and
2534 accepts the options:
2536 @item @var{y} (Y/luminance component)
2537 @item @var{u} (U/Cb component)
2538 @item @var{v} (V/Cr component)
2539 @item @var{a} (alpha component)
2542 The expressions can contain the following constants and functions:
2546 the input width and height
2549 input value for the pixel component
2552 the input value clipped in the @var{minval}-@var{maxval} range
2555 maximum value for the pixel component
2558 minimum value for the pixel component
2561 the negated value for the pixel component value clipped in the
2562 @var{minval}-@var{maxval} range; it corresponds to the expression
2563 "maxval-clipval+minval"
2566 the computed value in @var{val} clipped in the
2567 @var{minval}-@var{maxval} range
2569 @item gammaval(gamma)
2570 the computed gamma correction value of the pixel component value
2571 clipped in the @var{minval}-@var{maxval} range; it corresponds to the expression
2573 "pow((clipval-minval)/(maxval-minval)\,@var{gamma})*(maxval-minval)+minval"
2577 All expressions default to "val".
2579 Some examples follow:
2581 # negate input video
2582 lutrgb="r=maxval+minval-val:g=maxval+minval-val:b=maxval+minval-val"
2583 lutyuv="y=maxval+minval-val:u=maxval+minval-val:v=maxval+minval-val"
2585 # the above is the same as
2586 lutrgb="r=negval:g=negval:b=negval"
2587 lutyuv="y=negval:u=negval:v=negval"
2592 # remove chroma components, turns the video into a graytone image
2593 lutyuv="u=128:v=128"
2595 # apply a luma burning effect
2598 # remove green and blue components
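lutrgb="g=0:b=0"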
2601 # set a constant alpha channel value on input
2602 format=rgba,lutrgb=a="maxval-minval/2"
2604 # correct luminance gamma by a 0.5 factor
2605 lutyuv=y=gammaval(0.5)
2610 Apply an MPlayer filter to the input video.
2612 This filter provides a wrapper around most of the filters of MPlayer/MEncoder.
2615 This wrapper is considered experimental. Some of the wrapped filters
2616 may not work properly and we may drop support for them, as they will
2617 be implemented natively into FFmpeg. Thus you should avoid
2618 depending on them when writing portable scripts.
2620 The filter accepts the parameters:
2621 @var{filter_name}[:=]@var{filter_params}
2623 @var{filter_name} is the name of a supported MPlayer filter,
2624 @var{filter_params} is a string containing the parameters accepted by the named filter.
2627 The list of the currently supported filters follows:
2667 The parameter syntax and behavior for the listed filters are the same
2668 as those of the corresponding MPlayer filters. For detailed instructions check
2669 the "VIDEO FILTERS" section in the MPlayer manual.
2671 Some examples follow:
2674 Adjust gamma, brightness, contrast:
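# illustrative values: gamma=1.0, contrast=2, brightness=0.5
mp=eq2=1.0:2:0.5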
2680 Add temporal noise to input video:
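# temporal noise of strength 20 (illustrative value)
mp=noise=20t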
2686 See also mplayer(1), @url{http://www.mplayerhq.hu/}.
2692 This filter accepts an integer as input; if it is non-zero it also negates the
2693 alpha component (if available). The default value is 0.
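For example, with @command{ffmpeg} (a minimal sketch using the default value, so the alpha component is left untouched):
ffmpeg -i in.avi -vf "negate" out.avi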
2697 Force libavfilter not to use any of the specified pixel formats for the
2698 input to the next filter.
2700 The filter accepts a list of pixel format names, separated by ":",
2701 for example "yuv420p:monow:rgb24".
2703 Some examples follow:
2705 # force libavfilter to use a format different from "yuv420p" for the
2706 # input to the vflip filter
2707 noformat=yuv420p,vflip
2709 # convert the input video to any of the formats not contained in the list
2710 noformat=yuv420p:yuv444p:yuv410p
2715 Pass the video source unchanged to the output.
2719 Apply video transform using libopencv.
2721 To enable this filter, install the libopencv library and headers and
2722 configure FFmpeg with @code{--enable-libopencv}.
2724 The filter takes the parameters: @var{filter_name}@{:=@}@var{filter_params}.
2726 @var{filter_name} is the name of the libopencv filter to apply.
2728 @var{filter_params} specifies the parameters to pass to the libopencv
2729 filter. If not specified the default values are assumed.
2731 Refer to the official libopencv documentation for more precise information:
2733 @url{http://opencv.willowgarage.com/documentation/c/image_filtering.html}
2735 The list of supported libopencv filters follows.
2740 Dilate an image by using a specific structuring element.
2741 This filter corresponds to the libopencv function @code{cvDilate}.
2743 It accepts the parameters: @var{struct_el}:@var{nb_iterations}.
2745 @var{struct_el} represents a structuring element, and has the syntax:
2746 @var{cols}x@var{rows}+@var{anchor_x}x@var{anchor_y}/@var{shape}
2748 @var{cols} and @var{rows} represent the number of columns and rows of
2749 the structuring element, @var{anchor_x} and @var{anchor_y} the anchor
2750 point, and @var{shape} the shape for the structuring element, and
2751 can be one of the values "rect", "cross", "ellipse", "custom".
2753 If the value for @var{shape} is "custom", it must be followed by a
2754 string of the form "=@var{filename}". The file with name
2755 @var{filename} is assumed to represent a binary image, with each
2756 printable character corresponding to a bright pixel. When a custom
2757 @var{shape} is used, @var{cols} and @var{rows} are ignored, and the number
2758 of columns and rows of the read file are assumed instead.
2760 The default value for @var{struct_el} is "3x3+0x0/rect".
2762 @var{nb_iterations} specifies the number of times the transform is
2763 applied to the image, and defaults to 1.
2765 Some examples follow:
2767 # use the default values
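ocv=dilate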
2770 # dilate using a structuring element with a 5x5 cross, iterate two times
2771 ocv=dilate=5x5+2x2/cross:2
2773 # read the shape from the file diamond.shape, iterate two times
2774 # the file diamond.shape may contain a pattern of characters like this:
2780 # the specified cols and rows are ignored (but not the anchor point coordinates)
2781 ocv=0x0+2x2/custom=diamond.shape:2
2786 Erode an image by using a specific structuring element.
2787 This filter corresponds to the libopencv function @code{cvErode}.
2789 The filter accepts the parameters: @var{struct_el}:@var{nb_iterations},
2790 with the same syntax and semantics as the @ref{dilate} filter.
2794 Smooth the input video.
2796 The filter takes the following parameters:
2797 @var{type}:@var{param1}:@var{param2}:@var{param3}:@var{param4}.
2799 @var{type} is the type of smooth filter to apply, and can be one of
2800 the following values: "blur", "blur_no_scale", "median", "gaussian",
2801 "bilateral". The default value is "gaussian".
2803 @var{param1}, @var{param2}, @var{param3}, and @var{param4} are
2804 parameters whose meanings depend on the smooth type. @var{param1} and
2805 @var{param2} accept positive integer values or 0, @var{param3} and
2806 @var{param4} accept float values.
2808 The default value for @var{param1} is 3, the default value for the
2809 other parameters is 0.
2811 These parameters correspond to the parameters assigned to the
2812 libopencv function @code{cvSmooth}.
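A minimal sketch, assuming the parameter order above: apply a simple blur with a 5x5 kernel:
ocv=smooth=blur:5:5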
2817 Overlay one video on top of another.
2819 It takes two inputs and one output: the first input is the "main"
2820 video on which the second input is overlaid.
2822 It accepts the parameters: @var{x}:@var{y}[:@var{options}].
2824 @var{x} is the x coordinate of the overlayed video on the main video,
2825 @var{y} is the y coordinate. @var{x} and @var{y} are expressions containing
2826 the following parameters:
2829 @item main_w, main_h
2830 main input width and height
2833 same as @var{main_w} and @var{main_h}
2835 @item overlay_w, overlay_h
2836 overlay input width and height
2839 same as @var{overlay_w} and @var{overlay_h}
2842 @var{options} is an optional list of @var{key}=@var{value} pairs, separated by ":".
2845 The description of the accepted options follows.
2849 If set to 1, force the filter to accept inputs in the RGB
2850 color space. Default value is 0.
2853 Be aware that frames are taken from each input video in timestamp
2854 order, hence, if their initial timestamps differ, it is a good idea
2855 to pass the two inputs through a @var{setpts=PTS-STARTPTS} filter to
2856 have them begin at the same zero timestamp, as the example for
2857 the @var{movie} filter does.
2859 Some examples follow:
2861 # draw the overlay at 10 pixels from the bottom right
2862 # corner of the main video.
2863 overlay=main_w-overlay_w-10:main_h-overlay_h-10
2865 # insert a transparent PNG logo in the bottom left corner of the input
2866 ffmpeg -i input -i logo -filter_complex 'overlay=10:main_h-overlay_h-10' output
2868 # insert 2 different transparent PNG logos (second logo on bottom right corner):
2870 ffmpeg -i input -i logo1 -i logo2 -filter_complex
2871 'overlay=10:H-h-10,overlay=W-w-10:H-h-10' output
2873 # add a transparent color layer on top of the main video,
2874 # WxH specifies the size of the main input to the overlay filter
2875 color=red@@.3:WxH [over]; [in][over] overlay [out]
2877 # play an original video and a filtered version (here with the deshake filter)
2879 ffplay input.avi -vf 'split[a][b]; [a]pad=iw*2:ih[src]; [b]deshake[filt]; [src][filt]overlay=w'
2881 # the previous example is the same as:
2882 ffplay input.avi -vf 'split[b], pad=iw*2[src], [b]deshake, [src]overlay=w'
2885 You can chain together more overlays, but the efficiency of such an
2886 approach is yet to be tested.
2890 Add padding to the input image, and place the original input at the
2891 given coordinates @var{x}, @var{y}.
2893 It accepts the following parameters:
2894 @var{width}:@var{height}:@var{x}:@var{y}:@var{color}.
2896 The parameters @var{width}, @var{height}, @var{x}, and @var{y} are
2897 expressions containing the following constants:
2901 the input video width and height
2904 same as @var{in_w} and @var{in_h}
2907 the output width and height, that is the size of the padded area as
2908 specified by the @var{width} and @var{height} expressions
2911 same as @var{out_w} and @var{out_h}
2914 x and y offsets as specified by the @var{x} and @var{y}
2915 expressions, or NAN if not yet specified
2918 same as @var{iw} / @var{ih}
2921 input sample aspect ratio
2924 input display aspect ratio, it is the same as (@var{iw} / @var{ih}) * @var{sar}
2927 horizontal and vertical chroma subsample values. For example for the
2928 pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
2931 A description of the accepted parameters follows.
2936 Specify the size of the output image with the paddings added. If the
2937 value for @var{width} or @var{height} is 0, the corresponding input size
2938 is used for the output.
2940 The @var{width} expression can reference the value set by the
2941 @var{height} expression, and vice versa.
2943 The default value of @var{width} and @var{height} is 0.
2947 Specify the offsets where to place the input image in the padded area
2948 with respect to the top/left border of the output image.
2950 The @var{x} expression can reference the value set by the @var{y}
2951 expression, and vice versa.
2953 The default value of @var{x} and @var{y} is 0.
2957 Specify the color of the padded area. It can be the name of a color
2958 (case insensitive match) or a 0xRRGGBB[AA] sequence.
2960 The default value of @var{color} is "black".
2964 @subsection Examples
2968 Add padding with color "violet" to the input video. Output video
2969 size is 640x480, and the top-left corner of the input video is placed at column 0, row 40:
2972 pad=640:480:0:40:violet
2976 Pad the input to get an output with dimensions increased by 3/2,
2977 and put the input video at the center of the padded area:
2979 pad="3/2*iw:3/2*ih:(ow-iw)/2:(oh-ih)/2"
2983 Pad the input to get a squared output with size equal to the maximum
2984 value between the input width and height, and put the input video at
2985 the center of the padded area:
2987 pad="max(iw\,ih):ow:(ow-iw)/2:(oh-ih)/2"
2991 Pad the input to get a final w/h ratio of 16:9:
2993 pad="ih*16/9:ih:(ow-iw)/2:(oh-ih)/2"
2997 In case of anamorphic video, in order to set the output display aspect
2998 correctly, it is necessary to use @var{sar} in the expression,
2999 according to the relation:
3001 (ih * X / ih) * sar = output_dar
3002 X = output_dar / sar
3005 Thus the previous example needs to be modified to:
3007 pad="ih*16/9/sar:ih:(ow-iw)/2:(oh-ih)/2"
3011 Double output size and put the input video in the bottom-right
3012 corner of the output padded area:
3014 pad="2*iw:2*ih:ow-iw:oh-ih"
3018 @section pixdesctest
3020 Pixel format descriptor test filter, mainly useful for internal
3021 testing. The output video should be equal to the input video.
3025 format=monow, pixdesctest
3028 can be used to test the monowhite pixel format descriptor definition.
3032 Suppress a TV station logo, using an image file to determine which
3033 pixels comprise the logo. It works by filling in the pixels that
3034 comprise the logo with neighboring pixels.
3036 This filter requires one argument which specifies the filter bitmap
3037 file, which can be any image format supported by libavformat. The
3038 width and height of the image file must match those of the video
3039 stream being processed.
3041 Pixels in the provided bitmap image with a value of zero are not
3042 considered part of the logo, non-zero pixels are considered part of
3043 the logo. If you use white (255) for the logo and black (0) for the
3044 rest, you will be safe. For making the filter bitmap, it is
3045 recommended to take a screen capture of a black frame with the logo
3046 visible, and then to use a threshold filter followed by the erode
3047 filter once or twice.
3049 If needed, little splotches can be fixed manually. Remember that if
3050 logo pixels are not covered, the filter quality will be much
3051 reduced. Marking too many pixels as part of the logo does not hurt as
3052 much, but it will increase the amount of blurring needed to cover over
3053 the image and will destroy more information than necessary, and extra
3054 pixels will slow things down on a large logo.
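A minimal sketch with @command{ffmpeg} (the file names are only illustrative; the mask must match the video dimensions as described above):
ffmpeg -i input.avi -vf removelogo=logo_mask.png output.avi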
3058 Scale (resize) the input video, using the libswscale library.
3060 The scale filter forces the output display aspect ratio to be the same
3061 as that of the input, by changing the output sample aspect ratio.
3063 This filter accepts a list of named options in the form of
3064 @var{key}=@var{value} pairs separated by ":". If the key for the first
3065 two options is not specified, the assumed keys for the first two
3066 values are @code{w} and @code{h}. If the first option has no key and
3067 can be interpreted as a video size specification, it will be used
3068 to set the video size.
3070 A description of the accepted options follows.
3074 Set the video width expression, default value is @code{iw}. See below
3075 for the list of accepted constants.
3078 Set the video height expression, default value is @code{ih}.
3079 See below for the list of accepted constants.
3082 Set the interlacing. It accepts the following values:
3086 force interlaced aware scaling
3089 do not apply interlaced scaling
3092 select interlaced aware scaling depending on whether the source frames
3093 are flagged as interlaced or not
3096 Default value is @code{0}.
3099 Set libswscale scaling flags. If not explicitly specified the filter
3100 applies a bilinear scaling algorithm.
3103 Set the video size, the value must be a valid abbreviation or in the
3104 form @var{width}x@var{height}.
3107 The values of the @var{w} and @var{h} options are expressions
3108 containing the following constants:
3112 the input width and height
3115 same as @var{in_w} and @var{in_h}
3118 the output (scaled) width and height
3121 same as @var{out_w} and @var{out_h}
3124 same as @var{iw} / @var{ih}
3127 input sample aspect ratio
3130 input display aspect ratio, it is the same as (@var{iw} / @var{ih}) * @var{sar}
3133 horizontal and vertical chroma subsample values. For example for the
3134 pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
3137 If the input image format is different from the format requested by
3138 the next filter, the scale filter will convert the input to the requested format.
3141 If the value for @var{width} or @var{height} is 0, the respective input
3142 size is used for the output.
3144 If the value for @var{width} or @var{height} is -1, the scale filter will
3145 use, for the respective output size, a value that maintains the aspect
3146 ratio of the input image.
3148 @subsection Examples
3152 Scale the input video to a size of 200x100:
3157 This is equivalent to:
3168 Specify a size abbreviation for the output size:
3173 which can also be written as:
3179 Scale the input to 2x:
3185 The above is the same as:
3191 Scale the input to 2x with forced interlaced scaling:
3193 scale=2*iw:2*ih:interl=1
3197 Scale the input to half size:
3203 Increase the width, and set the height to the same size:
3209 Seek for Greek harmony:
3216 Increase the height, and set the width to 3/2 of the height:
3222 Increase the size, but make the size a multiple of the chroma:
3224 scale="trunc(3/2*iw/hsub)*hsub:trunc(3/2*ih/vsub)*vsub"
3228 Increase the width to a maximum of 500 pixels, keeping the same input aspect ratio:
3231 scale='min(500\, iw*3/2):-1'
3236 Select frames to pass in output.
3238 It accepts as input an expression, which is evaluated for each input
3239 frame. If the expression evaluates to a non-zero value, the frame
3240 is selected and passed to the output, otherwise it is discarded.
3242 The expression can contain the following constants:
3246 the sequential number of the filtered frame, starting from 0
3249 the sequential number of the selected frame, starting from 0
3251 @item prev_selected_n
3252 the sequential number of the last selected frame, NAN if undefined
3255 timebase of the input timestamps
3258 the PTS (Presentation TimeStamp) of the filtered video frame,
3259 expressed in @var{TB} units, NAN if undefined
3262 the PTS (Presentation TimeStamp) of the filtered video frame,
3263 expressed in seconds, NAN if undefined
3266 the PTS of the previously filtered video frame, NAN if undefined
3268 @item prev_selected_pts
3269 the PTS of the last previously selected video frame, NAN if undefined
3271 @item prev_selected_t
3272 the PTS of the last previously selected video frame, NAN if undefined
3275 the PTS of the first video frame in the video, NAN if undefined
3278 the time of the first video frame in the video, NAN if undefined
3281 the type of the filtered frame; it can assume one of the following values:
3293 @item interlace_type
3294 the frame interlace type, can assume one of the following values:
3297 the frame is progressive (not interlaced)
3299 the frame is top-field-first
3301 the frame is bottom-field-first
3305 1 if the filtered frame is a key-frame, 0 otherwise
3308 the position in the file of the filtered frame, -1 if the information
3309 is not available (e.g. for synthetic video)
3312 value between 0 and 1 to indicate a new scene; a low value reflects a low
3313 probability for the current frame to introduce a new scene, while a higher
3314 value means the current frame is more likely to be one (see the example below)
3318 The default value of the select expression is "1".
3320 Some examples follow:
3323 # select all frames in input
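select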
3326 # the above is the same as:
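select=1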
3332 # select only I-frames
3333 select='eq(pict_type\,I)'
3335 # select one frame every 100
3336 select='not(mod(n\,100))'
3338 # select only frames contained in the 10-20 time interval
3339 select='gte(t\,10)*lte(t\,20)'
3341 # select only I frames contained in the 10-20 time interval
3342 select='gte(t\,10)*lte(t\,20)*eq(pict_type\,I)'
3344 # select frames with a minimum distance of 10 seconds
3345 select='isnan(prev_selected_t)+gte(t-prev_selected_t\,10)'
3348 Complete example to create a mosaic of the first scenes:
3351 ffmpeg -i video.avi -vf select='gt(scene\,0.4)',scale=160:120,tile -frames:v 1 preview.png
3354 Comparing @var{scene} against a value between 0.3 and 0.5 is generally a sane choice.
3357 @section setdar, setsar
3359 The @code{setdar} filter sets the Display Aspect Ratio for the filter output video.
3362 This is done by changing the specified Sample (aka Pixel) Aspect
3363 Ratio, according to the following equation:
3365 @var{DAR} = @var{HORIZONTAL_RESOLUTION} / @var{VERTICAL_RESOLUTION} * @var{SAR}
3368 Keep in mind that the @code{setdar} filter does not modify the pixel
3369 dimensions of the video frame. Also the display aspect ratio set by
3370 this filter may be changed by later filters in the filterchain,
3371 e.g. in case of scaling or if another "setdar" or a "setsar" filter is applied.
3374 The @code{setsar} filter sets the Sample (aka Pixel) Aspect Ratio for
3375 the filter output video.
3377 Note that as a consequence of the application of this filter, the
3378 output display aspect ratio will change according to the equation above.
3381 Keep in mind that the sample aspect ratio set by the @code{setsar}
3382 filter may be changed by later filters in the filterchain, e.g. if
3383 another "setsar" or a "setdar" filter is applied.
3385 The @code{setdar} and @code{setsar} filters accept a string in the
3386 form @var{num}:@var{den} expressing an aspect ratio, or the following
3387 named options, expressed as a sequence of @var{key}=@var{value} pairs, separated by ":".
3392 Set the maximum integer value to use for expressing numerator and
3393 denominator when reducing the expressed aspect ratio to a rational.
3394 Default value is @code{100}.
3397 Set the aspect ratio used by the filter.
3399 The parameter can be a floating point number string, an expression, or
3400 a string of the form @var{num}:@var{den}, where @var{num} and
3401 @var{den} are the numerator and denominator of the aspect ratio. If
3402 the parameter is not specified, the value "0" is assumed.
3403 When using the form "@var{num}:@var{den}", the @code{:} character should be escaped.
3407 If the keys are omitted in the named options list, the specified values
3408 are assumed to be @var{ratio} and @var{max} in that order.
3410 For example to change the display aspect ratio to 16:9, specify:
3415 The example above is equivalent to:
3420 To change the sample aspect ratio to 10:11, specify:
3425 To set a display aspect ratio of 16:9, and specify a maximum integer value of
3426 1000 in the aspect ratio reduction, use the command:
3428 setdar=ratio='16:9':max=1000
3433 Force field for the output video frame.
3435 The @code{setfield} filter marks the interlace type field for the
3436 output frames. It does not change the input frame, but only sets the
3437 corresponding property, which affects how the frame is treated by
3438 following filters (e.g. @code{fieldorder} or @code{yadif}).
3440 It accepts a string parameter, which can assume the following values:
3443 Keep the same field property.
3446 Mark the frame as bottom-field-first.
3449 Mark the frame as top-field-first.
3452 Mark the frame as progressive.
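For example, a minimal sketch that marks the input as top-field-first before converting its field order:
setfield=tff,fieldorder=bff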
3457 Show a line containing various information for each input video frame.
3458 The input video is not modified.
3460 The shown line contains a sequence of key/value pairs of the form
3461 @var{key}:@var{value}.
3463 A description of each shown parameter follows:
3467 sequential number of the input frame, starting from 0
3470 Presentation TimeStamp of the input frame, expressed as a number of
3471 time base units. The time base unit depends on the filter input pad.
3474 Presentation TimeStamp of the input frame, expressed as a number of
3478 position of the frame in the input stream, -1 if this information is
3479 unavailable and/or meaningless (for example in case of synthetic video)
3485 sample aspect ratio of the input frame, expressed in the form @var{num}/@var{den}
3489 size of the input frame, expressed in the form
3490 @var{width}x@var{height}
3493 interlaced mode ("P" for "progressive", "T" for top field first, "B"
3494 for bottom field first)
3497 1 if the frame is a key frame, 0 otherwise
3500 picture type of the input frame ("I" for an I-frame, "P" for a
3501 P-frame, "B" for a B-frame, "?" for unknown type).
3502 Check also the documentation of the @code{AVPictureType} enum and of
3503 the @code{av_get_picture_type_char} function defined in
3504 @file{libavutil/avutil.h}.
3507 Adler-32 checksum (printed in hexadecimal) of all the planes of the input frame
3509 @item plane_checksum
3510 Adler-32 checksum (printed in hexadecimal) of each plane of the input frame,
3511 expressed in the form "[@var{c0} @var{c1} @var{c2} @var{c3}]"
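For example, the following (a minimal sketch; the @code{null} muxer simply discards the frames) prints one such line per input frame:
ffmpeg -i INPUT -vf showinfo -f null -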
3516 Pass the images of input video on to the next video filter as multiple slices.
3520 ffmpeg -i in.avi -vf "slicify=32" out.avi
3523 The filter accepts the slice height as parameter. If the parameter is
3524 not specified it will use the default value of 16.
3526 Adding this at the beginning of filter chains should make filtering
3527 faster due to better use of the memory cache.
3531 Blur the input video without impacting the outlines.
3533 The filter accepts the following parameters:
3534 @var{luma_radius}:@var{luma_strength}:@var{luma_threshold}[:@var{chroma_radius}:@var{chroma_strength}:@var{chroma_threshold}]
3536 Parameters prefixed by @var{luma} indicate that they work on the
3537 luminance of the pixels whereas parameters prefixed by @var{chroma}
3538 refer to the chrominance of the pixels.
3540 If the chroma parameters are not set, the luma parameters are used for
3541 both the luminance and the chrominance of the pixels.
3543 @var{luma_radius} or @var{chroma_radius} must be a float number in the
3544 range [0.1,5.0] that specifies the variance of the gaussian filter
3545 used to blur the image (slower if larger).
3547 @var{luma_strength} or @var{chroma_strength} must be a float number in
3548 the range [-1.0,1.0] that configures the blurring. A value included in
3549 [0.0,1.0] will blur the image whereas a value included in [-1.0,0.0]
3550 will sharpen the image.
3552 @var{luma_threshold} or @var{chroma_threshold} must be an integer in
3553 the range [-30,30] that is used as a coefficient to determine whether
3554 a pixel should be blurred or not. A value of 0 will filter the entire
3555 image, a value in [0,30] will filter flat areas, and a value
3556 in [-30,0] will filter edges.
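A minimal sketch, assuming the parameter order above: lightly blur only the flat areas of the image (luma radius 1.0, strength 0.6, threshold 10), letting the chroma parameters default to the luma ones:
smartblur=1.0:0.6:10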
3560 Split input video into several identical outputs.
3562 The filter accepts a single parameter which specifies the number of outputs. If
3563 unspecified, it defaults to 2.
3567 ffmpeg -i INPUT -filter_complex split=5 OUTPUT
3569 will create 5 copies of the input video.
3573 [in] split [splitout1][splitout2];
3574 [splitout1] crop=100:100:0:0 [cropout];
3575 [splitout2] pad=200:200:100:100 [padout];
3578 will create two separate outputs from the same input, one cropped and one padded.
3583 Scale the input by 2x and smooth using the Super2xSaI (Scale and
3584 Interpolate) pixel art scaling algorithm.
3586 Useful for enlarging pixel art images without reducing sharpness.
3592 Select the most representative frame in a given sequence of consecutive frames.
3594 It accepts as argument the frames batch size to analyze (default @var{N}=100);
3595 in a set of @var{N} frames, the filter will pick one of them, and then handle
3596 the next batch of @var{N} frames until the end.
3598 Since the filter keeps track of the whole frames sequence, a bigger @var{N}
3599 value will result in a higher memory usage, so a high value is not recommended.
3601 The following example extracts one picture every 50 frames:
3606 Complete example of a thumbnail creation with @command{ffmpeg}:
3608 ffmpeg -i in.avi -vf thumbnail,scale=300:200 -frames:v 1 out.png
3613 Tile several successive frames together.
3615 It accepts as argument the tile size (i.e. the number of lines and columns)
3616 in the form "@var{w}x@var{h}".
3618 For example, produce 8×8 PNG tiles of all keyframes (@option{-skip_frame nokey}) in a movie:
3621 ffmpeg -skip_frame nokey -i file.avi -vf 'scale=128:72,tile=8x8' -an -vsync 0 keyframes%03d.png
3623 The @option{-vsync 0} is necessary to prevent @command{ffmpeg} from
3624 duplicating each output frame to accommodate the originally detected frame rate.
3629 Perform various types of temporal field interlacing.
3631 Frames are counted starting from 1, so the first input frame is considered odd.
3634 This filter accepts a single parameter specifying the mode. Available
3639 Move odd frames into the upper field, even into the lower field,
3640 generating a double height frame at half framerate.
3643 Only output even frames, odd frames are dropped, generating a frame with
3644 unchanged height at half framerate.
3647 Only output odd frames, even frames are dropped, generating a frame with
3648 unchanged height at half framerate.
3651 Expand each frame to full height, but pad alternate lines with black,
3652 generating a frame with double height at the same input framerate.
3654 @item interleave_top, 4
3655 Interleave the upper field from odd frames with the lower field from
3656 even frames, generating a frame with unchanged height at half framerate.
3658 @item interleave_bottom, 5
3659 Interleave the lower field from odd frames with the upper field from
3660 even frames, generating a frame with unchanged height at half framerate.
3662 @item interlacex2, 6
3663 Double frame rate with unchanged height. Frames are inserted each
3664 containing the second temporal field from the previous input frame and
3665 the first temporal field from the next input frame. This mode relies on
3666 the top_field_first flag. Useful for interlaced video displays with no
3667 field synchronisation.
3670 Numeric values are deprecated but are accepted for backward
3671 compatibility reasons.
3673 Default mode is @code{merge}.
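For example, a minimal sketch that merges each pair of input frames into a single double-height interlaced frame:
tinterlace=merge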
3677 Transpose rows with columns in the input video and optionally flip it.
3679 This filter accepts the following named parameters:
3683 Specify the transposition direction. Can assume the following values:
3687 Rotate by 90 degrees counterclockwise and vertically flip (default), that is:
3695 Rotate by 90 degrees clockwise, that is:
3703 Rotate by 90 degrees counterclockwise, that is:
3711 Rotate by 90 degrees clockwise and vertically flip, that is:
3719 For values between 4 and 7, the transposition is only done if the input
3720 video geometry is portrait and not landscape. These values are
3721 deprecated; the @code{passthrough} option should be used instead.
3724 Do not apply the transposition if the input geometry matches the one
3725 specified by the value. It accepts the following values:
3728 Always apply transposition.
3730 Preserve portrait geometry (when @var{height} >= @var{width}).
3732 Preserve landscape geometry (when @var{width} >= @var{height}).
3735 Default value is @code{none}.
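A minimal sketch (the direction key is assumed here to be named @option{dir}): rotate by 90 degrees clockwise, but leave frames that are already portrait untouched:
transpose=dir=1:passthrough=portrait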
3740 Sharpen or blur the input video.
3742 It accepts the following parameters:
3743 @var{luma_msize_x}:@var{luma_msize_y}:@var{luma_amount}:@var{chroma_msize_x}:@var{chroma_msize_y}:@var{chroma_amount}
3745 Negative values for the amount will blur the input video, while positive
3746 values will sharpen. All parameters are optional and default to the
3747 equivalent of the string '5:5:1.0:5:5:0.0'.
3752 Set the luma matrix horizontal size. It can be an integer between 3
3753 and 13, default value is 5.
3756 Set the luma matrix vertical size. It can be an integer between 3
3757 and 13, default value is 5.
3760 Set the luma effect strength. It can be a float number between -2.0
3761 and 5.0, default value is 1.0.
3763 @item chroma_msize_x
3764 Set the chroma matrix horizontal size. It can be an integer between 3
3765 and 13, default value is 5.
3767 @item chroma_msize_y
3768 Set the chroma matrix vertical size. It can be an integer between 3
3769 and 13, default value is 5.
3772 Set the chroma effect strength. It can be a float number between -2.0
3773 and 5.0, default value is 0.0.
3778 # Strong luma sharpen effect parameters
3781 # Strong blur of both luma and chroma parameters
3782 unsharp=7:7:-2:7:7:-2
3784 # Use the default values with @command{ffmpeg}
3785 ffmpeg -i in.avi -vf "unsharp" out.mp4
3790 Flip the input video vertically.
3793 ffmpeg -i in.avi -vf "vflip" out.avi
3798 Deinterlace the input video ("yadif" means "yet another deinterlacing filter").
3801 It accepts the optional parameters: @var{mode}:@var{parity}:@var{auto}.
3803 @var{mode} specifies the interlacing mode to adopt, accepts one of the
3808 output 1 frame for each frame
3810 output 1 frame for each field
3812 like 0 but skips spatial interlacing check
3814 like 1 but skips spatial interlacing check
3819 @var{parity} specifies the picture field parity assumed for the input
3820 interlaced video, accepts one of the following values:
3824 assume top field first
3826 assume bottom field first
3828 enable automatic detection
3831 Default value is -1.
3832 If interlacing is unknown or the decoder does not export this information,
3833 top field first will be assumed.
3835 @var{auto} specifies whether the deinterlacer should trust the interlaced flag
3836 and only deinterlace frames marked as interlaced:
3840 deinterlace all frames
3842 only deinterlace frames marked as interlaced
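For example, a minimal sketch that keeps the default mode and automatic parity detection, but only deinterlaces frames that are flagged as interlaced:
yadif=0:-1:1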
3847 @c man end VIDEO FILTERS
3849 @chapter Video Sources
3850 @c man begin VIDEO SOURCES
3852 Below is a description of the currently available video sources.
3856 Buffer video frames, and make them available to the filter chain.
3858 This source is mainly intended for programmatic use, in particular
3859 through the interface defined in @file{libavfilter/vsrc_buffer.h}.
3861 It accepts a list of options in the form of @var{key}=@var{value} pairs
3862 separated by ":". A description of the accepted options follows.
3867 Specify the size (width and height) of the buffered video frames.
3870 A string representing the pixel format of the buffered video frames.
3871 It may be a number corresponding to a pixel format, or a pixel format name.
3875 Specify the timebase assumed by the timestamps of the buffered frames.
3878 Specify the frame rate expected for the video stream.
3881 Specify the sample aspect ratio assumed by the video frames.
3884 Specify the optional parameters to be used for the scale filter which
3885 is automatically inserted when an input change is detected in the
3886 input size or format.
3891 buffer=size=320x240:pix_fmt=yuv410p:time_base=1/24:pixel_aspect=1/1
3894 will instruct the source to accept video frames with size 320x240 and
3895 with format "yuv410p", assuming 1/24 as the timestamps timebase and
3896 square pixels (1:1 sample aspect ratio).
3897 Since the pixel format with name "yuv410p" corresponds to the number 6
3898 (check the enum AVPixelFormat definition in @file{libavutil/pixfmt.h}),
3899 this example corresponds to:
3901 buffer=size=320x240:pix_fmt=6:time_base=1/24:pixel_aspect=1/1
3904 Alternatively, the options can be specified as a flat string, but this
3905 syntax is deprecated:
3907 @var{width}:@var{height}:@var{pix_fmt}:@var{time_base.num}:@var{time_base.den}:@var{pixel_aspect.num}:@var{pixel_aspect.den}[:@var{sws_param}]
3911 Create a pattern generated by an elementary cellular automaton.
3913 The initial state of the cellular automaton can be defined through the
3914 @option{filename} and @option{pattern} options. If such options are
3915 not specified an initial state is created randomly.
3917 At each new frame a new row in the video is filled with the result of
3918 the cellular automaton next generation. The behavior when the whole
3919 frame is filled is defined by the @option{scroll} option.
3921 This source accepts a list of options in the form of
3922 @var{key}=@var{value} pairs separated by ":". A description of the
3923 accepted options follows.
3927 Read the initial cellular automaton state, i.e. the starting row, from the specified file.
3929 In the file, each non-whitespace character is considered an alive
3930 cell, a newline will terminate the row, and further characters in the
3931 file will be ignored.
3934 Read the initial cellular automaton state, i.e. the starting row, from
3935 the specified string.
3937 Each non-whitespace character in the string is considered an alive
3938 cell, a newline will terminate the row, and further characters in the
3939 string will be ignored.
3942 Set the video rate, that is the number of frames generated per second.
3945 @item random_fill_ratio, ratio
3946 Set the random fill ratio for the initial cellular automaton row. It
3947 is a floating point number value ranging from 0 to 1, and defaults to 1/PHI.
3950 This option is ignored when a file or a pattern is specified.
3952 @item random_seed, seed
3953 Set the seed for filling randomly the initial row, must be an integer
3954 included between 0 and UINT32_MAX. If not specified, or if explicitly
3955 set to -1, the filter will try to use a good random seed on a best effort basis.
3959 Set the cellular automaton rule; it is a number ranging from 0 to 255.
3960 Default value is 110.
3963 Set the size of the output video.
3965 If @option{filename} or @option{pattern} is specified, the size is set
3966 by default to the width of the specified initial state row, and the
3967 height is set to @var{width} * PHI.
3969 If @option{size} is set, it must contain the width of the specified
3970 pattern string, and the specified pattern will be centered in the larger row.
3973 If a filename or a pattern string is not specified, the size value
3974 defaults to "320x518" (used for a randomly generated initial state).
3977 If set to 1, scroll the output upward when all the rows in the output
3978 have been already filled. If set to 0, the new generated row will be
3979 written over the top row just after the bottom row is filled.
3982 @item start_full, full
3983 If set to 1, completely fill the output with generated rows before
3984 outputting the first frame.
3985 This is the default behavior; to disable it, set the value to 0.
3988 If set to 1, stitch the left and right row edges together.
3989 This is the default behavior; to disable it, set the value to 0.
3992 @subsection Examples
3996 Read the initial state from @file{pattern}, and specify an output of size 200x400:
3999 cellauto=f=pattern:s=200x400
4003 Generate a random initial row with a width of 200 cells, with a fill ratio of 2/3:
4006 cellauto=ratio=2/3:s=200x200
4010 Create a pattern generated by rule 18 starting from a single alive cell
4011 centered on an initial row with width 100:
4013 cellauto=p=@@:s=100x400:full=0:rule=18
4017 Specify a more elaborate initial pattern:
4019 cellauto=p='@@@@ @@ @@@@':s=100x400:full=0:rule=18
4026 Generate a Mandelbrot set fractal, and progressively zoom towards the
4027 point specified with @var{start_x} and @var{start_y}.
4029 This source accepts a list of options in the form of
4030 @var{key}=@var{value} pairs separated by ":". A description of the
4031 accepted options follows.
4036 Set the terminal pts value. Default value is 400.
4039 Set the terminal scale value.
4040 Must be a floating point value. Default value is 0.3.
4043 Set the inner coloring mode, that is the algorithm used to draw the
4044 Mandelbrot fractal internal region.
4046 It shall assume one of the following values:
4051 Show time until convergence.
4053 Set color based on point closest to the origin of the iterations.
4058 Default value is @var{mincol}.
4061 Set the bailout value. Default value is 10.0.
4064 Set the maximum of iterations performed by the rendering
4065 algorithm. Default value is 7189.
4068 Set outer coloring mode.
4069 It shall assume one of the following values:
4071 @item iteration_count
4072 Set iteration count mode.
4073 @item normalized_iteration_count
4074 Set normalized iteration count mode.
4076 Default value is @var{normalized_iteration_count}.
4079 Set frame rate, expressed as number of frames per second. Default value is "25".
4083 Set frame size. Default value is "640x480".
4086 Set the initial scale value. Default value is 3.0.
4089 Set the initial x position. Must be a floating point value between
4090 -100 and 100. Default value is -0.743643887037158704752191506114774.
4093 Set the initial y position. Must be a floating point value between
4094 -100 and 100. Default value is -0.131825904205311970493132056385139.
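For example, a minimal sketch that plays the default zoom with @command{ffplay} through the @code{lavfi} virtual input device (assumed to be available in the build):
ffplay -f lavfi mandelbrot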
4099 Generate various test patterns, as generated by the MPlayer test filter.
4101 The size of the generated video is fixed, and is 256x256.
4102 This source is useful in particular for testing encoding features.
4104 This source accepts an optional sequence of @var{key}=@var{value} pairs,
4105 separated by ":". The description of the accepted options follows.
4110 Specify the frame rate of the sourced video, as the number of frames
4111 generated per second. It has to be a string in the format
4112 @var{frame_rate_num}/@var{frame_rate_den}, an integer number, a float
4113 number or a valid video frame rate abbreviation. The default value is "25".
4117 Set the video duration of the sourced video. The accepted syntax is:
4122 See also the function @code{av_parse_time()}.
4124 If not specified, or the expressed duration is negative, the video is
4125 supposed to be generated forever.
4129 Set the number or the name of the test to perform. Supported tests are:
4144 Default value is "all", which will cycle through the list of all tests.
4147 For example the following:
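mptestsrc=test=dc_luma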
4152 will generate a "dc_luma" test pattern.
4156 Provide a frei0r source.
4158 To enable compilation of this filter you need to install the frei0r
4159 header and configure FFmpeg with @code{--enable-frei0r}.
4161 The source supports the syntax:
4163 @var{size}:@var{rate}:@var{src_name}[@{=|:@}@var{param1}:@var{param2}:...:@var{paramN}]
4166 @var{size} is the size of the video to generate, may be a string of the
4167 form @var{width}x@var{height} or a frame size abbreviation.
4168 @var{rate} is the rate of the video to generate, may be a string of
4169 the form @var{num}/@var{den} or a frame rate abbreviation.
4170 @var{src_name} is the name of the frei0r source to load. For more
4171 information regarding frei0r and how to set the parameters read the
4172 section @ref{frei0r} in the description of the video filters.
4174 For example, to generate a frei0r partik0l source with size 200x200
4175 and frame rate 10 which is overlaid on the overlay filter main input:
4177 frei0r_src=200x200:10:partik0l=1234 [overlay]; [in][overlay] overlay
4182 Generate a life pattern.
4184 This source is based on a generalization of John Conway's life game.
4186 The sourced input represents a life grid, each pixel represents a cell
4187 which can be in one of two possible states, alive or dead. Every cell
4188 interacts with its eight neighbours, which are the cells that are
4189 horizontally, vertically, or diagonally adjacent.
4191 At each iteration the grid evolves according to the adopted rule,
4192 which specifies the number of neighbor alive cells which will make a
4193 cell stay alive or be born. The @option{rule} option allows one to specify the rule to adopt.
4196 This source accepts a list of options in the form of
4197 @var{key}=@var{value} pairs separated by ":". A description of the
4198 accepted options follows.
4202 Set the file from which to read the initial grid state. In the file,
4203 each non-whitespace character is considered an alive cell, and newline
4204 is used to delimit the end of each row.