1 @chapter Filtergraph description
2 @c man begin FILTERGRAPH DESCRIPTION
4 A filtergraph is a directed graph of connected filters. It can contain
5 cycles, and there can be multiple links between a pair of
6 filters. Each link has one input pad on one side connecting it to one
7 filter from which it takes its input, and one output pad on the other
8 side connecting it to the one filter accepting its output.
10 Each filter in a filtergraph is an instance of a filter class
11 registered in the application, which defines the features and the
12 number of input and output pads of the filter.
A filter with no input pads is called a "source", and a filter with no
output pads is called a "sink".
17 @section Filtergraph syntax
19 A filtergraph can be represented using a textual representation, which
20 is recognized by the @code{-vf} and @code{-af} options of the ff*
21 tools, and by the @code{avfilter_graph_parse()} function defined in
22 @file{libavfilter/avfiltergraph.h}.
24 A filterchain consists of a sequence of connected filters, each one
25 connected to the previous one in the sequence. A filterchain is
26 represented by a list of ","-separated filter descriptions.
28 A filtergraph consists of a sequence of filterchains. A sequence of
filterchains is represented by a list of ";"-separated filterchain
descriptions.
32 A filter is represented by a string of the form:
33 [@var{in_link_1}]...[@var{in_link_N}]@var{filter_name}=@var{arguments}[@var{out_link_1}]...[@var{out_link_M}]
@var{filter_name} is the name of the filter class of which the
described filter is an instance, and has to be the name of one of
the filter classes registered in the program.
The name of the filter class is optionally followed by a string
"=@var{arguments}".
@var{arguments} is a string which contains the parameters used to
initialize the filter instance. They are described in each filter
description below.
45 The list of arguments can be quoted using the character "'" as initial
46 and ending mark, and the character '\' for escaping the characters
47 within the quoted text; otherwise the argument string is considered
48 terminated when the next special character (belonging to the set
49 "[]=;,") is encountered.
51 The name and arguments of the filter are optionally preceded and
52 followed by a list of link labels.
A link label allows one to name a link and associate it with a filter
output or input pad. The preceding labels @var{in_link_1}
... @var{in_link_N} are associated with the filter input pads,
the following labels @var{out_link_1} ... @var{out_link_M} are
associated with the output pads.
When two link labels with the same name are found in the
filtergraph, a link between the corresponding input and output pad is
created.
63 If an output pad is not labelled, it is linked by default to the first
64 unlabelled input pad of the next filter in the filterchain.
65 For example in the filterchain:
67 nullsrc, split[L1], [L2]overlay, nullsink
69 the split filter instance has two output pads, and the overlay filter
70 instance two input pads. The first output pad of split is labelled
71 "L1", the first input pad of overlay is labelled "L2", and the second
72 output pad of split is linked to the second input pad of overlay,
73 which are both unlabelled.
75 In a complete filterchain all the unlabelled filter input and output
76 pads must be connected. A filtergraph is considered valid if all the
77 filter input and output pads of all the filterchains are connected.
A BNF description of the filtergraph syntax follows:
81 @var{NAME} ::= sequence of alphanumeric characters and '_'
82 @var{LINKLABEL} ::= "[" @var{NAME} "]"
83 @var{LINKLABELS} ::= @var{LINKLABEL} [@var{LINKLABELS}]
@var{FILTER_ARGUMENTS} ::= sequence of chars (possibly quoted)
@var{FILTER} ::= [@var{LINKLABELS}] @var{NAME} ["=" @var{FILTER_ARGUMENTS}] [@var{LINKLABELS}]
86 @var{FILTERCHAIN} ::= @var{FILTER} [,@var{FILTERCHAIN}]
87 @var{FILTERGRAPH} ::= @var{FILTERCHAIN} [;@var{FILTERGRAPH}]
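For example, the following is one possible filtergraph using the syntax
described above (the link labels @var{main}, @var{tmp} and @var{flip} are
arbitrary names picked for this illustration). It splits the input, crops
and vertically flips the upper half of one copy, and overlays the result on
the lower half of the main video:
@example
[in] split [main][tmp]; [tmp] crop=iw:ih/2:0:0, vflip [flip]; [main][flip] overlay=0:H/2 [out]
@end example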
90 @c man end FILTERGRAPH DESCRIPTION
92 @chapter Audio Filters
93 @c man begin AUDIO FILTERS
95 When you configure your FFmpeg build, you can disable any of the
96 existing filters using --disable-filters.
The configure output will show the audio filters included in your
build.
100 Below is a description of the currently available audio filters.
104 Convert the input audio format to the specified formats.
106 The filter accepts a string of the form:
107 "@var{sample_format}:@var{channel_layout}:@var{packing_format}".
109 @var{sample_format} specifies the sample format, and can be a string or
110 the corresponding numeric value defined in @file{libavutil/samplefmt.h}.
112 @var{channel_layout} specifies the channel layout, and can be a string
or the corresponding numeric value defined in @file{libavutil/audioconvert.h}.
115 @var{packing_format} specifies the type of packing in output, can be one
116 of "planar" or "packed", or the corresponding numeric values "0" or "1".
118 The special parameter "auto", signifies that the filter will
119 automatically select the output format depending on the output filter.
121 Some examples follow.
125 Convert input to unsigned 8-bit, stereo, packed:
127 aconvert=u8:stereo:packed
Convert input to unsigned 8-bit, automatically select output channel layout
and packing format:
134 aconvert=u8:auto:auto
140 Convert the input audio to one of the specified formats. The framework will
141 negotiate the most appropriate format to minimize conversions.
143 The filter accepts three lists of formats, separated by ":", in the form:
144 "@var{sample_formats}:@var{channel_layouts}:@var{packing_formats}".
146 Elements in each list are separated by "," which has to be escaped in the
147 filtergraph specification.
149 The special parameter "all", in place of a list of elements, signifies all
152 Some examples follow:
154 aformat=u8\\,s16:mono:packed
156 aformat=s16:mono\\,stereo:all
161 Pass the audio source unchanged to the output.
165 Resample the input audio to the specified sample rate.
167 The filter accepts exactly one parameter, the output sample rate. If not
168 specified then the filter will automatically convert between its input
169 and output sample rates.
171 For example, to resample the input audio to 44100Hz:
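aresample=44100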
178 Show a line containing various information for each input audio frame.
179 The input audio is not modified.
181 The shown line contains a sequence of key/value pairs of the form
182 @var{key}:@var{value}.
184 A description of each shown parameter follows:
188 sequential number of the input frame, starting from 0
191 presentation TimeStamp of the input frame, expressed as a number of
192 time base units. The time base unit depends on the filter input pad, and
193 is usually 1/@var{sample_rate}.
presentation TimeStamp of the input frame, expressed as a number of
seconds
position of the frame in the input stream, -1 if this information is
unavailable and/or meaningless (for example in case of synthetic audio)
207 channel layout description
210 number of samples (per each channel) contained in the filtered frame
213 sample rate for the audio frame
1 if the packing format is planar, 0 if packed
219 Adler-32 checksum of all the planes of the input frame
222 Adler-32 checksum for each input frame plane, expressed in the form
223 "[@var{c0} @var{c1} @var{c2} @var{c3} @var{c4} @var{c5} @var{c6} @var{c7}]"
226 @c man end AUDIO FILTERS
228 @chapter Audio Sources
229 @c man begin AUDIO SOURCES
231 Below is a description of the currently available audio sources.
235 Buffer audio frames, and make them available to the filter chain.
237 This source is mainly intended for a programmatic use, in particular
238 through the interface defined in @file{libavfilter/asrc_abuffer.h}.
240 It accepts the following mandatory parameters:
241 @var{sample_rate}:@var{sample_fmt}:@var{channel_layout}:@var{packing}
246 The sample rate of the incoming audio buffers.
249 The sample format of the incoming audio buffers.
Either a sample format name or its corresponding integer representation from
251 the enum AVSampleFormat in @file{libavutil/samplefmt.h}
254 The channel layout of the incoming audio buffers.
255 Either a channel layout name from channel_layout_map in
256 @file{libavutil/audioconvert.c} or its corresponding integer representation
257 from the AV_CH_LAYOUT_* macros in @file{libavutil/audioconvert.h}
260 Either "packed" or "planar", or their integer representation: 0 or 1
267 abuffer=44100:s16:stereo:planar
will instruct the source to accept planar 16bit signed stereo at 44100Hz.
Since the sample format with name "s16" corresponds to the number
1 and the "stereo" channel layout corresponds to the value 3, this is
equivalent to:
280 Generate an audio signal specified by an expression.
This source accepts as input one or more expressions (one for each
channel), which are evaluated and used to generate a corresponding
audio signal.
286 It accepts the syntax: @var{exprs}[::@var{options}].
287 @var{exprs} is a list of expressions separated by ":", one for each
288 separate channel. The output channel layout depends on the number of
289 provided expressions, up to 8 channels are supported.
@var{options} is an optional sequence of @var{key}=@var{value} pairs,
separated by ":".
294 The description of the accepted options follows.
Set the number of samples per channel in each output frame,
default to 1024.
303 Specify the sample rate, default to 44100.
306 Each expression in @var{exprs} can contain the following constants:
310 number of the evaluated sample, starting from 0
313 time of the evaluated sample expressed in seconds, starting from 0
Generate a sine signal with a frequency of 440 Hz, set the sample rate to
8000 Hz:
335 aevalsrc="sin(440*2*PI*t)::s=8000"
339 Generate white noise:
341 aevalsrc="-2+random(0)"
345 Generate an amplitude modulated signal:
347 aevalsrc="sin(10*2*PI*t)*sin(880*2*PI*t)"
351 Generate 2.5 Hz binaural beats on a 360 Hz carrier:
353 aevalsrc="0.1*sin(2*PI*(360-2.5/2)*t) : 0.1*sin(2*PI*(360+2.5/2)*t)"
360 Read an audio stream from a movie container.
362 It accepts the syntax: @var{movie_name}[:@var{options}] where
363 @var{movie_name} is the name of the resource to read (not necessarily
364 a file but also a device or a stream accessed through some protocol),
365 and @var{options} is an optional sequence of @var{key}=@var{value}
366 pairs, separated by ":".
368 The description of the accepted options follows.
373 Specify the format assumed for the movie to read, and can be either
374 the name of a container or an input device. If not specified the
375 format is guessed from @var{movie_name} or by probing.
Specify the seek point in seconds; the frames will be output
starting from this seek point. The parameter is evaluated with
@code{av_strtod}, so the numerical value may be suffixed by an IS
postfix. Default value is "0".
383 @item stream_index, si
384 Specify the index of the audio stream to read. If the value is -1,
the best suited audio stream will be automatically selected. Default
value is "-1".
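For example, to read the audio stream with index 1 from a hypothetical
file named @file{input.mkv}:
@example
amovie=input.mkv:si=1
@end example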
Null audio source, returns unprocessed audio frames. It is mainly useful
as a template and to be employed in analysis / debugging tools, or as
the source for filters which ignore the input data (for example the sox
synth filter).
It accepts an optional sequence of @var{key}=@var{value} pairs,
separated by ":".
400 The description of the accepted options follows.
405 Specify the sample rate, and defaults to 44100.
407 @item channel_layout, cl
409 Specify the channel layout, and can be either an integer or a string
representing a channel layout. The default value of @var{channel_layout}
is "stereo".
413 Check the channel_layout_map definition in
@file{libavutil/audioconvert.c} for the mapping between strings and
415 channel layout values.
Set the number of samples per requested frame.
Some examples follow:
424 # set the sample rate to 48000 Hz and the channel layout to AV_CH_LAYOUT_MONO.
425 anullsrc=r=48000:cl=4
428 anullsrc=r=48000:cl=mono
431 @c man end AUDIO SOURCES
434 @c man begin AUDIO SINKS
436 Below is a description of the currently available audio sinks.
Buffer audio frames, and make them available to the end of the filter chain.
442 This sink is mainly intended for programmatic use, in particular
443 through the interface defined in @file{libavfilter/buffersink.h}.
445 It requires a pointer to an AVABufferSinkContext structure, which
446 defines the incoming buffers' formats, to be passed as the opaque
447 parameter to @code{avfilter_init_filter} for initialization.
451 Null audio sink, do absolutely nothing with the input audio. It is
mainly useful as a template and to be employed in analysis / debugging
tools.
455 @c man end AUDIO SINKS
457 @chapter Video Filters
458 @c man begin VIDEO FILTERS
460 When you configure your FFmpeg build, you can disable any of the
461 existing filters using --disable-filters.
The configure output will show the video filters included in your
build.
465 Below is a description of the currently available video filters.
469 Detect frames that are (almost) completely black. Can be useful to
470 detect chapter transitions or commercials. Output lines consist of
471 the frame number of the detected frame, the percentage of blackness,
472 the position in the file if known or -1 and the timestamp in seconds.
474 In order to display the output lines, you need to set the loglevel at
475 least to the AV_LOG_INFO value.
477 The filter accepts the syntax:
blackframe[=@var{amount}[:@var{threshold}]]
482 @var{amount} is the percentage of the pixels that have to be below the
483 threshold, and defaults to 98.
485 @var{threshold} is the threshold below which a pixel value is
486 considered black, and defaults to 32.
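For example, to report frames where at least 95% of the pixels are below a
threshold of 30 (values picked only for illustration):
@example
blackframe=95:30
@end example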
490 Apply boxblur algorithm to the input video.
492 This filter accepts the parameters:
493 @var{luma_radius}:@var{luma_power}:@var{chroma_radius}:@var{chroma_power}:@var{alpha_radius}:@var{alpha_power}
495 Chroma and alpha parameters are optional, if not specified they default
496 to the corresponding values set for @var{luma_radius} and
499 @var{luma_radius}, @var{chroma_radius}, and @var{alpha_radius} represent
500 the radius in pixels of the box used for blurring the corresponding
501 input plane. They are expressions, and can contain the following
the input width and height in pixels
508 the input chroma image width and height in pixels
511 horizontal and vertical chroma subsample values. For example for the
512 pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
515 The radius must be a non-negative number, and must not be greater than
516 the value of the expression @code{min(w,h)/2} for the luma and alpha planes,
517 and of @code{min(cw,ch)/2} for the chroma planes.
@var{luma_power}, @var{chroma_power}, and @var{alpha_power} represent
how many times the boxblur filter is applied to the corresponding
plane.
523 Some examples follow:
Apply a boxblur filter with luma, chroma, and alpha radius
set to 2:
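boxblur=2:1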
535 Set luma radius to 2, alpha and chroma radius to 0
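boxblur=2:1:0:0:0:0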
541 Set luma and chroma radius to a fraction of the video dimension
543 boxblur=min(h\,w)/10:1:min(cw\,ch)/10:1
Copy the input source unchanged to the output. Mainly useful for
testing purposes.
555 Crop the input video to @var{out_w}:@var{out_h}:@var{x}:@var{y}.
557 The parameters are expressions containing the following constants:
the computed values for @var{x} and @var{y}. They are evaluated for
each new frame.
565 the input width and height
568 same as @var{in_w} and @var{in_h}
571 the output (cropped) width and height
574 same as @var{out_w} and @var{out_h}
577 same as @var{iw} / @var{ih}
580 input sample aspect ratio
583 input display aspect ratio, it is the same as (@var{iw} / @var{ih}) * @var{sar}
586 horizontal and vertical chroma subsample values. For example for the
587 pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
the number of the input frame, starting from 0
593 the position in the file of the input frame, NAN if unknown
596 timestamp expressed in seconds, NAN if the input timestamp is unknown
600 The @var{out_w} and @var{out_h} parameters specify the expressions for
601 the width and height of the output (cropped) video. They are
602 evaluated just at the configuration of the filter.
604 The default value of @var{out_w} is "in_w", and the default value of
605 @var{out_h} is "in_h".
607 The expression for @var{out_w} may depend on the value of @var{out_h},
608 and the expression for @var{out_h} may depend on @var{out_w}, but they
609 cannot depend on @var{x} and @var{y}, as @var{x} and @var{y} are
610 evaluated after @var{out_w} and @var{out_h}.
612 The @var{x} and @var{y} parameters specify the expressions for the
613 position of the top-left corner of the output (non-cropped) area. They
614 are evaluated for each frame. If the evaluated value is not valid, it
615 is approximated to the nearest valid value.
617 The default value of @var{x} is "(in_w-out_w)/2", and the default
618 value for @var{y} is "(in_h-out_h)/2", which set the cropped area at
619 the center of the input image.
621 The expression for @var{x} may depend on @var{y}, and the expression
622 for @var{y} may depend on @var{x}.
Some examples follow:
626 # crop the central input area with size 100x100
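crop=100:100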
629 # crop the central input area with size 2/3 of the input video
630 "crop=2/3*in_w:2/3*in_h"
632 # crop the input video central square
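crop=in_h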
635 # delimit the rectangle with the top-left corner placed at position
636 # 100:100 and the right-bottom corner corresponding to the right-bottom
637 # corner of the input image.
638 crop=in_w-100:in_h-100:100:100
640 # crop 10 pixels from the left and right borders, and 20 pixels from
641 # the top and bottom borders
642 "crop=in_w-2*10:in_h-2*20"
644 # keep only the bottom right quarter of the input image
645 "crop=in_w/2:in_h/2:in_w/2:in_h/2"
647 # crop height for getting Greek harmony
648 "crop=in_w:1/PHI*in_w"
651 "crop=in_w/2:in_h/2:(in_w-out_w)/2+((in_w-out_w)/2)*sin(n/10):(in_h-out_h)/2 +((in_h-out_h)/2)*sin(n/7)"
653 # erratic camera effect depending on timestamp
654 "crop=in_w/2:in_h/2:(in_w-out_w)/2+((in_w-out_w)/2)*sin(t*10):(in_h-out_h)/2 +((in_h-out_h)/2)*sin(t*13)"
656 # set x depending on the value of y
657 "crop=in_w/2:in_h/2:y:10+10*sin(n/10)"
662 Auto-detect crop size.
It calculates the necessary cropping parameters and prints the
recommended parameters via the logging system. The detected dimensions
correspond to the non-black area of the input video.
668 It accepts the syntax:
670 cropdetect[=@var{limit}[:@var{round}[:@var{reset}]]]
676 Threshold, which can be optionally specified from nothing (0) to
677 everything (255), defaults to 24.
680 Value which the width/height should be divisible by, defaults to
681 16. The offset is automatically adjusted to center the video. Use 2 to
682 get only even dimensions (needed for 4:2:2 video). 16 is best when
683 encoding to most video codecs.
686 Counter that determines after how many frames cropdetect will reset
687 the previously detected largest video area and start over to detect
688 the current optimal crop area. Defaults to 0.
This can be useful when channel logos distort the video area. 0
indicates 'never reset', and returns the largest area encountered during
playback.
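For example, a possible invocation keeping the default limit of 24, rounding
the detected dimensions to multiples of 2, and resetting the detection every
100 frames (the last two values are picked only for illustration):
@example
cropdetect=24:2:100
@end example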
697 Suppress a TV station logo by a simple interpolation of the surrounding
698 pixels. Just set a rectangle covering the logo and watch it disappear
699 (and sometimes something even uglier appear - your mileage may vary).
701 The filter accepts parameters as a string of the form
702 "@var{x}:@var{y}:@var{w}:@var{h}:@var{band}", or as a list of
703 @var{key}=@var{value} pairs, separated by ":".
705 The description of the accepted parameters follows.
Specify the top left corner coordinates of the logo. They must be
specified.
Specify the width and height of the logo to clear. They must be
specified.
718 Specify the thickness of the fuzzy edge of the rectangle (added to
719 @var{w} and @var{h}). The default value is 4.
722 When set to 1, a green rectangle is drawn on the screen to simplify
723 finding the right @var{x}, @var{y}, @var{w}, @var{h} parameters, and
724 @var{band} is set to 4. The default value is 0.
728 Some examples follow.
733 Set a rectangle covering the area with top left corner coordinates 0,0
734 and size 100x77, setting a band of size 10:
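delogo=0:0:100:77:10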
Same as the previous example, but using named options:
742 delogo=x=0:y=0:w=100:h=77:band=10
749 Attempt to fix small changes in horizontal and/or vertical shift. This
750 filter helps remove camera shake from hand-holding a camera, bumping a
751 tripod, moving on a vehicle, etc.
753 The filter accepts parameters as a string of the form
754 "@var{x}:@var{y}:@var{w}:@var{h}:@var{rx}:@var{ry}:@var{edge}:@var{blocksize}:@var{contrast}:@var{search}:@var{filename}"
756 A description of the accepted parameters follows.
Specify a rectangular area in which to limit the search for motion
vectors.
763 If desired the search for motion vectors can be limited to a
764 rectangular area of the frame defined by its top left corner, width
and height. These parameters have the same meaning as the corresponding
parameters of the drawbox filter, which can be used to visualise the
position of the bounding box.
769 This is useful when simultaneous movement of subjects within the frame
770 might be confused for camera motion by the motion vector search.
772 If any or all of @var{x}, @var{y}, @var{w} and @var{h} are set to -1
773 then the full frame is used. This allows later options to be set
774 without specifying the bounding box for the motion vector search.
776 Default - search the whole frame.
779 Specify the maximum extent of movement in x and y directions in the
780 range 0-64 pixels. Default 16.
783 Specify how to generate pixels to fill blanks at the edge of the
784 frame. An integer from 0 to 3 as follows:
787 Fill zeroes at blank locations
789 Original image at blank locations
791 Extruded edge value at blank locations
793 Mirrored edge at blank locations
796 The default setting is mirror edge at blank locations.
Specify the blocksize to use for motion search. Range 4-128 pixels,
default 8.
803 Specify the contrast threshold for blocks. Only blocks with more than
804 the specified contrast (difference between darkest and lightest
805 pixels) will be considered. Range 1-255, default 125.
808 Specify the search strategy 0 = exhaustive search, 1 = less exhaustive
809 search. Default - exhaustive search.
If set then a detailed log of the motion search is written to the
specified file.
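As a sketch with arbitrarily chosen values, the following limits the motion
vector search to a 640x480 area anchored at the top-left corner and allows up
to 32 pixels of movement in each direction; the trailing parameters are
omitted here on the assumption that they then keep their defaults:
@example
deshake=0:0:640:480:32:32
@end example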
819 Draw a colored box on the input image.
821 It accepts the syntax:
823 drawbox=@var{x}:@var{y}:@var{width}:@var{height}:@var{color}
829 Specify the top left corner coordinates of the box. Default to 0.
832 Specify the width and height of the box, if 0 they are interpreted as
833 the input width and height. Default to 0.
836 Specify the color of the box to write, it can be the name of a color
837 (case insensitive match) or a 0xRRGGBB[AA] sequence.
Some examples follow:
842 # draw a black box around the edge of the input image
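drawbox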
845 # draw a box with color red and an opacity of 50%
drawbox=10:20:200:60:red@@0.5
Draw text string or text from specified file on top of video using the
libfreetype library.
854 To enable compilation of this filter you need to configure FFmpeg with
855 @code{--enable-libfreetype}.
857 The filter also recognizes strftime() sequences in the provided text
858 and expands them accordingly. Check the documentation of strftime().
860 The filter accepts parameters as a list of @var{key}=@var{value} pairs,
863 The description of the accepted parameters follows.
868 The font file to be used for drawing text. Path must be included.
869 This parameter is mandatory.
The text string to be drawn. The text must be a sequence of UTF-8
encoded characters.
This parameter is mandatory if no file is specified with the parameter
@var{textfile}.
878 A text file containing text to be drawn. The text must be a sequence
879 of UTF-8 encoded characters.
881 This parameter is mandatory if no text string is specified with the
882 parameter @var{text}.
884 If both text and textfile are specified, an error is thrown.
887 The expressions which specify the offsets where text will be drawn
within the video frame. They are relative to the top/left border of the
output image.
891 The default value of @var{x} and @var{y} is "0".
893 See below for the list of accepted constants.
896 The font size to be used for drawing text.
897 The default value of @var{fontsize} is 16.
900 The color to be used for drawing fonts.
901 Either a string (e.g. "red") or in 0xRRGGBB[AA] format
902 (e.g. "0xff000033"), possibly followed by an alpha specifier.
903 The default value of @var{fontcolor} is "black".
906 The color to be used for drawing box around text.
907 Either a string (e.g. "yellow") or in 0xRRGGBB[AA] format
908 (e.g. "0xff00ff"), possibly followed by an alpha specifier.
909 The default value of @var{boxcolor} is "white".
912 Used to draw a box around text using background color.
913 Value should be either 1 (enable) or 0 (disable).
914 The default value of @var{box} is 0.
916 @item shadowx, shadowy
917 The x and y offsets for the text shadow position with respect to the
918 position of the text. They can be either positive or negative
919 values. Default value for both is "0".
922 The color to be used for drawing a shadow behind the drawn text. It
923 can be a color name (e.g. "yellow") or a string in the 0xRRGGBB[AA]
924 form (e.g. "0xff00ff"), possibly followed by an alpha specifier.
925 The default value of @var{shadowcolor} is "black".
928 Flags to be used for loading the fonts.
930 The flags map the corresponding flags supported by libfreetype, and are
931 a combination of the following values:
938 @item vertical_layout
942 @item ignore_global_advance_width
944 @item ignore_transform
951 Default value is "render".
953 For more information consult the documentation for the FT_LOAD_*
957 The size in number of spaces to use for rendering the tab.
961 The parameters for @var{x} and @var{y} are expressions containing the
the input width and height
969 the width of the rendered text
972 the height of the rendered text
975 the height of each text line
978 input sample aspect ratio
981 input display aspect ratio, it is the same as (@var{w} / @var{h}) * @var{sar}
984 horizontal and vertical chroma subsample values. For example for the
985 pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
988 maximum glyph width, that is the maximum width for all the glyphs
989 contained in the rendered text
992 maximum glyph height, that is the maximum height for all the glyphs
contained in the rendered text, it is equivalent to @var{ascent} -
@var{descent}.
996 @item max_glyph_a, ascent
998 the maximum distance from the baseline to the highest/upper grid
coordinate used to place a glyph outline point, for all the rendered
glyphs.
It is a positive value, due to the grid's orientation with the Y axis
upwards.
1004 @item max_glyph_d, descent
1005 the maximum distance from the baseline to the lowest grid coordinate
1006 used to place a glyph outline point, for all the rendered glyphs.
This is a negative value, due to the grid's orientation, with the Y axis
upwards.
the number of the input frame, starting from 0
1014 timestamp expressed in seconds, NAN if the input timestamp is unknown
1017 Some examples follow.
1022 Draw "Test Text" with font FreeSerif, using the default values for the
1023 optional parameters.
1026 drawtext="fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf: text='Test Text'"
1030 Draw 'Test Text' with font FreeSerif of size 24 at position x=100
1031 and y=50 (counting from the top-left corner of the screen), text is
1032 yellow with a red box around it. Both the text and the box have an
1036 drawtext="fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf: text='Test Text':\
1037 x=100: y=50: fontsize=24: fontcolor=yellow@@0.2: box=1: boxcolor=red@@0.2"
1040 Note that the double quotes are not necessary if spaces are not used
1041 within the parameter list.
1044 Show the text at the center of the video frame:
drawtext=fontsize=30:fontfile=FreeSerif.ttf:text='hello world':x=(w-text_w)/2:y=(h-text_h-line_h)/2
1050 Show a text line sliding from right to left in the last row of the video
frame. The file @file{LONG_LINE} is assumed to contain a single line
of text.
1054 drawtext=fontsize=15:fontfile=FreeSerif.ttf:text=LONG_LINE:y=h-line_h:x=-50*t
1058 Show the content of file @file{CREDITS} off the bottom of the frame and scroll up.
drawtext=fontsize=20:fontfile=FreeSerif.ttf:textfile=CREDITS:y=h-20*t
1064 Draw a single green letter "g", at the center of the input video.
1065 The glyph baseline is placed at half screen height.
1067 drawtext=fontsize=60:fontfile=FreeSerif.ttf:fontcolor=green:text=g:x=(w-max_glyph_w)/2:y=h/2-ascent
1072 For more information about libfreetype, check:
1073 @url{http://www.freetype.org/}.
1077 Apply fade-in/out effect to input video.
1079 It accepts the parameters:
1080 @var{type}:@var{start_frame}:@var{nb_frames}
@var{type} specifies the effect type, and can be either "in" for
1083 fade-in, or "out" for a fade-out effect.
1085 @var{start_frame} specifies the number of the start frame for starting
1086 to apply the fade effect.
1088 @var{nb_frames} specifies the number of frames for which the fade
1089 effect has to last. At the end of the fade-in effect the output video
1090 will have the same intensity as the input video, at the end of the
1091 fade-out transition the output video will be completely black.
A few usage examples follow, usable also as test scenarios.
1095 # fade in first 30 frames of video
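fade=in:0:30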
1098 # fade out last 45 frames of a 200-frame video
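fade=out:155:45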
1101 # fade in first 25 frames and fade out last 25 frames of a 1000-frame video
1102 fade=in:0:25, fade=out:975:25
1104 # make first 5 frames black, then fade in from frame 5-24
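fade=in:5:20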
1110 Transform the field order of the input video.
1112 It accepts one parameter which specifies the required field order that
1113 the input interlaced video will be transformed to. The parameter can
1114 assume one of the following values:
1118 output bottom field first
1120 output top field first
1123 Default value is "tff".
1125 Transformation is achieved by shifting the picture content up or down
1126 by one line, and filling the remaining line with appropriate picture content.
1127 This method is consistent with most broadcast field order converters.
1129 If the input video is not flagged as being interlaced, or it is already
1130 flagged as being of the required output field order then this filter does
1131 not alter the incoming video.
1133 This filter is very useful when converting to or from PAL DV material,
1134 which is bottom field first.
1138 ./ffmpeg -i in.vob -vf "fieldorder=bff" out.dv
1143 Buffer input images and send them when they are requested.
This filter is mainly useful when auto-inserted by the libavfilter
framework.
1148 The filter does not take parameters.
1152 Convert the input video to one of the specified pixel formats.
Libavfilter will try to pick one that is supported for the input to
the next filter.
1156 The filter accepts a list of pixel format names, separated by ":",
1157 for example "yuv420p:monow:rgb24".
1159 Some examples follow:
1161 # convert the input video to the format "yuv420p"
1164 # convert the input video to any of the formats in the list
1165 format=yuv420p:yuv444p:yuv410p
1171 Apply a frei0r effect to the input video.
1173 To enable compilation of this filter you need to install the frei0r
1174 header and configure FFmpeg with --enable-frei0r.
1176 The filter supports the syntax:
1178 @var{filter_name}[@{:|=@}@var{param1}:@var{param2}:...:@var{paramN}]
@var{filter_name} is the name of the frei0r effect to load. If the
1182 environment variable @env{FREI0R_PATH} is defined, the frei0r effect
1183 is searched in each one of the directories specified by the colon
separated list in @env{FREI0R_PATH}, otherwise in the standard frei0r
1185 paths, which are in this order: @file{HOME/.frei0r-1/lib/},
1186 @file{/usr/local/lib/frei0r-1/}, @file{/usr/lib/frei0r-1/}.
1188 @var{param1}, @var{param2}, ... , @var{paramN} specify the parameters
1189 for the frei0r effect.
A frei0r effect parameter can be a boolean (whose values are specified
with "y" and "n"), a double, a color (specified by the syntax
@var{R}/@var{G}/@var{B}, where @var{R}, @var{G}, and @var{B} are floating
point numbers between 0.0 and 1.0, or by an @code{av_parse_color()} color
description), a position (specified by the syntax @var{X}/@var{Y},
where @var{X} and @var{Y} are floating point numbers) or a string.
1198 The number and kind of parameters depend on the loaded effect. If an
1199 effect parameter is not specified the default value is set.
1201 Some examples follow:
1203 # apply the distort0r effect, set the first two double parameters
1204 frei0r=distort0r:0.5:0.01
1206 # apply the colordistance effect, takes a color as first parameter
1207 frei0r=colordistance:0.2/0.3/0.4
1208 frei0r=colordistance:violet
1209 frei0r=colordistance:0x112233
# apply the perspective effect, specify the top left and top right
# image positions
1213 frei0r=perspective:0.2/0.2:0.8/0.2
1216 For more information see:
1217 @url{http://piksel.org/frei0r}
1221 Fix the banding artifacts that are sometimes introduced into nearly flat
1222 regions by truncation to 8bit colordepth.
Interpolate the gradients that should go where the bands are, and
dither them.
1226 This filter is designed for playback only. Do not use it prior to
1227 lossy compression, because compression tends to lose the dither and
1228 bring back the bands.
1230 The filter takes two optional parameters, separated by ':':
1231 @var{strength}:@var{radius}
1233 @var{strength} is the maximum amount by which the filter will change
1234 any one pixel. Also the threshold for detecting nearly flat
1235 regions. Acceptable values range from .51 to 255, default value is
1236 1.2, out-of-range values will be clipped to the valid range.
1238 @var{radius} is the neighborhood to fit the gradient to. A larger
1239 radius makes for smoother gradients, but also prevents the filter from
1240 modifying the pixels near detailed regions. Acceptable values are
8-32, default value is 16, out-of-range values will be clipped to the
valid range.
1245 # default parameters
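gradfun=1.2:16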
1254 Flip the input video horizontally.
For example, to horizontally flip the input video with
@command{ffmpeg}:
1259 ffmpeg -i in.avi -vf "hflip" out.avi
1264 High precision/quality 3d denoise filter. This filter aims to reduce
1265 image noise producing smooth images and making still images really
1266 still. It should enhance compressibility.
1268 It accepts the following optional parameters:
1269 @var{luma_spatial}:@var{chroma_spatial}:@var{luma_tmp}:@var{chroma_tmp}
a non-negative float number which specifies spatial luma strength,
defaults to 4.0
1276 @item chroma_spatial
1277 a non-negative float number which specifies spatial chroma strength,
1278 defaults to 3.0*@var{luma_spatial}/4.0
1281 a float number which specifies luma temporal strength, defaults to
1282 6.0*@var{luma_spatial}/4.0
1285 a float number which specifies chroma temporal strength, defaults to
1286 @var{luma_tmp}*@var{chroma_spatial}/@var{luma_spatial}
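For example, a stronger-than-default denoise can be requested by raising the
spatial luma strength and letting the other strengths be derived from it
(the value 8.0 is picked only for illustration):
@example
hqdn3d=8.0
@end example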
1289 @section lut, lutrgb, lutyuv
1291 Compute a look-up table for binding each pixel component input value
1292 to an output value, and apply it to input video.
1294 @var{lutyuv} applies a lookup table to a YUV input video, @var{lutrgb}
1295 to an RGB input video.
1297 These filters accept in input a ":"-separated list of options, which
1298 specify the expressions used for computing the lookup table for the
1299 corresponding pixel component values.
1301 The @var{lut} filter requires either YUV or RGB pixel formats in
1302 input, and accepts the options:
1305 first pixel component
1307 second pixel component
1309 third pixel component
1311 fourth pixel component, corresponds to the alpha component
The exact component associated to each option depends on the input
format.
1317 The @var{lutrgb} filter requires RGB pixel formats in input, and
1318 accepts the options:
1330 The @var{lutyuv} filter requires YUV pixel formats in input, and
1331 accepts the options:
1334 Y/luminance component
1343 The expressions can contain the following constants and functions:
the input width and height
1350 input value for the pixel component
1353 the input value clipped in the @var{minval}-@var{maxval} range
1356 maximum value for the pixel component
1359 minimum value for the pixel component
1362 the negated value for the pixel component value clipped in the
@var{minval}-@var{maxval} range, it corresponds to the expression
1364 "maxval-clipval+minval"
1367 the computed value in @var{val} clipped in the
1368 @var{minval}-@var{maxval} range
1370 @item gammaval(gamma)
1371 the computed gamma correction value of the pixel component value
clipped in the @var{minval}-@var{maxval} range, corresponds to the
expression
1374 "pow((clipval-minval)/(maxval-minval)\,@var{gamma})*(maxval-minval)+minval"
1378 All expressions default to "val".
1380 Some examples follow:
1382 # negate input video
1383 lutrgb="r=maxval+minval-val:g=maxval+minval-val:b=maxval+minval-val"
1384 lutyuv="y=maxval+minval-val:u=maxval+minval-val:v=maxval+minval-val"
1386 # the above is the same as
1387 lutrgb="r=negval:g=negval:b=negval"
1388 lutyuv="y=negval:u=negval:v=negval"
1393 # remove chroma components, turns the video into a graytone image
1394 lutyuv="u=128:v=128"
1396 # apply a luma burning effect
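lutyuv="y=2*val"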
1399 # remove green and blue components
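lutrgb="g=0:b=0"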
1402 # set a constant alpha channel value on input
1403 format=rgba,lutrgb=a="maxval-minval/2"
1405 # correct luminance gamma by a 0.5 factor
1406 lutyuv=y=gammaval(0.5)
1411 Apply an MPlayer filter to the input video.
1413 This filter provides a wrapper around most of the filters of
1416 This wrapper is considered experimental. Some of the wrapped filters
1417 may not work properly and we may drop support for them, as they will
1418 be implemented natively into FFmpeg. Thus you should avoid
1419 depending on them when writing portable scripts.
The filter accepts the parameters:
1422 @var{filter_name}[:=]@var{filter_params}
1424 @var{filter_name} is the name of a supported MPlayer filter,
@var{filter_params} is a string containing the parameters accepted by
the named filter.
1428 The list of the currently supported filters follows:
The parameter syntax and behavior for the listed filters are the same
as those of the corresponding MPlayer filters. For detailed instructions check
1484 the "VIDEO FILTERS" section in the MPlayer manual.
1486 Some examples follow:
1488 # remove a logo by interpolating the surrounding pixels
1489 mp=delogo=200:200:80:20:1
1491 # adjust gamma, brightness, contrast
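mp=eq2=1.0:2:0.5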
1494 # tweak hue and saturation
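mp=hue=100:-10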
1498 See also mplayer(1), @url{http://www.mplayerhq.hu/}.
This filter accepts an integer as input; if non-zero it negates the
alpha component (if available). The default input value is 0.
1509 Force libavfilter not to use any of the specified pixel formats for the
1510 input to the next filter.
1512 The filter accepts a list of pixel format names, separated by ":",
1513 for example "yuv420p:monow:rgb24".
1515 Some examples follow:
1517 # force libavfilter to use a format different from "yuv420p" for the
1518 # input to the vflip filter
1519 noformat=yuv420p,vflip
1521 # convert the input video to any of the formats not contained in the list
1522 noformat=yuv420p:yuv444p:yuv410p
1527 Pass the video source unchanged to the output.
1531 Apply video transform using libopencv.
To enable this filter, install the libopencv library and headers and
1534 configure FFmpeg with --enable-libopencv.
1536 The filter takes the parameters: @var{filter_name}@{:=@}@var{filter_params}.
1538 @var{filter_name} is the name of the libopencv filter to apply.
1540 @var{filter_params} specifies the parameters to pass to the libopencv
1541 filter. If not specified the default values are assumed.
Refer to the official libopencv documentation for more precise
information:
1545 @url{http://opencv.willowgarage.com/documentation/c/image_filtering.html}
The list of supported libopencv filters follows.
1552 Dilate an image by using a specific structuring element.
1553 This filter corresponds to the libopencv function @code{cvDilate}.
1555 It accepts the parameters: @var{struct_el}:@var{nb_iterations}.
1557 @var{struct_el} represents a structuring element, and has the syntax:
1558 @var{cols}x@var{rows}+@var{anchor_x}x@var{anchor_y}/@var{shape}
@var{cols} and @var{rows} represent the number of columns and rows of
1561 the structuring element, @var{anchor_x} and @var{anchor_y} the anchor
1562 point, and @var{shape} the shape for the structuring element, and
1563 can be one of the values "rect", "cross", "ellipse", "custom".
1565 If the value for @var{shape} is "custom", it must be followed by a
1566 string of the form "=@var{filename}". The file with name
1567 @var{filename} is assumed to represent a binary image, with each
1568 printable character corresponding to a bright pixel. When a custom
1569 @var{shape} is used, @var{cols} and @var{rows} are ignored, the number
of columns and rows of the read file are assumed instead.
1572 The default value for @var{struct_el} is "3x3+0x0/rect".
1574 @var{nb_iterations} specifies the number of times the transform is
1575 applied to the image, and defaults to 1.
Some examples follow:
1579 # use the default values
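ocv=dilate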
1582 # dilate using a structuring element with a 5x5 cross, iterate two times
1583 ocv=dilate=5x5+2x2/cross:2
1585 # read the shape from the file diamond.shape, iterate two times
1586 # the file diamond.shape may contain a pattern of characters like this:
1592 # the specified cols and rows are ignored (but not the anchor point coordinates)
1593 ocv=0x0+2x2/custom=diamond.shape:2
1598 Erode an image by using a specific structuring element.
1599 This filter corresponds to the libopencv function @code{cvErode}.
1601 The filter accepts the parameters: @var{struct_el}:@var{nb_iterations},
1602 with the same syntax and semantics as the @ref{dilate} filter.
1606 Smooth the input video.
1608 The filter takes the following parameters:
1609 @var{type}:@var{param1}:@var{param2}:@var{param3}:@var{param4}.
1611 @var{type} is the type of smooth filter to apply, and can be one of
1612 the following values: "blur", "blur_no_scale", "median", "gaussian",
1613 "bilateral". The default value is "gaussian".
1615 @var{param1}, @var{param2}, @var{param3}, and @var{param4} are
1616 parameters whose meanings depend on smooth type. @var{param1} and
@var{param2} accept positive integer values or 0, @var{param3} and
1618 @var{param4} accept float values.
1620 The default value for @var{param1} is 3, the default value for the
1621 other parameters is 0.
1623 These parameters correspond to the parameters assigned to the
1624 libopencv function @code{cvSmooth}.
1628 Overlay one video on top of another.
It takes two inputs and one output; the first input is the "main"
1631 video on which the second input is overlayed.
1633 It accepts the parameters: @var{x}:@var{y}.
1635 @var{x} is the x coordinate of the overlayed video on the main video,
1636 @var{y} is the y coordinate. The parameters are expressions containing
1637 the following parameters:
1640 @item main_w, main_h
1641 main input width and height
1644 same as @var{main_w} and @var{main_h}
1646 @item overlay_w, overlay_h
1647 overlay input width and height
1650 same as @var{overlay_w} and @var{overlay_h}
Be aware that frames are taken from each input video in timestamp
order, hence, if their initial timestamps differ, it is a good idea
to pass the two inputs through a @var{setpts=PTS-STARTPTS} filter to
have them begin in the same zero timestamp, as done in the example for
the @var{movie} filter.
Some examples follow:
1661 # draw the overlay at 10 pixels from the bottom right
1662 # corner of the main video.
1663 overlay=main_w-overlay_w-10:main_h-overlay_h-10
1665 # insert a transparent PNG logo in the bottom left corner of the input
1666 movie=logo.png [logo];
1667 [in][logo] overlay=10:main_h-overlay_h-10 [out]
# insert 2 different transparent PNG logos (second logo on bottom
# right corner)
1671 movie=logo1.png [logo1];
1672 movie=logo2.png [logo2];
1673 [in][logo1] overlay=10:H-h-10 [in+logo1];
1674 [in+logo1][logo2] overlay=W-w-10:H-h-10 [out]
1676 # add a transparent color layer on top of the main video,
1677 # WxH specifies the size of the main input to the overlay filter
color=red@@.3:WxH [over]; [in][over] overlay [out]
You can chain together more overlays but the efficiency of such an
approach is yet to be tested.
Add paddings to the input image, and place the original input at the
1687 given coordinates @var{x}, @var{y}.
1689 It accepts the following parameters:
1690 @var{width}:@var{height}:@var{x}:@var{y}:@var{color}.
1692 The parameters @var{width}, @var{height}, @var{x}, and @var{y} are
1693 expressions containing the following constants:
1697 the input video width and height
1700 same as @var{in_w} and @var{in_h}
1703 the output width and height, that is the size of the padded area as
1704 specified by the @var{width} and @var{height} expressions
1707 same as @var{out_w} and @var{out_h}
1710 x and y offsets as specified by the @var{x} and @var{y}
1711 expressions, or NAN if not yet specified
1714 same as @var{iw} / @var{ih}
1717 input sample aspect ratio
1720 input display aspect ratio, it is the same as (@var{iw} / @var{ih}) * @var{sar}
1723 horizontal and vertical chroma subsample values. For example for the
1724 pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
The description of the accepted parameters follows.
1732 Specify the size of the output image with the paddings added. If the
1733 value for @var{width} or @var{height} is 0, the corresponding input size
1734 is used for the output.
1736 The @var{width} expression can reference the value set by the
@var{height} expression, and vice versa.
1739 The default value of @var{width} and @var{height} is 0.
1743 Specify the offsets where to place the input image in the padded area
1744 with respect to the top/left border of the output image.
1746 The @var{x} expression can reference the value set by the @var{y}
expression, and vice versa.
1749 The default value of @var{x} and @var{y} is 0.
1753 Specify the color of the padded area, it can be the name of a color
1754 (case insensitive match) or a 0xRRGGBB[AA] sequence.
1756 The default value of @var{color} is "black".
1760 Some examples follow:
1763 # Add paddings with color "violet" to the input video. Output video
# size is 640x480, the top-left corner of the input video is placed at
# column 0, row 40
1766 pad=640:480:0:40:violet
# pad the input to get an output with dimensions increased by 3/2,
1769 # and put the input video at the center of the padded area
1770 pad="3/2*iw:3/2*ih:(ow-iw)/2:(oh-ih)/2"
1772 # pad the input to get a squared output with size equal to the maximum
1773 # value between the input width and height, and put the input video at
1774 # the center of the padded area
1775 pad="max(iw\,ih):ow:(ow-iw)/2:(oh-ih)/2"
1777 # pad the input to get a final w/h ratio of 16:9
1778 pad="ih*16/9:ih:(ow-iw)/2:(oh-ih)/2"
1780 # for anamorphic video, in order to set the output display aspect ratio,
1781 # it is necessary to use sar in the expression, according to the relation:
1782 # (ih * X / ih) * sar = output_dar
1783 # X = output_dar / sar
1784 pad="ih*16/9/sar:ih:(ow-iw)/2:(oh-ih)/2"
1786 # double output size and put the input video in the bottom-right
1787 # corner of the output padded area
1788 pad="2*iw:2*ih:ow-iw:oh-ih"
1791 @section pixdesctest
1793 Pixel format descriptor test filter, mainly useful for internal
1794 testing. The output video should be equal to the input video.
1798 format=monow, pixdesctest
1801 can be used to test the monowhite pixel format descriptor definition.
1805 Scale the input video to @var{width}:@var{height}[:@var{interl}=@{1|-1@}] and/or convert the image format.
1807 The parameters @var{width} and @var{height} are expressions containing
1808 the following constants:
1812 the input width and height
1815 same as @var{in_w} and @var{in_h}
the output (scaled) width and height
1821 same as @var{out_w} and @var{out_h}
1824 same as @var{iw} / @var{ih}
1827 input sample aspect ratio
1830 input display aspect ratio, it is the same as (@var{iw} / @var{ih}) * @var{sar}
1833 input sample aspect ratio
1836 horizontal and vertical chroma subsample values. For example for the
1837 pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
1840 If the input image format is different from the format requested by
1841 the next filter, the scale filter will convert the input to the
1844 If the value for @var{width} or @var{height} is 0, the respective input
1845 size is used for the output.
1847 If the value for @var{width} or @var{height} is -1, the scale filter will
1848 use, for the respective output size, a value that maintains the aspect
1849 ratio of the input image.
1851 The default value of @var{width} and @var{height} is 0.
1853 Valid values for the optional parameter @var{interl} are:
1857 force interlaced aware scaling
1860 select interlaced aware scaling depending on whether the source frames
1861 are flagged as interlaced or not
1864 Some examples follow:
1866 # scale the input video to a size of 200x100.
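scale=200:100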
1869 # scale the input to 2x
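scale=2*iw:2*ih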
1871 # the above is the same as
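scale=2*in_w:2*in_h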
1874 # scale the input to half size
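scale=iw/2:ih/2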
1877 # increase the width, and set the height to the same size
1880 # seek for Greek harmony
1884 # increase the height, and set the width to 3/2 of the height
1887 # increase the size, but make the size a multiple of the chroma
1888 scale="trunc(3/2*iw/hsub)*hsub:trunc(3/2*ih/vsub)*vsub"
1890 # increase the width to a maximum of 500 pixels, keep the same input aspect ratio
1891 scale='min(500\, iw*3/2):-1'
1895 Select frames to pass in output.
It accepts as input an expression, which is evaluated for each input
frame. If the expression evaluates to a non-zero value, the frame
is selected and passed to the output, otherwise it is discarded.
1901 The expression can contain the following constants:
1905 the sequential number of the filtered frame, starting from 0
1908 the sequential number of the selected frame, starting from 0
1910 @item prev_selected_n
1911 the sequential number of the last selected frame, NAN if undefined
1914 timebase of the input timestamps
1917 the PTS (Presentation TimeStamp) of the filtered video frame,
1918 expressed in @var{TB} units, NAN if undefined
1921 the PTS (Presentation TimeStamp) of the filtered video frame,
1922 expressed in seconds, NAN if undefined
1925 the PTS of the previously filtered video frame, NAN if undefined
1927 @item prev_selected_pts
1928 the PTS of the last previously filtered video frame, NAN if undefined
1930 @item prev_selected_t
1931 the PTS of the last previously selected video frame, NAN if undefined
1934 the PTS of the first video frame in the video, NAN if undefined
1937 the time of the first video frame in the video, NAN if undefined
the type of the filtered frame, can assume one of the following
values:
1952 @item interlace_type
1953 the frame interlace type, can assume one of the following values:
1956 the frame is progressive (not interlaced)
1958 the frame is top-field-first
1960 the frame is bottom-field-first
1964 1 if the filtered frame is a key-frame, 0 otherwise
1967 the position in the file of the filtered frame, -1 if the information
1968 is not available (e.g. for synthetic video)
1971 The default value of the select expression is "1".
1973 Some examples follow:
1976 # select all frames in input
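select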
1979 # the above is the same as:
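select=1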
1985 # select only I-frames
1986 select='eq(pict_type\,I)'
1988 # select one frame every 100
1989 select='not(mod(n\,100))'
1991 # select only frames contained in the 10-20 time interval
1992 select='gte(t\,10)*lte(t\,20)'
1994 # select only I frames contained in the 10-20 time interval
1995 select='gte(t\,10)*lte(t\,20)*eq(pict_type\,I)'
1997 # select frames with a minimum distance of 10 seconds
1998 select='isnan(prev_selected_t)+gte(t-prev_selected_t\,10)'
2004 Set the Display Aspect Ratio for the filter output video.
2006 This is done by changing the specified Sample (aka Pixel) Aspect
2007 Ratio, according to the following equation:
2008 @math{DAR = HORIZONTAL_RESOLUTION / VERTICAL_RESOLUTION * SAR}
2010 Keep in mind that this filter does not modify the pixel dimensions of
2011 the video frame. Also the display aspect ratio set by this filter may
2012 be changed by later filters in the filterchain, e.g. in case of
2013 scaling or if another "setdar" or a "setsar" filter is applied.
2015 The filter accepts a parameter string which represents the wanted
2016 display aspect ratio.
2017 The parameter can be a floating point number string, or an expression
2018 of the form @var{num}:@var{den}, where @var{num} and @var{den} are the
2019 numerator and denominator of the aspect ratio.
2020 If the parameter is not specified, it is assumed the value "0:1".
2022 For example to change the display aspect ratio to 16:9, specify:
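setdar=16:9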
2025 # the above is equivalent to
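setdar=1.77777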
2029 See also the @ref{setsar} filter documentation.
2033 Change the PTS (presentation timestamp) of the input video frames.
It accepts as input an expression evaluated through the eval API, which
can contain the following constants:
2040 the presentation timestamp in input
2043 the count of the input frame, starting from 0.
2046 the PTS of the first video frame
2049 tell if the current frame is interlaced
2052 original position in the file of the frame, or undefined if undefined
2053 for the current frame
2063 Some examples follow:
2066 # start counting PTS from zero
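setpts=PTS-STARTPTS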
2078 # fixed rate 25 fps with some jitter
2079 setpts='1/(25*TB) * (N + 0.05 * sin(N*2*PI/25))'
2085 Set the Sample (aka Pixel) Aspect Ratio for the filter output video.
2087 Note that as a consequence of the application of this filter, the
output display aspect ratio will change according to the following
equation:
2090 @math{DAR = HORIZONTAL_RESOLUTION / VERTICAL_RESOLUTION * SAR}
2092 Keep in mind that the sample aspect ratio set by this filter may be
2093 changed by later filters in the filterchain, e.g. if another "setsar"
2094 or a "setdar" filter is applied.
2096 The filter accepts a parameter string which represents the wanted
2097 sample aspect ratio.
2098 The parameter can be a floating point number string, or an expression
2099 of the form @var{num}:@var{den}, where @var{num} and @var{den} are the
2100 numerator and denominator of the aspect ratio.
2101 If the parameter is not specified, it is assumed the value "0:1".
2103 For example to change the sample aspect ratio to 10:11, specify:
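setsar=10:11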
Set the timebase to use for the output frame timestamps.
2111 It is mainly useful for testing timebase configuration.
It accepts as input an arithmetic expression representing a rational.
2114 The expression can contain the constants "AVTB" (the
2115 default timebase), and "intb" (the input timebase).
2117 The default value for the input is "intb".
Some examples follow.
2122 # set the timebase to 1/25
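settb=1/25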
2125 # set the timebase to 1/10
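settb=0.1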
# set the timebase to 1001/1000
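settb=1+0.001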
# set the timebase to 2*intb
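settb=2*intb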
# set the default timebase value
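settb=AVTB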
2140 Show a line containing various information for each input video frame.
2141 The input video is not modified.
2143 The shown line contains a sequence of key/value pairs of the form
2144 @var{key}:@var{value}.
2146 A description of each shown parameter follows:
2150 sequential number of the input frame, starting from 0
2153 Presentation TimeStamp of the input frame, expressed as a number of
2154 time base units. The time base unit depends on the filter input pad.
Presentation TimeStamp of the input frame, expressed as a number of
seconds
position of the frame in the input stream, -1 if this information is
unavailable and/or meaningless (for example in case of synthetic video)
sample aspect ratio of the input frame, expressed in the form
@var{num}/@var{den}
2172 size of the input frame, expressed in the form
2173 @var{width}x@var{height}
2176 interlaced mode ("P" for "progressive", "T" for top field first, "B"
2177 for bottom field first)
2180 1 if the frame is a key frame, 0 otherwise
2183 picture type of the input frame ("I" for an I-frame, "P" for a
2184 P-frame, "B" for a B-frame, "?" for unknown type).
2185 Check also the documentation of the @code{AVPictureType} enum and of
2186 the @code{av_get_picture_type_char} function defined in
2187 @file{libavutil/avutil.h}.
2190 Adler-32 checksum of all the planes of the input frame
2192 @item plane_checksum
2193 Adler-32 checksum of each plane of the input frame, expressed in the form
2194 "[@var{c0} @var{c1} @var{c2} @var{c3}]"
Pass the images of input video on to next video filter as multiple
slices.
2203 ./ffmpeg -i in.avi -vf "slicify=32" out.avi
2206 The filter accepts the slice height as parameter. If the parameter is
2207 not specified it will use the default value of 16.
Adding this at the beginning of filter chains should make filtering
2210 faster due to better use of the memory cache.
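As a usage sketch (file names are only illustrative), the filter can be
placed in front of another filter in the chain:
@example
./ffmpeg -i in.avi -vf "slicify=16, vflip" out.avi
@end example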
@section split

Pass on the input video to two outputs. Both outputs are identical to
the input video.
2219 [in] split [splitout1][splitout2];
2220 [splitout1] crop=100:100:0:0 [cropout];
2221 [splitout2] pad=200:200:100:100 [padout];
will create two separate outputs from the same input, one cropped and
the other padded.
@section transpose

Transpose rows with columns in the input video and optionally flip it.
It accepts a parameter representing an integer, which can assume the
following values:
@table @option
@item 0
Rotate by 90 degrees counterclockwise and vertically flip (default).

@item 1
Rotate by 90 degrees clockwise.

@item 2
Rotate by 90 degrees counterclockwise.

@item 3
Rotate by 90 degrees clockwise and vertically flip.
@end table
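As a usage sketch (file names are only illustrative), rotating a clip by 90
degrees clockwise would look like:
@example
./ffmpeg -i in.avi -vf "transpose=1" out.avi
@end example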
@section unsharp

Sharpen or blur the input video.
2272 It accepts the following parameters:
2273 @var{luma_msize_x}:@var{luma_msize_y}:@var{luma_amount}:@var{chroma_msize_x}:@var{chroma_msize_y}:@var{chroma_amount}
2275 Negative values for the amount will blur the input video, while positive
2276 values will sharpen. All parameters are optional and default to the
2277 equivalent of the string '5:5:1.0:5:5:0.0'.
@table @option
@item luma_msize_x
Set the luma matrix horizontal size. It can be an integer between 3
and 13, default value is 5.

@item luma_msize_y
Set the luma matrix vertical size. It can be an integer between 3
and 13, default value is 5.

@item luma_amount
Set the luma effect strength. It can be a float number between -2.0
and 5.0, default value is 1.0.

@item chroma_msize_x
Set the chroma matrix horizontal size. It can be an integer between 3
and 13, default value is 5.

@item chroma_msize_y
Set the chroma matrix vertical size. It can be an integer between 3
and 13, default value is 5.

@item chroma_amount
Set the chroma effect strength. It can be a float number between -2.0
and 5.0, default value is 0.0.
@end table
2308 # Strong luma sharpen effect parameters
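# (illustrative values: a 7x7 luma matrix with an amount of 2.5)
unsharp=7:7:2.5
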
2311 # Strong blur of both luma and chroma parameters
2312 unsharp=7:7:-2:7:7:-2
2314 # Use the default values with @command{ffmpeg}
2315 ./ffmpeg -i in.avi -vf "unsharp" out.mp4
@section vflip

Flip the input video vertically.
2323 ./ffmpeg -i in.avi -vf "vflip" out.avi
@section yadif

Deinterlace the input video ("yadif" means "yet another deinterlacing
filter").
2331 It accepts the optional parameters: @var{mode}:@var{parity}:@var{auto}.
@var{mode} specifies the interlacing mode to adopt, and accepts one of the
following values:
@table @option
@item 0
output 1 frame for each frame
@item 1
output 1 frame for each field
@item 2
like 0 but skips spatial interlacing check
@item 3
like 1 but skips spatial interlacing check
@end table
2349 @var{parity} specifies the picture field parity assumed for the input
interlaced video, and accepts one of the following values:
@table @option
@item 0
assume top field first
@item 1
assume bottom field first
@item -1
enable automatic detection
@end table
2361 Default value is -1.
If the interlacing is unknown or the decoder does not export this
information, top field first will be assumed.
@var{auto} specifies whether the deinterlacer should trust the interlaced
flag and only deinterlace frames marked as interlaced. It accepts one of
the following values:
@table @option
@item 0
deinterlace all frames
@item 1
only deinterlace frames marked as interlaced
@end table
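As a usage sketch (file names are only illustrative), the following would
output one frame for each frame, assume top field first, and deinterlace
every frame regardless of its interlaced flag:
@example
./ffmpeg -i in.avi -vf "yadif=0:0:0" out.avi
@end example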
2377 @c man end VIDEO FILTERS
2379 @chapter Video Sources
2380 @c man begin VIDEO SOURCES
2382 Below is a description of the currently available video sources.
@section buffer

Buffer video frames, and make them available to the filter chain.
2388 This source is mainly intended for a programmatic use, in particular
2389 through the interface defined in @file{libavfilter/vsrc_buffer.h}.
2391 It accepts the following parameters:
@var{width}:@var{height}:@var{pix_fmt_string}:@var{timebase_num}:@var{timebase_den}:@var{sample_aspect_ratio_num}:@var{sample_aspect_ratio_den}:@var{scale_params}
All the parameters but @var{scale_params} need to be explicitly
specified.

The list of the accepted parameters follows.
@table @option
@item width, height
Specify the width and height of the buffered video frames.
2404 @item pix_fmt_string
2405 A string representing the pixel format of the buffered video frames.
It may be a number corresponding to a pixel format, or a pixel format name.
2409 @item timebase_num, timebase_den
Specify the numerator and denominator of the timebase assumed by the
2411 timestamps of the buffered frames.
@item sample_aspect_ratio_num, sample_aspect_ratio_den
2414 Specify numerator and denominator of the sample aspect ratio assumed
2415 by the video frames.
@item scale_params
Specify the optional parameters to be used for the scale filter which
is automatically inserted when a change in the input size or format is
detected.
@end table
2425 buffer=320:240:yuv410p:1:24:1:1
2428 will instruct the source to accept video frames with size 320x240 and
2429 with format "yuv410p", assuming 1/24 as the timestamps timebase and
2430 square pixels (1:1 sample aspect ratio).
2431 Since the pixel format with name "yuv410p" corresponds to the number 6
2432 (check the enum PixelFormat definition in @file{libavutil/pixfmt.h}),
2433 this example corresponds to:
2435 buffer=320:240:6:1:24:1:1
@section color

Provide a uniformly colored input.
2442 It accepts the following parameters:
2443 @var{color}:@var{frame_size}:@var{frame_rate}
A description of the accepted parameters follows.
@table @option
@item color
Specify the color of the source. It can be the name of a color (case
2451 insensitive match) or a 0xRRGGBB[AA] sequence, possibly followed by an
2452 alpha specifier. The default value is "black".
@item frame_size
Specify the size of the sourced video; it may be a string of the form
2456 @var{width}x@var{height}, or the name of a size abbreviation. The
2457 default value is "320x240".
@item frame_rate
Specify the frame rate of the sourced video, as the number of frames
2461 generated per second. It has to be a string in the format
2462 @var{frame_rate_num}/@var{frame_rate_den}, an integer number, a float
number or a valid video frame rate abbreviation. The default value is
"25".
@end table
For example, the following graph description will generate a red source
with an opacity of 0.2, with size "qcif" and a frame rate of 10
frames per second, which will be overlaid over the source connected
to the pad with identifier "in":
2474 "color=red@@0.2:qcif:10 [color]; [in][color] overlay [out]"
@section movie

Read a video stream from a movie container.
It accepts the syntax: @var{movie_name}[:@var{options}], where
@var{movie_name} is the name of the resource to read (not necessarily
a file; it may also be a device or a stream accessed through some
protocol), and @var{options} is an optional sequence of
@var{key}=@var{value} pairs, separated by ":".
2487 The description of the accepted options follows.
@table @option
@item format_name, f
2492 Specifies the format assumed for the movie to read, and can be either
the name of a container or an input device. If not specified, the
format is guessed from @var{movie_name} or by probing.
2496 @item seek_point, sp
Specifies the seek point in seconds. The frames will be output
starting from this seek point. The parameter is evaluated with
@code{av_strtod}, so the numerical value may be suffixed by an IS
postfix. Default value is "0".
2502 @item stream_index, si
2503 Specifies the index of the video stream to read. If the value is -1,
the best suited video stream will be automatically selected. Default
value is "-1".
@end table
This filter makes it possible to overlay a second video on top of the
main input of a filtergraph, as shown in this graph:
input -----------> deltapts0 --> overlay --> output
                                    ^
                                    |
movie --> scale--> deltapts1 -------+
2518 Some examples follow:
2520 # skip 3.2 seconds from the start of the avi file in.avi, and overlay it
2521 # on top of the input labelled as "in".
2522 movie=in.avi:seek_point=3.2, scale=180:-1, setpts=PTS-STARTPTS [movie];
2523 [in] setpts=PTS-STARTPTS, [movie] overlay=16:16 [out]
2525 # read from a video4linux2 device, and overlay it on top of the input
2527 movie=/dev/video0:f=video4linux2, scale=180:-1, setpts=PTS-STARTPTS [movie];
2528 [in] setpts=PTS-STARTPTS, [movie] overlay=16:16 [out]
@section mptestsrc

Generate various test patterns, as generated by the MPlayer test filter.
2536 The size of the generated video is fixed, and is 256x256.
2537 This source is useful in particular for testing encoding features.
2539 This source accepts an optional sequence of @var{key}=@var{value} pairs,
2540 separated by ":". The description of the accepted options follows.
@table @option
@item rate, r
Specify the frame rate of the sourced video, as the number of frames
2546 generated per second. It has to be a string in the format
2547 @var{frame_rate_num}/@var{frame_rate_den}, an integer number, a float
number or a valid video frame rate abbreviation. The default value is
"25".
@item duration, d
Set the video duration of the sourced video. The accepted syntax is:
2554 [-]HH[:MM[:SS[.m...]]]
2557 See also the function @code{av_parse_time()}.
2559 If not specified, or the expressed duration is negative, the video is
2560 supposed to be generated forever.
@item test, t
Set the number or the name of the test to perform. Supported tests are:
dc_luma, dc_chroma, freq_luma, freq_chroma, amp_luma, amp_chroma,
cbp, mv, ring1, ring2, all.

Default value is "all", which will cycle through the list of all tests.
@end table
For example, the following:
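@example
# assuming the source is registered under the name "mptestsrc"
mptestsrc=t=dc_luma
@end example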
2587 will generate a "dc_luma" test pattern.
@section frei0r_src

Provide a frei0r source.
2593 To enable compilation of this filter you need to install the frei0r
2594 header and configure FFmpeg with --enable-frei0r.
2596 The source supports the syntax:
2598 @var{size}:@var{rate}:@var{src_name}[@{=|:@}@var{param1}:@var{param2}:...:@var{paramN}]
2601 @var{size} is the size of the video to generate, may be a string of the
2602 form @var{width}x@var{height} or a frame size abbreviation.
2603 @var{rate} is the rate of the video to generate, may be a string of
2604 the form @var{num}/@var{den} or a frame rate abbreviation.
@var{src_name} is the name of the frei0r source to load. For more
2606 information regarding frei0r and how to set the parameters read the
2607 section @ref{frei0r} in the description of the video filters.
2609 Some examples follow:
2611 # generate a frei0r partik0l source with size 200x200 and framerate 10
# which is overlaid on the overlay filter main input
2613 frei0r_src=200x200:10:partik0l=1234 [overlay]; [in][overlay] overlay
2616 @section nullsrc, rgbtestsrc, testsrc
2618 The @code{nullsrc} source returns unprocessed video frames. It is
2619 mainly useful to be employed in analysis / debugging tools, or as the
2620 source for filters which ignore the input data.
2622 The @code{rgbtestsrc} source generates an RGB test pattern useful for
2623 detecting RGB vs BGR issues. You should see a red, green and blue
2624 stripe from top to bottom.
2626 The @code{testsrc} source generates a test video pattern, showing a
2627 color pattern, a scrolling gradient and a timestamp. This is mainly
2628 intended for testing purposes.
2630 These sources accept an optional sequence of @var{key}=@var{value} pairs,
2631 separated by ":". The description of the accepted options follows.
@table @option
@item size, s
Specify the size of the sourced video; it may be a string of the form
@var{width}x@var{height}, or the name of a size abbreviation. The
default value is "320x240".
@item rate, r
Specify the frame rate of the sourced video, as the number of frames
2642 generated per second. It has to be a string in the format
2643 @var{frame_rate_num}/@var{frame_rate_den}, an integer number, a float
number or a valid video frame rate abbreviation. The default value is
"25".
@item sar
Set the sample aspect ratio of the sourced video.
@item duration, d
Set the video duration of the sourced video. The accepted syntax is:
2653 [-]HH[:MM[:SS[.m...]]]
2656 See also the function @code{av_parse_time()}.
2658 If not specified, or the expressed duration is negative, the video is
supposed to be generated forever.
@end table
For example, the following:
2664 testsrc=duration=5.3:size=qcif:rate=10
2667 will generate a video with a duration of 5.3 seconds, with size
2668 176x144 and a framerate of 10 frames per second.
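The same options apply to the other sources described in this section; as a
sketch, the following would generate a 3-second RGB test pattern with the
same size and rate:
@example
rgbtestsrc=duration=3:size=qcif:rate=10
@end example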
2670 If the input content is to be ignored, @code{nullsrc} can be used. The
2671 following command generates noise in the luminance plane by employing
2672 the @code{mp=geq} filter:
2674 nullsrc=s=256x256, mp=geq=random(1)*255:128:128
2677 @c man end VIDEO SOURCES
2679 @chapter Video Sinks
2680 @c man begin VIDEO SINKS
2682 Below is a description of the currently available video sinks.
@section buffersink

Buffer video frames, and make them available to the end of the filter
graph.
2689 This sink is mainly intended for a programmatic use, in particular
2690 through the interface defined in @file{libavfilter/buffersink.h}.
It does not require a string parameter as input, but you need to
2693 specify a pointer to a list of supported pixel formats terminated by
2694 -1 in the opaque parameter provided to @code{avfilter_init_filter}
2695 when initializing this sink.
@section nullsink

Null video sink: do absolutely nothing with the input video. It is
mainly useful as a template and to be employed in analysis / debugging
tools.
2703 @c man end VIDEO SINKS