Prefer the album gain, if present.
@end table
-@item replaygain_preamp
-Pre-amplification gain in dB to apply to the selected replaygain gain.
+@item replaygain_preamp
+Pre-amplification gain in dB to apply to the selected replaygain gain.
+
+Default value for @var{replaygain_preamp} is 0.0.
+
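+@item replaygain_noclip
+Prevent clipping by limiting the gain applied.
+
+Default value for @var{replaygain_noclip} is 1.
+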
+@item eval
+Set when the volume expression is evaluated.
+
+It accepts the following values:
+@table @samp
+@item once
+only evaluate expression once during the filter initialization, or
+when the @samp{volume} command is sent
+
+@item frame
+evaluate expression for each incoming frame
+@end table
+
+Default value is @samp{once}.
+@end table
+
+The volume expression can contain the following parameters.
+
+@table @option
+@item n
+frame number (starting at zero)
+@item nb_channels
+number of channels
+@item nb_consumed_samples
+number of samples consumed by the filter
+@item nb_samples
+number of samples in the current frame
+@item pos
+original frame position in the file
+@item pts
+frame PTS
+@item sample_rate
+sample rate
+@item startpts
+PTS at start of stream
+@item startt
+time at start of stream
+@item t
+frame time
+@item tb
+timestamp timebase
+@item volume
+last set volume value
+@end table
+
+Note that when @option{eval} is set to @samp{once} only the
+@var{sample_rate} and @var{tb} variables are available, all other
+variables will evaluate to NAN.
+
+@subsection Commands
+
+This filter supports the following commands:
+@table @option
+@item volume
+Modify the volume expression.
+The command accepts the same syntax as the corresponding option; an
+example of sending it at runtime is shown after this table.
+
+If the specified expression is not valid, it is kept at its current
+value.
+@end table
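+
+As a hedged illustration (the timestamp and the new volume value are
+arbitrary), the @samp{volume} command can be sent with the
+@code{asendcmd} filter:
+@example
+asendcmd=c='10.0 volume volume 0.3',volume=1.0
+@end example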
+
+@subsection Examples
+
+@itemize
+@item
+Halve the input audio volume:
+@example
+volume=volume=0.5
+volume=volume=1/2
+volume=volume=-6.0206dB
+@end example
+
+In all the above examples the named key for @option{volume} can be
+omitted, for example:
+@example
+volume=0.5
+@end example
+
+@item
+Increase input audio power by 6 decibels using fixed-point precision:
+@example
+volume=volume=6dB:precision=fixed
+@end example
+
+@item
+Fade volume after time 10 with an annihilation period of 5 seconds:
+@example
+volume='if(lt(t,10),1,max(1-(t-10)/5,0))':eval=frame
+@end example
+@end itemize
+
+@section volumedetect
+
+Detect the volume of the input video.
+
+The filter has no parameters. The input is not modified. Statistics about
+the volume will be printed in the log when the end of the input stream is
+reached.
+
+In particular it will show the mean volume (root mean square), maximum
+volume (on a per-sample basis), and the beginning of a histogram of the
+registered volume values (from the maximum value to a cumulated 1/1000 of
+the samples).
+
+All volumes are in decibels relative to the maximum PCM value.
+
+@subsection Examples
+
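+The statistics can be produced with a command along the following lines
+(@file{input.wav} is only a placeholder input name):
+@example
+ffmpeg -i input.wav -af volumedetect -f null -
+@end example
+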
+Here is an excerpt of the output:
+@example
+[Parsed_volumedetect_0 @ 0xa23120] mean_volume: -27 dB
+[Parsed_volumedetect_0 @ 0xa23120] max_volume: -4 dB
+[Parsed_volumedetect_0 @ 0xa23120] histogram_4db: 6
+[Parsed_volumedetect_0 @ 0xa23120] histogram_5db: 62
+[Parsed_volumedetect_0 @ 0xa23120] histogram_6db: 286
+[Parsed_volumedetect_0 @ 0xa23120] histogram_7db: 1042
+[Parsed_volumedetect_0 @ 0xa23120] histogram_8db: 2551
+[Parsed_volumedetect_0 @ 0xa23120] histogram_9db: 4609
+[Parsed_volumedetect_0 @ 0xa23120] histogram_10db: 8409
+@end example
+
+It means that:
+@itemize
+@item
+The mean square energy is approximately -27 dB, or 10^-2.7.
+@item
+The largest sample is at -4 dB, or more precisely between -4 dB and -5 dB.
+@item
+There are 6 samples at -4 dB, 62 at -5 dB, 286 at -6 dB, etc.
+@end itemize
+
+In other words, raising the volume by +4 dB does not cause any clipping,
+raising it by +5 dB causes clipping for 6 samples, etc.
+
+@c man end AUDIO FILTERS
+
+@chapter Audio Sources
+@c man begin AUDIO SOURCES
+
+Below is a description of the currently available audio sources.
+
+@section abuffer
+
+Buffer audio frames, and make them available to the filter chain.
+
+This source is mainly intended for a programmatic use, in particular
+through the interface defined in @file{libavfilter/asrc_abuffer.h}.
+
+It accepts the following parameters:
+@table @option
+
+@item time_base
+The timebase which will be used for timestamps of submitted frames. It must be
+either a floating-point number or in @var{numerator}/@var{denominator} form.
+
+@item sample_rate
+The sample rate of the incoming audio buffers.
+
+@item sample_fmt
+The sample format of the incoming audio buffers.
+Either a sample format name or its corresponding integer representation from
+the enum AVSampleFormat in @file{libavutil/samplefmt.h}.
+
+@item channel_layout
+The channel layout of the incoming audio buffers.
+Either a channel layout name from channel_layout_map in
+@file{libavutil/channel_layout.c} or its corresponding integer representation
+from the AV_CH_LAYOUT_* macros in @file{libavutil/channel_layout.h}.
+
+@item channels
+The number of channels of the incoming audio buffers.
+If both @var{channels} and @var{channel_layout} are specified, then they
+must be consistent.
+
+@end table
+
+@subsection Examples
+
+@example
+abuffer=sample_rate=44100:sample_fmt=s16p:channel_layout=stereo
+@end example
+
+will instruct the source to accept planar 16-bit signed stereo at 44100 Hz.
+Since the sample format with name "s16p" corresponds to the number
+6 and the "stereo" channel layout corresponds to the value 0x3, this is
+equivalent to:
+@example
+abuffer=sample_rate=44100:sample_fmt=6:channel_layout=0x3
+@end example
+
+@section aevalsrc
+
+Generate an audio signal specified by an expression.
+
+This source accepts one or more expressions as input (one for each
+channel), which are evaluated and used to generate a corresponding
+audio signal.
+
+This source accepts the following options:
+
+@table @option
+@item exprs
+Set the '|'-separated expressions list for each separate channel. In case the
+@option{channel_layout} option is not specified, the selected channel layout
+depends on the number of provided expressions. Otherwise the last
+specified expression is applied to the remaining output channels.
+
+@item channel_layout, c
+Set the channel layout. The number of channels in the specified layout
+must be equal to the number of specified expressions.
+
+@item duration, d
+Set the minimum duration of the sourced audio. See the function
+@code{av_parse_time()} for the accepted format.
+Note that the resulting duration may be greater than the specified
+duration, as the generated audio is always cut at the end of a
+complete frame.
+
+If not specified, or the expressed duration is negative, the audio is
+supposed to be generated forever.
+
+@item nb_samples, n
+Set the number of samples per channel in each output frame. The
+default is 1024.
+
+@item sample_rate, s
+Specify the sample rate. The default is 44100.
+@end table
+
+Each expression in @var{exprs} can contain the following constants:
+
+@table @option
+@item n
+number of the evaluated sample, starting from 0
+
+@item t
+time of the evaluated sample expressed in seconds, starting from 0
+
+@item s
+sample rate
+
+@end table
+
+@subsection Examples
+
+@itemize
+@item
+Generate silence:
+@example
+aevalsrc=0
+@end example
+
+@item
+Generate a sine signal with a frequency of 440 Hz, and set the sample
+rate to 8000 Hz:
+@example
+aevalsrc="sin(440*2*PI*t):s=8000"
+@end example
+
+@item
+Generate a two-channel signal, and specify the channel layout (Front
+Center + Back Center) explicitly:
+@example
+aevalsrc="sin(420*2*PI*t)|cos(430*2*PI*t):c=FC|BC"
+@end example
+
+@item
+Generate white noise:
+@example
+aevalsrc="-2+random(0)"
+@end example
+
+@item
+Generate an amplitude modulated signal:
+@example
+aevalsrc="sin(10*2*PI*t)*sin(880*2*PI*t)"
+@end example
+
+@item
+Generate 2.5 Hz binaural beats on a 360 Hz carrier:
+@example
+aevalsrc="0.1*sin(2*PI*(360-2.5/2)*t) | 0.1*sin(2*PI*(360+2.5/2)*t)"
+@end example
+
+@end itemize
+
+@section anullsrc
+
+The null audio source; it returns unprocessed audio frames. It is mainly useful
+as a template and to be employed in analysis / debugging tools, or as
+the source for filters which ignore the input data (for example the sox
+synth filter).
+
+This source accepts the following options:
+
+@table @option
+
+@item channel_layout, cl
+
+Specifies the channel layout, and can be either an integer or a string
+representing a channel layout. The default value of @var{channel_layout}
+is "stereo".
+
+Check the channel_layout_map definition in
+@file{libavutil/channel_layout.c} for the mapping between strings and
+channel layout values.
+
+@item sample_rate, r
+Specifies the sample rate, and defaults to 44100.
+
+@item nb_samples, n
+Set the number of samples per requested frame.
+
+@end table
+
+@subsection Examples
+
+@itemize
+@item
+Set the sample rate to 48000 Hz and the channel layout to AV_CH_LAYOUT_MONO.
+@example
+anullsrc=r=48000:cl=4
+@end example
+
+@item
+Do the same operation with a more obvious syntax:
+@example
+anullsrc=r=48000:cl=mono
+@end example
+@end itemize
+
+All the parameters need to be explicitly defined.
+
+@section flite
+
+Synthesize a voice utterance using the libflite library.
+
+To enable compilation of this filter you need to configure FFmpeg with
+@code{--enable-libflite}.
+
+Note that the flite library is not thread-safe.
+
+The filter accepts the following options:
+
+@table @option
+
+@item list_voices
+If set to 1, list the names of the available voices and exit
+immediately. Default value is 0.
+
+@item nb_samples, n
+Set the maximum number of samples per frame. Default value is 512.
+
+@item textfile
+Set the filename containing the text to speak.
+
+@item text
+Set the text to speak.
+
+@item voice, v
+Set the voice to use for the speech synthesis. Default value is
+@code{kal}. See also the @var{list_voices} option.
+@end table
+
+@subsection Examples
+
+@itemize
+@item
+Read from file @file{speech.txt}, and synthesize the text using the
+standard flite voice:
+@example
+flite=textfile=speech.txt
+@end example
+
+@item
+Read the specified text selecting the @code{slt} voice:
+@example
+flite=text='So fare thee well, poor devil of a Sub-Sub, whose commentator I am':voice=slt
+@end example
+
+@item
+Input text to ffmpeg:
+@example
+ffmpeg -f lavfi -i flite=text='So fare thee well, poor devil of a Sub-Sub, whose commentator I am':voice=slt
+@end example
+
+@item
+Make @file{ffplay} speak the specified text, using @code{flite} and
+the @code{lavfi} device:
+@example
+ffplay -f lavfi flite=text='No more be grieved for which that thou hast done.'
+@end example
+@end itemize
+
+For more information about libflite, check:
+@url{http://www.speech.cs.cmu.edu/flite/}
+
+@section sine
+
+Generate an audio signal made of a sine wave with amplitude 1/8.
+
+The audio signal is bit-exact.
+
+The filter accepts the following options:
+
+@table @option
+
+@item frequency, f
+Set the carrier frequency. Default is 440 Hz.
+
+@item beep_factor, b
+Enable a periodic beep every second with frequency @var{beep_factor} times
+the carrier frequency. Default is 0, meaning the beep is disabled.
+
+@item sample_rate, r
+Specify the sample rate, default is 44100.
+
+@item duration, d
+Specify the duration of the generated audio stream.
+
+@item samples_per_frame
+Set the number of samples per output frame, default is 1024.
+@end table
+
+@subsection Examples
+
+@itemize
+
+@item
+Generate a simple 440 Hz sine wave:
+@example
+sine
+@end example
+
+@item
+Generate a 220 Hz sine wave with a 880 Hz beep each second, for 5 seconds:
+@example
+sine=220:4:d=5
+sine=f=220:b=4:d=5
+sine=frequency=220:beep_factor=4:duration=5
+@end example
+
+@end itemize
+
+@c man end AUDIO SOURCES
+
+@chapter Audio Sinks
+@c man begin AUDIO SINKS
+
+Below is a description of the currently available audio sinks.
+
+@section abuffersink
+
+Buffer audio frames, and make them available to the end of the filter chain.
+
+This sink is mainly intended for programmatic use, in particular
+through the interface defined in @file{libavfilter/buffersink.h}
+or the options system.
+
+It accepts a pointer to an AVABufferSinkContext structure, which
+defines the incoming buffers' formats, to be passed as the opaque
+parameter to @code{avfilter_init_filter} for initialization.
+
+@section anullsink
+
+Null audio sink; do absolutely nothing with the input audio. It is
+mainly useful as a template and for use in analysis / debugging
+tools.
+
+@c man end AUDIO SINKS
+
+@chapter Video Filters
+@c man begin VIDEO FILTERS
+
+When you configure your FFmpeg build, you can disable any of the
+existing filters using @code{--disable-filters}.
+The configure output will show the video filters included in your
+build.
+
+Below is a description of the currently available video filters.
+
+@section alphaextract
+
+Extract the alpha component from the input as a grayscale video. This
+is especially useful with the @var{alphamerge} filter.
+
+@section alphamerge
+
+Add or replace the alpha component of the primary input with the
+grayscale value of a second input. This is intended for use with
+@var{alphaextract} to allow the transmission or storage of frame
+sequences that have alpha in a format that doesn't support an alpha
+channel.
+
+For example, to reconstruct full frames from a normal YUV-encoded video
+and a separate video created with @var{alphaextract}, you might use:
+@example
+movie=in_alpha.mkv [alpha]; [in][alpha] alphamerge [out]
+@end example
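+
+A hedged, fuller invocation of the same reconstruction (the file names
+are placeholders, and @code{qtrle} is just one example of a codec that
+can store alpha) drives it from @command{ffmpeg} with two inputs:
+@example
+ffmpeg -i main.mkv -i in_alpha.mkv -filter_complex "[0:v][1:v]alphamerge[out]" -map "[out]" -c:v qtrle output.mov
+@end example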
+
+Since this filter is designed for reconstruction, it operates on frame
+sequences without considering timestamps, and terminates when either
+input reaches end of stream. This will cause problems if your encoding
+pipeline drops frames. If you're trying to apply an image as an
+overlay to a video stream, consider the @var{overlay} filter instead.
+
+@section ass
+
+Same as the @ref{subtitles} filter, except that it doesn't require libavcodec
+and libavformat to work. On the other hand, it is limited to ASS (Advanced
+Substation Alpha) subtitles files.
+
+@section bbox
+
+Compute the bounding box for the non-black pixels in the input frame
+luminance plane.
+
+This filter computes the bounding box containing all the pixels with a
+luminance value greater than the minimum allowed value.
+The parameters describing the bounding box are printed on the filter
+log.
+
+The filter accepts the following option:
+
+@table @option
+@item min_val
+Set the minimal luminance value. Default is @code{16}.
+@end table
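+
+For instance, a hedged way to print the per-frame bounding box in the
+log (the input name is a placeholder):
+@example
+ffmpeg -i input.mp4 -vf bbox=min_val=16 -an -f null -
+@end example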
+
+@section blackdetect
+
+Detect video intervals that are (almost) completely black. Can be
+useful to detect chapter transitions, commercials, or invalid
+recordings. Output lines contain the start time, end time and duration of
+the detected black interval, expressed in seconds.
+
+In order to display the output lines, you need to set the loglevel at
+least to the AV_LOG_INFO value.
+
+The filter accepts the following options:
+
+@table @option
+@item black_min_duration, d
+Set the minimum detected black duration expressed in seconds. It must
+be a non-negative floating point number.
+
+Default value is 2.0.
+
+@item picture_black_ratio_th, pic_th
+Set the threshold for considering a picture "black".
+Express the minimum value for the ratio:
+@example
+@var{nb_black_pixels} / @var{nb_pixels}
+@end example
+
+for which a picture is considered black.
+Default value is 0.98.
+
+@item pixel_black_th, pix_th
+Set the threshold for considering a pixel "black".
+
+The threshold expresses the maximum pixel luminance value for which a
+pixel is considered "black". The provided value is scaled according to
+the following equation:
+@example
+@var{absolute_threshold} = @var{luminance_minimum_value} + @var{pixel_black_th} * @var{luminance_range_size}
+@end example
+
+@var{luminance_range_size} and @var{luminance_minimum_value} depend on
+the input video format, the range is [0-255] for YUV full-range
+formats and [16-235] for YUV non full-range formats.
+
+Default value is 0.10.
+@end table
+
+The following example sets the maximum pixel threshold to the minimum
+value, and detects only black intervals of 2 or more seconds:
+@example
+blackdetect=d=2:pix_th=0.00
+@end example
+
+@section blackframe
+
+Detect frames that are (almost) completely black. Can be useful to
+detect chapter transitions or commercials. Output lines consist of
+the frame number of the detected frame, the percentage of blackness,
+the position in the file if known or -1 and the timestamp in seconds.
+
+In order to display the output lines, you need to set the loglevel at
+least to the AV_LOG_INFO value.
+
+It accepts the following parameters:
+
+@table @option
+
+@item amount
+The percentage of the pixels that have to be below the threshold; it defaults to
+@code{98}.
+
+@item threshold, thresh
+The threshold below which a pixel value is considered black; it defaults to
+@code{32}.
+
+@end table
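+
+A hedged example that reports frames which are at least 98% black,
+using the default pixel threshold (the input name is a placeholder):
+@example
+ffmpeg -i input.mp4 -vf blackframe=amount=98 -an -f null -
+@end example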
+
+@section blend
+
+Blend two video frames into each other.
+
+It takes two input streams and outputs one stream; the first input is the
+"top" layer and the second input is the "bottom" layer.
+Output terminates when the shortest input terminates.
+
+A description of the accepted options follows.
+
+@table @option
+@item c0_mode
+@item c1_mode
+@item c2_mode
+@item c3_mode
+@item all_mode
+Set blend mode for specific pixel component or all pixel components in case
+of @var{all_mode}. Default value is @code{normal}.
+
+Available values for component modes are:
+@table @samp
+@item addition
+@item and
+@item average
+@item burn
+@item darken
+@item difference
+@item divide
+@item dodge
+@item exclusion
+@item hardlight
+@item lighten
+@item multiply
+@item negation
+@item normal
+@item or
+@item overlay
+@item phoenix
+@item pinlight
+@item reflect
+@item screen
+@item softlight
+@item subtract
+@item vividlight
+@item xor
+@end table
+
+@item c0_opacity
+@item c1_opacity
+@item c2_opacity
+@item c3_opacity
+@item all_opacity
+Set blend opacity for specific pixel component or all pixel components in case
+of @var{all_opacity}. Only used in combination with pixel component blend modes.
+
+@item c0_expr
+@item c1_expr
+@item c2_expr
+@item c3_expr
+@item all_expr
+Set blend expression for specific pixel component or all pixel components in case
+of @var{all_expr}. Note that the related mode options are ignored if the
+corresponding expressions are set.
+
+The expressions can use the following variables:
+
+@table @option
+@item N
+The sequential number of the filtered frame, starting from @code{0}.
+
+@item X
+@item Y
+the coordinates of the current sample
+
+@item W
+@item H
+the width and height of currently filtered plane
+
+@item SW
+@item SH
+Width and height scale depending on the currently filtered plane. It is the
+ratio between the number of pixels of the current plane and that of the luma
+plane. E.g. for YUV4:2:0 the values are @code{1,1} for the luma plane, and
+@code{0.5,0.5} for the chroma planes.
+
+@item T
+Time of the current frame, expressed in seconds.
+
+@item TOP, A
+Value of pixel component at current location for first video frame (top layer).
+
+@item BOTTOM, B
+Value of pixel component at current location for second video frame (bottom layer).
+@end table
+
+@item shortest
+Force termination when the shortest input terminates. Default is @code{0}.
+@item repeatlast
+Continue applying the last bottom frame after the end of the stream. A value of
+@code{0} disables the filter after the last frame of the bottom layer is reached.
+Default is @code{1}.
+@end table
+
+@subsection Examples
+
+@itemize
+@item
+Apply transition from bottom layer to top layer in first 10 seconds:
+@example
+blend=all_expr='A*(if(gte(T,10),1,T/10))+B*(1-(if(gte(T,10),1,T/10)))'
+@end example
+
+@item
+Apply 1x1 checkerboard effect:
+@example
+blend=all_expr='if(eq(mod(X,2),mod(Y,2)),A,B)'
+@end example
+
+@item
+Apply uncover left effect:
+@example
+blend=all_expr='if(gte(N*SW+X,W),A,B)'
+@end example
+
+@item
+Apply uncover down effect:
+@example
+blend=all_expr='if(gte(Y-N*SH,0),A,B)'
+@end example
+
+@item
+Apply uncover up-left effect:
+@example
+blend=all_expr='if(gte(T*SH*40+Y,H)*gte((T*40*SW+X)*W/H,W),A,B)'
+@end example
+@end itemize
+
+@section boxblur
+
+Apply a boxblur algorithm to the input video.
+
+It accepts the following parameters:
+
+@table @option
+
+@item luma_radius, lr
+@item luma_power, lp
+@item chroma_radius, cr
+@item chroma_power, cp
+@item alpha_radius, ar
+@item alpha_power, ap
+
+@end table
+
+A description of the accepted options follows.
+
+@table @option
+@item luma_radius, lr
+@item chroma_radius, cr
+@item alpha_radius, ar
+Set an expression for the box radius in pixels used for blurring the
+corresponding input plane.
+
+The radius value must be a non-negative number, and must not be
+greater than the value of the expression @code{min(w,h)/2} for the
+luma and alpha planes, and of @code{min(cw,ch)/2} for the chroma
+planes.
+
+Default value for @option{luma_radius} is "2". If not specified,
+@option{chroma_radius} and @option{alpha_radius} default to the
+corresponding value set for @option{luma_radius}.
+
+The expressions can contain the following constants:
+@table @option
+@item w
+@item h
+The input width and height in pixels.
+
+@item cw
+@item ch
+The input chroma image width and height in pixels.
+
+@item hsub
+@item vsub
+The horizontal and vertical chroma subsample values. For example, for the
+pixel format "yuv422p", @var{hsub} is 2 and @var{vsub} is 1.
+@end table
+
+@item luma_power, lp
+@item chroma_power, cp
+@item alpha_power, ap
+Specify how many times the boxblur filter is applied to the
+corresponding plane.
+
+Default value for @option{luma_power} is 2. If not specified,
+@option{chroma_power} and @option{alpha_power} default to the
+corresponding value set for @option{luma_power}.
+
+A value of 0 will disable the effect.
+@end table
+
+@subsection Examples
+
+@itemize
+@item
+Apply a boxblur filter with the luma, chroma, and alpha radii
+set to 2:
+@example
+boxblur=luma_radius=2:luma_power=1
+boxblur=2:1
+@end example
+
+@item
+Set the luma radius to 2, and alpha and chroma radius to 0:
+@example
+boxblur=2:1:cr=0:ar=0
+@end example
+
+@item
+Set the luma and chroma radii to a fraction of the video dimension:
+@example
+boxblur=luma_radius=min(h\,w)/10:luma_power=1:chroma_radius=min(cw\,ch)/10:chroma_power=1
+@end example
+@end itemize
+
+@section colorbalance
+Modify intensity of primary colors (red, green and blue) of input frames.
+
+The filter allows an input frame to be adjusted in the shadows, midtones or highlights
+regions for the red-cyan, green-magenta or blue-yellow balance.
+
+A positive adjustment value shifts the balance towards the primary color, a negative
+value towards the complementary color.
+
+The filter accepts the following options:
+
+@table @option
+@item rs
+@item gs
+@item bs
+Adjust red, green and blue shadows (darkest pixels).
+
+@item rm
+@item gm
+@item bm
+Adjust red, green and blue midtones (medium pixels).
+
+@item rh
+@item gh
+@item bh
+Adjust red, green and blue highlights (brightest pixels).
+
+Allowed ranges for options are @code{[-1.0, 1.0]}. Defaults are @code{0}.
+@end table
+
+@subsection Examples
+
+@itemize
+@item
+Add red color cast to shadows:
+@example
+colorbalance=rs=.3
+@end example
+@end itemize
+
+@section colorchannelmixer
+
+Adjust video input frames by re-mixing color channels.
+
+This filter modifies a color channel by adding the values associated to
+the other channels of the same pixels. For example if the value to
+modify is red, the output value will be:
+@example
+@var{red}=@var{red}*@var{rr} + @var{blue}*@var{rb} + @var{green}*@var{rg} + @var{alpha}*@var{ra}
+@end example
+
+The filter accepts the following options:
+
+@table @option
+@item rr
+@item rg
+@item rb
+@item ra
+Adjust contribution of input red, green, blue and alpha channels for output red channel.
+Default is @code{1} for @var{rr}, and @code{0} for @var{rg}, @var{rb} and @var{ra}.
+
+@item gr
+@item gg
+@item gb
+@item ga
+Adjust contribution of input red, green, blue and alpha channels for output green channel.
+Default is @code{1} for @var{gg}, and @code{0} for @var{gr}, @var{gb} and @var{ga}.
+
+@item br
+@item bg
+@item bb
+@item ba
+Adjust contribution of input red, green, blue and alpha channels for output blue channel.
+Default is @code{1} for @var{bb}, and @code{0} for @var{br}, @var{bg} and @var{ba}.
+
+@item ar
+@item ag
+@item ab
+@item aa
+Adjust contribution of input red, green, blue and alpha channels for output alpha channel.
+Default is @code{1} for @var{aa}, and @code{0} for @var{ar}, @var{ag} and @var{ab}.
+
+Allowed ranges for options are @code{[-2.0, 2.0]}.
+@end table
+
+@subsection Examples
+
+@itemize
+@item
+Convert source to grayscale:
+@example
+colorchannelmixer=.3:.4:.3:0:.3:.4:.3:0:.3:.4:.3
+@end example
+@item
+Simulate sepia tones:
+@example
+colorchannelmixer=.393:.769:.189:0:.349:.686:.168:0:.272:.534:.131
+@end example
+@end itemize
+
+@section colormatrix
+
+Convert color matrix.
+
+The filter accepts the following options:
+
+@table @option
+@item src
+@item dst
+Specify the source and destination color matrix. Both values must be
+specified.
+
+The accepted values are:
+@table @samp
+@item bt709
+BT.709
+
+@item bt601
+BT.601
+
+@item smpte240m
+SMPTE-240M
+
+@item fcc
+FCC
+@end table
+@end table
+
+For example to convert from BT.601 to SMPTE-240M, use the command:
+@example
+colormatrix=bt601:smpte240m
+@end example
+
+@section copy
+
+Copy the input source unchanged to the output. This is mainly useful for
+testing purposes.
+
+@section crop
+
+Crop the input video to given dimensions.
+
+It accepts the following parameters:
+
+@table @option
+@item w, out_w
+The width of the output video. It defaults to @code{iw}.
+This expression is evaluated only once during the filter
+configuration.
+
+@item h, out_h
+The height of the output video. It defaults to @code{ih}.
+This expression is evaluated only once during the filter
+configuration.
+
+@item x
+The horizontal position, in the input video, of the left edge of the output
+video. It defaults to @code{(in_w-out_w)/2}.
+This expression is evaluated per-frame.
+
+@item y
+The vertical position, in the input video, of the top edge of the output video.
+It defaults to @code{(in_h-out_h)/2}.
+This expression is evaluated per-frame.
+
+@item keep_aspect
+If set to 1, the output display aspect ratio will be forced to be the same
+as the input, by changing the output sample aspect ratio. It defaults to 0.
+@end table
+
+The @var{out_w}, @var{out_h}, @var{x}, @var{y} parameters are
+expressions containing the following constants:
+
+@table @option
+@item x
+@item y
+The computed values for @var{x} and @var{y}. They are evaluated for
+each new frame.
+
+@item in_w
+@item in_h
+The input width and height.
+
+@item iw
+@item ih
+These are the same as @var{in_w} and @var{in_h}.
+
+@item out_w
+@item out_h
+The output (cropped) width and height.
+
+@item ow
+@item oh
+These are the same as @var{out_w} and @var{out_h}.
+
+@item a
+same as @var{iw} / @var{ih}
+
+@item sar
+input sample aspect ratio
+
+@item dar
+input display aspect ratio, it is the same as (@var{iw} / @var{ih}) * @var{sar}
+
+@item hsub
+@item vsub
+horizontal and vertical chroma subsample values. For example for the
+pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
+
+@item n
+The number of the input frame, starting from 0.
+
+@item pos
+the position in the file of the input frame, NAN if unknown
+
+@item t
+The timestamp expressed in seconds. It's NAN if the input timestamp is unknown.
+
+@end table
+
+The expression for @var{out_w} may depend on the value of @var{out_h},
+and the expression for @var{out_h} may depend on @var{out_w}, but they
+cannot depend on @var{x} and @var{y}, as @var{x} and @var{y} are
+evaluated after @var{out_w} and @var{out_h}.
+
+The @var{x} and @var{y} parameters specify the expressions for the
+position of the top-left corner of the output (non-cropped) area. They
+are evaluated for each frame. If the evaluated value is not valid, it
+is approximated to the nearest valid value.
+
+The expression for @var{x} may depend on @var{y}, and the expression
+for @var{y} may depend on @var{x}.
+
+@subsection Examples
+
+@itemize
+@item
+Crop area with size 100x100 at position (12,34).
+@example
+crop=100:100:12:34
+@end example
+
+Using named options, the example above becomes:
+@example
+crop=w=100:h=100:x=12:y=34
+@end example
+
+@item
+Crop the central input area with size 100x100:
+@example
+crop=100:100
+@end example
+
+@item
+Crop the central input area with size 2/3 of the input video:
+@example
+crop=2/3*in_w:2/3*in_h
+@end example
+
+@item
+Crop the input video central square:
+@example
+crop=out_w=in_h
+crop=in_h
+@end example
+
+@item
+Delimit the rectangle with the top-left corner placed at position
+100:100 and the right-bottom corner corresponding to the right-bottom
+corner of the input image.
+@example
+crop=in_w-100:in_h-100:100:100
+@end example
+
+@item
+Crop 10 pixels from the left and right borders, and 20 pixels from
+the top and bottom borders
+@example
+crop=in_w-2*10:in_h-2*20
+@end example
+
+@item
+Keep only the bottom right quarter of the input image:
+@example
+crop=in_w/2:in_h/2:in_w/2:in_h/2
+@end example
+
+@item
+Crop height for getting Greek harmony:
+@example
+crop=in_w:1/PHI*in_w
+@end example
+
+@item
+Apply a trembling effect:
+@example
+crop=in_w/2:in_h/2:(in_w-out_w)/2+((in_w-out_w)/2)*sin(n/10):(in_h-out_h)/2 +((in_h-out_h)/2)*sin(n/7)
+@end example
+
+@item
+Apply erratic camera effect depending on timestamp:
+@example
+crop=in_w/2:in_h/2:(in_w-out_w)/2+((in_w-out_w)/2)*sin(t*10):(in_h-out_h)/2 +((in_h-out_h)/2)*sin(t*13)
+@end example
+
+@item
+Set x depending on the value of y:
+@example
+crop=in_w/2:in_h/2:y:10+10*sin(n/10)
+@end example
+@end itemize
+
+@section cropdetect
+
+Auto-detect the crop size.
+
+It calculates the necessary cropping parameters and prints the
+recommended parameters via the logging system. The detected dimensions
+correspond to the non-black area of the input video.
+
+It accepts the following parameters:
+
+@table @option
+
+@item limit
+Set higher black value threshold, which can be optionally specified
+from nothing (0) to everything (255). An intensity value greater
+than the set value is considered non-black. It defaults to 24.
+
+@item round
+The value which the width/height should be divisible by. It defaults to
+16. The offset is automatically adjusted to center the video. Use 2 to
+get only even dimensions (needed for 4:2:2 video). 16 is best when
+encoding to most video codecs.
+
+@item reset_count, reset
+Set the counter that determines after how many frames cropdetect will
+reset the previously detected largest video area and start over to
+detect the current optimal crop area. Default value is 0.
+
+This can be useful when channel logos distort the video area. 0
+indicates 'never reset', and returns the largest area encountered during
+playback.
+@end table
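+
+As a hedged example (the input name is a placeholder), let the filter
+analyze the first seconds of a video and print the suggested
+@code{crop=...} parameters in the log:
+@example
+ffmpeg -i input.mp4 -t 10 -vf cropdetect=limit=24:round=2 -f null -
+@end example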
+
+@anchor{curves}
+@section curves
+
+Apply color adjustments using curves.
+
+This filter is similar to the Adobe Photoshop and GIMP curves tools. Each
+component (red, green and blue) has its values defined by @var{N} key points
+joined to each other by a smooth curve. The x-axis represents the pixel
+values from the input frame, and the y-axis the new pixel values to be set for
+the output frame.
+
+By default, a component curve is defined by the two points @var{(0;0)} and
+@var{(1;1)}. This creates a straight line where each original pixel value is
+"adjusted" to its own value, which means no change to the image.
+
+The filter allows you to redefine these two points and add some more. A new
+curve (using a natural cubic spline interpolation) will be defined to pass
+smoothly through all these new coordinates. The newly defined points need to be
+strictly increasing over the x-axis, and their @var{x} and @var{y} values must
+be in the @var{[0;1]} interval. If the computed curves happen to go outside
+the vector spaces, the values will be clipped accordingly.
+
+If there is no key point defined in @code{x=0}, the filter will automatically
+insert a @var{(0;0)} point. In the same way, if there is no key point defined
+in @code{x=1}, the filter will automatically insert a @var{(1;1)} point.
+
+The filter accepts the following options:
+
+@table @option
+@item preset
+Select one of the available color presets. This option can be used in addition
+to the @option{r}, @option{g}, @option{b} parameters; in this case, the latter
+options take priority over the preset values.
+Available presets are:
+@table @samp
+@item none
+@item color_negative
+@item cross_process
+@item darker
+@item increase_contrast
+@item lighter
+@item linear_contrast
+@item medium_contrast
+@item negative
+@item strong_contrast
+@item vintage
+@end table
+Default is @code{none}.
+@item master, m
+Set the master key points. These points will define a second pass mapping. It
+is sometimes called a "luminance" or "value" mapping. It can be used with
+@option{r}, @option{g}, @option{b} or @option{all} since it acts like a
+post-processing LUT.
+@item red, r
+Set the key points for the red component.
+@item green, g
+Set the key points for the green component.
+@item blue, b
+Set the key points for the blue component.
+@item all
+Set the key points for all components (not including master).
+Can be used in addition to the other key points component
+options. In this case, the unset component(s) will fallback on this
+@option{all} setting.
+@item psfile
+Specify a Photoshop curves file (@code{.asv}) to import the settings from.
+@end table
+
+To avoid some filtergraph syntax conflicts, each key points list needs to be
+defined using the following syntax: @code{x0/y0 x1/y1 x2/y2 ...}.
+
+@subsection Examples
+
+@itemize
+@item
+Increase slightly the middle level of blue:
+@example
+curves=blue='0.5/0.58'
+@end example
+
+@item
+Vintage effect:
+@example
+curves=r='0/0.11 .42/.51 1/0.95':g='0.50/0.48':b='0/0.22 .49/.44 1/0.8'
+@end example
+Here we obtain the following coordinates for each component:
+@table @var
+@item red
+@code{(0;0.11) (0.42;0.51) (1;0.95)}
+@item green
+@code{(0;0) (0.50;0.48) (1;1)}
+@item blue
+@code{(0;0.22) (0.49;0.44) (1;0.80)}
+@end table
+
+@item
+The previous example can also be achieved with the associated built-in preset:
+@example
+curves=preset=vintage
+@end example
+
+@item
+Or simply:
+@example
+curves=vintage
+@end example
+
+@item
+Use a Photoshop preset and redefine the points of the green component:
+@example
+curves=psfile='MyCurvesPresets/purple.asv':green='0.45/0.53'
+@end example
+@end itemize
+
+@section dctdnoiz
+
+Denoise frames using 2D DCT (frequency domain filtering).
+
+This filter is not designed for real time and can be extremely slow.
+
+The filter accepts the following options:
+
+@table @option
+@item sigma, s
+Set the noise sigma constant.
+
+This @var{sigma} defines a hard threshold of @code{3 * sigma}; every DCT
+coefficient (absolute value) below this threshold will be dropped.
+
+If you need more advanced filtering, see @option{expr}.
+
+Default is @code{0}.
+
+@item overlap
+Set the number of overlapping pixels for each block. Each block is of size
+@code{16x16}. Since the filter can be slow, you may want to reduce this value,
+at the cost of a less effective filter and the risk of various artefacts.
+
+If the overlapping value doesn't allow processing the whole input width or
+height, a warning will be displayed and the corresponding borders won't be
+denoised.
+
+Default value is @code{15}.
+
+@item expr, e
+Set the coefficient factor expression.
+
+For each coefficient of a DCT block, this expression will be evaluated as a
+multiplier value for the coefficient.
+
+If this option is set, the @option{sigma} option will be ignored.
+
+The absolute value of the coefficient can be accessed through the @var{c}
+variable.
+@end table
+
+@subsection Examples
+
+Apply a denoise with a @option{sigma} of @code{4.5}:
+@example
+dctdnoiz=4.5
+@end example
+
+The same operation can be achieved using the expression system:
+@example
+dctdnoiz=e='gte(c, 4.5*3)'
+@end example
+
+@anchor{decimate}
+@section decimate
+
+Drop duplicated frames at regular intervals.
+
+The filter accepts the following options:
+
+@table @option
+@item cycle
+Set the number of frames from which one will be dropped. Setting this to
+@var{N} means one frame in every batch of @var{N} frames will be dropped.
+Default is @code{5}.
+
+@item dupthresh
+Set the threshold for duplicate detection. If the difference metric for a frame
+is less than or equal to this value, then it is declared a duplicate. Default
+is @code{1.1}.
+
+@item scthresh
+Set scene change threshold. Default is @code{15}.
+
+@item blockx
+@item blocky
+Set the size of the x and y-axis blocks used during metric calculations.
+Larger blocks give better noise suppression, but also give worse detection of
+small movements. Must be a power of two. Default is @code{32}.
+
+@item ppsrc
+Mark main input as a pre-processed input and activate clean source input
+stream. This allows the input to be pre-processed with various filters to help
+the metrics calculation while keeping the frame selection lossless. When set to
+@code{1}, the first stream is for the pre-processed input, and the second
+stream is the clean source from where the kept frames are chosen. Default is
+@code{0}.
+
+@item chroma
+Set whether or not chroma is considered in the metric calculations. Default is
+@code{1}.
+@end table
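+
+A common, hedged use is inverse telecine in combination with the
+@code{fieldmatch} filter (the file names are placeholders):
+@example
+ffmpeg -i telecined.mp4 -vf fieldmatch,decimate output.mp4
+@end example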
+
+@section dejudder
+
+Remove judder produced by partially interlaced telecined content.
+
+Judder can be introduced, for instance, by the @ref{pullup} filter. If the
+original source was partially telecined content then the output of
+@code{pullup,dejudder} will have a variable frame rate. This may change the
+recorded frame rate of the container. Aside from that change, this filter
+will not affect constant frame rate video.
+
+The option available in this filter is:
+@table @option
+
+@item cycle
+Specify the length of the window over which the judder repeats.
+
+Accepts any integer greater than 1. Useful values are:
+@table @samp
+
+@item 4
+If the original was telecined from 24 to 30 fps (Film to NTSC).
+
+@item 5
+If the original was telecined from 25 to 30 fps (PAL to NTSC).
+
+@item 20
+If a mixture of the two.
+@end table
+
+The default is @samp{4}.
+@end table
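+
+A hedged example of the @code{pullup,dejudder} chain mentioned above
+(the file names are placeholders):
+@example
+ffmpeg -i telecined.mp4 -vf pullup,dejudder output.mp4
+@end example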
+
+@section delogo
+
+Suppress a TV station logo by a simple interpolation of the surrounding
+pixels. Just set a rectangle covering the logo and watch it disappear
+(and sometimes something even uglier appear - your mileage may vary).
+
+It accepts the following parameters:
+@table @option
+
+@item x
+@item y
+Specify the top left corner coordinates of the logo. They must be
+specified.
+
+@item w
+@item h
+Specify the width and height of the logo to clear. They must be
+specified.
+
+@item band, t
+Specify the thickness of the fuzzy edge of the rectangle (added to
+@var{w} and @var{h}). The default value is 4.
+
+@item show
+When set to 1, a green rectangle is drawn on the screen to simplify
+finding the right @var{x}, @var{y}, @var{w}, and @var{h} parameters.
+The default value is 0.
+
+The rectangle is drawn on the outermost pixels which will be (partly)
+replaced with interpolated values. The values of the next pixels
+immediately outside this rectangle in each direction will be used to
+compute the interpolated pixel values inside the rectangle.
+
+@end table
+
+@subsection Examples
+
+@itemize
+@item
+Set a rectangle covering the area with top left corner coordinates 0,0
+and size 100x77, and a band of size 10:
+@example
+delogo=x=0:y=0:w=100:h=77:band=10
+@end example
+
+@end itemize
+
+@section deshake
+
+Attempt to fix small changes in horizontal and/or vertical shift. This
+filter helps remove camera shake from hand-holding a camera, bumping a
+tripod, moving on a vehicle, etc.
+
+The filter accepts the following options:
+
+@table @option
+
+@item x
+@item y
+@item w
+@item h
+Specify a rectangular area in which to limit the search for motion
+vectors.
+If desired the search for motion vectors can be limited to a
+rectangular area of the frame defined by its top left corner, width
+and height. These parameters have the same meaning as in the drawbox
+filter, which can be used to visualise the position of the bounding
+box.
+
+This is useful when simultaneous movement of subjects within the frame
+might be confused for camera motion by the motion vector search.
+
+If any or all of @var{x}, @var{y}, @var{w} and @var{h} are set to -1
+then the full frame is used. This allows later options to be set
+without specifying the bounding box for the motion vector search.
+
+Default - search the whole frame.
+
+@item rx
+@item ry
+Specify the maximum extent of movement in x and y directions in the
+range 0-64 pixels. Default 16.
+
+@item edge
+Specify how to generate pixels to fill blanks at the edge of the
+frame. Available values are:
+@table @samp
+@item blank, 0
+Fill zeroes at blank locations
+@item original, 1
+Original image at blank locations
+@item clamp, 2
+Extruded edge value at blank locations
+@item mirror, 3
+Mirrored edge at blank locations
+@end table
+Default value is @samp{mirror}.
+
+@item blocksize
+Specify the blocksize to use for motion search. Range 4-128 pixels,
+default 8.
+
+@item contrast
+Specify the contrast threshold for blocks. Only blocks with more than
+the specified contrast (difference between darkest and lightest
+pixels) will be considered. Range 1-255, default 125.
+
+@item search
+Specify the search strategy. Available values are:
+@table @samp
+@item exhaustive, 0
+Set exhaustive search
+@item less, 1
+Set less exhaustive search.
+@end table
+Default value is @samp{exhaustive}.
+
+@item filename
+If set then a detailed log of the motion search is written to the
+specified file.
+
+@item opencl
+If set to 1, use OpenCL capabilities; this is only available if
+FFmpeg was configured with @code{--enable-opencl}. Default value is 0.
+
+@end table
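+
+A hedged example that searches the whole frame and allows a larger
+movement range (the file names are placeholders):
+@example
+ffmpeg -i shaky.mp4 -vf deshake=rx=32:ry=32:edge=mirror output.mp4
+@end example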
+
+@section drawbox
+
+Draw a colored box on the input image.
+
+It accepts the following parameters:
+
+@table @option
+@item x
+@item y
+The expressions which specify the top left corner coordinates of the box. They default to 0.
+
+@item width, w
+@item height, h
+The expressions which specify the width and height of the box; if 0 they are interpreted as
+the input width and height. They default to 0.
+
+@item color, c
+Specify the color of the box to write. For the general syntax of this option,
+check the "Color" section in the ffmpeg-utils manual. If the special
+value @code{invert} is used, the box edge color is the same as the
+video with inverted luma.
+
+@item thickness, t
+The expression which sets the thickness of the box edge. Default value is @code{3}.
+
+See below for the list of accepted constants.
+@end table
+
+The parameters for @var{x}, @var{y}, @var{w}, @var{h} and @var{t} are expressions containing the
+following constants:
+
+@table @option
+@item dar
+The input display aspect ratio, it is the same as (@var{w} / @var{h}) * @var{sar}.
+
+@item hsub
+@item vsub
+horizontal and vertical chroma subsample values. For example for the
+pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
+
+@item in_h, ih
+@item in_w, iw
+The input width and height.
+
+@item sar
+The input sample aspect ratio.
+
+@item x
+@item y
+The x and y offset coordinates where the box is drawn.
+
+@item w
+@item h
+The width and height of the drawn box.
+
+@item t
+The thickness of the drawn box.
+
+These constants allow the @var{x}, @var{y}, @var{w}, @var{h} and @var{t} expressions to refer to
+each other, so you may for example specify @code{y=x/dar} or @code{h=w/dar}.
+
+@end table
+
+@subsection Examples
+
+@itemize
+@item
+Draw a black box around the edge of the input image:
+@example
+drawbox
+@end example
+
+@item
+Draw a box with color red and an opacity of 50%:
+@example
+drawbox=10:20:200:60:red@@0.5
+@end example
+
+The previous example can be specified as:
+@example
+drawbox=x=10:y=20:w=200:h=60:color=red@@0.5
+@end example
+
+@item
+Fill the box with pink color:
+@example
+drawbox=x=10:y=10:w=100:h=100:color=pink@@0.5:t=max
+@end example
+
+@item
+Draw a 2-pixel red 2.40:1 mask:
+@example
+drawbox=x=-t:y=0.5*(ih-iw/2.4)-t:w=iw+t*2:h=iw/2.4+t*2:t=2:c=red
+@end example
+@end itemize
+
+@section drawgrid
+
+Draw a grid on the input image.
+
+It accepts the following parameters:
+
+@table @option
+@item x
+@item y
+The expressions which specify the coordinates of some point of grid intersection (meant to configure offset). Both default to 0.
+
+@item width, w
+@item height, h
+The expressions which specify the width and height of the grid cell; if 0 they are interpreted as the
+input width and height, respectively, minus @code{thickness}, so the image gets
+framed. They default to 0.
+
+@item color, c
+Specify the color of the grid. For the general syntax of this option,
+check the "Color" section in the ffmpeg-utils manual. If the special
+value @code{invert} is used, the grid color is the same as the
+video with inverted luma.
+
+@item thickness, t
+The expression which sets the thickness of the grid line. Default value is @code{1}.
+
+See below for the list of accepted constants.
+@end table
+
+The parameters for @var{x}, @var{y}, @var{w}, @var{h} and @var{t} are expressions containing the
+following constants:
+
+@table @option
+@item dar
+The input display aspect ratio, it is the same as (@var{w} / @var{h}) * @var{sar}.
+
+@item hsub
+@item vsub
+horizontal and vertical chroma subsample values. For example for the
+pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
+
+@item in_h, ih
+@item in_w, iw
+The input grid cell width and height.
+
+@item sar
+The input sample aspect ratio.
+
+@item x
+@item y
+The x and y coordinates of some point of grid intersection (meant to configure offset).
+
+@item w
+@item h
+The width and height of the drawn cell.
+
+@item t
+The thickness of the drawn cell.
+
+These constants allow the @var{x}, @var{y}, @var{w}, @var{h} and @var{t} expressions to refer to
+each other, so you may for example specify @code{y=x/dar} or @code{h=w/dar}.
+
+@end table
+
+@subsection Examples
+
+@itemize
+@item
+Draw a grid with cell 100x100 pixels, thickness 2 pixels, with color red and an opacity of 50%:
+@example
+drawgrid=width=100:height=100:thickness=2:color=red@@0.5
+@end example
+
+@item
+Draw a white 3x3 grid with an opacity of 50%:
+@example
+drawgrid=w=iw/3:h=ih/3:t=2:c=white@@0.5
+@end example
+@end itemize
+
+@anchor{drawtext}
+@section drawtext
+
+Draw a text string or text from a specified file on top of a video, using the
+libfreetype library.
+
+To enable compilation of this filter, you need to configure FFmpeg with
+@code{--enable-libfreetype}.
+
+@subsection Syntax
+
+It accepts the following parameters:
+
+@table @option
+
+@item box
+Used to draw a box around text using the background color.
+The value must be either 1 (enable) or 0 (disable).
+The default value of @var{box} is 0.
+
+@item boxcolor
+The color to be used for drawing box around text. For the syntax of this
+option, check the "Color" section in the ffmpeg-utils manual.
+
+The default value of @var{boxcolor} is "white".
+
+@item borderw
+Set the width of the border to be drawn around the text using @var{bordercolor}.
+The default value of @var{borderw} is 0.
+
+@item bordercolor
+Set the color to be used for drawing border around text. For the syntax of this
+option, check the "Color" section in the ffmpeg-utils manual.
+
+The default value of @var{bordercolor} is "black".
+
+@item expansion
+Select how the @var{text} is expanded. Can be either @code{none},
+@code{strftime} (deprecated) or
+@code{normal} (default). See the @ref{drawtext_expansion, Text expansion} section
+below for details.
+
+@item fix_bounds
+If true, check and fix text coordinates to avoid clipping.
+
+@item fontcolor
+The color to be used for drawing fonts. For the syntax of this option, check
+the "Color" section in the ffmpeg-utils manual.
+
+The default value of @var{fontcolor} is "black".
+
+@item fontfile
+The font file to be used for drawing text. The path must be included.
+This parameter is mandatory.
+
+@item fontsize
+The font size to be used for drawing text.
+The default value of @var{fontsize} is 16.
+
+@item ft_load_flags
+The flags to be used for loading the fonts.
+
+The flags map the corresponding flags supported by libfreetype, and are
+a combination of the following values:
+@table @var
+@item default
+@item no_scale
+@item no_hinting
+@item render
+@item no_bitmap
+@item vertical_layout
+@item force_autohint
+@item crop_bitmap
+@item pedantic
+@item ignore_global_advance_width
+@item no_recurse
+@item ignore_transform
+@item monochrome
+@item linear_design
+@item no_autohint
+@end table
+
+Default value is "default".
+
+For more information consult the documentation for the FT_LOAD_*
+libfreetype flags.
+
+@item shadowcolor
+The color to be used for drawing a shadow behind the drawn text. For the
+syntax of this option, check the "Color" section in the ffmpeg-utils manual.
+
+The default value of @var{shadowcolor} is "black".
+
+@item shadowx
+@item shadowy
+The x and y offsets for the text shadow position with respect to the
+position of the text. They can be either positive or negative
+values. The default value for both is "0".
+
+@item start_number
+The starting frame number for the n/frame_num variable. The default value
+is "0".
+
+@item tabsize
+The size in number of spaces to use for rendering the tab.
+Default value is 4.
+
+@item timecode
+Set the initial timecode representation in "hh:mm:ss[:;.]ff"
+format. It can be used with or without text parameter. @var{timecode_rate}
+option must be specified.
+
+@item timecode_rate, rate, r
+Set the timecode frame rate (timecode only).
+
+@item text
+The text string to be drawn. The text must be a sequence of UTF-8
+encoded characters.
+This parameter is mandatory if no file is specified with the parameter
+@var{textfile}.
+
+@item textfile
+A text file containing text to be drawn. The text must be a sequence
+of UTF-8 encoded characters.
+
+This parameter is mandatory if no text string is specified with the
+parameter @var{text}.
+
+If both @var{text} and @var{textfile} are specified, an error is thrown.
+
+@item reload
+If set to 1, the @var{textfile} will be reloaded before each frame.
+Be sure to update it atomically, or it may be read partially, or even fail.
+
+@item x
+@item y
+The expressions which specify the offsets where text will be drawn
+within the video frame. They are relative to the top/left border of the
+output image.
+
+The default value of @var{x} and @var{y} is "0".
+
+See below for the list of accepted constants and functions.
+@end table
+
+The parameters for @var{x} and @var{y} are expressions containing the
+following constants and functions:
+
+@table @option
+@item dar
+input display aspect ratio, it is the same as (@var{w} / @var{h}) * @var{sar}
+
+@item hsub
+@item vsub
+horizontal and vertical chroma subsample values. For example for the
+pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
+
+@item line_h, lh
+the height of each text line
+
+@item main_h, h, H
+the input height
+
+@item main_w, w, W
+the input width
+
+@item max_glyph_a, ascent
+the maximum distance from the baseline to the highest/upper grid
+coordinate used to place a glyph outline point, for all the rendered
+glyphs.
+It is a positive value, due to the grid's orientation with the Y axis
+upwards.
+
+@item max_glyph_d, descent
+the maximum distance from the baseline to the lowest grid coordinate
+used to place a glyph outline point, for all the rendered glyphs.
+This is a negative value, due to the grid's orientation, with the Y axis
+upwards.
+
+@item max_glyph_h
+maximum glyph height, that is the maximum height for all the glyphs
+contained in the rendered text, it is equivalent to @var{ascent} -
+@var{descent}.
+
+@item max_glyph_w
+maximum glyph width, that is the maximum width for all the glyphs
+contained in the rendered text
+
+@item n
+the number of the input frame, starting from 0
+
+@item rand(min, max)
+return a random number included between @var{min} and @var{max}
+
+@item sar
+The input sample aspect ratio.
+
+@item t
+timestamp expressed in seconds, NAN if the input timestamp is unknown
+
+@item text_h, th
+the height of the rendered text
+
+@item text_w, tw
+the width of the rendered text
+
+@item x
+@item y
+the x and y offset coordinates where the text is drawn.
+
+These parameters allow the @var{x} and @var{y} expressions to refer
+each other, so you can for example specify @code{y=x/dar}.
+@end table
+
+If libavfilter was built with @code{--enable-fontconfig}, then
+@option{fontfile} can be a fontconfig pattern or omitted.
+
+@anchor{drawtext_expansion}
+@subsection Text expansion
+
+If @option{expansion} is set to @code{strftime},
+the filter recognizes strftime() sequences in the provided text and
+expands them accordingly. Check the documentation of strftime(). This
+feature is deprecated.
+
+If @option{expansion} is set to @code{none}, the text is printed verbatim.
+
+If @option{expansion} is set to @code{normal} (which is the default),
+the following expansion mechanism is used.
+
+The backslash character '\', followed by any character, always expands to
+the second character.
+
+Sequences of the form @code{%@{...@}} are expanded. The text between the
+braces is a function name, possibly followed by arguments separated by ':'.
+If the arguments contain special characters or delimiters (':' or '@}'),
+they should be escaped.
+
+Note that they probably must also be escaped as the value for the
+@option{text} option in the filter argument string and as the filter
+argument in the filtergraph description, and possibly also for the shell,
+that makes up to four levels of escaping; using a text file avoids these
+problems.
+
+The following functions are available:
+
+@table @command
+
+@item expr, e
+The expression evaluation result.
+
+It must take one argument specifying the expression to be evaluated,
+which accepts the same constants and functions as the @var{x} and
+@var{y} values. Note that not all constants should be used, for
+example the text size is not known when evaluating the expression, so
+the constants @var{text_w} and @var{text_h} will have an undefined
+value.
+
+@item gmtime
+The time at which the filter is running, expressed in UTC.
+It can accept an argument: a strftime() format string.
+
+@item localtime
+The time at which the filter is running, expressed in the local time zone.
+It can accept an argument: a strftime() format string.
+
+@item metadata
+Frame metadata. It must take one argument specifying metadata key.
+
+@item n, frame_num
+The frame number, starting from 0.
+
+@item pict_type
+A 1 character description of the current picture type.
+
+@item pts
+The timestamp of the current frame, in seconds, with microsecond accuracy.
+
+@end table
+
+@subsection Examples
+
+@itemize
+@item
+Draw "Test Text" with font FreeSerif, using the default values for the
+optional parameters.
+
+@example
+drawtext="fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf: text='Test Text'"
+@end example
+
+@item
+Draw 'Test Text' with font FreeSerif of size 24 at position x=100
+and y=50 (counting from the top-left corner of the screen), text is
+yellow with a red box around it. Both the text and the box have an
+opacity of 20%.
+
+@example
+drawtext="fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf: text='Test Text':\
+ x=100: y=50: fontsize=24: fontcolor=yellow@@0.2: box=1: boxcolor=red@@0.2"
+@end example
+
+Note that the double quotes are not necessary if spaces are not used
+within the parameter list.
+
+@item
+Show the text at the center of the video frame:
+@example
+drawtext="fontsize=30:fontfile=FreeSerif.ttf:text='hello world':x=(w-text_w)/2:y=(h-text_h-line_h)/2"
+@end example
+
+@item
+Show a text line sliding from right to left in the last row of the video
+frame. The file @file{LONG_LINE} is assumed to contain a single line
+with no newlines.
+@example
+drawtext="fontsize=15:fontfile=FreeSerif.ttf:text=LONG_LINE:y=h-line_h:x=-50*t"
+@end example
+
+@item
+Show the content of file @file{CREDITS} off the bottom of the frame and scroll up.
+@example
+drawtext="fontsize=20:fontfile=FreeSerif.ttf:textfile=CREDITS:y=h-20*t"
+@end example
+
+@item
+Draw a single green letter "g", at the center of the input video.
+The glyph baseline is placed at half screen height.
+@example
+drawtext="fontsize=60:fontfile=FreeSerif.ttf:fontcolor=green:text=g:x=(w-max_glyph_w)/2:y=h/2-ascent"
+@end example
+
+@item
+Show text for 1 second every 3 seconds:
+@example
+drawtext="fontfile=FreeSerif.ttf:fontcolor=white:x=100:y=x/dar:enable=lt(mod(t\,3)\,1):text='blink'"
+@end example
+
+@item
+Use fontconfig to set the font. Note that the colons need to be escaped.
+@example
+drawtext='fontfile=Linux Libertine O-40\:style=Semibold:text=FFmpeg'
+@end example
+
+@item
+Print the date of a real-time encoding (see strftime(3)):
+@example
+drawtext='fontfile=FreeSans.ttf:text=%@{localtime:%a %b %d %Y@}'
+@end example
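+
+@item
+As a further illustrative sketch (the font file and position values are
+placeholders), print the frame number and timestamp of each frame using the
+expansion functions described above:
+@example
+drawtext="fontfile=FreeSerif.ttf:text='%@{n@} %@{pts@}':x=10:y=10"
+@end example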
+
+@end itemize
+
+For more information about libfreetype, check:
+@url{http://www.freetype.org/}.
+
+For more information about fontconfig, check:
+@url{http://freedesktop.org/software/fontconfig/fontconfig-user.html}.
+
+@section edgedetect
+
+Detect and draw edges. The filter uses the Canny Edge Detection algorithm.
+
+The filter accepts the following options:
+
+@table @option
+@item low
+@item high
+Set low and high threshold values used by the Canny thresholding
+algorithm.
+
+The high threshold selects the "strong" edge pixels, which are then
+connected through 8-connectivity with the "weak" edge pixels selected
+by the low threshold.
+
+@var{low} and @var{high} threshold values must be chosen in the range
+[0,1], and @var{low} should be less than or equal to @var{high}.
+
+Default value for @var{low} is @code{20/255}, and default value for @var{high}
+is @code{50/255}.
+@end table
+
+Example:
+@example
+edgedetect=low=0.1:high=0.4
+@end example
+
+@section extractplanes
+
+Extract color channel components from input video stream into
+separate grayscale video streams.
+
+The filter accepts the following option:
+
+@table @option
+@item planes
+Set plane(s) to extract.
+
+Available values for planes are:
+@table @samp
+@item y
+@item u
+@item v
+@item a
+@item r
+@item g
+@item b
+@end table
+
+Choosing planes not available in the input will result in an error.
+That means you cannot select @code{r}, @code{g}, @code{b} planes
+with @code{y}, @code{u}, @code{v} planes at the same time.
+@end table
+
+@subsection Examples
+
+@itemize
+@item
+Extract the luma, u and v color channel components from the input video frame
+into 3 grayscale outputs:
+@example
+ffmpeg -i video.avi -filter_complex 'extractplanes=y+u+v[y][u][v]' -map '[y]' y.avi -map '[u]' u.avi -map '[v]' v.avi
+@end example
+@end itemize
+
+@section elbg
+
+Apply a posterize effect using the ELBG (Enhanced LBG) algorithm.
+
+For each input image, the filter will compute the optimal mapping from
+the input to the output given the codebook length, that is the number
+of distinct output colors.
+
+This filter accepts the following options.
+
+@table @option
+@item codebook_length, l
+Set codebook length. The value must be a positive integer, and
+represents the number of distinct output colors. Default value is 256.
+
+@item nb_steps, n
+Set the maximum number of iterations to apply for computing the optimal
+mapping. The higher the value the better the result and the higher the
+computation time. Default value is 1.
+
+@item seed, s
+Set a random seed; it must be an integer between 0 and
+UINT32_MAX. If not specified, or if explicitly set to -1, the filter
+will try to use a good random seed on a best effort basis.
+@end table
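+
+For example, the following invocation (the values are chosen only for
+illustration) posterizes the input to 8 distinct colors using 20 iterations:
+@example
+elbg=l=8:n=20
+@end example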
+
+@section fade
+
+Apply a fade-in/out effect to the input video.
+
+It accepts the following parameters:
+
+@table @option
+@item type, t
+The effect type can be either "in" for a fade-in, or "out" for a fade-out
+effect.
+Default is @code{in}.
+
+@item start_frame, s
+Specify the number of the frame to start applying the fade
+effect at. Default is 0.
+
+@item nb_frames, n
+The number of frames that the fade effect lasts. At the end of the
+fade-in effect, the output video will have the same intensity as the input video.
+At the end of the fade-out transition, the output video will be filled with the
+selected @option{color}.
+Default is 25.
+
+@item alpha
+If set to 1, fade only alpha channel, if one exists on the input.
+Default value is 0.
+
+@item start_time, st
+Specify the timestamp (in seconds) of the frame to start to apply the fade
+effect. If both start_frame and start_time are specified, the fade will start at
+whichever comes last. Default is 0.
+
+@item duration, d
+The number of seconds for which the fade effect has to last. At the end of the
+fade-in effect the output video will have the same intensity as the input video,
+at the end of the fade-out transition the output video will be filled with the
+selected @option{color}.
+If both duration and nb_frames are specified, duration is used. Default is 0.
+
+@item color, c
+Specify the color of the fade. Default is "black".
+@end table
+
+@subsection Examples
+
+@itemize
+@item
+Fade in the first 30 frames of video:
+@example
+fade=in:0:30
+@end example
+
+The command above is equivalent to:
+@example
+fade=t=in:s=0:n=30
+@end example
+
+@item
+Fade out the last 45 frames of a 200-frame video:
+@example
+fade=out:155:45
+fade=type=out:start_frame=155:nb_frames=45
+@end example
+
+@item
+Fade in the first 25 frames and fade out the last 25 frames of a 1000-frame video:
+@example
+fade=in:0:25, fade=out:975:25
+@end example
+
+@item
+Make the first 5 frames yellow, then fade in from frame 5-24:
+@example
+fade=in:5:20:color=yellow
+@end example
+
+@item
+Fade in alpha over first 25 frames of video:
+@example
+fade=in:0:25:alpha=1
+@end example
+
+@item
+Make the first 5.5 seconds black, then fade in for 0.5 seconds:
+@example
+fade=t=in:st=5.5:d=0.5
+@end example
+
+@end itemize
+
+@section field
+
+Extract a single field from an interlaced image using stride
+arithmetic to avoid wasting CPU time. The output frames are marked as
+non-interlaced.
+
+The filter accepts the following options:
+
+@table @option
+@item type
+Specify whether to extract the top (if the value is @code{0} or
+@code{top}) or the bottom field (if the value is @code{1} or
+@code{bottom}).
+@end table
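+
+For example, extract the bottom field of each frame:
+@example
+field=type=bottom
+@end example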
+
+@section fieldmatch
+
+Field matching filter for inverse telecine. It is meant to reconstruct the
+progressive frames from a telecined stream. The filter does not drop duplicated
+frames, so to achieve a complete inverse telecine @code{fieldmatch} needs to be
+followed by a decimation filter such as @ref{decimate} in the filtergraph.
+
+The separation of the field matching and the decimation is notably motivated by
+the possibility of inserting a de-interlacing filter fallback between the two.
+If the source has mixed telecined and real interlaced content,
+@code{fieldmatch} will not be able to match fields for the interlaced parts.
+But these remaining combed frames will be marked as interlaced, and thus can be
+de-interlaced by a later filter such as @ref{yadif} before decimation.
+
+In addition to the various configuration options, @code{fieldmatch} can take an
+optional second stream, activated through the @option{ppsrc} option. If
+enabled, the frame reconstruction will be based on the fields and frames from
+this second stream. This allows the first input to be pre-processed in order to
+help the various algorithms of the filter, while keeping the output lossless
+(assuming the fields are matched properly). Typically, a field-aware denoiser,
+or brightness/contrast adjustments can help.
+
+Note that this filter uses the same algorithms as TIVTC/TFM (AviSynth project)
+and VIVTC/VFM (VapourSynth project). The latter is a lightweight clone of
+TFM, on which @code{fieldmatch} is based. While the semantics and usage are
+very close, some behaviour and option names can differ.
+
+The filter accepts the following options:
+
+@table @option
+@item order
+Specify the assumed field order of the input stream. Available values are:
+
+@table @samp
+@item auto
+Auto detect parity (use FFmpeg's internal parity value).
+@item bff
+Assume bottom field first.
+@item tff
+Assume top field first.
+@end table
+
+Note that it is sometimes recommended not to trust the parity announced by the
+stream.
+
+Default value is @var{auto}.
+
+@item mode
+Set the matching mode or strategy to use. @option{pc} mode is the safest in the
+sense that it won't risk creating jerkiness due to duplicate frames when
+possible, but if there are bad edits or blended fields it will end up
+outputting combed frames when a good match might actually exist. On the other
+hand, @option{pcn_ub} mode is the most risky in terms of creating jerkiness,
+but will almost always find a good frame if there is one. The other values are
+all somewhere in between @option{pc} and @option{pcn_ub} in terms of risking
+jerkiness and creating duplicate frames versus finding good matches in sections
+with bad edits, orphaned fields, blended fields, etc.
+
+More details about p/c/n/u/b are available in @ref{p/c/n/u/b meaning} section.
+
+Available values are:
+
+@table @samp
+@item pc
+2-way matching (p/c)
+@item pc_n
+2-way matching, and trying 3rd match if still combed (p/c + n)
+@item pc_u
+2-way matching, and trying 3rd match (same order) if still combed (p/c + u)
+@item pc_n_ub
+2-way matching, trying 3rd match if still combed, and trying 4th/5th matches if
+still combed (p/c + n + u/b)
+@item pcn
+3-way matching (p/c/n)
+@item pcn_ub
+3-way matching, and trying 4th/5th matches if all 3 of the original matches are
+detected as combed (p/c/n + u/b)
+@end table
+
+The parentheses at the end indicate the matches that would be used for that
+mode assuming @option{order}=@var{tff} (and @option{field} on @var{auto} or
+@var{top}).
+
+In terms of speed @option{pc} mode is by far the fastest and @option{pcn_ub} is
+the slowest.
+
+Default value is @var{pc_n}.
+
+@item ppsrc
+Mark the main input stream as a pre-processed input, and enable the secondary
+input stream as the clean source to pick the fields from. See the filter
+introduction for more details. It is similar to the @option{clip2} feature from
+VFM/TFM.
+
+Default value is @code{0} (disabled).
+
+@item field
+Set the field to match from. It is recommended to set this to the same value as
+@option{order} unless you experience matching failures with that setting. In
+certain circumstances changing the field that is used to match from can have a
+large impact on matching performance. Available values are:
+
+@table @samp
+@item auto
+Automatic (same value as @option{order}).
+@item bottom
+Match from the bottom field.
+@item top
+Match from the top field.
+@end table
+
+Default value is @var{auto}.
+
+@item mchroma
+Set whether or not chroma is included during the match comparisons. In most
+cases it is recommended to leave this enabled. You should set this to @code{0}
+only if your clip has bad chroma problems such as heavy rainbowing or other
+artifacts. Setting this to @code{0} could also be used to speed things up at
+the cost of some accuracy.
+
+Default value is @code{1}.
+
+@item y0
+@item y1
+These define an exclusion band which excludes the lines between @option{y0} and
+@option{y1} from being included in the field matching decision. An exclusion
+band can be used to ignore subtitles, a logo, or other things that may
+interfere with the matching. @option{y0} sets the starting scan line and
+@option{y1} sets the ending line; all lines in between @option{y0} and
+@option{y1} (including @option{y0} and @option{y1}) will be ignored. Setting
+@option{y0} and @option{y1} to the same value will disable the feature.
+@option{y0} and @option{y1} default to @code{0}.
+
+@item scthresh
+Set the scene change detection threshold as a percentage of maximum change on
+the luma plane. Good values are in the @code{[8.0, 14.0]} range. Scene change
+detection is only relevant in case @option{combmatch}=@var{sc}. The range for
+@option{scthresh} is @code{[0.0, 100.0]}.
+
+Default value is @code{12.0}.
+
+@item combmatch
+When @option{combmatch} is not @var{none}, @code{fieldmatch} will take into
+account the combed scores of matches when deciding what match to use as the
+final match. Available values are:
+
+@table @samp
+@item none
+No final matching based on combed scores.
+@item sc
+Combed scores are only used when a scene change is detected.
+@item full
+Use combed scores all the time.
+@end table
+
+Default is @var{sc}.
+
+@item combdbg
+Force @code{fieldmatch} to calculate the combed metrics for certain matches and
+print them. This setting is known as @option{micout} in TFM/VFM vocabulary.
+Available values are:
+
+@table @samp
+@item none
+No forced calculation.
+@item pcn
+Force p/c/n calculations.
+@item pcnub
+Force p/c/n/u/b calculations.
+@end table
+
+Default value is @var{none}.
+
+@item cthresh
+This is the area combing threshold used for combed frame detection. This
+essentially controls how "strong" or "visible" combing must be to be detected.
+Larger values mean combing must be more visible and smaller values mean combing
+can be less visible or strong and still be detected. Valid settings are from
+@code{-1} (every pixel will be detected as combed) to @code{255} (no pixel will
+be detected as combed). This is basically a pixel difference value. A good
+range is @code{[8, 12]}.
+
+Default value is @code{9}.
+
+@item chroma
+Set whether or not chroma is considered in the combed frame decision. Only
+disable this if your source has chroma problems (rainbowing, etc.) that are
+causing problems for the combed frame detection with chroma enabled. Actually,
+using @option{chroma}=@var{0} is usually more reliable, except for the case
+where there is chroma-only combing in the source.
+
+Default value is @code{0}.
+
+@item blockx
+@item blocky
+Respectively set the x-axis and y-axis size of the window used during combed
+frame detection. This has to do with the size of the area in which
+@option{combpel} pixels are required to be detected as combed for a frame to be
+declared combed. See the @option{combpel} parameter description for more info.
+Possible values are any number that is a power of 2 starting at 4 and going up
+to 512.
+
+Default value is @code{16}.
+
+@item combpel
+The number of combed pixels inside any of the @option{blocky} by
+@option{blockx} size blocks on the frame for the frame to be detected as
+combed. While @option{cthresh} controls how "visible" the combing must be, this
+setting controls "how much" combing there must be in any localized area (a
+window defined by the @option{blockx} and @option{blocky} settings) on the
+frame. Minimum value is @code{0} and maximum is @code{blocky x blockx} (at
+which point no frames will ever be detected as combed). This setting is known
+as @option{MI} in TFM/VFM vocabulary.
+
+Default value is @code{80}.
+@end table
+
+@anchor{p/c/n/u/b meaning}
+@subsection p/c/n/u/b meaning
+
+@subsubsection p/c/n
+
+We assume the following telecined stream:
+
+@example
+Top fields: 1 2 2 3 4
+Bottom fields: 1 2 3 4 4
+@end example
+
+The numbers correspond to the progressive frame the fields relate to. Here, the
+first two frames are progressive, the 3rd and 4th are combed, and so on.
+
+When @code{fieldmatch} is configured to run a matching from bottom
+(@option{field}=@var{bottom}) this is how this input stream gets transformed:
+
+@example
+Input stream:
+ T 1 2 2 3 4
+ B 1 2 3 4 4 <-- matching reference
+
+Matches: c c n n c
+
+Output stream:
+ T 1 2 3 4 4
+ B 1 2 3 4 4
+@end example
+
+As a result of the field matching, we can see that some frames get duplicated.
+To perform a complete inverse telecine, you need to rely on a decimation filter
+after this operation. See for instance the @ref{decimate} filter.
+
+The same operation now matching from top fields (@option{field}=@var{top})
+looks like this:
+
+@example
+Input stream:
+ T 1 2 2 3 4 <-- matching reference
+ B 1 2 3 4 4
+
+Matches: c c p p c
+
+Output stream:
+ T 1 2 2 3 4
+ B 1 2 2 3 4
+@end example
+
+In these examples, we can see what @var{p}, @var{c} and @var{n} mean;
+basically, they refer to the frame and field of the opposite parity:
+
+@itemize
+@item @var{p} matches the field of the opposite parity in the previous frame
+@item @var{c} matches the field of the opposite parity in the current frame
+@item @var{n} matches the field of the opposite parity in the next frame
+@end itemize
+
+@subsubsection u/b
+
+The @var{u} and @var{b} matches are a bit special in the sense that they match
+from the opposite parity flag. In the following examples, we assume that we are
+currently matching the 2nd frame (Top:2, Bottom:2). According to the match, an
+'x' is placed above and below each matched field.
+
+With bottom matching (@option{field}=@var{bottom}):
+@example
+Match: c p n b u
+
+ x x x x x
+ Top 1 2 2 1 2 2 1 2 2 1 2 2 1 2 2
+ Bottom 1 2 3 1 2 3 1 2 3 1 2 3 1 2 3
+ x x x x x
+
+Output frames:
+ 2 1 2 2 2
+ 2 2 2 1 3
+@end example
+
+With top matching (@option{field}=@var{top}):
+@example
+Match: c p n b u
+
+ x x x x x
+ Top 1 2 2 1 2 2 1 2 2 1 2 2 1 2 2
+ Bottom 1 2 3 1 2 3 1 2 3 1 2 3 1 2 3
+ x x x x x
+
+Output frames:
+ 2 2 2 1 2
+ 2 1 3 2 2
+@end example
+
+@subsection Examples
+
+Simple IVTC of a top field first telecined stream:
+@example
+fieldmatch=order=tff:combmatch=none, decimate
+@end example
+
+Advanced IVTC, with fallback on @ref{yadif} for still combed frames:
+@example
+fieldmatch=order=tff:combmatch=full, yadif=deint=interlaced, decimate
+@end example
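+
+As an illustrative sketch of @option{ppsrc} (the file names are placeholders
+and @code{hqdn3d} only stands in for a suitable pre-processing filter), feed a
+denoised copy as the main input while picking the output fields from the
+untouched source:
+@example
+ffmpeg -i INPUT -filter_complex "[0:v]split[a][b]; [a]hqdn3d[pre]; [pre][b]fieldmatch=ppsrc=1:order=tff, decimate" OUTPUT
+@end example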
+
+@section fieldorder
+
+Transform the field order of the input video.
+
+It accepts the following parameters:
+
+@table @option
+
+@item order
+The output field order. Valid values are @var{tff} for top field first or @var{bff}
+for bottom field first.
+@end table
+
+The default value is @samp{tff}.
+
+The transformation is done by shifting the picture content up or down
+by one line, and filling the remaining line with appropriate picture content.
+This method is consistent with most broadcast field order converters.
+
+If the input video is not flagged as being interlaced, or it is already
+flagged as being of the required output field order, then this filter does
+not alter the incoming video.
+
+It is very useful when converting to or from PAL DV material,
+which is bottom field first.
+
+For example:
+@example
+ffmpeg -i in.vob -vf "fieldorder=bff" out.dv
+@end example
+
+@section fifo
+
+Buffer input images and send them when they are requested.
+
+It is mainly useful when auto-inserted by the libavfilter
+framework.
+
+It does not take parameters.
+
+@anchor{format}
+@section format
+
+Convert the input video to one of the specified pixel formats.
+Libavfilter will try to pick one that is suitable as input to
+the next filter.
+
+It accepts the following parameters:
+@table @option
+
+@item pix_fmts
+A '|'-separated list of pixel format names, such as
+"pix_fmts=yuv420p|monow|rgb24".
+
+@end table
+
+@subsection Examples
+
+@itemize
+@item
+Convert the input video to the @var{yuv420p} format:
+@example
+format=pix_fmts=yuv420p
+@end example
+
+@item
+Convert the input video to any of the formats in the list:
+@example
+format=pix_fmts=yuv420p|yuv444p|yuv410p
+@end example
+@end itemize
+
+@anchor{fps}
+@section fps
+
+Convert the video to specified constant frame rate by duplicating or dropping
+frames as necessary.
+
+It accepts the following parameters:
+@table @option
+
+@item fps
+The desired output frame rate. The default is @code{25}.
+
+@item round
+Rounding method.
+
+Possible values are:
+@table @option
+@item zero
+round towards 0
+@item inf
+round away from 0
+@item down
+round towards -infinity
+@item up
+round towards +infinity
+@item near
+round to nearest
+@end table
+The default is @code{near}.
+
+@item start_time
+Assume the first PTS should be the given value, in seconds. This allows for
+padding/trimming at the start of stream. By default, no assumption is made
+about the first frame's expected PTS, so no padding or trimming is done.
+For example, this could be set to 0 to pad the beginning with duplicates of
+the first frame if a video stream starts after the audio stream or to trim any
+frames with a negative PTS.
+
+@end table
+
+Alternatively, the options can be specified as a flat string:
+@var{fps}[:@var{round}].
+
+See also the @ref{setpts} filter.
+
+@subsection Examples
+
+@itemize
+@item
+A typical usage in order to set the fps to 25:
+@example
+fps=fps=25
+@end example
+
+@item
+Set the fps to 24, using an abbreviation for the frame rate and rounding to the nearest value:
+@example
+fps=fps=film:round=near
+@end example
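+
+@item
+As a further illustration, pad the start of the stream with duplicates of the
+first frame so that the output effectively starts at time 0 (an illustrative
+use of @option{start_time}):
+@example
+fps=fps=25:start_time=0
+@end example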
+@end itemize
+
+@section framepack
+
+Pack two different video streams into a stereoscopic video, setting proper
+metadata on supported codecs. The two views should have the same size and
+framerate and processing will stop when the shorter video ends. Please note
+that you may conveniently adjust view properties with the @ref{scale} and
+@ref{fps} filters.
+
+It accepts the following parameters:
+@table @option
+
+@item format
+The desired packing format. Supported values are:
+
+@table @option
+
+@item sbs
+The views are next to each other (default).
+
+@item tab
+The views are on top of each other.
+
+@item lines
+The views are packed by line.
+
+@item columns
+The views are packed by column.
+
+@item frameseq
+The views are temporally interleaved.
+
+@end table
+
+@end table
+
+Some examples:
+
+@example
+# Convert left and right views into a frame-sequential video
+ffmpeg -i LEFT -i RIGHT -filter_complex framepack=frameseq OUTPUT
+
+# Convert views into a side-by-side video with the same output resolution as the input
+ffmpeg -i LEFT -i RIGHT -filter_complex [0:v]scale=w=iw/2[left],[1:v]scale=w=iw/2[right],[left][right]framepack=sbs OUTPUT
+@end example
+
+@section framestep
+
+Select one frame out of every N frames.
+
+This filter accepts the following option:
+@table @option
+@item step
+Select one frame after every @code{step} frames.
+Allowed values are positive integers. Default value is @code{1}.
+@end table
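+
+For example, keep one frame out of every five (the value is chosen only for
+illustration):
+@example
+framestep=step=5
+@end example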
+
+@anchor{frei0r}
+@section frei0r
+
+Apply a frei0r effect to the input video.
+
+To enable the compilation of this filter, you need to install the frei0r
+header and configure FFmpeg with @code{--enable-frei0r}.
+
+It accepts the following parameters:
+
+@table @option
+
+@item filter_name
+The name of the frei0r effect to load. If the environment variable
+@env{FREI0R_PATH} is defined, the frei0r effect is searched for in each of the
+directories specified by the colon-separated list in @env{FREI0R_PATH}.
+Otherwise, the standard frei0r paths are searched, in this order:
+@file{HOME/.frei0r-1/lib/}, @file{/usr/local/lib/frei0r-1/},
+@file{/usr/lib/frei0r-1/}.
+
+@item filter_params
+A '|'-separated list of parameters to pass to the frei0r effect.
+
+@end table
+
+A frei0r effect parameter can be a boolean (its value is either
+"y" or "n"), a double, a color (specified as
+@var{R}/@var{G}/@var{B}, where @var{R}, @var{G}, and @var{B} are floating point
+numbers between 0.0 and 1.0, inclusive, or as a color description as specified
+in the "Color" section in the ffmpeg-utils manual), a position (specified as
+@var{X}/@var{Y}, where @var{X} and @var{Y} are floating point numbers) and/or a
+string.
+
+The number and types of parameters depend on the loaded effect. If an
+effect parameter is not specified, the default value is set.
+
+@subsection Examples
+
+@itemize
+@item
+Apply the distort0r effect, setting the first two double parameters:
+@example
+frei0r=filter_name=distort0r:filter_params=0.5|0.01
+@end example
+
+@item
+Apply the colordistance effect, taking a color as the first parameter:
+@example
+frei0r=colordistance:0.2/0.3/0.4
+frei0r=colordistance:violet
+frei0r=colordistance:0x112233
+@end example
+
+@item
+Apply the perspective effect, specifying the top left and top right image
+positions:
+@example
+frei0r=perspective:0.2/0.2|0.8/0.2
+@end example
+@end itemize
+
+For more information, see
+@url{http://frei0r.dyne.org}
+
+@section geq
+
+Apply a generic equation to each pixel.
+
+The filter accepts the following options:
+
+@table @option
+@item lum_expr, lum
+Set the luminance expression.
+@item cb_expr, cb
+Set the chrominance blue expression.
+@item cr_expr, cr
+Set the chrominance red expression.
+@item alpha_expr, a
+Set the alpha expression.
+@item red_expr, r
+Set the red expression.
+@item green_expr, g
+Set the green expression.
+@item blue_expr, b
+Set the blue expression.
+@end table
+
+The colorspace is selected according to the specified options. If one
+of the @option{lum_expr}, @option{cb_expr}, or @option{cr_expr}
+options is specified, the filter will automatically select a YCbCr
+colorspace. If one of the @option{red_expr}, @option{green_expr}, or
+@option{blue_expr} options is specified, it will select an RGB
+colorspace.
+
+If one of the chrominance expressions is not defined, it falls back on the
+other one. If no alpha expression is specified it will evaluate to the opaque
+value. If none of the chrominance expressions are specified, they will
+evaluate to the luminance expression.
+
+The expressions can use the following variables and functions:
+
+@table @option
+@item N
+The sequential number of the filtered frame, starting from @code{0}.
+
+@item X
+@item Y
+The coordinates of the current sample.
+
+@item W
+@item H
+The width and height of the image.
+
+@item SW
+@item SH
+Width and height scale depending on the currently filtered plane. It is the
+ratio between the number of pixels of the current plane and that of the luma
+plane. E.g. for YUV4:2:0 the values are @code{1,1} for the luma plane, and
+@code{0.5,0.5} for chroma planes.
+
+@item T
+Time of the current frame, expressed in seconds.
+
+@item p(x, y)
+Return the value of the pixel at location (@var{x},@var{y}) of the current
+plane.
+
+@item lum(x, y)
+Return the value of the pixel at location (@var{x},@var{y}) of the luminance
+plane.
+
+@item cb(x, y)
+Return the value of the pixel at location (@var{x},@var{y}) of the
+blue-difference chroma plane. Return 0 if there is no such plane.
+
+@item cr(x, y)
+Return the value of the pixel at location (@var{x},@var{y}) of the
+red-difference chroma plane. Return 0 if there is no such plane.
+
+@item r(x, y)
+@item g(x, y)
+@item b(x, y)
+Return the value of the pixel at location (@var{x},@var{y}) of the
+red/green/blue component. Return 0 if there is no such component.
+
+@item alpha(x, y)
+Return the value of the pixel at location (@var{x},@var{y}) of the alpha
+plane. Return 0 if there is no such plane.
+@end table
+
+For functions, if @var{x} and @var{y} are outside the area, the value will be
+automatically clipped to the closest edge.
+
+@subsection Examples
+
+@itemize
+@item
+Flip the image horizontally:
+@example
+geq=p(W-X\,Y)
+@end example
+
+@item
+Generate a bidimensional sine wave, with angle @code{PI/3} and a
+wavelength of 100 pixels:
+@example
+geq=128 + 100*sin(2*(PI/100)*(cos(PI/3)*(X-50*T) + sin(PI/3)*Y)):128:128
+@end example
+
+@item
+Generate a fancy enigmatic moving light:
+@example
+nullsrc=s=256x256,geq=random(1)/hypot(X-cos(N*0.07)*W/2-W/2\,Y-sin(N*0.09)*H/2-H/2)^2*1000000*sin(N*0.02):128:128
+@end example
+
+@item
+Generate a quick emboss effect:
+@example
+format=gray,geq=lum_expr='(p(X,Y)+(256-p(X-4,Y-4)))/2'
+@end example
+
+@item
+Modify RGB components depending on pixel position:
+@example
+geq=r='X/W*r(X,Y)':g='(1-X/W)*g(X,Y)':b='(H-Y)/H*b(X,Y)'
+@end example
+@end itemize
+
+@section gradfun
+
+Fix the banding artifacts that are sometimes introduced into nearly flat
+regions by truncation to 8bit color depth.
+Interpolate the gradients that should go where the bands are, and
+dither them.
+
+It is designed for playback only. Do not use it prior to
+lossy compression, because compression tends to lose the dither and
+bring back the bands.
+
+It accepts the following parameters:
+
+@table @option
+
+@item strength
+The maximum amount by which the filter will change any one pixel. This is also
+the threshold for detecting nearly flat regions. Acceptable values range from
+.51 to 64; the default value is 1.2. Out-of-range values will be clipped to the
+valid range.
+
+@item radius
+The neighborhood to fit the gradient to. A larger radius makes for smoother
+gradients, but also prevents the filter from modifying the pixels near detailed
+regions. Acceptable values are 8-32; the default value is 16. Out-of-range
+values will be clipped to the valid range.
+
+@end table
+
+Alternatively, the options can be specified as a flat string:
+@var{strength}[:@var{radius}]
+
+@subsection Examples
+
+@itemize
+@item
+Apply the filter with a @code{3.5} strength and radius of @code{8}:
+@example
+gradfun=3.5:8
+@end example
+
+@item
+Specify radius, omitting the strength (which will fall-back to the default
+value):
+@example
+gradfun=radius=8
+@end example
+
+@end itemize
+
+@anchor{haldclut}
+@section haldclut
+
+Apply a Hald CLUT to a video stream.
+
+First input is the video stream to process, and second one is the Hald CLUT.
+The Hald CLUT input can be a simple picture or a complete video stream.
+
+The filter accepts the following options:
+
+@table @option
+@item shortest
+Force termination when the shortest input terminates. Default is @code{0}.
+@item repeatlast
+Continue applying the last CLUT after the end of the stream. A value of
+@code{0} disables the filter after the last frame of the CLUT is reached.
+Default is @code{1}.
+@end table
+
+@code{haldclut} also has the same interpolation options as @ref{lut3d} (both
+filters share the same internals).
+
+More information about the Hald CLUT can be found on Eskil Steenberg's website
+(Hald CLUT author) at @url{http://www.quelsolaar.com/technology/clut.html}.
+
+@subsection Workflow examples
+
+@subsubsection Hald CLUT video stream
+
+Generate an identity Hald CLUT stream altered with various effects:
+@example
+ffmpeg -f lavfi -i @ref{haldclutsrc}=8 -vf "hue=H=2*PI*t:s=sin(2*PI*t)+1, curves=cross_process" -t 10 -c:v ffv1 clut.nut
+@end example
+
+Note: make sure you use a lossless codec.
+
+Then use it with @code{haldclut} to apply it on some random stream:
+@example
+ffmpeg -f lavfi -i mandelbrot -i clut.nut -filter_complex '[0][1] haldclut' -t 20 mandelclut.mkv
+@end example
+
+The Hald CLUT will be applied to the first 10 seconds (duration of
+@file{clut.nut}), then the latest picture of that CLUT stream will be applied
+to the remaining frames of the @code{mandelbrot} stream.
+
+@subsubsection Hald CLUT with preview
+
+A Hald CLUT is supposed to be a square image of @code{Level*Level*Level} by
+@code{Level*Level*Level} pixels. For a given Hald CLUT, FFmpeg will select the
+biggest possible square starting at the top left of the picture. The remaining
+padding pixels (bottom or right) will be ignored. This area can be used to add
+a preview of the Hald CLUT.
+
+Typically, the following generated Hald CLUT will be supported by the
+@code{haldclut} filter:
+
+@example
+ffmpeg -f lavfi -i @ref{haldclutsrc}=8 -vf "
+ pad=iw+320 [padded_clut];
+ smptebars=s=320x256, split [a][b];
+ [padded_clut][a] overlay=W-320:h, curves=color_negative [main];
+ [main][b] overlay=W-320" -frames:v 1 clut.png
+@end example
+
+It contains the original and a preview of the effect of the CLUT: SMPTE color
+bars are displayed at the top right, and below them the same color bars
+processed by the color changes.
+
+Then, the effect of this Hald CLUT can be visualized with:
+@example
+ffplay input.mkv -vf "movie=clut.png, [in] haldclut"
+@end example
+
+@section hflip
+
+Flip the input video horizontally.
+
+For example, to horizontally flip the input video with @command{ffmpeg}:
+@example
+ffmpeg -i in.avi -vf "hflip" out.avi
+@end example
+
+@section histeq
+This filter applies a global color histogram equalization on a
+per-frame basis.
+
+It can be used to correct video that has a compressed range of pixel
+intensities. The filter redistributes the pixel intensities to
+equalize their distribution across the intensity range. It may be
+viewed as an "automatically adjusting contrast filter". This filter is
+useful only for correcting degraded or poorly captured source
+video.
+
+The filter accepts the following options:
+
+@table @option
+@item strength
+Determine the amount of equalization to be applied. As the strength
+is reduced, the distribution of pixel intensities more-and-more
+approaches that of the input frame. The value must be a float number
+in the range [0,1] and defaults to 0.200.
+
+@item intensity
+Set the maximum intensity that can be generated and scale the output
+values appropriately. The strength should be set as desired and then
+the intensity can be limited if needed to avoid washing-out. The value
+must be a float number in the range [0,1] and defaults to 0.210.
+
+@item antibanding
+Set the antibanding level. If enabled the filter will randomly vary
+the luminance of output pixels by a small amount to avoid banding of
+the histogram. Possible values are @code{none}, @code{weak} or
+@code{strong}. It defaults to @code{none}.
+@end table
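+
+For example, the following invocation (the values are illustrative) applies a
+moderate equalization while limiting the output intensity:
+@example
+histeq=strength=0.3:intensity=0.3
+@end example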
+
+@section histogram
+
+Compute and draw a color distribution histogram for the input video.
+
+The computed histogram is a representation of the color component
+distribution in an image.
+
+The filter accepts the following options:
+
+@table @option
+@item mode
+Set histogram mode.
+
+It accepts the following values:
+@table @samp
+@item levels
+Standard histogram that displays the color components distribution in an
+image. Displays color graph for each color component. Shows distribution of
+the Y, U, V, A or R, G, B components, depending on input format, in the
+current frame. Below each graph a color component scale meter is shown.
+
+@item color
+Displays chroma values (U/V color placement) in a two dimensional
+graph (which is called a vectorscope). The brighter a pixel in the
+vectorscope, the more pixels of the input frame correspond to that pixel
+(i.e., more pixels have this chroma value). The V component is displayed on
+the horizontal (X) axis, with the leftmost side being V = 0 and the rightmost
+side being V = 255. The U component is displayed on the vertical (Y) axis,
+with the top representing U = 0 and the bottom representing U = 255.
+
+The position of a white pixel in the graph corresponds to the chroma value of
+a pixel of the input clip. The graph can therefore be used to read the hue
+(color flavor) and the saturation (the dominance of the hue in the color). As
+the hue of a color changes, it moves around the square. At the center of the
+square the saturation is zero, which means that the corresponding pixel has no
+color. If the amount of a specific color is increased (while leaving the other
+colors unchanged) the saturation increases, and the indicator moves towards
+the edge of the square.
+
+@item color2
+Chroma values in vectorscope, similar as @code{color} but actual chroma values
+are displayed.
+
+@item waveform
+Per row/column color component graph. In row mode, the graph on the left side
+represents color component value 0 and the right side represents value = 255.
+In column mode, the top side represents color component value = 0 and bottom
+side represents value = 255.
+@end table
+Default value is @code{levels}.
+
+@item level_height
+Set height of level in @code{levels}. Default value is @code{200}.
+Allowed range is [50, 2048].
+
+@item scale_height
+Set height of color scale in @code{levels}. Default value is @code{12}.
+Allowed range is [0, 40].
+
+@item step
+Set step for @code{waveform} mode. Smaller values are useful to find out how
+many values of the same luminance are distributed across input rows/columns.
+Default value is @code{10}. Allowed range is [1, 255].
+
+@item waveform_mode
+Set mode for @code{waveform}. Can be either @code{row}, or @code{column}.
+Default is @code{row}.
+
+@item waveform_mirror
+Set mirroring mode for @code{waveform}. @code{0} means unmirrored, @code{1}
+means mirrored. In mirrored mode, higher values will be represented on the left
+side for @code{row} mode and at the top for @code{column} mode. Default is
+@code{0} (unmirrored).
+
+@item display_mode
+Set display mode for @code{waveform} and @code{levels}.
+It accepts the following values:
+@table @samp
+@item parade
+Display separate graph for the color components side by side in
+@code{row} waveform mode or one below the other in @code{column} waveform mode
+for @code{waveform} histogram mode. For @code{levels} histogram mode,
+per color component graphs are placed below each other.
+
+Using this display mode in @code{waveform} histogram mode makes it easy to
+spot color casts in the highlights and shadows of an image, by comparing the
+contours of the top and the bottom graphs of each waveform. Since whites,
+grays, and blacks are characterized by exactly equal amounts of red, green,
+and blue, neutral areas of the picture should display three waveforms of
+roughly equal width/height. If not, the correction is easy to perform by
+making level adjustments to the three waveforms.
+
+@item overlay
+Presents information identical to that in the @code{parade}, except
+that the graphs representing color components are superimposed directly
+over one another.
+
+This display mode in @code{waveform} histogram mode makes it easier to spot
+relative differences or similarities in overlapping areas of the color
+components that are supposed to be identical, such as neutral whites, grays,
+or blacks.
+@end table
+Default is @code{parade}.
+
+@item levels_mode
+Set mode for @code{levels}. Can be either @code{linear}, or @code{logarithmic}.
+Default is @code{linear}.
+@end table
+
+@subsection Examples
+
+@itemize
+
+@item
+Calculate and draw histogram:
+@example
+ffplay -i input -vf histogram
+@end example
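+
+@item
+As a further illustrative sketch (option values chosen arbitrarily), display a
+mirrored per-column waveform of the input:
+@example
+ffplay -i input -vf histogram=mode=waveform:waveform_mode=column:waveform_mirror=1
+@end example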
+
+@end itemize
+
+@anchor{hqdn3d}
+@section hqdn3d
+
+This is a high precision/quality 3d denoise filter. It aims to reduce
+image noise, producing smooth images and making still images really
+still. It should enhance compressibility.
+
+It accepts the following optional parameters:
+
+@table @option
+@item luma_spatial
+A non-negative floating point number which specifies spatial luma strength.
+It defaults to 4.0.
+
+@item chroma_spatial
+A non-negative floating point number which specifies spatial chroma strength.
+It defaults to 3.0*@var{luma_spatial}/4.0.
+
+@item luma_tmp
+A floating point number which specifies luma temporal strength. It defaults to
+6.0*@var{luma_spatial}/4.0.
+
+@item chroma_tmp
+A floating point number which specifies chroma temporal strength. It defaults to
+@var{luma_tmp}*@var{chroma_spatial}/@var{luma_spatial}.
+@end table
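+
+For example, apply a stronger than default spatial denoising (the value is
+illustrative); the remaining strengths are derived from it as described above:
+@example
+hqdn3d=luma_spatial=8
+@end example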
+
+@section hue
+
+Modify the hue and/or the saturation of the input.
+
+It accepts the following parameters:
+
+@table @option
+@item h
+Specify the hue angle as a number of degrees. It accepts an expression,
+and defaults to "0".
+
+@item s
+Specify the saturation in the [-10,10] range. It accepts an expression and
+defaults to "1".
+
+@item H
+Specify the hue angle as a number of radians. It accepts an
+expression, and defaults to "0".
+
+@item b
+Specify the brightness in the [-10,10] range. It accepts an expression and
+defaults to "0".
+@end table
+
+@option{h} and @option{H} are mutually exclusive, and can't be
+specified at the same time.
+
+The @option{b}, @option{h}, @option{H} and @option{s} option values are
+expressions containing the following constants:
+
+@table @option
+@item n
+frame count of the input frame starting from 0
+
+@item pts
+presentation timestamp of the input frame expressed in time base units
+
+@item r
+frame rate of the input video, NAN if the input frame rate is unknown
+
+@item t
+timestamp expressed in seconds, NAN if the input timestamp is unknown
+
+@item tb
+time base of the input video
+@end table
+
+@subsection Examples
+
+@itemize
+@item
+Set the hue to 90 degrees and the saturation to 1.0:
+@example
+hue=h=90:s=1
+@end example
+
+@item
+Same command but expressing the hue in radians:
+@example
+hue=H=PI/2:s=1
+@end example
+
+@item
+Rotate hue and make the saturation swing between 0
+and 2 over a period of 1 second:
+@example
+hue="H=2*PI*t: s=sin(2*PI*t)+1"
+@end example
+
+@item
+Apply a 3 seconds saturation fade-in effect starting at 0:
+@example
+hue="s=min(t/3\,1)"
+@end example
+
+The general fade-in expression can be written as:
+@example
+hue="s=min(0\, max((t-START)/DURATION\, 1))"
+@end example
+
+@item
+Apply a 3 seconds saturation fade-out effect starting at 5 seconds:
+@example
+hue="s=max(0\, min(1\, (8-t)/3))"
+@end example
+
+The general fade-out expression can be written as:
+@example
+hue="s=max(0\, min(1\, (START+DURATION-t)/DURATION))"
+@end example
+
+@end itemize
+
+@subsection Commands
+
+This filter supports the following commands:
+@table @option
+@item b
+@item s
+@item h
+@item H
+Modify the hue and/or the saturation and/or brightness of the input video.
+The command accepts the same syntax of the corresponding option.
+
+If the specified expression is not valid, it is kept at its current
+value.
+@end table
+
+@section idet
+
+Detect video interlacing type.
+
+This filter tries to detect if the input is interlaced or progressive,
+top or bottom field first.
+
+The filter accepts the following options:
+
+@table @option
+@item intl_thres
+Set interlacing threshold.
+@item prog_thres
+Set progressive threshold.
+@end table
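+
+For example, run the detection over a whole file while discarding the decoded
+output, so that only the statistics printed in the log remain (an illustrative
+invocation):
+@example
+ffmpeg -i INPUT -vf idet -f null -
+@end example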
+
+@section il
+
+Deinterleave or interleave fields.
+
+This filter allows one to process interlaced image fields without
+deinterlacing them. Deinterleaving splits the input frame into 2
+fields (so called half pictures). Odd lines are moved to the top
+half of the output image, even lines to the bottom half.
+You can process (filter) them independently and then re-interleave them.
+
+The filter accepts the following options:
+
+@table @option
+@item luma_mode, l
+@item chroma_mode, c
+@item alpha_mode, a
+Available values for @var{luma_mode}, @var{chroma_mode} and
+@var{alpha_mode} are:
+
+@table @samp
+@item none
+Do nothing.
+
+@item deinterleave, d
+Deinterleave fields, placing one above the other.
+
+@item interleave, i
+Interleave fields. Reverse the effect of deinterleaving.
+@end table
+Default value is @code{none}.
+
+@item luma_swap, ls
+@item chroma_swap, cs
+@item alpha_swap, as
+Swap luma/chroma/alpha fields. Exchange even & odd lines. Default value is @code{0}.
+@end table
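+
+As an illustrative sketch, deinterleave the fields, apply a frame-based filter
+(@code{gradfun} is used here only as a placeholder), and interleave them back:
+@example
+il=l=d:c=d, gradfun, il=l=i:c=i
+@end example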
+
+@section interlace
+
+Simple interlacing filter from progressive contents. This interleaves upper (or
+lower) lines from odd frames with lower (or upper) lines from even frames,
+halving the frame rate and preserving image height. A vertical lowpass filter
+is always applied in order to avoid twitter effects and reduce moiré patterns.
+
+@example
+ Original Original New Frame
+ Frame 'j' Frame 'j+1' (tff)
+ ========== =========== ==================
+ Line 0 --------------------> Frame 'j' Line 0
+ Line 1 Line 1 ----> Frame 'j+1' Line 1
+ Line 2 ---------------------> Frame 'j' Line 2
+ Line 3 Line 3 ----> Frame 'j+1' Line 3
+ ... ... ...
+New Frame + 1 will be generated by Frame 'j+2' and Frame 'j+3' and so on
+@end example
+
+It accepts the following optional parameters:
+
+@table @option
+@item scan
+This determines whether the interlaced frame is taken from the even
+(tff - default) or odd (bff) lines of the progressive frame.
+@end table
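+
+For example, interlace a progressive input producing top-field-first output
+(which is also the default):
+@example
+interlace=scan=tff
+@end example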
+
+@section kerndeint
+
+Deinterlace input video by applying Donald Graft's adaptive kernel
+deinterlacing. It works on interlaced parts of a video to produce
+progressive frames.
+
+The description of the accepted parameters follows.
+
+@table @option
+@item thresh
+Set the threshold which affects the filter's tolerance when
+determining if a pixel line must be processed. It must be an integer
+in the range [0,255] and defaults to 10. A value of 0 will result in
+applying the process to every pixel.
+
+@item map
+Paint pixels exceeding the threshold value to white if set to 1.
+Default is 0.
+
+@item order
+Set the field order. Swap fields if set to 1, leave fields alone if
+0. Default is 0.
+
+@item sharp
+Enable additional sharpening if set to 1. Default is 0.
+
+@item twoway
+Enable twoway sharpening if set to 1. Default is 0.
+@end table
+
+@subsection Examples
+
+@itemize
+@item
+Apply default values:
+@example
+kerndeint=thresh=10:map=0:order=0:sharp=0:twoway=0
+@end example
+
+@item
+Enable additional sharpening:
+@example
+kerndeint=sharp=1
+@end example
+
+@item
+Paint processed pixels in white:
+@example
+kerndeint=map=1
+@end example
+@end itemize
+
+@anchor{lut3d}
+@section lut3d
+
+Apply a 3D LUT to an input video.
+
+The filter accepts the following options:
+
+@table @option
+@item file
+Set the 3D LUT file name.
+
+Currently supported formats:
+@table @samp
+@item 3dl
+AfterEffects
+@item cube
+Iridas
+@item dat
+DaVinci
+@item m3d
+Pandora
+@end table
+@item interp
+Select interpolation mode.
+
+Available values are:
+
+@table @samp
+@item nearest
+Use values from the nearest defined point.
+@item trilinear
+Interpolate values using the 8 points defining a cube.
+@item tetrahedral
+Interpolate values using a tetrahedron.
+@end table
+@end table
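+
+For example, apply an Iridas cube LUT with trilinear interpolation (the file
+name is a placeholder):
+@example
+lut3d=file=grade.cube:interp=trilinear
+@end example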
+
+@section lut, lutrgb, lutyuv
+
+Compute a look-up table for binding each pixel component input value
+to an output value, and apply it to the input video.
+
+@var{lutyuv} applies a lookup table to a YUV input video, @var{lutrgb}
+to an RGB input video.
+
+These filters accept the following parameters:
+@table @option
+@item c0
+set first pixel component expression
+@item c1
+set second pixel component expression
+@item c2
+set third pixel component expression
+@item c3
+set fourth pixel component expression, corresponds to the alpha component
+
+@item r
+set red component expression
+@item g
+set green component expression
+@item b
+set blue component expression
+@item a
+alpha component expression
+
+@item y
+set Y/luminance component expression
+@item u
+set U/Cb component expression
+@item v
+set V/Cr component expression
+@end table
+
+Each of them specifies the expression to use for computing the lookup table for
+the corresponding pixel component values.
+
+The exact component associated to each of the @var{c*} options depends on the
+format in input.
+
+The @var{lut} filter requires either YUV or RGB pixel formats in input,
+@var{lutrgb} requires RGB pixel formats in input, and @var{lutyuv} requires YUV.
+
+The expressions can contain the following constants and functions:
+
+@table @option
+@item w
+@item h
+The input width and height.
+
+@item val
+The input value for the pixel component.
+
+@item clipval
+The input value, clipped to the @var{minval}-@var{maxval} range.
+
+@item maxval
+The maximum value for the pixel component.
+
+@item minval
+The minimum value for the pixel component.
+
+@item negval
+The negated value for the pixel component value, clipped to the
+@var{minval}-@var{maxval} range; it corresponds to the expression
+"maxval-clipval+minval".
+
+@item clip(val)
+The computed value in @var{val}, clipped to the
+@var{minval}-@var{maxval} range.
+
+@item gammaval(gamma)
+The computed gamma correction value of the pixel component value,
+clipped to the @var{minval}-@var{maxval} range. It corresponds to the
+expression
+"pow((clipval-minval)/(maxval-minval)\,@var{gamma})*(maxval-minval)+minval"
+
+@end table
+
+All expressions default to "val".
+
+@subsection Examples
+
+@itemize
+@item
+Negate input video:
+@example
+lutrgb="r=maxval+minval-val:g=maxval+minval-val:b=maxval+minval-val"
+lutyuv="y=maxval+minval-val:u=maxval+minval-val:v=maxval+minval-val"
+@end example
+
+The above is the same as:
+@example
+lutrgb="r=negval:g=negval:b=negval"
+lutyuv="y=negval:u=negval:v=negval"
+@end example
+
+@item
+Negate luminance:
+@example
+lutyuv=y=negval
+@end example
+
+@item
+Remove chroma components, turning the video into a graytone image:
+@example
+lutyuv="u=128:v=128"
+@end example
+
+@item
+Apply a luma burning effect:
+@example
+lutyuv="y=2*val"
+@end example
+
+@item
+Remove green and blue components:
+@example
+lutrgb="g=0:b=0"
+@end example
+
+@item
+Set a constant alpha channel value on input:
+@example
+format=rgba,lutrgb=a="maxval-minval/2"
+@end example
+
+@item
+Correct luminance gamma by a factor of 0.5:
+@example
+lutyuv=y=gammaval(0.5)
+@end example
+
+@item
+Discard least significant bits of luma:
+@example
+lutyuv=y='bitand(val, 128+64+32)'
+@end example
+@end itemize
+
+@section mergeplanes
+
+Merge color channel components from several video streams.
+
+The filter accepts up to 4 input streams, and merges the selected input
+planes into the output video.
+
+This filter accepts the following options:
+@table @option
+@item mapping
+Set input to output plane mapping. Default is @code{0}.
+
+The mapping is specified as a bitmap. It should be specified as a
+hexadecimal number in the form 0xAa[Bb[Cc[Dd]]]. 'Aa' describes the
+mapping for the first plane of the output stream. 'A' sets the number of
+the input stream to use (from 0 to 3), and 'a' the plane number of the
+corresponding input to use (from 0 to 3). The rest of the mapping is
+similar, 'Bb' describes the mapping for the output stream second
+plane, 'Cc' describes the mapping for the output stream third plane and
+'Dd' describes the mapping for the output stream fourth plane.
+
+@item format
+Set output pixel format. Default is @code{yuva444p}.
+@end table
+
+@subsection Examples
+
+@itemize
+@item
+Merge three gray video streams of the same width and height into a single video stream:
+@example
+[a0][a1][a2]mergeplanes=0x001020:yuv444p
+@end example
+
+@item
+Merge 1st yuv444p stream and 2nd gray video stream into yuva444p video stream:
+@example
+[a0][a1]mergeplanes=0x00010210:yuva444p
+@end example
+
+@item
+Swap Y and A plane in yuva444p stream:
+@example
+format=yuva444p,mergeplanes=0x03010200:yuva444p
+@end example
+
+@item
+Swap U and V plane in yuv420p stream:
+@example
+format=yuv420p,mergeplanes=0x000201:yuv420p
+@end example
+
+@item
+Cast an rgb24 clip to yuv444p:
+@example
+format=rgb24,mergeplanes=0x000102:yuv444p
+@end example
+@end itemize
+
+@section mcdeint
+
+Apply motion-compensation deinterlacing.
+
+It needs one field per frame as input and must thus be used together
+with yadif=1/3 or equivalent.
+
+This filter accepts the following options:
+@table @option
+@item mode
+Set the deinterlacing mode.
+
+It accepts one of the following values:
+@table @samp
+@item fast
+@item medium
+@item slow
+use iterative motion estimation
+@item extra_slow
+like @samp{slow}, but use multiple reference frames.
+@end table
+Default value is @samp{fast}.
+
+@item parity
+Set the picture field parity assumed for the input video. It must be
+one of the following values:
+
+@table @samp
+@item 0, tff
+assume top field first
+@item 1, bff
+assume bottom field first
+@end table
+
+Default value is @samp{bff}.
+
+@item qp
+Set per-block quantization parameter (QP) used by the internal
+encoder.
+
+Higher values should result in a smoother motion vector field but less
+optimal individual vectors. Default value is 1.
+@end table
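+
+As an illustrative sketch, deinterlace with @code{yadif} in field-rate mode
+(one frame per field) and refine the result with motion compensation:
+@example
+yadif=1, mcdeint=mode=medium:parity=tff
+@end example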
+
+@section mp
+
+Apply an MPlayer filter to the input video.
+
+This filter provides a wrapper around some of the filters of
+MPlayer/MEncoder.
+
+This wrapper is considered experimental. Some of the wrapped filters
+may not work properly and we may drop support for them, as they will
+be implemented natively into FFmpeg. Thus you should avoid
+depending on them when writing portable scripts.
+
+The filter accepts the parameters:
+@var{filter_name}[:=]@var{filter_params}
+
+@var{filter_name} is the name of a supported MPlayer filter,
+@var{filter_params} is a string containing the parameters accepted by
+the named filter.
+
+The list of the currently supported filters follows:
+@table @var
+@item eq2
+@item eq
+@item fspp
+@item ilpack
+@item pp7
+@item softpulldown
+@item uspp
+@end table
+
+The parameter syntax and behavior for the listed filters are the same
+as for the corresponding MPlayer filters. For detailed instructions check
+the "VIDEO FILTERS" section in the MPlayer manual.
+
+@subsection Examples
+
+@itemize
+@item
+Adjust gamma, brightness, contrast:
+@example
+mp=eq2=1.0:2:0.5
+@end example
+@end itemize
+
+See also mplayer(1), @url{http://www.mplayerhq.hu/}.
+
+@section mpdecimate
+
+Drop frames that do not differ greatly from the previous frame in
+order to reduce frame rate.
+
+The main use of this filter is for very-low-bitrate encoding
+(e.g. streaming over dialup modem), but it could in theory be used for
+fixing movies that were inverse-telecined incorrectly.
+
+A description of the accepted options follows.
+
+@table @option
+@item max
+Set the maximum number of consecutive frames which can be dropped (if
+positive), or the minimum interval between dropped frames (if
+negative). If the value is 0, the frame is dropped regardless of the
+number of previously dropped frames in sequence.
+
+Default value is 0.
+
+@item hi
+@item lo
+@item frac
+Set the dropping threshold values.
+
+Values for @option{hi} and @option{lo} are for 8x8 pixel blocks and
+represent actual pixel value differences, so a threshold of 64
+corresponds to 1 unit of difference for each pixel, or the same spread
+out differently over the block.
+
+A frame is a candidate for dropping if no 8x8 blocks differ by more
+than a threshold of @option{hi}, and if no more than @option{frac} blocks (1
+meaning the whole image) differ by more than a threshold of @option{lo}.
+
+Default value for @option{hi} is 64*12, default value for @option{lo} is
+64*5, and default value for @option{frac} is 0.33.
+@end table
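+
+As an illustrative sketch (the file names are placeholders), drop
+near-duplicate frames and let the muxer produce variable frame rate output:
+@example
+ffmpeg -i INPUT -vf mpdecimate -vsync vfr OUTPUT
+@end example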
+
+
+@section negate
+
+Negate input video.
+
+It accepts an integer as input; if non-zero it also negates the
+alpha component (if available). The default value is 0.
+
+@section noformat
+
+Force libavfilter not to use any of the specified pixel formats for the
+input to the next filter.
+
+It accepts the following parameters:
+@table @option
+
+@item pix_fmts
+A '|'-separated list of pixel format names, such as
+"pix_fmts=yuv420p|monow|rgb24".
+
+@end table
+
+@subsection Examples
+
+@itemize
+@item
+Force libavfilter to use a format different from @var{yuv420p} for the
+input to the vflip filter:
+@example
+noformat=pix_fmts=yuv420p,vflip
+@end example
+
+@item
+Convert the input video to any of the formats not contained in the list:
+@example
+noformat=yuv420p|yuv444p|yuv410p
+@end example
+@end itemize
+
+@section noise
+
+Add noise to the input video frame.
+
+The filter accepts the following options:
+
+@table @option
+@item all_seed
+@item c0_seed
+@item c1_seed
+@item c2_seed
+@item c3_seed
+Set noise seed for a specific pixel component or all pixel components in case
+of @var{all_seed}. Default value is @code{123457}.
+
+@item all_strength, alls
+@item c0_strength, c0s
+@item c1_strength, c1s
+@item c2_strength, c2s
+@item c3_strength, c3s
+Set noise strength for a specific pixel component or all pixel components in
+case of @var{all_strength}. Default value is @code{0}. Allowed range is [0, 100].
+
+@item all_flags, allf
+@item c0_flags, c0f
+@item c1_flags, c1f
+@item c2_flags, c2f
+@item c3_flags, c3f
+Set pixel component flags or set flags for all components if @var{all_flags}.
+Available values for component flags are:
+@table @samp
+@item a
+averaged temporal noise (smoother)
+@item p
+mix random noise with a (semi)regular pattern
+@item t
+temporal noise (noise pattern changes between frames)
+@item u
+uniform noise (gaussian otherwise)
+@end table
+@end table
+
+@subsection Examples
+
+Add temporal and uniform noise to input video:
+@example
+noise=alls=20:allf=t+u
+@end example
+
+@section null
+
+Pass the video source unchanged to the output.
+
+@section ocv
+
+Apply a video transform using libopencv.
+
+To enable this filter, install the libopencv library and headers and
+configure FFmpeg with @code{--enable-libopencv}.
+
+It accepts the following parameters:
+
+@table @option
+
+@item filter_name
+The name of the libopencv filter to apply.
+
+@item filter_params
+The parameters to pass to the libopencv filter. If not specified, the default
+values are assumed.
+
+@end table
+
+Refer to the official libopencv documentation for more precise
+information:
+@url{http://opencv.willowgarage.com/documentation/c/image_filtering.html}
+
+Several libopencv filters are supported; see the following subsections.
+
+@anchor{dilate}
+@subsection dilate
+
+Dilate an image by using a specific structuring element.
+It corresponds to the libopencv function @code{cvDilate}.
+
+It accepts the parameters: @var{struct_el}|@var{nb_iterations}.
+
+@var{struct_el} represents a structuring element, and has the syntax:
+@var{cols}x@var{rows}+@var{anchor_x}x@var{anchor_y}/@var{shape}
+
+@var{cols} and @var{rows} represent the number of columns and rows of
+the structuring element, @var{anchor_x} and @var{anchor_y} the anchor
+point, and @var{shape} the shape for the structuring element. @var{shape}
+must be "rect", "cross", "ellipse", or "custom".
+
+If the value for @var{shape} is "custom", it must be followed by a
+string of the form "=@var{filename}". The file with name
+@var{filename} is assumed to represent a binary image, with each
+printable character corresponding to a bright pixel. When a custom
+@var{shape} is used, @var{cols} and @var{rows} are ignored, and the number
+of columns and rows of the read file are assumed instead.
+
+The default value for @var{struct_el} is "3x3+0x0/rect".
+
+@var{nb_iterations} specifies the number of times the transform is
+applied to the image, and defaults to 1.
+
+Some examples:
+@example
+# Use the default values
+ocv=dilate
+
+# Dilate using a structuring element with a 5x5 cross, iterating two times
+ocv=filter_name=dilate:filter_params=5x5+2x2/cross|2
+
+# Read the shape from the file diamond.shape, iterating two times.
+# The file diamond.shape may contain a pattern of characters like this
+# *
+# ***
+# *****
+# ***
+# *
+# The specified columns and rows are ignored
+# but the anchor point coordinates are not
+ocv=dilate:0x0+2x2/custom=diamond.shape|2
+@end example
+
+@subsection erode
+
+Erode an image by using a specific structuring element.
+It corresponds to the libopencv function @code{cvErode}.
+
+It accepts the parameters: @var{struct_el}|@var{nb_iterations},
+with the same syntax and semantics as the @ref{dilate} filter.
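+
+For instance, by analogy with the dilate examples above, an erosion using a
+5x5 cross structuring element iterated two times could be requested with:
+@example
+ocv=filter_name=erode:filter_params=5x5+2x2/cross|2
+@end example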
+
+@subsection smooth
+
+Smooth the input video.
+
+The filter takes the following parameters:
+@var{type}|@var{param1}|@var{param2}|@var{param3}|@var{param4}.
+
+@var{type} is the type of smooth filter to apply, and must be one of
+the following values: "blur", "blur_no_scale", "median", "gaussian",
+or "bilateral". The default value is "gaussian".
+
+The meaning of @var{param1}, @var{param2}, @var{param3}, and @var{param4}
+depend on the smooth type. @var{param1} and
+@var{param2} accept integer positive values or 0. @var{param3} and
+@var{param4} accept floating point values.
+
+The default value for @var{param1} is 3. The default value for the
+other parameters is 0.
+
+These parameters correspond to the parameters assigned to the
+libopencv function @code{cvSmooth}.
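+
+For instance, a median smooth with an (illustrative) aperture of 5 could be
+requested with:
+@example
+ocv=filter_name=smooth:filter_params=median|5
+@end example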
+
+@anchor{overlay}
+@section overlay
+
+Overlay one video on top of another.
+
+It takes two inputs and has one output. The first input is the "main"
+video on which the second input is overlayed.
+
+It accepts the following parameters:
+
+@table @option
+@item x
+@item y
+Set the expression for the x and y coordinates of the overlayed video
+on the main video. Default value is "0" for both expressions. In case
+the expression is invalid, it is set to a huge value (meaning that the
+overlay will not be displayed within the output visible area).
+
+@item eof_action
+The action to take when EOF is encountered on the secondary input; it accepts
+one of the following values:
+
+@table @option
+@item repeat
+Repeat the last frame (the default).
+@item endall
+End both streams.
+@item pass
+Pass the main input through.
+@end table
+
+@item eval
+Set when the expressions for @option{x} and @option{y} are evaluated.
+
+It accepts the following values:
+@table @samp
+@item init
+only evaluate expressions once during the filter initialization or
+when a command is processed
+
+@item frame
+evaluate expressions for each incoming frame
+@end table
+
+Default value is @samp{frame}.
+
+@item shortest
+If set to 1, force the output to terminate when the shortest input
+terminates. Default value is 0.
+
+@item format
+Set the format for the output video.
+
+It accepts the following values:
+@table @samp
+@item yuv420
+force YUV420 output
+
+@item yuv422
+force YUV422 output
+
+@item yuv444
+force YUV444 output
+
+@item rgb
+force RGB output
+@end table
+
+Default value is @samp{yuv420}.
+
+@item rgb @emph{(deprecated)}
+If set to 1, force the filter to accept inputs in the RGB
+color space. Default value is 0. This option is deprecated, use
+@option{format} instead.
+
+@item repeatlast
+If set to 1, force the filter to draw the last overlay frame over the
+main input until the end of the stream. A value of 0 disables this
+behavior. Default value is 1.
+@end table
+
+The @option{x} and @option{y} expressions can contain the following
+parameters.
+
+@table @option
+@item main_w, W
+@item main_h, H
+The main input width and height.
+
+@item overlay_w, w
+@item overlay_h, h
+The overlay input width and height.
+
+@item x
+@item y
+The computed values for @var{x} and @var{y}. They are evaluated for
+each new frame.
+
+@item hsub
+@item vsub
+horizontal and vertical chroma subsample values of the output
+format. For example for the pixel format "yuv422p" @var{hsub} is 2 and
+@var{vsub} is 1.
+
+@item n
+the number of the input frame, starting from 0
+
+@item pos
+the position in the file of the input frame, NAN if unknown
+
+@item t
+The timestamp, expressed in seconds. It's NAN if the input timestamp is unknown.
+
+@end table
+
+Note that the @var{n}, @var{pos}, @var{t} variables are available only
+when evaluation is done @emph{per frame}, and will evaluate to NAN
+when @option{eval} is set to @samp{init}.
+
+Be aware that frames are taken from each input video in timestamp
+order, hence, if their initial timestamps differ, it is a good idea
+to pass the two inputs through a @var{setpts=PTS-STARTPTS} filter to
+have them begin at the same zero timestamp, as the example for
+the @var{movie} filter does.
+
+You can chain together more overlays but you should test the
+efficiency of such an approach.
+
+@subsection Commands
+
+This filter supports the following commands:
+@table @option
+@item x
+@item y
+Modify the x and y of the overlay input.
+The command accepts the same syntax of the corresponding option.
+
+If the specified expression is not valid, it is kept at its current
+value.
+@end table
+
+@subsection Examples
+
+@itemize
+@item
+Draw the overlay at 10 pixels from the bottom right corner of the main
+video:
+@example
+overlay=main_w-overlay_w-10:main_h-overlay_h-10
+@end example
+
+Using named options the example above becomes:
+@example
+overlay=x=main_w-overlay_w-10:y=main_h-overlay_h-10
+@end example
+
+@item
+Insert a transparent PNG logo in the bottom left corner of the input,
+using the @command{ffmpeg} tool with the @code{-filter_complex} option:
+@example
+ffmpeg -i input -i logo -filter_complex 'overlay=10:main_h-overlay_h-10' output
+@end example
+
+@item
+Insert 2 different transparent PNG logos (second logo on bottom
+right corner) using the @command{ffmpeg} tool:
+@example
+ffmpeg -i input -i logo1 -i logo2 -filter_complex 'overlay=x=10:y=H-h-10,overlay=x=W-w-10:y=H-h-10' output
+@end example
+
+@item
+Add a transparent color layer on top of the main video; @code{WxH}
+must specify the size of the main input to the overlay filter:
+@example
+color=color=red@@.3:size=WxH [over]; [in][over] overlay [out]
+@end example
+
+@item
+Play an original video and a filtered version (here with the deshake
+filter) side by side using the @command{ffplay} tool:
+@example
+ffplay input.avi -vf 'split[a][b]; [a]pad=iw*2:ih[src]; [b]deshake[filt]; [src][filt]overlay=w'
+@end example
+
+The above command is the same as:
+@example
+ffplay input.avi -vf 'split[b], pad=iw*2[src], [b]deshake, [src]overlay=w'
+@end example
+
+@item
+Make an overlay slide in from the left along the top of the screen,
+starting at time 2:
+@example
+overlay=x='if(gte(t,2), -w+(t-2)*20, NAN)':y=0
+@end example
+
+@item
+Compose output by putting two input videos side by side:
+@example
+ffmpeg -i left.avi -i right.avi -filter_complex "
+nullsrc=size=200x100 [background];
+[0:v] setpts=PTS-STARTPTS, scale=100x100 [left];
+[1:v] setpts=PTS-STARTPTS, scale=100x100 [right];
+[background][left] overlay=shortest=1 [background+left];
+[background+left][right] overlay=shortest=1:x=100 [left+right]
+"
+@end example
+
+@item
+Mask 10-20 seconds of a video by applying the delogo filter to a section
+@example
+ffmpeg -i test.avi -codec:v:0 wmv2 -ar 11025 -b:v 9000k
+-vf '[in]split[split_main][split_delogo];[split_delogo]trim=start=360:end=371,delogo=0:0:640:480[delogoed];[split_main][delogoed]overlay=eof_action=pass[out]'
+masked.avi
+@end example
+
+@item
+Chain several overlays in cascade:
+@example
+nullsrc=s=200x200 [bg];
+testsrc=s=100x100, split=4 [in0][in1][in2][in3];
+[in0] lutrgb=r=0, [bg] overlay=0:0 [mid0];
+[in1] lutrgb=g=0, [mid0] overlay=100:0 [mid1];
+[in2] lutrgb=b=0, [mid1] overlay=0:100 [mid2];
+[in3] null, [mid2] overlay=100:100 [out0]
+@end example
+
+@end itemize
+
+@section owdenoise
+
+Apply Overcomplete Wavelet denoiser.
+
+The filter accepts the following options:
+
+@table @option
+@item depth
+Set depth.
+
+Larger depth values will denoise lower frequency components more, but
+slow down filtering.
+
+Must be an int in the range 8-16, default is @code{8}.
+
+@item luma_strength, ls
+Set luma strength.
+
+Must be a double value in the range 0-1000, default is @code{1.0}.
+
+@item chroma_strength, cs
+Set chroma strength.
+
+Must be a double value in the range 0-1000, default is @code{1.0}.
+@end table
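+
+For instance, a somewhat stronger denoising than the default, with purely
+illustrative values, could be requested with:
+@example
+owdenoise=depth=10:luma_strength=8:chroma_strength=6
+@end example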
+
+@section pad
+
+Add paddings to the input image, and place the original input at the
+provided @var{x}, @var{y} coordinates.
+
+It accepts the following parameters:
+
+@table @option
+@item width, w
+@item height, h
+Specify an expression for the size of the output image with the
+paddings added. If the value for @var{width} or @var{height} is 0, the
+corresponding input size is used for the output.
+
+The @var{width} expression can reference the value set by the
+@var{height} expression, and vice versa.
+
+The default value of @var{width} and @var{height} is 0.
+
+@item x
+@item y
+Specify the offsets to place the input image at within the padded area,
+with respect to the top/left border of the output image.
+
+The @var{x} expression can reference the value set by the @var{y}
+expression, and vice versa.
+
+The default value of @var{x} and @var{y} is 0.
+
+@item color
+Specify the color of the padded area. For the syntax of this option,
+check the "Color" section in the ffmpeg-utils manual.
+
+The default value of @var{color} is "black".
+@end table
+
+The value for the @var{width}, @var{height}, @var{x}, and @var{y}
+options are expressions containing the following constants:
+
+@table @option
+@item in_w
+@item in_h
+The input video width and height.
+
+@item iw
+@item ih
+These are the same as @var{in_w} and @var{in_h}.
+
+@item out_w
+@item out_h
+The output width and height (the size of the padded area), as
+specified by the @var{width} and @var{height} expressions.
+
+@item ow
+@item oh
+These are the same as @var{out_w} and @var{out_h}.
+
+@item x
+@item y
+The x and y offsets as specified by the @var{x} and @var{y}
+expressions, or NAN if not yet specified.
+
+@item a
+same as @var{iw} / @var{ih}
+
+@item sar
+input sample aspect ratio
+
+@item dar
+input display aspect ratio, it is the same as (@var{iw} / @var{ih}) * @var{sar}
+
+@item hsub
+@item vsub
+The horizontal and vertical chroma subsample values. For example for the
+pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
+@end table
+
+@subsection Examples
+
+@itemize
+@item
+Add paddings with the color "violet" to the input video. The output video
+size is 640x480, and the top-left corner of the input video is placed at
+column 0, row 40
+@example
+pad=640:480:0:40:violet
+@end example
+
+The example above is equivalent to the following command:
+@example
+pad=width=640:height=480:x=0:y=40:color=violet
+@end example
+
+@item
+Pad the input to get an output with dimensions increased by 3/2,
+and put the input video at the center of the padded area:
+@example
+pad="3/2*iw:3/2*ih:(ow-iw)/2:(oh-ih)/2"
+@end example
+
+@item
+Pad the input to get a squared output with size equal to the maximum
+value between the input width and height, and put the input video at
+the center of the padded area:
+@example
+pad="max(iw\,ih):ow:(ow-iw)/2:(oh-ih)/2"
+@end example
+
+@item
+Pad the input to get a final w/h ratio of 16:9:
+@example
+pad="ih*16/9:ih:(ow-iw)/2:(oh-ih)/2"
+@end example
+
+@item
+In case of anamorphic video, in order to set the output display aspect
+correctly, it is necessary to use @var{sar} in the expression,
+according to the relation:
+@example
+(ih * X / ih) * sar = output_dar
+X = output_dar / sar
+@end example
+
+Thus the previous example needs to be modified to:
+@example
+pad="ih*16/9/sar:ih:(ow-iw)/2:(oh-ih)/2"
+@end example
+
+@item
+Double the output size and put the input video in the bottom-right
+corner of the output padded area:
+@example
+pad="2*iw:2*ih:ow-iw:oh-ih"
+@end example
+@end itemize
+
+@section perspective
+
+Correct perspective of video not recorded perpendicular to the screen.
+
+A description of the accepted parameters follows.
+
+@table @option
+@item x0
+@item y0
+@item x1
+@item y1
+@item x2
+@item y2
+@item x3
+@item y3
+Set the coordinate expressions for the top left, top right, bottom left and
+bottom right corners.
+Default values are @code{0:0:W:0:0:H:W:H}, with which the perspective will
+remain unchanged.
+
+The expressions can use the following variables:
+
+@table @option
+@item W
+@item H
+the width and height of the video frame.
+@end table
+
+@item interpolation
+Set interpolation for perspective correction.
+
+It accepts the following values:
+@table @samp
+@item linear
+@item cubic
+@end table
+
+Default value is @samp{linear}.
+@end table
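+
+For instance, assuming the area of interest has its top corners 30 pixels
+inside the frame (a purely illustrative offset), stretching it to the full
+output frame could look like:
+@example
+perspective=x0=30:y0=0:x1=W-30:y1=0:x2=0:y2=H:x3=W:y3=H:interpolation=cubic
+@end example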
+
+@section phase
+
+Delay interlaced video by one field time so that the field order changes.
+
+The intended use is to fix PAL movies that have been captured with the
+opposite field order to the film-to-video transfer.
+
+A description of the accepted parameters follows.
+
+@table @option
+@item mode
+Set phase mode.
+
+It accepts the following values:
+@table @samp
+@item t
+Capture field order top-first, transfer bottom-first.
+Filter will delay the bottom field.
+
+@item b
+Capture field order bottom-first, transfer top-first.
+Filter will delay the top field.
+
+@item p
+Capture and transfer with the same field order. This mode only exists
+for the documentation of the other options to refer to, but if you
+actually select it, the filter will faithfully do nothing.
+
+@item a
+Capture field order determined automatically by field flags, transfer
+opposite.
+Filter selects among @samp{t} and @samp{b} modes on a frame by frame
+basis using field flags. If no field information is available,
+then this works just like @samp{u}.
+
+@item u
+Capture unknown or varying, transfer opposite.
+Filter selects among @samp{t} and @samp{b} on a frame by frame basis by
+analyzing the images and selecting the alternative that produces best
+match between the fields.
+
+@item T
+Capture top-first, transfer unknown or varying.
+Filter selects among @samp{t} and @samp{p} using image analysis.
+
+@item B
+Capture bottom-first, transfer unknown or varying.
+Filter selects among @samp{b} and @samp{p} using image analysis.
+
+@item A
+Capture determined by field flags, transfer unknown or varying.
+Filter selects among @samp{t}, @samp{b} and @samp{p} using field flags and
+image analysis. If no field information is available, then this works just
+like @samp{U}. This is the default mode.
+
+@item U
+Both capture and transfer unknown or varying.
+Filter selects among @samp{t}, @samp{b} and @samp{p} using image analysis only.
+@end table
+@end table
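+
+For instance, when the capture is known to be bottom-first and the transfer
+top-first, delaying the top field could be requested with:
+@example
+phase=mode=b
+@end example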
+
+@section pixdesctest
+
+Pixel format descriptor test filter, mainly useful for internal
+testing. The output video should be equal to the input video.
+
+For example:
+@example
+format=monow, pixdesctest
+@end example
+
+can be used to test the monowhite pixel format descriptor definition.
+
+@section pp
+
+Enable the specified chain of postprocessing subfilters using libpostproc. This
+library should be automatically selected with a GPL build (@code{--enable-gpl}).
+Subfilters must be separated by '/' and can be disabled by prepending a '-'.
+Each subfilter and some options have a short and a long name that can be used
+interchangeably, i.e. dr/dering are the same.
+
+The filters accept the following options:
+
+@table @option
+@item subfilters
+Set postprocessing subfilters string.
+@end table
+
+All subfilters share common options to determine their scope:
+
+@table @option
+@item a/autoq
+Honor the quality commands for this subfilter.
+
+@item c/chrom
+Do chrominance filtering, too (default).
+
+@item y/nochrom
+Do luminance filtering only (no chrominance).
+
+@item n/noluma
+Do chrominance filtering only (no luminance).
+@end table
+
+These options can be appended after the subfilter name, separated by a '|'.
+
+Available subfilters are:
+
+@table @option
+@item hb/hdeblock[|difference[|flatness]]
+Horizontal deblocking filter
+@table @option
+@item difference
+Difference factor where higher values mean more deblocking (default: @code{32}).
+@item flatness
+Flatness threshold where lower values mean more deblocking (default: @code{39}).
+@end table
+
+@item vb/vdeblock[|difference[|flatness]]
+Vertical deblocking filter
+@table @option
+@item difference
+Difference factor where higher values mean more deblocking (default: @code{32}).
+@item flatness
+Flatness threshold where lower values mean more deblocking (default: @code{39}).
+@end table
+
+@item ha/hadeblock[|difference[|flatness]]
+Accurate horizontal deblocking filter
+@table @option
+@item difference
+Difference factor where higher values mean more deblocking (default: @code{32}).
+@item flatness
+Flatness threshold where lower values mean more deblocking (default: @code{39}).
+@end table
+
+@item va/vadeblock[|difference[|flatness]]
+Accurate vertical deblocking filter
+@table @option
+@item difference
+Difference factor where higher values mean more deblocking (default: @code{32}).
+@item flatness
+Flatness threshold where lower values mean more deblocking (default: @code{39}).
+@end table
+@end table
+
+The horizontal and vertical deblocking filters share the difference and
+flatness values so you cannot set different horizontal and vertical
+thresholds.
+
+@table @option
+@item h1/x1hdeblock
+Experimental horizontal deblocking filter
+
+@item v1/x1vdeblock
+Experimental vertical deblocking filter
+
+@item dr/dering
+Deringing filter
+
+@item tn/tmpnoise[|threshold1[|threshold2[|threshold3]]], temporal noise reducer
+@table @option
+@item threshold1
+larger -> stronger filtering
+@item threshold2
+larger -> stronger filtering
+@item threshold3
+larger -> stronger filtering
+@end table
+
+@item al/autolevels[:f/fullyrange], automatic brightness / contrast correction
+@table @option
+@item f/fullyrange
+Stretch luminance to @code{0-255}.
+@end table
+
+@item lb/linblenddeint
+Linear blend deinterlacing filter that deinterlaces the given block by
+filtering all lines with a @code{(1 2 1)} filter.
+
+@item li/linipoldeint
+Linear interpolating deinterlacing filter that deinterlaces the given block by
+linearly interpolating every second line.
+
+@item ci/cubicipoldeint
+Cubic interpolating deinterlacing filter deinterlaces the given block by
+cubically interpolating every second line.
+
+@item md/mediandeint
+Median deinterlacing filter that deinterlaces the given block by applying a
+median filter to every second line.
+
+@item fd/ffmpegdeint
+FFmpeg deinterlacing filter that deinterlaces the given block by filtering every
+second line with a @code{(-1 4 2 4 -1)} filter.
+
+@item l5/lowpass5
+Vertically applied FIR lowpass deinterlacing filter that deinterlaces the given
+block by filtering all lines with a @code{(-1 2 6 2 -1)} filter.
+
+@item fq/forceQuant[|quantizer]
+Overrides the quantizer table from the input with the constant quantizer you
+specify.
+@table @option
+@item quantizer
+Quantizer to use
+@end table
+
+@item de/default
+Default pp filter combination (@code{hb|a,vb|a,dr|a})
+
+@item fa/fast
+Fast pp filter combination (@code{h1|a,v1|a,dr|a})
+
+@item ac
+High quality pp filter combination (@code{ha|a|128|7,va|a,dr|a})
+@end table
+
+@subsection Examples
+
+@itemize
+@item
+Apply horizontal and vertical deblocking, deringing and automatic
+brightness/contrast:
+@example
+pp=hb/vb/dr/al
+@end example
+
+@item
+Apply default filters without brightness/contrast correction:
+@example
+pp=de/-al
+@end example
+
+@item
+Apply default filters and temporal denoiser:
+@example
+pp=default/tmpnoise|1|2|3
+@end example
+
+@item
+Apply deblocking on luminance only, and switch vertical deblocking on or off
+automatically depending on available CPU time:
+@example
+pp=hb|y/vb|a
+@end example
+@end itemize
+
+@section psnr
+
+Obtain the average, maximum and minimum PSNR (Peak Signal to Noise
+Ratio) between two input videos.
+
+This filter takes two input videos; the first input is considered the
+"main" source and is passed unchanged to the output. The second input
+is used as a "reference" video for computing
+the PSNR.
+
+Both video inputs must have the same resolution and pixel format for
+this filter to work correctly. Also it assumes that both inputs
+have the same number of frames, which are compared one by one.
+
+The obtained average PSNR is printed through the logging system.
+
+The filter stores the accumulated MSE (mean squared error) of each
+frame, and at the end of the processing it is averaged across all frames
+equally, and the following formula is applied to obtain the PSNR:
+
+@example
+PSNR = 10*log10(MAX^2/MSE)
+@end example
+
+Where MAX is the average of the maximum values of each component of the
+image.
+
+The description of the accepted parameters follows.
+
+@table @option
+@item stats_file, f
+If specified the filter will use the named file to save the PSNR of
+each individual frame.
+@end table
+
+The file written when @var{stats_file} is specified contains a sequence of
+key/value pairs of the form @var{key}:@var{value} for each pair of compared
+frames.
+
+A description of each shown parameter follows:
+
+@table @option
+@item n
+sequential number of the input frame, starting from 1
+
+@item mse_avg
+Mean Square Error pixel-by-pixel average difference of the compared
+frames, averaged over all the image components.
+
+@item mse_y, mse_u, mse_v, mse_r, mse_g, mse_b, mse_a
+Mean Square Error pixel-by-pixel average difference of the compared
+frames for the component specified by the suffix.
+
+@item psnr_y, psnr_u, psnr_v, psnr_r, psnr_g, psnr_b, psnr_a
+Peak Signal to Noise ratio of the compared frames for the component
+specified by the suffix.
+@end table
+
+For example:
+@example
+movie=ref_movie.mpg, setpts=PTS-STARTPTS [main];
+[main][ref] psnr="stats_file=stats.log" [out]
+@end example
+
+In this example the input file being processed is compared with the
+reference file @file{ref_movie.mpg}. The PSNR of each individual frame
+is stored in @file{stats.log}.
+
+@anchor{pullup}
+@section pullup
+
+Pulldown reversal (inverse telecine) filter, capable of handling mixed
+hard-telecine, 24000/1001 fps progressive, and 30000/1001 fps progressive
+content.
+
+The pullup filter is designed to take advantage of future context in making
+its decisions. This filter is stateless in the sense that it does not lock
+onto a pattern to follow, but it instead looks forward to the following
+fields in order to identify matches and rebuild progressive frames.
+
+To produce content with an even framerate, insert the fps filter after
+pullup: use @code{fps=24000/1001} if the input frame rate is 29.97fps,
+and @code{fps=24} for 30fps and the (rare) telecined 25fps input.
+
+The filter accepts the following options:
+
+@table @option
+@item jl
+@item jr
+@item jt
+@item jb
+These options set the amount of "junk" to ignore at the left, right, top, and
+bottom of the image, respectively. Left and right are in units of 8 pixels,
+while top and bottom are in units of 2 lines.
+The default is 8 pixels on each side.
+
+@item sb
+Set the strict breaks. Setting this option to 1 will reduce the chances of
+the filter generating an occasional mismatched frame, but it may also cause an
+excessive number of frames to be dropped during high motion sequences.
+Conversely, setting it to -1 will make the filter match fields more easily.
+This may help processing of video where there is slight blurring between
+the fields, but may also cause there to be interlaced frames in the output.
+Default value is @code{0}.
+
+@item mp
+Set the metric plane to use. It accepts the following values:
+@table @samp
+@item l
+Use luma plane.
+
+@item u
+Use chroma blue plane.
+
+@item v
+Use chroma red plane.
+@end table
+
+This option may be set to use a chroma plane instead of the default luma plane
+for the filter's computations. This may improve accuracy on very clean
+source material, but more likely will decrease accuracy, especially if there
+is chroma noise (rainbow effect) or any grayscale video.
+The main purpose of setting @option{mp} to a chroma plane is to reduce CPU
+load and make pullup usable in realtime on slow machines.
+@end table
+
+For best results (without duplicated frames in the output file) it is
+necessary to change the output frame rate. For example, to inverse
+telecine NTSC input:
+@example
+ffmpeg -i input -vf pullup -r 24000/1001 ...
+@end example
+
+@section removelogo
+
+Suppress a TV station logo, using an image file to determine which
+pixels comprise the logo. It works by filling in the pixels that
+comprise the logo with neighboring pixels.
+
+The filter accepts the following options:
+
+@table @option
+@item filename, f
+Set the filter bitmap file, which can be any image format supported by
+libavformat. The width and height of the image file must match those of the
+video stream being processed.
+@end table
+
+Pixels in the provided bitmap image with a value of zero are not
+considered part of the logo; non-zero pixels are considered part of
+the logo. If you use white (255) for the logo and black (0) for the
+rest, you will be safe. For making the filter bitmap, it is
+recommended to take a screen capture of a black frame with the logo
+visible, and then use a threshold filter followed by the erode
+filter once or twice.
+
+If needed, little splotches can be fixed manually. Remember that if
+logo pixels are not covered, the filter quality will be much
+reduced. Marking too many pixels as part of the logo does not hurt as
+much, but it will increase the amount of blurring needed to cover over
+the image and will destroy more information than necessary, and extra
+pixels will slow things down on a large logo.
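+
+For instance, assuming a prepared mask image named @file{logo_mask.png} (a
+placeholder name) with the same dimensions as the video, the filter could be
+invoked as:
+@example
+removelogo=filename=logo_mask.png
+@end example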
+
+@section rotate
+
+Rotate video by an arbitrary angle expressed in radians.
+
+The filter accepts the following options:
+
+@table @option
+@item angle, a
+Set an expression for the angle by which to rotate the input video
+clockwise, expressed as a number of radians. A negative value will
+result in a counter-clockwise rotation. By default it is set to "0".
+
+This expression is evaluated for each frame.
+
+@item out_w, ow
+Set the output width expression, default value is "iw".
+This expression is evaluated just once during configuration.
+
+@item out_h, oh
+Set the output height expression, default value is "ih".
+This expression is evaluated just once during configuration.
+
+@item bilinear
+Enable bilinear interpolation if set to 1, a value of 0 disables
+it. Default value is 1.
+
+@item fillcolor, c
+Set the color used to fill the output area not covered by the rotated
+image. For the general syntax of this option, check the "Color" section in the
+ffmpeg-utils manual. If the special value "none" is selected then no
+background is printed (useful for example if the background is never shown).
+
+Default value is "black".
+@end table
+
+The expressions for the angle and the output size can contain the
+following constants and functions:
+
+@table @option
+@item n
+sequential number of the input frame, starting from 0. It is always NAN
+before the first frame is filtered.
+
+@item t
+time in seconds of the input frame, it is set to 0 when the filter is
+configured. It is always NAN before the first frame is filtered.
+
+@item hsub
+@item vsub
+horizontal and vertical chroma subsample values. For example for the
+pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
+
+@item in_w, iw
+@item in_h, ih
+the input video width and height
+
+@item out_w, ow
+@item out_h, oh
+the output width and height, that is the size of the padded area as
+specified by the @var{width} and @var{height} expressions
+
+@item rotw(a)
+@item roth(a)
+the minimal width/height required for completely containing the input
+video rotated by @var{a} radians.
+
+These are only available when computing the @option{out_w} and
+@option{out_h} expressions.
+@end table
+
+@subsection Examples
+
+@itemize
+@item
+Rotate the input by PI/6 radians clockwise:
+@example
+rotate=PI/6
+@end example
+
+@item
+Rotate the input by PI/6 radians counter-clockwise:
+@example
+rotate=-PI/6
+@end example
+
+@item
+Rotate the input by 45 degrees clockwise:
+@example
+rotate=45*PI/180
+@end example
+
+@item
+Apply a constant rotation with period T, starting from an angle of PI/3:
+@example
+rotate=PI/3+2*PI*t/T
+@end example
+
+@item
+Make the input video rotation oscillating with a period of T
+seconds and an amplitude of A radians:
+@example
+rotate=A*sin(2*PI/T*t)
+@end example
+
+@item
+Rotate the video, output size is chosen so that the whole rotating
+input video is always completely contained in the output:
+@example
+rotate='2*PI*t:ow=hypot(iw,ih):oh=ow'
+@end example
+
+@item
+Rotate the video, reduce the output size so that no background is ever
+shown:
+@example
+rotate=2*PI*t:ow='min(iw,ih)/sqrt(2)':oh=ow:c=none
+@end example
+@end itemize
+
+@subsection Commands
+
+The filter supports the following commands:
+
+@table @option
+@item a, angle
+Set the angle expression.
+The command accepts the same syntax of the corresponding option.
+
+If the specified expression is not valid, it is kept at its current
+value.
+@end table
+
+@section sab
+
+Apply Shape Adaptive Blur.
+
+The filter accepts the following options:
+
+@table @option
+@item luma_radius, lr
+Set luma blur filter strength, must be a value in range 0.1-4.0, default
+value is 1.0. A greater value will result in a more blurred image, and
+in slower processing.
+
+@item luma_pre_filter_radius, lpfr
+Set luma pre-filter radius, must be a value in the 0.1-2.0 range, default
+value is 1.0.
+
+@item luma_strength, ls
+Set luma maximum difference between pixels to still be considered, must
+be a value in the 0.1-100.0 range, default value is 1.0.
+
+@item chroma_radius, cr
+Set chroma blur filter strength, must be a value in range 0.1-4.0. A
+greater value will result in a more blurred image, and in slower
+processing.
+
+@item chroma_pre_filter_radius, cpfr
+Set chroma pre-filter radius, must be a value in the 0.1-2.0 range.
+
+@item chroma_strength, cs
+Set chroma maximum difference between pixels to still be considered,
+must be a value in the 0.1-100.0 range.
+@end table
+
+Each chroma option value, if not explicitly specified, is set to the
+corresponding luma option value.
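+
+For instance, a gentle shape adaptive blur, with purely illustrative values,
+could be requested with:
+@example
+sab=luma_radius=1.5:luma_strength=0.8
+@end example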
+
+@anchor{scale}
+@section scale
+
+Scale (resize) the input video, using the libswscale library.
+
+The scale filter forces the output display aspect ratio to be the same
+as that of the input, by changing the output sample aspect ratio.
+
+If the input image format is different from the format requested by
+the next filter, the scale filter will convert the input to the
+requested format.
+
+@subsection Options
+The filter accepts the following options, or any of the options
+supported by the libswscale scaler.
+
+See @ref{scaler_options,,the ffmpeg-scaler manual,ffmpeg-scaler} for
+the complete list of scaler options.
+
+@table @option
+@item width, w
+@item height, h
+Set the output video dimension expression. Default value is the input
+dimension.
+
+If the value is 0, the input width is used for the output.
+
+If one of the values is -1, the scale filter will use a value that
+maintains the aspect ratio of the input image, calculated from the
+other specified dimension. If both of them are -1, the input size is
+used.
+
+If one of the values is -n with n > 1, the scale filter will also use a value
+that maintains the aspect ratio of the input image, calculated from the other
+specified dimension. After that it will, however, make sure that the calculated
+dimension is divisible by n and adjust the value if necessary.
+
+See below for the list of accepted constants for use in the dimension
+expression.
+
+@item interl
+Set the interlacing mode. It accepts the following values:
+
+@table @samp
+@item 1
+Force interlaced aware scaling.
+
+@item 0
+Do not apply interlaced scaling.
+
+@item -1
+Select interlaced aware scaling depending on whether the source frames
+are flagged as interlaced or not.
+@end table
+
+Default value is @samp{0}.
+
+@item flags
+Set libswscale scaling flags. See
+@ref{sws_flags,,the ffmpeg-scaler manual,ffmpeg-scaler} for the
+complete list of values. If not explicitly specified the filter applies
+the default flags.
+
+@item size, s
+Set the video size. For the syntax of this option, check the "Video size"
+section in the ffmpeg-utils manual.
+
+@item in_color_matrix
+@item out_color_matrix
+Set in/output YCbCr color space type.
+
+This allows the autodetected value to be overridden as well as allows forcing
+a specific value used for the output and encoder.
+
+If not specified, the color space type depends on the pixel format.
+
+Possible values:
+
+@table @samp
+@item auto
+Choose automatically.
+
+@item bt709
+Format conforming to International Telecommunication Union (ITU)
+Recommendation BT.709.
+
+@item fcc
+Set color space conforming to the United States Federal Communications
+Commission (FCC) Code of Federal Regulations (CFR) Title 47 (2003) 73.682 (a).
+
+@item bt601
+Set color space conforming to:
+
+@itemize
+@item
+ITU Radiocommunication Sector (ITU-R) Recommendation BT.601
+
+@item
+ITU-R Rec. BT.470-6 (1998) Systems B, B1, and G
+
+@item
+Society of Motion Picture and Television Engineers (SMPTE) ST 170:2004
+
+@end itemize
+
+@item smpte240m
+Set color space conforming to SMPTE ST 240:1999.
+@end table
+
+@item in_range
+@item out_range
+Set in/output YCbCr sample range.
+
+This allows the autodetected value to be overridden as well as allows forcing
+a specific value used for the output and encoder. If not specified, the
+range depends on the pixel format. Possible values:
+
+@table @samp
+@item auto
+Choose automatically.
+
+@item jpeg/full/pc
+Set full range (0-255 in case of 8-bit luma).
+
+@item mpeg/tv
+Set "MPEG" range (16-235 in case of 8-bit luma).
+@end table
+
+@item force_original_aspect_ratio
+Enable decreasing or increasing output video width or height if necessary to
+keep the original aspect ratio. Possible values:
+
+@table @samp
+@item disable
+Scale the video as specified and disable this feature.
+
+@item decrease
+The output video dimensions will automatically be decreased if needed.
+
+@item increase
+The output video dimensions will automatically be increased if needed.
+
+@end table
+
+One useful instance of this option is that when you know a specific device's
+maximum allowed resolution, you can use this to limit the output video to
+that, while retaining the aspect ratio. For example, device A allows
+1280x720 playback, and your video is 1920x800. Using this option (set it to
+decrease) and specifying 1280x720 to the command line makes the output
+1280x533.
+
+Please note that this is different from specifying -1 for @option{w}
+or @option{h}; you still need to specify the output resolution for this option
+to work.
+
+@end table
+
+The values of the @option{w} and @option{h} options are expressions
+containing the following constants:
+
+@table @var
+@item in_w
+@item in_h
+The input width and height
+
+@item iw
+@item ih
+These are the same as @var{in_w} and @var{in_h}.
+
+@item out_w
+@item out_h
+The output (scaled) width and height
+
+@item ow
+@item oh
+These are the same as @var{out_w} and @var{out_h}
+
+@item a
+The same as @var{iw} / @var{ih}
+
+@item sar
+input sample aspect ratio
+
+@item dar
+The input display aspect ratio. Calculated from @code{(iw / ih) * sar}.
+
+@item hsub
+@item vsub
+horizontal and vertical input chroma subsample values. For example for the
+pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
+
+@item ohsub
+@item ovsub
+horizontal and vertical output chroma subsample values. For example for the
+pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
+@end table
+
+@subsection Examples
+
+@itemize
+@item
+Scale the input video to a size of 200x100
+@example
+scale=w=200:h=100
+@end example
+
+This is equivalent to:
+@example
+scale=200:100
+@end example
+
+or:
+@example
+scale=200x100
+@end example
+
+@item
+Specify a size abbreviation for the output size:
+@example
+scale=qcif
+@end example
+
+which can also be written as:
+@example
+scale=size=qcif
+@end example
+
+@item
+Scale the input to 2x:
+@example
+scale=w=2*iw:h=2*ih
+@end example
+
+@item
+The above is the same as:
+@example
+scale=2*in_w:2*in_h
+@end example
+
+@item
+Scale the input to 2x with forced interlaced scaling:
+@example
+scale=2*iw:2*ih:interl=1
+@end example
+
+@item
+Scale the input to half size:
+@example
+scale=w=iw/2:h=ih/2
+@end example
+
+@item
+Increase the width, and set the height to the same size:
+@example
+scale=3/2*iw:ow
+@end example
+
+@item
+Seek Greek harmony:
+@example
+scale=iw:1/PHI*iw
+scale=ih*PHI:ih
+@end example
+
+@item
+Increase the height, and set the width to 3/2 of the height:
+@example
+scale=w=3/2*oh:h=3/5*ih
+@end example
+
+@item
+Increase the size, making the size a multiple of the chroma
+subsample values:
+@example
+scale="trunc(3/2*iw/hsub)*hsub:trunc(3/2*ih/vsub)*vsub"
+@end example
+
+@item
+Increase the width to a maximum of 500 pixels,
+keeping the same aspect ratio as the input:
+@example
+scale=w='min(500\, iw*3/2):h=-1'
+@end example
+@end itemize
+
+@section separatefields
+
+The @code{separatefields} filter takes a frame-based video input and splits
+each frame into its component fields, producing a new half height clip
+with twice the frame rate and twice the frame count.
+
+This filter uses field-dominance information in the frame to decide which
+of each pair of fields to place first in the output.
+If it gets it wrong use the @ref{setfield} filter before the
+@code{separatefields} filter.
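+
+For instance, if the frames are known to be top-field-first but are not
+flagged as such, the field order could be forced before splitting:
+@example
+setfield=tff,separatefields
+@end example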
+
+@section setdar, setsar
+
+The @code{setdar} filter sets the Display Aspect Ratio for the filter
+output video.
+
+This is done by changing the specified Sample (aka Pixel) Aspect
+Ratio, according to the following equation:
+@example
+@var{DAR} = @var{HORIZONTAL_RESOLUTION} / @var{VERTICAL_RESOLUTION} * @var{SAR}
+@end example
+
+Keep in mind that the @code{setdar} filter does not modify the pixel
+dimensions of the video frame. Also, the display aspect ratio set by
+this filter may be changed by later filters in the filterchain,
+e.g. in case of scaling or if another "setdar" or a "setsar" filter is
+applied.
+
+The @code{setsar} filter sets the Sample (aka Pixel) Aspect Ratio for
+the filter output video.
+
+Note that as a consequence of the application of this filter, the
+output display aspect ratio will change according to the equation
+above.
+
+Keep in mind that the sample aspect ratio set by the @code{setsar}
+filter may be changed by later filters in the filterchain, e.g. if
+another "setsar" or a "setdar" filter is applied.
+
+It accepts the following parameters:
+
+@table @option
+@item r, ratio, dar (@code{setdar} only), sar (@code{setsar} only)
+Set the aspect ratio used by the filter.
+
+The parameter can be a floating point number string, an expression, or
+a string of the form @var{num}:@var{den}, where @var{num} and
+@var{den} are the numerator and denominator of the aspect ratio. If
+the parameter is not specified, the value "0" is assumed.
+In case the form "@var{num}:@var{den}" is used, the @code{:} character
+should be escaped.
+
+@item max
+Set the maximum integer value to use for expressing numerator and
+denominator when reducing the expressed aspect ratio to a rational.
+Default value is @code{100}.
+
+@end table
+
+The parameter @var{sar} is an expression containing
+the following constants:
+
+@table @option
+@item E, PI, PHI
+These are approximated values for the mathematical constants e
+(Euler's number), pi (Greek pi), and phi (the golden ratio).
+
+@item w, h
+The input width and height.
+
+@item a
+These are the same as @var{w} / @var{h}.
+
+@item sar
+The input sample aspect ratio.
+
+@item dar
+The input display aspect ratio. It is the same as
+(@var{w} / @var{h}) * @var{sar}.
+
+@item hsub, vsub
+Horizontal and vertical chroma subsample values. For example, for the
+pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
+@end table
+
+@subsection Examples
+
+@itemize
+
+@item
+To change the display aspect ratio to 16:9, specify one of the following:
+@example
+setdar=dar=1.77777
+setdar=dar=16/9
+@end example
+
+@item
+To change the sample aspect ratio to 10:11, specify:
+@example
+setsar=sar=10/11
+@end example
+
+@item
+To set a display aspect ratio of 16:9, and specify a maximum integer value of
+1000 in the aspect ratio reduction, use the command:
+@example
+setdar=ratio=16/9:max=1000
+@end example
+
+@end itemize
+
+@anchor{setfield}
+@section setfield
+
+Force field for the output video frame.
+
+The @code{setfield} filter marks the interlace type field for the
+output frames. It does not change the input frame, but only sets the
+corresponding property, which affects how the frame is treated by
+following filters (e.g. @code{fieldorder} or @code{yadif}).
+
+The filter accepts the following options:
+
+@table @option
+
+@item mode
+Available values are:
+
+@table @samp
+@item auto
+Keep the same field property.
+
+@item bff
+Mark the frame as bottom-field-first.
+
+@item tff
+Mark the frame as top-field-first.
+
+@item prog
+Mark the frame as progressive.
+@end table
+@end table
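+
+For instance, marking every frame as top-field-first, so that following
+filters treat it accordingly, could be requested with:
+@example
+setfield=mode=tff
+@end example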
+
+@section showinfo
+
+Show a line containing various information for each input video frame.
+The input video is not modified.
+
+The shown line contains a sequence of key/value pairs of the form
+@var{key}:@var{value}.
+
+It accepts the following parameters:
+
+@table @option
+@item n
+The (sequential) number of the input frame, starting from 0.
+
+@item pts
+The Presentation TimeStamp of the input frame, expressed as a number of
+time base units. The time base unit depends on the filter input pad.
+
+@item pts_time
+The Presentation TimeStamp of the input frame, expressed as a number of
+seconds.
+
+@item pos
+The position of the frame in the input stream, or -1 if this information is
+unavailable and/or meaningless (for example in case of synthetic video).
+
+@item fmt
+The pixel format name.
+
+@item sar
+The sample aspect ratio of the input frame, expressed in the form
+@var{num}/@var{den}.
+
+@item s
+The size of the input frame. For the syntax of this option, check the "Video size"
+section in the ffmpeg-utils manual.
+
+@item i
+The type of interlaced mode ("P" for "progressive", "T" for top field first, "B"
+for bottom field first).
+
+@item iskey
+This is 1 if the frame is a key frame, 0 otherwise.
+
+@item type
+The picture type of the input frame ("I" for an I-frame, "P" for a
+P-frame, "B" for a B-frame, or "?" for an unknown type).
+Also refer to the documentation of the @code{AVPictureType} enum and of
+the @code{av_get_picture_type_char} function defined in
+@file{libavutil/avutil.h}.
+
+@item checksum
+The Adler-32 checksum (printed in hexadecimal) of all the planes of the input frame.
+
+@item plane_checksum
+The Adler-32 checksum (printed in hexadecimal) of each plane of the input frame,
+expressed in the form "[@var{c0} @var{c1} @var{c2} @var{c3}]".
+@end table
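+
+For instance, one way to inspect the per-frame information of a file without
+writing any output file is to combine the filter with the null muxer:
+@example
+ffmpeg -i input -vf showinfo -f null -
+@end example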
+
+@section shuffleplanes
+
+Reorder and/or duplicate video planes.
+
+It accepts the following parameters:
+
+@table @option
+
+@item map0
+The index of the input plane to be used as the first output plane.
+
+@item map1
+The index of the input plane to be used as the second output plane.
+
+@item map2
+The index of the input plane to be used as the third output plane.
+
+@item map3
+The index of the input plane to be used as the fourth output plane.
+
+@end table
+
+The first plane has the index 0. The default is to keep the input unchanged.
+
+Swap the second and third planes of the input:
+@example
+ffmpeg -i INPUT -vf shuffleplanes=0:2:1:3 OUTPUT
+@end example
+
+@anchor{smartblur}
+@section smartblur
+
+Blur the input video without impacting the outlines.
+
+It accepts the following options:
+
+@table @option
+@item luma_radius, lr
+Set the luma radius. The option value must be a float number in
+the range [0.1,5.0] that specifies the variance of the gaussian filter
+used to blur the image (slower if larger). Default value is 1.0.
+
+@item luma_strength, ls
+Set the luma strength. The option value must be a float number
+in the range [-1.0,1.0] that configures the blurring. A value included
+in [0.0,1.0] will blur the image whereas a value included in
+[-1.0,0.0] will sharpen the image. Default value is 1.0.
+
+@item luma_threshold, lt
+Set the luma threshold used as a coefficient to determine
+whether a pixel should be blurred or not. The option value must be an
+integer in the range [-30,30]. A value of 0 will filter all the image,
+a value included in [0,30] will filter flat areas and a value included
+in [-30,0] will filter edges. Default value is 0.
+
+@item chroma_radius, cr
+Set the chroma radius. The option value must be a float number in
+the range [0.1,5.0] that specifies the variance of the gaussian filter
+used to blur the image (slower if larger). Default value is 1.0.
+
+@item chroma_strength, cs
+Set the chroma strength. The option value must be a float number
+in the range [-1.0,1.0] that configures the blurring. A value included
+in [0.0,1.0] will blur the image whereas a value included in
+[-1.0,0.0] will sharpen the image. Default value is 1.0.
+
+@item chroma_threshold, ct
+Set the chroma threshold used as a coefficient to determine
+whether a pixel should be blurred or not. The option value must be an
+integer in the range [-30,30]. A value of 0 will filter all the image,
+a value included in [0,30] will filter flat areas and a value included
+in [-30,0] will filter edges. Default value is 0.
+@end table
+
+If a chroma option is not explicitly set, the corresponding luma value
+is used.
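+
+For instance, a light blur of flat areas that leaves edges mostly untouched,
+with purely illustrative values, could be requested with:
+@example
+smartblur=lr=1.0:ls=0.6:lt=15
+@end example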
+
+@section stereo3d
+
+Convert between different stereoscopic image formats.
+
+The filter accepts the following options:
+
+@table @option
+@item in
+Set stereoscopic image format of input.
+
+Available values for input image formats are:
+@table @samp
+@item sbsl
+side by side parallel (left eye left, right eye right)
+
+@item sbsr
+side by side crosseye (right eye left, left eye right)
+
+@item sbs2l
+side by side parallel with half width resolution
+(left eye left, right eye right)
+
+@item sbs2r
+side by side crosseye with half width resolution
+(right eye left, left eye right)
+
+@item abl
+above-below (left eye above, right eye below)
+
+@item abr
+above-below (right eye above, left eye below)
+
+@item ab2l
+above-below with half height resolution
+(left eye above, right eye below)
+
+@item ab2r
+above-below with half height resolution
+(right eye above, left eye below)
+
+@item al
+alternating frames (left eye first, right eye second)
+
+@item ar
+alternating frames (right eye first, left eye second)
+
+@end table
+Default value is @samp{sbsl}.
+
+@item out
+Set stereoscopic image format of output.
+
+Available values for output image formats are all the input formats as well as:
+@table @samp
+@item arbg
+anaglyph red/blue gray
+(red filter on left eye, blue filter on right eye)
+
+@item argg
+anaglyph red/green gray
+(red filter on left eye, green filter on right eye)
+
+@item arcg
+anaglyph red/cyan gray
+(red filter on left eye, cyan filter on right eye)
+
+@item arch
+anaglyph red/cyan half colored
+(red filter on left eye, cyan filter on right eye)
+
+@item arcc
+anaglyph red/cyan color
+(red filter on left eye, cyan filter on right eye)
+
+@item arcd
+anaglyph red/cyan color optimized with the least squares projection of dubois
+(red filter on left eye, cyan filter on right eye)
+
+@item agmg
+anaglyph green/magenta gray
+(green filter on left eye, magenta filter on right eye)
+
+@item agmh
+anaglyph green/magenta half colored
+(green filter on left eye, magenta filter on right eye)
+
+@item agmc
+anaglyph green/magenta colored
+(green filter on left eye, magenta filter on right eye)
+
+@item agmd
+anaglyph green/magenta color optimized with the least squares projection of dubois
+(green filter on left eye, magenta filter on right eye)
+
+@item aybg
+anaglyph yellow/blue gray
+(yellow filter on left eye, blue filter on right eye)
+
+@item aybh
+anaglyph yellow/blue half colored
+(yellow filter on left eye, blue filter on right eye)
+
+@item aybc
+anaglyph yellow/blue colored
+(yellow filter on left eye, blue filter on right eye)
+
+@item aybd
+anaglyph yellow/blue color optimized with the least squares projection of dubois
+(yellow filter on left eye, blue filter on right eye)
+
+@item irl
+interleaved rows (left eye has top row, right eye starts on next row)
+
+@item irr
+interleaved rows (right eye has top row, left eye starts on next row)
+
+@item ml
+mono output (left eye only)
+
+@item mr
+mono output (right eye only)
+@end table
+Default value is @samp{arcd}.
@end table
@subsection Examples