@item ow, oh
same as @var{out_w} and @var{out_h}
-@item n
-the number of input frame, starting from 0
+@item a
+same as @var{iw} / @var{ih}
+
+@item sar
+input sample aspect ratio
+
+@item dar
+input display aspect ratio; it is the same as (@var{iw} / @var{ih}) * @var{sar}
+
+@item hsub, vsub
+horizontal and vertical chroma subsample values. For example for the
+pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
+
+@item n
+the number of the input frame, starting from 0
+
- @item pos
- the position in the file of the input frame, NAN if unknown
-
+@item t
+timestamp expressed in seconds, NAN if the input timestamp is unknown
+
+@end table
+
+The expression for @var{out_w} may depend on the value of @var{out_h},
+and the expression for @var{out_h} may depend on @var{out_w}, but they
+cannot depend on @var{x} and @var{y}, as @var{x} and @var{y} are
+evaluated after @var{out_w} and @var{out_h}.
+
+The @var{x} and @var{y} parameters specify the expressions for the
+position of the top-left corner of the output (non-cropped) area. They
+are evaluated for each frame. If the evaluated value is not valid, it
+is approximated to the nearest valid value.
+
+The expression for @var{x} may depend on @var{y}, and the expression
+for @var{y} may depend on @var{x}.
+
+@subsection Examples
+
+@itemize
+@item
+Crop area with size 100x100 at position (12,34).
+@example
+crop=100:100:12:34
+@end example
+
+Using named options, the example above becomes:
+@example
+crop=w=100:h=100:x=12:y=34
+@end example
+
+@item
+Crop the central input area with size 100x100:
+@example
+crop=100:100
+@end example
+
+@item
+Crop the central input area with size 2/3 of the input video:
+@example
+crop=2/3*in_w:2/3*in_h
+@end example
+
+@item
+Crop the input video central square:
+@example
+crop=in_h
+@end example
+
+@item
+Delimit the rectangle with the top-left corner placed at position
+100:100 and the bottom-right corner corresponding to the bottom-right
+corner of the input image:
+@example
+crop=in_w-100:in_h-100:100:100
+@end example
+
+@item
+Crop 10 pixels from the left and right borders, and 20 pixels from
+the top and bottom borders:
+@example
+crop=in_w-2*10:in_h-2*20
+@end example
+
+@item
+Keep only the bottom right quarter of the input image:
+@example
+crop=in_w/2:in_h/2:in_w/2:in_h/2
+@end example
+
+@item
+Crop height for getting Greek harmony:
+@example
+crop=in_w:1/PHI*in_w
+@end example
+
+@item
+Apply a trembling effect:
+@example
+crop=in_w/2:in_h/2:(in_w-out_w)/2+((in_w-out_w)/2)*sin(n/10):(in_h-out_h)/2+((in_h-out_h)/2)*sin(n/7)
+@end example
+
+@item
+Apply an erratic camera effect depending on the timestamp:
+@example
+crop=in_w/2:in_h/2:(in_w-out_w)/2+((in_w-out_w)/2)*sin(t*10):(in_h-out_h)/2+((in_h-out_h)/2)*sin(t*13)
+@end example
+
+@item
+Set x depending on the value of y:
+@example
+crop=in_w/2:in_h/2:y:10+10*sin(n/10)
+@end example
+@end itemize
+
+@section cropdetect
+
+Auto-detect crop size.
+
+It calculates the necessary cropping parameters and prints the
+recommended parameters through the logging system. The detected
+dimensions correspond to the non-black area of the input video.
+
+It accepts the syntax:
+@example
+cropdetect[=@var{limit}[:@var{round}[:@var{reset}]]]
+@end example
+
+@table @option
+
+@item limit
+Set the threshold, which can be optionally specified from nothing (0)
+to everything (255). It defaults to 24.
+
+@item round
+Value which the width/height should be divisible by, defaults to
+16. The offset is automatically adjusted to center the video. Use 2 to
+get only even dimensions (needed for 4:2:2 video). 16 is best when
+encoding to most video codecs.
+
+@item reset
+Counter that determines after how many frames cropdetect will reset
+the previously detected largest video area and start over to detect
+the current optimal crop area. Defaults to 0.
+
+This can be useful when channel logos distort the video area. A value
+of 0 indicates to never reset, and to return the largest area
+encountered during playback.
+@end table
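+
+For example, a possible way to run the detection with @command{ffmpeg},
+discarding the output (the input file name is only a placeholder):
+@example
+ffmpeg -i input.mkv -vf cropdetect=24:16:0 -f null -
+@end example
+
+The @code{crop=...} parameters printed in the log can then be passed
+to the crop filter in a second run.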
+
+@section decimate
+
+Drop frames that do not differ greatly from the previous frame in
+order to reduce framerate.
+
+The main use of this filter is for very-low-bitrate encoding
+(e.g. streaming over dialup modem), but it could in theory be used for
+fixing movies that were inverse-telecined incorrectly.
+
+The filter accepts parameters as a list of @var{key}=@var{value}
+pairs, separated by ":". If the key of the first option is omitted,
+the arguments are interpreted according to the syntax:
+@option{max}:@option{hi}:@option{lo}:@option{frac}.
+
+A description of the accepted options follows.
+
+@table @option
+@item max
+Set the maximum number of consecutive frames which can be dropped (if
+positive), or the minimum interval between dropped frames (if
+negative). If the value is 0, the frame is dropped regardless of the
+number of previous sequentially dropped frames.
+
+Default value is 0.
+
+@item hi
+@item lo
+@item frac
+Set the dropping threshold values.
+
+Values for @option{hi} and @option{lo} are for 8x8 pixel blocks and
+represent actual pixel value differences, so a threshold of 64
+corresponds to 1 unit of difference for each pixel, or the same spread
+out differently over the block.
+
+A frame is a candidate for dropping if no 8x8 blocks differ by more
+than a threshold of @option{hi}, and if no more than @option{frac} blocks (1
+meaning the whole image) differ by more than a threshold of @option{lo}.
+
+Default value for @option{hi} is 64*12, default value for @option{lo} is
+64*5, and default value for @option{frac} is 0.33.
+@end table
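+
+For example, spelling out the documented defaults explicitly
+(64*12 = 768 and 64*5 = 320):
+@example
+decimate=max=0:hi=768:lo=320:frac=0.33
+@end example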
+
+@section delogo
+
+Suppress a TV station logo by a simple interpolation of the surrounding
+pixels. Just set a rectangle covering the logo and watch it disappear
+(and sometimes something even uglier appears; your mileage may vary).
+
+The filter accepts parameters as a string of the form
+"@var{x}:@var{y}:@var{w}:@var{h}:@var{band}", or as a list of
+@var{key}=@var{value} pairs, separated by ":".
+
+The description of the accepted parameters follows.
+
+@table @option
+
+@item x, y
+Specify the top left corner coordinates of the logo. They must be
+specified.
+
+@item w, h
+Specify the width and height of the logo to clear. They must be
+specified.
+
+@item band, t
+Specify the thickness of the fuzzy edge of the rectangle (added to
+@var{w} and @var{h}). The default value is 4.
+
+@item show
+When set to 1, a green rectangle is drawn on the screen to simplify
+finding the right @var{x}, @var{y}, @var{w}, @var{h} parameters, and
+@var{band} is set to 4. The default value is 0.
+
+@end table
+
+@subsection Examples
+
+@itemize
+@item
+Set a rectangle covering the area with top left corner coordinates 0,0
+and size 100x77, setting a band of size 10:
+@example
+delogo=0:0:100:77:10
+@end example
+
+@item
+As the previous example, but use named options:
+@example
+delogo=x=0:y=0:w=100:h=77:band=10
+@end example
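+
+@item
+As the first example, but draw the green @option{show} rectangle to
+help find the right coordinates:
+@example
+delogo=x=0:y=0:w=100:h=77:show=1
+@end example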
+
+@end itemize
+
+@section deshake
+
+Attempt to fix small changes in horizontal and/or vertical shift. This
+filter helps remove camera shake from hand-holding a camera, bumping a
+tripod, moving on a vehicle, etc.
+
+The filter accepts parameters as a list of @var{key}=@var{value}
+pairs, separated by ":". If the key of the first option is omitted,
+the arguments are interpreted according to the syntax
+@var{x}:@var{y}:@var{w}:@var{h}:@var{rx}:@var{ry}:@var{edge}:@var{blocksize}:@var{contrast}:@var{search}:@var{filename}.
+
+A description of the accepted parameters follows.
+
+@table @option
+
+@item x, y, w, h
+Specify a rectangular area in which to limit the search for motion
+vectors.
+If desired the search for motion vectors can be limited to a
+rectangular area of the frame defined by its top left corner, width
+and height. These parameters have the same meaning as in the drawbox
+filter, which can be used to visualise the position of the bounding
+box.
+
+This is useful when simultaneous movement of subjects within the frame
+might be confused for camera motion by the motion vector search.
+
+If any or all of @var{x}, @var{y}, @var{w} and @var{h} are set to -1
+then the full frame is used. This allows later options to be set
+without specifying the bounding box for the motion vector search.
+
+By default the whole frame is searched.
+
+@item rx, ry
+Specify the maximum extent of movement in x and y directions in the
+range 0-64 pixels. Default 16.
+
+@item edge
+Specify how to generate pixels to fill blanks at the edge of the
+frame. Available values are:
+@table @samp
+@item blank, 0
+Fill zeroes at blank locations
+@item original, 1
+Original image at blank locations
+@item clamp, 2
+Extruded edge value at blank locations
+@item mirror, 3
+Mirrored edge at blank locations
+@end table
+Default value is @samp{mirror}.
+
+@item blocksize
+Specify the blocksize to use for motion search. Range 4-128 pixels,
+default 8.
+
+@item contrast
+Specify the contrast threshold for blocks. Only blocks with more than
+the specified contrast (difference between darkest and lightest
+pixels) will be considered. Range 1-255, default 125.
+
+@item search
+Specify the search strategy. Available values are:
+@table @samp
+@item exhaustive, 0
+Set exhaustive search
+@item less, 1
+Set less exhaustive search.
+@end table
+Default value is @samp{exhaustive}.
+
+@item filename
+If set then a detailed log of the motion search is written to the
+specified file.
+
+@end table
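+
+For example, to limit the motion vector search to the top-left
+quadrant of a 640x480 video while keeping the other defaults (the
+frame size is only illustrative):
+@example
+deshake=x=0:y=0:w=320:h=240
+@end example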
+
+@section drawbox
+
+Draw a colored box on the input image.
+
+The filter accepts parameters as a list of @var{key}=@var{value}
+pairs, separated by ":". If the key of the first option is omitted,
+the arguments are interpreted according to the syntax
+@option{x}:@option{y}:@option{width}:@option{height}:@option{color}:@option{thickness}.
+
+A description of the accepted options follows.
+
+@table @option
+@item x, y
+Specify the top left corner coordinates of the box. Default to 0.
+
+@item width, w
+@item height, h
+Specify the width and height of the box, if 0 they are interpreted as
+the input width and height. Default to 0.
+
+@item color, c
+Specify the color of the box to write. It can be the name of a color
+(case insensitive match) or a 0xRRGGBB[AA] sequence. If the special
+value @code{invert} is used, the box edge color is the same as the
+video with inverted luma.
+
+@item thickness, t
+Set the thickness of the box edge. Default value is @code{4}.
+@end table
+
+@subsection Examples
+
+@itemize
+@item
+Draw a black box around the edge of the input image:
+@example
+drawbox
+@end example
+
+@item
+Draw a box with color red and an opacity of 50%:
+@example
+drawbox=10:20:200:60:red@@0.5
+@end example
+
+The previous example can be specified as:
+@example
+drawbox=x=10:y=20:w=200:h=60:color=red@@0.5
+@end example
+
+@item
+Fill the box with pink color:
+@example
+drawbox=x=10:y=10:w=100:h=100:color=pink@@0.5:t=max
+@end example
+@end itemize
+
+@anchor{drawtext}
+@section drawtext
+
+Draw a text string or text from a specified file on top of a video,
+using the libfreetype library.
+
+To enable compilation of this filter you need to configure FFmpeg with
+@code{--enable-libfreetype}.
+
+@subsection Syntax
+
+The filter accepts parameters as a list of @var{key}=@var{value} pairs,
+separated by ":".
+
+The description of the accepted parameters follows.
+
+@table @option
+
+@item box
+Used to draw a box around text using background color.
+Value should be either 1 (enable) or 0 (disable).
+The default value of @var{box} is 0.
+
+@item boxcolor
+The color to be used for drawing box around text.
+Either a string (e.g. "yellow") or in 0xRRGGBB[AA] format
+(e.g. "0xff00ff"), possibly followed by an alpha specifier.
+The default value of @var{boxcolor} is "white".
+
+@item draw
+Set an expression which specifies if the text should be drawn. If the
+expression evaluates to 0, the text is not drawn. This is useful for
+specifying that the text should be drawn only when specific conditions
+are met.
+
+Default value is "1".
+
+See below for the list of accepted constants and functions.
+
+@item expansion
+Select how the @var{text} is expanded. Can be either @code{none},
+@code{strftime} (deprecated) or
+@code{normal} (default). See the @ref{drawtext_expansion, Text expansion} section
+below for details.
+
+@item fix_bounds
+If true, check and fix text coordinates to avoid clipping.
+
+@item fontcolor
+The color to be used for drawing fonts.
+Either a string (e.g. "red") or in 0xRRGGBB[AA] format
+(e.g. "0xff000033"), possibly followed by an alpha specifier.
+The default value of @var{fontcolor} is "black".
+
+@item fontfile
+The font file to be used for drawing text. Path must be included.
+This parameter is mandatory.
+
+@item fontsize
+The font size to be used for drawing text.
+The default value of @var{fontsize} is 16.
+
+@item ft_load_flags
+Flags to be used for loading the fonts.
+
+The flags map the corresponding flags supported by libfreetype, and are
+a combination of the following values:
+@table @var
+@item default
+@item no_scale
+@item no_hinting
+@item render
+@item no_bitmap
+@item vertical_layout
+@item force_autohint
+@item crop_bitmap
+@item pedantic
+@item ignore_global_advance_width
+@item no_recurse
+@item ignore_transform
+@item monochrome
+@item linear_design
+@item no_autohint
+@end table
+
+Default value is "render".
+
+For more information consult the documentation for the FT_LOAD_*
+libfreetype flags.
+
+@item shadowcolor
+The color to be used for drawing a shadow behind the drawn text. It
+can be a color name (e.g. "yellow") or a string in the 0xRRGGBB[AA]
+form (e.g. "0xff00ff"), possibly followed by an alpha specifier.
+The default value of @var{shadowcolor} is "black".
+
+@item shadowx, shadowy
+The x and y offsets for the text shadow position with respect to the
+position of the text. They can be either positive or negative
+values. Default value for both is "0".
+
+@item tabsize
+The size in number of spaces to use for rendering the tab.
+Default value is 4.
+
+@item timecode
+Set the initial timecode representation in "hh:mm:ss[:;.]ff"
+format. It can be used with or without the text parameter. The
+@var{timecode_rate} option must be specified.
+
+@item timecode_rate, rate, r
+Set the timecode frame rate (timecode only).
+
+@item text
+The text string to be drawn. The text must be a sequence of UTF-8
+encoded characters.
+This parameter is mandatory if no file is specified with the parameter
+@var{textfile}.
+
+@item textfile
+A text file containing text to be drawn. The text must be a sequence
+of UTF-8 encoded characters.
+
+This parameter is mandatory if no text string is specified with the
+parameter @var{text}.
+
+If both @var{text} and @var{textfile} are specified, an error is thrown.
+
+@item reload
+If set to 1, the @var{textfile} will be reloaded before each frame.
+Be sure to update it atomically, or it may be read partially, or even fail.
+
+@item x, y
+The expressions which specify the offsets where text will be drawn
+within the video frame. They are relative to the top/left border of the
+output image.
+
+The default value of @var{x} and @var{y} is "0".
+
+See below for the list of accepted constants and functions.
+@end table
+
+The parameters for @var{x} and @var{y} are expressions containing the
+following constants and functions:
+
+@table @option
+@item dar
+input display aspect ratio; it is the same as (@var{w} / @var{h}) * @var{sar}
+
+@item hsub, vsub
+horizontal and vertical chroma subsample values. For example for the
+pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
+
+@item line_h, lh
+the height of each text line
+
+@item main_h, h, H
+the input height
+
+@item main_w, w, W
+the input width
+
+@item max_glyph_a, ascent
+the maximum distance from the baseline to the highest/upper grid
+coordinate used to place a glyph outline point, for all the rendered
+glyphs.
+It is a positive value, due to the grid's orientation with the Y axis
+upwards.
+
+@item max_glyph_d, descent
+the maximum distance from the baseline to the lowest grid coordinate
+used to place a glyph outline point, for all the rendered glyphs.
+This is a negative value, due to the grid's orientation, with the Y axis
+upwards.
+
+@item max_glyph_h
+maximum glyph height, that is the maximum height for all the glyphs
+contained in the rendered text; it is equivalent to @var{ascent} -
+@var{descent}.
+
+@item max_glyph_w
+maximum glyph width, that is the maximum width for all the glyphs
+contained in the rendered text
+
+@item n
+the number of the input frame, starting from 0
+
+@item rand(min, max)
+return a random number between @var{min} and @var{max}
+
+@item sar
+input sample aspect ratio
+
+@item t
+timestamp expressed in seconds, NAN if the input timestamp is unknown
+
+@item text_h, th
+the height of the rendered text
+
+@item text_w, tw
+the width of the rendered text
+
+@item x, y
+the x and y offset coordinates where the text is drawn.
+
+These parameters allow the @var{x} and @var{y} expressions to refer
+to each other, so you can for example specify @code{y=x/dar}.
+@end table
+
+If libavfilter was built with @code{--enable-fontconfig}, then
+@option{fontfile} can be a fontconfig pattern or omitted.
+
+@anchor{drawtext_expansion}
+@subsection Text expansion
+
+If @option{expansion} is set to @code{strftime},
+the filter recognizes strftime() sequences in the provided text and
+expands them accordingly. Check the documentation of strftime(). This
+feature is deprecated.
+
+If @option{expansion} is set to @code{none}, the text is printed verbatim.
+
+If @option{expansion} is set to @code{normal} (which is the default),
+the following expansion mechanism is used.
+
+The backslash character '\', followed by any character, always expands to
+the second character.
+
+Sequences of the form @code{%@{...@}} are expanded. The text between the
+braces is a function name, possibly followed by arguments separated by ':'.
+If the arguments contain special characters or delimiters (':' or '@}'),
+they should be escaped.
+
+Note that they probably must also be escaped as the value for the
+@option{text} option in the filter argument string and as the filter
+argument in the filter graph description, and possibly also for the shell,
+that makes up to four levels of escaping; using a text file avoids these
+problems.
+
+The following functions are available:
+
+@table @command
+
+@item expr, e
+The expression evaluation result.
+
+It must take one argument specifying the expression to be evaluated,
+which accepts the same constants and functions as the @var{x} and
+@var{y} values. Note that not all constants should be used, for
+example the text size is not known when evaluating the expression, so
+the constants @var{text_w} and @var{text_h} will have an undefined
+value.
+
+@item gmtime
+The time at which the filter is running, expressed in UTC.
+It can accept an argument: a strftime() format string.
+
+@item localtime
+The time at which the filter is running, expressed in the local time zone.
+It can accept an argument: a strftime() format string.
+
+@item n, frame_num
+The frame number, starting from 0.
+
+@item pts
+The timestamp of the current frame, in seconds, with microsecond accuracy.
+
+@end table
+
+@subsection Examples
+
+@itemize
+@item
+Draw "Test Text" with font FreeSerif, using the default values for the
+optional parameters.
+
+@example
+drawtext="fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf: text='Test Text'"
+@end example
+
+@item
+Draw 'Test Text' with font FreeSerif of size 24 at position x=100
+and y=50 (counting from the top-left corner of the screen), text is
+yellow with a red box around it. Both the text and the box have an
+opacity of 20%.
+
+@example
+drawtext="fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf: text='Test Text':\
+ x=100: y=50: fontsize=24: fontcolor=yellow@@0.2: box=1: boxcolor=red@@0.2"
+@end example
+
+Note that the double quotes are not necessary if spaces are not used
+within the parameter list.
+
+@item
+Show the text at the center of the video frame:
+@example
+drawtext="fontsize=30:fontfile=FreeSerif.ttf:text='hello world':x=(w-text_w)/2:y=(h-text_h-line_h)/2"
+@end example
+
+@item
+Show a text line sliding from right to left in the last row of the video
+frame. The file @file{LONG_LINE} is assumed to contain a single line
+with no newlines.
+@example
+drawtext="fontsize=15:fontfile=FreeSerif.ttf:text=LONG_LINE:y=h-line_h:x=-50*t"
+@end example
+
+@item
+Show the content of file @file{CREDITS} off the bottom of the frame and scroll up.
+@example
+drawtext="fontsize=20:fontfile=FreeSerif.ttf:textfile=CREDITS:y=h-20*t"
+@end example
+
+@item
+Draw a single green letter "g", at the center of the input video.
+The glyph baseline is placed at half screen height.
+@example
+drawtext="fontsize=60:fontfile=FreeSerif.ttf:fontcolor=green:text=g:x=(w-max_glyph_w)/2:y=h/2-ascent"
+@end example
+
+@item
+Show text for 1 second every 3 seconds:
+@example
+drawtext="fontfile=FreeSerif.ttf:fontcolor=white:x=100:y=x/dar:draw=lt(mod(t\,3)\,1):text='blink'"
+@end example
+
+@item
+Use fontconfig to set the font. Note that the colons need to be escaped.
+@example
+drawtext='fontfile=Linux Libertine O-40\:style=Semibold:text=FFmpeg'
+@end example
+
+@item
+Print the date of a real-time encoding (see strftime(3)):
+@example
+drawtext='fontfile=FreeSans.ttf:text=%@{localtime:%a %b %d %Y@}'
+@end example
+
+@end itemize
+
+For more information about libfreetype, check:
+@url{http://www.freetype.org/}.
+
+For more information about fontconfig, check:
+@url{http://freedesktop.org/software/fontconfig/fontconfig-user.html}.
+
+@section edgedetect
+
+Detect and draw edges. The filter uses the Canny Edge Detection algorithm.
+
+This filter accepts the following optional named parameters:
+
+@table @option
+@item low, high
+Set low and high threshold values used by the Canny thresholding
+algorithm.
+
+The high threshold selects the "strong" edge pixels, which are then
+connected through 8-connectivity with the "weak" edge pixels selected
+by the low threshold.
+
+@var{low} and @var{high} threshold values must be chosen in the range
+[0,1], and @var{low} should be less than or equal to @var{high}.
+
+Default value for @var{low} is @code{20/255}, and default value for @var{high}
+is @code{50/255}.
+@end table
+
+Example:
+@example
+edgedetect=low=0.1:high=0.4
+@end example
+
+@section fade
+
+Apply fade-in/out effect to input video.
+
+The filter accepts parameters as a list of @var{key}=@var{value}
+pairs, separated by ":". If the key of the first options is omitted,
+the arguments are interpreted according to the syntax
+@var{type}:@var{start_frame}:@var{nb_frames}.
+
+A description of the accepted parameters follows.
+
+@table @option
+@item type, t
+Specify the effect type. It can be either @code{in} for a fade-in, or
+@code{out} for a fade-out effect. Default is @code{in}.
+
+@item start_frame, s
+Specify the number of the start frame for starting to apply the fade
+effect. Default is 0.
+
+@item nb_frames, n
+Specify the number of frames for which the fade effect has to last. At
+the end of the fade-in effect the output video will have the same
+intensity as the input video, at the end of the fade-out transition
+the output video will be completely black. Default is 25.
+
+@item alpha
+If set to 1, fade only alpha channel, if one exists on the input.
+Default value is 0.
+@end table
+
+@subsection Examples
+
+@itemize
+@item
+Fade in first 30 frames of video:
+@example
+fade=in:0:30
+@end example
+
+The command above is equivalent to:
+@example
+fade=t=in:s=0:n=30
+@end example
+
+@item
+Fade out last 45 frames of a 200-frame video:
+@example
+fade=out:155:45
+@end example
+
+@item
+Fade in first 25 frames and fade out last 25 frames of a 1000-frame video:
+@example
+fade=in:0:25, fade=out:975:25
+@end example
+
+@item
+Make first 5 frames black, then fade in from frame 5-24:
+@example
+fade=in:5:20
+@end example
+
+@item
+Fade in alpha over first 25 frames of video:
+@example
+fade=in:0:25:alpha=1
+@end example
+@end itemize
+
+@section field
+
+Extract a single field from an interlaced image using stride
+arithmetic to avoid wasting CPU time. The output frames are marked as
+non-interlaced.
+
+This filter accepts the following named options:
+@table @option
+@item type
+Specify whether to extract the top (if the value is @code{0} or
+@code{top}) or the bottom field (if the value is @code{1} or
+@code{bottom}).
+@end table
+
+If the option key is not specified, the first value sets the @var{type}
+option. For example:
+@example
+field=bottom
+@end example
+
+is equivalent to:
+@example
+field=type=bottom
+@end example
+
+@section fieldorder
+
+Transform the field order of the input video.
+
+It accepts one parameter which specifies the required field order that
+the input interlaced video will be transformed to. The parameter can
+assume one of the following values:
+
+@table @option
+@item 0 or bff
+output bottom field first
+@item 1 or tff
+output top field first
+@end table
+
+Default value is "tff".
+
+Transformation is achieved by shifting the picture content up or down
+by one line, and filling the remaining line with appropriate picture content.
+This method is consistent with most broadcast field order converters.
+
+If the input video is not flagged as being interlaced, or it is already
+flagged as being of the required output field order, then this filter
+does not alter the incoming video.
+
+This filter is very useful when converting to or from PAL DV material,
+which is bottom field first.
+
+For example:
+@example
+ffmpeg -i in.vob -vf "fieldorder=bff" out.dv
+@end example
+
+@section fifo
+
+Buffer input images and send them when they are requested.
+
+This filter is mainly useful when auto-inserted by the libavfilter
+framework.
+
+The filter does not take parameters.
+
+@section format
+
+Convert the input video to one of the specified pixel formats.
+Libavfilter will try to pick one that is supported for the input to
+the next filter.
+
+The filter accepts a list of pixel format names, separated by ":",
+for example "yuv420p:monow:rgb24".
+
+@subsection Examples
+
+@itemize
+@item
+Convert the input video to the format @var{yuv420p}
+@example
+format=yuv420p
+@end example
+
+@item
+Convert the input video to any of the formats in the list
+@example
+format=yuv420p:yuv444p:yuv410p
+@end example
+@end itemize
+
+@section fps
+
+Convert the video to the specified constant framerate by duplicating
+or dropping frames as necessary.
+
+This filter accepts the following named parameters:
+@table @option
+
+@item fps
+Desired output framerate. The default is @code{25}.
+
+@item round
+Rounding method.
+
+Possible values are:
+@table @option
+@item zero
+round towards 0
+@item inf
+round away from 0
+@item down
+round towards -infinity
+@item up
+round towards +infinity
+@item near
+round to nearest
+@end table
+The default is @code{near}.
+
+@end table
+
+Alternatively, the options can be specified as a flat string:
+@var{fps}[:@var{round}].
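+
+For example, to convert the input to 30 frames per second with nearest
+rounding, using both the named-option and the flat-string syntax:
+@example
+fps=fps=30:round=near
+fps=30
+@end example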
+
+See also the @ref{setpts} filter.
+
+@section framestep
+
+Select one frame every N frames.
+
+This filter accepts as input a string representing a positive
+integer. The default argument is @code{1}.
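+
+For example, to keep one frame out of every five:
+@example
+framestep=5
+@end example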
+
+@anchor{frei0r}
+@section frei0r
+
+Apply a frei0r effect to the input video.
+
+To enable compilation of this filter you need to install the frei0r
+header and configure FFmpeg with @code{--enable-frei0r}.
+
+The filter supports the syntax:
+@example
+@var{filter_name}[@{:|=@}@var{param1}:@var{param2}:...:@var{paramN}]
+@end example
+
+@var{filter_name} is the name of the frei0r effect to load. If the
+environment variable @env{FREI0R_PATH} is defined, the frei0r effect
+is searched for in each of the directories specified by the colon-separated
+(semicolon-separated on Windows platforms) list in @env{FREI0R_PATH},
+otherwise in the standard frei0r paths, which are in this order:
+@file{HOME/.frei0r-1/lib/}, @file{/usr/local/lib/frei0r-1/},
+@file{/usr/lib/frei0r-1/}.
+
+@var{param1}, @var{param2}, ... , @var{paramN} specify the parameters
+for the frei0r effect.
+
+A frei0r effect parameter can be a boolean (whose values are specified
+with "y" and "n"), a double, a color (specified by the syntax
+@var{R}/@var{G}/@var{B}, where @var{R}, @var{G}, and @var{B} are float
+numbers from 0.0 to 1.0, or by an @code{av_parse_color()} color
+description), a position (specified by the syntax @var{X}/@var{Y},
+with @var{X} and @var{Y} being float numbers) or a string.
+
+The number and kind of parameters depend on the loaded effect. If an
+effect parameter is not specified, the default value is used.
+
+@subsection Examples
+
+@itemize
+@item
+Apply the distort0r effect, set the first two double parameters:
+@example
+frei0r=distort0r:0.5:0.01
+@end example
+
+@item
+Apply the colordistance effect, take a color as first parameter:
+@example
+frei0r=colordistance:0.2/0.3/0.4
+frei0r=colordistance:violet
+frei0r=colordistance:0x112233
+@end example
+
+@item
+Apply the perspective effect, specify the top left and top right image
+positions:
+@example
+frei0r=perspective:0.2/0.2:0.8/0.2
+@end example
+@end itemize
+
+For more information see:
+@url{http://frei0r.dyne.org}
+
+@section geq
+
+The filter takes one, two, three or four equations as parameters, separated by ':'.
+The first equation is mandatory and applies to the luma plane. The two
+following are for the chroma blue and chroma red planes respectively.
+
+The filter syntax allows named parameters:
+
+@table @option
+@item lum_expr
+the luminance expression
+@item cb_expr
+the chrominance blue expression
+@item cr_expr
+the chrominance red expression
+@item alpha_expr
+the alpha expression
+@end table
+
+If one of the chrominance expressions is not defined, it falls back on
+the other one. If no alpha expression is specified it will evaluate to
+an opaque value. If neither chrominance expression is specified, they
+default to the luminance expression.
+
+The expressions can use the following variables and functions:
+
+@table @option
+@item N
+The sequential number of the filtered frame, starting from @code{0}.
+
+@item X, Y
+The coordinates of the current sample.
+
+@item W, H
+The width and height of the image.
+
+@item SW, SH
+Width and height scale depending on the currently filtered plane. It is the
+ratio between the corresponding luma plane number of pixels and the current
+plane ones. E.g. for YUV4:2:0 the values are @code{1,1} for the luma plane, and
+@code{0.5,0.5} for chroma planes.
+
+@item T
+Time of the current frame, expressed in seconds.
+
+@item p(x, y)
+Return the value of the pixel at location (@var{x},@var{y}) of the current
+plane.
+
+@item lum(x, y)
+Return the value of the pixel at location (@var{x},@var{y}) of the luminance
+plane.
+
+@item cb(x, y)
+Return the value of the pixel at location (@var{x},@var{y}) of the
+blue-difference chroma plane. Returns 0 if there is no such plane.
+
+@item cr(x, y)
+Return the value of the pixel at location (@var{x},@var{y}) of the
+red-difference chroma plane. Returns 0 if there is no such plane.
+
+@item alpha(x, y)
+Return the value of the pixel at location (@var{x},@var{y}) of the alpha
+plane. Returns 0 if there is no such plane.
+@end table
+
+For functions, if @var{x} and @var{y} are outside the area, the value will be
+automatically clipped to the closer edge.
+
+@subsection Examples
+
+@itemize
+@item
+Flip the image horizontally:
+@example
+geq=p(W-X\,Y)
+@end example
+
+@item
+Generate a bidimensional sine wave, with angle @code{PI/3} and a
+wavelength of 100 pixels:
+@example
+geq=128 + 100*sin(2*(PI/100)*(cos(PI/3)*(X-50*T) + sin(PI/3)*Y)):128:128
+@end example
+
+@item
+Generate a fancy enigmatic moving light:
+@example
+nullsrc=s=256x256,geq=random(1)/hypot(X-cos(N*0.07)*W/2-W/2\,Y-sin(N*0.09)*H/2-H/2)^2*1000000*sin(N*0.02):128:128
+@end example
+@end itemize
+
+@section gradfun
+
+Fix the banding artifacts that are sometimes introduced into nearly flat
+regions by truncation to 8-bit color depth.
+Interpolate the gradients that should go where the bands are, and
+dither them.
+
+This filter is designed for playback only. Do not use it prior to
+lossy compression, because compression tends to lose the dither and
+bring back the bands.
+
+The filter accepts a list of options in the form of @var{key}=@var{value} pairs
+separated by ":". A description of the accepted options follows.
+
+@table @option
+
+@item strength
+The maximum amount by which the filter will change
+any one pixel. Also the threshold for detecting nearly flat
+regions. Acceptable values range from @code{0.51} to @code{64}, default value
+is @code{1.2}.
+
+@item radius
+The neighborhood to fit the gradient to. A larger
+radius makes for smoother gradients, but also prevents the filter from
+modifying the pixels near detailed regions. Acceptable values are
+@code{8-32}, default value is @code{16}.
+
+@end table
+
+Alternatively, the options can be specified as a flat string:
+@var{strength}[:@var{radius}]
+
+@subsection Examples
+
+@itemize
+@item
+Apply the filter with a @code{3.5} strength and radius of @code{8}:
+@example
+gradfun=3.5:8
+@end example
+
+@item
+Specify radius, omitting the strength (which will fall back to the default
+value):
+@example
+gradfun=radius=8
+@end example
+
+@end itemize
+
+@section hflip
+
+Flip the input video horizontally.
+
+For example to horizontally flip the input video with @command{ffmpeg}:
+@example
+ffmpeg -i in.avi -vf "hflip" out.avi
+@end example
+
+@section histeq
+This filter applies a global color histogram equalization on a
+per-frame basis.
+
+It can be used to correct video that has a compressed range of pixel
+intensities. The filter redistributes the pixel intensities to
+equalize their distribution across the intensity range. It may be
+viewed as an "automatically adjusting contrast filter". This filter is
+useful only for correcting degraded or poorly captured source
+video.
+
+The filter accepts parameters as a list of @var{key}=@var{value}
+pairs, separated by ":". If the key of the first option is omitted,
+the arguments are interpreted according to syntax
+@var{strength}:@var{intensity}:@var{antibanding}.
+
+This filter accepts the following named options:
+
+@table @option
+@item strength
+Determine the amount of equalization to be applied. As the strength
+is reduced, the distribution of pixel intensities more and more
+approaches that of the input frame. The value must be a float number
+in the range [0,1] and defaults to 0.200.
+
+@item intensity
+Set the maximum intensity that can be generated and scale the output
+values appropriately. The strength should be set as desired and then
+the intensity can be limited if needed to avoid washing-out. The value
+must be a float number in the range [0,1] and defaults to 0.210.
+
+@item antibanding
+Set the antibanding level. If enabled the filter will randomly vary
+the luminance of output pixels by a small amount to avoid banding of
+the histogram. Possible values are @code{none}, @code{weak} or
+@code{strong}. It defaults to @code{none}.
+@end table
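+
+For example, to apply a moderate equalization with weak antibanding
+(the values are only illustrative):
+@example
+histeq=strength=0.5:intensity=0.3:antibanding=weak
+@end example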
+
+@section histogram
+
+Compute and draw a color distribution histogram for the input video.
+
+The computed histogram is a representation of the distribution of
+color components in an image.
+
+The filter accepts the following named parameters:
+
+@table @option
+@item mode
+Set histogram mode.
+
+It accepts the following values:
+@table @samp
+@item levels
+standard histogram that displays the color component distribution in
+an image. It displays a color graph for each color component, showing
+the distribution of the Y, U, V, A or G, B, R components, depending on
+input format, in the current frame. Below each graph is a color
+component scale meter.
+
+@item color
+chroma values in a vectorscope; the brighter a point is, the more such
+chroma values are distributed in the image.
+It displays chroma values (U/V color placement) in a two-dimensional
+graph (called a vectorscope), which can be used to read off the hue
+and saturation of the current frame. At the same time it is a histogram.
+The whiter a pixel in the vectorscope, the more pixels of the input frame
+correspond to that pixel (that is the more pixels have this chroma value).
+The V component is displayed on the horizontal (X) axis, with the leftmost
+side being V = 0 and the rightmost side being V = 255.
+The U component is displayed on the vertical (Y) axis, with the top
+representing U = 0 and the bottom representing U = 255.
+
+The position of a white pixel in the graph corresponds to the chroma value
+of a pixel of the input clip. So the graph can be used to read off the
+hue (color flavor) and the saturation (the dominance of the hue in the color).
+As the hue of a color changes, it moves around the square. At the center of
+the square, the saturation is zero, which means that the corresponding pixel
+has no color. If you increase the amount of a specific color, while leaving
+the other colors unchanged, the saturation increases, and you move towards
+the edge of the square.
+
+@item color2
+chroma values in a vectorscope, similar to @code{color}, but the
+actual chroma values are displayed.
+
+@item waveform
+per row/column color component graph. In row mode, the left side of the
+graph represents color component value 0 and the right side represents
+value 255. In column mode, the top side represents value 0 and the
+bottom side represents value 255.
+@end table
+Default value is @code{levels}.
+
+@item level_height
+Set height of level in @code{levels}. Default value is @code{200}.
+Allowed range is [50, 2048].
+
+@item scale_height
+Set height of color scale in @code{levels}. Default value is @code{12}.
+Allowed range is [0, 40].
+
+@item step
+Set step for @code{waveform} mode. Smaller values are useful to find out
+how many values of the same luminance are distributed across input rows/columns.
+Default value is @code{10}. Allowed range is [1, 255].
+
+@item waveform_mode
+Set mode for @code{waveform}. Can be either @code{row}, or @code{column}.
+Default is @code{row}.
+
+@item display_mode
+Set display mode for @code{waveform} and @code{levels}.
+It accepts the following values:
+@table @samp
+@item parade
+Display a separate graph for the color components side by side in
+@code{row} waveform mode or one below the other in @code{column} waveform
+mode for the @code{waveform} histogram mode. For the @code{levels}
+histogram mode, the per color component graphs are placed one below the other.
+
+This display mode in @code{waveform} histogram mode makes it easy to spot
+color casts in the highlights and shadows of an image, by comparing the
+contours of the top and the bottom of each waveform.
+Since whites, grays, and blacks are characterized by
+exactly equal amounts of red, green, and blue, neutral areas of the
+picture should display three waveforms of roughly equal width/height.
+If not, the correction is easy to make by making adjustments to level the
+three waveforms.
+
+@item overlay
+Presents information that's identical to that in the @code{parade}, except
+that the graphs representing color components are superimposed directly
+over one another.
+
+This display mode in @code{waveform} histogram mode can make it easier to spot
+the relative differences or similarities in overlapping areas of the color
+components that are supposed to be identical, such as neutral whites, grays,
+or blacks.
+@end table
+Default is @code{parade}.
+@end table
+
+@subsection Examples
+
+@itemize
+
+@item
+Calculate and draw histogram:
+@example
+ffplay -i input -vf histogram
+@end example
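+
+@item
+Display a per-column waveform with the color component graphs placed
+one below the other (a sketch using the documented options):
+@example
+ffplay -i input -vf histogram=mode=waveform:waveform_mode=column:display_mode=parade
+@end example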
+
+@end itemize
+
+@section hqdn3d
+
+High precision/quality 3d denoise filter. This filter aims to reduce
+image noise, producing smooth images and making still images really
+still. It should enhance compressibility.
+
+It accepts the following optional parameters:
+@var{luma_spatial}:@var{chroma_spatial}:@var{luma_tmp}:@var{chroma_tmp}
+
+@table @option
+@item luma_spatial
+a non-negative float number which specifies spatial luma strength,
+defaults to 4.0
+
+@item chroma_spatial
+a non-negative float number which specifies spatial chroma strength,
+defaults to 3.0*@var{luma_spatial}/4.0
+
+@item luma_tmp
+a float number which specifies luma temporal strength, defaults to
+6.0*@var{luma_spatial}/4.0
+
+@item chroma_tmp
+a float number which specifies chroma temporal strength, defaults to
+@var{luma_tmp}*@var{chroma_spatial}/@var{luma_spatial}
+@end table
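+
+For example, spelling out all the documented defaults explicitly
+(@var{chroma_tmp} here is 6.0*3.0/4.0 = 4.5):
+@example
+hqdn3d=4.0:3.0:6.0:4.5
+@end example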
+
+@section hue
+
+Modify the hue and/or the saturation of the input.
+
+This filter accepts the following optional named options:
+
+@table @option
+@item h
+Specify the hue angle as a number of degrees. It accepts a float
+number or an expression, and defaults to 0.0.
+
+@item H
+Specify the hue angle as a number of radians. It accepts a float
+number or an expression, and defaults to 0.0.
+
+@item s
+Specify the saturation in the [-10,10] range. It accepts a float number and
+defaults to 1.0.
+@end table
+
+The @var{h}, @var{H} and @var{s} parameters are expressions containing the
+following constants:
+
+@table @option
+@item n
+frame count of the input frame starting from 0
+
+@item pts
+presentation timestamp of the input frame expressed in time base units
+
+@item r
+frame rate of the input video, NAN if the input frame rate is unknown
+
+@item t
+timestamp expressed in seconds, NAN if the input timestamp is unknown
+
+@item tb
+time base of the input video
+@end table
+
+The options can also be set using the syntax: @var{hue}:@var{saturation}
+
+In this case @var{hue} is expressed in degrees.
+
+@subsection Examples
+
+@itemize
+@item
+Set the hue to 90 degrees and the saturation to 1.0:
+@example
+hue=h=90:s=1
+@end example
+
+@item
+Same command but expressing the hue in radians:
+@example
+hue=H=PI/2:s=1
+@end example
+
+@item
+Same command without named options, hue must be expressed in degrees:
+@example
+hue=90:1
+@end example
+
+@item
+Note that "h:s" syntax does not support expressions for the values of
+h and s, so the following example will issue an error:
+@example
+hue=PI/2:1
+@end example
+
+@item
+Rotate hue and make the saturation swing between 0
+and 2 over a period of 1 second:
+@example
+hue="H=2*PI*t: s=sin(2*PI*t)+1"
+@end example
+
+@item
+Apply a 3 seconds saturation fade-in effect starting at 0:
+@example
+hue="s=min(t/3\,1)"
+@end example
+
+The general fade-in expression can be written as:
+@example
+hue="s=min(0\, max((t-START)/DURATION\, 1))"
+@end example
+
+@item
+Apply a 3 seconds saturation fade-out effect starting at 5 seconds:
+@example
+hue="s=max(0\, min(1\, (8-t)/3))"
+@end example
+
+The general fade-out expression can be written as:
+@example
+hue="s=max(0\, min(1\, (START+DURATION-t)/DURATION))"
+@end example
+
+@end itemize
+
+@subsection Commands
+
+This filter supports the following command:
+@table @option
+@item reinit
+Modify the hue and/or the saturation of the input video.
+The command accepts the same named options and syntax as when calling the
+filter from the command-line.
+
+If a parameter is omitted, it is kept at its current value.
+@end table
+
+@section idet
+
+Detect video interlacing type.
+
+This filter tries to detect if the input is interlaced or progressive,
+and whether it is top or bottom field first.
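+
+For example, a possible way to run the detection with @command{ffmpeg},
+discarding the output; the input name is a placeholder, and it is
+assumed here that the results are read from the filter's log output:
+@example
+ffmpeg -i input -vf idet -an -f null -
+@end example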
+
+@section il
+
+Deinterleave or interleave fields.
+
+This filter allows one to process interlaced image fields without
+deinterlacing them. Deinterleaving splits the input frame into 2
+fields (so called half pictures). Odd lines are moved to the top
+half of the output image, even lines to the bottom half.
+You can process (filter) them independently and then re-interleave them.
+
+It accepts a list of options in the form of @var{key}=@var{value} pairs
+separated by ":". A description of the accepted options follows.
+
+@table @option
+@item luma_mode, l
+@item chroma_mode, s
+@item alpha_mode, a
+Available values for @var{luma_mode}, @var{chroma_mode} and
+@var{alpha_mode} are:
+
+@table @samp
+@item none
+Do nothing.
+
+@item deinterleave, d
+Deinterleave fields, placing one above the other.
+
+@item interleave, i
+Interleave fields. Reverse the effect of deinterleaving.
+@end table
+Default value is @code{none}.
+
+@item luma_swap, ls
+@item chroma_swap, cs
+@item alpha_swap, as
+Swap luma/chroma/alpha fields. Exchange even & odd lines. Default value is @code{0}.
+@end table
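+
+For example, to deinterleave all planes, process the half pictures,
+and re-interleave them (gradfun is used here only as an illustrative
+intermediate filter):
+@example
+il=l=d:s=d:a=d,gradfun,il=l=i:s=i:a=i
+@end example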
+
+@section kerndeint
+
+Deinterlace input video by applying Donald Graft's adaptive kernel
+deinterlacing. It works on interlaced parts of a video to produce
+progressive frames.
+
+This filter accepts parameters as a list of @var{key}=@var{value}
+pairs, separated by ":". If the key of the first option is omitted,
+the arguments are interpreted according to the following syntax:
+@var{thresh}:@var{map}:@var{order}:@var{sharp}:@var{twoway}.
+
+The description of the accepted parameters follows.
+
+@table @option
+@item thresh
+Set the threshold which affects the filter's tolerance when
+determining if a pixel line must be processed. It must be an integer
+in the range [0,255] and defaults to 10. A value of 0 will result in
+applying the process to every pixel.
+
+@item map
+Paint pixels exceeding the threshold value to white if set to 1.
+Default is 0.
+
+@item order
+Set the field order. Swap fields if set to 1, leave fields alone if
+0. Default is 0.
+
+@item sharp
+Enable additional sharpening if set to 1. Default is 0.
+
+@item twoway
+Enable twoway sharpening if set to 1. Default is 0.
+@end table
+
+@subsection Examples
+
+@itemize
+@item
+Apply default values:
+@example
+kerndeint=thresh=10:map=0:order=0:sharp=0:twoway=0
+@end example
+
+@item
+Enable additional sharpening:
+@example
+kerndeint=sharp=1
+@end example
+
+@item
+Paint processed pixels in white:
+@example
+kerndeint=map=1
+@end example
+@end itemize
+
+@section lut, lutrgb, lutyuv
+
+Compute a look-up table for binding each pixel component input value
+to an output value, and apply it to input video.
+
+@var{lutyuv} applies a lookup table to a YUV input video, @var{lutrgb}
+to an RGB input video.
+
+These filters accept as input a ":"-separated list of options, which
+specify the expressions used for computing the lookup table for the
+corresponding pixel component values.
+
+The @var{lut} filter requires either YUV or RGB pixel formats in
+input, and accepts the options:
+@table @option
+@item c0
+set first pixel component expression
+@item c1
+set second pixel component expression
+@item c2
+set third pixel component expression
+@item c3
+set fourth pixel component expression, corresponds to the alpha component
+@end table
+
+The exact component associated with each option depends on the input
+format.
+
+The @var{lutrgb} filter requires RGB pixel formats in input, and
+accepts the options:
+@table @option
+@item r
+set red component expression
+@item g
+set green component expression
+@item b
+set blue component expression
+@item a
+alpha component expression
+@end table
+
+The @var{lutyuv} filter requires YUV pixel formats in input, and
+accepts the options:
+@table @option
+@item y
+set Y/luminance component expression
+@item u
+set U/Cb component expression
+@item v
+set V/Cr component expression
+@item a
+set alpha component expression
+@end table
+
+The expressions can contain the following constants and functions:
+
+@table @option
+@item w, h
+the input width and height
+
+@item val
+input value for the pixel component
+
+@item clipval
+the input value clipped in the @var{minval}-@var{maxval} range
+
+@item maxval
+maximum value for the pixel component
+
+@item minval
+minimum value for the pixel component
+
+@item negval
+the negated value for the pixel component value, clipped in the
+@var{minval}-@var{maxval} range; it corresponds to the expression
+"maxval-clipval+minval"
+
+@item clip(val)
+the computed value in @var{val} clipped in the
+@var{minval}-@var{maxval} range
+
+@item gammaval(gamma)
+the computed gamma correction value of the pixel component value
+clipped in the @var{minval}-@var{maxval} range, corresponds to the
+expression
+"pow((clipval-minval)/(maxval-minval)\,@var{gamma})*(maxval-minval)+minval"
+
+@end table
+
+All expressions default to "val".
+
+@subsection Examples
+
+@itemize
+@item
+Negate input video:
+@example
+lutrgb="r=maxval+minval-val:g=maxval+minval-val:b=maxval+minval-val"
+lutyuv="y=maxval+minval-val:u=maxval+minval-val:v=maxval+minval-val"
+@end example
+
+The above is the same as:
+@example
+lutrgb="r=negval:g=negval:b=negval"
+lutyuv="y=negval:u=negval:v=negval"
+@end example
+
+@item
+Negate luminance:
+@example
+lutyuv=y=negval
+@end example
+
+@item
+Remove chroma components, turning the video into a graytone image:
+@example
+lutyuv="u=128:v=128"
+@end example
+
+@item
+Apply a luma burning effect:
+@example
+lutyuv="y=2*val"
+@end example
+
+@item
+Remove green and blue components:
+@example
+lutrgb="g=0:b=0"
+@end example
+
+@item
+Set a constant alpha channel value on input:
+@example
+format=rgba,lutrgb=a="maxval-minval/2"
+@end example
+
+@item
+Correct luminance gamma by a 0.5 factor:
+@example
+lutyuv=y=gammaval(0.5)
+@end example
+@end itemize
+
+@section mp
+
+Apply an MPlayer filter to the input video.
+
+This filter provides a wrapper around most of the filters of
+MPlayer/MEncoder.
+
+This wrapper is considered experimental. Some of the wrapped filters
+may not work properly and we may drop support for them, as they will
+be implemented natively into FFmpeg. Thus you should avoid
+depending on them when writing portable scripts.
+
+The filter accepts the parameters:
+@var{filter_name}[:=]@var{filter_params}
+
+@var{filter_name} is the name of a supported MPlayer filter,
+@var{filter_params} is a string containing the parameters accepted by
+the named filter.
+
+The list of the currently supported filters follows:
+@table @var
+@item detc
+@item dint
+@item divtc
+@item down3dright
+@item eq2
+@item eq
+@item fil
+@item fspp
+@item harddup
+@item ilpack
+@item ivtc
+@item mcdeint
+@item ow
+@item perspective
+@item phase
+@item pp7
+@item pullup
+@item qp
+@item sab
+@item softpulldown
+@item spp
+@item telecine
+@item tinterlace
+@item uspp
+@end table
+
+The parameter syntax and behavior for the listed filters are the same
+as those of the corresponding MPlayer filters. For detailed instructions check
+the "VIDEO FILTERS" section in the MPlayer manual.
+
+@subsection Examples
+
+@itemize
+@item
+Adjust gamma, brightness, contrast:
+@example
+mp=eq2=1.0:2:0.5
+@end example
+@end itemize
+
+See also mplayer(1), @url{http://www.mplayerhq.hu/}.
+
+@section negate
+
+Negate input video.
+
+This filter accepts an integer as input; if non-zero it negates the
+alpha component (if available). The default input value is 0.
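+
+For example, to negate the video and also negate the alpha component,
+if the input has one:
+@example
+negate=1
+@end example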
+
+@section noformat
+
+Force libavfilter not to use any of the specified pixel formats for the
+input to the next filter.
+
+The filter accepts a list of pixel format names, separated by ":",
+for example "yuv420p:monow:rgb24".
+
+@subsection Examples
+
+@itemize
+@item
+Force libavfilter to use a format different from @var{yuv420p} for the
+input to the vflip filter:
+@example
+noformat=yuv420p,vflip
+@end example
+
+@item
+Convert the input video to any of the formats not contained in the list:
+@example
+noformat=yuv420p:yuv444p:yuv410p
+@end example
+@end itemize
+
+@section noise
+
+Add noise to the input video frame.
+
+This filter accepts a list of options in the form of @var{key}=@var{value}
+pairs separated by ":". A description of the accepted options follows.
+
+@table @option
+@item all_seed
+@item c0_seed
+@item c1_seed
+@item c2_seed
+@item c3_seed
+Set noise seed for a specific pixel component or all pixel components
+in case of @var{all_seed}. Default value is @code{123457}.
+
+@item all_strength, alls
+@item c0_strength, c0s
+@item c1_strength, c1s
+@item c2_strength, c2s
+@item c3_strength, c3s
+Set noise strength for a specific pixel component or all pixel components
+in case of @var{all_strength}. Default value is @code{0}. Allowed range is [0, 100].
+
+@item all_flags, allf
+@item c0_flags, c0f
+@item c1_flags, c1f
+@item c2_flags, c2f
+@item c3_flags, c3f
+Set pixel component flags or set flags for all components if @var{all_flags}.
+Available values for component flags are:
+@table @samp
+@item a
+averaged temporal noise (smoother)
+@item p
+mix random noise with a (semi)regular pattern
+@item q
+higher quality (slightly better looking, slightly slower)
+@item t
+temporal noise (noise pattern changes between frames)
+@item u
+uniform noise (gaussian otherwise)
+@end table
+@end table
+
+@subsection Examples
+
+Add temporal and uniform noise to input video:
+@example
+noise=alls=20:allf=t+u
+@end example
+
+@section null
+
+Pass the video source unchanged to the output.
+
+@section ocv
+
+Apply video transform using libopencv.
+
+To enable this filter, install the libopencv library and headers and
+configure FFmpeg with @code{--enable-libopencv}.
+
+The filter takes the parameters: @var{filter_name}@{:=@}@var{filter_params}.
+
+@var{filter_name} is the name of the libopencv filter to apply.
+
+@var{filter_params} specifies the parameters to pass to the libopencv
+filter. If not specified the default values are assumed.
+
+Refer to the official libopencv documentation for more precise
+information:
+@url{http://opencv.willowgarage.com/documentation/c/image_filtering.html}
+
+The list of supported libopencv filters follows.
+
+@anchor{dilate}
+@subsection dilate
+
+Dilate an image by using a specific structuring element.
+This filter corresponds to the libopencv function @code{cvDilate}.
+
+It accepts the parameters: @var{struct_el}:@var{nb_iterations}.
+
+@var{struct_el} represents a structuring element, and has the syntax:
+@var{cols}x@var{rows}+@var{anchor_x}x@var{anchor_y}/@var{shape}
+
+@var{cols} and @var{rows} represent the number of columns and rows of
+the structuring element, @var{anchor_x} and @var{anchor_y} the anchor
+point, and @var{shape} the shape for the structuring element, and
+can be one of the values "rect", "cross", "ellipse", "custom".
+
+If the value for @var{shape} is "custom", it must be followed by a
+string of the form "=@var{filename}". The file with name
+@var{filename} is assumed to represent a binary image, with each
+printable character corresponding to a bright pixel. When a custom
+@var{shape} is used, @var{cols} and @var{rows} are ignored, and the
+number of columns and rows of the read file are assumed instead.
+
+The default value for @var{struct_el} is "3x3+0x0/rect".
+
+@var{nb_iterations} specifies the number of times the transform is
+applied to the image, and defaults to 1.
+
+Some examples follow:
+@example
+# use the default values
+ocv=dilate
+
+# dilate using a structuring element with a 5x5 cross, iterate two times
+ocv=dilate=5x5+2x2/cross:2
+
+# read the shape from the file diamond.shape, iterate two times
+# the file diamond.shape may contain a pattern of characters like this:
+# *
+# ***
+# *****
+# ***
+# *
+# the specified cols and rows are ignored (but not the anchor point coordinates)
+ocv=0x0+2x2/custom=diamond.shape:2
+@end example
+
+@subsection erode
+
+Erode an image by using a specific structuring element.
+This filter corresponds to the libopencv function @code{cvErode}.
+
+The filter accepts the parameters: @var{struct_el}:@var{nb_iterations},
+with the same syntax and semantics as the @ref{dilate} filter.
+
+@subsection smooth
+
+Smooth the input video.
+
+The filter takes the following parameters:
+@var{type}:@var{param1}:@var{param2}:@var{param3}:@var{param4}.
+
+@var{type} is the type of smooth filter to apply, and can be one of
+the following values: "blur", "blur_no_scale", "median", "gaussian",
+"bilateral". The default value is "gaussian".
+
+@var{param1}, @var{param2}, @var{param3}, and @var{param4} are
+parameters whose meanings depend on the smooth type. @var{param1} and
+@var{param2} accept positive integer values or 0; @var{param3} and
+@var{param4} accept float values.
+
+The default value for @var{param1} is 3, the default value for the
+other parameters is 0.
+
+These parameters correspond to the parameters assigned to the
+libopencv function @code{cvSmooth}.
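+
+For example, to apply a median smooth with an aperture of 5 (the
+parameter value is only illustrative):
+@example
+ocv=smooth=median:5
+@end example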
+
+@anchor{overlay}
+@section overlay
+
+Overlay one video on top of another.
+
+It takes two inputs and one output; the first input is the "main"
+video on which the second input is overlayed.
+
+This filter accepts a list of @var{key}=@var{value} pairs as argument,
+separated by ":". If the key of the first options is omitted, the
+arguments are interpreted according to the syntax @var{x}:@var{y}.
+
+A description of the accepted options follows.
+
+@table @option
+@item x, y
+Set the expression for the x and y coordinates of the overlayed video
+on the main video. Default value is 0.
+
+The @var{x} and @var{y} expressions can contain the following
+parameters:
+@table @option
+@item main_w, main_h
+main input width and height
+
+@item W, H
+same as @var{main_w} and @var{main_h}
+
+@item overlay_w, overlay_h
+overlay input width and height
+
+@item w, h
+same as @var{overlay_w} and @var{overlay_h}
+@end table
+
+@item format
+Set the format for the output video.
+
+It accepts the following values:
+@table @samp
+@item yuv420
+force YUV420 output
+
+@item yuv444
+force YUV444 output
+
+@item rgb
+force RGB output
+@end table
+
+Default value is @samp{yuv420}.
+
+@item rgb @emph{(deprecated)}
+If set to 1, force the filter to accept inputs in the RGB
+color space. Default value is 0. This option is deprecated, use
+@option{format} instead.
+
+@item shortest
+If set to 1, force the output to terminate when the shortest input
+terminates. Default value is 0.
+@end table
+
+Be aware that frames are taken from each input video in timestamp
+order, hence, if their initial timestamps differ, it is a good idea
+to pass the two inputs through a @var{setpts=PTS-STARTPTS} filter to
+have them begin at the same zero timestamp, as the example for the
+@var{movie} filter does.
+
+You can chain together more overlays, but you should test the
+efficiency of such an approach.
+
+@subsection Examples
+
+@itemize
+@item
+Draw the overlay at 10 pixels from the bottom right corner of the main
+video:
+@example
+overlay=main_w-overlay_w-10:main_h-overlay_h-10
+@end example
+
+Using named options the example above becomes:
+@example
+overlay=x=main_w-overlay_w-10:y=main_h-overlay_h-10
+@end example
+
+@item
+Insert a transparent PNG logo in the bottom left corner of the input,
+using the @command{ffmpeg} tool with the @code{-filter_complex} option:
+@example
+ffmpeg -i input -i logo -filter_complex 'overlay=10:main_h-overlay_h-10' output
+@end example
+
+@item
+Insert 2 different transparent PNG logos (second logo on bottom
+right corner) using the @command{ffmpeg} tool:
+@example
+ffmpeg -i input -i logo1 -i logo2 -filter_complex 'overlay=10:H-h-10,overlay=W-w-10:H-h-10' output
+@end example
+
+@item
+Add a transparent color layer on top of the main video; WxH specifies
+the size of the main input to the overlay filter:
+@example
+color=red@@.3:WxH [over]; [in][over] overlay [out]
+@end example
+
+@item
+Play an original video and a filtered version (here with the deshake
+filter) side by side using the @command{ffplay} tool:
+@example
+ffplay input.avi -vf 'split[a][b]; [a]pad=iw*2:ih[src]; [b]deshake[filt]; [src][filt]overlay=w'
+@end example
+
+The above command is the same as:
+@example
+ffplay input.avi -vf 'split[b], pad=iw*2[src], [b]deshake, [src]overlay=w'
+@end example
+
+@item
+Compose output by putting two input videos side to side:
+@example
+ffmpeg -i left.avi -i right.avi -filter_complex "
+nullsrc=size=200x100 [background];
+[0:v] setpts=PTS-STARTPTS, scale=100x100 [left];
+[1:v] setpts=PTS-STARTPTS, scale=100x100 [right];
+[background][left] overlay=shortest=1 [background+left];
+[background+left][right] overlay=shortest=1:x=100 [left+right]
+"
+@end example
+
+@item
+Chain several overlays in cascade:
+@example
+nullsrc=s=200x200 [bg];
+testsrc=s=100x100, split=4 [in0][in1][in2][in3];
+[in0] lutrgb=r=0, [bg] overlay=0:0 [mid0];
+[in1] lutrgb=g=0, [mid0] overlay=100:0 [mid1];
+[in2] lutrgb=b=0, [mid1] overlay=0:100 [mid2];
+[in3] null, [mid2] overlay=100:100 [out0]
+@end example
+
+@end itemize
+
+@section pad
+
+Add paddings to the input image, and place the original input at the
+given coordinates @var{x}, @var{y}.
+
+The filter accepts parameters as a list of @var{key}=@var{value} pairs,
+separated by ":".
+
+If the key of the first option is omitted, the arguments are
+interpreted according to the syntax
+@var{width}:@var{height}:@var{x}:@var{y}:@var{color}.
+
+A description of the accepted options follows.
+
+@table @option
+@item width, w
+@item height, h
+Specify an expression for the size of the output image with the
+paddings added. If the value for @var{width} or @var{height} is 0, the
+corresponding input size is used for the output.
+
+The @var{width} expression can reference the value set by the
+@var{height} expression, and vice versa.
+
+The default value of @var{width} and @var{height} is 0.
+
+@item x
+@item y
+Specify an expression for the offsets where to place the input image
+in the padded area with respect to the top/left border of the output
+image.
+
+The @var{x} expression can reference the value set by the @var{y}
+expression, and vice versa.
+
+The default value of @var{x} and @var{y} is 0.
+
+@item color
+Specify the color of the padded area. It can be the name of a color
+(case insensitive match) or a 0xRRGGBB[AA] sequence.
+
+The default value of @var{color} is "black".
+@end table
+
+The values for the @var{width}, @var{height}, @var{x}, and @var{y}
+options are expressions containing the following constants:
+
+@table @option
+@item in_w, in_h
+the input video width and height
+
+@item iw, ih
+same as @var{in_w} and @var{in_h}
+
+@item out_w, out_h
+the output width and height, that is the size of the padded area as
+specified by the @var{width} and @var{height} expressions
+
+@item ow, oh
+same as @var{out_w} and @var{out_h}
+
+@item x, y
+x and y offsets as specified by the @var{x} and @var{y}
+expressions, or NAN if not yet specified
+
+@item a
+same as @var{iw} / @var{ih}
+
+@item sar
+input sample aspect ratio
+
+@item dar
+input display aspect ratio, it is the same as (@var{iw} / @var{ih}) * @var{sar}
+
+@item hsub, vsub
+horizontal and vertical chroma subsample values. For example for the
+pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
+@end table
+
+@subsection Examples
+
+@itemize
+@item
+Add paddings with color "violet" to the input video. Output video
+size is 640x480, the top-left corner of the input video is placed at
+column 0, row 40:
+@example
+pad=640:480:0:40:violet
+@end example
+
+The example above is equivalent to the following command:
+@example
+pad=width=640:height=480:x=0:y=40:color=violet
+@end example
+
+@item
+Pad the input to get an output with dimensions increased by 3/2,
+and put the input video at the center of the padded area:
+@example
+pad="3/2*iw:3/2*ih:(ow-iw)/2:(oh-ih)/2"
+@end example
+
+@item
+Pad the input to get a squared output with size equal to the maximum
+value between the input width and height, and put the input video at
+the center of the padded area:
+@example
+pad="max(iw\,ih):ow:(ow-iw)/2:(oh-ih)/2"
+@end example
+
+@item
+Pad the input to get a final w/h ratio of 16:9:
+@example
+pad="ih*16/9:ih:(ow-iw)/2:(oh-ih)/2"
+@end example
+
+@item
+In case of anamorphic video, in order to set the output display aspect
+correctly, it is necessary to use @var{sar} in the expression,
+according to the relation:
+@example
+(ih * X / ih) * sar = output_dar
+X = output_dar / sar
+@end example
+
+Thus the previous example needs to be modified to:
+@example
+pad="ih*16/9/sar:ih:(ow-iw)/2:(oh-ih)/2"
+@end example
+
+@item
+Double output size and put the input video in the bottom-right
+corner of the output padded area:
+@example
+pad="2*iw:2*ih:ow-iw:oh-ih"
+@end example
+@end itemize
+
+@section pixdesctest
+
+Pixel format descriptor test filter, mainly useful for internal
+testing. The output video should be equal to the input video.
+
+For example:
+@example
+format=monow, pixdesctest
+@end example
+
+can be used to test the monowhite pixel format descriptor definition.
+
+@section pp
+
+Enable the specified chain of postprocessing subfilters using libpostproc. This
+library should be automatically selected with a GPL build (@code{--enable-gpl}).
+Subfilters must be separated by '/' and can be disabled by prepending a '-'.
+Each subfilter and some options have a short and a long name that can be used
+interchangeably, i.e. dr/dering are the same.
+
+All subfilters share common options to determine their scope:
+
+@table @option
+@item a/autoq
+Honor the quality commands for this subfilter.
+
+@item c/chrom
+Do chrominance filtering, too (default).
+
+@item y/nochrom
+Do luminance filtering only (no chrominance).
+
+@item n/noluma
+Do chrominance filtering only (no luminance).
+@end table
+
+These options can be appended after the subfilter name, separated by a ':'.
+
+Available subfilters are:
+
+@table @option
+@item hb/hdeblock[:difference[:flatness]]
+Horizontal deblocking filter
+@table @option
+@item difference
+Difference factor where higher values mean more deblocking (default: @code{32}).
+@item flatness
+Flatness threshold where lower values mean more deblocking (default: @code{39}).
+@end table
+
+@item vb/vdeblock[:difference[:flatness]]
+Vertical deblocking filter
+@table @option
+@item difference
+Difference factor where higher values mean more deblocking (default: @code{32}).
+@item flatness
+Flatness threshold where lower values mean more deblocking (default: @code{39}).
+@end table
+
+@item ha/hadeblock[:difference[:flatness]]
+Accurate horizontal deblocking filter
+@table @option
+@item difference
+Difference factor where higher values mean more deblocking (default: @code{32}).
+@item flatness
+Flatness threshold where lower values mean more deblocking (default: @code{39}).
+@end table
+
+@item va/vadeblock[:difference[:flatness]]
+Accurate vertical deblocking filter
+@table @option
+@item difference
+Difference factor where higher values mean more deblocking (default: @code{32}).
+@item flatness
+Flatness threshold where lower values mean more deblocking (default: @code{39}).
+@end table
+@end table
+
+The horizontal and vertical deblocking filters share the difference and
+flatness values so you cannot set different horizontal and vertical
+thresholds.
+
+@table @option
+@item h1/x1hdeblock
+Experimental horizontal deblocking filter
+
+@item v1/x1vdeblock
+Experimental vertical deblocking filter
+
+@item dr/dering
+Deringing filter
+
+@item tn/tmpnoise[:threshold1[:threshold2[:threshold3]]], temporal noise reducer
+@table @option
+@item threshold1
+larger -> stronger filtering
+@item threshold2
+larger -> stronger filtering
+@item threshold3
+larger -> stronger filtering
+@end table
+
+@item al/autolevels[:f/fullyrange], automatic brightness / contrast correction
+@table @option
+@item f/fullyrange
+Stretch luminance to @code{0-255}.
+@end table
+
+@item lb/linblenddeint
+Linear blend deinterlacing filter that deinterlaces the given block by
+filtering all lines with a @code{(1 2 1)} filter.
+
+@item li/linipoldeint
+Linear interpolating deinterlacing filter that deinterlaces the given block by
+linearly interpolating every second line.
+
+@item ci/cubicipoldeint
+Cubic interpolating deinterlacing filter deinterlaces the given block by
+cubically interpolating every second line.
+
+@item md/mediandeint
+Median deinterlacing filter that deinterlaces the given block by applying a
+median filter to every second line.
+
+@item fd/ffmpegdeint
+FFmpeg deinterlacing filter that deinterlaces the given block by filtering every
+second line with a @code{(-1 4 2 4 -1)} filter.
+
+@item l5/lowpass5
+Vertically applied FIR lowpass deinterlacing filter that deinterlaces the given
+block by filtering all lines with a @code{(-1 2 6 2 -1)} filter.
+
+@item fq/forceQuant[:quantizer]
+Overrides the quantizer table from the input with the constant quantizer you
+specify.
+@table @option
+@item quantizer
+Quantizer to use
+@end table
+
+@item de/default
+Default pp filter combination (@code{hb:a,vb:a,dr:a})
+
+@item fa/fast
+Fast pp filter combination (@code{h1:a,v1:a,dr:a})
+
+@item ac
+High quality pp filter combination (@code{ha:a:128:7,va:a,dr:a})
+@end table
+
+@subsection Examples
+
+@itemize
+@item
+Apply horizontal and vertical deblocking, deringing and automatic
+brightness/contrast:
+@example
+pp=hb/vb/dr/al
+@end example
+
+@item
+Apply default filters without brightness/contrast correction:
+@example
+pp=de/-al
+@end example
+
+@item
+Apply default filters and temporal denoiser:
+@example
+pp=default/tmpnoise:1:2:3
+@end example
+
+@item
+Apply deblocking on luminance only, and switch vertical deblocking on or off
+automatically depending on available CPU time:
+@example
+pp=hb:y/vb:a
+@end example
+@end itemize
+
+@section removelogo
+
+Suppress a TV station logo, using an image file to determine which
+pixels comprise the logo. It works by filling in the pixels that
+comprise the logo with neighboring pixels.
+
+This filter requires one argument which specifies the filter bitmap
+file, which can be any image format supported by libavformat. The
+width and height of the image file must match those of the video
+stream being processed.
+
+Pixels in the provided bitmap image with a value of zero are not
+considered part of the logo, non-zero pixels are considered part of
+the logo. If you use white (255) for the logo and black (0) for the
+rest, you will be safe. For making the filter bitmap, it is
+recommended to take a screen capture of a black frame with the logo
+visible, and then use a threshold filter followed by the erode
+filter once or twice.
+
+If needed, little splotches can be fixed manually. Remember that if
+logo pixels are not covered, the filter quality will be much
+reduced. Marking too many pixels as part of the logo does not hurt as
+much, but it will increase the amount of blurring needed to cover over
+the image and will destroy more information than necessary, and extra
+pixels will slow things down on a large logo.
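+
+For example, assuming a filter bitmap stored in a file named
+@file{logo_mask.png} (an illustrative name) whose dimensions match
+those of the processed video:
+@example
+removelogo=logo_mask.png
+@end example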
+
+@section scale
+
+Scale (resize) the input video, using the libswscale library.
+
+The scale filter forces the output display aspect ratio to be the same
+as that of the input, by changing the output sample aspect ratio.
+
+This filter accepts a list of named options in the form of
+@var{key}=@var{value} pairs separated by ":". If the key for the first
+two options is not specified, the assumed keys for the first two
+values are @code{w} and @code{h}. If the first option has no key and
+can be interpreted like a video size specification, it will be used
+to set the video size.
+
+A description of the accepted options follows.
+
+@table @option
+@item width, w
+Set the video width expression, default value is @code{iw}. See below
+for the list of accepted constants.
+
+@item height, h
+Set the video height expression, default value is @code{ih}.
+See below for the list of accepted constants.
+
+@item interl
+Set the interlacing. It accepts the following values:
+
+@table @option
+@item 1
+force interlaced aware scaling
+
+@item 0
+do not apply interlaced scaling
+
+@item -1
+select interlaced aware scaling depending on whether the source frames
+are flagged as interlaced or not
+@end table
+
+Default value is @code{0}.
+
+@item flags
+Set libswscale scaling flags. If not explicitly specified the filter
+applies a bilinear scaling algorithm.
+
+@item size, s
+Set the video size, the value must be a valid abbreviation or in the
+form @var{width}x@var{height}.
+@end table
+
+The values of the @var{w} and @var{h} options are expressions
+containing the following constants:
+
+@table @option
+@item in_w, in_h
+the input width and height
+
+@item iw, ih
+same as @var{in_w} and @var{in_h}
+
+@item out_w, out_h
+the output (cropped) width and height
+
+@item ow, oh
+same as @var{out_w} and @var{out_h}
+
+@item a
+same as @var{iw} / @var{ih}
-@item t
-timestamp expressed in seconds, NAN if the input timestamp is unknown
+
+@item sar
+input sample aspect ratio
+
+@item dar
+input display aspect ratio, it is the same as (@var{iw} / @var{ih}) * @var{sar}
+
+@item hsub, vsub
+horizontal and vertical chroma subsample values. For example for the
+pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
@end table
-The @var{out_w} and @var{out_h} parameters specify the expressions for
-the width and height of the output (cropped) video. They are
-evaluated just at the configuration of the filter.
+If the input image format is different from the format requested by
+the next filter, the scale filter will convert the input to the
+requested format.
-The default value of @var{out_w} is "in_w", and the default value of
-@var{out_h} is "in_h".
+If the value for @var{width} or @var{height} is 0, the respective input
+size is used for the output.
-The expression for @var{out_w} may depend on the value of @var{out_h},
-and the expression for @var{out_h} may depend on @var{out_w}, but they
-cannot depend on @var{x} and @var{y}, as @var{x} and @var{y} are
-evaluated after @var{out_w} and @var{out_h}.
+If the value for @var{width} or @var{height} is -1, the scale filter will
+use, for the respective output size, a value that maintains the aspect
+ratio of the input image.
-The @var{x} and @var{y} parameters specify the expressions for the
-position of the top-left corner of the output (non-cropped) area. They
-are evaluated for each frame. If the evaluated value is not valid, it
-is approximated to the nearest valid value.
+@subsection Examples
-The default value of @var{x} is "(in_w-out_w)/2", and the default
-value for @var{y} is "(in_h-out_h)/2", which set the cropped area at
-the center of the input image.
+@itemize
+@item
+Scale the input video to a size of 200x100:
+@example
+scale=200:100
+@end example
-The expression for @var{x} may depend on @var{y}, and the expression
-for @var{y} may depend on @var{x}.
+This is equivalent to:
+@example
+scale=w=200:h=100
+@end example
-Follow some examples:
+or:
@example
-# crop the central input area with size 100x100
-crop=100:100
+scale=200x100
+@end example
-# crop the central input area with size 2/3 of the input video
-"crop=2/3*in_w:2/3*in_h"
+@item
+Specify a size abbreviation for the output size:
+@example
+scale=qcif
+@end example
-# crop the input video central square
-crop=in_h
+which can also be written as:
+@example
+scale=size=qcif
+@end example
-# delimit the rectangle with the top-left corner placed at position
-# 100:100 and the right-bottom corner corresponding to the right-bottom
-# corner of the input image.
-crop=in_w-100:in_h-100:100:100
+@item
+Scale the input to 2x:
+@example
+scale=2*iw:2*ih
+@end example
-# crop 10 pixels from the left and right borders, and 20 pixels from
-# the top and bottom borders
-"crop=in_w-2*10:in_h-2*20"
+@item
+The above is the same as:
+@example
+scale=2*in_w:2*in_h
+@end example
-# keep only the bottom right quarter of the input image
-"crop=in_w/2:in_h/2:in_w/2:in_h/2"
+@item
+Scale the input to 2x with forced interlaced scaling:
+@example
+scale=2*iw:2*ih:interl=1
+@end example
-# crop height for getting Greek harmony
-"crop=in_w:1/PHI*in_w"
+@item
+Scale the input to half size:
+@example
+scale=iw/2:ih/2
+@end example
-# trembling effect
-"crop=in_w/2:in_h/2:(in_w-out_w)/2+((in_w-out_w)/2)*sin(n/10):(in_h-out_h)/2 +((in_h-out_h)/2)*sin(n/7)"
+@item
+Increase the width, and set the height to the same size:
+@example
+scale=3/2*iw:ow
+@end example
-# erratic camera effect depending on timestamp
-"crop=in_w/2:in_h/2:(in_w-out_w)/2+((in_w-out_w)/2)*sin(t*10):(in_h-out_h)/2 +((in_h-out_h)/2)*sin(t*13)"
+@item
+Seek for Greek harmony:
+@example
+scale=iw:1/PHI*iw
+scale=ih*PHI:ih
+@end example
-# set x depending on the value of y
-"crop=in_w/2:in_h/2:y:10+10*sin(n/10)"
+@item
+Increase the height, and set the width to 3/2 of the height:
+@example
+scale=3/2*oh:3/5*ih
@end example
-@section cropdetect
+@item
+Increase the size, but make the size a multiple of the chroma:
+@example
+scale="trunc(3/2*iw/hsub)*hsub:trunc(3/2*ih/vsub)*vsub"
+@end example
-Auto-detect crop size.
+@item
+Increase the width to a maximum of 500 pixels, keep the same input
+aspect ratio:
+@example
+scale='min(500\, iw*3/2):-1'
+@end example
+@end itemize
-Calculate necessary cropping parameters and prints the recommended
-parameters through the logging system. The detected dimensions
-correspond to the non-black area of the input video.
+@section setdar, setsar
-It accepts the syntax:
+The @code{setdar} filter sets the Display Aspect Ratio for the filter
+output video.
+
+This is done by changing the specified Sample (aka Pixel) Aspect
+Ratio, according to the following equation:
@example
-cropdetect[=@var{limit}[:@var{round}[:@var{reset}]]]
+@var{DAR} = @var{HORIZONTAL_RESOLUTION} / @var{VERTICAL_RESOLUTION} * @var{SAR}
+@end example
+
+Keep in mind that the @code{setdar} filter does not modify the pixel
+dimensions of the video frame. Also the display aspect ratio set by
+this filter may be changed by later filters in the filterchain,
+e.g. in case of scaling or if another "setdar" or a "setsar" filter is
+applied.
+
+The @code{setsar} filter sets the Sample (aka Pixel) Aspect Ratio for
+the filter output video.
+
+Note that as a consequence of the application of this filter, the
+output display aspect ratio will change according to the equation
+above.
+
+Keep in mind that the sample aspect ratio set by the @code{setsar}
+filter may be changed by later filters in the filterchain, e.g. if
+another "setsar" or a "setdar" filter is applied.
+
+The @code{setdar} and @code{setsar} filters accept a string in the
+form @var{num}:@var{den} expressing an aspect ratio, or the following
+named options, expressed as a sequence of @var{key}=@var{value} pairs,
+separated by ":".
+
+@table @option
+@item max
+Set the maximum integer value to use for expressing numerator and
+denominator when reducing the expressed aspect ratio to a rational.
+Default value is @code{100}.
+
+@item r, ratio
+Set the aspect ratio used by the filter.
+
+The parameter can be a floating point number string, an expression, or
+a string of the form @var{num}:@var{den}, where @var{num} and
+@var{den} are the numerator and denominator of the aspect ratio. If
+the parameter is not specified, the value "0" is assumed.
+If the form "@var{num}:@var{den}" is used, the @code{:} character
+should be escaped.
+@end table
+
+If the keys are omitted in the named options list, the specified values
+are assumed to be @var{ratio} and @var{max} in that order.
+
+For example to change the display aspect ratio to 16:9, specify:
+@example
+setdar='16:9'
+@end example
+
+The example above is equivalent to:
+@example
+setdar=1.77777
+@end example
+
+To change the sample aspect ratio to 10:11, specify:
+@example
+setsar='10:11'
+@end example
+
+To set a display aspect ratio of 16:9, and specify a maximum integer value of
+1000 in the aspect ratio reduction, use the command:
+@example
+setdar=ratio='16:9':max=1000
@end example
-@table @option
+@section setfield
+
+Force field for the output video frame.
+
+The @code{setfield} filter marks the interlace type field for the
+output frames. It does not change the input frame, but only sets the
+corresponding property, which affects how the frame is treated by
+following filters (e.g. @code{fieldorder} or @code{yadif}).
+
+This filter accepts a single option @option{mode}, which can be
+specified either by setting @code{mode=VALUE} or setting the value
+alone. Available values are:
+
+@table @samp
+@item auto
+Keep the same field property.
+
+@item bff
+Mark the frame as bottom-field-first.
+
+@item tff
+Mark the frame as top-field-first.
+
+@item prog
+Mark the frame as progressive.
+@end table
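+
+For example, to mark all output frames as top-field-first:
+@example
+setfield=tff
+@end example
+
+which, using the named option, is equivalent to:
+@example
+setfield=mode=tff
+@end example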
+
+@section showinfo
+
+Show a line containing various information for each input video frame.
+The input video is not modified.
+
+The shown line contains a sequence of key/value pairs of the form
+@var{key}:@var{value}.
+
+A description of each shown parameter follows:
+
+@table @option
+@item n
+sequential number of the input frame, starting from 0
+
+@item pts
+Presentation TimeStamp of the input frame, expressed as a number of
+time base units. The time base unit depends on the filter input pad.
+
+@item pts_time
+Presentation TimeStamp of the input frame, expressed as a number of
+seconds
+
+@item pos
+position of the frame in the input stream, -1 if this information is
+unavailable and/or meaningless (for example in case of synthetic video)
-@item limit
-Threshold, which can be optionally specified from nothing (0) to
-everything (255), defaults to 24.
+@item fmt
+pixel format name
-@item round
-Value which the width/height should be divisible by, defaults to
-16. The offset is automatically adjusted to center the video. Use 2 to
-get only even dimensions (needed for 4:2:2 video). 16 is best when
-encoding to most video codecs.
+@item sar
+sample aspect ratio of the input frame, expressed in the form
+@var{num}/@var{den}
-@item reset
-Counter that determines after how many frames cropdetect will reset
-the previously detected largest video area and start over to detect
-the current optimal crop area. Defaults to 0.
+@item s
+size of the input frame, expressed in the form
+@var{width}x@var{height}
-This can be useful when channel logos distort the video area. 0
-indicates never reset and return the largest area encountered during
-playback.
-@end table
+@item i
+interlaced mode ("P" for "progressive", "T" for top field first, "B"
+for bottom field first)
-@section delogo
+@item iskey
+1 if the frame is a key frame, 0 otherwise
-Suppress a TV station logo by a simple interpolation of the surrounding
-pixels. Just set a rectangle covering the logo and watch it disappear
-(and sometimes something even uglier appear - your mileage may vary).
+@item type
+picture type of the input frame ("I" for an I-frame, "P" for a
+P-frame, "B" for a B-frame, "?" for unknown type).
+Check also the documentation of the @code{AVPictureType} enum and of
+the @code{av_get_picture_type_char} function defined in
+@file{libavutil/avutil.h}.
-The filter accepts parameters as a string of the form
-"@var{x}:@var{y}:@var{w}:@var{h}:@var{band}", or as a list of
-@var{key}=@var{value} pairs, separated by ":".
+@item checksum
+Adler-32 checksum (printed in hexadecimal) of all the planes of the input frame
-The description of the accepted parameters follows.
+@item plane_checksum
+Adler-32 checksum (printed in hexadecimal) of each plane of the input frame,
+expressed in the form "[@var{c0} @var{c1} @var{c2} @var{c3}]"
+@end table
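+
+For example, to print the per-frame information while discarding the
+output frames, a sketch using the @command{ffmpeg} tool with the null
+muxer:
+@example
+ffmpeg -i input.avi -vf showinfo -f null -
+@end example
+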
-@table @option
+@section smartblur
-@item x, y
-Specify the top left corner coordinates of the logo. They must be
-specified.
+Blur the input video without impacting the outlines.
-@item w, h
-Specify the width and height of the logo to clear. They must be
-specified.
+The filter accepts the following parameters:
+@var{luma_radius}:@var{luma_strength}:@var{luma_threshold}[:@var{chroma_radius}:@var{chroma_strength}:@var{chroma_threshold}]
-@item band, t
-Specify the thickness of the fuzzy edge of the rectangle (added to
-@var{w} and @var{h}). The default value is 4.
+Parameters prefixed by @var{luma} indicate that they work on the
+luminance of the pixels whereas parameters prefixed by @var{chroma}
+refer to the chrominance of the pixels.
-@item show
-When set to 1, a green rectangle is drawn on the screen to simplify
-finding the right @var{x}, @var{y}, @var{w}, @var{h} parameters, and
-@var{band} is set to 4. The default value is 0.
+If the chroma parameters are not set, the luma parameters are used for
+both the luminance and the chrominance of the pixels.
-@end table
+@var{luma_radius} or @var{chroma_radius} must be a float number in the
+range [0.1,5.0] that specifies the variance of the gaussian filter
+used to blur the image (slower if larger).
-Some examples follow.
+@var{luma_strength} or @var{chroma_strength} must be a float number in
+the range [-1.0,1.0] that configures the blurring. A value included in
+[0.0,1.0] will blur the image whereas a value included in [-1.0,0.0]
+will sharpen the image.
-@itemize
+@var{luma_threshold} or @var{chroma_threshold} must be an integer in
+the range [-30,30] that is used as a coefficient to determine whether
+a pixel should be blurred or not. A value of 0 will filter the whole
+image, a value included in [0,30] will filter flat areas and a value
+included in [-30,0] will filter edges.
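+
+For example, to apply a mild blur over the whole image, a sketch
+setting @var{luma_radius} to 2.5, @var{luma_strength} to 0.5 and
+@var{luma_threshold} to 0 (the luma values are reused for chroma):
+@example
+smartblur=2.5:0.5:0
+@end example
+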
-@item
-Set a rectangle covering the area with top left corner coordinates 0,0
-and size 100x77, setting a band of size 10:
-@example
-delogo=0:0:100:77:10
-@end example
+@section stereo3d
-@item
-As the previous example, but use named options:
-@example
-delogo=x=0:y=0:w=100:h=77:band=10
-@end example
+Convert between different stereoscopic image formats.
-@end itemize
+This filter accepts the following named options, expressed as a
+sequence of @var{key}=@var{value} pairs, separated by ":".
-@section drawbox
+@table @option
+@item in
+Set stereoscopic image format of input.
-Draw a colored box on the input image.
+Available values for input image formats are:
+@table @samp
+@item sbsl
+side by side parallel (left eye left, right eye right)
-It accepts the syntax:
-@example
-drawbox=@var{x}:@var{y}:@var{width}:@var{height}:@var{color}
-@end example
+@item sbsr
+side by side crosseye (right eye left, left eye right)
-@table @option
+@item sbs2l
+side by side parallel with half width resolution
+(left eye left, right eye right)
-@item x, y
-Specify the top left corner coordinates of the box. Default to 0.
+@item sbs2r
+side by side crosseye with half width resolution
+(right eye left, left eye right)
-@item width, height
-Specify the width and height of the box, if 0 they are interpreted as
-the input width and height. Default to 0.
+@item abl
+above-below (left eye above, right eye below)
-@item color
-Specify the color of the box to write, it can be the name of a color
-(case insensitive match) or a 0xRRGGBB[AA] sequence.
-@end table
+@item abr
+above-below (right eye above, left eye below)
-Follow some examples:
-@example
-# draw a black box around the edge of the input image
-drawbox
+@item ab2l
+above-below with half height resolution
+(left eye above, right eye below)
-# draw a box with color red and an opacity of 50%
-drawbox=10:20:200:60:red@@0.5"
-@end example
+@item ab2r
+above-below with half height resolution
+(right eye above, left eye below)
-@section drawtext
+Default value is @samp{sbsl}.
+@end table
-Draw text string or text from specified file on top of video using the
-libfreetype library.
+@item out
+Set stereoscopic image format of output.
-To enable compilation of this filter you need to configure Libav with
-@code{--enable-libfreetype}.
+Available values for output image formats are all the input formats as well as:
+@table @samp
+@item arbg
+anaglyph red/blue gray
+(red filter on left eye, blue filter on right eye)
-The filter also recognizes strftime() sequences in the provided text
-and expands them accordingly. Check the documentation of strftime().
+@item argg
+anaglyph red/green gray
+(red filter on left eye, green filter on right eye)
-The filter accepts parameters as a list of @var{key}=@var{value} pairs,
-separated by ":".
+@item arcg
+anaglyph red/cyan gray
+(red filter on left eye, cyan filter on right eye)
-The description of the accepted parameters follows.
+@item arch
+anaglyph red/cyan half colored
+(red filter on left eye, cyan filter on right eye)
-@table @option
+@item arcc
+anaglyph red/cyan color
+(red filter on left eye, cyan filter on right eye)
-@item fontfile
-The font file to be used for drawing text. Path must be included.
-This parameter is mandatory.
+@item arcd
+anaglyph red/cyan color optimized with the least squares projection of dubois
+(red filter on left eye, cyan filter on right eye)
-@item text
-The text string to be drawn. The text must be a sequence of UTF-8
-encoded characters.
-This parameter is mandatory if no file is specified with the parameter
-@var{textfile}.
+@item agmg
+anaglyph green/magenta gray
+(green filter on left eye, magenta filter on right eye)
-@item textfile
-A text file containing text to be drawn. The text must be a sequence
-of UTF-8 encoded characters.
+@item agmh
+anaglyph green/magenta half colored
+(green filter on left eye, magenta filter on right eye)
-This parameter is mandatory if no text string is specified with the
-parameter @var{text}.
+@item agmc
+anaglyph green/magenta colored
+(green filter on left eye, magenta filter on right eye)
-If both text and textfile are specified, an error is thrown.
+@item agmd
+anaglyph green/magenta color optimized with the least squares projection of dubois
+(green filter on left eye, magenta filter on right eye)
-@item x, y
-The offsets where text will be drawn within the video frame.
-Relative to the top/left border of the output image.
-They accept expressions similar to the @ref{overlay} filter:
-@table @option
+@item aybg
+anaglyph yellow/blue gray
+(yellow filter on left eye, blue filter on right eye)
-@item x, y
-the computed values for @var{x} and @var{y}. They are evaluated for
-each new frame.
+@item aybh
+anaglyph yellow/blue half colored
+(yellow filter on left eye, blue filter on right eye)
-@item main_w, main_h
-main input width and height
+@item aybc
+anaglyph yellow/blue colored
+(yellow filter on left eye, blue filter on right eye)
-@item W, H
-same as @var{main_w} and @var{main_h}
+@item aybd
+anaglyph yellow/blue color optimized with the least squares projection of dubois
+(yellow filter on left eye, blue filter on right eye)
-@item text_w, text_h
-rendered text width and height
+@item irl
+interleaved rows (left eye has top row, right eye starts on next row)
-@item w, h
-same as @var{text_w} and @var{text_h}
+@item irr
+interleaved rows (right eye has top row, left eye starts on next row)
-@item n
-the number of frames processed, starting from 0
+@item ml
+mono output (left eye only)
-@item t
-timestamp expressed in seconds, NAN if the input timestamp is unknown
+@item mr
+mono output (right eye only)
+@end table
+Default value is @samp{arcd}.
@end table
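+
+For example, to convert a side by side parallel input to a gray
+red/cyan anaglyph, a sketch using the named options described above:
+@example
+stereo3d=in=sbsl:out=arcg
+@end example
+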
-The default value of @var{x} and @var{y} is 0.
+@anchor{subtitles}
+@section subtitles
-@item fontsize
-The font size to be used for drawing text.
-The default value of @var{fontsize} is 16.
+Draw subtitles on top of input video using the libass library.
-@item fontcolor
-The color to be used for drawing fonts.
-Either a string (e.g. "red") or in 0xRRGGBB[AA] format
-(e.g. "0xff000033"), possibly followed by an alpha specifier.
-The default value of @var{fontcolor} is "black".
+To enable compilation of this filter you need to configure FFmpeg with
+@code{--enable-libass}. This filter also requires a build with libavcodec and
+libavformat to convert the passed subtitles file to ASS (Advanced Substation
+Alpha) subtitles format.
-@item boxcolor
-The color to be used for drawing box around text.
-Either a string (e.g. "yellow") or in 0xRRGGBB[AA] format
-(e.g. "0xff00ff"), possibly followed by an alpha specifier.
-The default value of @var{boxcolor} is "white".
+This filter accepts the following named options, expressed as a
+sequence of @var{key}=@var{value} pairs, separated by ":".
-@item box
-Used to draw a box around text using background color.
-Value should be either 1 (enable) or 0 (disable).
-The default value of @var{box} is 0.
+@table @option
+@item filename, f
+Set the filename of the subtitle file to read. It must be specified.
-@item shadowx, shadowy
-The x and y offsets for the text shadow position with respect to the
-position of the text. They can be either positive or negative
-values. Default value for both is "0".
+@item original_size
+Specify the size of the original video, the video for which the ASS file
+was composed. Due to a misdesign in ASS aspect ratio arithmetic, this is
+necessary to correctly scale the fonts if the aspect ratio has been changed.
-@item shadowcolor
-The color to be used for drawing a shadow behind the drawn text. It
-can be a color name (e.g. "yellow") or a string in the 0xRRGGBB[AA]
-form (e.g. "0xff00ff"), possibly followed by an alpha specifier.
-The default value of @var{shadowcolor} is "black".
+@item charenc
+Set subtitles input character encoding. @code{subtitles} filter only. Only
+useful if not UTF-8.
+@end table
-@item ft_load_flags
-Flags to be used for loading the fonts.
+If the first key is not specified, it is assumed that the first value
+specifies the @option{filename}.
-The flags map the corresponding flags supported by libfreetype, and are
-a combination of the following values:
-@table @var
-@item default
-@item no_scale
-@item no_hinting
-@item render
-@item no_bitmap
-@item vertical_layout
-@item force_autohint
-@item crop_bitmap
-@item pedantic
-@item ignore_global_advance_width
-@item no_recurse
-@item ignore_transform
-@item monochrome
-@item linear_design
-@item no_autohint
-@item end table
-@end table
+For example, to render the file @file{sub.srt} on top of the input
+video, use the command:
+@example
+subtitles=sub.srt
+@end example
-Default value is "render".
+which is equivalent to:
+@example
+subtitles=filename=sub.srt
+@end example
-For more information consult the documentation for the FT_LOAD_*
-libfreetype flags.
+@section split
-@item tabsize
-The size in number of spaces to use for rendering the tab.
-Default value is 4.
+Split input video into several identical outputs.
-@item fix_bounds
-If true, check and fix text coords to avoid clipping.
-@end table
+The filter accepts a single parameter which specifies the number of outputs. If
+unspecified, it defaults to 2.
-For example the command:
+For example
@example
-drawtext="fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf: text='Test Text'"
+ffmpeg -i INPUT -filter_complex split=5 OUTPUT
@end example
+will create 5 copies of the input video.
-will draw "Test Text" with font FreeSerif, using the default values
-for the optional parameters.
-
-The command:
+For example:
@example
-drawtext="fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf: text='Test Text':\
- x=100: y=50: fontsize=24: fontcolor=yellow@@0.2: box=1: boxcolor=red@@0.2"
+[in] split [splitout1][splitout2];
+[splitout1] crop=100:100:0:0 [cropout];
+[splitout2] pad=200:200:100:100 [padout];
@end example
-will draw 'Test Text' with font FreeSerif of size 24 at position x=100
-and y=50 (counting from the top-left corner of the screen), text is
-yellow with a red box around it. Both the text and the box have an
-opacity of 20%.
+will create two separate outputs from the same input, one cropped and
+one padded.
-Note that the double quotes are not necessary if spaces are not used
-within the parameter list.
+@section super2xsai
-For more information about libfreetype, check:
-@url{http://www.freetype.org/}.
+Scale the input by 2x and smooth using the Super2xSaI (Scale and
+Interpolate) pixel art scaling algorithm.
-@section fade
+Useful for enlarging pixel art images without reducing sharpness.
-Apply fade-in/out effect to input video.
+@section swapuv
+Swap U & V plane.
-It accepts the parameters:
-@var{type}:@var{start_frame}:@var{nb_frames}
+@section thumbnail
+Select the most representative frame in a given sequence of consecutive frames.
-@var{type} specifies if the effect type, can be either "in" for
-fade-in, or "out" for a fade-out effect.
+It accepts as argument the batch size of frames to analyze (default @var{N}=100);
+in a set of @var{N} frames, the filter will pick one of them, and then handle
+the next batch of @var{N} frames until the end.
-@var{start_frame} specifies the number of the start frame for starting
-to apply the fade effect.
+Since the filter keeps track of the whole frame sequence, a bigger @var{N}
+value will result in a higher memory usage, so a high value is not recommended.
-@var{nb_frames} specifies the number of frames for which the fade
-effect has to last. At the end of the fade-in effect the output video
-will have the same intensity as the input video, at the end of the
-fade-out transition the output video will be completely black.
+The following example extracts one picture every 50 frames:
+@example
+thumbnail=50
+@end example
-A few usage examples follow, usable too as test scenarios.
+Complete example of a thumbnail creation with @command{ffmpeg}:
@example
-# fade in first 30 frames of video
-fade=in:0:30
+ffmpeg -i in.avi -vf thumbnail,scale=300:200 -frames:v 1 out.png
+@end example
-# fade out last 45 frames of a 200-frame video
-fade=out:155:45
+@section tile
-# fade in first 25 frames and fade out last 25 frames of a 1000-frame video
-fade=in:0:25, fade=out:975:25
+Tile several successive frames together.
-# make first 5 frames black, then fade in from frame 5-24
-fade=in:5:20
-@end example
+It accepts a list of options in the form of @var{key}=@var{value} pairs
+separated by ":". A description of the accepted options follows.
-@section fieldorder
+@table @option
-Transform the field order of the input video.
+@item layout
+Set the grid size (i.e. the number of lines and columns) in the form
+"@var{w}x@var{h}".
-It accepts one parameter which specifies the required field order that
-the input interlaced video will be transformed to. The parameter can
-assume one of the following values:
+@item margin
+Set the outer border margin in pixels.
-@table @option
-@item 0 or bff
-output bottom field first
-@item 1 or tff
-output top field first
-@end table
+@item padding
+Set the inner border thickness (i.e. the number of pixels between frames). For
+more advanced padding options (such as having different values for the edges),
+refer to the pad video filter.
-Default value is "tff".
+@item nb_frames
+Set the maximum number of frames to render in the given area. It must be less
+than or equal to @var{w}x@var{h}. The default value is @code{0}, meaning all
+the area will be used.
-Transformation is achieved by shifting the picture content up or down
-by one line, and filling the remaining line with appropriate picture content.
-This method is consistent with most broadcast field order converters.
+@end table
-If the input video is not flagged as being interlaced, or it is already
-flagged as being of the required output field order then this filter does
-not alter the incoming video.
+Alternatively, the options can be specified as a flat string:
-This filter is very useful when converting to or from PAL DV material,
-which is bottom field first.
+@var{layout}[:@var{nb_frames}[:@var{margin}[:@var{padding}]]]
-For example:
+For example, produce 8x8 PNG tiles of all keyframes (@option{-skip_frame
+nokey}) in a movie:
@example
-./avconv -i in.vob -vf "fieldorder=bff" out.dv
+ffmpeg -skip_frame nokey -i file.avi -vf 'scale=128:72,tile=8x8' -an -vsync 0 keyframes%03d.png
@end example
+The @option{-vsync 0} is necessary to prevent @command{ffmpeg} from
+duplicating each output frame to accommodate the originally detected frame
+rate.
-@section fifo
+Another example to display @code{5} pictures in an area of @code{3x2} frames,
+with @code{7} pixels between them, and @code{2} pixels of initial margin, using
+mixed flat and named options:
+@example
+tile=3x2:nb_frames=5:padding=7:margin=2
+@end example
+
+@section tinterlace
+
+Perform various types of temporal field interlacing.
+
+Frames are counted starting from 1, so the first input frame is
+considered odd.
-Buffer input images and send them when they are requested.
+This filter accepts options in the form of @var{key}=@var{value} pairs
+separated by ":".
+Alternatively, the @var{mode} option can be specified as a value alone,
+optionally followed by a ":" and further ":" separated @var{key}=@var{value}
+pairs.
-This filter is mainly useful when auto-inserted by the libavfilter
-framework.
+A description of the accepted options follows.
-The filter does not take parameters.
+@table @option
-@section format
+@item mode
+Specify the mode of the interlacing. This option can also be specified
+as a value alone. See below for a list of values for this option.
-Convert the input video to one of the specified pixel formats.
-Libavfilter will try to pick one that is supported for the input to
-the next filter.
+Available values are:
-The filter accepts a list of pixel format names, separated by ":",
-for example "yuv420p:monow:rgb24".
+@table @samp
+@item merge, 0
+Move odd frames into the upper field, even into the lower field,
+generating a double height frame at half framerate.
+
+@item drop_odd, 1
+Only output even frames, odd frames are dropped, generating a frame with
+unchanged height at half framerate.
+
+@item drop_even, 2
+Only output odd frames, even frames are dropped, generating a frame with
+unchanged height at half framerate.
+
+@item pad, 3
+Expand each frame to full height, but pad alternate lines with black,
+generating a frame with double height at the same input framerate.
+
+@item interleave_top, 4
+Interleave the upper field from odd frames with the lower field from
+even frames, generating a frame with unchanged height at half framerate.
+
+@item interleave_bottom, 5
+Interleave the lower field from odd frames with the upper field from
+even frames, generating a frame with unchanged height at half framerate.
+
+@item interlacex2, 6
+Double frame rate with unchanged height. Frames are inserted each
+containing the second temporal field from the previous input frame and
+the first temporal field from the next input frame. This mode relies on
+the top_field_first flag. Useful for interlaced video displays with no
+field synchronisation.
+@end table
-Some examples follow:
-@example
-# convert the input video to the format "yuv420p"
-format=yuv420p
+Numeric values are deprecated but are accepted for backward
+compatibility reasons.
-# convert the input video to any of the formats in the list
-format=yuv420p:yuv444p:yuv410p
-@end example
+Default mode is @code{merge}.
-@section fps
+@item flags
+Specify flags influencing the filter process.
-Convert the video to specified constant framerate by duplicating or dropping
-frames as necessary.
+Available value for @var{flags} is:
-This filter accepts the following named parameters:
@table @option
+@item low_pass_filter, vlfp
+Enable vertical low-pass filtering in the filter.
+Vertical low-pass filtering is required when creating an interlaced
+destination from a progressive source which contains high-frequency
+vertical detail. Filtering will reduce interlace 'twitter' and Moire
+patterning.
-@item fps
-Desired output framerate.
+Vertical low-pass filtering can only be enabled for @option{mode}
+@var{interleave_top} and @var{interleave_bottom}.
+@end table
@end table
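+
+For example, to interleave the fields at half framerate while enabling
+vertical low-pass filtering, a sketch combining the options above:
+@example
+tinterlace=mode=interleave_top:flags=low_pass_filter
+@end example
+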
-@anchor{frei0r}
-@section frei0r
+@section transpose
-Apply a frei0r effect to the input video.
+Transpose rows with columns in the input video and optionally flip it.
-To enable compilation of this filter you need to install the frei0r
-header and configure Libav with --enable-frei0r.
+The filter accepts parameters as a list of @var{key}=@var{value}
+pairs, separated by ':'. If the key of the first option is omitted,
+the arguments are interpreted according to the syntax
+@var{dir}:@var{passthrough}.
-The filter supports the syntax:
+@table @option
+@item dir
+Specify the transposition direction. Can assume the following values:
+
+@table @samp
+@item 0, 4
+Rotate by 90 degrees counterclockwise and vertically flip (default), that is:
@example
-@var{filter_name}[@{:|=@}@var{param1}:@var{param2}:...:@var{paramN}]
+L.R L.l
+. . -> . .
+l.r R.r
@end example
-@var{filter_name} is the name to the frei0r effect to load. If the
-environment variable @env{FREI0R_PATH} is defined, the frei0r effect
-is searched in each one of the directories specified by the colon
-separated list in @env{FREIOR_PATH}, otherwise in the standard frei0r
-paths, which are in this order: @file{HOME/.frei0r-1/lib/},
-@file{/usr/local/lib/frei0r-1/}, @file{/usr/lib/frei0r-1/}.
+@item 1, 5
+Rotate by 90 degrees clockwise, that is:
+@example
+L.R l.L
+. . -> . .
+l.r r.R
+@end example
-@var{param1}, @var{param2}, ... , @var{paramN} specify the parameters
-for the frei0r effect.
+@item 2, 6
+Rotate by 90 degrees counterclockwise, that is:
+@example
+L.R R.r
+. . -> . .
+l.r L.l
+@end example
-A frei0r effect parameter can be a boolean (whose values are specified
-with "y" and "n"), a double, a color (specified by the syntax
-@var{R}/@var{G}/@var{B}, @var{R}, @var{G}, and @var{B} being float
-numbers from 0.0 to 1.0) or by an @code{av_parse_color()} color
-description), a position (specified by the syntax @var{X}/@var{Y},
-@var{X} and @var{Y} being float numbers) and a string.
+@item 3, 7
+Rotate by 90 degrees clockwise and vertically flip, that is:
+@example
+L.R r.R
+. . -> . .
+l.r l.L
+@end example
+@end table
-The number and kind of parameters depend on the loaded effect. If an
-effect parameter is not specified the default value is set.
+For values between 4 and 7, the transposition is only done if the input
+video geometry is portrait and not landscape. These values are
+deprecated, the @code{passthrough} option should be used instead.
-Some examples follow:
-@example
-# apply the distort0r effect, set the first two double parameters
-frei0r=distort0r:0.5:0.01
+@item passthrough
+Do not apply the transposition if the input geometry matches the one
+specified by the value. It accepts the following values:
+@table @samp
+@item none
+Always apply transposition.
+@item portrait
+Preserve portrait geometry (when @var{height} >= @var{width}).
+@item landscape
+Preserve landscape geometry (when @var{width} >= @var{height}).
+@end table
-# apply the colordistance effect, takes a color as first parameter
-frei0r=colordistance:0.2/0.3/0.4
-frei0r=colordistance:violet
-frei0r=colordistance:0x112233
+Default value is @code{none}.
+@end table
-# apply the perspective effect, specify the top left and top right
-# image positions
-frei0r=perspective:0.2/0.2:0.8/0.2
+For example to rotate by 90 degrees clockwise and preserve portrait
+layout:
+@example
+transpose=dir=1:passthrough=portrait
@end example
-For more information see:
-@url{http://piksel.org/frei0r}
+The command above can also be specified as:
+@example
+transpose=1:portrait
+@end example
-@section gradfun
+@section unsharp
-Fix the banding artifacts that are sometimes introduced into nearly flat
-regions by truncation to 8bit colordepth.
-Interpolate the gradients that should go where the bands are, and
-dither them.
+Sharpen or blur the input video.
-This filter is designed for playback only. Do not use it prior to
-lossy compression, because compression tends to lose the dither and
-bring back the bands.
+This filter accepts parameters as a list of @var{key}=@var{value} pairs,
+separated by ":".
-The filter takes two optional parameters, separated by ':':
-@var{strength}:@var{radius}
+If the key of the first option is omitted, the arguments are
+interpreted according to the syntax:
+@var{luma_msize_x}:@var{luma_msize_y}:@var{luma_amount}:@var{chroma_msize_x}:@var{chroma_msize_y}:@var{chroma_amount}
-@var{strength} is the maximum amount by which the filter will change
-any one pixel. Also the threshold for detecting nearly flat
-regions. Acceptable values range from .51 to 255, default value is
-1.2, out-of-range values will be clipped to the valid range.
+A description of the accepted options follows.
-@var{radius} is the neighborhood to fit the gradient to. A larger
-radius makes for smoother gradients, but also prevents the filter from
-modifying the pixels near detailed regions. Acceptable values are
-8-32, default value is 16, out-of-range values will be clipped to the
-valid range.
+@table @option
+@item luma_msize_x, lx
+@item chroma_msize_x, cx
+Set the luma/chroma matrix horizontal size. It must be an odd integer
+between 3 and 63, default value is 5.
+
+@item luma_msize_y, ly
+@item chroma_msize_y, cy
+Set the luma/chroma matrix vertical size. It must be an odd integer
+between 3 and 63, default value is 5.
+
+@item luma_amount, la
+@item chroma_amount, ca
+Set the luma/chroma effect strength. It can be a float number;
+reasonable values lie between -1.5 and 1.5.
+
+Negative values will blur the input video, while positive values will
+sharpen it, a value of zero will disable the effect.
+
+Default value is 1.0 for @option{luma_amount}, 0.0 for
+@option{chroma_amount}.
+@end table
+
+@subsection Examples
+@itemize
+@item
+Apply strong luma sharpen effect:
@example
-# default parameters
-gradfun=1.2:16
+unsharp=7:7:2.5
+@end example
-# omitting radius
-gradfun=1.2
+@item
+Apply strong blur of both luma and chroma parameters:
+@example
+unsharp=7:7:-2:7:7:-2
@end example
+@end itemize
-@section hflip
+@section vflip
-Flip the input video horizontally.
+Flip the input video vertically.
-For example to horizontally flip the input video with @command{avconv}:
@example
-avconv -i in.avi -vf "hflip" out.avi
+ffmpeg -i in.avi -vf "vflip" out.avi
@end example
-@section hqdn3d
-
-High precision/quality 3d denoise filter. This filter aims to reduce
-image noise producing smooth images and making still images really
-still. It should enhance compressibility.
+@section yadif
-It accepts the following optional parameters:
-@var{luma_spatial}:@var{chroma_spatial}:@var{luma_tmp}:@var{chroma_tmp}
+Deinterlace the input video ("yadif" means "yet another deinterlacing
+filter").
-@table @option
-@item luma_spatial
-a non-negative float number which specifies spatial luma strength,
-defaults to 4.0
+The filter accepts parameters as a list of @var{key}=@var{value}
+pairs, separated by ":". If the key of the first options is omitted,
+the arguments are interpreted according to syntax
+@var{mode}:@var{parity}:@var{deint}.
-@item chroma_spatial
-a non-negative float number which specifies spatial chroma strength,
-defaults to 3.0*@var{luma_spatial}/4.0
+The description of the accepted parameters follows.
-@item luma_tmp
-a float number which specifies luma temporal strength, defaults to
-6.0*@var{luma_spatial}/4.0
+@table @option
+@item mode
+Specify the interlacing mode to adopt. Accept one of the following
+values:
-@item chroma_tmp
-a float number which specifies chroma temporal strength, defaults to
-@var{luma_tmp}*@var{chroma_spatial}/@var{luma_spatial}
+@table @option
+@item 0, send_frame
+output 1 frame for each frame
+@item 1, send_field
+output 1 frame for each field
+@item 2, send_frame_nospatial
+like @code{send_frame} but skip spatial interlacing check
+@item 3, send_field_nospatial
+like @code{send_field} but skip spatial interlacing check
@end table
-@section lut, lutrgb, lutyuv
-
-Compute a look-up table for binding each pixel component input value
-to an output value, and apply it to input video.
-
-@var{lutyuv} applies a lookup table to a YUV input video, @var{lutrgb}
-to an RGB input video.
+Default value is @code{send_frame}.
-These filters accept in input a ":"-separated list of options, which
-specify the expressions used for computing the lookup table for the
-corresponding pixel component values.
+@item parity
+Specify the picture field parity assumed for the input interlaced
+video. Accept one of the following values:
-The @var{lut} filter requires either YUV or RGB pixel formats in
-input, and accepts the options:
@table @option
-@item @var{c0} (first pixel component)
-@item @var{c1} (second pixel component)
-@item @var{c2} (third pixel component)
-@item @var{c3} (fourth pixel component, corresponds to the alpha component)
+@item 0, tff
+assume top field first
+@item 1, bff
+assume bottom field first
+@item -1, auto
+enable automatic detection
@end table
-The exact component associated to each option depends on the format in
-input.
+Default value is @code{auto}.
+If the interlacing is unknown or the decoder does not export this information,
+top field first will be assumed.
-The @var{lutrgb} filter requires RGB pixel formats in input, and
-accepts the options:
-@table @option
-@item @var{r} (red component)
-@item @var{g} (green component)
-@item @var{b} (blue component)
-@item @var{a} (alpha component)
-@end table
+@item deint
+Specify which frames to deinterlace. Accept one of the following
+values:
-The @var{lutyuv} filter requires YUV pixel formats in input, and
-accepts the options:
@table @option
-@item @var{y} (Y/luminance component)
-@item @var{u} (U/Cb component)
-@item @var{v} (V/Cr component)
-@item @var{a} (alpha component)
+@item 0, all
+deinterlace all frames
+@item 1, interlaced
+only deinterlace frames marked as interlaced
@end table
-The expressions can contain the following constants and functions:
-
-@table @option
-@item E, PI, PHI
-the corresponding mathematical approximated values for e
-(euler number), pi (greek PI), PHI (golden ratio)
-
-@item w, h
-the input width and height
-
-@item val
-input value for the pixel component
-
-@item clipval
-the input value clipped in the @var{minval}-@var{maxval} range
+Default value is @code{all}.
+@end table
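+
+For example, to output one frame for each field, with automatic parity
+detection, deinterlacing only the frames marked as interlaced:
+@example
+yadif=mode=send_field:parity=auto:deint=interlaced
+@end example
+
+which, omitting the keys, can also be written as:
+@example
+yadif=send_field:auto:interlaced
+@end example
+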
-@item maxval
-maximum value for the pixel component
+@c man end VIDEO FILTERS
-@item minval
-minimum value for the pixel component
+@chapter Video Sources
+@c man begin VIDEO SOURCES
-@item negval
-the negated value for the pixel component value clipped in the
-@var{minval}-@var{maxval} range , it corresponds to the expression
-"maxval-clipval+minval"
+Below is a description of the currently available video sources.
-@item clip(val)
-the computed value in @var{val} clipped in the
-@var{minval}-@var{maxval} range
+@section buffer
-@item gammaval(gamma)
-the computed gamma correction value of the pixel component value
-clipped in the @var{minval}-@var{maxval} range, corresponds to the
-expression
-"pow((clipval-minval)/(maxval-minval)\,@var{gamma})*(maxval-minval)+minval"
+Buffer video frames, and make them available to the filter chain.
-@end table
+This source is mainly intended for programmatic use, in particular
+through the interface defined in @file{libavfilter/vsrc_buffer.h}.
-All expressions default to "val".
+It accepts a list of options in the form of @var{key}=@var{value} pairs
+separated by ":". A description of the accepted options follows.
-Some examples follow:
-@example
-# negate input video
-lutrgb="r=maxval+minval-val:g=maxval+minval-val:b=maxval+minval-val"
-lutyuv="y=maxval+minval-val:u=maxval+minval-val:v=maxval+minval-val"
+@table @option
-# the above is the same as
-lutrgb="r=negval:g=negval:b=negval"
-lutyuv="y=negval:u=negval:v=negval"
+@item video_size
+Specify the size (width and height) of the buffered video frames.
-# negate luminance
-lutyuv=negval
+@item pix_fmt
+A string representing the pixel format of the buffered video frames.
+It may be a number corresponding to a pixel format, or a pixel format
+name.
-# remove chroma components, turns the video into a graytone image
-lutyuv="u=128:v=128"
+@item time_base
+Specify the timebase assumed by the timestamps of the buffered frames.
-# apply a luma burning effect
-lutyuv="y=2*val"
+@item frame_rate
+Specify the frame rate expected for the video stream.
-# remove green and blue components
-lutrgb="g=0:b=0"
+@item pixel_aspect
+Specify the sample aspect ratio assumed by the video frames.
-# set a constant alpha channel value on input
-format=rgba,lutrgb=a="maxval-minval/2"
+@item sws_param
+Specify the optional parameters to be used for the scale filter which
+is automatically inserted when a change in the input size or format is
+detected.
+@end table
-# correct luminance gamma by a 0.5 factor
-lutyuv=y=gammaval(0.5)
+For example:
+@example
+buffer=size=320x240:pix_fmt=yuv410p:time_base=1/24:pixel_aspect=1/1
@end example
-@section negate
-
-Negate input video.
+will instruct the source to accept video frames with size 320x240 and
+with format "yuv410p", assuming 1/24 as the timestamp timebase and
+square pixels (1:1 sample aspect ratio).
+Since the pixel format with name "yuv410p" corresponds to the number 6
+(check the enum AVPixelFormat definition in @file{libavutil/pixfmt.h}),
+this example corresponds to:
+@example
+buffer=size=320x240:pix_fmt=6:time_base=1/24:pixel_aspect=1/1
+@end example
-This filter accepts an integer in input, if non-zero it negates the
-alpha component (if available). The default value in input is 0.
+Alternatively, the options can be specified as a flat string, but this
+syntax is deprecated:
-Force libavfilter not to use any of the specified pixel formats for the
-input to the next filter.
+@var{width}:@var{height}:@var{pix_fmt}:@var{time_base.num}:@var{time_base.den}:@var{pixel_aspect.num}:@var{pixel_aspect.den}[:@var{sws_param}]
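+
+With the flat syntax, the first example above may be written as:
+@example
+buffer=320:240:yuv410p:1:24:1:1
+@end example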
-The filter accepts a list of pixel format names, separated by ":",
-for example "yuv420p:monow:rgb24".
+@section cellauto
-Some examples follow:
-@example
-# force libavfilter to use a format different from "yuv420p" for the
-# input to the vflip filter
-noformat=yuv420p,vflip
+Create a pattern generated by an elementary cellular automaton.
-# convert the input video to any of the formats not contained in the list
-noformat=yuv420p:yuv444p:yuv410p
-@end example
+The initial state of the cellular automaton can be defined through the
+@option{filename} and @option{pattern} options. If those options are
+not specified, an initial state is created randomly.
-@section null
+At each new frame a new row in the video is filled with the result of
+the next generation of the cellular automaton. The behavior when the
+whole frame is filled is defined by the @option{scroll} option.
-Pass the video source unchanged to the output.
+This source accepts a list of options in the form of
+@var{key}=@var{value} pairs separated by ":". A description of the
+accepted options follows.
-@section ocv
+@table @option
+@item filename, f
+Read the initial cellular automaton state, i.e. the starting row, from
+the specified file.
+In the file, each non-whitespace character is considered an alive
+cell, a newline will terminate the row, and further characters in the
+file will be ignored.
-Apply video transform using libopencv.
+@item pattern, p
+Read the initial cellular automaton state, i.e. the starting row, from
+the specified string.
-To enable this filter install libopencv library and headers and
-configure Libav with --enable-libopencv.
+Each non-whitespace character in the string is considered an alive
+cell, a newline will terminate the row, and further characters in the
+string will be ignored.
-The filter takes the parameters: @var{filter_name}@{:=@}@var{filter_params}.
+@item rate, r
+Set the video rate, that is the number of frames generated per second.
+Default is 25.
-@var{filter_name} is the name of the libopencv filter to apply.
+@item random_fill_ratio, ratio
+Set the random fill ratio for the initial cellular automaton row. It
+is a floating point number ranging from 0 to 1, and defaults to
+1/PHI.
-@var{filter_params} specifies the parameters to pass to the libopencv
-filter. If not specified the default values are assumed.
+This option is ignored when a file or a pattern is specified.
-Refer to the official libopencv documentation for more precise
-information:
-@url{http://opencv.willowgarage.com/documentation/c/image_filtering.html}
+@item random_seed, seed
+Set the seed for randomly filling the initial row. It must be an
+integer between 0 and UINT32_MAX. If not specified, or if explicitly
+set to -1, the filter will try to use a good random seed on a best
+effort basis.
-Follows the list of supported libopencv filters.
+@item rule
+Set the cellular automaton rule; it is a number in the range 0-255.
+Default value is 110.
-@anchor{dilate}
-@subsection dilate
+@item size, s
+Set the size of the output video.
-Dilate an image by using a specific structuring element.
-This filter corresponds to the libopencv function @code{cvDilate}.
+If @option{filename} or @option{pattern} is specified, the size is set
+by default to the width of the specified initial state row, and the
+height is set to @var{width} * PHI.
-It accepts the parameters: @var{struct_el}:@var{nb_iterations}.
+If @option{size} is set, it must contain the width of the specified
+pattern string, and the specified pattern will be centered in the
+larger row.
-@var{struct_el} represents a structuring element, and has the syntax:
-@var{cols}x@var{rows}+@var{anchor_x}x@var{anchor_y}/@var{shape}
+If a filename or a pattern string is not specified, the size value
+defaults to "320x518" (used for a randomly generated initial state).
-@var{cols} and @var{rows} represent the number of columns and rows of
-the structuring element, @var{anchor_x} and @var{anchor_y} the anchor
-point, and @var{shape} the shape for the structuring element, and
-can be one of the values "rect", "cross", "ellipse", "custom".
+@item scroll
+If set to 1, scroll the output upward once all the rows in the output
+have been filled. If set to 0, the newly generated row will be
+written over the top row just after the bottom row is filled.
+Defaults to 1.
-If the value for @var{shape} is "custom", it must be followed by a
-string of the form "=@var{filename}". The file with name
-@var{filename} is assumed to represent a binary image, with each
-printable character corresponding to a bright pixel. When a custom
-@var{shape} is used, @var{cols} and @var{rows} are ignored, the number
-or columns and rows of the read file are assumed instead.
+@item start_full, full
+If set to 1, completely fill the output with generated rows before
+outputting the first frame.
+This is the default behavior; to disable it, set the value to 0.
-The default value for @var{struct_el} is "3x3+0x0/rect".
+@item stitch
+If set to 1, stitch the left and right row edges together.
+This is the default behavior; to disable it, set the value to 0.
+@end table
-@var{nb_iterations} specifies the number of times the transform is
-applied to the image, and defaults to 1.
+@subsection Examples
-Follow some example:
+@itemize
+@item
+Read the initial state from @file{pattern}, and specify an output of
+size 200x400.
@example
-# use the default values
-ocv=dilate
+cellauto=f=pattern:s=200x400
+@end example
-# dilate using a structuring element with a 5x5 cross, iterate two times
-ocv=dilate=5x5+2x2/cross:2
+@item
+Generate a random initial row with a width of 200 cells, with a fill
+ratio of 2/3:
+@example
+cellauto=ratio=2/3:s=200x200
+@end example
-# read the shape from the file diamond.shape, iterate two times
-# the file diamond.shape may contain a pattern of characters like this:
-# *
-# ***
-# *****
-# ***
-# *
-# the specified cols and rows are ignored (but not the anchor point coordinates)
-ocv=0x0+2x2/custom=diamond.shape:2
+@item
+Create a pattern generated by rule 18 starting from a single alive cell
+centered on an initial row with width 100:
+@example
+cellauto=p=@@:s=100x400:full=0:rule=18
@end example
-@subsection erode
+@item
+Specify a more elaborate initial pattern:
+@example
+cellauto=p='@@@@ @@ @@@@':s=100x400:full=0:rule=18
+@end example
-Erode an image by using a specific structuring element.
-This filter corresponds to the libopencv function @code{cvErode}.
+@end itemize
-The filter accepts the parameters: @var{struct_el}:@var{nb_iterations},
-with the same syntax and semantics as the @ref{dilate} filter.
+@section mandelbrot
-@subsection smooth
+Generate a Mandelbrot set fractal, and progressively zoom towards the
+point specified with @var{start_x} and @var{start_y}.
-Smooth the input video.
+This source accepts a list of options in the form of
+@var{key}=@var{value} pairs separated by ":". A description of the
+accepted options follows.
-The filter takes the following parameters:
-@var{type}:@var{param1}:@var{param2}:@var{param3}:@var{param4}.
+@table @option
-@var{type} is the type of smooth filter to apply, and can be one of
-the following values: "blur", "blur_no_scale", "median", "gaussian",
-"bilateral". The default value is "gaussian".
+@item end_pts
+Set the terminal pts value. Default value is 400.
-@var{param1}, @var{param2}, @var{param3}, and @var{param4} are
-parameters whose meanings depend on smooth type. @var{param1} and
-@var{param2} accept integer positive values or 0, @var{param3} and
-@var{param4} accept float values.
+@item end_scale
+Set the terminal scale value.
+Must be a floating point value. Default value is 0.3.
-The default value for @var{param1} is 3, the default value for the
-other parameters is 0.
+@item inner
+Set the inner coloring mode, that is the algorithm used to draw the
+Mandelbrot fractal internal region.
-These parameters correspond to the parameters assigned to the
-libopencv function @code{cvSmooth}.
+It shall assume one of the following values:
+@table @option
+@item black
+Set black mode.
+@item convergence
+Show time until convergence.
+@item mincol
+Set color based on point closest to the origin of the iterations.
+@item period
+Set period mode.
+@end table
-@anchor{overlay}
-@section overlay
+Default value is @var{mincol}.
-Overlay one video on top of another.
+@item bailout
+Set the bailout value. Default value is 10.0.
-It takes two inputs and one output, the first input is the "main"
-video on which the second input is overlayed.
+@item maxiter
+Set the maximum number of iterations performed by the rendering
+algorithm. Default value is 7189.
-It accepts the parameters: @var{x}:@var{y}.
+@item outer
+Set outer coloring mode.
+It shall assume one of the following values:
+@table @option
+@item iteration_count
+Set iteration count mode.
+@item normalized_iteration_count
+Set normalized iteration count mode.
+@end table
+Default value is @var{normalized_iteration_count}.
-@var{x} is the x coordinate of the overlayed video on the main video,
-@var{y} is the y coordinate. The parameters are expressions containing
-the following parameters:
+@item rate, r
+Set frame rate, expressed as number of frames per second. Default
+value is "25".
-@table @option
-@item main_w, main_h
-main input width and height
+@item size, s
+Set frame size. Default value is "640x480".
-@item W, H
-same as @var{main_w} and @var{main_h}
+@item start_scale
+Set the initial scale value. Default value is 3.0.
-@item overlay_w, overlay_h
-overlay input width and height
+@item start_x
+Set the initial x position. Must be a floating point value between
+-100 and 100. Default value is -0.743643887037158704752191506114774.
-@item w, h
-same as @var{overlay_w} and @var{overlay_h}
+@item start_y
+Set the initial y position. Must be a floating point value between
+-100 and 100. Default value is -0.131825904205311970493132056385139.
@end table
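+
+For example, the following @command{ffplay} command (a minimal sketch
+using only the options described above) renders the zoom at size
+640x480 and 25 frames per second:
+@example
+ffplay -f lavfi mandelbrot=s=640x480:r=25
+@end example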
-Be aware that frames are taken from each input video in timestamp
-order, hence, if their initial timestamps differ, it is a a good idea
-to pass the two inputs through a @var{setpts=PTS-STARTPTS} filter to
-have them begin in the same zero timestamp, as it does the example for
-the @var{movie} filter.
-
-Follow some examples:
-@example
-# draw the overlay at 10 pixels from the bottom right
-# corner of the main video.
-overlay=main_w-overlay_w-10:main_h-overlay_h-10
+@section mptestsrc
-# insert a transparent PNG logo in the bottom left corner of the input
-avconv -i input -i logo -filter_complex 'overlay=10:main_h-overlay_h-10' output
+Generate various test patterns, as generated by the MPlayer test filter.
-# insert 2 different transparent PNG logos (second logo on bottom
-# right corner):
-avconv -i input -i logo1 -i logo2 -filter_complex
-'overlay=10:H-h-10,overlay=W-w-10:H-h-10' output
+The size of the generated video is fixed, and is 256x256.
+This source is useful in particular for testing encoding features.
-# add a transparent color layer on top of the main video,
-# WxH specifies the size of the main input to the overlay filter
-color=red@.3:WxH [over]; [in][over] overlay [out]
-@end example
+This source accepts an optional sequence of @var{key}=@var{value} pairs,
+separated by ":". The description of the accepted options follows.
-You can chain together more overlays but the efficiency of such
-approach is yet to be tested.
+@table @option
-@section pad
+@item rate, r
+Specify the frame rate of the sourced video, as the number of frames
+generated per second. It has to be a string in the format
+@var{frame_rate_num}/@var{frame_rate_den}, an integer number, a float
+number or a valid video frame rate abbreviation. The default value is
+"25".
-Add paddings to the input image, and places the original input at the
-given coordinates @var{x}, @var{y}.
+@item duration, d
+Set the video duration of the sourced video. The accepted syntax is:
+@example
+[-]HH:MM:SS[.m...]
+[-]S+[.m...]
+@end example
+See also the function @code{av_parse_time()}.
-It accepts the following parameters:
-@var{width}:@var{height}:@var{x}:@var{y}:@var{color}.
+If not specified, or the expressed duration is negative, the video is
+supposed to be generated forever.
-The parameters @var{width}, @var{height}, @var{x}, and @var{y} are
-expressions containing the following constants:
+@item test, t
+Set the number or the name of the test to perform. Supported tests are:
@table @option
-@item E, PI, PHI
-the corresponding mathematical approximated values for e
-(euler number), pi (greek PI), phi (golden ratio)
+@item dc_luma
+@item dc_chroma
+@item freq_luma
+@item freq_chroma
+@item amp_luma
+@item amp_chroma
+@item cbp
+@item mv
+@item ring1
+@item ring2
+@item all
+@end table
-@item in_w, in_h
-the input video width and height
+Default value is "all", which will cycle through the list of all tests.
+@end table
-@item iw, ih
-same as @var{in_w} and @var{in_h}
+For example the following:
+@example
+mptestsrc=t=dc_luma
+@end example
-@item out_w, out_h
-the output width and height, that is the size of the padded area as
-specified by the @var{width} and @var{height} expressions
+will generate a "dc_luma" test pattern.
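+
+The test, duration and rate options can be combined; for instance the
+following sketch generates the "ring1" pattern for 10 seconds at 30
+frames per second:
+@example
+mptestsrc=t=ring1:d=10:r=30
+@end example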
-@item ow, oh
-same as @var{out_w} and @var{out_h}
+@section frei0r_src
-@item x, y
-x and y offsets as specified by the @var{x} and @var{y}
-expressions, or NAN if not yet specified
+Provide a frei0r source.
-@item a
-input display aspect ratio, same as @var{iw} / @var{ih}
+To enable compilation of this filter you need to install the frei0r
+header and configure FFmpeg with @code{--enable-frei0r}.
-@item hsub, vsub
-horizontal and vertical chroma subsample values. For example for the
-pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
-@end table
+The source supports the syntax:
+@example
+@var{size}:@var{rate}:@var{src_name}[@{=|:@}@var{param1}:@var{param2}:...:@var{paramN}]
+@end example
+
+@var{size} is the size of the video to generate; it may be a string of
+the form @var{width}x@var{height} or a frame size abbreviation.
+@var{rate} is the rate of the video to generate; it may be a string of
+the form @var{num}/@var{den} or a frame rate abbreviation.
+@var{src_name} is the name of the frei0r source to load. For more
+information regarding frei0r and how to set the parameters read the
+section @ref{frei0r} in the description of the video filters.
-Follows the description of the accepted parameters.
+For example, to generate a frei0r partik0l source with size 200x200
+and frame rate 10 which is then overlaid on the main input of the
+overlay filter:
+@example
+frei0r_src=200x200:10:partik0l=1234 [overlay]; [in][overlay] overlay
+@end example
-@table @option
-@item width, height
+@section life
-Specify the size of the output image with the paddings added. If the
-value for @var{width} or @var{height} is 0, the corresponding input size
-is used for the output.
+Generate a life pattern.
-The @var{width} expression can reference the value set by the
-@var{height} expression, and vice versa.
+This source is based on a generalization of John Conway's life game.
-The default value of @var{width} and @var{height} is 0.
+The sourced input represents a life grid, each pixel represents a cell
+which can be in one of two possible states, alive or dead. Every cell
+interacts with its eight neighbours, which are the cells that are
+horizontally, vertically, or diagonally adjacent.
-@item x, y
+At each interaction the grid evolves according to the adopted rule,
+which specifies the number of alive neighbor cells which will make a
+cell stay alive or be born. The @option{rule} option allows one to
+specify the rule to adopt.
-Specify the offsets where to place the input image in the padded area
-with respect to the top/left border of the output image.
+This source accepts a list of options in the form of
+@var{key}=@var{value} pairs separated by ":". A description of the
+accepted options follows.
-The @var{x} expression can reference the value set by the @var{y}
-expression, and vice versa.
+@table @option
+@item filename, f
+Set the file from which to read the initial grid state. In the file,
+each non-whitespace character is considered an alive cell, and newline
+is used to delimit the end of each row.
-The default value of @var{x} and @var{y} is 0.
+If this option is not specified, the initial grid is generated
+randomly.
-@item color
+@item rate, r
+Set the video rate, that is the number of frames generated per second.
+Default is 25.
+
+@item random_fill_ratio, ratio
+Set the random fill ratio for the initial random grid. It is a
+floating point number ranging from 0 to 1, and defaults to 1/PHI.
+It is ignored when a file is specified.
+
+@item random_seed, seed
+Set the seed for filling the initial random grid. It must be an
+integer between 0 and UINT32_MAX. If not specified, or if explicitly
+set to -1, the filter will try to use a good random seed on a best
+effort basis.
+
+@item rule
+Set the life rule.
+
+A rule can be specified with a code of the kind "S@var{NS}/B@var{NB}",
+where @var{NS} and @var{NB} are sequences of numbers in the range 0-8,
+@var{NS} specifies the number of alive neighbor cells which make a
+live cell stay alive, and @var{NB} the number of alive neighbor cells
+which make a dead cell become alive (i.e. be "born").
+"s" and "b" can be used in place of "S" and "B", respectively.
+
+Alternatively a rule can be specified by an 18-bit integer. The 9
+high-order bits are used to encode the next cell state, if it is
+alive, for each number of alive neighbor cells; the 9 low-order bits
+specify the rule for "borning" new cells. Higher-order bits encode a
+higher number of neighbor cells.
+For example the number 6153 = @code{(12<<9)+9} specifies a stay alive
+rule of 12 and a born rule of 9, which corresponds to "S23/B03".
+
+Default value is "S23/B3", which is the original Conway's game of life
+rule, and will keep a cell alive if it has 2 or 3 neighbor alive
+cells, and will born a new cell if there are three alive cells around
+a dead cell.
-Specify the color of the padded area, it can be the name of a color
-(case insensitive match) or a 0xRRGGBB[AA] sequence.
+@item size, s
+Set the size of the output video.
-The default value of @var{color} is "black".
+If @option{filename} is specified, the size is set by default to the
+same size as the input file. If @option{size} is set, it must be large
+enough to contain the grid defined in the input file, and the initial
+grid defined in that file is centered in the larger resulting area.
-@end table
+If a filename is not specified, the size value defaults to "320x240"
+(used for a randomly generated initial grid).
-Some examples follow:
+@item stitch
+If set to 1, stitch the left and right grid edges together, and the
+top and bottom edges as well. Defaults to 1.
-@example
-# Add paddings with color "violet" to the input video. Output video
-# size is 640x480, the top-left corner of the input video is placed at
-# column 0, row 40.
-pad=640:480:0:40:violet
+@item mold
+Set cell mold speed. If set, a dead cell will go from @option{death_color} to
+@option{mold_color} with a step of @option{mold}. @option{mold} can have a
+value from 0 to 255.
-# pad the input to get an output with dimensions increased bt 3/2,
-# and put the input video at the center of the padded area
-pad="3/2*iw:3/2*ih:(ow-iw)/2:(oh-ih)/2"
+@item life_color
+Set the color of living (or new born) cells.
-# pad the input to get a squared output with size equal to the maximum
-# value between the input width and height, and put the input video at
-# the center of the padded area
-pad="max(iw\,ih):ow:(ow-iw)/2:(oh-ih)/2"
+@item death_color
+Set the color of dead cells. If @option{mold} is set, this is the first color
+used to represent a dead cell.
-# pad the input to get a final w/h ratio of 16:9
-pad="ih*16/9:ih:(ow-iw)/2:(oh-ih)/2"
+@item mold_color
+Set mold color, for definitely dead and moldy cells.
+@end table
-# double output size and put the input video in the bottom-right
-# corner of the output padded area
-pad="2*iw:2*ih:ow-iw:oh-ih"
+@subsection Examples
+
+@itemize
+@item
+Read a grid from @file{pattern}, and center it on a grid of size
+300x300 pixels:
+@example
+life=f=pattern:s=300x300
@end example
-@section pixdesctest
+@item
+Generate a random grid of size 200x200, with a fill ratio of 2/3:
+@example
+life=ratio=2/3:s=200x200
+@end example
-Pixel format descriptor test filter, mainly useful for internal
-testing. The output video should be equal to the input video.
+@item
+Specify a custom rule for evolving a randomly generated grid:
+@example
+life=rule=S14/B34
+@end example
-For example:
+@item
+Full example with slow death effect (mold) using @command{ffplay}:
@example
-format=monow, pixdesctest
+ffplay -f lavfi life=s=300x200:mold=10:r=60:ratio=0.1:death_color=#C83232:life_color=#00ff00,scale=1200:800:flags=16
@end example
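+
+@item
+Use the equivalent 18-bit integer form of the default rule; following
+the encoding described above, "S23/B3" corresponds to
+@code{(12<<9)+8} = 6152:
+@example
+life=rule=6152
+@end example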
+@end itemize
-can be used to test the monowhite pixel format descriptor definition.
+@section color, nullsrc, rgbtestsrc, smptebars, testsrc
-@section scale
+The @code{color} source provides a uniformly colored input.
-Scale the input video to @var{width}:@var{height} and/or convert the image format.
+The @code{nullsrc} source returns unprocessed video frames. It is
+mainly useful in analysis / debugging tools, or as the source for
+filters which ignore the input data.
-The parameters @var{width} and @var{height} are expressions containing
-the following constants:
+The @code{rgbtestsrc} source generates an RGB test pattern useful for
+detecting RGB vs BGR issues. You should see a red, green and blue
+stripe from top to bottom.
+
+The @code{smptebars} source generates a color bars pattern, based on
+the SMPTE Engineering Guideline EG 1-1990.
+
+The @code{testsrc} source generates a test video pattern, showing a
+color pattern, a scrolling gradient and a timestamp. This is mainly
+intended for testing purposes.
+
+These sources accept an optional sequence of @var{key}=@var{value} pairs,
+separated by ":". The description of the accepted options follows.
@table @option
-@item E, PI, PHI
-the corresponding mathematical approximated values for e
-(euler number), pi (greek PI), phi (golden ratio)
-@item in_w, in_h
-the input width and height
+@item color, c
+Specify the color of the source, only used in the @code{color}
+source. It can be the name of a color (case insensitive match) or a
+0xRRGGBB[AA] sequence, possibly followed by an alpha specifier. The
+default value is "black".
-@item iw, ih
-same as @var{in_w} and @var{in_h}
+@item size, s
+Specify the size of the sourced video; it may be a string of the form
+@var{width}x@var{height}, or the name of a size abbreviation. The
+default value is "320x240".
-@item out_w, out_h
-the output (cropped) width and height
+@item rate, r
+Specify the frame rate of the sourced video, as the number of frames
+generated per second. It has to be a string in the format
+@var{frame_rate_num}/@var{frame_rate_den}, an integer number, a float
+number or a valid video frame rate abbreviation. The default value is
+"25".
-@item ow, oh
-same as @var{out_w} and @var{out_h}
+@item sar
+Set the sample aspect ratio of the sourced video.
+
+@item duration, d
+Set the video duration of the sourced video. The accepted syntax is:
+@example
+[-]HH[:MM[:SS[.m...]]]
+[-]S+[.m...]
+@end example
+See also the function @code{av_parse_time()}.
+
+If not specified, or the expressed duration is negative, the video is
+supposed to be generated forever.
+
+@item decimals, n
+Set the number of decimals to show in the timestamp, only used in the
+@code{testsrc} source.
+
+The displayed timestamp value will correspond to the original
+timestamp value multiplied by the power of 10 of the specified
+value. Default value is 0.
+@end table
+
+For example the following:
+@example
+testsrc=duration=5.3:size=qcif:rate=10
+@end example
-@item dar, a
-input display aspect ratio, same as @var{iw} / @var{ih}
+will generate a video with a duration of 5.3 seconds, with size
+176x144 and a frame rate of 10 frames per second.
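+
+The @option{decimals} option can be combined with the above; for
+instance, per the description of @option{decimals}, the following
+displays a timestamp multiplied by 1000:
+@example
+testsrc=duration=5.3:size=qcif:rate=10:decimals=3
+@end example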
-@item sar
-input sample aspect ratio
+The following graph description will generate a red source
+with an opacity of 0.2, with size "qcif" and a frame rate of 10
+frames per second.
+@example
+color=c=red@@0.2:s=qcif:r=10
+@end example
-@item hsub, vsub
-horizontal and vertical chroma subsample values. For example for the
-pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
-@end table
+If the input content is to be ignored, @code{nullsrc} can be used. The
+following command generates noise in the luminance plane by employing
+the @code{geq} filter:
+@example
+nullsrc=s=256x256, geq=random(1)*255:128:128
+@end example
-If the input image format is different from the format requested by
-the next filter, the scale filter will convert the input to the
-requested format.
+@c man end VIDEO SOURCES
-If the value for @var{width} or @var{height} is 0, the respective input
-size is used for the output.
+@chapter Video Sinks
+@c man begin VIDEO SINKS
-If the value for @var{width} or @var{height} is -1, the scale filter will
-use, for the respective output size, a value that maintains the aspect
-ratio of the input image.
+Below is a description of the currently available video sinks.
-The default value of @var{width} and @var{height} is 0.
+@section buffersink
-Some examples follow:
-@example
-# scale the input video to a size of 200x100.
-scale=200:100
+Buffer video frames, and make them available to the end of the filter
+graph.
-# scale the input to 2x
-scale=2*iw:2*ih
-# the above is the same as
-scale=2*in_w:2*in_h
+This sink is mainly intended for programmatic use, in particular
+through the interface defined in @file{libavfilter/buffersink.h}.
-# scale the input to half size
-scale=iw/2:ih/2
+It does not require a string parameter as input, but you need to
+specify a pointer to a list of supported pixel formats terminated by
+-1 in the opaque parameter provided to @code{avfilter_init_filter}
+when initializing this sink.
-# increase the width, and set the height to the same size
-scale=3/2*iw:ow
+@section nullsink
-# seek for Greek harmony
-scale=iw:1/PHI*iw
-scale=ih*PHI:ih
+Null video sink: it does absolutely nothing with the input video. It
+is mainly useful as a template and for use in analysis / debugging
+tools.
-# increase the height, and set the width to 3/2 of the height
-scale=3/2*oh:3/5*ih
+@c man end VIDEO SINKS
-# increase the size, but make the size a multiple of the chroma
-scale="trunc(3/2*iw/hsub)*hsub:trunc(3/2*ih/vsub)*vsub"
+@chapter Multimedia Filters
+@c man begin MULTIMEDIA FILTERS
-# increase the width to a maximum of 500 pixels, keep the same input aspect ratio
-scale='min(500\, iw*3/2):-1'
-@end example
+Below is a description of the currently available multimedia filters.
-@section select
+@section aselect, select
Select frames to pass in output.
-It accepts in input an expression, which is evaluated for each input
-frame. If the expression is evaluated to a non-zero value, the frame
-is selected and passed to the output, otherwise it is discarded.
+These filters accept a single option, @option{expr} or @option{e},
+which specifies the select expression; it can be given either as
+@code{expr=VALUE} or as the expression alone.
+
+The select expression is evaluated for each input frame. If the
+evaluation result is a non-zero value, the frame is selected and
+passed to the output, otherwise it is discarded.
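+
+For example, @code{select=1} (or, using the named option,
+@code{select=expr=1}) selects all the input frames, while
+@code{select=0} discards them all.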
The expression can contain the following constants:
--- /dev/null
+++ b/ffmpeg.c
+/*
+ * Copyright (c) 2000-2003 Fabrice Bellard
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+/**
+ * @file
+ * multimedia converter based on the FFmpeg libraries
+ */
+
+#include "config.h"
+#include <ctype.h>
+#include <string.h>
+#include <math.h>
+#include <stdlib.h>
+#include <errno.h>
+#include <limits.h>
+#if HAVE_ISATTY
+#if HAVE_IO_H
+#include <io.h>
+#endif
+#if HAVE_UNISTD_H
+#include <unistd.h>
+#endif
+#endif
+#include "libavformat/avformat.h"
+#include "libavdevice/avdevice.h"
+#include "libswscale/swscale.h"
+#include "libswresample/swresample.h"
+#include "libavutil/opt.h"
+#include "libavutil/channel_layout.h"
+#include "libavutil/parseutils.h"
+#include "libavutil/samplefmt.h"
+#include "libavutil/colorspace.h"
+#include "libavutil/fifo.h"
+#include "libavutil/intreadwrite.h"
+#include "libavutil/dict.h"
+#include "libavutil/mathematics.h"
+#include "libavutil/pixdesc.h"
+#include "libavutil/avstring.h"
+#include "libavutil/libm.h"
+#include "libavutil/imgutils.h"
+#include "libavutil/timestamp.h"
+#include "libavutil/bprint.h"
+#include "libavutil/time.h"
+#include "libavformat/os_support.h"
+
+#include "libavformat/ffm.h" // not public API
+
+# include "libavfilter/avcodec.h"
+# include "libavfilter/avfilter.h"
+# include "libavfilter/avfiltergraph.h"
+# include "libavfilter/buffersrc.h"
+# include "libavfilter/buffersink.h"
+
+#if HAVE_SYS_RESOURCE_H
+#include <sys/time.h>
+#include <sys/types.h>
+#include <sys/resource.h>
+#elif HAVE_GETPROCESSTIMES
+#include <windows.h>
+#endif
+#if HAVE_GETPROCESSMEMORYINFO
+#include <windows.h>
+#include <psapi.h>
+#endif
+
+#if HAVE_SYS_SELECT_H
+#include <sys/select.h>
+#endif
+
+#if HAVE_TERMIOS_H
+#include <fcntl.h>
+#include <sys/ioctl.h>
+#include <sys/time.h>
+#include <termios.h>
+#elif HAVE_KBHIT
+#include <conio.h>
+#endif
+
+#if HAVE_PTHREADS
+#include <pthread.h>
+#endif
+
+#include <time.h>
+
+#include "ffmpeg.h"
+#include "cmdutils.h"
+
+#include "libavutil/avassert.h"
+
+const char program_name[] = "ffmpeg";
+const int program_birth_year = 2000;
+
+static FILE *vstats_file;
+
+const char *const forced_keyframes_const_names[] = {
+ "n",
+ "n_forced",
+ "prev_forced_n",
+ "prev_forced_t",
+ "t",
+ NULL
+};
+
+static void do_video_stats(OutputStream *ost, int frame_size);
+static int64_t getutime(void);
+static int64_t getmaxrss(void);
+
+static int run_as_daemon = 0;
+static int64_t video_size = 0;
+static int64_t audio_size = 0;
+static int64_t subtitle_size = 0;
+static int64_t extra_size = 0;
+static int nb_frames_dup = 0;
+static int nb_frames_drop = 0;
+
+static int current_time;
+AVIOContext *progress_avio = NULL;
+
+static uint8_t *subtitle_out;
+
+#if HAVE_PTHREADS
+/* signal to input threads that they should exit; set by the main thread */
+static int transcoding_finished;
+#endif
+
+#define DEFAULT_PASS_LOGFILENAME_PREFIX "ffmpeg2pass"
+
+InputStream **input_streams = NULL;
+int nb_input_streams = 0;
+InputFile **input_files = NULL;
+int nb_input_files = 0;
+
+OutputStream **output_streams = NULL;
+int nb_output_streams = 0;
+OutputFile **output_files = NULL;
+int nb_output_files = 0;
+
+FilterGraph **filtergraphs;
+int nb_filtergraphs;
+
+#if HAVE_TERMIOS_H
+
+/* init terminal so that we can grab keys */
+static struct termios oldtty;
+static int restore_tty;
+#endif
+
+
+/* sub2video hack:
+ Convert subtitles to video with alpha to insert them in filter graphs.
+ This is a temporary solution until libavfilter gets real subtitles support.
+ */
+
+
+
+static void sub2video_copy_rect(uint8_t *dst, int dst_linesize, int w, int h,
+ AVSubtitleRect *r)
+{
+ uint32_t *pal, *dst2;
+ uint8_t *src, *src2;
+ int x, y;
+
+ if (r->type != SUBTITLE_BITMAP) {
+ av_log(NULL, AV_LOG_WARNING, "sub2video: non-bitmap subtitle\n");
+ return;
+ }
+ if (r->x < 0 || r->x + r->w > w || r->y < 0 || r->y + r->h > h) {
+ av_log(NULL, AV_LOG_WARNING, "sub2video: rectangle overflowing\n");
+ return;
+ }
+
+ dst += r->y * dst_linesize + r->x * 4;
+ src = r->pict.data[0];
+ pal = (uint32_t *)r->pict.data[1];
+ for (y = 0; y < r->h; y++) {
+ dst2 = (uint32_t *)dst;
+ src2 = src;
+ for (x = 0; x < r->w; x++)
+ *(dst2++) = pal[*(src2++)];
+ dst += dst_linesize;
+ src += r->pict.linesize[0];
+ }
+}
+
+static void sub2video_push_ref(InputStream *ist, int64_t pts)
+{
+ AVFilterBufferRef *ref = ist->sub2video.ref;
+ int i;
+
+ ist->sub2video.last_pts = ref->pts = pts;
+ for (i = 0; i < ist->nb_filters; i++)
+ av_buffersrc_add_ref(ist->filters[i]->filter,
+ avfilter_ref_buffer(ref, ~0),
+ AV_BUFFERSRC_FLAG_NO_CHECK_FORMAT |
+ AV_BUFFERSRC_FLAG_NO_COPY |
+ AV_BUFFERSRC_FLAG_PUSH);
+}
+
+static void sub2video_update(InputStream *ist, AVSubtitle *sub)
+{
+ int w = ist->sub2video.w, h = ist->sub2video.h;
+ AVFilterBufferRef *ref = ist->sub2video.ref;
+ uint8_t *dst;
+ int dst_linesize;
+ int num_rects, i;
+ int64_t pts, end_pts;
+
+ if (!ref)
+ return;
+ if (sub) {
+ pts = av_rescale_q(sub->pts + sub->start_display_time * 1000,
+ AV_TIME_BASE_Q, ist->st->time_base);
+ end_pts = av_rescale_q(sub->pts + sub->end_display_time * 1000,
+ AV_TIME_BASE_Q, ist->st->time_base);
+ num_rects = sub->num_rects;
+ } else {
+ pts = ist->sub2video.end_pts;
+ end_pts = INT64_MAX;
+ num_rects = 0;
+ }
+ dst = ref->data [0];
+ dst_linesize = ref->linesize[0];
+ memset(dst, 0, h * dst_linesize);
+ for (i = 0; i < num_rects; i++)
+ sub2video_copy_rect(dst, dst_linesize, w, h, sub->rects[i]);
+ sub2video_push_ref(ist, pts);
+ ist->sub2video.end_pts = end_pts;
+}
+
+static void sub2video_heartbeat(InputStream *ist, int64_t pts)
+{
+ InputFile *infile = input_files[ist->file_index];
+ int i, j, nb_reqs;
+ int64_t pts2;
+
+ /* When a frame is read from a file, examine all sub2video streams in
+ the same file and send the sub2video frame again. Otherwise, decoded
+ video frames could be accumulating in the filter graph while a filter
+ (possibly overlay) is desperately waiting for a subtitle frame. */
+ for (i = 0; i < infile->nb_streams; i++) {
+ InputStream *ist2 = input_streams[infile->ist_index + i];
+ if (!ist2->sub2video.ref)
+ continue;
+ /* subtitles seem to be usually muxed ahead of other streams;
+ if not, substracting a larger time here is necessary */
+ pts2 = av_rescale_q(pts, ist->st->time_base, ist2->st->time_base) - 1;
+ /* do not send the heartbeat frame if the subtitle is already ahead */
+ if (pts2 <= ist2->sub2video.last_pts)
+ continue;
+ if (pts2 >= ist2->sub2video.end_pts)
+ sub2video_update(ist2, NULL);
+ for (j = 0, nb_reqs = 0; j < ist2->nb_filters; j++)
+ nb_reqs += av_buffersrc_get_nb_failed_requests(ist2->filters[j]->filter);
+ if (nb_reqs)
+ sub2video_push_ref(ist2, pts2);
+ }
+}
+
+static void sub2video_flush(InputStream *ist)
+{
+ int i;
+
+ for (i = 0; i < ist->nb_filters; i++)
+ av_buffersrc_add_ref(ist->filters[i]->filter, NULL, 0);
+}
+
+/* end of sub2video hack */
+
+void term_exit(void)
+{
+ av_log(NULL, AV_LOG_QUIET, "%s", "");
+#if HAVE_TERMIOS_H
+ if(restore_tty)
+ tcsetattr (0, TCSANOW, &oldtty);
+#endif
+}
+
+static volatile int received_sigterm = 0;
+static volatile int received_nb_signals = 0;
+
+static void
+sigterm_handler(int sig)
+{
+ received_sigterm = sig;
+ received_nb_signals++;
+ term_exit();
+ if(received_nb_signals > 3)
+ exit(123);
+}
+
+void term_init(void)
+{
+#if HAVE_TERMIOS_H
+ if(!run_as_daemon){
+ struct termios tty;
+ int istty = 1;
+#if HAVE_ISATTY
+ istty = isatty(0) && isatty(2);
+#endif
+ if (istty && tcgetattr (0, &tty) == 0) {
+ oldtty = tty;
+ restore_tty = 1;
+ atexit(term_exit);
+
+ tty.c_iflag &= ~(IGNBRK|BRKINT|PARMRK|ISTRIP
+ |INLCR|IGNCR|ICRNL|IXON);
+ tty.c_oflag |= OPOST;
+ tty.c_lflag &= ~(ECHO|ECHONL|ICANON|IEXTEN);
+ tty.c_cflag &= ~(CSIZE|PARENB);
+ tty.c_cflag |= CS8;
+ tty.c_cc[VMIN] = 1;
+ tty.c_cc[VTIME] = 0;
+
+ tcsetattr (0, TCSANOW, &tty);
+ }
+ signal(SIGQUIT, sigterm_handler); /* Quit (POSIX). */
+ }
+#endif
+ avformat_network_deinit();
+
+ signal(SIGINT , sigterm_handler); /* Interrupt (ANSI). */
+ signal(SIGTERM, sigterm_handler); /* Termination (ANSI). */
+#ifdef SIGXCPU
+ signal(SIGXCPU, sigterm_handler);
+#endif
+}
+
+/* read a key without blocking */
+static int read_key(void)
+{
+ unsigned char ch;
+#if HAVE_TERMIOS_H
+ int n = 1;
+ struct timeval tv;
+ fd_set rfds;
+
+ FD_ZERO(&rfds);
+ FD_SET(0, &rfds);
+ tv.tv_sec = 0;
+ tv.tv_usec = 0;
+ n = select(1, &rfds, NULL, NULL, &tv);
+ if (n > 0) {
+ n = read(0, &ch, 1);
+ if (n == 1)
+ return ch;
+
+ return n;
+ }
+#elif HAVE_KBHIT
+# if HAVE_PEEKNAMEDPIPE
+ static int is_pipe;
+ static HANDLE input_handle;
+ DWORD dw, nchars;
+ if(!input_handle){
+ input_handle = GetStdHandle(STD_INPUT_HANDLE);
+ is_pipe = !GetConsoleMode(input_handle, &dw);
+ }
+
+ if (stdin->_cnt > 0) {
+ read(0, &ch, 1);
+ return ch;
+ }
+ if (is_pipe) {
+ /* When running under a GUI, you will end here. */
+ if (!PeekNamedPipe(input_handle, NULL, 0, NULL, &nchars, NULL)) {
+ // input pipe may have been closed by the program that ran ffmpeg
+ return -1;
+ }
+ //Read it
+ if(nchars != 0) {
+ read(0, &ch, 1);
+ return ch;
+ }else{
+ return -1;
+ }
+ }
+# endif
+ if(kbhit())
+ return(getch());
+#endif
+ return -1;
+}
+
+static int decode_interrupt_cb(void *ctx)
+{
+ return received_nb_signals > 1;
+}
+
+const AVIOInterruptCB int_cb = { decode_interrupt_cb, NULL };
+
+static void exit_program(void)
+{
+ int i, j;
+
+ if (do_benchmark) {
+ int maxrss = getmaxrss() / 1024;
+ printf("bench: maxrss=%ikB\n", maxrss);
+ }
+
+ for (i = 0; i < nb_filtergraphs; i++) {
+ avfilter_graph_free(&filtergraphs[i]->graph);
+ for (j = 0; j < filtergraphs[i]->nb_inputs; j++) {
+ av_freep(&filtergraphs[i]->inputs[j]->name);
+ av_freep(&filtergraphs[i]->inputs[j]);
+ }
+ av_freep(&filtergraphs[i]->inputs);
+ for (j = 0; j < filtergraphs[i]->nb_outputs; j++) {
+ av_freep(&filtergraphs[i]->outputs[j]->name);
+ av_freep(&filtergraphs[i]->outputs[j]);
+ }
+ av_freep(&filtergraphs[i]->outputs);
+ av_freep(&filtergraphs[i]);
+ }
+ av_freep(&filtergraphs);
+
+ av_freep(&subtitle_out);
+
+ /* close files */
+ for (i = 0; i < nb_output_files; i++) {
+ AVFormatContext *s = output_files[i]->ctx;
+ if (!(s->oformat->flags & AVFMT_NOFILE) && s->pb)
+ avio_close(s->pb);
+ avformat_free_context(s);
+ av_dict_free(&output_files[i]->opts);
+ av_freep(&output_files[i]);
+ }
+ for (i = 0; i < nb_output_streams; i++) {
+ AVBitStreamFilterContext *bsfc = output_streams[i]->bitstream_filters;
+ while (bsfc) {
+ AVBitStreamFilterContext *next = bsfc->next;
+ av_bitstream_filter_close(bsfc);
+ bsfc = next;
+ }
+ output_streams[i]->bitstream_filters = NULL;
+ avcodec_free_frame(&output_streams[i]->filtered_frame);
+
+ av_freep(&output_streams[i]->forced_keyframes);
+ av_expr_free(output_streams[i]->forced_keyframes_pexpr);
+ av_freep(&output_streams[i]->avfilter);
+ av_freep(&output_streams[i]->logfile_prefix);
+ av_freep(&output_streams[i]);
+ }
+ for (i = 0; i < nb_input_files; i++) {
+ avformat_close_input(&input_files[i]->ctx);
+ av_freep(&input_files[i]);
+ }
+ for (i = 0; i < nb_input_streams; i++) {
+ avcodec_free_frame(&input_streams[i]->decoded_frame);
+ av_dict_free(&input_streams[i]->opts);
+ free_buffer_pool(&input_streams[i]->buffer_pool);
+ avsubtitle_free(&input_streams[i]->prev_sub.subtitle);
+ avfilter_unref_bufferp(&input_streams[i]->sub2video.ref);
+ av_freep(&input_streams[i]->filters);
+ av_freep(&input_streams[i]);
+ }
+
+ if (vstats_file)
+ fclose(vstats_file);
+ av_free(vstats_filename);
+
+ av_freep(&input_streams);
+ av_freep(&input_files);
+ av_freep(&output_streams);
+ av_freep(&output_files);
+
+ uninit_opts();
+
+ avfilter_uninit();
+ avformat_network_deinit();
+
+ if (received_sigterm) {
+ av_log(NULL, AV_LOG_INFO, "Received signal %d: terminating.\n",
+ (int) received_sigterm);
+ }
+}
+
+void assert_avoptions(AVDictionary *m)
+{
+ AVDictionaryEntry *t;
+ if ((t = av_dict_get(m, "", NULL, AV_DICT_IGNORE_SUFFIX))) {
+ av_log(NULL, AV_LOG_FATAL, "Option %s not found.\n", t->key);
+ exit(1);
+ }
+}
+
+static void abort_codec_experimental(AVCodec *c, int encoder)
+{
+ exit(1);
+}
+
+static void update_benchmark(const char *fmt, ...)
+{
+ if (do_benchmark_all) {
+ int64_t t = getutime();
+ va_list va;
+ char buf[1024];
+
+ if (fmt) {
+ va_start(va, fmt);
+ vsnprintf(buf, sizeof(buf), fmt, va);
+ va_end(va);
+ printf("bench: %8"PRIu64" %s \n", t - current_time, buf);
+ }
+ current_time = t;
+ }
+}
+
+static void write_frame(AVFormatContext *s, AVPacket *pkt, OutputStream *ost)
+{
+ AVBitStreamFilterContext *bsfc = ost->bitstream_filters;
+ AVCodecContext *avctx = ost->st->codec;
+ int ret;
+
+ if ((avctx->codec_type == AVMEDIA_TYPE_VIDEO && video_sync_method == VSYNC_DROP) ||
+ (avctx->codec_type == AVMEDIA_TYPE_AUDIO && audio_sync_method < 0))
+ pkt->pts = pkt->dts = AV_NOPTS_VALUE;
+
+ if ((avctx->codec_type == AVMEDIA_TYPE_AUDIO || avctx->codec_type == AVMEDIA_TYPE_VIDEO) && pkt->dts != AV_NOPTS_VALUE) {
+ int64_t max = ost->st->cur_dts + !(s->oformat->flags & AVFMT_TS_NONSTRICT);
+ if (ost->st->cur_dts && ost->st->cur_dts != AV_NOPTS_VALUE && max > pkt->dts) {
+ av_log(s, max - pkt->dts > 2 || avctx->codec_type == AVMEDIA_TYPE_VIDEO ? AV_LOG_WARNING : AV_LOG_DEBUG,
+ "st:%d PTS: %"PRId64" DTS: %"PRId64" < %"PRId64" invalid, clipping\n", pkt->stream_index, pkt->pts, pkt->dts, max);
+ if(pkt->pts >= pkt->dts)
+ pkt->pts = FFMAX(pkt->pts, max);
+ pkt->dts = max;
+ }
+ }
+
+ /*
+ * Audio encoders may split the packets -- #frames in != #packets out.
+ * But there is no reordering, so we can limit the number of output packets
+ * by simply dropping them here.
+ * Counting encoded video frames needs to be done separately because of
+ * reordering, see do_video_out()
+ */
+ if (!(avctx->codec_type == AVMEDIA_TYPE_VIDEO && avctx->codec)) {
+ if (ost->frame_number >= ost->max_frames) {
+ av_free_packet(pkt);
+ return;
+ }
+ ost->frame_number++;
+ }
+
+ while (bsfc) {
+ AVPacket new_pkt = *pkt;
+ int a = av_bitstream_filter_filter(bsfc, avctx, NULL,
+ &new_pkt.data, &new_pkt.size,
+ pkt->data, pkt->size,
+ pkt->flags & AV_PKT_FLAG_KEY);
+ if(a == 0 && new_pkt.data != pkt->data && new_pkt.destruct) {
+ uint8_t *t = av_malloc(new_pkt.size + FF_INPUT_BUFFER_PADDING_SIZE); //the new should be a subset of the old so cannot overflow
+ if(t) {
+ memcpy(t, new_pkt.data, new_pkt.size);
+ memset(t + new_pkt.size, 0, FF_INPUT_BUFFER_PADDING_SIZE);
+ new_pkt.data = t;
+ new_pkt.buf = NULL;
+ a = 1;
+ } else
+ a = AVERROR(ENOMEM);
+ }
+ if (a > 0) {
+ av_free_packet(pkt);
+ new_pkt.destruct = av_destruct_packet;
+ } else if (a < 0) {
+ av_log(NULL, AV_LOG_ERROR, "Failed to open bitstream filter %s for stream %d with codec %s",
+ bsfc->filter->name, pkt->stream_index,
+ avctx->codec ? avctx->codec->name : "copy");
+ print_error("", a);
+ if (exit_on_error)
+ exit(1);
+ }
+ *pkt = new_pkt;
+
+ bsfc = bsfc->next;
+ }
+
+ pkt->stream_index = ost->index;
+
+ if (debug_ts) {
+ av_log(NULL, AV_LOG_INFO, "muxer <- type:%s "
+ "pkt_pts:%s pkt_pts_time:%s pkt_dts:%s pkt_dts_time:%s size:%d\n",
+ av_get_media_type_string(ost->st->codec->codec_type),
+ av_ts2str(pkt->pts), av_ts2timestr(pkt->pts, &ost->st->time_base),
+ av_ts2str(pkt->dts), av_ts2timestr(pkt->dts, &ost->st->time_base),
+ pkt->size
+ );
+ }
+
+ ret = av_interleaved_write_frame(s, pkt);
+ if (ret < 0) {
+ print_error("av_interleaved_write_frame()", ret);
+ exit(1);
+ }
+}
+
+static void close_output_stream(OutputStream *ost)
+{
+ OutputFile *of = output_files[ost->file_index];
+
+ ost->finished = 1;
+ if (of->shortest) {
+ int64_t end = av_rescale_q(ost->sync_opts - ost->first_pts, ost->st->codec->time_base, AV_TIME_BASE_Q);
+ of->recording_time = FFMIN(of->recording_time, end);
+ }
+}
+
+static int check_recording_time(OutputStream *ost)
+{
+ OutputFile *of = output_files[ost->file_index];
+
+ if (of->recording_time != INT64_MAX &&
+ av_compare_ts(ost->sync_opts - ost->first_pts, ost->st->codec->time_base, of->recording_time,
+ AV_TIME_BASE_Q) >= 0) {
+ close_output_stream(ost);
+ return 0;
+ }
+ return 1;
+}
+
+static void do_audio_out(AVFormatContext *s, OutputStream *ost,
+ AVFrame *frame)
+{
+ AVCodecContext *enc = ost->st->codec;
+ AVPacket pkt;
+ int got_packet = 0;
+
+ av_init_packet(&pkt);
+ pkt.data = NULL;
+ pkt.size = 0;
+
+ if (!check_recording_time(ost))
+ return;
+
+ if (frame->pts == AV_NOPTS_VALUE || audio_sync_method < 0)
+ frame->pts = ost->sync_opts;
+ ost->sync_opts = frame->pts + frame->nb_samples;
+
+ av_assert0(pkt.size || !pkt.data);
+ update_benchmark(NULL);
+ if (avcodec_encode_audio2(enc, &pkt, frame, &got_packet) < 0) {
+ av_log(NULL, AV_LOG_FATAL, "Audio encoding failed (avcodec_encode_audio2)\n");
+ exit(1);
+ }
+ update_benchmark("encode_audio %d.%d", ost->file_index, ost->index);
+
+ if (got_packet) {
+ if (pkt.pts != AV_NOPTS_VALUE)
+ pkt.pts = av_rescale_q(pkt.pts, enc->time_base, ost->st->time_base);
+ if (pkt.dts != AV_NOPTS_VALUE)
+ pkt.dts = av_rescale_q(pkt.dts, enc->time_base, ost->st->time_base);
+ if (pkt.duration > 0)
+ pkt.duration = av_rescale_q(pkt.duration, enc->time_base, ost->st->time_base);
+
+ if (debug_ts) {
+ av_log(NULL, AV_LOG_INFO, "encoder -> type:audio "
+ "pkt_pts:%s pkt_pts_time:%s pkt_dts:%s pkt_dts_time:%s\n",
+ av_ts2str(pkt.pts), av_ts2timestr(pkt.pts, &ost->st->time_base),
+ av_ts2str(pkt.dts), av_ts2timestr(pkt.dts, &ost->st->time_base));
+ }
+
+ audio_size += pkt.size;
+ write_frame(s, &pkt, ost);
+
+ av_free_packet(&pkt);
+ }
+}
+
+#if FF_API_DEINTERLACE
+static void pre_process_video_frame(InputStream *ist, AVPicture *picture, void **bufp)
+{
+ AVCodecContext *dec;
+ AVPicture *picture2;
+ AVPicture picture_tmp;
+ uint8_t *buf = 0;
+
+ dec = ist->st->codec;
+
+ /* deinterlace : must be done before any resize */
+ if (FF_API_DEINTERLACE && do_deinterlace) {
+ int size;
+
+ /* create temporary picture */
+ size = avpicture_get_size(dec->pix_fmt, dec->width, dec->height);
+ if (size < 0)
+ return;
+ buf = av_malloc(size);
+ if (!buf)
+ return;
+
+ picture2 = &picture_tmp;
+ avpicture_fill(picture2, buf, dec->pix_fmt, dec->width, dec->height);
+
+ if (avpicture_deinterlace(picture2, picture,
+ dec->pix_fmt, dec->width, dec->height) < 0) {
+ /* if error, do not deinterlace */
+ av_log(NULL, AV_LOG_WARNING, "Deinterlacing failed\n");
+ av_free(buf);
+ buf = NULL;
+ picture2 = picture;
+ }
+ } else {
+ picture2 = picture;
+ }
+
+ if (picture != picture2)
+ *picture = *picture2;
+ *bufp = buf;
+}
+#endif
+
+static void do_subtitle_out(AVFormatContext *s,
+ OutputStream *ost,
+ InputStream *ist,
+ AVSubtitle *sub)
+{
+ int subtitle_out_max_size = 1024 * 1024;
+ int subtitle_out_size, nb, i;
+ AVCodecContext *enc;
+ AVPacket pkt;
+ int64_t pts;
+
+ if (sub->pts == AV_NOPTS_VALUE) {
+ av_log(NULL, AV_LOG_ERROR, "Subtitle packets must have a pts\n");
+ if (exit_on_error)
+ exit(1);
+ return;
+ }
+
+ enc = ost->st->codec;
+
+ if (!subtitle_out) {
+ subtitle_out = av_malloc(subtitle_out_max_size);
+ }
+
+ /* Note: DVB subtitle need one packet to draw them and one other
+ packet to clear them */
+ /* XXX: signal it in the codec context ? */
+ if (enc->codec_id == AV_CODEC_ID_DVB_SUBTITLE)
+ nb = 2;
+ else
+ nb = 1;
+
+ /* shift timestamp to honor -ss and make check_recording_time() work with -t */
+ pts = sub->pts - output_files[ost->file_index]->start_time;
+ for (i = 0; i < nb; i++) {
+ ost->sync_opts = av_rescale_q(pts, AV_TIME_BASE_Q, enc->time_base);
+ if (!check_recording_time(ost))
+ return;
+
+ sub->pts = pts;
+ // start_display_time is required to be 0
+ sub->pts += av_rescale_q(sub->start_display_time, (AVRational){ 1, 1000 }, AV_TIME_BASE_Q);
+ sub->end_display_time -= sub->start_display_time;
+ sub->start_display_time = 0;
+ if (i == 1)
+ sub->num_rects = 0;
+ subtitle_out_size = avcodec_encode_subtitle(enc, subtitle_out,
+ subtitle_out_max_size, sub);
+ if (subtitle_out_size < 0) {
+ av_log(NULL, AV_LOG_FATAL, "Subtitle encoding failed\n");
+ exit(1);
+ }
+
+ av_init_packet(&pkt);
+ pkt.data = subtitle_out;
+ pkt.size = subtitle_out_size;
+ pkt.pts = av_rescale_q(sub->pts, AV_TIME_BASE_Q, ost->st->time_base);
+ pkt.duration = av_rescale_q(sub->end_display_time, (AVRational){ 1, 1000 }, ost->st->time_base);
+ if (enc->codec_id == AV_CODEC_ID_DVB_SUBTITLE) {
+ /* XXX: the pts correction is handled here. Maybe handling
+ it in the codec would be better */
+ if (i == 0)
+ pkt.pts += 90 * sub->start_display_time;
+ else
+ pkt.pts += 90 * sub->end_display_time;
+ }
+ subtitle_size += pkt.size;
+ write_frame(s, &pkt, ost);
+ }
+}
+
+static void do_video_out(AVFormatContext *s,
+ OutputStream *ost,
+ AVFrame *in_picture)
+{
+ int ret, format_video_sync;
+ AVPacket pkt;
+ AVCodecContext *enc = ost->st->codec;
+ int nb_frames, i;
+ double sync_ipts, delta;
+ double duration = 0;
+ int frame_size = 0;
+ InputStream *ist = NULL;
+
+ if (ost->source_index >= 0)
+ ist = input_streams[ost->source_index];
+
+ if(ist && ist->st->start_time != AV_NOPTS_VALUE && ist->st->first_dts != AV_NOPTS_VALUE && ost->frame_rate.num)
+ duration = 1/(av_q2d(ost->frame_rate) * av_q2d(enc->time_base));
+
+ sync_ipts = in_picture->pts;
+ delta = sync_ipts - ost->sync_opts + duration;
+
+ /* by default, we output a single frame */
+ nb_frames = 1;
+
+ format_video_sync = video_sync_method;
+ if (format_video_sync == VSYNC_AUTO)
+ format_video_sync = (s->oformat->flags & AVFMT_VARIABLE_FPS) ? ((s->oformat->flags & AVFMT_NOTIMESTAMPS) ? VSYNC_PASSTHROUGH : VSYNC_VFR) : VSYNC_CFR;
+
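+ /* depending on the sync method, the switch below may drop this frame
+ * (nb_frames == 0) or duplicate it (nb_frames > 1) to keep the output
+ * timestamps in sync with the input */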
+ switch (format_video_sync) {
+ case VSYNC_CFR:
+ // FIXME set to 0.5 after we fix some dts/pts bugs like in avidec.c
+ if (delta < -1.1)
+ nb_frames = 0;
+ else if (delta > 1.1)
+ nb_frames = lrintf(delta);
+ break;
+ case VSYNC_VFR:
+ if (delta <= -0.6)
+ nb_frames = 0;
+ else if (delta > 0.6)
+ ost->sync_opts = lrint(sync_ipts);
+ break;
+ case VSYNC_DROP:
+ case VSYNC_PASSTHROUGH:
+ ost->sync_opts = lrint(sync_ipts);
+ break;
+ default:
+ av_assert0(0);
+ }
+
+ nb_frames = FFMIN(nb_frames, ost->max_frames - ost->frame_number);
+ if (nb_frames == 0) {
+ nb_frames_drop++;
+ av_log(NULL, AV_LOG_VERBOSE, "*** drop!\n");
+ return;
+ } else if (nb_frames > 1) {
+ if (nb_frames > dts_error_threshold * 30) {
+ av_log(NULL, AV_LOG_ERROR, "%d frame duplication too large, skipping\n", nb_frames - 1);
+ nb_frames_drop++;
+ return;
+ }
+ nb_frames_dup += nb_frames - 1;
+ av_log(NULL, AV_LOG_VERBOSE, "*** %d dup!\n", nb_frames - 1);
+ }
+
+ /* duplicates frame if needed */
+ for (i = 0; i < nb_frames; i++) {
+ av_init_packet(&pkt);
+ pkt.data = NULL;
+ pkt.size = 0;
+
+ in_picture->pts = ost->sync_opts;
+
+ if (!check_recording_time(ost))
+ return;
+
+ if (s->oformat->flags & AVFMT_RAWPICTURE &&
+ enc->codec->id == AV_CODEC_ID_RAWVIDEO) {
+ /* raw pictures are written as AVPicture structure to
+ avoid any copies. We support temporarily the older
+ method. */
+ enc->coded_frame->interlaced_frame = in_picture->interlaced_frame;
+ enc->coded_frame->top_field_first = in_picture->top_field_first;
+ if (enc->coded_frame->interlaced_frame)
+ enc->field_order = enc->coded_frame->top_field_first ? AV_FIELD_TB:AV_FIELD_BT;
+ else
+ enc->field_order = AV_FIELD_PROGRESSIVE;
+ pkt.data = (uint8_t *)in_picture;
+ pkt.size = sizeof(AVPicture);
+ pkt.pts = av_rescale_q(in_picture->pts, enc->time_base, ost->st->time_base);
+ pkt.flags |= AV_PKT_FLAG_KEY;
+
+ video_size += pkt.size;
+ write_frame(s, &pkt, ost);
+ } else {
+ int got_packet, forced_keyframe = 0;
+ AVFrame big_picture;
+ double pts_time;
+
+ big_picture = *in_picture;
+ /* better than nothing: use input picture interlaced
+ settings */
+ big_picture.interlaced_frame = in_picture->interlaced_frame;
+ if (ost->st->codec->flags & (CODEC_FLAG_INTERLACED_DCT|CODEC_FLAG_INTERLACED_ME)) {
+ if (ost->top_field_first == -1)
+ big_picture.top_field_first = in_picture->top_field_first;
+ else
+ big_picture.top_field_first = !!ost->top_field_first;
+ }
+
+ if (big_picture.interlaced_frame) {
+ if (enc->codec->id == AV_CODEC_ID_MJPEG)
+ enc->field_order = big_picture.top_field_first ? AV_FIELD_TT:AV_FIELD_BB;
+ else
+ enc->field_order = big_picture.top_field_first ? AV_FIELD_TB:AV_FIELD_BT;
+ } else
+ enc->field_order = AV_FIELD_PROGRESSIVE;
+
+ big_picture.quality = ost->st->codec->global_quality;
+ if (!enc->me_threshold)
+ big_picture.pict_type = 0;
+
+ pts_time = big_picture.pts != AV_NOPTS_VALUE ?
+ big_picture.pts * av_q2d(enc->time_base) : NAN;
+ if (ost->forced_kf_index < ost->forced_kf_count &&
+ big_picture.pts >= ost->forced_kf_pts[ost->forced_kf_index]) {
+ ost->forced_kf_index++;
+ forced_keyframe = 1;
+ } else if (ost->forced_keyframes_pexpr) {
+ double res;
+ ost->forced_keyframes_expr_const_values[FKF_T] = pts_time;
+ res = av_expr_eval(ost->forced_keyframes_pexpr,
+ ost->forced_keyframes_expr_const_values, NULL);
+ av_dlog(NULL, "force_key_frame: n:%f n_forced:%f prev_forced_n:%f t:%f prev_forced_t:%f -> res:%f\n",
+ ost->forced_keyframes_expr_const_values[FKF_N],
+ ost->forced_keyframes_expr_const_values[FKF_N_FORCED],
+ ost->forced_keyframes_expr_const_values[FKF_PREV_FORCED_N],
+ ost->forced_keyframes_expr_const_values[FKF_T],
+ ost->forced_keyframes_expr_const_values[FKF_PREV_FORCED_T],
+ res);
+ if (res) {
+ forced_keyframe = 1;
+ ost->forced_keyframes_expr_const_values[FKF_PREV_FORCED_N] =
+ ost->forced_keyframes_expr_const_values[FKF_N];
+ ost->forced_keyframes_expr_const_values[FKF_PREV_FORCED_T] =
+ ost->forced_keyframes_expr_const_values[FKF_T];
+ ost->forced_keyframes_expr_const_values[FKF_N_FORCED] += 1;
+ }
+
+ ost->forced_keyframes_expr_const_values[FKF_N] += 1;
+ }
+ if (forced_keyframe) {
+ big_picture.pict_type = AV_PICTURE_TYPE_I;
+ av_log(NULL, AV_LOG_DEBUG, "Forced keyframe at time %f\n", pts_time);
+ }
+
+ update_benchmark(NULL);
+ ret = avcodec_encode_video2(enc, &pkt, &big_picture, &got_packet);
+ update_benchmark("encode_video %d.%d", ost->file_index, ost->index);
+ if (ret < 0) {
+ av_log(NULL, AV_LOG_FATAL, "Video encoding failed\n");
+ exit(1);
+ }
+
+ if (got_packet) {
+ if (pkt.pts == AV_NOPTS_VALUE && !(enc->codec->capabilities & CODEC_CAP_DELAY))
+ pkt.pts = ost->sync_opts;
+
+ if (pkt.pts != AV_NOPTS_VALUE)
+ pkt.pts = av_rescale_q(pkt.pts, enc->time_base, ost->st->time_base);
+ if (pkt.dts != AV_NOPTS_VALUE)
+ pkt.dts = av_rescale_q(pkt.dts, enc->time_base, ost->st->time_base);
+
+ if (debug_ts) {
+ av_log(NULL, AV_LOG_INFO, "encoder -> type:video "
+ "pkt_pts:%s pkt_pts_time:%s pkt_dts:%s pkt_dts_time:%s\n",
+ av_ts2str(pkt.pts), av_ts2timestr(pkt.pts, &ost->st->time_base),
+ av_ts2str(pkt.dts), av_ts2timestr(pkt.dts, &ost->st->time_base));
+ }
+
+ frame_size = pkt.size;
+ video_size += pkt.size;
+ write_frame(s, &pkt, ost);
+ av_free_packet(&pkt);
+
+ /* if two pass, output log */
+ if (ost->logfile && enc->stats_out) {
+ fprintf(ost->logfile, "%s", enc->stats_out);
+ }
+ }
+ }
+ ost->sync_opts++;
+ /*
+ * For video, number of frames in == number of packets out.
+ * But there may be reordering, so we can't throw away frames on encoder
+ * flush, we need to limit them here, before they go into encoder.
+ */
+ ost->frame_number++;
+ }
+
+ if (vstats_filename && frame_size)
+ do_video_stats(ost, frame_size);
+}
+
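+/* Convert a mean squared error, normalized to [0,1], to PSNR in dB:
+   -10 * log10(d). Callers pass error / (width * height * 255^2). */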
+static double psnr(double d)
+{
+ return -10.0 * log(d) / log(10.0);
+}
+
+static void do_video_stats(OutputStream *ost, int frame_size)
+{
+ AVCodecContext *enc;
+ int frame_number;
+ double ti1, bitrate, avg_bitrate;
+
+ /* this is executed only the first time do_video_stats is called */
+ if (!vstats_file) {
+ vstats_file = fopen(vstats_filename, "w");
+ if (!vstats_file) {
+ perror("fopen");
+ exit(1);
+ }
+ }
+
+ enc = ost->st->codec;
+ if (enc->codec_type == AVMEDIA_TYPE_VIDEO) {
+ frame_number = ost->st->nb_frames;
+ fprintf(vstats_file, "frame= %5d q= %2.1f ", frame_number, enc->coded_frame->quality / (float)FF_QP2LAMBDA);
+ if (enc->flags&CODEC_FLAG_PSNR)
+ fprintf(vstats_file, "PSNR= %6.2f ", psnr(enc->coded_frame->error[0] / (enc->width * enc->height * 255.0 * 255.0)));
+
+ fprintf(vstats_file,"f_size= %6d ", frame_size);
+ /* compute pts value */
+ ti1 = ost->st->pts.val * av_q2d(enc->time_base);
+ if (ti1 < 0.01)
+ ti1 = 0.01;
+
+ bitrate = (frame_size * 8) / av_q2d(enc->time_base) / 1000.0;
+ avg_bitrate = (double)(video_size * 8) / ti1 / 1000.0;
+ fprintf(vstats_file, "s_size= %8.0fkB time= %0.3f br= %7.1fkbits/s avg_br= %7.1fkbits/s ",
+ (double)video_size / 1024, ti1, bitrate, avg_bitrate);
+ fprintf(vstats_file, "type= %c\n", av_get_picture_type_char(enc->coded_frame->pict_type));
+ }
+}
+
+/**
+ * Get and encode new output from any of the filtergraphs, without causing
+ * activity.
+ *
+ * @return 0 for success, <0 for severe errors
+ */
+static int reap_filters(void)
+{
+ AVFilterBufferRef *picref;
+ AVFrame *filtered_frame = NULL;
+ int i;
+ int64_t frame_pts;
+
+ /* Reap all buffers present in the buffer sinks */
+ for (i = 0; i < nb_output_streams; i++) {
+ OutputStream *ost = output_streams[i];
+ OutputFile *of = output_files[ost->file_index];
+ int ret = 0;
+
+ if (!ost->filter)
+ continue;
+
+ if (!ost->filtered_frame && !(ost->filtered_frame = avcodec_alloc_frame())) {
+ return AVERROR(ENOMEM);
+ } else
+ avcodec_get_frame_defaults(ost->filtered_frame);
+ filtered_frame = ost->filtered_frame;
+
+ while (1) {
+ ret = av_buffersink_get_buffer_ref(ost->filter->filter, &picref,
+ AV_BUFFERSINK_FLAG_NO_REQUEST);
+ if (ret < 0) {
+ if (ret != AVERROR(EAGAIN) && ret != AVERROR_EOF) {
+ char buf[256];
+ av_strerror(ret, buf, sizeof(buf));
+ av_log(NULL, AV_LOG_WARNING,
+ "Error in av_buffersink_get_buffer_ref(): %s\n", buf);
+ }
+ break;
+ }
+ frame_pts = AV_NOPTS_VALUE;
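+ /* convert the filter output pts to the encoder time base and make
+ it relative to the output file start time */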
+ if (picref->pts != AV_NOPTS_VALUE) {
+ filtered_frame->pts = frame_pts = av_rescale_q(picref->pts,
+ ost->filter->filter->inputs[0]->time_base,
+ ost->st->codec->time_base) -
+ av_rescale_q(of->start_time,
+ AV_TIME_BASE_Q,
+ ost->st->codec->time_base);
+
+ if (of->start_time && filtered_frame->pts < 0) {
+ avfilter_unref_buffer(picref);
+ continue;
+ }
+ }
+ //if (ost->source_index >= 0)
+ // *filtered_frame= *input_streams[ost->source_index]->decoded_frame; //for me_threshold
+
+ switch (ost->filter->filter->inputs[0]->type) {
+ case AVMEDIA_TYPE_VIDEO:
+ avfilter_copy_buf_props(filtered_frame, picref);
+ filtered_frame->pts = frame_pts;
+ if (!ost->frame_aspect_ratio)
+ ost->st->codec->sample_aspect_ratio = picref->video->sample_aspect_ratio;
+
+ do_video_out(of->ctx, ost, filtered_frame);
+ break;
+ case AVMEDIA_TYPE_AUDIO:
+ avfilter_copy_buf_props(filtered_frame, picref);
+ filtered_frame->pts = frame_pts;
+ if (!(ost->st->codec->codec->capabilities & CODEC_CAP_PARAM_CHANGE) &&
+ ost->st->codec->channels != av_frame_get_channels(filtered_frame)) {
+ av_log(NULL, AV_LOG_ERROR,
+ "Audio filter graph output is not normalized and encoder does not support parameter changes\n");
+ break;
+ }
+ do_audio_out(of->ctx, ost, filtered_frame);
+ break;
+ default:
+ // TODO support subtitle filters
+ av_assert0(0);
+ }
+
+ avfilter_unref_buffer(picref);
+ }
+ }
+
+ return 0;
+}
+
+static void print_report(int is_last_report, int64_t timer_start, int64_t cur_time)
+{
+ char buf[1024];
+ AVBPrint buf_script;
+ OutputStream *ost;
+ AVFormatContext *oc;
+ int64_t total_size;
+ AVCodecContext *enc;
+ int frame_number, vid, i;
+ double bitrate;
+ int64_t pts = INT64_MIN;
+ static int64_t last_time = -1;
+ static int qp_histogram[52];
+ int hours, mins, secs, us;
+
+ if (!print_stats && !is_last_report && !progress_avio)
+ return;
+
+ if (!is_last_report) {
+ if (last_time == -1) {
+ last_time = cur_time;
+ return;
+ }
+ if ((cur_time - last_time) < 500000)
+ return;
+ last_time = cur_time;
+ }
+
+ oc = output_files[0]->ctx;
+
+ total_size = avio_size(oc->pb);
+ if (total_size <= 0) // FIXME improve avio_size() so it works with non-seekable output too
+ total_size = avio_tell(oc->pb);
+
+ buf[0] = '\0';
+ vid = 0;
+ av_bprint_init(&buf_script, 0, 1);
+ for (i = 0; i < nb_output_streams; i++) {
+ float q = -1;
+ ost = output_streams[i];
+ enc = ost->st->codec;
+ if (!ost->stream_copy && enc->coded_frame)
+ q = enc->coded_frame->quality / (float)FF_QP2LAMBDA;
+ if (vid && enc->codec_type == AVMEDIA_TYPE_VIDEO) {
+ snprintf(buf + strlen(buf), sizeof(buf) - strlen(buf), "q=%2.1f ", q);
+ av_bprintf(&buf_script, "stream_%d_%d_q=%.1f\n",
+ ost->file_index, ost->index, q);
+ }
+ if (!vid && enc->codec_type == AVMEDIA_TYPE_VIDEO) {
+ float fps, t = (cur_time-timer_start) / 1000000.0;
+
+ frame_number = ost->frame_number;
+ fps = t > 1 ? frame_number / t : 0;
+ snprintf(buf + strlen(buf), sizeof(buf) - strlen(buf), "frame=%5d fps=%3.*f q=%3.1f ",
+ frame_number, fps < 9.95, fps, q);
+ av_bprintf(&buf_script, "frame=%d\n", frame_number);
+ av_bprintf(&buf_script, "fps=%.1f\n", fps);
+ av_bprintf(&buf_script, "stream_%d_%d_q=%.1f\n",
+ ost->file_index, ost->index, q);
+ if (is_last_report)
+ snprintf(buf + strlen(buf), sizeof(buf) - strlen(buf), "L");
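+ /* print the QP histogram as one hex digit per bucket: log2(count+1) */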
+ if (qp_hist) {
+ int j;
+ int qp = lrintf(q);
+ if (qp >= 0 && qp < FF_ARRAY_ELEMS(qp_histogram))
+ qp_histogram[qp]++;
+ for (j = 0; j < 32; j++)
+ snprintf(buf + strlen(buf), sizeof(buf) - strlen(buf), "%X", (int)lrintf(log2(qp_histogram[j] + 1)));
+ }
+ if ((enc->flags&CODEC_FLAG_PSNR) && (enc->coded_frame || is_last_report)) {
+ int j;
+ double error, error_sum = 0;
+ double scale, scale_sum = 0;
+ double p;
+ char type[3] = { 'Y','U','V' };
+ snprintf(buf + strlen(buf), sizeof(buf) - strlen(buf), "PSNR=");
+ for (j = 0; j < 3; j++) {
+ if (is_last_report) {
+ error = enc->error[j];
+ scale = enc->width * enc->height * 255.0 * 255.0 * frame_number;
+ } else {
+ error = enc->coded_frame->error[j];
+ scale = enc->width * enc->height * 255.0 * 255.0;
+ }
+ if (j)
+ scale /= 4;
+ error_sum += error;
+ scale_sum += scale;
+ p = psnr(error / scale);
+ snprintf(buf + strlen(buf), sizeof(buf) - strlen(buf), "%c:%2.2f ", type[j], p);
+ av_bprintf(&buf_script, "stream_%d_%d_psnr_%c=%2.2f\n",
+ ost->file_index, ost->index, type[j] | 32, p);
+ }
+ p = psnr(error_sum / scale_sum);
+ snprintf(buf + strlen(buf), sizeof(buf) - strlen(buf), "*:%2.2f ", psnr(error_sum / scale_sum));
+ av_bprintf(&buf_script, "stream_%d_%d_psnr_all=%2.2f\n",
+ ost->file_index, ost->index, p);
+ }
+ vid = 1;
+ }
+ /* compute min output value */
+ if ((is_last_report || !ost->finished) && ost->st->pts.val != AV_NOPTS_VALUE)
+ pts = FFMAX(pts, av_rescale_q(ost->st->pts.val,
+ ost->st->time_base, AV_TIME_BASE_Q));
+ }
+
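+ /* pts is in AV_TIME_BASE units (microseconds); split it into h:m:s.us */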
+ secs = pts / AV_TIME_BASE;
+ us = pts % AV_TIME_BASE;
+ mins = secs / 60;
+ secs %= 60;
+ hours = mins / 60;
+ mins %= 60;
+
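+ /* total_size*8 bits over pts/1000 milliseconds yields kbits/s */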
+ bitrate = pts && total_size >= 0 ? total_size * 8 / (pts / 1000.0) : -1;
+
+ if (total_size < 0) snprintf(buf + strlen(buf), sizeof(buf) - strlen(buf),
+ "size=N/A time=");
+ else snprintf(buf + strlen(buf), sizeof(buf) - strlen(buf),
+ "size=%8.0fkB time=", total_size / 1024.0);
+ snprintf(buf + strlen(buf), sizeof(buf) - strlen(buf),
+ "%02d:%02d:%02d.%02d ", hours, mins, secs,
+ (100 * us) / AV_TIME_BASE);
+ if (bitrate < 0) snprintf(buf + strlen(buf), sizeof(buf) - strlen(buf),
+ "bitrate=N/A");
+ else snprintf(buf + strlen(buf), sizeof(buf) - strlen(buf),
+ "bitrate=%6.1fkbits/s", bitrate);
+ if (total_size < 0) av_bprintf(&buf_script, "total_size=N/A\n");
+ else av_bprintf(&buf_script, "total_size=%"PRId64"\n", total_size);
+ av_bprintf(&buf_script, "out_time_ms=%"PRId64"\n", pts);
+ av_bprintf(&buf_script, "out_time=%02d:%02d:%02d.%06d\n",
+ hours, mins, secs, us);
+
+ if (nb_frames_dup || nb_frames_drop)
+ snprintf(buf + strlen(buf), sizeof(buf) - strlen(buf), " dup=%d drop=%d",
+ nb_frames_dup, nb_frames_drop);
+ av_bprintf(&buf_script, "dup_frames=%d\n", nb_frames_dup);
+ av_bprintf(&buf_script, "drop_frames=%d\n", nb_frames_drop);
+
+ if (print_stats || is_last_report) {
+ if (print_stats==1 && AV_LOG_INFO > av_log_get_level()) {
+ fprintf(stderr, "%s \r", buf);
+ } else
+ av_log(NULL, AV_LOG_INFO, "%s \r", buf);
+
+ fflush(stderr);
+ }
+
+ if (progress_avio) {
+ av_bprintf(&buf_script, "progress=%s\n",
+ is_last_report ? "end" : "continue");
+ avio_write(progress_avio, buf_script.str,
+ FFMIN(buf_script.len, buf_script.size - 1));
+ avio_flush(progress_avio);
+ av_bprint_finalize(&buf_script, NULL);
+ if (is_last_report) {
+ avio_close(progress_avio);
+ progress_avio = NULL;
+ }
+ }
+
+ if (is_last_report) {
+ int64_t raw = audio_size + video_size + subtitle_size + extra_size;
+ av_log(NULL, AV_LOG_INFO, "\n");
+ av_log(NULL, AV_LOG_INFO, "video:%1.0fkB audio:%1.0fkB subtitle:%1.0f global headers:%1.0fkB muxing overhead %f%%\n",
+ video_size / 1024.0,
+ audio_size / 1024.0,
+ subtitle_size / 1024.0,
+ extra_size / 1024.0,
+ 100.0 * (total_size - raw) / raw
+ );
+ if(video_size + audio_size + subtitle_size + extra_size == 0){
+ av_log(NULL, AV_LOG_WARNING, "Output file is empty, nothing was encoded (check -ss / -t / -frames parameters if used)\n");
+ }
+ }
+}
+
+static void flush_encoders(void)
+{
+ int i, ret;
+
+ for (i = 0; i < nb_output_streams; i++) {
+ OutputStream *ost = output_streams[i];
+ AVCodecContext *enc = ost->st->codec;
+ AVFormatContext *os = output_files[ost->file_index]->ctx;
+ int stop_encoding = 0;
+
+ if (!ost->encoding_needed)
+ continue;
+
+ if (ost->st->codec->codec_type == AVMEDIA_TYPE_AUDIO && enc->frame_size <= 1)
+ continue;
+ if (ost->st->codec->codec_type == AVMEDIA_TYPE_VIDEO && (os->oformat->flags & AVFMT_RAWPICTURE) && enc->codec->id == AV_CODEC_ID_RAWVIDEO)
+ continue;
+
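+ /* feed the encoder NULL frames until it stops returning packets */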
+ for (;;) {
+ int (*encode)(AVCodecContext*, AVPacket*, const AVFrame*, int*) = NULL;
+ const char *desc;
+ int64_t *size;
+
+ switch (ost->st->codec->codec_type) {
+ case AVMEDIA_TYPE_AUDIO:
+ encode = avcodec_encode_audio2;
+ desc = "Audio";
+ size = &audio_size;
+ break;
+ case AVMEDIA_TYPE_VIDEO:
+ encode = avcodec_encode_video2;
+ desc = "Video";
+ size = &video_size;
+ break;
+ default:
+ stop_encoding = 1;
+ }
+
+ if (encode) {
+ AVPacket pkt;
+ int got_packet;
+ av_init_packet(&pkt);
+ pkt.data = NULL;
+ pkt.size = 0;
+
+ update_benchmark(NULL);
+ ret = encode(enc, &pkt, NULL, &got_packet);
+ update_benchmark("flush %s %d.%d", desc, ost->file_index, ost->index);
+ if (ret < 0) {
+ av_log(NULL, AV_LOG_FATAL, "%s encoding failed\n", desc);
+ exit(1);
+ }
+ *size += pkt.size;
+ if (ost->logfile && enc->stats_out) {
+ fprintf(ost->logfile, "%s", enc->stats_out);
+ }
+ if (!got_packet) {
+ stop_encoding = 1;
+ break;
+ }
+ if (pkt.pts != AV_NOPTS_VALUE)
+ pkt.pts = av_rescale_q(pkt.pts, enc->time_base, ost->st->time_base);
+ if (pkt.dts != AV_NOPTS_VALUE)
+ pkt.dts = av_rescale_q(pkt.dts, enc->time_base, ost->st->time_base);
+ if (pkt.duration > 0)
+ pkt.duration = av_rescale_q(pkt.duration, enc->time_base, ost->st->time_base);
+ write_frame(os, &pkt, ost);
+ if (ost->st->codec->codec_type == AVMEDIA_TYPE_VIDEO && vstats_filename) {
+ do_video_stats(ost, pkt.size);
+ }
+ }
+
+ if (stop_encoding)
+ break;
+ }
+ }
+}
+
+/*
+ * Check whether a packet from ist should be written into ost at this time
+ */
+static int check_output_constraints(InputStream *ist, OutputStream *ost)
+{
+ OutputFile *of = output_files[ost->file_index];
+ int ist_index = input_files[ist->file_index]->ist_index + ist->st->index;
+
+ if (ost->source_index != ist_index)
+ return 0;
+
+ if (of->start_time && ist->pts < of->start_time)
+ return 0;
+
+ return 1;
+}
+
+static void do_streamcopy(InputStream *ist, OutputStream *ost, const AVPacket *pkt)
+{
+ OutputFile *of = output_files[ost->file_index];
+ int64_t ost_tb_start_time = av_rescale_q(of->start_time, AV_TIME_BASE_Q, ost->st->time_base);
+ AVPicture pict;
+ AVPacket opkt;
+
+ av_init_packet(&opkt);
+
+ if ((!ost->frame_number && !(pkt->flags & AV_PKT_FLAG_KEY)) &&
+ !ost->copy_initial_nonkeyframes)
+ return;
+
+ if (!ost->frame_number && ist->pts < of->start_time &&
+ !ost->copy_prior_start)
+ return;
+
+ if (of->recording_time != INT64_MAX &&
+ ist->pts >= of->recording_time + of->start_time) {
+ close_output_stream(ost);
+ return;
+ }
+
+ /* force the input stream PTS */
+ if (ost->st->codec->codec_type == AVMEDIA_TYPE_AUDIO)
+ audio_size += pkt->size;
+ else if (ost->st->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
+ video_size += pkt->size;
+ ost->sync_opts++;
+ } else if (ost->st->codec->codec_type == AVMEDIA_TYPE_SUBTITLE) {
+ subtitle_size += pkt->size;
+ }
+
+ if (pkt->pts != AV_NOPTS_VALUE)
+ opkt.pts = av_rescale_q(pkt->pts, ist->st->time_base, ost->st->time_base) - ost_tb_start_time;
+ else
+ opkt.pts = AV_NOPTS_VALUE;
+
+ if (pkt->dts == AV_NOPTS_VALUE)
+ opkt.dts = av_rescale_q(ist->dts, AV_TIME_BASE_Q, ost->st->time_base);
+ else
+ opkt.dts = av_rescale_q(pkt->dts, ist->st->time_base, ost->st->time_base);
+ opkt.dts -= ost_tb_start_time;
+
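+ /* for audio, rescale the timestamps sample-accurately with
+ av_rescale_delta() so rounding errors do not accumulate */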
+ if (ost->st->codec->codec_type == AVMEDIA_TYPE_AUDIO && pkt->dts != AV_NOPTS_VALUE) {
+ int duration = av_get_audio_frame_duration(ist->st->codec, pkt->size);
+ if(!duration)
+ duration = ist->st->codec->frame_size;
+ opkt.dts = opkt.pts = av_rescale_delta(ist->st->time_base, pkt->dts,
+ (AVRational){1, ist->st->codec->sample_rate}, duration, &ist->filter_in_rescale_delta_last,
+ ost->st->time_base) - ost_tb_start_time;
+ }
+
+ opkt.duration = av_rescale_q(pkt->duration, ist->st->time_base, ost->st->time_base);
+ opkt.flags = pkt->flags;
+
+ // FIXME remove the following 2 lines; they should be replaced by bitstream filters
+ if ( ost->st->codec->codec_id != AV_CODEC_ID_H264
+ && ost->st->codec->codec_id != AV_CODEC_ID_MPEG1VIDEO
+ && ost->st->codec->codec_id != AV_CODEC_ID_MPEG2VIDEO
+ && ost->st->codec->codec_id != AV_CODEC_ID_VC1
+ ) {
+ if (av_parser_change(ist->st->parser, ost->st->codec, &opkt.data, &opkt.size, pkt->data, pkt->size, pkt->flags & AV_PKT_FLAG_KEY))
+ opkt.destruct = av_destruct_packet;
+ } else {
+ opkt.data = pkt->data;
+ opkt.size = pkt->size;
+ }
+
+ if (ost->st->codec->codec_type == AVMEDIA_TYPE_VIDEO && (of->ctx->oformat->flags & AVFMT_RAWPICTURE)) {
+ /* store AVPicture in AVPacket, as expected by the output format */
+ avpicture_fill(&pict, opkt.data, ost->st->codec->pix_fmt, ost->st->codec->width, ost->st->codec->height);
+ opkt.data = (uint8_t *)&pict;
+ opkt.size = sizeof(AVPicture);
+ opkt.flags |= AV_PKT_FLAG_KEY;
+ }
+
+ write_frame(of->ctx, &opkt, ost);
+ ost->st->codec->frame_number++;
+}
+
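+/* With rate emulation (-re) enabled, pace the input by sleeping until the
+   wallclock catches up with the stream dts. */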
+static void rate_emu_sleep(InputStream *ist)
+{
+ if (input_files[ist->file_index]->rate_emu) {
+ int64_t pts = av_rescale(ist->dts, 1000000, AV_TIME_BASE);
+ int64_t now = av_gettime() - ist->start;
+ if (pts > now)
+ av_usleep(pts - now);
+ }
+}
+
+int guess_input_channel_layout(InputStream *ist)
+{
+ AVCodecContext *dec = ist->st->codec;
+
+ if (!dec->channel_layout) {
+ char layout_name[256];
+
+ if (dec->channels > ist->guess_layout_max)
+ return 0;
+ dec->channel_layout = av_get_default_channel_layout(dec->channels);
+ if (!dec->channel_layout)
+ return 0;
+ av_get_channel_layout_string(layout_name, sizeof(layout_name),
+ dec->channels, dec->channel_layout);
+ av_log(NULL, AV_LOG_WARNING, "Guessed Channel Layout for Input Stream "
+ "#%d.%d : %s\n", ist->file_index, ist->st->index, layout_name);
+ }
+ return 1;
+}
+
+static int decode_audio(InputStream *ist, AVPacket *pkt, int *got_output)
+{
+ AVFrame *decoded_frame;
+ AVCodecContext *avctx = ist->st->codec;
+ int i, ret, resample_changed;
+ AVRational decoded_frame_tb;
+
+ if (!ist->decoded_frame && !(ist->decoded_frame = avcodec_alloc_frame()))
+ return AVERROR(ENOMEM);
+ decoded_frame = ist->decoded_frame;
+
+ update_benchmark(NULL);
+ ret = avcodec_decode_audio4(avctx, decoded_frame, got_output, pkt);
+ update_benchmark("decode_audio %d.%d", ist->file_index, ist->st->index);
+
+ if (ret >= 0 && avctx->sample_rate <= 0) {
+ av_log(avctx, AV_LOG_ERROR, "Sample rate %d invalid\n", avctx->sample_rate);
+ ret = AVERROR_INVALIDDATA;
+ }
+
+ if (!*got_output || ret < 0) {
+ if (!pkt->size) {
+ for (i = 0; i < ist->nb_filters; i++)
+ av_buffersrc_add_ref(ist->filters[i]->filter, NULL, 0);
+ }
+ return ret;
+ }
+
+#if 1
+ /* increment next_dts, to be used when the input stream does not
+ have timestamps or there are multiple frames in the packet */
+ ist->next_pts += ((int64_t)AV_TIME_BASE * decoded_frame->nb_samples) /
+ avctx->sample_rate;
+ ist->next_dts += ((int64_t)AV_TIME_BASE * decoded_frame->nb_samples) /
+ avctx->sample_rate;
+#endif
+
+ rate_emu_sleep(ist);
+
+ resample_changed = ist->resample_sample_fmt != decoded_frame->format ||
+ ist->resample_channels != avctx->channels ||
+ ist->resample_channel_layout != decoded_frame->channel_layout ||
+ ist->resample_sample_rate != decoded_frame->sample_rate;
+ if (resample_changed) {
+ char layout1[64], layout2[64];
+
+ if (!guess_input_channel_layout(ist)) {
+ av_log(NULL, AV_LOG_FATAL, "Unable to find default channel "
+ "layout for Input Stream #%d.%d\n", ist->file_index,
+ ist->st->index);
+ exit(1);
+ }
+ decoded_frame->channel_layout = avctx->channel_layout;
+
+ av_get_channel_layout_string(layout1, sizeof(layout1), ist->resample_channels,
+ ist->resample_channel_layout);
+ av_get_channel_layout_string(layout2, sizeof(layout2), avctx->channels,
+ decoded_frame->channel_layout);
+
+ av_log(NULL, AV_LOG_INFO,
+ "Input stream #%d:%d frame changed from rate:%d fmt:%s ch:%d chl:%s to rate:%d fmt:%s ch:%d chl:%s\n",
+ ist->file_index, ist->st->index,
+ ist->resample_sample_rate, av_get_sample_fmt_name(ist->resample_sample_fmt),
+ ist->resample_channels, layout1,
+ decoded_frame->sample_rate, av_get_sample_fmt_name(decoded_frame->format),
+ avctx->channels, layout2);
+
+ ist->resample_sample_fmt = decoded_frame->format;
+ ist->resample_sample_rate = decoded_frame->sample_rate;
+ ist->resample_channel_layout = decoded_frame->channel_layout;
+ ist->resample_channels = avctx->channels;
+
+ for (i = 0; i < nb_filtergraphs; i++)
+ if (ist_in_filtergraph(filtergraphs[i], ist)) {
+ FilterGraph *fg = filtergraphs[i];
+ int j;
+ if (configure_filtergraph(fg) < 0) {
+ av_log(NULL, AV_LOG_FATAL, "Error reinitializing filters!\n");
+ exit(1);
+ }
+ for (j = 0; j < fg->nb_outputs; j++) {
+ OutputStream *ost = fg->outputs[j]->ost;
+ if (ost->enc->type == AVMEDIA_TYPE_AUDIO &&
+ !(ost->enc->capabilities & CODEC_CAP_VARIABLE_FRAME_SIZE))
+ av_buffersink_set_frame_size(ost->filter->filter,
+ ost->st->codec->frame_size);
+ }
+ }
+ }
+
+ /* if the decoder provides a pts, use it instead of the last packet pts,
+ as the decoder could be delaying output by a packet or more. */
+ if (decoded_frame->pts != AV_NOPTS_VALUE) {
+ ist->dts = ist->next_dts = ist->pts = ist->next_pts = av_rescale_q(decoded_frame->pts, avctx->time_base, AV_TIME_BASE_Q);
+ decoded_frame_tb = avctx->time_base;
+ } else if (decoded_frame->pkt_pts != AV_NOPTS_VALUE) {
+ decoded_frame->pts = decoded_frame->pkt_pts;
+ pkt->pts = AV_NOPTS_VALUE;
+ decoded_frame_tb = ist->st->time_base;
+ } else if (pkt->pts != AV_NOPTS_VALUE) {
+ decoded_frame->pts = pkt->pts;
+ pkt->pts = AV_NOPTS_VALUE;
+ decoded_frame_tb = ist->st->time_base;
+ } else {
+ decoded_frame->pts = ist->dts;
+ decoded_frame_tb = AV_TIME_BASE_Q;
+ }
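+ /* rescale sample-accurately, carrying the rounding remainder in
+ filter_in_rescale_delta_last to avoid drift */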
+ if (decoded_frame->pts != AV_NOPTS_VALUE)
+ decoded_frame->pts = av_rescale_delta(decoded_frame_tb, decoded_frame->pts,
+ (AVRational){1, ist->st->codec->sample_rate}, decoded_frame->nb_samples, &ist->filter_in_rescale_delta_last,
+ (AVRational){1, ist->st->codec->sample_rate});
+ for (i = 0; i < ist->nb_filters; i++)
- if(av_buffersrc_add_frame(ist->filters[i]->filter, decoded_frame, AV_BUFFERSRC_FLAG_PUSH)<0) {
++ av_buffersrc_write_frame(ist->filters[i]->filter, decoded_frame);
++ /* TODO re-add AV_BUFFERSRC_FLAG_PUSH */
+
+ decoded_frame->pts = AV_NOPTS_VALUE;
+
+ return ret;
+}
+
+static int decode_video(InputStream *ist, AVPacket *pkt, int *got_output)
+{
+ AVFrame *decoded_frame;
+ void *buffer_to_free = NULL;
+ int i, ret = 0, resample_changed;
+ int64_t best_effort_timestamp;
+ AVRational *frame_sample_aspect;
+
+ if (!ist->decoded_frame && !(ist->decoded_frame = avcodec_alloc_frame()))
+ return AVERROR(ENOMEM);
+ decoded_frame = ist->decoded_frame;
+ pkt->dts = av_rescale_q(ist->dts, AV_TIME_BASE_Q, ist->st->time_base);
+
+ update_benchmark(NULL);
+ ret = avcodec_decode_video2(ist->st->codec,
+ decoded_frame, got_output, pkt);
+ update_benchmark("decode_video %d.%d", ist->file_index, ist->st->index);
+ if (!*got_output || ret < 0) {
+ if (!pkt->size) {
+ for (i = 0; i < ist->nb_filters; i++)
+ av_buffersrc_add_ref(ist->filters[i]->filter, NULL, 0);
+ }
+ return ret;
+ }
+
+ if(ist->top_field_first>=0)
+ decoded_frame->top_field_first = ist->top_field_first;
+
+ best_effort_timestamp= av_frame_get_best_effort_timestamp(decoded_frame);
+ if(best_effort_timestamp != AV_NOPTS_VALUE)
+ ist->next_pts = ist->pts = av_rescale_q(decoded_frame->pts = best_effort_timestamp, ist->st->time_base, AV_TIME_BASE_Q);
+
+ if (debug_ts) {
+ av_log(NULL, AV_LOG_INFO, "decoder -> ist_index:%d type:video "
+ "frame_pts:%s frame_pts_time:%s best_effort_ts:%"PRId64" best_effort_ts_time:%s keyframe:%d frame_type:%d \n",
+ ist->st->index, av_ts2str(decoded_frame->pts),
+ av_ts2timestr(decoded_frame->pts, &ist->st->time_base),
+ best_effort_timestamp,
+ av_ts2timestr(best_effort_timestamp, &ist->st->time_base),
+ decoded_frame->key_frame, decoded_frame->pict_type);
+ }
+
+ pkt->size = 0;
+#if FF_API_DEINTERLACE
+ pre_process_video_frame(ist, (AVPicture *)decoded_frame, &buffer_to_free);
+#endif
+
+ rate_emu_sleep(ist);
+
+ if (ist->st->sample_aspect_ratio.num)
+ decoded_frame->sample_aspect_ratio = ist->st->sample_aspect_ratio;
+
+ resample_changed = ist->resample_width != decoded_frame->width ||
+ ist->resample_height != decoded_frame->height ||
+ ist->resample_pix_fmt != decoded_frame->format;
+ if (resample_changed) {
+ av_log(NULL, AV_LOG_INFO,
+ "Input stream #%d:%d frame changed from size:%dx%d fmt:%s to size:%dx%d fmt:%s\n",
+ ist->file_index, ist->st->index,
+ ist->resample_width, ist->resample_height, av_get_pix_fmt_name(ist->resample_pix_fmt),
+ decoded_frame->width, decoded_frame->height, av_get_pix_fmt_name(decoded_frame->format));
+
+ ist->resample_width = decoded_frame->width;
+ ist->resample_height = decoded_frame->height;
+ ist->resample_pix_fmt = decoded_frame->format;
+
+ for (i = 0; i < nb_filtergraphs; i++) {
+ if (ist_in_filtergraph(filtergraphs[i], ist) && ist->reinit_filters &&
+ configure_filtergraph(filtergraphs[i]) < 0) {
+ av_log(NULL, AV_LOG_FATAL, "Error reinitializing filters!\n");
+ exit(1);
+ }
+ }
+ }
+
+ frame_sample_aspect= av_opt_ptr(avcodec_get_frame_class(), decoded_frame, "sample_aspect_ratio");
+ for (i = 0; i < ist->nb_filters; i++) {
+ int changed = ist->st->codec->width != ist->filters[i]->filter->outputs[0]->w
+ || ist->st->codec->height != ist->filters[i]->filter->outputs[0]->h
+ || ist->st->codec->pix_fmt != ist->filters[i]->filter->outputs[0]->format;
+
+ if (!frame_sample_aspect->num)
+ *frame_sample_aspect = ist->st->sample_aspect_ratio;
+ if (ist->dr1 && decoded_frame->type==FF_BUFFER_TYPE_USER && !changed) {
+ FrameBuffer *buf = decoded_frame->opaque;
+ AVFilterBufferRef *fb = avfilter_get_video_buffer_ref_from_arrays(
+ decoded_frame->data, decoded_frame->linesize,
+ AV_PERM_READ | AV_PERM_PRESERVE,
+ ist->st->codec->width, ist->st->codec->height,
+ ist->st->codec->pix_fmt);
+
+ avfilter_copy_frame_props(fb, decoded_frame);
+ fb->buf->priv = buf;
+ fb->buf->free = filter_release_buffer;
+
+ av_assert0(buf->refcount>0);
+ buf->refcount++;
+ av_buffersrc_add_ref(ist->filters[i]->filter, fb,
+ AV_BUFFERSRC_FLAG_NO_CHECK_FORMAT |
+ AV_BUFFERSRC_FLAG_NO_COPY |
+ AV_BUFFERSRC_FLAG_PUSH);
+ } else
++ if(av_buffersrc_add_frame_flags(ist->filters[i]->filter, decoded_frame, AV_BUFFERSRC_FLAG_PUSH)<0) {
+ av_log(NULL, AV_LOG_FATAL, "Failed to inject frame into filter network\n");
+ exit(1);
+ }
+
+ }
+
+ av_free(buffer_to_free);
+ return ret;
+}
+
+static int transcode_subtitles(InputStream *ist, AVPacket *pkt, int *got_output)
+{
+ AVSubtitle subtitle;
+ int i, ret = avcodec_decode_subtitle2(ist->st->codec,
+ &subtitle, got_output, pkt);
+ if (ret < 0 || !*got_output) {
+ if (!pkt->size)
+ sub2video_flush(ist);
+ return ret;
+ }
+
+ if (ist->fix_sub_duration) {
+ if (ist->prev_sub.got_output) {
+ int end = av_rescale(subtitle.pts - ist->prev_sub.subtitle.pts,
+ 1000, AV_TIME_BASE);
+ if (end < ist->prev_sub.subtitle.end_display_time) {
+ av_log(ist->st->codec, AV_LOG_DEBUG,
+ "Subtitle duration reduced from %d to %d\n",
+ ist->prev_sub.subtitle.end_display_time, end);
+ ist->prev_sub.subtitle.end_display_time = end;
+ }
+ }
+ FFSWAP(int, *got_output, ist->prev_sub.got_output);
+ FFSWAP(int, ret, ist->prev_sub.ret);
+ FFSWAP(AVSubtitle, subtitle, ist->prev_sub.subtitle);
+ }
+
+ sub2video_update(ist, &subtitle);
+
+ if (!*got_output || !subtitle.num_rects)
+ return ret;
+
+ rate_emu_sleep(ist);
+
+ for (i = 0; i < nb_output_streams; i++) {
+ OutputStream *ost = output_streams[i];
+
+ if (!check_output_constraints(ist, ost) || !ost->encoding_needed)
+ continue;
+
+ do_subtitle_out(output_files[ost->file_index]->ctx, ost, ist, &subtitle);
+ }
+
+ avsubtitle_free(&subtitle);
+ return ret;
+}
+
+/* pkt = NULL means EOF (needed to flush decoder buffers) */
+static int output_packet(InputStream *ist, const AVPacket *pkt)
+{
+ int ret = 0, i;
+ int got_output;
+
+ AVPacket avpkt;
+ if (!ist->saw_first_ts) {
+ ist->dts = ist->st->avg_frame_rate.num ? - ist->st->codec->has_b_frames * AV_TIME_BASE / av_q2d(ist->st->avg_frame_rate) : 0;
+ ist->pts = 0;
+ if (pkt != NULL && pkt->pts != AV_NOPTS_VALUE && !ist->decoding_needed) {
+ ist->dts += av_rescale_q(pkt->pts, ist->st->time_base, AV_TIME_BASE_Q);
+ ist->pts = ist->dts; //unused but better to set it to a value that's not totally wrong
+ }
+ ist->saw_first_ts = 1;
+ }
+
+ if (ist->next_dts == AV_NOPTS_VALUE)
+ ist->next_dts = ist->dts;
+ if (ist->next_pts == AV_NOPTS_VALUE)
+ ist->next_pts = ist->pts;
+
+ if (pkt == NULL) {
+ /* EOF handling */
+ av_init_packet(&avpkt);
+ avpkt.data = NULL;
+ avpkt.size = 0;
+ goto handle_eof;
+ } else {
+ avpkt = *pkt;
+ }
+
+ if (pkt->dts != AV_NOPTS_VALUE) {
+ ist->next_dts = ist->dts = av_rescale_q(pkt->dts, ist->st->time_base, AV_TIME_BASE_Q);
+ if (ist->st->codec->codec_type != AVMEDIA_TYPE_VIDEO || !ist->decoding_needed)
+ ist->next_pts = ist->pts = ist->dts;
+ }
+
+ // while we have more to decode or while the decoder did output something on EOF
+ while (ist->decoding_needed && (avpkt.size > 0 || (!pkt && got_output))) {
+ int duration;
+ handle_eof:
+
+ ist->pts = ist->next_pts;
+ ist->dts = ist->next_dts;
+
+ if (avpkt.size && avpkt.size != pkt->size) {
+ av_log(NULL, ist->showed_multi_packet_warning ? AV_LOG_VERBOSE : AV_LOG_WARNING,
+ "Multiple frames in a packet from stream %d\n", pkt->stream_index);
+ ist->showed_multi_packet_warning = 1;
+ }
+
+ switch (ist->st->codec->codec_type) {
+ case AVMEDIA_TYPE_AUDIO:
+ ret = decode_audio (ist, &avpkt, &got_output);
+ break;
+ case AVMEDIA_TYPE_VIDEO:
+ ret = decode_video (ist, &avpkt, &got_output);
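+ /* estimate the frame duration: prefer the packet duration, otherwise
+ derive it from the codec time base and repeat_pict/ticks_per_frame */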
+ if (avpkt.duration) {
+ duration = av_rescale_q(avpkt.duration, ist->st->time_base, AV_TIME_BASE_Q);
+ } else if(ist->st->codec->time_base.num != 0 && ist->st->codec->time_base.den != 0) {
+ int ticks= ist->st->parser ? ist->st->parser->repeat_pict+1 : ist->st->codec->ticks_per_frame;
+ duration = ((int64_t)AV_TIME_BASE *
+ ist->st->codec->time_base.num * ticks) /
+ ist->st->codec->time_base.den;
+ } else
+ duration = 0;
+
+ if(ist->dts != AV_NOPTS_VALUE && duration) {
+ ist->next_dts += duration;
+ } else
+ ist->next_dts = AV_NOPTS_VALUE;
+
+ if (got_output)
+ ist->next_pts += duration; //FIXME the duration is not correct in some cases
+ break;
+ case AVMEDIA_TYPE_SUBTITLE:
+ ret = transcode_subtitles(ist, &avpkt, &got_output);
+ break;
+ default:
+ return -1;
+ }
+
+ if (ret < 0)
+ return ret;
+
+ avpkt.dts=
+ avpkt.pts= AV_NOPTS_VALUE;
+
+ // touch data and size only if not EOF
+ if (pkt) {
+ if(ist->st->codec->codec_type != AVMEDIA_TYPE_AUDIO)
+ ret = avpkt.size;
+ avpkt.data += ret;
+ avpkt.size -= ret;
+ }
+ if (!got_output) {
+ continue;
+ }
+ }
+
+ /* handle stream copy */
+ if (!ist->decoding_needed) {
+ rate_emu_sleep(ist);
+ ist->dts = ist->next_dts;
+ switch (ist->st->codec->codec_type) {
+ case AVMEDIA_TYPE_AUDIO:
+ ist->next_dts += ((int64_t)AV_TIME_BASE * ist->st->codec->frame_size) /
+ ist->st->codec->sample_rate;
+ break;
+ case AVMEDIA_TYPE_VIDEO:
+ if (pkt->duration) {
+ ist->next_dts += av_rescale_q(pkt->duration, ist->st->time_base, AV_TIME_BASE_Q);
+ } else if(ist->st->codec->time_base.num != 0) {
+ int ticks= ist->st->parser ? ist->st->parser->repeat_pict + 1 : ist->st->codec->ticks_per_frame;
+ ist->next_dts += ((int64_t)AV_TIME_BASE *
+ ist->st->codec->time_base.num * ticks) /
+ ist->st->codec->time_base.den;
+ }
+ break;
+ }
+ ist->pts = ist->dts;
+ ist->next_pts = ist->next_dts;
+ }
+ for (i = 0; pkt && i < nb_output_streams; i++) {
+ OutputStream *ost = output_streams[i];
+
+ if (!check_output_constraints(ist, ost) || ost->encoding_needed)
+ continue;
+
+ do_streamcopy(ist, ost, pkt);
+ }
+
+ return 0;
+}
+
+static void print_sdp(void)
+{
+ char sdp[16384];
+ int i;
+ AVFormatContext **avc = av_malloc(sizeof(*avc) * nb_output_files);
+
+ if (!avc)
+ exit(1);
+ for (i = 0; i < nb_output_files; i++)
+ avc[i] = output_files[i]->ctx;
+
+ av_sdp_create(avc, nb_output_files, sdp, sizeof(sdp));
+ printf("SDP:\n%s\n", sdp);
+ fflush(stdout);
+ av_freep(&avc);
+}
+
+static int init_input_stream(int ist_index, char *error, int error_len)
+{
+ int ret;
+ InputStream *ist = input_streams[ist_index];
+
+ if (ist->decoding_needed) {
+ AVCodec *codec = ist->dec;
+ if (!codec) {
+ snprintf(error, error_len, "Decoder (codec %s) not found for input stream #%d:%d",
+ avcodec_get_name(ist->st->codec->codec_id), ist->file_index, ist->st->index);
+ return AVERROR(EINVAL);
+ }
+
+ ist->dr1 = (codec->capabilities & CODEC_CAP_DR1) && !(FF_API_DEINTERLACE && do_deinterlace);
+ if (codec->type == AVMEDIA_TYPE_VIDEO && ist->dr1) {
+ ist->st->codec->get_buffer = codec_get_buffer;
+ ist->st->codec->release_buffer = codec_release_buffer;
+ ist->st->codec->opaque = &ist->buffer_pool;
+ }
+
+ if (!av_dict_get(ist->opts, "threads", NULL, 0))
+ av_dict_set(&ist->opts, "threads", "auto", 0);
+ if ((ret = avcodec_open2(ist->st->codec, codec, &ist->opts)) < 0) {
+ if (ret == AVERROR_EXPERIMENTAL)
+ abort_codec_experimental(codec, 0);
+ snprintf(error, error_len, "Error while opening decoder for input stream #%d:%d",
+ ist->file_index, ist->st->index);
+ return ret;
+ }
+ assert_avoptions(ist->opts);
+ }
+
+ ist->next_pts = AV_NOPTS_VALUE;
+ ist->next_dts = AV_NOPTS_VALUE;
+ ist->is_start = 1;
+
+ return 0;
+}
+
+static InputStream *get_input_stream(OutputStream *ost)
+{
+ if (ost->source_index >= 0)
+ return input_streams[ost->source_index];
+ return NULL;
+}
+
+static int compare_int64(const void *a, const void *b)
+{
+ int64_t va = *(int64_t *)a, vb = *(int64_t *)b;
+ return va < vb ? -1 : va > vb ? +1 : 0;
+}
+
+static void parse_forced_key_frames(char *kf, OutputStream *ost,
+ AVCodecContext *avctx)
+{
+ char *p;
+ int n = 1, i, size, index = 0;
+ int64_t t, *pts;
+
+ for (p = kf; *p; p++)
+ if (*p == ',')
+ n++;
+ size = n;
+ pts = av_malloc(sizeof(*pts) * size);
+ if (!pts) {
+ av_log(NULL, AV_LOG_FATAL, "Could not allocate forced key frames array.\n");
+ exit(1);
+ }
+
+ p = kf;
+ for (i = 0; i < n; i++) {
+ char *next = strchr(p, ',');
+
+ if (next)
+ *next++ = 0;
+
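+ /* "chapters", optionally followed by a time offset, expands to one
+ forced keyframe at each chapter start */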
+ if (!memcmp(p, "chapters", 8)) {
+
+ AVFormatContext *avf = output_files[ost->file_index]->ctx;
+ int j;
+
+ if (avf->nb_chapters > INT_MAX - size ||
+ !(pts = av_realloc_f(pts, size += avf->nb_chapters - 1,
+ sizeof(*pts)))) {
+ av_log(NULL, AV_LOG_FATAL,
+ "Could not allocate forced key frames array.\n");
+ exit(1);
+ }
+ t = p[8] ? parse_time_or_die("force_key_frames", p + 8, 1) : 0;
+ t = av_rescale_q(t, AV_TIME_BASE_Q, avctx->time_base);
+
+ for (j = 0; j < avf->nb_chapters; j++) {
+ AVChapter *c = avf->chapters[j];
+ av_assert1(index < size);
+ pts[index++] = av_rescale_q(c->start, c->time_base,
+ avctx->time_base) + t;
+ }
+
+ } else {
+
+ t = parse_time_or_die("force_key_frames", p, 1);
+ av_assert1(index < size);
+ pts[index++] = av_rescale_q(t, AV_TIME_BASE_Q, avctx->time_base);
+
+ }
+
+ p = next;
+ }
+
+ av_assert0(index == size);
+ qsort(pts, size, sizeof(*pts), compare_int64);
+ ost->forced_kf_count = size;
+ ost->forced_kf_pts = pts;
+}
+
+static void report_new_stream(int input_index, AVPacket *pkt)
+{
+ InputFile *file = input_files[input_index];
+ AVStream *st = file->ctx->streams[pkt->stream_index];
+
+ if (pkt->stream_index < file->nb_streams_warn)
+ return;
+ av_log(file->ctx, AV_LOG_WARNING,
+ "New %s stream %d:%d at pos:%"PRId64" and DTS:%ss\n",
+ av_get_media_type_string(st->codec->codec_type),
+ input_index, pkt->stream_index,
+ pkt->pos, av_ts2timestr(pkt->dts, &st->time_base));
+ file->nb_streams_warn = pkt->stream_index + 1;
+}
+
+static int transcode_init(void)
+{
+ int ret = 0, i, j, k;
+ AVFormatContext *oc;
+ AVCodecContext *codec;
+ OutputStream *ost;
+ InputStream *ist;
+ char error[1024];
+ int want_sdp = 1;
+
+ /* init framerate emulation */
+ for (i = 0; i < nb_input_files; i++) {
+ InputFile *ifile = input_files[i];
+ if (ifile->rate_emu)
+ for (j = 0; j < ifile->nb_streams; j++)
+ input_streams[j + ifile->ist_index]->start = av_gettime();
+ }
+
+ /* output stream init */
+ for (i = 0; i < nb_output_files; i++) {
+ oc = output_files[i]->ctx;
+ if (!oc->nb_streams && !(oc->oformat->flags & AVFMT_NOSTREAMS)) {
+ av_dump_format(oc, i, oc->filename, 1);
+ av_log(NULL, AV_LOG_ERROR, "Output file #%d does not contain any stream\n", i);
+ return AVERROR(EINVAL);
+ }
+ }
+
+ /* init complex filtergraphs */
+ for (i = 0; i < nb_filtergraphs; i++)
+ if ((ret = avfilter_graph_config(filtergraphs[i]->graph, NULL)) < 0)
+ return ret;
+
+ /* for each output stream, we compute the right encoding parameters */
+ for (i = 0; i < nb_output_streams; i++) {
+ AVCodecContext *icodec = NULL;
+ ost = output_streams[i];
+ oc = output_files[ost->file_index]->ctx;
+ ist = get_input_stream(ost);
+
+ if (ost->attachment_filename)
+ continue;
+
+ codec = ost->st->codec;
+
+ if (ist) {
+ icodec = ist->st->codec;
+
+ ost->st->disposition = ist->st->disposition;
+ codec->bits_per_raw_sample = icodec->bits_per_raw_sample;
+ codec->chroma_sample_location = icodec->chroma_sample_location;
+ }
+
+ if (ost->stream_copy) {
+ uint64_t extra_size;
+
+ av_assert0(ist && !ost->filter);
+
+ extra_size = (uint64_t)icodec->extradata_size + FF_INPUT_BUFFER_PADDING_SIZE;
+
+ if (extra_size > INT_MAX) {
+ return AVERROR(EINVAL);
+ }
+
+ /* if stream_copy is selected, no need to decode or encode */
+ codec->codec_id = icodec->codec_id;
+ codec->codec_type = icodec->codec_type;
+
+ if (!codec->codec_tag) {
+ unsigned int codec_tag;
+ if (!oc->oformat->codec_tag ||
+ av_codec_get_id (oc->oformat->codec_tag, icodec->codec_tag) == codec->codec_id ||
+ !av_codec_get_tag2(oc->oformat->codec_tag, icodec->codec_id, &codec_tag))
+ codec->codec_tag = icodec->codec_tag;
+ }
+
+ codec->bit_rate = icodec->bit_rate;
+ codec->rc_max_rate = icodec->rc_max_rate;
+ codec->rc_buffer_size = icodec->rc_buffer_size;
+ codec->field_order = icodec->field_order;
+ codec->extradata = av_mallocz(extra_size);
+ if (!codec->extradata) {
+ return AVERROR(ENOMEM);
+ }
+ memcpy(codec->extradata, icodec->extradata, icodec->extradata_size);
+ codec->extradata_size= icodec->extradata_size;
+ codec->bits_per_coded_sample = icodec->bits_per_coded_sample;
+
+ codec->time_base = ist->st->time_base;
+ /*
+ * AVI is a special case here because it supports variable fps, but
+ * having the fps and timebase differ significantly adds quite some
+ * overhead.
+ */
+ if(!strcmp(oc->oformat->name, "avi")) {
+ if ( copy_tb<0 && av_q2d(ist->st->r_frame_rate) >= av_q2d(ist->st->avg_frame_rate)
+ && 0.5/av_q2d(ist->st->r_frame_rate) > av_q2d(ist->st->time_base)
+ && 0.5/av_q2d(ist->st->r_frame_rate) > av_q2d(icodec->time_base)
+ && av_q2d(ist->st->time_base) < 1.0/500 && av_q2d(icodec->time_base) < 1.0/500
+ || copy_tb==2){
+ codec->time_base.num = ist->st->r_frame_rate.den;
+ codec->time_base.den = 2*ist->st->r_frame_rate.num;
+ codec->ticks_per_frame = 2;
+ } else if ( copy_tb<0 && av_q2d(icodec->time_base)*icodec->ticks_per_frame > 2*av_q2d(ist->st->time_base)
+ && av_q2d(ist->st->time_base) < 1.0/500
+ || copy_tb==0){
+ codec->time_base = icodec->time_base;
+ codec->time_base.num *= icodec->ticks_per_frame;
+ codec->time_base.den *= 2;
+ codec->ticks_per_frame = 2;
+ }
+ } else if(!(oc->oformat->flags & AVFMT_VARIABLE_FPS)
+ && strcmp(oc->oformat->name, "mov") && strcmp(oc->oformat->name, "mp4") && strcmp(oc->oformat->name, "3gp")
+ && strcmp(oc->oformat->name, "3g2") && strcmp(oc->oformat->name, "psp") && strcmp(oc->oformat->name, "ipod")
+ && strcmp(oc->oformat->name, "f4v")
+ ) {
+ if( copy_tb<0 && icodec->time_base.den
+ && av_q2d(icodec->time_base)*icodec->ticks_per_frame > av_q2d(ist->st->time_base)
+ && av_q2d(ist->st->time_base) < 1.0/500
+ || copy_tb==0){
+ codec->time_base = icodec->time_base;
+ codec->time_base.num *= icodec->ticks_per_frame;
+ }
+ }
+ if ( codec->codec_tag == AV_RL32("tmcd")
+ && icodec->time_base.num < icodec->time_base.den
+ && icodec->time_base.num > 0
+ && 121LL*icodec->time_base.num > icodec->time_base.den) {
+ codec->time_base = icodec->time_base;
+ }
+
+ if(ost->frame_rate.num)
+ codec->time_base = av_inv_q(ost->frame_rate);
+
+ av_reduce(&codec->time_base.num, &codec->time_base.den,
+ codec->time_base.num, codec->time_base.den, INT_MAX);
+
+ switch (codec->codec_type) {
+ case AVMEDIA_TYPE_AUDIO:
+ if (audio_volume != 256) {
+ av_log(NULL, AV_LOG_FATAL, "-acodec copy and -vol are incompatible (frames are not decoded)\n");
+ exit(1);
+ }
+ codec->channel_layout = icodec->channel_layout;
+ codec->sample_rate = icodec->sample_rate;
+ codec->channels = icodec->channels;
+ codec->frame_size = icodec->frame_size;
+ codec->audio_service_type = icodec->audio_service_type;
+ codec->block_align = icodec->block_align;
+ if((codec->block_align == 1 || codec->block_align == 1152 || codec->block_align == 576) && codec->codec_id == AV_CODEC_ID_MP3)
+ codec->block_align= 0;
+ if(codec->codec_id == AV_CODEC_ID_AC3)
+ codec->block_align= 0;
+ break;
+ case AVMEDIA_TYPE_VIDEO:
+ codec->pix_fmt = icodec->pix_fmt;
+ codec->width = icodec->width;
+ codec->height = icodec->height;
+ codec->has_b_frames = icodec->has_b_frames;
+ if (!codec->sample_aspect_ratio.num) {
+ codec->sample_aspect_ratio =
+ ost->st->sample_aspect_ratio =
+ ist->st->sample_aspect_ratio.num ? ist->st->sample_aspect_ratio :
+ ist->st->codec->sample_aspect_ratio.num ?
+ ist->st->codec->sample_aspect_ratio : (AVRational){0, 1};
+ }
+ ost->st->avg_frame_rate = ist->st->avg_frame_rate;
+ break;
+ case AVMEDIA_TYPE_SUBTITLE:
+ codec->width = icodec->width;
+ codec->height = icodec->height;
+ break;
+ case AVMEDIA_TYPE_DATA:
+ case AVMEDIA_TYPE_ATTACHMENT:
+ break;
+ default:
+ abort();
+ }
+ } else {
+ if (!ost->enc)
+ ost->enc = avcodec_find_encoder(codec->codec_id);
+ if (!ost->enc) {
+ /* should only happen when a default codec is not present. */
+ snprintf(error, sizeof(error), "Encoder (codec %s) not found for output stream #%d:%d",
+ avcodec_get_name(ost->st->codec->codec_id), ost->file_index, ost->index);
+ ret = AVERROR(EINVAL);
+ goto dump_format;
+ }
+
+ if (ist)
+ ist->decoding_needed++;
+ ost->encoding_needed = 1;
+
+ if (!ost->filter &&
+ (codec->codec_type == AVMEDIA_TYPE_VIDEO ||
+ codec->codec_type == AVMEDIA_TYPE_AUDIO)) {
+ FilterGraph *fg;
+ fg = init_simple_filtergraph(ist, ost);
+ if (configure_filtergraph(fg)) {
+ av_log(NULL, AV_LOG_FATAL, "Error opening filters!\n");
+ exit(1);
+ }
+ }
+
+ if (codec->codec_type == AVMEDIA_TYPE_VIDEO) {
+ if (ost->filter && !ost->frame_rate.num)
+ ost->frame_rate = av_buffersink_get_frame_rate(ost->filter->filter);
+ if (ist && !ost->frame_rate.num)
+ ost->frame_rate = ist->framerate;
+ if (ist && !ost->frame_rate.num)
+ ost->frame_rate = ist->st->r_frame_rate.num ? ist->st->r_frame_rate : (AVRational){25, 1};
+// ost->frame_rate = ist->st->avg_frame_rate.num ? ist->st->avg_frame_rate : (AVRational){25, 1};
+ if (ost->enc && ost->enc->supported_framerates && !ost->force_fps) {
+ int idx = av_find_nearest_q_idx(ost->frame_rate, ost->enc->supported_framerates);
+ ost->frame_rate = ost->enc->supported_framerates[idx];
+ }
+ }
+
+ switch (codec->codec_type) {
+ case AVMEDIA_TYPE_AUDIO:
+ codec->sample_fmt = ost->filter->filter->inputs[0]->format;
+ codec->sample_rate = ost->filter->filter->inputs[0]->sample_rate;
+ codec->channel_layout = ost->filter->filter->inputs[0]->channel_layout;
+ codec->channels = avfilter_link_get_channels(ost->filter->filter->inputs[0]);
+ codec->time_base = (AVRational){ 1, codec->sample_rate };
+ break;
+ case AVMEDIA_TYPE_VIDEO:
+ codec->time_base = av_inv_q(ost->frame_rate);
+ if (ost->filter && !(codec->time_base.num && codec->time_base.den))
+ codec->time_base = ost->filter->filter->inputs[0]->time_base;
+ if ( av_q2d(codec->time_base) < 0.001 && video_sync_method != VSYNC_PASSTHROUGH
+ && (video_sync_method == VSYNC_CFR || (video_sync_method == VSYNC_AUTO && !(oc->oformat->flags & AVFMT_VARIABLE_FPS)))){
+ av_log(oc, AV_LOG_WARNING, "Frame rate very high for a muxer not efficiently supporting it.\n"
+ "Please consider specifying a lower framerate, a different muxer or -vsync 2\n");
+ }
+ for (j = 0; j < ost->forced_kf_count; j++)
+ ost->forced_kf_pts[j] = av_rescale_q(ost->forced_kf_pts[j],
+ AV_TIME_BASE_Q,
+ codec->time_base);
+
+ codec->width = ost->filter->filter->inputs[0]->w;
+ codec->height = ost->filter->filter->inputs[0]->h;
+ codec->sample_aspect_ratio = ost->st->sample_aspect_ratio =
+ ost->frame_aspect_ratio ? // overridden by the -aspect cli option
+ av_d2q(ost->frame_aspect_ratio * codec->height/codec->width, 255) :
+ ost->filter->filter->inputs[0]->sample_aspect_ratio;
+ codec->pix_fmt = ost->filter->filter->inputs[0]->format;
+
+ if (!icodec ||
+ codec->width != icodec->width ||
+ codec->height != icodec->height ||
+ codec->pix_fmt != icodec->pix_fmt) {
+ codec->bits_per_raw_sample = frame_bits_per_raw_sample;
+ }
+
+ if (ost->forced_keyframes) {
+ if (!strncmp(ost->forced_keyframes, "expr:", 5)) {
+ ret = av_expr_parse(&ost->forced_keyframes_pexpr, ost->forced_keyframes+5,
+ forced_keyframes_const_names, NULL, NULL, NULL, NULL, 0, NULL);
+ if (ret < 0) {
+ av_log(NULL, AV_LOG_ERROR,
+ "Invalid force_key_frames expression '%s'\n", ost->forced_keyframes+5);
+ return ret;
+ }
+ ost->forced_keyframes_expr_const_values[FKF_N] = 0;
+ ost->forced_keyframes_expr_const_values[FKF_N_FORCED] = 0;
+ ost->forced_keyframes_expr_const_values[FKF_PREV_FORCED_N] = NAN;
+ ost->forced_keyframes_expr_const_values[FKF_PREV_FORCED_T] = NAN;
+ } else {
+ parse_forced_key_frames(ost->forced_keyframes, ost, ost->st->codec);
+ }
+ }
+ break;
+ case AVMEDIA_TYPE_SUBTITLE:
+ codec->time_base = (AVRational){1, 1000};
+ if (!codec->width) {
+ codec->width = input_streams[ost->source_index]->st->codec->width;
+ codec->height = input_streams[ost->source_index]->st->codec->height;
+ }
+ break;
+ default:
+ abort();
+ break;
+ }
+ /* two pass mode */
+ if (codec->flags & (CODEC_FLAG_PASS1 | CODEC_FLAG_PASS2)) {
+ char logfilename[1024];
+ FILE *f;
+
+ snprintf(logfilename, sizeof(logfilename), "%s-%d.log",
+ ost->logfile_prefix ? ost->logfile_prefix :
+ DEFAULT_PASS_LOGFILENAME_PREFIX,
+ i);
+ if (!strcmp(ost->enc->name, "libx264")) {
+ av_dict_set(&ost->opts, "stats", logfilename, AV_DICT_DONT_OVERWRITE);
+ } else {
+ if (codec->flags & CODEC_FLAG_PASS2) {
+ char *logbuffer;
+ size_t logbuffer_size;
+ if (cmdutils_read_file(logfilename, &logbuffer, &logbuffer_size) < 0) {
+ av_log(NULL, AV_LOG_FATAL, "Error reading log file '%s' for pass-2 encoding\n",
+ logfilename);
+ exit(1);
+ }
+ codec->stats_in = logbuffer;
+ }
+ if (codec->flags & CODEC_FLAG_PASS1) {
+ f = fopen(logfilename, "wb");
+ if (!f) {
+ av_log(NULL, AV_LOG_FATAL, "Cannot write log file '%s' for pass-1 encoding: %s\n",
+ logfilename, strerror(errno));
+ exit(1);
+ }
+ ost->logfile = f;
+ }
+ }
+ }
+ }
+ }
+
+ /* open each encoder */
+ for (i = 0; i < nb_output_streams; i++) {
+ ost = output_streams[i];
+ if (ost->encoding_needed) {
+ AVCodec *codec = ost->enc;
+ AVCodecContext *dec = NULL;
+
+ if ((ist = get_input_stream(ost)))
+ dec = ist->st->codec;
+ if (dec && dec->subtitle_header) {
+ /* ASS code assumes this buffer is null-terminated, so add an extra byte. */
+ ost->st->codec->subtitle_header = av_mallocz(dec->subtitle_header_size + 1);
+ if (!ost->st->codec->subtitle_header) {
+ ret = AVERROR(ENOMEM);
+ goto dump_format;
+ }
+ memcpy(ost->st->codec->subtitle_header, dec->subtitle_header, dec->subtitle_header_size);
+ ost->st->codec->subtitle_header_size = dec->subtitle_header_size;
+ }
+ if (!av_dict_get(ost->opts, "threads", NULL, 0))
+ av_dict_set(&ost->opts, "threads", "auto", 0);
+ if ((ret = avcodec_open2(ost->st->codec, codec, &ost->opts)) < 0) {
+ if (ret == AVERROR_EXPERIMENTAL)
+ abort_codec_experimental(codec, 1);
+ snprintf(error, sizeof(error), "Error while opening encoder for output stream #%d:%d - maybe incorrect parameters such as bit_rate, rate, width or height",
+ ost->file_index, ost->index);
+ goto dump_format;
+ }
+ if (ost->enc->type == AVMEDIA_TYPE_AUDIO &&
+ !(ost->enc->capabilities & CODEC_CAP_VARIABLE_FRAME_SIZE))
+ av_buffersink_set_frame_size(ost->filter->filter,
+ ost->st->codec->frame_size);
+ assert_avoptions(ost->opts);
+ if (ost->st->codec->bit_rate && ost->st->codec->bit_rate < 1000)
+ av_log(NULL, AV_LOG_WARNING, "The bitrate parameter is set too low."
+ " It takes bits/s as argument, not kbits/s\n");
+ extra_size += ost->st->codec->extradata_size;
+
+ if (ost->st->codec->me_threshold)
+ input_streams[ost->source_index]->st->codec->debug |= FF_DEBUG_MV;
+ } else {
+ av_opt_set_dict(ost->st->codec, &ost->opts);
+ }
+ }
+
+ /* init input streams */
+ for (i = 0; i < nb_input_streams; i++)
+ if ((ret = init_input_stream(i, error, sizeof(error))) < 0) {
+ for (i = 0; i < nb_output_streams; i++) {
+ ost = output_streams[i];
+ avcodec_close(ost->st->codec);
+ }
+ goto dump_format;
+ }
+
+ /* discard unused programs */
+ for (i = 0; i < nb_input_files; i++) {
+ InputFile *ifile = input_files[i];
+ for (j = 0; j < ifile->ctx->nb_programs; j++) {
+ AVProgram *p = ifile->ctx->programs[j];
+ int discard = AVDISCARD_ALL;
+
+ for (k = 0; k < p->nb_stream_indexes; k++)
+ if (!input_streams[ifile->ist_index + p->stream_index[k]]->discard) {
+ discard = AVDISCARD_DEFAULT;
+ break;
+ }
+ p->discard = discard;
+ }
+ }
+
+ /* open files and write file headers */
+ for (i = 0; i < nb_output_files; i++) {
+ oc = output_files[i]->ctx;
+ oc->interrupt_callback = int_cb;
+ if ((ret = avformat_write_header(oc, &output_files[i]->opts)) < 0) {
+ char errbuf[128];
+ const char *errbuf_ptr = errbuf;
+ if (av_strerror(ret, errbuf, sizeof(errbuf)) < 0)
+ errbuf_ptr = strerror(AVUNERROR(ret));
+ snprintf(error, sizeof(error), "Could not write header for output file #%d (incorrect codec parameters ?): %s", i, errbuf_ptr);
+ ret = AVERROR(EINVAL);
+ goto dump_format;
+ }
+// assert_avoptions(output_files[i]->opts);
+ if (strcmp(oc->oformat->name, "rtp")) {
+ want_sdp = 0;
+ }
+ }
+
+ dump_format:
+ /* dump the file output parameters - cannot be done earlier, in case
+ of stream copy */
+ for (i = 0; i < nb_output_files; i++) {
+ av_dump_format(output_files[i]->ctx, i, output_files[i]->ctx->filename, 1);
+ }
+
+ /* dump the stream mapping */
+ av_log(NULL, AV_LOG_INFO, "Stream mapping:\n");
+ for (i = 0; i < nb_input_streams; i++) {
+ ist = input_streams[i];
+
+ for (j = 0; j < ist->nb_filters; j++) {
+ if (ist->filters[j]->graph->graph_desc) {
+ av_log(NULL, AV_LOG_INFO, " Stream #%d:%d (%s) -> %s",
+ ist->file_index, ist->st->index, ist->dec ? ist->dec->name : "?",
+ ist->filters[j]->name);
+ if (nb_filtergraphs > 1)
+ av_log(NULL, AV_LOG_INFO, " (graph %d)", ist->filters[j]->graph->index);
+ av_log(NULL, AV_LOG_INFO, "\n");
+ }
+ }
+ }
+
+ for (i = 0; i < nb_output_streams; i++) {
+ ost = output_streams[i];
+
+ if (ost->attachment_filename) {
+ /* an attached file */
+ av_log(NULL, AV_LOG_INFO, " File %s -> Stream #%d:%d\n",
+ ost->attachment_filename, ost->file_index, ost->index);
+ continue;
+ }
+
+ if (ost->filter && ost->filter->graph->graph_desc) {
+ /* output from a complex graph */
+ av_log(NULL, AV_LOG_INFO, " %s", ost->filter->name);
+ if (nb_filtergraphs > 1)
+ av_log(NULL, AV_LOG_INFO, " (graph %d)", ost->filter->graph->index);
+
+ av_log(NULL, AV_LOG_INFO, " -> Stream #%d:%d (%s)\n", ost->file_index,
+ ost->index, ost->enc ? ost->enc->name : "?");
+ continue;
+ }
+
+ av_log(NULL, AV_LOG_INFO, " Stream #%d:%d -> #%d:%d",
+ input_streams[ost->source_index]->file_index,
+ input_streams[ost->source_index]->st->index,
+ ost->file_index,
+ ost->index);
+ if (ost->sync_ist != input_streams[ost->source_index])
+ av_log(NULL, AV_LOG_INFO, " [sync #%d:%d]",
+ ost->sync_ist->file_index,
+ ost->sync_ist->st->index);
+ if (ost->stream_copy)
+ av_log(NULL, AV_LOG_INFO, " (copy)");
+ else
+ av_log(NULL, AV_LOG_INFO, " (%s -> %s)", input_streams[ost->source_index]->dec ?
+ input_streams[ost->source_index]->dec->name : "?",
+ ost->enc ? ost->enc->name : "?");
+ av_log(NULL, AV_LOG_INFO, "\n");
+ }
+
+ if (ret) {
+ av_log(NULL, AV_LOG_ERROR, "%s\n", error);
+ return ret;
+ }
+
+ if (want_sdp) {
+ print_sdp();
+ }
+
+ return 0;
+}
+
+/* Return 1 if there are still streams for which more output is wanted, 0 otherwise. */
+static int need_output(void)
+{
+ int i;
+
+ for (i = 0; i < nb_output_streams; i++) {
+ OutputStream *ost = output_streams[i];
+ OutputFile *of = output_files[ost->file_index];
+ AVFormatContext *os = output_files[ost->file_index]->ctx;
+
+ if (ost->finished ||
+ (os->pb && avio_tell(os->pb) >= of->limit_filesize))
+ continue;
+ if (ost->frame_number >= ost->max_frames) {
+ int j;
+ for (j = 0; j < of->ctx->nb_streams; j++)
+ close_output_stream(output_streams[of->ost_index + j]);
+ continue;
+ }
+
+ return 1;
+ }
+
+ return 0;
+}
+
+/**
+ * Select the output stream to process.
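+ * The stream with the smallest current output DTS is chosen, which keeps
+ * the output files interleaved.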
+ *
+ * @return selected output stream, or NULL if none available
+ */
+static OutputStream *choose_output(void)
+{
+ int i;
+ int64_t opts_min = INT64_MAX;
+ OutputStream *ost_min = NULL;
+
+ for (i = 0; i < nb_output_streams; i++) {
+ OutputStream *ost = output_streams[i];
+ int64_t opts = av_rescale_q(ost->st->cur_dts, ost->st->time_base,
+ AV_TIME_BASE_Q);
+ if (!ost->unavailable && !ost->finished && opts < opts_min) {
+ opts_min = opts;
+ ost_min = ost;
+ }
+ }
+ return ost_min;
+}
+
+static int check_keyboard_interaction(int64_t cur_time)
+{
+ int i, ret, key;
+ static int64_t last_time;
+ if (received_nb_signals)
+ return AVERROR_EXIT;
+ /* read_key() returns 0 on EOF */
+ if(cur_time - last_time >= 100000 && !run_as_daemon){
+ key = read_key();
+ last_time = cur_time;
+ } else
+ key = -1;
+ if (key == 'q')
+ return AVERROR_EXIT;
+ if (key == '+') av_log_set_level(av_log_get_level()+10);
+ if (key == '-') av_log_set_level(av_log_get_level()-10);
+ if (key == 's') qp_hist ^= 1;
+ if (key == 'h'){
+ if (do_hex_dump){
+ do_hex_dump = do_pkt_dump = 0;
+ } else if(do_pkt_dump){
+ do_hex_dump = 1;
+ } else
+ do_pkt_dump = 1;
+ av_log_set_level(AV_LOG_DEBUG);
+ }
+ if (key == 'c' || key == 'C'){
+ char buf[4096], target[64], command[256], arg[256] = {0};
+ double time;
+ int k, n = 0;
+ fprintf(stderr, "\nEnter command: <target> <time> <command>[ <argument>]\n");
+ i = 0;
+ while ((k = read_key()) != '\n' && k != '\r' && i < sizeof(buf)-1)
+ if (k > 0)
+ buf[i++] = k;
+ buf[i] = 0;
+ if (k > 0 &&
+ (n = sscanf(buf, "%63[^ ] %lf %255[^ ] %255[^\n]", target, &time, command, arg)) >= 3) {
+ av_log(NULL, AV_LOG_DEBUG, "Processing command target:%s time:%f command:%s arg:%s",
+ target, time, command, arg);
+ for (i = 0; i < nb_filtergraphs; i++) {
+ FilterGraph *fg = filtergraphs[i];
+ if (fg->graph) {
+ if (time < 0) {
+ ret = avfilter_graph_send_command(fg->graph, target, command, arg, buf, sizeof(buf),
+ key == 'c' ? AVFILTER_CMD_FLAG_ONE : 0);
+ fprintf(stderr, "Command reply for stream %d: ret:%d res:%s\n", i, ret, buf);
+ } else {
+ ret = avfilter_graph_queue_command(fg->graph, target, command, arg, 0, time);
+ }
+ }
+ }
+ } else {
+ av_log(NULL, AV_LOG_ERROR,
+ "Parse error, at least 3 arguments were expected, "
+ "only %d given in string '%s'\n", n, buf);
+ }
+ }
+ if (key == 'd' || key == 'D'){
+ int debug=0;
+ if(key == 'D') {
+ debug = input_streams[0]->st->codec->debug<<1;
+ if(!debug) debug = 1;
+ while(debug & (FF_DEBUG_DCT_COEFF|FF_DEBUG_VIS_QP|FF_DEBUG_VIS_MB_TYPE)) //unsupported, would just crash
+ debug += debug;
+ } else
+ if(scanf("%d", &debug)!=1)
+ fprintf(stderr,"error parsing debug value\n");
+ for(i=0;i<nb_input_streams;i++) {
+ input_streams[i]->st->codec->debug = debug;
+ }
+ for(i=0;i<nb_output_streams;i++) {
+ OutputStream *ost = output_streams[i];
+ ost->st->codec->debug = debug;
+ }
+ if(debug) av_log_set_level(AV_LOG_DEBUG);
+ fprintf(stderr,"debug=%d\n", debug);
+ }
+ if (key == '?'){
+ fprintf(stderr, "key function\n"
+ "? show this help\n"
+ "+ increase verbosity\n"
+ "- decrease verbosity\n"
+ "c Send command to filtergraph\n"
+ "D cycle through available debug modes\n"
+ "h dump packets/hex press to cycle through the 3 states\n"
+ "q quit\n"
+ "s Show QP histogram\n"
+ );
+ }
+ return 0;
+}
+
+#if HAVE_PTHREADS
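+/* Per-input-file demuxer thread: read packets and queue them in a FIFO,
+   blocking while the FIFO is full. */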
+static void *input_thread(void *arg)
+{
+ InputFile *f = arg;
+ int ret = 0;
+
+ while (!transcoding_finished && ret >= 0) {
+ AVPacket pkt;
+ ret = av_read_frame(f->ctx, &pkt);
+
+ if (ret == AVERROR(EAGAIN)) {
+ av_usleep(10000);
+ ret = 0;
+ continue;
+ } else if (ret < 0)
+ break;
+
+ pthread_mutex_lock(&f->fifo_lock);
+ while (!av_fifo_space(f->fifo))
+ pthread_cond_wait(&f->fifo_cond, &f->fifo_lock);
+
+ av_dup_packet(&pkt);
+ av_fifo_generic_write(f->fifo, &pkt, sizeof(pkt), NULL);
+
+ pthread_mutex_unlock(&f->fifo_lock);
+ }
+
+ f->finished = 1;
+ return NULL;
+}
+
+static void free_input_threads(void)
+{
+ int i;
+
+ if (nb_input_files == 1)
+ return;
+
+ transcoding_finished = 1;
+
+ for (i = 0; i < nb_input_files; i++) {
+ InputFile *f = input_files[i];
+ AVPacket pkt;
+
+ if (!f->fifo || f->joined)
+ continue;
+
+ pthread_mutex_lock(&f->fifo_lock);
+ while (av_fifo_size(f->fifo)) {
+ av_fifo_generic_read(f->fifo, &pkt, sizeof(pkt), NULL);
+ av_free_packet(&pkt);
+ }
+ pthread_cond_signal(&f->fifo_cond);
+ pthread_mutex_unlock(&f->fifo_lock);
+
+ pthread_join(f->thread, NULL);
+ f->joined = 1;
+
+ while (av_fifo_size(f->fifo)) {
+ av_fifo_generic_read(f->fifo, &pkt, sizeof(pkt), NULL);
+ av_free_packet(&pkt);
+ }
+ av_fifo_free(f->fifo);
+ }
+}
+
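+/* spawn one demuxer thread per input file; unused when there is a single input */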
+static int init_input_threads(void)
+{
+ int i, ret;
+
+ if (nb_input_files == 1)
+ return 0;
+
+ for (i = 0; i < nb_input_files; i++) {
+ InputFile *f = input_files[i];
+
+ if (!(f->fifo = av_fifo_alloc(8*sizeof(AVPacket))))
+ return AVERROR(ENOMEM);
+
+ pthread_mutex_init(&f->fifo_lock, NULL);
+ pthread_cond_init (&f->fifo_cond, NULL);
+
+ if ((ret = pthread_create(&f->thread, NULL, input_thread, f)))
+ return AVERROR(ret);
+ }
+ return 0;
+}
+
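+/* non-blocking fetch of the next buffered packet from the demuxer thread's FIFO */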
+static int get_input_packet_mt(InputFile *f, AVPacket *pkt)
+{
+ int ret = 0;
+
+ pthread_mutex_lock(&f->fifo_lock);
+
+ if (av_fifo_size(f->fifo)) {
+ av_fifo_generic_read(f->fifo, pkt, sizeof(*pkt), NULL);
+ pthread_cond_signal(&f->fifo_cond);
+ } else {
+ if (f->finished)
+ ret = AVERROR_EOF;
+ else
+ ret = AVERROR(EAGAIN);
+ }
+
+ pthread_mutex_unlock(&f->fifo_lock);
+
+ return ret;
+}
+#endif
+
+static int get_input_packet(InputFile *f, AVPacket *pkt)
+{
+#if HAVE_PTHREADS
+ if (nb_input_files > 1)
+ return get_input_packet_mt(f, pkt);
+#endif
+ return av_read_frame(f->ctx, pkt);
+}
+
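+/* return 1 if any output stream was starved for input during the last step */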
+static int got_eagain(void)
+{
+ int i;
+ for (i = 0; i < nb_output_streams; i++)
+ if (output_streams[i]->unavailable)
+ return 1;
+ return 0;
+}
+
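+/* clear the per-file and per-stream EAGAIN flags before the next demux attempt */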
+static void reset_eagain(void)
+{
+ int i;
+ for (i = 0; i < nb_input_files; i++)
+ input_files[i]->eagain = 0;
+ for (i = 0; i < nb_output_streams; i++)
+ output_streams[i]->unavailable = 0;
+}
+
+/*
+ * Return
+ * - 0 -- one packet was read and processed
+ * - AVERROR(EAGAIN) -- no packets were available for the selected file;
+ * this function should be called again
+ * - AVERROR_EOF -- this function should not be called again
+ */
+static int process_input(int file_index)
+{
+ InputFile *ifile = input_files[file_index];
+ AVFormatContext *is;
+ InputStream *ist;
+ AVPacket pkt;
+ int ret, i, j;
+
+ is = ifile->ctx;
+ ret = get_input_packet(ifile, &pkt);
+
+ if (ret == AVERROR(EAGAIN)) {
+ ifile->eagain = 1;
+ return ret;
+ }
+ if (ret < 0) {
+ if (ret != AVERROR_EOF) {
+ print_error(is->filename, ret);
+ if (exit_on_error)
+ exit(1);
+ }
+ ifile->eof_reached = 1;
+
+ for (i = 0; i < ifile->nb_streams; i++) {
+ ist = input_streams[ifile->ist_index + i];
+ if (ist->decoding_needed)
+ output_packet(ist, NULL);
+
+ /* mark all outputs that don't go through lavfi as finished */
+ for (j = 0; j < nb_output_streams; j++) {
+ OutputStream *ost = output_streams[j];
+
+ if (ost->source_index == ifile->ist_index + i &&
+ (ost->stream_copy || ost->enc->type == AVMEDIA_TYPE_SUBTITLE))
+ close_output_stream(ost);
+ }
+ }
+
+ return AVERROR(EAGAIN);
+ }
+
+ reset_eagain();
+
+ if (do_pkt_dump) {
+ av_pkt_dump_log2(NULL, AV_LOG_DEBUG, &pkt, do_hex_dump,
+ is->streams[pkt.stream_index]);
+ }
+ /* the following test is needed in case new streams appear
+ dynamically in the stream: we ignore them */
+ if (pkt.stream_index >= ifile->nb_streams) {
+ report_new_stream(file_index, &pkt);
+ goto discard_packet;
+ }
+
+ ist = input_streams[ifile->ist_index + pkt.stream_index];
+ if (ist->discard)
+ goto discard_packet;
+
+ if (debug_ts) {
+ av_log(NULL, AV_LOG_INFO, "demuxer -> ist_index:%d type:%s "
+ "next_dts:%s next_dts_time:%s next_pts:%s next_pts_time:%s pkt_pts:%s pkt_pts_time:%s pkt_dts:%s pkt_dts_time:%s off:%s off_time:%s\n",
+ ifile->ist_index + pkt.stream_index, av_get_media_type_string(ist->st->codec->codec_type),
+ av_ts2str(ist->next_dts), av_ts2timestr(ist->next_dts, &AV_TIME_BASE_Q),
+ av_ts2str(ist->next_pts), av_ts2timestr(ist->next_pts, &AV_TIME_BASE_Q),
+ av_ts2str(pkt.pts), av_ts2timestr(pkt.pts, &ist->st->time_base),
+ av_ts2str(pkt.dts), av_ts2timestr(pkt.dts, &ist->st->time_base),
+ av_ts2str(input_files[ist->file_index]->ts_offset),
+ av_ts2timestr(input_files[ist->file_index]->ts_offset, &AV_TIME_BASE_Q));
+ }
+
+ if(!ist->wrap_correction_done && is->start_time != AV_NOPTS_VALUE && ist->st->pts_wrap_bits < 64){
+ int64_t stime, stime2;
+ // Correct the start time based on the enabled streams.
+ // FIXME: ideally this should be done before the first use of the start time, but we do not know which streams are enabled at that point,
+ // so we do it here instead, as part of discontinuity handling.
+ if ( ist->next_dts == AV_NOPTS_VALUE
+ && ifile->ts_offset == -is->start_time
+ && (is->iformat->flags & AVFMT_TS_DISCONT)) {
+ int64_t new_start_time = INT64_MAX;
+ for (i=0; i<is->nb_streams; i++) {
+ AVStream *st = is->streams[i];
+ if(st->discard == AVDISCARD_ALL || st->start_time == AV_NOPTS_VALUE)
+ continue;
+ new_start_time = FFMIN(new_start_time, av_rescale_q(st->start_time, st->time_base, AV_TIME_BASE_Q));
+ }
+ if (new_start_time > is->start_time) {
+ av_log(is, AV_LOG_VERBOSE, "Correcting start time by %"PRId64"\n", new_start_time - is->start_time);
+ ifile->ts_offset = -new_start_time;
+ }
+ }
+
+ stime = av_rescale_q(is->start_time, AV_TIME_BASE_Q, ist->st->time_base);
+ stime2= stime + (1ULL<<ist->st->pts_wrap_bits);
+ ist->wrap_correction_done = 1;
+
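+ /* timestamps more than half a wrap period past the start time are assumed
+ to predate the wraparound, so shift them back by one full period */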
+ if(stime2 > stime && pkt.dts != AV_NOPTS_VALUE && pkt.dts > stime + (1LL<<(ist->st->pts_wrap_bits-1))) {
+ pkt.dts -= 1ULL<<ist->st->pts_wrap_bits;
+ ist->wrap_correction_done = 0;
+ }
+ if(stime2 > stime && pkt.pts != AV_NOPTS_VALUE && pkt.pts > stime + (1LL<<(ist->st->pts_wrap_bits-1))) {
+ pkt.pts -= 1ULL<<ist->st->pts_wrap_bits;
+ ist->wrap_correction_done = 0;
+ }
+ }
+
+ if (pkt.dts != AV_NOPTS_VALUE)
+ pkt.dts += av_rescale_q(ifile->ts_offset, AV_TIME_BASE_Q, ist->st->time_base);
+ if (pkt.pts != AV_NOPTS_VALUE)
+ pkt.pts += av_rescale_q(ifile->ts_offset, AV_TIME_BASE_Q, ist->st->time_base);
+
+ if (pkt.pts != AV_NOPTS_VALUE)
+ pkt.pts *= ist->ts_scale;
+ if (pkt.dts != AV_NOPTS_VALUE)
+ pkt.dts *= ist->ts_scale;
+
+ if (pkt.dts != AV_NOPTS_VALUE && ist->next_dts != AV_NOPTS_VALUE &&
+ !copy_ts) {
+ int64_t pkt_dts = av_rescale_q(pkt.dts, ist->st->time_base, AV_TIME_BASE_Q);
+ int64_t delta = pkt_dts - ist->next_dts;
+ if (is->iformat->flags & AVFMT_TS_DISCONT) {
+ if(delta < -1LL*dts_delta_threshold*AV_TIME_BASE ||
+ (delta > 1LL*dts_delta_threshold*AV_TIME_BASE &&
+ ist->st->codec->codec_type != AVMEDIA_TYPE_SUBTITLE) ||
+ pkt_dts+1<ist->pts){
+ ifile->ts_offset -= delta;
+ av_log(NULL, AV_LOG_DEBUG,
+ "timestamp discontinuity %"PRId64", new offset= %"PRId64"\n",
+ delta, ifile->ts_offset);
+ pkt.dts -= av_rescale_q(delta, AV_TIME_BASE_Q, ist->st->time_base);
+ if (pkt.pts != AV_NOPTS_VALUE)
+ pkt.pts -= av_rescale_q(delta, AV_TIME_BASE_Q, ist->st->time_base);
+ }
+ } else {
+ if ( delta < -1LL*dts_error_threshold*AV_TIME_BASE ||
+ (delta > 1LL*dts_error_threshold*AV_TIME_BASE && ist->st->codec->codec_type != AVMEDIA_TYPE_SUBTITLE)
+ ) {
+ av_log(NULL, AV_LOG_WARNING, "DTS %"PRId64", next:%"PRId64" st:%d invalid dropping\n", pkt.dts, ist->next_dts, pkt.stream_index);
+ pkt.dts = AV_NOPTS_VALUE;
+ }
+ if (pkt.pts != AV_NOPTS_VALUE){
+ int64_t pkt_pts = av_rescale_q(pkt.pts, ist->st->time_base, AV_TIME_BASE_Q);
+ delta = pkt_pts - ist->next_dts;
+ if ( delta < -1LL*dts_error_threshold*AV_TIME_BASE ||
+ (delta > 1LL*dts_error_threshold*AV_TIME_BASE && ist->st->codec->codec_type != AVMEDIA_TYPE_SUBTITLE)
+ ) {
+ av_log(NULL, AV_LOG_WARNING, "PTS %"PRId64", next:%"PRId64" invalid dropping st:%d\n", pkt.pts, ist->next_dts, pkt.stream_index);
+ pkt.pts = AV_NOPTS_VALUE;
+ }
+ }
+ }
+ }
+
+ if (debug_ts) {
+ av_log(NULL, AV_LOG_INFO, "demuxer+ffmpeg -> ist_index:%d type:%s pkt_pts:%s pkt_pts_time:%s pkt_dts:%s pkt_dts_time:%s off:%s off_time:%s\n",
+ ifile->ist_index + pkt.stream_index, av_get_media_type_string(ist->st->codec->codec_type),
+ av_ts2str(pkt.pts), av_ts2timestr(pkt.pts, &ist->st->time_base),
+ av_ts2str(pkt.dts), av_ts2timestr(pkt.dts, &ist->st->time_base),
+ av_ts2str(input_files[ist->file_index]->ts_offset),
+ av_ts2timestr(input_files[ist->file_index]->ts_offset, &AV_TIME_BASE_Q));
+ }
+
+ sub2video_heartbeat(ist, pkt.pts);
+
+ ret = output_packet(ist, &pkt);
+ if (ret < 0) {
+ char buf[128];
+ av_strerror(ret, buf, sizeof(buf));
+ av_log(NULL, AV_LOG_ERROR, "Error while decoding stream #%d:%d: %s\n",
+ ist->file_index, ist->st->index, buf);
+ if (exit_on_error)
+ exit(1);
+ }
+
+discard_packet:
+ av_free_packet(&pkt);
+
+ return 0;
+}
+
+/**
+ * Perform a step of transcoding for the specified filter graph.
+ *
+ * @param[in] graph filter graph to consider
+ * @param[out] best_ist input stream from which a frame would allow transcoding to continue
+ * @return 0 for success, <0 for error
+ */
+static int transcode_from_filter(FilterGraph *graph, InputStream **best_ist)
+{
+ int i, ret;
+ int nb_requests, nb_requests_max = 0;
+ InputFilter *ifilter;
+ InputStream *ist;
+
+ *best_ist = NULL;
+ ret = avfilter_graph_request_oldest(graph->graph);
+ if (ret >= 0)
+ return reap_filters();
+
+ if (ret == AVERROR_EOF) {
+ ret = reap_filters();
+ for (i = 0; i < graph->nb_outputs; i++)
+ close_output_stream(graph->outputs[i]->ost);
+ return ret;
+ }
+ if (ret != AVERROR(EAGAIN))
+ return ret;
+
+ for (i = 0; i < graph->nb_inputs; i++) {
+ ifilter = graph->inputs[i];
+ ist = ifilter->ist;
+ if (input_files[ist->file_index]->eagain ||
+ input_files[ist->file_index]->eof_reached)
+ continue;
+ nb_requests = av_buffersrc_get_nb_failed_requests(ifilter->filter);
+ if (nb_requests > nb_requests_max) {
+ nb_requests_max = nb_requests;
+ *best_ist = ist;
+ }
+ }
+
+ if (!*best_ist)
+ for (i = 0; i < graph->nb_outputs; i++)
+ graph->outputs[i]->ost->unavailable = 1;
+
+ return 0;
+}
+
+/**
+ * Run a single step of transcoding.
+ *
+ * @return 0 for success, <0 for error
+ */
+static int transcode_step(void)
+{
+ OutputStream *ost;
+ InputStream *ist;
+ int ret;
+
+ ost = choose_output();
+ if (!ost) {
+ if (got_eagain()) {
+ reset_eagain();
+ av_usleep(10000);
+ return 0;
+ }
+ av_log(NULL, AV_LOG_VERBOSE, "No more inputs to read from, finishing.\n");
+ return AVERROR_EOF;
+ }
+
+ if (ost->filter) {
+ if ((ret = transcode_from_filter(ost->filter->graph, &ist)) < 0)
+ return ret;
+ if (!ist)
+ return 0;
+ } else {
+ av_assert0(ost->source_index >= 0);
+ ist = input_streams[ost->source_index];
+ }
+
+ ret = process_input(ist->file_index);
+ if (ret == AVERROR(EAGAIN)) {
+ if (input_files[ist->file_index]->eagain)
+ ost->unavailable = 1;
+ return 0;
+ }
+ if (ret < 0)
+ return ret == AVERROR_EOF ? 0 : ret;
+
+ return reap_filters();
+}
+
+/*
+ * The following code is the main loop of the file converter
+ */
+static int transcode(void)
+{
+ int ret, i;
+ AVFormatContext *os;
+ OutputStream *ost;
+ InputStream *ist;
+ int64_t timer_start;
+
+ ret = transcode_init();
+ if (ret < 0)
+ goto fail;
+
+ if (stdin_interaction) {
+ av_log(NULL, AV_LOG_INFO, "Press [q] to stop, [?] for help\n");
+ }
+
+ timer_start = av_gettime();
+
+#if HAVE_PTHREADS
+ if ((ret = init_input_threads()) < 0)
+ goto fail;
+#endif
+
+ while (!received_sigterm) {
+ int64_t cur_time= av_gettime();
+
+ /* if 'q' was pressed, exit */
+ if (stdin_interaction)
+ if (check_keyboard_interaction(cur_time) < 0)
+ break;
+
+ /* check if there's any stream where output is still needed */
+ if (!need_output()) {
+ av_log(NULL, AV_LOG_VERBOSE, "No more output streams to write to, finishing.\n");
+ break;
+ }
+
+ ret = transcode_step();
+ if (ret < 0) {
+ if (ret == AVERROR_EOF || ret == AVERROR(EAGAIN))
+ continue;
+
+ av_log(NULL, AV_LOG_ERROR, "Error while filtering.\n");
+ break;
+ }
+
+ /* dump report by using the first video and audio output streams */
+ print_report(0, timer_start, cur_time);
+ }
+#if HAVE_PTHREADS
+ free_input_threads();
+#endif
+
+ /* at the end of stream, we must flush the decoder buffers */
+ for (i = 0; i < nb_input_streams; i++) {
+ ist = input_streams[i];
+ if (!input_files[ist->file_index]->eof_reached && ist->decoding_needed) {
+ output_packet(ist, NULL);
+ }
+ }
+ flush_encoders();
+
+ term_exit();
+
+ /* write the trailer if needed and close file */
+ for (i = 0; i < nb_output_files; i++) {
+ os = output_files[i]->ctx;
+ av_write_trailer(os);
+ }
+
+ /* dump report by using the first video and audio streams */
+ print_report(1, timer_start, av_gettime());
+
+ /* close each encoder */
+ for (i = 0; i < nb_output_streams; i++) {
+ ost = output_streams[i];
+ if (ost->encoding_needed) {
+ av_freep(&ost->st->codec->stats_in);
+ avcodec_close(ost->st->codec);
+ }
+ }
+
+ /* close each decoder */
+ for (i = 0; i < nb_input_streams; i++) {
+ ist = input_streams[i];
+ if (ist->decoding_needed) {
+ avcodec_close(ist->st->codec);
+ }
+ }
+
+ /* finished! */
+ ret = 0;
+
+ fail:
+#if HAVE_PTHREADS
+ free_input_threads();
+#endif
+
+ if (output_streams) {
+ for (i = 0; i < nb_output_streams; i++) {
+ ost = output_streams[i];
+ if (ost) {
+ if (ost->stream_copy)
+ av_freep(&ost->st->codec->extradata);
+ if (ost->logfile) {
+ fclose(ost->logfile);
+ ost->logfile = NULL;
+ }
+ av_freep(&ost->st->codec->subtitle_header);
+ av_free(ost->forced_kf_pts);
+ av_dict_free(&ost->opts);
+ av_dict_free(&ost->swr_opts);
+ av_dict_free(&ost->resample_opts);
+ }
+ }
+ }
+ return ret;
+}
+
+
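+/* return the user CPU time consumed by this process, in microseconds */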
+static int64_t getutime(void)
+{
+#if HAVE_GETRUSAGE
+ struct rusage rusage;
+
+ getrusage(RUSAGE_SELF, &rusage);
+ return (rusage.ru_utime.tv_sec * 1000000LL) + rusage.ru_utime.tv_usec;
+#elif HAVE_GETPROCESSTIMES
+ HANDLE proc;
+ FILETIME c, e, k, u;
+ proc = GetCurrentProcess();
+ GetProcessTimes(proc, &c, &e, &k, &u);
+ return ((int64_t) u.dwHighDateTime << 32 | u.dwLowDateTime) / 10;
+#else
+ return av_gettime();
+#endif
+}
+
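+/* return the peak memory usage of this process, in bytes (0 if unavailable) */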
+static int64_t getmaxrss(void)
+{
+#if HAVE_GETRUSAGE && HAVE_STRUCT_RUSAGE_RU_MAXRSS
+ struct rusage rusage;
+ getrusage(RUSAGE_SELF, &rusage);
+ return (int64_t)rusage.ru_maxrss * 1024;
+#elif HAVE_GETPROCESSMEMORYINFO
+ HANDLE proc;
+ PROCESS_MEMORY_COUNTERS memcounters;
+ proc = GetCurrentProcess();
+ memcounters.cb = sizeof(memcounters);
+ GetProcessMemoryInfo(proc, &memcounters, sizeof(memcounters));
+ return memcounters.PeakPagefileUsage;
+#else
+ return 0;
+#endif
+}
+
+static void log_callback_null(void *ptr, int level, const char *fmt, va_list vl)
+{
+}
+
+int main(int argc, char **argv)
+{
+ int ret;
+ int64_t ti;
+
+ atexit(exit_program);
+
+ setvbuf(stderr,NULL,_IONBF,0); /* win32 runtime needs this */
+
+ av_log_set_flags(AV_LOG_SKIP_REPEATED);
+ parse_loglevel(argc, argv, options);
+
+ if(argc>1 && !strcmp(argv[1], "-d")){
+ run_as_daemon=1;
+ av_log_set_callback(log_callback_null);
+ argc--;
+ argv++;
+ }
+
+ avcodec_register_all();
+#if CONFIG_AVDEVICE
+ avdevice_register_all();
+#endif
+ avfilter_register_all();
+ av_register_all();
+ avformat_network_init();
+
+ show_banner(argc, argv, options);
+
+ term_init();
+
+ /* parse options and open all input/output files */
+ ret = ffmpeg_parse_options(argc, argv);
+ if (ret < 0)
+ exit(1);
+
+ if (nb_output_files <= 0 && nb_input_files == 0) {
+ show_usage();
+ av_log(NULL, AV_LOG_WARNING, "Use -h to get full help or, even better, run 'man %s'\n", program_name);
+ exit(1);
+ }
+
+ /* file converter / grab */
+ if (nb_output_files <= 0) {
+ av_log(NULL, AV_LOG_FATAL, "At least one output file must be specified\n");
+ exit(1);
+ }
+
+// if (nb_input_files == 0) {
+// av_log(NULL, AV_LOG_FATAL, "At least one input file must be specified\n");
+// exit(1);
+// }
+
+ current_time = ti = getutime();
+ if (transcode() < 0)
+ exit(1);
+ ti = getutime() - ti;
+ if (do_benchmark) {
+ printf("bench: utime=%0.3fs\n", ti / 1000000.0);
+ }
+
+ exit(received_nb_signals ? 255 : 0);
+ return 0;
+}
avfilter.o \
avfiltergraph.o \
buffer.o \
-- buffersink.o \
buffersrc.o \
drawutils.o \
fifo.o \
formats.o \
+ graphdump.o \
graphparser.o \
- src_buffer.o \
+ sink_buffer.o \
+ transform.o \
video.o \
+
+OBJS-$(CONFIG_AVCODEC) += avcodec.o
+OBJS-$(CONFIG_AVFORMAT) += lavfutils.o
+OBJS-$(CONFIG_SWSCALE) += lswsutils.o
+
+OBJS-$(CONFIG_ACONVERT_FILTER) += af_aconvert.o
+OBJS-$(CONFIG_AFADE_FILTER) += af_afade.o
OBJS-$(CONFIG_AFORMAT_FILTER) += af_aformat.o
+OBJS-$(CONFIG_ALLPASS_FILTER) += af_biquads.o
+OBJS-$(CONFIG_AMERGE_FILTER) += af_amerge.o
OBJS-$(CONFIG_AMIX_FILTER) += af_amix.o
OBJS-$(CONFIG_ANULL_FILTER) += af_anull.o
+OBJS-$(CONFIG_APAD_FILTER) += af_apad.o
+OBJS-$(CONFIG_ARESAMPLE_FILTER) += af_aresample.o
+OBJS-$(CONFIG_ASELECT_FILTER) += f_select.o
+OBJS-$(CONFIG_ASENDCMD_FILTER) += f_sendcmd.o
+OBJS-$(CONFIG_ASETNSAMPLES_FILTER) += af_asetnsamples.o
+OBJS-$(CONFIG_ASETPTS_FILTER) += f_setpts.o
+OBJS-$(CONFIG_ASETTB_FILTER) += f_settb.o
OBJS-$(CONFIG_ASHOWINFO_FILTER) += af_ashowinfo.o
OBJS-$(CONFIG_ASPLIT_FILTER) += split.o
+OBJS-$(CONFIG_ASTREAMSYNC_FILTER) += af_astreamsync.o
OBJS-$(CONFIG_ASYNCTS_FILTER) += af_asyncts.o
+OBJS-$(CONFIG_ATEMPO_FILTER) += af_atempo.o
+OBJS-$(CONFIG_BANDPASS_FILTER) += af_biquads.o
+OBJS-$(CONFIG_BANDREJECT_FILTER) += af_biquads.o
+OBJS-$(CONFIG_BASS_FILTER) += af_biquads.o
+OBJS-$(CONFIG_BIQUAD_FILTER) += af_biquads.o
OBJS-$(CONFIG_CHANNELMAP_FILTER) += af_channelmap.o
OBJS-$(CONFIG_CHANNELSPLIT_FILTER) += af_channelsplit.o
+OBJS-$(CONFIG_EARWAX_FILTER) += af_earwax.o
+OBJS-$(CONFIG_EBUR128_FILTER) += f_ebur128.o
+OBJS-$(CONFIG_EQUALIZER_FILTER) += af_biquads.o
+OBJS-$(CONFIG_HIGHPASS_FILTER) += af_biquads.o
OBJS-$(CONFIG_JOIN_FILTER) += af_join.o
+OBJS-$(CONFIG_LOWPASS_FILTER) += af_biquads.o
+OBJS-$(CONFIG_PAN_FILTER) += af_pan.o
OBJS-$(CONFIG_RESAMPLE_FILTER) += af_resample.o
+OBJS-$(CONFIG_SILENCEDETECT_FILTER) += af_silencedetect.o
+OBJS-$(CONFIG_TREBLE_FILTER) += af_biquads.o
OBJS-$(CONFIG_VOLUME_FILTER) += af_volume.o
+OBJS-$(CONFIG_VOLUMEDETECT_FILTER) += af_volumedetect.o
+OBJS-$(CONFIG_AEVALSRC_FILTER) += asrc_aevalsrc.o
OBJS-$(CONFIG_ANULLSRC_FILTER) += asrc_anullsrc.o
+OBJS-$(CONFIG_FLITE_FILTER) += asrc_flite.o
OBJS-$(CONFIG_ANULLSINK_FILTER) += asink_anullsink.o
--- /dev/null
- static int filter_frame(AVFilterLink *inlink, AVFilterBufferRef *insamplesref)
+/*
+ * Copyright (c) 2010 S.N. Hemanth Meenakshisundaram <smeenaks@ucsd.edu>
+ * Copyright (c) 2011 Stefano Sabatini
+ * Copyright (c) 2011 Mina Nagy Zaki
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+/**
+ * @file
+ * sample format and channel layout conversion audio filter
+ */
+
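+/* a minimal usage sketch (assumed filtergraph string, not part of this patch):
+ * aconvert=s16:stereo
+ * converts the input to signed 16-bit samples with a stereo channel layout;
+ * either field may be "auto" to keep the negotiated value */
+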
+#include "libavutil/avstring.h"
+#include "libavutil/channel_layout.h"
+#include "libswresample/swresample.h"
+#include "avfilter.h"
+#include "audio.h"
+#include "internal.h"
+
+typedef struct {
+ enum AVSampleFormat out_sample_fmt;
+ int64_t out_chlayout;
+ struct SwrContext *swr;
+} AConvertContext;
+
+static av_cold int init(AVFilterContext *ctx, const char *args0)
+{
+ AConvertContext *aconvert = ctx->priv;
+ char *arg, *ptr = NULL;
+ int ret = 0;
+ char *args = av_strdup(args0);
+
+ aconvert->out_sample_fmt = AV_SAMPLE_FMT_NONE;
+ aconvert->out_chlayout = 0;
+
+ if ((arg = av_strtok(args, ":", &ptr)) && strcmp(arg, "auto")) {
+ if ((ret = ff_parse_sample_format(&aconvert->out_sample_fmt, arg, ctx)) < 0)
+ goto end;
+ }
+ if ((arg = av_strtok(NULL, ":", &ptr)) && strcmp(arg, "auto")) {
+ if ((ret = ff_parse_channel_layout(&aconvert->out_chlayout, arg, ctx)) < 0)
+ goto end;
+ }
+
+end:
+ av_freep(&args);
+ return ret;
+}
+
+static av_cold void uninit(AVFilterContext *ctx)
+{
+ AConvertContext *aconvert = ctx->priv;
+ swr_free(&aconvert->swr);
+}
+
+static int query_formats(AVFilterContext *ctx)
+{
+ AVFilterFormats *formats = NULL;
+ AConvertContext *aconvert = ctx->priv;
+ AVFilterLink *inlink = ctx->inputs[0];
+ AVFilterLink *outlink = ctx->outputs[0];
+ AVFilterChannelLayouts *layouts;
+
+ ff_formats_ref(ff_all_formats(AVMEDIA_TYPE_AUDIO),
+ &inlink->out_formats);
+ if (aconvert->out_sample_fmt != AV_SAMPLE_FMT_NONE) {
+ formats = NULL;
+ ff_add_format(&formats, aconvert->out_sample_fmt);
+ ff_formats_ref(formats, &outlink->in_formats);
+ } else
+ ff_formats_ref(ff_all_formats(AVMEDIA_TYPE_AUDIO),
+ &outlink->in_formats);
+
+ ff_channel_layouts_ref(ff_all_channel_layouts(),
+ &inlink->out_channel_layouts);
+ if (aconvert->out_chlayout != 0) {
+ layouts = NULL;
+ ff_add_channel_layout(&layouts, aconvert->out_chlayout);
+ ff_channel_layouts_ref(layouts, &outlink->in_channel_layouts);
+ } else
+ ff_channel_layouts_ref(ff_all_channel_layouts(),
+ &outlink->in_channel_layouts);
+
+ return 0;
+}
+
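+/* set up libswresample to convert from the negotiated input format and
+ layout to the requested (or negotiated) output format and layout */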
+static int config_output(AVFilterLink *outlink)
+{
+ int ret;
+ AVFilterContext *ctx = outlink->src;
+ AVFilterLink *inlink = ctx->inputs[0];
+ AConvertContext *aconvert = ctx->priv;
+ char buf1[64], buf2[64];
+
+ /* if not specified in args, use the format and layout of the output */
+ if (aconvert->out_sample_fmt == AV_SAMPLE_FMT_NONE)
+ aconvert->out_sample_fmt = outlink->format;
+ if (aconvert->out_chlayout == 0)
+ aconvert->out_chlayout = outlink->channel_layout;
+
+ aconvert->swr = swr_alloc_set_opts(aconvert->swr,
+ aconvert->out_chlayout, aconvert->out_sample_fmt, inlink->sample_rate,
+ inlink->channel_layout, inlink->format, inlink->sample_rate,
+ 0, ctx);
+ if (!aconvert->swr)
+ return AVERROR(ENOMEM);
+ ret = swr_init(aconvert->swr);
+ if (ret < 0)
+ return ret;
+
+ av_get_channel_layout_string(buf1, sizeof(buf1),
+ -1, inlink ->channel_layout);
+ av_get_channel_layout_string(buf2, sizeof(buf2),
+ -1, outlink->channel_layout);
+ av_log(ctx, AV_LOG_VERBOSE,
+ "fmt:%s cl:%s -> fmt:%s cl:%s\n",
+ av_get_sample_fmt_name(inlink ->format), buf1,
+ av_get_sample_fmt_name(outlink->format), buf2);
+
+ return 0;
+}
+
- const int n = insamplesref->audio->nb_samples;
++static int filter_frame(AVFilterLink *inlink, AVFrame *insamplesref)
+{
+ AConvertContext *aconvert = inlink->dst->priv;
- AVFilterBufferRef *outsamplesref = ff_get_audio_buffer(outlink, AV_PERM_WRITE, n);
++ const int n = insamplesref->nb_samples;
+ AVFilterLink *const outlink = inlink->dst->outputs[0];
- swr_convert(aconvert->swr, outsamplesref->data, n,
- (void *)insamplesref->data, n);
++ AVFrame *outsamplesref = ff_get_audio_buffer(outlink, n);
+ int ret;
+
- avfilter_copy_buffer_ref_props(outsamplesref, insamplesref);
- outsamplesref->audio->channels = outlink->channels;
- outsamplesref->audio->channel_layout = outlink->channel_layout;
++ swr_convert(aconvert->swr, outsamplesref->extended_data, n,
++ (void *)insamplesref->extended_data, n);
+
- avfilter_unref_buffer(insamplesref);
++ av_frame_copy_props(outsamplesref, insamplesref);
++ outsamplesref->channels = outlink->channels;
++ outsamplesref->channel_layout = outlink->channel_layout;
+
+ ret = ff_filter_frame(outlink, outsamplesref);
- .min_perms = AV_PERM_READ,
++ av_frame_free(&insamplesref);
+ return ret;
+}
+
+static const AVFilterPad aconvert_inputs[] = {
+ {
+ .name = "default",
+ .type = AVMEDIA_TYPE_AUDIO,
+ .filter_frame = filter_frame,
+ },
+ { NULL }
+};
+
+static const AVFilterPad aconvert_outputs[] = {
+ {
+ .name = "default",
+ .type = AVMEDIA_TYPE_AUDIO,
+ .config_props = config_output,
+ },
+ { NULL }
+};
+
+AVFilter avfilter_af_aconvert = {
+ .name = "aconvert",
+ .description = NULL_IF_CONFIG_SMALL("Convert the input audio to sample_fmt:channel_layout."),
+ .priv_size = sizeof(AConvertContext),
+ .init = init,
+ .uninit = uninit,
+ .query_formats = query_formats,
+ .inputs = aconvert_inputs,
+ .outputs = aconvert_outputs,
+};
--- /dev/null
- static int filter_frame(AVFilterLink *inlink, AVFilterBufferRef *buf)
+/*
+ * Copyright (c) 2013 Paul B Mahol
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+/**
+ * @file
+ * fade audio filter
+ */
+
+#include "libavutil/opt.h"
+#include "audio.h"
+#include "avfilter.h"
+#include "internal.h"
+
+typedef struct {
+ const AVClass *class;
+ int type;
+ int curve;
+ int nb_samples;
+ int64_t start_sample;
+ double duration;
+ double start_time;
+
+ void (*fade_samples)(uint8_t **dst, uint8_t * const *src,
+ int nb_samples, int channels, int direction,
+ int64_t start, int range, int curve);
+} AudioFadeContext;
+
+enum CurveType { TRI, QSIN, ESIN, HSIN, LOG, PAR, QUA, CUB, SQU, CBR };
+
+#define OFFSET(x) offsetof(AudioFadeContext, x)
+#define FLAGS AV_OPT_FLAG_AUDIO_PARAM|AV_OPT_FLAG_FILTERING_PARAM
+
+static const AVOption afade_options[] = {
+ { "type", "set the fade direction", OFFSET(type), AV_OPT_TYPE_INT, {.i64 = 0 }, 0, 1, FLAGS, "type" },
+ { "t", "set the fade direction", OFFSET(type), AV_OPT_TYPE_INT, {.i64 = 0 }, 0, 1, FLAGS, "type" },
+ { "in", NULL, 0, AV_OPT_TYPE_CONST, {.i64 = 0 }, 0, 0, FLAGS, "type" },
+ { "out", NULL, 0, AV_OPT_TYPE_CONST, {.i64 = 1 }, 0, 0, FLAGS, "type" },
+ { "start_sample", "set expression of sample to start fading", OFFSET(start_sample), AV_OPT_TYPE_INT64, {.i64 = 0 }, 0, INT64_MAX, FLAGS },
+ { "ss", "set expression of sample to start fading", OFFSET(start_sample), AV_OPT_TYPE_INT64, {.i64 = 0 }, 0, INT64_MAX, FLAGS },
+ { "nb_samples", "set expression for fade duration in samples", OFFSET(nb_samples), AV_OPT_TYPE_INT, {.i64 = 44100}, 1, INT32_MAX, FLAGS },
+ { "ns", "set expression for fade duration in samples", OFFSET(nb_samples), AV_OPT_TYPE_INT, {.i64 = 44100}, 1, INT32_MAX, FLAGS },
+ { "start_time", "set expression of second to start fading", OFFSET(start_time), AV_OPT_TYPE_DOUBLE, {.dbl = 0. }, 0, 7*24*60*60,FLAGS },
+ { "st", "set expression of second to start fading", OFFSET(start_time), AV_OPT_TYPE_DOUBLE, {.dbl = 0. }, 0, 7*24*60*60,FLAGS },
+ { "duration", "set expression for fade duration in seconds", OFFSET(duration), AV_OPT_TYPE_DOUBLE, {.dbl = 0. }, 0, 24*60*60, FLAGS },
+ { "d", "set expression for fade duration in seconds", OFFSET(duration), AV_OPT_TYPE_DOUBLE, {.dbl = 0. }, 0, 24*60*60, FLAGS },
+ { "curve", "set expression for fade curve", OFFSET(curve), AV_OPT_TYPE_INT, {.i64 = TRI }, TRI, CBR, FLAGS, "curve" },
+ { "c", "set expression for fade curve", OFFSET(curve), AV_OPT_TYPE_INT, {.i64 = TRI }, TRI, CBR, FLAGS, "curve" },
+ { "tri", "linear slope", 0, AV_OPT_TYPE_CONST, {.i64 = TRI }, 0, 0, FLAGS, "curve" },
+ { "qsin", "quarter of sine wave", 0, AV_OPT_TYPE_CONST, {.i64 = QSIN }, 0, 0, FLAGS, "curve" },
+ { "esin", "exponential sine wave", 0, AV_OPT_TYPE_CONST, {.i64 = ESIN }, 0, 0, FLAGS, "curve" },
+ { "hsin", "half of sine wave", 0, AV_OPT_TYPE_CONST, {.i64 = HSIN }, 0, 0, FLAGS, "curve" },
+ { "log", "logarithmic", 0, AV_OPT_TYPE_CONST, {.i64 = LOG }, 0, 0, FLAGS, "curve" },
+ { "par", "inverted parabola", 0, AV_OPT_TYPE_CONST, {.i64 = PAR }, 0, 0, FLAGS, "curve" },
+ { "qua", "quadratic", 0, AV_OPT_TYPE_CONST, {.i64 = QUA }, 0, 0, FLAGS, "curve" },
+ { "cub", "cubic", 0, AV_OPT_TYPE_CONST, {.i64 = CUB }, 0, 0, FLAGS, "curve" },
+ { "squ", "square root", 0, AV_OPT_TYPE_CONST, {.i64 = SQU }, 0, 0, FLAGS, "curve" },
+ { "cbr", "cubic root", 0, AV_OPT_TYPE_CONST, {.i64 = CBR }, 0, 0, FLAGS, "curve" },
+ {NULL},
+};
+
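+/* a minimal usage sketch (assumed filtergraph string, not part of this patch):
+ * afade=type=in:start_time=0:duration=3:curve=hsin
+ * fades the audio in over the first 3 seconds with a half-sine curve */
+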
+AVFILTER_DEFINE_CLASS(afade);
+
+static av_cold int init(AVFilterContext *ctx, const char *args)
+{
+ AudioFadeContext *afade = ctx->priv;
+ int ret;
+
+ afade->class = &afade_class;
+ av_opt_set_defaults(afade);
+
+ if ((ret = av_set_options_string(afade, args, "=", ":")) < 0)
+ return ret;
+
+ if (INT64_MAX - afade->nb_samples < afade->start_sample)
+ return AVERROR(EINVAL);
+
+ return 0;
+}
+
+static int query_formats(AVFilterContext *ctx)
+{
+ AVFilterFormats *formats;
+ AVFilterChannelLayouts *layouts;
+ static const enum AVSampleFormat sample_fmts[] = {
+ AV_SAMPLE_FMT_S16, AV_SAMPLE_FMT_S16P,
+ AV_SAMPLE_FMT_S32, AV_SAMPLE_FMT_S32P,
+ AV_SAMPLE_FMT_FLT, AV_SAMPLE_FMT_FLTP,
+ AV_SAMPLE_FMT_DBL, AV_SAMPLE_FMT_DBLP,
+ AV_SAMPLE_FMT_NONE
+ };
+
+ layouts = ff_all_channel_layouts();
+ if (!layouts)
+ return AVERROR(ENOMEM);
+ ff_set_common_channel_layouts(ctx, layouts);
+
+ formats = ff_make_format_list(sample_fmts);
+ if (!formats)
+ return AVERROR(ENOMEM);
+ ff_set_common_formats(ctx, formats);
+
+ formats = ff_all_samplerates();
+ if (!formats)
+ return AVERROR(ENOMEM);
+ ff_set_common_samplerates(ctx, formats);
+
+ return 0;
+}
+
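+/* map a sample index in [0, range] to a gain in [0, 1] along the selected curve */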
+static double fade_gain(int curve, int64_t index, int range)
+{
+ double gain;
+
+ gain = FFMAX(0.0, FFMIN(1.0, 1.0 * index / range));
+
+ switch (curve) {
+ case QSIN:
+ gain = sin(gain * M_PI / 2.0);
+ break;
+ case ESIN:
+ gain = 1.0 - cos(M_PI / 4.0 * (pow(2.0*gain - 1, 3) + 1));
+ break;
+ case HSIN:
+ gain = (1.0 - cos(gain * M_PI)) / 2.0;
+ break;
+ case LOG:
+ gain = pow(0.1, (1 - gain) * 5.0);
+ break;
+ case PAR:
+ gain = (1 - (1 - gain) * (1 - gain));
+ break;
+ case QUA:
+ gain *= gain;
+ break;
+ case CUB:
+ gain = gain * gain * gain;
+ break;
+ case SQU:
+ gain = sqrt(gain);
+ break;
+ case CBR:
+ gain = cbrt(gain);
+ break;
+ }
+
+ return gain;
+}
+
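+/* generate fade routines for planar sample formats (one data plane per channel) */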
+#define FADE_PLANAR(name, type) \
+static void fade_samples_## name ##p(uint8_t **dst, uint8_t * const *src, \
+ int nb_samples, int channels, int dir, \
+ int64_t start, int range, int curve) \
+{ \
+ int i, c; \
+ \
+ for (i = 0; i < nb_samples; i++) { \
+ double gain = fade_gain(curve, start + i * dir, range); \
+ for (c = 0; c < channels; c++) { \
+ type *d = (type *)dst[c]; \
+ const type *s = (type *)src[c]; \
+ \
+ d[i] = s[i] * gain; \
+ } \
+ } \
+}
+
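+/* generate fade routines for packed (interleaved) sample formats */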
+#define FADE(name, type) \
+static void fade_samples_## name (uint8_t **dst, uint8_t * const *src, \
+ int nb_samples, int channels, int dir, \
+ int64_t start, int range, int curve) \
+{ \
+ type *d = (type *)dst[0]; \
+ const type *s = (type *)src[0]; \
+ int i, c, k = 0; \
+ \
+ for (i = 0; i < nb_samples; i++) { \
+ double gain = fade_gain(curve, start + i * dir, range); \
+ for (c = 0; c < channels; c++, k++) \
+ d[k] = s[k] * gain; \
+ } \
+}
+
+FADE_PLANAR(dbl, double)
+FADE_PLANAR(flt, float)
+FADE_PLANAR(s16, int16_t)
+FADE_PLANAR(s32, int32_t)
+
+FADE(dbl, double)
+FADE(flt, float)
+FADE(s16, int16_t)
+FADE(s32, int32_t)
+
+static int config_output(AVFilterLink *outlink)
+{
+ AVFilterContext *ctx = outlink->src;
+ AudioFadeContext *afade = ctx->priv;
+ AVFilterLink *inlink = ctx->inputs[0];
+
+ switch (inlink->format) {
+ case AV_SAMPLE_FMT_DBL: afade->fade_samples = fade_samples_dbl; break;
+ case AV_SAMPLE_FMT_DBLP: afade->fade_samples = fade_samples_dblp; break;
+ case AV_SAMPLE_FMT_FLT: afade->fade_samples = fade_samples_flt; break;
+ case AV_SAMPLE_FMT_FLTP: afade->fade_samples = fade_samples_fltp; break;
+ case AV_SAMPLE_FMT_S16: afade->fade_samples = fade_samples_s16; break;
+ case AV_SAMPLE_FMT_S16P: afade->fade_samples = fade_samples_s16p; break;
+ case AV_SAMPLE_FMT_S32: afade->fade_samples = fade_samples_s32; break;
+ case AV_SAMPLE_FMT_S32P: afade->fade_samples = fade_samples_s32p; break;
+ }
+
+ if (afade->duration)
+ afade->nb_samples = afade->duration * inlink->sample_rate;
+ if (afade->start_time)
+ afade->start_sample = afade->start_time * inlink->sample_rate;
+
+ return 0;
+}
+
- int nb_samples = buf->audio->nb_samples;
- AVFilterBufferRef *out_buf;
++static int filter_frame(AVFilterLink *inlink, AVFrame *buf)
+{
+ AudioFadeContext *afade = inlink->dst->priv;
+ AVFilterLink *outlink = inlink->dst->outputs[0];
- if (buf->perms & AV_PERM_WRITE) {
++ int nb_samples = buf->nb_samples;