Protocols are configured elements in Libav that allow access to
resources that require the use of a particular protocol.
7 When you configure your Libav build, all the supported protocols are
8 enabled by default. You can list all available ones using the
9 configure option "--list-protocols".
11 You can disable all the protocols using the configure option
12 "--disable-protocols", and selectively enable a protocol using the
13 option "--enable-protocol=@var{PROTOCOL}", or you can disable a
14 particular protocol using the option
15 "--disable-protocol=@var{PROTOCOL}".
17 The option "-protocols" of the av* tools will display the list of
20 A description of the currently available protocols follows.
24 Physical concatenation protocol.
Allows reading and seeking from many resources in sequence as if they were
29 A URL accepted by this protocol has the syntax:
31 concat:@var{URL1}|@var{URL2}|...|@var{URLN}
where @var{URL1}, @var{URL2}, ..., @var{URLN} are the URLs of the
resources to be concatenated, each one possibly specifying a distinct
38 For example to read a sequence of files @file{split1.mpeg},
39 @file{split2.mpeg}, @file{split3.mpeg} with @command{avplay} use the
42 avplay concat:split1.mpeg\|split2.mpeg\|split3.mpeg
45 Note that you may need to escape the character "|" which is special for
Allows reading from or writing to a file.
54 For example to read from a file @file{input.mpeg} with @command{avconv}
57 avconv -i file:input.mpeg output.mpeg
The av* tools default to the file protocol; that is, a resource
specified with the name "FILE.mpeg" is interpreted as the URL
Read an Apple HTTP Live Streaming compliant segmented stream as
71 a uniform one. The M3U8 playlists describing the segments can be
72 remote HTTP resources or local files, accessed using the standard
74 The nested protocol is declared by specifying
75 "+@var{proto}" after the hls URI scheme name, where @var{proto}
76 is either "file" or "http".
79 hls+http://host/path/to/remote/resource.m3u8
80 hls+file://path/to/local/resource.m3u8
Using this protocol is discouraged: the hls demuxer should work
84 just as well (if not, please report the issues) and is more complete.
85 To use the hls demuxer instead, simply use the direct URLs to the
90 HTTP (Hyper Text Transfer Protocol).
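For example, to play a remote resource over HTTP with @command{avplay}
(the URL below is a placeholder):
@example
avplay http://server/path/to/resource.mpeg
@end example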
94 MMS (Microsoft Media Server) protocol over TCP.
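For example, to play an MMS stream over TCP with @command{avplay} (server
and path are placeholders):
@example
avplay mmst://server/path/to/stream
@end example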
98 MMS (Microsoft Media Server) protocol over HTTP.
100 The required syntax is:
102 mmsh://@var{server}[:@var{port}][/@var{app}][/@var{playpath}]
Computes the MD5 hash of the data to be written, and on close writes
it to the designated output, or to stdout if none is specified. It can
111 be used to test muxers without writing an actual file.
113 Some examples follow.
115 # Write the MD5 hash of the encoded AVI file to the file output.avi.md5.
116 avconv -i input.flv -f avi -y md5:output.avi.md5
118 # Write the MD5 hash of the encoded AVI file to stdout.
119 avconv -i input.flv -f avi -y md5:
122 Note that some formats (typically MOV) require the output protocol to
123 be seekable, so they will fail with the MD5 output protocol.
127 UNIX pipe access protocol.
Allows reading and writing from UNIX pipes.
131 The accepted syntax is:
136 @var{number} is the number corresponding to the file descriptor of the
137 pipe (e.g. 0 for stdin, 1 for stdout, 2 for stderr). If @var{number}
is not specified, the stdout file descriptor will be used by default
for writing, and stdin for reading.
141 For example to read from stdin with @command{avconv}:
143 cat test.wav | avconv -i pipe:0
144 # ...this is the same as...
145 cat test.wav | avconv -i pipe:
148 For writing to stdout with @command{avconv}:
150 avconv -i test.wav -f avi pipe:1 | cat > test.avi
151 # ...this is the same as...
152 avconv -i test.wav -f avi pipe: | cat > test.avi
Note that some formats (typically MOV) require the output protocol to
156 be seekable, so they will fail with the pipe output protocol.
160 Real-Time Messaging Protocol.
162 The Real-Time Messaging Protocol (RTMP) is used for streaming multimedia
163 content across a TCP/IP network.
165 The required syntax is:
167 rtmp://[@var{username}:@var{password}@@]@var{server}[:@var{port}][/@var{app}][/@var{instance}][/@var{playpath}]
170 The accepted parameters are:
174 An optional username (mostly for publishing).
177 An optional password (mostly for publishing).
180 The address of the RTMP server.
The number of the TCP port to use (by default 1935).
The name of the application to access. It usually corresponds to
187 the path where the application is installed on the RTMP server
188 (e.g. @file{/ondemand/}, @file{/flash/live/}, etc.). You can override
189 the value parsed from the URI through the @code{rtmp_app} option, too.
The path or name of the resource to play with reference to the
application specified in @var{app}; it may be prefixed by "mp4:". You
194 can override the value parsed from the URI through the @code{rtmp_playpath}
198 Act as a server, listening for an incoming connection.
201 Maximum time to wait for the incoming connection. Implies listen.
204 Additionally, the following parameters can be set via command line options
205 (or in code via @code{AVOption}s):
209 Name of application to connect on the RTMP server. This option
210 overrides the parameter specified in the URI.
213 Set the client buffer time in milliseconds. The default is 3000.
216 Extra arbitrary AMF connection parameters, parsed from a string,
217 e.g. like @code{B:1 S:authMe O:1 NN:code:1.23 NS:flag:ok O:0}.
218 Each value is prefixed by a single character denoting the type,
219 B for Boolean, N for number, S for string, O for object, or Z for null,
220 followed by a colon. For Booleans the data must be either 0 or 1 for
221 FALSE or TRUE, respectively. Likewise for Objects the data must be 0 or
222 1 to end or begin an object, respectively. Data items in subobjects may
223 be named, by prefixing the type with 'N' and specifying the name before
224 the value (i.e. @code{NB:myFlag:1}). This option may be used multiple
225 times to construct arbitrary AMF sequences.
228 Version of the Flash plugin used to run the SWF player. The default
231 @item rtmp_flush_interval
232 Number of packets flushed in the same request (RTMPT only). The default
236 Specify that the media is a live stream. No resuming or seeking in
237 live streams is possible. The default value is @code{any}, which means the
238 subscriber first tries to play the live stream specified in the
239 playpath. If a live stream of that name is not found, it plays the
240 recorded stream. The other possible values are @code{live} and
244 URL of the web page in which the media was embedded. By default no
248 Stream identifier to play or to publish. This option overrides the
249 parameter specified in the URI.
252 Name of live stream to subscribe to. By default no value will be sent.
253 It is only sent if the option is specified or if rtmp_live
257 SHA256 hash of the decompressed SWF file (32 bytes).
260 Size of the decompressed SWF file, required for SWFVerification.
263 URL of the SWF player for the media. By default no value will be sent.
URL of the player SWF file; its hash and size are computed automatically.
269 URL of the target stream. Defaults to proto://host[:port]/app.
For example, to read with @command{avplay} a multimedia resource named
"sample" from the application "vod" on an RTMP server "myserver":
276 avplay rtmp://myserver/vod/sample
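# A speculative equivalent: pass the application and playpath through the
# rtmp_app and rtmp_playpath options instead of the URI (the server name
# is still a placeholder)
avplay -rtmp_app vod -rtmp_playpath sample rtmp://myserver/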
281 Encrypted Real-Time Messaging Protocol.
283 The Encrypted Real-Time Messaging Protocol (RTMPE) is used for
284 streaming multimedia content within standard cryptographic primitives,
285 consisting of Diffie-Hellman key exchange and HMACSHA256, generating
290 Real-Time Messaging Protocol over a secure SSL connection.
292 The Real-Time Messaging Protocol (RTMPS) is used for streaming
293 multimedia content across an encrypted connection.
297 Real-Time Messaging Protocol tunneled through HTTP.
299 The Real-Time Messaging Protocol tunneled through HTTP (RTMPT) is used
300 for streaming multimedia content within HTTP requests to traverse
305 Encrypted Real-Time Messaging Protocol tunneled through HTTP.
307 The Encrypted Real-Time Messaging Protocol tunneled through HTTP (RTMPTE)
308 is used for streaming multimedia content within HTTP requests to traverse
313 Real-Time Messaging Protocol tunneled through HTTPS.
315 The Real-Time Messaging Protocol tunneled through HTTPS (RTMPTS) is used
316 for streaming multimedia content within HTTPS requests to traverse
@section librtmp rtmp, rtmpe, rtmps, rtmpt, rtmpte
321 Real-Time Messaging Protocol and its variants supported through
324 Requires the presence of the librtmp headers and library during
325 configuration. You need to explicitly configure the build with
326 "--enable-librtmp". If enabled this will replace the native RTMP
329 This protocol provides most client functions and a few server
330 functions needed to support RTMP, RTMP tunneled in HTTP (RTMPT),
331 encrypted RTMP (RTMPE), RTMP over SSL/TLS (RTMPS) and tunneled
332 variants of these encrypted types (RTMPTE, RTMPTS).
334 The required syntax is:
336 @var{rtmp_proto}://@var{server}[:@var{port}][/@var{app}][/@var{playpath}] @var{options}
339 where @var{rtmp_proto} is one of the strings "rtmp", "rtmpt", "rtmpe",
340 "rtmps", "rtmpte", "rtmpts" corresponding to each RTMP variant, and
341 @var{server}, @var{port}, @var{app} and @var{playpath} have the same
342 meaning as specified for the RTMP native protocol.
343 @var{options} contains a list of space-separated options of the form
346 See the librtmp manual page (man 3 librtmp) for more information.
348 For example, to stream a file in real-time to an RTMP server using
351 avconv -re -i myfile -f flv rtmp://myserver/live/mystream
354 To play the same stream using @command{avplay}:
356 avplay "rtmp://myserver/live/mystream live=1"
RTSP is not technically a protocol handler in libavformat; it is a demuxer
366 and muxer. The demuxer supports both normal RTSP (with data transferred
367 over RTP; this is used by e.g. Apple and Microsoft) and Real-RTSP (with
368 data transferred over RDT).
370 The muxer can be used to send a stream using RTSP ANNOUNCE to a server
371 supporting it (currently Darwin Streaming Server and Mischa Spiegelmock's
372 @uref{http://github.com/revmischa/rtsp-server, RTSP server}).
The required syntax for an RTSP URL is:
376 rtsp://@var{hostname}[:@var{port}]/@var{path}
379 The following options (set on the @command{avconv}/@command{avplay} command
380 line, or set in code via @code{AVOption}s or in @code{avformat_open_input}),
383 Flags for @code{rtsp_transport}:
388 Use UDP as lower transport protocol.
391 Use TCP (interleaving within the RTSP control channel) as lower
395 Use UDP multicast as lower transport protocol.
398 Use HTTP tunneling as lower transport protocol, which is useful for
Multiple lower transport protocols may be specified; in that case they are
403 tried one at a time (if the setup of one fails, the next one is tried).
404 For the muxer, only the @code{tcp} and @code{udp} options are supported.
406 Flags for @code{rtsp_flags}:
410 Accept packets only from negotiated peer address and port.
412 Act as a server, listening for an incoming connection.
415 When receiving data over UDP, the demuxer tries to reorder received packets
(since they may arrive out of order, or packets may get lost entirely). This
417 can be disabled by setting the maximum demuxing delay to zero (via
418 the @code{max_delay} field of AVFormatContext).
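For instance, reordering can be switched off entirely by forcing the
maximum delay to zero (a sketch; the stream URL is a placeholder):
@example
avplay -max_delay 0 rtsp://server/video.mp4
@end example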
420 When watching multi-bitrate Real-RTSP streams with @command{avplay}, the
421 streams to display can be chosen with @code{-vst} @var{n} and
422 @code{-ast} @var{n} for video and audio respectively, and can be switched
423 on the fly by pressing @code{v} and @code{a}.
425 Example command lines:
427 To watch a stream over UDP, with a max reordering delay of 0.5 seconds:
430 avplay -max_delay 500000 -rtsp_transport udp rtsp://server/video.mp4
433 To watch a stream tunneled over HTTP:
436 avplay -rtsp_transport http rtsp://server/video.mp4
To send a stream in realtime to an RTSP server, for others to watch:
442 avconv -re -i @var{input} -f rtsp -muxdelay 0.1 rtsp://server/live.sdp
445 To receive a stream in realtime:
448 avconv -rtsp_flags listen -i rtsp://ownaddress/live.sdp @var{output}
453 Session Announcement Protocol (RFC 2974). This is not technically a
protocol handler in libavformat; it is a muxer and demuxer.
It is used for signalling RTP streams by announcing the SDP for the
456 streams regularly on a separate port.
The syntax for a SAP URL given to the muxer is:
462 sap://@var{destination}[:@var{port}][?@var{options}]
465 The RTP packets are sent to @var{destination} on port @var{port},
466 or to port 5004 if no port is specified.
467 @var{options} is a @code{&}-separated list. The following options
472 @item announce_addr=@var{address}
473 Specify the destination IP address for sending the announcements to.
474 If omitted, the announcements are sent to the commonly used SAP
475 announcement multicast address 224.2.127.254 (sap.mcast.net), or
476 ff0e::2:7ffe if @var{destination} is an IPv6 address.
478 @item announce_port=@var{port}
Specify the port to send the announcements on. The default is 9875.
483 Specify the time to live value for the announcements and RTP packets,
486 @item same_port=@var{0|1}
487 If set to 1, send all RTP streams on the same port pair. If zero (the
488 default), all streams are sent on unique ports, with each stream on a
489 port 2 numbers higher than the previous.
490 VLC/Live555 requires this to be set to 1, to be able to receive the stream.
491 The RTP stack in libavformat for receiving requires all streams to be sent
495 Example command lines follow.
497 To broadcast a stream on the local subnet, for watching in VLC:
500 avconv -re -i @var{input} -f sap sap://224.0.0.255?same_port=1
Similarly, for watching in @command{avplay}:
506 avconv -re -i @var{input} -f sap sap://224.0.0.255
And for watching in @command{avplay}, over IPv6:
512 avconv -re -i @var{input} -f sap sap://[ff0e::1:2:3:4]
The syntax for a SAP URL given to the demuxer is:
519 sap://[@var{address}][:@var{port}]
@var{address} is the multicast address to listen for announcements on;
if omitted, the default 224.2.127.254 (sap.mcast.net) is used. @var{port}
524 is the port that is listened on, 9875 if omitted.
The demuxer listens for announcements on the given address and port.
527 Once an announcement is received, it tries to receive that particular stream.
529 Example command lines follow.
531 To play back the first stream announced on the normal SAP multicast address:
To play back the first stream announced on the default IPv6 SAP multicast address:
540 avplay sap://[ff0e::2:7ffe]
Transmission Control Protocol.
The required syntax for a TCP URL is:
549 tcp://@var{hostname}:@var{port}[?@var{options}]
Listen for an incoming connection.
558 avconv -i @var{input} -f @var{format} tcp://@var{hostname}:@var{port}?listen
559 avplay tcp://@var{hostname}:@var{port}
566 User Datagram Protocol.
The required syntax for a UDP URL is:
570 udp://@var{hostname}:@var{port}[?@var{options}]
573 @var{options} contains a list of &-separated options of the form @var{key}=@var{val}.
The list of supported options follows.
578 @item buffer_size=@var{size}
Set the UDP buffer size in bytes.
581 @item localport=@var{port}
Override the local UDP port to bind with.
584 @item localaddr=@var{addr}
Choose the local IP address. This is useful, for example, when sending
multicast and the host has multiple interfaces: the interface to send on
can be chosen by specifying the IP address of that interface.
589 @item pkt_size=@var{size}
Set the size in bytes of UDP packets.
592 @item reuse=@var{1|0}
Explicitly allow or disallow reusing UDP sockets.
Set the time to live value (for multicast only).
598 @item connect=@var{1|0}
Initialize the UDP socket with @code{connect()}. In this case, the
destination address can't be changed with @code{ff_udp_set_remote_url} later.
If the destination address isn't known at the start, this option can
be specified in @code{ff_udp_set_remote_url}, too.
This allows finding out the source address for the packets with
@code{getsockname()}, and makes writes return with @code{AVERROR(ECONNREFUSED)}
if "destination unreachable" is received.
For receiving, this gives the benefit of only receiving packets from
the specified peer address/port.
609 @item sources=@var{address}[,@var{address}]
610 Only receive packets sent to the multicast group from one of the
611 specified sender IP addresses.
613 @item block=@var{address}[,@var{address}]
614 Ignore packets sent to the multicast group from the specified
618 Some usage examples of the udp protocol with @command{avconv} follow.
620 To stream over UDP to a remote endpoint:
622 avconv -i @var{input} -f @var{format} udp://@var{hostname}:@var{port}
To stream in mpegts format over UDP using 188-byte UDP packets, with a large input buffer:
avconv -i @var{input} -f mpegts "udp://@var{hostname}:@var{port}?pkt_size=188&buffer_size=65535"
630 To receive over UDP from a remote endpoint:
632 avconv -i udp://[@var{multicast-address}]:@var{port}
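# A speculative multicast variant: bind a specific local interface and only
# accept packets from one sender, using the localaddr and sources options
# described above (all addresses are placeholders)
avconv -i "udp://224.1.2.3:5004?localaddr=10.0.0.2&sources=10.0.0.5" @var{output}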
639 The required syntax for a Unix socket URL is:
642 unix://@var{filepath}
645 The following parameters can be set via command line options
646 (or in code via @code{AVOption}s):
652 Create the Unix socket in listening mode.
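For example, one instance could listen on a socket while another writes to
it (a speculative sketch; the socket path and the input/output names are
placeholders, and the options are the ones listed above):
@example
avconv -listen 1 -i unix:///tmp/feed.sock @var{output}
avconv -i @var{input} -f @var{format} unix:///tmp/feed.sock
@end example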