Maybe you'll find this useful. Here is what we did for a similar project (no custom software modification needed):
Quick Setup Overview
A USB Web cam (Logitech B500) connected to a Raspberry Pi. But any camera should work. You can check
the wiki to see if your cam needs an extra power supply.
Network is either Ethernet or WiFi. Ethernet is preferable, as the Web cam will use the USB bus heavily.
ffserver will stream your video over the network.
ffmpeg will prepare your video and sound streams.
Several Linux distributions are now switching to avconv instead of ffmpeg, but the two are mostly compatible.
What is your Web cam capable of?
Normally your USB web cam will support 2 output streams:
- YUYV (raw video capture)
- MJPEG (compressed images capture)
To see your Web cam's supported modes you can use the video4linux2 control utility:
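Code: Select all
v4l2-ctl -d /dev/video0 --list-formats
(v4l2-ctl comes with the v4l-utils package.)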
Advanced web cams, like the Logitech C920, have built-in hardware support for h.264. See later why this is useful.
A Web Cam works in one of its predefined modes with fixed resolution and FPS (frames per second). You can list all supported modes as follows:
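Code: Select all
v4l2-ctl -d /dev/video0 --list-formats-ext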
You will need to know them later to get the best performance. If you use a "non-standard" resolution or FPS, the RPi CPU is used to adjust the stream. And we need all our CPU for compression.
Network streaming quick overview
Streaming pipeline is simply the following:
1.V) The Web cam ( /dev/video0 ) sends a raw, MJPEG, or h.264 video stream to the driver (video4linux2).
1.A) The audio device sends an audio stream to the audio driver.
2) ffmpeg grabs this video and audio data, combines them, and feeds the resulting stream to ffserver.
3) ffserver accepts connections from clients and streams media content to them, converting it to the desired format "on the fly" (with another ffmpeg instance).
If you choose to stream MJPEG to the client, everything is simple. But it uses the network heavily. STILL, THIS IS A MUST-DO step, in order to test your whole setup. At least you will know that everything works.
To reduce network traffic, we want to use a video stream with compression. Many options are available here, yet the most commonly used are mpeg1, theora (ogg), and
h.264. The last one is now the unofficial web standard and has hardware support almost everywhere.
Yet there is one problem with a compressed video stream. ffmpeg (and every other tool as of today) uses software compression, which is very slow on the Raspberry Pi. Even at a modest 320x240 resolution you won't get more than 2-3 frames per second. And you want at least 25 (better 30) for a stable view over the network.
Good news: the RPi has unlocked GPU support for h.264 encoding. And I'm currently working on wrapping the GPU-based encoder/decoder
omxtx for ffmpeg. This should give 30 fps even at top resolutions.
Another option is to use a Web cam that has a built-in hardware h.264 encoder. The Logitech C920 is an example.
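I haven't tried it on this setup, but with such a cam you should be able to grab the already-encoded stream and just copy it, with no CPU encoding at all. A rough sketch (the -input_format option needs a recent enough ffmpeg; resolution and FPS here are placeholders):
Code: Select all
ffmpeg -f video4linux2 -input_format h264 -r 30 -s 1280x720 -i /dev/video0 -vcodec copy /tmp/wcam.mkv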
MJPEG setup
An MJPEG stream is a set of compressed images. Relatively heavy traffic usage, BUT very simple to set up and supported by almost any client. The drawback is that we do not have audio (but workarounds are possible). It is still used in many security surveillance projects due to its simplicity.
Step 1. Test your web cam stream:
Code: Select all
ffmpeg -f video4linux2 -r 15 -s 352x288 -vcodec mjpeg -i /dev/video0 -an /tmp/wcam.avi
IMPORTANT: Whenever you work with the Web Cam, you should explicitly limit the framerate with the -r parameter. Otherwise ffmpeg will read data from your Web cam as fast as it can, and will fail because the data is not ready yet.
The frame rate (-r) and resolution (-s) should match one of your Web Cam's native modes (discussed earlier).
Step 2. Prepare ffserver configuration
By default ffserver will look for /etc/ffserver.conf configuration file. Here is a simple setup for MJPEG stream:
Code: Select all
Port 8090
BindAddress 0.0.0.0
MaxClients 5
MaxBandwidth 1000
NoDaemon
<Feed wcam.ffm>
File /tmp/wcam.ffm
FileMaxSize 5M
ACL allow 127.0.0.1
</Feed>
<Stream wcam.mjpeg>
Format mjpeg
Feed wcam.ffm
VideoFrameRate 15
VideoBitRate 500
VideoSize 352x288
VideoQMin 1
VideoQMax 10
NoAudio
</Stream>
Now we can start ffserver simply with the command below. The explicit configuration file path is not necessary, but is useful in case you want to place the file somewhere else.
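Code: Select all
ffserver -f /etc/ffserver.conf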
It is now running and streaming... black matter.
Step 3. Feed Web Cam stream to ffserver
Code: Select all
ffmpeg -f video4linux2 -r 15 -s 352x288 -vcodec mjpeg -i /dev/video0 -an http://localhost:8090/wcam.ffm
Step 4. Enjoy your view
Using VLC you can simply open your stream by the URL:
Code: Select all
http://<raspberry pi IP>:8090/wcam.mjpeg
There are also mobile applications available that can view such streams. Another option is to preview it in your web browser (Chrome supports it):
Code: Select all
<html><head></head><body>
<img src="http://<raspberry pi IP>:8090/wcam.mjpeg" />
</body></html>
Now if you want to add audio, you can do some tricks. MJPEG is a container-less stream, i.e. just raw video data. In order to add sound you need to wrap it in AVI or some other container format. Yes, it then becomes a regular video stream, but without video compression.
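I haven't tested this myself, but an ffserver Stream section along these lines should wrap the MJPEG video and compressed audio in an AVI container (the bitrate and sample rate values are just placeholders):
Code: Select all
<Stream wcam.avi>
Format avi
Feed wcam.ffm
VideoCodec mjpeg
VideoFrameRate 15
VideoSize 352x288
AudioCodec mp2
AudioBitRate 64
AudioChannels 1
AudioSampleRate 44100
</Stream>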
One more option would be to stream the audio totally separately and merge it with video on the client. But this approach is not recommended, because the audio will drift out of sync with the picture.
Using a video stream
Video stream has 3 parts:
- Video
- Audio
- Container
The container holds meta-information and helps to keep audio and video synced. It also marks the key-frames that are necessary to rebuild the compressed video and audio streams.
Audio and video streams are captured from your hardware. If the hardware does not compress (encode) them, then ffmpeg has to do it. Yet it only supports software compression, and the RPi CPU will be a huge bottleneck here. It is easy to check: grab raw video at 30 fps, then repeat with h.264 compression and you will get about 2 fps at 100% CPU utilization, and even fewer fps with sound.
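For example (adjust the device and resolution to your cam; -t just limits each test to 10 seconds):
Code: Select all
# raw grab: keeps up with the camera easily
ffmpeg -f video4linux2 -r 30 -s 320x240 -i /dev/video0 -vcodec rawvideo -t 10 /tmp/raw.avi
# software h.264: watch the fps counter in ffmpeg's output drop to 2-3
ffmpeg -f video4linux2 -r 30 -s 320x240 -i /dev/video0 -vcodec libx264 -t 10 /tmp/h264.avi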
Step 1. Grab video and audio streams
Same as with MJPEG, but use a different Web Cam mode instead:
Code: Select all
ffmpeg -f video4linux2 -r 2 -s 320x240 -i /dev/video0 -f alsa -ac 1 -i hw:0 /tmp/wcam.avi
The IMPORTANT part here is the frame rate. You can set it up to 30 (or whatever is the best mode your Web Cam supports). BUT ffmpeg won't be able to process frames faster than it encodes them, and it will encode 2-3 frames per second at most (unless the GPU codec becomes available in the near future).
What would happen if we put "-r 30" for capturing? Well, it would capture all the data and feed it to the ffserver buffer in raw format. All these frames are saved to the disk buffer until they are streamed to the client. When we start streaming, we won't be able to give away more than 2-3 frames per second due to compression speed. And you still keep getting new frames... and voilà, your buffer is now full, the client is still getting images from about 10 minutes ago... and new frames are discarded.
So the lesson here is to capture data at the same speed as you give it away. With MJPEG we give it away fast. With compression (unless we have the GPU codec) we have to use 2-3 frames per second (depending on resolution).
Step 2. Update ffserver stream configuration
Unfortunately I didn't save my h.264 stream configuration, but it would look something like this:
Code: Select all
Port 8090
BindAddress 0.0.0.0
MaxClients 5
MaxBandwidth 1000
NoDaemon
<Feed wcam.ffm>
File /tmp/wcam.ffm
FileMaxSize 5M
ACL allow 127.0.0.1
</Feed>
<Stream wcam.mp4>
Format mpeg4
Feed wcam.ffm
VideoCodec libx264
VideoFrameRate 2
VideoBitRate 512
VideoSize 320x240
AVOptionVideo crf 26
AVOptionVideo preset medium
AVOptionVideo flags +global_header
AudioCodec aac
Strict -2
AudioBitRate 128
AudioChannels 2
AudioSampleRate 44100
AVOptionAudio flags +global_header
</Stream>
IMPORTANT: there will be errors due to "preset medium". I googled hard and found a workaround... I will update the post when I find it again.
Try not to stream multiple compressed streams from the RPi (ogg and h.264 at the same time). It doesn't have enough CPU even for one stream.
Now start ffserver. You may need to kill the old instance first, if any. For example:
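Code: Select all
killall ffserver
ffserver -f /etc/ffserver.conf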
Step 3. Feed Web Cam stream to ffserver and "enjoy" the view
Code: Select all
ffmpeg -f video4linux2 -r 2 -s 320x240 -i /dev/video0 -f alsa -ac 1 -i hw:0 http://localhost:8090/wcam.ffm
You won't really "enjoy" the video at 2 FPS. This is because your player will buffer the stream... then show you the video at 25 FPS (everything will move very fast)... then buffer again for the next couple of minutes.
The idea here is to get prepared: use the MJPEG stream for now... and wait a little until the RPi GPU codec for ffserver is ready.
Still, nobody stops you from using uncompressed MJPEG as the video stream, combining it with compressed audio, wrapping it in a simple AVI container... and streaming that (see the sketch in the MJPEG section above). I was able to view my home Web Cam at 320x240 MJPEG at 15 FPS (a native mode for my Web Cam) over the Internet from my office with no delays at all. A bigger resolution can be used on a local home network.
Hope this helps.