Raspberry Pi Documentation

Camera

Basic Camera Usage

Raspberry Pi currently sell two types of camera board: an 8MP device and a 12MP High Quality (HQ) camera. The 8MP device is also available in NoIR form without an IR filter. The original 5MP device is no longer available from Raspberry Pi.

All Raspberry Pi cameras are capable of taking high-resolution photographs, along with full HD 1080p video, and can be fully controlled programmatically. This documentation describes how to use the camera in various scenarios, and how to use the various software tools.

Once installed, there are various ways the cameras can be used. The simplest option is to use one of the provided camera applications. There are four Linux command-line applications installed by default (e.g. raspistill).

You can also programmatically access the camera using the Python programming language, using the picamera library.

About libcamera

libcamera is a new Linux API for interfacing to cameras. Raspberry Pi have been involved with the development of libcamera and are now using this sophisticated system for new camera software. This means Raspberry Pi are moving away from the firmware-based camera image processing pipeline (ISP) to a more open system.

Camera Modules

The Raspberry Pi Camera Modules are official products from the Raspberry Pi Foundation. The original 5-megapixel model was released in 2013, and an 8-megapixel Camera Module v2 was released in 2016. For both iterations, there are visible light and infrared versions. A 12-megapixel High Quality Camera was released in 2020. There is no infrared version of the HQ Camera, however the IR Filter can be removed if required.

Hardware Specification

                                Camera Module v1           Camera Module v2                     HQ Camera
Net price                       $25                        $25                                  $50
Size                            Around 25 × 24 × 9 mm      Around 25 × 24 × 9 mm                38 × 38 × 18.4 mm (excluding lens)
Weight                          3g                         3g
Still resolution                5 Megapixels               8 Megapixels                         12.3 Megapixels
Video modes                     1080p30, 720p60 and 640 × 480p60/90 (all modules)
Linux integration               V4L2 driver available (all modules)
C programming API               OpenMAX IL and others available
Sensor                          OmniVision OV5647          Sony IMX219                          Sony IMX477
Sensor resolution               2592 × 1944 pixels         3280 × 2464 pixels                   4056 × 3040 pixels
Sensor image area               3.76 × 2.74 mm             3.68 × 2.76 mm (4.6 mm diagonal)     6.287 × 4.712 mm (7.9 mm diagonal)
Pixel size                      1.4 µm × 1.4 µm            1.12 µm × 1.12 µm                    1.55 µm × 1.55 µm
Optical size                    1/4"                       1/4"
Full-frame SLR lens equivalent  35 mm
S/N ratio                       36 dB
Dynamic range                   67 dB @ 8x gain
Sensitivity                     680 mV/lux-sec
Dark current                    16 mV/sec @ 60 C
Well capacity                   4.3 Ke-
Fixed focus                     1 m to infinity                                                 N/A
Focal length                    3.60 mm +/- 0.01           3.04 mm                              Depends on lens
Horizontal field of view        53.50 +/- 0.13 degrees     62.2 degrees                         Depends on lens
Vertical field of view          41.41 +/- 0.11 degrees     48.8 degrees                         Depends on lens
Focal ratio (F-Stop)            2.9                        2.0                                  Depends on lens

Rows giving a single value (full-frame SLR lens equivalent, S/N ratio, dynamic range, sensitivity, dark current and well capacity) are specified for the Camera Module v1 only.

Hardware Features

Each entry lists a feature available on the sensor, followed by how (or whether) it is implemented on the Raspberry Pi:

  • Chief ray angle correction: Yes

  • Global and rolling shutter: Rolling shutter

  • Automatic exposure control (AEC): No - done by ISP instead

  • Automatic white balance (AWB): No - done by ISP instead

  • Automatic black level calibration (ABLC): No - done by ISP instead

  • Automatic 50/60 Hz luminance detection: No - done by ISP instead

  • Frame rate up to 120 fps: Max 90fps; limitations on frame size at the higher frame rates (VGA only above 47fps)

  • AEC/AGC 16-zone size/position/weight control: No - done by ISP instead

  • Mirror and flip: Yes

  • Cropping: No - done by ISP instead (except 1080p mode)

  • Lens correction: No - done by ISP instead

  • Defective pixel cancelling: No - done by ISP instead

  • 10-bit RAW RGB data: Yes - format conversions available via GPU

  • Support for LED and flash strobe mode: LED flash

  • Support for internal and external frame synchronisation for frame exposure mode: No

  • Support for 2 × 2 binning for better SNR in low light conditions: Any output resolution below 1296 × 976 will use the 2 × 2 binned mode

  • Support for horizontal and vertical sub-sampling: Yes, via binning and skipping

  • On-chip phase lock loop (PLL): Yes

  • Standard serial SCCB interface: Yes

  • Digital video port (DVP) parallel output interface: No

  • MIPI interface (two lanes): Yes

  • 32 bytes of embedded one-time programmable (OTP) memory: No

  • Embedded 1.5V regulator for core power: Yes

Software Features

  • Picture formats: JPEG (accelerated), JPEG + RAW, GIF, BMP, PNG, YUV420, RGB888

  • Video formats: raw h.264 (accelerated)

  • Effects: negative, solarise, posterise, whiteboard, blackboard, sketch, denoise, emboss, oilpaint, hatch, gpen, pastel, watercolour, film, blur, saturation

  • Exposure modes: auto, night, nightpreview, backlight, spotlight, sports, snow, beach, verylong, fixedfps, antishake, fireworks

  • Metering modes: average, spot, backlit, matrix

  • Automatic white balance modes: off, auto, sun, cloud, shade, tungsten, fluorescent, incandescent, flash, horizon

  • Triggers: Keypress, UNIX signal, timeout

  • Extra modes: demo, burst/timelapse, circular buffer, video with motion vectors, segmented video, live preview on 3D models

HQ Camera IR Filter Transmission

The HQ Camera uses a Hoya CM500 infrared filter. Its transmission characteristics are as represented in the following graph.

CM500 Transmission Graph

Mechanical Drawings

  • Camera Module v2 PDF

  • HQ Camera Module PDF

  • HQ Camera Module lens mount PDF

Schematics

  • Camera Module v2 PDF

  • HQ Camera Module PDF

Installing a Raspberry Pi camera

Warning
Cameras are sensitive to static. Earth yourself prior to handling the PCB. A sink tap or similar should suffice if you don’t have an earthing strap.

Connecting the Camera

The flex cable inserts into the connector labelled CAMERA on the Raspberry Pi, which is located between the Ethernet and HDMI ports. The cable must be inserted with the silver contacts facing the HDMI port. To open the connector, pull the tabs on the top of the connector upwards, then towards the Ethernet port. The flex cable should be inserted firmly into the connector, with care taken not to bend the flex at too acute an angle. To close the connector, push the top part of the connector towards the HDMI port and down, while holding the flex cable in place.

We have created a video to illustrate the process of connecting the camera. Although the video shows the original camera on the original Raspberry Pi 1, the principle is the same for all camera boards:

Depending on the model, the camera may come with a small piece of translucent blue plastic film covering the lens. This is only present to protect the lens while it is being mailed to you, and needs to be removed by gently peeling it off.

Enabling the Camera

Using the desktop

Select Preferences and Raspberry Pi Configuration from the desktop menu: a window will appear. Select the Interfaces tab, then click on the enable camera option. Click OK. You will need to reboot for the changes to take effect.

Using the command line

Open the raspi-config tool from the terminal:

sudo raspi-config

Select Interfacing Options then Camera and press Enter. Choose Yes then Ok. Go to Finish and you’ll be prompted to reboot.

Setting up the Camera Software

Execute the following instructions on the command line to download and install the latest kernel, GPU firmware, and applications. You’ll need an internet connection for this to work correctly.

sudo apt update
sudo apt full-upgrade

Now you need to enable camera support using the raspi-config program you will have used when you first set up your Raspberry Pi.

sudo raspi-config

Use the cursor keys to select and open Interfacing Options, and then select Camera and follow the prompt to enable the camera.

Upon exiting raspi-config, it will ask to reboot. The enable option will ensure that on reboot the correct GPU firmware will be running with the camera driver and tuning, and the GPU memory split is sufficient to allow the camera to acquire enough memory to run correctly.

To test that the system is installed and working, try the following command:

raspistill -v -o test.jpg

The display should show a five-second preview from the camera and then take a picture, saved to the file test.jpg, whilst displaying various informational messages.

Raspicam commands

raspistill, raspivid and raspiyuv are command line tools for using the camera module.

raspistill

raspistill is the command line tool for capturing still photographs with a Raspberry Pi camera module.

Basic usage of raspistill

With a camera module connected and enabled, enter the following command in the terminal to take a picture:

raspistill -o cam.jpg

Upside-down photo

In this example the camera has been positioned upside-down. If the camera is placed in this position, the image must be flipped to appear the right way up.

Vertical flip and horizontal flip

With the camera placed upside-down, the image must be rotated 180° to be displayed correctly. The way to correct for this is to apply both a vertical and a horizontal flip by passing in the -vf and -hf flags:

raspistill -vf -hf -o cam2.jpg

Vertical and horizontal flipped photo

Now the photo has been captured correctly.

Resolution

The camera module takes pictures at a resolution of 2592 x 1944, which is 5,038,848 pixels, or 5 megapixels.

File size

A photo taken with the camera module will be around 2.4MB. This is about 425 photos per GB.

Taking 1 photo per minute would take up 1GB in about 7 hours. This is a rate of about 144MB per hour or 3.3GB per day.

Bash script

You can create a Bash script which takes a picture with the camera. To create a script, open up your editor of choice and write the following example code:

#!/bin/bash

DATE=$(date +"%Y-%m-%d_%H%M")

raspistill -vf -hf -o /home/pi/camera/$DATE.jpg

This script will take a picture and name the file with a timestamp.

You’ll also need to make sure the path exists by creating the camera folder:

mkdir camera

Say we saved it as camera.sh; we would first make the file executable:

chmod +x camera.sh

Then run with:

./camera.sh

More options

For a full list of possible options, run raspistill with no arguments. To scroll, redirect stderr to stdout and pipe the output to less:

raspistill 2>&1 | less

Use the arrow keys to scroll and type q to exit.

raspivid

raspivid is the command line tool for capturing video with a Raspberry Pi camera module.

Basic usage of raspivid

With a camera module connected and enabled, record a video using the following command:

raspivid -o vid.h264

Remember to use -hf and -vf to flip the image if required, as with raspistill.

This will save a five-second video file (the default recording length) to the path given, here vid.h264.

Specify length of video

To specify the length of the video taken, pass in the -t flag with a number of milliseconds. For example:

raspivid -o video.h264 -t 10000

This will record 10 seconds of video.

More options

For a full list of possible options, run raspivid with no arguments, or pipe this command through less and scroll through:

raspivid 2>&1 | less

Use the arrow keys to scroll and type q to exit.

MP4 Video Format

The Pi captures video as a raw H264 video stream. Many media players will refuse to play it, or play it at an incorrect speed, unless it is "wrapped" in a suitable container format like MP4. The easiest way to obtain an MP4 file from the raspivid command is using MP4Box.

Install MP4Box with this command:

sudo apt install -y gpac

Capture your raw video with raspivid and wrap it in an MP4 container like this:

# Capture 30 seconds of raw video at 640x480 and 150kB/s bit rate into a pivideo.h264 file:
raspivid -t 30000 -w 640 -h 480 -fps 25 -b 1200000 -p 0,0,640,480 -o pivideo.h264
# Wrap the raw video with an MP4 container:
MP4Box -add pivideo.h264 pivideo.mp4
# Remove the source raw file, leaving the remaining pivideo.mp4 file to play
rm pivideo.h264

Alternatively, wrap MP4 around your existing raspivid output, like this:

MP4Box -add video.h264 video.mp4
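Raw H264 carries no embedded timing, so if you recorded at a framerate other than the default, the wrapped MP4 may still play at the wrong speed. A hedged sketch, assuming MP4Box's -fps option behaves as in current GPAC releases, forcing the framerate at mux time:

# Tell MP4Box the capture framerate (here 25fps, matching the earlier example)
MP4Box -fps 25 -add pivideo.h264 pivideo.mp4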

raspiyuv

raspiyuv has the same set of features as raspistill, but instead of outputting standard image files such as .jpgs, it generates YUV420 or RGB888 image files from the output of the camera ISP.

In most cases using raspistill is the best option for standard image capture, but using YUV can be of benefit in certain circumstances. For example, if you just need an uncompressed black and white image for computer vision applications, you can simply use the Y channel of a YUV capture.

There are some specific points about the layout of YUV420 files that you need to know in order to use them correctly. The line stride (or pitch) is a multiple of 32, and each plane of YUV is a multiple of 16 in height. This can mean there may be extra pixels at the end of lines, or gaps between planes, depending on the resolution of the captured image. These gaps are unused.
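As a rough sketch of that padding arithmetic (the 100 x 100 capture size below is purely illustrative):

# Round the width up to a multiple of 32 and the height up to a multiple of 16
WIDTH=100; HEIGHT=100
STRIDE=$(( (WIDTH + 31) / 32 * 32 ))     # line stride, here 128
PLANE_H=$(( (HEIGHT + 15) / 16 * 16 ))   # padded plane height, here 112
echo $(( STRIDE * PLANE_H * 3 / 2 ))     # total YUV420 buffer size in bytes, here 21504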

Troubleshooting

If the Camera Module isn’t working correctly, there are a number of things to try:

  • Is the ribbon cable attached to the Camera Serial Interface (CSI), not the Display Serial Interface (DSI)? The ribbon connector will fit into either port. The Camera port is located near the HDMI connector.

  • Are the ribbon connectors all firmly seated, and are they the right way round? They must be straight in their sockets.

  • Is the Camera Module connector, between the smaller black Camera Module itself and the PCB, firmly attached? Sometimes this connection can come loose during transit or when putting the Camera Module in a case. Using a fingernail, flip up the connector on the PCB, then reconnect it with gentle pressure. It engages with a very slight click. Don’t force it; if it doesn’t engage, it’s probably slightly misaligned.

  • Have sudo apt update and sudo apt full-upgrade been run?

  • Has raspi-config been run and the Camera Module enabled?

  • Is your power supply sufficient? The Camera Module adds about 200-250mA to the power requirements of your Raspberry Pi.

If things are still not working, try the following:

  • Error : raspistill/raspivid command not found. This probably means your update/upgrade failed in some way. Try it again.

  • Error : ENOMEM. The Camera Module is not starting up. Check all connections again.

  • Error : ENOSPC. The Camera Module is probably running out of GPU memory. Check config.txt in the /boot/ folder. The gpu_mem option should be at least 128. Alternatively, use the Memory Split option in the Advanced section of raspi-config to set this.

  • If you’ve checked all the above issues and the Camera Module is still not working, try posting on our forums for more help.

Command Line Options

Preview window

	--preview,	-p		Preview window settings <'x,y,w,h'>

Allows the user to define the size of the preview window and its location on the screen. Note this will be superimposed over the top of any other windows/graphics.

	--fullscreen,	-f		Fullscreen preview mode

Forces the preview window to use the whole screen. Note that the aspect ratio of the incoming image will be retained, so there may be bars on some edges.

	--nopreview,	-n		Do not display a preview window

Disables the preview window completely. Note that even though the preview is disabled, the camera will still be producing frames, so will be using power.

	--opacity,	-op		Set preview window opacity

Sets the opacity of the preview window. 0 = invisible, 255 = fully opaque.

Camera control options

	--sharpness,	-sh		Set image sharpness (-100 - 100)

Sets the sharpness of the image. 0 is the default.

	--contrast,	-co		Set image contrast (-100 - 100)

Sets the contrast of the image. 0 is the default.

	--brightness,	-br		Set image brightness (0 - 100)

Sets the brightness of the image. 50 is the default. 0 is black, 100 is white.

	--saturation,	-sa		Set image saturation (-100 - 100)

Sets the colour saturation of the image. 0 is the default.

	--ISO,	-ISO		Set capture ISO (100 - 800)

Sets the ISO to be used for captures.

	--vstab,	-vs		Turn on video stabilisation

In video mode only, turns on video stabilisation.

	--ev,	-ev		Set EV compensation (-10 - 10)

Sets the EV compensation of the image. Default is 0.

	--exposure,	-ex		Set exposure mode

Possible options are:

  • auto: use automatic exposure mode

  • night: select setting for night shooting

  • nightpreview:

  • backlight: select setting for backlit subject

  • spotlight:

  • sports: select setting for sports (fast shutter etc.)

  • snow: select setting optimised for snowy scenery

  • beach: select setting optimised for beach

  • verylong: select setting for long exposures

  • fixedfps: constrain fps to a fixed value

  • antishake: antishake mode

  • fireworks: select setting optimised for fireworks

Note that not all of these settings may be implemented, depending on camera tuning.

	--flicker, -fli		Set flicker avoidance mode

Set a mode to compensate for lights flickering at the mains frequency, which can be seen as a dark horizontal band across an image. Flicker avoidance locks the exposure time to a multiple of the mains flicker period (8.33ms for 60Hz, or 10ms for 50Hz). This means that images can be noisier, because when the control algorithm wants an intermediate exposure value it has to increase the gain instead of the exposure time. auto can be confused by external factors, therefore it is preferable to leave this setting off unless actually required.

Possible options are:

  • off: turn off flicker avoidance

  • auto: automatically detect mains frequency

  • 50hz: set avoidance at 50Hz

  • 60hz: set avoidance at 60Hz
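For example, to lock exposure times to the 50Hz mains period:

raspistill -o image.jpg -fli 50hz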

	--awb,	-awb		Set Automatic White Balance (AWB) mode

Modes for which colour temperature ranges (K) are available have these settings in brackets.

  • off: turn off white balance calculation

  • auto: automatic mode (default)

  • sun: sunny mode (between 5000K and 6500K)

  • cloud: cloudy mode (between 6500K and 12000K)

  • shade: shade mode

  • tungsten: tungsten lighting mode (between 2500K and 3500K)

  • fluorescent: fluorescent lighting mode (between 2500K and 4500K)

  • incandescent: incandescent lighting mode

  • flash: flash mode

  • horizon: horizon mode

  • greyworld: Use this on the NoIR camera to fix incorrect AWB results due to the lack of the IR filter.

Note that not all of these settings may be implemented, depending on camera type.
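For example, a typical invocation on a NoIR camera might be:

raspistill -awb greyworld -o noir.jpg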

	--imxfx,	-ifx		Set image effect

Set an effect to be applied to the image:

  • none: no effect (default)

  • negative: invert the image colours

  • solarise: solarise the image

  • posterise: posterise the image

  • whiteboard: whiteboard effect

  • blackboard: blackboard effect

  • sketch: sketch effect

  • denoise: denoise the image

  • emboss: emboss the image

  • oilpaint: oil paint effect

  • hatch: hatch sketch effect

  • gpen: graphite sketch effect

  • pastel: pastel effect

  • watercolour: watercolour effect

  • film: film grain effect

  • blur: blur the image

  • saturation: colour saturate the image

  • colourswap: not fully implemented

  • washedout: not fully implemented

  • colourpoint: not fully implemented

  • colourbalance: not fully implemented

  • cartoon: not fully implemented

Note that not all of these settings may be available in all circumstances.

	--colfx,	-cfx		Set colour effect <U:V>

The supplied U and V parameters (range 0 - 255) are applied to the U and V channels of the image. For example, --colfx 128:128 should result in a monochrome image.

	--metering,	-mm		Set metering mode

Specify the metering mode used for the preview and capture:

  • average: average the whole frame for metering

  • spot: spot metering

  • backlit: assume a backlit image

  • matrix: matrix metering

	--rotation,	-rot		Set image rotation (0 - 359)

Sets the rotation of the image in the viewfinder and resulting image. This can take any value from 0 upwards, but due to hardware constraints only 0, 90, 180, and 270 degree rotations are supported.

	--hflip,	-hf		Set horizontal flip

Flips the preview and saved image horizontally.

	--vflip,	-vf		Set vertical flip

Flips the preview and saved image vertically.

	--roi,	-roi		Set sensor region of interest

Allows the specification of the area of the sensor to be used as the source for the preview and capture. This is defined as x,y for the top-left corner, and a width and height, with all values in normalised coordinates (0.0 - 1.0). So, to set a ROI at halfway across and down the sensor, and a width and height of a quarter of the sensor, use:

-roi 0.5,0.5,0.25,0.25

	--shutter,	-ss		Set shutter speed/time

Sets the shutter open time to the specified value (in microseconds). Shutter speed limits are as follows:

Camera Version   Max (microseconds)
V1 (OV5647)      6000000 (i.e. 6s)
V2 (IMX219)      10000000 (i.e. 10s)
HQ (IMX477)      200000000 (i.e. 200s)

Using values above these maximums will result in undefined behaviour.
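For example, a quarter-second (250000 microsecond) exposure, well within all three limits:

raspistill -t 2000 -ss 250000 -o slow.jpg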

	--drc,	-drc		Enable/disable dynamic range compression

DRC changes the images by increasing the range of dark areas, and decreasing the brighter areas. This can improve the image in low light areas.

  • off

  • low

  • med

  • high

By default, DRC is off.

	--stats,	-st		Use stills capture frame for image statistics

Force recomputation of statistics on stills capture pass. Digital gain and AWB are recomputed based on the actual capture frame statistics, rather than the preceding preview frame.

	--awbgains,	-awbg

Sets blue and red gains (as floating point numbers) to be applied when -awb off is set e.g. -awbg 1.5,1.2

	--analoggain,	-ag

Sets the analog gain value directly on the sensor (floating point value from 1.0 to 8.0 for the OV5647 sensor on Camera Module V1, and 1.0 to 12.0 for the IMX219 sensor on Camera Module V2 and the IMX477 on the HQ Camera).

	--digitalgain,	-dg

Sets the digital gain value applied by the ISP (floating point value from 1.0 to 64.0, but values over about 4.0 will produce overexposed images)

	--mode,	-md

Sets a specified sensor mode, disabling the automatic selection. Possible values depend on the version of the Camera Module being used:

Version 1.x (OV5647)

Mode   Size        Aspect Ratio   Frame rates   FOV       Binning
0      automatic selection
1      1920x1080   16:9           1-30fps       Partial   None
2      2592x1944   4:3            1-15fps       Full      None
3      2592x1944   4:3            0.1666-1fps   Full      None
4      1296x972    4:3            1-42fps       Full      2x2
5      1296x730    16:9           1-49fps       Full      2x2
6      640x480     4:3            42.1-60fps    Full      2x2 plus skip
7      640x480     4:3            60.1-90fps    Full      2x2 plus skip

Version 2.x (IMX219)

Mode   Size        Aspect Ratio   Frame rates   FOV       Binning
0      automatic selection
1      1920x1080   16:9           0.1-30fps     Partial   None
2      3280x2464   4:3            0.1-15fps     Full      None
3      3280x2464   4:3            0.1-15fps     Full      None
4      1640x1232   4:3            0.1-40fps     Full      2x2
5      1640x922    16:9           0.1-40fps     Full      2x2
6      1280x720    16:9           40-90fps      Partial   2x2
7      640x480     4:3            40-200fps¹    Partial   2x2

¹ For frame rates over 120fps, it is necessary to turn off automatic exposure and gain control using -ex off. Doing so should achieve the higher frame rates, but exposure time and gains will need to be set to fixed values supplied by the user.

HQ Camera

Mode   Size        Aspect Ratio   Frame rates   FOV       Binning/Scaling
0      automatic selection
1      2028x1080   169:90         0.1-50fps     Partial   2x2 binned
2      2028x1520   4:3            0.1-50fps     Full      2x2 binned
3      4056x3040   4:3            0.005-10fps   Full      None
4      1332x990    74:55          50.1-120fps   Partial   2x2 binned

	--camselect,	-cs

Selects which camera to use on a multi-camera system. Use 0 or 1.

	--annotate,	-a		Enable/set annotate flags or text

Adds some text and/or metadata to the picture.

Metadata is indicated using a bitmask notation, so add them together to show multiple parameters. For example, 12 will show time(4) and date(8), since 4+8=12.

Text may include date/time placeholders by using the '%' character, as used by strftime.

Value                         Meaning                                Example Output
-a 4                          Time                                   20:09:33
-a 8                          Date                                   10/28/15
-a 12                         Time (4) and date (8), since 4+8=12    20:09:33 10/28/15
-a 16                         Shutter Settings
-a 32                         CAF Settings
-a 64                         Gain Settings
-a 128                        Lens Settings
-a 256                        Motion Settings
-a 512                        Frame Number
-a 1024                       Black Background
-a "ABC %Y-%m-%d %X"          Show some text                         ABC %Y-%m-%d %X
-a 4 -a "ABC %Y-%m-%d %X"     Show custom formatted date/time        ABC 2015-10-28 20:09:33
-a 8 -a "ABC %Y-%m-%d %X"     Show custom formatted date/time        ABC 2015-10-28 20:09:33

	--annotateex,	-ae		Set extra annotation parameters

Specifies annotation size, text colour, and background colour. Colours are in hex YUV format.

Size ranges from 6 - 160; default is 32. Asking for an invalid size should give you the default.

Example                                      Explanation
-ae 32,0xff,0x808000 -a "Annotation text"    Gives size 32 white text on black background
-ae 10,0x00,0x8080FF -a "Annotation text"    Gives size 10 black text on white background

	--stereo,	-3d

Select the specified stereo imaging mode; sbs selects side-by-side mode, tb selects top/bottom mode; off turns off stereo mode (the default).

	--decimate,	-dec

Halves the width and height of the stereo image.

	--3dswap,	-3dswap

Swaps the camera order used in stereoscopic imaging; NOTE: currently not working.

	--settings,	-set

Retrieves the current camera settings and writes them to stdout.

Application-specific Settings

raspistill

	--width,	-w		Set image width <size>

	--height,	-h		Set image height <size>

	--quality,	-q		Set JPEG quality <0 to 100>

Quality 100 is almost completely uncompressed. 75 is a good all-round value.

	--raw,	-r		Add raw Bayer data to JPEG metadata

This option inserts the raw Bayer data from the camera into the JPEG metadata.

	--output,	-o		Output filename <filename>

Specifies the output filename. If not specified, no file is saved. If the filename is '-', then all output is sent to stdout.

	--latest,	-l		Link latest frame to filename <filename>

Makes a file system link under this name to the latest frame.

	--verbose,	-v		Output verbose information during run

Outputs debugging/information messages during the program run.

	--timeout,	-t		Time before the camera takes picture and shuts down

The program will run for the specified length of time, entered in milliseconds. It then takes the capture and saves it if an output is specified. If a timeout value is not specified, then it is set to 5 seconds (-t 5000). Note that low values (less than 500ms, although it can depend on other settings) may not give enough time for the camera to start up and provide enough frames for the automatic algorithms like AWB and AGC to provide accurate results.

If set to 0, the preview will run indefinitely, until stopped with CTRL-C. In this case no capture is made.

	--timelapse,	-tl		time-lapse mode

The specific value is the time between shots in milliseconds. Note that you should specify %04d at the point in the filename where you want a frame count number to appear. So, for example, the code below will produce a capture every 2 seconds, over a total period of 30s, named image0001.jpg, image0002.jpg and so on, through to image0015.jpg.

-t 30000 -tl 2000 -o image%04d.jpg

Note that the %04d indicates a 4-digit number, with leading zeroes added to make the required number of digits. So, for example, %08d would result in an 8-digit number.

If a time-lapse value of 0 is entered, the application will take pictures as fast as possible. Note that there’s a minimum enforced pause of 30ms between captures to ensure that exposure calculations can be made.

	--framestart,	-fs

Specifies the first frame number in the timelapse. Useful if you have already saved a number of frames, and want to start again at the next frame.

	--datetime,	-dt

Instead of a simple frame number, the timelapse file names will use a date/time value of the format aabbccddee, where aa is the month, bb is the day of the month, cc is the hour, dd is the minute, and ee is the second.

	--timestamp,	-ts

Instead of a simple frame number, the timelapse file names will use a single number which is the Unix timestamp, i.e. the seconds since 1970.

	--thumb,	-th		Set thumbnail parameters (x:y:quality)

Allows specification of the thumbnail image inserted into the JPEG file. If not specified, defaults are a size of 64x48 at quality 35.

If --thumb none is specified, no thumbnail information will be placed in the file. This reduces the file size slightly.

	--demo,	-d		Run a demo mode <milliseconds>

This option cycles through the range of camera options. No capture is taken, and the demo will end at the end of the timeout period, irrespective of whether all the options have been cycled. The time between cycles should be specified as a millisecond value.

	--encoding,	-e		Encoding to use for output file

Valid options are jpg, bmp, gif, and png. Note that unaccelerated image types (GIF, PNG, BMP) will take much longer to save than jpg, which is hardware accelerated. Also note that the filename suffix is completely ignored when deciding the encoding of a file.

	--restart,	-rs

Sets the JPEG restart marker interval to a specific value. Can be useful for lossy transport streams because it allows a broken JPEG file to still be partially displayed.

	--exif,	-x		EXIF tag to apply to captures (format as 'key=value')

Allows the insertion of specific EXIF tags into the JPEG image. You can have up to 32 EXIF tag entries. This is useful for tasks like adding GPS metadata. For example, to set the longitude:

--exif GPS.GPSLongitude=5/1,10/1,15/1

would set the longitude to 5 degs, 10 minutes, 15 seconds. See EXIF documentation for more details on the range of tags available; the supported tags are as follows:

IFD0.< or IFD1.<
ImageWidth, ImageLength, BitsPerSample, Compression, PhotometricInterpretation, ImageDescription, Make, Model, StripOffsets, Orientation, SamplesPerPixel, RowsPerStrip, StripByteCounts, XResolution, YResolution, PlanarConfiguration, ResolutionUnit, TransferFunction, Software, DateTime, Artist, WhitePoint, PrimaryChromaticities, JPEGInterchangeFormat, JPEGInterchangeFormatLength, YCbCrCoefficients, YCbCrSubSampling, YCbCrPositioning, ReferenceBlackWhite, Copyright>

EXIF.<
ExposureTime, FNumber, ExposureProgram, SpectralSensitivity, ISOSpeedRatings, OECF, ExifVersion, DateTimeOriginal, DateTimeDigitized, ComponentsConfiguration, CompressedBitsPerPixel, ShutterSpeedValue, ApertureValue, BrightnessValue, ExposureBiasValue, MaxApertureValue, SubjectDistance, MeteringMode, LightSource, Flash, FocalLength, SubjectArea, MakerNote, UserComment, SubSecTime, SubSecTimeOriginal, SubSecTimeDigitized, FlashpixVersion, ColorSpace, PixelXDimension, PixelYDimension, RelatedSoundFile, FlashEnergy, SpatialFrequencyResponse, FocalPlaneXResolution, FocalPlaneYResolution, FocalPlaneResolutionUnit, SubjectLocation, ExposureIndex, SensingMethod, FileSource, SceneType, CFAPattern, CustomRendered, ExposureMode, WhiteBalance, DigitalZoomRatio, FocalLengthIn35mmFilm, SceneCaptureType, GainControl, Contrast, Saturation, Sharpness, DeviceSettingDescription, SubjectDistanceRange, ImageUniqueID>

GPS.<
GPSVersionID, GPSLatitudeRef, GPSLatitude, GPSLongitudeRef, GPSLongitude, GPSAltitudeRef, GPSAltitude, GPSTimeStamp, GPSSatellites, GPSStatus, GPSMeasureMode, GPSDOP, GPSSpeedRef, GPSSpeed, GPSTrackRef, GPSTrack, GPSImgDirectionRef, GPSImgDirection, GPSMapDatum, GPSDestLatitudeRef, GPSDestLatitude, GPSDestLongitudeRef, GPSDestLongitude, GPSDestBearingRef, GPSDestBearing, GPSDestDistanceRef, GPSDestDistance, GPSProcessingMethod, GPSAreaInformation, GPSDateStamp, GPSDifferential>

EINT.<
InteroperabilityIndex, InteroperabilityVersion, RelatedImageFileFormat, RelatedImageWidth, RelatedImageLength>

Note that a small subset of these tags will be set automatically by the camera system, but will be overridden by any EXIF options on the command line.

Setting --exif none will prevent any EXIF information being stored in the file. This reduces the file size slightly.

	--gpsdexif,	-gps

Applies real-time EXIF information from any attached GPS dongle (using gpsd) to the image; requires libgps.so to be installed.

	--fullpreview,	-fp		Full preview mode

This runs the preview window using the full resolution capture mode. Maximum frames per second in this mode is 15fps, and the preview will have the same field of view as the capture. Captures should happen more quickly, as no mode change should be required. This feature is currently under development.

	--keypress,	-k		Keypress mode

The camera is run for the requested time (-t), and a capture can be initiated throughout that time by pressing the Enter key. Pressing X then Enter will exit the application before the timeout is reached. If the timeout is set to 0, the camera will run indefinitely until the user presses X then Enter. Using the verbose option (-v) will display a prompt asking for user input, otherwise no prompt is displayed.

	--signal,	-s		Signal mode

The camera is run for the requested time (-t), and a capture can be initiated throughout that time by sending a USR1 signal to the camera process. This can be done using the kill command. You can find the camera process ID using the pgrep raspistill command.

kill -USR1 <process id of raspistill>
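Putting this together, a sketch that assumes signal mode with -t 0 runs indefinitely, as keypress mode does:

# Run raspistill in the background in signal mode, numbering each capture
raspistill -t 0 -s -o sig%04d.jpg &
# Each USR1 signal delivered to the process takes one picture
kill -USR1 $(pgrep raspistill)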

	--burst,	-bm

Sets burst capture mode. This prevents the camera from returning to preview mode in between captures, meaning that captures can be taken closer together.

raspivid

	--width,	-w		Set image width <size>

Width of resulting video. This should be between 64 and 1920.

	--height,	-h		Set image height <size>

Height of resulting video. This should be between 64 and 1080.

	--bitrate,	-b		Set bitrate

Use bits per second, so 10Mbits/s would be -b 10000000. For H264 at 1080p30, a high-quality bitrate would be 15Mbits/s or more. Maximum bitrate is 25Mbits/s (-b 25000000), but much over 17Mbits/s won’t show noticeable improvement at 1080p30.

	--output,	-o		Output filename <filename>

Specify the output filename. If not specified, no file is saved. If the filename is '-', then all output is sent to stdout.

To connect to a remote IPv4 host, use tcp or udp followed by the required IP Address. e.g. tcp://192.168.1.2:1234 or udp://192.168.1.2:1234.

To listen on a TCP port (IPv4) and wait for an incoming connection use the --listen (-l) option, e.g. raspivid -l -o tcp://0.0.0.0:3333 will bind to all network interfaces, raspivid -l -o tcp://192.168.1.1:3333 will bind to a local IPv4 address.
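If you prefer to handle the networking outside raspivid, a hedged alternative is to pipe stdout through netcat (the address and port below are just examples, and some netcat variants use nc -l 5000 instead of -l -p 5000):

# On the receiving machine, listen on port 5000 and save the stream
nc -l -p 5000 > video.h264
# On the Raspberry Pi, send the encoded stream to that machine
raspivid -t 10000 -o - | nc 192.168.1.2 5000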

	--listen,	-l

When using a network connection as the data sink, this option will make the system wait for a connection from the remote system before sending data.

	--verbose,	-v		Output verbose information during run

Outputs debugging/information messages during the program run.

	--timeout,	-t		Time before the camera takes picture and shuts down

The total length of time that the program will run for. If not specified, the default is 5000ms (5 seconds). If set to 0, the application will run indefinitely until stopped with Ctrl-C.

	--demo,	-d		Run a demo mode <milliseconds>

This option cycles through the range of camera options. No recording is done, and the demo will end at the end of the timeout period, irrespective of whether all the options have been cycled. The time between cycles should be specified as a millisecond value.

	--framerate,	-fps		Specify the frames per second to record

At present, the minimum frame rate allowed is 2fps, and the maximum is 30fps. This is likely to change in the future.

	--penc,	-e		Display preview image after encoding

Switch on an option to display the preview after compression. This will show any compression artefacts in the preview window. In normal operation, the preview will show the camera output prior to being compressed. This option is not guaranteed to work in future releases.

	--intra,	-g		Specify the intra refresh period (key frame rate/GoP)

Sets the intra refresh period (GoP) rate for the recorded video. H264 video uses a complete frame (I-frame) every intra refresh period, from which subsequent frames are based. This option specifies the number of frames between each I-frame. Larger numbers here will reduce the size of the resulting video, and smaller numbers make the stream less error-prone.

	--qp,	-qp		Set quantisation parameter

Sets the initial quantisation parameter for the stream. Varies from approximately 10 to 40, and will greatly affect the quality of the recording. Higher values reduce quality and decrease file size. Combine this setting with a bitrate of 0 to set a completely variable bitrate.
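For example, a sketch of a fully variable bitrate recording:

# A bitrate of 0 with a fixed quantisation parameter gives a variable bitrate
raspivid -t 10000 -b 0 -qp 25 -o vbr.h264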

	--profile,	-pf		Specify H264 profile to use for encoding

Sets the H264 profile to be used for the encoding. Options are:

  • baseline

  • main

  • high

	--level,	-lev

Specifies the H264 encoder level to use for encoding. Options are 4, 4.1, and 4.2.

	--irefresh,	-if

Sets the H264 intra-refresh type. Possible options are cyclic, adaptive, both, and cyclicrows.

	--inline,	-ih		Insert PPS, SPS headers

Forces the stream to include PPS and SPS headers on every I-frame. Needed for certain streaming cases e.g. Apple HLS. These headers are small, so don’t greatly increase the file size.

	--spstimings,	-stm

Insert timing information into the SPS block.

	--timed,	-td		Do timed switches between capture and pause

This option allows the video capture to be paused and restarted at particular time intervals. Two values are required: the on time and the off time. On time is the amount of time the video is captured, and off time is the amount it is paused. The total time of the recording is defined by the timeout option. Note that the recording may take slightly over the timeout setting depending on the on and off times.

For example:

raspivid -o test.h264 -t 25000 --timed 2500,5000

will record over a period of 25 seconds. The recording will consist of 2500ms (2.5s) recorded segments with 5000ms (5s) gaps, repeating over the 25 seconds. So the entire recording will actually be only 10s long, since 4 segments of 2.5s = 10s separated by 5s gaps. So:

2.5s record - 5s pause - 2.5s record - 5s pause - 2.5s record - 5s pause - 2.5s record

gives a total recording period of 25s, but only 10s of actual recorded footage.

	--keypress,	-k		Toggle between record and pause on Enter keypress

On each press of the Enter key, the recording will be paused or restarted. Pressing X then Enter will stop recording and close the application. Note that the timeout value will be used to signal the end of recording, but is only checked after each Enter keypress; so if the system is waiting for a keypress, even if the timeout has expired, it will still wait for the keypress before exiting.

	--signal,	-s		Toggle between record and pause according to SIGUSR1

Sending a USR1 signal to the raspivid process will toggle between recording and paused. This can be done using the kill command, as below. You can find the raspivid process ID using pgrep raspivid.

kill -USR1 <process id of raspivid>

Note that the timeout value will be used to indicate the end of recording, but is only checked after each receipt of the SIGUSR1 signal; so if the system is waiting for a signal, even if the timeout has expired, it will still wait for the signal before exiting.

	--split,	-sp

When in a signal or keypress mode, each time recording is restarted, a new file is created.

	--circular,	-c

Select circular buffer mode. All encoded data is stored in a circular buffer until a trigger is activated, then the buffer is saved.

	--vectors,	-x

Turns on output of motion vectors from the H264 encoder to the specified file name.

	--flush,	-fl

Forces a flush of output data buffers as soon as video data is written. This bypasses any OS caching of written data, and can decrease latency.

	--save-pts,	-pts

Saves timestamp information to the specified file. Useful as an input file to mkvmerge.
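A sketch of that workflow, assuming mkvtoolnix is installed (older mkvmerge versions call the option --timecodes rather than --timestamps):

# Record the video and its per-frame timestamps
raspivid -t 10000 -o video.h264 -pts timestamps.txt
# Mux into an MKV container using the recorded timestamps for track 0
mkvmerge -o video.mkv --timestamps 0:timestamps.txt video.h264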

	--codec,	-cd

Specifies the encoder codec to use. Options are H264 and MJPEG. H264 can encode up to 1080p, whereas MJPEG can encode up to the sensor size, but at decreased framerates due to the higher processing and storage requirements.

	--initial,	-i		Define initial state on startup

Define whether the camera will start paused or will immediately start recording. Options are record or pause. Note that if you are using a simple timeout, and initial is set to pause, no output will be recorded.

	--segment,	-sg		Segment the stream into multiple files

Rather than creating a single file, the file is split into segments of approximately the number of milliseconds specified. In order to provide different filenames, you should add %04d or similar at the point in the filename where you want a segment count number to appear, e.g.:

--segment 3000 -o video%04d.h264

will produce video clips of approximately 3000ms (3s) long, named video0001.h264, video0002.h264 etc. The clips should be seamless (no frame drops between clips), but the accuracy of each clip length will depend on the intraframe period, as the segments will always start on an I-frame. They will therefore always be equal to or longer than the specified period.

The most recent version of raspivid will also allow the file name to be time-based, rather than using a segment number. For example:

--segment 3000 -o video_%c.h264

will produce file names formatted like so: video_Fri Jul 20 16:23:48 2018.h264

There are many different formatting options available. Note that the %d and %u options are not available, as they are used for the segment number formatting, and that some combinations may produce invalid file names.

	--wrap,	-wr		Set the maximum value for segment number

When outputting segments, this is the maximum the segment number can reach before it’s reset to 1, giving the ability to keep recording segments, but overwriting the oldest one. So if set to 4, in the segment example above, the files produced will be video0001.h264, video0002.h264, video0003.h264, and video0004.h264. Once video0004.h264 is recorded, the count will reset to 1, and video0001.h264 will be overwritten.
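For example, a sketch that keeps a rolling set of four 3-second segments, overwriting the oldest as it goes:

raspivid -t 0 -sg 3000 -wr 4 -o video%04d.h264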

	--start,	-sn		Set the initial segment number

When outputting segments, this is the initial segment number, giving the ability to resume a previous recording from a given segment. The default value is 1.

	--raw,	-r

Specify the output file name for any raw data files requested.

	--raw-format,	-rf

Specify the raw format to be used if raw output is requested. Options are yuv, rgb, and grey. grey simply saves the Y channel of the YUV image.

raspiyuv

Many of the options for raspiyuv are the same as those for raspistill. This section shows the differences.

Unsupported options:

--exif, --encoding, --thumb, --raw, --quality

Extra options:

	--rgb,	-rgb		Save uncompressed data as RGB888

This option forces the image to be saved as RGB data with 8 bits per channel, rather than YUV420.

Note that the image buffers saved in raspiyuv are padded to a horizontal size divisible by 32, so there may be unused bytes at the end of each line. Buffers are also padded vertically to be divisible by 16, and in the YUV mode, each plane of Y,U,V is padded in this way.

	--luma,	-y

Only outputs the luma (Y) channel of the YUV image. This is effectively the black and white, or intensity, part of the image.

	--bgr,	-bgr

Saves the image data as BGR data rather than YUV.

Command Line Examples

Still Captures

By default, captures are done at the highest resolution supported by the sensor. This can be changed using the -w and -h command line options.

Take a default capture after 2s (times are specified in milliseconds), displaying the preview, and save it as image.jpg:

raspistill -t 2000 -o image.jpg

Take a capture at a different resolution:

raspistill -t 2000 -o image.jpg -w 640 -h 480

Reduce the quality considerably to reduce file size:

raspistill -t 2000 -o image.jpg -q 5

Force the preview to appear at coordinate 100,100, with width 300 pixels and height 200 pixels:

raspistill -t 2000 -o image.jpg -p 100,100,300,200

Disable preview entirely:

raspistill -t 2000 -o image.jpg -n

Save the image as a PNG file (lossless compression, but slower than JPEG). Note that the filename suffix is ignored when choosing the image encoding:

raspistill -t 2000 -o image.png -e png

Add some EXIF information to the JPEG. This sets the Artist tag name to Boris, and the GPS altitude to 123.5m. Note that if setting GPS tags you should set as a minimum GPSLatitude, GPSLatitudeRef, GPSLongitude, GPSLongitudeRef, GPSAltitude, and GPSAltitudeRef:

raspistill -t 2000 -o image.jpg -x IFD0.Artist=Boris -x GPS.GPSAltitude=1235/10

Set an emboss image effect:

raspistill -t 2000 -o image.jpg -ifx emboss

Set the U and V channels of the YUV image to specific values (128:128 produces a greyscale image):

raspistill -t 2000 -o image.jpg -cfx 128:128

Run preview for 2s, with no saved image:

raspistill -t 2000

Take a time-lapse picture, every 10 seconds for 10 minutes (10 minutes = 600000ms), naming the files image_num_001_today.jpg, image_num_002_today.jpg and so on, with the latest picture also available under the name latest.jpg:

raspistill -t 600000 -tl 10000 -o image_num_%03d_today.jpg -l latest.jpg

Take a picture and send the image data to stdout:

raspistill -t 2000 -o -

Take a picture and send the image data to a file:

raspistill -t 2000 -o - > my_file.jpg

Run the camera forever, taking a picture when Enter is pressed:

raspistill -t 0 -k -o my_pics%02d.jpg

Video captures

Image size and preview settings are the same as for stills capture. Default size for video recording is 1080p (1920x1080).

Record a 5s clip with default settings (1080p30):

raspivid -t 5000 -o video.h264

Record a 5s clip at a specified bitrate (3.5Mbits/s):

raspivid -t 5000 -o video.h264 -b 3500000

Record a 5s clip at a specified framerate (5fps):

raspivid -t 5000 -o video.h264 -fps 5

Encode a 5s camera stream and send the image data to stdout:

raspivid -t 5000 -o -

Encode a 5s camera stream and send the image data to a file:

raspivid -t 5000 -o - > my_file.h264

Shell Error Codes

The applications described here will return a standard error code to the shell on completion. Possible error codes are:

C Define      Code   Description
EX_OK         0      Application ran successfully
EX_USAGE      64     Bad command line parameter
EX_SOFTWARE   70     Software or camera error
              130    Application terminated by Ctrl-C

libcamera and libcamera-apps Installation

Your Raspberry Pi should be running the latest version of the Raspberry Pi OS (Buster at the time of writing), and the camera and I2C interfaces must both be enabled (check the Interfaces tab of the Raspberry Pi Configuration tool, from the Preferences menu). First ensure your system, firmware and all its applications and repositories are up to date by entering the following commands into a terminal window.

sudo apt update
sudo apt full-upgrade

libcamera is under active development which sometimes means that new features need to be supported in Raspberry Pi OS, even before they are officially released. Therefore we currently recommend updating to the latest release candidate. To do this, first reboot your Pi, and then use

sudo rpi-update
Warning
Note that the release candidate is not as thoroughly tested as an official release. If your Raspberry Pi contains important or critical data we would strongly advise that it is backed up first, or that a fresh SD card is used for the purpose of trying libcamera.

Next, the /boot/config.txt file must be updated to load and use the camera driver, by adding the following to the bottom.

dtoverlay=imx219

If you are using a sensor other than the imx219 you will need to supply the alternative name here (for example, ov5647 for the V1 camera, or imx477 for the HQ Cam).

NOTE: after rebooting, control of the camera system will be passed to the ARM cores, and firmware-based camera functions (such as raspistill and so forth) will no longer work. Setting /boot/config.txt back and rebooting will restore the previous behaviour.

Select the Correct Graphics Driver

There are 3 different graphics drivers available on the Raspberry Pi: firmware, FKMS and KMS. The firmware graphics driver cannot be used with libcamera-apps. The Raspberry Pi 4 and 400 use the newer FKMS graphics driver by default: this is compatible with libcamera-apps. For all other models of Raspberry Pi, you must select the FKMS driver by adding the following line to the /boot/config.txt file:

dtoverlay=vc4-fkms-v3d

Building libcamera and qcam

The build system and runtime environment of libcamera have a number of dependencies. They can be installed with the following commands.

sudo apt install libboost-dev
sudo apt install libgnutls28-dev openssl libtiff5-dev
sudo apt install qtbase5-dev libqt5core5a libqt5gui5 libqt5widgets5
sudo apt install meson
sudo pip3 install pyyaml ply

The Qt libraries are only required for libcamera's qcam demo app.

Unfortunately, at the time of writing, the default version of meson is a little old, so please execute:

sudo pip3 install --upgrade meson

We can now check out the code and build libcamera as follows. Note that if you are using a 1GB system (such as a Pi 3) you may need to replace ninja -C build by ninja -C build -j 2 as this will stop ninja exhausting the system memory and aborting.

git clone git://linuxtv.org/libcamera.git
cd libcamera
meson build
cd build
meson configure -Dpipelines=raspberrypi -Dtest=false
cd ..
ninja -C build
sudo ninja -C build install

At this stage you may wish to check that qcam works. Type build/src/qcam/qcam and check that you see a camera image.

Raspberry Pi’s libcamera-apps

Raspberry Pi’s libcamera-apps provide very similar functionality to the raspistill and raspivid applications that use the proprietary firmware-based camera stack. To build them, we must first install libepoxy.

cd
sudo apt install libegl1-mesa-dev
git clone https://github.com/anholt/libepoxy.git
cd libepoxy
mkdir _build
cd _build
meson
ninja
sudo ninja install

Finally we can build the libcamera-apps. As we saw previously, 1GB platforms may need make -j2 in place of make -j4.

cd
sudo apt install cmake libboost-program-options-dev libdrm-dev libexif-dev
git clone https://github.com/raspberrypi/libcamera-apps.git
cd libcamera-apps
mkdir build
cd build
cmake ..
make -j4

To check everything is working correctly, type ./libcamera-hello - you should see a preview window displayed for about 5 seconds.

Note

For Pi 3 devices, as we saw previously, 1GB devices may need make -j2 instead of make -j4.

Also, Pi 3s do not by default use the correct GL driver, so please ensure you have dtoverlay=vc4-fkms-v3d in the [all] (not in the [pi4]) section of your /boot/config.txt file.

Further Documentation

You can find out more in the Raspberry Pi Camera Algorithm and Tuning Guide.

More information on the libcamera-apps is available on GitHub.

Creating Timelapse Video

To create a time-lapse video, you simply configure the Raspberry Pi to take a picture at a regular interval, such as once a minute, then use an application to stitch the pictures together into a video. There are a couple of ways of doing this.

Using raspistill Timelapse Mode

The raspistill application has a built in time-lapse mode, using the --timelapse (or -tl) command line switch. The value that follows the switch is the time between shots in milliseconds:

raspistill -t 30000 -tl 2000 -o image%04d.jpg
Note

The %04d in the output filename: this indicates the point in the filename where you want a frame count number to appear. So, for example, the command above will produce a capture every two seconds (2000ms), over a total period of 30 seconds (30000ms), named image0001.jpg, image0002.jpg, and so on, through to image0015.jpg.

The %04d indicates a four-digit number, with leading zeros added to make up the required number of digits. So, for example, %08d would result in an eight-digit number. You can miss out the 0 if you don’t want leading zeros.

If a timelapse value of 0 is entered, the application will take pictures as fast as possible. Note that there’s a minimum enforced pause of approximately 30 milliseconds between captures to ensure that exposure calculations can be made.

Automating using cron Jobs

A good way to automate taking a picture at a regular interval is using cron. Open the cron table for editing:

crontab -e

This will either ask which editor you would like to use, or open in your default editor. Once you have the file open in an editor, add the following line to schedule taking a picture every minute (referring to the Bash script from the raspistill page):

* * * * * /home/pi/camera.sh 2>&1

Save and exit and you should see the message:

crontab: installing new crontab

Make sure that you use e.g. %04d to make raspistill output each image to a new file: if you don’t, then each time raspistill writes an image it will overwrite the same file.

Stitching Images Together

Now you’ll need to stitch the photos together into a video. You can do this on the Pi using ffmpeg but the processing will be slow. You may prefer to transfer the image files to your desktop computer or laptop and produce the video there.

First you will need to install ffmpeg if it’s not already installed.

sudo apt install ffmpeg

Now you can use the ffmpeg tool to convert your JPEG files into an mp4 video:

ffmpeg -r 10 -f image2 -pattern_type glob -i 'image*.jpg' -s 1280x720 -vcodec libx264 timelapse.mp4

On a Raspberry Pi 3, this can encode a little more than two frames per second. The performance of other Pi models will vary. The parameters used are:

  • -r 10 Set frame rate (Hz value) to ten frames per second in the output video.

  • -f image2 Set ffmpeg to read from a list of image files specified by a pattern.

  • -pattern_type glob When importing the image files, use wildcard patterns (globbing) to interpret the filename input by -i, in this case "image*.jpg", where "*" would be the image number.

  • -i 'image*.jpg' The input file specification (to match the files produced during the capture).

  • -s 1280x720 Scale to 720p. You can also use 1920x1080, or lower resolutions, depending on your requirements.

  • -vcodec libx264 Use the software x264 encoder.

  • timelapse.mp4 The name of the output video file.

ffmpeg has a comprehensive parameter set for varying encoding options and other settings. These can be listed using ffmpeg --help.

Shooting RAW using the Camera Modules

The definition of raw images can vary. The usual meaning is raw Bayer data directly from the sensor, although some may regard an uncompressed image that has passed through the ISP (and has therefore been processed) as raw.

Both options are available from the Raspberry Pi cameras.

Processed, Non-Lossy Images

The usual output from raspistill is a compressed JPEG file that has passed through all the stages of image processing to produce a high-quality image. However, JPEG, being a lossy format, does throw away some information that the user may want.

raspistill has an encoding option that allows you to specify the output format: either jpg, gif, bmp or png. The latter two are non-lossy, so no data is thrown away in an effort to improve compression, but do require conversion from the original YUV, and because these formats do not have hardware support they produce images slightly more slowly than JPEG.

e.g.

raspistill --encoding png -o fred.png

Another option is to use the raspiyuv application. This avoids any final formatting stage, and writes raw YUV420 or RGB888 data to the requested file. YUV420 is the format used in much of the ISP, so this can be regarded as a dump of the processed image data at the end of the ISP processing.

Unprocessed Images

For some applications, such as astrophotography, having the raw Bayer data direct from the sensor can be useful. This data will need to be post-processed to produce a useful image.

raspistill has a raw option that will append this raw Bayer data onto the end of the output JPEG file.

raspistill --raw -o fred.jpg

The raw data will need to be extracted from the JPEG file.
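One hedged way to do that: the appended raw block begins with the ASCII marker BRCM, so you can locate the marker's byte offset and copy from there to the end of the file (GNU grep assumed):

# Find the byte offset of the BRCM header that starts the raw block
OFFSET=$(grep -abo 'BRCM' fred.jpg | head -n 1 | cut -d: -f1)
# Copy from the marker to the end of the file (tail -c +N is 1-indexed)
tail -c +$((OFFSET + 1)) fred.jpg > fred.raw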

Long Exposures

The different camera modules have different capabilities with regard to exposure times:

Module        Max exposure (seconds)
V1 (OV5647)   6
V2 (IMX219)   10
HQ (IMX477)   230

Due to the way the ISP works, by default asking for a long exposure can result in the capture process taking up to 7 times the exposure time, so a 200 second exposure on the HQ camera could take 1400 seconds to actually return an image. This is due to the way the camera system works out the correct exposures and gains to use in the image, using it’s AGC (automatic gain control) and AWB (automatic white balance) algorithms. The system needs a few frames to calculate these numbers in order to produce a decent image. When combined with frame discards at the start of processing (in case they are corrupt), and the switching between preview and captures modes, this can result in up to 7 frames needed to produce a final image. With long exposures, that can take a long time.

Fortunately, the camera parameters can be altered to reduce the capture time dramatically; however, this means turning off the automatic algorithms and manually providing values for the AGC and, if required, AWB. In addition, a burst mode can be used to mitigate the effects of moving between preview and capture modes.

For the HQ camera, the following example will take a 100 second exposure.

raspistill -t 10 -bm -ex off -ag 1 -ss 100000000 -st -o long_exposure.jpg

This example turns on burst mode (-bm), which disables the switching between preview and capture modes, turns off the automatic exposure algorithm (-ex off), and manually sets the analogue gain to 1 (-ag 1). The shutter speed (-ss) is given in microseconds, so 100000000 corresponds to a 100 second exposure. The -st option forces statistics such as AWB to be calculated from the captured frame, avoiding the need to provide specific values, although these can be entered if necessary.
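For the V2 camera, which is limited to 10 second exposures, an equivalent command (a sketch adapted from the HQ example above) would be:

raspistill -t 10 -bm -ex off -ag 1 -ss 10000000 -st -o long_exposure.jpg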

The V4L2 Driver

The V4L2 driver provides a standard Linux driver for accessing camera features: this is the driver needed to use a Raspberry Pi camera as, for example, a webcam. The V4L2 driver provides a standard API on top of the firmware-based camera system.

Installing the V4L2 Driver

Installation of the V4L2 driver is automatic. It is loaded as a child of the VCHIQ driver, and once loaded it will check how many cameras are attached and then create the right number of device nodes.

/dev/videoX   Default Action

video10       Decode
video11       Encode
video12       Simple ISP
video13       Full ISP In
video14       Full ISP Hi-res Out
video15       Full ISP Lo-res Out
video16       Full ISP statistics
video19       HEVC Decode

Testing the Driver

There are many Linux applications that use the V4L2 API. The kernel maintainers provide a test tool called qv4l2, which can be installed from the Raspberry Pi OS repositories as follows:

sudo apt install qv4l2

Running this tool will test whether the driver has been successfully installed.
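You can also confirm that the device nodes listed above have been created using the v4l2-ctl utility (from the v4l-utils package):

v4l2-ctl --list-devices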

Using the Driver

Please see the V4L2 documentation for details on using this driver.
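As a quick example of using the driver (a sketch; the exact resolutions and formats available will depend on the attached camera), a single JPEG frame can be captured from /dev/video0 with v4l2-ctl:

v4l2-ctl --set-fmt-video=width=1920,height=1080,pixelformat=MJPG
v4l2-ctl --stream-mmap --stream-count=1 --stream-to=frame.jpg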

Camera Serial Interface 2 (CSI2) "Unicam"

The SoCs used across the Raspberry Pi range all have two camera interfaces that support either CSI-2 D-PHY 1.1 or CCP2 (Compact Camera Port 2) sources. This interface is known by the codename "Unicam". The first instance of Unicam supports 2 CSI-2 data lanes, whilst the second supports 4. Each lane can run at up to 1Gbit/s (DDR, so the max link frequency is 500MHz).

However, the normal variants of the Raspberry Pi only expose the second instance, and route out only 2 of the data lanes to the camera connector. The Compute Module range routes out all lanes from both peripherals.

Software Interfaces

There are 3 independent software interfaces available for communicating with the Unicam peripheral:

Firmware

The closed source GPU firmware has drivers for Unicam and three camera sensors plus a bridge chip. They are the Raspberry Pi Camera v1.3 (Omnivision OV5647), Raspberry Pi Camera v2.1 (Sony IMX219), Raspberry Pi HQ camera (Sony IMX477), and an unsupported driver for the Toshiba TC358743 HDMI->CSI2 bridge chip.

This driver integrates the source driver, Unicam, ISP, and tuner control into a full camera stack delivering processed output images. It can be used via MMAL, OpenMAX IL and V4L2 using the bcm2835-v4l2 kernel module. Only Raspberry Pi cameras are supported via this interface.

MMAL rawcam component

This was an interim option before the V4L2 driver was available. The MMAL component vc.ril.rawcam allows receiving of the raw CSI-2 data in the same way as the V4L2 driver, but all source configuration has to be done by userland over whatever interface the source requires. The raspiraw application is available on GitHub. It uses this component and the standard I2C register sets for OV5647, IMX219, and ADV7282M to support streaming.

V4L2

There is a fully open source kernel driver available for the Unicam block; this is a kernel module called bcm2835-unicam. This interfaces to V4L2 subdevice drivers for the source to deliver the raw frames. This bcm2835-unicam driver controls the sensor, and configures the CSI-2 receiver so that the peripheral will write the raw frames to SDRAM for V4L2 to deliver to applications. Except for the ability to unpack the CSI-2 Bayer formats to 16 bits/pixel, there is no image processing between the image source (e.g. camera sensor) and bcm2835-unicam placing the image data in SDRAM.

|------------------------|
|     bcm2835-unicam     |
|------------------------|
     ^             |
     |      |-------------|
 img |      |  Subdevice  |
     |      |-------------|
     v   -SW/HW-   |
|---------|   |-----------|
| Unicam  |   | I2C or SPI|
|---------|   |-----------|
csi2/ ^             |
ccp2  |             |
    |-----------------|
    |     sensor      |
    |-----------------|

Mainline Linux has a range of existing drivers. The Raspberry Pi kernel tree has some additional drivers and device tree overlays to configure them that have all been tested and confirmed to work. They include:

Device                     Type                            Notes

Omnivision OV5647          5MP Camera                      Original Raspberry Pi Camera
Sony IMX219                8MP Camera                      Revision 2 Raspberry Pi camera
Sony IMX477                12MP Camera                     Raspberry Pi HQ camera
Toshiba TC358743           HDMI to CSI-2 bridge
Analog Devices ADV728x-M   Analog video to CSI-2 bridge    No interlaced support
Infineon IRS1125           Time-of-flight depth sensor     Supported by a third party
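To use one of these source drivers, the corresponding device tree overlay must be enabled in config.txt. For example, for the Sony IMX219 (assuming the standard overlay name):

dtoverlay=imx219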

As the subdevice driver is also a kernel driver, with a standardised API, 3rd parties are free to write their own for any source of their choosing.

Developing Third-Party Drivers

This is the recommended approach to interfacing via Unicam.

When developing a driver for a new device intended to be used with the bcm2835-unicam module, you need the driver and corresponding device tree overlays. Ideally, the driver should be submitted to the linux-media mailing list for code review and merging into mainline, and then moved to the Raspberry Pi kernel tree; but exceptions may be made for the driver to be reviewed and merged directly into the Raspberry Pi kernel.

Please note that all kernel drivers are licensed under the GPLv2 licence, and therefore source code MUST be available; shipping binary-only modules is a violation of that licence.

The bcm2835-unicam driver has been written to accommodate all the types of CSI-2 source driver currently found in the mainline Linux kernel. Broadly, these can be split into camera sensors and bridge chips. Bridge chips allow for conversion between some other format and CSI-2.

Camera sensors

The sensor driver for a camera sensor is responsible for all configuration of the device, usually via I2C or SPI. Rather than writing a driver from scratch, it is often easier to take an existing driver as a basis and modify it as appropriate.

The IMX219 driver is a good starting point. This driver supports both 8bit and 10bit Bayer readout, so enumerating frame formats and frame sizes is slightly more involved.

Sensors generally support V4L2 user controls. Not all these controls need to be implemented in a driver. The IMX219 driver only implements a small subset, listed below, the implementation of which is handled by the imx219_set_ctrl function.

  • V4L2_CID_PIXEL_RATE / V4L2_CID_VBLANK / V4L2_CID_HBLANK: allows the application to set the frame rate.

  • V4L2_CID_EXPOSURE: sets the exposure time in lines. The application needs to use V4L2_CID_PIXEL_RATE, V4L2_CID_HBLANK, and the frame width to compute the line time.

  • V4L2_CID_ANALOGUE_GAIN: analogue gain in sensor specific units.

  • V4L2_CID_DIGITAL_GAIN: optional digital gain in sensor specific units.

  • V4L2_CID_HFLIP / V4L2_CID_VFLIP: flips the image either horizontally or vertically. Note that this operation may change the Bayer order of the data in the frame, as is the case on the imx219.

  • V4L2_CID_TEST_PATTERN / V4L2_CID_TEST_PATTERN_*: Enables output of various test patterns from the sensor. Useful for debugging.

In the case of the IMX219, many of these controls map directly onto register writes to the sensor itself.
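From userspace, these controls can be inspected and set with v4l2-ctl. For example (a sketch; the subdevice node and control names are assumptions that depend on the driver and kernel version):

v4l2-ctl -d /dev/v4l-subdev0 --list-ctrls
v4l2-ctl -d /dev/v4l-subdev0 --set-ctrl=exposure=500,analogue_gain=100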

Device tree is used to select the sensor driver and configure parameters such as the number of CSI-2 lanes, continuous clock lane operation, and link frequency (often only one is supported).

Bridge chips

These are devices that convert an incoming video stream, for example HDMI or composite, into a CSI-2 stream that can be accepted by the Raspberry Pi CSI-2 receiver.

Handling bridge chips is more complicated, as unlike camera sensors they have to respond to the incoming signal and report that to the application.

The mechanisms for handling bridge chips can be broadly split into either analogue or digital.

When using ioctls in the sections below, an S in the ioctl name means it is a set function, whilst G is a get function and _ENUM enumerates a set of permitted values.

Analogue video sources

Analogue video sources use the standard ioctls for detecting and setting video standards: VIDIOC_G_STD, VIDIOC_S_STD, VIDIOC_ENUMSTD, and VIDIOC_QUERYSTD.

Selecting the wrong standard will generally result in corrupt images. Setting the standard will typically also set the resolution on the V4L2 CAPTURE queue; it cannot be set via VIDIOC_S_FMT. Generally, requesting the detected standard via VIDIOC_QUERYSTD and then setting it with VIDIOC_S_STD before streaming is a good idea.
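For example, with v4l2-ctl this sequence would look like the following (a sketch; these options wrap VIDIOC_QUERYSTD and VIDIOC_S_STD respectively):

v4l2-ctl --get-detected-standard
v4l2-ctl --set-standard=pal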

Digital video sources

For digital video sources, such as HDMI, there is an alternate set of calls that allow specifying of all the digital timing parameters (VIDIOC_G_DV_TIMINGS, VIDIOC_S_DV_TIMINGS, VIDIOC_ENUM_DV_TIMINGS, and VIDIOC_QUERY_DV_TIMINGS).

As with analogue bridges, the timings typically fix the V4L2 CAPTURE queue resolution, and calling VIDIOC_S_DV_TIMINGS with the result of VIDIOC_QUERY_DV_TIMINGS before streaming should ensure the format is correct.

Depending on the bridge chip and the driver, it may be possible for changes in the input source to be reported to the application via VIDIOC_SUBSCRIBE_EVENT and V4L2_EVENT_SOURCE_CHANGE.
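For example, v4l2-ctl can block until such an event arrives (a sketch; this assumes the driver implements the event and that pad 0 is the relevant input):

v4l2-ctl --wait-for-event=source_change=0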

Currently supported devices

There are 2 bridge chips that are currently supported by the Raspberry Pi Linux kernel, the Analog Devices ADV728x-M for analogue video sources, and the Toshiba TC358743 for HDMI sources.

Analog Devices ADV728x(A)-M Analogue video to CSI2 bridge

These chips convert composite, S-video (Y/C), or component (YPrPb) video into a single lane CSI-2 interface, and are supported by the ADV7180 kernel driver.

Product details for the various versions of this chip can be found on the Analog Devices website.

Because of some missing code in the current core V4L2 implementation, selecting the source fails, so the Raspberry Pi kernel version adds a kernel module parameter called dbg_input to the ADV7180 kernel driver which sets the input source every time VIDIOC_S_STD is called. At some point mainline will fix the underlying issue (a disconnect between the kernel API call s_routing and the userspace call VIDIOC_S_INPUT), and this modification will be removed.

Please note that receiving interlaced video is not supported, therefore the ADV7281(A)-M version of the chip is of limited use, as it doesn’t have the necessary I2P deinterlacing block. Also, when selecting a device, ensure you specify the -M option; without it you will get a parallel output bus, which cannot be interfaced to the Raspberry Pi.

There are no known commercially available boards using these chips, but this driver has been tested via the Analog Devices EVAL-ADV7282-M evaluation board.

This driver can be loaded using the config.txt dtoverlay adv7282m if you are using the ADV7282-M chip variant, or adv728x-m with a parameter of either adv7280m=1, adv7281m=1, or adv7281ma=1 if you are using a different variant, e.g.

dtoverlay=adv728x-m,adv7280m=1

Toshiba TC358743 HDMI to CSI2 bridge

This is an HDMI to CSI-2 bridge chip, capable of converting video data at up to 1080p60.

Information on this bridge chip can be found on the Toshiba website.

The TC358743 interfaces HDMI in to CSI-2 and I2S outputs. It is supported by the TC358743 kernel module.

The chip supports incoming HDMI signals as either RGB888, YUV444, or YUV422, at up to 1080p60. It can forward RGB888, or convert it to YUV444 or YUV422, and convert either way between YUV444 and YUV422. Only RGB888 and YUV422 support has been tested. When using 2 CSI-2 lanes, the maximum rates that can be supported are 1080p30 as RGB888, or 1080p50 as YUV422. When using 4 lanes on a Compute Module, 1080p60 can be received in either format.

HDMI negotiates the resolution by a receiving device advertising an EDID of all the modes that it can support. The kernel driver has no knowledge of the resolutions, frame rates, or formats that you wish to receive, therefore it is up to the user to provide a suitable file. This is done via the VIDIOC_S_EDID ioctl, or more easily using v4l2-ctl --fix-edid-checksums --set-edid=file=filename.txt (adding the --fix-edid-checksums option means that you don’t have to get the checksum values correct in the source file). Generating the required EDID file (a textual hexdump of a binary EDID file) is not too onerous, and there are tools available to generate them, but it is beyond the scope of this page.

As described above, use the DV_TIMINGS ioctls to configure the driver to match the incoming video. The easiest approach for this is to use the command v4l2-ctl --set-dv-bt-timings query. The driver does support generating the SOURCE_CHANGED events, should you wish to write an application to handle a changing source. Changing the output pixel format is achieved by setting it via VIDIOC_S_FMT; however, only the pixel format field will be updated, as the resolution is configured by the DV timings.
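Putting this together, a typical capture sequence might look like the following (a sketch: the EDID file, buffer count, and the UYVY pixel format for a YUV422 source are assumptions):

v4l2-ctl --set-edid=file=edid.txt --fix-edid-checksums
v4l2-ctl --set-dv-bt-timings query
v4l2-ctl --set-fmt-video=pixelformat=UYVY
v4l2-ctl --stream-mmap --stream-count=100 --stream-to=video.raw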

There are a couple of commercially available boards that connect this chip to the Raspberry Pi. The Auvidea B101 and B102 are the most widely obtainable, but other equivalent boards are available.

This driver is loaded using the config.txt dtoverlay tc358743.

The chip also supports capturing stereo HDMI audio via I2S. The Auvidea boards break the relevant signals out onto a header, which can be connected to the Pi’s 40 pin header. The required wiring is:

Signal     B101 header   Pi 40-pin header   BCM GPIO

LRCK/WFS   7             35                 19
BCK/SCK    6             12                 18
DATA/SD    5             38                 20
GND        8             39                 N/A

The tc358743-audio overlay is required in addition to the tc358743 overlay. This should create an ALSA recording device for the HDMI audio. Please note that there is no resampling of the audio. The presence of audio is reflected in the V4L2 control TC358743_CID_AUDIO_PRESENT / "audio-present", and the sample rate of the incoming audio is reflected in the V4L2 control TC358743_CID_AUDIO_SAMPLING_RATE / "Audio sampling-frequency". Recording when no audio is present will generate warnings, as will recording at a sample rate different from that reported.
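Once the ALSA device has been created, a test recording can be made with arecord (a sketch; the card and device numbers are assumptions that can be checked with arecord -l, and the sample rate should match the one reported by the driver):

arecord -D hw:1,0 -c 2 -r 48000 -f S16_LE test.wav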

Raspberry Pi HQ Camera Filter Removal

The High Quality Camera contains an IR filter, which is used to reduce the camera’s sensitivity to infrared light and ensure that outdoor photos look more natural. However, some nature photography can be enhanced by removing this filter: the colours of the sky, plants, and water can appear different without it. The camera can also be used without the filter for night vision in a location that is illuminated with infrared light.

Warning
This procedure cannot be reversed: the adhesive that attaches the filter will not survive being lifted and replaced, and although the IR filter is about 1.1mm thick, it may crack when removed. Removing it will void the warranty on the product. Nevertheless, removing the filter will be desirable to some users.

To remove the filter:

  1. Work in a clean and dust-free environment, as the sensor will be exposed to the air.

  2. Unscrew the two 1.5 mm hex lock keys on the underside of the main circuit board. Be careful not to let the washers roll away. There is a gasket of slightly sticky material between the housing and PCB which will require some force to separate.

  3. Lift up the board and place it down on a very clean surface. Make sure the sensor does not touch the surface.

  4. Before completing the next step, read through all of the steps and decide whether you are willing to void your warranty. Do not proceed unless you are sure that you are willing to void your warranty.

  5. Turn the lens around so that it is "looking" upwards and place it on a table. You may try some ways to weaken the adhesive, such as a little isopropyl alcohol and/or heat (~20-30 C). Using a pen top or similar soft plastic item, push down on the filter only at the very edges where the glass attaches to the aluminium - to minimise the risk of breaking the filter. The glue will break and the filter will detach from the lens mount.

  6. Given that changing lenses will expose the sensor, at this point you could affix a clear filter (for example, OHP plastic) to minimize the chance of dust entering the sensor cavity.

  7. Replace the main housing over the circuit board. Be sure to realign the housing with the gasket, which remains on the circuit board.

  8. The nylon washer prevents damage to the circuit board; apply this washer first. Next, fit the steel washer, which prevents damage to the nylon washer.

  9. Screw down the two hex lock keys. As long as the washers have been fitted in the correct order, they do not need to be screwed very tightly.

  10. Note that it is likely to be difficult or impossible to glue the filter back in place and return the device to functioning as a normal optical camera.