That is measuring the latency from the frame start interrupt (i.e. the end of exposure for the first line) to the ARM receiving the encoded buffer. That's probably a more useful number than the one you measured.

Using "raspivid -w 1280 -h 960 -o /dev/null -n" I get a latency of 46-48ms. Switching to 1080P I get 80-81ms.
Code: Select all
pi@raspberrypi:~ $ /opt/vc/bin/tvservice -s
state 0x12000a [HDMI DMT (85) RGB full 16:9], 1280x720 @ 60.00Hz, progressive
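To reproduce the latency comparison above, the only thing that changes between the two measurements is the requested resolution. These are standard raspivid options; the -t 5000 is just my addition to bound the run to 5 seconds, and the latency figure itself only appears once you apply the logging patch shown further down.

Code: Select all
# ~46-48ms reported at 1280x960 (no preview, encoded output discarded)
raspivid -w 1280 -h 960 -o /dev/null -n -t 5000

# ~80-81ms reported at 1920x1080
raspivid -w 1920 -h 1080 -o /dev/null -n -t 5000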
Mauronic wrote:
Is there a camera / interface option that will allow immediate readout so we get each line as scanned without waiting for the exposure to finish? Or is this how it works now?

Physically impossible. How can you read out a line that isn't exposed? It'll be black.
Mauronic wrote:
It was suggested to me to consider a very fast (and inefficient) encoding like MJPEG to get the data over WiFi ASAP. In this case I would budget a max bandwidth of 4 - 10MBps.

The recommendation for 1080P would be at least 10Mbit/s for an efficient codec like H264. At ~VGA you still really want > 1Mbit/s with H264. The latency difference between MJPEG and H264 really isn't huge.
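As a rough sketch of what that budget looks like on the command line (the flag names are standard raspivid options; the bitrate figure is just the recommendation above, and -t/-o/-n are my assumptions for a throwaway test run):

Code: Select all
# 1080P H264 at ~10Mbit/s (bitrate is given in bits per second)
raspivid -w 1920 -h 1080 -b 10000000 -o /dev/null -n -t 10000

# Same capture switched to MJPEG for comparison
raspivid -w 1920 -h 1080 -cd MJPEG -b 10000000 -o /dev/null -n -t 10000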
Code: Select all
diff --git a/host_applications/linux/apps/raspicam/RaspiVid.c b/host_applications/linux/apps/raspicam/RaspiVid.c
index 2a9e586..7dac347 100644
--- a/host_applications/linux/apps/raspicam/RaspiVid.c
+++ b/host_applications/linux/apps/raspicam/RaspiVid.c
@@ -1207,6 +1207,7 @@ static void encoder_buffer_callback(MMAL_PORT_T *port, MMAL_BUFFER_HEADER_T *buf
    MMAL_BUFFER_HEADER_T *new_buffer;
    static int64_t base_time = -1;
    static int64_t last_second = -1;
+   int64_t stc;

    // All our segment times based on the receipt of the first encoder callback
    if (base_time == -1)
@@ -1221,6 +1222,9 @@ static void encoder_buffer_callback(MMAL_PORT_T *port, MMAL_BUFFER_HEADER_T *buf
       int bytes_written = buffer->length;
       int64_t current_time = get_microseconds64()/1000;

+      mmal_port_parameter_get_int64(port, MMAL_PARAMETER_SYSTEM_TIME, &stc);
+      vcos_log_error("Latency is %lld usecs", stc-buffer->pts);
+
       vcos_assert(pData->file_handle);

       if(pData->pstate->inlineMotionVectors) vcos_assert(pData->imv_file_handle);
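For reference, a sketch of how that measurement can be run: rebuild the userland tree with the patch applied, then run raspivid as usual and the added vcos_log_error() line prints the per-buffer latency. The ~/userland path and the buildme helper are assumptions based on a standard checkout of the userland source; adjust for your own setup.

Code: Select all
# Rebuild the userland apps with the patched RaspiVid.c
cd ~/userland
./buildme

# Run the patched binary; each encoded buffer logs "Latency is ... usecs"
raspivid -w 1280 -h 960 -o /dev/null -n -t 5000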
Mauronic wrote:
I did some research on this and found a company called Arducam that sells standalone camera modules and various camera boards. Could this hardware be leveraged in some way?

No, they're still rolling shutter sensors. From a system architecture perspective they are nearly identical to how the Pi camera works.
Off topic, but if you really have lots of light, then exposure time can be down to a few microseconds.
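To illustrate, the exposure can be forced with raspivid's -ss option, which takes the shutter time in microseconds; the 100us value here is only an example and the sensor will clamp anything shorter than it can actually achieve.

Code: Select all
# Request a 100 microsecond exposure - only sensible with a very bright scene
raspivid -w 1280 -h 960 -ss 100 -o /dev/null -n -t 5000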