husamwadi
Posts: 18
Joined: Mon Jan 06, 2014 7:54 am

Re: Stereoscopic Raspberry Pi Camera ISSUE!

Wed Jan 08, 2014 6:20 pm

recantha2 wrote:Thinking about your problem, I was just thinking how to do a stereoscopic camera... The only way, really, if you want decent FPS using the Pi camera module is to have two cameras, and therefore two Pis. You'd need to figure out some way to sync them so that the two video streams/photographs were displayed at the same time, together. That means you'd have a bit of a delay, but I'm sure that's not going to bother you if you're taking stereo stills. If you're interested in this concept, have a word with David Whale (@whaleygeek on Twitter) about his Pi-to-Pi networking workshop materials. He's got it set up so that you program one Pi as a server and one Pi as a client, and it then establishes a chat session between the two. You could easily convert this into a take-a-still-now trigger.
That seems interesting. I will talk to David Whale and see if I can learn about networking two Pis together.

Thanks!
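For the record, here is a minimal sketch of the kind of take-a-still-now trigger recantha2 describes, using plain TCP sockets. The port number, the b"SNAP" message, and the raspistill call in the comment are my own choices, not anything from David Whale's materials:

```python
# One Pi waits for a trigger message; the other sends it. The capture
# command itself (e.g. raspistill) would go in the on_trigger callback.
import socket

PORT = 5005  # arbitrary choice

def serve_trigger(host="0.0.0.0", port=PORT, on_trigger=None):
    """Run on the camera Pi: wait for one trigger message, then capture."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            msg = conn.recv(16)
            if msg == b"SNAP" and on_trigger:
                on_trigger()  # e.g. subprocess.run(["raspistill", "-o", "left.jpg"])
        return msg

def send_trigger(host, port=PORT):
    """Run on the controlling Pi: tell the other Pi to take a still now."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((host, port))
        cli.sendall(b"SNAP")
```

Run serve_trigger on both camera Pis and fire send_trigger at each from a third machine (or from one of the pair) to get near-simultaneous stills.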

towolf
Posts: 421
Joined: Fri Jan 18, 2013 2:11 pm

Re: Stereoscopic Raspberry Pi Camera ISSUE!

Wed Jan 08, 2014 11:55 pm

I think I will try this out in my lab tomorrow. I have two cams. I just need to find a second free RPi. We have a Rift lying around too.

With GStreamer, accurate V4l2 timestamps and NTP synch this should be possible. Just plug together some RTP pipelines on the terminal, combine with videobox side-by-side and slap a distortion shader on top with the glshader element (cf. http://lubosz.wordpress.com/2013/08/28/ ... ulus-rift/ )
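The principle can be illustrated without GStreamer: once both cameras stamp every frame against a shared (NTP-disciplined) clock, pairing up left/right frames is just a nearest-timestamp match. A toy sketch, where the 17 ms tolerance is my own choice of roughly half a frame at 30 fps:

```python
# Toy illustration of timestamp-based pairing: given per-frame capture
# times from two cameras (seconds, same clock), pair each left frame
# with the nearest-in-time right frame, rejecting pairs that are too
# far apart to be the "same" moment.
def pair_frames(left_ts, right_ts, tolerance=0.017):
    """Return (i, j) index pairs with |left_ts[i] - right_ts[j]| <= tolerance."""
    pairs = []
    j = 0
    for i, t in enumerate(left_ts):
        # advance j while the next right frame is closer to t
        while j + 1 < len(right_ts) and abs(right_ts[j + 1] - t) < abs(right_ts[j] - t):
            j += 1
        if abs(right_ts[j] - t) <= tolerance:
            pairs.append((i, j))
    return pairs
```

In the real pipeline rtpbin does this matching internally from the RTP/RTCP timing, but the failure mode is the same: if the clocks drift, frames stop pairing up and the eyes diverge.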

Will report back.

husamwadi
Posts: 18
Joined: Mon Jan 06, 2014 7:54 am

Re: Stereoscopic Raspberry Pi Camera ISSUE!

Thu Jan 09, 2014 1:49 am

After opening 4 tabs in the raspi browser and watching the system stutter, I was afraid that trying to add a barrel distortion to a live stream on the raspi would blow it up :P

If the frame rate does suffer, then I would recommend a fisheye lens:

http://www.amazon.com/gp/aw/d/B0089I4AR ... SY165_QL70

It adds almost the right distortion and uses no GPU.

husamwadi
Posts: 18
Joined: Mon Jan 06, 2014 7:54 am

Re: Stereoscopic Raspberry Pi Camera ISSUE!

Thu Jan 09, 2014 8:28 am

Here is another guy who took a Raspberry Pi and one camera and piped it through an Oculus Rift:

He wrote his code in C++, but I didn't know how to compile it:

http://rifty-business.blogspot.com/2013 ... -2013.html

and here is his code:

https://github.com/jherico/TronCostume

Also, he didn't use a raspi camera; he used a normal webcam, which wasn't usable at HD (he went ahead with 320x240).

towolf
Posts: 421
Joined: Fri Jan 18, 2013 2:11 pm

Re: Stereoscopic Raspberry Pi Camera ISSUE!

Thu Jan 09, 2014 6:19 pm

So, I have made it work with the approach I thought up on the fly, using GStreamer only.

It definitely works. The two images are in bang-on synch for long stretches of time until there's a glitch and they suddenly drift apart (which looks fun as well, like being drugged or something).

I'm still trying to figure out the optimal HMD warp fragment shader. We were still having issues with aspect ratio.

On the whole, I was already a bit underwhelmed by the Rift before, and this is not "holodeck-like" at all either, even though stereo fusion and everything was fairly good. It looks more like VGA or an early video art installation.

Also, the central crop in video mode is bad for stereo. It should be wider, i.e., full sensor video.

[three images attached]

yoshiwa
Posts: 9
Joined: Mon Dec 09, 2013 6:07 pm

Re: Stereoscopic Raspberry Pi Camera ISSUE!

Thu Jan 09, 2014 7:55 pm

Could you elaborate a bit more please?
I will need to get Pi camera footage synched as well at some point, and your efforts might come in very handy!

Thx in advance !

mhelin
Posts: 127
Joined: Wed Oct 17, 2012 7:18 pm

Re: Stereoscopic Raspberry Pi Camera ISSUE!

Thu Jan 09, 2014 9:44 pm

The OV5647 can be synched with another sensor using the FREX pin input/output (works also in rolling shutter mode). Needs support in the driver though. I guess the FREX pin is not even used in the Raspberry Pi camera design; at least the Omnivision reference design says not to populate the zero-ohm resistor which connects this pin to the connector.

husamwadi
Posts: 18
Joined: Mon Jan 06, 2014 7:54 am

Re: Stereoscopic Raspberry Pi Camera ISSUE!

Thu Jan 09, 2014 11:15 pm

Wow, that's awesome!

As for the barrel distortion, I found this Processing example that has some OpenGL shaders set up for the Oculus (it may be useful):

http://forum.processing.org/one/topic/o ... hader.html


Also, what resolution did you run (HD, or was that too much?), and was the latency OK?

But this was exactly what I was imagining. I think you can make the picture clearer and get more field of view by putting a fisheye or wide-angle lens on the camera.

Like the previous poster asked, can you describe what you did and maybe leave instructions on how you did the networking + OpenGL shader? That would be awesome, and thank you so much!

towolf
Posts: 421
Joined: Fri Jan 18, 2013 2:11 pm

Re: Stereoscopic Raspberry Pi Camera ISSUE!

Fri Jan 10, 2014 12:07 am

yoshiwa wrote:Could you elaborate abit more please?
I would need to get pi camera footage synched aswell in some time , and your efforts might come in very handy !

Thx in advance !
I’m not really happy with it yet. It was still too hacky today. I needed to restart all three sources every time I changed something, which gets old fast. And if you don’t have a Linux PC to combine and warp the videos on, you probably cannot use this solution. I don’t think you can make all these GStreamer plugins work on Windows (gst-opengl with EGL and GLES, etc). Maybe it is possible, but I cannot be bothered to deal with Windows or even Macs.

Also, I’m using Arch, since Raspbian with its ancient GStreamer and silly ffmpeg fork is just a pain in the ass.

The basic scripts that are still not fully worked out are these:

Code: Select all

$ cat raspirecv-stereo 
#!/bin/sh
VIDEO_CAPS="application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)H264"
VIDEO_DEC="rtph264depay ! h264parse ! avdec_h264"
VIDEO_SINK="glshader location=distortion.frag ! glimagesink sync=false"

DESTLEFT=10.38.103.6
DESTRIGHT=10.38.113.204

LATENCY=250

gst-launch-1.0 -v \
  videomixer name=mix \
      sink_0::xpos=0   sink_0::ypos=0 sink_0::alpha=0 \
      sink_1::xpos=640 sink_1::ypos=0 sink_1::alpha=1 \
      sink_2::xpos=0   sink_2::ypos=0 sink_2::alpha=1 \
    ! $VIDEO_SINK \
  videotestsrc pattern="black" \
      ! video/x-raw,width=1280,height=720 \
      ! mix.sink_0 \
  rtpbin name=rtpbinleft latency=$LATENCY ntp-sync=true do-retransmission=0 \
    udpsrc caps=$VIDEO_CAPS port=5000 ! rtpbinleft.recv_rtp_sink_0 \
      rtpbinleft. ! $VIDEO_DEC ! videoscale add-borders=false ! video/x-raw,width=640,height=720 ! mix.sink_1 \
    udpsrc port=5001 ! rtpbinleft.recv_rtcp_sink_0 \
      rtpbinleft.send_rtcp_src_0 ! udpsink port=5005 host=$DESTLEFT sync=false async=false \
  rtpbin name=rtpbinright latency=$LATENCY ntp-sync=true do-retransmission=0 \
    udpsrc caps=$VIDEO_CAPS port=6000 ! rtpbinright.recv_rtp_sink_0 \
      rtpbinright. ! $VIDEO_DEC ! videoscale add-borders=false ! video/x-raw,width=640,height=720 ! mix.sink_2 \
    udpsrc port=6001 ! rtpbinright.recv_rtcp_sink_0 \
      rtpbinright.send_rtcp_src_0 ! udpsink port=6005 host=$DESTRIGHT sync=false async=false
And here is the fragment shader for the above. I linked it in my last post and mended it slightly, since I was only getting a black image:

Code: Select all

$ cat distortion.frag 
uniform sampler2D bgl_RenderedTexture;
 
const vec4 kappa = vec4(1.0,1.7,0.7,15.0);
 
const float screen_width = 1280.0;
const float screen_height = 720.0;
 
const float scaleFactor = 0.9;
 
const vec2 leftCenter = vec2(0.25, 0.5);
const vec2 rightCenter = vec2(0.75, 0.5);
 
const float separation = -0.05;
 
// Scales input texture coordinates for distortion.
vec2 hmdWarp(vec2 LensCenter, vec2 texCoord, vec2 Scale, vec2 ScaleIn) {
    vec2 theta = (texCoord - LensCenter) * ScaleIn; 
    float rSq = theta.x * theta.x + theta.y * theta.y;
    vec2 rvector = theta * (kappa.x + kappa.y * rSq + kappa.z * rSq * rSq + kappa.w * rSq * rSq * rSq);
    vec2 tc = LensCenter + Scale * rvector;
    return tc;
}
 
bool validate(vec2 tc, int left_eye) {
    //keep within bounds of texture 
    if ((left_eye == 1 && (tc.x < 0.0 || tc.x > 0.5)) ||   
        (left_eye == 0 && (tc.x < 0.5 || tc.x > 1.0)) ||
        tc.y < 0.0 || tc.y > 1.0) {
        return false;
    }
    return true;
}
 
 
void main() {
    vec2 screen = vec2(screen_width, screen_height);
 
    float as = float(screen.x / 2.0) / float(screen.y);
    vec2 Scale = vec2(0.5, as);
    vec2 ScaleIn = vec2(2.0 * scaleFactor, 1.0 / as * scaleFactor);
 
    vec2 texCoord = (gl_TexCoord[0].st);
    vec2 texCoordSeparated = texCoord;
    
    vec2 tc = vec2(0);
    vec4 color = vec4(0);
    
    if (texCoord.x < 0.5) {
        texCoordSeparated.x += separation;
        tc = hmdWarp(leftCenter, texCoordSeparated, Scale, ScaleIn );
        color = texture2D(bgl_RenderedTexture, tc);
        if (!validate(tc, 1))
            color = vec4(0);
    } else {
        texCoordSeparated.x -= separation;
        tc = hmdWarp(rightCenter, texCoordSeparated, Scale, ScaleIn);
        color = texture2D(bgl_RenderedTexture, tc);
        if (!validate(tc, 0))
            color = vec4(0);   
            
    }   
    gl_FragColor = color;
}
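For anyone who wants to sanity-check the kappa coefficients without a GL setup: here is my own transcription of the shader's hmdWarp into NumPy, mirroring the constants in distortion.frag above (not part of the pipeline, just an offline check):

```python
# NumPy transcription of hmdWarp from distortion.frag. Feed it an output
# texture coordinate and it returns the source coordinate to sample,
# using the same kappa polynomial and scale factors as the shader.
import numpy as np

KAPPA = np.array([1.0, 1.7, 0.7, 15.0])
SCREEN_W, SCREEN_H = 1280.0, 720.0
SCALE_FACTOR = 0.9

def hmd_warp(lens_center, tex_coord):
    """Map an output texture coordinate back to the source coordinate."""
    aspect = (SCREEN_W / 2.0) / SCREEN_H          # per-eye aspect ratio
    scale = np.array([0.5, aspect])               # Scale in the shader
    scale_in = np.array([2.0 * SCALE_FACTOR, SCALE_FACTOR / aspect])
    theta = (np.asarray(tex_coord) - lens_center) * scale_in
    r_sq = theta @ theta
    rvector = theta * (KAPPA[0] + KAPPA[1] * r_sq
                       + KAPPA[2] * r_sq**2 + KAPPA[3] * r_sq**3)
    return lens_center + scale * rvector
```

Plotting hmd_warp over a grid of coordinates makes it easy to see how far the distortion pushes samples before committing new kappa values to the GLSL.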
And the two source scripts, one per Pi (identical except for the destination ports):

Code: Select all

$ cat ./raspisend-rtx
#!/bin/sh
DEST=10.38.104.0

VOFFSET=0
AOFFSET=0

VELEM="v4l2src do-timestamp=true"
VCAPS="video/x-h264,width=1280,height=720,framerate=30/1"
VSOURCE="$VELEM ! $VCAPS"
VENC="h264parse ! rtph264pay"

VRTPSINK="udpsink port=5000 host=$DEST ts-offset=$VOFFSET name=vrtpsink"
VRTCPSINK="udpsink port=5001 host=$DEST sync=false async=false name=vrtcpsink"
VRTCPSRC="udpsrc port=5005 name=vrtpsrc"

gst-launch-1.0 -v rtpbin ntp-sync=true name=rtpbin \
    $VSOURCE ! $VENC ! queue ! rtpbin.send_rtp_sink_0 \
        rtpbin.send_rtp_src_0 ! $VRTPSINK \
        rtpbin.send_rtcp_src_0 ! $VRTCPSINK \
    $VRTCPSRC ! rtpbin.recv_rtcp_sink_0

Code: Select all

$ cat ./raspisend-rtx
#!/bin/sh
DEST=10.38.104.0

VOFFSET=0
AOFFSET=0

VELEM="v4l2src do-timestamp=true"
VCAPS="video/x-h264,width=1280,height=720,framerate=30/1"
VSOURCE="$VELEM ! $VCAPS"
VENC="h264parse ! rtph264pay"

VRTPSINK="udpsink port=6000 host=$DEST ts-offset=$VOFFSET name=vrtpsink"
VRTCPSINK="udpsink port=6001 host=$DEST sync=false async=false name=vrtcpsink"
VRTCPSRC="udpsrc port=6005 name=vrtpsrc"

gst-launch-1.0 -v rtpbin ntp-sync=true name=rtpbin \
    $VSOURCE ! $VENC ! queue ! rtpbin.send_rtp_sink_0 \
        rtpbin.send_rtp_src_0 ! $VRTPSINK \
        rtpbin.send_rtcp_src_0 ! $VRTCPSINK \
    $VRTCPSRC ! rtpbin.recv_rtcp_sink_0

towolf
Posts: 421
Joined: Fri Jan 18, 2013 2:11 pm

Re: Stereoscopic Raspberry Pi Camera ISSUE!

Fri Jan 10, 2014 12:13 am

husamwadi wrote:I think that you can make the picture clearer and get more field of view by putting a fish lens or wide angle lens on the camera.
Maybe, but I don’t think so. These basically crappy spectacle lenses tend to muddle things up even more. Binned full-sensor video would be better: start from as rectilinear a wide angle as possible and then slap a well-defined warp on top.

towolf
Posts: 421
Joined: Fri Jan 18, 2013 2:11 pm

Re: Stereoscopic Raspberry Pi Camera ISSUE!

Fri Jan 10, 2014 12:16 am

Also, I’m wondering if my approach of capturing 1280x720 and scaling both to half-width (2x 640x720) side-by-side is the best, or only, way.

I tried 2560x720, but that overloaded my weak graphics card too much.

steve_gulick
Posts: 31
Joined: Wed Jul 18, 2012 12:06 pm
Contact: Website

Re: Stereoscopic Raspberry Pi Camera ISSUE!

Fri Jan 10, 2014 1:23 pm

mhelin wrote:The OV5647 can be synched with another sensor using the FREX pin input/output (works also in rolling shutter mode). Needs support in the driver though. I guess the FREX pin is not even used in the Raspberry Pi camera design; at least the Omnivision reference design says not to populate the zero-ohm resistor which connects this pin to the connector.
The FREX pad on the OV5647 is not brought out to the 24-pin "sunny" connector on the camera board, so that is not an option to sync the chip. But the VSYNC pad is brought out, at least to the connector pins, and a wire could be soldered.
see http://www.raspberrypi.org/forum/viewto ... 03#p378003

This would allow hardware/software to compare the relative frame timing between 2 cameras and experimenting with ways of phase-locking them.

Unfortunately, even though the VSYNC pin is available, it has not been enabled during the initialization of the OV5647. I believe it is only a matter of writing to a bit in an OV5647 register once during initialization, and it would have no other effect.

Being able to accurately sync two Raspberry Pi cameras would open up so many possible applications, such as stereo vision in robotics.

jamesh
Raspberry Pi Engineer & Forum Moderator
Posts: 25914
Joined: Sat Jul 30, 2011 7:41 pm

Re: Stereoscopic Raspberry Pi Camera ISSUE!

Fri Jan 10, 2014 2:10 pm

Actually, the register I think this is is specifically defined to be an input. I can try it as an output to see if it breaks anything, but I have no way of testing whether it actually does anything.

There are a couple of other registers that refer to vsync, but the docs are extremely vague, so no idea what they are for.
Principal Software Engineer at Raspberry Pi (Trading) Ltd.
Contrary to popular belief, humorous signatures are allowed. Here's an example...
“My wife said to me `...you’re not even listening`.
I thought, that’s an odd way to start a conversation.."

towolf
Posts: 421
Joined: Fri Jan 18, 2013 2:11 pm

Re: Stereoscopic Raspberry Pi Camera ISSUE!

Fri Jan 10, 2014 5:01 pm

towolf wrote:Also, I’m wondering if my approach of capturing 1280x720 and scaling both to half-width (2x 640x720) side-by-side is the best, or only, way.

I tried 2560x720, but that overloaded my weak graphics card too much.
Actually, that was bullshit. What is needed is for the left and right halves of the Oculus Rift display to be the juxtaposed narrow central crops of the two camera feeds.

Problem is, what the RPi camera gives you is a crop of a crop, yielding a combined field of view of only 20 degrees. It is like wearing binoculars. There's no way you could walk around with that.

[two images attached]

towolf
Posts: 421
Joined: Fri Jan 18, 2013 2:11 pm

Re: Stereoscopic Raspberry Pi Camera ISSUE!

Fri Jan 10, 2014 5:09 pm

jamesh wrote:Actually, the register I think this is is specifically defined to be an input. I can try it as an output to see if it breaks anything, but I have no way of testing whether it actually does anything.

There are a couple of other registers than refer to vsync, but the docs are extremely vague so no idea what they are for.

For my approach at least, synch of two cameras is no issue at all with the V4L2 driver timestamps (see the video and pause it: https://www.youtube.com/watch?v=YQ8F26evIi0 )

The problem is getting rid of as much latency as possible. It is at roughly 200ms now and could go lower. Somehow I cannot get MJPEG to work with GStreamer; I really want to check out latencies for frame-independent codecs. Problem is, as soon as the bitrate goes up, network latency kicks in.

jamesh
Raspberry Pi Engineer & Forum Moderator
Posts: 25914
Joined: Sat Jul 30, 2011 7:41 pm

Re: Stereoscopic Raspberry Pi Camera ISSUE!

Fri Jan 10, 2014 5:15 pm

It will look crappy, but for testing, can you set the JPEG quality really low to get the bitrate down?

towolf
Posts: 421
Joined: Fri Jan 18, 2013 2:11 pm

Re: Stereoscopic Raspberry Pi Camera ISSUE!

Fri Jan 10, 2014 6:59 pm

I can't get MJPEG to work with GStreamer.

They have an odd way of mapping V4L2 driver pixel formats to so-called GStreamer caps based on MIME types, like this:

Code: Select all

gst-launch-1.0 -e v4l2src ! video/x-h264,width=1280,height=720,framerate=30/1 ! ...
gst-launch-1.0 -e v4l2src ! video/x-raw,format=I420,width=1280,height=720,framerate=30/1 ! ...
gst-launch-1.0 -e v4l2src ! image/jpeg,width=1280,height=720,framerate=30/1 ! ...
I can only specify image/jpeg, nothing else. And I'm still trying to work out which of the two JPEG modes it picks:

Code: Select all

	Index       : 3
	Type        : Video Capture
	Pixel Format: 'JPEG'
	Name        : JPEG

	Index       : 5
	Type        : Video Capture
	Pixel Format: 'MJPG'
	Name        : MJPEG
But I managed to make raw I420 work at 160x160, and it had one frame less latency (30-40ms).

Also interesting: I plugged the HDMI port of the RPi directly into the HDMI port of the Rift and ran raspivid with preview. I got about 100ms delay, which I found strange. I had expected that to be much lower, since raspivid pipes the video images more or less straight via the GPU onto the HDMI port, doesn't it? Could it be that the composite output is even lower latency than HDMI?

Video of that: http://www.youtube.com/watch?v=ISm6v00A8lQ

husamwadi
Posts: 18
Joined: Mon Jan 06, 2014 7:54 am

Re: Stereoscopic Raspberry Pi Camera ISSUE!

Fri Jan 10, 2014 11:05 pm

That's weird. I think it has to do with your GStreamer settings, since when I ran the raspi camera straight to HDMI, I got 1080p with almost no latency (seemingly instantaneous).

But then again, I was using raspivid and not GStreamer, and when I made the Python app it ran off the raspivid command.

Edit: If you can find a way to use the raspivid and raspistill commands on your OS (since you are now using a different Linux OS than Raspbian wheezy), you'll find that it will run pretty much 100%. It is screwing up at the V4L2 driver (conversion and what not).

towolf
Posts: 421
Joined: Fri Jan 18, 2013 2:11 pm

Re: Stereoscopic Raspberry Pi Camera ISSUE!

Sat Jan 11, 2014 12:30 am

Did you measure? Otherwise, perceptual judgement of latency can be fallible.

This *was* with "raspivid -t 0"

husamwadi
Posts: 18
Joined: Mon Jan 06, 2014 7:54 am

Re: Stereoscopic Raspberry Pi Camera ISSUE!

Sat Jan 11, 2014 12:39 am

Nah, I eyeballed it, but it was definitely faster than the latency in the video you showed.

It looked more like this:

http://youtu.be/T8T6S5eFpqE?t=7m36s

If you are seeing the smoothness and responsiveness of that video output and the latency is 100ms, then I was wrong about the latency.

Otherwise, I found the raspicam to be very responsive.

towolf
Posts: 421
Joined: Fri Jan 18, 2013 2:11 pm

Re: Stereoscopic Raspberry Pi Camera ISSUE!

Sat Jan 11, 2014 1:13 am

Are you sure you know what I mean by latency? It has nothing to do with smoothness.

If you want to do the project you have in mind, then latency is a key factor. If you want a coherent self-percept, delays can ruin the effect completely.

husamwadi
Posts: 18
Joined: Mon Jan 06, 2014 7:54 am

Re: Stereoscopic Raspberry Pi Camera ISSUE!

Sat Jan 11, 2014 6:13 pm

Yeah, you are talking about the display lagging behind your movement (turn your head, the display turns 2 seconds later).
I didn't really notice that much. I have been trying other methods to get a stereoscopic display:

http://www.youtube.com/watch?v=LIfOjl5fgiM

I tried using a dashcam which output 1280x480, but it wasn't the quality that I was looking for. In fact, I don't think 640x480 pixels per eye is enough for anything, so I am going to go with two displays side by side for 1600x1280 (800x1280 per eye). I tried it today and the results were good, but I have to switch out the magnifying lens, since the 7x aspherical one that came with the Oculus is too strong.

antonvh
Posts: 5
Joined: Wed Sep 17, 2014 8:56 pm

Re: Stereoscopic Raspberry Pi Camera ISSUE!

Fri Nov 14, 2014 3:55 pm

Has anyone got the GLSL shader to work on a Mac? I only get a black image.

Using GStreamer 1.4:

Code: Select all

gst-launch-1.0 videotestsrc ! video/x-raw, width=1280, height=720 ! glshader location=distortion.frag ! glimagesink
Built-in gl effects work fine:

Code: Select all

gst-launch-1.0 videotestsrc ! video/x-raw, width=1280, height=720 ! gleffects effect=7 ! glimagesink sync=false
except for the fish-eye

Code: Select all

gst-launch-1.0 videotestsrc ! video/x-raw, width=1280, height=720 ! gleffects effect=5 ! glimagesink sync=false

rtolesnikov
Posts: 1
Joined: Fri Sep 16, 2016 5:24 am

Re: Stereoscopic Raspberry Pi Camera ISSUE!

Fri Sep 16, 2016 5:37 am

Is it possible to refresh this thread? I have the hardware necessary to test out the vsync usage. Can someone help with finding the location of the driver code to update the I2C registers, so that vsync could be enabled? If not in the driver, can the I2C registers be written from userland Python (or any other) code? Again, I'm willing and able to experiment to get it to work.

Another point of curiosity: has the camera board schematic been made available? My searches came up empty.

Thanks.
jamesh wrote:Actually, the register I think this is is specifically defined to be an input. I can try it as an output to see if it breaks anything, but I have no way of testing whether it actually does anything.

There are a couple of other registers than refer to vsync, but the docs are extremely vague so no idea what they are for.

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 8404
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: Stereoscopic Raspberry Pi Camera ISSUE!

Fri Sep 16, 2016 10:11 am

rtolesnikov wrote:Is it possible to refresh this thread?
I have the hardware necessary to test out the vsync usage. Can someone help with finding the location of the driver code to update the i2c registers, so that vsync could be enabled?
Closed source firmware. You could analyse start_x.elf if you really felt like it.
rtolesnikov wrote:If not in the driver, can the i2d be written to from useland python (or any other) code? Again, I'm willing and able to experiment to get it to work.
Not safely. Accessing i2c-0 from the ARM at the same time as the GPU is not safe, as they will both be attempting to service the same interrupt, and may write to the I2C FIFOs at the same time, thereby corrupting both transactions.
rtolesnikov wrote:Another point of curiosity: has the camera board schematic been made available? My searches came up empty.
No. The OV5647 board has been reverse engineered and cloned by some third parties (badly in some cases), but Pi Towers do not publish the schematics for it, same as they don't publish full schematics for the main Pi boards any more.


One option if you really want to play is the rawcam stuff. I had already said that if someone puts together a working I2C register set then I'll look at merging it into the firmware, but I haven't got the time or inclination to do the investigation myself.

If you are just wanting to get two cameras in sync, then the other approach that has been achieved is to compare the timestamps of the frames that are produced and adjust the framerate on one sensor to act as a software PLL. Easiest on the Compute Module with two cameras on the same device, but possible over a network link between two devices. There was a thread on this - looks like https://github.com/waveform80/picamera/pull/279 and viewtopic.php?f=43&t=48238&start=75
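The timestamp-compare-and-adjust idea can be sketched as a small control loop. This is my own toy model, not the picamera code linked above; the gain value and the folding of the offset into one frame period are assumptions:

```python
# Toy model of the software-PLL approach: measure the phase offset between
# the two cameras' frame timestamps, then trim the second camera's next
# frame interval so the offset decays. Real code would write the corrected
# framerate back to the sensor; here the interval is just returned.
FRAME = 1.0 / 30.0  # nominal frame interval at 30 fps

def pll_step(t_master, t_slave, gain=0.25):
    """One control step: return (phase error, interval for the slave's next frame)."""
    # fold the offset into [-FRAME/2, +FRAME/2) so we lock to the nearest frame
    err = (t_slave - t_master + FRAME / 2) % FRAME - FRAME / 2
    return err, FRAME - gain * err
```

Each step shortens or lengthens exactly one frame by a fraction of the measured offset, so the phase error decays geometrically rather than accumulating a permanent framerate change and oscillating.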
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.
