User avatar
peepo
Posts: 305
Joined: Sun Oct 21, 2012 9:36 am

Re: CME -x postponed?

Tue May 13, 2014 4:39 pm

6by9 wfm thanks so much

thanks a!!

~:"
Last edited by peepo on Wed May 14, 2014 9:06 am, edited 1 time in total.

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 7420
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: CME -x postponed?

Tue May 13, 2014 4:49 pm

Sorry peepo, I'd misread your post as saying that updating to f0eeb5 worked.

Seeing as this post was about raspivid -x, we ought to shift any discussion about this encode_OGL app to the thread started for it - http://www.raspberrypi.org/forums/viewt ... 43&t=77066

edit: Just for reference, it is a bug in the app where it sets the AWB mode to Off, and then doesn't set any gains.
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

ethanol100
Posts: 587
Joined: Wed Oct 02, 2013 12:28 pm

Re: CME -x postponed?

Tue May 13, 2014 5:54 pm

6by9 wrote: What exactly is the problem in "RPI Cam Web Interface"? I'm not trawling through 29 pages of forum posts trying to extract the info when there will be many issues discussed and many fixed. Some comments seem to imply MJPEG, but that can't be the issue here as we're not encoding.
Sorry, you are right: it has nothing to do with the new modes, but with the AWB changes. I have not followed this thread and have not read it completely. They were reporting black images; they use raspimjpeg, and it could well be using the AWB off code incorrectly, as raspimjpeg has not been updated in the last two months.

So now I hope to get some feedback on the inline motion vectors, and I would like to discuss some ways to use them for motion triggering or object tracking.
How could we trigger on motion? Gordon has suggested compiling a histogram of the motion vectors and then judging from that whether motion occurs. Would a 2-bin histogram be enough for this purpose? Then, for example, if at least 10% of the velocities are over a threshold, we say there is motion? I would really like to contribute a motion trigger as an additional wait method!
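Something like this rough sketch, for illustration (the 10% ratio and the 120x68 grid come from the discussion above; the 4px/frame threshold is an arbitrary example value):

Code: Select all

/* 2-bin histogram motion trigger over one frame of motion vectors.
 * Assumes the x/y components have already been unpacked from the raw
 * buffer (see later in the thread for the layout). */
#define MV_COLS 120
#define MV_ROWS 68

static int motion_triggered(const signed char *mv_x, const signed char *mv_y)
{
   const int threshold_sq = 4 * 4;   /* |v| > 4 px/frame counts as "moving" */
   int bins[2] = { 0, 0 };           /* bin 0: static blocks, bin 1: moving blocks */
   int i;

   for (i = 0; i < MV_COLS * MV_ROWS; i++)
   {
      int mag_sq = mv_x[i] * mv_x[i] + mv_y[i] * mv_y[i];
      bins[mag_sq > threshold_sq]++;
   }
   /* trigger if at least 10% of all blocks are moving */
   return bins[1] * 10 >= MV_COLS * MV_ROWS;
}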

steve_gulick
Posts: 31
Joined: Wed Jul 18, 2012 12:06 pm

Re: CME -x postponed?

Thu May 15, 2014 6:49 am

I'd like to do detection of moving objects (blobs) using the motion vectors for use as a trigger.

This post, written in anticipation of the GPU motion vector code, has blob detection and tracking code for use with motion vectors which might prove useful:
http://www.raspberrypi.org/forums/viewt ... 43&t=76482

However, rather than triggering the h.264 video recording to a file, I would prefer to trigger the capture of a high-resolution still. Could that switch to still-capture mode be managed within raspivid, and how could the latency between the trigger and the still image capture be minimised?
thanks
Steve

ethanol100
Posts: 587
Joined: Wed Oct 02, 2013 12:28 pm

Re: CME -x postponed?

Thu May 15, 2014 9:53 am

I think the switching to still capture could be integrated into raspivid; the still port is already there, and you would only need to connect the JPEG encoder to the camera_still_port. But the mode switch and the capture of the JPEG image will take some time (I would guess about 0.5s). Additionally, the motion vectors are only produced after encoding, so that adds some extra time. I would prefer the circular buffer, to capture some pre- and post-motion.
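For illustration, roughly the shape of that hook-up, following the style of RaspiStill.c (just a sketch: port format setup, callbacks and error handling are omitted, and the still port is assumed to be camera->output[2] as in the userland apps):

Code: Select all

#include "interface/mmal/mmal.h"
#include "interface/mmal/util/mmal_default_components.h"
#include "interface/mmal/util/mmal_connection.h"
#include "interface/mmal/util/mmal_util_params.h"

/* attach a JPEG encoder to the camera's still port alongside the
 * existing video path */
static MMAL_CONNECTION_T *add_still_path(MMAL_COMPONENT_T *camera,
                                         MMAL_COMPONENT_T **jpeg_encoder)
{
   MMAL_CONNECTION_T *conn = NULL;
   MMAL_PORT_T *still_port = camera->output[2];

   mmal_component_create(MMAL_COMPONENT_DEFAULT_IMAGE_ENCODER, jpeg_encoder);
   mmal_connection_create(&conn, still_port, (*jpeg_encoder)->input[0],
                          MMAL_CONNECTION_FLAG_TUNNELLING |
                          MMAL_CONNECTION_FLAG_ALLOCATION_ON_INPUT);
   mmal_connection_enable(conn);
   return conn;
}

/* later, when motion is detected, request one still frame:
 * mmal_port_parameter_set_boolean(camera->output[2], MMAL_PARAMETER_CAPTURE, MMAL_TRUE); */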

For the blob detection you would need to be sure that it is fast enough, or else you will start to drop frames. I have not found any details on how long the processing takes. There are a lot of different algorithms around, and it is an area of active research. In his post he only analyses the motion vector magnitudes, which you need to calculate first. I would like to try some simpler trigger mechanisms first. What are the benefits of detecting the blob? Do you want to track the motion of these blobs? Something like tracking football players during a match, or tracking billiard balls? Would they be big enough with these 120x68 magnitudes? They would need to be at least ~32x32 pixels in the original image to form even a mini 2x2 blob. What about the velocities?

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 7420
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: CME -x postponed?

Thu May 15, 2014 10:37 am

I will try and sort out a combined video encode/stills capture app that makes use of the zero shutter lag code that is present on the GPU, but time is a little short for me at the moment. That won't overcome the latency of the video encode (about a frame), and you will need to ensure that you are pulling the video data out as fast as possible to avoid the video_encode FIFO filling up.

To do full 5MPix stills captures we'll need to limit the video encode to the 15fps that the full-res sensor mode can support; otherwise it will have to work with whichever sensor mode is selected by the video encode (the main latency reduction comes from not switching the sensor mode, as that takes 60-90ms depending on exposure time). Is 15fps sufficient for your motion detection requirements?
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

steve_gulick
Posts: 31
Joined: Wed Jul 18, 2012 12:06 pm

Re: CME -x postponed?

Thu May 15, 2014 11:01 am

I would want blob detection or feature extraction from the motion vectors to make sure the trigger occurs only when something of interest occurs - like a person walking by in a security system. Would not want the triggers to occur only because the magnitude of a single macroblock's motion vector exceeded a threshold. A connected group of macroblocks should have a correlated, consistent motion over a number of frames before a trigger would occur.

A good example use case is the gif sequence Gordon first published of a person walking across the field of view.

An interesting example, with source code, tutorials, and tech papers, where motion vectors extracted in the compressed domain are used to extract and track objects of interest: "Live Video"

https://sites.google.com/site/wsgyou/research/livevideo
http://sourceforge.net/projects/livevideo/

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 7420
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: CME -x postponed?

Thu May 15, 2014 11:22 am

Let's get the framework sorted before you set your huge goals.
I hope you had noticed that those examples use the Microsoft Foundation Classes, so they are likely running on a nice beefy multicore x86 with shedloads of memory to get their 30-50fps "Live video". On the Pi you're on a 700MHz single-core ARM with 512MB RAM.

As long as we can arrange for a framework that runs the analysis in a separate thread from the buffer processing, plus a mechanism for triggering the stills captures, then people can play to their heart's content with analysis (and see just how long it takes) and tinker.
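Something along these lines, as a rough sketch with plain pthreads (all names are illustrative; the buffer size matches the 1080p vector buffer discussed later in the thread):

Code: Select all

#include <pthread.h>
#include <string.h>

#define MV_BYTES (121 * 68 * 4)   /* one 1080p motion vector buffer */

static unsigned char latest[MV_BYTES];
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t fresh = PTHREAD_COND_INITIALIZER;
static int have_frame = 0;

/* called from the encoder callback: copy and return immediately,
 * so the buffer can go straight back to the codec */
void submit_vectors(const unsigned char *data)
{
   pthread_mutex_lock(&lock);
   memcpy(latest, data, MV_BYTES);   /* silently drops an unprocessed frame */
   have_frame = 1;
   pthread_cond_signal(&fresh);
   pthread_mutex_unlock(&lock);
}

/* the analysis runs here, on its own thread, for as long as it likes */
void *analysis_thread(void *arg)
{
   unsigned char local[MV_BYTES];
   for (;;)
   {
      pthread_mutex_lock(&lock);
      while (!have_frame)
         pthread_cond_wait(&fresh, &lock);
      memcpy(local, latest, MV_BYTES);
      have_frame = 0;
      pthread_mutex_unlock(&lock);
      /* ...analyse local[] and fire the stills trigger if needed... */
   }
   return arg;
}

If the analysis is slower than the frame rate, frames are simply skipped rather than stalling the encoder.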
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

User avatar
peepo
Posts: 305
Joined: Sun Oct 21, 2012 9:36 am

Re: CME -x postponed?

Thu May 15, 2014 1:17 pm

if anyone could explain the current CME codebase, it would be appreciated,
as I prefer tinkering at the core.
I find that more layers of code gum up the works...

~:"

i.e. ethanol100 and Gordon's collaboration,
though likely another voice may help.

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 7420
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: CME -x postponed?

Thu May 15, 2014 1:36 pm

peepo wrote:if anyone could explain the current CME codebase, it would be appreciated,
as I prefer tinkering at the core.
I find that more layers of code gum up the works...
ethanol100's description of the current code at http://www.raspberrypi.org/forums/viewt ... 45#p549841 is pretty reasonable.
Or Gordon's description of the buffer contents in http://www.raspberrypi.org/forums/viewt ... 45#p548816
Anything specific that you want to know?
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

Yggdrasil
Posts: 138
Joined: Sun Aug 26, 2012 8:45 pm

Re: CME -x postponed?

Fri May 16, 2014 1:35 pm

ethanol100 wrote:For the blob detection you would need to be sure that it is fast enough, or else you will start to drop frames. I have not found any details on how long the processing takes. There are a lot of different algorithms around, and it is an area of active research. In his post he only analyses the motion vector magnitudes, which you need to calculate first. I would like to try some simpler trigger mechanisms first. What are the benefits of detecting the blob? Do you want to track the motion of these blobs? Something like tracking football players during a match, or tracking billiard balls? Would they be big enough with these 120x68 magnitudes? They would need to be at least ~32x32 pixels in the original image to form even a mini 2x2 blob. What about the velocities?
Hello,

Dropping frames can be avoided completely, and I'm currently combining raspivid's source with this blob detection to show/prove it.
The second question, whether the data can be interpreted in a useful manner, is more difficult.
Currently, I assume that the motion direction cannot be estimated from the motion vectors of a single frame. I have plotted the output of imv2txt as a vector field and it looks rather fancy; see the attachments. (Maybe I'm just too bad at statistical analysis to get a good interpretation ;))
Thus, I decided to use the magnitude to mark hot spots, and to combine the hot spots of several frames into a movement direction.

A note on the vector field images: the camera's adaptive exposure is always active while the video encoder runs. This causes some unwanted motion vectors, e.g. the vectors on the right side of my images. If that issue were fixed, the motion vector quality should improve.
Attachments
frame-556.dat.png
frame-567.dat.png

ethanol100
Posts: 587
Joined: Wed Oct 02, 2013 12:28 pm

Re: CME -x postponed?

Fri May 16, 2014 2:31 pm

Thank you for your update.

Taking the magnitude is the normal choice. We need to reduce the complexity somehow, and taking only the magnitude instead of magnitude and direction is easier. I did not want to attack your algorithm, but I agree with 6by9 that a framework should be created first. Then everybody can play with their own motion detection algorithm.

Have you noticed that I am not really sure whether I have mixed up the x and y directions in the conversion from the raw buffer? Would switching u and v give you more reasonable velocity fields? Additionally, I could have made mistakes with the signs.

Have you looked at the SAD values? You could try to filter out velocities with big differences. Just from a particle tracking point of view, e.g. your leg with its straight edge will always be a problem: there are too many similar points along the edge, which will result in falsely large velocity vectors.

There are some additional difficulties: every now and then (each I-frame?), all velocities are zero. And I am not sure whether we see the velocities calculated from two consecutive images or from the last I-frame.

Yesterday 6by9 told us that we can switch the exposure mode to off once the exposure has been found by the AGC algorithm; together with setting the AWB manually, the encoding should then not change much. But the last row presents some difficulties, because the image buffer is padded up to a multiple of 16.

Thanks for the pictures; the gnuplot plots look fine, and the shape of the magnitudes seems to be OK.

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 7420
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: CME -x postponed?

Fri May 16, 2014 3:25 pm

ethanol100 wrote:There are some additional difficulties: every now and then (each I-frame?), all velocities are zero. And I am not sure whether we see the velocities calculated from two consecutive images or from the last I-frame.
The GPU code change gsh put in has two lines that read:

Code: Select all

         // I-frames have no vectors so just set them to zero
         memset(stuff)
so you're right that you'll have no useful data on I-frames.
I haven't checked, but you may find that the motion vectors buffer from an I frame may have both MMAL_BUFFER_HEADER_FLAG_KEYFRAME and MMAL_BUFFER_HEADER_FLAG_CODECSIDEINFO set, so you could ignore those buffers. If not, I can see if we can arrange for that to be the case.
ethanol100 wrote:Yesterday 6by9 told us that we can switch the exposure mode to off once the exposure has been found by the AGC algorithm; together with setting the AWB manually, the encoding should then not change much. But the last row presents some difficulties, because the image buffer is padded up to a multiple of 16.
I know why that last row is bad, and can probably sort a fix relatively easily. Unfortunately it has all just hit the fan here, so getting it done may be delayed.
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

ethanol100
Posts: 587
Joined: Wed Oct 02, 2013 12:28 pm

Re: CME -x postponed?

Fri May 16, 2014 4:04 pm

6by9 wrote: I haven't checked, but you may find that the motion vectors buffer from an I frame may have both MMAL_BUFFER_HEADER_FLAG_KEYFRAME and MMAL_BUFFER_HEADER_FLAG_CODECSIDEINFO set, so you could ignore those buffers. If not, I can see if we can arrange for that to be the case.
The buffer only contains MMAL_BUFFER_HEADER_FLAG_CODECSIDEINFO and MMAL_BUFFER_HEADER_FLAG_FRAME_END. It would be nice to have the keyframe flag there.
But it is not so important for me; I want to write out every buffer, otherwise we will get confused about which buffer belongs to which frame. (And from the h264 file I can see whether a frame was a keyframe.)
6by9 wrote: I know why that last row is bad, and can probably sort a fix relatively easily. Unfortunately it has all just hit the fan here, so getting it done may be delayed.
It is not a big problem; it is just the last line, only 8px in height. We can skip it and use the remaining 67 lines. It is not needed in a hurry.

Thanks for the explanations!

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 7420
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: CME -x postponed?

Fri May 16, 2014 4:24 pm

ethanol100 wrote:The buffer only contains MMAL_BUFFER_HEADER_FLAG_CODECSIDEINFO and MMAL_BUFFER_HEADER_FLAG_FRAME_END. It would be nice to have the keyframe flag there.
But it is not so important for me; I want to write out every buffer, otherwise we will get confused about which buffer belongs to which frame. (And from the h264 file I can see whether a frame was a keyframe.)
Shucks. It looks like the internal state is almost correct inside the codec code anyway, but the buffer flag just isn't being set. Ensuring I get it right is a different matter!
The memset of the vectors will guarantee that it is all zero at least, so it is easy enough to filter on that criterion as well.
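For reference, filtering on that criterion is about a one-liner over the buffer payload (a sketch; checking the whole buffer also covers the zeroed SAD values):

Code: Select all

#include <stddef.h>

/* true if this side-info buffer came from an I-frame, i.e. was memset to zero */
static int is_iframe_vectors(const unsigned char *imv, size_t len)
{
   size_t i;
   for (i = 0; i < len; i++)
      if (imv[i] != 0)
         return 0;
   return 1;
}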

I meant to say on my last post that the motion vectors are (almost) always with respect to the previous frame, irrespective of whether that previous frame was an I or P frame. It's the same frame that the H264 stream is using as the reference frame for the new image (The "almost" is that there is a mode where the codec uses one frame as the reference for multiple successive frames. It allows the decoder to skip those frames without compromising the bitstream).
ethanol100 wrote:
6by9 wrote: I know why that last row is bad, and can probably sort a fix relatively easily. Unfortunately it has all just hit the fan here, so getting it done may be delayed.
It is not a big problem; it is just the last line, only 8px in height. We can skip it and use the remaining 67 lines. It is not needed in a hurry.
(Raspberry Pi is far more fun than real work!) It's a trivial change so have just sorted it - I'll get it pushed internally now, and then it's up to Dom to release.

The other approach would be to ask for 1920x1088 (or 1920x1072). That aligns it to macroblocks and means the camera will write something sensible into those pixels.
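For example, something like "raspivid -w 1920 -h 1088 -t 10000 -o test.h264 -x test.imv" would do it.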

EDIT: Dom's just said he'll be doing a release tonight, so that should hopefully clear up the scruffy bottom row of the motion vectors.
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

ethanol100
Posts: 587
Joined: Wed Oct 02, 2013 12:28 pm

Re: CME -x postponed?

Fri May 16, 2014 5:03 pm

6by9 wrote: Shucks. It looks like the internal state is almost correct inside the codec code anyway, but the buffer flag just isn't being set. Ensuring I get it right is a different matter!
The memset of the vectors will guarantee that it is all zero at least, so it is easy enough to filter on that criterion as well.
Yes, all vectors are zero and the SAD values are all zero too; easy to filter.
6by9 wrote: I meant to say on my last post that the motion vectors are (almost) always with respect to the previous frame, irrespective of whether that previous frame was an I or P frame. It's the same frame that the H264 stream is using as the reference frame for the new image (The "almost" is that there is a mode where the codec uses one frame as the reference for multiple successive frames. It allows the decoder to skip those frames without compromising the bitstream).
Gordon has confused me a bit with his blog post, where he says: "To encode video, one of the things the hardware does is to compare the current frame with the previous (or a fixed) reference frame, and work out where the current macroblock (16×16 pixels) best matches the reference frame."

Good to know that the reference frame should be the last frame. I should read up sometime on how the h264 codec actually stores/calculates the frames.
6by9 wrote: (Raspberry Pi is far more fun than real work!) It's a trivial change so have just sorted it - I'll get it pushed internally now, and then it's up to Dom to release.

The other approach would be to ask for 1920x1088 (or 1920x1072). That aligns it to macroblocks and means the camera will write something sensible into those pixels.
Success just comes faster in the raspberrypi code base ;) Thanks for the quick change!

I have tried with 1088 and it looks OK. No more strange arrows at the bottom of the image.

dom
Raspberry Pi Engineer & Forum Moderator
Posts: 5341
Joined: Wed Aug 17, 2011 7:41 pm
Location: Cambridge

Re: CME -x postponed?

Fri May 16, 2014 7:02 pm

6by9 wrote:EDIT: Dom's just said he'll be doing a release tonight, so that should hopefully clear up the scruffy bottom row of the motion vectors.
Done.

ethanol100
Posts: 587
Joined: Wed Oct 02, 2013 12:28 pm

Re: CME -x postponed?

Sat May 17, 2014 12:37 pm

Thank you for the fast release! Now the last line looks fine for 1920x1080 resolution.

Have done one small test to check if the velocities are reasonable:

[Two images attached: motion vector plots from the car test]

Creating a histogram of the velocity magnitudes shows the car moving at 42px/frame. If we now measure the distance between the wheels (270px) and assume it to be about 2.7m, we can calculate the velocity of the car:

Code: Select all

u = 42 [px/frame] * 2.7/270 [m/px] * 30 [frames/s]
  = 12.6 [m/s] = 45.36 [km/h]
This sounds reasonable to me. Additionally, we see that the large erroneous arrows have a high SAD value. The velocities behind the car look strange, but how could you find those blocks in the reference picture, when the car was covering them before?

So everything should be fine with the velocities.

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 7420
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: CME -x postponed?

Sat May 17, 2014 12:47 pm

ethanol100 wrote:This sounds reasonable to me. Additionally, we see that the large erroneous arrows have a high SAD value. The velocities behind the car look strange, but how could you find those blocks in the reference picture, when the car was covering them before?
Remember that these motion vectors are for compression purposes, so behind the car it is looking for blocks that most closely resemble what has now been revealed by the car no longer obscuring them. Those could be from almost anywhere in the image.
Glad it's working though.
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

Yggdrasil
Posts: 138
Joined: Sun Aug 26, 2012 8:45 pm

Re: CME -x postponed?

Sun May 18, 2014 11:05 pm

ethanol100 wrote:Thank you for your update.

Taking the magnitude is the normal choice. We need to reduce the complexity somehow, and taking only the magnitude instead of magnitude and direction is easier. I did not want to attack your algorithm, but I agree with 6by9 that a framework should be created first. Then everybody can play with their own motion detection algorithm.
Yes, a framework would be nice. I'm not sure how complex such a framework should be. One variant would be the thin solution, with just a simple function handler which replaces the fwrite call for the imv data. The counterpart would be an approach which gives the user the capability to pass some data back to the main application (e.g. to draw results for debugging or change camera settings). I would prefer the thin solution, but probably all programmers would be glad if someone provided a general solution for the output representation. ;)
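The thin variant could be as small as this sketch (the typedef and field names are illustrative; nothing like this exists in raspivid yet):

Code: Select all

#include <stdio.h>
#include <stddef.h>

typedef void (*imv_handler_t)(const void *imv, size_t len, void *user);

struct imv_sink
{
   imv_handler_t handler;   /* analysis callback, may be NULL */
   void *user;              /* opaque pointer handed back to the callback */
   FILE *file;              /* the existing -x output file */
};

/* called from the encoder's output callback with one side-info buffer */
static void deliver_imv(struct imv_sink *sink, const void *data, size_t len)
{
   if (sink->handler)
      sink->handler(data, len, sink->user);   /* thin hook: user code decides */
   else if (sink->file)
      fwrite(data, 1, len, sink->file);       /* current fwrite behaviour */
}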
I've tried to decouple the motion detection from the video encoding thread, but I've run into some problems with the output generation. It's not easy to draw into the OpenGL context of the camera output from another thread.
Then again, the graphical output is only relevant during the debugging stage…

Have you noticed that I am not really sure whether I have mixed up the x and y directions in the conversion from the raw buffer? Would switching u and v give you more reasonable velocity fields? Additionally, I could have made mistakes with the signs.
Yes, I checked that, and the output for (x, y, sad) looks reasonable. But I do not understand why you switched from 121x68 resolution (IMV array) to 120x68 resolution (array of magnitudes). Does the last column contain errors like the last row?
Have you looked at the SAD values? You could try to filter out velocities with big differences. Just from a particle tracking point of view, e.g. your leg with its straight edge will always be a problem: there are too many similar points along the edge, which will result in falsely large velocity vectors.
No, I hadn't looked at the SAD values, because I had probably misinterpreted this value as the difference between two blocks at the same position.
There are some additional difficulties: every now and then (each I-frame?), all velocities are zero. And I am not sure whether we see the velocities calculated from two consecutive images or from the last I-frame.
Yes, the velocities in I-frames are zero. It would be nice to get this info (=> framework ;))
Yesterday 6by9 told us that we can switch the exposure mode to off once the exposure has been found by the AGC algorithm; together with setting the AWB manually, the encoding should then not change much. But the last row presents some difficulties, because the image buffer is padded up to a multiple of 16.
Hm, I reset the camera settings after each frame, but without success.

ethanol100
Posts: 587
Joined: Wed Oct 02, 2013 12:28 pm

Re: CME -x postponed?

Mon May 19, 2014 9:22 am

Yggdrasil wrote:Yes, I checked that, and the output for (x, y, sad) looks reasonable. But I do not understand why you switched from 121x68 resolution (IMV array) to 120x68 resolution (array of magnitudes). Does the last column contain errors like the last row?
There are only 120 macroblocks across (1920/16 = 120), but they are stored in an array with a width of 120+1. I have just followed Gordon's explanation in his blog post.
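For reference, unpacking that layout might look like this (a sketch, assuming the 4-byte element from Gordon's description: signed 8-bit x and y vector components plus a 16-bit SAD):

Code: Select all

#include <stdint.h>
#include <math.h>

/* one element per macroblock, 4 bytes */
typedef struct { int8_t x; int8_t y; uint16_t sad; } imv_t;

#define COLS 121   /* 1920/16 = 120 macroblocks plus one padding column */
#define ROWS 68    /* 1080/16 rounded up */

/* fill a 120x68 array of magnitudes, dropping the padding column */
static void magnitudes(const imv_t *imv, float *mag)
{
   int r, c;
   for (r = 0; r < ROWS; r++)
      for (c = 0; c < COLS - 1; c++)
      {
         const imv_t *v = &imv[r * COLS + c];
         mag[r * (COLS - 1) + c] = sqrtf((float)(v->x * v->x + v->y * v->y));
      }
}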
No, I hadn't looked at the SAD values, because I had probably misinterpreted this value as the difference between two blocks at the same position.
I think this is the quantity used to determine the offset. They calculate the sum of absolute differences for all candidate offsets and then search for the offset which gives the minimum difference. They use the sum of absolute differences to optimise for speed; usually you would calculate the cross-correlation, which requires computing squares.
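In other words, roughly this (an illustrative plain-C version; the real search on the GPU is far more optimised):

Code: Select all

#include <stdlib.h>

/* SAD of the 16x16 macroblock at (mx,my) in cur against the block
 * offset by (dx,dy) in ref; the encoder keeps the (dx,dy) that
 * minimises this over its search range */
static unsigned sad16x16(const unsigned char *cur, const unsigned char *ref,
                         int stride, int mx, int my, int dx, int dy)
{
   unsigned sum = 0;
   int x, y;
   for (y = 0; y < 16; y++)
      for (x = 0; x < 16; x++)
         sum += abs(cur[(my + y) * stride + (mx + x)] -
                    ref[(my + y + dy) * stride + (mx + x + dx)]);
   return sum;
}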
Hm, I reset the camera settings after each frame, but without success.
My idea would be to start the camera in the 'paused' state and let the auto-exposure settle the gains (for 5s or so), then change the exposure mode to off and start the encoder. I will try to put something together; I have only tested it with raspistill so far.
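The locking step itself could be a sketch like this, using the exposure-mode parameter as RaspiCamControl.c does (the settle time matches the 5s above):

Code: Select all

#include "interface/mmal/mmal.h"
#include "interface/vcos/vcos.h"

/* let AGC converge while previewing, then freeze shutter/gain */
static void lock_exposure_after(MMAL_COMPONENT_T *camera, int settle_ms)
{
   MMAL_PARAMETER_EXPOSUREMODE_T exp_mode = {
      { MMAL_PARAMETER_EXPOSURE_MODE, sizeof(exp_mode) },
      MMAL_PARAM_EXPOSUREMODE_OFF
   };

   vcos_sleep(settle_ms);   /* e.g. 5000 ms of preview */
   mmal_port_parameter_set(camera->control, &exp_mode.hdr);
}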

Edit: have pushed my changes to fix the exp mode to: https://github.com/ethanol100/userland/tree/expOff
A new parameter "--waitAndFix 5000" will run the preview for 5s, then fix the exposure and continue as usual.

sharix
Posts: 200
Joined: Thu Feb 16, 2012 11:29 am
Location: Slovenia

Re: CME -x postponed?

Fri May 23, 2014 3:29 pm

Another great feature of raspivid.
I'd like to use it for detecting the motion of clouds. I have everything set up, except that I'd like to capture video at a slower frame rate. I tested some values, and it seems that raspivid's framerate is limited to one frame per second. Is there any reason why the framerate can't be lowered further?

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 7420
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: CME -x postponed?

Fri May 23, 2014 4:25 pm

Sorry, setting the sensor up to produce less than 1fps is tricky. Framerate is linked to exposure time and the time to read one line from the sensor. The result is stored in a 16-bit register in the sensor. That register overflows at 997ms in the 1080P mode (I've just found that it overflows at about 771ms in the binned mode - looking into potential fixes).

There is the potential for other solutions, but none are totally trivial.
The easiest approach would be to have the app handle the buffers between camera and video_encode instead of using a mmal_connection. Return N-1 out of N of the buffers straight back to the camera instead of sending them to the encoder, hence dividing the framerate by N. It gets a little messy, but easy enough at the sort of framerates you're wanting.
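A rough sketch of that callback shape (pool management and zero-copy setup are glossed over, and the globals are illustrative):

Code: Select all

#include "interface/mmal/mmal.h"

#define SKIP_N 30   /* 30fps sensor -> 1 encoded frame per second */

/* in a real app these would live in the state structure */
static MMAL_PORT_T *encoder_input_port;
static MMAL_POOL_T *camera_pool;

static void camera_video_cb(MMAL_PORT_T *port, MMAL_BUFFER_HEADER_T *buffer)
{
   static int frame_count = 0;

   if (++frame_count % SKIP_N == 0)
   {
      mmal_port_send_buffer(encoder_input_port, buffer);   /* forward 1 in N */
   }
   else
   {
      mmal_buffer_header_release(buffer);                  /* back to the pool */
      /* keep the camera fed with a fresh buffer from the pool */
      MMAL_BUFFER_HEADER_T *next = mmal_queue_get(camera_pool->queue);
      if (next)
         mmal_port_send_buffer(port, next);
   }
}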
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

ethanol100
Posts: 587
Joined: Wed Oct 02, 2013 12:28 pm

Re: CME -x postponed?

Fri May 23, 2014 4:35 pm

The framerate is specified as an integer in raspivid, so 1 would be the smallest possibility. But I am not even sure that 1 will work; in the past there was a limit of 2 frames per second.

But you can pause the recording and restart it after some time. Easiest would be to just modify raspivid to your needs. ;)

A simple shell script can help you, but it is not perfect... Just to get the idea:

Code: Select all

#!/bin/bash
# Usage: ./capSlowMo.sh <duration ms> <h264 output> <imv output>
# Start raspivid paused, in signal mode (-s): each SIGUSR1 toggles capture.
raspivid -v -s -i 'pause' -t $1 -x $3 -o $2 &
pid=$!
sleep 2;                # give raspivid time to start up
while kill -0 $pid 2> /dev/null; do
   kill -USR1 $pid;     # resume capture
   sleep 0.14;          # long enough to grab roughly one frame
   kill -USR1 $pid;     # pause again
   sleep 1.86;          # rest of the 2s period -> about 0.5fps
done
If you save it, for example, as "capSlowMo.sh" and make it executable ("chmod +x capSlowMo.sh"), you can run it with "./capSlowMo.sh 60000 /run/shm/test.h264 /run/shm/test.imv" to record a 0.5fps video for 60s, saving the video to /run/shm/test.h264 and the motion vectors to /run/shm/test.imv. If you would like slower framerates, increase the last sleep (1.86) to a higher value.

This is just trial and error: if the time between capture and pause is too short it will not capture an image, and if it is too long it will capture several images. If no frames are saved you can increase the second sleep, or decrease it if you get too many frames. And you do not know the exact timing of the frames, so velocities are hard to calculate.

But you can modify RaspiVid.c to stop after one frame is received and to output the exact time it was captured. Then you would have a good way to measure slow velocities. It is fun to play with the source, and it is not too complicated ;)

sharix
Posts: 200
Joined: Thu Feb 16, 2012 11:29 am
Location: Slovenia

Re: CME -x postponed?

Fri May 23, 2014 4:49 pm

Great, thanks for the example script. I want to try framerates of tens of seconds or even minutes per frame, so small differences won't matter. I was only worried that with such a workaround the motion vectors wouldn't be calculated properly.
