hjimbens
Posts: 86
Joined: Fri May 24, 2013 9:05 am

Re: omxcam - OpenMAX camera abstraction layer

Tue Apr 22, 2014 7:09 pm

I hate to be that guy, but... Did you try it?
Yes, I tried it and it did not work. Sometimes, depending on the exact moment the capture code is called, I can get one more texture filled, but after that FillBufferDone is never called again.
Actually I am not recording the video, just displaying it, and dropping a few frames is no problem for me. I just want to display video from the CapturePort (71) and grab stills from the StillImagePort (72) at certain moments.
So this is not a performance issue, but an issue of the CapturePort (71) ceasing to send data.

unphased
Posts: 23
Joined: Tue Apr 08, 2014 2:44 pm

Re: omxcam - OpenMAX camera abstraction layer

Wed Apr 23, 2014 6:28 pm

Indeed, it will clearly be a big additional job to put on the little 2835's plate, so that all makes a lot of sense.

Theoretically, prioritizing the video encode with careful scheduling (and potentially interrupting the JPEG encode as time runs out) could avoid frame drops entirely, though it's certainly going to take some extra frame buffer memory to get it done.

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 9522
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: omxcam - OpenMAX camera abstraction layer

Wed Apr 23, 2014 7:01 pm

unphased wrote:Indeed, it will clearly be a big additional job to put on the little 2835's plate, so that all makes a lot of sense.

Theoretically, prioritizing the video encode with careful scheduling (and potentially interrupting the JPEG encode as time runs out) could avoid frame drops entirely, though it's certainly going to take some extra frame buffer memory to get it done.
It can do it. All the heavy lifting is done on the GPU. All the code is written, but may not be on the Raspberry Pi firmware branch and I haven't got the time to check at the moment (I don't know what spare cycles JamesH has at the moment either).

@hjimbens Your comment has just reminded me of a bug that we fixed a while back on our main branch. Is there a reason you are using port 71 instead of 70 for your preview? Port 71 is intended for video encode, and the bug was exactly this: encode not resuming after a still. If preview (port 70) was running, then it generally did the right thing.
I will try to find five minutes tomorrow to check whether that bug has been fixed on the Pi branch, but it sounds like it hasn't.
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

gagle
Posts: 82
Joined: Fri Feb 14, 2014 6:54 pm
Contact: Website

Re: omxcam - OpenMAX camera abstraction layer

Thu Apr 24, 2014 12:38 pm

hjimbens wrote:
I hate to be that guy, but... Did you try it?
Yes, I tried it and it did not work. Sometimes, depending on the exact moment the capture code is called, I can get one more texture filled, but after that FillBufferDone is never called again.
Actually I am not recording the video, just displaying it, and dropping a few frames is no problem for me. I just want to display video from the CapturePort (71) and grab stills from the StillImagePort (72) at certain moments.
So this is not a performance issue, but an issue of the CapturePort (71) ceasing to send data.
As 6by9 said, this is not possible. You cannot take a still from the still port (72) while you are capturing frames from the video port (71). You first need to stop the recording, capture the still and start recording again. The only solution I can see to "capture" a still while recording is simple: grab a frame from the video. The downside is that the still resolution will be the same as the video resolution. I have to investigate this more.

---------------------------------------

I have a question: What is the purpose of the preview port? Right now I'm just ignoring it.

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 9522
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: omxcam - OpenMAX camera abstraction layer

Thu Apr 24, 2014 1:30 pm

gagle wrote:As 6by9 said, this is not possible. You cannot take a still from the still port (72) while you are capturing frames from the video port (71). You first need to stop the recording, capture the still and start recording again. The only solution I can see to "capture" a still while recording is simple: grab a frame from the video. The downside is that the still resolution will be the same as the video resolution.
That is almost the exact opposite to what I said!

With default settings, enabling port 72 and setting OMX_IndexConfigPortCapturing on port 72 whilst port 71 is active will stop emitting frames from port 71 only whilst the still is being produced. It should automatically resume afterwards. The timestamps on the buffers from port 71 should all be correct assuming you have connected a clock component (and what joys clock components are - oh, you haven't played with them!).
There is the possibility of running both captures simultaneously to avoid dropping frames on the video encode - what is often called video snapshot. The main restrictions are that it needs more memory, and, as it does NOT change the sensor mode, if preview/video encode is running the sensor in the 1296x972 mode then asking for anything higher than that will result in upscaling. There's no real way around that unless your sensor can read out the full image at 30fps.
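The still-during-video sequence described above boils down to toggling capture on the still port while the video port stays active. As a non-runnable IL sketch (error handling, struct initialization and buffer plumbing omitted; note that OMX_IndexConfigPortCapturing is a Broadcom extension, not core IL):

```c
//Port 71 (video) is already enabled and emitting frames.
OMX_CONFIG_PORTBOOLEANTYPE capture;
//...initialize nSize/nVersion as usual...
capture.nPortIndex = 72;          //still image port

//Start the still capture: port 71 pauses while the still is
//produced and should resume automatically afterwards.
capture.bEnabled = OMX_TRUE;
OMX_SetConfig (camera, OMX_IndexConfigPortCapturing, &capture);

//...wait for the still buffer(s) to arrive from port 72...

//Stop the still capture.
capture.bEnabled = OMX_FALSE;
OMX_SetConfig (camera, OMX_IndexConfigPortCapturing, &capture);
```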
gagle wrote:I have a question: What is the purpose of the preview port? Right now I'm just ignoring it.
Er, the preview port would be for connecting to a preview display.

Seeing as you wanted to use IL, refer to the spec v1.2.0 section 8.9.1

Code: Select all

Name	camera
Description	Emits preview/viewfinder video and captured video according to settings.
Index	Domain	Direction	Description
VPB+0	video	output	Emits preview/viewfinder video.
VPB+1	video	output	Emits captured video.
OPB+0	other/time	input	Receives media time update/provides access to clock component.
Sections 8.9.1.2 and 8.9.1.3 then describe how components should be connected to achieve the video and still image capture use cases.

If you're a Khronos member, then there was a proposal IL533 from bug 6244 to extend the camera component to support the ports:

Code: Select all

Index	Domain	Direction	Description
VPB+0	video	output	Emits preview/viewfinder frames targeted to be shown to user on a display.
VPB+1	video	output	Emits captured video/camera frames.
IPB+0	image	output	Emits captured still images.
OPB+0	other/time	input	Receives media time update/provides access to clock component.
Whilst it was never adopted into the IL spec, it was implemented by Broadcom to achieve several use cases (mainly things like stills capture during video encode for video snapshot).
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

gagle
Posts: 82
Joined: Fri Feb 14, 2014 6:54 pm
Contact: Website

Re: omxcam - OpenMAX camera abstraction layer

Fri Apr 25, 2014 11:34 am

6by9 wrote: That is almost the exact opposite to what I said!

With default settings, enabling port 72 and setting OMX_IndexConfigPortCapturing on port 72 whilst port 71 is active will stop emitting frames from port 71 only whilst the still is being produced. It should automatically resume afterwards. The timestamps on the buffers from port 71 should all be correct assuming you have connected a clock component (and what joys clock components are - oh, you haven't played with them!).
There is the possibility of running both captures simultaneously to avoid dropping frames on the video encode - what is often called video snapshot. The main restrictions are that it needs more memory, and, as it does NOT change the sensor mode, if preview/video encode is running the sensor in the 1296x972 mode then asking for anything higher than that will result in upscaling. There's no real way around that unless your sensor can read out the full image at 30fps.
Ok, thanks, I appreciate the explanation.
6by9 wrote: Er, the preview port would be for connecting to a preview display.

Seeing as you wanted to use IL, refer to the spec v1.2.0 section 8.9.1

Code: Select all

Name	camera
Description	Emits preview/viewfinder video and captured video according to settings.
Index	Domain	Direction	Description
VPB+0	video	output	Emits preview/viewfinder video.
VPB+1	video	output	Emits captured video.
OPB+0	other/time	input	Receives media time update/provides access to clock component.
Sections 8.9.1.2 and 8.9.1.3 then describe how components should be connected to achieve the video and still image capture use cases.

If you're a Khronos member, then there was a proposal IL533 from bug 6244 to extend the camera component to support the ports:

Code: Select all

Index	Domain	Direction	Description
VPB+0	video	output	Emits preview/viewfinder frames targeted to be shown to user on a display.
VPB+1	video	output	Emits captured video/camera frames.
IPB+0	image	output	Emits captured still images.
OPB+0	other/time	input	Receives media time update/provides access to clock component.
Whilst it was never adopted into the IL spec, it was implemented by Broadcom to achieve several use cases (mainly things like stills capture during video encode for video snapshot).
Ok, then I will continue using only the video and still ports.

gagle
Posts: 82
Joined: Fri Feb 14, 2014 6:54 pm
Contact: Website

Re: omxcam - OpenMAX camera abstraction layer

Sun May 04, 2014 9:34 am

Today I'm pushing more updates:

- The API for recording video has been simplified to the limit. I've removed the sleep(), wake(), lock() and unlock() functions because they were causing subtle problems due to the multithreaded design. They are now integrated into the startVideo() and stopVideo() functions.
- I've added examples showing how to capture RGB and YUV video.
- I've added the RGBA format.
- I've added a benchmark to test the raw video capture speed, and I'm surprised by the real performance. Recording YUV video at VGA resolution (640x480) I only get ~20 fps without doing anything (the benchmark includes initialization and deinitialization); I simply fill a buffer, wait for the event using vcos_event_flags_get() and vcos_event_flags_set(), and then call a function with the buffer as a parameter. Is this normal? How should I fill and send the buffers to the user, then? I need to investigate this more.

Code: Select all

while (1){
	//Ask the camera to fill the buffer
	OMX_FillThisBuffer (handle, buffer);

	//Wait for the FillBufferDone event
	vcos_event_flags_get ();

	//Emit the buffer to the user
	bufferCallback (buffer);
}
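One likely explanation for the low benchmark numbers is that the loop above is fully serial: the next OMX_FillThisBuffer is only issued after the user callback returns, so the camera sits idle while each buffer is being consumed. The usual remedy is to keep at least two buffers in flight so that filling and consuming overlap. A minimal sketch of that handoff, with the OpenMAX side replaced by a stub thread (all names here are illustrative, not omxcam API):

```c
#include <pthread.h>

#define SLOTS  2                   /* buffers kept in flight            */
#define FRAMES 30                  /* frames to "capture" in this demo  */

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static int filled[SLOTS];          /* 1 = buffer ready for the consumer */

/* Stand-in for the camera component: fills each queued buffer in turn,
   like FillBufferDone firing for buffers submitted in advance. */
static void *camera_thread (void *arg){
  (void) arg;
  for (int f = 0; f < FRAMES; f++){
    int slot = f % SLOTS;
    pthread_mutex_lock (&lock);
    while (filled[slot]) pthread_cond_wait (&cond, &lock); /* need a free buffer */
    filled[slot] = 1;                                      /* "FillBufferDone"   */
    pthread_cond_broadcast (&cond);
    pthread_mutex_unlock (&lock);
  }
  return 0;
}

/* Consumer: while it processes slot n, the camera can already be
   filling slot n + 1. Returns the number of frames handled. */
int run_capture_demo (void){
  pthread_t cam;
  int consumed = 0;
  pthread_create (&cam, 0, camera_thread, 0);
  for (int f = 0; f < FRAMES; f++){
    int slot = f % SLOTS;
    pthread_mutex_lock (&lock);
    while (!filled[slot]) pthread_cond_wait (&cond, &lock); /* wait for data */
    pthread_mutex_unlock (&lock);
    /* bufferCallback (buffer) would run here, outside the lock */
    consumed++;
    pthread_mutex_lock (&lock);
    filled[slot] = 0;                                       /* requeue: "FillThisBuffer" */
    pthread_cond_broadcast (&cond);
    pthread_mutex_unlock (&lock);
  }
  pthread_join (cam, 0);
  return consumed;
}
```

With real OpenMAX buffers the same idea means allocating SLOTS buffers on the port, calling OMX_FillThisBuffer for each of them up front, and re-submitting each one as soon as its data has been handed to the user.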
The next steps are to learn OpenGL (I have some experience with WebGL) and start writing the Node.js bindings. For future reference: OpenGL textures

User avatar
gagle
Posts: 82
Joined: Fri Feb 14, 2014 6:54 pm
Contact: Website

Re: omxcam - OpenMAX camera abstraction layer

Mon May 12, 2014 4:20 pm

Ok, more news:

After some days stuck trying to compile the library with gyp, I'm starting to write the Node.js bindings. Soon I'll be able to capture images and video from Node.js without needing to fork a child process and call the raspistill/raspivid binaries.

node-omxcam repository

Stay tuned

unphased
Posts: 23
Joined: Tue Apr 08, 2014 2:44 pm

Re: omxcam - OpenMAX camera abstraction layer

Mon May 12, 2014 4:52 pm

Very exciting!!

I love node.js, let me tell you, so this is something we desperately need.

That being said, based on just my gut feeling, node is optimized primarily for x86 (not ARM) and generally runs quite sluggishly on the Pi. Mind you, I have not run many benchmarks yet, but it is my humble opinion that a simple C API is more awesome. In particular, if one starts to shuffle around Node Buffers filled with image pixel data, performance may well go through the floor.

Node bindings are still plenty awesome to have, though. I like prototyping with node a lot more than even Python, because the asynchronous, event-based architecture is just so much better. Yes, yes, Python modules, but you can't beat something designed with async in mind.

gagle
Posts: 82
Joined: Fri Feb 14, 2014 6:54 pm
Contact: Website

Re: omxcam - OpenMAX camera abstraction layer

Tue May 27, 2014 10:07 am

This week I'll push a new big update. The past two weeks I've been doing a complete refactor to follow C style conventions and best practices for C libs.

I've also fixed the color produced by the JPEG encoder. I didn't know why the preview port was being used to capture video in an h264 demo I saw in a random repository. 6by9 explained the reason in a post: the preview port needs to be enabled because the AGC (automatic gain control) and AWB (auto white balance) algorithms are only executed when that port is enabled.

Summary: the image and video color with omxcam are the same as with raspistill/raspivid.

magnatag
Posts: 33
Joined: Tue Mar 04, 2014 8:39 pm

Re: omxcam - OpenMAX camera abstraction layer

Thu May 29, 2014 3:52 pm

Will your library support adding an overlay (lines, text, etc) onto a video stream?

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 9522
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: omxcam - OpenMAX camera abstraction layer

Thu May 29, 2014 4:13 pm

magnatag wrote:Will your library support adding an overlay (lines, text, etc) onto a video stream?
At the moment there is no way to alter the images that are recorded without taking the buffers to the ARM for modification and then returning them to the GPU. This has quite an overhead.

On my list of "nice things to do" is a GPU-side video_overlay component to add an overlay buffer to the images as they pass through the pipe. When I'll actually get a chance to do it is another question!
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

jamesh
Raspberry Pi Engineer & Forum Moderator
Posts: 27021
Joined: Sat Jul 30, 2011 7:41 pm

Re: omxcam - OpenMAX camera abstraction layer

Thu May 29, 2014 4:28 pm

6by9 wrote:
magnatag wrote:Will your library support adding an overlay (lines, text, etc) onto a video stream?
At the moment there is no way to alter the images that are recorded without taking the buffers to the ARM for modification and then returning them to the GPU. This has quite an overhead.

On my list of "nice things to do" is a GPU-side video_overlay component to add an overlay buffer to the images as they pass through the pipe. When I'll actually get a chance to do it is another question!
I was thinking of a modification to the 'annotate' sw-component... I had that working with an internal font definition at some point, which could superimpose text onto the video stream. Using the mailbox allocate functions it would be possible to pass bitmaps and/or font definitions to the component fairly easily. We can have a chat next week when I am back in the office.
Principal Software Engineer at Raspberry Pi (Trading) Ltd.
Contrary to popular belief, humorous signatures are allowed.
I've been saying "Mucho" to my Spanish friend a lot more lately. It means a lot to him.

magnatag
Posts: 33
Joined: Tue Mar 04, 2014 8:39 pm

Re: omxcam - OpenMAX camera abstraction layer

Thu May 29, 2014 6:13 pm

Thank you both for working so hard and giving us such an amazing device!

gagle
Posts: 82
Joined: Fri Feb 14, 2014 6:54 pm
Contact: Website

Re: omxcam - OpenMAX camera abstraction layer

Tue Jun 24, 2014 7:34 pm

News:

- All the settings are validated before starting the camera.
- The settings can be updated while recording.
- Lots of fixes.

Roadmap to v0.0.1:

- Refactor the video/still recording to abstract common functionality (all the thread behaviour).
- The shutter speed auto mode is still wrong!?
- More h264 settings.
- BGR (for OpenCV).

Future:

- Lots of things: splitter, opengl.
- Maybe create a Linux package? How?
Last edited by gagle on Thu Jun 26, 2014 8:30 pm, edited 1 time in total.

gagle
Posts: 82
Joined: Fri Feb 14, 2014 6:54 pm
Contact: Website

Re: omxcam - OpenMAX camera abstraction layer

Thu Jun 26, 2014 8:29 pm

Please, I need feedback. This project is taking me a lot of time (3+ months) and I want to return to my well-loved Node.js. As soon as I write the full docs and release v0.0.1 I'd like to receive feedback (I prefer criticism to "you're superb!"). Thanks! :oops:

gagle
Posts: 82
Joined: Fri Feb 14, 2014 6:54 pm
Contact: Website

Re: omxcam - OpenMAX camera abstraction layer

Tue Jul 08, 2014 6:05 pm

I'm going to add 3 more functions in order to allow asynchronous code. Right now you have omxcam_video_start(): the thread is blocked for the specified amount of time or unblocked manually, and as soon as data arrives a buffer callback is executed. Internally this is implemented with an infinite loop. It's the simplest way to do synchronous operations, but it's very complex to use with asynchronous code.

Asynchronous means: "me, as a client, I'll let you know when I need data". Asynchronous code uses a multithreaded architecture, and with the infinite loop this is very complex and sloooooow because you need some kind of inter-thread communication, aka consumer-producer. The fastest implementation is a lock-free queue. You can imagine all the complexity just to pass data to other threads...

So... the solution is to let the client obtain the data when it needs it. No lock-free queues, no mutexes, no nothing, because all of that depends on the client. OpenMAX works this way, so it's just a thin wrapper around it. This will be implemented as follows:

Code: Select all

omxcam_video_start_async ();
...
//When the client needs to read the data
//This will call the buffer callback
omxcam_video_read_async ();
...
//When the client doesn't need more data
omxcam_video_stop_async ();

gagle
Posts: 82
Joined: Fri Feb 14, 2014 6:54 pm
Contact: Website

Re: omxcam - OpenMAX camera abstraction layer

Wed Jul 09, 2014 7:38 pm

oops, my rpi has died. It doesn't turn on. :lol:

:(

The rpi needs some kind of switch to turn it on. Current pics are bad.

EDIT:

False alarm, it seems that the SD card adapter is not working properly. Yes, the shitty SD card adapter with the raspberry logo...
Last edited by gagle on Tue Jul 15, 2014 5:22 pm, edited 1 time in total.

gagle
Posts: 82
Joined: Fri Feb 14, 2014 6:54 pm
Contact: Website

Re: omxcam - OpenMAX camera abstraction layer

Mon Jul 14, 2014 9:33 am

Today I come with exciting news. This weekend I managed to implement a very simple but functional Node.js binding, in both async and sync flavours. This week I'm going to focus on the Node.js lib and hopefully, by the end of the week, I'll have a simplistic prototype of a websocket server serving h264 video. Let's see how fast node is. Stay tuned :D

gagle
Posts: 82
Joined: Fri Feb 14, 2014 6:54 pm
Contact: Website

Re: omxcam - OpenMAX camera abstraction layer

Fri Jul 18, 2014 9:43 pm

Video capture examples

The websocket server is not working correctly yet; it's still under development.

Recording 2s VGA video:

Code: Select all

var fs = require ("fs");
var omxcam = require ("omxcam");

var ws = fs.createWriteStream ("video.h264")
    .on ("error", function (error){
      console.error (error);
      vs.stop ();
    });

var vs = omxcam.createVideoStream ({ width: 640, height: 480, timeout: 2000 })
    .on ("error", function (error){
      console.error (error);
      ws.end ();
    });

vs.pipe (ws);
Still lots of things to do.

---------

Ok, this is my analysis:

The websocket server is working correctly; I receive the buffers in the browser. The problem is that h264 support in the browser is crap. I need to investigate the Broadway.js lib or implement the MJPEG format, which is inefficient compared with h264 but lets you view the video using a simple img tag...

nodejs (websocket server) -> nodejs (websocket client) is also working correctly, BUT it seems that the inline option is required (SPS and PPS headers before the first received IDR frame): http://www.raspberrypi.org/forums/viewt ... 43&t=82380

For now, I'm going to focus on the Node.js lib.

gagle
Posts: 82
Joined: Fri Feb 14, 2014 6:54 pm
Contact: Website

Re: omxcam - OpenMAX camera abstraction layer

Sat Jul 26, 2014 3:10 pm

C vs Nodejs, bench

width: 640
height: 480
frames: 30
  • C
    set up h264: 323 ms
    tear down h264: 58 ms
    video rgb: 28.17 fps (1065 ms)
    video yuv (npt): 27.86 fps (1077 ms)
  • Nodejs
    set up h264: 326 ms
    tear down h264: 46 ms
    video rgb: 17.28 fps (1736 ms) -63%
    video yuv: 20.59 fps (1457 ms) -35%
The set up and tear down have near-identical performance. I don't know why, but sometimes the tear down in node is faster than in C :| .
Obviously Node.js is slower than C. Take into account that this is not a real benchmark, since h264 is not being benchmarked for the moment. No one wants to manipulate RGB/YUV data from Node.js due to the slow performance, but in case you need to do so, the API doesn't restrict you.

Focusing again on the C lib.

gagle
Posts: 82
Joined: Fri Feb 14, 2014 6:54 pm
Contact: Website

Re: omxcam - OpenMAX camera abstraction layer

Tue Jul 29, 2014 2:41 pm

Very, very good. Things are going well. I've fixed the websocket client/server and now it works, but sometimes I get a segmentation fault in the server when I try to write to the file in the client, which is very weird. Perhaps the SD card write speed is too low.

https://github.com/gagle/node-omxcam/tr ... ket-server

The camera is executed with these settings:

Code: Select all

{
  width: 640,
  height: 480,
  h264: {
    idrPeriod: 10,
    inlineHeaders: true
  }
}
And when the client receives a buffer it checks for the first SPS:

Code: Select all

var sps = false;
...
websocketClient.on ("message", function (data){
  //Wait until an SPS NAL unit is received
  //(the NAL unit type is the low 5 bits of the byte after the start code; SPS is type 7)
  if (!sps){
    if ((data[4] & 0x1f) !== 7) return;
    sps = true;
  }
  writeBufferToFile (data);
});
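For reference, that check keys off the H.264 NAL unit header: after the four-byte Annex B start code (00 00 00 01), the low five bits of the next byte are the NAL unit type, and an SPS is type 7 (a PPS is 8, an IDR slice is 5); masking with 0x1f is therefore safer than 0x07, which would also match other types. A tiny helper in C (hypothetical names, not part of omxcam):

```c
/* NAL unit type = low 5 bits of the byte after the 00 00 00 01 start code. */
int nal_unit_type (const unsigned char *annexb){
  return annexb[4] & 0x1f;
}

/* SPS (sequence parameter set) is NAL unit type 7. */
int is_sps (const unsigned char *annexb){
  return nal_unit_type (annexb) == 7;
}
```

Note this only inspects units that begin with a four-byte start code; the three-byte form (00 00 01) would need the offset adjusted.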
Captura 1.png (attached screenshot)
Last edited by gagle on Thu Jul 31, 2014 5:27 pm, edited 2 times in total.

gagle
Posts: 82
Joined: Fri Feb 14, 2014 6:54 pm
Contact: Website

Re: omxcam - OpenMAX camera abstraction layer

Tue Jul 29, 2014 4:50 pm

Motion vectors can now be captured. The bad news is that I have no clue about motion vectors: http://picamera.readthedocs.org/en/rele ... ector-data

The last thing I need to do is expose the DRC parameter. Then I "only" need to refactor the still capture and modify the error handling, because it currently has some problems with asynchronous code (omxcam_last_error() will be removed), and the first version of omxcam will be released.

waveform80
Posts: 359
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK
Contact: Website Twitter

Re: omxcam - OpenMAX camera abstraction layer

Thu Jul 31, 2014 10:38 am

gagle wrote:Motion vectors can now be captured. The bad news is that I have no clue about motion vectors: http://picamera.readthedocs.org/en/rele ... ector-data

The last thing I need to do is expose the DRC parameter. Then I "only" need to refactor the still capture and modify the error handling, because it currently has some problems with asynchronous code (omxcam_last_error() will be removed), and the first version of omxcam will be released.
Sounds like an impressive project! Do feel free to drop me a mail if you need a hand understanding stuff like the inline motion vectors. I can point you to the relevant bits of picamera (or raspivid, which is where I get most of my inspiration from anyway :)

Dave.
Author of / contributor to a few pi related things (picamera, Sense HAT emulator, gpio-zero, piwheels, etc.), and currently a software engineer at Canonical responsible for Ubuntu Server and Core on the Raspberry Pi.

gagle
Posts: 82
Joined: Fri Feb 14, 2014 6:54 pm
Contact: Website

Re: omxcam - OpenMAX camera abstraction layer

Thu Jul 31, 2014 5:42 pm

Ok, thanks Dave. :D

Tomorrow I'll take a look at the DRC parameter.

EDIT: ok, done, but DRC only works in still mode.
