gagle
Posts: 82
Joined: Fri Feb 14, 2014 6:54 pm
Contact: Website

omxcam - OpenMAX camera abstraction layer

Wed Mar 19, 2014 11:33 am

I'd like to share a C library, with the appropriate Node.js bindings, that encapsulates camera usage. It doesn't use MMAL; it talks directly to OpenMAX IL.

C library
Node.js module

Why OpenMAX IL and not MMAL?
  • MMAL is not a camera library; it's a wrapper around OpenMAX IL plus other things. If you want to write a camera library, you're adding an extra layer: cam -> mmal -> omx. Why not remove MMAL? cam -> omx.
  • MMAL is not documented, so the learning curve is much steeper with MMAL than with OpenMAX IL.
  • OpenMAX IL is an open standard, whereas MMAL is a proprietary Broadcom API (not open source). Do you want to learn OpenMAX IL? The best way is to read and understand the source files; with omxcam you can.
Note: I don't have anything against MMAL. Without the help of the MMAL core maintainers this library wouldn't exist.

This library is MIT-licensed, so you can do whatever you want with it: sell, distribute, modify, etc.

Do you want to see SSCCE OpenMAX IL examples? See jpeg and h264.

TODO:
  • Try to fix the red things from here.
  • Implement remaining h264 settings from here.
  • Investigate the OpenGL world.
  • Node.js bindings.
  • Write docs.
Last edited by gagle on Sun Jul 20, 2014 5:41 pm, edited 10 times in total.

didi
Posts: 12
Joined: Fri Jun 14, 2013 7:53 pm

Re: rpicam - Camera Abstraction layer using OpenMAX IL

Thu Mar 20, 2014 6:24 am

hi gagle. does this driver work? did you break it down so we can use it?
if so, this would be the best news since i bought my camera!

later.............dd

jamesh
Raspberry Pi Engineer & Forum Moderator
Posts: 27010
Joined: Sat Jul 30, 2011 7:41 pm

Re: rpicam - Camera Abstraction layer using OpenMAX IL

Thu Mar 20, 2014 9:04 am

Thanks for the library, although I would definitely have gone with MMAL. MMAL was written specifically to make this whole area easier to use than OpenMAX, simply because OpenMAX is horribly difficult to get right (debugging, as I am sure you have found, is very, very unpleasant). MMAL handles all the nasty stuff for you, and has already been debugged. MMAL is entirely Doxygen commented, so it does actually have a lot of documentation.

What you say about MMAL being Broadcom proprietary is true. I don't believe that offsets the much greater ease of use presented by MMAL.

MMAL is a layer above OpenMAX in the same way that the OpenMAX RIL is a layer above OpenMAX, so there should be little or no performance difference between the two.
Principal Software Engineer at Raspberry Pi (Trading) Ltd.
Contrary to popular belief, humorous signatures are allowed.
I've been saying "Mucho" to my Spanish friend a lot more lately. It means a lot to him.

gagle
Posts: 82
Joined: Fri Feb 14, 2014 6:54 pm
Contact: Website

Re: rpicam - Camera Abstraction layer using OpenMAX IL

Thu Mar 20, 2014 1:35 pm

didi wrote:...
This is not a driver, it's just a lib around OpenMAX IL. The driver layer is located below OpenMAX: http://elinux.org/Raspberry_Pi_VideoCore_APIs

There are a lot of things to fix/improve/implement, so the answer here is: use at your own risk. The lib is still not versioned (e.g. v0.1.2) because it's still not usable.
jamesh wrote: ...
The main problem with MMAL is the lack of documentation. The Doxygen docs are OK, but I'd like to see some usage examples. Right now, you need to read the raspistill and raspivid source code and learn how MMAL is used. I tried to do this, but in my opinion they are very hard to follow.

jamesh
Raspberry Pi Engineer & Forum Moderator
Posts: 27010
Joined: Sat Jul 30, 2011 7:41 pm

Re: rpicam - Camera Abstraction layer using OpenMAX IL

Thu Mar 20, 2014 3:08 pm

gagle wrote: The main problem with MMAL is the lack of documentation. [...] Right now, you need to read the raspistill and raspivid source code and learn how MMAL is used. I tried to do this but in my opinion they are very hard to follow.
I did try to make them as easy to understand as possible but there are limits!

I guarantee OpenMAX code will be more complicated! I've not really looked - is there a lot of usage documentation available for it?

gagle
Posts: 82
Joined: Fri Feb 14, 2014 6:54 pm
Contact: Website

Re: rpicam - Camera Abstraction layer using OpenMAX IL

Thu Mar 20, 2014 7:40 pm

jamesh wrote: I guarantee OpenMAX code will be more complicated! I've not really looked - is there a lot of usage documentation available for it?
Nope. Just the 400-page spec and some examples.

jamesh
Raspberry Pi Engineer & Forum Moderator
Posts: 27010
Joined: Sat Jul 30, 2011 7:41 pm

Re: rpicam - Camera Abstraction layer using OpenMAX IL

Thu Mar 20, 2014 8:12 pm

gagle wrote: Nope. Just the 400-page spec and some examples.
Yes, the spec. I (tried to) read it. Impenetrable. And I actually work with one of the people who helped design it....

didi
Posts: 12
Joined: Fri Jun 14, 2013 7:53 pm

Re: rpicam - Camera Abstraction layer using OpenMAX IL

Fri Mar 21, 2014 1:50 pm

OK, I must have direct access to the driver routines. That means C functions that do not have hidden protocol logic buried in obfuscated C++ classes. For the purpose of robot vision I need to talk to the camera directly from my program. I must capture and crunch 10 frames per second. To do that I need the same functionality as raspiyuv, but very lean. Letting us have the lower classes would make your job much easier. CMUCam3 did this 5 years ago, with an OmniVision camera. They have a short and sweet real-time C library...............dd
Last edited by didi on Sat Mar 22, 2014 4:59 pm, edited 1 time in total.

jamesh
Raspberry Pi Engineer & Forum Moderator
Posts: 27010
Joined: Sat Jul 30, 2011 7:41 pm

Re: rpicam - Camera Abstraction layer using OpenMAX IL

Fri Mar 21, 2014 2:04 pm

didi wrote:OK boys, i will settle this conflagration right now. We, as programmers, must have direct access to the driver routines. That means C functions that do not have hidden protocol logic buried in obfuscated C++ classes. If you do not have that, you have NOTHING.
The Omnivision camera interface is SPI, which is very common (Omnivision calls it SCCB). This has been done many times before. This is not a big secret. We are not asking for a full-featured-point-and-click app. We need to talk to the camera.
Jamesh, you are obstructing progress. RaspberryPi purports to be open source. Broadcom is definitely not open source. You must decide where you stand and stop wasting our time. New programmers are perplexed and dismayed and attribute their lack-of-success to their inexperience. This is very sad and must stop. ................dd
I'd like you to withdraw that comment please. I have NOTHING to do with whether the GPU code is open sourced or not (which is where all the camera interface code is). I am not obstructing anything. I have spent months (in work and personal time) producing OPEN SOURCE software so that people can use the camera. Without that work YOU WOULD NOT HAVE A CAMERA AT ALL. You are just the sort of person who puts me right off EVER DOING ANY MORE WORK ON THE CAMERA. If this wasn't a family friendly forum, you would be getting a torrent of deserved abuse right now, because zealots like you do more to harm the OSS community than almost anything else.

It's not me who must stop, but you, with your rudeness, misinformation and obfuscation. You as programmers DO NOT NEED direct access to the driver routines (although for completeness it would be nice to have) - all that work has already been done for you by very experienced professionals in camera development.

The Omnivision camera is CSI-2 btw, with an I2C interface for setting up.

And on that note, I think I'll take a rest from camera work for the moment. Just a couple of minor bug fixes I need to get out, and I'll start looking at something else. The HW cursor still isn't working right, so I'll take a look at that.

Well done.

Chris_Reynolds
Posts: 72
Joined: Mon May 14, 2012 7:25 am

Re: rpicam - Camera Abstraction layer using OpenMAX IL

Fri Mar 21, 2014 4:32 pm

Heavens above.

didi, as I understand it James doesn't work for the Foundation. If you have a problem with how the RPi Foundation is supplying software support for its camera board then you should talk to them, not unload on a forum. What you shouldn't do is have a rant at someone who, as he says, put a lot of personal effort into developing software for the board.

If you don't like what James has done, there is a very good Python interface, and whether James and gagle agree on approach or not, there is also a fledgling C library.

I don't have the skills or time to create what these people have done, but I appreciate their efforts and have happily used James' code and the Python interface to create software. I feel neither perplexed nor dismayed.

Your approach to getting what you want is unlikely to get you anywhere but has upset someone who has given us something very useful.

gagle
Posts: 82
Joined: Fri Feb 14, 2014 6:54 pm
Contact: Website

Re: rpicam - Camera Abstraction layer using OpenMAX IL

Fri Mar 21, 2014 10:00 pm

Don't panic :mrgreen:

We as developers don't need access to the driver layer. OpenMAX/MMAL is just what we need. It's a hard job, especially if no one pays you to do it. I agree that the Raspberry Pi could be a little more open source, but we shouldn't forget that behind it there's a company, and companies need to be competitive to survive, which often means closed source.

We as developers can hack here and there to bring the community what it asks for. We love this kind of thing; our minds were designed for this kind of job. We love open source because it's the best way to share knowledge, so instead of arguing among ourselves, we can cooperate.

------------------

NEWS

Today I've been trying to add the resize component, but I'm failing and I don't know how I can solve this mess. Right now I have this:

Image

And I want to allow the user to attach multiple "resized RGB channels", basically because image processing algorithms take too much time with big images. The idea is: I send the user the original image along with the requested resized images. For example, a 2592x1944 JPEG and 2 resized images at 1/2 and 1/4 scale. For now, I'm trying to resize the image only once, something like this:

Image

I don't know if OpenMAX allows you to create 2 tunnels with the same origin port (do you know what the purpose of the nBufferCountActual parameter in OMX_IndexParamPortDefinition is?)

jamesh
Raspberry Pi Engineer & Forum Moderator
Posts: 27010
Joined: Sat Jul 30, 2011 7:41 pm

Re: rpicam - Camera Abstraction layer using OpenMAX IL

Fri Mar 21, 2014 10:06 pm

You need to use the splitter component to split the stream, then plumb the components into its outputs. You cannot attach two things to one port.
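[Editor's note: as a rough pseudocode sketch of the plumbing described above. The port numbers are the commonly documented Raspberry Pi ones (camera video out 71, video_splitter in 250, outputs 251+, video_encode in 200, resize in 60); all handle creation, state transitions, buffer negotiation and error handling are omitted, so this is illustrative only, not compilable code.]

```
/* Fan the camera's video port out through OMX.broadcom.video_splitter. */
OMX_SetupTunnel(camera,   71,  splitter, 250);  /* camera video -> splitter     */
OMX_SetupTunnel(splitter, 251, encoder,  200);  /* splitter out -> video_encode */
OMX_SetupTunnel(splitter, 252, resize,    60);  /* splitter out -> resize       */
/* Tunnelling the same source port twice (71 -> encoder AND 71 -> resize)
   is rejected: a port can participate in at most one tunnel. */
```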

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 9499
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: rpicam - Camera Abstraction layer using OpenMAX IL

Sat Mar 22, 2014 9:39 am

Jamesh is right that you can't connect two inputs to one output port using tunnels.
You could retrieve the buffers and attach the same pBuffer to a second buffer header and pass it to a second component too, just don't return it to the source component until both have returned the buffer.

The video_splitter component will allow you to split to multiple destinations. However, it doesn't support the proprietary tunnelling calls that camera to image_encode uses, so you would lose all EXIF data if you inserted it between them (bare IL buffers have no real mechanism for metadata).
You may be better off adding it to the output of the preview port. I won't say too much, but the proprietary extensions OMX_IndexConfigBrcmBufferFlagFilter (OMX_Index.h) and OMX_BUFFERFLAG_CAPTURE_PREVIEW (OMX_Broadcom.h) may be of use there to only get the preview buffer generated from the still capture source frame.
video_splitter can also do some format conversions on its output ports, whereas IIRC resize is a pure resize and can't change colour format.
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

didi
Posts: 12
Joined: Fri Jun 14, 2013 7:53 pm

Re: rpicam - Camera Abstraction layer using OpenMAX IL

Sat Mar 22, 2014 5:09 pm

Ok, sorry for the flame. I edited my response. If the Pi guys don't want to release the lower classes, they should let us know before we buy their stuff. I do not want the GPU code, just a few C routines to talk to the camera, which I know you have. Don't lose heart, I don't blame you; I know you have a tough job and appreciate your work. later.............dd

didi
Posts: 12
Joined: Fri Jun 14, 2013 7:53 pm

Re: rpicam - Camera abstraction layer using OpenMAX IL

Sun Mar 23, 2014 6:57 am

Good work Gagle. Man, I don't know how you waded through all that stuff.
I really do not like all these layers and classes. If I cannot cut out OpenMax and MMAL,
I can not and will not use the Pi. If the libraries are clean and open, I will consider using them.
I must have fast mission critical C (or Assembly) routines. I have found that it is usually easier
(and educational/fun) to write them myself. But I don't know if I can tackle the Pi. Too much
stuff to punch through. I think I will get started on plan B. Let me know how you make out.
rock on.................dd

jamesh
Raspberry Pi Engineer & Forum Moderator
Posts: 27010
Joined: Sat Jul 30, 2011 7:41 pm

Re: rpicam - Camera abstraction layer using OpenMAX IL

Sun Mar 23, 2014 9:35 am

didi wrote: Good work Gagle. [...] I really do not like all these layers and classes. If I cannot cut out OpenMax and MMAL, I can not and will not use the Pi. [...]
You seem woefully misinformed.

ALL the ARM side libraries are fully open source. They are all written in C. There is a fast communications layer to the GPU to tell it what to do. All the stuff on the GPU is C and vector assembler. The MMAL and OpenMAX libraries are the way to talk to the GPU - this IS the API. There are no other basic C routines on the ARM side as you seem to think. You will not be able to make things faster by writing your own code - this stuff is pretty well optimised - these are the libraries used in mobile phones and similar devices that use VC4, and they are proven and fast. They are also large, so doing it yourself would be pretty much a waste of time. You could write code to the very low-level communication layer to the GPU, but that is undocumented, would take months, and in all likelihood won't be as good as the existing libraries.

There is a reason for all these layers and classes. They reduce the complexity of doing it yourself. Seriously. That's what APIs are for - they reduce the complexity and time to implementation of programs using those features. By a hell of a lot. But if you want to do it the hard way, no-one here is going to stop you. It's educational if nothing else.

gagle
Posts: 82
Joined: Fri Feb 14, 2014 6:54 pm
Contact: Website

Re: rpicam - Camera abstraction layer using OpenMAX IL

Tue Mar 25, 2014 9:30 pm

NEWS

- I've tried to add the splitter component, but I cannot set up a tunnel to the image encoder because you cannot tunnel video and image domain ports. I've tried with manual allocation and buffer transfer, but then the encoder fails because it needs to be tunnelled directly from the camera due to the proprietary tunnelling (or it's too complex for me, my code hangs up :? ), so for now you can only get jpeg and rgb/yuv stills (no secondary resized still). And I don't want to use the preview port (with the buffer flag mechanism) because as far as I know the resolution is lower than with the still port (http://picamera.readthedocs.org/en/rele ... resolution) and because the resulting image doesn't have EXIF tags.

- I've implemented the h264 tunnel. It's basically a refactor of this example. The following repository shows an h264 video capture example: https://github.com/gagle/raspberrypi-openmax-h264

Next steps are:
- rgb/yuv video
- split the video tunnel and provide a resized rgb/yuv stream
- validate function parameters (stride, sliceheight, aspect ratio, camera settings, etc)

gagle
Posts: 82
Joined: Fri Feb 14, 2014 6:54 pm
Contact: Website

Re: rpicam - Camera abstraction layer using OpenMAX IL

Mon Mar 31, 2014 10:17 am

Hey people, I'm still here :P. Last week I had no time to do anything, and this week will be more or less the same.

h264 recording works fine in a self-contained program, but I still need to apply the changes to the library. To do that I need to use pthreads. It's a very simple design:

Image

I also need to add a cancelStill() function...

unphased
Posts: 23
Joined: Tue Apr 08, 2014 2:44 pm

Re: omxcam - OpenMAX camera abstraction layer

Wed Apr 09, 2014 9:46 pm

Hey guys, I am also tasked with a computer vision application using the Raspberry Pi. Basically, the image data pipeline I need is to run optimized C/C++ code over a stream of YUV or RGB rasters and then encode the result of that image processing (e.g. draw boxes and such, after recognizing features inside the video frame) into H.264. I have been having quite a difficult time finding the most efficient means of streaming the resultant H.264 stream over the network for use at ground control, but I think that challenge is a completely separate one.

So another way of looking at it is that I just need to inject a little bit of code into what raspivid can already do, but it's gotta run my image processing code on the raw image buffer prior to encoding it.

Both the onboard processing and H.264 aspects are absolutely crucial to getting the quality required. I can definitely live with much worse latency if I don't need to depend on the latency to do processing on the receiving client, and therefore H.264 streaming should be ideal as it will let me push a much higher resolution vs MJPEG, and the higher the resolution, the more precise the robotic control will be.

I am looking at the raspivid source and it's not yet clear to me if the way that openMAX/MMAL is being used allows me to jump in at this stage and implement some image processing.

What I am really hoping is possible is that we can thread it so that we can put the GPU to work on H.264 encoding of the previously captured frame while the current frame is being captured, which can then be processed, and repeat. This would leave a decent CPU time quota for running the arbitrary image processing routines. I don't know if the APIs here let this happen implicitly, so that I can just call something magical and non-blocking that pushes a frame into the encoder.

For example, encoder_buffer_callback(MMAL_PORT_T*, MMAL_BUFFER_HEADER_T*) seems relevant to having a better understanding. My suspicion (hope) at this point is that maybe this is passed over to MMAL and MMAL calls encoder_buffer_callback with a buffer containing the part of the H.264 encoded stream which corresponds to the last frame? I can't tell if at any point the uncompressed raw camera data passes through this raspivid program!

Some high-level overview of how this code works, and how I might be able to build in my processing step, would be really helpful!

And thanks everyone for contributing to this awesome little platform. When I get done with my work I will share it with the world as well.

unphased
Posts: 23
Joined: Tue Apr 08, 2014 2:44 pm

Re: omxcam - OpenMAX camera abstraction layer

Wed Apr 09, 2014 10:58 pm

I'm really sorry and I hope I don't derail the discussion. Maybe I should move my question into its own thread?

I did find that this code from raspivid.c is probably the part where I need to intervene.

Code: Select all

// Now connect the camera to the encoder
status = connect_ports(camera_video_port, encoder_input_port, &state.encoder_connection);

gagle
Posts: 82
Joined: Fri Feb 14, 2014 6:54 pm
Contact: Website

Re: omxcam - OpenMAX camera abstraction layer

Thu Apr 10, 2014 5:11 pm

Don't worry, this library will try to solve most of the problems with image processing, basically because the success of a bigger project that I have in mind depends on the success of this library. Right now you can capture video and images but I still need to fix and improve a lot of things. The past 2-3 weeks I've been busy and without internet access, but I promise that you will have the following:

- Optimized C library that provides a stream of images (jpeg/yuv/rgb) and video (h264/yuv/rgb).
- Easy to integrate with image processing algorithms.
- The video stream will emit encoded video data (h264) and if you want, a secondary resized stream in yuv/rgb, so you can emit an h264 video and at the same time apply image algorithms to the yuv/rgb stream.
- API super easy to use.
- Built with asynchronicity in mind (multithreaded design), so you can integrate it with other languages such as Node.js (in fact, I'd like to write the bindings: Node.js + Raspberry Pi = the union of the 2 most popular technologies nowadays).

Patience.

unphased
Posts: 23
Joined: Tue Apr 08, 2014 2:44 pm

Re: omxcam - OpenMAX camera abstraction layer

Thu Apr 10, 2014 6:30 pm

That sounds very awesome gagle!

There is a lot of pioneering work happening in this area.

I'll definitely be keeping an eye on this!

See wibble82's work here.

He hasn't got his stuff up on github yet. But the youtube video demo of GPU processing via shaders is very very impressive (and I like it because I'm quite comfortable writing frag shaders to do initial image processing steps).

Keep up the great work guys!

hjimbens
Posts: 86
Joined: Fri May 24, 2013 9:05 am

Re: omxcam - OpenMAX camera abstraction layer

Tue Apr 22, 2014 1:53 pm

Hi, I am using the camera to feed images from the CapturePort (71) to a texture using the egl_render component. That works fine.
When the user clicks on a button, I enable the StillImagePort (72) and create a jpeg file from the data on that port using the "image_encode" component in the same way as in https://github.com/gagle/raspberrypi-op ... ter/jpeg.c.
Unfortunately after creating the still, the last buffer that was presented to the CapturePort (71) with OMX_FillThisBuffer does not result in a FillBufferDone callback.
From http://home.nouwen.name/RaspberryPi/doc ... amera.html I guess this should be possible. Do I have to disable the CapturePort (71) before taking a still and re-enable it afterwards?

unphased
Posts: 23
Joined: Tue Apr 08, 2014 2:44 pm

Re: omxcam - OpenMAX camera abstraction layer

Tue Apr 22, 2014 6:25 pm

I hate to be that guy, but... Did you try it?

The reason I say this is that I am also genuinely curious (because that is a very powerful capability to be able to record video going through GL and simultaneously snap stills without dropping any frames in the video) and you appear to be in a very suitable environment for giving it a whirl.

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 9499
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: omxcam - OpenMAX camera abstraction layer

Tue Apr 22, 2014 7:03 pm

unphased wrote:The reason I say this is that I am also genuinely curious (because that is a very powerful capability to be able to record video going through GL and simultaneously snap stills without dropping any frames in the video) and you appear to be in a very suitable environment for giving it a whirl.
It is not possible to grab a still whilst encoding without dropping any frames with the current client code. The GPU can do it, but the extra setup code hasn't been released (extra image buffers in critical places, and a change in behaviour, as the still has to be processed in parallel with the encode still ongoing, since it takes more than one frame period).
I haven't time at the moment to investigate producing such demo code, or even confirming the state of the GPU firmware for doing it - sorry.
