ethanol100
Posts: 583
Joined: Wed Oct 02, 2013 12:28 pm

Re: Pure Python camera interface

Sun May 03, 2015 11:07 am

electronmage wrote:

Code: Select all

Traceback (most recent call last):
  File "/home/pi/picamera.py", line 1, in <module>
    import picamera
  File "/home/pi/picamera.py", line 4, in <module>
    camera = picamera.PiCamera()
AttributeError: 'module' object has no attribute 'PiCamera'
Any ideas?
Did you install the picamera software?
You can run

Code: Select all

sudo apt-get update
sudo apt-get install python-picamera
and then try again.

User avatar
waveform80
Posts: 303
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Sun May 03, 2015 12:18 pm

electronmage wrote:Good morning,

I was awakened by cats fighting outside and couldn't go back to sleep, so I thought I would work on getting my camera to work on my PI2.

raspistill -o photo.jpg works just fine, but the simplest example from picamera fails, and it has failed in two distinctly different ways.

Initially, the first example from "picamera.readthedocs.org" was starting to take the picture, but would time out. The photo file would be created, but would contain no data.

Now, the same example program won't run and gives the following errors.

Code: Select all

import picamera
from time import sleep

camera = picamera.PiCamera()
camera.capture('image.jpg')

camera.start_preview()
camera.vflip = True
camera.hflip = True
camera.brightness = 60

camera.start_recording('video.h264')
sleep(5)
camera.stop_recording()
The error I am getting is as follows:

Code: Select all

Traceback (most recent call last):
  File "/home/pi/picamera.py", line 1, in <module>
    import picamera
  File "/home/pi/picamera.py", line 4, in <module>
    camera = picamera.PiCamera()
AttributeError: 'module' object has no attribute 'PiCamera'
Any ideas?
I think you've named your script "picamera.py" which means when your script tries to import picamera it winds up importing itself instead of the picamera library. Just rename the script to something else and it ought to work.
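A quick way to confirm what's actually being imported (just a sanity check; run it from a script with any other name, or from the interactive prompt):

Code: Select all

import picamera
print(picamera.__file__)
# If this prints /home/pi/picamera.py (or picamera.pyc) rather than a path under
# the system's dist-packages directory, your own script is shadowing the library.
Also delete any leftover picamera.pyc sitting next to the old script, otherwise it will keep shadowing the real library even after the rename.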

Dave.

spikedrba
Posts: 75
Joined: Fri Feb 28, 2014 2:19 am

Re: Pure Python camera interface

Sun May 03, 2015 5:05 pm

Hi all,

posted this to the github issues, but maybe this is a better place for discussion.

I've been using picamera for a while and it's been all kinds of wonderful for getting a grip on the workings of the RPi's camera and developing a prototype. I am now moving toward a more "production" stage and have started to pay more attention to performance, especially if I hope to get this going on an A+, which would be great.

The core of what I'm doing revolves around motion detection, for which I've been referencing the following resources:

https://raw.githubusercontent.com/citru ... imotion.py
http://bits.citrusbyte.com/motion-detec ... pberry-pi/

From there I've been playing with these two simple scripts:

https://github.com/citrusbyte/pimotion/ ... imotion.py
https://gist.github.com/spikedrba/aecb82d8b51e991dbd01

In both cases, top/vmstat show CPU usage on a B+ close to 30% and 40% respectively while not recording, and 50% and 70% while recording.

Testing the latest raspimjpeg with motion detection (https://github.com/roberttidey/userland ... spiMJPEG.c), base CPU usage is 8%, rising to 23% when displaying motion vectors (worth keeping in mind that this seems to be primarily due to displaying the images containing the vectors, which requires conversion from YUV to JPEG; otherwise the analysis itself would not make that much of a difference in CPU usage).

To be clear, I don't mean to make it a Python vs C argument; I'm just looking for input to better understand the long-term performance of something built around picamera. Am I doing something dumb here that's making those Python implementations of motion detection very inefficient?

The baseline isn't too far off, with a simple camera.start_recording taking 10% on the same B+.

The memory profile, on the other hand, seems quite different: even the simple script takes 3% of the Pi's memory, rising to 10% when doing motion detection, while raspimjpeg sits steadily below 1%.

Any input would be much appreciated. Thanks in advance,

Spike

btidey
Posts: 1616
Joined: Sun Feb 17, 2013 6:51 pm

Re: Pure Python camera interface

Sun May 03, 2015 10:00 pm

spikedrba wrote: Testing the latest raspimjpeg with motion detection (https://github.com/roberttidey/userland ... spiMJPEG.c), base CPU usage is 8%, rising to 23% when displaying motion vectors (worth keeping in mind that this seems to be primarily due to displaying the images containing the vectors, which requires conversion from YUV to JPEG; otherwise the analysis itself would not make that much of a difference in CPU usage).
Spike
Like you said, the base CPU usage increase when raspimjpeg displays motion vectors was largely due to the conversion process, and some rescaling in particular.

This has now been improved/eliminated, and I see very little difference in CPU usage between live image view and vector view, both hovering around the 8% mark. This is still experimental at this stage whilst vector processing for motion detection is being added.

User avatar
waveform80
Posts: 303
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Sun May 03, 2015 11:53 pm

spikedrba wrote:Hi all,

posted this to the github issues, but maybe this is a better place for discussion.
Hi Spike,

I saw the GitHub ticket and it's on the list of things I want to have a look at, but I'm a bit swamped at the moment with a new addition to the family! Simple things (like the script question above) I can respond to easily, but more complex things might take a while for me to get around to. Still, with a bit of luck others might want to jump in and have a go :)

I've had a quick scan over your pimotion code and I can't see anything obviously nasty from a performance perspective (the other gist looked rather more involved so I haven't dug into that yet). That said, I'm not terribly surprised that having a python script process fairly large numpy arrays 10 times a second pegs the CPU at 50%; even though numpy is largely compiled C that's still a fair amount of work going on in pure python which is several thousand times slower than C. It's probably worth using something like cProfile with it to find out where the real hotspots are but unfortunately I don't have the time to do that right now (might do next week though).
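If you do get a chance to profile it, something like this is the quickest route I know (a rough sketch; it assumes the script's main loop lives in a main() function, and 'motion.prof' is just a placeholder filename):

Code: Select all

import cProfile
import pstats

# run the motion detection loop under the profiler and save the stats
cProfile.run('main()', 'motion.prof')

# show the 20 most expensive calls by cumulative time
pstats.Stats('motion.prof').sort_stats('cumulative').print_stats(20)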


Dave.

spikedrba
Posts: 75
Joined: Fri Feb 28, 2014 2:19 am

Re: Pure Python camera interface

Mon May 04, 2015 12:53 am

Dave
I saw the GitHub ticket and it's on the list of things I want to have a look at, but I'm a bit swamped at the moment with a new addition to the family! Simple things (like the script question above) I can respond to easily, but more complex things might take a while for me to get around to. Still, with a bit of luck others might want to jump in and have a go :)
First of all, congrats on the new addition to the family! And second, I hope posting here didn't feel like sidestepping you in any way; I just thought the forum might spark up some conversation and relieve you from having to answer everything yourself.

I tried to give cProfile a shot on the simple code (it's the same idea as the larger script anyway, so feel free to ignore that one), but couldn't put my finger on anything interesting, although it was also the first time I was using it. I'll give it another go some time this week. However, just to be sure I'm understanding this right, that MotionAnalysis class is using the h264 motion vectors, correct? I'm wondering if anything could be done using the mmal interface to process the vectors on the GPU rather than taking them CPU-side and using numpy.

take care and congrats again!

Spike

User avatar
waveform80
Posts: 303
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Mon May 04, 2015 4:31 pm

Hi Spike,
spikedrba wrote:Dave
I saw the GitHub ticket and it's on the list of things I want to have a look at, but I'm a bit swamped at the moment with a new addition to the family! Simple things (like the script question above) I can respond to easily, but more complex things might take a while for me to get around to. Still, with a bit of luck others might want to jump in and have a go :)
First of all, congrats on the new addition to the family! And second, I hope posting here didn't feel like sidestepping you in any way; I just thought the forum might spark up some conversation and relieve you from having to answer everything yourself.
No, no, not at all - the more forums, the more chance someone can jump in! It's more my way of apologising that I've answered some things recently (all the simple stuff, basically) and ignored all the complex stuff until I can give it some serious attention!
spikedrba wrote:I tried to give cProfile a shot on the simple code (it's the same idea as the larger script anyway, so feel free to ignore that one), but couldn't put my finger on anything interesting, although it was also the first time I was using it. I'll give it another go some time this week. However, just to be sure I'm understanding this right, that MotionAnalysis class is using the h264 motion vectors, correct? I'm wondering if anything could be done using the mmal interface to process the vectors on the GPU rather than taking them CPU-side and using numpy.

take care and congrats again!

Spike
Yup, the motion analysis does indeed use the h264 motion vectors. Or more precisely, anything that you point motion_output to (in the start_recording call) will receive the motion vector data. The motion analysis class just does a little work to convert that into a useful numpy array (I *think* it's about as efficient as it can be but if anyone wants to check over the array module and tell me I'm doing something heinous performance-wise, I'd be most interested!). Other than that I'll try and find some time to profile it myself and see if I can spot anything.
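For reference, this is roughly the shape of the motion-vector analysis I mean (close to the recipe in the docs; the magnitude and block-count thresholds here are arbitrary):

Code: Select all

import numpy as np
import picamera
import picamera.array

class MyMotionDetector(picamera.array.PiMotionAnalysis):
    def analyse(self, a):
        # a is a numpy record array with 'x', 'y' and 'sad' fields,
        # one entry per 16x16 macro-block of the frame
        magnitude = np.sqrt(
            np.square(a['x'].astype(np.float)) +
            np.square(a['y'].astype(np.float)))
        # flag motion if more than 10 blocks moved by more than 60 units
        if (magnitude > 60).sum() > 10:
            print('Motion detected!')

with picamera.PiCamera() as camera:
    camera.resolution = (640, 480)
    camera.framerate = 30
    camera.start_recording(
        '/dev/null', format='h264',
        motion_output=MyMotionDetector(camera))
    camera.wait_recording(30)
    camera.stop_recording()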

Dave.

joeramsay
Posts: 15
Joined: Sat Jul 26, 2014 9:49 am

Re: Pure Python camera interface

Sat May 16, 2015 8:50 pm

I'm getting a strange error when I try to take a still with the camera module - I noticed that somebody mentioned it about a year ago in this thread, but I couldn't find a solution in there. Running both 'raspistill -o x.jpg' in the shell and 'cam.capture('x.jpg')' using the Python interface returns the same problem - I get the error 'picamera.exc.PiCameraRuntimeError: Timed out waiting for capture to end'. I get this message on about 90% of attempts. Stupidly, I didn't ground myself before taking the camera out of its bag - could this be anything to do with it? I've upgraded the firmware and enabled the camera in raspi-config. Any help would be awesome

User avatar
waveform80
Posts: 303
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Sat May 16, 2015 10:32 pm

joeramsay wrote:I'm getting a strange error when I try to take a still with the camera module - I noticed that somebody mentioned it about a year ago in this thread, but I couldn't find a solution in there. Running both 'raspistill -o x.jpg' in the shell and 'cam.capture('x.jpg')' using the Python interface returns the same problem - I get the error 'picamera.exc.PiCameraRuntimeError: Timed out waiting for capture to end'. I get this message on about 90% of attempts. Stupidly, I didn't ground myself before taking the camera out of its bag - could this be anything to do with it? I've upgraded the firmware and enabled the camera in raspi-config. Any help would be awesome
Given that it's not working with raspistill either, I'm afraid there are only a few possibilities here:
  • There's something wrong with the camera's cable
  • There's something wrong with the camera module
  • There's something wrong with the Pi's CSI connector (very unlikely, but I did come across this for the first time today at the Manchester Raspberry Jam!)
If at all possible, test with another camera module or cable to confirm or deny the above, but it definitely sounds like something hardware related.


Dave.

User avatar
DougieLawson
Posts: 35801
Joined: Sun Jun 16, 2013 11:19 pm
Location: Basingstoke, UK
Contact: Website Twitter

Re: Pure Python camera interface

Sun May 17, 2015 8:34 am

Dave has missed the most common camera problem.

The SUNNY connector between the image sensor and the camera board. It needs a firm amount of pressure between a finger and thumb to reseat that connector. [Note: take static precautions first.]

Image
Note: Having anything humorous in your signature is completely banned on this forum. Wear a tin-foil hat and you'll get a ban.

Any DMs sent on Twitter will be answered next month.

This is a doctor free zone.

User avatar
waveform80
Posts: 303
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Sun May 17, 2015 5:53 pm

DougieLawson wrote:Dave has missed the most common camera problem.

The SUNNY connector between the image sensor and the camera board. It needs a firm amount of pressure between a finger and thumb to reseat that connector. [Note: take static precautions first.]

Image
Ah yes, thanks Doug - I've heard that's common too (oddly I've never encountered it in a workshop yet, but that's random failures for you!)

joeramsay
Posts: 15
Joined: Sat Jul 26, 2014 9:49 am

Re: Pure Python camera interface

Sun May 17, 2015 6:43 pm

The Sunny connector was firmly in place. It must be the board itself. What a moron I was. Thanks for your help, anyway

tsalex
Posts: 2
Joined: Mon Jun 15, 2015 10:50 pm

Re: Pure Python camera interface

Mon Jun 15, 2015 11:03 pm

Hi Dave,

With the following code I am getting an output FPS of 20~22 and I can't manage to get anything better than that. My target is to be able to capture 720p at 30 FPS; do you think this is in any way possible?

Code: Select all

import io
import time
import picamera
with picamera.PiCamera() as camera:
    camera.framerate=42
    camera.resolution=(640,480)
    stream = io.BytesIO()
    c=0
    start=time.time()
    for foo in camera.capture_continuous(stream, format='jpeg', use_video_port=True):
        stream.truncate()
        stream.seek(0)
        c+=1
        if(c>100):
                break
    print 100/(time.time()-start)

User avatar
jbeale
Posts: 3476
Joined: Tue Nov 22, 2011 11:51 pm
Contact: Website

Re: Pure Python camera interface

Tue Jun 16, 2015 5:30 am

If you want video frame rates (30 fps or more) you should probably use the video codec, which is H264; that way you can even get full HD at 30 fps.
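Something along these lines (a minimal sketch using the standard picamera recording calls) will record full HD at 30 fps:

Code: Select all

import picamera

with picamera.PiCamera() as camera:
    camera.resolution = (1920, 1080)
    camera.framerate = 30
    camera.start_recording('video.h264')
    camera.wait_recording(10)  # record for 10 seconds
    camera.stop_recording()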

User avatar
waveform80
Posts: 303
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Tue Jun 16, 2015 8:13 am

tsalex wrote:Hi Dave,

With the following code I am getting an output FPS of 20~22 and I can't manage to get anything better than that. My target is to be able to capture 720p at 30 FPS; do you think this is in any way possible?

Code: Select all

import io
import time
import picamera
with picamera.PiCamera() as camera:
    camera.framerate=42
    camera.resolution=(640,480)
    stream = io.BytesIO()
    c=0
    start=time.time()
    for foo in camera.capture_continuous(stream, format='jpeg', use_video_port=True):
        stream.truncate()
        stream.seek(0)
        c+=1
        if(c>100):
                break
    print 100/(time.time()-start)
You certainly don't want to use capture_continuous for that, as under the covers it's setting up an encoder and destroying it again between each frame. To go as fast as possible you want to set up a video recording and extract individual frames. Thankfully with JPEG this is actually quite easy due to JPEG's magic number being guaranteed not to appear anywhere in the rest of the stream. The following gist demonstrates the principle (basically just start an MJPEG recording then look for the FFD8 byte sequence in the output buffers):

https://gist.github.com/waveform80/263b9c8bdcb1e9b79749

Now, although that'll easily capture 720p at 30fps (in fact on my Pi2 it easily manages 42fps when recording to pure in-memory BytesIO streams) you'll have a tough time doing much with them at that rate!
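In case the gist link ever dies, the core of the idea is just a custom output object (anything with a write() method can be passed to start_recording) that watches for the JPEG start-of-image marker; a rough sketch:

Code: Select all

import io
import picamera

class FrameSplitter(object):
    # Splits an MJPEG recording into individual JPEG frames on the FFD8 marker
    def __init__(self):
        self.frame_count = 0
        self.buffer = io.BytesIO()

    def write(self, buf):
        if buf.startswith(b'\xff\xd8'):
            # start of a new JPEG: the previous frame is complete in self.buffer,
            # so this is where you'd process or save it before resetting
            self.buffer.seek(0)
            self.buffer.truncate()
            self.frame_count += 1
        self.buffer.write(buf)

with picamera.PiCamera() as camera:
    camera.resolution = (1280, 720)
    camera.framerate = 30
    output = FrameSplitter()
    camera.start_recording(output, format='mjpeg')
    camera.wait_recording(5)
    camera.stop_recording()
    print('Captured %d frames' % output.frame_count)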

Dave.

User avatar
waveform80
Posts: 303
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Tue Jun 16, 2015 9:01 am

jbeale wrote:If you want video frame rates (30 fps or more) you should probably use the video codec, which is H264; that way you can even get full HD at 30 fps.
Just to follow up on jbeale's point (which is well made), it's probably about time I did a post on realistic processing limits on the Pi...

Whenever you're working on the Pi the major thing in your head should be: bandwidth. The Pi has pretty severe limits on bandwidth in numerous places - firstly disk (SD card or USB), but also memory. This is one of the reasons it's sometimes faster to work with JPEG output from the camera instead of unencoded YUV/RGB; although the latter may involve less processing, the former involves moving smaller chunks of memory from the GPU to the CPU.

Now if you're considering doing serious visual processing (face recognition, object tracking, etc.) you can pretty much forget doing 720p30 on the Pi (unless you're willing to get down'n'dirty with the GPU - but that's outside my realm of experience). Let's assume you go for the next simplest option: capture on the Pi, and ship the frames over the network to some beefy box capable of doing the processing fast enough.

If the box is fast enough to do the serious visual processing at that rate it's almost certainly fast enough to do the (comparatively trivial) video decoding and frame splitting as well. Hence, just record video on the Pi and dump the stream as-is over the network.
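In picamera terms that's nothing more exotic than pointing start_recording at a socket (the hostname and port below are placeholders for whatever box is doing the processing):

Code: Select all

import socket
import picamera

with picamera.PiCamera() as camera:
    camera.resolution = (1280, 720)
    camera.framerate = 30
    sock = socket.socket()
    sock.connect(('processing-box', 8000))  # placeholder host and port
    connection = sock.makefile('wb')
    try:
        camera.start_recording(connection, format='h264')
        camera.wait_recording(60)
        camera.stop_recording()
    finally:
        connection.close()
        sock.close()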

So, why use H264 instead of MJPEG? Surely full JPEG pictures will be higher quality than a video codec? Incorrect. Of all the formats supported, H264 provides by far the best quality in the smallest space. Remember that JPEG is (by now) quite an ancient format and hasn't been upgraded (in a widely supported manner) since the late 90s. H264 keyframes (I-frames) are smaller, yet better quality than JPEGs (hardly surprising given H264 has the benefit of years of research beyond JPEG). One might argue that the P-frames (predicted frames in between) are lower quality and in the case of a full scene change or extremely complex motion, you might have a point. But you can easily configure the H264 encoder to only output I-frames (intra_period=1) and you'll still get similar or better quality to JPEG in a smaller space.
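For what it's worth, asking the encoder for I-frames only is just a keyword argument on start_recording:

Code: Select all

import picamera

with picamera.PiCamera() as camera:
    camera.resolution = (1280, 720)
    camera.framerate = 30
    # intra_period=1 makes every frame a keyframe (I-frame)
    camera.start_recording('video.h264', intra_period=1)
    camera.wait_recording(5)
    camera.stop_recording()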

What about shipping the unencoded YUV/RGB frames for maximum quality? Good luck: here's the bandwidth requirement for 720p30 in RGB format:

1280 * 720 * 3 * 30 / 1048576 = 79Mbytes/sec

Remember that the Pi's ethernet port (which is connected via USB anyway) is 100Mbit, so its maximum capacity (assuming no protocol overhead or other bandwidth restrictions) is 10Mbytes/sec. What about YUV? That just cuts the requirement in half by halving the bytes per pixel to 1.5, so we get down to 40Mbytes/sec - still way outside the available capacity (and we haven't even discussed whether the Pi can shove bits around that fast in memory - in my experience it can't).

So, realistic processing limits:

If you're doing your processing entirely on the Pi and you're wanting to do things that are considered relatively hard (like face tracking), expect to be down in the single digits of fps (1-2fps on a Pi1, probably higher on a Pi2). Simpler stuff (like recognizing the dominant colour in a scene) can be achieved much faster, so you might manage 15fps or perhaps higher with a bit of cunning. Generally speaking in this scenario you might start working with unencoded frames (because it's easiest and involves the least processing overhead), but you'll probably want to experiment with frames extracted from MJPEG too (as in the gist posted above ... I really must add that as a recipe in the next release ...) in order to find out whether bandwidth is your limiting factor.

If you're offloading your processing onto a big box (a big laptop with a Core i7 for example) - don't bother playing around with unencoded frames or MJPEG. Just record H264, shove it over the network and do all the processing on the other end. Fire up an appropriately configured ffmpeg subprocess, feed the H264 stream to its stdin, and read unencoded YUV/RGB frames from its stdout (or find some appropriate bindings for libav). In this scenario you should be able to manage 30fps or higher with ease (the limiting factor will be processing speed on the other machine).
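On the receiving end, the decode side might look something like this (a sketch only; the resolution and port are assumptions and must match whatever the Pi is sending):

Code: Select all

import os
import socket
import subprocess

WIDTH, HEIGHT = 1280, 720          # must match the Pi's recording resolution
FRAME_SIZE = WIDTH * HEIGHT * 3    # bytes per rgb24 frame

server = socket.socket()
server.bind(('0.0.0.0', 8000))     # placeholder port
server.listen(1)
conn, addr = server.accept()       # the Pi connects and starts sending H264

# ffmpeg decodes the raw H264 stream from the socket and writes
# unencoded RGB frames to its stdout
ffmpeg = subprocess.Popen(
    ['ffmpeg', '-f', 'h264', '-i', '-',
     '-f', 'rawvideo', '-pix_fmt', 'rgb24', '-'],
    stdin=conn.fileno(), stdout=subprocess.PIPE,
    stderr=open(os.devnull, 'wb'))

while True:
    frame = ffmpeg.stdout.read(FRAME_SIZE)
    if len(frame) < FRAME_SIZE:
        break
    # ... hand frame (one WIDTHxHEIGHT RGB image) to the processing code ...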

Dave.

tsalex
Posts: 2
Joined: Mon Jun 15, 2015 10:50 pm

Re: Pure Python camera interface

Tue Jul 14, 2015 12:24 am

waveform80 wrote:
You certainly don't want to use capture_continuous for that, as under the covers it's setting up an encoder and destroying it again between each frame. To go as fast as possible you want to set up a video recording and extract individual frames. Thankfully with JPEG this is actually quite easy due to JPEG's magic number being guaranteed not to appear anywhere in the rest of the stream. The following gist demonstrates the principle (basically just start an MJPEG recording then look for the FFD8 byte sequence in the output buffers):

https://gist.github.com/waveform80/263b9c8bdcb1e9b79749

Now, although that'll easily capture 720p at 30fps (in fact on my Pi2 it easily manages 42fps when recording to pure in-memory BytesIO streams) you'll have a tough time doing much with them at that rate!

Dave.
Thanks for the help; that works very well at 720p, giving us ~30fps. The only problem now is that changing the bitrate and quality properties does not have any effect. I even sampled single frames with different bitrate/quality settings, but the size of the samples always remained in the same range.

bantammenace2012
Posts: 122
Joined: Mon May 28, 2012 12:18 pm

Re: Pure Python camera interface

Mon Nov 09, 2015 3:46 pm

My daughter and I are 'showing and telling' our Lego-based robot at Pi Wars in Cambridge in a couple of weeks, and we are using Dave's pistreaming for the video.
It works well, but due to the location of the camera in the robot the image is inverted. How easy is it (if it's possible at all) to invert the image in pistreaming?

User avatar
DougieLawson
Posts: 35801
Joined: Sun Jun 16, 2013 11:19 pm
Location: Basingstoke, UK
Contact: Website Twitter

Re: Pure Python camera interface

Mon Nov 09, 2015 7:55 pm

At line #143 in the pistreaming program

Code: Select all

 camera.framerate = FRAMERATE
 sleep(1) # camera warm-up time
Update it to

Code: Select all

 camera.framerate = FRAMERATE
 camera.rotation=180
 sleep(1) # camera warm-up time
Note: Having anything humorous in your signature is completely banned on this forum. Wear a tin-foil hat and you'll get a ban.

Any DMs sent on Twitter will be answered next month.

This is a doctor free zone.

bantammenace2012
Posts: 122
Joined: Mon May 28, 2012 12:18 pm

Re: Pure Python camera interface

Mon Nov 09, 2015 10:37 pm

Thanks Dougie.
I couldn't get it working with your suggestion (a spacing issue possibly? or brackets needed?) but you did show me where to add things.
I did get it working by adding camera.vflip = True

User avatar
waveform80
Posts: 303
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Thu Nov 12, 2015 8:37 pm

bantammenace2012 wrote:Thanks Dougie.
I couldn't get it working with your suggestion (a spacing issue possibly? or brackets needed?) but you did show me where to add things.
I did get it working by adding camera.vflip = True
I'm afraid it's actually "rotation" instead of "rotate". As to why "rotate" doesn't raise an error: that's because setting arbitrary attributes on objects in Python isn't an error.
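A two-line illustration of the gotcha (nothing picamera-specific, it's just how Python attribute assignment works):

Code: Select all

import picamera

camera = picamera.PiCamera()
camera.rotate = 180     # typo: silently creates a brand new attribute which the camera ignores
camera.rotation = 180   # the real property: the image actually gets rotated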

That said, I was recently persuaded that in the case of picamera (and other libraries) this really isn't friendly and we ought to do something about it (using either slots or metaclasses under the covers) so in the next release (this year, honest!) I'll be "fixing" this (I say "fixing" because the method is actually horrid, but since sub-classing PiCamera is an extremely rare activity, I figure it's worth the horror under the covers :).


Dave.

robopo
Posts: 10
Joined: Sun Nov 22, 2015 10:43 am

Re: Pure Python camera interface

Sun Nov 22, 2015 11:07 am

waveform80 wrote:You certainly don't want to use capture_continuous for that, as under the covers it's setting up an encoder and destroying it again between each frame. To go as fast as possible you want to set up a video recording and extract individual frames. Thankfully with JPEG this is actually quite easy due to JPEG's magic number being guaranteed not to appear anywhere in the rest of the stream. The following gist demonstrates the principle (basically just start an MJPEG recording then look for the FFD8 byte sequence in the output buffers):

https://gist.github.com/waveform80/263b9c8bdcb1e9b79749

Now, although that'll easily capture 720p at 30fps (in fact on my Pi2 it easily manages 42fps when recording to pure in-memory BytesIO streams) you'll have a tough time doing much with them at that rate!

Dave.
Firstly, thanks to Dave for the massive amount of work done!

I just recently received my Raspberry Pi Model 2 and the camera (Arducam) and there is a lot to learn for someone who has only used Windows and C to get things going. I've been reading this thread from the beginning and at the same time trying to learn Python and how the picamera class works.

So far so good, but now I'm facing a problem: I can't get "lots_of_jpegs_when_u_want.py" to run. I get the errors below; maybe someone knows what's up there.

>>> ================================ RESTART ================================
>>>
Traceback (most recent call last):
  File "/usr/lib/python3.4/curses/__init__.py", line 78, in wrapper
    cbreak()
_curses.error: cbreak() returned ERR

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/pi/0-Python code/Samples/picWhenYouWant.py", line 45, in <module>
    curses.wrapper(main)
  File "/usr/lib/python3.4/curses/__init__.py", line 100, in wrapper
    nocbreak()
_curses.error: nocbreak() returned ERR

Thanks!

robopo
Posts: 10
Joined: Sun Nov 22, 2015 10:43 am

Re: Pure Python camera interface

Sun Nov 22, 2015 12:31 pm

Well, I just realized that the code must be run from a command window and not from the Python IDLE window. At least that's how I got it to run. Is there an easy way to break out of the camera loop, in the same fashion as with curses, when using IDLE?

Thanks!

User avatar
waveform80
Posts: 303
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Mon Nov 23, 2015 7:42 pm

robopo wrote:Well, I just realized that the code must be run from a command window and not from the Python IDLE window. At least that's how I got it to run. Is there an easy way to break out of the camera loop, in the same fashion as with curses, when using IDLE?

Thanks!
Ah yes, I'm afraid curses requires a terminal for operation (IDLE doesn't provide a terminal so curses won't work with it). As for a similar method of interacting with IDLE... Hmmm, I don't know of a direct way to obtain key-presses in IDLE without needing Return after them (I'm afraid I never use IDLE myself so I'm not that familiar with it!), but you could construct a trivial GUI with tkinter and have someone click a "stop" button.
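If you do want to stay inside IDLE, a bare-bones tkinter version might look something like this (just a sketch; on Python 2 the module is called Tkinter rather than tkinter):

Code: Select all

import tkinter as tk   # "import Tkinter as tk" on Python 2
import picamera

camera = picamera.PiCamera()
camera.start_recording('video.h264')

root = tk.Tk()

def stop():
    # stop the recording and close the window
    camera.stop_recording()
    camera.close()
    root.destroy()

tk.Button(root, text='Stop recording', command=stop).pack()
root.mainloop()   # the camera keeps recording until Stop is clicked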

Dave.

robopo
Posts: 10
Joined: Sun Nov 22, 2015 10:43 am

Re: Pure Python camera interface

Tue Nov 24, 2015 6:24 pm

Alright, thanks Dave.
