XAPBob
Posts: 91
Joined: Tue Jan 03, 2012 2:40 pm

Re: Pure Python camera interface

Wed May 14, 2014 12:02 pm

That may well be the difference I am seeing now then - I am using one of the new full FoV modes ;)
Although I still don't see much change in quality (of course that might be the effect of running through PIL)

I'm currently limited by the CPU on the unit when using PIL to add a timestamp to the image - unsurprisingly, the bigger the image, the harder it has to work.
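For reference, the PIL step being described looks something like this (a minimal sketch of my own, not XAPBob's actual code; the text drawing itself is cheap, and most of the cost likely comes from decoding and re-encoding the full-size image around it):

```python
from datetime import datetime
from PIL import Image, ImageDraw

def add_timestamp(img, xy=(10, 10)):
    # Draw the current time onto the image in place and return it
    draw = ImageDraw.Draw(img)
    draw.text(xy, datetime.now().strftime('%Y-%m-%d %H:%M:%S'), fill='yellow')
    return img

# e.g. on a freshly captured frame (here just a blank test image)
img = add_timestamp(Image.new('RGB', (320, 240)))
```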

I shall start doing smaller routine captures, with image comparison on those to decide when to take larger images - and find somewhere to send them.
I think Google pictures looks interesting, because it has "no limit" on the number of pictures less than 2k wide...

User avatar
jbeale
Posts: 3439
Joined: Tue Nov 22, 2011 11:51 pm
Contact: Website

Re: Pure Python camera interface

Wed May 14, 2014 4:27 pm

ethanol100 claims in this post http://www.raspberrypi.org/forums/viewt ... 10#p550364 that splitting files should be possible in VBR mode, and in particular that raspivid does it. If that is true, can PiCamera do the same? The documentation for v1.4 at http://picamera.readthedocs.org/en/release-1.4/api.html currently says
Note that split_recording() cannot be used in VBR mode.

User avatar
waveform80
Posts: 303
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Wed May 14, 2014 4:37 pm

jbeale wrote:ethanol100 claims in this post http://www.raspberrypi.org/forums/viewt ... 10#p550364 that splitting files should be possible in VBR mode, and in particular that raspivid does it. If that is true, can PiCamera do the same? The documentation for v1.4 at http://picamera.readthedocs.org/en/release-1.4/api.html currently says
Note that split_recording() cannot be used in VBR mode.
Last time I tested it (which admittedly is a few months ago and there's been plenty of firmware updates since then), the firmware didn't output SPS/PPS headers (which we use as split points) when in VBR mode (it output one right at the start, and then no more). For some reason I haven't got an upstream ticket in picamera's issue tracker for this one (I should do!) but there was a forum thread where it was acknowledged as a firmware issue. Perhaps it got fixed in the meantime; I'll bung a ticket in the picamera tracker to remind me to test this before the next release.
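For context, the split-recording pattern under discussion looks roughly like this (an untested sketch; the filenames and 60-second segment length are made up for illustration, and the default constant-bitrate mode is used, since VBR output reportedly lacks the periodic SPS/PPS headers picamera uses as split points):

```python
import picamera

with picamera.PiCamera() as camera:
    camera.resolution = (1280, 720)
    camera.start_recording('segment00.h264')
    for n in range(1, 4):
        camera.wait_recording(60)  # keep recording for ~60 seconds
        camera.split_recording('segment%02d.h264' % n)  # cut at the next SPS header
    camera.stop_recording()
```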


Dave.

User avatar
waveform80
Posts: 303
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Wed May 14, 2014 4:38 pm

waveform80 wrote:
jbeale wrote:ethanol100 claims in this post http://www.raspberrypi.org/forums/viewt ... 10#p550364 that splitting files should be possible in VBR mode, and in particular that raspivid does it. If that is true, can PiCamera do the same? The documentation for v1.4 at http://picamera.readthedocs.org/en/release-1.4/api.html currently says
Note that split_recording() cannot be used in VBR mode.
Last time I tested it (which admittedly is a few months ago and there's been plenty of firmware updates since then), the firmware didn't output SPS/PPS headers (which we use as split points) when in VBR mode (it output one right at the start, and then no more). For some reason I haven't got an upstream ticket in picamera's issue tracker for this one (I should do!) but there was a forum thread where it was acknowledged as a firmware issue. Perhaps it got fixed in the meantime; I'll bung a ticket in the picamera tracker to remind me to test this before the next release.


Dave.
Doh! I did have a ticket for this: https://github.com/waveform80/picamera/issues/70


Dave.

jrsphoto
Posts: 1
Joined: Mon May 19, 2014 10:28 pm

Re: Pure Python camera interface

Mon May 19, 2014 10:36 pm

Somehow found the picamera python library a month or so ago but I missed this bit about the picroscopy program. My application is a bit different, I wish to use it as a guide camera on a telescope. I downloaded the code from github (forked it actually). My goal would be to have more focus on imaging from the remote client rather than displayed on the hdmi monitor, although I would like to keep that option available as well.

I have been looking for a case to put all this in and managed to put together something myself that works but I also found a commercial solution that looks very nice and I figured I should share it: http://bit.ly/1oLOyWM

If you look at their blog post here: http://bit.ly/R2Oxij they mention that they have a CS-mount version in the works. I'm waiting to hear back from them on its availability.

Cheers and thanks for the great picamera library and picroscopy program.

kelevraxx
Posts: 11
Joined: Wed Mar 26, 2014 12:58 am
Location: Brazil

Re: Pure Python camera interface

Wed May 21, 2014 4:18 pm

Hello Dave and Pi Enthusiasts!

I am finishing my project with a Raspberry Pi + Camera Board + microscope. I spent a LOT of time attaching the camera to the microscope and ran a huge number of tests on it.

I am very happy with the results. My project is quite simple, but it is very useful for teachers who use a microscope in the classroom: simply projecting the microscope slide helps teachers and students a lot.
Now I need to write a simple shell script to make my project easier to use. My questions:
1 - If I put the command "raspivid -t 0 -awb auto -ex auto" in the script, will it work?
2 - If I need the image (raspivid or raspistill) saved to a Dropbox folder, where do I put the path? -o /home/xxx/dropbox ?
3 - I need to show some messages on screen before the command starts, like "Press Ctrl+C to stop the projection" or "A preview image will be shown for 10 seconds before taking a snap; use this time to focus the image". How do I do that?

Sorry, my English is only so-so.

XAPBob
Posts: 91
Joined: Tue Jan 03, 2012 2:40 pm

Re: Pure Python camera interface

Wed May 21, 2014 4:26 pm

IIRC there is an opacity setting on the preview, which would allow you to write instructions (possibly using "banner") to display under/over the image.
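In picamera terms, that opacity setting is the preview alpha. A rough (untested) sketch of the idea; the message text and the 10-second delay are placeholders:

```python
import time
import picamera

with picamera.PiCamera() as camera:
    print("A preview will run for 10 seconds; use this time to focus the image.")
    camera.start_preview()
    camera.preview_alpha = 128  # 0 = invisible, 255 = fully opaque
    time.sleep(10)
    camera.capture('snap.jpg')
    camera.stop_preview()
```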

kelevraxx
Posts: 11
Joined: Wed Mar 26, 2014 12:58 am
Location: Brazil

Re: Pure Python camera interface

Wed May 21, 2014 5:21 pm

kelevraxx wrote:Hello Dave and Pi Enthusiasts!

I am finishing my project with a Raspberry Pi + Camera Board + microscope. I spent a LOT of time attaching the camera to the microscope and ran a huge number of tests on it.

I am very happy with the results. My project is quite simple, but it is very useful for teachers who use a microscope in the classroom: simply projecting the microscope slide helps teachers and students a lot.
Now I need to write a simple shell script to make my project easier to use. My questions:
1 - If I put the command "raspivid -t 0 -awb auto -ex auto" in the script, will it work?
2 - If I need the image (raspivid or raspistill) saved to a Dropbox folder, where do I put the path? -o /home/xxx/dropbox ?
3 - I need to show some messages on screen before the command starts, like "Press Ctrl+C to stop the projection" or "A preview image will be shown for 10 seconds before taking a snap; use this time to focus the image". How do I do that?

Sorry, my English is only so-so.

I found the answers to my questions, but I have another one.
Question number 2 is still unanswered.
Also, how do I use a variable, like "Enter the name of the image to save" / read name... and then use that name in the raspivid or raspistill command, like raspivid -t 600000 -awb auto -ex auto -ISO 100 -o /home/xxx/dropbox/$name.png ?

camdeveloper1
Posts: 7
Joined: Mon May 26, 2014 6:52 pm

Re: Pure Python camera interface

Tue May 27, 2014 5:43 am

Hi Dave, I'm wondering if you could help me with this. It's your advanced recipe for processing images, but if I want to save the images, what should I do?

Code: Select all

import io
import time
import threading
import picamera

# Create a pool of image processors
done = False
lock = threading.Lock()
pool = []

class ImageProcessor(threading.Thread):
    def __init__(self):
        super(ImageProcessor, self).__init__()
        self.stream = io.BytesIO()
        self.event = threading.Event()
        self.terminated = False
        self.start()

    def run(self):
        # This method runs in a separate thread
        global done
        while not self.terminated:
            if self.event.wait(1):
                try:
                    self.stream.seek(0)
                    # Read the image and do some processing on it
                    #Image.open(self.stream)
                    #...
                    #...
                    # Set done to True if you want the script to terminate
                    # at some point
                    #done=True
                finally:
                    # Reset the stream and event
                    self.stream.seek(0)
                    self.stream.truncate()
                    self.event.clear()
                    # Return ourselves to the pool
                    with lock:
                        pool.append(self)

def streams():
    while not done:
        with lock:
            processor = pool.pop()
        yield processor.stream
        processor.event.set()

with picamera.PiCamera() as camera:
    pool = [ImageProcessor() for i in range (4)]
    camera.resolution = (640, 480)
    # Set the framerate appropriately; too fast and the image processors
    # will stall the image pipeline and crash the script
    camera.framerate = 10
    camera.start_preview()
    time.sleep(2)
    camera.capture_sequence(streams(), use_video_port=True)

# Shut down the processors in an orderly fashion
while pool:
    with lock:
        processor = pool.pop()
    processor.terminated = True
    processor.join()

camdeveloper1
Posts: 7
Joined: Mon May 26, 2014 6:52 pm

Re: simple motion-detector using picamera

Tue May 27, 2014 7:17 am

jbeale wrote:In case of interest, a simple motion-detector using picamera. My motivation was to find something faster than motion-mmal which seems limited to between 1/4 and 1/2 second response time regardless of image resolution. This one can go as fast as 10 fps (0.1 second response) at very low resolutions. It builds up a long-term average background image (controlled by filter parameter 'frac', smaller = slower averaging) and checks each new image for at least "pc_thresh" pixels having a difference from the background greater than (tfactor * avg_max) where "avg_max" is the running average of the maximum pixel deviation-from-averaged-background in each frame. It works on only the green channel to save time. Of course all this would probably run at 30 fps on the GPU, but I'm not smart enough to do that yet. Python is a lot easier to work with :-).

Code: Select all

#!/usr/bin/python

# simple motion-detection using pypi.python.org/pypi/picamera
# runs at up to 10 fps depending on resolution, etc.
# Nov. 30 2013 J.Beale

from __future__ import print_function
import io, os, time, datetime, picamera, cv2
import numpy as np

#width = 32
#height = 16 # minimum size that works ?
width =144
height = 96
frames = 0
first_frame = True
frac = 0.05  # fraction to update long-term average on each pass
a_thresh = 16.0  # amplitude change detection threshold for any pixel
pc_thresh = 20   # number of pixels required to exceed threshold
avgmax = 3     # long-term average of maximum-pixel-change-value
tfactor = 2  # threshold above max.average diff per frame for motion detect
picHoldoff = 1.0  # minimum interval (seconds) between saving images
fupdate = 10000   # report debug data every this many frames
#fupdate = 10   # report debug data every this many frames
logfile = "/home/pi/pics/cam-log.csv" 

np.set_printoptions(precision=2)
f = open(logfile, 'a')
f.write ("# cam log v0.1 Nov.28 2013 JPB\n")
outbuf = "# Start: " +  str(datetime.datetime.now()) + "\n"
f.write (outbuf)
f.flush()

daytime = datetime.datetime.now().strftime("%y%m%d-%H_%M_%S.%f")
daytime = daytime[:-3]  # remove last three digits (xxx microseconds)
print ("# Start at %s" % str(datetime.datetime.now()))

stream = io.BytesIO()
with picamera.PiCamera() as camera:
    camera.resolution = (width, height)
    camera.start_preview()
    time.sleep(2)
    start = time.time()
    while True:
      camera.capture(stream, format='jpeg', use_video_port=True)
#      time.sleep(0.15)  # sometimes delay needed to avoid crashes (!?)
      stream.seek(0)
      data = np.fromstring(stream.getvalue(), dtype=np.uint8)
      image = cv2.imdecode(data, 1)
      image = image.astype(np.float32)
      (h,w,cols) = image.shape
      (xc,yc) = (h/2,w/2)
      frames = frames + 1
      if (frames % fupdate) == 0:
        print ("%s,  %03d max = %5.3f, avg = %5.3f" % (str(datetime.datetime.now()),frames,max,avgmax))
#       print (avgcol); print (avgdiff)
#       print "%02d center: %s (BGR)" % (frames,image[xc,yc])
      if first_frame == False:
        newcol = image[:,:,1] # full image, green only
        avgcol = (avgcol * (1.0-frac)) + (newcol * frac)
        diff = newcol - avgcol  # matrix of difference-from-average pixel values
        diff = abs(diff)        # take absolute value of difference
        avgdiff = ((1 - frac)*avgdiff) + (frac * diff)  # long-term average difference
        a_thresh = tfactor*avgmax  # adaptive amplitude-of-change threshold
        condition = diff > a_thresh
        changedPixels = np.extract(condition,diff)
        countPixels = changedPixels.size
        max = np.amax(diff)     # find the biggest (single-pixel) change
        avgmax = ((1 - frac)*avgmax) + (max * frac)
        if countPixels > pc_thresh:  # notable change of enough pixels => motion!
          now = time.time()
          interval = now - start
          start = now
          daytime = datetime.datetime.now().strftime("%y%m%d-%H_%M_%S.%f")
          daytime = daytime[:-3]  # remove last three digits (xxx microseconds)
          daytime = daytime + "_" + str(countPixels)
          tstr = ("%s,  %04.1f, %6.3f, %03d\n" % (daytime,max,interval,countPixels))
          print (tstr, end='')
          f.write(tstr)
          f.flush()
          if (interval > picHoldoff):  # don't write images more quickly than picHoldoff interval
            imgName = daytime + ".jpg"
            cv2.imwrite(imgName, image)  # save as an image
      else:                     # very first frame
        avgcol = image[:,:,1]  # green channel, set the initial average to first frame
        avgdiff = avgcol / 20.0 # obviously a coarse estimate
        first_frame = False
There is one mystery, I'd love to know why this delay is sometimes required to prevent crashes (seems to depend on lighting => camera exposure time)

Code: Select all

      camera.capture(stream, format='jpeg', use_video_port=True)
#      time.sleep(0.15)  # sometimes delay needed to avoid crashes (!?)

Can you upload this code to GitHub? I want to test it. Is it exactly the same as what you have posted here, or is a fix needed? Thanks a lot.

User avatar
waveform80
Posts: 303
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Wed Jun 11, 2014 7:41 pm

An early evening release for once: Picamera 1.5 is here! By far the most exciting thing in this release is the ability to grab motion vectors from the H.264 encoder (i.e. we're playing catch-up with raspivid again :). Also new in this release is an interface to numpy (which is introduced as an optional dependency*), which means building a motion detector is now really easy (there's even an example buried in the advanced recipes ;).

In other news, there are now explicit interfaces and instructions for constructing custom encoder and custom output classes, and a few bugs have been squashed. As usual bug reports, suggestions, patches, etc. are all welcome!


Dave.

* the PyPI package won't pull it in automatically, which you might be grateful for, given numpy takes a good hour to build on a Pi if you aren't installing from a pre-built package! The Raspbian package will pull in the dependency automatically, but only because, as numpy is pre-built there, I figured it's not a huge burden.

nickneubrand
Posts: 29
Joined: Fri Apr 26, 2013 4:54 am

Re: Pure Python camera interface

Wed Jun 11, 2014 10:24 pm

Just wondering, what are the odds of the start_preview() will ever be able to be put into a window. I know the documentation says that it cannot currently be done, but are there plans to make that possible? Just asking because I really like the fact that the preview() has really good FPS and want to utilize that in a python app run through a window. I have currently been using the official driver and pygame, but that tends to kill the frame rate.

Thanks

User avatar
waveform80
Posts: 303
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Wed Jun 11, 2014 10:34 pm

nickneubrand wrote:Just wondering, what are the odds of the start_preview() will ever be able to be put into a window. I know the documentation says that it cannot currently be done, but are there plans to make that possible? Just asking because I really like the fact that the preview() has really good FPS and want to utilize that in a python app run through a window. I have currently been using the official driver and pygame, but that tends to kill the frame rate.

Thanks
If it can be done via OpenGL/ES then that'll be the way it gets done. But there's no way the existing MMAL video renderer is going to get stuff in a window under the X environment. In other words, keep an eye on issues #16 and #17 but don't hold your breath (as you can tell, they've been there a while - mostly because they involve a serious amount of work in an area I'm largely unfamiliar with).


Dave.

User avatar
waveform80
Posts: 303
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Wed Jun 11, 2014 10:43 pm

jrsphoto wrote:Somehow found the picamera python library a month or so ago but I missed this bit about the picroscopy program. My application is a bit different, I wish to use it as a guide camera on a telescope. I downloaded the code from github (forked it actually). My goal would be to have more focus on imaging from the remote client rather than displayed on the hdmi monitor, although I would like to keep that option available as well.

I have been looking for a case to put all this in and managed to put together something myself that works but I also found a commercial solution that looks very nice and I figured I should share it: http://bit.ly/1oLOyWM

If you look at their blog post here: http://bit.ly/R2Oxij they mention that they have a CS-mount version in the works. I'm waiting to hear back from them on its availability.

Cheers and thanks for the great picamera library and picroscopy program.

That's very interesting! Mostly because it just so happens that my main Pi (used for developing picamera) happens to be in a proto-armor case:
kermit.jpg
At the time I bought it, it was the only decent case I could find that housed the Pi, the camera, and provided a standard tripod mount. I'll be keeping a close eye out for that CS-mount version!


Dave.

User avatar
jbeale
Posts: 3439
Joined: Tue Nov 22, 2011 11:51 pm
Contact: Website

Re: Pure Python camera interface

Thu Jun 12, 2014 11:25 pm

Great work on v1.5; I'm particularly interested in the raw data and motion vector output.
By the way, in case of interest, there is a color matrix suggested for the R-Pi camera posted here: http://www.raspberrypi.org/forums/viewt ... 50#p511476
I haven't read it closely, but if you didn't use a color matrix in your example raw converter, using one like that should give you better color quality.

By the way, the below code example from http://picamera.readthedocs.org/en/rele ... ector-data does not run for me. It says "NameError: name 'frames' is not defined" which seems to be true.

Code: Select all

import picamera
import picamera.array
from PIL import Image

with picamera.PiCamera() as camera:
    with picamera.array.PiMotionArray(camera) as stream:
        camera.resolution = (640, 480)
        camera.framerate = 30
        camera.start_recording('/dev/null', format='h264', motion_output=stream)
        camera.wait_recording(10)
        camera.stop_recording()
        for frame in range(frames):
            data = np.sqrt(
                np.square(stream.array[frame]['x'].astype(np.float)) +
                np.square(stream.array[frame]['y'].astype(np.float))
                ).clip(0, 255).astype(np.uint8)
            img = Image.fromarray(data)
            filename = 'frame%03d.png' % frame
            print('Writing %s' % filename)
            img.save(filename)

User avatar
jbeale
Posts: 3439
Joined: Tue Nov 22, 2011 11:51 pm
Contact: Website

Re: Pure Python camera interface

Fri Jun 13, 2014 1:01 am

As a follow-up: this code, with two minor changes (defining 'frames' and importing the 'numpy' module), is working.

Code: Select all

#!/usr/bin/python

import picamera
import picamera.array
import numpy as np
from PIL import Image

frames = 260

with picamera.PiCamera() as camera:
    with picamera.array.PiMotionArray(camera) as stream:
        camera.resolution = (640, 480)
        camera.framerate = 30
        camera.start_recording('/dev/null', format='h264', motion_output=stream)
        camera.wait_recording(10)
        camera.stop_recording()
        for frame in range(frames):
            data = np.sqrt(
                np.square(stream.array[frame]['x'].astype(np.float)) +
                np.square(stream.array[frame]['y'].astype(np.float))
                ).clip(0, 255).astype(np.uint8)
            img = Image.fromarray(data)
            filename = 'frame%03d.png' % frame
            print('Writing %s' % filename)
            img.save(filename)

User avatar
waveform80
Posts: 303
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Fri Jun 13, 2014 8:44 am

jbeale wrote:By the way, the below code example from http://picamera.readthedocs.org/en/rele ... ector-data does not run for me. It says "NameError: name 'frames' is not defined" which seems to be true.

Code: Select all

import picamera
import picamera.array
from PIL import Image

with picamera.PiCamera() as camera:
    with picamera.array.PiMotionArray(camera) as stream:
        camera.resolution = (640, 480)
        camera.framerate = 30
        camera.start_recording('/dev/null', format='h264', motion_output=stream)
        camera.wait_recording(10)
        camera.stop_recording()
        for frame in range(frames):
            data = np.sqrt(
                np.square(stream.array[frame]['x'].astype(np.float)) +
                np.square(stream.array[frame]['y'].astype(np.float))
                ).clip(0, 255).astype(np.uint8)
            img = Image.fromarray(data)
            filename = 'frame%03d.png' % frame
            print('Writing %s' % filename)
            img.save(filename)
Argh! I noticed that error when I was going through the new examples. I must've corrected it on the Pi and forgotten to copy the corrected version back into the docs! For reference, here's the corrected version which'll appear in the next version of the docs:

Code: Select all

import numpy as np
import picamera
import picamera.array
from PIL import Image

with picamera.PiCamera() as camera:
    with picamera.array.PiMotionArray(camera) as stream:
        camera.resolution = (640, 480)
        camera.framerate = 30
        camera.start_recording('/dev/null', format='h264', motion_output=stream)
        camera.wait_recording(10)
        camera.stop_recording()
        frames = stream.array.shape[0]
        for frame in range(frames):
            data = np.sqrt(
                np.square(stream.array[frame]['x'].astype(np.float)) +
                np.square(stream.array[frame]['y'].astype(np.float))
                ).clip(0, 255).astype(np.uint8)
            img = Image.fromarray(data)
            filename = 'frame%03d.png' % frame
            print('Writing %s' % filename)
            img.save(filename)

Dave.

User avatar
waveform80
Posts: 303
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Fri Jun 13, 2014 9:31 am

jbeale wrote:Great work on v1.5; I'm particularly interested in the raw data and motion vector output.
By the way, in case of interest, there is a color matrix suggested for the R-Pi camera posted here: http://www.raspberrypi.org/forums/viewt ... 50#p511476
I haven't read it closely, but if you didn't use a color matrix in your example raw converter, using one like that should give you better color quality.
Just a quick clarification: the bayer keyword for "real" raw capture was actually introduced back in 1.3 but I didn't have the time to investigate how to parse the resulting data back then. The two things 1.5 introduces for bayer capture are firstly a recipe in the docs detailing how to parse the data in Python (which is largely based on your excellent investigative posts), and a custom output class which wraps up that recipe for easy production of numpy arrays.

The weighted-average de-mosaicing algorithm included in these was mostly based on trawling wikipedia (sadly I didn't have time to try to implement some of the more impressive de-mosaic algorithms), with some performance bits adapted from a fascinating blog post.

I did have a look at the colour matrix suggested at the end of your raw-investigation thread, but this is not an area I'm familiar with (I could talk your ear off about SQL and database theory, but graphics is still a weird and mysterious area of computer science for me!). After a bit more trawling of wikipedia it seemed like colour balance is mostly a question of dot products (much like YUV to RGB conversion), but I don't think I found out how to deal with the "neutral float" (or there was something I didn't understand there). I'll try and find some time to read up on this stuff for 1.6.
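For what it's worth, the "dot products" view of colour correction can be sketched in a few lines of numpy: each output pixel is a 3x3 matrix applied to the input RGB triple. The matrix below is illustrative only (not the one from the linked thread); its rows sum to 1, so grey inputs stay grey:

```python
import numpy as np

# Illustrative colour-correction matrix (rows sum to 1.0)
ccm = np.array([
    [ 1.50, -0.30, -0.20],
    [-0.25,  1.40, -0.15],
    [-0.10, -0.35,  1.45],
])

def apply_ccm(rgb, matrix):
    # rgb: float array of shape (h, w, 3); one 3x3 dot product per pixel,
    # clipped back to the valid [0, 255] range
    out = np.einsum('ij,hwj->hwi', matrix, rgb)
    return out.clip(0, 255)

# A uniform grey image passes through unchanged because each row sums to 1
out = apply_ccm(np.ones((2, 2, 3)) * 100.0, ccm)
```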


Dave.

jit
Posts: 33
Joined: Fri Apr 18, 2014 2:52 pm

Re: Pure Python camera interface

Tue Jun 24, 2014 8:08 pm

Dave: Many thanks for this great library. I really appreciate the time you've taken to document it with all the examples. I'm looking forward to using it as a basis for a motion detection project.

jit
Posts: 33
Joined: Fri Apr 18, 2014 2:52 pm

Re: simple motion-detector using picamera

Tue Jun 24, 2014 8:20 pm

jbeale wrote:In case of interest, a simple motion-detector using picamera. My motivation was to find something faster than motion-mmal which seems limited to between 1/4 and 1/2 second response time regardless of image resolution. This one can go as fast as 10 fps (0.1 second response) at very low resolutions. It builds up a long-term average background image (controlled by filter parameter 'frac', smaller = slower averaging) and checks each new image for at least "pc_thresh" pixels having a difference from the background greater than (tfactor * avg_max) where "avg_max" is the running average of the maximum pixel deviation-from-averaged-background in each frame. It works on only the green channel to save time. Of course all this would probably run at 30 fps on the GPU, but I'm not smart enough to do that yet. Python is a lot easier to work with :-).

Code: Select all

#!/usr/bin/python

# simple motion-detection using pypi.python.org/pypi/picamera
# runs at up to 10 fps depending on resolution, etc.
# Nov. 30 2013 J.Beale

from __future__ import print_function
import io, os, time, datetime, picamera, cv2
import numpy as np

#width = 32
#height = 16 # minimum size that works ?
width =144
height = 96
frames = 0
first_frame = True
frac = 0.05  # fraction to update long-term average on each pass
a_thresh = 16.0  # amplitude change detection threshold for any pixel
pc_thresh = 20   # number of pixels required to exceed threshold
avgmax = 3     # long-term average of maximum-pixel-change-value
tfactor = 2  # threshold above max.average diff per frame for motion detect
picHoldoff = 1.0  # minimum interval (seconds) between saving images
fupdate = 10000   # report debug data every this many frames
#fupdate = 10   # report debug data every this many frames
logfile = "/home/pi/pics/cam-log.csv" 

np.set_printoptions(precision=2)
f = open(logfile, 'a')
f.write ("# cam log v0.1 Nov.28 2013 JPB\n")
outbuf = "# Start: " +  str(datetime.datetime.now()) + "\n"
f.write (outbuf)
f.flush()

daytime = datetime.datetime.now().strftime("%y%m%d-%H_%M_%S.%f")
daytime = daytime[:-3]  # remove last three digits (xxx microseconds)
print ("# Start at %s" % str(datetime.datetime.now()))

stream = io.BytesIO()
with picamera.PiCamera() as camera:
    camera.resolution = (width, height)
    camera.start_preview()
    time.sleep(2)
    start = time.time()
    while True:
      camera.capture(stream, format='jpeg', use_video_port=True)
#      time.sleep(0.15)  # sometimes delay needed to avoid crashes (!?)
      stream.seek(0)
      data = np.fromstring(stream.getvalue(), dtype=np.uint8)
      image = cv2.imdecode(data, 1)
      image = image.astype(np.float32)
      (h,w,cols) = image.shape
      (xc,yc) = (h/2,w/2)
      frames = frames + 1
      if (frames % fupdate) == 0:
        print ("%s,  %03d max = %5.3f, avg = %5.3f" % (str(datetime.datetime.now()),frames,max,avgmax))
#       print (avgcol); print (avgdiff)
#       print "%02d center: %s (BGR)" % (frames,image[xc,yc])
      if first_frame == False:
        newcol = image[:,:,1] # full image, green only
        avgcol = (avgcol * (1.0-frac)) + (newcol * frac)
        diff = newcol - avgcol  # matrix of difference-from-average pixel values
        diff = abs(diff)        # take absolute value of difference
        avgdiff = ((1 - frac)*avgdiff) + (frac * diff)  # long-term average difference
        a_thresh = tfactor*avgmax  # adaptive amplitude-of-change threshold
        condition = diff > a_thresh
        changedPixels = np.extract(condition,diff)
        countPixels = changedPixels.size
        max = np.amax(diff)     # find the biggest (single-pixel) change
        avgmax = ((1 - frac)*avgmax) + (max * frac)
        if countPixels > pc_thresh:  # notable change of enough pixels => motion!
          now = time.time()
          interval = now - start
          start = now
          daytime = datetime.datetime.now().strftime("%y%m%d-%H_%M_%S.%f")
          daytime = daytime[:-3]  # remove last three digits (xxx microseconds)
          daytime = daytime + "_" + str(countPixels)
          tstr = ("%s,  %04.1f, %6.3f, %03d\n" % (daytime,max,interval,countPixels))
          print (tstr, end='')
          f.write(tstr)
          f.flush()
          if (interval > picHoldoff):  # don't write images more quickly than picHoldoff interval
            imgName = daytime + ".jpg"
            cv2.imwrite(imgName, image)  # save as an image
      else:                     # very first frame
        avgcol = image[:,:,1]  # green channel, set the initial average to first frame
        avgdiff = avgcol / 20.0 # obviously a coarse estimate
        first_frame = False
There is one mystery, I'd love to know why this delay is sometimes required to prevent crashes (seems to depend on lighting => camera exposure time)

Code: Select all

      camera.capture(stream, format='jpeg', use_video_port=True)
#      time.sleep(0.15)  # sometimes delay needed to avoid crashes (!?)
I'm looking at doing something similar. Would you be able to provide an update on your progress and any updated code? I have a solution using motion-mmal, however as you mention the frame rate for detection is quite slow. I also get a number of false alarms etc. due to brightness differences between frames.

I'm hoping to use this library to increase the rate at which motion can be detected by using smaller image resolutions. In theory having a greater frame rate should reduce the delta between images for things like brightness changes, thereby reducing false alarms.

User avatar
waveform80
Posts: 303
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: simple motion-detector using picamera

Tue Jun 24, 2014 9:36 pm

jit wrote:
jbeale wrote:In case of interest, a simple motion-detector using picamera. My motivation was to find something faster than motion-mmal which seems limited to between 1/4 and 1/2 second response time regardless of image resolution. This one can go as fast as 10 fps (0.1 second response) at very low resolutions. It builds up a long-term average background image (controlled by filter parameter 'frac', smaller = slower averaging) and checks each new image for at least "pc_thresh" pixels having a difference from the background greater than (tfactor * avg_max) where "avg_max" is the running average of the maximum pixel deviation-from-averaged-background in each frame. It works on only the green channel to save time. Of course all this would probably run at 30 fps on the GPU, but I'm not smart enough to do that yet. Python is a lot easier to work with :-).

Code: Select all

#!/usr/bin/python

# simple motion-detection using pypi.python.org/pypi/picamera
# runs at up to 10 fps depending on resolution, etc.
# Nov. 30 2013 J.Beale

from __future__ import print_function
import io, os, time, datetime, picamera, cv2
import numpy as np

#width = 32
#height = 16 # minimum size that works ?
width = 144
height = 96
frames = 0
first_frame = True
frac = 0.05       # fraction to update long-term average on each pass
a_thresh = 16.0   # amplitude change detection threshold for any pixel
pc_thresh = 20    # number of pixels required to exceed threshold
avgmax = 3        # long-term average of maximum-pixel-change-value
tfactor = 2       # threshold above max. average diff per frame for motion detect
picHoldoff = 1.0  # minimum interval (seconds) between saving images
fupdate = 10000   # report debug data every this many frames
#fupdate = 10     # report debug data every this many frames
logfile = "/home/pi/pics/cam-log.csv"

np.set_printoptions(precision=2)
f = open(logfile, 'a')
f.write("# cam log v0.1 Nov.28 2013 JPB\n")
f.write("# Start: " + str(datetime.datetime.now()) + "\n")
f.flush()

daytime = datetime.datetime.now().strftime("%y%m%d-%H_%M_%S.%f")
daytime = daytime[:-3]  # truncate microseconds to milliseconds
print("# Start at %s" % str(datetime.datetime.now()))

stream = io.BytesIO()
with picamera.PiCamera() as camera:
    camera.resolution = (width, height)
    camera.start_preview()
    time.sleep(2)
    start = time.time()
    while True:
        camera.capture(stream, format='jpeg', use_video_port=True)
#       time.sleep(0.15)  # sometimes delay needed to avoid crashes (!?)
        stream.seek(0)
        data = np.frombuffer(stream.getvalue(), dtype=np.uint8)
        stream.truncate()  # drop any stale bytes left over from a longer previous frame
        image = cv2.imdecode(data, 1)
        image = image.astype(np.float32)
        (h, w, cols) = image.shape
        (xc, yc) = (h/2, w/2)
        frames = frames + 1
        if (frames % fupdate) == 0:
            print("%s,  %03d max = %5.3f, avg = %5.3f" %
                  (str(datetime.datetime.now()), frames, maxdiff, avgmax))
#           print(avgcol); print(avgdiff)
#           print("%02d center: %s (BGR)" % (frames, image[xc, yc]))
        if not first_frame:
            newcol = image[:,:,1]  # full image, green channel only
            avgcol = (avgcol * (1.0 - frac)) + (newcol * frac)
            diff = abs(newcol - avgcol)  # absolute difference-from-average, per pixel
            avgdiff = ((1 - frac) * avgdiff) + (frac * diff)  # long-term average difference
            a_thresh = tfactor * avgmax  # adaptive amplitude-of-change threshold
            condition = diff > a_thresh
            changedPixels = np.extract(condition, diff)
            countPixels = changedPixels.size
            maxdiff = np.amax(diff)  # biggest single-pixel change ('maxdiff' avoids shadowing builtin max())
            avgmax = ((1 - frac) * avgmax) + (maxdiff * frac)
            if countPixels > pc_thresh:  # notable change of enough pixels => motion!
                now = time.time()
                interval = now - start
                start = now
                daytime = datetime.datetime.now().strftime("%y%m%d-%H_%M_%S.%f")
                daytime = daytime[:-3]  # truncate microseconds to milliseconds
                daytime = daytime + "_" + str(countPixels)
                tstr = "%s,  %04.1f, %6.3f, %03d\n" % (daytime, maxdiff, interval, countPixels)
                print(tstr, end='')
                f.write(tstr)
                f.flush()
                if interval > picHoldoff:  # don't write images more quickly than picHoldoff interval
                    imgName = daytime + ".jpg"
                    cv2.imwrite(imgName, image)  # save as an image
        else:  # very first frame
            avgcol = image[:,:,1]  # green channel: set the initial average to the first frame
            avgdiff = avgcol / 20.0  # obviously a coarse estimate
            first_frame = False
One mystery remains: I'd love to know why this delay is sometimes required to prevent crashes (it seems to depend on lighting, and hence on camera exposure time):

Code: Select all

      camera.capture(stream, format='jpeg', use_video_port=True)
#      time.sleep(0.15)  # sometimes delay needed to avoid crashes (!?)
I'm looking at doing something similar. Would you be able to provide an update on your progress and any updated code? I have a solution using motion-mmal; however, as you mention, the frame rate for detection is quite slow. I also get a number of false alarms due to brightness differences between frames.

I'm hoping to use this library to increase the rate at which motion can be detected by using smaller image resolutions. In theory a higher frame rate should reduce the delta between successive images for things like brightness changes, thereby reducing false alarms.
Have you had a look at the stuff for querying motion vectors from the H.264 encoder in 1.5? Although it's not exactly the same as background averaging and subtraction, it seems another reasonable basis for motion detection, and given that most of the processing is already done for you on the GPU it's *much* more efficient. There's an example hidden away in the custom outputs section of the docs, and another one buried in the array extensions section. Now I come to think of it, I should probably gather those into a clearly labelled "motion detection" part of the advanced recipes section!

Dave.
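[Editor's note: as a rough illustration of the vector-based approach Dave describes (not the exact example from the picamera docs), the per-frame check that a `picamera.array.PiMotionAnalysis` subclass might run in its `analyse()` callback could look like this. The two thresholds are illustrative values, not taken from the docs:

```python
import numpy as np

# Sketch of the kind of test one might run inside
# picamera.array.PiMotionAnalysis.analyse() (picamera 1.5+): the H.264
# encoder hands analyse() one record per 16x16 macroblock, with signed
# 8-bit 'x'/'y' motion-vector components (plus a 'sad' field).
MAG_THRESH = 60    # vector magnitude that counts as "moving" (assumption)
COUNT_THRESH = 10  # macroblocks over threshold needed to flag motion (assumption)

def frame_has_motion(mv):
    """mv: structured array with int8 'x' and 'y' fields, one per macroblock."""
    mag = np.sqrt(mv['x'].astype(np.float64) ** 2 +
                  mv['y'].astype(np.float64) ** 2)
    return int((mag > MAG_THRESH).sum()) > COUNT_THRESH
```

In use, a `PiMotionAnalysis` subclass would call something like this from `analyse()` while `camera.start_recording(..., motion_output=...)` is running; since the vectors are computed on the GPU anyway, the Python side only does this cheap thresholding.]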

User avatar
AndrewS
Posts: 3625
Joined: Sun Apr 22, 2012 4:50 pm
Location: Cambridge, UK
Contact: Website

Re: Pure Python camera interface

Wed Jun 25, 2014 3:59 pm

In case anyone is using picamera in a battery-powered application, I wonder if it's worth making a note about http://www.raspberrypi.org/forums/viewt ... 05#p569205 in your docs?

i.e. if they're only taking very occasional snapshots, presumably it'd be more power-efficient to actually close and re-open the PiCamera object each time, rather than just opening it once at the start of the program?
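[Editor's note: a minimal sketch of that pattern, holding the camera open only for the duration of the shot. The resolution and warm-up time are arbitrary choices, and `snapshot_name` is a hypothetical helper matching the timestamp style used earlier in the thread:

```python
import time

def snapshot_name(when):
    # hypothetical helper: timestamped filename, e.g. 140625-15_59_00.jpg
    return when.strftime('%y%m%d-%H_%M_%S') + '.jpg'

def take_snapshot(filename, warmup=2):
    # Import inside the function so the camera stack is only touched when a
    # snapshot is actually wanted; the 'with' block closes (and powers down)
    # the camera as soon as the capture finishes.
    import picamera
    with picamera.PiCamera() as camera:
        camera.resolution = (1024, 768)
        time.sleep(warmup)  # let exposure and white balance settle
        camera.capture(filename)
```

Whether this actually saves power over keeping one `PiCamera` instance open depends on how often snapshots are taken, per the forum post linked above.]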

User avatar
waveform80
Posts: 303
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Wed Jun 25, 2014 4:09 pm

AndrewS wrote:In case anyone is using picamera in a battery-powered application, I wonder if it's worth making a note about http://www.raspberrypi.org/forums/viewt ... 05#p569205 in your docs?

i.e. if they're only taking very occasional snapshots, presumably it'd be more power-efficient to actually close and re-open the PiCamera object each time, rather than just opening it once at the start of the program?
That sounds like a good candidate for the FAQ - I'll make a ticket for it so I don't forget!


Thanks,

Dave.

jit
Posts: 33
Joined: Fri Apr 18, 2014 2:52 pm

Re: simple motion-detector using picamera

Thu Jun 26, 2014 7:49 am

waveform80 wrote: Have you had a look at the stuff for querying motion vectors from the H.264 encoder in 1.5? Although it's not exactly the same as background averaging and subtraction it seems another reasonable basis for coding motion detection, and given that most of the processing is already done for you in the GPU it's *much* more efficient. There's an example hidden away in the custom outputs section of the docs, and another one buried in the array extensions section. I should probably have put those in a clearly labelled "motion detection" section in the advanced recipes section now I come to think of it!

Dave.
Thanks for the suggestion Dave. And again, great documentation! I see you mention that the analyse() method needs to run faster than the rate at which frames are provided to it. If the analysis takes too long, what sort of behaviour can I expect from the library? E.g. is an error thrown, and is it recoverable? I'm thinking of the scenario where analyse() normally runs in the time required, but the system is occasionally under heavy load (e.g. installing updates), so it may sometimes take longer than usual. So I'm just wondering about the error handling.

bootsmann
Posts: 28
Joined: Thu Jan 23, 2014 7:07 am

Re: Pure Python camera interface

Fri Jul 04, 2014 1:39 pm

Hi

Does anyone have an idea how I can stream pictures or video via bottle?
This code only takes one picture per request; my idea is to show a live stream. Is that possible with picamera?

EDIT:
How do I need to write out the byte string so that a picture is actually shown?

Code: Select all

def get_data_from_camera():
    # Create an in-memory stream
    stream = io.BytesIO()
    with picamera.PiCamera() as camera:
        camera.resolution = (640, 480)
        camera.rotation = 180
        camera.capture_continuous(stream, format='mjpg')
        stream.getvalue()
        stream.seek(0)
        stream.truncate()
    return stream

@route('/live-stream', method='GET')
def live_stream():
    response.content_type = 'multipart/x-mixed-replace; boundary=--jpgboundary'
    while True:
        yield get_data_from_camera()

run(reloader=True, host='10.0.2.108', port=8080, server='gevent')
Last edited by bootsmann on Tue Jul 15, 2014 9:03 am, edited 1 time in total.
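[Editor's note: for anyone landing on this later, here is one hedged sketch of the missing pieces. `capture_continuous()` returns a generator that has to be iterated, the in-memory stream has to be rewound and truncated after every frame, and each frame has to be wrapped in a multipart part matching the boundary promised in the Content-Type header. Function names and the boundary string are arbitrary:

```python
import io

BOUNDARY = b'jpgboundary'

def mjpeg_part(frame):
    # Wrap one JPEG byte string as a single part of a
    # multipart/x-mixed-replace response.
    return (b'--' + BOUNDARY + b'\r\n'
            b'Content-Type: image/jpeg\r\n'
            b'Content-Length: ' + str(len(frame)).encode('ascii') +
            b'\r\n\r\n' + frame + b'\r\n')

def frame_source(camera):
    # Yields successive JPEG frames from an already-open picamera.PiCamera.
    # capture_continuous() appends to the stream, so rewind and truncate
    # after each frame or the buffer grows without bound.
    stream = io.BytesIO()
    for _ in camera.capture_continuous(stream, format='jpeg',
                                       use_video_port=True):
        yield stream.getvalue()
        stream.seek(0)
        stream.truncate()
```

A bottle route would then set `response.content_type = 'multipart/x-mixed-replace; boundary=jpgboundary'` and `yield mjpeg_part(frame)` for each frame from `frame_source(camera)`, keeping one long-lived `PiCamera` instance open rather than reopening the camera on every request.]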
