jit
Posts: 30
Joined: Fri Apr 18, 2014 2:52 pm

Re: simple motion-detector using picamera

Sat Jul 12, 2014 7:38 pm

waveform80 wrote: Have you had a look at the stuff for querying motion vectors from the H.264 encoder in 1.5? Although it's not exactly the same as background averaging and subtraction it seems another reasonable basis for coding motion detection, and given that most of the processing is already done for you in the GPU it's *much* more efficient. There's an example hidden away in the custom outputs section of the docs, and another one buried in the array extensions section. I should probably have put those in a clearly labelled "motion detection" section in the advanced recipes section now I come to think of it!
Dave.
I tried to combine your two examples, so that a circular buffer is used for a high definition stream and a lower resolution stream is used for motion detection. It doesn't quite work: it seems to detect motion, but doesn't write out the files. I'd appreciate it if anyone could provide assistance.

Code: Select all

from __future__ import division

import io
import picamera
import numpy as np
import time
import traceback, sys

# customisable variables

# for some reason the following seems to stall
#record_width=1920
#record_height=1080
#analysis_width=480
#analysis_height=270

record_width=640
record_height=480
analysis_width=640
analysis_height=480
framerate=30

motion_detected = False

motion_dtype = np.dtype([
    ('x', 'i1'),
    ('y', 'i1'),
    ('sad', 'u2'),
    ])

class MyMotionDetector(object):
    def __init__(self, camera):
        self.cols = (analysis_width + 15) // 16
        self.cols += 1 # there's always an extra column
        self.rows = (analysis_height + 15) // 16

    def write(self, s):
        global motion_detected
        try:
            # Load the motion data from the string just written
            data = np.frombuffer(s, dtype=motion_dtype)
            # Re-shape it and calculate the magnitude of each vector
            data = data.reshape((self.rows, self.cols))
            data = np.sqrt(
                np.square(data['x'].astype(np.float64)) +
                np.square(data['y'].astype(np.float64))
                ).clip(0, 255).astype(np.uint8)
            # If there're more than 10 vectors with a magnitude greater
            # than 60, then say we've detected motion
            if (data > 60).sum() > 10:
                print('Motion detected!')
                motion_detected = True
            else:
                print('Motion stopped')
                motion_detected = False
            # Pretend we wrote all the bytes of s
            return len(s)
        except:
            traceback.print_exc(file=sys.stdout)
            raise

def write_video(stream):
    # Write the entire content of the circular buffer to disk. No need to
    # lock the stream here as we're definitely not writing to it
    # simultaneously
    with io.open('before.h264', 'wb') as output:
        for frame in stream.frames:
            if frame.header:
                stream.seek(frame.position)
                break
        while True:
            buf = stream.read1()
            if not buf:
                break
            output.write(buf)
    # Wipe the circular stream once we're done
    stream.seek(0)
    stream.truncate()

with picamera.PiCamera() as camera:
    try:
        camera.resolution = (record_width, record_height)
        camera.framerate = framerate
        camera.start_recording('/dev/null', format='h264', motion_output=MyMotionDetector(camera), resize=(analysis_width, analysis_height))

        stream = picamera.PiCameraCircularIO(camera, seconds=2)
        camera.start_recording(stream, format='h264', splitter_port=2)

        while True:
            # wait_recording doesn't seem to work, maybe because two recordings are running
            # camera.wait_recording(1, splitter_port=2)
            time.sleep(1)
            if (motion_detected):
                print('Splitting recording')
                # As soon as we detect motion, split the recording to
                # record the frames "after" motion
                camera.split_recording('after.h264', splitter_port=2)
                # Write the seconds "before" motion to disk as well
                print('Writing \'before\' stream')
                write_video(stream)
                # Wait until motion is no longer detected, then split
                # recording back to the in-memory circular buffer
                while (motion_detected):
                    time.sleep(1)
                    # wait recording doesn't seem to work
                    # camera.wait_recording(1, splitter_port=2)
                print('Motion stopped, splitting to stream')
                camera.split_recording(stream, splitter_port=2)
    except:
        traceback.print_exc(file=sys.stdout)
        raise
    finally:
        camera.stop_recording()

waveform80
Posts: 303
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: simple motion-detector using picamera

Mon Jul 14, 2014 7:56 pm

jit wrote: I tried to combine your two examples, so that a circular buffer is used for a high definition stream and a lower resolution stream is used for motion detection... [post and code quoted in full above]
I've tried the code above under picamera 1.5 and my under-development copy of 1.6. I can reproduce the issue in 1.5, but in 1.6 it seems to work tolerably happily (although I'll suggest a few improvements below). I'm not sure why it's not working under 1.5 but unfortunately I haven't got much time to dig into it at the moment (though I am interested in the root cause...).

Anyway, some suggested improvements. Firstly I wouldn't bother with the splitter-port stuff at all. It's fine to bung motion_output to an analyzer, and have the video output going elsewhere; you can even split the video output without affecting the motion output on the same port. Secondly - I'd use the PiMotionAnalysis class to implement the motion detection; it's easier but I can see why you didn't use it above (because in 1.5 it doesn't handle resized streams properly - that's getting fixed with a new "resize" parameter for the class in 1.6). Thirdly, I'd add some hysteresis to the motion detection state. It tends to flip on and off frame-by-frame and actually you probably don't want it flipping state more than once every second or so (I've gone for three in the code below):
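The hysteresis idea can be sketched on its own, independently of the camera. This is just a minimal illustration (the `Hysteresis` class name and the explicit timestamp argument are my own, chosen so the behaviour is easy to test with fake times), not Dave's exact code:

```python
# Minimal hysteresis sketch: once motion is seen, keep reporting "motion"
# for at least `hold` seconds after the last positive detection, so the
# state can't flip on and off frame-by-frame.
class Hysteresis:
    def __init__(self, hold=3.0):
        self.hold = hold
        self.last_seen = None

    def update(self, detected, now):
        # `now` is passed in explicitly (rather than calling time.time())
        # so the logic can be exercised with fake timestamps
        if detected:
            self.last_seen = now
            return True
        return self.last_seen is not None and (now - self.last_seen) < self.hold
```

With hold=3 this gives the three-second minimum event length used in the code below.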

So, here's my rough attempt at an improved version:

Code: Select all

import io
import time
import picamera
import picamera.array
import numpy as np

motion_timestamp = time.time()
motion_detected = False

class MyMotionDetector(picamera.array.PiMotionAnalysis):
    def analyse(self, a):
        global motion_timestamp, motion_detected
        a = np.sqrt(
            np.square(a['x'].astype(np.float64)) +
            np.square(a['y'].astype(np.float64))
            ).clip(0, 255).astype(np.uint8)
        # If there're more than 10 vectors with a magnitude greater
        # than 60, then say we've detected motion
        detected = (a > 60).sum() > 10
        # Ensure all motion detected events last at least three seconds
        now = time.time()
        if detected:
            motion_timestamp = now
        elif (now - motion_timestamp) < 3:
            detected = True
        # If the state's changed, notify the console
        if motion_detected != detected:
            motion_detected = detected
            print('Motion %s' % ('stopped', 'detected')[detected])


def write_video(stream):
    # Write the entire content of the circular buffer to disk. No need to
    # lock the stream here as we're definitely not writing to it
    # simultaneously
    with io.open('before.h264', 'wb') as output:
        for frame in stream.frames:
            if frame.header:
                stream.seek(frame.position)
                break
        while True:
            buf = stream.read1()
            if not buf:
                break
            output.write(buf)
    # Wipe the circular stream once we're done
    stream.seek(0)
    stream.truncate()


with picamera.PiCamera() as camera:
    camera.resolution = (640, 480)
    camera.framerate = 30
    with picamera.PiCameraCircularIO(camera, seconds=2) as stream:
        camera.start_recording(
                stream, format='h264', motion_output=MyMotionDetector(camera))
        try:
            while True:
                camera.wait_recording(1)
                if motion_detected:
                    print('Splitting recording')
                    # As soon as we detect motion, split the recording to
                    # record the frames "after" motion
                    camera.split_recording('after.h264')
                    # Write the seconds "before" motion to disk as well
                    print("Writing 'before' stream")
                    write_video(stream)
                    # Wait until motion is no longer detected, then split
                    # recording back to the in-memory circular buffer
                    while motion_detected:
                        camera.wait_recording(1)
                    print('Motion stopped, returning to circular buffer')
                    camera.split_recording(stream)
        finally:
            camera.stop_recording()
I've tried it under 1.5 and the 1.6 development version and it seems to work tolerably happily - could probably do with some improvement in the motion detection though!

Have fun :)


Dave.

waveform80
Posts: 303
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Sat Jul 19, 2014 6:02 pm

Almost forgot to add an announcement here! Picamera 1.6 has just been released (available now on PyPI, should be in Raspbian in a few days when Alex gets a spare second). Highlights this time are:
  • You can now query the awb_gains attribute to get the red and blue gains determined by the camera's auto-white-balance algorithm. Useful for getting sane initial values and then fixing them.
  • Similarly, there's a new exposure_speed attribute for reading the shutter speed determined by the camera's auto-exposure algorithm.
  • The new drc_strength attribute can be used to query/set the camera's dynamic range strength
  • The intra_period parameter for start_recording can now be set to 0 (produce an initial I-frame, then only P-frames)
  • Several nasty bugs have been squashed, including one which prevented picamera from being used when HDMI was disabled (with tvservice -o) to save a bit of power.
Anyway, as usual there's a more comprehensive list of the changes in the change log, and the docs have all the installation/upgrade instructions.

Many thanks to the community for another round of excellent bug reports and questions; keep 'em coming and have fun!


Dave.

jit
Posts: 30
Joined: Fri Apr 18, 2014 2:52 pm

Re: Pure Python camera interface

Sat Jul 19, 2014 8:01 pm

Great to see a new version, thanks Dave. I'll be upgrading very shortly.

I've spent some time playing around with the script you modified. I thought I'd share it and see if anyone has suggestions on improving it and making the motion detection better. I'm very pleased with the way the circular buffer is working; it's ideal for capturing the moments before motion takes place.

I've added the ability to merge together the before and after files and box them using mp4box.

With regards to the motion detection, I'm wondering whether it's better to plug in another library given that this problem has already been solved, although I'm not entirely sure how I'd go about that (my Python isn't very strong).

I've added some TODOs around bits that need work.

Disclaimer: I'm not very familiar with Python, so I'm sure there's a lot of tidy up that could be done.

Code: Select all

import io
import time
import picamera
import picamera.array
import numpy as np
import subprocess

# This uses motion vectors for cheap motion detection. To help reduce false
# positives it expects motion to be detected for a sequence of frames before
# triggering. Although this works for most cases, there are issues around
# detection of slow-moving objects.

# TODO this requires considerable clean-up
# TODO would be nice to have a date/time overlay on the video
# TODO sort out logging, using a debug boolean isn't great

debug=True
debugMagnitudeMatrix=False

# customisable variables
record_width=1296
record_height=730
framerate=15
pre_buffer_seconds=1 # 1 is actually around 3 seconds at the res/frame rate settings above
vector_magnitude=40 # the magnitude of vector change for motion detection
min_vectors_above_magnitude=15 # the number of motion vectors that must be above the vector_magnitude for this frame to count towards motion
min_sequence_threshold=3 # the minimum number of frames in sequence that must exceed the vector threshold for motion to have been considered as detected
file_root='/var/www/motion'
mp4box=True

sequence_counter=0
sequential_frame_count=0
start_motion_timestamp = time.time()
last_motion_timestamp = time.time()
motion_detected = False

class MyMotionDetector(picamera.array.PiMotionAnalysis):
	def analyse(self, a):
		global debug, debugMagnitudeMatrix, sequence_counter, sequential_frame_count, min_sequence_threshold, start_motion_timestamp, last_motion_timestamp, motion_detected, vector_magnitude, min_vectors_above_magnitude
		a = np.sqrt(
			np.square(a['x'].astype(np.float64)) +
			np.square(a['y'].astype(np.float64))
			).clip(0, 255).astype(np.uint8)

		if debugMagnitudeMatrix:
			# print a matrix of counts of vectors above a range of thresholds,
			# to help determine some good numbers to plug into detection
			print(', '.join(
				'%d=%d' % (t, (a > t).sum()) for t in range(10, 101, 10)))

		sum_of_vectors_above_threshold = (a > vector_magnitude).sum()
		# if (debug and (sum_of_vectors_above_threshold > 0)): print(str(sum_of_vectors_above_threshold) + ' vectors above magnitude of ' + str(vector_magnitude))
		detected = sum_of_vectors_above_threshold > min_vectors_above_magnitude

		if detected:
			sequential_frame_count = sequential_frame_count + 1
			if (debug and (sequential_frame_count > 0)): print('sequential_frame_count %d' % sequential_frame_count)
			if (motion_detected):
				if (debug): print('extending time')
				last_motion_timestamp = time.time()
		else:
			sequential_frame_count=0
			# if debug: print('sequential_frame_count %d' % sequential_frame_count)

		if ((sequential_frame_count >= min_sequence_threshold) and (not motion_detected)):
			if debug: print('>> Motion detected')
			sequence_counter = sequence_counter + 1
			start_motion_timestamp = time.time()
			last_motion_timestamp = start_motion_timestamp
			motion_detected = True

		if (motion_detected and not detected):
			if ((time.time() - last_motion_timestamp) > 3):
				motion_detected = False
				if debug: print('<< Motion stopped, beyond 3s')
			else:
				if debug: print('Motion stopped, but still within 3s')

def write_video(stream):
	# Write the entire content of the circular buffer to disk. No need to
	# lock the stream here as we're definitely not writing to it
	# simultaneously
	global sequence_counter, start_motion_timestamp
	before_filename = file_root + '/before-' + str(sequence_counter) + '.h264'
	with io.open(before_filename, 'wb') as output:
		for frame in stream.frames:
			if frame.header:
				stream.seek(frame.position)
				break
		while True:
			buf = stream.read1()
			if not buf:
				break
			output.write(buf)
	# Wipe the circular stream once we're done
	stream.seek(0)
	stream.truncate()
	return before_filename


with picamera.PiCamera() as camera:
	camera.resolution = (record_width, record_height)
	camera.framerate = framerate
	with picamera.PiCameraCircularIO(camera, seconds=pre_buffer_seconds) as stream:
		# this delay is needed, otherwise you seem to get some noise which triggers the motion detection
		time.sleep(1)
		if debug: print ('starting motion analysis')
		camera.start_recording(stream, format='h264', motion_output=MyMotionDetector(camera))
		try:
			while True:
				camera.wait_recording(1)
				if motion_detected:
					file_count = sequence_counter
					if debug: print('Splitting recording ' + str(file_count))
					# As soon as we detect motion, split the recording to record the frames "after" motion
					after_filename = file_root + '/after-' + str(file_count) + '.h264'
					camera.split_recording(after_filename)
					# Write the seconds "before" motion to disk as well
					if debug: print("Writing 'before' stream")
					before_filename = write_video(stream)
					# Wait until motion is no longer detected, then split recording back to the in-memory circular buffer
					while motion_detected:
						camera.wait_recording(1)
					print('Motion stopped, returning to circular buffer\n')
					camera.split_recording(stream)

					# merge before and after files into a single file
					# TODO this should ideally be done asynchronously
					# TODO is there a better way of doing this, feels a bit hacky to call out to a subprocess
					output_prefix = file_root + '/' + time.strftime("%Y-%m-%d--%H:%M:%S", time.gmtime(start_motion_timestamp)) + '--' + str(sequence_counter)
					h264_file = output_prefix + '.h264'

					# for some reason MP4Box doesn't work with the colons in the timestamped filename (you always get a 'Requested URL is not valid or cannot be found'), so work around it by using a different filename
					if mp4box:
						h264_file = file_root + '/' + 'merge-' + str(file_count) + '.h264'

					cmd = 'mv ' + before_filename + ' ' + h264_file + ' && cat ' + after_filename + ' >> ' + h264_file + ' && rm ' + after_filename
					if debug: print('[CMD]: ' + cmd)
					subprocess.call([cmd], shell=True)
					if debug: print('finished file merge')

					if mp4box:
						# mp4box the file
						# TODO this should ideally be done asynchronously
						# TODO investigate if mp4box has a python api
						mp4_file = output_prefix + '.mp4'
						cmd = 'MP4Box -fps ' + str(framerate) + ' -add ' + h264_file + ' ' + mp4_file
						if debug: print('[CMD] ' + cmd)
						subprocess.call([cmd], shell=True)
						if debug: print('finished mp4box')
		finally:
			camera.stop_recording()

jbeale
Posts: 3263
Joined: Tue Nov 22, 2011 11:51 pm
Contact: Website

Re: Pure Python camera interface

Mon Jul 21, 2014 11:45 pm

I have an idea, but not sure how to implement it in a simple way. I'd really like to be able to distinguish between tree leaves shaking back and forth, and a person walking by. In the latter case, there are a group of contiguous pixels that all have motion in the same direction for some number of frames, and the moving pixels stay next to each other in each frame. For the tree leaves, it is smaller groups and uncorrelated motion. If you simply add up all the motion vectors, which I think is the simple approach, the tree leaves might give you more overall "motion" although in my case it's uninteresting motion that I want to ignore.

The problem I think is that finding the border around an area of contiguous pixels of arbitrary shape might be computationally expensive, when you have to do it for each frame.

waveform80
Posts: 303
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Tue Jul 22, 2014 9:18 am

jbeale wrote: I'd really like to be able to distinguish between tree leaves shaking back and forth, and a person walking by... [post quoted in full above]
Oddly enough this is something I've been looking into as well! The first thing I looked into was convex hulls, but while numpy has an implementation that relies on qhull it's not available in the version packaged with Raspbian. However, then I read up on "feature labelling" which looked even more promising, and it just so happens there's an implementation of it in scipy: http://docs.scipy.org/doc/scipy-0.13.0/ ... label.html

I haven't found the time to play with it yet so I'm not sure how accurate the results will be or how fast it is, but hopefully it's a good starting point!
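For the curious, here's a quick sketch of what feature labelling gives you on a thresholded motion mask. The array and blob sizes below are made up purely for illustration; it uses scipy.ndimage.label as linked above:

```python
import numpy as np
from scipy import ndimage

# Fake thresholded motion mask: True where a macro-block's vector magnitude
# exceeded some threshold. One large blob and one small, separate blob.
mask = np.array([
    [1, 1, 0, 0, 0],
    [1, 1, 0, 0, 1],
    [0, 0, 0, 0, 1],
], dtype=bool)

# Label 4-connected regions: a walking person tends to produce one large
# contiguous blob, while shaking leaves produce many small uncorrelated ones.
labels, num_features = ndimage.label(mask)
sizes = ndimage.sum(mask, labels, range(1, num_features + 1))
largest = int(sizes.max())  # e.g. trigger only if the largest blob is big enough
```

Thresholding on the largest labelled blob (rather than the total vector count) is one way to ignore the leaves.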


Dave.

bootsmann
Posts: 26
Joined: Thu Jan 23, 2014 7:07 am

Re: Pure Python camera interface

Tue Jul 22, 2014 1:41 pm

Hi

I have managed to run a stream server via bottle:

- How can I use a while loop instead of a for statement?

Code: Select all

BOUNDARY = '--jpgboundary'


@route('/index.html')
def index():
    return '<html><head></head><body><img src="/stream.mjpg" /></body></html>'

@route('/stream.mjpg')
def mjpeg():
    response.content_type = 'multipart/x-mixed-replace;boundary=%s' % BOUNDARY
    stream = io.BytesIO()
    with picamera.PiCamera() as camera:
        camera.rotation = 180
        camera.resolution = (640, 480)
        for picture in camera.capture_continuous(stream, 'jpeg'):
            yield BOUNDARY + '\r\n'
            yield 'Content-Type: image/jpeg\r\nContent-Length: %s\r\n\r\n' % len(stream.getvalue())
            yield stream.getvalue()
            stream.seek(0)
            stream.truncate()
            time.sleep(.1)

run(reloader=True, host='0.0.0.0', port=8080, server='gevent')
Now I want to use h264 instead of mjpg. Does anyone have an idea how to handle it?

jbeale
Posts: 3263
Joined: Tue Nov 22, 2011 11:51 pm
Contact: Website

Re: Pure Python camera interface

Tue Jul 29, 2014 6:38 pm

According to this http://www.raspberrypi.org/forums/viewt ... 94#p587905
it should now be possible to record video at 10 frames per minute with each frame having a 6-second exposure. The latest development raspistill from '6by9' now offers still exposures up to 6 seconds, which I verified here http://www.raspberrypi.org/forums/viewt ... 94#p587807 although it will not repeat faster than 2 frames per minute. That could apparently be improved with use of the BURST_CAPTURE parameter in mmal, although I don't know exactly how.

Any chance of these new capabilities making their way into the picamera library?

waveform80
Posts: 303
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Tue Jul 29, 2014 10:15 pm

jbeale wrote: ...Any chance of these new capabilities making their way into the picamera library? [post quoted in full above]
Well now, that's very interesting! Ticket added :)


Dave.


waveform80
Posts: 303
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Thu Jul 31, 2014 9:53 am

bootsmann wrote: Nobody has an idea or a tip?
http://www.raspberrypi.org/forums/viewt ... 96#p583296
Sorry! Completely forgot about that earlier post. On the subject of using "while" instead of "for" with iterators (such as that returned by capture_continuous): you can't because it's up to the iterator when to terminate the loop (unless you force your way out of it with "break"). I'm assuming you want to use "while" because you want to terminate the loop when some condition becomes true? If that's the case just test the condition in an "if" statement within the loop body and use "break" to jump out of the loop when the condition is true, e.g.:

Code: Select all

for i in some_iterator():
    if some_condition():
        break
Alternatively, you could use capture instead of capture_continuous in a while loop (it's all the same thing as far as the camera's concerned when you're using the still port):

Code: Select all

import picamera
import time

with picamera.PiCamera() as camera:
    time.sleep(2)
    while some_condition():
        camera.capture('some_filename.jpg')
Dave.

bootsmann
Posts: 26
Joined: Thu Jan 23, 2014 7:07 am

Re: Pure Python camera interface

Thu Jul 31, 2014 11:20 am

Hi Dave

I have already done it and it works great.
Another question: which do you think is better (for quality and real-time streaming):

streaming .jpg files or .h264 frames?

waveform80
Posts: 303
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Thu Jul 31, 2014 12:29 pm

bootsmann wrote: What do you think is better (quality and real time stream): streaming .jpg files or .h264 frames?
As a general rule of thumb, H.264 is better quality than JPEG in the same space. Bear in mind that JPEG is a 20-year-old format at this point, while H.264 gets to benefit from all the research done since then; from what I've read, H.264 I-frames provide better quality than JPEG in less space. However, replacing a format as widely used as JPEG is a major task, and thus far JPEG's been "good enough" that it's hung around all these years.

That said, client support for H.264 in web browsers is still lacking (to say the least), so older formats like MJPEG or even MPEG1 can be a decent alternative just for the sake of compatibility!

Dave.

bootsmann
Posts: 26
Joined: Thu Jan 23, 2014 7:07 am

Re: Pure Python camera interface

Tue Aug 05, 2014 1:41 pm

Hi Dave

Sorry for annoying you again but one last question:

Now I'm trying to use h.264 instead of .mjpg.

Code: Select all

@route('/home')
def index():
    HTML = """<html>
                <head>
                </head>
                <body>
                <video controls>
                <source src="/test.mp4" type="video/mp4">
                </video>
                </body>
            </html>"""
    return HTML


@route('/test.mp4')
def mjpeg():
    stream = io.BytesIO()

    with picamera.PiCamera() as camera:
        camera.rotation = 180
        camera.resolution = (640, 480)

        while True:
            camera.start_recording(stream, format='h264', quantization=23)
            #camera.wait_recording(15)
            #camera.stop_recording()
            #yield 'application/mp4'+'\r\n'
            yield 'Content-Type: video/mp4 .mp4\r\nContent-Length: %s\r\n\r\n' % len(stream.getvalue())
            yield stream.getvalue()
            stream.seek(0)
            stream.truncate()
            time.sleep(.1)
Chrome/Firefox displays no video, although the HTML video player keeps loading and the video can be downloaded via right-click.
Do you have any ideas for displaying it inside a browser as a ''live stream''?

Thank you very much for helping me.

User avatar
jbeale
Posts: 3263
Joined: Tue Nov 22, 2011 11:51 pm
Contact: Website

Re: Pure Python camera interface

Tue Aug 05, 2014 2:24 pm

Are you working with an H.264 file or an MP4 file? They are not the same: MP4 is a "container" format. The raw H.264 file that the R-Pi camera generates is NOT MP4; it first has to be "wrapped" separately to become MP4 (e.g. by MP4Box), and I'll bet the browsers cannot play raw H.264 files.
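As a rough illustration of that wrapping step (assuming MP4Box from the gpac package is installed; the function names here are just for the example):

Code: Select all

```python
import subprocess

def mp4box_cmd(src, dst, fps=30):
    # Raw H.264 carries no timing information, so the framerate has to
    # be supplied explicitly when importing the stream.
    return ['MP4Box', '-add', src, '-fps', str(fps), dst]

def wrap_h264(src, dst, fps=30):
    # Produces an MP4 that just about any player or browser can handle.
    subprocess.check_call(mp4box_cmd(src, dst, fps))
```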

User avatar
waveform80
Posts: 303
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Tue Aug 05, 2014 10:40 pm

bootsmann wrote:Hi Dave

Sorry to bother you again, but one last question:

Now I'm trying to use H.264 instead of MJPEG.

Code: Select all

@route('/home')
def index():
   HTML = """<html>
                <head>
                </head>
                <body>
                <video controls>
                <source src="/test.mp4" type="video/mp4">
                </video>
                </body>
            </html>"""
   return HTML
    

@route('/test.mp4')
def mjpeg():
    stream = io.BytesIO()

    with picamera.PiCamera() as camera:
        camera.rotation = 180
        camera.resolution = (640, 480)
        
        while True:
            camera.start_recording(stream, format='h264', quantization=23)
            #camera.wait_recording(15)
            #camera.stop_recording()
            #yield 'application/mp4'+'\r\n'
            yield 'Content-Type: video/mp4 .mp4\r\nContent-Length: %s\r\n\r\n' % len(stream.getvalue())
            yield stream.getvalue()
            stream.seek(0)
            stream.truncate()
            time.sleep(.1)  
Chrome/Firefox displays no video, although the HTML video player keeps loading and the video can be downloaded via right-click.
Do you have any ideas for displaying it inside a browser as a ''live stream''?

Thank you very much for helping me.
jbeale is absolutely correct: the Pi's camera outputs a raw H.264 stream, as opposed to the MPEG transport stream that browsers seem to expect (quite reasonably, given that an H.264 stream contains no framerate information, so the client has no idea what framerate to play it back at!).

For playing back a recording, you can simply run the result through MP4Box (again, as jbeale suggested), which will produce a nicely encapsulated video that just about anything can play. For streaming, things get considerably more complicated as you'd need to encapsulate the raw H.264 data on the fly. This isn't something MP4Box can do (as far as I'm aware). Theoretically it is something ffmpeg can do, but in all my tests it fails to operate correctly (specifically, it outputs the same PTS for every frame). Admittedly I'm using the stock ffmpeg from Raspbian; later builds may well fix this, but I was searching for a solution that would be trivial for people to install and use.

Recently, a group in Canada got in contact with me about this issue (live streaming to a web browser) and pointed me at a fascinating project: a Javascript-based MPEG1 decoder. Okay, it's seriously crude compared to H.264, but it turns out the Pi is just quick enough to do realtime transcoding of raw YUV to MPEG1 on the CPU at low resolutions, while the Javascript decoder is fast enough to handle MPEG1 decoding on most platforms (not quite on the Pi itself, but it works fine on my smartphone, for example).
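As a rough sketch of that pipeline (not the exact pistreaming code - the resolution, framerate and bitrate below are illustrative), the avconv invocation that turns raw YUV frames piped from picamera into an MPEG1 stream looks something like this:

Code: Select all

```python
def avconv_cmd(width=640, height=480, fps=24, bitrate='800k'):
    # Build the transcoder command: unencoded YUV420 frames on stdin,
    # MPEG1 video on stdout (for the Javascript decoder to consume).
    return [
        'avconv',
        '-f', 'rawvideo',
        '-pix_fmt', 'yuv420p',   # the layout picamera's YUV output uses
        '-s', '%dx%d' % (width, height),
        '-r', str(fps),
        '-i', '-',               # read frames from stdin
        '-f', 'mpeg1video',
        '-b', bitrate,
        '-r', str(fps),
        '-',                     # write the stream to stdout
    ]
```

In practice you'd hand this to subprocess.Popen with stdin/stdout pipes, point the camera's recording at the process's stdin, and broadcast its stdout over the websocket.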

I've hacked together a demo using websockets for live streaming of the video which you can find here, but before everyone gets all excited, be aware that this won't work yet (at least not with picamera 1.6): it requires the new unencoded video output which comes with 1.7. I was intending to release this yesterday, but with the sterling work 6by9's putting into the firmware over the next couple of weeks I've decided to postpone the release until Friday in the hope of squeezing in a bit more new functionality.

If you're really desperate to try out the pistreaming repo right now, you can install a development copy of picamera cloned from the GitHub repo (instructions in the docs), but if you're not comfortable with Python virtualenvs I'd advise just waiting for 1.7 to have a play with it - it won't be long now!


Dave.

User avatar
jbeale
Posts: 3263
Joined: Tue Nov 22, 2011 11:51 pm
Contact: Website

Re: Pure Python camera interface

Thu Aug 07, 2014 5:06 pm

Looks to be a lot of goodies in the latest release https://github.com/raspberrypi/firmware ... 6009415165
I'm hoping to have python access to the new text annotate function. I would love to have burned-in time/date/frame number, without needing to decode and re-encode video... is it possible?

User avatar
waveform80
Posts: 303
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Thu Aug 07, 2014 5:29 pm

jbeale wrote:Looks to be a lot of goodies in the latest release https://github.com/raspberrypi/firmware ... 6009415165
I'm hoping to have python access to the new text annotate function. I would love to have burned-in time/date/frame number, without needing to decode and re-encode video... is it possible?
I've been experimenting with the annotation stuff: the text bit works nicely (maximum 31 characters), but as far as I can tell it's for the preview only; annotations (text or otherwise) don't appear in capture or recording output. I'm also having a play with the additional preview layers to provide a bit more flexibility (i.e. after reading http://www.raspberrypi.org/forums/viewt ... 37#p556637 I figured it might be possible to permit creating an overlay of an arbitrary image) - again, this is preview-only stuff though.


Dave.

User avatar
waveform80
Posts: 303
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Thu Aug 07, 2014 5:52 pm

waveform80 wrote:
jbeale wrote:Looks to be a lot of goodies in the latest release https://github.com/raspberrypi/firmware ... 6009415165
I'm hoping to have python access to the new text annotate function. I would love to have burned-in time/date/frame number, without needing to decode and re-encode video... is it possible?
I've been experimenting with the annotation stuff: the text bit works nicely (maximum 31 characters), but as far as I can tell it's for the preview only; annotations (text or otherwise) don't appear in capture or recording output. I'm also having a play with the additional preview layers to provide a bit more flexibility (i.e. after reading http://www.raspberrypi.org/forums/viewt ... 37#p556637 I figured it might be possible to permit creating an overlay of an arbitrary image) - again, this is preview-only stuff though.


Dave.
Huh ... correction, the text is definitely there on stills and video too. How on earth did I miss that last night?! Unless I was looking at the preview overlay stuff instead ... weird. Oh well, I'm happy to report annotated text is there in all output!

Dave.

User avatar
jbeale
Posts: 3263
Joined: Tue Nov 22, 2011 11:51 pm
Contact: Website

Re: Pure Python camera interface

Thu Aug 07, 2014 6:55 pm

waveform80 wrote: I'm happy to report annotated text is there in all output!
Outstanding. I will look forward to trying it out with the next picamera release!

6by9
Raspberry Pi Engineer & Forum Moderator
Raspberry Pi Engineer & Forum Moderator
Posts: 4353
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: Pure Python camera interface

Thu Aug 07, 2014 7:25 pm

waveform80 wrote:Huh ... correction, the text is definitely there on stills and video too. How on earth did I miss that last night?! Unless I was looking at the preview overlay stuff instead ... weird. Oh well, I'm happy to report annotated text is there in all output!

Dave.
One slight quirk is that I think the text may be missing from the image sent to the preview of the captured image (if that description makes sense). It does also mean it will be missing from the JPEG thumbnail.
If I get the time to go back and correct it I will, but the thumbnail and the captured still won't be the same (fixed-size font, and this is done as a post-processing step) - other things are ahead of it in the queue though.
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
Please don't send PMs asking for support - use the forum.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

User avatar
jbeale
Posts: 3263
Joined: Tue Nov 22, 2011 11:51 pm
Contact: Website

Re: Pure Python camera interface

Thu Aug 07, 2014 8:20 pm

6by9 wrote:...it will be missing from the JPEG thumbnail. If I get the time to go back and correct it I will, but the thumbnail and capture still won't be the same (fixed sized font, and this is done as a post-processing step)
If the fixed-size font is legible on the full image, a 31-character string may not even fit on the thumbnail. If that's so, maybe it's better to leave it as-is. Anyway, it's definitely a lower priority, as you say.

User avatar
waveform80
Posts: 303
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Fri Aug 08, 2014 6:28 pm

So, on the back of some sterling firmware work from 6by9, here's picamera 1.7! Highlights this time include:
  • An incredibly long-standing request, namely text overlay on output (previews, images, and video), is now included. Please bear in mind you will need a recent firmware for this (702, I think), so make sure you've run sudo rpi-update before submitting any bugs about it!
  • Support for multiple cameras on the compute module is included; this is purely based on the camera-selection code added to raspistill for the same purpose - I don't have a compute module to test this with, so I'd be interested to hear from anyone who wants to give it a go!
  • Exposure mode "off" has been added, along with some recipes demonstrating how to lock down the camera's settings for consistent shooting (e.g. long timelapse)
  • Unencoded video output (YUV, RGB, etc.) has been added. This may not sound terribly exciting, but check out my little pistreaming demo repo and you might change your mind! There are also some new classes in picamera.array for doing analysis on such video streams
  • Several issues got fixed, most notably some silly bugs in PiBayerArray, and an issue with multi-port/resolution recordings which had been hanging around for ages but which I hadn't noticed
As always, have fun and let me know of any bugs, suggestions, questions, etc. in the GitHub issue tracker!
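By way of illustration, the "locking down the camera for consistent shooting" idea from the new recipes goes roughly like this (a sketch using the 1.7 attribute names; capture_fixed is just a name for this example):

Code: Select all

```python
import time

def capture_fixed(camera, filenames):
    # Lock exposure and white balance on an open picamera.PiCamera,
    # then capture a series of consistent frames (e.g. for a long
    # timelapse).
    time.sleep(2)                    # let AGC/AWB settle first
    camera.shutter_speed = camera.exposure_speed
    camera.exposure_mode = 'off'     # freeze the gains
    gains = camera.awb_gains
    camera.awb_mode = 'off'          # freeze white balance
    camera.awb_gains = gains
    for name in filenames:
        camera.capture(name)
```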

Dave.

User avatar
jbeale
Posts: 3263
Joined: Tue Nov 22, 2011 11:51 pm
Contact: Website

Re: Pure Python camera interface

Fri Aug 08, 2014 8:09 pm

Great work getting a new release out so quickly!

Trying to test the streaming demo, I ran into a snag:

Code: Select all

$ sudo apt-get install python-picamera python-ws4py libav-tools git
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package python-ws4py
UPDATE: it looks like I can get it by first pretending to install "mopidy"; after the commands below, I'm able to install python-ws4py

Code: Select all

wget -q -O - http://apt.mopidy.com/mopidy.gpg | sudo apt-key add -
sudo wget -q -O /etc/apt/sources.list.d/mopidy.list http://apt.mopidy.com/mopidy.list
sudo apt-get update
UPDATE2: Hrm, it seems to have installed and starts up, but I don't get quite the same output as shown on the demo page, and another PC with Firefox or Chrome cannot connect to the :8082 port.

Code: Select all

 $ python server.py
Initializing camera
Initializing websockets server on port 8084
Initializing HTTP server on port 8082
Initializing broadcast thread
Spawning background conversion process
Starting recording
Instead of "Starting HTTP server thread - Starting broadcast thread" I just get "Starting recording".

I also tried the example below. It runs without error, but the resulting "foo.jpg" does not show any text. I know I have the updated firmware, because the modified raspistill from ethanol100 does show captions, as I found here http://www.raspberrypi.org/forums/viewt ... 50#p594959.

Code: Select all

import picamera
import time

with picamera.PiCamera() as camera:
    camera.resolution = (640, 480)
    camera.framerate = 24
    camera.start_preview()
    camera.annotate_text = 'Hello world!'
    time.sleep(2)
    # Take a picture including the annotation
    camera.capture('foo.jpg')

User avatar
waveform80
Posts: 303
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Sat Aug 09, 2014 3:58 pm

jbeale wrote:Great work getting a new release out so quickly!

Trying to test the streaming demo, I ran into a snag:

Code: Select all

$ sudo apt-get install python-picamera python-ws4py libav-tools git
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package python-ws4py
UPDATE: it looks like I can get it by first pretending to install "mopidy"; after the commands below, I'm able to install python-ws4py

Code: Select all

wget -q -O - http://apt.mopidy.com/mopidy.gpg | sudo apt-key add -
sudo wget -q -O /etc/apt/sources.list.d/mopidy.list http://apt.mopidy.com/mopidy.list
sudo apt-get update
Doh, serves me right for not testing this stuff on a virgin Pi. It turns out python-ws4py isn't packaged on Raspbian (it is on Ubuntu; originally I was following another recipe and the HTTP and web-sockets bit sat on an Ubuntu PC; then I combined it all to run on the Pi, but only tested within a virtualenv because I didn't have picamera 1.7 packaged at that point). Well ... that puts a crimp on the "easily installed" bit ... I'll revise the instructions to use pip for installation of some components for now, but I'll have to see what I can do about getting ws4py packaged for Raspbian (or Debian, if it's not? Odd that Ubuntu's got it, given the shared heritage - oh well).
jbeale wrote: UPDATE2: Hrm, it seems to have installed and starts up, but I don't get quite the same output as shown on the demo page, and another PC with Firefox or Chrome cannot connect to the :8082 port.

Code: Select all

 $ python server.py
Initializing camera
Initializing websockets server on port 8084
Initializing HTTP server on port 8082
Initializing broadcast thread
Spawning background conversion process
Starting recording
Instead of "Starting HTTP server thread - Starting broadcast thread" I just get "Starting recording".
I've revised the code a bit, and I need to update the README again, but what you should see with the current version is:

Code: Select all

Initializing camera
Initializing websockets server on port 8084
Initializing HTTP server on port 8082
Initializing broadcast thread
Spawning background conversion process
Starting recording
Starting websockets thread
Starting HTTP server thread
Starting broadcast thread
At that point, it's ready to stream stuff. When you connect from a remote web-browser (using a URL like http://pi-address:8082/) you should see additional lines like:

Code: Select all

192.168.80.157 - - [09/Aug/2014 15:49:01] "GET / HTTP/1.1" 301 -
192.168.80.157 - - [09/Aug/2014 15:49:01] "GET /index.html HTTP/1.1" 200 -
192.168.80.157 - - [09/Aug/2014 15:49:17] "GET /jsmpg.js HTTP/1.1" 200 -
192.168.80.157 - - [09/Aug/2014 15:49:17] code 404, message File not found
192.168.80.157 - - [09/Aug/2014 15:49:17] "GET /favicon.ico HTTP/1.1" 404 -
These are just log lines from the built-in HTTP server. Then, if you hit Ctrl+C to shut down the streaming server, you should see:

Code: Select all

^CStopping recording
Waiting for background conversion process to exit
Waiting for broadcast thread to finish
Shutting down HTTP server
Shutting down websockets server
Waiting for HTTP server thread to finish
Waiting for websockets thread to finish
... eventually. If you haven't shut down all the streaming web-browser clients, it can hang around for quite a while before finishing the shutdown. I should also add a warning about streaming over a bad wifi connection - I've just tried that and the latency is *awful* - it works nicely over ethernet though :)
jbeale wrote:
I also tried the example below. It runs without error, but the resulting "foo.jpg" does not show any text. I know I have the updated firmware, because the modified raspistill from ethanol100 does show captions as I found here http://www.raspberrypi.org/forums/viewt ... 50#p594959.

Code: Select all

import picamera
import time

with picamera.PiCamera() as camera:
    camera.resolution = (640, 480)
    camera.framerate = 24
    camera.start_preview()
    camera.annotate_text = 'Hello world!'
    time.sleep(2)
    # Take a picture including the annotation
    camera.capture('foo.jpg')
That's weird ... it works fine for me! Just to make absolutely sure, can you check with "uname -a" that you're definitely on firmware 702 or later? You've obviously got picamera 1.7 installed happily, or the setting of annotate_text would've raised an error, but I've no idea what a lower-level firmware would do (as I haven't tested it) and that's the only other issue I can think of (well, other than the idea that you're looking at the wrong file ;)

Dave.
