bootsmann
Posts: 28
Joined: Thu Jan 23, 2014 7:07 am

Re: Pure Python camera interface

Thu Sep 11, 2014 1:21 pm

Hi Dave

Since upgrading the firmware to "Linux raspberrypi 3.12.28+ #709 PREEMPT Mon Sep 8 15:28:00 BST 2014 armv6l GNU/Linux" and picamera to 1.8, I see a delay/lag of about 10 seconds or more.

Is this a problem of the new firmware or of picamera 1.8?

Code: Select all

# Assumed context (not shown in the original post): this handler uses bottle,
# and BOUNDARY was defined elsewhere in the poster's script.
import io
import time

import picamera
from bottle import route, response

BOUNDARY = 'FRAME'  # any unique token; the original definition wasn't shown

@route('/test.mjpg')
def mjpeg():
    response.content_type = 'multipart/x-mixed-replace;boundary=%s' % BOUNDARY
    stream = io.BytesIO()
    yield BOUNDARY + '\r\n'
    with picamera.PiCamera() as camera:
        camera.led = False
        camera.exposure_mode = 'auto'
        camera.rotation = 180
        camera.resolution = (640, 480)
        time.sleep(1)

        while True:
            camera.capture(stream, 'jpeg')
            yield BOUNDARY + '\r\n'
            yield 'Content-Type: image/jpeg\r\nContent-Length: %s\r\n\r\n' % stream.tell()
            yield stream.getvalue()
            stream.seek(0)
            stream.truncate()
            time.sleep(.2)

User avatar
jbeale
Posts: 3511
Joined: Tue Nov 22, 2011 11:51 pm
Contact: Website

Re: Pure Python camera interface

Fri Sep 19, 2014 6:03 pm

I have two R-Pis completely upgraded (sudo apt-get update / dist-upgrade, sudo rpi-update) as of today, with Picamera version 1.8 installed, and they give me two different results with the below code. Both of them run without error, and both save the video file, but one shows timestamps and the other does not. The working one is a Model A. The not-working one is an old, original Model B, so I wonder if rpi-update somehow fails to actually update it? Although when I re-run it, it says "Your firmware is already up to date", and 'sudo apt-get install python-picamera' says "python-picamera is already the newest version".

There are other oddities showing that the older Pi is not the same: the latest version of "RPi Cam Web Interface" ( http://www.raspberrypi.org/forums/viewt ... 43&t=63276 ) works on it in default mode, but the picture becomes badly overexposed and then raspimjpeg locks up if I try changing to "16x9 wide mode". The newer Pi handles all modes in the RPi Cam Web Interface without trouble.

Code: Select all

#!/usr/bin/python
# video timestamp demo from http://picamera.readthedocs.org/en/release-1.8/recipes1.html

import picamera
import datetime as dt
import time

with picamera.PiCamera() as camera:
    camera.resolution = (1280, 720)
    camera.framerate = 24
    camera.start_preview()
    time.sleep(2) # wait for autoexposure to work
    camera.annotate_bg = True
    camera.annotate_text = dt.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
    camera.start_recording('timestamped.h264')
    start = dt.datetime.now()
    while (dt.datetime.now() - start).seconds < 10:
        camera.annotate_text = dt.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
        camera.wait_recording(0.2)
    camera.stop_recording()
EDIT: Note: the black-background code shown in the example does not work:

Code: Select all

camera.annotate_bg = True
The black background rectangle does work if I do it this way:

Code: Select all

camera.annotate_background = True

User avatar
DougieLawson
Posts: 36515
Joined: Sun Jun 16, 2013 11:19 pm
Location: Basingstoke, UK
Contact: Website Twitter

Re: Pure Python camera interface

Fri Sep 19, 2014 8:21 pm

You can force a backup and update by removing or renaming the hidden /boot/.firmware_version file before running rpi-update
Note: Having anything humorous in your signature is completely banned on this forum. Wear a tin-foil hat and you'll get a ban.

Any DMs sent on Twitter will be answered next month.

This is a doctor free zone.

User avatar
waveform80
Posts: 306
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Fri Sep 19, 2014 10:21 pm

jbeale wrote:I have two R-Pis completely upgraded (sudo apt-get update / dist-upgrade, sudo rpi-update) as of today, with Picamera version 1.8 installed, and they give me two different results with the below code. Both of them run without error, and both save the video file, but one shows timestamps and the other does not. The working one is a Model A. The not-working one is an old, original Model B, so I wonder if rpi-update somehow fails to actually update it? Although when I re-run it, it says "Your firmware is already up to date", and 'sudo apt-get install python-picamera' says "python-picamera is already the newest version".
I'd suggest the first thing to confirm is that the versions really are the same. "uname -a" should indicate the running firmware version, and there's a FAQ entry in the picamera docs that'll let you see exactly which version Python is using (in case there are a couple installed, e.g. via pip and apt).

If both Pis have exactly the same firmware and picamera installation then I'm quite stumped!

Dave.

User avatar
waveform80
Posts: 306
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Fri Sep 19, 2014 10:22 pm

jbeale wrote: EDIT: Note: the black-background code shown in the example does not work:

Code: Select all

camera.annotate_bg = True
The black background rectangle does work if I do it this way:

Code: Select all

camera.annotate_background = True
Doh! I really must get around to making something for automatically testing the doc recipes...

User avatar
jbeale
Posts: 3511
Joined: Tue Nov 22, 2011 11:51 pm
Contact: Website

Re: Pure Python camera interface

Fri Sep 19, 2014 11:05 pm

I forgot how to check the picamera version; thanks for the reminder. ...Ok, well that answered that question :-)

Code: Select all

pi@raspberrypi ~ $ python
Python 2.7.3 (default, Mar 18 2014, 05:13:23)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from pkg_resources import require
>>> require('picamera')
[picamera 0.6 (/usr/local/lib/python2.7/dist-packages/picamera-0.6-py2.7.egg)]
>>> require('picamera')[0].version
'0.6'

User avatar
jbeale
Posts: 3511
Joined: Tue Nov 22, 2011 11:51 pm
Contact: Website

Re: Pure Python camera interface

Sat Sep 20, 2014 1:02 am

The below Python picamera code basically works as I intended. It makes a continuous recording, split into files each one minute long. The filename is the time+date, and the video is also timestamped using the new annotate feature. One problem: each video actually records for the full minute plus some amount with the timestamp stuck at (xx:59), and the next video starts moving again with :06, :07 etc. It varies each time, up to as much as 14 seconds of delay while the timestamp is frozen. I checked by recording a clock with a sweep second hand: there are no actual video frames missing when the separate .h264 files are put together into one continuous movie; it's just the timestamp lagging. Is there a better way to do this, so the timestamps can be continuous in the assembled movie?

This type of recording is very handy when you need to record video 24/7 and later want immediate access to what happened at 2:37 pm, for example.

Code: Select all

#!/usr/bin/python
from __future__ import print_function
from datetime import datetime, time
from time import sleep
import picamera

def date_gen():
  while True:
    dstr = "/var/www/pics/v" + datetime.now().strftime("%y%m%d_%H%M%S") + ".h264"
    yield dstr

with picamera.PiCamera() as camera:
    camera.resolution = (1296, 972)  # sensor res = 2592 x 1944
    camera.framerate = 4
    camera.exposure_mode = 'night'
    camera.annotate_background = True
    camera.annotate_text = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
    for filename in camera.record_sequence( date_gen(), format='h264',  bitrate=1500000 ):
        utcnow = datetime.utcnow()
        midnight_utc = datetime.combine(utcnow.date(), time(0))
        delta = datetime.utcnow() - midnight_utc
        ts = delta.total_seconds()  # seconds since midnight (Python 2.7)
        waitTime = 60.0 - (ts % 60)  # duration of time before start of the next minute
        SecCountEnd = ts + waitTime   # second count at the top of the next minute
        # print("Recording for %d to %s" % (waitTime,filename))
        start = datetime.now()
        while (ts < SecCountEnd):
            camera.annotate_text = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
            camera.wait_recording(0.2)
            utcnow = datetime.utcnow()
            midnight_utc = datetime.combine(utcnow.date(), time(0))
            delta = datetime.utcnow() - midnight_utc
            ts = delta.total_seconds()  # seconds since midnight (Python 2.7)
EDIT: Is the problem that the record_sequence() function has to wait for an inline SPS header to come around before it can split the file and start a new one? If so, maybe there is no easy workaround. UPDATE: I guess that was it; when I change the recording framerate to 24 fps, I get only about 1 second of lag around :59 - :00 in the timestamp update. For some applications, handling the extra data overhead of 24 fps is annoying when 4 fps would be adequate, but I guess that's the tradeoff.
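The wait-until-the-next-minute arithmetic in the loop above can be pulled into a tiny pure function for clarity (my sketch, not from the original post; the helper name is made up):

```python
def seconds_to_next_minute(seconds_since_midnight):
    """How long to wait until the top of the next minute (0 < result <= 60)."""
    return 60.0 - (seconds_since_midnight % 60)

print(seconds_to_next_minute(37.5))   # 22.5
print(seconds_to_next_minute(119.0))  # 1.0
```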
Last edited by jbeale on Sat Sep 20, 2014 2:54 pm, edited 2 times in total.

User avatar
waveform80
Posts: 306
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Sat Sep 20, 2014 1:14 pm

BerryPicker wrote:On trying the picamera advanced recipe '5.7. Recording motion vector data' found at
http://picamera.readthedocs.org/en/rele ... ipes2.html
a file is made but the motion data extracted from that file cannot be fitted into a three-dimensional numpy array.
The line
motion_data = motion_data.reshape((frames, rows, cols))
throws
ValueError: total size of new array must be unchanged

I think this may be due to the recorder not writing data only in complete-frame quanta.

This problem exists for both versions of python. Help in resolving it would be welcome.
Sorry it's taken me ages to get around to replying to this - my inbox has been filling up again, and I'm only just getting around to clearing the backlog. There's definitely a bug in that recipe - I stupidly have the frames calculation include the size of the motion data-type, but there's no need to do that as the array's already set to that data-type (i.e. it's not an array of bytes). So, the second part of that recipe *should* read:

Code: Select all

from __future__ import division

import numpy as np

width = 640
height = 480
cols = (width + 15) // 16
cols += 1 # there's always an extra column
rows = (height + 15) // 16

motion_data = np.fromfile(
    'motion.data', dtype=[
        ('x', 'i1'),
        ('y', 'i1'),
        ('sad', 'u2'),
        ])
frames = motion_data.shape[0] // (cols * rows)
motion_data = motion_data.reshape((frames, rows, cols))

# Access the data for the first frame
motion_data[0]

# Access just the x-vectors from the fifth frame
motion_data[4]['x']

# Access SAD values for the tenth frame
motion_data[9]['sad']
I'll get that corrected for the next release!
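As a quick sanity check of the geometry in that corrected recipe, here's a standalone sketch that uses a synthetic zeroed array in place of a real motion.data file:

```python
import numpy as np

width, height = 640, 480
cols = (width + 15) // 16 + 1   # 41 columns (one extra, as noted above)
rows = (height + 15) // 16      # 30 rows

# Each macro-block record is 4 bytes: x (i1) + y (i1) + sad (u2)
motion_dtype = np.dtype([('x', 'i1'), ('y', 'i1'), ('sad', 'u2')])

# Two frames' worth of zeroed records, standing in for np.fromfile(...)
raw = np.zeros(2 * cols * rows, dtype=motion_dtype)
frames = raw.shape[0] // (cols * rows)
motion_data = raw.reshape((frames, rows, cols))

print(motion_dtype.itemsize)  # 4
print(motion_data.shape)      # (2, 30, 41)
```

Because the array already carries the structured dtype, the frame count divides by the number of records per frame, not the number of bytes, which is exactly the bug being fixed.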

Thanks,

Dave.

User avatar
waveform80
Posts: 306
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Sat Sep 20, 2014 1:22 pm

skypi wrote:yeah, worked OK after an rpi-update, which also fixed it so it now does the 6s exposure... waiting for a nice moonlit night now then! :)

That's cool then; the docs site allows a PDF download, thanks!

Do you know anything about the Pi firmware/hardware face recognition and learning stuff? The question I have is: is it better than OpenCV, which is very good up to a point?
Again, apologies for the long time to reply! My understanding (which is probably incomplete), is that the face recognition stuff in the MMAL layer isn't really there. In other words, the various values are present in the header (MMAL_PARAMETER_FACE_TRACK, MMAL_PARAMETER_DRAW_BOX_FACES_AND_FOCUS, etc.) because some firmware builds (for mobile phones) include it, but the corresponding logic in the Pi's firmware is missing (because that functionality wasn't purchased/licensed by the foundation). There was some forum post to that effect ages ago, but I can't seem to find it right now so I can't confirm if it was from one of the firmware devs (and therefore definitely correct) rather than speculation.

I can't say I've attempted to play with it myself, but the picamera MMAL headers (picamera.mmal) are a more or less complete translation, so there's nothing stopping anyone from giving it a whirl and seeing what happens!

Dave.

User avatar
jbeale
Posts: 3511
Joined: Tue Nov 22, 2011 11:51 pm
Contact: Website

Re: simple motion-detector using picamera

Sat Sep 20, 2014 11:44 pm

camdeveloper1 wrote:Can you upload this code to GitHub? I want to test it. Is it exactly the same as you have posted here, or is a fix needed? Thanks a lot.
Sorry to reply 4 months later, and I see the code indents got mangled. Here is the code on pastebin: http://pastebin.com/1yTxtgwz

It works for me as posted there, but I don't think the code is practical for most uses just now. That code also does silly things like pixel computations in floating point instead of fixed point, and grabs JPEGs which have to be decoded, instead of straight YUV data (because I couldn't get YUV data working at that time).

I want to combine this motion-sensing algorithm with the picamera split-resolution feature, so I can do motion-detection at low res, and save stills and/or video at high res. I can do those things separately, but I don't know how to do them together. The nice split-resolution example http://picamera.readthedocs.org/en/rele ... esolutions is just saving .h264 files, but my motion detection code works with streams.

Is it possible to get a low-resolution stream and also save a high-resolution .h264 file at the same time? It looks to be possible; this code does work to record a 10-second H264 video in full HD while also grabbing a 256x144 YUV frame. I'm not sure how to grab the YUV frames continually, though.

Code: Select all

import time, picamera, picamera.array

with picamera.PiCamera() as camera:
  camera.resolution = (1920,1080)
  with picamera.array.PiYUVArray(camera, size=(256,144)) as stream:
    camera.capture(stream, format='yuv', resize=(256,144), splitter_port=1)
    camera.start_recording('highres.h264', splitter_port=2)
    print(stream.array.shape)
    camera.wait_recording(10, splitter_port=2)  # record H264 video for this long
    camera.stop_recording(splitter_port=2) # stop recording video
Oh, wait! It looks like I should be building on this example: http://picamera.readthedocs.org/en/rele ... lar-stream
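For the "continually" part, one approach (a sketch I haven't run on hardware; the function names and the stop condition are mine) is capture_continuous() on one splitter port with use_video_port=True, while the H.264 recording runs on another port. The helper also shows the YUV420 buffer size to expect once picamera pads the width to a multiple of 32 and the height to a multiple of 16:

```python
import io

def yuv420_frame_bytes(width, height):
    """Size of one padded YUV420 frame: width rounded up to a multiple of 32,
    height to a multiple of 16, times 1.5 bytes per pixel."""
    fw = (width + 31) // 32 * 32
    fh = (height + 15) // 16 * 16
    return fw * fh * 3 // 2

def monitor(num_frames=100):
    import picamera  # needs a Pi with a camera attached to actually run
    with picamera.PiCamera() as camera:
        camera.resolution = (1920, 1080)
        camera.start_recording('highres.h264', splitter_port=2)
        stream = io.BytesIO()
        # One low-res YUV frame per iteration, taken from the video port
        for i, _ in enumerate(camera.capture_continuous(
                stream, format='yuv', resize=(256, 144),
                use_video_port=True, splitter_port=1)):
            frame = stream.getvalue()  # feed this to the motion detector
            stream.seek(0)
            stream.truncate()
            if i + 1 >= num_frames:  # crude stop condition for the sketch
                break
        camera.stop_recording(splitter_port=2)

print(yuv420_frame_bytes(256, 144))  # 55296
```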

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 7513
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: Pure Python camera interface

Sun Sep 21, 2014 8:15 am

jbeale wrote:The below Python picamera code basically works as I intended. It makes a continuous recording, split into files each 1-minute long. The filename is the time+date and the video is also timestamped using the new annotate feature. One problem: each video actually records for the full minute plus some amount with the timestamp stuck at (xx:59) and the next video starts moving again with :06, :07 etc.
<snip>
EDIT: is the problem that the record_sequence() function has to wait for an inline SPS header to come around, before it can split the file and start a new one? If so, maybe there is no easy workaround for this. UPDATE: I guess that was it; when I change the recording framerate to 24 fps, then I get only about 1 second of lag around :59 - :00 in the timestamp update. For some applications, handling the extra data overhead of 24 fps is annoying when 4 fps would be adequate, but I guess that's the tradeoff.
I haven't checked the Python API, but you can change the intra-frame (I-frame) period (MMAL_PARAMETER_INTRAPERIOD), which sets how many frames there are between I-frames. Set that to line up with your framerate and segment duration and you should get an SPS header just at the point you want it.
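On the picamera side, start_recording() takes an intra_period option that maps to this MMAL parameter. A sketch of lining the I-frame interval up with a low framerate (the helper name is mine, and I haven't verified the resulting split latency on hardware):

```python
def pick_intra_period(framerate, max_split_delay=1.0):
    """I-frame interval, in frames, so a split point (SPS header) arrives
    at least every max_split_delay seconds."""
    return max(1, int(framerate * max_split_delay))

def record_one_segment(seconds=60):
    import picamera  # needs a Pi with a camera attached to actually run
    with picamera.PiCamera() as camera:
        camera.framerate = 4
        # An I-frame every second at 4 fps, so record_sequence() /
        # split_recording() should only ever wait about 1 s for a split point
        camera.start_recording('segment.h264', format='h264',
                               intra_period=pick_intra_period(4))
        camera.wait_recording(seconds)
        camera.stop_recording()

print(pick_intra_period(4))   # 4
print(pick_intra_period(24))  # 24
```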
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 7513
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: Pure Python camera interface

Sun Sep 21, 2014 8:43 am

waveform80 wrote:
skypi wrote:Do you know anything about the Pi firmware/hardware face recognition and learning stuff? The question I have is: is it better than OpenCV, which is very good up to a point?
Again, apologies for the long time to reply! My understanding (which is probably incomplete), is that the face recognition stuff in the MMAL layer isn't really there. In other words, the various values are present in the header (MMAL_PARAMETER_FACE_TRACK, MMAL_PARAMETER_DRAW_BOX_FACES_AND_FOCUS, etc.) because some firmware builds (for mobile phones) include it, but the corresponding logic in the Pi's firmware is missing (because that functionality wasn't purchased/licensed by the foundation). There was some forum post to that effect ages ago, but I can't seem to find it right now so I can't confirm if it was from one of the firmware devs (and therefore definitely correct) rather than speculation.
Correct. There is a 3rd party face tracking algorithm that can be built into the firmware, but requires a licence fee to be paid. Whilst possible, I very much doubt that there is any intention from the Foundation to add licencing of that algo in the way they did with MPEG2 and VC1 codecs - the overhead wouldn't be worth it.
Drawing boxes around faces or focus rectangles isn't very useful without a controllable lens and/or face tracking algorithm!

hoggerz
Posts: 8
Joined: Sun Dec 29, 2013 1:05 am

Re: Pure Python camera interface

Sun Sep 21, 2014 2:48 pm

I wondered if jit or anyone else managed to improve upon this? It works OK, but it can be a little unpredictable! Unfortunately my knowledge of Python isn't very good. The circular buffer functionality combined with motion detection seems ideal for security applications.

jit wrote:Great to see a new version, thanks Dave. I'll be upgrading very shortly.

I've spent some time playing around with the script you modified. I though I'd share it and see if anyone has suggestions on improving it and making the motion detection better. I'm very pleased with the way that the circular buffer is working, ideal for capturing the moments before motion takes place.

I've added the ability to merge together the before and after files and box them using mp4box.

With regards to the motion detection, I'm wondering whether its better to plug-in another library given that this problem has been solved, although I'm not entirely sure how I'd go about that (my python isn't very strong).

I've added some TODOs around bits that need work.

Disclaimer: I'm not very familiar with Python, so I'm sure there's a lot of tidy up that could be done.

Code: Select all

import io
import time
import picamera
import picamera.array
import numpy as np
import subprocess

# This uses motion vectors for cheap motion detection. To help reduce false
# positives it expects motion to be detected for a sequence of frames before
# triggering. Although this works for most cases, there are issues around
# detection of slow-moving objects.

# TODO this requires considerable clean-up
# TODO would be nice to have a date/time overlay on the video
# TODO sort out logging, using a debug boolean isn't great

debug=True
debugMagnitudeMatrix=False

# customisable variables
record_width=1296
record_height=730
framerate=15
pre_buffer_seconds=1 # 1 is actually around 3 seconds at the res/frame rate settings above
vector_magnitude=40 # the magnitude of vector change for motion detection
min_vectors_above_magnitude=15 # the number of motion vectors that must be above the vector_magnitude for this frame to count towards motion
min_sequence_threshold=3 # the minimum number of frames in sequence that must exceed the vector threshold for motion to have been considered as detected
file_root='/var/www/motion'
mp4box=True

sequence_counter=0
sequential_frame_count=0
start_motion_timestamp = time.time()
last_motion_timestamp = time.time()
motion_detected = False

class MyMotionDetector(picamera.array.PiMotionAnalysis):
	def analyse(self, a):
		global debug, debugMagnitudeMatrix, sequence_counter, sequential_frame_count, min_sequence_threshold, start_motion_timestamp, last_motion_timestamp, motion_detected, vector_magnitude, min_vectors_above_magnitude
		a = np.sqrt(
			np.square(a['x'].astype(np.float)) +
			np.square(a['y'].astype(np.float))
			).clip(0, 255).astype(np.uint8)

		if debugMagnitudeMatrix:
			# print counts of vectors above a range of thresholds, to help
			# determine good numbers to plug into detection
			print(', '.join(
				'%d=%d' % (t, (a > t).sum()) for t in range(10, 101, 10)))

		sum_of_vectors_above_threshold = (a > vector_magnitude).sum()
		# if (debug and (sum_of_vectors_above_threshold > 0)): print(str(sum_of_vectors_above_threshold) + ' vectors above magnitude of ' + str(vector_magnitude))
		detected = sum_of_vectors_above_threshold > min_vectors_above_magnitude

		if detected:
			sequential_frame_count = sequential_frame_count + 1
			if (debug and (sequential_frame_count > 0)): print('sequential_frame_count %d' % sequential_frame_count)
			if (motion_detected):
				if (debug): print('extending time')
				last_motion_timestamp = time.time()
		else:
			sequential_frame_count=0
			# if debug: print('sequential_frame_count %d' % sequential_frame_count)

		if ((sequential_frame_count >= min_sequence_threshold) and (not motion_detected)):
			if debug: print('>> Motion detected')
			sequence_counter = sequence_counter + 1
			start_motion_timestamp = time.time()
			last_motion_timestamp = start_motion_timestamp
			motion_detected = True

		if (motion_detected and not detected):
			if ((time.time() - last_motion_timestamp) > 3):
				motion_detected = False
				if debug: print('<< Motion stopped, beyond 3s')
			else:
				if debug: print('Motion stopped, but still within 3s')

def write_video(stream):
	# Write the entire content of the circular buffer to disk. No need to
	# lock the stream here as we're definitely not writing to it
	# simultaneously
	global sequence_counter, start_motion_timestamp
	before_filename = file_root + '/before-' + str(sequence_counter) + '.h264';
	with io.open(before_filename, 'wb') as output:
		for frame in stream.frames:
			if frame.header:
				stream.seek(frame.position)
				break
		while True:
			buf = stream.read1()
			if not buf:
				break
			output.write(buf)
	# Wipe the circular stream once we're done
	stream.seek(0)
	stream.truncate()
	return before_filename


with picamera.PiCamera() as camera:
	camera.resolution = (record_width, record_height)
	camera.framerate = framerate
	with picamera.PiCameraCircularIO(camera, seconds=pre_buffer_seconds) as stream:
		# this delay is needed, otherwise you seem to get some noise which triggers the motion detection
		time.sleep(1)
		if debug: print ('starting motion analysis')
		camera.start_recording(stream, format='h264', motion_output=MyMotionDetector(camera))
		try:
			while True:
				camera.wait_recording(1)
				if motion_detected:
					file_count = sequence_counter;
					if debug: print('Splitting recording ' + str(file_count))
					# As soon as we detect motion, split the recording to record the frames "after" motion
					after_filename = file_root + '/after-' + str(file_count) + '.h264';
					camera.split_recording(after_filename)
					# Write the seconds "before" motion to disk as well
					if debug: print("Writing 'before' stream")
					before_filename = write_video(stream)
					# Wait until motion is no longer detected, then split recording back to the in-memory circular buffer
					while motion_detected:
						camera.wait_recording(1)
					print('Motion stopped, returning to circular buffer\n')
					camera.split_recording(stream)

					# merge before and after files into a single file
					# TODO this should ideally be done asynchronously
					# TODO is there a better way of doing this, feels a bit hacky to call out to a subprocess
					output_prefix = file_root + '/' + time.strftime("%Y-%m-%d--%H:%M:%S", time.gmtime(start_motion_timestamp)) + '--' + str(sequence_counter)
					h264_file = output_prefix + '.h264'

					# for some reason mp4box doesn't work with semicolons in the filename, you always get a 'Requested URL is not valid or cannot be found', so work around by using a different filename
					if mp4box:
						h264_file = file_root + '/' + 'merge-' + str(file_count) + '.h264'

					cmd = 'mv ' + before_filename + ' ' + h264_file + ' && cat ' + after_filename + ' >> ' + h264_file + ' && rm ' + after_filename
					if debug: print('[CMD]: ' + cmd)
					subprocess.call([cmd], shell=True)
					if debug: print('finished file merge')

					if mp4box:
						# mp4box the file
						# TODO this should ideally be done asynchronously
						# TODO investigate if mp4box has a python api
						mp4_file = output_prefix + '.mp4'
						cmd = 'MP4Box -fps ' + str(framerate) + ' -add ' + h264_file + ' ' + mp4_file
						if debug: print('[CMD] ' + cmd)
						subprocess.call([cmd], shell=True)
						if debug: print('finished mp4box')
		finally:
			camera.stop_recording()

User avatar
jbeale
Posts: 3511
Joined: Tue Nov 22, 2011 11:51 pm
Contact: Website

Re: Pure Python camera interface

Sun Sep 21, 2014 4:20 pm

hoggerz wrote:It works ok but It can be a little unpredictable! Unfortunately my knowledge of python isn't very good. The circular buffer functionality combined with motion detection seems ideal for security applications.
I haven't tried this code exactly, but "unpredictable" is also my experience with motion-vector approaches.

My simple test code here http://pastebin.com/1yTxtgwz uses the same idea that the 'motion' program does, and I have found it to work just as I expect (in other words, "predictable"). You can tune the sensitivity as you wish. I am planning to combine that with the "splitting to/from a circular stream" demo http://picamera.readthedocs.org/en/rele ... lar-stream and it will hopefully do what I and some other people want: low-latency motion detection with full-HD video capture both before and after the motion detection event. I'll want still frames as well, but maybe that is best done by a completely separate process that extracts them from the video. Also, it should be straightforward to run MP4Box as a separate process to convert .h264 files to .mp4, avoiding one current issue with the "RPi Cam Web Interface": motion isn't detected after each event while MP4Box runs.

UPDATE: I'm worried that this may not work as I'd hoped, just due to the timing. Below is a simple code example. When the camera resolution is set to full HD (1920x1080), even with the (GPU-based?) resize to low res, the motion-detect stub function is called at intervals of 0.4 seconds, which might be too slow to capture some events (even without any actual motion-detection code running).

Code: Select all

import io, picamera, datetime

def detect_motion(camera):
    stream = io.BytesIO()
    camera.capture(stream, format='jpeg', resize=(64,36), use_video_port=True)
    print datetime.datetime.now().strftime("%y%m%d-%H_%M_%S.%f")
    return False

with picamera.PiCamera() as camera:
    camera.resolution = (1920,1080)
    stream = picamera.PiCameraCircularIO(camera, seconds=5)
    camera.start_recording(stream, resize=(320,240), format='h264')
    for i in range(0,10):
       detect_motion(camera)
I see that if I comment out the camera.capture() function in detect_motion(), then it runs at 2.5 msec intervals, so it is just grabbing the still frame that is taking all the time. Also, with the format='yuv' option, I notice the function is 5x faster (0.2 seconds vs 1 second) if I do not use the 'resize=(x,y)' parameter. I thought that resizing was done on the GPU?

Also, if I comment out the camera.start_recording() line, detect_motion() runs at a perfectly usable 10 fps (code below). So it is just the combination of recording H264 into a buffer AND grabbing still frames, that is slow.

Code: Select all

import io, picamera, datetime

def detect_motion(camera):
    stream = io.BytesIO()
    camera.capture(stream, format='yuv', resize=(64,36), use_video_port=True)
    print datetime.datetime.now().strftime("%y%m%d-%H_%M_%S.%f")
    return False

with picamera.PiCamera() as camera:
    camera.resolution = (1920,1080)
    stream = picamera.PiCameraCircularIO(camera, seconds=5)
    # camera.start_recording(stream, format='h264')
    for i in range(0,10):
       detect_motion(camera)
I would appreciate any hints on how to make this work faster! Meanwhile, if I set camera.resolution = (1280, 720) I can get 7 fps with both the still capture and h264 buffer running (with CPU at 50%), so maybe that's as good as I can do for now. At the full-HD resolution it was only using 25% CPU, suggesting there is some memory access bottleneck (?)

User avatar
jbeale
Posts: 3511
Joined: Tue Nov 22, 2011 11:51 pm
Contact: Website

Re: Pure Python camera interface

Mon Sep 22, 2014 1:41 am

Just to show things are looking up, here is a brief test code showing a very simple motion detection idea, working only with the peak change in brightness (the "Y" of YUV). I have the 1280x720 video circular buffer running, but am not actually writing it out to disk in this case (adding that is easy: http://picamera.readthedocs.org/en/rele ... lar-stream ). The motion detection runs around 11 or 12 frames per second, using about 50% of CPU on a non-overclocked Pi. It's looking like this could really work.

Code: Select all

import io, picamera, datetime, time
import numpy as np

xsize = 64 # YUV matrix output horizontal size will be multiple of 32
ysize = 32 # YUV matrix output vertical size will be multiple of 16
mThresh = 10 # pixel brightness change threshold that means motion (probably needs adjustment)

avgmap = np.arange(xsize*ysize,dtype=np.int32).reshape(ysize, xsize) # average background image
newmap = np.arange(xsize*ysize,dtype=np.int32).reshape(ysize, xsize) # new image

def detect_motion(camera):
    global lastTime
    global xsize, ysize
    global avgmap, newmap

    # yuv format is YUV420(planar). Output is horizontal mult of 32, vertical mult of 16
    # http://picamera.readthedocs.org/en/release-1.8/recipes2.html#unencoded-image-capture-yuv-format
    stream=open('/run/shm/picamtemp.dat','w+b')
    camera.capture(stream, format='yuv', resize=(xsize,ysize), use_video_port=True)
    stream.seek(0)

    newmap = np.fromfile(stream, dtype=np.uint8, count=xsize*ysize).reshape((ysize, xsize))
    difmap = newmap - avgmap
    avgmap = avgmap/2 + (newmap/2)
    max = np.amax(difmap)
    min = np.amin(difmap)

    newTime = time.time()
    elapsedTime = newTime - lastTime
    lastTime = newTime
    fps = int(1/elapsedTime)
    gotMotion = ((max - min) > mThresh)  # if the peak deviation exceeds threshold, call it motion
    print gotMotion, min, max, fps
    return gotMotion

with picamera.PiCamera() as camera:
    camera.resolution = (1280,720)
    camera.framerate = 25
    stream = picamera.PiCameraCircularIO(camera, seconds=5)
    camera.start_recording(stream, format='h264')
    lastTime = time.time()
    for i in range(0,300):      # run for a while and then quit
       detect_motion(camera)
UPDATE: Further progress: pre- and post-record works per the example in the picamera docs. It's pretty neat and the code is starting to be useful. I might have to learn how to make a GitHub account.

Observation 1: I wonder whether I can control the duration of the "before-event" video recorded. When I ask for a 2-second buffer, I actually get about 10 seconds recorded before the event.
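A rough sanity check on that: PiCameraCircularIO appears to size its ring buffer in bytes from the requested seconds times an assumed bitrate (17 Mbps by default, per the picamera docs), so if the actual stream bitrate is much lower, the buffer holds correspondingly more seconds. The 3.5 Mbps "actual" figure below is a made-up illustration, not a measurement:

```python
requested_seconds = 2
assumed_bitrate = 17000000          # bits/s: picamera's default bitrate estimate
buffer_bytes = requested_seconds * assumed_bitrate // 8

actual_bitrate = 3500000            # hypothetical: a quiet scene encodes much smaller
seconds_held = buffer_bytes * 8.0 / actual_bitrate
print(round(seconds_held, 1))       # 9.7 -- close to the ~10 s observed
```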

Observation 2:
When I hit Ctrl-C to stop this Python program while the circular buffer is running, it: 1) sometimes stops with a normal Python traceback, 2) sometimes freezes, and the process must be killed from another login, 3) sometimes stops with a "segmentation fault", and 4) once it locked up all logins and required a power cycle. Using 'kill <pid>' or 'killall <name>' works consistently, though.
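One thing that might help is making sure the camera always gets closed on the way out, rather than letting the interpreter tear it down mid-callback. A sketch of the pattern (the function and parameter names here are placeholders of mine, not picamera API; with picamera you would pass picamera.PiCamera as the factory):

```python
# Sketch: guarantee cleanup on Ctrl-C. 'camera_factory' and 'loop_body' are
# placeholder callables standing in for picamera.PiCamera and the capture work.
def run_until_interrupted(camera_factory, loop_body, iterations):
    camera = camera_factory()
    try:
        for _ in range(iterations):
            loop_body(camera)
    except KeyboardInterrupt:
        pass                        # swallow Ctrl-C so cleanup still runs
    finally:
        camera.close()              # always release the camera
```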

dgm5555
Posts: 1
Joined: Mon Sep 22, 2014 8:41 pm

Re: Pure Python camera interface

Mon Sep 22, 2014 9:26 pm

jbeale wrote:a very simple motion detection idea
jbeale: I know very little Python or numpy. However, it seems to me it would be easy to use your code as a very fast/lightweight way to track markers: checking a very small area for occlusion, tracking relatively small, predictably moving points (e.g. markers on a person), or giving a robot feedback that its movements occurred as expected.
I'm thinking something like coding a separate thread which monitored the camera, and fed back location data to the main process.
eg this sort of logic for a bot:-
Markers would be fixed-size, individually coloured squares positioned at defined locations. Locations would (probably) be stored in a dictionary, but could possibly be mapped during an initial slow 'explore mode'.
*Startup
Load dictionary of marker locations (this would have been created initially when defining the travel zone)
Search entire image for markers to figure out initial location/orientation (slow but only doing once, so OK)
Once one is found, use it to predict where other markers might be, to speed the subsequent search.
*Running Loop
Get current estimate of bot orientation
Guess where markers should be in the image
Start the search in the image for each expected marker in the expected area, and widen search only a limited number of pixels before trying another marker (thus only reviewing a fraction of the total pixels)
Use look-up table to calculate angle of marker from centre of image
Calculate current location of bot by triangulating from identified markers - using angular separation
[optionally: if only one marker is visible, use a lookup table to find distance and angle from the centre of the marker
(based on the size and distortion of the square to a quadrilateral due to changes in viewpoint)]
Pass the calculated location to a global for other threads to access
and repeat...

I can probably figure out most of it, but I'm not sure how to check the numpy array for particular colours in a defined area of pixels. Can you help? Thanks

User avatar
jbeale
Posts: 3511
Joined: Tue Nov 22, 2011 11:51 pm
Contact: Website

Re: Pure Python camera interface

Tue Sep 23, 2014 12:26 am

dgm5555 wrote:I can probably figure out most of it, but I'm not sure how to check the numpy array for particular colours in a defined area of pixels. Can you help? Thanks
I've never tried this, but you will need to define the range your colour components ((R,G,B), (Y,U,V) or whatever) may take to be considered part of a marker. You may find it tricky: the human eye is very good at "automatic white balance", and I think you'll find cameras give a surprising range of different R,G,B values for the same colour, depending on lighting, exposure and sometimes even the angle to the camera. Then you need to decide how many pixels inside that range count as a valid marker. Then you probably need to check that the pixels are adjacent to each other, not "noise" spread randomly across the image. One way is a smoothing (low-pass) filter that removes isolated pixels (high-frequency noise) and leaves blobs.
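To make the per-channel range test concrete, here's a minimal numpy sketch restricted to a sub-region of the image; the image size, marker colour and bounds are all made-up values for illustration:

```python
import numpy as np

# Fake 32x64 RGB image with a 4x4 reddish "marker" planted in it.
img = np.zeros((32, 64, 3), dtype=np.uint8)
img[10:14, 20:24] = (200, 40, 40)

lo = np.array([150, 0, 0])      # lower bound per channel (R, G, B)
hi = np.array([255, 90, 90])    # upper bound per channel

region = img[8:16, 16:28]       # only search the expected area of the image
mask = np.all((region >= lo) & (region <= hi), axis=2)
count = int(mask.sum())         # pixels inside the colour range
print(count)                    # 16 for the 4x4 patch above
# A real detector would also require count above some threshold, and
# check adjacency (or smooth first) to reject scattered noise pixels.
```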

User avatar
jbeale
Posts: 3511
Joined: Tue Nov 22, 2011 11:51 pm
Contact: Website

Re: Pure Python camera interface

Tue Sep 23, 2014 12:38 am

@waveform80 (or anyone else): I've got my motion-detector with pre-record feature "sort of" working. It works fine at low res, say 640x480. However I get no pre-record buffer (0 length when "before.h264" file is saved) if I try a high resolution like 1920x1080 full-HD. I assume this is a problem with memory space (although this happens even when requesting a short buffer like 1 second long).

Now, 'top' says the ARM is seeing 123044 KiB memory and I assume that's where rpicamera's buffer is going. /boot/config.txt says gpu_mem=128. Is this the optimal setting for this application? Is there any leeway to reduce GPU memory to allow the ARM more buffer space?

Separately it would be great if picamera allowed the "sensor mode" to be forced. I'm pretty convinced that 1280x720 video which is GPU-downscaled from a full-HD sensor readout has significantly better detail than 1280x720 which comes from the on-sensor 2x2 binning mode.

User avatar
waveform80
Posts: 306
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Tue Sep 23, 2014 12:54 am

jbeale wrote:@waveform80 (or anyone else): I've got my motion-detector with pre-record feature "sort of" working. It works fine at low res, say 640x480. However I get no pre-record buffer (0 length when "before.h264" file is saved) if I try a high resolution like 1920x1080 full-HD. I assume this is a problem with memory space, even when requesting a short buffer like 1 second long.

Now, 'top' says the ARM is seeing 123044 KiB memory and I assume that's where rpicamera's buffer is going. /boot/config.txt says gpu_mem=128. Is this the optimal setting for this application? Is there any leeway to reduce GPU memory to allow the ARM more buffer space?

Separately it would be great if picamera allowed the "sensor mode" to be forced. I'm pretty convinced that 1280x720 video which is GPU-downscaled from a full-HD sensor readout has significantly better detail than 1280x720 which comes from the on-sensor 2x2 binning mode.
My apologies that I don't have time to help with this, as it all sounds fascinating! Unfortunately I'm swamped with work at the moment. Some quick responses off the top of my head:

GPU memory can be reduced below 128 MB, but expect bits of functionality to begin failing below that. AIUI, this split is required for full 1080p recording; as you step down the GPU memory split, the maximum recording resolution gets gradually smaller. I assume certain other things may fail at lower splits too, depending on their memory requirements. Below 16 MB, nothing works (from some bug reports against the RPi official docs).

Forcing the sensor mode is planned for 1.9, but I haven't looked into this in detail yet so I'm not sure what the API's going to look like and I can't give any estimates on when 1.9 will be ready at the moment.

Speaking of 1.9, I'm planning to make the splitter an optional component in that release, although this is going to involve quite a bit of work. This is due to learning some interesting "under the hood" details about the MJPEG codec and the splitter. TL;DR: the splitter hurts performance in certain cases (I suspect this may be the source of the 1080p woes that a few users have reported).


Dave.

User avatar
jbeale
Posts: 3511
Joined: Tue Nov 22, 2011 11:51 pm
Contact: Website

Re: Pure Python camera interface

Tue Sep 23, 2014 2:57 am

waveform80 wrote:My apologies I don't have time to help with this as it all sounds fascinating! Unfortunately I'm swamped with work at the moment.
No worries, you've made a great tool with picamera, and great documentation too; I'm still working my way through it. I have some new B+ RPis on order so I can move into the modern era with 512 MB RAM :-).

Edit: I tested this on a Model B+ with 512 MB RAM. The very first time I tried, the circular buffer worked and saved out at 1920x1080 resolution, yay! Then I changed something unrelated (the camera exposure bias), and the same problem as before returned: the saved "before.h264" files were 0 length, and even reducing to 1280x720 did not fix it. I even tried rebooting and that didn't help, so clearly it's something besides memory; I just need to track it down.

User avatar
jbeale
Posts: 3511
Joined: Tue Nov 22, 2011 11:51 pm
Contact: Website

Re: Pure Python camera interface

Thu Sep 25, 2014 10:13 pm

I have a guess as to what's happening in my code. The pre-roll buffer will not write anything out if the buffer is too short (a few seconds) and/or the framerate is too slow (like 4 fps). It is working with a 4-second buffer at 6 fps. My guess is that a small buffer can't fit one full GOP (at least it was called that in MPEG-2: a standalone group of frames starting with an I-frame), so the "split" routine can't find a place to split and gives up on writing out the pre-roll buffer.
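If that guess is right, the relationship is simple: with one I-frame every intra_period frames, the buffer must span at least intra_period / framerate seconds to be sure of holding a complete GOP. Per the picamera docs, start_recording() accepts an intra_period option, so forcing a shorter GOP (e.g. camera.start_recording(stream, format='h264', intra_period=12)) might be worth a try. The numbers below are illustrative, not measured encoder defaults:

```python
# Seconds of video spanned by one GOP: the circular buffer must be at least
# this long to be guaranteed to contain a complete GOP.
def min_buffer_seconds(intra_period, framerate):
    return intra_period / float(framerate)

print(min_buffer_seconds(30, 4))    # 7.5 -- too long for a short buffer at 4 fps
print(min_buffer_seconds(12, 6))    # 2.0 -- a 4 s buffer then has headroom
```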

UPDATE: it is not as simple as I thought. It was working for a while, but then the problem happened again after the sun went down and the room got darker. When I prop a well-lit picture in front of the camera, the problem goes away. Maybe the darker image has a lower bitrate, with noise reduction going up and detail getting washed out (?)

So the code is not yet in a consistently working state. But if you want to look at it, I'm currently on the 'testing' branch at
https://github.com/jbeale1/PiCam1/tree/testing

Update: A little more detail: http://www.raspberrypi.org/forums/viewt ... 43&t=87997

User avatar
jbeale
Posts: 3511
Joined: Tue Nov 22, 2011 11:51 pm
Contact: Website

Bug? white line across middle, with frame number annotation

Sun Sep 28, 2014 6:28 pm

When I set text annotation with black background, it works as expected. If, in addition, I set

Code: Select all

camera.annotate_frame_num = True
I get the frame number shown on a second line underneath my text annotation as expected, but I also get a 1-pixel wide white line across the full frame from one side to the other, slightly above the midpoint. Is this an intended behavior? Is there some graph function that also got turned on?

The problem appears in the .h264 video files saved by this code:

Code: Select all

#!/usr/bin/python

# Record a set of time-stamped video files from RPi camera
# At the same time, do motion detection and generate log of motion events
# Saving stills when motion is detected is possible, but interrupts the video.
# J.Beale  v0.21 28 Sept 2014

from __future__ import print_function
from datetime import datetime
import picamera, time
import numpy as np
# from PIL import Image  # for converting array back to a JPEG for export (debug only)

# --------------------------------------------------
videoDir = "/mnt/video1/" # directory to record video files 
picDir = "/mnt/video1/" # directory to record still images
logfile = "/home/pi/logs/RecSeq_log.csv" # where to save log of motion detections
recFPS = 8  # how many frames per second to record
cxsize = 1920 # camera video X size
cysize = 1080 # camera video Y size
segTime = 3600 # how many seconds long each video file should be 
showFrameNum = True # set 'True' if each frame number should be drawn on video

# xsize and ysize are used in the internal motion algorithm, not in the .h264 video output
xsize = 64 # YUV matrix output horizontal size will be multiple of 32
ysize = 32 # YUV matrix output vertical size will be multiple of 16
dFactor = 3.0  # how many sigma above st.dev for diff value to qualify as motion pixel
pcThresh = 30  # total number of changed elements which add up to "motion"
novMaxThresh = 200 # peak "novel" pixmap value required to qualify "motion event"

logHoldoff = 0.4 # don't log another motion event until this many seconds after previous event

avgmax = 3     # long-term average of maximum-pixel-change-value
stg = 10       # groupsize for rolling statistics

timeMin = 1.0/6  # minimum time between motion computation (seconds)
running = False  # whether we have done our initial average-settling time
initPass = 5     # how many initial passes to do
pixvalScaleFactor = 65535/255.0  # multiply single-byte values by this factor
frames = 0 # how many frames we've looked at for motion
fupdate = 1   # report debug data every this many frames
gotMotion = False # true when motion has been detected
debug = False # should we report debug data (pixmap dump to PNG files)
showStatus = True # if we should print status data every pass?
showStatus = False

# Image crop / zoom parameters (can change image aspect ratio)
zx = 0.0  # normalized horizontal image offset
zy = 0.0 # normalized vertical image offset (0 = top of frame)
zw = 1.0 # normalized horizontal scale factor (1.0 = full size)
zh = 0.5 # normalized vertical scale factor (1.0 = full size)
resX = 1920  # rescaled X resolution (video / still)
resY = 540  # rescaled Y resolution (video / still)

# --------------------------------------------------
sti = (1.0/stg) # inverse of statistics groupsize
sti1 = 1.0 - sti # 1 - inverse of statistics groupsize

# --------------------------------------------------
def date_gen():
  while True:
    dstr = videoDir + datetime.now().strftime("%y%m%d_%H%M%S") + ".h264"
    yield dstr

# initMaps(): initialize pixel maps with correct size and data type
def initMaps():
    global newmap, difmap, avgdif, tStart, lastTime, stsum, sqsum, stdev
    newmap = np.zeros((ysize,xsize),dtype=np.float32) # new image
    difmap = np.zeros((ysize,xsize),dtype=np.float32) # difference between new & avg
    stsum  = np.zeros((ysize,xsize),dtype=np.int32) # rolling average sum of pix values
    sqsum  = np.zeros((ysize,xsize),dtype=np.int32) # rolling average sum of squared pix values
    stdev  = np.zeros((ysize,xsize),dtype=np.int32) # rolling average standard deviation
    avgdif  = np.zeros((ysize,xsize),dtype=np.int32) # rolling average difference

    tStart = time.time()  # time that program starts
    lastTime = tStart  # last time event detected

# getFrame(): returns Y intensity pixelmap (xsize x ysize) as np.array type
def getFrame(camera):
    stream=open('/run/shm/picamtemp.dat','w+b')
    camera.capture(stream, format='yuv', resize=(xsize,ysize), use_video_port=True)
    stream.seek(0)
    return np.fromfile(stream, dtype=np.uint8, count=xsize*ysize).reshape((ysize, xsize))

# saveFrame(): save a JPEG file
def saveFrame(camera):
    fname = picDir + daytime + ".jpg"
    camera.capture(fname, format='jpeg', use_video_port=True)


# updateTS1(): update video timestamp with current time, and '*' if motion detected
# the optional second argument specifies a delay in seconds, meanwhile time keeps updating
def updateTS1(camera, delay = 0):
  tStart = time.time() # actual value is raw seconds since Jan.1 1970
  while True: 
    detect_motion(camera) # one pass through the motion-detect algorithm
    if gotMotion:
      camera.annotate_text = datetime.now().strftime('* %Y-%m-%d %H:%M:%S *')
    else:
      camera.annotate_text = datetime.now().strftime('  %Y-%m-%d %H:%M:%S  ')
    tElapsed = time.time() - tStart  # seconds since this function started
    if (tElapsed >= delay): # quit when elapsed time reaches the delay requested
      break

# =============================================== 

# detect_motion() is where the low-res version of the image is compared with an average of
# past images, and a recent standard-deviation pixel map, to detect 'novel' pixels.
# Enough novel pixels, with a large enough peak amplitude, generates a motion event.

def detect_motion(camera):
    global running # true if algorithm has passed through initial startup settling
    global xsize, ysize  # dimensions of pixmap for motion calculations
    global newmap, avgdif # pixmap data arrays
    global avgmax # (scalar) running average of maximum magnitude of pixel change
    global frames  # how many frames we've examined for motion
    global gotMotion # boolean True if motion has been detected
    global tStart  # time of last event
    global lastTime # time this function was last run
    global daytime # current time of day when motion event detected
    global stsum # (matrix) rolling average sum of pixvals
    global sqsum # (matrix) rolling average sum of squared pixvals
    global stdev # (matrix) rolling average standard deviation of pixels
    global initPass # how many initial passes we're doing

     
    newTime = time.time()
    elapsedTime = newTime - lastTime
    if (elapsedTime < timeMin):  # don't recompute motion data too rapidly (eg. on same frame)
      time.sleep(timeMin - elapsedTime)

    lastTime = newTime
    fps = int(1/elapsedTime)

    newmap = pixvalScaleFactor * getFrame(camera)  # current pixmap
    if not running:  # first time ever through this function?
      stsum = stg * newmap         # call the sum over 'stg' elements just stg * initial frame
      sqsum = stg * np.power(newmap, 2) # initialize sum of squares
      running = True                    # ok, now we're running
      return False

    # avgmap = [stsum] / stg
    difmap = newmap - np.divide(stsum, stg)  # difference pixmap (amount of per-pixel change)
    difmap = abs(difmap)                 # take absolute value (brightness may increase or decrease)
    magMax = np.amax(difmap)               # peak magnitude of change

# note: stdev ~8000, difmap ~500 for 32x16 matrix, pixvalScaleFactor = 100/255, stg = 15

    stsum = (stsum * sti1) + newmap           # rolling sum of most recent 'stg' images (approximately)
    sqsum = (sqsum * sti1) + np.power(newmap, 2) # rolling sum-of-squares of 'stg' images (approx)
    devsq = 0.1 + (stg * sqsum) - np.power(stsum, 2)  # variance, had better not be negative
    stdev = (1.0/stg) * np.power(devsq, 0.5)    # matrix holding rolling-average element-wise std.deviation
    novel = difmap - (dFactor * stdev)   # novel pixels have difference exceeding (dFactor * standard.deviation)

    novMax = np.amax(novel)  # largest value in 'novel' array: greatest unusual brightness change 
    novMin = np.amin(novel)  # smallest value; very close to zero unless recent big brightness change

    dAvg = np.average(difmap)  # average of all elements of array (pixmap)
    sAvg = np.average(stdev)

    if initPass > 0:             # are we still initializing array averages?
      initPass = initPass - 1
      return False		# if so, quit now before making any event-detections

    condition = novel > 0  # boolean array of pixels showing positive 'novelty' value
    changedPixels = np.extract(condition, novel)
    countPixels = changedPixels.size  # how many pixels are considered unusual / novel in this frame

    novel = novel - novMin  # force minimum to zero

    if (countPixels > pcThresh) and (novMax > novMaxThresh):  # found enough changed pixels to qualify as motion?
      gotMotion = True
    else:
      gotMotion = False

    if showStatus:  # print debug info
      print ("%d %f %d %f" % (gotMotion, magMax, countPixels, fps))

    if gotMotion:
      tNow = time.time()
      tInterval = tNow - tStart
      if (tInterval > logHoldoff):  # only log when at least logHoldoff time has elapsed
        tStart = tNow
        daytime = datetime.now().strftime("%y%m%d_%H%M%S")
        # saveFrame(camera)  # save a still image - unfortunately makes the video recording skip frames
        tstr = ("%s,  dM:%4.1f, nM:%4.1f, dT:%6.3f, px:%d\n" % (daytime,magMax,novMax,tInterval,countPixels))
        f.write(tstr)
        f.flush()

      if showStatus:
        print("********************* MOTION **********************************")
    else:
      running = True  # 'running' set True after initial filter settles and "Motion-Detect" drops

    frames = frames + 1
    if (((frames % fupdate) == 0) and debug):
        print ("cPx:%d nM:%5.1f d:%5.2f s:%5.2f fps=%3.0f" %\
               (countPixels, novMax, dAvg, sAvg, fps))
        
        # np.set_printoptions(precision=1)
        # print(difmap)
        # print(sqsum)  # show all elements of array
        # print(stdev)  # show all elements of array

#        fstr = '%04d' % (frames)  # convert integer to formatted string with leading zeros
#        img = Image.fromarray(stsum.astype(int))
#        avgMapName = "A" + fstr + ".png"
#        img.save(avgMapName)  # save as image for visual analysis

    # running = False  # DEBUG never admit to a motion detection
    if running:
      return gotMotion
    else:
      return False

# ============= Main Program ====================

with picamera.PiCamera() as camera:

    np.set_printoptions(precision=1)
    daytime = datetime.now().strftime("%y%m%d_%H:%M:%S")
    f = open(logfile, 'a')
    f.write ("# RecSeq log v0.21 Sept. 28 2014 J.Beale\n")
    outbuf = "# Start: " + daytime  + "\n"
    f.write (outbuf)
    f.flush()
    print ("PiMotion starting at %s" % daytime)

    initMaps() # set up pixelmap arrays
#    camera.resolution = camera.MAX_RESOLUTION  # sensor res = 2592 x 1944
    camera.resolution = (cxsize, cysize)
    camera.framerate = recFPS  # how many frames per second to record
    camera.annotate_frame_num = showFrameNum # set 'True' if we should show the frame number on the video
    camera.annotate_background = True # black rectangle behind white text for readability
    camera.zoom = (zx, zy, zw, zh) # set image offset and scale factor (default 0,0,1,1 )
    camera.exposure_mode = 'night'
    for filename in camera.record_sequence( date_gen(), format='h264', resize=(resX, resY)):
        waitTime = segTime-(time.time()%segTime)
        print("Recording for %d to %s" % (waitTime,filename))
        updateTS1(camera, waitTime)

# ================================================
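For anyone following along, the heart of detect_motion() above is the exponentially-weighted rolling mean/variance trick: keep decaying sums of pixel values and their squares, recover a per-pixel standard deviation, and flag pixels whose change exceeds dFactor sigma. Here it is isolated on a synthetic 8-pixel "image" (all constants here are illustrative, not the script's actual values):

```python
import numpy as np

stg = 10                        # statistics group size
sti = 1.0 / stg
dFactor = 3.0                   # sigma threshold for a "novel" pixel

rng = np.random.RandomState(0)
frame = 100 + rng.randn(8)      # steady scene around brightness 100
stsum = stg * frame             # initialise rolling sums from the first frame
sqsum = stg * frame ** 2

for _ in range(50):             # let the filter settle on a static scene
    frame = 100 + rng.randn(8)
    stsum = stsum * (1 - sti) + frame
    sqsum = sqsum * (1 - sti) + frame ** 2

frame[3] += 50                  # now one pixel changes brightness sharply
difmap = np.abs(frame - stsum / stg)
stdev = np.sqrt(np.maximum(stg * sqsum - stsum ** 2, 0)) / stg
novel = difmap - dFactor * stdev
print(int(np.argmax(novel)))    # 3 -- the changed pixel stands out
```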
Last edited by jbeale on Sun Sep 28, 2014 6:39 pm, edited 1 time in total.

User avatar
waveform80
Posts: 306
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Bug? white line across middle, with frame number annotat

Sun Sep 28, 2014 6:34 pm

jbeale wrote:When I set text annotation with black background, it works as expected. If, in addition, I set

Code: Select all

camera.annotate_frame_num = True
I get the frame number shown on a second line underneath my annotation, but I also get a thin white line across the center of the frame from one side to the other. Is this an intended behavior?
It seems to be an issue in the firmware when more than one line is displayed. You can cause the same line to appear by using a really long text annotation that spans multiple lines.

I think this is something left over from some of the other annotations which are present in the MMAL interface but (I'm guessing) unimplemented in the Pi's firmware (have a play with "show_analog_gain", "show_caf", "show_motion" and so forth in the MMAL_PARAMETER_CAMERA_ANNOTATE_T structure and all sorts of other lines and things appear in the annotation but they all appear to be non-functional on the Pi).

Dave.

User avatar
jbeale
Posts: 3511
Joined: Tue Nov 22, 2011 11:51 pm
Contact: Website

Re: Pure Python camera interface

Sun Sep 28, 2014 6:58 pm

Thanks Dave, looks like we'll need to wait for a firmware fix then.

By the way, can I read back the current frame number that is used in the annotation? Then I could add it into my own text and avoid both the second line and the glitch. I never actually learned Python; I'm just working by extending examples.... is there a code example showing how to use

Code: Select all

class picamera.PiVideoFrame(index, frame_type, frame_size, video_size, split_size, timestamp)
to extract the 'index' value ?

User avatar
waveform80
Posts: 306
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Sun Sep 28, 2014 8:46 pm

jbeale wrote:Thanks Dave, looks like we would need to wait for a firmware fix then.

By the way, can I read back the current frame number that is used in the annotation? Then I could add it into my own text and avoid both the second line and the glitch. I never actually learned Python; I'm just working by extending examples.... is there a code example showing how to use

Code: Select all

class picamera.PiVideoFrame(index, frame_type, frame_size, video_size, split_size, timestamp)
to extract the 'index' value ?

There's no direct way to extract the frame number used by the annotation property, but you can extract the index from the PiCamera.frame property (which is an instance of PiVideoFrame) like so:

Code: Select all

import picamera

with picamera.PiCamera() as camera:
    camera.resolution = (640, 480)
    camera.start_recording('foo.h264')
    for i in range(10):
        camera.wait_recording(1)
        print camera.frame.index
    camera.stop_recording()
However, the tricky bit is knowing when to query this to get a value for each frame (in the above case, the printed values will appear *approximately* every second, but given all the overhead of writing, Python, the OS, etc., it's very approximate). Using a custom output is probably the easiest way of dealing with this, as the write() method gets called at least once for every frame (note that large frames like I-frames might require several write() calls depending on the buffer size, so in the following example you might want to check whether camera.frame.index has actually changed from write to write):

Code: Select all

import io
import picamera

class MyCustomOutput(object):
    def __init__(self, camera, filename):
        self.camera = camera
        self._file = io.open(filename, 'wb')

    def write(self, buf):
        print self.camera.frame.index
        return self._file.write(buf)

    def flush(self):
        self._file.flush()

    def close(self):
        self._file.close()


with picamera.PiCamera() as camera:
    camera.resolution = (640, 480)
    output = MyCustomOutput(camera, 'foo.h264')
    camera.start_recording(output, format='h264')
    camera.wait_recording(10)
    camera.stop_recording()
    output.close()
My apologies if the above doesn't work "out of the box" - this is just off the top of my head. Still, it should demonstrate the idea: building a custom output class isn't terribly hard; just define a class with, at minimum, a write() method, and do whatever extra stuff you want in that method.
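To illustrate that caveat about multiple write() calls per frame, here's an equally untested sketch of a custom output that only reacts once per frame by remembering the last index it saw (the class and attribute names are mine, not part of picamera):

```python
import io

class FrameIndexOutput(object):
    """Custom output that records each frame index exactly once, even when
    a large frame arrives across several write() calls."""

    def __init__(self, camera, stream):
        self.camera = camera
        self._stream = stream
        self._last_index = None
        self.frame_indexes = []     # one entry per frame, not per write()

    def write(self, buf):
        index = self.camera.frame.index
        if index != self._last_index:   # first write() call for this frame
            self._last_index = index
            self.frame_indexes.append(index)
        return self._stream.write(buf)

    def flush(self):
        self._stream.flush()
```

You'd pass an instance to start_recording() just like MyCustomOutput above, e.g. camera.start_recording(FrameIndexOutput(camera, io.open('foo.h264', 'wb')), format='h264').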


Dave.
