bootsmann
Posts: 28
Joined: Thu Jan 23, 2014 7:07 am

Re: Pure Python camera interface

Tue Sep 30, 2014 11:35 am

I am using picamera 1.8 and #713 firmware. How can I turn off the following warning?

Code: Select all

/usr/local/lib/python2.7/dist-packages/picamera/camera.py:2488: PiCameraDeprecated: PiCamera.ISO is deprecated; use PiCamera.iso instead
  'PiCamera.ISO is deprecated; use PiCamera.iso instead'))

User avatar
waveform80
Posts: 315
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Tue Sep 30, 2014 11:39 am

bootsmann wrote:I am using picamera 1.8 and #713 firmware. How can I turn off the following warning?

Code: Select all

/usr/local/lib/python2.7/dist-packages/picamera/camera.py:2488: PiCameraDeprecated: PiCamera.ISO is deprecated; use PiCamera.iso instead
  'PiCamera.ISO is deprecated; use PiCamera.iso instead'))
Sorry about that, silly mistake on my part - it'll be corrected in 1.9. I'm slightly surprised it's showing up though - deprecation warnings are meant to be silenced in Python by default. Are you setting a warnings filter in your script anywhere? The following filter ought to silence it (but obviously if you've got another filter set later on it won't):

Code: Select all

import warnings
warnings.simplefilter('ignore', DeprecationWarning)
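
For what it's worth, you can check the filter's effect without a camera at all; the noisy() function below is just a stand-in for a call that raises the deprecation warning, not part of picamera:

Code: Select all

```python
import warnings

def noisy():
    # stand-in for a call that emits the deprecation warning,
    # like reading PiCamera.ISO in picamera 1.8
    warnings.warn('PiCamera.ISO is deprecated; use PiCamera.iso instead',
                  DeprecationWarning, stacklevel=2)

# count warnings that get through, with and without the 'ignore' filter
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always', DeprecationWarning)
    noisy()
shown = len(caught)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('ignore', DeprecationWarning)
    noisy()
silenced = len(caught)

print(shown, silenced)  # 1 0
```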

Dave.

User avatar
jbeale
Posts: 3522
Joined: Tue Nov 22, 2011 11:51 pm
Contact: Website

Re: Pure Python camera interface

Tue Sep 30, 2014 5:32 pm

waveform80 wrote:you can extract the index from the PiCamera.frame property [...]
However, the tricky bit is knowing when to query this to get a value for each frame (in the above case, the printed values will be *approximately* every second, but given all the overhead of writing, python, the OS, etc. it's very approximate). Using a custom output is probably the easiest way of dealing with this as the write() method will get called at least once for every frame (it's worth noting that large frames like I-frames might call write() several times in order to get written depending on the buffer size, so in the following example you might want to check whether camera.frame.index has actually changed from write to write): [...]
Thank you very much for the helpful examples! I have not tried the custom output class yet, but I will.

So far, I have found that my existing code (which writes a 1080p H264 file at 8 fps, grabs rescaled 64x32 pixel YUV stills for motion analysis, and updates the text annotation with time/date down to milliseconds) behaves in a way I did not expect. Setting camera.annotate_frame_num = True confirms it. When stepping frame-by-frame through the nominally 8 fps output (MP4Box to MP4, then QuickTime viewer on a Windows PC), I see that the .h264 file actually contains two copies of many frames. In each such pair, the background image and the MMAL-annotated frame number are the same, but the text containing realtime milliseconds changes, showing about 1/8 second between frames. So I am getting about the right framerate, but some frames are duplicates except for my text annotation. I guess that either grabbing the secondary YUV output or the constantly updating text annotation is sometimes causing new camera data to be dropped. The H264 encoder is doing its job as best it can, keeping the requested output framerate by reusing the previous frame. It seems like these tasks really ought to be synchronous with new camera frames; could a custom output class make that possible?

Separately, I just found something stupid I did: writing a text log file on each motion detection and immediately calling file.flush() was causing gaps in the video recording at that moment. Removing the flush() call after each logfile write seems to have fixed that issue.

There is another problem causing some missed video, but right now it looks like it is an issue with my NFS remote share and kernel threads hogging all CPU in the Pi, as described here: http://www.raspberrypi.org/forums/viewt ... 28&t=88213
...or not. I tried 'tc' to set up traffic shaping (limiting average eth0 bandwidth to 20 Mbit/s), but that wasn't the problem. It looks more like my program, trying to stream video, capture full-res JPEGs, and capture low-res YUV all at once, is asking too much of the camera system.

User avatar
jbeale
Posts: 3522
Joined: Tue Nov 22, 2011 11:51 pm
Contact: Website

Re: Pure Python camera interface

Wed Oct 01, 2014 7:56 pm

waveform80 wrote:Using a custom output is probably the easiest way of dealing with this as the write() method will get called at least once for every frame (it's worth noting that large frames like I-frames might call write() several times in order to get written depending on the buffer size, so in the following example you might want to check whether camera.frame.index has actually changed from write to write):

Code: Select all

import io
import picamera

class MyCustomOutput(object):
    def __init__(self, camera, filename):
        self.camera = camera
        self._file = io.open(filename, 'wb')

    def write(self, buf):
        print self.camera.frame.index
        return self._file.write(buf)

    def flush(self):
        self._file.flush()

    def close(self):
        self._file.close()


with picamera.PiCamera() as camera:
    camera.resolution = (640, 480)
    output = MyCustomOutput(camera, 'foo.h264')
    camera.start_recording(output, format='h264')
    camera.wait_recording(10)
    camera.stop_recording()
    output.close()
Dave: great stuff, your example worked perfectly as you wrote it. The output frame numbers run -1, 0, 1, 2, 3 ... 308, and five frame numbers come up twice, at uniform intervals of 61 frames: 60, 121, 182, 243, 304. Maybe 61 frames is the I-frame interval.
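
For anyone following along, the "check whether the index changed" logic Dave mentioned can be sketched like this. The _FakeCamera here is only a stand-in so the dedup logic can be exercised without hardware; with a real PiCamera you would pass the camera object itself:

Code: Select all

```python
import io
import os
import tempfile

class FrameIndexOutput(object):
    """Notes each frame index exactly once, even when a large frame
    (e.g. an I-frame) arrives across several write() calls."""
    def __init__(self, camera, filename):
        self.camera = camera
        self._file = io.open(filename, 'wb')
        self._last_index = None
        self.new_frames = []

    def write(self, buf):
        index = self.camera.frame.index
        if index != self._last_index:  # first buffer of this frame
            self._last_index = index
            self.new_frames.append(index)
        return self._file.write(buf)

    def flush(self):
        self._file.flush()

    def close(self):
        self._file.close()

# dry run with a stand-in camera: frame 1 arrives in two buffers
class _Frame(object):
    def __init__(self, index):
        self.index = index

class _FakeCamera(object):
    frame = _Frame(0)

cam = _FakeCamera()
path = os.path.join(tempfile.gettempdir(), 'frame_index_test.h264')
out = FrameIndexOutput(cam, path)
for idx in (0, 1, 1, 2):
    cam.frame = _Frame(idx)
    out.write(b'\x00' * 16)
out.close()
print(out.new_frames)  # [0, 1, 2]
```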

Note to self: this extracts 10 full-quality JPEG frames from 'inputfile.mp4' starting 15 seconds in.

Code: Select all

avconv -i inputfile.mp4 -ss 00:00:15 -vframes 10 -f image2 -q:v 2 F%03d.jpg
Note 2: omxplayer does not properly display the I-frames, at least for one specific 2 fps 1920x1080 MP4 file I generated with a version of the code above (+ MP4Box). Every 61st frame in the H264/MP4 file is an I-frame; the other 60 are P-frames. It looks like omxplayer shows the last P-frame twice, so the I-frame is never seen; at any rate there is one duplicate frame with a period of 61. By contrast, VLC on Ubuntu shows all the frames correctly, and avconv can extract all the frames correctly.

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 7580
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: Bug? white line across middle, with frame number annotat

Fri Oct 03, 2014 11:43 am

waveform80 wrote:
jbeale wrote:When I set text annotation with black background, it works as expected. If, in addition, I set

Code: Select all

camera.annotate_frame_num = True
I get the frame number shown on a second line underneath my annotation, but I also get a thin white line across the center of the frame from one side to the other. Is this an intended behavior?
It seems to be an issue in the firmware when more than one line is displayed. You can cause the same line to appear by using a really long text annotation that spans multiple lines.

I think this is something left over from some of the other annotations which are present in the MMAL interface but (I'm guessing) unimplemented in the Pi's firmware (have a play with "show_analog_gain", "show_caf", "show_motion" and so forth in the MMAL_PARAMETER_CAMERA_ANNOTATE_T structure and all sorts of other lines and things appear in the annotation but they all appear to be non-functional on the Pi).
I've just tried to reproduce this in raspistill and failed. What exactly are you filling the MMAL_PARAMETER_CAMERA_ANNOTATE_V2_T structure with when you're seeing this line? If any of the "show_XXX" parameters are set then it will draw that line, and if it is unchanging (e.g. because it isn't used on the Pi) then you will get a horizontal line.
My diff:

Code: Select all

diff --git a/host_applications/linux/apps/raspicam/RaspiStill.c b/host_applications/linux/apps/raspicam/RaspiStill.c
index 9b791fb..b030dd9 100644
--- a/host_applications/linux/apps/raspicam/RaspiStill.c
+++ b/host_applications/linux/apps/raspicam/RaspiStill.c
@@ -923,7 +923,21 @@ static MMAL_STATUS_T create_camera_component(RASPISTILL_STATE *state)

       mmal_port_parameter_set(camera->control, &cam_config.hdr);
    }
-
+
+   {
+       MMAL_PARAMETER_CAMERA_ANNOTATE_V2_T annotate =
+          { { MMAL_PARAMETER_ANNOTATE, sizeof(annotate) },
+               MMAL_TRUE,
+               MMAL_FALSE,
+               MMAL_FALSE,
+               MMAL_FALSE,
+               MMAL_FALSE,
+               MMAL_FALSE,
+               MMAL_TRUE,
+               MMAL_TRUE,
+               "Wibble this is a long text string if I keep going like this and dont stop typing" };
+       mmal_port_parameter_set(camera->control, &annotate.hdr);
+   }
    raspicamcontrol_set_all_parameters(camera, &state->camera_parameters);

    // Now set up the port formats
Comments in the code (I have access again! But very limited time to do anything :( ) say analogue gain should be blue, exposure time in red, lens position in green, and motion vectors in red and blue.
PS I don't tend to follow this thread, so if you suspect a genuine bug then please open a new thread.
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 7580
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: Pure Python camera interface

Fri Oct 03, 2014 11:52 am

Further comment in the code

Code: Select all

/* Draw grey bar to show bottom of the graph */
And then draws a bar with Y=0xC8, U=0x80, V=0x80, which is not far off white.
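
That claim is easy to check numerically (BT.601 full-range conversion assumed; with U and V at their 128 midpoint the chroma terms vanish, so R = G = B = Y):

Code: Select all

```python
# Y=0xC8, U=V=0x80 converted to RGB (BT.601, full range)
Y, U, V = 0xC8, 0x80, 0x80
R = Y + 1.402 * (V - 128)
G = Y - 0.344 * (U - 128) - 0.714 * (V - 128)
B = Y + 1.772 * (U - 128)
print(int(R), int(G), int(B))  # 200 200 200 - a light grey, not far off white
```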
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

User avatar
jbeale
Posts: 3522
Joined: Tue Nov 22, 2011 11:51 pm
Contact: Website

motion detection efforts

Sun Oct 05, 2014 7:09 am

Dave, just FYI: I built some crude motion-detection code around your custom output example - thanks again for that. It now does the motion detection as well as the on-video timestamp update as soon as the first buffer of each new frame passes through. It also segments the video files. It can't keep up with detection on every frame at 8 fps, so I update the time/date every frame but run the motion detect on every other frame. This basically works, but I notice there is sometimes one frame of timing slop between the timestamp and the real time. In other words, while my code tracks the true frame count as the buffers come in, the onscreen annotation sometimes labels two frames with the same number, then a little later skips a number and is back to "true" (maybe I should be using C instead of Python? But your library is so nice...). Anyway, here's the code: https://github.com/jbeale1/PiCam1/blob/master/stest2.py

Annoyance one: the code will just hang/freeze if you start it writing video to an external USB drive that has spun down from inactivity. So I guess I need to poke the drive first, then start. No big deal, as once it's running the constant video writing keeps the disk on.

Annoyance two: for now, this code logs motion just as time/date text. Saving out stills on motion is possible, but it reduces the framerate of the motion-detect algorithm. My plan is to extract frames after the fact from the saved 1080p MP4 file (which of course also lets you scan backward and forward in time from the trigger point), and I have done that manually with avconv. However, I had to compile 'avconv version v12_dev0' from github myself, because avconv 9.14 in current Raspbian (and even the version in my current Ubuntu) does not provide the '-accurate_seek' option (that patch is now more than a year old), which you need to extract frames accurately from the MP4.

On the plus side, it is apparently possible to do motion detection at 4 fps with 1080p output and effectively no latency (just not in real time, since it uses the post-process MP4 frame extraction). As far as I know, that's something new on the Pi; the alternatives use a lower framerate and/or lower resolution, and/or have a significant delay between motion detection and still capture.

spinomaly
Posts: 7
Joined: Tue Oct 14, 2014 10:04 pm

Re: Pure Python camera interface

Sun Oct 19, 2014 1:36 am

Sorry for the repost...but I am stuck...

I have a B+ with the latest NOOBS Raspbian off the website (release date 2014-09-09). I am having trouble adding and removing overlays. After a sequence of 60 add/remove overlays, an out-of-memory error occurs. Code and error below. Any help would be appreciated.

Code: Select all

import time
import picamera
import numpy as np

# Create an array representing a 1280x720 image of
# a cross through the center of the display. The shape of
# the array must be of the form (height, width, color)
a = np.zeros((720, 1280, 3), dtype=np.uint8)
a[360, :, :] = 0xff
a[:, 640, :] = 0xff

i = 0

with picamera.PiCamera() as camera:
    camera.resolution = (1280, 720)
    camera.framerate = 24
    camera.start_preview()
    # Add the overlay directly into layer 3 with transparency;
    # we can omit the size parameter of add_overlay as the
    # size is the same as the camera's resolution

    while True:

        i += 1
        print(i)
        o = camera.add_overlay(np.getbuffer(a), layer=3, alpha=64)
        time.sleep(.1)
        camera.remove_overlay(o)
mmal: mmal_vc_component_create: failed to create component 'vc.ril.video_render' (1:ENOMEM)
mmal: mmal_component_create_core: could not create component 'vc.ril.video_render' (1)
Traceback (most recent call last):
File "overlay_bug.py", line 26, in <module>
o = camera.add_overlay(np.getbuffer(a), layer=3, alpha=64)
File "/usr/lib/python2.7/dist-packages/picamera/camera.py", line 953, in add_overlay
renderer = PiOverlayRenderer(self, source, size, **options)
File "/usr/lib/python2.7/dist-packages/picamera/renderers.py", line 474, in __init__
rotation, vflip, hflip)
File "/usr/lib/python2.7/dist-packages/picamera/renderers.py", line 89, in __init__
prefix="Failed to create renderer component")
File "/usr/lib/python2.7/dist-packages/picamera/exc.py", line 133, in mmal_check
raise PiCameraMMALError(status, prefix)
picamera.exc.PiCameraMMALError: Failed to create renderer component: Out of memory

spinomaly
Posts: 7
Joined: Tue Oct 14, 2014 10:04 pm

Re: Pure Python camera interface

Sun Oct 19, 2014 4:14 am

I watched the system memory during the add/remove overlay loop. The memory appears to decrease by the size of the numpy buffer on each iteration.

User avatar
waveform80
Posts: 315
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Sun Oct 19, 2014 10:42 am

spinomaly wrote:Sorry for the repost...but I am stuck...

I have a B+ with the latest NOOBS Raspbian off the website (release date 2014-09-09). I am having trouble adding and removing overlays. After a sequence of 60 add/remove overlays, an out-of-memory error occurs. Code and error below. Any help would be appreciated.

Code: Select all

import time
import picamera
import numpy as np

# Create an array representing a 1280x720 image of
# a cross through the center of the display. The shape of
# the array must be of the form (height, width, color)
a = np.zeros((720, 1280, 3), dtype=np.uint8)
a[360, :, :] = 0xff
a[:, 640, :] = 0xff

i = 0

with picamera.PiCamera() as camera:
    camera.resolution = (1280, 720)
    camera.framerate = 24
    camera.start_preview()
    # Add the overlay directly into layer 3 with transparency;
    # we can omit the size parameter of add_overlay as the
    # size is the same as the camera's resolution

    while True:

        i += 1
        print(i)
        o = camera.add_overlay(np.getbuffer(a), layer=3, alpha=64)
        time.sleep(.1)
        camera.remove_overlay(o)
mmal: mmal_vc_component_create: failed to create component 'vc.ril.video_render' (1:ENOMEM)
mmal: mmal_component_create_core: could not create component 'vc.ril.video_render' (1)
Traceback (most recent call last):
File "overlay_bug.py", line 26, in <module>
o = camera.add_overlay(np.getbuffer(a), layer=3, alpha=64)
File "/usr/lib/python2.7/dist-packages/picamera/camera.py", line 953, in add_overlay
renderer = PiOverlayRenderer(self, source, size, **options)
File "/usr/lib/python2.7/dist-packages/picamera/renderers.py", line 474, in __init__
rotation, vflip, hflip)
File "/usr/lib/python2.7/dist-packages/picamera/renderers.py", line 89, in __init__
prefix="Failed to create renderer component")
File "/usr/lib/python2.7/dist-packages/picamera/exc.py", line 133, in mmal_check
raise PiCameraMMALError(status, prefix)
picamera.exc.PiCameraMMALError: Failed to create renderer component: Out of memory

Damn - definitely sounds like a memory leak. Any chance you could open a ticket on the github site? I probably won't have time to look at this for a couple of weeks as I'm swamped at the moment, but that should definitely be fixed before 1.9 gets released.

Dave.

spinomaly
Posts: 7
Joined: Tue Oct 14, 2014 10:04 pm

Re: Pure Python camera interface

Sun Oct 19, 2014 3:04 pm

Sure can. Which one under Raspi? Firmware?


spinomaly
Posts: 7
Joined: Tue Oct 14, 2014 10:04 pm

Re: Pure Python camera interface

Sun Oct 19, 2014 9:12 pm

Thank you. Just posted the issue to the area you recommended.

User avatar
jbeale
Posts: 3522
Joined: Tue Nov 22, 2011 11:51 pm
Contact: Website

Bitrate limit doesn't always work

Wed Oct 29, 2014 5:29 pm

I have noticed that setting the bitrate sometimes, but not always, limits the output to the value specified. I presume this is something in the firmware, not in picamera, but I can file it as a bug if you want.

I am using picamera to write out .h264 files in 30 second long segments (240 frames at 8 fps). The complete code is at https://github.com/jbeale1/PiCam1/blob/master/stest2.py but the relevant part is just this:

Code: Select all

BPS = 12000000  # bits per second from H.264 video encoder
for vidFile in camera.record_sequence( date_gen(camera), format='h264', bitrate=BPS):
  ... blah, blah
At 12 Mbps, a 30-second file should be 45 MB in size. And indeed, if I plot the sizes of all the files over 24 hours, I see one plateau at 45 MB, but the largest few files of the day are much bigger, reaching 70 MB. In that case the average bitrate is around 19 Mbps across 4 GOPs (240 frames = 4 GOPs * 60 frames/GOP), which is pretty far above the target. At the moments of sunrise and sunset the image becomes very noisy, and obviously that is hard to compress, but is there a bitrate limit, or is there not? See also: http://www.raspberrypi.org/forums/viewt ... 76#p633176

[Image: plot of the 30-second file sizes over 24 hours]

If it matters, below is a still frame taken around dawn this morning, extracted from a ~ 19 Mbps .h264 video that was supposed to be 12 Mbps max. The ringing artifacts towards the right edge aren't visible when the light gets brighter. I am using the "sports" exposure mode to reduce motion blur, so I believe the ISO is 1600 in dim light: https://picasaweb.google.com/1099282360 ... 9638825746
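
For reference, the file-size arithmetic above checks out:

Code: Select all

```python
# expected size of one 30-second segment at the requested bitrate
BPS = 12000000                  # requested H.264 bitrate, bits/s
seconds = 30
expected_mb = BPS * seconds / 8.0 / 1e6
print(expected_mb)              # 45.0

# effective bitrate of the largest files observed
observed_mb = 70.0
observed_mbps = observed_mb * 1e6 * 8 / seconds / 1e6
print(round(observed_mbps, 1))  # 18.7 - roughly the 19 Mbps mentioned above
```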

User avatar
jbeale
Posts: 3522
Joined: Tue Nov 22, 2011 11:51 pm
Contact: Website

Re: Pure Python camera interface

Mon Nov 03, 2014 7:19 am

The example below works as expected. Now, is it possible to grab a second YUV frame, for example 0.5 seconds after the first, without closing and reinitializing the camera and waiting for the autoexposure to settle? In other modes there are capture_sequence and capture_continuous, but I don't know if that is possible with picamera.array.PiYUVArray()

Code: Select all

import time
import picamera
import picamera.array

with picamera.PiCamera() as camera:
    with picamera.array.PiYUVArray(camera) as output:
        camera.resolution = (64, 32)
        camera.start_preview()
        time.sleep(4)
        camera.capture(output, format='yuv')
        print(output.array[32/2,:,0]) # look at the Y channel, center scanline across
Example output:

Code: Select all

[ 19  20  22  23  21  22  25  25  19  19  41  68  25  32  48  55  68  58
  97  97  59  48  37  42 105 136 147 145 143 140 138 135 134 135 129 130
 123 125  90  71  77  99 106  78  70  82  84  76  91  83  89  62  37  29
  33  32  41  84  71  66  75  62  66  65]

jit
Posts: 33
Joined: Fri Apr 18, 2014 2:52 pm

Re: Pure Python camera interface

Sun Dec 14, 2014 9:08 pm

hoggerz wrote:Wondered if jit or anyone else has managed to improve upon this? It works OK but it can be a little unpredictable! Unfortunately my knowledge of python isn't very good. The circular buffer functionality combined with motion detection seems ideal for security applications.

jit wrote:Great to see a new version, thanks Dave. I'll be upgrading very shortly.

I've spent some time playing around with the script you modified. I thought I'd share it and see if anyone has suggestions on improving it and making the motion detection better. I'm very pleased with the way the circular buffer is working - ideal for capturing the moments before motion takes place.

I've added the ability to merge together the before and after files and box them using mp4box.

With regards to the motion detection, I'm wondering whether it's better to plug in another library, given that this problem has been solved elsewhere, although I'm not entirely sure how I'd go about that (my python isn't very strong).

I've added some TODOs around bits that need work.

Disclaimer: I'm not very familiar with Python, so I'm sure there's a lot of tidy up that could be done.

Code: Select all

import io
import time
import picamera
import picamera.array
import numpy as np
import subprocess

# This uses motion vectors for cheap motion detection. To help reduce false
# positives, it expects motion to be detected for a sequence of frames before
# triggering. Although this works for most cases, there are issues around
# detection of slow-moving objects.

# TODO this requires considerable clean-up
# TODO would be nice to have a date/time overlay on the video
# TODO sort out logging, using a debug boolean isn't great

debug=True
debugMagnitudeMatrix=False

# customisable variables
record_width=1296
record_height=730
framerate=15
pre_buffer_seconds=1 # 1 is actually around 3 seconds at the res/frame rate settings above
vector_magnitude=40 # the magnitude of vector change for motion detection
min_vectors_above_magnitude=15 # the number of motion vectors that must be above the vector_magnitude for this frame to count towards motion
min_sequence_threshold=3 # the minimum number of frames in sequence that must exceed the vector threshold for motion to have been considered as detected
file_root='/var/www/motion'
mp4box=True

sequence_counter=0
sequential_frame_count=0
start_motion_timestamp = time.time()
last_motion_timestamp = time.time()
motion_detected = False

class MyMotionDetector(picamera.array.PiMotionAnalysis):
	def analyse(self, a):
		global debug, debugMagnitudeMatrix, sequence_counter, sequential_frame_count, min_sequence_threshold, start_motion_timestamp, last_motion_timestamp, motion_detected, vector_magnitude, min_vectors_above_magnitude
		a = np.sqrt(
			np.square(a['x'].astype(np.float)) +
			np.square(a['y'].astype(np.float))
			).clip(0, 255).astype(np.uint8)

		if debugMagnitudeMatrix:
			# print counts of vectors above a range of thresholds; this is
			# just to help determine some good numbers to plug into detection
			print(', '.join(
				'%d=%d' % (t, (a > t).sum()) for t in range(10, 101, 10)))

		sum_of_vectors_above_threshold = (a > vector_magnitude).sum()
		# if (debug and (sum_of_vectors_above_threshold > 0)): print(str(sum_of_vectors_above_threshold) + ' vectors above magnitude of ' + str(vector_magnitude))
		detected = sum_of_vectors_above_threshold > min_vectors_above_magnitude

		if detected:
			sequential_frame_count = sequential_frame_count + 1
			if (debug and (sequential_frame_count > 0)): print('sequential_frame_count %d' % sequential_frame_count)
			if (motion_detected):
				if (debug): print('extending time')
				last_motion_timestamp = time.time()
		else:
			sequential_frame_count=0
			# if debug: print('sequential_frame_count %d' % sequential_frame_count)

		if ((sequential_frame_count >= min_sequence_threshold) and (not motion_detected)):
			if debug: print('>> Motion detected')
			sequence_counter = sequence_counter + 1
			start_motion_timestamp = time.time()
			last_motion_timestamp = start_motion_timestamp
			motion_detected = True

		if (motion_detected and not detected):
			if ((time.time() - last_motion_timestamp) > 3):
				motion_detected = False
				if debug: print('<< Motion stopped, beyond 3s')
			else:
				if debug: print('Motion stopped, but still within 3s')

def write_video(stream):
	# Write the entire content of the circular buffer to disk. No need to
	# lock the stream here as we're definitely not writing to it
	# simultaneously
	global sequence_counter, start_motion_timestamp
	before_filename = file_root + '/before-' + str(sequence_counter) + '.h264';
	with io.open(before_filename, 'wb') as output:
		for frame in stream.frames:
			if frame.header:
				stream.seek(frame.position)
				break
		while True:
			buf = stream.read1()
			if not buf:
				break
			output.write(buf)
	# Wipe the circular stream once we're done
	stream.seek(0)
	stream.truncate()
	return before_filename


with picamera.PiCamera() as camera:
	camera.resolution = (record_width, record_height)
	camera.framerate = framerate
	with picamera.PiCameraCircularIO(camera, seconds=pre_buffer_seconds) as stream:
		# this delay is needed, otherwise you seem to get some noise which triggers the motion detection
		time.sleep(1)
		if debug: print ('starting motion analysis')
		camera.start_recording(stream, format='h264', motion_output=MyMotionDetector(camera))
		try:
			while True:
				camera.wait_recording(1)
				if motion_detected:
					file_count = sequence_counter;
					if debug: print('Splitting recording ' + str(file_count))
					# As soon as we detect motion, split the recording to record the frames "after" motion
					after_filename = file_root + '/after-' + str(file_count) + '.h264';
					camera.split_recording(after_filename)
					# Write the seconds "before" motion to disk as well
					if debug: print("Writing 'before' stream")
					before_filename = write_video(stream)
					# Wait until motion is no longer detected, then split recording back to the in-memory circular buffer
					while motion_detected:
						camera.wait_recording(1)
					print('Motion stopped, returning to circular buffer\n')
					camera.split_recording(stream)

					# merge before and after files into a single file
					# TODO this should ideally be done asynchronously
					# TODO is there a better way of doing this, feels a bit hacky to call out to a subprocess
					output_prefix = file_root + '/' + time.strftime("%Y-%m-%d--%H:%M:%S", time.gmtime(start_motion_timestamp)) + '--' + str(sequence_counter)
					h264_file = output_prefix + '.h264'

					# for some reason mp4box doesn't work with semicolons in the filename, you always get a 'Requested URL is not valid or cannot be found', so work around by using a different filename
					if mp4box:
						h264_file = file_root + '/' + 'merge-' + str(file_count) + '.h264'

					cmd = 'mv ' + before_filename + ' ' + h264_file + ' && cat ' + after_filename + ' >> ' + h264_file + ' && rm ' + after_filename
					if debug: print('[CMD]: ' + cmd)
					subprocess.call([cmd], shell=True)
					if debug: print('finished file merge')

					if mp4box:
						# mp4box the file
						# TODO this should ideally be done asynchronously
						# TODO investigate if mp4box has a python api
						mp4_file = output_prefix + '.mp4'
						cmd = 'MP4Box -fps ' + str(framerate) + ' -add ' + h264_file + ' ' + mp4_file
						if debug: print('[CMD] ' + cmd)
						subprocess.call([cmd], shell=True)
						if debug: print('finished mp4box')
		finally:
			camera.stop_recording()

I've been meaning to respond to this post for a while. Things are a bit hectic for me at the moment, so probably won't have time to work on any improvements for a couple of months :-(. My python isn't that great either, so would appreciate anyone taking a look and making it more robust :-).

I noticed that jbeale put up some code that may work better.

In the past I've used 'motion' fairly successfully, the only downside being that you get a lot of false alarms - e.g. sun comes out of the clouds causing a high contrast change, windy days, etc. I think those problems were compounded by the fact that motion could only handle around 1 frame per second for the motion detection (a lot can change in 1 second). I was hoping that these sorts of false alarms could be eliminated by a better frame rate as well as checking that motion was detected between a series of frames.

It would be good to hear back if anyone's been able to take my example and improve upon it. Or if anyone has successfully pulled in any true object detection.

User avatar
jbeale
Posts: 3522
Joined: Tue Nov 22, 2011 11:51 pm
Contact: Website

Re: Pure Python camera interface

Mon Dec 15, 2014 1:41 am

It was limitation of 'motion' on the Pi, primarily the lag time and limited resolution, that led me to write the Python code described here:
http://www.raspberrypi.org/forums/viewt ... 43&t=87997

This has been working in a satisfactory way, running 24/7 for over a month now. It doesn't generate (many) false detections on cloud/sun lighting changes, and I've managed to aim my camera and adaptively set threshold levels so it's usually not detecting branches blowing in the wind. That program (actually, set of programs) is not a solution for everyone, because it relies on a second computer to store all the 1080p video and extract stills. The motion detect generates a rectangular "region of detected motion" but it's only about 75% accurate, often skipping areas of actual motion which are too similar in brightness to the background, or including the whole frame when brightness changes too much from autoexposure. That's OK in my case, as I use the motion box coordinates only to frame the area of the thumbnail image.

To really do object detection I think you need to start with edge detection, instead of simple per-pixel brightness changes like I do.
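
For anyone curious, here's a minimal illustration of the per-pixel brightness-change idea (toy 4x4 luma data, pure Python - not the actual code from the link above):

Code: Select all

```python
def changed_pixels(prev, cur, threshold=25):
    """Count pixels whose luma changed by more than `threshold`."""
    return sum(1 for p, c in zip(prev, cur) if abs(c - p) > threshold)

prev = [20] * 16            # flat dark frame (4x4 luma, flattened)
cur = list(prev)
cur[5] = cur[6] = 200       # a bright object appears in two pixels
print(changed_pixels(prev, cur))  # 2
```

A real detector would then compare that count against a minimum-pixels threshold, as the motion-vector script earlier in the thread does with its vector counts.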

Redsandro
Posts: 27
Joined: Mon Nov 25, 2013 7:19 pm
Location: The Netherlands
Contact: Website

Re: Pure Python camera interface

Mon Dec 15, 2014 10:09 am

I don't like having to convert the h264 streams to mp4 or mkv afterwards. I know it should be possible to store this in a container directly from the stream, but for some reason I haven't managed to create a playable file by piping from picamera directly to ffmpeg or one of the other stream processors.

Is there a reason I don't see this code anywhere? I'm fairly sure this should be possible, and it would save some hassle. It should have only a minor impact on the Raspberry Pi, since it would be writing the same stream in a container format instead of as a raw stream.
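One approach that may address this (a sketch, untested on hardware here: the helper just builds an ffmpeg argument list, and the commented section shows how it might be wired to picamera on a Pi with the camera attached — the filenames and framerate are illustrative):

```python
def ffmpeg_mux_cmd(output, framerate=24):
    """Build an ffmpeg command line that remuxes an h264 stream read
    from stdin into whatever container the output extension implies,
    without re-encoding."""
    return ['ffmpeg', '-y',
            '-framerate', str(framerate),  # raw h264 carries no timing info
            '-i', '-',                     # read the stream from stdin
            '-c:v', 'copy',                # copy the video track, don't re-encode
            output]

# On a Pi with the camera attached it might be wired up like this
# (untested sketch):
#
#   import subprocess
#   import picamera
#   proc = subprocess.Popen(ffmpeg_mux_cmd('output.mp4'),
#                           stdin=subprocess.PIPE)
#   with picamera.PiCamera() as camera:
#       camera.resolution = (1280, 720)
#       camera.framerate = 24
#       camera.start_recording(proc.stdin, format='h264')
#       camera.wait_recording(30)
#       camera.stop_recording()
#   proc.stdin.close()
#   proc.wait()

print(ffmpeg_mux_cmd('output.mp4'))
```

The `-framerate` value must match the camera's framerate, since a raw h264 elementary stream carries no timing information of its own.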

Also, has anyone played with the "motion" output from picamera yet? If I understand correctly, by monitoring this, with some Python-foo one could continuously stream h264 and use the motion channel to decide when to actually record the stream.
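If it helps, recent picamera versions can decode the motion vectors for you via `picamera.array.PiMotionAnalysis`, which hands `analyse()` a numpy record array with `x`, `y` and `sad` fields per macroblock. A hedged sketch of the thresholding side (the `magnitude`/`min_blocks` values are illustrative guesses, and the commented lines show where it would attach to a real camera):

```python
import numpy as np

def motion_detected(motion_data, magnitude=60, min_blocks=10):
    """Flag motion when enough macroblocks have a large enough motion
    vector. motion_data is shaped like the record arrays picamera
    produces (fields 'x', 'y', 'sad'); the thresholds are illustrative
    guesses that need tuning for a real scene."""
    m = np.sqrt(motion_data['x'].astype(np.float64) ** 2 +
                motion_data['y'].astype(np.float64) ** 2)
    return bool((m > magnitude).sum() > min_blocks)

# On real hardware this would plug into picamera's motion output,
# e.g. (untested sketch):
#
#   from picamera.array import PiMotionAnalysis
#   class MyDetector(PiMotionAnalysis):
#       def analyse(self, a):
#           if motion_detected(a):
#               pass  # e.g. start writing the stream to disk
#
#   camera.start_recording('/dev/null', format='h264',
#                          motion_output=MyDetector(camera))

# Synthetic check: an all-zero frame vs. one with a fast-moving region
still = np.zeros(100, dtype=[('x', 'i1'), ('y', 'i1'), ('sad', 'u2')])
moving = still.copy()
moving['x'][:20] = 100   # 20 blocks with a large horizontal vector
print(motion_detected(still), motion_detected(moving))   # False True
```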

hydra3333
Posts: 127
Joined: Thu Jan 10, 2013 11:48 pm

Re: Pure Python camera interface

Tue Dec 30, 2014 12:01 pm

Thank you lots, waveform80 !! Your efforts are really appreciated.

There's some code in this other thread, by "killagreg", using motion vectors for motion detection and recording.
http://www.raspberrypi.org/forums/viewt ... 08#p662008

I'd like to incorporate some of jbeale's concepts from http://www.raspberrypi.org/forums/viewt ... 47#p619347 and https://github.com/jbeale1/PiCam1 (stest2.py)
such as day/night switching, but I'm not a programmer, so it'll take me a long time to understand what was done and how.

User avatar
waveform80
Posts: 315
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Fri Jan 02, 2015 3:25 am

Well, happy new year everyone! I was hoping to get 1.9 out before xmas, but events conspired to keep me too busy to get it done. This one is mostly bug fixes but there's a few new bits of functionality. I won't duplicate the changelog like usual - just check it out if you want a summary of the headlines.

As usual, the new release is immediately available from PyPI but there'll probably be a short delay until it hits Raspbian (unless young Alex is bonkers enough to be reading his e-mail at 3AM on New Years ;)

Hopefully I'll find some time to respond to some of the queries above in the next few days but it looks like January is going to be very busy for me for a variety of reasons, so my apologies in advance if I don't get a spare moment!

Dave.

bantammenace2012
Posts: 122
Joined: Mon May 28, 2012 12:18 pm

Re: Pure Python camera interface

Fri Jan 02, 2015 12:32 pm

Happy New Year Dave and everyone else contributing to this thread.
Is this feasible ?
Father Christmas brought me a Google Cardboard VR headset to work with the Nexus 5 Android smartphone I already had. Forgetting the 3D bit for the moment (compute module with 2 cameras? or a Panasonic Micro Four Thirds 3D lens attached to a single Pi camera?), is it possible to stream a picamera video/still image so that it appears as two identical videos/images side by side on my smartphone? I hope you know what I mean. Is it also possible to distort/modify it so that it would appear with the correct amount of distortion for Google Cardboard?
I am successfully streaming the gyro values from my Nexus 5 to my RPi using the Google Play app Wireless IMU. Once I have worked out how to extract the gyro values from the streaming (UDP) csv file, I intend to use them to pan and tilt the RPi camera on my robot, the aim being to control the pan and tilt by turning my head and have the streaming video/image update on the headset. No doubt latency will be an issue, so I would have the sick bucket ready in advance.
My apologies in advance if this distracts anyone from doing what puts bread on the table but as we all know RPi is very addictive and is a great aid to procrastinators everywhere.

hydra3333
Posts: 127
Joined: Thu Jan 10, 2013 11:48 pm

Re: Pure Python camera interface

Sun Jan 04, 2015 3:00 am

Hello. I am not sure where to post this, so here goes.

Over at http://www.raspberrypi.org/forums/viewt ... 29#p664329 is a lightweight Python motion detector thread (code by killagreg) based on picamera, using the h264 motion vectors from a smaller-sized splitter port, e.g.

Code: Select all

camera.start_recording(
    '/dev/null', splitter_port=2, format='h264',
    resize=(motion_width, motion_height),
    motion_output=MyMotionDetector(camera, size=(motion_width, motion_height)))
As far as I can tell, it seems to take about 10% of CPU whilst idling along, which I'm happy with.

As you may gather, timestamping frames is essential - is there some way to tell picamera where to place the annotation text, either absolutely or as offsets from top/bottom/left/right (and to specify the font size)? There's a minor issue in that moving annotation text triggers the motion sensing, possibly due to the "auto centering" of text at the top of the frame. Specifying a text position without auto centering may help a bit.
http://www.raspberrypi.org/forums/viewt ... 46#p662746
Yes, the annotation is put into the image before the encoder processes the image. Hence changes to the pixels in the annotation text cause a small "virtual motion". Enabling and disabling annotation causes a strong motion.
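If the annotation position can't be moved, one workaround sketch on the detection side is to mask out the macroblock rows the text is drawn into before thresholding (the `skip_rows` value and array shape below are illustrative; picamera's motion arrays are record arrays with `x`, `y` and `sad` fields):

```python
import numpy as np

def magnitude_ignoring_top(motion_data, skip_rows=2):
    """Per-macroblock motion magnitude with the top rows zeroed out,
    so annotation text drawn at the top of the frame can't trigger the
    detector. skip_rows=2 is an illustrative guess; check how many
    16x16 macroblock rows your annotation actually covers."""
    m = np.sqrt(motion_data['x'].astype(np.float64) ** 2 +
                motion_data['y'].astype(np.float64) ** 2)
    m[:skip_rows, :] = 0.0
    return m

# Synthetic check: "motion" confined to the annotation band is masked out
a = np.zeros((30, 41), dtype=[('x', 'i1'), ('y', 'i1'), ('sad', 'u2')])
a['x'][0, :] = 50                        # fake vectors where the text sits
print(magnitude_ignoring_top(a).max())   # 0.0 -> nothing left to detect
```

The trade-off is obvious: real motion that happens only inside the masked band is ignored too, so the band should be kept as small as the annotation allows.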

bootsmann
Posts: 28
Joined: Thu Jan 23, 2014 7:07 am

Re: Pure Python camera interface

Fri Jan 09, 2015 12:30 pm

Hi

What does this message exactly mean?

Code: Select all

/usr/lib/python2.7/dist-packages/picamera/camera.py:120: PiCameraDeprecated: Accessing framerate as a tuple is deprecated; this value is now a Fraction, so you can query the numerator and denominator properties directly, convert to an int or float, or perform arithmetic operations and comparisons directly
  'Accessing framerate as a tuple is deprecated; this value is '
Everything works fine when the camera starts

Code: Select all

import io
import time
import datetime as dt
import picamera

# (The original snippet was the body of a generator; the enclosing
# function and stream setup shown here are illustrative.)
def frame_generator():
    stream = io.BytesIO()
    with picamera.PiCamera() as camera:
        camera.led = False
        camera.exposure_mode = 'night'
        #camera.exposure_mode = 'auto'
        camera.rotation = 180
        camera.resolution = (640, 480)
        camera.start_preview()
        camera.annotate_background = True
        time.sleep(2)

        while True:
            camera.annotate_text = dt.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
            camera.capture(stream, 'jpeg')
            yield stream.getvalue()
            stream.seek(0)
            stream.truncate()
            time.sleep(.2)
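The warning quoted above means that `camera.framerate` is now a `fractions.Fraction` rather than a `(numerator, denominator)` tuple, and indexing it like a tuple is what triggers the message. A minimal sketch of the non-deprecated access patterns, using a standalone `Fraction` to stand in for a live camera's `framerate` property:

```python
from fractions import Fraction

# Stand-in for what camera.framerate returns in picamera 1.8+
framerate = Fraction(30, 1)

print(float(framerate))                            # convert to float: 30.0
print(framerate.numerator, framerate.denominator)  # query the parts: 30 1
print(framerate > 24)                              # compare directly: True
```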

BorisS
Posts: 15
Joined: Fri Oct 11, 2013 7:08 pm

Re: Pure Python camera interface

Thu Feb 05, 2015 1:25 pm

> but it looks like January is going to be very busy for me
Hi waveform80 - hope february is going to be better ;)

I'd like to ask for a new picamera feature: Would it be possible to output pts (timestamp information) also for picamera as it was done by ethanol100 for raspivid? (thread here http://www.raspberrypi.org/forums/viewt ... 43&t=98541 code here https://github.com/ethanol100/userland/ ... RaspiVid.c)

Proposal/wish for output format would be:
- output in mkvmerge format v2 as done for raspivid modification (adding a "# timecode format v2\n" header line)
- splitting output into multiple txt files along "camera.split_recording('%d.h264' % i)" calls would be perfect to obtain matching video segments/pts files

Kind regards,
Boris

User avatar
waveform80
Posts: 315
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Pure Python camera interface

Thu Feb 05, 2015 2:22 pm

Hi Boris,
BorisS wrote:> but it looks like January is going to be very busy for me
Hi waveform80 - hope february is going to be better ;)
Heh - we'll see - I'm currently transitioning jobs, so I'm not sure!
BorisS wrote:I'd like to ask for a new picamera feature: Would it be possible to output pts (timestamp information) also for picamera as it was done by ethanol100 for raspivid? (thread here http://www.raspberrypi.org/forums/viewt ... 43&t=98541 code here https://github.com/ethanol100/userland/ ... RaspiVid.c)

Proposal/wish for output format would be:
- output in mkvmerge format v2 as done for raspivid modification (adding a "# timecode format v2\n" header line)
- splitting output into multiple txt files along "camera.split_recording('%d.h264' % i)" calls would be perfect to obtain matching video segments/pts files

Kind regards,
Boris
Actually, picamera already provides PTS via the PiCamera.frame.timestamp property. Outputting that as a separate text file would be easiest with a custom output which writes the timestamp in whatever format you want to one file, and the video data to another. Something like this:

Code: Select all

from __future__ import unicode_literals

import io
import picamera

class PtsOutput(object):
    def __init__(self, camera, video_filename, pts_filename):
        self.camera = camera
        self.video_output = io.open(video_filename, 'wb')
        self.pts_output = io.open(pts_filename, 'w')
        self.start_time = None

    def write(self, buf):
        self.video_output.write(buf)
        if self.camera.frame.complete and self.camera.frame.timestamp:
            if self.start_time is None:
                self.start_time = self.camera.frame.timestamp
                self.pts_output.write('# timecode format v2\n')
            self.pts_output.write('%f\n' % ((self.camera.frame.timestamp - self.start_time) / 1000.0))

    def flush(self):
        self.video_output.flush()
        self.pts_output.flush()

    def close(self):
        self.video_output.close()
        self.pts_output.close()


with picamera.PiCamera() as camera:
    camera.resolution = (1280, 720)
    camera.framerate = 24

    camera.start_recording(PtsOutput(camera, 'foo.h264', 'foo.txt'), format='h264')
    camera.wait_recording(30)
    camera.stop_recording()
I haven't tried the results in mkvmerge yet (haven't got time today), but they look vaguely sane and similar to what ethanol's modified raspivid produces. Incidentally, custom outputs work fine with split_recording() too, but I'll leave you to figure that bit out :)
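For reference, a hypothetical muxing step for the files produced above (untested, as noted in the post; older mkvmerge releases spell the option `--timecodes` while newer ones use `--timestamps`, and track ID 0 is assumed to be the video track):

```python
# Hypothetical sketch: invoke mkvmerge on the files produced above.
# '0:foo.txt' means "apply these timecodes to track 0".
mux_cmd = ['mkvmerge', '-o', 'foo.mkv', '--timecodes', '0:foo.txt', 'foo.h264']
print(' '.join(mux_cmd))

# On a machine with mkvmerge installed:
#   import subprocess
#   subprocess.check_call(mux_cmd)
```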

Dave.
