bootsmann wrote:I am using picamera 1.8 and #713 firmware. How can I turn off the following message/warning?
Code: Select all
/usr/local/lib/python2.7/dist-packages/picamera/camera.py:2488: PiCameraDeprecated: PiCamera.ISO is deprecated; use PiCamera.iso instead
  'PiCamera.ISO is deprecated; use PiCamera.iso instead'))
Sorry about that, silly mistake on my part - it'll be corrected in 1.9. I'm slightly surprised it's showing up though - deprecation warnings are meant to be silenced in Python by default. Are you setting a warnings filter in your script anywhere? The following filter ought to silence it (but obviously if you've got another filter set later on it won't):
Code: Select all
import warnings
warnings.simplefilter('ignore', DeprecationWarning)
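It's worth knowing that picamera's PiCameraDeprecated is a subclass of DeprecationWarning, so a filter on the base class matches it too. A quick stand-alone check of the filter's behaviour (using a stand-in warning class here, since no camera is needed to demonstrate it):

```python
import warnings

# Stand-in for picamera's PiCameraDeprecated, which subclasses
# DeprecationWarning; a filter on the base class matches it too.
class FakeCameraDeprecated(DeprecationWarning):
    pass

def emit_warning():
    warnings.warn('PiCamera.ISO is deprecated; use PiCamera.iso instead',
                  FakeCameraDeprecated, stacklevel=2)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('ignore', DeprecationWarning)
    emit_warning()

print(len(caught))  # 0: the deprecation warning was suppressed
```

If the warning still appears in your script, some later filter (or a `-W` command-line option) is probably re-enabling it.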
waveform80 wrote:you can extract the index from the PiCamera.frame property [...] However, the tricky bit is knowing when to query this to get a value for each frame (in the above case, the printed values will be *approximately* every second, but given all the overhead of writing, python, the OS, etc. it's very approximate). Using a custom output is probably the easiest way of dealing with this as the write() method will get called at least once for every frame (it's worth noting that large frames like I-frames might call write() several times in order to get written depending on the buffer size, so in the following example you might want to check whether camera.frame.index has actually changed from write to write): [...]
Thank you very much for the helpful examples! I have not tried it yet, but will try the custom output class.
waveform80 wrote:Using a custom output is probably the easiest way of dealing with this as the write() method will get called at least once for every frame (it's worth noting that large frames like I-frames might call write() several times in order to get written depending on the buffer size, so in the following example you might want to check whether camera.frame.index has actually changed from write to write):
Dave: great stuff, your example did work perfectly as you wrote it. It runs and the output frame number runs -1, 0, 1, 2, 3 ... 308, and there are five frame numbers that come up twice, at uniform intervals of 61 frames: 60, 121, 182, 243, 304. Maybe 61 frames is the I-frame interval.
Code: Select all
import io
import picamera

class MyCustomOutput(object):
    def __init__(self, camera, filename):
        self.camera = camera
        self._file = io.open(filename, 'wb')

    def write(self, buf):
        print self.camera.frame.index
        return self._file.write(buf)

    def flush(self):
        self._file.flush()

    def close(self):
        self._file.close()

with picamera.PiCamera() as camera:
    camera.resolution = (640, 480)
    output = MyCustomOutput(camera, 'foo.h264')
    camera.start_recording(output, format='h264')
    camera.wait_recording(10)
    camera.stop_recording()
    output.close()
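Following the suggestion about I-frames calling write() several times, the custom output can remember the last frame index it saw and only react when it changes. A minimal sketch of that check (the picamera objects are stubbed out with fakes here so the logic can be run without a camera; the class and attribute names of the fakes are mine):

```python
class FrameCountingOutput(object):
    """Counts distinct frames by only reacting when frame.index changes
    between successive write() calls (I-frames may span several writes)."""

    def __init__(self, camera):
        self.camera = camera
        self.last_index = None
        self.frame_indexes = []

    def write(self, buf):
        index = self.camera.frame.index
        if index != self.last_index:
            # A new frame has started; record (or print) its index once
            self.last_index = index
            self.frame_indexes.append(index)
        return len(buf)

# Fake stand-ins for the picamera objects, just to exercise the logic:
class FakeFrame(object):
    def __init__(self, index):
        self.index = index

class FakeCamera(object):
    def __init__(self):
        self.frame = FakeFrame(-1)

camera = FakeCamera()
output = FrameCountingOutput(camera)
# Simulate an I-frame (index 0) arriving over three write() calls,
# then two ordinary frames arriving in one call each:
for index in [0, 0, 0, 1, 2]:
    camera.frame = FakeFrame(index)
    output.write(b'data')

print(output.frame_indexes)  # [0, 1, 2]
```

In a real recording you would keep the file-writing behaviour of MyCustomOutput and add only the index comparison.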
Code: Select all
avconv -i inputfile.mp4 -ss 00:00:15 -vframes 10 -f image2 -q:v 2 F%03d.jpg
jbeale wrote:When I set text annotation with black background, it works as expected. If, in addition, I set
Code: Select all
camera.annotate_frame_num = True
I get the frame number shown on a second line underneath my annotation, but I also get a thin white line across the center of the frame from one side to the other. Is this an intended behavior?
waveform80 wrote:It seems to be an issue in the firmware when more than one line is displayed. You can cause the same line to appear by using a really long text annotation that spans multiple lines.
I've just tried to reproduce this in raspistill and failed. What exactly are you filling the MMAL_PARAMETER_CAMERA_ANNOTATE_V2_T structure with when you're seeing this line? If any of the "show_XXX" parameters are set then it will draw that line, and if it is unchanging (eg because it isn't used on the Pi) then you will get a horizontal line.
I think this is something left over from some of the other annotations which are present in the MMAL interface but (I'm guessing) unimplemented in the Pi's firmware (have a play with "show_analog_gain", "show_caf", "show_motion" and so forth in the MMAL_PARAMETER_CAMERA_ANNOTATE_T structure and all sorts of other lines and things appear in the annotation but they all appear to be non-functional on the Pi).
Code: Select all
diff --git a/host_applications/linux/apps/raspicam/RaspiStill.c b/host_applications/linux/apps/raspicam/RaspiStill.c
index 9b791fb..b030dd9 100644
--- a/host_applications/linux/apps/raspicam/RaspiStill.c
+++ b/host_applications/linux/apps/raspicam/RaspiStill.c
@@ -923,7 +923,21 @@ static MMAL_STATUS_T create_camera_component(RASPISTILL_STATE *state)
mmal_port_parameter_set(camera->control, &cam_config.hdr);
}
-
+
+ {
+ MMAL_PARAMETER_CAMERA_ANNOTATE_V2_T annotate =
+ { { MMAL_PARAMETER_ANNOTATE, sizeof(annotate) },
+ MMAL_TRUE,
+ MMAL_FALSE,
+ MMAL_FALSE,
+ MMAL_FALSE,
+ MMAL_FALSE,
+ MMAL_FALSE,
+ MMAL_TRUE,
+ MMAL_TRUE,
+ "Wibble this is a long text string if I keep going like this and dont stop typing" };
+ mmal_port_parameter_set(camera->control, &annotate.hdr);
+ }
raspicamcontrol_set_all_parameters(camera, &state->camera_parameters);
// Now set up the port formats
Code: Select all
/* Draw grey bar to show bottom of the graph */
Code: Select all
import time
import picamera
import numpy as np

# Create an array representing a 1280x720 image of
# a cross through the center of the display. The shape of
# the array must be of the form (height, width, color)
a = np.zeros((720, 1280, 3), dtype=np.uint8)
a[360, :, :] = 0xff
a[:, 640, :] = 0xff
i = 0
with picamera.PiCamera() as camera:
    camera.resolution = (1280, 720)
    camera.framerate = 24
    camera.start_preview()
    # Add the overlay directly into layer 3 with transparency;
    # we can omit the size parameter of add_overlay as the
    # size is the same as the camera's resolution
    while True:
        i += 1
        print(i)
        o = camera.add_overlay(np.getbuffer(a), layer=3, alpha=64)
        time.sleep(.1)
        camera.remove_overlay(o)
spinomaly wrote:Sorry for the repost...but I am stuck...
I have a B+ with the latest Noobs Raspian off the website (release date 2014-09-09). I am having troubles adding and removing overlays. After a sequence of 60 add/remove overlays an out of memory error occurs. Code below as well as error. Any help would be appreciated.
Code: Select all
import time
import picamera
import numpy as np

# Create an array representing a 1280x720 image of
# a cross through the center of the display. The shape of
# the array must be of the form (height, width, color)
a = np.zeros((720, 1280, 3), dtype=np.uint8)
a[360, :, :] = 0xff
a[:, 640, :] = 0xff
i = 0
with picamera.PiCamera() as camera:
    camera.resolution = (1280, 720)
    camera.framerate = 24
    camera.start_preview()
    # Add the overlay directly into layer 3 with transparency;
    # we can omit the size parameter of add_overlay as the
    # size is the same as the camera's resolution
    while True:
        i += 1
        print(i)
        o = camera.add_overlay(np.getbuffer(a), layer=3, alpha=64)
        time.sleep(.1)
        camera.remove_overlay(o)
mmal: mmal_vc_component_create: failed to create component 'vc.ril.video_render' (1:ENOMEM)
mmal: mmal_component_create_core: could not create component 'vc.ril.video_render' (1)
Traceback (most recent call last):
File "overlay_bug.py", line 26, in <module>
o = camera.add_overlay(np.getbuffer(a), layer=3, alpha=64)
File "/usr/lib/python2.7/dist-packages/picamera/camera.py", line 953, in add_overlay
renderer = PiOverlayRenderer(self, source, size, **options)
File "/usr/lib/python2.7/dist-packages/picamera/renderers.py", line 474, in __init__
rotation, vflip, hflip)
File "/usr/lib/python2.7/dist-packages/picamera/renderers.py", line 89, in __init__
prefix="Failed to create renderer component")
File "/usr/lib/python2.7/dist-packages/picamera/exc.py", line 133, in mmal_check
raise PiCameraMMALError(status, prefix)
picamera.exc.PiCameraMMALError: Failed to create renderer component: Out of memory
Code: Select all
BPS = 12000000  # bits per second from H.264 video encoder
for vidFile in camera.record_sequence(date_gen(camera), format='h264', bitrate=BPS):
    ... blah, blah
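The date_gen helper used above isn't shown in the post; a plausible stand-in (the name, format string, and extension are my assumptions, purely for illustration) would be a generator yielding an endless stream of timestamped filenames for record_sequence to cycle through:

```python
import time

def date_gen(camera=None):
    # Hypothetical stand-in for the date_gen used above (the real one
    # isn't shown in the post): yield an endless sequence of
    # timestamped filenames for record_sequence to write to in turn.
    while True:
        yield time.strftime('%Y-%m-%d_%H-%M-%S') + '.h264'

names = date_gen()
first = next(names)
print(first)  # e.g. 2015-02-14_10-30-00.h264
```

record_sequence pulls the next filename from the generator each time it splits the recording, so any generator of output targets works here.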
Code: Select all
import time
import picamera
import picamera.array

with picamera.PiCamera() as camera:
    with picamera.array.PiYUVArray(camera) as output:
        camera.resolution = (64, 32)
        camera.start_preview()
        time.sleep(4)
        camera.capture(output, format='yuv')
        print(output.array[32/2, :, 0])  # look at the Y channel, center scanline across the image
Code: Select all
[ 19 20 22 23 21 22 25 25 19 19 41 68 25 32 48 55 68 58
97 97 59 48 37 42 105 136 147 145 143 140 138 135 134 135 129 130
123 125 90 71 77 99 106 78 70 82 84 76 91 83 89 62 37 29
33 32 41 84 71 66 75 62 66 65]
hoggerz wrote:Wondered if jit or anyone else has managed to improve upon this? It works OK, but it can be a little unpredictable! Unfortunately my knowledge of Python isn't very good. The circular buffer functionality combined with motion detection seems ideal for security applications.
jit wrote:Great to see a new version, thanks Dave. I'll be upgrading very shortly.
I've spent some time playing around with the script you modified. I thought I'd share it and see if anyone has suggestions for improving it and making the motion detection better. I'm very pleased with the way the circular buffer is working; it's ideal for capturing the moments before motion takes place.
I've added the ability to merge together the before and after files and box them using mp4box.
With regards to the motion detection, I'm wondering whether it's better to plug in another library given that this problem has already been solved, although I'm not entirely sure how I'd go about that (my Python isn't very strong).
I've added some TODOs around bits that need work.
Disclaimer: I'm not very familiar with Python, so I'm sure there's a lot of tidy up that could be done.
Code: Select all
import io
import time
import picamera
import picamera.array
import numpy as np
import subprocess

# This uses motion vectors for cheap motion detection; to help reduce false
# positives it expects motion to be detected for a sequence of frames before
# triggering. Although this works for most cases, there are issues around
# detection of slow moving objects
# TODO this requires considerable clean-up
# TODO would be nice to have a date/time overlay on the video
# TODO sort out logging, using a debug boolean isn't great
debug = True
debugMagnitudeMatrix = False

# customisable variables
record_width = 1296
record_height = 730
framerate = 15
pre_buffer_seconds = 1  # 1 is actually around 3 seconds at the res/frame rate settings above
vector_magnitude = 40  # the magnitude of vector change for motion detection
min_vectors_above_magnitude = 15  # the number of motion vectors that must be above the vector_magnitude for this frame to count towards motion
min_sequence_threshold = 3  # the minimum number of frames in sequence that must exceed the vector threshold for motion to have been considered as detected
file_root = '/var/www/motion'
mp4box = True

sequence_counter = 0
sequential_frame_count = 0
start_motion_timestamp = time.time()
last_motion_timestamp = time.time()
motion_detected = False

class MyMotionDetector(picamera.array.PiMotionAnalysis):
    def analyse(self, a):
        global debug, debugMagnitudeMatrix, sequence_counter, \
            sequential_frame_count, min_sequence_threshold, \
            start_motion_timestamp, last_motion_timestamp, motion_detected, \
            vector_magnitude, min_vectors_above_magnitude
        a = np.sqrt(
            np.square(a['x'].astype(np.float)) +
            np.square(a['y'].astype(np.float))
            ).clip(0, 255).astype(np.uint8)
        if debugMagnitudeMatrix:
            # TODO this is a bit ugly, should really use some sort of loop
            # print out a matrix of sums of vectors above certain thresholds;
            # this is just to help determine some good numbers to plug into
            # the detection
            sum_of_vectors_above_10 = (a > 10).sum()
            sum_of_vectors_above_20 = (a > 20).sum()
            sum_of_vectors_above_30 = (a > 30).sum()
            sum_of_vectors_above_40 = (a > 40).sum()
            sum_of_vectors_above_50 = (a > 50).sum()
            sum_of_vectors_above_60 = (a > 60).sum()
            sum_of_vectors_above_70 = (a > 70).sum()
            sum_of_vectors_above_80 = (a > 80).sum()
            sum_of_vectors_above_90 = (a > 90).sum()
            sum_of_vectors_above_100 = (a > 100).sum()
            print(
                '10=' + str(sum_of_vectors_above_10) + ', ' +
                '20=' + str(sum_of_vectors_above_20) + ', ' +
                '30=' + str(sum_of_vectors_above_30) + ', ' +
                '40=' + str(sum_of_vectors_above_40) + ', ' +
                '50=' + str(sum_of_vectors_above_50) + ', ' +
                '60=' + str(sum_of_vectors_above_60) + ', ' +
                '70=' + str(sum_of_vectors_above_70) + ', ' +
                '80=' + str(sum_of_vectors_above_80) + ', ' +
                '90=' + str(sum_of_vectors_above_90) + ', ' +
                '100=' + str(sum_of_vectors_above_100) + ', '
            )
        sum_of_vectors_above_threshold = (a > vector_magnitude).sum()
        # if (debug and (sum_of_vectors_above_threshold > 0)): print(str(sum_of_vectors_above_threshold) + ' vectors above magnitude of ' + str(vector_magnitude))
        detected = sum_of_vectors_above_threshold > min_vectors_above_magnitude
        if detected:
            sequential_frame_count = sequential_frame_count + 1
            if debug and (sequential_frame_count > 0):
                print('sequential_frame_count %d' % sequential_frame_count)
            if motion_detected:
                if debug:
                    print('extending time')
                last_motion_timestamp = time.time()
        else:
            sequential_frame_count = 0
            # if debug: print('sequential_frame_count %d' % sequential_frame_count)
        if (sequential_frame_count >= min_sequence_threshold) and (not motion_detected):
            if debug:
                print('>> Motion detected')
            sequence_counter = sequence_counter + 1
            start_motion_timestamp = time.time()
            last_motion_timestamp = start_motion_timestamp
            motion_detected = True
        if motion_detected and not detected:
            if (time.time() - last_motion_timestamp) > 3:
                motion_detected = False
                if debug:
                    print('<< Motion stopped, beyond 3s')
            else:
                if debug:
                    print('Motion stopped, but still within 3s')

def write_video(stream):
    # Write the entire content of the circular buffer to disk. No need to
    # lock the stream here as we're definitely not writing to it
    # simultaneously
    global sequence_counter, start_motion_timestamp
    before_filename = file_root + '/before-' + str(sequence_counter) + '.h264'
    with io.open(before_filename, 'wb') as output:
        for frame in stream.frames:
            if frame.header:
                stream.seek(frame.position)
                break
        while True:
            buf = stream.read1()
            if not buf:
                break
            output.write(buf)
        # Wipe the circular stream once we're done
        stream.seek(0)
        stream.truncate()
    return before_filename

with picamera.PiCamera() as camera:
    camera.resolution = (record_width, record_height)
    camera.framerate = framerate
    with picamera.PiCameraCircularIO(camera, seconds=pre_buffer_seconds) as stream:
        # this delay is needed, otherwise you seem to get some noise which
        # triggers the motion detection
        time.sleep(1)
        if debug:
            print('starting motion analysis')
        camera.start_recording(stream, format='h264',
                               motion_output=MyMotionDetector(camera))
        try:
            while True:
                camera.wait_recording(1)
                if motion_detected:
                    file_count = sequence_counter
                    if debug:
                        print('Splitting recording ' + str(file_count))
                    # As soon as we detect motion, split the recording to
                    # record the frames "after" motion
                    after_filename = file_root + '/after-' + str(file_count) + '.h264'
                    camera.split_recording(after_filename)
                    # Write the seconds "before" motion to disk as well
                    if debug:
                        print("Writing 'before' stream")
                    before_filename = write_video(stream)
                    # Wait until motion is no longer detected, then split
                    # recording back to the in-memory circular buffer
                    while motion_detected:
                        camera.wait_recording(1)
                    print('Motion stopped, returning to circular buffer\n')
                    camera.split_recording(stream)
                    # merge before and after files into a single file
                    # TODO this should ideally be done asynchronously
                    # TODO is there a better way of doing this, feels a bit
                    # hacky to call out to a subprocess
                    output_prefix = file_root + '/' + time.strftime(
                        "%Y-%m-%d--%H:%M:%S",
                        time.gmtime(start_motion_timestamp)) + '--' + str(sequence_counter)
                    h264_file = output_prefix + '.h264'
                    # for some reason mp4box doesn't work with semicolons in
                    # the filename, you always get a 'Requested URL is not
                    # valid or cannot be found', so work around by using a
                    # different filename
                    if mp4box:
                        h264_file = file_root + '/' + 'merge-' + str(file_count) + '.h264'
                    cmd = ('mv ' + before_filename + ' ' + h264_file +
                           ' && cat ' + after_filename + ' >> ' + h264_file +
                           ' && rm ' + after_filename)
                    if debug:
                        print('[CMD]: ' + cmd)
                    subprocess.call([cmd], shell=True)
                    if debug:
                        print('finished file merge')
                    if mp4box:
                        # mp4box the file
                        # TODO this should ideally be done asynchronously
                        # TODO investigate if mp4box has a python api
                        mp4_file = output_prefix + '.mp4'
                        cmd = ('MP4Box -fps ' + str(framerate) +
                               ' -add ' + h264_file + ' ' + mp4_file)
                        if debug:
                            print('[CMD] ' + cmd)
                        subprocess.call([cmd], shell=True)
                        if debug:
                            print('finished mp4box')
        finally:
            camera.stop_recording()
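The false-positive suppression at the heart of the script above (require min_sequence_threshold consecutive "motion" frames before triggering, then keep the detection latched until things have been quiet for a few seconds) can be isolated into a small, hardware-free class. This is a sketch with names of my own choosing, using the same thresholds as the script, so the debounce behaviour can be tested without a camera:

```python
class MotionDebouncer(object):
    """Requires several consecutive motion frames before triggering, and
    keeps 'detected' latched until quiet for hold_seconds."""

    def __init__(self, min_sequence=3, hold_seconds=3.0):
        self.min_sequence = min_sequence
        self.hold_seconds = hold_seconds
        self.run = 0              # length of the current motion streak
        self.detected = False
        self.last_motion = None   # time of the last motion frame seen

    def update(self, frame_has_motion, now):
        if frame_has_motion:
            self.run += 1
            if self.detected:
                self.last_motion = now  # extend the hold window
        else:
            self.run = 0
        if self.run >= self.min_sequence and not self.detected:
            self.detected = True
            self.last_motion = now
        if self.detected and not frame_has_motion:
            if now - self.last_motion > self.hold_seconds:
                self.detected = False
        return self.detected

d = MotionDebouncer()
# Two noisy frames don't trigger; three in a row do; detection then
# holds through a single quiet frame within the 3-second window.
states = [d.update(m, t)
          for t, m in enumerate([True, True, False, True, True, True, False])]
print(states)        # [False, False, False, False, False, True, True]
print(d.update(False, 9))  # False: quiet for longer than hold_seconds
```

Pulling the state machine out like this also makes it easier to replace the per-frame motion test (the vector-magnitude count) without touching the triggering logic.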
Code: Select all
camera.start_recording('/dev/null', splitter_port=2, resize=(motion_width,motion_height) ,format='h264', motion_output=MyMotionDetector(camera, size=(motion_width,motion_height)))
Yes, the annotation is drawn into the image before the encoder processes it, so the changing pixels of the annotation text cause a small "virtual motion". Enabling or disabling the annotation causes a strong motion.
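One way to stop the annotation text registering as motion is to ignore the motion vectors for the macroblocks it covers before thresholding. This is only a sketch: the number of rows to mask is an assumption (the annotation sits over the top rows of 16x16 macroblocks), and plain lists stand in for the motion-vector magnitude array:

```python
def mask_annotation_rows(magnitudes, rows_to_ignore=2):
    # magnitudes: 2-D list of per-macroblock motion magnitudes.
    # Zero the top rows (assumed to lie under the annotation text) so
    # the changing text doesn't register as motion. rows_to_ignore is
    # a guess you'd tune to your annotation size and resolution.
    masked = [row[:] for row in magnitudes]
    for r in range(min(rows_to_ignore, len(masked))):
        masked[r] = [0] * len(masked[r])
    return masked

frame = [
    [90, 95, 88],   # annotation row: large "virtual motion"
    [ 2,  1,  3],
    [ 0, 50,  1],   # genuine motion in the middle cell
]
masked = mask_annotation_rows(frame, rows_to_ignore=1)
above = sum(1 for row in masked for v in row if v > 40)
print(above)  # 1: only the genuine motion survives the mask
```

In the real analyse() method the same slicing can be applied to the numpy magnitude array before counting vectors above the threshold.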
Code: Select all
/usr/lib/python2.7/dist-packages/picamera/camera.py:120: PiCameraDeprecated: Accessing framerate as a tuple is deprecated; this value is now a Fraction, so you can query the numerator and denominator properties directly, convert to an int or float, or perform arithmetic operations and comparisons directly
'Accessing framerate as a tuple is deprecated; this value is '
Code: Select all
with picamera.PiCamera() as camera:
camera.led = False
camera.exposure_mode = 'night'
#camera.exposure_mode ='auto'
camera.rotation = 180
camera.resolution = (640, 480)
camera.start_preview()
camera.annotate_background = True
time.sleep(2)
while True:
camera.annotate_text = dt.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
camera.capture(stream, 'jpeg')
yield stream.getvalue()
stream.seek(0)
stream.truncate()
time.sleep(.2)
BorisS wrote:> but it looks like January is going to be very busy for me
Hi waveform80 - hope February is going to be better ;)
Heh - we'll see - I'm currently transitioning jobs, so I'm not sure!
BorisS wrote:I'd like to ask for a new picamera feature: Would it be possible to output pts (timestamp information) also for picamera, as it was done by ethanol100 for raspivid? (thread here http://www.raspberrypi.org/forums/viewt ... 43&t=98541 code here https://github.com/ethanol100/userland/ ... RaspiVid.c)
Proposal/wish for output format would be:
- output in mkvmerge format v2 as done for the raspivid modification (adding a "# timecode format v2\n" header line)
- splitting output into multiple txt files along "camera.split_recording('%d.h264' % i)" calls would be perfect to obtain matching video segments/pts files
Kind regards,
Boris
Actually, picamera already provides PTS via the PiCamera.frame.timestamp property. Outputting that as a separate text file would be easiest with a custom output which writes the timestamp in whatever format you want to one file and the video data to another. Something like this:
Code: Select all
from __future__ import unicode_literals

import io
import picamera

class PtsOutput(object):
    def __init__(self, camera, video_filename, pts_filename):
        self.camera = camera
        self.video_output = io.open(video_filename, 'wb')
        self.pts_output = io.open(pts_filename, 'w')
        self.start_time = None

    def write(self, buf):
        self.video_output.write(buf)
        if self.camera.frame.complete and self.camera.frame.timestamp:
            if self.start_time is None:
                self.start_time = self.camera.frame.timestamp
                self.pts_output.write('# timecode format v2\n')
            self.pts_output.write('%f\n' % (
                (self.camera.frame.timestamp - self.start_time) / 1000.0))

    def flush(self):
        self.video_output.flush()
        self.pts_output.flush()

    def close(self):
        self.video_output.close()
        self.pts_output.close()

with picamera.PiCamera() as camera:
    camera.resolution = (1280, 720)
    camera.framerate = 24
    camera.start_recording(PtsOutput(camera, 'foo.h264', 'foo.txt'),
                           format='h264')
    camera.wait_recording(30)
    camera.stop_recording()
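The timestamp arithmetic in PtsOutput is easy to check on its own: frame timestamps from the camera are in microseconds, and mkvmerge's "timecode format v2" wants one millisecond value per line, relative to the first frame. A camera-free sketch of just that conversion (the helper name is mine):

```python
def pts_to_timecodes(timestamps_us):
    # Convert microsecond PTS values into timecode-v2 lines:
    # a header, then milliseconds relative to the first timestamp.
    lines = ['# timecode format v2']
    start = timestamps_us[0]
    for ts in timestamps_us:
        lines.append('%f' % ((ts - start) / 1000.0))
    return lines

# Three frames roughly 1/24 s (41667 us) apart:
lines = pts_to_timecodes([1000000, 1041667, 1083333])
print(lines)  # ['# timecode format v2', '0.000000', '41.667000', '83.333000']
```

Feeding the resulting file to mkvmerge alongside the raw H.264 stream should give the same effect as the raspivid modification mentioned above.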