Code: Select all
camera.split_recording(PtsOutput(camera, 'video_%d.h264' % index, 'timecodes_%d.txt' % index))
Code: Select all
#camera.drc_strength = 'medium'
You need a completely new version of RPi.GPIO.
trevor-s wrote:Do I need to add a new entry, something like:
4: 32, # model 2B
?
trevor-s wrote:I'm new to the RPi, and I'm helping introduce them into my local primary school ...
from picamera.camera import PiCamera
  File "/usr/lib/python2.7/dist-packages/picamera/camera.py", line 83, in <module>
    import RPi.GPIO as GPIO
RuntimeError: This module can only be run on a Raspberry Pi!
Looking in the source of camera.py, I see:
try:
    import RPi.GPIO as GPIO
    GPIO_LED_PIN = {
        0: 5,  # compute module (XXX is this correct?)
        1: 5,  # model B rev 1
        2: 5,  # model B rev 2
        3: 32, # model B+
        }[GPIO.RPI_REVISION]
except ImportError:
    # Can't find RPi.GPIO so just null-out the reference
    GPIO = None
?
Ben Croston has made a release candidate version of RPi.GPIO available for testing.
waveform80 wrote:Once RPi.GPIO is updated, picamera will need another update to re-enable it (not least because I've no idea what GPIO the LED will wind up on yet), but I'll keep an eye on the RPi.GPIO ticket and get another one out shortly after it's done.
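As an aside, a plain dict lookup like the `GPIO_LED_PIN` one above raises KeyError for any board revision it doesn't know about. A defensive sketch (the pin values and helper name are illustrative only, not picamera's actual code) uses `.get` with a fallback:

```python
# Hypothetical revision-to-LED-pin table; values are illustrative only
GPIO_LED_PIN = {
    0: 5,   # compute module
    1: 5,   # model B rev 1
    2: 5,   # model B rev 2
    3: 32,  # model B+
}

def led_pin(revision, default=None):
    # .get avoids a KeyError on revisions added after the table was written
    return GPIO_LED_PIN.get(revision, default)

print(led_pin(3))   # 32
print(led_pin(4))   # None: unknown board (e.g. a model 2B)
```

This degrades gracefully on new hardware, though as noted above the real fix still requires an updated RPi.GPIO that reports the new revision in the first place.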
Code: Select all
Linux raspberrypi 3.18.6+ #754 PREEMPT Sun Feb 8 20:22:45 GMT 2015 armv6l GNU/Linux
Code: Select all
/usr/lib/python2.7/dist-packages/picamera/camera.py:120: PiCameraDeprecated: Accessing framerate as a tuple is deprecated; this value is now a Fraction, so you can query the numerator and denominator properties directly, convert to an int or float, or perform arithmetic operations and comparisons directly
  'Accessing framerate as a tuple is deprecated; this value is '
mmal: mmal_vc_component_enable: failed to enable component: ENOSPC
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/gevent/pywsgi.py", line 508, in handle_one_response
    self.run_application()
  File "/usr/local/lib/python2.7/dist-packages/gevent/pywsgi.py", line 495, in run_application
    self.process_result()
  File "/usr/local/lib/python2.7/dist-packages/gevent/pywsgi.py", line 484, in process_result
    for data in self.result:
  File "app.py", line 110, in mjpeg
    with picamera.PiCamera() as camera:
  File "/usr/lib/python2.7/dist-packages/picamera/camera.py", line 419, in __init__
    self.STEREO_MODES[stereo_mode], stereo_decimate)
  File "/usr/lib/python2.7/dist-packages/picamera/camera.py", line 551, in _init_camera
    prefix="Camera component couldn't be enabled")
  File "/usr/lib/python2.7/dist-packages/picamera/exc.py", line 133, in mmal_check
    raise PiCameraMMALError(status, prefix)
PiCameraMMALError: Camera component couldn't be enabled: Out of resources (other than memory)
I've just given this a whirl under daylight conditions (the vast majority of my picamera testing is usually done in the evenings) and it appears the analog gain will never go above one, so that particular loop will never finish. The problem with updating it to use a lower value is that in indoor/darkened conditions (which, from correspondence, is the majority use-case here) the loop finishes too quickly and the subsequent shots use insufficient gain.
pablok wrote:Hi all,
Consistent Images (basic recipes 4.6) seemed the solution for watching my surroundings get dark during the partial solar eclipse. I tested it inside the house and it worked fine. Then I changed the params to take a lot of images and set the picam up in the garden.
After the eclipse I took it to my study to stitch the images together with ffmpeg.
Not one image to be found on the card....
Started the Pi again and tested, just to see that everything was working fine under these conditions.
What I suspect is that the lines:
# Wait for analog gain to settle on a higher value than 1
while camera.analog_gain <= 1:
    time.sleep(0.1)
have kept the Pi under daylight conditions in the loop.
What do you think has happened?
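Given the behaviour described in the reply above (in daylight the gain never exceeds 1, so the loop spins forever), one defensive option is to give the wait loop a timeout. The helper name, threshold, and 10-second default below are assumptions, not picamera API, and a fake camera object stands in for PiCamera so the sketch is self-contained:

```python
import time
from fractions import Fraction

def wait_for_gain(camera, threshold=1, timeout=10.0, poll=0.1):
    """Wait until camera.analog_gain exceeds threshold; give up after timeout seconds."""
    deadline = time.time() + timeout
    while camera.analog_gain <= threshold:
        if time.time() >= deadline:
            return False  # gave up: in bright daylight the gain may never exceed 1
        time.sleep(poll)
    return True

# Stand-in for PiCamera, for illustration only
class FakeCamera:
    def __init__(self, gain):
        self.analog_gain = gain

print(wait_for_gain(FakeCamera(Fraction(2, 1))))               # True immediately
print(wait_for_gain(FakeCamera(Fraction(1, 1)), timeout=0.3))  # False after the timeout
```

The caller can then decide whether a False result (bright scene, gain stuck at 1) is acceptable and fix the exposure anyway, rather than hanging forever.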
Only necessary if you want to construct fractions of your own to compare against the values returned by things like analog_gain.
pablok wrote:(BTW shouldn't there be an import of the Fractions library?)
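To illustrate the point: a Fraction compares directly against ints and floats, so the explicit import is only needed when building a Fraction of your own (the value here is made up for the example):

```python
from fractions import Fraction

# A made-up value standing in for something like camera.analog_gain
g = Fraction(7, 4)

print(g > 1)                # True: compares directly against an int, no import needed
print(float(g))             # 1.75: explicit conversion when a float is wanted
print(g >= Fraction(3, 2))  # True: the import matters when you construct your own
```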
Code: Select all
import io
import time
import picamera
import cv2
import picamera.array
import numpy as np
with picamera.PiCamera() as camera:
    camera.resolution = (1296, 972)
    camera.framerate = 6
    print("Selecting adequate fixed settings")
    # Wait for analog gain to settle on a higher value than 1
    while camera.analog_gain <= 1:
        time.sleep(0.1)
    # Now fix the values
    camera.shutter_speed = camera.exposure_speed
    camera.exposure_mode = 'off'
    g = camera.awb_gains
    camera.awb_mode = 'off'
    camera.awb_gains = g
    # Start and wait for camera setup
    camera.start_preview()
    time.sleep(2)
    # Capture using JPEG and the video port
    stream = io.BytesIO()
    start = time.time()
    camera.capture(stream, format='jpeg', use_video_port=True)
    # Construct a numpy array from the stream
    data = np.fromstring(stream.getvalue(), dtype=np.uint8)
    # "Decode" the image from the array, preserving colour
    image = cv2.imdecode(data, 1)
    cv2.imwrite('jpegVIDEO.png', image)
    print("Time with jpeg+stream+numpy = %.4f" % (time.time() - start))
    # Capture using JPEG and the stills port
    stream = io.BytesIO()
    start = time.time()
    camera.capture(stream, format='jpeg')
    data = np.fromstring(stream.getvalue(), dtype=np.uint8)
    image = cv2.imdecode(data, 1)
    cv2.imwrite('jpegSTILL.png', image)
    print("Time with jpeg+stream+numpy = %.4f" % (time.time() - start))
    # Capture using picamera.array and the stills port
    start = time.time()
    with picamera.array.PiRGBArray(camera) as stream:
        camera.capture(stream, format='bgr')
        # At this point the image is available as stream.array
        background = stream.array
    cv2.imwrite('picameraSTILL.png', background)
    print("Time with picamera.array = %.4f" % (time.time() - start))
    # Capture using picamera.array and the video port
    start = time.time()
    with picamera.array.PiRGBArray(camera) as stream:
        camera.capture(stream, format='bgr', use_video_port=True)
        background = stream.array
    cv2.imwrite('picameraVIDEO.png', background)
    print("Time with picamera.array = %.4f" % (time.time() - start))
Hmm, this is a complicated one - it's an unintended side effect of a series of interactions "under the hood" that your script is making.Gelu wrote:Is it possible to use the video port with picamera.array?
I posted this question in another forum but it was buried in another post and did not receive much attention... (at the end of: http://www.raspberrypi.org/forums/viewt ... 52#p721252)
Sorry if it's not correct to repost here.
This code demonstrates my issue (the script above). All images but the last one (the one using picamera.array and the video port) seem to work well; the last one just saves a completely black image.
Am I doing something incorrectly?
Thanks,
Angel
Code: Select all
import picamera.mmal as mmal
...
camera._still_encoding = mmal.MMAL_ENCODING_BGR24
Hmm, seems odd. The camera component supports delivering the MMAL encodings BGRA, BGR24, RGB24, and RGB16 (all available via the V4L2 driver). Why the need for a resizer to do the format conversion? Admittedly, some of those formats weren't added until relatively late (i.e. July/August 2014), so they may predate when you were adding support.
waveform80 wrote:If you adjust your script to only capture BGR images from the video port (i.e. just the last bit), everything works fine because the video port encoding is never adjusted (an otherwise redundant resizer is inserted into the MMAL pipeline to handle the YUV->BGR conversion in the case of video port captures - actually it's a little more complicated than that, because resizers only work with RGBA and BGRA, not RGB and BGR, but anyway...).
Code: Select all
import picamera
import datetime as dt
#640x480 90fps
#1280x720 60fps
#1920x1080 30fps
video_length = 5
y_var = 1920
x_var = 1080
fps_var = 30
def videoCap():
    dt_var = dt.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
    with picamera.PiCamera() as camera:
        camera.resolution = (y_var, x_var)
        camera.framerate = fps_var
        camera.annotate_text = dt_var
        camera.start_recording('%s.h264' % dt_var, quality=35, bitrate=0)
        print("Recording 1080p Video at 30fps for %d seconds" % video_length)
        camera.wait_recording(video_length)
        camera.stop_recording()

videoCap()
I really should refine the wait_recording docs - several people have found them confusing now. The gist is as follows:
grats wrote:Is there a way to have annotate_text run and update its time while the video is recording?
is there a "while is_recording" or something? I couldn't find anything..
This is the first time I've used python as well.
I'm just messing around learning, I'd like to have annotate_text trigger to update the seconds while it is recording.. we have "wait_recording"
do I have to do something like putting a loop in there to write annotate_text every 1 second for video_length and not use wait_recording?? I read that was bad in case the drive you are writing to ran out of space etc... but I do not know now, I assume there's something I just don't know about picamera / python
Code: Select all
...
start = dt.datetime.now()
while (dt.datetime.now() - start).total_seconds() < video_length:
    camera.wait_recording(0.1)
    camera.annotate_text = dt.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
...
You don't *have* to call wait_recording at all. You could just use time.sleep instead, or wait on anything you like and most of the time your script would run absolutely the same. When you call start_recording, a background thread is spawned which handles recording video to the output object you've provided (a file, a stream, a network socket, whatever). Your script can continue doing whatever it likes, and the background recording thread will keep running, dumping recorded frames to your output object. Eventually, your script will (hopefully) call stop_recording, and the background recording thread will be shut down.
So what's the point of wait_recording? The purpose has to do with the scenario where something goes wrong. What happens if the output object is a file, and the destination disk runs out of space? What happens if the output object is a network socket, and the Wifi disconnects? Basically, what happens if something goes catastrophically wrong in the background recording thread? Well, firstly the background thread will terminate due to whatever exception was raised and then ... well, nothing will happen because the exception was raised in the background thread and your script (which is running in the main thread) has no way of seeing such exceptions.
This is the purpose of wait_recording. It effectively says "wait n seconds for something to go wrong in the background recording thread; if something does go wrong, raise the exception in the main thread". In other words, it's a method for determining whether the background recording thread has terminated and for transferring any exception from the background recording thread to the main thread.
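The mechanism described above (an exception captured in the background thread and re-raised by the wait call in the main thread) can be sketched in plain Python. The Recorder class and its methods here are hypothetical, for illustration only, not picamera's actual internals:

```python
import threading

class Recorder:
    """Hypothetical sketch of the wait/re-raise pattern described above."""

    def __init__(self, frames):
        self._exc = None
        self._thread = threading.Thread(target=self._record, args=(frames,))
        self._thread.start()

    def _record(self, frames):
        try:
            for frame in frames:
                pass  # write the frame to the output object here
            raise IOError("disk full")  # simulate the output failing
        except Exception as e:
            # The main thread cannot see this until wait() is called
            self._exc = e

    def wait(self, timeout):
        # "Wait timeout seconds for something to go wrong"; if the background
        # thread stored an exception, re-raise it in the calling thread
        self._thread.join(timeout)
        if self._exc is not None:
            raise self._exc

recorder = Recorder(range(100))
try:
    recorder.wait(2)
except IOError as e:
    print("recording failed:", e)  # prints: recording failed: disk full
```

Without the wait() call, the IOError would die silently with the background thread and the main script would carry on none the wiser.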
Hopefully from the above you can see that while you don't *have* to call wait_recording, it's a good idea to do so periodically just to check things haven't gone wrong. That said, you can call it as many times as you like, with as short a wait period as you like. So if you want to update annotate_text with a timestamp containing seconds, I'd recommend doing something like the following after calling start_recording:
There's something vaguely similar to this in the final code block of the text annotation recipe, now I come to think of it.
Good luck!
Dave.
Code: Select all
#i = 0 set somewhere up here
camera.start_recording('%s.h264' % dt_var, quality=40, bitrate=0)
print("Recording 1080p Video at 30fps for %d seconds" % video_length)
while i < video_length:
    i += 1
    camera.annotate_text = dt.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
    camera.wait_recording(1)
camera.stop_recording()
Code: Select all
import picamera
from time import sleep
camera = picamera.PiCamera()
camera.capture('image.jpg')
camera.start_preview()
camera.vflip = True
camera.hflip = True
camera.brightness = 60
camera.start_recording('video.h264')
sleep(5)
camera.stop_recording()
Code: Select all
Traceback (most recent call last):
  File "/home/pi/picamera.py", line 1, in <module>
    import picamera
  File "/home/pi/picamera.py", line 4, in <module>
    camera = picamera.PiCamera()
AttributeError: 'module' object has no attribute 'PiCamera'
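Judging by the paths in the traceback, the script itself has been saved as /home/pi/picamera.py, so `import picamera` finds the script rather than the installed library (renaming the script, and deleting any leftover picamera.pyc, is the usual fix). A self-contained demonstration of the shadowing effect, using a throwaway directory and file names chosen just for the example:

```python
import os
import subprocess
import sys
import tempfile

with tempfile.TemporaryDirectory() as d:
    # An empty file named picamera.py shadows any installed picamera library,
    # because the script's own directory comes first on sys.path
    open(os.path.join(d, 'picamera.py'), 'w').close()
    script = os.path.join(d, 'check.py')
    with open(script, 'w') as f:
        f.write("import picamera\nprint(picamera.__file__)\n")
    out = subprocess.run([sys.executable, script],
                         capture_output=True, text=True)
    print(out.stdout.strip())  # the shadowing file in the temp dir, not the library
```

The empty shadow module has no PiCamera attribute, which is exactly the AttributeError in the traceback above.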