shinagan
Posts: 9
Joined: Fri Oct 10, 2014 3:19 pm

Custom image filter with picamera

Mon Jun 17, 2019 7:59 pm

Hello,

I would like to record a video with picamera in which each frame is processed/filtered before being saved. The processing is simple, something like thresholding each RGB component.
However, after grabbing each frame with a custom output (extremely fast) and processing it (extremely fast), most of the time ends up being spent on adding the frame to the stream to be saved (about 0.5 s per frame).

I am using cv2.VideoWriter() and I assume there is a lot of overhead involved. What is the proper way to save these processed frames to an mjpeg/h264 file with picamera?


Here is some example code that does not even include the processing step, and it still takes about 0.5 s per frame.
I save to mjpeg format because the resolution is larger than what h264 can handle. Please let me know if there are any major differences for resolutions below 1920x1080 where h264 could be used.

Code:

import time

import cv2
import numpy as np
import picamera


class MyOutput(object):
	def __init__(self, fileName, resolution, fps):
		self.frameCount = 0
		self.processingTime = []
		self.savingTime = []
		self.res = resolution
		self.fps = fps

		# Define writer object
		self.fourcc = cv2.VideoWriter_fourcc(*'MJPG')
		self.out = cv2.VideoWriter(fileName, self.fourcc, self.fps, resolution)

	def write(self, buf):
		# 1. Retrieve a frame
		t0 = time.perf_counter()
		rawArray = np.frombuffer(buf, dtype=np.uint8, count=self.res[0]*self.res[1]*3).reshape((self.res[1], self.res[0], 3))

		# 2. Save frame to file
		t1 = time.perf_counter()
		self.out.write(rawArray) # 0.5 S AVERAGE!!
		t2 = time.perf_counter()
		print("Processed frame {}\t".format(self.frameCount))

		self.frameCount += 1
		self.processingTime.append(t1-t0)
		self.savingTime.append(t2-t1)
		

	def flush(self):
		# this will be called at the end of the recording; do whatever you want
		# here
		self.out.release()
		print("Average processing time: {} s".format(np.mean(self.processingTime)))
		print("Average saving time: {} s".format(np.mean(self.savingTime)))

if __name__ == '__main__':
	with picamera.PiCamera() as camera:
		#camera.resolution = (1920, 1080)
		camera.resolution = (3280, 1024)
		camera.sensor_mode = 2
		camera.hflip = True
		camera.vflip = True
		camera.framerate = 2
		bitrate = 17000000
		time.sleep(2) # let the camera warm up and set gain/white balance
		output = MyOutput('my_output.mjpeg', camera.resolution, camera.framerate)
		# Saving image via OpenCV requires 'bgr' output	
		print("Manual saving")
		camera.start_recording(output, 'bgr')
		camera.wait_recording(5) # record 5 seconds worth of data
		camera.stop_recording()
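
For reference, the processing step I left out of the code above is just a per-channel threshold along these lines (sketch only, threshold value is arbitrary):

Code:

import numpy as np

# Per-channel threshold on an HxWx3 uint8 BGR frame, as produced in MyOutput.write()
THRESHOLD = 128  # arbitrary value, just for illustration

def threshold_frame(frame):
	return np.where(frame > THRESHOLD, 255, 0).astype(np.uint8)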

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 6899
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: Custom image filter with picamera

Tue Jun 18, 2019 7:46 am

OpenCV is hideously slow for many operations, and has no hardware acceleration. JPEG and MJPEG also generally operate on yuv data, not rgb, so there is a format conversion to be done too.
You ought to look at the MMAL encoder options. PiCamera does expose some of those (https://picamera.readthedocs.io/en/late ... oders.html), but I'm not sure whether it allows them to be used standalone - they appear to always want to connect up to a source component.
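
For a quick look at what is exposed, something like this should work (untested; the MMALImageEncoder class and its ports are documented in the mmalobj layer):

Code:

from picamera import mmalobj as mo

# Instantiate the GPU image_encode component standalone and inspect its ports
encoder = mo.MMALImageEncoder()
print(encoder.inputs)   # one input port (raw frames in)
print(encoder.outputs)  # one output port (encoded JPEG out)
encoder.close()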
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

shinagan
Posts: 9
Joined: Fri Oct 10, 2014 3:19 pm

Re: Custom image filter with picamera

Tue Jun 18, 2019 1:39 pm

Too bad. I tried again, saving the files to a RAM disk - it's not faster, so my SD card was not the bottleneck. It really does seem to take 0.5 s per frame for the rgb conversion and save.
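
For what it's worth, the cost can be reproduced without the camera at all; something like this (synthetic frame at the same resolution) isolates the cv2.VideoWriter cost:

Code:

import time

import cv2
import numpy as np

# Time cv2.VideoWriter.write() alone, with a synthetic frame and no camera
res = (3280, 1024)
frame = np.random.randint(0, 256, (res[1], res[0], 3), dtype=np.uint8)

out = cv2.VideoWriter('bench.mjpeg', cv2.VideoWriter_fourcc(*'MJPG'), 2, res)
times = []
for _ in range(10):
	t0 = time.perf_counter()
	out.write(frame)
	times.append(time.perf_counter() - t0)
out.release()
print("Average write time: {:.3f} s".format(np.mean(times)))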

I'm not an expert in the MMAL functions; is there example code I could tailor to try to achieve this? Looking at the PiEncoder section, I saw that there is a Custom Encoder example in Advanced Recipe 15, but it only shows how to count frames, not how to grab them. Are frames accessible from a custom encoder via a similar write method?

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 6899
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: Custom image filter with picamera

Tue Jun 18, 2019 1:47 pm

PiCamera has set everything up to expect the input to components to be a connection from another component. Typically the original source will have been the camera component, which requires no other input. It only registers user callbacks for the output data.

image_encode (which is what you want) has an input port and an output port, and you need to supply your own data to both the input and output ports.
If you drop down to the lower-level mmalobj library, then you can do what you want. See https://picamera.readthedocs.io/en/late ... g-encoding for JPEG encoding.
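
An untested sketch of the kind of thing I mean, based on that mmalobj recipe (the port setup details, and whether image_encode will accept BGR24 directly rather than I420, may need adjusting; the all-zero frame is just a stand-in for one of your processed frames):

Code:

import io

import numpy as np
from picamera import mmal, mmalobj as mo

WIDTH, HEIGHT = 3280, 1024
output = io.open('frame.jpg', 'wb')

# Stand-alone image_encode component: raw frames in, JPEG out
encoder = mo.MMALImageEncoder()

# Input port: raw frames (try I420 if the encoder rejects BGR24)
encoder.inputs[0].format = mmal.MMAL_ENCODING_BGR24
encoder.inputs[0].framesize = (WIDTH, HEIGHT)
encoder.inputs[0].commit()

# Output port: copy the geometry from the input, then switch to JPEG
encoder.outputs[0].copy_from(encoder.inputs[0])
encoder.outputs[0].format = mmal.MMAL_ENCODING_JPEG
encoder.outputs[0].commit()
encoder.outputs[0].params[mmal.MMAL_PARAMETER_JPEG_Q_FACTOR] = 90

def input_callback(port, buf):
	# Called when the encoder has finished with a buffer we fed it
	return False

def output_callback(port, buf):
	# Called with chunks of encoded JPEG; the frame is complete on FRAME_END
	output.write(buf.data)
	return bool(buf.flags & mmal.MMAL_BUFFER_HEADER_FLAG_FRAME_END)

encoder.inputs[0].enable(input_callback)
encoder.outputs[0].enable(output_callback)
encoder.enable()

# Feed one processed frame. NOTE: MMAL may round the width up to a multiple of
# 32 (3280 -> 3296), in which case each row needs padding to match that stride.
raw_bgr_bytes = np.zeros((HEIGHT, WIDTH, 3), dtype=np.uint8).tobytes()
buf = encoder.inputs[0].get_buffer()
buf.data = raw_bgr_bytes
buf.flags = mmal.MMAL_BUFFER_HEADER_FLAG_FRAME_END | mmal.MMAL_BUFFER_HEADER_FLAG_EOS
encoder.inputs[0].send_buffer(buf)

# ...wait for output_callback to see FRAME_END, then tidy up
encoder.inputs[0].disable()
encoder.outputs[0].disable()
encoder.close()
output.close()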
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

shinagan
Posts: 9
Joined: Fri Oct 10, 2014 3:19 pm

Re: Custom image filter with picamera

Tue Jun 18, 2019 2:41 pm

Wow this seems to just do it!
From what I can read, the frame (in this case, in RGB format) is shuttled back and forth between the GPU and CPU RAM. The frame processing would then be done by the CPU, right? Is there a way to have the (simple) processing done by the GPU without shuttling the data back and forth?

For instance, let's say I build a motion detector on the full frames (not using h264, which bins them beforehand). Rather than shuttling frames, can I insert a step in the GPU flow that would compute that difference and then encode the resulting stream to an mjpeg/h264 file? That way, any kind of (simple) filter could be applied to the camera feed before being saved to file, at GPU speed!
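
Just to make the idea concrete, on the CPU the difference step itself would be something like this (numpy sketch); the question is whether an equivalent step can run on the GPU before encoding:

Code:

import numpy as np

# Absolute per-pixel difference against the previous frame
class FrameDiff(object):
	def __init__(self):
		self.prev = None

	def __call__(self, frame):
		# frame: HxWx3 uint8 array
		if self.prev is None:
			diff = np.zeros_like(frame)
		else:
			diff = np.abs(frame.astype(np.int16) - self.prev.astype(np.int16)).astype(np.uint8)
		self.prev = frame.copy()
		return diff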

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 6899
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: Custom image filter with picamera

Thu Jun 20, 2019 11:53 am

You can't insert random processing steps into the GPU pipeline other than as MMAL or IL components. Both APIs allow buffers to be brought back to the ARM and returned to the GPU.
MMAL also allows for a zero-copy option that maps the GPU's memory buffer into the ARM MMU for direct access. You do have to be a little careful about cache management for that to work correctly, but hopefully the framework deals with that for you. You won't be able to use that from Python though.
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.
