I'm working on a project for college that involves real-time image processing on a Raspberry Pi Zero W.
My main problem is processing speed.
I continuously take frames from the camera stream and display them on an ILI9341 display using the Adafruit_ILI9341 library.
The problem is that the display requires RGB565 data, and the CPU spends too much time on the conversion.
From Adafruit_ILI9341 (https://github.com/adafruit/Adafruit_Py ... ILI9341.py):
Code: Select all
def image_to_data(image):
    """Generator function to convert a PIL image to 16-bit 565 RGB bytes."""
    # NumPy is much faster at doing this. NumPy code provided by:
    # Keith (https://www.blogger.com/profile/02555547344016007163)
    pb = np.array(image.convert('RGB')).astype('uint16')
    color = ((pb[:,:,0] & 0xF8) << 8) | ((pb[:,:,1] & 0xFC) << 3) | (pb[:,:,2] >> 3)
    return np.dstack(((color >> 8) & 0xFF, color & 0xFF)).flatten().tolist()
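One thing I noticed about the routine above: the final .flatten().tolist() builds a Python list with one entry per byte, which is likely a big part of the cost on a Pi Zero. Here is a sketch of a variant I came up with (my own code, not from the library) that stays inside NumPy and returns a bytes object instead:

```python
import numpy as np

def rgb888_to_rgb565_bytes(pb):
    """Pack an HxWx3 uint8 RGB array into big-endian RGB565 bytes.

    Staying inside NumPy and returning a single bytes object avoids
    building a per-byte Python list, which is usually the slowest
    part of the library routine above.
    """
    pb = pb.astype(np.uint16)
    # Same 5-6-5 bit packing as the library code.
    color = ((pb[:, :, 0] & 0xF8) << 8) | ((pb[:, :, 1] & 0xFC) << 3) | (pb[:, :, 2] >> 3)
    # The ILI9341 expects the high byte first, so serialize as big-endian.
    return color.astype('>u2').tobytes()
```

Whether the driver's SPI layer accepts a bytes object instead of a list depends on the library version, so this may need adapting (e.g. wrapping the result in list() as a fallback).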
My idea was to change the camera's encoding directly to RGB565, but I don't know exactly how.
Code: Select all
def outputs():
    stream = io.BytesIO()
    while True:
        yield stream
        stream.seek(0)
        #thresh = 130
        #fn = lambda x : 255 if x > thresh else 0
        #img = Image.open(stream).convert('L').point(fn, mode='1')
        img = Image.open(stream)
        disp.display(img)
        stream.seek(0)
        stream.truncate()

with picamera.PiCamera() as camera:
    camera.resolution = (240, 320)
    camera.rotation = 180
    #c = picamera.Color.rgb_565
    #camera.framerate = 60
    time.sleep(2)
    start = time.time()
    camera.capture_sequence(outputs(), 'jpeg', use_video_port=True)
    finish = time.time()
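As far as I can tell from the picamera docs, the unencoded capture formats are 'yuv', 'rgb', 'rgba', 'bgr' and 'bgra', so there is no RGB565 option directly. A middle ground I'm considering (untested sketch, assuming picamera's documented padding for unencoded captures: width rounded up to a multiple of 32, height to a multiple of 16) is to capture raw 'rgb' frames instead of 'jpeg', which at least removes the JPEG encode/decode round-trip:

```python
import numpy as np

def unpack_rgb_frame(buf, width, height):
    """Strip the padding from a raw picamera 'rgb' frame.

    Per the picamera docs, unencoded captures are padded: width up to a
    multiple of 32 and height up to a multiple of 16, so a 240x320
    capture actually arrives as 256x320x3 bytes.
    """
    padded_w = (width + 31) // 32 * 32
    padded_h = (height + 15) // 16 * 16
    frame = np.frombuffer(buf, dtype=np.uint8).reshape(padded_h, padded_w, 3)
    # Copy so the cropped view is contiguous for PIL.
    return np.ascontiguousarray(frame[:height, :width])

def stream_to_display(disp, width=240, height=320):
    """Capture raw RGB frames and push them to the display, no JPEG step."""
    import io
    import time
    import picamera            # Pi-only, so imported lazily here
    from PIL import Image

    def outputs():
        stream = io.BytesIO()
        while True:
            yield stream
            frame = unpack_rgb_frame(stream.getvalue(), width, height)
            disp.display(Image.fromarray(frame))
            stream.seek(0)
            stream.truncate()

    with picamera.PiCamera() as camera:
        camera.resolution = (width, height)
        camera.rotation = 180
        time.sleep(2)  # let the sensor settle
        camera.capture_sequence(outputs(), 'rgb', use_video_port=True)
```

The RGB888-to-RGB565 packing still happens inside disp.display(), but the JPEG compression and decompression are gone. I haven't run this on the hardware yet, so the padding arithmetic and the capture loop are my reading of the docs, not verified behaviour.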
I read the encoding documentation but still don't know how to implement it:
https://picamera.readthedocs.io/en/rele ... m-encoders
https://picamera.readthedocs.io/en/rele ... #piencoder
If someone has already done this and is willing to help, I would be very glad.