- Distortion Correction on the ISP
- Image Effect on the GPU
- GL Shader on the GPU
While pursuing this approach, I noticed memory reserved for the distortion tables in this GitHub issue. Perhaps there's a way to manipulate the lookup tables in memory without modifying the GPU source?
The image effect shader seems to be a non-starter due to the locked-down GPU firmware.
I noticed that raspistill can apply an OpenGL shader to its output; however, this feature does not appear to be present in raspivid. A shader pass seems feasible from a bandwidth and memory perspective, as the shader would be relatively simple, and I'm only streaming at 720p with modest H.264 encoding demands (baseline profile, etc.). I've gathered a few references to people who have attempted this, but I'm unclear on the best path forward.
My best option at this point seems to be getting a pointer to an OpenGL OES texture from the camera's MMAL output and running it through a shader, but I'm then unsure how to feed the resulting texture back to the video encoder in a performant way. Any advice here that doesn't involve a copy through the CPU?
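For what it's worth, here is roughly the fragment shader I have in mind, kept as a Python string to match my current codebase. The `remap` LUT texture and the identifier names are my own invention; `samplerExternalOES` and the `GL_OES_EGL_image_external` extension are the standard GLES2 mechanism for sampling an EGLImage-backed camera frame:

```python
# Rough sketch of the undistortion fragment shader, stored as a Python
# string since my current code is Python. 'remap' is a hypothetical
# precomputed LUT texture holding normalized source coordinates.
FRAGMENT_SHADER = """
#extension GL_OES_EGL_image_external : require
precision mediump float;
uniform samplerExternalOES camera;  /* EGLImage-backed camera frame */
uniform sampler2D remap;            /* LUT: dest coord -> source coord */
varying vec2 v_texcoord;
void main() {
    vec2 src = texture2D(remap, v_texcoord).rg;  /* look up source coord */
    gl_FragColor = texture2D(camera, src);       /* undistorted sample */
}
"""
```

A single dependent texture fetch per pixel like this is about as cheap as a shader pass gets, which is why I think the 720p budget should hold.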
Thank you to 6by9, jamesh, and the other community members who have contributed to the camera development effort!
- stackoverflow: Capture video from camera on Raspberry Pi and filter in OpenGL before encoding
- RPi forums: is there a way to get from mmal to opengl texture
- RPi forums: camera -> egl texture path question
- RPi forums: PiCam - Get YUV data first, then push it to the encoder
- I'm using a Pi Zero in a headless configuration.
- H.264 video is streamed over the network, not to disk.
- My current implementation is written in Python using the PiCamera package. However, given the complexity and performance demands of this project, I will likely port the code to C++.
- I am aware that I will likely need to adjust the lens shading tables as well, though this is a lesser concern.