hanzelpeter wrote:
Yes, that's a bug and once I upload it to github.com or somewhere I will try to fix.

Thanks. I have a working version here. I have spent a while modifying your code, just because it is fun. I am now trying to get the mouse working. One thing that may cause problems for the mouse is the distortion in the Y direction (when headless or using composite output), as you reported and as I have noted here.
danversj wrote:
I've found it a bit hard to work out what will work using hello_encode, because all it does is take an internally generated YUV test pattern and encode it to a raw H264 file. I've changed the constant to def.format.video.eColorFormat = OMX_COLOR_Format16bitRGB565; and the output is just corrupted video - because what's being fed in is YUV. I don't know how to feed it real video.

This sounds interesting. Playing with the dispmanx VNC server, it is clear that performance suffers when the whole screen is changing (the reported fps goes down to about 4). The H264 video idea sounds promising. When I get some time I will try to change the encode example to use vc_dispmanx_snapshot() to fill the input buffer; I may just be able to get a VC_IMAGE_YUV420 snapshot. The one concern I have is that by using the snapshot function and then encoding, the data is pulled from the GPU to user space and then pushed back to do the encoding, which may impact performance. I wonder if there is a way to do this without touching user space at all (except, of course, to read the encoded video stream).
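For reference, here is a minimal sketch of the snapshot path I have in mind, assuming the usual /opt/vc headers (build with gcc -I/opt/vc/include -L/opt/vc/lib -lbcm_host). It uses VC_IMAGE_RGB565, which the VNC server already grabs successfully; whether VC_IMAGE_YUV420 behaves the same way is exactly the open question.

Code:
/* snapshot_sketch.c - grab one dispmanx frame into user space. */
#include <stdio.h>
#include <stdlib.h>
#include "bcm_host.h"

#define ALIGN_UP(x, y) (((x) + (y) - 1) & ~((y) - 1))

int main(void)
{
    bcm_host_init();

    DISPMANX_DISPLAY_HANDLE_T display = vc_dispmanx_display_open(0);
    DISPMANX_MODEINFO_T info;
    vc_dispmanx_display_get_info(display, &info);

    /* RGB565 is 2 bytes per pixel; the 32-byte pitch alignment is an
     * assumption carried over from the hello_pi examples. */
    int pitch = ALIGN_UP(info.width * 2, 32);
    void *image = malloc(pitch * info.height);

    uint32_t vc_image_ptr;
    DISPMANX_RESOURCE_HANDLE_T resource =
        vc_dispmanx_resource_create(VC_IMAGE_RGB565, info.width,
                                    info.height, &vc_image_ptr);

    /* Ask the GPU to copy the current screen into the resource... */
    int ret = vc_dispmanx_snapshot(display, resource, DISPMANX_NO_ROTATE);
    printf("vc_dispmanx_snapshot returned %d\n", ret);

    /* ...then pull it down into our user-space buffer. */
    VC_RECT_T rect;
    vc_dispmanx_rect_set(&rect, 0, 0, info.width, info.height);
    vc_dispmanx_resource_read_data(resource, &rect, image, pitch);

    vc_dispmanx_resource_delete(resource);
    vc_dispmanx_display_close(display);
    free(image);
    return 0;
}

For YUV420 the buffer would presumably need to be pitch * height * 3 / 2 bytes, but the plane alignment for that format is not documented anywhere I can find.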
jamesh wrote:
Although you could use H264, you might get better performance than currently with a much simpler compression of individual frames - you could either go to JPEG or PNG. But the problem there is that that would be done on the CPU, whereas H264 would be done on the GPU. Or do you do that already? I presume you already analyse the frame for area changes and only send the changes?

Hi James,
I am assuming that to take snapshots and encode to H264 we need to pull the snapshot into user space with vc_dispmanx_snapshot/vc_dispmanx_resource_read_data and then push it back to the GPU to encode. Is that correct?
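On jamesh's area-change point, the usual approach is to diff the previous and current frames and only send the regions that differ. A simplified sketch of the technique (not the actual dispman_vncserver code; find_dirty_bands and the band size are made up for illustration):

Code:
#include <stdint.h>
#include <string.h>

/* Compare the previous and current RGB565 frames in horizontal bands
 * and report each band that changed, so only those rectangles need to
 * be re-sent to the VNC client. */
#define BAND_HEIGHT 16

typedef void (*dirty_cb)(int y, int h); /* hypothetical callback */

void find_dirty_bands(const uint16_t *prev, const uint16_t *cur,
                      int width, int height, dirty_cb mark_dirty)
{
    for (int y = 0; y < height; y += BAND_HEIGHT) {
        int h = (y + BAND_HEIGHT <= height) ? BAND_HEIGHT : height - y;
        size_t bytes = (size_t)width * h * sizeof(uint16_t);
        if (memcmp(prev + (size_t)y * width,
                   cur  + (size_t)y * width, bytes) != 0)
            mark_dirty(y, h); /* e.g. via rfbMarkRectAsModified() in libvncserver */
    }
}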
AndyD wrote:
I am assuming that to take snapshots and encode to H264 we need to pull the snapshot into user space with vc_dispmanx_snapshot/vc_dispmanx_resource_read_data and then push it back to the GPU to encode. Is that correct?

I believe that will be necessary. Possibly it won't make too much difference, as the transfer is done by DMA, and it's probably dwarfed by the network costs.
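The push-back-to-the-GPU half would then follow the hello_encode pattern. A sketch of one iteration of the capture-to-encode loop, assuming a video_encode component already set up exactly as in hello_encode (input port 200, output port 201), with fill_from_snapshot standing in as a hypothetical helper that does the vc_dispmanx_resource_read_data() copy:

Code:
#include <stdio.h>
#include "bcm_host.h"
#include "ilclient.h"

/* One iteration of the snapshot->encode loop. `video_encode` is a
 * COMPONENT_T* configured as in hello_encode. */
void encode_one_frame(COMPONENT_T *video_encode, FILE *out,
                      void (*fill_from_snapshot)(void *dst, int len))
{
    OMX_BUFFERHEADERTYPE *buf =
        ilclient_get_input_buffer(video_encode, 200, 1 /* block */);

    /* Copy the snapshot into the encoder's input buffer and submit it. */
    fill_from_snapshot(buf->pBuffer, buf->nAllocLen);
    buf->nFilledLen = buf->nAllocLen;
    OMX_EmptyThisBuffer(ILC_GET_HANDLE(video_encode), buf);

    /* Drain whatever H264 output is ready and hand the buffer back. */
    OMX_BUFFERHEADERTYPE *obuf =
        ilclient_get_output_buffer(video_encode, 201, 1 /* block */);
    if (obuf->nFilledLen > 0)
        fwrite(obuf->pBuffer, 1, obuf->nFilledLen, out);
    OMX_FillThisBuffer(ILC_GET_HANDLE(video_encode), obuf);
}

Even in this form the pixels make a GPU to ARM to GPU round trip; avoiding that would need a direct dispmanx-to-encoder path on the VideoCore side, which does not appear to be exposed through the public API.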
Code:
~/dispman_vnc/dispman_vncserver
/usr/lib/arm-linux-gnueabihf/libvncserver.so.0
/usr/lib/arm-linux-gnueabihf/libjpeg.so.8
/usr/lib/arm-linux-gnueabihf/libgnutls.so.26
/usr/lib/arm-linux-gnueabihf/libtasn1.so.3
/usr/lib/arm-linux-gnueabihf/libp11-kit.so.0
danversj wrote:
With the vnc server running, when playing video (via omxplayer) you do get frequent black flashes in the video on the locally-connected monitor.

This should be fixed in the latest firmware. Run rpi-update.
AndyD wrote:
... I may just be able to get a VC_IMAGE_YUV420 snapshot. ...

Sorry for the delay. I have tried calling vc_dispmanx_snapshot() using a resource handle created with type VC_IMAGE_YUV420. I can run the code once, and vc_dispmanx_snapshot() returns -1. If I try to run it again, the code hangs in vc_dispmanx_snapshot() and requires a reboot.
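Given that behaviour, a safe pattern is to probe the YUV420 path once at startup and fall back to RGB565, never calling vc_dispmanx_snapshot() on a YUV420 resource again after a failure. A minimal sketch (probe_snapshot_format is a made-up helper; display handle as in the earlier snippet):

Code:
#include "bcm_host.h"

/* Probe once whether a VC_IMAGE_YUV420 snapshot works on this firmware.
 * Never retry after a failure: a second call following a -1 has been
 * observed to hang until reboot. */
static VC_IMAGE_TYPE_T probe_snapshot_format(DISPMANX_DISPLAY_HANDLE_T display,
                                             int width, int height)
{
    uint32_t vc_image_ptr;
    DISPMANX_RESOURCE_HANDLE_T res =
        vc_dispmanx_resource_create(VC_IMAGE_YUV420, width, height,
                                    &vc_image_ptr);

    int ret = vc_dispmanx_snapshot(display, res, DISPMANX_NO_ROTATE);
    vc_dispmanx_resource_delete(res);

    /* Fall back to RGB565 (known to work) for every subsequent frame. */
    return (ret == 0) ? VC_IMAGE_YUV420 : VC_IMAGE_RGB565;
}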