HermannSW
Posts: 1646
Joined: Fri Jul 22, 2016 9:09 pm
Location: Eberbach, Germany

i420toh264 tool using VPU

Sat Oct 19, 2019 2:09 pm

In thread "Question on yuv I420 to grey conversion"
https://www.raspberrypi.org/forums/view ... ?p=1479262

I learned that taking just the Y plane of a YUV frame is exactly the same as treating the frame as a GRAY8 image.
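
In code the trick is simply that the first width*height bytes of an I420 frame are the GRAY8 image (a minimal sketch, not from the repo; the threshold is just an arbitrary example):

Code: Select all

#include <stdint.h>
#include <stddef.h>

/* An I420 frame of width x height is width*height*3/2 bytes:
 * bytes [0, width*height)      = Y plane (full resolution)
 * then the U and V planes, each (width/2)*(height/2) bytes.
 * So the first width*height bytes can be processed as GRAY8. */
static void process_gray8(uint8_t *frame, int width, int height)
{
    uint8_t *y = frame;  /* GRAY8 view of the frame */
    for (size_t i = 0; i < (size_t)width * height; i++) {
        /* example: simple threshold on luma */
        y[i] = (y[i] > 128) ? 255 : 0;
    }
}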

I plan to do motor control based on processing live camera video, e.g.:
"Camera controlled automatic PT approach for a landing capturing"
https://www.raspberrypi.org/forums/view ... 3&t=252176

I had done a gstreamer (yuv) plugin in the past:
"using raspistill to only display red pixels in Real Time"
https://www.raspberrypi.org/forums/view ... ?p=1513256

But if no other gstreamer features are needed, that is quite some overhead compared to processing raw YUV (i420) frames from raspividyuv directly.

While I want to process the raspividyuv output, I want to store the video as well. Given SD card write speeds, only .h264 video can be written for long recordings. So I wanted an i420 to h264 converter, ideally with VPU support.

I searched around and found nearly the complete solution in "/opt/vc/src/hello_pi/hello_encode/encode.c".
That example normally creates a .h264 stream from artificially generated YUV frames.

I created a patch file that just enhances hello_encode/encode.c.
Copy the attached patch file into the "/opt/vc/src/hello_pi" directory.
Then apply the patch (you can revert it with the same command plus the "-R" flag).

Code: Select all

$ patch -p0 < i420toh264.patch 
patching file hello_encode/encode.c
patching file hello_encode/Makefile
$ 
Finally, follow the instructions in the README to build in the hello_encode directory.
After applying the patch, the generated executable is "i420toh264.bin" instead of "hello_encode.bin".
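
For reference, the build itself is just make in the example directory (assuming the hello_pi libraries were built before, e.g. with the top-level ./rebuild.sh, as the README describes):

Code: Select all

$ cd /opt/vc/src/hello_pi/hello_encode
$ make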

I did some stress tests, here recording 100 seconds of 640x480 video at 90fps with raspividyuv and storing it as .h264 video:

Code: Select all

$ time ( raspividyuv -md 7 -w 640 -h 480 -fps 90 -t 100000 -o - | ./i420toh264.bin tst.h264 640 480 )
Port 200: in 1/1 15360 16 disabled,not pop.,not cont. 160x64 160x64 @1966080 20
Port 200: in 1/1 15360 16 disabled,not pop.,not cont. 640x480 640x480 @1966080 20
OMX_SetParameter for video_encode:201...
Current Bitrate=1000000
encode to idle...
enabling port buffers for 200...
enabling port buffers for 201...
encode to executing...
looping for buffers...
Writing frame 1/0, len 0
0 0 0 1 27 64 0 1e ac 2b 40 50 1e d0 f 12 26 a0 0 0 0 1 28 ee 2 5c b0 
Writing frame 2/0, len 27
Writing frame 3/0, len 1253
Writing frame 4/0, len 238
Writing frame 5/0, len 1049
Writing frame 6/0, len 580
Writing frame 7/0, len 829
Writing frame 8/0, len 2242
Writing frame 9/0, len 1621
Writing frame 10/0, len 2297
Writing frame 11/0, len 6492
Writing frame 12/0, len 2473
Writing frame 13/0, len 4309
Writing frame 14/0, len 10284
Writing frame 15/0, len 4805
...

This is the end of the execution:

Code: Select all

...
Writing frame 9014/0, len 3480
Writing frame 9015/0, len 3429
Teardown.
disabling port buffers for 200 and 201...

real	1m40.271s
user	0m2.432s
sys	0m41.709s
$

The Raspberry v2 camera aligns frames to multiples of 32/16 for width/height.
My patch works fine for the 1280x720 and 640x480 modes 6 and 7, because their widths and heights are already such multiples.
The worst mode is mode 5 with 1640x922, where neither width nor height is a multiple of 32/16.
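
The align-up arithmetic the patch uses in main() looks like this when written out standalone (the function wrapper and asserts are only for illustration):

Code: Select all

#include <assert.h>

/* Round n up to the next multiple of a power-of-two alignment.
 * With align = 32: 1640 -> 1664; with align = 16: 922 -> 928. */
static int align_up(int n, int align)
{
    return (n % align) ? ((n | (align - 1)) + 1) : n;
}

int main(void)
{
    assert(align_up(1640, 32) == 1664);  /* 24 padding columns */
    assert(align_up(922, 16) == 928);    /*  6 padding rows    */
    return 0;
}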
I uploaded a video created with this command:

Code: Select all

$ raspividyuv -md 5 -w 1640 -h 922 -o - -awb greyworld -t 10000 | ./i420toh264.bin tst.h264 1640 922
https://www.youtube.com/watch?v=rcvzdR- ... e=youtu.be

This is the lower right corner of the video in the youtube player; it shows the 24 added columns on the right as well as the 6 added rows at the bottom:
[image: lower right corner of the video, showing the added padding]

In the worst case I could use ffmpeg in post-processing to crop the 1664x928 frames to 1640x922.

I assume that this cropping can be done with the VPU as well.
How can the patch solution be enhanced to do that cropping on the fly?
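
For the post-processing route, an invocation along these lines should work (untested sketch; ffmpeg's crop filter takes w:h:x:y, and since the padding sits at the right and bottom the origin is 0,0; h264_omx would re-encode using the VPU, libx264 is the software fallback):

Code: Select all

$ ffmpeg -i tst.h264 -vf crop=1640:922:0:0 -c:v h264_omx cropped.h264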


P.S.:
I was surprised how small the patch is: instead of creating artificial YUV frames, it just reads frames from stdin:

Code: Select all

$ cat i420toh264.patch
diff -ruN hello_encode/encode.c i420/encode.c
--- hello_encode/encode.c	2019-10-19 11:03:51.098076345 +0200
+++ i420/encode.c	2019-10-19 11:17:58.592886108 +0200
@@ -40,6 +40,8 @@
 #define WIDTH     640
 #define HEIGHT    ((WIDTH)*9/16)
 
+int width=0, height=0;
+
 // generate an animated test card in YUV format
 static int
 generate_test_card(void *buf, OMX_U32 * filledLen, int frame, OMX_PARAM_PORTDEFINITIONTYPE *def)
@@ -144,8 +146,14 @@
    print_def(def);
 
    // Port 200: in 1/1 115200 16 enabled,not pop.,not cont. 320x240 320x240 @1966080 20
+ if (width==0)
+ {
    def.format.video.nFrameWidth = WIDTH;
    def.format.video.nFrameHeight = HEIGHT;
+ } else {
+   def.format.video.nFrameWidth = width;
+   def.format.video.nFrameHeight = height;
+ }
    def.format.video.xFramerate = 30 << 16;
    def.format.video.nSliceHeight = ALIGN_UP(def.format.video.nFrameHeight, 16);
    def.format.video.nStride = def.format.video.nFrameWidth;
@@ -248,7 +256,14 @@
          printf("Doh, no buffers for me!\n");
       } else {
          /* fill it */
+       if (width==0)
+       {
          generate_test_card(buf->pBuffer, &buf->nFilledLen, framenumber++, &def);
+       } else {
+         buf->nFilledLen = width*height*3/2;
+         framenumber++;
+         fread(buf->pBuffer, buf->nFilledLen, 1, stdin);
+       }
 
          if (OMX_EmptyThisBuffer(ILC_GET_HANDLE(video_encode), buf) !=
              OMX_ErrorNone) {
@@ -269,7 +284,7 @@
             if (r != out->nFilledLen) {
                printf("fwrite: Error emptying buffer: %d!\n", r);
             } else {
-               printf("Writing frame %d/%d, len %u\n", framenumber, NUMFRAMES, out->nFilledLen);
+               printf("Writing frame %d/%d, len %u\n", framenumber, (width==0)?NUMFRAMES:0, out->nFilledLen);
             }
             out->nFilledLen = 0;
          } else {
@@ -282,7 +297,7 @@
          }
       }
    }
-   while (framenumber < NUMFRAMES);
+   while ( ((width==0) && (framenumber < NUMFRAMES)) || ((width!=0) && !feof(stdin)));
 
    fclose(outf);
 
@@ -306,10 +321,12 @@
 int
 main(int argc, char **argv)
 {
-   if (argc < 2) {
-      printf("Usage: %s <filename>\n", argv[0]);
+   if (argc < 4) {
+      printf("Usage: %s <filename> <width> <height>\n", argv[0]);
       exit(1);
    }
+   width  = atoi(argv[2]);  if (width % 32)  width  = (width|0x1F) + 1;
+   height = atoi(argv[3]);  if (height % 16) height = (height|0xF) + 1;
    bcm_host_init();
    return video_encode_test(argv[1]);
 }
Binary files hello_encode/hello_encode.bin and i420/hello_encode.bin differ
diff -ruN hello_encode/Makefile i420/Makefile
--- hello_encode/Makefile	2019-10-19 11:03:51.098076345 +0200
+++ i420/Makefile	2019-10-18 11:24:06.238335112 +0200
@@ -1,5 +1,5 @@
 OBJS=encode.o
-BIN=hello_encode.bin
+BIN=i420toh264.bin
 LDFLAGS+=-lilclient
 
 include ../Makefile.include
$ 
Attachments
i420toh264.patch.zip (1.34 KiB), contains i420toh264.patch
⇨https://stamm-wilbrandt.de/en/Raspberry_camera.html

https://github.com/Hermann-SW/Raspberry_v1_camera_global_external_shutter
https://stamm-wilbrandt.de/github_repo_i420toh264
https://github.com/Hermann-SW/fork-raspiraw
https://twitter.com/HermannSW

HermannSW

Re: i420toh264 tool using VPU

Mon Oct 21, 2019 7:46 am

  1. Is it correct that hello_encode's encode.c uses the VPU for encoding i420 to .h264?
  2. Is it possible to tell the output port to crop to the really wanted frame size 1640x922 instead of the aligned-up 1664x928?
  3. If not, can this cropping be done by the VPU differently?

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 7512
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: i420toh264 tool using VPU

Mon Oct 21, 2019 8:15 am

Lovely mish mash of MMAL and OpenMax IL there then!

Under IL, format.video.nFrameWidth and format.video.nFrameHeight specify the active portion of the image, with format.video.nStride and format.video.nSliceHeight specifying the overall buffer geometry. nStride needs to be aligned to a multiple of 32, and nSliceHeight to a multiple of 16.

If you already have a test app using MMAL that creates the relevant image buffers, then it would be slightly more efficient to stick with MMAL instead of mixing the two APIs, particularly if you use zero copy buffers from MMAL (although those need a little more careful handling).
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

HermannSW

Re: i420toh264 tool using VPU

Wed Oct 23, 2019 3:01 pm

6by9 wrote:
Mon Oct 21, 2019 8:15 am
Lovely mish mash of MMAL and OpenMax IL there then!
Not my mish mash, I just modified the existing "hello_pi/hello_encode/encode.c" ...
Under IL, format.video.nFrameWidth and format.video.nFrameHeight specify the active portion of the image, with format.video.nStride and format.video.nSliceHeight specifying the overall buffer geometry. nStride needs to be aligned to a multiple of 32, and nSliceHeight to a multiple of 16.
THANK you!
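
Applied to the port-definition block in encode.c, the fix boils down to something like this (a sketch, assuming the frames arriving on stdin are already padded to the aligned 1664x928 size; def and ALIGN_UP are from the example code):

Code: Select all

/* Encode a 1640x922 active image inside a 1664x928 padded buffer:
 * nFrameWidth/nFrameHeight describe the active portion,
 * nStride/nSliceHeight the overall buffer geometry.           */
def.format.video.nFrameWidth  = 1640;
def.format.video.nFrameHeight = 922;
def.format.video.nStride      = ALIGN_UP(1640, 32);  /* 1664 */
def.format.video.nSliceHeight = ALIGN_UP(922, 16);   /*  928 */
/* each stdin frame is then nStride * nSliceHeight * 3 / 2 bytes */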

Now I have everything working, and a patch is not the right vehicle to deliver and maintain that.
I found the "hello_pi" directory in the userland github repo, created a new fork, and placed i420toh264 there,
with sample "plugin" commands, documentation, sample files, ...:
https://github.com/Hermann-SW2/userland ... i420toh264

From the introduction in the repo README.md:
[image: introduction section of the repo README.md]


Regarding "mish mash", I did commit copy of hello_encode first, and my changes for i420toh264 second.
This allowed to see exactly what I had changed in diff for commit d4a5f005d91116fe1fa747c07ab1aeb75d5c732b:
https://github.com/Hermann-SW2/userland ... b75d5c732b

Besides the i420toh264 tool I documented and provided sample code for
  • a minimal (i420toh264) use pipeline
  • a simple non-modifying pipeline
  • a simple modifying pipeline
  • an advanced modifying pipeline
The last section is the interesting one: implementing alpha (transparency) blending was easy, and the blending happens before the conversion to the .h264 file.
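
The per-pixel blend on the Y plane is just a weighted average (a minimal sketch of the idea behind sample_yuv_alpha, not the exact repo code; here 255 in the mask means "keep the camera pixel"):

Code: Select all

#include <stdint.h>
#include <stddef.h>

/* Blend an overlay value into the Y (luma) plane of an I420 frame.
 * mask is a GRAY8 image (e.g. loaded from a .pgm file) of the same
 * width x height: 255 = keep camera pixel, 0 = full overlay value. */
static void alpha_blend_y(uint8_t *y, const uint8_t *mask,
                          uint8_t overlay, int width, int height)
{
    for (size_t i = 0; i < (size_t)width * height; i++) {
        unsigned a = mask[i];
        y[i] = (uint8_t)((a * y[i] + (255 - a) * overlay) / 255);
    }
}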

This is a bash pipeline (no gstreamer pipeline needed for processing Raspberry camera videos on the fly):

Code: Select all

$ time (
> raspividyuv -md 5 -w 1640 -h 922 -o - -fps 25 | \
> ./sample_yuv_alpha 1640 922 1640x922.circle900.pgm | \
> ./i420toh264 tst.h264 1640 922 > /dev/null \
> )

real	0m5.359s
user	0m2.792s
sys	0m2.639s
$ 

This is the sample transparency file 1640x922.circle900.pgm that was used:
[image: 1640x922.circle900.pgm transparency mask]


And this is a frame from the generated tst.h264:
[image: frame from tst.h264 with the alpha-blended circle]


I will now do my "Camera controlled automatic PT approach for a landing capturing" project as a plugin using i420toh264:
https://www.raspberrypi.org/forums/view ... 3&t=252176


P.S.:
Verification that the alpha blending works as it should (same scene, frames from videos with and without blending):
[image: side-by-side frames with and without blending]

HermannSW

Re: i420toh264 tool using VPU

Sat Nov 16, 2019 11:36 pm

Today somebody asked about reticle overlays in another thread.
I created a fine crosshair reticle .pgm file for alpha blending and added the sample to the repo:
https://github.com/Hermann-SW2/userland ... nt-reticle

Then I wanted to see the reticle overlay live on an HDMI monitor while recording.
First I had to enable hello_video to take .h264 from stdin ("-").
Then I had to enable i420toh264 to output the VPU-generated .h264 to stdout ("-").
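
Both changes amount to treating "-" as a special filename (a sketch; the original code simply fopen()s argv[1]):

Code: Select all

#include <stdio.h>
#include <string.h>

/* Open the output file, or use stdout when the name is "-" so the
 * .h264 stream can be piped into tee / hello_video.              */
static FILE *open_output(const char *name)
{
    return (strcmp(name, "-") == 0) ? stdout : fopen(name, "wb");
}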
This was the command used:

Code: Select all

$ time (~/userland/build/bin/raspividyuv -md 1 -w 1640 -h 922 -p 22,50,820,461 -o - \
> -fps 25 -t 25000 -pts ras.pts -n |
> ./sample_yuv_pts 1640 922 prb1.pts |
> ./sample_yuv_alpha 1640 922 1640x922.fine_crosshair_900.pgm |
> ./sample_yuv_pts 1640 922 prb2.pts |
> ./i420toh264 - 1640 922 2>/dev/null |
> tee tst.h264 |
> ../hello_video/hello_video.bin -)

real	0m25.781s
user	0m10.741s
sys	0m23.346s
$
The tee command stores the generated .h264 into a file, which I uploaded to youtube:
https://www.youtube.com/watch?v=kvmiE4eaqos

This is the video displayed on the HDMI monitor by hello_video while recording, with the reticle overlay:
[image: HDMI monitor showing the live video with reticle overlay]


There is latency in the pipeline, and only 289 frames got created and piped through, although it should have been 625 for 25s at 25fps.
I created sample_yuv_pts to take CLOCK_REALTIME timestamps for correlation with timestamps taken elsewhere in the pipeline.
And raspividyuv got enhanced to output CLOCK_REALTIME timestamps alongside the video frame timestamps, for correlation.
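
Taking such a timestamp per frame is a one-liner around clock_gettime() (a sketch of the mechanism; the output format here is arbitrary):

Code: Select all

#include <stdio.h>
#include <time.h>

/* Append a CLOCK_REALTIME timestamp (seconds.nanoseconds) for the
 * current frame to the given .pts log file.                      */
static void log_pts(FILE *pts, int frame)
{
    struct timespec ts;
    clock_gettime(CLOCK_REALTIME, &ts);
    fprintf(pts, "%d,%ld.%09ld\n", frame, (long)ts.tv_sec, ts.tv_nsec);
}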

So not perfect, but a first end-to-end live .h264 display with overlay.
More investigation is needed.

In another posting 6by9 proposed placing the reticle on an HDMI overlay;
I think that is the most efficient and simplest solution.
The difference is that with an HDMI overlay the reticle is not part of the stored .h264 video, while with the commands above it is.

P.S.:
The camera sits on a stepper PT system
https://forum.arduino.cc/index.php?topic=647703.0
which is controlled by a joystick.

HermannSW

Re: i420toh264 tool using VPU

Mon Nov 18, 2019 4:45 pm

In case the overlay is not needed before storing the processed YUV frames as .h264, the reticle can be added in post-processing without issues, as described here:
https://www.raspberrypi.org/forums/view ... 3#p1568093
[image: reticle added in post-processing]
