grimepoch
Posts: 95
Joined: Thu May 12, 2016 1:57 am

Re: ADV7282 Analogue video to CSI chip with interlaced modes

Tue Aug 07, 2018 2:31 pm

Increasing CMA fixed all the issues with segfaulting! Awesome.

When I run "vcgencmd dispmanx_list" this is the output:

Code: Select all

display:3 format:XRGB8888 transform:0 layer:-127 src:0,0,656,416 dst:32,32,656,416 cost:454 lbm:0
display:3 format:YUV420 transform:0 layer:0 src:0,0,720,240 dst:0,120,720,240 cost:579 lbm:12288
The XRGB8888 layer is always there (even when yavta is not running).

So when you say "display side has no knowledge that it is interlaced", is that something we can feed forward to the IPU to restitch the image, or is the expectation I would be doing that with the buffers I would be receiving?

Just making sure I understand what we did. You basically updated the driver to allow interlaced modes, to detect the frames, and to have V4L2 update the frame information in the data it delivers downstream, in this case to MMAL?

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 5549
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: ADV7282 Analogue video to CSI chip with interlaced modes

Tue Aug 07, 2018 2:54 pm

grimepoch wrote:
Tue Aug 07, 2018 2:31 pm
Increasing CMA fixed all the issues with segfaulting! Awesome.
I think I've got a software fix as well, but my ADV7282 is now not wanting to stream so I can't test it!
grimepoch wrote:When I run "vcgencmd dispmanx_list" this is the output:

Code: Select all

display:3 format:XRGB8888 transform:0 layer:-127 src:0,0,656,416 dst:32,32,656,416 cost:454 lbm:0
display:3 format:YUV420 transform:0 layer:0 src:0,0,720,240 dst:0,120,720,240 cost:579 lbm:12288
The XRGB8888 layer is always there (even when yavta is not running).
It always will be - that is the frame buffer, to which the console text is rendered.
You can see that it is being presented with a source image that is 720x240, and it is rendering it to the rectangle sized 720x240 at offset 0,120 on your screen (which I assume is therefore 720x480 in size).
grimepoch wrote:So when you say "display side has no knowledge that it is interlaced", is that something we can feed forward to the IPU to restitch the image, or is the expectation I would be doing that with the buffers I would be receiving?
The only deinterlacing we have is an option from the MMAL image_fx component. That is not currently inserted into the MMAL pipeline.
grimepoch wrote:Just making sure I understand what we did. You basically updated the driver to allow interlaced modes, to detect the frames, and to have V4L2 update the frame information in the data it delivers downstream, in this case to MMAL?
Basically yes:
- I've taken a patch for the upstream adv7180 driver that corrects the interleaving mode it advertises because it was previously wrong.
- told the unicam driver not to reject interlaced modes.
- told the unicam driver to look at the frame numbers, and update the V4L2 buffer structure accordingly.
We don't currently tell MMAL the field information as it only passes to the ISP, video_encode, and video_render. None of them do anything with it.
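On the V4L2 side, the practical effect is that each dequeued buffer now reports which field it carries. A minimal sketch of checking that, assuming fd is the open /dev/video0 handle:

Code: Select all

struct v4l2_buffer buf = {0};
buf.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
buf.memory = V4L2_MEMORY_MMAP;

if (ioctl(fd, VIDIOC_DQBUF, &buf) == 0) {
	switch (buf.field) {
	case V4L2_FIELD_TOP:
		/* this buffer holds the top (odd-line) field */
		break;
	case V4L2_FIELD_BOTTOM:
		/* this buffer holds the bottom (even-line) field */
		break;
	default:
		/* progressive or unknown */
		break;
	}
}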

You're now in the same position as luiscgalo in viewtopic.php?f=43&t=218928, except he's trying to deinterlace 1080i50 vs your SD.
From your earlier comments I thought you already had a plan for how to deinterlace the content.
You could merge the two fields into one line-interleaved frame and then pass it through the image_fx component, but you would then also need to have converted to I420/YU12 as that is all that image_fx supports.
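A minimal sketch of that merge step, assuming two 720x240 UYVY field buffers and the 1472-byte stride that unicam reports:

Code: Select all

#include <stdint.h>
#include <string.h>

#define FIELD_HEIGHT 240
#define STRIDE       1472	/* bytes per line as programmed by unicam */

/* Weave a top and a bottom field into one 720x480 line-interleaved frame. */
static void weave_fields(const uint8_t *top, const uint8_t *bottom, uint8_t *frame)
{
	for (unsigned y = 0; y < FIELD_HEIGHT; y++) {
		memcpy(frame + (2 * y) * STRIDE,     top    + y * STRIDE, STRIDE);
		memcpy(frame + (2 * y + 1) * STRIDE, bottom + y * STRIDE, STRIDE);
	}
}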

Seeing as there are now two of you looking for alternate field deinterlacing, I will look at whether it is possible to support the YUYV formats and fields in independent buffers. No guarantees on success or timescales though.
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
Please don't send PMs asking for support - use the forum.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

grimepoch
Posts: 95
Joined: Thu May 12, 2016 1:57 am

Re: ADV7282 Analogue video to CSI chip with interlaced modes

Tue Aug 07, 2018 3:03 pm

Absolutely, I can take the frame information and de-interlace myself if need be; my questions are more to make sure I am not recreating something that is easily done elsewhere simply because I don't know what options are available.

The destination format I need is basically RGBA (I forget what that is actually called in format names). Currently I have code that takes YUV and converts it within a shader, using a USB camera interface. The only difference is that was a progressive source, and bringing in two textures of half frames is not difficult.

My curiosity was: is it possible to use the IPU to stitch the frames and convert to RGBA or RGB? If that's a huge amount of work, or very complex to set up, then I can understand how that might be beyond the scope of what we are working on.

I will take a look at the other thread to understand what you all were doing there too.

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 5549
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: ADV7282 Analogue video to CSI chip with interlaced modes

Tue Aug 07, 2018 4:00 pm

IPU sounds like Freescale IMX6 terminology (Image Processing Unit). The Pi has the ISP (Image Sensor Pipeline), HVS (Hardware Video Scaler), VPU (Vector Processing Unit - general purpose processor), or QPU (Quad Processing Unit in the 3D block).

In theory the ISP could take in the two buffers and write them out as line interleaved or top/bottom RGB/RGBX, but there's no suitable setup at present, and not really one I can see a big demand for.
The HVS doesn't accept YUYV, nor handle interlacing.
The current deinterlacing algorithms run on either the VPU or QPU depending on a few parameters.
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
Please don't send PMs asking for support - use the forum.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

grimepoch
Posts: 95
Joined: Thu May 12, 2016 1:57 am

Re: ADV7282 Analogue video to CSI chip with interlaced modes

Tue Aug 07, 2018 4:09 pm

Sorry, I meant ISP, getting all these acronyms mixed up!

I've started looking at luiscgalo's code and playing around with a copy of yavta setting up the imageFx component.

You mention the QPU and VPU are used to do deinterlacing. One question I had: I saw some text in the discussion about using the same resources as the 3D block. Given I am using quite a bit of GLSL, do you think using that component would make the net gain zero if I just have the GPU do all the work for me (color conversion and de-interlace), or am I misunderstanding what was said?

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 5549
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: ADV7282 Analogue video to CSI chip with interlaced modes

Tue Aug 07, 2018 4:23 pm

grimepoch wrote:
Tue Aug 07, 2018 4:09 pm
You mention the QPU and VPU are used to do deinterlacing. One question I had: I saw some text in the discussion about using the same resources as the 3D block. Given I am using quite a bit of GLSL, do you think using that component would make the net gain zero if I just have the GPU do all the work for me (color conversion and de-interlace), or am I misunderstanding what was said?
Are you using the vc4-kms-v3d OpenGL, or the firmware OpenGLES? If OpenGL then you can't use the QPUs from image_fx.
VPU or QPU. Never both at the same time.
IIRC the QPUs are needed for the more advanced algorithm above SD resolutions, with the VPU sufficient at SD resolutions (or it drops back to the fast algorithm).
Having said that, it gets ugly as to which image formats it supports in there, so I'm not sure the QPU version will give you any gain as the VPU will be doing an image conversion first.
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
Please don't send PMs asking for support - use the forum.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

grimepoch
Posts: 95
Joined: Thu May 12, 2016 1:57 am

Re: ADV7282 Analogue video to CSI chip with interlaced modes

Tue Aug 07, 2018 4:26 pm

I believe we are using the firmware OpenGLES; we are not using X Windows. Everything we do basically goes to the frame buffer.

grimepoch
Posts: 95
Joined: Thu May 12, 2016 1:57 am

Re: ADV7282 Analogue video to CSI chip with interlaced modes

Tue Aug 07, 2018 6:12 pm

As I dig into the MMAL side of it, I know you mentioned that you didn't have a lot of experience with the deinterlacer. What I am trying to understand is whether I am working with MMAL correctly.

Since I am not encoding to disk, I removed the encoder MMAL component.

I've added the imageFx component.

One difference I am noticing is that the code uses "mmal_connection_create" to make a connection between the ISP component and the imageFx. I do not see this mechanism being used for the render connection.

When I connect the ISP output to the image_fx input, the program runs but there is no output. I'm clearly not reconfiguring where render is connected.

I figured "mmal_port_parameter_set_boolean" is what is making the connection. I'll keep digging.

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 5549
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: ADV7282 Analogue video to CSI chip with interlaced modes

Tue Aug 07, 2018 7:39 pm

Yavta is doing a mixture of things.

If doing a 1:1 connection then using mmal_connections is the easiest. If you set the MMAL_CONNECTION_FLAG_TUNNELLING flag when you create the connection then it keeps all the callbacks on the GPU with no intervention from the ARM.
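For example, something along these lines would tunnel the ISP output into image_fx (a sketch only - the component variable names are placeholders, not from yavta):

Code: Select all

MMAL_CONNECTION_T *conn = NULL;
MMAL_STATUS_T status;

status = mmal_connection_create(&conn,
                                isp->output[0],       /* source port */
                                image_fx->input[0],   /* sink port   */
                                MMAL_CONNECTION_FLAG_TUNNELLING |
                                MMAL_CONNECTION_FLAG_ALLOCATION_ON_INPUT);
if (status == MMAL_SUCCESS)
	status = mmal_connection_enable(conn);	/* buffers now flow GPU-side */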

yavta wants to feed the images through the ISP, and then split the resulting image between video_render for the display and video_encode to encode it. We could use a video_splitter component, but that would require a copy for each of the outputs. Instead it is using mmal_buffer_header_replicate to create two buffer headers dependent on the source one - see https://github.com/6by9/yavta/blob/master/yavta.c#L1772. It requires slightly careful handling as the buffers can't be returned to the ISP until both sinks have returned the buffer, hence buffers_to_isp waiting for the buffers to be returned to the pool before sending them back.
It relies on all ports being set to ZERO_COPY mode to have any benefit, otherwise the buffer would be copied every time it is sent to or from the GPU.
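The rough shape of that replication is below (a sketch, not yavta's exact code - the pool and port names here are placeholders):

Code: Select all

/* Placeholders for pools/ports set up elsewhere in the application. */
static MMAL_POOL_T *render_pool, *encode_pool;
static MMAL_PORT_T *render_input, *encode_input;

static void isp_output_cb(MMAL_PORT_T *port, MMAL_BUFFER_HEADER_T *buffer)
{
	MMAL_BUFFER_HEADER_T *to_render = mmal_queue_get(render_pool->queue);
	MMAL_BUFFER_HEADER_T *to_encode = mmal_queue_get(encode_pool->queue);

	if (to_render && to_encode) {
		/* Both replicas reference the original payload - no pixel copy. */
		mmal_buffer_header_replicate(to_render, buffer);
		mmal_buffer_header_replicate(to_encode, buffer);
		mmal_port_send_buffer(render_input, to_render);
		mmal_port_send_buffer(encode_input, to_encode);
	} else {
		if (to_render) mmal_buffer_header_release(to_render);
		if (to_encode) mmal_buffer_header_release(to_encode);
	}
	mmal_buffer_header_release(buffer);
	/* The payload only becomes reusable once both replicas have been
	 * released, which is why buffers_to_isp waits on the pool. */
}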

mmal_port_parameter_set_boolean just sets a parameter on a port, it's nothing specifically about connections.
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
Please don't send PMs asking for support - use the forum.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

grimepoch
Posts: 95
Joined: Thu May 12, 2016 1:57 am

Re: ADV7282 Analogue video to CSI chip with interlaced modes

Wed Aug 08, 2018 1:59 am

Ahh, okay. That makes sense. I guess I can just use connection for now because I just want to go ISP->IMAGEFX->RENDER with the test code I am working on. (Just to wrap my head around all this)

One thing I am unclear about: I don't see the image_fx example creating a buffer pool (unless I missed it). When/how do you decide when you need pools? I see "buffer_size" and "buffer_num" being set, but no call to "mmal_port_pool_create".

I get why we want buffers, especially so things are waiting for a buffer to write into, just not how we decide on them.

grimepoch
Posts: 95
Joined: Thu May 12, 2016 1:57 am

Re: ADV7282 Analogue video to CSI chip with interlaced modes

Wed Aug 08, 2018 4:12 am

Okay, I at least successfully hooked up ISP->image_fx->render as I am getting images out.

Now to try to figure out how to configure image_fx to understand that it is receiving alternating fields, and to build a full image and output it downstream.

grimepoch
Posts: 95
Joined: Thu May 12, 2016 1:57 am

Re: ADV7282 Analogue video to CSI chip with interlaced modes

Wed Aug 08, 2018 5:08 am

What I am not understanding is this: we know the video is coming in at 720x240 (for NTSC). If I copy the format from input to output all the way up the chain to render, we of course stay at that size.

I don't see how to configure image_fx to output to 720x480. I also feel like image_fx requires some buffer pools as well.

I tried changing the format->es->width/height to 720x480 but then nothing outputs (but it runs).

The other example looks like both fields are being received as one buffer, just sequentially, so it has to interleave them, which is different from this case.

I've been trying to look at a lot of discussions online, mostly surrounding OMX, but not really seeing anything that helps.

So close!

grimepoch
Posts: 95
Joined: Thu May 12, 2016 1:57 am

Re: ADV7282 Analogue video to CSI chip with interlaced modes

Thu Aug 09, 2018 12:41 am

I finally got at least something to be 720x480 on the output, although it's wrong (it turned the image all green).

Code: Select all

deinter_output->format->es->video.height = 480;
deinter_output->format->es->video.crop.height = 480;
status = mmal_port_format_commit(deinter_output);
I just hard-coded the output after copying from the input, which was copied from the ISP output. Without any documentation on image_fx, and no sample code really explaining how to use it, this pathway is coming to an end.

I tried every number on "img_fx_param.effect_parameter[0]" with no difference in the output, and no complaints from MMAL.

grimepoch
Posts: 95
Joined: Thu May 12, 2016 1:57 am

Re: ADV7282 Analogue video to CSI chip with interlaced modes

Thu Aug 09, 2018 5:30 am

While I try to understand the MMAL code (because I'd eventually like to benchmark the two) I thought I'd go try the method I've always used for grabbing the stream with V4L2. Clearly I do not understand something.

I set the pixel format to match the settings in YAVTA to make sure that's not the issue. (I tried with the returned stride, then updated it to match YAVTA).

Using the MMAP method. I set up 4 buffers to start with, request them, and all of that works with no problem. I queue a buffer, turn on STREAMON, then call select and get a timeout.

My understanding is the driver should tell me which field I have in the v4l2 buffer structure.

However, for the life of me I cannot figure out what I am doing wrong such that the driver isn't sending me anything. No errors; I checked every ioctl and xioctl call.

I ran LOG_STATUS in both apps (mine and yavta) after setting everything up - yavta's output is on top below - and they match apart from the write pointer. When I use select, I pass NULL for the write and exception sets, like every other camera-interface example I have used.

Code: Select all

[79189.923413] unicam 3f801000.csi1: =================  START STATUS  =================
[79189.923429] unicam 3f801000.csi1: -----Receiver status-----
[79189.923436] unicam 3f801000.csi1: V4L2 width/height:   720x240
[79189.923443] unicam 3f801000.csi1: Mediabus format:     00002006
[79189.923449] unicam 3f801000.csi1: V4L2 format:         UYVY
[79189.923455] unicam 3f801000.csi1: Unpacking/packing:   0 / 0
[79189.923460] unicam 3f801000.csi1: ----Live data----
[79189.923468] unicam 3f801000.csi1: Programmed stride:   1472
[79189.923474] unicam 3f801000.csi1: Detected resolution: 0x0
[79189.923480] unicam 3f801000.csi1: Write pointer:       eaf00000
[79189.923486] unicam 3f801000.csi1: ==================  END STATUS  ==================
[79189.923544] unicam 3f801000.csi1: Sensor trying to send interlaced video - results may be unpredictable
[79189.929576] unicam 3f801000.csi1: Sensor trying to send interlaced video - results may be unpredictable
[79225.321463] unicam 3f801000.csi1: =================  START STATUS  =================
[79225.321475] unicam 3f801000.csi1: -----Receiver status-----
[79225.321485] unicam 3f801000.csi1: V4L2 width/height:   720x240
[79225.321491] unicam 3f801000.csi1: Mediabus format:     00002006
[79225.321498] unicam 3f801000.csi1: V4L2 format:         UYVY
[79225.321504] unicam 3f801000.csi1: Unpacking/packing:   0 / 0
[79225.321509] unicam 3f801000.csi1: ----Live data----
[79225.321515] unicam 3f801000.csi1: Programmed stride:   1472
[79225.321521] unicam 3f801000.csi1: Detected resolution: 0x0
[79225.321527] unicam 3f801000.csi1: Write pointer:       eb180000
So I'm not seeing anything there either. Is there some limitation of the unicam driver in this context that I am missing? If I understand the code, it seems you are using some sort of DMA transfer into MMAL; here, I am not using MMAL.

grimepoch
Posts: 95
Joined: Thu May 12, 2016 1:57 am

Re: ADV7282 Analogue video to CSI chip with interlaced modes

Thu Aug 09, 2018 7:30 am

I've hacked at this code for about 8 hours now. I stripped out every other piece of code, made sure to use the same files as yavta, verified the loaded libraries, etc. I apologize that this code is such spaghetti; I literally pulled everything out except for initializing, creating the buffers, and then just trying to dequeue a buffer. When I run this, select just times out.

Many of the hard-coded values you might see in there were just attempts to set things the same way yavta sets them. I just want to understand why this does not function.

Code: Select all

#include <time.h>		// timekeeping


#include <vector>
#include <stack>

#include <linux/fb.h>
#include <sys/mman.h>
#include <sys/ioctl.h>
#include <sys/select.h>		// for select()
#include <linux/kd.h>
#include <dirent.h>
#include <stdio.h>
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>
#include <assert.h>
#include <unistd.h>
#include <iostream>
#include <errno.h>
#include <linux/videodev2.h>
#include <stdint.h>


static void errno_exit                      (const char *           s)
{
        fprintf (stderr, "%s error %d, %s\n",
                 s, errno, strerror (errno));

        exit (EXIT_FAILURE);
}

struct buffer {
	void *start;
	size_t length;
};
int fd_video;
struct buffer       *buffers;
static unsigned int n_buffers;

int xioctl(int fd, int request, void *arg) {
	int r;

	do r = ioctl (fd, request, arg);
	while (-1 == r && EINTR == errno);

	return r;
}
static void get_ts_flags(uint32_t flags, const char **ts_type, const char **ts_source)
{
	switch (flags & V4L2_BUF_FLAG_TIMESTAMP_MASK) {
	case V4L2_BUF_FLAG_TIMESTAMP_UNKNOWN:
		*ts_type = "unk";
		break;
	case V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC:
		*ts_type = "mono";
		break;
	case V4L2_BUF_FLAG_TIMESTAMP_COPY:
		*ts_type = "copy";
		break;
	default:
		*ts_type = "inv";
	}
	switch (flags & V4L2_BUF_FLAG_TSTAMP_SRC_MASK) {
	case V4L2_BUF_FLAG_TSTAMP_SRC_EOF:
		*ts_source = "EoF";
		break;
	case V4L2_BUF_FLAG_TSTAMP_SRC_SOE:
		*ts_source = "SoE";
		break;
	default:
		*ts_source = "inv";
	}
}



//---------------------------------------------------------
//
// var initialization


//---------------------------------------------------------
//
//
//
//
int print_caps(int fd) {
        struct v4l2_capability caps = {};
        if (-1 == xioctl(fd, VIDIOC_QUERYCAP, &caps)){
                perror("Querying Capabilities");
                return 1;
        }

        printf( "Driver Caps:\n"
                "  Driver: \"%s\"\n"
                "  Card: \"%s\"\n"
                "  Bus: \"%s\"\n"
                "  Version: %d.%d\n"
                "  Capabilities: %08x\n",
                caps.driver,
                caps.card,
                caps.bus_info,
                (caps.version >> 16) & 0xff,	/* major - was &&, which only yields 0/1 */
                (caps.version >> 8) & 0xff,	/* minor */
                caps.capabilities);

      int support_grbg10 = 0;

      if (caps.capabilities & V4L2_CAP_VIDEO_CAPTURE_MPLANE)
		   printf(" V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE\n");
	   if (caps.capabilities & V4L2_CAP_VIDEO_OUTPUT_MPLANE)
		   printf(" V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE\n");
	   if (caps.capabilities & V4L2_CAP_VIDEO_CAPTURE)
		   printf("  V4L2_BUF_TYPE_VIDEO_CAPTURE\n");
   	if (caps.capabilities & V4L2_CAP_VIDEO_OUTPUT)
	   	printf(" V4L2_BUF_TYPE_VIDEO_OUTPUT\n");


        struct v4l2_fmtdesc fmtdesc = {0};
        //This is what I see
        fmtdesc.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        char fourcc[5] = {0};
        char c, e;
        printf("  FMT : CE Desc\n--------------------\n");
        while (0 == xioctl(fd, VIDIOC_ENUM_FMT, &fmtdesc)) {
                strncpy(fourcc, (char *)&fmtdesc.pixelformat, 4);
                if (fmtdesc.pixelformat == V4L2_PIX_FMT_SGRBG10)
                    support_grbg10 = 1;
                c = fmtdesc.flags & 1? 'C' : ' ';
                e = fmtdesc.flags & 2? 'E' : ' ';
                printf("  %s: %c%c %s\n", fourcc, c, e, fmtdesc.description);
                fmtdesc.index++;
        }


        struct v4l2_format fmt = {0};
        fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        xioctl(fd, VIDIOC_G_FMT, &fmt);

        printf("TEST: %d \n",fmt.fmt.pix.width);


        //Let's check the format we have
        printf("Video format: %d (%08x) %ux%u (stride %u) field %d buffer size %u\n",
			fmt.fmt.pix.pixelformat, fmt.fmt.pix.pixelformat,
			fmt.fmt.pix.width, fmt.fmt.pix.height, fmt.fmt.pix.bytesperline,
			fmt.fmt.pix.field,
			fmt.fmt.pix.sizeimage);


        //Let's try updating the stride
         fmt.fmt.pix.bytesperline=1472;
         fmt.fmt.pix.sizeimage=345600;
         fmt.fmt.pix.priv = V4L2_PIX_FMT_PRIV_MAGIC;
		   fmt.fmt.pix.flags = 0;
        ioctl(fd, VIDIOC_LOG_STATUS);

        fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_UYVY;   

        //fmt.fmt.pix.field = V4L2_FIELD_ALTERNATE;

        if (-1 == xioctl(fd, VIDIOC_S_FMT, &fmt)){
            perror("Setting Pixel Format");
            return 1;
        }
        ioctl(fd, VIDIOC_LOG_STATUS);

        printf("Video format: %d (%08x) %ux%u (stride %u) field %d buffer size %u\n",
			fmt.fmt.pix.pixelformat, fmt.fmt.pix.pixelformat,
			fmt.fmt.pix.width, fmt.fmt.pix.height, fmt.fmt.pix.bytesperline,
			fmt.fmt.pix.field,
			fmt.fmt.pix.sizeimage);




        strncpy(fourcc, (char *)&fmt.fmt.pix.pixelformat, 4);
        printf( "Selected Camera Mode:\n"
                "  Width: %d\n"
                "  Height: %d\n"
                "  PixFmt: %s\n"
                "  Field: %d\n",
                fmt.fmt.pix.width,
                fmt.fmt.pix.height,
                fourcc,
                fmt.fmt.pix.field);


        return 0;
} //endof PrintCaps

int init_mmap(int fd) {
    struct v4l2_plane planes[VIDEO_MAX_PLANES];
    struct v4l2_requestbuffers req = {0};
    req.count = 8;
    req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    req.memory = V4L2_MEMORY_MMAP;

    if (xioctl(fd, VIDIOC_REQBUFS, &req))
    {
        perror("Requesting Buffer");
        return 1;
    }


    if(req.count < 2 ) {
      printf("Insufficient buffers allocated!\n");
      exit(0);
    } else {
      printf("Buffers requested memory succeded: %d\n",req.count);
    }

    buffers = (buffer*)calloc(req.count, sizeof (*buffers));

    if(!buffers) {
      printf("OUT O MEM!\n");
      exit(0);
    }
    for(n_buffers=0;n_buffers<req.count;++n_buffers){
      const char *ts_type, *ts_source;
      struct v4l2_buffer buf = {0};
      memset(planes, 0, sizeof planes);
      buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
      buf.memory = V4L2_MEMORY_MMAP;
      buf.index = n_buffers;
      buf.length = VIDEO_MAX_PLANES;
		buf.m.planes = planes;

      if(-1 == xioctl(fd, VIDIOC_QUERYBUF, &buf))
         errno_exit("VIDIOC_QUERYBUF");

      get_ts_flags(buf.flags, &ts_type, &ts_source);
		printf("length: %u offset: %u timestamp type/source: %s/%s\n",
		       buf.length, buf.m.offset, ts_type, ts_source);


      buffers[n_buffers].length = buf.length;
      buffers[n_buffers].start =
         mmap(NULL,
         buf.length,
         PROT_READ | PROT_WRITE,
         MAP_SHARED,
         fd, buf.m.offset);

      if(MAP_FAILED == buffers[n_buffers].start)
         errno_exit("mmap");
    }

    return 0;
} //endof InitMMAP



//entry point
int Init() {
	fd_video = open("/dev/video0", O_RDWR);
	printf("	INIT - fd_video=%d \n\n", fd_video);
        if (fd_video == -1) {
                perror("Opening video device at /dev/video0");
                return 1;
        }

        if(print_caps(fd_video)) {
                 return 1;
        }

        if(init_mmap(fd_video)) {
                perror("mmap error in fd_video - bailing");
                return 1;
        }
        struct v4l2_buffer buf = {0};
        buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_MMAP;
        buf.index = 0;
        int ret;
        errno = 0;
        ret = xioctl(fd_video, VIDIOC_STREAMON, &buf.type);
        printf(" started capture %d \n",ret);
        if(ret) {
         perror("Unable to Start Capture!"); exit(0);
        }

        return 0;
}

int CaptureVideo(int fd) {
      printf("Capturing image..............\n");
        struct v4l2_buffer buf = {0};
        buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_MMAP;
        buf.index = 0;


        if(xioctl(fd, VIDIOC_QBUF, &buf)) { perror("Query Buffer"); return 1; }

        printf("MADE IT HERE 1\n");
        fd_set fds;
        FD_ZERO(&fds);
        FD_SET(fd, &fds);

        struct timeval tv = {0};
        tv.tv_sec = 2;


        printf("MADE IT HERE 2, fd=%d\n",fd);

        int r = select(fd+1, &fds, NULL, NULL, &tv);

        if(-1 == r) { perror("Waiting for Frame"); return 1; }
        if( 0 == r) { printf("Select Timeout\n"); exit(0); }
        printf("MADE IT HERE 2b\n");

        printf("MADE IT HERE 2c\n");
        if(-1 == xioctl(fd, VIDIOC_DQBUF, &buf)) { perror("Retrieving Frame"); return 1; }

        printf("MADE IT HERE 3\n");


    return 0;
} //endof CaptureVideo

int main() {

	// Video in
	Init();
	CaptureVideo(fd_video);
   printf("TEST EXIT!\n");
   exit(0);


 } //endof main
//---------------------------------------------------------
//---------------------------------------------------------
//---------------------------------------------------------


grimepoch
Posts: 95
Joined: Thu May 12, 2016 1:57 am

Re: ADV7282 Analogue video to CSI chip with interlaced modes

Thu Aug 09, 2018 8:20 am

Finally discovered what was causing the problem with select. I need to queue up all the buffers, not just one. Was able to get the image into my code and display the half frames. Next step will be to combine them.
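For reference, the fix amounts to queueing every buffer before STREAMON, something like this (a sketch using the same helpers as the code I posted above):

Code: Select all

for (unsigned i = 0; i < n_buffers; i++) {
	struct v4l2_buffer buf = {0};
	buf.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	buf.memory = V4L2_MEMORY_MMAP;
	buf.index  = i;
	if (xioctl(fd_video, VIDIOC_QBUF, &buf) == -1)
		errno_exit("VIDIOC_QBUF");
}

enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
if (xioctl(fd_video, VIDIOC_STREAMON, &type) == -1)
	errno_exit("VIDIOC_STREAMON");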

That said, I'd still like to figure out how to get the image_fx to do this de-interlacing to compare the performance.

I'm also curious: is it possible to convert the color format from YUYV to RGBA as well?

It is interesting, right now there is a bit of delay in my code, so I'll need to figure out what that is from. When running frames from YAVTA, there is VERY LITTLE delay.

I think the problem is I copy from the buf -> local array so that I can return the buffer, then I copy into a GL texture which is using up even more time. Maybe I can just wait and return the buffer when the texture is done and avoid that second copy.

grimepoch
Posts: 95
Joined: Thu May 12, 2016 1:57 am

Re: ADV7282 Analogue video to CSI chip with interlaced modes

Thu Aug 09, 2018 9:56 am

Now I remember, when I try to use the memory mapped buffer location, for whatever reason, glTexSubImage2D doesn't work. That's why I had copied the data to an intermediate buffer as a test.

I thought that if glTexSubImage2D was passed a pointer and not a buffer object, it would force the driver to copy the buffer right then. For whatever reason, that is not working.

My next step would be to create a couple PBO, copy into them, then let gl load from them when it wants. I haven't used PBO before so we'll see. It seems that is what some people do in this situation.

I see that OMX has a mechanism to convert directly from a stream into a texture bypassing the ARM altogether. I do understand you cannot mix them. They are using EGL_Image apparently. Not sure if you can use the EXT_image_dma_buf_import extension to pull from V4L2 like you are doing for MMAL. Copying into a eglCreateImageKHR.

https://www.khronos.org/registry/EGL/ex ... e_base.txt

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 5549
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: ADV7282 Analogue video to CSI chip with interlaced modes

Thu Aug 09, 2018 10:11 am

The peripheral is double buffered. The blanking periods can be very short, therefore if you haven't given it a new buffer before the end of frame it will just overwrite the old one again and not release it back to the client.
The only other option is to fully stop the peripheral after every frame, and there you would be relying on interrupt latencies being very low.
This is why VIDIOC_REQBUFS will always tell you that 3 buffers is the minimum you can request.
I've just done a stripped down version of yavta for DanR on viewtopic.php?f=44&t=219919 - see https://github.com/6by9/v4l2_mmal

image_fx deinterlacing will only accept line interleaved interlaced frames. The height of the input frame will therefore be the same as the output frame as it contains two fields. Converting it to handle fields in independent buffers will be a moderately involved task.
It only handles YUV420 (YU12), which does mean that the chroma planes are also line interleaved and can not be treated as standard YUV420 by any other component (eg ISP). You could treat it as double the width/half the height for two side by side images, and then the chroma subsampling would all work out.

Conversion from YUYV to RGBA is best done via the ISP component.

Copies of image data almost always hurt. Yavta is not touching the image data at all from the ARM, nor the GPU processor - it's all done by hardware blocks.
Yes, just keep hold of the buffer until you've done whatever conversions etc are required, and then requeue it to the relevant source. Again, yavta is doing that in the link from V4L2 to MMAL: DQBUF results in a mmal_port_send_buffer to the ISP, and when the ISP returns the buffer in the isp_ip_cb callback, it finds the corresponding V4L2 buffer and calls QBUF on it.
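As a very rough sketch of that handoff (the names here are placeholders, not yavta's actual variables, and the real code uses dmabuf zero-copy rather than MMAP):

Code: Select all

static int fd_video;				/* V4L2 capture device        */
static MMAL_PORT_T *isp_input;			/* ISP component input port   */
static MMAL_BUFFER_HEADER_T *mmal_bufs[8];	/* one header per V4L2 buffer */

/* Capture side: a dequeued V4L2 buffer is handed straight to the ISP. */
static void forward_frame(void)
{
	struct v4l2_buffer vbuf = {0};
	vbuf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	vbuf.memory = V4L2_MEMORY_MMAP;
	if (ioctl(fd_video, VIDIOC_DQBUF, &vbuf) == 0) {
		MMAL_BUFFER_HEADER_T *mbuf = mmal_bufs[vbuf.index];
		mbuf->length = vbuf.bytesused;
		mbuf->user_data = (void *)(uintptr_t)vbuf.index;	/* remember index */
		mmal_port_send_buffer(isp_input, mbuf);
	}
}

/* ISP input callback: the ISP is done with the data, so requeue it to V4L2. */
static void isp_ip_cb(MMAL_PORT_T *port, MMAL_BUFFER_HEADER_T *mbuf)
{
	struct v4l2_buffer vbuf = {0};
	vbuf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	vbuf.memory = V4L2_MEMORY_MMAP;
	vbuf.index = (uint32_t)(uintptr_t)mbuf->user_data;
	ioctl(fd_video, VIDIOC_QBUF, &vbuf);
}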
Also ensure that you have sufficient buffers circulating. If you're dealing with fields and doing anything even vaguely complex in the processing then you'll want at least
- 2 buffers for deinterlacing two fields into one
- 1 buffer being filled by the peripheral
- 1 buffer queued with the kernel to be filled next (to avoid frame drops)
- add at least one for good measure.
ie minimum 5.
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
Please don't send PMs asking for support - use the forum.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 5549
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: ADV7282 Analogue video to CSI chip with interlaced modes

Thu Aug 09, 2018 10:30 am

Please confirm whether you are using the firmware or kernel GL stack. lsmod would include "vc4" if using the kernel one.
Then again video_render as used in yavta wouldn't work if you were using the vc4 driver, and also it would have set cma as 256M automatically, so I think your earlier statement "I believe we are using firmware OpenGLES" is probably correct.
(video_render would work if you use vc4-fkms-v3d as that retains dispmanx for the rendering).

The kernel GL stack supports EXT_image_dma_buf_import. The firmware one doesn't.

I think I know why the firmware GL won't accept the mmap'ed buffers. It's a funny path in the kernel and the VideoCore VCHI driver. Because the allocation has come from the CMA heap, when mmap'ed it gets a weird flag applied to it. The function VCHIQ uses to convert back from a user virtual address and length to a list of memory pages fails if those flags are set.
If you look at the top two commits at https://github.com/6by9/linux/commits/r ... /interface then they may resolve the issue for you, but no guarantees (it hasn't for https://github.com/raspberrypi/userland/issues/386 which is a similar issue).
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
Please don't send PMs asking for support - use the forum.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

grimepoch
Posts: 95
Joined: Thu May 12, 2016 1:57 am

Re: ADV7282 Analogue video to CSI chip with interlaced modes

Thu Aug 09, 2018 3:34 pm

No vc4 in my 'lsmod', so pretty sure I am using the firmware version. I'll take a look at those two links for the mmap'ed buffers as well.

What is weird to me is that image_fx deinterlacing would require the image to already be interleaved...wouldn't that negate the need for deinterlacing? I imagine the only way I could use it is if I was able to stack the images one on top of another.

If I use the ISP to convert from YUYV to RGBA, and pass buffers through into MMAL like you have, would I also be able to maintain the buffer information, so I can determine which field it is?

Right now my options to try and get working to test would be

V4L2->ARM->GL_TEXTURE->SHADER (De-interlace/YUYV2RGBA in shader)
V4L2->ISP->ARM->GL_TEXTURE->SHADER(De-Interlace in shader)

For both of these I would first see if I can get around the mmap issue with the links you sent, or use a PBO instead of passing a pointer directly into the SubTexture routine.

And then try to get either of those to work with EGL_Image with DMA and avoid the ARM side.

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 5549
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: ADV7282 Analogue video to CSI chip with interlaced modes

Thu Aug 09, 2018 4:00 pm

grimepoch wrote:
Thu Aug 09, 2018 3:34 pm
No vc4 in my 'lsmod', so pretty sure I am using the firmware version. I'll take a look at those two links for the mmap'ed buffers as well.

What is weird to me is that image_fx deinterlacing would require the image to already be interleaved...wouldn't that negate the need for deinterlacing? I imagine the only way I could use it is if I was able to stack the images one on top of another.
No, they'd still be interlaced. Remember that the two fields are separated in time, therefore horizontal movement has nasty artefacts if you display the two fields as one frame.
A simple line doubling algorithm with this interleave copies all the odd lines to all the even ones, or vice versa. It's useful in that it can be done in place to produce full frames at the correct resolution.
More advanced algorithms will combine more fields. Read the description of the VLC deinterlacing modes - https://wiki.videolan.org/Deinterlacing ... lace_modes.
grimepoch wrote:If I use the ISP to convert from YUYV to RGBA, and pass buffers through into MMAL like you have, would I also be able to maintain the buffer information, so I can determine which field it is?
Look at MMAL_BUFFER_HEADER_VIDEO_FLAG_INTERLACED, and MMAL_BUFFER_HEADER_VIDEO_FLAG_TOP_FIELD_FIRST in header->flags field or header->type.video.flags.
Whilst TOP_FIELD_FIRST is intended to denote that the top field is on the odd lines, you can treat it as you like.
Almost all the flags should be passed through any component. The ones that aren't are things like EOS, FRAME_END, and DECODEONLY as they apply to a specific component.
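Something like this would read them back out (a sketch; depending on your headers the flags may be carried in buffer->flags rather than the video-specific flags):

Code: Select all

/* Returns 1 for top field, 0 for bottom, -1 if the buffer isn't interlaced. */
static int buffer_field(MMAL_BUFFER_HEADER_T *buffer)
{
	uint32_t vflags = buffer->type->video.flags;

	if (!(vflags & MMAL_BUFFER_HEADER_VIDEO_FLAG_INTERLACED))
		return -1;

	return (vflags & MMAL_BUFFER_HEADER_VIDEO_FLAG_TOP_FIELD_FIRST) ? 1 : 0;
}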
grimepoch wrote:Right now my options to try and get working to test would be

V4L2->ARM->GL_TEXTURE->SHADER (De-interlace/YUYV2RGBA in shader)
V4L2->ISP->ARM->GL_TEXTURE->SHADER(De-Interlace in shader)

For both of these I would first see if I can get around the mmap issue with the links you sent, or use a PBO instead of passing a pointer directly into the SubTexture routine.

And then try to get either of those to work with EGL_Image with DMA and avoid the ARM side.
Your call.
You've still got the option of doing the deinterlacing with the edge adaptive algorithm in the ADV chip :D
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
Please don't send PMs asking for support - use the forum.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

grimepoch
Posts: 95
Joined: Thu May 12, 2016 1:57 am

Re: ADV7282 Analogue video to CSI chip with interlaced modes

Thu Aug 09, 2018 4:27 pm

Haha, I know that ISP block is still in there. :D

Pertaining to the de-interlace, I'm just not sure how I'd do it. It sounds like my option would be figuring out how to combine both fields into one frame, THEN passing that to the deinterlacer. I believe that is what the other person is doing with the HDMI source.

So I imagine it would be something like

V4L2->MERGEFRAMES?->ISP->DEINTERLACE->.....

For having the ISP change the format, do I just change the format on the OUTPUT side of the MMAL component, and then it handles that internally?

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 5549
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: ADV7282 Analogue video to CSI chip with interlaced modes

Thu Aug 09, 2018 4:51 pm

grimepoch wrote:
Thu Aug 09, 2018 4:27 pm
For having the ISP change the format, do I just change the format on the OUTPUT side of the MMAL component, and then it handles that internally?
Yes. Change the encoding and the mmal_port_format_commit will compute alternate buffer sizes based on that. Job done.

As noted earlier you will want to tell the ISP to convert a 1440x240 or 1440x288 image to I420 otherwise the chroma subsampling results in chroma only on the top field, or averaged between the two fields. image_fx will need to be told 720x480 or 720x576.

Also note that 720 is not a multiple of 32, so you'll want to use either 736/1472 as the width on the ISP output (crop.width stays at 720/1440 as that defines the number of active pixels in the frame). The ISP is more flexible on alignment requirements than image_fx.
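Putting that together, the ISP output setup would look roughly like this (a sketch only; the component variable and the 480-line height are placeholders for your actual pipeline):

Code: Select all

MMAL_PORT_T *isp_out = isp->output[0];

mmal_format_copy(isp_out->format, isp->input[0]->format);
isp_out->format->encoding = MMAL_ENCODING_RGBA;
isp_out->format->es->video.width       = 736;	/* padded up to a multiple of 32 */
isp_out->format->es->video.height      = 480;
isp_out->format->es->video.crop.width  = 720;	/* active pixels stay at 720     */
isp_out->format->es->video.crop.height = 480;

if (mmal_port_format_commit(isp_out) != MMAL_SUCCESS)
	fprintf(stderr, "ISP output format commit failed\n");

/* The commit recalculates the recommended buffer sizes for the new encoding. */
isp_out->buffer_size = isp_out->buffer_size_recommended;
isp_out->buffer_num  = isp_out->buffer_num_recommended;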
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
Please don't send PMs asking for support - use the forum.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

luiscgalo
Posts: 29
Joined: Fri Jun 22, 2018 9:09 am

Re: ADV7282 Analogue video to CSI chip with interlaced modes

Thu Aug 09, 2018 5:04 pm

Hi grimepoch,
To complement 6by9 info about how to merge the individual top and bottom fields into a single frame, you can take a look to the "video_field_cb" function within the "rawcam.c" file of my prototype code available on the following post:
viewtopic.php?f=43&t=218928#p1345272

In that case, since it is just a prototype, I'm merging two individual fields (1920*540) forming a FullHD video frame (1920*1080).

I hope that it helps you a little bit solving the problem related with the deinterlacing using image_fx ;)

grimepoch
Posts: 95
Joined: Thu May 12, 2016 1:57 am

Re: ADV7282 Analogue video to CSI chip with interlaced modes

Thu Aug 09, 2018 5:06 pm

I applied those commits to my install now to test out, no change in output. No errors or anything are spit out, just nothing, it stops at the glTexSubImage2D call and waits indefinitely.
