resize component edges

Posted: Thu Feb 15, 2018 7:20 pm
by piense
It seems that if I feed the resize component an image whose width is not equal to the stride, I get a green-ish line down the right edge. Does anyone know how I can work around that? I tried setting the width equal to the stride and then using the input crop as a workaround, but that didn't seem to do the trick. I'm assuming it might still be sampling past that edge for the final pixel values, though in that case I'd have guessed the edge would just be darker. My next idea is to copy the right image edge into the dead region. I'm just not 100% sure of the cause or the ideal workaround.
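For what it's worth, the "copy the right edge into the dead region" idea can be sketched like this. This is just an illustration, assuming a single 8-bit plane and made-up width/stride values, not anything from the firmware API:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Replicate the last valid pixel of each row into the padding bytes
 * between width and stride, so any filter taps that sample past the
 * right edge see the edge value instead of uninitialised data.
 * Sketch for one 8-bit plane; a packed RGB plane would also need the
 * per-pixel byte count taken into account. */
static void pad_rows_with_edge(uint8_t *plane, int width, int height, int stride)
{
    for (int y = 0; y < height; y++) {
        uint8_t *row = plane + (size_t)y * stride;
        memset(row + width, row[width - 1], (size_t)(stride - width));
    }
}
```

You'd run this on each plane of the buffer before handing it to the resizer, so the scaler's edge taps at least sample a sensible colour.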

Re: resize component edges

Posted: Thu Feb 15, 2018 7:23 pm
by piense

Re: resize component edges

Posted: Fri Feb 16, 2018 1:56 am
by piense
There seem to be a lot of corner cases with a YUV420 input; an odd input crop width causes problems too. I'm now using a resize component to convert to RGB first without scaling, then doing a second resize for the actual scaling, and the result is much cleaner.

Re: resize component edges

Posted: Fri Feb 16, 2018 7:50 am
by 6by9
I'm surprised the component accepts an odd width (or height) on YUV420, as the chroma subsampling does mean you get odd effects at the edges. Assumptions like the U and V strides being the Y stride / 2 fall down at that point, since the integer maths rounds the strides down.
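The integer-maths point can be seen with plain division (the numbers below are just illustrative, not taken from the firmware):

```c
/* YUV420 chroma planes are half the luma size in each dimension.
 * Deriving a chroma dimension as luma / 2 silently rounds down when
 * the luma value is odd, so the chroma rows come up one pixel short
 * at the right edge; rounding up keeps the edge pixel covered. */
static int chroma_size_truncated(int luma)  { return luma / 2; }
static int chroma_size_rounded_up(int luma) { return (luma + 1) / 2; }
```

So for a 641-wide luma plane the truncated form gives 320 chroma samples per row, one short of the 321 needed to cover the last luma column.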

Can you give examples of the exact numbers you are using?

Re: resize component edges

Posted: Fri Feb 16, 2018 5:35 pm
by piense
It won't accept odd input widths or heights; it throws OMX_ErrorBadParameter. I'm surprised the cropping won't accept them either, though, since the pixel data is technically there, it just has to be pulled out of a subsampled region. It seems like it should be able to handle odd sizes if the stride or slice height is bigger. I had a few JPEGs with that case, and I just round the dimension up and re-crop after a color space conversion. If the output is an odd size it didn't throw an error, but the last column had issues. It seems like the output port should reflect the smaller dimension, but it didn't seem to do that. I wonder if it checked the output dimension against the RGB color format I'm using on the output, while the actual resize operation works on the input color format.
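The round-up-then-recrop workaround is simple to state in code. `round_up_even` here is a hypothetical helper of my own, not part of the firmware API:

```c
/* Round a dimension up to the next even value so a stage that rejects
 * odd YUV420 widths/heights will accept it; crop back to the true
 * size after the conversion to RGB. The (v + 1) & ~1 form clears the
 * low bit after bumping, which is equivalent to rounding up to the
 * nearest multiple of 2 for non-negative ints. */
static int round_up_even(int v) { return (v + 1) & ~1; }
```

For example, a 3023-wide crop would be requested as 3024 and then trimmed back to 3023 once the data is in RGB.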

My initial test photos were 3024x4032, 4592x3448 or a rotation of one of those two. It was just an album mix from my iPhone and a Sony camera. Later I pulled a bunch from wikimedia commons to try and break it. I ended up doing a decode component, followed by a 1:1 resize to convert to RGB then a resize with whatever scaling and cropping that needs to be done. Eventually I should tunnel those but for now I have them setup as classes handing buffers off to each other trying to catch errors and debug as much as possible. That seemed to have really good results as far as handling most jpegs and crop sizes I threw at it.