
Lagrangian interpolation-based red edge position image?

Posted: Thu May 18, 2017 9:46 am
by Bryan See
I've been trying for two weeks now to figure out how to produce a red-edge position (REP) image from a stream taken by the Pi camera module. The REP image is to be calculated using Lagrangian interpolation, but I get a memory-related error when executing it, or worse, it takes far longer than usual to get a result. It looks like I need a new approach here.

Here's my Python code (which uses picamera to capture the Bayer image and OpenCV to process it):

Code:

import io
import time
import cv2
import picamera
import picamera.array
import numpy as np

def lgp_rep(original):
    # Get first derivatives
    S_b = cv2.Sobel(original[:,:,0],cv2.CV_64F,1,1,ksize=5)
    S_g = cv2.Sobel(original[:,:,1],cv2.CV_64F,1,1,ksize=5)
    S_r = cv2.Sobel(original[:,:,2],cv2.CV_64F,1,1,ksize=5)

    # First, make containers
    oldHeight, oldWidth = original[:,:,0].shape
    repImage = np.zeros((oldHeight, oldWidth, 3), np.uint8)  # blank RGB output image
    rep = np.zeros((oldHeight, oldWidth), np.float64)  # blank single-channel image for the Lagrangian interpolation values
    # Sobel with CV_64F already returns float64 arrays, so no astype() copy is needed
    D_l = S_b
    D_m = S_g
    D_r = S_r

    l_l = 695
    l_m = 700
    l_r = 705

    v_1 = D_l / ((l_l - l_m) * (l_l - l_r))
    v_2 = D_m / ((l_m - l_l) * (l_m - l_r))
    v_3 = D_r / ((l_r - l_l) * (l_r - l_m))
    rep = (v_1 * (l_m + l_r)) + (v_2 * (l_l + l_r)) + (v_3 * (l_l + l_m))
    rep = rep / (2 * (v_1 + v_2 + v_3))

    # Scale the REP wavelength (l_l..l_r nm) into the red channel's 0-255 range
    repImage[:,:,2] = np.clip((rep - l_l) / (l_r - l_l) * 255.0, 0, 255).astype(np.uint8)

    return repImage

with picamera.PiCamera() as camera:
    with picamera.array.PiBayerArray(camera) as stream:
        camera.capture(stream, 'jpeg', bayer=True)
        # Demosaic data and write to output (just use stream.array if you
        # want to skip the demosaic step)
        data = (stream.demosaic() >> 2).astype(np.uint8)

cv2.imshow("Image", cv2.resize(lgp_rep(data), (1024, 768)))
cv2.waitKey(0)  # keep the window open until a key is pressed
I think the Lagrangian interpolation of the image to get a REPI (or red edge position or REP) image should be done by pixel-by-pixel. I'm wondering whether I'm going in the right direction or not, so I'm asking. Also, I want to get a wavelength from a pixel in the Lagrangian interpolation calculations. I'm using Raspberry Pi for this code.
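To convince myself the array code matches the maths, here is the same three-point Lagrangian formula written out for a single pixel in plain Python. It returns the wavelength of the vertex of the parabola through the three (wavelength, derivative) points, which should be what the NumPy code above computes for every pixel at once. The 1.0/4.0/2.0 derivative values below are just made-up test numbers:

```python
def rep_pixel(D_l, D_m, D_r, l_l=695.0, l_m=700.0, l_r=705.0):
    """Red-edge position for one pixel: wavelength of the vertex of the
    parabola through (l_l, D_l), (l_m, D_m), (l_r, D_r)."""
    v_1 = D_l / ((l_l - l_m) * (l_l - l_r))
    v_2 = D_m / ((l_m - l_l) * (l_m - l_r))
    v_3 = D_r / ((l_r - l_l) * (l_r - l_m))
    num = v_1 * (l_m + l_r) + v_2 * (l_l + l_r) + v_3 * (l_l + l_m)
    return num / (2.0 * (v_1 + v_2 + v_3))

# rep_pixel(1.0, 4.0, 1.0) ≈ 700.0  (symmetric peak stays at the middle band)
# rep_pixel(1.0, 4.0, 2.0) ≈ 700.5  (skew toward the longer band pulls the REP right)
```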

Any suggestions?
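P.S. One idea I'm considering for the memory error is keeping every intermediate array in float32 instead of float64, which halves the working set. A rough NumPy-only sketch of what I mean (np.gradient stands in for the cv2.Sobel call purely so the snippet is self-contained; the 695/700/705 nm band centres are the same assumed values as in my code above):

```python
import numpy as np

def rep_float32(band_l, band_m, band_r, l_l=695.0, l_m=700.0, l_r=705.0):
    # Per-band spectral derivatives; np.gradient is only a stand-in here
    # for cv2.Sobel so this sketch needs nothing beyond NumPy.
    D_l = np.gradient(band_l.astype(np.float32), axis=1)
    D_m = np.gradient(band_m.astype(np.float32), axis=1)
    D_r = np.gradient(band_r.astype(np.float32), axis=1)

    # Same Lagrangian vertex formula as above; dividing a float32 array
    # by a Python float keeps the result in float32.
    v_1 = D_l / ((l_l - l_m) * (l_l - l_r))
    v_2 = D_m / ((l_m - l_l) * (l_m - l_r))
    v_3 = D_r / ((l_r - l_l) * (l_r - l_m))
    rep = v_1 * (l_m + l_r) + v_2 * (l_l + l_r) + v_3 * (l_l + l_m)
    rep /= 2.0 * (v_1 + v_2 + v_3)
    return rep  # float32 array of per-pixel REP wavelengths
```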