Faberix
Posts: 5
Joined: Fri Apr 21, 2017 4:33 pm

Fragment shader per-pixel memory or color attachment

Sat Apr 29, 2017 12:34 pm

Hi

I am writing a real-time motion detection application for the Raspberry Pi and its camera to test some algorithms. I have two algorithms in mind. The first uses a background color and a mean absolute deviation per pixel (plus a few other values; in total 4 per-pixel variables [3 uchars and 1 float, though I could use a 4th uchar instead of the float]). The second uses a single-layer neural network (actually a filter with a different 3*3 kernel for each pixel, so I would need 9 floats per pixel). Both work on a gray-scale input. I wanted to implement these using OpenGL ES (in a fragment shader drawing to an FBO), but it seems to be quite complicated.

For better understanding, here are the essential parts of my CPU-only implementation of the two algorithms (using OpenCV Mats).
The first algorithm:

Code:

class PixelHistory {
private:
	unsigned char backgroundColor;
	unsigned char currentColor;
	unsigned char currentColorCount = 0;

	float meanAbsoluteDeviation = 3.0f;

	unsigned char tolerance = 2*meanAbsoluteDeviation+1;
public:

	PixelHistory(unsigned char initColor) {
		backgroundColor = initColor;
		currentColor = initColor;
	}

	PixelHistory() {
		currentColor = 0;
		backgroundColor = 0;
	}

	void init(unsigned char initColor) {
		backgroundColor = initColor;
		currentColor = initColor;
	}

	inline unsigned char getTolerance() {
		return tolerance;
	}
	/** returns 1 if newColor differs from backgroundColor by more than the tolerance */
	int update(unsigned char newColor/*, int tolerance*/);
};



// absDiff(), historyTime, fps, oldColorWeighting and newColorWeighting are defined elsewhere
int PixelHistory::update(unsigned char newColor) {
	tolerance = 2 * meanAbsoluteDeviation + 2;
	if (absDiff(backgroundColor, newColor) > tolerance) {
		if (absDiff(currentColor, newColor) > tolerance) {
			currentColor = newColor;
			currentColorCount = 1;
		} else {
			currentColorCount++;
		}

		if (currentColorCount > historyTime * fps) {
			backgroundColor = currentColor;
		}
		return 1;
	}

	meanAbsoluteDeviation = meanAbsoluteDeviation * oldColorWeighting + absDiff(backgroundColor, newColor) * newColorWeighting;
	backgroundColor = backgroundColor + ((int) newColor - backgroundColor) * newColorWeighting;
	return 0;
}
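
On the CPU I just keep one PixelHistory per pixel and call update() for every new frame, roughly like this (initHistories/processFrameBackground and the frame/mask names are only placeholders, not my exact code):

Code:

#include <vector>
#include <opencv2/core/core.hpp>

// one PixelHistory per pixel, initialised from the first gray-scale frame
std::vector<PixelHistory> histories;

void initHistories(const cv::Mat &firstFrame) {
	histories.resize(firstFrame.rows * firstFrame.cols);
	for (int y = 0; y < firstFrame.rows; y++)
		for (int x = 0; x < firstFrame.cols; x++)
			histories[y * firstFrame.cols + x].init(firstFrame.at<unsigned char>(y, x));
}

void processFrameBackground(const cv::Mat &frame, cv::Mat &mask) {
	for (int y = 0; y < frame.rows; y++) {
		for (int x = 0; x < frame.cols; x++) {
			unsigned char gray = frame.at<unsigned char>(y, x);
			// update() returns 1 when the pixel is classified as motion
			mask.at<unsigned char>(y, x) =
					histories[y * frame.cols + x].update(gray) ? 255 : 0;
		}
	}
}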

For the neural-network approach:

Code:

#include "Neuron.hpp"

using namespace std;

const float Neuron::NO_MOTION_VALUE = 1;

float* Neuron::defaultKernel = new float[kernelSize];
float Neuron::learningRate = 0.01;

float Neuron::minWeighting = 0.000001;
float Neuron::adjustThreshold = 0.01;

Neuron::Neuron(int x, int y, bool centerCoords) {
	if (centerCoords) {
		this->x = x - kernelSide / 2;
		this->y = y - kernelSide / 2;
	} else {
		this->x = x;
		this->y = y;
	}
	int i = 0;
	for (int y1 = 0; y1 < kernelSide; y1++) {
		for (int x1 = 0; x1 < kernelSide; x1++) {
			weightings[x1][y1] = defaultKernel[i];
			i++;
		}
	}
}

Neuron::Neuron(int x, int y, float **weightings, bool centerCoords) {
	if (centerCoords) {
		this->x = x - kernelSide / 2;
		this->y = y - kernelSide / 2;
	} else {
		this->x = x;
		this->y = y;
	}
	for (int x1 = 0; x1 < kernelSide; x1++) {
		for (int y1 = 0; y1 < kernelSide; y1++) {
			this->weightings[x1][y1] = weightings[x1][y1];
		}
	}
}

inline float Neuron::computeError(float output) {
	return NO_MOTION_VALUE - output;
}

void Neuron::initDefaultKernel() {
	float initValue = 1.0 / 128;
	for (int i = 0; i < kernelSize; i++) {
		defaultKernel[i] = initValue;
	}
}

void Neuron::freeMemory() {
	delete[] defaultKernel;
}

float Neuron::computeOutput(const Mat &mat) {
	float output = 0;
	for (int tx = 0; tx < kernelSide; tx++) {
		for (int ty = 0; ty < kernelSide; ty++) {
			output += mat.at<unsigned char>(y + ty, x + tx) * weightings[tx][ty];
		}
	}

//	learn
	float error = computeError(output);
	if (abs(error) > adjustThreshold) {
		for (int tx = 0; tx < kernelSide; tx++) {
			for (int ty = 0; ty < kernelSide; ty++) {
				weightings[tx][ty] += learningRate * weightings[tx][ty] * error * mat.at<unsigned char>(y + ty, x + tx);

				if (weightings[tx][ty] <= 0) {
					weightings[tx][ty] = minWeighting;
				}
			}
		}
	}

	meanAbsoluteDeviation += (abs(error) - meanAbsoluteDeviation) * learningRate;
	return output;
}
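
Driven per pixel this looks roughly like the following (initNeurons/processFrameNeurons and the frame names are placeholders again, and the 0.5 threshold is just an illustrative value):

Code:

#include <cmath>
#include <vector>
#include "Neuron.hpp"

// one Neuron per pixel; border pixels are skipped so the 3*3 kernel stays inside the image
std::vector<Neuron> neurons;

void initNeurons(int rows, int cols) {
	Neuron::initDefaultKernel();
	for (int y = 1; y < rows - 1; y++)
		for (int x = 1; x < cols - 1; x++)
			neurons.push_back(Neuron(x, y, true));
}

void processFrameNeurons(const Mat &frame) {
	for (Neuron &n : neurons) {
		float output = n.computeOutput(frame);
		// a large deviation from NO_MOTION_VALUE means motion at that pixel
		bool motion = std::abs(Neuron::NO_MOTION_VALUE - output) > 0.5f;
		// ... write motion into the output mask at the neuron's centre pixel
	}
}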

I would need the variables mentioned above on the GPU only, so it would be useful to have some per-pixel buffer the fragment shader can read from and write to. For the first algorithm I could work around this by using an ARGB output, with the first bit of the mean absolute deviation as an indicator of whether there is motion or not, and then feeding that output back in as a texture. For the second algorithm a workaround like that would definitely mean moving too much data; besides, from what I have read, OpenGL ES 2 does not support additional channels or multiple color attachments. What would be an option is writing to the FBO and directly reusing it in the fragment shader. Is that possible? I have found a promising extension for OpenGL ES (but haven't tested it yet): https://www.khronos.org/registry/OpenGL ... uffers.txt.
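
If reading the FBO's own texture while rendering to it is not allowed, the usual workaround I have seen is ping-ponging between two textures/FBOs: sample the previous state texture while rendering the new state into the other one, then swap. Here is roughly the setup I had in mind (a sketch only; drawQuad() and the prog/uPrevState names are placeholders, not code I have running):

Code:

#include <GLES2/gl2.h>

void drawQuad(void); /* placeholder: draws a full-screen quad */

GLuint stateTex[2], stateFbo[2];
int readIdx = 0, writeIdx = 1;

void createPingPong(int width, int height) {
	glGenTextures(2, stateTex);
	glGenFramebuffers(2, stateFbo);
	for (int i = 0; i < 2; i++) {
		glBindTexture(GL_TEXTURE_2D, stateTex[i]);
		glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
		glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
		glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
		glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
		// RGBA8 texture that holds the per-pixel state
		glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
		             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

		glBindFramebuffer(GL_FRAMEBUFFER, stateFbo[i]);
		glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
		                       GL_TEXTURE_2D, stateTex[i], 0);
		if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
			/* handle the error */
		}
	}
	glBindFramebuffer(GL_FRAMEBUFFER, 0);
}

void processFrameGpu(GLuint prog, GLint uPrevState) {
	// render the new per-pixel state while sampling the previous one
	glBindFramebuffer(GL_FRAMEBUFFER, stateFbo[writeIdx]);
	glUseProgram(prog);
	glActiveTexture(GL_TEXTURE0);
	glBindTexture(GL_TEXTURE_2D, stateTex[readIdx]);
	glUniform1i(uPrevState, 0);
	drawQuad();
	// swap the read/write roles for the next frame
	int tmp = readIdx; readIdx = writeIdx; writeIdx = tmp;
}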

Is there a way to get some kind of per-pixel memory (per-fragment memory, respectively), or any usable workaround?
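
For the first algorithm I could at least pack the whole per-pixel state into a single RGBA8 texel anyway, something along these lines (just an illustration of the layout; the 0..64 range I quantise the mean absolute deviation to is a guess):

Code:

// R = backgroundColor, G = currentColor, B = currentColorCount,
// A = mean absolute deviation quantised to 0..255 (assuming it stays in 0..64)
struct PackedState {
	unsigned char r, g, b, a;
};

PackedState pack(unsigned char backgroundColor, unsigned char currentColor,
                 unsigned char currentColorCount, float meanAbsoluteDeviation) {
	PackedState p;
	p.r = backgroundColor;
	p.g = currentColor;
	p.b = currentColorCount;
	float mad = meanAbsoluteDeviation * (255.0f / 64.0f);
	p.a = (unsigned char) (mad < 0.0f ? 0.0f : (mad > 255.0f ? 255.0f : mad));
	return p;
}

float unpackMeanAbsoluteDeviation(unsigned char a) {
	return a * (64.0f / 255.0f);
}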

I have a Raspberry Pi 2 Model B.
