User avatar
FunkyGandalf
Posts: 14
Joined: Tue Aug 04, 2020 2:26 pm

Fighting squirrel obesity via an open source squirrel detecting birdfeeder guard

Tue Aug 04, 2020 3:39 pm

I have an idea for a project and hoped someone might help point me in the right direction. Or if you're interested in helping I would greatly appreciate it! I would like to make an open source project that would be fun and hopefully enjoyed by a lot of people, and while I have a good start on it, it's quite a bit beyond my current level of coding.

The goal: To create a squirrel-detecting birdfeeder defender using a Raspberry Pi, a webcam/PiCamera, and a remotely controlled solenoid valve attached to a water hose. The Pi would run an object-recognition program capable of distinguishing birds from squirrels (and/or raccoons); when a squirrel was detected, it would activate a water spray to safely discourage these cute (but persistent) creatures from gorging themselves on birdseed. By doing so we could help alleviate the scourge of squirrel obesity now rampant in many a backyard.

Additional features that could be added: when birds are detected, it could take still images and save them either to cloud storage or to a thumb drive. It would also be fun to record video of the squirrels getting chased off. One could also implement a two-axis motor (such as the Pimoroni Pan-Tilt HAT) for object tracking, in order to patrol a wider area; a rough sketch of that idea follows.
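
As a rough sketch of the pan-tilt tracking idea, assuming the Pimoroni pantilthat library (the gain, limits, and sign conventions are placeholders that would need tuning on real hardware):

Code: Select all

# Hypothetical sketch: nudge a Pimoroni Pan-Tilt HAT toward the centre of
# a detection's bounding box. Gain and signs are guesses to be tuned.
import pantilthat

def track(xmin, ymin, xmax, ymax, imW, imH, gain=0.05):
    cx = (xmin + xmax) / 2 - imW / 2   # horizontal offset from frame centre (pixels)
    cy = (ymin + ymax) / 2 - imH / 2   # vertical offset from frame centre (pixels)
    pan = max(-90, min(90, pantilthat.get_pan() - gain * cx))
    tilt = max(-90, min(90, pantilthat.get_tilt() + gain * cy))
    pantilthat.pan(pan)
    pantilthat.tilt(tilt)

# e.g. call track(...) once per frame with the highest-confidence box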

Progress so far: Following the instructions here (https://github.com/EdjeElectronics/Tens ... i_Guide.md) I was able to get TensorFlow Lite and the sample object detection model working on my Pi 4B. It runs well (around 5 frames/sec) and detects around 80 different object classes (person, bird, cat, dog, etc.). I also purchased a Google Coral accelerator; running the Edge TPU model, it does the same at around 22 frames per second. I suspect things would work just fine without the extra expense of the Coral.

In this model, squirrels are most often detected as cats. To better detect and distinguish between our 3 classes, we would need to fine-tune a TensorFlow Lite model and narrow it down to those 3 classes. I have compiled an image database containing several thousand images of each, all labelled with bounding boxes in XML and text format. In theory, it should be fairly easy to train a new model and distribute it in the open source package.
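
For reference, a minimal sketch of the final conversion step, assuming the fine-tuned detector has already been exported as a TF2 SavedModel (for example with the Object Detection API's export_tflite_graph_tf2.py); the paths are placeholders:

Code: Select all

# Hypothetical sketch: convert an exported SavedModel to a TensorFlow Lite
# flatbuffer. Paths are placeholders for wherever the model was exported.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model('exported_model/saved_model')
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # post-training quantization
tflite_model = converter.convert()

with open('detect.tflite', 'wb') as f:
    f.write(tflite_model)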

Needed development: Unfortunately, I have not been able to figure out fine-tuning a model and then converting it to TensorFlow Lite; I need someone way smarter in TensorFlow than I am. Work is also needed on driving the GPIO pins when a particular object is detected. And hopefully someone more experienced with remote control could chime in with code and equipment recommendations for that part.
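
For the GPIO side, a minimal sketch with the gpiozero library (the pin number and spray duration are placeholders):

Code: Select all

# Hypothetical sketch: pulse a GPIO pin driving a relay for the solenoid
# valve. BCM pin 17 and the 2-second duration are placeholders.
from time import sleep
from gpiozero import OutputDevice

valve = OutputDevice(17)  # hypothetical relay wiring on BCM pin 17

def spray(seconds=2.0):
    valve.on()
    sleep(seconds)
    valve.off()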

I'd be appreciative of any suggestions on where to go with this or how to develop this into a community project. Thanks for your time reading!

blimpyway
Posts: 618
Joined: Mon Mar 19, 2018 1:18 pm

Re: Fighting squirrel obesity via an open source squirrel detecting birdfeeder guard

Wed Aug 05, 2020 12:46 am

You can have simple motion detection.

You said your model already identifies birds.

Is the moving thing a bird? Then keep the picture/video the motion detector already saved.

Isn't it a bird? Turn on the sprinkler. Squirrel, cat, polar bear - who cares; it's not their food.
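
A minimal sketch of that gating idea using plain OpenCV frame differencing (the blur size and pixel-count threshold are placeholders to tune):

Code: Select all

# Sketch: cheap motion gate, so the neural net only needs to run when
# something is actually moving in the frame.
import cv2

cap = cv2.VideoCapture(0)
ret, prev = cap.read()
prev = cv2.GaussianBlur(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY), (21, 21), 0)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
    delta = cv2.absdiff(prev, gray)                             # per-pixel change
    mask = cv2.threshold(delta, 25, 255, cv2.THRESH_BINARY)[1]  # binary motion mask
    prev = gray
    if cv2.countNonZero(mask) > 5000:  # placeholder sensitivity
        # Something is moving: run the classifier here.
        # Bird -> keep the picture/video; anything else -> sprinkler.
        pass

cap.release()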

User avatar
FunkyGandalf
Posts: 14
Joined: Tue Aug 04, 2020 2:26 pm

Re: Fighting squirrel obesity via an open source squirrel detecting birdfeeder guard

Wed Aug 05, 2020 1:07 am

Perhaps... except for the poor souls who try to refill the bird feeder... or the guy who mows the lawn... or kids in the back yard. I do think the TensorFlow model would be the coolest solution, and probably not actually very difficult for someone familiar with its workings. The real drive behind this project is to highlight the artificial-intelligence capabilities that can be deployed on a Raspberry Pi, which are really quite impressive, far beyond those of a simple motion detector.

blimpyway
Posts: 618
Joined: Mon Mar 19, 2018 1:18 pm

Re: Fighting squirrel obesity via an open source squirrel detecting birdfeeder guard

Thu Aug 06, 2020 9:21 pm

Here's a hint: you said it can recognize persons, didn't you? (not only birds or cats)

And I didn't say to use a motion detector instead of the NN; I said to combine the two... intelligently, as you said you like but fail to recognize.

User avatar
FunkyGandalf
Posts: 14
Joined: Tue Aug 04, 2020 2:26 pm

Re: Fighting squirrel obesity via an open source squirrel detecting birdfeeder guard

Sat Sep 26, 2020 1:53 pm

Good progress so far. Massive thanks to Evan Juras/EdjeElectronics (https://github.com/EdjeElectronics/Tens ... spberry-Pi) who posted this code and his bird/squirrel/raccoon custom TensorFlow Lite model. I converted the model to the Edge TPU format so it runs faster on the Google Coral, although this is not strictly necessary.

The default model was less reliable, with lots of false positives/negatives. This new model uses only the 3 classes and does a much better job. I've attached 2 pictures demonstrating it accurately detecting both a bird and a squirrel in real time using the webcam.
[Attachments: image1.jpg and image2.jpg - real-time webcam detections of a bird and a squirrel]

My next step is to figure out how to drive the GPIO pins based on the result of a detection. What I would like to do is have it save a still image when a bird is detected, and perhaps take an image every 3 seconds or so for as long as the bird is detected. When a squirrel or raccoon is detected, I want to set a GPIO pin high (initial testing with an LED) and record a video clip.

Below is the Python script I'm currently using, courtesy of Evan Juras. I know I need to import the GPIO configuration and set the pins appropriately. I am having difficulty figuring out how to filter the detection results and use the label from the labelmap (1=bird, 2=squirrel, 3=raccoon) to trigger an event.

Does anyone have any code suggestions on how best to accomplish this?
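
One possible shape for the trigger logic, as a sketch that mirrors the names in the full script that follows (the pin number and save folder are placeholders):

Code: Select all

# Hypothetical sketch: act on a single above-threshold detection by its
# labelmap name. Pin 17 and the save path are placeholders.
import time
import cv2
from gpiozero import LED

pestpin = LED(17)  # hypothetical pin for the LED / valve relay

def handle_detection(object_name, frame):
    if object_name == 'bird':
        cv2.imwrite('/home/pi/birdpics/%d.jpg' % int(time.time()), frame)
    elif object_name in ('squirrel', 'raccoon'):
        pestpin.on()  # remember to turn it off again after a delay

# Inside the detection loop: handle_detection(labels[int(classes[i])], frame)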


Code: Select all

######## Webcam Object Detection Using Tensorflow-trained Classifier #########
#
# Author: Evan Juras
# Date: 10/27/19
# Description: 
# This program uses a TensorFlow Lite model to perform object detection on a live webcam
# feed. It draws boxes and scores around the objects of interest in each frame from the
# webcam. To improve FPS, the webcam object runs in a separate thread from the main program.
# This script will work with either a Picamera or regular USB webcam.
#
# This code is based off the TensorFlow Lite image classification example at:
# https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/examples/python/label_image.py
#
# I added my own method of drawing boxes and labels using OpenCV.

# Import packages
import os
import argparse
import cv2
import numpy as np
import sys
import time
from threading import Thread
import importlib.util

# Define VideoStream class to handle streaming of video from webcam in separate processing thread
# Source - Adrian Rosebrock, PyImageSearch: https://www.pyimagesearch.com/2015/12/28/increasing-raspberry-pi-fps-with-python-and-opencv/
class VideoStream:
    """Camera object that controls video streaming from the Picamera"""
    def __init__(self,resolution=(640,480),framerate=30):
        # Initialize the PiCamera and the camera image stream
        self.stream = cv2.VideoCapture(0)
        ret = self.stream.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*'MJPG'))
        ret = self.stream.set(3,resolution[0])
        ret = self.stream.set(4,resolution[1])
            
        # Read first frame from the stream
        (self.grabbed, self.frame) = self.stream.read()

        # Variable to control when the camera is stopped
        self.stopped = False

    def start(self):
        # Start the thread that reads frames from the video stream
        Thread(target=self.update,args=()).start()
        return self

    def update(self):
        # Keep looping indefinitely until the thread is stopped
        while True:
            # If the camera is stopped, stop the thread
            if self.stopped:
                # Close camera resources
                self.stream.release()
                return

            # Otherwise, grab the next frame from the stream
            (self.grabbed, self.frame) = self.stream.read()

    def read(self):
        # Return the most recent frame
        return self.frame

    def stop(self):
        # Indicate that the camera and thread should be stopped
        self.stopped = True

# Define and parse input arguments
parser = argparse.ArgumentParser()
parser.add_argument('--modeldir', help='Folder the .tflite file is located in',
                    required=True)
parser.add_argument('--graph', help='Name of the .tflite file, if different than detect.tflite',
                    default='detect.tflite')
parser.add_argument('--labels', help='Name of the labelmap file, if different than labelmap.txt',
                    default='labelmap.txt')
parser.add_argument('--threshold', help='Minimum confidence threshold for displaying detected objects',
                    default=0.5)
parser.add_argument('--resolution', help='Desired webcam resolution in WxH. If the webcam does not support the resolution entered, errors may occur.',
                    default='1280x720')
parser.add_argument('--edgetpu', help='Use Coral Edge TPU Accelerator to speed up detection',
                    action='store_true')

args = parser.parse_args()

MODEL_NAME = args.modeldir
GRAPH_NAME = args.graph
LABELMAP_NAME = args.labels
min_conf_threshold = float(args.threshold)
resW, resH = args.resolution.split('x')
imW, imH = int(resW), int(resH)
use_TPU = args.edgetpu

# Import TensorFlow libraries
# If tflite_runtime is installed, import interpreter from tflite_runtime, else import from regular tensorflow
# If using Coral Edge TPU, import the load_delegate library
pkg = importlib.util.find_spec('tflite_runtime')
if pkg:
    from tflite_runtime.interpreter import Interpreter
    if use_TPU:
        from tflite_runtime.interpreter import load_delegate
else:
    from tensorflow.lite.python.interpreter import Interpreter
    if use_TPU:
        from tensorflow.lite.python.interpreter import load_delegate

# If using Edge TPU, assign filename for Edge TPU model
if use_TPU:
    # If user has specified the name of the .tflite file, use that name, otherwise use default 'edgetpu.tflite'
    if (GRAPH_NAME == 'detect.tflite'):
        GRAPH_NAME = 'edgetpu.tflite'       

# Get path to current working directory
CWD_PATH = os.getcwd()

# Path to .tflite file, which contains the model that is used for object detection
PATH_TO_CKPT = os.path.join(CWD_PATH,MODEL_NAME,GRAPH_NAME)

# Path to label map file
PATH_TO_LABELS = os.path.join(CWD_PATH,MODEL_NAME,LABELMAP_NAME)

# Load the label map
with open(PATH_TO_LABELS, 'r') as f:
    labels = [line.strip() for line in f.readlines()]

# Have to do a weird fix for label map if using the COCO "starter model" from
# https://www.tensorflow.org/lite/models/object_detection/overview
# First label is '???', which has to be removed.
if labels[0] == '???':
    del(labels[0])

# Load the Tensorflow Lite model.
# If using Edge TPU, use special load_delegate argument
if use_TPU:
    interpreter = Interpreter(model_path=PATH_TO_CKPT,
                              experimental_delegates=[load_delegate('libedgetpu.so.1.0')])
    print(PATH_TO_CKPT)
else:
    interpreter = Interpreter(model_path=PATH_TO_CKPT)

interpreter.allocate_tensors()

# Get model details
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
height = input_details[0]['shape'][1]
width = input_details[0]['shape'][2]

floating_model = (input_details[0]['dtype'] == np.float32)

input_mean = 127.5
input_std = 127.5

# Initialize frame rate calculation
frame_rate_calc = 1
freq = cv2.getTickFrequency()

# Initialize video stream
videostream = VideoStream(resolution=(imW,imH),framerate=30).start()
time.sleep(1)

#for frame1 in camera.capture_continuous(rawCapture, format="bgr",use_video_port=True):
while True:

    # Start timer (for calculating frame rate)
    t1 = cv2.getTickCount()

    # Grab frame from video stream
    frame1 = videostream.read()

    # Acquire frame and resize to expected shape [1xHxWx3]
    frame = frame1.copy()
    frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    frame_resized = cv2.resize(frame_rgb, (width, height))
    input_data = np.expand_dims(frame_resized, axis=0)

    # Normalize pixel values if using a floating model (i.e. if model is non-quantized)
    if floating_model:
        input_data = (np.float32(input_data) - input_mean) / input_std

    # Perform the actual detection by running the model with the image as input
    interpreter.set_tensor(input_details[0]['index'],input_data)
    interpreter.invoke()

    # Retrieve detection results
    boxes = interpreter.get_tensor(output_details[0]['index'])[0] # Bounding box coordinates of detected objects
    classes = interpreter.get_tensor(output_details[1]['index'])[0] # Class index of detected objects
    scores = interpreter.get_tensor(output_details[2]['index'])[0] # Confidence of detected objects
    #num = interpreter.get_tensor(output_details[3]['index'])[0]  # Total number of detected objects (inaccurate and not needed)

    # Loop over all detections and draw detection box if confidence is above minimum threshold
    for i in range(len(scores)):
        if ((scores[i] > min_conf_threshold) and (scores[i] <= 1.0)):

            # Get bounding box coordinates and draw box
            # Interpreter can return coordinates that are outside of image dimensions, need to force them to be within image using max() and min()
            ymin = int(max(1,(boxes[i][0] * imH)))
            xmin = int(max(1,(boxes[i][1] * imW)))
            ymax = int(min(imH,(boxes[i][2] * imH)))
            xmax = int(min(imW,(boxes[i][3] * imW)))
            
            cv2.rectangle(frame, (xmin,ymin), (xmax,ymax), (10, 255, 0), 2)

            # Draw label
            object_name = labels[int(classes[i])] # Look up object name from "labels" array using class index
            label = '%s: %d%%' % (object_name, int(scores[i]*100)) # Example: 'person: 72%'
            labelSize, baseLine = cv2.getTextSize(label, cv2.FONT_HERSHEY_SIMPLEX, 0.7, 2) # Get font size
            label_ymin = max(ymin, labelSize[1] + 10) # Make sure not to draw label too close to top of window
            cv2.rectangle(frame, (xmin, label_ymin-labelSize[1]-10), (xmin+labelSize[0], label_ymin+baseLine-10), (255, 255, 255), cv2.FILLED) # Draw white box to put label text in
            cv2.putText(frame, label, (xmin, label_ymin-7), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 0), 2) # Draw label text

    # Draw framerate in corner of frame
    cv2.putText(frame,'FPS: {0:.2f}'.format(frame_rate_calc),(30,50),cv2.FONT_HERSHEY_SIMPLEX,1,(255,255,0),2,cv2.LINE_AA)

    # All the results have been drawn on the frame, so it's time to display it.
    cv2.imshow('Object detector', frame)

    # Calculate framerate
    t2 = cv2.getTickCount()
    time1 = (t2-t1)/freq
    frame_rate_calc= 1/time1

    # Press 'q' to quit
    if cv2.waitKey(1) == ord('q'):
        break

# Clean up
cv2.destroyAllWindows()
videostream.stop()

blimpyway
Posts: 618
Joined: Mon Mar 19, 2018 1:18 pm

Re: Fighting squirrel obesity via an open source squirrel detecting birdfeeder guard

Sun Sep 27, 2020 11:41 pm

I found this small project quite interesting - https://github.com/appinho/SARaspberryP ... Classifier

It depends only on OpenCV's DNN module; it doesn't need any TensorFlow stuff.

The SqueezeNet model on a Pi 3 A+ classifies an image in under 0.5 seconds. The larger GoogleNet model gives better results but takes 1 to 1.5 seconds.

And yes, it can recognize squirrels; I don't know how well it does in real life.
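
A minimal classification sketch with OpenCV's DNN module, assuming the SqueezeNet Caffe files that project uses (the file names are placeholders):

Code: Select all

# Hypothetical sketch: classify one image with OpenCV's DNN module alone.
# File names are placeholders for the SqueezeNet Caffe model/prototxt and
# an ImageNet class list.
import cv2
import numpy as np

net = cv2.dnn.readNetFromCaffe('squeezenet.prototxt', 'squeezenet.caffemodel')
labels = [line.strip() for line in open('synset_words.txt')]

image = cv2.imread('test.jpg')
blob = cv2.dnn.blobFromImage(image, 1.0, (227, 227), (104, 117, 123))  # SqueezeNet input size/mean
net.setInput(blob)
preds = net.forward().flatten()
idx = int(np.argmax(preds))
print(labels[idx], float(preds[idx]))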

User avatar
FunkyGandalf
Posts: 14
Joined: Tue Aug 04, 2020 2:26 pm

Re: Fighting squirrel obesity via an open source squirrel detecting birdfeeder guard

Sun Oct 25, 2020 8:38 pm

I've been teaching myself Python through this project and have been able to cobble together a program that accomplishes most of the goals pretty well. I'm very happy with how the program is functioning, and it's pretty much ready to be put into a working prototype.

Using the custom bird/squirrel/raccoon object detection model and TensorFlow Lite, the following code takes a USB webcam or Picam video stream and detects birds, squirrels, and raccoons entering the camera's field of view. When birds are detected, it uses cv2 to take still images and save them to a USB flash drive. When squirrels or raccoons are detected, a GPIO pin is turned on for several seconds before shutting off again, which can be used to open a relay to a solenoid valve. I also added the KeyClipWriter functionality from the PyImageSearch blog at https://www.pyimagesearch.com/2016/02/2 ... th-opencv/ (Thank you, Adrian!!!) so it can save video clips of squirrel or raccoon detections, covering not just the actual detection but several seconds before and after the detection event. That was a really useful feature to be able to include.

The setup involves following the instructions from https://github.com/EdjeElectronics/Tens ... spberry-Pi, including downloading the custom object recognition model.

Here is the working Python script in case anyone is interested in trying it. I'm also looking into 433 MHz remote control (a first sketch is at the end of this post) and hope to get that added in next.

Code: Select all

######## Webcam Object Detection Using Tensorflow-trained Classifier #########
#
# Author: Evan Juras
# Date: 10/27/19
# Description:
# This program uses a TensorFlow Lite model to perform object detection on a live webcam
# feed. It draws boxes and scores around the objects of interest in each frame from the
# webcam. To improve FPS, the webcam object runs in a separate thread from the main program.
# This script will work with either a Picamera or regular USB webcam.
#
# This code is based off the TensorFlow Lite image classification example at:
# https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/examples/python/label_image.py
#
# I added my own method of drawing boxes and labels using OpenCV.

# Import packages
import os
import argparse
import cv2
import numpy as np
import sys
import time
from threading import Thread
import importlib.util
from pyimagesearch.keyclipwriter import KeyClipWriter
import imutils
import datetime
from gpiozero import LED

ON_TIME = 2.0             # seconds the pest pin stays high after the last detection
pestpin = LED(11)         # pin driving the deterrent (LED for testing, relay later)
squirrel_on_time = 0.0    # time of the last squirrel/raccoon detection
last_picture_taken = 0.0  # time the last bird still was saved

# Define VideoStream class to handle streaming of video from webcam in separate processing thread
# Source - Adrian Rosebrock, PyImageSearch: https://www.pyimagesearch.com/2015/12/28/increasing-raspberry-pi-fps-with-python-and-opencv/
class VideoStream:
    """Camera object that controls video streaming from the Picamera"""
    def __init__(self,resolution=(640,480),framerate=30):
        # Initialize the PiCamera and the camera image stream
        self.stream = cv2.VideoCapture(0)
        ret = self.stream.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*'MJPG'))
        ret = self.stream.set(3,resolution[0])
        ret = self.stream.set(4,resolution[1])

        # Read first frame from the stream
        (self.grabbed, self.frame) = self.stream.read()

        # Variable to control when the camera is stopped
        self.stopped = False

    def start(self):
        # Start the thread that reads frames from the video stream
        Thread(target=self.update,args=()).start()
        return self

    def update(self):
        # Keep looping indefinitely until the thread is stopped
        while True:
            # If the camera is stopped, stop the thread
            if self.stopped:
                # Close camera resources
                self.stream.release()
                return

            # Otherwise, grab the next frame from the stream
            (self.grabbed, self.frame) = self.stream.read()

    def read(self):
        # Return the most recent frame
        return self.frame

    def stop(self):
        # Indicate that the camera and thread should be stopped
        self.stopped = True

# Define and parse input arguments
parser = argparse.ArgumentParser()
parser.add_argument('--modeldir', help='Folder the .tflite file is located in',
                    required=True)
parser.add_argument('--graph', help='Name of the .tflite file, if different than detect.tflite',
                    default='detect.tflite')
parser.add_argument('--labels', help='Name of the labelmap file, if different than labelmap.txt',
                    default='labelmap.txt')
parser.add_argument('--threshold', help='Minimum confidence threshold for displaying detected objects',
                    default=0.5)
parser.add_argument('--resolution', help='Desired webcam resolution in WxH. If the webcam does not support the resolution entered, errors may occur.',
                    default='1280x720')
parser.add_argument('--edgetpu', help='Use Coral Edge TPU Accelerator to speed up detection',
                    action='store_true')
parser.add_argument('--fps', help='FPS of output video', type=int, default=20)
parser.add_argument('--codec', type=str, help='codec of output video', default='MJPG')
parser.add_argument('--buffer_size', type=int, help='buffer size of video clip writer', default=64)

args = parser.parse_args()

MODEL_NAME = args.modeldir
GRAPH_NAME = args.graph
LABELMAP_NAME = args.labels
min_conf_threshold = float(args.threshold)
resW, resH = args.resolution.split('x')
imW, imH = int(resW), int(resH)
use_TPU = args.edgetpu
FPS = args.fps
CODEC = args.codec
BUFFER_SIZE = args.buffer_size

# Import TensorFlow libraries
# If tflite_runtime is installed, import interpreter from tflite_runtime, else import from regular tensorflow
# If using Coral Edge TPU, import the load_delegate library
pkg = importlib.util.find_spec('tflite_runtime')
if pkg:
    from tflite_runtime.interpreter import Interpreter
    if use_TPU:
        from tflite_runtime.interpreter import load_delegate
else:
    from tensorflow.lite.python.interpreter import Interpreter
    if use_TPU:
        from tensorflow.lite.python.interpreter import load_delegate

# If using Edge TPU, assign filename for Edge TPU model
if use_TPU:
    # If user has specified the name of the .tflite file, use that name, otherwise use default 'edgetpu.tflite'
    if (GRAPH_NAME == 'detect.tflite'):
        GRAPH_NAME = 'edgetpu.tflite'

# Get path to current working directory
CWD_PATH = os.getcwd()

# Path to .tflite file, which contains the model that is used for object detection
PATH_TO_CKPT = os.path.join(CWD_PATH,MODEL_NAME,GRAPH_NAME)

# Path to label map file
PATH_TO_LABELS = os.path.join(CWD_PATH,MODEL_NAME,LABELMAP_NAME)

# Initialize key clip writer and the consecutive number of
# frames that have *not* contained any action
kcw = KeyClipWriter(bufSize=BUFFER_SIZE)
consecFrames = 0

# Load the label map
with open(PATH_TO_LABELS, 'r') as f:
    labels = [line.strip() for line in f.readlines()]

# Have to do a weird fix for label map if using the COCO "starter model" from
# https://www.tensorflow.org/lite/models/object_detection/overview
# First label is '???', which has to be removed.
if labels[0] == '???':
    del(labels[0])

# Load the Tensorflow Lite model.
# If using Edge TPU, use special load_delegate argument
if use_TPU:
    interpreter = Interpreter(model_path=PATH_TO_CKPT,
                              experimental_delegates=[load_delegate('libedgetpu.so.1.0')])
    print(PATH_TO_CKPT)
else:
    interpreter = Interpreter(model_path=PATH_TO_CKPT)

interpreter.allocate_tensors()

# Get model details
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
height = input_details[0]['shape'][1]
width = input_details[0]['shape'][2]

floating_model = (input_details[0]['dtype'] == np.float32)

input_mean = 127.5
input_std = 127.5

# Initialize frame rate calculation
frame_rate_calc = 1
freq = cv2.getTickFrequency()

# Initialize video stream
videostream = VideoStream(resolution=(imW,imH),framerate=30).start()
time.sleep(1)

#for frame1 in camera.capture_continuous(rawCapture, format="bgr",use_video_port=True):
while True:

    # Start timer (for calculating frame rate)
    t1 = cv2.getTickCount()

    # Grab frame from video stream
    frame1 = videostream.read()

    # Acquire frame and resize to expected shape [1xHxWx3]
    frame = frame1.copy()
    frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    frame_resized = cv2.resize(frame_rgb, (width, height))
    input_data = np.expand_dims(frame_resized, axis=0)
    updateConsecFrames = True

    # Normalize pixel values if using a floating model (i.e. if model is non-quantized)
    if floating_model:
        input_data = (np.float32(input_data) - input_mean) / input_std

    # Perform the actual detection by running the model with the image as input
    interpreter.set_tensor(input_details[0]['index'],input_data)
    interpreter.invoke()

    # Retrieve detection results
    boxes = interpreter.get_tensor(output_details[0]['index'])[0] # Bounding box coordinates of detected objects
    classes = interpreter.get_tensor(output_details[1]['index'])[0] # Class index of detected objects
    scores = interpreter.get_tensor(output_details[2]['index'])[0] # Confidence of detected objects
    #num = interpreter.get_tensor(output_details[3]['index'])[0]  # Total number of detected objects (inaccurate and not needed)

    # Loop over all detections and draw detection box if confidence is above minimum threshold
    for i in range(len(scores)):
        if ((scores[i] > min_conf_threshold) and (scores[i] <= 1.0)):

            # Get bounding box coordinates and draw box
            # Interpreter can return coordinates that are outside of image dimensions, need to force them to be within image using max() and min()
            ymin = int(max(1,(boxes[i][0] * imH)))
            xmin = int(max(1,(boxes[i][1] * imW)))
            ymax = int(min(imH,(boxes[i][2] * imH)))
            xmax = int(min(imW,(boxes[i][3] * imW)))

            cv2.rectangle(frame, (xmin,ymin), (xmax,ymax), (10, 255, 0), 2)

            # Draw label
            object_name = labels[int(classes[i])] # Look up object name from "labels" array using class index
            label = '%s: %d%%' % (object_name, int(scores[i]*100)) # Example: 'person: 72%'
            labelSize, baseLine = cv2.getTextSize(label, cv2.FONT_HERSHEY_SIMPLEX, 0.7, 2) # Get font size
            label_ymin = max(ymin, labelSize[1] + 10) # Make sure not to draw label too close to top of window
            cv2.rectangle(frame, (xmin, label_ymin-labelSize[1]-10), (xmin+labelSize[0], label_ymin+baseLine-10), (255, 255, 255), cv2.FILLED) # Draw white box to put label text in
            cv2.putText(frame, label, (xmin, label_ymin-7), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 0), 2) # Draw label text

            # Save a still image (at most one every 5 seconds) while a bird is in frame
            if object_name == 'bird':
                if time.time() - last_picture_taken > 5:
                    last_picture_taken = time.time()
                    unique_filename = str(datetime.datetime.now().date()) + '_' + str(datetime.datetime.now().time()).replace(':', '.')
                    cv2.imwrite('/media/pi/40D8-5312/birdpics/birdphoto' + unique_filename + '.jpg', frame)

            if object_name == 'squirrel' or object_name == 'raccoon':
                # Reset the number of consecutive frames with no action to zero,
                # and keep it from being incremented again below for this frame
                consecFrames = 0
                updateConsecFrames = False
                squirrel_on_time = time.time()
                # If we are not already recording, start recording
                if not kcw.recording:
                    unique_filename = str(datetime.datetime.now().date()) + '_' + str(datetime.datetime.now().time()).replace(':', '.')
                    p = '/media/pi/40D8-5312/squirrelvids/squirrelvid' + unique_filename + '.avi'
                    kcw.start(p, cv2.VideoWriter_fourcc(*CODEC), FPS)
                if pestpin.value != 1: # only turn on if it's off
                    pestpin.on()

    # Otherwise, no action has taken place in this frame, so increment the
    # number of consecutive frames that contain no action
    if updateConsecFrames:
        consecFrames += 1

    # Turn the pest pin back off once ON_TIME has elapsed since the last detection
    if (time.time() - squirrel_on_time) > ON_TIME:
        pestpin.off()

    # Update the key frame clip buffer
    kcw.update(frame1)

    # If we are recording and reached a threshold on consecutive
    # number of frames with no action, stop recording the clip
    if kcw.recording and consecFrames == BUFFER_SIZE:
        kcw.finish()

    # All the results have been drawn on the frame, so it's time to display it.
    cv2.imshow('Object detector', frame)

    # Press 'q' to quit
    if cv2.waitKey(1) == ord('q'):
        break

# If we are in the middle of recording a clip, wrap it up
if kcw.recording:
    kcw.finish()

# Clean up
cv2.destroyAllWindows()
videostream.stop()



And this is the keyclipwriter.py located in the pyimagesearch subfolder:

Code: Select all

# import the necessary packages
from collections import deque
from threading import Thread
from queue import Queue
import time
import cv2

class KeyClipWriter:
	def __init__(self, bufSize=64, timeout=1.0):
		# store the maximum buffer size of frames to be kept
		# in memory along with the sleep timeout during threading
		self.bufSize = bufSize
		self.timeout = timeout

		# initialize the buffer of frames, queue of frames that
		# need to be written to file, video writer, writer thread,
		# and boolean indicating whether recording has started or not
		self.frames = deque(maxlen=bufSize)
		self.Q = None
		self.writer = None
		self.thread = None
		self.recording = False

	def update(self, frame):
		# update the frames buffer
		self.frames.appendleft(frame)

		# if we are recording, update the queue as well
		if self.recording:
			self.Q.put(frame)

	def start(self, outputPath, fourcc, fps):
		# indicate that we are recording, start the video writer,
		# and initialize the queue of frames that need to be written
		# to the video file
		self.recording = True
		self.writer = cv2.VideoWriter(outputPath, fourcc, fps,
			(self.frames[0].shape[1], self.frames[0].shape[0]), True)
		self.Q = Queue()

		# loop over the frames in the deque structure and add them
		# to the queue
		for i in range(len(self.frames), 0, -1):
			self.Q.put(self.frames[i - 1])

		# start a thread to write frames to the video file
		self.thread = Thread(target=self.write, args=())
		self.thread.daemon = True
		self.thread.start()

	def write(self):
		# keep looping
		while True:
			# if we are done recording, exit the thread
			if not self.recording:
				return

			# check to see if there are entries in the queue
			if not self.Q.empty():
				# grab the next frame in the queue and write it
				# to the video file
				frame = self.Q.get()
				self.writer.write(frame)

			# otherwise, the queue is empty, so sleep for a bit
			# so we don't waste CPU cycles
			else:
				time.sleep(self.timeout)

	def flush(self):
		# empty the queue by flushing all remaining frames to file
		while not self.Q.empty():
			frame = self.Q.get()
			self.writer.write(frame)

	def finish(self):
		# indicate that we are done recording, join the thread,
		# flush all remaining frames in the queue to file, and
		# release the writer pointer
		self.recording = False
		self.thread.join()
		self.flush()
		self.writer.release()
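
And as a starting point for the 433 MHz remote control mentioned above, a sketch assuming the rpi-rf package and a cheap 433 MHz transmitter module (the GPIO pin and code value are placeholders that must match the receiver):

Code: Select all

# Hypothetical sketch: transmit a 433 MHz code with the rpi-rf package
# (pip3 install rpi-rf). Pin 17 and the code value are placeholders.
from rpi_rf import RFDevice

rfdevice = RFDevice(17)    # hypothetical TX data pin (BCM numbering)
rfdevice.enable_tx()
rfdevice.tx_code(1234567)  # hypothetical code the remote valve expects
rfdevice.cleanup()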

JokesOnYou77
Posts: 1
Joined: Sat Mar 27, 2021 11:47 pm

Re: Fighting squirrel obesity via an open source squirrel detecting birdfeeder guard

Sat Mar 27, 2021 11:50 pm

@FunkyGandalf can you share your training data?

User avatar
FunkyGandalf
Posts: 14
Joined: Tue Aug 04, 2020 2:26 pm

Re: Fighting squirrel obesity via an open source squirrel detecting birdfeeder guard

Sun Mar 28, 2021 6:12 pm

@JokesOnYou77 I obtained the model from the tutorial put together by EdjeElectronics, available at https://github.com/EdjeElectronics/Tens ... spberry-Pi

In that tutorial he links to his model, available on Dropbox (https://www.dropbox.com/s/cpaon1j1r1yzf ... l.zip?dl=0)

I've still been having difficulty figuring out how to custom-train a model myself and usually find myself running into one error or another, so it was very helpful to find this one. I find it works pretty well close up at the bird feeder, although tree shadows on grassy areas seem to trick it into thinking a raccoon is there.

LTolledo
Posts: 5375
Joined: Sat Mar 17, 2018 7:29 am
Location: Anime Heartland

Re: Fighting squirrel obesity via an open source squirrel detecting birdfeeder guard

Wed Mar 31, 2021 9:47 pm

how much does a "bird" weigh?
how much does a "small squirrel" weigh?
how much does a "raccoon" weigh?
"Don't come to me with 'issues' for I don't know how to deal with those
Come to me with 'problems' and I'll help you find solutions"

Some people be like:
"Help me! Am drowning! But dont you dare touch me nor come near me!"

blimpyway
Posts: 618
Joined: Mon Mar 19, 2018 1:18 pm

Re: Fighting squirrel obesity via an open source squirrel detecting birdfeeder guard

Wed Apr 07, 2021 3:36 pm

LTolledo wrote:
Wed Mar 31, 2021 9:47 pm
how much does a "bird" weigh?
how much does a "small squirrel" weigh?
how much does a "raccoon" weigh?
"Before" or "After"?

Ladi1968
Posts: 1
Joined: Mon Apr 12, 2021 2:19 pm

Re: Fighting squirrel obesity via an open source squirrel detecting birdfeeder guard

Mon Apr 12, 2021 2:41 pm

Hello everybody,

I recommend watching the clip below, just to give you an idea of what you are up against if you want to build a squirrel-proof bird feeder:
https://www.youtube.com/watch?v=hFZFjoX2cGg
You should really watch it because, first, it is hilarious, and second, the guy is a former NASA engineer and (in my opinion) kind of a genius.
Sorry for interfering here as a complete newbie, but when I read the subject I instantly recalled that clip.
Have fun.

Cheers,
Juergen from Germany
