
Re: The Integrated Cat Flap : Pi Zero

Wed Sep 04, 2019 5:46 pm

I really look forward to when you share the code of your project. I face the same problem with our cat and have been thinking about starting a similar project, but I would much appreciate a head start from what you have so far. :D


Re: The Integrated Cat Flap : Pi Zero

Sun Nov 10, 2019 2:25 pm


I've been busy for some time writing a program that detects whether my cat has prey in his mouth.
Currently the detection is based on a self-trained Cascade.xml, created with Cascade Trainer GUI using positive/negative training images of his catches. Unfortunately the hit ratio is not very good, and I am looking for better detection logic.
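For anyone following along, this is roughly how a self-trained Cascade.xml is used with OpenCV's CascadeClassifier. The file name, thresholds, and the lock rule are my assumptions, not the poster's actual settings:

```python
# Sketch: run a self-trained cascade on a camera frame and decide whether to lock.
# "Cascade.xml" and all tuning values below are placeholder assumptions.

def should_lock(num_detections: int, min_hits: int = 1) -> bool:
    """Lock the flap once at least `min_hits` prey detections occur."""
    return num_detections >= min_hits

def detect_prey(frame, cascade_path: str = "Cascade.xml"):
    """Return prey bounding boxes found in a BGR frame (OpenCV convention)."""
    import cv2  # imported here so the pure decision logic above runs without OpenCV
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)  # helps with uneven lighting at the flap
    cascade = cv2.CascadeClassifier(cascade_path)
    # scaleFactor / minNeighbors trade hit rate against false positives;
    # raising minNeighbors is the usual first step when the hit ratio is poor.
    return cascade.detectMultiScale(
        gray, scaleFactor=1.1, minNeighbors=5, minSize=(40, 40)
    )
```

Two common ways to cut false positives without retraining: raise `minNeighbors`, and only lock when prey is detected in several consecutive frames rather than a single one.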

I use the SureFlap cat flap with hub and can lock or unlock the flap from my Python program.
Whether it will be fast enough I still have to test :-)
There is not much time between detection and locking the flap.
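Since the open question is whether detection plus locking fits in the short window before the cat reaches the flap, a stdlib-only timing sketch may help; the 0.5 s budget is an assumed figure, not a measured one:

```python
import time

def within_budget(step, budget_s: float = 0.5):
    """Run `step()` and report (result, elapsed_seconds, fits_budget).

    0.5 s is an assumed budget for "cat at the flap -> flap locked";
    measure your own camera + hub round trip to pick a real value.
    """
    t0 = time.perf_counter()
    result = step()
    elapsed = time.perf_counter() - t0
    return result, elapsed, elapsed <= budget_s

# Example with a dummy stand-in for "detect and lock":
result, elapsed, ok = within_budget(lambda: "locked", budget_s=0.5)
```

Timing the real detection call and the real hub round trip separately shows which of the two eats the budget.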

My program runs under Python 3.7 (64-bit).
Later I want to run the whole thing on a Raspberry Pi; at the moment everything is still running on my Windows PC.

Now to my question:
When you create a model with Google AutoML, how can you integrate it into a Python program?
Is there an XML file somewhere that you can load?
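As far as I know, AutoML Vision does not produce an XML file the way OpenCV cascade training does: you either call the cloud-hosted model over its REST API, or export an "Edge" model as a TensorFlow Lite `.tflite` file and run it locally. A sketch under those assumptions (the model path and label names are made up):

```python
# Sketch: run an exported AutoML Vision Edge model (.tflite) locally.
# "model.tflite" and the label list are placeholders for whatever your export contains.

def top_label(scores, labels):
    """Pure helper: return the (label, score) pair with the highest score."""
    best = max(range(len(scores)), key=lambda i: scores[i])
    return labels[best], scores[best]

def classify(image_array, model_path="model.tflite", labels=("no_prey", "prey")):
    # Lazy import so the helper above works without TensorFlow Lite installed.
    from tflite_runtime.interpreter import Interpreter  # or: tf.lite.Interpreter
    interp = Interpreter(model_path=model_path)
    interp.allocate_tensors()
    inp = interp.get_input_details()[0]
    out = interp.get_output_details()[0]
    # image_array must match the model's input shape and dtype from the export.
    interp.set_tensor(inp["index"], image_array)
    interp.invoke()
    scores = interp.get_tensor(out["index"])[0]
    return top_label(scores, labels)
```

The `tflite_runtime` wheel is much lighter than full TensorFlow and runs on a Raspberry Pi, which fits your plan of moving off the Windows PC later.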

The images I used to train the Cascade.xml are frontal images. Or does it only work with images taken from below that show the cat's mouth?

What image size did you use for training with Google AutoML?

What did it cost you to create the model? I looked at the pricing of Google AutoML Vision, but I did not fully understand it.

Maybe you have some tips for me on how to improve my recognition?

Best regards

PS: Sorry for the English, I had it machine-translated ;-)
