For some time now I have been working on a program that detects whether my cat has prey in his mouth.
Currently the detection is based on a self-created Cascade.xml, built with the Cascade Trainer GUI.
I trained it on positive and negative images. Unfortunately the hit rate is not very good, so I am looking for better detection logic.
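For reference, this is a minimal sketch of how I use the trained Cascade.xml with OpenCV. The file names and the lock threshold are placeholders, and I split the lock decision into its own small helper so it is easy to tune:

```python
def should_lock(num_detections, threshold=1):
    """Decide whether to lock the flap: lock on `threshold` or more hits.

    threshold=1 is an assumed default; raising it trades missed prey
    for fewer false locks.
    """
    return num_detections >= threshold

if __name__ == "__main__":
    import cv2  # OpenCV, pip install opencv-python

    cascade = cv2.CascadeClassifier("Cascade.xml")
    frame = cv2.imread("snapshot.jpg")  # placeholder: one frame from the camera
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # scaleFactor / minNeighbors are the usual knobs for the hit/false-alarm trade-off
    hits = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if should_lock(len(hits)):
        print("prey suspected - lock the flap")
```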
I use the SureFlap cat flap with hub, and I can lock or unlock the flap from my Python program.
Whether the speed will be sufficient I still have to test; there is not much time between detection and locking the flap.
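To test whether the speed is sufficient, I plan to time one detection cycle with the standard library. The 0.5 s budget below is only an assumed placeholder, not a measured requirement:

```python
import time

def time_call(fn, *args):
    """Run fn once and return (result, elapsed_seconds)."""
    start = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - start

def within_budget(elapsed, budget_s=0.5):
    """Check a measured latency against the time budget.

    0.5 s is an assumption - the real budget depends on how fast the cat
    reaches the flap after the camera sees it.
    """
    return elapsed <= budget_s

# Usage with a dummy stand-in for the real detector:
_, elapsed = time_call(lambda: sum(range(1000)))
print("fast enough:", within_budget(elapsed))
```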
My program runs under Python 3.7 (64-bit).
Later I want to run the whole thing on a Raspberry Pi; at the moment everything is still running on my Windows PC.
Now to my questions:
When you create a model with Google AutoML, how do you integrate it into a Python program?
Is there an XML file you can integrate, like with the cascade?
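From what I have read so far, AutoML Vision Edge seems to export a TensorFlow Lite model (a `model.tflite` plus a label file) rather than an XML file. This is a sketch of how I would try to load such an export; the file names, the label-file format, and the use of `tflite_runtime` (the small interpreter package for the Raspberry Pi) are my assumptions:

```python
def top_label(scores, labels):
    """Return (label, score) for the highest-scoring class."""
    best = max(range(len(scores)), key=lambda i: scores[i])
    return labels[best], scores[best]

if __name__ == "__main__":
    import numpy as np
    # Assumption: tflite_runtime on the Pi; on a PC, tensorflow's
    # tf.lite.Interpreter offers the same API.
    from tflite_runtime.interpreter import Interpreter

    interpreter = Interpreter(model_path="model.tflite")  # assumed file name
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    labels = open("dict.txt").read().splitlines()  # assumed label file
    # Dummy input with the shape/dtype the model expects; real code would
    # resize a camera frame to that shape instead.
    frame = np.zeros(inp["shape"], dtype=inp["dtype"])
    interpreter.set_tensor(inp["index"], frame)
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])[0]
    print(top_label(scores, labels))
```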
The images I used to train the Cascade.xml are frontal shots. Or does it only work with images taken from below that show the cat's mouth?
What image size did you use for training with Google AutoML?
What did it cost you to create the model? I looked at the pricing model of Google AutoML Vision, but I did not fully understand it.
Maybe you have some tips for me on how to improve my detection?
PS: Sorry for my English, it is machine-translated.