jvandborg
Posts: 1
Joined: Wed Sep 04, 2019 12:09 pm

Re: The Integrated Cat Flap : Pi Zero

Wed Sep 04, 2019 5:46 pm

I really look forward to when you share the code of your project. I face the same problem with our cat, and I have been thinking about starting a similar project, but I would much appreciate a head start from what you have so far. :D

staebchen1
Posts: 1
Joined: Sun Nov 10, 2019 2:19 pm

Re: The Integrated Cat Flap : Pi Zero

Sun Nov 10, 2019 2:25 pm

Hi,

I've been working for some time on a program that detects whether my cat has prey in his mouth.
Currently the detection is based on a self-trained Cascade.xml, created with Cascade Trainer GUI.
I used images of him with prey as positives and without as negatives. Unfortunately the hit rate is not very good, and I am looking for better detection logic.
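
For reference, this is roughly what my current detection step looks like with OpenCV (file names and parameter values are placeholders, not my exact settings):

import cv2

# Load the self-trained cascade produced by Cascade Trainer GUI
cascade = cv2.CascadeClassifier("Cascade.xml")

# Read a camera frame and convert it to grayscale for the detector
frame = cv2.imread("frame.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# scaleFactor and minNeighbors trade hit rate against false positives
hits = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5, minSize=(60, 60))
print(len(hits), "match(es)")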

I use the SureFlap cat flap with hub and can lock or unlock the flap from my Python program.
Whether that will be fast enough I still have to test :-)
There is not much time for detecting the prey and locking the flap.

My program runs under Python 3.7 (64-bit).
Later I want to run the whole thing on a Raspberry Pi; at the moment everything is still running on my Windows PC.

Now to my question:
When you create a model with Google AutoML, how do you integrate it into a Python program?
Is there a file, like the cascade XML, that you can load?


The images I used to train the Cascade.xml are frontal images. Or does it only work with images taken from below that show the cat's mouth?

What image size did you use for training with Google AutoML?

What did it cost you to create the model? I looked at the pricing model of Google AutoML Vision, but I did not fully understand it.

Maybe you have some tips for me on how to improve my recognition rate?

thanks
Best regards
Anja

PS: Sorry for the English, it is machine-translated ;-)

TheGerbil
Posts: 1
Joined: Sat Dec 21, 2019 10:14 am

Re: The Integrated Cat Flap : Pi Zero

Sat Dec 21, 2019 10:21 am

Hi, for so long I have searched for a cat flap that just has a camera and an app to open the flap for the cat. Our cat is relentless, wiping out anything that moves in the garden! We can't change her behaviour, so we simply set the cat flap to let the cat out only... Then like clockwork at around 3am she's ready to come back in and get warm, so I get up and let her in (she chews the cat flap to wake us up!)

Is this at a level where you could share the workings and code so that we can build one? I was about to embark on integrating a webcam into the flap and some sort of servo to release the flap when I found your solution, which is a much more automated approach once it's trained. Either way, as the cat gets older I'd rather not keep locking her out!!

Thanks, and great work. Really well done.

lucatoldo
Posts: 2
Joined: Sat Feb 16, 2019 4:53 pm

Re: The Integrated Cat Flap : Pi Zero

Tue Dec 24, 2019 1:00 pm

Hi, can you share the images so that others can experiment on the same dataset?

Lunanero
Posts: 1
Joined: Wed Feb 12, 2020 11:21 pm

Re: The Integrated Cat Flap : Pi Zero

Wed Feb 12, 2020 11:27 pm

I have been looking for a cat flap like this for years. In Germany there was a project like yours, but it seems it never turned into a real product.

I think this is the best concept: take a photo of the cat's face and detect whether it has something in its mouth or not.

I would love to pay for a flap like that. Right now I have to keep my flap closed and only let my cat out, because I have already had several live mice in my house.

Can you help me?

BigJinge
Posts: 33
Joined: Sat Jul 09, 2016 12:23 pm

Re: The Integrated Cat Flap : Pi Zero

Mon Jun 29, 2020 1:35 am

staebchen1 wrote:
Sun Nov 10, 2019 2:25 pm
[staebchen1's post, quoted in full above]
Hi Anja,

Sorry I haven't been able to respond before.

As I wrote earlier in this thread, I gave up using cascades as they didn't have the flexibility to detect the huge variation in the cat's positions (with or without prey). As you've found out, the cascades didn't have enough face data inside them AND OpenCV didn't have the intellect to work out what was going on if the image didn't match the criteria, short of rotating the detection window around.

Hence I moved to Google AutoML, as it gives better training and detection percentages.

Once you have a dataset, Google normally gives you around $200 worth of free rendering time (at least they did when I was developing the project), which lasts a year. So creating your model should be free. If you run out of free credit, it normally costs around $20 of render time on their servers to produce the model. I then download an Edge TPU model for my Google AI stick (see below).

The image size the flap camera takes, and which is used throughout, is 640x480.
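
On the Pi side, grabbing frames at that size with the standard picamera module looks something like this (a minimal sketch, not my exact capture code):

from time import sleep
from picamera import PiCamera

# Capture at the 640x480 size used for training and identification
camera = PiCamera(resolution=(640, 480))
sleep(2)  # give the sensor a moment to adjust exposure
camera.capture("frame.jpg")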

Storing the dataset of 26k photos and the finished models on their servers works out at about 2p a month for me. Peanuts.

I use Google's Coral Accelerator USB stick plugged into the Pi Zero. This vastly reduces the identification time.
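
To answer your integration question: there's no XML file. AutoML Vision exports a TensorFlow Lite model (a .tflite file, compiled for the Edge TPU), which you load from Python. A minimal sketch using Google's pycoral library (file names are placeholders, not my exact code):

from PIL import Image
from pycoral.adapters import classify, common
from pycoral.utils.edgetpu import make_interpreter

# Load the AutoML-exported model compiled for the Edge TPU
interpreter = make_interpreter("model_edgetpu.tflite")
interpreter.allocate_tensors()

# Resize the camera frame to the model's input size and classify it
image = Image.open("frame.jpg").resize(common.input_size(interpreter))
common.set_input(interpreter, image)
interpreter.invoke()

# Top result, e.g. prey / no-prey with a confidence score
for c in classify.get_classes(interpreter, top_k=1):
    print(c.id, c.score)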

Let me know how you're getting on.

Best

BigJinge

BigJinge
Posts: 33
Joined: Sat Jul 09, 2016 12:23 pm

Re: The Integrated Cat Flap : Pi Zero

Mon Jun 29, 2020 2:03 am

Again, I don't know where the time has gone. It's been a year, and it has passed in a flash as other projects and life have taken up so much time.

That said, the flap has been working now for a year; a few reboots, but it's fine.

Normally by now we would have had 20 or so mice brought in, but since the flap has been operational we've had none. The missus and I are very happy, especially as I've not had to dismantle the living room at 2.30am.

I have now set the cat flap to output video if the AI detects the cat with prey three times in a row. Remember, each detection takes 60ms or so.

Note that once the flap locks, it stays locked for 30 seconds. Our cat likes to stay around the flap and try the door (which is locked). As I know the cat will do this, I don't need to keep checking for prey while the flap is locked. After 30 seconds the flap unlocks, and if the cat returns with prey it locks for another 30 seconds. I'll post some videos if I can.
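
In pseudo-Python the lock logic is roughly this (capture_frame, detect_prey, lock_flap and unlock_flap stand in for my actual routines):

import time

PREY_THRESHOLD = 3   # consecutive prey detections before locking
LOCK_SECONDS = 30    # how long the flap stays locked

prey_count = 0
while True:
    if detect_prey(capture_frame()):  # each check takes ~60ms on the Coral stick
        prey_count += 1
    else:
        prey_count = 0
    if prey_count >= PREY_THRESHOLD:
        lock_flap()
        time.sleep(LOCK_SECONDS)  # no point checking while the flap is locked
        unlock_flap()
        prey_count = 0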

Over the year the cat has learnt that there is no point trying to get in with prey, and looking at the logs it appears it isn't bringing back the prey it used to. So you CAN teach a cat.

On providing a flap to others.
----------------------------------------------

I don't have the time to build flaps or train models for others. If I did, the parts list and my labour would come in at over £1000, which makes it unviable, hence I'm not making any. Also, the training data is unique to your cat(s), the location of the flap, the lighting, etc., so my model most likely wouldn't work for you. Our tabby cat only brings in prey at night, and I don't have any daytime prey images, so even if you had a tabby who did bring in prey during the day, the ident score would be different. If you had a black cat, the same thing.

Going forward
----------------------

I really do want to build a blog showing all the steps I took, but I will answer questions if I can before then.

Best and stay well

BigJinge
