Hello. I am a PhD student in biomedical engineering. I am currently working on a rat model for a certain type of pain. As part of this model, animals in pain behave differently from animals that are not in pain. Specifically, rats that are in pain wrestle less frequently than those that are not. NOTE: this wrestling is extremely loud and can be heard from adjacent rooms. To date, no one has really quantified this behavior, and I think it is a massive untapped information mine. Final note, for those inexperienced with animal models of human disease: quantifying pain is extremely difficult.
As part of this study, I am planning to house the pain-model animals on the right side of a room and uninjured animals on the left side. I want to place a device between the racks of cages so that I can record and quantify the total amount of wrestling each group does. I don't think there is a practical way to get granularity down to the individual animal, but by spatially separating the groups, the total amount of wrestling noise should show a difference between sides.
I am looking to buy the following items: Raspberry Pi Zero, ReSpeaker 2-Mics Pi HAT, and a large USB battery pack.
The way I see it, as long as the device is set up so that each microphone faces one cage rack, I should be able to tell which rack (right or left) the wrestling is coming from. It will be tough to distinguish simultaneous wrestling on both sides, but I am not super worried about that.
My main question is: will it be possible to separate the right and left audio channels with this setup, and how easy will it be to quantify the data? I have decent experience coding in C and Java, but I have never used a Raspberry Pi.
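For what it's worth, here is roughly what I imagine the analysis looking like. My understanding (please correct me if wrong) is that the two mics on the HAT show up as an ordinary stereo capture device, so once the recording is split into two arrays of samples, deciding which side was louder is just comparing per-channel energy. A minimal sketch in Python, assuming the samples are already loaded as floats; the function names, the 3 dB threshold, and the synthetic test signal are all my own placeholders, not anything specific to the ReSpeaker:

```python
import math

def channel_rms(samples):
    """Root-mean-square energy of one channel's samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def louder_side(left, right, threshold_db=3.0):
    """Return 'left', 'right', or 'both' depending on which channel
    is louder by at least threshold_db decibels (threshold is a guess
    I would tune against real recordings)."""
    l_rms, r_rms = channel_rms(left), channel_rms(right)
    if r_rms == 0:
        return "left" if l_rms > 0 else "both"
    diff_db = 20 * math.log10(l_rms / r_rms)
    if diff_db >= threshold_db:
        return "left"
    if diff_db <= -threshold_db:
        return "right"
    return "both"

# Toy stand-in for a recording: a loud 440 Hz tone on the left
# channel and a much quieter one on the right, 16 kHz sample rate.
rate = 16000
t = [i / rate for i in range(rate)]  # one second of sample times
left = [0.8 * math.sin(2 * math.pi * 440 * x) for x in t]
right = [0.1 * math.sin(2 * math.pi * 440 * x) for x in t]
print(louder_side(left, right))  # prints "left" (~18 dB difference)
```

In practice I assume I'd run this over short windows (say, one second each) and count windows attributed to each side per hour, rather than classifying the whole recording at once, but I'd welcome pointers to better approaches.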
I am not looking for someone to solve this problem for me - only looking to see if it's possible and direct me to some resources. Thank you!