The Sounds Of New York City (SONYC) is a project by New York University (NYU) to monitor noise pollution in the city.
“Noise pollution is one of the topmost quality of life issues for urban residents in the United States,” says NYU.
“It has been estimated that nine out of ten adults in New York City (NYC) are exposed to excessive noise levels.”
To analyse the problem, NYU has set up Raspberry Pi devices with microphones to monitor audio levels around NYC.
The MagPi caught up with Charlie Mydlarz, a senior research scientist working on the project.
“We are working with New York City agencies, including the Department of Environmental Protection,” explains Charlie. “We will provide them with an aggregated noise data stream so they can enforce the noise code more efficiently.”
“We also work with the Department of Health and Mental Hygiene in a bid to uncover relationships between health and noise in the city.”
“The sensor node is built around the Raspberry Pi 2B running a headless Raspbian Jessie install,” Charlie reveals.
Attached to the Raspberry Pi is a digital microphone. “All data is transmitted via WiFi,” adds Charlie. It’s currently running on the NYU network across its campus.
“The sensors themselves don’t have a name,” he tells us. “I usually call them Acoustic Sensing Devices or ASDs. ‘Listening box’ sounds way too ominous!”
Data recorded by the Raspberry Pi devices is being used to create a Mission Control Centre: information from the sensor network flows through an infrastructure that analyses the retrieved audio.
Machine learning techniques are being used to identify sources of noise pollution, such as police sirens or street music.
SONYC has worked hard to ensure that the project doesn’t encroach on privacy. “The audio data is collected in ten-second snippets,” says Charlie.
Furthermore, recordings are randomly separated in time to ensure privacy is maintained.
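The article doesn't state SONYC's actual sampling parameters, but the idea of random temporal separation can be sketched in a few lines of Python. The gap bounds below are purely illustrative assumptions, not the project's real values:

```python
import random

SNIPPET_SECONDS = 10  # each recording is a ten-second snippet

def snippet_start_times(total_seconds, min_gap=30, max_gap=120, seed=None):
    """Return start times (in seconds) for ten-second snippets.

    A random gap is inserted between snippets so that no continuous
    conversation can be reconstructed from consecutive recordings.
    The min_gap/max_gap defaults are illustrative, not SONYC's values.
    """
    rng = random.Random(seed)
    times, t = [], 0
    while t + SNIPPET_SECONDS <= total_seconds:
        times.append(t)
        # advance past the snippet just recorded, plus a random pause
        t += SNIPPET_SECONDS + rng.randint(min_gap, max_gap)
    return times
```

With these assumed bounds, consecutive snippet starts are between 40 and 130 seconds apart, so at most a quarter of any given minute of street audio could ever be captured.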
During recording, the audio snippets are compressed using the lossless FLAC encoder, and then encrypted using AES and RSA encryption.
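Combining AES and RSA in this way is commonly known as hybrid encryption: a fresh AES key encrypts the bulk audio data, and RSA encrypts that key. The article doesn't describe SONYC's implementation, but a minimal sketch of the pattern, using the third-party `cryptography` package with illustrative function names, might look like this:

```python
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_snippet(flac_bytes, rsa_public_key):
    """Hybrid-encrypt a FLAC-compressed snippet (illustrative, not SONYC's code).

    A random 256-bit AES key encrypts the audio; the RSA public key then
    wraps that AES key, so only the holder of the private key can decrypt.
    """
    aes_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)  # AES-GCM needs a unique nonce per message
    ciphertext = AESGCM(aes_key).encrypt(nonce, flac_bytes, None)
    wrapped_key = rsa_public_key.encrypt(
        aes_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None))
    return wrapped_key, nonce, ciphertext
```

The design advantage is that the sensor node only ever holds the public key, so a compromised node cannot decrypt snippets it has already transmitted.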
“We have done a lot of work to maintain privacy on the project, and have had an external consultant confirm that street-level, intelligible speech at conversational levels cannot be picked up,” insists Charlie.
“A person would have to shout at the sensor for the speech to be intelligible, and that wouldn’t constitute a private conversation.”
The team also deploy signs below each node to inform people what they are doing.
“As of today, we have 20 nodes up and running,” Charlie tells us. By the end of 2016 they should have 50 deployed, and by spring 2017 there will be 100.
“These poor little Pis have just survived the hot and humid NYC summer and are about to experience the freezing winter,” says Charlie. “They’ve been fantastic so far and have spent weeks running in freezers in the lab, so I’m confident that they can take the winter too.”