Update: my code is now at (I changed the name of the repository):
https://bitbucket.org/togiles/lightshowpi
CuriousInventor wrote: I used code from the LightShow Pi project, which I believe got their FFT code from this post.
Indeed, we did get the original FFT code from this post (see my previous posts above). Thanks again for your enhancements, they're great!
stulevine wrote: Oh, regarding that link, that's where I originally started with the MCP3008 IC and decided to pursue using the IC with the hardware SPI of the Raspberry Pi.
stulevine, did you already manage to work out your little project? I am trying to do something similar... I have one LED strip, and every time the MAX4466 mic outputs a signal (i.e. it picks up noise) the colour of my LED strip should change.
So, with that said, my problem is not reading the analog input, as I am already able to do that via spidev (installed under Raspbian Wheezy) using a much simpler approach with fewer GPIO pins. Here's the Python code if you are interested.
So, I want to take what I read from analog pin 4, process it with an FFT, and graph it on the 8x8 matrix like they do in the Arduino music visualizer tutorial. I'm just not quite understanding how to take that voltage and process it via FFT like you do in your scripts.
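For reference, the 10-bit reading from an MCP3008 comes back spread across two of the three SPI response bytes (e.g. from spidev's xfer2). A minimal decode sketch; the response bytes below are made-up example values, not captured hardware data:

```python
def mcp3008_value(resp):
    """Extract the 10-bit ADC reading from a 3-byte MCP3008 SPI response."""
    # Low 2 bits of byte 1 are the high bits of the result; byte 2 is the rest
    return ((resp[1] & 0x03) << 8) | resp[2]

# Made-up response: low two bits of byte 1 set, byte 2 = 0xFF -> full scale
full_scale = mcp3008_value([0x00, 0x03, 0xFF])   # 1023
```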
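As a rough sketch of the idea (not the tutorial's actual code): take a frame of ADC samples, run a real FFT over it, and average the magnitudes into 8 bands. The sample rate, frame size, and synthetic test tone below are all placeholder values; on real hardware the `samples` array would come from repeated MCP3008 reads:

```python
import numpy as np

SAMPLE_RATE = 8000   # placeholder ADC sampling rate in Hz
CHUNK = 256          # samples per FFT frame

def band_levels(samples, n_bands=8):
    """Average the magnitude spectrum of `samples` into n_bands equal slices."""
    fourier = np.fft.rfft(samples)[:-1]   # drop the last bin so the length divides by 8
    power = np.abs(fourier)
    return power.reshape(n_bands, -1).mean(axis=1)

# Stand-in for real ADC data: a pure 1 kHz tone
t = np.arange(CHUNK) / float(SAMPLE_RATE)
levels = band_levels(np.sin(2 * np.pi * 1000.0 * t))
# 1 kHz lands in bin 32 of 128, i.e. band 2 of the 8 bands
```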
Code: Select all
0 1 0 0 0 0 0 0
0 0 0 0 0 0 0 0
7 1 1 1 1 1 0 3
3 0 1 1 0 0 0 1
3 1 1 1 0 0 0 0
3 1 1 1 0 0 0 0
3 1 0 1 0 0 0 0
3 1 1 1 1 1 0 3
Code: Select all
#!/usr/bin/env python
# 8 bar Audio equaliser
import alsaaudio as aa
from time import sleep
from struct import unpack
import numpy as np
import wave

matrix = [0, 0, 0, 0, 0, 0, 0, 0]
weighting = [2, 4, 8, 16, 32, 64, 128, 256]  # Change these according to taste

# Set up audio
wavfile = wave.open('songs/livingonaprayer.wav', 'r')
sample_rate = wavfile.getframerate()
print "Sample_rate : " + str(sample_rate)
no_channels = wavfile.getnchannels()
chunk = 4096
output = aa.PCM(aa.PCM_PLAYBACK, aa.PCM_NORMAL)
output.setchannels(no_channels)
output.setrate(sample_rate)
output.setformat(aa.PCM_FORMAT_S16_LE)
output.setperiodsize(chunk)

# Return the power array index corresponding to a particular frequency
def piff(val):
    return int(2 * chunk * val / sample_rate)

def calculate_levels(data, chunk, sample_rate):
    global matrix
    # Convert raw data to numpy array
    data = unpack("%dh" % (len(data) / 2), data)
    data = np.array(data, dtype='h')
    # Apply FFT - real data, so rfft is used
    fourier = np.fft.rfft(data)
    # Remove the last element so the array is the same size as chunk
    fourier = np.delete(fourier, len(fourier) - 1)
    # Find amplitude
    power = np.abs(fourier)
    matrix[0] = int(np.mean(power[piff(0):piff(156)]))
    matrix[1] = int(np.mean(power[piff(156):piff(313)]))
    matrix[2] = int(np.mean(power[piff(313):piff(625)]))
    matrix[3] = int(np.mean(power[piff(625):piff(1250)]))
    matrix[4] = int(np.mean(power[piff(1250):piff(2500)]))
    matrix[5] = int(np.mean(power[piff(2500):piff(5000)]))
    matrix[6] = int(np.mean(power[piff(5000):piff(10000)]))
    matrix[7] = int(np.mean(power[piff(10000):piff(20000)]))
    # Tidy up column values for the LED matrix
    matrix = np.divide(np.multiply(matrix, weighting), 1000000)
    # Set floor at 0 and ceiling at 8 for the LED matrix
    matrix = matrix.clip(0, 8)
    return matrix

print "Processing....."
data = wavfile.readframes(chunk)
file = open("livingonaprayer.txt", "w")
while data != '':
    output.write(data)
    matrix = calculate_levels(data, chunk, sample_rate)
    for i in range(0, 8):
        file.write(str((1 << matrix[i]) - 1) + " ")
    file.write("\n")
    data = wavfile.readframes(chunk)
file.close()
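One way to sanity-check the band boundaries in calculate_levels is to feed a pure tone through the same maths; a sketch using the script's own piff() and its 2*chunk-sample FFT length (the 440 Hz tone is an arbitrary choice):

```python
import numpy as np

chunk, sample_rate = 4096, 44100

def piff(val):
    # Bin index of frequency `val` Hz in an FFT of 2*chunk samples
    return int(2 * chunk * val / sample_rate)

# A pure 440 Hz tone, 2*chunk samples long (the script unpacks
# 2*chunk shorts per frame of stereo audio)
t = np.arange(2 * chunk) / float(sample_rate)
power = np.abs(np.fft.rfft(np.sin(2 * np.pi * 440.0 * t)))
peak = int(np.argmax(power))
# 440 Hz should fall inside the 313-625 Hz slice used for matrix[2]
```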
snd_loopback was included in the last Wheezy image; I'm not sure about Jessie.
Hello,
SpaceGerbil wrote: All done.
Video of it in action below:
http://www.youtube.com/watch?v=Du0zp7AjlBY
The code:
I got over the remaining problems with numpy. It is important that you capture in NORMAL (block) mode so you always get the same size set of data. It has been running for 5 hours today with no lock-ups/underruns/overflows.
Code: Select all
#!/usr/bin/env python
# 8 bar Audio equaliser using MCP23017
import alsaaudio as aa
import audioop
import smbus
from time import sleep
from struct import unpack
import numpy as np

bus = smbus.SMBus(0)  # Use '1' for newer Pi boards
ADDR  = 0x20  # The I2C address of MCP23017
DIRA  = 0x00  # PortA I/O direction, by pin. 0=output, 1=input
DIRB  = 0x01  # PortB I/O direction, by pin. 0=output, 1=input
BANKA = 0x12  # Register address for Bank A
BANKB = 0x13  # Register address for Bank B

# Set up the 23017 for 16 output pins
bus.write_byte_data(ADDR, DIRA, 0)  # all zeros = all outputs on Bank A
bus.write_byte_data(ADDR, DIRB, 0)  # all zeros = all outputs on Bank B

def TurnOffLEDS():
    bus.write_byte_data(ADDR, BANKA, 0xFF)  # set all columns high
    bus.write_byte_data(ADDR, BANKB, 0x00)  # set all rows low

def Set_Column(row, col):
    TurnOffLEDS()
    bus.write_byte_data(ADDR, BANKA, col)
    bus.write_byte_data(ADDR, BANKB, row)

# Initialise matrix
TurnOffLEDS()

# Set up audio
sample_rate = 44100
no_channels = 2
chunk = 512  # Use a multiple of 8
data_in = aa.PCM(aa.PCM_CAPTURE, aa.PCM_NORMAL)
data_in.setchannels(no_channels)
data_in.setrate(sample_rate)
data_in.setformat(aa.PCM_FORMAT_S16_LE)
data_in.setperiodsize(chunk)

def calculate_levels(data, chunk, sample_rate):
    # Convert raw data to numpy array
    data = unpack("%dh" % (len(data) / 2), data)
    data = np.array(data, dtype='h')
    # Apply FFT - real data, so rfft is used
    fourier = np.fft.rfft(data)
    # Remove the last element so the array is the same size as chunk
    fourier = np.delete(fourier, len(fourier) - 1)
    # Find amplitude
    power = np.log10(np.abs(fourier)) ** 2
    # Arrange the array into 8 rows for the 8 bars on the LED matrix
    power = np.reshape(power, (8, chunk / 8))
    matrix = np.int_(np.average(power, axis=1) / 4)
    return matrix

print "Processing....."
while True:
    TurnOffLEDS()
    # Read data from device
    l, data = data_in.read()
    data_in.pause(1)  # Pause capture whilst RPi processes data
    if l:
        # catch frame error
        try:
            matrix = calculate_levels(data, chunk, sample_rate)
            for i in range(0, 8):
                Set_Column((1 << matrix[i]) - 1, 0xFF ^ (1 << i))
        except audioop.error, e:
            if e.message != "not a whole number of frames":
                raise e
    sleep(0.001)
    data_in.pause(0)  # Resume capture
The I2C part of the code is well documented, so I'll just explain a few bits I didn't comment in the code. The original code referenced by yamanoorsai did not make good use of the powerful numpy routines. For starters, the audio data is real (integers), so rfft is appropriate for these arrays.
As I was using 8 columns for the equaliser, I arranged the array into 8 rows, each with 64 elements (chunk/8). Each row holds the 'amplitudes' of 64 consecutive frequencies spaced 0.5*sample_rate/chunk apart (0.5*44100/512 = 43 Hz), i.e. row 1 is the amplitudes for the following frequencies:
row 1: 0, 43, 86, ......., 2713
row 2: 2756, 2799, ..............
 .
 .
row 8: 19294, ..........., 22007
I then took the mean for each row (very crude I know) to return 8 values for the LED matrix.
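The reshape-and-average step can be tried in isolation; the ramp below is just a stand-in for a real magnitude spectrum:

```python
import numpy as np

chunk = 512
# Stand-in for a magnitude spectrum (after dropping the last rfft bin)
power = np.arange(chunk, dtype=float)

# 8 rows of chunk/8 = 64 consecutive bins, then the mean of each row
matrix = power.reshape(8, chunk // 8).mean(axis=1)
# matrix[0] averages bins 0-63, matrix[7] averages bins 448-511
```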
There are too many areas for improvement to list them all. The most obvious one is to focus on the frequencies most applicable to music/speech (maybe just 60-5000 Hz) and ignore the rest.
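Limiting the display to roughly 60-5000 Hz would just mean slicing the spectrum before splitting it into bands; a sketch (np.array_split tolerates a slice length that isn't a multiple of 8):

```python
import numpy as np

chunk, sample_rate = 512, 44100
bin_width = sample_rate / (2.0 * chunk)   # ~43 Hz per bin, as above

lo = int(60 / bin_width)      # first bin around 60 Hz
hi = int(5000 / bin_width)    # last bin below ~5000 Hz
power = np.random.rand(chunk)             # stand-in magnitude spectrum
bands = [b.mean() for b in np.array_split(power[lo:hi], 8)]
```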
I hope some of you have a laugh with this - it is very cool when hooked up to an iPod or the radio.
If some audiophiles could post their successes or suggestions for improvement, I would be grateful (so that I can use it in my lessons!)