chromebookbob
Posts: 93
Joined: Sat Jan 28, 2012 10:10 am
Contact: Website

Re: Lightweight python motion detection

Sun May 25, 2014 7:33 am

I was wondering if there is a way to make your script only check for movement once a minute, to get a sort of time-lapse effect.
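One rough way to do that (an untested sketch, not something from the script itself) is to sleep at the bottom of the main loop so it only compares frames once a minute:

Code: Select all

import time

CHECK_INTERVAL = 60   # seconds between motion checks, for a timelapse-like cadence

def motion_detected():
    # placeholder for the captureTestImage()/pixel-diff logic in the script
    return False

while True:
    if motion_detected():
        print "motion - saveImage() would be called here"
    time.sleep(CHECK_INTERVAL)   # wait a minute before the next comparison

Alternatively, the forceCapture / forceCaptureTime setting already in the script takes a photo at a fixed interval regardless of motion, which may be closer to a true time-lapse.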
Visit my website with the original instructions on it for both vanilla and Tekkit Lite Servers: www.picraftbukkit.webs.com

Dedalus
Posts: 8
Joined: Sat Jan 19, 2013 11:56 pm

Re: Lightweight python motion detection

Tue Jul 15, 2014 10:28 am

I have this running well as a security camera detecting mainly people walking into the area, but it triggers too easily for my liking, giving lots of images with nothing in frame.

Can anyone provide a more detailed explanation of the "Threshold" and "Sensitivity" settings, perhaps with some real-world examples of the values?

Typical "intruders" on my camera take up about a 1/3 to 1/2 (and maybe 1/8 of the width) of the height of the image depending on how close they are, so they're quite large in the frame.

My current settings are:

Threshold=18
Sensitivity=180
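For a rough feel of the numbers (a back-of-envelope sketch, assuming the 100x75 test image the scripts in this thread use): threshold is how much the green-channel value of a single pixel must change (0-255) to count as "changed", and sensitivity is how many such changed pixels are needed before a photo is taken. An intruder covering about 1/3 of the height and 1/8 of the width of the frame changes roughly 300 of the 7500 test pixels:

Code: Select all

# back-of-envelope estimate, assuming the 100x75 test image from the scripts in this thread
test_w, test_h = 100, 75
person_w = test_w / 8      # ~1/8 of the frame width
person_h = test_h / 3      # ~1/3 of the frame height (worst case)
print person_w * person_h  # ~300 changed pixels out of 7500

So a sensitivity of 180 should still fire on people of that size; raising threshold mainly rejects per-pixel noise (sensor grain, small brightness flicker), while raising sensitivity demands a larger changed area.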

On a slightly different note, is it yet possible to alter the ISO settings? I'd like a slightly faster shutter speed than the default if possible.
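On the ISO/shutter question: raspistill accepts -ISO and -ss (shutter speed in microseconds), and the motion scripts in this thread pass extra flags through the cameraSettings string, so something along these lines should work (untested, and the values are only examples):

Code: Select all

# sketch: extra raspistill flags passed via the script's cameraSettings string
cameraSettings = "-ISO 100 -ss 4000"   # ISO 100, 4 ms shutter - example values only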
Thanks!

HerrJemineh
Posts: 10
Joined: Tue Aug 13, 2013 1:35 pm

Re: Lightweight python motion detection

Mon Aug 04, 2014 8:35 am

Hi, a problem has suddenly appeared in my system. Everything had been working very well, but recently the time between motion detection and the photo has become longer. As a consequence, the photos are taken too late and whatever triggered the detection is no longer visible in the photo. Does anyone know how I can reduce the time between detection and capture?
I'm using the following script:

Code: Select all

#!/usr/bin/python

    # original script by brainflakes, improved by pageauc, peewee2 and Kesthal
    # www.raspberrypi.org/phpBB3/viewtopic.php?f=43&t=45235

    # You need to install PIL to run this script
    # type "sudo apt-get install python-imaging-tk" in an terminal window to do this

    # play with it KLL
    # /home/pi/python_cam/picam4.py
    # rev b with dropbox, ftp, email options
    
import StringIO
import subprocess
import os
import time
from datetime import datetime
from PIL import Image

# Import smtplib to provide email functions
import smtplib
#from email.mime.text import MIMEText

    # Motion detection settings:
    # Threshold          - how much a pixel has to change by to be marked as "changed"
    # Sensitivity        - how many changed pixels before capturing an image, needs to be higher if noisy view
    # ForceCapture       - whether to force an image to be captured every forceCaptureTime seconds, values True or False
    # filepath           - location of folder to save photos
    # filenamePrefix     - string that prefixes the file name for easier identification of files.
    # diskSpaceToReserve - Delete oldest images to avoid filling the disk. How many bytes to keep free on disk.
    # cameraSettings     - "" = no extra settings; "-hf" = Set horizontal flip of image; "-vf" = Set vertical flip; "-hf -vf" = both horizontal and vertical flip
threshold   = 10
sensitivity = 140
rotation    = 180               # KLL camera mounting

forceCapture = True
forceCaptureTime = 1 * 60       # every minute (for the webserver); original was 60 * 60 for once an hour

        # info by print
info_print = True

        # store image files to temp fs 
filepath       = "/run/shm/"
filenamePrefix = "RPICAM"
file_typ       = ".jpg"

prg_msg = "boot"  # used to get more info in print when a picture is made

        # option: the newest picture is also referred to by a link file in the same dir
link_tolastpicture = True       # KLL 
lfile = "last.jpg"              # KLL make it as symlink

        # option send file to DROPBOX, ! very long API procedure !
send_dropbox = False            # KLL test files in drop_box and to PC

        # option send ( or move ) file to a FTP server 
send_ftp     = False             # KLL FTP
ftp_remotepath = "/usb1_1/rpi/"                         # a USB stick
ftp_account = "kll-ftp:*****@192.168.1.1:2121"          # in my router: USER:PASSWORD@SERVER:PORT

#wput_option = " -R"            # opt "-R" for move file
wput_option = " "

        # option email
send_email_enable = False
# Define email addresses to use
addr_to   = 'kllsamui@gmail.com'
addr_from = 'kllsamui@gmail.com'

# Define SMTP email server details
GMAIL_USER = 'kllsamui@gmail.com'
GMAIL_PASS = '*****'
SMTP_SERVER = 'smtp.gmail.com:587'

# email control
emaildeltaTime = 1 * 60 * 60                    # send mail again only after ... hour
last_send = time.time() - emaildeltaTime        # so first picture ( at start / boot ) also makes a email

# temp fs is ~100MB; keep 10MB free for other programs to use
diskSpaceToReserve = 10 * 1024 * 1024   # Keep 10 MB free on disk
# with 95*1024*1024 == 99614720 it deletes all files ??
# (it then deletes a file every time it makes a new one!)
# with 90*1024*1024 == 94371840 it deletes (only) the oldest one.


cameraSettings = ""

    # settings of the photos to save
saveWidth   = 1296
saveHeight  = 972
saveQuality = 15 # Set jpeg quality (0 to 100)

    # Test-Image settings
testWidth = 100
testHeight = 75

    # this is the default setting, if the whole image should be scanned for changed pixel
testAreaCount = 1
testBorders = [ [[1,testWidth],[1,testHeight]] ]  # [ [[start pixel on left side,end pixel on right side],[start pixel on top side,stop pixel on bottom side]] ]
    # testBorders are NOT zero-based, the first pixel is 1 and the last pixel is testWidth or testHeight

    # with "testBorders", you can define areas, where the script should scan for changed pixel
    # for example, if your picture looks like this:
    #
    #     ....XXXX
    #     ........
    #     ........
    #
    # "." is a street or a house, "X" are trees which move arround like crazy when the wind is blowing
    # because of the wind in the trees, there will be taken photos all the time. to prevent this, your setting might look like this:

    # testAreaCount = 2
    # testBorders = [ [[1,50],[1,75]], [[51,100],[26,75]] ] # area y=1 to 25 not scanned in x=51 to 100

    # even more complex example
    # testAreaCount = 4
    # testBorders = [ [[1,39],[1,75]], [[40,67],[43,75]], [[68,85],[48,75]], [[86,100],[41,75]] ]

    # in debug mode, a file debug.bmp is written to disk with changed pixels marked (green) and the borders of the scan areas marked
    # debug mode should only be turned on while testing the parameters above
debugMode = False # False or True
debug_bmp = "debug.bmp"


def send_email(recipient, subject, text):
        smtpserver = smtplib.SMTP(SMTP_SERVER)
        smtpserver.ehlo()
        smtpserver.starttls()
        smtpserver.ehlo
        smtpserver.login(GMAIL_USER, GMAIL_PASS)
        header = 'To:' + recipient + '\n' + 'From: ' + GMAIL_USER
        header = header + '\n' + 'Subject:' + subject + '\n'
        msg = header + '\n' + text + ' \n\n'
        smtpserver.sendmail(GMAIL_USER, recipient, msg)
        smtpserver.close()

    # Capture a small test image (for motion detection)
def captureTestImage(settings, width, height):
        command = "raspistill %s -w %s -h %s -t 200 -e bmp -n -o -" % (settings, width, height)
        imageData = StringIO.StringIO()
        imageData.write(subprocess.check_output(command, shell=True))
        imageData.seek(0)
        im = Image.open(imageData)
        buffer = im.load()
        imageData.close()
        return im, buffer

    # Save a full size image to disk
def saveImage(settings, width, height, quality, diskSpaceToReserve):
        global last_send #it is read and set here
        keepDiskSpaceFree(diskSpaceToReserve)
        #time = datetime.now()    # KLL bad to call this variable time !
        t_now = datetime.now()
        s_now = "-%04d%02d%02d-%02d%02d%02d" % (t_now.year, t_now.month, t_now.day, t_now.hour, t_now.minute, t_now.second)
        filename = filenamePrefix + s_now + file_typ
        fullfilename = filepath + filename
        lastfilename = filepath + lfile
        subprocess.call("raspistill %s -w %s -h %s -t 200 -e jpg -q %s -n -rot %s -o %s" % (settings, width, height, quality, rotation, fullfilename), shell=True)
        if info_print :
                print " %s captured %s" % (prg_msg,fullfilename)

        if link_tolastpicture : # tested ok
                try:
                        os.remove(lastfilename)
                except:
                        pass # not exist at first start
                os.symlink(fullfilename,lastfilename)
#               os.chmod(lastfilename,stat.S_IXOTH)    #stat.S_IRWXG
                pass

        if send_dropbox : # tested ok
                subprocess.call("/home/pi/python_cam/dropbox_uploader.sh upload %s /RPICAM1/" % (fullfilename), shell=True)
                if info_print :
                        print "upload dropbox"
                pass

        if send_ftp :  # only works from that dir! # tested ok
                if info_print :
                        wput_opt = wput_option
                        pass
                else :
                        wput_opt = wput_option+" -q" # quiet
                        pass
                os.chdir(filepath)
                subprocess.call("wput %s %s ftp://%s%s" % (wput_opt,filename,ftp_account,ftp_remotepath), shell=True)
                os.chdir("/")
                if info_print :
                        print "upload ftp"
                pass

        if send_email_enable :
                #print 'now: %s' % (str( time.time()))
                #print 'last_send: %s' % (str(last_send))
                #print 'emaildeltaTime: %s' % (str( emaildeltaTime ))
                if time.time() - last_send > emaildeltaTime :
                        emailsubject = 'from RPI CAM1: '
                        emailtext    = 'motion detect: ' + fullfilename
                        send_email(addr_to, emailsubject , emailtext)
                        last_send = time.time()
                        if info_print :
                                print "send email"
                        pass
                pass

    # Keep free space above given level
def keepDiskSpaceFree(bytesToReserve):
        if (getFreeSpace() < bytesToReserve):
            os.chdir(filepath)     #KLL now works better
            for dfilename in sorted(os.listdir(filepath)):
                if dfilename.startswith(filenamePrefix) and dfilename.endswith(file_typ):
                    print "Deleted %s to avoid filling disk" % (dfilename)
                    os.remove(dfilename)
                    if (getFreeSpace() > bytesToReserve):
                        os.chdir("/")  # KLL
                        return

    # Get available disk space
def getFreeSpace():
        st = os.statvfs(filepath)
        du = st.f_bavail * st.f_frsize
        #print " free space: %s " % (du)
        return du

#_________ main ________________________________________________________________________________________
    # Get first image
if info_print :
        print('get first image')
image1, buffer1 = captureTestImage(cameraSettings, testWidth, testHeight)

    # Reset last capture time
lastCapture = time.time()

if info_print :
        print('start loop')   # and take a very first picture to see start time ...
        

saveImage(cameraSettings, saveWidth, saveHeight, saveQuality, diskSpaceToReserve)
prg_msg = "motion"

while (True):

        # Get comparison image
        image2, buffer2 = captureTestImage(cameraSettings, testWidth, testHeight)

        # Count changed pixels
        changedPixels = 0
        takePicture = False

        if (debugMode): # in debug mode, save a bitmap-file with marked changed pixels and with visible testarea-borders
            debugimage = Image.new("RGB",(testWidth, testHeight))
            debugim = debugimage.load()

        for z in xrange(0, testAreaCount): # = xrange(0,1) with default-values = z will only have the value of 0 = only one scan-area = whole picture
            for x in xrange(testBorders[z][0][0]-1, testBorders[z][0][1]): # = xrange(0,100) with default-values
                for y in xrange(testBorders[z][1][0]-1, testBorders[z][1][1]):   # = xrange(0,75) with default-values; testBorders are NOT zero-based, buffer1[x,y] are zero-based (0,0 is top left of image, testWidth-1,testHeight-1 is bottom right)
                    if (debugMode):
                        debugim[x,y] = buffer2[x,y]
                        if ((x == testBorders[z][0][0]-1) or (x == testBorders[z][0][1]-1) or (y == testBorders[z][1][0]-1) or (y == testBorders[z][1][1]-1)):
                            # print "Border %s %s" % (x,y)
                            debugim[x,y] = (0, 0, 255) # in debug mode, mark all border pixel to blue
                    # Just check green channel as it's the highest quality channel
                    pixdiff = abs(buffer1[x,y][1] - buffer2[x,y][1])
                    if pixdiff > threshold:
                        changedPixels += 1
                        if (debugMode):
                            debugim[x,y] = (0, 255, 0) # in debug mode, mark all changed pixel to green
                    # Save an image if pixels changed
                    if (changedPixels > sensitivity):
                        takePicture = True # will shoot the photo later
                    if ((debugMode == False) and (changedPixels > sensitivity)):
                        break  # break the y loop
                if ((debugMode == False) and (changedPixels > sensitivity)):
                    break  # break the x loop
            if ((debugMode == False) and (changedPixels > sensitivity)):
                break  # break the z loop

        if (debugMode):
            debugimage.save(filepath + debug_bmp) # save debug image as bmp
            print "debug.bmp saved, %s changed pixel" % changedPixels
        # else:
        #     print "%s changed pixel" % changedPixels

        # Check force capture
        if forceCapture:
                #print 'now: %s' % (str( time.time()))
                #print 'lastCapture: %s' % (str(lastCapture))
                #print 'forceCaptureTime: %s' % (str( forceCaptureTime ))
                if time.time() - lastCapture > forceCaptureTime:
                        takePicture = True
                        prg_msg = "force"

        if takePicture:
            lastCapture = time.time()
            saveImage(cameraSettings, saveWidth, saveHeight, saveQuality, diskSpaceToReserve)
            prg_msg = "motion"

        # Swap comparison buffers
        image1 = image2
        buffer1 = buffer2

Best regards
HerrJemineh

IonPunk
Posts: 2
Joined: Wed Aug 20, 2014 2:52 pm

Re: Lightweight python motion detection

Wed Aug 20, 2014 3:45 pm

SOLUTION: Camera setting "-t" must be set to more than 0. 1000 worked for me.

I am having trouble getting brainflakes' code to run. The camera preview pops up during captureTestImage() and gets stuck there. Perhaps it is unable to write the file? Not sure what's going on. I put a print "Blah x" after every line in captureTestImage() and it appears that it is getting stuck or crashing at:

Code: Select all

imageData.write(subprocess.check_output(command, shell=True))
Help?
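For anyone hitting the same hang: the fix described above is just the -t timeout in captureTestImage()'s raspistill command, e.g. (the function from the script earlier in the thread, with only -t changed from 200 to 1000):

Code: Select all

import StringIO
import subprocess
from PIL import Image

def captureTestImage(settings, width, height):
    # same as the earlier version, but with the raspistill timeout raised to 1000 ms
    command = "raspistill %s -w %s -h %s -t 1000 -e bmp -n -o -" % (settings, width, height)
    imageData = StringIO.StringIO()
    imageData.write(subprocess.check_output(command, shell=True))
    imageData.seek(0)
    im = Image.open(imageData)
    buffer = im.load()
    imageData.close()
    return im, buffer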

pageauc
Posts: 199
Joined: Fri Jan 04, 2013 10:52 pm

Re: Lightweight python motion detection

Thu Sep 18, 2014 10:55 am

Updated grive enabled motion detection security camera setup

Note: 25-Nov-2014, I updated the program on github to version 2.2, with a recompiled grive and setup.sh, to accommodate a change in a library that caused grive to crash. If you use grive and are having a problem with it crashing, then download (wget) the updated pimotion.tar file, extract it, and run setup.sh to ensure files and libraries are updated. Then retry

Code: Select all

sudo ./sync.sh
to verify the problem is resolved.

Please note this version has been updated to use the picamera python module. If you want the pure python version, visit my github repository at https://github.com/pageauc and look for pi-motion-orig. I have added a source folder so you can browse the files; otherwise you can wget the pimotion.tar file and install per the Readme.txt file. Note this version has updated grive and grive_setup.sh files, but you should run chmod +x on the appropriate files.

My grive-enabled security camera setup using the lightweight python motion detection has been updated. grive allows the RPI camera to sync photos to a google drive so they can be viewed from a web browser, phone/tablet, etc. I have a camera located 3000 km away and it has worked for almost 6 months without issues, except for occasional park communications outages. This setup optimizes grive syncing to reduce communications between the RPI and google. The package is on github with detailed instructions on how to set up security between the RPI and google drive.

17-Sep-2014 Version 2.0 of pimotion.py is now on github.
New Features
- Changed setup.sh so it installs the python-imaging, python-picamera and mencoder dependencies and libraries
by default. Note grive_setup.sh has been replaced by setup.sh
- Added an option to use picamera to take the large photo instead of shelling out to raspbian to run raspistill.
Note the small image still uses raspistill.
- Added a picamera option to take low-light photos during specified hours. This dramatically improves
low-light photos, but don't use it in bright conditions or the photos will be washed out
- The picamera option uses camera settings to make daylight photos more consistent
- Added makemovie.py to create a movie from the contents of the google_drive folder.

Fixes
- Fixed a bug, caused by displaying the initial settings information, that crashed pimotion if numsequence was set to False

Brief install instructions: ssh in or open a terminal on the raspberry pi with an rpi camera module installed and working

Code: Select all

cd ~
mkdir picam
cd picam
wget https://raw.github.com/pageauc/pi-motion-grive/master/pimotion.tar
tar -pxvf pimotion.tar
./setup.sh
Read the Readme.txt for detailed instructions or visit the project on github.
You will have to transfer any previous pimotion.py settings to the new pimotion.py. Please note that the
picamera feature has additional capabilities, like the lowLight setting for night time.

Regards Claude Pageau

See original post here http://www.raspberrypi.org/forums/viewt ... 04#p362504
Last edited by pageauc on Wed Nov 26, 2014 1:18 am, edited 4 times in total.
GitHub - https://github.com/pageauc
YouTube - https://www.youtube.com/user/pageaucp

berkaye
Posts: 2
Joined: Sat Sep 20, 2014 8:03 am

Re: Lightweight python motion detection

Sat Sep 20, 2014 8:08 am

Hello, I want to do the following:

if the camera detects motion, send a notification to Android. How can I do this?

Sorry for my bad English .. :lol:

HerrJemineh
Posts: 10
Joined: Tue Aug 13, 2013 1:35 pm

Re: Lightweight python motion detection

Sat Sep 20, 2014 11:42 am

I'm using "pushbullet" to get a message if my 433Mhz-Smoke-Detector is triggered. That should work for this script as well.

berkaye
Posts: 2
Joined: Sat Sep 20, 2014 8:03 am

Re: Lightweight python motion detection

Sun Sep 21, 2014 9:47 am

Hey, I have a problem :(

Code: Select all

from pushbullet import PushBullet
from pushbullet import device

apik="myapikey"
pb=PushBullet(apik)
de=pb.devices[0]
success, push = de.push_note("adsadasd","asdasdasd asd asd")


I get this error:

Code: Select all


Traceback (most recent call last):
  File "gggg.py", line 6, in <module>
    de = pb.devices[0]
IndexError: list index out of range
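That IndexError just means pb.devices is an empty list (no device is registered on the Pushbullet account, or the API key is wrong), so devices[0] does not exist. A guarded version of the same snippet (a sketch, using only the calls from the code above):

Code: Select all

from pushbullet import PushBullet

apik = "myapikey"
pb = PushBullet(apik)

if not pb.devices:
    print "No devices on this Pushbullet account - register one (e.g. the Android app) first"
else:
    success, push = pb.devices[0].push_note("RPi camera", "motion detected")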


HerrJemineh
Posts: 10
Joined: Tue Aug 13, 2013 1:35 pm

Re: Lightweight python motion detection

Mon Sep 22, 2014 1:02 pm

I'm using a script with the following code:

Code: Select all

#!/bin/bash

curl https://api.pushbullet.com/v2/pushes \
-u XXXXACCESS TOKENXXXX: \
-d device_iden="XXXXXXXdevive_idenXXXXXXX" \
-d type="note" \
-d title="Test" \
-d body="test test test" \
-X POST
If you have installed pushbullet correctly it should work after you've entered your access token and device_iden.
(Pushbullet Setup Instructions: http://atinkerersblog.wordpress.com/201 ... ushbullet/

Alternatively you could use this script modified by "KLL", which is able to upload the photos to dropbox (after installing dropbox).
http://kll.engineering-news.org/kllfusi ... picam4.txt
I can set my iPhone to show a notification when a new file is uploaded to my dropbox.
You can also let the script send an email when motion is detected.

regards
HerrJemineh
Last edited by HerrJemineh on Fri Sep 26, 2014 12:07 pm, edited 1 time in total.

veames
Posts: 4
Joined: Tue Jul 03, 2012 4:50 pm

Re: Lightweight python motion detection

Thu Sep 25, 2014 6:39 pm

Just discovered this thread. I was modifying brainflakes' script myself, and it seems someone has already done the work! Thanks!

Following thread.

utpalc
Posts: 1
Joined: Sun Sep 28, 2014 11:00 am

Re: Lightweight python motion detection

Sun Sep 28, 2014 11:19 am

Great code. However, I simplified it a bit and got rid of the write to the SD card on each iteration.

Basically, instead of taking a pic and storing it on the card, the camera returns a numpy array and that is what gets compared. Like you, I compare only the green channel, but I guess we could also switch channels depending on the time of day (at night it might be better to compare the red channel; I don't know, I'll have to experiment). Once the diff shows motion we can take the picture to a file, and also use the camera annotate feature, which automatically puts the date and time on the image.

camera.annotate_text = timenow.strftime('%Y-%m-%d %H:%M:%S')

That way the code is even more simplified.
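If you do want to try the channel-switching idea mentioned above, here is a tiny sketch (purely hypothetical; the hour boundaries are guesses):

Code: Select all

from datetime import datetime

def compare_channel():
    # 0 = red, 1 = green, 2 = blue in the RGB array returned by takepic()
    hour = datetime.now().hour
    return 1 if 6 <= hour < 20 else 0   # green by day, red at night - untested guess

# then in the diff loop use, e.g.:
#   c = compare_channel()
#   diff = abs(int(data1[h][w][c]) - int(data2[h][w][c]))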

Following is the example code that I used to test and make sure the idea works. I will put it up on GitHub. Feel free to incorporate it in your solution if you want.

thx.

Code: Select all

import time
import picamera
import picamera.array

width = 100
height = 75
threshold = 25
sensitivity = 25

def takepic():
    with picamera.PiCamera() as camera:
        time.sleep(1)
        camera.resolution = (width, height)
        with picamera.array.PiRGBArray(camera) as stream:
            camera.capture(stream, format='rgb')
            return stream.array


if __name__ == '__main__':
    print 'Taking first pic'
    data1 = takepic()

    time.sleep(10)
    print 'Taking second pic'
    data2 = takepic()
    print 'Diffing'
    
    diffCount = 0
    for w in range(0, width):
        for h in range(0, height):
            # get the diff of the pixel. Conversion to int
            # is required to avoid unsigned short overflow.
            diff = abs(int(data1[h][w][1]) - int(data2[h][w][1]))
            if  diff > threshold:
                diffCount += 1
                if diffCount > sensitivity:
                    break # break inner loop
        if diffCount > sensitivity:
            break # break outer loop

    if diffCount > sensitivity:
        print 'Motion Detected'

trop54
Posts: 1
Joined: Mon Oct 06, 2014 10:05 pm

Re: Lightweight python motion detection

Mon Oct 06, 2014 10:13 pm

Hello!
I'm new to the boards, but have been experimenting with an RPI for quite some time now.

First off, great script. I use it for my 'autonomous' off-grid security system for a remote cabin.
Currently I have my system operating off a 100 W solar panel and a 75 Ah battery, using a Huawei 3276 LTE dongle. I recently concluded my local trials and successfully deployed the system at the remote location.

I am using a modified version of the script, detection principles are the same but notification and picture transfer is different.

What I'm having issues with is the sun coming out from behind clouds and creating a shadow on the cabin or brightening up the scene; this creates false 'motion' pictures which eat up precious SD card space and data transfer.

I've played around with threshold and sensitivity, but I'm finding that if I increase them too much, human motion no longer triggers a capture. I have also hooked up a PIR sensor to allow alternate motion detection, but I currently have that disabled as I'd like to get better results from 'pixel change' motion as well.

Any ideas?

Thanks in advance!

Tim

pageauc
Posts: 199
Joined: Fri Jan 04, 2013 10:52 pm

Re: Lightweight python motion detection

Tue Nov 25, 2014 1:08 pm

Pimotion troubleshooting suggestions

Occasionally I have had problems with sync, and then it mysteriously starts working again, sometimes after a week or so. I had this happen when I first set up the camera. I assume this was because of a google drive issue or possibly some other telecom issue.

You can also try the following:
1. Log out of any other google drive accounts for a short time. Make sure there are images to process. From the RPI pimotion folder (or whatever name you gave it) run
Code: Select all
sudo ./sync.sh


Sample output with no files processed, but the connection to google verified since the remote file list read was successful:
Current Directory=/home/pi/deck-cam
grive .. Found pimotion.sync files to synchronize
Change Directory to /home/pi/deck-cam/google_drive
grive .. Running /home/pi/deck-cam/grive -p /home/pi/deck-cam/google_drive
Reading local directories
Synchronizing folders
Reading remote server file list
Detecting changes from last sync
Synchronizing files
Finished!
grive .. processing complete
grive .. remove /home/pi/deck-cam/pimotion.sync file

and see what the sync.sh and grive messages are saying. This might give some idea of what the problem is. Make sure there are images to process. You could also try generating a dummy pimotion.sync file, e.g.
Code: Select all
touch pimotion.sync
I have a 5 minute (300 second) timeout built into sync.sh, and if you have a very slow connection you could try making this delay longer. I did this because grive would occasionally hang, so I built a kill into sync.sh if it detects that it has been running too long. This solved the hang problem, but on a very slow connection it could interrupt the file sync.
2. On the RPI, change to your google_drive folder and execute
Code: Select all
ls -al

Example output (excluding jpg files)
-rw------- 1 pi pi 69 Nov 25 06:12 .grive
-rw-r--r-- 1 pi pi 81 Nov 25 06:12 .grive_state
drwxr-xr-x 2 root root 4096 Sep 7 07:15 My Files
drwxr-xr-x 2 root root 4096 Nov 24 16:05 .trash

to get a detailed folder file listing. Make sure the folder contains valid .grive and .grive_state hidden files. If not then manually copy them back into the google_drive folder.
3. Make sure the RPI is not logged into a google account via a browser.
4. You could try accessing your google account (e.g. gmail) using a web browser on the RPI, then check the files on the google drive web page. Log out and retry a manual sudo ./sync.sh. The script detects duplicate sync processing, so it is safe to run sync manually while the crontab sync is still active. Make sure to run it under sudo, since images created under the crontab are owned by root.
5. From a computer with the google drive app installed, try pausing then resuming the sync, then check the files on the web google drive via a browser. (On windows, right-click the task bar google drive icon for these menu selections.)
6. If the sync was working previously, there should be no need to rerun the grive -a option and set up the security token again. I am pretty sure it will be a communications or temporary google drive issue.

I have had my security camera running continuously for over 8 months. Any time it failed to sync to my laptop google drive or to my Nexus tablet, phone or ipod (note the camera was 3000 km away), it was due to work on the RV park internet system, and eventually it would come back and catch up. It was off for about 3-4 weeks once and I thought it was due to the rpi freezing up or something. I only kept 500 images on my google drive to reduce the number of files to manage; recently I increased this to 1000 without issues. Currently my google drive has been working pretty reliably with only occasional delays.

Hope this helps you out.
GitHub - https://github.com/pageauc
YouTube - https://www.youtube.com/user/pageaucp

killagreg
Posts: 12
Joined: Wed Dec 17, 2014 10:15 pm

Re: Lightweight python motion detection

Wed Dec 17, 2014 10:23 pm

Hi,
inspired by this nice discussion and the code examples, I have written a similar script for a surveillance job.
It also brings some of the advanced techniques of picamera 1.8 into the game.

I hope you enjoy it.

Regards Greg

Code: Select all

#!/usr/bin/python

# This script implements a motion-capture surveillance cam for the raspberry pi using the pi camera.
# It uses the motion vector magnitudes from the h264 hw encoder to detect motion activity.
# At the time of motion detection, a jpg snapshot is saved together with an h264 video stream
# of some seconds before, during and after the motion activity to the 'filepath' directory.

import os
import subprocess
import io
import picamera
import picamera.array
import numpy as np
import datetime as dt
import time

#setup filepath for motion capture data
filepath = '/home/pi/motion/video'
# setup pre and post video recording around motion event
video_preseconds = 3
video_postseconds = 3
#setup video/snapshot resolution
video_width = 640#1280
video_height = 480#720
#setup video rotation (0, 90, 180, 270)
video_rotation = 180

# setup motion detection threshold, i.e. magnitude of a motion block to count as  motion
motion_threshold = 60
# setup motion detection sensitivity, i.e number of motion blocks that trigger a motion detection
motion_sensitivity = 10



# do not change code behind that line
#--------------------------------------
motion_detected = False
motion_timestamp = time.time()

#call back handler for motion output data from h264 hw encoder
class MyMotionDetector(picamera.array.PiMotionAnalysis):
    def analyse(self, a):
        global motion_detected, motion_timestamp
        # calculate the length of the motion vectors of the mpeg macro blocks
        a = np.sqrt(
            np.square(a['x'].astype(np.float)) +
            np.square(a['y'].astype(np.float))
            ).clip(0, 255).astype(np.uint8)
        # If there're more than 10 vectors with a magnitude greater
        # than 60, then say we've detected motion
        th = ((a > motion_threshold).sum() > motion_sensitivity)
        now = time.time()
        # motion logic: trigger on motion and stop after video_postseconds seconds of inactivity
        if th:
                motion_timestamp = now

        if motion_detected:
                if (now - motion_timestamp) >= video_postseconds:
                        motion_detected = False
        else:
                if th:
                        motion_detected = True



def write_video(stream):
    # Write the entire content of the circular buffer to disk. No need to
    # lock the stream here as we're definitely not writing to it
    # simultaneously
    global motion_filename

    with io.open(motion_filename + '-before.h264', 'wb') as output:
        for frame in stream.frames:
            if frame.frame_type == picamera.PiVideoFrameType.sps_header:
                stream.seek(frame.position)
                break
        while True:
            buf = stream.read1()
            if not buf:
                break
            output.write(buf)
    # Wipe the circular stream once we're done
    stream.seek(0)
    stream.truncate()


os.system('clear')
print "Motion Detection"
print "----------------"
print "                "
with picamera.PiCamera() as camera:
    camera.resolution = (video_width, video_height)
    camera.framerate = 25
    camera.rotation = video_rotation
    camera.video_stabilization = True
    camera.annotate_background = True
    # setup a circular buffer
    stream = picamera.PiCameraCircularIO(camera, seconds = video_preseconds)
    # hi resolution video recording into circular buffer from splitter port 1
    camera.start_recording(stream, format='h264', splitter_port=1)
    #camera.start_recording('test.h264', splitter_port=1)
    # low resolution motion vector analysis from splitter port 2
    camera.start_recording('/dev/null', splitter_port=2, resize=(340,240) ,format='h264', motion_output=MyMotionDetector(camera, size=(340,240)))
    # wait some seconds for stable video data
    camera.wait_recording(1, splitter_port=1)
    motion_detected = False

    print "Motion Capture ready!"
    try:
        while True:
                # motion event must trigger this action here
                camera.annotate_text = dt.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
                if motion_detected:
                        print "Motion detected: " , dt.datetime.now()
                        motion_filename = filepath + "/" + time.strftime("%Y%m%d-%H%M%S", time.gmtime(motion_timestamp))
                        camera.split_recording(motion_filename + '-after.h264', splitter_port=1)
                        # catch an image as video preview during video recording (uses splitter port 0) at time of the motion event
                        camera.capture(motion_filename + '.jpg', use_video_port=True)
                        # save circular buffer before motion event
                        write_video(stream)
                        #wait for end of motion event here
                        while motion_detected:
                                camera.annotate_text = dt.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
                                camera.wait_recording(1, splitter_port=1)
                        #split video recording back in to circular buffer
                        camera.split_recording(stream, splitter_port=1)
                        subprocess.call("cat %s %s > %s && rm -f %s" % (motion_filename + "-before.h264", motion_filename + "-after.h264", motion_filename + ".h264", motion_filename + "-*.h264"), shell=True)
                        print "Motion stopped:" , dt.datetime.now()


    finally:
        camera.stop_recording(splitter_port=1)
        camera.stop_recording(splitter_port=2)
Last edited by killagreg on Fri Dec 19, 2014 9:34 am, edited 1 time in total.

HerrJemineh
Posts: 10
Joined: Tue Aug 13, 2013 1:35 pm

Re: Lightweight python motion detection

Thu Dec 18, 2014 11:55 am

Hey Greg,
nice code. Is there a possibility to restrict motion detection to a smaller area?

regards
Simon

User avatar
waveform80
Posts: 303
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Lightweight python motion detection

Thu Dec 18, 2014 11:59 am

HerrJemineh wrote:Hey Greg,
nice code. Is there a possibility to restrict motion detection to a smaller area?

regards
Simon
Probably the simplest way to restrict motion detection to a particular area is to crop the numpy array in MyMotionDetector.analyse (near the top) before doing anything else with it, e.g. to restrict it to the top left quadrant of the capture area:

Code: Select all

...
    def analyse(self, a):
        a = a[:240, :320]
...
There may be a quicker way, by using the camera's zoom property (aka region of interest) to limit the capture area to the top left quadrant (set camera.zoom = (0, 0, 0.5, 0.5) just after it's initialized), but bear in mind that'll also cause all captures (including to the circular buffer) to be limited to that region, which I'm assuming isn't the intention here.
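In code, that zoom alternative would look roughly like this (a sketch; remember the caveat that it crops every capture, including the circular buffer):

Code: Select all

import picamera

with picamera.PiCamera() as camera:
    camera.resolution = (640, 480)
    # restrict the sensor's region of interest to the top-left quadrant;
    # (x, y, w, h) are fractions of the full field of view
    camera.zoom = (0.0, 0.0, 0.5, 0.5)
    # ... start_recording / motion analysis as in killagreg's script ...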

Dave.

HerrJemineh
Posts: 10
Joined: Tue Aug 13, 2013 1:35 pm

Re: Lightweight python motion detection

Thu Dec 18, 2014 2:41 pm

Hi,
thank you for the fast reply!
Unfortunately I did not understand how to adjust the script for my needs.
Could you tell me how I would have to change the code to detect motion only in this particular area (see attachment)?
That would be very nice!

Best regards
Simon
Attachments
20141218-141405.jpg

User avatar
waveform80
Posts: 303
Joined: Mon Sep 23, 2013 1:28 pm
Location: Manchester, UK

Re: Lightweight python motion detection

Fri Dec 19, 2014 1:36 am

HerrJemineh wrote:Hi,
thank you for the fast reply!
Unfortunately I did not understand how to adjust the script for my needs.
Could you tell me how I would have to change the code to detect motion only in this particular area (see attachment)?
That would be very nice!

Best regards
Simon
Okay, here's killagreg's script copy'n'pasted with an extra line inserted at the start of analyse - the trick is to remember that the numpy array is organized in rows,cols so the y limits appear first, then the x limits (and that Python slices are half-open ranges so the upper limit needs to be +1):

Code: Select all

#!/usr/bin/python

# This script implements a motion-capture surveillance cam for the raspberry pi using the pi camera.
# It uses the motion vector magnitudes from the h264 hw encoder to detect motion activity.
# At the time of motion detection, a jpg snapshot is saved together with an h264 video stream
# of some seconds before, during and after the motion activity to the 'filepath' directory.

import os
import subprocess
import io
import picamera
import picamera.array
import numpy as np
import datetime as dt
import time

#setup filepath for motion capture data
filepath = '/home/pi/motion/video'
# setup pre and post video recording around motion event
video_preseconds = 3
video_postseconds = 3
#setup video/snapshot resolution
video_width = 640#1280
video_height = 480#720
#setup video rotation (0, 90, 180, 270)
video_rotation = 180

# setup motion detection threshold, i.e. magnitude of a motion block to count as  motion
motion_threshold = 60
# setup motion detection sensitivity, i.e number of motion blocks that trigger a motion detection
motion_sensitivity = 10



# do not change code behind that line
#--------------------------------------
motion_detected = False
motion_timestamp = time.time()

#call back handler for motion output data from h264 hw encoder
class MyMotionDetector(picamera.array.PiMotionAnalysis):
    def analyse(self, a):
        a = a[145:285, 96:272]
        global motion_detected, motion_timestamp
        # calculate the length of the motion vectors of the mpeg macro blocks
        a = np.sqrt(
            np.square(a['x'].astype(np.float)) +
            np.square(a['y'].astype(np.float))
            ).clip(0, 255).astype(np.uint8)
        # If there're more than 10 vectors with a magnitude greater
        # than 60, then say we've detected motion
        th = ((a > motion_threshold).sum() > motion_sensitivity)
        now = time.time()
        # motion logic: trigger on motion and stop after video_postseconds seconds of inactivity
        if th:
                motion_timestamp = now

        if motion_detected:
                if (now - motion_timestamp) >= video_postseconds:
                        motion_detected = False
        else:
                if th:
                        motion_detected = True



def write_video(stream):
    # Write the entire content of the circular buffer to disk. No need to
    # lock the stream here as we're definitely not writing to it
    # simultaneously
    global motion_filename

    with io.open(motion_filename + '-before.h264', 'wb') as output:
        for frame in stream.frames:
            if frame.frame_type == picamera.PiVideoFrameType.sps_header:
                stream.seek(frame.position)
                break
        while True:
            buf = stream.read1()
            if not buf:
                break
            output.write(buf)
    # Wipe the circular stream once we're done
    stream.seek(0)
    stream.truncate()


os.system('clear')
print "Motion Detection"
print "----------------"
print "                "
with picamera.PiCamera() as camera:
    camera.resolution = (video_width, video_height)
    camera.framerate = 25
    camera.rotation = video_rotation
    camera.video_stabilization = True
    camera.annotate_background = True
    # setup a circular buffer
    stream = picamera.PiCameraCircularIO(camera, seconds = video_preseconds)
    # hi resolution video recording into circular buffer from splitter port 1
    camera.start_recording(stream, format='h264', splitter_port=1)
    #camera.start_recording('test.h264', splitter_port=1)
    # low resolution motion vector analysis from splitter port 2
    camera.start_recording('/dev/null', splitter_port=2, resize=(340,240) ,format='h264', motion_output=MyMotionDetector(camera, size=(340,240)))
    # wait some seconds for stable video data
    camera.wait_recording(1, splitter_port=1)
    motion_detected = False

    print "Motion Capture ready!"
    try:
        while True:
                # motion event must trigger this action here
                camera.annotate_text = dt.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
                if motion_detected:
                        print "Motion detected: " , dt.datetime.now()
                        motion_filename = filepath + "/" + time.strftime("%Y%m%d-%H%M%S", time.gmtime(motion_timestamp))
                        camera.split_recording(motion_filename + '-after.h264', splitter_port=1)
                        # catch an image as video preview during video recording (uses splitter port 0) at time of the motion event
                        camera.capture(motion_filename + '.jpg', use_video_port=True)
                        # save circular buffer before motion event
                        write_video(stream)
                        #wait for end of motion event here
                        while motion_detected:
                                camera.annotate_text = dt.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
                                camera.wait_recording(1, splitter_port=1)
                        #split video recording back in to circular buffer
                        camera.split_recording(stream, splitter_port=1)
                        subprocess.call("cat %s %s > %s && rm -f %s" % (motion_filename + "-before.h264", motion_filename + "-after.h264", motion_filename + ".h264", motion_filename + "-*.h264"), shell=True)
                        print "Motion stopped:" , dt.datetime.now()


    finally:
        camera.stop_recording(splitter_port=1)
        camera.stop_recording(splitter_port=2)
Dave.

killagreg
Posts: 12
Joined: Wed Dec 17, 2014 10:15 pm

Re: Lightweight python motion detection

Fri Dec 19, 2014 9:39 am

Nice approach. But the script reduces the image for motion analysis to 340x240 pixels, independent of the cam resolution. Hence the crop of the motion vector array must be scaled according to this ratio.
Keep in mind that the 'a' array in MyMotionDetector represents the 16x16-pixel macro blocks of the h264 encoder, so its dimensions are a factor of 16 smaller than the motion analysis resolution.

One can calculate the vector array size as

cols = (340 + 15) // 16   # = 22
rows = (240 + 15) // 16   # = 15

and address the array by a[row, col] to select a certain area for motion detection.

Using the 340x240 resolution for motion analysis one gets 22x15 motion blocks, i.e. the grabbed images are segmented into 22x15 individual motion analysis areas.

For your example you must calculate the motion vector area as

col_start = (96 * 22) // 640          # = 3
col_end   = (271 * 22 + 320) // 640   # = 9
row_start = (145 * 15) // 480         # = 4
row_end   = (248 * 15 + 240) // 480   # = 8

which is effectively a little larger than the area you want.

So I would adapt waveform80's approach to a = a[4:8, 3:9].
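The same conversion as a small helper, just wrapping the arithmetic above (a sketch; the function name and defaults are made up):

Code: Select all

def pixel_rect_to_blocks(x0, x1, y0, y1,
                         motion_w=340, motion_h=240,
                         video_w=640, video_h=480):
    # number of 16x16 macro-block columns/rows in the motion-analysis stream
    cols = (motion_w + 15) // 16
    rows = (motion_h + 15) // 16
    # scale a pixel rectangle given in video coordinates to block indices
    # (exactly the same arithmetic as in the calculation above)
    col_start = (x0 * cols) // video_w
    col_end   = (x1 * cols + video_w // 2) // video_w
    row_start = (y0 * rows) // video_h
    row_end   = (y1 * rows + video_h // 2) // video_h
    return row_start, row_end, col_start, col_end

# the example area x = 96..271, y = 145..248 of a 640x480 frame:
print pixel_rect_to_blocks(96, 271, 145, 248)   # -> (4, 8, 3, 9), i.e. a[4:8, 3:9]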
Attached is a revised version of the script implementing individual motion detection areas.
Have Fun!

Code: Select all

#!/usr/bin/python

# This script implements a motion-capture surveillance cam for the raspberry pi using the pi camera.
# It uses the motion vector magnitudes from the h264 hw encoder to detect motion activity.
# At the time of motion detection, a jpg snapshot is saved together with an h264 video stream
# of some seconds before, during and after the motion activity to the 'filepath' directory.

import os
import subprocess
import io
import picamera
import picamera.array
import numpy as np
import datetime as dt
import time
from PIL import Image

#debug mode?
debug = 1
#setup filepath for motion capture data
filepath = '/home/pi/motion/video'
# setup pre and post video recording around motion event
video_preseconds = 3
video_postseconds = 3
#setup video/snapshot resolution
video_width = 640   #1280
video_height = 480  #720
#setup video rotation (0, 180)
video_rotation = 180

# setup motion detection resolution, equal or smaller than video resolution
motion_width = 640
motion_height = 480
# setup motion detection threshold, i.e. magnitude of a motion block to count as  motion
motion_threshold = 30
# setup motion detection sensitivity, i.e number of motion blocks that trigger a motion detection
motion_sensitivity = 6
# motion masks define areas within the motion analysis picture that are used for motion analysis
 # [ [[start pixel on left side,end pixel on right side],[start pixel on top side,stop pixel on bottom side]] ]
# default this is the whole image frame
#motion_mask_count = 1
#motion_masks = [ [[1,motion_width],[1,motion_height]] ]
# another example
motion_mask_count = 1
motion_masks = [ [[270,370],[190,290]]  ]
# example for 2 mask areas
#motion_mask_count = 2
#motion_masks = [ [[1,320],[1,240]], [[400,500],[300,400]] ]


# do not change code behind that line
#--------------------------------------
motion_detected = False
motion_timestamp = time.time()
motion_cols = (motion_width  + 15) // 16 + 1
motion_rows = (motion_height + 15) // 16
motion_array = np.zeros((motion_rows, motion_cols), dtype = np.uint8)
# create motion mask
motion_array_mask = np.zeros((motion_rows, motion_cols), dtype = np.uint8)
for count in xrange(0, motion_mask_count):
   for col in xrange( (motion_masks[count][0][0]-1)//16, (motion_masks[count][0][1]-1+15)//16 ):
      for row in xrange( (motion_masks[count][1][0]-1)//16, (motion_masks[count][1][1]-1+15)//16 ):
         motion_array_mask[row][col] = 1

#motion_array_mask[4:8, 3:9] = 255

#call back handler for motion output data from h264 hw encoder
class MyMotionDetector(picamera.array.PiMotionAnalysis):
   def analyse(self, a):
      global motion_detected, motion_timestamp, motion_array, motion_array_mask
      # calculate the length of the motion vectors of the mpeg macro blocks
      a = np.sqrt(
          np.square(a['x'].astype(np.float)) +
          np.square(a['y'].astype(np.float))
          ).clip(0, 255).astype(np.uint8)
      a = a * motion_array_mask
      # If there're more than 'sensitivity' vectors with a magnitude greater
      # than 'threshold', then say we've detected motion
      th = ((a > motion_threshold).sum() > motion_sensitivity)
      now = time.time()
      # motion logic: trigger on motion and stop after video_postseconds seconds of inactivity
      if th:
         motion_timestamp = now

      if motion_detected:
          if (now - motion_timestamp) >= video_postseconds:
               motion_detected = False
      else:
         if th:
             motion_detected = True
             if debug:
                idx = a > motion_threshold
                a[idx] = 255
                motion_array = a


def write_video(stream):
# Write the entire content of the circular buffer to disk. No need to
# lock the stream here as we're definitely not writing to it
# simultaneously
     global motion_filename

     with io.open(motion_filename + '-before.h264', 'wb') as output:
         for frame in stream.frames:
            if frame.frame_type == picamera.PiVideoFrameType.sps_header:
                stream.seek(frame.position)
                break
         while True:
            buf = stream.read1()
            if not buf:
               break
            output.write(buf)
     # Wipe the circular stream once we're done
     stream.seek(0)
     stream.truncate()


os.system('clear')
print "Motion Detection"
print "----------------"
print "                "
with picamera.PiCamera() as camera:
	camera.resolution = (video_width, video_height)
	camera.framerate = 25
	camera.rotation = video_rotation
	camera.video_stabilization = True
	camera.annotate_background = True
	# setup a circular buffer
	stream = picamera.PiCameraCircularIO(camera, seconds = video_preseconds)
	# hi resolution video recording into circular buffer from splitter port 1
	camera.start_recording(stream, format='h264', splitter_port=1)
	#camera.start_recording('test.h264', splitter_port=1)
	# low resolution motion vector analysis from splitter port 2
	camera.start_recording('/dev/null', splitter_port=2, resize=(motion_width,motion_height) ,format='h264', motion_output=MyMotionDetector(camera, size=(motion_width,motion_height)))
	# wait some seconds for stable video data
	camera.wait_recording(2, splitter_port=1)
	motion_detected = False

	print "Motion Capture ready!"
	try:
	    while True:
		    # motion event must trigger this action here
		    camera.annotate_text = dt.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
		    if motion_detected:
			    print "Motion detected: " , dt.datetime.now()
			    motion_filename = filepath + "/" + time.strftime("%Y%m%d-%H%M%S", time.gmtime(motion_timestamp))
			    camera.split_recording(motion_filename + '-after.h264', splitter_port=1)
			    # catch an image as video preview during video recording (uses splitter port 0) at time of the motion event
			    camera.capture_sequence([motion_filename + '.jpg'], use_video_port=True, splitter_port=0)
			    # dump motion array as image
                            if debug:
			       img = Image.fromarray(motion_array)
			       img.save(motion_filename + "-motion.png")
			    # save circular buffer before motion event
			    write_video(stream)
			    #wait for end of motion event here
			    while motion_detected:
				    camera.annotate_text = dt.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
				    camera.wait_recording(1, splitter_port=1)
			    #split video recording back in to circular buffer
			    camera.split_recording(stream, splitter_port=1)
			    subprocess.call("cat %s %s > %s && rm -f %s" % (motion_filename + "-before.h264", motion_filename + "-after.h264", motion_filename + ".h264", motion_filename + "-*.h264"), shell=True)
			    print "Motion stopped:" , dt.datetime.now()


	finally:
	    camera.stop_recording(splitter_port=1)
	    camera.stop_recording(splitter_port=2)

pageauc
Posts: 199
Joined: Fri Jan 04, 2013 10:52 pm

Re: Lightweight python motion detection

Thu Dec 25, 2014 5:44 pm

I like the code a lot but could not find a reference on github, so if you don't mind I added it to my github repo at https://github.com/pageauc
It can be downloaded to RPI from https://github.com/pageauc/pi-motion-lite with the following command

Code: Select all

wget https://raw.github.com/pageauc/pi-motion-lite/master/pi-motion-lite_2.py
I might try to write a revised version of pi-timolo (pi, timelapse, motion, lowlight) using this approach: http://www.raspberrypi.org/forums/viewt ... 03#p657803
It would need to calculate an average pixel value for twilight/night/day detection. There have been lots of
options since brainflakes posted his original python-only code. I now prefer the picamera python library approach.
Merry Christmas to Everyone
http://youtu.be/w6_WZz-oYgg
Last edited by pageauc on Sat Jan 03, 2015 3:06 am, edited 1 time in total.
GitHub - https://github.com/pageauc
YouTube - https://www.youtube.com/user/pageaucp

hydra3333
Posts: 101
Joined: Thu Jan 10, 2013 11:48 pm

Re: Lightweight python motion detection

Fri Dec 26, 2014 12:18 am

Thanks. ... the git readme doesn't seem to match the code? (There are also 2 .py files in that branch.)
Nice video, merry Christmas,

hydra3333
Posts: 101
Joined: Thu Jan 10, 2013 11:48 pm

Re: Lightweight python motion detection

Fri Dec 26, 2014 10:03 am

Pending testing on my yet-to-be-reimaged Pi, here is a minor suggested update to killagreg's latest script - fewer calculations in the boundary case of a full-window motion vector search.
I guess I'll need to update the script after testing :)
Anyway, I'm not a programmer, but it seems that the script as it stands may use 100% cpu due to constant looping.
Is there a programmer in the house?
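On the CPU question: the motion analysis itself runs in the encoder callback, so the busy part is the main while True loop, which just rewrites the annotation text as fast as it can while idle. Letting it wait on the recording between passes should cut the load considerably - a sketch of only the idle part of the loop (untested; it reuses the wait_recording() call the script already uses elsewhere):

Code: Select all

# sketch: idle portion of the main loop, yielding instead of spinning
while True:
    camera.annotate_text = dt.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
    # wait ~0.5 s on the recording instead of busy-looping; also surfaces encoder errors
    camera.wait_recording(0.5, splitter_port=1)
    if motion_detected:
        pass  # ... handle the motion event exactly as in the script below ...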

Code: Select all

#!/usr/bin/env python
# -*- coding: utf-8 -*-

# to fix scripts, turning them from MS-DOS format to unix format
# to get rid of MSDOS format do this to this file: sudo sed -i s/\\r//g ./filename

# This script was originally created by killagreg on Thu Dec 18, 2014 7:53 am
# and revised                         by killagreg on Fri Dec 19, 2014 7:09 pm
# see   http://www.raspberrypi.org/forums/viewtopic.php?p=656881#p656881
#
# This script implements a motion-capture surveillance cam for the raspberry pi using the pi camera.
# It uses the "motion vectors" magnitude of the h264 hw-encoder to detect motion activity.
# At the time of motion detection a jpg snapshot is saved together with an h264 video stream
# of some seconds before, during and after the motion activity to the 'filepath' directory.
#
# APPARENTLY INSPIRED BY PICAMERA 1.8 TECHNIQUES documented at
# http://picamera.readthedocs.org/en/release-1.8/recipes2.html#rapid-capture-and-processing
# where the PICAMERA code uses efficient underlying mmal access and numpy code
#
# "the original and the best" script code by "killagreg" was at
# http://www.raspberrypi.org/forums/viewtopic.php?p=656881#p656881
#
# Modifications:
# 2014.12.26 
#    modified slightly for the boundary case of no motion detection "windows" - avoid performing the masking step
#
# notes:
#    1. it likely uses 100% cpu since it loops around infinitely in a "while true" condition until motion is detected
#    2. maybe a programmer could look at it and do something different also remembering not to delay start of capture ... feel free
#    3. the output video stream files are raw h264, NOT repeat NOT mpeg4 video files, so you'll have to convert them to .mp4 yourself
#    4. To prepare for using this python script (yes, yes, 777, roll your own if you object)
#        sudo apt-get update
#        sudo apt-get upgrade
#        sudo apt-get install python-picamera python-imaging-tk
#        sudo mkdir /var/pymotiondetector
#        sudo chmod 777 /var/pymotiondetector
#
# licensing:
#    this being a derivative, whatever killagreg had (acknowledging killagreg code looks to be substantially from examples in
#    the picamera documentation http://picamera.readthedocs.org/en/release-1.8/recipes2.html#rapid-capture-and-processing
#
#
# Example to convert h264 to mp4
#   sudo apt-get update
#   sudo apt-get install gpac
#   sudo MP4Box -fps <use capture framerate> -add raw_video.h264 -isma -new wrapped_video.mp4
#
# on windows:
#   "C:\ffmpeg\bin\ffmpeg.exe" -f h264 -r <use capture framerate> -i "raw_video.h264" -c:v copy -an -movflags +faststart -y "wrapped_video.mp4"
#   REM if necessary add   -bsf:v h264_mp4toannexb   before "-r" 
# or
#   "C:\MP4box\MP4Box.exe" -fps <use capture framerate> -add "raw_video.h264" -isma -new "wrapped_video.mp4"
#

import os
import subprocess
import io
import picamera
import picamera.array
import numpy as np
import datetime as dt
import time
from PIL import Image

# ----------------------------------------------------------------------------------------------------------------
# in this section are parameters you can fiddle with

#debug mode?
debug = False  # False

#setup filepath for motion capture data (which is in raw h264 format) plus the start-of-motion jpeg.
# sudo mkdir /var/pymotiondetector
# sudo chmod 777 /var/pymotiondetector
filepath = '/var/pymotiondetector'

# setup pre and post video recording around motion events
video_preseconds = 3   # minimum 1
video_postseconds = 6  # minimum 1

# setup the main video/snapshot camera resolution
# see this link for a full discussion on how to choose a valid resolution that will work
# http://picamera.readthedocs.org/en/latest/fov.html
video_width = 640
#video_width = 1280
video_height = 480
#video_height = 720

#setup video rotation (0, 90, 180, 270)
video_rotation = 0 

# setup the camera video framerate, PAL is 25, let's go for 5 instead
#video_framerate = 25
video_framerate = 5

# setup the camera to perform video stabilization
video_stabilization = True

# setup the camera to put a black background on the annotation (in our case, for date/time)
#video_annotate_background = True
video_annotate_background = False

# setup the camera to put frame number in the annotation
video_annotate_frame_num = True

# setup motion detection video resolution, equal or smaller than capture video resolution
# smaller = less cpu needed thus "better" and less likely to lose frames etc
motion_width  = 320 #640
motion_height = 240 #480

# setup motion detection threshold, i.e. magnitude of a motion block to count as  motion
#motion_threshold = 60
motion_threshold = 30
# setup motion detection sensitivity, i.e number of motion blocks that trigger a motion detection
#motion_sensitivity = 10
motion_sensitivity = 6

# motion masks define areas within the motion analysis picture that are used for motion analysis
#    [ [[start pixel on left side,end pixel on right side],[start pixel on top side,stop pixel on bottom side]] ]
#
# default to no motion masking, ie use the "full area" of the lower-resolution-capture "motion vectors"
motion_mask_count = 0
# this is the whole "motion detection image frame"
#motion_mask_count = 1
#motion_masks = [ [[1,motion_width],[1,motion_height]] ]
# another example, one motion detection mask area
#motion_mask_count = 1
#motion_masks = [ [[270,370],[190,290]]  ]
# example for 2 mask areas
#motion_mask_count = 2
#motion_masks = [ [[1,320],[1,240]], [[400,500],[300,400]] ]
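# worked example of the pixel -> macroblock mapping below (illustrative; these mask values are
# made up): with motion_mask_count = 1 and motion_masks = [ [[1,160],[1,120]] ] (the top-left
# quarter of a 320x240 motion frame) the loops below set columns (1-1)//16 = 0 up to but not
# including (160-1+15)//16 = 10 and rows 0 up to 8 of motion_array_mask to 1, so only motion
# vectors inside that quarter are kept by the masking step in MyMotionDetector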

# ----------------------------------------------------------------------------------------------------------------


# do not change code below the line
#-----------------------------------
motion_detected = False
motion_timestamp = time.time()

if (motion_mask_count > 0) or (debug):
    motion_cols = (motion_width  + 15) // 16 + 1
    motion_rows = (motion_height + 15) // 16
    motion_array = np.zeros((motion_rows, motion_cols), dtype = np.uint8)

# create a zero "AND" motion mask of masked areas 
# and then fill 1's into the mask areas of interest which we specified above
if motion_mask_count > 0:
    motion_array_mask = np.zeros((motion_rows, motion_cols), dtype = np.uint8)
    for count in xrange(0, motion_mask_count):
        for col in xrange( (motion_masks[count][0][0]-1)//16, (motion_masks[count][0][1]-1+15)//16 ):
            for row in xrange( (motion_masks[count][1][0]-1)//16, (motion_masks[count][1][1]-1+15)//16 ):
                motion_array_mask[row][col] = 1

#motion_array_mask[4:8, 3:9] = 255

#call back handler for motion output data from h264 hw encoder
#this processes the motion vectors from the low resolution split capture
class MyMotionDetector(picamera.array.PiMotionAnalysis):
   def analyse(self, a):
      global motion_detected, motion_timestamp, motion_array, motion_array_mask
      # calculate length of motion vectors of mpeg macro blocks
      a = np.sqrt(
          np.square(a['x'].astype(np.float)) +
          np.square(a['y'].astype(np.float))
          ).clip(0, 255).astype(np.uint8)
# zero out (mask out) anything outside our specified areas of interest, if we have a mask
      if motion_mask_count > 0:
          a = a * motion_array_mask

      # If there're more than 'sensitivity' vectors with a magnitude greater
      # than 'threshold', then say we've detected motion
      th = ((a > motion_threshold).sum() > motion_sensitivity)
      now = time.time()
# by now ...
# th               = motion detected on current frame
# motion_timestamp = the last time when motion was detected in a frame (start of time window)
# motion_detected  = whether motion detection time window is currently triggered
#                  = is only turned off if motion has previously been detected
#                    and both "no motion detected" and its time window has expired
      # motion logic, trigger on motion and stop after video_postseconds seconds of inactivity
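      # worked example (with the defaults above, video_postseconds = 6): motion in a frame at
      # t=0 sets motion_timestamp=0 and motion_detected=True; if no further motion is seen,
      # the first frame arriving at t>=6 turns motion_detected off again, which is what lets
      # the main loop stop the "-after" recording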
      if th:
          motion_timestamp = now

      if motion_detected:
          if (now - motion_timestamp) >= video_postseconds:
              motion_detected = False
      else:
          if th:
              motion_detected = True
          if debug:
              idx = a > motion_threshold
              a[idx] = 255
              motion_array = a

def write_video(stream):
# Write the entire content of the circular buffer to disk. No need to
# lock the stream here as we're definitely not writing to it
# simultaneously
     global motion_filename

     with io.open(motion_filename + '-before.h264', 'wb') as output:
         for frame in stream.frames:
             if frame.frame_type == picamera.PiVideoFrameType.sps_header:
                 stream.seek(frame.position)
                 break
         while True:
             buf = stream.read1()
             if not buf:
                 break
             output.write(buf)
     # Wipe the circular stream once we're done
     stream.seek(0)
     stream.truncate()


os.system('clear')
print "Motion Detection"
print "----------------"
print "                "
with picamera.PiCamera() as camera:
   camera.resolution = (video_width, video_height)
   camera.framerate = video_framerate
   camera.rotation = video_rotation
   camera.video_stabilization = video_stabilization
   camera.annotate_background = video_annotate_background
   camera.annotate_frame_num = video_annotate_frame_num

   # setup a circular buffer
   stream = picamera.PiCameraCircularIO(camera, seconds = video_preseconds)
   # 1. split the hi resolution video recording into circular buffer from splitter port 1
   camera.start_recording(stream, format='h264', splitter_port=1)
   #camera.start_recording('test.h264', splitter_port=1)
   # 2. split the low resolution motion vector analysis from splitter port 2, throw away the actual video
   camera.start_recording('/dev/null', splitter_port=2, resize=(motion_width,motion_height) ,format='h264', motion_output=MyMotionDetector(camera, size=(motion_width,motion_height)))
   # wait some seconds for stable video data to be available
   camera.wait_recording(2, splitter_port=1)
   motion_detected = False

   print "Motion Capture ready!"
   try:
       while True:
          # the callback "MyMotionDetector" has been setup above using the low resolution split
          # original code "while true" above ... loop around as fast as we can go until motion is detected ... thus 100 percent cpu ?
          # a motion event must trigger this action here
          camera.annotate_text = dt.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
          if motion_detected:
             print "Motion detected: " , dt.datetime.now()
             motion_filename = filepath + "/" + time.strftime("%Y%m%d-%H%M%S", time.gmtime(motion_timestamp))
             # split  the high res video stream to a file instead of to the internal circular buffer
             camera.split_recording(motion_filename + '-after.h264', splitter_port=1)
             # catch an image as video preview during video recording (uses splitter port 0) at time of the motion event
             camera.capture_sequence([motion_filename + '.jpg'], use_video_port=True, splitter_port=0)
             # if we want to see debug motion stuff, dump motion array as a png image
             if debug:
                img = Image.fromarray(motion_array)
                img.save(motion_filename + "-motion.png")
             # save circular buffer before motion event, write it to a file
             write_video(stream)
             #wait for end of motion event here, in one second increments
             while motion_detected:
                camera.annotate_text = dt.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
                camera.wait_recording(1, splitter_port=1)
             #split video recording back in to circular buffer
             camera.split_recording(stream, splitter_port=1)
             subprocess.call("cat %s %s > %s && rm -f %s" % (motion_filename + "-before.h264", motion_filename + "-after.h264", motion_filename + ".h264", motion_filename + "-*.h264"), shell=True)
             print "Motion stopped:" , dt.datetime.now()


   finally:
       camera.stop_recording(splitter_port=1)
       camera.stop_recording(splitter_port=2)

killagreg
Posts: 12
Joined: Wed Dec 17, 2014 10:15 pm

Re: Lightweight python motion detection

Sat Dec 27, 2014 9:55 pm

Hi all,

I've spent a few hours over the Christmas holidays improving the code a bit.
I have now added the h264 to mp4 encoding and included some suggestions from hydra3333.
For example, I now use an event object for motion activity, which reduces the CPU load to only 24% instead of 66%.
There is also a capture interval defined now that triggers regular image captures for a webcam or preview function.
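In isolation, the event idea looks roughly like this (a minimal sketch, not the full script below; the dummy thread only stands in for the h264 motion-vector callback):

Code: Select all

import threading
import time

motion_event = threading.Event()

# in the real script the motion-vector callback calls motion_event.set();
# this dummy thread only stands in for it here
def fake_detector():
    time.sleep(5)
    motion_event.set()

threading.Thread(target=fake_detector).start()

while True:
    # wait(1) sleeps for up to one second instead of spinning, which is why
    # the cpu load drops compared to a bare "while True" polling loop
    if motion_event.wait(1):
        print "motion handling would run here"
        motion_event.clear()
        break
    else:
        print "no motion yet, a good place for the interval snapshot"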

Code: Select all

#!/usr/bin/python

# This script implements a motion capture surveillance cam for raspberry pi using picam.
# It uses the motion vector magnitudes of the h264 hw-encoder to detect motion activity.
# At the time of motion detection a jpg snapshot is saved together with a h264 video stream
# some seconds before, during and after motion activity to the 'filepath' directory.

import os, logging
import subprocess
import threading
import io
import picamera
import picamera.array
import numpy as np
import datetime as dt
import time
from PIL import Image

#debug mode?
debug = 0
# setup filepath for motion and capture data output
filepath = '/var/www/motion'
# setup pre and post video recording around motion event
video_preseconds = 3
video_postseconds = 3
#setup video resolution
video_width = 1280 
video_height = 720
video_framerate = 25
#setup cam rotation (0, 180)
cam_rotation = 180

# setup motion detection resolution
motion_width = 320
motion_height = 240
# setup motion detection threshold, i.e. magnitude of a motion block to count as  motion
motion_threshold = 60
# setup motion detection sensitivity, i.e number of motion blocks that trigger a motion detection
motion_sensitivity = 6
# range of interests define areas within the motion analysis is done
 # [ [[start pixel on left side,end pixel on right side],[start pixel on top side,stop pixel on bottom side]] ]
# default is the whole image frame
motion_roi_count = 1
motion_roi = [ [[1,motion_width], [1,motion_height]] ]
# another example
#motion_roi_count = 1
#motion_roi = [ [[270,370], [190,290]]  ]
# example for 2 mask areas
#motion_roi_count = 2
#motion_roi = [ [[1,320],[1,240]], [[400,500],[300,400]] ]

# setup capture interval
capture_interval = 10
capture_filename = "snapshot"
# do not change code behind that line
#--------------------------------------
motion_event = threading.Event()
motion_timestamp = time.time()
if(motion_roi_count > 0) or (debug):
   motion_cols = (motion_width  + 15) // 16 + 1
   motion_rows = (motion_height + 15) // 16
   motion_array = np.zeros((motion_rows, motion_cols), dtype = np.uint8)
# create motion mask
if motion_roi_count > 0:
   motion_array_mask = np.zeros((motion_rows, motion_cols), dtype = np.uint8)
   for count in xrange(0, motion_roi_count):
      for col in xrange( (motion_roi[count][0][0]-1)//16, (motion_roi[count][0][1]-1+15)//16 ):
         for row in xrange( (motion_roi[count][1][0]-1)//16, (motion_roi[count][1][1]-1+15)//16 ):
            motion_array_mask[row][col] = 1

capture_timestamp = time.time()

#call back handler for motion output data from h264 hw encoder
class MyMotionDetector(picamera.array.PiMotionAnalysis):
   def analyse(self, a):
      global motion_event, motion_timestamp, motion_array, motion_array_mask, motion_roi_count
      # calculate length of motion vectors of mpeg macro blocks
      a = np.sqrt(
          np.square(a['x'].astype(np.float)) +
          np.square(a['y'].astype(np.float))
          ).clip(0, 255).astype(np.uint8)
      if motion_roi_count > 0:
         a = a * motion_array_mask
      # If there're more than 'sensitivity' vectors with a magnitude greater
      # than 'threshold', then say we've detected motion
      th = ((a > motion_threshold).sum() > motion_sensitivity)
      now = time.time()
      # motion logic, trigger on motion and stop after 2 seconds of inactivity
      if th:
         motion_timestamp = now

      if motion_event.is_set():
          if (now - motion_timestamp) >= video_postseconds:
             motion_event.clear()  
      else:
         if th:
             motion_event.set()
         if debug:
             idx = a > motion_threshold
             a[idx] = 255
             motion_array = a


def write_video(stream):
# Write the entire content of the circular buffer to disk. No need to
# lock the stream here as we're definitely not writing to it
# simultaneously
     global motion_filename

     with io.open(motion_filename + '-before.h264', 'wb') as output:
         for frame in stream.frames:
            if frame.frame_type == picamera.PiVideoFrameType.sps_header:
                stream.seek(frame.position)
                break
         while True:
            buf = stream.read1()
            if not buf:
               break
            output.write(buf)
     # Wipe the circular stream once we're done
     stream.seek(0)
     stream.truncate()





# create logger 'PiCam'
logger = logging.getLogger('PiCam')
logger.setLevel(logging.DEBUG)
# create file handler which logs even debug messages
fh = logging.FileHandler('picam.log')
fh.setLevel(logging.DEBUG)
# create formatter and add it to the handlers
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
fh.setFormatter(formatter)
# add the handlers to the logger
logger.addHandler(fh)
logger.info('PiCam has been started')

os.system('clear')
print "Motion Detection Application"
print "----------------------------"
print "                            "
print "Capture videos with %dx%d resolution" % (video_width, video_height)
print "Analyze motion with %dx%d resolution" % (motion_width, motion_height)
print "  resulting in %dx%d motion blocks" % (motion_cols, motion_rows)


with picamera.PiCamera() as camera:
    camera.resolution = (video_width, video_height)
    camera.framerate = video_framerate
    camera.rotation = cam_rotation
    camera.video_stabilization = True
    camera.annotate_background = True
    # setup a circular buffer
    stream = picamera.PiCameraCircularIO(camera, seconds = video_preseconds)
    # hi resolution video recording into circular buffer from splitter port 1
    camera.start_recording(stream, format='h264', splitter_port=1)
    # low resolution motion vector analysis from splitter port 2
    camera.start_recording('/dev/null', splitter_port=2, resize=(motion_width,motion_height), format='h264', motion_output=MyMotionDetector(camera, size=(motion_width,motion_height)))
    # wait some seconds for stable video data
    camera.wait_recording(2, splitter_port=1)
    motion_event.clear()
    logger.info('waiting for motion')
    print "Waiting for Motion!"
    try:
        while True:
            # motion event must trigger this action here
            camera.annotate_text = dt.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
            if motion_event.wait(1):
                logger.info('motion detected')
                print "Motion detected: ", dt.datetime.now()
                motion_filename = filepath + "/" + time.strftime("%Y%m%d-%H%M%S", time.gmtime(motion_timestamp))
                camera.split_recording(motion_filename + '-after.h264', splitter_port=1)
                # catch an image as video preview during video recording (uses splitter port 0) at time of the motion event
                camera.capture_sequence([motion_filename + '.jpg'], use_video_port=True, splitter_port=0)
                # dump motion array as image
                if debug:
                    img = Image.fromarray(motion_array)
                    img.save(motion_filename + "-motion.png")
                # save circular buffer before motion event
                write_video(stream)
                # wait for end of motion event here
                while motion_event.is_set():
                    camera.annotate_text = dt.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
                    camera.wait_recording(1, splitter_port=1)
                # split video recording back in to circular buffer
                camera.split_recording(stream, splitter_port=1)
                subprocess.call("MP4Box -cat %s -cat %s %s && rm -f %s" % (motion_filename + "-before.h264", motion_filename + "-after.h264", motion_filename + ".mp4", motion_filename + "-*.h264"), shell=True)
                logger.info('motion stopped')
                print "Motion stopped:", dt.datetime.now()
            else:
                # webcam mode, capture images on a regular interval
                if capture_interval:
                    if time.time() > (capture_timestamp + capture_interval):
                        capture_timestamp = time.time()
                        print "Capture Snapshot:", dt.datetime.now()
                        camera.capture_sequence([filepath + "/" + capture_filename + ".jpg"], use_video_port=True, splitter_port=0)

    finally:
        camera.stop_recording(splitter_port=1)
        camera.stop_recording(splitter_port=2)
Thanks for the nice feedback and hints.

hydra3333
Posts: 101
Joined: Thu Jan 10, 2013 11:48 pm

Re: Lightweight python motion detection

Sun Dec 28, 2014 5:32 am

Thank You ! A few more suggestions are attached below.

Code: Select all

#!/usr/bin/env python
# -*- coding: utf-8 -*-

# to fix scripts, turning them from MS-DOS format to unix format
# to get rid of MSDOS format do this to this file: sudo sed -i s/\\r//g ./filename

# This script was originally created by killagreg on Thu Dec 18, 2014 7:53 am
# and updated                           by killagreg on Fri Dec 19, 2014 7:09 pm
# see   http://www.raspberrypi.org/forums/viewtopic.php?p=656881#p656881
#
# This script implements a motion capture surveillance cam for raspberry pi using picam.
# It uses the "motion vectors" magnitude of the h264 hw-encoder to detect motion activity.
# At the time of motion detection a jpg snapshot is saved together with a h264 video stream
# some seconds before, during and after motion activity to the 'filepath' directory.
#
# APPARENTLY INSPIRED BY PICAMERA 1.8 TECHNIQUES documented at
# http://picamera.readthedocs.org/en/release-1.8/recipes2.html#rapid-capture-and-processing
# where the PICAMERA code uses efficient underlying mmal access and numpy code
#
# "the original and the best" script code by "killagreg" was at
# http://www.raspberrypi.org/forums/viewtopic.php?p=656881#p656881
#
# Modifications:
# 2014.12.26 
#    - modified slightly for the boundary case of no motion detection "windows" - avoid performing the masking step
# 2014.12.28 (hey "killagreg", really nice updates)
#    - incorporate latest changes by killagreg over christmas 2014
#         from http://www.raspberrypi.org/forums/viewtopic.php?p=660572#p660572
#    - added/changed "mp4 mode" to be optional and not the default (also added some MP4box flags)
#    - repositioned a small bit of code to avoid a possible "initial conditions" bug
#    - modified (webcam like) snapshot capture interval processing slightly
#    - added extra logging
#    - made use of localtime instead of GMT, for use in filenames
#    - removed "print" commands and instead rely on logging 
#    - added circular file logging and specified the path of the log file
#
# notes:
#    1. it likely uses 100% cpu since it loops around infinitely in a "while true" condition until motion is detected
#    2. maybe a programmer could look at it and do something different also remembering not to delay start of capture ... feel free
#    3. the output video stream files are raw h264, NOT repeat NOT mpeg4 video files, so you'll have to convert them to .mp4 yourself
#    4. To prepare for using this python script (yes, yes, 777, roll your own if you object)
#        sudo apt-get install -y rpi-update
#        sudo rpi-update
#        sudo apt-get update 
#        sudo apt-get upgrade
#        sudo apt-get install python-picamera python-imaging-tk gpac -y
#        sudo mkdir /var/pymotiondetector
#        sudo chmod 777 /var/pymotiondetector
#
# licensing: 
#    this being a derivative, whatever killagreg had (acknowledging killagreg code looks to be substantially from examples in
#    the picamera documentation http://picamera.readthedocs.org/en/release-1.8/recipes2.html#rapid-capture-and-processing
#    i.e. free for any and all use I guess
#
# Example to separately and externally convert h264 files to mp4, on the Pi (using MP4box from gpac)
#   sudo MP4Box -fps <use capture framerate> -add raw_video.h264 -isma -new wrapped_video.mp4
#
# Example to separately and externally convert h264 files to mp4, on the Windows
#   "C:\ffmpeg\bin\ffmpeg.exe" -f h264 -r <use capture framerate> -i "raw_video.h264" -c:v copy -an -movflags +faststart -y "wrapped_video.mp4"
#   REM if necessary add   -bsf:v h264_mp4toannexb   before "-r" 
# or
#   "C:\MP4box\MP4Box.exe" -fps <use capture framerate> -add "raw_video.h264" -isma -new "wrapped_video.mp4"
#
import os
import logging
import logging.handlers
import subprocess
import io
import picamera
import picamera.array
import numpy as np
import datetime as dt
import time
from PIL import Image

# ----------------------------------------------------------------------------------------------------------------
# in this section are parameters you can fiddle with

#debug mode? dumps extra debug info 
debug = False  # False

# mp4 mode ?
# if we set mp4_mode, 
# then the h264 files are converted to an mp4 when the motion capture is completed, using MP4box (part of gpac)
# warning, warning, danger will robinson ... 
# mp4 mode consumes a lot of CPU and elapsed time and it is almost certain that we will *lose frames*
# after a detection has finished, if a new movement occurs during that conversion
# For that reason I don't use mp4 mode
# and instead use the original quicker "cat" and separately convert the .h264 files later, if I want to.
mp4_mode = False

# setup filepath for motion capture data (which is in raw h264 format) plus the start-of-motion jpeg.
# sudo mkdir /var/pymotiondetector
# sudo chmod 777 /var/pymotiondetector
filepath = '/var/pymotiondetector'
logger_filename = filepath + '/pymotiondetector.log'
#logger_filename = 'pymotiondetector.log'

# setup pre and post video recording around motion events
video_preseconds = 5    # minimum 1
video_postseconds = 10  # minimum 1

# setup the main video/snapshot camera resolution
# see this link for a full discussion on how to choose a valid resolution that will work
# http://picamera.readthedocs.org/en/latest/fov.html
video_width = 640
#video_width = 1280
video_height = 480
#video_height = 720

# setup the camera video framerate, PAL is 25, let's go for 5 instead
#video_framerate = 25
video_framerate = 5

#setup video rotation (0, 90, 180, 270)
video_rotation = 0 

# setup the camera to perform video stabilization
video_stabilization = True

# setup the camera to put a black background on the annotation (in our case, for date/time)
#video_annotate_background = True
video_annotate_background = False

# setup the camera to put frame number in the annotation
video_annotate_frame_num = True

# we could setup a webcam mode, to capture images on a regular interval in between motion recordings
# setup jpeg capture snapshot interval and filename prefix
snapshot_capture_interval = 0
#snapshot_capture_interval = 300
snapshot_capture_filename = "snapshot"

#--- now for the motion detection parameters
# define motion detection video resolution, equal or smaller than capture video resolution
# smaller = less cpu needed thus "better" and less likely to lose frames etc
motion_width  = 320 #640
motion_height = 240 #480

# setup motion detection threshold, i.e. magnitude of a motion block to count as  motion
motion_threshold = 60
#motion_threshold = 30
# setup motion detection sensitivity, i.e number of motion blocks that trigger a motion detection
#motion_sensitivity = 10
motion_sensitivity = 6

# Ranges Of Interest define areas within which the motion analysis is done, inside the smaller "motion detection video resolution"
# ie define areas within the motion analysis picture that are used for motion analysis
#    [ [[start pixel on left side,end pixel on right side],[start pixel on top side,stop pixel on bottom side]] ]
#
# default to no motion masking, ("0")
# ie use the "whole image frame" of the lower-resolution-capture "motion vectors"
# and avoid CPU/memory overheads of doing the masking
motion_roi_count = 0
# this is the whole "motion detection image frame"
#motion_roi_count = 1
#motion_roi = [ [[1,motion_width],[1,motion_height]] ]
# another example, one motion detection mask area
#motion_roi_count = 1
#motion_roi = [ [[270,370],[190,290]]  ]
# example for 2 mask areas
#motion_roi_count = 2
#motion_roi = [ [[1,320],[1,240]], [[400,500],[300,400]] ]

# ----------------------------------------------------------------------------------------------------------------

# do not change code below the line
#-----------------------------------
# pre-initialise variables in case they're used later
motion_detected = False
motion_timestamp = time.time()
snapshot_capture_timestamp = time.time()

motion_cols = (motion_width  + 15) // 16 + 1
motion_rows = (motion_height + 15) // 16
if (motion_roi_count > 0) or (debug):
    motion_array = np.zeros((motion_rows, motion_cols), dtype = np.uint8)

# create a zero "AND" motion mask of masked areas 
# and then fill 1's into the mask areas of interest which we specified above
if motion_roi_count > 0:
    motion_array_mask = np.zeros((motion_rows, motion_cols), dtype = np.uint8)
    for count in xrange(0, motion_roi_count):
        for col in xrange( (motion_roi[count][0][0]-1)//16, (motion_roi[count][0][1]-1+15)//16 ):
            for row in xrange( (motion_roi[count][1][0]-1)//16, (motion_roi[count][1][1]-1+15)//16 ):
                motion_array_mask[row][col] = 1

#call back handler for motion output data from h264 hw encoder
#this processes the motion vectors from the low resolution split capture
class MyMotionDetector(picamera.array.PiMotionAnalysis):
   def analyse(self, a):
      global motion_detected, motion_timestamp, motion_array, motion_array_mask, motion_roi_count
      # calculate length of motion vectors of mpeg macro blocks
      a = np.sqrt(
          np.square(a['x'].astype(np.float)) +
          np.square(a['y'].astype(np.float))
          ).clip(0, 255).astype(np.uint8)
# zero out (mask out) anything outside our specified areas of interest, if we have a mask
      if motion_roi_count > 0:
          a = a * motion_array_mask
      # If there're more than 'sensitivity' vectors with a magnitude greater
      # than 'threshold', then say we've detected motion
      th = ((a > motion_threshold).sum() > motion_sensitivity)
      now = time.time()
# by now ...
# th               = motion detected on current frame
# motion_timestamp = the last time when motion was detected in a frame (start of time window)
# motion_detected  = whether motion detection time window is currently triggered
#                  = is only turned off if motion has previously been detected
#                    and both "no motion detected" and its time window has expired
      # motion logic, trigger on motion and stop after video_postseconds seconds of inactivity
      if th:
          motion_timestamp = now

      if motion_detected:
          if (now - motion_timestamp) >= video_postseconds:
              motion_detected = False
      else:
          if th:
              motion_detected = True
          if debug:
              idx = a > motion_threshold
              a[idx] = 255
              motion_array = a

def write_video(stream):
# Write the entire content of the circular buffer to disk. No need to
# lock the stream here as we're definitely not writing to it
# simultaneously
     global motion_filename

     with io.open(motion_filename + '-before.h264', 'wb') as output:
         for frame in stream.frames:
             if frame.frame_type == picamera.PiVideoFrameType.sps_header:
                 stream.seek(frame.position)
                 break
         while True:
             buf = stream.read1()
             if not buf:
                 break
             output.write(buf)
     # Wipe the circular stream once we're done
     stream.seek(0)
     stream.truncate()

#-----------------------------------------
# create logger 'pymotiondetector'
#
logger = logging.getLogger('pymotiondetector')
logger.setLevel(logging.DEBUG)
# create file handler which logs even debug messages
#fh = logging.FileHandler('pymotiondetector.log')
fh = logging.handlers.RotatingFileHandler(logger_filename, mode='a', maxBytes=(1024*1000 * 2), backupCount=5, delay=0)
fh.setLevel(logging.DEBUG)
# create formatter and add it to the handlers
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
fh.setFormatter(formatter)
# add the handlers to the logger
logger.addHandler(fh)
logger.info('---------------------------------')
logger.info('pymotiondetector has been started')
logger.info('---------------------------------')
msg = "Capture videos with %dx%d resolution" % (video_width, video_height)
logger.info(msg)
msg = "Analyze motion with %dx%d resolution" % (motion_width, motion_height)
logger.info(msg)
msg = "  resulting in %dx%d motion blocks" % (motion_cols, motion_rows)
logger.info(msg)


#os.system('clear')
#print "Motion Detection Application"
#print "----------------------------"
#print "                            "
#print "Capture videos with %dx%d resolution" % (video_width, video_height)
#print "Analyze motion with %dx%d resolution" % (motion_width, motion_height)
#print "  resulting in %dx%d motion blocks" % (motion_cols, motion_rows)

with picamera.PiCamera() as camera:
   camera.resolution = (video_width, video_height)
   camera.framerate = video_framerate
   camera.rotation = video_rotation
   camera.video_stabilization = video_stabilization
   camera.annotate_background = video_annotate_background
   camera.annotate_frame_num = video_annotate_frame_num

   # setup a circular buffer
   stream = picamera.PiCameraCircularIO(camera, seconds = video_preseconds)
   # 1. split the hi resolution video recording into circular buffer from splitter port 1
   camera.start_recording(stream, format='h264', splitter_port=1)
   #camera.start_recording('test.h264', splitter_port=1)
   # 2. split the low resolution motion vector analysis from splitter port 2, throw away the actual video
   camera.start_recording('/dev/null', splitter_port=2, resize=(motion_width,motion_height) ,format='h264', motion_output=MyMotionDetector(camera, size=(motion_width,motion_height)))
   # wait some seconds for stable video data to be available
   camera.wait_recording(2, splitter_port=1)
   motion_detected = False
   logger.info('pymotiondetector has been started')

   #print "Motion Capture ready - Waiting for motion"
   logger.info('OK. Waiting for first motion to be detected')

   try:
       while True:
          # the callback "MyMotionDetector" has been setup above using the low resolution split
          # original code "while true" above ... loop around as fast as we can go until motion is detected ... thus 100 percent cpu ?
          # a motion event must trigger this action here
          camera.annotate_text = dt.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
          if motion_detected:
             #print "Motion detected: " , dt.datetime.now()
             logger.info('detected motion')
            #motion_filename = filepath + "/" + time.strftime("%Y%m%d-%H%M%S", time.gmtime(motion_timestamp))
             motion_filename = filepath + "/" + time.strftime("%Y%m%d-%H%M%S", time.localtime(motion_timestamp))
             # split  the high res video stream to a file instead of to the internal circular buffer
             logger.info('splitting video from circular IO buffer to after-motion-detected h264 file ')
             camera.split_recording(motion_filename + '-after.h264', splitter_port=1)
             # catch an image as video preview during video recording (uses splitter port 0) at time of the motion event
             msg = "started  capture jpeg image file %s" % (motion_filename + ".jpg")
             logger.info(msg)
             camera.capture_sequence([motion_filename + '.jpg'], use_video_port=True, splitter_port=0)
             msg = "finished capture jpeg image file %s" % (motion_filename + ".jpg")
             logger.info(msg)
             # if we want to see debug motion stuff, dump motion array as a png image
             if debug:
                 logger.info('saving debug motion vectors')
                 img = Image.fromarray(motion_array)
                 img.save(motion_filename + "-motion.png")
             # save circular buffer containing "before motion" event video, ie write it to a file
             logger.info('started  saving before-motion circular buffer')
             write_video(stream)
             logger.info('finished saving before-motion circular IO buffer')
             #---- wait for the end of motion event here, in one second increments
             logger.info('start waiting to detect end of motion')
             while motion_detected:
                camera.annotate_text = dt.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
                camera.wait_recording(1, splitter_port=1)
             #---- end of motion event detected
             logger.info('detected end of motion')
             #split video recording back in to circular buffer
             logger.info('splitting video back into the circular IO buffer')
             camera.split_recording(stream, splitter_port=1)
             if mp4_mode:
                 msg = "started  copying h264 into mp4 file %s" % (motion_filename + ".mp4")
                 logger.info(msg)
                 msg = "MP4Box -fps %d -cat %s -cat %s -isma -new %s && rm -f %s && rm -f %s" % (video_framerate, motion_filename + "-before.h264", motion_filename + "-after.h264", motion_filename + ".mp4", motion_filename + "-before.h264", motion_filename + "-after.h264")
                 logger.info(msg)
                 subprocess.call(msg, shell=True)
                 msg = "finished copying h264 into mp4 file %s" % (motion_filename + ".mp4")
                 logger.info(msg)
             else:
                 msg = "started  concatenating h264 files into %s" % (motion_filename + ".h264")
                 logger.info(msg)
                 msg = "cat %s %s > %s && rm -f %s && rm -f %s" % (motion_filename + "-before.h264", motion_filename + "-after.h264", motion_filename + ".h264", motion_filename + "-before.h264", motion_filename + "-after.h264")
                 logger.info(msg)
                 subprocess.call(msg, shell=True)
                 msg = "finished concatenating h264 files into %s" % (motion_filename + ".h264")
                 logger.info(msg)
             msg = "Finished capture processing, entering constant loop state awaiting next motion detection by class MyMotionDetector ..."
             logger.info(msg)
             snapshot_capture_timestamp = time.time()
          else:
             # no motion detected or in progress - if webcam mode, capture images on a regular interval
             if (snapshot_capture_interval > 0):
                 if(time.time() > (snapshot_capture_timestamp + snapshot_capture_interval) ):
                    #snapf = filepath + "/" + snapshot_capture_filename + "-" + time.strftime("%Y%m%d-%H%M%S", time.gmtime(motion_timestamp))
                     snapf = filepath + "/" + snapshot_capture_filename + "-" + time.strftime("%Y%m%d-%H%M%S", time.localtime(motion_timestamp))
                     camera.capture_sequence([snapf + ".jpg"], use_video_port=True, splitter_port=0)
                     snapshot_capture_timestamp = time.time()
                     logger.info("Captured snapshot")

   finally:
       camera.stop_recording(splitter_port=1)
       camera.stop_recording(splitter_port=2)
As for speed, logging suggests there's at least one potential choke point (pending checking for a bug) where we lose around 10 seconds of motion detection and recording - around the time of splitting from the circular IO memory buffer into an "after" h264 file.
2014-12-28 15:54:39,904 - pymotiondetector - INFO - splitting video from circular IO buffer to after-motion-detected h264 file
2014-12-28 15:54:50,184 - pymotiondetector - INFO - started capture jpeg image file /var/pymotiondetector/20141228-155439.jpg
See here :-

Code: Select all

2014-12-28 15:54:36,898 - pymotiondetector - INFO - ---------------------------------
2014-12-28 15:54:36,903 - pymotiondetector - INFO - pymotiondetector has been started
2014-12-28 15:54:36,906 - pymotiondetector - INFO - ---------------------------------
2014-12-28 15:54:36,909 - pymotiondetector - INFO - Capture videos with 640x480 resolution
2014-12-28 15:54:36,911 - pymotiondetector - INFO - Analyze motion with 320x240 resolution
2014-12-28 15:54:36,914 - pymotiondetector - INFO -   resulting in 21x15 motion blocks
2014-12-28 15:54:39,763 - pymotiondetector - INFO - pymotiondetector has been started
2014-12-28 15:54:39,765 - pymotiondetector - INFO - OK. Waiting for first motion to be detected
2014-12-28 15:54:39,901 - pymotiondetector - INFO - detected motion
2014-12-28 15:54:39,904 - pymotiondetector - INFO - splitting video from circular IO buffer to after-motion-detected h264 file 
2014-12-28 15:54:50,184 - pymotiondetector - INFO - started  capture jpeg image file /var/pymotiondetector/20141228-155439.jpg
2014-12-28 15:54:50,358 - pymotiondetector - INFO - finished capture jpeg image file /var/pymotiondetector/20141228-155439.jpg
2014-12-28 15:54:50,361 - pymotiondetector - INFO - started  saving before-motion circular buffer
2014-12-28 15:54:50,433 - pymotiondetector - INFO - finished saving before-motion circular IO buffer
2014-12-28 15:54:50,436 - pymotiondetector - INFO - start waiting to detect end of motion
2014-12-28 15:55:00,466 - pymotiondetector - INFO - detected end of motion
2014-12-28 15:55:00,469 - pymotiondetector - INFO - splitting video back into the circular IO buffer
2014-12-28 15:55:01,999 - pymotiondetector - INFO - started  concatenating h264 files into /var/pymotiondetector/20141228-155439.h264
2014-12-28 15:55:02,001 - pymotiondetector - INFO - cat /var/pymotiondetector/20141228-155439-before.h264 /var/pymotiondetector/20141228-155439-after.h264 > /var/pymotiondetector/20141228-155439.h264 && rm -f /var/pymotiondetector/20141228-155439-before.h264 && rm -f /var/pymotiondetector/20141228-155439-after.h264
2014-12-28 15:55:02,078 - pymotiondetector - INFO - finished concatenating h264 files into /var/pymotiondetector/20141228-155439.h264
2014-12-28 15:55:02,081 - pymotiondetector - INFO - Finished capture processing, entering constant loop state awaiting next motion detection by class MyMotionDetector ...
I guess some next queries may be
- how to get rid of the choke point (looks like an issue intrinsic to the picamera library ? I have a class 10 SD card)
- can those shell things be done "asynchronously" in another thread or something
- can the code be restructured from a constant "while true loop" to some form of "other-thread callback" (? 8-) ) when variable motion_detected gets set to true by the motiondetector component
- what size GPU memory to allocate ? (I currently use 128Mb)

Not being a programmer, and knowing neither python nor linux ... is it possible to run the main code at a higher priority and have it farm off other code as threads at lower priority? If this is possible, links to help me get started would be welcome. Thanks.
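One small experiment that might be worth trying first (a rough sketch, the file names are only placeholders) is renicing just the external shell call, so the concatenation runs at low priority without touching the main python process:

Code: Select all

import os
import subprocess

# placeholder file names, not the ones the script generates
cmd = "cat before.h264 after.h264 > joined.h264"
# preexec_fn runs in the child process just before the shell starts, so only
# the child (and the cat it spawns) is reniced, not the main python process
subprocess.call(cmd, shell=True, preexec_fn=lambda: os.nice(19))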

killagreg
Posts: 12
Joined: Wed Dec 17, 2014 10:15 pm

Re: Lightweight python motion detection

Sun Dec 28, 2014 11:16 am

As for speed, logging suggests there's at least one potential choke point (pending checking for a bug) where we lose around 10 seconds of motion detection and recording - around the time of splitting from the circular IO memory buffer into an "after" h264 file.
We do not lose video there, because switching from the circular buffer to the "after" file happens within one frame.
But the function waits for the next key frame in the h264 stream. Because you have reduced the framerate to 5, it takes
longer for the stream switching. The recording itself happens in a parallel thread managed by the picamera lib. Anyhow, as long as we are recording we practically do not lose motion events.
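If the gap really is only the wait for the next key frame, one thing that might help (untested here, just a sketch) is asking the encoder for more frequent key frames via the intra_period option when starting the high resolution recording:

Code: Select all

import picamera

video_framerate = 5
with picamera.PiCamera() as camera:
    camera.framerate = video_framerate
    stream = picamera.PiCameraCircularIO(camera, seconds=5)
    # intra_period=video_framerate requests a key frame roughly once per second,
    # so split_recording() finds an SPS header quickly even at 5 fps
    camera.start_recording(stream, format='h264', splitter_port=1,
                           intra_period=video_framerate)
    camera.wait_recording(10, splitter_port=1)
    camera.stop_recording(splitter_port=1)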

At the moment I am looking more into the threading lib to separate the transcoding to mp4 from the main loop and put that into the background.
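The basic shape of that would be something like this (just a sketch, the helper name is made up):

Code: Select all

import threading
import subprocess

def transcode_in_background(h264_file, mp4_file, fps):
    # MP4Box blocks until the wrap is finished, but only inside this worker
    # thread, so the capture loop keeps running while it works
    def worker():
        subprocess.call(["MP4Box", "-fps", str(fps), "-add", h264_file,
                         "-isma", "-new", mp4_file])
    t = threading.Thread(target=worker)
    t.daemon = True
    t.start()

# e.g. after the -before/-after files have been joined:
# transcode_in_background(motion_filename + ".h264", motion_filename + ".mp4", video_framerate)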
