hydra3333
Posts: 146
Joined: Thu Jan 10, 2013 11:48 pm

Re: Lightweight python motion detection

Sun Dec 28, 2014 12:46 pm

I notice this all looks a lot like the code in the thread over at
http://www.raspberrypi.org/forums/viewt ... 96#p654662
and described at
http://www.raspberrypi.org/forums/viewt ... 43&t=87997
including the apparent "delay"
Last edited by hydra3333 on Sun Dec 28, 2014 1:05 pm, edited 1 time in total.

killagreg
Posts: 12
Joined: Wed Dec 17, 2014 10:15 pm

Re: Lightweight python motion detection

Sun Dec 28, 2014 12:59 pm

I have now separated the mp4 transcoding after motion recording into its own thread.

Code: Select all

#!/usr/bin/python

# This script implements a motion capture surveillance cam for raspberry pi using picam.
# It uses the motion vectors magnitude of the h264 hw-encoder to detect motion activity.
# At the time of motion detection a jpg snapshot is saved together with an h264 video stream
# of some seconds before, during and after motion activity to the 'filepath' directory.

import os, logging
import subprocess
import threading
import io
import picamera
import picamera.array
import numpy as np
import datetime as dt
import time
from PIL import Image

#debug mode?
debug = False
#setup filepath for motion and capture data output
filepath = '/var/www/motion'
#setup filepath for logfile
logger_filepath = 'picam.log'
# setup pre and post video recording around motion event
video_preseconds = 3
video_postseconds = 3
#setup video resolution
video_width = 1280 
video_height = 720
video_framerate = 25
#setup cam rotation (0, 180)
cam_rotation = 180

# setup motion detection resolution
motion_width = 320
motion_height = 240
# setup motion detection threshold, i.e. magnitude of a motion block to count as motion
motion_threshold = 60
# setup motion detection sensitivity, i.e. number of motion blocks that trigger a motion detection
motion_sensitivity = 6
# regions of interest define the areas within which the motion analysis is done
# [ [[start pixel on left side,end pixel on right side],[start pixel on top side,stop pixel on bottom side]] ]
# default is the whole image frame
motion_roi_count = 0
# another example
#motion_roi_count = 1
#motion_roi = [ [[270,370], [190,290]]  ]
# example for 2 mask areas
#motion_roi_count = 2
#motion_roi = [ [[1,320],[1,240]], [[400,500],[300,400]] ]

# setup capture interval
capture_interval = 10
capture_filename = "snapshot"
# do not change the code below this line
#--------------------------------------
motion_event = threading.Event()
motion_timestamp = time.time()
motion_cols = (motion_width  + 15) // 16 + 1
motion_rows = (motion_height + 15) // 16
if(motion_roi_count or debug):
   motion_array = np.zeros((motion_rows, motion_cols), dtype = np.uint8)
# create motion mask
if motion_roi_count > 0:
   motion_array_mask = np.zeros((motion_rows, motion_cols), dtype = np.uint8)
   for count in xrange(0, motion_roi_count):
      for col in xrange( (motion_roi[count][0][0]-1)//16, (motion_roi[count][0][1]-1+15)//16 ):
         for row in xrange( (motion_roi[count][1][0]-1)//16, (motion_roi[count][1][1]-1+15)//16 ):
            motion_array_mask[row][col] = 1

capture_timestamp = time.time()

#call back handler for motion output data from h264 hw encoder
class MyMotionDetector(picamera.array.PiMotionAnalysis):
   def analyse(self, a):
      global motion_event, motion_timestamp, motion_array, motion_array_mask, motion_roi_count
      # calculate the length of the motion vectors of the mpeg macro blocks
      a = np.sqrt(
          np.square(a['x'].astype(np.float)) +
          np.square(a['y'].astype(np.float))
          ).clip(0, 255).astype(np.uint8)
      if motion_roi_count > 0:
         a = a * motion_array_mask
      # If there're more than 'sensitivity' vectors with a magnitude greater
      # than 'threshold', then say we've detected motion
      th = ((a > motion_threshold).sum() > motion_sensitivity)
      now = time.time()
      # motion logic: trigger on motion and stop after video_postseconds of inactivity
      if th:
         motion_timestamp = now

      if motion_event.is_set():
         if (now - motion_timestamp) >= video_postseconds:
            motion_event.clear()
      else:
         if th:
            motion_event.set()
            if debug:
               idx = a > motion_threshold
               a[idx] = 255
               motion_array = a


def write_video(stream):
     # Write the entire content of the circular buffer to disk. No need to
     # lock the stream here as we're definitely not writing to it
     # simultaneously.
     global motion_filename

     with io.open(motion_filename + '-before.h264', 'wb') as output:
         for frame in stream.frames:
            if frame.frame_type == picamera.PiVideoFrameType.sps_header:
                stream.seek(frame.position)
                break
         while True:
            buf = stream.read1()
            if not buf:
               break
            output.write(buf)
     # Wipe the circular stream once we're done
     stream.seek(0)
     stream.truncate()


class myThread (threading.Thread):
    def __init__(self, threadID, name, counter):
        threading.Thread.__init__(self)
        self.threadID = threadID
        self.name = name
        self.counter = counter
    def run(self):
        subprocess.call("MP4Box -cat %s -cat %s %s && rm -f %s" % (self.name + "-before.h264", self.name + "-after.h264", self.name + ".mp4", self.name + "-*.h264"), shell=True)


# create logger
logger = logging.getLogger('PiCam')
logger.setLevel(logging.DEBUG)
# create file handler which logs even debug messages
fh = logging.FileHandler(logger_filepath)
fh.setLevel(logging.DEBUG)
# create formatter and add it to the handlers
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
fh.setFormatter(formatter)
# add the handlers to the logger
logger.addHandler(fh)
logger.info('---------------------------------')
logger.info('PiCam has been started')
msg = "Capture videos with %dx%d resolution" % (video_width, video_height)
logger.info(msg)
msg = "Analyze motion with %dx%d resolution" % (motion_width, motion_height)
logger.info(msg)
msg =  "  resulting in %dx%d motion blocks" % (motion_cols, motion_rows)
logger.info(msg)

with picamera.PiCamera() as camera:
    camera.resolution = (video_width, video_height)
    camera.framerate = video_framerate
    camera.rotation = cam_rotation
    camera.video_stabilization = True
    camera.annotate_background = True
    # setup a circular buffer
    stream = picamera.PiCameraCircularIO(camera, seconds = video_preseconds)
    # hi resolution video recording into circular buffer from splitter port 1
    camera.start_recording(stream, format='h264', splitter_port=1)
    # low resolution motion vector analysis from splitter port 2
    camera.start_recording('/dev/null', splitter_port=2, resize=(motion_width,motion_height), format='h264', motion_output=MyMotionDetector(camera, size=(motion_width,motion_height)))
    # wait some seconds for stable video data
    camera.wait_recording(2, splitter_port=1)
    motion_event.clear()
    logger.info('waiting for motion')
    thread_id = 1
    try:
        while True:
            # the motion event triggers the actions here
            camera.annotate_text = dt.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
            if motion_event.wait(1):
                logger.info('motion detected')
                motion_filename = filepath + "/" + time.strftime("%Y%m%d-%H%M%S", time.gmtime(motion_timestamp))
                camera.split_recording(motion_filename + '-after.h264', splitter_port=1)
                # capture an image as video preview at the time of the motion event (uses splitter port 0)
                camera.capture_sequence([motion_filename + '.jpg'], use_video_port=True, splitter_port=0)
                # dump motion array as image
                if debug:
                    img = Image.fromarray(motion_array)
                    img.save(motion_filename + "-motion.png")
                # save circular buffer from before the motion event
                write_video(stream)
                # wait for the end of the motion event here
                while motion_event.is_set():
                    camera.annotate_text = dt.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
                    camera.wait_recording(1, splitter_port=1)
                # split video recording back into the circular buffer
                camera.split_recording(stream, splitter_port=1)
                # transcode to mp4 using a separate thread
                thread = myThread(thread_id, motion_filename, thread_id)
                thread.start()
                thread_id = thread_id + 1
                logger.info('motion stopped')
            else:
                # webcam mode, capture images at a regular interval
                if capture_interval:
                    if time.time() > (capture_timestamp + capture_interval):
                        capture_timestamp = time.time()
                        # logger.info('capture snapshot')
                        camera.capture_sequence([filepath + "/" + capture_filename + ".jpg"], use_video_port=True, splitter_port=0)

    finally:
        camera.stop_recording(splitter_port=1)
        camera.stop_recording(splitter_port=2)

hoggerz
Posts: 8
Joined: Sun Dec 29, 2013 1:05 am

Re: Lightweight python motion detection

Sun Dec 28, 2014 1:09 pm

Hi,
Great script! I was just wondering how to increase the 2 second motion detection cut-off to, say, around 5 or 10 seconds before the script decides that motion has ceased and commits the file. It isn't immediately obvious what I should change without probably breaking something!

Thanks

Mark
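For reference, the post-motion window in the script above is the `video_postseconds` constant; the detector refreshes `motion_timestamp` whenever it sees motion and clears the event only once that many seconds have passed. A minimal sketch of the same clear-after-inactivity logic (timestamps simulated, no camera needed):

```python
video_postseconds = 10   # was 3 in the posted script; raise for a longer tail

# the detector refreshes motion_timestamp on every frame with motion and
# considers motion over only once 'now' is video_postseconds past it
def motion_over(now, motion_timestamp):
    return (now - motion_timestamp) >= video_postseconds

t0 = 1000.0
print(motion_over(t0 + 5, t0))    # still within the window
print(motion_over(t0 + 11, t0))   # motion considered ceased
```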

hydra3333
Posts: 146
Joined: Thu Jan 10, 2013 11:48 pm

Re: Lightweight python motion detection

Sun Dec 28, 2014 1:16 pm

Nice. The MP4Box call inside the thread seems to be a synchronous call?
I wonder how to specify a GOP size of 5 or whatever it is, for low framerates, so we only lose a second or so (yes, a larger h264 file size).
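If it helps, picamera's `start_recording()` does take an `intra_period` argument (number of frames between I-frames), and a split can only land on an I-frame, so the worst-case wait scales with GOP length. A rough sketch of the arithmetic (the commented `start_recording` line is an untested guess at where the parameter goes):

```python
# worst-case delay before a split can happen: the encoder must reach
# the next I-frame, at most intra_period frames away
def worst_case_split_delay(intra_period, framerate):
    return float(intra_period) / framerate

print(worst_case_split_delay(5, 25))   # a GOP of 5 at 25 fps: 0.2 s worst case

# the recording call would then look something like (untested):
# camera.start_recording(stream, format='h264', splitter_port=1, intra_period=5)
```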

Mark_T
Posts: 149
Joined: Sat Dec 27, 2014 10:54 am

Re: Lightweight python motion detection

Sun Dec 28, 2014 4:05 pm

I've recently played with motion capture (detecting birds at a feeder), but using openCV, which seems not to
have been mentioned here.

I use V4L and openCV to calculate the difference between successive frames
as a greyscale image, then calculate the SD (standard deviation) of all the pixel
values in this difference (using openCV's histogram function to speed this up).

I threshold the SD to exit the program if enough motion/change is detected.

The advantage I see with this is that this python program works unchanged on the Pi
and my laptop. It should work unchanged wi

Of course you have to have V4L and openCV set up, but that's easy: merely

Code: Select all

sudo apt-get install python-opencv
and adding

Code: Select all

modprobe bcm2835-v4l2
to /etc/rc.local or a boot-time cronscript.

The python code:

Code: Select all

#!/usr/bin/env python

import cv2.cv as cv
import cv2
import numpy as np
from math import sqrt

import sys


width  = 320          # use a low resolution mode for speed and less noise
height = 240
pixels = width*height

prev_image = None
frame = 0 

threshold_value = 4.0  # adjust this for sensitivity

def track(bgr_image):
    global prev_image
    global frame

    gray = cv2.cvtColor(bgr_image, cv.CV_BGR2GRAY)

    if prev_image is None:
        prev_image = gray
    else:
        # uint8 subtraction wraps around; +128 centres the difference on mid-grey
        diff = gray - prev_image + 128
        # cv2.imshow("diff", diff)  # uncomment to see difference frames

        histb = cv2.calcHist([diff], [0], None, [256], [0, 256])
        histb = histb[:, 0]

        # mean and variance of the difference image, computed from its histogram
        total = 0
        total_sq = 0
        for val in range(0, 256):
            count = int(histb[val])
            weighted = count * val
            total += weighted
            total_sq += val * weighted

        var = (total_sq - float(total) * total / pixels) / pixels
        sd = sqrt(var)
        # print sd
        if frame > 10 and sd > threshold_value:  # ignore first ten frames so exposure can settle
            print "SD: " + str(sd)
            return False

        prev_image = gray

    frame += 1

    if cv2.waitKey(5) == 27:  # escape key handling
        return False

    return True


# Test with input from camera
if __name__ == '__main__':

    capture = cv2.VideoCapture(-1)
    capture.set(3, width)   # CV_CAP_PROP_FRAME_WIDTH
    capture.set(4, height)  # CV_CAP_PROP_FRAME_HEIGHT

    while capture.isOpened():
        capture.grab()
        ret, im = capture.retrieve()
        if ret:
            if not track(im):
                break


killagreg
Posts: 12
Joined: Wed Dec 17, 2014 10:15 pm

Re: Lightweight python motion detection

Sun Dec 28, 2014 4:57 pm

I'd like the possibility to play around with the motion vector data (x, y, sad) of the macro blocks of the h264 encoder of the graphics chip. This is a really powerful tool for further improvements. At the moment we are only thresholding the motion vector length for motion detection, calculating the Pythagorean length from the (x,y) direction components.
The sum of absolute pixel differences (sad) is also a nice metric for motion detection, without the effort of floating-point numpy calculations
in the motion data callback handler.

On the other hand, calculating the angle of the motion vectors gives the opportunity to trigger even on the direction of motion in the image sequence without a huge cpu load.
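To illustrate, with the motion data laid out as picamera's structured array (fields 'x' and 'y' as signed bytes, 'sad' as an unsigned 16-bit int, one record per macro block), the magnitude, a SAD threshold, and the direction all fall out of a few numpy one-liners. A sketch on synthetic data (the threshold values here are made up):

```python
import numpy as np

# one record per 16x16 macro block, as delivered by the encoder
motion_dtype = np.dtype([('x', 'i1'), ('y', 'i1'), ('sad', 'u2')])
a = np.zeros((15, 21), dtype=motion_dtype)
a['x'][5, 5], a['y'][5, 5] = 3, 4        # one moving block
a['sad'][5, 5] = 800

# Pythagorean length, as in the script above
mag = np.sqrt(a['x'].astype(np.float64) ** 2 + a['y'].astype(np.float64) ** 2)

# integer-only alternative: threshold the sum of absolute differences
sad_hit = a['sad'] > 500

# per-block direction of motion, in radians
angle = np.arctan2(a['y'].astype(np.float64), a['x'].astype(np.float64))

print(mag[5, 5], sad_hit.sum())
```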

martinw
Posts: 9
Joined: Mon Dec 29, 2014 3:17 pm

Re: Lightweight python motion detection

Mon Dec 29, 2014 3:24 pm

Hi :) This is my first post here :)

Thankyou guys for sharing your hard work! I'm a new Pi user and wanted to make a simple security system setup. I've used the last piece of code posted by killagreg and it works great!

I'm new to programming and am having difficulty figuring out the next step; if anyone could point me in the right direction that'd be awesome.

I simply want to stream the feed at the same time as running the motion detection script, but I'm not really sure where to go from here.

I can set up a stream on its own OK, but I do not know how to integrate the streaming/receiving process with the motion detection script.

If anyone had any pointers for me I would be very grateful.

Thanks :)

killagreg
Posts: 12
Joined: Wed Dec 17, 2014 10:15 pm

Re: Lightweight python motion detection

Mon Dec 29, 2014 6:07 pm

What kind of streaming do you want? An mjpg stream embedded in an HTML page or a pure h264 stream?

martinw
Posts: 9
Joined: Mon Dec 29, 2014 3:17 pm

Re: Lightweight python motion detection

Mon Dec 29, 2014 6:12 pm

killagreg wrote:What kind of streaming do you want? An mjpg stream embedded in an HTML page or a pure h264 stream?
Thanks :)

I would be happy with either right now. If I could choose, then an embedded webpage version would be awesome.

I've tried to integrate the code from the picamera docs
http://picamera.readthedocs.org/en/rele ... o-a-stream

But I just do it wrong :(

Any help is greatly appreciated :)

killagreg
Posts: 12
Joined: Wed Dec 17, 2014 10:15 pm

Re: Lightweight python motion detection

Mon Dec 29, 2014 10:31 pm

First install vlc on Pi.
sudo apt-get install vlc

Then modify the python script by adding the line

Code: Select all

import sys
at the start of the script and insert

Code: Select all

camera.start_recording(sys.stdout, splitter_port=3, format='h264', resize=(640, 480) )
to output the h264 data stream to standard out.

Run the python script from shell using

python script.py | cvlc -vvv stream:///dev/stdin --sout '#standard{access=http,mux=ts,dst=:8160}' :demux=h264

You can receive the stream on the client side using vlc player at http://raspi-ip:8160/

Regards Greg

martinw
Posts: 9
Joined: Mon Dec 29, 2014 3:17 pm

Re: Lightweight python motion detection

Mon Dec 29, 2014 11:24 pm

Thankyou :)

I tried as you suggested but it did not work :(

I placed the extra lines you posted in what I think are the correct places: the import at the top with the other import lines, and the other code as below

Code: Select all

   # setup a circular buffer
   stream = picamera.PiCameraCircularIO(camera, seconds = video_preseconds)
   # 1. split the hi resolution video recording into circular buffer from splitter port 1
   camera.start_recording(stream, format='h264', splitter_port=1)
   #camera.start_recording('test.h264', splitter_port=1)
   # 2. split the low resolution motion vector analysis from splitter port 2, throw away the actual video
   camera.start_recording('/dev/null', splitter_port=2, resize=(motion_width,motion_height) ,format='h264', motion_output=MyMotionDetector(camera, size=(motion_width,motion_height)))
  # 3. stream to vlc
   camera.start_recording(sys.stdout, splitter_port=3, format='h264', resize=(640, 480) )
   # wait some seconds for stable video data to be available
   camera.wait_recording(2, splitter_port=1)
   motion_detected = False
   logger.info('pymotiondetector has been started')

   #print "Motion Capture ready - Waiting for motion"
   logger.info('OK. Waiting for first motion to be detected')
Once I run the script as you have posted, I get a whole lot of output in the console that usually is not there when running the motion script before adding this new code;

Code: Select all

c52508] mux_ts mux debug: adjusting rate at 40000/408174 (44/0)
[0x1c52508] mux_ts mux debug: adjusting rate at 0/384048 (118/24)
[0x1c52508] mux_ts mux debug: adjusting rate at 40000/384048 (24/0)
[0x1c52508] mux_ts mux debug: adjusting rate at 80000/394056 (66/0)
[0x1c52508] mux_ts mux debug: adjusting rate at 40000/425986 (41/0)
[0x1c52508] mux_ts mux debug: adjusting rate at 40000/392514 (43/0)
[0x1c52508] mux_ts mux debug: adjusting rate at 40000/370334 (44/0)
[0x1c52508] mux_ts mux debug: adjusting rate at 80000/383507 (63/0)
[0x1c52508] mux_ts mux debug: adjusting rate at 40000/334855 (43/0)
[0x1c52508] mux_ts mux debug: adjusting rate at 40000/326859 (41/0)
[0x1c52508] mux_ts mux debug: adjusting rate at 40000/317228 (45/0)
[0x1c52508] mux_ts mux debug: adjusting rate at 40000/345559 (49/0)
[0x1c52508] main mux warning: late buffer for mux input (31222)
[0x1c52508] main mux warning: late buffer for mux input (229)
[0x1c4da50] main input error: ES_OUT_SET_(GROUP_)PCR  is called too late (jitter of 16788 ms ignored)
[0x1c4da50] main input error: ES_OUT_RESET_PCR called
[0x1c4da50] main input debug: Buffering 0%
[0x1c4da50] main input debug: Buffering 0%
[0x1c4da50] main input debug: Buffering 1%
[... repeated "Buffering" progress lines trimmed ...]
[0x1c4da50] main input debug: Buffering 98%
[0x1c4da50] main input debug: Buffering 99%
[0x1c4da50] main input debug: Stream buffering done (4440 ms in 21875 ms)
[0x1c4da50] main input debug: Decoder buffering done in 0 ms
[0x1c52508] main mux warning: late buffer for mux input (30531)
[0x1c52508] mux_ts mux warning: packet with too strange dts (dts=1269843595,old=1247228240,pcr=1247228240)
[0x1c52508] mux_ts mux debug: adjusting rate at 40000/393058 (42/0)
[0x1c52508] mux_ts mux debug: adjusting rate at 80000/390952 (62/0)
[0x1c52508] mux_ts mux debug: adjusting rate at 0/558799 (22/20)
[0x1c52508] mux_ts mux debug: adjusting rate at 40000/558799 (20/0)
[0x1c52508] mux_ts mux debug: adjusting rate at 80000/400291 (159/0)
[0x1c52508] mux_ts mux debug: adjusting rate at 0/377110 (27/0)
[0x1c52508] mux_ts mux debug: adjusting rate at 40000/366229 (42/0)
[0x1c52508] mux_ts mux debug: adjusting rate at 40000/375663 (40/0)
[0x1c52508] mux_ts mux debug: adjusting rate at 40000/380623 (39/0)
[0x1c52508] mux_ts mux debug: adjusting rate at 40000/381908 (39/0)
[0x1c52508] mux_ts mux debug: adjusting rate at 40000/381766 (42/0)
[0x1c52508] mux_ts mux debug: adjusting rate at 40000/378505 (44/0)
^C[0x1bbd8f0] main libvlc debug: deactivating the playlist
[0x1d2af28] main playlist debug: deactivating the playlist
[0x1d2af28] main playlist debug: incoming request - stopping current input
[0x1c53058] main access debug: waitpipe: object killed
[0x1d2af28] main playlist debug: dying input
[0x1c53058] main access debug: socket 6 polling interrupted
[0x1c4da50] main input debug: control: stopping input
[0x1d2af28] main playlist debug: dying input
[0x1c53788] main decoder debug: removing module "packetizer_h264"
[0x1c53788] main decoder debug: killing decoder fourcc `h264', 0 PES in FIFO
[0x1c4fb90] main stream output debug: removing a sout input (sout_input:0x1be1e10)
[0x1c52508] mux_ts mux debug: removing input pid=68
[0x1c52508] mux_ts mux debug: new PCR PID is 8191
[0x1c52508] main mux warning: no more input streams for this mux
[0x1c50938] main demux debug: removing module "h264"
[0x1c50ee8] main demux packetizer debug: removing module "packetizer_h264"
[0x1c53220] main stream debug: removing module "stream_filter_record"
[0x1c53058] main access debug: removing module "filesystem"
[0x1c4da50] main input debug: Program doesn't contain anymore ES
[0x1d2af28] main playlist debug: dead input
[0x1c4fb90] main stream output debug: destroying useless sout
[0x1c4fdc0] main stream out debug: destroying chain... (name=standard)
[0x1c4fdc0] main stream out debug: removing module "stream_out_standard"
[0x1c52508] main mux debug: removing module "mux_ts"
[0x1c51730] main access out debug: removing module "access_output_http"
[0x1c52278] main http host debug: waitpipe: object killed
[0x1c52278] main http host debug: HTTP host removed
[0x1c51730] access_output_http access out debug: Close
[0x1c4fdc0] main stream out debug: destroying chain done
[0x1c4fdc0] main playlist export debug: saving Media Library to file /home/pi/.local/share/vlc/ml.xspf
[0x1c4fdc0] main playlist export debug: looking for playlist export module: 1 candidate
[0x1c4fdc0] main playlist export debug: using playlist export module "export"
[0x1c4fdc0] main playlist export debug: TIMER module_need() : 10.160 ms - Total 10.160 ms / 1 intvls (Avg 10.160 ms)
[0x1c4fdc0] main playlist export debug: removing module "export"
[0x1d2af28] main playlist debug: playlist correctly deactivated
[0x1bbd8f0] main libvlc debug: removing all services discovery tasks
[0x1bbd8f0] main libvlc debug: removing all interfaces
[0x1bbd8f0] main libvlc debug: exiting
[0x1bd06d0] main interface debug: removing module "dummy"
[0x1c58128] main interface debug: removing module "globalhotkeys"
[0x1bddff8] main interface debug: removing module "inhibit"
[0x1c4da50] main input debug: TIMER input launching for 'stdin' : 1944.509 ms - Total 1944.509 ms / 1 intvls (Avg 1944.509 ms)
[0x1c4c118] main interface debug: removing module "hotkeys"
[0x1d2af28] main playlist debug: destroying
[0x1bbd8f0] main libvlc debug: TIMER ML Load : Total 374.377 ms / 1 intvls (Avg 374.377 ms)
[0x1bbd8f0] main libvlc debug: TIMER Items array build : Total 17.225 ms / 2 intvls (Avg 8.613 ms)
[0x1bbd8f0] main libvlc debug: TIMER Preparse run : Total 15.176 ms / 1 intvls (Avg 15.176 ms)
[0x1bbd8f0] main libvlc debug: TIMER ML Dump : Total 16.250 ms / 1 intvls (Avg 16.250 ms)
[0x1bbd8f0] main libvlc debug: removing stats
[0x1bbd8f0] main libvlc debug: removing module "memcpy"
Traceback (most recent call last):
  File "motionstream.py", line 311, in <module>
    camera.annotate_text = dt.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
  File "/usr/lib/python2.7/dist-packages/picamera/camera.py", line 3434, in _set_annotate_text
    mmal.mmal_port_parameter_get(self._camera[0].control, mp.hdr),
KeyboardInterrupt
pi@raspberrypi ~/Desktop $ ^C
pi@raspberrypi ~/Desktop $
It basically continues like that until I close the program. When trying to view the stream on the client side, vlc shows nothing.


Have I missed something?


Thanks for your help :)
Last edited by martinw on Mon Dec 29, 2014 11:39 pm, edited 1 time in total.

martinw
Posts: 9
Joined: Mon Dec 29, 2014 3:17 pm

Re: Lightweight python motion detection

Mon Dec 29, 2014 11:38 pm

EDIT

I don't know why it did not work before, but it is working now!

I guess I must have had an open port or some other clash in VLC, but after rebooting my pc and the pi and trying one more time, it works fine!

Thanks Killagreg for your assistance :)

now I need to figure out a few more tasks, such as saving the motion videos to some other location online (I saw a thread about doing this) and then setting up the dns and port forwarding to enable viewing from outside the local network.

I think I should be able to figure out how to add those tasks to your script without too much trouble.

Thanks again for your help :)

hydra3333
Posts: 146
Joined: Thu Jan 10, 2013 11:48 pm

Re: Lightweight python motion detection

Tue Dec 30, 2014 10:30 am

thank you once again killagreg. nice code.
I've commented a variation of your code, so when I come back to it later I can nearly understand what goes on.
Subject to testing :-
edit 1: code broken and removed
edit 2: code fixed. It still takes 10 seconds for the "SPLIT", even with specified GOP size, I wonder what else I can try, maybe increase framerate ?
edit 3: code fixed. the "intra_period=video_intra_period" was on the motion vectors "start_recording" port whereas it should have been on the actual video capture port. Now to try monitoring cpu and upping framerates - if I can figure out how to start this python script upon boot ...
edit 4: upped the framerate a bit and logged the settings. seems to work so far.

Code: Select all

#!/usr/bin/env python
# -*- coding: utf-8 -*-

# to fix scripts, turning them from MS-DOS format to unix format
# to get rid of MSDOS format do this to this file: sudo sed -i s/\\r//g ./filename

# This script was originally created by killagreg » Thu Dec 18, 2014  7:53 am
# and updated                         by killagreg » Fri Dec 19, 2014  7:09 pm
# and                                 by killagreg » Sun Dec 28, 2014 10:29 pm
# see   http://www.raspberrypi.org/forums/viewtopic.php?p=656881#p656881  onwards
# eg    http://www.raspberrypi.org/forums/viewtopic.php?p=660806#p660806
#
# This script implements a motion capture surveillance cam for raspberry pi using picam.
# It uses the "motion vectors" magnitude of the h264 hw-encoder to detect motion activity.
# At the time of motion detection a jpg snapshot is saved together with a h264 video stream
# some seconds before, during and after motion activity to the 'filepath' directory.
#
# APPARENTLY INSPIRED BY PICAMERA 1.8 TECHNIQUES documented at
# http://picamera.readthedocs.org/en/release-1.8/recipes2.html#rapid-capture-and-processing
# where the PICAMERA code uses efficient underlying mmal access and numpy code
#
# Modifications:
# 2014.12.26 
#    - modified slightly for the boundary case of no motion detection "windows" - avoid performing the masking step
# 2014.12.28 (hey "killagreg", really nice updates)
#    - incorporate latest changes by killagreg over christmas 2014
#         from http://www.raspberrypi.org/forums/viewtopic.php?p=660572#p660572
#    - added/changed "mp4 mode" to be optional and not the default (also added some MP4box flags)
#    - repositioned a small bit of code to avoid a possible "initial conditions" bug
#    - modified (webcam like) snapshot capture interval processing slightly
#    - added extra logging
#    - made use of localtime instead of GMT, for use in filenames
#    - removed "print" commands and instead rely on logging 
#    - added circular file logging and specified the path of the log file
# 2014.12.30 (thanks "killagreg", really really nice updates)
#    - use a threading "event" to check for motion detection
#    -  use a variation of killagreg's code to shovel off the 
#      post-capture video processing into separate (hopefully asynchronous) threads
#    - removed snapshot_capture_interval, replaced by combo of threading event timeout and a flag 
#    - added inline_headers=True to facilitate streaming into the future
#    - added intra_period=5 to see if we can reduce long time taken to SPLIT streams due to waiting for I frames 
#
# notes:
#    1. it hopefully no longer uses 100% cpu looping around infinitely in a "while true" condition until motion is detected
#    2. if not mp4_mode then the output video streams files are raw h264, 
#       and NOT repeat NOT mpeg4 video files, so you will need to convert them to .mp4 yourself
#    3. to prepare for using this python script (yes, yes, 777, roll your own if you object)
#        sudo apt-get install -y rpi-update
#        sudo rpi-update
#        sudo apt-get update 
#        sudo apt-get upgrade
#        sudo apt-get install python-picamera python-imaging-tk gpac -y
#        sudo mkdir /var/pymotiondetector
#        sudo chmod 777 /var/pymotiondetector
#
# licensing: 
#    this being a derivative, whatever killagreg had (acknowledging killagreg code looks to be substantially from examples in
#    the picamera documentation http://picamera.readthedocs.org/en/release-1.8/recipes2.html#rapid-capture-and-processing
#    i.e. free for any and all use I guess
#
# Example to separately and externally convert h264 files to mp4, on the Pi (using MP4box from gpac)
#   sudo MP4Box -fps <use capture framerate> -add raw_video.h264 -isma -new wrapped_video.mp4
#
# Example to separately and externally convert h264 files to mp4, on Windows
#   "C:\ffmpeg\bin\ffmpeg.exe" -f h264 -r <use capture framerate> -i "raw_video.h264" -c:v copy -an -movflags +faststart -y "wrapped_video.mp4"
#   REM if necessary add   -bsf:v h264_mp4toannexb   before "-r" 
# or
#   "C:\MP4box\MP4Box.exe" -fps <use capture framerate> -add "raw_video.h264" -isma -new "wrapped_video.mp4"
#
import os
import logging
import logging.handlers
import threading
import subprocess
import io
import picamera
import picamera.array
import numpy as np
import datetime as dt
import time
from PIL import Image

# ----------------------------------------------------------------------------------------------------------------
# in this section are parameters you can fiddle with

#debug mode? dumps extra debug info 
debug = False  # False

# define a timeout for the "event" waiting for motion to be detected, 
# so that other processing can occur when a timeout occurs, eg jpeg snapshots
motion_event_wait_timeout = 300 # seconds 

# mp4 mode ?
# if we set mp4_mode, 
# then the h264 files are converted to an mp4 when the motion capture is completed, using MP4box (part of gpac)
# warning, warning, danger Will Robinson ...
# mp4 mode consumes a lot of CPU and elapsed time, and it is almost certain that we will *lose frames*
# after a detection has finished if a new movement occurs during that conversion.
# For that reason I don't use mp4 mode
# and instead use the original quicker "cat", and separately convert the .h264 files later if I want to.
mp4_mode = False

#setup filepath for motion capture data (which is in raw h264 format) plus the start-of-motion jpeg.
# sudo mkdir /var/pymotiondetector
# sudo chmod 777 /var/pymotiondetector
filepath = '/var/pymotiondetector'
logger_filename = filepath + '/pymotiondetector.log'
#logger_filename = 'pymotiondetector.log'

# setup pre and post video recording around motion events
video_preseconds = 5    # minimum 1
video_postseconds = 10  # minimum 1

# setup the main video/snapshot camera resolution
# see this link for a full discussion on how to choose a valid resolution that will work
# http://picamera.readthedocs.org/en/latest/fov.html
video_width = 640
#video_width = 1280
video_height = 480
#video_height = 720

# setup the camera video framerate, PAL is 25, let's go for 10 instead
#video_framerate = 25
video_framerate = 10

# setup the camera h264 GOP size (each I frame and its subsequent P and B friends)
# the higher this value
#  - the more actual video we "lose" whilst delayed waiting for "splitting" to finish (it seeks the next I frame)
#  - the smaller the h264 filesize, since more (and smaller) P and B frames are used between I frames
video_intra_period = 5

#setup video rotation (0, 90, 180, 270)
video_rotation = 0 

# setup the camera to perform video stabilization
video_stabilization = True

# setup the camera to put a black background on the annotation (in our case, for date/time)
#video_annotate_background = True
video_annotate_background = False

# setup the camera to put frame number in the annotation
video_annotate_frame_num = True

# we could setup a webcam mode, to capture images on a regular interval in between motion recordings
# setup jpeg capture snapshot flag and filename prefix
perform_snapshot_capture = False 
snapshot_capture_filename = "snapshot"

#--- now for the motion detection parameters
# define motion detection video resolution, equal or smaller than capture video resolution
# smaller = less cpu needed thus "better" and less likely to lose frames etc
motion_width  = 320 #640
motion_height = 240 #480

# setup motion detection threshold, i.e. magnitude of a motion block to count as motion
motion_threshold = 30
#motion_threshold = 30
# setup motion detection sensitivity, i.e number of motion blocks that trigger a motion detection
#motion_sensitivity = 10
motion_sensitivity = 6

# Regions Of Interest (ROI) define areas within which the motion analysis is done, inside the smaller "motion detection video resolution"
# ie define areas within the motion analysis picture that are used for motion analysis
#    [ [[start pixel on left side,end pixel on right side],[start pixel on top side,stop pixel on bottom side]] ]
#
# default to no motion masking, ("0")
# ie use the "whole image frame" of the lower-resolution-capture "motion vectors"
# and avoid the CPU/memory overheads of doing the masking
motion_roi_count = 0
# this is the whole "motion detection image frame"
#motion_roi_count = 1
#motion_roi = [ [[1,motion_width],[1,motion_height]] ]
# another example, one motion detection mask area
#motion_roi_count = 1
#motion_roi = [ [[270,370],[190,290]]  ]
# example for 2 mask areas
#motion_roi_count = 2
#motion_roi = [ [[1,320],[1,240]], [[400,500],[300,400]] ]

# ----------------------------------------------------------------------------------------------------------------

# do not change code below the line
#-----------------------------------

# define an event used to set/clear/check whether motion was detected or not-detected 
# with any luck, this means that we won't use 100% cpu looping around 
# inside a WHILE TRUE loop just constantly checking for a true/false condition 
motion_event = threading.Event()
motion_event.clear()
#motion_detected = False # changed this to a threading model (2 lines above)

motion_timestamp = time.time()
snapshot_capture_timestamp = time.time()

motion_cols = (motion_width  + 15) // 16 + 1
motion_rows = (motion_height + 15) // 16
if (motion_roi_count > 0) or (debug):
    motion_array = np.zeros((motion_rows, motion_cols), dtype = np.uint8)

# create a zero "AND" motion mask of masked areas 
# and then fill 1's into the mask areas of interest which we specified above
if motion_roi_count > 0:
    motion_array_mask = np.zeros((motion_rows, motion_cols), dtype = np.uint8)
    for count in xrange(0, motion_roi_count):
        for col in xrange( (motion_roi[count][0][0]-1)//16, (motion_roi[count][0][1]-1+15)//16 ):
            for row in xrange( (motion_roi[count][1][0]-1)//16, (motion_roi[count][1][1]-1+15)//16 ):
                motion_array_mask[row][col] = 1

# callback handler for motion output data from the h264 hw encoder
# this processes the motion vectors from the low resolution split capture
class MyMotionDetector(picamera.array.PiMotionAnalysis):
   def analyse(self, a):
#      global motion_detected, motion_timestamp, motion_array, motion_array_mask, motion_roi_count
      global motion_event, motion_timestamp, motion_array, motion_array_mask, motion_roi_count
      # calculate length of motion vectors of mpeg macro blocks
      a = np.sqrt(
          np.square(a['x'].astype(np.float)) +
          np.square(a['y'].astype(np.float))
          ).clip(0, 255).astype(np.uint8)
# zero out (mask out) anything outside our specified areas of interest, if we have a mask
      if motion_roi_count > 0:
          a = a * motion_array_mask
      # If there're more than 'sensitivity' vectors with a magnitude greater
      # than 'threshold', then say we've detected motion
      th = ((a > motion_threshold).sum() > motion_sensitivity)
      now = time.time()
# by now ...
# th               = motion detected on current frame
# motion_timestamp = the last time when motion was detected in a frame (start of time window)
# motion_event     = whether the motion detection time window is currently triggered
#                  = is only cleared if motion has previously been detected
#                    and both "no motion detected" and its time window have expired
      # motion logic, trigger on motion and stop after video_postseconds seconds of inactivity
      if th:
          motion_timestamp = now

      #if motion_detected:
      if motion_event.is_set():
          if (now - motion_timestamp) >= video_postseconds:
              #motion_detected = False
              motion_event.clear()  
      else:
          if th:
              #motion_detected = True
              motion_event.set()
          if debug:
              idx = a > motion_threshold
              a[idx] = 255
              motion_array = a

def write_video(stream):
# Write the entire content of the circular buffer to disk. No need to
# lock the stream here as we're definitely not writing to it
# simultaneously
     global motion_filename

     with io.open(motion_filename + '-before.h264', 'wb') as output:
         for frame in stream.frames:
             if frame.frame_type == picamera.PiVideoFrameType.sps_header:
                 stream.seek(frame.position)
                 break
         while True:
             buf = stream.read1()
             if not buf:
                 break
             output.write(buf)
     # Wipe the circular stream once we're done
     stream.seek(0)
     stream.truncate()
     
#-------------------------------------------------------------------------------------
# use asynchronous threads to perform the video processing after capture has completed 

class myThreadCAT (threading.Thread):
    def __init__(self, threadID, fps, name, jcmd, counter):
        threading.Thread.__init__(self)
        self.threadID = threadID
        self.fps = fps
        self.name = name
        self.jcmd = jcmd
        self.counter = counter
    def run(self):
        #subprocess.call("cat %s %s > %s && rm -f %s && rm -f %s" % (self.name + "-before.h264", self.name + "-after.h264", self.name + ".h264", self.name + "-before.h264", self.name + "-after.h264"), shell=True)
        subprocess.call(self.jcmd, shell=True)

class MyThreadMP4box (threading.Thread):
    def __init__(self, threadID, fps, name, jcmd, counter):
        threading.Thread.__init__(self)
        self.threadID = threadID
        self.fps = fps
        self.name = name
        self.jcmd = jcmd
        self.counter = counter
    def run(self):
        #subprocess.call("MP4Box -fps %d -cat %s -cat %s -isma -new %s && rm -f %s && rm -f %s" % (self.fps, self.name + "-before.h264", self.name + "-after.h264", self.name + ".mp4", self.name + "-before.h264", self.name + "-after.h264"), shell=True)
        subprocess.call(self.jcmd, shell=True)
  
#---------------------
# MAIN CODE START HERE
#---------------------

# create a rotating file logger
logger = logging.getLogger('pymotiondetector')
logger.setLevel(logging.DEBUG)
# create file handler which logs even debug messages
#fh = logging.FileHandler('pymotiondetector.log')
fh = logging.handlers.RotatingFileHandler(logger_filename, mode='a', maxBytes=(1024*1000 * 2), backupCount=5, delay=0)
fh.setLevel(logging.DEBUG)
# create formatter and add it to the handlers
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
fh.setFormatter(formatter)
# add the handlers to the logger
logger.addHandler(fh)
logger.info('----------------------------------------')
logger.info('pymotiondetector v12.07 has been started')
logger.info('----------------------------------------')
msg = "Video capture filepath: %s" % (filepath)
logger.info(msg)
msg = "Capture videos with %d x %d resolution" % (video_width, video_height)
logger.info(msg)
msg = "Analyse motion vectors from a %d x %d resolution split" % (motion_width, motion_height)
logger.info(msg)
msg = "  resulting in %d x %d motion blocks" % (motion_cols, motion_rows)
logger.info(msg)
msg = "Analyse motion vectors threshold , sensitivity: %d , %d" % (motion_threshold, motion_sensitivity)
logger.info(msg)
msg = "Regions of Interest: %d" % (motion_roi_count)
logger.info(msg)
msg = "Framerate: %d fps" % (video_framerate)
logger.info(msg)
msg = "video_intra_period: %d" % (video_intra_period)
logger.info(msg)
msg = "Rotation: %d degrees" % (video_rotation)
logger.info(msg)
msg = "Stabilization: %r" % (video_stabilization)
logger.info(msg)
msg = "Annotate background: %r" % (video_annotate_background)
logger.info(msg)
msg = "Annotate frame_num: %r" % (video_annotate_frame_num)
logger.info(msg)
msg = "Video detection event capture before-seconds , after-seconds: %d , %d" % (video_preseconds, video_postseconds)
logger.info(msg)
msg = "motion_event_wait_timeout: %s" % (motion_event_wait_timeout)
logger.info(msg)
msg = "perform_snapshot_capture: %r" % (perform_snapshot_capture)
logger.info(msg)
msg = "snapshot_capture_filename: %s" % (snapshot_capture_filename)
logger.info(msg)
msg = "logger_filename: %s" % (logger_filename)
logger.info(msg)

with picamera.PiCamera() as camera:
   camera.resolution = (video_width, video_height)
   camera.framerate = video_framerate
   camera.rotation = video_rotation
   camera.video_stabilization = video_stabilization
   camera.annotate_background = video_annotate_background
   camera.annotate_frame_num = video_annotate_frame_num

   # setup a circular buffer
   stream = picamera.PiCameraCircularIO(camera, seconds = video_preseconds)
   # 1. split the hi resolution video recording into circular buffer from splitter port 1
   camera.start_recording(stream, format='h264', splitter_port=1, inline_headers=True, intra_period=video_intra_period)
   #camera.start_recording('test.h264', splitter_port=1)
   # 2. split the low resolution motion vector analysis from splitter port 2, throw away the actual video
   camera.start_recording('/dev/null', splitter_port=2, resize=(motion_width,motion_height) ,format='h264', motion_output=MyMotionDetector(camera, size=(motion_width,motion_height)))
   # wait some seconds for stable video data to be available
   camera.wait_recording(2, splitter_port=1)
   #motion_detected = False
   motion_event.clear()
   joiner_thread_id = 0
   logger.info('pymotiondetector has been started')
   #print "Motion Capture ready - Waiting for motion"
   logger.info('OK. Waiting for first motion to be detected')

   try:
       while True:
          # the callback "MyMotionDetector" has been setup above using the low resolution split
          # original code "while true" above ... loop around as fast as we can go until motion is detected ... thus 100 percent cpu ?
          # a motion event must trigger this action here
          camera.annotate_text = dt.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
          # hmm, do an event wait with "motion_event_wait_timeout" seconds before timing out. It returns True if the event was set, false if not.
          #if motion_detected:
          msg = "Entering event wait state awaiting next motion detection by class MyMotionDetector ..."
          logger.info(msg)
          if motion_event.wait(motion_event_wait_timeout):
             #print "Motion detected: " , dt.datetime.now()
             logger.info('detected motion')
            #motion_filename = filepath + "/" + time.strftime("%Y%m%d-%H%M%S", time.gmtime(motion_timestamp))
             motion_filename = filepath + "/" + time.strftime("%Y%m%d-%H%M%S", time.localtime(motion_timestamp))
             # split  the high res video stream to a file instead of to the internal circular buffer
             logger.info('splitting video from circular IO buffer to after-motion-detected h264 file ')
             camera.split_recording(motion_filename + '-after.h264', splitter_port=1)
             # catch an image as video preview during video recording (uses splitter port 0) at time of the motion event
             msg = "started  capture jpeg image file %s" % (motion_filename + ".jpg")
             logger.info(msg)
             camera.capture_sequence([motion_filename + '.jpg'], use_video_port=True, splitter_port=0)
             msg = "finished capture jpeg image file %s" % (motion_filename + ".jpg")
             logger.info(msg)
             # if we want to see debug motion stuff, dump motion array as a png image
             if debug:
                 logger.info('saving debug motion vectors')
                 img = Image.fromarray(motion_array)
                 img.save(motion_filename + "-motion.png")
             # save circular buffer containing "before motion" event video, ie write it to a file
             logger.info('started  saving before-motion circular buffer')
             write_video(stream)
             logger.info('finished saving before-motion circular IO buffer')
             #---- wait for the end of motion event here, in one second increments
             logger.info('start waiting to detect end of motion')
             #while motion_detected stay inside the loop below, recording 
             while motion_event.is_set():
                camera.annotate_text = dt.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
                camera.wait_recording(1, splitter_port=1)
             #---- end of motion event detected
             logger.info('detected end of motion')
             #split video recording back in to circular buffer
             logger.info('splitting video back into the circular IO buffer')
             camera.split_recording(stream, splitter_port=1)
             if mp4_mode:
                  joiner_thread_id = joiner_thread_id + 1
                  msg = "starting new thread %d copying h264 into mp4 file %s" % (joiner_thread_id, motion_filename + ".mp4")
                  logger.info(msg)
                  joiner_cmd = "MP4Box -fps %d -cat %s -cat %s -isma -new %s && rm -f %s && rm -f %s" % (video_framerate, motion_filename + "-before.h264", motion_filename + "-after.h264", motion_filename + ".mp4", motion_filename + "-before.h264", motion_filename + "-after.h264")
                  logger.info(joiner_cmd)
                  # note: the class is MyThreadMP4box (capital M) - Python names are case-sensitive
                  joiner_thread = MyThreadMP4box(joiner_thread_id, video_framerate, motion_filename, joiner_cmd, joiner_thread_id)
                  joiner_thread.start()
                  msg = "after invoking thread copying h264 into mp4 file %s" % (motion_filename + ".mp4")
                  logger.info(msg)
             else:
                 joiner_thread_id = joiner_thread_id + 1
                 msg = "starting new thread %d concatenating h264 files into %s" % (joiner_thread_id, motion_filename + ".h264")
                 logger.info(msg)
                 joiner_cmd = "cat %s %s > %s && rm -f %s && rm -f %s" % (motion_filename + "-before.h264", motion_filename + "-after.h264", motion_filename + ".h264", motion_filename + "-before.h264", motion_filename + "-after.h264")
                 logger.info(joiner_cmd)
                 joiner_thread = myThreadCAT(joiner_thread_id, video_framerate, motion_filename, joiner_cmd, joiner_thread_id) 
                 joiner_thread.start()
                 msg = "after invoking thread concatenating h264 files into %s" % (motion_filename + ".h264")
                 logger.info(msg)
             msg = "Finished capture processing."
             logger.info(msg)
             snapshot_capture_timestamp = time.time()
          else:
             # no motion detected or in progress
             logger.info("Motion detector timed out")
             # if webcam mode, capture images on the regular timeout interval
             if perform_snapshot_capture:
                  #snapf = filepath + "/" + snapshot_capture_filename + "-" + time.strftime("%Y%m%d-%H%M%S", time.gmtime(motion_timestamp))
                  # use the current time (not motion_timestamp) so successive snapshots get distinct filenames
                  snapf = filepath + "/" + snapshot_capture_filename + "-" + time.strftime("%Y%m%d-%H%M%S", time.localtime())
                  camera.capture_sequence([snapf + ".jpg"], use_video_port=True, splitter_port=0)
                  snapshot_capture_timestamp = time.time()
                  logger.info("Captured snapshot")

   finally:
       camera.stop_recording(splitter_port=1)
       camera.stop_recording(splitter_port=2)
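
As a sanity check on the macroblock-grid arithmetic near the top of the script (the picamera motion array is one column wider than the macroblock grid), the default 320x240 detection split works out as:

```python
# worked example, assuming the default 320x240 detection split above
motion_width, motion_height = 320, 240
motion_cols = (motion_width + 15) // 16 + 1   # +1: picamera includes a spare column
motion_rows = (motion_height + 15) // 16
print(motion_cols, motion_rows)  # 21 15
```

which matches the "21 x 15 motion blocks" line in the log below.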

Code: Select all

2014-12-30 23:28:22,458 - pymotiondetector - INFO - ----------------------------------------
2014-12-30 23:28:22,463 - pymotiondetector - INFO - pymotiondetector v12.07 has been started
2014-12-30 23:28:22,465 - pymotiondetector - INFO - ----------------------------------------
2014-12-30 23:28:22,468 - pymotiondetector - INFO - Video capture filepath: /var/pymotiondetector
2014-12-30 23:28:22,470 - pymotiondetector - INFO - Capture videos with 640 x 480 resolution
2014-12-30 23:28:22,473 - pymotiondetector - INFO - Analyse motion vectors from a 320 x 240 resolution split
2014-12-30 23:28:22,475 - pymotiondetector - INFO -   resulting in 21 x 15 motion blocks
2014-12-30 23:28:22,478 - pymotiondetector - INFO - Analyse motion vectors threshold , sensitivity: 30 , 6
2014-12-30 23:28:22,481 - pymotiondetector - INFO - Regions of Interest: 0
2014-12-30 23:28:22,483 - pymotiondetector - INFO - Framerate: 10 fps
2014-12-30 23:28:22,486 - pymotiondetector - INFO - video_intra_period: 5
2014-12-30 23:28:22,488 - pymotiondetector - INFO - Rotation: 0 degrees
2014-12-30 23:28:22,491 - pymotiondetector - INFO - Stabilization: True
2014-12-30 23:28:22,493 - pymotiondetector - INFO - Annotate background: False
2014-12-30 23:28:22,495 - pymotiondetector - INFO - Annotate frame_num: True
2014-12-30 23:28:22,498 - pymotiondetector - INFO - Video detection event capture before-seconds , after-seconds: 5 , 10
2014-12-30 23:28:22,501 - pymotiondetector - INFO - motion_event_wait_timeout: 300
2014-12-30 23:28:22,503 - pymotiondetector - INFO - perform_snapshot_capture: False
2014-12-30 23:28:22,506 - pymotiondetector - INFO - snapshot_capture_filename: snapshot
2014-12-30 23:28:22,508 - pymotiondetector - INFO - logger_filename: /var/pymotiondetector/pymotiondetector.log
2014-12-30 23:28:25,361 - pymotiondetector - INFO - pymotiondetector has been started
2014-12-30 23:28:25,364 - pymotiondetector - INFO - OK. Waiting for first motion to be detected
2014-12-30 23:28:25,372 - pymotiondetector - INFO - Entering event wait state awaiting next motion detection by class MyMotionDetector ...
2014-12-30 23:29:46,807 - pymotiondetector - INFO - detected motion
2014-12-30 23:29:46,810 - pymotiondetector - INFO - splitting video from circular IO buffer to after-motion-detected h264 file 
2014-12-30 23:29:46,879 - pymotiondetector - INFO - started  capture jpeg image file /var/pymotiondetector/20141230-232946.jpg
2014-12-30 23:29:46,994 - pymotiondetector - INFO - finished capture jpeg image file /var/pymotiondetector/20141230-232946.jpg
2014-12-30 23:29:46,997 - pymotiondetector - INFO - started  saving before-motion circular buffer
2014-12-30 23:29:48,283 - pymotiondetector - INFO - finished saving before-motion circular IO buffer
2014-12-30 23:29:48,285 - pymotiondetector - INFO - start waiting to detect end of motion
2014-12-30 23:30:01,324 - pymotiondetector - INFO - detected end of motion
2014-12-30 23:30:01,327 - pymotiondetector - INFO - splitting video back into the circular IO buffer
2014-12-30 23:30:01,700 - pymotiondetector - INFO - starting new thread 1 concatenating h264 files into /var/pymotiondetector/20141230-232946.h264
2014-12-30 23:30:01,702 - pymotiondetector - INFO - cat /var/pymotiondetector/20141230-232946-before.h264 /var/pymotiondetector/20141230-232946-after.h264 > /var/pymotiondetector/20141230-232946.h264 && rm -f /var/pymotiondetector/20141230-232946-before.h264 && rm -f /var/pymotiondetector/20141230-232946-after.h264
2014-12-30 23:30:01,707 - pymotiondetector - INFO - after invoking thread concatenating h264 files into /var/pymotiondetector/20141230-232946.h264
2014-12-30 23:30:01,710 - pymotiondetector - INFO - Finished capture processing.
2014-12-30 23:30:01,735 - pymotiondetector - INFO - Entering event wait state awaiting next motion detection by class MyMotionDetector ...

martinw
Posts: 9
Joined: Mon Dec 29, 2014 3:17 pm

Re: Lightweight python motion detection

Tue Dec 30, 2014 2:28 pm

Hey Hydra :) Thanks for adding the comments, I've got a lot to learn so it is helpful for me indeed :)

I have managed to get this script running on bootup following this simple instructables today -> http://www.instructables.com/id/Raspber ... n-startup/

however it is a very basic method and I need to do something like "sudo killall python" to stop the script,

I am going to try this method later on, to allow using a start-stop daemon; I think it is better like that

http://blog.scphillips.com/2013/07/gett ... e-on-boot/

killagreg
Posts: 12
Joined: Wed Dec 17, 2014 10:15 pm

Re: Lightweight python motion detection

Tue Dec 30, 2014 10:23 pm

I am using this shell script named picam.sh to start my python motion application named picam.py, located under /home/pi

Code: Select all

#!/bin/sh
# Change the next 3 lines to suit where you install your script and what you want to call it
DIR=/home/pi/
DAEMON=$DIR/picam.py
DAEMON_NAME=picam
DAEMON_USER=root
PIDFILE=/var/run/$DAEMON_NAME.pid

. /lib/lsb/init-functions

do_start () {
    log_daemon_msg "Starting system $DAEMON_NAME daemon"
    start-stop-daemon --start --background --pidfile $PIDFILE --make-pidfile --user $DAEMON_USER --chuid $DAEMON_USER --startas $DAEMON -- $DAEMON_OPTS
    log_end_msg $?
}
do_stop () {
    log_daemon_msg "Stopping system $DAEMON_NAME daemon"
    start-stop-daemon --stop --pidfile $PIDFILE --retry 10
    log_end_msg $?
}

case "$1" in

    start|stop)
        do_${1}
        ;;

    restart|reload|force-reload)
        do_stop
        do_start
        ;;

    status)
        status_of_proc "$DAEMON_NAME" "$DAEMON" && exit 0 || exit $?
        ;;
    *)
        echo "Usage: /etc/init.d/$DAEMON_NAME {start|stop|restart|status}"
        exit 1
        ;;

esac
exit 0
You must copy that to /etc/init.d/ and make it executable (sudo chmod +x picam.sh) and activate it by

sudo update-rc.d picam.sh defaults


Regards Greg

hydra3333
Posts: 146
Joined: Thu Jan 10, 2013 11:48 pm

Re: Lightweight python motion detection

Wed Dec 31, 2014 12:42 am

Beaut, thanks, I'll try these.

There are a few obscure (to me) bugs:
1. with timestamps on the video captures, the displayed timestamp stays "frozen" at the time of last capture on the "before detection" footage and then proceeds normally during "after detection" until the end of capture. I suspect moving some "if then" time-stamping annotation code into the motion-detection callback function may help?
2. do the timestamp "annotate" calls have an effect in creating motion vectors? I made what I thought was a simple change (setting the timestamp annotation to "" after each capture finishes, so that the "before detection" footage doesn't display a timestamp), and the knock-on effect for me was that after each capture finished another capture started immediately (ie motion was detected straight away where there was none). Reverting that annotation "blanking" code made the effect go away.
3. setting the mp4 flag to use MP4Box instead of cat for post-processing results in an unknown function call type crash, and I can't figure out why.
Suggestions would be welcomed.
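
On point 3, a likely culprit is simple capitalisation: earlier listings define the mp4 joiner class as MyThreadMP4box (capital M) but the mp4_mode branch instantiates myThreadMP4box(...). Python names are case-sensitive, so the first mp4-mode event dies with a NameError. A minimal reproduction (the class body here is a hypothetical stand-in):

```python
class MyThreadMP4box(object):
    """Stand-in for the mp4 joiner thread class."""
    pass

try:
    joiner_thread = myThreadMP4box()  # wrong capitalisation, as in the mp4_mode branch
except NameError as err:
    print("crash:", err)  # dies here with a NameError unless wrapped like this
```

Matching the capitalisation at the call site should make mp4 mode run.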

killagreg
Posts: 12
Joined: Wed Dec 17, 2014 10:15 pm

Re: Lightweight python motion detection

Wed Dec 31, 2014 2:18 pm

There are a few obscure (to me) bugs:
1. with timestamps on the video captures, the displayed timestamp stays "frozen" at the time of last capture on the "before detection" footage and then proceeds normally during "after detection" until the end of capture. I suspect moving some "if then" time-stamping annotation code into the motion-detection callback function may help?
While waiting on reset of the motion event, the annotation is updated on a 1 second period. After that, during the split back to the circular buffer, the update is stopped. Hence the timestamp in the video freezes at that time.
You are right. It's better to move the annotation update into the callback function, and update if ( now != time.time() ) before now is set.
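
A sketch of that idea (the names here are mine, not from the posted script): track the last whole second, so the callback can refresh camera.annotate_text at most once per second and the "before" footage gets annotated too:

```python
class AnnotationThrottle(object):
    """Signal an annotation update at most once per whole second."""
    def __init__(self):
        self.last_second = None

    def should_update(self, now):
        # 'now' is a time.time() float; only fire when the second rolls over
        current_second = int(now)
        if current_second != self.last_second:
            self.last_second = current_second
            return True
        return False
```

In analyse(), one would call should_update(now) and, only when it returns True, set camera.annotate_text before running the motion logic.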
2. do the timestamp "annotate" calls have an effect in creating motion vectors? I made what I thought was a simple change (setting the timestamp annotation to "" after each capture finishes, so that the "before detection" footage doesn't display a timestamp), and the knock-on effect for me was that after each capture finished another capture started immediately (ie motion was detected straight away where there was none). Reverting that annotation "blanking" code made the effect go away.
Yes, the annotation is put into the image before the encoder processes the image. Hence the change of the pixels in the annotation text causes a small "virtual motion". Enabling and disabling the annotation causes a strong motion. Use a ROI that ignores the annotation area.
At the moment I am playing around with a moderator principle that ignores single-frame motion events. Annotation changes are always single-frame events and would be effectively suppressed this way.
Real events create motion in more consecutive frames.
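
That moderator principle might look something like this (a sketch, not the actual code): feed it the per-frame th flag from analyse() and only treat motion as real once it has persisted for a few consecutive frames:

```python
class MotionModerator(object):
    """Ignore single-frame motion (e.g. annotation text changes) by requiring
    motion in min_frames consecutive frames before reporting it as real."""
    def __init__(self, min_frames=3):
        self.min_frames = min_frames
        self.run_length = 0

    def update(self, frame_has_motion):
        # count consecutive motion frames; any quiet frame resets the run
        self.run_length = self.run_length + 1 if frame_has_motion else 0
        return self.run_length >= self.min_frames
```

A single-frame blip (True surrounded by False) never reaches min_frames, while real motion spanning several frames does.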

hydra3333
Posts: 146
Joined: Thu Jan 10, 2013 11:48 pm

Re: Lightweight python motion detection

Thu Jan 01, 2015 2:52 am

OK, thanks. Attached is a debug version with more logging and increased sensitivity settings.
I like the approach, using the gpu, and it is lightweight; however, as a result of testing I am not quite certain whether the motion vectors approach is as "sensitive" as some other pixel comparison approaches (such as "mmal_motion_test").
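
For comparison, the "pixel differences" family of detectors works roughly like this (a minimal numpy sketch with assumed thresholds, not the mmal_motion_test code): difference two consecutive grayscale frames and trigger when enough pixels change:

```python
import numpy as np

def pixel_motion(prev_frame, curr_frame, pixel_threshold=25, count_threshold=200):
    """Return True when more than count_threshold pixels differ by more
    than pixel_threshold between two uint8 grayscale frames."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return int((diff > pixel_threshold).sum()) > count_threshold

# quiet scene: identical frames -> no motion
prev = np.zeros((240, 320), dtype=np.uint8)
curr = prev.copy()
print(pixel_motion(prev, curr))   # False

# a 30x30 patch changes brightly -> motion
curr[0:30, 0:30] = 255
print(pixel_motion(prev, curr))   # True
```

Unlike the motion-vector approach this sees any brightness change, including slow "z" movement toward the camera, but it costs a full-frame compare on the CPU every frame.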

Code: Select all

#!/usr/bin/env python
# -*- coding: utf-8 -*-

# to fix scripts, turning them from MS-DOS format to unix format
# to get rid of MSDOS format do this to this file: sudo sed -i s/\\r//g ./filename

# This script was originally created by killagreg on Thu Dec 18, 2014  7:53 am
# and revised                        by killagreg on Fri Dec 19, 2014  7:09 pm
# and again                          by killagreg on Sun Dec 28, 2014 10:29 pm
# see   http://www.raspberrypi.org/forums/viewtopic.php?p=656881#p656881  onwards
# eg    http://www.raspberrypi.org/forums/viewtopic.php?p=660806#p660806
#
# This script implements a motion capture surveillance cam for raspberry pi using picam.
# It uses the "motion vectors" magnitude of the h264 hw-encoder to detect motion activity.
# At the time of motion detection a jpg snapshot is saved together with a h264 video stream
# some seconds before, during and after motion activity to the 'filepath' directory.
#
# APPARENTLY INSPIRED BY PICAMERA 1.8 TECHNIQUES documented at
# http://picamera.readthedocs.org/en/release-1.8/recipes2.html#rapid-capture-and-processing
# where the PICAMERA code uses efficient underlying mmal access and numpy code
#
# Modifications:
# 2014.12.26 
#    - modified slightly for the boundary case of no motion detection "windows" - avoid performing the masking step
# 2014.12.28 (hey "killagreg", really nice updates)
#    - incorporate latest changes by killagreg over christmas 2014
#         from http://www.raspberrypi.org/forums/viewtopic.php?p=660572#p660572
#    - added/changed "mp4 mode" to be optional and not the default (also added some MP4box flags)
#    - repositioned a small bit of code to avoid a possible "initial conditions" bug
#    - modified (webcam like) snapshot capture interval processing slightly
#    - added extra logging
#    - made use of localtime instead of GMT, for use in filenames
#    - removed "print" commands and instead rely on logging 
#    - added circular file logging and specified the path of the log file
# 2014.12.30 (thanks "killagreg", really really nice updates)
#    - use a threading "event" to check for motion detection
#    - use a variation of killagreg's code to shovel off the
#      post-capture video processing into separate (hopefully asynchronous) threads
#    - removed snapshot_capture_interval, replaced by combo of threading event timeout and a flag 
#    - added inline_headers=True to facilitate streaming into the future
#    - added intra_period=5 to see if we can reduce long time taken to SPLIT streams due to waiting for I frames 
# 2014.12.31 - assess sensitivity and performance
#    - moved frame annotation into motion detection function
#    - added extra debug code into [logging, frame annotation] to assist assessing sensitivity and bugs
#         * after checking the logs and the frames with this version, it turns out 
#              + there may be a bug in the pre-capture buffering (often it saves much more than the limit we set) ... I tried to squash it
#              + the motion sensitivity is not as effective as I'd hoped especially with "z" motion (toward or away)
#                    eg stand about 5m away and wave your arms and detection is not triggered
#                    eg stand about 10m away and walk directly to the camera, detection is not triggered until quite close
#              + it seems possible the "pixel differences" approach may be a more effective approach ?
# notes:
#    0. I am using an IR picam
#    1. it hopefully no longer uses 100% cpu looping around infinitely in a "while true" condition until motion is detected
#    2. if not mp4_mode then the output video streams files are raw h264, 
#       and NOT repeat NOT mpeg4 video files, so you will need to convert them to .mp4 yourself
#    3. to prepare for using this python script (yes, yes, 777, roll your own if you object)
#        sudo apt-get install -y rpi-update
#        sudo rpi-update
#        sudo apt-get update 
#        sudo apt-get upgrade
#        sudo apt-get install python-picamera python-imaging-tk gpac -y
#        sudo mkdir /var/pymotiondetector
#        sudo chmod 777 /var/pymotiondetector
#
# licensing: 
#    this being a derivative, whatever killagreg had (acknowledging killagreg code looks to be substantially from examples in
#    the picamera documentation http://picamera.readthedocs.org/en/release-1.8/recipes2.html#rapid-capture-and-processing
#    i.e. free for any and all use I guess
#
# Example to separately and externally convert h264 files to mp4, on the Pi (using MP4box from gpac)
#   sudo MP4Box -fps <use capture framerate> -add raw_video.h264 -isma -new wrapped_video.mp4
#
# Example to separately and externally convert h264 files to mp4, on the Windows
#   "C:\ffmpeg\bin\ffmpeg.exe" -f h264 -r <use capture framerate> -i "raw_video.h264" -c:v copy -an -movflags +faststart -y "wrapped_video.mp4"
#   REM if necessary add   -bsf:v h264_mp4toannexb   before "-r" 
# or
#   "C:\MP4box\MP4Box.exe" -fps <use capture framerate> -add "raw_video.h264" -isma -new "wrapped_video.mp4"
#
import os
import logging
import logging.handlers
import threading
import subprocess
import io
import picamera
import picamera.array
import numpy as np
import datetime as dt
import time
from PIL import Image

# ----------------------------------------------------------------------------------------------------------------
# in this section are parameters you can fiddle with

#current version number to print in the logger 
pymotiondetector_version = "v12.17-debug"

#dump_png mode? dumps the motion array to a .png for debugging.  Note: it doesn't work.
dump_png = False  # False
#dump_png = True  # False

# define a timeout for the "event" waiting for motion to be detected, 
# so that other processing can occur when a timeout occurs, eg jpeg snapshots
motion_event_wait_timeout = 300 # seconds 

# mp4 mode ?
# if we set mp4_mode, 
# then the h264 files are converted to an mp4 when the motion capture is completed, using MP4box (part of gpac)
# warning, warning, danger Will Robinson ... 
# mp4 mode consumes a lot of CPU and elapsed time -
# so, to not *lose frames* we have made this an async subprocess (it still consumes cpu though)
#mp4_mode = False
mp4_mode = True

#setup filepath for motion capture data (which is in raw h264 format) plus the start-of-motion jpeg.
# sudo mkdir /var/pymotiondetector
# sudo chmod 777 /var/pymotiondetector
filepath = '/var/pymotiondetector'
logger_filename = filepath + '/pymotiondetector.log'
#logger_filename = 'pymotiondetector.log'

# setup pre and post video recording around motion events
video_preseconds = 5    # minimum 1
video_postseconds = 5  # minimum 1

# setup the main video/snapshot camera resolution
# see this link for a full discussion on how to choose a valid resolution that will work
# http://picamera.readthedocs.org/en/latest/fov.html
video_width = 640
#video_width = 1280
video_height = 480
#video_height = 720

# setup the camera video framerate, PAL is 25, let's go for 10 instead
#video_framerate = 25
video_framerate = 10

# setup the camera h264 GOP size (I frames and their subsequent P and B friends)
# the *higher* this value
#  - the more actual video we "lose" whilst delayed waiting for "splitting" to finish (it seeks the next I frame)
#  - the smaller the h264 filesize, since it will use more (and smaller) P and B frames between I frames
video_intra_period = 5

#setup video rotation (0, 90, 180, 270)
video_rotation = 0 

# setup the camera to perform video stabilization
#video_stabilization = True
video_stabilization = False

# setup the camera to put a black background on the annotation (in our case, for date/time)
#video_annotate_background = True
video_annotate_background = False

# setup the camera to put frame number in the annotation
#video_annotate_frame_num = True
video_annotate_frame_num = False

# we could setup a webcam mode, to capture images on a regular interval in between motion recordings
# setup jpeg capture snapshot flag and filename prefix
perform_snapshot_capture = False 
snapshot_capture_filename = "snapshot"

#--- now for the motion detection parameters
# define motion detection video resolution, equal or smaller than capture video resolution
# *smaller* = less cpu needed thus "better" and less likely to lose frames etc
motion_width  = 320
motion_height = 240
#motion_width  = 640
#motion_height = 480

# setup motion detection threshold, 
# i.e. magnitude of a motion block to count as motion
#motion_threshold = 30
#motion_threshold = 10
motion_threshold = 5
# setup motion detection sensitivity, 
# i.e. number of motion blocks that trigger a motion detection
# eg 640x480 resolution results in 41 x 30 motion blocks, 5% of 1230=61
# eg 320x240 resolution results in 21 x 15 motion blocks, 5% of 315=15
#motion_sensitivity = 10
#motion_sensitivity = 6
motion_sensitivity = 3

# Regions of Interest
# define areas within which the motion analysis is performed, using the smaller "motion detection video resolution" split
# i.e. define areas within the motion analysis picture that are used for motion analysis
#    [ [[start pixel on left side,end pixel on right side],[start pixel on top side,stop pixel on bottom side]] ]
#
# default to no motion masking, ("0")
# ie use the "whole image frame" of the lower-resolution-capture "motion vectors"
# and avoid CPU/memory overheads of doing the masking
motion_roi_count = 0
# this is the whole "motion detection image frame"
#motion_roi_count = 1
#motion_roi = [ [[1,motion_width],[1,motion_height]] ]
# another example, one motion detection mask area
#motion_roi_count = 1
#motion_roi = [ [[270,370],[190,290]]  ]
# example for 2 mask areas
#motion_roi_count = 2
#motion_roi = [ [[1,320],[1,240]], [[400,500],[300,400]] ]

# ----------------------------------------------------------------------------------------------------------------

# do not change code below the line
#-----------------------------------

# define an event used to set/clear/check whether motion was detected or not-detected 
# with any luck, this means that we won't use 100% cpu looping around 
# inside a WHILE TRUE loop just constantly checking for a true/false condition 
motion_event = threading.Event()
motion_event.clear()

motion_timestamp = time.time()

motion_window_active = "-"
motion_frame_active = "-"

prev_frame_annotation = dt.datetime.now().strftime('%Y-%m-%d %H:%M:%S') + " --"
curr_frame_annotation = dt.datetime.now().strftime('%Y-%m-%d %H:%M:%S') + " --"

# the h264 encoder emits one motion vector per 16x16 macroblock,
# plus one extra column of vectors (hence the "+ 1")
motion_cols = (motion_width  + 15) // 16 + 1
motion_rows = (motion_height + 15) // 16
if (motion_roi_count > 0) or (dump_png):
    motion_array = np.zeros((motion_rows, motion_cols), dtype = np.uint8)

# create a zero "AND" motion mask of masked areas 
# and then fill 1's into the mask areas of interest which we specified above
if motion_roi_count > 0:
    motion_array_mask = np.zeros((motion_rows, motion_cols), dtype = np.uint8)
    for count in xrange(0, motion_roi_count):
        for col in xrange( (motion_roi[count][0][0]-1)//16, (motion_roi[count][0][1]-1+15)//16 ):
            for row in xrange( (motion_roi[count][1][0]-1)//16, (motion_roi[count][1][1]-1+15)//16 ):
                motion_array_mask[row][col] = 1

# callback handler for motion output data from the h264 hw encoder
# this processes the motion vectors from the low resolution split capture
class MyMotionDetector(picamera.array.PiMotionAnalysis):
   def analyse(self, a):
      global camera, motion_event, motion_timestamp, motion_array, motion_array_mask, motion_roi_count, prev_frame_annotation, curr_frame_annotation, motion_window_active, motion_frame_active
      # calculate length of motion vectors of mpeg macro blocks
      a = np.sqrt(
          np.square(a['x'].astype(np.float)) +
          np.square(a['y'].astype(np.float))
          ).clip(0, 255).astype(np.uint8)
      # zero out (mask out) anything outside our specified areas of interest, if we have a mask
      if motion_roi_count > 0:
          a = a * motion_array_mask
      # If there are more than 'sensitivity' vectors with a magnitude greater than 'threshold', then say we've detected motion
      th = ((a > motion_threshold).sum() > motion_sensitivity)
      now = time.time()
      # by now ...
      # th                     = motion detected on current frame
      # motion_timestamp       = the last time when motion was detected in a frame (start of time window)
      # motion_event.is_set()  = whether the motion detection time window is currently triggered
      #                          it is only turned off once motion has previously been detected,
      #                          "no motion" is seen, and the post-motion time window has expired
      # motion logic, trigger on motion and stop after video_postseconds seconds of inactivity
      if th:
          motion_timestamp = now
          motion_frame_active = "f"
          logger.info('frame motion detected')
      else:
          motion_frame_active = "-"
      #if motion is detected, don't clear the detection flag until video_postseconds have passed
      if motion_event.is_set():
          if (now - motion_timestamp) >= video_postseconds:
              motion_event.clear()  
              motion_window_active = "-"
              logger.info('window capture cleared')
      else:
          if th:
              motion_event.set()
              motion_window_active = "w"
              logger.info('window capture set')
          if dump_png: # the dump_png .png file doesn't work 
              idx = a > motion_threshold
              a[idx] = 255
              motion_array = a
      curr_frame_annotation = dt.datetime.now().strftime('%Y-%m-%d %H:%M:%S') + " " + motion_window_active + motion_frame_active
      if  curr_frame_annotation != prev_frame_annotation:
          camera.annotate_text = curr_frame_annotation
          prev_frame_annotation = curr_frame_annotation

def write_video(stream):
     # Write the entire content of the circular buffer to disk. No need to
     # lock the stream here as we're definitely not writing to it
     # simultaneously
     global motion_filename

     with io.open(motion_filename + '-before.h264', 'wb') as output:
         for frame in stream.frames:
             if frame.frame_type == picamera.PiVideoFrameType.sps_header:
                 stream.seek(frame.position)
                 break
         while True:
             buf = stream.read1()
             if not buf:
                 break
             output.write(buf)
     # Wipe the circular stream once we're done
     stream.seek(0)
     stream.truncate()
     
#-------------------------------------------------------------------------------------
# use asynchronous threads to perform the video processing after capture has completed 

class myThread (threading.Thread):
    def __init__(self, threadID, fps, name, jcmd, counter):
        threading.Thread.__init__(self)
        self.threadID = threadID
        self.fps = fps
        self.name = name
        self.jcmd = jcmd
        self.counter = counter
    def run(self):
        subprocess.call(self.jcmd, shell=True)
          
#---------------------
# MAIN CODE START HERE
#---------------------

# create a rotating file logger
logger = logging.getLogger('pymotiondetector')
logger.setLevel(logging.DEBUG)
# create file handler which logs even debug messages
#fh = logging.FileHandler('pymotiondetector.log')
fh = logging.handlers.RotatingFileHandler(logger_filename, mode='a', maxBytes=(1024*1000 * 2), backupCount=5, delay=0)
fh.setLevel(logging.DEBUG)
# create formatter and add it to the handlers
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
fh.setFormatter(formatter)
# add the handlers to the logger
logger.addHandler(fh)
logger.info('----------------------------------------')
logger.info('pymotiondetector %s has been started' % (pymotiondetector_version))
logger.info('----------------------------------------')
msg = "Video capture filepath: %s" % (filepath)
logger.info(msg)
msg = "Capture videos with %d x %d resolution" % (video_width, video_height)
logger.info(msg)
msg = "Analyse motion vectors from a %d x %d resolution split" % (motion_width, motion_height)
logger.info(msg)
msg = "  resulting in %d x %d motion blocks" % (motion_cols, motion_rows)
logger.info(msg)
msg = "Analyse motion vectors threshold , sensitivity: %d , %d" % (motion_threshold, motion_sensitivity)
logger.info(msg)
msg = "Regions of Interest: %d" % (motion_roi_count)
logger.info(msg)
msg = "Framerate: %d fps" % (video_framerate)
logger.info(msg)
msg = "video_intra_period: %d" % (video_intra_period)
logger.info(msg)
msg = "Rotation: %d degrees" % (video_rotation)
logger.info(msg)
msg = "Stabilization: %r" % (video_stabilization)
logger.info(msg)
msg = "Annotate background: %r" % (video_annotate_background)
logger.info(msg)
msg = "Annotate frame_num: %r" % (video_annotate_frame_num)
logger.info(msg)
msg = "Video detection event capture before-seconds , after-seconds: %d , %d" % (video_preseconds, video_postseconds)
logger.info(msg)
msg = "motion_event_wait_timeout: %s" % (motion_event_wait_timeout)
logger.info(msg)
msg = "perform_snapshot_capture: %r" % (perform_snapshot_capture)
logger.info(msg)
msg = "snapshot_capture_filename: %s" % (snapshot_capture_filename)
logger.info(msg)
msg = "logger_filename: %s" % (logger_filename)
logger.info(msg)
msg = "mp4_mode: %r" % (mp4_mode)
logger.info(msg)

with picamera.PiCamera() as camera:
   camera.resolution = (video_width, video_height)
   camera.framerate = video_framerate
   camera.rotation = video_rotation
   camera.video_stabilization = video_stabilization
   camera.annotate_background = video_annotate_background
   camera.annotate_frame_num = video_annotate_frame_num

   # setup a circular IO buffer to contain video of "before motion detection" footage
   stream = picamera.PiCameraCircularIO(camera, seconds = video_preseconds)
   # 1. split the hi resolution video recording into circular buffer from splitter port 1
   camera.start_recording(stream, format='h264', splitter_port=1, inline_headers=True, intra_period=video_intra_period)
   # 2. split the low resolution motion vector analysis from splitter port 2, throw away the actual video
   camera.start_recording('/dev/null', splitter_port=2, resize=(motion_width,motion_height) ,format='h264', motion_output=MyMotionDetector(camera, size=(motion_width,motion_height)))
   # wait some seconds for stable video data to be available
   camera.wait_recording(2, splitter_port=1)
   joiner_thread_id = 0
   logger.info('OK. Waiting for first motion to be detected')

   try:
       while True:
          # the callback "MyMotionDetector" has been setup above using the low resolution split
          # original code "while true" above ... loop around as fast as we can go until motion is detected ... thus 100 percent cpu ?
          # a motion event must trigger this action here
#####         camera.annotate_text = dt.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
          # do an event wait with a "motion_event_wait_timeout" seconds timeout; wait() returns True if the event was set, False on timeout
          msg = "Entering event wait state awaiting next motion detection by class MyMotionDetector ..."
          logger.info(msg)
          motion_event.clear()
          msg = "Window capture status reset"
          logger.info(msg)
          if motion_event.wait(motion_event_wait_timeout):
             logger.info('detected motion')
             motion_filename = filepath + "/" + time.strftime("%Y%m%d-%H%M%S", time.localtime(motion_timestamp))
             # split  the high res video stream to a file instead of to the internal circular buffer
             logger.info('splitting video from circular IO buffer to after-motion-detected h264 file ')
             camera.split_recording(motion_filename + '-after.h264', splitter_port=1)
             # catch an image as video preview during video recording (uses splitter port 0) at time of the motion event
             msg = "started  capture jpeg image file %s" % (motion_filename + ".jpg")
             logger.info(msg)
             camera.capture_sequence([motion_filename + '.jpg'], use_video_port=True, splitter_port=0)
             msg = "finished capture jpeg image file %s" % (motion_filename + ".jpg")
             logger.info(msg)
             # if we want to see dump_png motion stuff, dump motion array as a png image
             if dump_png:
                 logger.info('saving dump_png motion vectors')
                 img = Image.fromarray(motion_array)
                 img.save(motion_filename + "-motion.png")
             # save circular buffer containing "before motion" event video, ie write it to a file
             logger.info('started  saving before-motion circular buffer')
             write_video(stream)
             logger.info('finished saving before-motion circular IO buffer')
             #---- wait for the end of motion event here, in one second increments
             logger.info('start waiting to detect end of motion')
             # while motion is detected, stay inside the loop below, recording
             while motion_event.is_set():
                camera.wait_recording(0.5, splitter_port=1)
             #---- end of motion event detected
             logger.info('detected end of motion')
             #split video recording back in to circular buffer
             logger.info('splitting video back into the circular IO buffer')
             camera.split_recording(stream, splitter_port=1)
             if mp4_mode:
                 joiner_cmd  = "MP4Box -fps %d -cat %s -cat %s -isma -new > /dev/null %s && rm -f %s > /dev/null && rm -f %s > /dev/null" % (video_framerate, motion_filename + "-before.h264", motion_filename + "-after.h264", motion_filename + ".mp4", motion_filename + "-before.h264", motion_filename + "-after.h264")
             else:
                 joiner_cmd = "cat %s %s > %s && rm -f %s && rm -f %s" % (motion_filename + "-before.h264", motion_filename + "-after.h264", motion_filename + ".h264", motion_filename + "-before.h264", motion_filename + "-after.h264")
             joiner_thread_id = joiner_thread_id + 1
             msg = "starting new video post-processing thread %d for h264 files" % (joiner_thread_id)
             logger.info(msg)
             logger.info(joiner_cmd)
             joiner_thread = myThread(joiner_thread_id, video_framerate, motion_filename, joiner_cmd, joiner_thread_id) 
             joiner_thread.start()
             msg = "after starting video post-processing thread %d for h264 files" % (joiner_thread_id)
             logger.info(msg)
             msg = "Finished capture processing."
             logger.info(msg)
          else:
             # no motion detected or in progress
             logger.info("Motion detector timed out")
             # if webcam mode, capture images on the regular timeout interval
             if perform_snapshot_capture:
                 #snapf = filepath + "/" + snapshot_capture_filename + "-" + time.strftime("%Y%m%d-%H%M%S", time.gmtime(motion_timestamp))
                 snapf = filepath + "/" + snapshot_capture_filename + "-" + time.strftime("%Y%m%d-%H%M%S", time.localtime(time.time()))
                 camera.capture_sequence([snapf + ".jpg"], use_video_port=True, splitter_port=0)
                 logger.info("Captured snapshot")

   finally:
       camera.stop_recording(splitter_port=1)
       camera.stop_recording(splitter_port=2)

killagreg
Posts: 12
Joined: Wed Dec 17, 2014 10:15 pm

Re: Lightweight python motion detection

Thu Jan 01, 2015 7:11 pm

Now I have a moderator in the motion detection that fires an event only if motion above the threshold has been detected in 2 consecutive frames. This improves the algorithm's detection of real motion when the threshold is set very low (maybe around 4).

In addition, an external program can be called on a motion event (motion_cmd = ...).
I use that feature to set a system variable in my home automation controller via a web request
to trigger some activity there.

To get an idea of the motion vector length distribution in certain scenes, I have added a histogram output to the logfile. Remove the feature in case of too high CPU load.
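The histogram logging mentioned above can be tried on its own; this sketch uses the same fixed bin edges as the script below, with a small made-up magnitude array standing in for one frame's macroblock data.

```python
import numpy as np

# eight sample macroblock magnitudes standing in for one analysed frame
magnitudes = np.array([0, 0, 1, 2, 4, 7, 12, 50], dtype=np.uint8)

# np.histogram counts values per bin; edges [0,1,2,3,4,5,10,100] give bins
# [0,1) [1,2) [2,3) [3,4) [4,5) [5,10) [10,100] - the last bin is closed
counts, edges = np.histogram(magnitudes, bins=[0, 1, 2, 3, 4, 5, 10, 100])
print(counts.tolist())  # -> [2, 1, 1, 0, 1, 1, 2]
```

Logging `counts` each frame shows how many blocks fall into each magnitude range, which is handy for picking a `motion_threshold` that sits just above the scene's background noise.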

Code: Select all

#!/usr/bin/python

# This script implements a motion capture surveillance cam for raspberry pi using the picam.
# It uses the motion vector magnitudes of the h264 hw-encoder to detect motion activity.
# At the time of motion detection a jpg snapshot is saved together with a mp4 video
# some seconds before, during and after motion activity to the 'filepath' directory.
# In addition one or multiple regions of interest within the image can be defined to mask out moving trees.
# It is also possible to define a shell command that will be executed in case of a motion event.

import os, logging
import subprocess
import threading
import io
import picamera
import picamera.array
import numpy as np
import datetime as dt
import time
from PIL import Image

#debug mode?
debug = True
#setup filepath for motion and capure data output
filepath = '/var/www/motion'
#setup filepath for logfile
logger_filepath = '/home/pi/picam.log'
# setup pre and post video recording around motion event
video_preseconds = 3
video_postseconds = 3
#setup video resolution
video_width = 1280
video_height = 720
video_framerate = 25
#setup cam rotation (0, 180)
cam_rotation = 180
# define a shell cmd that should be executed when motion has been recognized
#motion_cmd = ""
motion_cmd = "/home/pi/motion.sh"
# setup motion detection resolution
motion_width = 640
motion_height = 360
# setup motion detection threshold, i.e. magnitude of a motion block to count as motion
motion_threshold = 7
# setup motion detection sensitivity, i.e. number of motion blocks that trigger a motion detection
motion_sensitivity = 4
# regions of interest define areas within which the motion analysis is done
# [ [[start pixel on left side,end pixel on right side],[start pixel on top side,stop pixel on bottom side]] ]
# default is the whole image frame
motion_roi =  []
# example for 1 roi
#motion_roi = [ [[1,640], [36,360]]  ]
# example for 2 roi
#motion_roi = [ [[1,320],[1,240]], [[400,500],[200,300]] ]

# setup capture interval
capture_interval = 10
capture_filename = "snapshot"

# do not change code below this line
# --------------------------------------------------------

# callback handler for motion output data from the h264 hw encoder
class MotionDetector(picamera.array.PiMotionAnalysis):
   th_counter = 0       # static variable within the class
   time_lastrun = 0.0   # timestamp in sec since last call
   time_motion = 0.0    # timestamp in sec of last motion event
   event = threading.Event()
   roi_mask = np.zeros((0,0), dtype = np.uint8)


   def analyse(self, a):
      global camera, motion_array, histo
      # check for pixel changes in macro blocks (sum of absolute differences)
      # as an alternative to the length of motion vectors
      #a = (a['sad']).astype(np.uint16)
      # calculate length of motion vectors of mpeg macro blocks
      a = np.sqrt(
          np.square(a['x'].astype(np.float)) +
          np.square(a['y'].astype(np.float))
          ).clip(0, 255).astype(np.uint8)
      # mask array if necessary
      a = a * MotionDetector.roi_mask
      # If there're more than 'sensitivity' vectors with a magnitude greater
      # than 'threshold', then say we've detected motion
      histo = np.histogram(a, bins=[0, 1, 2, 3, 4, 5, 10, 100])
      th = ((a > motion_threshold).sum() > motion_sensitivity)

      # motion logic, trigger on motion and stop after some seconds of inactivity

      # update annotation on 1 second interval
      now = time.time()
      if( (now - MotionDetector.time_lastrun) >= 1.0 ):
         MotionDetector.time_lastrun = now
         camera.annotate_text = dt.datetime.now().strftime('%Y-%m-%d %H:%M:%S')

      # motion ongoing
      if MotionDetector.event.is_set():
          if th:
             MotionDetector.time_motion = now
          if (now - MotionDetector.time_motion) >= video_postseconds:
             MotionDetector.event.clear()
             MotionDetector.th_counter = 0
      else:
         if th:
             # increment threshold counter with motion detection
             MotionDetector.th_counter += 1
             if (MotionDetector.th_counter > 2): # more than 2 consecutive motion thresholds
                MotionDetector.event.set()
                MotionDetector.time_motion = now
                if debug:
                   idx = a > motion_threshold
                   a[idx] = 255
                   motion_array = a
         else:
             # decrement threshold counter without a motion detection
             if(MotionDetector.th_counter):
                MotionDetector.th_counter -= 1



def write_video(stream):
# Write the entire content of the circular buffer to disk. No need to
# lock the stream here as we're definitely not writing to it
# simultaneously
     global motion_filename

     with io.open(motion_filename + '-before.h264', 'wb') as output:
         for frame in stream.frames:
            if frame.frame_type == picamera.PiVideoFrameType.sps_header:
                stream.seek(frame.position)
                break
         while True:
            buf = stream.read1()
            if not buf:
               break
            output.write(buf)
     # Wipe the circular stream once we're done
     stream.seek(0)
     stream.truncate()



class execute_cmd (threading.Thread):

    def __init__(self, cmd):
        threading.Thread.__init__(self)
        self.cmd = cmd

    def run(self):
        subprocess.call(self.cmd, shell=True)

def run_background(cmd):
    if(cmd != ""):
      thread = execute_cmd(cmd)
      thread.start()

# create logger
logger = logging.getLogger('PiCam')
logger.setLevel(logging.DEBUG)
# create file handler which logs even debug messages
fh = logging.FileHandler(logger_filepath)
fh.setLevel(logging.DEBUG)
# create formatter and add it to the handlers
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
fh.setFormatter(formatter)
# add the handlers to the logger
logger.addHandler(fh)

# create motion mask
motion_cols = (motion_width  + 15) // 16 + 1
motion_rows = (motion_height + 15) // 16
if(len(motion_roi)):
   MotionDetector.roi_mask = np.zeros((motion_rows, motion_cols), dtype = np.uint8)
   for count in xrange(0, len(motion_roi)):
      for col in xrange( (motion_roi[count][0][0]-1)//16, (motion_roi[count][0][1]-1)//16 ):
         for row in xrange( (motion_roi[count][1][0]-1)//16, (motion_roi[count][1][1]-1)//16 ):
            MotionDetector.roi_mask[row][col] = 1
else:
   MotionDetector.roi_mask = np.ones((motion_rows, motion_cols), dtype = np.uint8)

logger.info('---------------------------------')
logger.info('PiCam has been started')
logger.info("Capture videos with %dx%d resolution" % (video_width, video_height))
logger.info("Analyze motion with %dx%d resolution" % (motion_width, motion_height))
logger.info("  resulting in %dx%d motion blocks" % (motion_cols, motion_rows))

capture_timestamp = time.time()

with picamera.PiCamera() as camera:
        camera.resolution = (video_width, video_height)
        camera.framerate = video_framerate
        camera.rotation = cam_rotation
        camera.exposure_mode = 'night'
        camera.video_stabilization = True
        camera.annotate_background = True
        # setup a circular buffer
        stream = picamera.PiCameraCircularIO(camera, seconds = video_preseconds)
        # hi resolution video recording into circular buffer from splitter port 1
        camera.start_recording(stream, format='h264', splitter_port=1)
        # low resolution motion vector analysis from splitter port 2
        camera.start_recording('/dev/null', splitter_port=2, resize=(motion_width,motion_height) ,format='h264', motion_output=MotionDetector(camera, size=(motion_width,motion_height)))
        # wait some seconds for stable video data
        camera.wait_recording(2, splitter_port=1)
        # clear motion event
        MotionDetector.event.clear()
        logger.info('waiting for motion')
        thread_id = 1
        try:
            while True:
                    # motion event must trigger this action here
                    if MotionDetector.event.wait(1):
                            logger.info('motion detected')
                            histb = histo
                            motion_filename = filepath + "/" + time.strftime("%Y%m%d-%H%M%S", time.localtime(MotionDetector.time_motion))
                            logger.info(str(histb))
                            #capture still image from the motion event during video recording
                            camera.capture_sequence([motion_filename + '.jpg'], use_video_port=True, splitter_port=0)
                            # if a shell command has been defined execute it in another thread
                            run_background(motion_cmd)
                            #split video recording at next key-frame from circular buffer to temp file
                            camera.split_recording(motion_filename + '-after.h264', splitter_port=1)
                            # dump motion array as image
                            if debug:
                               img = Image.fromarray(motion_array)
                               img.save(motion_filename + "-motion.png")
                            # save circular buffer before motion event
                            write_video(stream)
                            #wait for end of motion event here
                            while MotionDetector.event.is_set():
                               camera.wait_recording(1, splitter_port=1)
                            #split video recording back in to circular buffer at next key frame
                            camera.split_recording(stream, splitter_port=1)
                             # transcode to mp4 using a separate thread to avoid blocking this loop
                            cmd = "MP4Box -noprog -force-cat -cat %s -cat %s %s && rm -f %s" % (motion_filename + "-before.h264", motion_filename + "-after.h264", motion_filename + ".mp4", motion_filename + "-*.h264")
                            run_background(cmd)
                            logger.info('motion stopped')
                    else:
                             # webcam mode, capture images on a regular interval
                            if capture_interval:
                               if(time.time() > capture_timestamp):
                                   logger.info(str(histo))
                                   capture_timestamp = time.time() + capture_interval
                                   camera.capture_sequence([filepath + "/" + capture_filename + ".jpg"], use_video_port=True, splitter_port=0)

        finally:
            camera.stop_recording(splitter_port=1)
            camera.stop_recording(splitter_port=2)


hydra3333
Posts: 146
Joined: Thu Jan 10, 2013 11:48 pm

Re: Lightweight python motion detection

Sat Jan 03, 2015 5:44 am

Thank you. A variation of your code that I use, below, adds some minor extra debugging output to the log file (turned off by a flag).

Code: Select all

#!/usr/bin/env python
# -*- coding: utf-8 -*-

# to fix scripts, turning them from MS-DOS format to unix format
# to get rid of MSDOS format do this to this file: 
# sudo sed -i s/\\r//g ./*.py
# sudo sed -i s/\\r//g ./*.sh

# This script was originally created by killagreg on Thu Dec 18, 2014  7:53 am
# and updated                         by killagreg on Fri Dec 19, 2014  7:09 pm
# and again                           by killagreg on Sun Dec 28, 2014 10:29 pm
# see   http://www.raspberrypi.org/forums/viewtopic.php?p=656881#p656881  onwards
# eg    http://www.raspberrypi.org/forums/viewtopic.php?p=660806#p660806
#
# This script implements a motion capture surveillance cam for raspberry pi using picam.
# It uses the "motion vectors" magnitude of the h264 hw-encoder to detect motion activity.
# At the time of motion detection a jpg snapshot is saved together with a h264 video stream
# some seconds before, during and after motion activity to the 'filepath' directory.
#
# APPARENTLY INSPIRED BY PICAMERA 1.8 TECHNIQUES documented at
# http://picamera.readthedocs.org/en/release-1.8/recipes2.html#rapid-capture-and-processing
# where the PICAMERA code uses efficient underlying mmal access and numpy code
#
# Modifications:
# 2014.12.26 
#    - modified slightly for the boundary case of no motion detection "windows" - avoid performing the masking step
# 2014.12.28 (hey "killagreg", really nice updates)
#    - incorporate latest changes by killagreg over Christmas 2014
#         from http://www.raspberrypi.org/forums/viewtopic.php?p=660572#p660572
#    - added/changed "mp4 mode" to be optional and not the default (also added some MP4box flags)
#    - repositioned a small bit of code to avoid a possible "initial conditions" bug
#    - modified (webcam like) snapshot capture interval processing slightly
#    - added extra logging
#    - made use of localtime instead of GMT, for use in filenames
#    - removed "print" commands and instead rely on logging 
#    - added circular file logging and specified the path of the log file
# 2014.12.30 (thanks "killagreg", really really nice updates)
#    - use a threading "event" to check for motion detection
#    -  use a variation of killagreg's code to shovel off the 
#      post-capture video processing into separate (hopefully asynchronous) threads
#    - removed snapshot_capture_interval, replaced by combo of threading event timeout and a flag 
#    - added inline_headers=True to facilitate streaming into the future
#    - added intra_period=5 to see if we can reduce long time taken to SPLIT streams due to waiting for I frames 
# 2014.12.31 - assess sensitivity and performance
#    - moved frame annotation into motion detection function
#    - added extra debug code into [logging, frame annotation] to assist assessing sensitivity and bugs
#         * after checking the logs and the frames with this version, it turns out 
#              + there may be a bug in the pre-capture buffering (often it saves much more than the limit we set) ... I tried to squash it
#              + the motion sensitivity is not as effective as I'd hoped especially with "z" motion (toward or away)
#                    eg stand about 5m away and wave your arms and detection is not triggered
#                    eg stand about 10m away and walk directly to the camera, detection is not triggered until quite close
#              + it seems possible the "pixel differences" approach may be a more effective approach ?
# 2015.01.03 - miscellaneous updates as shown by killagreg
#    - http://www.raspberrypi.org/forums/viewtopic.php?p=663390#p663390
#    - throw a "motion detected" event upon 2 consecutive frames both above the threshold motion parameters
#    - log motion detection histograms 
#    - set a higher resolution for detection window, see if that helps
#   
# notes:
#    0. I am using an IR picam
#    1. it hopefully no longer uses 100% cpu looping around infinitely in a "while true" condition until motion is detected
#    2. if not mp4_mode then the output video streams files are raw h264, 
#       and NOT repeat NOT mpeg4 video files, so you will need to convert them to .mp4 yourself
#    3. to prepare for using this python script (yes, yes, 777, roll your own if you object)
#        sudo apt-get install -y rpi-update
#        sudo rpi-update
#        sudo apt-get update -y
#        sudo apt-get upgrade -y
#        sudo apt-get install python-picamera python-imaging-tk gpac -y
#        sudo mkdir /var/pmd
#        sudo chmod 777 /var/pmd
#    example installation including the python script to be run as a daemon pmd.py:
#        ... copy the pmd.py to /var/pmd/pmd.py
#        cp ./pmd.py /var/pmd/pmd.py
#        chmod 777 /var/pmd/pmd.py
#        ... copy the pmd_stop_start.sh to /etc/init.d eg
#        cp ./pmd_stop_start.sh /etc/init.d/
#        chmod 777 /etc/init.d/pmd_stop_start.sh
#        # adds symbolic links to the /etc/rc.x directories so that the init script is run at the default times
#        ## old way:
#        sudo update-rc.d pmd_stop_start.sh defaults
#        sudo update-rc.d pmd_stop_start.sh enable
#        # you can see these links if you do this
#        ls -l /etc/rc?.d/*pmd_stop_start.sh
#        #-----------------------------------------------------------------------------------------
#        #You should be able to start your Python script pmd.py using the command 
#        sudo /etc/init.d/pmd_stop_start.sh start
#        #and check its status with 
#        sudo /etc/init.d/pmd_stop_start.sh status 
#        sudo ps -eF | grep pmd
#        #and stop it with 
#        sudo /etc/init.d/pmd_stop_start.sh stop
#        #-----------------------------------------------------------------------------------------
#        #To remove a script from being executed on every boot (notice the .sh stays throughout)
#        #you'll delete all links from that folders. 
#        #Usually on debian systems this is done using the update-rc.d tool:
#        ##sudo update-rc.d pmd_stop_start.sh disable
#        sudo update-rc.d -f pmd_stop_start.sh remove
#        sudo rm /etc/init.d/pmd_stop_start.sh
#        #-----------------------------------------------------------------------------------------
#
#    4. to monitor CPU usage after the script is started in the background,
#       4.1 https://pypi.python.org/pypi/psutil
#           sudo apt-get install python-dev
#           then use this in a python script to show cpu usage every 1 second for one minute
#               import psutil
#               for x in range(60):
#                   print psutil.cpu_percent(interval=1)
#       4.2 use the built-in "top" command line
#       4.3 use htop (preferable)
#           sudo apt-get install htop -y
#           htop
#
# licensing: 
#    this being a derivative, whatever killagreg had (acknowledging killagreg code looks to be substantially from examples in
#    the picamera documentation http://picamera.readthedocs.org/en/release-1.8/recipes2.html#rapid-capture-and-processing
#    i.e. free for any and all use I guess
#
# Example to separately and externally convert h264 files to mp4, on the Pi (using MP4box from gpac)
#   sudo MP4Box -fps <use capture framerate> -add raw_video.h264 -isma -new wrapped_video.mp4
#
# Example to separately and externally convert h264 files to mp4, on Windows
#   "C:\ffmpeg\bin\ffmpeg.exe" -f h264 -r <use capture framerate> -i "raw_video.h264" -c:v copy -an -movflags +faststart -y "wrapped_video.mp4"
#   REM if necessary add   -bsf:v h264_mp4toannexb   before "-r" 
# or
#   "C:\MP4box\MP4Box.exe" -fps <use capture framerate> -add "raw_video.h264" -isma -new "wrapped_video.mp4"
#
import os
import logging
import logging.handlers
import threading
import subprocess
import io
import picamera
import picamera.array
import numpy as np
import datetime as dt
import time
from PIL import Image

# ----------------------------------------------------------------------------------------------------------------
# in this section are parameters you can fiddle with

#current version number to print in the logger 
pmd_version = "v12.22"

# define the logging level 
logging_level = logging.INFO
#logging_level  = logging.DEBUG

# extra debugging - dump histo on all frames where some threshold motion is detected (CPU EXHAUSTIVE !)
#debug_dump_extra_motion_histo = True
debug_dump_extra_motion_histo = False

#dump_png mode? dumps the motion array as a png image.  Note: the png output doesn't work yet.
dump_png = False  # False
#dump_png = True  # False

# define a timeout for the "event" waiting for motion to be detected, 
# so that other processing can occur when a timeout occurs, eg jpeg snapshots
motion_event_wait_timeout = 300 # seconds 

# mp4 mode ?
# if we set mp4_mode, 
# then the h264 files are converted to an mp4 when the motion capture is completed, using MP4box (part of gpac)
# warning, warning, danger Will Robinson ... 
# mp4 mode consumes a lot CPU and elapsed time - 
# so, to not *lose frames* we have made this an async subprocess (it still consumes cpu though)
#mp4_mode = False
mp4_mode = True

#setup filepath for motion capture data (which is in raw h264 format) plus the start-of-motion jpeg.
# sudo mkdir /var/pmd
# sudo chmod 777 /var/pmd
filepath = '/var/pmd'
logger_filename = filepath + '/pmd.log'
#logger_filename = 'pmd.log'

# setup pre and post video recording around motion events
video_preseconds = 2   # minimum 1
video_postseconds = 2  # minimum 1

# setup the main video/snapshot camera resolution
# see this link for a full discussion on how to choose a valid resolution that will work
# http://picamera.readthedocs.org/en/latest/fov.html
video_width = 1280
video_height = 720
#video_width = 640
#video_height = 480

# setup the camera video framerate, PAL is 25, let's go for 10 instead
#video_framerate = 25
video_framerate = 10

# setup the camera h264 GOP size (I frames and their subsequent P and B friends)
# the *higher* this value
#  - the more actual video we "lose" whilst delayed waiting for "splitting" to finish (it seeks the next I frame)
#  - the smaller the h264 filesize since it will use more (and smaller) B and P intra-I frames
video_intra_period = 5

#setup video rotation (0, 90, 180, 270)
video_rotation = 0 

# setup the camera to perform video stabilization
#video_stabilization = True
video_stabilization = False

# setup the camera to put a black background on the annotation (in our case, for date/time)
#video_annotate_background = True
video_annotate_background = False

# setup the camera to put frame number in the annotation
#video_annotate_frame_num = True
video_annotate_frame_num = False

# we could setup a webcam mode, to capture images on a regular interval in between motion recordings
# setup jpeg capture snapshot flag and filename prefix
perform_snapshot_capture = False 
snapshot_capture_filename = "snapshot"

#--- now for the motion detection parameters
# define motion detection video resolution, equal or smaller than capture video resolution
# *smaller* = less cpu needed thus "better" and less likely to lose frames etc
#motion_width  = 320
#motion_height = 240
#motion_width  = 640
#motion_height = 480
motion_width  = 640
motion_height = 360

# setup motion detection threshold,
# i.e. magnitude of a motion block to count as motion
#motion_threshold = 30
#motion_threshold = 10
#motion_threshold = 7
motion_threshold = 6
#motion_threshold = 4
# setup motion detection sensitivity,
# i.e. number of motion blocks that trigger a motion detection
# eg 640x480 resolution results in 41 x 30 motion blocks, 5% of 1230 = 61
# eg 640x360 resolution results in 41 x 23 motion blocks, 5% of 943 = 47
# eg 320x240 resolution results in 21 x 15 motion blocks, 5% of 315 = 15
#motion_sensitivity = 10
motion_sensitivity = 4
#motion_sensitivity = 3
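
# --- illustrative sketch (added for this post; not part of killagreg's original script) ---
# the block-count arithmetic in the comments above can be checked directly; the
# h264 encoder emits one extra column of motion vectors per row of macro-blocks
def _motion_block_counts(width, height):
    # one 16x16 macro-block per 16 pixels (rounded up), plus one spare column
    return ((width + 15) // 16 + 1, (height + 15) // 16)
# e.g. _motion_block_counts(640, 360) gives (41, 23)
# -----------------------------------------------------------------------------------------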

# Range Of Interests 
# define areas within which the motion analysis is performed using the smaller "motion detection video resolution" split
# ie define areas within the motion analysis picture that are used for motion analysis
#    [ [[start pixel on left side,end pixel on right side],[start pixel on top side,stop pixel on bottom side]] ]
#
# default to no motion masking, ("0")
# ie use the "whole image frame" of the lower-resolution-capture "motion vectors"
# and avoid CPU/memory overheads of doing the masking
motion_roi_count = 0
# this is the whole "motion detection image frame"
#motion_roi_count = 1
#motion_roi = [ [[1,motion_width],[1,motion_height]] ]
# another example, one motion detection mask area
#motion_roi_count = 1
#motion_roi = [ [[270,370],[190,290]]  ]
# example for 2 mask areas
#motion_roi_count = 2
#motion_roi = [ [[1,320],[1,240]], [[400,500],[300,400]] ]

#---------- 2015.01.03 ----------
# define a shell cmd that should be executed when motion has been recognized
motion_cmd = ""
#motion_cmd = filepath + '/pmd_motion_detected.sh'

# do not change code below the line
#-----------------------------------

# define an event used to set/clear/check whether motion was detected or not-detected 
# with any luck, this means that we won't use 100% cpu looping around 
# inside a WHILE TRUE loop just constantly checking for a true/false condition 
motion_event = threading.Event()
motion_event.clear()
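
# --- illustrative sketch (added for this post; never called by the script) ---
# the wait-with-timeout pattern can be exercised in isolation: a timer thread
# sets the event while the caller blocks in wait() instead of busy-polling
def _event_wait_demo():
    ev = threading.Event()
    threading.Timer(0.1, ev.set).start()   # set the event after 0.1 s
    return ev.wait(timeout=1.0)            # True if set before the timeout
# -----------------------------------------------------------------------------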

# initialize timestamp stuff
motion_timestamp = time.time()
motion_window_active = "-"
motion_frame_active = "-"

prev_frame_annotation = dt.datetime.now().strftime('%Y-%m-%d %H:%M:%S') + " --"
curr_frame_annotation = dt.datetime.now().strftime('%Y-%m-%d %H:%M:%S') + " --"

# calculate the number of blocks that the motion vectors have
# and initialize a regions-of-interest array with zeroes
motion_cols = (motion_width  + 15) // 16 + 1
motion_rows = (motion_height + 15) // 16
if (motion_roi_count > 0) or (dump_png):
    motion_array = np.zeros((motion_rows, motion_cols), dtype = np.uint8)

# create a zero "AND" motion mask of masked areas 
# and then fill 1's into the mask areas of interest which we specified above
if (motion_roi_count) > 0:
    motion_array_mask = np.zeros((motion_rows, motion_cols), dtype = np.uint8)
    for count in xrange(0, motion_roi_count):
        for col in xrange( (motion_roi[count][0][0]-1)//16, (motion_roi[count][0][1]-1+15)//16 ):
            for row in xrange( (motion_roi[count][1][0]-1)//16, (motion_roi[count][1][1]-1+15)//16 ):
                motion_array_mask[row][col] = 1
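
# --- illustrative sketch (added for this post; the loop version above is what the script uses) ---
# the same mask can be built with NumPy slicing instead of nested loops
def _build_roi_mask(rois, rows, cols):
    mask = np.zeros((rows, cols), dtype=np.uint8)
    for (x0, x1), (y0, y1) in rois:
        # pixel ranges are 1-based and inclusive, as in motion_roi above
        mask[(y0 - 1) // 16:(y1 + 14) // 16, (x0 - 1) // 16:(x1 + 14) // 16] = 1
    return mask
# -------------------------------------------------------------------------------------------------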

# pre-initialise the histogram arrays (printed in the log)
# note: np.histogram returns len(bins)-1 counts (16 counts for the 17 bin
# edges below); the placeholder lists are only ever rebound, never mutated,
# so sharing one list between the names is safe
histo_bins = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 15, 20, 30, 40, 50, 100]
histo0 =     [0] * (len(histo_bins) - 1)
histo1 =     histo0
histo2 =     histo0
histo_nil =  histo0
histo_extra_debug = histo0
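
# --- illustrative sketch (added for this post; shows how the logged histograms are produced) ---
# np.histogram buckets the motion-vector magnitudes into the bins above,
# returning one count per bin (len(bins) - 1 counts in total)
def _motion_histo(a, bins=(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 15, 20, 30, 40, 50, 100)):
    counts, _edges = np.histogram(a, bins=bins, density=False)
    return counts
# -----------------------------------------------------------------------------------------------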
              
#++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
#call back handler for motion output data from the h264 hw encoder
#this processes the motion vectors from the low resolution split capture
class MyMotionDetector(picamera.array.PiMotionAnalysis):
   th_counter = 0       # static variable within the class
   
   def analyse(self, a):
      global histo1, histo2, debug_dump_extra_motion_histo, histo_extra_debug
      global camera, motion_event, motion_timestamp, motion_array, motion_array_mask, motion_roi_count
      global prev_frame_annotation, curr_frame_annotation, motion_window_active, motion_frame_active
      # calculate length of motion vectors of mpeg macro blocks
      a = np.sqrt(
          np.square(a['x'].astype(np.float)) +
          np.square(a['y'].astype(np.float))
          ).clip(0, 255).astype(np.uint8)
      # zero out (mask out) anything outside our specified areas of interest, if we have a mask
      if motion_roi_count > 0:
          a = a * motion_array_mask
      # If more than 'motion_sensitivity' vectors have a magnitude greater than 'motion_threshold', flag motion on this frame
      th = ((a > motion_threshold).sum() > motion_sensitivity)
      now = time.time()
      # by now ...
      # th                     = motion detected on the current frame
      # motion_timestamp       = the last time motion was detected in a frame (start of time window)
      # motion_event.is_set()  = whether the motion detection time window is currently triggered
      #                        = it is only turned off if motion has previously been detected
      #                          and both "no motion detected" and its time window has expired
      #
      # update annotation on 1 second interval (superseded scheme, kept for reference)
      #   now = time.time()
      #   if( (now - MyMotionDetector.time_lastrun) >= 1.0 ):
      #      MyMotionDetector.time_lastrun = now
      #      camera.annotate_text = dt.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
      #
      # instead, use the alternative frame annotation scheme which marks motion on each frame
      if th:
          motion_timestamp = now
          motion_frame_active = "f"
          #logger.debug('frame motion detected')
          if debug_dump_extra_motion_histo:
              histo_extra_debug, histo_nil = np.histogram(a, bins=histo_bins, density=False) # hopefully this resets the "bin counts" too
              logger.debug("Histo motion frame: " + str(histo_extra_debug))
      else:
          motion_frame_active = "-"

      # MOTION DETECTION PROCESSING
      # if motion is detected, don't clear the detection flag until video_postseconds have passed
      if motion_event.is_set():
          if (now - motion_timestamp) >= video_postseconds:
              motion_event.clear()  
              MyMotionDetector.th_counter = 0
              motion_window_active = "-"
              logger.debug('Capture time window cleared')
      else:
          if th:
              MyMotionDetector.th_counter += 1
              if (MyMotionDetector.th_counter == 1): # first consecutive frame above the motion threshold
                  # create a histogram to describe the frame motion thresholds that were detected upon first detection
                  histo1, histo_nil = np.histogram(a, bins=histo_bins, density=False) # hopefully this resets the "bin counts" too
              elif (MyMotionDetector.th_counter == 2): # second consecutive frame above the motion threshold
                  # create a histogram to describe the frame motion thresholds that were detected upon second detection
                  histo2, histo_nil = np.histogram(a, bins=histo_bins, density=False) # hopefully this resets the "bin counts" too
                  motion_event.set()
                  motion_window_active = "w"
                  logger.debug('Second consecutive frame motion detected - capture time window set.')
                  if dump_png: # the dump_png .png file doesn't work 
                      idx = a > motion_threshold
                      a[idx] = 255
                      motion_array = a
              elif (MyMotionDetector.th_counter > 2): # more than 2 consecutive frames (normally unreachable: the event is set at 2, which bypasses this branch)
                  histo1 = histo0
                  histo2 = histo0
          else:
              # decrement threshold counter when a non-motion detection frame occurs
              #if(MyMotionDetector.th_counter):
                  #MyMotionDetector.th_counter -= 1
              if(MyMotionDetector.th_counter > 0):
                  MyMotionDetector.th_counter = 0
                  
      # annotate the frame
      curr_frame_annotation = dt.datetime.now().strftime('%Y-%m-%d %H:%M:%S') + " " + motion_window_active + motion_frame_active
      if  curr_frame_annotation != prev_frame_annotation:
          camera.annotate_text = curr_frame_annotation
          prev_frame_annotation = curr_frame_annotation
#++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
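
# --- illustrative sketch (added for this post; mirrors the magnitude step in analyse() above) ---
# the 'x'/'y' field names follow picamera's motion vector record layout
def _vector_magnitudes(mv):
    # per-block vector length, clipped into a uint8 just like the script does
    return np.sqrt(
        np.square(mv['x'].astype(float)) +
        np.square(mv['y'].astype(float))
    ).clip(0, 255).astype(np.uint8)
# ------------------------------------------------------------------------------------------------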

#++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
def write_video(stream):
     # Write the entire content of the circular buffer to disk. No need to
     # lock the stream here as we're definitely not writing to it
     # simultaneously.
     global motion_filename

     with io.open(motion_filename + '-before.h264', 'wb') as output:
         for frame in stream.frames:
             if frame.frame_type == picamera.PiVideoFrameType.sps_header:
                 stream.seek(frame.position)
                 break
         while True:
             buf = stream.read1()
             if not buf:
                 break
             output.write(buf)
     # Wipe the circular stream once we're done
     stream.seek(0)
     stream.truncate()
#++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++     

#++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
# run a command asynchronously in a background thread
# so the main loop is not held up waiting for that command to complete
class execute_cmd_asynchronously (threading.Thread):
    def __init__(self, threadID, cmd):
        threading.Thread.__init__(self)
        self.threadID = threadID
        self.cmd = cmd
    def run(self):
        subprocess.call(self.cmd, shell=True)
#++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

#++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
def run_background(threadID, cmd):
    if(cmd != ""):
      thread = execute_cmd_asynchronously(threadID, cmd)
      thread.start()
#++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
          
#--------------------------------
# MAIN CODE EXECUTION STARTS HERE
#--------------------------------

# create a rotating file logger
logger = logging.getLogger('pmd')
logger.setLevel(logging_level)
# create file handler which logs even debug messages
#fh = logging.FileHandler('pmd.log')
fh = logging.handlers.RotatingFileHandler(logger_filename, mode='a', maxBytes=(1024*1000 * 2), backupCount=5, delay=0)
fh.setLevel(logging_level)
# create formatter and add it to the handlers
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
fh.setFormatter(formatter)
# add the handlers to the logger
logger.addHandler(fh)
logger.info('----------------------------------------')
logger.info('pmd %s has been started' % (pmd_version))
logger.info('----------------------------------------')
msg = "Video capture filepath: %s" % (filepath)
logger.info(msg)
msg = "Capture videos with %d x %d resolution" % (video_width, video_height)
logger.info(msg)
msg = "Analyse motion vectors from a %d x %d resolution split" % (motion_width, motion_height)
logger.info(msg)
msg = "  resulting in %d x %d motion blocks" % (motion_cols, motion_rows)
logger.info(msg)
msg = "Analyse motion vectors threshold , sensitivity: %d , %d" % (motion_threshold, motion_sensitivity)
logger.info(msg)
msg = "Regions of Interest: %d" % (motion_roi_count)
logger.info(msg)
msg = "Framerate: %d fps" % (video_framerate)
logger.info(msg)
msg = "video_intra_period: %d" % (video_intra_period)
logger.info(msg)
msg = "Rotation: %d degrees" % (video_rotation)
logger.info(msg)
msg = "Stabilization: %r" % (video_stabilization)
logger.info(msg)
msg = "Annotate background: %r" % (video_annotate_background)
logger.info(msg)
msg = "Annotate frame_num: %r" % (video_annotate_frame_num)
logger.info(msg)
msg = "Video detection event capture before-seconds , after-seconds: %d , %d" % (video_preseconds, video_postseconds)
logger.info(msg)
msg = "motion_event_wait_timeout: %s" % (motion_event_wait_timeout)
logger.info(msg)
msg = "perform_snapshot_capture: %r" % (perform_snapshot_capture)
logger.info(msg)
msg = "snapshot_capture_filename: %s" % (snapshot_capture_filename)
logger.info(msg)
msg = "logger_filename: %s" % (logger_filename)
logger.info(msg)
msg = "mp4_mode: %r" % (mp4_mode)
logger.info(msg)
msg = "Histo_frame motion threshold bins: " + str(histo_bins)
logger.info(msg)
msg = "Logging Level: %d (info=%d, debug=%d)" % (logging_level, logging.INFO, logging.DEBUG)
logger.info(msg)
msg = "Dump extra motion histo info: %r" % (debug_dump_extra_motion_histo)
logger.info(msg)

with picamera.PiCamera() as camera:
   camera.resolution = (video_width, video_height)
   camera.framerate = video_framerate
   camera.rotation = video_rotation
   camera.video_stabilization = video_stabilization
   camera.annotate_background = video_annotate_background
   camera.annotate_frame_num = video_annotate_frame_num

   # setup a circular IO buffer to contain video of "before motion detection" footage
   stream = picamera.PiCameraCircularIO(camera, seconds = video_preseconds)
   # 1. split the hi resolution video recording into circular buffer from splitter port 1
   # note: splitting video recording happens at a key frame, so see parameter video_intra_period
   camera.start_recording(stream, format='h264', splitter_port=1, inline_headers=True, intra_period=video_intra_period)
   # 2. split the low resolution motion vector analysis from splitter port 2, throw away the actual video
   camera.start_recording('/dev/null', splitter_port=2, resize=(motion_width,motion_height) ,format='h264', motion_output=MyMotionDetector(camera, size=(motion_width,motion_height)))
   # wait some seconds for stable video data to be available
   camera.wait_recording(2, splitter_port=1)
   joiner_thread_id = 0
   logger.info('OK. Commence waiting for first REAL motion to be detected')
   try:
       while True:
          # the callback "MyMotionDetector" has been setup above using the low resolution split
          # original code "while true" above ... loop around as fast as we can go until motion is detected ... thus 100 percent cpu ?
          # a motion event must trigger this action here
#####         camera.annotate_text = dt.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
          # hmm, do an event wait with "motion_event_wait_timeout" seconds before timing out. It returns True if the event was set, False if not.
          msg = "Entering event wait state awaiting next motion detection by class MyMotionDetector ..."
          logger.info(msg)
          motion_event.clear()
          msg = "(also, window capture status was reset prior to waiting for motion event)"
          logger.debug(msg)
          if motion_event.wait(motion_event_wait_timeout):
             histo1_tmp = histo1
             histo2_tmp = histo2
             logger.info('Detected motion')
             logger.info("Histo frame 1: " + str(histo1_tmp))
             logger.info("Histo frame 2: " + str(histo2_tmp))
             motion_filename = filepath + "/" + time.strftime("%Y%m%d-%H%M%S", time.localtime(motion_timestamp))
             # split  the high res video stream to a file instead of to the internal circular buffer
             logger.debug('splitting video from circular IO buffer to after-motion-detected h264 file ')
             camera.split_recording(motion_filename + '-after.h264', splitter_port=1)
             # catch an image as video preview during video recording (uses splitter port 0) at time of the motion event
             msg = "started  capture jpeg image file %s" % (motion_filename + ".jpg")
             logger.debug(msg)
             camera.capture_sequence([motion_filename + '.jpg'], use_video_port=True, splitter_port=0)
             msg = "finished capture jpeg image file %s" % (motion_filename + ".jpg")
             logger.debug(msg)
             # if we want to see dump_png motion stuff, dump motion array as a png image
             if dump_png:
                 logger.debug('saving dump_png motion vectors')
                 img = Image.fromarray(motion_array)
                 img.save(motion_filename + "-motion.png")
             # save circular buffer containing "before motion" event video, ie write it to a file
             logger.debug('started  saving before-motion circular buffer')
             write_video(stream)
             logger.debug('finished saving before-motion circular IO buffer')
             #---- wait for the end of motion event here, in one second increments
             logger.debug('start waiting to detect end of motion')
             #while motion_detected stay inside the loop below, recording 
             while motion_event.is_set():
                camera.wait_recording(0.5, splitter_port=1)
             #---- end of motion event detected
             logger.info('Detected end of motion')
             #split video recording back in to circular buffer at next key frame (see parameter video_intra_period)
             logger.debug('splitting video back into the circular IO buffer')
             camera.split_recording(stream, splitter_port=1)
             # transcode to mp4, or h264, using a separate thread to avoid blocking this loop
             if mp4_mode:
                 # MP4box documentation  http://manpages.ubuntu.com/manpages/quantal/man1/MP4Box.1.html
                 # -isma also forces the mp4's MOOV atom to be at the start of the mp4 file, which thus permits streaming the mp4 file prior to its full download
                 #       rewrites the file as an ISMA 1.0 Audio/Video file  (all  systems info rewritten) with proper clock references
                  # -hint hints the file for RTP/RTSP sessions.  Payload type is automatically detected and configured
                 # -new forces file over-write, in case something has gone belly up in our mp4 file naming logic
                 # -fps forces the mp4 playback speed to be in "normal time"
                 # -noprog disables progress reports during processing
                  # -force-cat skips the media configuration check when concatenating files
                 joiner_cmd  = "MP4Box -noprog -force-cat -fps %d -cat %s -cat %s -isma -hint -new %s > /dev/null && rm -f %s > /dev/null && rm -f %s > /dev/null" % (video_framerate, motion_filename + "-before.h264", motion_filename + "-after.h264", motion_filename + ".mp4", motion_filename + "-before.h264", motion_filename + "-after.h264")
             else:
                 # this will generally be a bit quicker in CPU/disk constrained systems but it is not as useful a file
                 joiner_cmd = "cat %s %s > %s && rm -f %s && rm -f %s" % (motion_filename + "-before.h264", motion_filename + "-after.h264", motion_filename + ".h264", motion_filename + "-before.h264", motion_filename + "-after.h264")
             joiner_thread_id = joiner_thread_id + 1
             msg = "starting new video post-processing thread %d for h264 files" % (joiner_thread_id)
             logger.debug(msg)
             logger.debug(joiner_cmd)
             #--- it takes 2 calls to start a thread asynchronously - a setup and then a start
             #joiner_thread = execute_cmd_asynchronously(joiner_thread_id, joiner_cmd) 
             #joiner_thread.start()
              # instead, we do it a new way as shown by killagreg - one wrapper function call
             run_background(joiner_thread_id, joiner_cmd)
             #---
             msg = "after starting video post-processing thread %d for h264 files" % (joiner_thread_id)
             logger.info(msg)
             msg = "Finished motion capture processing after detecting end-of-motion."
             logger.info(msg)
          else:
             # no motion detected or in progress
             logger.debug("Motion detector timed out")
             # if webcam mode, capture images on the regular timeout interval
             if perform_snapshot_capture:
                 #snapf = filepath + "/" + snapshot_capture_filename + "-" + time.strftime("%Y%m%d-%H%M%S", time.gmtime(motion_timestamp))
                 snapf = filepath + "/" + snapshot_capture_filename + "-" + time.strftime("%Y%m%d-%H%M%S", time.localtime(time.time()))
                 camera.capture_sequence([snapf + ".jpg"], use_video_port=True, splitter_port=0)
                 logger.info("Captured snapshot")

   finally:
       camera.stop_recording(splitter_port=1)
       camera.stop_recording(splitter_port=2)
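For reference, the run_background(...) wrapper called above is not shown in this excerpt; a minimal sketch of what such a helper might look like (the function name and signature are assumptions taken from the call site) is:

```python
import subprocess
import threading

def run_background(thread_id, cmd):
    # Launch the shell command on a daemon thread so the MP4Box/cat
    # post-processing does not block the motion-capture loop.
    # NOTE: this is a hypothetical reconstruction of the helper used above.
    def worker():
        subprocess.call(cmd, shell=True)
    t = threading.Thread(target=worker, name="joiner-%d" % thread_id)
    t.daemon = True  # do not keep the interpreter alive waiting for joins
    t.start()
    return t
```

The returned thread object can be joined during shutdown if you want to wait for in-flight transcodes to finish.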
Last edited by hydra3333 on Sun Jan 04, 2015 3:14 am, edited 2 times in total.

hydra3333
Posts: 146
Joined: Thu Jan 10, 2013 11:48 pm

boot auto-start of Lightweight python motion detection

Sun Jan 04, 2015 12:46 am

Linux/Python newbie alert :?

I used this "install" script to make the process repeatable for me, and it seems to work, generally, sort of ...

Code: Select all

#!/bin/sh
# -*- coding: utf-8 -*-
#
# to fix scripts, turning them from MS-DOS format to unix format
# to get rid of MSDOS format do this to this file: 
# sudo sed -i s/\\r//g ./*.py
# sudo sed -i s/\\r//g ./*.sh
#
# install pmd, assuming the things are all in the local directory
set -x

sudo apt-get install -y rpi-update

sudo rpi-update

sudo apt-get update -y

sudo apt-get upgrade -y

sudo apt-get install python-picamera python-imaging-tk gpac -y

sudo mkdir /var/pmd
sudo chmod 777 /var/pmd

sudo cp ./pmd.py /var/pmd/
sudo chmod 777 /var/pmd/pmd.py

#
# example installation including the python script to be run as a daemon pmd.py:
#        ... copy the pmd.py to /var/pmd/pmd.py
#        cp ./pmd.py /var/pmd/pmd.py
#        chmod 777 /var/pmd/pmd.py
#        ... copy the pmd_stop_start.sh to /etc/init.d (notice the .sh stays throughout) eg
#        cp ./pmd_stop_start.sh /etc/init.d/
#        chmod 777 /etc/init.d/pmd_stop_start.sh
#        # adds symbolic links to the /etc/rc.x directories so that the init script is run at the default times
#        sudo update-rc.d pmd_stop_start.sh defaults
#        sudo update-rc.d pmd_stop_start.sh enable
#        # you can see these links if you do this
#        ls -l /etc/rc?.d/*pmd_stop_start.sh
#
# You should be able to start your Python script pmd.py using the command 
#   sudo /etc/init.d/pmd_stop_start.sh start
# and check its status with 
#   sudo /etc/init.d/pmd_stop_start.sh status 
# and stop it with 
#   sudo /etc/init.d/pmd_stop_start.sh stop
#
# To remove a script from being executed on every boot (notice the .sh stays throughout)
# you delete all links from those folders.
# Usually on Debian systems this is done using the update-rc.d tool:
#   #sudo update-rc.d pmd_stop_start.sh disable
#   sudo update-rc.d -f pmd_stop_start.sh remove
#   sudo rm /etc/init.d/pmd_stop_start.sh

sudo cp ./pmd_stop_start.sh /etc/init.d/
sudo chmod 777 /etc/init.d/pmd_stop_start.sh
# old way: IT WORKS
sudo update-rc.d pmd_stop_start.sh defaults
sudo update-rc.d pmd_stop_start.sh enable
## new way:
## *** Starting with Debian 6.0 *** the insserv command is used instead, if dependency-based booting is enabled:
#insserv pmd_stop_start
sudo ls -l /etc/rc?.d/*pmd_stop_start.sh

# You should be able to start your Python script pmd.py using the command 
#   sudo /etc/init.d/pmd_stop_start.sh start
# and check its status with 
#   sudo /etc/init.d/pmd_stop_start.sh status 
# and stop it with 
#   sudo /etc/init.d/pmd_stop_start.sh stop
#

sudo /etc/init.d/pmd_stop_start.sh status 
sudo /etc/init.d/pmd_stop_start.sh start
sudo /etc/init.d/pmd_stop_start.sh status 

sudo ps -eF | grep pmd

#sudo /etc/init.d/pmd_stop_start.sh stop
#sudo /etc/init.d/pmd_stop_start.sh status 

sudo ps -eF | grep pmd
However, the "sudo /etc/init.d/pmd_stop_start.sh status" fails, even when I run the "start" and "status" manually.
A "sudo ps -eF | grep pmd" tells me there's something running in the background.

Following the kind advice above (and some googling to find out what it means) this is the "pmd_stop_start.sh" I ended up with:

Code: Select all

#!/bin/sh
### BEGIN INIT INFO
# Provides:          pmd_stop_start
# Required-Start:    $all $syslog
# Required-Stop:     $all $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: pmd motion detector
# Description:       pmd python lightweight motion detector
### END INIT INFO
#
# pmd_stop_start.sh - to start motion detector on boot and stop/start it when you like
#
# http://blog.scphillips.com/2013/07/getting-a-python-script-to-run-in-the-background-as-a-service-on-boot/
# http://www.raspberrypi.org/forums/viewtopic.php?p=662467#p662467
#
# to fix scripts, turning them from MS-DOS format to unix format
# to get rid of MSDOS format do this to this file: 
# sudo sed -i s/\\r//g ./*.py
# sudo sed -i s/\\r//g ./*.sh
#
# example installation including the python script to be run as a daemon pmd.py:
#        ... copy the pmd.py to /var/pmd/pmd.py
#        cp ./pmd.py /var/pmd/pmd.py
#        chmod 777 /var/pmd/pmd.py
#        ... copy the pmd_stop_start.sh to /etc/init.d (notice the .sh stays throughout) eg
#        cp ./pmd_stop_start.sh /etc/init.d/
#        chmod 777 /etc/init.d/pmd_stop_start.sh
#        # adds symbolic links to the /etc/rc.x directories so that the init script is run at the default times
#        sudo update-rc.d pmd_stop_start.sh defaults
#        sudo update-rc.d pmd_stop_start.sh enable
#        # you can see these links if you do this
#        ls -l /etc/rc?.d/*pmd_stop_start.sh
#
# You should be able to start your Python script pmd.py using the command 
#   sudo /etc/init.d/pmd_stop_start.sh start
# and check its status with 
#   sudo /etc/init.d/pmd_stop_start.sh status 
#   sudo ps -eF | grep pmd
# and stop it with 
#   sudo /etc/init.d/pmd_stop_start.sh stop
#
# To remove a script from being executed on every boot (notice the .sh stays throughout)
# you delete all links from those folders.
# Usually on Debian systems this is done using the update-rc.d tool:
#   #sudo update-rc.d pmd_stop_start.sh disable
#   sudo update-rc.d -f pmd_stop_start.sh remove
#   sudo rm /etc/init.d/pmd_stop_start.sh
#
#-----------------------------------------------------------------------------------
# https://wiki.debian.org/LSBInitScripts/DependencyBasedBoot
# In Debian releases *** prior to 6.0 *** a service could be added with update-rc.d:
#    update-rc.d pmd_stop_start defaults
# *** Starting with Debian 6.0 *** the insserv command is used instead, if dependency-based booting is enabled:
#    insserv pmd_stop_start
# Where pmd_stop_start is an executable init script placed in /etc/init.d, insserv will produce no output if everything went OK.
# Examine the error code in $? if you want to be sure.
# Both the old and the new way requires an init script to be present in /etc/init.d.
#   sudo insserv pmd_stop_start
#-----------------------------------------------------------------------------------
#
DIR=/var/pmd
DAEMON=$DIR/pmd.py
DAEMON_NAME=pmd.py
DAEMON_OPTS=""
# This next line determines what user the script runs as.
# Root generally not recommended but necessary if you are using the Raspberry Pi GPIO from Python.
DAEMON_USER=root
# The process ID of the script when it runs is stored here:
PIDFILE=$DIR/$DAEMON_NAME.pid

. /lib/lsb/init-functions

do_start () {
    log_daemon_msg "Starting system $DAEMON_NAME daemon"
    start-stop-daemon --start --background --pidfile $PIDFILE --make-pidfile --user $DAEMON_USER --chuid $DAEMON_USER --startas $DAEMON -- $DAEMON_OPTS
    log_end_msg $?
}
do_stop () {
    log_daemon_msg "Stopping system $DAEMON_NAME daemon"
    start-stop-daemon --stop --pidfile $PIDFILE --retry 10
    log_end_msg $?
}

case "$1" in

    start|stop)
        do_${1}
        ;;

    restart|reload|force-reload)
        do_stop
        do_start
        ;;

    status)
        status_of_proc "$DAEMON_NAME" "$DAEMON" && exit 0 || exit $?
        ;;
    *)
        echo "Usage: /etc/init.d/$DAEMON_NAME {start|stop|restart|status}"
        exit 1
        ;;

esac
exit 0
Do you have any advice why the command "sudo /etc/init.d/pmd_stop_start.sh status" fails ?

Also ... after a reboot "sudo ps -eF | grep pmd" shows it is running in the background, however there's nothing shown in the boot log from a "dmesg | grep pmd" (confused, I thought there'd be a bootup log message).

bdalton
Posts: 4
Joined: Fri Jan 02, 2015 9:18 pm

Re: Lightweight python motion detection

Tue Jan 06, 2015 7:13 pm

Sorry, maybe a forum newb question.... I'm having troubles with indentation errors in this script. Is there a proper way to copy/paste from the forum?
killagreg wrote:I have invoked the thread seperation now for mp4 transcoding after motion recording.

Code: Select all

#!/usr/bin/python

# This script implements a motion capture surveillance cam for raspberry pi using picam.
# It uses the motion vector magnitudes of the h264 hw-encoder to detect motion activity.
# At the time of motion detection a jpg snapshot is saved together with a h264 video stream
# some seconds before, during and after motion activity to the 'filepath' directory.

import os, logging
import subprocess
import threading
import io
import picamera
import picamera.array
import numpy as np
import datetime as dt
import time
from PIL import Image

#debug mode?
debug = False
#setup filepath for motion and capture data output
filepath = '/var/www/motion'
#setup filepath for logfile
logger_filepath = 'picam.log'
# setup pre and post video recording around motion event
video_preseconds = 3
video_postseconds = 3
#setup video resolution
video_width = 1280 
video_height = 720
video_framerate = 25
#setup cam rotation (0, 180)
cam_rotation = 180

# setup motion detection resolution
motion_width = 320
motion_height = 240
# setup motion detection threshold, i.e. magnitude of a motion block to count as motion
motion_threshold = 60
# setup motion detection sensitivity, i.e. number of motion blocks that trigger a motion detection
motion_sensitivity = 6
# regions of interest define the areas within which the motion analysis is done
# [ [[start pixel on left side,end pixel on right side],[start pixel on top side,stop pixel on bottom side]] ]
# default is the whole image frame
motion_roi_count = 0
# another example
#motion_roi_count = 1
#motion_roi = [ [[270,370], [190,290]]  ]
# example for 2 mask areas
#motion_roi_count = 2
#motion_roi = [ [[1,320],[1,240]], [[400,500],[300,400]] ]

# setup capture interval
capture_interval = 10
capture_filename = "snapshot"
# do not change code behind that line
#--------------------------------------
motion_event = threading.Event()
motion_timestamp = time.time()
motion_cols = (motion_width  + 15) // 16 + 1
motion_rows = (motion_height + 15) // 16
if(motion_roi_count or debug):
   motion_array = np.zeros((motion_rows, motion_cols), dtype = np.uint8)
# create motion mask
if motion_roi_count > 0:
   motion_array_mask = np.zeros((motion_rows, motion_cols), dtype = np.uint8)
   for count in xrange(0, motion_roi_count):
      for col in xrange( (motion_roi[count][0][0]-1)//16, (motion_roi[count][0][1]-1+15)//16 ):
         for row in xrange( (motion_roi[count][1][0]-1)//16, (motion_roi[count][1][1]-1+15)//16 ):
            motion_array_mask[row][col] = 1

capture_timestamp = time.time()

#call back handler for motion output data from h264 hw encoder
class MyMotionDetector(picamera.array.PiMotionAnalysis):
   def analyse(self, a):
      global motion_event, motion_timestamp, motion_array, motion_array_mask, motion_roi_count
      # calculate length of motion vectors of mpeg macro blocks
      a = np.sqrt(
          np.square(a['x'].astype(np.float)) +
          np.square(a['y'].astype(np.float))
          ).clip(0, 255).astype(np.uint8)
      if motion_roi_count > 0:
         a = a * motion_array_mask
      # If there're more than 'sensitivity' vectors with a magnitude greater
      # than 'threshold', then say we've detected motion
      th = ((a > motion_threshold).sum() > motion_sensitivity)
      now = time.time()
      # motion logic, trigger on motion and stop after 2 seconds of inactivity
      if th:
         motion_timestamp = now

      if motion_event.is_set():
         if (now - motion_timestamp) >= video_postseconds:
            motion_event.clear()
      else:
         if th:
            motion_event.set()
            if debug:
               idx = a > motion_threshold
               a[idx] = 255
               motion_array = a


def write_video(stream):
# Write the entire content of the circular buffer to disk. No need to
# lock the stream here as we're definitely not writing to it
# simultaneously
     global motion_filename

     with io.open(motion_filename + '-before.h264', 'wb') as output:
         for frame in stream.frames:
            if frame.frame_type == picamera.PiVideoFrameType.sps_header:
                stream.seek(frame.position)
                break
         while True:
            buf = stream.read1()
            if not buf:
               break
            output.write(buf)
     # Wipe the circular stream once we're done
     stream.seek(0)
     stream.truncate()


class myThread (threading.Thread):
    def __init__(self, threadID, name, counter):
        threading.Thread.__init__(self)
        self.threadID = threadID
        self.name = name
        self.counter = counter
    def run(self):
        subprocess.call("MP4Box -cat %s -cat %s %s && rm -f %s" % (self.name + "-before.h264", self.name + "-after.h264", self.name + ".mp4", self.name + "-*.h264"), shell=True)


# create logger
logger = logging.getLogger('PiCam')
logger.setLevel(logging.DEBUG)
# create file handler which logs even debug messages
fh = logging.FileHandler(logger_filepath)
fh.setLevel(logging.DEBUG)
# create formatter and add it to the handlers
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
fh.setFormatter(formatter)
# add the handlers to the logger
logger.addHandler(fh)
logger.info('---------------------------------')
logger.info('PiCam has been started')
msg = "Capture videos with %dx%d resolution" % (video_width, video_height)
logger.info(msg)
msg = "Analyze motion with %dx%d resolution" % (motion_width, motion_height)
logger.info(msg)
msg =  "  resulting in %dx%d motion blocks" % (motion_cols, motion_rows)
logger.info(msg)

with picamera.PiCamera() as camera:
    camera.resolution = (video_width, video_height)
    camera.framerate = video_framerate
    camera.rotation = cam_rotation
    camera.video_stabilization = True
    camera.annotate_background = True
    # setup a circular buffer
    stream = picamera.PiCameraCircularIO(camera, seconds = video_preseconds)
    # hi resolution video recording into circular buffer from splitter port 1
    camera.start_recording(stream, format='h264', splitter_port=1)
    # low resolution motion vector analysis from splitter port 2
    camera.start_recording('/dev/null', splitter_port=2, resize=(motion_width,motion_height), format='h264', motion_output=MyMotionDetector(camera, size=(motion_width,motion_height)))
    # wait some seconds for stable video data
    camera.wait_recording(2, splitter_port=1)
    motion_event.clear()
    logger.info('waiting for motion')
    thread_id = 1
    try:
        while True:
            # motion event must trigger this action here
            camera.annotate_text = dt.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
            if motion_event.wait(1):
                logger.info('motion detected')
                motion_filename = filepath + "/" + time.strftime("%Y%m%d-%H%M%S", time.gmtime(motion_timestamp))
                camera.split_recording(motion_filename + '-after.h264', splitter_port=1)
                # catch an image as video preview during video recording (uses splitter port 0) at time of the motion event
                camera.capture_sequence([motion_filename + '.jpg'], use_video_port=True, splitter_port=0)
                # dump motion array as image
                if debug:
                    img = Image.fromarray(motion_array)
                    img.save(motion_filename + "-motion.png")
                # save circular buffer before motion event
                write_video(stream)
                # wait for end of motion event here
                while motion_event.is_set():
                    camera.annotate_text = dt.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
                    camera.wait_recording(1, splitter_port=1)
                # split video recording back into the circular buffer
                camera.split_recording(stream, splitter_port=1)
                # transcode to mp4 using a separate thread
                thread = myThread(thread_id, motion_filename, thread_id)
                thread.start()
                thread_id = thread_id + 1
                logger.info('motion stopped')
            else:
                # webcam mode, capture images on a regular interval
                if capture_interval:
                    if time.time() > (capture_timestamp + capture_interval):
                        capture_timestamp = time.time()
                        # logger.info('capture snapshot')
                        camera.capture_sequence([filepath + "/" + capture_filename + ".jpg"], use_video_port=True, splitter_port=0)

    finally:
        camera.stop_recording(splitter_port=1)
        camera.stop_recording(splitter_port=2)
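The detection rule inside analyse() above can be exercised on its own. This sketch reproduces the magnitude/threshold logic on synthetic motion-vector arrays (the array shape is arbitrary, chosen only for illustration; the parameter names match the script):

```python
import numpy as np

motion_threshold = 60   # magnitude for a macroblock to count as motion
motion_sensitivity = 6  # number of such blocks that trigger detection

def is_motion(mv_x, mv_y):
    # same computation as MyMotionDetector.analyse: per-block vector
    # magnitude, clipped to a byte, then a count-above-threshold test
    mag = np.sqrt(np.square(mv_x.astype(float)) +
                  np.square(mv_y.astype(float))).clip(0, 255).astype(np.uint8)
    return (mag > motion_threshold).sum() > motion_sensitivity

rows, cols = 15, 21                 # roughly a 320x240 analysis stream
still = np.zeros((rows, cols))
print(is_motion(still, still))      # False: no vectors above threshold

moving = np.zeros((rows, cols))
moving[0:2, 0:4] = 100              # 8 blocks with magnitude 100 > 60
print(is_motion(moving, still))     # True: 8 blocks > motion_sensitivity
```

This makes it easy to tune motion_threshold and motion_sensitivity offline before deploying to the camera.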

User avatar
DougieLawson
Posts: 37088
Joined: Sun Jun 16, 2013 11:19 pm
Location: Basingstoke, UK
Contact: Website Twitter

Re: Lightweight python motion detection

Tue Jan 06, 2015 7:24 pm

bdalton wrote:Sorry, maybe a forum newb question.... I'm having troubles with indentation errors in this script. Is there a proper way to copy/paste from the forum?
The code is a shell script not python, so the indentation doesn't matter.
Note: Having anything humorous in your signature is completely banned on this forum. Wear a tin-foil hat and you'll get a ban.

Any DMs sent on Twitter will be answered next month.

This is a doctor free zone.

bdalton
Posts: 4
Joined: Fri Jan 02, 2015 9:18 pm

Re: Lightweight python motion detection

Tue Jan 06, 2015 7:27 pm

DougieLawson wrote:The code is a shell script not python, so the indentation doesn't matter.
The code I referenced is python, however I think I found a later version that is indented properly. Sorry for the confusion. Definite newb problems.

User avatar
DougieLawson
Posts: 37088
Joined: Sun Jun 16, 2013 11:19 pm
Location: Basingstoke, UK
Contact: Website Twitter

Re: Lightweight python motion detection

Tue Jan 06, 2015 7:35 pm

What happens when you copy / paste?

Have you tried hitting the forum "QUOTE" option and lifting the whole text of the posting rather than the "CODE SELECT" and copy?

It is the major pain with python and whoever designed a programming language like that needs to meet my educational Louisville Slugger. Using {} to delimit blocks of code may be ugly but it just works and it's easy to identify errors. Python can simply run the wrong code because you missed some spaces or missed a tab.
