Openhapp makes a WiFi security camera with great video quality and excellent night vision.
We installed one Openhapp WiFi security camera at the front door, and we can watch the real-time video using the free mobile app that comes with it.
But what about motion detection, and saving the captured video clips to the cloud?
Saving the clips to Dropbox would be ideal, since Dropbox has PC/Mac clients as well as mobile apps, so we can watch the clips on a PC/Mac or a smartphone.
We chose the pcDuino8 Uno to run the motion detection algorithm and upload the captured video clips to Dropbox, as its 8 cores are powerful enough to handle the computation.
On the pcDuino8 Uno, we will use OpenCV to perform the motion detection. The motion detection algorithm is adapted from Adrian's post on basic motion detection. To make things easier, LinkSprite releases an image for the pcDuino8 Uno with OpenCV prebuilt.
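Adrian's approach boils down to comparing each incoming frame against a slowly updated background model and treating large contours in the thresholded difference as motion. The following is a minimal sketch of that idea, assuming a Python 2 / OpenCV 2.x environment like the one on the pcDuino8 Uno image; the capture source (0, the first local camera) and the threshold values are placeholders, and the complete script later in this post applies the same steps to the RTSP stream:

# Minimal sketch of frame-differencing motion detection (Python 2 / OpenCV 2.x assumed).
import cv2
import numpy as np

cap = cv2.VideoCapture(0)   # placeholder source; the full script uses an RTSP URL
avg = None

while True:
    (grabbed, frame) = cap.read()
    if not grabbed:
        break

    # grayscale + blur to suppress sensor noise before differencing
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (21, 21), 0)

    if avg is None:
        # the first frame initializes the background model
        avg = np.float32(gray)
        continue

    # diff the frame against the background, then update the background slowly
    frameDelta = cv2.absdiff(cv2.convertScaleAbs(avg), gray)
    cv2.accumulateWeighted(gray, avg, 0.1)

    # binarize and dilate the difference, then look for large contours
    thresh = cv2.threshold(frameDelta, 50, 255, cv2.THRESH_BINARY)[1]
    thresh = cv2.dilate(thresh, None, iterations=2)
    (cnts, _) = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    if any(cv2.contourArea(c) > 500 for c in cnts):
        print "motion detected"

    cv2.imshow("sketch", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()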
How do we read video frames from the Openhapp WiFi security camera? We use OpenCV's VideoCapture over the camera's RTSP stream. The sample code is as follows (the exact RTSP path depends on the camera model, so check its documentation):
camera = cv2.VideoCapture("rtsp://192.168.1.19")
(grabbed, frame) = camera.read()
In the code, 192.168.1.19 is the IP address of the Openhapp WiFi security camera. To find the camera's IP address, we can use the accompanying mobile app, mysnapcam:
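Before wiring everything together, it helps to confirm that OpenCV can actually open the stream. The quick check below is only a sketch: the plain rtsp:// URL form is an assumption, and some cameras require an extra stream path or credentials in the URL.

import cv2

# adjust the URL to match your camera; some models need a stream path or
# username/password embedded in it
rtsp_address = "rtsp://192.168.1.19"
camera = cv2.VideoCapture(rtsp_address)

if not camera.isOpened():
    print "could not open the RTSP stream"
else:
    (grabbed, frame) = camera.read()
    if grabbed:
        print "got a frame:", frame.shape   # (height, width, channels)
    else:
        print "stream opened, but no frame was returned"

camera.release()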
Before we dig into the code, we also need to create a Dropbox token. Assuming you already have a Dropbox account, create an app in Dropbox and obtain the token:
Hit the 'Generate' button to generate the token. You will need this token in the Python script.
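A quick way to confirm the token works is to link the account and push a small test file, using the same legacy dropbox.client.DropboxClient interface that the full script relies on (the token string and the test.txt filename are placeholders):

import dropbox

access_token = 'your dropbox token'          # paste the generated token here
client = dropbox.client.DropboxClient(access_token)
print 'linked account:', client.account_info()

# upload a small local file to verify put_file works end to end
with open('test.txt', 'rb') as f:
    response = client.put_file('/test.txt', f)
print 'uploaded:', response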
Finally, here is the complete Python script. Note that it targets Python 2 and the OpenCV 2.x API shipped with the pcDuino8 Uno image (cv2.cv.CV_FOURCC, the two-value findContours return), together with the legacy dropbox.client.DropboxClient interface:
# please change the IP address of the RTSP camera
import argparse
import datetime
import imutils
import time
import cv2
import numpy as np
import dropbox
import os
import urllib2

cam_ipaddress = "192.168.1.6"
# NOTE: the original listing reads from rtsp_address without defining it; build it
# from the camera IP here and adjust the stream path to whatever your camera expects.
rtsp_address = "rtsp://" + cam_ipaddress

access_token = 'your dropbox token'
client = dropbox.client.DropboxClient(access_token)
print 'linked account:', client.account_info()

# the moving average forgetting parameter
alpha = 0.1
isWritingClip = 0
sen_thresh = 50

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-v", "--video", help="path to the video file")
ap.add_argument("-a", "--min-area", type=int, default=500, help="minimum area size")
args = vars(ap.parse_args())

camera = cv2.VideoCapture(rtsp_address)

# Define the codec and create VideoWriter object
fourcc = cv2.cv.CV_FOURCC(*'XVID')

# initialize the first frame in the video stream
firstFrame = None

# loop over the frames of the video
i_notgrabbed = 0
while True:
    # grab the current frame and initialize the occupied/unoccupied text
    timestr = time.strftime("%Y%m%d-%H%M%S")
    try:
        (grabbed, frame) = camera.read()
    except:
        continue
    try:
        width = np.size(frame, 1)
    except:
        continue
    height = np.size(frame, 0)
    frameSize = (width, height)
    text = "Unoccupied"

    # if the frame could not be grabbed, reopen the stream after 20 failures
    if not grabbed:
        i_notgrabbed = i_notgrabbed + 1
        print timestr, grabbed, i_notgrabbed
        if i_notgrabbed > 20:
            camera.release()
            try:
                camera = cv2.VideoCapture(rtsp_address)
            except:
                continue
            i_notgrabbed = 0
        continue

    OriginalFrame = frame

    # resize the frame, convert it to grayscale, and blur it
    frame = imutils.resize(frame, width=300)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (21, 21), 0)

    # if the first frame is None, initialize it
    if firstFrame is None:
        firstFrame = gray
        avg = np.float32(firstFrame)
        continue

    # compute the absolute difference between the current frame and
    # the background frame
    frameDelta = cv2.absdiff(firstFrame, gray)
    thresh = cv2.threshold(frameDelta, sen_thresh, 255, cv2.THRESH_BINARY)[1]

    # dilate the thresholded image to fill in holes, then find contours
    # on the thresholded image
    thresh = cv2.dilate(thresh, None, iterations=2)
    (cnts, _) = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    # update the background with a running average
    cv2.accumulateWeighted(gray, avg, alpha)
    firstFrame = cv2.convertScaleAbs(avg)

    # loop over the contours
    for c in cnts:
        # if the contour is too small, ignore it
        if cv2.contourArea(c) < args["min_area"]:
            continue
        # compute the bounding box for the contour, draw it on the frame,
        # and update the text
        (x, y, w, h) = cv2.boundingRect(c)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        text = "Occupied"

    if text == 'Occupied':
        if isWritingClip == 0:
            clipfilename = timestr + '.avi'
            out = cv2.VideoWriter(clipfilename, fourcc, 20.0, frameSize)
            isWritingClip = 1
        out.write(OriginalFrame)
        start_time = time.time()

    if text == 'Unoccupied':
        if isWritingClip == 1:
            elapsed_time = time.time() - start_time
            # after a motion is detected, if the quiet gap in between is longer
            # than 60 seconds, close and upload the clip; the next motion
            # starts a new clip
            if elapsed_time > 60:
                isWritingClip = 0
                out.release()
                try:
                    f = open(clipfilename, 'rb')
                    response = client.put_file(clipfilename, f)
                    print "uploaded:", response
                    os.remove(clipfilename)
                except:
                    os.remove(clipfilename)
                    continue
            else:
                out.write(OriginalFrame)

    # print isWritingClip

    # draw the text and timestamp on the frame
    cv2.putText(frame, "Room Status: {}".format(text), (10, 20),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2)
    cv2.putText(frame, datetime.datetime.now().strftime("%A %d %B %Y %I:%M:%S%p"),
                (10, frame.shape[0] - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.35, (0, 0, 255), 1)

    # show the frame and record if the user presses a key
    cv2.imshow("Security Feed", frame)
    # cv2.imshow("Thresh", thresh)
    # cv2.imshow("Frame Delta", frameDelta)
    key = cv2.waitKey(1) & 0xFF

    # if the `q` key is pressed, break from the loop
    if key == ord("q"):
        break

# cleanup the camera and close any open windows
camera.release()
cv2.destroyAllWindows()
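Save the script on the pcDuino8 Uno (the name motion_dropbox.py is just an example), make sure the imutils package and the legacy Dropbox SDK are installed, and start it with something like: python motion_dropbox.py --min-area 500. Press q in the 'Security Feed' window to quit; on a headless setup you would need to remove the cv2.imshow preview and the keyboard check that goes with it.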