
Setting up a Linux Webcam

1.0 Introduction

The previous owners of our house were clearly a bit paranoid, since it is festooned with six webcams of assorted varieties, none of which we bother to use. When we moved in, the cables to all of them were unceremoniously severed! In retrospect, this was probably a little over-enthusiastic on our part: the concept of video surveillance of the house is a good one. I just wanted it to be a lot more discreet… and cheap! After a couple of weeks of experimentation and dabblement, therefore, we are now the proud owners of a two-camera system that provides good coverage of the front of our property (the rear of it is inaccessible to ne'er-do-wells) for about £120 all up. I went down some blind alleys in getting to a point where I was happy, though, and I thought I'd tell of the failures a little before documenting the final triumphant success!

2.0 First Attempts: Try Free!

My first attempt at a completely free video surveillance system relied on the fact that I've got a drawer full of old smartphones: a Samsung Galaxy S2, an S6 Edge and a Huawei KII among them. It's quite easy to do.

First, visit the Google Play Store on your phone and install “IP Webcam” (it's free of charge). When run, you can configure various video, audio and photo settings (such as resolution), though the default settings seem mostly fine to me. Scroll down to the bottom of its options and you'll find the 'Start server' one which, when pressed, starts the phone acting as a video camera and also sets up a mini web server so that you can connect to the device from any web browser on the same wireless network your phone is connected to. Visit the phone in your desktop's browser and you'll see something like this:

That's not bad, and was extremely easy to do! As a surveillance device, however, it's not ideal: my phones don't have tripod mounts, so positioning them to look out of the study windows in useful directions was practically impossible. I also couldn't see a way to capture the video stream that my browser was happily receiving: the browser application allows 2 hours of video to be recorded to the local PC in a loop, with the newest video over-writing that from 2 hours ago, which is better than nothing -but I want to be able to retain my video footage for about a week before getting rid of it. On the phone itself, you can enable motion detection and have the camera email you footage when triggered; it's also possible to set up specific areas within the frame where motion should trigger this sort of response -but, again, I wanted video saved to my desktop, not emailed somewhere.
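Incidentally, the stream needn't only be watched in a browser: IP Webcam serves plain MJPEG video at a /video URL, so -assuming a hypothetical phone address of 192.168.1.50 and the app's default port of 8080- something like ffmpeg could, in principle, record it straight to disk. I didn't pursue this at the time, so treat it as a sketch:

```shell
# record one minute of the phone's MJPEG stream to disk
# (the 192.168.1.50 address is illustrative -- use your phone's actual IP)
ffmpeg -i http://192.168.1.50:8080/video -c:v libx264 -t 60 phonecam.mp4
```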

Of course, if I'd fiddled longer -and been prepared to shell out some cash for some phone mounts- I could probably have got this working in a way that met all my requirements (and it remains a possibility for a quick-and-dirty surveillance solution for the back of the property, which is the area of least concern as far as our specific home security goes). But getting the desktop to regard the phonecam as a 'proper' video device involves installing video-for-linux kernel-module drivers… and I wasn't keen on doing that on my main desktop. Besides, there were reasons why we gave up using these phones (they are old!), and making do with old kit when shiny new kit can be purchased instead has never been my way of doing things!! So I didn't go further down this route, though it is certainly easy to get quick results and looks promising for the future.

3.0 Proper Webcams with ffmpeg

My next approach was to buy a couple of HD webcams. One was a cheap no-name affair; the other a not-so-cheap Logitech C920. Both plug in as normal USB devices; both were immediately recognised by my Linux desktop (a quick ls -l /dev/video* confirmed that the video devices were properly 'attached' to the operating system). I confess I was less than impressed with either camera's video quality: I had expected 1920×1080 to be crystal clear but, shooting through double glazing, it's not good enough to make out car number plates unless the cars are parked on our drive (which makes the cameras of questionable usefulness, I suppose!)

But I pressed on regardless. I needed a way to get the Linux O/S capturing the cameras' output and saving it for 7 days before deleting the old footage.

My first thought was to capture still photographs every few seconds. But I soon realised it takes only a second or so to get from the front of our drive to our front door, so a committed criminal could end up not being photographed at all. I therefore decided to stick with capturing continuous, full-motion video. The tools to do that were relatively few: v4l2-utils and ffmpeg. The first of those utilities gives you a suite of tools to control your video-for-linux (hence the 'v4l' in the name) webcam. For example:

[hjr@britten ~]$ v4l2-ctl -d /dev/video0 --list-formats
ioctl: VIDIOC_ENUM_FMT
        Type: Video Capture

        [0]: 'YUYV' (YUYV 4:2:2)
        [1]: 'H264' (H.264, compressed)
        [2]: 'MJPG' (Motion-JPEG, compressed)

You can also use v4l2-ctl --list-formats-ext to get even more detailed information, such as:

Size: Discrete 1920x1080
        Interval: Discrete 0.033s (30.000 fps)
        Interval: Discrete 0.042s (24.000 fps)
        Interval: Discrete 0.050s (20.000 fps)
        Interval: Discrete 0.067s (15.000 fps)
        Interval: Discrete 0.100s (10.000 fps)
        Interval: Discrete 0.133s (7.500 fps)
        Interval: Discrete 0.200s (5.000 fps)

…so you can confirm that your webcam is capable of doing 1920×1080 at 30 frames per second.

When I started looking at the output of the webcams from my study window, I noticed that the picture kept changing focus (the image would seem to shift quickly forwards and backwards as the camera hunted for correct focus). The cameras would also get confused when the afternoon sunlight started coming directly at the window: the auto-exposure mechanism would kick in and make things too dark or too light. Now, v4l2-utils gives you the ability to check which of your camera's settings are capable of being set manually:

[hjr@britten ~]$ v4l2-ctl -l
VIDIOC_S_INPUT: failed: Device or resource busy
                     brightness 0x00980900 (int)    : min=0 max=255 step=1 default=128 value=128
                       contrast 0x00980901 (int)    : min=0 max=255 step=1 default=128 value=128
                     saturation 0x00980902 (int)    : min=0 max=255 step=1 default=128 value=128
 white_balance_temperature_auto 0x0098090c (bool)   : default=1 value=1
                           gain 0x00980913 (int)    : min=0 max=255 step=1 default=0 value=0
           power_line_frequency 0x00980918 (menu)   : min=0 max=2 default=2 value=2
      white_balance_temperature 0x0098091a (int)    : min=2000 max=6500 step=1 default=4000 value=5882 flags=inactive
                      sharpness 0x0098091b (int)    : min=0 max=255 step=1 default=128 value=128
         backlight_compensation 0x0098091c (int)    : min=0 max=1 step=1 default=0 value=0
                  exposure_auto 0x009a0901 (menu)   : min=0 max=3 default=3 value=3
              exposure_absolute 0x009a0902 (int)    : min=3 max=2047 step=1 default=250 value=17 flags=inactive
         exposure_auto_priority 0x009a0903 (bool)   : default=0 value=1
                   pan_absolute 0x009a0908 (int)    : min=-36000 max=36000 step=3600 default=0 value=0
                  tilt_absolute 0x009a0909 (int)    : min=-36000 max=36000 step=3600 default=0 value=0
                 focus_absolute 0x009a090a (int)    : min=0 max=250 step=5 default=0 value=0
                     focus_auto 0x009a090c (bool)   : default=1 value=1
                  zoom_absolute 0x009a090d (int)    : min=100 max=500 step=1 default=100 value=100

It is then simply a matter of “setting” any of the controls listed to a value you think appropriate. In my case:

v4l2-ctl --set-ctrl=focus_auto=0
v4l2-ctl --set-ctrl=focus_absolute=0

…took care of the camera's tendency to auto-focus by switching it off and instead forcing an absolute focus fixed at infinity. Similar 'v4l2-ctl --set-ctrl=exposure_auto…' and 'v4l2-ctl --set-ctrl=exposure_absolute…' commands will also take care of the camera's tendency to get confused by changing sunlight/cloud/time-of-day issues.
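As a sketch of what those exposure commands would look like (the values here are illustrative; on many UVC cameras, an exposure_auto value of 1 selects fully manual exposure, and exposure_absolute is measured in units of 100µs):

```shell
# switch off auto-exposure and pin the exposure time
# (illustrative values; exposure_auto=1 is 'manual mode' on many UVC cameras)
v4l2-ctl -d /dev/video0 --set-ctrl=exposure_auto=1
v4l2-ctl -d /dev/video0 --set-ctrl=exposure_absolute=250
```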

With that taken care of, how to automate video capture? Well, I started by simply using ffmpeg to capture 1 minute segments of continuous video with this simple command:

/usr/bin/ffmpeg -nostdin -f v4l2 -input_format yuyv422 -framerate 10 -video_size 1920x1080 -i /dev/video0 -c:v libx264 -vf "drawtext=fontfile=/usr/share/fonts/truetype/gentium/Gentium-R.ttf:text='%{gmtime}':fontcolor=white@0.8:x=7:y=450" -g 3 -pix_fmt yuv420p -c:a aac -preset medium -segment_time 60 -segment_wrap 1440 -f segment $SAVEDIR/hjrstudy-%04d.ts

At which point, welcome to the ghastly world of ffmpeg hocus-pocus!! (And when I said “simple”, I was dripping with sarcasm at the time, in case it wasn't obvious!)

The command line for ffmpeg is horribly convoluted and I can only really claim to understand a bit of all the above, in between taking handfuls of aspirin! In general, the command says to take video at 10 frames a second, using the full 1920×1080 frame size. It's to output in compressed video, with key frames every 3 frames. And it's to save the video in segments that are 60 seconds long, with the first segment only being over-written after 1440 segments have been written (i.e., segment 1441 will over-write segment 1): a quick bit of maths will tell you that there are 1,440 minutes in a day, so this command will output a day's-worth of video before starting to lose the start of the day's video segments. (That's easy to turn into a week's-worth of video by making the script that calls this command create a new folder based on a timestamp, and have the video output saved into the new folder). The other nasty bit of syntax is that “drawtext” one: here, I'm saying to over-lay the video with a timestamp in a particular font, colour and size.
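To turn a day's-worth into a week's-worth, the wrapper script just needs to compute a per-day folder before launching ffmpeg. A minimal sketch (the paths and the $BASEDIR variable are my own illustrative choices):

```shell
#!/bin/bash
# Create one output folder per day, so the 1440-segment wrap-around
# never overwrites a previous day's footage. Paths are illustrative.
BASEDIR="${BASEDIR:-$HOME/webcam}"
SAVEDIR="$BASEDIR/$(date +%Y%m%d)"
mkdir -p "$SAVEDIR"
echo "Writing segments to $SAVEDIR"
# ...the ffmpeg command from the text then runs with this $SAVEDIR, e.g.:
# /usr/bin/ffmpeg -nostdin -f v4l2 ... -f segment "$SAVEDIR/hjrstudy-%04d.ts"
```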

I will freely admit that I didn't just pluck that command out of the air: it took me several days of head-banging trial and error to get it all working correctly! But it did, eventually, do what I wanted… except that I discovered each video segment was coming out at about 40-60MB, so a day's-worth of 1440 such segments required a whopping 44GB+ or so of disk space. This isn't a problem for my desktop -or the server I progressively copied the data to every hour- since both have tens of terabytes at their disposal.

But I also wanted the video copied 'off-site': I'd arranged for my video segments to be written to a directory that I was synchronising to OneDrive (yes, Microsoft's OneDrive… there's an unofficial client that works quite well on Linux!). As a subscriber to Office 365, I get 1TB of cloud storage for each of 6 users; it was trivially easy to create new accounts for “webcam1” and “webcam2” and give 1TB to each of those accounts. My desktop could then also copy its video segments to OneDrive -a week's-worth of 44GB per day is still only 308GB.

Only I discovered that my Internet connection isn't really up to the task: it took about 30 or 40 seconds to copy each 1-minute segment to OneDrive, which is just about do-able for one webcam. But the moment I started both webcams uploading to the cloud, my Internet upload connection was swamped and it started taking over 2 minutes for both cameras to upload their respective 1-minute segments. The maths therefore indicated that the cameras wouldn't be able to upload faster than they were generating new footage, and so would eventually end up having their earliest footage deleted locally before having had a chance to upload it, which is clearly daft.

Naturally, I fiddled with things like slower framerates, better compression algorithms, smaller frame sizes. Yes, I could get file sizes down to something that might be cloud uploadable without completely clogging my wires full-time, but the results were a compromise of quality to the point where I started to wonder if it was worth doing at all!

The conclusion after a week of experimenting was that continuous video was not the solution I was looking for, since for large parts of the time, my cameras are looking out onto an unchanging street scene, with only the occasional passing car or pedestrian to alleviate the boredom! Saving all that unchanging video was pointless. If I could find a way to get rid of it, and only capture video when something interesting was happening, I'd cut down my file sizes massively and free up my Internet connection at the same time! What we need, therefore, is motion detection.

4.0 Motion Detection with ffmpeg

Unfortunately for the idea of implementing motion detection, ffmpeg doesn't have native motion detection capabilities. It instead has “scene detection”, where you can tell it to output only that part of a video where the proportion of pixels which change, as compared to the previous video frame, is above x%.

For example, suppose one of my 1-minute video segments had been saved as “hjrstudy-0000.ts”. I could then run this command to post-process that file to output a new file, like so:

ffmpeg -i hjrstudy-0000.ts -vf "select=gt(scene\,0.004)" motion-capture-0000.mp4

My new file, motion-capture-0000.mp4, would be a video containing only those scenes where 0.4% of the pixels had changed from the previous frame. It's an extra processing step: save the full video segment, then extract the moving parts of it. It would have been nicer to combine this video filter with the original command and do the entire thing in one step, but I couldn't get the syntax right for that (or maybe it's simply not possible?!). Either way, I could live with a second processing step, if the results were good. But they weren't good!

First, that figure of 0.4% is arbitrary -and it takes a lot of iterative experimentation to work out what a good percentage for your own specific circumstances is. In my case, my neighbour's castor oil plant is in view of one of the cameras and is forever blowing in the wind, meaning I found I was hitting the 0.4% mark when nothing else was happening at all! Naturally, you can adjust the percentage -down to things like 0.05%, or up to several percent- but I never found a value that screened out the waving shrubbery without also missing pretty much everything of genuine interest!
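If you want to experiment similarly, a simple loop makes the trial-and-error a little less painful: post-process the same segment at several thresholds and compare the resulting file sizes (a sketch; the input and output filenames follow the earlier example and are illustrative):

```shell
# sweep a few scene-change thresholds over one saved segment and
# compare output sizes to gauge sensitivity (filenames illustrative)
for t in 0.0005 0.001 0.002 0.004; do
    ffmpeg -i hjrstudy-0000.ts -vf "select='gt(scene,$t)'" -an "motion-$t.mp4"
done
ls -lh motion-*.mp4
```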

Secondly, the filter works only by extracting and saving into the new files those things which are deemed to have moved. The result, in my case, was rather Charlie Chaplin-esque. A slow walk to the car, for example, would result in you seemingly moving at lightning speed in a jerky fashion -because in between steps, your body would be essentially motionless. So the steps would make it pass the scene detection filter into the output file, but the pauses between them wouldn't. Cut out all the pauses and display only the movements detected and you pretty soon end up looking for all the world like a 1914 3-reeler!

I think I could have tweaked and twiddled with ffmpeg for quite a long time before getting results which were passably acceptable, but without me ever really being confident I had truly optimised things! ffmpeg is a hell of a utility and I will freely confess that it's me at fault here, rather than ffmpeg… but in the end, if you can't master your tools, you need a different set of tools!

5.0 Motion Detection with motion

I therefore abandoned ffmpeg and switched to a different Linux package entirely, called “motion”. It's easily installable in Fedora with a simple sudo dnf install motion. There is excellent, if rather dense, documentation on the program at this site. Once the program is installed, you need to create a customised configuration for it:

cd
mkdir .motion
nano .motion/motion.conf

Into that new configuration file can be placed a bazillion and one configuration options! Getting them right is the key to making motion work for you …or not!

Here are my non-default configuration parameters:

# Video device (e.g. /dev/video0) to be used for capturing.
videodevice /dev/video0

# Target directory for pictures, snapshots and movies
target_dir /home/hjr/webcam/motion/

# Image width in pixels.
width 1920

# Image height in pixels.
height 1080

# Video parameters
v4l2_palette 15
ffmpeg_video_codec mpeg4

# Maximum number of frames to be captured per second.
framerate 24

# Text to be overlayed in the lower left corner of images
text_left HJRSTUDY

# Text to be overlayed in the lower right corner of images.
text_right %Y-%m-%d\n%T-%q

# Restrict motion detection area with image mask
mask_file /home/hjr/webcam/motion/mask.pgm

# Threshold for number of changed pixels that triggers motion.
threshold 2000

# Noise threshold for the motion detection.
noise_level 32

# Number of images that must contain motion to trigger an event.
minimum_motion_frames 2

# Gap in seconds of no motion detected that triggers the end of an event.
event_gap 30

# The number of pre-captured (buffered) pictures from before motion.
pre_capture 30

# Number of frames to capture after motion is no longer detected.
post_capture 30

# Create movies of motion events.
movie_output on

# Maximum length of movie in seconds.
movie_max_time 60

# The encoding quality of the movie. (0=use bitrate. 1=worst quality, 100=best)
movie_quality 80

# File name(without extension) for movies relative to target directory
movie_filename %Y%m%d/%t-%v-%Y%m%d%H%M%S

Now, I won't go through every single option in great detail, but instead do a quick tour of the key ones.

First, of course, you need to say which camera device is being used: in my case /dev/video0. Then you say where you want your captured video to be saved -in my case, a sub-directory hanging off my home directory. By default, I found motion captured 640×480 video, so the image width and height settings are used to force full-HD recording.

Getting the right value for the v4l2_palette setting was tediously difficult! There are 17 possible values (as outlined on this page), and I literally had to set each one in turn and see what happened before settling on the value you see here …so at least 14 set-try-retry attempts were needed to get this right! If you don't set this parameter, motion automatically cycles through all the possible options until it finds one it can use. In my case, it always found the MJPEG palette to be usable (number 8), because that came first in the list of usable palettes for my camera. It resulted in some pretty weird-looking video, though, with dramatic over-contrasted colouring, so I kept iterating until I found something that looked more normal!

I'm also setting the output format for the captured video to be “mpeg4”. This is another one I experimented with, trying to find a balance between quality and file size. The list of all possible codecs to use can be found here: I had a go with mp4 and hevc amongst others; the “mpeg4” seemed to produce the best results, but I may need to experiment further in time.

The camera, as you saw in section 3 above, can record at 5, 7.5, 10, 15, 20, 24 or 30 frames per second; I've opted for 24 because I want smooth, realistic motion without going over-the-top. Turning this down to the lower settings will result in smaller video files, of course, but potentially with Charlie Chaplin-esque consequences.

The various “text” settings you see allow the video to be overlayed with a date/timestamp and a bit of text describing which camera has taken which footage; in my study camera's case, for example, all the saved video will be labelled “HJRSTUDY” in the lower left-hand corner.

The threshold setting is important in determining when some sort of scene-motion has taken place. It's the number of pixels that need to change before motion thinks it should save the footage to disk. Note that a 1920×1080 frame contains 2,073,600 pixels, so for 2,000 pixels to change means I'm setting the motion detection threshold at around 0.1% of the frame. Whether that's too low, too high or just right is something I'll only know over time, but I'd imagine it's low enough to capture the sight of two burly criminals walking up the drive to break in!
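That proportion is easy enough to check with a quick one-liner:

```shell
# what fraction of a full-HD (1920x1080) frame is 2000 pixels?
awk 'BEGIN { printf "%.2f%%\n", 2000 / (1920 * 1080) * 100 }'
# prints: 0.10%
```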

Video cameras have quite a lot of video 'noise' when running at low light levels -you can see pixels flickering and fluttering between two tones, for example. That would ordinarily count towards the “threshold” number of changed pixels and trigger motion detection -but the noise_level setting permits some noise to not count towards the total of changed pixels. This is again a setting I'll have to tweak over time, with empirical experimentation. For now, 32 seems to result in acceptable results.

Another setting that affects when motion thinks movement has taken place is the “minimum_motion_frames” one. By default, it's set to '1', meaning that if the relevant number of pixels has changed from one frame to another, that counts as motion. I've upped that to a setting of '2', to make things a little less sensitive: something needs to register as having changed across two frames before it counts as a real bit of movement. This is yet another sensitivity factor that I'll probably have to experiment with over time.

Once movement is detected, it's maybe not good to just save the movement itself: it may be more helpful to put the movement in some sort of context by recording a second or two before and after the movement itself. That's what the “pre_capture” and “post_capture” parameters do: I've opted to record 30 frames (a little over a second, at 24fps) before and after whatever event triggered the movement detection in the first place. This is again something that will probably need to be tweaked as time goes on, to get the best 'context' results.

I've then asked for video to be output following motion for no longer than 60 seconds. Continuing movement in front of my window will trigger fresh motion-detection events, so I'll end up with multiple 60-second videos, but no one video will be longer than 60 seconds.

I've opted for 80% quality video output in an attempt to keep file sizes down: potentially yet another parameter to tweak in the light of experience gained over time, but I'm happy enough with the output at that level for now.

Finally, I've cunningly arranged for dynamic naming of my movie file directories. Everything gets saved in /home/hjr/webcam/motion, of course, but the last parameter above means that a sub-folder will be created called '%Y%m%d' -or, rather, named for the year, month and day numbers those percent-variables represent. At the time of writing, for example, that movie_filename parameter means I'm saving files in /home/hjr/webcam/motion/20190404 …the day-specific folder means I keep each day's output separate from any other's. I then have a separate cronjob run every night to delete files and folders from my …/motion directory that are older than 7 days -and that's how I can easily retain a week of captured footage without the earlier stuff being overwritten by the later. I am assuming, of course, that the police will never come asking for footage from a time earlier than a week ago; if they do, I won't be able to help.
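For completeness, that nightly cleanup is a one-line cron entry (the timing shown here is my own illustrative choice; adjust to taste):

```shell
# crontab entry: at 03:30 every night, delete captured footage
# (files and day-folders) older than 7 days -- timing illustrative
30 3 * * * find /home/hjr/webcam/motion -mindepth 1 -mtime +7 -delete
```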

Now, one parameter I didn't mention there was the mask_file one. This is, perhaps, one of the most interesting. It is the mechanism by which you can prevent motion from being triggered by movement in various unimportant parts of the frame. So, for example, here's the view out of my study window:

Notice the neighbour's flower bed on the bottom left-corner of that image: it keeps moving around whenever the wind blows, but I don't want that triggering the motion detection algorithm! I am also not very interested in people who are walking along the pavement on the opposite side of the road. So what I do is create a new 1920×1080 image in my favourite image editor (mine happens to be Krita these days, but Gimp or Windows Paint will do, if you prefer!) and start painting in black those parts of the picture I don't need to be considered for movement detection. I end up with something like this:

Anything in white there will be an area of interest; anything in black will not be important to the motion detection algorithm -though everything, whether in the black or white regions, will appear in the output video. The black-and-white image is saved as a normal JPG file, but before it can be used, it needs to be converted to a binary PGM file (and whilst Krita, as an example, has an option to export directly to binary PGM, the results seem never to be usable by motion).

So, if you don't already have the djpeg utility installed, install it now. On Fedora, that's done with:

sudo dnf install libjpeg-turbo-utils

…and then, the conversion to PGM takes place with the command:

djpeg -grayscale -pnm original-file.jpg > mask.pgm

The motion mask_file parameter value is then simply the full path and filename to this exported PGM file.
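A quick sanity check may help here: a binary PGM starts with the magic number 'P5', whereas the ASCII variant starts with 'P2' -my suspicion is that the unusable direct exports from image editors are the ASCII kind:

```shell
# a motion-usable (binary) PGM should begin with the magic number "P5"
head -c 2 mask.pgm
```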

Incidentally, PGMs are 'Portable GreyMap' files and they have a long and quite interesting history!

Once the configuration file is written and in-place, you can run motion simply with the command:

motion

You see this sort of information appear on the screen:

[hjr@britten motion]$ motion
[0:motion] [NTC] [ALL] conf_load: Processing thread 0 - config file /home/hjr/.motion/motion.conf
[0:motion] [ALR] [ALL] conf_cmdparse: "ffmpeg_video_codec" replaced with "movie_codec" after version 4.1.1
[0:motion] [NTC] [ALL] motion_startup: Logging to syslog
[0:motion] [NTC] [ALL] motion_startup: Motion 4.2.2 Started
[0:motion] [NTC] [ALL] motion_startup: Using default log type (ALL)
[0:motion] [NTC] [ALL] motion_startup: Using log type (ALL) log level (NTC)
[0:motion] [NTC] [STR] webu_start_strm: Starting all camera streams on port 8081
[0:motion] [NTC] [STR] webu_strm_ntc: Started camera 0 stream on port 8081
[0:motion] [NTC] [STR] webu_start_ctrl: Starting webcontrol on port 8080
[0:motion] [NTC] [STR] webu_start_ctrl: Started webcontrol on port 8080
[0:motion] [NTC] [ENC] ffmpeg_global_init: ffmpeg libavcodec version 58.18.100 libavformat version 58.12.100
[0:motion] [NTC] [ALL] translate_locale_chg: No native language support
[0:motion] [NTC] [ALL] motion_start_thread: Camera ID: 0 is from /home/hjr/.motion/motion.conf
[0:motion] [NTC] [ALL] motion_start_thread: Camera ID: 0 Camera Name: (null) Device: /dev/video0
[0:motion] [NTC] [ALL] main: Waiting for threads to finish, pid: 11636
[1:ml1] [NTC] [ALL] motion_init: Camera 0 started: motion detection Enabled
[1:ml1] [NTC] [VID] vid_start: Opening V4L2 device
[1:ml1] [NTC] [VID] v4l2_device_open: Using videodevice /dev/video0 and input -1
[1:ml1] [NTC] [VID] v4l2_device_capability: - VIDEO_CAPTURE
[1:ml1] [NTC] [VID] v4l2_device_capability: - STREAMING
[1:ml1] [NTC] [VID] v4l2_input_select: Name = "Camera 1"- CAMERA
[1:ml1] [NTC] [VID] v4l2_norm_select: Device does not support specifying PAL/NTSC norm
[1:ml1] [NTC] [VID] v4l2_pixfmt_set: Testing palette YUYV (1920x1080)
[1:ml1] [NTC] [VID] v4l2_pixfmt_set: Using palette YUYV (1920x1080)
[1:ml1] [NTC] [ALL] image_ring_resize: Resizing pre_capture buffer to 1 items
[1:ml1] [NTC] [ALL] image_ring_resize: Resizing pre_capture buffer to 17 items
[1:ml1] [NTC] [ENC] ffmpeg_set_codec: Low fps. Encoding 5 frames into a 10 frames container.
[1:ml1] [NTC] [EVT] event_newfile: File of type 8 saved to: /home/hjr/webcam/motion//20190403/0-01-20190403200058.avi
[1:ml1] [NTC] [ALL] motion_detected: Motion detected - starting event 1
[1:ml1] [NTC] [ALL] mlp_actions: End of event 1

There's quite a lot of information there -and it's worth reading carefully. Messages displayed there told me, for example, that my original choices for v4l2_palette were incorrect. Other messages told me that I'd got the path or filename to my motion detection mask wrong. Once I'd fixed those parameters to correct values, you get the sort of output you see here, without error messages (though there's still the occasional warning about low frame rates… I haven't yet worked out what that's all about!)

Notice how the final set of lines indicates that motion capture is working: a motion 'event' was detected, recorded and then marked as complete. The result is a set of discontiguous video files, such as these:

Note how the modified time of those files jumps around a bit: yes, something was recorded at 2:06pm, 2:07pm and 2:08pm, but then the street went quiet for three minutes and nothing was recorded until 2:11pm. Also note the file sizes being produced: those are a big improvement on the ~40MB or so per minute I was getting when continuous video was being recorded. It means that the OneDrive upload is pretty busy still, but can easily cope:

Syncing changes from OneDrive ...
Uploading new file ./20190404/0-51-20190404141251.avi ...
Uploading 100% |oooooooooooooooooooooooooooooooooooooooo| DONE IN 00:00:07                                                                                                
 done.
Uploading modified file 20190404/0-51-20190404141251.avi ... 
Downloading 100% |oooooooooooooooooooooooooooooooooooooooo| DONE IN 00:00:17                                                                                              
 done.
Uploading new file ./20190404/0-51-20190404141251-britten.dizwell.home.avi ...
Uploading 100% |oooooooooooooooooooooooooooooooooooooooo| DONE IN 00:00:07                                                                                                
 done.

…files are, on the whole, making it to the cloud in 5 - 10 seconds, rather than clobbering my Internet connection for 30 - 40 seconds at a time. At that rate, my two cameras can upload directly to the cloud without stressing my Internet upload connection too badly.

I still have to tweak the motion detection sensitivity somewhat -cars driving down the street aren't really of interest to me, but they keep triggering video capture. My motion mask will have to be refined a little, I think, for starters!

But on the whole, I'm quite pleased with the results, including the fact that on its first night, my webcam captured an arch-criminal in full flight:

Not a bad result, for a £50 webcam to have detected that coming in pitch darkness at 3 in the morning! Sure, the security light came on once he'd walked in front of the property proper, but the camera had captured him before the light came on, which I think is pretty neat! (For those unfamiliar with English urban wildlife, it's a fox, not a very bushy-tailed cat!!)

6.0 Other Considerations and Costs

To wrap things up: I went on to implement exactly the same sort of video capture with my second webcam in the other study, belonging to ToH. Since that study comes with a PC running Windows, I try not to go in there very often! It also meant that ToH's PC was not suitable for running motion, which is a Linux-only affair as far as I know. Therefore, I turned to my old friend eBay and purchased this:

For about £50, including postage, I managed to get a very decent pizza-box style Dell PC, running an Intel i3, of around 2012 vintage, with 4GB of RAM included. It runs quietly and cool, and can sit in the corner of ToH's study without really being noticed. It handles Fedora 29 and motion without stressing about it; and its webcam's USB cable stretches easily from the windowsill to the corner of the room in which it sits discreetly.

I did originally try running the camera off a 2007 laptop I have sitting around doing nothing very much in the loft, but it just wasn't quite up to the demands of HD video capture and lagged badly, so that experiment didn't last long. The el-cheapo business PC from 7 years ago is much more capable and easily up to the job.

One word of caution for anyone setting up a webcam in the UK (or, indeed, any other part of Europe these days): GDPR. If your webcam points at anything outside your own property boundary, you come under the remit of the GDPR and are considered a 'data controller' for the purposes of its regulations. What that means in practice is pretty much as follows:

  1. Make it clear and obvious that you are capturing video -big warning window stickers, for example
  2. Limit the amount of video you capture to a 'reasonable' amount -deleting video older than a week or so would seem to meet this requirement, for example
  3. Restrict access to the video you capture -so posting it to Facebook would not be GDPR-compliant, but copying it to a server elsewhere on your premises, and even uploading to a cloud server to which only you have password-protected access, is fine
  4. Be prepared to respond to Subject Access Requests (SARs). See below.
  5. Delete footage when requested by subjects captured on video, if possible; or be prepared to not video specific individuals when requested, if possible. See below.

Regarding point 4: it is unlikely, I think, that anyone walking up or down our road would write to me saying, “You would have videoed me yesterday at 11:40am: please remove me from your system”, but if they did, I'd have to respond to their request. I could decline to delete anything, since it might be too difficult, or would compromise the purpose of the webcam surveillance in the first place -but as long as I responded within the statutory time limit (a month, under the GDPR) with an answer, either way, I would be GDPR compliant.

Point 5 is potentially the most problematic item on the list: if you've pointed your webcam at your neighbour, for example, because they are being a nuisance and you hope to catch them in the act, then the GDPR says they could write to you demanding to not be videoed in future, which would rather undermine the purpose of your surveillance system in the first place. You could therefore decline to comply with their request -but they could go to court to enforce GDPR regulations, which might net you a fine.

On the other hand, the Information Commissioner's Office (ICO), which polices the GDPR in the UK, has declared that “the ICO would be unlikely to think that taking enforcement action against you was a proportionate use of its resources”, provided you've established that your camera system is being used proportionately and that you comply with the other GDPR requirements. So it's pretty unlikely you're actually going to fall foul of the GDPR in any normal domestic residential setting.

Just to be on the safe side, however, I purchased a couple of sets of “This property is under 24 hour video surveillance” stickers and put them in my front windows. So that was point 1 above dealt with.

Point 2's 'only capture a limited amount of footage' requirement is dealt with by having a cron job which runs the command:

find /home/hjr/webcam -mtime +7 -type f -delete

Anything older than 7 days gets automatically deleted, and thus the local system, the server it replicates to with rsync, and the OneDrive cloud server it uploads to, only ever hold the last 7 days of captured video.
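If you want to sanity-check the retention rule before letting cron loose on real footage, the find invocation can be rehearsed against a throwaway directory first (a sketch only; the real job targets /home/hjr/webcam):

```shell
#!/bin/sh
# Rehearse the 7-day expiry in a temporary directory. -mtime +7
# matches files last modified more than 7*24 hours ago.
dir=$(mktemp -d)
touch -d "10 days ago" "$dir/stale.avi"    # should be deleted
touch "$dir/fresh.avi"                     # should survive
find "$dir" -mtime +7 -type f -delete
ls "$dir"                                  # only fresh.avi remains
```

Note that `touch -d "10 days ago"` is GNU coreutils syntax; it back-dates a file so the expiry can be observed without waiting a week.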

Point 3's requirement to 'restrict access' is, I think, dealt with by the fact that it's all running on PCs sitting inside my home. That rules out 99.999% of people on the planet ever being able to access it. The video does leave the house, of course, when it gets up to OneDrive -but since that's on servers that Microsoft are supposed to secure and require a username and password to access, I think we're clear on that score. The fact that the relevant PCs and servers inside the house all run Linux is also a point in my favour, as very few 'ordinary' people would have a clue how to access those systems in the first place!
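For completeness, the replication leg mentioned above might sit in the same crontab. The hostname and destination path below are hypothetical, and since the article doesn't say which tool handles the OneDrive upload, only the local mirror is sketched:

```
# Hourly mirror to a local server (hostname/path are hypothetical);
# --delete propagates the 7-day expiry to the replica as well.
0 * * * *  rsync -a --delete /home/hjr/webcam/ backupserver:/srv/webcam/
```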

So, the short version: the GDPR considerations of starting video surveillance around your home are not insignificant, but they are fairly easily complied with.

One other thing I wanted to mention: it's obviously a bit awkward to have to manually run “motion” whenever you want the video surveillance to run. Better would be a way of getting it to auto-start whenever the PC the camera is attached to reboots. And indeed, motion has a way of doing this: you configure daemon on in the configuration file and it's supposed to all just work -except that it doesn't. It will first complain about being unable to open the existing configuration file, since that's buried deep within the /home/<your username> directory structure. Fine, you say: just copy the configuration file from your user directory to /etc/motion, where it then becomes usable by the daemon… And that works.

Except that you then find the daemon can't write files to /home/<user>/webcam, either. Fine: you create a /webcam directory off root, grant read, write and execute rights on it to the user “motion”, and make “motion” the owner of the directory… and it will still complain that it lacks permission to write there.
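For reference, the permissions dance just described amounts to something like the following (hypothetical; it assumes the packaged service account is literally named “motion”, and, as noted, it still didn't satisfy the daemon for me -if I had to guess, SELinux, which Fedora enables by default, may be vetoing the writes regardless of the ordinary Unix permissions):

```
# Roughly the steps described above. This alone was not enough
# to let the motion daemon write to the directory on Fedora.
sudo mkdir /webcam
sudo chown motion:motion /webcam
sudo chmod u+rwx /webcam
```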

Frankly, I gave up trying. Instead, I created this entry in my own crontab:

1 0 * * *      pkill motion; /usr/bin/motion

…which means that at 1 minute past midnight every day, anything called “motion” gets killed off, and then the motion program is manually run. That worked reliably for me, though potentially it means I could be without video for up to 24 hours (if it crashed, for example, at 2 minutes past midnight, it wouldn't be restarted until 23 hours and 59 minutes later). But it will do for now, until such time as I work out how to daemonize motion properly.
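One subtlety worth knowing about chaining those two commands: pkill exits non-zero when it finds nothing to kill, and && short-circuits on a non-zero status, so joining the two with && would silently skip the restart whenever motion wasn't already running; a plain ; runs the second command regardless of the first one's exit code. A minimal demonstration, with false standing in for a pkill that matched nothing:

```shell
#!/bin/sh
# 'false' stands in for pkill exiting non-zero (nothing to kill).
chained=$(false && echo "restarted"; true)   # empty: && short-circuits
separate=$(false; echo "restarted")          # runs regardless
echo "with &&: '$chained'   with ; : '$separate'"
```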

7.0 Conclusion

For the princely sum of £50 for the Logitech webcam; £24 for the other no-name webcam; £50 for a second-hand i3 PC; and £3 for a set of video surveillance warning stickers… all up, about £130, I have a nicely functional indoor webcam motion-detecting surveillance system that gives me peace of mind. The upload to OneDrive means I can monitor things when I'm not physically at home (though I think I have better things to do when on holiday!!), which is a bonus.

Dabbling with ffmpeg syntax nearly did my head in early on, but motion is a lot easier to deal with and does all that I need it to do. No doubt I will fiddle with things until I get it 'just right' over the course of the next few weeks and months, but it's been a fun project to get stuck into.

wiki/linux/webcam.txt · Last modified: 2019/04/05 10:14 by dizwell