Category Archives: Python

Matters relating to the Python programming (scripting) language

Particulates Sensing with the NOVA SDS011

Here at Cranfield University we are putting in place plans related to the new ‘Living Laboratory’ project, part of our ‘Urban Observatory’. This project sits within the wider UKCRIC initiative, across a number of universities. Of the many experiments in development, we are gathering environmental data from IoT devices and building data dashboards to show the data and related analyses. One of our projects will be to investigate air quality on the campus, in our lecture rooms and public spaces. Cranfield is unique among UK universities in having its own airfield as part of the campus – we want to monitor any particulate impacts that can arise from this. To do this, one of the tools we will use is the amazing Nova SDS011 particulates sensor (http://www.inovafitness.com/en/a/index.html).

The sensor itself, available from many outlets for instance here, is extremely cheap for what it offers, and is widely reported on with many projects on the Internet. We followed the excellent tutorial laid out on Hackernoon (https://hackernoon.com/how-to-measure-particulate-matter-with-a-raspberry-pi-75faa470ec35). We used a Raspberry Pi Zero, and we used the USB interface to speed the process of prototyping.

Rather than repeat the instructions laid out so well by Hackernoon, here we have some observations, and then some small adaptations to enable notifications and data logging.

One thing to remember in using the Raspberry Pi is that you need adapters (shown above) to connect traditional USB plugs to the micro plugs on the Pi. Also you need to remember that of the two USB ports, one is for powering the device and one is for peripherals. Plugging them in the wrong way round led to lots of unnecessary head scratching!

That said, once the instructions were followed, and the code put in place, the system was up and running and we could access the simple dashboard Hackernoon have developed using lighttpd.

This could be the end of the blog: all worked well, we had readings and a simple dashboard showing AQI. The device is incredibly sensitive – we can attest that while building the setup a late-night pizza was accidentally burned (too busy hacking!), and the machine picked up the spike in particulates very well.

So the next challenge was to log the data being generated. In earlier blogs we have used and liked ThingSpeak as a quick means to log data and build dashboards, so we decided to use this. This meant editing the Python code that Hackernoon provided.

To write to ThingSpeak in Python, one can use the ‘urllib2’ library. We followed the excellent Instructables blog to do this. First, at the top of the code we import the urllib2 library and set up a variable to hold the connection string to ThingSpeak (using the API key for writing to the Channel we have created to hold the data):

<code>import urllib2
baseURL = 'http://api.thingspeak.com/update?api_key=CHANNEL_WRITE_API_KEY'</code>

Next, we located in the code where the particulate values for PM2.5 and PM10 are extracted and sent off to the web dashboard (full code used at the end). Here we inserted code to also send the same data to ThingSpeak:

<code>f = urllib2.urlopen(baseURL + '&field1=' + str(values[0]) + '&field2=' + str(values[1]))
f.read()
f.close()</code>

This worked well: data was transmitted to ThingSpeak, and with its timestamp this enabled a more comprehensive dashboard to be created monitoring the raw values detected by the device (rather than the AQI values shown in the Hackernoon dashboard – clearly one could write that conversion in Python in future if needed).
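That AQI conversion is straightforward to script. As an aside, here is a sketch of the US EPA piecewise-linear AQI calculation for PM2.5, using the EPA's published 24-hour breakpoints (the function name is our own, and note the sensor reports near-instantaneous values rather than the 24-hour averages the index formally assumes):

```python
# US EPA PM2.5 breakpoints: (conc_low, conc_high, aqi_low, aqi_high)
PM25_BREAKPOINTS = [
    (0.0, 12.0, 0, 50),
    (12.1, 35.4, 51, 100),
    (35.5, 55.4, 101, 150),
    (55.5, 150.4, 151, 200),
    (150.5, 250.4, 201, 300),
    (250.5, 350.4, 301, 400),
    (350.5, 500.4, 401, 500),
]

def aqi_pm25(conc):
    """Convert a PM2.5 concentration (ug/m^3) to a US EPA AQI value."""
    for c_lo, c_hi, i_lo, i_hi in PM25_BREAKPOINTS:
        if c_lo <= conc <= c_hi:
            # Linear interpolation within the matching band
            return round(i_lo + (i_hi - i_lo) * (conc - c_lo) / (c_hi - c_lo))
    return None  # off the top of the scale

print(aqi_pm25(8.0))  # -> 33, in the 'Good' band
```

The same breakpoint-table approach works for PM10, just with the PM10 band values.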

We then followed Hackernoon’s instructions to make the process start up on boot by placing the script into the crontab file. However, in doing this we realised it isn’t always possible to know whether the script has started successfully. As the script only starts at boot, if something goes wrong it never runs at all. This is not a unique issue – others have found the same, as reported in other blogs. Thanks to the instructions on the Raspberry Pi website, we realised we could add a sleep command to the crontab entry so that the script only starts once there is a good chance the rest of the system is up and running. This solved the problem, and the crontab entry became:

<code>@reboot sleep 60 && cd /home/pi/ && ./aqi.py</code>

The time could be extended from 60 seconds if needed. In any case, we now wanted confirmation that the script had indeed started up OK – a message sent to a mobile phone saying the process was running. To do this we used the push notification approach of Prowl used in earlier blogs on this site (you need an iPhone for this, although there will be equivalents for other phones). To get Prowl working in Python, we used the Python module for the Prowl iPhone notification service from jacobb at https://github.com/jacobb/prowlpy. Installing this means downloading the ‘prowlpy.py’ script, and then making a further adaptation at the start of the aqi script to call it appropriately, thus:

<code>import prowlpy
apikey = 'PROWL_API_KEY'
p = prowlpy.Prowl(apikey)
try:
    p.add('AirQual','Starting up',"System commencing", 1, None, "http://www.prowlapp.com/")
    print('Success')
except Exception as msg:
    print(msg)</code>

Finally, were it required, the push notification approach could also be used to report particulate readings. The PM values can be intercepted, as per the ThingSpeak export, and sent to the mobile phone too; code to do this would be thus:

<code>_message = "pm25: %.2f, pm10: %.2f, at %s" % (values[0], values[1], time.strftime("%d.%m.%Y %H:%M:%S"))
print(_message) # debug line
try:
    p.add('AirQual','Reading', _message, 1, None, "http://www.prowlapp.com/")
except Exception as msg:
    print(msg)</code>

Although this worked perfectly, the phone was immediately overwhelmed with the number of messages, and this was quickly turned off! Notifications could be used however to message the user’s phone if important air quality thresholds were breached – reminding the operator to, for example, take the pizza out of the oven!
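A sketch of that threshold idea, keeping a little state so the phone is messaged once when a threshold is crossed rather than on every reading – the threshold value and function name here are our own, and the p.add(...) call would be the same Prowl call used above:

```python
PM25_ALERT_THRESHOLD = 35.0  # ug/m^3 - an illustrative value, not an official limit

def check_threshold(pm25, threshold, alerted):
    """Return (notify_now, new_alerted_state).

    Notify only on the upward crossing; reset once the reading
    falls back below the threshold so the next crossing alerts again."""
    if pm25 >= threshold and not alerted:
        return True, True
    if pm25 < threshold:
        return False, False
    return False, alerted

# In the reading loop, something like:
# notify, alerted = check_threshold(values[0], PM25_ALERT_THRESHOLD, alerted)
# if notify:
#     p.add('AirQual', 'Alert', 'PM2.5 over threshold', 1, None,
#           "http://www.prowlapp.com/")
```

This sends one notification per burning pizza, rather than one per reading.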

The final code script used for ‘aqi.py’ was:

<code>#!/usr/bin/python -u
# coding=utf-8
# "DATASHEET": http://cl.ly/ekot
# https://gist.github.com/kadamski/92653913a53baf9dd1a8
from __future__ import print_function
import serial, struct, sys, time, json, subprocess

# Customisations ######
import urllib2
baseURL = 'http://api.thingspeak.com/update?api_key=THINGSPEAK_API'

import prowlpy
apikey = 'PROWL_API_CODE'
p = prowlpy.Prowl(apikey)
try:
    p.add('AirQual','Starting up',"System commencing", 1, None, "http://www.prowlapp.com/")
    print('Success')
except Exception as msg:
    print(msg)
####################

DEBUG = 0
CMD_MODE = 2
CMD_QUERY_DATA = 4
CMD_DEVICE_ID = 5
CMD_SLEEP = 6
CMD_FIRMWARE = 7
CMD_WORKING_PERIOD = 8
MODE_ACTIVE = 0
MODE_QUERY = 1
PERIOD_CONTINUOUS = 0

JSON_FILE = '/var/www/html/aqi.json'

MQTT_HOST = ''
MQTT_TOPIC = '/weather/particulatematter'

ser = serial.Serial()
ser.port = "/dev/ttyUSB0"
ser.baudrate = 9600

ser.open()
ser.flushInput()

byte, data = 0, ""

def dump(d, prefix=''):
    print(prefix + ' '.join(x.encode('hex') for x in d))

def construct_command(cmd, data=[]):
    assert len(data) <= 12
    data += [0,]*(12-len(data))
    checksum = (sum(data)+cmd-2)%256
    ret = "\xaa\xb4" + chr(cmd)
    ret += ''.join(chr(x) for x in data)
    ret += "\xff\xff" + chr(checksum) + "\xab"

    if DEBUG:
        dump(ret, '> ')
    return ret

def process_data(d):
    r = struct.unpack('<HHxxBB', d[2:])
    pm25 = r[0]/10.0
    pm10 = r[1]/10.0
    checksum = sum(ord(v) for v in d[2:8])%256
    return [pm25, pm10]
    #print("PM 2.5: {} μg/m^3  PM 10: {} μg/m^3 CRC={}".format(pm25, pm10, "OK" if (checksum==r[2] and r[3]==0xab) else "NOK"))

def process_version(d):
    r = struct.unpack('<BBBHBB', d[3:])
    checksum = sum(ord(v) for v in d[2:8])%256
    print("Y: {}, M: {}, D: {}, ID: {}, CRC={}".format(r[0], r[1], r[2], hex(r[3]), "OK" if (checksum==r[4] and r[5]==0xab) else "NOK"))

def read_response():
    byte = 0
    while byte != "\xaa":
        byte = ser.read(size=1)

    d = ser.read(size=9)

    if DEBUG:
        dump(d, '< ')
    return byte + d

def cmd_set_mode(mode=MODE_QUERY):
    ser.write(construct_command(CMD_MODE, [0x1, mode]))
    read_response()

def cmd_query_data():
    ser.write(construct_command(CMD_QUERY_DATA))
    d = read_response()
    values = []
    if d[1] == "\xc0":
        values = process_data(d)
    return values

def cmd_set_sleep(sleep):
    mode = 0 if sleep else 1
    ser.write(construct_command(CMD_SLEEP, [0x1, mode]))
    read_response()

def cmd_set_working_period(period):
    ser.write(construct_command(CMD_WORKING_PERIOD, [0x1, period]))
    read_response()

def cmd_firmware_ver():
    ser.write(construct_command(CMD_FIRMWARE))
    d = read_response()
    process_version(d)

def cmd_set_id(id):
    id_h = (id>>8) % 256
    id_l = id % 256
    ser.write(construct_command(CMD_DEVICE_ID, [0]*10+[id_l, id_h]))
    read_response()

def pub_mqtt(jsonrow):
    cmd = ['mosquitto_pub', '-h', MQTT_HOST, '-t', MQTT_TOPIC, '-s']
    print('Publishing using:', cmd)
    with subprocess.Popen(cmd, shell=False, bufsize=0, stdin=subprocess.PIPE).stdin as f:
        json.dump(jsonrow, f)


if __name__ == "__main__":
    cmd_set_sleep(0)
    cmd_firmware_ver()
    cmd_set_working_period(PERIOD_CONTINUOUS)
    cmd_set_mode(MODE_QUERY);
    while True:
        cmd_set_sleep(0)
        for t in range(15):
            values = cmd_query_data();
            if values is not None and len(values) == 2 and values[0] != 0 and values[1] != 0:
              print("PM2.5: ", values[0], ", PM10: ", values[1])
              time.sleep(2)

              # ThingSpeak ######
              f = urllib2.urlopen(baseURL + '&field1=' + str(values[0]) + '&field2=' + str(values[1]))
              f.read()
              f.close()
              ###################

              # Push notifications ######
              #_message = "pm25: %.2f, pm10: %.2f, at %s" % (values[0], values[1], time.strftime("%d.%m.%Y %H:%M:%S"))
              #print(_message)
              #try:
              #    p.add('AirQual','Reading', _message, 1, None, "http://www.prowlapp.com/")
              #except Exception as msg:
              #    print(msg)
              ####################


        # open stored data
        try:
            with open(JSON_FILE) as json_data:
                data = json.load(json_data)
        except IOError as e:
            data = []

        # check if length is more than 100 and delete first element
        if len(data) > 100:
            data.pop(0)

        # append new values
        jsonrow = {'pm25': values[0], 'pm10': values[1], 'time': time.strftime("%d.%m.%Y %H:%M:%S")}
        data.append(jsonrow)

        # save it
        with open(JSON_FILE, 'w') as outfile:
            json.dump(data, outfile)

        if MQTT_HOST != '':
            pub_mqtt(jsonrow)

        print("Going to sleep for 1 min...")
        cmd_set_sleep(1)
        time.sleep(60)</code>

Using multiple WiFi networks with a Raspberry Pi

Here at Cranfield University we use Raspberry Pi computers for a number of applications, such as monitoring environmental sensors and processing data. The devices get moved around the campus and need to work across multiple WiFi networks with ease and without the need for reconfiguration. Here is how this is done (make backups of the files before editing!).

First, edit file /etc/wpa_supplicant/wpa_supplicant.conf

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
	ssid="SSID_1"
	psk="PASSWORD_1"
}
network={
        ssid="SSID_2"
        psk="PASSWORD_2"
}
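If both networks are ever in range at the same time, wpa_supplicant will choose between them; adding the optional priority field to each network block (higher numbers win) makes that choice explicit, e.g.:

network={
	ssid="SSID_1"
	psk="PASSWORD_1"
	priority=2
}
network={
	ssid="SSID_2"
	psk="PASSWORD_2"
	priority=1
}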

Second, edit file /etc/network/interfaces

# The loopback network interface
auto lo
iface lo inet loopback

# The primary wired network interface
iface eth0 inet dhcp

# The wireless network interface
allow-hotplug wlan0
iface wlan0 inet manual
wpa-roam /etc/wpa_supplicant/wpa_supplicant.conf

# Default
iface default inet dhcp

Reboot the Pi after the edits. This approach uses DHCP for connecting to the network.

Perl vs Python

Here at Cranfield University we work a lot with data in our GIS and data-related teaching and research. A common challenge is in transferring a complex dataset that is in one format into another format to make it useable. Many times there are tools we can use to help in that manipulation, both proprietary and open source. For the spatial datasets we often work with, we can use the range of data convertors in ArcGIS and QGIS, we can use the fantastic ‘Feature Manipulation Engine’ (FME) from Safe Software, or its manifestation in ArcGIS – the Data Interoperability extension – then again we can look to libraries such as the Geospatial Data Abstraction Library (GDAL) for scripted functionality. As ever in computing, there are many ways of achieving our objectives.

However, sometimes there is nothing for it but to hack away in a favourite programming scripting language to make the conversion. Traditionally we used the wonderfully eclectic ‘Perl‘ language (pathologically eclectic rubbish lister – look it up!!) More recently the emphasis has perhaps shifted to Python as the language of choice. Certainly, if we are asked by our students which general purpose programming language to use for data manipulation, we advise Python is the one to have experience with on the CV.

If we have a simple data challenge, for example, we might want to convert an ASCII text file with data in one format to another format and write it out to a new file. We might want to go say from a file in this format (in ‘input.csv’):

AL1 1AG,1039499.00,0
AL1 1AG,383009.76,10251
etc......

To this format (in ‘output.csv’) …

UK,Item 1,R,AL1 1AG,,,,,,,,,,1039499.00,0,,,,
UK,Item 2,R,AL1 1AG,,,,,,,,,,383009.76,10251,,,,
etc......

For this Perl is a great solution – integrating the strengths of awk and sed. Perl can produce code which quickly chomps through huge data files. One has to be careful as to how the code is developed, to ensure its readability. Sometimes, coming back to a piece of code one can struggle to remember how it works for a while – and this is especially so where the code is highly compacted.

#!/usr/bin/env perl
# Call as ‘perl script.pl <in_file> > <out_file>‘
# e.g. perl script.pl input.csv > output.csv
use Text::CSV;
my $csv = Text::CSV->new({sep_char => ',' });
$j=1;
while (<>) {
  chomp;
  if ($csv->parse($_)) {
    my @fields = $csv->fields();
    printf("UK,Item %d,R,%s,,,,,,,,,,%s,%s,,,,\n",$j++,$fields[0],$fields[1],$fields[2]);
  }       
}

The equivalent task in Python is equally simple, and perhaps a little more readable…

#!/usr/bin/env python
# python3 code
# Call as 'python3 script.py'
import csv
o = open('output.csv','w')
with open('input.csv', 'r') as f:
   reader = csv.reader(f)
   mylist = list(reader)
j = 0
for row in mylist:
   j+=1
   o.write('UK,Item {:d},R,{:s},,,,,,,,,,{:s},{:s},,,,\n'.format(j, row[0], row[1], row[2]))
o.close()

Note the code above is Python3 not Python2. Like Perl (with cpan), Python is extensible (with pip) – and in fact one really needs to use extensions (modules, or imported libraries) to get the most out of it (and to avoid reinventing the wheel and introducing unnecessary errors). There is no need to write lots of code for handling CSV files for example – the csv library above does this very efficiently in Python. Likewise, if say we want to write data back out to JSON (JavaScript Object Notation format), again the json library can come to the rescue:

import csv
import json
jsonfile = open('/folderlocation/output.json', 'w')
with open('/folderlocation/input.csv', newline='', encoding='utf-8-sig') as csvfile:
    reader = csv.reader(csvfile)
    for row in reader:
        print(', '.join(row))
        json.dump(row, jsonfile)  # one JSON array per input row
        jsonfile.write('\n')      # newline-delimited, one record per line
jsonfile.close()

There is probably not really a lot in the difference between the two languages – it all rather depends on one’s preferences. However, for GIS professionals, Python expertise is a must as it is adopted as the scripting language of choice in ArcGIS (in fact even being shipped with ArcGIS). Other alternatives exist of course for these sorts of tasks – ‘R‘ is one that comes to mind – again being equally extensible.

Machine Vision with a Raspberry Pi

In this blog we will describe the steps needed to do some machine vision using the Raspberry Pi Zeros we described in the earlier blog. Here at Cranfield University we are building these amazing devices into our research. In this case we are interested in using the Pi as a device for counting pedestrians passing a site – trying to understand how different design choices influence people’s choice of walking routes.

Contents:
Background
Toolkits
Kerberos
   Installation of Kerberos
   Configuration of Kerberos
   Configuration of the Pi
   Output and Data Capture from Kerberos
Epilogue

Background:

In the earlier blog we showed how to set up the Raspberry Pi Zero W, connecting up the new v2 camera in a case and connecting power. Once we had installed Raspbian on a new microSD card, all was ready to go.

A bit of research was needed to understand the various options for machine vision on a Pi. There are three levels we might want. First a simple motion detection with the camera would give a presence or absence of activity, but not much more. This could be useful when pointing the camera directly at a location. Second, we can use more sophisticated approaches to consider detecting movement passing across the camera’s view, for example left to right or vice versa. This could be useful when pointing the camera transverse to a route along which pedestrians are travelling. Thirdly, and with the ultimate sophistication, we could try and classify the image to detect what the ‘objects’ passing across the view are. Classifier models might for example detect adults, young persons, and other items such as bicycles and push buggies etc. Needless to say, we wanted to start off easy and then work up the list!
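The first of those levels – simple motion detection – amounts to little more than differencing successive greyscale frames and counting changed pixels. A minimal numpy sketch of the idea (the threshold values are illustrative; OpenCV wraps the same operation in functions such as cv2.absdiff):

```python
import numpy as np

def motion_detected(prev, curr, pixel_thresh=25, count_thresh=500):
    """Flag motion when enough pixels differ between two greyscale frames."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    changed = np.count_nonzero(diff > pixel_thresh)
    return changed >= count_thresh

# Two synthetic 100x100 frames, identical except for a 30x30 'moving' patch
a = np.zeros((100, 100), dtype=np.uint8)
b = a.copy()
b[10:40, 10:40] = 200
print(motion_detected(a, b))  # 900 changed pixels, above the count threshold
```

The second and third levels (directional counting and classification) build on this same frame-differencing primitive, which is why most of the toolkits below lean on OpenCV.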

Looking at the various software tools available, it is clear that many solutions draw on OpenCV (Open Source Computer Vision Library) (https://opencv.org). OpenCV is an open source computer vision and machine learning software library, built to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception. There are many other potential libraries for machine vision – for example, SOD (https://sod.pixlab.io), and other libraries such as Dlib (http://dlib.net). OpenCV can be daunting, and there are wrappers such as SimpleCV (http://simplecv.org) to try and simplify the process.

Toolkits:

We then looked at options for toolkits that use these basic building blocks. A useful reference is Jason Antman’s blog here https://blog.jasonantman.com/2018/05/linux-surveillance-camera-software-evaluation/. Although not Jason’s final choice, the tool that stuck out to us was Kerberos (https://kerberos.io), developed by Cedric Verstraeten and grown out of his earlier OpenCV project (https://github.com/cedricve/motion-detection).

Kerberos:

Kerberos has a number of key resources:
Main home website – https://kerberos.io
Documentation – https://doc.kerberos.io
Git – https://github.com/kerberos-io
Helpdesk – https://kerberosio.zendesk.com
Corporate – https://verstraeten.io
Gitter – https://gitter.im/kerberos-io/home

Although the full source for Kerberos is available, and also a docker implementation, what we really liked was the SD image for the Raspberry Pi Zero – so really made for the job.

Installation of Kerberos:

We downloaded the cross-platform installer from the Kerberos website. This is based on the Etcher tool used to install Raspbian, so familiar to any Pi user. In our case we selected the Mac installer, downloading an installer dmg file (c.80Mb). Then, ensuring the Micro SD card destined for the Pi was in a flash writer dongle attached to the Mac, we were able to easily install the image. The Etcher app asks a couple of questions on the way about the WiFi network SSID, the WiFi and system passcodes, and a name for the device, and writes these details onto the SD card with the rest of the image. As a result, on inserting the SD card and booting the Pi with the Kerberos image, the device started up and connected correctly and without issue to the WiFi network. A check on the router on our closed network showed the device had registered itself at IP address 192.168.1.24.

Management of the Pi and camera is achieved via an app running on a web server on the Pi. So to access our device, we browsed to the URL http://192.168.1.24/login.

Configuration of Kerberos:

The dashboard app provides complete control over the operation of the Pi and camera. The image here shows the ‘heatmap’ camera view, and statistical graphs and charts of timings of activations. To configure the many settings we headed over to https://doc.kerberos.io for the documentation. The concept is that the image processing is undertaken by the ‘Machinery’ configuration, and that the ‘Web’ then controls access to the results.

Selecting ‘Configuration’ we could start adjusting the settings for the Machinery as we required. There are default settings for all the options.
However, the settings you will use depend on the application for the device. We followed the settings for ‘People Counter‘ recommended both in the docs and in a subsequent blog. The settings are very sensitive, so one has to adjust until the desired results are obtained.

Being on a Raspberry Pi, one can also connect directly to the device over ssh in a terminal session (e.g. from Terminal on the Mac, or via PuTTY from a PC). Connect to the device and change to the configuration directory with the commands:

ssh root@192.168.1.24
cd /data/machinery/config

This takes you to the location of the configuration files, as written out by the web app. Below are the settings we used to get the People Counter working (the values here correspond to the settings in the web app).

less config.xml
<?xml version="1.0"?>
<kerberos>
    <instance>
        <name type="text">Stationery</name>
        <logging type="bool">false</logging>
        <timezone type="timezone">Europe-London</timezone>
        <capture file="capture.xml">RaspiCamera</capture>
        <stream file="stream.xml">Mjpg</stream>
        <condition file="condition.xml" type="multiple">Enabled</condition>
        <algorithm file="algorithm.xml">DifferentialCollins</algorithm>
        <expositor file="expositor.xml">Rectangle</expositor>
        <heuristic file="heuristic.xml">Counter</heuristic>
        <io file="io.xml" type="multiple">Webhook</io>
        <cloud file="cloud.xml">S3</cloud>
    </instance>
</kerberos>
less capture.xml
<?xml version="1.0"?>
<captures>
    <ipcamera>
        <url type="text">xxxxxxxxx</url>
        <framewidth type="number">640</framewidth>
        <frameheight type="number">480</frameheight>
        <delay type="number">500</delay>
        <angle type="number">0</angle>
    </ipcamera>
    <usbcamera>
        <framewidth type="number">640</framewidth>
        <frameheight type="number">480</frameheight>
        <devicenumber type="number">0</devicenumber>
        <fourcc type="text">MJPG</fourcc>
        <delay type="number">500</delay>
        <angle type="number">0</angle>
    </usbcamera>
    <raspicamera>
        <framewidth type="number">640</framewidth>
        <frameheight type="number">480</frameheight>
        <delay type="number">500</delay>
        <angle type="number">0</angle>
        <framerate type="number">20</framerate>
        <sharpness type="number">0</sharpness>
        <saturation type="number">0</saturation>
        <contrast type="number">0</contrast>
        <brightness type="number">50</brightness>
    </raspicamera>
    <videocapture>
        <framewidth type="number">640</framewidth>
        <frameheight type="number">480</frameheight>
        <path type="text">0</path>
        <delay type="number">500</delay>
        <angle type="number">0</angle>
    </videocapture>
</captures>
less stream.xml
<?xml version="1.0"?>
<streams>
    <mjpg>
    	<enabled type="bool">true</enabled>
    	<streamport type="number">8889</streamport>
    	<quality type="number">75</quality>
    	<fps type="number">15</fps>
    	<username type="text"></username>
    	<password type="text"></password>
    </mjpg>
</streams>
less condition.xml
<?xml version="1.0"?>
<conditions>
    <time>
        <times type="timeselection">0:01,23:59-0:01,23:59-0:01,23:59-0:01,23:59-0:01,23:59-0:01,23:59-0:01,23:59</times>
        <delay type="number">10000</delay>
    </time>
    <enabled>
    	<active type="bool">true</active>
        <delay type="number">5000</delay>
    </enabled>
</conditions>
less algorithm.xml
<?xml version="1.0"?>
<algorithms>
	<differentialcollins>
		<erode type="number">5</erode>
    	        <threshold type="number">15</threshold>
        </differentialcollins>
	<backgroundsubtraction>
		<shadows type="text">false</shadows>
		<history type="number">15</history>
		<nmixtures type="number">5</nmixtures>
		<ratio type="number">1</ratio>
		<erode type="number">5</erode>
		<dilate type="number">7</dilate>
    	<threshold type="number">10</threshold>
    </backgroundsubtraction>
</algorithms>
less expositor.xml
<?xml version="1.0"?>
<expositors>
	<rectangle>
	    <region>
		    <x1 type="number">0</x1>
		    <y1 type="number">0</y1>
		    <x2 type="number">800</x2>
		    <y2 type="number">600</y2>
		 </region>
	</rectangle>
        <hull>
	    <region type="hullselection">779,588|781,28|588,48|377,31|193,31|32,45|33,625|191,591|347,600|456,572|556,601|659,629</region>
	</hull>
</expositors>
less heuristic.xml
<?xml version="1.0"?>
<heuristics>
	<sequence>
	    <minimumchanges type="number">20</minimumchanges>
	    <minimumduration type="number">2</minimumduration>
        <nomotiondelaytime type="number">1000</nomotiondelaytime>
	</sequence>
	<counter>
	    <appearance type="number">3</appearance>
	    <maxdistance type="number">140</maxdistance>
	    <minarea type="number">200</minarea>
	    <onlytruewhencounted type="bool">false</onlytruewhencounted>
	    <minimumchanges type="number">5</minimumchanges>
        <nomotiondelaytime type="number">100</nomotiondelaytime>
		<markers type="twolines">34,29|36,461|617,22|614,461</markers>
	</counter>
</heuristics>

Note the settings above for the twolines markers on the video image – used for counting pedestrians passing from left to right, and from right to left (coordinate position 0,0 is the top left corner).

less io.xml
<?xml version="1.0"?>
<ios>
    <disk>
        <fileformat type="text">timestamp_microseconds_instanceName_regionCoordinates_numberOfChanges_token.jpg</fileformat>
        <directory type="text">/etc/opt/kerberosio/capture/</directory>
        <markwithtimestamp type="bool">false</markwithtimestamp>
        <timestampcolor type="text">white</timestampcolor>
        <privacy type="bool">false</privacy>
        <throttler type="number">0</throttler>
    </disk>
    <video>
        <fps type="number">30</fps>
        <recordafter type="number">5</recordafter>
        <maxduration type="number">30</maxduration>
        <extension type="number">mp4</extension>
        <codec type="number">h264</codec>
        <fileformat type="text">timestamp_microseconds_instanceName_regionCoordinates_numberOfChanges_token</fileformat>
        <directory type="text">/etc/opt/kerberosio/capture/</directory>
        <hardwaredirectory type="text">/etc/opt/kerberosio/h264/</hardwaredirectory>
        <enablehardwareencoding type="bool">true</enablehardwareencoding>
        <markwithtimestamp type="bool">false</markwithtimestamp>
        <timestampcolor type="text">white</timestampcolor>
        <privacy type="bool">false</privacy>
        <throttler type="number">0</throttler>
    </video>
    <gpio>
        <pin type="number">17</pin>
        <periods type="number">1</periods>
        <periodtime type="number">100000</periodtime>
        <throttler type="number">0</throttler>
    </gpio>
    <tcpsocket>
        <server type="number">IP_ADDRESS:3000/counter</server>
        <port type="number"></port>
        <message type="text">motion-detected</message>
        <throttler type="number">0</throttler>
    </tcpsocket>
    <webhook>
        <url type="text">IP_ADDRESS:3000/counter</url>
        <throttler type="number">500</throttler>
    </webhook>
    <script>
        <path type="text">/etc/opt/kerberosio/scripts/run.sh</path>
        <throttler type="number">0</throttler>
    </script>
    <mqtt>
        <secure type="bool">false</secure>
        <verifycn type="bool">false</verifycn>
        <server type="number">IP_ADDRESS</server>
        <port type="number">1883</port>
        <clientid type="text"></clientid>
        <topic type="text">kios/mqtt</topic>
        <username type="text"></username>
        <password type="text"></password>
        <throttler type="number">0</throttler>
    </mqtt>
    <pushbullet>
        <url type="text">https://api.pushbullet.com</url>
        <token type="text">xxxxxx</token>
        <throttler type="number">10</throttler> 
    </pushbullet>
</ios>

Configuration of the Pi:

Another configuration required was to turn off the bright green LED on the Raspberry Pi, as it draws attention when the unit is operating. To turn off the LED on the Zero, we followed the instructions at https://www.jeffgeerling.com/blogs/jeff-geerling/controlling-pwr-act-leds-raspberry-pi. Note that unlike other Raspberry Pi models, the Raspberry Pi Zero has only one LED, led0 (labelled ‘ACT’ on the board). The LED defaults to on (brightness 0), and turns off (brightness 1) to indicate disk activity.

To turn off the LEDs interactively, the following commands can be run each time the Pi boots.

# Set the Pi Zero ACT LED trigger to 'none'.
echo none | sudo tee /sys/class/leds/led0/trigger
# Turn off the Pi Zero ACT LED.
echo 1 | sudo tee /sys/class/leds/led0/brightness

To make these settings permanent, add the following lines to the Pi’s ‘/boot/config.txt’ file and reboot:

# Disable the ACT LED on the Pi Zero.
dtparam=act_led_trigger=none
dtparam=act_led_activelow=on

Note the ‘/’ filesystem is made read-only by default in the Kerberos build. To temporarily force read-write for the root ‘/’ filesystem, type:

mount -o remount,rw /

Now the config.txt file can be edited normally, e.g. in the editor nano, and then the Pi can be rebooted.

cd /boot
nano config.txt
reboot

Output and Data Capture from Kerberos:

To obtain data from the tool, we are using the ‘script’ setting in io.xml, which runs the script ‘/data/run.sh’ (a bash script). This script just writes the data it receives (a JSON structure) out to disk.

#!/bin/bash

# -------------------------------------------
# This is an example script which illustrates
# how to use the Script IO device.
#

# --------------------------------------
# The first parameter is the JSON object
#
# e.g. {"regionCoordinates":[308,250,346,329],"numberOfChanges":194,"timestamp":"1486049622","microseconds":"6-161868","token":344,"pathToImage":"1486049622_6-161868_frontdoor_308-250-346-329_194_344.jpg","instanceName":"frontdoor"}

JSON=$1

# -------------------------------------------
# You can use python to parse the JSON object
# and get the required fields

echo $JSON >> /data/capture_data.json

coordinates=$(echo $JSON | python3 -c "import sys, json; print(json.load(sys.stdin)['regionCoordinates'])")
changes=$(echo $JSON | python3 -c "import sys, json; print(json.load(sys.stdin)['numberOfChanges'])")
incoming=$(echo $JSON | python3 -c "import sys, json; print(json.load(sys.stdin)['incoming'])")
outgoing=$(echo $JSON | python3 -c "import sys, json; print(json.load(sys.stdin)['outgoing'])")
time=$(echo $JSON | python3 -c "import sys, json; print(json.load(sys.stdin)['timestamp'])")
microseconds=$(echo $JSON | python3 -c "import sys, json; print(json.load(sys.stdin)['microseconds'])")
token=$(echo $JSON | python3 -c "import sys, json; print(json.load(sys.stdin)['token'])")
instancename=$(echo $JSON | python3 -c "import sys, json; print(json.load(sys.stdin)['instanceName'])")

printf "%(%m/%d/%Y %T)T\t%d\t%d\t%d\t%d\n" "$time" "$time" "$changes" "$incoming" "$outgoing" >> /data/results.txt

Note the use of printf’s ‘%(datefmt)T’ conversion to turn the Unix epoch timestamp into a readable date/time.
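
The same conversion is easy in Python too – a quick sketch using the standard ‘time’ module (gmtime is used here, so the result is UTC rather than local time):

```python
import time

# Convert a Unix epoch timestamp (as emitted by Kerberos) into a
# readable date/time, mirroring printf's %(%m/%d/%Y %T)T conversion.
def readable(epoch):
    return time.strftime("%m/%d/%Y %H:%M:%S", time.gmtime(int(epoch)))

print(readable("1539760628"))  # → 10/17/2018 07:17:08
```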

When an event triggers the system (someone walking past the camera view), two actions follow: an image is saved to disk, and the script is run with the JSON structure as its parameter. The script then processes the JSON. It writes the whole JSON structure to the file ‘capture_data.json’ (included here for debugging, and could be omitted), and also extracts the data elements we actually want, writing these to a tab-separated file called ‘results.txt’.
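
Incidentally, the repeated ‘python -c’ calls in run.sh each re-parse the JSON; the same extraction can be done in a single pass. A sketch (field names follow the JSON samples shown here; the UTC/local-time choice is ours):

```python
import json, time

# Parse one Kerberos event (a JSON object on one line) and build the
# results.txt row: date/time, epoch, changes, incoming, outgoing.
def to_tsv(line):
    d = json.loads(line)
    stamp = time.strftime("%m/%d/%Y %H:%M:%S", time.gmtime(int(d["timestamp"])))
    return "\t".join([stamp, d["timestamp"], str(d["numberOfChanges"]),
                      str(d["incoming"]), str(d["outgoing"])])

# e.g. append to_tsv(line) to results.txt for each incoming event
```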

A sample of ‘capture_data.json’ looks like this:

{"regionCoordinates":[413,323,617,406],"numberOfChanges":1496,"incoming":1,"outgoing":0,"name":"Dream","timestamp":"1539760397","microseconds":"6-928567","token":722,"instanceName":"Dream"}
{"regionCoordinates":[190,318,636,398],"numberOfChanges":2349,"incoming":1,"outgoing":0,"name":"Dream","timestamp":"1539760405","microseconds":"6-747074","token":814,"instanceName":"Dream"}
{"regionCoordinates":[185,315,279,436],"numberOfChanges":1793,"incoming":0,"outgoing":1,"name":"Dream","timestamp":"1539760569","microseconds":"6-674179","token":386,"instanceName":"Dream"}

A sample of ‘results.txt’ looks like this:

10/17/2018 08:17:08	1539760628	917	0	1
10/17/2018 08:17:18	1539760638	690	0	1
10/17/2018 08:18:56	1539760736	2937	0	1
10/17/2018 08:19:38	1539760778	3625	1	0
10/17/2018 08:22:05	1539760925	1066	1	0
10/17/2018 08:24:06	1539761046	2743	0	1
10/17/2018 08:24:45	1539761085	1043	1	0
10/17/2018 08:26:11	1539761171	322	0	1
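
A file like this is then straightforward to analyse; for example, a short Python sketch totalling the incoming and outgoing counts (the column layout follows the sample above; the function is illustrative):

```python
# Sum the incoming and outgoing counts in a Kerberos results.txt,
# whose tab-separated columns are: date/time, epoch, changes,
# incoming, outgoing.
def totals(lines):
    incoming = outgoing = 0
    for line in lines:
        fields = line.split("\t")
        incoming += int(fields[3])
        outgoing += int(fields[4])
    return incoming, outgoing

# e.g. with open("/data/results.txt") as f:
#          print(totals(f))
```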

Epilogue:

This blog has shown how the Kerberos toolkit has been used with an inexpensive Raspberry Pi to detect motion, and also directional movement across the camera view. The system captures a JSON data structure for each triggered event, and a script extracts from this the data required, which is saved to disk for later use.

There are still issues to grapple with – for example reducing false positives and, perhaps more importantly, not missing events as they occur. The settings of the configuration machinery are very sensitive. The best approach is to vary these settings successively (particularly the expositor and heuristic settings) until the right result is obtained. Kerberos has a verbose setting for event logging, and inspecting the log with this switched on reveals that the Counter conditions are very sensitive – many more people may be walking past the camera than are being logged as such (e.g. motion activations may be greater than count events).

The commands below show how to access the log – it is also shown in the ‘System’ tab of the web dashboard. The command ‘tail -f’ is useful as it shows the log updating in real time – helpful if the live video feed is displayed alongside on screen. Then you can see very easily what is and isn’t being logged.

cd /data/machinery/logs
tail -f log.stash

Ultimately, the Raspberry Pi may not have enough power to operate full classifier models, such as the Darknet YOLO (‘You Only Look Once’) tool developed by Joseph Redmon (https://pjreddie.com/darknet/yolo/). However, Kerberos itself has a cloud model that provides post-processing of images in the cloud on AWS servers, with classifier models available – perhaps something to try in a later blog.

Making the Raspberry Pi work, the next steps – Pi Society

There is a lot of interest in the amazing Raspberry Pi 3 computer here at Cranfield University. Once you have the basics for the Raspberry Pi in place, there are a few tips and tricks you can follow to make the Pi work better for you, explained below. The assumption here is that you already have the Pi connected up to a monitor, have a keyboard and mouse plugged in, and are running Raspbian Jessie (if not, see our earlier Pi tutorials). The topics covered here are:
Setting a system password | Getting WiFi running | Accessing the Pi from another computer | Moving files to and from the Pi from another computer – scp | Installing and updating software on the Pi | Setting aliases | Learning Python

Setting a system password

The default account on the Pi is in fact username ‘pi’, and there is no password for the account by default. It is good practice to set a password, particularly once you activate WiFi for the Pi. To do this, Select Menu > Preferences > Raspberry Pi Configuration > System. Select the password option and enter in a secure password.

While you are at this dialogue box, you can explore the other options there – for example, under ‘Localisation’ you can set the Pi’s timezone appropriately.

Getting WiFi running

WiFi on the Pi is very straightforward if you have a home router running WPA security. At the graphical interface, in the top right corner of the screen is the WiFi icon. Right-click the WiFi icon and select the first option, ‘WiFi Networks Settings’. In the Network Settings dialogue, in the ‘configure’ drop-down, select ‘SSID’ and search for your router, select it and press OK. If a WiFi password is required, select the WiFi icon again, locate the connection you just made in the list and double click it – a dialogue appears for you to enter the password. Once done, hopefully the connection is made and you will be online.

To get the Pi to connect wirelessly to the campus WiFi network ‘EduRoam’ is rather more tricky – see https://www.raspberrypi.org/forums/viewtopic.php?f=28&t=86253
Using the Jessie graphical user interface of the Pi, you should be able to follow the same procedure to get the Pi online with Eduroam. If you intend to do this however, you must ensure that users all have passwords set, and that the hostname is something other than ‘pi’!

Accessing the Pi from another computer

To start with you have to connect a monitor and keyboard/mouse up to the pi to connect to it. However, ultimately you may want to have it running on its own, and so then be able to connect to it from a different computer – on the command line, or graphically.

Assuming the Pi is using WiFi and is online, find out what its IP address is. To do this, you can either consult the WiFi router’s web console to look for its connected clients, or you can type:
sudo ifconfig
In a home WiFi situation, the IP address will likely be something like ‘192.168.1.nn’ (e.g. 192.168.1.11).

From your other computer (e.g. a PC or Mac), you can now run an ‘ssh’ (secure shell) client. To do this, on the PC you might use the excellent freeware tool ‘putty’; on a Mac you can fire up the terminal and at the prompt enter ‘ssh <ip address>’. Make the connection by incorporating the pi user name (which by default is ‘pi’) into the IP address; thus, in a Mac terminal window, for the user name ‘pi’ you would type:
ssh pi@192.168.1.11

Sometimes accessing via the command line is just not enough, and you may want to run an app with a graphical interface from another computer. If you use the secure shell (ssh) to log in to the Pi, you may be able to carry the ‘X-windows’ session over to the computer using the ‘-X’ parameter, thus:

ssh -X pi@192.168.1.11

On a Mac running Sierra, you may have to use ‘-Y’ instead (assuming you have XQuartz installed):

ssh -Y pi@192.168.1.11

On Windows computers, you can install and use Xming for the X-windows, and then use putty as noted above for ssh sessions. Putty has a dialogue tick box option to carry X11 over to the local session.

To test this works, install the standard set of X11 apps, then run say xclock, thus:

sudo apt-get install x11-apps
xclock

All the above can be a bit fiddly – and a much easier method is to use a VNC (Virtual Network Computing) server. In former times to do this you would install a software package called ‘tightvncserver’ – and there are lots of instructions out on the web to do this – and you can still do this if you wish. However, as from Sept 2016, the Raspberry Pi 3 now includes a VNC server by default, from RealVNC (see https://www.realvnc.com/docs/raspberry-pi.html). Although installed, it is not activated by default. To activate it, select Menu > Preferences > Raspberry Pi Configuration > Interfaces. Ensure the VNC tick box is Enabled and then reboot the Pi. From now on, VNC Server will start automatically whenever your Pi is powered on.

Now, on the computer you want to connect to the Pi, you will also need to install the RealVNC viewer. Visit ‘https://www.realvnc.com/download/viewer/‘, download and install the appropriate viewer. Note you will need to know and enter the IP address of the Pi, as well as your account and password.

Moving files to and from the Pi from another computer – scp

Very often you will want to copy files to and from the Pi – here’s how:

If you have the ssh tools running as above, you can just as easily use the ‘secure copy’ command ‘scp’ to move files to and from the Pi from your controlling computer, as described at https://www.raspberrypi.org/documentation/remote-access/ssh/scp.md.

Copying a file from your computer TO the Pi

scp myfile.txt pi@192.168.1.11:

or to copy the file to a specific folder:
scp myfile.txt pi@192.168.1.11:myfolder/
This copies the file to the ‘/home/pi/myfolder/’ directory on your Raspberry Pi.

Copying a file from your Pi TO the computer

scp pi@192.168.1.11:myfile.txt .

A better strategy is to install a graphical tool such as WinSCP to help you copy files to and from the Pi.

Installing and updating software on the Pi

You will need to keep the software installed on the Pi up to date. Being open source, the system receives constant updates that need installing – so this is a procedure that should be run through on a regular basis. The tool needed to do this is called ‘apt’.

Open a terminal and enter (‘apt-get’ is all one word, with no spaces)
sudo apt-get update
then
sudo apt-get upgrade
This can take a little while – don’t interrupt until it is finished, and keep an eye on it as it may ask questions needing a response. The apt command is described fully here.

This command line tool is very powerful and can be used to install software as well as keep it updated. For example, to install the package ‘curl’, you type:
sudo apt-get install curl

If you need to update the kernel software too, you can type:
sudo apt-get install rpi-update
sudo rpi-update

Curl is a useful command-line tool to transfer data from or to a server, using one of the supported protocols (DICT, FILE, FTP, FTPS, GOPHER, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, POP3, POP3S, RTMP, RTSP, SCP, SFTP, SMB, SMBS, SMTP, SMTPS, TELNET and TFTP).
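
For the common web protocols, Python’s standard library offers similar functionality – a minimal sketch with ‘urllib.request’ (the URL shown is purely illustrative):

```python
import urllib.request

# Fetch a resource, much as 'curl <url>' would, returning the raw bytes.
def fetch(url):
    with urllib.request.urlopen(url) as response:
        return response.read()

# e.g. page = fetch("https://www.raspberrypi.org/")
```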

If you prefer a package manager with a menu-driven, text-based interface, you can try ‘aptitude’:
sudo aptitude

Setting aliases

Using the Pi, you quickly realise that the command line, using the ‘terminal’, offers powerful and efficient tools for configuring and running the computer. Did you know you can set up shortcuts to make the use of these commands even more efficient? An example we shall use here is to set an alias ‘ll’ to run the full command ‘ls -al’ – a little shortcut that speeds up making full file listings on the screen – so that every time you type ‘ll’, the command ‘ls -al’ is run.

Each time you start a terminal session, a configuration file in your home folder named ‘.bashrc’ is read – and shortcuts can be placed into this file. If you have a look in this file (it is a plain text file) you can actually see that there is a line in it making a call to incorporate any aliases defined in a companion file called ‘.bash_aliases’. Note this latter file doesn’t exist by default, so enter the following:

cd
nano .bash_aliases

This starts up the text editor ‘nano’ (any text editor would do the job as well!) and creates the file – now enter a new line in it:
alias ll='ls -al'

Save the file out and exit (in nano this is Ctrl+O, Ctrl+X)

Now close the terminal window (type ‘exit’). Next time you run a terminal session, type ‘ll’ to see the file listing – try it and see.

Be sure to visit https://www.raspberrypi.org/ to learn more about these amazing machines.

Learning Python

The Pi offers a fantastic means to learn programming – using languages such as Python. There are so many Python tutorials out there that we don’t write our own here. However, here are a few guidelines to get you up and running on the excellent quick Python tutorial from Magnus Lie Hetland. This is a good starting point for anyone who has done a spot of coding before as it is short, sweet and to the point!

The tutorial is online here

Firstly you need a decent editor (did you know there were so many?). Sadly, our favourite editor PyCharm is not yet available for the Pi.

At the command line you can use the editor ‘nano’:
nano myfile.py

Better still, use a graphical editor – assuming it is installed, use say ‘Leafpad’
(the ‘&’ symbol means run the editor as a background process):
leafpad myfile.py &

Best of all, use the Idle3 editor on the Pi:
idle3 myfile.py &

Secondly, run the code with the python command as below (or in Idle, just press ‘F5’ to run the source code).
python myfile.py
Then you can follow Magnus’ examples through! For example, copy and run this code:

# Exercise 2
# Part 1:
print("Part 1\nThis program continually reads in numbers\nand adds them together until the sum reaches 100:")
total = 0
goes = 0
while total < 100:
    # Get the user's choice:
    number = int(input("enter a number > "))
    total += number
    goes += 1
print("The sum is now", total, "and it took you", goes, "turns.")

# Part 2:
print("\nPart 2\nThis program receives a number of values\nand prints the sum")
total = 0
number = int(input("enter the number of values to enter > "))
for counter in range(0, number):
    # Get the user's choice:
    number = int(input("enter a number > "))
    total += number
print("The total was", total)

This tutorial is one of the following set of related Raspberry Pi tutorials, in order:

  1. Unboxing the Raspberry Pi 3 – Pi Society
  2. Raspberry Pi 3: Hello World – Pi Society
  3. Raspberry Pi 3: Operating LEDs – Pi Society
  4. Making the Raspberry Pi work, the next steps – Pi Society
  5. Connecting a Raspberry Pi to eduroam wifi

Raspberry Pi 3: Operating LEDs – Pi Society

This is a tutorial about controlling external components from our Raspberry Pi 3 computer – turning on and off LEDs and sounding a buzzer, using the Python language, as part of the Cranfield University “Pi Society”. These materials were purchased in the UK from the Pi Hut (http://thepihut.com). The components used are part of the excellent Cam Jam suite (http://camjam.me) – a great site where you can download many worksheets.

In this third tutorial, we set up a Raspberry Pi 3 computer, then get it to turn on and off some LEDs and control a buzzer.

This tutorial is one of the following set of related Raspberry Pi tutorials, in order:
1. Unboxing the Raspberry Pi 3 – Pi Society
2. Raspberry Pi 3: Hello World – Pi Society
3. Raspberry Pi 3: Operating LEDs – Pi Society
4. Making the Raspberry Pi work, the next steps – Pi Society
5. Connecting a Raspberry Pi to eduroam wifi