Category Archives: Open Source

Matters pertaining to Open Source software

Logging footfall counts with a Raspberry Pi and camera – results dashboard

Here at Cranfield University we are putting in place plans related to the new ‘Living Laboratory’ project, part of our ‘Urban Observatory’. This project sits within the wider UKCRIC initiative, across a number of universities. Of the many experiments in development, we are exploring machine vision as a means to provide footfall counts of pedestrian traffic in parts of the campus. This blog builds on an earlier blog summarising some of the technical considerations relating to this work, and shows how a simple dashboard can be developed with Node.js and the templating engine pug.

In the earlier blog, we captured sensor data from Raspberry Pi Zeros with cameras, running Kerberos, to a Postgres database. This quickly builds up a vast body of data in the database. Finding a way to present this data in a web-based dashboard will help us investigate and evaluate the experiment results.

We investigated a range of tools for presenting the data in this way. The project is already using Node.js to receive postings from the cameras using HTTP POST requests. The Node.js environment then communicates with the back-end Postgres database to INSERT new records. The same Node.js environment can also be used to serve up the results of queries made of the database in response to standard HTTP GET requests.

The Postgres database design has the following structure:

id integer NOT NULL DEFAULT [we used the data type SERIAL to auto-number records, and set this field as the Primary Key]
"regionCoordinates" character varying(30)
"numberOfChanges" integer
incoming integer
outgoing integer
"timestamp" character varying(30)
microseconds character varying(30)
token integer
"instanceName" character varying(30)

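For reference, a minimal CREATE TABLE statement matching this structure might look like the following sketch (the table name ‘footfall’ is illustrative – substitute your own):

CREATE TABLE footfall (
  id SERIAL PRIMARY KEY,
  "regionCoordinates" character varying(30),
  "numberOfChanges" integer,
  incoming integer,
  outgoing integer,
  "timestamp" character varying(30),
  microseconds character varying(30),
  token integer,
  "instanceName" character varying(30)
);
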
An SQL query to extract summary data for a simple results table is, for example:

SELECT database."instanceName" AS location,
  COUNT(id) AS count,
  SUM(database."numberOfChanges") as changes
FROM database
GROUP BY database."instanceName"

Note the use of double quotation marks around names having mixed case (e.g. "instanceName"). Using pgAdmin to access Postgres, this query produces an output as follows:

Postgres query showing data summary

So with this query and data output, we have the data we need for a simple report. Next we need to build a web page using Node.js. We are already using Express, the lightweight web server. However, a powerful addition is a JavaScript HTML templating engine. Of the many on offer, we like pug: it is a high-performance template engine, implemented in JavaScript for Node.js and browsers, with a clean and compact syntax. We first need to install pug on the server where Node.js is present.

# The node app should be stopped
sudo systemctl stop node-api-postgres.service

# Update npm itself if required
sudo npm install -g npm

# Install JavaScript template engine 'pug'
npm i pug

# We may then need to rebuild the app dependencies
npm rebuild

With pug installed, we can update the Node.js scripts built in the earlier blog. First index.js:

index.js

#!/usr/bin/env node
// index.js
const express = require('express')
const bodyParser = require('body-parser')
const pug = require('pug')
const app = express()
const db = require('./queries')
const port = 3000

app.use(bodyParser.json())
app.use(
  bodyParser.urlencoded({
    extended: true,
  })
)
app.set('views','views');
app.set('view engine', 'pug');

app.get('/', (request, response) => {
  response.send('Footfall counter - Service running')
  console.log('Footfall counter - Service enquiry received')
})

app.post('/counter', db.createFootfall)
app.get('/stats', db.getStats)

app.listen(port, () => {
  console.log(`App running on port ${port}.`)
})

Note in index.js the app settings. app.set('views', ...) tells Express where the pug templates for generating the web page are stored. app.set('view engine', 'pug') tells Express to use pug to generate the content.

Note also the new REST endpoint ‘/stats’, introduced with an HTTP GET request (app.get) and associated with the function getStats in the queries.js module (file).


queries.js

// queries.js

const Pool = require('pg').Pool
const pool = new Pool({
   user: '<<username>>',
   host: '<<hostname>>',
   database: '<<database>>',
   password: '<<password>>',
   port: 5432,
})

const createFootfall = (request, response) => {
  const {regionCoordinates, numberOfChanges, incoming, outgoing, timestamp, microseconds, token, instanceName} = request.body
  pool.query('INSERT INTO <<tablename>> ("regionCoordinates", "numberOfChanges", "incoming", "outgoing", "timestamp", "microseconds", "token", "instanceName") VALUES ($1, $2, $3, $4, $5, $6, $7, $8) RETURNING *', [regionCoordinates, numberOfChanges, incoming, outgoing, timestamp, microseconds, token, instanceName], (error, result) => {
    if (error) {
      console.log(error.stack)
      // Return early on a failed INSERT so we do not then try to read result.rows below
      return response.status(500).send('Error adding footfall record\n')
    }
    console.log(`Footfall added with the ID: ${result.rows[0].id}`)
    response.status(200).send(`Footfall added with ID: ${result.rows[0].id}\n`)
  })
}

const getStats = (request, response) => {
  pool.query('SELECT <<tablename>>."instanceName" AS location, COUNT(id) AS count, SUM(<<tablename>>."numberOfChanges") as changes FROM <<tablename>> GROUP BY <<tablename>>."instanceName"', (error, result) => {
    if (error) {
      throw error
    }
    response.status(200).render('stats', {title: 'Footfall counter statistical reporter', rows: result.rows})
  })
}

module.exports = {
  createFootfall,
  getStats,
}

Note in queries.js the definition of the function getStats. This function is associated with the HTTP GET request on ‘/stats’ from the earlier index.js file. The function returns a status of ‘200’ and then calls the pug render function with two parameters: first the title of the resultant web page we want, and then, more importantly, the recordset (‘rows’) resulting from the SQL query – the entire object is passed through. Lastly, getStats is added to the module exports at the end of the file.
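
As a quick sanity check, the existing ‘/counter’ endpoint can still be exercised directly from the command line with curl before we build the dashboard page – the JSON values below are purely illustrative:

curl -X POST http://<<IP_ADDRESS>>:3000/counter \
  -H "Content-Type: application/json" \
  -d '{"regionCoordinates":"660,290,1630,980","numberOfChanges":153,"incoming":1,"outgoing":0,"timestamp":"1559300000","microseconds":"123456","token":722,"instanceName":"camera1"}'

A successful request returns ‘Footfall added with ID: …’ from the createFootfall function above.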

Next we have to set up the pug template and style sheet used to generate the HTML file output. Note we defined the folder ‘views’ to hold these files in ‘index.js’. We create a new folder called ‘views’ and create a pug template file called ‘stats.pug’. Pug files all have the extension ‘.pug’.
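
The resulting project layout is then as follows (the project folder name here is illustrative):

node-api-postgres/
├── index.js
├── queries.js
└── views/
    ├── stats.pug
    └── stats.css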


stats.pug

doctype html
html(lang='en')
  head
    meta(charset='utf-8')
    title #{title}
    style
      include stats.css
  body
    div#header
      h1 #{title}
      ul#minitabs
        li
          a(href='#') Stats 1
        li
          a(href='#') Stats 2
    p Statistical summary of the Footfall counter experiment
    div#content
      table
        thead
          tr
            th Camera Location
            th Number of postings
            th Count of instances
        tbody
          each row in rows
            tr
              td #{row.location}
              td #{row.count}
              td #{row.changes}
    p Above is a summary of the footfall counts recorded to date from the start of the experiment
    div#footer
      h1 Living Laboratory - Urban Observatory

The pug template has a particular markup format that is used. This takes a bit of getting used to, but does result in a clean document ready for rendering down into HTML for return to the HTTP request. Note the way the parameters are integrated into the page: firstly the page title, and then the expression of the recordset as a variable-length table, using a ‘for each’ type structure to iterate the recordset. Note lastly the use of the ‘include’ directive to include the CSS file into the HTML. The file ‘stats.css’ looks like this:

stats.css

/* Example CSS Document - Screen version */

/* Screen partitions */
#header {
  padding: 5px;
  }

#content {
  float: right;
  padding: 5px;
  width: 100%;
  }

#footer {
  clear: right;
  padding: 5px;
  }

/***************************/

/* Top Menu definition */
#minitabs {
  margin: 0;
  padding: 0 0 20px 0;
  font-family: Arial, sans-serif;
  font-size: 15px;
  border-bottom: 1px solid #ffcc00;
  }

#minitabs li {
  margin: 0;
  padding: 0;
  display: inline;
  list-style-type: none;
  }

#minitabs a {
  float: right;
  line-height: 14px;
  font-weight: bold;
  margin: 0 10px 4px 10px;
  text-decoration: none;
  color: #ffcc00;
  }

#minitabs a.active, #minitabs a:hover {
  border-bottom: 4px solid #696;
  padding-bottom: 2px;
  color: #363;
  }

/***************************/

/* Table definitions */

table {
  border-collapse: collapse;
  }
  
tbody {
  color: #999;
  } 
  
th, td {
  border: 1px solid #999;
  padding: 10px;
  }

caption, th {
  font-family: Verdana, sans-serif;
  font-size: 15px;
  font-weight: bold;
  padding: 10px;
  background-color: #696;
  color: white;
  }

/***************************/

/* General text decorations */

/* Drop capital - decorative effect */
.drop {
  float: left;
  font-family: Verdana, sans-serif;
  font-size: 450%;
  line-height: 1em;
  margin: 4px 10px 10px 0;
  padding: 4px 10px;
  border: 2px solid #ccc;
  background: #eee;
  }

/* Inset box */
.inset_box {
  float: right;
  font-family: Arial, sans-serif;
  font-weight: Bold;
  color: #999;
  margin: 10px 5px -2.5px 10px;
  padding: 5px;
  border: 2px solid #ccc;
  background: #eee;
  width: 30%;
  }

/* General body text definition */
body {
  font-family: Georgia, Times, serif;
  line-height: 1.3em;
  font-size: 15px;
  text-align: justify;
  display: block;
  }

.noprint {
  font-size: 20px;
  color: #FF3333;
  }

/* All link colours */
a:link, a:visited, a:hover, a:active {
  color: #ffcc00;
  }

/* Headings */
h1 {
  font-family: Arial, sans-serif;
  font-size: 24px;
  font-variant: small-caps;
  letter-spacing: 4px;
  color: #ffffdd;
  padding-top: 4px;
  padding-bottom: 4px;
  background-color: #ffcc00;  
  }  

h2 {
  font-family: Arial, sans-serif;
  font-size: 14px;
  font-style: Italic;
  color: #ffff99;
  padding: 0;
  background-color: #ffcc00;  
  }  

/* Horizontal rule */
hr {
  display: inline;
  color: #ffff99;
  }

/* Abbreviations & Acronyms */
abbr, acronym {
  border-bottom: 1px dotted;
  font-weight: Bold;
  cursor: help;
  color: #ffcc00;
  }

Not all the definitions in the CSS are used, but it is a useful set of styles. The file ‘stats.css’ is placed alongside ‘stats.pug’ in the ‘views’ folder.

At this point the Node.js app can be restarted

# The node app should be started
sudo systemctl start node-api-postgres.service
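
For reference, ‘node-api-postgres.service’ is a systemd unit managing the Node.js app; a minimal sketch of what such a unit might look like follows – the paths and user are illustrative and will differ on your system:

[Unit]
Description=Node.js footfall API and dashboard
After=network.target postgresql.service

[Service]
ExecStart=/usr/bin/node /home/pi/node-api-postgres/index.js
WorkingDirectory=/home/pi/node-api-postgres
Restart=on-failure
User=pi

[Install]
WantedBy=multi-user.target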

The app is up and running again, and hopefully continuing to receive data from the cameras via the HTTP POST requests arriving at the end-point ‘/counter’. However, now we can also point our web browser at the end-point ‘/stats’, and we should hopefully see the following simple dashboard report.

http://<<IP_ADDRESS>>:3000/stats
pug app, up and running, styled by CSS
the pug code
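
The endpoints can also be checked quickly from the command line with curl:

# should return 'Footfall counter - Service running'
curl http://<<IP_ADDRESS>>:3000/

# returns the rendered HTML dashboard page
curl http://<<IP_ADDRESS>>:3000/stats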

Epilogue

The blog here has described taking data from a Postgres database via a custom SELECT statement, and using Node.js to pass the data to the JavaScript templating engine pug to prepare a simple HTML dashboard.

Future work could consider improved graphics, perhaps drawing on the graphics library D3, which is also available for Node.js via npm.

Installing QGIS on a Macbook

QGIS (https://qgis.org) is a popular open source Geographical Information System (GIS) tool that we use a lot here at Cranfield University. It is possible to get it running on a Mac running macOS High Sierra, but it can be a bit of a fiddle. The following instructions were found to work well.

The Mac operating system has no built-in package manager, like ‘rpm’ for Linux. However, there are tools that can do the job. A popular one is Homebrew (https://brew.sh). Installing this allows one to install both command line tools which are not installed by default (e.g. wget), and also whole binary tools, such as QGIS itself.

Having followed the instructions to install Brew, and updated the installation as directed, the next step is to install XQuartz, the X Window System for the Mac. Brew can be used for achieving this, thus:

brew cask install xquartz

Next, we can turn to OSGeo, the Open Source Geospatial Foundation (https://www.osgeo.org). OSGeo have a port of their suite of open source GIS tools ready for Brew, and so following the instructions on the OSGeo GitHub page, here (https://github.com/OSGeo/homebrew-osgeo4mac), we can run the following:

brew tap osgeo/osgeo4mac

ulimit -n 1024

brew install qgis

To then run QGIS, type qgis in the terminal to launch it, then pin the Dock icon to simplify launching it in future.

IoT Project – Using an ESP32 device to monitor a web service

There is a lot of interest in the Internet of Things here at Cranfield University, especially now there is a new generation of super-cheap ESP8266 and ESP32 devices which can be deployed as IoT controllers. Many of these devices are now also available with in-built OLED screens – very helpful for showing messages and diagnostics.

We will use one of these devices to develop our project.

Contents
The project
The device
Coding the ESP32
Configuring the development environment
Connecting to the device
USB drivers
OLED Screen drivers
Configuring the Arduino IDE
Uploading the Source Code
The Source Code
– Authorisation
In operation
Next steps

The project

Nowadays, web services are used for all sorts of applications – for providing access to data and functionality online. A useful application then for the ESP32 is for it to act as a monitor for a web service, repeatedly polling the service to see if it is operating correctly. If the web service goes down, we need to know – the ESP32 can keep an eye on the service and report any problems.

This project describes how an ESP32 device can be configured and programmed to monitor a web service. The web service we will monitor is developed (in Node.js) with an API that includes a ‘current status’ call – if all is well, calling this returns a success message which we can capture.

The device

For this project, we are using the TTGO-WiFi-Bluetooth-Battery-ESP32-Module-ESP32-0-96-inch-OLED-development-tool from Aliexpress, although these devices are widely available from many retailers. This particular model also has a battery holder for a long-use LIPO battery on the rear of the board.

Coding the ESP32

There are a few options for coding the ESP devices. Most easily, it is possible to use the Arduino development environment (with a few tweaks). Another possibility is the Atom-based PlatformIO IDE: https://platformio.org/platformio-ide. We used the Arduino tool.

Configuring the development environment

The Arduino development environment by default does not have the libraries and configurations to allow it to programme the ESP32. There are a few steps needed to enable this.

Connecting to the device

The ESP32 board has a micro-USB port, permitting connection to the programming computer. We used an Apple Mac laptop for programming – so a micro-USB to USB-C cable/converter was required.

USB drivers

The ESP32 device needs a software driver installed on the programming computer. We used the Silicon Labs drivers available online here. There are other alternatives (some commercial) for drivers – notably the Mac-usb-serial drivers online.

OLED Screen drivers

The ESP32 device also has an in-built OLED screen. Although very small, at 128×64 pixels, this mono display is quite large enough to show text messages with different fonts, simple bitmap graphics, progress bars and drawing elements (lines, rectangles etc.) – amazing! However, a library is needed to allow access to this device. We used the ThingPulse OLED library.

The library can be downloaded as a zip file from GitHub and placed in the ‘libraries’ folder of the Arduino sketchbook. On the Mac, for example, this is:

/Users/USER/Documents/Arduino/libraries/esp8266-oled-ssd1306-master


This library comes with lots of example programmes showing how to programme the screen, how to encode bitmaps, add progress bars, create fonts etc. From the code below, it can be seen that the Sketch ‘SSD1306SimpleDemo.ino‘ offers a great starting point for learning, referencing for example the ways described at Squix for encoding images and fonts. I selected the Roboto Medium font and encoded the WiFi graphic (using this online tool).
When completed, the two header files for fonts and images respectively were:
fonts header file – fonts.h
images header file – images.h

Configuring the Arduino IDE

Having installed these drivers and libraries, the Arduino IDE then needs to be configured.

To do this, the board was set as a device of type ‘ESP32 Arduino’ -> ‘ESP32 Dev Module’, the baud rate set to 115200. Having installed the USB driver above, the port could be set to ‘/dev/cu.SLAB_USBtoUART’.

Uploading the Source Code

The Arduino IDE allows one to compile and upload code to the device. Critically, one has to hold down the ‘Boot’ button on the device (for a few seconds) as the programme is uploaded, to allow the code to be written to the device. If the Boot button is NOT held down, errors will be reported and the code will not be uploaded! (This took ages to work out!!)

The Source Code

In coding the device in the Arduino development environment, one can refer usefully to the Arduino code reference. The final working code is shown below. Note the calls to the Serial monitor to allow debugging information to be shown while the device is connected to the computer.

// TTGO WiFi & Bluetooth Battery ESP32 Module - webservices checker
// Import required libraries
#include "Wire.h"
#include "OLEDDisplayFonts.h"
#include "OLEDDisplay.h"
#include "OLEDDisplayUi.h"
#include "SSD1306Wire.h"
#include "SSD1306.h"
#include "images.h"
#include "fonts.h"
#include "WiFi.h"
#include "WiFiUdp.h"
#include "WiFiClient.h"
// The built-in OLED is a 128*64 mono pixel display
// i2c address = 0x3c
// SDA = 5
// SCL = 4
SSD1306 display(0x3c, 5, 4);

// WiFi parameters
const char* ssid = "MYSSID";
const char* password = "MYWIFIKEY";

// Web service to check
const int httpPort = 80;
const char* host = "MYWEBSERVICE_HOSTNAME";

void setup() {
	// Initialize the display
	display.init();
	//display.flipScreenVertically();
	display.setFont(Roboto_Medium_14);

	// Start Serial
	Serial.begin(115200);
	// Connect to WiFi
	display.drawString(0, 0, "Going online");
	display.drawXbm(34, 14, WiFi_Logo_width, WiFi_Logo_height, WiFi_Logo_bits);
	display.display();
	WiFi.begin(ssid, password);
	while (WiFi.status() != WL_CONNECTED) {
	 	 delay(500);
	 	 Serial.print(".");
	}
	Serial.println("");
	Serial.println("WiFi now connected at address");
	// Print the IP address
	Serial.println(WiFi.localIP());
	display.clear();
}

void loop() {
	Serial.print("\r\nConnecting to ");
	Serial.println(host);
	display.clear();
	display.setTextAlignment(TEXT_ALIGN_LEFT);
	display.drawString(0, 0, "Check web service");
	display.display();
	Serial.println("Check web service");

	// Setup URI for GET request
	String url = "SPECIFIC_WEBSERVICE_URL";
	// if service is up ok, return string will contain: 'Service running'

	WiFiClient client;
	if (!client.connect(host, httpPort)) {
		Serial.println("Connection failed");
		display.clear();
		display.drawString(0, 0, "Connection failed");
		display.display();
		return;
	}

	client.print("GET " + url + " HTTP/1.1\r\n");
	client.print("Host: " + (String)host + "\r\n");
	// If authorisation is needed it can go here
	//client.print("Authorization: Basic AUTHORISATION_HASH_CODE\r\n");
	client.print("User-Agent: Arduino/1.0\r\n");
	client.print("Cache-Control: no-cache\r\n\r\n");

	Serial.print("GET " + url + " HTTP/1.1\r\n");
	Serial.print("Host: " + (String)host + "\r\n");
	// If authorisation is needed it can go here
	//Serial.print("Authorization: Basic AUTHORISATION_HASH_CODE\r\n");
	Serial.print("User-Agent: Arduino/1.0\r\n");
	Serial.print("Cache-Control: no-cache\r\n\r\n");

// Here's an alternative form if the service API uses HTTP POST
/*
client.print("POST " + url + " HTTP/1.1\r\n");
client.print("Host: " + (String)host + "\r\n");
// If authorisation is needed it can go here
//client.print("Authorization: Basic AUTHORISATION_HASH_CODE\r\n");
client.print("User-Agent: Arduino/1.0\r\n");
client.print("Cache-Control: no-cache\r\n\r\n");
*/

	// Read all the lines of the reply from server
	delay(500);
	bool running = false;
	while (client.available()) {
		String line = client.readStringUntil('\r\n');
	 	Serial.println(line);
	 	if (line == "Service running") {
	 		running = true;
		}
	}
	if (running == true) {
		display.drawString(0, 25, "Service up OK");
	 	display.display();
		delay(3000);
	} else {
	 	display.drawString(0, 25, "Service DOWN");
	 	display.display();
	 	delay(3000);
		// Text/email administrator
	}

// Here's some alternative methods to read web output
/*
while (client.available()) {
	 Serial.print(">");
	 char c = client.read();
	 Serial.print(c);
	 Serial.print("<");
}
*/
/*
int c = '\0';
unsigned long startTime = millis();
unsigned long httpResponseTimeOut = 10000; // 10 sec
while (client.connected() && ((millis() - startTime) < httpResponseTimeOut)) {
	 if (client.available()) {
	 	 c = client.read();
	 	 Serial.print((char)c);
	} else {
	 	 Serial.print(".");
	 	 delay(100);
	}
}
*/

	Serial.println();
	Serial.println("Closing connection");
	Serial.println("=================================================");
	Serial.println("Sleeping");
	display.clear();
	display.drawString(0, 0, "Closing connection");
	display.display();
	delay(1000);
	display.clear();
	client.stop();
	// progress bar
	for (int i=1; i<=28; i++) {
	 	float progress = (float) i / 28 * 100;
	 	delay(500); // = all adds up to delay 14000 (14 sec)
		// draw percentage as String
	 	display.drawProgressBar(0, 32, 120, 10, (uint8_t) progress);
	 	display.display();
	 	display.setTextAlignment(TEXT_ALIGN_CENTER);
	 	display.drawString(64, 15, "Sleeping " + String((int) progress) + "%");
	 	display.display();
	 	display.clear();
	 	Serial.print((int) progress);Serial.print(",");
	}
	delay (1000);
}

Authorisation

Note that the connection to the service can use either HTTP GET or POST according to need (POST is considered a better approach). A further embellishment for security is if the web service uses authorisation (a username and password to connect). If it does, then an encoded combination of the username and password (HTTP Basic authentication) can be passed in the header as shown in the code. To do this we use the excellent Postman tool. Postman allows one to manually create a connection conversation with an API server, including say a basic authorisation, and then view the full code of this – which can be copied into the Arduino code as shown above.

Note that it is critical to have a second carriage return at the end of the HTTP conversation (shown as ‘\r\n‘ in the code – so the last item has ‘\r\n\r\n’ for the blank line). Without this blank line it will not work!
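
As an aside, the ‘AUTHORISATION_HASH_CODE’ used with Basic authentication is simply the Base64 encoding of ‘username:password’, so as well as copying it from Postman it can be generated on the command line (the credentials shown are illustrative):

# print the Base64 string to paste after the word 'Basic' in the Authorization header
echo -n 'myusername:mypassword' | base64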

In operation

Here is a short video of the device in operation. Excuse the use of image stabilisation – original video was filmed handheld.

The code is designed to open a connection, check on the status of the web service, then sleep for a period before repeating in an endless loop.

Next steps

This code currently only flashes up a message on the tiny screen when the service is found to be up or down. To be really useful, the tool should be able to alert one or more administrators – perhaps by push messaging to their mobile phones, or email.

The next stage could add this capability using approaches such as Prowl and Avviso – perhaps the subject of a future blog posting.

Apache Spark, Zeppelin and geospatial big data processing

There is much interest here at Cranfield University in the use of Big Data tools, and with our parallel interests in all things geospatial, the question arises – how can Big Data tools process geospatial data?

In this blog, we investigate the use of Apache Spark, Apache Zeppelin and a couple of geospatial libraries. In an earlier blog, we set up Spark and Zeppelin, and now we extend this to use these additional tools. Note that this exercise is undertaken with a MacBook, although the instructions should work with Linux just as well.

There are few geospatial libraries for Big Data processing that work with Spark/Hadoop. Some of those that exist include the Hadoop offering from ESRI, Magellan, and GeoSpark.

GeoSpark

To set up GeoSpark, we downloaded the library ‘geospark-0.3.2-spark-2.x.jar’ from https://github.com/DataSystemsLab/GeoSpark/releases and saved the file off locally, e.g. to

/Users/sparkuser/spark/jars/

Next, in the Apache Spark installation ‘conf’ folder, we copied the template file ‘spark-defaults.conf.template’ to ‘spark-defaults.conf’ ready for editing – we need to tell Spark to use the GeoSpark jar library.
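
For example, assuming the installation location used above:

cd /Users/sparkuser/spark/conf
cp spark-defaults.conf.template spark-defaults.conf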

Now, we edited this configuration file, adding the following line at the end to reference the jar, e.g.

spark.jars /Users/sparkuser/spark/jars/geospark-0.3.2-spark-2.x.jar
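
Alternatively, the jar can be supplied at launch time rather than via the conf file – a sketch, assuming the same path:

spark-shell --jars /Users/sparkuser/spark/jars/geospark-0.3.2-spark-2.x.jar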

Sourcing data

We need some spatial data for our test. We downloaded sample data files ‘zcta510-small.csv‘ and ‘arealm-small.csv‘ (online as above), to a local data location, e.g. /Users/sparkuser/spark/data/geospark.

The datasets take the following form:
arealm-small.csv

-88.331492,32.324142
-88.175933,32.360763
-88.388954,32.357073
-88.221102,32.35078
-88.323995,32.950671
...

zcta510-small.csv

-155.940114,19.081331,-155.618917,19.5307
-155.335476,19.802474,-155.104434,19.93224
-155.85966,20.120695,-155.765027,20.268469
-155.396864,19.519641,-154.987674,19.800274
-155.98572,19.53958,-155.822977,19.70849
...

The code

We now followed exactly the GeoSpark example tutorial code, in the Scala language.
First, we need to ensure the correct libraries are loaded and available:

import org.datasyslab.geospark.spatialOperator.RangeQuery
import org.datasyslab.geospark.spatialRDD.PointRDD
import org.datasyslab.geospark.spatialOperator.JoinQuery
import org.datasyslab.geospark.spatialRDD.RectangleRDD
import com.vividsolutions.jts.geom.Envelope
import org.datasyslab.geospark.spatialOperator.KNNQuery
import org.datasyslab.geospark.spatialRDD.PointRDD
import com.vividsolutions.jts.geom.Coordinate
import com.vividsolutions.jts.geom.GeometryFactory
import com.vividsolutions.jts.geom.Point

Now we can run the following code and observe the results:

// Start an example Spatial Range Query without Index
val queryEnvelope=new Envelope (-113.79,-109.73,32.99,35.08);
val objectRDD = new PointRDD(sc, "/Users/sparkuser/spark/data/geospark/arealm-small.csv", 0, "csv"); /* The 0 means the spatial attribute starts at Column 0 */
val resultSize = RangeQuery.SpatialRangeQuery(objectRDD, queryEnvelope, 0).getRawPointRDD().count(); /* The 0 means consider a point only if it is fully covered by the query window when doing the query */


queryEnvelope: com.vividsolutions.jts.geom.Envelope = Env[-113.79 : -109.73, 32.99 : 35.08]
objectRDD: org.datasyslab.geospark.spatialRDD.PointRDD = org.datasyslab.geospark.spatialRDD.PointRDD@52b8d9a6
resultSize: Long = 445

// Start an example Spatial Range Query with Index
val queryEnvelope=new Envelope (-113.79,-109.73,32.99,35.08);
val objectRDD = new PointRDD(sc, "/Users/sparkuser/spark/data/geospark/arealm-small.csv", 0, "csv"); /* The 0 means the spatial attribute starts at Column 0 */
objectRDD.buildIndex("rtree"); /* Build R-Tree index */
val resultSize = RangeQuery.SpatialRangeQueryUsingIndex(objectRDD, queryEnvelope,0).getRawPointRDD().count(); /* The 0 means consider a point only if it is fully covered by the query window when doing the query */

queryEnvelope: com.vividsolutions.jts.geom.Envelope = Env[-113.79 : -109.73, 32.99 : 35.08]
objectRDD: org.datasyslab.geospark.spatialRDD.PointRDD = org.datasyslab.geospark.spatialRDD.PointRDD@2c3e8ebf
resultSize: Long = 445

// Start an example Spatial KNN Query without Index
val fact=new GeometryFactory();
val queryPoint=fact.createPoint(new Coordinate(-109.73, 35.08));
val objectRDD = new PointRDD(sc, "/Users/sparkuser/spark/data/geospark/arealm-small.csv", 0, "csv"); /* The 0 means the spatial attribute starts at Column 0 */
val resultSize = KNNQuery.SpatialKnnQuery(objectRDD, queryPoint, 5); /* The number 5 means 5 nearest neighbors */

fact: com.vividsolutions.jts.geom.GeometryFactory = com.vividsolutions.jts.geom.GeometryFactory@35f6b599
queryPoint: com.vividsolutions.jts.geom.Point = POINT (-109.73 35.08)
objectRDD: org.datasyslab.geospark.spatialRDD.PointRDD = org.datasyslab.geospark.spatialRDD.PointRDD@76d6439b
resultSize: java.util.List[com.vividsolutions.jts.geom.Point] = [POINT (-109.538914 35.123446), POINT (-108.729849 37.196678), POINT (-117.105253 33.48551), POINT (-120.679839 35.25764), POINT (-120.860368 35.398047)]

// Start an example Spatial KNN Query with Index
val fact=new GeometryFactory();
val queryPoint=fact.createPoint(new Coordinate(-109.73, 35.08));
val objectRDD = new PointRDD(sc, "/Users/sparkuser/spark/data/geospark/arealm-small.csv", 0, "csv"); /* The 0 means the spatial attribute starts at Column 0 */
objectRDD.buildIndex("rtree"); /* Build R-Tree index */
val resultSize = KNNQuery.SpatialKnnQueryUsingIndex(objectRDD, queryPoint, 5); /* The number 5 means 5 nearest neighbors */

fact: com.vividsolutions.jts.geom.GeometryFactory = com.vividsolutions.jts.geom.GeometryFactory@24046396
queryPoint: com.vividsolutions.jts.geom.Point = POINT (-109.73 35.08)
objectRDD: org.datasyslab.geospark.spatialRDD.PointRDD = org.datasyslab.geospark.spatialRDD.PointRDD@6db7719d
resultSize: java.util.List[com.vividsolutions.jts.geom.Point] = [POINT (-109.538914 35.123446), POINT (-108.729849 37.196678), POINT (-108.135158 37.242491), POINT (-107.596572 37.000003), POINT (-107.79524 37.225479)]

// Start an example Spatial Join Query without Index
val objectRDD = new PointRDD(sc, "/Users/sparkuser/spark/data/geospark/arealm-small.csv", 0 ,"csv","rtree",4); /* The 0 means the spatial attribute starts at Column 0, the number 4 means 4 RDD partitions, "rtree" means use the R-Tree Spatial Partitioning Grid */
val rectangleRDD = new RectangleRDD(sc, "/Users/sparkuser/spark/data/geospark/zcta510-small.csv", 0, "csv"); /* The 0 means the spatial attribute starts at Column 0 */
val joinQuery = new JoinQuery(sc,objectRDD,rectangleRDD);
val resultSize = joinQuery.SpatialJoinQuery(objectRDD,rectangleRDD).count();
objectRDD.totalNumberOfRecords  /* see https://github.com/DataSystemsLab/GeoSpark/blob/master/src/main/java/org/datasyslab/geospark/spatialRDD/PointRDD.java for API */

objectRDD: org.datasyslab.geospark.spatialRDD.PointRDD = org.datasyslab.geospark.spatialRDD.PointRDD@730e3723
rectangleRDD: org.datasyslab.geospark.spatialRDD.RectangleRDD = org.datasyslab.geospark.spatialRDD.RectangleRDD@2bf31c8c
joinQuery: org.datasyslab.geospark.spatialOperator.JoinQuery = org.datasyslab.geospark.spatialOperator.JoinQuery@36cecee7
resultSize: Long = 9989

// Start an example Spatial Join Query with Index
val objectRDD = new PointRDD(sc, "/Users/sparkuser/spark/data/geospark/arealm-small.csv", 0 ,"csv","rtree",4); /* The 0 means the spatial attribute starts at Column 0, the number 4 means 4 RDD partitions, "rtree" means use the R-Tree Spatial Partitioning Grid */
val rectangleRDD = new RectangleRDD(sc, "/Users/sparkuser/spark/data/geospark/zcta510-small.csv", 0, "csv"); /* The 0 means the spatial attribute starts at Column 0 */
val joinQuery = new JoinQuery(sc,objectRDD,rectangleRDD);
objectRDD.buildIndex("rtree"); /* Build R-Tree index */
val resultSize = joinQuery.SpatialJoinQueryUsingIndex(objectRDD,rectangleRDD).count();

objectRDD: org.datasyslab.geospark.spatialRDD.PointRDD = org.datasyslab.geospark.spatialRDD.PointRDD@1301fbdd
rectangleRDD: org.datasyslab.geospark.spatialRDD.RectangleRDD = org.datasyslab.geospark.spatialRDD.RectangleRDD@ebfb5e7
joinQuery: org.datasyslab.geospark.spatialOperator.JoinQuery = org.datasyslab.geospark.spatialOperator.JoinQuery@197ff4a6
resultSize: Long = 9989

The Internet of Things with Photon – Temperature and Humidity logging

Happy New Year from Geothread! Much is written about the Internet of Things, so here at Cranfield University as a post-Christmas project, we wanted to explore some of the possibilities for interconnected devices, sensors and data streams. To do this we are using the fantastic ‘Photon’ microprocessor controller (formerly called the Spark) from Particle (https://www.particle.io).

The inexpensive Photon device (https://www.particle.io/prototype) provides a microprocessor board with an array of digital and analogue pins for connecting up your sensors and actuators, and a USB socket for providing power (and local data services). The Photon’s real strength lies in its onboard Broadcom WiFi chip. Whereas an Arduino or similar board is effectively self-contained and fiddly to connect to the rest of the world, the Photon board allows you to connect directly and immediately to the Particle cloud (a web service provided by Particle) to which all the data streams can be sent. It is therefore straightforward to develop a simple data logging application, streaming data onto the cloud for further processing and analysis. The Photon is also broadly code-compatible with the Arduino – so code can be transposed across easily.

If you are not on WiFi, Particle also offers the ‘Electron’ device, which offers the same capabilities, but takes a mobile phone SIM card instead of WiFi, allowing for remote access. Both the Photon and the Electron are really designed for prototyping up ideas; once you have a working design, you can use Particle’s P0 and P1 devices for mass production! Shown below is the Photon mounted onto a breadboard.

Photon on breadboard

The project at hand is to develop a simple data logger for temperature and humidity, using the trusty DHT11 sensor. In that sense, this project is similar to our earlier Bluetooth data logger – but now the data will go to the Internet via its WiFi connection (it can store up to 5 connections).

The steps required (broadly following the excellent Particle startup guide) are:

  1. Create an account on the Particle website portal – https://build.particle.io/login
  2. Download to your phone (e.g. iPhone/Android) the Particle ‘App’ and log in
  3. Power up the Photon (we used a standard USB micro B cable from a phone charger)
    1. We next need to get the Photon to connect to the local WiFi. Press and hold (carefully!) the Photon setup button to enter its setup mode
    2. Use the phone’s WiFi to connect to the WiFi from the Photon – the SSID is something like ‘Photon-XXX’ where ‘XXX’ is the unique number of the device. Note, we initially had terrible trouble connecting the Photon to a WEP-encrypted broadband router. Turning off all router security worked fine – but this is no long-term solution. Enabling WPA router security, however, was all that was needed to ensure easy connection to the Photon (conclusion – use WPA not WEP security!!). The App guides you through introducing the Photon onto the network, and adding the WPA WiFi security phrase. Once the Photon is finally online, it can take a few minutes (6-12) to update its firmware – leave it alone to do this! You also get a chance to give your device a name – useful if you intend to have several devices.
  4. Next we need to wire up the Photon. A breadboard is a useful aid for initial prototyping.
    1. Connect pin 1 (on the left) of the sensor to +5V
    2. Connect pin 2 of the sensor to whatever your DHTPIN is
    3. Connect pin 4 (on the right) of the DHT11 sensor to GROUND
    4. Connect a 10K resistor from pin 2 (data) to pin 1 (power) of the sensor (we only had a 12k resistor handy but this was OK). Leave the device powered up ready to receive software code via the web.

Photon on breadboard with sensor attached

The next step moves from the hardware to the software. Particle offer a number of means to control and programme the Photon. The phone App itself has a ‘tinker’ mode which allows one to turn on and off on-board LEDs etc. Next up, there is a web-based development environment (IDE) (https://build.particle.io/build/) – a very elegant solution to programming the device. Next there is a programme that can be installed, the ‘Particle Dev‘ (rather like the Arduino IDE), and finally command-line tools using Node.js. To start with at least, it is easiest to use the web IDE interface. Also, in many ways the whole idea of the Internet of Things is to use cloud services – so data collection should also be a cloud-based activity.
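
For those preferring the command line, Particle’s CLI (a Node.js package) can also be used to compile in the cloud and flash the device over the air – a minimal sketch, where the device and file names are illustrative:

npm install -g particle-cli
particle login
particle flash MY_DEVICE_NAME dht_logger.ino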

Particle Web IDE development

To get us going, we selected the ‘Community Library’ called ‘ADAFRUIT_DHT’ developed by Adafruit (they produce great microprocessor kit too by the way). Their ‘dht-test.ino’ code can be adapted and edited, and the library added to the project. For editing, you will need to indicate the digital pin the DHT11 is connected to, e.g. for pin 2 ‘#define DHTPIN 2’. Also the type of sensor, e.g. for DHT11 ‘#define DHTTYPE DHT11’. One can also edit the loop delay for taking readings (e.g. for 5 seconds, ‘delay(5000);’).

In the run loop, we can also add instructions to publish the data readings to the Particle cloud. This is done by adding the lines:

Particle.publish("Humidity", String(h));
Particle.publish("Temperature", String(t));
Particle.publish("Dew point", String(dp));
Particle.publish("Heat Index", String(hi));

Once ready, the code can be flashed to (written to) the Photon device, over the Internet – neat!
And that is it – the Photon should now be up and running logging temperature and humidity data etc every 5 seconds. With thanks and acknowledgements to Adafruit, the software code used is shown at the end of this article.

The next task is to recover the data arriving on the Particle cloud originating from the device. There are a number of ways to do this, but the easiest initial means is to use the Particle Dashboard (see https://dashboard.particle.io/user/logs). This allows you to connect to, receive and visualise data from your running device.

Particle Dashboard showing data streaming in

You can see the data arriving at the dashboard, each reading being timestamped.
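
The event stream can also be read programmatically: the Particle Cloud API exposes your devices’ published events as a server-sent event stream, which can be watched with curl (this assumes a valid access token from your Particle account):

curl https://api.particle.io/v1/devices/events?access_token=YOUR_ACCESS_TOKEN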

Enhancements for this project

This project is only the start. One can capture and store data streams arriving from the Photon in a database. The database can then be consulted to produce time series runs of data. Multiple Photon devices can be scattered across an area, and a web map of interpolated meteorological data be produced. Other sensors can be added (e.g. a GPS) for locational information, and so on. The whole assembly can be ruggedised in a waterproof box. Really there are so many ways to develop and enhance the basic concept.

What comes next?

The Particle Photon (and Electron) are truly amazing devices – so powerful and so easy to connect up to the Internet. Truly these devices can contribute to the ‘Internet of Things’. To get some real inspiration as to the sorts of projects that exist for these devices, visit https://particle.hackster.io. If you want to store the data arising from the sensor, also have a look at https://data.sparkfun.com/


Here is the software code used in this prototype:

// This #include statement was automatically added by the Particle IDE.
#include "Adafruit_DHT/Adafruit_DHT.h"

// Example testing sketch for various DHT humidity/temperature sensors
// Written by ladyada, public domain

#define DHTPIN 2 // what pin we're connected to

// Uncomment whatever type you're using!
#define DHTTYPE DHT11 // DHT 11
//#define DHTTYPE DHT22 // DHT 22 (AM2302)
//#define DHTTYPE DHT21 // DHT 21 (AM2301)

// Connect pin 1 (on the left) of the sensor to +5V
// Connect pin 2 of the sensor to whatever your DHTPIN is
// Connect pin 4 (on the right) of the sensor to GROUND
// Connect a 10K resistor from pin 2 (data) to pin 1 (power) of the sensor

DHT dht(DHTPIN, DHTTYPE);

void setup() {
Serial.begin(9600);
Serial.println("DHT11 test!");

dht.begin();
}

void loop() {
// Wait a few seconds between measurements.
delay(2000);

// Reading temperature or humidity takes about 250 milliseconds!
// Sensor readings may also be up to 2 seconds 'old' (it's a
// very slow sensor)
float h = dht.getHumidity();
// Read temperature as Celsius
float t = dht.getTempCelcius();
// Read temperature as Farenheit
float f = dht.getTempFarenheit();

// Check if any reads failed and exit early (to try again).
if (isnan(h) || isnan(t) || isnan(f)) {
Serial.println("Failed to read from DHT sensor!");
return;
}

// Compute heat index
// Must send in temp in Fahrenheit!
float hi = dht.getHeatIndex();
float dp = dht.getDewPoint();
float k = dht.getTempKelvin();

Serial.print("Humid: ");
Serial.print(h);
Serial.print("% - ");
Serial.print("Temp: ");
Serial.print(t);
Serial.print("*C ");
Serial.print(f);
Serial.print("*F ");
Serial.print(k);
Serial.print("*K - ");
Serial.print("DewP: ");
Serial.print(dp);
Serial.print("*C - ");
Serial.print("HeatI: ");
Serial.print(hi);
Serial.println("*C");
Serial.println(Time.timeStr());

Particle.publish("Humidity", String(h));
Particle.publish("Temperature", String(t));
Particle.publish("Dew point", String(dp));
Particle.publish("Heat Index", String(hi));
delay(5000);
}