Tag Archives: Postgres

Postgres is a popular Open Source SQL database, well known in the geospatial community for the PostGIS extension that adds geospatial capabilities.

Logging footfall counts with a Raspberry Pi and camera – results dashboard

Here at Cranfield University we are putting in place plans related to the new ‘Living Laboratory’ project, part of our ‘Urban Observatory’. This project sits within the wider UKCRIC initiative, across a number of universities. Of the many experiments in development, we are exploring machine vision as a means to provide footfall counts of pedestrian traffic in parts of the campus. This blog builds on an earlier blog summarising some of the technical considerations relating to this work, and shows how a simple dashboard can be developed with Node.js and the templating engine pug.

In the earlier blog, we captured sensor data from Raspberry Pi Zeros with cameras, running Kerboros, to a Postgres database. This quickly builds up a vast body of data in the database. Finding a way to present this data in a web-based dashboard will help us investigate and evaluate the experiment results.

We investigated a range of tools for presenting the data in this way. The project is already using Node.js to receive postings from the cameras using HTTP POST requests. The Node.js environment then communicates with the back-end Postgres database to INSERT new records. The same Node.js environment can also be used to serve up the results of queries made of the database in response to standard HTTP GET requests.

The Postgres database design has the following structure:

id integer NOT NULL DEFAULT [we used the data type SERIAL to auto-number records, and set this field as the Primary Key]
"regionCoordinates" character varying(30)
"numberOfChanges" integer
incoming integer
outgoing integer
"timestamp" character varying(30)
microseconds character varying(30)
token integer
"instanceName" character varying(30)

An SQL query to extract summary data for a simple results table is, for example:

SELECT database."instanceName" AS location,
  COUNT(id) AS count,
  SUM(database."numberOfChanges") as changes
FROM database
GROUP BY database."instanceName"

Note the use of the double quotation marks around names having mixed case (e.g. "instanceName"). Using pgAdmin to access Postgres, this query produces an output as follows:

Postgres query showing data summary

So with this query and data output, we have the data we need for a simple report. Next we need to build a web page using Node.js. We are already using Express, the lightweight web server. However, a powerful addition is a JavaScript HTML templating engine. Of the many on offer, we like pug: it is a high-performance template engine, implemented with JavaScript for Node.js and browsers, and has a clean and compact approach. We first need to install pug on the server running Node.

# The node app should be stopped
sudo systemctl stop node-api-postgres.service

# Update npm itself if required
sudo npm install -g npm

# Install JavaScript template engine 'pug'
npm i pug

# We may then need to rebuild the app dependencies
npm rebuild
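A quick way to confirm pug is available to Node is a one-line render test – purely illustrative; it should print <p>hello pug</p>:

# Optional sanity check that pug loads and renders
node -e "console.log(require('pug').render('p hello pug'))"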

With pug installed, we can update the Node.js scripts built in the earlier blog. First index.js:

index.js

#!/usr/bin/env node
// index.js
const express = require('express')
const bodyParser = require('body-parser')
const pug = require('pug')
const app = express()
const db = require('./queries')
const port = 3000

app.use(bodyParser.json())
app.use(
  bodyParser.urlencoded({
    extended: true,
  })
)
app.set('views','views');
app.set('view engine', 'pug');

app.get('/', (request, response) => {
  response.send('Footfall counter - Service running')
  console.log('Footfall counter - Service enquiry received')
})

app.post('/counter', db.createFootfall)
app.get('/stats', db.getStats)

app.listen(port, () => {
  console.log(`App running on port ${port}.`)
})

Note in index.js the app settings: app.set('views', 'views') tells Express where the pug templates used to generate the web page are stored, and app.set('view engine', 'pug') tells Express to use pug to render them.

Note also the new REST endpoint ‘/stats’, introduced with an HTTP GET request (app.get) and associated with the Node.js function getStats in the accompanying queries.js module (file).


queries.js

// queries.js

const Pool = require('pg').Pool
const pool = new Pool({
   user: '<<username>>',
   host: '<<hostname>>',
   database: '<<database>>',
   password: '<<password>>',
   port: 5432,
})

const createFootfall = (request, response) => {
  const {regionCoordinates, numberOfChanges, incoming, outgoing, timestamp, microseconds, token, instanceName} = request.body
  pool.query('INSERT INTO <<tablename>> ("regionCoordinates", "numberOfChanges", "incoming", "outgoing", "timestamp", "microseconds", "token", "instanceName") VALUES ($1, $2, $3, $4, $5, $6, $7, $8) RETURNING *', [regionCoordinates, numberOfChanges, incoming, outgoing, timestamp, microseconds, token, instanceName], (error, result) => {
    if (error) {
      console.log(error.stack)
      return response.status(500).send('Error inserting footfall record\n')
    }
    console.log(`Footfall added with the ID: ${result.rows[0].id}`)
    response.status(200).send(`Footfall added with ID: ${result.rows[0].id}\n`)
  })
}

const getStats = (request, response) => {
  pool.query('SELECT <<tablename>>."instanceName" AS location, COUNT(id) AS count, SUM(<<tablename>>."numberOfChanges") AS changes FROM <<tablename>> GROUP BY <<tablename>>."instanceName"', (error, result) => {
    if (error) {
      throw error
    }
    response.status(200).render('stats', {title: 'Footfall counter statistical reporter', rows: result.rows})
  })
}

module.exports = {
  createFootfall,
  getStats,
}

Note in queries.js the definition of the function getStats. This function is associated with the HTTP GET request on ‘/stats’ from the earlier index.js file. The function sets a status of 200 and then calls the pug render function with two parameters: first the title of the resultant web page we want, and then, more importantly, the recordset (‘rows’) resulting from the SQL query – the entire object is passed through. Lastly, getStats is added to the module exports at the end of the file.

Next we have to set up the pug template and style sheet used to generate the HTML output. Recall that in ‘index.js’ we defined the folder ‘views’ to hold these files. We create a new folder called ‘views’, and in it a pug template file called ‘stats.pug’. Pug files all have the extension ‘.pug’.


stats.pug

doctype html
html(lang='en')
  head
    meta(charset='utf-8')
    title #{title}
    style
      include stats.css
  body
    div#header
      h1 #{title}
      ul#minitabs
        li
          a(href='#') Stats 1
        li
          a(href='#') Stats 2
    p Statistical summary of the Footfall counter experiment
    div#content
      table
        thead
          tr
            th Camera Location
            th Number of postings
            th Count of instances
        tbody
          each row in rows
            tr
              td #{row.location}
              td #{row.count}
              td #{row.changes}
    p Above is a summary of the footfall counts recorded to date from the start of the experiment
    div#footer
      h1 Living Laboratory - Urban Observatory

The pug template has a particular markup format that is used. This takes a bit of getting used to, but does result in a clean document ready for rendering down into HTML for return to the HTTP request. Note the way the parameters are integrated into the page, firstly the page title, and then the expression of the recordset as a variable length table, using a ‘for each’ type structure to iterate the recordset. Note lastly the use of the ‘include’ directive to include the CSS file into the HTML. The file ‘stats.css’ looks like this:

stats.css

/* Example CSS Document - Screen version */

/* Screen partitions */
#header {
  padding: 5px;
  }

#content {
  float: right;
  padding: 5px;
  width: 100%;
  }

#footer {
  clear: right;
  padding: 5px;
  }

/***************************/

/* Top Menu definition */
#minitabs {
  margin: 0;
  padding: 0 0 20px 0;
  font-family: Arial, sans-serif;
  font-size: 15px;
  border-bottom: 1px solid #ffcc00;
  }

#minitabs li {
  margin: 0;
  padding: 0;
  display: inline;
  list-style-type: none;
  }

#minitabs a {
  float: right;
  line-height: 14px;
  font-weight: bold;
  margin: 0 10px 4px 10px;
  text-decoration: none;
  color: #ffcc00;
  }

#minitabs a.active, #minitabs a:hover {
  border-bottom: 4px solid #696;
  padding-bottom: 2px;
  color: #363;
  }

/***************************/

/* Table definitions */

table {
  border-collapse: collapse;
  }
  
tbody {
  color: #999;
  } 
  
th, td {
  border: 1px solid #999;
  padding: 10px;
  }

caption, th {
  font-family: Verdana, sans-serif;
  font-size: 15px;
  font-weight: bold;
  padding: 10px;
  background-color: #696;
  color: white;
  }

/***************************/

/* General text decorations */

/* Drop capital - decorative effect */
.drop {
  float: left;
  font-family: Verdana, sans-serif;
  font-size: 450%;
  line-height: 1em;
  margin: 4px 10px 10px 0;
  padding: 4px 10px;
  border: 2px solid #ccc;
  background: #eee;
  }

/* Inset box */
.inset_box {
  float: right;
  font-family: Arial, sans-serif;
  font-weight: bold;
  color: #999;
  margin: 10px 5px -2.5px 10px;
  padding: 5px;
  border: 2px solid #ccc;
  background: #eee;
  width: 30%;
  }

/* General body text definition */
body {
  font-family: Georgia, Times, serif;
  line-height: 1.3em;
  font-size: 15px;
  text-align: justify;
  display: block;
  }

.noprint {
  font-size: 20px;
  color: "#FF3333";
  }

/* All link colours */
a:link, a:visited, a:hover, a:active {
  color: #ffcc00;
  }

/* Headings */
h1 {
  font-family: Arial, sans-serif;
  font-size: 24px;
  font-variant: small-caps;
  letter-spacing: 4px;
  color: #ffffdd;
  padding-top: 4px;
  padding-bottom: 4px;
  background-color: #ffcc00;  
  }  

h2 {
  font-family: Arial, sans-serif;
  font-size: 14px;
  font-style: italic;
  color: #ffff99;
  padding: 0;
  background-color: #ffcc00;  
  }  

/* Horizontal rule */
hr {
  display: inline;
  color: #ffff99;
  }

/* Abbreviations &amp; Acronyms */
abbr, acronym {
  border-bottom: 1px dotted;
  font-weight: Bold;
  cursor: help;
  color: #ffcc00;
  }

Not all the definitions in the CSS are used, but it is a useful set of styles. The file ‘stats.css’ is placed alongside ‘stats.pug’ in the ‘views’ folder.
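Before restarting the service, the template can be tried out on its own. The following is a minimal sketch – the file name preview.js and the sample row values are purely illustrative – using pug's renderFile API, run from the project folder so that the 'views' paths resolve:

// preview.js - render the stats template outside Express (illustrative only)
const pug = require('pug')

const html = pug.renderFile('views/stats.pug', {
  title: 'Footfall counter statistical reporter',
  // sample rows, made up for the preview
  rows: [
    { location: 'LivingLabTest', count: 10, changes: 1496 },
  ],
})
console.log(html)

Running 'node preview.js' prints the generated HTML, with the CSS inlined by the include directive, just as the ‘/stats’ endpoint will return it.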

At this point the Node.js app can be restarted:

# The node app should be started
sudo systemctl start node-api-postgres.service

The app is up and running again, hopefully continuing to receive data from the cameras via the HTTP POST requests arriving at the endpoint ‘/counter’. Now, however, we can also point our web browser at the endpoint ‘/stats’, and we should see the following simple dashboard report.

http://<<IP_ADDRESS>>:3000/stats
Screenshot: the pug app up and running, styled by the CSS
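The same endpoint can also be checked from the command line with curl, which returns the rendered HTML:

curl http://<<IP_ADDRESS>>:3000/stats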

Epilogue

This blog described taking data from a Postgres database via a custom SELECT statement, and using Node.js to pass the data to the JavaScript templating engine pug to prepare a simple HTML dashboard.

Future work could consider improved graphics, perhaps drawing from the graphics library D3, available to Node.js via the d3 package on npm.

Logging footfall counts with a Raspberry Pi and camera – technical considerations

Here at Cranfield University we are putting in place plans related to the new ‘Living Laboratory’ project, part of our ‘Urban Observatory’. This project sits within the wider UKCRIC initiative, across a number of universities. Of the many experiments in development, we are exploring machine vision as a means to provide footfall counts of pedestrian traffic in parts of the campus. This blog summarises some of the technical considerations relating to this work.

Siting the Raspberry Pi and camera

Earlier blogs here on GeoThread have described the use of a Raspberry Pi Zero running the Kerboros software, which uses OpenCV to capture movement in different ways from an attached, or network connected, camera. One of the great additions to Kerboros is a feature permitting ‘counting’ of objects passing between two identified ‘fences’ within the image view. The results of an event being triggered in this way can be output to a range of IO streams. Thus an event can save a captured image to disk (Disk), save a snippet of video to disk (Video), trigger the GPIO pins on the Raspberry Pi (GPIO), send a packet to a TCP Socket (TCPSocket), send an HTTP POST REST communication (Webhook), run a local script (Script), trigger an MQ Telemetry Transport communication (MQTT) or send a push notification message (Pushbullet) – a great selection of choices, to which is also added the ability to write to the Amazon S3 cloud as well as local disk. Outputs can be made as either single or multiple streams – thus one can save an image to disk ‘and‘ run a script, for example.

In an earlier blog, we had logged movements in this way to a bash script that executed a series of one-line Python commands to store the triggered output data to disk, appending to a CSV file. However, this approach only offers a limited solution, especially if there are multiple cameras involved. A better strategy is to write data out to a REST endpoint on a web server, using the Webhook capability. In this way the posted JSON data can be collated into a central database (from multiple camera sources).

The option for Webhook was selected, with the URL form of:
<<IP_ADDRESS>>:3000/counter

The reasons for this port number and REST endpoint are made clear in the Node.js receiver code below.
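Each counting event triggered in Kerboros results in an HTTP POST of JSON data to this URL. The exact payload structure depends on the Kerboros configuration, but an illustrative body – field names matching the database schema below, values borrowed from the curl test later in this post – might look like this:

{
  "regionCoordinates": "[413,323,617,406]",
  "numberOfChanges": 1496,
  "incoming": 1,
  "outgoing": 0,
  "timestamp": "1539760397",
  "microseconds": "6-928567",
  "token": 722,
  "instanceName": "LivingLabTest"
}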

We set up a separate server running Linux Ubuntu to receive these data. For a database, we selected Postgres as a popular and powerful Open Source SQL database. As a server, able to receive the HTTP POST communication, we selected NodeJS. Once installed, we also used the Node Package Manager (NPM) to install the Express lightweight web server, as well as the Node Postgres framework, node-postgres.

ssh username@IPADDRESS-OF-SERVER
# Update the new server
sudo apt-get update
sudo apt-get upgrade

# Install curl, to enable loading to software repositories
sudo apt install curl

#Installation of Node:
curl -sL https://deb.nodesource.com/setup_11.x | sudo -E bash -
sudo apt-get install -y nodejs
node -v
    v11.13.0
npm --version
    6.7.0

#Installation of Postgres:
#To install PostgreSQL, as well as the necessary server software, we ran the following command:
sudo apt-get install postgresql postgresql-contrib
sudo apt install postgresql-10-postgis-2.4
sudo apt install postgresql-10-postgis-scripts

We then configured Postgres to accept communication from the Raspberry Pi devices:

sudo nano /etc/postgresql/10/main/pg_hba.conf
#      host     all     all     <<IPADDRESS>>/24     md5

sudo nano /etc/postgresql/10/main/postgresql.conf
# search for 'listen' in file
#      listen_addresses = '<<SETTING>>'

We then used psql (the Postgres command line utility) to update the password for the default postgres account, and then to add a new user role to Postgres to own the resultant database.

# Change password for the postgres role/account:
sudo -u postgres psql
\password postgres
#enter a new secure password for the postgres user role account

#create a new user role/account for handling data, and allow it to create new databases
CREATE ROLE <<USERNAME>> WITH LOGIN PASSWORD '<<NEW_PASSWORD>>';
ALTER ROLE <<USERNAME>> CREATEDB;

Show all the users to check this worked:

\du
# and then quit psql
\q
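The new role then needs a database to own, to hold the footfall data – a sketch, using the placeholder names adopted elsewhere in this post:

# Create the database, owned by the new data-handling role (names are placeholders)
sudo -u postgres createdb -O <<USERNAME>> <<database>>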

Finally, a review of the commands to start, stop and restart Postgres.

sudo service postgresql start
sudo service postgresql stop
sudo service postgresql restart

Next, the new Postgres user was used to create a new table with the following fields:

id integer NOT NULL DEFAULT [we used the data type SERIAL to auto-number records, and set this field as the Primary Key]
"regionCoordinates" character varying(30)
"numberOfChanges" integer
incoming integer
outgoing integer
"timestamp" character varying(30)
microseconds character varying(30)
token integer
"instanceName" character varying(30)
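Expressed in SQL, the table definition corresponds to something like the following sketch, assuming the table is named 'footfall' (the name substituted for <<tablename>> in the code below):

CREATE TABLE footfall (
  id SERIAL PRIMARY KEY,
  "regionCoordinates" character varying(30),
  "numberOfChanges" integer,
  incoming integer,
  outgoing integer,
  "timestamp" character varying(30),
  microseconds character varying(30),
  token integer,
  "instanceName" character varying(30)
);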

Next the Node.js project environment was established:

cd
mkdir node-api-postgres
cd node-api-postgres
npm init -y

# the editor 'nano' is used to inspect and edit the resultant file 'package.json', and to edit the description as required:
nano package.json  (to edit description)

Lastly, the node package manager npm was used to install the two required node libraries Express and pg:

npm i express pg

Next, drawing from the excellent LogRocket blog, an application was written to receive the data, comprising two files, ‘index.js’, and ‘queries.js’, as follows:

index.js

#!/usr/bin/env node
// index.js
const express = require('express')
const bodyParser = require('body-parser')
const app = express()
const db = require('./queries')
const port = 3000

app.use(bodyParser.json())
app.use(
   bodyParser.urlencoded({
     extended: true,
   })
)

app.get('/', (request, response) => {
   response.send('Footfall counter - Service running')
})

app.post('/counter', db.createFootfall)

app.listen(port, () => {
   console.log(`App running on port ${port}.`)
})

queries.js

// queries.js

const Pool = require('pg').Pool
const pool = new Pool({
   user: '<<username>>',
   host: '<<hostname>>',
   database: '<<database>>',
   password: '<<password>>',
   port: 5432,
})

const createFootfall = (request, response) => {
  const {regionCoordinates, numberOfChanges, incoming, outgoing, timestamp, microseconds, token, instanceName} = request.body
  pool.query('INSERT INTO <<tablename>> ("regionCoordinates", "numberOfChanges", "incoming", "outgoing", "timestamp", "microseconds", "token", "instanceName") VALUES ($1, $2, $3, $4, $5, $6, $7, $8) RETURNING *', [regionCoordinates, numberOfChanges, incoming, outgoing, timestamp, microseconds, token, instanceName], (error, result) => {
    if (error) {
      console.log(error.stack)
      return response.status(500).send('Error inserting footfall record\n')
    }
    console.log(`Footfall added with the ID: ${result.rows[0].id}`)
    response.status(200).send(`Footfall added with ID: ${result.rows[0].id}\n`)
  })
}

module.exports = {
  createFootfall
}

One thing to note in the code above is the use of ‘RETURNING *’ within the SQL statement – this allows the ‘result’ object to access the full row of data, including the automatically generated id reference. Otherwise this would not be accessible, as it is generated on the server and not passed in by the INSERT statement.

Now the Node application is started up, listening on port 3000 on the REST endpoint ‘/counter’ (both defined in index.js).

node index.js

The result is data streaming into the Postgres database.

Once this is in place, we can run a test to check the system can receive data. For this we can use the curl command again, like this:

curl --data "regionCoordinates=[413,323,617,406]&numberOfChanges=1496&incoming=1&outgoing=0&timestamp=1539760397&microseconds=6-928567&token=722&instanceName=LivingLabTest" http://<<IPADDRESS>>:3000/counter
Footfall added with ID: 1

The result is that a new record is added to the database. To check the data row was added correctly, a quick way is to use the graphical pgAdmin tool for interacting with Postgres databases. Set up a new connection to the server database, and inspect the table to see the record, thus:

pgAdmin utility
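As a command-line alternative to pgAdmin, psql can make the same check – a sketch, assuming the connection placeholders used in queries.js and a table named 'footfall':

psql -U <<username>> -h <<hostname>> -d <<database>> -c 'SELECT * FROM footfall ORDER BY id DESC LIMIT 5;'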

The next step is to connect to the Raspberry Pi Zero running Kerboros, and configure its output to use the ‘Webhook’ IO device, taking, as noted above, a URL of the form:
<<IP_ADDRESS>>:3000/counter

Note here the port number 3000 and the REST endpoint ‘/counter’ referenced. Once the Kerboros software is updated, data should be seen to be arriving at the database. pgAdmin can be used again to inspect the result.

A final step then is to set up a daemon service that can stop and start the node programme. This means that if the server is rebooted, the service will restart automatically. We followed the Hackernoon blog and took the following steps:

First, a file was created “/etc/systemd/system/node-api-postgres.service”, with the following content:

[Unit]
Description=Node.js node-api-postgres Footfall service

[Service]
PIDFile=/tmp/node-api-postgres.pid
User=<<USERNAME>>
Group=<<GROUP>>
Restart=always
KillSignal=SIGQUIT
WorkingDirectory=/home/<<USERFOLDER>>/node-api-postgres/
ExecStart=/home/<<USERFOLDER>>/node-api-postgres/index.js

[Install]
WantedBy=multi-user.target

Next, make the file executable (note that index.js must start with the ‘#!/usr/bin/env node’ shebang line, as above, for systemd to execute it directly):

chmod +x index.js 

Next, the service can be prepared:

sudo systemctl enable node-api-postgres.service

If the service file needs any editing after the service is prepared, the daemons need to all be reloaded, thus:

sudo systemctl daemon-reload

Finally, the service may now be started, stopped or restarted:

sudo systemctl start node-api-postgres.service
sudo systemctl stop node-api-postgres.service
sudo systemctl restart node-api-postgres.service
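The status of the service, and its log output, can be inspected with the standard systemd tools:

sudo systemctl status node-api-postgres.service
sudo journalctl -u node-api-postgres.service -f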

A command can also be run to ensure the Postgres database starts automatically on a reboot:

sudo update-rc.d postgresql enable
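On systemd-based Ubuntu releases the same effect can be achieved with systemctl:

sudo systemctl enable postgresql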

Conclusion and Epilogue

This blog has shown how to capture footfall counting data sourced from a Raspberry Pi with a camera running Kerboros, to a separate server running the database Postgres, using Node.js and related packages. The result is a robust logging environment capable of receiving data from one or more cameras, logged to a database.

In order to operate the system, the general instructions are as follows.

First, log onto the Unix box

ssh <<USER>>@<<IPADDRESS>>

Next, update and upgrade the system (do this regularly)

sudo apt-get update
sudo apt-get upgrade

Ensure Postgres is running (the commands to start, stop and restart it, as needed):

sudo service postgresql start
sudo service postgresql stop
sudo service postgresql restart

Next, ensure the Node app is running

sudo systemctl start node-api-postgres.service

You should now just wait for data to arrive – using pgAdmin to inspect and interrogate the database table.