Project Vita: Chapter 4 – Retrieving Data with a Scalable Sensor Solution
Instead, following MuleSoft’s three-tier architecture, we implemented a ‘Process API’ to act as our master node and deployed it to infrastructure within each ‘site’ (i.e. on the same subnet as an array of nodes, such as an employee’s home or an office). This Process API searches all nodes (System APIs) simultaneously for the required sensor and retrieves the data before returning it to the client. The master node also enabled us to retrieve readings from all available sensors within the site at once and aggregate the results into a single response. From a security point of view, it also meant we could restrict access to the System APIs and regulate inbound traffic to the site through a single point of entry.
The next step was aggregating results across sites, as we had multiple employees running nodes at home and planned to add nodes to each of our clients’ offices. We were able to reuse our master node concept and add a further tier within the process layer, deployed to CloudHub, to act as our ‘super node’. This API could call each of our master nodes simultaneously and aggregate the results, or act as a single endpoint to search for and retrieve the data for any specific sensor within any site (globally).
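The master node and super node follow the same scatter-gather pattern: fan a request out to every child in parallel, then merge the responses. The real solution does this inside Mule Process APIs over HTTP; the sketch below illustrates the pattern in Python with stubbed, hypothetical node callables standing in for the System API calls.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for the System APIs running on each node.
# In the real solution these would be HTTP calls made by the Process API.
NODES = {
    "node-1": lambda: [{"sensor": "fern-01", "moisture": 41}],
    "node-2": lambda: [{"sensor": "ficus-02", "moisture": 55}],
    "node-3": lambda: [{"sensor": "basil-03", "moisture": 37}],
}

def aggregate_site_readings(nodes):
    """Call every node in parallel and merge the results into one response."""
    with ThreadPoolExecutor(max_workers=len(nodes)) as pool:
        results = pool.map(lambda fetch: fetch(), nodes.values())
    # Flatten each node's list of readings into a single site-wide list.
    return [reading for node_result in results for reading in node_result]

readings = aggregate_site_readings(NODES)
```

The super node applies the same function one tier up, treating each site's master node as a child, which is why the concept could be reused without change.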
Containerization of the Mule Runtime
As we were developing our solution remotely, with multiple team members configuring nodes across various sites, it became imperative that we had a consistent development environment, particularly when it came to troubleshooting or debugging.
We were using the on-premises Mule Runtime, but hardware varied considerably, with a combination of laptops, MacBooks, Raspberry Pis, and an HP ProLiant MicroServer, each set up and configured by a different team member. This meant that when anything went wrong during development or deployment, it proved difficult to isolate the issue, even though it was often a simple developer oversight.
We decided to leverage Docker to containerize the Mule Runtime, which let us run a single command and get an identical environment regardless of the hardware in use. This provided a consistent backbone for all of our nodes and master nodes, and also reduced our initial node setup time from a couple of hours to a matter of minutes.
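The post doesn't show the team's actual image, but a containerized Mule Runtime of this kind can be sketched as a Dockerfile. Everything here is an assumption for illustration: the base image, the standalone Mule 4.2.2 distribution directory, and the artifact paths.

```dockerfile
# Hypothetical sketch – base image, distribution path, and app location
# are assumptions, not the team's actual Dockerfile.
FROM openjdk:8-jdk-slim

ENV MULE_HOME=/opt/mule
# A standalone Mule 4.2.2 runtime distribution, downloaded separately.
COPY mule-standalone-4.2.2 $MULE_HOME

# Deployable Mule application archives go into the runtime's apps directory.
COPY target/*.jar $MULE_HOME/apps/

EXPOSE 8081
CMD ["/opt/mule/bin/mule"]
```

With an image like this, bringing up an identical node on any hardware reduces to something like `docker build -t vita-node . && docker run -p 8081:8081 vita-node`.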
We had the beginnings of our Application Network with the master nodes and super node now deployed, and the data was being brought to life using Einstein Analytics, so we set about finding the next integration challenge while the Salesforce team started looking at embedding Lightning Web Components and leveraging Field Service.
Project Vita: Chapter 2 - The Quest for Data
- Plant Sensor
- Bluetooth Enabled Controller
- Mule 4.2.2 Runtime
- Python 3
The first challenge was identifying a feasible means of capturing plant data and narrowing down sensors that were compatible with a wider technological landscape (i.e. not locked into a proprietary solution or product).
At our disposal we had a Raspberry Pi (a compact and powerful single-board computer) with a GPIO header to which all kinds of sensors can be connected. On the market you can typically find two types of soil sensor: resistive and capacitive. We preferred the capacitive option since, by design, it has no exposed metal and is therefore at risk of neither the oxidation nor the corrosion that affect resistive sensors.
One of the first sensors we experimented with was a capacitive moisture sensor from Adafruit called the STEMMA Soil Sensor; once it was connected to a Pi, we were able to get readings for soil moisture and temperature. We successfully used this solution for two plants located side by side, but it quickly became evident that it would not scale to the extent we required.
It was back to the drawing board to explore what other sensors were available on the market, where we identified a promising candidate made by Xiaomi: a wireless BLE (Bluetooth Low Energy) device that provides readings for moisture and temperature, as well as light exposure and soil fertility.
Solution Architecture (v1)
After exploring a few different approaches and iterating on the most promising one, the solution uses IoT sensors, Raspberry Pis, MuleSoft, Python, and Salesforce.
A sensor placed in each plant pot monitors the ambient temperature, light exposure, moisture, and soil fertility (measured as conductivity). Each Raspberry Pi is placed in a cluster of plants and can scan for nearby sensors and retrieve individual plant data. This is made possible by Python code running on each Pi and by deployed applications that send data to Salesforce at a set interval. These applications were built using MuleSoft, an integration and API platform for connecting applications, data, and services.
Taking a deeper dive, we hosted our Mule applications on the Mule Runtime (4.2.2), a lightweight and scalable runtime engine. Following best practices such as modularization, concise documentation in READMEs, and promoting API consumption, we developed two applications to take care of our integration requirements:
- System API – leverages the Python application on the backend and transforms the payload in accordance with our API contract (RAML specification).
- IoT Batch Scheduler – calls various endpoints of the Mule System API, for example to scan for local devices and retrieve data. This means that in a cluster of plants, or in a set location, one Raspberry Pi can retrieve readings from any number of devices within range. The batch scheduler application then sends the data to Salesforce at a user-defined frequency.
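The flow through these two applications can be sketched in Python: scan for devices, read each one, transform the raw payload into the contract shape, and hand the batch off at a user-defined interval. The device IDs, field names, and helper functions below are assumptions for illustration; in the real solution the scan and read steps are HTTP calls to the Mule System API.

```python
import time

# Stubbed System API endpoints – hypothetical IDs and raw field names.
def scan_for_devices():
    return ["C4:7C:8D:6A:01:AA", "C4:7C:8D:6A:02:BB"]

def read_sensor(device_id):
    # Raw payload roughly as the Python/BLE layer might return it.
    return {"id": device_id, "temp_c": 21.5, "moisture_pct": 48,
            "light_lux": 310, "conductivity_us_cm": 950}

def to_plant_event(raw):
    """Transform a raw reading into the shape the API contract expects."""
    return {"deviceId": raw["id"],
            "temperature": raw["temp_c"],
            "moisture": raw["moisture_pct"],
            "light": raw["light_lux"],
            "fertility": raw["conductivity_us_cm"]}

def run_batch(send, interval_s=900, cycles=1):
    """Each cycle: read every device in range, then send the whole batch."""
    for i in range(cycles):
        events = [to_plant_event(read_sensor(d)) for d in scan_for_devices()]
        send(events)
        if i < cycles - 1:
            time.sleep(interval_s)  # user-defined frequency between batches

run_batch(send=print)  # in production, `send` would post to Salesforce
```

One Pi can therefore serve any number of in-range sensors: only the device list grows, while the transform and batch-send logic stays the same.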
In summary, each Pi runs three applications: one Python application and two Mule applications. The sequence diagram illustrates how these three applications interact and how data is sent to Salesforce as a Plant Event.
As a final step, we added automation and built-in resiliency where possible to ensure our solution is easily consumable, robust, and scalable. For example, each Pi has a cron job that pulls the latest source code from our Git repository, compiles the applications, and automatically redeploys them on reboot, whether after a power outage or when the applications need to be updated.
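A reboot-triggered job of this kind can be expressed as a single crontab entry. The repository path, script name, and container start command below are assumptions for illustration, not the team's actual configuration.

```shell
# Hypothetical crontab entry on each Pi.
# On every reboot: pull the latest code, package the apps, then start the
# containerized runtime with the fresh artifacts deployed.
@reboot cd /home/pi/project-vita && git pull && ./package-apps.sh && docker compose up -d
```

Because the job runs at boot, a power outage becomes a self-healing event: the Pi comes back up, refreshes itself, and rejoins the network without manual intervention.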
Now that we have our plant data successfully feeding into Salesforce, stay tuned next week to see how we are leveraging the power of Salesforce IoT Orchestration to bring the data to life!