The PlanIT OS™ - Distributed Cloud Architecture
Controls Layer Distribution
The PlanIT OS™ integrates the controls layer with the network infrastructure so that the most time-critical functions run at the edge of the network, minimizing latency and ensuring performance. This also isolates 'chatty' information from the rest of the network (for example, "don't send new history data until a sensor value changes"). The PlanIT OS™ also runs within the security context of a local subnet, which forms an important part of the security framework.
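The "don't send until a value changes" behaviour is essentially report-by-exception filtering at the edge. A minimal sketch of such a filter is shown below; the class name, deadband parameter and method are illustrative assumptions, not part of the PlanIT OS™ API.

```python
class ReportByException:
    """Suppress history updates until a sensor value changes beyond a deadband.

    Illustrative sketch only; names are hypothetical, not PlanIT OS API.
    """

    def __init__(self, deadband: float = 0.0):
        self.deadband = deadband
        self._last_sent: dict[str, float] = {}

    def should_send(self, sensor_id: str, value: float) -> bool:
        last = self._last_sent.get(sensor_id)
        if last is not None and abs(value - last) <= self.deadband:
            return False  # within deadband: keep the 'chatty' data local
        self._last_sent[sensor_id] = value  # record the last reported value
        return True
```

An edge controller would apply such a filter before forwarding history data upstream, so steady-state sensor chatter never leaves the local subnet.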
This distribution also makes it relatively simple, in most environments, to provide redundant infrastructure – something all but unknown in the building controls market. For example, a network device that provides control functions for one floor of a building can now be cross-coupled to another floor, so that if one controller fails the other can take over. The architecture is also inherently resilient in that control applications can function autonomously to maintain mission-critical functions, even if isolated from the rest of the PlanIT OS™ network. Local storage allows historical data to be buffered for many hours before any information is lost.
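The cross-coupled failover described above can be sketched as a simple zone handover between paired controllers. This is a toy model under assumed names (the health flag, zone sets and rebalance function are not from the source):

```python
class FloorController:
    """Hypothetical model of one floor's controller in a cross-coupled pair."""

    def __init__(self, name: str):
        self.name = name
        self.healthy = True
        self.zones: set[str] = set()  # control zones currently served


def rebalance(a: FloorController, b: FloorController) -> None:
    """If one controller of a cross-coupled pair fails, its peer takes over its zones."""
    if not a.healthy and b.healthy:
        b.zones |= a.zones
        a.zones.clear()
    elif not b.healthy and a.healthy:
        a.zones |= b.zones
        b.zones.clear()
```

In practice the surviving controller would also assume the failed peer's history buffering, so the "many hours" of locally stored data described above are preserved across the failover.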
Supervisory Layer Distribution
In an urban built environment, the supervisory system is also distributed. In general, the PlanIT OS™ is distributed in 'clusters' around a development or city/region, rather than centralized in a data center. The clusters will usually be located in an equipment room in a multi-tenant building and can be integrated with the building's HVAC / district heating/cooling system for maximum thermal efficiency and minimum investment cost.
This approach has several advantages. The clusters will operate autonomously – given power – if disconnected from the rest of the PlanIT OS™ / urban network. Equally, PlanIT OS™ clusters will usually be deployed with spare capacity, so that they can take on the most mission-critical functions from another cluster if it fails or needs to be brought down for maintenance. When not used for redundancy, this spare capacity can be used for 'roving' loads such as analytics and simulation, sold as compute capacity to office tenants or other partners, or throttled down to minimize power consumption.
Data redundancy is also achieved more elegantly than in the traditional 'replicate the data center' model. One copy of data is always stored in the local cluster, because this is where it was generated and will most commonly be referred to again. Equally, at least one copy of data is stored in a location remote from the originating cluster. Finally, aggregated information is stored in replicated 'supernode' clusters – larger-than-average clusters holding more of the historical city data – in order to optimize performance of the analytics engine.
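The three-copy policy can be expressed as a small placement function: one local copy, one remote copy, and an aggregate in a supernode. The function and its selection rule are an illustrative sketch, not the actual PlanIT OS™ replication logic:

```python
def replica_targets(origin: str, clusters: list[str], supernode: str) -> list[str]:
    """Choose storage locations under the sketched redundancy policy:
    one copy local to the originating cluster, at least one remote copy,
    plus an aggregated copy in a supernode cluster.
    """
    # First peer cluster that is not the origin serves as the remote copy.
    remote = next(c for c in clusters if c != origin)
    targets = [origin, remote]
    if supernode not in targets:
        targets.append(supernode)
    return targets
```

A production policy would pick the remote cluster by distance, load, or failure-domain rules rather than taking the first peer, but the invariant – local, remote, supernode – is the same.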
The least compressed, most 'raw' data – especially video imagery – is typically kept for forensic purposes for a few days only; it usually remains local to the cluster, minimizing network load.
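A retention window of this kind amounts to time-to-live pruning on the local cluster's store. A minimal sketch, with the three-day default and record shape assumed for illustration:

```python
from datetime import datetime, timedelta

def prune_raw(records: list[dict], now: datetime,
              retention: timedelta = timedelta(days=3)) -> list[dict]:
    """Keep only raw records (e.g. video frames) younger than the retention window.

    Each record is assumed to carry a 'ts' timestamp; the 3-day default is
    illustrative of a 'few days only' forensic window.
    """
    return [r for r in records if now - r["ts"] < retention]
```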
All PlanIT OS™ functions are designed to enable this 'multinode' deployment model and to optimize the benefits of this physical architecture while mitigating its disadvantages (for example, avoiding database joins propagated across multiple dispersed systems). Because functions can be redistributed across the architecture at will, this is considered a 'distributed cloud' model: the principal characteristic of location independence still applies.
Supervisory Layer – Tiering
As noted above, the PlanIT OS™ scales out by way of a distributed multinode model. Further tiers can also be added in a 'vertical' dimension, aggregating functionality in a 'pyramid' topology. For additional optimization, the routing of information from one node to another need not follow purely hierarchical lines – the routing model can be continually tuned to meet the requirements of the specific deployment and the applications in use.
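Tunable routing over a pyramid topology can be sketched as a default parent-tier route plus per-destination overrides. The data structures and function below are hypothetical, chosen only to illustrate the idea:

```python
def next_hop(node: str, parents: dict[str, str],
             overrides: dict[tuple[str, str], str], dest: str) -> str:
    """Pick the next hop for data leaving `node` toward `dest`.

    Default 'pyramid' routing sends data up to the node's parent tier,
    but per-(node, destination) overrides let operators tune paths to
    the specific deployment, e.g. routing peer-to-peer between clusters.
    """
    return overrides.get((node, dest), parents[node])
```

With an empty override table the system behaves as a pure hierarchy; adding overrides incrementally re-tunes routes without restructuring the tiers.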
Supervisory Layer – Cloud Deployment
Certain solutions lend themselves better to a more classical cloud deployment, where functions are deployed in a typical data center hosting environment. The PlanIT OS™ also adapts to this paradigm, with the ability to scale up and out remaining critical to the way functionality is deployed on multiple virtual machines to meet demand. These deployments are likely to be most common in M2M solutions, and where there is little capacity to host distributed local infrastructure – for example when integrating the PlanIT OS™ with streetlights, or when retrofitting single-family homes.
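Scaling out "on multiple virtual machines to meet demand" reduces, at its simplest, to a capacity calculation with a redundancy floor. The sizing rule below is a generic sketch, not a documented PlanIT OS™ formula:

```python
import math

def vm_count(demand_rps: float, per_vm_rps: float, minimum: int = 2) -> int:
    """Number of VMs to deploy: enough to cover demand (requests/sec),
    never fewer than `minimum` so a single VM failure is survivable.
    All parameters are illustrative assumptions.
    """
    return max(minimum, math.ceil(demand_rps / per_vm_rps))
```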