Central Cloud or Distributed “Fog”?
There is a new technology “push” underway called the “Internet of Things” (IoT) and, within the industrial context, the “Industrial Internet of Things”. Devices and equipment are generating large volumes of data that need to be acquired, analyzed and used smartly to enhance operations. There are two perspectives on this:
- Outside the Organization: For example, GE pulls data inward from customers running its turbines and locomotives, analyzes that data, predicts faults and anomalies, and performs predictive maintenance. In such cases a central cloud with access to that external data may make sense.
- Inside the Organization: Data is generated inside the organization by industrial process control systems, SCADA systems, vendors’ materials data, warehousing, MRP/ERP, production reporting, customer feedback from the web and customer service, and so on. It needs to be archived, analyzed and used to better understand and enhance products and production. In such cases a distributed cloud (“fog computing”) makes the most sense.
Internal Analytics: Sensitive and Secret
We at IntelliDynamics operate in both spheres; we don’t care where the data comes from. But the vast majority of industrial operations fall under the latter: internal data used for internal purposes. This data is highly sensitive and tightly controlled. It involves secret recipes, details of the process technologies used to make products, and closed-loop control of processes, which poses a critical risk to human health, safety and the environment, especially when using or making explosive or toxic compounds, operating oil and gas platforms, manufacturing chemicals, or in other such situations. These production environments, and their data, are tightly controlled with restricted access. Customers have little desire to place this data in some vendor’s “cloud”, where physical control of the data is lost.
80 / 20 Rule
As a general rule, 80% of the data does not need to leave the location where it is created; it can be used locally to operate, control, understand and improve the process and the products being produced. This suggests the best strategy is to distribute analytics, prediction and optimal control close to the source of the data, rather than move it to a distant “cloud”. The data remains well controlled, and only what must move is moved, saving bandwidth as well.
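The 80/20 pattern above can be sketched in a few lines. This is a minimal, hypothetical illustration (the `EdgeNode` class and its limits are invented for the sketch, not a real product API): raw readings stay in a local log at the site, and only out-of-limit events plus periodic roll-up summaries are queued for transmission.

```python
# Hypothetical sketch of the "80/20" pattern: keep raw data local,
# forward only anomalies and compact summaries.
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class EdgeNode:
    low_limit: float
    high_limit: float
    raw_log: list = field(default_factory=list)   # full history, stays on-site
    outbound: list = field(default_factory=list)  # the small fraction that moves

    def ingest(self, reading: float) -> None:
        self.raw_log.append(reading)              # every raw point stays local
        if not (self.low_limit <= reading <= self.high_limit):
            # Only out-of-limit events are queued to leave the plant
            self.outbound.append({"type": "anomaly", "value": reading})

    def summarize(self) -> None:
        # Periodic roll-up: one small record instead of every raw point
        self.outbound.append({
            "type": "summary",
            "count": len(self.raw_log),
            "mean": mean(self.raw_log),
        })

node = EdgeNode(low_limit=10.0, high_limit=90.0)
for r in [42.0, 55.0, 95.0, 60.0]:   # 95.0 violates the high limit
    node.ingest(r)
node.summarize()
```

Here four raw readings stay on-site while only two records (one anomaly, one summary) are queued outward, which is the bandwidth and control benefit the rule describes.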
Tight Relationship with Control
Commonly, our analytics are used to create optimal setpoints for control systems, driving production smartly to attain multiple business performance objectives. Most customers prefer to put our systems on the “control network”, in a locked cabinet next to the Distributed Control System (DCS), not in a data center thousands of miles away. This reduces vulnerability to network problems and latency, and keeps the data under lock and key within the four walls of the plant.
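One reason this relationship with control is so tight: a recommended setpoint must always be checked against the plant’s approved operating window before it reaches the DCS. The sketch below is purely illustrative (the function name and limits are invented, and a real model would be far richer), but it shows the basic advisory pattern.

```python
# Illustrative only: clamp a model-suggested setpoint to the
# operator-approved safety limits before it is sent to the DCS.
def recommend_setpoint(model_optimum: float, lo: float, hi: float) -> float:
    """Return the model's suggestion, constrained to the [lo, hi] window."""
    return max(lo, min(hi, model_optimum))

# The model suggests 103.7, but the approved window is 80-100,
# so the value written to the control system is clipped to 100.0.
safe = recommend_setpoint(103.7, lo=80.0, hi=100.0)
```

Running this optimization loop locally, on the control network, is what keeps the latency low enough for closed-loop use.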
Distributed Fog Computing Analytics
Placing Analytics Right Where It is Needed
While some vendors may be building large centralized clouds, we are taking the opposite approach for industrial analytics, one that our customers tell us they need and want: distributed “fog computing”, placed strategically throughout our customers’ organizations. It links to customer service for product performance, to vendors for materials characteristics, to process control for how products are made, and to quality control for the resulting product characteristics, delivering excellent visualization, understanding, prediction and optimization right where it is needed and used.
Over to you
What do you think? Do you want your product, process and materials data in a vendor’s cloud? I’m also curious: we’re building this distributed cloud today. Would you be interested in hearing how that goes in a future blog post? 😉
I can’t wait to chat about this with you in the comments!