Improve Reliability With Digital Twins
EP Editorial Staff | October 12, 2022
Start with a criticality assessment to develop digital twins for your assets that will keep you ahead of maintenance and performance issues.
By Will Goetz, PCA Consulting
A digital twin should be based on a three-dimensional scan of an asset that connects to key static and dynamic data about that asset. From the visual depiction, a user should be able to access static data—equipment make and model, bills of materials, maintenance plans, and warranty information. A digital twin should also make it easy to access, view, and analyze time series indicators of asset health, including pressure, temperature, flow, electrical current, and vibration data. In addition, a digital twin should provide an early-warning system against functional failures and a platform for making corrective-action plans.
While digital twins offer a lot to be excited about, there are many hurdles to cross before they achieve their potential. The biggest obstacles are resource scarcity, sensor build-out, network security, and cultural acceptance. Starting today to lower those hurdles will provide immediate benefits in their own right, as well as prepare your operations for the future potential of digital twins.
Begin by conducting a review to determine which assets should be included in digital-twin development. An accurate criticality assessment of equipment is essential to rank assets from most to least critical. This should be more granular than the A, B, and C ranking typically supported in an enterprise asset-management system. Use the criticality ranking to select a set of assets that does not overwhelm the available resources. Plan to add assets of decreasing criticality over time.
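One way to make the ranking more granular than an A/B/C classification is a weighted criticality score across a handful of factors. The sketch below illustrates the idea; the asset names, factors, and weights are assumptions for this example, not a prescribed standard.

```python
# Illustrative sketch: ranking assets by a weighted criticality score.
# Factor names, weights, and 1-10 ratings are hypothetical examples.

WEIGHTS = {"safety": 0.4, "production_impact": 0.35, "repair_cost": 0.25}

assets = [
    {"name": "Feed Pump P-101", "safety": 9, "production_impact": 8, "repair_cost": 6},
    {"name": "Cooling Fan F-210", "safety": 3, "production_impact": 4, "repair_cost": 2},
    {"name": "Compressor C-300", "safety": 8, "production_impact": 9, "repair_cost": 9},
]

def criticality(asset):
    """Weighted sum of the 1-10 factor ratings."""
    return sum(WEIGHTS[f] * asset[f] for f in WEIGHTS)

# Rank most to least critical, then select a first wave that fits
# the available resources; assets of lower criticality come later.
ranked = sorted(assets, key=criticality, reverse=True)
first_wave = ranked[:2]

for a in ranked:
    print(f"{a['name']}: {criticality(a):.2f}")
```

Capping the first wave (here at two assets) is the point of the exercise: the ranking exists so the program's scope never outruns the people available to do the work.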
With this ranking as a guide, begin to develop the reliability perspective of your digital twins by cataloging their failure modes and defining how failures will be detected. Dust off any Reliability Centered Maintenance (RCM) analyses for critical systems or Failure Modes and Effects Analyses (FMEA) and review the methods for detecting early failure indicators.
Condition-monitoring and operational sensors are capable of identifying more than 90% of failure effects. The sensor data identified through failure analysis is the digital twin’s reliability data model and should include all of the sensors, whether the data is currently collected or the sensors need to be added in the future. Developing a catalog of potential failures is a critical step toward identifying what sensors will be needed. Failure studies will also inform the operations and maintenance actions to be taken when a failure is detected.
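The mapping from failure modes to detection methods can be sketched as a simple catalog, with the union of detection sensors forming the twin's reliability data model. The equipment, failure modes, and sensor names below are hypothetical examples, not drawn from any particular RCM or FMEA study.

```python
# Illustrative sketch: a failure-mode catalog that maps each failure
# mode to the sensors that can detect its early indicators.
# Assets, modes, and sensor names are hypothetical.

failure_modes = [
    {"asset": "Feed Pump P-101", "mode": "Bearing wear",
     "detection": ["vibration", "bearing_temperature"]},
    {"asset": "Feed Pump P-101", "mode": "Impeller erosion",
     "detection": ["discharge_pressure", "flow"]},
    {"asset": "Compressor C-300", "mode": "Valve leakage",
     "detection": ["discharge_temperature", "current"]},
]

# The union of all detection sensors is the digital twin's reliability
# data model -- it includes sensors whether or not data is collected today.
required = {s for fm in failure_modes for s in fm["detection"]}

for sensor in sorted(required):
    print(sensor)
```

Keeping the catalog explicit also preserves the link back to the corrective actions: when a sensor fires, the catalog says which failure mode is suspected and which O&M response the failure study prescribed.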
This step requires significant effort. Be wary of claims that you can skip the hard parts by dumping truckloads of data into artificial intelligence and machine-learning algorithms that will find the problems for you. Garbage in, garbage out is as immutable a law as the laws of thermodynamics. Software may be able to monitor staggering amounts of data, but it will not impart context to the finding (How important is it?) or direct the organization on what to do next.
Doing the hard work will result in huge efficiency gains along the entire journey and help answer questions such as:
• How important is that sensor?
• Is real-time monitoring necessary?
Note that the steps described above are also the prescription for laying the foundation of a reliability program.
In addressing resource scarcity, it’s vital to recognize that digital-twin development should be conducted at a marathon pace. It’s not a sprint. Internal resources should be budgeted for multiple years. Given the nature of this program, costs may even be capitalized, which can have a positive impact on operating budgets.
Nevertheless, given today’s labor scarcity, organizations will want to find resource efficiencies. For example, centralized groups can develop data models that can be leveraged across multiple sites, helping larger organizations gain efficiency. All organizations, regardless of size, may save many man-years of effort by purchasing customizable, off-the-shelf, failure-analysis templates. In addition, some analytical offerings include embedded failure analyses.
Sensors and networks
Building out sensors consists of several possible activities. The simplest is to gather existing operational data that is already stored in a data historian. Since this type of data was created to operate a manufacturing process, it’s unlikely to provide a complete picture of asset health. Condition-monitoring data will need to be added, including sensors that measure parameters, such as pressure, temperature, and flow, that are not needed for operating purposes. In addition, PLCs and OEM systems are likely to contain useful data not shared outside these systems. Knowing the importance of your failures will help build out your sensors logically and efficiently.
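The build-out described above amounts to a gap analysis: for each sensor the failure analysis requires, decide whether the data already exists in the historian, is locked inside a PLC or OEM system, or needs a new condition-monitoring sensor. A minimal sketch, with all tag names and source lists hypothetical:

```python
# Illustrative sketch: classifying required sensors by data source.
# Tag names and the contents of each source set are hypothetical.

required = {"vibration", "bearing_temperature", "flow", "discharge_pressure"}
historian_tags = {"flow", "discharge_pressure"}   # operational data already stored
plc_tags = {"bearing_temperature"}                # exists, but locked in a PLC/OEM system

def source_for(tag):
    """Decide where the digital twin will get each required signal."""
    if tag in historian_tags:
        return "historian"  # already collected
    if tag in plc_tags:
        return "expose from PLC/OEM system"
    return "install new condition-monitoring sensor"

for tag in sorted(required):
    print(f"{tag}: {source_for(tag)}")
```

Because the required set comes from the failure analysis, the resulting install list is already prioritized by the importance of the failures each sensor detects.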
Companies should also focus on designing a network architecture that preserves process security while making process and non-process data available for cloud access, where maximum computing power is available. The NAMUR Open Architecture (NOA) provides a blueprint for maintaining security while accessing process data and augmenting it with the sensor data required by the digital-twin data model. A detailed description of the concept can be found in NAMUR Recommendation NE 175.
Developing a robust, scalable wireless strategy is a critical part of efficiently building out sensors. Wireless-sensor networks can also be perfectly aligned with NOA. Wireless is already the lowest-cost approach to deploying sensors and is likely to become even cheaper. It's reasonable to expect that sensor manufacturers will begin to market lower-cost wireless sensors that do not have the reliability or durability required in process control. Those higher-end sensors are not necessary for the purposes discussed here, so the cost of deployment shouldn't be compared to that of hardware used in mission-critical process control.
Realizing the benefits
What an organization does with a digital twin matters most. No one should be rewarded for merely finding a potential failure. Rewards should only be handed out for finding and repairing a problem before a functional failure occurs. I once heard a client describe how they had abandoned their predictive-maintenance program because it failed to deliver value. Their rationale was not what you might think. They were actually very successful at finding potential failures, but they garnered no value because they could not move the required actions through their work processes before the assets actually failed. If your work processes are broken, and/or your people do not follow your processes, you will not realize value from better early-warning systems such as digital twins.
Peter Drucker famously quipped that, “Culture eats strategy for breakfast.” Digital twins may be an important technological innovation and even part of a larger reliability and workforce-transformation strategy but success depends on cultural adoption. A workforce that views digital twins as the “flavor of the day” will easily scuttle programs by sticking to the status quo. To ensure successful adoption, all ranks of company leadership should be educated on the importance of digital twins and provided with details of the multi-year project and frequent progress reports.
Take stock of your capabilities to compete today and five years from now and think about how the power of digital twins, in their end state, may transform your operations. While detailed, continuous measurement of asset health may seem like a lofty, distant goal, keep in mind that all of the activities required to establish a digital twin will bring benefits in their own right. Overall reliability improvement will not be the least of those benefits. EP
Will Goetz is Vice President of Corporate Development at Performance Consulting Associates, Duluth, GA (pcaconsulting.com). He brings to the role deep operational expertise, extensive market knowledge, and strong quantitative analytical skills. Prior to joining PCA he played a number of thought-leadership roles at Emerson.