
Drive Strategy With Performance Metrics

EP Editorial Staff | July 10, 2015


Use this culture- and target-based method to unlock the hidden value of your measurements, and holistically improve your manufacturing operations.

By David A. Rosenthal, P.E., CMRP

For most companies, manufacturing strategy focuses on safety, cost, quality, productivity, reliability, delivery, and flexibility. Those elements are measured through performance metrics. Metrics help determine whether the goals for those manufacturing-strategy elements are being achieved and, importantly, signal success or failure. They can also drive improvement when targets are set incrementally from year to year. But, for them to work, they must be properly designed and implemented to drive the behaviors needed to achieve strategic goals.

The purpose

The purpose of performance metrics is, first, to determine how well the manufacturing unit is meeting its goals for the elements included in its strategy. The maintenance process is connected to the strategy through goals set for safety, cost, reliability (uptime), and productivity. These are part of a cascade of metrics that extends from higher-level measures down to floor-level metrics arranged on a dashboard.

For instance, uptime, or asset utilization, is connected to a higher-level metric such as mechanical availability. This metric, in turn, is connected to mean time to repair (MTTR) and mean time between failures (MTBF). Maintenance productivity is connected to schedule compliance, which, in turn, is connected to mechanic utilization and wrench time. Cost is also connected to MTBF which, in turn, connects to labor, parts, overtime, and storeroom inventory. Dashboard reporting that shows these connections is important to the interpretation of any one performance metric.
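
To illustrate how these metrics cascade from the same underlying data, the short Python sketch below computes MTBF, MTTR, and mechanical availability for one asset. The failure data, field layout, and simplified formulas are assumptions made for the example, not output from any particular CMMS.

```python
# Illustrative only: a minimal sketch of how MTBF, MTTR, and mechanical
# availability can be derived from the same failure records. The data and
# the simplified formulas used here are assumptions for the example.

# Hypothetical failure events for one asset over a one-year (8,760-hour)
# window: each entry is the repair (downtime) hours for a single failure.
repair_hours = [6.0, 12.5, 4.0, 9.5]          # four failures in the period
period_hours = 8760.0                          # total calendar hours observed

downtime = sum(repair_hours)                   # total hours spent on repairs
uptime = period_hours - downtime               # hours the asset was available
failures = len(repair_hours)

mtbf = uptime / failures                       # mean time between failures (h)
mttr = downtime / failures                     # mean time to repair (h)
availability = uptime / period_hours           # mechanical availability (0-1)

print(f"MTBF: {mtbf:.0f} h, MTTR: {mttr:.1f} h, "
      f"mechanical availability: {availability:.1%}")
```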

Managing manufacturing-strategy implementation through a mix of various performance metrics is always preferred since behaviors are often influenced by what gets measured. If a site focuses exclusively on MTTR, for instance, maintenance personnel may take shortcuts that are detrimental to reliability and cost. Uptime should be driven by failure elimination, along with work-process efficiency improvements.

Fig. 1. The optimum that exists between the cost of under-maintaining and over-maintaining equipment can be gauged and tracked by performance metrics.

Performance metrics’ second purpose is to optimize the level of asset care. We know that an optimum exists in the total cost of asset care between under-maintaining and over-maintaining equipment (see Fig. 1). Performance metrics, combined with benchmark information, provide a gauge of where a site is on this asset-care cost continuum. Site personnel want to know what impact their actions have on achievement of the manufacturing strategy. For this reason, performance metrics should be selected with their activities in mind. They should provide workers ways to determine if they are “moving the needle.” This is primarily done through leading metrics, which measure process activities (discussed later).
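
To make the idea of an optimum concrete, the sketch below models the total cost of asset care as a planned-maintenance cost that rises with maintenance intensity plus a failure cost that falls with it, then locates the minimum. The cost functions and figures are invented for illustration only.

```python
# Illustrative model only: total cost of asset care as the sum of a planned
# (preventive) cost that grows with maintenance intensity and a failure
# (corrective) cost that shrinks with it. All numbers are hypothetical.

def total_cost(pm_tasks_per_year: int) -> float:
    planned = 400.0 * pm_tasks_per_year            # cost of doing the PM work
    corrective = 60000.0 / (1 + pm_tasks_per_year) # failure cost drops as PM rises
    return planned + corrective

# Sweep a range of PM intensities and pick the one with the lowest total cost.
candidates = range(0, 51)
optimum = min(candidates, key=total_cost)

print(f"Lowest total cost at ~{optimum} PM tasks/year: ${total_cost(optimum):,.0f}")
print(f"Under-maintaining (2/yr): ${total_cost(2):,.0f}   "
      f"Over-maintaining (50/yr): ${total_cost(50):,.0f}")
```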

Third, common performance metrics allow benchmarking against competitors and across industries. Many performance metrics are accepted for use in comparative analysis. Benchmarking can provide insight regarding the optimum level of asset care. But remember that achieving certain levels of performance metrics may or may not provide a competitive advantage. A site’s MTBF of 40 months for rotating assets, for example, might be considered good until it’s learned that the industry benchmark is more than 60 months. This difference can lead to one site spending more on rotating-asset care than its competitors. The ability to benchmark with performance metrics is best used to allow owners to revise their asset-care and maintenance-delivery strategies to approach an optimum level.
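
That gap can be translated into workload and dollars. The short calculation below estimates the extra failures per year implied by a 40-month MTBF versus a 60-month benchmark; the asset population and repair cost are assumed values used only to show the arithmetic.

```python
# Rough arithmetic for the MTBF benchmarking example. The asset count and
# average repair cost are assumptions chosen only to illustrate the math.

assets = 500                 # hypothetical population of rotating assets
site_mtbf_months = 40        # site performance from the example
benchmark_mtbf_months = 60   # industry benchmark from the example
avg_repair_cost = 4000.0     # assumed average cost per failure, $

failures_site = assets * 12 / site_mtbf_months        # failures per year
failures_benchmark = assets * 12 / benchmark_mtbf_months
extra = failures_site - failures_benchmark

print(f"Site: {failures_site:.0f} failures/yr, benchmark: {failures_benchmark:.0f}")
print(f"Gap: {extra:.0f} extra failures/yr ~ ${extra * avg_repair_cost:,.0f} in extra repair cost")
```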

Make metrics work

Performance metrics often lack proper design (alignment, hierarchy) and are poorly implemented, i.e., they lack identified responsibilities and are hard to collect and report. Owners also tend to manage with lagging metrics, which are strategic in nature. Personnel may not know how they can affect these strategic metrics, which often drives behaviors that work against the strategy. In addition, the right mix of lagging and leading (tactical) metrics is often not used. Complicating deployment is the number of metrics used to drive the manufacturing strategy: The more that are used, the harder it is to see the “big picture” of improvement. Generally, fewer than five lagging metrics are needed, combined with a similar number of leading metrics.

Patience is especially important in making performance metrics work. Managers should avoid knee-jerk reactions to metrics that are not trending toward targets. Many times, the work practices being implemented have not been fully outlined. This can mean that progress, at first, seems slow as participants learn new roles and sometimes revert to old ways. Cultural change is the last to occur. Eventually, the performance metrics will show slow, but steady, progress if the site is patient and persistent.

Proper implementation of performance metrics requires:

  • Assignments for collecting, reporting, analyzing, and follow-up tasks
  • Alignment with maintenance execution work-process steps
  • The setting of intermediate targets, not just those that represent a “stretch”
  • Proper leveraging of the computerized maintenance management system (CMMS) to minimize or eliminate the preparation and collection of individual spreadsheets
  • Proper reporting periods for lagging (strategic) metrics
  • Creation of a proper escalation system to establish actions that will correct out-of-range results.
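
As a rough illustration of the last item, the sketch below compares reported metrics against targets and assigns an escalation level; the targets, tolerances, and escalation steps are assumptions, not a prescribed system.

```python
# A minimal sketch of an escalation check for out-of-range metrics.
# Targets, tolerances, and escalation levels are illustrative assumptions.

METRICS = {
    # name: (reported value, target, whether "higher" or "lower" is better)
    "schedule_compliance_%": (62.0, 85.0, "higher"),
    "mechanical_availability_%": (94.5, 95.0, "higher"),
    "mttr_hours": (9.0, 6.0, "lower"),
}

def escalation_level(value: float, target: float, direction: str) -> str:
    """Return an escalation level based on how far the metric misses target."""
    gap = (target - value) if direction == "higher" else (value - target)
    gap_pct = gap / target * 100
    if gap_pct <= 0:
        return "on target - no action"
    if gap_pct <= 5:
        return "watch - review at weekly meeting"
    if gap_pct <= 15:
        return "act - assign corrective action with owner and due date"
    return "escalate - root-cause investigation and management review"

for name, (value, target, direction) in METRICS.items():
    print(f"{name}: {value} vs. target {target} -> "
          f"{escalation_level(value, target, direction)}")
```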

Successful implementation of performance metrics starts with the design process. The dashboard should start with a few high-level “corporate” metrics, such as asset utilization and maintenance dollars spent per replacement asset value (RAV). These metrics allow comparisons between sites and across industries. Plenty of benchmark data exist for them. When starting out, the corporation should monitor each site’s performance, then set reasonable targets that can be raised incrementally year to year toward a five-year benchmark goal. They should be reported monthly or quarterly.
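
The arithmetic behind that approach is straightforward. The sketch below computes maintenance spend as a percentage of replacement asset value and steps intermediate targets from a measured starting point toward a five-year goal; the dollar figures and the goal itself are hypothetical, not published benchmarks.

```python
# Illustrative only: maintenance spend as a percentage of replacement asset
# value (RAV), plus evenly stepped year-over-year targets toward a five-year
# goal. Dollar figures and the goal are assumptions, not benchmarks.

annual_maintenance_cost = 9_600_000.0       # hypothetical annual spend, $
replacement_asset_value = 240_000_000.0     # hypothetical RAV, $

current_pct = annual_maintenance_cost / replacement_asset_value * 100
goal_pct = 2.5                               # assumed five-year goal, % of RAV

print(f"Current maintenance cost: {current_pct:.1f}% of RAV")

# Step the target down in equal increments over five years.
step = (current_pct - goal_pct) / 5
for year in range(1, 6):
    print(f"  Year {year} target: {current_pct - step * year:.2f}% of RAV")
```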

Fig. 2. The maintenance-execution work process, with which metrics should be aligned.

The next level of metrics should then be linked to each of the corporate metrics in the dashboard. They should be aligned with the maintenance execution work process (see Fig. 2). A good mix of leading and lagging metrics is needed for those closer to the work to understand how their activities can “move the needle.”
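
One lightweight way to keep that alignment visible is to record the cascade itself as data, so every dashboard metric points back to the corporate metric it supports. The sketch below uses hypothetical metric names and groupings.

```python
# A minimal sketch of a metric cascade recorded as data, so each dashboard
# metric points to the corporate metric it supports. Metric names and
# groupings are illustrative assumptions.

CASCADE = {
    "asset_utilization": {            # corporate (lagging) metric
        "mechanical_availability": ["MTBF", "MTTR"],
    },
    "maintenance_cost_per_RAV": {     # corporate (lagging) metric
        "MTBF": ["PM_completion", "critical_asset_failures"],
        "schedule_compliance": ["mechanic_utilization", "wrench_time"],
    },
}

def supporting_metrics(corporate_metric: str) -> list[str]:
    """List every lower-level metric that rolls up to one corporate metric."""
    tier_2 = CASCADE.get(corporate_metric, {})
    leading = [m for children in tier_2.values() for m in children]
    return list(tier_2) + leading

print(supporting_metrics("maintenance_cost_per_RAV"))
# ['MTBF', 'schedule_compliance', 'PM_completion', 'critical_asset_failures',
#  'mechanic_utilization', 'wrench_time']
```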

Lagging vs. leading metrics

Lagging metrics are more strategic in nature and generally represent the highest level of reporting for any one manufacturing-strategy element. For reliability, for instance, lagging metrics include mechanical availability. Lagging metrics for productivity include uptime, total product produced, and overall equipment effectiveness (OEE). These should be reported on a weekly to monthly basis.
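
OEE rolls up availability, performance, and quality rates into one lagging number. The sketch below shows that calculation with made-up shift figures.

```python
# Illustrative OEE calculation (availability x performance x quality).
# The shift numbers are hypothetical.

planned_time_min = 480.0        # planned production time for the shift
downtime_min = 45.0             # unplanned stops
ideal_cycle_min = 0.5           # ideal minutes per unit
units_produced = 780
units_good = 760

run_time = planned_time_min - downtime_min
availability = run_time / planned_time_min
performance = (ideal_cycle_min * units_produced) / run_time
quality = units_good / units_produced
oee = availability * performance * quality

print(f"Availability {availability:.1%}, performance {performance:.1%}, "
      f"quality {quality:.1%} -> OEE {oee:.1%}")
```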

Leading metrics report the performance of a work process. They are more “floor-level” in focus and have a direct impact on what people do on a day-to-day basis. Analogous to measuring safety performance through the number of safety-related activities, maintenance activities measured by leading metrics include work initiated from preventive-maintenance tasks, time to obtain permits, and the number of critical-asset failures. These should be reported on a more frequent, daily or weekly, basis.
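
The sketch below tallies a few such leading metrics from a week of hypothetical work-order records; the field names and sample data are assumptions, not a real CMMS export.

```python
# Illustrative weekly roll-up of a few leading metrics from work-order records.
# Field names and the sample data are assumptions, not a real CMMS export.

work_orders = [
    {"origin": "PM",        "permit_wait_h": 0.5, "critical_failure": False},
    {"origin": "PM",        "permit_wait_h": 1.0, "critical_failure": False},
    {"origin": "breakdown", "permit_wait_h": 2.5, "critical_failure": True},
    {"origin": "PM",        "permit_wait_h": 0.8, "critical_failure": False},
    {"origin": "request",   "permit_wait_h": 1.2, "critical_failure": False},
]

pm_share = sum(wo["origin"] == "PM" for wo in work_orders) / len(work_orders)
avg_permit_wait = sum(wo["permit_wait_h"] for wo in work_orders) / len(work_orders)
critical_failures = sum(wo["critical_failure"] for wo in work_orders)

print(f"Work initiated from PM tasks: {pm_share:.0%}")
print(f"Average time to obtain permits: {avg_permit_wait:.1f} h")
print(f"Critical-asset failures this week: {critical_failures}")
```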

A chart that indicates responsibility for collection, reporting, analyzing, and follow-up tasks should accompany each metric. The most important aspect of managing on-site performance metrics is taking action for out-of-range metrics. A system for corrective action and follow-up should be established. These can include Pareto analysis, setting intermediate plans, root-cause investigation, and re-evaluation of targets.
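
Of those tools, Pareto analysis is the simplest to sketch: rank the causes of out-of-range performance by their contribution and act on the few that dominate. The categories and downtime hours below are invented for the example.

```python
# Minimal Pareto analysis sketch: rank causes by contribution and show the
# cumulative share. Categories and downtime hours are invented examples.

downtime_by_cause = {
    "seal failures": 120.0,
    "bearing failures": 85.0,
    "instrument faults": 40.0,
    "operator error": 25.0,
    "power dips": 10.0,
}

total = sum(downtime_by_cause.values())
cumulative = 0.0
for cause, hours in sorted(downtime_by_cause.items(), key=lambda kv: -kv[1]):
    cumulative += hours
    print(f"{cause:20s} {hours:6.1f} h  cumulative {cumulative / total:6.1%}")
```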

The full capabilities of the CMMS should also be used to provide reporting. Avoid the use of personal spreadsheets. Many sites under-utilize the functionality of their CMMS. In some cases, the functionality isn’t activated. Once the administrative details are settled, responsibilities for reporting field information, producing schedules, and recording work history and failure codes should be set and aligned with the work process. Most CMMSs provide standard reports for common metrics such as schedule compliance, planning accuracy, MTBF, and others.
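
Where a standard report isn’t available, the same figures are easy to compute from closed work orders. The sketch below uses common, but here assumed, definitions of schedule compliance and planning accuracy; confirm them against your own CMMS reports.

```python
# Illustrative calculations of schedule compliance and planning accuracy from
# a week of closed work orders. The definitions used here are common ones,
# but confirm them against your own CMMS's standard reports.

closed_work_orders = [
    # on weekly schedule?, completed as scheduled?, planned hours, actual hours
    {"scheduled": True,  "done_as_scheduled": True,  "planned_h": 4.0, "actual_h": 4.5},
    {"scheduled": True,  "done_as_scheduled": False, "planned_h": 6.0, "actual_h": 9.0},
    {"scheduled": True,  "done_as_scheduled": True,  "planned_h": 2.0, "actual_h": 2.0},
    {"scheduled": False, "done_as_scheduled": False, "planned_h": 3.0, "actual_h": 5.0},
]

scheduled = [wo for wo in closed_work_orders if wo["scheduled"]]
schedule_compliance = sum(wo["done_as_scheduled"] for wo in scheduled) / len(scheduled)

planned = sum(wo["planned_h"] for wo in closed_work_orders)
actual = sum(wo["actual_h"] for wo in closed_work_orders)
planning_accuracy = planned / actual    # assumed definition: planned vs. actual hours

print(f"Schedule compliance: {schedule_compliance:.0%}")
print(f"Planning accuracy: {planning_accuracy:.0%}")
```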

Performance metrics in use

Use the full capabilities of your CMMS to report field information, produce schedules, and record work history. Avoid use of personal spreadsheets. Photos: Gary L. Parr

Once deployed at the site, metrics should receive high visibility at meetings, on reference boards, and on internal web pages so all concerned can see progress made toward targets. Remember the timing aspect for leading and lagging metrics: Report leading metrics more often than lagging.

Some sites will build in cross-responsibility for maintenance and operations metrics. For instance, the maintenance manager becomes responsible for certain operations metrics such as production-schedule compliance, and the operations manager becomes responsible for maintenance-schedule compliance.

Cultural change is the key to ultimately reaching intended targets. But it takes time to establish the set of behaviors needed to impact manufacturing strategy. If schedule compliance hovers around 60%, it is important to look at why weekly schedules cannot be met. It might be the level of reactive work, low mechanic utilization, poor planning, the gatekeeper role not being performed adequately, or not enough proactive work on the schedule. Further investigation, and perhaps a third party, might be needed to take an objective look at the situation.

It is also important to evaluate progress year to year. Benchmark occasionally across manufacturing units and across the industry. Remember that achieving performance metric goals is a journey and that patience is the key to surviving the trip. MT

David Rosenthal, PE, CMRP, owns Reliability Strategy and Implementation Consultancy LLC, Seabrook, TX, a firm that provides asset-care strategy consulting. He formerly led the delivery of reliability and asset management practices for Jacobs Houston Asset Management Services (Houston) clients in the U.S. Contact Dave at davida.rosenthal@prodigy.net.
