Measuring lubrication program health

Mike Johnson | TLT Best Practices February 2010

Using this nine-point benchmarking tool, you can grade your performance against industry best practices.
 



KEY CONCEPTS
The first step to improving a lubrication program is benchmarking against an objective standard.
The complete array of lubrication tasks in a typical plant environment can be segregated into nine key areas.
The benchmark practice should be immediately followed by a gap analysis and the development of a strategic plan for improvement.

In recent TLT articles we’ve discussed the tools and methods that reliability engineers and managers use to make objective, quantifiable decisions about plant lubrication, all focused on correcting incomplete or poor work practices. When work practices are derived through honest deliberation and objective, quantifiable analysis, the result is precise, efficient, high-quality work that drives improvement from the bottom of the plant to the top. Precise, clearly defined lubrication practices help improve quality (machine repeatability) and productivity (machine reliability), which produces wealth for the enterprise.

Oddly enough, plant managers shy away from applying the same objective quantification to the program management practice/process as a whole. Without objective quantification of the whole, they can delude themselves into thinking that the parts of their programs are in much better shape than they really are.

So for the benefit of the lubrication program’s long-term health, this article prescribes an activity which can produce enlightening results.

BENCHMARKING 
A simple but true axiom has been stated in a variety of ways: strategic management of a process is unlikely if the process is not quantified and compared to an expected outcome. Benchmarking is a well-used but still progressive measurement technique, yet some plant managers view it with the kind of enthusiasm usually reserved for a trip to the mortuary.

Conceptually, benchmarking could be characterized in a variety of ways. For example, Xerox, the company that made benchmark a household word, defines benchmarking as “the continuous process of measuring (our) products, services and practices against those of our toughest competitors or companies renowned as leaders (1).” Xerox clearly views ongoing measurement as a means to a competitive advantage. Additionally, Xerox views comparison of itself against its strongest competitors as a key component of that value. This second part may be difficult for many reliability and lubrication program leaders to accomplish, but when there is a will there usually is a way.

An alternative definition for benchmarking is “an ongoing process for measuring and improving business practices against the companies that can be identified as the best worldwide (2).” This definition carries a similar view of the value proposition and poses the same practical challenge: gaining access to the best competitors.

A third definition is “benchmarking sources best practices to feed continuous improvement (3).” Even conceding the open question of precisely what is meant by best practice, this view embodies the spirit of the intent. Measuring an activity that clearly influences overall equipment effectiveness for the organization, for the express purpose of continuous improvement, is the right thing to do, even if it isn’t a convenient or pleasant activity.

Following completion of the benchmark, an organization should see its strengths and weaknesses against its stated priorities and values. Once those are identified, management is in a position to decide quickly whether to pursue improvement or defer allocating time and funding to it.

It is essential to have an objective and thorough assessment of the current state of conditions before embarking on an improvement trip. An organization cannot know the best path to take if it doesn’t know the starting point.

DESIGNING THE SURVEY
Following is a partial list of the many discrete activities from the lubrication program that should be evaluated:

Vendor selection
Lubricant selection
Lubricant purchasing
Lubricant shipping/receipt
Lubricant storage: Depot level
Lubricant storage: Open stores
Lubricant handling
Quality analysis sampling
Quality analysis testing
New lubricant cleanliness
New lubricant dryness
New lubricant chemical veracity
In-use lubricant testing: Sampling
In-use lubricant testing: Frequency
In-use lubricant testing: Test slate development
In-use lubricant testing: Alarm limits development
In-use lubricant testing: Condemning parameters
In-use lubricant testing: Laboratory quality verification.

The list does go on. The point is that there are many individual considerations to make.

To simplify the exercise, it is useful to define categories of relevance and then compile a set of questions for each program category. The list has grown and shrunk over the years, but as it stands today we promote a survey incorporating nine categories, as shown in Figure 1.


Figure 1. Key functions/categories for lubrication program development. (Courtesy of Advanced Machine Reliability Resources)

Each category comprises several subcategories, and each subcategory comprises multiple questions. For instance, Section 4 of the survey, which pertains to Lubricant Technical Selection Practices, breaks down into a collection of six subcategories, as shown in Figure 2.


Figure 2. Lubricant Technical Selection category and subcategories. (Courtesy of Advanced Machine Reliability Resources)

These subcategories review the methods and practices in place for properly matching the available lubricants to each machine. Since there are both oil and grease products in use, there are separate subcategories and questions about the nature of the practices for each. Additionally, since there are many considerations for the selection and use of high-performance lubricants, there is a subcategory with questions that address the plant criteria for selection of HP products. Criteria regarding application volumes, frequencies and methods are addressed in a similar manner in their respective subcategories. All of these topics, and the reasoning behind the plant decisions for the topics, are central to optimizing lubrication plans.

The eight other category designations are built up in the same fashion. All categories, including the subcategory considerations for each category and the questions for each subcategory, are identified. The survey framework is developed, and the questions are written to identify and discretely grade each single criterion that makes up the subcategory (and, in turn, the category).
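To make the framework concrete, the sketch below (Python, purely for illustration) shows one way the nested category/subcategory/question structure could be held in software. The statements listed are the oil-selection examples discussed later in this article; the remaining subcategories and categories are only indicated, and none of this code comes from the survey tool itself.

```python
# Illustrative sketch of the survey framework: categories contain
# subcategories, and each subcategory holds the statements the surveyor
# grades. Only one category and one subcategory are filled in here.
survey = {
    "Lubricant Technical Selection Practices": {
        "Standardized Lubricant Selection Criteria for Oil": [
            "Lubricant viscosity and additive type specifications are in "
            "place for each machine oil sump.",
            "Each application for viscosity and additive selection is based "
            "on standardized industry practices or direct machine OEM advice.",
            "There is a plant record for each lubricant viscosity and "
            "additive type specification for each machine sump.",
            "The plant record explains what the lubricant specification is "
            "for each sump and how it was derived.",
        ],
        # ...the other five subcategories (grease selection, high-performance
        # lubricant criteria, application volumes, frequencies and methods)
    },
    # ...the eight other categories are built up the same way.
}
```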

SCORING THE SURVEY 
The scoring method for the survey should be consistent, simple and easily explainable. All of the questions are answered with an objective Yes or No response. For each Yes response, the surveyor must characterize the qualitative state of the claimed condition.

For a review and example, Figure 2 represents the category Lubricant Technical Selection and Application Practices. Figure 3 expands on this portion of the survey.


Figure 3. Lubricant Technical Selection category and subcategory designations. (Courtesy of Advanced Machine Reliability Resources)

The first statement in the subcategory Lubricant Selection for Oil is: Lubricant viscosity and additive type specifications are in place for each machine oil sump. The surveyor reviews the question, considers whether this statement is True or False for the site under consideration and answers accordingly with a 1 for True and a 0 for False.

Additionally, for the True response, the surveyor considers whether this response is True all the time, most of the time or hardly ever true, in order to provide a quality or consistency rating following the absolute rating.

Figure 4 reflects the consistency/quality scoring approach. We followed a 10-point scale, where a 2 is Very Low Consistency/Very Low Compliance/Very Low Quality and a 10 represents Perfect Consistency/Perfect Compliance/Perfect Quality.
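As a rough illustration of how a single graded statement might be recorded, the sketch below pairs the objective True/False answer with the conditional quality rating. The Response class is a hypothetical construct, and the 1-10 range is an assumption: the article gives the scale endpoints as 2 (very low) and 10 (perfect), while the worked examples that follow assign quality scores as low as 1.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Response:
    """One graded survey statement: an objective True/False answer plus,
    for True answers, a consistency/quality rating."""
    statement: str
    objective: int                    # 1 = True, 0 = False
    subjective: Optional[int] = None  # quality rating, used when objective == 1

    def __post_init__(self):
        if self.objective not in (0, 1):
            raise ValueError("objective score must be 1 (True) or 0 (False)")
        if self.objective == 1 and (
            self.subjective is None or not 1 <= self.subjective <= 10
        ):
            raise ValueError("a True answer needs a quality rating of 1-10")

# Example: the first oil-selection statement from the refinery walk-through,
# answered True with near-perfect compliance.
first = Response(
    statement="Viscosity and additive specifications exist for each oil sump.",
    objective=1,
    subjective=10,
)
```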


Figure 4. Qualitative scores profile and explanation. (Courtesy of Advanced Machine Reliability Resources)

Let’s imagine that a refinery is under review and it does have viscosity and additive specifications in place for every lubricated component, but the specifications are defined by product brand, type and grade designations from a 20-year-old lubrication survey, and each and every machine component requirement has been addressed. The quantitative score is True (1) and the qualitative score is high (9-10, meaning near-perfect to perfect compliance).

The next statement is: Each application for viscosity and additive selection is based on standardized industry practices or direct machine OEM advice. With the survey from the previous example as the standing selection reference, only branded product types and grades are provided, without reasonable direction about the rules and principles used to make the decisions. The answer is still True (1). However, since the reasoning behind the selections is hidden from view, the user does not know the principles behind the decisions, so the quality response to this question is going to be low, perhaps 1 or 2.

It is always best for each machine lubricant selection decision to be rooted in a calculation that is reproducible by the site engineering department, incorporates the operating conditions and is based on standardized engineering principles and methods. This may seem to imply that the lubricant vendor’s advice is poor, but that isn’t necessarily the case.

If the vendor’s advice is rooted in standardized principles that are known by the site reliability department, and the department can reproduce the specifications, then there is quality. However, if the selection is driven by other concerns (brand, margin, price, consolidation, product availability within the brand, color of the ocean, etc.) or the reasoning is hidden and the decision cannot be independently reproduced, then quality is lacking.

The next subcategory statement says: There is a plant record for each lubricant viscosity and additive type specification for each machine sump. This may seem redundant with the first two questions, but it is not. Often, in this area of maintenance, tribal legend is the reason for many decisions, and the tribal knowledge is retained between the ears of the standing chief. If the chief retires without documenting that reasoning, the organization inevitably spends time and effort recreating past decisions from the beginning before it can move forward. This lack of documentation becomes a defect for the lubrication program and should be corrected. Referencing the lube survey mentioned in this article, a record is intact, so the objective response is True and the subjective response is high (9-10).

Unlike the two preceding points of interest, this statement is subjective in nature. Site management may argue that a machine can run without a document that details the lubricant specifications and that the documentation would be costly and resource-intensive to produce. Management would, of course, be right. However, keeping in mind that this is a practice intended to uncover opportunities for improvement, the subjective point is necessary for the survey.

The final statement for this subsection is: The plant record explains what the lubricant specification is for each sump and how it was derived. This response follows from the previous statement. If there is no documentation, then this question is ignored in the assessment.

However, assuming documentation exists, this statement addresses the quality and effectiveness of the documentation itself. Referencing the lube survey again as the basis of the selection decision, if the available documentation denotes a brand and product type but does not, either directly or in referenced notes, also denote how the selection decisions were made for each sump, then the quality of the documentation is poor. This statement receives a 1 (True) for the objective response and a 1-2 for the poorly defined documentation.

The resulting scores for this subsection are shown in Figure 5.


Figure 5. The oil selection subcategory scores for the Lubricant Technical Selection Practices category. (Courtesy of Advanced Machine Reliability Resources)

TABULATING AND DISPLAYING RESULTS 
For each subcategory, the objective (OBJ) and subjective (SUB) column scores are each averaged top to bottom, and the two averages are multiplied together to produce the subcategory final score. Accordingly, the average objective grade is multiplied by the average subjective grade [(OBJ = (1+1+1+1)/4 = 1) * (SUB = (10+2+10+2)/4 = 6)] to produce a score of 6 for the subcategory Standardized Lubricant Selection Criteria for Oil.
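The same arithmetic can be expressed in a few lines; the sketch below (Python, for illustration only) uses the Figure 5 scores from the oil-selection walk-through.

```python
# Tabulating one subcategory: average the objective and subjective columns,
# then multiply the two averages. Scores are from the oil-selection example.
obj_scores = [1, 1, 1, 1]     # objective True/False answers (1 = True)
sub_scores = [10, 2, 10, 2]   # subjective quality ratings

obj_avg = sum(obj_scores) / len(obj_scores)   # 1.0
sub_avg = sum(sub_scores) / len(sub_scores)   # 6.0
subcategory_score = obj_avg * sub_avg         # 6.0

print(f"Standardized Lubricant Selection Criteria for Oil: {subcategory_score}")
```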

The rest of the subcategory scores are combined similarly to make category scores, and the category scores are combined similarly to make overall program scores.
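Assuming that "combined similarly" means averaging at each level (as the summary later in this article describes for subcategory and category scores), a hypothetical rollup might look like the sketch below; the values and the grease subcategory name are placeholders, not survey results.

```python
# Hypothetical rollup: subcategory scores average into a category score,
# and category scores average into the overall program score.
subcategory_scores = {
    "Standardized Lubricant Selection Criteria for Oil": 6.0,
    "Standardized Lubricant Selection Criteria for Grease": 4.5,  # placeholder
    # ...the remaining subcategories for this category
}
category_score = sum(subcategory_scores.values()) / len(subcategory_scores)

category_scores = {"Lubricant Technical Selection Practices": category_score}
# ...the eight other category scores would be added here
program_score = sum(category_scores.values()) / len(category_scores)
```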

It is helpful to display the collective category findings graphically to avoid data overload during interpretation of the results. Figure 6 shows how the category scores display on a multidimensional chart that can be produced from a spreadsheet program. The background represents a reasonable high-quality target, and the foreground represents the site score. The differential between best-of-class results and the actual results is easily recognized in this view.
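A chart of this kind can also be generated directly from the scoring spreadsheet's data. The sketch below assumes the multidimensional diagram is a radar (spider) chart and uses matplotlib with placeholder category names and scores, since Figure 1's category labels and the site results are not reproduced in the text.

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder data: nine category scores (0-10) for the site versus a
# best-of-class target. Replace with the actual survey results.
categories = [f"Category {i}" for i in range(1, 10)]
site_scores = [6.0, 4.5, 7.2, 3.8, 5.5, 6.8, 4.0, 5.1, 6.3]
target_scores = [9.0] * len(categories)

# Close each polygon by repeating the first point at the end.
angles = np.linspace(0, 2 * np.pi, len(categories), endpoint=False).tolist()
angles += angles[:1]
site = site_scores + site_scores[:1]
target = target_scores + target_scores[:1]

fig, ax = plt.subplots(subplot_kw={"polar": True})
ax.plot(angles, target, linewidth=1, label="Best-of-class target")
ax.fill(angles, target, alpha=0.15)
ax.plot(angles, site, linewidth=1, label="Site score")
ax.fill(angles, site, alpha=0.30)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(categories, fontsize=8)
ax.set_ylim(0, 10)
ax.legend(loc="lower right", fontsize=8)
plt.tight_layout()
plt.show()
```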


Figure 6. Standard multidimensional diagram. (Courtesy of Advanced Machine Reliability Resources)

SUMMARY 
Benchmarking is a well-used and highly beneficial measurement tool. The benchmark can be used to compare company practices between sites, to compare practices against competitors or for the sake of establishing a realistic assessment of the current state of progress.

Lubrication program benchmarking comprises nine routine functions regardless of industry, although there also may be plant- or industry-specific points of interest. The nine topics represent categories found in every program. Each category divides into subcategories, and each subcategory is built on multiple questions, so a category may have several subcategories and a large number of questions.

The questions are answered with an objective and a subjective response. The objective answer provides a Yes or No response to a question about whether a specific parameter or practice exists. If the response is Yes, a quality assessment is made, providing a quality answer ranging from 10 (superior) to 1 (no quality). All inputs are averaged into subcategory scores, and these are averaged into category scores.

The objective and subjective scores are multiplied together to produce a category score and eventually an aggregate score. These values are easily digestible when presented graphically and compared to a tangible best-of-class score.

REFERENCES
1. Camp, R.A., “A Bible for Benchmarking, by Xerox,” Financial Executive, July 1993.
2. Wireman, T. (2004), Benchmarking Best Practices in Maintenance Management, Industrial Press Inc., Chapter 2, p. 27.
3. Ibid.


Mike Johnson, CLS, CMRP, MLT, is the principal consultant for Advanced Machine Reliability Resources, in Franklin, Tenn. You can reach him at mike.johnson@precisionlubrication.com.