Complexity in ISFA (in-service fluid analysis): Part XX

Jack Poley | TLT On Condition Monitoring January 2015

Successful CM programs are straightforward—but that doesn’t mean they’re easy.

PREPARING FOR AN EFFECTIVE CONDITION MONITORING PROGRAM IS A MAJOR UNDERTAKING. The larger the entity, or the more varied and complex its holdings, the more difficult the task of establishing a successful CM program.

In the previous column, I debunked the notion that ISFA (or any aspect of CM) is trivial to install or somehow runs on autopilot. Such thinking is a recipe for disappointment, or simply for throwing money away. It needn't be that way, but a good deal of advance planning is crucial.

It is important to understand the fundamentals that make or break a program. To that end, I want to elaborate on the matrix from my November 2014 column, delving more deeply into the caveats and pitfalls that can arise despite very good intentions.

COMMON END-USER ERRORS AND OMISSIONS
1. Not defining one's program fully. This means failing to establish corporate goals for productivity (uptime) and ROI while still taking lube samples to analyze in a corporate lab, sending them to a commercial lab or perhaps relying on one's lube supplier. Such programs lack any sort of grounding and, therefore, have no accountability built into their framework.

2. Placing an otherwise good, competent person in charge of the program when that person already has a full slate of responsibilities. CM, at any reasonable level, is a full-time job or very close to it. Corollary: placing someone in charge who has no familiarity with CM, even at a peripheral or overview level, and who will not receive any supporting training presents a clear risk. Relevant experience is highly valuable and should be a requirement for a CM/ISFA manager.

3. Failure to opt for the most effective ISFA relationship possible:

Corporate in-house labs, whether housed in a dedicated building or set up as Tier 2 (on-site) units at satellite corporate sites, have seemed most effective in my experience because there was greater demand and consensus behind their initiation. Such labs are equipped to address the entity's specific machinery undergoing monitoring, resulting in a better marriage of tests to tasks. Domain-expert candidates (who can be promoted from related positions such as lubrication or repair and maintenance) are often available within such entities when a decision is made to stand up a corporate lab. The learning curve is correspondingly shorter when a person already involved at the entity is elevated in this manner.

Commercial labs (Tier 3) carry a location limitation: there is often no ongoing discussion or review of the program. The relationship is essentially arm's length and, in effect, hit and miss. Labs with strong evaluators will usually be effective, but without first-name relationships and ongoing dialogue the program is unlikely to amount to what it could and should be.

OEM- or oil company-supplied programs, while free, are, well…free. That is often the respect such programs are accorded, and it is where the least responsibility is shown on the entity's side. I've seen dated reports stacked unread on desks, indicative of the value placed on them. In fair defense, most of these supplied programs are beneficial to a tangible extent, if not as effective as they might be, provided reports are promptly read and a decent amount of follow-up is undertaken. Again, programs do not run on autopilot; the entity has only itself to blame if that attitude exists.

4. Selecting testing equipment or services (or an ISFA laboratory) without sufficiently vetting which tests are pertinent to the machinery placed on CM or, for that matter, the lab's qualifications in terms of customer base (i.e., its experience with the entity's machinery types).

5. Using a Tier 3 (offsite) lab but not forming a relationship with, or even knowing the name of, the evaluator(s) who render opinions and commentary on the test data. The user may know the lab has a fine reputation, and that may be absolutely so, but I suggest to you that the lab's reputation is primarily based on rendering accurate test data from promptly analyzed samples (i.e., the act of being a good lab). Here is where oil analysis is simply an inadequate term. Most established labs are quite competent and diligent, and virtually all of them are now accredited as testers. They are not accredited as evaluators, however, because there is no established standard for evaluation, and devising one would be a huge undertaking at the very least.

*Note: Please do not offer ASTM D7720-11 as an example of such a standard; that is a (very reasonable) guideline and advisory for using sound statistical techniques and models to assess and flag data. Many labs provide nice summary data in that regard, graphs and the like (lots of visuals in a dashboard), but flagged or otherwise categorized data don't spit out recommendations. Humans or expert systems do, and not all expert systems, and certainly not all humans, are created equal when it comes to evaluation. Evaluation is, as I've stated time and again, the end-game in ISFA. Uptime maximization and ROI from ISFA and CM are post-end-game work and should be vetted in the corporate office in light of good CM data.
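To make that distinction concrete, here is a minimal sketch, in Python, of flagging in the statistical spirit of D7720-11 followed by a trivial evaluation step. This is my illustration, not anything the standard or a particular lab prescribes; the element, history, limits and recommendation rule are all hypothetical.

    # Minimal sketch (illustrative only): flag a wear-metal reading against
    # mean + k*sigma limits built from the unit's own history, then show that
    # the flag alone carries no advice; a separate rule (or a human) supplies it.
    from statistics import mean, stdev

    def flag(value, history, k_marginal=2.0, k_critical=3.0):
        mu, sigma = mean(history), stdev(history)   # statistical baseline
        if value > mu + k_critical * sigma:
            return "critical"
        if value > mu + k_marginal * sigma:
            return "marginal"
        return "normal"

    # Hypothetical iron (Fe, ppm) trend for one gearbox, plus the newest sample.
    fe_history = [12, 15, 14, 18, 16, 13, 17]
    fe_new = 41

    status = flag(fe_new, fe_history)
    print("Fe =", fe_new, "ppm ->", status)  # "critical" for this data

    # Categorized data ends here. The recommendation is a separate, expert step:
    if status == "critical":
        print("Recommend: resample immediately; inspect for gear/bearing wear.")

The k-values, like a lab's alarm limits, are choices someone must own; however they are tuned, the output is still only a category until an evaluator (human or expert system) turns it into a recommendation.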

Read the above initiation section again if you’re embarking on an ISFA program. Here is another matrix, regarding one’s selection of lab configuration, to build on that of the previous column:

[Matrix: selecting a lab configuration, building on the previous column's matrix]

Assuming the entity conducts a good program initiation, one must then look at ongoing concerns that should be addressed, lest they squelch program effectiveness and objectives.

In the next article, we'll explore the program in action.


Jack Poley is managing partner of Condition Monitoring International (CMI), Miami, consultants in fluid analysis. You can reach him at jpoley@conditionmonitoringintl.com. For more information about CMI, visit www.conditionmonitoringintl.com.