
Resnikoff’s Conundrum

In terms of the quality of component data, there is still much to do. The reliability of components depends significantly on a wide range of environmental and operational factors in addition to the component characteristics. The data are therefore, at best, average values with certain confidence levels. Moreover, most sources present only failure rates; test reports either do not exist or are not available to maintenance decision-makers.

The main source of reliability data has been performance data from the equipment itself. To retrieve usable data from equipment, one must ensure that the data represent failure modes in technically homogeneous equipment under the same operating conditions. Changing conditions, such as preventive actions, modify the failure-rate distribution currently observed.

The information thought to be most needed is data about critical failures. This is because critical failures entail losses (e.g. fatalities, lost production), often without it being known which losses are acceptable to an organization. Maintenance must nevertheless prevent loss of life over the planned operational lifetime of the asset. This leads to maintenance policies that are designed without properly knowing the failures the policy is meant to avoid. But in fact, (fatal) accidents are always coincidences.

This is the background of the misconception behind Resnikoff's Conundrum. Resnikoff argues that if serious failures are designed out at their first occurrence, there will never be an adequate sample for analysis: the calculated probability of the accident is made very small by design.
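A minimal sketch of the statistical side of the conundrum, assuming independent trials of a critical failure mode that has never yet been observed (the trial counts are illustrative, not field data). With zero observed failures in n trials, the best one can state is an exact binomial upper confidence bound on the failure probability, and that bound shrinks only slowly with n:

```python
def upper_bound_zero_failures(n: int, confidence: float = 0.95) -> float:
    """Exact binomial upper confidence bound on failure probability p
    when 0 failures were observed in n independent trials:
    solve (1 - p)^n = 1 - confidence for p."""
    return 1.0 - (1.0 - confidence) ** (1.0 / n)

# If a critical failure mode has never been seen, all we can say is an
# upper bound on its probability, which decreases only as roughly 3/n:
for n in (10, 100, 1000):
    print(n, round(upper_bound_zero_failures(n), 4))
```

For large n this bound approaches the well-known "rule of three" (about 3/n at 95% confidence), which is why a failure designed out at its first occurrence never yields enough data to verify that its probability is acceptably small.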

When failure data include the effects of current and past maintenance practices, the point of potential failure (P) lies before the point of functional failure (F). The interval recorded therefore runs only up to the preventive action and is shorter than the true mean time between failures (MTBF): the failure is pre-empted, so the item would probably have run longer than the recorded interval had no preventive maintenance been carried out. Similarly, the occurrence of one failure mode triggers corrective action that may, in turn, prevent the occurrence of other failure modes.
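This censoring effect can be simulated. In the hedged sketch below, all numbers are assumptions rather than field data: lifetimes are drawn from a hypothetical Weibull distribution, and a fixed preventive-replacement interval cuts each lifetime short, so the mean of the recorded intervals understates the true MTBF:

```python
import random

random.seed(42)

TRUE_SHAPE, TRUE_SCALE = 2.0, 1000.0   # assumed Weibull parameters (hours)
PM_INTERVAL = 600.0                    # assumed preventive replacement interval (hours)

def weibull_life() -> float:
    # Time to functional failure (F) if the item were left to run to failure
    return random.weibullvariate(TRUE_SCALE, TRUE_SHAPE)

true_lives, recorded = [], []
for _ in range(100_000):
    life = weibull_life()
    true_lives.append(life)
    # Under preventive maintenance the item is replaced at PM_INTERVAL if it
    # has not yet failed, so the recorded interval is censored at that point.
    recorded.append(min(life, PM_INTERVAL))

true_mtbf = sum(true_lives) / len(true_lives)
observed_mean = sum(recorded) / len(recorded)
print(f"true MTBF ~ {true_mtbf:.0f} h, mean recorded interval ~ {observed_mean:.0f} h")
```

Under these assumed parameters the recorded mean comes out well below the true MTBF, which is exactly the bias described above: data collected under a maintenance regime reflect the regime as much as the equipment.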

Even when such data are collected, the sample size is unlikely to be large enough to be reliable. And even if the data were accurate, the reliability-theory approach is cumbersome. Most organizations do not have enough competent personnel to translate data into models and apply them in daily operations, especially if they are to identify and analyze several systems and components. Modelled data is difficult to use in practice, particularly for users with no experience in deriving probability functions, programming, or running simulations. End users, who work with maintenance in practice rather than in theory, might not even have the software needed to run the simulations. Expecting end users to understand and analyze the results is, therefore, not realistic.