Reliability Mathematics: Concepts, Methods, and Applications

Reliability, Maintenance and Management Notes, GGSIPU (lecture notes, 2017/2018)


Reliability Mathematics

2.1 Introduction

The methods used to quantify reliability are the mathematics of probability and statistics. In reliability work we are dealing with uncertainty. As an example, data may show that a certain type of power supply fails at a constant average rate of once per 10^7 h. If we build 1000 such units, and we operate them for 100 h, we cannot say with certainty whether any will fail in that time. We can, however, make a statement about the probability of failure. We can go further and state that, within specified statistical confidence limits, the probability of failure lies between certain values above and below this probability.

If a sample of such equipment is tested, we obtain data which are called statistics. Reliability statistics can be broadly divided into the treatment of discrete functions, continuous functions and point processes. For example, a switch may either work or not work when selected, or a pressure vessel may pass or fail a test; these situations are described by discrete functions. In reliability we are often concerned with two-state discrete systems, since equipment is in either an operational or a failed state.

Continuous functions describe those situations which are governed by a continuous variable, such as time or distance travelled. The electronic equipment mentioned above would have a reliability function in this class. The distinction between discrete and continuous functions is one of how the problem is treated, and not necessarily of the physics or mechanics of the situation. For example, whether or not a pressure vessel fails a test may be a function of its age, and its reliability could therefore be treated as a continuous function.

The statistics of point processes are used in relation to repairable systems, when more than one failure can occur in a time continuum. The choice of method will depend upon the problem and on the type of data available.
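As a hedged illustration of the power-supply example above, the sketch below puts numbers on the statement. It assumes the constant failure rate implies an exponential time-to-failure model and that units fail independently; the rate of once per 10^7 h, the 1000 units and the 100 h of operation are taken from the text:

```python
import math

failure_rate = 1e-7   # constant failure rate, failures per hour (from the text)
hours = 100           # operating time per unit
units = 1000          # number of units built

# Under a constant failure rate, the probability that a single unit fails
# within time t follows the exponential model: 1 - exp(-lambda * t).
p_fail_one = 1 - math.exp(-failure_rate * hours)

# Expected number of failures across the whole fleet.
expected_failures = units * p_fail_one

# Probability that at least one of the 1000 units fails in 100 h,
# assuming the units fail independently of one another.
p_at_least_one = 1 - (1 - p_fail_one) ** units

print(f"P(one unit fails in {hours} h)       = {p_fail_one:.2e}")
print(f"Expected failures in fleet of {units} = {expected_failures:.4f}")
print(f"P(at least one failure in fleet)     = {p_at_least_one:.4f}")
```

The output shows why no certain statement can be made: the expected number of failures is about 0.01, so most such fleets would see no failure at all in 100 h, yet a failure is still possible.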

2.2 Variation

Reliability is influenced by variability in parameter values such as resistance of resistors, material properties, or dimensions of parts. Variation is inherent in all manufacturing processes, and designers should understand the nature and extent of possible variation in the parts and processes they specify. They should know how to measure and control this variation, so that the effects on performance and reliability are minimized. Variation also exists in the environments that engineered products must withstand. Temperature, mechanical stress, vibration spectra, and many other varying factors must be considered.



Statistical methods provide the means for analysing, understanding and controlling variation. They can help us to create designs and develop processes which are intrinsically reliable in the anticipated environments over their expected useful lives. Of course, it is not necessary to apply statistical methods to understand every engineering problem, since many are purely deterministic or easily solved using past experience or information available in sources such as databooks, specifications, design guides, and in known physical relationships such as Ohm’s law. However, there are also many situations in which appropriate use of statistical techniques can be very effective in optimizing designs and processes, and for solving quality and reliability problems.

2.2.1 A Cautionary Note

Whilst statistical methods can be very powerful, economic and effective in reliability engineering applications, they must be used in the knowledge that variation in engineering is in important ways different from variation in most natural processes, or in repetitive engineering processes such as repeated, in-control machining or diffusion processes. Such processes are usually:

  • Constant in time, in terms of the nature (average, spread, etc.) of the variation.
  • Distributed in a particular way, describable by a mathematical function known as the normal distribution (which will be described later in this chapter).

In fact, these conditions often do not apply in engineering. For example:

  • A component supplier might make a small change in a process, which results in a large change (better or worse) in reliability. The change might be deliberate or accidental, known or unknown. Therefore the use of past data to forecast future reliability, using purely statistical methods, might be misleading.
  • Components might be selected according to criteria such as dimensions or other measured parameters. This can invalidate the normal distribution assumption on which much of the statistical method is based. This might or might not be important in assessing the results.
  • A process or parameter might vary in time, continuously or cyclically, so that statistics derived at one time might not be relevant at others.
  • Variation is often deterministic by nature, for example spring deflection as a function of force, and it would not always be appropriate to apply statistical techniques to this sort of situation.
  • Variation in engineering can arise from factors that defy mathematical treatment. For example, a thermostat might fail, causing a process to vary in a different way to that determined by earlier measurements, or an operator or test technician might make a mistake.
  • Variation can be discontinuous. For example, a parameter such as a voltage level may vary over a range, but could also go to zero.

These points highlight the fact that variation in engineering is caused to a large extent by people, as designers, makers, operators and maintainers. The behaviour and performance of people is not as amenable to mathematical analysis and forecasting as is, say, the response of a plant crop to fertilizer or even weather patterns to ocean temperatures. Therefore the human element must always be considered, and statistical analysis must not be relied on without appropriate allowance being made for the effects of factors such as motivation, training, management, and the many other factors that can influence reliability.

Finally, it is most important to bear in mind, in any application of statistical methods to problems in science and engineering, that ultimately all cause and effect relationships have explanations, in scientific theory, engineering design, process or human behaviour, and so on. Statistical techniques can be very useful in helping us to understand and control engineering situations. However, they do not by themselves provide explanations of cause and effect.


Figure 2.1 Samples with defectives (black squares).

A sample represents a population if all the members of the population have an equal chance of being sampled. This can be achieved if the sample is selected so that this condition is fulfilled. Of course in engineering this is not always practicable; for example, in reliability engineering we often need to make an assertion about items that have not yet been produced, based upon statistics from prototypes. To the extent that the sample is not representative, we will have to adjust our assertions. Of course, subjective assertions can lead to argument, and it might be necessary to perform additional tests to obtain more data to use in support of our assertions. If we do perform more tests, we need to have a method of interpreting the new data in relation to the previous data: we will cover this aspect later.

The assertions we can make based on sample statistics can be made with a degree of confidence which depends upon the size of the sample. If we had decided to test ten items after introducing a change to the process, and found one defective, we might be tempted to assert that we have improved the process, from 30 % defectives being produced to only 10 %. However, since the sample is now much smaller, we cannot make this assertion with as high confidence as when we used a sample of 100. In fact, the true probability of any item being defective might still be 30 %, that is, the population might still contain 30 % defectives. Figure 2.1 shows the situation as it might have occurred, over the first 100 tests. The black squares indicate defectives, of which there are 30 in our batch of 100. If these are randomly distributed, it is possible to pick a sample batch of ten which contains fewer (or more) than three defectives. In fact, the smaller the sample, the greater will be the sample-to-sample variation about the population average, and the confidence associated with any estimate of the population average will be accordingly lower. The derivation of confidence limits is covered later in this chapter.
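The effect of sample size on the confidence of such assertions can be made concrete with a short calculation. This is a sketch, not part of the original text: it assumes the number of defectives in a random sample is binomially distributed, and it reuses the 30 % population defective rate and the sample sizes of 10 and 100 discussed above:

```python
from math import comb

def prob_at_most(k, n, p):
    """P(at most k defectives in a sample of n) under a binomial model."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

p_true = 0.30  # the population really contains 30 % defectives

# Chance of seeing 1 or fewer defectives in a sample of 10, even though
# nothing about the process has improved: roughly 15 %.
print(f"n = 10 : P(<= 1 defective)   = {prob_at_most(1, 10, p_true):.3f}")

# Observing the same 10 % defective rate in a sample of 100 would be far
# less likely if the population were still 30 % defective.
print(f"n = 100: P(<= 10 defectives) = {prob_at_most(10, 100, p_true):.2e}")
```

The small sample gives a result that is quite compatible with an unchanged process, which is why the assertion of improvement carries low confidence.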

2.4 Rules of Probability

In order to utilize the statistical methods used in reliability engineering, it is necessary to understand the basic notation and rules of probability. These are:

1 The probability of obtaining an outcome A is denoted by P(A), and so on for other outcomes.
2 The joint probability that A and B occur is denoted by P(AB).


3 The probability that A or B occurs is denoted by P(A + B).
4 The conditional probability of obtaining outcome A, given that B has occurred, is denoted by P(A | B).
5 The probability of the complement, that is, of A not occurring, is P(Ā) = 1 − P(A).
6 If (and only if) events A and B are independent, then

P(A | B) = P(A | B̄) = P(A)

and

P(B | A) = P(B | Ā) = P(B)    (2.1)

that is, P(A) is unrelated to whether or not B occurs, and vice versa.
7 The joint probability of the occurrence of two independent events A and B is the product of the individual probabilities:

P(AB) = P(A) P(B)    (2.2)

This is also called the product rule or series rule. It can be extended to cover any number of independent events. For example, in rolling a die, the probability of obtaining any given sequence of numbers in three throws is

1/6 × 1/6 × 1/6 = 1/216

8 If events A and B are dependent, then

P(AB) = P(A) P(B | A) = P(B) P(A | B)    (2.3)

that is, the probability of A occurring times the probability of B occurring given that A has already occurred, or vice versa. If P(A) ≠ 0, (2.3) can be rearranged to

P(B | A) = P(AB) / P(A)    (2.4)

9 The probability of any one of two events A or B occurring is

P (A + B) = P (A) + P (B) − P (AB) (2.5)

10 The probability of A or B occurring, if A and B are independent, is

P (A + B) = P (A) + P (B) − P (A) P (B) (2.6)

The derivation of this equation can be shown by considering the system shown in Figure 2.2, in which either A or B, or A and B, must work for the system to work. If we denote the system success probability by Ps, the system fails only if both A and B fail, so

Ps = 1 − P(Ā B̄) = 1 − [1 − P(A)][1 − P(B)] = P(A) + P(B) − P(A) P(B)


Example 2.1

The reliability of a missile is 0.85. If a salvo of two missiles is fired, what is the probability of at least one hit? (Assume independence of missile hits.) Let A be the event ‘first missile hits’ and B the event ‘second missile hits’. Then

P(A) = P(B) = 0.85

P(Ā) = P(B̄) = 0.15

There are four possible, mutually exclusive outcomes: AB, AB̄, ĀB and ĀB̄. The probability of both missing, from Eq. (2.2), is

P(Ā B̄) = P(Ā) P(B̄) = 0.15 × 0.15 = 0.0225

Therefore the probability of at least one hit is

Ps = 1 − 0.0225 = 0.9775

We can derive the same result by using Eq. (2.6):

P(A + B) = P(A) + P(B) − P(A) P(B) = 0.85 + 0.85 − (0.85 × 0.85) = 0.9775

Another way of deriving this result is by using the sequence tree diagram, in which the first branch is A or Ā and the second branch, on each path, is B or B̄:

P(AB) = P(A) P(B) = 0.85 × 0.85 = 0.7225
P(AB̄) = P(A) P(B̄) = 0.85 × 0.15 = 0.1275
P(ĀB) = P(Ā) P(B) = 0.15 × 0.85 = 0.1275
P(Ā B̄) = P(Ā) P(B̄) = 0.15 × 0.15 = 0.0225

The probability of a hit is then derived by summing the products of each path which leads to at least one hit. We can do this since the events defined by each path are mutually exclusive.

P(AB) + P(AB̄) + P(ĀB) = 0.9775

(Note that the sum of all the probabilities is unity.)
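The result of Example 2.1 can be checked by enumerating the four mutually exclusive paths in code; this is only a verification sketch, with the 0.85 hit probability and the independence assumption taken from the example:

```python
from itertools import product

p_hit = 0.85  # reliability of each missile (Example 2.1)

# Enumerate the four mutually exclusive outcomes (hit/miss for each missile)
# and sum the probabilities of every path containing at least one hit.
p_at_least_one = 0.0
for a_hits, b_hits in product([True, False], repeat=2):
    p_path = (p_hit if a_hits else 1 - p_hit) * (p_hit if b_hits else 1 - p_hit)
    if a_hits or b_hits:
        p_at_least_one += p_path

# Eq. (2.6): P(A + B) = P(A) + P(B) - P(A)P(B)
p_eq_2_6 = p_hit + p_hit - p_hit * p_hit

print(p_at_least_one)  # 0.9775
print(p_eq_2_6)        # 0.9775
```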


Example 2.2

In Example 2.1 the missile hits are not independent, but are dependent, so that if the first missile fails the probability that the second will also fail is 0.2. However, if the first missile hits, the hit probability of the second missile is unchanged at 0.85. What is the probability of at least one hit?

P(A) = 0.85

P(B | A) = 0.85

P(B̄ | A) = 0.15

P(B̄ | Ā) = 0.2

P(B | Ā) = 0.8

The probability of at least one hit is

P(AB) + P(ĀB) + P(AB̄)

Since B and B̄ are independent of A,

P(AB) = P(A) P(B) = 0.85 × 0.85 = 0.7225

and

P(AB̄) = P(A) P(B̄) = 0.85 × 0.15 = 0.1275

Since Ā and B are dependent, from Eq. (2.3),

P(ĀB) = P(Ā) P(B | Ā) = 0.15 × 0.8 = 0.12

and the sum of these probabilities is 0.97. This result can also be derived by using a sequence tree diagram:

[Sequence tree diagram: the first branch is A (0.85) or Ā (0.15); given A, the second branch is B (0.85, two hits) or B̄ (0.15, one hit); given Ā, it is B (0.8, one hit) or B̄ (0.2, no hits).]
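A corresponding sketch for the dependent case of Example 2.2, again only a check using the conditional probabilities stated above:

```python
p_a = 0.85             # P(A): first missile hits
p_b_given_a = 0.85     # P(B | A): second missile hits if the first hit
p_b_given_not_a = 0.8  # P(B | not A): second missile hits if the first missed

# Walk the sequence tree: P(path) = P(first branch) * P(second branch | first).
p_two_hits = p_a * p_b_given_a                 # A then B
p_first_only = p_a * (1 - p_b_given_a)         # A then not B
p_second_only = (1 - p_a) * p_b_given_not_a    # not A then B
p_no_hits = (1 - p_a) * (1 - p_b_given_not_a)  # not A then not B

p_at_least_one = p_two_hits + p_first_only + p_second_only
print(p_at_least_one)              # 0.97
print(p_at_least_one + p_no_hits)  # all paths sum to 1.0
```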


Example 2.3

A test set has a 98 % probability of correctly classifying a defective item and a 4 % probability of classifying a good item as defective. If in a batch of items tested 3 % are actually defective, what is the probability that when an item is classified as defective, it is truly defective? Let D represent the event that an item is defective and C represent the event that an item is classified defective. Then

P(D) = 0.03    P(C | D) = 0.98    P(C | D̄) = 0.04

We need to determine P(D | C). Using Eq. (2.10),

P(D | C) = P(D) P(C | D) / [P(C | D) P(D) + P(C | D̄) P(D̄)]
         = (0.98 × 0.03) / (0.98 × 0.03 + 0.04 × 0.97) = 0.431

This indicates the importance of test equipment having a high probability of correctly classifying good items as well as bad items. More practical applications of the Bayesian statistical approach to reliability can be found in Martz and Waller (1982) or Kleyner et al. (1997).
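For reference, the same Bayes calculation can be written as a short script (a sketch using only the figures stated in the example; the variable names are illustrative):

```python
p_defective = 0.03        # P(D): item actually defective
p_flag_given_def = 0.98   # P(C | D): defective item classified defective
p_flag_given_good = 0.04  # P(C | not D): good item classified defective

# Total probability that an arbitrary item is classified defective.
p_flag = (p_flag_given_def * p_defective
          + p_flag_given_good * (1 - p_defective))

# Bayes' rule: probability that a flagged item is truly defective.
p_def_given_flag = p_flag_given_def * p_defective / p_flag
print(f"P(D | C) = {p_def_given_flag:.3f}")  # about 0.431
```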

2.5 Continuous Variation

The variation of parameters in engineering applications (machined dimensions, material strengths, transistor gains, resistor values, temperatures, etc.) is conventionally described in two ways. The first, and the simplest, is to state maximum and minimum values, or tolerances. This provides no information on the nature, or shape, of the actual distribution of values. However, in many practical cases this is sufficient information for the creation of manufacturable, reliable designs. The second approach is to describe the nature of the variation, using data derived from measurements. In this section we will describe the methods of statistics in relation to describing and controlling variation in engineering.

If we plot measured values which can vary about an average (e.g. the diameters of machined parts or the gains of transistors) as a histogram, for a given sample we may obtain a representation such as Figure 2.3(a). In this case 30 items have been measured and the frequencies of occurrence of the measured values are as shown. The values range from 2 to 9, with most items having values between 5 and 7. Another random sample of 30 from the same population will usually generate a different histogram, but the general shape is likely to be similar, for example Figure 2.3(b). If we plot a single histogram showing the combined data of many such samples, but this time show the values in measurement intervals of 0.5, we get Figure 2.3(c). Note that now we have used a percentage frequency scale. We now have a rather better picture of the distribution of values, as we have more information from the larger sample. If we proceed to measure a large number and we further reduce the measurement interval, the histogram tends to a curve which describes the population probability density function (pdf), or simply the distribution of values. Figure 2.4 shows a general unimodal probability distribution, f(x) being the probability density of occurrence, related to the variable x.
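The progression just described, from small-sample histograms to a smooth population pdf, can be illustrated with a short simulation. This is a sketch, not part of the original text: the normally distributed population with mean 6 and standard deviation 1.5, the fixed random seed, and the sample sizes are assumptions chosen only to mimic the values shown in Figure 2.3:

```python
import random
from collections import Counter

random.seed(1)

def sample(n, mu=6.0, sigma=1.5):
    """Draw n measured values from an assumed normally distributed population."""
    return [random.gauss(mu, sigma) for _ in range(n)]

def histogram(values, interval=1.0):
    """Count how many values fall in each measurement interval."""
    counts = Counter(int(v // interval) * interval for v in values)
    return dict(sorted(counts.items()))

# Two small samples of 30: the histograms differ, but their shapes are similar.
print(histogram(sample(30)))
print(histogram(sample(30)))

# Combining many samples and narrowing the interval to 0.5 gives a histogram
# that tends towards the population probability density function (pdf).
combined = sample(30 * 100)
print(histogram(combined, interval=0.5))
```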


Figure 2.3 (a) Frequency histogram of a random sample, (b) frequency histogram of another random sample from the same population, (c) data of many samples shown with measurement intervals of 0.5.

The value of x at which the distribution peaks is called the mode. Multimodal distributions are encountered in reliability work as well as unimodal distributions. However, we will deal only with the statistics of unimodal distributions in this book, since multimodal distributions are usually generated by the combined effects of separate unimodal distributions. The area under the curve is equal to unity, since it describes the total probability of all possible values of x , as we have defined a probability which is a certainty as being a probability of one. Therefore

∫_{−∞}^{+∞} f(x) dx = 1    (2.11)

The probability of a value falling between any two values x1 and x2 is the area bounded by this interval, that is,

P(x1 < x < x2) = ∫_{x1}^{x2} f(x) dx    (2.12)
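As a hedged illustration of Eq. (2.12), the sketch below evaluates the integral numerically for an assumed standard normal pdf; the choice of distribution, the integration limits and the trapezoidal-rule helper prob_between are all illustrative assumptions, not from the text:

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Probability density of an assumed normal distribution."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def prob_between(x1, x2, pdf, steps=10000):
    """Approximate P(x1 < x < x2) = integral of f(x) dx by the trapezoidal rule."""
    h = (x2 - x1) / steps
    total = 0.5 * (pdf(x1) + pdf(x2))
    for i in range(1, steps):
        total += pdf(x1 + i * h)
    return total * h

# About 68 % of the values of a normal variable lie within one standard
# deviation of the mean.
print(f"P(-1 < x < 1) = {prob_between(-1.0, 1.0, normal_pdf):.4f}")  # ~0.6827
```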

To describe a pdf we normally consider four aspects:

1 The central tendency, about which the distribution is grouped.
2 The spread, indicating the extent of variation about the central tendency.
3 The skewness, indicating the lack of symmetry about the central tendency. Skewness equal to zero is a characteristic of a symmetrical distribution. Positive skewness indicates that the distribution has a longer tail to the right (see for example Figure 2.5) and negative skewness indicates the opposite.

Figure 2.4 Continuous probability distribution.