Measurement is often seen as non-value-added work. However, if we properly account for the expected costs involved in passing defects on to customers, then the increased value of the product can be clearly shown. This approach makes it possible to make rational, data-based decisions about when to reduce inspection frequency and how much time and money to spend on metrology. To put it another way, imagine that we didn’t inspect the product and simply sold it without telling anyone that it hadn’t been inspected. By doing this, we’ve saved the cost of inspection and also reduced the cost of scrap, since we never reject any product we never inspect. When we initially stop inspecting, the product still sells for the same price, so it might seem that inspection doesn’t add any value. However, the uninspected product has a higher probability of failing in service than an inspected one. A failure is likely to cost the manufacturer money, perhaps through a refund or a concession charge to a major customer. If the failure has major safety or operational consequences, then there may be a legal settlement. The failure may also harm the company’s reputation and cause a decline in future sales. I’ve looked at some of these issues before, when I discussed how we can go beyond Six Sigma in our thinking on quality.
Expected Costs
A key concept to understand when considering the value added by inspection is that of expected costs. An expected cost is a way of accounting for an event that might happen. It is calculated by multiplying the probability of the event occurring by the cost of the event. For example, suppose that a defective product returned for a refund costs the manufacturer $10. The probability of the product failing due to a defect is estimated to be 0.2 percent, but it has been found that only half of these failures are returned. The probability of a product resulting in a refund is, therefore, 0.1 percent. Multiplying the cost by the probability gives the expected cost; in this case, the expected cost is $0.01.
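As a quick check, the arithmetic in this example can be written out directly (the figures are the ones given above):

```python
# Figures from the refund example above.
cost_of_refund = 10.0          # cost to the manufacturer of a refund ($)
p_failure = 0.002              # probability of failure due to a defect (0.2%)
p_return_given_failure = 0.5   # only half of failures are returned

# Probability that a product results in a refund: 0.1 percent.
p_refund = p_failure * p_return_given_failure

# Expected cost per product sold: $0.01.
expected_cost = p_refund * cost_of_refund
print(expected_cost)
```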
Combining Defects with Inspection – Conditional Probabilities
The first challenge in quantifying the value added by an inspection process is that it depends on the production process. If the production process is very highly controlled, then very few defective parts will be produced. This means that even if we don’t inspect the final product, there will still be a very low probability of the customer experiencing a product failure. In fact, the probability of a defect reaching the customer depends on three things: variation in the production process; uncertainty in the measurement process; and how close the acceptance limits are set to the specified limits for the product. If the acceptance limits, or conformance limits, are much tighter than the specification limits, this allows for uncertainty in the measurement and makes it very unlikely that defective products will get past inspection. However, it also means that more parts will be rejected, many of which actually did conform to the specification. Any change we make to the process will increase cost somewhere:

Reducing process variation is likely to require more expensive production machinery or process controls.

Reducing measurement uncertainty is likely to require more expensive measurement instrumentation or process controls.

Tightening conformance limits will increase the scrap rate.

Relaxing any of the above three parameters will increase the probability of defects reaching the customer, increasing the associated expected cost.
These relationships can be most easily understood by considering a simple Monte Carlo simulation. This is a way of simulating random events using random number generators. An event is simulated many times to understand the probability of different outcomes. A simple example would be rolling dice many times and counting how many times different scores are rolled. We will consider the simulation of production and inspection for a part.
The simulation process for a single iteration is as follows:

Simulate the production of a part by making a random draw from the standard normal distribution, multiplying this by the standard deviation for the manufacturing process, and adding the nominal part dimension. This gives the simulated true value of the part dimension.

Simulate this part being measured by making another random draw from the standard normal distribution, multiplying this by the standard uncertainty for the measurement process and adding the true value for the part. This gives the simulated measurement result.

Check whether the part was in specification—the true value of the part was within the specification limits.

Check whether the part passed inspection—the measurement result was within the conformance limits.
The computer repeats these calculations millions of times in less than a second, simulating many parts being produced and inspected. The simulation then counts the percentage that had each of the four possible outcomes:

The part was in specification and passed inspection, a True Pass

The part was out of specification and failed inspection, a True Fail

The part was out of specification but, due to a measurement error, it passed inspection, a False Pass

The part was in specification but, due to a measurement error, it failed inspection, a False Fail
Fig. 1: Simulation Algorithm to Determine Conditional Probabilities.
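The steps above can be sketched in a few lines of Python with NumPy. The nominal dimension, standard deviation, uncertainty, and limits here are invented for illustration, not taken from any particular process:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Illustrative values only (assumed, not from any particular process).
nominal = 10.0        # nominal part dimension (mm)
sigma_process = 0.02  # standard deviation of the manufacturing process
u_measure = 0.005     # standard uncertainty of the measurement process
spec = (9.95, 10.05)  # specification limits
conf = (9.96, 10.04)  # tighter conformance (acceptance) limits

n = 1_000_000  # number of simulated parts

# Step 1: simulate the true value of each part dimension.
true_value = nominal + sigma_process * rng.standard_normal(n)

# Step 2: simulate the measurement of each part.
measured = true_value + u_measure * rng.standard_normal(n)

# Steps 3 and 4: check specification and inspection outcomes.
in_spec = (spec[0] < true_value) & (true_value < spec[1])
passed = (conf[0] < measured) & (measured < conf[1])

# Count the four possible outcomes.
p_true_pass = np.mean(in_spec & passed)     # P1
p_true_fail = np.mean(~in_spec & ~passed)
p_false_pass = np.mean(~in_spec & passed)   # P2
p_false_fail = np.mean(in_spec & ~passed)
scrap_rate = 1 - p_true_pass - p_false_pass
```

With the conformance limits set inside the specification limits, the false-pass rate is driven very low at the cost of some false fails, which show up in the scrap rate.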
Only two of these are important: the probability of a true pass (P1) and the probability of a false pass (P2). The scrap rate can be found from these (1 − P1 − P2), and it’s not important how many rejected parts were actually conforming—the cost of scrap is the same.
If the process variation or the measurement uncertainty has a complex distribution, then this type of Monte Carlo simulation is the only way to accurately determine the conditional probabilities. Where both can be approximated by a normal distribution, more efficient algorithms are available, although there is no simple analytical equation that will give the probability. For two normal distributions, the algorithm required is for the bivariate normal distribution. This is not available as a standard function in Excel, but it can be calculated by more advanced packages, such as MATLAB or R, or using macros in Excel.
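For the normally distributed case, here is one way that calculation might look in Python. The article mentions MATLAB and R; SciPy is simply another package that provides the bivariate normal CDF, and the numbers used are illustrative:

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

def pass_probabilities(nominal, sigma_p, u, spec, conf):
    """P1 (true pass) and P2 (false pass) for normally distributed
    process variation (sigma_p) and measurement uncertainty (u)."""
    # The true value X and the measurement Y = X + E are jointly normal:
    # Var(X) = sigma_p^2, Cov(X, Y) = sigma_p^2, Var(Y) = sigma_p^2 + u^2.
    mvn = multivariate_normal(
        mean=[nominal, nominal],
        cov=[[sigma_p**2, sigma_p**2],
             [sigma_p**2, sigma_p**2 + u**2]])

    def rect(xlo, xhi, ylo, yhi):
        # P(xlo < X < xhi and ylo < Y < yhi) from the CDF at the corners.
        return (mvn.cdf([xhi, yhi]) - mvn.cdf([xlo, yhi])
                - mvn.cdf([xhi, ylo]) + mvn.cdf([xlo, ylo]))

    p1 = rect(spec[0], spec[1], conf[0], conf[1])  # in spec and passed
    # P(passed) comes from the marginal of Y; P2 is then P(passed) - P1.
    sigma_y = np.hypot(sigma_p, u)
    p_pass = (norm.cdf(conf[1], nominal, sigma_y)
              - norm.cdf(conf[0], nominal, sigma_y))
    return p1, p_pass - p1

p1, p2 = pass_probabilities(10.0, 0.02, 0.005,
                            spec=(9.95, 10.05), conf=(9.96, 10.04))
```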
Fig. 2: A Bivariate Normal Distribution.
Quantifying the Cost of Defects
The above method gives the scrap rate. Every manufacturer should know the cost of producing a part; it is, therefore, easy to determine the cost associated with this scrap rate. The conditional probability P2 is the probability that a defect will reach the customer. This probability needs to be multiplied by the cost of such a defect to give the expected cost. The difference between the expected cost before and after inspection is the value added by the inspection process. This increase in value must be offset against the cost of the inspection process itself and the increase in scrap caused by inspecting the product. If inspection does not produce a net gain, then it does not generate profit for the business. Calculating these costs is clearly useful for making informed decisions in manufacturing. However, obtaining the cost of a defect reaching the customer will require detailed knowledge of the business and, even then, may involve some estimation. It’s not possible to provide a generic explanation of this part of the process, although considerations might include the cost of refunding or otherwise compensating the customer, any contractual concession fees in place, and the potential for liability. For consumer products, we can obtain a worst-case estimate by considering the amount of compensation required to fully restore consumer confidence and loyalty in the brand. For a safety-critical product with a very low probability of a defect reaching the customer, the worst-case cost might be bounded as the total value of a limited liability company.
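As a sketch of this accounting, with invented figures (none of these costs or probabilities come from a real product):

```python
# Invented figures for illustration only.
c_defect = 500.0       # cost when a defect reaches the customer
c_inspect = 0.50       # cost of inspecting one part
c_part = 5.0           # cost of producing one part

p_defect = 0.0124      # probability a part is out of specification
p_false_pass = 0.0003  # P2: a defect escapes inspection
p_false_fail = 0.04    # conforming parts wrongly scrapped by inspection

# Expected defect cost per part, without and with inspection.
cost_no_inspection = p_defect * c_defect
cost_with_inspection = p_false_pass * c_defect
value_added = cost_no_inspection - cost_with_inspection

# Offset the value added against the inspection cost and the extra scrap.
net_gain = value_added - c_inspect - p_false_fail * c_part
```

Here the net gain is positive, so inspection pays for itself; with a cheaper defect or a more capable production process, it might not.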
Make Optimal Choices
Many production planning and operational decisions involve choices between processes, machines and instruments, and the setting of control limits. Evaluating the value added by inspection processes, by considering how expected costs are reduced, allows all of these decisions to be optimized. I’ve previously explained how to calculate the Quality Adjusted Cost (CQ) of producing a saleable product, taking into account all of the costs associated with quality. In summary, this includes the actual costs of producing and inspecting the part (C1) and the cost of passing a defect to the customer (C2). The cost to produce a saleable part is given by simply dividing the production cost by the proportion of parts that get sold: C1/(P1 + P2). Also considering the expected cost of defects reaching the customer gives the quality adjusted cost:

CQ = (C1 + P2 × C2) / (P1 + P2)
Reducing CQ equates to making the production system more profitable. For a given level of production process variation and inspection measurement uncertainty, the conformance limits can be adjusted to minimize this cost. The quality adjusted cost tends to infinity as the conformance limits get so tight that every part is rejected, since an infinite number of parts would need to be produced before selling one. This occurs when the z-score is equal to the range between the specification limits (T) divided by two standard uncertainties for the measurement system. Depending on how many defective parts the production process produces, and the cost of passing defects to the customer, the quality adjusted cost may also increase significantly for very low confidence levels. This produces a characteristic curve that looks something like Fig. 3.
Fig. 3: Quality adjusted cost as a function of conformance limits.
Identifying the minimum of this curve gives the optimum conformance limits for the given combination of production and inspection process. Optimization algorithms can efficiently find such a minimum by calculating the quality adjusted cost within the known limits and following an iterative search, such as a binary search. If a Monte Carlo simulation is performed for each value, then this may not give an instant result. This type of optimization can be performed for each possible combination of production process and inspection process; the combination with the lowest quality adjusted cost can then be selected. Increasingly, traditional methods of process planning based on analytical equations are being superseded by optimized methods based on simulation.
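A minimal sketch of this optimization in Python, with a simple grid search standing in for the iterative search and with invented process parameters and costs:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Invented parameters for illustration.
nominal, sigma_p, u = 10.0, 0.02, 0.005  # process std dev, measurement uncertainty
spec_half = 0.05                         # specification limits: nominal +/- 0.05
c1, c2 = 5.0, 500.0                      # cost per part produced, cost of a defect

# Simulate the parts once; the same draws are reused for every candidate
# conformance limit so that the cost curve is smooth.
n = 200_000
true_value = nominal + sigma_p * rng.standard_normal(n)
measured = true_value + u * rng.standard_normal(n)
in_spec = np.abs(true_value - nominal) < spec_half

def quality_adjusted_cost(conf_half):
    passed = np.abs(measured - nominal) < conf_half
    p1 = np.mean(in_spec & passed)     # true pass
    p2 = np.mean(~in_spec & passed)    # false pass
    if p1 + p2 == 0:
        return np.inf                  # every part rejected
    return (c1 + p2 * c2) / (p1 + p2)  # CQ per saleable part

# Search conformance half-widths between 60% and 120% of the spec limit.
candidates = np.linspace(0.6 * spec_half, 1.2 * spec_half, 121)
costs = np.array([quality_adjusted_cost(c) for c in candidates])
optimum = candidates[np.argmin(costs)]
```

The cost rises at both ends of the search range: tight limits scrap too many good parts, while loose limits let too many defects through, so the minimum sits in between.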
Confidence in Processes
Increasingly, there is an emphasis on gaining confidence in processes rather than measurements. The availability of low-cost sensors, operating on internet protocols, means that continuously monitoring all of a process’s key input variables is becoming increasingly affordable. Such an approach can potentially eliminate both scrap and the additional time and expense of end-of-line inspection.
Gaining high levels of confidence in processes requires going far beyond traditional Statistical Process Control and Six Sigma methods, which are purely data-driven. Evaluating the variability of a process and then attempting to keep it in control by monitoring the outputs using control charts is analogous to using measurement systems analysis (MSA) to gain confidence in measurements. This type of approach cannot take into account known uncertainties if they cannot be regularly observed in the process output. It is for this reason that the metrology community has moved to a model-based approach to uncertainty evaluation.
Understanding the uncertainty of a process requires first having a mathematical model that gives the process output variables in terms of the process input variables. Uncertainties in the inputs can then be propagated through this model, either using an uncertainty budget or a simulation. The uncertainty of each input must be known.
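As a sketch, propagating input uncertainties through a process model by simulation might look like this. The model and all of its input uncertainties are hypothetical, chosen only to show the mechanics:

```python
import numpy as np

rng = np.random.default_rng(seed=7)

# Hypothetical process model: the finished diameter as a function of the
# programmed diameter, the tool wear offset, and thermal expansion.
ALPHA = 23e-6  # assumed thermal expansion coefficient (per deg C)

def diameter(d_programmed, tool_offset, temp_rise):
    return (d_programmed + tool_offset) * (1 + ALPHA * temp_rise)

# Assumed uncertainties for each input, for illustration only.
n = 1_000_000
d_programmed = 10.0                     # mm, treated as exact
tool_offset = rng.normal(0.0, 0.01, n)  # mm, standard uncertainty 0.01
temp_rise = rng.normal(5.0, 2.0, n)     # deg C above reference

# Propagate the input uncertainties through the model.
output = diameter(d_programmed, tool_offset, temp_rise)
u_output = output.std()  # combined standard uncertainty of the output
```

The same propagation can be done analytically with an uncertainty budget when the model is simple; the simulation approach extends to models with no closed form.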
Consideration of expected costs should be applied to processes upstream of the end of line inspection. This should fully consider the value added by processes, together with the hidden costs of downstream scrap and defects reaching the customer. Analyzing production systems in this way can reveal the true benefits that Industry 4.0 and the Internet of Things (IoT) can bring.
When we start evaluating the value generated by an inspection process, it becomes possible to make much more informed decisions on a range of topics. Instead of considering different measurement processes in terms of capability, we can look at how much value each process could add. When we know what a process costs and what value it adds, it becomes clear whether it makes business sense. This clarity is replacing traditional ways of evaluating measurement processes, such as Six Sigma. It’s even possible to produce cost-optimal production systems using algorithms to automatically select the best production and inspection processes that will maximize profit.