I find there is a lot of confusion about control limits versus tolerance limits, and about 3 sigma versus Six Sigma. Shewhart and Deming ran plots of process variables to determine their natural variation, so that they could bring and keep a production process “under control”. If a process is under control, it is predictable. It’s only a slight oversimplification to say that predictable means history is your guide. You might not get the results you need, but you know what to expect.
Your history can follow all sorts of distributions, the Gaussian being a common one. The Gaussian is a consequence of the central limit theorem for summed or averaged values: if your parameter is the average of a number of other values, it will distribute as a Gaussian. If, however, the contributors multiply, the result will distribute log-normally. There are lots of other possibilities, many in between.
Fortunately, it doesn’t matter much for Deming purposes: variation outside 3 sigma is a big enough deviation to make you stop and examine your system for a special or external cause. It may be a false positive, but it’s worth the effort to look. This is an engineering-economic trade-off. While we could apply the math for a particular distribution and try to optimize the limits, Deming thought this was a bad idea. There are a number of reasons I won’t go into now, but I think Deming was correct.
Three sigma is a “control limit”, as distinguished from a tolerance limit. Tolerance defines what you actually need; anything outside tolerance is a defect. If your tolerance limits lie farther out than your control limits, you have a capable process, and you will get few defects. Intuitively, the farther the tolerance limits sit beyond the control limits, the fewer defects you get.
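One common way to quantify whether the tolerance limits lie outside the natural process spread is the capability index Cp; the index name and formula are standard SPC practice, not something defined above, and the function and data here are my own illustration. A minimal sketch:

```python
import statistics

def process_capability(values, lsl, usl):
    """Cp: ratio of the tolerance width to the natural 6-sigma process width.

    Cp > 1 means the tolerance limits sit outside the +/-3 sigma
    control limits (a capable process); Cp < 1 means defects are
    expected even when the process is under control.
    """
    sigma = statistics.stdev(values)  # estimate of the natural process variation
    return (usl - lsl) / (6 * sigma)

# Hypothetical measurements centered at 10 with sigma ~1,
# against tolerance limits of 5 and 15:
cp = process_capability([9, 10, 11], 5, 15)
print(round(cp, 3))  # ~1.667: tolerance is wider than the 3-sigma spread
```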
Six Sigma strives to reduce the variation in the process so that the tolerance limit is 6 standard deviations from the target mean. The methodology claims this assures fewer than 3.4 defects per million opportunities (DPMO).
If you know the distribution you can compute the number of events outside tolerance (defects).
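For instance, under the Gaussian assumption the defect rate is just the probability mass outside the tolerance limits, which the standard library can compute via the complementary error function (the function names here are mine, not from any SPC package):

```python
import math

def gaussian_tail(z):
    """One-sided probability that a standard normal variable exceeds z."""
    return 0.5 * math.erfc(z / math.sqrt(2))

def expected_dpmo(mean, sigma, lsl, usl):
    """Expected defects per million opportunities for a Gaussian process."""
    p_defect = gaussian_tail((usl - mean) / sigma) + gaussian_tail((mean - lsl) / sigma)
    return 1e6 * p_defect

# A centered process with tolerance limits at exactly +/-3 sigma:
print(round(expected_dpmo(0.0, 1.0, -3.0, 3.0)))  # ~2700 DPMO
```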
Six Sigma assumes:
- that the distribution is Gaussian (not always true, and seldom true in the tails), and
- that you know the distribution’s mean and variance precisely enough to locate a point six sigma away.
The one-sided tail probability at a Z score of six is about 1 part per billion (PPB).
Surprisingly (NOT), six sigma processes did not deliver 10^(-9) defect rates, but rates several orders of magnitude greater. If, however, the process mean were drifting by 1.5 sigma, you would expect about 3.4*10^(-6), which seemed about right to the original Six Sigma engineers (I’m oversimplifying, but this is generally what happened). Thus was born the 1.5 sigma shift: set the tolerance at six sigmas so that you still get 4.5-sigma performance even with a drifting process. Call it 6, but only promise 4.5 results.
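The arithmetic behind the shift is simple to check; a sketch under the same Gaussian assumption, using only the standard library:

```python
import math

def gaussian_tail(z):
    """One-sided probability that a standard normal variable exceeds z."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# Centered process, tolerance at 6 sigma: roughly 1 defect per billion.
centered = gaussian_tail(6.0)

# Mean drifted 1.5 sigma toward one limit: effectively a 4.5-sigma process.
shifted = gaussian_tail(6.0 - 1.5)

print(f"{centered:.2e}")  # ~9.87e-10, about 1 PPB
print(f"{shifted:.2e}")   # ~3.40e-06, the advertised 3.4 DPMO
```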
Personally, I see a lot of implausible assumptions and hand waving in this part of Six Sigma. As a package it will no doubt deliver benefits, but I don’t care for this use of the math. I have a number of objections, but they will wait for another day.