Let's say we have a batch of 3,000 pieces, each required to weigh 200 g ± 5 g. Based on previous experience, the pass rate is about 98%; using this rate, we estimate that roughly 60 pieces need to be sampled. For this example, let's assume we sample 100 pieces.
We have sampled 100 pieces and measured their weights, resulting in 100 data points. How can we determine if the weights of this batch are consistent with our standard of 200g?
The data is as follows:
201.67, 202.33, 196.55, 197.94, 199.76, 195.77, 198.74, 199.81, 197.87, 198.49, 198.32, 199.14, 197.74, 200.36, 199.34, 197.67, 200.29, 200.98, 200.75, 202.73, 200.11, 201.47, 200.47, 201.23, 201.76, 204.01, 203, 200.3, 201.34, 197.02, 198.01, 196.63, 200.96, 201.84, 199.06, 201.19, 196.05, 198.24, 198.34, 201.16, 199, 199.12, 202.25, 200.77, 198.83, 201, 200.1, 199.7, 199.93, 199.86, 202.2, 198.8, 201.31, 200.96, 199.83, 202.44, 198.76, 197.26, 197.17, 201.26, 200.59, 197.6, 201.03, 203.05, 199.63, 197.48, 200.34, 200.42, 197.59, 198.16, 197.9, 198.05, 199.36, 202.68, 198.53, 201.11, 197.29, 200.38, 200.02, 201.64, 199.89, 199.5, 195.33, 203.19, 199.45, 199.66, 202.58, 201.08, 198.01, 199.08, 200.82, 197.92, 199.55, 198.81, 201.74, 201.54, 199.58, 198.09, 197.81, 201.56
If we use SPC (Statistical Process Control) analysis, we encounter the following issues:
SPC analysis is typically based on time order, and in this case, the sequence of samples affects the control chart analysis and judgment. Therefore, if we cannot determine the order of the sampled items, using SPC control charts might lead to inaccurate conclusions.
Instead of SPC, we should use a one-sample t-test. This test can determine whether the sample mean is significantly different from the target value. The null hypothesis of the test is that the sample mean is equal to the target value (200 g), and the alternative hypothesis is that the sample mean is not equal to the target value.
The target value of 200.0 is within the confidence interval (199.4078, 200.1328), indicating no significant difference between the sample mean and the target value of 200.0.
The sample mean does not significantly differ from the target value of 200.0, so we cannot reject the null hypothesis (the sample mean is consistent with the target value).
Suppose we want to test a sample mean x̄, sample standard deviation s, and sample size n against a target mean μ of 200 g. The t-value can be calculated using the following formula:

t = (x̄ − μ) / (s / √n)

Confidence interval formula:

x̄ ± t(α/2, n−1) · s / √n

where t(α/2, n−1) is the critical value from the t-distribution with n − 1 degrees of freedom.
Using both the t-statistic/p-value and the confidence interval, we can determine if the sample weight is consistent with the target value of 200g.
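As a sketch, the calculation can be reproduced in plain Python using the data listed above, taking the two-sided 95% critical value t(0.025, 99) ≈ 1.984 as a constant rather than relying on a statistics library:

```python
import math
from statistics import mean, stdev

# The 100 sampled weights (g) listed above
weights = [
    201.67, 202.33, 196.55, 197.94, 199.76, 195.77, 198.74, 199.81, 197.87, 198.49,
    198.32, 199.14, 197.74, 200.36, 199.34, 197.67, 200.29, 200.98, 200.75, 202.73,
    200.11, 201.47, 200.47, 201.23, 201.76, 204.01, 203.00, 200.30, 201.34, 197.02,
    198.01, 196.63, 200.96, 201.84, 199.06, 201.19, 196.05, 198.24, 198.34, 201.16,
    199.00, 199.12, 202.25, 200.77, 198.83, 201.00, 200.10, 199.70, 199.93, 199.86,
    202.20, 198.80, 201.31, 200.96, 199.83, 202.44, 198.76, 197.26, 197.17, 201.26,
    200.59, 197.60, 201.03, 203.05, 199.63, 197.48, 200.34, 200.42, 197.59, 198.16,
    197.90, 198.05, 199.36, 202.68, 198.53, 201.11, 197.29, 200.38, 200.02, 201.64,
    199.89, 199.50, 195.33, 203.19, 199.45, 199.66, 202.58, 201.08, 198.01, 199.08,
    200.82, 197.92, 199.55, 198.81, 201.74, 201.54, 199.58, 198.09, 197.81, 201.56,
]

TARGET = 200.0
n = len(weights)
x_bar = mean(weights)
s = stdev(weights)                      # sample standard deviation (n - 1 denominator)

# t = (x_bar - mu) / (s / sqrt(n))
t_stat = (x_bar - TARGET) / (s / math.sqrt(n))

# Two-sided 95% critical value of the t-distribution with 99 degrees of freedom
T_CRIT = 1.984
half_width = T_CRIT * s / math.sqrt(n)
ci = (x_bar - half_width, x_bar + half_width)

print(f"mean = {x_bar:.4f}, t = {t_stat:.4f}")
print(f"95% CI: ({ci[0]:.4f}, {ci[1]:.4f})")
print("reject H0" if abs(t_stat) > T_CRIT else "fail to reject H0")
```

Because the target of 200.0 lies inside the resulting interval (equivalently, |t| < 1.984), we fail to reject the null hypothesis, matching the conclusion above.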
SPC is not suitable for determining whether a sample is consistent with the specification center; it is a tool for anomaly analysis. For determining whether a sample is consistent with the specification center, the recommended method is the one-sample t-test.
When discussing quality anomaly analysis, almost everyone familiar with the field will mention SPC (Statistical Process Control) control charts. Indeed, SPC control charts are currently the most widely used tool for quality anomaly analysis (here, "widely used" refers to companies that already utilize quality analysis, though most manufacturing enterprises have not yet reached the stage of conducting quality anomaly analysis).
So, what other charts can we use for quality analysis related to SPC, besides the SPC control charts?
The chart below illustrates a six-sigma diagram; actual data rarely shows such an ideal state. When we look at control charts, they typically appear as shown below:
However, we often want to know the number or proportion of points located in the upper and lower A, B, and C zones. The horizontal-axis divisions of a normal distribution histogram are often generated automatically and do not necessarily correspond to whole sigma units. The following chart divides the upper and lower A, B, and C zones into one-sigma intervals.
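As an illustrative sketch, counting the points that fall in each sigma zone is straightforward; the data here is hypothetical Gaussian process data, since the counting logic does not depend on any particular dataset:

```python
import random
from statistics import mean, stdev

random.seed(7)
# Hypothetical in-control process data: 100 weights around 200 g
data = [random.gauss(200, 1.8) for _ in range(100)]

m, s = mean(data), stdev(data)

def zone(x):
    """Classify a point by its distance from the centre line in sigma units."""
    d = abs(x - m) / s
    if d <= 1:
        return "C"       # within +/- 1 sigma
    if d <= 2:
        return "B"       # between 1 and 2 sigma
    if d <= 3:
        return "A"       # between 2 and 3 sigma
    return "beyond"      # outside the 3-sigma control limits

counts = {"C": 0, "B": 0, "A": 0, "beyond": 0}
for x in data:
    counts[zone(x)] += 1

for z, c in counts.items():
    print(f"zone {z}: {c:3d} points ({c / len(data):.0%})")
```

For normally distributed data, the C, B, and A zones should hold roughly 68%, 27%, and 4% of the points respectively.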
Often, the number of samples in each batch is not fixed (i.e., sampling does not use a fixed subgroup size). For such data, we plot the chart using the subgroup averages and also display each batch's individual data points, so the distribution of the points can be observed intuitively.
The pre-control chart is a newer, simpler, more user-friendly, and more economical quality control technique. Compared to traditional Shewhart control charts, it allows quicker identification of and response to anomalies by setting predefined limits derived from the specification tolerance. It is suitable for real-time monitoring on production lines and for small-batch production environments.
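A minimal sketch of the pre-control classification rule, assuming the conventional scheme in which the green zone is the middle half of the tolerance band and the yellow zones are the outer quarters:

```python
def precontrol_zone(x, lsl, usl):
    """Classify a measurement into pre-control zones.

    Green: middle half of the tolerance band; yellow: the outer
    quarters; red: outside the specification limits.
    """
    center = (lsl + usl) / 2
    quarter = (usl - lsl) / 4
    if abs(x - center) <= quarter:
        return "green"
    if lsl <= x <= usl:
        return "yellow"
    return "red"

# For the 200 g +/- 5 g example: green is 197.5-202.5,
# yellow is 195-197.5 and 202.5-205, red is outside 195-205.
print(precontrol_zone(200.3, 195, 205))  # green
print(precontrol_zone(203.1, 195, 205))  # yellow
print(precontrol_zone(205.8, 195, 205))  # red
```

Typical pre-control run rules (e.g., stop after two consecutive yellows) are then applied on top of this classification.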
The capability analysis chart is a comprehensive tool for process capability analysis. It includes a data distribution chart, overall and within-group distribution fitting curves, and indicators such as PPK, CPK, and Ca. These charts and indicators help us evaluate process performance and capability comprehensively, identify potential issues, and provide a basis for process improvement. Through these analyses, we can better understand process stability and consistency, thus more effectively controlling quality.
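As a sketch of how such indicators can be computed: the version below uses the overall standard deviation, which strictly speaking yields Pp/Ppk (Cp/Cpk conventionally use a within-subgroup estimate of sigma); Ca here is the common centering (bias) index:

```python
from statistics import mean, stdev

def capability_indices(data, lsl, usl):
    """Overall capability and centering indices from measurement data."""
    m = mean(data)
    s = stdev(data)                          # overall (long-term) std dev
    cp = (usl - lsl) / (6 * s)               # potential capability
    cpk = min(usl - m, m - lsl) / (3 * s)    # capability, penalising off-centre
    ca = (m - (usl + lsl) / 2) / ((usl - lsl) / 2)  # centering (bias) index
    return cp, cpk, ca

# Hypothetical, perfectly centred measurements against 195-205 spec limits
cp, cpk, ca = capability_indices([199.0, 200.0, 201.0, 199.5, 200.5], 195, 205)
print(f"Cp={cp:.3f}, Cpk={cpk:.3f}, Ca={ca:.3f}")
```

When the process mean sits exactly on the specification center, Cp equals Cpk and Ca is zero, as in this example.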
The capability comparison chart visually compares the specification limits, the within-group actual deviation (range), and the overall actual deviation (range), allowing us to see at a glance how the process spread compares with the tolerance.
This chart helps identify issues within the process: for example, if the within-group deviation is much smaller than the overall deviation, there may be significant differences between batches. Conversely, if within-group and overall deviations are close, the process is more consistent and stable.
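The comparison can be sketched numerically as a pooled within-subgroup standard deviation set against the overall standard deviation; the subgroup data below is hypothetical, with means that drift from batch to batch:

```python
from statistics import mean, stdev

# Hypothetical subgroups whose means drift batch to batch
subgroups = [
    [199.1, 199.3, 198.9],
    [200.6, 200.9, 200.7],
    [201.8, 202.1, 201.9],
]

# Pooled within-subgroup std dev (equal-sized subgroups: average the variances)
within_var = mean(stdev(g) ** 2 for g in subgroups)
within_sd = within_var ** 0.5

# Overall std dev treats all points as a single sample
overall_sd = stdev([x for g in subgroups for x in g])

print(f"within = {within_sd:.3f}, overall = {overall_sd:.3f}")
if within_sd < 0.5 * overall_sd:
    print("within-group variation is much smaller: batches likely differ")
```

Here the within-group spread is small while the overall spread is dominated by batch-to-batch drift, which is exactly the signature the chart makes visible.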
This chart shows the distribution of measurement values for each subgroup. Each subgroup's measurement data is represented by a box plot. The box plot displays the median, interquartile range, and potential outliers for each subgroup. Red dots indicate outliers within each subgroup, significantly deviating from other data points, suggesting further attention and analysis may be needed.
This chart helps us intuitively understand the central tendency and dispersion of each subgroup's data, identify potential outliers, and thus better conduct quality control and data analysis.
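The statistics behind such a box plot can be sketched with the standard 1.5 × IQR outlier rule; the subgroup data here is hypothetical:

```python
from statistics import quantiles

def boxplot_stats(data):
    """Median, quartiles, and 1.5*IQR outliers for one subgroup."""
    q1, q2, q3 = quantiles(data, n=4)          # quartiles (exclusive method)
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr    # whisker fences
    outliers = [x for x in data if x < lo or x > hi]
    return q2, (q1, q3), outliers

med, (q1, q3), out = boxplot_stats([199.8, 200.1, 200.0, 199.9, 200.2, 204.5])
print(f"median={med}, quartiles=({q1}, {q3}), outliers={out}")
```

The 204.5 value falls beyond the upper fence and is flagged, just as a red dot on the chart would be.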
The normality test chart is a tool used to assess whether data follows a normal distribution. By plotting data points on a specific chart, such as a Q-Q plot (Quantile-Quantile Plot) or a P-P plot (Probability-Probability Plot), it compares the actual data distribution to the theoretical normal distribution. If the data points roughly follow a straight line, the data conforms to a normal distribution; significant deviations indicate that the data may not follow a normal distribution. This chart is intuitive and easy to interpret, commonly used in statistical analysis.
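A numeric sketch of the idea behind a Q-Q plot, using `statistics.NormalDist` for the theoretical quantiles and hypothetical near-normal sample data: if the paired quantiles lie close to a straight line, their correlation is near 1.

```python
import random
from statistics import NormalDist, mean, stdev

def qq_pairs(data):
    """(theoretical, sample) standardized quantile pairs for a normal Q-Q plot."""
    xs = sorted(data)
    n = len(xs)
    m, s = mean(xs), stdev(xs)
    nd = NormalDist()
    # plotting positions (i + 0.5)/n mapped through the standard normal inverse CDF
    return [(nd.inv_cdf((i + 0.5) / n), (x - m) / s) for i, x in enumerate(xs)]

def pearson(pairs):
    """Correlation between the two coordinates of a list of point pairs."""
    a = [p for p, _ in pairs]
    b = [q for _, q in pairs]
    ma, mb = mean(a), mean(b)
    num = sum((x - ma) * (y - mb) for x, y in pairs)
    den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den

random.seed(1)
sample = [random.gauss(200, 2) for _ in range(50)]   # hypothetical near-normal data
r = pearson(qq_pairs(sample))
print(f"Q-Q correlation: {r:.3f}")
```

A correlation close to 1 supports normality; markedly lower values suggest the data deviates from a normal distribution, mirroring how points bend away from the reference line on the plot.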
Machine learning approaches to anomaly detection typically frame the problem as classification or clustering, where the less frequent category is deemed anomalous. Algorithms such as logistic regression, support vector machines (SVM), random forests, and K-means clustering can be used.
However, machine learning for anomaly detection has a critical weakness: the judgment process is often a black box, making it difficult to understand. Unlike SPC's standard eight anomaly detection rules, we cannot clearly know why the machine learning algorithm determines a point as anomalous, making it challenging to find the underlying cause.
Among the analysis charts and tools mentioned above, there are always some that suit your needs.
Statistical Process Control (SPC), as a systematic analytical tool, has been widely implemented in the manufacturing industry. It is not only easy to implement but also demonstrates significant effectiveness in improving product quality, optimizing production processes, and enhancing work performance. This discussion will elaborate on the ease of implementation, significant effectiveness, and work performance enhancement of SPC, while comparing it with other manufacturing analysis systems in terms of cost and employee participation.
SPC primarily relies on statistical tools such as control charts and process capability indices. These tools are conceptually simple, easy to understand, and apply. Enterprises can equip employees with these basic tools through simple training.
Modern manufacturing enterprises are usually equipped with data acquisition systems and computer-aided tools, which can automatically generate the control charts and data analysis results required for SPC, further reducing the implementation difficulty.
SPC is suitable for manufacturing enterprises of various scales and types, ranging from small businesses to large multinational corporations, from manual operations to automated production lines.
Whether it is a continuous production process or a discrete production process, SPC can be effectively applied to achieve comprehensive quality control.
Compared to other complex analytical tools, the implementation cost of SPC is relatively low. Enterprises only need basic statistical knowledge and tools to start implementation, without requiring substantial upfront investment.
The automation of data collection and analysis further reduces labor costs, making SPC an affordable and efficient choice for enterprises.
SPC monitors the production process in real-time through control charts, promptly identifying and providing feedback on abnormal situations, preventing defective products from flowing into the next stage or the final market.
This real-time monitoring mechanism not only improves product quality but also reduces rework and scrap costs, directly enhancing production efficiency and economic benefits.
SPC can identify and control variations in the process, helping enterprises achieve process stability. A stable production process signifies product quality consistency and reliability.
Through continuous process capability analysis (e.g., Cp, Cpk), enterprises can continuously optimize the production process, enhance process capability, and make production more efficient.
SPC emphasizes prevention rather than post-event correction, preventing quality problems from occurring by controlling process variation. This prevention-oriented management philosophy helps enterprises fundamentally solve quality issues and elevate overall quality levels.
The data and analysis results provided by SPC offer scientific evidence for enterprise decision-making. Management can make informed decisions based on data, optimize resource allocation, and enhance production efficiency.
With data support, employees can better understand and control the production process, improving individual and team work performance.
Implementing SPC requires employees to master basic statistical tools and quality management knowledge, which implicitly enhances their skill levels.
Through participation in SPC implementation, employees gain a deeper understanding and identification with quality management, leading to improved work motivation and a sense of responsibility, further promoting work performance enhancement.
SPC advocates continuous improvement, fostering a corporate culture that pursues excellence by continuously monitoring and optimizing the production process.
Under the guidance of this culture, enterprises and employees continuously seek opportunities for improvement and enhancement, resulting in continuous improvement in work performance and overall competitiveness.
Statistical Process Control (SPC), with its ease of implementation, significant effectiveness, and notable enhancement of work performance, stands out as one of the most accessible and effective analytical tools in the manufacturing field.
Compared to other manufacturing analysis systems, SPC possesses unique advantages in real-time monitoring, focus on quality control and prevention, simple implementation, low cost, and employee participation. By widely applying SPC, manufacturing enterprises can achieve continuous improvement in quality management, enhancing overall production efficiency and market competitiveness.
Our point is that the SPC system is easier to implement and promote, cost-effective, and yields obvious benefits. Other manufacturing systems are also important (some are even more critical than SPC), but they cannot match SPC tools in cost and ease of adoption.
Our suggestion is that a low-cost, high-benefit SPC project is a good way to advance digitalization and improve product quality.
In an SPC system, a large amount of test data is stored, and test items from a common source may be correlated. Generally, we organize this test data and use Minitab or Excel to perform pairwise correlation and regression analyses, adjusting the lag period to find the strongest leading influence.
If the test dates of the items cannot be perfectly aligned (for instance, if they differ by a few seconds but belong to the same batch/time), it becomes even more complicated.
This process is cumbersome: the data must be organized before it can be analyzed with those tools, and each lag period must be analyzed separately, whether or not a pattern is ultimately found.
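The manual workflow just described can be sketched as follows, with two hypothetical test-item series where `series_d` is driven by `series_c` two periods earlier:

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    mx, my = mean(xs), mean(ys)
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    den = (sum((a - mx) ** 2 for a in xs) * sum((b - my) ** 2 for b in ys)) ** 0.5
    return num / den

def lagged_correlations(x, y, max_lag=2):
    """Correlation of x, shifted back by `lag` periods, against y."""
    result = {}
    for lag in range(max_lag + 1):
        if lag == 0:
            result[lag] = pearson(x, y)
        else:
            result[lag] = pearson(x[:-lag], y[lag:])
    return result

# Hypothetical series: series_d reacts to series_c with a 2-period delay
series_c = [1.0, 3.0, 2.0, 5.0, 4.0, 6.0, 5.0, 7.0, 6.0, 8.0]
series_d = [0.5, 1.5] + [0.5 * v for v in series_c[:-2]]

for lag, r in lagged_correlations(series_c, series_d).items():
    print(f"lag {lag}: r = {r:.3f}")
```

The lag-2 correlation comes out highest, which is how the strongest leading influence is identified in practice.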
Let’s see how our SPC product handles this.
Next, let's look at the specific operations:
We select the test items that need correlation analysis from the list (single-value test items are required).
Click the correlation analysis button to open the following page:
We fill in the lag period from 0 to 2 to see the correlation of different lag periods.
From the scatter plots and regression analyses for lags 0 to 2, it is evident that the correlation between test item C and test item D is most significant at a lag of 2. This means that the N−2 period of test item C has a noticeable influence on the N period of test item D.
The theory is well understood and highly useful, but without a good tool it is hard to apply in practice. Our product is such a tool.