
SPC Blog

Is SPC or Another Method Better for Determining Batch Consistency with Standards? A Recommended Analysis Approach

Let's say we have a batch of 3000 pieces, with each piece required to weigh 200g ± 5g. Based on previous experience, the pass rate is about 98%. Using this pass rate, we estimate that we need to sample 60 pieces. For this example, let's assume we sample 100 pieces.

We have sampled 100 pieces and measured their weights, resulting in 100 data points. How can we determine if the weights of this batch are consistent with our standard of 200g?

The data is as follows:

201.67, 202.33, 196.55, 197.94, 199.76, 195.77, 198.74, 199.81, 197.87, 198.49, 198.32, 199.14, 197.74, 200.36, 199.34, 197.67, 200.29, 200.98, 200.75, 202.73, 200.11, 201.47, 200.47, 201.23, 201.76, 204.01, 203, 200.3, 201.34, 197.02, 198.01, 196.63, 200.96, 201.84, 199.06, 201.19, 196.05, 198.24, 198.34, 201.16, 199, 199.12, 202.25, 200.77, 198.83, 201, 200.1, 199.7, 199.93, 199.86, 202.2, 198.8, 201.31, 200.96, 199.83, 202.44, 198.76, 197.26, 197.17, 201.26, 200.59, 197.6, 201.03, 203.05, 199.63, 197.48, 200.34, 200.42, 197.59, 198.16, 197.9, 198.05, 199.36, 202.68, 198.53, 201.11, 197.29, 200.38, 200.02, 201.64, 199.89, 199.5, 195.33, 203.19, 199.45, 199.66, 202.58, 201.08, 198.01, 199.08, 200.82, 197.92, 199.55, 198.81, 201.74, 201.54, 199.58, 198.09, 197.81, 201.56

SPC Analysis

If we use SPC (Statistical Process Control) analysis, we encounter the following issues:

  1. Mean Value: The mean of the data is 199.77, very close to 200. But how do we determine if this is consistent? What defines "close"?
  2. Control Limits: The data does not exceed the upper and lower three-sigma limits, but these limits are calculated based on the sample's mean and standard deviation, not the specification center.
  3. Capability Index (Ca): The Ca value is -0.04, indicating the data is close to the specification center, but it doesn't define what is considered "close."

SPC analysis is typically based on time order, and in this case, the sequence of samples affects the control chart analysis and judgment. Therefore, if we cannot determine the order of the sampled items, using SPC control charts might lead to inaccurate conclusions.

Recommended Method: One-Sample T-Test

Instead of SPC, we should use a one-sample T-test. This test can determine if the sample mean is significantly different from the target value. The null hypothesis of the test is that the sample mean is equal to the target value (200g), and the alternative hypothesis is that the sample mean is not equal to the target value.

Results:

  • Sample Mean: 199.77
  • Sample Standard Deviation: 1.83
  • Sample Size: 100
  • T-Statistic: -1.26
  • P-Value: 0.2116

The target value of 200.0 is within the confidence interval (199.4078, 200.1328), indicating no significant difference between the sample mean and the target value of 200.0.

Since the p-value (0.2116) is greater than 0.05, we cannot reject the null hypothesis: the sample mean does not differ significantly from the target value of 200.0, so the batch can be regarded as consistent with the target.
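
For readers who want to reproduce these numbers, here is a minimal Python sketch using scipy (the weights list stands for the 100 measurements given above; only the first few values are shown here):

```python
import numpy as np
from scipy import stats

# The 100 measured weights listed above (abbreviated here; use all 100 values in practice)
weights = np.array([201.67, 202.33, 196.55, 197.94, 199.76])

target = 200.0
t_stat, p_value = stats.ttest_1samp(weights, popmean=target)

# 95% confidence interval for the sample mean
mean = weights.mean()
sem = stats.sem(weights)  # s / sqrt(n)
ci_low, ci_high = stats.t.interval(0.95, df=len(weights) - 1, loc=mean, scale=sem)

print(f"mean = {mean:.2f}, t = {t_stat:.2f}, p = {p_value:.4f}")
print(f"95% CI: ({ci_low:.4f}, {ci_high:.4f})")
if p_value < 0.05:
    print("Reject H0: the mean differs significantly from 200 g")
else:
    print("Fail to reject H0: no significant difference from 200 g")
```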

The T-Test Calculation Process

Suppose we want to test a sample mean (x̄), with sample standard deviation (s) and sample size (n), against a target mean (μ) of 200g. The t-value is calculated using the following formula:

t = (x̄ − μ) / (s / √n)

Confidence Interval Formula:

x̄ ± t(α/2, n−1) × s / √n

where t(α/2, n−1) is the critical value from the t-distribution with n−1 degrees of freedom.

T-Statistic and P-Value:

  • If the p-value is less than the significance level (e.g., 0.05), reject the null hypothesis, indicating the sample mean is significantly different from 200g.
  • If the p-value is greater than the significance level, fail to reject the null hypothesis, indicating no significant difference between the sample mean and 200g.

Confidence Interval:

  • If 200g is within the confidence interval, it indicates no significant difference between the sample mean and 200g.
  • If 200g is outside the confidence interval, it indicates a significant difference between the sample mean and 200g.

Using both the t-statistic/p-value and the confidence interval, we can determine if the sample weight is consistent with the target value of 200g.
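
As a cross-check, the same numbers can be computed directly from the formulas above; a minimal sketch using the summary statistics already reported (x̄ = 199.77, s = 1.83, n = 100):

```python
import math
from scipy import stats

x_bar, s, n, mu = 199.77, 1.83, 100, 200.0

t_value = (x_bar - mu) / (s / math.sqrt(n))        # t-statistic
p_value = 2 * stats.t.sf(abs(t_value), df=n - 1)   # two-sided p-value
t_crit = stats.t.ppf(0.975, df=n - 1)              # critical value for alpha = 0.05
margin = t_crit * s / math.sqrt(n)
ci = (x_bar - margin, x_bar + margin)              # 95% confidence interval

print(round(t_value, 2), round(p_value, 4), ci)    # approx. -1.26, 0.21, (199.41, 200.13)
```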

Conclusion

SPC is not suitable for determining if a sample is consistent with the specification center; it is a tool for anomaly analysis. For determining if a sample is consistent with the specification center, the recommended method is the one-sample T-test.

Beyond SPC Control Charts: Lesser-Known but Effective Quality Analysis Tools

When discussing quality anomaly analysis, almost everyone familiar with the field will mention SPC (Statistical Process Control) control charts. Indeed, SPC control charts are currently the most widely used tool for quality anomaly analysis (here, "widely used" refers to companies that already utilize quality analysis, though most manufacturing enterprises have not yet reached the stage of conducting quality anomaly analysis).

So, what other charts can we use for quality analysis related to SPC, besides the SPC control charts?

3σ Distribution Chart (My own coined term)

The chart below illustrates an ideal 6σ distribution; actual data rarely shows such an ideal state. When we look at control charts, they typically appear as shown below:

However, we often want to know the number or proportion of points located in the upper and lower A, B, and C zones. The horizontal-axis divisions of a normal distribution histogram are usually generated automatically and do not necessarily correspond to each sigma. The following chart divides the upper and lower A, B, and C zones into one-sigma intervals.
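
As a rough sketch of the idea, the points can simply be counted per one-sigma band (the values array below is a synthetic stand-in for real measurements; the band edges are derived from the sample mean and standard deviation):

```python
import numpy as np

values = np.random.default_rng(1).normal(200, 1.8, 100)  # synthetic stand-in for real data

mean, sigma = values.mean(), values.std(ddof=1)
edges = mean + sigma * np.array([-np.inf, -3, -2, -1, 0, 1, 2, 3, np.inf])
labels = ["below LCL", "A-", "B-", "C-", "C+", "B+", "A+", "above UCL"]

counts, _ = np.histogram(values, bins=edges)
for label, count in zip(labels, counts):
    print(f"{label:>9}: {count:3d} ({count / len(values):.0%})")
```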

Variable Subgroup Control Chart (Another coined term)

Often, the number of samples in each batch is not fixed (i.e., the subgroup size varies). For such data, we plot each batch's average and overlay each batch's individual data points on the same chart, so the distribution of the points can be observed intuitively.
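
A minimal matplotlib sketch of this idea, using hypothetical batches of varying size: the batch means are connected as a line, and each batch's individual points are overlaid.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
# Hypothetical batches with varying subgroup sizes
batches = {f"B{i + 1}": rng.normal(200, 1.8, size=rng.integers(3, 9)) for i in range(10)}

x = np.arange(1, len(batches) + 1)
means = [vals.mean() for vals in batches.values()]

plt.plot(x, means, "o-", label="batch mean")          # chart of subgroup averages
for xi, vals in zip(x, batches.values()):
    plt.scatter([xi] * len(vals), vals, color="gray", s=15, alpha=0.6)  # individual points

plt.xticks(x, list(batches.keys()))
plt.ylabel("weight (g)")
plt.legend()
plt.show()
```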

Rainbow Chart (Pre-control Chart)

The pre-control chart is a newer, simpler, more user-friendly, and more economical quality control technique. Compared to traditional Shewhart control charts, it is statistically more powerful, allowing quicker identification and response to anomalies by setting predefined limits. It is suitable for real-time production line monitoring and small batch production environments.
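
One common pre-control scheme splits the tolerance into a green zone (the middle half), yellow zones (the outer quarters, still within spec), and red zones (outside the spec). A minimal sketch, assuming spec limits of 195-205 g:

```python
def precontrol_zone(value, lsl=195.0, usl=205.0):
    """Classify a measurement into pre-control zones (common halved-tolerance scheme)."""
    quarter = (usl - lsl) / 4
    if lsl + quarter <= value <= usl - quarter:
        return "green"   # middle half of the tolerance
    if lsl <= value <= usl:
        return "yellow"  # outer quarters, still within spec
    return "red"         # outside the specification limits

for v in (200.1, 197.0, 206.3):
    print(v, precontrol_zone(v))  # green, yellow, red
```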

 

 

Capability Analysis Chart

The capability analysis chart is a comprehensive tool for process capability analysis. It includes a data distribution chart, overall and within-group distribution fitting curves, and indicators such as Ppk, Cpk, and Ca. These charts and indicators help us evaluate process performance and capability comprehensively, identify potential issues, and provide a basis for process improvement. Through these analyses, we can better understand process stability and consistency, and thus control quality more effectively.
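
For reference, the main indices can be sketched as follows (assuming spec limits of 195-205 g and a flat list of individual measurements; within-group sigma is estimated from the average moving range, one common convention):

```python
import numpy as np

def capability(values, lsl, usl):
    values = np.asarray(values, dtype=float)
    target = (lsl + usl) / 2
    mean = values.mean()

    sigma_overall = values.std(ddof=1)           # used for Pp / Ppk
    moving_range = np.abs(np.diff(values))
    sigma_within = moving_range.mean() / 1.128   # d2 = 1.128 for subgroups of 2, used for Cp / Cpk

    return {
        "Ca":  (mean - target) / ((usl - lsl) / 2),           # centering index
        "Cp":  (usl - lsl) / (6 * sigma_within),
        "Cpk": min(usl - mean, mean - lsl) / (3 * sigma_within),
        "Pp":  (usl - lsl) / (6 * sigma_overall),
        "Ppk": min(usl - mean, mean - lsl) / (3 * sigma_overall),
    }

# Pass in the full measurement series in practice; a few values shown for illustration
print(capability([201.67, 202.33, 196.55, 197.94, 199.76], lsl=195, usl=205))
```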

Capability Comparison Chart

The capability comparison chart visually compares specification limits, within-group actual deviation (range), and overall actual deviation (range). It allows us to see:

  • Whether the data is within the specified range.
  • The variation within subgroups.
  • The overall data variation.

This chart helps identify issues within the process. For example, if the within-group deviation is much smaller than the overall deviation, it may indicate significant differences between batches; conversely, if the within-group and overall deviations are close, the process is more consistent and stable.

Subgroup Distribution Chart

This chart shows the distribution of measurement values for each subgroup. Each subgroup's measurement data is represented by a box plot. The box plot displays the median, interquartile range, and potential outliers for each subgroup. Red dots indicate outliers within each subgroup, significantly deviating from other data points, suggesting further attention and analysis may be needed.

This chart helps us intuitively understand the central tendency and dispersion of each subgroup's data, identify potential outliers, and thus better conduct quality control and data analysis. 
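
A minimal matplotlib sketch of such a chart, using hypothetical subgroup data; points beyond the whiskers are drawn in red as outliers:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
subgroups = [rng.normal(200, 1.8, 20) for _ in range(8)]   # hypothetical subgroups

plt.boxplot(subgroups,
            flierprops=dict(marker="o", markerfacecolor="red", markersize=5))
plt.xlabel("subgroup")
plt.ylabel("weight (g)")
plt.show()
```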

Normality Test Chart

The normality test chart is a tool used to assess whether data follows a normal distribution. By plotting data points on a specific chart, such as a Q-Q plot (Quantile-Quantile Plot) or a P-P plot (Probability-Probability Plot), it compares the actual data distribution to the theoretical normal distribution. If the data points roughly follow a straight line, the data conforms to a normal distribution; significant deviations indicate that the data may not follow a normal distribution. This chart is intuitive and easy to interpret, commonly used in statistical analysis. 
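
Both the plot and a formal test are easy to produce; a minimal sketch using scipy (with synthetic stand-in data):

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

values = np.random.default_rng(4).normal(200, 1.8, 100)    # synthetic stand-in for real data

stats.probplot(values, dist="norm", plot=plt)              # Q-Q plot against a normal distribution
plt.show()

stat, p = stats.shapiro(values)                            # Shapiro-Wilk normality test
print(f"Shapiro-Wilk p = {p:.3f} -> {'looks normal' if p > 0.05 else 'not normal'} at alpha = 0.05")
```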

Machine Learning Anomaly Detection Chart

Machine learning approaches to anomaly detection generally classify or cluster the data into two groups, with the less frequent group deemed anomalous. Algorithms such as linear regression, support vector machines (SVM), random forests, and K-means can be used.
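
As one hedged illustration (using scikit-learn's OneClassSVM, an SVM-family method, on synthetic data), points labelled -1 are treated as anomalies:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(5)
weights = rng.normal(200, 1.8, (200, 1))   # synthetic measurements
weights[::50] += 12                        # inject a few artificial anomalies

model = OneClassSVM(nu=0.05, kernel="rbf", gamma="scale").fit(weights)
labels = model.predict(weights)            # +1 = normal, -1 = anomalous

print("flagged as anomalous:", weights[labels == -1].ravel())
```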

However, machine learning for anomaly detection has a critical weakness: the judgment process is often a black box, making it difficult to understand. Unlike SPC's standard eight anomaly detection rules, we cannot clearly know why the machine learning algorithm determines a point as anomalous, making it challenging to find the underlying cause. 

Among the analysis charts and tools mentioned above, there are always some that suit your needs.

SPC is the most accessible, effective, and performance-demonstrating analytical tool in the manufacturing industry.

Statistical Process Control (SPC), as a systematic analytical tool, has been widely implemented in the manufacturing industry. It is not only easy to implement but also demonstrates significant effectiveness in improving product quality, optimizing production processes, and enhancing work performance. This discussion will elaborate on the ease of implementation, significant effectiveness, and work performance enhancement of SPC, while comparing it with other manufacturing analysis systems in terms of cost and employee participation.

Ease of Implementation

Simple and User-friendly Tools:

SPC primarily relies on statistical tools such as control charts and process capability indices. These tools are conceptually simple, easy to understand, and apply. Enterprises can equip employees with these basic tools through simple training.

Modern manufacturing enterprises are usually equipped with data acquisition systems and computer-aided tools, which can automatically generate the control charts and data analysis results required for SPC, further reducing the implementation difficulty.

Wide Range of Applications:

SPC is suitable for manufacturing enterprises of various scales and types, ranging from small businesses to large multinational corporations, from manual operations to automated production lines.

Whether it is a continuous production process or a discrete production process, SPC can be effectively applied to achieve comprehensive quality control.

Low Implementation Cost:

Compared to other complex analytical tools, the implementation cost of SPC is relatively low. Enterprises only need basic statistical knowledge and tools to start implementation, without requiring substantial upfront investment.

The automation of data collection and analysis further reduces labor costs, making SPC an affordable and efficient choice for enterprises.

Significant Effectiveness

Real-time Monitoring and Feedback:

SPC monitors the production process in real-time through control charts, promptly identifying and providing feedback on abnormal situations, preventing defective products from flowing into the next stage or the final market.

This real-time monitoring mechanism not only improves product quality but also reduces rework and scrap costs, directly enhancing production efficiency and economic benefits.

Process Stability and Capability Enhancement:

SPC can identify and control variations in the process, helping enterprises achieve process stability. A stable production process signifies product quality consistency and reliability.

Through continuous process capability analysis (e.g., Cp, Cpk), enterprises can continuously optimize the production process, enhance process capability, and make production more efficient.

Prevention-oriented Quality Management:

SPC emphasizes prevention rather than post-event correction, preventing quality problems from occurring by controlling process variation. This prevention-oriented management philosophy helps enterprises fundamentally solve quality issues and elevate overall quality levels.

Work Performance Enhancement

Data-driven Decision-making:

The data and analysis results provided by SPC offer scientific evidence for enterprise decision-making. Management can make informed decisions based on data, optimize resource allocation, and enhance production efficiency.

With data support, employees can better understand and control the production process, improving individual and team work performance.

Employee Skill Enhancement:

Implementing SPC requires employees to master basic statistical tools and quality management knowledge, which implicitly enhances their skill levels.

Through participation in SPC implementation, employees gain a deeper understanding and identification with quality management, leading to improved work motivation and a sense of responsibility, further promoting work performance enhancement.

Continuous Improvement Culture:

SPC advocates continuous improvement, fostering a corporate culture that pursues excellence by continuously monitoring and optimizing the production process.

Under the guidance of this culture, enterprises and employees continuously seek opportunities for improvement and enhancement, resulting in continuous improvement in work performance and overall competitiveness.

Comparison with Other Manufacturing Analysis Systems

Comparison with BI:

  • Implementation Difficulty: BI systems require complex data warehouses and ETL (Extract, Transform, Load) processes; in contrast, SPC implementation is simpler.
  • Real-time Capability: BI systems are typically used for strategic analysis in high-level decision-making, with lower data update frequency, while SPC emphasizes real-time process monitoring and feedback.
  • Cost: BI system implementation costs are relatively high, involving substantial software and hardware investments and complex implementation processes, whereas SPC costs are lower, mainly involving basic statistical tools and simple training.
  • Employee Participation: BI systems are primarily targeted at management and decision-making levels, while SPC requires extensive participation from frontline employees, promoting the enhancement of quality awareness among all staff.

Comparison with MES:

  • Functional Focus: MES covers various aspects of production execution, including scheduling, tracking, quality control, etc., while SPC focuses on quality control and process monitoring, with a more focused function.
  • Complexity: MES systems are highly complex, with high implementation costs and long cycles, whereas SPC implementation is relatively simple and cost-effective.
  • Cost: MES system implementation is expensive, involving hardware, software, and extensive training investments, while SPC is less costly and easy to integrate into existing systems.
  • Employee Participation: MES systems are primarily used by production management personnel, while SPC implementation requires the participation of all employees, especially the active cooperation of frontline workers, to enhance the overall quality control level.

Comparison with APS:

  • Analysis Scope: APS focuses on production planning and scheduling optimization, while SPC focuses on quality monitoring and improvement in the production process.
  • Complexity: APS requires complex algorithms and models for planning and scheduling optimization, with higher implementation and maintenance difficulty, whereas SPC is relatively simple and easy to use.
  • Cost: APS system implementation and maintenance costs are relatively high, while SPC costs are lower and suitable for enterprises of all sizes.
  • Employee Participation: APS is primarily used by planning personnel and management, while SPC requires the participation of all employees, from operators to management, to collectively enhance production quality.

Comparison with SCADA:

  • Focus: SPC focuses on statistical analysis and quality control in the production process, while SCADA focuses on real-time monitoring and data acquisition, covering comprehensive monitoring of production equipment and process parameters.
  • Implementation Complexity: SPC implementation is simple, mainly relying on control charts and process capability analysis, with quick results. SCADA implementation is complex, requiring the installation of sensors, configuration of hardware and software, and involving a significant amount of system integration work.
  • Cost: SPC costs are lower, mainly involving statistical tools and employee training. SCADA costs are higher, including hardware, sensors, and high-performance software systems.
  • Employee Participation: SPC requires the participation of all employees on the production line, especially operators and quality control personnel. SCADA is mainly used by engineers and operators for equipment monitoring and control, with relatively narrow participation.

Comparison with Simulation Analysis:

  • Focus: SPC focuses on quality control and variation management in actual production processes, while simulation analysis optimizes production layout, processes, and resource allocation through modeling and simulation.
  • Implementation Complexity: SPC implementation and use are relatively simple, mainly relying on statistical analysis tools. Simulation analysis requires complex modeling and simulation techniques, with higher implementation and maintenance difficulty.
  • Cost: SPC costs are lower and suitable for enterprises of all sizes. Simulation analysis costs are higher, involving specialized software, hardware, and technical personnel.
  • Employee Participation: SPC requires the participation of all employees, especially the cooperation of operators. Simulation analysis is primarily used by engineers and management for strategic decision-making and process optimization.

Comparison with Equipment Analysis Software:

  • Focus: SPC focuses on process quality control and statistical analysis, controlling variations in the production process. Equipment analysis software focuses on equipment operating status, maintenance needs, and fault prediction.
  • Implementation Complexity: SPC implementation is simple, mainly involving statistical tools and basic training. Equipment analysis software implementation is complex, usually requiring the integration of equipment sensors and advanced analysis software.
  • Cost: SPC costs are lower and easy to integrate into existing systems. Equipment analysis software implementation costs are higher, involving hardware, software, and technical personnel.
  • Employee Participation: SPC requires the participation of all employees, especially frontline workers and quality management personnel. Equipment analysis software is mainly used by equipment maintenance personnel and engineers to monitor and maintain equipment status.

Comparison with QMS:

  • Focus: QMS covers the entire enterprise's quality management system, including document control, audits, supplier management, etc., while SPC focuses on statistical analysis and quality control in the production process.
  • Implementation Complexity: QMS system implementation is highly complex, requiring comprehensive quality management system establishment, while SPC can be quickly implemented within the existing production system.
  • Cost: QMS systems involve comprehensive quality management system establishment, with higher costs, while SPC only requires basic statistical tools and simple training, resulting in lower costs.
  • Employee Participation: QMS systems require the participation of all employees, but the implementation process is complex and time-consuming. SPC implementation is simple and can quickly yield results, enhancing employee participation and a sense of responsibility.

Conclusion

Statistical Process Control (SPC), with its ease of implementation, significant effectiveness, and notable enhancement of work performance, stands out as the most accessible, effective, and performance-demonstrating analytical tool in the manufacturing field.

Compared to other manufacturing analysis systems, SPC possesses unique advantages in real-time monitoring, focus on quality control and prevention, simple implementation, low cost, and employee participation. By widely applying SPC, manufacturing enterprises can achieve continuous improvement in quality management, enhancing overall production efficiency and market competitiveness.

Our point is that an SPC system is easier to implement and promote, is cost-effective, and yields obvious benefits. Other manufacturing systems are also very important; they operate at a different level from SPC tools, and some are even more critical than SPC.

Our suggestion is to start with a low-cost, high-benefit SPC project to advance digitalization and improve product quality.

How to Quickly Identify Hidden Correlations Between Test Items Using the SPC System?

In an SPC system, there is a significant amount of test data stored, and there might be correlations between test items from common sources. Generally, we organize this test data, use Minitab or Excel, and perform correlation and regression analyses in pairs, adjusting the lag period to find the optimal leading influence.

If the test dates of the items cannot be perfectly aligned (for instance, if they differ by a few seconds but belong to the same batch/time), it becomes even more complicated.

This process is cumbersome, requiring data organization before analyzing with tools, and each lag period needs analysis to find or fail to find a pattern.

Let’s see how our SPC product handles this.

Features

  • Can create scatter plots and perform regression analysis for any two test items.
  • Offers various options to align test dates.
  • Allows the lag period to be a range, making it easier to find the leading influence without trying each period one by one.
  • Can complete all these tasks within one system with just a few clicks, without the need to organize data into Excel, Minitab, or other analysis tools.

Next, let's look at the specific operations:

Select the Test Items to Analyze

We select the test items from the list that need correlation analysis (requiring single values).

Click the correlation analysis button to open the following page:

Set Parameters

  • Confirm that the test items have records.
  • Choose the time alignment method: the actual timestamps of the same batch/time point for the two test items may differ by milliseconds or seconds, so different alignment methods are available.
  • One-to-Many Handling: If aligned data results in one test item corresponding to multiple entries of another item (i.e., one x corresponds to multiple y), we need to choose how to handle this. Options include taking the average value, the latest entry, or the oldest entry.
  • Lag: A lag of 0 analyzes the influence within the same period. A lag of -2 means that test item 1's current period influences test item 2 two periods later; in other words, test item 1 has a leading influence on test item 2.
  • N: The number of aligned records might be numerous; the latest N records can be extracted for analysis.
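
The mechanics behind these options can be sketched in a few lines of pandas (a hypothetical example, not our product's actual code): merge_asof aligns the two series on the nearest timestamps, shift applies the lag, and the correlation is computed for each lag in the range.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
times = pd.date_range("2024-01-01", periods=50, freq="h")
c = pd.DataFrame({"time": times, "C": rng.normal(10, 1, 50)})
# Item D follows item C with a 2-period lag (plus noise); its timestamps differ by a few seconds
d = pd.DataFrame({"time": times + pd.Timedelta(seconds=7),
                  "D": 0.8 * np.roll(c["C"].to_numpy(), 2) + rng.normal(0, 0.2, 50)})

# Align on time: nearest match within 1 minute (the "time alignment" option above)
merged = pd.merge_asof(c.sort_values("time"), d.sort_values("time"),
                       on="time", direction="nearest",
                       tolerance=pd.Timedelta("1min")).dropna()

# Scan lags 0..2: shift C and correlate with D to find the leading influence
for lag in range(0, 3):
    r = merged["C"].shift(lag).corr(merged["D"])
    print(f"lag = {lag}: Pearson r = {r:.3f}")   # strongest at lag = 2
```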

Analysis Results

We fill in the lag period from 0 to 2 to see the correlation of different lag periods.

From the scatter plots and regression analyses for lags 0 to 2, it is evident that the correlation between test item C and test item D is strongest at a lag of 2. This means that the N-2 period of test item C has a noticeable influence on the N period of test item D.

The theory is well understood and highly useful, but without a good tool it is hard to apply in practice. Our product is such a tool.