Process monitoring for quality – A big data and artificial intelligence-based manufacturing quality control
Abstract
As manufacturing companies stand on the brink of the fourth industrial revolution, Industrial Big Data (IBD) highlights the importance of applying Artificial Intelligence (AI) in this domain. These technologies have the potential to enhance Traditional Quality Control (TQC) systems. Although most mature organizations generate only a few Defects Per Million of Opportunities (DPMO), customers in today's highly competitive global market expect perfect quality. The detection of rare quality events therefore represents not only a research challenge but also an opportunity to move quality standards forward. In this research, detection is formulated as a binary classification problem, where the main objective is to develop a predictive system that projects features into a hyper-dimensional space in which defects can be detected. Manufacturing-derived data sets for quality classification pose two main challenges: (1) highly/ultra-unbalanced classes and (2) hyper-dimensional feature spaces that often contain insignificant or irrelevant information.

A new big data-driven manufacturing philosophy, Process Monitoring for Quality (PMQ), has been developed. PMQ is a blend of Process Monitoring (PM) and Quality Control (QC) founded on Big Data and Big Models (BDBM); it uses data from the process to make a real-time detection (classification). This new quality philosophy poses several theoretical challenges that must be addressed before it can be generalized across the manufacturing industry and materialize its contribution to the quality movement, and this research work addresses a few of them. An analysis is presented of how PMQ enhances the quality movement by addressing three quality problems that traditional QC techniques cannot.

The predictive modeling paradigm of Big Models (BM) has been developed with rare quality event detection as its aim. Four Model Selection (MS) criteria have been developed. The first is (1) Penalized Maximum Probability of Correct Decision (PMPCD), a general criterion applicable to any Candidate Model (CM) whose complexity is defined by the number of features; simulations show that the criterion induces parsimony while keeping detection as the main driver. The remaining three form a series of three-objective optimization MS criteria that use three of the most relevant competing attributes of each CM to project it into a Three-Dimensional (3D) space, where the final model that best resolves the posed tradeoff is selected: (2) a MS criterion for Genetic Programming (GP), 3D-GP, supported by a novel Separability Index (SI); (3) a MS criterion for the Support Vector Machine (SVM), 3D-SVM, also supported by a novel SI; and (4) a MS criterion for Logistic Regression (LR), 3D-LR. The proposed criteria outperform widely used criteria on highly/ultra-unbalanced data structures.

A learning scheme for the L1-regularized LR algorithm has been developed following the BM paradigm. It includes two novel algorithms: (1) Optimal Classification Threshold (OCTM), aimed at finding the classification threshold of an LR-based model, one of the main challenges of this learning algorithm, and (2) Hybrid Correlation and Ranking-based (HCR), aimed at eliminating redundant features that filter-based feature selection methods cannot. The proposed learning scheme outperforms typical learning schemes on highly/ultra-unbalanced data structures with respect to detection and parsimony.
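To make the learning scheme concrete, the sketch below pairs an L1-regularized LR with a validation-set threshold search in the spirit of OCTM: on ultra-unbalanced data the default 0.5 cut-off is a poor choice, so the threshold is tuned explicitly. This is an illustrative stand-in, not the thesis's algorithm; the synthetic data, the g-mean objective, and every parameter value are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Simulated highly unbalanced data (~1% positives), standing in for
# manufacturing-derived quality data.
X, y = make_classification(n_samples=20000, n_features=50, n_informative=8,
                           weights=[0.99, 0.01], random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, stratify=y, random_state=0)

# L1 regularization drives irrelevant coefficients to zero,
# which encourages the parsimony the BM paradigm calls for.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
clf.fit(X_tr, y_tr)
p_val = clf.predict_proba(X_val)[:, 1]

# Threshold sweep: the geometric mean of true-positive and true-negative
# rates is used here as an illustrative objective; OCTM's actual
# objective is defined in the thesis.
best_t, best_g = 0.5, -1.0
for t in np.linspace(0.01, 0.99, 99):
    pred = (p_val >= t).astype(int)
    tpr = np.mean(pred[y_val == 1] == 1)   # detection of defects
    tnr = np.mean(pred[y_val == 0] == 0)   # correct pass of good parts
    g = np.sqrt(tpr * tnr)
    if g > best_g:
        best_t, best_g = t, g

print(f"threshold={best_t:.2f}, g-mean={best_g:.3f}, "
      f"non-zero coefficients={np.count_nonzero(clf.coef_)}")
```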
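Similarly, a minimal sketch of a redundancy filter in the spirit of HCR: among each highly correlated feature pair, only the more relevant one (by some external ranking) is retained, which purely univariate filter methods cannot do. The correlation cutoff, the ranking convention, and the toy data are all assumptions; the actual HCR procedure is defined in the thesis.

```python
import numpy as np

def hcr_filter(X, ranking, corr_cut=0.95):
    """Keep, among each highly correlated feature pair, only the more
    relevant feature. ranking[f] is feature f's relevance order
    (0 = most relevant). Illustrative stand-in for HCR."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = set(range(X.shape[1]))
    # Visit features from most to least relevant; a surviving feature
    # evicts every lower-ranked feature it is strongly correlated with.
    for i in sorted(range(X.shape[1]), key=lambda f: ranking[f]):
        if i not in keep:
            continue
        for j in list(keep):
            if j != i and ranking[j] > ranking[i] and corr[i, j] > corr_cut:
                keep.discard(j)
    return sorted(keep)

# Toy demonstration: features 5-9 are near-copies of features 0-4,
# so only the higher-ranked originals should survive.
rng = np.random.default_rng(0)
base = rng.normal(size=(500, 5))
X = np.hstack([base, base + 0.01 * rng.normal(size=base.shape)])
print(hcr_filter(X, ranking=np.arange(10)))   # -> [0, 1, 2, 3, 4]
```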
The applicability of all theoretical developments and their capacity to solve complex real-world manufacturing problems are demonstrated in three case studies, with the following detection results: (1) Ultrasonic Welding of Battery Tabs (UWBT) – 100% detection, (2) Laser Spot Welding (LSW) – 100% detection, and (3) Sensorless Drive Diagnosis (SDD) – 99.72% detection.
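Reading "detection" as the true-positive rate on the defect class, i.e. TP / (TP + FN), is an assumption here (the abstract does not define the metric). Under that reading it reduces to recall, as in this minimal check:

```python
from sklearn.metrics import recall_score

y_true = [0] * 997 + [1] * 3          # 3 defects among 1,000 parts
y_pred = [0] * 996 + [1] * 4          # all 3 defects flagged, one false alarm
print(recall_score(y_true, y_pred))   # 1.0 -> "100% detection"
```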