14 research outputs found

    Manufacturing variation models in multi-station machining systems

    In product design and quality improvement, the development of reliable 3D machining variation models for multi-station machining processes is key to estimating the resulting geometrical and dimensional quality of manufactured parts, generating robust process plans, eliminating downstream manufacturing problems, and reducing ramp-up times. In the literature, two main 3D machining variation models have been studied: the stream of variation model, oriented to product quality improvement (fault diagnosis, process planning evaluation and selection, etc.), and the model of the manufactured part, oriented to product and manufacturing design activities (manufacturing and product tolerance analysis and synthesis). This paper reviews the fundamentals of each model and describes step by step how to derive them using a simple case study. The paper analyzes both models and compares their main characteristics and applications. A discussion of the drawbacks and limitations of each model and of potential research lines in this field is also presented.
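
    For orientation, the stream of variation model is conventionally written as a linear state-space model; the form below is the standard one from the SoV literature, not notation specific to this paper:

        \mathbf{x}_k = \mathbf{A}_k \mathbf{x}_{k-1} + \mathbf{B}_k \mathbf{u}_k + \mathbf{w}_k, \qquad
        \mathbf{y}_k = \mathbf{C}_k \mathbf{x}_k + \mathbf{v}_k,

    where \mathbf{x}_k is the part deviation state after station k, \mathbf{u}_k collects the fixture and machining errors introduced at station k, \mathbf{y}_k contains the measured feature deviations, \mathbf{w}_k and \mathbf{v}_k are noise terms, and the system matrices are derived from the process plan and the measurement layout.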

    Process-oriented tolerancing using the extended stream of variation model

    Current work on process-oriented tolerancing for multi-station manufacturing processes (MMPs) has mainly focused on allocating fixture tolerances to ensure part quality specifications at minimum manufacturing cost. Some works have also included fixture maintenance policies in the tolerance allocation problem, since these relate to both manufacturing cost and final part quality. However, other factors that increase manufacturing cost and degrade product quality, such as cutting-tool wear and the machine-tool thermal state, have not been incorporated. Allocating admissible values for these process variables may be critical because of their impact on cutting-tool replacement and quality loss costs. In this paper, process-oriented tolerancing is expanded based on the recently developed extended stream of variation (SoV) model, which explicitly represents the influence of machining process variables on variation propagation along MMPs. In addition, the probability distribution functions (pdf) of some machining process variables are analyzed, and a procedure to derive part quality constraints from GD&T specifications is shown. With this modeling capability extension, a complete process-oriented tolerancing can be conducted, reaching a true minimum manufacturing cost. To demonstrate the advantage of the proposed methodology over a conventional method, a case study is analyzed in detail.
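
    In generic terms, and only as a sketch consistent with the abstract (the paper's actual cost terms and constraint derivations are more detailed), the extended tolerance allocation problem can be written as

        \min_{\mathbf{t}} \; C_{m}(\mathbf{t}) + C_{loss}(\mathbf{t})
        \quad \text{s.t.} \quad 6\,\sigma_{y_i}(\mathbf{t}) \le T_i, \quad i = 1,\dots,m,

    where \mathbf{t} collects the admissible variation ranges of fixture components and process variables (e.g., cutting-tool wear and machine-tool thermal state), C_{m} is the manufacturing cost (including cutting-tool replacement), C_{loss} the expected quality loss cost, \sigma_{y_i}(\mathbf{t}) the standard deviation of the i-th key characteristic propagated through the extended SoV model, and T_i the tolerance limit derived from the GD&T specification.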

    Synthesis of Products, Processes and Control for Dimensional Quality in Reconfigurable Assembly Systems.

    Reconfigurable systems and tools give manufacturers the ability to adapt quickly to changes in the marketplace. Such systems allow the production of different products with simple and quick reconfiguration. Another advantage of reconfigurable systems is that the accuracy of the tools provides a unique opportunity to compensate errors and deviations as they occur along the manufacturing system, hence improving product quality. This dissertation deals with the design of products, processes and controllers to enhance the dimensional quality of products produced in reconfigurable assembly processes. The successful synthesis of these topics will lead to new levels of quality and responsiveness. Fundamental research has been conducted in dimensional control of reconfigurable multistation assembly systems, covering three topics related to the design of products, processes, and controls:
    o Development of feedforward controllers: Feedforward controllers allow deviation compensation on a part-by-part basis using reconfigurable tools. The control actions are obtained by combining multistation assembly models, in-line measurements (used to measure deviations along the process), and the characteristics and requirements of products and processes in an optimization framework. Simulation results show that the proposed control approach is effective in reducing variation.
    o Optimal selection and distribution of actuators in multistation assembly processes: The availability of reconfigurable tools in the process enables error correction; however, installing them at every location is too expensive. The selection and distribution of the actuators is aimed at cost-effectively reducing variation in multistation assembly processes. Simulation results show that dimensional variation can be significantly reduced through an appropriate distribution of actuators.
    o Robust fixture design for a product family assembled in a reconfigurable multistation line: The assembly of a product family in a reconfigurable line demands fixture sharing across products. This sharing impacts the products' robustness to fixture variation, due to frequent system reconfiguration and to trade-offs made in the design of fixtures to accommodate the family in a single system. A robust fixture layout for a product family is achieved by reducing the combined sensitivity of the whole family to fixture variation while considering product and process constraints.
    Ph.D. Mechanical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/57657/2/leiv_1.pd
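
    A minimal numerical sketch of the feedforward idea in Python, assuming a linear model in which a matrix B maps reconfigurable-tool adjustments to predicted part deviations; the matrices, names and numbers are illustrative, not taken from the dissertation:

        import numpy as np

        def feedforward_correction(B, y_hat):
            # Least-squares feedforward action: choose actuator inputs u that
            # minimise the predicted deviation y_hat + B u of the assembled part.
            u, *_ = np.linalg.lstsq(B, -y_hat, rcond=None)
            return u

        # Illustrative example: 3 measured deviations, 2 actuators.
        B = np.array([[1.0, 0.0],
                      [0.5, 1.0],
                      [0.0, 0.8]])                 # actuator-to-deviation map
        y_hat = np.array([0.30, -0.10, 0.20])      # deviations from in-line measurements
        u = feedforward_correction(B, y_hat)       # part-by-part control action
        residual = y_hat + B @ u                   # deviation left after compensation

    In practice the action would be computed inside the optimization framework described above, subject to actuator travel limits and other process constraints.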

    A Sequential Inspection Procedure for Fault Detection in Multistage Manufacturing Processes

    Fault diagnosis in multistage manufacturing processes (MMPs) is a challenging task, and most of the research in the literature considers a predefined inspection scheme to identify the sources of variation and make the process diagnosable. In this paper, a sequential inspection procedure to detect process faults, based on a sequential testing algorithm and a minimum monitoring system, is proposed. After the monitoring system detects that the process is out of statistical control, the features to be inspected (end-of-line or in-process measurements) are selected sequentially according to the expected information gain of each potential inspection measurement. A case study demonstrates the benefits of this approach over a predefined inspection scheme and a randomized sequential inspection, considering cases both with and without fault probabilities derived from historical maintenance data.
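
    As a rough illustration of the selection criterion, the Python sketch below picks the next feature to inspect by expected information gain over a set of fault hypotheses; it assumes a simplified discrete-outcome setting and hypothetical variable names rather than the paper's exact algorithm:

        import numpy as np

        def entropy(p):
            # Shannon entropy of a discrete distribution (in bits).
            p = p[p > 0]
            return -np.sum(p * np.log2(p))

        def next_inspection(prior, likelihood):
            # prior:      P(fault_j), shape (n_faults,)
            # likelihood: P(outcome_k | fault_j) per feature,
            #             shape (n_features, n_outcomes, n_faults)
            # Returns the feature index with the largest expected
            # reduction in entropy over the fault hypotheses.
            gains = []
            for L in likelihood:
                p_out = L @ prior                  # P(outcome_k) for this feature
                cond_H = sum(p_out[k] * entropy(L[k] * prior / p_out[k])
                             for k in range(len(p_out)) if p_out[k] > 0)
                gains.append(entropy(prior) - cond_H)
            return int(np.argmax(gains))

    After each measurement, the posterior over faults would be updated by Bayes' rule and the procedure repeated until one fault hypothesis dominates.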

    Data fusion for system modeling, performance assessment and improvement

    Due to rapid advancements in sensing and computation technology, multiple types of sensors have been embedded in various applications, automatically collecting massive amounts of production information on-line. Although this data-rich environment provides a great opportunity for more effective process control, it also raises new research challenges in data analysis and decision making due to complex data structures, such as heterogeneous data dependency and large-volume, high-dimensional characteristics. This thesis contributes to the area of System Informatics and Control (SIAC) by developing systematic data fusion methodologies for effective quality control and performance improvement in complex systems. These methodologies enable (1) better handling of the rich data environment communicated by complex engineering systems, (2) closer monitoring of the system status, and (3) more accurate forecasting of future trends and behaviors. The research bridges methodological gaps among advanced statistics, engineering domain knowledge and operations research, and links closely to application areas such as manufacturing, health care, energy and service systems. The thesis starts by investigating optimal sensor system design and multi-sensor data fusion for process monitoring and diagnosis in different applications. Chapter 2 studies the couplings between the optimal design of a sensor system in a Bayesian network and the quality management of a manufacturing system, which can improve cost-effectiveness and production yield by considering sensor cost, process change detection speed, and fault diagnosis accuracy in an integrated manner. An algorithm named "Best Allocation Subsets by Intelligent Search" (BASIS), with an optimality proof, is developed to obtain the optimal sensor allocation design at minimum cost under different user-specified detection requirements. Chapter 3 extends this line of research with a novel adaptive sensor allocation framework that greatly improves the monitoring and diagnosis capabilities of the previous method. A max-min criterion is developed to manage sensor reallocation and process change detection in an integrated manner. The methodology is tested and validated on a hot forming process and a cap alignment process. Chapter 4 proposes a Scalable-Robust-Efficient Adaptive (SERA) sensor allocation strategy for online high-dimensional process monitoring in a general network. A monitoring scheme using the sum of the top-r local detection statistics is developed, which is scalable, effective and robust in detecting a wide range of possible shifts in all directions. This research provides a generic guideline for practitioners on determining (1) the appropriate sensor layout; (2) the "ON" and "OFF" states of different sensors; and (3) which part of the acquired data should be transmitted to and analyzed at the fusion center when only limited resources are available. To improve the accuracy of remaining lifetime prediction, Chapter 5 proposes a data-level fusion methodology for degradation modeling and prognostics. When multiple sensors are available to measure the degradation of the same system, determining which sensors to use and how to combine them for better data analysis becomes a high-dimensional and challenging problem.
    To address this issue, two essential properties are first defined that, if present in a degradation signal, enhance its effectiveness for prognostics. A generic data-level fusion algorithm is then proposed to construct a composite health index achieving these two properties. The methodology is tested using degradation signals of an aircraft gas turbine engine and demonstrates much better prognostic results than relying solely on the data from any individual sensor. In summary, this thesis draws attention to data fusion as a means of effectively employing the underlying data-gathering capabilities for system modeling, performance assessment and improvement. The fundamental data fusion methodologies are developed and applied to various applications, facilitating resource planning, real-time monitoring, diagnosis and prognostics. Ph.D.
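
    A minimal sketch of the Chapter 4 monitoring idea, assuming EWMA-type local detection statistics per sensor stream; the thesis's exact statistics and control limits may differ:

        import numpy as np

        def ewma_local_stats(x, mu0, sigma0, ewma_prev, lam=0.2):
            # One EWMA update per sensor stream; returns squared standardized
            # EWMA deviations as the local detection statistics (assumed form).
            ewma = lam * x + (1 - lam) * ewma_prev
            z = (ewma - mu0) / (sigma0 * np.sqrt(lam / (2.0 - lam)))
            return z**2, ewma

        def top_r_statistic(local, r):
            # Global monitoring statistic: the sum of the r largest local
            # statistics, flagged when it exceeds a control limit calibrated
            # on in-control data (e.g., by Monte Carlo simulation).
            return np.sort(local)[-r:].sum()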

    Architecting Networked Engineering Systems

    The primary goal of this dissertation is to create new knowledge, to make a transformative impact on the design of networked engineering systems adaptable to ambitious market demands, and to accommodate Industry 4.0 design principles, based on the philosophy that design is fundamentally a decision-making process. The principal motivation is to establish a computational framework suitable for the design of low-cost, high-quality networked engineering systems adaptable to ambitious market demands in the context of Industry 4.0. Dynamic and ambitious global market demands make it necessary for competitive enterprises to have low-cost manufacturing processes and high-quality products. Smart manufacturing is increasingly being adopted by companies to respond to changes in the market. These smart manufacturing systems must be adaptable to dynamic changes and respond to unexpected disturbances and uncertainty. Accordingly, a decision-based design computational framework, Design for Dynamic Management (DFDM), is proposed to support flexible, operable and rapidly configurable manufacturing processes. DFDM has three critical components: adaptable and concurrent design, operability analysis, and reconfiguration strategies. Adaptable and concurrent design methods offer flexibility in the selection of design parameters and the concurrent design of the mechanical and control systems. Operability analysis is used to determine the functionality of a system undergoing dynamic change. Reconfiguration strategies allow multiple configurations of elements in the system. The proposed computational framework is expected to result in a next generation of networked engineering systems in which tools and sensors communicate via the Internet of Things (IoT) and sensor data are used to create enriched digital system models, adaptable to fast-changing market requirements, that can produce higher-quality products over a longer lifetime and at lower cost. The computational framework and models proposed in this dissertation are applicable to system design and product-service system design. This dissertation is fundamental research; a way forward is the transition of DFDM to industry through a decision-based design platform. Such a platform is a step toward new frontiers in Cyber-Physical-Social system design, manufacturing and services, contributing to further digitization.

    Building transformative framework for isolation and mitigation of quality defects in multi-station assembly systems using deep learning

    The manufacturing industry is undergoing a significant transformation towards electrification (e-mobility). This transformation has intensified the development of new lightweight materials, structures and assembly processes supporting high-volume, high-variety production of Battery Electric Vehicles (BEVs). As new materials and processes are developed, it is crucial to address quality defect detection, prediction and prevention, especially given that e-mobility products interlink quality and safety, for example in the assembly of 'live' battery systems. These requirements necessitate methodologies that ensure the quality requirements of products are satisfied from Job 1. This means ensuring a high right-first-time ratio during process design by reducing manual and ineffective trial-and-error process adjustments, and then maintaining near zero-defect manufacturing during production by reducing the Mean-Time-to-Detection and Mean-Time-to-Resolution of critical quality defects. Current technologies for isolating and mitigating quality issues provide limited performance within complex manufacturing systems due to (i) limited modelling abilities and a lack of capabilities to leverage the point cloud quality monitoring data provided by recent measurement technologies, such as 3D scanners, to isolate defects; (ii) extensive dependence on manual expertise to mitigate the isolated defects; and (iii) a lack of integration between data-driven and physics-based models, resulting in limited industrial applicability, scalability and interpretability. Together these constitute a significant barrier to ensuring quality requirements throughout the product lifecycle. This study develops a transformative framework that goes beyond improving the accuracy and performance of current approaches and overcomes fundamental barriers to the isolation and mitigation of product shape error quality defects in multi-station assembly systems (MAS). The proposed framework is based on three methodologies which address the MAS: (i) response to quality defects, by isolating the process parameters (root causes (RCs)) causing unaccepted shape error defects; (ii) correction of the isolated RCs, by determining a corrective action (CA) policy to mitigate unaccepted shape error defects; and (iii) training, scalability and interpretability of (i) and (ii), by establishing a closed-loop in-process (CLIP) capability that integrates in-line point cloud data, the deep learning approaches of (i) and (ii), and physics-based models to provide comprehensive data-driven defect identification and RC isolation (causality analysis). The developed methodologies comprise: (i) Object Shape Error Response (OSER), to isolate RCs within single- and multi-station assembly systems (OSER-MAS), by developing Bayesian 3D convolutional neural network architectures that process point cloud data, are trained using physics-based models, and can relate complex product shape error patterns to RCs; the approach quantifies uncertainties and is applicable during the design phase, when no quality monitoring data are available. (ii) Object Shape Error Correction (OSEC), to generate CAs that mitigate RCs while simultaneously accounting for cost and quality key performance indicators (KPIs), MAS reconfigurability and stochasticity, by developing a deep reinforcement learning framework that estimates effective and feasible CAs without manual expertise.
    (iii) Closed-Loop In-Process (CLIP), to enable industrial adoption of approaches (i) and (ii), firstly by enhancing scalability through (a) closed-loop training and (b) continual/transfer learning (important because training deep learning models for a MAS is time-intensive and requires large amounts of labelled data); and secondly by providing interpretability and transparency for the estimated RCs that drive costly CAs, using (c) 3D gradient-based class activation maps. The methods are implemented as independent kernels and then integrated within a transformative framework, which is verified, validated and benchmarked using industrial-scale automotive sheet metal assembly case studies such as a car door and a cross-member. These demonstrate 29% better performance for RC isolation and 40% greater effectiveness of CAs than current statistical and engineering-based approaches.
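
    A heavily simplified, hypothetical stand-in for the OSER idea, sketched in Python with PyTorch: a small 3D convolutional network over a voxelised shape-error field, with Monte Carlo dropout as the Bayesian approximation for uncertainty quantification. The layer sizes and names are illustrative, not the published OSER-MAS architecture:

        import torch
        import torch.nn as nn

        class TinyOSER(nn.Module):
            # Maps a voxelised part deviation field (1 channel) to estimates
            # of the process-parameter root causes (regression head).
            def __init__(self, n_root_causes):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
                    nn.MaxPool3d(2),
                    nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool3d(4),
                )
                self.head = nn.Sequential(
                    nn.Flatten(),
                    nn.Dropout(p=0.2),              # kept active at test time
                    nn.Linear(16 * 4 ** 3, n_root_causes),
                )

            def forward(self, x):
                return self.head(self.features(x))

        def mc_predict(model, voxels, n_samples=50):
            # Monte Carlo dropout: repeated stochastic forward passes give a
            # mean root-cause estimate and a spread used as uncertainty.
            model.train()                           # keep dropout stochastic
            with torch.no_grad():
                preds = torch.stack([model(voxels) for _ in range(n_samples)])
            return preds.mean(dim=0), preds.std(dim=0)

    As in the abstract, such a network would be trained on synthetic shape-error fields generated by a physics-based variation model, so that it is usable at the design phase before measurement data exist.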

    Robust dimensional variation control of compliant assemblies through the application of sheet metal joining process sequencing

    Imperfections are inherent in every manufactured part: when the hundreds of sheet metal components that form the automotive Body In White (BIW) are assembled together, significant deformation and variability are possible. Although early work by Takazawa (1980) showed that compliant components can absorb individual component variability when assembled, interactions between the components and successive operations complicate analysis of the assembly process and prediction of the assembled output. Improving vehicle dimensional quality therefore requires more detailed knowledge of the assembly process and control of the features critical to functionality and aesthetic appeal. In the automotive industry, the associated quality concerns include uneven gaps and flushness between panels, high closure forces, and incorrect seal gaps leading to leaks and excessive noise. Despite significant research in the field of compliant assembly, sufficiently detailed studies of the joining process sequence are lacking, and existing works rest on a number of assumptions that limit the applicability of their results. This thesis addresses this gap by using the joining process sequence to control deformations and minimise dimensional variation during the assembly of complex, non-ideal compliant components. In this work, a geometry class to represent complex compliant assemblies is presented; the interactions of process sequences and variations are examined; criteria for robust sequence selection are established; and a method for the rapid identification of robust sequences is developed. In addressing the aim of this research, a number of key findings were obtained. A broad method of classifying the input variation of the components is presented; on this basis, it can be determined when the joining process has a significant influence on final assembly dimensionality. The pre-existing fixed-to-free-end guidelines were then generalised for complex geometries, resulting in a most-to-least-rigid configuration approach that notes the importance of prior joining operations and of the fixture boundary conditions. In determining the potential impact of the joining sequence, the need to consider the build-up of internal stresses while modelling the assembly process is highlighted. A novel method, which analyses the natural frequency shift of the structure between successive joins, is presented as a technique for calculating a robust joining sequence. This technique requires no knowledge of the part deformations, only the component geometry, fixture configuration and weld locations, and is hence more practical for industry. Experimental studies were then performed to validate the simulation-based work. Although sequence-based trends are identifiable in some of the extracted data patterns, the twist induced in the experimental structure was less significant than in the simulated results; the factors behind this difference are postulated and analysed, and further investigation of this effect would help to further validate the approach. To build on the work in this thesis, two notable directions, in addition to further industrial validation, have been identified. By assessing the functional impact of variation patterns in measurement data, functional variation sequence synthesis could be investigated, where the goal is to select a sequence that best controls these critical functional variations.
    Evolvable assembly systems that utilise the input work and holding forces to optimise the sequence of operations and minimise potential spring-back of the component could also be considered. This strategy would negate the need for the additional measurement step for process feedback that currently hampers existing adaptive techniques. With between four and six thousand joins performed per vehicle, an estimated 300 billion joining operations are performed annually worldwide. Given how little knowledge is available to industry regarding the impact of the joining process sequence, the results from this work have the potential to significantly improve quality in the automotive market.
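
    A rough Python sketch of how a frequency-based sequence search might look; it assumes reduced, symmetric stiffness and mass matrices with constrained degrees of freedom already removed, and the greedy "most-rigidifying join first" criterion below is a simplification of the thesis's frequency-shift analysis, not its exact algorithm:

        import numpy as np
        from scipy.linalg import eigh

        def fundamental_frequency(K, M):
            # Lowest natural frequency from the generalised eigenproblem
            # K v = w^2 M v (K, M symmetric; M positive definite).
            w2 = eigh(K, M, eigvals_only=True, subset_by_index=[0, 0])[0]
            return np.sqrt(max(w2, 0.0))

        def greedy_join_sequence(K0, M, joint_stiffness):
            # joint_stiffness: join id -> stiffness contribution dK of that join.
            # At each step, apply the candidate join that most raises the
            # fundamental frequency, i.e. that most rigidifies the structure,
            # echoing the most-to-least-rigid guideline.
            K, remaining, sequence = K0.copy(), dict(joint_stiffness), []
            while remaining:
                best = max(remaining,
                           key=lambda j: fundamental_frequency(K + remaining[j], M))
                K += remaining[best]
                sequence.append(best)
                del remaining[best]
            return sequence

    Only the component geometry, fixture configuration and weld locations are needed to assemble K0, M and the joint stiffness contributions, consistent with the practicality argument made above.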