
    Real-Time Monitoring and Fault Diagnostics in Roll-To-Roll Manufacturing Systems

    A roll-to-roll (R2R) process is a manufacturing technique in which a flexible substrate is processed continuously as it is transferred between rotating rolls. It integrates many additive and subtractive processing techniques to produce rolls of product efficiently and cost-effectively, owing to its high production rate and output volume. R2R processes have therefore been increasingly adopted across a wide range of manufacturing industries, including traditional paper/fabric production, plastic and metal foil manufacturing, flexible electronics, thin-film batteries, photovoltaics, and graphene film production. However, the increasing complexity of R2R processes and high demands on product quality have heightened the need for effective real-time process monitoring and fault diagnosis in R2R manufacturing systems. This dissertation aims to develop tools that increase system visibility without additional sensors, in order to enhance real-time monitoring and fault diagnosis capabilities in R2R manufacturing systems. First, a multistage modeling method is proposed for process monitoring and quality estimation in R2R processes. Product-centric and process-centric variation propagation are introduced to characterize variation propagation throughout the system. The multistage model focuses mainly on the formulation of process-centric variation propagation, which is unique to R2R processes, and the corresponding product quality measurements, combining physical knowledge with sensor data analysis. Second, a nonlinear analytical redundancy method is proposed for sensor validation to ensure the accuracy of sensor measurements for process and quality control. Parity relations based on a nonlinear observation matrix are formulated to characterize system dynamics and sensor measurements. A robust optimization is designed to identify parity-relation coefficients that can tolerate a certain level of measurement noise and system disturbance.
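A minimal linear sketch of the analytical-redundancy idea behind such sensor validation (the dissertation's formulation is nonlinear and uses robust optimization; the observation matrix `H`, noise levels, and threshold below are purely illustrative):

```python
import numpy as np

# y = H x + noise: redundant sensors observe the state x through H.
# Any parity matrix W with W @ H = 0 yields residuals r = W @ y that stay
# near zero for healthy sensors and grow when a sensor is faulty.

def parity_matrix(H):
    """Rows spanning the left null space of H (the parity space)."""
    U, s, _ = np.linalg.svd(H)
    rank = int(np.sum(s > 1e-10))
    return U[:, rank:].T  # W @ H is (numerically) zero

def validate(W, y, threshold=0.5):
    """Return the residual norm and a healthy/faulty verdict."""
    r = W @ y
    return np.linalg.norm(r), np.linalg.norm(r) <= threshold

H = np.array([[1.0], [1.0], [1.0]])           # three sensors, one state
W = parity_matrix(H)

x = 2.0
healthy = H[:, 0] * x + 0.01 * np.array([1, -1, 0.5])  # small noise only
faulty = healthy + np.array([0.0, 0.0, 3.0])           # bias on sensor 3

print(validate(W, healthy))  # small residual -> healthy
print(validate(W, faulty))   # large residual -> fault flagged
```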
The effect of changes in operating conditions on the optimal objective function (the parity residuals) and the optimal design variables (the parity coefficients) is evaluated with sensitivity analysis. Finally, a multiple-model approach for anomaly detection and fault diagnosis is introduced to improve diagnosability under different operating regimes. The growing structure multiple model system (GSMMS) is employed, which uses Voronoi sets to automatically partition the entire operating space into smaller operating regimes. The local model identification problem is revised by formulating it as an optimization problem within the loss minimization framework and solving it with mini-batch stochastic gradient descent instead of least squares algorithms. This revision expands the GSMMS method's capability to handle local model identification problems that have no closed-form solution. The effectiveness of the models and methods is evaluated with testbed data from an R2R process. The results show that the proposed models and methods are effective tools to understand variation propagation in R2R processes and improve product quality estimation accuracy by 70%, to identify the health status of sensors promptly so that data are accurate for modeling and decision making, and to reduce the false alarm rate and increase detection power under different operating conditions. Ultimately, the tools developed in this thesis help increase the visibility of R2R manufacturing systems, improve productivity, and reduce the product rejection rate.
    PhD, Mechanical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/146114/1/huanyis_1.pd
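The multiple-model idea can be sketched as follows: Voronoi centers partition the operating space, and each regime's local model is fitted with mini-batch SGD in place of a least-squares solve. The centers, the linear local-model form, and all hyperparameters here are illustrative, not the GSMMS implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def region_of(x, centers):
    """Index of the nearest Voronoi center (the operating regime)."""
    return int(np.argmin(np.linalg.norm(centers - x, axis=1)))

def fit_local_sgd(X, y, lr=0.02, epochs=500, batch=8):
    """Fit y ~ X @ w + b by mini-batch SGD on the squared loss."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        idx = rng.permutation(len(X))
        for s in range(0, len(X), batch):
            B = idx[s:s + batch]
            err = X[B] @ w + b - y[B]
            w -= lr * X[B].T @ err / len(B)
            b -= lr * err.mean()
    return w, b

# Synthetic data with two operating regimes and different local dynamics.
centers = np.array([[0.0], [5.0]])          # Voronoi boundary at x = 2.5
X = rng.uniform(0, 6, size=(200, 1))
y = np.where(X[:, 0] < 2.5, 2 * X[:, 0], -X[:, 0] + 8)

models = []
for k in range(len(centers)):
    mask = np.array([region_of(x, centers) == k for x in X])
    models.append(fit_local_sgd(X[mask], y[mask]))

def predict(x, centers, models):
    """Route the operating point to its regime's local model."""
    w, b = models[region_of(x, centers)]
    return float(x @ w + b)

print(predict(np.array([1.0]), centers, models))  # close to 2.0
print(predict(np.array([5.0]), centers, models))  # close to 3.0
```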

    How the Application of Machine Learning Systems Changes Business Processes: A Multiple Case Study

    Machine Learning (ML) systems are applied in organizations to substitute or complement human knowledge work. Although organizations invest heavily in ML, the resulting business benefits often remain unclear. To explain the impact of ML systems, it is necessary to understand how their application changes business processes and affects process performance. In our exploratory multiple case study, we analyze the application of multiple productive ML systems in one organization to (1) describe how activity composition, allocation, and sequence change in ML-supported processes; (2) distinguish how the applied ML system type and task characteristics influence process changes; and (3) explain how process efficiency and quality are affected. As a result, we develop three preliminary change patterns: Lift & Shift, Divide & Conquer, and Expand & Intensify. Our research aims to contribute to the future of work and IS value literature by connecting the emerging knowledge on ML systems to their process-level implications.

    Autonomous Recovery Of Reconfigurable Logic Devices Using Priority Escalation Of Slack

    Field Programmable Gate Array (FPGA) devices offer a suitable platform for survivable hardware architectures in mission-critical systems. In this dissertation, active dynamic redundancy-based fault-handling techniques are proposed which exploit the dynamic partial reconfiguration capability of SRAM-based FPGAs. Self-adaptation is realized by employing reconfiguration in the detection, diagnosis, and recovery phases. To extend these concepts to semiconductor aging and process variation in the deep-submicron era, resilient adaptable processing systems are sought that maintain quality and throughput requirements despite the vulnerabilities of the underlying computational devices. A new approach to autonomous fault handling which addresses these goals is developed using only a uniplex hardware arrangement. It operates by observing a health metric to achieve Fault Demotion using Reconfigurable Slack (FaDReS). Here, an autonomous fault isolation scheme is employed which neither requires test vectors nor suspends computational throughput, but instead observes the value of a health metric based on runtime input. The deterministic flow of the fault isolation scheme guarantees success in a bounded number of reconfigurations of the FPGA fabric. FaDReS is then extended to the Priority Using Resource Escalation (PURE) online redundancy scheme, which considers fault-isolation latency and throughput trade-offs under a dynamic spare arrangement. While deep-submicron designs introduce new challenges, the use of adaptive techniques is seen to provide several promising avenues for improving resilience. The scheme developed is demonstrated by hardware design of various signal processing circuits and their implementation on a Xilinx Virtex-4 FPGA device.
These include a Discrete Cosine Transform (DCT) core, a Motion Estimation (ME) engine, a Finite Impulse Response (FIR) filter, a Support Vector Machine (SVM), and Advanced Encryption Standard (AES) blocks, in addition to MCNC benchmark circuits. A significant reduction in power consumption is achieved, ranging from 83% for low motion-activity scenes to 12.5% for high motion-activity video scenes in a novel ME engine configuration. For a typical benchmark video sequence, PURE is shown to maintain a PSNR baseline near 32 dB. The diagnosability, reconfiguration latency, and resource overhead of each approach are analyzed. Compared to previous alternatives, PURE maintains a PSNR within 4.02 dB to 6.67 dB of the fault-free baseline by escalating healthy resources to higher-priority signal processing functions. The results indicate the benefits of priority-aware resiliency over conventional redundancy approaches in terms of fault recovery, power consumption, and resource-area requirements. Together, these provide a broad range of strategies to achieve autonomous recovery of reconfigurable logic devices under a variety of constraints, operating conditions, and optimization criteria.
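PSNR, the quality metric quoted above, is 10·log10(peak²/MSE) between a reference frame and a degraded one. A minimal reference implementation (the frames and noise level below are illustrative, not the dissertation's data):

```python
import numpy as np

def psnr(reference, degraded, peak=255.0):
    """Peak signal-to-noise ratio in dB between two 8-bit frames."""
    mse = np.mean((reference.astype(float) - degraded.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(1)
frame = rng.integers(0, 256, size=(64, 64))                # reference frame
noisy = np.clip(frame + rng.normal(0, 6, frame.shape), 0, 255)

print(round(psnr(frame, noisy), 1))  # roughly 32-33 dB for sigma near 6
```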

    Global Warming and the Problem of Policy Innovation: Lessons From the Early Environmental Movement

    When it comes to influencing government decisions, special interests have some built-in advantages over the general public interest. When the individual members of special interest groups have a good deal to gain or lose as a result of government action, special interests can organize more effectively and generate benefits for elected officials, such as campaign contributions and other forms of political support. They will seek to use those advantages to influence government decisions favorable to them. The public choice theory of government decision making sometimes comes close to elevating this point into a universal law, suggesting that the general public interest can never prevail over powerful special interests. In the period of the late 1960s and early 1970s, however, Congress enacted numerous significant environmental laws, laws that continue to form the backbone of federal policies toward environmental problems. These laws were truly innovative in their policies and their designs, and they pitted the general public interest in improving environmental quality against powerful special interests. In each case, the general public interest was able to prevail. This policy “window” did not stay open for long. It was quickly succeeded by an extended period, which continues to this day, in which enacting additional innovative statutes has proven nearly impossible. Yet we need innovative approaches to address continuing and emerging environmental problems more than ever. This is self-evidently true with respect to the problem of global warming and climate change. The questions worth asking are whether we can identify the factors that once made policy innovation possible in the late 1960s and early 1970s and whether those factors can be produced once again.
For the public’s David to be able to stand up against the special interest Goliaths, a broad base of the public must first be mobilized, and then that mobilization must be sustained, which typically occurs when the public embraces a sense of great urgency. Urgency can be generated when the public appreciates that failure to address a problem threatens them or their loved ones with significant harm. Media attention plays a key role in creating the public’s awareness of any urgent problem. These factors can succeed in putting general concerns of the public on the public agenda, at which time acceptable proposals for workable solutions need to be available. When the first window for policy innovation opened up in the late 1960s and early 1970s, each of these favorable factors was present for many of our conventional pollution problems. At the same time, the strength of the special interests was at a low ebb. This Essay argues that under current circumstances, the conditions for policy innovation are not yet as favorable as they were in this earlier period. Strong presidential leadership may be capable of altering those conditions, but as yet the public’s concern about the adverse effects of climate change does not appear to have achieved the same strength or intensity as comparable concerns over conventional pollution problems had earlier.

    TFormer: A throughout fusion transformer for multi-modal skin lesion diagnosis

    Multi-modal skin lesion diagnosis (MSLD) has achieved remarkable success with modern computer-aided diagnosis technology based on deep convolutions. However, aggregating information across modalities in MSLD remains challenging due to severely unaligned spatial resolutions (dermoscopic image vs. clinical image) and heterogeneous data (dermoscopic image vs. patients' meta-data). Limited by the intrinsically local nature of convolution, most recent MSLD pipelines built on pure convolutions struggle to capture representative features in shallow layers; fusion across modalities is therefore usually done at the end of the pipeline, even at the last layer, leading to insufficient information aggregation. To tackle this issue, we introduce a pure transformer-based method, which we refer to as the "Throughout Fusion Transformer (TFormer)", for sufficient information integration in MSLD. Unlike existing convolution-based approaches, the proposed network leverages a transformer as the feature extraction backbone, yielding more representative shallow features. We then carefully design a stack of dual-branch hierarchical multi-modal transformer (HMT) blocks to fuse information across the image modalities stage by stage. With the aggregated image-modality information, a multi-modal transformer post-fusion (MTP) block is designed to integrate features across image and non-image data. This strategy of first fusing the image modalities and then the heterogeneous meta-data lets us divide and conquer the two major challenges while ensuring that inter-modality dynamics are effectively modeled. Experiments conducted on the public Derm7pt dataset validate the superiority of the proposed method: our TFormer outperforms other state-of-the-art methods, and ablation experiments also confirm the effectiveness of our designs.
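The stage-by-stage fusion in the HMT blocks rests on cross-attention, where tokens of one modality attend to tokens of the other. A minimal single-head sketch of that mechanism (the shapes, token counts, and random weights are illustrative, not the paper's architecture or code):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values, Wq, Wk, Wv):
    """Each query token aggregates the other modality's value tokens."""
    Q, K, V = queries @ Wq, keys_values @ Wk, keys_values @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # scaled dot-product
    return softmax(scores) @ V                # fused query-token features

rng = np.random.default_rng(0)
d = 16
dermoscopic = rng.normal(size=(49, d))    # e.g. 7x7 patch tokens
clinical = rng.normal(size=(196, d))      # e.g. 14x14 patch tokens
Wq, Wk, Wv = (rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3))

fused = cross_attention(dermoscopic, clinical, Wq, Wk, Wv)
print(fused.shape)  # (49, 16): one fused vector per dermoscopic token
```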

    Isolation of malicious external inputs in a security focused adaptive execution environment

    Reliable isolation of malicious application inputs is necessary for preventing the future success of an observed novel attack after the initial incident. In this paper we describe, measure, and analyze Input-Reduction, a technique that can quickly isolate malicious external inputs that embody unforeseen and potentially novel attacks from other benign application inputs. The Input-Reduction technique is integrated into an advanced, security-focused, adaptive execution environment that automates diagnosis and repair. In experiments, we show that Input-Reduction is highly accurate and efficient in isolating attack inputs and determining causal relations between inputs. We also measure and show that the cost incurred by the key services that support reliable reproduction and fast attack isolation is reasonable in the adaptive execution environment.
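The paper's exact algorithm is not reproduced here, but the core idea of isolating a small failure-inducing subset of a recorded input sequence can be sketched as a delta-debugging-style reduction, where `triggers_failure` stands in for replaying inputs in the adaptive execution environment:

```python
def reduce_inputs(inputs, triggers_failure):
    """Greedily discard chunks of the input trace while the failure
    still reproduces, halving the chunk size when nothing shrinks."""
    chunk = max(1, len(inputs) // 2)
    while chunk >= 1:
        i, shrunk = 0, False
        while i < len(inputs):
            candidate = inputs[:i] + inputs[i + chunk:]
            if triggers_failure(candidate):
                inputs, shrunk = candidate, True   # chunk was irrelevant
            else:
                i += chunk                          # chunk is needed; keep it
        if not shrunk:
            chunk //= 2
    return inputs

# Toy oracle: the failure reproduces only if both attack inputs are present.
def oracle(seq):
    return "evil-a" in seq and "evil-b" in seq

trace = [f"req-{k}" for k in range(10)]
trace[3], trace[7] = "evil-a", "evil-b"
print(reduce_inputs(trace, oracle))  # ['evil-a', 'evil-b']
```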

    On the production testing of analog and digital circuits

    This thesis focuses on the production testing of analog and digital circuits. First, it addresses the issue of finding a high-coverage minimum test set for the second-generation current conveyor (CCII), as this had not been tackled before. The circuit under test is used in active capacitance multipliers, V-I scalar circuits, biquadratic filters, and many other applications; it is often used to implement voltage followers, current followers, and voltage-to-current converters. Five faults are assumed per transistor. It is shown that, to obtain 100% fault coverage, the CCII has to be operated in voltage-to-current converter mode, and only two test values are required to achieve this coverage. Additionally, the thesis focuses on the production testing of Memristor Ratioed Logic (MRL) gates, because this had not been studied before. MRL is a logic family that uses memristors along with CMOS inverters to build logic gates. Two-input NAND and NOR gates are investigated using the stuck-at fault model for the memristors and the five-fault model for the transistors. To obtain full coverage for the MRL NAND and NOR gates, two solutions are proposed: the first uses scaled input voltages to prevent the output from falling in the undefined region; the second changes the switching threshold VM of the CMOS inverter. In addition, it is shown that test speed and order should be taken into consideration. It is proven that three ordered test vectors are needed for full coverage of the MRL NAND and NOR gates, which differs from the 100% coverage test set of the conventional CMOS NAND and NOR designs.
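The coverage analysis can be illustrated at gate level with a toy stuck-at fault simulation of a 2-input NAND: a test vector detects a fault if the faulty output differs from the fault-free one. This simplified line-level model is for illustration only; the thesis additionally uses a five-fault transistor model and memristor faults:

```python
LINES = ["a", "b", "out"]

def nand(a, b, fault=None):
    """Evaluate NAND with an optional (line, stuck_value) fault."""
    if fault:
        line, v = fault
        if line == "out":
            return v
        if line == "a":
            a = v
        if line == "b":
            b = v
    return 1 - (a & b)

# Six single stuck-at faults: each line stuck at 0 or stuck at 1.
faults = [(line, v) for line in LINES for v in (0, 1)]

def coverage(test_set):
    """Fraction of faults detected by at least one test vector."""
    detected = {f for f in faults
                for a, b in test_set if nand(a, b, f) != nand(a, b)}
    return len(detected) / len(faults)

print(coverage([(1, 1)]))                  # 0.5: detects a/0, b/0, out/1
print(coverage([(1, 1), (0, 1), (1, 0)]))  # 1.0: full stuck-at coverage
```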