
    Real-Time data-driven average active period method for bottleneck detection

    Prioritising improvement and maintenance activities is an important part of the production management and development process. Companies need to direct their efforts to the production constraints (bottlenecks) to achieve higher productivity. The first step is to identify the bottlenecks in the production system. A majority of current bottleneck detection techniques fall into two categories, based on the methods used to develop them: analytical and simulation-based. Analytical methods are difficult to use in more complex, multi-stepped production systems, and simulation-based approaches are time-consuming and less flexible with regard to changes in the production system. This research paper introduces a real-time, data-driven algorithm, which examines the average active period of the machines (the time when a machine is not waiting) to identify the bottlenecks based on real-time shop floor data captured by Manufacturing Execution Systems (MES). The method utilises machine state information and the corresponding time stamps of those states as recorded by MES. The algorithm has been tested on a real-time MES data set from a manufacturing company. The advantage of this algorithm is that it works for all kinds of production systems, including flow-oriented layouts and parallel systems, and does not require a simulation model of the production system.
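    The average-active-period idea can be sketched in a few lines of Python. The event-log format, machine names, and timestamps below are illustrative assumptions, not the paper's MES schema: each record is a state change, "active" stands for any non-waiting state, and the machine with the longest average active period is flagged as the likely bottleneck.

```python
from collections import defaultdict

# Hypothetical MES event log: (machine, state, timestamp in minutes).
# "active" covers any non-waiting state; all other states count as waiting.
events = [
    ("M1", "active", 0), ("M1", "waiting", 40), ("M1", "active", 50), ("M1", "waiting", 95),
    ("M2", "active", 0), ("M2", "waiting", 60), ("M2", "active", 65), ("M2", "waiting", 130),
    ("M3", "active", 0), ("M3", "waiting", 20), ("M3", "active", 30), ("M3", "waiting", 55),
]

def average_active_periods(events):
    """Mean length of each machine's uninterrupted active periods."""
    periods = defaultdict(list)
    last = {}  # machine -> (state, since)
    for machine, state, ts in sorted(events, key=lambda e: (e[0], e[2])):
        if machine in last and last[machine][0] == "active":
            periods[machine].append(ts - last[machine][1])
        last[machine] = (state, ts)
    return {m: sum(p) / len(p) for m, p in periods.items()}

averages = average_active_periods(events)
bottleneck = max(averages, key=averages.get)  # longest average active period
```

    In this toy log, M2's active periods (60 and 65 minutes) give the largest average, so it would be reported as the bottleneck; no simulation model of the line is needed, only the state stream.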

    Novel monitoring systems to obtain dairy cattle phenotypes associated with sustainable production

    Improvements in production efficiencies and profitability of products from cattle are of great interest to farmers. Furthermore, improvements in production efficiencies associated with feed utilization and fitness traits have also been shown to reduce the environmental impact of cattle systems, which is of great importance to society. The aim of this paper was to discuss selected novel monitoring systems for measuring dairy cattle phenotypic traits that are considered to bring more sustainable production, with increased productivity and reduced environmental impact through reduced greenhouse gas emissions. With resource constraints and high or fluctuating commodity prices, the agricultural industry has seen a growing need by producers for efficiency savings (and innovation) to reduce waste and costs associated with production. New data obtained using fast, in some cases real-time, and affordable objective measures are becoming more readily available to aid farm-level monitoring, awareness, and decision making. These objective measures may additionally provide an accurate and repeatable method for improving animal health and welfare, and phenotypes for selecting animals. Such new data sources include image analysis and further data-driven technologies (e.g., infrared spectra, gas analysis), which bring non-invasive methods to obtain animal phenotypes (e.g., enteric methane, feed utilization, health, fertility, and behavioral traits) on commercial farms; this information may previously have been costly or impossible to obtain. Productivity and efficiency gains often move largely in parallel, thus bringing more sustainable systems.

    Data-driven Prediction of Internal Turbulences in Production Using Synthetic Data

    Production planning and control are characterized by unplanned events or so-called turbulences. Turbulences can be external, originating outside the company (e.g., delayed delivery by a supplier), or internal, originating within the company (e.g., failures of production and intralogistics resources). Turbulences can have far-reaching consequences for companies and their customers, such as delivery delays due to process delays. For target-optimized handling of turbulences in production, forecasting methods incorporating process data in combination with the use of existing flexibility corridors of flexible production systems offer great potential. Probabilistic, data-driven forecasting methods allow determining the corresponding probabilities of potential turbulences. However, a parallel application of different forecasting methods is required to identify an appropriate one for the specific application. This requires a large database, which often is unavailable and, therefore, must be created first. A simulation-based approach to generate synthetic data is used and validated to create the necessary database of input parameters for the prediction of internal turbulences. To this end, a minimal system for conducting simulation experiments on turbulence scenarios was developed and implemented. A multi-method simulation of the minimal system synthetically generates the required process data, using agent-based modeling for the autonomously controlled system elements and event-based modeling for the stochastic turbulence events. Based on this generated synthetic data and the variation of the input parameters in the forecast, a comparative study of data-driven probabilistic forecasting methods was conducted using a data analytics tool. Forecasting methods of different types (including regression, Bayesian models, nonlinear models, decision trees, ensemble, deep learning) were analyzed in terms of prediction quality, standard deviation, and computation time. 
This resulted in the identification of appropriate forecasting methods and of the input parameters required for the considered turbulences.
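    The comparative evaluation can be sketched with stdlib Python only. The synthetic generating model (failure risk rising with queue length) and the two simple forecasters below are illustrative stand-ins, not the paper's multi-method simulation or its data analytics tool; the point is the workflow of scoring probabilistic forecasters on synthetic data, here with the Brier score.

```python
import random
import statistics
from collections import defaultdict

random.seed(42)

# Synthetic process data: (queue_length, turbulence_occurred). The generating
# model is an illustrative assumption used only to create a database.
def sample():
    q = random.randint(0, 10)
    return q, random.random() < 0.05 + 0.04 * q

data = [sample() for _ in range(2000)]
train, test = data[:1500], data[1500:]

def brier(predict, records):
    # Mean squared gap between predicted probability and the 0/1 outcome.
    return statistics.mean((predict(q) - y) ** 2 for q, y in records)

# Forecaster 1: base rate, ignoring the process feature entirely.
base_rate = sum(y for _, y in train) / len(train)

# Forecaster 2: empirical turbulence frequency per queue length.
counts = defaultdict(lambda: [0, 0])
for q, y in train:
    counts[q][0] += y
    counts[q][1] += 1

def binned(q):
    hit, n = counts[q]
    return hit / n if n else base_rate

score_base = brier(lambda q: base_rate, test)
score_binned = brier(binned, test)
```

    Because the feature-aware forecaster exploits the process data, its Brier score on the held-out trials is lower than the base rate's; the same scoring loop extends to any number of candidate methods run in parallel.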

    QuPARA: Query-Driven Large-Scale Portfolio Aggregate Risk Analysis on MapReduce

    Stochastic simulation techniques are used for portfolio risk analysis. Risk portfolios may consist of thousands of reinsurance contracts covering millions of insured locations. To quantify risk, each portfolio must be evaluated in up to a million simulation trials, each capturing a different possible sequence of catastrophic events over the course of a contractual year. In this paper, we explore the design of a flexible framework for portfolio risk analysis that facilitates answering a rich variety of catastrophic risk queries. Rather than aggregating simulation data in order to produce a small set of high-level risk metrics efficiently (as is often done in production risk management systems), the focus here is on allowing the user to pose queries on unaggregated or partially aggregated data. The goal is to provide a flexible framework that can be used by analysts to answer a wide variety of unanticipated but natural ad hoc queries. Such detailed queries can help actuaries or underwriters to better understand the multiple dimensions (e.g., spatial correlation, seasonality, peril features, construction features, and financial terms) that can impact portfolio risk. We implemented a prototype system, called QuPARA (Query-Driven Large-Scale Portfolio Aggregate Risk Analysis), using Hadoop, which is Apache's implementation of the MapReduce paradigm. This allows the user to take advantage of large parallel compute servers in order to answer ad hoc risk analysis queries efficiently even on very large data sets typically encountered in practice. We describe the design and implementation of QuPARA and present experimental results that demonstrate its feasibility. A full portfolio risk analysis run consisting of a 1,000,000 trial simulation, with 1,000 events per trial, and 3,200 risk transfer contracts can be completed on a 16-node Hadoop cluster in just over 20 minutes.
    Comment: 9 pages, IEEE International Conference on Big Data (BigData), Santa Clara, USA, 201
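    The map/reduce structure behind such an ad hoc query can be sketched in plain Python. The year-event-loss tuples, the single deductible/limit contract, and the "expected loss by peril" query below are all illustrative simplifications, not QuPARA's actual Hadoop pipeline or financial terms.

```python
from collections import defaultdict

# Tiny stand-in for year-event-loss data: each trial (one simulated
# contractual year) is a list of (event_id, peril, ground-up loss) tuples.
trials = [
    [("E1", "quake", 120.0), ("E2", "flood", 30.0)],
    [("E3", "flood", 500.0)],
    [("E1", "quake", 80.0), ("E4", "wind", 60.0)],
]

def apply_terms(loss, deductible=25.0, limit=300.0):
    # Simplified financial terms: per-event deductible, then a limit.
    return min(max(loss - deductible, 0.0), limit)

def map_phase(trial):
    # "Map": emit (peril, ceded_loss) pairs, one per event.
    for event_id, peril, loss in trial:
        yield peril, apply_terms(loss)

def reduce_phase(pairs):
    # "Reduce": aggregate ceded loss by peril for the ad hoc query.
    totals = defaultdict(float)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

pairs = (p for t in trials for p in map_phase(t))
per_peril = reduce_phase(pairs)
expected = {k: v / len(trials) for k, v in per_peril.items()}  # expected annual loss
```

    Swapping the key emitted in the map phase (e.g., to a location, season, or construction class) answers a different query over the same unaggregated trial data, which is exactly the flexibility the framework targets; Hadoop's role is to run the same two phases over millions of trials in parallel.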

    Principles for aerospace manufacturing engineering in integrated new product introduction

    This article investigates the value-adding practices of Manufacturing Engineering for integrated New Product Introduction. A model representing how current practices align to support lean integration in Manufacturing Engineering has been defined. The results are used to identify a novel set of guiding principles for integrated Manufacturing Engineering. These are as follows: (1) use a data-driven process, (2) build from core capabilities, (3) develop the standard, (4) deliver through responsive processes and (5) align cross-functional and customer requirements. The investigation used a mixed-method approach. This comprises case studies to identify current practice and a survey to understand implementation in a sample of component development projects within a major aerospace manufacturer. The research contribution is an illustration of aerospace Manufacturing Engineering practices for New Product Introduction. The conclusions will be used to indicate new priorities for New Product Introduction and the cross-functional interactions to support flawless and innovative New Product Introduction. The final principles have been validated through a series of consultations with experts in the sponsoring company to ensure that correct and relevant content has been defined.

    On the Scalability of Data Reduction Techniques in Current and Upcoming HPC Systems from an Application Perspective

    We implement and benchmark parallel I/O methods for the fully manycore-driven particle-in-cell code PIConGPU. Identifying throughput and overall I/O size as a major challenge for applications on today's and future HPC systems, we present a scaling law characterizing performance bottlenecks in state-of-the-art approaches for data reduction. Consequently, we propose, implement, and verify multi-threaded data transformations for the I/O library ADIOS as a feasible way to trade underutilized host-side compute potential on heterogeneous systems for reduced I/O latency.
    Comment: 15 pages, 5 figures, accepted for DRBSD-1 in conjunction with ISC'1
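    The compute-for-I/O trade can be illustrated with a stdlib-only sketch: compress data chunks on a pool of host threads before "writing" them, shrinking the bytes that reach the I/O subsystem. `ThreadPoolExecutor` plus `zlib` here is a stand-in for the paper's multi-threaded ADIOS transforms, and the chunk contents are fake, highly compressible field data; zlib releases the GIL during compression, so the threads genuinely overlap.

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

# Fake field data: 8 chunks of 1 MB each, deliberately compressible.
chunks = [bytes([i % 7] * 1_000_000) for i in range(8)]

def transform(chunk):
    # Host-side data transformation applied before the write path.
    return zlib.compress(chunk, level=6)

# Multi-threaded transformation: trade idle host cores for smaller I/O.
with ThreadPoolExecutor(max_workers=4) as pool:
    compressed = list(pool.map(transform, chunks))

raw_size = sum(len(c) for c in chunks)
out_size = sum(len(c) for c in compressed)

# Round-trip check: the transformation is lossless.
assert all(zlib.decompress(c) == o for c, o in zip(compressed, chunks))
```

    Whether the trade pays off depends on the ratio of compression throughput to I/O bandwidth, which is what the paper's scaling law characterizes.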

    Mining Spatial-Temporal Patterns and Structural Sparsity for Human Motion Data Denoising

    Motion capture is an important technique with a wide range of applications in areas such as computer vision, computer animation, film production, and medical rehabilitation. Even with professional motion capture systems, the acquired raw data mostly contain inevitable noise and outliers. To denoise the data, numerous methods have been developed, yet the problem remains a challenge due to the high complexity of human motion and the diversity of real-life situations. In this paper, we propose a data-driven, robust human motion denoising approach by mining the spatial-temporal patterns and the structural sparsity embedded in motion data. We first replace the commonly used entire-pose model with a much finer-grained partlet model as the feature representation, to exploit the abundant local body part posture and movement similarities. Then, a robust dictionary learning algorithm is proposed to learn multiple compact and representative motion dictionaries from the training data in parallel. Finally, we reformulate the human motion denoising problem as a robust structured sparse coding problem in which both the noise distribution information and the temporal smoothness property of human motion are jointly taken into account. Compared with several state-of-the-art motion denoising methods on both synthetic and real noisy motion data, our method consistently yields better performance than its counterparts. The outputs of our approach are much more stable than those of the others. In addition, it is much easier to set up the training dataset of our method than those of the other data-driven methods.
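    As a deliberately simple stand-in for the paper's structured sparse coding (a per-channel temporal median filter rather than dictionary learning; the joint-angle values are made up), the same temporal-smoothness property can already suppress the spike outliers typical of raw capture data:

```python
import statistics

# One noisy channel of motion data (e.g., a joint angle over frames);
# the 9.0 spike plays the role of a capture outlier.
noisy = [0.0, 0.1, 0.2, 9.0, 0.4, 0.5, 0.6]

def median_denoise(signal, window=3):
    """Sliding-window median: robust to isolated outliers, keeps smooth trends."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(statistics.median(signal[lo:hi]))
    return out

denoised = median_denoise(noisy)  # the spike at frame 3 is replaced by 0.4
```

    The paper's approach goes well beyond this baseline: learned part-level dictionaries model what plausible local motion looks like, and the structured sparse code pulls corrupted frames toward that model instead of toward a fixed window statistic.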