
    Integrating Multiobjective Optimization With The Six Sigma Methodology For Online Process Control

    Over the past two decades, the Define-Measure-Analyze-Improve-Control (DMAIC) framework of the Six Sigma methodology and a host of statistical tools have been brought to bear on process improvement efforts in today’s businesses. However, a major challenge of implementing the Six Sigma methodology is maintaining the process improvements and providing real-time performance feedback and control after solutions are implemented, especially in the presence of multiple process performance objectives. Considering a multiplicity of objectives in business and process improvement is commonplace and, indeed, necessary. However, balancing the collection of objectives is challenging because the objectives are inextricably linked and often in conflict. Previous studies have reported varied success in enhancing the Six Sigma methodology by integrating optimization methods to reduce variability. These studies concentrate such enhancements primarily within the Improve phase of the Six Sigma methodology and optimize a single objective. Current research and practice using the Six Sigma methodology and optimization methods do little to address real-time feedback and control for online process control in the case of multiple objectives. This research proposes an innovative integrated Six Sigma multiobjective optimization (SSMO) approach for online process control. It integrates the Six Sigma DMAIC framework with a nature-inspired optimization procedure that iteratively perturbs a set of decision variables, feeding the results back to the online process and eventually converging to a set of tradeoff process configurations that improves and maintains process stability. For proof of concept, the approach is applied to a general business process model – a well-known inventory management model – that is formally defined and specifies various process costs as objective functions. The proposed SSMO approach and the business process model are programmed and incorporated into a software platform. Computational experiments are performed using both three sigma (3σ)-based and six sigma (6σ)-based process control, and the results reveal that the proposed SSMO approach performs far better than the traditional approaches in improving the stability of the process. This research investigation shows that the benefits of enhancing the Six Sigma method for multiobjective optimization and for online process control are immense.
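    The abstract describes an optimization procedure that repeatedly perturbs decision variables and converges to a set of tradeoff configurations. The sketch below illustrates that general pattern with a simple Pareto-archive random-perturbation search; the two inventory-cost objectives and all parameters are illustrative assumptions, not the dissertation's actual process model.
```python
# Minimal sketch of an iterative perturbation search that keeps a Pareto
# archive of non-dominated (tradeoff) configurations. The (Q, r)-style
# holding/ordering cost objectives are placeholders for illustration only.
import random

def objectives(q, r):
    holding_cost = 0.5 * q + 0.1 * r            # placeholder holding cost
    ordering_cost = 1000.0 / max(q, 1.0) + 2*r  # placeholder ordering/stockout cost
    return (holding_cost, ordering_cost)

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def perturbation_search(iterations=500):
    archive = []           # list of ((q, r), objective-tuple) pairs
    q, r = 100.0, 20.0
    for _ in range(iterations):
        # perturb the decision variables (the feedback step to the online process)
        cand = (max(1.0, q + random.gauss(0, 5)), max(0.0, r + random.gauss(0, 2)))
        f = objectives(*cand)
        if not any(dominates(g, f) for _, g in archive):
            archive = [(x, g) for x, g in archive if not dominates(f, g)]
            archive.append((cand, f))
            q, r = cand    # move toward the newly found non-dominated point
    return archive

if __name__ == "__main__":
    for (q, r), (h, o) in perturbation_search():
        print(f"Q={q:6.1f}  r={r:5.1f}  holding={h:7.1f}  ordering={o:7.1f}")
```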

    STOCK PREDICTION VIA SENTIMENT AND ONLINE SOCIAL STATUS

    Studies of stock market prediction show that stock movements are related to the sentiment of social media. However, few studies have investigated the role of online social relations in predicting stock movements. This paper aims to construct features that capture users’ online social status and incorporate them into stock prediction models. Online opinions are often developed through interactions and are weaker in their early stages. We developed a feature-enhancing procedure motivated by statistical surveillance approaches to strengthen the ability to capture emerging trends. We evaluated our feature-enhancing procedure by developing models to predict stock returns in the following 20-minute period. A comparison of experimental results with baseline models shows that our feature-enhancing design helped to predict stock movements. The model (SE_CUSUM) that adopted features enhanced by cumulative sum (CUSUM), a statistical surveillance approach, performed better than baseline models in terms of directional accuracy, balanced error rate, root mean square error, and mean absolute error. Our simulated trading also showed that SE_CUSUM realized a higher profit than the baseline approaches. These results suggest that incorporating online social status and our feature-enhancing procedure improves high-frequency stock prediction performance.
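    A minimal sketch of CUSUM-style feature enhancement as the abstract describes it: accumulating small, persistent deviations of a raw social-status signal so emerging trends are easier for a prediction model to pick up. The reference level k and the example signal are illustrative assumptions, not the paper's exact SE_CUSUM construction.
```python
def cusum_enhance(series, k=0.0):
    """Return (upper, lower) CUSUM statistics for each time step of a feature series."""
    s_hi, s_lo = 0.0, 0.0
    enhanced = []
    for x in series:
        s_hi = max(0.0, s_hi + x - k)   # accumulates sustained positive drift
        s_lo = min(0.0, s_lo + x + k)   # accumulates sustained negative drift
        enhanced.append((s_hi, s_lo))
    return enhanced

# Usage: the enhanced features would be fed to the return-prediction model
# alongside sentiment features.
social_status_signal = [0.1, 0.2, 0.1, 0.3, 0.4, 0.2, 0.5]
print(cusum_enhance(social_status_signal, k=0.1))
```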

    Design and properties of the predictive ratio cusum (PRC) control charts

    In statistical process control/monitoring (SPC/M), memory-based control charts aim to detect small/medium persistent parameter shifts. When a phase I calibration is not feasible, self-starting methods have been proposed, with the predictive ratio cusum (PRC) being one of them. To apply such methods in practice, one needs to derive the decision limit threshold that will guarantee a preset false alarm tolerance, a very difficult task when the process parameters are unknown and their estimate is sequentially updated. Utilizing the Bayesian framework in PRC, we provide the theoretical framework that allows one to derive a decision-making threshold based on false alarm tolerance, which, along with the PRC closed-form monitoring scheme, permits its straightforward application in real-life practice. An enhancement of PRC is proposed, and a simulation study evaluates its robustness against competitors for various model type misspecifications. Finally, three real data sets (normal, Poisson, and binomial) illustrate its implementation in practice. Technical details, algorithms, and R-codes reproducing the illustrations are provided as supplementary material.
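    A hedged sketch of the general idea behind a self-starting predictive ratio chart: monitor the cumulative log-ratio of an "out-of-control" predictive density (mean shifted by delta) against the in-control predictive density, while the posterior for the unknown mean is updated sequentially. The normal/known-variance conjugate model, shift size, and fixed threshold are simplifying assumptions for illustration; the actual PRC design and its false-alarm-calibrated limits are derived in the paper.
```python
import math

def normal_logpdf(x, mu, var):
    return -0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)

def prc_like_cusum(data, sigma2=1.0, prior_mu=0.0, prior_tau2=10.0, delta=1.0, h=5.0):
    """Return the first alarm time, or None if the chart never signals."""
    mu_n, tau2_n = prior_mu, prior_tau2
    s = 0.0
    for t, x in enumerate(data, start=1):
        pred_var = sigma2 + tau2_n        # predictive variance under current posterior
        llr = (normal_logpdf(x, mu_n + delta, pred_var)
               - normal_logpdf(x, mu_n, pred_var))
        s = max(0.0, s + llr)             # CUSUM of log predictive ratios
        if s > h:
            return t
        # conjugate (self-starting) posterior update for the unknown mean
        post_tau2 = 1.0 / (1.0 / tau2_n + 1.0 / sigma2)
        mu_n = post_tau2 * (mu_n / tau2_n + x / sigma2)
        tau2_n = post_tau2
    return None
```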

    Nonparametric monitoring of sunspot number observations: a case study

    Solar activity is an important driver of long-term climate trends and must be accounted for in climate models. Unfortunately, direct measurements of this quantity over long periods do not exist. The only observations related to solar activity whose records reach back to the seventeenth century are sunspots. Surprisingly, determining the number of sunspots consistently over time has remained a challenging statistical problem to this day. It arises from the need to consolidate data from multiple observing stations around the world in a context of low signal-to-noise ratios, non-stationarity, missing data, non-standard distributions and many kinds of errors. The data from some stations therefore experience severe and varied deviations over time. In this paper, we propose the first systematic and thorough statistical approach for monitoring these complex and important series. It consists of three steps essential for successful treatment of the data: smoothing on multiple timescales, monitoring using block-bootstrap-calibrated CUSUM charts, and classification of out-of-control situations by support vector techniques. This approach allows us to detect a wide range of anomalies (such as sudden jumps or more progressive drifts), unseen in previous analyses. It helps us to identify the causes of major deviations, which are often observer or equipment related. Their detection and identification will contribute to improving future observations. Their elimination or correction in past data will lead to a more precise reconstruction of the world reference index for solar activity: the International Sunspot Number.
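    The monitoring step relies on CUSUM charts whose limits are calibrated by a block bootstrap, so the threshold respects the serial dependence of the in-control station series. Below is a minimal sketch of that calibration idea; the plain tabular CUSUM, block length, and target false-alarm level are illustrative assumptions, not the paper's exact design.
```python
import random

def cusum_max(series, k=0.5):
    """Maximum of a one-sided tabular CUSUM over a series."""
    s, peak = 0.0, 0.0
    for x in series:
        s = max(0.0, s + x - k)
        peak = max(peak, s)
    return peak

def block_bootstrap_threshold(in_control, run_length=200, block_len=20,
                              n_boot=1000, alpha=0.01, k=0.5):
    """Estimate the CUSUM limit h whose in-control exceedance probability is about alpha."""
    n = len(in_control)
    peaks = []
    for _ in range(n_boot):
        resampled = []
        while len(resampled) < run_length:
            start = random.randrange(0, n - block_len)     # moving-block resampling
            resampled.extend(in_control[start:start + block_len])
        peaks.append(cusum_max(resampled[:run_length], k))
    peaks.sort()
    return peaks[int((1 - alpha) * n_boot) - 1]
```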

    Temporal instability of evidence base: A threat to policy making?

    A shift towards evidence-based conservation and environmental management over the last two decades has resulted in an increased use of systematic reviews and meta-analyses as tools to combine existing scientific evidence. However, to guide policy making decisions in conservation and management, the conclusions of meta-analyses need to remain stable for at least some years. Alarmingly, numerous recent studies indicate that the magnitude, statistical significance, and even the sign of the effects reported in the literature might change over relatively short time periods. We argue that such rapid temporal changes in cumulative evidence represent a real threat to policy making in conservation and environmental management and call for systematic monitoring of temporal changes in evidence and exploration of their causes.

    Multi-agent-based DDoS detection on big data systems

    The Hadoop framework has become the most deployed platform for processing Big Data. Despite its advantages, Hadoop's infrastructure is still deployed within the secured network perimeter because the framework lacks adequate inherent security mechanisms against various security threats. However, this approach is not sufficient for providing an adequate security layer against attacks such as Distributed Denial of Service (DDoS). Furthermore, current work to secure Hadoop's infrastructure against DDoS attacks is unable to provide a distributed node-level detection mechanism. This thesis presents a software agent-based framework that allows distributed, real-time intelligent monitoring and detection of DDoS attacks at Hadoop's node level. The agent's cognitive system is ingrained with the cumulative sum (CUSUM) statistical technique to analyse network utilisation and average server load and detect attacks from these measurements. The framework is a multi-agent architecture with transducer agents that interface with each Hadoop node to provide a real-time detection mechanism. Moreover, the agents contextualise their beliefs by training themselves with the contextual information of each node and monitor the activities of the node to differentiate between normal and anomalous behaviours. In the experiments, the framework was exposed to TCP SYN and UDP flooding attacks during a legitimate MapReduce job on the Hadoop testbed. The experimental results were evaluated with respect to performance metrics such as false-positive ratio, false-negative ratio and response time to attack. The results show that UDP and TCP SYN flooding attacks can be detected and confirmed on multiple nodes in nineteen seconds with a 5.56% false-positive ratio, a 7.70% false-negative ratio and a 91.5% detection success rate. These results represent an improvement compared to the state of the art.
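    A minimal sketch of the node-level detection idea described above: an agent keeps a one-sided CUSUM on each monitored measurement (network utilisation, average server load) and flags the node when either statistic crosses its limit. The baselines, allowance, and threshold values are illustrative assumptions, not the thesis's trained parameters.
```python
class NodeAgent:
    def __init__(self, baselines, allowance=0.5, threshold=5.0):
        self.baselines = baselines                 # per-metric in-control mean
        self.k = allowance                         # CUSUM slack value
        self.h = threshold                         # decision limit
        self.s = {m: 0.0 for m in baselines}

    def observe(self, sample):
        """sample: dict of metric -> current value; returns the metrics that alarm."""
        alarms = []
        for metric, value in sample.items():
            dev = value - self.baselines[metric]
            self.s[metric] = max(0.0, self.s[metric] + dev - self.k)
            if self.s[metric] > self.h:
                alarms.append(metric)
                self.s[metric] = 0.0               # reset after signalling
        return alarms

# Sustained flood-like traffic accumulates in the CUSUMs over a few samples.
agent = NodeAgent({"net_util": 0.30, "load_avg": 1.0})
for _ in range(5):
    print(agent.observe({"net_util": 0.95, "load_avg": 4.0}))
```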

    Quality techniques of dispersion processes in production line.

    National Technical University of Athens--Master's Thesis. Interdisciplinary-Interdepartmental Postgraduate Studies Programme (D.P.M.S.) “Applied Mathematical Sciences”

    Online Kernel CUSUM for Change-Point Detection

    We propose an efficient online kernel Cumulative Sum (CUSUM) method for change-point detection that utilizes the maximum over a set of kernel statistics to account for the unknown change-point location. Our approach exhibits increased sensitivity to small changes compared to existing methods, such as the Scan-B statistic, which corresponds to a non-parametric Shewhart chart-type procedure. We provide accurate analytic approximations for two key performance metrics: the Average Run Length (ARL) and Expected Detection Delay (EDD), which enable us to establish an optimal window length on the order of the logarithm of the ARL to ensure minimal power loss relative to an oracle procedure with infinite memory. Such a finding parallels the classic result for the window-limited Generalized Likelihood Ratio (GLR) procedure in the parametric change-point detection literature. Moreover, we introduce a recursive calculation procedure for the detection statistics to ensure constant computational and memory complexity, which is essential for online procedures. Through extensive experiments on simulated data and a real-world human activity dataset, we demonstrate the competitive performance of our method and validate our theoretical results.
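    A hedged sketch of the window-limited idea in the abstract: maintain a small grid of candidate window lengths, compare the most recent observations in each window against a fixed reference sample with an RBF-kernel mean discrepancy, and take the maximum as the detection statistic. This is an illustrative simplification; the paper's recursive constant-complexity update and its ARL/EDD-calibrated thresholds are not reproduced here.
```python
from collections import deque
import math

def rbf(x, y, gamma=1.0):
    return math.exp(-gamma * (x - y) ** 2)

class WindowedKernelDetector:
    def __init__(self, reference, windows=(10, 20, 40)):
        self.reference = list(reference)           # in-control reference sample
        self.windows = windows                     # candidate window lengths
        self.buffer = deque(maxlen=max(windows))

    def statistic(self, window):
        recent = list(self.buffer)[-window:]
        if len(recent) < window:
            return 0.0
        m = len(self.reference)
        # biased MMD^2 estimate between the recent window and the reference sample
        xx = sum(rbf(a, b) for a in recent for b in recent) / window ** 2
        yy = sum(rbf(a, b) for a in self.reference for b in self.reference) / m ** 2
        xy = sum(rbf(a, b) for a in recent for b in self.reference) / (window * m)
        return xx + yy - 2 * xy

    def update(self, x):
        """Ingest one observation and return the max statistic over all windows."""
        self.buffer.append(x)
        return max(self.statistic(w) for w in self.windows)
```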