
    A blind hierarchical coherent search for gravitational-wave signals from coalescing compact binaries in a network of interferometric detectors

    We describe a hierarchical data analysis pipeline for coherently searching for gravitational-wave (GW) signals from non-spinning compact binary coalescences (CBCs) in the data of multiple Earth-based detectors. It assumes no prior information on the sky position of the source or on the time of occurrence of its transient signals and is hence termed "blind". The pipeline computes the coherent network search statistic that is optimal in stationary, Gaussian noise, and allows for the computation of a suite of alternative statistics and signal-based discriminators that can improve its performance in real data. Unlike the coincident multi-detector search statistics employed so far, the coherent statistics check for the consistency of the signal amplitudes and phases in the different detectors with their different orientations and with the signal arrival times in them. The first stage of the hierarchical pipeline constructs coincidences of triggers from the multiple interferometers by requiring their proximity in time and component masses. The second stage follows up on these coincident triggers by computing the coherent statistics. The performance of the hierarchical coherent pipeline on Gaussian data is shown to be better than that of a pipeline with just the first (coincidence) stage.

    Comment: 12 pages, 3 figures, accepted for publication in Classical and Quantum Gravity
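    The first (coincidence) stage described above pairs triggers from different interferometers that are close in time and in component masses. The following is a minimal Python sketch of that idea; the trigger fields, window sizes, and function names are hypothetical and not taken from the paper's pipeline.

```python
def is_coincident(trig_a, trig_b, dt_max=0.02, dm_max=0.1):
    """True if two single-detector triggers are close in arrival time
    and in both component masses (illustrative windows, in s and solar
    masses)."""
    close_in_time = abs(trig_a["t"] - trig_b["t"]) <= dt_max
    close_in_m1 = abs(trig_a["m1"] - trig_b["m1"]) <= dm_max
    close_in_m2 = abs(trig_a["m2"] - trig_b["m2"]) <= dm_max
    return close_in_time and close_in_m1 and close_in_m2


def coincidences(triggers_a, triggers_b, **kw):
    """All index pairs (i, j) of coincident triggers between the
    trigger lists of two detectors."""
    return [(i, j)
            for i, ta in enumerate(triggers_a)
            for j, tb in enumerate(triggers_b)
            if is_coincident(ta, tb, **kw)]
```

    Only the pairs surviving this cheap test would be passed to the second, more expensive coherent-statistic stage.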

    Two-stage optimization model on make-or-buy analysis and quality improvement considering learning and forgetting curves

    Purpose: The aim of this research is to develop a two-stage optimization model on make-or-buy analysis and quality improvement considering the learning and forgetting curve. The first-stage model determines the optimal selection of processes/suppliers and the allocation of components to those processes/suppliers. The second-stage model deals with quality improvement efforts, determining the optimal investment to maximize return on investment (ROI) while taking the learning and forgetting curve into consideration.
    Design/methodology/approach: The research uses a system modeling approach, mathematically modeling a system consisting of a manufacturer with multiple suppliers, where the manufacturer tries to determine the best combination of its own processes and suppliers to minimize certain costs, and provides funding for quality improvement efforts both for its own processes and on the suppliers' side.
    Findings: This research supports better decisions in make-or-buy analysis and in improving components through quality investment considering the learning and forgetting curve.
    Research limitations/implications: The model assumes that the investment fund is provided by the manufacturer, whereas in the real system the fund may be provided by the suppliers. The model also does not differentiate between the two types of learning, namely autonomous and induced learning.
    Practical implications: The model can be used by a manufacturer to gain deeper insight into decisions concerning process/supplier selection and component allocation, and into how to improve components through investment allocation to maximize ROI.
    Originality/value: This paper combines two models that previous research discussed separately. The inclusion of learning and forgetting also gives a new perspective on quality investment decisions.
    Peer Reviewed

    Adaptive Two-stage Stochastic Programming with an Application to Capacity Expansion Planning

    Multi-stage stochastic programming is a well-established framework for sequential decision making under uncertainty that seeks policies fully adapted to the uncertainty. Often such flexible policies are not desirable, and the decision maker may need to commit to a set of actions for a number of planning periods. Two-stage stochastic programming might be better suited to such settings, where the decisions for all periods are made here-and-now and do not adapt to the uncertainty realized. In this paper, we propose a novel alternative approach in which the stages are not predetermined but are part of the optimization problem. Each component of the decision policy has an associated revision point, a period prior to which the decision is predetermined and after which it is revised to adjust to the uncertainty realized thus far. We motivate this setting using the multi-period newsvendor problem by deriving an optimal adaptive policy. We label the proposed approach adaptive two-stage stochastic programming and provide a generic mixed-integer programming formulation for finite stochastic processes. We show that adaptive two-stage stochastic programming is NP-hard in general. Next, we derive bounds on the value of adaptive two-stage programming in comparison to the two-stage and multi-stage approaches for a specific problem structure inspired by the capacity expansion planning problem. Since directly solving the mixed-integer linear program associated with the adaptive two-stage approach might be very costly for large instances, we propose several heuristic solution algorithms based on the bound analysis. We provide approximation guarantees for these heuristics. Finally, we present an extensive computational study on an electricity generation capacity expansion planning problem and demonstrate the computational and practical impacts of the proposed approach from various perspectives.
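    The newsvendor problem that motivates the paper admits, in its classic single-period form, the critical-fractile solution: order the smallest quantity q with F(q) >= cu / (cu + co), where cu and co are the unit underage and overage costs. A minimal sketch for a discrete demand distribution; the names and the price/cost parameterization are illustrative, not the paper's formulation:

```python
def newsvendor_quantity(demand_pmf, price, cost):
    """Critical-fractile solution of the single-period newsvendor.

    demand_pmf maps demand values to probabilities. Underage cost is
    the lost margin cu = price - cost; overage cost is co = cost.
    Returns the smallest q whose CDF reaches cu / (cu + co).
    """
    cu, co = price - cost, cost
    target = cu / (cu + co)
    cum = 0.0
    for d in sorted(demand_pmf):
        cum += demand_pmf[d]
        if cum >= target - 1e-12:  # tolerance for float round-off
            return d
    return max(demand_pmf)
```

    The adaptive two-stage idea then concerns when, across multiple periods, such order decisions may be revised rather than fixed up front.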

    Computational procedures for stochastic multi-echelon production systems

    This paper is concerned with the numerical evaluation of multi-echelon production systems. Each stage requires a fixed, predetermined leadtime; furthermore, we assume a stochastic, stationary end-item demand process. In a previous paper, we developed an analytical framework for determining optimal control policies for such systems under an average cost criterion.

    The current paper is based on this analytical theory but discusses computational aspects, in particular for serial and assembly systems. A hierarchical (exact) decomposition of these systems can be obtained by considering echelon stocks and by transforming penalty and holding costs accordingly. The one-dimensional problems arising after this decomposition, however, involve incomplete convolutions of distribution functions, which are only recursively defined. We develop numerical procedures for analysing these incomplete convolutions; these procedures are based on approximations of distribution functions by mixtures of Erlang distributions. Combining the analytically obtained (exact) decomposition results with these numerical procedures enables us to quickly determine optimal order-up-to levels for all stages. Moreover, expressions for the customer service level of such a multi-stage system are obtained, yielding the possibility to determine policies which minimize average inventory holding costs, given a service level constraint.
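    Erlang-mixture approximations of the kind mentioned above are often built with a two-moment fit: for a squared coefficient of variation (SCV) at most 1, one mixes Erlang distributions with k-1 and k phases sharing a common rate so that the mean and SCV are matched. A sketch of one standard such fit (as described, e.g., by Tijms); the paper's actual procedure may differ:

```python
import math


def erlang_mixture_fit(mean, scv):
    """Two-moment fit by a mixture of Erlang(k-1) and Erlang(k).

    Chooses k with 1/k <= scv <= 1/(k-1) and returns (p, k, mu):
    with probability p the sample has k-1 phases, otherwise k phases,
    each phase exponential with rate mu. Requires 0 < scv <= 1.
    """
    k = max(2, math.ceil(1.0 / scv))
    p = (k * scv - math.sqrt(k * (1 + scv) - k * k * scv)) / (1 + scv)
    mu = (k - p) / mean  # fixes the mixture mean at (k - p) / mu
    return p, k, mu
```

    For scv = 0.5 the fit degenerates to a pure Erlang-2 (p = 0), and for scv = 1 to a pure exponential (p = 1), as one would expect.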

    Variable dimension weighted universal vector quantization and noiseless coding

    A new algorithm for variable-dimension weighted universal coding is introduced. Combining the multi-codebook system of weighted universal vector quantization (WUVQ), the partitioning technique of variable-dimension vector quantization, and the optimal design strategy common to both, variable-dimension WUVQ allows mixture sources to be effectively carved into their component subsources, each of which can then be encoded with the codebook best matched to that source. Application of variable-dimension WUVQ to a sequence of medical images provides up to 4.8 dB improvement in signal-to-quantization-noise ratio over WUVQ and up to 11 dB improvement over a standard full-search vector quantizer followed by an entropy code. The optimal partitioning technique can likewise be applied with a collection of noiseless codes, as found in weighted universal noiseless coding (WUNC). The resulting algorithm for variable-dimension WUNC is also described.
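    The codebook-selection idea behind weighted universal VQ can be illustrated by scoring a source block against every codeword of every candidate codebook and keeping the minimum-distortion match. A toy sketch that ignores the rate term and the variable-dimension partitioning; all names are illustrative:

```python
def encode_block(block, codebooks):
    """Exhaustive search over a collection of codebooks.

    Returns (distortion, codebook_index, codeword_index) for the
    codeword, across all codebooks, with minimal squared error to
    the block. Block and codewords are equal-length number tuples.
    """
    best = None
    for cb_idx, cb in enumerate(codebooks):
        for cw_idx, cw in enumerate(cb):
            dist = sum((b - c) ** 2 for b, c in zip(block, cw))
            if best is None or dist < best[0]:
                best = (dist, cb_idx, cw_idx)
    return best
```

    In the full scheme, the choice would also weigh the rate cost of signaling which codebook was used, so that each subsource gravitates toward its best-matched codebook.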

    On multi-stage production/inventory systems under stochastic demand

    This paper was presented at the 1992 Conference of the International Society of Inventory Research in Budapest, as a tribute to Professor Andrew C. Clark for his inspiring work on multi-echelon inventory models, both in theory and practice. It reviews and extends the work of the authors on periodic-review serial and convergent multi-echelon systems under stochastic stationary demand. In particular, we highlight the structure of echelon cost functions, which play a central role in the derivation of the decomposition results and the optimality of base stock policies. The resulting optimal base stock policy is then compared with an MRP system in terms of cost effectiveness, given a predefined target customer service level. Another extension concerns an at first glance rather different problem: it is shown that the problem of setting safety leadtimes in a multi-stage production-to-order system with stochastic lead times leads to decomposition structures similar to those derived for multi-stage inventory systems. Finally, a discussion of possible extensions to capacitated models, models with uncertainty in both demand and production lead time, as well as models with an arborescent structure concludes the paper.
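    The echelon-stock notion central to these decomposition results can be illustrated for a serial system: the echelon stock of a stage is its own installation stock plus everything downstream, and an echelon base stock policy orders the stage up to a target level. A minimal sketch with hypothetical names; real implementations would also count pipeline inventory and backorders:

```python
def echelon_stock(installation_stock, stage):
    """Echelon stock of a stage in a serial system.

    installation_stock[i] is the on-hand stock at stage i, with
    stage 0 the most downstream. The echelon stock of a stage is
    its own stock plus that of all downstream stages.
    """
    return sum(installation_stock[: stage + 1])


def order_quantity(echelon_position, base_stock_level):
    """Echelon base stock policy: order up to the target level."""
    return max(0, base_stock_level - echelon_position)
```

    With echelon costs transformed as in the paper, each stage's order-up-to level can be set by solving a one-dimensional problem, which is what makes the exact decomposition computationally attractive.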