
    On-line Non-stationary Inventory Control using Champion Competition

    The commonly adopted assumption of stationary demand fails to reflect fluctuating demand and weakens solution effectiveness in practice. We consider an On-line Non-stationary Inventory Control Problem (ONICP), in which no specific assumption is imposed on demands: their probability distributions are allowed to vary over periods and to be correlated with one another. Non-stationary demand invalidates the optimality of static (s,S) policies and the applicability of the corresponding algorithms. The ONICP is computationally intractable for general Simulation-based Optimization (SO) methods, especially in an on-line decision-making environment where neither the time nor the computing resources are available to absorb the heavy computational burden. We develop a new SO method, termed "Champion Competition" (CC), which provides a different framework and bypasses the time-consuming sample-average routine adopted by general SO methods. An alternative type of optimal solution, termed the "Champion Solution", is pursued within the CC framework; it coincides with the traditional notion of optimality under certain conditions and serves as a near-optimal solution in general. CC reduces the complexity of general SO methods by orders of magnitude for a class of SO problems that includes the ONICP. A polynomial-time algorithm, termed the "Renewal Cycle Algorithm" (RCA), is further developed to carry out a key procedure of the CC framework when solving the ONICP. Numerical examples demonstrate the performance of the CC framework with the RCA embedded.
    Comment: I just identified a flaw in the paper. It may take me some time to fix it. I would like to withdraw the article and update it once I have finished. Thank you for your kind support.
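    For context, the sketch below (in Python) illustrates the sample-average routine that the CC framework is designed to bypass: each candidate static (s,S) policy is scored by averaging simulated costs over many replications, and a search over policies repeats that expensive evaluation. All parameters — the cost coefficients, the drifting Poisson demand, and the zero lead time — are illustrative assumptions, not details from the paper, and this is not the CC or RCA method itself.

        # Minimal sketch (not the paper's CC/RCA): scoring a static (s, S) policy
        # with the sample-average routine used by general SO methods.
        # Costs, demand process, and zero lead time are illustrative assumptions.
        import numpy as np

        rng = np.random.default_rng(0)

        def simulate_cost(s, S, n_periods=52, n_replications=50,
                          holding=1.0, shortage=5.0, order_cost=10.0):
            """Average per-period cost of a static (s, S) policy."""
            costs = np.zeros(n_replications)
            for r in range(n_replications):
                inventory, total = S, 0.0
                for t in range(n_periods):
                    if inventory <= s:          # review, then order up to S
                        total += order_cost
                        inventory = S
                    # Non-stationary demand: the Poisson rate drifts over
                    # periods, which breaks the optimality of static (s, S).
                    demand = rng.poisson(lam=5.0 + 3.0 * np.sin(2 * np.pi * t / 26))
                    inventory -= demand
                    total += holding * max(inventory, 0) + shortage * max(-inventory, 0)
                costs[r] = total / n_periods
            return costs.mean()

        # Brute-force search over (s, S) pairs: every evaluation needs many
        # replications, which is the burden the CC framework aims to avoid.
        best = min(((s, S) for s in range(0, 10) for S in range(10, 25)),
                   key=lambda p: simulate_cost(*p))
        print("best static (s, S) found:", best)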

    Random observations on random observations: Sparse signal acquisition and processing

    In recent years, signal processing has come under mounting pressure to accommodate the increasingly high-dimensional raw data generated by modern sensing systems. Despite extraordinary advances in computational power, processing the signals produced in application areas such as imaging, video, remote surveillance, spectroscopy, and genomic data analysis continues to pose a tremendous challenge. Fortunately, in many cases these high-dimensional signals contain relatively little information compared to their ambient dimensionality. For example, signals can often be well-approximated as a sparse linear combination of elements from a known basis or dictionary. Traditionally, sparse models have been exploited only after acquisition, typically for tasks such as compression. Recently, however, the applications of sparsity have greatly expanded with the emergence of compressive sensing, a new approach to data acquisition that directly exploits sparsity in order to acquire analog signals more efficiently via a small set of more general, often randomized, linear measurements. If properly chosen, the number of measurements can be much smaller than the number of Nyquist-rate samples. A common theme in this research is the use of randomness in signal acquisition, inspiring the design of hardware systems that directly implement random measurement protocols. This thesis builds on the field of compressive sensing and illustrates how sparsity can be exploited to design efficient signal processing algorithms at all stages of the information processing pipeline, with a particular focus on the manner in which randomness can be exploited to design new kinds of acquisition systems for sparse signals. Our key contributions include: (i) exploration and analysis of the appropriate properties for a sparse signal acquisition system; (ii) insight into the useful properties of random measurement schemes; (iii) analysis of an important family of algorithms for recovering sparse signals from random measurements; (iv) exploration of the impact of noise, both structured and unstructured, in the context of random measurements; and (v) algorithms that process random measurements to directly extract higher-level information or solve inference problems without resorting to full-scale signal recovery, reducing both the cost of signal acquisition and the complexity of the post-acquisition processing.
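    To make the measurement model concrete, the following minimal sketch draws a random Gaussian measurement matrix, takes m << n linear measurements of a k-sparse signal, and recovers it greedily with orthogonal matching pursuit (OMP). The dimensions, sparsity level, and choice of OMP are illustrative assumptions, not specifics from the thesis.

        # Minimal compressive-sensing sketch: recover a k-sparse signal from
        # m << n random Gaussian measurements via orthogonal matching pursuit.
        import numpy as np

        rng = np.random.default_rng(1)
        n, m, k = 256, 64, 8                  # ambient dim, measurements, sparsity

        x = np.zeros(n)                       # k-sparse signal, canonical basis
        support = rng.choice(n, size=k, replace=False)
        x[support] = rng.standard_normal(k)

        Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # random measurement matrix
        y = Phi @ x                           # compressive measurements

        def omp(Phi, y, k):
            """Greedy OMP: repeatedly pick the column most correlated with
            the residual, then re-fit coefficients on the selected columns."""
            residual, idx = y.copy(), []
            for _ in range(k):
                idx.append(int(np.argmax(np.abs(Phi.T @ residual))))
                coef, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)
                residual = y - Phi[:, idx] @ coef
            x_hat = np.zeros(Phi.shape[1])
            x_hat[idx] = coef
            return x_hat

        x_hat = omp(Phi, y, k)
        print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))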

    Compressed Sensing in Multi-Signal Environments.

    Technological advances and the ability to build cheap, high-performance sensors make it possible to deploy tens or even hundreds of sensors to acquire information about a common phenomenon of interest. The increasing number of sensors allows us to acquire more detailed information about the underlying scene than was possible before. This, however, directly translates into increasing amounts of data that need to be acquired, transmitted, and processed. The amount of data can be overwhelming, especially in applications that involve high-resolution signals such as images or videos. Compressed sensing (CS) is a novel acquisition and reconstruction scheme that is particularly useful in scenarios where high-resolution signals are difficult or expensive to encode. When applying CS in a multi-signal scenario, several aspects need to be considered, such as the sensing matrix, the joint signal model, and the reconstruction algorithm. The purpose of this dissertation is to provide a complete treatment of these aspects in various multi-signal environments. Specific applications include video, multi-view imaging, and structural health monitoring systems. For each application, we propose a novel joint signal model that accurately captures the joint signal structure, and we tailor the reconstruction algorithm to each signal model to successfully recover the signals of interest.
    Ph.D. dissertation, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies.
    http://deepblue.lib.umich.edu/bitstream/2027.42/98007/1/jaeypark_1.pd
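    As a generic illustration of what a joint signal model buys in a multi-signal setting, the sketch below uses a standard multiple-measurement-vector setup: several sensors observe signals sharing one sparse support, and a simultaneous OMP step selects atoms using correlations aggregated across all sensors. This is a textbook joint-sparsity example, not the dissertation's application-specific models; all dimensions are illustrative.

        # Generic joint-sparsity sketch (not the dissertation's models): J
        # sensors share a sparse support; simultaneous OMP recovers them jointly.
        import numpy as np

        rng = np.random.default_rng(2)
        n, m, k, J = 128, 32, 5, 8            # dim, measurements, sparsity, sensors

        support = rng.choice(n, size=k, replace=False)
        X = np.zeros((n, J))
        X[support] = rng.standard_normal((k, J))  # shared support, distinct coefs

        Phi = rng.standard_normal((m, n)) / np.sqrt(m)
        Y = Phi @ X                           # one measurement vector per sensor

        def somp(Phi, Y, k):
            """Simultaneous OMP: rank atoms by residual correlation energy
            summed over sensors, exploiting the shared support."""
            residual, idx = Y.copy(), []
            for _ in range(k):
                scores = np.sum((Phi.T @ residual) ** 2, axis=1)
                idx.append(int(np.argmax(scores)))
                coef, *_ = np.linalg.lstsq(Phi[:, idx], Y, rcond=None)
                residual = Y - Phi[:, idx] @ coef
            X_hat = np.zeros((Phi.shape[1], Y.shape[1]))
            X_hat[idx] = coef
            return X_hat

        X_hat = somp(Phi, Y, k)
        print("relative error:", np.linalg.norm(X - X_hat) / np.linalg.norm(X))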