
    Topics in Mixed Integer Nonlinear Optimization

    Mixed integer nonlinear optimization has many applications, ranging from machine learning to power systems. These problems are very challenging to solve to global optimality because of their inherent non-convexity, which typically makes them NP-hard. Moreover, in many applications there are time and resource limits on the solution process, and the sheer size of real instances makes solving them harder still. In this thesis we focus on important elements of nonconvex optimization, including mixed integer linear programming and nonlinear programming, and present both theoretical analyses and computational experiments.

    In the first chapter we study Mixed Integer Quadratic Programming (MIQP), the problem of minimizing a convex quadratic function over mixed integer points in a rational polyhedron. We use the augmented Lagrangian dual (ALD), which augments the usual Lagrangian dual with a weighted nonlinear penalty on the dualized constraints. We first prove that, under mild conditions on the penalty function, the ALD reaches a zero duality gap asymptotically as the penalty weight goes to infinity. We next show that a finite penalty weight suffices for a zero gap when any norm is used as the penalty function. Finally, we prove a polynomial bound on the penalty weight needed to obtain a zero gap.

    In the second chapter we apply the technique of lifting to bilinear programming, a special case of quadratically constrained quadratic programming. We first show that, for sets described by one bilinear constraint together with bounds, it is always possible to sequentially lift a seed inequality. To reduce the computational burden, we develop a framework based on subadditive approximations of lifting functions that permits sequence-independent lifting of seed inequalities for separable bilinear sets. We then study a separable bilinear set whose coefficients form a minimal cover with respect to the right-hand side. For this set, we introduce a bilinear cover inequality, which is second-order cone representable. We study the lifting function of the bilinear cover inequality and lift fixed variable pairs in closed form, thereby deriving a lifted bilinear cover inequality that is valid for general separable bilinear sets with box constraints.

    In the third chapter we continue our study of separable bilinear programming. We first prove that the semidefinite programming relaxation provides no benefit over the McCormick relaxation for separable bilinear optimization problems. We then design a simple randomized separation heuristic for lifted bilinear cover inequalities. In our computational experiments, we separate many rounds of these inequalities starting from the McCormick relaxation of instances in which each constraint is a separable bilinear constraint set. Our main finding is that adding these inequalities at the root node significantly improves the gap closed by a state-of-the-art global solver compared to not adding them.

    In the fourth chapter we look at Mixed Integer Linear Programs (MILPs) that arise in operational applications. Many routinely solved MILPs are extremely challenging not only from a worst-case complexity perspective, but also because good solutions must be obtained within limited time. An example is the Security-Constrained Unit Commitment (SCUC) problem, solved daily to clear the day-ahead electricity markets. We develop ML-based methods for improving branch-and-bound variable selection rules that exploit key features of such operational problems: similar decisions are made within the same day and across different days, on the same power network. Exploiting similarity between instances and within an instance, we build one ML model per variable, or per group of similar variables, to learn to predict the strong branching score. The approach produces branch-and-bound trees whose gap closed is only slightly worse than that of trees obtained by strong branching, while outperforming previous machine learning schemes. (Ph.D. thesis.)
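    As a generic illustration of the augmented Lagrangian dual construction described in the first chapter (the notation below is illustrative and is not taken from the thesis), consider an MIQP written as

        z^{*} = \min \left\{ \tfrac{1}{2} x^{\top} Q x + c^{\top} x \;:\; A x = b,\; x \in X \right\},

    where $X$ collects the mixed-integer and polyhedral restrictions. With penalty function $\psi$ and weight $\rho \ge 0$, the augmented Lagrangian dual is

        z_{\mathrm{ALD}}(\rho) = \sup_{\lambda} \; \min_{x \in X} \; \tfrac{1}{2} x^{\top} Q x + c^{\top} x + \lambda^{\top} (b - A x) + \rho\, \psi(b - A x),

    and the chapter's results describe when $z_{\mathrm{ALD}}(\rho)$ reaches $z^{*}$ as $\rho \to \infty$, and when a finite, polynomially bounded $\rho$ already suffices once $\psi$ is a norm.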
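    The fourth chapter's "one model per variable (or per group of similar variables)" idea can be sketched roughly as follows; the data structures, feature choices, and the random forest regressor below are illustrative placeholders, not the models actually used in the thesis.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        def train_per_variable_models(features_by_var, scores_by_var):
            """Fit one regressor per variable (or per group of similar variables).
            features_by_var[v]: node features collected from past, similar instances;
            scores_by_var[v]: strong branching scores observed for v at those nodes."""
            models = {}
            for v, feats in features_by_var.items():
                model = RandomForestRegressor(n_estimators=100, random_state=0)
                model.fit(feats, scores_by_var[v])
                models[v] = model
            return models

        def select_branching_variable(models, candidate_features):
            """At a node, branch on the fractional candidate with the highest predicted score."""
            predictions = {v: float(models[v].predict(np.asarray(feats).reshape(1, -1))[0])
                           for v, feats in candidate_features.items() if v in models}
            return max(predictions, key=predictions.get)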

    All-condition pulse detection using a magnetic sensor

    A plethora of wearable devices have been developed or commercialized for continuous non-invasive monitoring of physiological signals that are crucial for preventive care and for managing chronic conditions. However, most of these devices are sensitive to skin conditions, or to their interface with the skin, because the external stimulus, such as light or electrical excitation, must penetrate the skin to detect the pulse. This often results in large motion artefacts and makes the devices unsuitable for certain skin conditions. Here, we demonstrate a simple fingertip-type device which can detect clear pulse signals under all conditions, including fingers covered by opaque substances such as a plaster or nail polish, and fingers immersed in liquid. The device has a very simple structure, consisting of only a pair of magnets and a magnetic sensor. We show through both experiments and simulations that the detected pulsation signals correspond directly to the magnet vibrations caused by blood circulation; therefore, in addition to heart-rate detection, the proposed device can potentially also be used for blood pressure measurement.
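    As a generic illustration of the heart-rate detection mentioned above (this post-processing step is not described in the paper; the sampling rate and minimum peak spacing below are assumptions), a pulsation waveform can be turned into a beats-per-minute estimate with simple peak detection:

        import numpy as np
        from scipy.signal import find_peaks

        def heart_rate_bpm(pulse_signal, fs):
            """Estimate heart rate in beats per minute from a sampled pulsation waveform."""
            # require ~0.4 s between peaks, i.e. reject rates above roughly 150 bpm
            peaks, _ = find_peaks(pulse_signal, distance=int(0.4 * fs))
            if len(peaks) < 2:
                return float("nan")
            beat_intervals = np.diff(peaks) / fs           # seconds between successive beats
            return 60.0 / float(np.mean(beat_intervals))   # beats per minute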

    A note on practical approximate projection schemes in signal space methods

    Compressive sensing (CS) is a new technology that allows the acquisition of signals directly in compressed form, using far fewer measurements than traditional theory dictates. Recently, many so-called signal space methods have been developed to extend this body of work to signals that are sparse in arbitrary dictionaries rather than orthonormal bases, so that CS can be utilized in a much broader array of practical settings. Such approaches often rely on the ability to optimally project a signal onto a small number of dictionary atoms. These optimal, or even approximate, projections have been difficult to derive theoretically. Nonetheless, it has been observed experimentally that conventional CS approaches can be used for such projections and still provide accurate signal recovery. In this letter, we summarize the empirical evidence and clearly demonstrate for which signal types certain CS methods may be used as approximate projections. In addition, we provide theoretical guarantees for such methods for certain sparse signal structures. Our theoretical results match those observed in experimental studies, and we thus establish both experimentally and theoretically that these CS methods can be used in this context.
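    To make the notion of an approximate projection concrete: a conventional greedy CS routine such as orthogonal matching pursuit can be used to approximately project a signal onto a few dictionary atoms. The sketch below is a generic numpy implementation for illustration only; the specific methods and guarantees analyzed in the letter are not reproduced here.

        import numpy as np

        def approx_projection_omp(x, D, k):
            """Greedily approximate the projection of x onto the span of at most
            k atoms of the dictionary D (columns assumed to have unit norm)."""
            residual = x.copy()
            support = []
            coeffs = np.zeros(0)
            for _ in range(k):
                # pick the atom most correlated with the current residual
                j = int(np.argmax(np.abs(D.T @ residual)))
                if j not in support:
                    support.append(j)
                # least-squares fit on the selected atoms, then update the residual
                coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
                residual = x - D[:, support] @ coeffs
            return D[:, support] @ coeffs, support

        # toy usage with a random dictionary and a signal built from 3 atoms
        rng = np.random.default_rng(0)
        D = rng.standard_normal((64, 128))
        D /= np.linalg.norm(D, axis=0)
        x = D[:, [3, 40, 77]] @ np.array([1.0, -2.0, 0.5])
        proj, atoms = approx_projection_omp(x, D, 3)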

    Methods for Quantized Compressed Sensing

    In this paper, we compare and catalog the performance of various greedy quantized compressed sensing algorithms that reconstruct sparse signals from quantized compressed measurements. We also introduce two new greedy approaches for reconstruction: Quantized Compressed Sampling Matching Pursuit (QCoSaMP) and Adaptive Outlier Pursuit for Quantized Iterative Hard Thresholding (AOP-QIHT). We compare the performance of these algorithms for a given bit depth, sparsity, and noise level.
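    For context only, the sketch below sets up the quantized compressed sensing problem with a uniform quantizer and runs plain iterative hard thresholding on the quantized measurements; it is not an implementation of the paper's QCoSaMP or AOP-QIHT algorithms, and the dimensions, step size, and quantization step are arbitrary choices.

        import numpy as np

        def hard_threshold(v, s):
            # keep the s largest-magnitude entries of v and zero out the rest
            out = np.zeros_like(v)
            keep = np.argsort(np.abs(v))[-s:]
            out[keep] = v[keep]
            return out

        def iht(A, y, s, step=0.5, iters=300):
            """Plain iterative hard thresholding, here run on quantized measurements."""
            x = np.zeros(A.shape[1])
            for _ in range(iters):
                x = hard_threshold(x + step * A.T @ (y - A @ x), s)
            return x

        # toy setup: Gaussian sensing matrix, sparse signal, uniform quantizer
        rng = np.random.default_rng(0)
        n, m, s = 256, 128, 8
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        x_true = np.zeros(n)
        x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
        delta = 0.05                                  # quantization step (a proxy for bit depth)
        y_q = delta * np.round(A @ x_true / delta)    # uniformly quantized measurements
        x_hat = iht(A, y_q, s)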

    Optimizing quantization for Lasso recovery

    This letter focuses on quantized compressed sensing, assuming that Lasso is used for signal estimation. Leveraging recent work, we provide a framework to optimize the quantization function, and we show that the recovered signal converges to the actual signal at a quadratic rate as a function of the quantization level. We show that when the number of observations is high, this method of quantization gives a significantly better recovery rate than standard Lloyd-Max quantization. We support our theoretical analysis with numerical simulations.
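    As a minimal illustration of the quantize-then-Lasso pipeline studied in the letter (with a plain uniform quantizer standing in for the optimized quantization function, and arbitrary problem sizes and regularization), one could do the following:

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(0)
        n, m, s = 200, 80, 5
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        x_true = np.zeros(n)
        x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)

        # uniform quantizer as a placeholder for the optimized quantization function
        delta = 0.05
        y_q = delta * np.round(A @ x_true / delta)

        # Lasso estimate computed from the quantized measurements
        lasso = Lasso(alpha=0.01, fit_intercept=False, max_iter=10000)
        lasso.fit(A, y_q)
        x_hat = lasso.coef_
        print("relative recovery error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))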