
    The s-monotone index selection rules for pivot algorithms of linear programming

    In this paper we introduce the concept of s-monotone index selection rules for linear programming problems. We show that several known anti-cycling pivot rules, such as the minimal-index, Last-In-First-Out, and most-often-selected-variable rules, are s-monotone index selection rules, and we show a possible way to define new s-monotone pivot rules. We prove that several known algorithms, including the primal (dual) simplex, the MBU-simplex, and the criss-cross algorithm, are finite when used with s-monotone pivot rules. We implemented the primal simplex and primal MBU-simplex algorithms in MATLAB using three s-monotone index selection rules: the minimal-index, the Last-In-First-Out (LIFO), and the Most-Often-Selected-Variable (MOSV) rules. Numerical results demonstrate the viability of these s-monotone index selection rules in the framework of pivot algorithms.
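    To make the role of an anti-cycling index selection rule concrete, here is a minimal Python sketch of the tableau simplex method with the minimal-index (Bland's) rule, one of the s-monotone rules named above. It assumes a problem of the form max c^T x subject to Ax <= b, x >= 0 with b >= 0, so that the slack basis is feasible; it is an illustration of the rule, not the authors' MATLAB implementation.

```python
import numpy as np

def simplex_bland(c, A, b, tol=1e-9):
    """Tableau simplex with the minimal-index (Bland's) anti-cycling rule."""
    m, n = A.shape
    # Tableau with slack variables appended; the basis starts as the slacks.
    T = np.hstack([A, np.eye(m), b.reshape(-1, 1)])
    z = np.hstack([-c, np.zeros(m + 1)])  # objective row (maximization)
    basis = list(range(n, n + m))
    while True:
        # Minimal-index rule: entering variable = smallest index with a
        # negative reduced cost (this prevents cycling under degeneracy).
        entering = next((j for j in range(n + m) if z[j] < -tol), None)
        if entering is None:
            break  # optimal
        rows = [i for i in range(m) if T[i, entering] > tol]
        if not rows:
            raise ValueError("LP is unbounded")
        # Ratio test; ties broken by the smallest basic-variable index.
        leaving = min(rows, key=lambda i: (T[i, -1] / T[i, entering], basis[i]))
        # Pivot on (leaving, entering).
        T[leaving] /= T[leaving, entering]
        for i in range(m):
            if i != leaving:
                T[i] -= T[i, entering] * T[leaving]
        z -= z[entering] * T[leaving]
        basis[leaving] = entering
    x = np.zeros(n + m)
    for i, bi in enumerate(basis):
        x[bi] = T[i, -1]
    return x[:n], z[-1]

# Tiny example: max x1 + x2  s.t.  x1 + 2*x2 <= 4,  3*x1 + x2 <= 6.
x, opt = simplex_bland(np.array([1.0, 1.0]),
                       np.array([[1.0, 2.0], [3.0, 1.0]]),
                       np.array([4.0, 6.0]))
print(x, opt)  # x = (1.6, 1.2), optimum 2.8
```

    Swapping the entering-variable line for a LIFO or MOSV selection among the improving candidates yields the other two rules studied in the paper.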

    Variable selection for model-based clustering using the integrated complete-data likelihood

    Variable selection in cluster analysis is important yet challenging. It can be achieved by regularization methods, which realize a trade-off between clustering accuracy and the number of selected variables by using a lasso-type penalty; however, the calibration of the penalty term is open to criticism. Model selection methods are an efficient alternative, yet they require optimizing an information criterion over a combinatorial model space, which is difficult. First, most of these optimization algorithms are based on a suboptimal procedure (e.g., a stepwise method). Second, the algorithms are computationally costly because they need multiple calls to the EM algorithm. Here we propose a new information criterion based on the integrated complete-data likelihood. It does not require any parameter estimate, and its maximization is simple and computationally efficient. The original contribution of our approach is to perform model selection without any parameter estimation; parameter inference is then needed only for the single selected model. This approach is used for variable selection in a Gaussian mixture model under a conditional independence assumption. Numerical experiments on simulated and benchmark datasets show that the proposed method often outperforms two classical approaches to variable selection.
    Comment: submitted to Statistics and Computing
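    The paper's criterion avoids parameter estimation entirely, which a short snippet cannot reproduce faithfully. As a rough illustration of the idea it builds on, the sketch below scores candidate variable subsets with the classical ICL criterion (BIC plus a classification-entropy penalty) computed from a fitted scikit-learn Gaussian mixture; non-clustering variables are modeled by a single Gaussian so that different subsets are comparable on the full data. The function names and synthetic data are assumptions for the example, not the authors' method.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def icl_bic(gmm, X):
    # ICL on the BIC scale (lower is better): BIC plus twice the entropy
    # of the soft assignments, penalizing poorly separated clusters.
    resp = gmm.predict_proba(X)
    entropy = -np.sum(resp * np.log(np.clip(resp, 1e-12, None)))
    return gmm.bic(X) + 2.0 * entropy

def subset_score(X, subset, K=2, seed=0):
    # Clustering variables get a K-component mixture; the remaining
    # variables are modeled as a single Gaussian (no cluster structure),
    # so scores of different subsets are comparable on the full data.
    rest = [j for j in range(X.shape[1]) if j not in subset]
    gmm = GaussianMixture(n_components=K, random_state=seed).fit(X[:, subset])
    score = icl_bic(gmm, X[:, subset])
    if rest:
        score += GaussianMixture(n_components=1).fit(X[:, rest]).bic(X[:, rest])
    return score

# Two well-separated clusters in the first 2 variables, plus 3 noise variables.
rng = np.random.default_rng(0)
informative = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
X = np.hstack([informative, rng.normal(0, 1, (200, 3))])
print(subset_score(X, [0, 1, 2, 3, 4]))  # all variables
print(subset_score(X, [0, 1]))           # informative subset: typically lower
```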

    Info-Greedy sequential adaptive compressed sensing

    We present an information-theoretic framework for sequential adaptive compressed sensing, Info-Greedy Sensing, where measurements are chosen to maximize the extracted information conditioned on the previous measurements. We show that the widely used bisection approach is Info-Greedy for a family of k-sparse signals by connecting compressed sensing with the black-box complexity of sequential query algorithms, and we present Info-Greedy algorithms for Gaussian and Gaussian Mixture Model (GMM) signals, as well as ways to design sparse Info-Greedy measurements. Numerical examples demonstrate the good performance of the proposed algorithms on simulated and real data: Info-Greedy Sensing shows significant improvement over random projection for signals with sparse and low-rank covariance matrices, and adaptivity brings robustness when there is a mismatch between the assumed and true distributions.
    Comment: Preliminary results presented at the Allerton Conference 2014. To appear in IEEE Journal of Selected Topics in Signal Processing
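    For a Gaussian signal x ~ N(mu, Sigma) observed through noisy linear measurements y = a^T x + w, choosing each unit-norm measurement to maximize the extracted information amounts to measuring along the leading eigenvector of the current posterior covariance, followed by a standard Gaussian conditioning update. The minimal sketch below illustrates that loop; the variable names, noise model, and demo data are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def info_greedy_gaussian(mu, Sigma, measure, num_meas, noise_var):
    """Sequentially measure a Gaussian signal along directions of maximal
    posterior uncertainty, updating the posterior after each measurement."""
    mu, Sigma = mu.copy(), Sigma.copy()
    for _ in range(num_meas):
        # Direction of maximal posterior uncertainty = top eigenvector.
        _, eigvecs = np.linalg.eigh(Sigma)
        a = eigvecs[:, -1]
        y = measure(a)                 # scalar measurement a^T x + noise
        s = a @ Sigma @ a + noise_var  # innovation variance
        gain = Sigma @ a / s
        mu = mu + gain * (y - a @ mu)  # Gaussian posterior mean update
        Sigma = Sigma - np.outer(gain, a @ Sigma)  # covariance shrinks
    return mu, Sigma

# Demo: recover a correlated Gaussian signal from 5 adaptive measurements.
rng = np.random.default_rng(1)
n, noise_var = 10, 0.01
L = rng.normal(size=(n, 3))           # low-rank-plus-noise covariance
Sigma0 = L @ L.T + 0.05 * np.eye(n)
x_true = rng.multivariate_normal(np.zeros(n), Sigma0)
measure = lambda a: a @ x_true + rng.normal(0, np.sqrt(noise_var))
mu_hat, _ = info_greedy_gaussian(np.zeros(n), Sigma0, measure, 5, noise_var)
print(np.linalg.norm(mu_hat - x_true) / np.linalg.norm(x_true))
```

    Because the covariance update removes variance along each measured direction, low-rank covariances are recovered in few measurements, consistent with the improvement over random projections reported above.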