14,412 research outputs found

    Compressive Classification

    This paper derives fundamental limits associated with compressive classification of Gaussian mixture source models. In particular, we offer an asymptotic characterization of the behavior of the (upper bound to the) misclassification probability associated with the optimal Maximum-A-Posteriori (MAP) classifier that depends on quantities that are dual to the concepts of diversity gain and coding gain in multi-antenna communications. The diversity, which is shown to determine the rate at which the probability of misclassification decays in the low-noise regime, depends on the geometry of the source, the geometry of the measurement system and their interplay. The measurement gain, which represents the counterpart of the coding gain, is also shown to depend on geometrical quantities. It is argued that the diversity order and the measurement gain also offer an optimization criterion to perform dictionary learning for compressive classification applications. Comment: 5 pages, 3 figures, submitted to the 2013 IEEE International Symposium on Information Theory (ISIT 2013)
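
    The setup described in the abstract can be mimicked numerically. Below is a minimal sketch, assuming small illustrative dimensions, synthetic class parameters, and a noise level that are not taken from the paper: a MAP classifier applied to noisy compressive measurements of a Gaussian mixture source, with a Monte-Carlo estimate of the misclassification probability.

```python
# Hedged sketch: a toy Maximum-A-Posteriori (MAP) classifier for a Gaussian
# mixture source observed through a random compressive measurement matrix.
# All dimensions, class parameters and the noise level are illustrative
# assumptions, not values from the paper.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

n, m, num_classes = 20, 8, 3                      # ambient dim, measurement dim, classes
Phi = rng.standard_normal((m, n)) / np.sqrt(m)    # compressive measurement matrix
sigma2 = 1e-2                                     # measurement noise variance

# Class-conditional Gaussian parameters (low-rank covariances mimic
# structured sources whose geometry drives the diversity order).
means = [rng.standard_normal(n) for _ in range(num_classes)]
covs = []
for _ in range(num_classes):
    U = rng.standard_normal((n, 4))               # rank-4 source covariance factor
    covs.append(U @ U.T + 1e-6 * np.eye(n))
priors = np.full(num_classes, 1.0 / num_classes)

def map_classify(y):
    """Pick the class maximizing log prior + log likelihood of y = Phi x + noise."""
    scores = []
    for mu, Sigma, p in zip(means, covs, priors):
        mean_y = Phi @ mu
        cov_y = Phi @ Sigma @ Phi.T + sigma2 * np.eye(m)
        scores.append(np.log(p) + multivariate_normal.logpdf(y, mean_y, cov_y))
    return int(np.argmax(scores))

# Monte-Carlo estimate of the misclassification probability.
errors, trials = 0, 2000
for _ in range(trials):
    c = rng.integers(num_classes)
    x = rng.multivariate_normal(means[c], covs[c])
    y = Phi @ x + np.sqrt(sigma2) * rng.standard_normal(m)
    errors += (map_classify(y) != c)
print("empirical misclassification probability:", errors / trials)
```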

    Machine Learning-Based Prediction of Compressive Performance in Circular Concrete Columns Confined with FRP

    This article presents a comprehensive investigation focusing on the prediction and formulation of a design equation for the compressive strength of circular concrete columns confined with Fiber Reinforced Polymer (FRP) using advanced machine learning models. Through an extensive analysis of 170 experimental data specimens, the study examines the effects of six key parameters, including concrete cylinder diameter, concrete cylinder-FRP thickness, compressive strength of concrete without FRP, initial compressive strain of concrete without FRP, elastic modulus and tensile strength of FRP, on the compressive strength of circular concrete columns confined with FRP. The predictive model and design equation of compressive strength are developed using a machine learning technique, specifically an artificial neural network (ANN) model. The results demonstrate strong correlations between the compressive strength of the circular concrete columns confined with FRP and certain factors, such as the compressive strength and compressive strain of the concrete column without FRP, the elastic modulus of FRP, and the tensile strength of FRP. The ANN model, developed specifically using Neural Designer, exhibits superior predictive accuracy compared to other constitutive models, showcasing its potential for practical implementation. The study's findings contribute valuable insights into accurately predicting the compressive performance of circular concrete columns confined with FRP, which can aid in optimizing and designing civil engineering structures for enhanced performance and efficiency.
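
    As a rough illustration of the modelling pipeline, the sketch below fits a small feed-forward network on the six listed input parameters. It uses scikit-learn's MLPRegressor as a stand-in for Neural Designer, and random placeholder data in place of the 170 experimental specimens, so it shows only the workflow, not the study's results; the feature names and network size are assumptions.

```python
# Hedged sketch: an ANN regression pipeline analogous to the one described,
# built with scikit-learn rather than Neural Designer. The data below is a
# random placeholder -- swap in the real 170-specimen dataset to reproduce
# the study's setting.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

feature_names = [
    "cylinder_diameter_mm",        # concrete cylinder diameter
    "frp_thickness_mm",            # FRP jacket thickness
    "unconfined_strength_mpa",     # compressive strength of concrete without FRP
    "unconfined_strain",           # initial compressive strain without FRP
    "frp_elastic_modulus_gpa",     # elastic modulus of FRP
    "frp_tensile_strength_mpa",    # tensile strength of FRP
]

rng = np.random.default_rng(1)
X = rng.random((170, len(feature_names)))   # placeholder for the 170 specimens
y = rng.random(170)                         # placeholder confined compressive strength

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=1),
)
model.fit(X_tr, y_tr)
print("R^2 on held-out specimens:", r2_score(y_te, model.predict(X_te)))
```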

    Plug-and-Play Methods Provably Converge with Properly Trained Denoisers

    Plug-and-play (PnP) is a non-convex framework that integrates modern denoising priors, such as BM3D or deep learning-based denoisers, into ADMM or other proximal algorithms. An advantage of PnP is that one can use pre-trained denoisers when there is not sufficient data for end-to-end training. Although PnP has recently been studied extensively with great empirical success, theoretical analysis addressing even the most basic question of convergence has been insufficient. In this paper, we theoretically establish convergence of PnP-FBS and PnP-ADMM, without using diminishing stepsizes, under a certain Lipschitz condition on the denoisers. We then propose real spectral normalization, a technique for training deep learning-based denoisers to satisfy the proposed Lipschitz condition. Finally, we present experimental results validating the theory. Comment: Published in the International Conference on Machine Learning, 2019
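
    A minimal sketch of the PnP-ADMM iteration analysed in the paper is given below, assuming a toy 1-D linear inverse problem and a simple Gaussian-blur surrogate in place of a trained, spectrally normalized denoiser; the problem size, penalty parameter and iteration count are illustrative assumptions, not the paper's setup.

```python
# Hedged sketch: the plug-and-play ADMM (PnP-ADMM) iteration for a linear
# inverse problem y = A x + noise, with the learned denoiser replaced by a
# Gaussian-blur surrogate so the snippet stays self-contained.
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(2)

n, m = 128, 64
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = gaussian_filter1d(rng.standard_normal(n), sigma=3)   # smooth ground truth
y = A @ x_true + 0.01 * rng.standard_normal(m)

rho = 1.0
# The data-fidelity proximal step has a closed form for a quadratic loss.
AtA = A.T @ A
Aty = A.T @ y
prox_solve = np.linalg.inv(AtA + rho * np.eye(n))

def denoise(v):
    """Stand-in for a trained (real-spectral-normalized) denoiser."""
    return gaussian_filter1d(v, sigma=1.0)

x = np.zeros(n)
z = np.zeros(n)
u = np.zeros(n)
for _ in range(100):
    x = prox_solve @ (Aty + rho * (z - u))   # proximal step on the data term
    z = denoise(x + u)                       # plug-and-play prior (denoising) step
    u = u + x - z                            # dual update
print("relative reconstruction error:",
      np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```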

    Dimension-adaptive bounds on compressive FLD Classification

    Efficient dimensionality reduction by random projections (RP) is gaining popularity, hence the learning guarantees achievable in RP spaces are of great interest. In the finite-dimensional setting, it has been shown for the compressive Fisher Linear Discriminant (FLD) classifier that, for good generalisation, the required target dimension grows only as the log of the number of classes and is not adversely affected by the number of projected data points. However, these bounds depend on the dimensionality d of the original data space. In this paper we give further guarantees that remove d from the bounds under certain conditions of regularity on the data density structure. In particular, if the data density does not fill the ambient space then the error of compressive FLD is independent of the ambient dimension and depends only on a notion of 'intrinsic dimension'.
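
    The compressive FLD pipeline itself is short to write down. The sketch below, using an assumed synthetic dataset and an arbitrary target dimension k (neither taken from the paper), projects the data with a random Gaussian matrix and fits scikit-learn's LinearDiscriminantAnalysis in the projected space.

```python
# Hedged sketch: compressive Fisher Linear Discriminant classification --
# randomly project high-dimensional data to k dimensions, then fit FLD (LDA)
# in the projected space. Dataset, d, k and class count are illustrative.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

d, k, num_classes, per_class = 500, 20, 4, 200    # ambient dim, target dim
# Synthetic Gaussian classes in R^d standing in for structured data.
means = rng.standard_normal((num_classes, d)) * 3
X = np.vstack([mu + rng.standard_normal((per_class, d)) for mu in means])
y = np.repeat(np.arange(num_classes), per_class)

R = rng.standard_normal((d, k)) / np.sqrt(k)      # random projection matrix
X_rp = X @ R                                      # compressive (projected) data

X_tr, X_te, y_tr, y_te = train_test_split(X_rp, y, test_size=0.3, random_state=3)
clf = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
print("compressive FLD accuracy:", clf.score(X_te, y_te))
```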