
    $\ell^1$-Analysis Minimization and Generalized (Co-)Sparsity: When Does Recovery Succeed?

    This paper investigates the problem of signal estimation from undersampled noisy sub-Gaussian measurements under the assumption of a cosparse model. Based on generalized notions of sparsity, we derive novel recovery guarantees for $\ell^1$-analysis basis pursuit, enabling highly accurate predictions of its sample complexity. The corresponding bounds on the number of required measurements depend explicitly on the Gram matrix of the analysis operator and therefore account in particular for its mutual coherence structure. Our findings defy conventional wisdom, which promotes the sparsity of analysis coefficients as the crucial quantity to study. In fact, this common paradigm breaks down completely in many situations of practical interest, for instance, when applying a redundant (multilevel) frame as an analysis prior. Through extensive numerical experiments, we demonstrate that, in contrast, our theoretical sampling-rate bounds reliably capture the recovery capability of various examples, such as redundant Haar wavelet systems, total variation, or random frames. The proofs of our main results build upon recent achievements in the convex geometry of linear inverse problems. More precisely, we establish a sophisticated upper bound on the conic Gaussian mean width associated with the underlying $\ell^1$-analysis polytope. Due to a novel localization argument, it turns out that the presented framework naturally extends to stable recovery, allowing us to incorporate compressible coefficient sequences as well.
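    For reference, the convex program the abstract refers to can be stated as follows; the notation ($A$ for the sub-Gaussian measurement matrix, $\Omega$ for the analysis operator, $\eta$ for the noise level) is a standard choice of ours, not fixed by the abstract:

        % \ell^1-analysis basis pursuit (standard formulation; notation assumed):
        % given measurements y = A x_0 + e with \|e\|_2 \le \eta, estimate
        \hat{x} \in \operatorname*{arg\,min}_{x \in \mathbb{R}^n} \|\Omega x\|_1
        \quad \text{subject to} \quad \|A x - y\|_2 \le \eta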

    A Benchmark and Evaluation of Non-Rigid Structure from Motion

    Non-rigid structure from motion (NRSfM) is a long-standing and central problem in computer vision, allowing us to obtain 3D information from multiple images when the scene is dynamic. A main issue hindering the further development of this important computer vision topic is the lack of high-quality data sets. We address this issue by presenting a data set compiled for this purpose, which is made publicly available and is considerably larger than previous state-of-the-art data sets. To validate the applicability of this data set, and to provide an investigation into the state of the art of NRSfM, including potential directions forward, we present a benchmark and a scrupulous evaluation using this data set. The benchmark evaluates 16 different methods with available code, which we argue reasonably span the state of the art in NRSfM. We also hope that this public data set and evaluation will provide benchmark tools for further development in the field.
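    For background, most NRSfM methods evaluated in such benchmarks build on the classical low-rank shape-basis model of Bregler et al.; the formulation below is standard textbook material supplied by us, not a statement from the paper:

        % P points tracked over F frames; under orthographic projection the
        % centered 2F x P measurement matrix W factorizes as
        W = \begin{bmatrix} R_1 S_1 \\ \vdots \\ R_F S_F \end{bmatrix},
        \qquad S_f = \sum_{k=1}^{K} c_{f,k} B_k
        % where R_f is the 2x3 camera rotation at frame f and the nonrigid
        % shape S_f lies in the span of K basis shapes B_k.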

    Robust learning with low-dimensional structure: theory, algorithms and applications

    Master's thesis (Master of Engineering)

    ConCerNet: A Contrastive Learning Based Framework for Automated Conservation Law Discovery and Trustworthy Dynamical System Prediction

    Deep neural networks (DNNs) have shown great capacity for modeling dynamical systems; nevertheless, they usually do not obey physics constraints such as conservation laws. This paper proposes a new learning framework, named ConCerNet, to improve the trustworthiness of DNN-based dynamics modeling by endowing it with invariant properties. ConCerNet consists of two steps: (i) a contrastive learning method to automatically capture the system invariants (i.e., conservation properties) along the trajectory observations; (ii) a neural projection layer to guarantee that the learned dynamics models preserve the learned invariants. We theoretically prove the functional relationship between the learned latent representation and the unknown system invariant function. Experiments show that our method consistently outperforms baseline neural networks in both coordinate error and conservation metrics by a large margin. With neural-network-based parameterization and no dependence on prior knowledge, our method can be extended to complex and large-scale dynamics by leveraging an autoencoder.
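    To make step (ii) concrete, the sketch below shows one standard way such a projection can be realized: subtract from the predicted vector field its component along the gradient of the learned invariant, so that the invariant is constant along trajectories. The tiny networks and the names H_net, f_net, and project_dynamics are our own illustrative stand-ins, not the paper's implementation:

        import torch
        import torch.nn as nn

        # Illustrative stand-ins for the learned networks (names are ours):
        H_net = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 1))  # invariant H(x)
        f_net = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 2))  # dynamics f(x)

        def project_dynamics(f, grad_H, eps=1e-8):
            # Remove the component of f along grad_H, enforcing
            # dH/dt = <grad_H(x), f_proj(x)> = 0 along trajectories.
            coeff = (grad_H * f).sum(-1, keepdim=True) / ((grad_H ** 2).sum(-1, keepdim=True) + eps)
            return f - coeff * grad_H

        x = torch.randn(8, 2, requires_grad=True)
        grad_H = torch.autograd.grad(H_net(x).sum(), x, create_graph=True)[0]
        f_proj = project_dynamics(f_net(x), grad_H)
        print((grad_H * f_proj).sum(-1).abs().max())  # ~0: the learned invariant is preserved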

    Representative-based Big Data Processing in Communications and Machine Learning

    The present doctoral dissertation focuses on representative-based processing suitable for big sets of high-dimensional data. Compression and subset selection are considered as two main effective methods for representing a big set of data by a much smaller set of variables. Compressive sensing, matrix singular value decomposition, and tensor decomposition are employed as powerful mathematical tools to analyze the original data in terms of their representatives. Spectrum sensing is an important application of the developed theoretical analysis. In a cognitive radio network (CRN), primary users (PUs) coexist with secondary users (SUs), and the secondary network aims to characterize the PUs in order to establish communication links without any interference with the primary network. A dynamic and efficient spectrum sensing framework is studied based on advanced algebraic tools. Since collecting information from all SUs in a CRN is energy inefficient and computationally complex, a novel sensor selection algorithm based on compressed sensing theory is devised that is compatible with the algebraic nature of the spectrum sensing problem. Moreover, some state-of-the-art applications in machine learning are investigated. One of the main contributions of the present dissertation is the introduction of a versatile data selection algorithm referred to as spectrum pursuit (SP). The goal of SP is to reduce a big set of data to a small-size subset such that the linear span of the selected data is as close as possible to that of all data. SP enjoys a low-complexity procedure, which enables it to be extended to more complex selection models. The kernel spectrum pursuit (KSP) facilitates selection from a union of non-linear manifolds. This dissertation investigates a number of important applications in machine learning, including fast training of generative adversarial networks (GANs), graph-based label propagation, few-shot classification, and fast subspace clustering.
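    As a rough illustration of the span-approximation objective behind SP (not the paper's exact iterations), a greedy deflation-based selector can be sketched as follows; the function and variable names are ours:

        import numpy as np

        def greedy_span_selection(X, k, eps=1e-12):
            # X: (d, n) data matrix; pick k column indices whose linear span
            # approximates the span of all columns, by greedy deflation.
            R = X.astype(float).copy()
            selected = []
            for _ in range(k):
                norms = np.linalg.norm(R, axis=0) + eps
                scores = np.linalg.norm(R.T @ R, axis=0) / norms  # correlation of each column with all residuals
                scores[selected] = -np.inf                        # never reselect
                j = int(np.argmax(scores))
                selected.append(j)
                q = R[:, j] / np.linalg.norm(R[:, j])
                R -= np.outer(q, q @ R)                           # project the pick out of the residual
            return selected

        # usage: synthetic data lying near a 5-dimensional subspace
        rng = np.random.default_rng(0)
        X = rng.normal(size=(50, 5)) @ rng.normal(size=(5, 200))
        idx = greedy_span_selection(X, 5)
        S = X[:, idx]
        P = S @ np.linalg.pinv(S)                                 # projector onto span of the picks
        print(np.linalg.norm(X - P @ X) / np.linalg.norm(X))      # near zero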

    Evolutionary learning and global search for multi-optimal PID tuning rules

    With the advances in microprocessor technology, control systems are widely seen not only in industry but now also in household appliances and consumer electronics. Among all control schemes developed so far, Proportional plus Integral plus Derivative (PID) control is the most widely adopted in practice. Today, more than 90% of industrial controllers have a built-in PID function. Their wide application has stimulated and sustained the research and development of PID tuning techniques, patents, software packages and hardware modules. Due to parameter interaction and format variation, tuning a PID controller is not as straightforward as one might anticipate. Therefore, designing speedy tuning rules should greatly reduce the burden of new installations and ‘time-to-market’, and should also enhance the competitive advantages of the PID system under offer. A multi-objective evolutionary algorithm (MOEA) is an ideal candidate to conduct the learning and search for multi-objective PID tuning rules. A simple-to-implement MOEA, termed s-MOEA, is devised and compared with MOEAs developed elsewhere. Extensive study and analysis are performed on metrics for evaluating MOEA performance, so as to help with this comparison and development. As a result, a novel visualisation technique, termed the “Distance and Distribution” (DD) chart, is developed to overcome some of the limitations of existing metrics and visualisation techniques. The DD chart allows a user to view the comparison of multiple sets of high-order non-dominated solutions in a two-dimensional space. The capability of the DD chart is demonstrated in the comparison process, and it proves a useful tool for gathering more in-depth information about an MOEA than is possible in existing empirical studies. Truly multi-objective global PID tuning rules are then evolved by interfacing the s-MOEA with closed-loop simulations under practical constraints. These rules take into account multiple, and often conflicting, objectives such as steady-state accuracy and transient responsiveness against stability and overshoots, as well as tracking performance against load disturbance rejection. The evolved rules are compared against other tuning rules, both offline on a set of well-recognised PID benchmark test systems and online on three laboratory systems of different dynamics and transport delays. The results show that the rules significantly outperform all existing tuning rules, with multi-criterion optimality. This is made possible because the evolved rules cover a delay-to-time-constant ratio from zero to infinity for first-order plus delay plant models; for second-order plus delay plant models, they also cover all possible dynamics found in practice.
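    For readers new to the controller itself, a minimal discrete-time PID loop is sketched below; the gains and the first-order test plant are illustrative values of ours, not the evolved tuning rules from the thesis:

        def pid_step(error, state, kp, ki, kd, dt):
            # state = (integral, previous_error)
            integral, prev_error = state
            integral += error * dt
            derivative = (error - prev_error) / dt
            u = kp * error + ki * integral + kd * derivative  # PID control law
            return u, (integral, error)

        # usage: regulate a first-order plant x' = (-x + u) / tau toward setpoint 1.0
        x, tau, dt = 0.0, 2.0, 0.01
        state = (0.0, 0.0)
        for _ in range(2000):
            error = 1.0 - x
            u, state = pid_step(error, state, kp=4.0, ki=1.5, kd=0.2, dt=dt)
            x += dt * (-x + u) / tau
        print(round(x, 3))  # settles near 1.0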

    Review: Deep learning in electron microscopy

    Deep learning is transforming most areas of science and technology, including electron microscopy. This review paper offers a practical perspective aimed at developers with limited familiarity with the field. For context, we review popular applications of deep learning in electron microscopy. Following this, we discuss the hardware and software needed to get started with deep learning and to interface with electron microscopes. We then review neural network components, popular architectures, and their optimization. Finally, we discuss future directions of deep learning in electron microscopy.