
    Regularized linear system identification using atomic, nuclear and kernel-based norms: the role of the stability constraint

    Inspired by ideas taken from the machine learning literature, new regularization techniques have recently been introduced in linear system identification. In particular, all the adopted estimators solve a regularized least squares problem, differing in the nature of the penalty term assigned to the impulse response. Popular choices include atomic and nuclear norms (applied to Hankel matrices) as well as norms induced by the so-called stable spline kernels. In this paper, a comparative study of estimators based on these different types of regularizers is reported. Our findings reveal that stable spline kernels outperform approaches based on atomic and nuclear norms since they suitably embed information on impulse response stability and smoothness. This point is illustrated using the Bayesian interpretation of regularization. We also design a new class of regularizers defined by "integral" versions of stable spline/TC kernels. Under quite realistic experimental conditions, the new estimators outperform classical prediction error methods even when the latter are equipped with an oracle for model order selection.
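
    To make the regularized least squares setting above concrete, the following is a minimal sketch of kernel-regularized FIR estimation with a first-order stable spline (TC) kernel. The data, FIR order, and hyperparameter values are illustrative assumptions, not taken from the paper.

```python
# Sketch of kernel-regularized impulse-response estimation with a TC
# (first-order stable spline) kernel. All data and hyperparameters below
# are toy assumptions for illustration only.
import numpy as np

def tc_kernel(n, alpha):
    """TC / first-order stable spline kernel: K[i, j] = alpha**max(i, j)."""
    idx = np.arange(1, n + 1)
    return alpha ** np.maximum.outer(idx, idx)

def regularized_fir(u, y, n, alpha=0.9, gamma=1.0):
    """Regularized estimate g_hat = K Phi' (Phi K Phi' + gamma I)^-1 y."""
    N = len(y)
    # Toeplitz regression matrix: row t holds u[t], u[t-1], ..., u[t-n+1] (zero-padded).
    Phi = np.zeros((N, n))
    for t in range(N):
        for k in range(n):
            if t - k >= 0:
                Phi[t, k] = u[t - k]
    K = tc_kernel(n, alpha)
    G = Phi @ K @ Phi.T + gamma * np.eye(N)
    return K @ Phi.T @ np.linalg.solve(G, y)

# Toy usage: identify a decaying impulse response from noisy input/output data.
rng = np.random.default_rng(0)
g_true = 0.8 ** np.arange(1, 31)
u = rng.standard_normal(200)
y = np.convolve(u, g_true)[:200] + 0.1 * rng.standard_normal(200)
g_hat = regularized_fir(u, y, n=30)
```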

    Iterative Reconstrained Low-rank Representation via Weighted Nonconvex Regularizer

    Benefiting from the joint consideration of geometric structure and a low-rank constraint, the graph low-rank representation (GLRR) method has achieved state-of-the-art results in many applications. However, it suffers from several limitations: the structure of the errors must be known a priori, the graph Laplacian matrix is constructed in isolation, and the leading rank components are over-shrunk. To improve GLRR in these regards, this paper proposes a new LRR model, namely iterative reconstrained LRR via weighted nonconvex regularization (IRWNR), which imposes three distinctive properties on the representation matrix. The first characterizes various error distributions through an adaptively learned weight factor, giving more flexibility in noise suppression. The second generates an accurate graph matrix from weighted observations so that it is less afflicted by noisy features. The third employs a parameterized rational function to reveal the importance of different rank components, yielding a better approximation of the intrinsic subspace structure. Following a deep exploration of automatic thresholding, parallel updates, and partial SVD operations, we derive a computationally efficient low-rank representation algorithm using an iterative reconstrained framework and an accelerated proximal gradient method. Comprehensive experiments are conducted on synthetic data, image clustering, and background subtraction, reporting quantitative benchmarks such as clustering accuracy, normalized mutual information, and execution time. Results demonstrate the robustness and efficiency of IRWNR compared with other state-of-the-art models.
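
    To give a flavour of the weighted nonconvex shrinkage idea, here is a minimal proximal-gradient sketch of low-rank representation with reweighted singular-value thresholding. It substitutes a simple 1/(sigma + eps) weighting for the paper's parameterized rational function; all function names and parameters are illustrative assumptions rather than the IRWNR algorithm itself.

```python
# Sketch: low-rank representation via proximal gradient with a weighted
# (nonconvex) singular-value shrinkage. Simplified stand-in for IRWNR.
import numpy as np

def weighted_svt(Z, tau, eps=1e-3):
    """Shrink each singular value by tau / (sigma + eps): leading (large)
    singular values are shrunk less, small ones more."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    s_new = np.maximum(s - tau / (s + eps), 0.0)
    return U @ np.diag(s_new) @ Vt

def low_rank_representation(X, lam=0.1, n_iter=100):
    """Approximately solve min_Z 0.5*||X - X Z||_F^2 + lam * weighted rank penalty."""
    d, n = X.shape
    Z = np.zeros((n, n))
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 + 1e-12)  # 1 / Lipschitz constant
    for _ in range(n_iter):
        grad = X.T @ (X @ Z - X)          # gradient of the data-fit term
        Z = weighted_svt(Z - step * grad, step * lam)
    return Z

# Toy usage: data drawn from two independent low-dimensional subspaces.
rng = np.random.default_rng(1)
X = np.hstack([rng.standard_normal((20, 2)) @ rng.standard_normal((2, 15)),
               rng.standard_normal((20, 2)) @ rng.standard_normal((2, 15))])
Z = low_rank_representation(X)
```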

    Dynamic Network Reconstruction in Systems Biology: Methods and Algorithms

    Dynamic network reconstruction refers to a class of problems that explore causal interactions between variables operating in dynamical systems. This dissertation focuses on methods and algorithms that reconstruct or infer network topology or dynamics from observations of an unknown system. The essential challenges, compared to system identification, are imposing sparsity on the network topology and ensuring network identifiability. This work studies the following cases: heterogeneous multiple experiments, low sampling frequencies, and nonlinearity, which are generic in biology and make reconstruction problems particularly challenging. Heterogeneous data sets are measurements from multiple experiments on underlying dynamical systems that differ in their parameters while sharing a consistent network topology, a situation that is particularly common in biological applications. This dissertation proposes a way to handle multiple data sets together to increase computational robustness. Furthermore, the approach can also be used to enforce network identifiability through multiple experiments with input perturbations. The need to study low-sampling-frequency data arises from the mismatch between the network topologies of discrete-time and continuous-time models, since the underlying physical systems are generally assumed to evolve continuously in time. The important concept of system aliasing is introduced to indicate whether the continuous-time system can be uniquely determined from its associated discrete-time model at the specified sampling frequency, and a Nyquist-Shannon-like sampling theorem is provided to determine the critical sampling frequency below which system aliasing occurs. The reconstruction method integrates the Expectation Maximization (EM) method with a modified Sparse Bayesian Learning (SBL) approach to handle reconstruction from output measurements. A tentative study on nonlinear Boolean network reconstruction is also provided, in which the nonlinear Boolean network is treated as a union of local networks of linearized dynamical systems. The reconstruction method extends the algorithm used for heterogeneous data sets, providing approximate inference while significantly improving computational robustness. The reconstruction algorithms are implemented in MATLAB and wrapped as a package. By accounting for signal features that are generic in practice, this work contributes practically useful network reconstruction methods for biological applications.
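
    The system-aliasing condition can be sketched as follows: a continuous-time state matrix A_c sampled with period h yields A_d = expm(h·A_c), and A_c is recoverable from the principal matrix logarithm only when the imaginary parts of its eigenvalues stay below pi/h. The check below is an illustrative assumption of how such a test could look, not the dissertation's algorithm.

```python
# Sketch of a Nyquist-Shannon-like aliasing check for sampled linear dynamics.
import numpy as np
from scipy.linalg import expm, logm

def no_system_aliasing(A_c, h):
    """True if sampling with period h does not alias the continuous dynamics."""
    return np.max(np.abs(np.imag(np.linalg.eigvals(A_c)))) < np.pi / h

def recover_continuous(A_d, h):
    """Principal-branch reconstruction A_c = logm(A_d) / h."""
    return np.real_if_close(logm(A_d) / h)

# Toy usage: a lightly damped oscillator sampled fast enough to avoid aliasing.
A_c = np.array([[-0.1, 1.0],
                [-1.0, -0.1]])   # eigenvalues -0.1 +/- 1j
h = 0.5                          # pi / h ~ 6.28 > 1, so no aliasing
assert no_system_aliasing(A_c, h)
A_rec = recover_continuous(expm(h * A_c), h)   # recovers A_c up to numerics
```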

    Sparse Structure Learning via Information-Theoretic Regularization and Self-Contained Probabilistic Estimation

    Nowadays, there is an increasing amount of digital information constantly generated from every aspect of our life, and the data that we work with grow in both size and variety. Fortunately, most of the data have sparse structures. Compressive sensing offers an efficient framework not only to collect data but also to process and analyze them in a timely fashion. Various compressive sensing tasks eventually boil down to the sparse signal recovery problem in an under-determined linear system. To better address the challenges of "big" data using compressive sensing, this dissertation focuses on developing powerful sparse signal recovery approaches and providing theoretical analysis of their optimality and convergence. Specifically, we bring together insights from information theory and probabilistic graphical models to tackle the sparse signal recovery problem from two perspectives. (1) Sparsity-regularization approach: we propose Shannon and Rényi entropy functions constructed from the sparse signal and prove that minimizing them promotes sparsity in the recovered signal. Experiments on simulated and real data show that the two proposed entropy-function minimization methods outperform state-of-the-art lp-norm and l1-norm minimization methods. (2) Probabilistic approach: we propose the generalized approximate message passing with built-in parameter estimation (PE-GAMP) framework, present its empirical convergence analysis, and give detailed formulations to obtain the MMSE and MAP estimates of the sparse signal. Experiments on simulated and real data show that the proposed PE-GAMP is more robust, much simpler, and more widely applicable than the popular Expectation Maximization based parameter estimation method.
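
    As a small illustration of the entropy-based sparsity measure, the sketch below evaluates a Shannon entropy function of a signal's normalized magnitudes, H_p(x) = -sum_i p_i log p_i with p_i = |x_i|^p / sum_j |x_j|^p; sparser signals concentrate the mass on few entries and score lower. The exponent p and the toy signals are illustrative assumptions, not the dissertation's recovery algorithm.

```python
# Sketch: Shannon entropy function of a signal as a sparsity-promoting measure.
import numpy as np

def shannon_entropy_function(x, p=1.0, eps=1e-12):
    """Entropy-based sparsity penalty; lower values indicate a sparser x."""
    w = np.abs(x) ** p
    probs = w / (np.sum(w) + eps)
    probs = probs[probs > eps]          # treat 0 * log(0) as 0
    return -np.sum(probs * np.log(probs))

# Toy usage: a 3-sparse signal scores lower than a dense one of equal norm.
rng = np.random.default_rng(2)
x_sparse = np.zeros(100)
x_sparse[[3, 40, 77]] = [2.0, -1.0, 0.5]
x_dense = rng.standard_normal(100)
x_dense *= np.linalg.norm(x_sparse) / np.linalg.norm(x_dense)
assert shannon_entropy_function(x_sparse) < shannon_entropy_function(x_dense)
```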

    Signal structure: from manifolds to molecules and structured sparsity

    Effective representation methods and proper signal priors are crucial in most signal processing applications. In this thesis we focus on different structured models and we design appropriate schemes that allow the discovery of low dimensional latent structures that characterise and identify the signals. Motivated by the highly non-linear structure of most datasets, we first investigate the geometry of manifolds. Manifolds are low dimensional, non-linear structures that are naturally employed to describe sets of strongly related signals, such as the images of a 3-D object captured from different viewpoints. However, the use of manifolds in applications is not straightforward due to their usually non-analytic and non-linear form. We propose a way to "disassemble" a manifold into simpler components by approximating it with affine subspaces. Our objective is to discover a set of low dimensional affine subspaces that can represent manifold data accurately while preserving the manifold's structure. To this end, we employ a greedy technique that iteratively merges manifold samples into groups based on the difference of local tangents. We use our algorithm to approximate synthetic and real manifolds and demonstrate that it is competitive with state-of-the-art techniques. Then, we consider structured sparse representations of signals and propose a new sparsity model in which signals are essentially composed of a small number of structured molecules. We define the molecules to be linear combinations of a small number of atoms in a redundant dictionary. Our multi-level model takes into account the energy distribution of the significant signal components in addition to their support, and it permits defining typical visual patterns and recognising them in prototypical or deformed form. We define a new structural difference measure between molecules and their deformed versions, based on their sparse codes, and we create an algorithm for decomposing signals into molecules that accounts for different deviations in the internal molecule structure. Our experiments verify the benefits of the new image model in various restoration tasks and confirm that developing proper models that extend the mere notion of sparsity can be very useful for various inverse problems in imaging. Finally, we investigate the problem of learning molecule representations directly in the sparse code domain. We constrain sparse codes to be linear combinations of a few, possibly deformed, molecules and we design an algorithm that can learn the structure from the codes without transforming them back into the signal domain. To this end, we take advantage of our sparse-code-based structural difference and devise a scheme that represents the codes with molecules while learning the molecules at the same time. To illustrate the effectiveness of the proposed algorithm we apply it to various synthetic and real datasets and compare the results with traditional sparse coding and dictionary learning techniques. The experiments verify the superior performance of our scheme in correctly interpreting and recognising the underlying structure. In short, in this thesis we are interested in low-dimensional, structured models. Among the various choices, we focus on manifolds and sparse representations, and we propose schemes that enhance their structural properties and highlight their effectiveness in signal representations.
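
    To give a flavour of the greedy, tangent-based manifold approximation, the sketch below estimates a local tangent at each sample via PCA on its nearest neighbours and starts a new affine piece whenever the tangent has turned past an angle threshold. The neighbourhood size, tangent dimension, threshold, and the linear sweep are illustrative assumptions and a simplification of the thesis' grouping scheme.

```python
# Sketch: greedy approximation of a manifold by affine pieces, driven by the
# difference of local tangents. Simplified stand-in for the thesis' algorithm.
import numpy as np

def local_tangent(X, i, k=8, d=1):
    """d-dimensional tangent at sample i from the top PCA directions of its k neighbours."""
    dists = np.linalg.norm(X - X[i], axis=1)
    nbrs = X[np.argsort(dists)[:k]]
    _, _, Vt = np.linalg.svd(nbrs - nbrs.mean(axis=0), full_matrices=False)
    return Vt[:d].T                       # columns span the tangent subspace

def tangent_gap(T1, T2):
    """Largest principal angle (radians) between two tangent subspaces."""
    s = np.clip(np.linalg.svd(T1.T @ T2, compute_uv=False), 0.0, 1.0)
    return np.arccos(s.min())

def greedy_flats(X, angle_tol=0.4, k=8, d=1):
    """Sweep the samples; open a new affine piece once the tangent drifts past angle_tol."""
    tangents = [local_tangent(X, i, k, d) for i in range(len(X))]
    labels = np.zeros(len(X), dtype=int)
    anchor = 0                            # first sample of the current piece
    for i in range(1, len(X)):
        if tangent_gap(tangents[i], tangents[anchor]) > angle_tol:
            anchor = i                    # tangent turned too far: start a new piece
            labels[i] = labels[i - 1] + 1
        else:
            labels[i] = labels[i - 1]
    return labels

# Toy usage: samples along a circle get split into a handful of near-flat arcs.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
X = np.column_stack([np.cos(t), np.sin(t)])
labels = greedy_flats(X)
```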