438 research outputs found
Group-Feature (Sensor) Selection With Controlled Redundancy Using Neural Networks
In this paper, we present a novel embedded feature selection method based on
a Multi-layer Perceptron (MLP) network and generalize it for group-feature or
sensor selection problems, which can control the level of redundancy among the
selected features or groups. Additionally, we generalize the group lasso
penalty for feature selection to encompass a mechanism for selecting valuable
group features while simultaneously maintaining control over redundancy. We
establish the monotonicity and convergence of the proposed algorithm, with a
smoothed version of the penalty terms, under suitable assumptions. Experimental
results on several benchmark datasets demonstrate the promising performance of
the proposed methodology for both feature selection and group feature selection
compared with some state-of-the-art methods.
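As a rough illustration of the kind of penalty involved (a minimal sketch, not the paper's exact formulation: the correlation-weighted redundancy term and all names below are illustrative assumptions, and PyTorch is assumed), feature selection with an MLP can be driven by penalizing the first-layer weight columns:

```python
import torch
import torch.nn as nn

class MLPSelector(nn.Module):
    """Small MLP; the group penalty acts on the first-layer weight columns,
    one column per input feature."""
    def __init__(self, n_features, n_hidden, n_out):
        super().__init__()
        self.fc1 = nn.Linear(n_features, n_hidden)
        self.fc2 = nn.Linear(n_hidden, n_out)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

def selection_penalty(model, corr, lam=1e-2, mu=1e-2, eps=1e-8):
    # Smoothed group norm of the outgoing weights of each input feature:
    # driving ||W[:, j]||_2 to zero removes feature j from the network.
    w = model.fc1.weight                         # shape (n_hidden, n_features)
    g = torch.sqrt((w ** 2).sum(dim=0) + eps)    # one smoothed norm per feature
    sparsity = g.sum()
    # Hypothetical redundancy term: penalize co-selection of feature pairs
    # in proportion to their absolute correlation (corr, precomputed).
    off_diag = corr * (1.0 - torch.eye(corr.shape[0]))
    redundancy = (off_diag * torch.outer(g, g)).sum()
    return lam * sparsity + mu * redundancy
```

During training this penalty is added to the task loss; features whose smoothed group norms shrink toward zero are discarded, and the coefficient mu trades off sparsity against redundancy among the selected features. For group-feature (sensor) selection, the same norm would be taken jointly over the columns of all features belonging to a group.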
A new Sigma-Pi-Sigma neural network based on L1 and L2 regularization and applications
As one of the important higher-order neural networks developed in the last decade, the Sigma-Pi-Sigma neural network has more powerful nonlinear mapping capabilities than other popular neural networks. This paper is concerned with a new Sigma-Pi-Sigma neural network trained by a batch gradient method with L1 and L2 regularization, and numerical experiments on classification and regression problems show that the proposed algorithm is effective and has better properties compared with other classical penalization methods. The proposed model combines the sparse-solution tendency of the L1 norm with the efficiency of the L2 norm, which can regulate the complexity of the network and prevent overfitting. Also, the numerical oscillation induced by the non-differentiability of the L1 plus L2 regularizer at the origin can be eliminated by a smoothing technique that approximates the objective function.
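For orientation, a smoothed objective of this type can be written as follows (the specific smoothing function below is one common choice, not necessarily the one used in the paper):

$$
E(\mathbf{w}) = \frac{1}{2N}\sum_{n=1}^{N}\bigl(y_n - f(\mathbf{x}_n;\mathbf{w})\bigr)^2
+ \lambda_1 \sum_i h_\epsilon(w_i) + \lambda_2 \|\mathbf{w}\|_2^2,
\qquad h_\epsilon(w) = \sqrt{w^2 + \epsilon^2},
$$

where $h_\epsilon$ replaces the non-differentiable $|w|$ near the origin, which removes the gradient oscillation, while the $L_2$ term is already smooth everywhere.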
Sparsity in deep learning: Pruning and growth for efficient inference and training in neural networks
The growing energy and performance costs of deep learning have driven the community to reduce the size of neural networks by selectively pruning components. Similarly to their biological counterparts, sparse networks generalize just as well as, and sometimes even better than, the original dense networks. Sparsity promises to reduce the memory footprint of regular networks to fit mobile devices, as well as shorten training time for ever-growing networks. In this paper, we survey prior work on sparsity in deep learning and provide an extensive tutorial of sparsification for both inference and training. We describe approaches to remove and add elements of neural networks, different training strategies to achieve model sparsity, and mechanisms to exploit sparsity in practice. Our work distills ideas from more than 300 research papers and provides guidance to practitioners who wish to utilize sparsity today, as well as to researchers whose goal is to push the frontier forward. We include the necessary background on mathematical methods in sparsification, describe phenomena such as early structure adaptation and the intricate relations between sparsity and the training process, and show techniques for achieving acceleration on real hardware. We also define a metric of pruned parameter efficiency that could serve as a baseline for comparison of different sparse networks. We close by speculating on how sparsity can improve future workloads and outline major open problems in the field.
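To make the "remove elements" side of sparsification concrete, here is a minimal sketch of one-shot global magnitude pruning (a standard baseline in this literature, not an algorithm proposed by the survey; PyTorch is assumed):

```python
import torch

def magnitude_prune(model, sparsity=0.9):
    """Zero out the globally smallest-magnitude weights and return masks
    that can be re-applied after each optimizer step to keep the zeros."""
    weights = [p for p in model.parameters() if p.dim() > 1]    # skip biases
    scores = torch.cat([w.detach().abs().flatten() for w in weights])
    k = int(sparsity * scores.numel())
    threshold = torch.kthvalue(scores, k).values if k > 0 else scores.min() - 1
    masks = []
    for w in weights:
        mask = (w.detach().abs() > threshold).float()
        w.data.mul_(mask)                                       # prune in place
        masks.append(mask)
    return masks
```

In practice pruning is usually applied gradually and interleaved with fine-tuning, and the masks are re-applied after every update so that pruned weights stay at zero.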
Statistical guarantees for sparse deep learning
Neural networks are becoming increasingly popular in applications, but our
mathematical understanding of their potential and limitations is still limited.
In this paper, we further this understanding by developing statistical
guarantees for sparse deep learning. In contrast to previous work, we consider
different types of sparsity, such as few active connections, few active nodes,
and other norm-based types of sparsity. Moreover, our theories cover important
aspects that previous theories have neglected, such as multiple outputs,
regularization, and l2-loss. The guarantees have a mild dependence on network
widths and depths, which means that they support the application of sparse but
wide and deep networks from a statistical perspective. Some of the concepts and
tools that we use in our derivations are uncommon in deep learning and, hence,
might be of additional interest.
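For orientation, the sparsity notions mentioned above can be written schematically as follows (simplified, not the paper's exact definitions): for weight matrices $W^{(1)},\dots,W^{(L)}$,

$$
\underbrace{\sum_{l}\bigl\|W^{(l)}\bigr\|_0}_{\text{few active connections}},\qquad
\underbrace{\sum_{l}\#\bigl\{j : W^{(l)}_{j,\cdot}\neq 0\bigr\}}_{\text{few active nodes}},\qquad
\underbrace{\sum_{l}\bigl\|W^{(l)}\bigr\|_1}_{\text{norm-based sparsity}},
$$

where $\|\cdot\|_0$ counts nonzero entries and a node counts as active when at least one of its incoming weights is nonzero.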
Machine Learning and Integrative Analysis of Biomedical Big Data.
Recent developments in high-throughput technologies have accelerated the accumulation of massive amounts of omics data from multiple sources: genome, epigenome, transcriptome, proteome, metabolome, etc. Traditionally, data from each source (e.g., genome) is analyzed in isolation using statistical and machine learning (ML) methods. Integrative analysis of multi-omics and clinical data is key to new biomedical discoveries and advancements in precision medicine. However, data integration poses new computational challenges as well as exacerbates the ones associated with single-omics studies. Specialized computational approaches are required to effectively and efficiently perform integrative analysis of biomedical data acquired from diverse modalities. In this review, we discuss state-of-the-art ML-based approaches for tackling five specific computational challenges associated with integrative analysis: curse of dimensionality, data heterogeneity, missing data, class imbalance, and scalability issues.
Estimation of Granger causality through Artificial Neural Networks: applications to physiological systems and chaotic electronic oscillators
One of the most challenging problems in the study of complex dynamical systems is to find the statistical interdependencies among the system components. Granger causality (GC) represents one of the most widely employed approaches, based on modeling the system dynamics with a linear vector autoregressive (VAR) model and on evaluating the information flow between two processes in terms of prediction error variances. In its most advanced setting, GC analysis is performed through a state-space (SS) representation of the VAR model that allows both conditional and unconditional forms of GC to be computed by solving only one regression problem. While this problem is typically solved through Ordinary Least Squares (OLS) estimation, a viable alternative is to use Artificial Neural Networks (ANNs) implemented in a simple structure with one input and one output layer and trained in such a way that the weight matrix corresponds to the matrix of VAR parameters. In this work, we introduce an ANN combined with SS models for the computation of GC. The ANN is trained through the Stochastic Gradient Descent L1 (SGD-L1) algorithm, and a cumulative penalty inspired by penalized regression is applied to the network weights to encourage sparsity. Simulating networks of coupled Gaussian systems, we show how the combination of ANNs and SGD-L1 mitigates the strong reduction in accuracy of OLS identification in settings with a low ratio between the number of time-series points and the number of VAR parameters. We also report how the performance of GC estimation is influenced by the number of gradient-descent iterations and by the learning rate used for training the ANN, and we recommend specific combinations of these parameters to optimize GC estimation. Then, the performance of ANN and OLS is compared in terms of GC magnitude and statistical significance to highlight the potential of the new approach to reconstruct causal coupling strength and network topology even in challenging conditions of data paucity. The results highlight the importance of a proper selection of the regularization parameter, which determines the degree of sparsity in the estimated network. Furthermore, we apply the two approaches to real data scenarios: to study the physiological network of brain and peripheral interactions in humans under different conditions of rest and mental stress, and to assess the effects of the recently emerged concept of remote synchronization on the information exchanged in a ring of electronic oscillators. The results highlight how ANNs provide a mesoscopic description of the information exchanged in networks of multiple interacting physiological systems, preserving the most active causal interactions between the cardiovascular, respiratory and brain systems. Moreover, ANNs can reconstruct the flow of directed information in a ring of oscillators whose statistical properties can be related to those of a physiological network.
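As a hedged sketch of the overall pipeline (NumPy assumed; the cumulative-penalty bookkeeping of SGD-L1 is simplified to a plain soft-threshold step, the state-space computation of conditional GC is omitted, and the restricted models are not re-fitted), VAR identification by a single linear layer trained with stochastic gradient descent and an L1 penalty, followed by pairwise GC estimation from residual variances, could look like:

```python
import numpy as np

def fit_var_sgd_l1(X, p=2, lam=1e-3, lr=1e-3, epochs=200):
    """Fit a VAR(p) model y_t ~ A @ z_t (z_t = stacked lags) with SGD on the
    squared error plus an L1 penalty, applied here as a soft-threshold step."""
    T, M = X.shape
    Z = np.hstack([X[p - k:T - k] for k in range(1, p + 1)])   # lagged regressors
    Y = X[p:]                                                   # one-step targets
    A = np.zeros((M, M * p))                                    # VAR coefficients
    for _ in range(epochs):
        for t in np.random.permutation(len(Y)):
            z, y = Z[t], Y[t]
            A -= lr * np.outer(A @ z - y, z)                    # gradient step
            A = np.sign(A) * np.maximum(np.abs(A) - lr * lam, 0.0)  # L1 shrinkage
    return A, Z, Y

def pairwise_gc(X, p=2, **kw):
    """GC from j to i as the log ratio of restricted vs. full residual variance
    (restricted model obtained by zeroing the lags of series j, not re-fitted)."""
    A, Z, Y = fit_var_sgd_l1(X, p, **kw)
    full_var = np.var(Y - Z @ A.T, axis=0)
    M = X.shape[1]
    gc = np.zeros((M, M))
    for j in range(M):
        Ar = A.copy()
        Ar[:, j::M] = 0.0                                       # drop lags of series j
        gc[:, j] = np.log(np.var(Y - Z @ Ar.T, axis=0) / full_var)
    return gc                                                   # gc[i, j]: j -> i
```

The sparsity induced by the L1 shrinkage plays the role described in the abstract: when few time-series points are available relative to the number of VAR parameters, many spurious coefficients are driven to zero and the estimated coupling network becomes easier to interpret.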
Fixing the NTK: From Neural Network Linearizations to Exact Convex Programs
Recently, theoretical analyses of deep neural networks have broadly focused
on two directions: 1) Providing insight into neural network training by SGD in
the limit of infinite hidden-layer width and infinitesimally small learning
rate (also known as gradient flow) via the Neural Tangent Kernel (NTK), and 2)
Globally optimizing the regularized training objective via cone-constrained
convex reformulations of ReLU networks. The latter research direction also
yielded an alternative formulation of the ReLU network, called a gated ReLU
network, that is globally optimizable via efficient unconstrained convex
programs. In this work, we interpret the convex program for this gated ReLU
network as a Multiple Kernel Learning (MKL) model with a weighted data masking
feature map and establish a connection to the NTK. Specifically, we show that
for a particular choice of mask weights that do not depend on the learning
targets, this kernel is equivalent to the NTK of the gated ReLU network on the
training data. A consequence of this lack of dependence on the targets is that
the NTK cannot perform better than the optimal MKL kernel on the training set.
By using iterative reweighting, we improve the weights induced by the NTK to
obtain the optimal MKL kernel which is equivalent to the solution of the exact
convex reformulation of the gated ReLU network. We also provide several
numerical simulations corroborating our theory. Additionally, we provide an
analysis of the prediction error of the resulting optimal kernel via
consistency results for the group lasso.
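Schematically (simplified notation, not the paper's exact statement), a two-layer gated ReLU network with fixed gate vectors $g_i$ computes

$$
f(x) = \sum_{i=1}^{m} \mathbb{1}\{g_i^\top x \ge 0\}\, x^\top u_i,
$$

and on training data $X \in \mathbb{R}^{n \times d}$ the fixed masks $D_i = \mathrm{diag}\bigl(\mathbb{1}\{X g_i \ge 0\}\bigr)$ turn regularized training into a group-lasso problem over masked features,

$$
\min_{\{u_i\}} \Bigl\|\sum_{i} D_i X u_i - y\Bigr\|_2^2 + \lambda \sum_{i} \|u_i\|_2,
$$

which is the structure that admits the Multiple Kernel Learning reading: one candidate kernel $D_i X X^\top D_i$ per mask, with kernel weights that can be improved by iterative reweighting beyond those implicitly induced by the NTK.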