VeriSparse: Training Verified Locally Robust Sparse Neural Networks from Scratch
Several safety-critical applications such as self-navigation, health care,
and industrial control systems use embedded systems as their core. Recent
advancements in Neural Networks (NNs) in approximating complex functions make
them well-suited for these domains. However, the compute-intensive nature of
NNs limits their deployment and training in embedded systems with limited
computation and storage capacities. Moreover, the adversarial vulnerability of
NNs challenges their use in safety-critical scenarios. Hence, developing sparse
models with robustness guarantees while using fewer resources during
training is critical for expanding NNs' use in safety-critical and
resource-constrained embedded system settings. This paper presents
VeriSparse, a framework for searching for verified locally robust sparse
networks starting from a random sparse initialization (i.e., from
scratch). VeriSparse
obtains sparse NNs exhibiting similar or higher verified local robustness
while requiring one-third of the training time of state-of-the-art
approaches. Furthermore, VeriSparse performs both structured and unstructured
sparsification, reducing storage, computing resources, and computation time
during inference. It thus enables resource-constrained embedded platforms to
leverage verified robust NN models, expanding their scope to safety-critical,
real-time, and edge applications. We exhaustively investigated VeriSparse's
efficacy and generalizability by evaluating it on various benchmark and
application-specific datasets across several model architectures.

Comment: 21 pages, 13 tables, 3 figures
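The sparse-from-scratch idea the abstract describes can be illustrated with a
minimal PyTorch sketch: connectivity is fixed by a random binary mask at
initialization (unstructured sparsity), and a structured variant zeroes whole
output rows so pruned units can be dropped entirely at inference. The mask
construction, sparsity levels, and plain cross-entropy loss are illustrative
assumptions; VeriSparse's actual search procedure and verified-robustness
training objective are described in the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedLinear(nn.Linear):
    """Linear layer whose weight matrix is multiplied by a fixed binary mask."""
    def __init__(self, in_features, out_features, sparsity=0.9):
        super().__init__(in_features, out_features)
        # Unstructured sparsity: keep a random (1 - sparsity) fraction of the
        # weights, chosen once at initialization and held fixed during training.
        mask = (torch.rand(out_features, in_features) > sparsity).float()
        self.register_buffer("mask", mask)

    def forward(self, x):
        return F.linear(x, self.weight * self.mask, self.bias)

def structured_mask(out_features, in_features, keep_ratio=0.5):
    # Structured sparsity: zero entire output rows (units/channels), so the
    # pruned rows can be removed outright at inference time.
    keep = (torch.rand(out_features) < keep_ratio).float()
    return keep.unsqueeze(1).expand(out_features, in_features)

# Tiny sparse-from-scratch classifier; a verified-robustness loss (e.g. one
# built from certified bounds) would replace plain cross-entropy here.
model = nn.Sequential(MaskedLinear(784, 256), nn.ReLU(),
                      MaskedLinear(256, 10, sparsity=0.8))
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
loss = F.cross_entropy(model(x), y)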
Sparsely Aggregated Convolutional Networks
We explore a key architectural aspect of deep convolutional neural networks:
the pattern of internal skip connections used to aggregate outputs of earlier
layers for consumption by deeper layers. Such aggregation is critical to
facilitate training of very deep networks in an end-to-end manner. This is a
primary reason for the widespread adoption of residual networks, which
aggregate outputs via cumulative summation. While subsequent works investigate
alternative aggregation operations (e.g. concatenation), we focus on an
orthogonal question: which outputs to aggregate at a particular point in the
network. We propose a new internal connection structure which aggregates only a
sparse set of previous outputs at any given depth. Our experiments
demonstrate that this simple design change offers superior performance with
fewer parameters and
lower computational requirements. Moreover, we show that sparse aggregation
allows networks to scale more robustly to 1000+ layers, thereby opening future
avenues for training long-running visual processes.

Comment: Accepted to ECCV 2018
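To make the connection pattern concrete, here is a hedged PyTorch sketch in
which each layer concatenates only a sparse subset of earlier outputs (those
at power-of-two offsets) rather than all of them. The offset rule,
concatenation as the aggregation operation, and the layer widths are
illustrative assumptions rather than the paper's exact configuration.

import torch
import torch.nn as nn

def sparse_predecessors(d):
    # Earlier outputs aggregated by layer d: indices d-1, d-2, d-4, d-8, ...
    idx, offset = [], 1
    while d - offset >= 0:
        idx.append(d - offset)
        offset *= 2
    return idx

class SparseAggBlock(nn.Module):
    def __init__(self, num_layers=8, width=16):
        super().__init__()
        self.layers = nn.ModuleList()
        for d in range(1, num_layers + 1):
            in_ch = width * len(sparse_predecessors(d))   # only the sparse set
            self.layers.append(nn.Sequential(
                nn.Conv2d(in_ch, width, kernel_size=3, padding=1),
                nn.ReLU()))

    def forward(self, x):                  # x: (N, width, H, W) from a stem conv
        outputs = [x]
        for d, layer in enumerate(self.layers, start=1):
            agg = torch.cat([outputs[i] for i in sparse_predecessors(d)], dim=1)
            outputs.append(layer(agg))
        return outputs[-1]

block = SparseAggBlock()
out = block(torch.randn(1, 16, 32, 32))    # per-layer aggregation cost grows
                                           # only logarithmically with depth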
Neural Ranking Models with Weak Supervision
Despite the impressive improvements achieved by unsupervised deep neural
networks in computer vision and NLP tasks, such improvements have not yet been
observed in ranking for information retrieval. The reason may be the complexity
of the ranking problem, as it is not obvious how to learn from queries and
documents when no supervised signal is available. Hence, in this paper, we
propose to train a neural ranking model using weak supervision, where labels
are obtained automatically without human annotators or any external resources
(e.g., click data). To this end, we use the output of an unsupervised ranking
model, such as BM25, as a weak supervision signal. We further train a set of
simple yet effective ranking models based on feed-forward neural networks. We
study their effectiveness under various learning scenarios (point-wise and
pair-wise models) and using different input representations (i.e., from
encoding query-document pairs into dense/sparse vectors to using word
embedding representations). We train our networks using tens of millions of
training instances and evaluate them on two standard collections: a
homogeneous news collection (Robust) and a heterogeneous large-scale web
collection (ClueWeb). Our experiments indicate that employing proper objective
functions and letting the networks learn the input representation based on
weakly supervised data leads to impressive performance, with over 13% and 35%
MAP improvements over the BM25 model on the Robust and ClueWeb collections,
respectively. Our findings also
suggest that supervised neural ranking models can greatly benefit from
pre-training on large amounts of weakly labeled data that can be easily
obtained from unsupervised IR models.

Comment: In proceedings of The 40th International ACM SIGIR Conference on
Research and Development in Information Retrieval (SIGIR 2017)
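A hedged sketch of the weak-supervision recipe: scores from an unsupervised
ranker such as BM25 act as labels, and a small feed-forward model is trained
pair-wise on document pairs ordered by those scores. The input encoding,
network size, and hinge margin are illustrative assumptions; the paper studies
several input representations and both point-wise and pair-wise objectives.

import torch
import torch.nn as nn

class PairwiseRanker(nn.Module):
    """Feed-forward scorer over a dense query-document representation."""
    def __init__(self, dim=300, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, qd):            # qd: (batch, dim) encoded query-doc pair
        return self.net(qd).squeeze(-1)

def pairwise_hinge_loss(model, qd_pos, qd_neg, margin=1.0):
    # qd_pos / qd_neg encode two documents for the same query; the "positive"
    # one is the document with the higher BM25 score (the weak label).
    s_pos, s_neg = model(qd_pos), model(qd_neg)
    return torch.clamp(margin - (s_pos - s_neg), min=0).mean()

# One training step; no human judgments are needed, only BM25-derived pairs.
model = PairwiseRanker()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
qd_pos, qd_neg = torch.randn(32, 300), torch.randn(32, 300)  # stand-in encodings
loss = pairwise_hinge_loss(model, qd_pos, qd_neg)
opt.zero_grad(); loss.backward(); opt.step()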