Mitigating Architectural Mismatch During the Evolutionary Synthesis of Deep Neural Networks
Evolutionary deep intelligence has recently shown great promise for producing
small, powerful deep neural network models via the organic synthesis of
increasingly efficient architectures over successive generations. Existing
evolutionary synthesis processes, however, have allowed the mating of parent
networks independent of architectural alignment, resulting in a mismatch of
network structures. We present a preliminary study into the effects of
architectural alignment during evolutionary synthesis using a gene tagging
system. Surprisingly, the network architectures synthesized using the gene
tagging approach resulted in slower decreases in performance accuracy and
storage size; however, the resultant networks were comparable in size and
performance accuracy to the non-gene tagging networks. Furthermore, we
speculate that there is a noticeable decrease in network variability for
networks synthesized with gene tagging, indicating that enforcing a
like-with-like mating policy potentially restricts the exploration of the
search space of possible network architectures.
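The like-with-like mating policy described above can be illustrated with a toy sketch: layers carry hereditary tags from a common ancestor, and offspring synthesis only combines layers whose tags align. All names and the layer-width encoding here are hypothetical; the paper tags structures inside real deep networks.

```python
def tag_layers(widths, ancestor_id):
    """Assign a hereditary gene tag to each layer of an architecture.

    `widths` is a list of layer widths; tags record ancestral origin so
    later matings can align like with like. (Illustrative sketch only.)
    """
    return [(f"{ancestor_id}-{i}", w) for i, w in enumerate(widths)]

def mate_aligned(parent_a, parent_b):
    """Synthesize an offspring from only those layers whose tags match;
    unmatched layers are dropped, and the leaner width is inherited."""
    tags_b = dict(parent_b)
    return [(tag, min(w, tags_b[tag])) for tag, w in parent_a if tag in tags_b]

ancestor = tag_layers([64, 128, 256, 512], "g0")
# Descendants mutate widths but keep their ancestral tags.
a = [(t, max(8, w // 2)) for t, w in ancestor]
b = [(t, max(8, int(w * 0.75))) for t, w in ancestor[:-1]]  # b lost one layer
child = mate_aligned(a, b)  # only the three aligned layers survive
```

Because only tag-matched layers mate, the offspring architecture stays structurally consistent with both parents, at the cost of the reduced exploration the abstract notes.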
Assessing Architectural Similarity in Populations of Deep Neural Networks
Evolutionary deep intelligence has recently shown great promise for producing
small, powerful deep neural network models via the synthesis of increasingly
efficient architectures over successive generations. Despite recent research
showing the efficacy of multi-parent evolutionary synthesis, little has been
done to directly assess architectural similarity between networks during the
synthesis process for improved parent network selection. In this work, we
present a preliminary study into quantifying architectural similarity via the
percentage overlap of architectural clusters. Results show that networks
synthesized using architectural alignment (via gene tagging) maintain higher
architectural similarities within each generation, potentially restricting the
search space of highly efficient network architectures.
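A percentage-overlap measure of the kind described above can be sketched as intersection over union of two networks' cluster sets. The set-of-identifiers encoding is an assumption for illustration, not the paper's exact formulation.

```python
def cluster_overlap(clusters_a, clusters_b):
    """Percentage overlap between two networks' architectural clusters.

    A cluster is represented here as a hashable identifier (e.g., a tag
    shared by a group of synapses); overlap is the intersection size over
    the union size, as a percentage. (Assumed definition, for illustration.)
    """
    a, b = set(clusters_a), set(clusters_b)
    if not a | b:
        return 0.0
    return 100.0 * len(a & b) / len(a | b)

similarity = cluster_overlap({"c1", "c2", "c3"}, {"c2", "c3", "c4"})
```

Higher overlap within a generation then indicates the more architecturally homogeneous populations that gene-tagged synthesis produces.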
Efficient Deep Feature Learning and Extraction via StochasticNets
Deep neural networks are a powerful tool for feature learning and extraction
given their ability to model high-level abstractions in highly complex data.
One area worth exploring in feature learning and extraction using deep neural
networks is efficient neural connectivity formation for faster feature learning
and extraction. Motivated by findings of stochastic synaptic connectivity
formation in the brain as well as the brain's uncanny ability to efficiently
represent information, we propose the efficient learning and extraction of
features via StochasticNets, where sparsely-connected deep neural networks can
be formed via stochastic connectivity between neurons. To evaluate the
feasibility of such a deep neural network architecture for feature learning and
extraction, we train deep convolutional StochasticNets to learn abstract
features using the CIFAR-10 dataset, and extract the learned features from
images to perform classification on the SVHN and STL-10 datasets. Experimental
results show that features learned using deep convolutional StochasticNets,
with fewer neural connections than conventional deep convolutional neural
networks, can allow for better or comparable classification accuracy than
conventional deep neural networks: relative test error decrease of ~4.5% for
classification on the STL-10 dataset and ~1% for classification on the SVHN
dataset. Furthermore, it was shown that the deep features extracted using deep
convolutional StochasticNets can provide comparable classification accuracy
even when only 10% of the training data is used for feature learning. Finally,
it was also shown that significant gains in feature extraction speed can be
achieved in embedded applications using StochasticNets. As such, StochasticNets
allow for faster feature learning and extraction while facilitating better or comparable accuracy.
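The stochastic connectivity formation at the heart of StochasticNets can be sketched as realizing each candidate connection independently with some probability. This dense toy version is only a stand-in; the paper forms stochastic receptive fields in convolutional networks.

```python
import random

def stochastic_connectivity(n_in, n_out, p=0.5, seed=0):
    """Form a sparse synaptic mask by realizing each possible connection
    between an input and an output neuron independently with probability p,
    in the spirit of StochasticNets. (Toy sketch with assumed parameters.)
    """
    rng = random.Random(seed)
    return [[1 if rng.random() < p else 0 for _ in range(n_out)]
            for _ in range(n_in)]

mask = stochastic_connectivity(100, 50, p=0.4)
# Fraction of connections actually formed, close to p for large layers.
density = sum(map(sum, mask)) / (100 * 50)
```

Multiplying a weight matrix elementwise by such a mask yields the sparsely-connected network the abstract describes, with correspondingly fewer connections to evaluate at extraction time.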
Multi-Neighborhood Convolutional Networks
We explore the role of scale for improved feature learning in convolutional networks. We propose multi-neighborhood convolutional networks, designed to learn image features at different levels of detail. Utilizing nonlinear scale-space models, the proposed multi-neighborhood model can effectively capture fine-scale image characteristics (i.e., appearance) using a small-size neighborhood, while coarse-scale image structures (i.e., shape) are detected through a larger neighborhood. The experimental results demonstrate the superior performance of the proposed multi-scale multi-neighborhood models over their single-scale counterparts.
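The small-versus-large neighborhood idea can be sketched in 1-D with mean filters standing in for learned convolution kernels. The function names and radii are hypothetical; the paper operates on 2-D images with learned filters.

```python
def neighborhood_features(signal, radius):
    """Mean response over a (2*radius+1)-sample neighborhood, a stand-in
    for a learned convolution kernel of that size (edges are truncated)."""
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        window = signal[lo:hi]
        out.append(sum(window) / len(window))
    return out

def multi_neighborhood(signal, radii=(1, 3)):
    """Compute one feature map per neighborhood size: a small radius keeps
    fine-scale detail (appearance), a large radius captures coarse-scale
    structure (shape)."""
    return [neighborhood_features(signal, r) for r in radii]

impulse = [0.0, 0.0, 1.0, 0.0, 0.0]
fine, coarse = multi_neighborhood(impulse)
```

The fine map preserves the sharp peak while the coarse map spreads it out, which is exactly the division of labor between the two neighborhood sizes.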
CLVOS23: A Long Video Object Segmentation Dataset for Continual Learning
Continual learning in real-world scenarios is a major challenge. A general
continual learning model should have a constant memory size and no predefined
task boundaries, as is the case in semi-supervised Video Object Segmentation
(VOS), where continual learning challenges particularly present themselves in
working on long video sequences. In this article, we first formulate the
problem of semi-supervised VOS, specifically online VOS, as a continual
learning problem, and then secondly provide a public VOS dataset, CLVOS23,
focusing on continual learning. Finally, we propose and implement a
regularization-based continual learning approach on LWL, an existing online VOS
baseline, to demonstrate the efficacy of continual learning when applied to
online VOS and to establish a CLVOS23 baseline. We apply the proposed baseline
to the Long Videos dataset as well as to two short video VOS datasets, DAVIS16
and DAVIS17. To the best of our knowledge, this is the first time that VOS has
been defined and addressed as a continual learning problem.
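A regularization-based continual-learning step of the general kind applied to the online VOS baseline can be sketched as a gradient update with an L2 penalty pulling the model back toward parameters remembered from earlier in the video. This is a generic sketch with assumed names and coefficients, not the exact scheme the paper adds to LWL.

```python
def regularized_update(params, grads, anchor, lr=0.1, lam=0.5):
    """One gradient step with an L2 penalty of strength `lam` anchoring
    the online model to `anchor`, the parameters retained from earlier
    frames, so adapting to new frames does not erase old objects.
    (Generic continual-learning sketch, illustrative only.)"""
    return [p - lr * (g + lam * (p - a))
            for p, g, a in zip(params, grads, anchor)]

# With zero task gradient, the penalty alone pulls params toward the anchor.
updated = regularized_update([1.0], [0.0], [0.0])
```

Over a long sequence the penalty bounds drift without growing memory, which matches the constant-memory, no-task-boundary setting the abstract formulates.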
Machine Learning Challenges of Biological Factors in Insect Image Data
The BIOSCAN project, led by the International Barcode of Life Consortium,
seeks to study changes in biodiversity on a global scale. One component of the
project is focused on studying the species interaction and dynamics of all
insects. In addition to genetically barcoding insects, over 1.5 million images
per year will be collected, each needing taxonomic classification. With the
immense volume of incoming images, relying solely on expert taxonomists to
label the images would be impossible; however, artificial intelligence and
computer vision technology may offer a viable high-throughput solution.
Additional tasks, including manually weighing individual insects to determine
biomass, remain tedious and costly. Here again, computer vision may offer an
efficient and compelling alternative. While the use of computer vision methods
is appealing for addressing these problems, significant challenges resulting
from biological factors present themselves. These challenges are formulated in
the context of machine learning in this paper.
Comment: 4 pages, 3 figures. Submitted to the Journal of Computational Vision and Imaging System
Impact of Training Images on Radiometric Compensation
The increasing availability of both high-resolution projectors and imperfect displays makes radiometric correction an essential component in all modern projection systems. Particularly, projecting in casual locations, such as classrooms, open areas and homes, calls for the development of radiometric correction techniques that are fully automatic and deal with display imperfections in real time. This paper reviews the current radiometric compensation algorithms and discusses the influence of different training images on their performance.
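The role of training images in radiometric compensation can be sketched with a per-pixel linear display model fitted from projected/observed pairs and then inverted. The linear, single-channel model and all names here are simplifying assumptions; real compensation systems fit nonlinear, per-channel responses.

```python
def fit_response(inputs, observed):
    """Least-squares fit of a per-pixel linear display response
    observed = gain * input + offset, from pairs gathered by projecting
    training images onto the surface. (Illustrative simplified model.)"""
    n = len(inputs)
    mi = sum(inputs) / n
    mo = sum(observed) / n
    gain = (sum((i - mi) * (o - mo) for i, o in zip(inputs, observed))
            / sum((i - mi) ** 2 for i in inputs))
    return gain, mo - gain * mi

def compensate(desired, gain, offset):
    """Invert the fitted response so the viewer sees `desired`, clamped
    to the projector's valid input range [0, 1]."""
    return min(1.0, max(0.0, (desired - offset) / gain))

# Three training intensities and their camera-observed values at one pixel.
gain, offset = fit_response([0.0, 0.5, 1.0], [0.1, 0.4, 0.7])
corrected = compensate(0.4, gain, offset)
```

Which training images are projected determines how well `gain` and `offset` are constrained at each pixel, which is the sensitivity the paper studies.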