161 research outputs found
Feature Learning for Multispectral Satellite Imagery Classification Using Neural Architecture Search
Automated classification of remote sensing data is an integral tool for earth scientists, and deep learning has proven very successful at solving such problems. However, building deep learning models to process the data requires expert knowledge of machine learning. We introduce DELTA, a software toolkit to bridge this technical gap and make deep learning easily accessible to earth scientists. Visual feature engineering is a critical part of the machine learning lifecycle, and hence is a key area that will be automated by DELTA. Hand-engineered features can perform well, but require a cross-functional team with expertise in both machine learning and the specific problem domain, which is costly in both researcher time and labor. The problem is more acute with multispectral satellite imagery, which requires considerable computational resources to process. In order to automate the feature learning process, a neural architecture search samples the space of asymmetric and symmetric autoencoders using evolutionary algorithms. Since denoising autoencoders have been shown to perform well for feature learning, the autoencoders are trained on various levels of noise, and the features generated by the best-performing autoencoders are evaluated according to their performance on image classification tasks. The resulting features are demonstrated to be effective for Landsat-8 flood mapping, as well as the benchmark datasets CIFAR10 and SVHN.
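As a rough illustration of the evolutionary sampling described above (not the authors' DELTA implementation; all names and the placeholder fitness are hypothetical), one might sample symmetric and asymmetric autoencoder configurations and mutate them like this:

```python
import random

def sample_architecture(rng, symmetric=True, max_layers=3, widths=(32, 64, 128)):
    """Sample encoder/decoder layer widths; the decoder mirrors the encoder
    when symmetric, and is sampled independently otherwise."""
    encoder = [rng.choice(widths) for _ in range(rng.randint(1, max_layers))]
    if symmetric:
        decoder = list(reversed(encoder))
    else:
        decoder = [rng.choice(widths) for _ in range(rng.randint(1, max_layers))]
    return {"encoder": encoder, "decoder": decoder}

def mutate(rng, arch, widths=(32, 64, 128)):
    """Copy an architecture and resample one layer width."""
    child = {k: list(v) for k, v in arch.items()}
    part = rng.choice(["encoder", "decoder"])
    i = rng.randrange(len(child[part]))
    child[part][i] = rng.choice(widths)
    return child

rng = random.Random(0)
population = [sample_architecture(rng, symmetric=rng.random() < 0.5)
              for _ in range(8)]
# In the paper, fitness would come from training each denoising autoencoder
# and scoring its features on a downstream classifier; this is a placeholder.
fitness = lambda arch: -sum(arch["encoder"])
population.sort(key=fitness)
child = mutate(rng, population[0])
```

In practice the fitness evaluation (training on noisy inputs, then classifying with the learned features) dominates the cost of such a search.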
Neuroevolution in Deep Neural Networks: Current Trends and Future Challenges
A variety of methods have been applied to the architectural configuration and
learning or training of artificial deep neural networks (DNN). These methods
play a crucial role in the success or failure of the DNN for most problems and
applications. Evolutionary Algorithms (EAs) are gaining momentum as a
computationally feasible method for the automated optimisation and training of
DNNs. Neuroevolution is a term which describes these processes of automated
configuration and training of DNNs using EAs. While many works exist in the
literature, no comprehensive surveys currently exist focusing exclusively on
the strengths and limitations of using neuroevolution approaches in DNNs.
The prolonged absence of such surveys can lead to a disjointed and fragmented
field, preventing DNN researchers from adopting neuroevolutionary methods in
their own research and resulting in lost opportunities to improve performance
and widen application to real-world deep learning problems. This paper
presents a comprehensive survey, discussion and evaluation of the
state-of-the-art works on using EAs for architectural configuration and
training of DNNs. Based on this survey, the paper highlights the most pertinent
current issues and challenges in neuroevolution and identifies multiple
promising future research directions.
Machine learning for outlier detection in medical imaging
Outlier detection is an important problem with diverse practical applications. In medical imaging, there are many diagnostic tasks that can be framed as outlier detection. Since pathologies can manifest in so many different ways, the goal is typically to learn from normal, healthy data and identify any deviations. Unfortunately, many outliers in the medical domain can be subtle and specific, making them difficult to detect without labelled examples. This thesis analyzes some of the nuances of medical data and the value of labels in this context. It goes on to propose several strategies for unsupervised learning. More specifically, these methods are designed to learn discriminative features from data of a single class. One approach uses divergent search to continually find different ways to partition the data and thereby accumulates a repertoire of features. The other proposed methods are based on a self-supervised task that distorts normal data to form a contrasting class. A network can then be trained to localize the irregularities and estimate the degree of foreign interference. This basic technique is further enhanced using advanced image editing to create more natural irregularities. Lastly, the same self-supervised task is repurposed for few-shot learning to create a framework for adaptive outlier detection. These proposed methods are able to outperform conventional strategies across a range of datasets including brain MRI, abdominal CT, chest X-ray, and fetal ultrasound data. In particular, these methods excel at detecting more subtle irregularities. This complements existing methods and aims to maximize benefit to clinicians by detecting fine-grained anomalies that can otherwise require intense scrutiny. Note that all approaches to outlier detection must accept some assumptions; these will affect which types of outliers can be detected. As such, these methods aim for broad generalization within the most medically relevant categories. 
Ultimately, the hope is to support clinicians and to focus their attention and efforts on the data that warrants further analysis.
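As a minimal sketch of the self-supervised task described above (distorting normal data to form a contrasting class), one can blend a patch from a second image into a normal image and use the blending factor as a pixel-wise regression target. The function name and parameters here are illustrative, not the thesis's actual code:

```python
import numpy as np

def foreign_patch_interpolation(img_a, img_b, rng, patch=8, alpha=None):
    """Blend a random patch from img_b into img_a and return the corrupted
    image plus a pixel-wise label giving the interpolation factor, which a
    network would then be trained to regress (localizing the irregularity)."""
    h, w = img_a.shape
    alpha = rng.uniform(0.2, 0.8) if alpha is None else alpha
    y = rng.integers(0, h - patch)
    x = rng.integers(0, w - patch)
    out = img_a.copy()
    label = np.zeros_like(img_a)
    out[y:y + patch, x:x + patch] = (
        (1 - alpha) * img_a[y:y + patch, x:x + patch]
        + alpha * img_b[y:y + patch, x:x + patch]
    )
    label[y:y + patch, x:x + patch] = alpha
    return out, label

rng = np.random.default_rng(0)
a = rng.random((32, 32))
b = rng.random((32, 32))
corrupted, target = foreign_patch_interpolation(a, b, rng)
```

The thesis's enhanced variants replace this simple blending with more advanced image editing to create more natural irregularities; the training setup is otherwise analogous.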
Efficient Residual Dense Block Search for Image Super-Resolution
Although remarkable progress has been made on single image super-resolution
due to the revival of deep convolutional neural networks, deep learning methods
are confronted with the challenges of computation and memory consumption in
practice, especially for mobile devices. Focusing on this issue, we propose an
efficient residual dense block search algorithm with multiple objectives to
hunt for fast, lightweight and accurate networks for image super-resolution.
Firstly, to accelerate the super-resolution network, we fully exploit the
variation of feature scale with the proposed efficient residual dense blocks.
In the proposed evolutionary algorithm, the locations of the pooling and
upsampling operators are searched automatically. Secondly, the network
architecture is evolved with the guidance of block credits to obtain an
accurate super-resolution network. A block credit reflects the effect of the
current block and is earned during the model evaluation process. It guides the
evolution by weighting the mutation sampling probability to favor admirable
blocks. Extensive experimental results demonstrate the effectiveness of the
proposed search method, and the discovered efficient super-resolution models
achieve better performance than state-of-the-art methods with a limited number
of parameters and FLOPs.
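The credit-weighted sampling idea above can be sketched as follows. This is an illustrative reading of the abstract, not the paper's implementation; the block names and credit values are invented:

```python
import random

def weighted_block_choice(block_credits, rng):
    """Sample a candidate block type with probability proportional to its
    accumulated credit, so blocks that helped during model evaluation are
    favored when mutating the architecture."""
    blocks, credits = zip(*block_credits.items())
    return rng.choices(blocks, weights=credits, k=1)[0]

rng = random.Random(0)
credits = {"rdb_small": 0.9, "rdb_wide": 0.3, "rdb_deep": 0.1}
counts = {b: 0 for b in credits}
for _ in range(1000):
    counts[weighted_block_choice(credits, rng)] += 1
```

Over many draws, high-credit blocks are selected proportionally more often, biasing the evolution toward architectures built from blocks that performed well.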
Single Cell Training on Architecture Search for Image Denoising
Neural Architecture Search (NAS) for automatically finding the optimal
network architecture has shown some success with competitive performances in
various computer vision tasks. However, NAS in general requires a tremendous
amount of computations. Thus reducing computational cost has emerged as an
important issue. Most of the attempts so far have been based on manual
approaches, and often the architectures developed from such efforts trade off
network optimality against search cost. Additionally, recent
NAS methods for image restoration generally do not consider dynamic operations
that may transform dimensions of feature maps because of the dimensionality
mismatch in tensor calculations. This can greatly limit NAS in its search for
optimal network structure. To address these issues, we re-frame the optimal
search problem by focusing at the component block level. Previous work has
shown that an effective denoising block can be connected in series to further
improve network performance. By focusing at the block level, the search space
of reinforcement learning becomes significantly smaller and the evaluation
process can be conducted more rapidly. In addition, we integrate innovative
dimension-matching modules to deal with the spatial and channel-wise
mismatches that may occur in the optimal design search. This allows much
flexibility in the optimal network search within the cell block. With these
modules, we then employ reinforcement learning to search for an optimal image
denoising network at the module level. The computational efficiency of our
proposed Denoising Prior Neural Architecture Search (DPNAS) was demonstrated
by having it complete an optimal architecture search for an image restoration
task in just one day with a single GPU.
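A dimension-matching step of the kind described above could look like the following sketch: a 1x1 projection to change channel count plus nearest-neighbour resampling to change spatial size. This is an assumption about what such a module does, not DPNAS's actual code, and the function name is hypothetical:

```python
import numpy as np

def match_dims(x, target_c, target_hw, rng):
    """Match a (channels, height, width) feature map to a target shape:
    a random 1x1 projection adjusts channels, and nearest-neighbour index
    resampling adjusts the spatial dimensions."""
    c, h, w = x.shape
    # Channel match: a 1x1 "conv" is a (target_c, c) matrix applied per pixel.
    proj = rng.standard_normal((target_c, c)) / np.sqrt(c)
    x = np.einsum("oc,chw->ohw", proj, x)
    # Spatial match: pick source rows/columns by nearest-neighbour index.
    th, tw = target_hw
    rows = np.arange(th) * h // th
    cols = np.arange(tw) * w // tw
    return x[:, rows][:, :, cols]

rng = np.random.default_rng(0)
y = match_dims(rng.random((16, 8, 8)), target_c=32, target_hw=(4, 4), rng=rng)
```

Resolving such mismatches is what lets the search freely compose candidate blocks whose output tensors would otherwise be incompatible.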
Review: Deep learning in electron microscopy
Deep learning is transforming most areas of science and technology, including electron microscopy. This review paper offers a practical perspective aimed at developers with limited familiarity. For context, we review popular applications of deep learning in electron microscopy. Next, we discuss the hardware and software needed to get started with deep learning and to interface with electron microscopes. We then review neural network components, popular architectures, and their optimization. Finally, we discuss future directions of deep learning in electron microscopy.