An Unsupervised Deep Learning Approach for Scenario Forecasts
In this paper, we propose a novel scenario forecasting approach that can be
applied to a broad range of power system resources (e.g., wind, solar, load)
over various forecast horizons and prediction intervals. This approach is
model-free and data-driven, producing a set of scenarios that represent
possible future behaviors based only on historical observations and point
forecasts. It first applies a recently developed unsupervised deep learning
framework, generative adversarial networks (GANs), to learn the intrinsic patterns
in historical renewable generation data. Then, by solving an optimization
problem, we are able to quickly generate a large number of realistic future
scenarios. The proposed method has been applied to a wind power generation and
forecasting dataset from the National Renewable Energy Laboratory. Simulation
results indicate our method is able to generate scenarios that capture spatial
and temporal correlations. Our code and simulation datasets are freely
available online.
Comment: Accepted to Power Systems Computation Conference 2018. Code available
at https://github.com/chennnnnyize/Scenario-Forecasts-GA
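As a rough illustration of the two-stage idea above (GAN sampling followed by forecast-conditioned scenario selection), the sketch below stands in a fixed random linear map for the trained generator and a simple nearest-to-forecast selection for the optimization step; all dimensions and names are hypothetical, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained GAN generator: maps a latent
# vector z to a 24-step power trajectory. In the paper this is a
# trained deep network; here it is a fixed random linear map.
W = rng.normal(size=(24, 8))
def generator(z):
    return np.tanh(W @ z)  # bounded trajectory in [-1, 1]

point_forecast = np.linspace(-0.5, 0.5, 24)

# Sample many candidate scenarios from the latent space, then keep
# the ones closest to the point forecast -- a crude substitute for
# the latent-space optimization described in the abstract.
Z = rng.normal(size=(1000, 8))
scenarios = np.array([generator(z) for z in Z])
errors = np.linalg.norm(scenarios - point_forecast, axis=1)
top = scenarios[np.argsort(errors)[:10]]  # 10 scenarios near the forecast
print(top.shape)  # (10, 24)
```

The key property preserved here is that scenarios are drawn from a learned generative model and then conditioned on the point forecast, rather than being sampled from a hand-specified error distribution.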
Unsupervised Deep Epipolar Flow for Stationary or Dynamic Scenes
Unsupervised deep learning for optical flow computation has achieved
promising results. Most existing deep-net-based methods rely on image
brightness consistency and local smoothness constraints to train the networks.
Their performance degrades at regions where repetitive textures or occlusions
occur. In this paper, we propose Deep Epipolar Flow, an unsupervised optical
flow method which incorporates global geometric constraints into network
learning. In particular, we investigate multiple ways of enforcing the epipolar
constraint in flow estimation. To alleviate a "chicken-and-egg" type of problem
encountered in dynamic scenes where multiple motions may be present, we propose
a low-rank constraint as well as a union-of-subspaces constraint for training.
Experimental results on various benchmarking datasets show that our method
achieves competitive performance compared with supervised methods and
outperforms state-of-the-art unsupervised deep-learning methods.
Comment: CVPR 201
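The epipolar constraint referenced above states that for corresponding points x1 and x2 related by a fundamental matrix F, the algebraic error x2^T F x1 vanishes. A minimal sketch of using this as a per-pixel geometric residual on predicted flow follows; the fundamental matrix here is a hypothetical pure-translation example, whereas a real pipeline would estimate F per image pair:

```python
import numpy as np

def epipolar_residual(F, pts1, flow):
    """Per-point algebraic epipolar error |x2^T F x1| for flow-displaced points."""
    pts2 = pts1 + flow
    ones = np.ones((pts1.shape[0], 1))
    x1 = np.hstack([pts1, ones])  # homogeneous coords in frame 1
    x2 = np.hstack([pts2, ones])  # flow-displaced coords in frame 2
    return np.abs(np.einsum('ni,ij,nj->n', x2, F, x1))

# Pure horizontal camera translation: epipolar lines are horizontal,
# so purely horizontal flow satisfies the constraint exactly.
F = np.array([[0., 0., 0.],
              [0., 0., -1.],
              [0., 1., 0.]])
pts = np.array([[10., 20.], [30., 40.]])
flow_ok = np.array([[5., 0.], [3., 0.]])   # horizontal -> residual 0
flow_bad = np.array([[0., 5.], [0., 3.]])  # vertical -> residual > 0
print(epipolar_residual(F, pts, flow_ok))   # [0. 0.]
print(epipolar_residual(F, pts, flow_bad))  # [5. 3.]
```

A loss of this form penalizes flow that leaves the epipolar line, which is what distinguishes the geometric constraint from brightness-based losses alone.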
Encoding Multi-Resolution Brain Networks Using Unsupervised Deep Learning
The main goal of this study is to extract a set of brain networks in multiple
time-resolutions to analyze the connectivity patterns among the anatomic
regions for a given cognitive task. We suggest a deep architecture which learns
the natural groupings of the connectivity patterns of human brain in multiple
time-resolutions. The suggested architecture is tested on the task data set of
the Human Connectome Project (HCP), where we extract multi-resolution networks, each
of which corresponds to a cognitive task. At the first level of this
architecture, we decompose the fMRI signal into multiple sub-bands using
wavelet decompositions. At the second level, for each sub-band, we estimate a
brain network extracted from short time windows of the fMRI signal. At the
third level, we feed the adjacency matrices of each mesh network at each
time-resolution into an unsupervised deep learning algorithm, namely, a Stacked
Denoising Auto-Encoder (SDAE). The outputs of the SDAE provide a compact
connectivity representation for each time window at each sub-band of the fMRI
signal. We concatenate the learned representations of all sub-bands at each
window and cluster them by a hierarchical algorithm to find the natural
groupings among the windows. We observe that each cluster represents a
cognitive task with a performance of 93% Rand Index and 71% Adjusted Rand
Index. We visualize the mean values and the precisions of the networks at each
component of the cluster mixture. The mean brain networks at cluster centers
show the variations among cognitive tasks and the precision of each cluster
shows the within-cluster variability of networks across the subjects.
Comment: 6 pages, 3 figures, submitted to The 17th annual IEEE International
Conference on BioInformatics and BioEngineerin
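One concrete stage of the pipeline above is building a brain "network" (an adjacency matrix) from each short time window of a multi-region signal. The sketch below uses Pearson correlation over sliding windows of a synthetic signal; region count, window length, and the signal itself are illustrative assumptions, and the paper additionally applies wavelet sub-band decomposition before this step and feeds the resulting matrices to an SDAE:

```python
import numpy as np

rng = np.random.default_rng(1)
n_regions, n_timepoints, win = 5, 120, 30
signal = rng.normal(size=(n_regions, n_timepoints))  # synthetic stand-in for fMRI

# One adjacency matrix (region-by-region correlation network) per window.
adjacencies = []
for start in range(0, n_timepoints - win + 1, win):
    window = signal[:, start:start + win]
    adjacencies.append(np.corrcoef(window))  # n_regions x n_regions

adjacencies = np.array(adjacencies)
print(adjacencies.shape)  # (4, 5, 5): 4 windows, each a 5x5 network
```

Each of these matrices would then be vectorized and encoded; the compact representations, concatenated across sub-bands, are what the hierarchical clustering operates on.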
Unsupervised Deep Learning via Affinity Diffusion
Convolutional neural networks (CNNs) have achieved unprecedented success in a
variety of computer vision tasks. However, they usually rely on supervised
model learning with the need for massive labelled training data, dramatically
limiting their usability and deployability in real-world scenarios without
any labelling budget. In this work, we introduce a general-purpose
unsupervised deep learning approach to deriving discriminative feature
representations. It is based on self-discovering semantically consistent
groups of unlabelled training samples with the same class concepts through a
progressive affinity diffusion process. Extensive experiments on object image
classification and clustering show the performance superiority of the
proposed method over the state-of-the-art unsupervised learning models using
six common image recognition benchmarks including MNIST, SVHN, STL10,
CIFAR10, CIFAR100 and ImageNet.
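The core mechanism, affinity diffusion, can be illustrated in a few lines: build a row-normalized affinity (transition) matrix from pairwise feature similarities, then propagate it so that transitively connected samples gain affinity. The similarity kernel, number of diffusion steps, and synthetic two-cluster data below are illustrative choices, not the paper's progressive scheme:

```python
import numpy as np

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.1, (5, 3)),   # synthetic cluster A
               rng.normal(5, 0.1, (5, 3))])  # synthetic cluster B

d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
A = np.exp(-d**2)                             # Gaussian affinity
P = A / A.sum(axis=1, keepdims=True)          # row-stochastic transition matrix

P_diff = np.linalg.matrix_power(P, 3)         # 3 diffusion steps

# After diffusion, within-cluster affinity dominates cross-cluster affinity,
# so samples with the same (unknown) class concept group together.
print(P_diff[0, :5].sum() > P_diff[0, 5:].sum())  # True
```

Groups recovered this way supply the pseudo-supervision signal for learning discriminative features without labels.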
Unsupervised Deep Learning by Neighbourhood Discovery
Deep convolutional neural networks (CNNs) have demonstrated remarkable
success in computer vision by learning strong visual feature representations
in a supervised manner. However, training CNNs relies heavily on the availability of
exhaustive training data annotations, limiting significantly their deployment
and scalability in many application scenarios. In this work, we introduce a
generic unsupervised deep learning approach to training deep models without the
need for any manual label supervision. Specifically, we progressively discover
sample anchored/centred neighbourhoods to reason and learn the underlying class
decision boundaries iteratively and accumulatively. Every single neighbourhood
is specially formulated so that all the member samples share the same
unseen class label with high probability, facilitating the extraction of
class discriminative feature representations during training. Experiments on
image classification show the performance advantages of the proposed method
over the state-of-the-art unsupervised learning models on six benchmarks
including both coarse-grained and fine-grained object image categorisation.
Comment: 36th International Conference on Machine Learning (ICML'19). Code is
available at https://github.com/Raymond-sci/AN
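A rough sketch of the anchored-neighbourhood idea: for each sample, take its k nearest neighbours in feature space as a candidate neighbourhood whose members are assumed to share an unseen class label. The feature space, k, and two-cluster data are illustrative; the actual method selects neighbourhoods progressively by a consistency criterion and uses them as training supervision:

```python
import numpy as np

rng = np.random.default_rng(3)
feats = np.vstack([rng.normal(0, 0.2, (6, 4)),   # synthetic class 1 features
                   rng.normal(4, 0.2, (6, 4))])  # synthetic class 2 features

k = 3
d = np.linalg.norm(feats[:, None] - feats[None, :], axis=-1)
np.fill_diagonal(d, np.inf)                       # exclude self from neighbours
neighbourhoods = np.argsort(d, axis=1)[:, :k]     # k-NN per anchor sample

# Neighbours of sample 0 all come from the first synthetic cluster,
# so the neighbourhood is class-consistent without using any labels.
print(np.all(neighbourhoods[0] < 6))  # True
```

Treating each such neighbourhood as a pseudo-class lets the network learn class-discriminative features iteratively as the feature space improves.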
Cloud Classification with Unsupervised Deep Learning
We present a framework for cloud characterization that leverages modern
unsupervised deep learning technologies. While previous neural network-based
cloud classification models have used supervised learning methods, unsupervised
learning allows us to avoid restricting the model to artificial categories
based on historical cloud classification schemes and enables the discovery of
novel, more detailed classifications. Our framework learns cloud features
directly from radiance data produced by NASA's Moderate Resolution Imaging
Spectroradiometer (MODIS) satellite instrument, deriving cloud characteristics
from millions of images without relying on pre-defined cloud types during the
training process. We present preliminary results showing that our method
extracts physically relevant information from radiance data and produces
meaningful cloud classes.
Comment: 5 pages, 6 figures, Proceedings for Climate Informatics Workshop 2019 Pari
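To illustrate the discovery of cloud classes without predefined categories, the sketch below clusters feature vectors with plain k-means (Lloyd iterations). The random features standing in for learned MODIS radiance representations, the feature dimension, and k are all assumptions for illustration; the paper's framework learns its features from radiance data:

```python
import numpy as np

rng = np.random.default_rng(4)
# Stand-ins for learned per-patch radiance features from two cloud regimes.
feats = np.vstack([rng.normal(0, 0.3, (20, 8)),
                   rng.normal(3, 0.3, (20, 8))])

k = 2
centers = feats[[0, 20]].copy()               # seed one center per regime
for _ in range(10):                            # plain Lloyd iterations
    dists = np.linalg.norm(feats[:, None] - centers, axis=-1)
    labels = np.argmin(dists, axis=1)
    centers = np.array([feats[labels == c].mean(axis=0) for c in range(k)])

# The two synthetic regimes separate cleanly into the two clusters,
# i.e. "cloud classes" emerge from the data rather than a fixed taxonomy.
print(len(set(labels[:20])) == 1 and len(set(labels[20:])) == 1)  # True
```

The point of the unsupervised setup is exactly this: cluster structure comes from the radiance features themselves, so classes need not align with historical cloud taxonomies.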