Deep Complex Networks
At present, the vast majority of building blocks, techniques, and
architectures for deep learning are based on real-valued operations and
representations. However, recent work on recurrent neural networks and older
fundamental theoretical analysis suggests that complex numbers could have a
richer representational capacity and could also facilitate noise-robust memory
retrieval mechanisms. Despite their attractive properties and potential for
opening up entirely new neural architectures, complex-valued deep neural
networks have been marginalized due to the absence of the building blocks
required to design such models. In this work, we provide the key atomic
components for complex-valued deep neural networks and apply them to
convolutional feed-forward networks and convolutional LSTMs. More precisely,
we rely on complex convolutions and present algorithms for complex batch
normalization and complex weight initialization for complex-valued neural
nets, and we use them in experiments with end-to-end training schemes. We
demonstrate that such complex-valued models are competitive with their
real-valued counterparts. We test deep complex models on several computer
vision tasks, on music transcription using the MusicNet dataset, and on speech
spectrum prediction using the TIMIT dataset. We achieve state-of-the-art
performance on these audio-related tasks.
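The real-arithmetic decomposition behind a complex convolution follows directly from the product rule (W_r + iW_i)(x_r + ix_i) = (W_r x_r - W_i x_i) + i(W_r x_i + W_i x_r), so it can be built from four real convolutions. The following is a minimal 1-D NumPy sketch of that idea, not the paper's implementation:

```python
import numpy as np

def complex_conv1d(x_re, x_im, w_re, w_im):
    """Valid-mode 1-D complex convolution built from four real convolutions,
    following (W_re + i W_im) * (x_re + i x_im)
            = (W_re*x_re - W_im*x_im) + i (W_re*x_im + W_im*x_re)."""
    real = np.convolve(x_re, w_re, mode="valid") - np.convolve(x_im, w_im, mode="valid")
    imag = np.convolve(x_im, w_re, mode="valid") + np.convolve(x_re, w_im, mode="valid")
    return real, imag

# Usage: the result agrees with NumPy's native complex convolution.
x = np.array([1 + 2j, -1 + 0.5j, 0.3 - 1j, 2 + 0j])
w = np.array([0.5 - 0.5j, 1 + 1j])
re, im = complex_conv1d(x.real, x.imag, w.real, w.imag)
```

In a deep-learning framework the same decomposition is applied channel-wise, with the real and imaginary parts stored as separate feature maps.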
Deep learning systems as complex networks
Thanks to the availability of large scale digital datasets and massive
amounts of computational power, deep learning algorithms can learn
representations of data by exploiting multiple levels of abstraction. These
machine learning methods have greatly improved the state-of-the-art in many
challenging cognitive tasks, such as visual object recognition, speech
processing, natural language understanding and automatic translation. In
particular, one class of deep learning models, known as deep belief networks,
can discover intricate statistical structure in large data sets in a completely
unsupervised fashion, by learning a generative model of the data using
Hebbian-like learning mechanisms. Although these self-organizing systems can be
conveniently formalized within the framework of statistical mechanics, their
internal functioning remains opaque, because their emergent dynamics cannot be
solved analytically. In this article we propose to study deep belief networks
using techniques commonly employed in the study of complex networks, in order
to gain some insights into the structural and functional properties of the
computational graph resulting from the learning process.
Comment: 20 pages, 9 figures
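One common complex-network technique that can be applied to a trained network's computational graph is the node-strength distribution: treat the stacked weight matrices as a single weighted graph and sum the absolute weights incident on each unit. The sketch below is a hypothetical illustration of that idea, not the article's analysis pipeline:

```python
import numpy as np

def node_strengths(weight_matrices):
    """Node strength (sum of absolute incident weights) for every unit of a
    layered network, treating the list of weight matrices as one weighted
    graph. weight_matrices[l] has shape (units in layer l, units in layer l+1)."""
    sizes = [weight_matrices[0].shape[0]] + [W.shape[1] for W in weight_matrices]
    strengths = [np.zeros(n) for n in sizes]
    for l, W in enumerate(weight_matrices):
        A = np.abs(W)
        strengths[l] += A.sum(axis=1)      # edges leaving layer l
        strengths[l + 1] += A.sum(axis=0)  # edges entering layer l+1
    return strengths
```

Comparing the strength distribution before and after training is one simple way to probe how learning reshapes the graph's structure.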
Geometric robustness of deep networks: analysis and improvement
Deep convolutional neural networks have been shown to be vulnerable to
arbitrary geometric transformations. However, there is no systematic method to
measure the invariance properties of deep networks to such transformations. We
propose ManiFool as a simple yet scalable algorithm to measure the invariance
of deep networks. In particular, our algorithm measures the robustness of deep
networks to geometric transformations in a worst-case regime, since such
transformations can be problematic for sensitive applications. Our extensive
experimental results show that ManiFool can be used to measure the invariance
of fairly complex networks on high-dimensional datasets, and that these
invariance scores can be used to analyze why a network is or is not invariant.
Furthermore, we build on ManiFool to propose a new adversarial training scheme
and show its effectiveness in improving the invariance properties of deep
neural networks.
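A toy version of worst-case invariance measurement replaces ManiFool's manifold search with a grid search over a one-parameter transformation group (here, 2-D rotations acting on a point): the invariance score is the smallest transformation that flips the classifier's decision. The function below is an illustrative stand-in, not the ManiFool algorithm:

```python
import numpy as np

def worst_case_rotation(classify, x, angles):
    """Smallest-magnitude rotation angle from the candidate grid that changes
    the classifier's decision on the 2-D input x; None if it never flips.
    A grid-search toy stand-in for a worst-case geometric robustness measure."""
    base = classify(x)
    for theta in sorted(angles, key=abs):  # try small rotations first
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s], [s, c]])
        if classify(R @ x) != base:
            return theta
    return None
```

For a half-plane classifier and a point on the positive axis, the decision flips only once the rotation exceeds a quarter turn, which the grid search recovers up to its step size.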
Deep Quaternion Networks
The field of deep learning has seen significant advancement in recent years.
However, much of the existing work has been focused on real-valued numbers.
Recent work has shown that a deep learning system using complex numbers can
be deeper for a fixed parameter budget compared to its real-valued counterpart.
In this work, we explore the benefits of generalizing one step further into the
hyper-complex numbers, quaternions specifically, and provide the architecture
components needed to build deep quaternion networks. We develop the theoretical
basis by reviewing quaternion convolutions, introducing a novel quaternion
weight initialization scheme, and deriving novel algorithms for quaternion
batch normalization. These pieces are tested in a classification model by
end-to-end training on the CIFAR-10 and CIFAR-100 data sets and a segmentation
model by end-to-end training on the KITTI Road Segmentation data set. These
quaternion networks show improved convergence compared to real-valued and
complex-valued networks, especially on the segmentation task, while having
fewer parameters.
Comment: IJCNN 2018, 8 pages, 1 figure
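The core operation that quaternion layers generalize from complex multiplication is the Hamilton product, which can be written out directly over the four components (r, x, y, z). A minimal NumPy sketch, not the paper's code:

```python
import numpy as np

def hamilton_product(q, p):
    """Hamilton product of quaternions q = (r, x, y, z) and p = (r, x, y, z),
    the non-commutative multiplication underlying quaternion layers."""
    r1, x1, y1, z1 = q
    r2, x2, y2, z2 = p
    return np.array([
        r1*r2 - x1*x2 - y1*y2 - z1*z2,  # real part
        r1*x2 + x1*r2 + y1*z2 - z1*y2,  # i component
        r1*y2 - x1*z2 + y1*r2 + z1*x2,  # j component
        r1*z2 + x1*y2 - y1*x2 + z1*r2,  # k component
    ])
```

The product satisfies i*j = k and is norm-multiplicative, two identities that make it a convenient sanity check for a quaternion layer implementation.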