Integrated Deep and Shallow Networks for Salient Object Detection
Deep convolutional neural network (CNN) based salient object detection
methods have achieved state-of-the-art performance, outperforming
unsupervised methods by a wide margin. In this paper, we propose to integrate
deep and unsupervised saliency for salient object detection under a unified
framework. Specifically, our method takes results of unsupervised saliency
(Robust Background Detection, RBD) and normalized color images as inputs, and
directly learns an end-to-end mapping between inputs and the corresponding
saliency maps. The color images are fed into a Fully Convolutional Neural
Network (FCNN) adapted from semantic segmentation to exploit high-level
semantic cues for salient object detection. Then the results from the deep FCNN
and RBD are concatenated and fed into a shallow network that maps the concatenated
feature maps to saliency maps. Finally, to obtain a spatially consistent
saliency map with sharp object boundaries, we fuse superpixel-level saliency
maps at multiple scales. Extensive experimental results on 8 benchmark datasets
demonstrate that the proposed method outperforms the state-of-the-art
approaches by a clear margin.
Comment: Accepted by IEEE International Conference on Image Processing (ICIP)
201
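The final multi-scale fusion step can be illustrated with a minimal NumPy sketch. The nearest-neighbour upsampling (via `np.kron`) and uniform averaging are assumptions for illustration; the abstract does not specify the paper's exact fusion weights or interpolation scheme.

```python
import numpy as np

def fuse_multiscale(saliency_maps):
    """Fuse per-scale saliency maps: upsample each square map to the
    finest resolution (nearest neighbour) and average them.

    Assumes each map's side length evenly divides the finest one."""
    target = max(m.shape[0] for m in saliency_maps)
    upsampled = []
    for m in saliency_maps:
        factor = target // m.shape[0]
        # Kronecker product with an all-ones block = nearest-neighbour upsampling
        upsampled.append(np.kron(m, np.ones((factor, factor))))
    return np.mean(upsampled, axis=0)
```

Averaging across scales suppresses spurious responses that appear at only one scale, which is one simple way to obtain the spatial consistency the abstract describes.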
Picasso: A Modular Framework for Visualizing the Learning Process of Neural Network Image Classifiers
Picasso is a free open-source (Eclipse Public License) web application
written in Python for rendering standard visualizations useful for analyzing
convolutional neural networks. Picasso ships with occlusion maps and saliency
maps, two visualizations which help reveal issues that evaluation metrics like
loss and accuracy might hide: for example, learning a proxy classification
task. Picasso works with the Tensorflow deep learning framework, and Keras
(when the model can be loaded into the Tensorflow backend). Picasso can be used
with minimal configuration by deep learning researchers and engineers alike
across various neural network architectures. Adding new visualizations is
simple: the user can specify their visualization code and HTML template
separately from the application code.
Comment: 9 pages, submission to the Journal of Open Research Software,
github.com/merantix/picass
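The occlusion-map visualization Picasso ships with can be sketched framework-independently: slide an occluding patch over the input and record how much the model's score drops at each position. This NumPy version is a simplified illustration of the general technique, not Picasso's actual implementation; `score_fn` stands in for any classifier's scoring function.

```python
import numpy as np

def occlusion_map(image, score_fn, patch=2, baseline=0.0):
    """Slide a patch of constant value `baseline` over a 2-D image and
    record the drop in the classifier's score at each position; large
    drops mark regions the model relies on for its prediction."""
    h, w = image.shape
    ref = score_fn(image)
    heat = np.zeros((h - patch + 1, w - patch + 1))
    for i in range(h - patch + 1):
        for j in range(w - patch + 1):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            heat[i, j] = ref - score_fn(occluded)
    return heat
```

A model that has learned a proxy task (the failure mode the abstract mentions) shows up here as high sensitivity in regions unrelated to the nominal target class.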
Deep Joint Entity Disambiguation with Local Neural Attention
We propose a novel deep learning model for joint document-level entity
disambiguation, which leverages learned neural representations. Key components
are entity embeddings, a neural attention mechanism over local context windows,
and a differentiable joint inference stage for disambiguation. Our approach
thereby combines benefits of deep learning with more traditional approaches
such as graphical models and probabilistic mention-entity maps. Extensive
experiments show that we are able to obtain competitive or state-of-the-art
accuracy at moderate computational costs.
Comment: Conference on Empirical Methods in Natural Language Processing
(EMNLP) 2017 long paper
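The core of an attention mechanism over a local context window can be sketched as scoring each context word embedding against a candidate entity embedding and forming a weighted context representation. This is a bare dot-product-attention illustration; the paper's model additionally uses learned parameters, which are omitted here.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(entity_vec, context_vecs):
    """Score each context word against the candidate entity embedding
    and return the attention-weighted context vector plus the weights."""
    scores = context_vecs @ entity_vec        # one score per context word
    weights = softmax(scores)                 # normalize to a distribution
    return weights @ context_vecs, weights
```

Words whose embeddings align with the candidate entity receive higher weight, so the context representation emphasizes the most disambiguating evidence.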
DIMAL: Deep Isometric Manifold Learning Using Sparse Geodesic Sampling
This paper explores a fully unsupervised deep learning approach for computing
distance-preserving maps that generate low-dimensional embeddings for a certain
class of manifolds. We use a Siamese configuration to train a neural network
to solve the problem of least squares multidimensional scaling for generating
maps that approximately preserve geodesic distances. By training with only a
few landmarks, we show a significantly improved local and nonlocal
generalization of the isometric mapping as compared to analogous non-parametric
counterparts. Importantly, the combination of a deep-learning framework with a
multidimensional scaling objective enables a numerical analysis of network
architectures to aid in understanding their representation power. This provides
a geometric perspective on the generalizability of deep learning.
Comment: 10 pages, 11 Figures
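The least-squares multidimensional-scaling objective the network is trained on can be written down directly: the squared error between pairwise Euclidean distances in the embedding and target geodesic distances over the sampled landmark pairs. A minimal NumPy sketch of the loss (the Siamese network that produces the embeddings is omitted):

```python
import numpy as np

def mds_stress(embeddings, geodesic_dist):
    """Least-squares MDS stress: sum over pairs (i, j) of the squared
    difference between the embedded Euclidean distance and the target
    geodesic distance geodesic_dist[i, j]."""
    n = embeddings.shape[0]
    loss = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(embeddings[i] - embeddings[j])
            loss += (d - geodesic_dist[i, j]) ** 2
    return loss
```

Training only on a sparse set of landmark pairs, as the abstract describes, means this sum runs over far fewer pairs than the full dataset, while the network is expected to generalize the isometry to unseen points.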