Unsupervised Learning of Edges
Data-driven approaches for edge detection have proven effective and achieve
top results on modern benchmarks. However, all current data-driven edge
detectors require manual supervision for training in the form of hand-labeled
region segments or object boundaries. Specifically, human annotators mark
semantically meaningful edges which are subsequently used for training. Is this
form of strong, high-level supervision actually necessary to learn to
accurately detect edges? In this work we present a simple yet effective
approach for training edge detectors without human supervision. To this end we
exploit motion: specifically, the only input to our method is noisy semi-dense
matches between frames. We begin with only a rudimentary knowledge of edges (in
the form of image gradients) and alternate between improving motion estimation
and edge detection. Using a large corpus of video
data, we show that edge detectors trained using our unsupervised scheme
approach the performance of the same methods trained with full supervision
(within 3-5%). Finally, we show that when using a deep network for the edge
detector, our approach provides a novel pre-training scheme for object
detection.
Comment: Camera-ready version for CVPR 201
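The "rudimentary knowledge of edges (in the form of image gradients)" that the method bootstraps from can be illustrated with a plain gradient-magnitude edge map. This is only a minimal sketch of that starting point, not the paper's implementation:

```python
import numpy as np

def gradient_edge_map(image):
    """Rudimentary edge strength from image gradients: the kind of
    low-level signal the unsupervised scheme starts from."""
    gy, gx = np.gradient(image.astype(float))  # per-axis finite differences
    mag = np.hypot(gx, gy)                     # gradient magnitude
    return mag / mag.max() if mag.max() > 0 else mag

# Toy image: a bright square on a dark background.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
edges = gradient_edge_map(img)  # peaks along the square's boundary
```

In the paper's scheme, a map like this would only seed the first round of training; the detector is then refined by alternating with motion estimation over video.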
Learning to Associate Words and Images Using a Large-scale Graph
We develop an approach for unsupervised learning of associations between
co-occurring perceptual events using a large graph, and apply it to solve the
image captcha of China's railroad system. The approach
is based on the principle of suspicious coincidence. In this particular
problem, a user is presented with a deformed picture of a Chinese phrase and
eight low-resolution images. They must quickly select the relevant images in
order to purchase their train tickets. This problem presents several
challenges: (1) the teaching labels for both the Chinese phrases and the images
were not available for supervised learning, (2) no pre-trained deep
convolutional neural networks are available for recognizing these Chinese
phrases or the presented images, and (3) each captcha must be solved within a
few seconds. We collected 2.6 million captchas, with 2.6 million deformed
Chinese phrases and over 21 million images. From these data, we constructed an
association graph, composed of over 6 million vertices, and linked these
vertices based on co-occurrence information and feature similarity between
pairs of images. We then trained a deep convolutional neural network to learn a
projection of the Chinese phrases onto a 230-dimensional latent space. Using
label propagation, we computed the likelihood of each of the eight images
conditioned on the latent space projection of the deformed phrase for each
captcha. The resulting system solved captchas with 77% accuracy in 2 seconds on
average. Our work, in answering this practical challenge, illustrates the power
of this class of unsupervised association learning techniques, which may be
related to the brain's general strategy for associating language stimuli with
visual objects on the principle of suspicious coincidence.
Comment: 8 pages, 7 figures, 14th Conference on Computer and Robot Vision 201
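The label-propagation step the abstract describes, spreading known labels over a co-occurrence graph to score unlabeled nodes, can be sketched on a toy graph. The graph weights, seed layout, and iteration count below are illustrative assumptions, not details of the paper's 6-million-vertex system:

```python
import numpy as np

def label_propagation(A, seed_labels, n_iter=50):
    """Propagate seed labels over a weighted co-occurrence graph.

    A           : (n, n) symmetric non-negative adjacency matrix
    seed_labels : (n, k) one-hot rows for seed nodes, zero rows otherwise
    Returns (n, k) soft label scores; seed rows stay clamped.
    """
    P = A / A.sum(axis=1, keepdims=True)     # row-normalized transitions
    F = seed_labels.astype(float).copy()
    clamped = seed_labels.sum(axis=1) > 0    # seeds keep their labels
    for _ in range(n_iter):
        F = P @ F                            # diffuse labels to neighbors
        F[clamped] = seed_labels[clamped]
    return F

# Toy graph: nodes 0-1 co-occur strongly, 2-3 co-occur strongly,
# with a weak spurious link between 1 and 2.
A = np.array([[0.0, 1.0, 0.0, 0.0],
              [1.0, 0.0, 0.1, 0.0],
              [0.0, 0.1, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0]])
seeds = np.array([[1, 0],   # node 0: class A
                  [0, 0],
                  [0, 0],
                  [0, 1]])  # node 3: class B
F = label_propagation(A, seeds)  # node 1 leans A, node 2 leans B
```

In the captcha system, the analogous computation scores each of the eight candidate images against the latent-space projection of the deformed phrase.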
Deep Learning for Free-Hand Sketch: A Survey
Free-hand sketches are highly illustrative, and have been widely used by
humans to depict objects or stories from ancient times to the present. The
recent prevalence of touchscreen devices has made sketch creation a much easier
task than ever and consequently made sketch-oriented applications increasingly
popular. The progress of deep learning has immensely benefited free-hand sketch
research and applications. This paper presents a comprehensive survey of the
deep learning techniques oriented at free-hand sketch data, and the
applications that they enable. The main contents of this survey include: (i) A
discussion of the intrinsic traits and unique challenges of free-hand sketch,
to highlight the essential differences between sketch data and other data
modalities, e.g., natural photos. (ii) A review of the developments of
free-hand sketch research in the deep learning era, by surveying existing
datasets, research topics, and the state-of-the-art methods through a detailed
taxonomy and experimental evaluation. (iii) Promotion of future work via a
discussion of bottlenecks, open problems, and potential research directions for
the community.
Comment: This paper is accepted by IEEE TPAMI
The Zero Resource Speech Challenge 2017
We describe a new challenge aimed at discovering subword and word units from
raw speech. This challenge is the followup to the Zero Resource Speech
Challenge 2015. It aims at constructing systems that generalize across
languages and adapt to new speakers. The design features and evaluation metrics
of the challenge are presented and the results of seventeen models are
discussed.
Comment: IEEE ASRU (Automatic Speech Recognition and Understanding) 2017,
Okinawa, Japan