SuperPoint: Self-Supervised Interest Point Detection and Description
This paper presents a self-supervised framework for training interest point
detectors and descriptors suitable for a large number of multiple-view geometry
problems in computer vision. As opposed to patch-based neural networks, our
fully-convolutional model operates on full-sized images and jointly computes
pixel-level interest point locations and associated descriptors in one forward
pass. We introduce Homographic Adaptation, a multi-scale, multi-homography
approach for boosting interest point detection repeatability and performing
cross-domain adaptation (e.g., synthetic-to-real). Our model, when trained on
the MS-COCO generic image dataset using Homographic Adaptation, is able to
repeatedly detect a much richer set of interest points than the initial
pre-adapted deep model and any other traditional corner detector. The final
system gives rise to state-of-the-art homography estimation results on HPatches
when compared to LIFT, SIFT and ORB.
Comment: Camera-ready version for CVPR 2018 Deep Learning for Visual SLAM Workshop (DL4VSLAM2018)
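To make the Homographic Adaptation idea concrete, here is a minimal numpy/OpenCV sketch of the multi-homography aggregation it describes: sample random homographies, run a base detector on each warped copy of the image, and average the un-warped response maps. The corner-jitter sampling scheme, the `detect` callback, and all parameter values below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
import cv2  # OpenCV, used for homography estimation and warping


def random_homography(h, w, max_shift=0.15):
    """Sample a random homography by jittering the four image corners."""
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    jitter = np.random.uniform(-max_shift, max_shift, (4, 2)) * [w, h]
    dst = (src + jitter).astype(np.float32)
    return cv2.getPerspectiveTransform(src, dst)


def homographic_adaptation(image, detect, num_homographies=100):
    """Aggregate detector responses over many random warps of one image.

    `detect` is a stand-in for the trained base detector: it maps an
    HxW image to an HxW float32 interest-point heatmap.
    """
    h, w = image.shape[:2]
    accum = np.zeros((h, w), np.float32)
    counts = np.zeros((h, w), np.float32)
    for _ in range(num_homographies):
        H = random_homography(h, w)
        warped = cv2.warpPerspective(image, H, (w, h))
        heat = detect(warped)
        # Map the heatmap back into the original frame with the inverse warp,
        # and track which pixels were actually covered by this warp.
        H_inv = np.linalg.inv(H)
        accum += cv2.warpPerspective(heat, H_inv, (w, h))
        counts += cv2.warpPerspective(np.ones((h, w), np.float32), H_inv, (w, h))
    return accum / np.maximum(counts, 1e-6)
```

Averaging over warps rewards points that are re-detected under many viewpoints, which is what the paper means by boosting repeatability before the aggregated detections are used as self-supervised labels.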
ParseNet: Looking Wider to See Better
We present a technique for adding global context to deep convolutional
networks for semantic segmentation. The approach is simple, using the average
feature for a layer to augment the features at each location. In addition, we
study several idiosyncrasies of training, significantly increasing the
performance of baseline networks (e.g. from FCN). When we add our proposed
global feature, and a technique for learning normalization parameters, accuracy
increases consistently even over our improved versions of the baselines. Our
proposed approach, ParseNet, achieves state-of-the-art performance on SiftFlow
and PASCAL-Context with small additional computational cost over baselines, and
near current state-of-the-art performance on PASCAL VOC 2012 semantic
segmentation with a simple approach. Code is available at
https://github.com/weiliu89/caffe/tree/fcn
Comment: ICLR 2016 submission
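The global-feature idea is simple enough to sketch. The reference code at the URL above is Caffe; below is a hedged PyTorch re-sketch of the core operation: global average pooling, tiling ("unpooling") the pooled vector back to the spatial grid, and concatenation with the local features after channel-wise L2 normalization with learned scales. The class names and the initial scale value are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class L2NormScale(nn.Module):
    """Channel-wise L2 normalization with a learned per-channel scale,
    used to balance feature magnitudes before concatenation."""

    def __init__(self, channels, init_scale=10.0):
        super().__init__()
        self.scale = nn.Parameter(torch.full((channels,), init_scale))

    def forward(self, x):
        x = F.normalize(x, p=2, dim=1)           # unit L2 norm per location
        return x * self.scale.view(1, -1, 1, 1)  # learned rescaling


class GlobalContextModule(nn.Module):
    """ParseNet-style context: pool a feature map to a single global vector,
    tile it back to the spatial size, and concatenate with local features."""

    def __init__(self, channels):
        super().__init__()
        self.norm_local = L2NormScale(channels)
        self.norm_global = L2NormScale(channels)

    def forward(self, feat):
        n, c, h, w = feat.shape
        pooled = F.adaptive_avg_pool2d(feat, 1)  # global average feature
        tiled = pooled.expand(n, c, h, w)        # "unpool" by tiling
        return torch.cat([self.norm_local(feat),
                          self.norm_global(tiled)], dim=1)
```

The learned normalization is the part the abstract calls out: without it, the global and local features can differ in magnitude by orders of magnitude, and naive concatenation lets one branch dominate.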
What's Cookin'? Interpreting Cooking Videos using Text, Speech and Vision
We present a novel method for aligning a sequence of instructions to a video
of someone carrying out a task. In particular, we focus on the cooking domain,
where the instructions correspond to the recipe. Our technique relies on an HMM
to align the recipe steps to the (automatically generated) speech transcript.
We then refine this alignment using a state-of-the-art visual food detector,
based on a deep convolutional neural network. We show that our technique
outperforms simpler techniques based on keyword spotting. It also enables
interesting applications, such as automatically illustrating recipes with
keyframes, and searching within a video for events of interest.
Comment: To appear in NAACL 2015
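To make the alignment step concrete, here is a small Viterbi sketch of a monotonic HMM in which the hidden state is the current recipe step and each transcript segment either stays on that step or advances to the next. The emission scores and the stay/advance probabilities below are placeholders; the paper's actual HMM parameterization is not reproduced here.

```python
import numpy as np


def align_steps_to_transcript(emission_logprob, stay_logprob=np.log(0.7)):
    """Viterbi alignment of recipe steps to transcript segments.

    `emission_logprob[t, s]` scores how well transcript segment t matches
    recipe step s (e.g., from word overlap). The HMM is monotonic: at each
    segment we either stay on the current step or advance to the next one.
    """
    T, S = emission_logprob.shape
    advance_logprob = np.log1p(-np.exp(stay_logprob))  # log(1 - p_stay)
    score = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    score[0, 0] = emission_logprob[0, 0]  # alignment starts at the first step
    for t in range(1, T):
        for s in range(S):
            stay = score[t - 1, s] + stay_logprob
            adv = score[t - 1, s - 1] + advance_logprob if s > 0 else -np.inf
            back[t, s] = s if stay >= adv else s - 1
            score[t, s] = max(stay, adv) + emission_logprob[t, s]
    # Backtrace from the best final state to recover the step sequence.
    path = [int(np.argmax(score[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(back[t, path[-1]])
    return path[::-1]  # recipe step index for each transcript segment
```

Once each transcript segment carries a step index, the visual food detector can be applied within each segment's frames to refine step boundaries, which is the refinement stage the abstract describes.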