Plane extraction for indoor place recognition
In this paper, we present an image-based plane extraction method well suited for real-time operation. Our approach exploits the assumption that the surrounding scene is mainly composed of planes oriented in known directions. Planes are detected from a single image using a voting scheme that takes the vanishing lines into account. Candidate planes are then validated and merged with a region-growing approach to detect planes in real time inside an unknown indoor environment. Using the associated plane homographies, it is possible to remove the perspective distortion, enabling standard place recognition algorithms to work in a viewpoint-invariant setup. Quantitative experiments performed on real-world images show the effectiveness of our approach compared with a very popular method.
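The rectification step described above relies on applying the inverse of a plane's homography to undo the perspective mapping. A minimal sketch of that operation on image points, with a purely hypothetical homography matrix `H` (in the paper it would come from the voting and region-growing stages):

```python
import numpy as np

# Hypothetical 3x3 homography mapping a fronto-parallel plane into the image.
H = np.array([
    [1.0, 0.2, 5.0],
    [0.1, 1.0, 3.0],
    [0.0, 0.001, 1.0],
])

def warp_points(H, pts):
    """Apply a 3x3 homography to Nx2 points via homogeneous coordinates."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # lift to homogeneous
    mapped = pts_h @ H.T                              # apply homography
    return mapped[:, :2] / mapped[:, 2:3]             # divide out scale

# Rectify the corners of a detected plane region by inverting H.
corners = np.array([[0.0, 0.0], [100.0, 0.0], [100.0, 100.0], [0.0, 100.0]])
rectified = warp_points(np.linalg.inv(H), corners)
```

In practice the same inverse homography would be applied to the whole image region (e.g. with an image-warping routine) before running a standard place recognition descriptor on the rectified patch.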
Appearance-based localization for mobile robots using digital zoom and visual compass
This paper describes a localization system for mobile robots moving in dynamic indoor environments, which uses probabilistic integration of visual appearance and odometry information. The approach is based on a novel image matching algorithm for appearance-based place recognition that integrates digital zooming, to extend the area of application, and a visual compass. Ambiguous information used for recognizing places is resolved with multiple hypothesis tracking and a selection procedure inspired by Markov localization. This enables the system to deal with perceptual aliasing or an absence of reliable sensor data. It has been implemented on a robot operating in an office scenario, and the robustness of the approach has been demonstrated experimentally.
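The Markov-localization-inspired selection the abstract mentions can be sketched as a discrete Bayes filter over candidate places: predict with an odometry-driven transition model, then weight by the appearance-match likelihood. This is not the paper's implementation; the transition and likelihood numbers below are illustrative assumptions:

```python
import numpy as np

belief = np.full(4, 0.25)                    # uniform prior over 4 places
transition = 0.8 * np.eye(4) + 0.05          # mostly stay put; rows sum to 1
likelihood = np.array([0.1, 0.7, 0.1, 0.1])  # hypothetical image-match scores

belief = transition @ belief                 # predict step (odometry model)
belief *= likelihood                         # update with appearance evidence
belief /= belief.sum()                       # renormalize to a distribution
```

Perceptual aliasing shows up here as a flat `likelihood`; in that case the prediction step keeps the belief coherent until a distinctive observation arrives.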
Distinctive-attribute Extraction for Image Captioning
Image captioning, an open research issue, has evolved with the progress of deep neural networks. Convolutional neural networks (CNNs) and recurrent neural networks (RNNs) are employed to compute image features and generate natural language descriptions, respectively. In previous works, captions involving semantic descriptions were generated by feeding additional information into the RNNs. In this paper, we propose a distinctive-attribute extraction (DaE) scheme that explicitly encourages significant meanings, so as to generate an accurate caption describing the overall meaning of the image and its unique situation. Specifically, the captions of training images are analyzed with term frequency-inverse document frequency (TF-IDF), and the resulting semantic information is used to train the extraction of distinctive attributes for inferring captions. The proposed scheme is evaluated on challenge data, and it improves objective performance while describing images in more detail.
Comment: 14 main pages, 4 supplementary pages
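The TF-IDF analysis of training captions can be sketched from scratch: words that are frequent in one caption but rare across the corpus score highest and become candidate distinctive attributes. This is a minimal illustration, not the authors' pipeline, and the captions are made-up examples:

```python
import math
from collections import Counter

captions = [
    "a brown dog runs on the beach",
    "a black dog sleeps on the sofa",
    "a red kite flies over the beach",
]

def tf_idf(docs):
    """Return one dict per document mapping word -> TF-IDF score."""
    tokenized = [d.split() for d in docs]
    # Document frequency: number of captions containing each word.
    df = Counter(w for toks in tokenized for w in set(toks))
    n = len(docs)
    scores = []
    for toks in tokenized:
        tf = Counter(toks)
        scores.append({w: (tf[w] / len(toks)) * math.log(n / df[w])
                       for w in tf})
    return scores

scores = tf_idf(captions)
# Most distinctive word in the first caption (ties broken by word order).
top = max(scores[0], key=scores[0].get)
```

Words like "the" appear in every caption, so their IDF factor is zero and they are suppressed, while color and action words survive as attributes.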