Categorization of indoor places by combining local binary pattern histograms of range and reflectance data from laser range finders
This paper presents an approach to categorize typical places in indoor environments using 3D scans provided by a laser range finder. Examples of such places are offices, laboratories, or kitchens. In our method, we combine the range and reflectance data from the laser scan for the final categorization of places. Range and reflectance images are transformed into histograms of local binary patterns and combined into a single feature vector. This vector is later classified using support vector machines. The results of the presented experiments demonstrate the capability of our technique to categorize indoor places with high accuracy. We also show that the combination of range and reflectance information improves the final categorization results in comparison with a single modality.
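As an illustrative sketch (not the authors' implementation), the pipeline above — LBP histograms of the range and reflectance channels, concatenated into one feature vector and classified with an SVM — might look as follows. The basic 8-neighbour LBP and the synthetic "scan" images are assumptions for the demo:

```python
import numpy as np
from sklearn.svm import SVC

def lbp_image(img):
    """8-neighbour local binary pattern code for every interior pixel."""
    c = img[1:-1, 1:-1]
    codes = np.zeros(c.shape, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:img.shape[0] - 1 + dy,
                    1 + dx:img.shape[1] - 1 + dx]
        codes |= (neigh >= c).astype(np.uint8) << bit
    return codes

def lbp_histogram(img):
    """Normalised 256-bin histogram of LBP codes."""
    hist, _ = np.histogram(lbp_image(img), bins=256, range=(0, 256))
    return hist / hist.sum()

def place_feature(range_img, refl_img):
    """Concatenate range and reflectance LBP histograms into a 512-D vector."""
    return np.concatenate([lbp_histogram(range_img), lbp_histogram(refl_img)])

# Tiny synthetic demo: a structured (gradient-like) range texture vs pure noise.
rng = np.random.default_rng(0)
grad = np.add.outer(np.arange(32, dtype=float), np.arange(32, dtype=float))
feats, labels = [], []
for i in range(4):
    noise = rng.random((32, 32))
    range_img = grad + noise if i < 2 else 40 * noise
    feats.append(place_feature(range_img, rng.random((32, 32))))
    labels.append(0 if i < 2 else 1)
clf = SVC(kernel="linear").fit(np.stack(feats), labels)

query = place_feature(grad + rng.random((32, 32)), rng.random((32, 32)))
pred = clf.predict(query[None, :])
```

The comparison-based LBP codes are invariant to monotonic intensity changes, which is one reason the same descriptor works for both range and reflectance channels.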
Categorization of indoor places using the Kinect sensor
The categorization of places in indoor environments is an important capability for service robots working and interacting with humans. In this paper we present a method to categorize different areas in indoor environments using a mobile robot equipped with a Kinect camera. Our approach transforms depth and greyscale images taken at each place into histograms of local binary patterns (LBPs) whose dimensionality is further reduced following a uniform criterion. The histograms are then combined into a single feature vector which is categorized using a supervised method. In this work we compare the performance of support vector machines and random forests as supervised classifiers. Finally, we apply our technique to distinguish five different place categories: corridors, laboratories, offices, kitchens, and study rooms. Experimental results show that we can categorize these places with high accuracy using our approach.
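The uniform criterion for reducing LBP histogram dimensionality can be sketched generically: an 8-bit code is "uniform" if it has at most two 0/1 transitions around the circle, which yields 58 such codes; all non-uniform codes share one extra bin, giving 59 bins instead of 256. This is a standard construction, not the authors' code:

```python
import numpy as np

def uniform_lbp_mapping():
    """Map each 8-bit LBP code to one of 59 bins: the 58 'uniform'
    codes (at most two 0/1 transitions in the circular bit string)
    each get their own bin; all non-uniform codes share bin 58."""
    mapping = np.full(256, 58, dtype=np.intp)
    nxt = 0
    for code in range(256):
        bits = [(code >> i) & 1 for i in range(8)]
        transitions = sum(bits[i] != bits[(i + 1) % 8] for i in range(8))
        if transitions <= 2:
            mapping[code] = nxt
            nxt += 1
    return mapping

def reduce_histogram(hist256, mapping):
    """Collapse a 256-bin LBP histogram onto the 59 uniform bins."""
    reduced = np.zeros(59)
    np.add.at(reduced, mapping, hist256)
    return reduced

mapping = uniform_lbp_mapping()
hist59 = reduce_histogram(np.full(256, 1.0 / 256), mapping)
```

The reduced 59-D histograms can then be concatenated and passed to any supervised classifier, e.g. an SVM or a random forest as in the comparison above.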
Place Categorization and Semantic Mapping on a Mobile Robot
In this paper we focus on the challenging problem of place categorization and semantic mapping on a robot without environment-specific training. Motivated by the ongoing success of convolutional networks in various visual recognition tasks, we build our system upon a state-of-the-art convolutional network. We overcome its closed-set limitations by complementing the network with a series of one-vs-all classifiers that can learn to recognize new semantic classes online. Prior domain knowledge is incorporated by embedding the classification system into a Bayesian filter framework that also ensures temporal coherence. We evaluate the classification accuracy of the system on a robot that maps a variety of places on our campus in real-time. We show how semantic information can boost robotic object detection performance and how the semantic map can be used to modulate the robot's behaviour during navigation tasks. The system is made available to the community as a ROS module.
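A minimal sketch of the temporal-coherence idea — a discrete Bayes filter smoothing per-frame classifier outputs over place categories. The stay-or-jump transition model below is an assumption for the demo; the transition model actually used on the robot is not specified here:

```python
import numpy as np

def bayes_filter_step(belief, likelihood, stay_prob=0.9):
    """One update of a discrete Bayes filter over place categories.
    Transition model (assumed): remain in the current place with
    probability stay_prob, otherwise jump uniformly to another place."""
    n = belief.size
    transition = np.full((n, n), (1.0 - stay_prob) / (n - 1))
    np.fill_diagonal(transition, stay_prob)
    predicted = transition.T @ belief          # prediction step
    posterior = predicted * likelihood         # measurement update
    return posterior / posterior.sum()

# Three place classes; the per-frame classifier mis-fires on frame 4,
# but the filter keeps the belief temporally coherent.
belief = np.full(3, 1.0 / 3.0)
frames = [np.array([0.7, 0.2, 0.1])] * 3 \
       + [np.array([0.2, 0.7, 0.1])] \
       + [np.array([0.7, 0.2, 0.1])] * 2
for likelihood in frames:
    belief = bayes_filter_step(belief, likelihood)
```

A single contradictory frame shifts the belief only slightly, so the reported category does not flicker — the practical benefit of embedding the classifiers in a filter.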
Learning Deep NBNN Representations for Robust Place Categorization
This paper presents an approach for semantic place categorization using data obtained from RGB cameras. Previous studies on visual place recognition and classification have shown that, by considering features derived from pre-trained Convolutional Neural Networks (CNNs) in combination with part-based classification models, high recognition accuracy can be achieved, even in the presence of occlusions and severe viewpoint changes. Inspired by these works, we propose to exploit local deep representations, representing images as sets of regions and applying a Naïve Bayes Nearest Neighbor (NBNN) model for image classification. As opposed to previous methods, where CNNs are merely used as feature extractors, our approach seamlessly integrates the NBNN model into a fully convolutional neural network. Experimental results show that the proposed algorithm outperforms previous methods based on pre-trained CNN models and that, when employed in challenging robot place recognition tasks, it is robust to occlusions and to environmental and sensor changes.
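The NBNN decision rule referred to above can be sketched generically: score each class by the summed squared distance from every local descriptor of the image to its nearest descriptor in that class's pool, and return the lowest-cost class. The descriptor pools below are synthetic stand-ins, not CNN features:

```python
import numpy as np

def nbnn_classify(descriptors, class_pools):
    """Naive Bayes Nearest Neighbor image-to-class distance:
    for each class, sum over all image descriptors the squared
    distance to the nearest descriptor in that class's pool;
    return the class with the smallest total cost."""
    best_label, best_cost = None, np.inf
    for label, pool in class_pools.items():
        d2 = ((descriptors[:, None, :] - pool[None, :, :]) ** 2).sum(axis=2)
        cost = d2.min(axis=1).sum()
        if cost < best_cost:
            best_label, best_cost = label, cost
    return best_label

# Synthetic pools of 8-D local descriptors for two hypothetical places.
rng = np.random.default_rng(1)
pools = {
    "office":  rng.normal(0.0, 0.5, size=(50, 8)),
    "kitchen": rng.normal(4.0, 0.5, size=(50, 8)),
}
query = rng.normal(0.0, 0.5, size=(20, 8))  # descriptors of an 'office' image
label = nbnn_classify(query, pools)
```

The contribution of the paper is precisely to avoid this non-parametric lookup at test time by folding the NBNN computation into the convolutional network itself; the sketch only shows the classical decision rule being integrated.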
Robust Place Categorization With Deep Domain Generalization
Traditional place categorization approaches in robot vision assume that training and test images have similar visual appearance. Therefore, any seasonal, illumination, and environmental changes typically lead to severe degradation in performance. To cope with this problem, recent works have adopted domain adaptation techniques. While effective, these methods assume that some prior information about the scenario where the robot will operate is available at training time. Unfortunately, in many cases, this assumption does not hold, as we often do not know where a robot will be deployed. To overcome this issue, in this paper, we present an approach that aims at learning classification models able to generalize to unseen scenarios. Specifically, we propose a novel deep learning framework for domain generalization. Our method develops from the intuition that, given a set of different classification models associated with known domains (e.g., corresponding to multiple environments, robots), the best model for a new sample in the novel domain can be computed directly at test time by optimally combining the known models. To implement our idea, we exploit recent advances in deep domain adaptation and design a convolutional neural network architecture with novel layers performing a weighted version of batch normalization. Our experiments, conducted on three common datasets for robot place categorization, confirm the validity of our contribution.
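A heavily simplified sketch of the weighted batch-normalization idea — normalizing a test sample with a convex combination of per-domain statistics. The affine BN parameters and the mechanism that estimates the combination weights are omitted, and combining variances linearly is an approximation, so this is an illustration of the principle rather than the paper's layer:

```python
import numpy as np

def weighted_batchnorm(x, domain_stats, weights, eps=1e-5):
    """Normalise a feature vector with a convex combination of
    per-domain batch-norm statistics (simplified sketch: no affine
    scale/shift, variances mixed linearly as an approximation)."""
    weights = np.asarray(weights, dtype=float)
    assert abs(weights.sum() - 1.0) < 1e-9, "weights must be convex"
    mean = sum(w * m for w, (m, v) in zip(weights, domain_stats))
    var = sum(w * v for w, (m, v) in zip(weights, domain_stats))
    return (x - mean) / np.sqrt(var + eps)

# Two known domains with different feature statistics; a sample from
# an unseen domain is normalised with an even mixture of the two.
stats = [(np.array([0.0, 1.0]), np.array([1.0, 4.0])),
         (np.array([2.0, 3.0]), np.array([1.0, 1.0]))]
x = np.array([1.0, 2.0])
z = weighted_batchnorm(x, stats, weights=[0.5, 0.5])
```

Setting the weights to a one-hot vector recovers ordinary per-domain batch normalization, which is the intuition behind selecting "the best model" as a combination of known ones.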
Conceptual spatial representations for indoor mobile robots
We present an approach for creating conceptual representations of human-made indoor environments using mobile robots. The concepts refer to spatial and functional properties of typical indoor environments. Following findings in cognitive psychology, our model is composed of layers representing maps at different levels of abstraction. The complete system is integrated in a mobile robot endowed with laser and vision sensors for place and object recognition. The system also incorporates a linguistic framework that actively supports the map acquisition process, and which is used for situated dialogue. Finally, we discuss the capabilities of the integrated system.
Knowledge Representation for Robots through Human-Robot Interaction
The representation of the knowledge needed by a robot to perform complex tasks is restricted by the limitations of perception. One possible way of overcoming this situation and designing "knowledgeable" robots is to rely on the interaction with the user. We propose a multi-modal interaction framework that allows the robot to effectively acquire knowledge about the environment where it operates. In particular, in this paper we present a rich representation framework that can be automatically built from the metric map annotated with the indications provided by the user. Such a representation then allows the robot to ground complex referential expressions for motion commands and to devise topological navigation plans to reach the target locations.
Comment: Knowledge Representation and Reasoning in Robotics Workshop at ICLP 201
Toward a unified PNT, Part 1: Complexity and context: Key challenges of multisensor positioning
The next generation of navigation and positioning systems must provide greater accuracy and reliability in a range of challenging environments to meet the needs of a variety of mission-critical applications. No single navigation technology is robust enough to meet these requirements on its own, so a multisensor solution is required. Known environmental features, such as signs, buildings, terrain height variation, and magnetic anomalies, may or may not be available for positioning. The system could be stationary, carried by a pedestrian, or on any type of land, sea, or air vehicle. Furthermore, for many applications, the environment and host behavior are subject to change. The expert knowledge problem is compounded by the fact that different modules in an integrated navigation system are often supplied by different organizations, who may be reluctant to share necessary design information if it is considered to be intellectual property that must be protected.