Appearance-based localization for mobile robots using digital zoom and visual compass
This paper describes a localization system for mobile robots moving in dynamic indoor environments, which uses probabilistic integration of visual appearance and odometry information. The approach is based on a novel image-matching algorithm for appearance-based place recognition that integrates digital zooming, to extend the area of application, and a visual compass. Ambiguous information used for recognizing places is resolved with multiple hypothesis tracking and a selection procedure inspired by Markov localization. This enables the system to deal with perceptual aliasing or the absence of reliable sensor data. It has been implemented on a robot operating in an office scenario, and the robustness of the approach has been demonstrated experimentally.
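The hypothesis-selection step described above can be illustrated as a discrete Bayes filter over candidate places, in the style of Markov localization. This is only a minimal sketch: the place set, transition model, and appearance likelihoods below are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a Markov-localization-style update over discrete places.
# Transition and likelihood values are illustrative assumptions.

def markov_update(belief, likelihood, transition):
    """One predict-update step of a discrete Bayes filter over places."""
    n = len(belief)
    # Predict: mix the belief through the place-to-place transition model.
    predicted = [sum(transition[j][i] * belief[j] for j in range(n))
                 for i in range(n)]
    # Update: weight by the appearance-matching likelihood, then normalize.
    posterior = [predicted[i] * likelihood[i] for i in range(n)]
    z = sum(posterior) or 1.0
    return [p / z for p in posterior]

belief = [1 / 3, 1 / 3, 1 / 3]        # uniform prior over three candidate places
transition = [[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]]         # the robot mostly stays put between frames
likelihood = [0.9, 0.3, 0.1]           # the image matcher favors place 0
belief = markov_update(belief, likelihood, transition)
```

Repeating this update over successive frames lets consistent appearance evidence dominate even when individual matches are ambiguous, which is how perceptual aliasing is resolved over time.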
Mixed Reality Architecture: a dynamic architectural topology
Architecture can be shown to structure patterns of co-presence and, in turn, to be structured itself by the rules and norms of the society present within it. This two-way relationship exists in a surprisingly stable framework, as fundamental changes to buildings are slow and costly. At the same time, change within organisations is increasingly rapid, and buildings are used to accommodate some of that change. This adaptation can be supported by telecommunication technologies, which overcome the need for co-presence during social interaction. However, this often results in a loss of accountability or ‘civic legibility’, as the link between physical location and social activity is broken. In response to these considerations, Mixed Reality Architecture (MRA) was developed. MRA links multiple physical spaces across a shared 3D virtual world. We report on the design of MRA, including the key concept of the Mixed Reality Architectural Cell, a novel architectural interface between physically remote architectural spaces. An in-depth, year-long study of six office-based MRACells used video recordings, the analysis of event logs, diaries and an interview survey. This produced a series of ethnographic vignettes describing social interaction within MRA in detail. In this paper we concentrate on the topological properties of MRA. It can be shown that the dynamic topology of MRA and the social interaction taking place within it are fundamentally intertwined. We discuss how topological adjacencies across virtual space change the integration of the architectural spaces in which MRA is installed. We further reflect on how the placement of MRA technology in different parts of an office space (deep or shallow) affects the nature of that particular space. Both of the above can be shown to influence movement through the building and the social interaction taking place within it. These findings are directly relevant to new buildings that need to be designed to accommodate future organisational change, but also to existing building stock that might be very hard to adapt. We are currently expanding the system to new sites and planning changes to the infrastructure of MRA as well as its interactional interface.
Seeing the invisible: from imagined to virtual urban landscapes
Urban ecosystems consist of infrastructure features working together to provide services for inhabitants. Infrastructure functions akin to an ecosystem, having dynamic relationships and interdependencies. However, with age, urban infrastructure can deteriorate and stop functioning. Additional pressures on infrastructure include urbanizing populations and a changing climate that exposes vulnerabilities. To manage the urban infrastructure ecosystem in a modernizing world, urban planners need to integrate a coordinated management plan for these co-located and dependent infrastructure features. To implement such a management practice, an improved method for communicating how these infrastructure features interact is needed. This study aims to define urban infrastructure as a system, identify the systematic barriers preventing implementation of a more coordinated management model, and develop a virtual reality tool to provide visualization of the spatial system dynamics of urban infrastructure. Data were collected from a stakeholder workshop that highlighted a lack of appreciation for the system dynamics of urban infrastructure. An urban ecology VR model was created to highlight the interconnectedness of infrastructure features. VR proved to be useful for communicating spatial information to urban stakeholders about the complexities of infrastructure ecology and the interactions between infrastructure features.
https://doi.org/10.1016/j.cities.2019.102559
Published version
Semantic labeling of places
Indoor environments can typically be divided into places with different functionalities, like corridors, kitchens, offices, or seminar rooms. We believe that such semantic information enables a mobile robot to accomplish a variety of tasks, such as human-robot interaction, path planning, or localization, more efficiently. In this paper, we propose an approach to classify places in indoor environments into different categories. Our approach uses AdaBoost to boost simple features extracted from vision and laser range data. Furthermore, we apply a Hidden Markov Model to take spatial dependencies between robot poses into account and to increase the robustness of the classification. Our technique has been implemented and tested on real robots as well as in simulation. Experiments presented in this paper demonstrate that our approach can be utilized to robustly classify places into semantic categories.
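The smoothing idea can be sketched as follows: per-pose classifier scores (a stand-in for the AdaBoost outputs) are decoded with a Viterbi pass over consecutive robot poses, so that an isolated noisy label is overridden by the spatial-dependency prior. The labels, transition matrix, and scores below are illustrative assumptions.

```python
# Sketch: HMM (Viterbi) smoothing of per-pose place classifications.
# All numbers are illustrative; they are not the paper's learned parameters.
import math

def viterbi(scores, transition):
    """Most likely label sequence given per-pose class scores and label transitions."""
    n_labels = len(transition)
    logp = [math.log(s) for s in scores[0]]   # implicit uniform initial prior
    back = []
    for obs in scores[1:]:
        new, ptr = [], []
        for j in range(n_labels):
            best = max(range(n_labels),
                       key=lambda i: logp[i] + math.log(transition[i][j]))
            ptr.append(best)
            new.append(logp[best] + math.log(transition[best][j]) + math.log(obs[j]))
        logp, back = new, back + [ptr]
    path = [max(range(n_labels), key=lambda j: logp[j])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

# Labels: 0 = corridor, 1 = room. The single noisy "room" vote at pose 2
# is overridden by the smoothness prior between adjacent poses.
trans = [[0.9, 0.1], [0.1, 0.9]]
scores = [[0.8, 0.2], [0.7, 0.3], [0.4, 0.6], [0.8, 0.2]]
print(viterbi(scores, trans))   # → [0, 0, 0, 0]
```

The transition matrix encodes the observation that a robot rarely switches place category between nearby poses, which is exactly the spatial dependency the abstract refers to.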
Conceptual spatial representations for indoor mobile robots
We present an approach for creating conceptual representations of human-made indoor environments using mobile robots. The concepts refer to spatial and functional properties of typical indoor environments. Following findings in cognitive psychology, our model is composed of layers representing maps at different levels of abstraction. The complete system is integrated in a mobile robot endowed with laser and vision sensors for place and object recognition. The system also incorporates a linguistic framework that actively supports the map acquisition process, and which is used for situated dialogue. Finally, we discuss the capabilities of the integrated system.
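The layered organization described above can be sketched as a simple data structure. The layer names and contents here are illustrative assumptions, not the paper's actual representation.

```python
# Sketch of a multi-layer map: each layer describes the same environment at a
# different level of abstraction. Layer contents are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class LayeredMap:
    metric: dict = field(default_factory=dict)       # e.g. laser-built geometry
    topological: dict = field(default_factory=dict)  # places and their connectivity
    conceptual: dict = field(default_factory=dict)   # semantic labels, e.g. "kitchen"

m = LayeredMap()
m.topological["area_1"] = {"neighbors": ["area_2"]}
# A conceptual label could be inferred from recognized objects (e.g. a stove):
m.conceptual["area_1"] = "kitchen"
```

Keeping the layers separate lets lower layers be rebuilt from sensor data while higher, more abstract layers remain usable for dialogue and task planning.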
Navigation, localization and stabilization of formations of unmanned aerial and ground vehicles
A leader-follower formation-driving algorithm developed for the control of heterogeneous groups of unmanned micro aerial and ground vehicles, stabilized under top-view relative localization, is presented in this paper. The core of the proposed method lies in a novel avoidance function, in which the entire 3D formation is represented by a convex hull projected along a desired path to be followed by the group. This representation of the formation yields non-collision trajectories for the robots and respects the requirement of direct visibility between team members in environments with static as well as dynamic obstacles, which is crucial for the top-view localization. The algorithm is suited for a simple yet stable vision-based navigation of the group (referred to as GeNav), which, together with the on-board relative localization, enables the deployment of large teams of micro-scale robots in environments without any available global localization system. We formulate a novel Model Predictive Control (MPC) based concept that enables the formation to respond to the changing environment and provides a robust solution that tolerates the failure of team members. The performance of the proposed method is verified by numerical and hardware experiments inspired by reconnaissance and surveillance missions.
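The geometric core of the avoidance function can be illustrated with the footprint test below: the formation is represented by the convex hull of the robots' 2D positions, and obstacle points are checked against that hull. This is only a sketch of the underlying geometry under assumed positions; the paper's projection of the hull along the path and the MPC layer are not reproduced here.

```python
# Sketch: convex-hull footprint of a formation and a point-vs-hull clearance test.
# Robot and obstacle positions are illustrative assumptions.

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def inside_hull(hull, q):
    """Point-in-convex-polygon test (hull vertices in CCW order)."""
    for i in range(len(hull)):
        a, b = hull[i], hull[(i + 1) % len(hull)]
        if (b[0] - a[0]) * (q[1] - a[1]) - (b[1] - a[1]) * (q[0] - a[0]) < 0:
            return False
    return True

formation = [(0.0, 0.0), (1.0, 0.5), (1.0, -0.5), (2.0, 0.0)]  # robot positions
hull = convex_hull(formation)
print(inside_hull(hull, (1.0, 0.0)))   # obstacle inside the footprint → True
print(inside_hull(hull, (3.0, 0.0)))   # obstacle clear of the formation → False
```

In the full method this test would be repeated for the hull placed at sampled poses along the desired path, so a candidate path is rejected whenever any obstacle falls inside the swept footprint.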
Role Playing Learning for Socially Concomitant Mobile Robot Navigation
In this paper, we present the Role Playing Learning (RPL) scheme for a mobile robot to navigate socially with its human companion in populated environments. Neural networks (NN) are constructed to parameterize a stochastic policy that directly maps sensory data collected by the robot to its velocity outputs, while respecting a set of social norms. An efficient simulative learning environment is built with maps and pedestrian trajectories collected from a number of real-world crowd data sets. In each learning iteration, a robot equipped with the NN policy is created virtually in the learning environment to play the role of an accompanying pedestrian and navigate towards a goal in a socially concomitant manner. Thus, we call this process Role Playing Learning, which is formulated under a reinforcement learning (RL) framework. The NN policy is optimized end-to-end using Trust Region Policy Optimization (TRPO), taking into account the imperfection of the robot's sensor measurements. Simulative and experimental results are provided to demonstrate the efficacy and superiority of our method.
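The policy structure described above, a stochastic mapping from sensory data to velocity outputs, can be sketched as follows. The tiny network, its weights, and the noise model are illustrative assumptions; TRPO's trust-region update itself is not reproduced here.

```python
# Sketch: a Gaussian stochastic policy mapping range readings to a (v, w) command.
# Network size, weights, and noise scale are illustrative assumptions.
import math
import random

class GaussianPolicy:
    def __init__(self, n_obs, n_hidden=4, n_act=2, seed=0):
        rnd = random.Random(seed)
        self.w1 = [[rnd.gauss(0, 0.1) for _ in range(n_hidden)]
                   for _ in range(n_obs)]
        self.w2 = [[rnd.gauss(0, 0.1) for _ in range(n_act)]
                   for _ in range(n_hidden)]
        self.std = [0.1] * n_act   # action noise; a learnable parameter in training
        self.rnd = rnd

    def act(self, obs):
        # One hidden tanh layer, then a linear output giving the mean command.
        h = [math.tanh(sum(o * w for o, w in zip(obs, col)))
             for col in zip(*self.w1)]
        mean = [sum(hi * w for hi, w in zip(h, col)) for col in zip(*self.w2)]
        # Sample (linear velocity, angular velocity) around the network's mean.
        return [m + self.rnd.gauss(0, s) for m, s in zip(mean, self.std)]

policy = GaussianPolicy(n_obs=8)        # e.g. 8 range readings -> (v, w)
action = policy.act([1.0] * 8)
```

Sampling around the mean is what makes the policy stochastic: during RL training the noise drives exploration, while the trust-region update constrains how far each optimization step may move the policy distribution.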
LCrowdV: Generating Labeled Videos for Simulation-based Crowd Behavior Learning
We present a novel procedural framework to generate an arbitrary number of labeled crowd videos (LCrowdV). The resulting crowd video datasets are used to design accurate algorithms or training models for crowded-scene understanding. Our overall approach is composed of two components: a procedural simulation framework for generating crowd movements and behaviors, and a procedural rendering framework for generating different videos or images. Each video or image is automatically labeled based on the environment, number of pedestrians, density, behavior, flow, lighting conditions, viewpoint, noise, etc. Furthermore, we can increase the realism by combining synthetically generated behaviors with real-world background videos. We demonstrate the benefits of LCrowdV over prior labeled crowd datasets by improving the accuracy of pedestrian detection and crowd behavior classification algorithms. LCrowdV will be released on the WWW.
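The automatic-labeling idea can be sketched as a parameter sweep: because each clip is generated from known simulation parameters, its ground-truth label record comes for free. The parameter names and values below are illustrative assumptions, not LCrowdV's actual label schema.

```python
# Sketch: each generated clip inherits its simulation parameters as labels.
# Parameter names and values are illustrative assumptions.
from itertools import product

densities = ["low", "medium", "high"]
behaviors = ["aggressive", "conservative"]
viewpoints = ["top-down", "eye-level"]

def generate_labeled_clips():
    for i, (d, b, v) in enumerate(product(densities, behaviors, viewpoints)):
        # A real pipeline would simulate and render a clip here;
        # this sketch only emits the clip's label record.
        yield {"clip_id": i, "density": d, "behavior": b, "viewpoint": v}

clips = list(generate_labeled_clips())
print(len(clips))   # 12 parameter combinations
```

Sweeping the full cross-product of parameters is what makes the dataset size arbitrary: adding one more value to any parameter list multiplies the number of labeled clips.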