Reactive exploration with self-reconfigurable systems
Modular self-reconfigurable robots (MSR) are robots composed of modules that translate over one another to permit reconfiguration and locomotion. This construction allows them to traverse a broader range of environments than legged or wheeled robots. MSR also possess the ability to split apart to parallelize their efforts, or to combine with each other to produce a larger robot capable of navigating more diverse terrain. In this paper we discuss a state-based reactive architecture for the distributed control of cooperative MSR teams in unknown environments. The MSR use local sensory data from the environment and a model of the team to select their actions. These actions include selecting a destination, aborting a route to a destination, splitting into two separate robots, and combining with another robot. In simulation, team configuration, environmental complexity, and behavioral parameters are varied to discern the most effective circumstances for the architecture and MSR. Our results show that the best configuration of the system is highly dependent on the environment.
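The action set described above can be sketched as a reactive rule table that maps local sensing to one of the four actions. The state fields, thresholds, and rule ordering below are illustrative assumptions, not the paper's actual policy:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    SELECT_DESTINATION = auto()
    ABORT_ROUTE = auto()
    SPLIT = auto()
    COMBINE = auto()

@dataclass
class LocalState:
    progress_blocked: bool   # local sensing: current route no longer traversable
    frontier_count: int      # unexplored regions visible to this robot
    nearby_teammate: bool    # a teammate is within combining range
    terrain_too_rough: bool  # current shape cannot cross the observed terrain

def select_action(state: LocalState) -> Action:
    """Reactive policy: each rule maps local sensing directly to one action."""
    if state.terrain_too_rough and state.nearby_teammate:
        return Action.COMBINE            # form a larger robot for rough terrain
    if state.progress_blocked:
        return Action.ABORT_ROUTE        # give up on the current destination
    if state.frontier_count > 1:
        return Action.SPLIT              # parallelize over multiple frontiers
    return Action.SELECT_DESTINATION     # default: pick a new goal
```

Because each rule depends only on local sensing and a model of nearby teammates, the same policy can run on every robot without central coordination.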
SOCIALGYM: A Framework for Benchmarking Social Robot Navigation
Robots moving safely and in a socially compliant manner in dynamic human
environments is an essential benchmark for long-term robot autonomy. However,
it is not feasible to learn and benchmark social navigation behaviors entirely
in the real world, as learning is data-intensive, and it is challenging to make
safety guarantees during training. Therefore, simulation-based benchmarks that
provide abstractions for social navigation are required. A framework for these
benchmarks would need to support a wide variety of learning approaches, be
extensible to the broad range of social navigation scenarios, and abstract away
the perception problem to focus on social navigation explicitly. While there
have been many proposed solutions, including high fidelity 3D simulators and
grid world approximations, no existing solution satisfies all of the
aforementioned properties for learning and evaluating social navigation
behaviors. In this work, we propose SOCIALGYM, a lightweight 2D simulation
environment for robot social navigation designed with extensibility in mind,
and a benchmark scenario built on SOCIALGYM. Further, we present benchmark
results that compare and contrast human-engineered and model-based learning
approaches to a suite of off-the-shelf Learning from Demonstration (LfD) and
Reinforcement Learning (RL) approaches applied to social robot navigation.
These results demonstrate the data efficiency, task performance, social
compliance, and environment transfer capabilities for each of the policies
evaluated to provide a solid grounding for future social navigation research.
Comment: Published in IROS202
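A lightweight 2D social-navigation environment of the kind described above reduces, at its core, to a stepped simulation that rewards goal progress while penalizing personal-space violations. The sketch below is a minimal stand-in under assumed dynamics and reward weights; it is not the SOCIALGYM API:

```python
import numpy as np

class SocialNav2D:
    """Minimal 2D social-navigation environment sketch (not the SOCIALGYM API)."""

    def __init__(self, goal, humans, dt=0.1, safe_dist=0.5):
        self.goal = np.asarray(goal, float)
        self.humans = np.asarray(humans, float)  # (N, 2) pedestrian positions
        self.dt, self.safe_dist = dt, safe_dist
        self.pos = np.zeros(2)                   # robot starts at the origin

    def step(self, velocity):
        """Advance one timestep; return (observation, reward, done)."""
        self.pos = self.pos + np.asarray(velocity, float) * self.dt
        dist_goal = np.linalg.norm(self.goal - self.pos)
        dist_human = np.linalg.norm(self.humans - self.pos, axis=1).min()
        # Reward: progress toward the goal, penalized for entering personal space.
        reward = -dist_goal - (10.0 if dist_human < self.safe_dist else 0.0)
        done = dist_goal < 0.1
        obs = np.concatenate([self.goal - self.pos,
                              (self.humans - self.pos).flatten()])
        return obs, reward, done
```

Abstracting perception into relative positions of the goal and pedestrians, as the observation does here, is what lets a benchmark focus on the social-navigation policy rather than the perception problem.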
Eye-tracking Analysis of Interactive 3D Geovisualization
This paper describes a new tool for eye-tracking data and their analysis with the use of interactive 3D models. This tool makes it easier to analyse interactive 3D models than the time-consuming, frame-by-frame investigation of captured screen recordings with superimposed scanpaths. The main function of this tool, called 3DgazeR, is to calculate 3D coordinates (X, Y, Z coordinates of the 3D scene) for individual points of view. These 3D coordinates can be calculated from the position and orientation of a virtual camera and the 2D coordinates of the gaze on the screen. The functionality of 3DgazeR is introduced in a case study using Digital Elevation Models as stimuli. The purpose of the case study was to verify the functionality of the tool and discover the most suitable visualization methods for geographic 3D models. Five selected methods are presented in the results section of the paper. Most of the output was created in a Geographic Information System. 3DgazeR works with generic CSV files, the SMI eye-tracker, and the low-cost EyeTribe tracker connected with the open source application OGAMA. It can compute 3D coordinates from raw data and fixations.
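The core computation, recovering a 3D scene coordinate from the camera pose and a 2D gaze point, amounts to unprojecting the gaze pixel into a world-space ray and intersecting it with the scene surface. The sketch below uses a flat horizontal plane as a stand-in for a DEM surface, and its conventions (pinhole camera looking down its local -Z axis, given vertical field of view) are assumptions, not 3DgazeR's actual implementation:

```python
import numpy as np

def gaze_to_3d(cam_pos, cam_rot, gaze_px, screen_wh, fov_y_deg, ground_z=0.0):
    """Cast a ray from the camera through the 2D gaze point and intersect it
    with a horizontal plane (a stand-in for the DEM surface).
    cam_rot is a 3x3 world-from-camera rotation; the camera looks along -Z."""
    w, h = screen_wh
    aspect = w / h
    fy = np.tan(np.radians(fov_y_deg) / 2.0)
    # Map the pixel to normalized device coordinates in [-1, 1], y pointing up.
    ndx = (2.0 * gaze_px[0] / w - 1.0) * aspect * fy
    ndy = (1.0 - 2.0 * gaze_px[1] / h) * fy
    ray_cam = np.array([ndx, ndy, -1.0])
    ray_world = cam_rot @ ray_cam
    ray_world /= np.linalg.norm(ray_world)
    t = (ground_z - cam_pos[2]) / ray_world[2]   # plane-intersection parameter
    if t <= 0:
        return None                              # gaze ray misses the ground
    return cam_pos + t * ray_world
```

For a real DEM, the closed-form plane intersection would be replaced by marching the ray until it crosses the terrain height field.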
Human-Robot Site Survey and Sampling for Space Exploration
NASA is planning to send humans and robots back to the Moon before 2020. In order for extended missions to be productive, high quality maps of lunar terrain and resources are required. Although orbital images can provide much information, many features (local topography, resources, etc.) will have to be characterized directly on the surface. To address this need, we are developing a system to perform site survey and sampling. The system includes multiple robots and humans operating in a variety of team configurations, coordinated via peer-to-peer human-robot interaction. In this paper, we present our system design and describe planned field tests.
Enhancing the Behavioral Fidelity of Synthetic Entities with Human Behavior Models
Human-behavior models (HBMs) and artificial intelligence systems are called on to fill a wide variety of roles in military simulations. Each of the off-the-shelf human behavior models available today focuses on a specific area of human cognition and behavior. While this makes these HBMs very effective in specific roles, none is single-handedly capable of supporting the full range of roles necessary in an urban military scenario involving asymmetric opponents and potentially hostile civilians. The research presented here explores the integration of three separate human behavior models to support three different roles for synthetic participants in a single simulated scenario. The Soar architecture, focusing on knowledge-based, goal-directed behavior, supports a fire team of U.S. Army Rangers. PMFServ, focusing on a physiologically and stress-constrained model of decision-making based on emotional utility, supports civilians that may become hostile. Finally, AI.Implant, focusing on individual and crowd navigation, supports a small group of opposing militia. Due to the autonomy and wide range of behavior supported by the three human behavior models, the scenario is more flexible and dynamic than many military simulations and commercial computer games.
Design and development of a framework based on OGC web services for the visualization of three-dimensional large-scale geospatial data
The aim of this project is to design a streaming framework for the visualization of three-dimensional large-scale geospatial data. A simple idea is implemented: just the bare necessities have to be loaded and rendered. The 3D scene is thus built incrementally and updated dynamically at run time, taking into account the movements of the camera and its field of view. To achieve this behavior effectively and efficiently, proper mechanisms of tiling and caching have been implemented.
The framework implementation focuses on textured terrain streaming. Despite this limited scope, the defined streaming paradigm has general validity and can be applied to more complex 3D environments. The addition of other features on top of the terrain is straightforward and does not imply substantial modifications to the framework. In order to make the framework standard-compliant and platform-independent, it has been designed to work with OGC web services, and the widely adopted web-based approach has been chosen. As a result, any WebGL-compliant browser can run web applications built on top of this framework without the use of plug-ins or additional software.
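The tiling-and-caching mechanism described above can be sketched as an LRU cache over tile keys, where only the tiles intersecting the current frustum are requested and recently used tiles are retained for reuse. The key scheme and eviction policy below are illustrative assumptions, not the project's actual design:

```python
from collections import OrderedDict

class TileCache:
    """LRU cache for terrain tiles: only tiles in the camera's field of view
    are requested, and recently used tiles are kept for later reuse."""

    def __init__(self, fetch, capacity=256):
        self.fetch = fetch            # e.g. wraps an OGC tile request
        self.capacity = capacity
        self._tiles = OrderedDict()   # (zoom, x, y) -> tile data

    def get(self, key):
        if key in self._tiles:
            self._tiles.move_to_end(key)          # mark as recently used
        else:
            self._tiles[key] = self.fetch(key)    # cache miss: download tile
            if len(self._tiles) > self.capacity:
                self._tiles.popitem(last=False)   # evict least recently used
        return self._tiles[key]

    def visible_tiles(self, keys):
        """Load exactly the tiles intersecting the current frustum."""
        return [self.get(k) for k in keys]
```

As the camera moves, each frame simply passes the new set of visible tile keys to `visible_tiles`; tiles that leave and re-enter the view are served from the cache until evicted.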
Deep Reinforcement Learning-based Multi-objective Path Planning on the Off-road Terrain Environment for Ground Vehicles
Because the energy-consumption efficiency of up-slope and down-slope travel differs greatly, the shortest path on a complex off-road terrain environment (2.5D map) is not always the path with the least energy consumption. For any energy-sensitive vehicle, achieving a good trade-off between distance and energy consumption in 2.5D path planning is highly valuable. In this paper, a deep reinforcement learning-based 2.5D multi-objective path planning method (DMOP) is proposed. The DMOP efficiently finds the desired path in three steps: (1) transform the high-resolution 2.5D map into a small-size map; (2) use a trained deep Q network (DQN) to find the desired path on the small-size map; (3) map the planned path back to the original high-resolution map using a path-enhancement method. In addition, imitation learning and reward shaping are applied to train the DQN. The reward function is constructed from terrain, distance, and border information. Simulations show that the proposed method can complete the multi-objective 2.5D path planning task, and that it has powerful reasoning capability, enabling it to perform arbitrary untrained planning tasks on the same map.
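A shaped reward built from terrain, distance, and border information, as described above, can be sketched for a single grid move on a height map. The weights and the asymmetric slope-cost model below are illustrative assumptions, not the paper's actual reward function:

```python
import numpy as np

def step_reward(height, pos, nxt, w_dist=1.0, w_energy=0.5, border_penalty=10.0):
    """Shaped reward for one grid move on a 2.5D height map: penalize path
    length and slope-dependent energy, with up-slope travel costing more
    than down-slope (the coefficients here are illustrative)."""
    rows, cols = height.shape
    x, y = nxt
    if not (0 <= x < rows and 0 <= y < cols):
        return -border_penalty                    # border term: leaving the map
    dz = height[x, y] - height[pos]               # elevation change of the move
    dist = np.hypot(x - pos[0], y - pos[1])       # distance term
    energy = 3.0 * dz if dz > 0 else 0.2 * (-dz)  # asymmetric slope energy term
    return -(w_dist * dist + w_energy * energy)
```

Tuning `w_dist` against `w_energy` shifts the learned policy along the distance-versus-energy trade-off that motivates the multi-objective formulation.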
An Architecture for Online Affordance-based Perception and Whole-body Planning
The DARPA Robotics Challenge Trials held in December 2013 provided a landmark demonstration of dexterous mobile robots executing a variety of tasks aided by a remote human operator using only data from the robot's sensor suite transmitted over a constrained, field-realistic communications link. We describe the design considerations, architecture, implementation, and performance of the software that Team MIT developed to command and control an Atlas humanoid robot. Our design emphasized human interaction with an efficient motion planner, where operators expressed desired robot actions in terms of affordances fit using perception and manipulated in a custom user interface. We highlight several important lessons we learned while developing our system on a highly compressed schedule.