Task scheduling and merging in space and time
Every day, robots are deployed in more challenging environments, where they are required to perform complex tasks. To accomplish these tasks, robots rely on intelligent deliberation algorithms. In this thesis, we study two deliberation approaches, task scheduling and task planning. We extend these approaches not only to deal with the temporal and spatial constraints imposed by the environment, but also to exploit those constraints to be more efficient than state-of-the-art approaches.
Our first main contribution is a scheduler that exploits a heuristic based on Allen’s interval algebra to prune the search space to be traversed by a mixed integer program. We empirically show that the proposed scheduler outperforms the state of the art by at least one order of magnitude. Furthermore, the scheduler has been deployed on several mobile robots in long-term autonomy scenarios.
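Allen's interval algebra distinguishes 13 qualitative relations between two time intervals (before, meets, overlaps, and so on), and a heuristic over these relations can fix orderings that are already forced by the tasks' time windows before any binary sequencing variable reaches the mixed integer program. A minimal sketch of the idea, assuming simple closed intervals (the `Interval` and `can_prune` names are illustrative, not the thesis's actual code):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    start: float
    end: float

def allen_relation(a: Interval, b: Interval) -> str:
    """Return which of Allen's 13 qualitative relations holds from a to b."""
    if a.end < b.start:
        return "before"
    if b.end < a.start:
        return "after"
    if a.end == b.start:
        return "meets"
    if b.end == a.start:
        return "met-by"
    if a.start == b.start and a.end == b.end:
        return "equals"
    if a.start == b.start:
        return "starts" if a.end < b.end else "started-by"
    if a.end == b.end:
        return "finishes" if a.start > b.start else "finished-by"
    if b.start < a.start and a.end < b.end:
        return "during"
    if a.start < b.start and b.end < a.end:
        return "contains"
    return "overlaps" if a.start < b.start else "overlapped-by"

def can_prune(a: Interval, b: Interval) -> bool:
    """A pair whose time windows are already disjoint ('before'/'after')
    needs no sequencing decision variable in the MIP."""
    return allen_relation(a, b) in ("before", "after")
```

Every pruned pair removes a binary variable, so the MIP's search space shrinks exponentially in the number of forced orderings.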
Our second main contribution is the POPMERX algorithm, which is based on merging partially ordered temporal plans. POPMERX first reasons about the spatial and temporal structure of separately generated plans. Then, it merges these plans into a single final plan while optimising the makespan of the merged plan. We empirically show that POPMERX produces better plans than state-of-the-art planners on temporal domains with time windows.
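The makespan of a merged partially ordered plan is the length of the longest duration-weighted path through its precedence graph. A minimal sketch of that evaluation under hypothetical action names and durations (this is an illustration of the objective, not the POPMERX implementation):

```python
from collections import defaultdict

def makespan(durations: dict[str, float],
             precedences: list[tuple[str, str]]) -> float:
    """Makespan of a merged partial-order plan: the longest
    duration-weighted path through the precedence DAG."""
    succ = defaultdict(list)
    indeg = {a: 0 for a in durations}
    for u, v in precedences:
        succ[u].append(v)
        indeg[v] += 1
    # Earliest finish times via a topological sweep (Kahn's algorithm).
    finish = {a: durations[a] for a in durations}
    queue = [a for a in durations if indeg[a] == 0]
    while queue:
        u = queue.pop()
        for v in succ[u]:
            finish[v] = max(finish[v], finish[u] + durations[v])
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return max(finish.values())
```

Because unordered actions may run in parallel, a merger that adds fewer ordering constraints generally yields a shorter longest path, which is exactly what makespan optimisation rewards.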
Intelligent and High-Performance Behavior Design of Autonomous Systems via Learning, Optimization and Control
Nowadays, great societal demands have rapidly boosted the development of autonomous systems that densely interact with humans in many application domains, from manufacturing to transportation and from workplaces to daily lives. The shift from isolated working environments to human-dominated spaces requires autonomous systems to handle not only environmental uncertainties, such as external vibrations, but also interaction uncertainties arising from human behavior, which is probabilistic in nature, causal but not strictly rational, internally hierarchical, and socially compliant. This dissertation is concerned with the design of intelligent and high-performance behavior for such autonomous systems, leveraging strengths from control, optimization, learning, and cognitive science. The work consists of two parts. In Part I, the problem of high-level hybrid human-machine behavior design is addressed. The goal is to achieve safe, efficient and human-like interaction with people. A framework based on theory of mind, utility theories and imitation learning is proposed to efficiently represent and learn the complicated behavior of humans. Built upon that, machine behaviors at three different levels - the perceptual level, the reasoning level, and the action level - are designed via imitation learning, optimization, and online adaptation, allowing the system to interpret, reason and behave like humans, particularly when a variety of uncertainties exist. Applications to autonomous driving are considered throughout Part I. Part II is concerned with the design of high-performance low-level individual machine behavior in the presence of model uncertainties and external disturbances. Advanced control laws based on adaptation, iterative learning and the internal structure of uncertainties and disturbances are developed to assure that the high-level interactive behaviors can be reliably executed. Applications on robot manipulators and high-precision motion systems are discussed in this part.
NeBula: TEAM CoSTAR’s robotic autonomy solution that won phase II of DARPA subterranean challenge
This paper presents and discusses algorithms, hardware, and software architecture developed by TEAM CoSTAR (Collaborative SubTerranean Autonomous Robots), competing in the DARPA Subterranean Challenge. Specifically, it presents the techniques utilized within the Tunnel (2019) and Urban (2020) competitions, where CoSTAR achieved second and first place, respectively. We also discuss CoSTAR's demonstrations in Martian-analog surface and subsurface (lava tubes) exploration. The paper introduces our autonomy solution, referred to as NeBula (Networked Belief-aware Perceptual Autonomy). NeBula is an uncertainty-aware framework that aims at enabling resilient and modular autonomy solutions by performing reasoning and decision making in the belief space (the space of probability distributions over the robot and world states). We discuss various components of the NeBula framework, including (i) geometric and semantic environment mapping, (ii) a multi-modal positioning system, (iii) traversability analysis and local planning, (iv) global motion planning and exploration behavior, (v) risk-aware mission planning, (vi) networking and decentralized reasoning, and (vii) learning-enabled adaptation. We discuss the performance of NeBula on several robot types (e.g., wheeled, legged, flying) in various environments. We discuss the specific results and lessons learned from fielding this solution in the challenging courses of the DARPA Subterranean Challenge competition.
Peer reviewed. Postprint (published version). Authors: Agha, A., Otsu, K., Morrell, B., Fan, D. D., Thakker, R., Santamaria-Navarro, A., Kim, S.-K., Bouman, A., Lei, X., Edlund, J., Ginting, M. F., Ebadi, K., Anderson, M., Pailevanian, T., Terry, E., Wolf, M., Tagliabue, A., Vaquero, T. S., Palieri, M., Tepsuporn, S., Chang, Y., Kalantari, A., Chavez, F., Lopez, B., Funabiki, N., Miles, G., Touma, T., Buscicchio, A., Tordesillas, J., Alatur, N., Nash, J., Walsh, W., Jung, S., Lee, H., Kanellakis, C., Mayo, J., Harper, S., Kaufmann, M., Dixit, A., Correa, G. J., Lee, C., Gao, J., Merewether, G., Maldonado-Contreras, J., Salhotra, G., Da Silva, M. S., Ramtoula, B., Fakoorian, S., Hatteland, A., Kim, T., Bartlett, T., Stephens, A., Kim, L., Bergh, C., Heiden, E., Lew, T., Cauligi, A., Heywood, T., Kramer, A., Leopold, H. A., Melikyan, H., Choi, H. C., Daftry, S., Toupet, O., Wee, I., Thakur, A., Feras, M., Beltrame, G., Nikolakopoulos, G., Shim, D., Carlone, L., & Burdick, J.
Safe navigation and human-robot interaction in assistant robotic applications
The abstract is provided in the attachment.
Toward Robots with Peripersonal Space Representation for Adaptive Behaviors
The abilities to adapt and act autonomously in an unstructured and human-oriented environment are vital for the next generation of robots, which aim to cooperate safely with humans. While this adaptability is natural and feasible for humans, it is still very complex and challenging for robots. Observations and findings from psychology and neuroscience regarding the development of the human sensorimotor system can inform the development of novel approaches to adaptive robotics. Among these is the formation of the representation of the space closely surrounding the body, the Peripersonal Space (PPS), from multisensory sources such as vision, hearing, touch and proprioception, which facilitates human activities within their surroundings.
Taking inspiration from the virtual safety margin formed by the PPS representation in humans, this thesis first constructs an equivalent model of the safety zone for each body part of the iCub humanoid robot. This PPS layer serves as a distributed collision predictor, which translates visually detected objects approaching a robot's body parts (e.g., arm, hand) into probabilities of collision between those objects and body parts. This leads to adaptive avoidance behaviors in the robot via an optimization-based reactive controller. Notably, this visual reactive control pipeline can also seamlessly incorporate tactile input to guarantee safety in both pre- and post-collision phases of physical Human-Robot Interaction (pHRI). Concurrently, the controller is also able to take into account multiple targets (of manipulation reaching tasks) generated by a multiple-Cartesian-point planner. All components, namely the PPS, the multi-target motion planner (for manipulation reaching tasks), the reaching-with-avoidance controller and the human-centred visual perception, are combined harmoniously to form a hybrid control framework designed to provide safety for robots' interactions in a cluttered environment shared with human partners.
Later, motivated by the development of manipulation skills in infants, in which multisensory integration is thought to play an important role, a learning framework is proposed to allow a robot to learn the processes of forming sensory representations, namely visuomotor and visuotactile, from its own motor activities in the environment. Both multisensory integration models are constructed with Deep Neural Networks (DNNs) in such a way that their outputs are represented in motor space to facilitate the robot's subsequent actions.
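The PPS layer's role as a distributed collision predictor can be caricatured with a hand-crafted stand-in: map an approaching object's distance and time-to-contact to a per-body-part collision probability, which a reactive controller can then weigh against its reaching objective. In the thesis this mapping is learned from visuotactile experience; the falloff shape and margins (`d_max`, `ttc_max`) below are purely illustrative assumptions.

```python
def collision_probability(distance: float, ttc: float,
                          d_max: float = 0.45, ttc_max: float = 1.5) -> float:
    """Map an approaching object's distance (m) and time-to-contact (s)
    to a [0, 1] collision probability for one body part.
    Objects beyond the safety margin contribute zero; closer and
    faster-approaching objects contribute more."""
    if distance >= d_max or ttc >= ttc_max:
        return 0.0
    # Linear falloff in distance, modulated by urgency from time-to-contact.
    p_dist = 1.0 - distance / d_max
    p_time = 1.0 - ttc / ttc_max
    return max(0.0, min(1.0, p_dist * p_time))
```

A reactive controller would evaluate this for every (object, body part) pair and add repulsive terms, scaled by the resulting probabilities, to its optimization objective.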
Coordination for Scalable Multiple Robot Planning Under Temporal Uncertainty
This dissertation incorporates coalition formation and probabilistic planning towards a domain-independent automated planning solution that scales to multiple heterogeneous robots in complex domains. The first research direction investigates the effectiveness of Task Fusion and introduces heuristics that improve task allocation and result in better-quality plans while requiring lower computational cost than the baseline approaches. The heuristics incorporate relaxed plans to estimate coupling and determine which tasks to fuse. As a result, larger temporal continuous planning problems involving multiple robots can be solved. The second research direction introduces new coordination methods to merge plans and resolve conflicts while extending the framework to domains with stochastic action durations. Merging distributedly generated plans becomes computationally costly when task plans are tightly coupled and conflicts arise due to dependencies between plan actions. Existing methods either scale poorly as the number of agents and tasks increases, or do not minimize makespan, the overall time necessary to execute all tasks. A new family of plan coordination and conflict resolution algorithms is introduced to merge independently generated plans, minimize the resulting makespan, and scale to a large number of tasks and agents in complex problems. A thorough algorithmic analysis and empirical evaluation demonstrate how the new conflict identification and resolution models can impact the resulting plan quality and computational cost across three heterogeneous multiagent domains and outperform the baseline algorithms.
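One way to picture conflict resolution between independently generated plans is to add ordering constraints whenever two actions from different plans contend for the same resource. The greedy sketch below (with a hypothetical plan-and-resource encoding) makes both plans executable but, unlike the dissertation's algorithms, makes no attempt to minimize the resulting makespan: that is precisely the gap the new coordination methods address.

```python
from itertools import combinations

def resolve_conflicts(plans: dict[str, dict[str, str]]) -> list[tuple[str, str]]:
    """Given per-robot plans mapping action name -> resource it uses,
    return ordering constraints that sequence every pair of actions
    from different plans that contend for the same resource.
    Greedy tie-break: the lexicographically earlier action goes first."""
    orderings = []
    # Flatten to (plan id, action, resource) triples.
    flat = [(p, a, r) for p, acts in plans.items() for a, r in acts.items()]
    for (p1, a1, r1), (p2, a2, r2) in combinations(flat, 2):
        if p1 != p2 and r1 == r2:  # cross-plan resource conflict
            first, second = sorted((a1, a2))
            orderings.append((first, second))
    return orderings
```

Each added ordering serializes a pair of otherwise-parallel actions, so a makespan-aware resolver would instead search over which action of each conflicting pair to put first.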
Robots learn to behave: improving human-robot collaboration in flexible manufacturing applications
The abstract is provided in the attachment.