537 research outputs found

    Algorithms for Generating Motion Trajectories Described by Prepositions

    A list of representative directional prepositions of the English language is investigated to develop computation models that output a general motion trajectory or goal direction, given instructions involving prepositional phrases. The computation models are implemented through geometric definitions and procedures such as centroid, quasi-centroid, convex hull, closest, nearest neighbor, and next-to. All algorithms are defined by or derived from standard computational geometry concepts.
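A minimal sketch (not the paper's implementation) of two of the geometric procedures the abstract names, centroid and nearest point, used to ground prepositions such as "toward X" and "closest to X" as goal points; the function names and the rectangle example are illustrative assumptions.

```python
# Illustrative sketch: grounding directional prepositions with simple
# computational-geometry procedures. Names here are hypothetical.
from typing import List, Tuple

Point = Tuple[float, float]

def centroid(polygon: List[Point]) -> Point:
    """Arithmetic mean of the vertices; a simple stand-in for 'toward X'."""
    xs, ys = zip(*polygon)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def nearest_vertex(agent: Point, polygon: List[Point]) -> Point:
    """Vertex of the object closest to the agent; a stand-in for 'closest to X'."""
    return min(polygon, key=lambda p: (p[0] - agent[0]) ** 2 + (p[1] - agent[1]) ** 2)

table = [(0.0, 0.0), (4.0, 0.0), (4.0, 2.0), (0.0, 2.0)]
print(centroid(table))                     # (2.0, 1.0)
print(nearest_vertex((5.0, 3.0), table))   # (4.0, 2.0)
```

A richer model would operate on the convex hull of the object rather than its vertex list, as the abstract suggests.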

    High-Level Control Of Modular Robots

    Reconfigurable modular robots can exhibit different specializations by rearranging the same set of parts that compose them. Actuating modular robots can be complicated because of the many degrees of freedom that scale exponentially with the size of the robot. Effectively controlling these robots directly relates to how well they can be used to complete meaningful tasks. This paper discusses an approach for creating provably correct controllers for modular robots from high-level tasks defined with structured English sentences. While such controller synthesis has been demonstrated with simple mobile robots, the problem is enriched here by considering what is unique to reconfigurable modular robots. These requirements are expressed through traits in the high-level task specification that store information about the geometry and motion types of a robot. Given a high-level problem definition for a modular robot, the approach in this paper generates all lower levels of control needed to solve it. Information about different robot characteristics is stored in a library, and two tools for populating this library have been developed. The first is a physics-based simulator and gait creator for manual generation of motion gaits. The second is a genetic algorithm framework that uses traits to evaluate performance under various metrics. The approach is demonstrated in simulation and on the CKBot hardware platform.

    Planning Approaches to Constraint-Aware Navigation in Dynamic Environments

    Path planning is a fundamental problem in many areas, ranging from robotics and artificial intelligence to computer graphics and animation. Although there is extensive literature for computing optimal, collision-free paths, there is relatively little work that explores the satisfaction of spatial constraints between objects and agents at the global navigation layer. This paper presents a planning framework that satisfies multiple spatial constraints imposed on the path. The constraints specified can include staying behind a building, walking along walls, or avoiding the line of sight of patrolling agents. We introduce two hybrid environment representations that balance computational efficiency and search space density to provide a minimal, yet sufficient, discretization of the search graph for constraint-aware navigation. An extended anytime dynamic planner is used to compute constraint-aware paths, while efficiently repairing solutions to account for varying dynamic constraints or an updating world model. We demonstrate the benefits of our method on challenging navigation problems in complex environments for dynamic agents using combinations of hard and soft, attracting and repelling constraints, defined by both static and moving obstacles.
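One idea from this abstract can be sketched in a few lines: a soft repelling constraint folded into edge costs, so the planner prefers, but is not forced, to avoid a region. This toy uses plain Dijkstra on a grid rather than the paper's anytime dynamic planner and hybrid representations; the penalty weight and grid are illustrative assumptions.

```python
# Hedged sketch: a soft repelling constraint as an additive edge-cost
# penalty on a 4-connected grid. Not the paper's planner.
import heapq

def plan(grid_w, grid_h, start, goal, repel, weight=20.0):
    """Dijkstra shortest path; stepping into a cell in `repel` costs extra."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue
        x, y = u
        for v in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if not (0 <= v[0] < grid_w and 0 <= v[1] < grid_h):
                continue
            nd = d + 1.0 + (weight if v in repel else 0.0)  # soft penalty
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

# A repelling strip at x=2 (y=0..3) pushes the path around it through (2, 4).
repel = {(2, y) for y in range(0, 4)}
path = plan(5, 5, (0, 0), (4, 0), repel)
```

With a large enough weight the detour (12 unit steps) beats cutting through the strip; a hard constraint would instead remove those edges entirely.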

    Learning in vision and robotics

    I present my work on learning from video and robotic input. This is an important problem, with numerous potential applications. The use of machine learning makes it possible to obtain models which can handle noise and variation without explicitly programming them. It also raises the possibility of robots which can interact more seamlessly with humans rather than only exhibiting hard-coded behaviors. I will present my work in two areas: video action recognition, and robot navigation. First, I present a video action recognition method which represents actions in video by sequences of retinotopic appearance and motion detectors, learns such models automatically from training data, and allows actions in new video to be recognized and localized completely automatically. Second, I present a new method which allows a mobile robot to learn word meanings from a combination of robot sensor measurements and sentential descriptions corresponding to a set of robotically driven paths. These word meanings support automatic driving from sentential input, and generation of sentential descriptions of new paths. Finally, I also present work on a new action recognition dataset, and comparisons of the performance of recent methods on this dataset and others.

    Recovering from failure by asking for help

    Robots inevitably fail, often without the ability to recover autonomously. We demonstrate an approach for enabling a robot to recover from failures by communicating its need for specific help to a human partner using natural language. Our approach automatically detects failures, then generates targeted spoken-language requests for help such as “Please give me the white table leg that is on the black table.” Once the human partner has repaired the failure condition, the system resumes full autonomy. We present a novel inverse semantics algorithm for generating effective help requests. In contrast to forward semantic models that interpret natural language in terms of robot actions and perception, our inverse semantics algorithm generates requests by emulating the human’s ability to interpret a request using the Generalized Grounding Graph (G[superscript 3]) framework. To assess the effectiveness of our approach, we present a corpus-based online evaluation, as well as an end-to-end user study, demonstrating that our approach increases the effectiveness of human interventions compared to static requests for help. (Supported by the Boeing Company and the U.S. Army Research Laboratory Robotics Collaborative Technology Alliance.)
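The inverse-semantics idea above can be illustrated with a toy: score each candidate request by how likely a simulated listener is to ground it to the intended object, and emit the highest-scoring one. The listener model below is a hypothetical stand-in (uniform choice over property-matching objects), not the G3 framework from the paper; the scene and candidates are invented.

```python
# Toy inverse semantics: choose the request a simulated listener is most
# likely to ground to the intended object. All names are illustrative.
def listener_prob(request, target, scene):
    """P(listener picks `target` | request): uniform over objects whose
    property set contains every word in the request."""
    matches = [o for o in scene if all(w in o["props"] for w in request)]
    return (1.0 / len(matches)) if target in matches else 0.0

scene = [
    {"id": "leg1", "props": {"white", "leg", "black-table"}},
    {"id": "leg2", "props": {"white", "leg", "floor"}},
    {"id": "top1", "props": {"black", "top", "floor"}},
]
target = scene[0]
candidates = [("leg",), ("white", "leg"), ("white", "leg", "black-table")]
best = max(candidates, key=lambda r: listener_prob(r, target, scene))
# "leg" and "white leg" are ambiguous (two matches each); only the fully
# specified request grounds uniquely, so it wins.
```

This mirrors the abstract's example: a request that pins down the referent ("the white table leg on the black table") is preferred over ambiguous alternatives.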