
    Optimal control models of the goal-oriented human locomotion

    In recent papers it has been suggested that human locomotion may be modeled as an inverse optimal control problem. In this paradigm, the trajectories are assumed to be solutions of an optimal control problem that has to be determined. We discuss the modeling of both the dynamical system and the cost to be minimized, and we analyze the corresponding optimal synthesis. The main results describe the asymptotic behavior of the optimal trajectories as the target point goes to infinity.
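    A minimal sketch of the kind of formulation at stake, assuming a unicycle-like model of the walker; the specific dynamics and the running cost \ell shown here are illustrative placeholders, since identifying the cost is exactly what the inverse problem asks:

        \[
            \min_{u(\cdot)} \int_0^T \ell\bigl(x(t), y(t), \theta(t), u(t)\bigr)\, dt
            \quad \text{subject to} \quad
            \dot{x} = v \cos\theta, \quad
            \dot{y} = v \sin\theta, \quad
            \dot{\theta} = u,
        \]
        \[
            (x, y, \theta)(0) = q_0, \qquad (x, y, \theta)(T) = q_{\mathrm{target}} .
        \]

    Here v is the forward speed and u the angular velocity. The inverse problem is to recover \ell from recorded trajectories; the direct problem, the optimal synthesis, then asks how the minimizing trajectories behave, for instance as q_target is pushed to infinity.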

    SAFETY-GUARANTEED TASK PLANNING FOR BIPEDAL NAVIGATION IN PARTIALLY OBSERVABLE ENVIRONMENTS

    Bipedal robots are becoming more capable as basic hardware and control challenges are overcome; however, reasoning about safety at the task and motion planning levels has been largely underexplored. This thesis takes key steps towards guaranteeing safe locomotion in cluttered environments in the presence of humans or other dynamic obstacles by designing a hierarchical task planning framework that incorporates safety guarantees at each level. This layered planning framework is composed of a coarse high-level symbolic navigation planner and a lower-level local action planner. A belief abstraction at the global navigation planning level enables belief estimation of non-visible dynamic obstacle states and guarantees navigation safety with collision avoidance. Both planning layers employ linear temporal logic (LTL) for a reactive game synthesis between the robot and its environment, while incorporating lower-level safe locomotion keyframe policies into the formal task specification design. The high-level symbolic navigation planner has been extended to leverage the capabilities of a heterogeneous multi-agent team to resolve environment assumption violations that appear at runtime. Modifications to the navigation planner, in conjunction with a coordination layer, allow each agent to guarantee immediate safety and eventual task completion in the presence of an assumption violation if another agent exists that can resolve said violation, e.g., a closed door that another, dexterous agent can open. The planning framework leverages the expressive nature and formal guarantees of LTL to generate provably correct controllers for complex robotic systems. The use of belief-space planning for dynamic obstacle belief tracking, together with heterogeneous robot capabilities for assisting one another when environment assumptions are violated, allows the planning framework to reduce the conservativeness traditionally associated with using formal methods for robot planning. (M.S. thesis)
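    A minimal sketch of how a coarse belief abstraction over a non-visible dynamic obstacle might be tracked and used for a conservative safety check, in the spirit of the framework above; the region names, adjacency map, and helpers (propagate_belief, observe, is_safe) are hypothetical illustrations, not the thesis's implementation:

        # Hypothetical belief abstraction over map regions for a dynamic obstacle
        # that may leave the robot's field of view. All names and update rules
        # here are illustrative assumptions, not the thesis's actual planner.
        from typing import Dict, Optional, Set

        Region = str

        def propagate_belief(belief: Set[Region],
                             adjacency: Dict[Region, Set[Region]]) -> Set[Region]:
            """One coarse time step: the obstacle may stay put or move to an adjacent region."""
            nxt: Set[Region] = set()
            for r in belief:
                nxt.add(r)
                nxt |= adjacency.get(r, set())
            return nxt

        def observe(belief: Set[Region],
                    visible: Set[Region],
                    seen_in: Optional[Region]) -> Set[Region]:
            """Collapse the belief: the obstacle was either seen in a visible region,
            or it is known not to occupy any currently visible region."""
            return {seen_in} if seen_in is not None else belief - visible

        def is_safe(next_region: Region, belief: Set[Region]) -> bool:
            """Conservative collision-avoidance check: never step into a possibly occupied region."""
            return next_region not in belief

        # Example on a small corridor map A - B - C - D.
        adjacency = {"A": {"B"}, "B": {"A", "C"}, "C": {"B", "D"}, "D": {"C"}}
        belief = {"D"}                                          # obstacle last seen in region D
        belief = propagate_belief(belief, adjacency)            # now {"C", "D"}
        belief = observe(belief, visible={"C"}, seen_in=None)   # C observed clear -> {"D"}
        print(is_safe("C", belief))                             # True: entering C is safe

    In a framework like the one described, such a conservative check would typically be folded into the LTL safety specification that the reactive game synthesis must satisfy, rather than queried in isolation.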

    Conceptual design of a Manned-Unmanned Lunar Explorer (MULE)

    Manned-unmanned lunar explorer systems design

    Virtual to Real Reinforcement Learning for Autonomous Driving

    Reinforcement learning is considered a promising direction for learning driving policies. However, training an autonomous vehicle with reinforcement learning in the real environment involves prohibitive trial and error. It is more desirable to first train in a virtual environment and then transfer to the real environment. In this paper, we propose a novel realistic translation network that makes a model trained in the virtual environment workable in the real world. The proposed network converts non-realistic virtual image input into a realistic image with a similar scene structure. Given realistic frames as input, a driving policy trained by reinforcement learning can adapt well to real-world driving. Experiments show that the proposed virtual-to-real (VR) reinforcement learning (RL) approach performs well. To our knowledge, this is the first successful case of a driving policy trained by reinforcement learning that can adapt to real-world driving data.
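    A minimal sketch of the virtual-to-real pipeline described above, assuming the translation network is an image-to-image encoder-decoder and the policy consumes its output; the module names, architectures, and tensor sizes are illustrative placeholders, not the paper's networks:

        # Hypothetical sketch: a translation network adapts simulator frames to a
        # realistic appearance, and the RL driving policy only ever sees the
        # translated frames. Architectures here are placeholders, not the paper's.
        import torch
        import torch.nn as nn

        class TranslationNet(nn.Module):
            """Stand-in for the realistic translation network (appearance transfer
            that preserves scene structure)."""
            def __init__(self) -> None:
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 3, kernel_size=3, padding=1), nn.Tanh(),
                )

            def forward(self, virtual_frame: torch.Tensor) -> torch.Tensor:
                return self.net(virtual_frame)

        class DrivingPolicy(nn.Module):
            """Stand-in for the RL-trained policy mapping a frame to, e.g., steering and throttle."""
            def __init__(self, n_actions: int = 2) -> None:
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 16, kernel_size=5, stride=4), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                )
                self.head = nn.Linear(16, n_actions)

            def forward(self, frame: torch.Tensor) -> torch.Tensor:
                return self.head(self.features(frame))

        translator, policy = TranslationNet(), DrivingPolicy()
        virtual_frame = torch.rand(1, 3, 84, 84)      # frame rendered by the simulator
        realistic_frame = translator(virtual_frame)   # appearance-adapted frame
        action = policy(realistic_frame)              # policy acts on the adapted frame

    The design intent, as the abstract describes it, is that a policy optimized only on translated frames already sees inputs that resemble real camera frames, so it can transfer to real-world driving data without further adaptation.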