Multi-UAV trajectory planning for 3D visual inspection of complex structures
This paper presents a new trajectory planning algorithm for 3D autonomous UAV
volume coverage and visual inspection. The algorithm is an extension of a
state-of-the-art Heat Equation Driven Area Coverage (HEDAC) multi-agent area
coverage algorithm for 3D domains. With a given target exploration density
field, the algorithm designs a potential field and directs UAVs to the regions
of higher potential, i.e., higher values of remaining density. Collisions
between agents, and between agents and the domain boundaries, are prevented by
employing a distance field and correcting an agent's direction vector when a
distance threshold is reached. A unit cube test case is considered to
evaluate this trajectory planning strategy for volume coverage. For visual
inspection applications, the algorithm is supplemented with camera direction
control. A field containing the nearest distance from any point in the domain
to the structure surface is designed. The gradient of this field is calculated
to obtain the camera orientation throughout the trajectory. Three different
test cases of varying complexities are considered to validate the proposed
method for visual inspection. The simplest scenario is a synthetic portal-like
structure inspected using three UAVs. The other two inspection scenarios are
based on realistic structures where UAVs are commonly utilized: a wind turbine
and a bridge. When deployed for wind turbine inspection, two simulated UAVs
traversing smooth spiral trajectories successfully explore the entire turbine
structure while their cameras are directed at the curved surfaces of the
turbine's blades. In the bridge test case, an effective visual inspection of a
complex structure is demonstrated using both a single UAV and five UAVs. The
proposed methodology is successful, flexible and applicable in real-world UAV
inspection tasks.
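The camera-direction control described above can be sketched numerically. The following minimal Python illustration is our own (the grid, field, and function names are assumptions, not the paper's code): it derives the camera orientation from the gradient of a distance-to-surface field.

```python
import numpy as np

def camera_direction(dist_field, spacing, idx):
    """Camera orientation as the negative normalized gradient of the
    distance-to-surface field, so the camera points toward the structure.
    `dist_field` is a 3D array of nearest-surface distances; `idx` is the
    voxel index of the UAV position. All names are illustrative."""
    gx, gy, gz = np.gradient(dist_field, spacing)
    g = np.array([gx[idx], gy[idx], gz[idx]])
    norm = np.linalg.norm(g)
    if norm < 1e-12:          # degenerate gradient: keep previous heading
        return None
    return -g / norm          # the gradient points away from the surface

# Toy example: distance field of a flat surface at z = 0, sampled on a grid,
# so d(x, y, z) = z everywhere above it.
z = np.linspace(0.1, 1.0, 10)
field = np.tile(z, (10, 10, 1))
d = camera_direction(field, z[1] - z[0], (5, 5, 5))
```

Because the gradient of the nearest-surface distance points away from the structure, looking along its negative aims the camera at the closest part of the surface, consistent with the abstract's description.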
Provably Robust Semi-Infinite Program Under Collision Constraints via Subdivision
We present a semi-infinite program (SIP) solver for trajectory optimizations
of general articulated robots. These problems are more challenging than
standard Nonlinear Programs (NLPs) because they involve an infinite number of
non-convex collision constraints. Prior SIP solvers based on constraint sampling cannot
guarantee the satisfaction of all constraints. Instead, our method uses a
conservative bound on articulated body motions to ensure the solution
feasibility throughout the optimization procedure. We further use subdivision
to adaptively reduce the error in conservative motion estimation. Combined, we
prove that our SIP solver guarantees feasibility while approaching the critical
point of SIP problems up to arbitrary user-provided precision. We have verified
our method on a range of trajectory optimization problems involving industrial
robot arms and UAVs, where our method can generate collision-free, locally
optimal trajectories within a couple of minutes.
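The feasibility-preserving subdivision idea can be illustrated in one dimension: a conservative (Lipschitz-style) bound either certifies a clearance function as non-negative on an interval or the interval is split. This is a toy sketch of the principle under our own assumptions, not the paper's articulated-body motion bound.

```python
import math

def verify_constraint(g, lo, hi, lipschitz, tol=1e-6):
    """Conservatively certify g(t) >= 0 on [lo, hi] by adaptive subdivision.
    On each interval, g(mid) - L * half_width lower-bounds g; if the bound
    is non-negative the interval is certified, otherwise it is split."""
    stack = [(lo, hi)]
    while stack:
        a, b = stack.pop()
        mid, half = (a + b) / 2.0, (b - a) / 2.0
        if g(mid) - lipschitz * half >= 0.0:
            continue                      # interval certified feasible
        if g(mid) < 0.0:
            return False                  # witnessed a violation
        if half < tol:
            return False                  # cannot certify at resolution tol
        stack.append((a, mid))            # subdivide and retry both halves
        stack.append((mid, b))
    return True

# Clearance g(t) = 0.2 + 0.1*sin(6t) stays positive; Lipschitz constant 0.6.
ok = verify_constraint(lambda t: 0.2 + 0.1 * math.sin(6 * t), 0.0, 1.0, 0.6)
```

As in the paper's scheme, subdivision concentrates effort only where the conservative bound is too loose, so the certificate covers the whole (infinite) constraint set rather than a finite sample of it.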
Scaled Autonomy for Networked Humanoids
Humanoid robots have been developed with the intention of aiding in environments designed for humans. As such, the control of humanoid morphology and the effectiveness of human-robot interaction form the two principal research issues for deploying these robots in the real world. In this thesis work, the issue of humanoid control is coupled with human-robot interaction under the framework of scaled autonomy, where the human and robot exchange levels of control depending on the environment and the task at hand. This scaled autonomy is approached with control algorithms for reactive stabilization of human commands and planned trajectories that encode semantically meaningful motion preferences in a sequential convex optimization framework.
The control and planning algorithms have been extensively tested in the field for robustness and system verification. The RoboCup competition provides a benchmark competition for autonomous agents that are trained with a human supervisor. The kid-sized and adult-sized humanoid robots coordinate over a noisy network in a known environment with adversarial opponents, and the software and routines in this work allowed for five consecutive championships. Furthermore, the motion planning and user interfaces developed in the work have been tested in the noisy network of the DARPA Robotics Challenge (DRC) Trials and Finals in an unknown environment.
Overall, the ability to extend simplified locomotion models to aid in semi-autonomous manipulation allows untrained humans to operate complex, high dimensional robots. This represents another step in the path to deploying humanoids in the real world, based on the low dimensional motion abstractions and proven performance in real world tasks like RoboCup and the DRC
Motion synthesis for high degree-of-freedom robots in complex and changing environments
The use of robotics has recently seen significant growth in various domains such as
unmanned ground/underwater/aerial vehicles, smart manufacturing, and humanoid
robots. However, one of the most essential capabilities required for
long-term autonomy is still largely missing: the ability to operate robustly
and safely in real-world environments, as opposed to controlled industrial and
laboratory setups. Designing robots that can operate reliably and efficiently
in cluttered and changing environments is non-trivial, especially for high
degree-of-freedom (DoF) systems, i.e. robots with multiple actuators. On the
one hand, the dexterity offered by kinematic redundancy allows the robot to
perform dexterous manipulation tasks in complex environments; on the other
hand, such complex systems also make control and planning very challenging. To
address these two interrelated problems, we
exploit robot motion synthesis from three perspectives that feed into each other: end-pose
planning, motion planning and motion adaptation. We propose several novel
ideas in each of the three phases, using which we can efficiently synthesise dexterous
manipulation motion for fixed-base robotic arms, mobile manipulators, as well as
humanoid robots in cluttered and potentially changing environments.
Collision-free inverse kinematics (IK), also called end-pose planning, is a
key prerequisite for other modules such as motion planning, yet it remains an
unsolved problem in robotics. In practice, such information is often assumed
to be given or is provided manually, which significantly limits high-level
autonomy. In our research, by using
novel data pre-processing and encoding techniques, we are able to efficiently
search for collision-free end-poses in challenging scenarios in the presence of uneven
terrains.
Once the end-poses are found, the motion planning module can proceed. Although
motion planning is often claimed to be well studied, we find that existing
algorithms are still unreliable for robust and safe operation in real-world
applications,
especially when the environment is cluttered and changing. We propose a novel
resolution complete motion planning algorithm, namely the Hierarchical Dynamic
Roadmap, that is able to generate collision-free motion trajectories for redundant
robotic arms in extremely complicated environments where other methods would fail.
While planning for fixed-base robotic arms is relatively less challenging, we
also investigate efficient motion planning algorithms for high-DoF (30-40)
humanoid robots, where an extra balance constraint needs to be taken into
account. The results show that our method efficiently generates collision-free
whole-body trajectories
for different humanoid robots in complex environments, where other methods
would require a much longer planning time.
Both the end-pose and motion planning algorithms compute solutions in static
environments and assume the environments stay static during execution. While
humans and most animals are incredibly good at handling environmental changes,
state-of-the-art robotics technology is far from achieving such an ability. To
address this issue, we propose a novel state space representation, the Distance Mesh
space, in which the robot is able to remap the pre-planned motion in real-time and
adapt to environmental changes during execution.
By utilizing the proposed end-pose planning, motion planning and motion adaptation
techniques, we obtain a robotic framework that significantly improves the
level of autonomy. The proposed methods have been validated on various state-of-the-art robot platforms, such as UR5 (6-DoF fixed-base robotic arm), KUKA LWR
(7-DoF fixed-base robotic arm), Baxter (14-DoF fixed-base bi-manual manipulator),
Husky with Dual UR5 (15-DoF mobile bi-manual manipulator), PR2 (20-DoF mobile
bi-manual manipulator), NASA Valkyrie (38-DoF humanoid) and many others, showing
that our methods are truly applicable to high-dimensional motion planning in
practical problems.
Decentralized task allocation for dynamic, time-sensitive tasks
Thesis: S.M., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2018. In time-sensitive and dynamic missions, autonomous vehicles must respond quickly to new information and objectives. In the case of dynamic task allocation, a team of agents is presented with a new, unknown task that must be allocated alongside their original allocations. This is exacerbated further in decentralized settings, where agents are limited to using local information during the allocation process. This thesis presents a fully decentralized, dynamic task allocation algorithm that extends the Consensus-Based Bundle Algorithm (CBBA) to allow for allocating new tasks. Whereas static CBBA requires a full reset of previous allocations, CBBA with Partial Replanning (CBBA-PR) enables the agents to only partially reset their allocations in order to allocate a new task quickly and efficiently. By varying the number of existing tasks that are reset during a replan, the team can trade off convergence speed against the amount of coordination. By specifically choosing the lowest-bid tasks for resetting, CBBA-PR is shown to converge linearly in the number of tasks reset and the network diameter of the team. In addition, limited replanning methods are presented for scenarios without sufficient replanning time. These include a single-reset bidding procedure for agents at capacity, a no-replanning heuristic that can identify scenarios that do not require replanning, and a subteam formation algorithm for reducing the network diameter.
Finally, this thesis describes hardware and simulation experiments used to explore the effects of ad-hoc, decentralized communication on consensus algorithms and to validate the performance of CBBA-PR. By Noam Buckman.
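The partial-reset step at the heart of CBBA-PR can be sketched as follows. This is an illustrative simplification under our own naming (the real algorithm also rebids on the released tasks and reaches consensus over the network):

```python
def partial_reset(bundle, new_task, k):
    """CBBA-PR-style partial reset (illustrative): release the k lowest-bid
    tasks from an agent's bundle so they can be re-auctioned together with
    the newly arrived task. `bundle` maps task id -> current bid score."""
    released = sorted(bundle, key=bundle.get)[:k]   # k lowest bids
    kept = {t: b for t, b in bundle.items() if t not in released}
    pool = set(released) | {new_task}               # tasks open for rebidding
    return kept, pool

bundle = {"t1": 9.0, "t2": 2.5, "t3": 7.1, "t4": 4.0}
kept, pool = partial_reset(bundle, "t_new", k=2)
```

Varying `k` realizes the trade-off the abstract describes: `k = 0` allocates only the new task with minimal coordination, while resetting all tasks recovers a full static-CBBA replan.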
RIACS
The Research Institute for Advanced Computer Science (RIACS) was established by the Universities Space Research Association (USRA) at the NASA Ames Research Center (ARC) on June 6, 1983. RIACS is privately operated by USRA, a consortium of universities that serves as a bridge between NASA and the academic community. Under a five-year cooperative agreement with NASA, research at RIACS is focused on areas that are strategically enabling to the Ames Research Center's role as NASA's Center of Excellence for Information Technology. RIACS is chartered to carry out research and development in computer science as its primary mission. This work is devoted mainly to tasks that are strategically enabling with respect to NASA's bold mission in space exploration and aeronautics. There are three foci for this work: (1) Automated Reasoning, (2) Human-Centered Computing, and (3) High Performance Computing and Networking. RIACS has the additional goal of broadening the base of researchers in these areas of importance to the nation's space and aeronautics enterprises. Through its visiting scientist program, RIACS facilitates the participation of university-based researchers, including both faculty and students, in the research activities of NASA and RIACS. RIACS researchers work in close collaboration with NASA computer scientists on projects such as the Remote Agent Experiment on the Deep Space One mission, and Super-Resolution Surface Modeling.
ImMesh: An Immediate LiDAR Localization and Meshing Framework
In this paper, we propose a novel LiDAR(-inertial) odometry and mapping
framework to achieve the goal of simultaneous localization and meshing in
real-time. This proposed framework termed ImMesh comprises four tightly-coupled
modules: receiver, localization, meshing, and broadcaster. The localization
module utilizes the preprocessed sensor data from the receiver, estimates the
sensor pose online by registering LiDAR scans to maps, and dynamically grows
the map. Then, our meshing module takes the registered LiDAR scan for
incrementally reconstructing the triangle mesh on the fly. Finally, the
real-time odometry, map, and mesh are published via our broadcaster. The key
contribution of this work is the meshing module, which represents a scene with
an efficient hierarchical voxel structure, performs fast retrieval of voxels
observed by new scans, and reconstructs triangle facets in each voxel in an
incremental manner. This voxel-wise meshing operation is carefully designed
for the purpose of efficiency; it first performs a dimension reduction by
projecting 3D points to a 2D local plane contained in the voxel, and then
executes the meshing operation with pull, commit and push steps for incremental
reconstruction of triangle facets. To the best of our knowledge, this is the
first work in the literature that can reconstruct the triangle mesh of
large-scale scenes online, relying only on a standard CPU without GPU
acceleration. To
share our findings and make contributions to the community, we make our code
publicly available on our GitHub: https://github.com/hku-mars/ImMesh
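The dimension-reduction step of the voxel-wise meshing can be sketched like so. This is a simplified reading of the abstract in Python, not ImMesh's actual implementation; the function and variable names are ours:

```python
import numpy as np

def project_to_local_plane(points):
    """Fit a plane to the 3D points inside a voxel (via SVD/PCA) and express
    them in 2D plane coordinates, where incremental triangulation is cheap.
    Returns the 2D coordinates, the plane's centroid, and its normal."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    # Right singular vectors: the first two span the best-fit plane,
    # the last one is its normal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:2].T                      # 3x2 in-plane basis
    uv = centered @ basis                 # 2D coordinates in the plane
    normal = vt[2]
    return uv, centroid, normal

# Points lying on the plane z = 1 project with zero out-of-plane residual.
pts = np.array([[0, 0, 1], [1, 0, 1], [0, 1, 1], [1, 1, 1.0]])
uv, c, n = project_to_local_plane(pts)
```

After this reduction, the pull/commit/push meshing steps the abstract mentions only ever have to manipulate a small 2D triangulation per voxel, which is what makes the incremental update cheap enough for a CPU.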
Dynamic mission planning for communication control in multiple unmanned aircraft teams
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2012. As autonomous technologies continue to progress, teams of multiple unmanned aerial vehicles will play an increasingly important role in civilian and military applications. A multi-UAV system relies on communications to operate. Failure to communicate remotely sensed mission data to the base may render the system ineffective, and the inability to exchange command and control messages can lead to system failures. This thesis presents a unique method to control communications through distributed mission planning, engaging under-utilized UAVs to serve as communication relays and to ensure that the network supports mission tasks. The distributed algorithm uses task assignment information, including task location and proposed execution time, to predict the network topology and plan support using relays. By explicitly coupling the task assignment and relay creation processes, the team is able to optimize the use of agents to address the needs of dynamic, complex missions. The framework is designed to consider realistic network communication dynamics, including path loss, stochastic fading, and information routing. The planning strategy is shown to ensure that agents support both data-rate and interconnectivity bit-error-rate requirements during task execution. In addition, a method is provided for UAVs to estimate network performance during times of uncertainty, adjust their plans to acceptable levels of risk, and adapt the planning behavior to changes in the communication environment. The system performance is verified through multiple experiments conducted in simulation. Finally, the work developed is implemented in outdoor flight testing with a team of up to four UAVs to demonstrate real-time capability and robustness to imperfections in the environment.
The results validate the proposed framework, but highlight some of the challenges these systems face when operating in outdoor, uncontrolled environments. By Andrew N. Kopeikin.
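The kind of link-budget reasoning used to predict whether a direct link closes or a relay UAV is needed can be sketched with a toy free-space path-loss check. The parameter values and function name are illustrative assumptions, not taken from the thesis (which also models stochastic fading and routing):

```python
import math

def needs_relay(dist_m, f_hz=2.4e9, tx_dbm=20.0, sens_dbm=-90.0):
    """Decide whether a direct link of length `dist_m` fails the link budget,
    so an under-utilized UAV should be tasked as a relay. Uses the standard
    free-space path loss: FSPL(dB) = 20*log10(d) + 20*log10(f) - 147.55,
    with d in meters and f in Hz."""
    fspl_db = 20 * math.log10(dist_m) + 20 * math.log10(f_hz) - 147.55
    received_dbm = tx_dbm - fspl_db
    return received_dbm < sens_dbm

# A 10 km link at 2.4 GHz fails this budget, while a 500 m link closes it,
# so only the former would trigger relay planning in this sketch.
far_needs_relay = needs_relay(10_000.0)
near_needs_relay = needs_relay(500.0)
```

In the thesis's framework this kind of prediction is driven by task locations and execution times from the assignment process, so relays are planned before the link actually degrades.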
Autonomous underwater navigation and optical mapping in unknown natural environments
We present an approach for navigating in unknown environments while simultaneously gathering information for inspecting underwater structures using an autonomous underwater vehicle (AUV). To accomplish this, we first use our pipeline for mapping and planning collision-free paths online, which endows an AUV with the capability to autonomously acquire optical data in close proximity. With that information, we then propose a reconstruction pipeline to create a photo-realistic, textured 3D model of the inspected area. These 3D models are also of particular interest to other fields of study in marine sciences, since they can serve as base maps for environmental monitoring, thus allowing change detection of biological communities and their environment over time. Finally, we evaluate our approach using the Sparus II, a torpedo-shaped AUV, conducting inspection missions in a challenging, real-world natural scenario.
Learning To Grasp
Providing robots with the ability to grasp objects has, despite decades of research, remained a challenging problem. The problem is approachable in constrained environments where there is ample prior knowledge of the scene and the objects that will be manipulated. The challenge is in building systems that scale beyond specific situational instances and gracefully operate in novel conditions. In the past, heuristic and simple rule-based strategies were used to accomplish tasks such as scene segmentation or reasoning about occlusion. These heuristic strategies work in constrained environments where a roboticist can make simplifying assumptions about everything from the geometries of the objects to be interacted with, the level of clutter, camera position, lighting, and a myriad of other relevant variables. With these assumptions in place, it becomes tractable for a roboticist to hardcode desired behavior and build a robotic system capable of completing repetitive tasks. These hardcoded behaviors will quickly fail if the assumptions about the environment are invalidated. In this thesis we will demonstrate how a robust grasping system can be built that is capable of operating under a more variable set of conditions without requiring significant engineering of behavior by a roboticist.
This robustness is enabled by a newfound ability to empower novel machine learning techniques with massive amounts of synthetic training data. The ability of simulators to create realistic sensory data enables the generation of massive corpora of labeled training data for various grasping-related tasks. The use of simulation allows for the creation of a wide variety of environments and experiences, exposing the robotic system to a large number of scenarios before it ever operates in the real world. This thesis demonstrates that it is now possible to build systems that work in the real world trained using deep learning on synthetic data. The sheer volume of data that can be produced via simulation enables the use of powerful deep learning techniques whose performance scales with the amount of data available. This thesis will explore how deep learning and other techniques can be used to encode these massive datasets for efficient runtime use. The ability to train and test on synthetic data allows for quick iterative development of new perception, planning and grasp execution algorithms that work in a large number of environments. Creative applications of machine learning and massive synthetic datasets are allowing robotic systems to learn skills and move beyond repetitive hardcoded tasks.