53 research outputs found

    A hardware-in-the-loop testing facility for unmanned aerial vehicle sensor suites and control algorithms

    Get PDF
    In the past decade Unmanned Aerial Vehicles (UAVs) have rapidly grown into a major field of robotics in both industry and academia. Many well-established platforms have been developed, and the demand continues to grow. However, the UAVs utilized in industry are predominantly remotely piloted aircraft offering very limited levels of autonomy. In contrast, fully autonomous flight has been achieved in research, and the degree of autonomy continues to grow, with research now focusing on advanced tasks such as navigating cluttered terrain and formation flying. The gap between academia and industry is the robustness of control algorithms. Academic research often focuses on proof-of-concept demonstrations with little or no consideration given to real-world concerns such as adverse weather or sensor integration. One of the goals of this thesis is to integrate real-world issues into the design process. A testing environment was designed and built that allows sensors and control algorithms to be tested against real obstacles and environmental conditions in a controlled, repeatable fashion. The use of this facility is demonstrated in the implementation of a safe landing zone algorithm for a robotic helicopter equipped with a laser scanner. Results from tests conducted in the testing facility are used to analyze results from flights in the field. Controlling the testing environment also provides a baseline to evaluate different control solutions. In the current research paradigm, it is difficult to determine which research questions have been solved because the testing conditions vary from researcher to researcher. A common testing environment eliminates ambiguities and allows solutions to be characterized based on their performance in different terrains and environmental conditions. This thesis explores how flight tests can be conducted in the lab using the actual hardware and control algorithms.
    The sensor package is attached to a 6-DOF gantry whose motion is governed by the dynamic model of the aircraft. To provide an expansive terrain over which the flight can be conducted, a scaled model of the environment was created. The feasibility of using a scaled environment is demonstrated with a common sensor package and control task: using computer vision to guide an autonomous helicopter. The effects of scaling are investigated, and the approach is validated by comparing results in the scaled model to actual flights. Finally, it is demonstrated how the facility can be used to investigate the effect of adverse conditions on control algorithm performance. The overarching philosophy of this work is that incorporating real-world concerns into the design process leads to more fully developed and robust solutions. Ph.D., Mechanical Engineering -- Drexel University, 201
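The hardware-in-the-loop idea described above can be sketched in a few lines: the control algorithm runs against a software dynamic model of the aircraft, and the resulting vehicle pose is what would be commanded to the gantry carrying the real sensor package. This is a minimal illustrative sketch, not the thesis's actual system; the altitude-hold controller, gains, and 1-D dynamics are all assumptions for illustration.

```python
# Minimal hardware-in-the-loop (HIL) sketch: the controller acts on a
# simulated aircraft state, and each integrated pose is what would be sent
# to the 6-DOF gantry carrying the real sensors.

def pd_controller(z, vz, z_target, kp=2.0, kd=1.5):
    """PD altitude controller: thrust correction from position/velocity error."""
    return kp * (z_target - z) - kd * vz

def simulate_hil(z_target=1.0, dt=0.01, steps=2000, mass=1.0, g=9.81):
    z, vz = 0.0, 0.0                      # vehicle state: altitude, climb rate
    trajectory = []
    for _ in range(steps):
        thrust = mass * g + pd_controller(z, vz, z_target)
        az = thrust / mass - g            # vertical acceleration
        vz += az * dt                     # Euler integration of the
        z += vz * dt                      # aircraft dynamic model
        trajectory.append(z)              # this pose would drive the gantry
    return trajectory
```

In a real facility the simulated state update would be replaced by the same integration step, but sensor readings would come from the physical package moving over the scaled terrain rather than from the model.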

    Advances in Simultaneous Localization and Mapping in Confined Underwater Environments Using Sonar and Optical Imaging.

    Full text link
    This thesis reports on the incorporation of surface information into a probabilistic simultaneous localization and mapping (SLAM) framework used on an autonomous underwater vehicle (AUV) designed for underwater inspection. AUVs operating in cluttered underwater environments, such as ship hulls or dams, are commonly equipped with Doppler-based sensors, which---in addition to navigation---provide a sparse representation of the environment in the form of a three-dimensional (3D) point cloud. The goal of this thesis is to develop perceptual algorithms that take full advantage of these sparse observations for correcting navigational drift and building a model of the environment. In particular, we focus on three objectives. First, we introduce a novel representation of this 3D point cloud as collections of planar features arranged in a factor graph. This factor graph representation probabilistically infers the spatial arrangement of each planar segment and can effectively model smooth surfaces (such as a ship hull). Second, we show how this technique can produce 3D models that serve as input to our pipeline that produces the first-ever 3D photomosaics using a two-dimensional (2D) imaging sonar. Finally, we propose a model-assisted bundle adjustment (BA) framework that allows for robust registration between surfaces observed from a Doppler sensor and visual features detected from optical images. Throughout this thesis, we show methods that produce 3D photomosaics using a combination of triangular meshes (derived from our SLAM framework or given a priori), optical images, and sonar images. Overall, the contributions of this thesis greatly increase the accuracy, reliability, and utility of in-water ship hull inspection with AUVs despite the challenges they face in underwater environments.
    We provide results using the Hovering Autonomous Underwater Vehicle (HAUV) for autonomous ship hull inspection, which serves as the primary testbed for the algorithms presented in this thesis. The sensor payload of the HAUV consists primarily of: a Doppler velocity log (DVL) for underwater navigation and ranging, monocular and stereo cameras, and---for some applications---an imaging sonar. PhD, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/120750/1/paulozog_1.pd
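The planar-feature representation above starts from a basic step: fitting a plane to a sparse DVL-style point cloud. The sketch below is illustrative only (not the thesis's factor-graph code, which infers plane parameters probabilistically); it shows a plain least-squares fit of z = a*x + b*y + c, the kind of planar segment that would become a node in the factor graph.

```python
# Illustrative planar-feature extraction: least-squares fit of
# z = a*x + b*y + c to sparse (x, y, z) range samples.

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        pivot = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def fit_plane(points):
    """Fit z = a*x + b*y + c to (x, y, z) samples via the normal equations."""
    A = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for x, y, z in points:
        row = (x, y, 1.0)
        for i in range(3):
            for j in range(3):
                A[i][j] += row[i] * row[j]
            b[i] += row[i] * z
    return solve3(A, b)   # (a, b, c)
```

A probabilistic treatment would additionally carry an uncertainty on (a, b, c), which is what lets the factor graph fuse overlapping planar segments into a smooth hull model.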

    Robot manipulation in human environments

    Get PDF
    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007. Includes bibliographical references (p. 211-228). Human environments present special challenges for robot manipulation. They are often dynamic, difficult to predict, and beyond the control of a robot engineer. Fortunately, many characteristics of these settings can be used to a robot's advantage. Human environments are typically populated by people, and a robot can rely on the guidance and assistance of a human collaborator. Everyday objects exhibit common, task-relevant features that reduce the cognitive load required for the object's use. Many tasks can be achieved through the detection and control of these sparse perceptual features. And finally, a robot is more than a passive observer of the world. It can use its body to reduce its perceptual uncertainty about the world. In this thesis we present advances in robot manipulation that address the unique challenges of human environments. We describe the design of a humanoid robot named Domo, develop methods that allow Domo to assist a person in everyday tasks, and discuss general strategies for building robots that work alongside people in their homes and workplaces. by Aaron Ladd Edsinger. Ph.D

    Visual Servoing

    Get PDF
    The goal of this book is to introduce current visual servoing applications by leading researchers in the field, and to offer knowledge that can be applied widely in other areas. The book collects the main current studies on machine vision worldwide, and makes a strong case for the applications in which machine vision is employed. The contents demonstrate how machine vision theory is realized in different fields. For the beginner, it is an accessible introduction to developments in visual servoing. Engineers, professors, and researchers can study the chapters and then apply the methods in other applications.
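The core of visual servoing is a control law that drives the image-plane error between current and desired feature positions to zero. The sketch below is a deliberately simplified illustration, not taken from this book: for a camera translating parallel to the image plane at fixed depth, pixel motion is proportional to camera motion, so a proportional law converges. The gain `lam` and the single-point feature are assumptions for illustration.

```python
# Minimal image-based visual servoing sketch: proportional control on the
# image-plane error of one tracked feature point.

def visual_servo(feature, target, lam=0.5, steps=50):
    """Iteratively move the camera so the tracked feature reaches the target pixel."""
    fx, fy = feature
    tx, ty = target
    for _ in range(steps):
        ex, ey = tx - fx, ty - fy         # image-plane error (pixels)
        fx += lam * ex                    # proportional update: the commanded
        fy += lam * ey                    # camera velocity shifts the feature
    return fx, fy
```

Realistic systems relate feature velocity to the full 6-DOF camera velocity through an interaction matrix and use its pseudo-inverse; the proportional structure of the loop is the same.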

    The development of a hybrid virtual reality/video view-morphing display system for teleoperation and teleconferencing

    Get PDF
    Thesis (S.M.)--Massachusetts Institute of Technology, System Design & Management Program, 2000. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 84-89). The goal of this study is to extend the desktop panoramic static image viewer concept (e.g., Apple QuickTime VR; IPIX) to support immersive real time viewing, so that an observer wearing a head-mounted display can make free head movements while viewing dynamic scenes rendered in real time stereo using video data obtained from a set of fixed cameras. Computational experiments by Seitz and others have demonstrated the feasibility of morphing image pairs to render stereo scenes from novel, virtual viewpoints. The user can interact both with morphed real world video images, and supplementary artificial virtual objects (“Augmented Reality”). The inherent congruence of the real and artificial coordinate frames of this system reduces registration errors commonly found in Augmented Reality applications. In addition, the user’s eyepoint is computed locally so that any scene lag resulting from head movement will be less than that from alternative technologies using remotely controlled ground cameras. For space applications, this can significantly reduce the apparent lag due to satellite communication delay. This hybrid VR/view-morphing display (“Virtual Video”) has many important NASA applications including remote teleoperation, crew onboard training, private family and medical teleconferencing, and telemedicine. The technical objective of this study was to develop, on a 3D graphics PC workstation, a proof-of-concept system for one of Virtual Video's component technologies, Immersive Omnidirectional Video.
    The management goal was to identify a system process for planning, managing, and tracking the integration, test, and validation of this phased, 3-year multi-university research and development program. by William E. Hutchison. S.M
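The view-morphing result by Seitz referenced above rests on a simple core step: given corresponding points in two rectified (parallel-camera) views, a physically valid in-between view is obtained by linear interpolation of positions. The sketch below shows only that interpolation step, as an illustration; real view-morphing pipelines also prewarp the input images to the rectified configuration and postwarp the result.

```python
# Core interpolation step of view morphing: blend corresponding 2D points
# between two rectified views.  s = 0 reproduces view 0, s = 1 view 1.

def morph_points(pts0, pts1, s):
    """Linearly interpolate corresponding (x, y) points between two views."""
    return [((1.0 - s) * x0 + s * x1, (1.0 - s) * y0 + s * y1)
            for (x0, y0), (x1, y1) in zip(pts0, pts1)]
```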

    An annotated bibliography of multisensor integration

    Get PDF
    Technical report. In this paper we give an annotated bibliography of the multisensor integration literature.

    Intelligent Agent Architectures: Reactive Planning Testbed

    Get PDF
    An Integrated Agent Architecture (IAA) is a framework or paradigm for constructing intelligent agents. Intelligent agents are collections of sensors, computers, and effectors that interact with their environments in real time in goal-directed ways. Because of the complexity involved in designing intelligent agents, it has been found useful to approach the construction of agents with some organizing principle, theory, or paradigm that gives shape to the agent's components and structures their relationships. Given the wide variety of approaches being taken in the field, the question naturally arises: Is there a way to compare and evaluate these approaches? The purpose of the present work is to develop common benchmark tasks and evaluation metrics to which intelligent agents, including complex robotic agents, constructed using various architectural approaches can be subjected.
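The kind of common benchmark the abstract argues for can be sketched in miniature: every agent is run on the same task set and scored with the same metric, so results are directly comparable across architectures. This sketch is illustrative only; the agent and task interfaces are assumptions, not the testbed's actual API.

```python
# Illustrative benchmark harness: score each agent on a shared task set with
# a shared metric, so architectural approaches can be compared directly.

def run_benchmark(agents, tasks, trials=1):
    """Return {agent_name: mean score} over all tasks and trials."""
    results = {}
    for name, agent in agents.items():
        scores = [task(agent) for task in tasks for _ in range(trials)]
        results[name] = sum(scores) / len(scores)
    return results
```

Here an agent is any callable and each task is a function that exercises the agent and returns a score in [0, 1]; a real testbed would substitute simulated or physical environments for the tasks.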

    A system for eye-directed control in a split-foveal-peripheral display

    Get PDF
    In this thesis an eye-directed controller is developed that slaves the narrow-field display within a split-foveal-peripheral display system to the operator's gaze position. A neural network controller is proposed that directly maps the gaze position to the narrow-field projection co-ordinates without the need for any axis or co-ordinate transformations. A novel image feature-extraction algorithm, for extraction of the pupil-Purkinje difference measure, has been developed that exhibits robust and reproducible real-time performance. By providing foveal and peripheral vision in a far-field teleoperator through the eye-directed split-foveal-peripheral display, visual information is sufficiently and naturally provided for the establishment of telepresence. Dissertation (M Eng (Electronic Engineering))--University of Pretoria, 2007. Electrical, Electronic and Computer Engineering. unrestricte
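The gaze-to-display mapping above is learned by a neural network in the thesis; as a simplified stand-in, the sketch below calibrates a per-axis linear map from a pupil-Purkinje-style gaze measurement to narrow-field display coordinates by closed-form least squares. The separable linear model, calibration points, and function names are assumptions for illustration; the structure of the loop is the same either way: measure gaze, map it, reposition the narrow-field display.

```python
# Simplified gaze-to-display calibration: independent 1-D least-squares fits
# for the x and y display axes (a stand-in for the thesis's neural network).

def fit_axis(g, s):
    """1-D least squares: a, b minimizing sum((a*g_i + b - s_i)^2)."""
    n = len(g)
    mg, ms = sum(g) / n, sum(s) / n
    a = sum((gi - mg) * (si - ms) for gi, si in zip(g, s)) / \
        sum((gi - mg) ** 2 for gi in g)
    return a, ms - a * mg

def calibrate(gaze_pts, screen_pts):
    """Fit independent linear maps for the x and y display axes."""
    ax = fit_axis([g[0] for g in gaze_pts], [s[0] for s in screen_pts])
    ay = fit_axis([g[1] for g in gaze_pts], [s[1] for s in screen_pts])
    return ax, ay

def gaze_to_display(gaze, ax, ay):
    """Map a gaze measurement to narrow-field display coordinates."""
    return ax[0] * gaze[0] + ax[1], ay[0] * gaze[1] + ay[1]
```

A neural network earns its keep when the mapping is not separable or not linear (e.g. under head movement or display distortion), which is the case the thesis addresses.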

    Planetary terminal descent and landing radar: Final report

    Get PDF
    Development and flight test of a bi-mode radar for three-dimensional velocity readout and slant-range measurement to ground over a wide range of velocities and altitudes.