
    Vision-Aided Autonomous Precision Weapon Terminal Guidance Using a Tightly-Coupled INS and Predictive Rendering Techniques

    This thesis documents the development of the Vision-Aided Navigation using Statistical Predictive Rendering (VANSPR) algorithm, which seeks to enhance the endgame navigation solution beyond what is possible with inertial measurements alone. The eventual goal is a precision weapon that does not rely on GPS, functions autonomously, thrives in complex 3-D environments, and is impervious to jamming. The predictive rendering is performed by viewpoint manipulation of computer-generated models of target objects. A navigation solution is determined by an Unscented Kalman Filter (UKF), which corrects positional errors by comparing camera images with a collection of statistically significant virtual images. Results indicate that the test algorithm is a viable method of aiding an inertial-only navigation system to achieve the precision necessary for most tactical strikes. On 14 flight test runs, the average positional error was 166 feet at endgame, compared with an inertial-only error of 411 feet.
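As a hedged illustration of the UKF correction step described above, a minimal unscented measurement update might look like the sketch below. The state and measurement models here are invented for the example (a direct position fix in feet), not taken from the thesis, where the measurement would embody the predictive-rendering image comparison.

```python
import numpy as np

def sigma_points(mu, P, kappa=1.0):
    """Generate the 2n+1 sigma points and weights for mean mu, covariance P."""
    n = len(mu)
    L = np.linalg.cholesky((n + kappa) * P)   # matrix square root of (n+kappa)P
    pts = [mu] + [mu + L[:, i] for i in range(n)] + [mu - L[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return np.array(pts), w

def ukf_update(mu, P, z, h, R):
    """One UKF measurement update: correct state (mu, P) with measurement z
    observed through the (possibly nonlinear) measurement function h."""
    X, w = sigma_points(mu, P)
    Z = np.array([h(x) for x in X])           # push sigma points through h
    z_hat = w @ Z                             # predicted measurement
    Pzz = sum(wi * np.outer(zi - z_hat, zi - z_hat) for wi, zi in zip(w, Z)) + R
    Pxz = sum(wi * np.outer(xi - mu, zi - z_hat) for wi, xi, zi in zip(w, X, Z))
    K = Pxz @ np.linalg.inv(Pzz)              # Kalman gain
    return mu + K @ (z - z_hat), P - K @ Pzz @ K.T

# Toy example: a drifted inertial position estimate (feet) is pulled back
# toward a vision-derived position fix; all numbers are illustrative only.
mu, P = np.array([400.0, 0.0]), np.diag([200.0**2, 200.0**2])
z, R = np.array([10.0, 5.0]), np.diag([50.0**2, 50.0**2])
mu_new, P_new = ukf_update(mu, P, z, lambda x: x, R)
```

With a linear identity measurement, as here, the update reduces exactly to a standard Kalman correction; the unscented machinery matters when h is a nonlinear rendering-and-comparison step.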

    Estimating Epipolar Geometry With The Use of a Camera Mounted Orientation Sensor

    Context: Image processing and computer vision are rapidly becoming commonplace, and the amount of information about a scene, such as 3D geometry, that can be obtained from one or more images is steadily increasing, driven by higher resolutions, the availability of imaging sensors, and an active research community. In parallel, advances in hardware design and manufacturing allow devices such as gyroscopes, accelerometers, magnetometers, and GPS receivers to be included alongside imaging devices at the consumer level. Aims: This work investigates the use of orientation sensors in computer vision as sources of data to aid image processing and the determination of a scene's geometry, in particular the epipolar geometry of a pair of images, and devises a hybrid methodology from two sets of previous works in order to exploit the information available from orientation sensors alongside data gathered from image processing techniques. Method: A readily available consumer-level orientation sensor was used alongside a digital camera to capture images of a set of scenes and record the orientation of the camera. The fundamental matrix of each image pair was calculated using a variety of techniques, both incorporating and excluding data from the orientation sensor. Results: Some methodologies could not produce an acceptable fundamental matrix for certain image pairs. A method from the literature that used an orientation sensor always produced a result, but in cases where the hybrid or purely computer-vision methods also succeeded, it was found to be the least accurate.
    Conclusion: The use of an orientation sensor alongside an imaging device can improve both the accuracy and reliability of calculations of a scene's geometry; however, noise from the orientation sensor can limit this accuracy, and further research is needed to determine the magnitude of this problem and methods of mitigation.
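The geometric relationship behind sensor-aided epipolar estimation can be sketched as follows: if an orientation sensor supplies the relative rotation R between two views, and the relative translation t and shared intrinsics K are known, the fundamental matrix follows from the textbook construction F = K⁻ᵀ[t]ₓR K⁻¹. This is the standard formula, not the thesis's hybrid method, and all values below are synthetic.

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]_x, so that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def fundamental_from_pose(K, R, t):
    """F = K^-T [t]_x R K^-1 for two cameras sharing intrinsics K, where the
    second camera sees a world point X as R @ X + t."""
    E = skew(t) @ R                  # essential matrix
    Kinv = np.linalg.inv(K)
    return Kinv.T @ E @ Kinv

# Synthetic check of the epipolar constraint x2^T F x1 = 0.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
th = 0.1                             # small yaw between the two views
R = np.array([[np.cos(th), 0.0, np.sin(th)],
              [0.0, 1.0, 0.0],
              [-np.sin(th), 0.0, np.cos(th)]])
t = np.array([1.0, 0.0, 0.0])        # sideways baseline
F = fundamental_from_pose(K, R, t)

X = np.array([0.2, -0.1, 4.0])       # a 3D point in front of both cameras
x1 = K @ X; x1 /= x1[2]              # pixel coordinates in view 1
x2 = K @ (R @ X + t); x2 /= x2[2]    # pixel coordinates in view 2
residual = x2 @ F @ x1               # ~0 for a true correspondence
```

In practice t is unknown, which is why purely vision-based methods estimate F from point correspondences; the sensor's R reduces the number of unknowns that must come from the images.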

    Mechatronic Systems

    Mechatronics, the synergistic blend of mechanics, electronics, and computer science, has evolved over the past twenty-five years, leading to a novel stage of engineering design. By integrating the best design practices with the most advanced technologies, mechatronics aims at realizing high-quality products while guaranteeing a substantial reduction in the time and cost of manufacturing. Mechatronic systems are manifold and range from machine components, motion generators, and power-producing machines to more complex devices, such as robotic systems and transportation vehicles. With its twenty chapters, which collect contributions from many researchers worldwide, this book provides an excellent survey of recent work in the field of mechatronics, with applications in fields such as robotics, medical and assistive technology, human-machine interaction, unmanned vehicles, manufacturing, and education. We would like to thank all the authors who have invested a great deal of time to write such interesting chapters, which we are sure will be valuable to the readers. Chapters 1 to 6 deal with applications of mechatronics to the development of robotic systems. Medical and assistive technologies and human-machine interaction systems are the topic of chapters 7 to 13. Chapters 14 and 15 concern mechatronic systems for autonomous vehicles. Chapters 16 to 19 deal with mechatronics in manufacturing contexts. Chapter 20 concludes the book, describing a method for introducing mechatronics education in schools.

    Conceptual design study for a teleoperator visual system, phase 2

    An analysis of the concept for the hybrid stereo-monoscopic television visual system is reported. The visual concept is described along with the following subsystems: illumination, deployment/articulation, telecommunications, visual displays, and the controls and display station.

    Reconfigurable AUV for Intervention Missions: A Case Study on Underwater Object Recovery

    Starting in January 2009, the RAUVI (Reconfigurable Autonomous Underwater Vehicle for Intervention Missions) project is a 3-year coordinated research action funded by the Spanish Ministry of Research and Innovation. In this paper, the state of progress after 2 years of continuous research is reported. As a first experimental validation of the complete system, a search-and-recovery problem is addressed, consisting of finding and recovering a flight data recorder placed at an unknown position at the bottom of a water tank. An overview of the techniques used to successfully solve the problem in an autonomous way is provided. The obtained results are very promising and are the first step toward the final test in shallow water at the end of 2011.

    The Balloon-borne Large Aperture Submillimeter Telescope: BLAST

    The Balloon-borne Large Aperture Submillimeter Telescope (BLAST) is a sub-orbital surveying experiment designed to study the evolutionary history and processes of star formation in local galaxies (including the Milky Way) and galaxies at cosmological distances. The BLAST continuum camera, which consists of 270 detectors distributed between 3 arrays, observes simultaneously in broad-band (30%) spectral windows at 250, 350, and 500 microns. The optical design is based on a 2 m diameter telescope, providing a diffraction-limited resolution of 30" at 250 microns. The gondola pointing system enables raster mapping of arbitrary geometry, with a repeatable positional accuracy of ~30"; post-flight pointing reconstruction to ~5" rms is achieved. The on-board telescope control software permits autonomous execution of a pre-selected set of maps, with the option of manual override. In this paper we describe the primary characteristics and measured in-flight performance of BLAST. BLAST performed a test flight in 2003 and has since made two scientifically productive long-duration balloon flights: a 100-hour flight from ESRANGE (Kiruna), Sweden to Victoria Island, northern Canada in June 2005; and a 250-hour circumpolar flight from McMurdo Station, Antarctica, in December 2006. (38 pages, 11 figures; version accepted for publication in the Astrophysical Journal; related results available at http://blastexperiment.info)
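The quoted 30" resolution is consistent with the Rayleigh diffraction criterion θ ≈ 1.22 λ/D for a 2 m aperture observing at 250 microns, as a quick back-of-the-envelope check shows:

```python
import math

def rayleigh_limit_arcsec(wavelength_m, aperture_m):
    """Diffraction-limited angular resolution (Rayleigh criterion), in arcseconds."""
    theta_rad = 1.22 * wavelength_m / aperture_m
    return theta_rad * (180.0 / math.pi) * 3600.0

# BLAST: 2 m primary mirror, 250 micron band
res = rayleigh_limit_arcsec(250e-6, 2.0)   # ~31.5 arcsec, close to the quoted 30"
```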

    Surveyor 1 mission report. Part 1 - Mission description and performance

    Surveyor 1 mission description and performance for the preflight and flight phases.

    Autonomous Aerial Manipulation Using a Quadrotor

    This paper presents an implementation of autonomous indoor aerial gripping using a low-cost, custom-built quadrotor. Such research extends the typical functionality of micro air vehicles (MAVs) from passive observation and sensing to dynamic interaction with the environment. To achieve this, three major challenges are overcome: precise positioning; sensing and manipulation of the object; and stabilization in the presence of disturbances due to interaction with the object. Navigation in both indoor and outdoor unstructured, Global Positioning System-denied (GPS-denied) environments is achieved using a visual Simultaneous Localization and Mapping (SLAM) algorithm that relies on an onboard monocular camera. A secondary camera, capable of detecting infrared light sources, is used to estimate the 3D location of the object, while an under-actuated and passively compliant manipulator is designed for effective gripping under uncertainty. The system uses nested Proportional-Integral-Derivative (PID) controllers for attitude stabilization, vision-based navigation, and gripping. The quadrotor is therefore able to autonomously navigate, locate, and grasp an object using only onboard sensors.
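The nested PID arrangement described above can be sketched with an outer position loop feeding a setpoint to an inner velocity loop. The gains and the 1-D double-integrator plant below are illustrative choices for the example, not the paper's controller or vehicle model.

```python
class PID:
    """Minimal PID controller with a fixed timestep."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Nested loops on a 1-D double integrator: the outer loop turns position error
# into a velocity setpoint; the inner loop turns velocity error into an
# acceleration command (standing in for an attitude/thrust command).
dt = 0.02
outer = PID(kp=1.0, ki=0.0, kd=0.0, dt=dt)   # position -> velocity setpoint
inner = PID(kp=4.0, ki=0.0, kd=0.0, dt=dt)   # velocity -> acceleration command
x, v, target = 0.0, 0.0, 1.0
for _ in range(1000):                         # 20 s of simulated motion
    v_sp = outer.step(target, x)
    a = inner.step(v_sp, v)
    v += a * dt                               # integrate the plant
    x += v * dt
```

With these gains the closed loop is critically damped, so the simulated position settles at the 1.0 m target without overshoot; on the real vehicle, separate nested loops of this shape handle attitude, navigation, and gripping.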