5,474 research outputs found

    NASA Tech Briefs, June 2006

    Topics covered include: Magnetic-Field-Response Measurement-Acquisition System; Platform for Testing Robotic Vehicles on Simulated Terrain; Interferometer for Low-Uncertainty Vector Metrology; Rayleigh Scattering for Measuring Flow in a Nozzle Testing Facility; "Virtual Feel" Capaciflectors; FETs Based on Doped Polyaniline/Polyethylene Oxide Fibers; Miniature Housings for Electronics With Standard Interfaces; Integrated Modeling Environment; Modified Recursive Hierarchical Segmentation of Data; Sizing Structures and Predicting Weight of a Spacecraft; Stress Testing of Data-Communication Networks; Framework for Flexible Security in Group Communications; Software for Collaborative Use of Large Interactive Displays; Microsphere Insulation Panels; Single-Wall Carbon Nanotube Anodes for Lithium Cells; Tantalum-Based Ceramics for Refractory Composites; Integral Flexure Mounts for Metal Mirrors for Cryogenic Use; Templates for Fabricating Nanowire/Nanoconduit-Based Devices; Measuring Vapors To Monitor the State of Cure of a Resin; Partial-Vacuum-Gasketed Electrochemical Corrosion Cell; Theodolite Ring Lights; Integrating Terrain Maps Into a Reactive Navigation Strategy; Reducing Centroid Error Through Model-Based Noise Reduction; Adaptive Modeling Language and Its Derivatives; Stable Satellite Orbits for Global Coverage of the Moon; and Low-Cost Propellant Launch From a Tethered Balloon.

    Reducing Centroid Error Through Model-Based Noise Reduction

    A method of processing the digitized output of a charge-coupled device (CCD) image detector has been devised to enable reduction of the error in the computed centroid of the image of a point source of light. The method involves model-based estimation of, and correction for, the contributions of bias and noise to the image data. The method could be used to advantage in any of a variety of applications in which there are requirements for measuring precise locations of, and/or precisely aiming optical instruments toward, point light sources. In the present method, prior to normal operation of the CCD, one measures the point-spread function (PSF) of the telescope or other optical system used to project images on the CCD. The PSF is used to construct a database of spot models representing the nominal CCD pixel outputs for a point light source projected onto the CCD at various positions incremented by small fractions of a pixel.
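
    The abstract only sketches the matching idea. As an illustrative aid, a minimal Python sketch of fitting a measured CCD spot against a pre-computed database of PSF-derived spot models might look like the following; the function name, the normalisation, and the least-squares criterion are assumptions made here for illustration, not the algorithm actually used in the brief.

        import numpy as np

        def best_fit_centroid(spot, models, offsets, bias=None):
            """Match a measured CCD spot against PSF-derived spot models
            (hypothetical interface, for illustration only).

            spot    : 2-D array of pixel values around the point-source image
            models  : list of 2-D arrays, the nominal pixel outputs for a point
                      source placed at known sub-pixel positions
            offsets : list of (dx, dy) sub-pixel positions matching `models`
            bias    : optional estimate of the detector bias to subtract
            """
            spot = np.asarray(spot, dtype=float)
            if bias is not None:
                spot = spot - bias                 # model-based bias correction
            spot = spot / spot.sum()               # normalise out total flux

            best_err, best_offset = np.inf, None
            for model, offset in zip(models, offsets):
                m = np.asarray(model, dtype=float)
                m = m / m.sum()
                err = np.sum((spot - m) ** 2)      # least-squares residual
                if err < best_err:
                    best_err, best_offset = err, offset
            return best_offset                     # sub-pixel centroid estimate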

    Vision-based legged robot navigation: localisation, local planning, learning

    Recent advances in legged locomotion control have made legged robots walk up staircases, go deep into underground caves, and walk in the forest. Nevertheless, achieving such tasks autonomously is still a challenge. Navigating and accomplishing missions in the wild relies not only on robust low-level controllers but also on higher-level representations and perceptual systems that are aware of the robot's capabilities. This thesis addresses the navigation problem for legged robots. The contributions are four systems designed to exploit unique characteristics of these platforms, from the sensing setup to their advanced mobility skills over different terrain. The systems address localisation, scene understanding, and local planning, and advance the capabilities of legged robots in challenging environments. The first contribution tackles localisation with multi-camera setups available on legged platforms. It proposes a strategy to actively switch between the cameras and stay localised while operating in a visual teach and repeat context, in spite of transient changes in the environment. The second contribution focuses on local planning, effectively adding a safety layer for robot navigation. The approach uses a local map built on-the-fly to generate efficient vector field representations that enable fast and reactive navigation. The third contribution demonstrates how to improve local planning in natural environments by learning robot-specific traversability from demonstrations. The approach leverages classical and learning-based methods to enable online, onboard traversability learning. These systems are demonstrated via different robot deployments in industrial facilities, underground mines, and parklands. The thesis concludes by presenting a real-world application: an autonomous forest inventory system with legged robots. This last contribution presents a mission planning system for autonomous surveying as well as a data analysis pipeline to extract forestry attributes. The approach was experimentally validated in a field campaign in Finland, evidencing the potential that legged platforms offer for future applications in the wild.
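
    The second contribution (vector-field local planning over an on-the-fly local map) is only named in the abstract. Purely as a generic illustration in the spirit of classical potential-field planners, a reactive vector-field step might be sketched as below; the gains, influence radius, and function are assumptions, not the thesis' actual formulation.

        import numpy as np

        def reactive_velocity(robot_xy, goal_xy, obstacles_xy,
                              k_goal=1.0, k_obs=0.5, influence=1.5):
            """Toy vector-field local planner: attraction towards the goal plus
            repulsion from obstacle cells in a local map (all values assumed)."""
            robot_xy = np.asarray(robot_xy, dtype=float)
            goal_xy = np.asarray(goal_xy, dtype=float)

            # Attractive component pointing at the goal (unit direction).
            to_goal = goal_xy - robot_xy
            v = k_goal * to_goal / (np.linalg.norm(to_goal) + 1e-9)

            # Repulsive components from obstacles within the influence radius.
            for obs in np.asarray(obstacles_xy, dtype=float).reshape(-1, 2):
                away = robot_xy - obs
                d = np.linalg.norm(away)
                if 1e-9 < d < influence:
                    v += k_obs * (1.0 / d - 1.0 / influence) * away / d**2
            return v  # commanded planar direction, to be tracked by the low-level controller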

    Viewfinder: final activity report

    The VIEW-FINDER project (2006-2009) is an 'Advanced Robotics' project that seeks to apply a semi-autonomous robotic system to inspect ground safety in the event of a fire. Its primary aim is to gather data (visual and chemical) in order to assist rescue personnel. A base station combines the gathered information with information retrieved from off-site sources. The project addresses key issues related to map building and reconstruction, interfacing local command information with external sources, human-robot interfaces, and semi-autonomous robot navigation. The VIEW-FINDER system is semi-autonomous: the individual robot-sensors operate autonomously within the limits of the task assigned to them, that is, they will autonomously navigate through and inspect an area. Human operators monitor their operations and send high-level task requests as well as low-level commands through the interface to any nodes in the system. The human interface has to ensure that the human supervisor and human interveners are given a reduced but relevant overview of the ground area, the robots, and the human rescue workers within it.

    Logical behaviors

    In this paper we describe an approach to high-level multisensor integration in the context of an autonomous mobile robot. Previous papers have described the development of the INRIA mobile robot subsystems: (1) sensor and actuator systems; (2) distance and range analysis; (3) feature extraction and segmentation; (4) motion detection; (5) uncertainty management; and (6) 3-D environment descriptions. We describe here an approach to the semantic analysis of the 3-D environment descriptions.

    Mobile Robots Navigation

    Mobile robot navigation includes different interrelated activities: (i) perception, as obtaining and interpreting sensory information; (ii) exploration, as the strategy that guides the robot to select the next direction to go; (iii) mapping, involving the construction of a spatial representation by using the sensory information perceived; (iv) localization, as the strategy to estimate the robot position within the spatial map; (v) path planning, as the strategy to find a path, optimal or not, towards a goal location; and (vi) path execution, where motor actions are determined and adapted to environmental changes. The book addresses those activities by integrating results from the research work of several authors all over the world. Research cases are documented in 32 chapters organized into 7 categories, described next.
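
    Purely as an illustration of how the six activities listed above might fit together in one control loop, a skeleton could be organised as follows; none of the class or method names come from the book.

        class NavigationLoop:
            """Illustrative skeleton wiring together the six activities above."""

            def __init__(self, perception, explorer, mapper, localizer, planner, controller):
                self.perception = perception   # (i)   obtain and interpret sensor data
                self.explorer = explorer       # (ii)  choose the next direction to go
                self.mapper = mapper           # (iii) build the spatial representation
                self.localizer = localizer     # (iv)  estimate the pose within the map
                self.planner = planner         # (v)   find a path towards the goal
                self.controller = controller   # (vi)  turn the path into motor actions

            def step(self, sensors, robot):
                percept = self.perception.process(sensors)
                pose = self.localizer.estimate(percept, self.mapper.map)
                self.mapper.update(percept, pose)
                goal = self.explorer.next_goal(self.mapper.map, pose)
                path = self.planner.plan(self.mapper.map, pose, goal)
                self.controller.follow(path, robot)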