
    Monocular navigation for long-term autonomy

    We present a reliable and robust monocular navigation system for an autonomous vehicle. The proposed method is computationally efficient, requires only off-the-shelf equipment and does not need any additional infrastructure such as radio beacons or GPS. Unlike traditional localization algorithms, which use advanced mathematical methods to determine vehicle position, our method takes a more practical approach: an image-feature-based monocular vision technique determines only the heading of the vehicle, while the vehicle's odometry is used to estimate the distance traveled. We present a mathematical proof and experimental evidence indicating that the localization error of a robot guided by this principle is bounded. The experiments demonstrate that the method can cope with variable illumination, poor lighting and both short- and long-term environment changes, which makes it especially suitable for deployment in scenarios that require long-term autonomous operation.
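    The heading-only idea lends itself to a compact illustration. The sketch below is a hypothetical reconstruction, not the authors' code; the histogram-voting step and all constants are assumptions. It estimates the dominant horizontal shift between features matched in a taught image and the current view; steering to cancel this shift maintains the taught heading while odometry measures the distance traveled.

```python
import numpy as np

def heading_offset(ref_x, cur_x, bin_width=5.0):
    """Dominant horizontal shift (pixels) between matched image features
    in the taught view (ref_x) and the current view (cur_x)."""
    offsets = np.asarray(cur_x, dtype=float) - np.asarray(ref_x, dtype=float)
    # Vote in coarse bins so a few bad matches cannot sway the estimate.
    bins = np.round(offsets / bin_width) * bin_width
    values, counts = np.unique(bins, return_counts=True)
    winner = values[np.argmax(counts)]
    # Refine with the mean of only the offsets in the winning bin.
    return offsets[bins == winner].mean()

# Toy example: three matches agree on a ~12 px shift; two are outliers.
ref = [100.0, 220.0, 340.0, 410.0, 505.0]
cur = [112.0, 231.0, 352.0, 180.0, 760.0]
print(heading_offset(ref, cur))  # ~11.7 px -> steer to drive this to zero
```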

    NASA Automated Rendezvous and Capture Review. Executive summary

    In support of the Cargo Transfer Vehicle (CTV) Definition Studies in FY-92, the Advanced Program Development division of the Office of Space Flight at NASA Headquarters conducted an evaluation and review of U.S. capabilities and the state of the art in Automated Rendezvous and Capture (AR&C). The review was held in Williamsburg, Virginia on 19-21 Nov. 1991 and included over 120 attendees from U.S. government organizations, industry, and universities. One hundred abstracts were submitted to the organizing committee for consideration, and forty-two were selected for presentation across five technical sessions. The papers addressed topics in five categories: (1) hardware systems and components; (2) software systems; (3) integrated systems; (4) operations; and (5) supporting infrastructure.

    Simple yet efficient real-time pose-based action recognition

    Recognizing human actions is a core challenge for autonomous systems, as they directly share the same space with humans. Systems must be able to recognize and assess human actions in real-time. In order to train corresponding data-driven algorithms, a significant amount of annotated training data is required. We demonstrate a pipeline to detect humans, estimate their pose, track them over time and recognize their actions in real-time using standard monocular camera sensors. For action recognition, we encode the human pose into a new data format called Encoded Human Pose Image (EHPI), which can then be classified using standard methods from the computer vision community. With this simple procedure we achieve performance competitive with the state of the art in pose-based action detection while ensuring real-time operation. In addition, we show a use case in the context of autonomous driving to demonstrate how such a system can be trained to recognize human actions using simulation data.
    Comment: Submitted to IEEE Intelligent Transportation Systems Conference (ITSC) 2019. Code will be available soon at https://github.com/noboevbo/ehpi_action_recognitio
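    As an illustration of the encoding idea, the sketch below packs a sequence of 2D keypoints into an image-like tensor that a standard CNN could classify. The exact channel layout and normalization of the paper's EHPI are assumptions here; only the general pose-to-image principle follows the abstract.

```python
import numpy as np

def encode_ehpi(pose_sequence):
    """Encode a pose sequence as an image-like tensor (layout is an
    assumption, not the paper's exact EHPI format).
    pose_sequence: array of shape (frames, joints, 2) holding 2D keypoints.
    Returns a (joints, frames, 3) uint8 'image': x -> red, y -> green."""
    seq = np.asarray(pose_sequence, dtype=np.float32)
    # Normalize each coordinate axis over the whole sequence to [0, 255].
    mins = seq.min(axis=(0, 1), keepdims=True)
    maxs = seq.max(axis=(0, 1), keepdims=True)
    norm = (seq - mins) / np.maximum(maxs - mins, 1e-6) * 255.0
    ehpi = np.zeros((seq.shape[1], seq.shape[0], 3), dtype=np.uint8)
    ehpi[..., 0] = norm[..., 0].T  # x coordinates in the red channel
    ehpi[..., 1] = norm[..., 1].T  # y coordinates in the green channel
    return ehpi

# Toy usage: 32 frames of 15 joints -> a (15, 32, 3) image for a CNN.
print(encode_ehpi(np.random.rand(32, 15, 2) * 480.0).shape)
```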

    Design and Control of a Flight-Style AUV with Hovering Capability

    The small flight-style Delphin AUV is designed to evaluate the performance of a long-range survey AUV with the additional capability to hover and manoeuvre at slow speed. Delphin’s hull form is based on a scaled version of Autosub6000, and in addition to the main thruster and control surfaces at the rear of the vehicle, Delphin is equipped with four rim-driven tunnel thrusters. In order to reduce the development cycle time, Delphin was designed to use commercial-off-the-shelf (COTS) sensors and thrusters interfaced to a standard PC motherboard running the control software within the MS Windows environment. To further simplify development, the autonomy system uses the State-Flow Toolbox within the Matlab/Simulink environment. While the autonomy software is running, image processing routines within the commercial Scorpion Vision software are used for obstacle avoidance and target tracking. This runs as a parallel thread and passes results to Matlab via the TCP/IP communication protocol. The COTS-based development approach has proved effective. However, a powerful PC is required to run Matlab and Simulink effectively, and, due to the nature of the Windows environment, it is impossible to run the control in hard real-time. The autonomy system will be recoded to run under the Matlab Real-Time Windows Target in the near future. Experimental results are presented to demonstrate the performance and current capabilities of the vehicle.
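    The parallel-thread TCP/IP hand-off between the vision software and the control environment can be sketched generically. The snippet below is a Python stand-in; the message format, port number and field names are assumptions, and the actual Scorpion Vision/Matlab interface is not described here. It passes one detection record from a vision thread to a listening control process over newline-delimited JSON.

```python
import json
import socket
import threading
import time

def send_detections(host, port, detections):
    """Vision side: ship one result record as newline-delimited JSON."""
    msg = (json.dumps(detections) + "\n").encode("utf-8")
    with socket.create_connection((host, port), timeout=2.0) as sock:
        sock.sendall(msg)

def receive_one(port):
    """Control side: accept a single connection and decode one record."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("127.0.0.1", port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn, conn.makefile("r", encoding="utf-8") as stream:
            return json.loads(stream.readline())

# The receiver runs in a parallel thread, mirroring the arrangement in the
# abstract; the vision side then pushes one (hypothetical) detection.
result = {}
listener = threading.Thread(target=lambda: result.update(receive_one(5005)))
listener.start()
time.sleep(0.2)  # sketch-level synchronization: let the listener bind first
send_detections("127.0.0.1", 5005, {"target_bearing_deg": 12.5})
listener.join()
print(result)  # {'target_bearing_deg': 12.5}
```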

    ITERL: A Wireless Adaptive System for Efficient Road Lighting

    This work presents the development and construction of an adaptive street lighting system that improves safety at intersections, which is the result of applying low-power Internet of Things (IoT) techniques to intelligent transportation systems. A set of wireless sensor nodes using the Institute of Electrical and Electronics Engineers (IEEE) 802.15.4 standard with additional internet protocol (IP) connectivity measures both ambient conditions and vehicle transit. These measurements are sent to a coordinator node that collects and passes them to a local controller, which then makes decisions leading to the streetlight being turned on and its illumination level controlled. Streetlights are autonomous, powered by photovoltaic energy, and wirelessly connected, achieving a high degree of energy efficiency. Relevant data are also sent to the highway conservation center, allowing it to maintain up-to-date information for the system, enabling preventive maintenance.
    Consejería de Fomento y Vivienda, Junta de Andalucía (G-GI3002/IDI); Fondo Europeo de Desarrollo Regional (G-GI3002/IDI)
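    The controller's decision step reduces to a small rule, sketched below. All thresholds and dimming levels are illustrative assumptions, not values from the paper: the lamp stays off in daylight, idles at a low level at night, and rises to full output when a vehicle is detected.

```python
from dataclasses import dataclass

@dataclass
class SensorReport:
    ambient_lux: float      # ambient light level at the node
    vehicle_detected: bool  # transit sensor fired in the last window

def dimming_level(report, lux_threshold=20.0,
                  idle_level=0.3, active_level=1.0):
    """Streetlight output (0.0 = off, 1.0 = full) from one node's report.
    Thresholds and levels are illustrative, not values from the paper."""
    if report.ambient_lux >= lux_threshold:
        return 0.0  # enough daylight: keep the lamp off
    return active_level if report.vehicle_detected else idle_level

print(dimming_level(SensorReport(ambient_lux=3.0, vehicle_detected=True)))   # 1.0
print(dimming_level(SensorReport(ambient_lux=3.0, vehicle_detected=False)))  # 0.3
```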