
    Learning Pose Estimation for UAV Autonomous Navigation and Landing Using Visual-Inertial Sensor Data

    In this work, we propose a robust network-in-the-loop control system for autonomous navigation and landing of an Unmanned Aerial Vehicle (UAV). To estimate the UAV's absolute pose, we develop a deep neural network (DNN) architecture for visual-inertial odometry, which provides a robust alternative to traditional methods. We first evaluate the accuracy of the estimation by comparing the predictions of our model to traditional visual-inertial approaches on the publicly available EuRoC MAV dataset. The results indicate a clear improvement in pose estimation accuracy of up to 25% over the baseline. Finally, we integrate the data-driven estimator into the closed-loop flight control system of AirSim, a simulator available as a plugin for Unreal Engine, and provide simulation results for autonomous navigation and landing.
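
    As a rough illustration of how such a data-driven pose estimator can be structured, the sketch below fuses a small convolutional branch over an image pair with a recurrent branch over a short IMU window and regresses a translation plus unit quaternion. The PyTorch framing, layer sizes, input shapes, and fusion scheme are assumptions made for illustration, not the architecture reported in the abstract.

```python
# A minimal sketch, assuming a PyTorch setup: an image-pair CNN branch fused
# with an IMU LSTM branch, regressing translation plus a unit quaternion.
# Layer sizes, the fusion scheme, and input shapes are illustrative only.
import torch
import torch.nn as nn

class VIOPoseNet(nn.Module):
    def __init__(self, imu_dim=6):
        super().__init__()
        # Visual branch: small CNN over two grayscale frames stacked on channels.
        self.visual = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Inertial branch: LSTM over a short window of IMU samples (gyro + accel).
        self.inertial = nn.LSTM(input_size=imu_dim, hidden_size=64, batch_first=True)
        # Fusion head regresses a 7-D pose: translation (3) + quaternion (4).
        self.head = nn.Sequential(nn.Linear(32 + 64, 128), nn.ReLU(), nn.Linear(128, 7))

    def forward(self, image_pair, imu_window):
        v = self.visual(image_pair)              # (B, 32)
        _, (h, _) = self.inertial(imu_window)    # h: (1, B, 64)
        out = self.head(torch.cat([v, h[-1]], dim=1))
        t, q = out[:, :3], out[:, 3:]
        return t, nn.functional.normalize(q, dim=1)  # keep the quaternion unit-norm

# Dummy forward pass: a batch of 4 downscaled grayscale image pairs and 10-sample IMU windows.
net = VIOPoseNet()
t, q = net(torch.randn(4, 2, 240, 376), torch.randn(4, 10, 6))
print(t.shape, q.shape)  # torch.Size([4, 3]) torch.Size([4, 4])
```

    In practice such a regressor would be trained on ground-truth trajectories (e.g. EuRoC-style data) and evaluated against a classical visual-inertial baseline, as the abstract describes.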

    Visual Feedback Without Geometric Features Against Occlusion: A Walsh Basis

    Date of Online Publication: 09 January 2018
    For visual feedback without geometric features, this brief suggests applying a basis built from Walsh functions in order to reduce the off-line experimental cost. Depending on the resolution, the feedback is implementable and achieves closed-loop stability of dynamical systems as long as input-output linearity on the matrix space holds. Remarkably, part of the occlusion effect is rejected outright, and the remaining part is attenuated. The validity is confirmed by experimental feedback for nonplanar sloshing.
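
    To make the Walsh-basis idea concrete, the sketch below builds a sequency-ordered Walsh matrix from a Hadamard matrix and projects a camera frame onto the first few 2-D Walsh functions, yielding a low-dimensional feedback signal without geometric feature extraction. The resolution, ordering step, and NumPy/SciPy framing are illustrative assumptions, not the brief's experimental setup.

```python
# A minimal sketch, assuming the feedback uses low-order Walsh coefficients of
# the image rather than geometric features; the resolution (n) and sequency
# ordering step are illustrative.
import numpy as np
from scipy.linalg import hadamard

def walsh_basis(n):
    """Return an n x n Walsh (sequency-ordered Hadamard) matrix, n a power of 2."""
    H = hadamard(n)
    # Order rows by sequency (number of sign changes), which gives the Walsh order.
    sign_changes = np.sum(np.abs(np.diff(np.sign(H), axis=1)) > 0, axis=1)
    return H[np.argsort(sign_changes)]

def walsh_feedback(image, k=8):
    """Project a square 2-D image (side a power of 2) onto the first k x k Walsh functions."""
    n = image.shape[0]
    W = walsh_basis(n)
    coeffs = W @ image @ W.T / (n * n)   # separable 2-D Walsh transform
    return coeffs[:k, :k].ravel()        # low-resolution feedback vector

# Example: a 64 x 64 camera frame reduced to a 64-dimensional feedback signal.
frame = np.random.rand(64, 64)
y = walsh_feedback(frame, k=8)
print(y.shape)  # (64,)
```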

    Bootstrapping bilinear models of robotic sensorimotor cascades

    We consider the bootstrapping problem, which consists in learning a model of the agent's sensors and actuators starting from zero prior information, and we take the problem of servoing as a cross-modal task to validate the learned models. We study the class of bilinear dynamics sensors, in which the derivative of the observations is a bilinear function of the control commands and the observations themselves. This class of models is simple yet general enough to capture the main phenomena of three representative robotic sensors (field sampler, camera, and range-finder) that appear very different from one another. It also admits a bootstrapping algorithm based on Hebbian learning, which leads to a simple and biologically plausible control strategy. The convergence properties of learning and control are demonstrated with extensive simulations and by analytical arguments.
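
    The bilinear model class has a compact form: the observation derivative is y_dot = sum_k u_k * (M_k @ y) for a set of matrices M_k. The sketch below shows how such a tensor can be estimated Hebbian-style from correlations, assuming whitened commands and observations; it illustrates the model class, not the paper's full bootstrapping agent.

```python
# A minimal sketch, assuming whitened (zero-mean, unit-variance, uncorrelated)
# commands and observations: the tensor M_k in  y_dot = sum_k u_k * (M_k @ y)
# is estimated from the Hebbian-style correlation E[y_dot * y * u_k].
import numpy as np

def learn_bilinear_model(Y, Ydot, U):
    """Y: observations (T, n); Ydot: their derivatives (T, n); U: commands (T, m)."""
    T, n = Y.shape
    m = U.shape[1]
    M = np.zeros((m, n, n))
    for k in range(m):
        # Correlate each command channel with the outer product y_dot y^T.
        M[k] = (Ydot * U[:, [k]]).T @ Y / T
    return M

def predict_ydot(M, y, u):
    """Observation derivative predicted by the learned bilinear dynamics."""
    return sum(u[k] * (M[k] @ y) for k in range(len(u)))

# Synthetic check: data generated from a random bilinear system is approximately recovered.
rng = np.random.default_rng(0)
n, m, T = 5, 2, 20000
M_true = rng.normal(size=(m, n, n))
Y, U = rng.normal(size=(T, n)), rng.normal(size=(T, m))
Ydot = np.einsum('tk,kij,tj->ti', U, M_true, Y)
M_hat = learn_bilinear_model(Y, Ydot, U)
print(np.abs(M_hat - M_true).max())  # shrinks as T grows; the estimator is consistent
```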

    Active Vision for Scene Understanding

    Visual perception is one of the most important sources of information for both humans and robots. A particular challenge is the acquisition and interpretation of complex unstructured scenes. This work contributes to active vision for humanoid robots. A semantic model of the scene is created, which is extended by successively changing the robot's view in order to explore interaction possibilities of the scene.
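
    A toy, self-contained sketch of the next-best-view idea behind this kind of exploration follows: viewpoints are greedily chosen to reveal as much of a still-unknown scene as possible, and each observation is folded into the map. The grid world, visibility model, and greedy criterion are illustrative assumptions, not the humanoid system described above.

```python
# A toy next-best-view loop on a 2-D grid "scene": each step greedily picks the
# candidate viewpoint expected to reveal the most unknown cells and integrates
# the observation. Grid size, visibility model, and criterion are assumptions.
import numpy as np

rng = np.random.default_rng(1)
scene = rng.integers(0, 3, size=(20, 20))   # hidden semantic labels of the scene
known = np.zeros_like(scene, dtype=bool)    # which cells the robot has observed
views = [(r, c) for r in range(0, 20, 5) for c in range(0, 20, 5)]  # candidate viewpoints

def visible(view, radius=4):
    """Cells a viewpoint would reveal: a square window around it."""
    r, c = view
    mask = np.zeros_like(known)
    mask[max(r - radius, 0):r + radius + 1, max(c - radius, 0):c + radius + 1] = True
    return mask

for step in range(8):
    gains = [np.sum(visible(v) & ~known) for v in views]  # expected information gain
    best = views[int(np.argmax(gains))]                   # next best view
    known |= visible(best)                                # extend the scene model
    print(f"step {step}: view {best}, coverage {known.mean():.0%}")
```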

    EventCap: Monocular 3D Capture of High-Speed Human Motions using an Event Camera

    A high frame rate is a critical requirement for capturing fast human motions. In this setting, existing markerless image-based methods are constrained by lighting requirements, high data bandwidth, and the consequent computational overhead. In this paper, we propose EventCap, the first approach for 3D capture of high-speed human motions using a single event camera. Our method combines model-based optimization and CNN-based human pose detection to capture high-frequency motion details and to reduce drift in the tracking. As a result, we can capture fast motions at millisecond resolution with significantly higher data efficiency than using high-frame-rate videos. Experiments on our new event-based fast human motion dataset demonstrate the effectiveness and accuracy of our method, as well as its robustness to challenging lighting conditions.
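
    One ingredient such an event-camera pipeline needs is the conversion of the asynchronous event stream into short-time representations that a CNN-based detector can consume. The sketch below slices events into millisecond "event frames"; the window length, array layout, and sensor size are illustrative assumptions, not EventCap itself.

```python
# A minimal sketch: slice an asynchronous event stream (t, x, y, polarity) into
# millisecond event frames by signed accumulation of polarities per pixel.
import numpy as np

def events_to_frames(events, height, width, window_ms=1.0):
    """events: (N, 4) array with columns [t in ms, x, y, polarity in {-1, +1}]."""
    t = events[:, 0]
    n_frames = int(np.ceil((t.max() - t.min()) / window_ms)) + 1
    frames = np.zeros((n_frames, height, width), dtype=np.float32)
    idx = ((t - t.min()) / window_ms).astype(int)   # which window each event falls into
    x, y = events[:, 1].astype(int), events[:, 2].astype(int)
    np.add.at(frames, (idx, y, x), events[:, 3])    # accumulate signed polarities per pixel
    return frames

# Example: 100k synthetic events over 50 ms on a 260 x 346 sensor.
rng = np.random.default_rng(0)
ev = np.column_stack([
    rng.uniform(0, 50, 100_000),        # timestamps in ms
    rng.integers(0, 346, 100_000),      # x coordinates
    rng.integers(0, 260, 100_000),      # y coordinates
    rng.choice([-1.0, 1.0], 100_000),   # polarities
])
frames = events_to_frames(ev, height=260, width=346)
print(frames.shape)  # one frame per millisecond window
```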

    Coordinated optimization of visual cortical maps (I) Symmetry-based analysis

    In the primary visual cortex of primates and carnivores, functional architecture can be characterized by maps of various stimulus features such as orientation preference (OP), ocular dominance (OD), and spatial frequency. It is a long-standing question in theoretical neuroscience whether the observed maps should be interpreted as optima of a specific energy functional that summarizes the design principles of cortical functional architecture. A rigorous evaluation of this optimization hypothesis is particularly demanded by recent evidence that the functional architecture of OP columns precisely follows species-invariant quantitative laws. Because it would be desirable to infer the form of such an optimization principle from the biological data, the optimization approach to explaining cortical functional architecture raises the following questions: i) What are the genuine ground states of candidate energy functionals, and how can they be calculated with precision and rigor? ii) How do differences in candidate optimization principles affect the predicted map structure, and conversely, what can be learned about a hypothetical underlying optimization principle from observations on map structure? iii) Is there a way to analyze the coordinated organization of cortical maps predicted by optimization principles in general? To answer these questions, we developed a general dynamical systems approach to the combined optimization of visual cortical maps of OP and another scalar feature such as OD or spatial frequency preference.
    Comment: 90 pages, 16 figures
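
    The dynamical-systems view amounts to integrating a gradient flow of a candidate energy for the coupled maps. The toy sketch below does this for a complex OP field z(x) and a real OD field o(x) under a simple coupled Ginzburg-Landau-type energy; the energy form and all parameters are illustrative stand-ins, not the candidate functionals analyzed in the paper.

```python
# A toy sketch of gradient descent on a coupled energy for an orientation
# preference field z (complex) and an ocular dominance field o (real).
import numpy as np

def laplacian(f):
    """Periodic five-point Laplacian on a square grid with unit spacing."""
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f)

rng = np.random.default_rng(0)
N, gamma, dt, steps = 64, 0.5, 0.05, 2000
z = 0.1 * (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)))  # OP field
o = 0.1 * rng.standard_normal((N, N))                                       # OD field

for _ in range(steps):
    # Gradient flow dz/dt = -dE/dz*, do/dt = -dE/do of the toy coupled energy:
    # diffusion + local saturation + a gamma-weighted coupling between the maps.
    dz = laplacian(z) + z - np.abs(z) ** 2 * z - gamma * o ** 2 * z
    do = laplacian(o) + o - o ** 3 - 2 * gamma * np.abs(z) ** 2 * o
    z += dt * dz
    o += dt * do

op_map = 0.5 * np.angle(z)   # the phase of z plays the role of preferred orientation
print(op_map.shape, float(np.abs(z).mean()), float(np.abs(o).mean()))
```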

    Towards dynamical network biomarkers in neuromodulation of episodic migraine

    Computational methods have complemented experimental and clinical neurosciences and led to improvements in our understanding of the nervous system in health and disease. In parallel, neuromodulation in the form of electric and magnetic stimulation is gaining increasing acceptance in chronic and intractable diseases. In this paper, we first explore the relevant state of the art in the fusion of both developments towards translational computational neuroscience. Then, we propose a strategy to employ the new theoretical concept of dynamical network biomarkers (DNB) in episodic manifestations of chronic disorders. In particular, as a first example, we introduce the use of computational models in migraine and illustrate, on the basis of this example, the potential of DNB as early-warning signals for neuromodulation in episodic migraine.
    Comment: 13 pages, 5 figures
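
    For intuition, the sketch below computes the classic early-warning indicators that the DNB concept builds on: lag-1 autocorrelation and variance in sliding windows of a signal whose stability slowly degrades (critical slowing down). The synthetic Ornstein-Uhlenbeck model and window settings are illustrative assumptions, not a migraine model.

```python
# A minimal sketch of early-warning indicators: both rise as a slowly
# destabilized stochastic process approaches its transition.
import numpy as np

rng = np.random.default_rng(0)
T, dt, sigma = 20000, 0.1, 0.05
a = np.linspace(1.0, 0.05, T)   # restoring rate slowly drifting toward the transition
x = np.zeros(T)
for t in range(1, T):
    # Euler-Maruyama step of dx = -a x dt + sigma dW: weaker restoring force over time.
    x[t] = x[t - 1] - a[t] * x[t - 1] * dt + sigma * np.sqrt(dt) * rng.standard_normal()

def sliding_indicators(signal, window=2000):
    """Lag-1 autocorrelation and variance in non-overlapping windows."""
    ac, var = [], []
    for start in range(0, len(signal) - window + 1, window):
        w = signal[start:start + window]
        w = w - w.mean()
        ac.append(float(np.corrcoef(w[:-1], w[1:])[0, 1]))
        var.append(float(w.var()))
    return np.array(ac), np.array(var)

ac, var = sliding_indicators(x)
print(np.round(ac, 3))   # rises toward the transition --
print(np.round(var, 4))  # the kind of early-warning signature DNB generalizes to networks
```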