
    On sensor fusion for airborne wind energy systems

    A study on filtering aspects of airborne wind energy generators is presented. This class of renewable energy systems aims to convert the aerodynamic forces generated by tethered wings, flying in closed paths transverse to the wind flow, into electricity. The accurate reconstruction of the wing's position, velocity, and heading is of fundamental importance for the automatic control of such systems. The difficulty of the estimation problem arises from the nonlinear dynamics, wide speed range, large accelerations, and fast changes of direction that the wing experiences during operation. It is shown that the overall nonlinear system has a specific structure that allows it to be partitioned into sub-systems, leading to a series of simpler filtering problems. Different sensor setups are then considered, and the related sensor fusion algorithms are presented. The results of experimental tests carried out with a small-scale prototype and wings of different sizes are discussed. The designed filtering algorithms rely purely on kinematic laws, so they are independent of features such as wing area, aerodynamic efficiency, and mass. The presented results are therefore also representative of systems of larger size and different wing design, with a different number of tethers and/or rigid wings.
    Comment: This manuscript is a preprint of a paper accepted for publication in the IEEE Transactions on Control Systems Technology and is subject to IEEE copyright. The copy of record is available at the IEEE Xplore library: http://ieeexplore.ieee.org
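    To make the kinematic-filtering idea concrete, below is a minimal Python sketch of one predict/update cycle of a linear Kalman filter driven purely by a constant-velocity kinematic model. It is illustrative only: the paper's actual filters, sub-system partitioning, sampling time, and noise covariances are not reproduced here, and all numeric values below are assumptions.

    # A minimal sketch of a kinematic Kalman filter estimating a wing's
    # position and velocity from noisy position measurements. NOT the
    # authors' filter: the constant-velocity model, sampling time, and
    # all covariances are illustrative assumptions.
    import numpy as np

    dt = 0.01  # assumed sampling time [s]

    # State: [x, y, z, vx, vy, vz]; constant-velocity kinematic model.
    F = np.eye(6)
    F[:3, 3:] = dt * np.eye(3)

    # Measurement: position only (e.g., from GPS or line-angle sensors).
    H = np.hstack([np.eye(3), np.zeros((3, 3))])

    Q = 1e-2 * np.eye(6)  # assumed process noise covariance
    R = 1e-1 * np.eye(3)  # assumed measurement noise covariance

    def kf_step(x, P, z):
        """One predict/update cycle of the Kalman filter."""
        # Predict with the kinematic model.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the position measurement z.
        y = z - H @ x                   # innovation
        S = H @ P @ H.T + R             # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
        x = x + K @ y
        P = (np.eye(6) - K @ H) @ P
        return x, P

    # Example: start at rest and fuse one position fix.
    x, P = np.zeros(6), np.eye(6)
    x, P = kf_step(x, P, np.array([10.0, -2.0, 50.0]))

    Note how the filter carries no wing-specific parameters (area, mass, aerodynamic efficiency), which is what makes a purely kinematic formulation transferable across wing designs, as the abstract claims.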

    Attention and Anticipation in Fast Visual-Inertial Navigation

    We study a Visual-Inertial Navigation (VIN) problem in which a robot needs to estimate its state using an on-board camera and an inertial sensor, without any prior knowledge of the external environment. We consider the case in which the robot can allocate only limited resources to VIN due to tight computational constraints. Therefore, we answer the following question: under limited resources, what are the most relevant visual cues for maximizing the performance of visual-inertial navigation? Our approach has four key ingredients. First, it is task-driven, in that the selection of the visual cues is guided by a metric quantifying VIN performance. Second, it exploits the notion of anticipation, since it uses a simplified model for forward-simulation of the robot dynamics, predicting the utility of a set of visual cues over a future time horizon. Third, it is efficient and easy to implement, since it leads to a greedy algorithm for the selection of the most relevant visual cues. Fourth, it provides formal performance guarantees: we leverage submodularity to prove that the greedy selection cannot be far from the optimal (combinatorial) selection. Simulations and real experiments on agile drones show that our approach ensures state-of-the-art VIN performance while maintaining a lean processing time. In easy scenarios, our approach outperforms appearance-based feature selection in terms of localization error. In the most challenging scenarios, it enables accurate visual-inertial navigation where appearance-based feature selection fails to track the robot's motion during aggressive maneuvers.
    Comment: 20 pages, 7 figures, 2 tables
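    As an illustration of the greedy selection idea, the Python sketch below picks a budget of visual cues to maximize a monotone submodular utility (here, the log-determinant of an accumulated information matrix). Both this utility and the randomly generated per-cue information matrices are stand-in assumptions, not the paper's task-driven VIN metric or the authors' implementation.

    # A minimal sketch of greedy cue selection under a cardinality budget.
    # The log-det utility and random information matrices are illustrative
    # assumptions replacing the paper's actual VIN performance metric.
    import numpy as np

    rng = np.random.default_rng(0)

    def random_info_matrix(dim=6):
        """Hypothetical information contribution of one visual cue."""
        A = rng.normal(size=(dim, 2))
        return A @ A.T  # PSD and rank-deficient: no single cue suffices

    features = [random_info_matrix() for _ in range(50)]
    budget = 10

    def utility(selected, dim=6, prior=1e-3):
        """log-det of prior + summed information: monotone submodular."""
        M = prior * np.eye(dim)
        for i in selected:
            M = M + features[i]
        return np.linalg.slogdet(M)[1]

    selected = []
    for _ in range(budget):
        # Greedily add the cue with the largest marginal utility gain.
        base = utility(selected)
        best = max((i for i in range(len(features)) if i not in selected),
                   key=lambda i: utility(selected + [i]) - base)
        selected.append(best)

    print(sorted(selected))

    For a monotone submodular utility maximized under a cardinality constraint, the classical Nemhauser-Wolsey-Fisher result guarantees the greedy solution attains at least a (1 - 1/e) fraction of the optimal value, which is the style of guarantee the abstract refers to.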

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location, and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as low-latency, high-speed, and high-dynamic-range settings. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available, and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
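    The Python sketch below illustrates the event representation described above: a stream of (timestamp, pixel location, polarity) tuples, naively accumulated into a 2-D frame over a time window. The dtype, sensor resolution, and synthetic events are illustrative assumptions; real event-camera drivers and file formats differ.

    # A minimal sketch of an event stream and its accumulation into a frame.
    # The layout and synthetic data are assumptions for illustration only.
    import numpy as np

    # One event = (t [s], x, y, polarity in {-1, +1}).
    event_dtype = np.dtype([("t", "f8"), ("x", "u2"), ("y", "u2"), ("p", "i1")])

    H, W = 180, 240  # assumed sensor resolution
    rng = np.random.default_rng(1)
    n = 10_000
    events = np.zeros(n, dtype=event_dtype)
    events["t"] = np.sort(rng.uniform(0.0, 0.05, n))  # asynchronous timestamps
    events["x"] = rng.integers(0, W, n)
    events["y"] = rng.integers(0, H, n)
    events["p"] = rng.choice([-1, 1], n)  # sign of the brightness change

    def accumulate(evts, t0, t1):
        """Sum event polarities per pixel over the window [t0, t1)."""
        frame = np.zeros((H, W), dtype=np.int32)
        win = evts[(evts["t"] >= t0) & (evts["t"] < t1)]
        np.add.at(frame, (win["y"], win["x"]), win["p"])
        return frame

    frame = accumulate(events, 0.0, 0.01)

    Accumulating events into frames like this discards the microsecond-scale timing that gives event cameras their edge, which is precisely why the survey covers specialized, event-native processing techniques.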