
    Correlation Flow: Robust Optical Flow Using Kernel Cross-Correlators

    Robust velocity and position estimation is crucial for autonomous robot navigation. Optical-flow-based methods for autonomous navigation have been receiving increasing attention in tandem with the development of micro unmanned aerial vehicles. This paper proposes a kernel cross-correlator (KCC) based algorithm to determine optical flow using a monocular camera, named correlation flow (CF). Correlation flow provides reliable and accurate velocity estimation and is robust to motion blur. In addition, it can also estimate the altitude velocity and yaw rate, which are not available from traditional methods. Autonomous flight tests on a quadcopter show that correlation flow can provide robust trajectory estimation with very low processing power. The source code is released based on the ROS framework.
    Comment: 2018 International Conference on Robotics and Automation (ICRA 2018)
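    The abstract does not spell out the correlation step, but the underlying idea can be sketched as FFT-based cross-correlation between consecutive frames: the peak of the correlation surface gives the image shift, which scales to velocity. The sketch below is a plain (non-kernelized) variant, assuming grayscale numpy frames and a hypothetical metres-per-pixel calibration factor; it is not the authors' KCC formulation.

```python
import numpy as np

def correlation_shift(prev, curr):
    """Estimate the integer pixel shift between two grayscale frames
    from the peak of their FFT-based cross-correlation surface."""
    corr = np.fft.ifft2(np.fft.fft2(curr) * np.conj(np.fft.fft2(prev))).real
    peak_y, peak_x = np.unravel_index(np.argmax(corr), corr.shape)
    # Peak indices wrap around the array edges; map back to signed shifts.
    h, w = corr.shape
    dy = peak_y - h if peak_y > h // 2 else peak_y
    dx = peak_x - w if peak_x > w // 2 else peak_x
    return dx, dy

def planar_velocity(prev, curr, dt, metres_per_pixel):
    """Convert the image shift over one frame interval into velocity.
    metres_per_pixel is a hypothetical calibration constant; in a real
    system it would depend on altitude and camera intrinsics."""
    dx, dy = correlation_shift(prev, curr)
    return dx * metres_per_pixel / dt, dy * metres_per_pixel / dt
```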

    Integrated 2-D Optical Flow Sensor

    I present a new focal-plane analog VLSI sensor that estimates optical flow in two visual dimensions. The chip significantly improves on previous approaches, both in the applied model of optical flow estimation and in the actual hardware implementation. Its distributed computational architecture consists of an array of locally connected motion units that collectively solve for the unique optimal optical flow estimate. The novel gradient-based motion model assumes visual motion to be translational, smooth and biased. The model guarantees that the estimation problem is computationally well-posed regardless of the visual input. Model parameters can be globally adjusted, leading to a rich output behavior. Varying the smoothness strength, for example, can provide a continuous spectrum of motion estimates, ranging from normal to global optical flow. Unlike approaches that rely on the explicit matching of brightness edges in space or time, the applied gradient-based model assumes spatiotemporal continuity of the visual information. The non-linear coupling of the individual motion units improves the resulting optical flow estimate because it reduces spatial smoothing across large velocity differences. Extended measurements of a 30x30 array prototype sensor under real-world conditions demonstrate the validity of the model and the robustness and functionality of the implementation.
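    A compact software analogue of this estimation model is the classic Horn-Schunck iteration: a gradient-based data term plus a global smoothness term whose weight plays the same role as the chip's smoothness parameter (a large weight pulls the field toward one global flow vector, a small weight toward local normal flow). The sketch below assumes two grayscale numpy frames; it is a digital stand-in for illustration, not the analog VLSI circuit.

```python
import numpy as np

def horn_schunck(I1, I2, alpha=10.0, n_iter=200):
    """Dense gradient-based optical flow. alpha is the smoothness
    weight: increasing it sweeps the estimate from local (normal)
    flow toward a single global flow, echoing the chip's model."""
    I1 = np.asarray(I1, dtype=float)
    I2 = np.asarray(I2, dtype=float)
    Ix = np.gradient(I1, axis=1)   # spatial brightness gradients
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1                   # temporal brightness derivative
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)

    def nbr_avg(f):                # 4-neighbour average of the field
        return 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                       np.roll(f, 1, 1) + np.roll(f, -1, 1))

    for _ in range(n_iter):
        ubar, vbar = nbr_avg(u), nbr_avg(v)
        t = (Ix * ubar + Iy * vbar + It) / (alpha**2 + Ix**2 + Iy**2)
        u = ubar - Ix * t
        v = vbar - Iy * t
    return u, v
```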

    Defocusing digital particle image velocimetry and the three-dimensional characterization of two-phase flows

    Defocusing digital particle image velocimetry (DDPIV) is the natural extension of planar PIV techniques to the third spatial dimension. In this paper we give details of the defocusing optical concept, by which scalar and vector information can be retrieved within large volumes. The optical model and computational procedures are presented with the specific purpose of mapping the number density, the size distribution, the associated local void fraction and the velocity of bubbles or particles in two-phase flows. Every particle or bubble is characterized in terms of size and spatial coordinates, which are used to compute a true three-component velocity field by spatial three-dimensional cross-correlation. The spatial resolution and uncertainty limits are established through numerical simulations. The performance of the DDPIV technique is established in terms of number density and void fraction. Finally, the velocity evaluation methodology, using the spatial cross-correlation technique, is described and discussed in terms of velocity accuracy.
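    The velocity-evaluation step reduces to locating the peak of a three-dimensional cross-correlation between interrogation volumes at successive instants. A minimal sketch, assuming the reconstructed particle positions have already been binned into two numpy volumes, with a hypothetical voxel size and frame interval:

```python
import numpy as np

def displacement_3d(vol_a, vol_b):
    """Signed three-component voxel displacement between two 3-D
    interrogation volumes, from the peak of their FFT-based
    cross-correlation."""
    corr = np.fft.ifftn(np.fft.fftn(vol_b) * np.conj(np.fft.fftn(vol_a))).real
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape))
    shape = np.array(corr.shape)
    # Wrap peak indices around the origin to get signed shifts (dz, dy, dx).
    return np.where(peak > shape // 2, peak - shape, peak)

def velocity_3d(vol_a, vol_b, dt, voxel_size):
    """Convert the voxel displacement into a velocity vector."""
    return displacement_3d(vol_a, vol_b) * voxel_size / dt
```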

    Split-screen single-camera stereoscopic PIV application to a turbulent confined swirling layer with free surface

    An annular liquid wall jet, or vortex tube, generated by helical injection inside a tube is studied experimentally as a possible means of fusion reactor shielding. The hollow confined vortex/swirling layer simultaneously exhibits all the complexities of swirling turbulence, free surface, droplet formation, and bubble entrapment, all posing challenging diagnostic issues. The construction of the flow apparatus and the choice of working liquid and seeding particles facilitate unimpeded optical access to the flow field. A split-screen, single-camera stereoscopic particle image velocimetry (SPIV) scheme is employed for flow field characterization. Image calibration and free surface identification issues are discussed. The interference in measurements caused by laser beam reflection at the interface is identified and discussed. Selected velocity measurements and turbulence statistics are presented at Re_λ = 70 (Re = 3500 based on mean layer thickness).
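    The stereoscopic reconstruction itself is not detailed above; for reference, the textbook angular-displacement formulas combine two projected displacement fields into three velocity components. The sketch below assumes the two split-screen views act as cameras at off-axis angles alpha1 and alpha2 on either side of the light sheet, with displacements already dewarped into common coordinates; it is a generic reconstruction, not the paper's specific calibration procedure.

```python
import numpy as np

def stereo_reconstruct(dx1, dy1, dx2, dy2, alpha1, alpha2):
    """Three-component velocity from two projected displacement
    fields measured at off-axis viewing angles alpha1, alpha2
    (radians). dx is along the camera-separation axis, dy normal
    to it; all arguments may be numpy arrays."""
    t1, t2 = np.tan(alpha1), np.tan(alpha2)
    w = (dx1 - dx2) / (t1 + t2)              # out-of-plane component
    u = (dx1 * t2 + dx2 * t1) / (t1 + t2)    # in-plane, along separation axis
    v = 0.5 * (dy1 + dy2)                    # in-plane, normal axis
    return u, v, w
```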

    Autonomous flight and remote site landing guidance research for helicopters

    Automated low-altitude flight and landing in remote areas within a civilian environment are investigated, where initial cost, ongoing maintenance costs, and system productivity are important considerations. The approach taken has: (1) utilized those technologies developed for military applications which are directly transferable to a civilian mission; (2) exploited and developed technology areas where new methods or concepts are required; and (3) undertaken research with the potential to lead to innovative methods or concepts required to achieve a manual and fully automatic remote-area low-altitude flight and landing capability. The project has resulted in the definition of a system operational concept that includes a sensor subsystem, a sensor fusion/feature extraction capability, and a guidance and control law concept. These subsystem concepts have been developed to sufficient depth to enable further exploration within the NASA simulation environment, and to support programs leading to flight test.

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes, and output a stream of events that encode the time, location and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz) resulting in reduced motion blur. Hence, event cameras have a large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as those demanding low latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
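    The event stream described above has a simple concrete form: each event is a tuple of timestamp, pixel location, and polarity. A common first step before applying frame-based vision tools is to accumulate events over a short window into a signed count image; a minimal sketch, assuming events arrive as plain records:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Event:
    t: float  # timestamp in seconds (microsecond resolution in hardware)
    x: int    # pixel column
    y: int    # pixel row
    p: int    # polarity: +1 for brightness increase, -1 for decrease

def accumulate(events, height, width, t0, t1):
    """Signed per-pixel event counts over the window [t0, t1),
    turning the asynchronous stream into a frame-like image."""
    img = np.zeros((height, width), dtype=np.int32)
    for e in events:
        if t0 <= e.t < t1:
            img[e.y, e.x] += e.p
    return img
```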