
    Real-Time Anisotropic Diffusion using Space-Variant Vision

    Many computer and robot vision applications require multi-scale image analysis. Classically, this has been accomplished through the use of a linear scale-space, which is constructed by convolution of visual input with Gaussian kernels of varying size (scale). This has been shown to be equivalent to the solution of a linear diffusion equation on an infinite domain, as the Gaussian is the Green's function of such a system (Koenderink, 1984). Recently, much work has focused on the use of a variable conductance function, resulting in anisotropic diffusion described by a nonlinear partial differential equation (PDE). The use of anisotropic diffusion with a conductance coefficient which is a decreasing function of the gradient magnitude has been shown to enhance edges while decreasing some types of noise (Perona and Malik, 1987). Unfortunately, the solution of the anisotropic diffusion equation requires the numerical integration of a nonlinear PDE, which is a costly process when carried out on a fixed mesh such as a typical image. In this paper we show that the complex log transformation, variants of which are universally used in mammalian retino-cortical systems, allows the nonlinear diffusion equation to be integrated at exponentially enhanced rates due to the non-uniform mesh spacing inherent in the log domain. The enhanced integration rates, coupled with the intrinsic compression of the complex log transformation, yield a speed increase of between two and three orders of magnitude, providing a means of performing real-time image enhancement using anisotropic diffusion. Office of Naval Research (N00014-95-I-0409)
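
    The fixed-mesh integration whose cost motivates this paper can be sketched as a minimal NumPy implementation of Perona-Malik anisotropic diffusion on a uniform grid (parameter names and the exponential conductance are our choices); the paper's contribution is to carry out this integration on a non-uniform log-polar mesh instead:

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
    """Explicit-Euler integration of Perona-Malik anisotropic diffusion
    on a uniform grid. The conductance g decreases with gradient
    magnitude, so smoothing is suppressed across strong edges."""
    u = img.astype(float).copy()

    def g(d):
        # conductance g(|grad|) = exp(-(|grad|/kappa)^2): near zero at edges
        return np.exp(-(d / kappa) ** 2)

    for _ in range(n_iter):
        # one-sided differences to the four nearest neighbours
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # divergence of g * grad(u), integrated with a small time step
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

    On a noisy step image this smooths the flat regions while leaving the step intact, since the conductance collapses where the gradient exceeds kappa.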

    Adaptive Nonlocal Filtering: A Fast Alternative to Anisotropic Diffusion for Image Enhancement

    The goal of many early visual filtering processes is to remove noise while at the same time sharpening contrast. An historical succession of approaches to this problem, starting with the use of simple derivative and smoothing operators, and the subsequent realization of the relationship between scale-space and the isotropic diffusion equation, has recently resulted in the development of "geometry-driven" diffusion. Nonlinear and anisotropic diffusion methods, as well as image-driven nonlinear filtering, have provided improved performance relative to the older isotropic and linear diffusion techniques. These techniques, which either explicitly or implicitly make use of kernels whose shape and center are functions of local image structure, are too computationally expensive for use in real-time vision applications. In this paper, we show that results which are largely equivalent to those obtained from geometry-driven diffusion can be achieved by a process which is conceptually separated into two very different functions. The first involves the construction of a vector field of "offsets", defined on a subset of the original image, at which to apply a filter. The offsets are used to displace filters away from boundaries to prevent edge blurring and destruction. The second is the (straightforward) application of the filter itself. The former function is a kind of generalized image skeletonization; the latter is conventional image filtering. This formulation leads to results which are qualitatively similar to contemporary nonlinear diffusion methods, but at computation times that are roughly two orders of magnitude faster, allowing applications of this technique to real-time imaging. An additional advantage of this formulation is that it allows existing filter hardware and software implementations to be applied with no modification, since the offset step reduces to an image pixel permutation, or look-up table operation, after application of the filter.
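
    The two-stage idea above, a conventional filter followed by an offset lookup, can be illustrated with a toy NumPy sketch. The offset rule here (displace each near-edge pixel toward the interior of its own region, by a fixed radius, along the gradient direction) is our own simplification for illustration, not the paper's actual offset-field construction:

```python
import numpy as np

def box3(a):
    """Plain 3x3 box filter with edge padding (pure NumPy)."""
    h, w = a.shape
    p = np.pad(a, 1, mode='edge')
    out = np.zeros_like(a, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += p[dy:dy + h, dx:dx + w]
    return out / 9.0

def offset_filter(img, kappa=0.2, radius=2):
    """Filter conventionally, then read the result through a per-pixel
    offset lookup that pulls samples away from strong edges."""
    u = img.astype(float)
    h, w = u.shape
    gy, gx = np.gradient(u)
    mag = np.hypot(gx, gy)
    smooth = box3(u)                      # conventional filtering step
    side = np.sign(u - smooth)            # bright vs dark side of an edge
    step = np.where(mag > kappa, radius, 0)
    ys, xs = np.mgrid[0:h, 0:w]
    # displace toward the pixel's own region interior, clamped to the image
    oy = np.clip(ys + (side * np.sign(gy) * step).astype(int), 0, h - 1)
    ox = np.clip(xs + (side * np.sign(gx) * step).astype(int), 0, w - 1)
    return smooth[oy, ox]                 # offset step: a pixel permutation
```

    Because the second stage is only an index permutation, any existing filter implementation can be reused unchanged, which is the point made in the abstract.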

    High dynamic range perception with spatially variant exposure

    In this paper we present a method capable of perceiving high dynamic range scenes. The special feature of the method is that it changes the integration time of the imager at the pixel level. Using a CNN-UM we can calculate the integration time for the pixels, and hence low dynamic range integration-type CMOS sensors will be able to perceive high dynamic range scenes. The method yields high contrast without introducing non-existent edges.
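
    The principle can be sketched numerically. This toy model (our simplification, not the CNN-UM implementation: it uses the true radiance as a stand-in for the estimate that would come from a previous frame) picks a per-pixel integration time so each pixel lands inside the sensor's linear range, then inverts the known times to recover the scene:

```python
import numpy as np

def capture_hdr(radiance, t_min=1e-4, t_max=1e-1, full_well=1.0):
    """Simulate spatially variant exposure: per-pixel integration time
    targets ~half of full well, the LDR readout saturates, and dividing
    by the known times reconstructs a high dynamic range estimate."""
    est = np.maximum(radiance, 1e-12)            # exposure-control estimate
    t = np.clip(0.5 * full_well / est, t_min, t_max)
    measured = np.clip(radiance * t, 0.0, full_well)  # LDR sensor readout
    return measured / t                               # radiance estimate
```

    With the time range above, radiances spanning roughly four decades are recovered from a sensor whose single-exposure range is far smaller; values beyond t_min still saturate, which bounds the achievable dynamic range.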

    Radio-wave propagation through a medium containing electron-density fluctuations described by an anisotropic Goldreich-Sridhar spectrum

    We study the propagation of radio waves through a medium possessing density fluctuations that are elongated along the ambient magnetic field and described by an anisotropic Goldreich-Sridhar power spectrum. We derive general formulas for the wave phase structure function, visibility, angular broadening, diffraction-pattern length scales, and scintillation time scale for arbitrary distributions of turbulence along the line of sight, and specialize these formulas to idealized cases. Comment: 25 pages, 3 figures, submitted to ApJ
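
    For reference, the anisotropic spectrum named above is often written in the following form (one common convention; the paper's normalization and cutoff function may differ, and L denotes the outer scale). Power is confined to parallel wavenumbers below roughly k_perp^(2/3) L^(-1/3), i.e. to eddies elongated along the magnetic field:

```latex
% Common form of the anisotropic Goldreich--Sridhar density spectrum
% (normalization and cutoff function vary between papers; L = outer scale)
P_{\delta n}(k_{\perp}, k_{\parallel}) \;\propto\;
k_{\perp}^{-10/3}\,
\exp\!\left(-\frac{L^{1/3}\,\lvert k_{\parallel}\rvert}{k_{\perp}^{2/3}}\right)
```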

    Laminar Cortical Dynamics of Visual Form and Motion Interactions During Coherent Object Motion Perception

    How do visual form and motion processes cooperate to compute object motion when each process separately is insufficient? A 3D FORMOTION model specifies how 3D boundary representations, which separate figures from backgrounds within cortical area V2, capture motion signals at the appropriate depths in MT; how motion signals in MT disambiguate boundaries in V2 via MT-to-V1-to-V2 feedback; how sparse feature tracking signals are amplified; and how a spatially anisotropic motion grouping process propagates across perceptual space via MT-MST feedback to integrate feature-tracking and ambiguous motion signals to determine a global object motion percept. Simulated data include: the degree of motion coherence of rotating shapes observed through apertures, the coherent vs. element motion percepts separated in depth during the chopsticks illusion, and the rigid vs. non-rigid appearance of rotating ellipses. Air Force Office of Scientific Research (F49620-01-1-0397); National Geospatial-Intelligence Agency (NMA201-01-1-2016); National Science Foundation (BCS-02-35398, SBE-0354378); Office of Naval Research (N00014-95-1-0409, N00014-01-1-0624)

    Theory of Parabolic Arcs in Interstellar Scintillation Spectra

    Our theory relates the secondary spectrum, the 2D power spectrum of the radio dynamic spectrum, to the scattered pulsar image in a thin scattering screen geometry. Recently discovered parabolic arcs in secondary spectra are generic features for media that scatter radiation at angles much larger than the rms scattering angle. Each point in the secondary spectrum maps particular values of differential arrival-time delay and fringe rate (or differential Doppler frequency) between pairs of components in the scattered image. Arcs correspond to a parabolic relation between these quantities through their common dependence on the angle of arrival of scattered components. Arcs appear even without consideration of the dispersive nature of the plasma. Arcs are more prominent in media with negligible inner scale and with shallow wavenumber spectra, such as the Kolmogorov spectrum, and when the scattered image is elongated along the velocity direction. The arc phenomenon can be used, therefore, to constrain the inner scale and the anisotropy of scattering irregularities for directions to nearby pulsars. Arcs are truncated by finite source size and thus provide sub-microarcsecond resolution for probing emission regions in pulsars and compact active galactic nuclei. Multiple arcs, sometimes seen, signify two or more discrete scattering screens along the propagation path, and small arclets oriented oppositely to the main arc persisting for long durations indicate the occurrence of long-term multiple images from the scattering screen. Comment: 22 pages, 11 figures, submitted to the Astrophysical Journal
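
    The parabolic mapping described above can be sketched in one dimension (our notation, with effective screen distance D_eff and effective velocity V_eff; the paper treats the general two-dimensional case). A component scattered by angle theta from a thin screen acquires a geometric delay and a Doppler shift, and eliminating theta between the two yields the parabola:

```latex
% One-dimensional sketch of the delay--Doppler parabola (our notation;
% D_eff, V_eff = effective screen distance and velocity)
\tau(\theta) \approx \frac{D_{\mathrm{eff}}\,\theta^{2}}{2c},
\qquad
f_{D}(\theta) \approx \frac{V_{\mathrm{eff}}\,\theta}{\lambda}
\quad\Longrightarrow\quad
\tau \approx \frac{D_{\mathrm{eff}}\,\lambda^{2}}{2\,c\,V_{\mathrm{eff}}^{2}}\; f_{D}^{2}
```

    The curvature of the observed arc thus constrains the combination of screen distance and velocity, which is why the arcs are useful probes of the scattering geometry.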