
    No local cancellation between directionally opposed first-order and second-order motion signals

    Despite strong converging evidence that there are separate mechanisms for the processing of first-order and second-order motion, the issue remains controversial. Qian, Andersen and Adelson (J. Neurosci., 14 (1994), 7357–7366) have shown that first-order motion signals cancel if locally balanced. Here we show that this is also the case for second-order motion signals, but not for a mixture of first-order and second-order motion, even when the visibility of the two types of stimulus is equated. Our motion sequence consisted of a dynamic binary noise carrier divided into horizontal strips of equal height, each of which was spatially modulated in either contrast or luminance by a 1.0 c/deg sinusoid. The modulation moved leftward or rightward (3.75 Hz) in alternate strips. The single-interval task was to identify the direction of motion of the central strip. Three conditions were tested: all second-order strips, all first-order strips, and spatially alternated first-order and second-order strips. In the first condition, a threshold strip height for the second-order strips was obtained at a contrast modulation depth of 100%. In the second condition, this height was used for the first-order strips, and a threshold was obtained in terms of luminance contrast. These two previously obtained threshold values were used to equate the visibility of the first-order and second-order components in the third condition. Direction identification, instead of being at threshold, was near-perfect for all observers. We argue that the first two conditions demonstrate local cancellation of motion signals, whereas in the third condition this does not occur. We attribute this non-cancellation to separate processing of first-order and second-order motion inputs.
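    The strip stimulus described in this abstract can be sketched numerically. The sketch below generates one frame of a noise carrier whose alternate strips carry contrast-modulated (second-order) and luminance-modulated (first-order) drifting sinusoids; all parameter values (strip height, cycle count, contrasts) are illustrative assumptions, not the authors' measured thresholds.

```python
import numpy as np

def stimulus_frame(t, width=256, strip_height=16, n_strips=8,
                   cycles=4.0, drift_hz=3.75, lum_contrast=0.2,
                   mod_depth=1.0, rng=None):
    """One frame of the strip stimulus (illustrative parameters only).

    Even strips carry second-order (contrast-modulated) motion, odd
    strips first-order (luminance-modulated) motion, with the
    modulation drifting in opposite directions in alternate strips.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.arange(width) / width              # horizontal position, 0..1
    frame = np.zeros((n_strips * strip_height, width))
    for s in range(n_strips):
        # Dynamic binary noise carrier, fresh per strip and frame.
        noise = rng.choice([-1.0, 1.0], size=(strip_height, width))
        direction = 1.0 if s % 2 == 0 else -1.0
        phase = 2 * np.pi * (cycles * x - direction * drift_hz * t)
        mod = np.sin(phase)
        if s % 2 == 0:
            # Second-order: the sinusoid modulates the noise contrast.
            strip = noise * (1.0 + mod_depth * mod) / 2.0
        else:
            # First-order: the sinusoid adds directly to the luminance.
            strip = 0.5 * noise + lum_contrast * mod
        frame[s * strip_height:(s + 1) * strip_height, :] = strip
    return frame
```

    Rendering successive frames with increasing `t` drifts the modulations in opposite directions in adjacent strips, which is the locally balanced configuration the cancellation argument relies on.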

    Limited Visibility and Uncertainty Aware Motion Planning for Automated Driving

    Adverse weather conditions and occlusions in urban environments result in impaired perception. These uncertainties are handled in different modules of an automated vehicle, ranging from the sensor level through situation prediction to motion planning. This paper focuses on motion planning given an uncertain environment model with occlusions. We present a method that remains collision-free for the worst-case evolution of the given scene. We define criteria that measure the available margins to a collision while considering visibility and interactions, and consequently integrate conditions that apply these criteria into an optimization-based motion planner. We show the generality of our method by validating it in several distinct urban scenarios.

    Decoherence and Recoherence in a Vibrating RF SQUID

    We study an RF SQUID, in which a section of the loop is a freely suspended beam that is allowed to oscillate mechanically. The coupling between the RF SQUID and the mechanical resonator originates from the dependence of the total magnetic flux threading the loop on the displacement of the resonator. Motion of the latter affects the visibility of Rabi oscillations between the two lowest energy states of the RF SQUID. We address the feasibility of experimental observation of decoherence and recoherence, namely decay and rise of the visibility, in such a system. Comment: 9 pages, 2 figures

    Complementarity and Young's interference fringes from two atoms

    The interference pattern of the resonance fluorescence from a J=1/2 to J=1/2 transition of two identical atoms confined in a three-dimensional harmonic potential is calculated. Thermal motion of the atoms is included. Agreement is obtained with experiments [Eichmann et al., Phys. Rev. Lett. 70, 2359 (1993)]. Contrary to some theoretical predictions, but in agreement with the present calculations, a fringe visibility greater than 50% can be observed with polarization-selective detection. The dependence of the fringe visibility on polarization has a simple interpretation, based on whether or not it is possible in principle to determine which atom emitted the photon. Comment: 12 pages, including 7 EPS figures, RevTex. Submitted to Phys. Rev.

    Decoupled Sampling for Real-Time Graphics Pipelines

    We propose decoupled sampling, an approach that decouples shading from visibility sampling in order to enable motion blur and depth-of-field at reduced cost. More generally, it enables extensions of modern real-time graphics pipelines that provide controllable shading rates to trade off quality for performance. It can be thought of as a generalization of GPU-style multisample antialiasing (MSAA) to support unpredictable shading rates, with arbitrary mappings from visibility to shading samples as introduced by motion blur, depth-of-field, and adaptive shading. It is inspired by the Reyes architecture in offline rendering, but targets real-time pipelines by driving shading from visibility samples as in GPUs, and removes the need for micropolygon dicing or rasterization. Decoupled sampling works by defining a many-to-one hash from visibility to shading samples, and using a buffer to memoize shading samples and exploit reuse across visibility samples. We present extensions of two modern GPU pipelines to support decoupled sampling: a GPU-style sort-last fragment architecture, and a Larrabee-style sort-middle pipeline. We study the architectural implications and derive end-to-end performance estimates on real applications through an instrumented functional simulator. We demonstrate high-quality motion blur and depth-of-field, as well as variable and adaptive shading rates.
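    The core mechanism described here, a many-to-one mapping from visibility samples to shading samples backed by a memoization buffer so that each shading sample is evaluated at most once, can be sketched as follows. The key construction (quantized screen position plus primitive id) and the `shading_rate` parameter are simplified assumptions for illustration, not the paper's exact hash.

```python
def make_decoupled_shader(shade_fn, shading_rate=1.0):
    """Wrap an expensive shading function with a memoization buffer.

    Visibility samples are mapped many-to-one onto shading samples by
    quantizing their screen position to a (possibly coarser) shading
    grid; shade_fn runs at most once per distinct shading sample.
    """
    cache = {}  # memoization buffer: shading-sample key -> shaded color

    def shade_visibility_sample(prim_id, x, y):
        # Many-to-one mapping: nearby visibility samples on the same
        # primitive collapse onto one shading-grid cell.
        key = (prim_id, int(x * shading_rate), int(y * shading_rate))
        if key not in cache:
            cache[key] = shade_fn(*key)  # shade once, then reuse
        return cache[key]

    return shade_visibility_sample, cache
```

    With `shading_rate = 0.5`, for example, a 2x2 block of visibility samples on the same primitive shares a single shading evaluation, which is how the scheme keeps shading cost bounded while motion blur and depth-of-field drive the visibility sampling rate up.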

    Driver steering dynamics measured in car simulator under a range of visibility and road marking conditions

    A simulation experiment was conducted to determine the effect of reduced visibility on driver lateral (steering) control. The simulator included a real car cab and a single-lane road image projected on a screen six feet in front of the driver. Simulated equations of motion controlled apparent car lane position in response to driver steering actions, wind gusts, and road curvature. Six drivers experienced a range of visibility conditions at various speeds with assorted road marking configurations (mark and gap lengths). Driver describing functions were measured and detailed parametric model fits were determined. A pursuit model employing a road curvature feedforward was very effective in explaining driver behavior in following randomly curving roads. Sampled-data concepts were also effective in explaining the combined effects of reduced visibility and intermittent road markings on the driver's dynamic time delay. The results indicate the relative importance of various perceptual variables as the visual input to the driver's steering control process is changed.

    Speckle visibility spectroscopy and variable granular fluidization

    We introduce a dynamic light scattering technique capable of resolving motion that changes systematically, and rapidly, with time. It is based on the visibility of a speckle pattern for a given exposure duration. Applying this to a vibrated layer of glass beads, we measure the granular temperature and its variation with phase in the oscillation cycle. We observe several transitions involving jammed states, where the grains are at rest during some portion of the cycle. We also observe a two-step decay of the temperature on approach to jamming. Comment: 4 pages, 4 figures, experiment
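    The exposure-dependent visibility measure this abstract rests on can be sketched as follows: synthesize longer camera exposures by summing consecutive speckle frames and compute the speckle contrast of the result; faster scatterer motion blurs the speckle and lowers the visibility. The function name and the simple sum-of-frames exposure model are assumptions for illustration, not the paper's exact estimator.

```python
import numpy as np

def speckle_visibility(frames, exposure):
    """Squared speckle contrast, var(I)/mean(I)**2, of images formed
    by summing `exposure` consecutive frames.

    Static scatterers keep the contrast unchanged at any exposure;
    fast motion decorrelates the speckle and drives it down.
    """
    frames = np.asarray(frames, dtype=float)
    n = (len(frames) // exposure) * exposure   # drop the remainder
    # Emulate a longer exposure by integrating consecutive frames.
    integrated = frames[:n].reshape(-1, exposure, *frames.shape[1:]).sum(axis=1)
    # Contrast is computed over pixels, then averaged over images.
    v2 = integrated.var(axis=(1, 2)) / integrated.mean(axis=(1, 2)) ** 2
    return v2.mean()
```

    Sweeping `exposure` and inverting the resulting visibility curve is what lets the technique resolve dynamics, such as a granular temperature, as a function of phase in the vibration cycle.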

    Decoupled Sampling for Graphics Pipelines

    We propose a generalized approach to decoupling shading from visibility sampling in graphics pipelines, which we call decoupled sampling. Decoupled sampling enables stochastic supersampling of motion and defocus blur at reduced shading cost, as well as controllable or adaptive shading rates which trade off shading quality for performance. It can be thought of as a generalization of multisample antialiasing (MSAA) to support complex and dynamic mappings from visibility to shading samples, as introduced by motion and defocus blur and adaptive shading. It works by defining a many-to-one hash from visibility to shading samples, and using a buffer to memoize shading samples and exploit reuse across visibility samples. Decoupled sampling is inspired by the Reyes rendering architecture, but like traditional graphics pipelines, it shades fragments rather than micropolygon vertices, decoupling shading from the geometry sampling rate. Also unlike Reyes, decoupled sampling only shades fragments after precise computation of visibility, reducing overshading. We present extensions of two modern graphics pipelines to support decoupled sampling: a GPU-style sort-last fragment architecture, and a Larrabee-style sort-middle pipeline. We study the architectural implications of decoupled sampling and blur, and derive end-to-end performance estimates on real applications through an instrumented functional simulator. We demonstrate high-quality motion and defocus blur, as well as variable and adaptive shading rates.