74 research outputs found

    A computational analysis of separating motion signals in transparent random dot kinematograms

    When multiple motion directions are presented simultaneously within the same region of the visual field, human observers see motion transparency. This perceptual phenomenon requires the visual system to separate different motion signal distributions, which are characterised by distinct means, corresponding to the different dot directions, and by variances determined by signal and processing noise. Averaging of local motion signals can be employed to reduce noise, but such pooling could at the same time average out different directional signal components arising from spatially adjacent dots moving in different directions, which would reduce the visibility of transparent directions. To study the theoretical limitations of encoding transparent motion in a biologically plausible motion detector network, the distributions of motion directions signalled by a motion detector model (2DMD) were analysed here for random dot kinematograms (RDKs). In sparse-dot RDKs with two randomly interleaved motion directions, the angular separation at which two directions can still be separated is limited by the internal noise of the system. Under the present conditions, direction differences down to 30 deg could be separated. Correspondingly, in a transparent motion stimulus containing multiple motion directions, more than eight directions could be separated. When this computational analysis is compared to published psychophysical data, it appears that the experimental results do not reach the predicted limits. Whereas the computer simulations demonstrate that even an unsophisticated motion detector network would be able to represent a considerable number of motion directions simultaneously within the same region, human observers are usually restricted to seeing no more than two or three directions under comparable conditions. This raises the question of why human observers do not make full use of information that could easily be extracted from the representation of motion signals at the early stages of the visual system.
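    To illustrate the separability argument sketched above, the following Python snippet models the local direction estimates of a two-direction transparent RDK as a mixture of two von Mises distributions and tests whether the pooled direction histogram still dips between the two true directions. The noise level (kappa), sample size, histogram resolution and dip criterion are assumptions made for this illustration; they do not reproduce the 2DMD model or the stimulus parameters of the study.

        # Minimal sketch (not the 2DMD model): noisy local direction estimates from a
        # two-direction transparent RDK are modelled as a mixture of two von Mises
        # distributions; two directions count as "resolved" when the pooled histogram
        # dips between the two true directions. Kappa, n and the dip criterion are
        # illustrative assumptions.
        import numpy as np

        def direction_histogram(separation_deg, n=20000, kappa=16.0, n_bins=72, rng=None):
            """Smoothed circular histogram of direction estimates for two interleaved directions."""
            rng = np.random.default_rng(0) if rng is None else rng
            half = np.deg2rad(separation_deg) / 2.0
            samples = np.concatenate([rng.vonmises(-half, kappa, n // 2),
                                      rng.vonmises(+half, kappa, n - n // 2)])
            hist, edges = np.histogram(samples, bins=n_bins, range=(-np.pi, np.pi))
            kernel = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
            kernel /= kernel.sum()
            padded = np.concatenate([hist[-2:], hist, hist[:2]])      # circular padding
            smooth = np.convolve(padded, kernel, mode="same")[2:-2]
            centres = 0.5 * (edges[:-1] + edges[1:])
            return centres, smooth

        def directions_resolved(separation_deg, dip=0.9):
            """Two directions are 'resolved' if the histogram dips between the true directions."""
            half = np.deg2rad(separation_deg) / 2.0
            centres, smooth = direction_histogram(separation_deg)
            value_at = lambda angle: smooth[np.argmin(np.abs(centres - angle))]
            midpoint = value_at(0.0)
            peaks = 0.5 * (value_at(-half) + value_at(+half))
            return midpoint < dip * peaks

        if __name__ == "__main__":
            for sep in (10, 20, 30, 45, 60, 90):
                outcome = "two directions resolved" if directions_resolved(sep) else "single merged direction"
                print(f"angular separation {sep:3d} deg -> {outcome}")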

    Exploring Mondrian Compositions in Three-Dimensional Space

    The dogmatic nature of Piet Mondrian’s neoplasticism manifesto initiated a discourse about translating aesthetic ideals from paintings to 3D structures. Mondrian rarely ventured into architectural design, and his unique interior design of “Salon de Madame B … à Dresden” was not executed. The authors discuss physical constraints and perceptual factors that conflict with neoplastic ideals. Using physical and virtual models of the salon, the authors demonstrate challenges with perspective projections and show how such distortions could be minimized in a cylinder. The paradoxical percept elicited by a “reverspective” Mondrian-like space further highlights the essential role of perceptual processes in reaching neoplastic standards of beauty.

    Immersive virtual reality fitness games for enhancement of recovery after colorectal surgery: study protocol for a randomised pilot trial

    Background: Physical inactivity after surgery is an important risk factor for postoperative complications. Compared to conventional physiotherapy, activity-promoting video games are often more motivating and engaging for patients with physical impairments. This effect could be enhanced by immersive virtual reality (VR) applications that visually, aurally and haptically simulate a virtual environment and provide a more interactive experience. The use of VR-based fitness games in the early postoperative phase could contribute to improved mobilisation and have beneficial psychological effects. Currently, there are no data on the use of VR-based fitness games in the early postoperative period after colorectal surgery. Methods: This pilot trial features a single-centre, randomised, two-arm study design with a 1:1 allocation. Patients undergoing elective abdominal surgery for colorectal cancer or liver metastases of colorectal cancer will be recruited. Participants will be randomly assigned to an intervention group or a control group. Patients randomised to the intervention group will perform immersive virtual reality-based fitness exercises during their postoperative hospital stay. Feasibility and clinical outcomes will be assessed. Discussion: Early mobilisation after surgery is crucial for reducing many postoperative complications. VR-based interventions are easy to use and often inexpensive, especially compared to interventions that require more medical staff and equipment. VR-based interventions could serve as an alternative or complement to regular physiotherapy and enhance mobilisation after surgery. The proposed pilot study will be the first step in evaluating the feasibility of VR-based interventions in the perioperative period, with the aim of improving the postoperative rehabilitation of cancer patients. Trial registration: The trial has been registered in the German Clinical Trials Register (DRKS), no. DRKS00024888, on April 13, 2021; WHO Universal Trial Number (UTN) U1111-1261-5968.
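    As a purely illustrative note on the stated 1:1 allocation, the Python sketch below shows one common way to generate such an allocation list, using randomly permuted blocks. The block size of 4, the seed and the arm labels are assumptions made for this example; the protocol abstract does not describe the actual randomisation procedure used in the trial.

        # Hypothetical sketch of a 1:1 two-arm allocation via randomly permuted blocks.
        # Block size, seed and arm labels are illustrative assumptions; the trial's
        # actual randomisation procedure is not described in the abstract.
        import random

        def permuted_block_allocation(n_participants, block_size=4, seed=2021):
            """Return a 1:1 allocation list of 'VR intervention' / 'control' labels."""
            assert block_size % 2 == 0, "block size must be even for a 1:1 ratio"
            rng = random.Random(seed)
            allocation = []
            while len(allocation) < n_participants:
                block = (["VR intervention"] * (block_size // 2)
                         + ["control"] * (block_size // 2))
                rng.shuffle(block)
                allocation.extend(block)
            return allocation[:n_participants]

        if __name__ == "__main__":
            for i, arm in enumerate(permuted_block_allocation(10), start=1):
                print(f"participant {i:02d}: {arm}")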

    Microsaccades and preparatory set: a comparison between delayed and immediate, exogenous and endogenous pro- and anti-saccades

    When we fixate an object, our eyes are not entirely still, but undergo small displacements such as microsaccades. Here, we investigate whether these microsaccades are sensitive to the preparatory processes involved in programming a saccade. We show that the frequency of microsaccades depends in a specific manner on where the eyes are intended to move (towards a target location or away from it), when they are to move (immediately after the onset of the target or after a delay), and what type of cue is followed (a peripheral onset or a centrally presented symbolic cue). In particular, in the preparatory interval before and shortly after target onset, more microsaccades were found when a delayed saccade towards a peripheral target was prepared than when a saccade away from it was programmed. However, no such difference in the frequency of microsaccades was observed when saccades were initiated immediately after the onset of the target or when the saccades were programmed on the basis of a centrally presented arrow cue. The results are discussed in the context of the neural correlates of response preparation, known as preparatory set.
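    For readers interested in how microsaccade frequency can be quantified from eye-position recordings, the sketch below implements a widely used velocity-threshold detection scheme (in the spirit of Engbert & Kliegl, 2003). The sampling rate, threshold multiplier and minimum event duration are illustrative assumptions; the abstract does not state which detection algorithm was used in the study.

        # Sketch of a common velocity-threshold approach to microsaccade detection
        # (Engbert & Kliegl style); not necessarily the algorithm used in this study.
        # Sampling rate, lambda and minimum duration are illustrative assumptions.
        import numpy as np

        def detect_microsaccades(x, y, fs=500.0, lam=6.0, min_samples=3):
            """Return (onset, offset) sample indices of candidate microsaccades.

            x, y : horizontal and vertical eye position in degrees (1-D arrays).
            fs   : sampling rate in Hz; lam : multiplier for the robust velocity threshold.
            """
            # Velocity from a 5-point moving difference, in deg/s.
            vx = np.convolve(x, [1.0, 1.0, 0.0, -1.0, -1.0], mode="same") * (fs / 6.0)
            vy = np.convolve(y, [1.0, 1.0, 0.0, -1.0, -1.0], mode="same") * (fs / 6.0)
            # Median-based (robust) velocity threshold per axis.
            sx = np.sqrt(max(np.median(vx**2) - np.median(vx)**2, 1e-12))
            sy = np.sqrt(max(np.median(vy**2) - np.median(vy)**2, 1e-12))
            above = (vx / (lam * sx))**2 + (vy / (lam * sy))**2 > 1.0
            # Group consecutive supra-threshold samples into candidate events.
            events, start = [], None
            for i, flag in enumerate(above):
                if flag and start is None:
                    start = i
                elif not flag and start is not None:
                    if i - start >= min_samples:
                        events.append((start, i - 1))
                    start = None
            if start is not None and len(above) - start >= min_samples:
                events.append((start, len(above) - 1))
            return events

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            n = 500                                       # one second of fixation at 500 Hz
            x = rng.normal(0.0, 0.01, n).cumsum() * 0.05  # slow drift-like fixation noise
            y = rng.normal(0.0, 0.01, n).cumsum() * 0.05
            ramp = np.zeros(n)
            ramp[200:210] = np.linspace(0.0, 0.3, 10)     # inject a 0.3 deg, 20 ms saccade-like shift
            ramp[210:] = 0.3
            events = detect_microsaccades(x + ramp, y)
            print(f"{len(events)} candidate microsaccade(s) at samples: {events}")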

    Object recognition in flight: how do bees distinguish between 3D shapes?

    Honeybees (Apis mellifera) discriminate multiple object features such as colour, pattern and 2D shape, but it remains unknown whether and how bees recover three-dimensional shape. Here we show that bees can recognize objects by their three-dimensional form, and that they employ an active strategy to uncover the depth profiles. We trained individual, free-flying honeybees to collect sugar water from small three-dimensional objects made of styrofoam (sphere, cylinder, cuboids) or folded paper (convex, concave, planar) and found that bees can easily discriminate between these stimuli. We also tested possible strategies employed by the bees to uncover the depth profiles. For the card stimuli, we excluded overall shape and pictorial features (shading, texture gradients) as cues for discrimination. Lacking sufficient stereo vision, bees are known to use speed gradients in optic flow to detect edges; could the bees also apply this strategy to recover the fine details of a surface depth profile? Analysing the bees’ flight tracks in front of the stimuli revealed specific combinations of flight maneuvers (lateral translations in combination with yaw rotations), which are particularly suitable for extracting depth cues from motion parallax. We modelled the generated optic flow and found characteristic patterns of angular displacement corresponding to the depth profiles of our stimuli: optic flow patterns from pure translations successfully recovered depth relations from the magnitude of the angular displacements, while additional rotation provided robust depth information based on the direction of the displacements. Thus, the bees’ flight maneuvers may reflect an optimized visuo-motor strategy to extract depth structure from motion signals. The robustness and simplicity of this strategy offers an efficient solution for 3D object recognition without stereo vision, and could be employed by other flying insects or mobile robots.
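    The geometric intuition behind these flight maneuvers can be illustrated with a minimal simulation: the code below computes the angular displacement of a near and a far point for a laterally translating observer, with and without an added yaw rotation. The distances, translation speed, yaw angle and time step are arbitrary illustrative values, not measurements from the bee experiments.

        # Minimal sketch of the optic-flow geometry: angular displacement of a near
        # and a far point for lateral translation, with and without yaw rotation.
        # All numeric values are illustrative assumptions.
        import numpy as np

        def angular_displacement(point_xy, translation_xy, yaw_rad, dt=0.02):
            """Change in viewing azimuth (rad) of a world point over one time step.

            point_xy       : (x, y) position of the point; the observer starts at the origin.
            translation_xy : (vx, vy) observer translation in m/s.
            yaw_rad        : yaw rotation of the observer during the step, in radians.
            """
            before = np.arctan2(point_xy[1], point_xy[0])
            # Position relative to the displaced observer, expressed in the rotated frame.
            rel = np.asarray(point_xy, dtype=float) - dt * np.asarray(translation_xy, dtype=float)
            after = np.arctan2(rel[1], rel[0]) - yaw_rad
            return after - before

        if __name__ == "__main__":
            near, far = (0.03, 0.10), (0.09, 0.30)   # same viewing direction, ~10 cm vs ~31 cm away
            v_lateral = (0.2, 0.0)                   # 0.2 m/s sideways translation
            for yaw in (0.0, np.deg2rad(1.0)):       # pure translation vs translation + 1 deg yaw
                d_near = np.rad2deg(angular_displacement(near, v_lateral, yaw))
                d_far = np.rad2deg(angular_displacement(far, v_lateral, yaw))
                label = "translation only " if yaw == 0.0 else "translation + yaw "
                print(f"{label}: near point {d_near:+.2f} deg, far point {d_far:+.2f} deg")

    In this toy geometry, pure translation displaces the near point more than the far point (magnitude carries depth), whereas the added yaw makes the near and far points move in opposite directions, so the sign of the displacement also carries depth information.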

    Movement-induced motion signal distributions in outdoor scenes

    No full text
    The movement of an observer generates a characteristic field of velocity vectors on the retina (Gibson 1950). Because such optic flow fields are useful for navigation, many theoretical, psychophysical and physiological studies have addressed the question of how egomotion parameters such as direction of heading can be estimated from optic flow. Little is known, however, about the structure of optic flow under natural conditions. To address this issue, we recorded sequences of panoramic images along accurately defined paths in a variety of outdoor locations and used these sequences as input to a two-dimensional array of correlation-based motion detectors (2DMD). We find that (a) motion signal distributions are sparse and noisy with respect to local motion directions; (b) motion signal distributions contain patches (motion streaks) which are systematically oriented along the principal flow-field directions; (c) motion signal distributions show a distinct dorso-ventral topography, reflecting the distance anisotropy of terrestrial environments; (d) the spatiotemporal tuning of the local motion detector we used has little influence on the structure of motion signal distributions, at least for the range of conditions we tested; and (e) environmental motion is locally noisy throughout the visual field, with little spatial or temporal correlation; it can therefore be removed by temporal averaging and is largely overridden by image motion caused by observer movement. Our results suggest that spatial or temporal integration is important for retrieving reliable information on the local direction and size of motion vectors, because the structure of optic flow is clearly detectable in the temporal average of motion signal distributions. Egomotion parameters can be reliably retrieved from such averaged distributions under a range of environmental conditions. These observations raise a number of questions about the role of specific environmental and computational constraints in the processing of natural optic flow.
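    To indicate the class of model referred to here as a correlation-based motion detector, the sketch below implements a one-dimensional Hassenstein-Reichardt correlator and temporally averages its opponent output over a drifting sine grating. The filter time constant, sampling and stimulus parameters are assumptions made for this illustration; they are not the parameters of the 2DMD array used in the study.

        # Illustrative one-dimensional correlation-type (Hassenstein-Reichardt) motion
        # detector; the 2DMD used in the study is a two-dimensional array of such
        # detectors, whose parameters are not reproduced here.
        import numpy as np

        def lowpass(signal, dt=0.01, tau=0.05):
            """First-order low-pass filter, serving as the detector's delay stage."""
            out = np.zeros_like(signal, dtype=float)
            alpha = dt / (tau + dt)
            for t in range(1, len(signal)):
                out[t] = out[t - 1] + alpha * (signal[t] - out[t - 1])
            return out

        def reichardt_output(stimulus, dt=0.01, tau=0.05):
            """Opponent correlator output per detector; positive values signal rightward motion.

            stimulus : 2-D array (time, space) of image intensities.
            """
            left, right = stimulus[:, :-1], stimulus[:, 1:]
            left_delayed = np.apply_along_axis(lowpass, 0, left, dt, tau)
            right_delayed = np.apply_along_axis(lowpass, 0, right, dt, tau)
            # Delay-and-correlate in both directions, then subtract (opponent stage).
            return left_delayed * right - right_delayed * left

        if __name__ == "__main__":
            dt, n_t, n_x = 0.01, 300, 60
            t = np.arange(n_t)[:, None] * dt
            x = np.arange(n_x)[None, :]
            for direction in (+1, -1):
                grating = np.sin(2 * np.pi * (0.1 * x - direction * 2.0 * t))  # drifting sine grating
                mean_out = reichardt_output(grating, dt).mean()                # temporal and spatial average
                print(f"drift direction {direction:+d}: mean detector output {mean_out:+.4f}")

    Averaging the opponent output over time yields a reliably signed response that follows the drift direction, illustrating why temporal integration helps to stabilise noisy local motion estimates.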

    Looking at Op Art from a computational viewpoint

    No full text

    Learning in primary and secondary motion vision

    No full text

    Modeling Human Motion Perception

    No full text