
    Testing the surface fixation method in gestational diabetes mellitus

    Introduction: To test the surface fixation method by contrasting urine samples of women with GDM against those of healthy pregnant women. Methods: This was a pilot descriptive study. Three groups were formed: A) pregnant women with GDM, B) women with healthy pregnancies, and C) non-pregnant healthy women. Positivity of the surface fixation method was compared using odds ratios. Results: 12 women with GDM, 14 with healthy pregnancies, and 9 non-pregnant women were included in the study. The OR for a positive surface fixation test when contrasting GDM vs. healthy pregnancies was 2.7, while the value when contrasting GDM vs. healthy pregnancies plus non-pregnant women was 3.2, without reaching statistical significance in either case. Conclusion: The surface fixation method used with urine samples suggests the existence of a transient antigen-antibody reaction that contributes to the inefficient insulin secretion.
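
    As a minimal sketch of the statistic involved (the counts below are hypothetical placeholders, not the study's data): an odds ratio for test positivity can be computed from a 2x2 contingency table, with Fisher's exact test giving a significance check.

```python
# Sketch: odds ratio for test positivity from a 2x2 table.
# The counts are hypothetical placeholders, NOT the study's data.
from scipy.stats import fisher_exact

def odds_ratio(a, b, c, d):
    """OR for the table [[a, b], [c, d]]:
    a/b = positive/negative in cases, c/d = positive/negative in controls."""
    return (a / b) / (c / d)

# Hypothetical counts for GDM (n=12) vs. healthy pregnancies (n=14).
table = [[8, 4], [6, 8]]
print(odds_ratio(*table[0], *table[1]))  # point estimate
print(fisher_exact(table))               # exact test of the contrast
```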

    Word skipping: implications for theories of eye movement control in reading

    This chapter provides a meta-analysis of the factors that govern word skipping in reading. It concludes that the primary predictor is the length of the word to be skipped, with a much smaller effect of the word's processing ease (e.g., its frequency and its predictability in the sentence).
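
    To make the relative effect sizes concrete, a toy logistic model of skipping probability might weight word length far more heavily than frequency or predictability; the coefficients below are invented for illustration, not estimates from the meta-analysis.

```python
import math

def p_skip(length, log_freq, predictability,
           b0=2.0, b_len=-0.7, b_freq=0.15, b_pred=0.3):
    """Toy skipping probability. The coefficients are hypothetical,
    chosen only to reflect the relative effect sizes the chapter
    reports (word length dominates; frequency and predictability
    contribute much less)."""
    z = b0 + b_len * length + b_freq * log_freq + b_pred * predictability
    return 1.0 / (1.0 + math.exp(-z))
```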

    Brain Control of Movement Execution Onset Using Local Field Potentials in Posterior Parietal Cortex

    The precise control of movement execution onset is essential for safe and autonomous cortical motor prosthetics. A recent study from the parietal reach region (PRR) suggested that the local field potentials (LFPs) in this area might be useful for decoding execution time information because of the striking difference in the LFP spectrum between the plan and execution states (Scherberger et al., 2005). More specifically, the LFP power in the 0–10 Hz band sharply rises while the power in the 20–40 Hz band falls as the state transitions from plan to execution. However, a change of visual stimulus immediately preceded reach onset, raising the possibility that the observed spectral change reflected the visual event instead of the reach onset. Here, we tested this possibility and found that the LFP spectrum change was still time-locked to the movement onset in the absence of a visual event in self-paced reaches. Furthermore, we successfully trained the macaque subjects to use the LFP spectrum change as a "go" signal in a closed-loop brain-control task in which the animals only modulated the LFP and did not execute a reach. The execution onset was signaled by the change in the LFP spectrum while the target position of the cursor was controlled by the spike firing rates recorded from the same site. The results corroborate that the LFP spectrum change in PRR is a robust indicator of movement onset and can be used for control of execution onset in a cortical prosthesis.
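
    A rough sketch of the kind of detector this implies (not the authors' actual decoder): estimate power in the two bands with Welch's method and flag a "go" event when low-band power rises relative to high-band power. The sampling rate and threshold below are assumed values, not reported parameters.

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, lo, hi):
    """Average spectral power of signal x in the [lo, hi] Hz band."""
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), 256))
    mask = (f >= lo) & (f <= hi)
    return pxx[mask].mean()

def is_go(lfp_window, fs=1000.0, ratio_threshold=2.0):
    """Flag execution onset when 0-10 Hz power dominates 20-40 Hz power,
    following the spectral signature the abstract describes. The ratio
    threshold is an illustrative free parameter."""
    low = band_power(lfp_window, fs, 0.0, 10.0)
    high = band_power(lfp_window, fs, 20.0, 40.0)
    return low / high > ratio_threshold
```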

    Designing an Adaptive Interface: Using Eye Tracking to Classify How Information Usage Changes Over Time in Partially Automated Vehicles

    While partially automated vehicles can provide a range of benefits, they also bring about new Human Machine Interface (HMI) challenges around ensuring the driver remains alert and is able to take control of the vehicle when required. While humans are poor monitors of automated processes, specifically during ‘steady state’ operation, presenting the appropriate information to the driver can help. But to date, interfaces of partially automated vehicles have shown evidence of causing cognitive overload. Adaptive HMIs that automatically change the information presented (for example, based on workload, time, or physiological state) have been previously proposed as a solution, but little is known about how information should adapt during steady-state driving. This study aimed to classify information usage based on driver experience to inform the design of a future adaptive HMI in partially automated vehicles. The unique feature of this study over existing literature is that each participant attended for five consecutive days, enabling a first look at how information usage changes with increasing familiarity and providing a methodological contribution to future HMI user trial study design. Seventeen participants experienced a steady-state automated driving simulation for twenty-six minutes per day in a driving simulator, replicating a regularly driven route, such as a work commute. Nine information icons, representative of future partially automated vehicle HMIs, were displayed on a tablet, and eye tracking was used to record the information that the participants fixated on. The results showed that information usage did change with increased exposure, with significant differences in what information participants looked at between the first and last trial days. With increasing experience, participants tended to use information to confirm the system's technical competence rather than to anticipate the future state of the vehicle. On this basis, interface design recommendations are made, particularly around the design of adaptive interfaces for future partially automated vehicles.
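
    A schematic of the fixation-classification step this methodology implies: map each fixation to the icon's area of interest (AOI) and count usage per trial day. The AOI bounds, icon names, and data format below are assumptions for illustration, not the study's actual layout.

```python
from collections import Counter

# Hypothetical AOIs: name -> (x_min, y_min, x_max, y_max) in screen pixels.
# A real layout would define one rectangle per displayed icon (nine here).
AOIS = {"speed": (0, 0, 100, 100), "nav": (100, 0, 200, 100)}

def aoi_of(x, y):
    """Return the AOI containing fixation point (x, y), or None."""
    for name, (x0, y0, x1, y1) in AOIS.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return name
    return None

def usage_by_day(fixations):
    """fixations: iterable of (day, x, y) tuples.
    Returns fixation counts per (day, AOI), dropping off-icon fixations."""
    counts = Counter()
    for day, x, y in fixations:
        aoi = aoi_of(x, y)
        if aoi is not None:
            counts[(day, aoi)] += 1
    return counts
```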

    Simultaneous localization and map-building using active vision

    An active approach to sensing can provide the focused measurement capability over a wide field of view that allows correctly formulated Simultaneous Localization and Map-Building (SLAM) to be implemented with vision, permitting repeatable long-term localization using only naturally occurring, automatically detected features. In this paper, we present the first example of a general system for autonomous localization using active vision, enabled here by a high-performance stereo head, addressing such issues as uncertainty-based measurement selection, automatic map maintenance, and goal-directed steering. We present varied real-time experiments in a complex environment.
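
    The uncertainty-based measurement selection the abstract names can be sketched as fixating the mapped feature whose predicted measurement is most uncertain, since measuring it yields the largest information gain. The EKF quantities below are placeholders for a full SLAM filter, and the determinant criterion is one plausible choice, not necessarily the paper's.

```python
import numpy as np

def innovation_covariance(H, P, R):
    """Predicted measurement uncertainty S = H P H^T + R for one feature,
    given measurement Jacobian H, state covariance P, and noise R."""
    return H @ P @ H.T + R

def select_feature(features, P):
    """Pick the feature with the largest predicted innovation volume
    (det S): measuring the most uncertain feature reduces map and robot
    uncertainty the most. `features` holds (H, R) pairs per feature."""
    return max(features, key=lambda f: np.linalg.det(
        innovation_covariance(f[0], P, f[1])))
```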

    The stroboscopic human vision

    When the frequency of seeing light from a pair of point flashes exceeds the probability summation of the separate flashes, the surplus is due to successful interaction of subliminal responses from the different flashes. Experiments with various distances and various periods of the pair show that successful interaction occurs when, in each of two successive time-quanta of 0.04 seconds and in each of two adjacent distinct receptor groups, at least one subliminal receptor response occurs. An autonomous source produces the time-quanta. It serves the time-processing of the central nervous system and of the motor system. Possibly, action potentials from the Purkinje cells of the myocardium play a role. Hyperacuity in direction and in depth, flicker fusion, perceptual rivalry, and other phenomena follow from the quantized spatiotemporal signal processing.
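
    The probability-summation baseline the abstract builds on is the independence prediction for seeing at least one of two flashes; any observed frequency above it is the "surplus" attributed to interaction. A minimal sketch:

```python
def probability_summation(p1, p2):
    """Independence baseline: chance of seeing at least one of two flashes
    that are detected alone with probabilities p1 and p2."""
    return 1.0 - (1.0 - p1) * (1.0 - p2)

def surplus(observed_pair_rate, p1, p2):
    """Excess frequency of seeing beyond the baseline; per the abstract,
    this is attributed to interaction of subliminal responses."""
    return observed_pair_rate - probability_summation(p1, p2)
```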

    Neural Representations for Sensory-Motor Control, II: Learning a Head-Centered Visuomotor Representation of 3-D Target Position

    A neural network model is described for how an invariant head-centered representation of 3-D target position can be autonomously learned by the brain in real time. Once learned, such a target representation may be used to control both eye and limb movements. The target representation is derived from the positions of both eyes in the head, and the locations which the target activates on the retinas of both eyes. A Vector Associative Map, or VAM, learns the many-to-one transformation from multiple combinations of eye-and-retinal position to invariant 3-D target position. Eye position is derived from outflow movement signals to the eye muscles. Two successive stages of opponent processing convert these corollary discharges into a head-centered representation that closely approximates the azimuth, elevation, and vergence of the eyes' gaze position with respect to a cyclopean origin located between the eyes. VAM learning combines this cyclopean representation of present gaze position with binocular retinal information about target position into an invariant representation of 3-D target position with respect to the head. VAM learning can use a teaching vector that is externally derived from the positions of the eyes when they foveate the target. A VAM can also autonomously discover and learn the invariant representation, without an explicit teacher, by generating internal error signals from environmental fluctuations in which these invariant properties are implicit. VAM error signals are computed by Difference Vectors, or DVs, that are zeroed by the VAM learning process. VAMs may be organized into VAM Cascades for learning and performing both sensory-to-spatial maps and spatial-to-motor maps. These multiple uses clarify why DV-type properties are computed by cells in the parietal, frontal, and motor cortices of many mammals. VAMs are modulated by gating signals that express different aspects of the will-to-act. These signals transform a single invariant representation into movements of different speed (GO signal) and size (GRO signal), and thereby enable VAM controllers to match a planned action sequence to variable environmental conditions. National Science Foundation (IRI-87-16960, IRI-90-24877); Office of Naval Research (N00014-92-J-1309)
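
    In skeletal form (a toy linear version, not the paper's full circuit), VAM learning drives a Difference Vector to zero by adjusting the map between the present representation and the target representation:

```python
import numpy as np

def vam_step(W, present, target, lr=0.01):
    """One toy VAM learning step: the Difference Vector (DV) is the
    mismatch between the target representation and the map's output for
    the present input; the weight update nudges the DV toward zero."""
    dv = target - W @ present
    W = W + lr * np.outer(dv, present)
    return W, dv
```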

    Towards binocular active vision in a robot head system

    This paper presents the first results of an investigation and pilot study into an active, binocular vision system that combines binocular vergence, object recognition, and attention control in a unified framework. The prototype developed is capable of identifying, targeting, verging on, and recognizing objects in a highly cluttered scene without the need for calibration or other knowledge of the camera geometry. This is achieved by implementing all image analysis in a symbolic space without creating explicit pixel-space maps. The system structure is based on the ‘searchlight metaphor’ of biological systems. We present results of a first pilot investigation that yield a maximum vergence error of 6.4 pixels, while seven of nine known objects were recognized in a highly cluttered environment. Finally, a “stepping stone” visual search strategy was demonstrated, taking a total of 40 saccades to find two known objects in the workspace, neither of which appeared simultaneously within the field of view resulting from any individual saccade.
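
    One way a pixel-domain vergence error like the reported 6.4-pixel maximum could be measured (an assumption about the metric, not a description taken from the paper): after the head verges on a feature, compare its residual offset from each camera's principal point.

```python
def vergence_error_px(x_left, x_right, cx_left, cx_right):
    """Residual disparity, in pixels, after verging on a feature: with
    perfect vergence the feature projects onto both principal points and
    the error is zero. All arguments are horizontal pixel coordinates."""
    return abs((x_left - cx_left) - (x_right - cx_right))
```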