10 research outputs found

    How are Three-Dimensional Objects Represented in the Brain?

    We discuss a variety of object recognition experiments in which human subjects were presented with realistically rendered images of computer-generated three-dimensional objects, with tight control over stimulus shape, surface properties, illumination, and viewpoint, as well as over subjects' prior exposure to the stimulus objects. In all experiments, recognition performance was: (1) consistently viewpoint dependent; (2) only partially aided by binocular stereo and other depth information; (3) specific to viewpoints that were familiar; and (4) disrupted more by rotation in depth than by deformation of the two-dimensional images of the stimuli. These results are consistent with recently advanced computational theories of recognition based on view interpolation.
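    The view-interpolation account mentioned above can be made concrete with a small sketch in the spirit of radial-basis-function view-based models. This is not the authors' model; the Gaussian-RBF form, feature vectors, and all names are illustrative assumptions. Recognition of a test view is scored by how well it can be reconstructed by interpolating among stored familiar views, so performance degrades smoothly as the viewpoint departs from the familiar ones:

```python
import numpy as np

def rbf_view_score(stored_views, novel_view, sigma=1.0):
    """Score how well a novel view is explained by Gaussian-RBF
    interpolation over stored familiar views (higher = better)."""
    # RBF activation of each stored view for the novel input
    d2 = np.sum((stored_views - novel_view) ** 2, axis=1)
    act = np.exp(-d2 / (2.0 * sigma ** 2))
    w = act / act.sum()                    # normalized interpolation weights
    recon = w @ stored_views               # view predicted by interpolation
    return -np.linalg.norm(novel_view - recon)

rng = np.random.default_rng(0)
views = rng.standard_normal((4, 32))       # 4 stored views, 32 features each
print(rbf_view_score(views, views[0]))                 # familiar view: ~0
print(rbf_view_score(views, rng.standard_normal(32)))  # unfamiliar: negative
```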

    Eye height manipulations: a possible solution to reduce underestimation of egocentric distances in head-mounted displays

    Virtual reality technology can be considered a multipurpose tool for diverse applications in various domains, for example training, prototyping, design, entertainment, and research investigating human perception. For many of these applications, however, it is necessary that the designed, computer-generated virtual environments are perceived as a replica of the real world. Many research studies have shown that this is not necessarily the case. Specifically, egocentric distances are underestimated compared to real-world estimates, regardless of whether the virtual environment is displayed in a head-mounted display or on an immersive large-screen display. While the main reason for this distance underestimation is still unknown, we investigate a potential approach to reduce or even eliminate it. Building on the relationship between perceived egocentric distance and the angle of declination below the horizon, we describe how eye height manipulations in virtual reality should affect perceived distances, and how this relationship could be exploited to reduce distance underestimation for individual users. In a first experiment, we investigate the influence of a manipulated eye height on an action-based measure of egocentric distance perception. We found that eye height manipulations have similar predictable effects on an action-based measure of egocentric distance as we previously observed for a cognitive measure. This might make the approach more useful than other proposed solutions across different scenarios in various domains, for example for collaborative tasks. In three additional experiments, we investigate the influence of an individualized manipulation of eye height to reduce distance underestimation in a sparse-cue and a rich-cue environment. We demonstrate that a simple eye height manipulation can selectively alter perceived distances on an individual basis, which could help every user have an experience close to what the content designer intended.
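    The angle-of-declination relationship referenced above reduces to the geometry d = h / tan(alpha), where h is the observer's assumed eye height and alpha the angle of declination below the horizon. The sketch below (illustrative numbers and function names, not the paper's data or implementation) shows why lowering the rendered camera while the observer's assumed eye height stays fixed increases the implied distance, which can offset underestimation:

```python
import math

def perceived_distance(assumed_eye_height_m, declination_rad):
    # d = h / tan(alpha): distance implied by the angle of declination
    # below the horizon, given the observer's assumed eye height.
    return assumed_eye_height_m / math.tan(declination_rad)

assumed_eye_height = 1.6     # what the observer implicitly assumes (m)
target_distance = 5.0        # simulated ground distance (m)

# Veridical rendering: the declination angle matches the assumption.
alpha = math.atan2(assumed_eye_height, target_distance)
print(perceived_distance(assumed_eye_height, alpha))       # 5.00 m

# Lowering the rendered camera to 1.3 m shrinks the declination angle;
# with the unchanged assumed eye height, the same target now maps to a
# larger perceived distance (1.6/1.3 * 5 m ~ 6.15 m).
alpha_low = math.atan2(1.3, target_distance)
print(perceived_distance(assumed_eye_height, alpha_low))   # ~6.15 m
```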

    Frequency Domain System Identification of a Light Helicopter in Hover

    This paper presents the implementation of a Multi-Input Single-Output, fully coupled transfer function model of a civil light helicopter in hover, identified with a frequency domain method. The chosen frequency range of excitation makes it possible to capture some important rotor dynamic modes, so studies that require coupled rotor/body models are possible. The pitch-rate response to the longitudinal cyclic is considered in detail throughout the paper. Different transfer functions are evaluated to compare their capability to capture the main helicopter dynamic modes. Models of order less than six are not able to capture the lead-lag dynamics in the pitch axis; nevertheless, a 4th-order transfer function model can provide acceptable results for handling qualities evaluations. The identified transfer function models are validated in the time domain with input signals different from those used during identification and show good predictive capabilities. From the results it can be concluded that the identified transfer function models capture the main dynamic characteristics of the considered light helicopter in hover.
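    To make the workflow concrete, the following is a hedged sketch of a generic frequency-domain identification of the kind the abstract describes: a nonparametric H1 frequency-response estimate from input-output records, followed by a Levy-style linear least-squares fit of a rational transfer function of chosen order. It is not the paper's implementation; signal names, orders, and parameters are illustrative assumptions.

```python
import numpy as np
from scipy import signal

def estimate_frf(u, y, fs, nperseg=1024):
    """Nonparametric H1 frequency response estimate S_yu / S_uu from an
    input record u (e.g. longitudinal cyclic) and output y (pitch rate)."""
    f, Suu = signal.welch(u, fs=fs, nperseg=nperseg)
    _, Syu = signal.csd(u, y, fs=fs, nperseg=nperseg)
    return 2.0 * np.pi * f, Syu / Suu

def fit_tf(omega, H, n_num, n_den):
    """Levy-style linear least-squares fit of N(s)/D(s) to FRF samples,
    with D(s) = 1 + d1*s + ... + dn*s^n."""
    s = 1j * omega
    A = np.hstack([np.column_stack([s ** j for j in range(n_num + 1)]),
                   np.column_stack([-H * s ** i for i in range(1, n_den + 1)])])
    # Stack real and imaginary parts so lstsq works over the reals
    Ar = np.vstack([A.real, A.imag])
    br = np.concatenate([H.real, H.imag])
    theta, *_ = np.linalg.lstsq(Ar, br, rcond=None)
    num = theta[:n_num + 1][::-1]                         # highest power first
    den = np.concatenate([theta[n_num + 1:][::-1], [1.0]])
    return signal.TransferFunction(num, den)

# Usage sketch (u, y, fs are placeholders for recorded flight-test data):
# omega, H = estimate_frf(u, y, fs)
# model = fit_tf(omega[1:], H[1:], n_num=3, n_den=4)      # skip the DC bin
```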

    Multi-loop Pilot Behavior Identification in Response to Simultaneous Visual and Haptic Stimuli

    The goal of this paper is to better understand how the neuromuscular system of a pilot, or more generally an operator, adapts to different types of haptic aids during a pitch control task. A multi-loop pilot model capable of describing human behavior during a tracking task is presented. Three identification techniques were investigated to simultaneously identify the neuromuscular admittance and the visual response of a human pilot. In the first, the frequency response functions that build up the pilot model are identified using multi-input linear time-invariant models in ARX form. The second uses cross-spectral densities and block-diagram algebra to obtain the desired frequency response estimates. Both were validated using Monte Carlo simulations of a closed-loop control task and compared with a third identification method, well known in the literature and based on cross-spectral density estimates. All methods were then applied in an experimental setup in which pilots performed a pitch control task under three conditions: a Direct Haptic Aid, an Indirect Haptic Aid, and a baseline with no haptic force. The data obtained with the proposed method provide insight into how pilots adapt their control behavior to different haptic feedback schemes. The experimental results show that humans adapt their neuromuscular admittance to different haptic aids; furthermore, the two new identification techniques appeared to give more reliable admittance estimates.
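    The cross-spectral idea mentioned above can be illustrated with a standard closed-loop estimator: correlating the pilot's control input and the tracking error with the external forcing function, rather than with each other, removes the bias that the pilot's remnant circulating in the loop would otherwise introduce. This is a generic single-loop sketch, not the paper's multi-loop method (which additionally identifies admittance from force disturbances); signal names are illustrative placeholders.

```python
import numpy as np
from scipy import signal

def pilot_describing_function(ft, e, u, fs, nperseg=2048):
    """Closed-loop estimate of the pilot's visual response H_p(jw) = U/E.

    ft : external forcing (target) signal injected into the loop
    e  : tracking error presented to the pilot
    u  : pilot's control output
    Dividing cross-spectra taken with ft cancels remnant-induced bias.
    """
    f, S_ft_u = signal.csd(ft, u, fs=fs, nperseg=nperseg)
    _, S_ft_e = signal.csd(ft, e, fs=fs, nperseg=nperseg)
    return f, S_ft_u / S_ft_e

# Usage sketch (ft, e, u, fs are placeholders for logged experiment data):
# f, Hp = pilot_describing_function(ft, e, u, fs)
```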

    The effect of social context on the use of visual information

    Social context modulates action kinematics. Less is known about whether social context also affects the use of task-relevant visual information. We tested this possibility by examining whether the instruction to play table tennis competitively or cooperatively affected the kind of visual cues necessary for successful table tennis performance. In two experiments, participants played table tennis in a dark room with only the ball, net, and table visible. Visual information about both players' actions was manipulated by means of self-glowing markers. We recorded the number of successful passes for each player individually. The results showed that participants' performance increased when their own body was rendered visible, in both the cooperative and the competitive condition. However, social context modulated the importance of different sources of visual information about the other player: in the cooperative condition, seeing the other player's racket produced the largest performance increase, whereas in the competitive condition, seeing the other player's body did. These results suggest that social context selectively modulates the use of visual information about others' actions in social interactions. (Published version: http://www.springerlink.com/content/b014430023h47417/)

    Experimental evaluation of haptic support systems for learning a 2-DoF tracking task

    This paper investigated the use of a haptic support system for learning purposes. A 2 Degrees of Freedom (DoF) haptic force feedback system was designed for a dual-axis compensatory tracking task and used in a human-in-the-loop experiment with inexperienced participants on a fixed-base simulator. Participants were divided into three groups, each performing 30 trials of the compensatory tracking task. The NoHA group performed the whole experiment without haptic aid. The other two groups (HA20 and HA10) performed a training phase with haptic aid, followed by an evaluation phase without haptic feedback; the HA20 group trained for 20 trials, whereas the HA10 group trained for only 10. The results show that, compared to manual control, haptic aid was beneficial for performing the tracking task in the training phase on both axes. In the pitch axis, performance of the HA20 group did not worsen when the feedback was switched off, whereas a considerable deterioration was visible for the HA10 group. Thus, haptic force feedback was effective for learning the control task in the pitch axis, compared to manual control. In the roll axis, overall performance was worse than in the pitch axis, and no benefits from training with haptic feedback were found for either haptic group.

    Neural Categorization of Vibrotactile Frequency in Flutter and Vibration Stimulations: An fMRI Study

    As the use of wearable haptic devices with vibrating alert features has become commonplace, an understanding of the perceptual categorization of vibrotactile frequencies has become important. This understanding can be substantially enhanced by unveiling how neural activity represents vibrotactile frequency information. Using functional magnetic resonance imaging (fMRI), this study investigated categorical clustering patterns of the frequency-dependent neural activity evoked by vibrotactile stimuli with frequencies gradually changing from 20 to 200 Hz. First, a searchlight multi-voxel pattern analysis (MVPA) was used to find brain regions whose activity carried frequency information; the contralateral postcentral gyrus (S1) and the supramarginal gyrus (SMG) did. Next, we applied multidimensional scaling (MDS) to find low-dimensional neural representations of the different frequencies from the multi-voxel activity patterns within these regions. Clustering analysis of the MDS results showed that the neural activity patterns for 20-100 Hz and 120-200 Hz divided into two distinct groups. Interestingly, this neural grouping conformed to the perceptual frequency categories found in previous behavioral studies. Our findings therefore suggest that neural activity patterns in the somatosensory cortical regions may provide a neural basis for the perceptual categorization of vibrotactile frequency.
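    A generic version of the MDS-plus-clustering step reads as follows. The data here are random placeholders, and the specific pipeline (correlation distances, metric MDS, k-means with k = 2) is an illustrative assumption rather than the authors' exact analysis:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS
from sklearn.cluster import KMeans

# patterns: (n_frequencies, n_voxels) mean multi-voxel activity per
# stimulus frequency within an ROI (e.g. S1 or SMG); placeholder data.
rng = np.random.default_rng(0)
freqs = np.arange(20, 201, 20)                 # 20..200 Hz, 10 conditions
patterns = rng.standard_normal((len(freqs), 500))

# Pairwise correlation distance between frequency-specific patterns
dist = squareform(pdist(patterns, metric="correlation"))

# Project into 2-D with metric MDS on the precomputed distances
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dist)

# Cluster the low-dimensional representation; k=2 tests whether the
# conditions split into flutter-like vs vibration-like groups
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coords)
print(dict(zip(freqs.tolist(), labels.tolist())))
```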

    Decoding pressure stimulation locations on the fingers from human neural activation patterns

    In this functional MRI study, we investigated how human brain activity represents tactile location information evoked by pressure stimulation on the fingers. Using searchlight multivoxel pattern analysis, we looked for local activity patterns that could be decoded into one of four stimulated finger locations. The supramarginal gyrus (SMG) and the thalamus were found to contain distinct multivoxel patterns corresponding to the individual stimulated locations. In contrast, a univariate general linear model analysis contrasting stimulation against resting phases for each finger identified activations mainly in the primary somatosensory cortex (S1), but not in the SMG or the thalamus. Our results indicate that S1 might be involved in detecting the presence of pressure stimuli, whereas the SMG and the thalamus might play a role in identifying which finger is stimulated. This finding may provide additional evidence for hierarchical information processing in the human somatosensory areas.
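    The decoding step for a single searchlight sphere can be sketched as below: leave-one-run-out cross-validated classification of the four finger locations from the sphere's voxel pattern, with above-chance accuracy (>0.25 here) marking the sphere's center as informative. Data shapes and parameters are illustrative placeholders, not the study's pipeline; a full searchlight repeats this over every sphere center.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score, LeaveOneGroupOut

# X: (n_trials, n_voxels) patterns from one searchlight sphere;
# y: stimulated finger (0..3); runs: fMRI run labels for cross-validation.
rng = np.random.default_rng(1)
X = rng.standard_normal((80, 100))       # placeholder random data
y = np.tile(np.arange(4), 20)            # four finger locations
runs = np.repeat(np.arange(8), 10)       # eight runs of ten trials

# Leave-one-run-out decoding accuracy for this sphere
acc = cross_val_score(LinearSVC(dual=False), X, y,
                      groups=runs, cv=LeaveOneGroupOut()).mean()
print(acc)                               # chance level is 0.25
```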

    fMRI Adaptation between Action Observation and Action Execution Reveals Cortical Areas with Mirror Neuron Properties in Human BA 44/45

    Mirror neurons (MNs) are considered to be the supporting neural mechanism for action understanding. MNs have been identified in the monkey's area F5, but identifying them in the human homolog of area F5, Brodmann area 44/45 (BA 44/45), has proven methodologically difficult. Cross-modal functional MRI (fMRI) adaptation studies supporting the existence of MNs restricted their analysis to a priori candidate regions, whereas studies that failed to find evidence used non-object-directed actions (NDAs). We tackled these limitations by using object-directed actions (ODAs) and NDAs differing only in their object-directedness, in combination with a cross-modal adaptation paradigm and a whole-brain analysis. Additionally, we tested voxels' blood oxygenation level-dependent (BOLD) response patterns for several properties previously reported as typical MN response properties. Our results revealed 52 voxels in the left inferior frontal gyrus (IFG; particularly BA 44/45) that respond to both motor and visual stimulation and exhibit cross-modal adaptation between the execution and observation of the same action. These results demonstrate that part of the human IFG, specifically BA 44/45, has BOLD response characteristics very similar to those of monkey area F5.