Elastic Block Modeling of Fault Slip Rates across Southern California
We present fault slip rate estimates for Southern California based on Global Positioning System (GPS) velocity data from the University NAVSTAR Consortium (UNAVCO), the Southern California Earthquake Center (SCEC), and new campaign GPS velocity data from the San Bernardino Mountains and vicinity. Fault slip rates were calculated using TDEFNODE, a program that models elastic deformation within lithospheric blocks and slip on block-bounding faults [2]. Our block model comprised most major faults within Southern California, and TDEFNODE produced slip rate values similar to those of other geodetic modeling techniques. The fastest-slipping faults are the Imperial fault (37.4±0.1 mm/yr) and the Brawley seismic zone (23.5±0.1 mm/yr) in the southeastern section of the San Andreas fault (SAF) system. The slip rate of the SAF decreases northwestward from 18.7±0.2 mm/yr in Coachella Valley to 6.6±0.2 mm/yr along the Banning/Garnet Hill sections, as slip transfers northward into the Eastern California Shear Zone. North of the junction with the San Jacinto fault (10.5±0.2 mm/yr), the SAF slip rate increases to 14.2±0.1 mm/yr in the Mojave section. TDEFNODE slip rate estimates match geologic estimates well for the SAF (Coachella), SAF (San Gorgonio Pass), San Jacinto, Elsinore, and Whittier faults, but less well for other faults. We determine that the northwestern and southeastern sections of the SAF are slipping fastest, with slip partitioned over several faults in the central model area. In addition, our modeling technique produces results similar to other geodetic studies but deviates from geologic estimates. We conclude that TDEFNODE is a viable modeling technique in this context and at the undergraduate level.
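The rigid-block core of this kind of modeling can be illustrated with a toy computation (this is a sketch of the general block-rotation idea, not TDEFNODE itself, and the Euler vectors and coordinates below are purely hypothetical): each block rotates about an Euler pole, and the long-term relative velocity across a block-bounding fault at a point r is v = (ω₂ − ω₁) × r.

```python
import numpy as np

R_EARTH = 6371e3  # mean Earth radius, m

def ecef_unit(lat_deg, lon_deg):
    """Unit position vector for a point on a spherical Earth."""
    lat, lon = np.radians([lat_deg, lon_deg])
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def relative_velocity(omega1, omega2, lat_deg, lon_deg):
    """Velocity of block 2 relative to block 1 at (lat, lon), in m/yr.

    omega1, omega2: angular-velocity (Euler) vectors in rad/yr, ECEF frame.
    Elastic coupling on the fault is deliberately ignored here.
    """
    r = R_EARTH * ecef_unit(lat_deg, lon_deg)
    return np.cross(omega2 - omega1, r)

# Hypothetical Euler vectors for two blocks spanning a fault:
omega_a = np.zeros(3)                           # block A held fixed
omega_b = np.array([1.5e-9, -2.0e-9, 4.0e-9])   # block B, illustrative only

v = relative_velocity(omega_a, omega_b, 33.5, -115.9)  # a point near Coachella Valley
print(np.linalg.norm(v) * 1e3, "mm/yr")  # magnitude of long-term relative motion
```

Projecting this relative-velocity vector onto the local fault strike and fault normal would give the strike-slip and opening/closing rates that a block model reports.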
Asymmetric interlimb transfer of concurrent adaptation to opposing dynamic forces
Interlimb transfer of adaptation to a novel dynamic force has been well documented. It has also been shown that unimanual adaptation to opposing novel environments is possible if they are associated with different workspaces. The main aim of this study was to test whether adaptation to opposing velocity-dependent viscous forces with one arm could improve the initial performance of the other arm. The study also examined whether this interlimb transfer occurred across an extrinsic, spatial coordinative system or an intrinsic, joint-based coordinative system. Subjects initially adapted to opposing viscous forces separated by target location. Our measure of performance was the correlation between the speed profiles of each movement within a force condition and an ‘average’ trajectory within null-force conditions. Adaptation to the opposing forces was seen during initial acquisition, with a significantly improved coefficient in epoch eight compared to epoch one. We then tested interlimb transfer from the dominant to the non-dominant arm (D → ND) and vice versa (ND → D) across either an extrinsic or an intrinsic coordinative system. Interlimb transfer was seen only from the dominant to the non-dominant limb, and only across an intrinsic coordinative system. These results support previous studies involving adaptation to a single dynamic force, but also indicate that interlimb transfer of multiple opposing states is possible. This suggests that the information available at the level of representation allowing interlimb transfer can be more intricate than a general movement goal or a single perceived directional error.
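A performance measure of this kind, correlating each trial's speed profile against an average null-field profile, can be sketched as follows. This is an illustrative reconstruction under stated assumptions (Pearson correlation, profiles linearly resampled to a common length), not the authors' analysis code.

```python
import numpy as np

def speed_profile(positions, dt):
    """Tangential speed at each step from an (n, 2) array of x-y positions."""
    v = np.diff(positions, axis=0) / dt
    return np.linalg.norm(v, axis=1)

def performance_coefficient(trial_speed, reference_speed, n=100):
    """Pearson correlation between a trial's speed profile and an average
    null-field ('reference') profile, both resampled to n samples.
    The resampling length n is an assumption, not from the paper."""
    grid = np.linspace(0.0, 1.0, n)
    a = np.interp(grid, np.linspace(0.0, 1.0, len(trial_speed)), trial_speed)
    b = np.interp(grid, np.linspace(0.0, 1.0, len(reference_speed)), reference_speed)
    return np.corrcoef(a, b)[0, 1]

# Demo with synthetic bell-shaped speed profiles (illustrative only)
t = np.linspace(0.0, 1.0, 80)
r = performance_coefficient(np.sin(np.pi * t),
                            np.sin(np.pi * np.linspace(0.0, 1.0, 120)))
print(r)  # close to 1 for near-identical shapes
```

Because the coefficient compares shapes rather than absolute speeds, a well-adapted movement scores high even if its overall speed differs from the null-field average.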
Visual, Motor and Attentional Influences on Proprioceptive Contributions to Perception of Hand Path Rectilinearity during Reaching
We examined how proprioceptive contributions to perception of hand path straightness are influenced by visual, motor and attentional sources of performance variability during horizontal planar reaching. Subjects held the handle of a robot that constrained goal-directed movements of the hand to paths of controlled curvature. Subjects attempted to detect the presence of hand path curvature during both active (subject-driven) and passive (robot-driven) movements that either did or did not require active muscle force production. Subjects were less able to discriminate curved from straight paths when actively reaching for a target than when the robot moved their hand through the same curved paths. This effect was especially evident during robot-driven movements requiring concurrent activation of lengthening, but not shortening, muscles. Subjects were less likely to report curvature, and were more variable in reporting when movements appeared straight, in a novel “visual channel” condition previously shown to block adaptive updating of motor commands in response to deviations from a straight-line hand path. Performance was similarly compromised when subjects simultaneously performed a distracting secondary task (key pressing with the contralateral hand), and these effects compounded when the last two treatments were combined. We conclude that environmental, intrinsic and attentional factors all impair the ability to detect deviations from a rectilinear hand path during goal-directed movement by decreasing proprioceptive contributions to limb state estimation. In contrast, response variability increased only in experimental conditions thought to impose additional attentional demands on the observer. Implications of these results for perception and other sensorimotor behaviors are discussed.
Multimodal virtual environments: an opportunity to improve fire safety training?
Fires and fire-related fatalities remain a tragic and frequent occurrence. Evidence has shown that humans adopt sub-optimal behaviours during fire incidents; training is therefore one possible means of improving occupant survival rates. We present the potential benefits of using Virtual Environment Training (VET) for fire evacuation. These include experiential and active learning, the ability to interact with contexts that would be dangerous to experience in real life, the ability to customise training and scenarios to the learner, and analytics on learner performance. While several studies have investigated fire safety in VET, generally with positive outcomes, challenges related to cybersickness, interaction and content creation remain. Moreover, issues such as a lack of behavioural realism have been attributed to the lack of realistic sensory feedback. We argue for multimodal (visual, auditory, olfactory, heat) virtual fire safety training to address the limitations of existing simulators and, ultimately, improve the outcomes of fire incidents. © 2020, Institution of Occupational Safety and Health.
Compression of Auditory Space during Forward Self-Motion
<div><h3>Background</h3><p>Spatial inputs from the auditory periphery change with movements of the head or whole body relative to a sound source. Nevertheless, humans perceive a stable auditory environment and react appropriately to a sound source. This suggests that the inputs are reinterpreted in the brain and integrated with information about the movements. Little is known, however, about how these movements modulate auditory perceptual processing. Here, we investigate the effect of linear acceleration on auditory space representation.</p> <h3>Methodology/Principal Findings</h3><p>Participants were passively transported forward or backward at constant acceleration in a robotic wheelchair. An array of loudspeakers was aligned parallel to the motion direction along a wall to the right of the listener. A short noise burst was presented during self-motion from one of the loudspeakers when the listener’s physical coronal plane reached the location of one of the speakers (null point). In Experiments 1 and 2, participants indicated in which direction the sound was presented, forward or backward relative to their subjective coronal plane. The results showed that the sound position aligned with the subjective coronal plane was displaced ahead of the null point only during forward self-motion, and that the magnitude of this displacement increased with increasing acceleration. Experiment 3 investigated the structure of auditory space in the traveling direction during forward self-motion. Sounds were presented at various distances from the null point, and participants indicated the perceived sound location by pointing with a rod. All sounds actually located in the traveling direction were perceived as biased towards the null point.</p> <h3>Conclusions/Significance</h3><p>These results suggest a distortion of auditory space in the direction of movement during forward self-motion. The underlying mechanism might involve anticipatory spatial shifts in auditory receptive field locations, driven by afferent signals from the vestibular system.</p></div>
Proprioceptive Movement Illusions Due to Prolonged Stimulation: Reversals and Aftereffects
Background. Adaptation to constant stimulation has often been used to investigate the mechanisms of perceptual coding, but the adaptive processes within the proprioceptive channels that encode body movement have not been well described. We investigated these processes using vibration as a stimulus, because vibration of muscle tendons produces a powerful illusion of movement. Methodology/Principal Findings. We applied sustained 90 Hz vibratory stimulation to biceps brachii, an elbow flexor, and induced the expected illusion of elbow extension (in 12 participants). There was clear evidence of adaptation to the movement signal both during the 6-min vibration and on its cessation. During vibration, the strong initial illusion of extension waxed and waned, with diminishing duration of the periods of illusory movement and occasional reversals in the direction of the illusion. After vibration there was an aftereffect in which the stationary elbow seemed to move into flexion. Muscle activity showed no consistent relationship with the variations in perceived movement. Conclusion. We interpret the observed effects as adaptive changes in the central mechanisms that code movement in direction-selective opponent channels.
Eye-Hand Coordination during Dynamic Visuomotor Rotations
Background
For many technology-driven visuomotor tasks, such as tele-surgery, human operators face situations in which the frames of reference for vision and action are misaligned and must be compensated for in order to perform the tasks with the necessary precision. The cognitive mechanisms for the selection of appropriate frames of reference are still not fully understood. This study investigated the effect of changing visual and kinesthetic frames of reference during wrist pointing, simulating activities typical of tele-operations.
Methods
Using a robotic manipulandum, subjects performed center-out pointing movements to visual targets presented on a computer screen, coordinating wrist flexion/extension with abduction/adduction. We compared movements in which the frames of reference were aligned (unperturbed condition) with movements performed under different combinations of dynamic visual/kinesthetic perturbations. The visual frame of reference was centered on the computer screen, while the kinesthetic frame was centered on the wrist joint. Both frames changed their orientation dynamically (angular velocity = 36°/s) with respect to the head-centered frame of reference (the eyes). Perturbations were either unimodal (visual or kinesthetic) or bimodal (visual + kinesthetic). As expected, pointing performance was best in the unperturbed condition. The spatial pointing error worsened dramatically during both unimodal and most bimodal conditions. However, in the bimodal condition in which both disturbances were in phase, adaptation was very fast and kinematic performance indicators approached the values of the unperturbed condition.
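The in-phase bimodal condition described above can be illustrated with a toy computation: when the visual and kinesthetic frames rotate by the same angle, the mapping between them is constant over time. This sketch is illustrative only (the hand position and timing values are made up; only the 36°/s rate comes from the study).

```python
import numpy as np

def rot(theta):
    """2-D rotation matrix for angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

omega = np.radians(36.0)       # frame angular velocity: 36 deg/s
hand = np.array([0.10, 0.0])   # hand position in the wrist-centered frame (m)

for t in (0.0, 0.5, 1.0):
    theta = omega * t
    # Kinesthetic frame rotates the hand by theta; the visual frame rotates
    # by the same theta, so mapping back through it cancels the rotation.
    # In phase, the visual-to-kinesthetic relation is time-invariant.
    cursor = rot(-theta) @ (rot(theta) @ hand)
    print(t, cursor)  # cursor stays at the same screen location
```

With only a unimodal perturbation (one rotation without the other), the product above would be a net rotation that grows with time, which is consistent with the large pointing errors reported for those conditions.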
Conclusions
This result suggests that subjects learned to exploit an “affordance” made available by the invariant phase relation between the visual and kinesthetic frames. It seems that, after detecting this invariance, subjects used the kinesthetic input as an informative signal rather than a disturbance, compensating for the visual rotation without going through the lengthy process of building an internal adaptation model. Practical implications are discussed as regards the design of advanced, high-performance man-machine interfaces.
The Proprioceptive Map of the Arm Is Systematic and Stable, but Idiosyncratic
Visual and somatosensory signals together provide an estimate of the hand's spatial location. While the ability of subjects to identify the spatial location of their hand based on visual and proprioceptive signals has previously been characterized, relatively few studies have examined in detail the spatial structure of the proprioceptive map of the arm. Here, we reconstructed and analyzed the spatial structure of the estimation errors that resulted when subjects reported the location of their unseen hand across a 2D horizontal workspace. Hand position estimation was mapped under four conditions: with and without tactile feedback, and with the right and left hands. In the task, we moved each subject's hand to one of 100 targets in the workspace while their eyes were closed. Then we either a) applied tactile stimulation to the fingertip by allowing the index finger to touch the target, or b) as a control, hovered the fingertip 2 cm above the target. After returning the hand to a neutral position, subjects opened their eyes to verbally report where their fingertip had been. We measured and analyzed both the direction and magnitude of the resulting estimation errors. Tactile feedback reduced the magnitude of these estimation errors but did not change their overall structure. In addition, the spatial structure of these errors was idiosyncratic: each subject had a unique pattern of errors that was stable between hands and over time. Finally, we found that at the population level the magnitude of the estimation errors had a characteristic distribution over the workspace: errors were smaller closer to the body. The stability of estimation errors across conditions and time suggests that the brain constructs a proprioceptive map that is reliable, even if it is not necessarily accurate. The idiosyncrasy across subjects emphasizes that each individual constructs a map that is unique to their own experiences.
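The direction-and-magnitude error analysis described above can be sketched numerically. This is an illustrative reconstruction, not the authors' code: given actual target positions and reported positions, it computes each 2-D estimation error's magnitude and direction; the sample coordinates are invented.

```python
import numpy as np

def estimation_errors(actual, reported):
    """Magnitude and direction of 2-D hand-localization errors.

    actual, reported: (n, 2) arrays of x-y positions (cm).
    Returns error magnitudes (cm) and directions (degrees, CCW from +x).
    """
    err = np.asarray(reported) - np.asarray(actual)
    magnitude = np.linalg.norm(err, axis=1)
    direction = np.degrees(np.arctan2(err[:, 1], err[:, 0]))
    return magnitude, direction

# Illustrative data: three targets, reports all shifted 1 cm to the right
actual = np.array([[0.0, 30.0], [10.0, 40.0], [-10.0, 40.0]])
reported = actual + np.array([1.0, 0.0])
mag, ang = estimation_errors(actual, reported)
print(mag)  # → [1. 1. 1.]
print(ang)  # → [0. 0. 0.]
```

Aggregating such error vectors over a grid of targets (per subject, per condition) is what lets one compare the structure of the map across hands, feedback conditions and time.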
Adaptive tuning functions arise from visual observation of past movement
Visual observation of movement plays a key role in action. For example, tennis players have little time to react to the ball but still need to prepare the appropriate stroke; it might therefore be useful to use visual information about the ball's trajectory to recall a specific motor memory. Past visual observation of movement (as well as past passive and active arm movement) affects the learning and recall of motor memories. Moreover, whether passive or active, these past contextual movements exhibit generalization (or tuning) across movement directions. Here we extend this work, examining whether visual motion exhibits similar generalization across movement directions and whether such generalization functions can explain patterns of interference. Both the adaptation movement and the contextual movement exhibited generalization beyond the training direction, with the visual contextual motion exhibiting much broader tuning. A second experiment demonstrated that this pattern was consistent with the results of an interference experiment in which opposing force fields were associated with two separate visual movements. Overall, our study shows that visual contextual motion exhibits much broader (and shallower) tuning functions than previously seen for either passive or active movements, demonstrating that the tuning characteristics of past motion are highly dependent on their sensory modality.
A process model of the formation of spatial presence experiences
In order to bridge interdisciplinary differences in Presence research and to establish connections between Presence and “older” concepts of psychology and communication, a theoretical model of the formation of Spatial Presence is proposed. It is applicable to exposure to different media and is intended to unify existing efforts to develop a theory of Presence. The model includes assumptions about attention allocation, mental models, and involvement, and also considers the role of media factors and user characteristics, thus incorporating much previous work. It is argued that a commonly accepted model of Spatial Presence is the only way to secure further progress within the international, interdisciplinary and multiple-paradigm community of Presence research.