Temporal isolation of neural processes underlying face preference decisions
Decisions about whether we like someone are often made so rapidly from first impressions that it is difficult to examine the engagement of neural structures at specific points in time. Here, we used a temporally extended decision-making paradigm to examine brain activation with functional MRI (fMRI) at sequential stages of the decision-making process. Activity in reward-related brain structures—the nucleus accumbens (NAC) and orbitofrontal cortex (OFC)—was found to occur at temporally dissociable phases while subjects decided which of two unfamiliar faces they preferred. Increases in activation in the OFC occurred late in the trial, consistent with a role for this area in computing the decision of which face to choose. Signal increases in the NAC occurred early in the trial, consistent with a role for this area in initial preference formation. Moreover, early signal increases in the NAC also occurred while subjects performed a control task (judging face roundness) when these data were analyzed on the basis of which of those faces were subsequently chosen as preferred in a later task. The findings support a model in which rapid, automatic engagement of the NAC conveys a preference signal to the OFC, which in turn is used to guide choice.
Distractibility in daily life is reflected in the structure and function of human parietal cortex
We all appreciate that some of our friends and colleagues are more distractible than others. This variability can be captured by pencil and paper questionnaires in which individuals report such cognitive failures in their everyday life. Surprisingly, these self-report measures have high heritability, leading to the hypothesis that distractibility might have a basis in brain structure. In a large sample of healthy adults, we demonstrated that a simple self-report measure of everyday distractibility accurately predicted gray matter volume in a remarkably focal region of left superior parietal cortex. This region must play a causal role in reducing distractibility, because we found that disrupting its function with transcranial magnetic stimulation increased susceptibility to distraction. Finally, we showed that the self-report measure of distractibility reliably predicted our laboratory-based measure of attentional capture. Our findings identify a critical mechanism in the human brain causally involved in avoiding distractibility, which, importantly, bridges self-report judgments of cognitive failures in everyday life and a commonly used laboratory measure of distractibility to the structure of the human brain.
Functional Brain Hyperactivations Are Linked to an Electrophysiological Measure of Slow Interhemispheric Transfer Time after Pediatric Moderate/Severe Traumatic Brain Injury.
Increased task-related blood oxygen level dependent (BOLD) activation is commonly observed in functional magnetic resonance imaging (fMRI) studies of moderate/severe traumatic brain injury (msTBI), but the functional relevance of these hyperactivations and how they are linked to more direct measures of neuronal function remain largely unknown. Here, we investigated how working memory load (WML)-dependent BOLD activation was related to an electrophysiological measure of interhemispheric transfer time (IHTT) in a sample of 18 msTBI patients and 26 demographically matched controls from the UCLA RAPBI (Recovery after Pediatric Brain Injury) study. In the context of highly similar fMRI task performance, a subgroup of TBI patients with slow IHTT had greater BOLD activation with higher WML than both healthy control children and a subgroup of msTBI patients with normal IHTT. Slower IHTT treated as a continuous variable was also associated with BOLD hyperactivation in the full TBI sample and in controls. Higher WML-dependent BOLD activation was related to better performance on a clinical cognitive performance index, an association that was more pronounced within the patient group with slow IHTT. Our previous work has shown that a subgroup of children with slow IHTT after pediatric msTBI has increased risk for poor white matter organization, long-term neurodegeneration, and poor cognitive outcome. BOLD hyperactivations after msTBI may reflect neuronal compensatory processes supporting higher-order capacity demanding cognitive functions in the context of inefficient neuronal transfer of information. The link between BOLD hyperactivations and slow IHTT adds to the multi-modal validation of this electrophysiological measure as a promising biomarker.
A method for viewing and interacting with medical volumes in virtual reality
The medical field has long benefited from advancements in diagnostic imaging technology. Medical images created through methods such as Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) are used by medical professionals to non-intrusively peer into the body to make decisions about surgeries. Over time, the viewing medium of medical images has evolved from X-ray film negatives to stereoscopic 3D displays, with each new development enhancing the viewer’s ability to discern detail or decreasing the time needed to produce and render a body scan. Though doctors and surgeons are trained to view medical images in 2D, some are choosing to view body scans in 3D through volume rendering. While traditional 2D displays can be used to display 3D data, a viewing method that incorporates depth would convey more information to the viewer. One device that has shown promise in medical image viewing applications is the Virtual Reality Head Mounted Display (VR HMD).
VR HMDs have recently increased in popularity, with several commodity devices being released within the last few years. The Oculus Rift, HTC Vive, and Windows Mixed Reality HMDs like the Samsung Odyssey offer higher resolution screens, more accurate motion tracking, and lower prices than earlier HMDs. They also include motion-tracked handheld controllers meant for navigation and interaction in video games. Because of their popularity and low cost, medical volume viewing software that is compatible with these headsets would be accessible to a wide audience. However, the introduction of VR to medical volume rendering presents difficulties in implementing consistent user interactions and ensuring performance.
Though all three headsets require unique driver software, they are compatible with OpenVR, a middleware that standardizes communication between the HMD, the HMD's controllers, and VR software. However, the controllers included with the HMDs each have a slightly different control layout. Furthermore, buttons, triggers, touchpads, and joysticks that share the same hand position between devices do not report values to OpenVR in the same way. Implementing volume rendering functions like clipping and tissue density windowing on VR controllers could improve the user's experience over mouse-and-keyboard schemes through the use of tracked hand and finger movements. To create a control scheme compatible with multiple HMDs, a way of mapping controls differently depending on the device was developed.
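One way to sketch such a device-dependent mapping layer is a table from abstract actions to per-device physical inputs; the device and binding names below are illustrative assumptions, not the thesis's actual assignments:

```python
# Map abstract volume-rendering actions to per-device physical inputs.
# Device names and bindings are hypothetical examples, not the thesis's scheme.
CONTROL_MAPS = {
    "vive_wand": {
        "move_volume":      "trigger",
        "adjust_window":    "touchpad_vertical",
        "place_clip_plane": "grip",
    },
    "oculus_touch": {
        "move_volume":      "trigger",
        "adjust_window":    "thumbstick_vertical",
        "place_clip_plane": "grip",
    },
}

def resolve_input(device: str, action: str) -> str:
    """Return the physical control bound to an abstract action on a device."""
    try:
        return CONTROL_MAPS[device][action]
    except KeyError:
        raise ValueError(f"no binding for {action!r} on {device!r}")

print(resolve_input("vive_wand", "adjust_window"))  # touchpad_vertical
```

The application code then queries only abstract actions, so the same windowing and clipping functions work unchanged whichever controller OpenVR reports.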
Additionally, volume rendering is a computationally intensive process, and even more so when rendering for an HMD. By using techniques like GPU raytracing with modern GPUs, real-time framerates are achievable on desktop computers with traditional displays. However, the importance of achieving high framerates is even greater when viewing with a VR HMD due to its higher level of immersion. Because the 3D scene occupies most of the user’s field of view, low or choppy framerates contribute to feelings of motion sickness. This was mitigated through a decrease in volume rendering quality in situations where the framerate drops below acceptable levels.
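A framerate-guided quality fallback of the kind described could be sketched as follows; the thresholds and the step-scale mechanism (coarsening the ray-march sampling distance) are illustrative assumptions, as the abstract does not give exact values:

```python
def adjust_quality(step_scale: float, fps: float,
                   target_fps: float = 90.0, floor: float = 0.25) -> float:
    """Coarsen or refine the ray-march step scale based on measured FPS.

    step_scale multiplies the base sampling distance along each ray:
    1.0 is full quality; larger values mean fewer samples per ray
    (faster, lower quality). Thresholds here are assumptions.
    """
    if fps < target_fps * 0.9:
        # Falling behind the headset refresh rate: coarsen sampling,
        # but never below a minimum quality floor.
        step_scale = min(step_scale * 1.25, 1.0 / floor)
    elif fps > target_fps * 0.98:
        # Headroom available: refine back toward full quality.
        step_scale = max(step_scale * 0.9, 1.0)
    return step_scale
```

Called once per frame, this converges on the coarsest sampling that sustains the target framerate and restores quality when the view becomes cheaper to render.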
The volume rendering and VR interaction methods described in this thesis were demonstrated in an application developed for immersive viewing of medical volumes. This application places the user and a medical volume in a 3D VR environment, allowing the user to manually place clipping planes, adjust the tissue density window, and move the volume to achieve different viewing angles with handheld motion-tracked controllers. The result shows that GPU-raytraced medical volumes can be viewed and interacted with in VR using commodity hardware, and that a control scheme can be mapped to allow the same functions on different HMD controllers despite differences in layout.
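The two per-sample operations named above, tissue density windowing and clipping planes, can be illustrated with small helpers; the linear transfer function and signed-distance clip test below are simplified sketches, not the application's actual formulas:

```python
def window_opacity(density: float, window_low: float, window_high: float) -> float:
    """Linear tissue-density window: densities below the window are fully
    transparent, above it fully opaque, ramping linearly in between.
    (A simplified transfer function for illustration.)"""
    if density <= window_low:
        return 0.0
    if density >= window_high:
        return 1.0
    return (density - window_low) / (window_high - window_low)

def clipped(point, plane_point, plane_normal) -> bool:
    """True if a sample point lies on the removed side of a clipping plane,
    judged by the sign of the point's distance along the plane normal."""
    d = sum((p - q) * n for p, q, n in zip(point, plane_point, plane_normal))
    return d < 0.0
```

During ray marching, each sample would be skipped if `clipped(...)` is true and otherwise composited with the opacity returned by `window_opacity(...)`.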
Generating Stimuli for Neuroscience Using PsychoPy
PsychoPy is a software library written in Python, using OpenGL to generate very precise visual stimuli on standard personal computers. It is designed to allow the construction of as wide a variety of neuroscience experiments as possible, with the least effort. By writing scripts in standard Python syntax, users can generate an enormous variety of visual and auditory stimuli and can interact with a wide range of external hardware (enabling its use in fMRI, EEG, MEG, etc.). The structure of scripts is simple and intuitive. As a result, new experiments can be written very quickly, and trying to understand a previously written script is easy, even with minimal code comments. PsychoPy can also generate movies and image sequences to be used in demos or simulated neuroscience experiments. This paper describes the range of tools and stimuli that it provides and the environment in which experiments are conducted.
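As an illustration of the scripting style described, a minimal PsychoPy script presenting a drifting grating might look like this (stimulus parameters are chosen arbitrarily; it requires a display and the `psychopy` package):

```python
from psychopy import visual, core, event

# Open a window on the default monitor, using normalized coordinates.
win = visual.Window(size=(800, 600), color='grey', units='norm')

# A sinusoidal grating with a Gaussian mask.
grating = visual.GratingStim(win, tex='sin', mask='gauss', sf=4, size=0.8)

clock = core.Clock()
while clock.getTime() < 2.0:          # present for 2 seconds
    grating.phase = clock.getTime()   # drift the grating over time
    grating.draw()
    win.flip()                        # swap buffers on the next screen refresh
    if event.getKeys(['escape']):     # allow early exit
        break

win.close()
core.quit()
```

Timing-critical updates happen on `win.flip()`, which blocks until the vertical refresh, which is what gives PsychoPy its frame-accurate stimulus control.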
A field portable ultrasound system for bovine tissue characterization
Efforts by beef producers and beef packers to reduce carcass losses and to move the industry to a value-based marketing system have renewed their interest in the development of new electronic grading techniques for assessing beef carcass composition. A custom ultrasonic data acquisition system has been developed for the purpose of investigating the feasibility of beef tissue characterization. The unit developed is a compact, hand-held, six-channel, battery-operated data logger capable of sampling A-mode ultrasound signals at a rate of 15 megasamples per second. A custom epoxy-encapsulated ultrasound transducer array was developed to fit the inner curvature of the thoracic cavity on top of the intercostal muscle between the 12th and 13th ribs. Data collected by the ultrasound unit are temporarily stored in a handheld data terminal/computer for retrieval and analysis on a personal computer at a later time. Ultrasound samples from 39 carcasses were analyzed for backscatter energy content. The ultrasound records were partitioned into two groups based on the contact characteristics between the probe and tissue. Preliminary fat estimation in the longissimus dorsi muscle resulted in correlation coefficients on the two partitioned data sets of 0.81 and 0.82. Preliminary marbling estimation in the longissimus dorsi muscle resulted in correlation coefficients on the two partitioned groups of 0.59 and 0.70.
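The backscatter energy analysis could be sketched as below: summing squared RF amplitudes over a time gate is a common definition of backscatter energy, but the authors' exact estimator, gate placement, and preprocessing are not specified, so treat this as an assumption:

```python
def backscatter_energy(samples, gate_start, gate_end, fs=15e6):
    """Backscatter energy of an A-mode record within a time gate.

    samples:    digitized RF amplitudes
    gate_start, gate_end: gate boundaries in seconds
    fs:         sampling rate in Hz (15 MS/s, as in the data logger above)

    Energy is taken as the sum of squared amplitudes over the gated
    region -- a common convention, not necessarily the authors' exact one.
    """
    i0 = round(gate_start * fs)   # round() avoids float truncation at indices
    i1 = round(gate_end * fs)
    return sum(s * s for s in samples[i0:i1])
```

Features like this, computed per channel of the six-element array, could then be regressed against carcass fat or marbling measurements.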
Attention-dependent modulation of neural activity in primary sensorimotor cortex
Although motor tasks usually do not require much attention, there are findings that attention can alter neuronal activity not only in higher motor areas but also within the primary sensorimotor cortex. However, these findings are equivocal: attention effects were investigated only in either the dominant or the nondominant hand; attention was operationalized either as concentration (i.e., attention directed to the motor task) or as distraction (i.e., attention directed away from the motor task); the complexity of the motor tasks varied; and almost no left-handers were studied. Therefore, in this study, both right- and left-handers were investigated with an externally paced button-press task in which subjects typed with the index finger of the dominant, nondominant, or both hands. We introduced four different attention levels: attention-modulation-free, distraction (counting backward), concentration on the moving finger, and divided concentration during bimanual movement. We found that distraction reduced neuronal activity in both contra- and ipsilateral primary sensorimotor cortex when the nondominant hand was tapping in both handedness groups. At the same time, distraction activated the dorsal frontoparietal attention network and deactivated the ventral default network. We conclude that the difficulty and training status of both the motor and the cognitive task, as well as use of the dominant versus the nondominant hand, are crucial for the presence and magnitude of attention effects on sensorimotor cortex activity. In the case of a very simple button-press task, attention modulation is seen for the nondominant hand under distraction, in both handedness groups.
The multisensory attentional consequences of tool use: a functional magnetic resonance imaging study
Background: Tool use in humans requires that multisensory information is integrated across different locations, from objects seen to be distant from the hand, but felt indirectly at the hand via the tool. We tested the hypothesis that using a simple tool to perceive vibrotactile stimuli results in the enhanced processing of visual stimuli presented at the distal, functional part of the tool. Such a finding would be consistent with a shift of spatial attention to the location where the tool is used.
Methodology/Principal Findings: We tested this hypothesis by scanning healthy human participants' brains using functional magnetic resonance imaging, while they used a simple tool to discriminate between target vibrations, accompanied by congruent or incongruent visual distractors, on the same or opposite side to the tool. The attentional hypothesis was supported: BOLD response in occipital cortex, particularly in the right hemisphere lingual gyrus, varied significantly as a function of tool position, increasing contralaterally, and decreasing ipsilaterally to the tool. Furthermore, these modulations occurred despite the fact that participants were repeatedly instructed to ignore the visual stimuli, to respond only to the vibrotactile stimuli, and to maintain visual fixation centrally. In addition, the magnitude of multisensory (visual-vibrotactile) interactions in participants' behavioural responses significantly predicted the BOLD response in occipital cortical areas that were also modulated as a function of both visual stimulus position and tool position.
Conclusions/Significance: These results show that using a simple tool to locate and to perceive vibrotactile stimuli is accompanied by a shift of spatial attention to the location where the functional part of the tool is used, resulting in enhanced processing of visual stimuli at that location, and decreased processing at other locations. This was most clearly observed in the right hemisphere lingual gyrus. Such modulations of visual processing may reflect the functional importance of visuospatial information during human tool use.
Interaction of numerosity and time in prefrontal and parietal cortex
It has been proposed that numerical and temporal information are processed by partially overlapping magnitude systems. Interactions across different magnitude domains could occur both at the level of perception and decision-making. However, their neural correlates have been elusive. Here, using functional magnetic resonance imaging in humans, we show that the right intraparietal cortex (IPC) and inferior frontal gyrus (IFG) are jointly activated by duration and numerosity discrimination tasks, with a congruency effect in the right IFG. To determine whether the IPC and the IFG are involved in response conflict (or facilitation) or modulation of subjective passage of time by numerical information, we examined their functional roles using transcranial magnetic stimulation (TMS) and two different numerosity-time interaction tasks: duration discrimination and time reproduction tasks. Our results show that TMS of the right IFG impairs categorical duration discrimination, whereas that of the right IPC modulates the degree of influence of numerosity on time perception and impairs precise time estimation. These results indicate that the right IFG is specifically involved at the categorical decision stage, whereas bleeding of numerosity information on perception of time occurs within the IPC. Together, our findings suggest a two-stage model of numerosity-time interactions whereby the interaction at the perceptual level occurs within the parietal region and the interaction at categorical decisions takes place in the prefrontal cortex.