Estimating the long-term impact of a prophylactic human papillomavirus 16/18 vaccine on the burden of cervical cancer in the UK
To predict the public health impact on cervical disease of introducing human papillomavirus (HPV) vaccination in the United Kingdom, we developed a mathematical model that can be used to reflect the impact of vaccination in different countries with existing screening programmes. Its use is discussed in the context of the United Kingdom. The model was calibrated with published data. The impact of vaccination on cervical cancers and deaths, precancerous lesions, and screening outcomes was estimated for a vaccinated cohort of 12-year-old girls, for which the model predicts a 66% reduction in the prevalence of high-grade precancerous lesions and a 76% reduction in cervical cancer deaths. Estimates for various other measures of the population effects of vaccination are also presented. We concluded that it is feasible to forecast the potential effects of HPV vaccination in the context of an existing national screening programme. Results suggest a sizable reduction in the incidence of cervical cancer and related deaths. Areas for future research include investigation of the beneficial effects of HPV vaccination on infection transmission and epidemic dynamics, as well as on HPV-related neoplasms in other sites.
Time-locked Cortical Processing of Speech in Complex Environments
Our ability to communicate using speech depends on complex, rapid processing mechanisms in the human brain. These cortical processes make it possible for us to easily understand one another even in noisy environments. Measurements of neural activity have found that cortical responses time-lock to the acoustic and linguistic features of speech. Investigating the neural mechanisms that underlie this ability could lead to a better understanding of human cognition, language comprehension, and hearing and speech impairments.
We use Magnetoencephalography (MEG), which non-invasively measures the magnetic fields that arise from neural activity, to further explore these time-locked cortical processes. One method for detecting this activity is the Temporal Response Function (TRF), which models the impulse response of the neural system to continuous stimuli. Prior work has found that TRFs reflect several stages of speech processing in the cortex. Accordingly, we use TRFs to investigate cortical processing of both low-level acoustic and high-level linguistic features of continuous speech.
First, we find that cortical responses time-lock at high gamma frequencies (~100 Hz) to the acoustic envelope modulations of the low-pitch segments of speech. Older and younger listeners show similar high gamma responses, even though slow envelope TRFs show age-related differences. Next, we utilize frequency domain analysis, TRFs and linear decoders to investigate cortical processing of high-level structures such as sentences and equations. We find that the cortical networks involved in arithmetic processing dissociate from those underlying language processing, although both involve several overlapping areas. These processes are more separable when subjects selectively attend to one speaker over another distracting speaker. Finally, we compare both conventional and novel TRF algorithms in terms of their ability to estimate TRF components, which may provide robust measures for analyzing group and task differences in auditory and speech processing. Overall, this work provides insights into several stages of time-locked cortical processing of speech and highlights the use of TRFs for investigating neural responses to continuous speech in complex environments.
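As a rough illustration of the TRF approach used throughout this work, the sketch below estimates a temporal response function by ridge regression of a simulated response onto time-lagged copies of a stimulus envelope. All signals, the sampling rate, the lag range, and the regularization value are hypothetical; the actual analyses used source-localized MEG data and dedicated estimation tools.

```python
# Minimal sketch of temporal response function (TRF) estimation via ridge
# regression. The stimulus envelope and "neural" response are simulated here;
# a real analysis would use a measured MEG/EEG signal and a speech envelope.
import numpy as np

fs = 100                       # sampling rate (Hz), hypothetical
n = 60 * fs                    # one minute of simulated data
rng = np.random.default_rng(0)

envelope = rng.standard_normal(n)            # stand-in for a speech envelope
true_trf = np.exp(-np.arange(30) / 10.0)     # assumed "ground truth" kernel
response = np.convolve(envelope, true_trf)[:n] + rng.standard_normal(n)

# Design matrix of time-lagged copies of the stimulus (0-290 ms lags).
lags = np.arange(30)
X = np.column_stack([np.roll(envelope, lag) for lag in lags])
X[:lags.max(), :] = 0                        # discard wrapped-around samples

# Ridge solution: trf = (X'X + lambda*I)^-1 X'y; lambda would normally be
# chosen by cross-validation rather than fixed as here.
lam = 1e2
trf = np.linalg.solve(X.T @ X + lam * np.eye(len(lags)), X.T @ response)

print("Estimated TRF peak lag (samples):", lags[np.argmax(trf)])
```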
Bilaterally Reduced Rolandic Beta Band Activity in Minor Stroke Patients - Dataset
Stroke patients with hemiparesis display decreased beta band (13-25 Hz) rolandic activity, correlating with impaired motor function. However, clinically, patients without significant weakness and with small lesions far from sensorimotor cortex exhibit bilaterally decreased motor dexterity and slowed reaction times. We investigate whether these minor stroke patients also display abnormal beta band activity. Magnetoencephalographic (MEG) data were collected from nine minor stroke patients (NIHSS < 4) without significant hemiparesis, at ~1 and ~6 months post-infarct, and eight age-similar controls. Rolandic relative beta power during matching tasks and resting state, and beta Event-Related (De)Synchronization (ERD/ERS) during button press responses, were analyzed. Regardless of lesion location, patients had significantly reduced relative beta power and ERS compared to controls. Abnormalities persisted over visits and were present in both ipsi- and contra-lesional hemispheres, consistent with bilateral impairments in motor dexterity and speed. Minor stroke patients without severe weakness display reduced rolandic beta band activity in both hemispheres, which may be linked to bilaterally impaired dexterity and processing speed, implicating global connectivity dysfunction affecting sensorimotor cortex independent of lesion location. These findings not only illustrate global network disruption after minor stroke, but also suggest that rolandic beta band activity may be a potential biomarker and treatment target, even for minor stroke patients with small lesions far from sensorimotor areas. This work was supported by an Innovative Research Grant through the American Heart Association (18IPA34170313) and National Science Foundation Grant SMA-1734892.
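For readers unfamiliar with the relative beta power measure used above, the following sketch computes it for a single simulated sensor time series via Welch's method. The sampling rate, reference band, and data are assumptions, not the study's actual pipeline, which used source-localized rolandic activity.

```python
# Illustrative sketch of relative beta-band (13-25 Hz) power for one channel.
import numpy as np
from scipy.signal import welch

fs = 600.0                                    # MEG sampling rate (Hz), assumed
rng = np.random.default_rng(1)
signal = rng.standard_normal(int(120 * fs))   # stand-in for 2 minutes of data

freqs, psd = welch(signal, fs=fs, nperseg=int(2 * fs))
df = freqs[1] - freqs[0]

def band_power(fmin, fmax):
    # Approximate band power as the summed PSD times the frequency resolution.
    mask = (freqs >= fmin) & (freqs <= fmax)
    return psd[mask].sum() * df

beta = band_power(13, 25)
broadband = band_power(1, 100)                # reference band, an assumption
print("Relative beta power:", beta / broadband)
```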
Time-locked auditory cortical responses in the high-gamma band: A window into primary auditory cortex
Primary auditory cortex is a critical stage in the human auditory pathway, a gateway between subcortical and higher-level cortical areas. Receiving the output of all subcortical processing, it sends its output on to higher-level cortex. Non-invasive physiological recordings of primary auditory cortex using electroencephalography (EEG) and magnetoencephalography (MEG), however, may not have sufficient specificity to separate responses generated in primary auditory cortex from those generated in underlying subcortical areas or neighboring cortical areas. This limitation is important for investigations of effects of top-down processing (e.g., selective-attention-based) on primary auditory cortex: higher-level areas are known to be strongly influenced by top-down processes, but subcortical areas are often assumed to perform strictly bottom-up processing. Fortunately, recent advances have made it easier to isolate the neural activity of primary auditory cortex from other areas. In this perspective, we focus on time-locked responses to stimulus features in the high gamma band (70-150 Hz) and with early cortical latency (~40 ms), intermediate between subcortical and higher-level areas. We review recent findings from physiological studies employing either repeated simple sounds or continuous speech, obtaining either a frequency following response (FFR) or temporal response function (TRF). The potential roles of top-down processing are underscored, and comparisons with invasive intracranial EEG (iEEG) and animal model recordings are made. We argue that MEG studies employing continuous speech stimuli may offer particular benefits, in that only a few minutes of speech generates robust high gamma responses from bilateral primary auditory cortex, and without measurable interference from subcortical or higher-level areas. Funding: National Institute on Deafness and Other Communication Disorders [R01-DC019394]; National Institute on Aging [P01-AG055365]; National Science Foundation [SMA-1734892]; William Demant Foundation [20-0480].
Cortical Processing of Arithmetic and Simple Sentences in an Auditory Attention Task - Dataset
MEG dataset collected for a study on arithmetic and language processing. Full details of experiment design, stimuli and data preprocessing can be found at https://doi.org/10.1101/2021.01.31.429030. Additional information: Joshua P. Kulasingham - [email protected]
Cortical processing of arithmetic and of language rely on both shared and task-specific neural mechanisms, which should also be dissociable from the particular sensory modality used to probe them. Here, spoken arithmetical and non-mathematical statements were employed to investigate neural processing of arithmetic, compared to general language processing, in an attention-modulated cocktail party paradigm. Magnetoencephalography (MEG) data were recorded from 22 human subjects listening to audio mixtures of spoken sentences and arithmetic equations while selectively attending to one of the two speech streams. Short sentences and simple equations were presented diotically at fixed and distinct word/symbol and sentence/equation rates. Critically, this allowed neural responses to acoustics, words, and symbols to be dissociated from responses to sentences and equations. Indeed, simultaneous neural processing of the acoustics of words and symbols was observed in auditory cortex for both streams. Neural responses to sentences and equations, however, were predominantly to the attended stream, originating primarily from left temporal and parietal areas, respectively. Additionally, these neural responses were correlated with behavioral performance in a deviant detection task. Source-localized Temporal Response Functions revealed distinct cortical dynamics of responses to sentences in left temporal areas and equations in bilateral temporal, parietal, and motor areas. Finally, the target of attention could be decoded from MEG responses, especially in left superior parietal areas. In short, the neural responses to arithmetic and language are especially well segregated during the cocktail party paradigm, and the correlation with behavior suggests that they may be linked to successful comprehension or calculation. This work was supported by DARPA (N660011824024), the National Science Foundation (SMA-1734892 and DGE-1449815), and the National Institutes of Health (R01-DC014085). The views, opinions and/or findings expressed are those of the authors and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.
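A minimal sketch of the frequency-tagging logic described above (fixed word/symbol and sentence/equation presentation rates producing spectral peaks at those rates in the neural response) is given below. The rates, sampling rate, and simulated response are purely illustrative and are not the values used in the study.

```python
# Sketch of frequency tagging: tracking of a fixed word rate and a fixed
# sentence rate appears as peaks at those frequencies in the response spectrum.
import numpy as np

fs = 200.0                              # sampling rate (Hz), assumed
dur = 100.0                             # seconds of simulated response
t = np.arange(0, dur, 1 / fs)
word_rate, sentence_rate = 4.0, 1.0     # hypothetical stimulus rates

rng = np.random.default_rng(2)
response = (np.sin(2 * np.pi * word_rate * t)
            + 0.5 * np.sin(2 * np.pi * sentence_rate * t)
            + rng.standard_normal(t.size))

spectrum = np.abs(np.fft.rfft(response)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

for rate, label in [(word_rate, "word"), (sentence_rate, "sentence")]:
    idx = np.argmin(np.abs(freqs - rate))
    print(f"Amplitude at the {label} rate ({rate} Hz): {spectrum[idx]:.3f}")
```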
Extending Subcortical EEG Responses to Continuous Speech to the Sound-Field
The auditory brainstem response (ABR), conventionally detected by averaging neural responses to thousands of short stimuli, is a valuable clinical tool for objective hearing assessment. Progressing beyond these unnatural stimuli, brainstem responses to continuous speech presented via earphones have recently been detected using linear temporal response functions (TRFs). Here, we extend earlier studies by measuring subcortical responses to continuous speech presented in the sound-field, and assess the amount of data needed to estimate brainstem TRFs. Electroencephalography (EEG) was recorded from 24 normal-hearing participants while they listened to clicks and stories presented via earphones and loudspeakers. Subcortical TRFs were computed after accounting for non-linear processing in the auditory periphery by either stimulus rectification or an auditory nerve model. Our results demonstrated that subcortical responses to continuous speech could be reliably measured in the sound-field. TRFs estimated using auditory nerve models outperformed simple rectification, and 16 minutes of data was sufficient for the TRFs of all participants to show clear wave V peaks for both earphone and sound-field stimuli. Subcortical TRFs to continuous speech were highly consistent in both earphone and sound-field conditions, and with click ABRs. However, sound-field TRFs required slightly more data (16 minutes) to achieve clear wave V peaks compared to earphone TRFs (12 minutes), possibly due to effects of room acoustics. By investigating subcortical responses to sound-field speech stimuli, this study lays the groundwork for bringing objective hearing assessment closer to real-life conditions, which may lead to improved hearing evaluations and smart hearing technologies.
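The stimulus-rectification predictor mentioned above can be sketched as follows; the sampling rates and normalization are assumptions, and the auditory nerve model alternative would replace the rectification step with the output of a peripheral model.

```python
# Sketch of the simpler stimulus predictor: half-wave rectification of the
# speech waveform, resampled to the EEG sampling rate before TRF estimation.
import numpy as np
from scipy.signal import resample_poly

fs_audio, fs_eeg = 44100, 4096                # assumed sampling rates (Hz)
rng = np.random.default_rng(3)
speech = rng.standard_normal(fs_audio * 10)   # stand-in for 10 s of speech audio

rectified = np.maximum(speech, 0.0)                      # half-wave rectification
predictor = resample_poly(rectified, fs_eeg, fs_audio)   # downsample to EEG rate
predictor /= np.std(predictor)                           # normalize before fitting

print("Predictor samples at EEG rate:", predictor.shape[0])
```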
High Frequency Cortical Processing of Continuous Speech in Younger and Older Listeners - Dataset
MEG dataset collected for a study on age-related changes in hearing. Older and younger subjects with clinically normal hearing listened to 60-second narrations of an English audiobook by a male speaker. This dataset extends the dataset given in http://hdl.handle.net/1903/21184, with the addition of 8 subjects. The analysis of the original dataset was published in Presacco et al. 2016a (https://doi.org/10.1152/jn.00372.2016) and 2016b (https://doi.org/10.1152/jn.00373.2016). Neural processing along the ascending auditory pathway is often associated with a progressive reduction in characteristic processing rates. For instance, the well-known frequency-following response (FFR) of the auditory midbrain, as measured with electroencephalography (EEG), is dominated by frequencies from ~100 Hz to several hundred Hz, phase-locking to the stimulus waveform at those frequencies. In contrast, cortical responses, whether measured by EEG or magnetoencephalography (MEG), are typically characterized by frequencies of a few Hz to a few tens of Hz, time-locking to acoustic envelope features. In this study we investigated a crossover: cortically generated responses time-locked to continuous speech features at FFR-like rates. Using MEG, we analyzed high-frequency responses (70-300 Hz) to continuous speech using neural source-localized reverse correlation and its corresponding temporal response functions (TRFs). Continuous speech stimuli were presented to 40 subjects (17 younger, 23 older adults) with clinically normal hearing and their MEG responses were analyzed in the 70-300 Hz band. Consistent with the insensitivity of MEG to many subcortical structures, the spatiotemporal profile of these response components indicated a purely cortical origin with ~40 ms peak latency and a right hemisphere bias. TRF analysis was performed using two separate aspects of the speech stimuli: a) the 70-300 Hz band of the speech waveform itself, and b) the 70-300 Hz temporal modulations in the high frequency envelope (300-4000 Hz) of the speech stimulus. The response was dominantly driven by the high frequency envelope, with a much weaker contribution from the waveform (carrier) itself. Age-related differences were also analyzed to investigate a reversal previously seen along the ascending auditory pathway, whereby older listeners show weaker midbrain FFR responses than younger listeners, but, paradoxically, have stronger cortical low frequency responses. In contrast to both these earlier results, this study does not find clear age-related differences in high frequency cortical responses. Finally, these results suggest that EEG high (FFR-like) frequency responses have distinct and separable contributions from both subcortical and cortical sources. Cortical responses at FFR-like frequencies share some properties with midbrain responses at the same frequencies and with cortical responses at much lower frequencies. This work was supported by the National Institute on Deafness and Other Communication Disorders (R01 DC-014085); the National Institute on Aging (P01 AG-055365); and the National Science Foundation (SMA-1734892).
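As a hedged illustration of the two stimulus representations described above, the sketch below derives (a) the 70-300 Hz band of a waveform and (b) the 70-300 Hz modulations of its high-frequency (300-4000 Hz) envelope, here extracted with a Hilbert transform. The filter design, sampling rate, and signal are assumptions rather than the study's exact processing.

```python
# Sketch of the two predictors: (a) waveform band and (b) envelope modulations.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 11025                                      # audio sampling rate (Hz), assumed
rng = np.random.default_rng(4)
speech = rng.standard_normal(fs * 5)            # stand-in for 5 s of speech audio

def bandpass(x, lo, hi, fs, order=4):
    # Zero-phase Butterworth band-pass filter.
    sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

carrier_predictor = bandpass(speech, 70, 300, fs)       # (a) 70-300 Hz waveform band
high_band = bandpass(speech, 300, 4000, fs)             # high-frequency band
envelope = np.abs(hilbert(high_band))                   # its envelope
envelope_predictor = bandpass(envelope, 70, 300, fs)    # (b) 70-300 Hz envelope modulations

print(carrier_predictor.shape, envelope_predictor.shape)
```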
Predictors for estimating subcortical EEG responses to continuous speech.
Perception of sounds and speech involves structures in the auditory brainstem that rapidly process ongoing auditory stimuli. The role of these structures in speech processing can be investigated by measuring their electrical activity using scalp-mounted electrodes. However, typical analysis methods involve averaging neural responses to many short repetitive stimuli that bear little relevance to daily listening environments. Recently, subcortical responses to more ecologically relevant continuous speech were detected using linear encoding models. These methods estimate the temporal response function (TRF), which is a regression model that minimises the error between the measured neural signal and a predictor derived from the stimulus. Using predictors that model the highly non-linear peripheral auditory system may improve linear TRF estimation accuracy and peak detection. Here, we compare predictors from both simple and complex peripheral auditory models for estimating brainstem TRFs on electroencephalography (EEG) data from 24 participants listening to continuous speech. We also investigate the data length required for estimating subcortical TRFs, and find that around 12 minutes of data is sufficient for clear wave V peaks (>3 dB SNR) to be seen in nearly all participants. Interestingly, predictors derived from simple filterbank-based models of the peripheral auditory system yield TRF wave V peak SNRs that are not significantly different from those estimated using a complex model of the auditory nerve, provided that the nonlinear effects of adaptation in the auditory system are appropriately modelled. Crucially, computing predictors from these simpler models is more than 50 times faster than with the complex model. This work paves the way for efficient modelling and detection of subcortical processing of continuous speech, which may lead to improved diagnostic metrics for hearing impairment and assistive hearing technology.
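A toy version of a simple filterbank-based predictor of the kind compared above might look like the following, with power-law compression standing in, very loosely, for the adaptation mentioned in the abstract; the channel spacing, compression exponent, and all other parameters are illustrative assumptions, not the models actually evaluated.

```python
# Toy sketch of a filterbank-based predictor: gammatone filterbank,
# half-wave rectification, and power-law compression, averaged across channels.
import numpy as np
from scipy.signal import gammatone, lfilter

fs = 11025                                   # audio sampling rate (Hz), assumed
rng = np.random.default_rng(5)
speech = rng.standard_normal(fs * 5)         # stand-in for speech audio

center_freqs = np.geomspace(125, 4000, 16)   # assumed channel center frequencies
channels = []
for cf in center_freqs:
    b, a = gammatone(cf, "iir", fs=fs)       # IIR gammatone filter for this channel
    out = lfilter(b, a, speech)
    out = np.maximum(out, 0.0) ** 0.6        # rectification + compressive nonlinearity
    channels.append(out)

predictor = np.mean(channels, axis=0)        # summary predictor across channels
print("Predictor length:", predictor.shape[0])
```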
Post-Stroke Acute Dysexecutive Syndrome, a Disorder Resulting from Minor Stroke due to Disruption of Network Dynamics - Dataset
Data for: Marsh EB, Brodbeck C, Llinas RH, Mallick D, Kulasingham JP, Simon JZ, Llinas RR. Post-Stroke Acute Dysexecutive Syndrome, a Disorder Resulting from Minor Stroke due to Disruption of Network Dynamics. PNAS, in press.
This dataset contains MEG responses for each subject, with artifacts removed, averaged by task. Files can be read using MNE: https://mne.tools
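Assuming the files are standard FIF evoked (averaged) files, a minimal MNE sketch for loading them might look like this; the file name below is a placeholder, not the dataset's actual naming scheme, and mne.io.read_raw_fif would be used instead if the files are continuous recordings.

```python
# Hypothetical sketch of loading averaged (evoked) MEG responses with MNE.
import mne

# Placeholder path; substitute a file from the dataset.
evokeds = mne.read_evokeds("subject01_task-ave.fif")

for evoked in evokeds:
    # Condition label, number of averaged epochs, and time span of each average.
    print(evoked.comment, evoked.nave, evoked.times[[0, -1]])

# Quick visual check of the first averaged response.
evokeds[0].plot()
```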
Additional information: EB Marsh - [email protected]
Patients with small CNS infarcts often demonstrate an acute dysexecutive syndrome characterized by difficulty with attention, concentration, and processing speed, independent of lesion size or location. We use magnetoencephalography (MEG) to show that disruption of network dynamics may be responsible. Nine patients with recent minor stroke and 8 age-similar controls underwent cognitive screening using the Montreal Cognitive Assessment (MoCA) and MEG to evaluate differences in cerebral activation patterns. During MEG, subjects participated in a visual picture-word matching task. Task complexity was increased as testing progressed. Cluster-based permutation tests determined differences in activation patterns within the visual cortex, fusiform gyrus, and lateral temporal lobe. At visit 1, MoCA scores were significantly lower for patients than controls (median (IQR) = 26.0 (4) versus 29.5 (3), p = 0.005), and patient reaction times were increased. The amplitude of activation was significantly lower after infarct and demonstrated a pattern of temporal dispersion independent of stroke location. Differences were prominent in the fusiform gyrus and lateral temporal lobe. The pattern suggests that distributed network dysfunction may be responsible. Additionally, controls were able to modulate their cerebral activity based on task difficulty. In contrast, stroke patients exhibited the same low-amplitude response to all stimuli. Group differences remained, to a lesser degree, six months later, although MoCA scores and reaction times improved for patients. This study suggests that function is a globally distributed property beyond area-specific functionality, and illustrates the need for longer-term follow-up studies to determine whether abnormal activation patterns ultimately resolve or another mechanism underlies continued recovery. This study was funded in part through an American Heart Association Innovative Project Award (AHA 18IPA34170313), and through the generous support of the Iorizzo family.
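For illustration, a cluster-based permutation test of the general kind mentioned above can be run with MNE as sketched below. The group sizes match the study, but the data here are simulated one-dimensional time courses, whereas the actual analysis compared source-localized spatiotemporal activation in specific regions.

```python
# Minimal sketch of a cluster-based permutation test between two groups of
# response time courses, using MNE's implementation on simulated data.
import numpy as np
from mne.stats import permutation_cluster_test

rng = np.random.default_rng(6)
n_times = 200
patients = rng.standard_normal((9, n_times))         # 9 simulated patient time courses
controls = rng.standard_normal((8, n_times)) + 0.3   # 8 simulated controls with an offset

# F-test based clustering across time, with permutations of group labels.
f_obs, clusters, cluster_pv, _ = permutation_cluster_test(
    [patients, controls], n_permutations=1000, seed=0)

print("Number of clusters found:", len(clusters))
print("Cluster p-values:", cluster_pv)
```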