List Equivalency of PRESTO for the Evaluation of Speech Recognition
BACKGROUND:
There is a pressing clinical need for the development of ecologically valid and robust assessment measures of speech recognition. Perceptually Robust English Sentence Test Open-set (PRESTO) is a new high-variability sentence recognition test that is sensitive to individual differences and was designed for use with several different clinical populations. PRESTO differs from other sentence recognition tests because the target sentences differ in talker, gender, and regional dialect. Increasing interest in using PRESTO as a clinical test of spoken word recognition dictates the need to establish equivalence across test lists.
PURPOSE:
The purpose of this study was to establish list equivalency of PRESTO for clinical use.
RESEARCH DESIGN:
PRESTO sentence lists were presented to three groups of normal-hearing listeners in noise (multitalker babble [MTB] at 0 dB signal-to-noise ratio) or under eight-channel cochlear implant simulation (CI-Sim).
STUDY SAMPLE:
Ninety-one young native speakers of English who were undergraduate students from the Indiana University community participated in this study.
DATA COLLECTION AND ANALYSIS:
Participants completed a sentence recognition task using different PRESTO sentence lists. They listened to sentences presented over headphones and typed in the words they heard on a computer. Keyword scoring was completed offline. Equivalency for sentence lists was determined based on the list intelligibility (mean keyword accuracy for each list compared with all other lists) and listener consistency (the relation between mean keyword accuracy on each list for each listener).
RESULTS:
Based on measures of list equivalency and listener consistency, ten PRESTO lists were found to be equivalent in the MTB condition, nine lists were equivalent in the CI-Sim condition, and six PRESTO lists were equivalent in both conditions.
CONCLUSIONS:
PRESTO is a valuable addition to the clinical toolbox for assessing sentence recognition across different populations. Because the test condition influenced the overall intelligibility of lists, researchers and clinicians should take the presentation conditions into consideration when selecting the best PRESTO lists for their research or clinical protocols.
Some Neurocognitive Correlates of Noise-Vocoded Speech Perception in Children With Normal Hearing: A Replication and Extension
OBJECTIVES:
Noise-vocoded speech is a valuable research tool for testing experimental hypotheses about the effects of spectral degradation on speech recognition in adults with normal hearing (NH). However, very little research has utilized noise-vocoded speech with children with NH. Earlier studies with children with NH focused primarily on the amount of spectral information needed for speech recognition without assessing the contribution of neurocognitive processes to speech perception and spoken word recognition. In this study, we first replicated the seminal findings of an earlier study that investigated effects of lexical density and word frequency on noise-vocoded speech perception in a small group of children with NH. We then extended the research to investigate relations between noise-vocoded speech recognition abilities and five neurocognitive measures: auditory attention (AA) and response set, talker discrimination, and verbal and nonverbal short-term working memory.
DESIGN:
Thirty-one children with NH between 5 and 13 years of age were assessed on their ability to perceive lexically controlled words in isolation and in sentences that were noise-vocoded to four spectral channels. Children were also administered vocabulary assessments (Peabody Picture Vocabulary test-4th Edition and Expressive Vocabulary test-2nd Edition) and measures of AA (NEPSY AA and response set and a talker discrimination task) and short-term memory (visual digit and symbol spans).
RESULTS:
Consistent with the findings reported in the original study, we found that children perceived noise-vocoded lexically easy words better than lexically hard words. Words in sentences were also recognized better than the same words presented in isolation. No significant correlations were observed between noise-vocoded speech recognition scores and the Peabody Picture Vocabulary test-4th Edition using language quotients to control for age effects. However, children who scored higher on the Expressive Vocabulary test-2nd Edition recognized lexically easy words better than lexically hard words in sentences. Older children perceived noise-vocoded speech better than younger children. Finally, we found that measures of AA and short-term memory capacity were significantly correlated with a child's ability to perceive noise-vocoded isolated words and sentences.
CONCLUSIONS:
First, we successfully replicated the major findings from the original study. Because familiarity, phonological distinctiveness, and lexical competition affect word recognition, these findings provide additional support for the proposal that several foundational elementary neurocognitive processes underlie the perception of spectrally degraded speech. Second, we found strong and significant correlations between performance on neurocognitive measures and children's ability to recognize words and sentences noise-vocoded to four spectral channels. These findings extend earlier research suggesting that perception of spectrally degraded speech reflects early peripheral auditory processes, as well as additional contributions of executive function, specifically, selective attention and short-term memory processes in spoken word recognition. The present findings suggest that AA and short-term memory support robust spoken word recognition in children with NH even under compromised and challenging listening conditions. These results are relevant to research carried out with listeners who have hearing loss, because they are routinely required to encode, process, and understand spectrally degraded acoustic signals.
Phonological discrimination and contrast detection in pupillometry
INTRODUCTION:
The perception of phonemes is guided by both low-level acoustic cues and high-level linguistic context. However, differentiating between these two types of processing can be challenging. In this study, we explore the utility of pupillometry as a tool to investigate both low- and high-level processing of phonological stimuli, with a particular focus on its ability to capture novelty detection and cognitive processing during speech perception.
METHODS:
Pupillometric traces were recorded from a sample of 22 Danish-speaking adults, with self-reported normal hearing, while performing two phonological-contrast perception tasks: a nonword discrimination task, which included minimal-pair combinations specific to the Danish language, and a nonword detection task involving the detection of phonologically modified words within sentences. The study explored the perception of contrasts in both unprocessed speech and degraded speech input, processed with a vocoder.
RESULTS:
No difference in peak pupil dilation was observed when the contrast occurred between two isolated nonwords in the nonword discrimination task. For unprocessed speech, higher peak pupil dilations were measured when phonologically modified words were detected within a sentence compared to sentences without the nonwords. For vocoded speech, higher peak pupil dilation was observed for sentence stimuli, but not for the isolated nonwords, although performance decreased similarly for both tasks.
CONCLUSION:
Our findings demonstrate the complexity of pupil dynamics in the presence of acoustic and phonological manipulation. Pupil responses seemed to reflect higher-level cognitive and lexical processing related to phonological perception rather than low-level perception of acoustic cues. However, the incorporation of multiple talkers in the stimuli, coupled with the relatively low task complexity, may have affected the pupil dilation.
The assessment, serial evaluation, and subsequent sequelae of acute kidney injury (ASSESS-AKI) study: design and methods
BACKGROUND:
The incidence of acute kidney injury (AKI) has been increasing over time and is associated with a high risk of short-term death. Previous studies on hospital-acquired AKI have important methodological limitations, especially their retrospective study designs and limited ability to control for potential confounding factors.
METHODS:
The Assessment, Serial Evaluation, and Subsequent Sequelae of Acute Kidney Injury (ASSESS-AKI) Study was established to examine how a hospitalized episode of AKI independently affects the risk of chronic kidney disease development and progression, cardiovascular events, death, and other important patient-centered outcomes. This prospective study will enroll a cohort of 1100 adult participants with a broad range of AKI and matched hospitalized participants without AKI at three Clinical Research Centers, as well as 100 children undergoing cardiac surgery at three Clinical Research Centers. Participants will be followed for up to four years and will undergo serial evaluation during the index hospitalization, at three months post-hospitalization, and at annual clinic visits, with telephone interviews occurring during the intervening six-month intervals. Biospecimens will be collected at each visit, along with information on lifestyle behaviors, quality of life and functional status, cognitive function, receipt of therapies, interim renal and cardiovascular events, electrocardiography, and urinalysis.
CONCLUSIONS:
ASSESS-AKI will characterize the short-term and long-term natural history of AKI, evaluate the incremental utility of novel blood and urine biomarkers to refine the diagnosis and prognosis of AKI, and identify a subset of high-risk patients who could be targeted for future clinical trials to improve outcomes after AKI.
Effect of Hydrocortisone on Mortality and Organ Support in Patients With Severe COVID-19: The REMAP-CAP COVID-19 Corticosteroid Domain Randomized Clinical Trial.
Importance: Evidence regarding corticosteroid use for severe coronavirus disease 2019 (COVID-19) is limited. Objective: To determine whether hydrocortisone improves outcome for patients with severe COVID-19. Design, Setting, and Participants: An ongoing adaptive platform trial testing multiple interventions within multiple therapeutic domains, for example, antiviral agents, corticosteroids, or immunoglobulin. Between March 9 and June 17, 2020, 614 adult patients with suspected or confirmed COVID-19 were enrolled and randomized within at least 1 domain following admission to an intensive care unit (ICU) for respiratory or cardiovascular organ support at 121 sites in 8 countries. Of these, 403 were randomized to open-label interventions within the corticosteroid domain. The domain was halted after results from another trial were released. Follow-up ended August 12, 2020. Interventions: The corticosteroid domain randomized participants to a fixed 7-day course of intravenous hydrocortisone (50 mg or 100 mg every 6 hours) (n = 143), a shock-dependent course (50 mg every 6 hours when shock was clinically evident) (n = 152), or no hydrocortisone (n = 108). Main Outcomes and Measures: The primary end point was organ support-free days (days alive and free of ICU-based respiratory or cardiovascular support) within 21 days, where patients who died were assigned -1 day. The primary analysis was a bayesian cumulative logistic model that included all patients enrolled with severe COVID-19, adjusting for age, sex, site, region, time, assignment to interventions within other domains, and domain and intervention eligibility. Superiority was defined as the posterior probability of an odds ratio greater than 1 (threshold for trial conclusion of superiority >99%).
Results: After excluding 19 participants who withdrew consent, there were 384 patients (mean age, 60 years; 29% female) randomized to the fixed-dose (n = 137), shock-dependent (n = 146), and no (n = 101) hydrocortisone groups; 379 (99%) completed the study and were included in the analysis. The mean age for the 3 groups ranged between 59.5 and 60.4 years; most patients were male (range, 70.6%-71.5%); mean body mass index ranged between 29.7 and 30.9; and patients receiving mechanical ventilation ranged between 50.0% and 63.5%. For the fixed-dose, shock-dependent, and no hydrocortisone groups, respectively, the median organ support-free days were 0 (IQR, -1 to 15), 0 (IQR, -1 to 13), and 0 (IQR, -1 to 11) days (composed of 30%, 26%, and 33% mortality rates and 11.5, 9.5, and 6 median organ support-free days among survivors). The median adjusted odds ratio and bayesian probability of superiority were 1.43 (95% credible interval, 0.91-2.27) and 93% for fixed-dose hydrocortisone, respectively, and were 1.22 (95% credible interval, 0.76-1.94) and 80% for shock-dependent hydrocortisone compared with no hydrocortisone. Serious adverse events were reported in 4 (3%), 5 (3%), and 1 (1%) patients in the fixed-dose, shock-dependent, and no hydrocortisone groups, respectively. Conclusions and Relevance: Among patients with severe COVID-19, treatment with a 7-day fixed-dose course of hydrocortisone or shock-dependent dosing of hydrocortisone, compared with no hydrocortisone, resulted in 93% and 80% probabilities of superiority with regard to the odds of improvement in organ support-free days within 21 days. However, the trial was stopped early and no treatment strategy met prespecified criteria for statistical superiority, precluding definitive conclusions. Trial Registration: ClinicalTrials.gov Identifier: NCT02735707
Finishing the euchromatic sequence of the human genome
The sequence of the human genome encodes the genetic instructions for human physiology, as well as rich information about human evolution. In 2001, the International Human Genome Sequencing Consortium reported a draft sequence of the euchromatic portion of the human genome. Since then, the international collaboration has worked to convert this draft into a genome sequence with high accuracy and nearly complete coverage. Here, we report the result of this finishing process. The current genome sequence (Build 35) contains 2.85 billion nucleotides interrupted by only 341 gaps. It covers ~99% of the euchromatic genome and is accurate to an error rate of ~1 event per 100,000 bases. Many of the remaining euchromatic gaps are associated with segmental duplications and will require focused work with new methods. The near-complete sequence, the first for a vertebrate, greatly improves the precision of biological analyses of the human genome including studies of gene number, birth and death. Notably, the human genome seems to encode only 20,000-25,000 protein-coding genes. The genome sequence reported here should serve as a firm foundation for biomedical research in the decades ahead.
Genomic epidemiology of SARS-CoV-2 in a UK university identifies dynamics of transmission
Understanding SARS-CoV-2 transmission in higher education settings is important to limit spread between students, and into at-risk populations. In this study, we sequenced 482 SARS-CoV-2 isolates from the University of Cambridge from 5 October to 6 December 2020. We perform a detailed phylogenetic comparison with 972 isolates from the surrounding community, complemented with epidemiological and contact tracing data, to determine transmission dynamics. We observe limited viral introductions into the university; the majority of student cases were linked to a single genetic cluster, likely following social gatherings at a venue outside the university. We identify considerable onward transmission associated with student accommodation and courses; this was effectively contained using local infection control measures and following a national lockdown. Transmission clusters were largely segregated within the university or the community. Our study highlights key determinants of SARS-CoV-2 transmission and effective interventions in a higher education setting that will inform public health policy during pandemics.
Effect of angiotensin-converting enzyme inhibitor and angiotensin receptor blocker initiation on organ support-free days in patients hospitalized with COVID-19
IMPORTANCE Overactivation of the renin-angiotensin system (RAS) may contribute to poor clinical outcomes in patients with COVID-19.
OBJECTIVE To determine whether angiotensin-converting enzyme (ACE) inhibitor or angiotensin receptor blocker (ARB) initiation improves outcomes in patients hospitalized for COVID-19.
DESIGN, SETTING, AND PARTICIPANTS In an ongoing, adaptive platform randomized clinical trial, 721 critically ill and 58 non-critically ill hospitalized adults were randomized to receive an RAS inhibitor or control between March 16, 2021, and February 25, 2022, at 69 sites in 7 countries (final follow-up on June 1, 2022).
INTERVENTIONS Patients were randomized to receive open-label initiation of an ACE inhibitor (n = 257), ARB (n = 248), ARB in combination with DMX-200 (a chemokine receptor-2 inhibitor; n = 10), or no RAS inhibitor (control; n = 264) for up to 10 days.
MAIN OUTCOMES AND MEASURES The primary outcome was organ support-free days, a composite of hospital survival and days alive without cardiovascular or respiratory organ support through 21 days. The primary analysis was a bayesian cumulative logistic model. Odds ratios (ORs) greater than 1 represent improved outcomes.
RESULTS On February 25, 2022, enrollment was discontinued due to safety concerns. Among 679 critically ill patients with available primary outcome data, the median age was 56 years and 239 participants (35.2%) were women. Median (IQR) organ support-free days among critically ill patients were 10 (-1 to 16) in the ACE inhibitor group (n = 231), 8 (-1 to 17) in the ARB group (n = 217), and 12 (0 to 17) in the control group (n = 231) (median adjusted odds ratios of 0.77 [95% bayesian credible interval, 0.58-1.06] for improvement for ACE inhibitor and 0.76 [95% credible interval, 0.56-1.05] for ARB compared with control). The posterior probabilities that ACE inhibitors and ARBs worsened organ support-free days compared with control were 94.9% and 95.4%, respectively. Hospital survival occurred in 166 of 231 critically ill participants (71.9%) in the ACE inhibitor group, 152 of 217 (70.0%) in the ARB group, and 182 of 231 (78.8%) in the control group (posterior probabilities that ACE inhibitor and ARB worsened hospital survival compared with control were 95.3% and 98.1%, respectively).
CONCLUSIONS AND RELEVANCE In this trial, among critically ill adults with COVID-19, initiation of an ACE inhibitor or ARB did not improve, and likely worsened, clinical outcomes.
TRIAL REGISTRATION ClinicalTrials.gov Identifier: NCT0273570
Development of otolith receptors in Japanese quail
This study examined the morphological development of the otolith vestibular receptors in quail. Here, we describe epithelial growth, hair cell density, stereocilia polarization, and afferent nerve innervation during development. The otolith maculae epithelial areas increased exponentially throughout embryonic development, reaching asymptotic values near posthatch day P7. Increases in hair cell density were dependent upon macular location; striolar hair cells developed first, followed by hair cells in extrastriola regions. Stereocilia polarization was initiated early, with defining reversal zones forming at E8. Less than half of all immature hair cells observed had nonpolarized internal kinocilia, with the remainder exhibiting planar polarity. Immunohistochemistry and neural tracing techniques were employed to examine the shape and location of the striolar regions. Initial innervation of the maculae was by small fibers with terminal growth cones at E6, followed by collateral branches with apparent bouton terminals at E8. Calyceal terminal formation began at E10; however, no mature calyces were observed until E12, when all fibers appeared to be dimorphs. Calyx afferents innervating only Type I hair cells did not develop until E14. Finally, the topographic organization of afferent macular innervation in the adult quail utricle was quantified. Calyx and dimorph afferents were primarily confined to the striolar regions, while bouton fibers were located in the extrastriola and Type II band. Calyx fibers were the least complex, followed by dimorph units. Bouton fibers had large innervation fields, with arborous branches and many terminal boutons.