
    Recent smell loss is the best predictor of COVID-19 among individuals with recent respiratory symptoms

    In a preregistered, cross-sectional study we investigated whether olfactory loss is a reliable predictor of COVID-19 using a crowdsourced questionnaire in 23 languages to assess symptoms in individuals self-reporting recent respiratory illness. We quantified changes in chemosensory abilities during the course of the respiratory illness using 0-100 visual analog scales (VAS) for participants reporting a positive (C19+; n=4148) or negative (C19-; n=546) COVID-19 laboratory test outcome. Logistic regression models identified univariate and multivariate predictors of COVID-19 status and post-COVID-19 olfactory recovery. Both C19+ and C19- groups exhibited smell loss, but it was significantly larger in C19+ participants (mean±SD, C19+: -82.5±27.2 points; C19-: -59.8±37.7). Smell loss during illness was the best predictor of COVID-19 in both univariate and multivariate models (ROC AUC=0.72); additional variables provided negligible model improvement. VAS ratings of smell loss were more predictive than binary chemosensory yes/no questions or other cardinal symptoms (e.g., fever). Olfactory recovery within 40 days of respiratory symptom onset was reported for ~50% of participants and was best predicted by time since respiratory symptom onset. We find that quantified smell loss is the best predictor of COVID-19 amongst those with symptoms of respiratory illness. To aid clinicians and contact tracers in identifying individuals with a high likelihood of having COVID-19, we propose a novel 0-10 scale to screen for recent olfactory loss, the ODoR-19. We find that numeric ratings ≤2 indicate high odds of symptomatic COVID-19 (4 < OR < 10). Once independently validated, this tool could be deployed when viral lab tests are impractical or unavailable.
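
    The modeling approach described above can be sketched in a few lines of Python: a univariate logistic regression of COVID-19 status on a 0-100 VAS smell-change score, summarized with ROC AUC. The data below are simulated from the reported group means and SDs, not the study's dataset, so the resulting AUC is only in the ballpark of the published 0.72.

```python
# Illustrative sketch with SIMULATED data (not the study's dataset):
# univariate logistic regression of COVID-19 status on VAS smell change.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Simulated smell-change scores (negative = loss), loosely matching the
# reported group statistics: C19+ -82.5±27.2, C19- -59.8±37.7, clipped
# to the attainable VAS change range [-100, 0].
n_pos, n_neg = 4148, 546
smell_change = np.clip(
    np.concatenate([rng.normal(-82.5, 27.2, n_pos),
                    rng.normal(-59.8, 37.7, n_neg)]),
    -100, 0,
)
status = np.concatenate([np.ones(n_pos), np.zeros(n_neg)])  # 1 = C19+

X = smell_change.reshape(-1, 1)
model = LogisticRegression(max_iter=1000).fit(X, status)
auc = roc_auc_score(status, model.predict_proba(X)[:, 1])
print(f"univariate ROC AUC = {auc:.2f}")
```

    The multivariate models reported in the abstract follow the same pattern, with further symptom ratings added as columns of the feature matrix.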

    More than smell - COVID-19 is associated with severe impairment of smell, taste, and chemesthesis

    Recent anecdotal and scientific reports have provided evidence of a link between COVID-19 and chemosensory impairments such as anosmia. However, these reports have downplayed or failed to distinguish potential effects on taste, ignored chemesthesis, generally lacked quantitative measurements, and were mostly restricted to data from single countries. Here, we report the development, implementation and initial results of a multi-lingual, international questionnaire to assess self-reported quantity and quality of perception in three distinct chemosensory modalities (smell, taste, and chemesthesis) before and during COVID-19. In the first 11 days after questionnaire launch, 4039 participants (2913 women, 1118 men, 8 other, ages 19-79) reported a COVID-19 diagnosis either via laboratory tests or clinical assessment. Importantly, smell, taste and chemesthetic function were each significantly reduced compared to their status before the disease. Difference scores (maximum possible change ±100) revealed a mean reduction of smell (-79.7±28.7, mean±SD), taste (-69.0±32.6), and chemesthetic (-37.3±36.2) function during COVID-19. Qualitative changes in olfactory ability (parosmia and phantosmia) were relatively rare and correlated with smell loss. Importantly, perceived nasal obstruction did not account for smell loss. Furthermore, chemosensory impairments were similar between participants in the laboratory test and clinical assessment groups. These results show that COVID-19-associated chemosensory impairment is not limited to smell, but also affects taste and chemesthesis. The multimodal impact of COVID-19 and lack of perceived nasal obstruction suggest that SARS-CoV-2 infection may disrupt sensory-neural mechanisms. Additional co-authors: Veronica Pereda-Loth, Shannon B Olsson, Richard C Gerkin, Paloma Rohlfs Domínguez, Javier Albayay, Michael C.
Farruggia, Surabhi Bhutani, Alexander W Fjaeldstad, Ritesh Kumar, Anna Menini, Moustafa Bensafi, Mari Sandell, Iordanis Konstantinidis, Antonella Di Pizio, Federica Genovese, Lina Öztürk, Thierry Thomas-Danguin, Johannes Frasnelli, Sanne Boesveldt, Özlem Saatci, Luis R. Saraiva, Cailu Lin, Jérôme Golebiowski, Liang-Dar Hwang, Mehmet Hakan Ozdener, Maria Dolors Guàrdia, Christophe Laudamiel, Marina Ritchie, Jan Havlíček, Denis Pierron, Eugeni Roura, Marta Navarro, Alissa A. Nolden, Juyun Lim, KL Whitcroft, Lauren R. Colquitt, Camille Ferdenzi, Evelyn V. Brindha, Aytug Altundag, Alberto Macchi, Alexia Nunez-Parra, Zara M. Patel, Sébastien Fiorucci, Carl M. Philpott, Barry C. Smith, Johan N Lundström, Carla Mucignat, Jane K. Parker, Mirjam van den Brink, Michael Schmuker, Florian Ph.S Fischmeister, Thomas Heinbockel, Vonnie D.C. Shields, Farhoud Faraji, Enrique Santamaría, William E.A. Fredborg, Gabriella Morini, Jonas K. Olofsson, Maryam Jalessi, Noam Karni, Anna D'Errico, Rafieh Alizadeh, Robert Pellegrino, Pablo Meyer, Caroline Huart, Ben Chen, Graciela M. Soler, Mohammed K. Alwashahi, Olagunju Abdulrahman, Antje Welge-Lüssen, Pamela Dalton, Jessica Freiherr, Carol H. Yan, Jasper H. B. de Groot, Vera V. Voznessenskaya, Hadar Klein, Jingguo Chen, Masako Okamoto, Elizabeth A. Sell, Preet Bano Singh, Julie Walsh-Messinger, Nicholas S. Archer, Sachiko Koyama, Vincent Deary, Hüseyin Yanik, Samet Albayrak, Lenka Martinec Nováková, Ilja Croijmans, Patricia Portillo Mazal, Shima T. Moein, Eitan Margulis, Coralie Mignot, Sajidxa Mariño, Dejan Georgiev, Pavan K. Kaushik, Bettina Malnic, Hong Wang, Shima Seyed-Allaei, Nur Yoluk, Sara Razzaghi, Jeb M. Justice, Diego Restrepo, Julien W Hsieh, Danielle R. Reed, Thomas Hummel, Steven D Munger, John E Hayes
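
    The difference scores described above (during-illness rating minus before-illness rating on 0-100 scales, giving a maximum possible change of ±100) can be sketched as follows; the ratings here are simulated placeholders, not GCCR questionnaire data.

```python
# Minimal sketch with SIMULATED ratings (not the GCCR dataset):
# difference scores on 0-100 VAS for one chemosensory modality.
import numpy as np

rng = np.random.default_rng(1)
n = 4039  # number of diagnosed participants reported in the abstract

# Hypothetical 0-100 VAS ratings before vs. during illness.
before = rng.uniform(70, 100, n)
during = rng.uniform(0, 40, n)

# Difference score: during minus before, bounded by ±100 by construction.
diff = during - before
print(f"change: {diff.mean():.1f} ± {diff.std(ddof=1):.1f} (mean ± SD)")
```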

    A word by any other intonation: fMRI evidence for implicit memory traces for pitch contours of spoken words in adult brains.

    OBJECTIVES: Intonation may serve as a cue for facilitated recognition and processing of spoken words, and it has been suggested that the pitch contour of spoken words is implicitly remembered. Thus, using the repetition suppression (RS) effect of BOLD-fMRI signals, we tested whether the same spoken words are differentially processed in language and auditory brain areas depending on whether or not they retain an arbitrary intonation pattern. EXPERIMENTAL DESIGN: Words were presented repeatedly in three blocks for passive and active listening tasks. There were three prosodic conditions, in each of which a different set of words was used and specific task-irrelevant intonation changes were applied: (i) all words were presented in a set flat, monotonous pitch contour; (ii) each word had an arbitrary pitch contour that was kept constant across the three repetitions; (iii) each word had a different arbitrary pitch contour in each of its repetitions. PRINCIPAL FINDINGS: The repeated presentation of words with a set pitch contour resulted in robust behavioral priming effects as well as in significant RS of the BOLD signals in primary auditory cortex (BA 41), temporal areas (BA 21/22) bilaterally, and in Broca's area. However, changing the intonation of the same words on each successive repetition resulted in reduced behavioral priming and the abolition of RS effects. CONCLUSIONS: Intonation patterns are retained in memory even when the intonation is task-irrelevant. Implicit memory traces for the pitch contour of spoken words were reflected in facilitated neuronal processing in auditory and language-associated areas. Thus, the results lend support to the notion that prosody, and specifically pitch contour, is strongly associated with the memory representation of spoken words.
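
    The RS analyses reported in the accompanying figures compare ROI contrast values across the three repetition blocks at a Bonferroni-corrected threshold (0.05/3 ≈ 0.016). A minimal sketch with simulated contrast values, assuming paired t-tests across subjects (the study's exact statistical procedure is not spelled out in the abstract):

```python
# Illustrative sketch with SIMULATED ROI contrast values: testing a
# repetition-suppression (RS) effect as a decrease across three blocks,
# at a Bonferroni-corrected alpha of 0.05/3 for three comparisons.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_subjects = 20

# Hypothetical per-subject contrast values; RS predicts block1 > block2 > block3.
block1 = rng.normal(1.0, 0.3, n_subjects)
block2 = block1 - rng.normal(0.3, 0.1, n_subjects)
block3 = block2 - rng.normal(0.2, 0.1, n_subjects)

alpha = 0.05 / 3  # Bonferroni correction, i.e. the *p<0.016 in the captions
for label, (a, b) in {"1 vs 2": (block1, block2),
                      "1 vs 3": (block1, block3),
                      "2 vs 3": (block2, block3)}.items():
    t, p = stats.ttest_rel(a, b)
    print(f"block {label}: t = {t:.2f}, p = {p:.4f}, "
          f"{'RS effect' if p < alpha else 'n.s.'}")
```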

    ROI analysis in STG-MTG in the two semantic categorization tasks, pooled together.

    <p>Contrast values (CV) and SEM in the three repeating blocks of each of the three prosodic modulations (M, P, and V). (<b>A</b>) Left STG-MTG; (<b>B</b>) Right STG-MTG. Repetition suppression (RS) effects were found in the M and P modulations (*p<0.016, Bonferroni test for multiple comparisons).</p>

    Regions of activation in both semantic categorization tasks in all three prosodic modulation conditions (center of clusters, extent threshold: k = 27 voxels).

    <p>Centers of clusters given in Talairach coordinates. MT/ST = middle temporal/superior temporal lobes (encompassing both banks of the superior temporal sulcus); SMA = supplementary motor area; SMG = supramarginal gyrus; IFG = inferior frontal gyrus. Some brain areas showed more than one center of activation. For all regions except left and right IFG: p<0.05, FWE correction. For left and right IFG: p<0.001.</p>

    ROI analysis, left and right STG-MTG in the three repeating mini-blocks of the non-semantic task.

    <p>Contrast values (CV) and SEM are shown. (A) During C modulation - constant pitch contour for all repetitions of the same word; (B) during V modulation - variable pitch contour between repetitions of the same word. Repetition suppression (RS) effect was found only in the C modulation (*p<0.016, Bonferroni test for multiple comparisons).</p>

    ROI analysis, left and right A1 in the three repeating mini-blocks of the non-semantic task.

    <p>Contrast values (CV) and SEM are shown. (A) During C modulation - single, constant pitch contour for all repetitions of the same word; (B) during V modulation - variable pitch contour between repetitions of the same word. Repetition suppression (RS) effect was found only in the C modulation (*p<0.016, Bonferroni test for multiple comparisons).</p>

    Regions of activation induced by the semantic categorization tasks in all three prosodic modulations.

    <p>(<b>A</b>) High-threshold map: p<0.05, FWE-corrected; extent threshold: k = 27 voxels. (<b>B</b>) Low-threshold map: p<0.001, uncorrected; extent threshold: k = 27 voxels.</p>