50 research outputs found

    Maximum-Entropy Adversarial Audio Augmentation for Keyword Spotting

    Full text link
    Data augmentation is a key tool for improving the performance of deep networks, particularly when labeled data is limited. In some fields, such as computer vision, augmentation methods have been studied extensively; for speech and audio data, however, relatively few methods have been developed. Using adversarial learning as a starting point, we develop a simple and effective augmentation strategy based on taking the gradient of the entropy of the outputs with respect to the inputs and then creating new data points by moving in the direction of the gradient to maximize the entropy. We validate its efficacy on several keyword spotting tasks as well as standard audio benchmarks. Our method is straightforward to implement, offering greater computational efficiency than more complex adversarial schemes such as GANs. Despite its simplicity, it proves robust and effective, especially when combined with the established SpecAugment technique, leading to enhanced performance. (Comment: 5 pages, 2 figures)
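
    The core augmentation step can be made concrete with a short sketch. The code below is an illustrative PyTorch rendering of the gradient-of-entropy idea described in the abstract; the function name, step size, and the use of a raw gradient step are assumptions for illustration, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def max_entropy_augment(model, x, step_size=0.01):
    """Sketch: create augmented inputs by ascending the gradient of output entropy.

    x is a batch of audio features (e.g., log-mel spectrograms). We compute the
    entropy of the model's predictive distribution, differentiate it with respect
    to the inputs, and move the inputs a small step in the direction that
    increases entropy.
    """
    x_aug = x.clone().detach().requires_grad_(True)
    log_probs = F.log_softmax(model(x_aug), dim=-1)
    probs = log_probs.exp()
    entropy = -(probs * log_probs).sum(dim=-1).mean()  # mean output entropy over the batch
    entropy.backward()
    with torch.no_grad():
        x_aug = x_aug + step_size * x_aug.grad         # step along the entropy gradient
    return x_aug.detach()
```

    The augmented batch would then be mixed with the original labeled examples during training (and, per the abstract, combined with SpecAugment).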

    Neurophysiological Vocal Source Modeling for Biomarkers of Disease

    Get PDF
    Speech is potentially a rich source of biomarkers for detecting and monitoring neuropsychological disorders. Current biomarkers typically comprise acoustic descriptors extracted from behavioral measures of source, filter, prosodic and linguistic cues. In contrast, in this paper, we extract vocal features based on a neurocomputational model of speech production, reflecting latent or internal motor control parameters that may be more sensitive to individual variation under neuropsychological disease. These features, which are constrained by neurophysiology, may be resilient to artifacts and provide an articulatory complement to acoustic features. Our features represent a mapping from a low-dimensional acoustics-based feature space to a high-dimensional space that captures the underlying neural process, including articulatory commands and auditory and somatosensory feedback errors. In particular, we demonstrate a neurophysiological vocal source model that generates biomarkers of disease by modeling vocal source control. By using the fundamental frequency contour and a biophysical representation of the vocal source, we infer two neuromuscular time series whose coordination provides vocal features that are applied to depression and Parkinson’s disease as examples. These vocal source coordination features alone, on a single held vowel, outperform or are comparable to other feature sets and reflect a significant compression of the feature space.
    United States. Air Force (Contract No. FA8721-05-C-0002); United States. Air Force (Contract No. FA8702-15-D-0001)
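
    The abstract does not spell out how the "coordination" of the two inferred neuromuscular time series is turned into features. One common construction in related vocal-biomarker work is the eigenvalue spectrum of a channel-delay correlation matrix; the sketch below assumes that construction, and the delay values and normalization are placeholders rather than the paper's exact feature definition.

```python
import numpy as np

def coordination_features(series_a, series_b, delays=(0, 7, 14, 21, 28)):
    """Illustrative coordination features for two neuromuscular time series.

    Stacks time-delayed copies of both signals, forms the correlation matrix
    across all channel-delay pairs, and returns its normalized eigenvalue
    spectrum; a more concentrated spectrum suggests tighter coupling between
    the two signals.
    """
    channels = [np.roll(sig, d) for sig in (series_a, series_b) for d in delays]
    corr = np.corrcoef(np.vstack(channels))            # channel-delay correlation matrix
    eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]  # descending eigenvalues
    return eigvals / eigvals.sum()
```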

    Physiologic Status Monitoring via the Gastrointestinal Tract

    Get PDF
    Reliable, real-time heart and respiratory rates are key vital signs used to evaluate physiological status in many clinical and non-clinical settings. Measuring these vital signs generally requires superficial attachment of physically or logistically obtrusive sensors to subjects, which may result in skin irritation or adversely influence subject performance. Given the broad acceptance of ingestible electronics, we developed an approach that enables vital sign monitoring internally from the gastrointestinal tract. Here we report initial proof-of-concept large animal (porcine) experiments and a robust processing algorithm that demonstrate the feasibility of this approach. Implementing vital sign monitoring as a stand-alone technology or in conjunction with other ingestible devices has the capacity to significantly aid telemedicine, optimize performance monitoring of athletes, military service members, and first responders, as well as provide a facile method for rapid clinical evaluation and triage.
    United States. Dept. of the Air Force (Air Force Contract FA8721-05-C-0002); United States. Dept. of Defense. Assistant Secretary of Defense for Research & Engineering; National Institutes of Health (U.S.) (Grant EB000244); National Institutes of Health (U.S.) (Grant T32DK7191-38-S1)
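
    The abstract does not describe the processing algorithm itself. As a rough illustration of how heart and respiratory rates could be pulled from a single raw sensor trace, the sketch below band-pass filters the signal around each physiological band and takes the dominant spectral peak; the band edges and filter order are illustrative assumptions, not the published algorithm's settings.

```python
import numpy as np
from scipy import signal

def rate_from_band(x, fs, lo_hz, hi_hz):
    """Estimate a periodic rate (cycles per minute) from one frequency band.

    Band-pass filters the raw sensor signal and takes the dominant spectral
    peak inside the band as the rate estimate.
    """
    b, a = signal.butter(4, [lo_hz, hi_hz], btype="bandpass", fs=fs)
    xf = signal.filtfilt(b, a, x)
    f, pxx = signal.periodogram(xf, fs=fs)
    band = (f >= lo_hz) & (f <= hi_hz)
    return 60.0 * f[band][np.argmax(pxx[band])]

# Hypothetical usage on a raw ingestible-sensor trace `x` sampled at `fs` Hz:
# heart_rate_bpm = rate_from_band(x, fs, 0.8, 3.0)   # ~48-180 beats/min
# resp_rate_bpm  = rate_from_band(x, fs, 0.1, 0.7)   # ~6-42 breaths/min
```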

    TRY plant trait database – enhanced coverage and open access

    Get PDF
    Plant traits—the morphological, anatomical, physiological, biochemical and phenological characteristics of plants—determine how plants respond to environmental factors, affect other trophic levels, and influence ecosystem properties and their benefits and detriments to people. Plant trait data thus represent the basis for a vast area of research spanning from evolutionary biology, community and functional ecology, to biodiversity conservation, ecosystem and landscape management, restoration, biogeography and earth system modelling. Since its foundation in 2007, the TRY database of plant traits has grown continuously. It now provides unprecedented data coverage under an open access data policy and is the main plant trait database used by the research community worldwide. Increasingly, the TRY database also supports new frontiers of trait-based plant research, including the identification of data gaps and the subsequent mobilization or measurement of new data. To support this development, in this article we evaluate the extent of the trait data compiled in TRY and analyse emerging patterns of data coverage and representativeness. Best species coverage is achieved for categorical traits—almost complete coverage for ‘plant growth form’. However, most traits relevant for ecology and vegetation modelling are characterized by continuous intraspecific variation and trait–environmental relationships. These traits have to be measured on individual plants in their respective environment. Despite unprecedented data coverage, we observe a humbling lack of completeness and representativeness of these continuous traits in many aspects. We, therefore, conclude that reducing data gaps and biases in the TRY database remains a key challenge and requires a coordinated approach to data mobilization and trait measurements. This can only be achieved in collaboration with other initiatives.Rest of authors: Decky Junaedi, Robert R. Junker, Eric Justes, Richard Kabzems, Jeffrey Kane, Zdenek Kaplan, Teja Kattenborn, Lyudmila Kavelenova, Elizabeth Kearsley, Anne Kempel, Tanaka Kenzo, Andrew Kerkhoff, Mohammed I. Khalil, Nicole L. Kinlock, Wilm Daniel Kissling, Kaoru Kitajima, Thomas Kitzberger, Rasmus Kjøller, Tamir Klein, Michael Kleyer, Jitka Klimešová, Joice Klipel, Brian Kloeppel, Stefan Klotz, Johannes M. H. Knops, Takashi Kohyama, Fumito Koike, Johannes Kollmann, Benjamin Komac, Kimberly Komatsu, Christian König, Nathan J. B. Kraft, Koen Kramer, Holger Kreft, Ingolf Kühn, Dushan Kumarathunge, Jonas Kuppler, Hiroko Kurokawa, Yoko Kurosawa, Shem Kuyah, Jean-Paul Laclau, Benoit Lafleur, Erik Lallai, Eric Lamb, Andrea Lamprecht, Daniel J. Larkin, Daniel Laughlin, Yoann Le Bagousse-Pinguet, Guerric le Maire, Peter C. le Roux, Elizabeth le Roux, Tali Lee, Frederic Lens, Simon L. Lewis, Barbara Lhotsky, Yuanzhi Li, Xine Li, Jeremy W. Lichstein, Mario Liebergesell, Jun Ying Lim, Yan-Shih Lin, Juan Carlos Linares, Chunjiang Liu, Daijun Liu, Udayangani Liu, Stuart Livingstone, Joan Llusià, Madelon Lohbeck, Álvaro López-García, Gabriela Lopez-Gonzalez, Zdeňka Lososová, Frédérique Louault, Balázs A. 
Lukács, Petr Lukeš, Yunjian Luo, Michele Lussu, Siyan Ma, Camilla Maciel Rabelo Pereira, Michelle Mack, Vincent Maire, Annikki Mäkelä, Harri Mäkinen, Ana Claudia Mendes Malhado, Azim Mallik, Peter Manning, Stefano Manzoni, Zuleica Marchetti, Luca Marchino, Vinicius Marcilio-Silva, Eric Marcon, Michela Marignani, Lars Markesteijn, Adam Martin, Cristina Martínez-Garza, Jordi Martínez-Vilalta, Tereza Mašková, Kelly Mason, Norman Mason, Tara Joy Massad, Jacynthe Masse, Itay Mayrose, James McCarthy, M. Luke McCormack, Katherine McCulloh, Ian R. McFadden, Brian J. McGill, Mara Y. McPartland, Juliana S. Medeiros, Belinda Medlyn, Pierre Meerts, Zia Mehrabi, Patrick Meir, Felipe P. L. Melo, Maurizio Mencuccini, Céline Meredieu, Julie Messier, Ilona Mészáros, Juha Metsaranta, Sean T. Michaletz, Chrysanthi Michelaki, Svetlana Migalina, Ruben Milla, Jesse E. D. Miller, Vanessa Minden, Ray Ming, Karel Mokany, Angela T. Moles, Attila Molnár V, Jane Molofsky, Martin Molz, Rebecca A. Montgomery, Arnaud Monty, Lenka Moravcová, Alvaro Moreno-Martínez, Marco Moretti, Akira S. Mori, Shigeta Mori, Dave Morris, Jane Morrison, Ladislav Mucina, Sandra Mueller, Christopher D. Muir, Sandra Cristina Müller, François Munoz, Isla H. Myers-Smith, Randall W. Myster, Masahiro Nagano, Shawna Naidu, Ayyappan Narayanan, Balachandran Natesan, Luka Negoita, Andrew S. Nelson, Eike Lena Neuschulz, Jian Ni, Georg Niedrist, Jhon Nieto, Ülo Niinemets, Rachael Nolan, Henning Nottebrock, Yann Nouvellon, Alexander Novakovskiy, The Nutrient Network, Kristin Odden Nystuen, Anthony O'Grady, Kevin O'Hara, Andrew O'Reilly-Nugent, Simon Oakley, Walter Oberhuber, Toshiyuki Ohtsuka, Ricardo Oliveira, Kinga Öllerer, Mark E. Olson, Vladimir Onipchenko, Yusuke Onoda, Renske E. Onstein, Jenny C. Ordonez, Noriyuki Osada, Ivika Ostonen, Gianluigi Ottaviani, Sarah Otto, Gerhard E. Overbeck, Wim A. Ozinga, Anna T. Pahl, C. E. Timothy Paine, Robin J. Pakeman, Aristotelis C. Papageorgiou, Evgeniya Parfionova, Meelis Pärtel, Marco Patacca, Susana Paula, Juraj Paule, Harald Pauli, Juli G. Pausas, Begoña Peco, Josep Penuelas, Antonio Perea, Pablo Luis Peri, Ana Carolina Petisco-Souza, Alessandro Petraglia, Any Mary Petritan, Oliver L. Phillips, Simon Pierce, Valério D. Pillar, Jan Pisek, Alexandr Pomogaybin, Hendrik Poorter, Angelika Portsmuth, Peter Poschlod, Catherine Potvin, Devon Pounds, A. Shafer Powell, Sally A. Power, Andreas Prinzing, Giacomo Puglielli, Petr Pyšek, Valerie Raevel, Anja Rammig, Johannes Ransijn, Courtenay A. Ray, Peter B. Reich, Markus Reichstein, Douglas E. B. Reid, Maxime Réjou-Méchain, Victor Resco de Dios, Sabina Ribeiro, Sarah Richardson, Kersti Riibak, Matthias C. Rillig, Fiamma Riviera, Elisabeth M. R. Robert, Scott Roberts, Bjorn Robroek, Adam Roddy, Arthur Vinicius Rodrigues, Alistair Rogers, Emily Rollinson, Victor Rolo, Christine Römermann, Dina Ronzhina, Christiane Roscher, Julieta A. Rosell, Milena Fermina Rosenfield, Christian Rossi, David B. Roy, Samuel Royer-Tardif, Nadja Rüger, Ricardo Ruiz-Peinado, Sabine B. Rumpf, Graciela M. Rusch, Masahiro Ryo, Lawren Sack, Angela Saldaña, Beatriz Salgado-Negret, Roberto Salguero-Gomez, Ignacio Santa-Regina, Ana Carolina Santacruz-García, Joaquim Santos, Jordi Sardans, Brandon Schamp, Michael Scherer-Lorenzen, Matthias Schleuning, Bernhard Schmid, Marco Schmidt, Sylvain Schmitt, Julio V. Schneider, Simon D. Schowanek, Julian Schrader, Franziska Schrodt, Bernhard Schuldt, Frank Schurr, Galia Selaya Garvizu, Marina Semchenko, Colleen Seymour, Julia C. Sfair, Joanne M. 
Sharpe, Christine S. Sheppard, Serge Sheremetiev, Satomi Shiodera, Bill Shipley, Tanvir Ahmed Shovon, Alrun Siebenkäs, Carlos Sierra, Vasco Silva, Mateus Silva, Tommaso Sitzia, Henrik Sjöman, Martijn Slot, Nicholas G. Smith, Darwin Sodhi, Pamela Soltis, Douglas Soltis, Ben Somers, Grégory Sonnier, Mia Vedel Sørensen, Enio Egon Sosinski Jr, Nadejda A. Soudzilovskaia, Alexandre F. Souza, Marko Spasojevic, Marta Gaia Sperandii, Amanda B. Stan, James Stegen, Klaus Steinbauer, Jörg G. Stephan, Frank Sterck, Dejan B. Stojanovic, Tanya Strydom, Maria Laura Suarez, Jens-Christian Svenning, Ivana Svitková, Marek Svitok, Miroslav Svoboda, Emily Swaine, Nathan Swenson, Marcelo Tabarelli, Kentaro Takagi, Ulrike Tappeiner, Rubén Tarifa, Simon Tauugourdeau, Cagatay Tavsanoglu, Mariska te Beest, Leho Tedersoo, Nelson Thiffault, Dominik Thom, Evert Thomas, Ken Thompson, Peter E. Thornton, Wilfried Thuiller, Lubomír Tichý, David Tissue, Mark G. Tjoelker, David Yue Phin Tng, Joseph Tobias, Péter Török, Tonantzin Tarin, José M. Torres-Ruiz, Béla Tóthmérész, Martina Treurnicht, Valeria Trivellone, Franck Trolliet, Volodymyr Trotsiuk, James L. Tsakalos, Ioannis Tsiripidis, Niklas Tysklind, Toru Umehara, Vladimir Usoltsev, Matthew Vadeboncoeur, Jamil Vaezi, Fernando Valladares, Jana Vamosi, Peter M. van Bodegom, Michiel van Breugel, Elisa Van Cleemput, Martine van de Weg, Stephni van der Merwe, Fons van der Plas, Masha T. van der Sande, Mark van Kleunen, Koenraad Van Meerbeek, Mark Vanderwel, Kim André Vanselow, Angelica Vårhammar, Laura Varone, Maribel Yesenia Vasquez Valderrama, Kiril Vassilev, Mark Vellend, Erik J. Veneklaas, Hans Verbeeck, Kris Verheyen, Alexander Vibrans, Ima Vieira, Jaime Villacís, Cyrille Violle, Pandi Vivek, Katrin Wagner, Matthew Waldram, Anthony Waldron, Anthony P. Walker, Martyn Waller, Gabriel Walther, Han Wang, Feng Wang, Weiqi Wang, Harry Watkins, James Watkins, Ulrich Weber, James T. Weedon, Liping Wei, Patrick Weigelt, Evan Weiher, Aidan W. Wells, Camilla Wellstein, Elizabeth Wenk, Mark Westoby, Alana Westwood, Philip John White, Mark Whitten, Mathew Williams, Daniel E. Winkler, Klaus Winter, Chevonne Womack, Ian J. Wright, S. Joseph Wright, Justin Wright, Bruno X. Pinho, Fabiano Ximenes, Toshihiro Yamada, Keiko Yamaji, Ruth Yanai, Nikolay Yankov, Benjamin Yguel, Kátia Janaina Zanini, Amy E. Zanne, David Zelený, Yun-Peng Zhao, Jingming Zheng, Ji Zheng, Kasia Ziemińska, Chad R. Zirbel, Georg Zizka, Irié Casimir Zo-Bi, Gerhard Zotz, Christian Wirth.
    Max Planck Institute for Biogeochemistry; Max Planck Society; German Centre for Integrative Biodiversity Research (iDiv) Halle-Jena-Leipzig; International Programme of Biodiversity Science (DIVERSITAS); International Geosphere-Biosphere Programme (IGBP); Future Earth; French Foundation for Biodiversity Research (FRB); GIS ‘Climat, Environnement et Société’.
    http://wileyonlinelibrary.com/journal/gcb
    Plant Production and Soil Science

    Effect of angiotensin-converting enzyme inhibitor and angiotensin receptor blocker initiation on organ support-free days in patients hospitalized with COVID-19

    Get PDF
    IMPORTANCE Overactivation of the renin-angiotensin system (RAS) may contribute to poor clinical outcomes in patients with COVID-19.
    OBJECTIVE To determine whether angiotensin-converting enzyme (ACE) inhibitor or angiotensin receptor blocker (ARB) initiation improves outcomes in patients hospitalized for COVID-19.
    DESIGN, SETTING, AND PARTICIPANTS In an ongoing, adaptive platform randomized clinical trial, 721 critically ill and 58 non–critically ill hospitalized adults were randomized to receive an RAS inhibitor or control between March 16, 2021, and February 25, 2022, at 69 sites in 7 countries (final follow-up on June 1, 2022).
    INTERVENTIONS Patients were randomized to receive open-label initiation of an ACE inhibitor (n = 257), ARB (n = 248), ARB in combination with DMX-200 (a chemokine receptor-2 inhibitor; n = 10), or no RAS inhibitor (control; n = 264) for up to 10 days.
    MAIN OUTCOMES AND MEASURES The primary outcome was organ support–free days, a composite of hospital survival and days alive without cardiovascular or respiratory organ support through 21 days. The primary analysis was a bayesian cumulative logistic model. Odds ratios (ORs) greater than 1 represent improved outcomes.
    RESULTS On February 25, 2022, enrollment was discontinued due to safety concerns. Among 679 critically ill patients with available primary outcome data, the median age was 56 years and 239 participants (35.2%) were women. Median (IQR) organ support–free days among critically ill patients was 10 (–1 to 16) in the ACE inhibitor group (n = 231), 8 (–1 to 17) in the ARB group (n = 217), and 12 (0 to 17) in the control group (n = 231) (median adjusted odds ratios of 0.77 [95% bayesian credible interval, 0.58-1.06] for improvement for ACE inhibitor and 0.76 [95% credible interval, 0.56-1.05] for ARB compared with control). The posterior probabilities that ACE inhibitors and ARBs worsened organ support–free days compared with control were 94.9% and 95.4%, respectively. Hospital survival occurred in 166 of 231 critically ill participants (71.9%) in the ACE inhibitor group, 152 of 217 (70.0%) in the ARB group, and 182 of 231 (78.8%) in the control group (posterior probabilities that ACE inhibitor and ARB worsened hospital survival compared with control were 95.3% and 98.1%, respectively).
    CONCLUSIONS AND RELEVANCE In this trial, among critically ill adults with COVID-19, initiation of an ACE inhibitor or ARB did not improve, and likely worsened, clinical outcomes.
    TRIAL REGISTRATION ClinicalTrials.gov Identifier: NCT0273570
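
    As a back-of-the-envelope check on the reported posterior probabilities of harm, one can approximate the posterior of log(OR) as normal with the reported median and 95% credible interval and compute the probability that the odds ratio falls below 1. This is only an approximation for illustration; the trial's actual probabilities come from its bayesian cumulative logistic model.

```python
import numpy as np
from scipy.stats import norm

def prob_harm(or_median, ci_low, ci_high):
    """Approximate P(OR < 1), i.e., probability of harm, from a reported median
    and 95% credible interval, assuming a roughly normal posterior on log(OR)."""
    mu = np.log(or_median)
    sigma = (np.log(ci_high) - np.log(ci_low)) / (2 * norm.ppf(0.975))
    return norm.cdf((0.0 - mu) / sigma)

print(prob_harm(0.77, 0.58, 1.06))  # ~0.95, close to the reported 94.9% for ACE inhibitors
print(prob_harm(0.76, 0.56, 1.05))  # ~0.95, close to the reported 95.4% for ARBs
```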

    Early warning of patient deterioration in the inpatient setting

    No full text
    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2013. Cataloged from PDF version of thesis. Includes bibliographical references (p. 161-166).
    Early signs of patient deterioration have been documented in the medical literature. Recognition of such signs offers the possibility of treatment with sufficient lead time to prevent irreversible organ damage and death. Pediatric hospitals currently utilize simple, human-evaluated rubrics called early warning scores to detect early signs of patient deterioration. These scores comprise subjective (patient behavior, clinician's impression) and objective (vital signs) components to assess patient health and are computed intermittently by the nursing staff. At Boston Children's Hospital (BCH), early warning scores are evaluated at least every four hours for each patient. Many hospitals monitor inpatients continuously to alert caregivers to changes in physiological status. At BCH, each hospital bed is equipped with a bedside monitor that continuously collects and archives vital sign data, such as heart rate, respiration rate, and arterial oxygen saturation. Continuous access to these physiological variables allows for the definition of a continuously evaluated early warning score on a reduced rubric. This thesis quantitatively assesses the performance of BCH's current Children's Hospital Early Warning Score (CHEWS). We also apply several standard machine learning approaches to investigate the utility of automatically collected bedside monitoring trend data for prediction of patient deterioration. Our results suggest that CHEWS offers at least a 6-hour warning with sensitivity 0.78 and specificity 0.90, but only with a prohibitively large uncertainty (48 hours) surrounding the time of transfer. Performance using only standard bedside trend data is no better than chance; improvement may require exploiting additional intra-beat features of monitored waveforms. The full CHEWS appears to capture significant clinical features that are not present in the monitoring data used in this study.
    by Gregory Alan Ciccarelli. S.M.
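
    To make the lead-time framing concrete, the sketch below shows one simplified way to score a continuously evaluated early warning score against a 6-hour warning window before transfer; the labeling rule and threshold handling are illustrative assumptions, not the thesis's exact evaluation protocol.

```python
import numpy as np

def sensitivity_specificity(scores, times, transfer_time, threshold, lead_hours=6.0):
    """Illustrative lead-time evaluation of an early warning score.

    Marks each scored time point as 'positive' if it falls within `lead_hours`
    before the transfer to intensive care, then compares score >= threshold
    against those labels.
    """
    scores = np.asarray(scores, dtype=float)
    times = np.asarray(times, dtype=float)  # hours since admission
    labels = (transfer_time - times <= lead_hours) & (times <= transfer_time)
    preds = scores >= threshold
    tp = np.sum(preds & labels)
    fn = np.sum(~preds & labels)
    tn = np.sum(~preds & ~labels)
    fp = np.sum(preds & ~labels)
    sensitivity = tp / max(tp + fn, 1)
    specificity = tn / max(tn + fp, 1)
    return sensitivity, specificity
```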

    Characterization of phoneme rate as a vocal biomarker of depression

    No full text
    Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017. Cataloged from PDF version of thesis. Includes bibliographical references (pages 149-161).
    Quantitative approaches to psychiatric assessment beyond the qualitative descriptors in the Diagnostic and Statistical Manual of Mental Disorders could transform mental health care. However, objective neurocognitive state estimation and tracking demand robust, scalable indicators of a disorder. A person's speech is a rich source of neurocognitive information because speech production is a complex sensorimotor task that draws upon many cortical and subcortical regions. Furthermore, the ease of collection makes speech a practical, scalable candidate for assessment of mental health. One aspect of speech production that has shown sensitivity to neuropsychological disorders is phoneme rate, the rate at which individual consonants and vowels are spoken. Our aim in this thesis is to characterize phoneme rate as an indicator of depression and to improve our use of phoneme rate as a feature through both brain imaging and neurocomputational modeling. This thesis proposes that psychiatric assessment can be enhanced using a neurocomputational model of speech motor control to estimate unobserved parameters as latent descriptors of a disorder. We use depression as our model disorder and focus on motor control of speech phoneme rate. First, we investigate the neural basis for phoneme rate modulation in healthy subjects uttering emotional sentences and in depression using functional magnetic resonance imaging. Then, we develop a computational model of phoneme rate to estimate subject-specific parameters that correlate with individual phoneme rate. Finally, we apply these and other features derived from speech to distinguish depressed from healthy control subjects.
    by Gregory Alan Ciccarelli. Ph. D.
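
    As a concrete illustration, phoneme rate can be computed from time-aligned phoneme segments (e.g., produced by a forced aligner) as phonemes per second of speaking time. The sketch below uses that simple definition; it is an assumption for illustration and not necessarily the thesis's exact operationalization.

```python
def phoneme_rate(segments, exclude=("sil", "sp")):
    """Phonemes per second of speaking time from time-aligned segments.

    `segments` is a list of (label, start_sec, end_sec) tuples, e.g. from a
    forced aligner; silence/pause labels are excluded from both the count
    and the speaking time.
    """
    spoken = [(lab, s, e) for lab, s, e in segments if lab not in exclude]
    speaking_time = sum(e - s for _, s, e in spoken)
    if speaking_time == 0:
        return 0.0
    return len(spoken) / speaking_time

# Hypothetical usage:
# rate = phoneme_rate([("HH", 0.00, 0.08), ("AH", 0.08, 0.21), ("sil", 0.21, 0.50)])
```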

    Deep Neural Network Model of Hearing-Impaired Speech-in-Noise Performance

    No full text
    Many individuals struggle to understand speech in listening scenarios that include reverberation and background noise. An individual’s ability to understand speech arises from a combination of peripheral auditory function, central auditory function, and general cognitive abilities. The interaction of these factors complicates the prescription of treatment or therapy to improve hearing function. Damage to the auditory periphery can be studied in animals; however, this method alone is not enough to understand the impact of hearing loss on speech perception. Computational auditory models bridge the gap between animal studies and human speech perception. Perturbations to the modeled auditory systems can permit mechanism-based investigations into observed human behavior. In this study, we propose a computational model that accounts for the complex interactions between different hearing damage mechanisms and simulates human speech-in-noise perception. The model performs a digit classification task as a human would, with only acoustic sound pressure as input. Thus, we can use the model’s performance as a proxy for human performance. This two-stage model consists of a biophysical cochlear-nerve spike generator followed by a deep neural network (DNN) classifier. We hypothesize that sudden damage to the periphery affects speech perception and that central nervous system adaptation over time may compensate for peripheral hearing damage. Our model achieved human-like performance across signal-to-noise ratios (SNRs) under normal-hearing (NH) cochlear settings, achieving 50% digit recognition accuracy at −20.7 dB SNR. Results were comparable to eight NH participants on the same task who achieved 50% behavioral performance at −22 dB SNR. We also simulated medial olivocochlear reflex (MOCR) and auditory nerve fiber (ANF) loss, which worsened digit-recognition accuracy at lower SNRs compared to higher SNRs. Our simulated performance following ANF loss is consistent with the hypothesis that cochlear synaptopathy impacts communication in background noise more so than in quiet. Following the insult of various cochlear degradations, we implemented extreme and conservative adaptation through the DNN. At the lowest SNRs (<0 dB), both adapted models were unable to fully recover NH performance, even with hundreds of thousands of training samples. This implies a limit on performance recovery following peripheral damage in our human-inspired DNN architecture.
    United States. Department of Defense. Research and Engineering (Air Force Contract No. FA8702-15-D-0001); National Institutes of Health (U.S.) (T32 Trainee Grant No. 5T32DC000038-27); National Science Foundation (U.S.). Graduate Research Fellowship Program (Grant DGE1745303)
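
    The 50%-correct SNR thresholds quoted above (−20.7 dB for the model, −22 dB for listeners) can be read off a psychometric function fit to accuracy-versus-SNR data. The sketch below fits a logistic with a 10% chance floor (ten digits) and solves for the 50% point; the functional form and fitting details are assumptions for illustration, not the study's own threshold procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(snr, snr50, slope):
    """Psychometric function for a 10-class digit task: proportion correct vs. SNR,
    rising from chance (0.1) toward 1.0."""
    return 0.1 + 0.9 / (1.0 + np.exp(-slope * (snr - snr50)))

def snr_at_50_percent(snrs, accuracy):
    """Fit the logistic and return the SNR giving 50% accuracy.

    Solves 0.1 + 0.9 / (1 + exp(-k (x - x0))) = 0.5 for x after fitting (x0, k).
    """
    (snr50, slope), _ = curve_fit(logistic, snrs, accuracy, p0=[-15.0, 0.5])
    return snr50 - np.log(0.9 / 0.4 - 1.0) / slope

# Hypothetical usage with accuracy measured at several SNRs:
# snrs = np.array([-30, -25, -20, -15, -10, -5, 0])
# acc  = np.array([0.11, 0.18, 0.47, 0.80, 0.95, 0.99, 1.0])
# print(snr_at_50_percent(snrs, acc))   # e.g. around -20 dB
```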