
    A large-scale and PCR-referenced vocal audio dataset for COVID-19

    The UK COVID-19 Vocal Audio Dataset is designed for the training and evaluation of machine learning models that classify SARS-CoV-2 infection status or associated respiratory symptoms using vocal audio. The UK Health Security Agency recruited voluntary participants through the national Test and Trace programme and the REACT-1 survey in England from March 2021 to March 2022, during dominant transmission of the Alpha and Delta SARS-CoV-2 variants and some Omicron variant sublineages. Audio recordings of volitional coughs, exhalations, and speech were collected in the 'Speak up to help beat coronavirus' digital survey alongside demographic, self-reported symptom and respiratory condition data, and linked to SARS-CoV-2 test results. The UK COVID-19 Vocal Audio Dataset represents the largest collection of SARS-CoV-2 PCR-referenced audio recordings to date. PCR results were linked to 70,794 of 72,999 participants and 24,155 of 25,776 positive cases. Respiratory symptoms were reported by 45.62% of participants. This dataset has additional potential uses for bioacoustics research, with 11.30% of participants reporting asthma, and 27.20% with linked influenza PCR test results.
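
    As an illustration of the kind of pipeline this dataset is designed to feed, below is a minimal Python sketch that turns one recording into log-mel spectrogram features for a downstream classifier. It assumes the librosa library is installed; the file path, sample rate, and mel settings are illustrative choices, not the dataset's published pipeline.

        # Sketch: featurise one cough recording as a log-mel spectrogram.
        # The path and parameter values are illustrative assumptions.
        import librosa
        import numpy as np

        def log_mel_features(wav_path, sr=16000, n_mels=64):
            """Load a recording and return a log-mel spectrogram (n_mels x frames)."""
            audio, _ = librosa.load(wav_path, sr=sr, mono=True)
            mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=n_mels)
            return librosa.power_to_db(mel, ref=np.max)

        features = log_mel_features("audio/participant_0001_cough.wav")  # hypothetical path
        print(features.shape)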

    A large-scale and PCR-referenced vocal audio dataset for COVID-19

    The UK COVID-19 Vocal Audio Dataset is designed for the training and evaluation of machine learning models that classify SARS-CoV-2 infection status or associated respiratory symptoms using vocal audio. The UK Health Security Agency recruited voluntary participants through the national Test and Trace programme and the REACT-1 survey in England from March 2021 to March 2022, during dominant transmission of the Alpha and Delta SARS-CoV-2 variants and some Omicron variant sublineages. Audio recordings of volitional coughs, exhalations, and speech were collected in the ‘Speak up and help beat coronavirus’ digital survey alongside demographic, symptom and self-reported respiratory condition data. Digital survey submissions were linked to SARS-CoV-2 test results. The UK COVID-19 Vocal Audio Dataset represents the largest collection of SARS-CoV-2 PCR-referenced audio recordings to date. PCR results were linked to 70,565 of 72,999 participants and 24,105 of 25,706 positive cases. Respiratory symptoms were reported by 45.6% of participants. This dataset has additional potential uses for bioacoustics research, with 11.3% of participants self-reporting asthma, and 27.2% with linked influenza PCR test results.

    Digital health and mobile health: a bibliometric analysis of the 100 most cited papers and their contributing authors

    Aim: This study aimed to identify and analyze the top 100 most cited digital health and mobile health (m-health) publications. Such an analysis can help researchers identify promising new research avenues and support international scientific collaboration between interdisciplinary research groups with demonstrated achievements in the field. Methods: On 30 August 2023, the Web of Science Core Collection (WOSCC) electronic database was queried with a comprehensive search string to identify the top 100 most cited digital health papers. The initial search identified 106 papers; after screening for relevance, six were excluded, yielding the final list of 100. The basic bibliographic data were extracted directly from WOSCC using its “Analyze” and “Create Citation Report” functions. The complete records of the top 100 papers were downloaded and imported into the bibliometric software VOSviewer (version 1.6.19) to generate an author keyword map and an author collaboration map. Results: The top 100 papers on digital health received a total of 49,653 citations. Over half (n = 55) were published during 2013–2017. Among these 100 papers, 59 were original articles, 36 were reviews, 4 were editorial materials, and 1 was a proceedings paper. All papers were written in English. The University of London and the University of California system were the most represented affiliations, the USA and the UK were the most represented countries, and the Journal of Medical Internet Research was the most represented journal. Several diseases and health conditions were identified as a focus of these works, including anxiety, depression, diabetes mellitus, cardiovascular diseases, and coronavirus disease 2019 (COVID-19). Conclusions: The findings underscore key areas of focus in the field and its prominent contributors, providing a roadmap for future research in digital and m-health.
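
    For readers who want to reproduce a keyword map of this kind outside VOSviewer, here is a minimal Python sketch that counts author-keyword co-occurrences from a tab-separated Web of Science export. The file name and the 'DE' (author keywords) field follow the common WOSCC export convention, but both should be treated as assumptions here.

        # Sketch: count author-keyword co-occurrences from a WOS-style export.
        # 'savedrecs.txt' and the 'DE' column are assumed, per the lead-in.
        import csv
        from collections import Counter
        from itertools import combinations

        pair_counts = Counter()
        with open("savedrecs.txt", encoding="utf-8-sig") as f:
            for record in csv.DictReader(f, delimiter="\t"):
                raw = record.get("DE") or ""
                keywords = sorted({k.strip().lower() for k in raw.split(";") if k.strip()})
                pair_counts.update(combinations(keywords, 2))

        # Ten most frequent keyword pairs, the raw material of a co-occurrence map
        for (a, b), n in pair_counts.most_common(10):
            print(f"{n:4d}  {a} -- {b}")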

    Parallel title [author's translation]: The power of the unconscious. Sun, fire and light as constitutive factors of non-European sacral spatial structures

    The function of sacral architecture never consists solely in housing the 'divine'. Through its design and the associations this evokes, it also conveys a wide range of theological (sacral and sepulchral), cosmogonic, and socio-political meanings, and the effectiveness of this transfer of information, and hence the comprehensibility of the sacred site, depends largely on how the building is perceived. Sun, fire, and light, but also darkness, are well suited as media for shaping and differentiating sacral spatial structures, since their fundamental significance for human beings provokes archetypal emotions and reactions, so that they operate unconsciously. Taking the sun, fire, or transcended fire as the cult object, or employing light, produces distinct spatial schemes that relate the sacred site directly to these elements (for example through hypaethral design or geographical exposition), thereby sacralising it, and that convey additional meanings through the inclusion of culture-specific factors. Whereas the development from an architecture of the sun to an architecture of transcended fire reflects a linear cultural evolution and is determined by the real or notional presence of the cult object, the use of light serves primarily to sacralise and emotionalise the sacred space and to convey a regenerative and vitalising transfer of energy. In no case, however, can the function of a sacral building be explained by a single cause. All sacral spatial concepts based on the sun, fire, or light as cult object or as a means of design therefore have in common the multi-layered character of the meanings they convey and their primarily unconscious theological and socio-political effectiveness.

    Audio-based AI classifiers show no evidence of improved COVID-19 diagnosis over simple symptom checkers

    Recent work has reported that respiratory audio-trained AI classifiers can accurately predict SARS-CoV-2 infection status. However, it has not yet been determined whether such model performance is driven by latent audio biomarkers with true causal links to SARS-CoV-2 infection or by confounding effects present in observational studies, such as recruitment bias. Here, we undertake a large-scale study of audio-based AI classifiers, as part of the UK government’s pandemic response. We collect a dataset of audio recordings from 67,842 individuals, with linked metadata, of whom 23,514 had positive PCR tests for SARS-CoV-2. In an unadjusted analysis, similar to that in previous works, AI classifiers predict SARS-CoV-2 infection status with high accuracy (ROC-AUC=0.846 [0.838–0.854]). However, after matching on measured confounders, such as self-reported symptoms, performance is much weaker (ROC-AUC=0.619 [0.594–0.644]). Upon quantifying the utility of audio-based classifiers in practical settings, we find them to be outperformed by predictions based on user-reported symptoms. We make best-practice recommendations for handling recruitment bias, and for assessing audio-based classifiers by their utility in relevant practical settings. Our work provides novel insights into the value of AI audio analysis and the importance of study design and treatment of confounders in AI-enabled diagnostics
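
    To make the matching idea above concrete, here is a minimal, hypothetical Python sketch of the two evaluations: scoring a classifier's predictions on the full test set, then on a subset in which each PCR-positive participant is paired with a PCR-negative participant sharing the same self-reported symptom profile. The column names and the one-to-one matching scheme are simplifying assumptions, not the paper's exact protocol.

        # Sketch: unadjusted vs confounder-matched ROC-AUC for saved predictions.
        # Columns 'label', 'score', and 'symptom_profile' are assumed names.
        import pandas as pd
        from sklearn.metrics import roc_auc_score

        def matched_subset(df):
            """Pair each positive with one negative that has the same symptom profile."""
            pieces = []
            for _, group in df.groupby("symptom_profile"):
                pos = group[group["label"] == 1]
                neg = group[group["label"] == 0]
                n = min(len(pos), len(neg))
                if n:
                    pieces.append(pd.concat([pos.head(n), neg.head(n)]))
            return pd.concat(pieces)

        test = pd.read_csv("test_predictions.csv")  # hypothetical predictions file
        print("unadjusted ROC-AUC:", roc_auc_score(test["label"], test["score"]))
        matched = matched_subset(test)
        print("matched ROC-AUC:", roc_auc_score(matched["label"], matched["score"]))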


    The UK COVID-19 Vocal Audio Dataset

    The UK COVID-19 Vocal Audio Dataset is designed for the training and evaluation of machine learning models that classify SARS-CoV-2 infection status or associated respiratory symptoms using vocal audio. The UK Health Security Agency recruited voluntary participants through the national Test and Trace programme and the REACT-1 survey in England from March 2021 to March 2022, during dominant transmission of the Alpha and Delta SARS-CoV-2 variants and some Omicron variant sublineages. Audio recordings of volitional coughs, exhalations, and speech (speech not available in the open access version) were collected in the 'Speak up to help beat coronavirus' digital survey alongside demographic, self-reported symptom and respiratory condition data, and linked to SARS-CoV-2 test results. The UK COVID-19 Vocal Audio Dataset represents the largest collection of SARS-CoV-2 PCR-referenced audio recordings to date. PCR results were linked to 70,794 of 72,999 participants and 24,155 of 25,776 positive cases. Respiratory symptoms were reported by 45.62% of participants. This dataset has additional potential uses for bioacoustics research, with 11.30% of participants reporting asthma, and 27.20% with linked influenza PCR test results.

    Contents

    - participant_metadata.csv: row-wise, participant-identifier-indexed information on participant demographics and health status. See 'A large-scale and PCR-referenced vocal audio dataset for COVID-19' (https://arxiv.org/pdf/2212.07738.pdf) for a full description of the dataset.
    - audio_metadata.csv: row-wise, participant-identifier-indexed information on the three recorded audio modalities, including audio file paths. See 'A large-scale and PCR-referenced vocal audio dataset for COVID-19' (https://arxiv.org/pdf/2212.07738.pdf) for a full description of the dataset.
    - train_test_splits.csv: row-wise, participant-identifier-indexed information on train/test splits for the following sets: 'Randomised' train and test set, 'Standard' train and test set, 'Matched' train and test sets, 'Longitudinal' test set, and 'Matched Longitudinal' test set. See 'Audio-based AI classifiers show no evidence of improved COVID-19 screening over simple symptoms checkers' (https://arxiv.org/abs/2212.08570) for a full description of the train/test splits.
    - audio/: directory containing all the recordings in .wav format. Due to the large size of the dataset, the audio files have been zipped into covid_data.z{ip, 01-24}, which allows the dataset to be downloaded in short sessions and reduces the chance of a dropped internet connection scuppering progress. To unzip, first ensure that all zip files are in the same directory, then run 'unzip covid_data.zip', or right-click on 'covid_data.zip' and open it with a programme such as 'The Unarchiver'. Once extracted, to check the validity of the download, run 'python Turing-RSS-Health-Data-Lab-Biomedical-Acoustic-Markers/data-paper/unit-tests.py' from a clone of the GitHub repository detailed below; all tests should pass with no exceptions.
    - README.md: full dataset descriptor.
    - DataDictionary_UKCOVID19VocalAudioDataset_OpenAccess.xlsx: descriptor of each dataset attribute with the percentage coverage.

    Code Base

    The accompanying code can be found at https://github.com/alan-turing-institute/Turing-RSS-Health-Data-Lab-Biomedical-Acoustic-Markers

    Citations

    Please cite:

    @article{coppock2022,
      author  = {Coppock, Harry and Nicholson, George and Kiskin, Ivan and Koutra, Vasiliki and Baker, Kieran and Budd, Jobie and Payne, Richard and Karoune, Emma and Hurley, David and Titcomb, Alexander and Egglestone, Sabrina and Cañadas, Ana Tendero and Butler, Lorraine and Jersakova, Radka and Mellor, Jonathon and Patel, Selina and Thornley, Tracey and Diggle, Peter and Richardson, Sylvia and Packham, Josef and Schuller, Björn W. and Pigoli, Davide and Gilmour, Steven and Roberts, Stephen and Holmes, Chris},
      title   = {Audio-based AI classifiers show no evidence of improved COVID-19 screening over simple symptoms checkers},
      journal = {arXiv},
      year    = {2022},
      doi     = {10.48550/ARXIV.2212.08570},
      url     = {https://arxiv.org/abs/2212.08570},
    }

    @article{budd2022,
      author  = {Jobie Budd and Kieran Baker and Emma Karoune and Harry Coppock and Selina Patel and Ana Tendero Cañadas and Alexander Titcomb and Richard Payne and David Hurley and Sabrina Egglestone and Lorraine Butler and George Nicholson and Ivan Kiskin and Vasiliki Koutra and Radka Jersakova and Peter Diggle and Sylvia Richardson and Bjoern Schuller and Steven Gilmour and Davide Pigoli and Stephen Roberts and Josef Packham and Tracey Thornley and Chris Holmes},
      title   = {A large-scale and PCR-referenced vocal audio dataset for COVID-19},
      year    = {2022},
      journal = {arXiv},
      doi     = {10.48550/ARXIV.2212.07738},
    }

    @article{Pigoli2022,
      author  = {Davide Pigoli and Kieran Baker and Jobie Budd and Lorraine Butler and Harry Coppock and Sabrina Egglestone and Steven G. Gilmour and Chris Holmes and David Hurley and Radka Jersakova and Ivan Kiskin and Vasiliki Koutra and George Nicholson and Joe Packham and Selina Patel and Richard Payne and Stephen J. Roberts and Björn W. Schuller and Ana Tendero-Cañadas and Tracey Thornley and Alexander Titcomb},
      title   = {Statistical Design and Analysis for Robust Machine Learning: A Case Study from Covid-19},
      year    = {2022},
      journal = {arXiv},
      doi     = {10.48550/ARXIV.2212.08571},
    }

    The Dublin Core™ Metadata Initiative

    - Title: The UK COVID-19 Vocal Audio Dataset, Open Access Edition.
    - Creator: The UK Health Security Agency (UKHSA) in collaboration with The Turing-RSS Health Data Lab.
    - Subject: COVID-19, Respiratory symptom, Other audio, Cough, Asthma, Influenza.
    - Description: The UK COVID-19 Vocal Audio Dataset Open Access Edition is designed for the training and evaluation of machine learning models that classify SARS-CoV-2 infection status or associated respiratory symptoms using vocal audio. The UK Health Security Agency recruited voluntary participants through the national Test and Trace programme and the REACT-1 survey in England from March 2021 to March 2022, during dominant transmission of the Alpha and Delta SARS-CoV-2 variants and some Omicron variant sublineages. Audio recordings of volitional coughs and exhalations were collected in the 'Speak up to help beat coronavirus' digital survey alongside demographic, self-reported symptom and respiratory condition data, and linked to SARS-CoV-2 test results. The UK COVID-19 Vocal Audio Dataset Open Access Edition represents the largest collection of SARS-CoV-2 PCR-referenced audio recordings to date. PCR results were linked to 70,794 of 72,999 participants and 24,155 of 25,776 positive cases. Respiratory symptoms were reported by 45.62% of participants. This dataset has additional potential uses for bioacoustics research, with 11.30% of participants reporting asthma, and 27.20% with linked influenza PCR test results.
    - Publisher: The UK Health Security Agency (UKHSA).
    - Contributor: The UK Health Security Agency (UKHSA) and The Alan Turing Institute.
    - Date: 2021-03/2022-03.
    - Type: Dataset.
    - Format: Waveform Audio File Format (audio/wave), Comma-separated values (text/csv).
    - Identifier: 10.5281/zenodo.10043978.
    - Source: The UK COVID-19 Vocal Audio Dataset Protected Edition, accessed via application to 'Accessing UKHSA protected data' (https://www.gov.uk/government/publications/accessing-ukhsa-protected-data/accessing-ukhsa-protected-data).
    - Language: eng.
    - Relation: The UK COVID-19 Vocal Audio Dataset Protected Edition, accessed via application to 'Accessing UKHSA protected data' (https://www.gov.uk/government/publications/accessing-ukhsa-protected-data/accessing-ukhsa-protected-data).
    - Coverage: United Kingdom, 2021-03/2022-03.
    - Rights: Open Government Licence version 3 (OGL v.3), © Crown Copyright UKHSA 2023.
    - accessRights: When you use this information under the Open Government Licence, include the following attribution: The UK COVID-19 Vocal Audio Dataset Open Access Edition, UK Health Security Agency, 2023, licensed under the Open Government Licence v3.0 (https://www.nationalarchives.gov.uk/doc/open-government-licence/), and cite the papers detailed above.
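
    The contents list above implies a simple relational layout keyed on a participant identifier. The following is a minimal Python sketch of assembling the three metadata tables and selecting one train split; the join key, split column name, and split encoding are assumptions inferred from the description, not verified against the released files.

        # Sketch: join the open-access metadata tables on the participant key
        # and select the 'Standard' train split. Column names are assumptions.
        import pandas as pd

        participants = pd.read_csv("participant_metadata.csv")
        audio = pd.read_csv("audio_metadata.csv")
        splits = pd.read_csv("train_test_splits.csv")

        df = (participants
              .merge(audio, on="participant_identifier")    # assumed key column
              .merge(splits, on="participant_identifier"))

        train = df[df["standard_split"] == "train"]         # assumed split encoding
        print(len(train), "rows in the assumed 'Standard' train split")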

    Literaturverzeichnis [Bibliography]


    Standing the test of time: targeting thymidylate biosynthesis in cancer therapy
