210 research outputs found

    I hear you eat and speak: automatic recognition of eating condition and food type, use-cases, and impact on ASR performance

    We propose a new recognition task in the area of computational paralinguistics: automatic recognition of eating conditions in speech, i.e., whether people are eating while speaking, and what they are eating. To this end, we introduce the audio-visual iHEARu-EAT database, featuring 1.6k utterances of 30 subjects (mean age: 26.1 years, standard deviation: 2.66 years, gender balanced, German speakers), six types of food (Apple, Nectarine, Banana, Haribo Smurfs, Biscuit, and Crisps), and both read and spontaneous speech; the database is made publicly available for research purposes. We start by demonstrating that for automatic speech recognition (ASR), it pays off to know whether speakers are eating or not. We then propose automatic classification using both brute-forced low-level acoustic features and higher-level features related to intelligibility, obtained from an automatic speech recogniser. Prediction of the eating condition was performed with a Support Vector Machine (SVM) classifier in a leave-one-speaker-out evaluation framework. Results show that the binary prediction of eating condition (i.e., eating or not eating) can easily be solved independently of the speaking condition; the obtained average recalls are all above 90%. Low-level acoustic features provide the best performance on spontaneous speech, reaching up to 62.3% average recall for multi-way classification of the eating condition, i.e., discriminating the six types of food as well as not eating. Early fusion of the intelligibility-related features with the brute-forced acoustic feature set improves performance on read speech, reaching 66.4% average recall for the multi-way classification task. Analysing features and classifier errors leads to a suitable ordinal scale for eating conditions, on which automatic regression can be performed with a determination coefficient of up to 56.2%.
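
    The leave-one-speaker-out SVM evaluation described above can be sketched as follows, assuming a precomputed feature matrix; the arrays features, labels, and speakers are hypothetical placeholders (not part of the iHEARu-EAT release), and the metric is the unweighted average (macro) recall reported in the abstract.

    import numpy as np
    from sklearn.metrics import recall_score
    from sklearn.model_selection import LeaveOneGroupOut
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Hypothetical stand-ins: acoustic functionals per utterance, the 7-way
    # eating-condition label (six foods + not eating), and the speaker ID.
    rng = np.random.default_rng(0)
    features = rng.normal(size=(300, 88))
    labels = rng.integers(0, 7, size=300)
    speakers = rng.integers(0, 30, size=300)

    recalls = []
    for train_idx, test_idx in LeaveOneGroupOut().split(features, labels, groups=speakers):
        clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
        clf.fit(features[train_idx], labels[train_idx])
        pred = clf.predict(features[test_idx])
        # Unweighted average recall for the held-out speaker; zero_division
        # guards against classes absent from a small test fold.
        recalls.append(recall_score(labels[test_idx], pred, average="macro", zero_division=0))

    print(f"Mean unweighted average recall over speakers: {np.mean(recalls):.3f}")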

    Computer audition for emotional wellbeing

    This thesis is focused on the application of computer audition (i.e., machine listening) methodologies for monitoring states of emotional wellbeing. Computer audition is a growing field and has been successfully applied to an array of use cases in recent years. There are several advantages to audio-based computational analysis; for example, audio can be recorded non-invasively, stored economically, and can capture rich information on happenings in a given environment, e.g., human behaviour. With this in mind, maintaining emotional wellbeing is a challenge for humans, and emotion-altering conditions, including stress and anxiety, have become increasingly common in recent years. Such conditions manifest in the body, inherently changing how we express ourselves. Research shows these alterations are perceivable within vocalisation, suggesting that speech-based audio monitoring may be valuable for developing artificially intelligent systems that target improved wellbeing. Furthermore, computer audition applies machine learning and other computational techniques to audio understanding, so by combining computer audition with applications in the domain of computational paralinguistics and emotional wellbeing, this research concerns the broader field of empathy for Artificial Intelligence (AI). To this end, speech-based audio modelling that incorporates and understands paralinguistic wellbeing-related states may be a vital cornerstone for improving the degree of empathy that an artificial intelligence has. To summarise, this thesis investigates the extent to which speech-based computer audition methodologies can be utilised to understand human emotional wellbeing. A fundamental background on the fields in question as they pertain to emotional wellbeing is first presented, followed by an outline of the applied audio-based methodologies. Next, detail is provided for several machine learning experiments focused on emotional wellbeing applications, including analysis and recognition of under-researched phenomena in speech, e.g., anxiety and markers of stress. Core contributions from this thesis include the collection of several related datasets, hybrid fusion strategies for an emotional gold standard, novel machine learning strategies for data interpretation, and an in-depth acoustic-based computational evaluation of several human states. All of these contributions focus on ascertaining the advantage of audio in the context of modelling emotional wellbeing. Given the sensitive nature of human wellbeing, the ethical implications involved with developing and applying such systems are discussed throughout.

    A survey on perceived speaker traits: personality, likability, pathology, and the first challenge

    The INTERSPEECH 2012 Speaker Trait Challenge aimed at providing a unified test-bed for perceived speaker traits – the first challenge of its kind: personality in the five OCEAN personality dimensions, likability of speakers, and intelligibility of pathologic speakers. In the present article, we give a brief overview of the state-of-the-art in these three fields of research and describe the three sub-challenges in terms of the challenge conditions, the baseline results provided by the organisers, and a new openSMILE feature set, which has been used for computing the baselines and which has been provided to the participants. Furthermore, we summarise the approaches and the results presented by the participants to show the various techniques that are currently applied to solve these classification tasks.

    The Geneva Minimalistic Acoustic Parameter Set (GeMAPS) for Voice Research and Affective Computing

    Work on voice sciences over recent decades has led to a proliferation of acoustic parameters that are used quite selectively and are not always extracted in a similar fashion. With many independent teams working in different research areas, shared standards become an essential safeguard to ensure compliance with state-of-the-art methods, allowing appropriate comparison of results across studies and potential integration and combination of extraction and recognition systems. In this paper, we propose a basic standard acoustic parameter set for various areas of automatic voice analysis, such as paralinguistic or clinical speech analysis. In contrast to a large brute-force parameter set, we present a minimalistic set of voice parameters here. These were selected based on a) their potential to index affective physiological changes in voice production, b) their proven value in former studies as well as their automatic extractability, and c) their theoretical significance. The set is intended to provide a common baseline for evaluation of future research and to eliminate differences caused by varying parameter sets or even different implementations of the same parameters. Our implementation is publicly available with the openSMILE toolkit. Comparative evaluations of the proposed feature set and large baseline feature sets of INTERSPEECH challenges show a high performance of the proposed set in relation to its size.
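
    As a practical illustration, the proposed parameter set can be extracted with the openSMILE toolkit; the sketch below uses the toolkit's Python wrapper, assuming the opensmile package is installed and speech.wav stands in for any mono recording to analyse.

    import opensmile

    # Configure openSMILE for the minimalistic GeMAPS set, aggregated to
    # one vector of functionals per input file.
    smile = opensmile.Smile(
        feature_set=opensmile.FeatureSet.GeMAPSv01b,
        feature_level=opensmile.FeatureLevel.Functionals,
    )

    # Returns a pandas DataFrame with one row per file and one column per
    # GeMAPS functional (frequency, energy, and spectral parameters).
    features = smile.process_file("speech.wav")
    print(features.shape)
    print(list(features.columns)[:5])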

    An analytical review of audiovisual systems for detecting personal protective equipment on the human face

    Since 2019, all countries of the world have faced the rapid spread of the pandemic caused by the COVID-19 coronavirus infection, which the global community continues to fight to the present day. Despite the evident effectiveness of personal respiratory protective equipment against coronavirus infection, many people neglect to wear protective face masks in public places. Therefore, to monitor compliance and promptly identify violators of public health regulations, modern information technologies are needed that detect protective masks on people's faces from video and audio information. The article presents an analytical review of existing and emerging intelligent information technologies for bimodal analysis of the voice and facial characteristics of a masked person. There are many studies on detecting masks in video images, and a considerable number of publicly available corpora contain images of faces both with and without masks, collected in various ways. Research and development aimed at detecting personal respiratory protective equipment from the acoustic characteristics of human speech is still scarce, since this direction only began to develop during the pandemic caused by the COVID-19 coronavirus infection. Existing systems help prevent the spread of coronavirus infection by recognising the presence or absence of masks on the face, and they also assist in the remote diagnosis of COVID-19 by detecting the first symptoms of the viral infection from acoustic characteristics. However, a number of problems in the automatic detection of COVID-19 symptoms and of masks on people's faces remain unresolved to date. First of all, the accuracy of detecting masks and coronavirus infection is still low, which prevents automatic diagnosis without the involvement of experts (medical personnel). Many systems cannot operate in real time, which makes it impossible to control and monitor the wearing of protective masks in public places. In addition, most existing systems cannot be embedded into a smartphone so that users could test for coronavirus infection anywhere. Another major problem is the collection of data from patients infected with COVID-19, as many people are unwilling to share confidential information.