260 research outputs found

    The electronic stethoscope


    Certainty Modeling of a Decision Support System for Mobile Monitoring of Exercise-induced Respiratory Conditions

    Mobile health systems have notably improved the healthcare sector in recent times by empowering patients to participate actively in their own health and by facilitating access to healthcare professionals. Effective operation of these mobile systems nonetheless requires a high level of intelligence and expertise, implemented in the form of decision support systems (DSS). Common challenges in implementation, however, include generalization and reliability, owing to the dynamics and incompleteness of the information presented to the inference models. In this paper, we advance the use of an ad hoc mobile decision support system to monitor and detect triggers and early symptoms of respiratory distress provoked by strenuous physical exertion. The focus is on the application of certainty theory to model inexact reasoning by the mobile monitoring system. The aim is to develop a mobile tool that assists patients in managing their conditions and provides objective clinical data to aid physicians in the screening, diagnosis, and treatment of respiratory ailments. We present the proposed model architecture, describe an application scenario in a clinical setting, and show an implementation of the aspect of the system that supports patients in the self-management of their conditions.
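    The abstract names certainty theory but not its combination rules. The sketch below shows the classic MYCIN-style certainty factor (CF) combination on which inexact-reasoning DSS implementations of this kind are commonly built; the evidence names and CF values are hypothetical, not taken from the paper.

```python
def combine_cf(cf1: float, cf2: float) -> float:
    """Combine two certainty factors for the same hypothesis
    (classic MYCIN rule; each CF lies in [-1, 1])."""
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)
    if cf1 < 0 and cf2 < 0:
        return cf1 + cf2 * (1 + cf1)
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

# Hypothetical example: two pieces of evidence for the hypothesis
# "exercise-induced respiratory distress".
cf_wheeze = 0.6      # assumed CF from detected wheezing
cf_spo2_drop = 0.5   # assumed CF from a drop in SpO2
print(combine_cf(cf_wheeze, cf_spo2_drop))  # 0.8
```

    Under this rule, two moderately confident pieces of supporting evidence reinforce each other without the combined certainty ever exceeding 1.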

    Models and Analysis of Vocal Emissions for Biomedical Applications

    The MAVEBA Workshop proceedings, published on a biennial basis, collect the scientific papers presented as both oral and poster contributions during the conference. The main subjects are the development of theoretical and mechanical models as an aid to the study of the main phonatory dysfunctions, as well as biomedical engineering methods for the analysis of voice signals and images as a support to the clinical diagnosis and classification of vocal pathologies.

    Heart sounds: from animal to patient and mHealth


    ILSA 2017 in Tromsø: proceedings from the 42nd annual conference of the International Lung Sound Association

    Edited by Hasse Medbye, with contributions from several authors.

    The usefulness of lung auscultation is changing. It depends on how well practitioners understand the generation of sounds, and on their knowledge of how lung sounds are associated with lung and heart diseases as well as with other factors such as ageing and smoking habits. In clinical practice, practitioners need to give sufficient attention to lung auscultation, and they should use the same terminology, or at least understand each other's use of terms. Technological innovations are extending the use of lung auscultation: continuous monitoring of lung sounds is now possible, and computers can extract more information from the complex lung sounds than human hearing can. Learning how to carry out lung auscultation and how to interpret the sounds are essential skills in the education of doctors and other health professionals, so new computer-based learning tools for the study of recorded sounds will be helpful. This conference focuses on all these determinants of efficient lung auscultation. In addition to free oral presentations, we have three symposia: on computerized analysis based on machine learning, on diagnostics, and on learning lung sounds, including the psychology of hearing. The symposia include extended presentations from invited speakers. The 42nd conference is the first in history arranged by a research unit for general practice. Primary care doctors are probably the group of health professionals that places the greatest emphasis on lung auscultation in their clinical work. Many patients with chest symptoms consult without a known diagnosis, and several studies have shown that general practitioners pay attention to crackles and wheezes when making decisions, for instance when antibiotics are prescribed to coughing patients. In hospital, the diagnosis of lung diseases is more strongly influenced by technologies such as radiography and blood gas analysis. Since lung auscultation holds a strong position in the work of primary care doctors, I think it is timely that the 42nd ILSA conference is hosted by the General Practice Research Unit in Tromsø. I hope all participants will find presentations of importance, and that the stay in Tromsø will be enjoyable.

    State of the art of audio- and video-based solutions for AAL

    Working Group 3: Audio- and Video-based AAL Applications.

    Europe is facing ever more crucial challenges in health and social care due to demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, highlighting the need to take action. Active and Assisted Living (AAL) technologies offer a viable approach to these challenges, thanks to their high potential for enabling remote care and support. Broadly speaking, AAL refers to the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply people in need with smart assistance that responds to their needs for autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, owing to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments.

    Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent: able to learn, to adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them into their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages in terms of unobtrusiveness and information richness. Cameras and microphones are far less obtrusive than the hindrance other wearable sensors may cause to one's activities, and a single camera placed in a room can record most of the activities performed there, replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, movements and overall condition of the assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems: they have a large sensing range, do not require physical presence at a particular location, and are physically intangible. Moreover, relevant information about individuals' activities and health status can be derived by processing audio signals (e.g., speech recordings). As the other side of the coin, however, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals, owing to the richness of the information they convey and the intimate settings where they may be deployed. Solutions that ensure privacy preservation by context and by design, and that meet high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature. A multidisciplinary debate among experts and stakeholders is paving the way towards AAL that ensures ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach.

    This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time, and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethics-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and of how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects, and highlights the open challenges. The report ends with an overview of the challenges, hindrances and opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, it illustrates the current procedural and technological approaches to acceptability, usability and trust in AAL technology, surveying strategies and approaches to co-design, to privacy preservation in video and audio data, to transparency and explainability in data processing, and to data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potential of the silver economy is overviewed.
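    The report names remote monitoring of vital signs such as respiratory rate as an AAL function without describing the algorithms. Below is a minimal illustrative sketch of one common approach: picking the dominant spectral peak of a camera-derived chest-motion or intensity trace. The function name, frequency band, and synthetic test signal are assumptions for illustration only.

```python
import numpy as np

def respiratory_rate_bpm(trace: np.ndarray, fs: float) -> float:
    """Estimate respiratory rate (breaths/min) from a 1-D motion or
    intensity trace via the dominant spectral peak in 0.1-0.5 Hz."""
    x = trace - trace.mean()                    # remove the DC offset
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= 0.1) & (freqs <= 0.5)      # roughly 6-30 breaths/min
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return peak_freq * 60.0

# Synthetic check: 0.25 Hz breathing (15 breaths/min) sampled at 30 fps
fs = 30.0
t = np.arange(0, 60, 1.0 / fs)
trace = np.sin(2 * np.pi * 0.25 * t) + 0.1 * np.random.randn(t.size)
print(round(respiratory_rate_bpm(trace, fs), 1))  # ~15.0
```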

    Automatic analysis and classification of cardiac acoustic signals for long term monitoring

    Objective: Cardiovascular diseases are the leading cause of death worldwide, resulting in over 17.9 million deaths each year. Most of these diseases are preventable and treatable, but their progression and outcomes are significantly more positive with early-stage diagnosis and proper disease management. Among the approaches available to assist with early-stage diagnosis and management of cardiac conditions, automatic analysis of auscultatory recordings is one of the most promising, since it is particularly suitable for ambulatory/wearable monitoring. Proper investigation of the abnormalities present in cardiac acoustic signals can therefore provide vital clinical information to assist long-term monitoring. Cardiac acoustic signals, however, are very susceptible to noise and artifacts, and their characteristics vary greatly with the recording conditions, which makes the analysis challenging. There are additional challenges in the steps used for automatic analysis and classification of cardiac acoustic signals: broadly, the segmentation, feature extraction, and subsequent classification of the recorded signals using selected features. This thesis presents approaches using novel features, with the aim of assisting the automatic early-stage detection of cardiovascular diseases with improved performance, using cardiac acoustic signals collected in real-world conditions.

    Methods: Cardiac auscultatory recordings were studied to identify potential features to help in the classification of recordings from subjects with and without cardiac diseases. The diseases considered for the identification of symptoms and characteristics are the valvular heart diseases due to stenosis and regurgitation, atrial fibrillation, and splitting of the fundamental heart sounds leading to additional lub/dub sounds in the systole or diastole interval of a cardiac cycle. The localisation of the cardiac sounds of interest was performed using adaptive wavelet-based filtering in combination with the Shannon energy envelope and prior information about the fundamental heart sounds; this is a prerequisite step for feature extraction and subsequent classification, leading to a more precise diagnosis. Localised segments of S1 and S2 sounds, and artifacts, were used to extract a set of perceptual and statistical features using the wavelet transform, homomorphic filtering, the Hilbert transform and mel-scale filtering, which were then used to train an ensemble classifier to interpret S1 and S2 sounds. Once the sound peaks of interest were identified, features extracted from these peaks, together with the features used for identifying S1 and S2, were used to develop an algorithm to classify the recorded signals. Overall, 99 features were extracted and statistically analysed using neighborhood component analysis (NCA) to identify those with the greatest ability to classify recordings. The selected features were then used to train an ensemble classifier to detect abnormal recordings, and its hyperparameters were optimized to evaluate the performance of the trained classifier. Thus, a machine learning-based approach is presented for the automatic identification and classification of S1 and S2, and of normal and abnormal recordings, in real-world noisy recordings using a novel feature set. The validity of the proposed algorithms was tested using acoustic signals recorded in real-world, non-controlled environments at four auscultation sites (aortic, tricuspid, mitral, and pulmonary valves), from subjects with and without cardiac diseases, together with recordings from three large public databases. The performance of the methodology was evaluated in terms of classification accuracy (CA), sensitivity (SE), precision (P+), and F1 score.

    Results: This thesis proposes four different algorithms to automatically classify, from cardiac acoustic signals: the fundamental heart sounds S1 and S2; normal fundamental sounds versus abnormal additional lub/dub sounds; normal versus abnormal recordings; and recordings with heart valve disorders, namely mitral stenosis (MS), mitral regurgitation (MR), mitral valve prolapse (MVP), aortic stenosis (AS) and murmurs. The results were as follows:

    • The algorithm to classify S1 and S2 sounds achieved an average SE of 91.59% and 89.78%, and F1 scores of 90.65% and 89.42%, for S1 and S2 respectively. 87 features were extracted and statistically studied to identify the top 14 features with the best ability to distinguish S1, S2, and artifacts. The analysis showed that the most relevant features were those extracted using the Maximal Overlap Discrete Wavelet Transform (MODWT) and the Hilbert transform.

    • The algorithm to classify normal fundamental heart sounds versus abnormal additional lub/dub sounds in the systole or diastole intervals of a cardiac cycle achieved an average SE of 89.15%, P+ of 89.71%, F1 of 89.41%, and CA of 95.11% on the test dataset from the PASCAL database. The top 10 features with the highest weights in classifying these recordings were also identified.

    • Classification of normal versus abnormal recordings using the proposed algorithm achieved a mean CA of 94.172% and SE of 92.38% across recordings from the different databases. Among the top 10 acoustic features identified, the deterministic energy of the sound peaks of interest and the instantaneous frequency extracted using the Hilbert-Huang transform achieved the highest weights.

    • The machine learning-based approach proposed to classify recordings of heart valve disorders (AS, MS, MR, and MVP) achieved an average CA of 98.26% and SE of 95.83%. 99 acoustic features were extracted, and their ability to differentiate these abnormalities was examined using weights obtained from NCA. The top 10 features with the greatest ability to classify these abnormalities across recordings from the different databases were also identified.

    The achieved results demonstrate the ability of the algorithms to automatically identify and classify cardiac sounds. This work provides the basis for measuring many clinically useful attributes of cardiac acoustic signals and can potentially help in monitoring overall cardiac health over longer durations. The work presented in this thesis is the first of its kind to validate its results using both normal and pathological cardiac acoustic signals, recorded for a continuous duration of 5 minutes at four different auscultation sites in non-controlled, real-world conditions.
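    The thesis localises S1/S2 by combining adaptive wavelet filtering with the Shannon energy envelope; the exact formulation is not given in the abstract, so the sketch below shows only a standard Shannon energy envelope step. The window length and normalisation constants are illustrative assumptions.

```python
import numpy as np

def shannon_energy_envelope(x: np.ndarray, fs: int, win_ms: float = 20.0) -> np.ndarray:
    """Shannon energy envelope commonly used to localise S1/S2 peaks
    in phonocardiograms (one standard formulation)."""
    x = x / (np.max(np.abs(x)) + 1e-12)         # normalise to [-1, 1]
    se = -(x ** 2) * np.log(x ** 2 + 1e-12)     # per-sample Shannon energy
    win = max(1, int(fs * win_ms / 1000))       # moving-average smoothing window
    env = np.convolve(se, np.ones(win) / win, mode="same")
    return (env - env.mean()) / (env.std() + 1e-12)

# Candidate S1/S2 peaks can then be picked from the envelope, e.g.:
# from scipy.signal import find_peaks
# env = shannon_energy_envelope(pcg, fs=2000)
# peaks, _ = find_peaks(env, height=1.0, distance=int(0.2 * 2000))
```

    Squaring attenuates low-amplitude noise while the logarithm compresses strong peaks, which is why this envelope emphasises the medium-intensity S1/S2 components over both background noise and sharp artifacts.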

    Models and Analysis of Vocal Emissions for Biomedical Applications

    The International Workshop on Models and Analysis of Vocal Emissions for Biomedical Applications (MAVEBA) came into being in 1999 out of a strongly felt need to share know-how, objectives and results between areas that until then had seemed quite distinct, such as bioengineering, medicine and singing. MAVEBA deals with all aspects of the study of the human voice, with applications ranging from the newborn to the adult and elderly. Over the years the initial issues have grown and also spread to other fields of research, such as occupational voice disorders, neurology, rehabilitation, and image and video analysis. MAVEBA takes place every two years in Firenze, Italy. This edition celebrates twenty-two years of uninterrupted and successful research in the field of voice analysis.

    Obstructive sleep apnea severity classification using sleep breathing sounds

    Doctoral dissertation, Department of Transdisciplinary Studies, Graduate School of Convergence Science and Technology, Seoul National University, August 2017. Advisor: Kyogu Lee.

    Obstructive sleep apnea (OSA) is a common sleep disorder. The symptom has a high prevalence and increases mortality as a risk factor for hypertension and stroke. Because sleep disorders occur during sleep, it is difficult for patients to perceive them themselves, and the actual diagnosis rate is low. Despite the existence of a standard sleep study, polysomnography (PSG), diagnosing sleep disorders remains difficult because of the complicated test procedure and the high medical cost burden. There is therefore an increasing demand for an effective and rational screening test that can determine whether a PSG is warranted. In this thesis, we conducted three studies to classify snoring sounds and OSA severity using only breathing sounds recorded during sleep, without additional biosensors. We first established that snoring sounds related to sleep disorders can be classified using features based on cyclostationary analysis. We then classified patients' OSA severity with features extracted from long-term sleep breathing sounds using temporal and cyclostationary analysis. Finally, partial sleep sound extraction and a feature learning process using a convolutional neural network (CNN, or ConvNet) were applied to improve the efficiency and performance of the previous snoring sound and OSA severity classification tasks. The sleep breathing sound analysis method using a CNN showed a classification accuracy of more than 80% (average area under the curve > 0.8) in the multiclass snoring sound and OSA severity classification tasks. The proposed analysis and classification method is expected to serve as a screening tool that improves the efficiency of PSG in future customized healthcare services.

    Contents:
    Chapter 1. Introduction
    1.1 Personal healthcare in sleep
    1.2 Existing approaches and limitations
    1.3 Clinical information related to SRBD
    1.4 Study objectives
    Chapter 2. Overview of Sleep Research using Sleep Breathing Sounds
    2.1 Previous goals of studies
    2.2 Recording environments and related configurations
    2.3 Sleep breathing sound analysis
    2.4 Sleep breathing sound classification
    2.5 Current limitations
    Chapter 3. Multiple SRBD-related Snoring Sound Classification
    3.1 Introduction
    3.2 System architecture
    3.3 Evaluation
    3.4 Results
    3.5 Discussion
    3.6 Summary
    Chapter 4. Patient OSA Severity Classification
    4.1 Introduction
    4.2 Existing approaches
    4.3 System architecture
    4.4 Evaluation
    4.5 Results
    4.6 Discussion
    4.7 Summary
    Chapter 5. Patient OSA Severity Prediction using Deep Learning Techniques
    5.1 Introduction
    5.2 Methods
    5.3 Results
    5.4 Discussion
    5.5 Summary
    Chapter 6. Conclusions and Future Work
    6.1 Conclusions
    6.2 Future work
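    The abstract reports CNN-based feature learning on sleep breathing sounds without specifying the architecture. The sketch below is a minimal ConvNet over log-mel spectrogram patches in that spirit; the layer sizes, input shape, and four-way severity labels are assumptions, not the thesis's actual design.

```python
import torch
import torch.nn as nn

class SnoreCNN(nn.Module):
    """Minimal ConvNet over single-channel log-mel spectrogram patches."""
    def __init__(self, n_classes: int = 4):  # e.g. normal/mild/moderate/severe
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),   # pool each feature map to one value
            nn.Flatten(),
            nn.Linear(32, n_classes),  # class logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# A batch of 8 hypothetical 64x128 log-mel patches -> severity logits
logits = SnoreCNN()(torch.randn(8, 1, 64, 128))
print(logits.shape)  # torch.Size([8, 4])
```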