
    Analysis of Respiratory Sounds: State of the Art

    Objective: This paper describes the state of the art, scientific publications, and ongoing research related to methods for the analysis of respiratory sounds. Methods and material: Review of the current medical and technological literature using PubMed, combined with personal experience. Results: The study includes a description of the various techniques used to collect auscultation sounds and a physical description of the known pathologic sounds for which automatic detection tools have been developed. Modern tools are based on artificial intelligence and on techniques such as artificial neural networks, fuzzy systems, and genetic algorithms… Conclusion: The next step will consist in finding new markers to increase the efficiency of decision-aid algorithms and tools.
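
    As a hedged illustration of the AI techniques the survey names, the sketch below classifies auscultation segments with a small neural network trained on spectral statistics. The feature choice, network size, and placeholder labels are assumptions made for this example only, not a reconstruction of any surveyed tool.

```python
# Illustrative sketch: respiratory-sound classification with a small neural
# network (one of the AI techniques mentioned in the survey). Features and
# labels here are placeholders, not from any of the reviewed systems.
import numpy as np
from scipy.signal import spectrogram
from sklearn.neural_network import MLPClassifier

def segment_features(audio, fs=4000):
    """Summarize one auscultation segment as per-band log-energy statistics."""
    _, _, sxx = spectrogram(audio, fs=fs, nperseg=256, noverlap=128)
    log_sxx = np.log(sxx + 1e-10)
    # Mean and variance per frequency band roughly capture wheeze/crackle energy.
    return np.concatenate([log_sxx.mean(axis=1), log_sxx.var(axis=1)])

rng = np.random.default_rng(0)
# X: one feature vector per labelled segment; y: e.g. 0 = normal, 1 = wheeze.
X = np.stack([segment_features(rng.standard_normal(4000)) for _ in range(40)])
y = rng.integers(0, 2, size=40)  # placeholder labels for the sketch

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X, y)
print(clf.predict(X[:3]))  # predicted classes for three segments
```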

    Recognizing swallowing movements using a textile-based device

    Dysphagia can stem from various etiologies and cause several serious complications. Instrumental methods for evaluating swallowing require special equipment that is not available everywhere, so an instrumental means of evaluating swallowing that could be used outside a hospital setting would be valuable. Dual-axis accelerometers have been used in earlier research to recognize swallowing movements, but no textile-based approaches have been reported. In this study, we developed a textile-based prototype device for identifying swallowing movements. The device used accelerometers and gyroscopes, with eight sensors attached to the fabric. Two female participants were asked to perform two tasks while wearing the device around the neck: sitting still and taking 10 sips of water. The sensor attached at the level of the thyroid notch and the two sensors horizontally aligned on both sides at the level of the hyoid bone were the most accurate in recognizing swallowing movements. No sensor alone could recognize all swallows; however, all swallows were identified using the combined data from the sensors. Based on these preliminary results, it appears that a textile-based device using accelerometers and gyroscopes can identify swallowing movements.
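
    The study's key observation, that no single sensor caught every swallow but the pooled data did, suggests a simple cross-sensor pooling rule. The sketch below is a minimal stand-in under assumed array shapes and a made-up threshold; it is not the study's actual detection procedure.

```python
# Minimal sketch: pool detections across neck-mounted IMU sensors so that a
# swallow missed by one sensor can still be caught by another. Threshold,
# sampling rate, and array layout are illustrative assumptions.
import numpy as np

def detect_swallows(imu, fs=100, threshold=0.5, min_gap_s=1.0):
    """imu: (n_sensors, n_samples) motion magnitude per sensor, e.g. the norm
    of its accelerometer and gyroscope channels."""
    # A sample is an event candidate if ANY sensor exceeds the threshold.
    active = (np.abs(imu) > threshold).any(axis=0)
    events, last_idx = [], -np.inf
    for i in np.flatnonzero(active):
        if i - last_idx > min_gap_s * fs:   # far enough from the last event
            events.append(i / fs)           # record onset time in seconds
        last_idx = i
    return events

fs = 100
rng = np.random.default_rng(1)
imu = 0.1 * rng.standard_normal((8, 30 * fs))
imu[3, 500:550] += 1.0    # swallow seen only by the thyroid-notch sensor
imu[5, 1500:1550] += 1.0  # swallow seen only by a hyoid-level sensor
print(detect_swallows(imu, fs=fs))  # both events recovered from pooled data
```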

    Detecting Eating Episodes with an Ear-mounted Sensor

    In this paper, we propose Auracle, a wearable earpiece that can automatically recognize eating behavior; more specifically, in free-living conditions, it can recognize when and for how long a person is eating. Using an off-the-shelf contact microphone placed behind the ear, Auracle captures the sound of a person chewing as it passes through the bone and tissue of the head. This audio data is then processed by a custom analog/digital circuit board. To ensure reliable (yet comfortable) contact between microphone and skin, all hardware components are incorporated into a 3D-printed behind-the-head framework. We collected field data with 14 participants for 32 hours in free-living conditions and additional eating data with 10 participants for 2 hours in a laboratory setting. We achieved accuracy exceeding 92.8% and an F1 score exceeding 77.5% for eating detection. Moreover, Auracle successfully detected 20-24 eating episodes (depending on the metric) out of 26 in free-living conditions. We demonstrate that our custom device can sense, process, and classify audio data in real time. Additionally, we estimate that Auracle can last 28.1 hours on a 110 mAh battery while communicating its observations of eating behavior to a smartphone over Bluetooth.
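
    As a hedged sketch of the pipeline shape described above (score short frames of contact-microphone audio for chewing-like energy, then merge positive frames into eating episodes), the following is illustrative only; the band limits, threshold, and merge gap are assumptions, and Auracle's actual features and classifier are not reproduced here.

```python
# Sketch: frame-level chewing scores merged into eating episodes. All
# parameter values are illustrative assumptions, not Auracle's.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def frame_scores(audio, fs=8000, frame_s=1.0):
    """Per-frame RMS in a low-frequency band where bone-conducted chewing
    energy is concentrated (band edges assumed for this sketch)."""
    sos = butter(4, [20, 500], btype="bandpass", fs=fs, output="sos")
    band = sosfiltfilt(sos, audio)
    n = int(frame_s * fs)
    frames = band[: len(band) // n * n].reshape(-1, n)
    return np.sqrt((frames ** 2).mean(axis=1))

def episodes(scores, threshold, max_gap_frames=60):
    """Merge positive frames separated by <= max_gap_frames into episodes."""
    out = []
    for i in np.flatnonzero(scores > threshold):
        if out and i - out[-1][1] <= max_gap_frames:
            out[-1][1] = i          # extend the current episode
        else:
            out.append([i, i])      # start a new episode
    return out  # list of [start_frame, end_frame] pairs
```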

    COVID-19 and Computer Audition: An Overview on What Speech & Sound Analysis Could Contribute in the SARS-CoV-2 Corona Crisis

    At the time of writing, the world population is suffering from more than 10,000 registered deaths induced by the COVID-19 disease epidemic since the outbreak, more than three months ago, of the coronavirus now officially known as SARS-CoV-2. Since then, tremendous efforts have been made worldwide to counter and control the epidemic, by now labelled a pandemic. In this contribution, we provide an overview of the potential of computer audition (CA), i.e., the use of speech and sound analysis by artificial intelligence, to help in this scenario. We first survey which types of related or contextually significant phenomena can be automatically assessed from speech or sound. These include the automatic recognition and monitoring of breathing, dry and wet coughing or sneezing sounds, speech under cold, eating behaviour, sleepiness, or pain, to name but a few. Then, we consider potential use cases for exploitation. These include risk assessment and diagnosis based on symptom histograms and their development over time, as well as monitoring of spread, social distancing and its effects, treatment and recovery, and patient wellbeing. We then briefly outline the challenges that need to be faced for real-life usage. We come to the conclusion that CA appears ready for the implementation of (pre-)diagnosis and monitoring tools, and more generally offers rich and significant, yet so far untapped, potential in the fight against the spread of COVID-19.

    Source Separation for Target Enhancement of Food Intake Acoustics from Noisy Recordings

    Automatic food intake monitoring can be significantly beneficial in the fight against obesity and in weight management in our society today. Different sensing modalities have been used in several research efforts to accomplish automatic food intake monitoring, with acoustic sensors being the most common. In this study, we explore the ability to learn spectral patterns of food intake acoustics from a clean signal and to use these learned patterns to extract the signal of interest from a noisy recording. Using standard metrics for the evaluation of blind source separation, namely signal-to-distortion ratio and signal-to-interference ratio, we observed up to 20 dB improvement in separation quality under very low signal-to-noise-ratio conditions. For a more practical performance evaluation of food intake monitoring, we compared the detection accuracy for chew events on the mixed/noisy signal versus on the estimated/separated target signal. We observed up to 60% improvement in chew event detection accuracy under low signal-to-noise-ratio conditions when using the estimated target signal rather than the mixed/noisy signal. Index Terms: food intake monitoring, audio source separation, nonnegative matrix factorization, harmonizable processes.
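
    The core idea above (learn spectral patterns from a clean signal, then use them to pull the target out of a mixture) maps naturally onto semi-supervised nonnegative matrix factorization. The sketch below is a generic textbook version under assumed parameters and stand-in signals, not the paper's exact algorithm.

```python
# Sketch: semi-supervised NMF source separation. A spectral dictionary is
# learned from clean food-intake audio, held fixed while fitting the noisy
# mixture alongside free "noise" atoms, and the target is reconstructed with
# a soft mask. Signals and all parameters here are illustrative stand-ins.
import numpy as np
from scipy.signal import stft, istft

def nmf(V, W=None, k=16, n_iter=200, fix_cols=0, rng=None):
    """Multiplicative-update NMF, V ~ W @ H (Frobenius cost). The first
    fix_cols columns of W are held fixed (the learned target dictionary)."""
    rng = rng or np.random.default_rng(0)
    if W is None:
        W = rng.random((V.shape[0], k)) + 1e-6
    H = rng.random((W.shape[1], V.shape[1])) + 1e-6
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-10)
        W_new = W * ((V @ H.T) / (W @ H @ H.T + 1e-10))
        W_new[:, :fix_cols] = W[:, :fix_cols]  # keep target atoms fixed
        W = W_new
    return W, H

fs = 8000
rng = np.random.default_rng(1)
clean = rng.standard_normal(4 * fs)        # stand-in clean target recording
mix = clean + rng.standard_normal(4 * fs)  # stand-in noisy mixture

_, _, Zc = stft(clean, fs=fs, nperseg=512)
_, _, Zm = stft(mix, fs=fs, nperseg=512)

W_food, _ = nmf(np.abs(Zc), k=16)                     # 1) learn target atoms
W0 = np.hstack([W_food, rng.random((W_food.shape[0], 16)) + 1e-6])
W, H = nmf(np.abs(Zm), W=W0, fix_cols=16)             # 2) fit the mixture
mask = (W[:, :16] @ H[:16]) / (W @ H + 1e-10)         # 3) soft target mask
_, target_est = istft(mask * Zm, fs=fs, nperseg=512)  # separated target
```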

    ROLE OF ULTRASOUND IN THYROID DISORDERS

    Ultrasonography has established itself as a useful tool in evaluating and managing thyroid disorders. This article provides an overview of the basic principles of ultrasound, its use in different thyroid disorders, the different sonographic patterns of thyroid disorders, comparative features of malignant and benign nodules, ultrasound features of diffuse and congenital thyroid disorders, ultrasound-guided FNAC, and advanced ultrasound techniques in thyroid imaging.

    High Fidelity Computational Modeling and Analysis of Voice Production

    This research aims to improve the fundamental understanding of the multiphysics nature of voice production, particularly the dynamic couplings among glottal flow, vocal fold vibration, and airway acoustics, through high-fidelity computational modeling and simulation. Built upon in-house numerical solvers, including an immersed-boundary-method based incompressible flow solver, a finite-element-method based solid mechanics solver, and a hydrodynamic/aerodynamic splitting method based acoustics solver, a fully coupled, continuum-mechanics based fluid-structure-acoustics interaction model was developed to simulate flow-induced vocal fold vibrations and sound production in birds and mammals. Extensive validations of the model were conducted by comparison with excised syringeal and laryngeal experiments. The results showed that, driven by realistic representations of physiology and experimental conditions, including the geometries, material properties, and boundary conditions, the model was in excellent agreement with the experiments on vocal fold vibration patterns, acoustics, and intraglottal flow dynamics, demonstrating that it can reproduce realistic phonatory dynamics during voice production. The model was then utilized to investigate the effect of vocal fold inner structures on voice production. Assuming the human vocal fold to be a three-layer structure, this research focused on the effect of longitudinal variation of layer thickness, as well as of the cover-body thickness ratio, on vocal fold vibrations. The results showed that the longitudinal variation of the cover and ligament layer thicknesses had little effect on the flow rate, vocal fold vibration amplitude, and vibration pattern, but affected the glottal angle in different coronal planes, which in turn influenced the energy transfer between the glottal flow and the vocal fold. The cover-body thickness ratio had a complex nonlinear effect on vocal fold vibration and voice production: increasing the ratio promoted the excitation of wave-type modes of the vocal fold, which are also higher-eigenfrequency modes, driving the vibrations to higher frequencies and creating complex nonlinear bifurcations. The results of this research have important clinical implications for voice disorder diagnosis and treatment, as voice disorders are often associated with changes in the mechanical status of the vocal fold tissues, and their treatment often focuses on restoring that mechanical status.
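
    The cover-body discussion above turns on how structural layering shifts the fold's eigenfrequencies. As a hedged, drastically simplified stand-in for the continuum three-layer model, the sketch below computes the eigenfrequencies of a two-degree-of-freedom "cover on body" chain and shows how redistributing mass between the layers moves the modes; all parameter values are invented for illustration.

```python
# Toy 2-DOF eigenfrequency calculation: a "cover" mass coupled to a "body"
# mass. This only illustrates that the cover-body split shifts modal
# frequencies; it is not the paper's continuum finite-element model.
import numpy as np

def eigenfrequencies(m_body, m_cover, k_body, k_cover):
    """Natural frequencies (Hz) of the chain wall--k_body--body--k_cover--cover."""
    M = np.diag([m_body, m_cover])
    K = np.array([[k_body + k_cover, -k_cover],
                  [-k_cover,          k_cover]])
    w2 = np.linalg.eigvals(np.linalg.solve(M, K))  # squared angular frequencies
    return np.sort(np.sqrt(np.real(w2))) / (2 * np.pi)

m_total = 2e-4  # total tissue mass per fold [kg], illustrative only
for cover_frac in (0.2, 0.4, 0.6):
    mc, mb = cover_frac * m_total, (1 - cover_frac) * m_total
    print(f"cover fraction {cover_frac}:",
          eigenfrequencies(mb, mc, k_body=200.0, k_cover=50.0))
```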

    DETECTION OF HEALTH-RELATED BEHAVIOURS USING HEAD-MOUNTED DEVICES

    The detection of health-related behaviors is the basis of many mobile-sensing applications for healthcare and can trigger other inquiries or interventions. Wearable sensors have been widely used for mobile sensing due to their ever-decreasing cost, ease of deployment, and ability to provide continuous monitoring. In this dissertation, we develop a generalizable approach to sensing eating-related behavior. First, we developed Auracle, a wearable earpiece that can automatically detect eating episodes. Using an off-the-shelf contact microphone placed behind the ear, Auracle captures the sound of a person chewing as it passes through the head. This audio data is then processed by a custom circuit board. We collected data with 14 participants for 32 hours in free-living conditions and achieved accuracy exceeding 92.8% and an F1 score exceeding 77.5% for eating detection with 1-minute resolution. Second, we adapted Auracle for measuring children’s eating behavior and improved the accuracy and robustness of the eating-activity detection algorithms. We used this improved prototype in a laboratory study with a sample of 10 children across 60 total sessions, collecting 22.3 hours of data in both meal and snack scenarios. Overall, we achieved 95.5% accuracy and a 95.7% F1 score for eating detection with 1-minute resolution. Third, we developed a computer-vision approach for eating detection in free-living scenarios. Using a miniature head-mounted camera, we collected data with 10 participants for about 55 hours. The camera was fixed under the brim of a cap, pointing at the mouth of the wearer and continuously recording video (but not audio) throughout their normal daily activity. We evaluated eating-detection performance using four different convolutional neural network (CNN) models. The best model achieved 90.9% accuracy and a 78.7% F1 score for eating detection with 1-minute resolution. Finally, we validated the feasibility of deploying the 3D CNN model on wearable or mobile platforms when considering computation, memory, and power constraints.
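
    As a hedged sketch of the kind of compact 3D CNN evaluated for the video-based eating detector, the model below classifies a short clip of frames as eating or not eating; the layer sizes, input shape, and clip length are assumptions, not the dissertation's actual architectures. Clip-level predictions could then be aggregated into 1-minute decisions.

```python
# Illustrative compact 3D CNN for clip-level eating detection (PyTorch).
# Layer sizes and input shape are assumptions for this sketch.
import torch
import torch.nn as nn

class Eating3DCNN(nn.Module):
    """Binary eating / not-eating classifier over a short stack of frames."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),   # global pooling over time and space
        )
        self.classifier = nn.Linear(32, 2)

    def forward(self, x):              # x: (batch, 3, frames, height, width)
        return self.classifier(self.features(x).flatten(1))

model = Eating3DCNN()
clip = torch.randn(1, 3, 16, 64, 64)   # one toy 16-frame RGB clip
print(model(clip).shape)               # torch.Size([1, 2])
```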

    Doctor of Philosophy

    Patients sometimes suffer apnea during sedation procedures or after general anesthesia. Apnea presents itself in two forms: respiratory depression (RD) and respiratory obstruction (RO). During RD the patient's airway is open but the drive to breathe is lost; during RO the airway is occluded while the patient tries to breathe. Patients' respiration is rarely monitored directly, and only in a few cases is it monitored with a capnometer. This dissertation explores the feasibility of monitoring respiration indirectly using an acoustic sensor. In addition to detecting apnea in general, this technique offers the possibility of differentiating between RD and RO. Data were recorded on 24 subjects as they underwent sedation, during which the subjects experienced RD or RO. The first part of this dissertation involved detecting periods of apnea from the recorded acoustic data. A method using a parameter estimation algorithm to determine the variance of the noise of the audio signal was developed, and the envelope of the audio data was used to determine when the subject had stopped breathing. Periods of apnea detected by the acoustic method were compared with the periods of apnea detected by the direct flow measurement. This succeeded, with 91.8% sensitivity and 92.8% specificity on the training set and 100% sensitivity and 98% specificity on the testing set. The second part of this dissertation used the periods during which apnea was detected to determine whether the subject was experiencing RD or RO. The classifications determined from the acoustic signal were compared with the classifications based on the flow measurement in conjunction with the chest and abdomen movements. This did not succeed, with an 86.9% sensitivity and 52.6% specificity on the training set, and 100% sensitivity and 0% specificity on the testing set. The third part of this project developed a method to reduce the background sounds that were commonly recorded by the microphone. Additive noise was created to simulate noise generated in typical settings, and the noise was removed via an adaptive filter. This succeeded in improving or maintaining apnea detection across the different types of sounds added to the breathing data.
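
    The first part's envelope idea lends itself to a compact sketch: estimate the acoustic noise floor, then flag stretches where the breathing-band envelope stays near that floor for longer than a breath cycle. The floor estimator below (a median) is a crude stand-in for the dissertation's parameter-estimation algorithm, and the band edges and thresholds are assumptions.

```python
# Sketch: envelope-based apnea candidates from breathing audio. The median
# noise-floor estimate and all thresholds are stand-in assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def apnea_segments(audio, fs, min_apnea_s=10.0, k=2.0):
    sos = butter(4, [100, 1000], btype="bandpass", fs=fs, output="sos")
    env = np.abs(hilbert(sosfiltfilt(sos, audio)))     # amplitude envelope
    win = max(fs // 10, 1)
    env = np.convolve(env, np.ones(win) / win, mode="same")  # ~0.1 s smoothing
    quiet = env < k * np.median(env)        # near the estimated noise floor
    segs, start = [], None
    for i, q in enumerate(np.append(quiet, False)):    # sentinel closes runs
        if q and start is None:
            start = i
        elif not q and start is not None:
            if (i - start) / fs >= min_apnea_s:        # long enough to flag
                segs.append((start / fs, i / fs))
            start = None
    return segs  # (onset_s, offset_s) apnea candidates
```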