34 research outputs found

    Enhancing Asthma Control through IT: Design, Implementation and Planned Evaluation of the Mobile Asthma Companion

    Get PDF
    The personal and financial burden of asthma depends strongly on a patient’s disease self-management skills. Scalable mHealth apps, designed to empower patients, have the potential to play a crucial role in asthma disease management. However, the actual clinical efficacy of mHealth asthma apps is poorly understood due to the lack of both methodologically sound research and accessible evidence-based apps. We therefore apply design science with the goal of designing, implementing, and evaluating an mHealth app for people with asthma, the Mobile Asthma Companion (MAC). The current prototype of MAC delivers health literacy knowledge triggered by nocturnal cough rates. We conclude by proposing a randomized controlled trial to test the efficacy of our prototype.

    Emotion elicitation and capture among real couples in the lab

    Get PDF
    Couples’ relationships affect partners’ mental and physical well-being. Automatic recognition of couples’ emotions will not only help to better understand the interplay of emotions, intimate relationships, and health and well-being, but will also provide crucial clinical insights into protective and risk factors of relationships, and can ultimately guide interventions. However, several works developing emotion recognition algorithms use data from actors in artificial dyadic interactions, and such algorithms are unlikely to perform well on real couples. We are developing emotion recognition methods using data from real couples and, in this paper, we describe two studies we ran in which we collected emotion data from real couples: Dutch-speaking couples in Belgium and German-speaking couples in Switzerland. We discuss our approach to eliciting and capturing emotions and make five recommendations based on their relevance for developing well-performing emotion recognition systems for couples.

    Towards the Design of a Smartphone-Based Biofeedback Breathing Training: Identifying Diaphragmatic Breathing Patterns From a Smartphone’s Microphone

    Get PDF
    Asthma, diabetes, hypertension, and major depression are non-communicable diseases (NCDs) that impose a major burden on global health. Stress is linked to both the causes and consequences of NCDs, and it has been shown that biofeedback-based breathing trainings (BBTs) are effective in coping with stress. Here, diaphragmatic breathing, i.e., deep abdominal breathing, is among the most established breathing techniques. However, the high costs and low scalability of state-of-the-art BBTs, which require expensive medical hardware and health professionals, represent a significant barrier to their widespread adoption. Health information technology has the potential to address this important practical problem. In particular, it has been shown that a smartphone microphone can record audio signals from exhalation at a quality comparable to professional respiratory devices. As this finding is highly relevant for low-cost and scalable smartphone-based BBTs (SBBTs) and, to the best of our knowledge, has not been investigated so far, we aim to design and evaluate the efficacy of such an SBBT. As a very first step, we apply design science research and investigate in this research-in-progress the relationship between diaphragmatic breathing and its acoustic components using only a smartphone’s microphone. For that purpose, we review related work and develop our hypotheses based on justificatory knowledge from physiology, physics, and acoustics. We finally describe a laboratory study that will be used to test our hypotheses. We conclude with a brief outlook on future work.

    Automatic Recognition, Segmentation, and Sex Assignment of Nocturnal Asthmatic Coughs and Cough Epochs in Smartphone Audio Recordings: Observational Field Study

    Get PDF
    Background: Asthma is one of the most prevalent chronic respiratory diseases. Despite increased investment in treatment, little progress has been made in the early recognition and treatment of asthma exacerbations over the last decade. Nocturnal cough monitoring may provide an opportunity to identify patients at risk for imminent exacerbations. Recently developed approaches enable smartphone-based cough monitoring. These approaches, however, have not undergone longitudinal overnight testing nor have they been specifically evaluated in the context of asthma. Also, the problem of distinguishing partner coughs from patient coughs when two or more people are sleeping in the same room using contact-free audio recordings remains unsolved. Objective: The objective of this study was to evaluate the automatic recognition and segmentation of nocturnal asthmatic coughs and cough epochs in smartphone-based audio recordings that were collected in the field. We also aimed to distinguish partner coughs from patient coughs in contact-free audio recordings by classifying coughs based on sex. Methods: We used a convolutional neural network model that we had developed in previous work for automated cough recognition. We further used techniques (such as ensemble learning, minibatch balancing, and thresholding) to address the imbalance in the data set. We evaluated the classifier in a classification task and a segmentation task. The cough-recognition classifier served as the basis for the cough-segmentation classifier from continuous audio recordings. We compared automated cough and cough-epoch counts to human-annotated cough and cough-epoch counts. We employed Gaussian mixture models to build a classifier for cough and cough-epoch signals based on sex. Results: We recorded audio data from 94 adults with asthma (overall: mean 43 years; SD 16 years; female: 54/94, 57%; male 40/94, 43%). 
Audio data were recorded by each participant in their everyday environment using a smartphone placed next to their bed; recordings were made over a period of 28 nights. Out of 704,697 sounds, we identified 30,304 as coughs. A total of 26,166 coughs occurred without a 2-second pause between coughs, yielding 8238 cough epochs. The ensemble classifier performed well, with a Matthews correlation coefficient of 92% in a pure classification task, and achieved cough counts comparable to those of human annotators in the segmentation of coughing. The count difference between automated and human-annotated coughs was a mean of –0.1 (95% CI –12.11, 11.91) coughs. The count difference between automated and human-annotated cough epochs was a mean of 0.24 (95% CI –3.67, 4.15) cough epochs. The Gaussian mixture model cough epoch–based sex classification performed best, yielding an accuracy of 83%. Conclusions: Our study demonstrated longitudinal nocturnal cough and cough-epoch recognition from nightly smartphone-based audio recordings of adults with asthma. The model distinguishes partner coughs from patient coughs in contact-free recordings by identifying cough and cough-epoch signals that correspond to the sex of the patient. This research represents a step towards enabling passive and scalable cough monitoring for adults with asthma.
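
    The 2-second pause rule that merges individual coughs into cough epochs can be sketched as follows; the function name and the input format (sorted cough onset times in seconds) are illustrative assumptions, not the authors' implementation.

    ```python
    def group_into_epochs(cough_times, max_gap=2.0):
        """Group sorted cough onset times (in seconds) into cough epochs.

        Per the abstract, consecutive coughs separated by less than a
        2-second pause belong to the same cough epoch.
        """
        epochs = []
        for t in cough_times:
            if epochs and t - epochs[-1][-1] < max_gap:
                epochs[-1].append(t)  # gap < 2 s: continue the current epoch
            else:
                epochs.append([t])    # gap >= 2 s (or first cough): new epoch
        return epochs

    # Example: six coughs collapse into three epochs.
    epochs = group_into_epochs([0.0, 1.0, 1.5, 10.0, 11.0, 20.0])
    ```

    Counting `len(epochs)` instead of raw coughs is what turns the 26,166 closely spaced coughs reported above into 8238 cough epochs.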

    Effectiveness of the Austrian disease-management-programme for type 2 diabetes: study protocol of a cluster-randomized controlled trial

    Get PDF
    Background: Due to its rising prevalence, type 2 diabetes plays an important role in population health in Austria and other western countries. Various studies have revealed deficiencies in the care of diabetic patients. These deficiencies may be overcome by disease-management-programmes (DMPs), but international experience shows that the effectiveness of DMPs is inconsistent. In particular, large programmes designed by state-affiliated public health insurances have not been evaluated in randomized controlled trials (RCTs). We are therefore conducting a large-scale RCT of the Austrian DMP for type 2 diabetic patients in the province of Salzburg to evaluate the programme regarding its effects on metabolic control, guideline-adherent care, and the quality of life of diabetic patients. Methods/Design: The study is open for participation to all GPs and internists in the province of Salzburg. Physicians are randomized before recruitment of patients, with the districts of Salzburg as clusters of randomisation. A total of over 1200 patients with type 2 diabetes will then be recruited. In the intervention group the DMP is applied for one year; controls receive usual care. Endpoints are a decrease in HbA1c in the intervention group > 0.5% compared to controls, a higher percentage of patients receiving the diagnostic measures required by guidelines, an improved cardiovascular risk profile, and higher quality of life scores within one year. Current status of the study: 98 physicians agreed to participate in the study; 96 of them recruited 1494 patients, 654 in the intervention and 840 in the control group. Trial Registration: This trial has been registered with Current Controlled Trials Ltd. (ISRCTN27414162).

    The effectiveness of the Austrian disease management programme for type 2 diabetes: a cluster-randomised controlled trial

    Get PDF
    Background: Disease management programmes (DMPs) are costly and impose additional workload on general practitioners (GPs). Data on their effectiveness are inconclusive. We therefore conducted a cluster-randomised controlled trial to evaluate the effectiveness of the Austrian DMP for diabetes mellitus type 2 on HbA1c and quality of care for adult patients in primary care. Methods: All GPs of Salzburg province were invited to participate. After cluster-randomisation by district, all patients with diabetes type 2 were recruited consecutively from 7-11/2007. The DMP, consisting mainly of physician and patient education, standardised documentation, and agreement on therapeutic goals, was implemented in the intervention group while the control group received usual care. We aimed to show superiority of the intervention regarding metabolic control and process quality. The primary outcome measure was the change in HbA1c after one year. Secondary outcomes were days in hospital, blood pressure, lipids, body mass index (BMI), enrolment in patient education, and regular guideline-adherent examination. Blinding was not possible. Results: 92 physicians recruited 1489 patients (649 intervention, 840 control). After 401 ± 47 days, 590 intervention patients and 754 controls had complete data. In the intention-to-treat analysis (ITT) of all 1489 patients, HbA1c decreased by 0.41% in the intervention group and 0.28% in controls. The difference of -0.13% (95% CI -0.24; -0.02) was significant at p = 0.026. Significance was lost in mixed models adjusted for baseline value and cluster effects (adjusted mean difference -0.03; 95% CI -0.15; 0.09; p = 0.607). Of the secondary outcome measures, BMI and cholesterol were significantly reduced in the intervention group compared to controls in ITT after adjustments (-0.53 kg/m²; 95% CI -1.03; -0.02; p = 0.014 and -0.10 mmol/l; 95% CI -0.21; -0.003; p = 0.043). Additionally, more patients in the intervention group received patient education (49.5% vs. 20.1%, p < 0.0001), eye examinations (71.0% vs. 51.2%, p < 0.0001), foot examinations (73.8% vs. 45.1%, p < 0.0001), and regular HbA1c checks (44.1% vs. 36.0%, p < 0.01). Conclusion: The Austrian DMP implemented by statutory health insurance improves process quality and enhances weight reduction, but does not significantly improve metabolic control for patients with type 2 diabetes mellitus. Whether the small benefit seen in secondary outcome measures leads to better patient outcomes remains unclear. Trial Registration: Current Controlled Trials Ltd., ISRCTN27414162.

    Speech Emotion Recognition among Couples using the Peak-End Rule and Transfer Learning

    No full text
    Extensive couples’ literature shows that how couples feel after a conflict is predicted by certain emotional aspects of that conversation. Understanding the emotions of couples leads to a better understanding of partners’ mental well-being and consequently their relationships. Hence, automatic emotion recognition among couples could potentially guide interventions to help couples improve their emotional well-being and their relationships. It has been shown that people’s global emotional judgment after an experience is strongly influenced by the emotional extremes and the ending of that experience, known as the peak-end rule. In this work, we leveraged this theory and used machine learning to investigate which audio segments can best predict the end-of-conversation emotions of couples. We used speech data collected from 101 Dutch-speaking couples in Belgium who engaged in 10-minute-long conversations in the lab. We extracted acoustic features from (1) the audio segments with the most extreme positive and negative ratings, and (2) the ending of the audio. We used transfer learning, extracting these acoustic features with a pretrained convolutional neural network (YAMNet). We then used these features to train machine learning models (support vector machines) to predict the end-of-conversation valence ratings (positive vs. negative) of each partner. The results of this work could inform how best to recognize the emotions of couples after conversation sessions and eventually lead to a better understanding of couples’ relationships, either in therapy or in everyday life.
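
    The peak-end selection of audio segments can be sketched as follows; the function name and the input (a sequence of per-segment valence ratings) are illustrative assumptions, not the authors' pipeline.

    ```python
    def peak_end_segments(ratings):
        """Return the indices of the segments the peak-end rule highlights:
        the most extreme positive rating, the most extreme negative rating,
        and the final (end-of-conversation) segment.
        """
        peak_pos = max(range(len(ratings)), key=lambda i: ratings[i])
        peak_neg = min(range(len(ratings)), key=lambda i: ratings[i])
        end = len(ratings) - 1  # the ending of the experience
        return peak_pos, peak_neg, end

    # Example: the second segment is the positive peak, the third the
    # negative peak, and the last segment is the ending.
    indices = peak_end_segments([0.1, 0.9, -0.5, 0.2])
    ```

    Acoustic features would then be extracted only from these selected segments rather than from the full 10-minute recording.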

    Trajectories of Engagement with a Digital Physical Activity Coach: Secondary Analysis of a Micro-Randomized Trial

    No full text
    Context: Intervention components of a MobileCoach-based smartphone app to promote walking were assessed in a seven-week micro-randomized trial (N = 274). In order to make a significant contribution to public health, the app must also engage those at higher risk for adverse health outcomes, i.e., older, less active, or less healthy participants. Methods: In a secondary analysis, longitudinal trajectories of participants’ number of daily app sessions were clustered using the k-means algorithm with varying numbers of clusters. An app session was defined as any interaction between a participant and the app separated by at least five minutes from other interactions. The final number of clusters was determined based on five different clustering quality criteria. Results: Two clusters emerged: stable high engagement (31.3% of participants; 7.6 (SD = 2.9) mean daily app sessions) and stable low engagement (68.7% of participants; 1.5 (SD = 1.4) mean daily app sessions). Highly engaged participants were older (45.8 vs. 40.1 years, p < .001, d = 0.43) and accumulated more steps per day during the study (7373 vs. 5828 steps per day, p < .001, d = 0.57). Clusters did not differ with regard to participants’ baseline physical activity, gender, body mass index, self-reported health status, or education. Conclusions: A chatbot-based walking app engaged participants of a micro-randomized trial over a period of seven weeks independent of their risk for adverse health outcomes. Thus, participants with a low risk for adverse health outcomes at baseline do not drive high engagement with the app.
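
    The app-session definition above (interactions separated by at least five minutes start a new session) can be sketched as follows; the function name and timestamp format are illustrative assumptions.

    ```python
    def count_app_sessions(interaction_times, gap_minutes=5.0):
        """Count app sessions from sorted interaction timestamps (in minutes).

        Per the abstract's definition, a new session begins whenever at
        least five minutes separate consecutive interactions.
        """
        sessions = 0
        last = None
        for t in interaction_times:
            if last is None or t - last >= gap_minutes:
                sessions += 1  # gap of >= 5 min (or first interaction)
            last = t
        return sessions

    # Example: three bursts of interactions yield three sessions.
    n = count_app_sessions([0, 1, 2, 10, 11, 30])
    ```

    Applying this per day would produce the daily session counts whose trajectories were clustered with k-means.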

    “You made me feel this way”: Investigating Partners’ Influence in Predicting Emotions in Couples’ Conflict Interactions using Speech Data

    No full text
    How romantic partners interact with each other during a conflict influences how they feel at the end of the interaction and is predictive of whether the partners stay together in the long term. Hence, understanding the emotions of each partner is important. Yet current approaches rely on self-reports, which are burdensome and hence limit the frequency of data collection. Automatic emotion prediction could address this challenge. Insights from psychology research indicate that partners’ behaviors influence each other’s emotions in conflict interactions, and hence the behavior of both partners could be considered to better predict each partner’s emotion. However, it has yet to be investigated how doing so compares to using only each partner’s own behavior in terms of emotion prediction performance. In this work, we used BERT to extract linguistic features (i.e., what partners said) and openSMILE to extract paralinguistic features (i.e., how they said it) from a data set of 368 German-speaking Swiss couples (N = 736 individuals) who were videotaped during an 8-minute conflict interaction in the laboratory. Based on those features, we trained machine learning models to predict whether partners feel positive or negative after the conflict interaction. Our results show that including the behavior of the other partner improves the prediction performance. Furthermore, for men, considering how their female partners spoke is most important, and for women, considering what their male partners said is most important for better prediction performance. This work is a step towards automatically recognizing each partner’s emotion based on the behavior of both, which would enable a better understanding of couples in research, therapy, and the real world.
