
    Wearable artificial intelligence for anxiety and depression: A scoping review

    Background: Anxiety and depression are the most common mental disorders worldwide. Owing to the shortage of psychiatrists around the world, AI embedded in wearable devices (wearable artificial intelligence (AI)) has been exploited to provide mental health services. Objective: The current review aimed to explore the features of wearable AI used for anxiety and depression in order to identify application areas and open research issues. Methods: We searched 8 electronic databases (MEDLINE, PsycINFO, EMBASE, CINAHL, IEEE Xplore, ACM Digital Library, Scopus, and Google Scholar). We then checked studies that cited the included studies and screened studies that were cited by the included studies. Study selection and data extraction were carried out by two reviewers independently. The extracted data were aggregated and summarized using narrative synthesis. Results: Of the 1203 citations identified, 69 studies were included in this review. About two-thirds of the studies used wearable AI for depression, while the remaining studies used it for anxiety. The most frequent application of wearable AI was diagnosing anxiety and depression; no studies used it for treatment purposes. The majority of studies targeted individuals between the ages of 18 and 65. The most common wearable device used in the studies was the Actiwatch AW4, and wrist-worn devices were the most common form factor. The data most commonly used for model development were physical activity, sleep, and heart rate data. The most frequently used open dataset was Depresjon. The most commonly used algorithms were random forest (RF) and support vector machine (SVM). Conclusions: Wearable AI offers great promise for providing mental health services related to anxiety and depression. Individuals can use it as a pre-screening assessment for anxiety and depression. Further reviews are needed to statistically synthesize studies' results on the performance and effectiveness of wearable AI. Given its potential, tech companies should invest more in wearable AI for the treatment of anxiety and depression.
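
    The review reports random forest and support vector machine as the most common algorithms, trained mostly on physical activity, sleep, and heart rate data. As a purely illustrative sketch (not any included study's pipeline; the features, labels, and data below are synthetic placeholders), such a screening classifier could look like this in Python with scikit-learn:

        # Hedged sketch: RF and SVM screening models on wearable-style features.
        # All data here are synthetic; the feature choices only mirror the data
        # types (activity, sleep, heart rate) the review reports as most common.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n = 200
        X = np.column_stack([
            rng.normal(7000, 2500, n),  # mean daily step count
            rng.normal(6.8, 1.2, n),    # mean sleep duration (hours)
            rng.normal(68, 9, n),       # resting heart rate (bpm)
        ])
        y = rng.integers(0, 2, n)       # 1 = screened positive (synthetic labels)

        for model in (RandomForestClassifier(n_estimators=200, random_state=0),
                      SVC(kernel="rbf", C=1.0)):
            acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
            print(type(model).__name__, round(acc, 3))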

    The performance of wearable AI in detecting stress among students: systematic review and meta-analysis

    Students usually encounter stress throughout their academic path. Ongoing stressors may lead to chronic stress, adversely affecting their physical and mental well-being. Thus, early detection and monitoring of stress among students are crucial. Wearable artificial intelligence (AI) has emerged as a valuable tool for this purpose. It offers an objective, noninvasive, nonobtrusive, automated approach to continuously monitor biomarkers in real time, thereby addressing the limitations of traditional approaches such as self-reported questionnaires. This systematic review and meta-analysis aim to assess the performance of wearable AI in detecting and predicting stress among students. Search sources in this review included 7 electronic databases (MEDLINE, Embase, PsycINFO, ACM Digital Library, Scopus, IEEE Xplore, and Google Scholar). We also checked the reference lists of the included studies and checked studies that cited the included studies. The search was conducted on June 12, 2023. This review included research articles centered on the creation or application of AI algorithms for the detection or prediction of stress among students using data from wearable devices. In total, 2 independent reviewers performed study selection, data extraction, and risk-of-bias assessment. The Quality Assessment of Diagnostic Accuracy Studies-Revised tool was adapted and used to examine the risk of bias in the included studies. Evidence synthesis was conducted using narrative and statistical techniques. This review included 5.8% (19/327) of the studies retrieved from the search sources. A meta-analysis of 37 accuracy estimates derived from 32% (6/19) of the studies revealed a pooled mean accuracy of 0.856 (95% CI 0.70-0.93). Subgroup analyses demonstrated that the accuracy of wearable AI was moderated by the number of stress classes (P=.02), type of wearable device (P=.049), location of the wearable device (P=.02), data set size (P=.009), and ground truth (P=.001). The average estimates of sensitivity, specificity, and F-score were 0.755 (SD 0.181), 0.744 (SD 0.147), and 0.759 (SD 0.139), respectively. Wearable AI shows promise in detecting student stress but currently has suboptimal performance. The results of the subgroup analyses should be carefully interpreted given that many of these findings may be due to other confounding factors rather than the underlying grouping characteristics. Thus, wearable AI should be used alongside other assessments (eg, clinical questionnaires) until further evidence is available. Future research should explore the ability of wearable AI to differentiate types of stress, distinguish stress from other mental health issues, predict future occurrences of stress, consider factors such as the placement of the wearable device and the methods used to assess the ground truth, and report detailed results to facilitate the conduct of meta-analyses. PROSPERO CRD42023435051; http://tinyurl.com/3fzb5rnp. [Abstract copyright: ©Alaa Abd-alrazaq, Mohannad Alajlani, Reham Ahmad, Rawan AlSaad, Sarah Aziz, Arfan Ahmed, Mohammed Alsahli, Rafat Damseh, Javaid Sheikh. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 31.01.2024.]
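
    To illustrate the kind of pooling behind the reported 0.856 (95% CI 0.70-0.93) figure, here is a minimal sketch of DerSimonian-Laird random-effects pooling of accuracy estimates on the logit scale. The study counts below are made-up placeholders, not the 37 estimates actually meta-analyzed, and the review's exact model may differ:

        # Hedged sketch: random-effects (DerSimonian-Laird) pooling of accuracies.
        import numpy as np

        correct = np.array([170, 88, 412, 61, 230])   # hypothetical hits per study
        total   = np.array([200, 110, 480, 75, 260])  # hypothetical test-set sizes

        p = correct / total
        y = np.log(p / (1 - p))                  # logit-transformed accuracies
        v = 1 / correct + 1 / (total - correct)  # approximate within-study variances

        w = 1 / v                                # fixed-effect weights
        y_fe = (w * y).sum() / w.sum()
        Q = (w * (y - y_fe) ** 2).sum()          # heterogeneity statistic
        k = len(y)
        tau2 = max(0.0, (Q - (k - 1)) / (w.sum() - (w**2).sum() / w.sum()))

        w_re = 1 / (v + tau2)                    # random-effects weights
        y_re = (w_re * y).sum() / w_re.sum()
        se = np.sqrt(1 / w_re.sum())
        expit = lambda z: 1 / (1 + np.exp(-z))   # back to the accuracy scale
        print(f"pooled accuracy {expit(y_re):.3f} "
              f"(95% CI {expit(y_re - 1.96*se):.3f}-{expit(y_re + 1.96*se):.3f})")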

    Interpretable Deep Learning Models for Prediction of Clinical Outcomes From Electronic Health Records

    The rapid adoption of electronic health records (EHRs) has generated tremendous amounts of valuable clinical data on complex diseases and health trajectories. Yet achieving successful secondary use of these EHR data for expanding our knowledge about diseases, expediting scientific discoveries in medicine, and facilitating clinical decision-making has remained challenging, owing to their complexity and data quality issues. Artificial intelligence, specifically deep learning, presents a promising approach for analyzing these rich EHR data, represented as series of timestamped multivariate observations recorded at irregular intervals. Deep learning-based predictive modeling with longitudinal EHR data offers great promise for accelerating personalized medicine, enabling disease prevention, better informing clinical decision-making, and reducing healthcare costs. However, employing deep learning on EHR data for personalized prediction of clinical outcomes requires coping with numerous issues simultaneously. In this thesis, we focus on addressing three important challenges: data heterogeneity, data irregularity, and model interpretability. We utilize state-of-the-art deep learning techniques and modern machine learning methods to develop accurate and interpretable predictive models using EHR data. Specifically, we demonstrate how temporal clinical data contained in EHRs can be harnessed to provide patient-specific predictions and interpretations for several clinical outcomes. We focus on two aspects: 1) code-level and visit-level interpretations for predicted outcomes using recurrent neural networks (RNNs), an attention mechanism, and the contextual decomposition interpretation method, and 2) incorporating the non-stationarity characteristics of EHR data into the predictive models using a self-attention mechanism and a kernel approximation technique. Our proposed EHR-based deep learning models demonstrate improved performance, in terms of predictive accuracy and interpretability, on multiple clinical prediction tasks compared with existing work in this area. These tasks include preterm birth prediction, school-age asthma prediction, and predicting the set of diagnosis codes in the next visit. Such models have great potential to assist healthcare professionals in making decisions that are not only dependent on the clinician's knowledge and expertise but also based on personalized and precise insights about future patient outcomes.
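
    As a toy illustration of the visit-level interpretation idea (an assumed architecture, not the thesis's exact models), the sketch below embeds each visit as a bag of medical codes, runs a GRU over the visit sequence, and exposes additive attention weights that can be read as per-visit importance scores:

        # Hedged sketch in PyTorch: GRU over visit embeddings with attention
        # weights serving as visit-level importance scores. Shapes and data are toy.
        import torch
        import torch.nn as nn

        class VisitAttentionRNN(nn.Module):
            def __init__(self, n_codes=500, emb_dim=64, hidden=128):
                super().__init__()
                self.embed = nn.EmbeddingBag(n_codes, emb_dim)  # codes -> visit vector
                self.rnn = nn.GRU(emb_dim, hidden, batch_first=True)
                self.attn = nn.Linear(hidden, 1)
                self.out = nn.Linear(hidden, 1)

            def forward(self, visits):  # visits: list of LongTensors of code indices
                emb = torch.stack([self.embed(v.unsqueeze(0)).squeeze(0)
                                   for v in visits])
                h, _ = self.rnn(emb.unsqueeze(0))           # (1, n_visits, hidden)
                alpha = torch.softmax(self.attn(h), dim=1)  # per-visit weights
                context = (alpha * h).sum(dim=1)
                return torch.sigmoid(self.out(context)).squeeze(), alpha.squeeze()

        model = VisitAttentionRNN()
        patient = [torch.tensor([3, 41, 7]), torch.tensor([12]), torch.tensor([3, 99])]
        risk, importance = model(patient)
        print(risk.item(), importance.detach().numpy())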

    Towards a National Electronic Health Record in Qatar: Building on International Experiences

    Background: During the past decade, the IT industry has introduced several new concepts within the health domain, including e-health, the electronic health record, the digital hospital, and many more. Although each of these terms has brought its own definition and perspective, they all rest on the foundation that healthcare and wellness management depend on effectively using technology to access accurate data in a timely fashion, ensuring enhanced patient care and reduced medical errors. The Electronic Health Record (EHR) is an integrated system that collects data from different healthcare providers to create a unified electronic record for each patient in the population. Today, a patient's health information is scattered across different healthcare facilities, causing significant inefficiencies within the healthcare system. A national EHR system would tackle these challenges by producing a personal health record for each patient, integrating information from all healthcare providers, and giving patients themselves access so that they can contribute. Motivation: Many health IT implementations of EHR programs around the world serve as valuable learning experiences for Qatar, offering it an opportunity to adopt the best national EHR implementation strategies and practices. National EHR initiatives in Qatar emphasize the need for secure electronic management of health data in structured and standardized formats that can be communicated across its hospitals, primary healthcare centres, and other healthcare facilities. Personalized medicine initiatives share and extend these goals, with additional precision provided by genetic/genomic-based improvements to diagnostic, prognostic, and preventive information, thereby demanding a coordinated extension of the adoption and implementation requirements of an integrated national EHR system. It is for this purpose that the State of Qatar has taken the first concrete steps in a promising EHR journey that will promote significant changes in how healthcare services are delivered and, more importantly, in how each individual in Qatar can be empowered to become an active contributor to the management of their own health. In moving towards the widespread adoption and implementation of a national EHR system in Qatar, it is important to study the challenges encountered and the strategies used in the national adoption of EHR systems in other countries. This is essential for health informatics researchers, clinicians, and policy makers to gain greater insight into the issues involved in transforming healthcare through a national EHR system. The results of this review study complement, explain, and extend the conclusions of earlier studies commissioned to explore the health information technology ecosystem in the State of Qatar. Objectives: The purpose of this study is to review EHR programs from various countries with regard to the issues documented in their commissioned studies. Our analysis derives the most common critical aspects and lessons learned from international experiences in implementing national EHR programs. Additionally, it explores the opportunities, constraints, and characteristics present in Qatar that are necessary for tailoring the strategies and approaches to fully realize a national EHR system in the country.
This review study makes two important contributions: 1) it supports the promotion of health IT solutions that are right for Qatar's needs, recognizing the size and capabilities of the country, leveraging existing healthcare organizations and solutions, and respecting the unique cultural characteristics of its population; and 2) it serves as a baseline against which comparisons, performance against target measures, and forward planning can be scoped, allowing a significant contribution to the productive future development of health information technology and personalized medicine initiatives in Qatar. Methods: The data collection techniques included (a) a literature review of articles about EHR adoption under national strategies in several countries, (b) a review of reports on national e-health strategy and government policies in Qatar, and (c) interviews with people participating in policy making for the national EHR system in Qatar (health and academic professionals involved in health IT research in Qatar). The reviewed EHR programs were selected according to the following criteria: (a) the program for implementing a national EHR system was initiated at least 5 years ago, (b) pilot projects had already been conducted, and (c) the planned EHR systems encompassed various implementation approaches. In line with these criteria, the EHR programs studied were those of the following five countries: the United States, England, Estonia, Japan, and Australia. Results: The analysis of the selected international EHR programs revealed many lessons learned, including: 1) To achieve a successful EHR implementation, it is critical to increase the awareness of the Qatari population about the upcoming changes in their healthcare experiences, paving the way to a smoother transition while securing people's trust and confidence in the new system. 2) It is essential to enact legislation protecting the privacy of personal medical data, to support new e-health concepts and eliminate the risk of violating patient privacy. 3) It is important to allow appropriate time for procurement, utilization, benefit realization, and the complete project; otherwise, stakeholders and the public may lose confidence in the EHR project. 4) Financial incentives for healthcare providers proved to be an effective method of raising the EHR adoption rate. 5) To expedite EHR program acceptance, it is imperative to recruit knowledgeable and experienced technical staff and healthcare leaders who encourage others to play a critical role during the transition process and view this change as a predominantly positive one. 6) To make the EHR an everyday tool for doctors, nurses, patients, and public authorities, it is necessary to implement services based on the interests of healthcare providers and society. 7) Continuous adjustment and enhancement are needed to sustain a successful and efficient system. Conclusion: Experiences from other countries suggest that a clear focus needs to be placed on technical, clinical, organizational, financial, social, and patient perspectives to ensure that the full benefits of a national EHR system in Qatar can be realized. They also demonstrate that strategic and human challenges are more difficult to master than technical ones.
The results of this review study can be used as a baseline for recommendations on how to tackle potential barriers to the successful adoption of a national EHR system in Qatar.

    Automated Classification of Diabetic Retinopathy Severity: A Deep Learning Approach

    Background: Diabetic retinopathy (DR) is damage to the retina caused by complications of diabetes and is the fastest growing cause of blindness. It is a major concern for the Qatari population, affecting about 40% of diabetic patients in Qatar. Automated DR classification techniques with high accuracy have strong potential to help doctors diagnose DR early and quickly route patients who need medical intervention to a specialist. Most previous work utilized traditional machine learning techniques for the task of DR severity classification. Such techniques are based on "feature engineering", which involves computing explicit features designed by domain experts, resulting in models capable of detecting specific regions of DR damage or predicting the classification of DR severity. Recently, deep learning has emerged as an efficient technique that avoids such engineering. It shifts the burden of feature engineering to the design of a general-purpose learning system that allows an algorithm to learn by itself the most important predictive features from raw images, given a large dataset of labeled examples. Objectives: In this work, we studied deep learning techniques described in recent literature and combined them with our own ideas to develop a deep convolutional neural network (ConvNet) architecture for diagnosing diabetic retinopathy and classifying its severity from retina images. In addition, we explored the impact of the following parameters on the performance of the model: 1) the number of fully connected layers, 2) the number of units within each fully connected layer, and 3) the batch size (the number of training examples forward/backward propagated through the network in one pass). Methodology: We trained the ConvNet model on the publicly available retina images dataset from the Kaggle competition for diabetic retinopathy detection. The dataset included labels with information about the presence of DR in each image, rated by a clinician on a scale from 0 to 4 (0: no DR, 1: mild, 2: moderate, 3: severe, 4: proliferative DR). The model was implemented using the Theano, Lasagne, and cuDNN libraries and trained on two Amazon EC2 p2.xlarge instances (NVIDIA K80 GPUs). We used the same evaluation metric as the Kaggle competition, Cohen's quadratic weighted kappa. In our case, kappa measures the agreement between two raters: the scores assigned by the human rater (the labels) and the model's predicted scores. Results: On a dataset of 30,262 training images and 4864 testing images, the model achieved a kappa of 0.72. Our experimental results demonstrated that neither the number nor the size of the fully connected layers has a significant impact on the model's performance. Moreover, they indicated that increasing the batch size does not necessarily speed up the convergence of the gradient computations. Conclusion: We have shown that convolutional neural networks can be trained to identify the features of diabetic retinopathy in retina images. Given the many recent advances in deep learning, we hope our work will open the door for many new examples demonstrating the power of deep learning to help solve important problems in medical imaging and healthcare. Keywords: deep learning, machine learning, diabetes, diabetic retinopathy, medical imaging.
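
    The evaluation metric above, Cohen's quadratic weighted kappa, penalizes disagreements by the squared distance between grades, so mistaking grade 0 for grade 4 costs far more than mistaking it for grade 1. A minimal sketch with made-up grade vectors (not our actual predictions):

        # Hedged sketch: quadratic weighted kappa between clinician grades (0-4)
        # and model predictions; both vectors below are hypothetical examples.
        from sklearn.metrics import cohen_kappa_score

        human = [0, 2, 4, 1, 3, 0, 2, 2, 4, 1]  # hypothetical clinician labels
        model = [0, 2, 3, 1, 3, 1, 2, 2, 4, 0]  # hypothetical model outputs

        print(cohen_kappa_score(human, model, weights="quadratic"))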

    Interpreting patient-specific risk prediction using contextual decomposition of BiLSTMs: application to children with asthma

    Background: Predictive modeling with longitudinal electronic health record (EHR) data offers great promise for accelerating personalized medicine and better informing clinical decision-making. Recently, deep learning models have achieved state-of-the-art performance for many healthcare prediction tasks. However, deep models lack interpretability, which is integral to successful decision-making and can lead to better patient care. In this paper, we build upon the contextual decomposition (CD) method, an algorithm for producing importance scores from long short-term memory networks (LSTMs). We extend the method to bidirectional LSTMs (BiLSTMs) and use it in the context of predicting future clinical outcomes from patients' historical EHR visits. Methods: We use a real EHR dataset comprising 11,071 patients to evaluate and compare CD interpretations from LSTM and BiLSTM models. First, we train LSTM and BiLSTM models for the task of predicting which preschool children with respiratory system-related complications will have asthma at school age. After that, we conduct quantitative and qualitative analyses to evaluate the CD interpretations produced by contextual decomposition of the trained models. In addition, we develop an interactive visualization to demonstrate the utility of CD scores in explaining predicted outcomes. Results: Our experimental evaluation demonstrates that whenever a clear visit-level pattern exists, the models learn that pattern and contextual decomposition can appropriately attribute the prediction to the correct pattern. In addition, the results confirm that the CD scores agree to a large extent with the importance scores generated using logistic regression coefficients. Our main insight was that rather than interpreting the attribution of individual visits to the predicted outcome, we could instead attribute a model's prediction to a group of visits. Conclusion: We presented quantitative and qualitative evidence that CD interpretations can explain patient-specific predictions using CD attributions of individual visits or a group of visits. Other information: Published in BMC Medical Informatics and Decision Making. License: https://creativecommons.org/licenses/by/4.0. See article on publisher's website: http://dx.doi.org/10.1186/s12911-019-0951-4
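
    The sketch below is not the paper's contextual decomposition algorithm; it is a much simpler occlusion baseline over a toy BiLSTM, shown only to make the notion of attributing a prediction to individual visits concrete. All shapes, weights, and inputs are hypothetical:

        # Hedged sketch: per-visit attribution via occlusion (zeroing one visit
        # embedding at a time), a crude stand-in for contextual decomposition.
        import torch
        import torch.nn as nn

        class BiLSTMRisk(nn.Module):
            def __init__(self, emb_dim=32, hidden=64):
                super().__init__()
                self.rnn = nn.LSTM(emb_dim, hidden, batch_first=True,
                                   bidirectional=True)
                self.out = nn.Linear(2 * hidden, 1)

            def forward(self, x):  # x: (1, n_visits, emb_dim)
                h, _ = self.rnn(x)
                return torch.sigmoid(self.out(h[:, -1])).squeeze()

        torch.manual_seed(0)
        model = BiLSTMRisk().eval()
        visits = torch.randn(1, 6, 32)  # 6 hypothetical visit embeddings

        with torch.no_grad():
            base = model(visits).item()
            for i in range(visits.shape[1]):
                masked = visits.clone()
                masked[:, i] = 0.0  # occlude visit i
                print(f"visit {i}: importance {base - model(masked).item():+.4f}")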

    Systematic review and meta-analysis of performance of wearable artificial intelligence in detecting and predicting depression

    Given the limitations of traditional approaches, wearable artificial intelligence (AI) is one of the technologies that have been exploited to detect or predict depression. The current review aimed to examine the performance of wearable AI in detecting and predicting depression. The search sources in this systematic review were 8 electronic databases. Study selection, data extraction, and risk-of-bias assessment were carried out by two reviewers independently. The extracted results were synthesized narratively and statistically. Of the 1314 citations retrieved from the databases, 54 studies were included in this review. The pooled means of the highest accuracy, sensitivity, specificity, and root mean square error (RMSE) were 0.89, 0.87, 0.93, and 4.55, respectively. The pooled means of the lowest accuracy, sensitivity, specificity, and RMSE were 0.70, 0.61, 0.73, and 3.76, respectively. Subgroup analyses revealed statistically significant differences in the highest accuracy, lowest accuracy, highest sensitivity, highest specificity, and lowest specificity between algorithms, and statistically significant differences in the lowest sensitivity and lowest specificity between wearable devices. Wearable AI is a promising tool for depression detection and prediction, although it is in its infancy and not ready for use in clinical practice. Until further research improves its performance, wearable AI should be used in conjunction with other methods for diagnosing and predicting depression. Further studies are needed to examine the performance of wearable AI based on a combination of wearable device data and neuroimaging data and to distinguish patients with depression from those with other diseases.

    Serious Games for Learning Among Older Adults With Cognitive Impairment: Systematic Review and Meta-analysis

    Background: Learning disabilities are among the major cognitive impairments caused by aging. Among the interventions used to improve learning among older adults are serious games, which are participative electronic games designed for purposes other than entertainment. Although some systematic reviews have examined the effectiveness of serious games on learning, they are undermined by limitations such as focusing on older adults without cognitive impairment, focusing on particular types of serious games, and not considering the comparator type in the analysis. Objective: This review aimed to evaluate the effectiveness of serious games on verbal and nonverbal learning among older adults with cognitive impairment. Methods: Eight electronic databases were searched to retrieve studies relevant to this systematic review and meta-analysis. Furthermore, we went through the studies that cited the included studies and screened the reference lists of the included studies and relevant reviews. Two reviewers independently checked the eligibility of the identified studies, extracted data from the included studies, and appraised their risk of bias and the quality of the evidence. The results of the included studies were summarized using a narrative synthesis or meta-analysis, as appropriate. Results: Of the 559 citations retrieved, 11 (2%) randomized controlled trials (RCTs) ultimately met all eligibility criteria for this review. A meta-analysis of 45% (5/11) of the RCTs revealed that serious games are effective in improving verbal learning among older adults with cognitive impairment in comparison with no or sham interventions (P=.04), and that serious games do not have a different effect on verbal learning between patients with mild cognitive impairment and those with Alzheimer disease (P=.89). A meta-analysis of 18% (2/11) of the RCTs revealed that serious games are as effective as conventional exercises in promoting verbal learning (P=.98). We also found that serious games outperformed no or sham interventions (4/11, 36%; P=.03) and conventional cognitive training (2/11, 18%; P<.001) in enhancing nonverbal learning. Conclusions: Serious games have the potential to enhance verbal and nonverbal learning among older adults with cognitive impairment. However, our findings remain inconclusive because of the low quality of evidence, the small sample size in most of the meta-analyzed studies (6/8, 75%), and the paucity of studies included in the meta-analyses. Thus, until further convincing proof of their effectiveness is offered, serious games should be used to supplement current interventions for verbal and nonverbal learning rather than replace them entirely. Trial Registration: PROSPERO CRD42022348849; https://tinyurl.com/y6yewwf

    The performance of serious games for enhancing attention in cognitively impaired older adults

    Attention, the process of noticing the surrounding environment and processing information, is one of the cognitive functions that deteriorate gradually as people grow older. Games used for purposes other than entertainment, such as improving attention, are often referred to as serious games. This study examined the effectiveness of serious games on attention among elderly individuals suffering from cognitive impairment. A systematic review and meta-analyses of randomized controlled trials were carried out. Of the 559 records retrieved, 10 trials ultimately met all eligibility criteria. A meta-analysis of very low-quality evidence from three trials indicated that serious games outperform no or passive interventions in enhancing attention in cognitively impaired older adults (P < 0.001). Additionally, findings from two other studies demonstrated that serious games are more effective than traditional cognitive training in boosting attention among cognitively impaired older adults. One study also concluded that serious games are better than traditional exercises in enhancing attention. Serious games can thus enhance attention in cognitively impaired older adults. However, given the low quality of the evidence, the limited number of participants in most studies, the absence of some comparative studies, and the dearth of studies included in the meta-analyses, the results remain inconclusive. Thus, until these limitations are rectified in future research, serious games should serve as a supplement to, rather than a replacement for, current interventions.