
    LOCMIC:LOw Complexity Multi-resolution Image Compression

    Image compression is a well-established and extensively researched field. Interest in it has been driven by rapid advances in imaging techniques and by the many applications that use high-resolution images (e.g., medical, astronomical, and Internet applications). Image compression algorithms should not only deliver state-of-the-art performance, they should also provide additional features and functionalities such as progressive transmission. Often, a rough approximation (thumbnail) of an image is sufficient for the user to decide whether to continue the image transmission or to abort it, which helps to reduce time and bandwidth. This has motivated the development of multi-resolution image compression schemes. Existing multi-resolution schemes (e.g., the Multi-Level Progressive method) have shown high computational efficiency but generally lack compression performance. In this thesis, a LOw Complexity Multi-resolution Image Compression (LOCMIC) scheme based on the Hierarchical INTerpolation (HINT) framework is presented. Moreover, a novel integration of Just Noticeable Distortion (JND) for perceptual coding with the HINT framework is proposed to achieve a visually lossless multi-resolution scheme. In addition, various prediction formulas, a context-based prediction correction model, and a multi-level Golomb parameter adaptation approach have been investigated. The proposed LOCMIC (both lossless and visually lossless) improves compression performance. The lossless LOCMIC achieves a bit rate about 3% lower than LOCO-I, about 1% lower than JPEG2000, 3% lower than SPIHT, and 2% lower than CALIC. The perceptual LOCMIC achieves a bit rate about 4.7% lower than near-lossless JPEG-LS (at NEAR=2). Moreover, in terms of entropy, the decorrelation efficiency of LOCMIC shows an improvement of 2.8% and 4.5% over MED and the conventional HINT, respectively.
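    To give a rough sense of the HINT idea referenced above, the following is a minimal, illustrative sketch (not the thesis implementation): pixels on a coarse 2x-subsampled grid are kept, and the remaining pixels are predicted by interpolating their nearest coarse-grid neighbours; only the prediction residuals are then entropy coded. The averaging predictor and function names are assumptions for illustration; the thesis investigates several prediction formulas and a Golomb-based coder.

    ```python
    # Illustrative sketch of one HINT refinement step (assumed simple
    # averaging predictor; not the thesis's actual prediction formulas).
    import numpy as np

    def hint_residuals(img: np.ndarray) -> np.ndarray:
        """Return prediction residuals for pixels NOT on the 2x coarse grid."""
        h, w = img.shape
        img = img.astype(np.int32)
        residuals = np.zeros_like(img)
        for r in range(h):
            for c in range(w):
                if r % 2 == 0 and c % 2 == 0:
                    continue  # coarse-grid pixel, transmitted at the lower resolution level
                # Average the coarse-grid neighbours that fall inside the image.
                neigh = []
                for dr in (-1, 0, 1):
                    for dc in (-1, 0, 1):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < h and 0 <= cc < w and rr % 2 == 0 and cc % 2 == 0:
                            neigh.append(img[rr, cc])
                pred = sum(neigh) // len(neigh)
                residuals[r, c] = img[r, c] - pred
        return residuals  # residuals would then be entropy coded (e.g., Golomb codes)
    ```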

    Burnout among surgeons before and during the SARS-CoV-2 pandemic: an international survey

    Background: The SARS-CoV-2 pandemic has had many significant impacts within the surgical realm, and surgeons have been obligated to reconsider almost every aspect of daily clinical practice. Methods: This is a cross-sectional study reported in compliance with the CHERRIES guidelines and conducted through an online platform from June 14th to July 15th, 2020. The primary outcome was the burden of burnout during the pandemic, indicated by the validated Shirom-Melamed Burnout Measure. Results: Nine hundred fifty-four surgeons completed the survey. The median length of practice was 10 years; 78.2% of respondents were male, with a median age of 37 years; 39.5% were consultants, 68.9% were general surgeons, and 55.7% were affiliated with an academic institution. Overall, there was a significant increase in the mean burnout score during the pandemic; longer years of practice and older age were significantly associated with less burnout. There were significant reductions in the median number of outpatient visits, operated cases, on-call hours, emergency visits, and research work, and 48.2% of respondents felt that the training resources were insufficient. The majority (81.3%) of respondents reported that their hospitals were involved in the management of COVID-19; 66.5% felt their roles had been minimized, 41% were asked to assist in non-surgical medical practices, and 37.6% were directly involved in COVID-19 management. Conclusions: There was significant burnout among trainees. Almost all aspects of clinical and research activities were affected, with a significant reduction in the volume of research, outpatient clinic visits, surgical procedures, on-call hours, and emergency cases, hindering training. Trial registration: The study was registered on clinicaltrials.gov (NCT04433286) on 16/06/2020.

    A tree-based explainable AI model for early detection of Covid-19 using physiological data

    Abstract With the outbreak of COVID-19 in 2020, countries worldwide faced significant concerns and challenges. Various studies have emerged utilizing Artificial Intelligence (AI) and Data Science techniques for disease detection. Although COVID-19 cases have declined, there are still cases and deaths around the world. Therefore, early detection of COVID-19 before the onset of symptoms has become crucial in reducing its extensive impact. Fortunately, wearable devices such as smartwatches have proven to be valuable sources of physiological data, including Heart Rate (HR) and sleep quality, enabling the detection of inflammatory diseases. In this study, we utilize an existing dataset that includes individual step counts and heart rate data to predict the probability of COVID-19 infection before the onset of symptoms. We train three main model architectures: a Gradient Boosting (GB) classifier, CatBoost trees, and a TabNet classifier to analyze the physiological data and compare their respective performances. We also add an interpretability layer to our best-performing model, which clarifies prediction results and allows a detailed assessment of effectiveness. Moreover, we created a private dataset by gathering physiological data from Fitbit devices to guarantee reliability and avoid bias. The identical set of pre-trained models was then applied to this private dataset, and the results were documented. Using the CatBoost tree-based method, our best-performing model outperformed previous studies with an accuracy of 85% on the publicly available dataset. Furthermore, the same pre-trained CatBoost model produced an accuracy of 81% when applied to the private dataset. The source code is available at: https://github.com/OpenUAE-LAB/Covid-19-detection-using-Wearable-data.git
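    A minimal sketch of the kind of pipeline described above, assuming hypothetical per-day features (resting heart rate, deviation from a personal baseline, step count) and hyperparameters not stated in the abstract, could look like the following; it is not the study's actual configuration.

    ```python
    # Illustrative sketch: CatBoost classifier over daily wearable features
    # to flag probable pre-symptomatic infection. Column names, file name,
    # and hyperparameters are assumptions.
    import pandas as pd
    from catboost import CatBoostClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    df = pd.read_csv("daily_features.csv")                 # hypothetical feature table
    X = df[["resting_hr", "hr_vs_baseline", "steps"]]
    y = df["pre_symptomatic"]                               # 1 = infected, pre-symptom window

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=42)

    model = CatBoostClassifier(iterations=500, depth=6, learning_rate=0.05, verbose=False)
    model.fit(X_tr, y_tr)

    print("accuracy:", accuracy_score(y_te, model.predict(X_te)))
    ```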

    Intelligent Biofeedback Augmented Content Comprehension (TellBack)

    Assessing comprehension difficulties requires the ability to assess cognitive load. Changes in cognitive load induced by comprehension difficulties can be detected with adequate time resolution using different biofeedback measures (e.g., changes in pupil diameter). However, identifying the spatio-temporal sources of content comprehension difficulties (i.e., when and where exactly the difficulty occurs in content regions) with fine granularity is a major challenge that has not been explicitly addressed in the state of the art. This paper proposes and evaluates an innovative approach named Intelligent Biofeedback Augmented Content Comprehension (TellBack) to explicitly address this challenge. The goal is to autonomously identify regions of digital content that cause a user's comprehension difficulty, opening the possibility of providing real-time comprehension support to users. TellBack is based on assessing the cognitive load associated with content comprehension through inexpensive, non-intrusive biofeedback devices that acquire measures such as pupil response or Heart Rate Variability (HRV). To identify when exactly the difficulty in comprehension occurs, physiological manifestations of the Autonomic Nervous System (ANS) such as pupil diameter variability and the modulation of HRV are exploited, whereas the fine spatial resolution (i.e., the region of content the user is looking at) is provided by eye tracking. The evaluation results of this approach show an accuracy of 83.00% ± 0.75 in classifying regions of content as difficult or not difficult using a Support Vector Machine (SVM), and precision, recall, and micro F1-score of 0.89, 0.79, and 0.83, respectively. Results obtained with four other classifiers, namely Random Forest, k-nearest neighbor, Decision Tree, and Gaussian Naive Bayes, showed slightly lower precision. TellBack outperforms the state of the art in precision and recall by 23% and 17%, respectively.
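    As a hedged illustration of the classification step described above, a minimal sketch could train an SVM on per-region pupil and HRV features aligned with eye-tracking data; the feature names, file names, and SVM settings are assumptions, not TellBack's actual configuration.

    ```python
    # Illustrative sketch: classify content regions as "difficult" vs
    # "not difficult" from pupil-diameter and HRV features.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import cross_val_score

    # Hypothetical per-region features: mean pupil-diameter change, pupil
    # variability, and an HRV modulation index, aligned to the region the
    # eye tracker says the user was reading.
    X = np.load("region_features.npy")   # shape: (n_regions, 3)
    y = np.load("region_labels.npy")     # 1 = comprehension difficulty, 0 = none

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    scores = cross_val_score(clf, X, y, cv=5)
    print("mean CV accuracy:", scores.mean())
    ```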

    Wearable Devices and Explainable Unsupervised Learning for COVID-19 Detection and Monitoring

    Despite declining COVID-19 cases, global healthcare systems still face significant challenges due to ongoing infections, especially among fully vaccinated individuals, including adolescents and young adults (AYA). To tackle this issue, cost-effective alternatives utilizing technologies like Artificial Intelligence (AI) and wearable devices have emerged for disease screening, diagnosis, and monitoring. However, many AI solutions in this context rely heavily on supervised learning techniques, which pose challenges such as the reliability of human labeling and time-consuming data annotation. In this study, we propose an innovative unsupervised framework that leverages smartwatch data to detect and monitor COVID-19 infections. We utilize longitudinal data, including heart rate (HR), heart rate variability (HRV), and physical activity measured via step count, collected through the continuous monitoring of volunteers. Our goal is to offer effective and affordable solutions for COVID-19 detection and monitoring. Our unsupervised framework employs interpretable clusters of normal and abnormal measures, facilitating disease progression detection. Additionally, we enhance result interpretation by leveraging the language model Davinci GPT-3 to gain deeper insights into the underlying data patterns and relationships. Our results demonstrate the effectiveness of unsupervised learning, achieving a Silhouette score of 0.55. Furthermore, validation using supervised learning techniques yields high accuracy (0.884 ± 0.005), precision (0.80 ± 0.112), and recall (0.817 ± 0.037). These promising findings indicate the potential of unsupervised techniques for identifying inflammatory markers, contributing to the development of efficient and reliable COVID-19 detection and monitoring methods. Our study demonstrates the capabilities of AI and wearables in the pursuit of low-cost, accessible solutions for addressing health challenges related to inflammatory diseases, opening new avenues for scalable and widely applicable health monitoring solutions.
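    A minimal sketch of the unsupervised step described above, assuming two clusters (normal vs. abnormal days) and illustrative feature and file names not taken from the paper, might look like this:

    ```python
    # Illustrative sketch: cluster daily wearable measures and score the
    # separation with the Silhouette coefficient. Feature names and k=2
    # are assumptions.
    import pandas as pd
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    df = pd.read_csv("smartwatch_daily.csv")          # hypothetical file
    X = StandardScaler().fit_transform(df[["hr", "hrv", "steps"]])

    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
    labels = kmeans.fit_predict(X)

    print("silhouette:", silhouette_score(X, labels))
    # Cluster centres (in standardised units) can then be inspected, or
    # summarised with a language model, to describe what "abnormal" days look like.
    ```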

    Wearable Devices, Smartphones, and Interpretable Artificial Intelligence in Combating COVID-19

    Physiological measures, such as heart rate variability (HRV) and beats per minute (BPM), can be powerful health indicators of respiratory infections. HRV and BPM can be acquired through widely available wrist-worn biometric wearables and smartphones. Successive abnormal changes in these indicators could potentially be an early sign of respiratory infections such as COVID-19. Thus, wearables and smartphones should play a significant role in combating COVID-19 through early detection supported by other contextual data and artificial intelligence (AI) techniques. In this paper, we investigate the role of heart measurements (i.e., HRV and BPM) collected from wearables and smartphones in demonstrating early onset of the inflammatory response to COVID-19. The AI framework consists of two blocks: an interpretable prediction model to classify the HRV measurements' status (as normal or affected by inflammation) and a recurrent neural network (RNN) to analyze users' daily status (i.e., textual logs in a mobile application). Both classification decisions are integrated to generate the final decision as either "potentially COVID-19 infected" or "no evident signs of infection". We used a publicly available dataset, which comprises 186 patients with more than 3200 HRV readings and numerous user textual logs. The first evaluation of the approach showed an accuracy of 83.34 ± 1.68%, with precision, recall, and F1-score of 0.91, 0.88, and 0.89, respectively, in predicting the infection two days before the onset of symptoms, supported by model interpretation using Local Interpretable Model-agnostic Explanations (LIME).
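    A hedged sketch of the interpretable HRV-status block is shown below: a tabular classifier over HRV features whose individual predictions are explained with LIME. The feature names, file names, and the underlying random-forest model are illustrative assumptions, and the RNN over textual logs is omitted.

    ```python
    # Illustrative sketch: classify HRV status and explain one prediction with LIME.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from lime.lime_tabular import LimeTabularExplainer

    X_train = np.load("hrv_train.npy")   # hypothetical (n_samples, n_features)
    y_train = np.load("hrv_labels.npy")  # 1 = affected by inflammation, 0 = normal
    feature_names = ["rmssd", "sdnn", "mean_bpm", "bpm_trend"]

    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

    explainer = LimeTabularExplainer(
        X_train, feature_names=feature_names,
        class_names=["normal", "affected"], mode="classification")

    # Explain a single day's reading: which features pushed it toward "affected"?
    exp = explainer.explain_instance(X_train[0], clf.predict_proba, num_features=4)
    print(exp.as_list())
    ```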

    Dynamically predicting comprehension difficulties through physiological data and intelligent wearables

    Abstract Comprehending digital content written in natural language online is vital for many aspects of life, including learning, professional tasks, and decision-making. However, comprehension difficulties can have negative consequences for learning outcomes, critical thinking skills, decision-making, error rates, and productivity. This paper introduces an innovative approach to predicting comprehension difficulties at the local content level (e.g., paragraphs). Using affordable wearable devices, we non-intrusively acquire physiological responses from the autonomic nervous system, specifically pulse rate variability and electrodermal activity. Additionally, we integrate data from a cost-effective eye tracker. Our machine learning algorithms identify 'hotspots' within the content, i.e., regions corresponding to a high cognitive load. These hotspots represent real-time predictors of comprehension difficulties. By integrating physiological data with contextual information (such as an individual's level of experience), our approach achieves an accuracy of 72.11% ± 2.21, a precision of 0.77, a recall of 0.70, and an F1-score of 0.73. This study opens possibilities for developing intelligent, cognitive-aware interfaces. Such interfaces can provide immediate contextual support, mitigating comprehension challenges within content. Whether through translation, content generation, or content summarization using available Large Language Models, this approach has the potential to enhance language comprehension.
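    A minimal sketch of the fusion step described above, assuming hypothetical per-paragraph features (pulse rate variability, electrodermal activity, eye-tracking dwell measures, and a reader-experience level) and a generic classifier not specified in the abstract, could be:

    ```python
    # Illustrative sketch: flag paragraph-level "hotspots" from physiological
    # plus contextual features. Feature names, file name, and the classifier
    # choice are assumptions.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_validate

    df = pd.read_csv("paragraph_features.csv")   # hypothetical per-paragraph table
    X = df[["prv_rmssd", "eda_phasic_peaks", "fixation_time_s", "revisits",
            "reader_experience_level"]]          # physiological + contextual
    y = df["is_hotspot"]                         # 1 = comprehension difficulty

    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    scores = cross_validate(clf, X, y, cv=5,
                            scoring=["accuracy", "precision", "recall", "f1"])
    for m in ("accuracy", "precision", "recall", "f1"):
        print(m, scores[f"test_{m}"].mean())
    ```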

    On the accuracy of code complexity metrics: A neuroscience-based guideline for improvement

    Complexity is a key element of software quality. This article investigates the problem of measuring code complexity and discusses the results of a controlled experiment comparing different views and methods of measuring it. Participants (27 programmers) were asked to read and (try to) understand a set of programs, while the complexity of those programs was assessed through different methods and perspectives: (a) classic code complexity metrics such as the McCabe and Halstead metrics, (b) cognitive complexity metrics based on scored code constructs, (c) cognitive complexity metrics from state-of-the-art tools such as SonarQube, (d) human-centered metrics relying on the direct assessment of programmers' behavioral features (e.g., reading time and revisits) using eye tracking, and (e) cognitive load/mental effort assessed using electroencephalography (EEG). The human-centered perspective was complemented by the participants' subjective evaluation of the mental effort required to understand the programs, using the NASA Task Load Index (TLX). Additionally, code complexity was evaluated both at the program level and, whenever possible, at the very low level of code constructs/code regions, to identify the actual code elements and the code context that may trigger a complexity surge in the programmers' perception of code comprehension difficulty. The programmers' cognitive load measured using EEG was used as a reference to evaluate how well the different metrics express the (human) difficulty of comprehending the code. Extensive experimental results show that popular metrics such as V(g) and the complexity metric from SonarSource tools deviate considerably from the programmers' perception of code complexity and often do not show the expected monotonic behavior. The article summarizes the findings in a set of guidelines to improve existing code complexity metrics, particularly state-of-the-art metrics such as the cognitive complexity from SonarSource tools.
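    For readers unfamiliar with the V(g) metric mentioned above, the following is a simplified, illustrative approximation of McCabe's cyclomatic complexity for a Python snippet: one plus the number of branching constructs. Real metric tools (e.g., SonarQube or radon) apply more detailed rules; this sketch only conveys the flavour of such structural metrics and is not the article's measurement procedure.

    ```python
    # Simplified approximation of cyclomatic complexity V(g) = 1 + branch points.
    import ast

    BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp, ast.IfExp)

    def cyclomatic_complexity(source: str) -> int:
        tree = ast.parse(source)
        return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

    example = """
    def classify(x):
        if x < 0:
            return "negative"
        for i in range(x):
            if i % 2 == 0 and i > 2:
                print(i)
        return "done"
    """
    # Counts two 'if's, one 'for', and one boolean 'and' -> 1 + 4 = 5
    print(cyclomatic_complexity(example))
    ```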