
    Prediction of COVID-19 Hospital Length of Stay and Risk of Death Using Artificial Intelligence-Based Modeling

    Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is a highly infectious virus that places overwhelming demand on healthcare systems, which require advanced predictive analytics to manage COVID-19 more effectively and efficiently. We analyzed clinical data from 2017 COVID-19 cases reported to the Dubai Health Authority and developed models to predict each patient's length of hospital stay and risk of death. A decision tree (DT) model predicting COVID-19 length of stay was developed from patient clinical information. The model showed very good performance, with a coefficient of determination (R2) of 49.8% and a median absolute deviation of 2.85 days. A second DT-based model was constructed to predict COVID-19 risk of death. This model showed excellent performance, with sensitivity and specificity of 96.5% and 87.8%, respectively, and an overall prediction accuracy of 96%. Further validation using unsupervised learning methods showed similar separation patterns, and a receiver operating characteristic analysis suggested stable and robust DT model performance. The results indicate a high risk of death of 78.2% for intubated COVID-19 patients who have not received anticoagulant medication. Fortunately, intubated patients receiving anticoagulant and dexamethasone medications with an international normalized ratio of <1.69 had zero risk of death from COVID-19. In conclusion, we constructed artificial intelligence–based models that accurately predict the length of hospital stay and risk of death in COVID-19 cases. These smart models will arm physicians on the front line to enhance management strategies and save lives.
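
    The paper's code and the Dubai Health Authority dataset are not published, so the following is only a minimal sketch of the kind of decision-tree modelling the abstract describes, using scikit-learn; the file name, feature columns, label names, and tree depth are assumptions.

```python
# Hypothetical sketch of decision-tree models for length of stay and risk of
# death; "covid_cases.csv" and the feature/label column names are placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor, DecisionTreeClassifier
from sklearn.metrics import median_absolute_error, confusion_matrix

df = pd.read_csv("covid_cases.csv")
features = ["age", "intubated", "anticoagulant", "dexamethasone", "inr"]  # assumed
X = df[features]

# Length-of-stay model (regression tree), summarized by median absolute error in days.
X_tr, X_te, y_tr, y_te = train_test_split(X, df["length_of_stay_days"], random_state=0)
los_tree = DecisionTreeRegressor(max_depth=5, random_state=0).fit(X_tr, y_tr)
print("median absolute error (days):", median_absolute_error(y_te, los_tree.predict(X_te)))

# Risk-of-death model (classification tree), evaluated by sensitivity and specificity.
X_tr, X_te, y_tr, y_te = train_test_split(X, df["died"], stratify=df["died"], random_state=0)
death_tree = DecisionTreeClassifier(max_depth=5, class_weight="balanced", random_state=0).fit(X_tr, y_tr)
tn, fp, fn, tp = confusion_matrix(y_te, death_tree.predict(X_te)).ravel()
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```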

    Face Plastic Surgery Recognition Model Based on Neural Network and Meta-Learning Model

    Facial recognition is a procedure for verifying a person's identity using the face, and is considered one of the biometric security methods. However, facial recognition methods face many challenges, such as face aging, wearing a face mask, having a beard, and undergoing plastic surgery, all of which decrease their accuracy. This study evaluates the impact of plastic surgery on face recognition models. The motivation for this focus is that plastic surgery not only changes the shape and texture of a face but has also become increasingly common. This paper proposes a model based on an artificial neural network with model-agnostic meta-learning (ANN-MAML) for plastic surgery face recognition. The study aims to build a framework for recognizing faces before and after plastic surgery based on an artificial neural network, and to clarify how facial plastic surgery interacts with face recognition software in order to identify the resulting issues. The researchers evaluated the proposed ANN-MAML's performance on the HDA dataset. The experimental results show that ANN-MAML attained an accuracy of 90% in facial recognition on rhinoplasty (nose surgery) images, 91% on blepharoplasty (eyelid surgery) images, 94% on brow lift (forehead surgery) images, and 92% on rhytidectomy (facelift) images. Finally, the results of the proposed model were compared with baseline methods, showing the superiority of ANN-MAML over the baselines.
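
    As a rough illustration of the meta-learning component, the sketch below implements the standard MAML inner/outer loop around a small neural network in PyTorch; the network size, learning rates, and the synthetic task sampler are assumptions standing in for the paper's actual architecture and the HDA before/after-surgery tasks.

```python
# Sketch of MAML around a small MLP; data and hyperparameters are placeholders.
import torch
import torch.nn.functional as F

def forward(x, params):
    """Two-layer MLP applied with an explicit parameter list."""
    w1, b1, w2, b2 = params
    return F.linear(F.relu(F.linear(x, w1, b1)), w2, b2)

def init_params(in_dim=128, hidden=64, n_classes=5):
    shapes = [(hidden, in_dim), (hidden,), (n_classes, hidden), (n_classes,)]
    return [(0.01 * torch.randn(*s)).requires_grad_() for s in shapes]

def sample_tasks(n_tasks=4, n_classes=5, shots=5, in_dim=128):
    """Synthetic stand-in for sampling per-identity recognition tasks."""
    for _ in range(n_tasks):
        protos = torch.randn(n_classes, in_dim)             # one prototype per identity
        labels = torch.arange(n_classes).repeat(shots)
        support = protos[labels] + 0.1 * torch.randn(len(labels), in_dim)
        query = protos[labels] + 0.1 * torch.randn(len(labels), in_dim)
        yield support, labels, query, labels

params = init_params()
meta_opt = torch.optim.Adam(params, lr=1e-3)
inner_lr = 0.01

for step in range(100):
    meta_opt.zero_grad()
    for sx, sy, qx, qy in sample_tasks():
        # Inner loop: one gradient step adapted to this task's support set.
        grads = torch.autograd.grad(F.cross_entropy(forward(sx, params), sy),
                                    params, create_graph=True)
        adapted = [p - inner_lr * g for p, g in zip(params, grads)]
        # Outer loop: the query-set loss of the adapted weights updates the meta-parameters.
        F.cross_entropy(forward(qx, adapted), qy).backward()
    meta_opt.step()
```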

    Differences in selectivity to natural images in early visual areas (V1–V3)

    High-level regions of the ventral visual pathway respond more to intact objects than to scrambled objects. The aim of this study was to determine whether this selectivity for objects emerges at an earlier stage of processing. Visual areas (V1–V3) were defined for each participant using retinotopic mapping. Participants then viewed intact and scrambled images from different object categories (bottle, chair, face, house, shoe) while neural responses were measured using fMRI. Our rationale for using scrambled images is that they contain the same low-level properties as the intact objects, but lack the higher-order combinations of features that are characteristic of natural images. Neural responses were higher for scrambled than intact images in all regions. However, the difference between intact and scrambled images was smaller in V3 than in V1 and V2. Next, we measured the spatial patterns of response to intact and scrambled images from different object categories. We found higher within-category than between-category correlations for both intact and scrambled images, demonstrating distinct patterns of response. Spatial patterns of response were more distinct for intact compared to scrambled images in V3, but not in V1 or V2. These findings demonstrate the emergence of selectivity to natural images in V3.
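
    The within- versus between-category comparison of response patterns can be summarized with a simple split-half correlation analysis, sketched below on simulated data; the voxel counts and noise levels are placeholders, with only the category list taken from the abstract.

```python
# Illustrative within- vs between-category pattern correlations; real inputs
# would be voxel-wise fMRI responses per category from independent data splits.
import numpy as np

categories = ["bottle", "chair", "face", "house", "shoe"]
n_voxels = 500
rng = np.random.default_rng(0)

# patterns[split][category] -> simulated voxel response pattern (e.g., odd vs even runs).
base = {c: rng.normal(size=n_voxels) for c in categories}
patterns = [{c: base[c] + rng.normal(size=n_voxels) for c in categories}
            for _ in range(2)]

within, between = [], []
for a in categories:
    for b in categories:
        r = np.corrcoef(patterns[0][a], patterns[1][b])[0, 1]
        (within if a == b else between).append(r)

# Distinct category-specific patterns: within-category r exceeds between-category r.
print("mean within:", np.mean(within), "mean between:", np.mean(between))
```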

    The Time Course of Segmentation and Cue-Selectivity in the Human Visual Cortex

    Texture discontinuities are a fundamental cue by which the visual system segments objects from their background. The neural mechanisms supporting texture-based segmentation are therefore critical to visual perception and cognition. In the present experiment we employ an EEG source-imaging approach to study the time course of texture-based segmentation in the human brain. Visual evoked potentials were recorded to four types of stimuli in which periodic temporal modulation of a central 3° figure region could either support figure-ground segmentation, or produce identical local texture modulations without changes in global image segmentation. The image discontinuities were defined either by orientation or by phase differences across image regions. Evoked responses to these four stimuli were analyzed both at the scalp and on the cortical surface in retinotopic and functional regions-of-interest (ROIs) defined separately using fMRI on a subject-by-subject basis. Texture-segmentation (tsVEP: segmenting versus non-segmenting) and cue-specific (csVEP: orientation versus phase) responses exhibited distinctive patterns of activity. Alternations between uniform and segmented images produced highly asymmetric responses that were larger after transitions from the uniform to the segmented state. Texture modulations that signaled the appearance of a figure evoked a pattern of increased activity starting at ∼143 ms that was larger in the V1 and LOC ROIs, relative to identical modulations that did not signal figure-ground segmentation. This segmentation-related activity occurred after an initial response phase that did not depend on the global segmentation structure of the image. The two cue types evoked similar tsVEPs up to 230 ms, after which they differed in the V4 and LOC ROIs. The evolution of the response proceeded largely in the feed-forward direction, with only weak evidence for feedback-related activity.
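
    The tsVEP contrast amounts to averaging stimulus-locked epochs per condition and subtracting the non-segmenting response from the segmenting one; the sketch below does this on simulated single-channel data, so the sampling rate, epoch window, and evoked waveforms are assumptions rather than the study's recordings.

```python
# Simulated tsVEP: condition-average the epochs, take the segmenting-minus-
# non-segmenting difference, and report its peak latency.
import numpy as np

fs = 500                                  # sampling rate in Hz (assumed)
t = np.arange(-0.1, 0.5, 1 / fs)          # epoch window around stimulus onset (s)
rng = np.random.default_rng(1)

def epochs(n_trials, latency, amplitude):
    """Single-channel epochs with a Gaussian evoked deflection at `latency` seconds."""
    evoked = amplitude * np.exp(-((t - latency) ** 2) / (2 * 0.02 ** 2))
    return evoked + rng.normal(scale=0.5, size=(n_trials, t.size))

segmenting = epochs(200, latency=0.143, amplitude=1.0)      # figure-ground transitions
non_segmenting = epochs(200, latency=0.143, amplitude=0.6)  # same local modulation, no figure

tsvep = segmenting.mean(axis=0) - non_segmenting.mean(axis=0)
print(f"tsVEP peak at {t[np.argmax(tsvep)] * 1000:.0f} ms")
```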

    The Effect of Sensory Uncertainty Due to Amblyopia (Lazy Eye) on the Planning and Execution of Visually-Guided 3D Reaching Movements

    Background: Impairment of spatiotemporal visual processing in amblyopia has been studied extensively, but its effects on visuomotor tasks have rarely been examined. Here, we investigate how visual deficits in amblyopia affect motor planning and online control of visually-guided, unconstrained reaching movements. Methods: Thirteen patients with mild amblyopia, 13 with severe amblyopia and 13 visually-normal participants were recruited. Participants reached and touched a visual target during binocular and monocular viewing. Motor planning was assessed by examining spatial variability of the trajectory at 50–100 ms after movement onset. Online control was assessed by examining endpoint variability and by calculating the coefficient of determination (R²), which relates the spatial position of the limb during the movement to the endpoint position. Results: Patients with amblyopia had reduced precision of the motor plan in all viewing conditions, as evidenced by increased variability of the reach early in the trajectory. Endpoint precision was comparable between patients with mild amblyopia and control participants. Patients with severe amblyopia had reduced endpoint precision along azimuth and elevation during amblyopic eye viewing only, and along the depth axis in all viewing conditions. In addition, they had significantly higher R² values at 70% of movement time along the elevation and depth axes during amblyopic eye viewing. Conclusion: Sensory uncertainty due to amblyopia leads to reduced precision of the motor plan. The ability to implemen
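
    The R² measure of online control can be illustrated by correlating limb position at a fixed fraction of movement time with the final endpoint; the sketch below does this for simulated one-dimensional reaches, so the trial counts, units, and noise levels are placeholders rather than the study's 3D recordings.

```python
# Illustrative R² between limb position at a given fraction of movement time
# and reach endpoint, computed on simulated trajectories.
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_samples = 60, 101            # 101 normalized time points per reach

# Simulated 1D positions: each trial moves toward its own endpoint plus noise.
endpoints = rng.normal(loc=30.0, scale=1.5, size=n_trials)   # cm
time = np.linspace(0.0, 1.0, n_samples)
traj = endpoints[:, None] * time[None, :] + rng.normal(scale=0.5, size=(n_trials, n_samples))

def r_squared_at(fraction):
    """R² between limb position at `fraction` of movement time and the endpoint."""
    x = traj[:, int(fraction * (n_samples - 1))]
    r = np.corrcoef(x, endpoints)[0, 1]
    return r ** 2

# Higher R² early in the reach implies less online correction later in the movement.
print("R² at 70% of movement time:", r_squared_at(0.70))
```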

    Ubiquitous Health Technology Management (uHTM)
