40 research outputs found

    Automated detection of pain levels using deep feature extraction from shutter blinds‑based dynamic‑sized horizontal patches with facial images

    Pain intensity classification using facial images is a challenging problem in computer vision research. This work proposed a patch- and transfer-learning-based model to classify various pain intensities using facial images. The input facial images were segmented into dynamic-sized horizontal patches, or “shutter blinds”. A lightweight deep network, DarkNet19, pre-trained on ImageNet1K, was used to generate deep features from the shutter blinds and from the undivided, resized input facial image. The most discriminative features were selected from these deep features using iterative neighborhood component analysis and then fed to a standard shallow fine k-nearest neighbor classifier, evaluated using tenfold cross-validation. The proposed shutter blinds-based model was trained and tested on datasets derived from two public databases, the University of Northern British Columbia-McMaster Shoulder Pain Expression Archive Database and the Denver Intensity of Spontaneous Facial Action Database, both comprising four pain intensity classes labeled by human experts using validated facial action coding system methodology. Our shutter blinds-based classification model attained more than 95% overall accuracy on both datasets. This excellent performance suggests that the automated pain intensity classification model can be deployed to assist doctors in the non-verbal detection of pain using facial images in various situations (e.g., non-communicative patients or during surgery). This system can facilitate timely detection and management of pain.
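    The dynamic-sized horizontal patching step described above can be illustrated with a minimal numpy sketch. The function name and patch count are illustrative assumptions; the DarkNet19 feature generation and iterative neighborhood component analysis stages are not reproduced here.

```python
import numpy as np

def shutter_blind_patches(image, n_patches=4):
    """Split a face image into horizontal strips ("shutter blinds").

    Strip heights are dynamic: np.array_split derives them from the
    image height, so the same patch count works for any resolution,
    even when the height is not evenly divisible.
    """
    return np.array_split(image, n_patches, axis=0)

# toy 224x224 RGB face image; a real pipeline would load a segmented face
face = np.zeros((224, 224, 3), dtype=np.uint8)
patches = shutter_blind_patches(face, n_patches=4)
print(len(patches), patches[0].shape)  # 4 strips of 56 rows each
```

    Each strip, plus the whole resized face, would then be passed through the pre-trained network to produce the deep feature pool.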

    Determination of factors influencing student engagement using a learning management system in a tertiary setting

    Determining the key factors that affect student engagement will assist academics in improving students’ motivation. The Quality Indicators for Learning and Teaching (QILT) reports have shown low engagement levels in higher education student cohorts (QILT 2016, 2017). While factors such as online education, lack of attendance, and poor course content design have been cited as causes, it remains unclear which factors most influence student engagement in a higher education setting. It is widely accepted that the selection of appropriate learning resources is an essential phase in the education process. In contrast, an incompatible range of course materials can demotivate a student from engaging in the course (Quaye & Harper 2014). In the modern tertiary setting, Information and Communication Technology (ICT) plays an essential role in disseminating information, with a Learning Management System (LMS) as the platform to communicate crucial course-related information. Academics can develop course materials on these LMSs to engage students beyond the classroom, and students need to interact through the same platform to comprehend the transmitted knowledge. Since LMSs are operated on a computer platform, academics and students require strong ICT skills, which are further utilised in the preparation of course materials. The knowledge required depends on the relevance and appropriateness of materials, the way various tasks are prepared, how communication is facilitated, the role and utilisation of discussion forums and other available social media structures, and the way in which assessments are conducted. This cumulatively leads to the development of a Just in Time (JIT) type of knowledge, which can be challenging to measure. The investigation into these major factors forms the basis of this study.
Thus, understanding how various factors influence student engagement through the use of LMS platforms in a tertiary setting is the focus of this study. This study used a hybrid method involving a qualitative component to understand the factors that influence student engagement in an LMS-driven learning setting and a quantitative component to confirm various factors identified through the literature review. The study developed five specific hypotheses for testing, with the following outcomes:

H1: Students are influenced by teaching resources in order to realise engagement in classroom activities - ACCEPTED
H2: Academics influence engagement in classroom activities through their involvement in various teaching and management aspects - REJECTED
H3: An academic’s activities influence the management of teaching activities, resulting in improved engagement by students in the class - ACCEPTED
H4: Learning Management Systems (LMS) play a key part in improving students’ engagement - REJECTED
H5: Management of various study-related activities to reach focus in the study will positively influence students’ engagement - ACCEPTED

The outcomes of the study indicate that students and associated classroom activities, teaching resources, management of teaching, the way LMSs are established, and students’ requirements and needs play a key role in assuring engagement. This study also found that an academic’s activities play a less significant role in fostering engagement, as there appears to be a shift from teaching to teaching management, as evidenced in the qualitative discussion. Further, the participants expected academics to have superior technology communication skills, as these are essential in an LMS-driven setting. Interestingly, this study correlated with a number of standards dictated by the Tertiary Education Quality and Standards Agency (TEQSA), the regulatory body that enforces standards in Australian tertiary education.
This correlation was observed despite the fact that students who participated in this study had limited awareness of the TEQSA standards. The main contribution of this study is in highlighting that academics and other support services in tertiary settings should focus on how the LMS is presented, as participants expressed that clear navigation of the system is essential for engagement. This has profound implications for the way the recruitment of academics is conducted. In terms of practice, TEQSA standards are key in assuring quality in tertiary settings, and this study has provided strong evidence as to the need for support systems, the way learning objectives are mapped to deliver learning outcomes, appropriateness of the content, time imposition on students in managing their study-related activities, and integration of technology. These are now a standard part of the TEQSA assessment. The study can be further improved in the future by collecting data from various cohorts, for example, full-time vs part-time, domestic vs overseas, and mature vs school leavers, to better assess their views in terms of engagement, as these cohorts come with varying needs. These can then be encapsulated in the learning materials and systems development. This would then lead to a better alignment of learning management and engagement to realise better outcomes.

    Determination of factors influencing student engagement using a learning management system in a tertiary setting

    Determining the key factors that affect student engagement will assist academics in improving student motivation. The Quality Indicators for Learning and Teaching (QILT) reports have shown low engagement levels in higher education students [21, 22, 23]. While factors such as online education, lack of attendance, and poor design of course content have been cited as causes, it remains unclear which factors influence student engagement in a higher education setting. In the modern tertiary setting, Information and Communication Technology (ICT) plays an essential role in disseminating course-related information, with a Learning Management System (LMS) becoming the platform to communicate crucial course-related information. Academics can develop course materials on these LMSs to engage students beyond the classroom, and students need to interact with those LMSs to comprehend the transmitted knowledge. Since LMSs are operated on a computer platform, academics and students require strong ICT skills, which are further utilized in the preparation of course materials. The relevance and appropriateness of those materials, the way various tasks are prepared, how communication is facilitated, the role and utilization of discussion forums and other social media structures available to students, and the way in which assessments are conducted cumulatively provide the Just in Time (JIT) type of knowledge students require. The investigation into these major factors forms the basis of this study. Thus, understanding how various factors related to LMSs in a tertiary setting influence student engagement, and then determining those factors that contribute to this engagement, are the main objectives of this study.
To pursue these objectives, a hybrid method will be employed, involving a pseudo meta-analysis to unearth additional evidence required for the study, a comprehensive qualitative component to understand the sector factors, and a small quantitative component to confirm the sector views.

    NRC-Net: Automated noise robust cardio net for detecting valvular cardiac diseases using optimum transformation method with heart sound signals

    Cardiovascular diseases (CVDs) can be effectively treated when detected early, reducing mortality rates significantly. Traditionally, phonocardiogram (PCG) signals have been utilized for detecting cardiovascular disease due to their cost-effectiveness and simplicity. Nevertheless, various environmental and physiological noises frequently affect the PCG signals, compromising their essential distinctive characteristics. The prevalence of this issue in overcrowded and resource-constrained hospitals can compromise the accuracy of medical diagnoses. Therefore, this study aims to discover the optimal transformation method for detecting CVDs using noisy heart sound signals and to propose a noise-robust network to improve CVD classification performance. To identify the optimal transformation method for noisy heart sound data, mel-frequency cepstral coefficients (MFCCs), the short-time Fourier transform (STFT), the constant-Q nonstationary Gabor transform (CQT), and the continuous wavelet transform (CWT) have been used with VGG16. Furthermore, we propose a novel convolutional recurrent neural network (CRNN) architecture called noise robust cardio net (NRC-Net), a lightweight model to classify mitral regurgitation, aortic stenosis, mitral stenosis, mitral valve prolapse, and normal heart sounds using PCG signals contaminated with respiratory and random noises. An attention block is included to extract important temporal and spatial features from the noise-corrupted heart sound. The results of this study indicate that CWT is the optimal transformation method for noisy heart sound signals. When evaluated on the GitHub heart sound dataset, CWT demonstrates an accuracy of 95.69% for VGG16, which is 1.95% better than the second-best CQT transformation technique. Moreover, our proposed NRC-Net with CWT obtained an accuracy of 97.4%, which is 1.71% higher than the VGG16.
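    The CWT front end described above turns a one-dimensional heart sound into a two-dimensional scalogram that an image network such as VGG16 can consume. A toy numpy sketch of that idea follows; the wavelet shape, scales, and sampling rate are illustrative assumptions, and a real pipeline would use a wavelet library such as PyWavelets.

```python
import numpy as np

def simple_cwt(signal, scales):
    """Minimal continuous wavelet transform with a real Morlet-like
    wavelet, returning a scalogram magnitude (scales x time)."""
    out = np.empty((len(scales), len(signal)))
    for row, s in enumerate(scales):
        # wavelet support grows with the scale
        t = np.arange(-4 * s, 4 * s + 1)
        wavelet = np.cos(5 * t / s) * np.exp(-(t / s) ** 2 / 2) / np.sqrt(s)
        out[row] = np.convolve(signal, wavelet, mode="same")
    return np.abs(out)

fs = 2000                                   # assumed sampling rate, Hz
t = np.linspace(0, 1, fs, endpoint=False)
pcg = np.sin(2 * np.pi * 40 * t)            # toy 40 Hz "heart sound"
scalogram = simple_cwt(pcg, scales=np.arange(1, 33))
print(scalogram.shape)  # (32, 2000): an image-like input for a CNN
```

    The resulting 2-D array plays the role of the time-frequency image fed to VGG16 or NRC-Net.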

    Natural Language Processing in Electronic Health Records in Relation to Healthcare Decision-making: A Systematic Review

    Background: Natural Language Processing (NLP) is widely used to extract clinical insights from Electronic Health Records (EHRs). However, the lack of annotated data, automated tools, and other challenges hinder the full utilisation of NLP for EHRs. Various Machine Learning (ML), Deep Learning (DL) and NLP techniques are studied and compared to understand the limitations and opportunities in this space comprehensively. Methodology: After screening 261 articles from 11 databases, we included 127 papers for full-text review covering seven categories of articles: 1) medical note classification, 2) clinical entity recognition, 3) text summarisation, 4) deep learning (DL) and transfer learning architecture, 5) information extraction, 6) medical language translation and 7) other NLP applications. This study follows the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Result and Discussion: EHR was the most commonly used data type among the selected articles, and the datasets were primarily unstructured. Various ML and DL methods were used, with prediction or classification being the most common application. The most common use cases were: International Classification of Diseases, Ninth Revision (ICD-9) classification, clinical note analysis, and named entity recognition (NER) for clinical descriptions and research on psychiatric disorders. Conclusion: We find that the adopted ML models were not adequately assessed. In addition, the data imbalance problem is quite important, yet techniques must be found to address this underlying problem. Future studies should address key limitations of existing work, primarily in identifying lupus nephritis, suicide attempts, perinatal self-harm, and ICD-9 classification.

    A novel framework for distress detection through an automated speech processing system

    Based on our ongoing work, this work-in-progress project aims to develop an automated system to detect distress in people, to enable early referral for interventions targeting anxiety and depression, to mitigate suicidal ideation, and to improve adherence to treatment. The project will either utilize existing voice data to classify people into various scales of distress, or collect voice data as per existing standards of distress measurement, to develop the basic computing algorithms required to detect various attributes associated with distress in a person’s voice during a telephone call to a helpline. This will then be matched with already available psychological assessment instruments, such as the Distress Thermometer, for these persons. In order to trigger interventions, organizational contexts are essential, as interventions depend on the type of distress. Therefore, the model will be tested in various organizational settings, such as Police, Emergency, and Health, along with the distress detection instruments normally used in a psychological assessment, for accuracy and validation. The project will culminate in a fully automated integrated system and will save organizations significant resources. The translation of the project will be realized in step-change improvements to quality of life within the gamut of public policy.

    A review of automated sleep disorder detection

    Automated sleep disorder detection is challenging because physiological symptoms can vary widely. These variations make it difficult to create effective sleep disorder detection models which support human experts during diagnosis and treatment monitoring. From 2010 to 2021, authors of 95 scientific papers have taken up the challenge of automating sleep disorder detection. This paper provides an expert review of this work. We investigated whether digital technology and Artificial Intelligence (AI) can provide automated diagnosis support for sleep disorders. We followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines during the content discovery phase. We compared the performance of proposed sleep disorder detection methods, involving different datasets or signals. During the review, we found eight sleep disorders, of which sleep apnea and insomnia were the most studied. These disorders can be diagnosed using several kinds of biomedical signals, such as Electrocardiogram (ECG), Polysomnography (PSG), Electroencephalogram (EEG), Electromyogram (EMG), and snore sound. Subsequently, we established areas of commonality and distinctiveness. Common to all reviewed papers was that AI models were trained and tested with labelled physiological signals. Looking deeper, we discovered that 24 distinct algorithms were used for the detection task. The nature of these algorithms evolved: before 2017 only traditional Machine Learning (ML) was used. From 2018 onward, both ML and Deep Learning (DL) methods were used for sleep disorder detection. The strong emergence of DL algorithms has considerable implications for future detection systems because these algorithms demand significantly more data for training and testing when compared with ML.
Based on our review results, we suggest that both the type and the amount of labelled data are crucial for the design of future sleep disorder detection systems because this will steer the choice of AI algorithm which establishes the desired decision support. As a guiding principle, more labelled data will help to represent the variations in symptoms. DL algorithms can extract information from these larger data quantities more effectively; therefore, we predict that the role of these algorithms will continue to expand.

    Full-resolution Lung Nodule Segmentation from Chest X-ray Images using Residual Encoder-Decoder Networks

    Lung cancer is the leading cause of cancer death, and early diagnosis is associated with a positive prognosis. Chest X-ray (CXR) provides an inexpensive imaging mode for lung cancer diagnosis. Suspicious nodules are difficult to distinguish from vascular and bone structures using CXR. Computer vision has previously been proposed to assist human radiologists in this task; however, leading studies use down-sampled images and computationally expensive methods with unproven generalization. Instead, this study localizes lung nodules using efficient encoder-decoder neural networks that process full-resolution images to avoid any signal loss resulting from down-sampling. Encoder-decoder networks are trained and tested using the JSRT lung nodule dataset. The networks are used to localize lung nodules from an independent external CXR dataset. Sensitivity and false positive rates are measured using an automated framework to eliminate any observer subjectivity. These experiments allow for the determination of the optimal network depth, image resolution, and pre-processing pipeline for generalized lung nodule localization. We find that nodule localization is influenced by subtlety, with more subtle nodules being detected in earlier training epochs. Therefore, we propose a novel self-ensemble model from three consecutive epochs centered on the validation optimum. This ensemble achieved a sensitivity of 85% in 10-fold internal testing at a false positive rate of 8 per image. A sensitivity of 81% is achieved at a false positive rate of 6 following morphological false positive reduction. This result is comparable to more computationally complex systems based on linear and spatial filtering, but with a sub-second inference time that is faster than other methods. The proposed algorithm achieved excellent generalization results against an external dataset with a sensitivity of 77% at a false positive rate of 7.6.
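    The self-ensemble idea above, averaging the predictions of three consecutive training checkpoints centered on the validation optimum, can be sketched in a few lines. The array shapes, threshold, and function name are illustrative assumptions; the actual networks and post-processing are not reproduced.

```python
import numpy as np

def self_ensemble(mask_probs, threshold=0.5):
    """Average per-pixel nodule probability maps from consecutive
    checkpoints, then threshold the mean into a binary mask."""
    mean_prob = np.mean(mask_probs, axis=0)
    return (mean_prob >= threshold).astype(np.uint8)

# stand-in probability maps for epochs k-1, k, k+1 (real maps would come
# from the trained encoder-decoder network at those checkpoints)
rng = np.random.default_rng(0)
probs = rng.random((3, 256, 256))
mask = self_ensemble(probs)
print(mask.shape, mask.dtype)  # (256, 256) uint8
```

    Averaging before thresholding lets agreement between checkpoints suppress spurious single-epoch detections.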

    L-Tetrolet Pattern-Based Sleep Stage Classification Model Using Balanced EEG Datasets

    Background: Sleep stage classification is a crucial process for the diagnosis of sleep or sleep-related diseases. Currently, this process is based on manual electroencephalogram (EEG) analysis, which is resource-intensive and error-prone. Various machine learning models have been recommended to standardize and automate the analysis process to address these problems. Materials and methods: The well-known cyclic alternating pattern (CAP) sleep dataset is used to train and test an L-tetrolet pattern-based sleep stage classification model in this research. From this dataset, the following three cases are created: Insomnia, Normal, and Fused. For each of these cases, the machine learning model is tasked with identifying six sleep stages. The model is structured in terms of feature generation, feature selection, and classification. Feature generation is established with a new L-tetrolet (Tetris letter) function and multiple pooling decomposition for level creation. We fuse ReliefF and iterative neighborhood component analysis (INCA) feature selection using a threshold value. The hybrid, iterative feature selector is named threshold selection-based ReliefF and INCA (TSRFINCA). The selected features are classified using a cubic support vector machine. Results: The presented L-tetrolet pattern and TSRFINCA-based sleep stage classification model yields 95.43%, 91.05%, and 92.31% accuracies for the Insomnia, Normal, and Fused cases, respectively. Conclusion: The recommended L-tetrolet pattern and TSRFINCA-based model pushes the envelope of current knowledge engineering by accurately classifying sleep stages even in the presence of sleep disorders.
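    The threshold-based fusion of two feature selectors can be sketched as follows. This is only a hypothetical illustration: the abstract does not give the exact TSRFINCA rule, so the normalisation, the "either selector clears the threshold" criterion, and all names here are assumptions, and the real ReliefF and INCA weights are replaced by random stand-ins.

```python
import numpy as np

def threshold_fuse(relieff_scores, inca_scores, threshold=0.5):
    # Hypothetical fusion rule: normalise each selector's feature
    # weights to [0, 1] and keep any feature whose normalised score
    # clears the threshold in either ranking.
    r = relieff_scores / relieff_scores.max()
    i = inca_scores / inca_scores.max()
    return np.flatnonzero((r > threshold) | (i > threshold))

rng = np.random.default_rng(1)
relieff_w = rng.random(100)  # stand-in for real ReliefF weights
inca_w = rng.random(100)     # stand-in for real INCA weights
selected = threshold_fuse(relieff_w, inca_w, threshold=0.9)
print(selected.size, "features selected")
```

    The selected feature indices would then be passed to the cubic support vector machine.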

    Hybrid deep feature generation for appropriate face mask use detection

    Mask usage is one of the most important precautions to limit the spread of COVID-19. Therefore, hygiene rules enforce the correct use of face coverings. Automated mask usage classification might be used to improve compliance monitoring. This study deals with the problem of inappropriate mask use. To address that problem, 2075 face mask usage images were collected. The individual images were labeled as mask, no mask, or improper mask. Based on these labels, the following three cases were created: Case 1: mask versus no mask versus improper mask, Case 2: mask versus no mask + improper mask, and Case 3: mask versus no mask. This data was used to train and test a hybrid deep feature-based masked face classification model. The presented method comprises three primary stages: (i) pre-trained ResNet101 and DenseNet201 were used as feature generators, each extracting 1000 features from an image; (ii) the most discriminative features were selected using an improved RelieF selector; and (iii) the chosen features were used to train and test a support vector machine classifier. The resulting model attained 95.95%, 97.49%, and 100.0% classification accuracy rates on Case 1, Case 2, and Case 3, respectively. These high accuracy values indicate that the proposed model is fit for a practical trial to detect appropriate face mask use in real time.
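    The hybrid feature generation step, combining the two 1000-dimensional vectors from ResNet101 and DenseNet201, amounts to a simple concatenation per image, sketched below. The stand-in arrays are assumptions; real features would come from the penultimate layers of the pre-trained networks.

```python
import numpy as np

def hybrid_features(feat_resnet, feat_densenet):
    """Concatenate two 1000-dimensional deep feature vectors per image
    into a single 2000-dimensional hybrid representation."""
    return np.concatenate([feat_resnet, feat_densenet], axis=1)

# stand-in deep features for a batch of 8 images
rng = np.random.default_rng(2)
resnet_feats = rng.random((8, 1000))
densenet_feats = rng.random((8, 1000))
hybrid = hybrid_features(resnet_feats, densenet_feats)
print(hybrid.shape)  # (8, 2000)
```

    The RelieF selector would then rank these 2000 columns before the SVM sees them.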