
    Automated detection of pain levels using deep feature extraction from shutter blinds‑based dynamic‑sized horizontal patches with facial images

    Pain intensity classification using facial images is a challenging problem in computer vision research. This work proposed a patch- and transfer-learning-based model to classify various pain intensities from facial images. The input facial images were segmented into dynamic-sized horizontal patches, or “shutter blinds”. A lightweight deep network, DarkNet19, pre-trained on ImageNet1K, was used to generate deep features from the shutter blinds and from the undivided, resized input facial image. The most discriminative features were selected from these deep features using iterative neighborhood component analysis and then fed to a standard shallow fine k-nearest neighbor classifier for classification using tenfold cross-validation. The proposed shutter blinds-based model was trained and tested on datasets derived from two public databases—the University of Northern British Columbia-McMaster Shoulder Pain Expression Archive Database and the Denver Intensity of Spontaneous Facial Action Database—both comprising four pain intensity classes labeled by human experts using validated facial action coding system methodology. Our shutter blinds-based classification model attained overall accuracy rates above 95% on both datasets. This excellent performance suggests that the automated pain intensity classification model can be deployed to assist doctors in the non-verbal detection of pain from facial images in various situations (e.g., non-communicative patients or during surgery), facilitating timely detection and management of pain
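The patch-generation step can be sketched as follows. The equal-height split and the helper name `shutter_blind_patches` are illustrative assumptions, since the abstract does not define the dynamic sizing rule:

```python
import numpy as np

def shutter_blind_patches(image, n_patches=4):
    """Split a face image into horizontal strips ("shutter blinds").

    The strips here have equal heights; the paper's dynamic sizing rule is
    not specified in the abstract, so this split is a placeholder.
    """
    height = image.shape[0]
    bounds = np.linspace(0, height, n_patches + 1, dtype=int)
    return [image[bounds[i]:bounds[i + 1]] for i in range(n_patches)]

# Each strip (and the whole resized face) would then be passed through a
# pre-trained CNN (DarkNet19 in the paper) to generate deep features,
# before feature selection and kNN classification.
face = np.zeros((128, 96), dtype=np.uint8)  # dummy grayscale face image
patches = shutter_blind_patches(face, n_patches=4)
print([p.shape for p in patches])  # four 32-row horizontal strips
```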

    Determination of factors influencing student engagement using a learning management system in a tertiary setting

    Determining the key factors that affect student engagement will assist academics in improving students’ motivation. The Quality Indicators for Learning and Teaching (QILT) reports have shown low engagement levels among higher education student cohorts (QILT 2016, 2017). While factors such as online education, lack of attendance, and poor course content design have been suggested as causes, it is still not clear which factors influence student engagement in a higher education setting. It is widely accepted that the selection of appropriate learning resources is an essential phase in the education process; conversely, an incompatible range of course materials can demotivate a student from engaging in the course (Quaye & Harper 2014). In the modern tertiary setting, Information and Communication Technology (ICT) plays an essential role in disseminating information, with a Learning Management System (LMS) as the platform for communicating crucial course-related information. Academics can develop course materials on these LMSs to engage students beyond the classroom, and students need to interact through the same platform to comprehend the transmitted knowledge. Since LMSs are operated on a computer platform, academics and students require strong ICT skills, which are further utilised in the preparation of course materials. The knowledge required depends on the relevance and appropriateness of materials, the way various tasks are prepared, how communication is facilitated, the role and utilisation of discussion forums and other available social media structures, and the way in which assessments are conducted. This cumulatively leads to the development of a Just in Time (JIT) type of knowledge, which can be challenging to measure. The investigation into these major factors forms the basis of this study.
Thus, understanding how various factors influence student engagement through the use of LMS platforms in a tertiary setting is the focus of this study. This study used a hybrid method involving a qualitative component to understand the factors that influence student engagement in an LMS-driven learning setting and a quantitative component to confirm the factors identified through the literature review. The study developed five specific hypotheses for testing, with the following outcomes:
H1: Students are influenced by teaching resources in order to realise engagement in classroom activities - ACCEPTED
H2: Academics influence engagement in classroom activities through their involvement in various teaching and management aspects - REJECTED
H3: An academic’s activities influence the management of teaching activities, resulting in improved engagement by students in the class - ACCEPTED
H4: Learning Management Systems (LMS) are a key part in improving students’ engagement - REJECTED
H5: Management of various study-related activities to reach focus in the study will positively influence students’ engagement - ACCEPTED
The outcomes of the study indicate that students and associated classroom activities, teaching resources, management of teaching, the way LMSs are established, and students’ requirements and needs play a key role in assuring engagement. This study also found that an academic’s activities play a less significant role in fostering engagement, as there appears to be a shift from teaching to teaching management, as evidenced in the qualitative discussion. Further, the participants expected academics to have superior technology communication skills, as these are essential in an LMS-driven setting. Interestingly, the findings of this study aligned with a number of standards set by the Tertiary Education Quality and Standards Agency (TEQSA), the regulatory body that enforces standards in Australian tertiary education.
This correlation was observed despite the fact that the students who participated in this study had limited awareness of the TEQSA standards. The main contribution of this study is in highlighting that academics and other support services in tertiary settings should focus on how the LMS is presented, as participants expressed that clear navigation of the system is essential for engagement. This has profound implications for the way the recruitment of academics is conducted. In terms of practice, TEQSA standards are key to assuring quality in tertiary settings, and this study has provided strong evidence of the need for support systems, the mapping of learning objectives to learning outcomes, the appropriateness of content, the time demands placed on students in managing their study-related activities, and the integration of technology. These are now a standard part of the TEQSA assessment. The study could be further improved in the future by collecting data from various cohorts: for example, full-time vs part-time, domestic vs overseas, and mature-age vs school leavers, to better assess their views on engagement, as these cohorts come with varying needs. These could then be encapsulated in the learning materials and systems development, leading to a better alignment of learning management and engagement to realise better outcomes

    A novel framework for distress detection through an automated speech processing system

    Based on our ongoing work, this work-in-progress project aims to develop an automated system to detect distress in people, enabling early referral for interventions that target anxiety and depression, mitigate suicidal ideation, and improve adherence to treatment. The project will either use existing voice data to assess people on various scales of distress, or collect voice data according to existing standards of distress measurement, in order to develop the basic computing algorithms required to detect various attributes associated with distress in a person’s voice during a telephone call to a helpline. These will then be matched with already available psychological assessment instruments, such as the Distress Thermometer, for these persons. In order to trigger interventions, organizational contexts are essential, as interventions depend on the type of distress. Therefore, the model will be tested in various organizational settings, such as police, emergency, and health services, alongside the distress detection instruments normally used in psychological assessment, for accuracy and validation. The project will culminate in a fully automated integrated system that will save organizations significant resources. The translation of the project will be realized in step-change improvements to quality of life within the gamut of public policy
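As a rough illustration of extracting voice attributes from call audio, the sketch below computes two generic low-level acoustic features (per-frame energy and zero-crossing rate). The project's actual distress markers and algorithms are not specified in the abstract, so these features and the `frame_features` helper are assumptions:

```python
import numpy as np

def frame_features(signal, frame_len=400, hop=200):
    """Compute per-frame energy and zero-crossing rate for a mono waveform.

    These are illustrative low-level acoustic attributes only; they stand in
    for whatever distress-related voice attributes the project detects.
    """
    n_frames = 1 + (len(signal) - frame_len) // hop
    energy, zcr = [], []
    for i in range(n_frames):
        frame = signal[i * hop : i * hop + frame_len]
        energy.append(float(np.mean(frame ** 2)))  # mean squared amplitude
        zcr.append(float(np.mean(np.abs(np.diff(np.sign(frame))) > 0)))
    return np.array(energy), np.array(zcr)

# A 1-second synthetic 8 kHz tone stands in for helpline call audio.
t = np.linspace(0, 1, 8000, endpoint=False)
sig = np.sin(2 * np.pi * 220 * t)
energy, zcr = frame_features(sig)
print(energy.shape, zcr.shape)  # one value per 50 ms frame
```

Feature vectors like these would then be mapped to distress scales and compared against instruments such as the Distress Thermometer for validation.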

    Full-resolution Lung Nodule Segmentation from Chest X-ray Images using Residual Encoder-Decoder Networks

    Lung cancer is the leading cause of cancer death, and early diagnosis is associated with a positive prognosis. Chest X-ray (CXR) provides an inexpensive imaging mode for lung cancer diagnosis. Suspicious nodules are difficult to distinguish from vascular and bone structures using CXR. Computer vision has previously been proposed to assist human radiologists in this task; however, leading studies use down-sampled images and computationally expensive methods with unproven generalization. Instead, this study localizes lung nodules using efficient encoder-decoder neural networks that process full-resolution images, avoiding any signal loss resulting from down-sampling. Encoder-decoder networks are trained and tested using the JSRT lung nodule dataset. The networks are then used to localize lung nodules in an independent external CXR dataset. Sensitivity and false positive rates are measured using an automated framework to eliminate any observer subjectivity. These experiments allow for the determination of the optimal network depth, image resolution, and pre-processing pipeline for generalized lung nodule localization. We find that nodule localization is influenced by subtlety, with more subtle nodules being detected in earlier training epochs. Therefore, we propose a novel self-ensemble model built from three consecutive epochs centered on the validation optimum. This ensemble achieved a sensitivity of 85% in 10-fold internal testing with 8 false positives per image. A sensitivity of 81% is achieved at a false positive rate of 6 following morphological false positive reduction. This result is comparable to that of more computationally complex systems based on linear and spatial filtering, but with a sub-second inference time that is faster than other methods. The proposed algorithm achieved excellent generalization against an external dataset, with a sensitivity of 77% at a false positive rate of 7.6
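The self-ensemble idea can be sketched as follows. Pixel-wise averaging of the three checkpoints' probability maps followed by thresholding is an assumed fusion rule; the abstract states only that three consecutive epochs centered on the validation optimum are ensembled:

```python
import numpy as np

def self_ensemble(prob_maps, threshold=0.5):
    """Fuse full-resolution nodule probability maps from three consecutive
    training epochs by pixel-wise averaging, then threshold to a binary
    localization mask. The averaging-and-threshold rule is an assumption
    made for illustration.
    """
    mean_map = np.mean(np.stack(prob_maps, axis=0), axis=0)
    return mean_map >= threshold

# Three dummy 4x4 "epoch" outputs stand in for the network's predictions.
maps = [np.full((4, 4), p) for p in (0.4, 0.6, 0.8)]
mask = self_ensemble(maps)
print(mask.all())  # mean probability is 0.6 >= 0.5 at every pixel
```

In practice, connected regions in the mask would be post-processed (e.g. the morphological false positive reduction the abstract mentions) before scoring.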

    L-Tetrolet Pattern-Based Sleep Stage Classification Model Using Balanced EEG Datasets

    Background: Sleep stage classification is a crucial process for the diagnosis of sleep or sleep-related diseases. Currently, this process is based on manual electroencephalogram (EEG) analysis, which is resource-intensive and error-prone. Various machine learning models have been recommended to standardize and automate the analysis process to address these problems. Materials and methods: The well-known cyclic alternating pattern (CAP) sleep dataset is used to train and test an L-tetrolet pattern-based sleep stage classification model in this research. From this dataset, three cases are created: Insomnia, Normal, and Fused. For each of these cases, the machine learning model is tasked with identifying six sleep stages. The model is structured in terms of feature generation, feature selection, and classification. Feature generation is established with a new L-tetrolet (Tetris letter) function and multiple pooling decomposition for level creation. We fuse ReliefF and iterative neighborhood component analysis (INCA) feature selection using a threshold value. The hybrid and iterative feature selector is named threshold selection-based ReliefF and INCA (TSRFINCA). The selected features are classified using a cubic support vector machine. Results: The presented L-tetrolet pattern and TSRFINCA-based sleep stage classification model yields 95.43%, 91.05%, and 92.31% accuracies for the Insomnia, Normal, and Fused cases, respectively. Conclusion: The recommended L-tetrolet pattern and TSRFINCA-based model pushes the envelope of current knowledge engineering by accurately classifying sleep stages even in the presence of sleep disorders.
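The threshold-based fusion of the two feature selectors can be sketched as follows. The score vectors are placeholders for real ReliefF/INCA outputs, and the keep-if-either-ranker-exceeds-threshold (union) rule and the `threshold_fuse_select` name are illustrative assumptions:

```python
import numpy as np

def threshold_fuse_select(scores_a, scores_b, threshold):
    """Fuse two feature-ranking score vectors by keeping every feature whose
    score exceeds the threshold under either ranker. Real ReliefF and INCA
    scoring is replaced by precomputed vectors in this sketch.
    """
    keep = (np.asarray(scores_a) > threshold) | (np.asarray(scores_b) > threshold)
    return np.flatnonzero(keep)  # indices of the selected features

relieff_scores = np.array([0.9, 0.1, 0.4, 0.7])  # placeholder ReliefF scores
inca_scores    = np.array([0.2, 0.8, 0.3, 0.6])  # placeholder INCA scores
selected = threshold_fuse_select(relieff_scores, inca_scores, threshold=0.5)
print(selected)  # features 0, 1, and 3 pass the threshold in some ranker
```

The selected feature columns would then be passed to the cubic support vector machine for the final six-class stage decision.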

    Hybrid deep feature generation for appropriate face mask use detection

    Mask usage is one of the most important precautions to limit the spread of COVID-19. Therefore, hygiene rules enforce the correct use of face coverings. Automated mask usage classification might be used to improve compliance monitoring. This study deals with the problem of inappropriate mask use. To address that problem, 2075 face mask usage images were collected. The individual images were labeled as mask, no mask, or improper mask. Based on these labels, the following three cases were created: Case 1: mask versus no mask versus improper mask; Case 2: mask versus no mask + improper mask; and Case 3: mask versus no mask. These data were used to train and test a hybrid deep feature-based masked face classification model. The presented method comprises three primary stages: (i) pre-trained ResNet101 and DenseNet201 were used as feature generators, each extracting 1000 features from an image; (ii) the most discriminative features were selected using an improved ReliefF selector; and (iii) the chosen features were used to train and test a support vector machine classifier. The resulting model attained 95.95%, 97.49%, and 100.0% classification accuracy rates on Case 1, Case 2, and Case 3, respectively. These high accuracy rates indicate that the proposed model is fit for a practical trial to detect appropriate face mask use in real time
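The concatenate-then-classify structure of the three-stage pipeline can be sketched as follows. The deep feature generators are replaced by random stand-in vectors, the labels are synthetic, and the feature-selection stage is omitted, so only the overall shape of the method is shown:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy stand-ins for the two 1000-feature vectors per image; 5 dimensions per
# generator keep the example fast. ResNet101/DenseNet201 are not run here.
n, d = 200, 5
resnet_feats = rng.normal(size=(n, d))
densenet_feats = rng.normal(size=(n, d))
# Synthetic labels from a linear rule, standing in for mask annotations.
labels = (resnet_feats[:, 0] + densenet_feats[:, 0] > 0).astype(int)

# Stage (i): concatenate both generators' features for each image.
hybrid = np.concatenate([resnet_feats, densenet_feats], axis=1)

# Stage (ii), feature selection, is skipped in this sketch.
# Stage (iii): fit and evaluate an SVM classifier on the hybrid features.
clf = SVC(kernel="linear").fit(hybrid[:150], labels[:150])
accuracy = clf.score(hybrid[150:], labels[150:])
```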

    Automated analysis of small intestinal lamina propria to distinguish normal, Celiac Disease, and Non-Celiac Duodenitis biopsy images

    Background and objective: Celiac Disease (CD) is characterized by gluten intolerance in genetically predisposed individuals. High disease prevalence, the absence of a cure, and low diagnosis rates make this disease a public health problem. The diagnosis of CD predominantly relies on recognizing characteristic mucosal alterations of the small intestine, such as villous atrophy, crypt hyperplasia, and intraepithelial lymphocytosis. However, these changes are not entirely specific to CD and overlap with Non-Celiac Duodenitis (NCD) arising from various etiologies. We investigated whether Artificial Intelligence (AI) models could assist in distinguishing among normal, CD, and NCD biopsies based on the characteristics of the small intestinal lamina propria (LP). Methods: Our method was developed using a dataset comprising high-magnification biopsy images of the duodenal LP compartment from patients with different clinical stages of CD, those with NCD, and individuals lacking an intestinal inflammatory disorder (controls). A pre-processing step was used to standardize and enhance the acquired images. Results: For the normal controls versus CD use case, a Support Vector Machine (SVM) achieved an Accuracy (ACC) of 98.53%. For a second use case, we investigated the ability of the classification algorithm to differentiate between normal controls and NCD. In this use case, the SVM algorithm with a linear kernel outperformed all the tested classifiers by achieving 98.55% ACC. Conclusions: To the best of our knowledge, this is the first study that documents automated differentiation between normal, NCD, and CD biopsy images. These findings are a stepping stone toward automated biopsy image analysis that can significantly benefit patients and healthcare providers
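A minimal sketch of one plausible form of the standardize-and-enhance pre-processing step is shown below; the zero-mean/min-max combination and the `standardize_image` name are assumptions, as the abstract does not describe the actual pipeline:

```python
import numpy as np

def standardize_image(img):
    """Zero-mean, unit-variance standardization followed by min-max rescaling
    to [0, 1]: one plausible standardize-and-enhance step, assumed here since
    the abstract does not specify the exact pre-processing pipeline.
    """
    z = (img.astype(float) - img.mean()) / (img.std() + 1e-8)
    return (z - z.min()) / (z.max() - z.min() + 1e-8)

# A toy 2x2 patch stands in for a high-magnification LP biopsy image.
biopsy = np.array([[10, 20], [30, 200]], dtype=np.uint8)
out = standardize_image(biopsy)
print(out.min(), out.max())  # rescaled to span approximately [0, 1]
```

Standardizing intensities this way puts images from different acquisition sessions on a common scale before features are extracted for the SVM.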