
    Empirical comparison of deep learning models for fNIRS pain decoding

    Introduction: Pain assessment is extremely important in patients who are unable to communicate, and it is often done by clinical judgement. However, assessing pain using observable indicators can be challenging for clinicians due to subjective perceptions, individual differences in pain expression, and potential confounding factors. There is therefore a need for an objective pain assessment method that can assist medical practitioners. Functional near-infrared spectroscopy (fNIRS) has shown promising results for assessing neural function in response to nociception and pain. Previous studies have explored the use of machine learning with hand-crafted features in the assessment of pain. Methods: In this study, we aim to expand on previous studies by exploring the use of deep learning models, namely a Convolutional Neural Network (CNN), a Long Short-Term Memory network (LSTM), and a hybrid of the two (CNN-LSTM), to automatically extract features from fNIRS data, and by comparing these with classical machine learning models using hand-crafted features. Results: The deep learning models exhibited favourable results in identifying different types of pain in our experiment using only fNIRS input data. The hybrid CNN-LSTM model exhibited the highest performance (accuracy = 91.2%) in our problem setting. Statistical analysis using one-way ANOVA with Tukey's post-hoc test on the accuracies showed that the deep learning models significantly improved accuracy compared to the baseline models. Discussion: Overall, the deep learning models showed their potential to learn features automatically, without relying on manually extracted features, and the CNN-LSTM model could be used as a possible method for assessing pain in non-verbal patients. Future research is needed to evaluate the generalisation of this method of pain assessment to independent populations and in real-life scenarios.
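
    To make the contrast with the deep models concrete, the following is a minimal sketch of the kind of hand-crafted, per-window features that classical machine learning baselines typically extract from an fNIRS channel; the specific feature names and sampling rate are assumptions for illustration, not the paper's exact feature set.

```python
# Illustrative sketch: simple scalar features from one fNIRS channel window.
# Feature set and sampling rate (fs) are assumed, not taken from the paper.
from statistics import mean, stdev

def window_features(signal, fs=10.0):
    """Summarise one fNIRS channel window into scalar features."""
    n = len(signal)
    t_mean = mean(signal)
    t_std = stdev(signal)
    # Linear trend (slope) via least squares over time in seconds.
    xs = [i / fs for i in range(n)]
    x_mean = mean(xs)
    slope = sum((x - x_mean) * (y - t_mean) for x, y in zip(xs, signal)) / \
            sum((x - x_mean) ** 2 for x in xs)
    return {"mean": t_mean, "std": t_std, "slope": slope,
            "peak_to_peak": max(signal) - min(signal)}

# Toy hemodynamic window: a rising response yields a positive slope.
feats = window_features([0.1, 0.15, 0.2, 0.3, 0.25, 0.4])
```

    A deep model such as the CNN-LSTM instead consumes the raw windowed signal directly and learns its own representation, which is the comparison the study makes.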

    Explainable Depression Detection via Head Motion Patterns

    While depression has been studied via multimodal non-verbal behavioural cues, head motion behaviour has received little attention as a biomarker. This study demonstrates the utility of fundamental head-motion units, termed \emph{kinemes}, for depression detection by adopting two distinct approaches employing distinctive features: (a) discovering kinemes from head motion data corresponding to both depressed patients and healthy controls, and (b) learning kineme patterns only from healthy controls, and computing statistics derived from reconstruction errors for both the patient and control classes. Employing machine learning methods, we evaluate depression classification performance on the \emph{BlackDog} and \emph{AVEC2013} datasets. Our findings indicate that: (1) head motion patterns are effective biomarkers for detecting depressive symptoms, and (2) explanatory kineme patterns consistent with prior findings can be observed for the two classes. Overall, we achieve peak F1 scores of 0.79 and 0.82 on BlackDog and AVEC2013, respectively, for binary classification over episodic \emph{thin-slices}, and a peak F1 of 0.72 over videos for AVEC2013.
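
    Approach (b) above can be sketched as follows: quantise head-motion vectors against kineme centroids learned from healthy controls only, then summarise the reconstruction errors as features. The centroids and motion data here are toy values for illustration, not the paper's learned kinemes.

```python
# Hedged sketch of the reconstruction-error approach. Kineme centroids and
# head-motion vectors are toy 2-D values; real kinemes are learned from
# control-group head-pose time series.
from math import dist
from statistics import mean

control_kinemes = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0)]  # assumed centroids

def reconstruction_errors(motion, kinemes):
    """Distance from each head-motion vector to its nearest kineme."""
    return [min(dist(v, k) for k in kinemes) for v in motion]

def error_stats(motion, kinemes=control_kinemes):
    errs = reconstruction_errors(motion, kinemes)
    return {"mean_err": mean(errs), "max_err": max(errs)}

# Motion close to control kinemes reconstructs with low error...
low = error_stats([(0.9, 0.1), (0.1, 0.9)])
# ...while atypical motion yields higher error, a candidate depression cue.
high = error_stats([(2.0, 2.0), (-2.0, -2.0)])
```

    Statistics such as `mean_err` and `max_err` then serve as classifier features for the patient and control classes.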

    Cortical Network Response to Acupuncture and the Effect of the Hegu Point: An fNIRS Study

    Acupuncture is a practice of treatment based on influencing specific points on the body by inserting needles. According to traditional Chinese medicine, the aim of acupuncture treatment for pain management is to use specific acupoints to relieve excess, activate qi (or vital energy), and improve blood circulation. In this context, the Hegu point is one of the most widely used acupoints for this purpose, and it has been linked to an analgesic effect. However, there is considerable debate as to its scientific validity. In this pilot study, we aim to identify the functional connectivity related to the three main types of acupuncture manipulation and to identify an analgesic effect based on the hemodynamic response as measured by functional near-infrared spectroscopy (fNIRS). The cortical response of eleven healthy subjects was obtained using fNIRS during an acupuncture procedure. A multiscale analysis based on wavelet transform coherence was employed to assess the functional connectivity of corresponding channel pairs within the left and right somatosensory regions. The wavelet analysis focused on the very-low frequency oscillations (VLFO, 0.01–0.08 Hz) and the low frequency oscillations (LFO, 0.08–0.15 Hz). A mixed-model analysis of variance was used to appraise statistical differences in the wavelet domain for the different acupuncture stimuli. The hemodynamic response after the acupuncture manipulations exhibited strong activations and distinctive cortical networks for each stimulus. The statistical analysis showed significant differences (p < 0.05) between the tasks in both frequency bands. These results suggest the existence of distinct stimulus-specific cortical networks in both frequency bands and support an analgesic effect of the Hegu point as measured by fNIRS.
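
    As a small concrete aid, the two bands analysed above can be mapped onto discrete spectral bins. This sketch assumes a window length and sampling rate for illustration; only the band limits (VLFO 0.01–0.08 Hz, LFO 0.08–0.15 Hz) come from the study.

```python
# Map rFFT bin indices to the study's two frequency bands.
# Window length and sampling rate are assumptions, not from the paper.
VLFO = (0.01, 0.08)  # very-low frequency oscillations, Hz
LFO = (0.08, 0.15)   # low frequency oscillations, Hz

def band_bins(n_samples, fs, band):
    """Indices of rFFT bins whose frequency falls inside `band` (Hz)."""
    lo, hi = band
    freqs = [k * fs / n_samples for k in range(n_samples // 2 + 1)]
    return [k for k, f in enumerate(freqs) if lo <= f < hi]

# e.g. a 10-minute window sampled at 2 Hz gives 1200 samples:
vlfo_bins = band_bins(1200, 2.0, VLFO)
lfo_bins = band_bins(1200, 2.0, LFO)
```

    Wavelet transform coherence then evaluates channel-pair coupling as a function of both time and these frequencies, rather than from a single global spectrum.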

    Analysis of Pain Hemodynamic Response Using Near-Infrared Spectroscopy (NIRS)

    Despite recent advances in brain research, understanding the various signals for pain and pain intensities in the brain cortex is still a complex task due to temporal and spatial variations of brain hemodynamics. In this paper, we investigate pain based on cerebral hemodynamics via near-infrared spectroscopy (NIRS). This study presents a pain stimulation experiment that uses three acupuncture manipulation techniques to safely induce pain in healthy subjects. The acupuncture pain response was characterised, and hemodynamic pain signal analysis showed the presence of dominant channels and their relationships with surrounding channels, which can contribute to further pain research.
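
    The dominant-channel idea described above can be sketched with toy data: rank channels by response energy, then correlate the strongest channel with its neighbours. The channel values and the variance-based ranking criterion here are illustrative assumptions, not the paper's exact procedure.

```python
# Toy sketch: find a "dominant" NIRS channel and its relationship to
# surrounding channels. Data and ranking criterion are assumptions.
from statistics import mean, stdev

def pearson(a, b):
    """Pearson correlation between two equal-length signals."""
    ma, mb = mean(a), mean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) *
           sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den

def dominant_channel(channels):
    """Index of the channel with the largest variance (response energy)."""
    return max(range(len(channels)), key=lambda i: stdev(channels[i]))

channels = [
    [0.0, 0.1, 0.0, 0.1],  # weak response
    [0.0, 1.0, 2.0, 1.0],  # strong response (dominant)
    [0.0, 0.5, 1.0, 0.5],  # correlated neighbour
]
dom = dominant_channel(channels)
r = pearson(channels[dom], channels[2])
```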

    Multimodal physiological sensing for the assessment of acute pain

    Pain assessment is a challenging task encountered by clinicians. In clinical settings, patients’ self-report is considered the gold standard in pain assessment. However, patients who are unable to self-report pain are at a higher risk of undiagnosed pain. In the present study, we explore the use of multiple sensing technologies to monitor physiological changes that can serve as a proxy for objective measurement of acute pain. Electrodermal activity (EDA), photoplethysmography (PPG), and respiration (RESP) signals were collected from 22 participants under two pain intensities (low and high) and at two anatomical locations (forearm and hand). Three machine learning models were implemented for the identification of pain: support vector machines (SVM), decision trees (DT), and linear discriminant analysis (LDA). Various pain scenarios were investigated: identification of pain (no pain, pain), multiclass classification (no pain, low pain, high pain), and identification of pain location (forearm, hand). Reference classification results from individual sensors and from all sensors together were obtained. After feature selection, results showed that EDA was the most informative sensor across the three pain conditions: 93.2 ± 8% for identification of pain, 68.9 ± 10% for the multiclass problem, and 56.0 ± 8% for identification of pain location. These results identify EDA as the superior sensor under our experimental conditions. Future work is required to validate the obtained features and improve their feasibility in more realistic scenarios. Finally, this study proposes EDA as a candidate for designing a tool that can assist clinicians in the assessment of acute pain in nonverbal patients.
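
    The study's classifiers are SVM, DT, and LDA; as a dependency-free stand-in, the sketch below uses a nearest-centroid classifier on invented EDA feature vectors to illustrate the multiclass (no pain / low pain / high pain) setup. All feature values and class labels are toy examples, not the study's data.

```python
# Nearest-centroid stand-in for the study's SVM/DT/LDA comparison.
# Toy EDA features: (mean skin conductance, number of SCR peaks) - invented.
from math import dist
from statistics import mean

def fit_centroids(X, y):
    """Per-class mean feature vector."""
    classes = sorted(set(y))
    return {c: tuple(mean(col) for col in
                     zip(*(x for x, label in zip(X, y) if label == c)))
            for c in classes}

def predict(centroids, x):
    """Assign x to the class with the nearest centroid."""
    return min(centroids, key=lambda c: dist(centroids[c], x))

X = [(0.1, 0), (0.2, 1), (0.5, 3), (0.6, 4), (1.0, 7), (1.2, 8)]
y = ["no_pain", "no_pain", "low", "low", "high", "high"]
model = fit_centroids(X, y)
```

    In practice each sensor (EDA, PPG, RESP) contributes its own feature set, and the study compares per-sensor and fused feature vectors under this kind of multiclass evaluation.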