The influence of neuroticism and psychological symptoms on the assessment of images in three-dimensional emotion space
Objective: The present study investigated the influence of neuroticism (NEO Five-Factor Inventory (NEO-FFI)) and psychological symptoms (Brief Symptom Inventory (BSI)) on pleasure, arousal, and dominance (PAD) ratings of the International Affective Picture System (IAPS)
Pain Intensity Recognition Rates via Biopotential Feature Patterns with Support Vector Machines
Background: The clinically used methods of pain diagnosis do not allow for objective and robust measurement, and physicians must rely on the patient's report of the pain sensation. Verbal scales, visual analog scales (VAS), and numeric rating scales (NRS) count among the most common tools, but they are restricted to patients with normal mental abilities. Instruments also exist for pain assessment in people with verbal and/or cognitive impairments and in people who are sedated and automatically ventilated. However, all these diagnostic methods either have limited reliability and validity or are very time-consuming. In contrast, biopotentials can be automatically analyzed with machine learning algorithms to provide a surrogate measure of pain intensity.
Methods: In this context, we created a database of biopotentials to advance an automated pain recognition system, determine its theoretical testing quality, and optimize its performance. Eighty-five participants were subjected to painful heat stimuli (baseline, pain threshold, two intermediate thresholds, and pain tolerance threshold) under controlled conditions, and the signals of electromyography, skin conductance level, and electrocardiography were collected. A total of 159 features were extracted from the mathematical groupings of amplitude, frequency, stationarity, entropy, linearity, variability, and similarity.
Results: We achieved classification rates of 90.94% for baseline vs. pain tolerance threshold and 79.29% for baseline vs. pain threshold. The most selected pain features stemmed from the amplitude and similarity groups and were derived from facial electromyography.
Conclusion: The machine learning measurement of pain in patients could provide valuable information for a clinical team and thus support the treatment assessment
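For a concrete picture of the classification setup described in this abstract, the sketch below evaluates a support vector machine with leave-one-subject-out cross-validation on a feature table of the same shape (159 features per 5.5 s window, two classes). It uses scikit-learn and synthetic placeholder data; the feature values, the reduced subject count, and the RBF kernel settings are illustrative assumptions, not the study's actual pipeline.

```python
# Hedged sketch of the SVM setup described above: baseline vs. pain tolerance
# threshold with leave-one-subject-out evaluation. All feature values are
# synthetic placeholders; the real study extracted 159 features from EMG, SCL
# and ECG recordings of 85 participants.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_subjects = 10          # kept small for the demo; the study recorded 85 participants
windows_per_class = 20   # 20 windows per class per subject, as in the dataset
n_features = 159

# One row per 5.5 s window; label 0 = baseline, 1 = pain tolerance threshold.
X = rng.normal(size=(n_subjects * windows_per_class * 2, n_features))
y = np.tile(np.repeat([0, 1], windows_per_class), n_subjects)
groups = np.repeat(np.arange(n_subjects), windows_per_class * 2)

# Leave-one-subject-out keeps each participant's windows together, so the score
# reflects generalisation to unseen subjects rather than to unseen windows.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print(f"mean accuracy over held-out subjects: {scores.mean():.3f}")
```

On random placeholder data the accuracy hovers around chance; the reported 90.94% and 79.29% come from the real biopotential features.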
Schmerzerkennung anhand psychophysiologischer Signale mithilfe maschineller Lerner [Pain recognition from psychophysiological signals using machine learners]
An objective pain diagnosis should be able to detect pain validly, independently of both the observer and the suffering individual. Accordingly, representative features must be found that make it possible to distinguish pain from non-pain. Especially with the use of so-called machine learning methods, recognizing pain via classification should be possible with a certain probability, provided that pain-describing parameters are known. Psychophysiological parameters such as heart rate, skin conductance, or muscle activity are a promising means of capturing pain. In the present work, a feature set derived from biopsychological signals was created with the help of machine learners; it can objectively measure pain versus non-pain, as well as pain intensities, and distinguish them with satisfactory quality. The recognition accuracies lie between 73% and 90% and indicate great potential for many fields of application. In particular, patient groups who cannot communicate their pain outwardly would benefit from such automatic pain recognition in clinical environments
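As a small illustration of the kind of pain-describing parameters this abstract refers to, the snippet below computes a handful of simple per-window descriptors (amplitude, variability, entropy) for one biosignal window. It is a minimal sketch under assumed settings (5.5 s windows, 512 Hz sampling, synthetic data), not the thesis' actual feature code.

```python
# Illustrative per-window descriptors for a psychophysiological signal window
# (amplitude, variability and entropy style measures); not the thesis' code.
import numpy as np
from scipy.stats import entropy

def window_features(x: np.ndarray) -> dict:
    """Toy amplitude/variability/entropy features for one biosignal window."""
    hist, _ = np.histogram(x, bins=32, density=True)
    hist = hist[hist > 0]                         # drop empty bins before the entropy
    return {
        "peak_to_peak": float(np.ptp(x)),         # amplitude group
        "rms": float(np.sqrt(np.mean(x**2))),     # amplitude group
        "std": float(np.std(x)),                  # variability group
        "shannon_entropy": float(entropy(hist)),  # entropy group (scipy normalises the bins)
        "zero_crossings": int(np.sum(np.diff(np.sign(x)) != 0)),
    }

# Example: a synthetic EMG-like window, 5.5 s at an assumed 512 Hz sampling rate.
rng = np.random.default_rng(1)
demo_window = rng.normal(scale=0.1, size=int(5.5 * 512))
print(window_features(demo_window))
```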
Data from: Pain intensity recognition rates via biopotential feature patterns with support vector machines
List of extracted biopotential pain features
Reference: Steffen Walter, Philipp Werner, Sascha Gruss, Harald C. Traue, Ayoub Al-Hamadi, et al.: The BioVid Heat Pain Database: Data for the Advancement and Systematic Validation of an Automated Pain Recognition System. In Proceedings of IEEE International Conference on Cybernetics, 2013. Pain stimulation data:
Extracted features of biomedical signals (SCL, ECG, EMG at the trapezius, corrugator, and zygomaticus muscles); 85 subjects; features extracted from time windows of 5.5 seconds; used to classify pain intensities; 5 classes (pain intensity 0 to pain intensity 4), 20 samples per class per subject
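The sketch below only mirrors the stated dimensions of this feature set (85 subjects × 5 pain levels × 20 windows × 159 features) with placeholder values in an assumed pandas MultiIndex layout; it is not the distributed file format.

```python
# Placeholder layout mirroring the stated dimensions of the feature set:
# 85 subjects x 5 pain levels x 20 windows, each described by 159 features.
import numpy as np
import pandas as pd

n_subjects, n_levels, n_windows, n_features = 85, 5, 20, 159
index = pd.MultiIndex.from_product(
    [range(n_subjects), range(n_levels), range(n_windows)],
    names=["subject", "pain_level", "window"],
)
features = pd.DataFrame(
    np.zeros((len(index), n_features)),            # placeholder values
    index=index,
    columns=[f"f{i}" for i in range(n_features)],  # assumed feature names
)

# e.g. all 20 windows of pain intensity 4 for subject 0:
print(features.loc[(0, 4)].shape)  # -> (20, 159)
```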
Automatic vs. Human Recognition of Pain Intensity from Facial Expression on the X-ITE Pain Database
Prior work on automated methods demonstrated that it is possible to recognize pain intensity from frontal faces in videos, while there is an assumption that humans are very adept at this task compared to machines. In this paper, we investigate whether this assumption is correct by comparing the results achieved by two human observers with the results achieved by a Random Forest classifier (RFc) baseline model (called RFc-BL) and by three proposed automated models. The first proposed model is a Random Forest classifying descriptors of Action Unit (AU) time series; the second is a modified MobileNetV2 CNN classifying face images that combine three points in time; and the third is a custom deep network combining two CNN branches using the same input as for MobileNetV2 plus knowledge of the RFc. We conduct experiments with the X-ITE phasic pain database, which comprises videotaped responses to heat and electrical pain stimuli, each of three intensities. Distinguishing these six stimulation types plus no stimulation was the main 7-class classification task for the human observers and automated approaches. Further, we conducted reduced 5-class and 3-class classification experiments, applied multi-task learning, and tested a newly suggested sample weighting method. Experimental results show that the pain assessments of the human observers are significantly better than guessing and outperform the automatic baseline approach (RFc-BL) by about 1%; however, human performance is quite poor because the pain that can ethically be induced in experimental studies often does not show up in facial reactions. We discovered that downweighting those samples during training improves the performance for all samples. The proposed RFc and two-CNN models (using the proposed sample weighting) significantly outperformed the human observers by about 6% and 7%, respectively
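To make the sample-weighting idea concrete, the sketch below trains a Random Forest on placeholder Action-Unit descriptors while downweighting samples with weak facial reactions. The reaction scores, the weighting rule (0.2 + 0.8 × reaction), and the feature dimensions are illustrative assumptions; the paper's exact weighting scheme is not reproduced here.

```python
# Hedged sketch of downweighting low-reaction samples during Random Forest
# training; the data, reaction scores and weighting rule are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n_samples, n_features = 1000, 64
X = rng.normal(size=(n_samples, n_features))  # stand-ins for AU time-series descriptors
y = rng.integers(0, 7, size=n_samples)        # 7 classes: no stimulation + 6 stimulus types
reaction = rng.uniform(size=n_samples)        # assumed facial-reaction strength in [0, 1]

# Samples whose stimulus produced little visible facial reaction contribute less.
sample_weight = 0.2 + 0.8 * reaction

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X, y, sample_weight=sample_weight)
print("training accuracy on placeholder data:", round(clf.score(X, y), 3))
```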
Autonomous nervous response during sedation in colonoscopy and the relationship with clinician satisfaction
Background: Nurse assisted propofol sedation (NAPS) is a common method used for colonoscopies. It is safe and widely accepted by patients. Little is known, however, about the satisfaction of clinicians performing colonoscopies with NAPS and the factors that negatively influence this perception such as observer-reported pain events. In this study, we aimed to correlate observer-reported pain events with the clinicians' satisfaction with the procedure. Additionally, we aimed to identify patient biosignals from the autonomic nervous system (B-ANS) during an endoscopy that correlate with those pain events.
Methods: Consecutive patients scheduled for a colonoscopy with NAPS were prospectively recruited. During the procedure, observer-reported pain events, which included movements and paralinguistic sounds, were recorded simultaneously with different B-ANS (facial electromyogram (EMG), skin conductance level, body temperature, and electrocardiogram). After the procedure, the examiners filled out the Clinician Satisfaction with Sedation Instrument (CSSI). The primary endpoint was the correlation between CSSI and observer-reported pain events. The second primary endpoint was the identification of B-ANS that make it possible to predict those events. Secondary endpoints included the correlation between CSSI and sedation depth, the frequency and dose of sedative use, polyps resected, resection time, the duration of the procedure, the time it took to reach the cecum, and the experience of the nurse performing the NAPS. ClinicalTrials.gov: NCT03860779.
Results: 112 patients with 98 (88.5%) available B-ANS recordings were prospectively recruited. There was a significant correlation between an increased number of observer-reported pain events during an endoscopy with NAPS and a lower CSSI (r = −0.318, p = 0.001). Additionally, the EMG signal from facial muscles correlated best with the event time points, and the signal significantly exceeded the baseline 30 s prior to the occurrence of paralinguistic sounds. The secondary endpoints showed that the propofol dose relative to the procedure time, the cecal intubation time, the time spent on polyp removal and the individual nurse performing the NAPS significantly correlated with CSSI.
Conclusion: This study shows that movements and paralinguistic sounds during an endoscopy negatively correlate with the satisfaction of the examiner measured with the CSSI. Additionally, an EMG of the facial muscles makes it possible to identify such events and potentially predict their occurrence
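For illustration, the primary-endpoint statistic quoted above (r = −0.318, p = 0.001) is a Pearson correlation between per-procedure pain-event counts and CSSI scores; the toy numbers below exist only to show the computation and are not the study's data.

```python
# Toy Pearson correlation between observer-reported pain events per procedure
# and the examiner's CSSI score; the values are made up for illustration only.
from scipy.stats import pearsonr

pain_events = [0, 2, 5, 1, 7, 3, 0, 4]          # events per colonoscopy (illustrative)
cssi_scores = [95, 88, 70, 90, 62, 80, 97, 75]  # clinician satisfaction (illustrative)

r, p = pearsonr(pain_events, cssi_scores)
print(f"r = {r:.3f}, p = {p:.3f}")
```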
Automatic Recognition Methods Supporting Pain Assessment: A Survey
Automated tools for pain assessment have great promise but have not yet become widely used in clinical practice. In this survey paper, we review the literature that proposes and evaluates automatic pain recognition approaches, and discuss challenges and promising directions for advancing this field. Prior to that, we give an overview on pain mechanisms and responses, discuss common clinically used pain assessment tools, and address shared datasets and the challenge of validation in the context of pain recognition