
    Exploiting Deep Semantics and Compositionality of Natural Language for Human-Robot-Interaction

    We develop a natural language interface for human-robot interaction that implements reasoning about deep semantics in natural language. To realize the required deep analysis, we employ methods from cognitive linguistics, namely the modular and compositional framework of Embodied Construction Grammar (ECG) [Feldman, 2009]. Using ECG, robots are able to solve fine-grained reference resolution problems and other issues related to deep semantics and compositionality of natural language. This also includes verbal interaction with humans to clarify commands and queries that are too ambiguous to be executed safely. We implement our NLU framework as a ROS package and present proof-of-concept scenarios with different robots, as well as a survey of the state of the art.

    Short runs of atrial arrhythmia and stroke risk: a European-wide online survey among stroke physicians and cardiologists

    Methods: An online survey of cardiologists and stroke physicians was carried out to assess the current management of patients with short runs of atrial arrhythmia within Europe. Results: Respondents included 311 clinicians from 32 countries. To diagnose atrial fibrillation, 80% accepted a single 12-lead ECG and 36% accepted a single run of < 30 seconds on ambulatory monitoring. Stroke physicians were twice as likely to accept < 30 seconds of arrhythmia as being diagnostic of atrial fibrillation (OR 2.43, 95% CI 1.19–4.98). They were also more likely to advocate anticoagulation for hypothetical patients with lower risk: OR 1.9 (95% CI 1.0–3.5) for a patient with CHA2DS2-VASc = 2. Conclusion: Short runs of atrial fibrillation create a dilemma for physicians across Europe. Stroke physicians and cardiologists differ in their diagnosis and management of these patients.
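
The odds ratios and confidence intervals reported here follow from a standard 2x2 table calculation. A minimal sketch, using hypothetical cell counts (the survey's raw counts are not given in the abstract):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI from a 2x2 table:
    a = exposed cases,   b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases.
    CI is computed on the log scale with the usual SE formula."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for illustration only (not the survey's data)
or_, lo, hi = odds_ratio_ci(40, 60, 30, 110)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```

The asymmetry of the reported interval (1.19–4.98 around 2.43) is characteristic of this log-scale construction.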

    Transparent authentication: Utilising heart rate for user authentication

    There has been exponential growth in the use of wearable technologies in the last decade, with smart watches having a large share of the market. Smart watches were primarily used for health and fitness purposes, but recent years have seen a rise in their deployment in other areas. Recent smart watches are fitted with sensors with enhanced functionality and capabilities. For example, some function as standalone devices with the ability to create activity logs and transmit data to a secondary device. This capability has contributed to their increased usage in recent years, with researchers focusing on their potential. This paper explores the ability to extract physiological data from smart watch technology to achieve user authentication. The approach is suitable not only because of the capacity for data capture but also because of easy connectivity with other devices, principally the smartphone. For the purpose of this study, heart rate data were captured from 30 subjects continually over an hour. While security is the ultimate goal, usability should also be a key consideration. Most bioelectrical signals, like heart rate, are non-stationary time-dependent signals; therefore, the Discrete Wavelet Transform (DWT) is employed. DWT decomposes the bioelectrical signal into n levels of detail-coefficient and approximation-coefficient sub-bands. A biorthogonal wavelet (bior4.4) is applied to extract features from the four levels of detail coefficients. Ten statistical features are extracted from each level's coefficient sub-band. Classification of each sub-band level is performed using a Feedforward Neural Network (FF-NN). The 1st, 2nd, 3rd, and 4th levels had an Equal Error Rate (EER) of 17.20%, 18.17%, 20.93%, and 21.83%, respectively. To improve the EER, fusion of the four sub-band levels is applied at the feature level. The proposed fusion showed an improved result over the initial results, with an EER of 11.25%. While an 11% EER is not ideal for a one-off authentication decision, its use on a continuous basis makes this more than feasible in practice.
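
The decompose-then-featurize step described above can be sketched as follows. This is a simplified stand-in, not the paper's implementation: it uses a Haar wavelet instead of bior4.4 (which requires a wavelet library such as PyWavelets) and five statistics per level instead of the ten used in the study.

```python
import numpy as np

def haar_dwt_level(x):
    """One level of a Haar DWT: returns (approximation, detail)
    coefficients. A stand-in for the paper's bior4.4 wavelet."""
    x = x[: len(x) // 2 * 2]                  # trim to even length
    pairs = x.reshape(-1, 2)
    approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)
    detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)
    return approx, detail

def dwt_features(signal, levels=4):
    """Decompose the signal over `levels` levels and extract simple
    statistical features from each level's detail coefficients."""
    feats = []
    approx = np.asarray(signal, dtype=float)
    for _ in range(levels):
        approx, detail = haar_dwt_level(approx)
        feats.append([detail.mean(), detail.std(), detail.min(),
                      detail.max(), np.median(detail)])
    return np.array(feats)                    # shape: (levels, n_stats)

# Toy "heart rate" trace: 70 bpm baseline with slow drift and noise
rng = np.random.default_rng(0)
hr = 70 + np.sin(np.linspace(0, 6 * np.pi, 256)) + rng.normal(0, 0.5, 256)
f = dwt_features(hr, levels=4)
print(f.shape)  # (4, 5)
```

Feature-level fusion, as in the paper, would concatenate the per-level feature vectors into one before classification.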

    Biometrics for Emotion Detection (BED): Exploring the combination of Speech and ECG

    The paradigm Biometrics for Emotion Detection (BED) is introduced, which enables unobtrusive emotion recognition that takes varying environments into account. It uses the electrocardiogram (ECG) and speech, a powerful but rarely used combination, to unravel people’s emotions. BED was applied in two environments (i.e., office and home-like) in which 40 people watched 6 film scenes. It is shown that both heart rate variability (derived from the ECG) and, when people’s gender is taken into account, the standard deviation of the fundamental frequency of speech indicate people’s experienced emotions. As such, these measures validate each other. Moreover, it is found that people’s environment can indeed influence experienced emotions. These results indicate that BED might become an important paradigm for unobtrusive emotion detection.
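
The two measures the study relies on are both simple standard deviations. A minimal sketch with hypothetical data (the abstract does not specify which HRV statistic was used; SDNN is shown here as a common time-domain choice):

```python
import statistics

def sdnn(rr_ms):
    """SDNN: standard deviation of successive RR (beat-to-beat)
    intervals in ms, a standard time-domain HRV measure from the ECG."""
    return statistics.stdev(rr_ms)

def f0_std(f0_hz):
    """Standard deviation of the speech fundamental frequency (F0),
    the speech measure the study pairs with HRV."""
    return statistics.stdev(f0_hz)

rr = [800, 810, 790, 805, 795]   # hypothetical RR intervals (ms)
f0 = [118, 124, 131, 120, 127]   # hypothetical F0 track (Hz)
print(sdnn(rr), f0_std(f0))
```

In the study, the F0 measure was informative only once gender was accounted for, since male and female F0 ranges differ substantially.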

    Ubiquitous emotion-aware computing

    Emotions are a crucial element for personal and ubiquitous computing. What to sense and how to sense it, however, remain a challenge. This study explores the rare combination of speech, electrocardiogram, and a revised Self-Assessment Mannequin to assess people’s emotions. 40 people watched 30 International Affective Picture System pictures in either an office or a living-room environment. Additionally, their personality traits neuroticism and extroversion and demographic information (i.e., gender, nationality, and level of education) were recorded. The resulting data were analyzed using both basic emotion categories and the valence–arousal model, which enabled a comparison between both representations. The combination of heart rate variability and three speech measures (i.e., variability of the fundamental frequency of pitch (F0), intensity, and energy) explained 90% (p < .001) of the participants’ experienced valence–arousal, with 88% for valence and 99% for arousal (ps < .001). The six basic emotions could also be discriminated (p < .001), although the explained variance was much lower: 18–20%. Environment (or context), the personality trait neuroticism, and gender proved to be useful when a nuanced assessment of people’s emotions was needed. Taken together, this study provides a significant leap toward robust, generic, and ubiquitous emotion-aware computing.
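
"Explained variance" from a set of physiological predictors, as reported above, comes from a regression of the ratings on the measures. A minimal sketch on synthetic data (the study's actual model, coefficients, and data are not reproduced here):

```python
import numpy as np

# Regress an arousal-like rating on four predictors: HRV plus three
# speech measures (F0 variability, intensity, energy). Data are synthetic.
rng = np.random.default_rng(1)
n = 40                                     # matches the 40 participants
X = rng.normal(size=(n, 4))                # HRV, F0 var., intensity, energy
X = np.hstack([np.ones((n, 1)), X])        # intercept column
true_w = np.array([0.2, 0.8, -0.5, 0.3, 0.1])
y = X @ true_w + rng.normal(0, 0.1, n)     # simulated arousal ratings

w, *_ = np.linalg.lstsq(X, y, rcond=None)  # ordinary least squares
resid = y - X @ w
r2 = 1 - resid.var() / y.var()             # proportion of variance explained
print(f"R^2 = {r2:.2f}")
```

The R² here is high by construction (low simulated noise); the study's 90% figure is an empirical result, not something this toy model demonstrates.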

    Multi-modal Approach for Affective Computing

    Throughout the past decade, many studies have classified human emotions using only a single sensing modality such as face video, electroencephalogram (EEG), electrocardiogram (ECG), galvanic skin response (GSR), etc. The results of these studies are constrained by the limitations of these modalities, such as the absence of physiological biomarkers in face-video analysis, the poor spatial resolution of EEG, the poor temporal resolution of GSR, etc. Scant research has been conducted to compare the merits of these modalities and understand how best to use them individually and jointly. Using the multi-modal AMIGOS dataset, this study compares the performance of human emotion classification using multiple computational approaches applied to face videos and various bio-sensing modalities. Using a novel method for compensating for the physiological baseline, we show an increase in the classification accuracy of the various approaches we use. Finally, we present a multi-modal emotion-classification approach in the domain of affective computing research. Comment: Published in IEEE 40th International Engineering in Medicine and Biology Conference (EMBC) 201
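
The idea of baseline compensation can be illustrated as follows. This is one simple interpretation, subtracting each subject's resting-state mean from trial features; the paper's exact method is not described in the abstract and may differ.

```python
import numpy as np

def compensate_baseline(trial, baseline):
    """Per-subject physiological baseline compensation (illustrative):
    subtract the feature-wise mean of a resting-state recording from
    each trial sample, so features express change from rest."""
    return trial - baseline.mean(axis=0)

# Columns: heart rate (bpm), GSR (microsiemens); values are hypothetical
baseline = np.array([[72.0, 0.31], [74.0, 0.29], [73.0, 0.30]])  # at rest
trial    = np.array([[88.0, 0.52], [91.0, 0.49]])                # stimulus
print(compensate_baseline(trial, baseline))
```

Normalizing out resting levels in this way removes stable between-subject differences, which is one reason such compensation can raise classification accuracy.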

    The second Euro Heart Survey on acute coronary syndromes: characteristics, treatment, and outcome of patients with ACS in Europe and the Mediterranean Basin in 2004

    Aims: Our study aimed to examine the management of acute coronary syndromes (ACS) in Europe and the Mediterranean basin, and to compare adherence to guidelines with that reported in the first Euro Heart Survey on ACS (EHS-ACS-I), 4 years earlier. Methods and results: In a prospective survey conducted in 2004 (EHS-ACS-II), data describing the characteristics, treatment, and outcome of 6385 patients diagnosed with ACS in 190 medical centres in 32 countries were collected. ACS with ST-elevation was the initial diagnosis in 47% of patients, no ST-elevation in 48%, and an undetermined electrocardiographic pattern in 5% of patients. Comparison of data collected in 2000 and 2004 showed similar baseline characteristics, but greater use of recommended medications and coronary interventions in EHS-ACS-II. Among patients with ST-elevation, the use of primary reperfusion increased slightly (from 56 to 64%), with a significant shift from fibrinolytic therapy to primary percutaneous coronary intervention (PPCI). The use of PPCI rose from 37 to 59% among those undergoing primary reperfusion therapy. Analysis of data in 34 centres that participated in both surveys showed even greater improvement with respect to the use of recommended medical therapy, interventions, and outcome. Conclusion: Data from EHS-ACS-II suggest an increase in adherence to guidelines for treatment of ACS in comparison with EHS-ACS-I.