    Dataset supporting the paper: High trait anxiety enhances optimal integration of auditory and visual threat cues

    This dataset includes data on behavioural outcomes for the audiovisual emotion recognition tasks used in the publication, "High Trait Anxiety Enhances Optimal Integration of Auditory and Visual Threat Cues". In this study the authors investigated perception of happy, sad and angry emotions within unimodal (audio- and visual-only) and audiovisual displays in adults with low vs. high levels of trait anxiety. The data is organised to facilitate replication of the analyses carried out in the aforementioned study, which include two model-based analyses to elucidate how multisensory integration of emotional information operates in high trait anxiety. This was done by comparing performance in the audiovisual condition for both high and low trait anxiety groups to performance predicted by the Maximum Likelihood Estimation (MLE) model (Ernst & Banks, 2002; Rohde et al., 2016) and Miller's Race Model (Miller, 1982; Ulrich et al., 2007). Data included in this dataset has already been pre-processed (i.e., univariate outliers have already been identified and dealt with).
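
    Both benchmarks cited above follow published formulas and can be sketched numerically. The snippet below is a minimal illustration (not code from the dataset), assuming only the MLE-optimal combined variance (Ernst & Banks, 2002) and Miller's (1982) race-model bound on cumulative response-time distributions:

```python
import numpy as np

def mle_av_sigma(sigma_a, sigma_v):
    # Optimal (MLE) audiovisual noise: the combined variance is the
    # product of the unimodal variances over their sum, so it can
    # never exceed the better single cue (Ernst & Banks, 2002).
    return np.sqrt((sigma_a**2 * sigma_v**2) / (sigma_a**2 + sigma_v**2))

def mle_weights(sigma_a, sigma_v):
    # Reliability-based cue weights (inverse-variance weighting; sum to 1).
    w_a = sigma_v**2 / (sigma_a**2 + sigma_v**2)
    return w_a, 1.0 - w_a

def race_model_bound(p_a, p_v):
    # Miller's race-model inequality: without integration,
    # P(RT_AV <= t) <= P(RT_A <= t) + P(RT_V <= t) at every t, capped at 1.
    return np.minimum(np.asarray(p_a) + np.asarray(p_v), 1.0)

# Example: a noisier auditory cue (sigma = 2.0) and a visual cue (sigma = 1.0)
print(mle_av_sigma(2.0, 1.0))  # combined sigma is below either unimodal sigma
print(mle_weights(2.0, 1.0))   # the more reliable visual cue gets more weight
```

    Observed audiovisual thresholds at the MLE prediction, or response-time CDFs that exceed the race-model bound, are the signatures of integration that such analyses test for.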

    Dataset supporting the paper: Anxiety Biases Audiovisual Processing of Social Signals

    This dataset includes data on behavioural outcomes for the audiovisual emotion recognition tasks used in the publication, "Anxiety Biases Audiovisual Processing of Social Signals". In this study the authors investigated perception of happy and angry emotions within unimodal (audio- and visual-only), congruent and incongruent audiovisual displays in healthy adults with higher and lower levels of trait anxiety. The data is organised to facilitate replication of the ANCOVA analyses carried out in the aforementioned study. Data included in this dataset has already been pre-processed (i.e., univariate outliers have already been identified and dealt with).
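
    For readers without JASP or SPSS at hand, an ANCOVA of this general shape (a group effect on recognition accuracy, controlling for a continuous covariate) can be expressed as a nested-regression F-test. This is a generic sketch: the variable names and coding are illustrative, not the dataset's actual columns.

```python
import numpy as np

def ancova_f(y, group, covariate):
    # One-way ANCOVA as nested regressions: test the group term by
    # comparing the full model (intercept + group + covariate) against
    # the reduced model (intercept + covariate). `group` must be coded
    # numerically (e.g. 0 = lower anxiety, 1 = higher anxiety).
    y = np.asarray(y, dtype=float)
    n = len(y)
    X_full = np.column_stack([np.ones(n), group, covariate])
    X_red = np.column_stack([np.ones(n), covariate])

    def rss(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return resid @ resid

    rss_full, rss_red = rss(X_full), rss(X_red)
    df1 = X_full.shape[1] - X_red.shape[1]  # extra parameters in full model
    df2 = n - X_full.shape[1]               # residual degrees of freedom
    F = ((rss_red - rss_full) / df1) / (rss_full / df2)
    return F, df1, df2
```

    The resulting F(df1, df2) statistic matches what a standard ANCOVA routine reports for the group factor when the covariate slope is assumed equal across groups.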

    Dataset for "Touché: Data-Driven Interactive Sword Fighting in Virtual Reality"

    This is the data repository for the paper "Touché: Data-Driven Interactive Sword Fighting in Virtual Reality" by Javier Dehesa, Andrew Vidler, Christof Lutteroth and Julian Padget, presented at the CHI 2020 conference in Honolulu, HI, USA. See the publication for details. The archives gesture_recognition_data.zip and gesture_recognition_code.zip contain, respectively, the data and code for the gesture recognition component. Similarly, the archives animation_data.zip and animation_code.zip contain, respectively, the data and code for the animation component. Instructions on how to use these are provided within them. The archive user_studies.zip contains information about our user studies. The files questionnaire_study.jasp and interactive_study.jasp contain the data and analysis of the questionnaire and interactive studies respectively. They can be consulted with the open-source tool JASP (https://jasp-stats.org/). The video questionnaire_conditions.mp4 shows the full videos used as the three conditions for the questionnaire study.

    Dataset for "A Novel Neural Network Architecture with Applications to 3D Animation and Interaction in Virtual Reality"

    This is the dataset for the doctoral thesis "A Novel Neural Network Architecture with Applications to 3D Animation and Interaction in Virtual Reality" by Javier de la Dehesa Cueto-Felgueroso. See the original document for details. The dataset is structured in three parts. The files `gfnn_code.zip` and `gfnn_data.zip` contain the code and data for the experiments with grid-functioned neural networks discussed in chapter 3 of the thesis. The files `quadruped_code.zip` and `quadruped_data.zip` contain the code and data for the quadruped locomotion experiments and user study discussed in chapter 4. The files `framework_code.zip` and `framework_data.zip` contain the code and data for the human-character interaction framework experiments and user studies discussed in chapter 5. Each pair of files should be decompressed into the same directory, but separate from the other parts. Further details and instructions for each part can be found within the corresponding compressed files.

    Supplement for "Me vs. Super(wo)man: Effects of Customization and Identification in a VR Exergame"

    This supplement describes an approach that can be used to create an “enhanced” avatar based on a) a realistic, current avatar (R) and b) an idealised, desired future avatar (I) of a user. The aim of the approach is to create avatars that reflect “enhancements” of the realistic avatar along a realistic trajectory. The realistic avatar is used as a starting point, and the idealised avatar as a “goal”.
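
    As a rough sketch of the idea (not the supplement's actual procedure), the trajectory from R to I can be parameterised as a blend over whatever numeric shape parameters describe the two avatars. The linear form and the parameter vectors here are assumptions for illustration:

```python
import numpy as np

def enhanced_avatar(realistic, idealised, alpha):
    # Blend shape parameters along the realistic -> idealised trajectory.
    # alpha = 0 reproduces the realistic avatar (R), alpha = 1 the
    # idealised one (I); intermediate alphas give graded "enhancements".
    realistic = np.asarray(realistic, dtype=float)
    idealised = np.asarray(idealised, dtype=float)
    alpha = float(np.clip(alpha, 0.0, 1.0))  # keep on the R..I segment
    return (1.0 - alpha) * realistic + alpha * idealised
```

    Clamping alpha to [0, 1] keeps every generated avatar on the segment between the user's current and desired bodies, which is one way to read "enhancement along a realistic trajectory".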

    Datasets and Analyses for "Affect Recognition using Psychophysiological Correlates in High Intensity VR Exergaming"

    Datasets and analyses for the paper "Affect Recognition using Psychophysiological Correlates in High Intensity VR Exergaming" published at CHI 2020. We present the datasets of two experiments that investigate the use of different sensors for affect recognition in a VR exergame. The first experiment compares the impact of physical exertion and gamification on psychophysiological measurements during rest, conventional exercise, VR exergaming, and sedentary VR gaming. The second experiment compares underwhelming, overwhelming and optimal VR exergaming scenarios. We identify gaze fixations, eye blinks, pupil diameter and skin conductivity as psychophysiological measures suitable for affect recognition in VR exergaming and analyse their utility in determining affective valence and arousal. Our findings provide guidelines for researchers of affective VR exergames. The datasets and analyses consist of the following: 1. two CSV sheets containing the quantitative and qualitative data of Experiments I and II; 2. two JASP files with ANOVAs and t-tests for Experiments I and II; 3. two R scripts with correlation and regression analyses for Experiments I and II.
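
    The correlation analyses in the R scripts can be reproduced in any environment. As a minimal, generic illustration (the measure and rating names are hypothetical, not the dataset's column names), a Pearson correlation between a psychophysiological measure and an affect rating:

```python
import numpy as np

def pearson_r(x, y):
    # Pearson correlation coefficient between a psychophysiological
    # measure (e.g. pupil diameter) and a self-reported affect rating.
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))
```

    A value near +1 or -1 indicates a strong linear relation between the measure and the rating; values near 0 suggest the measure carries little linear information about that affect dimension.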

    Final Amended Report on the Safety Assessment of Methylparaben, Ethylparaben, Propylparaben, Isopropylparaben, Butylparaben, Isobutylparaben, and Benzylparaben as used in Cosmetic Products

    SEER: A Delphic approach applied to information processing
