
    Affect Recognition in Hand-Object Interaction Using Object-Sensed Tactile and Kinematic Data

    We investigate the recognition of the affective state of a person performing an action with an object, by processing data sensed by the object itself. We focus on sequences of basic actions, such as grasping and rotating, which are constituents of daily-life interactions. iCube, a 5 cm cube, was used to collect tactile and kinematic data consisting of tactile maps (without information on the pressure applied to the surface) and rotations. We conduct two studies: classification of i) emotions and ii) vitality forms. In both, participants perform a semi-structured task composed of basic actions. For emotion recognition, 237 trials by 11 participants associated with anger, sadness, excitement, and gratitude were used to train models on 10 hand-crafted features. Classifier accuracy reaches up to 82.7%. Interestingly, the same classifier trained exclusively on the tactile data performs on par with its counterpart trained on all 10 features. For the second study, 1135 trials by 10 participants were used to classify two vitality forms. The best-performing model differentiated gentle actions from rude ones with an accuracy of 84.85%. The results also confirm that people touch objects differently when performing these basic actions with different affective states and attitudes.
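    The abstract does not name the classifier or list the 10 hand-crafted features, so the sketch below only illustrates the general recipe: compute a few simple tactile and rotation features per trial and train a standard classifier on them. The feature choices, the SVM, and the synthetic data are assumptions, not the paper's pipeline.

```python
# Minimal sketch (not the authors' pipeline): classifying affective states from
# hand-crafted tactile/kinematic features with a standard classifier.
# Feature choices, the SVM, and the synthetic data are assumptions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def extract_features(tactile_maps, rotations):
    """Toy hand-crafted features for one trial.

    tactile_maps: (T, n_cells) binary touch maps over time (no pressure info)
    rotations:    (T, 3) rotation angles over time
    """
    touched = tactile_maps.mean(axis=0)              # average coverage per cell
    return np.array([
        touched.mean(),                              # overall contact area
        touched.std(),                               # spread of contact
        tactile_maps.sum(axis=1).std(),              # variability of touch extent
        np.abs(np.diff(rotations, axis=0)).mean(),   # mean rotation speed
    ])

# Synthetic stand-in for the 237 labelled trials (4 emotion classes).
X = np.stack([extract_features(rng.integers(0, 2, (50, 16)).astype(float),
                               rng.normal(size=(50, 3)).cumsum(axis=0))
              for _ in range(237)])
y = rng.integers(0, 4, size=237)   # anger / sadness / excitement / gratitude

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```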

    What's on your plate? Collecting multimodal data to understand commensal behavior

    Eating is a fundamental part of human life and is, more than anything, a social activity. A new field, known as Computational Commensality, has been created to computationally address various social aspects of food and eating. This paper illustrates a study on remote dining that we conducted online in May 2021. To better understand this phenomenon, known as Digital Commensality, we recorded 11 pairs of friends sharing a meal online through a videoconferencing app. In the videos, participants consume a plate of pasta while chatting with a friend or a family member. After the remote dinner, participants were asked to fill in the Digital Commensality questionnaire, a validated questionnaire assessing the effects of remote commensal experiences, and to provide their opinions on the shortcomings of currently available technologies. Besides presenting the study, the paper introduces the first Digital Commensality data-set, containing videos, facial landmarks, and quantitative and qualitative responses. After surveying multimodal data-sets and corpora that could be exploited to understand commensal behavior, we comment on the feasibility of using remote meals as a source for building data-sets to investigate commensal behavior. Finally, we explore possible future research directions emerging from our results.

    Social Interaction Data-sets in the Age of Covid-19: a Case Study on Digital Commensality

    Research focusing on social interaction often leverages data-sets, allowing annotation, analysis, and modeling of social behavior. When it comes to commensality, researchers have started working on computational models for recognizing food- and eating-related activities. The growing research area known as Digital Commensality has focused on meals shared online, for instance through videochat. However, to investigate this topic, traditional data-sets recorded in laboratory settings may not be the best option in terms of ecological validity. Covid-19 restrictions and lock-downs have led to an increase in online gatherings, with many people becoming used to the idea of sharing meals online. Following this trend, we propose the concept of collecting data by recording online interactions and discuss the challenges related to this methodology. We illustrate our approach in creating the first Digital Commensality data-set, containing recordings of food-related social interactions collected online during the Covid-19 outbreak.

    Analysis of movement quality in full-body physical activities

    Full-body human movement is characterized by fine-grained expressive qualities that humans can easily exhibit and recognize in others' movement. In sports (e.g., martial arts) and performing arts (e.g., dance), the same sequence of movements can be performed in a wide range of ways characterized by different qualities, often in terms of subtle (spatial and temporal) perturbations of the movement. Even a non-expert observer can distinguish between a top-level and an average performance by a dancer or martial artist. The difference is not in the performed movements, which are the same in both cases, but in the "quality" of their performance. In this article, we present a computational framework aimed at an automated approximate measure of movement quality in full-body physical activities. Starting from motion capture data, the framework computes low-level (e.g., the velocity of a limb) and high-level (e.g., synchronization between different limbs) movement features. This vector of features is then integrated to compute a value intended to provide a quantitative assessment of movement quality, approximating the evaluation that an external expert observer would give of the same sequence of movements. Next, a system representing a concrete implementation of the framework is proposed. Karate is adopted as a testbed. We selected two different katas (i.e., detailed choreographies of movements in karate) characterized by different overall attitudes and expressions (aggressiveness, meditation), and we asked seven athletes of various ages and levels of experience to perform them. Motion capture data were collected from the performances and analyzed with the system. The results of the automated analysis were compared with the scores given by 14 karate experts who rated the same performances. Results show that the movement-quality scores computed by the system and the ratings given by the human observers are highly correlated (Pearson's correlations r = 0.84, p = 0.001 and r = 0.75, p = 0.005).
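    As a rough illustration of the kind of pipeline the framework describes, the sketch below computes one low-level feature (limb speed) and one high-level feature (inter-limb synchronization) from motion-capture trajectories, aggregates them into a single score, and correlates the scores with expert ratings. The specific features, the weighted-sum aggregation, and the synthetic data are assumptions, not the paper's implementation.

```python
# Minimal sketch (assumptions throughout): low- and high-level movement features
# from motion-capture trajectories, aggregated into a quality score and compared
# with expert ratings via Pearson correlation.
import numpy as np
from scipy.stats import pearsonr

def limb_velocity(positions, fps=100):
    """Low-level feature: mean speed of one joint, positions shape (T, 3)."""
    return np.linalg.norm(np.diff(positions, axis=0), axis=1).mean() * fps

def limb_synchronization(pos_a, pos_b):
    """High-level feature: correlation between two limbs' speed profiles."""
    va = np.linalg.norm(np.diff(pos_a, axis=0), axis=1)
    vb = np.linalg.norm(np.diff(pos_b, axis=0), axis=1)
    return pearsonr(va, vb)[0]

def quality_score(features, weights):
    """Integrate the feature vector into one value (the weighted sum is a
    placeholder for whatever aggregation the framework actually uses)."""
    return float(np.dot(features, weights))

# Synthetic example: 7 performances, a 2-feature vector each, fixed weights.
rng = np.random.default_rng(1)
weights = np.array([0.3, 0.7])
scores = []
for _ in range(7):
    left = rng.normal(size=(500, 3)).cumsum(axis=0)    # fake joint trajectory
    right = rng.normal(size=(500, 3)).cumsum(axis=0)
    feats = np.array([limb_velocity(left), limb_synchronization(left, right)])
    scores.append(quality_score(feats, weights))

expert_ratings = rng.uniform(1, 10, size=7)   # stand-in for the experts' means
r, p = pearsonr(scores, expert_ratings)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```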

    Toward Emotion Recognition From Physiological Signals in the Wild: Approaching the Methodological Issues in Real-Life Data Collection

    Emotion, mood, and stress recognition (EMSR) has been studied in laboratory settings for decades. In particular, physiological signals are widely used to detect and classify affective states under lab conditions. However, physiological reactions to emotional stimuli have been found to differ between laboratory and natural settings. Thanks to recent technological progress (e.g., in wearables), the creation of EMSR systems for a large number of consumers during their everyday activities is increasingly possible. Therefore, datasets created in the wild are needed to ensure the validity and exploitability of EMSR models for real-life applications. In this paper, we first present common techniques used in laboratory settings to induce emotions for the purpose of physiological dataset creation. Next, the advantages and challenges of data collection in the wild are discussed. To assess the applicability of existing datasets to real-life applications, we propose a set of categories to guide and compare at a glance the different methodologies used by researchers to collect such data. For this purpose, we also introduce a visual tool called Graphical Assessment of Real-life Application-Focused Emotional Dataset (GARAFED). In the last part of the paper, we apply the proposed tool to compare existing physiological datasets for EMSR in the wild and to show possible improvements and future directions of research. We hope that this paper and GARAFED will serve as guidelines for researchers and developers who aim to collect affect-related data for real-life EMSR-based applications.

    Neuroprotection of Cholinergic Neurons with a Tau Aggregation Inhibitor and Rivastigmine in an Alzheimer's-like Tauopathy Mouse Model

    Acknowledgments: The authors acknowledge Joanna Lewandowska for technical support with the perfusion of mice, brain collection and embedding, and for assisting in the histological staining procedures; and Adrianna Wysocka for providing her proprietary Makro tool, enabling the objectification and standardization of the ROI measurements using Fiji. S1D12 was provided by Dr Soumya Palliyil, Scottish Biologics Facility, University of Aberdeen, Aberdeen, UK. Peer reviewed.

    The First 1 1/2 Years of TOTEM Roman Pot Operation at LHC

    Since the LHC running season 2010, the TOTEM Roman Pots (RPs) have been fully operational and serve to collect elastic and diffractive proton-proton scattering data. As with other movable devices approaching the high-intensity LHC beams, reliable and precise control of the RP position is critical for machine protection. After a review of the RP movement control and position interlock system, the crucial task of alignment will be discussed. Comment: 3 pages, 6 figures; 2nd International Particle Accelerator Conference (IPAC 2011), San Sebastian, Spain; contribution MOPO01
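    The abstract only outlines the movement-control and interlock system; as a purely illustrative sketch of the kind of check such a system performs, the snippet below flags any pot that moves inside a minimum allowed distance from the beam. The names, thresholds, and response are hypothetical, not the TOTEM/LHC implementation.

```python
# Purely illustrative position-interlock check; device names, thresholds and
# the response are hypothetical placeholders, not the actual TOTEM/LHC system.
from dataclasses import dataclass

@dataclass
class RomanPot:
    name: str
    position_mm: float      # measured distance from the beam axis
    allowed_min_mm: float   # closest approach permitted for the current beam mode

def interlock_ok(pots: list[RomanPot]) -> bool:
    """Return True only if every pot stays outside its exclusion zone."""
    return all(p.position_mm >= p.allowed_min_mm for p in pots)

pots = [RomanPot("RP_45_220_top", 6.5, 5.0),
        RomanPot("RP_45_220_bottom", 4.2, 5.0)]   # second pot is too close

if not interlock_ok(pots):
    print("Interlock raised: inhibit further movement / request beam dump")
```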

    First Results from the TOTEM Experiment

    The first physics results from the TOTEM experiment are reported here, concerning the measurements of the total, differential elastic, elastic, and inelastic pp cross-sections at the LHC energy of √s = 7 TeV, obtained using the luminosity measurement from CMS. A preliminary measurement of the forward charged-particle η distribution is also shown. Comment: Conference Proceeding. MPI@LHC 2010: 2nd International Workshop on Multiple Partonic Interactions at the LHC. Glasgow (UK), 29th of November to the 3rd of December 2010
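    For context, the standard relations behind such measurements are sketched below: an event count normalised to the CMS-measured integrated luminosity yields a cross-section, and the optical theorem links the forward elastic rate to the total cross-section. These are only the schematic forms; the actual TOTEM analysis includes acceptance and efficiency corrections and an extrapolation of the elastic rate to t = 0.

```latex
% Schematic relations only (natural units); not the detailed TOTEM analysis.
\sigma = \frac{N_{\text{sel}}}{\int \mathcal{L}\, dt}
\qquad \text{(selected events normalised to the CMS-measured luminosity)}

\left.\frac{d\sigma_{\text{el}}}{dt}\right|_{t=0}
  = \frac{(1+\rho^{2})\,\sigma_{\text{tot}}^{2}}{16\pi}
\qquad \text{(optical theorem, with } \rho \text{ the ratio of the real to the
imaginary part of the forward elastic amplitude)}
```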
