
    RoboChain: A Secure Data-Sharing Framework for Human-Robot Interaction

    Robots have the potential to revolutionize the way we interact with the world around us. One of their most promising applications is in mobile health, where they can be used to facilitate clinical interventions. However, to accomplish this, robots need access to our private data in order to learn from these data and improve their interaction capabilities. Furthermore, to enhance this learning process, knowledge sharing among multiple robot units is the natural next step. To date, however, there is no well-established framework that allows for such data sharing while preserving the privacy of the users (e.g., hospital patients). To this end, we introduce RoboChain - the first learning framework for secure, decentralized and computationally efficient data and model sharing among multiple robot units installed at multiple sites (e.g., hospitals). RoboChain builds upon and combines the latest advances in open data access and blockchain technologies, as well as machine learning. We illustrate this framework using the example of a clinical intervention conducted in a private network of hospitals. Specifically, we lay down the system architecture that allows multiple robot units, conducting the interventions at different hospitals, to perform efficient learning without compromising data privacy. (Comment: 7 pages, 6 figures)
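    A minimal sketch of the general idea (not the authors' implementation, and with hypothetical site names): each hospital keeps raw patient data local and publishes only a fingerprint of its model update to an append-only, hash-chained ledger, so other sites can verify what was shared without seeing the data.

        # Sketch: hash-chained ledger of model-update fingerprints (assumption, not RoboChain's actual protocol).
        import hashlib, json, time

        class Ledger:
            """Append-only chain of hashed model-update records."""
            def __init__(self):
                self.blocks = []

            def append(self, record: dict) -> dict:
                prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
                payload = json.dumps(record, sort_keys=True)
                block = {
                    "record": record,
                    "prev_hash": prev_hash,
                    "hash": hashlib.sha256((prev_hash + payload).encode()).hexdigest(),
                    "timestamp": time.time(),
                }
                self.blocks.append(block)
                return block

            def verify(self) -> bool:
                prev = "0" * 64
                for b in self.blocks:
                    payload = json.dumps(b["record"], sort_keys=True)
                    if b["prev_hash"] != prev or \
                       b["hash"] != hashlib.sha256((prev + payload).encode()).hexdigest():
                        return False
                    prev = b["hash"]
                return True

        # Usage: a site publishes only the SHA-256 fingerprint of its local model update.
        ledger = Ledger()
        local_update = [0.12, -0.03, 0.44]   # hypothetical model-weight delta, never shared in the clear
        fingerprint = hashlib.sha256(json.dumps(local_update).encode()).hexdigest()
        ledger.append({"site": "hospital_A", "model_update_sha256": fingerprint})
        assert ledger.verify()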

    Controlling Robots using Artificial Intelligence and a Consortium Blockchain

    Blockchain is a disruptive technology normally used within financial applications; however, it can also be very beneficial in certain robotic contexts, such as when an immutable register of events is required. Among the several properties of blockchain that are useful in robotic environments are not only immutability but also decentralization of the data, irreversibility, accessibility and non-repudiation. In this paper, we propose an architecture that uses blockchain as a ledger and smart-contract technology for robotic control, with external parties (Oracles) processing the data. We show how to register events in a secure way, how smart contracts can be used to control robots, and how to interface with external Artificial Intelligence algorithms for image analysis. The proposed architecture is modular and can be used in multiple contexts, such as manufacturing, network control and robot control, since it is easy to integrate, adapt, maintain and extend to new domains.
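    A minimal sketch under stated assumptions (the paper presumably targets a consortium chain with real smart contracts; here the contract, oracle and ledger are only simulated in plain Python with hypothetical names): an oracle pushes an image-analysis result, the contract logic derives a robot command from it, and the event is appended to a hash-chained log.

        # Sketch: oracle-fed "smart contract" issuing robot commands and logging immutable events (assumption, not the paper's architecture).
        import hashlib, json, time

        EVENT_LOG = []   # stands in for the consortium ledger

        def log_event(event: dict) -> None:
            prev = EVENT_LOG[-1]["hash"] if EVENT_LOG else "0" * 64
            body = json.dumps(event, sort_keys=True)
            EVENT_LOG.append({"event": event, "prev": prev,
                              "hash": hashlib.sha256((prev + body).encode()).hexdigest()})

        def oracle_image_analysis(image_id: str) -> dict:
            # Hypothetical oracle: in a real system this would call an external AI
            # service and sign its answer before it reaches the contract.
            return {"image_id": image_id, "defect_detected": True, "confidence": 0.93}

        def contract_handle_report(report: dict) -> str:
            # Contract logic: derive the robot command from the oracle's report.
            command = "stop_conveyor" if report["defect_detected"] else "continue"
            log_event({"type": "oracle_report", "report": report,
                       "command": command, "ts": time.time()})
            return command

        command = contract_handle_report(oracle_image_analysis("cam0/frame_0042.png"))
        print(command, len(EVENT_LOG))   # -> stop_conveyor 1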

    Exploring Human attitude during Human-Robot Interaction

    The aim of this work is to provide an automatic analysis to assess the user's attitude when interacting with a companion robot. In detail, our work focuses on defining which combination of social cues the robot should recognize in order to stimulate the ongoing conversation, and how. The analysis is performed on video recordings of 9 elderly users. From each video, low-level descriptors of the user's behavior are extracted using open-source automatic tools that provide information on the voice, the body posture and the face landmarks. The assessment of 3 types of attitude (neutral, positive and negative) is performed with 3 machine learning classification algorithms: k-nearest neighbors, random decision forest and support vector regression. Since intra- and inter-subject variability could affect the results of the assessment, this work shows the robustness of the classification models in both scenarios. A further analysis is performed on the type of representation used to describe the attitude: both a raw and an auto-encoded representation are applied to the descriptors. The results of the attitude assessment show high values of accuracy (>0.85) for both unimodal and multimodal data. The outcome of this work can be integrated into a robotic platform to automatically assess the quality of interaction and to modify the robot's behavior accordingly.
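    A minimal sketch of such a pipeline, assuming scikit-learn, synthetic stand-ins for the voice/posture/face descriptors, and an SVM classifier as a stand-in for the paper's support-vector model; the leave-one-subject-out split illustrates how inter-subject variability can be probed.

        # Sketch: three classifiers over multimodal descriptors with a leave-one-subject-out evaluation (illustrative data, not the study's).
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.normal(size=(180, 32))          # hypothetical low-level multimodal descriptors
        y = rng.integers(0, 3, size=180)        # 0 = neutral, 1 = positive, 2 = negative
        subjects = np.repeat(np.arange(9), 20)  # 9 users, 20 clips each

        models = {
            "kNN": KNeighborsClassifier(n_neighbors=5),
            "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
            "SVM": SVC(kernel="rbf"),
        }
        cv = LeaveOneGroupOut()                 # hold out one subject per fold
        for name, model in models.items():
            pipe = make_pipeline(StandardScaler(), model)
            scores = cross_val_score(pipe, X, y, groups=subjects, cv=cv)
            print(f"{name}: mean accuracy {scores.mean():.2f}")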

    Technology-assisted emotion recognition for autism spectrum disorder (ASD) children: a systematic literature review

    Information about affective states in individuals with autism spectrum disorder (ASD) is difficult to obtain, as these individuals usually have deficits in facial expression. Affective state conditions in individuals with ASD are associated with impaired regulation of speech, communication and social skills, leading to poor socio-emotional interaction. It is conceivable that advances in technology could offer a psychophysiological alternative modality, particularly useful for persons who cannot verbally communicate their emotions, such as individuals with ASD. This study focuses on the investigation of technology-assisted approaches and their relationship to the recognition of affective states. A systematic review was executed, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) approach, to summarize relevant research that involved technology-assisted implementations for identifying the affective states of individuals with ASD. The output of an online search across six publication databases for relevant studies published up to 31 July 2020 was analyzed. Of the 391 publications retrieved, 20 papers met the inclusion and exclusion criteria set a priori. Data were synthesized narratively despite methodological variation and heterogeneity. This review presents the research methods, systems, equipment and models that address the issues related to technology assistance and affective states. Consequently, it can be assumed that technology-assisted emotion recognition, for evaluating and classifying affective states, could help improve the efficacy of therapy sessions between therapists and individuals with ASD. This review serves as a concise reference providing a general overview of the current state-of-the-art studies in this area for practitioners, as well as for experienced researchers searching for new directions for future work.

    Dyadic Speech-based Affect Recognition using DAMI-P2C Parent-child Multimodal Interaction Dataset

    Automatic speech-based affect recognition of individuals in dyadic conversation is a challenging task, in part because of its heavy reliance on manual pre-processing. Traditional approaches frequently require hand-crafted speech features and segmentation of speaker turns. In this work, we design end-to-end deep learning methods to recognize each person's affective expression in an audio stream with two speakers, automatically discovering features and time regions relevant to the target speaker's affect. We integrate a local attention mechanism into the end-to-end architecture and compare the performance of three attention implementations -- one mean pooling and two weighted pooling methods. Our results show that the proposed weighted-pooling attention solutions are able to learn to focus on the regions containing the target speaker's affective information and successfully extract the individual's valence and arousal intensity. Here we introduce and use a "dyadic affect in multimodal interaction - parent to child" (DAMI-P2C) dataset collected in a study of 34 families, where a parent and a child (3-7 years old) engage in reading storybooks together. In contrast to existing public datasets for affect recognition, each instance for both speakers in the DAMI-P2C dataset is annotated for perceived affect by three labelers. To encourage more research on the challenging task of multi-speaker affect sensing, we make the annotated DAMI-P2C dataset publicly available, including acoustic features of the dyads' raw audio, affect annotations, and a diverse set of developmental, social, and demographic profiles of each dyad. (Comment: Accepted by the 2020 International Conference on Multimodal Interaction (ICMI'20))
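    A minimal sketch of the weighted-pooling idea, assuming PyTorch and hypothetical layer sizes (not the paper's actual architecture): frame-level audio features are encoded, a learned score per frame weights the pooling over time, and the pooled vector is regressed to valence and arousal; the mean-pooling baseline is noted in a comment.

        # Sketch: learned weighted pooling ("local attention") over frames for valence/arousal regression (illustrative, not the authors' model).
        import torch
        import torch.nn as nn

        class WeightedPoolingAffect(nn.Module):
            def __init__(self, feat_dim=40, hidden=64):
                super().__init__()
                self.encoder = nn.GRU(feat_dim, hidden, batch_first=True)
                self.attn = nn.Linear(hidden, 1)     # one attention score per frame
                self.head = nn.Linear(hidden, 2)     # valence, arousal

            def forward(self, x):                    # x: (batch, frames, feat_dim)
                h, _ = self.encoder(x)               # (batch, frames, hidden)
                w = torch.softmax(self.attn(h), dim=1)   # normalized frame weights
                pooled = (w * h).sum(dim=1)          # weighted pooling over time
                # mean-pooling baseline would be: pooled = h.mean(dim=1)
                return self.head(pooled)

        model = WeightedPoolingAffect()
        frames = torch.randn(4, 300, 40)             # 4 clips, 300 frames, 40-dim features
        print(model(frames).shape)                    # torch.Size([4, 2])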