
    Ubiquitous Emotion Recognition with Multimodal Mobile Interfaces

    In 1997, Rosalind Picard introduced the fundamental concepts of affect recognition. Since then, multimodal interfaces such as brain-computer interfaces (BCIs), RGB and depth cameras, and physiological wearables have been used to study human emotion through multimodal facial and physiological data. Much of the work in this field focuses on a single modality to recognize emotion. However, incorporating multimodal data makes a wealth of additional information available for recognizing emotion. Considering this, the aim of this workshop is to look at current and future research activities and trends in ubiquitous emotion recognition through the fusion of data from various multimodal, mobile devices.
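
    To make the fusion idea concrete, here is a minimal Python sketch of decision-level (late) fusion, which combines per-modality class probabilities by weighted averaging; the modality names, weights and emotion labels are illustrative assumptions rather than details from the workshop description.

        import numpy as np

        # Hypothetical label set; real systems define their own taxonomy.
        EMOTIONS = ["anger", "happiness", "sadness", "neutral"]

        def fuse_predictions(per_modality_probs, weights):
            """Late fusion: weighted average of per-modality class probabilities."""
            stacked = np.stack([weights[m] * p for m, p in per_modality_probs.items()])
            return stacked.sum(axis=0) / sum(weights.values())

        # Probabilities from three hypothetical single-modality recognizers.
        probs = {
            "face":     np.array([0.10, 0.70, 0.10, 0.10]),
            "eeg":      np.array([0.20, 0.40, 0.20, 0.20]),
            "wearable": np.array([0.15, 0.55, 0.15, 0.15]),
        }
        weights = {"face": 0.5, "eeg": 0.3, "wearable": 0.2}  # assumed reliabilities
        fused = fuse_predictions(probs, weights)
        print(EMOTIONS[int(np.argmax(fused))])  # -> happiness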

    Breaking fresh ground in human–media interaction research

    Human-Media Interaction research is devoted to methods and situations in which humans, individually or collectively, interact with digital media, systems, devices and environments. Novel interaction paradigms have been enabled by new sensor and actuator technology in recent decades, combined with advances in our knowledge of human-human interaction and of human behavior in general when designing user interfaces.

    Ubiquitous Integration and Temporal Synchronisation (UbiITS) framework: a solution for building complex multimodal data capture and interactive systems

    Contemporary Data Capture and Interactive Systems (DCIS) involve various technical complexities, such as multimodal data types, diverse hardware and software components, time synchronisation issues and distributed deployment configurations. Building these systems is inherently difficult and requires these complexities to be addressed before the intended, purposeful functionality can be attained. The technical issues are often common across diverse applications. This thesis presents the Ubiquitous Integration and Temporal Synchronisation (UbiITS) framework, a generic solution to the technical complexities of building DCISs. The proposed solution is an abstract software framework that can be extended and customised to any application's requirements. UbiITS includes all the fundamental software components, techniques, system-level layer abstractions and reference architecture needed to enable the systematic construction of complex DCISs. This work details four case studies that showcase the versatility and extensibility of the UbiITS framework and demonstrate how it was employed to solve a range of technical requirements. In each case, UbiITS operated as the core element of the application, and each case study is a novel system in its own domain. Longstanding technical issues, such as flexibly integrating and interoperating multimodal tools and precise time synchronisation, were resolved in each application by employing UbiITS. The framework enabled a functional system infrastructure in these cases, opening up new lines of research in each discipline that would not have been possible without it. The thesis further presents a sample implementation of the framework in device firmware, exhibiting its capability to be implemented directly on a hardware platform. Summary metrics are also produced to establish the complexity, reusability, extensibility, implementation and maintainability characteristics of the framework. Engineering and Physical Sciences Research Council (EPSRC) grants EP/F02553X/1, 114433 and 11394
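
    As a small illustration of one complexity named above, precise time synchronisation, the following Python sketch aligns two sensor streams onto a common timeline by nearest-timestamp matching; the sensor names and rates are illustrative assumptions, and the framework's actual mechanisms are considerably more general.

        import bisect

        def align(reference_ts, other_ts):
            """For each reference timestamp, find the index of the closest
            timestamp in the other (sorted) stream."""
            indices = []
            for t in reference_ts:
                i = bisect.bisect_left(other_ts, t)
                candidates = [j for j in (i - 1, i) if 0 <= j < len(other_ts)]
                indices.append(min(candidates, key=lambda j: abs(other_ts[j] - t)))
            return indices

        camera_ts = [0.000, 0.033, 0.066, 0.100]         # ~30 Hz video frames
        physio_ts = [0.000, 0.004, 0.008, 0.012, 0.016,
                     0.032, 0.064, 0.096, 0.105]         # irregular wearable samples
        print(align(camera_ts, physio_ts))               # -> [0, 5, 6, 7]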

    Brain-Computer Interfaces for Artistic Expression

    Artists have been using BCIs for artistic expression since the 1960s. Their interest and creativity are now increasing because of the availability of affordable BCI devices and of software that does not require them to invest extensive time in getting a BCI to work or tuning it to their application. Designers of artistic BCIs are often ahead of more traditional BCI researchers in ideas on using BCIs in multimodal and multiparty contexts, where multiple users are involved and where robustness and efficiency are not the main concerns. The aim of this workshop is to look at current (research) activities in BCIs for artistic expression and to identify research areas that are of interest to BCI and HCI researchers as well as to artists and designers of BCI applications.

    The Translocal Event and the Polyrhythmic Diagram

    This thesis identifies and analyses the key creative protocols in translocal performance practice, and ends with suggestions for new forms of transversal live and mediated performance practice, informed by theory. It argues that ontologies of emergence in dynamic systems nourish contemporary practice in the digital arts. Feedback in self-organised, recursive systems and organisms elicits change, and change transforms. The arguments trace concepts from chaos and complexity theory to virtual multiplicity, relationality, intuition and individuation (in the work of Bergson, Deleuze, Guattari, Simondon, Massumi, and other process theorists). The thesis then examines the intersection of methodologies in philosophy, science and art and the radical contingencies implicit in the technicity of real-time, collaborative composition. Simultaneous forces or tendencies such as perception/memory, content/expression and instinct/intellect produce composites (experience, meaning, and intuition, respectively) that affect the sensation of interplay. The translocal event is itself a diagram: an interstice between the forces of the local and the global, between the tendencies of the individual and the collective. The translocal is a point of reference for exploring the distribution of affect, parameters of control and emergent aesthetics. Translocal interplay, enabled by digital technologies and network protocols, is ontogenetic and autopoietic; diagrammatic and synaesthetic; intuitive and transductive. KeyWorx is a software application developed for real-time, distributed, multimodal media processing. As a technological tool created by artists, KeyWorx supports this intuitive type of creative experience: a real-time, translocal “jamming” that transduces the lived experience of a “biogram,” a synaesthetic hinge-dimension. The emerging aesthetics are processual: intuitive, diagrammatic and transversal.

    The Multimodal Tutor: Adaptive Feedback from Multimodal Experiences

    This doctoral thesis describes the journey of ideation, prototyping and empirical testing of the Multimodal Tutor, a system designed to provide digital feedback that supports psychomotor skill acquisition using learning and multimodal data capture. The feedback is given in real time, with machine-driven assessment of the learner's task execution. The predictions are tailored by supervised machine learning models trained on human-annotated samples. The main contributions of this thesis are: a literature survey on multimodal data for learning, a conceptual model (the Multimodal Learning Analytics Model), a technological framework (the Multimodal Pipeline), a data annotation tool (the Visual Inspection Tool) and a case study in Cardiopulmonary Resuscitation training (the CPR Tutor). The CPR Tutor generates real-time, adaptive feedback from kinematic and myographic data using neural networks.
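
    As a sketch of the real-time feedback loop behind such a tutor, the following Python snippet estimates the chest-compression rate from a kinematic depth signal and maps it to a feedback message; the CPR Tutor itself uses supervised neural networks on kinematic and myographic data, so this rule-based version, its sampling rate and its thresholds are illustrative assumptions only.

        import numpy as np

        def compression_rate(depth, fs):
            """Compressions per minute, counted as local maxima of a depth signal."""
            d = np.asarray(depth)
            peaks = np.flatnonzero((d[1:-1] > d[:-2]) & (d[1:-1] > d[2:])) + 1
            return len(peaks) / (len(d) / fs / 60.0)

        def feedback(rate):
            # Commonly published guideline range: 100-120 compressions/min.
            if rate < 100:
                return "Push faster"
            if rate > 120:
                return "Push slower"
            return "Good rate"

        fs = 50                                              # Hz, assumed sensor rate
        t = np.arange(0, 10, 1 / fs)
        depth = 0.025 * np.sin(2 * np.pi * (110 / 60) * t)   # ~110 compressions/min
        rate = compression_rate(depth, fs)
        print(round(rate), feedback(rate))                   # -> 114 Good rate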

    Towards a Personalized Multi-Domain Digital Neurophenotyping Model for the Detection and Treatment of Mood Trajectories

    The commercial availability of many real-life smart sensors, wearables, and mobile apps provides a valuable source of information about a wide range of human behavioral, physiological, and social markers that can be used to infer the user’s mental state and mood. However, there are currently no commercial digital products that integrate these psychosocial metrics with real-time measurement of neural activity. In particular, electroencephalography (EEG) is a well-validated and highly sensitive neuroimaging method that yields robust markers of mood and affective processing and has been widely used in mental health research for decades. The integration of wearable neuro-sensors into existing multimodal sensor arrays could hold great promise for deep digital neurophenotyping in the detection and personalized treatment of mood disorders. In this paper, we propose a multi-domain digital neurophenotyping model based on the socioecological model of health. The proposed model presents a holistic approach to digital mental health, leverages recent neuroscientific advances, and could deliver highly personalized diagnoses and treatments. The technological and ethical challenges of this model are also discussed.
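
    As an illustration of one EEG-derived marker such a model could ingest, the following Python sketch computes frontal alpha asymmetry, the difference in log alpha-band (8-13 Hz) power between right and left frontal channels, a widely studied correlate of affective style; the channel names, sampling rate and synthetic signals are illustrative assumptions.

        import numpy as np

        def band_power(x, fs, lo=8.0, hi=13.0):
            """Mean periodogram power in the [lo, hi] Hz band."""
            freqs = np.fft.rfftfreq(len(x), 1 / fs)
            psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
            return psd[(freqs >= lo) & (freqs <= hi)].mean()

        fs = 256                                  # Hz, assumed wearable-EEG rate
        t = np.arange(0, 4, 1 / fs)
        rng = np.random.default_rng(0)
        f3 = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)        # left frontal
        f4 = 1.5 * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)  # right frontal
        asymmetry = np.log(band_power(f4, fs)) - np.log(band_power(f3, fs))
        print(f"frontal alpha asymmetry: {asymmetry:.2f}")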