
    More playful user interfaces: an introduction

    In this chapter we place recent research advances in creating playful user interfaces in a historical context. We offer observations on how leisure time is spent, drawing in particular on predictions from previous decades and views expressed in science fiction novels. We confront these views and predictions with what has actually happened since the advent of computers, the Internet, the World Wide Web, and the sensors and actuators that are increasingly integrated into our environments and into devices that are with us 24/7, devices that are not only with us but also connected to networks of nodes representing people, institutions, and companies. Playful user interfaces are not only of interest for entertainment applications; educational and behavior-change support systems can also profit from a playful approach. The chapter concludes with a meta-level review of the chapters in this book, in which we distinguish three views on research and application domains for playful user interfaces: (1) Designing Interactions for and by Children, (2) Designing Interactions with Nature, Animals, and Things, and (3) Designing Interactions for Arts, Performances, and Sports.

    Assistive telehealth systems for neurorehabilitation

    Telehealth is an evolving field within the broader domain of biomedical engineering, specifically situated within the context of the Internet of Medical Things (IoMT). The importance of telehealth systems is increasingly recognized in today's society, as they enable physicians to treat patients remotely. One significant application in neurorehabilitation is Transcranial Direct Current Stimulation (tDCS), which has over several years demonstrated its effectiveness in modulating mental function and learning, and which is widely accepted as a safe approach in the field. This presentation focuses on the development of a non-invasive wearable tDCS device with integrated Internet connectivity. This IoMT device enables remote configuration of treatment parameters such as session duration, current level, and placebo status: clinicians can remotely access the device and define these parameters within the approved safety ranges for tDCS treatments. In addition to the wearable tDCS device, a prototype web portal is being developed to collect performance data during neurorehabilitation exercises conducted by individuals at home; this portal also facilitates remote interaction between patients and clinicians. To provide a platform-independent solution for accessing up-to-date healthcare information, a Progressive Web Application (PWA) is being developed that enables real-time communication between patients and doctors through text chat and video conferencing. The primary objective is to create a cross-platform web application with PWA features that can function effectively as a native application on various operating systems.
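
    A minimal sketch of the server-side safety check implied by remote parameter configuration is given below in Python. All names and limits are illustrative assumptions; the presentation does not specify the device's API or the approved ranges.

        from dataclasses import dataclass

        # Illustrative safety limits only; real limits must come from the
        # clinically approved tDCS protocol.
        MAX_CURRENT_MA = 2.0
        MAX_DURATION_MIN = 30

        @dataclass
        class TdcsSession:
            duration_min: int     # session duration in minutes
            current_ma: float     # stimulation current in milliamperes
            placebo: bool         # sham stimulation flag

        def validate_session(s: TdcsSession) -> TdcsSession:
            """Reject remotely supplied parameters outside the approved
            ranges before the wearable arms its stimulation circuit."""
            if not 0 < s.current_ma <= MAX_CURRENT_MA:
                raise ValueError(f"current {s.current_ma} mA outside safety range")
            if not 0 < s.duration_min <= MAX_DURATION_MIN:
                raise ValueError(f"duration {s.duration_min} min outside safety range")
            return s

        # A clinician-defined configuration received over the Internet would
        # be validated like this before any treatment starts.
        validate_session(TdcsSession(duration_min=20, current_ma=1.5, placebo=False))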

    Body-Borne Computers as Extensions of Self

    The opportunities for wearable technologies go well beyond always-available information displays or health sensing devices. The concept of the cyborg introduced by Clynes and Kline, along with works in various fields of research and the arts, provides a vision of what technology integrated with the body can offer. This paper identifies different categories of research aimed at augmenting humans and focuses specifically on three areas of augmentation of the human body and its sensorimotor capabilities: physical morphology, skin display, and somatosensory extension. We discuss how such digital extensions relate to the malleable nature of our self-image. We argue that body-borne devices are no longer simply functional apparatus but offer a direct interplay with the mind. Finally, we showcase some of our own projects in this area and shed light on future challenges.

    Computer-aided investigation of interaction mediated by an AR-enabled wearable interface

    Dierker A. Computer-aided investigation of interaction mediated by an AR-enabled wearable interface. Bielefeld: Universitätsbibliothek Bielefeld; 2012.
    This thesis provides an approach to facilitating the analysis of nonverbal behaviour during human-human interaction, alleviating much of the work researchers do, from experiment control and data acquisition through tagging to the final analysis of the data. To this end, software and hardware techniques such as sensor technology, machine learning, object tracking, data processing, visualisation, and Augmented Reality are combined into an Augmented-Reality-enabled Interception Interface (ARbInI), a modular wearable interface for two users. The interface mediates the users' interaction, intercepting and influencing it. ARbInI consists of two identical, mutually coupled setups of sensors and displays. Combining cameras and microphones with sensors, the system offers an efficient way to record rich multimodal interaction cues. The recorded data can be analysed online and offline for interaction features (e.g. head gestures in head movements, objects in joint attention, speech times) using integrated machine-learning approaches, and the classified features can be tagged in the data. For a detailed analysis, the recorded multimodal data is transferred automatically into file bundles loadable in a standard annotation tool, where the data can be further tagged by hand. For statistical analyses of the complete multimodal corpus, a toolbox for a standard statistics program allows the corpus to be imported directly and the analysis of multimodal and complex relationships between arbitrary data types to be automated. When the optional multimodal Augmented Reality techniques integrated into ARbInI are used, the camera records exactly what the participant can see, nothing more and nothing less. This yields additional advantages during an experiment: (a) the experiment can be controlled via the auditory or visual displays, ensuring controlled experimental conditions; (b) the experiment can be disturbed, making it possible to investigate how problems in interaction are discovered and solved; and (c) the experiment can be enhanced by interactively incorporating the behaviour of the user, making it possible to investigate how users cope with novel interaction channels. The thesis introduces criteria for the design of scenarios in which interaction analysis can benefit from the experimentation interface and presents a set of such scenarios. These scenarios are applied in several empirical studies, collecting multimodal corpora that particularly include head gestures. The capabilities of computer-aided interaction analysis for the investigation of speech, visual attention, and head movements are illustrated on these empirical data. The effects of the head-mounted display (HMD) are evaluated thoroughly in two studies. The results show that HMD users need more head movements to achieve the same shift of gaze direction and perform fewer head gestures, with slower velocity and fewer repetitions, than non-HMD users. From this, a reduced willingness to perform head movements when not necessary can be concluded. Moreover, compensation strategies are established, such as leaning backwards to enlarge the field of view, and increasing the number of utterances or changing the reference to objects to compensate for the absence of mutual eye contact.
    Two studies investigate the interaction while actively inducing misunderstandings. The participants here use compensation strategies such as multiple verification questions and arbitrary gaze movements. Additionally, an enhancement method that highlights the visual attention of the interaction partner is evaluated in a search task. The results show a significantly shorter reaction time and fewer errors.
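
    To make the head-gesture analysis concrete, here is a heuristic Python sketch of nod detection from a head-pitch trace. The thesis uses trained machine-learning classifiers, so the simple thresholding below is only an illustrative stand-in, and all numbers are assumptions.

        import numpy as np

        def count_head_nods(pitch_deg, fs, vel_thresh=30.0):
            """Count nod-like events in a head-pitch trace.

            A nod is approximated as a fast downward pitch movement followed
            by a fast upward one; thresholds are illustrative only.
            """
            vel = np.gradient(pitch_deg) * fs            # angular velocity, deg/s
            state, nods = "idle", 0
            for v in vel:
                if state == "idle" and v < -vel_thresh:  # fast downward movement
                    state = "down"
                elif state == "down" and v > vel_thresh: # rebound upward
                    nods += 1
                    state = "idle"
            return nods

        # 2 s of synthetic data at 100 Hz containing one brief nod at t = 1 s.
        fs = 100.0
        t = np.arange(0, 2, 1 / fs)
        pitch = -15 * np.exp(-((t - 1.0) ** 2) / 0.005)
        print(count_head_nods(pitch, fs))                # -> 1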

    A Human-Centric Metaverse Enabled by Brain-Computer Interface: A Survey

    The growing interest in the Metaverse has generated momentum for members of academia and industry to innovate toward realizing the Metaverse world. The Metaverse is a unique, continuous, and shared virtual world where humans embody a digital form within an online platform. Through a digital avatar, Metaverse users should have a perceptual presence within the environment and be able to interact with and control the virtual world around them. A human-centric design is thus a crucial element of the Metaverse: human users are not only the central entity but also a source of multi-sensory data that can be used to enrich the Metaverse ecosystem. In this survey, we study the potential applications of Brain-Computer Interface (BCI) technologies that can enhance the experience of Metaverse users. By communicating directly with the human brain, the most complex organ in the human body, BCI technologies hold the potential for the most intuitive human-machine system, operating at the speed of thought. Through this neural pathway, BCI technologies can enable various innovative applications for the Metaverse, such as user cognitive state monitoring, digital avatar control, virtual interactions, and imagined speech communications. This survey first outlines the fundamental background of the Metaverse and BCI technologies. We then discuss current challenges of the Metaverse that BCI can potentially address, such as motion sickness when users experience virtual environments or negative emotional states of users in immersive virtual applications. After that, we propose and discuss a new research direction called the Human Digital Twin, in which digital twins can create an intelligent and interactive avatar from the user's brain signals. We also present the challenges and potential solutions in synchronizing and communicating between virtual and physical entities in the Metaverse.
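
    As a toy illustration of the avatar-control pathway the survey describes, the Python sketch below maps lateralized mu-band power from two motor-cortex EEG channels to a movement command. Channel names, band limits, and the thresholding rule are illustrative assumptions, not a method taken from the survey.

        import numpy as np
        from scipy.signal import welch

        def bandpower(eeg, fs, lo, hi):
            """Average power of one EEG channel in the [lo, hi] Hz band."""
            f, psd = welch(eeg, fs=fs, nperseg=fs * 2)
            return psd[(f >= lo) & (f <= hi)].mean()

        def avatar_command(c3, c4, fs):
            """Map lateralized mu-band (8-12 Hz) desynchronization over the
            motor cortex to a left/right avatar command. Real BCIs use
            calibrated classifiers; this thresholding is purely didactic."""
            left, right = bandpower(c3, fs, 8, 12), bandpower(c4, fs, 8, 12)
            # Lower mu power contralateral to the imagined hand signals
            # movement intent (imagined left hand suppresses C4).
            return "turn_left" if right < left else "turn_right"

        rng = np.random.default_rng(0)
        fs = 256
        n = np.arange(4 * fs)
        c3 = rng.standard_normal(4 * fs) + np.sin(2 * np.pi * 10 * n / fs)  # strong mu
        c4 = rng.standard_normal(4 * fs)                                    # suppressed mu
        print(avatar_command(c3, c4, fs))  # -> "turn_left"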

    CogniViTra, a digital solution to support dual-task rehabilitation training

    This article focuses on an eHealth application, CogniViTra, to support combined cognitive and physical training (i.e., dual-task training), which can be done at home under the supervision of a health care provider. CogniViTra was designed and implemented to take advantage of an existing platform of services supporting a Cognitive Health Ecosystem and comprises several components, including the CogniViTra Box (i.e., the patient terminal equipment), the Virtual Coach that provides assistance, the Game Presentation for the rehabilitation exercises, and the Pose and Gesture Recognition component that quantifies responses during dual-task training. For validation, a functional prototype was exhibited at a highly specialized event on healthy and active ageing, and key stakeholders were invited to test it and share their insights. Fifty-seven specialists in information-technology-based applications supporting healthy and active ageing took part, and the results indicated that the functional prototype performs well in recognizing poses and gestures such as moving the trunk to the left or to the right, and that most participants would use CogniViTra or suggest its use. In general, participants considered CogniViTra a useful tool that may represent added value for remote dual-task training. This study received funding from the European Union under the AAL programme through project CogniViTra (Grant No. AAL-2018-5-115-CP), with national funding support from FCT, ISCIII, and FNR. This publication reflects the authors' views, and neither AAL nor the national funding agencies are responsible for any use that may be made of the information.
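
    A minimal Python sketch of the kind of pose-based check the prototype was tested on (trunk moving left or right) follows. The keypoint format, threshold, and coordinate conventions are assumptions; the article does not describe the underlying pose model.

        import math

        def trunk_lean(shoulder_mid, hip_mid, thresh_deg=10.0):
            """Classify trunk lean from two pose keypoints given as (x, y)
            image coordinates with y growing downward. 'Right' here means
            image-right; mapping to the subject's side depends on whether
            the camera view is mirrored."""
            dx = shoulder_mid[0] - hip_mid[0]
            dy = hip_mid[1] - shoulder_mid[1]          # torso length in image
            angle = math.degrees(math.atan2(dx, dy))   # 0 deg = upright
            if angle > thresh_deg:
                return "lean_right"
            if angle < -thresh_deg:
                return "lean_left"
            return "upright"

        # Shoulders displaced toward image-right relative to the hips.
        print(trunk_lean((0.62, 0.40), (0.50, 0.80)))  # -> "lean_right"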

    ENGAGE-DEM: a model of engagement of people with dementia

    One of the most effective ways to improve quality of life in dementia is to expose people to meaningful activities. The study of engagement is crucial to identify which activities are significant for persons with dementia and to customize them. Previous work has mainly focused on developing assessment tools, and the only available model of engagement for people with dementia focused on factors influencing engagement or influenced by engagement. This paper focuses on the internal functioning of engagement and presents the development and testing of a model specifying the components of engagement, their measures, and the relationships among them. We collected behavioral and physiological data while participants with dementia (N = 14) took part in six play sessions: three of game-based cognitive stimulation and three of robot-based free play. We tested the concurrent validity of the measures employed to gauge engagement and ran factor analysis and Structural Equation Modeling to determine whether the components of engagement and their relationships were those hypothesized. The model we constructed, which we call ENGAGE-DEM, achieved excellent goodness of fit and can be considered a scaffold for the development of affective computing frameworks for measuring engagement online and offline, especially in HCI and HRI.
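
    For readers unfamiliar with the concurrent-validity step, the Python fragment below shows the basic computation on made-up numbers: do two measures of the same construct rank the sessions consistently? The actual measures, data, and modeling in the paper (factor analysis, SEM) are far richer; all values here are synthetic and for illustration only.

        import numpy as np
        from scipy.stats import spearmanr

        # Hypothetical paired scores for ten play sessions: an observational
        # engagement rating and a physiological index (values invented).
        observer_rating = np.array([2, 3, 5, 4, 1, 4, 5, 2, 3, 4])
        physio_index = np.array([0.21, 0.35, 0.80, 0.55, 0.10,
                                 0.60, 0.75, 0.30, 0.33, 0.52])

        # Concurrent validity as rank agreement between the two measures.
        rho, p = spearmanr(observer_rating, physio_index)
        print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")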

    Cognition in action: Imaging brain/body dynamics in mobile humans

    We have recently developed a mobile brain/body imaging (MoBI) method that allows simultaneous recording of the brain and body dynamics of humans actively behaving in, and interacting with, their environment. A mobile imaging approach is needed to study cognitive processes that are inherently based on the use of the human physical structure to attain behavioral goals. This review gives examples of the tight coupling between human physical structure and cognitive processing, and of the role of supraspinal activity in the control of human stance and locomotion. Existing brain imaging methods for actively behaving participants are described, and new sensor technology allowing mobile recordings of different behavioral states in humans is introduced. Finally, we review recent work demonstrating the feasibility of a MoBI system developed at the Swartz Center for Computational Neuroscience at the University of California, San Diego, and the range of behavior that can be investigated with this method. Copyright © 2011 by Walter de Gruyter, Berlin, Boston.
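
    A core practical step in any MoBI-style setup is bringing brain and body streams onto a common time base. The Python sketch below shows one simple way to do this by interpolation; the sampling rates and signals are invented for illustration and do not describe the Swartz Center system.

        import numpy as np

        # EEG at 500 Hz and motion capture at 120 Hz on a shared clock are
        # brought onto the EEG time base so brain and body dynamics can be
        # analyzed jointly.
        fs_eeg, fs_mocap, dur = 500, 120, 10.0
        t_eeg = np.arange(0, dur, 1 / fs_eeg)
        t_mocap = np.arange(0, dur, 1 / fs_mocap)

        rng = np.random.default_rng(1)
        eeg = rng.standard_normal(t_eeg.size)                # one EEG channel (a.u.)
        knee_angle = 20 * np.sin(2 * np.pi * 0.8 * t_mocap)  # gait-like joint angle (deg)

        # Resample the slower kinematic stream onto the EEG time base.
        knee_on_eeg_clock = np.interp(t_eeg, t_mocap, knee_angle)
        assert knee_on_eeg_clock.shape == eeg.shape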

    Intelligent Biosignal Processing in Wearable and Implantable Sensors

    This reprint provides a collection of papers illustrating the state of the art in smart processing of data coming from wearable, implantable, or portable sensors. Each paper presents the design, the databases used, the methodological background, the obtained results, and their interpretation for biomedical applications. Representative examples include brain–machine interfaces for medical rehabilitation, the evaluation of sympathetic nerve activity, a novel automated diagnostic tool that diagnoses COVID-19 from ECG data, machine-learning-based hypertension risk assessment from photoplethysmography and electrocardiography signals, Parkinsonian gait assessment using machine learning tools, a thorough analysis of compressive sensing of ECG signals, the development of a nanotechnology application for decoding vagus-nerve activity, detection of liver dysfunction using a wearable electronic nose system, prosthetic hand control using surface electromyography, epileptic seizure detection using a CNN, and premature ventricular contraction detection using deep metric learning. The reprint thus presents significant clinical applications as well as valuable new research issues, illustrating this new field through 16 pertinent studies that address the promises, challenges, and hurdles associated with the synergy of biosignal processing and AI. Covering a wide range of research and application areas, this book is an excellent resource for researchers, physicians, academics, and PhD or master's students working on (bio)signal and image processing, AI, biomaterials, biomechanics, and biotechnology with applications in medicine.
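
    As a small taste of the preprocessing shared by several of the ECG studies collected here, the Python sketch below implements a crude R-peak detector. The filter band, order, and thresholds are illustrative choices, not taken from any chapter of the reprint.

        import numpy as np
        from scipy.signal import butter, filtfilt, find_peaks

        def r_peaks(ecg, fs):
            """Crude R-peak detector: band-pass around the QRS energy band,
            then pick prominent peaks separated by a refractory interval."""
            b, a = butter(3, [5 / (fs / 2), 15 / (fs / 2)], btype="band")
            filtered = filtfilt(b, a, ecg)
            peaks, _ = find_peaks(filtered, distance=int(0.25 * fs),
                                  prominence=filtered.std())
            return peaks

        # Synthetic test: unit impulses once per second stand in for QRS
        # complexes; detections should align with the simulated beats.
        fs = 250
        ecg = np.zeros(10 * fs)
        ecg[::fs] = 1.0
        print(r_peaks(ecg, fs))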

    Neuron-level dynamics of oscillatory network structure and markerless tracking of kinematics during grasping

    Oscillatory synchrony is proposed to play an important role in flexible sensory-motor transformations; changes in the oscillatory network structure at the level of single neurons are assumed to lead to flexible information processing. Yet how the oscillatory network structure at the neuron level changes with different behavior remains elusive. To address this gap, we examined changes in the fronto-parietal oscillatory network structure at the neuron level while monkeys performed a flexible sensory-motor grasping task. We found that neurons formed separate subnetworks in the low-frequency and beta bands. The beta subnetwork was active during steady states and the low-frequency subnetwork during active states of the task, suggesting that the two frequency bands are mutually exclusive at the neuron level. Furthermore, both frequency subnetworks reconfigured at the neuron level for different grip and context conditions, an effect that was mostly lost at any scale larger than single neurons. Our results therefore suggest that the oscillatory network structure at the neuron level meets the necessary requirements for the coordination of flexible sensory-motor transformations. In addition, because tracking hand kinematics is a crucial experimental requirement for analyzing the neuronal control of grasp movements, a 3D markerless, gloveless hand tracking system was developed using computer vision and deep learning techniques.
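
    To illustrate the basic quantity behind such band-specific coupling analyses, the Python sketch below computes coherence between two synthetic signals sharing a 20 Hz drive. It is a didactic stand-in, not the neuron-level network analysis performed in the paper; all signals and band limits are invented.

        import numpy as np
        from scipy.signal import coherence

        # Two signals (e.g., a smoothed spike train and an LFP channel)
        # sharing a common beta-band (20 Hz) drive plus independent noise.
        fs = 1000
        t = np.arange(0, 20, 1 / fs)
        rng = np.random.default_rng(2)
        shared_beta = np.sin(2 * np.pi * 20 * t)
        x = shared_beta + rng.standard_normal(t.size)
        y = shared_beta + rng.standard_normal(t.size)

        f, coh = coherence(x, y, fs=fs, nperseg=2048)
        beta = (f >= 13) & (f <= 30)
        low = (f >= 1) & (f <= 7)
        # Expect noticeably higher mean coherence in the beta band.
        print(f"beta {coh[beta].mean():.2f} vs low-frequency {coh[low].mean():.2f}")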