71 research outputs found
Body swarm interface (BOSI): controlling robotic swarms using human bio-signals
Traditionally, robots are controlled using devices such as joysticks, keyboards, mice, and other
human-computer interface (HCI) devices. Although this approach is effective and
practical in some cases, it is restricted to healthy individuals without disabilities,
and it requires the user to master the device before use. It also becomes complicated and unintuitive when multiple robots must be controlled simultaneously with these traditional devices, as in the case of Human Swarm Interfaces (HSI).
This work presents a novel concept of using human bio-signals to control swarms of
robots. This concept offers two major advantages: first, it gives amputees and
people with certain disabilities the ability to control robotic swarms, which has previously
not been possible; second, it gives the user a more intuitive interface for controlling
swarms of robots through gestures, thoughts, and eye movement.
We measure different bio-signals from the human body, including electroencephalography
(EEG), electromyography (EMG), and electrooculography (EOG), using off-the-shelf
products. After minimal signal processing, we decode the intended control action
using machine learning techniques such as Hidden Markov Models (HMM) and K-Nearest
Neighbors (K-NN). We employ formation controllers based on distance and displacement
to control the shape and motion of the robotic swarm. The thought and gesture classifications are compared against ground truth, and the resulting pipelines are evaluated in both simulations and hardware experiments with swarms of ground robots and aerial vehicles.
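The displacement-based formation control mentioned in the abstract can be sketched as follows; the triangle formation, gain, and step count below are illustrative assumptions, not the controllers used in the work.

```python
import numpy as np

# Hedged sketch of a displacement-based formation controller: each robot
# steers so that its displacement to every other robot matches a desired
# one. The triangle formation, gain, and step count are illustrative.
desired = np.array([[0.0, 0.0],
                    [1.0, 0.0],
                    [0.5, 1.0]])  # desired positions, up to translation

rng = np.random.default_rng(0)
pos = rng.uniform(-2.0, 2.0, size=(3, 2))  # random initial positions

dt = 0.05  # integration step
for _ in range(400):
    u = np.zeros_like(pos)
    for i in range(3):
        for j in range(3):
            # Drive the displacement (pos_j - pos_i) toward the desired one.
            u[i] += (pos[j] - pos[i]) - (desired[j] - desired[i])
    pos += dt * u

# Achieved inter-robot displacements match the desired formation
# (the swarm converges up to a common translation).
formation_error = np.linalg.norm((pos - pos[0]) - (desired - desired[0]))
```

This is a standard linear consensus law; on the complete interaction graph used here it converges exponentially to the desired shape regardless of initial positions.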
Look me in the eyes: A survey of eye and gaze animation for virtual agents and artificial systems
A person's emotions and state of mind are apparent in their face and eyes. As a Latin proverb states: "The face is the portrait of the mind; the eyes, its informers." This presents a huge challenge for computer graphics researchers in the generation of artificial entities that aim to replicate the movement and appearance of the human eye, which is so important in human-human interactions. This State of the Art Report provides an overview of the efforts made in tackling this challenging task. As with many topics in Computer Graphics, a cross-disciplinary approach is required to fully understand the workings of the eye in the transmission of information to the user. We discuss the movement of the eyeballs, eyelids, and the head from a physiological perspective, and how these movements can be modelled, rendered, and animated in computer graphics applications. Further, we present recent research from psychology and sociology that seeks to understand higher-level behaviours, such as attention and eye-gaze, during the expression of emotion or during conversation, and how these are synthesised in Computer Graphics and Robotics.
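One physiological regularity commonly modelled when animating eye movements of the kind this survey covers is the saccadic "main sequence", where peak eye velocity grows with saccade amplitude and then saturates. A minimal sketch, with parameter values that are illustrative assumptions rather than values from the survey:

```python
import math

# Hedged sketch: an exponential main-sequence model relating saccade
# amplitude to peak velocity. V_MAX and C are illustrative assumptions.
V_MAX = 500.0  # asymptotic peak velocity (deg/s)
C = 14.0       # amplitude constant (deg)

def peak_velocity(amplitude_deg: float) -> float:
    """Peak angular velocity predicted for a saccade of the given amplitude."""
    return V_MAX * (1.0 - math.exp(-amplitude_deg / C))
```

An animation system can use such a relationship to pick a physiologically plausible velocity profile for each synthetic saccade instead of moving the eye at a constant rate.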
Development Of a Multisensorial System For Emotions Recognition
Automated reading and analysis of human emotion has the potential to be a powerful tool for developing a wide variety of applications, such as human-computer interaction systems, but it is also a very difficult problem because human communication is very complex. Humans employ multiple sensory systems in emotion recognition. In the same way, an emotionally intelligent machine requires multiple sensors to create an affective interaction with users. Thus, this Master's thesis proposes the development of a multisensorial system for automatic emotion recognition.
The multisensorial system is composed of three sensors, each exploring different emotional aspects: an eye tracker, using the IR-PCR technique, supported studies of visual social attention; a Kinect, in conjunction with the FACS-AU technique, enabled the development of a tool for facial expression recognition; and a thermal camera, using the FT-RoI technique, was employed to detect facial thermal variation. The multisensorial integration of the system made a more complete and varied analysis of the emotional aspects possible, allowing the evaluation of focal attention, valence comprehension, valence expression, facial expression, valence recognition, and arousal recognition. Experiments were performed with sixteen healthy adult volunteers and 105 healthy child volunteers. The result is a system able to detect eye gaze, recognize facial expressions, and estimate valence and arousal for emotion recognition. The system also has the potential to analyze people's emotions from facial features using contactless sensors in semi-structured environments, such as clinics, laboratories, or classrooms, and to become an embedded tool in robots, endowing these machines with emotional intelligence for more natural interaction with humans.
Keywords: emotion recognition, eye tracking, facial expression, facial thermal variation, multisensorial integration
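The thesis does not detail its integration scheme here; as a loose illustration of how estimates from three such modalities could be combined, the following sketch uses a weighted late fusion. The function name, modality keys, and weights are hypothetical, not the thesis's actual method.

```python
# Hedged sketch: weighted late fusion of per-modality (valence, arousal)
# estimates from an eye tracker, a facial-expression channel, and a
# thermal camera. All names and numbers are illustrative assumptions.
def fuse(estimates, weights):
    """Weighted average of per-modality (valence, arousal) estimates."""
    total = sum(weights[m] for m in estimates)
    valence = sum(weights[m] * estimates[m][0] for m in estimates) / total
    arousal = sum(weights[m] * estimates[m][1] for m in estimates) / total
    return valence, arousal

valence, arousal = fuse(
    {"gaze": (0.2, 0.6), "face": (0.4, 0.5), "thermal": (0.3, 0.7)},
    {"gaze": 1.0, "face": 2.0, "thermal": 1.0},
)
```

Late fusion keeps each sensor pipeline independent, so one modality can drop out (e.g. the face is occluded) without retraining the others.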
Pathway to Future Symbiotic Creativity
This report presents a comprehensive view of our vision for the development
path of human-machine symbiotic art creation. We propose a classification
of creative systems with a hierarchy of five classes, showing the pathway of
creativity evolving from a human-mimicking artist (Turing Artist) to a Machine
artist in its own right. We begin with an overview of the limitations of
Turing Artists, then focus on the top two classes, Machine Artists,
emphasizing machine-human communication in art creation. In art creation,
machines need to understand humans' mental states, including desires,
appreciation, and emotions, while humans also need to understand machines' creative
capabilities and limitations. The rapid development of immersive environments,
and their further evolution into the new concept of the metaverse, enables symbiotic art
creation through unprecedented flexibility of bi-directional communication
between artists and art manifestation environments. By examining the latest
sensor and XR technologies, we illustrate a novel way of collecting art data
to form the basis of a new form of human-machine bidirectional
communication and understanding in art creation. Based on such communication
and understanding mechanisms, we propose a novel framework for building future
Machine artists, which comes with the philosophy that a human-compatible AI
system should be based on the "human-in-the-loop" principle rather than the
traditional "end-to-end" dogma. By proposing a new form of inverse
reinforcement learning model, we outline the platform design of machine
artists, demonstrate its functions and showcase some examples of technologies
we have developed. We also provide a systematic exposition of the ecosystem for
AI-based symbiotic art form and community with an economic model built on NFT
technology. Ethical issues in the development of machine artists are also
discussed.
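The report describes its inverse reinforcement learning model only at a high level. As a loose illustration of the general idea (not the report's model), the following sketch recovers linear reward weights under which demonstrated behaviour outscores alternatives, in the spirit of feature-matching apprenticeship learning; all numbers are made up.

```python
import numpy as np

# Feature expectations of the demonstrated ("expert") behaviour and of
# two alternative policies; all values are illustrative assumptions.
phi_expert = np.array([0.8, 0.1])
phi_alt = np.array([[0.5, 0.5],
                    [0.2, 0.9]])

# Projection-style update: push the reward weights w toward the expert's
# features and away from the currently best-scoring alternative.
w = np.array([0.5, 0.5])
for _ in range(200):
    best = phi_alt[np.argmax(phi_alt @ w)]
    w = np.clip(w + 0.1 * (phi_expert - best), 0.0, None)
    if w.sum() > 0.0:
        w /= w.sum()

# Under the learned reward, the demonstrated behaviour is preferred.
margin = phi_expert @ w - (phi_alt @ w).max()
```

The point of inverse RL in this setting is that the machine infers what the human artist values from demonstrations, rather than being handed an explicit reward function.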
Electrooculogram processing protocol for the evaluation and follow-up of patients with Spinocerebellar Ataxia Type 2
The analysis of eye movements is a useful tool for evaluating various neurological dysfunctions, among them Spinocerebellar Ataxia Type 2 (SCA2). This work covers the design of a protocol for processing the eye movement records made at the Centre for Research and Rehabilitation of Hereditary Ataxias (CIRAH, Spanish acronym) in the city of Holguín, Cuba. To accomplish this task, the processing was divided into four stages: filtering, differentiation, annotation, and feature calculation, selecting at each stage the main methods and tools commonly used to solve each of the resulting problems.
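The four stages above can be sketched on a synthetic EOG trace; the sampling rate, filter length, and velocity threshold below are illustrative assumptions, not values from the protocol.

```python
import numpy as np

# Hedged sketch of the four-stage pipeline (filtering, differentiation,
# annotation, feature calculation) on a synthetic EOG trace.
fs = 200.0                      # sampling rate (Hz), assumed
t = np.arange(0.0, 2.0, 1.0 / fs)
rng = np.random.default_rng(1)
# Synthetic horizontal EOG: a 10-degree saccade at t = 1 s plus noise.
eog = 10.0 * (t > 1.0) + rng.normal(0.0, 0.2, t.size)

# Stage 1 - filtering: moving-average smoothing.
kernel = np.ones(9) / 9.0
smoothed = np.convolve(eog, kernel, mode="same")

# Stage 2 - differentiation: eye velocity in deg/s.
velocity = np.gradient(smoothed, 1.0 / fs)
velocity[:9] = 0.0   # discard filter edge effects
velocity[-9:] = 0.0

# Stage 3 - annotation: mark saccadic samples with a velocity threshold.
saccadic = np.abs(velocity) > 100.0

# Stage 4 - features: peak velocity and saccade duration.
peak_vel = float(np.abs(velocity).max())
duration_ms = float(saccadic.sum()) / fs * 1000.0
```

In clinical use, features such as peak velocity and latency of slowed saccades are among the markers tracked in SCA2 patients, which is why the feature-calculation stage matters.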
Towards disappearing user interfaces for ubiquitous computing: human enhancement from sixth sense to super senses
The electronic enhancement of human senses becomes possible when pervasive computers interact unnoticeably with humans in Ubiquitous Computing. Designing computer user interfaces to "disappear" forces interaction with humans through a content-driven rather than a menu-driven approach, thus meeting the emerging requirement for a huge number of non-technical users to interface intuitively with billions of computers in the Internet of Things. Learning to use particular applications in Ubiquitous Computing is either too slow or sometimes impossible, so user interfaces must be designed naturally enough to facilitate intuitive human behaviours. Although humans from different racial, cultural, and ethnic backgrounds share the same physiological sensory system, their perception of the same stimuli outside the human body can differ. A novel taxonomy for Disappearing User Interfaces (DUIs) to stimulate human senses and to capture human responses is proposed, and applications of DUIs are reviewed. DUIs with sensor and data fusion to simulate the Sixth Sense are explored. The enhancement of human senses through DUIs and context awareness is discussed as the groundwork enabling smarter wearable devices for interfacing with human emotional memories.
Gaze-directed gameplay in first person computer games
The use of eye tracking systems in computer games is still at an early stage.
Commercial eye trackers and research efforts have focused on gaze-oriented gameplay as an alternative to traditional input devices. This dissertation investigates the advantages and disadvantages of using these systems in computer games. To this end, instead of using eye tracking as a simple direct control input, it is proposed to use it to control the attention of the player's avatar (e.g., if the player notices an obstacle in the way, the avatar will notice it too and avoid it) and the game's procedural content generation (e.g., spawning obstacles on the side of the screen opposite to where the player's attention is focused). To demonstrate the value of this proposal, the first-person shooter "Zombie Runner" was developed and is presented herein. Tests showed that the implementation meets the stipulated technical requirements and that, although it still needs improvements in terms of precision and robustness, eye tracking technology can be successfully used to make the player experience more immersive and challenging.
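The gaze-driven procedural-generation idea described above (spawn obstacles on the side of the screen opposite to the player's gaze) reduces to a one-line decision; the resolution constant below is an assumption, not a value from the dissertation.

```python
# Hedged sketch: place new obstacles on the half of the screen the
# player is NOT looking at. SCREEN_WIDTH is an assumed resolution.
SCREEN_WIDTH = 1280

def spawn_side(gaze_x: float) -> str:
    """Return 'left' or 'right': the half opposite the point of gaze."""
    return "right" if gaze_x < SCREEN_WIDTH / 2 else "left"

# A gaze point in the left half of the screen puts obstacles on the right.
side = spawn_side(213.0)
```

Spawning outside the focus of attention is what makes the mechanic challenging: obstacles enter through the player's visual periphery rather than at the fixation point.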
A Modular and Open-Source Framework for Virtual Reality Visualisation and Interaction in Bioimaging
Life science today involves computational analysis of a large amount and variety of data, such as volumetric data acquired by state-of-the-art microscopes, or mesh data derived from the analysis of such data or from simulations. The advent of new imaging technologies, such as lightsheet microscopy, has confronted users with an ever-growing amount of data, with even terabytes of imaging data created within a day. With the possibility of gentler and higher-performance imaging, the spatiotemporal complexity of the model systems or processes of interest is increasing as well. Visualisation is often the first step in making sense of this data, and a crucial part of building and debugging analysis pipelines. It is therefore important that visualisations can be quickly prototyped, as well as developed into or embedded in full applications. To better judge spatiotemporal relationships, immersive hardware, such as Virtual or Augmented Reality (VR/AR) headsets and associated controllers, is becoming an invaluable tool.
In this work we present scenery, a modular and extensible visualisation framework for the Java VM that can handle mesh and large volumetric data, containing multiple views, timepoints, and color channels. scenery is free and open-source software, works on all major platforms, and uses the Vulkan or OpenGL rendering APIs. We introduce scenery's main features, and discuss its use with VR/AR hardware and in distributed rendering.
In addition to the visualisation framework, we present a series of case studies where scenery provides tangible benefits in developmental and systems biology: With Bionic Tracking, we demonstrate a new technique for tracking cells in 4D volumetric datasets via eye gaze tracking in a virtual reality headset, with the potential to speed up manual tracking tasks by an order of magnitude. We further introduce ideas for moving towards virtual reality-based laser ablation, and perform a user study to gain insight into performance, acceptance, and issues when performing ablation tasks with virtual reality hardware in fast-developing specimens. To tame the amount of data originating from state-of-the-art volumetric microscopes, we present ideas on how to render the highly efficient Adaptive Particle Representation. Finally, we present sciview, an ImageJ2/Fiji plugin making the features of scenery available to a wider audience.
Abstract
Foreword and Acknowledgements
Overview and Contributions
Part I - Introduction
1 Fluorescence Microscopy
2 Introduction to Visual Processing
3 A Short Introduction to Cross Reality
4 Eye Tracking and Gaze-based Interaction
Part II - VR and AR for Systems Biology
5 scenery — VR/AR for Systems Biology
6 Rendering
7 Input Handling and Integration of External Hardware
8 Distributed Rendering
9 Miscellaneous Subsystems
10 Future Development Directions
Part III - Case Studies
11 Bionic Tracking: Using Eye Tracking for Cell Tracking
12 Towards Interactive Virtual Reality Laser Ablation
13 Rendering the Adaptive Particle Representation
14 sciview — Integrating scenery into ImageJ2 & Fiji
Part IV - Conclusion
15 Conclusions and Outlook
Backmatter & Appendices
A Questionnaire for VR Ablation User Study
B Full Correlations in VR Ablation Questionnaire
C Questionnaire for Bionic Tracking User Study
List of Tables
List of Figures
Bibliography
Selbstständigkeitserklärung
- …