A myographic-based HCI solution proposal for upper limb amputees
"Conference on ENTERprise Information Systems / International Conference on Project
MANagement / Conference on Health and Social Care Information Systems and Technologies,
CENTERIS / ProjMAN / HCist 2016, October 5-7, 2016 "Interaction plays a fundamental role as it sets bridges between humans and computers. However, people with disability are prevented to use computers by the ordinary means, due to physical or intellectual impairments. Thus, the human-computer interaction (HCI) research area has been developing solutions to improve the technological accessibility of impaired people, by enhancing computers and similar devices with the necessary means to attend to the different disabilities, thereby contributing to reduce digital exclusion.
Within this scope, this paper presents an interaction solution for upper limb amputees, based on a myographic gesture-control device named Myo. This emergent wearable technology consists of a muscle-sensitive bracelet that transmits myographic and inertial data, which can be converted into actions for interaction purposes (e.g. clicking or moving a mouse cursor).
Although designed as a gesture-control armband, Myo can also be worn on the leg, as preliminary tests with users ascertained. Both data types (myographic and inertial) are still transmitted in this configuration and remain available for conversion into gestures.
A general architecture, a use case diagram and the specification of the two main functional modules are presented. These will guide the future implementation of the proposed Myo-based HCI solution, which is intended to be a solid contribution to the interaction between upper limb amputees and computers.
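The abstract describes converting recognised gestures into interaction commands such as clicking or moving a mouse cursor. A minimal sketch of such a dispatch layer is shown below; the gesture labels and the `PointerActions` class are hypothetical illustrations, since the paper only specifies, and does not implement, its functional modules.

```python
class PointerActions:
    """Collects the pointer commands issued for each recognised gesture."""
    def __init__(self):
        self.log = []

    def click(self):
        self.log.append("click")

    def move(self, dx, dy):
        self.log.append(("move", dx, dy))

# Hypothetical mapping from gesture labels to the pointer action each triggers.
GESTURE_MAP = {
    "fist": lambda a: a.click(),
    "wave_right": lambda a: a.move(10, 0),
    "wave_left": lambda a: a.move(-10, 0),
}

def dispatch(gesture, actions):
    """Translate a recognised gesture into an interaction command; ignore unknowns."""
    handler = GESTURE_MAP.get(gesture)
    if handler is not None:
        handler(actions)

actions = PointerActions()
for g in ["fist", "wave_right", "unknown"]:
    dispatch(g, actions)
# actions.log now holds one click and one move; "unknown" was ignored.
```

In a real system the same dispatch table could be fed by a gesture classifier running on the Myo's myographic stream.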
Methods and metrics for the improvement of the interaction and the rehabilitation of cerebral palsy through inertial technology
Cerebral palsy (CP) is one of the most limiting disabilities in childhood, with 2.2 cases per 1000 1-year survivors. It is a disorder of movement and posture due to a defect or lesion of the immature brain during pregnancy or birth. These motor limitations frequently appear in combination with sensory and cognitive alterations, which generally result in great difficulties for some people with CP to manipulate objects, communicate and interact with their environment, as well as limiting their mobility.
Over the last decades, instruments such as personal computers have become a popular tool to overcome some of these motor limitations and to promote neural plasticity, especially during childhood. According to some estimations, 65% of youths with CP who present severely limited manipulation skills cannot use standard mice or keyboards. Unfortunately, even when people with CP use assistive technology for computer access, they face barriers that lead them back to typical mice, trackballs or touch screens for practical reasons. Nevertheless, with proper customization, novel alternative input devices such as head mice or eye trackers can be a valuable solution for these individuals.
This thesis presents a collection of novel mapping functions and facilitation algorithms proposed and designed to ease, for individuals with CP, the act of pointing at graphical elements on the screen (the most elemental task in human-computer interaction). These developments can be used with any head mouse, although they were all tested with the ENLAZA, an inertial interface. The development of these techniques required the following approach:
- Developing a methodology to evaluate the performance of individuals with CP in pointing tasks, which are usually described as two sequential subtasks: navigation and targeting.
- Identifying the main motor abnormalities present in individuals with CP, and assessing the compliance of these people with standard motor behaviour models such as Fitts' law.
- Designing and validating three novel pointing facilitation techniques to be implemented in a head mouse. They were conceived for users with CP and muscle weakness who have great difficulty keeping their heads in a stable position. The first two algorithms consist of two novel mapping functions that aim to facilitate the navigation phase, whereas the third technique is based on gravity wells and was specially developed to facilitate the selection of elements on the screen.
- In parallel with the development of the facilitation techniques for the interaction process, evaluating the feasibility of using inertial technology for the control of serious videogames as a complement to traditional rehabilitation therapies for posture and balance. The experimental validation presented here confirms that this concept could be implemented in clinical practice with good results.
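Fitts' law, against which the thesis assesses user compliance, models pointing difficulty as a function of target distance and width: MT = a + b · log2(D/W + 1). A small sketch of the standard (Shannon) formulation follows; the coefficients a and b below are illustrative placeholders, not values fitted in this work.

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(distance / width + 1)

def predicted_movement_time(distance, width, a=0.2, b=0.15):
    """Predicted movement time MT = a + b * ID, with illustrative a, b (seconds)."""
    return a + b * index_of_difficulty(distance, width)

# A far, small target is harder (higher ID) than a near, large one.
hard = index_of_difficulty(512, 16)   # log2(33) ≈ 5.04 bits
easy = index_of_difficulty(128, 64)   # log2(3)  ≈ 1.58 bits
```

In practice a and b are obtained by regressing measured movement times against ID, which is also how deviations from the model (as in some users with CP) become visible.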
In summary, the work presented here demonstrates the suitability of inertial technology for developing an alternative pointing device (and pointing algorithms) based on head movements for individuals with CP and severely limited manipulation skills, as well as new rehabilitation therapies for the improvement of posture and balance. All the contributions were validated in collaboration with several centres specialized in CP
and similar disorders, with users with disabilities recruited in those centres.
This thesis was completed in the Group of Neural and Cognitive Engineering (gNEC) of the CAR UPM-CSIC with the financial support of the FP7 Framework EU Research Project ABC (EU-2012-287774), the IVANPACE Project (funded by Obra Social de Caja Cantabria, 2012-2013), and the Spanish Ministry of Economy and Competitiveness in the framework of two projects: the Interplay Project (RTC-2014-1812-1) and, most recently, the InterAAC Project (RTC-2015-4327-1).
Programa Oficial de Doctorado en Ingeniería Eléctrica, Electrónica y Automática. Committee president: Juan Manuel Belda Lois; secretary: María Dolores Blanco Rojas; member: Luis Fernando Sánchez Sante
Towards Developing a Virtual Guitar Instructor through Biometrics Informed Human-Computer Interaction
Within the last few years, wearable sensor technologies have given us access to novel biometrics that allow musical gesture to be connected to computing systems. Doing so lets us study how we perform musically and understand the process at the data level. However, biometric information is complex and cannot be directly mapped to digital systems. In this work, we study how guitar performance techniques can be captured and analysed towards developing an AI that can provide real-time feedback to guitar students. We do this by performing musical exercises on the guitar whilst acquiring and processing biometric (plus audiovisual) information during the performance. Our results show that there are notable differences in the biometrics when a guitar scale is played in two different ways (legato and staccato), an outcome that motivates our intention to build an AI guitar tutor.
Body swarm interface (BOSI) : controlling robotic swarms using human bio-signals
Traditionally, robots are controlled using devices like joysticks, keyboards, mice and other similar human-computer interface (HCI) devices. Although this approach is effective and practical in some cases, it is restricted to healthy individuals without disabilities, and it also requires the user to master the device before using it. It becomes complicated and non-intuitive when multiple robots need to be controlled simultaneously with these traditional devices, as in the case of Human-Swarm Interfaces (HSI).
This work presents a novel concept of using human bio-signals to control swarms of robots. This concept has two major advantages: firstly, it gives amputees and people with certain disabilities the ability to control robotic swarms, which has previously not been possible; secondly, it gives the user a more intuitive interface for controlling swarms of robots through gestures, thoughts, and eye movement.
We measure different bio-signals from the human body, including Electroencephalography (EEG), Electromyography (EMG) and Electrooculography (EOG), using off-the-shelf products. After minimal signal processing, we decode the intended control action using machine learning techniques such as Hidden Markov Models (HMM) and K-Nearest Neighbors (K-NN). We employ formation controllers based on distance and displacement to control the shape and motion of the robotic swarm. Thought and gesture classifications are compared against ground truth, and the resulting pipelines are evaluated in both simulations and hardware experiments with swarms of ground robots and aerial vehicles.
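The K-NN step of the decoding pipeline can be sketched as below: a feature vector extracted from the bio-signals (e.g. mean amplitude per EMG channel) is labelled by majority vote among its k nearest training examples. The feature values here are toy numbers, not recorded data, and this is a generic K-NN, not the paper's exact configuration.

```python
from collections import Counter
import math

def euclidean(u, v):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def knn_classify(train, query, k=3):
    """train: list of (feature_vector, label) pairs; returns the majority label
    among the k training examples nearest to the query."""
    neighbours = sorted(train, key=lambda ex: euclidean(ex[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Toy two-channel EMG features for two gestures.
train = [
    ([0.9, 0.1], "fist"), ([0.8, 0.2], "fist"), ([0.85, 0.15], "fist"),
    ([0.1, 0.9], "open"), ([0.2, 0.8], "open"), ([0.15, 0.85], "open"),
]
label = knn_classify(train, [0.82, 0.18])  # → "fist"
```

The decoded label would then be mapped to a swarm-level command (e.g. a target formation) by the formation controller.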
From Unimodal to Multimodal: improving the sEMG-Based Pattern Recognition via deep generative models
Multimodal hand gesture recognition (HGR) systems can achieve higher recognition accuracy than unimodal ones. However, acquiring multimodal gesture recognition data typically requires users to wear additional sensors, thereby increasing hardware costs. This paper proposes a novel generative approach to improve Surface Electromyography (sEMG)-based HGR accuracy via virtual Inertial Measurement Unit (IMU) signals. Specifically, we first trained a deep generative model, based on the intrinsic correlation between forearm sEMG signals and forearm IMU signals, to generate virtual forearm IMU signals from the input forearm sEMG signals. Subsequently, the sEMG signals and virtual IMU signals were fed into a multimodal Convolutional Neural Network (CNN) model for gesture recognition. To evaluate the performance of the proposed approach, we
conducted experiments on 6 databases, including 5 publicly available databases
and our collected database comprising 28 subjects performing 38 gestures,
containing both sEMG and IMU data. The results show that our proposed approach outperforms the sEMG-based unimodal HGR method, with accuracy increases of 2.15%-13.10%. This demonstrates that incorporating virtual IMU signals generated by deep generative models can significantly enhance the accuracy of sEMG-based HGR. The proposed approach represents a successful attempt to transition from unimodal to multimodal HGR without additional sensor hardware.
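The two-stage idea (learn a mapping from paired sEMG and IMU data, then generate virtual IMU features when only sEMG is available) can be sketched with a linear least-squares map standing in for the paper's deep generative model. All data below is synthetic and the linear map is an assumption for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_semg, d_imu = 200, 8, 6

# Synthetic paired training data: sEMG features and correlated IMU features.
semg = rng.normal(size=(n, d_semg))
true_map = rng.normal(size=(d_semg, d_imu))
imu = semg @ true_map + 0.01 * rng.normal(size=(n, d_imu))

# "Train" the generator: least-squares fit of the sEMG -> IMU mapping
# (a stand-in for the deep generative model in the paper).
W, *_ = np.linalg.lstsq(semg, imu, rcond=None)

# At inference time only sEMG is available: generate virtual IMU features
# and concatenate both modalities for a downstream multimodal classifier.
virtual_imu = semg @ W
multimodal = np.concatenate([semg, virtual_imu], axis=1)
```

A deep generative model replaces the linear map precisely because the real sEMG-to-IMU relationship is nonlinear and subject-dependent; the data flow, however, is the same.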
ImAiR: Airwriting Recognition framework using Image Representation of IMU Signals
The problem of Airwriting Recognition focuses on identifying letters written by moving a finger in free space. It is a type of gesture recognition in which the dictionary corresponds to the letters of a specific language. In particular, airwriting recognition using sensor data from wrist-worn devices can serve as a medium of user input for applications in Human-Computer Interaction (HCI). Recognition of in-air trajectories using such wrist-worn devices has received limited attention in the literature and forms the basis of the current work. In
this paper, we propose an airwriting recognition framework by first encoding
the time-series data obtained from a wearable Inertial Measurement Unit (IMU)
on the wrist as images and then utilizing deep learning-based models to identify the written letters. The signals recorded from the 3-axis accelerometer and gyroscope in the IMU are encoded as images using techniques such as the Self Similarity Matrix (SSM), Gramian Angular Field (GAF) and Markov Transition Field (MTF) to form two sets of 3-channel images. These
are then fed to two separate classification models and letter prediction is
made based on an average of the class conditional probabilities obtained from
the two models. Several standard model architectures for image classification
such as variants of ResNet, DenseNet, VGGNet, AlexNet and GoogleNet have been
utilized. Experiments performed on two publicly available datasets demonstrate
the efficacy of the proposed strategy. The code for our implementation will be
made available at https://github.com/ayushayt/ImAiR
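One of the encodings mentioned, the Self Similarity Matrix (SSM), turns a 1-D sensor stream of length T into a T x T image whose entry (i, j) is the distance between samples i and j; stacking the SSMs of three accelerometer axes yields one 3-channel image. A minimal sketch with a synthetic signal (not the framework's exact preprocessing, which may normalise or resample first):

```python
import numpy as np

def self_similarity_matrix(signal):
    """Pairwise absolute-difference matrix of a 1-D signal: entry (i, j)
    is |x[i] - x[j]|, computed via broadcasting."""
    x = np.asarray(signal, dtype=float)
    return np.abs(x[:, None] - x[None, :])

# One synthetic accelerometer axis: 64 samples of a sine wave.
t = np.linspace(0, 2 * np.pi, 64)
ssm = self_similarity_matrix(np.sin(t))

# The SSM is symmetric with a zero diagonal; periodic structure in the
# signal shows up as a checkerboard pattern in the image.
```

Any distance (absolute difference here, Euclidean over windows elsewhere) produces a valid SSM; the classifier then treats the matrix like an ordinary image channel.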
Emerging ExG-based NUI Inputs in Extended Realities : A Bottom-up Survey
Incremental and quantitative improvements of two-way interactions with extended realities (XR) are contributing toward a qualitative leap into a state of XR ecosystems being efficient, user-friendly, and widely adopted. However, there are multiple barriers on the way toward the omnipresence of XR, among them the computational and power limitations of portable hardware, the social acceptance of novel interaction protocols, and the usability and efficiency of interfaces. In this article, we overview and analyse novel natural user interfaces based on sensing electrical bio-signals that can be leveraged to tackle the challenges of XR input interactions. Electroencephalography-based brain-machine interfaces that enable thought-only hands-free interaction, myoelectric input methods that track body gestures employing electromyography, and gaze-tracking electrooculography input interfaces are examples of electrical bio-signal sensing technologies united under the collective concept of ExG. ExG signal acquisition modalities provide a way to interact with computing systems using natural, intuitive actions, enriching interactions with XR. This survey provides a bottom-up overview starting from (i) underlying biological aspects and signal acquisition techniques, (ii) ExG hardware solutions, (iii) ExG-enabled applications, (iv) discussion of the social acceptance of such applications and technologies, and (v) research challenges, application directions, and open problems, evidencing the benefits that ExG-based Natural User Interface inputs can introduce to the area of XR.