
    From head to toe: body movement for human-computer interaction

    Our bodies are the medium through which we experience the world around us, so human-computer interaction can benefit greatly from the richness of body movements and postures as an input modality. In recent years, the widespread availability of inertial measurement units and depth sensors has led to a plethora of applications for the body in human-computer interaction, but the main focus of these works has been on using the upper body for explicit input. This thesis investigates the research space of full-body human-computer interaction through three propositions.
    The first proposition is that more can be inferred from users' natural movements and postures, such as the quality of activities and psychological states. We develop this proposition in two domains. First, we explore how to support users in performing weight-lifting activities: we propose a system that classifies different ways of performing the same activity, an object-oriented model-based framework for formally specifying activities, and a system that automatically extracts an activity model by demonstration. Second, we explore how to automatically capture nonverbal cues for affective computing, developing a system that annotates motion and gaze data according to the Body Action and Posture coding system. We show that quality analysis can add another layer of information to activity recognition, and that systems that support the communication of quality information should strive to support how we implicitly communicate movement through nonverbal communication. Further, we argue that, by working at a higher level of abstraction, affect recognition systems can more directly translate findings from other areas into their algorithms, and can also contribute new knowledge to these fields.
    The second proposition is that the lower limbs can provide an effective means of interacting with computers beyond assistive technology. To address the problem of the dispersed literature on the topic, we conducted a comprehensive survey on the lower body in HCI, under the lenses of users, systems, and interactions. To address the lack of a fundamental understanding of foot-based interactions, we conducted a series of studies that quantitatively characterise several aspects of foot-based interaction, including Fitts's Law performance models, the effects of movement direction, foot dominance, and visual feedback, and the overhead incurred by using the feet together with the hand. To enable these studies, we developed a foot tracker based on a Kinect mounted under the desk. We show that the lower body can serve as a valuable complementary modality for computer input.
    Our third proposition is that by treating body movements as multiple modalities, rather than a single one, we can enable novel user experiences. We develop this proposition in the domain of 3D user interfaces, as it requires input with multiple degrees of freedom and offers a rich set of complex tasks. We propose an approach for tracking the whole body up close by splitting the sensing of different body parts across multiple sensors; our setup allows tracking gaze, head, mid-air gestures, multi-touch gestures, and foot movements. We investigate specific applications of multimodal combinations in the domain of 3DUI: how gaze and mid-air gestures can be combined to improve selection and manipulation tasks, how the feet can support the canonical 3DUI tasks, and how a multimodal sensing platform can inspire new 3D game mechanics. We show that the combination of multiple modalities can lead to enhanced task performance; that offloading certain tasks to alternative modalities not only frees the hands but also allows simultaneous control of multiple degrees of freedom; and that by sensing different modalities separately, we achieve more detailed and precise full-body tracking.
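
    As a concrete illustration of the Fitts's Law performance models mentioned above, the following minimal sketch (Python; the trial data and helper names are hypothetical, not the thesis's own code) fits the Shannon formulation MT = a + b * log2(D/W + 1) to movement-time measurements such as those collected in a foot-pointing study.

    import numpy as np

    # Shannon formulation of the index of difficulty (bits).
    def index_of_difficulty(distance, width):
        return np.log2(distance / width + 1.0)

    # Hypothetical trial data: target distances and widths (same units)
    # and measured movement times in seconds.
    D = np.array([100, 200, 400, 100, 200, 400], dtype=float)
    W = np.array([20, 20, 20, 40, 40, 40], dtype=float)
    MT = np.array([0.62, 0.78, 0.95, 0.48, 0.63, 0.81])

    ID = index_of_difficulty(D, W)
    # Least-squares fit of MT = a + b * ID (polyfit returns slope first).
    b, a = np.polyfit(ID, MT, 1)
    # One common throughput estimate (bits/s); other definitions exist.
    throughput = np.mean(ID / MT)
    print(f"a = {a:.3f} s, b = {b:.3f} s/bit, throughput ~ {throughput:.2f} bit/s")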

    Human Health Engineering Volume II

    In this Special Issue on “Human Health Engineering Volume II”, we invited submissions exploring recent contributions to the field of human health engineering, i.e., technology for monitoring the physical or mental health status of individuals in a variety of applications. Contributions could focus on sensors, wearable hardware, algorithms, or integrated monitoring systems. We organized the different papers according to their contributions to the main parts of the monitoring and control engineering scheme applied to human health applications, namely papers focusing on measuring/sensing physiological variables, papers highlighting health-monitoring applications, and examples of control and process management applications for human health. In comparison to biomedical engineering, we envision that the field of human health engineering will also cover applications for healthy humans (e.g., sports, sleep, and stress), thus contributing not only to the development of technology for curing patients or supporting chronically ill people, but also to more general disease prevention and the optimization of human well-being.

    Addressing Situational and Physical Impairments and Disabilities with a Gaze-Assisted, Multi-Modal, Accessible Interaction Paradigm

    Every day we encounter a variety of scenarios that lead to situationally induced impairments and disabilities, i.e., our hands are engaged in a task and hence unavailable for interacting with a computing device. For example, a surgeon performing an operation, a factory worker with greasy hands or thick gloves, and a person driving a car all represent scenarios of situational impairments and disabilities. In such cases, performing point-and-click interactions, text entry, or authentication on a computer using conventional input methods like the mouse, keyboard, and touch is either inefficient or not possible. Unfortunately, individuals with physical impairments and disabilities, by birth or due to an injury, are forced to deal with these limitations every single day; generally, they experience difficulty with, or are completely unable to perform, basic operations on a computer. Therefore, to address situational and physical impairments and disabilities, it is crucial to develop hands-free, accessible interactions.
    In this research, we address the limitations, inabilities, and challenges arising from situational and physical impairments and disabilities by developing a gaze-assisted, multi-modal, hands-free, accessible interaction paradigm. Specifically, we focus on three primary interactions: 1) point-and-click, 2) text entry, and 3) authentication. We present multiple ways in which gaze input can be modeled and combined with other input modalities to enable efficient and accessible interactions. In this regard, we have developed a gaze- and foot-based interaction framework to achieve accurate “point-and-click” interactions and to perform dwell-free text entry on computers, and a gaze gesture-based framework for user authentication and for interacting with a wide range of computer applications using a common repository of gaze gestures. The interaction methods and devices we have developed are a) evaluated using standard HCI procedures like Fitts' Law, text entry metrics, authentication accuracy, and video analysis attacks, b) compared against the speed, accuracy, and usability of other gaze-assisted interaction methods, and c) qualitatively analyzed through user interviews. From the evaluations, we found that our solutions achieve higher efficiency than existing systems and also address their usability issues.
    Considering each solution in turn: first, the gaze- and foot-based system supports point-and-click interactions while addressing the “Midas Touch” issue. The system performs at least as well (in time and precision) as the mouse, while enabling hands-free interaction. We have also investigated the feasibility, advantages, and challenges of gaze- and foot-based point-and-click interactions on standard (up to 24") and large (up to 84") displays through Fitts' Law evaluations, and compared the performance of gaze input to standard inputs like the mouse and touch. Second, to support text entry, we developed a gaze- and foot-based dwell-free typing system and investigated foot-based activation methods like foot presses and foot gestures. We have demonstrated that our dwell-free typing methods are efficient and highly preferred over conventional dwell-based gaze typing methods: with our gaze typing system, users type up to 14.98 Words Per Minute (WPM), as opposed to 11.65 WPM with dwell-based typing. Importantly, our system addresses the critical usability issues associated with gaze typing in general. Third, we addressed the lack of an accessible and shoulder-surfing-resistant authentication method by developing a gaze gesture recognition framework and presenting two authentication strategies that use gaze gestures. Our authentication methods use static and dynamic transitions of objects on the screen, and they authenticate users with an accuracy of 99% (static) and 97.5% (dynamic). Furthermore, unlike other systems, our dynamic authentication method is not susceptible to single-video iterative attacks and has a lower success rate under dual-video iterative attacks. Lastly, we demonstrated how our gaze gesture recognition framework can be extended to let users design gaze gestures of their choice and associate them with commands like minimize, maximize, and scroll on the computer. We presented a template matching algorithm that achieved an accuracy of 93%, and a geometric feature-based decision tree algorithm that achieved an accuracy of 90.2%, in recognizing gaze gestures. In summary, our research demonstrates how situational and physical impairments and disabilities can be addressed with a gaze-assisted, multi-modal, accessible interaction paradigm.
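
    To make the template-matching step concrete, here is a minimal sketch (Python; not the authors' implementation, and all names are illustrative) of the common resample-normalise-compare approach to recognising a gaze gesture against a stored template set.

    import numpy as np

    def resample(points, n=64):
        """Resample a 2D gaze trace (k x 2 array) to n equidistant points."""
        points = np.asarray(points, dtype=float)
        seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
        cum = np.concatenate([[0.0], np.cumsum(seg)])
        targets = np.linspace(0.0, cum[-1], n)
        x = np.interp(targets, cum, points[:, 0])
        y = np.interp(targets, cum, points[:, 1])
        return np.column_stack([x, y])

    def normalise(points):
        """Translate to the centroid and scale to a unit bounding box."""
        p = points - points.mean(axis=0)
        extent = np.ptp(p, axis=0).max()
        return p / extent if extent > 0 else p

    def classify(trace, templates):
        """Return the label of the template with the smallest mean point distance."""
        probe = normalise(resample(trace))
        scores = {label: np.linalg.norm(normalise(resample(t)) - probe, axis=1).mean()
                  for label, t in templates.items()}
        return min(scores, key=scores.get)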

    Mining the brain to predict gait characteristics: a BCI study

    Integrated master's thesis in Biomedical Engineering and Biophysics, presented to the Universidade de Lisboa through the Faculdade de Ciências, 2018. Locomotion is one of the most common and relevant activities of daily life, involving the activation of the nervous and musculoskeletal systems. Gait disorders are common, particularly among the elderly, and are frequently associated with a reduced quality of life. Their prevalence increases with age: approximately 10% of people aged 60 to 69 are estimated to suffer from some kind of gait disorder, rising to more than 60% in people over 80. Gait patterns are influenced by disease, physical condition, personality, and mood; an abnormal pattern occurs when a person is unable to walk in the usual way, mostly due to injury, disease, or other underlying conditions. The causes of gait disorders include neurological and musculoskeletal conditions: a large number of neurological conditions can cause an abnormal gait pattern, such as stroke, cerebral palsy, or Parkinson's disease, whereas musculoskeletal causes are mainly due to bone or muscle disease.
    Gait assessment, or gait analysis, comprises the measurement, description, and evaluation of the variables that characterise human locomotion. It supports the diagnosis of several conditions, the assessment of rehabilitation progress, and the development of intervention strategies. Conventionally, gait has been studied subjectively with observational protocols, but more objective and feasible methods have recently been developed. Gait analysis methods can be classified as laboratory-based or portable. While laboratory-based analysis uses specialised equipment, portable systems allow gait to be studied in natural environments and during activities of daily living. Laboratory gait analysis relies mainly on image and video information, although floor sensors and force plates are also common; portable systems consist of one or several body-worn sensors.
    Gait adaptation is one of the most relevant concepts in gait analysis, and its neuronal origin and dynamics have been extensively studied in recent years. It reflects a subject's ability to change speed and direction, maintain balance, or avoid obstacles. In neurological rehabilitation, gait adaptation perturbs neuronal dynamics, allowing patients to restore certain motor functions. Lower-limb robotic devices and exoskeletons are increasingly used not only to facilitate motor rehabilitation but also to support daily-life functions; however, their efficiency and safety depend on how well they detect the human intention to move and to adapt the gait. It has recently been shown that auditory rhythm has a strong effect on the motor system, so adaptation has been studied using auditory rhythms, with patients following pacing tones to improve gait coordination.
    Motor imagery (MI), an emerging practice in brain-computer interfaces (BCI), is defined as mentally simulating a given action without actually executing the movement. MI classification performance is important for developing robust BCI environments for patient neurorehabilitation and robotic prosthesis control, since previous studies concluded that an MI session partially activates the same brain regions as performing the actual task. MI tasks initially focused only on upper-limb movements, but have recently begun to address lower-limb movements as well, in order to study human locomotion. Detecting motor intention in MI tasks faces several challenges, even for two classes (e.g., left/right); one of the main challenges is the number, location, and type of EEG electrodes used. A growing number of studies has investigated brain activity during human locomotion. These studies, based mostly on EEG, found several relationships between brain regions and specific actions or movements: for example, brain activity increases during walking or the preparation to walk, and power in the μ and β bands decreases during voluntary movement execution. Regarding gait adaptation, electrocortical activity has been shown to vary with the motor task being executed. BCIs have recently enabled new rehabilitation therapies that restore motor function in people with gait impairments by engaging the CNS to activate external devices.
    In the first part of this thesis, several MI tasks were performed, together with actual limb movements, to compare the classification performance of a wireless 16-channel dry-electrode EEG system with a wireless 32-channel gel-based system. Feature extraction and classification were evaluated with more than one method (LDA and CSP); in the end, the combination of a beta band-pass filter with an RCSP filter gave the best classification rate. Although all channels were used during EEG acquisition, two specific configurations were chosen during processing, with electrodes selected according to their position relative to the motor cortex. This suggests that a careful selection of electrode locations is more important than a dense electrode map, which makes EEG systems more comfortable and easier to use. The results also show the feasibility of home use of dry-electrode systems with a small number of sensors, and the possibility of discriminating between left and right MI tasks, for both arms and legs, with relatively high accuracy.
    The second part of this thesis presents a gait adaptation scheme in natural settings. To assess gait adaptation, subjects followed a rhythmic tone alternating between three modes (slow, normal, and fast). Gait characteristics were extracted from a single RGB camera while EEG signals were monitored simultaneously. These characteristics, together with reaction-time information, were then used to separate gait adaptation steps from non-adaptation steps. To remove EEG artefacts, mostly due to subject movement, the signal was band-pass filtered and subjected to independent component analysis (ICA). EEG gait adaptation features were then investigated in two classification problems: i) classifying steps as right or left, and ii) adaptation versus non-adaptation steps. Features were extracted using common spatial patterns (CSP) and regularised common spatial patterns (RCSP). The results show that adaptation versus non-adaptation can be discriminated with more than 90% accuracy when subjects try to adapt their walking speed to a higher or lower pace. This procedure allows participants to be monitored in more realistic environments, without specialised equipment such as a treadmill or foot pressure sensors.
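
    As an illustration of the band-pass + CSP + LDA pipeline described above, here is a minimal sketch using MNE-Python and scikit-learn (the exact filters, channel selections, and regularisation used in the thesis may differ, and the epoch array here is a random placeholder).

    import numpy as np
    import mne
    from mne.decoding import CSP
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline

    # Hypothetical epoched EEG: (n_trials, n_channels, n_samples) at 250 Hz,
    # labels 0 = left, 1 = right motor imagery.
    sfreq = 250.0
    X = np.random.randn(80, 16, 500)   # placeholder for real epochs
    y = np.repeat([0, 1], 40)

    # Beta-band filter (13-30 Hz), as in the thesis; a regularised CSP
    # ("RCSP") can be approximated via the `reg` parameter.
    X_filt = mne.filter.filter_data(X, sfreq, l_freq=13.0, h_freq=30.0,
                                    verbose=False)

    clf = make_pipeline(CSP(n_components=4, reg="ledoit_wolf", log=True),
                        LinearDiscriminantAnalysis())
    scores = cross_val_score(clf, X_filt, y, cv=5)
    print(f"CV accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")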

    Physical sketching tools and techniques for customized sensate surfaces

    Sensate surfaces are a promising avenue for enhancing human interaction with digital systems due to their inherent intuitiveness and natural user interface. Recent technological advancements have enabled sensate surfaces to surpass the constraints of conventional touchscreens by integrating them into everyday objects, creating interactive interfaces that can detect various inputs such as touch, pressure, and gestures. This allows for more natural and intuitive control of digital systems. However, prototyping interactive surfaces that are customized to users' requirements using conventional techniques remains technically challenging due to limitations in accommodating complex geometric shapes and varying sizes. Furthermore, it is crucial to consider the context in which customized surfaces are utilized, as relocating them to fabrication labs may lead to the loss of their original design context. Additionally, prototyping high-resolution sensate surfaces presents challenges due to the complex signal processing requirements involved. This thesis investigates the design and fabrication of customized sensate surfaces that meet the diverse requirements of different users and contexts. The research aims to develop novel tools and techniques that overcome the technical limitations of current methods and enable the creation of sensate surfaces that enhance human interaction with digital systems.

    Augmented reality device for first response scenarios

    A prototype of a wearable computer system is proposed and implemented using commercial off-the-shelf components. The system is designed to allow the user to access location-specific information about an environment and to provide capability for user tracking. Areas of applicability primarily include first response scenarios, with possible applications in maintenance or construction of buildings and other structures. Necessary preparation of the target environment prior to the system's deployment is limited to noninvasive labeling using optical fiducial markers. The system relies on computational vision methods for registration of labels and user position. With the system, the user has access to on-demand information relevant to a particular real-world location. Team collaboration is assisted by user tracking and real-time visualizations of team member positions within the environment. The user interface and display methods are inspired by Augmented Reality (AR) techniques, incorporating a video see-through Head Mounted Display (HMD) and a finger-bending sensor glove.
    Note: Augmented reality (AR) is a field of computer research which deals with the combination of real-world and computer-generated data. At present, most AR research is concerned with the use of live video imagery which is digitally processed and augmented by the addition of computer-generated graphics. Advanced research includes the use of motion-tracking data, fiducial marker recognition using machine vision, and the construction of controlled environments containing any number of sensors and actuators. (Source: Wikipedia)
    This dissertation is a compound document (contains both a paper copy and a CD as part of the dissertation). The CD requires the following: Adobe Acrobat; Microsoft Office; Windows Media Player or RealPlayer.
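
    For flavour, here is a minimal sketch of fiducial-marker-based user localisation (Python with OpenCV's ArUco module, assuming OpenCV >= 4.7; this is not the dissertation's code, and the marker map is hypothetical): detect markers in a camera frame and estimate a rough user position from the known map positions of the visible markers.

    import cv2
    import numpy as np

    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

    # Hypothetical map from marker id to its known 2D position in the building.
    MARKER_POSITIONS = {0: (1.0, 2.0), 1: (4.0, 2.0), 2: (4.0, 6.0)}

    def locate_user(frame):
        """Return a rough user position: mean of visible markers' map positions."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        corners, ids, _ = detector.detectMarkers(gray)
        if ids is None:
            return None
        seen = [MARKER_POSITIONS[i] for i in ids.flatten() if i in MARKER_POSITIONS]
        return tuple(np.mean(seen, axis=0)) if seen else None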

    Comparison of knee loading during walking via musculoskeletal modelling using marker-based and IMU-based approaches

    The current thesis is the result of the candidate's work over a six-month period with the assistance of the supervisor and co-supervisors, thanks to the collaboration between the Human Movement Bioengineering Laboratory Research group at the University of Padova (Italy) and the Human Movement Biomechanics Research group at KU Leuven (Belgium).
    Gait analysis, at a clinical level, is a diagnostic test with multiple potentials, in particular for identifying functional limitations related to a pathological gait. Three-dimensional motion capture is now a consolidated approach in human movement research: it yields a set of very precise measurements which are processed by biomechanical models to obtain curves describing the kinematics and inverse dynamics, i.e., the joint angles and the corresponding forces and moments. These results are considered fully reliable, and on their basis it is decided how to intervene on the specific subject to make the gait as close to physiological as possible. However, the use of wearable inertial measurement units (IMUs), consisting of accelerometers, gyroscopes, and magnetic sensors, for gait analysis has increased in the last decade due to low production costs, portability, and small size, which have enabled studies in everyday-life conditions. Inertial capture (InCap) systems have become an appealing alternative to 3D motion capture (MoCap) systems due to the ability of IMUs to estimate the orientation of sensors and body segments. Musculoskeletal modelling and simulation provide the ideal framework to examine in silico quantities that cannot be measured in vivo, such as musculoskeletal loading, muscle forces, and joint contact forces. The specific software used in this study is OpenSim, an open-source software package for modelling, analysis, and simulation of the musculoskeletal system.
    The aim of this thesis is to compare a marker-based musculoskeletal modelling approach with an IMU-based one in terms of kinematics, dynamics, and muscle activations. In particular, the project focuses on knee loading, using an existing musculoskeletal model of the lower limb. The project was organized as follows. First, the results for the MoCap approach were obtained, following a specific workflow that used the COMAK IK tool and the COMAK algorithm to obtain the secondary knee kinematics, muscle activations, and knee contact forces; COMAK is a modified static optimization algorithm that solves for muscle activations and secondary kinematics to reproduce the measured primary degree-of-freedom accelerations while minimizing muscle activation. These results were then compared with those obtained by the inertial-based approach, attempting to use as little marker information as possible while estimating kinematics from IMU data with the OpenSim toolbox OpenSense. Afterward, to promote an approach more independent of the constraints of a laboratory, the Zero Moment Point (ZMP) method was used to estimate the center of pressure position of the measured ground reaction forces (GRFs), and a specific Matlab code was implemented to improve this estimation. Using the measured GRFs with the new CoPs, the Inverse Dynamics results, muscle activations, and finally knee loading were calculated and compared to the MoCap results. The final step was a statistical analysis comparing the two approaches, emphasizing the potential of IMUs for gait analysis, particularly for studying knee mechanics.
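
    To illustrate the ZMP idea used for center-of-pressure estimation, here is a minimal sketch (Python rather than the thesis's Matlab; a simplified form that assumes the GRF wrench is expressed at the origin of the ground plane, with hypothetical signal names).

    import numpy as np

    def zmp_cop(force, moment, eps=1e-6):
        """Center of pressure on the ground plane (z = 0).

        force, moment: (n, 3) arrays of GRF [Fx, Fy, Fz] and ground-frame
        moments [Mx, My, Mz] per sample. Returns an (n, 2) CoP [x, y];
        samples with negligible vertical load are returned as NaN.
        """
        Fz = force[:, 2]
        valid = np.abs(Fz) > eps
        cop = np.full((len(Fz), 2), np.nan)
        cop[valid, 0] = -moment[valid, 1] / Fz[valid]  # x = -My / Fz
        cop[valid, 1] = moment[valid, 0] / Fz[valid]   # y =  Mx / Fz
        return cop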