
    A human computer interactions framework for biometric user identification

    Computer-assisted functionalities and services have saturated our world, becoming such an integral part of our daily activities that we hardly notice them. In this study we focus on enhancements to Human-Computer Interaction (HCI) that can be achieved by embedding natural user recognition in the interaction models employed. Natural identification among humans is mostly based on biometric characteristics representing what we are (face, body appearance, voice, etc.) and how we behave (gait, gestures, posture, etc.). Following this observation, we investigate different approaches and methods for adapting existing biometric identification methods and technologies to the needs of evolving natural human-computer interfaces.

    To Draw or Not to Draw: Recognizing Stroke-Hover Intent in Gesture-Free Bare-Hand Mid-Air Drawing Tasks

    Over the past several decades, technological advancements have introduced new modes of communication with computers, marking a shift away from traditional mouse-and-keyboard interfaces. While touch-based interactions are widely used today, recent developments in computer vision, body-tracking stereo cameras, and augmented and virtual reality now enable communicating with computers using spatial input in physical 3D space. These techniques are being integrated into design-critical tasks such as sketching and modeling through sophisticated methodologies and the use of specialized instrumented devices. One of the prime challenges in design research is to make this spatial interaction with the computer as intuitive as possible for users. Drawing curves in mid-air with the fingers is a fundamental task with applications in 3D sketching, geometric modeling, handwriting recognition, and authentication. Sketching in general is a crucial mode of effective idea communication between designers. Mid-air curve input is typically accomplished through instrumented controllers, specific hand postures, or pre-defined hand gestures in the presence of depth- and motion-sensing cameras. The user may use any of these modalities to express the intention to start or stop sketching. However, apart from suffering from issues such as lack of robustness, the use of such gestures, specific postures, or instrumented controllers for design-specific tasks places an additional cognitive load on the user. To address the problems associated with these mid-air curve input modalities, the presented research discusses the design, development, and evaluation of data-driven models for intent recognition in non-instrumented, gesture-free, bare-hand mid-air drawing tasks.
The research is motivated by a behavioral study that demonstrates the need for such an approach, given the lack of robustness and intuitiveness of hand postures and instrumented devices. The main objective is to study how users move during mid-air sketching, develop qualitative insights regarding such movements, and consequently implement a computational approach that determines when the user intends to draw in mid-air without an explicit mechanism (such as an instrumented controller or a specified hand posture). The idea is to record the user's hand trajectory and classify each recorded point as either hover or stroke, so that the resulting model labels every point on the user's spatial trajectory. Drawing inspiration from the way users sketch in mid-air, this research first establishes the need for an alternative approach that processes bare-hand mid-air curves in a continuous fashion. It then presents a novel drawing-intent recognition workflow applied to every recorded drawing point, using three different approaches. We begin by recording mid-air drawing data and developing a classification model based on geometric properties extracted from the recorded data; the goal of this model is to identify drawing intent from critical geometric and temporal features. In the second approach, we explore how the prediction quality of the model varies as the dimensionality of the mid-air curve input is increased. In the third approach, we seek to recognize drawing intention from mid-air curves using dimensionality-reduction neural networks such as autoencoders. Finally, the broader implications of this research are discussed, along with potential development areas in the design and research of mid-air interactions.
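The per-point hover-versus-stroke classification described above can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the feature set (speed and curvature), the random-forest classifier, and the synthetic trajectory and labels are all assumptions for demonstration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def trajectory_features(points, dt=1 / 60):
    """Per-point geometric/temporal features from a recorded 3D hand
    trajectory (N x 3). Speed and curvature are illustrative choices;
    the thesis's exact feature set may differ."""
    vel = np.gradient(points, dt, axis=0)
    acc = np.gradient(vel, dt, axis=0)
    speed = np.linalg.norm(vel, axis=1)
    # Curvature of a space curve: |v x a| / |v|^3 (guarded against |v| ~ 0)
    cross = np.cross(vel, acc)
    curvature = np.linalg.norm(cross, axis=1) / np.maximum(speed, 1e-6) ** 3
    return np.column_stack([speed, curvature])

# Hypothetical labeled session: alternating hover (0) / stroke (1) segments
rng = np.random.default_rng(0)
points = np.cumsum(rng.normal(size=(200, 3)), axis=0)
labels = (np.arange(200) % 100) < 50  # placeholder ground truth

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(trajectory_features(points), labels)
pred = clf.predict(trajectory_features(points))  # one label per trajectory point
```

In a real pipeline the labels would come from annotated drawing sessions, and features would be computed over sliding windows rather than single points.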

    Integrating passive ubiquitous surfaces into human-computer interaction

    Mobile technologies enable people to interact with computers ubiquitously. This dissertation investigates how ordinary, ubiquitous surfaces can be integrated into human-computer interaction to extend the interaction space beyond the edge of the display. It turns out that acoustic and tactile features generated during an interaction can be combined to identify input events, the user, and the surface. In addition, it is shown that a heterogeneous distribution of different surfaces is particularly suitable for realizing versatile interaction modalities. However, privacy concerns must be considered when selecting sensors, and context can be crucial in determining whether and which interaction to perform.
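Combining acoustic and tactile features into a single classifier might look roughly like the following sketch. The feature choices, the synthetic "tap" and "knock" signals, and the logistic-regression model are illustrative assumptions, not the dissertation's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fuse_features(audio_frame, accel_frame):
    """Concatenate simple acoustic (low-frequency FFT magnitudes) and
    tactile (accelerometer statistics) descriptors into one vector.
    Real systems would use richer features; this only shows fusion."""
    spectrum = np.abs(np.fft.rfft(audio_frame))[:16]
    tactile = np.array([accel_frame.mean(), accel_frame.std(),
                        np.abs(accel_frame).max()])
    return np.concatenate([spectrum, tactile])

rng = np.random.default_rng(1)
# Hypothetical recordings: light taps (class 0) vs hard knocks (class 1),
# modeled here simply as signals with different energy
X = np.array([fuse_features(rng.normal(scale=s, size=256),
                            rng.normal(scale=s, size=64))
              for s in ([0.5] * 40 + [2.0] * 40)])
y = np.array([0] * 40 + [1] * 40)

clf = LogisticRegression(max_iter=1000).fit(X, y)
```

The same fused vector could feed separate classifiers for event type, user identity, and surface material, as the dissertation's findings suggest.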

    Augmented reality system with application in physical rehabilitation

    The aging phenomenon increases demand for physiotherapy services, with rising costs associated with long rehabilitation periods. Traditional rehabilitation methods rely on the subjective assessment of physiotherapists without supporting training data. To overcome these shortcomings and improve the efficiency of rehabilitation, AR (Augmented Reality), a promising technology that provides immersive interaction with real and virtual objects, is used. AR devices can capture body posture and scan the real environment, which has led to a growing number of AR applications focused on physical rehabilitation. This MSc thesis presents an AR platform used to implement a physical rehabilitation plan for stroke patients. Gait training is a significant part of physical rehabilitation for stroke patients, and AR is a promising solution for training assessment, providing patients and physiotherapists with information about the exercises to be done and the results achieved. As part of the MSc work, an iOS application was developed on the Unity 3D platform. The application immerses patients in a mixed environment that combines real-world and virtual objects. The human-computer interface consists of an iPhone used as a head-mounted 3D display and a set of wireless sensors for measuring physiological and motion parameters. The position and velocity of the patient are recorded by a smart carpet with capacitive sensors connected to a computation unit with Wi-Fi communication capabilities. The AR training scenario and the corresponding experimental results are part of the thesis.
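The smart carpet's position and velocity estimation could, in principle, work as in the following sketch: position as the activation-weighted centroid of the capacitive grid, velocity by finite differences between frames. The grid dimensions, cell size, and frame interval are assumptions for illustration, not the thesis's specification.

```python
import numpy as np

def carpet_position(frame, cell_size=0.05):
    """Estimate foot position (metres) as the activation-weighted
    centroid of a capacitive sensor grid. Grid layout and 5 cm cell
    size are illustrative assumptions."""
    rows, cols = np.indices(frame.shape)
    total = frame.sum()
    if total == 0:
        return None  # no contact detected on the carpet
    y = (rows * frame).sum() / total * cell_size
    x = (cols * frame).sum() / total * cell_size
    return np.array([x, y])

def velocity(p_prev, p_curr, dt):
    """Finite-difference velocity between consecutive position estimates."""
    return (p_curr - p_prev) / dt

# Two synthetic frames: contact moves two cells (0.1 m) along x in 0.5 s
frame1 = np.zeros((10, 10)); frame1[2, 2] = 1.0
frame2 = np.zeros((10, 10)); frame2[2, 4] = 1.0
p1, p2 = carpet_position(frame1), carpet_position(frame2)
v = velocity(p1, p2, dt=0.5)  # 0.2 m/s along x
```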

    Systematic literature review of hand gestures used in human computer interaction interfaces

    Gestures, widely accepted as humans' natural mode of interaction with their surroundings, have been considered for use in human-computer interfaces since the early 1980s. They have been explored and implemented, with a range of success and maturity levels, in a variety of fields, facilitated by a multitude of technologies. Underpinning gesture theory, however, focuses on gestures performed simultaneously with speech, and the majority of gesture-based interfaces are supported by other modes of interaction. This article reports the results of a systematic review undertaken to identify characteristics of touchless, in-air hand gestures used in interaction interfaces. 148 articles reporting on gesture-based interaction interfaces were reviewed, identified through searches of engineering and science databases (Engineering Village, ProQuest, Science Direct, Scopus, and Web of Science). The goal of the review was to map the field of gesture-based interfaces, investigate patterns in gesture use, and identify common combinations of gestures for different combinations of applications and technologies. The review found the community to be disparate, with little evidence of building upon prior work, and no fundamental framework of gesture-based interaction is evident. Nevertheless, the findings can inform future developments and provide valuable information about the benefits and drawbacks of different approaches. It was further found that the nature and appropriateness of the gestures used was not a primary factor in gesture elicitation when designing gesture-based systems, and that ease of technology implementation often took precedence.

    Review of three-dimensional human-computer interaction with focus on the leap motion controller

    Modern hardware and software development has led to an evolution of user interfaces from command-line to natural user interfaces for virtual immersive environments. Gestures imitating real-world interaction tasks increasingly replace classical two-dimensional interfaces based on Windows/Icons/Menus/Pointers (WIMP) or touch metaphors. The purpose of this paper is therefore to survey state-of-the-art Human-Computer Interaction (HCI) techniques with a focus on the special field of three-dimensional interaction. This includes an overview of currently available interaction devices, their areas of application, and the underlying methods for gesture design and recognition. The focus is on interfaces based on the Leap Motion Controller (LMC) and corresponding methods of gesture design and recognition. Further, a review of evaluation methods for the proposed natural user interfaces is given.