354 research outputs found

    Predictive text-entry in immersive environments

    Get PDF
    Virtual Reality (VR) has progressed significantly since its conception, enabling previously impossible applications such as virtual prototyping, telepresence, and augmented reality. However, text entry remains a difficult problem for immersive environments (Bowman et al., 2001b; Mine et al., 1997). Wearing a head-mounted display (HMD) and datagloves affords a wealth of new interaction techniques; however, users no longer have access to traditional input devices such as a keyboard. Although VR allows for more natural interfaces, there is still a need for simple yet effective data-entry techniques. Examples include communicating in a collaborative environment, accessing system commands, or leaving an annotation for a designer in an architectural walkthrough (Bowman et al., 2001b). This thesis presents the design, implementation, and evaluation of a predictive text-entry technique for immersive environments that combines 5DT datagloves, a graphically represented keyboard, and a predictive spelling paradigm. It evaluates the fundamental factors affecting the use of such a technique, including keyboard layout, prediction accuracy, gesture recognition, and interaction techniques. Finally, it details the results of user experiments and provides a set of recommendations for the future use of such a technique in immersive environments.
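    The predictive spelling idea described above can be sketched minimally as a frequency-ranked prefix predictor; the word list, class names, and ranking here are illustrative assumptions, not the thesis's actual method.

    ```python
    # Minimal sketch of a frequency-ranked prefix predictor, of the kind one
    # might pair with an in-VR virtual keyboard. Corpus words are made up.
    from collections import Counter

    class PrefixPredictor:
        def __init__(self, corpus_words):
            self.counts = Counter(corpus_words)

        def suggest(self, prefix, k=3):
            # Rank dictionary words starting with the typed prefix by frequency.
            matches = [w for w in self.counts if w.startswith(prefix)]
            return sorted(matches, key=lambda w: -self.counts[w])[:k]

    predictor = PrefixPredictor(["the", "the", "there", "then", "hand", "hand", "hat"])
    print(predictor.suggest("th"))  # -> ['the', 'there', 'then']
    ```

    With glove input, each selected letter narrows the candidate list, so a user can accept a full suggestion after only a few keystrokes.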

    Application of connected-region analysis to the recognition of fingerspelling sign-language elements

    Get PDF
    The object of study is the finger-sign (fingerspelling) language used for communication by people with hearing impairments. The study is devoted to the development and implementation of algorithms for recognizing finger-sign language. This paper considers the application of connected-region analysis methods, together with the use of angular parameters, for identifying elements of finger-sign language.

    Continual Learning of Hand Gestures for Human-Robot Interaction

    Full text link
    In this paper, we present an efficient method to incrementally learn to classify static hand gestures. This method allows users to teach a robot to recognize new symbols in an incremental manner. Contrary to other works which use special sensors or external devices such as color or data gloves, our proposed approach makes use of a single RGB camera to perform static hand gesture recognition from 2D images. Furthermore, our system is able to incrementally learn up to 38 new symbols using only 5 samples for each old class, achieving a final average accuracy of over 90%. In addition, the incremental training time can be reduced to 10% of the time required when using all available data.
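    One simple way to add new gesture classes without retraining on all old data is a nearest-class-mean classifier; this is only a hedged illustration of the incremental idea, not the paper's actual architecture, and the feature vectors below are synthetic placeholders.

    ```python
    # Hypothetical sketch: adding a new gesture class only requires storing
    # its mean feature vector, leaving previously learned classes untouched.
    import numpy as np

    class NearestClassMean:
        def __init__(self):
            self.means = {}  # label -> mean feature vector

        def add_class(self, label, samples):
            # A handful of exemplars per class suffices to place its mean.
            self.means[label] = np.mean(samples, axis=0)

        def predict(self, x):
            # Assign the label whose mean is closest in feature space.
            return min(self.means, key=lambda c: np.linalg.norm(x - self.means[c]))

    clf = NearestClassMean()
    clf.add_class("fist", np.array([[0.0, 0.0], [0.2, 0.1]]))
    clf.add_class("open", np.array([[1.0, 1.0], [0.9, 1.1]]))
    clf.add_class("peace", np.array([[0.0, 1.0]]))  # new symbol, added later
    print(clf.predict(np.array([0.05, 0.95])))  # -> "peace"
    ```

    Because old classes are summarized by their means, only a few stored exemplars per old class are needed to refresh them, which mirrors the rehearsal strategy the abstract describes.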

    Kinect crowd interaction

    Full text link
    Most state-of-the-art commercial simulation software focuses on providing realistic animations and convincing artificial intelligence for avatars in a scenario. However, work on triggering events and avatar reactions in a natural and intuitive way is less developed. Typical events are triggered by predefined timestamps: once the events are set, there is no easy way to interactively generate new events while the scene is running, making it difficult to dynamically affect avatar reactions. To address this, we propose a framework that uses human gestures as input to trigger events within a DI-Guy simulation scenario in real time, which could greatly help users control events and avatar reactions in the scenario. By implementing such a framework, we are able to identify users' intentions interactively and ensure that the avatars react accordingly.

    Study and implementation of methods for the automatic classification of human movements based on accelerometer data

    Get PDF
    The aim of this study is the development of algorithms for the automatic classification of human postures and movements from accelerometer data. The acceleration signals are measured by a few sensors affixed to selected points of the human body; here, five biaxial accelerometers placed at specific anatomical landmarks. Movement classifiers are of interest in pervasive-computing applications, where context awareness can ease human-machine interaction, and in biomedicine, where wearable systems are developed for long-term monitoring of physiological and biomechanical parameters. In this work we study both one-shot classifiers, whose outcome at a given instant does not depend on the history of previous classifications, and sequential classifiers based on Hidden Markov Models (HMMs), which incorporate statistical knowledge of the movement dynamics, i.e. of tasks formed by concatenating individual movement primitives, into the classification process. An automatic spurious-data removal stage has been added to the sequential classifier, making it possible to detect and discard data relating to postural transitions or movements unknown to the system.
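    The sequential-classification idea can be illustrated with Viterbi decoding over a tiny two-state posture HMM; all probabilities below are invented for illustration, and only the mechanics (exploiting state persistence across samples) match what the abstract describes.

    ```python
    # Illustrative Viterbi decoding for a toy two-posture HMM ("sit", "walk").
    # Observation 0 = low-acceleration window, 1 = high-acceleration window.
    import numpy as np

    states = ["sit", "walk"]
    start = np.array([0.6, 0.4])
    trans = np.array([[0.9, 0.1],   # postures tend to persist between samples
                      [0.2, 0.8]])
    emit = np.array([[0.8, 0.2],    # P(obs | sit)
                     [0.3, 0.7]])   # P(obs | walk)

    def viterbi(obs):
        v = start * emit[:, obs[0]]
        back = []
        for o in obs[1:]:
            scores = v[:, None] * trans       # score of each state transition
            back.append(scores.argmax(axis=0))
            v = scores.max(axis=0) * emit[:, o]
        path = [int(v.argmax())]
        for ptr in reversed(back):            # backtrack the best state path
            path.append(int(ptr[path[-1]]))
        return [states[s] for s in reversed(path)]

    print(viterbi([0, 0, 1, 1, 1]))  # -> ['sit', 'sit', 'walk', 'walk', 'walk']
    ```

    Unlike a one-shot classifier, the decoded label at each instant here depends on the whole observation sequence, which is what lets the HMM smooth over noisy individual windows.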

    Machine Learning Methods for Classifying Human Physical Activity from On-Body Accelerometers

    Get PDF
    The use of on-body wearable sensors is widespread in several academic and industrial domains. Of great interest are their applications in ambulatory monitoring and pervasive computing systems, where quantitative analysis of human motion and its automatic classification are the main computational tasks to be pursued. In this paper, we discuss how human physical activity can be classified using on-body accelerometers, with a major emphasis on the computational algorithms employed for this purpose. In particular, we motivate our current interest in classifiers based on Hidden Markov Models (HMMs). An example is illustrated and discussed by analysing a dataset of accelerometer time series.

    Emerging ExG-based NUI Inputs in Extended Realities : A Bottom-up Survey

    Get PDF
    Incremental and quantitative improvements of two-way interactions with extended realities (XR) are contributing toward a qualitative leap into a state of XR ecosystems being efficient, user-friendly, and widely adopted. However, there are multiple barriers on the way toward the omnipresence of XR; among them are the following: computational and power limitations of portable hardware, social acceptance of novel interaction protocols, and usability and efficiency of interfaces. In this article, we overview and analyse novel natural user interfaces based on sensing electrical bio-signals that can be leveraged to tackle the challenges of XR input interactions. Electroencephalography-based brain-machine interfaces that enable thought-only hands-free interaction, myoelectric input methods that track body gestures employing electromyography, and gaze-tracking electrooculography input interfaces are the examples of electrical bio-signal sensing technologies united under a collective concept of ExG. ExG signal acquisition modalities provide a way to interact with computing systems using natural intuitive actions, enriching interactions with XR. This survey will provide a bottom-up overview starting from (i) underlying biological aspects and signal acquisition techniques, (ii) ExG hardware solutions, (iii) ExG-enabled applications, (iv) discussion on social acceptance of such applications and technologies, as well as (v) research challenges, application directions, and open problems; evidencing the benefits that ExG-based Natural User Interface inputs can introduce to the area of XR.

    Engineering data compendium. Human perception and performance. User's guide

    Get PDF
    The concept underlying the Engineering Data Compendium was the product of a research and development program (the Integrated Perceptual Information for Designers project) aimed at facilitating the application of basic research findings in human performance to the design of military crew systems. The principal objective was to develop a workable strategy for: (1) identifying and distilling information of potential value to system design from the existing research literature, and (2) presenting this technical information in a way that would aid its accessibility, interpretability, and applicability by system designers. The present four volumes of the Engineering Data Compendium represent the first implementation of this strategy. This is the first volume, the User's Guide, containing a description of the program and instructions for its use.

    American Sign Language Recognition System by Using Surface EMG Signal

    Get PDF
    A Sign Language Recognition (SLR) system is a novel method that allows the hard of hearing to communicate with the general public. In this study, an American Sign Language (ASL) recognition system based on surface electromyography (sEMG) is proposed. The objective of this study is to recognize the American Sign Language alphabet letters and allow users to spell words and sentences. For this purpose, sEMG data were acquired from the subject's right forearm for twenty-seven American Sign Language gestures: the twenty-six English alphabet letters plus one home position. Time-domain and frequency-domain (band power) information was used in the feature extraction process. Support Vector Machines and an ensemble learning algorithm were used as classifiers, and their performances were compared in tabulated results. In conclusion, the results of this study show that sEMG signals can be used for SLR systems.
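    The time- and frequency-domain features the abstract mentions can be sketched as follows; the sampling rate, band edges, and feature choices here are assumptions for illustration, not the study's exact configuration.

    ```python
    # Sketch of sEMG window features: mean absolute value and RMS in the time
    # domain, plus band power in two assumed frequency bands.
    import numpy as np

    FS = 1000  # assumed sEMG sampling rate in Hz

    def emg_features(window, bands=((20, 150), (150, 450))):
        window = np.asarray(window, dtype=float)
        mav = np.mean(np.abs(window))        # time domain: mean absolute value
        rms = np.sqrt(np.mean(window ** 2))  # time domain: root mean square
        freqs = np.fft.rfftfreq(len(window), d=1.0 / FS)
        power = np.abs(np.fft.rfft(window)) ** 2
        band_powers = [power[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands]
        return np.array([mav, rms, *band_powers])

    t = np.arange(0, 0.256, 1.0 / FS)
    window = np.sin(2 * np.pi * 100 * t)  # synthetic 100 Hz burst
    feats = emg_features(window)
    print(feats.shape)  # -> (4,)
    ```

    Feature vectors of this kind, computed per sliding window, are what a classifier such as an SVM would then be trained on, one labeled window per gesture sample.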