
    Gesture and movement based interfaces

    Master's dissertation in Informatics Engineering. This thesis studies new forms of human-computer interaction based on infrared sensors. The goal was to create an interface that makes interaction with the computer more natural and fun, using gestures and movements that people perform intuitively in everyday life. This required the design and implementation of a flexible, modular system that detects the positions and movements of the user's hands and head. Additionally, the interface supports buttons and provides haptic feedback to the user. Several problems were encountered while building the hardware, which led to new approaches and to the construction and testing of several prototypes. In parallel with the hardware prototypes, a library was implemented that detects the position of the user's hands and head in three-dimensional space. This library handles all communication with the hardware, exposing simple functions and callbacks to the application programmer. Four applications were developed to test and demonstrate the various features of this interface in different scenarios. One of these applications was a game, demonstrated publicly during the FCT/UNL open day, where it was tried and evaluated by a large number of users.
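    The abstract does not include the library's actual API; as a purely hypothetical sketch of the kind of interface it describes (simple functions and callbacks reporting 3D hand and head positions), one might imagine something like the following, where all names (TrackerLibrary, on_hand_moved, and so on) are invented for illustration:

```python
# Hypothetical sketch of a tracking library that reports 3D hand/head
# positions via callbacks; names are invented, not the thesis's real API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Position3D:
    x: float
    y: float
    z: float

class TrackerLibrary:
    def __init__(self) -> None:
        self._hand_listeners: list[Callable[[str, Position3D], None]] = []

    def on_hand_moved(self, callback: Callable[[str, Position3D], None]) -> None:
        """Register a callback invoked whenever a hand position updates."""
        self._hand_listeners.append(callback)

    def _dispatch(self, hand: str, pos: Position3D) -> None:
        # Called internally after decoding a packet from the IR hardware.
        for listener in self._hand_listeners:
            listener(hand, pos)

# Application code only needs to register a handler:
tracker = TrackerLibrary()
tracker.on_hand_moved(lambda hand, p: print(f"{hand} hand at ({p.x}, {p.y}, {p.z})"))
tracker._dispatch("left", Position3D(0.1, 0.5, 1.2))  # simulated hardware event
```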

    Requirement analysis and sensor specifications – First version

    In this first version of the deliverable, we make the following contributions: to design the WEKIT capturing platform and the associated experience capturing API, we use a system engineering methodology that applies across domains such as aviation, space, and medicine, and across professions such as technicians, astronauts, and medical staff. Furthermore, we explore the system engineering process and how it can be used in the project to support the different work packages and, more importantly, the deliverables that will follow this one. Next, we provide a mapping of high-level functions or tasks (associated with experience transfer from expert to trainee) to low-level functions such as gaze, voice, video, body posture, hand gestures, bio-signals, fatigue levels, and the user's location in the environment. In addition, we link the low-level functions to their associated sensors. Moreover, we provide a brief overview of state-of-the-art sensors in terms of their technical specifications, possible limitations, standards, and platforms. We outline a set of recommendations for the sensors most relevant to the WEKIT project, taking into consideration the environmental, technical, and human factors described in other deliverables. We recommend the Microsoft HoloLens (augmented reality glasses), the MyndBand with NeuroSky chipset (EEG), the Microsoft Kinect and Lumo Lift (body posture tracking), and the Leap Motion, Intel RealSense, and Myo armband (hand gesture tracking). For eye tracking, an existing eye-tracking system can be customised to complement the augmented reality glasses, and the glasses' built-in microphone can capture the expert's voice. We propose a modular approach for the design of the WEKIT experience capturing system, and recommend that the capturing system have sufficient storage or transmission capabilities. Finally, we highlight common issues associated with the use of different sensors. We consider that this set of recommendations can be useful for the design and integration of the WEKIT capturing platform and the WEKIT experience capturing API, expediting the selection of the sensor combination to be used in the first prototype.
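    The deliverable's mapping from low-level capture functions to candidate sensors is essentially a lookup table; a minimal sketch of how such a mapping might be encoded follows, using only the sensor recommendations named in the abstract (the dictionary structure and function names are assumptions for illustration):

```python
# Minimal sketch: encode the abstract's function-to-sensor recommendations
# as a lookup table. The structure is illustrative; only the sensor names
# come from the deliverable itself.
SENSOR_RECOMMENDATIONS: dict[str, list[str]] = {
    "augmented reality glasses": ["Microsoft HoloLens"],
    "EEG": ["MyndBand (NeuroSky chipset)"],
    "body posture tracking": ["Microsoft Kinect", "Lumo Lift"],
    "hand gesture tracking": ["Leap Motion", "Intel RealSense", "Myo armband"],
    "voice": ["built-in microphone of the AR glasses"],
    "eye tracking": ["customised eye-tracking system"],
}

def sensors_for(function: str) -> list[str]:
    """Return the recommended sensors for a low-level capture function."""
    return SENSOR_RECOMMENDATIONS.get(function, [])

print(sensors_for("hand gesture tracking"))
# ['Leap Motion', 'Intel RealSense', 'Myo armband']
```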

    Toward natural interaction in the real world: real-time gesture recognition

    Using a new hand-tracking technology capable of tracking 3D hand postures in real time, we developed a recognition system for continuous natural gestures. By natural gestures, we mean those encountered in spontaneous interaction, rather than a set of artificial gestures chosen to simplify recognition. To date we have achieved 95.6% accuracy on isolated gesture recognition, and a 73% recognition rate on continuous gesture recognition, with data from three users and twelve gesture classes. We connected our gesture recognition system to Google Earth, enabling real-time gestural control of a 3D map. We describe the challenges of signal accuracy and signal interpretation presented by working in a real-world environment, and detail how we overcame them. Funding: National Science Foundation (U.S.), award IIS-1018055; Pfizer Inc.; Foxconn Technology.
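    The paper's actual models are not given in the abstract; as a toy illustration of the general pattern behind continuous gesture recognition (segment a live stream by motion energy, then classify each segment), one might write something like the sketch below. The threshold, features, and nearest-centroid classifier are all assumptions for illustration, not the paper's method:

```python
# Toy sketch of continuous gesture recognition: segment a stream of hand
# positions by motion energy, then classify each segment. The features and
# classifier here are illustrative stand-ins, not the paper's method.
import numpy as np

MOTION_THRESHOLD = 0.05  # assumed units: metres per frame

def segment(frames: np.ndarray) -> list[np.ndarray]:
    """Split an (N, 3) stream of hand positions into moving segments."""
    speeds = np.linalg.norm(np.diff(frames, axis=0), axis=1)
    moving = speeds > MOTION_THRESHOLD
    segments, start = [], None
    for i, m in enumerate(moving):
        if m and start is None:
            start = i
        elif not m and start is not None:
            segments.append(frames[start:i + 1])
            start = None
    if start is not None:
        segments.append(frames[start:])
    return segments

def classify(seg: np.ndarray, centroids: dict[str, np.ndarray]) -> str:
    """Nearest-centroid classification on a crude fixed-length feature."""
    # Resample the trajectory to 8 points and flatten it into a feature vector.
    idx = np.linspace(0, len(seg) - 1, 8).astype(int)
    feature = seg[idx].ravel()
    return min(centroids, key=lambda g: np.linalg.norm(feature - centroids[g]))
```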

    Digitisation of manual composite layup task knowledge using gaming technology

    Increased market demand for composite products and a shortage of expert laminators are compelling the composite industry to explore ways to acquire layup skills from experts and transfer them to novices, and eventually to machines. The literature lacks holistic methods for capturing composite layup skills, especially for complex moulds. This research aims to develop an informatics-based method, enabled by consumer-grade gaming technology and machine learning, to capture and digitise manufacturing task knowledge from skill-intensive hand layup. The digitisation is underpinned by the proposed human-workpiece interaction theory and implemented to automatically extract and decode key knowledge constituents such as layup strategies, ply manipulation techniques, motion mechanics, and problem-solving during hand layup, collectively categorised as layup skills. The significance of this research is its potential to facilitate cost-effective transfer of skills from experts to novices, real-time automated supervision of hand layup, and, in the future, automation of layup tasks.
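    The abstract names the knowledge constituents but not the extraction pipeline; a hypothetical fragment of such a pipeline might label windows of captured hand-motion data with layup actions using a trained classifier. Everything below (the features, action labels, and scikit-learn model choice) is assumed for illustration:

```python
# Hypothetical fragment: label windows of captured hand-motion data with
# layup actions. The features, labels, and model choice are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

LAYUP_ACTIONS = ["smoothing", "pressing", "folding", "trimming"]  # assumed labels

def window_features(window: np.ndarray) -> np.ndarray:
    """Crude summary features for an (N, 3) window of hand positions."""
    velocity = np.diff(window, axis=0)
    return np.concatenate([
        window.mean(axis=0),    # mean position relative to the mould
        velocity.mean(axis=0),  # mean velocity
        velocity.std(axis=0),   # motion variability
    ])

def train(windows: list[np.ndarray], labels: list[int]) -> RandomForestClassifier:
    """Fit a classifier on previously labelled expert recordings."""
    X = np.stack([window_features(w) for w in windows])
    return RandomForestClassifier(n_estimators=100).fit(X, labels)
```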

    Real-time Immersive human-computer interaction based on tracking and recognition of dynamic hand gestures

    With the fast development and ever-growing use of computer-based technologies, human-computer interaction (HCI) plays an increasingly pivotal role. In virtual reality (VR), HCI technologies provide not only a better understanding of three-dimensional shapes and spaces, but also sensory immersion and physical interaction. With hand-based HCI being a key HCI modality for object manipulation and gesture-based communication, providing users with a natural, intuitive, effortless, precise, and real-time method for HCI based on dynamic hand gestures is challenging, due to the complexity of hand postures formed by multiple joints with high degrees of freedom, the speed of hand movements with highly variable trajectories and rapid direction changes, and the precision required for interaction between hands and objects in the virtual world. Presented in this thesis is the design and development of a novel real-time HCI system based on a unique combination of a pair of data gloves using fibre-optic curvature sensors to acquire finger joint angles, a hybrid tracking system based on inertial and ultrasonic sensing to capture hand position and orientation, and a stereoscopic display system to provide immersive visual feedback. The potential and effectiveness of the proposed system is demonstrated through a number of applications, namely hand gesture based virtual object manipulation and visualisation, hand gesture based direct sign writing, and hand gesture based finger spelling. For virtual object manipulation and visualisation, the system is shown to allow a user to select, translate, rotate, scale, release, and visualise virtual objects (presented using graphics and volume data) in three-dimensional space using natural hand gestures in real time. For direct sign writing, the system is shown to immediately display the corresponding SignWriting symbols signed by a user using three different signing sequences and a range of complex hand gestures, which consist of various combinations of hand postures (with each finger open, half-bent, closed, adducted, or abducted), eight hand orientations in horizontal/vertical planes, three palm-facing directions, and various hand movements (which can have eight directions in horizontal/vertical planes, and can be repetitive, straight/curved, or clockwise/anti-clockwise). The development includes a special visual interface that gives not only a stereoscopic view of hand gestures and movements, but also structured visual feedback for each stage of the signing sequence. An excellent basis is therefore formed to develop a full HCI based on all human gestures by integrating the proposed system with facial expression and body posture recognition methods. Furthermore, for finger spelling, the system is shown to recognise five vowels signed by two hands using British Sign Language in real time.
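    The thesis's recognition code is not given in the abstract; as a toy sketch of the underlying idea (map a vector of finger joint angles from the data gloves to a discrete hand posture), one could write something like this, where the angle thresholds and posture names are assumptions:

```python
# Toy sketch: map data-glove finger joint angles to discrete finger states,
# then to a posture label. Thresholds and labels are illustrative assumptions.
OPEN_MAX, HALF_BENT_MAX = 20.0, 60.0  # assumed joint-angle cutoffs, degrees

def finger_state(angle_deg: float) -> str:
    if angle_deg < OPEN_MAX:
        return "open"
    if angle_deg < HALF_BENT_MAX:
        return "half-bent"
    return "closed"

def posture(joint_angles: list[float]) -> str:
    """Classify a five-finger posture from one flexion angle per finger."""
    states = [finger_state(a) for a in joint_angles]
    if all(s == "closed" for s in states):
        return "fist"
    if all(s == "open" for s in states):
        return "flat hand"
    return "mixed: " + ", ".join(states)

print(posture([75.0, 80.0, 82.0, 78.0, 70.0]))  # -> "fist"
```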

    Hand Gesture Interaction with Human-Computer

    Hand gestures are an important modality for human-computer interaction. Compared to many existing interfaces, hand gestures have the advantages of being easy to use, natural, and intuitive. Successful applications of hand gesture recognition include computer game control, human-robot interaction, and sign language recognition, to name a few. Vision-based recognition systems can give computers the capability of understanding and responding to hand gestures. The paper gives an overview of the field of hand gesture interaction with computers, and describes the early stages of a project on gestural command sets, an issue that has often been neglected. We have built a first prototype for exploring the use of pie and marking menus in gesture-based interaction. The purpose is to study whether such menus, with practice, could support the development of autonomous gestural command sets. The scenario is remote control of home appliances, such as TV sets and DVD players, which in the future could be extended to the more general scenario of ubiquitous computing in everyday situations. Some early observations are reported, mainly concerning problems with user fatigue and precision of gestures. Future work is discussed, such as introducing flow menus to reduce fatigue, and control menus for continuous control functions. The computer vision algorithms will also have to be developed further.
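    A pie (or marking) menu maps the direction of a stroke to a command, which is what makes it attractive for gestural command sets; a minimal sketch of that mapping follows, with the four-item menu and its command names chosen purely for illustration:

```python
# Minimal sketch: select a pie-menu item from the direction of a hand
# stroke. The four-item menu and its commands are illustrative only.
import math

MENU = ["volume up", "channel next", "volume down", "channel previous"]

def select(dx: float, dy: float) -> str:
    """Map a stroke vector to one of four 90-degree pie sectors."""
    angle = math.degrees(math.atan2(dy, dx)) % 360.0
    # Sector 0 is centred on 'up' (90 degrees), proceeding clockwise.
    sector = int(((90.0 - angle) % 360.0 + 45.0) // 90.0) % 4
    return MENU[sector]

print(select(0.0, 1.0))  # upward stroke   -> 'volume up'
print(select(1.0, 0.0))  # rightward stroke -> 'channel next'
```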

    Get a Grip: Evaluating Grip Gestures for VR Input Using a Lightweight Pen

    The use of Virtual Reality (VR) in applications such as data analysis, artistic creation, and clinical settings requires high-precision input. However, the current design of handheld controllers, where wrist rotation is the primary input approach, does not exploit the human fingers' capability for dexterous movements for high-precision pointing and selection. To address this issue, we investigated the characteristics and potential of using a pen as a VR input device. We conducted two studies. The first examined which pen grip allowed the largest range of motion; we found that a tripod grip at the rear end of the shaft met this criterion. The second study investigated target selection via 'poking' and ray-casting, where we found the pen grip outperformed traditional wrist-based input in both cases. Finally, we demonstrate potential applications enabled by VR pen input and grip postures.
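    Ray-casting selection, one of the two techniques evaluated, reduces to intersecting a ray from the pen tip with candidate targets; a small geometric sketch follows, under the assumption of spherical targets (the paper's actual targets and code are not given in the abstract):

```python
# Small geometric sketch of ray-cast selection: pick the nearest sphere
# target hit by a ray from the pen tip. Sphere targets are an assumption.
import numpy as np

def ray_hits_sphere(origin, direction, centre, radius):
    """Return distance along the ray to the sphere, or None if missed."""
    d = np.asarray(direction) / np.linalg.norm(direction)
    oc = np.asarray(centre) - np.asarray(origin)
    t = float(np.dot(oc, d))          # closest approach along the ray
    miss_sq = np.dot(oc, oc) - t * t  # squared distance from centre at that point
    if t < 0 or miss_sq > radius * radius:
        return None
    return t - float(np.sqrt(radius * radius - miss_sq))

def select_target(origin, direction, targets):
    """targets: list of (name, centre, radius); return the nearest hit's name."""
    hits = [(ray_hits_sphere(origin, direction, c, r), name)
            for name, c, r in targets]
    hits = [(t, name) for t, name in hits if t is not None]
    return min(hits)[1] if hits else None

print(select_target([0, 0, 0], [0, 0, 1],
                    [("cube", [0, 0, 5], 0.5), ("menu", [0, 2, 5], 0.5)]))
# -> 'cube'
```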