
    Development Of Eye Gaze Estimation System Using Two Cameras

    Eye gaze is the direction in which a person is looking, which makes it well suited as a natural form of Human-Computer Interface (HCI). Current research uses infrared or LED illumination to locate the user's iris, achieving better gaze estimation accuracy than approaches without active illumination; however, infrared and LED light sources are intrusive to human eyes and might damage the cornea and the retina. This research proposes a non-intrusive approach to locating the iris: by using two remote cameras to capture images of the user, a more accurate gaze estimation system can be achieved. The system uses Haar cascade classifiers to detect the face and eye regions, and the Hough Circle Transform to locate the position of the iris, which is critical for the gaze estimation calculation. To track the user's eye and iris locations in real time, the system uses CAMShift (Continuously Adaptive Mean Shift). The collected eye and iris parameters are then used to calculate the user's gaze direction. The left and right cameras achieve 70.00% and 74.67% accuracy respectively; when both cameras are used to estimate the gaze direction, 88.67% accuracy is achieved, showing that using two cameras improves gaze estimation accuracy.
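    A minimal sketch of the per-camera detection stage described above, using OpenCV's stock Haar cascades and Hough Circle Transform. The function name detect_iris and all parameter values are illustrative assumptions, not taken from the thesis.

        import cv2

        face_cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        eye_cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_eye.xml")

        def detect_iris(frame):
            # Haar cascades find the face, then eye regions within it.
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            for (fx, fy, fw, fh) in face_cascade.detectMultiScale(gray, 1.3, 5):
                face = gray[fy:fy + fh, fx:fx + fw]
                for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(face):
                    eye = face[ey:ey + eh, ex:ex + ew]
                    # Hough Circle Transform locates the roughly circular iris.
                    circles = cv2.HoughCircles(
                        cv2.medianBlur(eye, 5), cv2.HOUGH_GRADIENT, dp=1,
                        minDist=ew, param1=100, param2=15,
                        minRadius=ew // 8, maxRadius=ew // 3)
                    if circles is not None:
                        cx, cy, r = circles[0][0]
                        # Iris centre in full-frame coordinates.
                        yield (fx + ex + cx, fy + ey + cy, r)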

    A Multicamera System for Gesture Tracking With Three Dimensional Hand Pose Estimation

    The goal of any visual tracking system is to successfully detect and then follow an object of interest through a sequence of images. The difficulty of tracking an object depends on the dynamics, the motion and the characteristics of the object, as well as on the environment. For example, tracking an articulated, self-occluding object such as a signing hand has proven to be a very difficult problem. The focus of this work is on tracking and pose estimation with applications to hand gesture interpretation. An approach that attempts to integrate the simplicity of a region tracker with single-hand 3D pose estimation methods is presented. Additionally, this work delves into the pose estimation problem. This is accomplished both by analyzing hand templates composed of their morphological skeleton and by addressing the skeleton's inherent instability. Ligature points along the skeleton are flagged in order to determine their effect on skeletal instabilities. Tested on real data, the analysis finds that flagging ligature points proportionally increases the match strength of high-similarity image-template pairs by about 6%. The effectiveness of this approach is further demonstrated in a real-time multicamera hand tracking system that tracks hand gestures through three-dimensional space and estimates the three-dimensional pose of the hand.
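    An illustrative sketch (not the authors' code) of the skeleton-based template representation this analysis works with: a binary hand silhouette is reduced to its morphological skeleton, and junction pixels, near which ligature points would plausibly be flagged, are detected. The helper names and the neighbour-count heuristic are assumptions.

        import numpy as np
        from scipy.ndimage import convolve
        from skimage.morphology import skeletonize

        def hand_skeleton(mask: np.ndarray) -> np.ndarray:
            # Reduce a binary hand silhouette to a one-pixel-wide skeleton.
            return skeletonize(mask.astype(bool))

        def junction_points(skel: np.ndarray) -> np.ndarray:
            # Skeleton pixels with three or more skeleton neighbours are
            # junctions; ligature points lie near such branchings (assumption).
            kernel = np.ones((3, 3), dtype=int)
            kernel[1, 1] = 0
            neighbours = convolve(skel.astype(int), kernel, mode="constant")
            return skel & (neighbours >= 3)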

    Face Detection Technique Based on Skin Color and Facial Features

    Face detection is an essential first step in face recognition systems, with the purpose of localizing and extracting the face region from the background. Apart from increasing the efficiency of face recognition systems, face detection also opens up application areas such as content-based image retrieval, video encoding, video conferencing, crowd surveillance and intelligent human-computer interfaces. This thesis presents the design of a face detection approach capable of detecting human faces against complex backgrounds. A skin color modeling process is adopted for face segmentation. Image enhancement is then used to improve each face candidate before it is fed to a face object classifier based on the Modified Hausdorff distance. The results indicate that the system is able to detect human faces with reasonable accuracy.
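    A minimal sketch of the two stages named above: skin segmentation, here in YCrCb space with commonly used chrominance bounds rather than the thesis's own color model, and the Modified Hausdorff distance (in the sense of Dubuisson and Jain) for comparing a candidate's edge points against a face template.

        import cv2
        import numpy as np
        from scipy.spatial.distance import cdist

        def skin_mask(bgr: np.ndarray) -> np.ndarray:
            # Threshold chrominance; these Cr/Cb bounds are common defaults.
            ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
            return cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

        def modified_hausdorff(a: np.ndarray, b: np.ndarray) -> float:
            # MHD between point sets of shape (N, 2) and (M, 2): the larger
            # of the two mean directed nearest-neighbour distances.
            d = cdist(a, b)
            return max(d.min(axis=1).mean(), d.min(axis=0).mean())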

    Automated tracing of myelinated axons and detection of the nodes of Ranvier in serial images of peripheral nerves

    The development of realistic neuroanatomical models of peripheral nerves for simulation purposes requires the reconstruction of the morphology of the myelinated fibres in the nerve, including their nodes of Ranvier. Currently, this information has to be extracted by semi-manual procedures, which severely limit the scalability of the experiments. In this contribution, we propose a supervised machine learning approach for the detailed reconstruction of the geometry of fibres inside a peripheral nerve based on its high-resolution serial section images. Learning from sparse expert annotations, the algorithm traces myelinated axons, even across the nodes of Ranvier, which are detected automatically. The approach is based on classifying the myelinated membranes in a supervised fashion, closing the membrane gaps by solving an assignment problem, and classifying the closed gaps for node of Ranvier detection. The algorithm has been validated on two very different datasets: (i) a rat vagus nerve subvolume, SBFSEM microscope, 200 × 200 × 200 nm resolution; (ii) a rat sensory branch subvolume, confocal microscope, 384 × 384 × 800 nm resolution. For the first dataset, the algorithm correctly reconstructed 88% of the axons (241 out of 273) and achieved 92% accuracy on the task of Ranvier node detection. For the second dataset, the gap closing algorithm correctly closed 96.2% of the gaps, and 55% of axons were reconstructed correctly through the whole volume. On both datasets, training the algorithm on a small data subset and applying it to the full dataset takes a fraction of the time required by the currently used semi-automated protocols. Our software, raw data and ground truth annotations are available at http://hci.iwr.uni-heidelberg.de/Benchmarks/. The development version of the code can be found at https://github.com/RWalecki/ATMA.
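    A hedged sketch of the gap-closing step described above, with the assignment problem solved by the Hungarian algorithm from SciPy. The purely geometric cost and the function name close_gaps are placeholder assumptions; the paper's actual cost would also draw on learned membrane features.

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        def close_gaps(ends_a: np.ndarray, ends_b: np.ndarray, max_dist: float):
            # Match membrane endpoint sets of shape (N, 3) and (M, 3) across
            # a gap by minimising total pairing cost (Euclidean distance).
            cost = np.linalg.norm(ends_a[:, None, :] - ends_b[None, :, :], axis=-1)
            rows, cols = linear_sum_assignment(cost)
            # Discard implausible pairings beyond the distance threshold.
            return [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= max_dist]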

    Real-time immersive human-computer interaction based on tracking and recognition of dynamic hand gestures

    With the fast development and ever-growing use of computer-based technologies, human-computer interaction (HCI) plays an increasingly pivotal role. In virtual reality (VR), HCI technologies provide not only a better understanding of three-dimensional shapes and spaces, but also sensory immersion and physical interaction. With hand-based HCI being a key modality for object manipulation and gesture-based communication, providing users with a natural, intuitive, effortless, precise, real-time method for HCI based on dynamic hand gestures is challenging, due to the complexity of hand postures formed by multiple joints with high degrees of freedom, the speed of hand movements with highly variable trajectories and rapid direction changes, and the precision required for interaction between hands and objects in the virtual world. Presented in this thesis is the design and development of a novel real-time HCI system based on a unique combination of a pair of data gloves based on fibre-optic curvature sensors to acquire finger joint angles, a hybrid tracking system based on inertia and ultrasound to capture hand position and orientation, and a stereoscopic display system to provide immersive visual feedback. The potential and effectiveness of the proposed system are demonstrated through a number of applications, namely hand gesture based virtual object manipulation and visualisation, hand gesture based direct sign writing, and hand gesture based finger spelling. For virtual object manipulation and visualisation, the system is shown to allow a user to select, translate, rotate, scale, release and visualise virtual objects (presented using graphics and volume data) in three-dimensional space using natural hand gestures in real time. For direct sign writing, the system is shown to be able to display immediately the corresponding SignWriting symbols signed by a user using three different signing sequences and a range of complex hand gestures, which consist of various combinations of hand postures (with each finger open, half-bent, closed, adducted or abducted), eight hand orientations in horizontal/vertical planes, three palm-facing directions, and various hand movements (which can have eight directions in horizontal/vertical planes, and can be repetitive, straight/curved, clockwise/anti-clockwise). The development includes a special visual interface to give not only a stereoscopic view of hand gestures and movements, but also structured visual feedback for each stage of the signing sequence. An excellent basis is therefore formed for developing a full HCI based on all human gestures by integrating the proposed system with facial expression and body posture recognition methods. Furthermore, for finger spelling, the system is shown to be able to recognise five vowels signed by two hands using British Sign Language in real time.
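    An illustrative data structure for one sample of the multi-sensor stream the system combines: finger joint angles from the fibre-optic gloves plus hand position and orientation from the inertial/ultrasonic tracker, with a nearest-template posture match standing in for the actual recogniser. All names and the matching rule are assumptions for the sketch.

        from dataclasses import dataclass
        import numpy as np

        @dataclass
        class HandFrame:
            joint_angles: np.ndarray   # finger joint flexion angles, radians
            position: np.ndarray       # hand position (x, y, z) from the tracker
            orientation: np.ndarray    # hand orientation quaternion (w, x, y, z)

        def classify_posture(frame: HandFrame, templates: dict) -> str:
            # Nearest-template match on joint angles, e.g. to distinguish
            # open / half-bent / closed finger configurations.
            return min(templates, key=lambda name: np.linalg.norm(
                frame.joint_angles - templates[name]))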