6 research outputs found

    Haptic feedback to gaze events

    Eyes are the window to the world, and most of the input from the surrounding environment is captured through the eyes. In human-computer interaction, too, gaze-based interactions are gaining prominence, where the user's gaze acts as an input to the system. Lately, portable and inexpensive eye-tracking devices have entered the market, opening up wider possibilities for interacting with gaze. However, research on feedback for gaze-based events is limited. This thesis studies vibrotactile feedback for gaze-based interactions and presents a study conducted to evaluate different types of vibrotactile feedback and their role in responding to a gaze-based event. For the study, an experimental setup was designed in which, when the user fixated their gaze on a functional object, vibrotactile feedback was provided either on the wrist or on the glasses. The study seeks to answer questions such as how helpful vibrotactile feedback is in identifying functional objects, which type of vibrotactile feedback users prefer, and where users prefer to receive the feedback. The results indicate that vibrotactile feedback was an important factor in identifying the functional object. The preference for the type of vibrotactile feedback was somewhat inconclusive, as there were wide variations among users. The choice of location for receiving the feedback was largely influenced by personal preference.
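    The abstract describes the trigger mechanism only at a high level. As a rough illustration, a dwell-based trigger of the kind implied here could look like the sketch below; the object region, dwell threshold, and actuator interface are assumptions for illustration, not details taken from the thesis.

        # Hypothetical sketch: dwell-based fixation detection that triggers a
        # vibrotactile pulse, assuming a circular screen region for the
        # functional object and a stub actuator interface.
        DWELL_THRESHOLD_S = 0.4             # fixation duration required to count as a gaze event
        FUNCTIONAL_OBJECT = (400, 300, 80)  # x, y, radius of the object's screen region (pixels)

        def inside_region(gx, gy, region):
            x, y, r = region
            return (gx - x) ** 2 + (gy - y) ** 2 <= r ** 2

        def send_vibrotactile_pulse(location):
            # Placeholder for driving the actuator on the wrist or the glasses.
            print(f"pulse -> {location}")

        def run(gaze_samples, feedback_location="wrist"):
            # gaze_samples: iterable of (timestamp_seconds, x, y) gaze points.
            dwell_start = None
            for timestamp, gx, gy in gaze_samples:
                if inside_region(gx, gy, FUNCTIONAL_OBJECT):
                    if dwell_start is None:
                        dwell_start = timestamp
                    elif timestamp - dwell_start >= DWELL_THRESHOLD_S:
                        send_vibrotactile_pulse(feedback_location)
                        dwell_start = None  # reset so one fixation yields one pulse
                else:
                    dwell_start = None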

    Haptic feedback in eye typing

    Proper feedback is essential in gaze-based interfaces, where the same modality is used for both perception and control. We measured how vibrotactile feedback, a form of haptic feedback, compares with the commonly used visual and auditory feedback in eye typing. Haptic feedback was found to produce results close to those of auditory feedback; both were easy to perceive, and participants liked both the auditory "click" and the tactile "tap" of the selected key. Implementation details (such as the placement of the haptic actuator) were also found to be important.
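    As an illustration of where such feedback fits in eye typing, the sketch below dispatches a confirmation in the chosen modality once a key's dwell completes; the function names and console placeholders are hypothetical, not the paper's implementation.

        # Hypothetical sketch of the feedback step in dwell-based eye typing:
        # when a key's dwell completes, confirmation is given in the chosen
        # modality. Console output stands in for the real visual, auditory,
        # and vibrotactile channels.
        def confirm_selection(key, modality):
            if modality == "visual":
                print(f"[flash] key '{key}' highlighted")
            elif modality == "auditory":
                print(f"[click] key '{key}' selected")
            elif modality == "haptic":
                print(f"[tap] key '{key}' selected")

        def type_by_gaze(dwell_events, modality="haptic"):
            # dwell_events: keys selected by completed dwells, in order.
            typed = []
            for key in dwell_events:
                confirm_selection(key, modality)
                typed.append(key)
            return "".join(typed)

        # Example: the gaze dwells on 'h', then 'i'.
        print(type_by_gaze(["h", "i"], modality="haptic"))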

    Low-Cost Video-Oculography System for Eye Tracking

    The vestibular system plays a critical role in balance and in the vestibulo-ocular reflex (VOR), which aids in maintaining visual stability during head movements. Current methods of vestibular research rely on scleral coils and video-oculography (VOG) with markers. These procedures are potentially painful and damaging to the test subject. A comfortable, non-invasive alternative is VOG without markers; however, this option forgoes the accuracy of the others. A machine learning approach was explored to see whether this gap in functionality could be closed. VOG is a visual technique for measuring eye movements, used in vestibular and oculomotor research and in medical diagnosis involving vertigo and stroke. A machine learning system was developed by training object-detection models from TensorFlow, using a headset fabricated for this project. Horizontal and vertical movements were tracked by recording the model's bounding box; from the bounding box, the center of the pupil is derived as its geometric center, and the location of the pupil center is used to calculate the angular velocity of the eye. A 3D-printed headset was fabricated to test the system using a gyroscope, Raspberry Pi, button light, and camera. The headset's rotational data are processed along with the captured images. The rate of error was calculated to be higher than that of scleral coils, although a more thoroughly trained model could reduce the error. The pupil miss rate limits the accuracy, but a higher-speed, higher-resolution camera would ameliorate the problem. A machine learning process was explored for use in 2D vestibulo-ocular research, and a low-cost headset was fabricated as an alternative to the current, significantly more expensive methods.
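    The bounding-box-to-velocity step lends itself to a short sketch. The following is a minimal illustration, assuming a simple per-pixel angular calibration; the constant and function names are assumptions, not the project's actual code.

        # Hypothetical sketch: pupil center from a detector bounding box and
        # angular velocity between frames. DEGREES_PER_PIXEL is an assumed
        # calibration constant from the eye-camera geometry.
        DEGREES_PER_PIXEL = 0.05

        def pupil_center(box):
            # Geometric center of a bounding box (x_min, y_min, x_max, y_max).
            x_min, y_min, x_max, y_max = box
            return ((x_min + x_max) / 2.0, (y_min + y_max) / 2.0)

        def angular_velocity(box_prev, box_curr, dt):
            # Horizontal and vertical eye velocity (degrees/second) between two frames dt apart.
            (x0, y0) = pupil_center(box_prev)
            (x1, y1) = pupil_center(box_curr)
            return ((x1 - x0) * DEGREES_PER_PIXEL / dt,
                    (y1 - y0) * DEGREES_PER_PIXEL / dt)

        # Example: two detections 1/30 s apart, pupil shifted 6 px to the right.
        print(angular_velocity((100, 80, 140, 120), (106, 80, 146, 120), dt=1 / 30))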

    View on education: I see; therefore, I learn
