A study of manual control methodology with annotated bibliography
Augmenting User Interfaces with Haptic Feedback
Computer assistive technologies have developed considerably over the past decades.
Advances in computer software and hardware have provided motion-impaired operators
with much greater access to computer interfaces. For people with motion
impairments, the main difficulty in the communication process is the input of data
into the system. For example, the use of a mouse or a keyboard demands a high level
of dexterity and accuracy. Traditional input devices are designed for able-bodied
users and often do not meet the needs of someone with disabilities. Because the key
interaction in most graphical user interfaces (GUIs) is pointing and clicking with a
cursor, a computer can be inaccessible to many people.
Human-computer interaction (HCI) is an important area of research that aims
to improve communication between humans and machines. Previous studies have
identified haptics as a useful method for improving computer access. However, traditional
haptic techniques suffer from a number of shortcomings that have hindered
their inclusion with real world software. The focus of this thesis is to develop haptic
rendering algorithms that will permit motion-impaired operators to use haptic assistance
with existing graphical user interfaces. The main goal is to improve interaction
by reducing error rates and improving targeting times. A number of novel haptic
assistive techniques are presented that utilise the three degrees-of-freedom (3DOF)
capabilities of modern haptic devices to produce assistance that is designed
specifically for motion-impaired computer users. To evaluate the effectiveness of the new
techniques a series of point-and-click experiments were undertaken in parallel with
cursor analysis to compare the levels of performance. The task required the operator
to produce a predefined sentence on the densely populated Windows on-screen keyboard
(OSK). The results of the study show that higher performance levels can be
achieved using techniques that are less constricting than traditional assistance.
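A commonly cited example of the traditional assistance such work compares against is the gravity well: a spring-like force that pulls the cursor toward a target centre once it enters a capture radius. The sketch below is a minimal illustration of that baseline only; the radius, stiffness value, and function names are assumptions, and it does not reproduce the thesis's novel 3DOF techniques.

```python
import math

def gravity_well_force(cursor, target, capture_radius=40.0, stiffness=0.02):
    """Illustrative gravity-well assistance: returns a 2D force vector
    pulling the cursor toward the target centre once it is within the
    capture radius.  Units and gains are arbitrary for this sketch."""
    dx = target[0] - cursor[0]
    dy = target[1] - cursor[1]
    dist = math.hypot(dx, dy)
    if dist == 0.0 or dist > capture_radius:
        return (0.0, 0.0)          # outside the well: no assistance
    # Spring-like pull proportional to the remaining distance.
    return (stiffness * dx, stiffness * dy)

# Example: cursor 30 units from a target with a 40-unit capture radius.
print(gravity_well_force(cursor=(100.0, 100.0), target=(130.0, 100.0)))
```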
Engineering data compendium. Human perception and performance, volume 3
The concept underlying the Engineering Data Compendium was the product of a research and development program (Integrated Perceptual Information for Designers project) aimed at facilitating the application of basic research findings in human performance to the design of military crew systems. The principal objective was to develop a workable strategy for: (1) identifying and distilling information of potential value to system design from existing research literature, and (2) presenting this technical information in a way that would aid its accessibility, interpretability, and applicability by system designers. The present four volumes of the Engineering Data Compendium represent the first implementation of this strategy. This is Volume 3, containing sections on Human Language Processing, Operator Motion Control, Effects of Environmental Stressors, Display Interfaces, and Control Interfaces (Real/Virtual).
Modeling and Analysis of Next Generation 9-1-1 Emergency Medical Dispatch Protocols
Emergency Medical Dispatch Protocols are guidelines that a 9-1-1 dispatcher uses to evaluate the nature of an emergency, the resources to send, and the help to provide to the 9-1-1 caller. The current dispatch protocols are based on voice-only calls, but the Next Generation 9-1-1 (NG9-1-1) architecture will allow multimedia emergency calls. In this thesis I analyze and model the Emergency Medical Dispatch Protocols for the NG9-1-1 architecture and identify various technical aspects that can improve them. The devices (smartphones) at the caller's end have advanced to the point where they can be used to send and receive video, pictures, and text, and the sensors embedded in them can be used for an initial diagnosis of the injured person. There is a need to improve the human-computer (smartphone) interface to take advantage of this technology, so that callers can easily make use of the features available to them. The dispatchers at the 9-1-1 call center can use these new protocols to improve quality and response time, with multiple media streams available for interacting with the caller and the first responders.

The specific contributions of this thesis include applications that use smartphone sensors. The CPR application uses the smartphone to help administer effective CPR even if the person is not trained; it makes the CPR process closed loop, i.e., both the person administering CPR and the 9-1-1 operator receive feedback and prompts from the application about the correctness of the CPR. The breathing application analyzes the quality of breathing of the affected person and automatically sends the information to the 9-1-1 operator. To improve the human-computer interface at the caller and operator ends, I have analyzed Fitts' law and extended it so that it can be used to improve the instructions given to a caller. In emergency situations the caller may be physically or cognitively impaired, either because the caller is the injured person or because the caller is a close relative or friend of the injured person. Using EEG waves, I have analyzed and developed a mathematical model of a person's cognitive impairment. Finally, I have developed a mathematical model of the response time of a 9-1-1 call and analyzed the factors that can be improved to reduce it. In this regard, another application I have developed allows the 9-1-1 operator to remotely control the media features of a caller's smartphone; this is needed when the caller is unable to operate the multimedia features, for example if the caller does not know how to zoom the smartphone camera.

All these building blocks come together in the development of efficient NG9-1-1 Emergency Medical Dispatch protocols. I provide a sample of these protocols, based on the existing Emergency Dispatch Protocols used in the state of New Jersey. The new protocols have fewer questions and more visual prompts to evaluate the nature of the emergency.
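The Fitts' law analysis referred to above starts from the standard Shannon formulation, MT = a + b * log2(D/W + 1), where D is the distance to the target and W its width. A minimal sketch of that baseline follows; the coefficients are illustrative placeholders normally fit per user, and the thesis's extension for impaired callers is not reproduced here.

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Predicted movement time (seconds) under the Shannon formulation
    of Fitts' law: MT = a + b * log2(D/W + 1).  The intercept a and
    slope b are placeholder values for illustration only."""
    index_of_difficulty = math.log2(distance / width + 1)  # bits
    return a + b * index_of_difficulty

# Example: a 200 px reach to a 40 px button -> ID = log2(6) ~ 2.58 bits.
print(round(fitts_movement_time(200, 40), 3))
```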
Computational Analysis of Eye-Strain for Digital Screens based on Eye Tracking Studies
Computer vision syndrome (CVS) comprises multiple eye and vision problems caused by the prolonged use of digital displays, including tablets and smartphones. These problems have been shown to affect visual comfort as well as work productivity in both adults and teenagers. CVS causes symptoms such as eye-strain, eye burn, dry eyes, double vision, and blurred vision, and through repeated eye movements and excessive focus on computer screens it can also lead to muscular problems and work-related stress. In this thesis, we address this problem and present three general-purpose mathematical compound models for assessing eye-strain in eye-tracking applications, namely (1) the Fixation-based Eye fatigue Load Index (FELiX), (2) the Index of Difficulty for Eye-tracking Applications (IDEA), and (3) the Eye-Strain Probation Model (ESPiM), based on eye-tracking parameters and subjective ratings, to measure, predict, and compare the amount of fatigue or cognitive workload during target selection tasks for different user groups or interaction techniques. The ESPiM model builds on both FELiX and IDEA, which benefit from direct subjective ratings and can therefore be used to assess ESPiM's efficacy. We present experiments and user studies showing that these models can measure potential eye-strain levels in individuals based on physical circumstances such as screen resolution and target positions over time.
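The abstract does not give the formulas behind FELiX, IDEA, or ESPiM, but fixation-based models of this kind sit on top of fixation statistics extracted from raw gaze samples. As one plausible preprocessing step, the sketch below implements the standard dispersion-threshold (I-DT) fixation detector of Salvucci and Goldberg; the thresholds and sample format are assumptions, not the thesis's settings.

```python
def idt_fixations(samples, disp_thresh=30.0, min_dur=0.1):
    """Dispersion-threshold (I-DT) fixation detection.  `samples` is a
    list of (t, x, y) gaze points; returns (start_t, end_t, cx, cy)
    tuples, one per detected fixation."""
    fixations, i = [], 0
    while i < len(samples):
        j = i
        # Grow the window while its bounding-box dispersion stays small.
        while j + 1 < len(samples):
            xs = [p[1] for p in samples[i:j + 2]]
            ys = [p[2] for p in samples[i:j + 2]]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > disp_thresh:
                break
            j += 1
        window = samples[i:j + 1]
        if window[-1][0] - window[0][0] >= min_dur:
            cx = sum(p[1] for p in window) / len(window)
            cy = sum(p[2] for p in window) / len(window)
            fixations.append((window[0][0], window[-1][0], cx, cy))
            i = j + 1          # skip past the detected fixation
        else:
            i += 1             # slide the window start forward
    return fixations
```

Quantities such as fixation count and mean fixation duration over a task, computed from this output, are the kind of inputs fatigue indices of this family typically consume.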
Designing Text Entry Methods for Non-Verbal Vocal Input
Department of Computer Graphics and Interaction
Development and Evaluation of Facial Gesture Recognition and Head Tracking for Assistive Technologies
Globally, the World Health Organisation estimates that there are about 1 billion people living with disabilities, and the UK has about 10 million people with neurological disabilities in particular. In extreme cases, individuals with disabilities such as Motor Neuron Disease (MND), Cerebral Palsy (CP), and Multiple Sclerosis (MS) may only be able to perform limited head movement, move their eyes, or make facial gestures. The aim of this research is to investigate low-cost and reliable assistive devices using automatic gesture recognition systems that will enable the most severely disabled users to access electronic assistive technologies and communication devices, thus enabling them to communicate with friends and relatives.
The research presented in this thesis is concerned with the detection of head movements, eye movements, and facial gestures through the analysis of video and depth images. The proposed system, using web cameras or an RGB-D sensor coupled with computer vision and pattern recognition techniques, has to detect the movement of the user and calibrate it to facilitate communication. The system also provides the user with a choice of sensor, i.e. the web camera or the RGB-D sensor, and of interaction or switching mechanism, i.e. eye blink or eyebrow movement. This ability of the system to let users select according to their needs makes it easier on them, as they do not have to learn to operate a new system as their condition changes.
This research aims to explore in particular the use of depth data for head-movement-based assistive devices and the usability of different gesture modalities as switching mechanisms. The proposed framework consists of a facial feature detection module, a head tracking module, and a gesture recognition module. Techniques such as Haar cascades and skin detection were used to detect facial features such as the face, eyes, and nose, while the depth data from the RGB-D sensor was used to segment the area nearest to the sensor. Both the head tracking module and the gesture recognition module rely on the facial feature module, as it provides data such as the location of the facial features. The head tracking module uses these data to calculate the centroid of the face, the distance to the sensor, and the location of the eyes and the nose, in order to detect head motion and translate it into pointer movement. The gesture detection module uses features such as the location of the eyes, the location and size of the pupil, and the interocular distance to detect a blink or eyebrow movement and perform a click action. The research resulted in the creation of four assistive devices based on combinations of the sensors (web camera and RGB-D sensor) and facial gestures (blink and eyebrow movement): Webcam-Blink, Webcam-Eyebrows, Kinect-Blink, and Kinect-Eyebrows. Another outcome of this research is an evaluation framework based on Fitts' law, with a modified multi-directional task including a central location, and a dataset consisting of both colour images and depth data of people moving their heads in different directions and performing gestures such as eye blinks, eyebrow movements, and mouth movements.
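The web-camera path of this pipeline can be approximated with OpenCV's stock Haar cascades: detect the largest face, take its centroid, and turn frame-to-frame centroid displacement into relative pointer movement. The sketch below is a minimal single-frame version under an assumed gain; it is not the thesis's full tracker and omits the eye, nose, and depth processing.

```python
import cv2

# Stock Haar cascade shipped with OpenCV (path resolved via cv2.data).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_centroid(frame_bgr):
    """Return the (x, y) centroid of the largest detected face,
    or None if no face is found in the frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                          minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest face
    return (x + w // 2, y + h // 2)

def pointer_delta(prev_c, curr_c, gain=2.0):
    """Head motion -> pointer motion: difference of successive face
    centroids, scaled by an assumed gain before moving the cursor."""
    if prev_c is None or curr_c is None:
        return (0, 0)
    return (int(gain * (curr_c[0] - prev_c[0])),
            int(gain * (curr_c[1] - prev_c[1])))
```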
The devices have been tested with healthy participants. From the observed data, it was found that both Kinect-based devices have a lower Movement Time and a higher Index of Performance and Effective Throughput than the web-camera-based devices, showing that the introduction of depth data has had a positive impact on the head tracking algorithm. The usability assessment survey suggests that there is a significant difference in the eye fatigue experienced by the participants: the blink gesture was less tiring to the eyes than the eyebrow movement gesture. The analysis of the gestures also showed that the Index of Difficulty has a large effect on the error rates of gesture detection, and that the smaller the Index of Difficulty, the higher the error rate.
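The Movement Time, Index of Performance, and Effective Throughput measures reported here are standard Fitts' law (ISO 9241-9 style) quantities. The sketch below shows how effective throughput is conventionally computed from selection endpoints, using the usual 4.133 effective-width correction; the sample data are invented for illustration.

```python
import math
import statistics

def effective_throughput(distance, endpoints_x, movement_times):
    """ISO 9241-9 style effective throughput (bits/s).
    `endpoints_x` are selection coordinates along the task axis;
    effective width We = 4.133 * SD of the endpoints."""
    we = 4.133 * statistics.stdev(endpoints_x)
    ide = math.log2(distance / we + 1)            # effective ID (bits)
    mt = statistics.mean(movement_times)          # mean MT (seconds)
    return ide / mt

# Illustrative endpoint scatter around a target 250 px away.
print(round(effective_throughput(250, [248, 252, 245, 255, 250],
                                 [0.9, 1.1, 1.0, 1.2, 0.95]), 2))
```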