
    Ergonomics of using a mouse or other non-keyboard input device

    Ten years ago, when the Health and Safety (Display Screen Equipment) Regulations (HSE, 1992) were drafted, the majority of computer interaction occurred through text-driven interfaces, using a keyboard. It is not surprising, then, that the guidance accompanying the DSE Regulations included virtually no mention of the computer mouse or other non-keyboard input devices (NKID). In the intervening period, graphical user interfaces, incorporating ‘windows, icons and pull-down menus’ (WIMPs), with a heavy reliance on pointing devices such as the mouse, have transformed user-computer interaction. Accompanying this, however, have been increasing anecdotal reports of musculoskeletal health problems affecting NKID users. While the performance aspects of NKID (e.g. accuracy and speed) have been the subject of detailed research, the possible implications for user health have received comparatively little attention. The research presented in this report was commissioned by the Health and Safety Executive to improve understanding of the nature and extent of NKID health problems. This investigation, together with another project examining mobile computing (Heasman et al., 2000), was intended to contribute to a planned review and updating of the DSE Regulations and accompanying guidance.

    An investigation into alternative human-computer interaction in relation to ergonomics for gesture interface design

    Recent, innovative developments in the field of gesture interfaces as input techniques have the potential to provide a basic, lower-cost, point-and-click function for graphical user interfaces (GUIs). Since these gesture interfaces are not yet widely used, and indeed no tilt-based gesture interface is currently on the market, there is neither an international standard for the testing procedure nor a guideline for their ergonomic design and development. Hence, the research area demands more practical design case studies. The purpose of the research is to investigate the design factors of gesture interfaces for the point-and-click task in the desktop computer environment. The key function of gesture interfaces is to translate specific body movements, based in particular on arm movement, into cursor movement on the two-dimensional graphical user interface (2D GUI) in real time. The initial literature review identified limitations related to cursor movement behaviour with gesture interfaces. Since cursor movement is the machine output of the gesture interface being designed, a new accuracy measure based on the calculation of cursor movement distance, together with an associated model, was proposed in order to validate the continuous cursor movement. Furthermore, a design guideline with detailed design requirements and specifications for tilt-based gesture interfaces was suggested. In order to collect human performance data and the cursor movement distance, a graphical measurement platform was designed and validated with an ordinary mouse. Since there are typically two types of gesture interface, the sweep-based and the tilt-based, and no commercial tilt-based gesture interface has yet been developed, a commercial sweep-based gesture interface, the P5 Glove, was studied, and the causes and effects of its discrete cursor movement on usability were investigated. Following the proposed design guideline, two versions of the tilt-based gesture interface were designed and validated through an iterative design process. Most of the phenomena and results from the trials undertaken, which are inter-related, were analysed and discussed. The research has contributed new knowledge through the design improvement of tilt-based gesture interfaces and the improvement of discrete cursor movement by the elimination of manual error compensation. This research reveals that there is a relation between cursor movement behaviour and the adjusted R² for the prediction of movement time across models expanded from Fitts' Law. In such situations, the actual working area and joint ranges are large and differ appreciably from those that had been planned. Further studies are suggested. The research was associated with the University Alliance Scheme, technically supported by Freescale Semiconductor Co., U.S.
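
    The core mapping the thesis studies, from arm tilt to real-time 2D cursor movement evaluated against Fitts'-Law-style movement-time models, can be illustrated with a brief Python sketch. The dead zone, gain, and Fitts coefficients below are hypothetical placeholders, not values from the study:

        import math

        def tilt_to_cursor_velocity(pitch_deg, roll_deg, gain=8.0, dead_zone_deg=2.0):
            """Map device tilt angles to a 2D cursor velocity (pixels per frame).

            A dead zone suppresses jitter around the neutral posture; beyond it,
            speed grows linearly with tilt. Gain and dead-zone values are
            illustrative assumptions, not parameters from the thesis.
            """
            def axis(angle_deg):
                if abs(angle_deg) < dead_zone_deg:
                    return 0.0
                return math.copysign(gain * (abs(angle_deg) - dead_zone_deg), angle_deg)
            return axis(roll_deg), axis(pitch_deg)  # (dx, dy)

        def fitts_movement_time(distance_px, width_px, a=0.1, b=0.15):
            """Shannon formulation of Fitts' Law: MT = a + b * log2(D/W + 1).
            The intercept a and slope b are fitted per device; placeholders here."""
            return a + b * math.log2(distance_px / width_px + 1)

        # Predicted time to acquire a 40 px target 400 px away (ID = log2(11) ~ 3.46 bits)
        print(fitts_movement_time(400, 40))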

    Study of Touch Gesture Performance by Four and Five Year-Old Children: Point-and-Touch, Drag-and-Drop, Zoom-in and Zoom-out, and Rotate

    Past research has focused on children's interaction with computers through mouse clicks, and mouse research studies focused on point-and-click and drag-and-drop. However, more research is necessary regarding children's ability to perform touch gestures such as point-and-touch, drag-and-drop, zoom-in and zoom-out, and rotate. Furthermore, research should consider specific gestures such as zoom-in, zoom-out, and rotate tasks for young children. The aim of this thesis is to study the ability of 4- and 5-year-old children to interact with touch devices and perform the following tasks: point-and-touch, drag-and-drop, zoom-in and zoom-out, and rotate. This thesis tests an iPad application with four experiments on 17 four- and five-year-old children, 16 without motor impairment and one with a motor impairment. The results show that 5-year-old children perform better than 4-year-old children across the four experiments. Results indicate that interaction design for young children using point-and-touch gestures should consider the distance between targets, and designs using drag-and-drop gestures should consider the size of targets, as these have significant effects on the way children perform these gestures. Also, designers should consider size and rotation direction in rotate tasks, as it is smoother for young children to rotate objects clockwise. The results of the four different touch gesture tasks show that time was not an important factor in children's performance.
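
    Since target distance and size are the factors the study reports as significant, a minimal sketch of how such trials might be logged and scored follows; the data layout and hit criterion are illustrative assumptions, not the thesis's actual instrumentation:

        import math
        from dataclasses import dataclass

        @dataclass
        class TouchTrial:
            # One point-and-touch trial: target location/size and the recorded touch.
            target_x: float
            target_y: float
            target_diameter: float   # px
            touch_x: float
            touch_y: float
            time_s: float            # stimulus onset to touch

            def is_hit(self) -> bool:
                # A touch counts as a hit if it lands within the target's radius.
                offset = math.hypot(self.touch_x - self.target_x,
                                    self.touch_y - self.target_y)
                return offset <= self.target_diameter / 2

        def summarize(trials):
            """Accuracy and mean completion time for one condition
            (e.g., one target size or one inter-target distance)."""
            accuracy = sum(t.is_hit() for t in trials) / len(trials)
            mean_time = sum(t.time_s for t in trials) / len(trials)
            return accuracy, mean_time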

    Investigating the usability of touch-based user interfaces

    With the emergence of pen-and-touch operated personal digital assistants (PDAs), tablet computers, and wall-size displays (e.g., Liveboard and Smartboard), touch and pen input have gained popularity. Touch-based user interfaces such as mobile phones, PDAs and tablet PCs (with touch screens) have become more attractive in consumer electronics because they enable quick learning and rapid performance whilst evoking high user satisfaction. Today, countless supermarket checkouts, restaurant tills, automated-teller machines, airport check-in kiosks, museum information booths and voting kiosks use touchscreens. Nevertheless, the initial literature review identified that the widespread use of touch-based user interfaces has been limited by the high error rates shown in many studies, the lack of precision, the fatigue in arm motion, and the concern about screen smudging. Furthermore, most research into touch-based interaction has tended not to directly investigate efficiency, effectiveness and user satisfaction. There is therefore a need to add to the body of knowledge in this area, especially as devices using touch-based interaction are becoming more pervasive. Hence, the purpose of this research is to evaluate the usability of touch-based user interfaces in terms of efficiency, effectiveness and user satisfaction. In order to answer the question of whether a touch-based user interface is better (more effective, useful, practical and satisfying to the user), a comparison with alternative interaction methods, namely mouse, touch and stylus, has been conducted. Therefore, the research concentrates on a series of empirical experiments designed and developed to evaluate the efficiency, effectiveness and user satisfaction of using touchscreen interfaces. Furthermore, in order to collect human performance data, a series of small software prototypes involving touch-based interaction were designed and developed using Adobe Flash. Initially, a pilot experiment was carried out, followed by an abstract experiment and a context experiment based on the guidance of the International Organization for Standardization (ISO 9241-420, 2011). The abstract experiment consists of four tests (Tracing test, Dragging test, One direction test and Multi directional test), deliberately developed as abstract tasks with the purpose of analysing the user's ability on simple tasks without a real-world context. The context experiment likewise consists of four tests (Tracing test, Dragging test, One direction test and Multi directional test), deliberately developed as contextual tasks with the purpose of analysing the user's ability in a real-world context. Overall, the aim of both the abstract and context experiments was to discover whether there are differences between mouse, stylus and touch on the tracing and dragging tests at different levels of difficulty that could affect users' performance and satisfaction. The significant contribution to knowledge that may arise from this research is evidence showing whether touch-based interaction is more effective and preferred by users in real-world-type tasks and scenarios. Currently there is very little evidence to indicate whether touch-based interaction is more effective and preferred by users; it seems that the proliferation of touch-based devices is market-driven rather than usability-driven.
    Moreover, this is the first study to compare three input devices (stylus, mouse and touch) in tracing, dragging, one-direction tapping and multi-directional tapping tests for both abstract and context tasks, and it therefore contributes to the up-to-date HCI literature. The main strength of the current study is that it provides findings from a well-designed experiment based on an ISO standard (ISO 9241-420, 2011), providing a useful guideline that can be further developed and applied to other research in this area.
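
    One-direction and multi-directional tapping tests in ISO 9241-9/-420 style studies are commonly reduced to a throughput figure per condition; a sketch of that standard calculation follows (the variable names are ours, and the abstract does not say which throughput variant the thesis uses):

        import math
        import statistics

        def throughput_bits_per_s(distances_px, endpoint_offsets_px, movement_times_s):
            """ISO 9241-9 style throughput for one condition.

            distances_px:        nominal centre-to-centre target distances
            endpoint_offsets_px: signed selection offsets along the task axis
            movement_times_s:    per-trial movement times

            Effective width We = 4.133 * SD of the end-point offsets, which
            rescales the observed scatter to a nominal 4% error rate.
            """
            w_e = 4.133 * statistics.stdev(endpoint_offsets_px)
            d_e = statistics.mean(distances_px)
            id_e = math.log2(d_e / w_e + 1)          # effective index of difficulty
            return id_e / statistics.mean(movement_times_s)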

    Articulatory Copy Synthesis Based on the Speech Synthesizer VocalTractLab

    Articulatory copy synthesis (ACS), a subarea of speech inversion, refers to the reproduction of natural utterances and involves both the physiological articulatory processes and their corresponding acoustic results. This thesis proposes two novel methods for the ACS of human speech using the articulatory speech synthesizer VocalTractLab (VTL) to address or mitigate the existing problems of speech inversion, such as non-unique mapping, acoustic variation among different speakers, and the time-consuming nature of the process. The first method involved finding appropriate VTL gestural scores for given natural utterances using a genetic algorithm. It consisted of two steps: gestural score initialization and optimization. In the first step, gestural scores were initialized from the given acoustic signals using speech recognition, grapheme-to-phoneme (G2P) conversion, and a VTL rule-based method for converting phoneme sequences to gestural scores. In the second step, the initial gestural scores were optimized by a genetic algorithm via an analysis-by-synthesis (ABS) procedure that sought to minimize the cosine distance between the acoustic features of the synthetic and natural utterances. The articulatory parameters were also regularized during the optimization process to restrict them to reasonable values. The second method was based on long short-term memory (LSTM) and convolutional neural networks, which were responsible for capturing the temporal dependence and the spatial structure of the acoustic features, respectively. Neural network regression models were trained that used acoustic features as inputs and produced articulatory trajectories as outputs. In addition, to cover as much of the articulatory and acoustic space as possible, the training samples were augmented by manipulating the phonation type, speaking effort, and vocal tract length of the synthetic utterances. Furthermore, two regularization methods were proposed: one based on the smoothness loss of articulatory trajectories and another based on the acoustic loss between original and predicted acoustic features. The best-performing genetic algorithm and convolutional LSTM systems (evaluated in terms of the difference between the estimated and reference VTL articulatory parameters) obtained average correlation coefficients of 0.985 and 0.983 for speaker-dependent utterances, respectively, and their reproduced speech achieved recognition accuracies of 86.25% and 64.69% for speaker-independent utterances of German words, respectively. When applied to German sentence utterances, as well as English and Mandarin Chinese word utterances, the neural-network-based ACS systems achieved recognition accuracies of 73.88%, 52.92%, and 52.41%, respectively. The results showed that both methods reproduced not only the articulatory processes but also the acoustic signals of the reference utterances. Moreover, the regularization methods led to more physiologically plausible articulatory processes and made the estimated articulatory trajectories more articulatorily preferred by VTL, thus reproducing more natural and intelligible speech. This study also found that the convolutional layers, when used in conjunction with batch normalization layers, automatically learned more distinctive features from log power spectrograms. Furthermore, the neural-network-based ACS systems trained on German data could be generalized to utterances of other languages.
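
    The two optimization ideas in the abstract, a cosine distance between natural and synthetic acoustic features driving the analysis-by-synthesis search, and a smoothness regularizer on the articulatory trajectories, might be sketched as follows; the feature layout, time alignment, and weight are assumptions rather than VTL internals:

        import numpy as np

        def cosine_distance_loss(nat_feats, syn_feats):
            """Mean frame-wise cosine distance between natural and synthetic
            acoustic features, each of shape (n_frames, n_features). In practice
            the synthetic utterance would first be time-aligned to the natural
            one; that step is omitted here."""
            num = np.sum(nat_feats * syn_feats, axis=1)
            den = (np.linalg.norm(nat_feats, axis=1) *
                   np.linalg.norm(syn_feats, axis=1))
            return float(np.mean(1.0 - num / np.maximum(den, 1e-12)))

        def smoothness_penalty(trajectories, weight=1e-3):
            """Penalize frame-to-frame jumps in the articulatory trajectories
            (n_frames, n_params) to favour physiologically plausible movements.
            The weight is a placeholder, not a value from the thesis."""
            return weight * float(np.mean(np.diff(trajectories, axis=0) ** 2))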