220 research outputs found

    An analysis of interaction in the context of wearable computers

    Get PDF
    The focus of this thesis is the evaluation of input modalities for generic input tasks, such as inputting text and pointer-based interaction. In particular, input systems that can be used within a wearable computing system are examined in terms of human-wearable computer interaction. The literature identified a lack of empirical research into the use of input devices for text input and pointing when used as part of a wearable computing system. The research carried out within this thesis took an approach that acknowledged the movement condition of the user of a wearable system, and evaluated the wearable input devices while the participants were mobile and stationary. Each experiment was based on the user's time on task, their accuracy, and a NASA TLX assessment, which provided the participant's subjective workload. The input devices assessed were 'off the shelf' systems. These were chosen as they are readily available to a wider range of users than bespoke input systems. Text-based input was examined first. The text input systems evaluated were: a keyboard, an on-screen keyboard, a handwriting recognition system, a voice recognition system and a wrist-keyboard (sometimes known as a wrist-worn keyboard). It was found that the most appropriate text input system to use overall was the handwriting recognition system. (This is further explored in the discussion of Chapters three and seven.) The text input evaluations were followed by a series of four experiments that examined pointing devices and assessed their appropriateness as part of a wearable computing system. The devices were: an off-table mouse, a speech recognition system, a stylus and a track-pad. These were assessed in relation to the following generic pointing tasks: target acquisition, dragging and dropping, and trajectory-based interaction. Overall, the stylus was found to be the most appropriate input device for use with a wearable system when used as a pointing device. (This is further covered in Chapters four to six.) By completing this series of experiments, evidence has been scientifically established that can support both a wearable computer designer's and a wearable user's choice of input device. These choices can be made in regard to generic interface task activities such as inputting text, target acquisition, dragging and dropping, and trajectory-based interaction.
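
    Since each experiment combined time on task, accuracy, and a NASA TLX assessment, a minimal sketch of how the standard weighted NASA TLX workload score is computed may be useful; the subscale ratings and pairwise-comparison tally below are hypothetical, not data from the thesis.

    SUBSCALES = ["mental", "physical", "temporal",
                 "performance", "effort", "frustration"]

    def nasa_tlx(ratings, tally):
        """Weighted NASA TLX workload: each 0-100 subscale rating is weighted
        by how often the participant chose that dimension in the 15 pairwise
        comparisons; the weights therefore sum to 15."""
        assert sum(tally.values()) == 15, "pairwise tally must sum to 15"
        return sum(ratings[s] * tally[s] for s in SUBSCALES) / 15.0

    # Hypothetical participant entering text while walking:
    ratings = {"mental": 70, "physical": 40, "temporal": 55,
               "performance": 30, "effort": 65, "frustration": 50}
    tally = {"mental": 4, "physical": 1, "temporal": 3,
             "performance": 2, "effort": 3, "frustration": 2}
    print(nasa_tlx(ratings, tally))   # 56.0, on the 0-100 workload scale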

    Extraction of Dynamic Trajectory on Multi-Stroke Static Handwriting Images Using Loop Analysis and Skeletal Graph Model

    Get PDF
    The recovery of a handwriting's dynamic stroke is an effective way to improve the accuracy of any handwriting authentication or verification system. The recovered trajectory can be considered a dynamic feature of any static handwritten image. Capitalising on this temporal information can significantly increase the accuracy of the verification phase. Extraction of dynamic features from static handwriting remains a challenge due to the lack of temporal information as compared to online methods. Previously, two typical approaches were used to recover the handwriting's stroke. The first approach is based on the script's skeleton. The skeletonisation method is highly computationally efficient, but it often produces noisy artifacts and mismatches in the resulting skeleton. The second approach deals with the handwriting's contour, crossing areas and overlaps using parametric representations of lines and stroke thickness. This method can avoid the artifacts, but it requires complicated mathematical models and may lead to computational explosion. Our paper is based on the script's extracted skeleton and provides an approach to processing a static handwriting's objects, including edges, vertices and loops, as the important aspects of any handwritten image. Our paper also analyses and classifies loop types and humans' natural writing behaviour to improve the global reconstruction of stroke order. A detailed tracing algorithm for global stroke reconstruction is then presented. The experimental results reveal the superiority of our method as compared with existing ones.
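
    To make the tracing idea concrete, here is a simplified sketch of stroke tracing over a skeletal graph; the graph encoding and the smoothest-continuation heuristic are illustrative assumptions, not the paper's exact algorithm.

    import math

    def trace_stroke(graph, start, first):
        """Follow skeleton edges from `start`, preferring the smoothest
        continuation at each junction (smallest turning angle), until the
        trace reaches a dead end. `graph` maps an (x, y) skeleton node to
        the list of its neighbouring nodes."""
        path = [start, first]
        visited = {frozenset((start, first))}   # undirected edges already drawn
        prev, cur = start, first
        while True:
            incoming = math.atan2(cur[1] - prev[1], cur[0] - prev[0])
            candidates = [n for n in graph[cur]
                          if frozenset((cur, n)) not in visited]
            if not candidates:
                return path

            def turn(n):
                # Natural writing favours continuing in the same direction.
                out = math.atan2(n[1] - cur[1], n[0] - cur[0])
                d = abs(out - incoming)
                return min(d, 2 * math.pi - d)

            nxt = min(candidates, key=turn)
            visited.add(frozenset((cur, nxt)))
            path.append(nxt)
            prev, cur = cur, nxt

    # Example: a T-junction where the trace continues straight through
    # rather than turning into the branch.
    g = {(0, 0): [(1, 0)],
         (1, 0): [(0, 0), (2, 0), (1, 1)],
         (2, 0): [(1, 0)],
         (1, 1): [(1, 0)]}
    print(trace_stroke(g, (0, 0), (1, 0)))   # [(0, 0), (1, 0), (2, 0)]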

    Character Recognition

    Get PDF
    Character recognition is one of the pattern recognition technologies most widely used in practical applications. This book presents recent advances relevant to character recognition, from technical topics such as image processing, feature extraction or classification, to new applications including human-computer interfaces. The goal of this book is to provide a reference source for academic research and for professionals working in the character recognition field.

    CHAPTER 3. QUANTITATIVE PERSPECTIVES TO THE STUDY OF WRITING ACROSS THE LIFESPAN: A CONCEPTUAL OVERVIEW AND FOCUS ON STRUCTURAL EQUATION MODELING

    Get PDF
    As echoed throughout this edited collection, writing researchers are well aware of the complexities involved when adopting lifespan approaches to the study of written language. Writing researchers come from a wide array of fields (e.g., composition studies, rhetoric, psychology, education, and special education) that adopt different methodological approaches to answer a variety of research questions. Unpacking the complexities underlying the development of written language across the lifespan requires examining the available tools and methods offered by different research designs to pose and answer different types of research questions.
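
    As a concrete illustration of the structural equation modeling approach named in the chapter title, the sketch below specifies and fits a small hypothetical model with the semopy Python package; the latent factor, its indicators, and the dataset are invented for illustration and are not drawn from the chapter.

    import pandas as pd
    from semopy import Model

    # Measurement model: a latent writing-skill factor with three indicators;
    # structural model: that factor regressed on age.
    spec = """
    writing =~ spelling + vocabulary + syntax
    writing ~ age
    """

    data = pd.read_csv("lifespan_writing.csv")   # hypothetical dataset
    model = Model(spec)
    model.fit(data)
    print(model.inspect())   # parameter estimates, standard errors, p-values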

    An Efficient Fusion Scheme for Human Hand Trajectory Reconstruction Using Inertial Measurement Unit and Kinect Camera

    Get PDF
    The turn of the 21st century has witnessed an evolving trend in wearable devices research and improvements in human-computer interfaces. In such systems, position information of human hands in 3-D space has become extremely important, as various applications require knowledge of the user's hand position. A promising example is a wearable ring that can naturally and ubiquitously reconstruct handwriting based on the motion of the human hand in an indoor environment. A common approach is to exploit the portability and affordability of commercially available inertial measurement units (IMUs). However, these IMUs suffer from drift errors accumulated by double integration of acceleration readings. This process accrues intrinsic errors arising from the sensor's sensitivity, factory bias, thermal noise, etc., which result in large deviations from the ground-truth position over time. Other approaches utilize optical sensors for better position estimation, but these sensors suffer from occlusion and environment lighting conditions. In this thesis, we first present techniques to calibrate the IMU, minimizing the undesired effects of the intrinsic imperfections residing within cheap MEMS sensors. We then introduce a Kalman filter-based fusion scheme incorporating data collected from the IMU and a Kinect camera, which is shown to overcome each sensor's disadvantages and improve the overall quality of the reconstructed trajectory of human hands.
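
    A minimal one-dimensional sketch of the fusion idea described above: IMU acceleration drives the Kalman prediction step at a high rate, while Kinect position measurements correct the accumulated drift in the update step. The noise values and sample period are illustrative assumptions, not the thesis's tuned parameters.

    import numpy as np

    class PosVelKalman:
        """1-D position/velocity filter: IMU acceleration in, Kinect fixes in."""
        def __init__(self, dt=0.01):
            self.F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
            self.B = np.array([[0.5 * dt**2], [dt]])     # acceleration input
            self.H = np.array([[1.0, 0.0]])              # Kinect sees position only
            self.Q = 1e-4 * np.eye(2)                    # process noise (IMU errors)
            self.R = np.array([[4e-4]])                  # Kinect measurement noise
            self.x = np.zeros((2, 1))                    # [position; velocity]
            self.P = np.eye(2)

        def predict(self, a_imu):
            # High-rate step: propagate with the calibrated IMU acceleration.
            self.x = self.F @ self.x + self.B * a_imu
            self.P = self.F @ self.P @ self.F.T + self.Q

        def update(self, z_kinect):
            # Lower-rate step: a Kinect position fix corrects accumulated drift.
            y = z_kinect - self.H @ self.x               # innovation
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)     # Kalman gain
            self.x = self.x + K @ y
            self.P = (np.eye(2) - K @ self.H) @ self.P

    kf = PosVelKalman()
    for _ in range(10):                                  # ten IMU samples...
        kf.predict(a_imu=0.3)
    kf.update(z_kinect=0.0015)                           # ...then one Kinect fix
    print(kf.x[0, 0])                                    # fused position estimate

    Because only the update step needs the camera, the filter keeps running on IMU data alone during occlusion, which is how the fusion tolerates each sensor's weakness.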

    Video Based Handwritten Characters Recognition

    Get PDF

    WristSketcher: Creating Dynamic Sketches in AR with a Sensing Wristband

    Full text link
    Because native AR glasses offer only a limited interaction area (e.g., touch bars), creating sketches with them is challenging. Recent works have attempted to use mobile devices (e.g., tablets) or mid-air bare-hand gestures to expand the interactive space and to serve as 2D/3D sketching input interfaces for AR glasses. Between them, mobile devices allow for accurate sketching but are often heavy to carry, while sketching with bare hands is zero-burden but can be inaccurate due to arm instability. In addition, mid-air bare-hand sketching can easily lead to social misunderstandings, and its prolonged use can cause arm fatigue. As a new attempt, in this work we present WristSketcher, a new AR system based on a flexible sensing wristband for creating 2D dynamic sketches, featuring an almost zero-burden authoring model for accurate and comfortable sketch creation in real-world scenarios. Specifically, we have streamlined the interaction space from the mid-air to the surface of a lightweight sensing wristband, and implemented AR sketching and associated interaction commands by developing a gesture recognition method based on the sensing pressure points on the wristband. The set of interactive gestures used by our WristSketcher is determined by a heuristic study on user preferences. Moreover, we endow our WristSketcher with the ability of animation creation, allowing it to create dynamic and expressive sketches. Experimental results demonstrate that our WristSketcher i) faithfully recognizes users' gesture interactions with a high accuracy of 96.0%; ii) achieves higher sketching accuracy than freehand sketching; iii) achieves high user satisfaction in ease of use, usability and functionality; and iv) shows innovation potential in art creation, memory aids, and entertainment applications.
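
    The paper's 96.0%-accurate recognizer is not reproduced here; as a rough illustration of mapping pressure-point readings from a sensing wristband to gesture commands, the sketch below uses a nearest-centroid classifier, with the gesture labels and sensor data invented for the example.

    import numpy as np

    class PressureGestureClassifier:
        def __init__(self):
            self.centroids = {}   # gesture label -> mean pressure vector

        def fit(self, samples):
            """samples: {label: array of shape (n_examples, n_sensors)}"""
            for label, X in samples.items():
                self.centroids[label] = np.asarray(X).mean(axis=0)

        def predict(self, reading):
            """reading: one pressure vector of shape (n_sensors,)"""
            reading = np.asarray(reading)
            return min(self.centroids,
                       key=lambda g: np.linalg.norm(reading - self.centroids[g]))

    clf = PressureGestureClassifier()
    clf.fit({"tap":   [[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]],
             "swipe": [[0.2, 0.7, 0.8], [0.1, 0.6, 0.9]]})
    print(clf.predict([0.85, 0.15, 0.05]))   # -> "tap"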

    Trajectory Prediction with Event-Based Cameras for Robotics Applications

    Get PDF
    This thesis presents the study, analysis, and implementation of a framework to perform trajectory prediction using an event-based camera for robotics applications. Event-based perception represents a novel computation paradigm based on unconventional sensing technology that holds promise for data acquisition, transmission, and processing at very low latency and power consumption, crucial for the future of robotics. An event-based camera, in particular, is a sensor that responds to light changes in the scene, producing an asynchronous and sparse output over a wide illumination dynamic range. It captures only relevant spatio-temporal information - mostly driven by motion - at a high rate, avoiding the inherent redundancy in static areas of the field of view. For such reasons, this device represents a potential key tool for robots that must function in highly dynamic and/or rapidly changing scenarios, or where the optimisation of resources is fundamental, such as robots with on-board systems.

    Prediction skills are something humans rely on daily - even unconsciously - for instance when driving, playing sports, or collaborating with other people. In the same way, predicting the trajectory or the end-point of a moving target allows a robot to plan appropriate actions and their timing in advance, interacting with it in many different manners. Moreover, prediction is also helpful for compensating a robot's internal delays in the perception-action chain, due for instance to limited sensors and/or actuators.

    The question I addressed in this work is whether event-based cameras are advantageous for trajectory prediction in robotics: in particular, whether the classical deep learning architectures used for this task can accommodate event-based data, working asynchronously, and what benefit they can bring with respect to standard cameras. The a priori hypothesis is that, since the sampling of the scene is driven by motion, such a device would allow more meaningful information acquisition, improving the prediction accuracy and processing data only when needed - without any information loss or redundant acquisition.

    To test the hypothesis, experiments are mostly carried out using the neuromorphic iCub, a custom version of the iCub humanoid platform that mounts two event-based cameras in the eyeballs, along with standard RGB cameras. To further motivate the work on iCub, a preliminary step is the evaluation of the robot's internal delays, a value that the prediction should compensate for in order to interact in real time with the perceived object.

    The first part of this thesis covers the implementation of the event-based framework for prediction, answering the question of whether Long Short-Term Memory neural networks, the architecture used in this work, can be combined with event-based cameras. The task considered is the handover human-robot interaction, during which the trajectory of the object in the human's hand must be inferred. Results show that the proposed pipeline can predict both spatial and temporal coordinates of the incoming trajectory with higher accuracy than model-based regression methods. Moreover, it exhibits fast recovery from failure cases and adaptive prediction-horizon behavior.

    Subsequently, I investigated how advantageous the event-based sampling approach is with respect to the classical fixed-rate approach. The test case used is the trajectory prediction of a bouncing ball, implemented with the pipeline previously introduced. A comparison between the two sampling methods is analysed in terms of error for different working rates, showing how the spatial sampling of the event-based approach achieves lower error and also adapts the computational load dynamically, depending on the motion in the scene.

    Results from both works prove that the merging of event-based data and Long Short-Term Memory networks looks promising for spatio-temporal feature prediction in highly dynamic tasks, and paves the way for further studies on the temporal aspect and for a wide range of applications, not only robotics-related. Ongoing work is now focusing on the robot control side, finding the best way to exploit the spatio-temporal information provided by the predictor and defining the optimal robot behavior. Future work will see the shift of the full pipeline - prediction and robot control - to a spiking implementation. First steps in this direction have already been made thanks to a collaboration with a group from the University of Zurich, with which I propose a closed-loop motor controller implemented on a mixed-signal analog/digital neuromorphic processor, emulating a classical PID controller by means of spiking neural networks.
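
    As a compact illustration of combining event streams with Long Short-Term Memory networks for trajectory prediction, the sketch below regresses the next spatial position from a sequence of per-event (x, y, dt) triplets in PyTorch; the input encoding and layer sizes are assumptions for illustration, not the thesis's exact architecture.

    import torch
    import torch.nn as nn

    class EventLSTMPredictor(nn.Module):
        def __init__(self, hidden=64):
            super().__init__()
            self.lstm = nn.LSTM(input_size=3, hidden_size=hidden,
                                batch_first=True)
            self.head = nn.Linear(hidden, 2)   # predicted (x, y)

        def forward(self, events):
            """events: (batch, seq_len, 3) tensor of (x, y, dt) per event.
            The dt channel carries the irregular, asynchronous event timing
            instead of assuming a fixed frame rate."""
            out, _ = self.lstm(events)
            return self.head(out[:, -1])       # prediction from the last step

    model = EventLSTMPredictor()
    seq = torch.randn(1, 50, 3)                # one sequence of 50 events
    next_xy = model(seq)                       # predicted next (x, y)
    print(next_xy.shape)                       # torch.Size([1, 2])

    Feeding inter-event gaps as an input channel is one simple way to let a fixed architecture consume motion-driven, variable-rate data: the network is invoked per event batch rather than per frame, so computation scales with scene motion.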