185 research outputs found

    Dynamic motion coupling of body movement for input control

    Touchless gestures are used for input when touch is unsuitable or unavailable, such as when interacting with displays that are remote, large, or public, or when touch is prohibited for hygienic reasons. Traditionally, user input is spatially or semantically mapped to system output; however, in the context of touchless gestures these interaction principles suffer from several disadvantages, including poor memorability, fatigue, and ill-defined mappings. This thesis investigates motion correlation as a third interaction principle for touchless gestures, which maps user input to system output based on spatiotemporal matching of reproducible motion. We demonstrate the versatility of motion correlation by using movement as the primary sensing principle, relaxing the restrictions on how a user provides input. Using TraceMatch, a novel computer-vision-based system, we show how users can provide effective input, investigating input performance with different parts of the body and how users can switch modes of input spontaneously in realistic application scenarios. Secondly, spontaneous spatial coupling shows how motion correlation can bootstrap spatial input, allowing any body movement, or movement of tangible objects, to be appropriated for ad hoc touchless pointing on a per-interaction basis. We operationalise the concept in MatchPoint and demonstrate its unique capabilities through an exploration of the design space with application examples. Finally, we explore how users synchronise with moving targets in the context of motion correlation, revealing how simple harmonic motion leads to better synchronisation. Using the insights gained, we explore the robustness of algorithms used for motion correlation, showing how it is possible to successfully detect a user's intent to interact whilst suppressing accidental activations from common spatial and semantic gestures. We conclude by looking across our work to distil guidelines for interface design and further considerations of how motion correlation can be used, both in general and for touchless gestures.
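The spatiotemporal matching at the heart of motion correlation can be sketched as a windowed correlation between the user's tracked movement and each displayed target's motion. The function names, the per-axis thresholding scheme, and the 0.8 threshold below are illustrative assumptions, not the thesis's exact algorithm:

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation of two equal-length 1-D signals."""
    a = np.asarray(a, float) - np.mean(a)
    b = np.asarray(b, float) - np.mean(b)
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return 0.0 if denom == 0 else float((a * b).sum() / denom)

def match_target(input_xy, targets_xy, threshold=0.8):
    """Return the index of the target whose motion best correlates with
    the user's input on both axes, or None if no target exceeds the
    threshold (i.e. no intent to interact is detected)."""
    best, best_score = None, threshold
    for i, t in enumerate(targets_xy):
        cx = pearson([p[0] for p in input_xy], [p[0] for p in t])
        cy = pearson([p[1] for p in input_xy], [p[1] for p in t])
        score = min(cx, cy)  # require agreement on both axes
        if score >= best_score:
            best, best_score = i, score
    return best
```

Requiring a high correlation on both axes simultaneously is one simple way to suppress accidental activations: most incidental movement correlates with a target on at most one axis.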

    DialPlates: Enabling Pursuits-based User Interfaces with Large Target Numbers

    In this paper we introduce a novel approach to smooth pursuits eye movement detection and demonstrate that it allows up to 160 targets to be distinguished. With this work we advance the well-established smooth pursuits technique, which allows gaze interaction without calibration. The approach is valuable for researchers and practitioners, since it enables novel user interfaces and applications that employ a large number of targets, for example a pursuits-based keyboard or a smart home in which many different objects can be controlled using gaze. We present findings from two studies. In particular, we compare our novel detection algorithm, based on linear regression, with the correlation method, and we quantify its accuracy for around 20 targets on a single circle and up to 160 targets on multiple circles. Finally, we implemented a pursuits-based keyboard app with 108 targets as a proof of concept.
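As a rough illustration of how a regression-based pursuit detector differs from plain correlation, the sketch below fits gaze samples linearly against each candidate target trajectory and scores the fit by R² per axis. The 0.9 threshold and the per-axis minimum are illustrative assumptions, not the paper's actual algorithm or parameters:

```python
import numpy as np

def fit_quality(gaze, target):
    """Fit gaze = a * target + b independently per axis and return the
    lower of the two R^2 values as a match score."""
    gaze = np.asarray(gaze, float)
    target = np.asarray(target, float)
    scores = []
    for axis in (0, 1):
        x, y = target[:, axis], gaze[:, axis]
        a, b = np.polyfit(x, y, 1)          # least-squares line fit
        resid = y - (a * x + b)
        ss_tot = ((y - y.mean()) ** 2).sum()
        r2 = 0.0 if ss_tot == 0 else 1.0 - (resid ** 2).sum() / ss_tot
        scores.append(r2)
    return min(scores)

def detect_target(gaze, targets, threshold=0.9):
    """Return the index of the best-fitting target trajectory, or None."""
    scores = [fit_quality(gaze, t) for t in targets]
    best = int(np.argmax(scores))
    return best if scores[best] >= threshold else None
```

Because the fitted slope and intercept absorb scale and offset, a regression-based score can match uncalibrated gaze to a target even when the raw coordinates do not overlap the target's path, which is one reason it can separate more targets than raw correlation.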

    Exploration of smooth pursuit eye movements for gaze calibration in games

    Eye tracking offers opportunities to extend novel interfaces and promises new ways of interaction for gameplay. However, gaze has been found challenging to use in dynamic interfaces involving motion: moving targets are hard to select with state-of-the-art gaze input methods, and gaze estimation requires calibration in order to be accurate enough for a successful interaction experience. Smooth pursuit eye movements have been used to address this, but there is not enough information on the behavior of the eyes when performing such movements. In this work, we sought to understand the relationship between gaze and motion during smooth pursuit movements through the integration of calibration within a videogame. In our first study, we propose to leverage the attentive gaze behavior of the eyes during gameplay for implicit and continuous re-calibration. We demonstrated this with GazeBall, a retro-inspired version of Atari's BreakOut game in which we continually calibrate gaze based on the ball's movement and the player's inevitable ocular pursuit of the ball. Continuous calibration enabled the extension of the game with a gaze-based `power-up'. In our evaluation of GazeBall, we show that our approach is effective in maintaining highly accurate gaze input over time, while re-calibration remains invisible to the player. GazeBall also highlighted the lack of information about smooth pursuit for interaction. Therefore, in our second study, we focused on gaining a better understanding of the behavior of the eyes. By testing different motion directions and speeds, we found anticipatory behavior in the gaze trajectory, indicating that when a moving target is present the eyes do not merely follow it but try to predict and anticipate the displayed movement.
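The implicit re-calibration idea can be sketched as a running offset between raw gaze and the ball's known position, updated whenever the player is assumed to be pursuing the ball. The class name, the update rate, and the purely translational (offset-only) correction model are illustrative assumptions, not GazeBall's actual implementation:

```python
import numpy as np

class PursuitRecalibrator:
    """Exponentially weighted offset correction: while the player pursues
    the ball, nudge a 2-D offset toward the gap between raw gaze and the
    ball's known on-screen position."""

    def __init__(self, rate=0.05):
        self.rate = rate              # how quickly the correction adapts
        self.offset = np.zeros(2)     # current drift estimate

    def update(self, raw_gaze, ball_pos):
        """Call once per frame during an assumed pursuit episode."""
        error = np.asarray(ball_pos, float) - np.asarray(raw_gaze, float)
        self.offset += self.rate * (error - self.offset)

    def correct(self, raw_gaze):
        """Apply the current correction to a raw gaze sample."""
        return np.asarray(raw_gaze, float) + self.offset
```

A small update rate keeps the correction invisible to the player: each frame moves the offset only slightly, so calibration drifts smoothly toward the true error rather than jumping.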

    Aerial Vehicles

    This book contains 35 chapters written by experts in developing techniques for making aerial vehicles more intelligent, more reliable, more flexible in use, and safer in operation. It will also serve as an inspiration for further improvement of the design and application of aerial vehicles. The advanced techniques and research described here may also be applicable to other high-tech areas such as robotics, avionics, vetronics, and space.

    Gaze control modelling and robotic implementation

    Although we have the impression that we can process the entire visual field in a single fixation, in reality we would be unable to fully process the information outside of foveal vision if we were unable to move our eyes. Because of acuity limitations in the retina, eye movements are necessary for processing the details of the array. Our ability to discriminate fine detail drops off markedly outside of the fovea in the parafovea (extending out to about 5 degrees on either side of fixation) and in the periphery (everything beyond the parafovea). While we are reading or searching a visual array for a target or simply looking at a new scene, our eyes move every 200-350 ms. These eye movements serve to move the fovea (the high resolution part of the retina encompassing 2 degrees at the centre of the visual field) to an area of interest in order to process it in greater detail. During the actual eye movement (or saccade), vision is suppressed and new information is acquired only during the fixation (the period of time when the eyes remain relatively still). While it is true that we can move our attention independently of where the eyes are fixated, it does not seem to be the case in everyday viewing. The separation between attention and fixation is often attained in very simple tasks; however, in tasks like reading, visual search, and scene perception, covert attention and overt attention (the exact eye location) are tightly linked. Because eye movements are essentially motor movements, it takes time to plan and execute a saccade. In addition, the end-point is pre-selected before the beginning of the movement. There is considerable evidence that the nature of the task influences eye movements. Depending on the task, there is considerable variability both in terms of fixation durations and saccade lengths. It is possible to outline five separate movement systems that put the fovea on a target and keep it there. 
Each of these movement systems shares the same effector pathway—the three bilateral groups of oculomotor neurons in the brain stem. These five systems include three that keep the fovea on a visual target in the environment and two that stabilize the eye during head movement. Saccadic eye movements shift the fovea rapidly to a visual target in the periphery. Smooth pursuit movements keep the image of a moving target on the fovea. Vergence movements move the eyes in opposite directions so that the image is positioned on both foveae. Vestibulo-ocular movements hold images still on the retina during brief head movements and are driven by signals from the vestibular system. Optokinetic movements hold images still during sustained head rotation and are driven by visual stimuli. All eye movements but vergence movements are conjugate: each eye moves the same amount in the same direction. Vergence movements are disconjugate: the eyes move in different directions and sometimes by different amounts. Finally, there are times that the eye must stay still in the orbit so that it can examine a stationary object. Thus, a sixth system, the fixation system, holds the eye still during intent gaze. This requires active suppression of eye movement. Vision is most accurate when the eyes are still. When we look at an object of interest, a neural system of fixation actively prevents the eyes from moving. The fixation system is less active when we are doing something that does not require vision, for example, mental arithmetic. Our eyes explore the world in a series of active fixations connected by saccades. The purpose of the saccade is to move the eyes as quickly as possible. Saccades are highly stereotyped; they have a standard waveform with a single smooth increase and decrease of eye velocity. Saccades are extremely fast, occurring within a fraction of a second, at speeds up to 900°/s. Only the distance of the target from the fovea determines the velocity of a saccadic eye movement.
We can change the amplitude and direction of our saccades voluntarily, but we cannot change their velocities. Ordinarily there is no time for visual feedback to modify the course of the saccade; corrections to the direction of movement are made in successive saccades. Only fatigue, drugs, or pathological states can slow saccades. Accurate saccades can be made not only to visual targets but also to sounds, tactile stimuli, memories of locations in space, and even verbal commands (“look left”). The smooth pursuit system keeps the image of a moving target on the fovea by calculating how fast the target is moving and moving the eyes accordingly. The system requires a moving stimulus in order to calculate the proper eye velocity. Thus, a verbal command or an imagined stimulus cannot produce smooth pursuit. Smooth pursuit movements have a maximum velocity of about 100°/s, much slower than saccades. The saccadic and smooth pursuit systems have very different central control systems. A coherent integration of these different eye movements, together with the other movements, essentially corresponds to a gating-like effect on the brain areas involved. Gaze control can thus be seen as comprising one system that decides which action should be enabled and which should be inhibited, and another that improves the performance of the action while it is executed. It follows that the underlying guiding principle of gaze control is the kind of stimuli presented to the system, which links it to the task that is going to be executed. This thesis aims at validating the strong relation between actions and gaze. In the first part, a gaze controller has been studied and implemented on a robotic platform in order to understand the specific features of prediction and learning shown by the biological system. The integration of eye movements raises the problem of which action should be selected when a new stimulus is presented.
The action selection problem is solved by the basal ganglia, brain structures that react to the different salience values of the environment. In the second part of this work, gaze behaviour has been studied during a locomotion task. The final objective is to show how different tasks, such as locomotion, shape the salience values that drive the gaze.

    Design, Control, and Evaluation of a Human-Inspired Robotic Eye

    Schulz S. Design, Control, and Evaluation of a Human-Inspired Robotic Eye. Bielefeld: Universität Bielefeld; 2020. The field of human-robot interaction deals with robotic systems that involve humans and robots closely interacting with each other. With these systems getting more complex, users can easily be overburdened by their operation and can fail to infer the internal state of the system or its "intentions". A social robot that replicates the human eye region, with its familiar features and movement patterns that are the result of years of evolution, can counter this. However, the replication of these patterns requires hardware and software able to compete with human characteristics and performance. Comparing previous systems found in the literature with human capabilities reveals a mismatch in this regard. Even though individual systems solve single aspects, their successful combination into a complete system remains an open challenge. In contrast to previous work, this thesis aims to close this gap by viewing the system as a whole, optimizing the hardware and software while focusing on the replication of the human model right from the beginning. This work ultimately provides a set of interlocking building blocks that, taken together, form a complete end-to-end solution for the design, control, and evaluation of a human-inspired robotic eye. Based on the study of the human eye, the key driving factors are identified as the successful combination of aesthetic appeal, sensory capabilities, performance, and functionality. Two hardware prototypes, each based on a different actuation scheme, have been developed in this context. Furthermore, both hardware prototypes are evaluated against each other, a previous prototype, and the human eye by comparing objective numbers obtained by real-world measurements of the real hardware.
In addition, a human-inspired and model-driven control framework is developed, again following the predefined criteria and requirements. The quality and human-likeness of the motion generated by this model are evaluated by means of a user study. This framework not only allows the replication of human-like motion on the specific eye prototype presented in this thesis, but also supports porting and adaptation to less well-equipped humanoid robotic heads. Unlike previous systems found in the literature, the presented approach provides a scaling and limiting function that allows intuitive adjustment of the control model, which can be used to reduce the requirements placed on the target platform. Even though a reduction of the overall velocities and accelerations will result in slower motion execution, the human characteristics and the overall composition of the interlocked motion patterns remain unchanged.

    Investigation of low-cost infrared sensing for intelligent deployment of occupant restraints

    In automotive transport, airbags and seatbelts are effective at restraining the driver and passenger in the event of a crash, with statistics showing a dramatic reduction in the number of casualties from road crashes. However, statistics also show that a small number of people have been injured or even killed by striking the airbag, and that the elderly and small children are especially at risk of airbag-related injury. This is because in-car restraint systems were designed for the average male at an average speed of 50 km/h, and people outside these norms are at risk. One of the future safety goals of car manufacturers is therefore to deploy sensors that gain more information about the driver or passenger of their cars in order to tailor the safety systems specifically for that person, and this is the goal of this project. This thesis describes a novel approach to occupant detection, position measurement, and monitoring using a low-cost thermal-imaging-based system, a departure from traditional video-camera-based systems at an affordable price. Experiments were carried out using a specially designed test rig and a car driving simulator with members of the public. Results have shown that the thermal imager can detect a human in a car cabin mock-up and provide crucial real-time position data, which could be used to support intelligent restraint deployment. Other valuable information has been detected, such as whether the driver is smoking, drinking a hot or cold drink, or using a mobile phone, which can help to infer the level of driver attentiveness or engagement.
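A minimal sketch of how occupant position might be recovered from a low-resolution thermal frame, assuming a simple temperature threshold followed by a centroid of the warm pixels; the threshold value and the plain-array representation are illustrative, not the thesis's actual processing pipeline:

```python
import numpy as np

def occupant_position(frame, threshold=30.0):
    """Given a low-resolution thermal frame (2-D array of temperatures
    in degrees C), return the (row, col) centroid of pixels warmer than
    the threshold, or None if no pixel qualifies (empty seat)."""
    frame = np.asarray(frame, float)
    warm = frame > threshold          # skin is well above cabin ambient
    if not warm.any():
        return None
    rows, cols = np.nonzero(warm)
    return float(rows.mean()), float(cols.mean())
```

Tracking this centroid frame-to-frame would give the kind of real-time position data described above, e.g. flagging an occupant leaning too close to the airbag module.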

    Advanced Knowledge Application in Practice

    The integration and interdependency of the world economy leads towards the creation of a global market that offers more opportunities, but is also more complex and competitive than ever before. Widespread research activity is therefore necessary if one is to remain successful in the market. This book is the result of research and development activities by a number of researchers worldwide, covering concrete fields of research.