39 research outputs found

    Multimodal interactions in virtual environments using eye tracking and gesture control.

    Multimodal interactions provide users with more natural ways to interact with virtual environments than traditional input methods. An emerging approach is gaze modulated pointing, which enables users to select and manipulate virtual content conveniently through a combination of gaze and other hand control techniques/pointing devices, in this thesis mid-air gestures. To establish a synergy between the two modalities and evaluate the affordance of this novel multimodal interaction technique, it is important to understand their behavioural patterns and relationship, as well as any possible perceptual conflicts and interactive ambiguities. More specifically, evidence shows that eye movements lead hand movements, but the question remains whether this leading relationship is similar when interacting with a pointing device. Moreover, as gaze modulated pointing uses different sensors to track and detect user behaviours, its performance relies on users' perception of the exact spatial mapping between the virtual space and the physical space. This raises the underexplored issue of whether gaze can introduce misalignment in the spatial mapping, leading to user misperception and interactive errors. Furthermore, the accuracy of eye tracking and mid-air gesture control is not yet comparable with traditional pointing techniques (e.g., the mouse). This may cause pointing ambiguity when fine-grained interactions are required, such as selection in a dense virtual scene where proximity and occlusion are prone to occur. This thesis addresses these concerns through experimental studies and theoretical analysis involving paradigm design, development of interactive prototypes, and user studies for verification of assumptions, comparisons, and evaluations. Substantial data sets were obtained and analysed from each experiment.
    The results conform to and extend previous empirical findings that gaze leads pointing-device movements in most cases, both spatially and temporally. It is confirmed that gaze does introduce spatial misperception; three methods (Scaling, Magnet and Dual-gaze) were proposed and shown to reduce the impact of this perceptual conflict, with Magnet and Dual-gaze delivering better performance than Scaling. In addition, a coarse-to-fine solution is proposed and evaluated to compensate for the degradation introduced by eye-tracking inaccuracy, which uses a gaze cone to detect ambiguity followed by a gaze probe for decluttering. The results show that this solution can enhance interaction accuracy but requires a compromise on efficiency. These findings can inform a more robust multimodal interface design for interactions within virtual environments supported by both eye tracking and mid-air gesture control. This work also opens up a technical pathway for the design of future multimodal interaction techniques, which starts from a derivation of naturally correlated behavioural patterns, and then considers whether the design of the interaction technique can maintain perceptual constancy and whether any ambiguity among the integrated modalities will be introduced.
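The coarse "gaze cone" stage of the selection approach described above can be sketched roughly as follows. This is a minimal illustration, not the thesis's implementation: the function name, the 3-degree half-angle, and the representation of targets as 3D points are assumptions made for the example. Any cone containing more than one target would signal the pointing ambiguity that the finer "gaze probe" stage then resolves.

```python
import math

def targets_in_gaze_cone(gaze_origin, gaze_dir, targets, half_angle_deg=3.0):
    """Return the targets whose direction from the eye lies inside the gaze cone.

    Using a cone rather than a ray absorbs eye-tracker inaccuracy: every
    target inside the cone is a selection candidate, and more than one
    candidate indicates ambiguity. Assumes gaze_dir is a unit vector.
    """
    cos_limit = math.cos(math.radians(half_angle_deg))
    hits = []
    for t in targets:
        v = [t[i] - gaze_origin[i] for i in range(3)]
        norm = math.sqrt(sum(c * c for c in v))
        if norm == 0:
            continue  # target coincides with the eye position; skip it
        cos_angle = sum(v[i] * gaze_dir[i] for i in range(3)) / norm
        if cos_angle >= cos_limit:
            hits.append(t)
    return hits
```

With a forward-looking gaze, two closely spaced targets ten metres away would both fall inside the cone, triggering the decluttering step.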

    Development of a Compact, Configurable, Real-Time Range Imaging System

    No full text
    This thesis documents the development of a time-of-flight (ToF) camera suitable for autonomous mobile robotics applications. By measuring the round-trip time of emitted light to and from objects in the scene, the system is capable of simultaneous full-field range imaging. This is achieved by projecting amplitude modulated continuous wave (AMCW) light onto the scene and recording the reflection using an image sensor array with a high-speed shutter amplitude modulated at the same frequency (of the order of tens of MHz). The effect is to encode the phase delay of the reflected light as a change in pixel intensity, which is then interpreted as distance. A full-field range imaging system has been constructed based on the PMD Technologies PMD19k image sensor, in which the high-speed shuttering mechanism is built into the integrated circuit. This produces a system that is considerably more compact and power efficient than previous iterations that employed an image intensifier to provide sensor modulation. The new system has comparable performance to commercially available systems in terms of distance measurement precision and accuracy, but is much more flexible with regard to its operating parameters. All of the operating parameters, including the image integration time, sensor modulation phase offset, and modulation frequency, can be changed in real time, either manually or automatically through software. This highly configurable system serves as an excellent platform for research into novel range imaging techniques. One promising technique is the use of measurements at multiple modulation frequencies to maximise precision over an extended operating range. Each measurement gives an independent estimate of the distance, with an unambiguous range that depends on the modulation frequency. These estimates are combined into a measurement with extended maximum range using a novel algorithm based on the New Chinese Remainder Theorem.
    A theoretical model for the measurement precision and accuracy of the new algorithm is presented and verified with experimental results. All distance image processing is performed on a per-pixel basis in real time using a Field Programmable Gate Array (FPGA). An efficient hardware implementation of the phase determination algorithm for calculating distance is investigated. The limiting resource for such an implementation is random access memory (RAM), and a detailed analysis of the trade-off between this resource and measurement precision is also presented.
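The phase-to-distance relationship and the multi-frequency range extension described above can be illustrated with a small sketch. A single AMCW phase gives d = c·φ/(4πf), unambiguous only up to c/(2f); combining two frequencies extends this to c/(2·gcd(f1, f2)). The brute-force search below is an illustrative stand-in for the thesis's New Chinese Remainder Theorem algorithm, not that algorithm itself, and the function names are assumptions for the example.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def wrapped_distance(phase_rad, f_mod):
    """Distance encoded by one AMCW phase measurement.

    The modulation-envelope phase wraps every 2*pi, so this value is only
    the true distance modulo the unambiguous range c / (2 * f_mod).
    """
    return C * phase_rad / (4 * math.pi * f_mod)

def disambiguate(phase1, f1, phase2, f2, max_range):
    """Pick the distance consistent with both wrapped measurements.

    max_range should not exceed the combined unambiguous range
    c / (2 * gcd(f1, f2)), beyond which aliases reappear.
    """
    r1, r2 = C / (2 * f1), C / (2 * f2)
    d1, d2 = wrapped_distance(phase1, f1), wrapped_distance(phase2, f2)
    best, best_err = None, float("inf")
    n1 = 0
    while d1 + n1 * r1 <= max_range:
        cand1 = d1 + n1 * r1                  # candidate from frequency 1
        n2 = round((cand1 - d2) / r2)         # nearest wrap count for frequency 2
        cand2 = d2 + n2 * r2
        err = abs(cand1 - cand2)
        if err < best_err:
            best, best_err = (cand1 + cand2) / 2, err
        n1 += 1
    return best
```

For example, 30 MHz and 24 MHz measurements each wrap within about 5 m and 6.2 m respectively, yet together they resolve distances out to roughly 25 m.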

    Intelligent Sensors for Human Motion Analysis

    The book "Intelligent Sensors for Human Motion Analysis" contains 17 articles published in the Special Issue of the Sensors journal. These articles deal with many aspects of the analysis of human movement. New techniques and methods for pose estimation, gait recognition, and fall detection have been proposed and verified. Some of them will trigger further research, and some may become the backbone of commercial systems.
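As a flavour of the fall-detection methods surveyed in such collections, a common baseline looks for a near-free-fall dip followed shortly by an impact spike in total acceleration magnitude. The sketch below is a generic illustration with assumed threshold values, not a method from any specific article in the book; real systems tune thresholds per sensor placement and usually add posture checks.

```python
import math

def detect_fall(samples, free_fall_g=0.4, impact_g=2.5, window=10):
    """Flag a fall when a dip below free_fall_g is followed within `window`
    samples by a spike above impact_g. Samples are (x, y, z) accelerations
    in units of g; thresholds here are illustrative assumptions.
    """
    mags = [math.sqrt(x * x + y * y + z * z) for (x, y, z) in samples]
    for i, m in enumerate(mags):
        if m < free_fall_g:
            end = min(i + 1 + window, len(mags))
            if any(mags[j] > impact_g for j in range(i + 1, end)):
                return True
    return False
```

Steady standing (magnitude near 1 g) produces no alarm, while a dip-then-spike pattern does.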

    Studies in ambient intelligent lighting

    The revolution in lighting we are arguably experiencing is led by technical developments in the area of solid state lighting technology. The improved lifetime, efficiency, and environmentally friendly raw materials make LEDs the main contender for the light source of the future. The core of the change is, however, not in the basic technology, but in the way users interact with it and the way the quality of the produced effect on the environment is judged. With this new-found freedom, users can switch their focus from the confines of the technology to the expression of their needs, regardless of the details of the lighting system. Identifying user needs, creating an effective language to communicate them to the system, translating them into control signals that fulfill them, and defining the means to measure the quality of the produced result are the subject of a new multidisciplinary area of study, Ambient Intelligent Lighting. This thesis describes a series of studies in the field of Ambient Intelligent Lighting, divided into two parts. The first part of the thesis demonstrates how, by adopting a user-centric design philosophy, the traditional control paradigms can be superseded by novel, so-called effect-driven controls. Chapter 3 describes an algorithm that, using statistical methods and image processing, generates a set of colors based on a term or set of terms. The algorithm uses Internet image search engines (Google Images, Flickr) to acquire a set of images that represent a term and subsequently extracts representative colors from the set. Additionally, an estimate of the quality of the extracted set of colors is computed. Based on the algorithm, a system that automatically enriches music with lyrics-based images and lighting was built and is described. Chapter 4 proposes a novel effect-driven control algorithm, offering users an easy, natural, and system-agnostic means to create a spatial light distribution.
    By using an emerging technology, visible light communication, and an intuitive effect definition, a real-time interactive light design system was developed. Usability studies on a virtual prototype of the system demonstrated the perceived ease of use and increased efficiency of the effect-driven approach. In chapter 5, natural temporal light transitions are modeled and reproduced using stochastic models. Based on an example video of a natural light effect, a Markov model of the transitions between colors of a single light source representing the effect is learned. The model is a compact, easily reproduced, and, as the user studies show, recognizable representation of the original light effect. The second part of the thesis studies the perceived quality of one of the unique capabilities of LEDs: chromatic temporal transitions. Using psychophysical methods, existing spatial models of human color vision were found to be unsuitable for predicting the visibility of temporal artifacts caused by digital controls. The chapters in this part demonstrate new perceptual effects and take the first steps towards building a temporal model of human color vision. In chapter 6, the perception of smoothness of digital light transitions is studied. The studies presented demonstrate the dependence of the visibility of digital steps in a temporal transition on the frequency of change, chromaticity, intensity, and direction of change of the transition. Furthermore, a clear link between the visibility of digital steps and flicker visibility is demonstrated. Finally, a new exponential law for the dependence of the threshold speed of smooth transitions on the changing frequency is hypothesized and proven in subsequent experiments. Chapter 7 studies the discrimination and preference of different color transitions between two colors. Due to memory effects, the discrimination threshold for complete transitions was shown to be larger than the discrimination threshold for two single colors.
    Two linear transitions in different color spaces were shown to be significantly preferred over a set of other, curved, transitions. Chapter 8 studies chromatic and achromatic flicker visibility in the periphery. A complex change is observed in both the absolute visibility thresholds for different frequencies and the critical flicker frequency. Finally, an increase in the absolute visibility thresholds caused by the addition of a mental task in central vision is demonstrated.
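The chapter-5 idea of learning a Markov model of color transitions from an example light effect can be sketched as follows. This is a simplified first-order illustration with hypothetical function names and symbolic color labels, assuming the example effect has already been quantized into a color sequence; the thesis's actual model and color representation may differ.

```python
import random
from collections import defaultdict

def learn_markov(color_sequence):
    """First-order Markov model learned by counting adjacent color pairs.

    Returns {color: {next_color: probability}} estimated from the example
    sequence, a compact representation of the temporal light effect.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(color_sequence, color_sequence[1:]):
        counts[a][b] += 1
    model = {}
    for a, nxt in counts.items():
        total = sum(nxt.values())
        model[a] = {b: n / total for b, n in nxt.items()}
    return model

def sample_effect(model, start, length, rng=random):
    """Reproduce the learned effect by walking the chain from `start`."""
    out = [start]
    for _ in range(length - 1):
        probs = model.get(out[-1])
        if not probs:
            break  # absorbing state: no observed outgoing transitions
        colors, weights = zip(*probs.items())
        out.append(rng.choices(colors, weights=weights)[0])
    return out
```

Learning from a short candle-like sequence and then sampling produces new sequences with the same transition statistics, which is what makes the reproduced effect recognizable.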

    From motion capture to interactive virtual worlds : towards unconstrained motion-capture algorithms for real-time performance-driven character animation

    This dissertation takes performance-driven character animation as a representative application and advances motion capture algorithms and animation methods to meet its high demands. Existing approaches either have coarse resolution and a restricted capture volume, require expensive and complex multi-camera systems, or use intrusive suits and controllers. For motion capture, set-up time is reduced by using fewer cameras, accuracy is increased despite occlusions and general environments, initialization is automated, and free roaming is enabled by egocentric cameras. For animation, increased robustness enables the use of low-cost sensor input, custom control-gesture definition is guided to support novice users, and animation expressiveness is increased. The important contributions are: 1) an analytic and differentiable visibility model for pose optimization under strong occlusions, 2) a volumetric contour model for automatic actor initialization in general scenes, 3) a method to annotate and augment image-pose databases automatically, 4) the utilization of unlabeled examples for character control, and 5) the generalization and disambiguation of cyclical gestures for faithful character animation. In summary, the whole process of human motion capture, processing, and application to animation is advanced. These advances on the state of the art have the potential to improve many interactive applications, within and outside virtual reality.