10 research outputs found

    Navigating large-scale virtual environments: what differences occur between helmet-mounted and desk-top displays?

    Participants used a helmet-mounted display (HMD) and a desk-top (monitor) display to learn the layouts of two large-scale virtual environments (VEs) through repeated, direct navigational experience. Both VEs were "virtual buildings" containing more than seventy rooms. Participants using the HMD navigated the buildings significantly more quickly and developed a significantly more accurate sense of relative straight-line distance. There was no significant difference between the two types of display in terms of the distance that participants traveled or the mean accuracy of their direction estimates. Behavioral analyses showed that participants took advantage of the natural, head-tracked interface provided by the HMD in ways that included "looking around" more often while traveling through the VEs and spending less time stationary in the VEs while choosing a direction in which to travel.

    Robot-aided neurorehabilitation of the upper extremities

    Task-oriented repetitive movements can improve muscle strength and movement co-ordination in patients with impairments due to neurological lesions. The application of robotics and automation technology can serve to assist, enhance, evaluate and document the rehabilitation of movements. This paper provides an overview of existing devices that can support movement therapy of the upper extremities in subjects with neurological pathologies. The devices are critically compared with respect to technical function, clinical applicability, and, where available, clinical outcomes.

    Handheld Augmented Reality in education

    In this thesis we conduct research in Augmented Reality (AR) aimed at learning environments, where the interaction with the students is carried out using handheld devices. Through three studies we explore the learning outcomes that can be obtained using handheld AR in a game that we developed for children. We explore the influence of AR in Virtual Reality Learning Environments (VRLE) and the advantages it can bring, as well as its limits. We also test the game on two different handheld devices (a smartphone and a Tablet PC) and present the conclusions, comparing them with respect to satisfaction and interaction. Finally, we compare touch-based and tangible user interfaces in AR applications for children from a Human-Computer Interaction perspective.
    González Gancedo, S. (2012). Handheld Augmented Reality in education. http://hdl.handle.net/10251/17973

    Enabling Human-Robot Collaboration via Holistic Human Perception and Partner-Aware Control

    As robotic technology advances, the barriers to the coexistence of humans and robots are slowly coming down. Application domains such as elderly care, collaborative manufacturing and collaborative manipulation are considered the need of the hour, and progress in robotics holds the potential to address many societal challenges. Future socio-technical systems will consist of a blended workforce with a symbiotic relationship between human and robot partners working collaboratively. This thesis attempts to address some of the research challenges in enabling human-robot collaboration. In particular, holistic perception of a human partner, so that their intentions and needs are continuously communicated to a robot partner in real time, is crucial for the successful realization of a collaborative task. Towards that end, we present a holistic human perception framework for real-time monitoring of whole-body human motion and dynamics. On the other hand, leveraging assistance from a human partner can lead to improved human-robot collaboration. In this direction, we attempt to methodically define what constitutes assistance from a human partner and propose partner-aware robot control strategies to endow robots with the capacity to meaningfully engage in a collaborative task.
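
    As a rough illustration of the two ingredients named above, the Python sketch below shows a hypothetical whole-body human-state message and a toy rule that scales the robot's share of a collaborative load with the human's measured effort. The field names, the 6-element wrench layout, and the blending rule are assumptions for illustration, not the thesis's actual framework.

        from dataclasses import dataclass
        from typing import List

        # Sketch: a holistic human-state message streamed in real time to the robot,
        # and a toy partner-aware rule that modulates the robot's share of the load.
        @dataclass
        class HumanState:
            joint_positions: List[float]    # whole-body joint angles (rad)
            joint_velocities: List[float]   # joint velocities (rad/s)
            wrench: List[float]             # estimated external force/torque [fx, fy, fz, tx, ty, tz]

        def robot_assistance(human: HumanState, task_force: float, comfort_force: float = 20.0) -> float:
            """Return how much of task_force (N) the robot should provide.

            The more the human's measured vertical force exceeds a comfort
            threshold, the larger the share the robot takes on (capped at 100%)."""
            human_fz = abs(human.wrench[2])
            share = min(human_fz / comfort_force, 1.0)
            return share * task_force

        state = HumanState([0.1] * 23, [0.0] * 23, [0.0, 0.0, 35.0, 0.0, 0.0, 0.0])
        print(robot_assistance(state, task_force=50.0))   # human is overloaded, so the robot supplies the full 50 N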

    Visualisation of Long in Time Dynamic Networks on Large Touch Displays

    Any dataset containing information about relationships between entities can be modelled as a network. This network can be static, where the entities/relationships do not change over time, or dynamic, where the entities/relationships change over time. Network data that changes over time, dynamic network data, is a powerful resource when studying many important phenomena, across wide-ranging fields from travel networks to epidemiology. However, it is very difficult to analyse this data, especially if it covers a long period of time (e.g., one month) with respect to its temporal resolution (e.g., seconds). In this thesis, we address the problem of visualising long in time dynamic networks: networks that may not be particularly large in terms of the number of entities or relationships, but are long in terms of the length of time they cover when compared to their temporal resolution. We first introduce Dynamic Network Plaid, a system for the visualisation and analysis of long in time dynamic networks. We design and build for an 84" vertically-mounted touch-screen display, as existing work reports positive results for the use of such displays in a visualisation context and finds them useful for collaboration. The Plaid integrates multiple views and we prioritise the visualisation of interaction provenance. In this system we also introduce a novel method of time exploration called ‘interactive timeslicing’. This allows the selection and comparison of points that are far apart in time, a feature not offered by existing visualisation systems. The Plaid is validated through an expert user evaluation with three public health researchers. To confirm observations of the expert user evaluation, we then carry out a formal laboratory study with a large touch-screen display to verify our novel method of time navigation against existing animation and small multiples approaches. From this study, we find that interactive timeslicing outperforms animation and small multiples for complex tasks requiring a comparison between multiple points that are far apart in time. We also find that small multiples is best suited to comparisons of multiple sequential points in time across a time interval. To generalise the results of this experiment, we later run a second formal laboratory study in the same format as the first, but this time using standard-sized displays with indirect mouse input. The second study reaffirms the results of the first, showing that our novel method of time navigation can facilitate the visual comparison of points that are distant in time in a way that existing approaches, small multiples and animation, cannot. The study demonstrates that our previous results generalise across display size and interaction type (touch vs mouse). In this thesis we introduce novel representations and time interaction techniques to improve the visualisation of long in time dynamic networks, and experimentally show that our novel method of time interaction outperforms other popular methods for some task types.
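
    As a rough sketch of the timeslicing idea, the Python snippet below builds independent snapshots of a dynamic network at user-selected instants that may be arbitrarily far apart. The contact-list representation, the snapshot_at helper, and the sample timestamps are illustrative assumptions, not the Plaid's implementation.

        from collections import defaultdict

        # Sketch: build static snapshots of a long-in-time dynamic network at
        # arbitrary, possibly far-apart timestamps so they can be compared side by side.
        # Contacts are assumed to be (source, target, timestamp) tuples.
        def snapshot_at(contacts, t, window):
            """Return the weighted edge set active in the interval [t, t + window)."""
            edges = defaultdict(int)
            for src, dst, ts in contacts:
                if t <= ts < t + window:
                    edges[(src, dst)] += 1   # repeated contacts accumulate as edge weight
            return dict(edges)

        # Interactive timeslicing: the user picks timestamps that may be days apart,
        # and each selection yields an independent snapshot for visual comparison.
        contacts = [("a", "b", 10), ("b", "c", 12), ("a", "b", 86400), ("c", "d", 86405)]
        early = snapshot_at(contacts, t=0,     window=60)   # first minute of the recording
        late  = snapshot_at(contacts, t=86400, window=60)   # same-length slice one day later
        print(early, late)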

    Conception, développement et validation expérimentale d'une boussole haptique

    This Master's thesis presents the design, control and experimental validation of a haptic compass, designed as a guiding device for the visually impaired in all environments. The literature review shows that there is a need for haptic guidance and how this technology differs from current haptic devices. The proposed device uses the principle of asymmetric torques. Its design is based on a direct-drive motor and a pre-calibrated open-loop control, which allows the generation of stimuli over a wide range of frequencies. The device is calibrated and its mechanical properties are evaluated to ensure that the open-loop control produces torques with sufficient precision. A first user study shows promising effectiveness in the frequency range of 5 to 15 Hz and for torques over 40 mNm. In a second experiment, the use of haptic feedback proportional to the angular error is shown to significantly improve performance. An experimental validation with nineteen subjects walking along a route with the aid of the portable device alone is then reported. The results show that all participants met all route objectives while maintaining small lateral deviations (0.39 m on average). The performances obtained and the users' impressions are favorable and confirm the potential of this device. Finally, a simplified model of human behaviour in the orientation task is developed and demonstrates the importance of individual customization. A receding-horizon strategy for placing the current intermediate target on a long-distance path is thereby proposed.
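
    A minimal Python sketch of the control idea reported above: torque pulses whose amplitude grows with the heading error, in the ranges the experiments found effective (5 to 15 Hz, torques above 40 mNm). The pulse shape, gains, limits and function names are illustrative assumptions, not the thesis's calibrated controller.

        import math

        # Sketch of error-proportional haptic feedback for a torque-based compass.
        # Heading error is wrapped to [-pi, pi]; torque amplitude scales with |error|
        # between an assumed perceptible floor (40 mNm) and an assumed ceiling.
        def heading_error(target_heading, current_heading):
            err = target_heading - current_heading
            return math.atan2(math.sin(err), math.cos(err))   # wrap to [-pi, pi]

        def torque_command(err, t, freq_hz=10.0, min_mnm=40.0, max_mnm=120.0):
            amplitude = min_mnm + (max_mnm - min_mnm) * min(abs(err) / math.pi, 1.0)
            # One short, strong pulse toward the target per period; the rest of the
            # period is left at zero here (a real asymmetric-torque device would use
            # a slow counter-rotation so the net rotation stays bounded).
            phase = (t * freq_hz) % 1.0
            direction = 1.0 if err > 0 else -1.0
            return direction * amplitude if phase < 0.2 else 0.0   # torque in mNm

        # Example: the user is 30 degrees off target; sample the command over 100 ms.
        err = heading_error(math.radians(90), math.radians(60))
        samples = [torque_command(err, t / 1000.0) for t in range(0, 100, 10)]
        print(round(math.degrees(err)), samples)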

    Depth, shading, and stylization in stereoscopic cinematography

    Due to the constantly increasing focus of the entertainment industry on stereoscopic imaging, techniques and tools that enable precise control over the depth impression and help to overcome limitations of current stereoscopic hardware are gaining in importance. In this dissertation, we address selected problems encountered during stereoscopic content production, with a particular focus on stereoscopic cinema. First, we consider abrupt changes of depth, such as those induced by cuts in films. We derive a model predicting the time the visual system needs to adapt to such changes and propose how to employ this model for film cut optimization. Second, we tackle the issue of discrepancies between the two views of a stereoscopic image due to view-dependent shading of glossy materials. The suggested solution eliminates discomfort caused by non-matching specular highlights while preserving the perception of gloss. Last, we deal with the problem of film grain management in stereoscopic productions and propose a new method for film grain application that reconciles visual comfort with the idea of medium-scene separation.
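
    To make the cut-optimization idea concrete, the sketch below assumes a placeholder cost in which adaptation time grows with the size of the disparity jump across a cut; the dissertation derives its own predictive model, which is not reproduced here, and the numbers are purely illustrative.

        # Illustrative sketch of how a depth-adaptation model could drive cut optimization.
        # The linear cost below is a placeholder assumption, not the dissertation's model.
        def adaptation_cost(disparity_before, disparity_after, seconds_per_degree=0.5):
            """Placeholder: assume adaptation time grows with the disparity jump (degrees)."""
            return abs(disparity_after - disparity_before) * seconds_per_degree

        def best_cut(disparities, candidate_frames):
            """Pick the candidate cut frame with the smallest predicted adaptation time."""
            return min(candidate_frames,
                       key=lambda f: adaptation_cost(disparities[f - 1], disparities[f]))

        disparities = [0.2, 0.3, 1.5, 1.4, 0.4, 0.5]   # per-frame screen disparity (degrees)
        print(best_cut(disparities, candidate_frames=[2, 4]))   # frame 4: 1.0 deg jump beats frame 2's 1.2 deg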

    Étude de la performance humaine en téléopération : le cas forestier

    Analysis of the forestry teleoperation context -- Review of the literature on control interfaces for teleoperation -- Development of the test bench and experimental plan -- Description of the graphics simulator -- Comparative study of different control interfaces -- Learning and performance model

    The use of mobile phones as service-delivery devices in sign language machine translation system

    This thesis investigates the use of mobile phones as service-delivery devices in a sign language machine translation system. Four sign language visualization methods were evaluated on mobile phones; three of them were synthetic sign language visualization methods. Three factors were considered: the intelligibility of the sign language as rendered by each method, the power consumption, and the bandwidth usage associated with each method. The average intelligibility rate was 65%, with some methods achieving intelligibility rates of up to 92%. The average data size was 162 KB and, on average, power consumption increased to 180% of the idle state across all methods. This research forms part of the Integration of Signed and Verbal Communication: South African Sign Language Recognition and Animation (SASL) project at the University of the Western Cape and serves as an integration platform for the group's research. In order to perform this research, a machine translation system that uses mobile phones as service-delivery devices was developed, as well as a 3D avatar for mobile phones. It was concluded that mobile phones are suitable service-delivery platforms for sign language machine translation systems.
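
    As an illustration of how two of the evaluation factors above could be quantified per method, the sketch below computes an intelligibility rate and a power draw relative to idle. The helper names and the sample measurements are hypothetical; only the averaged figures quoted above come from the study.

        # Sketch: per-method evaluation metrics for the mobile sign language system.
        def intelligibility(correct_responses, total_phrases):
            """Fraction of signed phrases correctly understood, as a percentage."""
            return 100.0 * correct_responses / total_phrases

        def relative_power(active_mw, idle_mw):
            """Power while rendering, expressed as a percentage of the idle draw."""
            return 100.0 * active_mw / idle_mw

        # Hypothetical measurements for one visualization method:
        print(intelligibility(23, 25))        # 92.0 -> rate of the best-performing method
        print(relative_power(540.0, 300.0))   # 180.0 -> matches the reported average increase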

    Evaluation Experiments of a Teleexistence Manipulation System
