
    Design of a Simulated Hospital Tour in a Scanned 3D Model for Children

    Get PDF
    Master's Thesis in Joint Master's Programme in Software Engineering - collaboration with HVL (PROG399, MAMN-PRO)

    Vuforia v1.5 SDK: Analysis and evaluation of capabilities

    Get PDF
    This thesis delves into the world of augmented reality and, more specifically, into the uses of Vuforia, with the goal of analyzing its characteristics. The first objective is to give a short explanation of what is understood by augmented reality and of the different varieties of AR applications that exist today, and then to describe the SDK's features, architecture, and elements. To understand the basis of the detection process performed by the Vuforia library, it is also important to cover the main considerations of image recognition, because this is how Vuforia recognizes the different patterns. Another objective is to survey the possible fields of application for this library and to briefly outline the main steps for creating an implementation, always using Unity3D, since Vuforia is only an SDK, not an IDE. This environment was chosen because of the facilities Unity3D provides when creating the application itself: it already implements everything necessary to access the smartphone's hardware, as well as the components that control Vuforia's elements. The Vuforia version used during the thesis was 1.5; two months ago Qualcomm released the new 2.0 version, which is not intended to form part of this study, although some of its most significant new capabilities are explained. Finally, the last and perhaps most important objective concerns the tests and results, for which three different smartphones were used to compare values. Following this methodology, it has been possible to conclude which part of the results is due to the features and capabilities of the different smartphones and which part depends only on the Vuforia library.

    Learning Lens Blur Fields

    Full text link
    Optical blur is an inherent property of any lens system and is challenging to model in modern cameras because of their complex optical elements. To tackle this challenge, we introduce a high-dimensional neural representation of blur -- the lens blur field -- and a practical method for acquiring it. The lens blur field is a multilayer perceptron (MLP) designed to (1) accurately capture variations of the lens 2D point spread function over image plane location, focus setting and, optionally, depth and (2) represent these variations parametrically as a single, sensor-specific function. The representation models the combined effects of defocus, diffraction, aberration, and accounts for sensor features such as pixel color filters and pixel-specific micro-lenses. To learn the real-world blur field of a given device, we formulate a generalized non-blind deconvolution problem that directly optimizes the MLP weights using a small set of focal stacks as the only input. We also provide a first-of-its-kind dataset of 5D blur fields -- for smartphone cameras, camera bodies equipped with a variety of lenses, etc. Lastly, we show that acquired 5D blur fields are expressive and accurate enough to reveal, for the first time, differences in optical behavior of smartphone devices of the same make and model.
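    The MLP representation described in this abstract can be illustrated with a toy sketch: a small fully connected network mapping a 5D coordinate (image position, focus setting, depth, color channel) to a normalized point-spread-function patch. The layer sizes, input encoding, and patch size below are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes):
    """Random He-initialized weights for a small fully connected network."""
    return [(rng.normal(0, np.sqrt(2 / m), (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def blur_field(params, coords):
    """Map 5D coordinates (x, y, focus, depth, channel) to a flattened
    K x K point-spread-function patch.  coords: (B, 5) array."""
    h = coords
    for W, b in params[:-1]:
        h = np.maximum(h @ W + b, 0.0)          # ReLU hidden layers
    W, b = params[-1]
    logits = h @ W + b
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)     # PSF entries sum to 1

K = 9                                           # assumed PSF patch size
params = init_mlp([5, 64, 64, K * K])           # illustrative layer sizes
psf = blur_field(params, rng.uniform(0, 1, (4, 5)))
```

    The softmax output guarantees each predicted PSF is non-negative and sums to one, a natural constraint for a blur kernel; the paper's actual parameterization may differ.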

    Digital Fishes - An Interactive Virtual Aquarium

    Get PDF
    This work is supported by AUIP - Asociación Universitaria Iberoamericana de Postgrado. This work describes the creation of an interactive virtual aquarium, Digital Fishes, with a physical representation in a cubic fish tank with synchronized visual information and animated elements, such as fish and vegetation, which can be used for educational purposes. The solution used to simulate the movement of marine flora and fauna added to the virtual aquarium is described. An implementation of fish behavior is created in which the fish move in groups using the Boids algorithm, in addition to autonomous behaviors such as chasing prey and fleeing from predators, and interactive behaviors such as following the user's finger on the screen and moving toward food that the user may throw into the pond. The solution described in this work has managed to put into the hands of children an interactive virtual representation of an aquarium as one more learning tool at their disposal. The children who participated in the evaluation of the Digital Fishes interactive aquarium gave positive feedback, showing great enthusiasm and assessing the solution favorably.
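    The group movement mentioned above follows the classic Boids model: each fish steers by three local rules (separation, alignment, cohesion). A minimal sketch of one Boids update step, with illustrative weights and parameters rather than the thesis's actual Unity3D implementation, might look like:

```python
import numpy as np

def boids_step(pos, vel, dt=0.1, radius=2.0,
               w_sep=1.5, w_ali=1.0, w_coh=1.0, max_speed=2.0):
    """One update of the classic Boids rules: separation, alignment, cohesion.
    pos, vel: (N, 2) arrays of positions and velocities."""
    acc = np.zeros_like(vel)
    for i in range(len(pos)):
        diff = pos - pos[i]                      # vectors to every other boid
        dist = np.linalg.norm(diff, axis=1)
        mask = (dist > 0) & (dist < radius)      # neighbours within radius
        if not mask.any():
            continue
        sep = -np.sum(diff[mask] / dist[mask, None] ** 2, axis=0)  # repel
        ali = vel[mask].mean(axis=0) - vel[i]    # match neighbours' velocity
        coh = pos[mask].mean(axis=0) - pos[i]    # steer toward local centre
        acc[i] = w_sep * sep + w_ali * ali + w_coh * coh
    vel = vel + dt * acc
    speed = np.linalg.norm(vel, axis=1, keepdims=True)
    vel = vel * np.minimum(1.0, max_speed / np.maximum(speed, 1e-9))  # clamp
    return pos + dt * vel, vel
```

    The predator-fleeing, prey-chasing, and finger-following behaviors described in the abstract would add further steering terms to the same acceleration sum.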

    CGAMES'2009

    Get PDF

    Performance Factors in Neurosurgical Simulation and Augmented Reality Image Guidance

    Get PDF
    Virtual reality surgical simulators have seen widespread adoption in an effort to provide safe, cost-effective and realistic practice of surgical skills. However, the majority of these simulators focus on training low-level technical skills, providing only prototypical surgical cases. For many complex procedures, this approach is deficient in representing anatomical variations that present clinically, failing to challenge users’ higher-level cognitive skills important for navigation and targeting. Surgical simulators offer the means to not only simulate any case conceivable, but to test novel approaches and examine factors that influence performance. Unfortunately, there is a void in the literature surrounding these questions. This thesis was motivated by the need to expand the role of surgical simulators to provide users with clinically relevant scenarios and evaluate human performance in relation to image guidance technologies, patient-specific anatomy, and cognitive abilities. To this end, various tools and methodologies were developed to examine cognitive abilities and knowledge, simulate procedures, and guide complex interventions all within a neurosurgical context. The first chapter provides an introduction to the material. The second chapter describes the development and evaluation of a virtual anatomical training and examination tool. The results suggest that learning occurs and that spatial reasoning ability is an important performance predictor, but subordinate to anatomical knowledge. The third chapter outlines development of automation tools to enable efficient simulation studies and data management. In the fourth chapter, subjects perform abstract targeting tasks on ellipsoid targets with and without augmented reality guidance. While the guidance tool improved accuracy, performance with the tool was strongly tied to target depth estimation – an important consideration for implementation and training with similar guidance tools. 
In the fifth chapter, neurosurgically experienced subjects were recruited to perform simulated ventriculostomies. Results showed anatomical variations influence performance and could impact outcome. Augmented reality guidance showed no marked improvement in performance, but exhibited a mild learning curve, indicating that additional training may be warranted. The final chapter summarizes the work presented. Our results and novel evaluative methodologies lay the groundwork for further investigation into simulators as versatile research tools for exploring performance factors in simulated surgical procedures.

    Virtual Reality applications for visualization of 6000-year-old Neolithic graves from Lenzburg (Switzerland)

    Get PDF
    The last decade has seen a steady increase in the application of virtual 3D approaches in cultural heritage research. Although a large literature exists about the advantages of 3D methods in this field, here we go one step further and a) show how image-based 3D reconstructions can be displayed in virtual reality (VR) space using freeware game engine software and low-cost VR hardware, and b) highlight the relative benefits and advantages with a focus on interactive museum displays of relatively large archaeological objects. Specifically, we present three 3D models of different stone grave structures from the Neolithic necropolis of Lenzburg (Northern Switzerland, 4450-3500 BCE). The site was excavated in 1959/60 and certain graves were subsequently preserved for museum display. By means of VR applications, it is now possible to experience these approximately 6000-year-old tombs through an innovative approach that circumvents various barriers or constraints and offers interactive display options.

    An inertial motion capture framework for constructing body sensor networks

    Get PDF
    Motion capture is the process of measuring and subsequently reconstructing the movement of an animated object or being in virtual space. Virtual reconstructions of human motion play an important role in numerous application areas such as animation, medical science, ergonomics, etc. While optical motion capture systems are the industry standard, inertial body sensor networks are becoming viable alternatives due to portability, practicality and cost. This thesis presents an innovative inertial motion capture framework for constructing body sensor networks through software environments, smartphones and web technologies. The first component of the framework is a unique inertial motion capture software environment aimed at providing an improved experimentation environment, accompanied by programming scaffolding and a driver development kit, for users interested in studying or engineering body sensor networks. The software environment provides a bespoke 3D engine for kinematic motion visualisations and a set of tools for hardware integration. The software environment is used to develop the hardware behind a prototype motion capture suit focused on low-power consumption and hardware-centricity. Additional inertial measurement units, which are available commercially, are also integrated to demonstrate the functionality of the software environment while providing the framework with additional sources of motion data. The smartphone is the most ubiquitous computing technology and its worldwide uptake has prompted many advances in wearable inertial sensing technologies. Smartphones contain gyroscopes, accelerometers and magnetometers, a combination of sensors that is commonly found in inertial measurement units. This thesis presents a mobile application that investigates whether the smartphone is capable of inertial motion capture by constructing a novel omnidirectional body sensor network.
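    The gyroscope/accelerometer/magnetometer combination mentioned above is typically fused into an orientation estimate per sensor node. As a simplified illustration (gyroscope-only quaternion integration with renormalization, not the thesis's actual fusion filter), one update step might look like:

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of two quaternions in (w, x, y, z) order."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def integrate_gyro(q, omega, dt):
    """Advance orientation quaternion q by angular rate omega (rad/s)."""
    dq = 0.5 * quat_mul(q, np.array([0.0, *omega]))  # quaternion derivative
    q = q + dt * dq                                  # Euler integration step
    return q / np.linalg.norm(q)                     # renormalise

# Rotate 90 degrees about the z axis in 100 small steps
q = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(100):
    q = integrate_gyro(q, [0.0, 0.0, np.pi / 2], 0.01)
```

    In practice, accelerometer and magnetometer readings would be used to correct the drift that pure gyroscope integration accumulates.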
This thesis proposes a novel use for web technologies through the development of the Motion Cloud, a repository and gateway for inertial data. Web technologies have the potential to replace motion capture file formats with online repositories and to set a new standard for how motion data is stored. From a single inertial measurement unit to a more complex body sensor network, the proposed architecture is extendable and facilitates the integration of any inertial hardware configuration. The Motion Cloud's data can be accessed through an application programming interface or through a web portal that provides users with the functionality for visualising and exporting the motion data.
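    The repository-and-gateway idea can be sketched as a minimal in-memory store keyed by capture session and sensor; all class, method, and field names below are illustrative stand-ins, not the Motion Cloud's actual API:

```python
from collections import defaultdict

class MotionRepository:
    """Toy stand-in for an inertial-data repository: stores timestamped
    orientation samples per (session, sensor) and serves them back."""

    def __init__(self):
        self._data = defaultdict(list)

    def push(self, session, sensor, t, quaternion):
        """Append one (timestamp, quaternion) sample for a sensor stream."""
        self._data[(session, sensor)].append((t, tuple(quaternion)))

    def query(self, session, sensor, t_start=0.0, t_end=float("inf")):
        """Return all samples for a stream inside a time window."""
        samples = self._data.get((session, sensor), [])
        return [s for s in samples if t_start <= s[0] <= t_end]

repo = MotionRepository()
repo.push("walk-01", "left-wrist", 0.0, (1.0, 0.0, 0.0, 0.0))
repo.push("walk-01", "left-wrist", 0.1, (0.99, 0.1, 0.0, 0.0))
```

    A real deployment would expose `push` and `query` over HTTP rather than in-process calls, which is the gateway role the abstract describes.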
