6 research outputs found

    Urbanmix

    "Augmented reality" can be defined as the amplification of our sensory perception of the surrounding world through the use of devices that superimpose virtual elements on the real image. The Urbanmix application realizes this concept at urban scale and outdoors using its own augmented reality system, based on 3D models and the real-time video editing program Max/MSP Jitter [2]. Through Urbanmix we can view the city or space we are standing in, augmented with buildings or monuments brought in from other cities. Because the system works at 1:1 scale, direct comparisons become easy and give a true sense of the magnitude of the buildings being visualized. Given the large dimensions of some of the buildings that can be introduced into the system, they must be placed far enough away for the system to acquire the references needed to position them correctly and keep the visualization believable. Owing to the lack of space in cities, some real buildings usually end up in front of the virtual ones through the superposition of video channels, which leads to occlusion of virtual shapes by real ones, the opposite of the usual case in such systems (occlusion of real shapes by virtual ones). This problem is solved with a three-dimensional video mask that updates its data in real time, allowing the user to move freely within the bounded area where the application runs while still seeing the chosen building or monument immersed in the preloaded space from any point of view.
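The real-over-virtual occlusion described above can be sketched as a per-pixel depth test: a pre-built 3D model of the surroundings, rendered from the tracked camera pose, supplies a depth map of the real scene, and virtual pixels are drawn only where they are nearer to the camera than that. A minimal sketch (the function name and numpy formulation are illustrative, not Urbanmix's actual Max/MSP Jitter implementation):

```python
import numpy as np

def composite_with_occlusion(real_rgb, real_depth, virt_rgb, virt_depth, virt_alpha):
    """Composite a virtual layer over a live video frame, letting real
    geometry occlude virtual geometry.

    real_depth -- per-pixel depth of the real scene, assumed here to come
                  from a pre-built 3D model rendered from the tracked
                  camera pose (the "3D video mask").
    virt_depth -- depth buffer of the rendered virtual building.
    """
    # A virtual pixel is visible only where the virtual render covers it
    # AND it is nearer to the camera than the real scene at that pixel.
    visible = (virt_alpha > 0) & (virt_depth < real_depth)
    out = real_rgb.copy()
    out[visible] = virt_rgb[visible]
    return out
```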

    Designing and implementing interactive and realistic augmented reality experiences

    In this paper, we propose an approach for supporting the design and implementation of interactive and realistic Augmented Reality (AR) experiences. Despite the advances in AR technology, most software applications still fail to support AR experiences where virtual objects appear as merged into the real setting. To alleviate this situation, we propose to combine the use of model-based AR techniques with the advantages of current game engines to develop AR scenes in which the virtual objects collide, are occluded, project shadows and, in general, are integrated into the augmented environment more realistically. To evaluate the feasibility of the proposed approach, we extended an existing game platform named GREP to enhance it with AR capabilities. The realism of the AR experiences produced with the software was assessed in an event in which more than 100 people played two AR games simultaneously. This work is supported by the projects CREAx and PACE, funded by the Spanish Ministry of Economy, Industry and Competitiveness (TIN2014-56534-R and TIN2016-77690-R)
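One of the realism cues mentioned above, virtual objects projecting shadows onto the real scene, can be illustrated with the classic planar-shadow trick: each vertex of a virtual mesh is projected along the light direction onto a known real ground plane. A minimal sketch (not taken from GREP; the function and its parameters are illustrative):

```python
def project_shadow(point, light_dir, ground_y=0.0):
    """Project a 3D point onto the real ground plane y = ground_y along a
    directional light -- a simple way for a game engine to make a virtual
    object cast a shadow onto the real floor in an AR view."""
    px, py, pz = point
    lx, ly, lz = light_dir
    # Parameter t where the ray p + s*l meets the plane (here s = -t).
    t = (py - ground_y) / ly
    return (px - t * lx, ground_y, pz - t * lz)
```

Applying this to every vertex of the mesh and drawing the flattened copy in a dark, semi-transparent color yields a plausible hard shadow on a flat floor.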

    A Survey on Augmented Reality Challenges and Tracking

    This survey paper presents a classification of different challenges and tracking techniques in the field of augmented reality. The challenges in augmented reality are categorized into performance challenges, alignment challenges, interaction challenges, mobility/portability challenges and visualization challenges. Augmented reality tracking techniques are mainly divided into sensor-based tracking, vision-based tracking and hybrid tracking. Sensor-based tracking is further divided into optical tracking, magnetic tracking, acoustic tracking, inertial tracking, or any combination of these to form hybrid sensor tracking. Similarly, vision-based tracking is divided into marker-based tracking and markerless tracking. Each tracking technique has its advantages and limitations. Hybrid tracking provides robust and accurate tracking, but involves financial and technical difficulties.
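Hybrid tracking typically blends a fast but drifting inertial estimate with a slower, absolute vision-based measurement. A one-axis complementary filter is a minimal illustration of the idea (the function name and the gain value are illustrative, not from the survey):

```python
def complementary_fuse(prev_angle, gyro_rate, vision_angle, dt, k=0.98):
    """One fusion step of a complementary filter: integrate the gyroscope
    for a fast short-term estimate, then pull it gently toward the
    absolute vision-based angle to cancel long-term drift."""
    inertial = prev_angle + gyro_rate * dt   # fast, drifts over time
    return k * inertial + (1.0 - k) * vision_angle
```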

    Detecting Dynamic Occlusion in front of Static Backgrounds for AR Scenes

    Correctly finding and handling occlusion between virtual and real objects in an Augmented Reality scene is essential for achieving visual realism. Here, we present an approach for detecting occlusion of virtual parts of the scene by natural occluders. Our algorithm is based on a graphical model of static backgrounds in the natural surroundings, which has to be acquired beforehand. The design of the approach aims at providing real-time performance and an easy integration into existing AR systems. No assumptions about the shape or color of occluding objects are required. The algorithm has been tested with several graphical models
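The core idea, detecting occluders as deviations from a pre-acquired background model, can be sketched as simple per-pixel background subtraction (a deliberate simplification of the paper's graphical model; the function name and threshold are illustrative):

```python
import numpy as np

def occluder_mask(frame, background, threshold=25.0):
    """Detect natural occluders in front of a known static background.
    Pixels that differ strongly from the pre-acquired background model are
    assumed to belong to a real foreground object, which should occlude
    any virtual content rendered behind it. No assumptions are made about
    the occluder's shape or color."""
    diff = np.abs(frame.astype(float) - background.astype(float))
    if diff.ndim == 3:               # color frame: strongest channel wins
        diff = diff.max(axis=2)
    return diff > threshold          # True where a real occluder is present
```

The resulting mask can then suppress virtual pixels in those regions, the inverse of the usual virtual-over-real compositing.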

    Entornos multimedia de realidad aumentada en el campo del arte (Multimedia augmented reality environments in the field of art)

    The relationship between Science and Art has, throughout history, alternated between moments of closeness and of distance, at times to the point of being understood as two different cultures; yet there have also been interdisciplinary situations of collaboration and exchange, which today share digital culture and the use of the computer as a common nexus. According to Berenguer (2002), since the appearance of the computer, scientists and artists have been finding a common space for work and mutual understanding. Through the use of new technologies, the distance separating the two disciplines grows ever shorter. This thesis, entitled "Entornos Multimedia de Realidad Aumentada en el Campo del Arte" (Multimedia Augmented Reality Environments in the Field of Art), presents theoretical and practical research on augmented reality technology applied to art and related fields such as edutainment (education + entertainment). The research is organized in two blocks: the first addresses the technology through the various factors considered relevant to understanding it and how it works; the second presents a total of six essays that constitute the practical part of this thesis. Portalés Ricart, C. (2008). Entornos multimedia de realidad aumentada en el campo del arte [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/3402

    ARVISCOPE: Georeferenced Visualization of Dynamic Construction Processes in Three-Dimensional Outdoor Augmented Reality.

    Construction processes can be conceived as systems of discrete, interdependent activities. Discrete Event Simulation (DES) has thus evolved as an effective tool to model operations that compete over available resources (personnel, material, and equipment). A DES model has to be verified and validated to ensure that it reflects a modeler’s intentions, and faithfully represents a real operation. 3D visualization is an effective means of achieving this, and facilitating the process of communicating and accrediting simulation results. Visualization of simulated operations has traditionally been achieved in Virtual Reality (VR). In order to create convincing VR animations, detailed information about an operation and the environment has to be obtained. The data must describe the simulated processes, and provide 3D CAD models of project resources, the facility under construction, and the surrounding terrain (Model Engineering). As the size and complexity of an operation increase, such data collection becomes an arduous, impractical, and often impossible task. This directly translates into loss of financial and human resources that could otherwise be productively used. In an effort to remedy this situation, this dissertation proposes an alternate approach of visualizing simulated operations using Augmented Reality (AR) to create mixed views of real existing jobsite facilities and virtual CAD models of construction resources. The application of AR in animating simulated operations has significant potential in reducing the aforementioned Model Engineering and data collection tasks, and at the same time can help in creating visually convincing output that can be effectively communicated. This dissertation presents the design, methodology, and development of ARVISCOPE, a general purpose AR animation authoring language, and ROVER, a mobile computing hardware framework. 
When used together, ARVISCOPE and ROVER can create three-dimensional AR animations of any length and complexity from the results of running DES models of engineering operations. ARVISCOPE takes advantage of advanced Global Positioning System (GPS) and orientation tracking technologies to accurately track a user’s spatial context, and georeferences superimposed 3D graphics in an augmented environment. In achieving the research objectives, major technical challenges such as accurate registration, automated occlusion handling, and dynamic scene construction and manipulation have been successfully identified and addressed.
    Ph.D. Civil Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/60761/1/abehzada_1.pd
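Georeferencing superimposed graphics, as described above, requires converting GPS fixes into a local metric frame where CAD models can be placed. A flat-earth East-North-Up approximation, adequate over a jobsite-sized area, is a minimal sketch of the conversion (illustrative only, not ARVISCOPE's actual method):

```python
import math

def geodetic_to_local_enu(lat, lon, alt, lat0, lon0, alt0):
    """Approximate East-North-Up offsets (metres) of a GPS fix relative to
    a reference origin (lat0, lon0, alt0), using a flat-earth small-area
    approximation around the origin."""
    R = 6378137.0                                    # WGS-84 equatorial radius
    d_lat = math.radians(lat - lat0)
    d_lon = math.radians(lon - lon0)
    east = d_lon * R * math.cos(math.radians(lat0))  # metres per radian of longitude
    north = d_lat * R                                # metres per radian of latitude
    up = alt - alt0
    return east, north, up
```

The resulting offsets can be used directly as translation components when placing a virtual model in the camera's local coordinate system.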