10 research outputs found

    Fuzzy Soft Shadow in Augmented Reality Systems

    Realistic soft shadows in Augmented Reality (AR) are a fascinating topic in computer graphics, and many researchers are working to improve them. In this paper, we present a new technique for producing soft shadows using Fuzzy Logic, a well-known mathematical framework. The wide light source is split into multiple parts, each of which plays the role of a single point light source, and the desired soft shadow is generated from their combined contributions. The method, which we call Fuzzy Soft Shadow, is employed in AR to enhance the quality of semi-soft and soft shadows.
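The light-splitting idea described above can be sketched in a few lines. This is a minimal 2D illustration, not the paper's implementation: a wide light above a wall is split into n sub-lights, each treated as a point light, and the fraction of sub-lights visible from a ground point gives the penumbra value. All geometry (wall at x = 0, default heights and widths) is made up for the example.

```python
def visible(px, lx, wall_h, light_h):
    # Ray from the ground point (px, 0) to the light sample (lx, light_h).
    # A wall stands at x = 0 with height wall_h and may occlude the ray.
    if px * lx >= 0:              # segment never reaches the wall's plane
        return True
    t = px / (px - lx)            # parameter where the segment hits x = 0
    return t * light_h > wall_h   # does the ray clear the top of the wall?

def soft_shadow_factor(px, wall_h=8.0, light_h=10.0, half_width=1.0, n=9):
    # Split the wide light into n sub-lights spanning [-half_width, half_width];
    # each sub-light acts as a single point light, and the mean visibility
    # gives the penumbra value: 0 = umbra, 1 = fully lit.
    samples = [-half_width + 2 * half_width * k / (n - 1) for k in range(n)]
    lit = sum(visible(px, lx, wall_h, light_h) for lx in samples)
    return lit / n
```

Points partially behind the wall receive an intermediate value, which is exactly the smooth falloff a single point light cannot produce.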

    An Empirical Evaluation of the Performance of Real-Time Illumination Approaches: Realistic Scenes in Augmented Reality

    Although Augmented, Virtual, and Mixed Reality (AR/VR/MR) systems have matured and many applications have achieved significant results, rendering a virtual object under the correct illumination model of the real environment is still under investigation. The entertainment industry has presented astounding outcomes in several media forms, albeit with rendering mostly done offline. The physical scene contains illumination information that can be sampled and then used to render virtual objects in real time for a realistic scene. In this paper, we evaluate the accuracy of our previously and currently developed systems, which provide real-time dynamic illumination for coherent interactive augmented reality, based on the virtual object's appearance in association with the real world and related criteria. The system achieves this through three simultaneous aspects: (1) estimating the incident light angle in the real environment using a live-feed 360° camera mounted on an AR device; (2) simulating the reflected light through two routes, (a) global cube map construction and (b) local sampling; and (3) defining the shading properties of the virtual object to depict correct lighting and suitable shadowing. Finally, the performance efficiency of both routes is examined to reduce the overall cost, and the results are evaluated through shadow observation and a user study.
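As a rough illustration of step (1), incident light angle estimation from a 360° feed: a naive (and deliberately simplified) heuristic is to take the brightest sample in an equirectangular luminance grid and convert its row/column to a spherical direction. The function name and the brightest-pixel heuristic are assumptions for this sketch, not the paper's actual estimator.

```python
import math

def dominant_light_direction(env_map):
    # env_map: rows x cols luminance grid in equirectangular layout,
    # as captured from a 360-degree camera feed.
    rows, cols = len(env_map), len(env_map[0])
    bi, bj = max(((i, j) for i in range(rows) for j in range(cols)),
                 key=lambda ij: env_map[ij[0]][ij[1]])
    theta = math.pi * (bi + 0.5) / rows       # polar angle from the zenith
    phi = 2.0 * math.pi * (bj + 0.5) / cols   # azimuth
    return (math.sin(theta) * math.cos(phi),  # unit vector, +y is up
            math.cos(theta),
            math.sin(theta) * math.sin(phi))
```

The returned unit vector can then feed a directional light in the renderer; a robust system would cluster bright regions rather than trust a single pixel.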

    LivePhantom: Retrieving Virtual World Light Data to Real Environments.

    To achieve realistic Augmented Reality (AR), shadows play an important role in creating a 3D impression of a scene. Casting virtual shadows on real and virtual objects is one of the open research topics in this area. In this paper, we propose a new method for creating complex AR indoor scenes that uses real-time depth detection to cast virtual shadows onto both virtual and real environments. A Kinect camera produces a depth map of the physical scene, which is merged into a single real-time transparent implicit surface. Once this is created, the camera's position can be tracked from the reconstructed 3D scene. Real objects are represented by virtual object phantoms in the AR scene, enabling users holding a webcam and a standard Kinect camera to capture and reconstruct environments simultaneously. The tracking capability of the algorithm is demonstrated, and the findings are assessed using qualitative and quantitative methods, with comparisons to previous AR phantom-generation applications. The results demonstrate the robustness of the technique for realistic indoor rendering in AR systems.
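The phantom idea rests on a per-pixel depth comparison: a virtual fragment is shown only where it lies in front of the real surface recovered from the Kinect depth map; elsewhere the camera image wins, so real objects appear to occlude virtual ones. A minimal sketch, assuming flattened per-pixel arrays and `None` where no virtual fragment exists:

```python
def composite(virtual_depth, real_depth, virtual_color, camera_color):
    # Per-pixel merge: the phantom surface built from the Kinect depth map
    # occludes any virtual fragment that lies behind it.
    out = []
    for vd, rd, vc, cc in zip(virtual_depth, real_depth, virtual_color, camera_color):
        out.append(vc if vd is not None and vd < rd else cc)
    return out
```

The same depth test, run from the light's point of view instead of the camera's, is what lets virtual shadows fall onto the phantom (and hence onto the real scene).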

    Generating Light Estimation for Mixed-reality Devices through Collaborative Visual Sensing

    Mixed-reality mobile platforms co-locate virtual objects with physical spaces, creating immersive user experiences. To create visual harmony between virtual and physical spaces, the virtual scene must be accurately illuminated with realistic physical lighting. To this end, a system was designed that Generates Light Estimation Across Mixed-reality (GLEAM) devices to continually sense realistic lighting of a physical scene in all directions. GLEAM can optionally operate across multiple mobile mixed-reality devices, leveraging collaborative multi-viewpoint sensing for improved estimation. The system implements policies that prioritize resolution, coverage, or update interval of the illumination estimate depending on the situational needs of the virtual scene and physical environment. To evaluate runtime performance and perceptual efficacy, GLEAM was implemented on the Unity 3D game engine and deployed on Android and iOS devices. On these implementations, GLEAM can prioritize dynamic estimation with update intervals as low as 15 ms, or prioritize high spatial quality with update intervals of 200 ms. User studies across 99 participants and 26 scene comparisons reported a preference for GLEAM over other lighting techniques in 66.67% of the presented augmented scenes and indifference in 12.57% of the scenes. A controlled-lighting user study with 18 participants revealed a general preference for policies that strike a balance between resolution and update rate.
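The policy trade-off described above can be sketched as a simple selector. The 15 ms and 200 ms intervals come from the abstract; the function name, the two boolean inputs, and the resolution values are hypothetical stand-ins for whatever situational signals a real system would use.

```python
def choose_policy(scene_dynamic, glossy_materials):
    # Fast-changing scenes favour a short update interval; glossy virtual
    # objects favour a high-resolution estimate. Face resolutions are
    # illustrative only.
    if scene_dynamic and not glossy_materials:
        return {"policy": "update-rate", "interval_ms": 15, "faces_px": 32}
    if glossy_materials and not scene_dynamic:
        return {"policy": "resolution", "interval_ms": 200, "faces_px": 256}
    return {"policy": "balanced", "interval_ms": 100, "faces_px": 128}
```

The controlled study's finding, a preference for balanced policies, corresponds to the middle branch.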

    Photorealistic rendering: a survey on evaluation

    This article is a systematic collection of existing methods and techniques for evaluating the rendering category in the field of computer graphics. The motive for this study was the difficulty of selecting appropriate methods for evaluating and validating specific results reported by many researchers. This difficulty lies in the abundance of available methods and the lack of robust discussion of them. To approach this problem, the features of well-known methods are critically reviewed to give researchers the background needed to evaluate different styles in the photorealistic rendering part of computer graphics. There are many ways to evaluate research; for this article, a classification and systematization method is used. After reviewing the features of the different methods, their future is also discussed. Finally, some pointers are proposed to the likely future issues in evaluating research on realistic rendering. It is expected that this analysis will help researchers overcome the difficulties of evaluation, not only in research but also in application.
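One common quantitative route in this evaluation space is per-pixel comparison of a rendered image against a ground-truth reference. As a minimal example of that family (the survey covers many more), root-mean-square error over two equal-sized luminance images:

```python
import math

def rmse(rendered, reference):
    # Root-mean-square error between two equal-sized luminance images:
    # a simple quantitative yardstick for photorealistic output against
    # a ground-truth reference rendering.
    n = sum(len(row) for row in rendered)
    total = sum((a - b) ** 2
                for ra, rb in zip(rendered, reference)
                for a, b in zip(ra, rb))
    return math.sqrt(total / n)
```

Pixel-wise metrics like this are easy to compute but blind to perceptual structure, which is one reason surveys of evaluation methods also cover user studies and perceptual metrics.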

    Sistema Híbrido Baseado em Geometria e Vídeo 360º 3D para Realidade Virtual em Dispositivos Móveis (Hybrid System Based on Geometry and 3D 360º Video for Virtual Reality on Mobile Devices)

    Virtual reality has been gaining popularity because it provides an immersive experience at lower cost and with better usability than in the past. However, the devices used to deliver this technology tend to diverge along two different paths. On the one hand, there are devices that use the computing power of fixed computers, allowing a high-quality experience but limiting the user's ability to move because of the required cables. On the other hand, there is the growth of mobile devices, which allow greater freedom of movement but limit the experience because of their lower processing capacity. To deliver virtual reality on mobile devices with high quality and visual fidelity, without drops in refresh rate and without harming the immersive experience, the visualized virtual environments must be simplified to reduce computational requirements. A possible compromise is to combine objects with three-dimensional geometry superimposed on a 360º image or video that covers the user's viewing area. This 360º image or video is generated in advance on a system with high processing power and thus contains the detail needed to provide good visual quality. The 360º image or video cannot provide the interaction typically needed in virtual reality environments, but overlaying real-time geometry allows interactivity with specific objects in the scenario. However, this solution raises coherence problems between the 3D objects and the pre-generated background, such as maintaining collisions, occlusions, lighting, shadows, and transparency, along with problems related to data streaming and compression of the background video. Therefore, a well-defined set of metadata from the scene that originated the background video must be preserved and used on the final device with appropriate techniques to tackle these coherence problems. In this work we propose a technology aimed at creating a hybrid system based on 3D objects superimposed on a 360º image or video to provide high-quality virtual reality environments on devices with low processing capacity. The proposed solution was tested and validated through quantitative and qualitative measures. It is expected that this approach will enable higher quality in mobile-based experiences, which in turn will have an impact on the cost of such systems and on the spread of this new technology to a larger part of society.
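The metadata-driven occlusion coherence described above can be sketched as follows. The record layout (`frame`, `bg_depth`, `light_dir`) and the function are hypothetical examples of the kind of per-frame data an offline render could preserve, not the thesis's actual format.

```python
# Hypothetical per-frame metadata record preserved from the offline render
# of the 360-degree background.
frame_meta = {
    "frame": 120,
    "bg_depth": [[4.0, 4.0], [2.5, 9.0]],  # metres, per background pixel
    "light_dir": (0.3, -0.9, 0.1),         # for coherent shading/shadows
}

def occludes(meta, px, py, obj_depth):
    # True when the pre-rendered background is closer than the real-time
    # 3D object at pixel (px, py), so the background must hide the object.
    return meta["bg_depth"][py][px] < obj_depth
```

Without such preserved depth, the real-time geometry would always draw on top of the video, breaking occlusion coherence whenever an interactive object moves behind something in the pre-rendered scene.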

    Multi-User 3D Augmented Reality Anwendung für die gemeinsame Interaktion mit virtuellen 3D Objekten (Multi-User 3D Augmented Reality Application for Shared Interaction with Virtual 3D Objects)

    This thesis covers the development of a network-supported multi-user augmented reality application for mobile devices. A presenter can load 3D models dynamically, display them on an augmented reality 2D tracker, and manipulate certain individual objects. These manipulations are defined in advance through the hierarchy of the 3D object; the executable manipulations are translation, rotation, scaling, and changing materials. Any number of spectators can follow the presentation on their own devices from a viewpoint of their choice. If the model data is not present on a spectator's device, it is automatically transferred from the presenter over the network. The design considerations made in advance are described, followed by the details of the implementation, the problems that occurred, and the chosen solutions. With the prototype, a user study was conducted to derive guidelines for choosing among different kinds of lighting (static, dynamic, and combined) for certain applications. The general usability of the app is also evaluated in the study.
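Synchronizing the predefined manipulations across devices amounts to broadcasting small state-change messages. The JSON wire format and function names below are assumptions for illustration, not the thesis's actual protocol; only the four operation kinds come from the text above.

```python
import json

def manipulation_message(object_id, op, value):
    # The presenter broadcasts each predefined manipulation so every
    # spectator replays it on its local copy of the model.
    assert op in ("translate", "rotate", "scale", "material")
    return json.dumps({"object": object_id, "op": op, "value": value})

def apply_message(scene, raw):
    # Spectator side: record the operation against the named object.
    msg = json.loads(raw)
    scene.setdefault(msg["object"], []).append((msg["op"], msg["value"]))
    return scene
```

Sending operations rather than full model state keeps the traffic small, which matters since the (potentially large) model data itself is only transferred once on demand.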

    Light factorization for mixed-frequency shadows in augmented reality

    Integrating animated virtual objects with their surroundings for high-quality augmented reality requires both geometric and radiometric consistency. We focus on the latter problem and present an approach that captures and factorizes external lighting in a manner that allows realistic relighting of both animated and static virtual objects. Our factorization facilitates a high-performance combination of hard and soft shadows that is consistent with the surrounding scene lighting.
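A toy version of a hard/soft lighting split, offered only as an intuition for the kind of factorization the abstract mentions (the paper's actual method is not reproduced here): pull the brightest sample out of a set of captured light samples as a directional term, which would drive hard shadows, and keep the mean of the remainder as a low-frequency ambient term, which would drive soft shading.

```python
def factorize_lighting(samples):
    # Split captured light samples into (peak, ambient): the peak acts as
    # a directional light for hard shadows; the mean of the rest acts as
    # a low-frequency term for soft shadows.
    peak = max(samples)
    rest = [s for i, s in enumerate(samples) if i != samples.index(peak)]
    ambient = sum(rest) / len(rest)
    return peak, ambient
```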

    Aspects of User Experience in Augmented Reality
