793 research outputs found

    A pointillism style for the non-photorealistic display of augmented reality scenes

    The ultimate goal of augmented reality is to provide the user with a view of the surroundings enriched by virtual objects. Practically all augmented reality systems rely on standard real-time rendering methods for generating the images of virtual scene elements. Although such conventional computer graphics algorithms are fast, they often fail to produce sufficiently realistic renderings. The use of simple lighting and shading methods, as well as the lack of knowledge about actual lighting conditions in the real surroundings, causes virtual objects to appear artificial. We have recently proposed a novel approach for generating augmented reality images. Our method is based on the idea of applying stylization techniques for reducing the visual realism of both the camera image and the virtual graphical objects. Special non-photorealistic image filters are applied to the camera video stream. The virtual scene elements are rendered using non-photorealistic rendering methods. Since both the camera image and the virtual objects are stylized in a corresponding way, they appear very similar. As a result, graphical objects can become indistinguishable from the real surroundings. Here, we present a new method for the stylization of augmented reality images. This approach generates a painterly "brush stroke" rendering. The resulting stylized augmented reality video frames look similar to paintings created in the "pointillism" style. We describe the implementation of the camera image filter and the non-photorealistic renderer for virtual objects. These components have been newly designed or adapted for this purpose. They are fast enough for generating augmented reality images in real time and are customizable. The results obtained using our approach are very promising and show that it improves immersion in augmented reality.
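
    The authors' actual filter and renderer are not reproduced in the abstract, but as a loose sketch of the underlying idea, repainting a camera frame as randomly placed colored dots, something like the following could serve as a starting point (Python with OpenCV and NumPy; the stroke count, dot radius, and color jitter are illustrative assumptions, not the paper's parameters):

        import cv2
        import numpy as np

        def pointillism_filter(frame, num_strokes=20000, radius=3, jitter=12):
            """Repaint a BGR frame as randomly placed, color-perturbed dots."""
            h, w = frame.shape[:2]
            canvas = np.full_like(frame, 255)           # start from a white "canvas"
            xs = np.random.randint(0, w, num_strokes)   # random stroke positions
            ys = np.random.randint(0, h, num_strokes)
            for x, y in zip(xs, ys):
                color = frame[y, x].astype(int)
                color += np.random.randint(-jitter, jitter + 1, 3)  # slight color jitter
                color = np.clip(color, 0, 255)
                cv2.circle(canvas, (int(x), int(y)), radius,
                           tuple(int(c) for c in color), -1)
            return canvas

        # e.g. stylized = pointillism_filter(cv2.imread("frame.png"))

    Run per frame, a CPU loop like this would be far too slow for the real-time rates the paper reports; the sketch only illustrates the kind of stylization involved.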

    Advances in Spatially Faithful (3D) Telepresence

    The benefits of AR technologies have been well proven in collaborative industrial applications, for example in remote maintenance and consultancy. Benefits may also be high in telepresence applications, where virtual and mixed reality technologies (nowadays often referred to as extended reality, XR) are used for sharing information or objects over a network. Since the 1990s, the technical enablers for advanced telepresence solutions have developed considerably. At the same time, the importance of remote technologies has grown immensely due to the general disruption of work, demands for reducing travel and CO2 emissions, and the need to prevent pandemics. An advanced 3D telepresence solution benefits from using XR technologies. Particularly interesting are solutions based on HMDs or glasses-type near-eye displays (NEDs). However, as AR/VR glasses supporting natural occlusion and accommodation are still missing from the market, a good alternative is to use screen displays in new ways that better support, for example, virtual meeting geometries and other important cues for 3D perception. In this article, researchers Seppo Valli, Mika Hakkarainen, and Pekka Siltanen from VTT Technical Research Centre of Finland describe the status, challenges, and opportunities in both glasses-based and screen-based 3D telepresence. The authors also specify an affordable screen-based solution with improved immersiveness, naturalness, and efficiency, enhanced by applying XR technologies.

    Augmented Reality and Its Application

    Augmented Reality (AR) is a discipline that includes the interactive experience of a real-world environment, in which real-world objects and elements are enhanced using computer-generated perceptual information. It has many potential applications in education, medicine, and engineering, among other fields. This book explores these potential uses, presenting case studies and investigations of AR for vocational training, emergency response, interior design, architecture, and much more.

    Visual Perception and Cognition in Image-Guided Intervention

    Surgical image visualization and interaction systems can dramatically affect the efficacy and efficiency of surgical training, planning, and interventions. This is even more profound in the case of minimally invasive surgery, where restricted access to the operative field, in conjunction with a limited field of view, necessitates a visualization medium that provides patient-specific information at any given moment. Unfortunately, little research has been devoted to studying the human factors associated with medical image displays, and the need for robust, intuitive visualization and interaction interfaces has remained largely unfulfilled to this day. Failure to engineer efficient medical solutions and design intuitive visualization interfaces is argued to be one of the major barriers to the meaningful transfer of innovative technology to the operating room. This thesis was, therefore, motivated by the need to study various cognitive and perceptual aspects of human factors in surgical image visualization systems, to increase the efficiency and effectiveness of medical interfaces, and ultimately to improve patient outcomes. To this end, we chose four different minimally invasive interventions in the realm of surgical training, planning, training for planning, and navigation. The first chapter involves the use of stereoendoscopes to reduce morbidity in endoscopic third ventriculostomy. The results of this study suggest that, compared with conventional endoscopes, the detection of the basilar artery on the surface of the third ventricle can be facilitated with the use of stereoendoscopes, increasing the safety of targeting in third ventriculostomy procedures. In the second chapter, a contour enhancement technique is described to improve preoperative planning of arteriovenous malformation interventions. The proposed method, particularly when combined with stereopsis, is shown to increase the speed and accuracy of understanding the spatial relationship between vascular structures. In the third chapter, an augmented-reality system is proposed to facilitate training in planning brain tumour resection. The results of our user study indicate that the proposed system improves subjects' performance, particularly novices', in formulating the optimal point of entry and surgical path, independent of the sensorimotor tasks performed. In the last chapter, the effect of fully immersive simulation environments on surgeons' non-technical skills in performing vertebroplasty procedures is investigated. Our results suggest that while training may increase surgeons' technical skills, the introduction of crisis scenarios significantly disturbs their performance, emphasizing the need for realistic simulation environments as part of the training curriculum.
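
    The thesis abstract gives no implementation detail for the contour enhancement step, but as a rough, hypothetical illustration of what enhancing vascular contours can involve, a simple edge-overlay sketch (Python with OpenCV; the Canny thresholds and highlight color are arbitrary assumptions) might look like this:

        import cv2

        def enhance_contours(image, low=50, high=150, color=(0, 255, 0)):
            """Overlay detected edges so vessel boundaries stand out."""
            gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
            edges = cv2.Canny(gray, low, high)   # binary edge map of the view
            overlay = image.copy()
            overlay[edges > 0] = color           # paint edge pixels in a highlight color
            return overlay

    The published method additionally exploits stereopsis, which a flat 2D overlay like this does not attempt to capture.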

    Preliminary Survey of Multiview Synthesis Technology

    With the maturity of digital camera technology, it is feasible to form an array of cameras. The major use of a camera array is to acquire different views of a scene in one shot. The captured data can be used to analyze the depths of the objects. Once we have a 3D model, we can synthesize virtual views, relight the scene, and so on. The potential applications are virtual reality and augmented reality. In order to investigate multiview technology, we studied the fundamental concepts, including the single-lens camera, the eccentric-lens camera, the plenoptic camera, and the multiview camera. We also discussed a few application examples for understanding practical usage. The study showed that depth estimation technology has become important in providing a photorealistic, natural feel to viewers. The application areas also extend to entertainment and to critical tasks such as medical operations and combat missions. (International conference, 15-17 October 2010, Darmstadt, Germany.)
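
    As a rough sketch of how a depth (disparity) map lets a camera array synthesize virtual views, the following forward-warping example (Python with NumPy; it assumes rectified views and a dense disparity map, and leaves disocclusion holes unfilled) may help make the idea concrete:

        import numpy as np

        def synthesize_view(left_img, disparity, alpha=0.5):
            """Forward-warp the left view toward a virtual camera placed a
            fraction `alpha` along the stereo baseline."""
            h, w = disparity.shape
            virtual = np.zeros_like(left_img)
            depth_buf = np.full((h, w), -np.inf)   # keep the nearest (largest-disparity) pixel
            ys, xs = np.mgrid[0:h, 0:w]
            xs_new = np.round(xs - alpha * disparity).astype(int)  # shift by scaled disparity
            valid = (xs_new >= 0) & (xs_new < w)
            for y, x, xn, d in zip(ys[valid], xs[valid], xs_new[valid], disparity[valid]):
                if d > depth_buf[y, xn]:           # nearer surfaces overwrite farther ones
                    depth_buf[y, xn] = d
                    virtual[y, xn] = left_img[y, x]
            return virtual                         # holes remain where no pixel mapped

    A practical system would fill the holes from a second view and run the warp on the GPU, but the principle (shift each pixel in proportion to its disparity) is the same.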

    Physically Based Rendering of Synthetic Objects in Real Environments

    TangiPaint: Interactive tangible media

    Currently, there is a wide disconnect between the real and virtual worlds in computer graphics. Art created with textured paints on canvas has visual effects that naturally supplement simple color. Real paint exhibits shadows and highlights, which change in response to viewing and lighting directions. The colors interact with this environment and can produce very noticeable effects. Additionally, the traditional means of human-computer interaction using a keyboard and mouse is unnatural and inefficient: gestures and actions are not performed on the objects themselves. These visual effects and natural interactions are missing from digital media in the virtual world. The absence of these visual characteristics disconnects users from their content. Our research looks into simulating these missing pieces and reconnecting users. TangiPaint is an interactive, tangible application for creating and exploring digital media. It gives the experience of working with real materials, such as oil paints and textured canvases, on a digital display. TangiPaint implements natural gestures and allows users to directly interact with their work. The Tangible Display technology allows users to tilt and reorient the device and screen to see the subtle gloss, shadow, and impasto lighting effects of the simulated surface. To simulate realistic lighting effects, we use a Ward BRDF illumination model. This model is implemented as an OpenGL shader program. Our system tracks the texture and relief of a piece of art by saving topographical information. We implement height fields, normal vectors, and parameter maps to store this information. These textures are submitted to the lighting model, which renders the final product. TangiPaint builds on previous work and applications in this area, but is the first to integrate these aspects into a single software application. The system is entirely self-contained and implemented on the Apple iOS platforms: the iPhone, iPad, and iPod Touch. No additional hardware is required, and the interface is easy to learn and use. TangiPaint is a step in the direction of interactive digital art media that look and behave like real materials.
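
    The isotropic Ward BRDF mentioned in the abstract is standard; as an informal reference (written in Python rather than the OpenGL shader the system actually uses, and with placeholder albedo and roughness values), it can be evaluated per light/view pair as follows:

        import numpy as np

        def ward_isotropic_brdf(n, l, v, rho_d=0.5, rho_s=0.2, alpha=0.15):
            """Isotropic Ward BRDF: a diffuse lobe plus a Gaussian-like
            specular lobe whose width is set by the roughness `alpha`."""
            n, l, v = (x / np.linalg.norm(x) for x in (n, l, v))
            cos_i, cos_o = n.dot(l), n.dot(v)
            if cos_i <= 0 or cos_o <= 0:
                return 0.0                          # light or viewer below the surface
            h = l + v
            h /= np.linalg.norm(h)                  # half vector between light and view
            cos_h = np.clip(n.dot(h), 1e-6, 1.0)
            tan2_h = (1.0 - cos_h ** 2) / cos_h ** 2
            spec = rho_s * np.exp(-tan2_h / alpha ** 2) / (
                4.0 * np.pi * alpha ** 2 * np.sqrt(cos_i * cos_o))
            return rho_d / np.pi + spec

    In TangiPaint itself this evaluation presumably runs per pixel in the OpenGL shader, driven by the stored height-field normals and parameter maps described above.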