
    Comparative evaluation of glossy-surface lighting results between object-space lighting and screen-space lighting

    The field of computer graphics places a premium on achieving an optimal balance between the visual fidelity of the rendered image and rendering performance. For traditional shading techniques that operate in screen space, the level of fidelity is generally tied to the screen resolution and therefore to the number of pixels rendered. Particular application areas, such as stereo rendering for virtual reality head-mounted displays, demand high output update rates and pixel resolutions, which can lead to significant performance penalties. It would therefore be beneficial to use a rendering technique that can be decoupled from the output update rate and resolution without severely affecting the achieved rendering quality. One technique capable of meeting this goal is performing a 3D model's surface shading in an object-specific space. In this thesis we implemented such a shading method: the lighting computations over a model's surface are done on a model-specific, uniquely parameterized texture map we call a light map. Because shading is computed per light-map texel, its cost does not depend on the output resolution or update rate. Additionally, we utilize the texture sampling hardware built into the Graphics Processing Units ubiquitous in modern computing systems to obtain high-quality anti-aliasing of the shading results. The end result is a surface appearance that is theoretically expected to be close to that produced by heavily supersampled screen-space shading. In addition to the object-space lighting technique, we also implemented a traditional screen-space version of our shading algorithm. Both implementations were used in a user study we organized to test the theoretical expectation. The results of the study indicated that the object-space shaded images are perceptually close to identical to heavily supersampled screen-space images
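
    To make the idea of shading in a light map concrete, the sketch below evaluates a simple Blinn-Phong model once per texel of a uniquely parameterized texture. It is only a minimal illustration of the general approach described in the abstract, not the thesis's actual implementation; the shading model, function name, and parameters are assumptions for the example.

```python
import numpy as np

def shade_light_map(positions, normals, light_pos, cam_pos,
                    albedo=0.8, shininess=32.0):
    """Blinn-Phong shading evaluated once per light-map texel.

    positions, normals: (H, W, 3) world-space positions and normals of the
    surface points that the texels of the uniquely parameterized texture
    map correspond to.  The cost is O(H * W), independent of the output
    resolution and frame rate.
    """
    n = normals / np.linalg.norm(normals, axis=-1, keepdims=True)
    l = light_pos - positions
    l /= np.linalg.norm(l, axis=-1, keepdims=True)
    v = cam_pos - positions
    v /= np.linalg.norm(v, axis=-1, keepdims=True)
    h = l + v
    h /= np.linalg.norm(h, axis=-1, keepdims=True)

    diffuse = albedo * np.clip((n * l).sum(-1), 0.0, None)
    specular = np.clip((n * h).sum(-1), 0.0, None) ** shininess
    return diffuse + specular            # (H, W) light map

# At display time the screen pass only samples this texture; the GPU's
# built-in texture filtering then provides the anti-aliasing of the
# shading results mentioned in the abstract.
```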

    Real-Time Capture and Rendering of Physical Scene with an Efficiently Calibrated RGB-D Camera Network

    From object tracking to 3D reconstruction, RGB-Depth (RGB-D) camera networks play an increasingly important role in many vision and graphics applications. With the recent explosive growth of Augmented Reality (AR) and Virtual Reality (VR) platforms, utilizing RGB-D camera networks to capture and render dynamic physical spaces can enhance immersive experiences for users. To maximize coverage and minimize cost, practical applications often use a small number of RGB-D cameras placed sparsely around the environment for data capture. While sparse color camera networks have been studied for decades, the problems of extrinsic calibration of, and rendering with, sparse RGB-D camera networks are less well understood. Extrinsic calibration is difficult because of inappropriate RGB-D camera models and a lack of shared scene features. Due to significant camera noise and sparse coverage of the scene, the quality of rendered 3D point clouds is much lower than that of synthetic models. Adding virtual objects whose rendering depends on the physical environment, such as those with reflective surfaces, further complicates the rendering pipeline. In this dissertation, I propose novel solutions to these challenges faced by RGB-D camera systems. First, I propose a novel extrinsic calibration algorithm that can accurately and rapidly calibrate the geometric relationships across an arbitrary number of RGB-D cameras in a network. Second, I propose a novel rendering pipeline that can capture and render, in real time, dynamic scenes in the presence of arbitrarily shaped reflective virtual objects. Third, I demonstrate a teleportation application that uses the proposed system to merge two geographically separated 3D captured scenes into the same reconstructed environment. To provide fast and robust calibration for a sparse RGB-D camera network, first, the correspondences between different camera views are established using a spherical calibration object. We show that this approach outperforms other techniques based on planar calibration objects. Second, instead of modeling camera extrinsics with a rigid transformation, which is optimal only for pinhole cameras, different view transformation functions including rigid transformation, polynomial transformation, and manifold regression are systematically tested to determine the most robust mapping that generalizes well to unseen data. Third, the celebrated bundle adjustment procedure is reformulated to minimize the global 3D projection error so as to fine-tune the initial estimates. To achieve realistic mirror rendering, a robust eye detector is used to identify the viewer's 3D location and render the reflective scene accordingly. The limited field of view obtained from a single camera is overcome by our calibrated RGB-D camera network system, which is scalable to capture an arbitrarily large environment. The rendering is accomplished by ray tracing light rays from the viewpoint to the scene as reflected by the virtual curved surface. To the best of our knowledge, the proposed system is the first to render reflective dynamic scenes from real 3D data in large environments. Our scalable client-server architecture is computationally efficient: the calibration of a camera network system, including data capture, can be done in minutes using only commodity PCs
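
    One standard ingredient of the rigid-transformation case discussed above is the SVD-based (Kabsch/Procrustes) estimate of the relative pose between two cameras from corresponding 3D points, such as the calibration-sphere centers each camera observes. The sketch below shows that generic step under those assumptions; it is not the dissertation's full pipeline, and the function name is illustrative.

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) such that dst ≈ R @ src + t.

    src, dst: (N, 3) arrays of corresponding 3D points, e.g. sphere
    centers seen by two RGB-D cameras in the network.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                      # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                       # enforce a proper rotation (det = +1)
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

    In a full pipeline, estimates like this would serve only as initial values that a bundle-adjustment-style refinement over the whole network then fine-tunes, as the abstract describes.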

    Comparative linear accuracy and reliability of cone beam CT derived 2-dimensional and 3-dimensional images constructed using an orthodontic volumetric rendering program.

    The purpose of this project was to compare the accuracy and reliability of linear measurements made on 2D projections and 3D reconstructions using Dolphin 3D software (Chatsworth, CA) against direct measurements made on human skulls. The linear distances between 6 bilateral and 8 mid-sagittal anatomical landmarks on 23 dentate dry human skulls were measured three times by multiple observers using a digital caliper, providing twenty orthodontic linear measurements. The skulls were stabilized and imaged via PSP digital cephalometry as well as CBCT. The PSP cephalograms were imported into Dolphin (Chatsworth, CA, USA) and the 3D volumetric data set was imported into Dolphin 3D (Version 2.3, Chatsworth, CA, USA). Using Dolphin 3D, planar cephalograms as well as 3D volumetric surface reconstructions (3D CBCT) were generated. The linear measurements between landmarks for each of the three modalities were then computed three times by a single observer. For 2D measurements, a one-way ANOVA for each measurement dimension was calculated, followed by a post hoc Scheffé multiple comparison test with the anatomic distance as the control group. 3D measurements were compared to anatomic truth using Student's t test (P ≤ 0.05). The intraclass correlation coefficient (ICC) and the absolute linear and percentage errors were determined as indices of intraobserver reliability. Our results show that, for 2D mid-sagittal measurements, simulated LC images are accurate and similar to those from PSP images (except for Ba-Na); for bilateral measurements, simulated LC measurements were similar to PSP but less accurate, underestimating dimensions by 4.7% to 17%. For 3D volumetric renderings, two-thirds of CBCT measurements were statistically different from the actual measurements, although this is possibly not clinically relevant
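
    As an illustration of the reliability and error indices named above, the sketch below computes an intraclass correlation coefficient (assuming the commonly used two-way mixed, single-measure consistency form, ICC(3,1)) and the absolute and percentage errors against anatomic truth. The exact ICC variant used in the study is not stated here, and the function names are illustrative.

```python
import numpy as np

def icc_3_1(ratings):
    """ICC(3,1): two-way mixed model, single measure, consistency.

    ratings: (n_subjects, k_repeats) array, e.g. one landmark distance
    measured three times on each skull by the same observer.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()   # between-subject
    ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()   # between-repeat
    ss_total = ((ratings - grand) ** 2).sum()
    ms_rows = ss_rows / (n - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

def measurement_error(measured, truth):
    """Mean absolute error and mean percentage error vs. anatomic truth."""
    err = np.abs(measured - truth)
    return err.mean(), 100.0 * (err / truth).mean()
```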

    Fast Objective Coupled Planar Illumination Microscopy

    Among optical imaging techniques, light sheet fluorescence microscopy stands out as one of the most attractive for capturing high-speed biological dynamics unfolding in three dimensions. The technique is potentially millions of times faster than point-scanning techniques such as two-photon microscopy. This potential is especially pertinent for neuroscience applications, because interactions between neurons transpire over mere milliseconds within tissue volumes spanning hundreds of cubic microns. However, current-generation light sheet microscopes are limited by volume scanning rate and/or camera frame rate. We begin by reviewing the optical principles underlying light sheet fluorescence microscopy and the origin of these rate bottlenecks. We present an analysis leading us to the conclusion that Objective Coupled Planar Illumination (OCPI) microscopy is a particularly promising technique for recording the activity of large populations of neurons at high sampling rates. We then present speed-optimized OCPI microscopy, the first fast light sheet technique to avoid compromising image quality or photon efficiency. We pursue two strategies in developing the fast OCPI microscope. First, we devise a set of optimizations that increase the rate of the volume scanning system to 40 Hz for volumes up to 700 microns thick. Second, we introduce Multi-Camera Image Sharing (MCIS), a technique that scales imaging rate by incorporating additional cameras. MCIS can be applied not only to OCPI but to any widefield imaging technique, circumventing the limitations imposed by the camera. Detailed design drawings are included to aid dissemination to other research groups. We also demonstrate fast calcium imaging of the larval zebrafish brain and find a heartbeat-induced motion artifact. We recommend a new preprocessing step that removes the artifact through filtering. This step requires a minimum sampling rate of 15 Hz, and we expect it to become a standard procedure in zebrafish imaging pipelines. In the last chapter we describe essential computational considerations for controlling a fast OCPI microscope and processing the data that it generates. We introduce a new image processing pipeline developed to maximize computational efficiency when analyzing these multi-terabyte datasets, including a novel calcium imaging deconvolution algorithm. Finally, we provide a demonstration of how combined innovations in microscope hardware and software enable inference of predictive relationships between neurons, a promising complement to more conventional correlation-based analyses
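
    As an illustration of the kind of filtering-based preprocessing step described above, the sketch below suppresses a periodic heartbeat-induced artifact in a calcium fluorescence trace with a zero-phase band-stop filter. The 2-4 Hz stop band is only a plausible larval-zebrafish heart-rate range chosen for the example; the dissertation's actual filter design may differ.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def remove_heartbeat_artifact(trace, fs=15.0, band=(2.0, 4.0), order=4):
    """Zero-phase band-stop filtering of a calcium imaging trace.

    trace: 1-D fluorescence time series sampled at fs Hz (fs of at least
           ~15 Hz keeps the heartbeat band safely below Nyquist).
    band:  (low, high) frequency range in Hz to suppress.
    """
    sos = butter(order, band, btype='bandstop', fs=fs, output='sos')
    return sosfiltfilt(sos, trace)   # forward-backward pass avoids phase shift
```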

    Modeling and evaluation of new collimator geometries in SPECT


    Computational Multimedia for Video Self Modeling

    Video self modeling (VSM) is a behavioral intervention technique in which a learner models a target behavior by watching a video of himself or herself performing it. The idea draws on the psychological theory of self-efficacy: seeing yourself perform a task helps you learn to perform it, which provides an ideal form of behavior modeling. The effectiveness of VSM has been demonstrated for many different types of disabilities and behavioral problems, ranging from stuttering, inappropriate social behaviors, autism and selective mutism to sports training. However, there is an inherent difficulty associated with the production of VSM material. Prolonged and persistent video recording is required to capture the rare, possibly non-existent, snippets that can be strung together to form novel video sequences of the target skill. To solve this problem, in this dissertation we use computational multimedia techniques to facilitate the creation of synthetic visual content for self-modeling that can be used by a learner and his/her therapist with a minimal amount of training data. There are three major technical contributions in my research. First, I developed an Adaptive Video Re-sampling algorithm to synthesize realistic lip-synchronized video with minimal motion jitter. Second, to denoise and complete the depth maps captured by structured-light sensing systems, I introduced a layer-based probabilistic model that accounts for various types of uncertainty in the depth measurement. Third, I developed a simple and robust bundle-adjustment-based framework for calibrating a network of multiple wide-baseline RGB and depth cameras
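
    The depth-completion contribution above is a layer-based probabilistic model; as a deliberately simplified stand-in that only illustrates the problem, the sketch below fills missing (zero) readings in a structured-light depth map with the median of valid neighbors. This is not the dissertation's method, and the names are illustrative.

```python
import numpy as np
from scipy.ndimage import generic_filter

def fill_depth_holes(depth, window=5):
    """Replace missing (zero) depth pixels with the median of valid
    neighbours inside a window x window patch -- a simplistic stand-in
    for principled probabilistic depth completion."""
    def valid_median(patch):
        valid = patch[patch > 0]
        return np.median(valid) if valid.size else 0.0

    depth = depth.astype(float)
    medians = generic_filter(depth, valid_median, size=window)
    filled = depth.copy()
    holes = depth == 0
    filled[holes] = medians[holes]
    return filled
```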

    Towards A Self-calibrating Video Camera Network For Content Analysis And Forensics

    Due to growing security concerns, video surveillance and monitoring has received immense attention from both federal agencies and private firms. The main concern is that a single camera, even if allowed to rotate or translate, is not sufficient to cover a large area for video surveillance. A more general solution, with a wide range of applications, is to allow the deployed cameras to have non-overlapping fields of view (FoV) and, if possible, to move freely in 3D space. This thesis addresses the issue of how the cameras in such a network can be calibrated and how the network as a whole can be calibrated, such that each camera as a unit in the network is aware of its orientation with respect to all the other cameras in the network. Different types of cameras might be present in a multiple-camera network, and novel techniques are presented for efficient calibration of these cameras. Specifically: (i) for a stationary camera, we derive new constraints on the Image of the Absolute Conic (IAC), which are shown to be intrinsic to the IAC; (ii) for a scene where object shadows are cast on a ground plane, we track the shadows cast on the ground plane by at least two unknown stationary points, and use the tracked shadow positions to compute the horizon line and hence the camera intrinsic and extrinsic parameters; (iii) a novel solution is presented for the scenario where a camera observes pedestrians, its uniqueness lying in recognizing two harmonic homologies present in the geometry obtained by observing pedestrians; (iv) for a freely moving camera, a novel practical method is proposed for self-calibration that even allows the camera to change its internal parameters by zooming; and (v) given the increased use of pan-tilt-zoom (PTZ) cameras, a technique is presented that uses only two images to estimate five camera parameters. For an automatically configurable multi-camera network with non-overlapping fields of view, possibly containing moving cameras, a practical framework is proposed that determines the geometry of such a dynamic camera network. It is shown that only one automatically computed vanishing point and a line lying on any plane orthogonal to the vertical direction are sufficient to infer the geometry of a dynamic network. Our method generalizes previous work, which considers restricted camera motions. Using minimal assumptions, we demonstrate promising results on synthetic as well as real data. Applications to path modeling, GPS coordinate estimation, and configuring mixed-reality environments are explored
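
    One classical building block behind IAC-based self-calibration is recovering the focal length from the vanishing points of two orthogonal scene directions. The sketch below is a textbook illustration of that step, assuming zero skew, square pixels, and a principal point at the image center; it is not the specific constraint set derived in this thesis.

```python
import numpy as np

def focal_from_orthogonal_vps(v1, v2, principal_point):
    """Focal length in pixels from vanishing points of two orthogonal
    directions, assuming zero skew, unit aspect ratio and a known
    principal point p.  With K = [[f,0,cx],[0,f,cy],[0,0,1]], the IAC
    orthogonality constraint v1' * omega * v2 = 0 reduces to
    (v1 - p) . (v2 - p) + f^2 = 0.
    """
    p = np.asarray(principal_point, dtype=float)
    d = np.dot(np.asarray(v1, float) - p, np.asarray(v2, float) - p)
    if d >= 0:
        raise ValueError("vanishing points inconsistent with these assumptions")
    return float(np.sqrt(-d))
```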

    An interest point based illumination condition matching approach to photometric registration within augmented reality worlds

    With recent and continued increases in computing power, and advances in the field of computer graphics, realistic augmented reality environments can now offer inexpensive and powerful solutions in a whole range of training, simulation and leisure applications. One key challenge in maintaining convincing augmentation, and therefore user immersion, is ensuring consistent illumination conditions between the virtual and real environments, so that objects appear to be lit by the same light sources. This research demonstrates how real-world lighting conditions can be determined from the two-dimensional view of the user. Virtual objects can then be illuminated, and virtual shadows cast, using these conditions. The new technique uses pairs of interest points from real objects and the shadows that they cast, viewed from a binocular perspective, to determine the position of the illuminant. The research initially focused on single point light sources in order to show the potential of the technique, and investigated the relationships between the many parameters of the vision system. Optimal conditions were discovered by mapping the results of experimentally varying parameters such as FoV, camera angle and pose, image resolution, aspect ratio and illuminant distance. The technique provides increased robustness when higher-resolution imagery is used. Under optimal conditions it is possible to derive the position of a real-world light source with low average error. A review of the available literature revealed that other techniques can be inflexible, slow, or disruptive to scene realism. This technique is able to locate and track a moving illuminant within an unconstrained, dynamic world without the use of artificial calibration objects that would disrupt scene realism. The technique operates in real time, as the new algorithms are of low computational complexity, allowing high frame rates to be maintained within augmented reality applications. Illuminant updates occur several times a second on an average- to high-end desktop computer. Future work will investigate the automatic identification and selection of pairs of interest points and the exploration of global illumination conditions. The latter will include an analysis of more complex scenes and the consideration of multiple and varied light sources.
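
    A point illuminant constrains each (interest point, shadow point) pair to lie on a line passing through the light, so with two or more such 3D pairs the illuminant position can be recovered as the least-squares intersection of those lines. The sketch below shows that generic triangulation step under this assumption; it is not the thesis's exact algorithm, and the names are illustrative.

```python
import numpy as np

def light_position_from_shadow_pairs(object_pts, shadow_pts):
    """Least-squares point closest to all lines defined by
    shadow point -> object point (the direction toward the light).

    object_pts, shadow_pts: (N, 3) 3D interest points on real objects and
    the corresponding points of their cast shadows (N >= 2, and the lines
    must not all be parallel).
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, s in zip(np.asarray(object_pts, float), np.asarray(shadow_pts, float)):
        d = o - s
        d /= np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to this line
        A += P
        b += P @ s
    return np.linalg.solve(A, b)         # point minimizing summed squared distances
```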

    Polarized Light Applications towards Biomedical Diagnosis and Monitoring

    Utilization of polarized light for improved specificity and sensitivity in disease diagnosis is becoming more common in the fields of sensing, measurement, and medical diagnostics. This dissertation focuses on two distinct areas where polarized light is applied in biomedical sensing and monitoring. The first portion of the work reported in this dissertation addresses several major obstacles that currently prohibit the use of polarized light in an optical, non-invasive polarimetric glucose sensor, which could improve the quality of life and disease monitoring for the millions of people currently afflicted by diabetes mellitus. Two key areas requiring further technical advances were addressed for the technology to be realized as a viable solution. First, in vivo studies on New Zealand White (NZW) rabbits using a dual-wavelength polarimeter were conducted for performance validation and for modeling predictive glucose measurements, accounting for the time delay between blood and aqueous humor glucose concentrations and overcoming motion-induced birefringence through multiple linear regression analysis. Further, the feasibility of non-index-matched eye coupling between the system and the corneal surface was evaluated using modeling and verified with in vitro testing; the system was first modeled, and the non-matched coupling configuration was then constructed for in vitro testing. The second half of the dissertation focuses on a polarized light microscope designed, built, and tested as a low-cost, high-quality cellphone-based polarimetric imaging system to aid medical health professionals in improved diagnosis of disease in the clinic and in low-resource settings. Malaria remains a major global health burden, and new methods for low-cost, high-sensitivity diagnosis of malaria are needed, particularly in remote, low-resource areas throughout the world. Here, a cost-effective, cellphone-based transmission polarized light microscope system is presented and used for imaging the malaria pigment known as hemozoin. Validation testing of the optical resolution required to provide diagnosis comparable to commercial polarized imaging systems is conducted, and the optimal design is used together with image processing to improve the diagnostic capability
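
    For context on the polarimetric glucose measurement, the rotation of linearly polarized light by an optically active solution follows α = [α] · L · C, so physiological glucose levels produce only millidegree-scale rotations over a roughly 1 cm path across the anterior chamber. The sketch below inverts this relation; the specific-rotation value is a nominal literature figure for D-glucose near 589 nm and is an assumption for illustration, not a number taken from the dissertation.

```python
# Minimal sketch: convert a measured polarization rotation to a glucose
# concentration via  alpha = [alpha] * L * C.
SPECIFIC_ROTATION = 52.7      # deg*mL/(g*dm), nominal value for D-glucose near 589 nm

def glucose_mg_per_dl(rotation_millideg, path_length_cm=1.0):
    """Glucose concentration (mg/dL) implied by a measured rotation.

    rotation_millideg: observed rotation in millidegrees.
    path_length_cm:    optical path length through the sample (on the order
                       of 1 cm for lateral coupling through the anterior
                       chamber of the eye).
    """
    alpha_deg = rotation_millideg / 1000.0
    length_dm = path_length_cm / 10.0
    conc_g_per_ml = alpha_deg / (SPECIFIC_ROTATION * length_dm)
    return conc_g_per_ml * 1e5            # g/mL -> mg/dL

# Example: ~4.7 millidegrees over a 1 cm path corresponds to roughly 90 mg/dL.
print(round(glucose_mg_per_dl(4.7), 1))
```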