
    The Impact of Surface Normals on Appearance

    The appearance of an object is the result of complex light interaction with the object. Beyond the basic interplay between incident light and the object's material, a multitude of physical events occur between this illumination and the microgeometry at the point of incidence, and also beneath the surface. A given object, made as smooth and opaque as possible, will have a completely different appearance if either one of these attributes - the amount of surface mesostructure (small-scale surface orientation) or the translucency - is altered. Indeed, while they are not always readily perceptible, the small-scale features of an object are as important to its appearance as its material properties. Moreover, surface mesostructure and translucency are inextricably linked in their overall effect on appearance. In this dissertation, we present several studies examining the importance of surface mesostructure (small-scale surface orientation) and translucency to an object's appearance. First, we present an empirical study that establishes how poorly a mesostructure estimation technique can perform when translucent objects are used as input. We investigate the two major factors determining an object's translucency: mean free path and scattering albedo. We exhaustively vary these parameters within realistic bounds, examining the resulting blurring effect on the output of a common shape estimation technique, photometric stereo. Based on our findings, we identify a dramatic effect that a translucent input material has on the quality of the estimated mesostructure. In the next project, we discuss an optimization technique for both refining the estimated surface orientation of translucent objects and determining the reflectance characteristics of the underlying material. For a globally planar object, we use simulation and real measurements to show that the blurring effect on normals observed in the previous study can be recovered. The key to this is the observation that the normalization factor of the recovered normals is proportional to the error in the blur kernel created from the estimated translucency parameters. Finally, we frame the study of the impact of surface normals in a practical, image-based context. We present a low-overhead editing tool for natural images that enables the user to edit surface mesostructure while the system automatically updates the appearance in the image. Because a single photograph captures an instant of the incredibly complex interaction of light and an object, there is a wealth of information to extract from it. Given a photograph of an object in natural lighting, we allow mesostructure edits and infer any missing reflectance information in a realistically plausible way.
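    Photometric stereo, the shape estimation technique evaluated above, recovers per-pixel surface normals from images taken under several known light directions. The sketch below shows the classic Lambertian least-squares formulation of that technique; it is a generic illustration under that assumption, not the dissertation's own code, and the image stack and light directions are placeholder inputs.

    ```python
    import numpy as np

    def photometric_stereo(images, light_dirs):
        """Estimate per-pixel normals and albedo via Lambertian photometric stereo.

        images:     (k, h, w) stack of grayscale images under k known lights
        light_dirs: (k, 3) unit light direction vectors
        """
        k, h, w = images.shape
        intensities = images.reshape(k, -1)                             # (k, h*w)
        # Lambertian model: I = L @ g, with g = albedo * normal per pixel
        g, *_ = np.linalg.lstsq(light_dirs, intensities, rcond=None)   # (3, h*w)
        albedo = np.linalg.norm(g, axis=0)                              # normalization factor
        normals = g / np.maximum(albedo, 1e-8)                          # unit surface normals
        return normals.reshape(3, h, w), albedo.reshape(h, w)
    ```

    For a translucent object, subsurface scattering effectively blurs the observed intensities, so normals recovered this way appear smoothed; the dissertation's second study exploits the observation that this normalization factor (the recovered albedo above) is tied to the blur kernel built from the estimated translucency parameters.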

    Diffuse Optical Imaging with Ultrasound Priors and Deep Learning

    Diffuse Optical Imaging (DOI) techniques are an ever-growing field of research, as they are noninvasive, compact, cost-effective and can furnish functional information about human tissues. Among others, they include Tomography, which solves an inverse reconstruction problem in a tissue volume, and Mapping, which only seeks to find values on a tissue surface. Limitations in reliability and resolution, due to the ill-posedness of the underlying inverse problems, have hindered the clinical uptake of this medical imaging modality. Multimodal imaging and Deep Learning present themselves as two promising directions for further research in DOI. In relation to the first idea, we implement and assess here a set of methods for SOLUS, a combined Ultrasound (US) and Diffuse Optical Tomography (DOT) probe for breast cancer diagnosis. An ad hoc morphological prior is extracted from US B-mode images and used to regularise the inverse problem in DOT. The combination of this prior with a linearised forward model for DOT is assessed in reconstructions on specifically designed dual phantoms. The same reconstruction approach, with the incorporation of a spectral model, is assessed on meat phantoms for the reconstruction of functional properties. A simulation study with realistic digital phantoms is presented to assess a non-linear reconstruction model for quantifying the optical properties of breast lesions. A set of machine learning tools is presented for the diagnosis of breast lesions based on the reconstructed optical properties. A preliminary clinical study with the SOLUS probe is presented. Finally, a specifically designed deep learning architecture for diffusion is applied to mapping on the brain cortex, or Diffuse Optical Cortical Mapping (DOCM). An assessment of its performance is presented on simulated and experimental data.
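    In regularised linearised DOT, the reconstruction typically reduces to a Tikhonov-type least-squares problem in which a structural prior (here derived from US B-mode segmentation) enters as a spatially varying regularisation weight. Below is a minimal sketch of that generic scheme; the Jacobian, measurement vector, prior mask and weights are illustrative placeholders and do not reproduce the SOLUS implementation.

    ```python
    import numpy as np

    def dot_reconstruct(J, y, prior_mask, lam_in=0.01, lam_out=1.0):
        """Tikhonov-regularised linear DOT reconstruction with a structural prior.

        J:          (m, n) sensitivity (Jacobian) matrix of the linearised forward model
        y:          (m,)   measured data perturbation
        prior_mask: (n,)   1 inside the US-segmented lesion, 0 in the background
        """
        lam = np.where(prior_mask > 0, lam_in, lam_out)   # relax the penalty inside the lesion
        A = J.T @ J + np.diag(lam)                        # regularised normal equations
        return np.linalg.solve(A, J.T @ y)                # update to the optical properties
    ```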

    Detection of CO and HCN in Pluto's atmosphere with ALMA

    Observations of the Pluto-Charon system, acquired with the ALMA interferometer on June 12-13, 2015, have yielded a detection of the CO(3-2) and HCN(4-3) rotational transitions from Pluto, providing a strong confirmation of the presence of CO, and the first observation of HCN, in Pluto's atmosphere. The CO and HCN lines probe Pluto's atmosphere up to ~450 km and ~900 km altitude, respectively. The CO detection yields (i) a much improved determination of the CO mole fraction, 515+/-40 ppm for a 12 ubar surface pressure, and (ii) clear evidence for a well-marked temperature decrease (i.e., a mesosphere) above the 30-50 km stratopause, with a best-determined temperature of 70+/-2 K at 300 km, in agreement with recent inferences from New Horizons / Alice solar occultation data. The HCN line shape implies a high abundance of this species in the upper atmosphere, with a mole fraction >1.5x10^-5 above 450 km and a value of 4x10^-5 near 800 km. The large HCN abundance and the cold upper atmosphere imply supersaturation of HCN to a degree (7-8 orders of magnitude) hitherto unseen in planetary atmospheres, probably due to the slow kinetics of condensation at the low pressure and temperature conditions of Pluto's upper atmosphere. HCN is also present in the bottom ~100 km of the atmosphere, with a 10^-8 - 10^-7 mole fraction; this implies either HCN saturation or undersaturation there, depending on the precise stratopause temperature. The HCN column is (1.6+/-0.4)x10^14 cm^-2, suggesting a surface-referred net production rate of ~2x10^7 cm^-2 s^-1. Although HCN rotational line cooling affects Pluto's atmospheric heat budget, the amounts determined in this study are insufficient to explain the well-marked mesosphere and the upper atmosphere's ~70 K temperature. We finally report an upper limit on the HC3N column density (< 2x10^13 cm^-2) and on the HC15N / HC14N ratio (< 1/125). Comment: Revised version. Icarus, in press, Oct. 11, 2016. 57 pages, including 13 figures and 4 tables.
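    The quoted supersaturation refers to the standard ratio of the HCN partial pressure to its saturation vapour pressure at the local temperature; written out (with x_HCN the mole fraction and p the total pressure, as used in the abstract), the stated 7-8 orders of magnitude correspond to:

    ```latex
    % Supersaturation ratio (standard definition, not taken verbatim from the paper)
    S_{\mathrm{HCN}} = \frac{p_{\mathrm{HCN}}}{p_{\mathrm{sat}}(T)}
                     = \frac{x_{\mathrm{HCN}}\, p}{p_{\mathrm{sat}}(T)}
                     \sim 10^{7}\text{--}10^{8}
    ```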

    Compression, Modeling, and Real-Time Rendering of Realistic Materials and Objects

    The realism of a scene basically depends on the quality of the geometry, the illumination and the materials that are used. Whereas many sources for the creation of three-dimensional geometry exist and numerous algorithms for the approximation of global illumination have been presented, the acquisition and rendering of realistic materials remains a challenging problem. Realistic materials are very important in computer graphics, because they describe the reflectance properties of surfaces, which are based on the interaction of light and matter. In the real world, an enormous diversity of materials can be found, comprising very different properties. One important objective in computer graphics is to understand these processes, to formalize them and to finally simulate them. For this purpose various analytical models already exist, but their parameterization remains difficult, as the number of parameters is usually very high. They also fail for very complex materials that occur in the real world. Measured materials, on the other hand, require long acquisition times and produce huge amounts of input data. Although very efficient statistical compression algorithms have been presented, most of them do not allow for editability, such as altering the diffuse color or mesostructure. In this thesis, a material representation is introduced that makes it possible to edit these features. This allows the acquisition results to be re-used to easily and quickly create variations of the original material. These variations may be subtle, but also substantial, allowing for a wide spectrum of material appearances. The approach presented in this thesis is not based on compression, but on a decomposition of the surface into several materials with different reflection properties. Based on a microfacet model, the light-matter interaction is represented by a function that can be stored in an ordinary two-dimensional texture. Additionally, depth information, local rotations, and the diffuse color are stored in these textures. As a result of the decomposition, some of the original information is inevitably lost; therefore, an algorithm for the efficient simulation of subsurface scattering is presented as well. Another contribution of this work is a novel perception-based simplification metric that takes the material of an object into account. This metric incorporates features of the human visual system, for example trichromatic color perception or reduced resolution. The proposed metric allows for a more aggressive simplification in regions where geometric metrics do not simplify.
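    The microfacet model referred to above treats specular reflection as the aggregate response of many tiny mirror facets; a widely used instance is the GGX (Trowbridge-Reitz) form sketched below. This is a generic illustration of that standard model, not the thesis's own decomposition or texture layout.

    ```python
    import numpy as np

    def ggx_specular(n, l, v, roughness, f0):
        """Evaluate a standard GGX microfacet specular BRDF term.

        n, l, v:   unit normal, light and view directions (3-vectors)
        roughness: perceptual roughness in (0, 1]
        f0:        reflectance at normal incidence
        """
        h = (l + v) / np.linalg.norm(l + v)                        # half vector
        a2 = roughness ** 4
        ndoth, ndotl, ndotv = (max(float(np.dot(n, x)), 1e-6) for x in (h, l, v))
        d = a2 / (np.pi * (ndoth ** 2 * (a2 - 1.0) + 1.0) ** 2)    # normal distribution (D)
        k = (roughness + 1.0) ** 2 / 8.0
        g = (ndotl / (ndotl * (1 - k) + k)) * (ndotv / (ndotv * (1 - k) + k))  # shadowing (G)
        f = f0 + (1.0 - f0) * (1.0 - max(float(np.dot(v, h)), 0.0)) ** 5       # Fresnel (F)
        return d * g * f / (4.0 * ndotl * ndotv)
    ```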

    Ultrasound imaging operation capture and image analysis for speckle noise reduction and detection of shadows

    Ultrasound is becoming increasingly important in medicine, both as a diagnostic tool and as a therapeutic modality. At present, experienced sonographers observe trainees as they generate hundreds of images, constantly providing them feedback and eventually deciding whether they have the appropriate skills and knowledge to perform ultrasound independently. This research seeks to advance towards an automated system capable of assessing the motion of an ultrasound transducer and of differentiating between a novice, an intermediate and an expert sonographer. The research in this thesis synchronizes the ultrasound images with three depth sensors (Microsoft Kinect) placed on the top, left and right side of the patient to ensure the visibility of the ultrasound probe. Videos obtained from the three categories of sonographers are manually labeled and compared using the Studiocode Development Environment to complete the items on the medical form checklist. Next, this thesis investigates and applies well-known techniques used to smooth and suppress speckle noise in ultrasound images, using quality metrics to evaluate their performance and show the benefits each one contributes. Finally, this thesis investigates the problem of shadow detection in ultrasound imaging and proposes to detect shadows automatically with an ultrasound confidence map computed by a random walks algorithm. The results show that the proposed algorithm achieves an automatic detection accuracy of up to 85%, based on both expert and manual segmentation.
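    Speckle in ultrasound is commonly modelled as multiplicative noise, and classical adaptive filters such as the Lee filter suppress it by shrinking each pixel towards its local mean in proportion to the local signal variance. The sketch below shows that standard filter with an illustrative window size; it is one example of the well-known techniques the thesis evaluates, not its specific pipeline.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def lee_filter(img, window=7):
        """Adaptive Lee filter for multiplicative speckle noise (img: 2D float array)."""
        mean = uniform_filter(img, window)              # local mean
        sq_mean = uniform_filter(img ** 2, window)
        var = np.maximum(sq_mean - mean ** 2, 0.0)      # local variance
        noise_var = np.mean(var)                        # crude global noise estimate
        gain = var / (var + noise_var + 1e-12)          # ~0 in flat regions, ~1 at edges
        return mean + gain * (img - mean)
    ```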

    Artistic Path Space Editing of Physically Based Light Transport

    The generation of realistic images is an important goal of computer graphics, with applications in the feature film industry, architecture and medicine, among others. Physically based rendering, which has recently found wide acceptance across applications, relies on the numerical simulation of light transport along propagation paths prescribed by geometric optics, a model that suffices to achieve photorealism for typical scenes. Overall, the computer-assisted creation of images and animations with well-designed and theoretically well-founded shading has become much simpler today. In practice, however, attention to details such as the structure of the output device is also important, and, for example, the subproblem of efficient physically based rendering in participating media is still far from being considered solved. Furthermore, rendering must be seen as part of a broader context: the effective communication of ideas and information. Whether it is the form and function of a building, the medical visualization of a computed tomography scan, or the mood of a film sequence, messages in the form of digital images are omnipresent today. Unfortunately, the spread of the simulation-oriented methodology of physically based rendering has generally led to a loss of the intuitive, fine-grained and local artistic control over the final image content that was available in earlier, less strict paradigms. The contributions of this dissertation cover different aspects of rendering: first, fundamental subpixel image synthesis as well as efficient rendering methods for participating media. The focus of the work, however, is on approaches to an effective visual understanding of light propagation that enable local artistic intervention while achieving consistent and plausible results at the global level. The core idea is to perform visualization and editing of light directly in the "path space" that encompasses all possible light paths. This stands in contrast to state-of-the-art methods, which either operate in image space or are tailored to specific, isolated lighting effects such as perfect mirror reflections, shadows or caustics. Evaluation of the presented methods has shown that they can solve real-world image generation problems arising in film production.
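    "Path space" here is the set of all light transport paths from a light source to the sensor. Such paths are often classified with Heckbert-style regular expressions over their vertex types (L light, D diffuse, S specular, E eye), and a selective edit can then be restricted to paths matching a given expression. The sketch below illustrates only this classification idea and is not the dissertation's editing system.

    ```python
    import re

    # Heckbert path notation: L = light, E = eye/sensor, D = diffuse bounce, S = specular bounce.
    def matches(path_signature, pattern):
        """Check whether a path (e.g. 'LSSDE') matches a selection pattern (e.g. caustics 'LS+DE')."""
        return re.fullmatch(pattern, path_signature) is not None

    print(matches("LSSDE", r"LS+DE"))   # True:  light -> specular bounces -> diffuse -> eye (caustic)
    print(matches("LDDE",  r"LS+DE"))   # False: purely diffuse path
    ```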

    Algorithms and Methods for Imaging of Brain Activity from Non-Invasive Techniques

    The imaging of brain activity, also called "Functional Neuroimaging", is used to understand the relationship between activity in certain brain areas and specific functions. These techniques include fMRI (functional Magnetic Resonance Imaging), PET (Positron Emission Tomography), EIT (Electrical Impedance Tomography), EEG (ElectroEncephaloGraphy) and DOT (Diffuse Optical Tomography) and are widely used in the study of brain activity. In addition to clinical usage, analysis of brain activity is gaining popularity in other recent fields, e.g., Brain-Computer Interfaces (BCI) and the study of cognitive processes. In these contexts, the use of classical solutions (fMRI and PET) can be unfeasible, due to their low temporal resolution, high cost and limited portability. For these reasons, portable low-cost techniques are the subject of the research in this thesis, with a focus on DOT and EEG. The main contribution of this thesis is the implementation of a numerical solver for DOT based on the radiosity-diffusion model, integrating the anatomical information provided by a structural MRI. In particular, we obtained a 7x speed-up over a single run of an isotropic-scattering parallel Monte Carlo engine for a domain of 2 million voxels, with an accuracy comparable to 10 runs of anisotropic-scattering Monte Carlo in the same geometry. The speed-up increases significantly for larger domains, allowing one to compute the light distribution in a full human head (about 3 million voxels) in 116 seconds on the platform used. The secondary contribution of this thesis focuses on EEG and concerns the implementation of software libraries for time-domain source localization within an open-source framework called Creamino, which can be used to simplify and speed up the design of BCI systems. It consists of firmware and software libraries that allow designers to connect new EEG platforms to software tools for BCI.
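    The radiosity-diffusion model mentioned above builds on the diffusion approximation to photon transport in tissue, which in numerical solvers is typically discretised into a sparse linear system and solved per source. Below is a minimal 2D finite-difference sketch of that generic approach; the uniform optical properties, grid spacing and simplified boundary handling are illustrative assumptions and do not reproduce the thesis's MRI-informed solver.

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import spsolve

    def diffusion_fluence(nx, ny, mua=0.01, musp=1.0, h=1.0, src=(0, 0)):
        """Solve -D*laplacian(phi) + mua*phi = q on a 2D grid (diffusion approximation)."""
        D = 1.0 / (3.0 * (mua + musp))                  # diffusion coefficient
        n = nx * ny
        lap = (sp.diags([-4.0] * n)
               + sp.diags([1.0] * (n - 1), 1) + sp.diags([1.0] * (n - 1), -1)
               + sp.diags([1.0] * (n - nx), nx) + sp.diags([1.0] * (n - nx), -nx))
        A = (-D / h ** 2) * lap + mua * sp.eye(n)       # discretised operator (edges simplified)
        q = np.zeros(n)
        q[src[1] * nx + src[0]] = 1.0                   # isotropic point source
        return spsolve(A.tocsr(), q).reshape(ny, nx)    # photon fluence map
    ```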