10 research outputs found

    A rigorous and realistic Shape From Shading method and some of its applications

    This article proposes a rigorous and realistic solution of the Lambertian Shape From Shading (SFS) problem. The power of our approach is threefold. First, our work is based on a rigorous mathematical method: we define a new notion of weak solutions (in the viscosity sense) which does not necessarily require boundary data (contrary to the work of [rouy-tourin:92,prados-faugeras-etal:02,prados-faugeras:03,camilli-falcone:96,falcone-sagona-etal:01]) and which allows a solution to be defined as soon as the image is (Lipschitz) continuous (contrary to the work of [oliensis:91,dupuis-oliensis:94]). We prove the existence and uniqueness of this new solution and approximate it with a provably convergent algorithm. Second, it improves the applicability of SFS to real images: we complete the realistic work of [prados-faugeras:03,tankus-sochen-etal:03] by modeling the problem with a pinhole camera and a single point light source located at the optical center. This new modeling proves very relevant for applications. Moreover, our algorithm can deal with images containing discontinuities and black shadows. It is very robust to pixel noise and to errors in the parameters. It is also generic: we propose a single algorithm which can compute numerical solutions of the various perspective and orthographic SFS models. Finally, our algorithm appears to be the most efficient iterative algorithm in the SFS literature. Third, we propose three applications (in three different areas) based on our SFS method.
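The abstract does not give the numerical scheme, but the simplest classical case it generalizes, orthographic Lambertian SFS with frontal light, reduces to the eikonal equation |∇z| = √(1/I² − 1) and can be sketched with a standard upwind Gauss-Seidel iteration. All names below are our own illustration under that simplifying assumption, not the authors' implementation:

```python
import numpy as np

def sfs_eikonal(I, h=1.0, n_sweeps=50):
    """Orthographic Lambertian SFS with frontal light: I = 1/sqrt(1+|grad z|^2),
    i.e. the eikonal equation |grad z| = f with f = sqrt(1/I^2 - 1).
    Solved by repeated Gauss-Seidel sweeps of the standard upwind scheme,
    with z = 0 imposed on the image border as a simple boundary condition."""
    f = np.sqrt(np.maximum(1.0 / np.clip(I, 1e-6, 1.0) ** 2 - 1.0, 0.0))
    n, m = I.shape
    z = np.full((n, m), 1e10)                       # interior starts "at infinity"
    z[0, :] = z[-1, :] = z[:, 0] = z[:, -1] = 0.0   # border depth fixed to zero
    for _ in range(n_sweeps):
        for i in range(1, n - 1):
            for j in range(1, m - 1):
                a = min(z[i - 1, j], z[i + 1, j])   # upwind neighbour in x
                b = min(z[i, j - 1], z[i, j + 1])   # upwind neighbour in y
                fh = f[i, j] * h
                if abs(a - b) >= fh:                # update driven by one side
                    z_new = min(a, b) + fh
                else:                               # two-sided quadratic update
                    z_new = 0.5 * (a + b + np.sqrt(2.0 * fh * fh - (a - b) ** 2))
                z[i, j] = min(z[i, j], z_new)
    return z
```

A uniformly bright image yields a flat surface; a uniformly dark one yields depth growing away from the border, as expected for a distance-like viscosity solution.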

    Wavelet Algorithms for Complex Models

    Internal report. We present the results of experiments and tests with Wavelet Radiosity. We have developed a powerful wavelet radiosity implementation in which we can independently modify every geometrical component of the scene (description of the input data, representation of the spectral distribution, etc.) and every component of the global illumination algorithm (visibility algorithm, wavelet basis, etc.). This implementation has been tested on real-world applications: an archaeological site reconstruction with daylight illumination, an opera house front with artificial illumination, and the inside illumination of the Soda Hall building. In this paper, we present the results of our experiments, which are mostly about the interdependencies of the different parts of the general algorithm and the influence of each one on the final result. We also introduce several improvements to the wavelet radiosity algorithm that allow for higher rendering speed and lower memory use, thereby allowing rendering of architectural models of high complexity.

    Natural Metamers

    Given only a color camera's RGB measurement of a complete color signal spectrum, how can the spectrum be estimated? We propose and test a new method that answers this question and recovers an approximating spectrum. Although this approximation has intrinsic interest, our main focus is on using it to generate tristimulus values for color reproduction. In essence, this provides a new method of converting color camera signals to tristimulus coordinates, because a spectrum defines a unique point in tristimulus coordinates. Color reproduction is founded on producing spectra that are metamers to those appearing in the original scene. Once a spectrum's tristimulus coordinates are known, generating a metamer is a well-defined problem. Unfortunately, most color cameras cannot produce the necessary tristimulus coordinates directly because their color separation filters are not related by a linear transformation to the human color-matching functions. Color cameras are more likely to reproduce colors that look correct to the camera than to a human observer. Conversion from camera RGB triples to tristimulus values will always involve some type of estimation procedure unless cameras are redesigned. We compare the accuracy of our conversion strategy to that of one based on Horn's work on the exact reproduction of colored images. Our new method relies on expressing the color signal spectrum as a linear combination of basis functions. The results show that a principal component analysis in color-signal space yields the best basis for our purposes, since using it leads to the most “natural” color signal spectrum that is statistically likely to have generated a given camera signal.
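The linear-model step described above can be stated concretely. Below is a minimal sketch (our own illustration, not the paper's code): writing the unknown spectrum as a combination of three basis functions turns the camera's response into a 3×3 linear system, and the solved-for spectrum is by construction a metamer for the camera.

```python
import numpy as np

def recover_spectrum(rgb, sensors, basis):
    """Estimate a colour-signal spectrum from one camera RGB triple.
    sensors: (3, N) camera sensitivity curves at N sample wavelengths
    basis:   (N, 3) basis spectra (e.g. leading principal components)
    Returns an (N,) spectrum whose camera response reproduces rgb."""
    A = sensors @ basis            # (3, 3): camera response to each basis spectrum
    c = np.linalg.solve(A, rgb)    # basis coefficients
    return basis @ c               # reconstructed spectrum
```

If the true spectrum lies in the span of the basis, recovery is exact; otherwise the result is the model's best metameric approximation for that camera.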

    Photochromic textiles

    This thesis describes a new investigation into the relationship between the developed colour intensity of photochromic textiles and the time of UV exposure, as well as the time of relaxation. From this relationship, the potential of flexible textile-based sensor constructions that might be used for the identification of radiation intensity is demonstrated. In addition, the differences between photochromic pigment behaviour in solution and when incorporated into prints on textiles are demonstrated. Differences in the effect of the spectral power distributions of light sources on the photochromic response are also examined. Bi-exponential functions, which are used in optical yield (Oy) calculations, are shown to provide a good description of the kinetics of the colour-change intensity of photochromic pigments, giving a good fit. The optical yield Oy of the photochromic reaction is linearly related to the intensity of illumination E. The optical yields obtained from the photochromic reaction curves are described by a kinetic model, which defines the rate of colour change initiated by the external stimulus of UV light. Verification of the kinetic model is demonstrated for textile sensors with photochromic pigments applied by textile printing and by fibre mass dyeing. The thesis also describes a unique instrument developed by the author, which measures colour differences ΔE* and spectral remission curves derived from the photochromic colour change simultaneously with UV irradiation. In this thesis, the photochromic behaviour of selected pigments in three different applications (three types of media: textile prints, non-woven textiles and solution) is investigated.
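As a concrete illustration of the bi-exponential kinetics mentioned above (function and parameter names are ours, not the thesis's), colour build-up under UV exposure and relaxation after switch-off can be modelled as two superposed first-order processes:

```python
import numpy as np

def colour_buildup(t, a1, k1, a2, k2):
    """Developed colour intensity during UV exposure: two first-order
    processes, each saturating at its own amplitude a_i with rate k_i."""
    return a1 * (1.0 - np.exp(-k1 * t)) + a2 * (1.0 - np.exp(-k2 * t))

def colour_relaxation(t, a1, k1, a2, k2):
    """Intensity decay after the UV source is removed."""
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)
```

Fitting the four parameters to measured curves (e.g. with a least-squares routine) gives the kind of kinetic description used in the optical-yield calculations.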

    Computer synthesis of spectroradiometric images for color imaging systems analysis

    A technique to perform fully spectral color calculations through an extension of OpenGL has been created. This method of color computation is more accurate than the standard RGB model that most computer graphics algorithms use. By maintaining full wavelength information in color calculations, it is also possible to interactively simulate and display many important color phenomena such as metamerism and fluorescence. This technique is not limited to creating simple images suitable for interactive display, however: using this extension, it is also possible to synthesize spectroradiometric images of arbitrary spatial and spectral resolution for use in color imaging system analysis.
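A sketch of the core computation such a spectral pipeline enables (the matching functions below are synthetic stand-ins; a real system would use the CIE 1931 curves): integrating a sampled spectrum against colour-matching functions gives tristimulus values, and adding any component from the null space of those functions produces a metamer, the phenomenon the abstract says can be simulated.

```python
import numpy as np

def tristimulus(spectrum, cmf, d_lambda):
    """XYZ from a sampled colour-signal spectrum: X = sum S(l) xbar(l) dl, etc.
    cmf: (N, 3) colour-matching functions sampled at the same N wavelengths."""
    return spectrum @ cmf * d_lambda

def metamer_of(spectrum, cmf, perturbation):
    """A different spectrum with identical tristimulus values: project the
    perturbation onto the null space of the matching functions and add it."""
    coeffs = np.linalg.solve(cmf.T @ cmf, cmf.T @ perturbation)
    black = perturbation - cmf @ coeffs    # "metameric black" component
    return spectrum + black
```

Two such spectra render to the same colour for a human observer while differing physically, which is exactly what an RGB-only pipeline cannot represent.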

    Image synthesis based on a model of human vision

    Modern computer graphics systems are able to construct renderings of such high quality that viewers are deceived into regarding the images as coming from a photographic source. Large amounts of computing resources are expended in this rendering process, using complex mathematical models of lighting and shading. However, psychophysical experiments have revealed that viewers only regard certain informative regions within a presented image. Furthermore, it has been shown that these visually important regions contain low-level visual feature differences that attract the attention of the viewer. This thesis will present a new approach to image synthesis that exploits these experimental findings by modulating the spatial quality of image regions by their visual importance. Efficiency gains are therefore reaped, without sacrificing much of the perceived quality of the image. Two tasks must be undertaken to achieve this goal. Firstly, the design of an appropriate region-based model of visual importance, and secondly, the modification of progressive rendering techniques to effect an importance-based rendering approach. A rule-based fuzzy logic model is presented that computes, using spatial feature differences, the relative visual importance of regions in an image. This model improves upon previous work by incorporating threshold effects induced by global feature difference distributions and by using texture concentration measures. A modified approach to progressive ray-tracing is also presented. This new approach uses the visual importance model to guide the progressive refinement of an image. In addition, this concept of visual importance has been incorporated into supersampling, texture mapping and computer animation techniques. Experimental results are presented, illustrating the efficiency gains reaped from using this method of progressive rendering. 
    This visual importance-based rendering approach is expected to have applications in the entertainment industry, where image fidelity may be sacrificed for efficiency purposes, as long as the overall visual impression of the scene is maintained. Different aspects of the approach should find many other applications in image compression, image retrieval, progressive data transmission and active robotic vision.
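The refinement strategy described above, concentrating rendering effort where viewers look, comes down to distributing a fixed ray budget in proportion to region importance. A minimal sketch (the region list and budget handling are our own illustration, not the thesis's fuzzy-logic model):

```python
def allocate_samples(importance, budget):
    """Split a ray budget over image regions in proportion to their visual
    importance, so progressive refinement concentrates on salient regions."""
    total = sum(importance)
    raw = [budget * w / total for w in importance]
    counts = [int(r) for r in raw]               # floor of each ideal share
    # hand the rounding remainder to the largest fractional parts
    leftover = budget - sum(counts)
    order = sorted(range(len(raw)), key=lambda i: counts[i] - raw[i])
    for i in order[:leftover]:
        counts[i] += 1
    return counts
```

An important region then receives proportionally more supersampling or ray-tracing passes, while backgrounds are left at coarser quality.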

    Light Transport Simulation Connecting Different Spaces

    Degree type: course doctorate. Examination committee: (chair) Prof. Masayuki Inaba, Prof. Shigeru Chiba, Prof. Takeo Igarashi, Prof. Takayasu Matsuo, Lecturer Hideki Nakayama, Lecturer Toshiya Hachisuka, all of the University of Tokyo (東京大学).

    Multipass Photon Mapping and Photon Splatting for Optronic Rendering

    Much research has been done on global illumination simulation. First used in the visible spectrum, simulation is now increasingly applied to infrared rendering; the union of these two domains is called optronics. The main problem of current global illumination methods comes from the difficulty of handling light scattering, both at surfaces and in participating media. These methods offer satisfactory results for simple scenes, but performance collapses as complexity rises. In the first part of this thesis, we show why scattering phenomena must be taken into account in optronic simulation. In the second part, we state the equations that unify the various image synthesis methods, i.e. the rendering equation and the volume radiative transfer equation. The state of the art of global illumination methods presented in the third part shows that the Photon Mapping method currently offers the best compromise between performance and quality. Nevertheless, the quality of the results obtained with this method depends on the number of photons that can be stored, and hence on the available memory. 
    In the fourth part, we propose an evolution of the method, called Multipass Photon Mapping, which removes this memory dependency and thus achieves very high accuracy without requiring a costly hardware configuration. Another problem inherent to Photon Mapping is the long rendering time needed for participating media. In the fifth and last part of this thesis, we propose a method, called Volume Photon Splatting, which takes advantage of density estimation to efficiently reconstruct volume radiance from the photon map. Our idea is to separate the computation of emission, absorption and out-scattering from the computation of in-scattering, and to optimize the latter with a dual approach to density estimation, as it is the most computationally expensive part. Our method extends Photon Splatting, which optimizes the computation time of Photon Mapping for surface rendering, to participating media, and thus considerably reduces participating-media rendering times. Even though our method is faster than Photon Mapping at equal quality, we also propose a GPU-based optimization of our algorithm.
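The density-estimation step at the heart of photon mapping, gathering nearby photons to reconstruct radiance, can be sketched as follows. This is the textbook nearest-neighbour surface estimate, not the thesis's volume or splatting variant:

```python
import numpy as np

def radiance_estimate(positions, powers, x, k=50):
    """Photon-map density estimate at point x: gather the k nearest photons
    and divide their total power by the area of the smallest disc that
    encloses them. (The volume variant divides by a sphere volume instead.)"""
    d = np.linalg.norm(positions - x, axis=1)   # distance of every photon to x
    nearest = np.argsort(d)[:k]                 # indices of the k closest
    r = d[nearest].max()                        # gather radius
    return powers[nearest].sum() / (np.pi * r * r)
```

Because the estimate depends on how many photons can be gathered, its quality is tied to how many photons fit in memory, which is the limitation the multipass variant is designed to remove.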

    Fast Volume Rendering and Deformation Algorithms

    Volume rendering is a technique for the simultaneous visualization of surfaces and inner structures of objects. However, the huge number of volume primitives (voxels) in a volume leads to high computational cost. In this dissertation I developed two algorithms for the acceleration of volume rendering and volume deformation. The first algorithm accelerates ray casting of volumes. Previous ray casting acceleration techniques like space-leaping and early ray termination are only efficient when most voxels in a volume are either opaque or transparent. When many voxels are semi-transparent, the rendering time increases considerably. Our new algorithm improves the performance of ray casting of semi-transparently mapped volumes by exploiting the opacity coherency in object space, leading to a speedup factor between 1.90 and 3.49 when rendering semi-transparent volumes. The acceleration is realized with the help of pre-computed coherency distances. We developed an efficient algorithm to encode the coherency information, which requires less than 12 seconds for data sets with about 8 million voxels. The second algorithm is for volume deformation. Unlike traditional methods, our method incorporates the two stages of volume deformation, i.e. deformation and rendering, into a unified process. Instead of deforming each voxel to generate an intermediate deformed volume, the algorithm follows inversely deformed rays to generate the desired deformation. The calculations and memory needed for generating the intermediate volume are thus saved. Deformation continuity is achieved by adaptive ray division, which matches the amplitude of the local deformation. We propose approaches for shading and opacity adjustment which guarantee the visual plausibility of the deformation results. We achieve an additional deformation speedup factor of 2.34 to 6.58 by incorporating early ray termination, space-leaping and the coherency acceleration technique in the new deformation algorithm.
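Early ray termination, which both the original ray caster and the new deformation algorithm reuse, is simple to state: composite samples front to back and stop once accumulated opacity makes the remainder invisible. A minimal sketch (scalar grey values for brevity; a real caster composites RGB):

```python
def composite_ray(colors, alphas, threshold=0.99):
    """Front-to-back compositing along one ray with early ray termination.
    Stops as soon as accumulated opacity A makes later samples invisible.
    Returns (accumulated colour, accumulated opacity, samples processed)."""
    C = A = 0.0
    steps = 0
    for c, a in zip(colors, alphas):
        C += (1.0 - A) * a * c        # remaining transmittance weights the sample
        A += (1.0 - A) * a
        steps += 1
        if A >= threshold:            # later samples contribute < 1 - threshold
            break
    return C, A, steps
```

For mostly opaque transfer functions the loop exits after a handful of samples, which is why the technique loses its advantage when many voxels are semi-transparent and opacity accumulates slowly.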