
    Steerable Discrete Cosine Transform

    In image compression, classical block-based separable transforms tend to be inefficient when image blocks contain arbitrarily shaped discontinuities. For this reason, transforms incorporating directional information are an appealing alternative. In this paper, we propose a new approach to this problem, namely a discrete cosine transform (DCT) that can be steered in any chosen direction. This transform, called the steerable DCT (SDCT), allows pairs of basis vectors to be rotated in a flexible way and enables precise matching of the directionality of each image block, achieving improved coding efficiency. The optimal rotation angles for the SDCT can be obtained as the solution of a suitable rate-distortion (RD) problem. We propose iterative methods to search for this solution, and we develop a fully fledged image encoder to compare our techniques with other competing transforms in practice. Analytical and numerical results show that the SDCT outperforms both the DCT and state-of-the-art directional transforms.
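    The core mechanism is a Givens rotation applied to pairs of 2D DCT basis vectors, with an angle chosen per block. Below is a minimal sketch for an 8x8 block, assuming a standard orthonormal DCT-II basis; the choice of the (0,1)/(1,0) frequency pair and the fixed 30-degree angle are illustrative, not the paper's rate-distortion-optimal ones.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal 1D DCT-II basis; rows are basis vectors."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    C[0, :] /= np.sqrt(2.0)
    return C

def steer_pair(basis, a, b, theta):
    """Givens-rotate rows a and b of an orthonormal basis by angle theta.

    The result is still orthonormal, so it remains a valid block transform.
    """
    R = basis.copy()
    R[a] = np.cos(theta) * basis[a] + np.sin(theta) * basis[b]
    R[b] = -np.sin(theta) * basis[a] + np.cos(theta) * basis[b]
    return R

n = 8
C = dct_matrix(n)
basis2d = np.kron(C, C)                      # separable 2D DCT basis, one row per (u, v)
steered = steer_pair(basis2d, 0 * n + 1, 1 * n + 0, np.pi / 6)
block = np.random.rand(n, n)
coeffs = steered @ block.ravel()             # forward steered transform of one block
```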

    The Fiber Walk: A Model of Tip-Driven Growth with Lateral Expansion

    Tip-driven growth processes underlie the development of many plants. To date, tip-driven growth processes have been modelled as an elongating path or series of segments without taking into account lateral expansion during elongation. Instead, models of growth often introduce an explicit thickness by expanding the area around the completed elongated path. Modelling expansion in this way can lead to contradictions in the physical plausibility of the resulting surface and to uncertainty about how the object reached certain regions of space. Here, we introduce "fiber walks" as a self-avoiding random walk model for tip-driven growth processes that includes lateral expansion. In 2D, the fiber walk takes place on a square lattice and the space occupied by the fiber is modelled as a lateral contraction of the lattice. This contraction influences the possible follow-up steps of the fiber walk. The boundary of the area consumed by the contraction is derived as the dual of the lattice faces adjacent to the fiber. We show that fiber walks generate fibers with well-defined curvatures and enable identification of the process underlying the occupancy of physical space. Hence, fiber walks provide a base from which to model both the extension and expansion of physical biological objects with finite thickness. Comment: PLoS ONE (in press)
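    A minimal sketch of the self-avoidance ingredient of the model: a random walk on the 2D square lattice that never revisits a site. The lateral contraction of the lattice around the growing fiber, which is the paper's actual contribution, is omitted here.

```python
import random

def self_avoiding_walk(n_steps, seed=0):
    """Grow a path on the 2D square lattice without revisiting any site."""
    rng = random.Random(seed)
    path = [(0, 0)]
    visited = {(0, 0)}
    for _ in range(n_steps):
        x, y = path[-1]
        free = [(x + dx, y + dy)
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if (x + dx, y + dy) not in visited]
        if not free:            # the walk is trapped: no unvisited neighbour remains
            break
        nxt = rng.choice(free)
        visited.add(nxt)
        path.append(nxt)
    return path

fiber = self_avoiding_walk(200)
print(len(fiber), "lattice sites occupied by the fiber tip path")
```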

    Photorealistic physically based render engines: a comparative study

    Pérez Roig, F. (2012). Photorealistic physically based render engines: a comparative study. http://hdl.handle.net/10251/14797

    Exact Symbolic-Numeric Computation of Planar Algebraic Curves

    We present a novel certified and complete algorithm to compute arrangements of real planar algebraic curves. It provides a geometric-topological analysis of the decomposition of the plane induced by a finite number of algebraic curves in terms of a cylindrical algebraic decomposition. From a high-level perspective, the overall method splits into two main subroutines, namely an algorithm denoted Bisolve to isolate the real solutions of a zero-dimensional bivariate system, and an algorithm denoted GeoTop to analyze a single algebraic curve. Compared to existing approaches based on elimination techniques, we considerably improve the corresponding lifting steps in both subroutines. As a result, generic position of the input system is never assumed, and thus our algorithm never requires any change of coordinates. In addition, we significantly limit the types of exact operations involved; that is, we use only resultant and gcd computations as purely symbolic operations. These results are achieved by combining techniques from different fields such as (modular) symbolic computation, numerical analysis and algebraic geometry. We have implemented our algorithms as prototypical contributions to the C++ project CGAL. They exploit graphics hardware to expedite the symbolic computations. We have also compared our implementation with the current reference implementations, that is, LGP and Maple's Isolate for polynomial system solving, and CGAL's bivariate algebraic kernel for analyses and arrangement computations of algebraic curves. For various series of challenging instances, our exhaustive experiments show that the new implementations outperform the existing ones. Comment: 46 pages, 4 figures, submitted to Special Issue of TCS on SNC 2011. arXiv admin note: substantial text overlap with arXiv:1010.1386 and arXiv:1103.469
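    The elimination idea underlying such resultant-based solvers can be sketched with SymPy: project the bivariate system onto each axis with resultants, isolate the real roots of the projections, and lift by checking which candidate pairs satisfy the original system. This is a naive stand-in for the Bisolve subroutine on an example system of my own choosing; the paper's certified filtering and GPU-accelerated symbolic steps are not reproduced.

```python
import sympy as sp

x, y = sp.symbols('x y')
# Example zero-dimensional bivariate system (not taken from the paper).
f = x**2 + y**2 - 2
g = x * y - 1

# Projection: resultants eliminate one variable at a time (purely symbolic step).
res_x = sp.resultant(f, g, y)   # univariate polynomial in x
res_y = sp.resultant(f, g, x)   # univariate polynomial in y

# Lifting (naive): pair real roots of the projections and keep the pairs
# that actually solve the original system.
xs = set(sp.real_roots(res_x))
ys = set(sp.real_roots(res_y))
solutions = [(a, b) for a in xs for b in ys
             if f.subs({x: a, y: b}).equals(0) and g.subs({x: a, y: b}).equals(0)]
print(solutions)    # the two real solutions: (1, 1) and (-1, -1)
```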

    Toward robust and efficient physically-based rendering

    Physically-based rendering is used for design, illustration and computer animation. It consists of producing photorealistic images by solving the equations that describe how light travels in a scene. Although these equations have been known for a long time and many algorithms for light simulation have been developed, no single algorithm can solve them efficiently for every scene. Instead of trying to develop a new algorithm devoted to light simulation, we propose to enhance the robustness of most methods in use today, as well as those that may be developed in the years to come. We do this by first identifying the sources of non-robustness in a physically-based rendering engine, and then addressing them with specific algorithms. The result is a set of methods based on different mathematical and algorithmic tools, each aiming at improving a different part of a rendering engine. We also investigate how current hardware architectures can be exploited to the fullest to produce more efficient algorithms, without adding approximations. Although the contributions presented in this dissertation are meant to be combined, each of them can be used in a standalone way: they have been designed to be internally independent of each other.
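    For reference, a standard statement of the equation such engines solve, the rendering equation (this is the textbook form, not a formula quoted from the thesis):

```latex
% Outgoing radiance = emitted radiance + reflected incoming radiance,
% integrated over the hemisphere of incoming directions at point x.
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, \mathrm{d}\omega_i
```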

    Hierarchical Variance Reduction Techniques for Monte Carlo Rendering

    Ever since the first three-dimensional computer graphics appeared half a century ago, the goal has been to model and simulate how light interacts with materials and objects to form an image. The ultimate goal is photorealistic rendering, where the created images reach a level of accuracy that makes them indistinguishable from photographs of the real world. There are many applications: visualization of products and architectural designs yet to be built, special effects, computer-generated films, virtual reality, and video games, to name a few. However, the problem has proven tremendously complex; the illumination at any point is described by a recursive integral for which a closed-form solution seldom exists. Instead, computer simulation and Monte Carlo methods are commonly used to statistically estimate the result. This introduces undesirable noise, or variance, and a large body of research has been devoted to finding ways to reduce the variance. I continue along this line of research, and present several novel techniques for variance reduction in Monte Carlo rendering, as well as a few related tools. The research in this dissertation focuses on using importance sampling to pick a small set of well-distributed point samples. As the primary contribution, I have developed the first methods to explicitly draw samples from the product of distant high-frequency lighting and complex reflectance functions. By sampling the product, low-noise results can be achieved using a very small number of samples, which is important to minimize the rendering times. Several different hierarchical representations are explored to allow efficient product sampling. In the first publication, the key idea is to work in a compressed wavelet basis, which allows fast evaluation of the product. Many of the initial restrictions of this technique were removed in follow-up work, allowing higher-resolution uncompressed lighting and avoiding precomputation of reflectance functions. My second main contribution is to present one of the first techniques to take the triple product of lighting, visibility and reflectance into account to further reduce the variance in Monte Carlo rendering. For this purpose, control variates are combined with importance sampling to solve the problem in a novel way. A large part of the technique also focuses on analysis and approximation of the visibility function. To further refine the above techniques, several useful tools are introduced. These include a fast, low-distortion map to represent (hemi)spherical functions, a method to create high-quality quasi-random points, and an optimizing compiler for analyzing shaders using interval arithmetic. The latter automatically extracts bounds for importance sampling of arbitrary shaders, as opposed to using a priori known reflectance functions. In summary, the work presented here takes the field of computer graphics one step further towards making photorealistic rendering practical for a wide range of uses. By introducing several novel Monte Carlo methods, more sophisticated lighting and materials can be used without increasing the computation times. The research is aimed at domain-specific solutions to the rendering problem, but I believe that much of the new theory is applicable in other parts of computer graphics, as well as in other fields.
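    The variance-reduction principle behind product sampling can be shown on a toy 1D problem: estimating an integral whose integrand is sharply peaked, as a lighting-times-reflectance product typically is. The sketch below contrasts uniform sampling with importance sampling from a hand-picked Gaussian; the thesis' contribution is constructing such sampling densities automatically on spherical domains, which is not attempted here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy integrand on [0, 1], sharply peaked like a lighting x reflectance product.
def f(x):
    return np.where((x >= 0.0) & (x <= 1.0), np.exp(-40.0 * (x - 0.5) ** 2), 0.0)

n = 10_000

# Uniform sampling: most samples land where the integrand is tiny -> high variance.
u = rng.random(n)
est_uniform = f(u).mean()

# Importance sampling: draw from a Gaussian centred on the peak and divide by its
# density; samples outside [0, 1] contribute zero, so the estimate stays unbiased.
sigma = 0.15
xs = rng.normal(0.5, sigma, size=n)
pdf = np.exp(-0.5 * ((xs - 0.5) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
est_importance = (f(xs) / pdf).mean()

print(est_uniform, est_importance)   # both approach ~0.28, the latter with far less noise
```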

    Mesh adaptation for high-order flow simulations

    Mesh adaptation has only recently been considered for high-order flow simulations, and many techniques still need to be made more robust and efficient for the curvilinear meshes required by these high-order methods. This thesis covers the developments made to improve the mesh generation and adaptation capabilities of the open-source spectral/hp element framework Nektar++ and its dedicated mesh utility NekMesh. It first covers the generation of quality initial meshes typically required before an iterative adaptation procedure can be used. For optimal performance of the spectral/hp element method, quadrilateral and hexahedral meshes are preferred, and two methods are presented to achieve this, either entirely or partially. The first method, inspired by cross-field methods, solves a Laplace problem to obtain a guiding field from which a valid two-dimensional quadrilateral block decomposition can be automatically obtained. In turn, naturally curved meshes are generated. The second method takes advantage of the medial axis to generate structured partitions in the boundary layer region of three-dimensional domains. The method proves to be robust in generating hybrid high-order meshes with boundary-layer-aligned prismatic elements near boundaries and tetrahedral elements elsewhere. The thesis goes on to explore the adaptation of high-order meshes for the simulation of flows using a spectral/hp element formulation. First, a new approach to moving meshes based on a variational framework, referred to here as r-adaptation, is described. This new r-adaptation module is then enhanced by p-adaptation for the simulation of compressible inviscid flows with shocks. Where the flow is smooth, p-adaptation is used to benefit from the spectral convergence of the spectral/hp element methods. Where the flow is discontinuous, e.g. at shock waves, r-adaptation clusters nodes together to better capture these field discontinuities. The benefits of this dual, rp-adaptation approach are demonstrated through two-dimensional benchmark test cases.
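    The r-adaptation idea (moving nodes without changing connectivity so they cluster at flow features) can be illustrated in 1D with simple equidistribution of a monitor function; the thesis uses a variational formulation on curvilinear high-order meshes, so the sketch below is only a conceptual analogue with made-up quantities.

```python
import numpy as np

def equidistribute(x_fine, monitor, n_nodes):
    """Place n_nodes mesh points so each cell carries an equal share of the monitor.

    x_fine  : fine reference grid over the domain
    monitor : monitor function on that grid (large where resolution is needed)
    """
    # Cumulative "mass" of the monitor along the domain (trapezoidal rule).
    mass = np.concatenate(([0.0], np.cumsum(0.5 * (monitor[1:] + monitor[:-1])
                                            * np.diff(x_fine))))
    mass /= mass[-1]
    # Invert the cumulative distribution at equally spaced levels.
    return np.interp(np.linspace(0.0, 1.0, n_nodes), mass, x_fine)

x_fine = np.linspace(0.0, 1.0, 2001)
# Monitor mimicking a shock near x = 0.6: the large values attract mesh nodes.
monitor = 1.0 + 50.0 * np.exp(-((x_fine - 0.6) / 0.02) ** 2)
mesh = equidistribute(x_fine, monitor, 41)
print(np.round(np.diff(mesh), 4))   # cell sizes shrink sharply around x = 0.6
```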

    Parallel hierarchical global illumination

    Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, we have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed-memory or shared-memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recently published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene that converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run in most parallel environments, from a shared-memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.
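    A minimal sketch of the direct-simulation idea: particles are emitted from the light source and followed through bounces until absorption, and the absorbed energy is tallied on the receiving surfaces. The toy 2D room, the uniform (rather than cosine-weighted) bounce directions and the single albedo are simplifications of my own; the parallel decomposition and adaptive geometry refinement of the Photon algorithm are not reproduced.

```python
import math, random

random.seed(0)

ALBEDO = 0.7                       # probability that a photon is reflected at a wall hit
absorbed = {'left': 0, 'right': 0, 'bottom': 0, 'top': 0}

def first_wall_hit(x, y, dx, dy):
    """Return (wall, x, y) where the ray from (x, y) along (dx, dy) leaves the unit square."""
    hits = []
    if dx < 0: hits.append((-x / dx, 'left'))
    if dx > 0: hits.append(((1.0 - x) / dx, 'right'))
    if dy < 0: hits.append((-y / dy, 'bottom'))
    if dy > 0: hits.append(((1.0 - y) / dy, 'top'))
    t, wall = min(hits)
    return wall, x + t * dx, y + t * dy

for _ in range(100_000):                        # photons emitted from a central point light
    x, y = 0.5, 0.5
    angle = random.uniform(0.0, 2.0 * math.pi)
    dx, dy = math.cos(angle), math.sin(angle)
    while True:
        wall, x, y = first_wall_hit(x, y, dx, dy)
        if random.random() > ALBEDO:            # analog absorption: the photon ends here
            absorbed[wall] += 1
            break
        # Reflect back into the room with a new random direction (simplified diffuse bounce).
        angle = random.uniform(0.0, 2.0 * math.pi)
        dx, dy = math.cos(angle), math.sin(angle)
        if wall == 'left':     dx = abs(dx)
        elif wall == 'right':  dx = -abs(dx)
        elif wall == 'bottom': dy = abs(dy)
        elif wall == 'top':    dy = -abs(dy)

print(absorbed)   # by symmetry, each wall absorbs roughly a quarter of the photons
```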