
    Hierarchical Variance Reduction Techniques for Monte Carlo Rendering

    Ever since the first three-dimensional computer graphics appeared half a century ago, the goal has been to model and simulate how light interacts with materials and objects to form an image. The ultimate goal is photorealistic rendering, where the created images reach a level of accuracy that makes them indistinguishable from photographs of the real world. There are many applications: visualization of products and architectural designs yet to be built, special effects, computer-generated films, virtual reality, and video games, to name a few. However, the problem has proven tremendously complex; the illumination at any point is described by a recursive integral to which a closed-form solution seldom exists. Instead, computer simulation and Monte Carlo methods are commonly used to statistically estimate the result. This introduces undesirable noise, or variance, and a large body of research has been devoted to finding ways to reduce the variance. I continue along this line of research and present several novel techniques for variance reduction in Monte Carlo rendering, as well as a few related tools. The research in this dissertation focuses on using importance sampling to pick a small set of well-distributed point samples. As the primary contribution, I have developed the first methods to explicitly draw samples from the product of distant high-frequency lighting and complex reflectance functions. By sampling the product, low-noise results can be achieved using a very small number of samples, which is important to minimize rendering times. Several different hierarchical representations are explored to allow efficient product sampling. In the first publication, the key idea is to work in a compressed wavelet basis, which allows fast evaluation of the product. Many of the initial restrictions of this technique were removed in follow-up work, allowing higher-resolution uncompressed lighting and avoiding precomputation of reflectance functions. My second main contribution is one of the first techniques to take the triple product of lighting, visibility, and reflectance into account to further reduce the variance in Monte Carlo rendering. For this purpose, control variates are combined with importance sampling to solve the problem in a novel way. A large part of the technique also focuses on analysis and approximation of the visibility function. To further refine the above techniques, several useful tools are introduced. These include a fast, low-distortion map to represent (hemi)spherical functions, a method to create high-quality quasi-random points, and an optimizing compiler for analyzing shaders using interval arithmetic. The latter automatically extracts bounds for importance sampling of arbitrary shaders, as opposed to using a priori known reflectance functions. In summary, the work presented here takes the field of computer graphics one step further towards making photorealistic rendering practical for a wide range of uses. By introducing several novel Monte Carlo methods, more sophisticated lighting and materials can be used without increasing the computation times. The research is aimed at domain-specific solutions to the rendering problem, but I believe that much of the new theory is applicable in other parts of computer graphics, as well as in other fields.
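    To make the variance-reduction idea concrete, the sketch below contrasts uniform sampling with importance sampling from the product of a lighting term and a reflectance term, on a tabulated 1-D toy integrand. It is only an illustration of why product sampling reduces noise; the dissertation's actual methods work on (hemi)spherical functions with hierarchical and wavelet representations, and all names and numbers here are placeholders for the example.

        # Toy comparison: uniform vs. product importance sampling of a 1-D integral
        # of lighting L times reflectance f (both tabulated on a grid).
        import numpy as np

        rng = np.random.default_rng(0)
        x = np.linspace(0.0, 1.0, 1024)
        L = np.exp(-80.0 * (x - 0.3) ** 2) + 0.05      # spiky stand-in "lighting"
        f = 0.2 + 0.8 * x ** 3                         # smooth stand-in "BRDF"
        dx = x[1] - x[0]
        reference = np.sum(L * f) * dx                 # quadrature ground truth

        def estimate(n, pdf):
            """Monte Carlo estimate of the integral of L*f with n samples from pdf."""
            cdf = np.cumsum(pdf * dx)
            idx = np.clip(np.searchsorted(cdf, rng.random(n)), 0, x.size - 1)
            return np.mean(L[idx] * f[idx] / pdf[idx])

        uniform_pdf = np.full_like(x, 1.0)             # constant density on [0, 1]
        product_pdf = (L * f) / reference              # density proportional to L*f

        for name, pdf in (("uniform", uniform_pdf), ("product", product_pdf)):
            runs = np.array([estimate(16, pdf) for _ in range(200)])
            print(f"{name:8s} mean={runs.mean():.4f} std={runs.std():.4f} ref={reference:.4f}")

    With the product density, the integrand divided by the density is (nearly) constant, so the estimator's variance collapses; that is the effect the dissertation exploits with far more general lighting and reflectance.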

    Towards a Traceable Data Model Accommodating Bounded Uncertainty for DST Based Computation of BRCA1/2 Mutation Probability With Age

    In this paper, we describe the requirements for traceable open-source data retrieval in the context of computing BRCA1/2 mutation probabilities (mutations in two tumor-suppressor genes responsible for hereditary BReast and/or ovarian CAncer). We show how such data can be used to develop a Dempster-Shafer model for computing the probability of BRCA1/2 mutations, enhanced by taking into account the actual age of a patient or a family member in an appropriate way, even if that age is not known exactly. The model is compared with PENN II and BOADICEA (which are based on undisclosed data), two established platforms for this purpose that are accessible online, as well as with our own previous models. A proof-of-concept implementation shows that set-based techniques are able to provide better information about mutation probabilities, while simultaneously highlighting the need for high-quality ground-truth data.
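    As a concrete illustration of the set-based (Dempster-Shafer) reasoning referred to above, the following sketch combines two mass functions over the frame {mutation, no mutation} with Dempster's rule and reports the resulting belief/plausibility interval. The masses are hypothetical placeholders, not values from the paper's model, which derives them from traceable data and the (possibly imprecisely known) age.

        # Minimal sketch of Dempster's rule of combination over the frame {M, N}
        # (M = BRCA1/2 mutation present, N = not present). Masses are hypothetical.
        from itertools import product

        FRAME = frozenset({"M", "N"})

        def combine(m1, m2):
            """Dempster's rule: combine two mass functions given as {frozenset: mass}."""
            combined, conflict = {}, 0.0
            for (a, wa), (b, wb) in product(m1.items(), m2.items()):
                inter = a & b
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + wa * wb
                else:
                    conflict += wa * wb
            norm = 1.0 - conflict
            return {s: w / norm for s, w in combined.items()}

        def belief(m, hypothesis):
            return sum(w for s, w in m.items() if s <= hypothesis)

        def plausibility(m, hypothesis):
            return sum(w for s, w in m.items() if s & hypothesis)

        # Hypothetical evidence sources: family history and an age-dependent source,
        # each leaving mass on the whole frame to express bounded uncertainty.
        m_history = {frozenset({"M"}): 0.30, frozenset({"N"}): 0.40, FRAME: 0.30}
        m_age     = {frozenset({"M"}): 0.20, frozenset({"N"}): 0.50, FRAME: 0.30}

        m = combine(m_history, m_age)
        h = frozenset({"M"})
        print(f"Bel(mutation) = {belief(m, h):.3f}, Pl(mutation) = {plausibility(m, h):.3f}")

    The [Bel, Pl] interval is what makes the output "set-based": instead of a single point probability, the model reports bounds that reflect how much evidence genuinely supports the mutation hypothesis.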

    Shape Representation by Truncated Euclidean Distance (Representação de formas por distância euclidiana truncada)

    Advisor: Jorge Stolfi. Master's dissertation, Universidade Estadual de Campinas, Instituto de Computação. In this dissertation, we study the use of the clipped (truncated) signed Euclidean distance transform to represent the shape of n-dimensional objects, with emphasis on three dimensions and on applications in computer-aided design and manufacturing (CAD/CAM). The representation consists of a digital image where every pixel holds the distance from its center to the boundary of the object, quantized and truncated. We develop tools for generating this representation with informative guarantees about the interior and exterior of the represented object, and study algorithms for conversion to and from other common shape representations, such as binary and ternary images, polygons, triangle meshes, and procedural models. Finally, we investigate the empirical errors incurred when extracting a boundary representation from the truncated distance representation and the impact of its parameters on this task. Master's in Computer Science; grant 131045/2018-0, CAPES, CNP.
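    A minimal sketch of the representation described above, for a 2-D binary shape: each pixel stores the signed Euclidean distance from its center to the object boundary, truncated to a clipping radius and quantized to integers. Function names and parameter choices are illustrative assumptions only; the dissertation covers n-dimensional objects and conversions to and from polygons, meshes, and procedural models.

        # Clipped, quantized signed Euclidean distance field of a 2-D binary shape.
        import numpy as np
        from scipy.ndimage import distance_transform_edt

        def clipped_signed_distance(mask, clip, levels=256):
            """Per-pixel signed distance to the shape boundary, truncated to
            [-clip, clip] and quantized to `levels` integer steps."""
            inside = distance_transform_edt(mask)       # distance to background
            outside = distance_transform_edt(~mask)     # distance to foreground
            sdf = outside - inside                      # > 0 outside, < 0 inside
            sdf = np.clip(sdf, -clip, clip)             # truncation
            q = np.round((sdf + clip) / (2 * clip) * (levels - 1)).astype(np.uint8)
            return q                                    # quantized representation

        # Example: a filled disc on a 128x128 grid.
        yy, xx = np.mgrid[0:128, 0:128]
        disc = (xx - 64) ** 2 + (yy - 64) ** 2 <= 40 ** 2
        rep = clipped_signed_distance(disc, clip=8.0)
        print(rep.shape, rep.dtype, rep.min(), rep.max())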

    Rigorous solution techniques for numerical constraint satisfaction problems

    A constraint satisfaction problem (e.g., a system of equations and inequalities) consists of a finite set of constraints specifying which value combinations from given variable domains are admitted. It is called numerical if its variable domains are continuous. Such problems arise in many applications, but they form a difficult problem class since they are NP-hard. Solving a constraint satisfaction problem means finding one or more value combinations satisfying all its constraints. Numerical computations on floating-point numbers in computers often suffer from rounding errors. The rigorous control of rounding errors during numerical computations is highly desired in many applications because it would benefit the quality and reliability of the decisions based on the solutions found by the computations. Various aspects of rigorous numerical computations in solving constraint satisfaction problems are addressed in this thesis: search, constraint propagation, combination of inclusion techniques, and post-processing. The solution of a constraint satisfaction problem is essentially performed by a search. In this thesis, we propose a new complete search technique (i.e., it can find all solutions within a predetermined tolerance) for numerical constraint satisfaction problems. This technique is general and can be used in place of branching steps in most branch-and-prune methods. Moreover, this new technique speeds up the most recent general search strategy (often by an order of magnitude) and provides a concise representation of solutions. To make a constraint satisfaction problem easier to solve, constraint propagation, a major approach in the constraint programming [1] field, is often used to reduce the variable domains (by discarding redundant value combinations from the domains). Based on directed acyclic graphs, we propose a new constraint propagation technique and a method for coordinating constraint propagation and search. More importantly, we propose a novel generic scheme for combining multiple inclusion techniques [2] in numerical constraint propagation. This scheme allows bringing into the constraint propagation framework the strengths of various techniques coming from different fields. To illustrate the flexibility and efficiency of the generic scheme, we build on it and devise several specific combination strategies for rigorous numerical constraint propagation using interval constraint propagation, interval arithmetic, affine arithmetic, and linear programming. Our experiments show that the new propagation techniques outperform previously available methods by 1 to 4 orders of magnitude or more in speed. We also propose several post-processing techniques for the representation of continuums of solutions. Based on connectedness, they group each cluster of connected solution subsets into a larger subset, thus providing additional grouping information. Potentially, these techniques enable interval-based solution techniques to serve as alternatives to bounding-volume techniques in applications such as collision detection and interactive graphics.
    [1] Constraint programming is an approach to programming that relies on both reasoning and computing.
    [2] An inclusion technique computes enclosures of a set of interest; it is also called an enclosure technique.
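    The following sketch illustrates the generic branch-and-prune idea discussed above on a two-variable system, using naive interval arithmetic to discard boxes that provably contain no solution and bisecting the rest. It is a didactic illustration under simplifying assumptions: it does not use outward rounding, and the thesis's techniques add much stronger pruning (constraint propagation, affine arithmetic, linear programming) and a more sophisticated search.

        # Naive interval branch-and-prune for:  x^2 + y^2 = 1  and  y - x^2 = 0.
        # Real rigorous solvers round interval endpoints outward; plain floats
        # are used here only for brevity.

        def iadd(a, b): return (a[0] + b[0], a[1] + b[1])
        def isub(a, b): return (a[0] - b[1], a[1] - b[0])
        def isqr(a):
            lo, hi = a
            cands = (lo * lo, hi * hi)
            return (0.0 if lo <= 0.0 <= hi else min(cands), max(cands))

        def maybe_feasible(box):
            """Interval test: can the box contain a point with x^2+y^2=1 and y=x^2?"""
            x, y = box
            c1 = iadd(isqr(x), isqr(y))      # enclosure of x^2 + y^2
            c2 = isub(y, isqr(x))            # enclosure of y - x^2
            return c1[0] <= 1.0 <= c1[1] and c2[0] <= 0.0 <= c2[1]

        def branch_and_prune(box, tol=1e-3):
            if not maybe_feasible(box):
                return []                    # prune: box provably has no solution
            widths = [hi - lo for lo, hi in box]
            if max(widths) <= tol:
                return [box]                 # small enough: report as an enclosure
            k = widths.index(max(widths))    # bisect the widest variable domain
            lo, hi = box[k]
            mid = 0.5 * (lo + hi)
            left, right = list(box), list(box)
            left[k], right[k] = (lo, mid), (mid, hi)
            return branch_and_prune(tuple(left), tol) + branch_and_prune(tuple(right), tol)

        boxes = branch_and_prune(((-2.0, 2.0), (-2.0, 2.0)))
        print(f"{len(boxes)} small boxes may contain solutions, e.g. {boxes[0]}")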

    Dynamically reconfigurable architecture for embedded computer vision systems

    The objective of this research work is to design, develop, and implement a new architecture that integrates on the same chip all the processing levels of a complete Computer Vision system, so that execution is efficient without compromising power consumption, while keeping the cost low. For this purpose, an analysis and classification of different mathematical operations and algorithms commonly used in Computer Vision is carried out, as well as an in-depth review of the image-processing capabilities of current-generation hardware devices. This makes it possible to determine the requirements and the key aspects of an efficient architecture. A representative set of algorithms is employed as a benchmark to evaluate the proposed architecture, which is implemented on an FPGA-based system-on-chip. Finally, the prototype is compared to other related approaches in order to determine its advantages and weaknesses.

    Deliverable D1.1 State of the art and requirements analysis for hypervideo

    This deliverable presents a state-of-the-art and requirements analysis report for hypervideo, authored as part of WP1 of the LinkedTV project. Initially, we present some use-case scenarios (from the viewers' perspective) in the LinkedTV project, and through the analysis of the distinctive needs and demands of each scenario we point out the technical requirements from a user-side perspective. Subsequently, we study methods for the automatic and semi-automatic decomposition of audiovisual content in order to effectively support the annotation process. Considering that multimedia content comprises different types of information, i.e., visual, textual, and audio, we report various methods for the analysis of these three different streams. Finally, we present various annotation tools that could integrate the developed analysis results so as to effectively support users (video producers) in the semi-automatic linking of hypervideo content, and based on them we report on the initial progress in building the LinkedTV annotation tool. For each class of techniques discussed in the deliverable, we present evaluation results from applying one such method from the literature to a dataset well suited to the needs of the LinkedTV project, and we indicate the future technical requirements that should be addressed in order to achieve higher levels of performance (e.g., in terms of accuracy and time efficiency), as necessary.

    Abstractions and performance optimisations for finite element methods

    Finding numerical solutions to partial differential equations (PDEs) is an essential task in the discipline of scientific computing. In designing software tools for this task, one of the ultimate goals is to balance the needs for generality, ease of use, and high performance. Domain-specific systems based on code generation techniques, such as Firedrake, attempt to address this problem with a design consisting of a hierarchy of abstractions, where users specify the mathematical problem via a high-level, descriptive interface that is progressively lowered through the intermediate abstractions. Well-designed abstraction layers are essential for performing code transformations and optimisations robustly and efficiently, generating high-performance code without user intervention. This thesis discusses several topics on the design of the abstraction layers of Firedrake, and demonstrates the benefits of its software architecture by providing examples of various optimising code transformations at the appropriate abstraction layers. In particular, we discuss the advantage of describing the local assembly stage of a finite element solver in an intermediate representation based on symbolic tensor algebra. We successfully lift specific loop optimisations, previously implemented by rewriting the ASTs of the local assembly kernels, to this higher-level tensor language, improving compilation speed and optimisation effectiveness. The global assembly phase involves the application of local assembly kernels to a collection of entities of an unstructured mesh. We redesign the abstraction to express the global assembly loop nests using tools and concepts based on the polyhedral model. This enables us to implement a cross-element vectorisation algorithm that automatically delivers stable vectorisation performance on CPUs. This abstraction also improves the portability of Firedrake, as we demonstrate by targeting GPU devices transparently from the same software stack.
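    To make the local/global assembly distinction referred to above concrete, here is a generic sketch (plain Python/NumPy, not Firedrake code) of a 1-D P1 stiffness-matrix assembly: a per-element local kernel and a global loop over mesh cells that scatters element contributions into the global matrix. The cross-element vectorisation mentioned above targets exactly this kind of loop nest.

        # Generic two-stage finite element assembly: local kernel + global loop.
        import numpy as np

        def local_stiffness(h):
            """Local assembly kernel: 2x2 element matrix for -u'' on an element of size h."""
            return (1.0 / h) * np.array([[ 1.0, -1.0],
                                         [-1.0,  1.0]])

        def assemble(nodes, cells):
            """Global assembly loop: apply the local kernel to every mesh cell and
            scatter its contribution into the global matrix."""
            n = len(nodes)
            A = np.zeros((n, n))
            for c in cells:                      # loop over mesh entities
                i, j = c
                Ke = local_stiffness(nodes[j] - nodes[i])
                for a, ga in enumerate(c):       # local-to-global scatter
                    for b, gb in enumerate(c):
                        A[ga, gb] += Ke[a, b]
            return A

        nodes = np.linspace(0.0, 1.0, 5)                    # 4 elements on [0, 1]
        cells = [(k, k + 1) for k in range(len(nodes) - 1)]
        print(assemble(nodes, cells))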

    Proceedings of the XIII Global Optimization Workshop: GOW'16

    [Excerpt] Preface: Past Global Optimization Workshops have been held in Sopron (1985 and 1990), Szeged (WGO, 1995), Florence (GO'99, 1999), Hanmer Springs (Let's GO, 2001), Santorini (Frontiers in GO, 2003), San José (GO'05, 2005), Mykonos (AGO'07, 2007), Skukuza (SAGO'08, 2008), Toulouse (TOGO'10, 2010), Natal (NAGO'12, 2012), and Málaga (MAGO'14, 2014), with the aim of stimulating discussion between senior and junior researchers on the topic of Global Optimization. In 2016, the XIII Global Optimization Workshop (GOW'16) takes place in Braga and is organized by three researchers from the University of Minho. Two of them belong to the Systems Engineering and Operational Research Group of the Algoritmi Research Centre and the other to the Statistics, Applied Probability and Operational Research Group of the Centre of Mathematics. The event received more than 50 submissions from 15 countries in Europe, South America, and North America. We want to express our gratitude to the invited speaker Panos Pardalos for accepting the invitation and sharing his expertise, helping us to meet the workshop objectives. GOW'16 would not have been possible without the valuable contributions of the authors and the International Scientific Committee members. We thank you all. This proceedings book intends to present an overview of the topics that will be addressed in the workshop, with the goal of contributing to interesting and fruitful discussions between the authors and participants. After the event, high-quality papers can be submitted to a special issue of the Journal of Global Optimization dedicated to the workshop. [...]

    Learning from minimally labeled data with accelerated convolutional neural networks

    The main objective of an Artificial Vision Algorithm is to design a mapping function that takes an image as input and correctly classifies it into one of the user-determined categories. There are several important properties to be satisfied by the mapping function for visual understanding. First, the function should produce good representations of the visual world, able to recognize images independently of pose, scale, and illumination. Furthermore, the designed artificial vision system has to learn these representations by itself. Recent studies on Convolutional Neural Networks (ConvNets) have produced promising advances in visual understanding. These networks attain significant performance improvements by relying on hierarchical structures inspired by biological vision systems. In my research, I work mainly in two areas: 1) how ConvNets can be programmed to learn the optimal mapping function using the minimum amount of labeled data, and 2) how these networks can be accelerated for practical purposes. In this work, algorithms that learn from unlabeled data are studied, and a new framework that exploits unlabeled data is proposed. The proposed framework obtains state-of-the-art performance in different tasks. Furthermore, this study presents an optimized streaming method for a ConvNet hardware accelerator on an embedded platform, tested on object classification and detection applications. Experimental results indicate high computational efficiency and significant performance improvements over all other existing platforms.
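    As a generic illustration of learning from minimally labeled data, the sketch below implements simple pseudo-labelling with a toy ConvNet in PyTorch: confident predictions on unlabeled images are fed back as training targets. This is one common semi-supervised strategy, not necessarily the framework proposed in this work, and the network, input shapes, and thresholds are placeholder assumptions.

        # Pseudo-labelling sketch: mix a supervised loss with a loss on confidently
        # self-labelled unlabeled images (assumed 32x32 RGB inputs, 10 classes).
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        model = nn.Sequential(                       # tiny stand-in ConvNet
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(16, 10),
        )
        opt = torch.optim.SGD(model.parameters(), lr=0.01)

        def train_step(x_lab, y_lab, x_unlab, threshold=0.95, unlab_weight=0.5):
            """One update combining labeled data with pseudo-labels for unlabeled data."""
            opt.zero_grad()
            sup_loss = F.cross_entropy(model(x_lab), y_lab)
            with torch.no_grad():                    # pseudo-labels from the model itself
                probs = F.softmax(model(x_unlab), dim=1)
                conf, pseudo = probs.max(dim=1)
                keep = conf >= threshold             # only trust confident predictions
            unsup_loss = (F.cross_entropy(model(x_unlab[keep]), pseudo[keep])
                          if keep.any() else torch.zeros(()))
            loss = sup_loss + unlab_weight * unsup_loss
            loss.backward()
            opt.step()
            return loss.item()

        # Dummy batch just to exercise the step.
        x_l, y_l = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
        x_u = torch.randn(32, 3, 32, 32)
        print(train_step(x_l, y_l, x_u))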

    Pixel-Level Deep Multi-Dimensional Embeddings for Homogeneous Multiple Object Tracking

    The goal of Multiple Object Tracking (MOT) is to locate multiple objects and keep track of their individual identities and trajectories given a sequence of (video) frames. A popular approach to MOT is tracking by detection, consisting of two processing components: detection (identification of objects of interest in individual frames) and data association (connecting data from multiple frames). This work addresses the detection component by introducing a method based on semantic instance segmentation, i.e., assigning labels to all visible pixels such that they are unique among different instances. Modern tracking methods are often built around Convolutional Neural Networks (CNNs) and additional, explicitly defined post-processing steps. This work introduces two detection methods that incorporate multi-dimensional embeddings. We train deep CNNs to produce easily clusterable embeddings for semantic instance segmentation and to enable object detection through pose estimation. The use of embeddings allows the method to identify per-pixel instance membership for both tasks. Our method specifically targets applications that require long-term tracking of homogeneous targets using a stationary camera. Furthermore, the method was developed and evaluated on a livestock tracking application, which presents exceptional challenges that generalized tracking methods are not equipped to solve, largely because contemporary datasets for multiple object tracking lack properties that are specific to livestock environments. These include a high degree of visual similarity between targets, complex physical interactions, long-term inter-object occlusions, and a fixed-cardinality set of targets. For the reasons stated above, our method is developed and tested with the livestock application in mind; specifically, group-housed pigs are evaluated in this work. Our method reliably detects pigs in a group-housed environment on the publicly available dataset, with 99% precision and 95% when using pose estimation, and achieves 80% accuracy when using semantic instance segmentation at a 50% IoU threshold. The results demonstrate our method's ability to achieve consistent identification and tracking of group-housed livestock, even in cases where the targets are occluded and despite the fact that they lack uniquely identifying features. The pixel-level embeddings used by the proposed method are thoroughly evaluated in order to demonstrate their properties and behaviors when applied to real data. Adviser: Lance C. Pére
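    The sketch below illustrates how easily clusterable per-pixel embeddings can be turned into an instance segmentation: embedding vectors of foreground pixels are clustered (here with mean shift from scikit-learn, an assumed choice for illustration) and each pixel receives its cluster id. The embedding map is synthetic; in the thesis the embeddings are produced by deep CNNs.

        # From per-pixel embeddings to instance labels by clustering foreground pixels.
        import numpy as np
        from sklearn.cluster import MeanShift

        def embeddings_to_instances(embeddings, foreground, bandwidth=0.5):
            """embeddings: (H, W, D) float array; foreground: (H, W) bool mask.
            Returns an (H, W) int map with 0 = background, 1..K = instance ids."""
            H, W, _ = embeddings.shape
            labels = np.zeros((H, W), dtype=np.int32)
            vectors = embeddings[foreground]             # (N, D) foreground pixels
            if len(vectors) == 0:
                return labels
            ids = MeanShift(bandwidth=bandwidth).fit_predict(vectors)
            labels[foreground] = ids + 1                 # reserve 0 for background
            return labels

        # Synthetic example: two "pigs" whose pixels embed around different points.
        emb = np.zeros((32, 32, 2))
        emb[4:14, 4:14] = (1.0, 1.0)
        emb[18:28, 18:28] = (-1.0, -1.0)
        emb += 0.05 * np.random.default_rng(0).normal(size=emb.shape)
        fg = np.abs(emb).sum(axis=2) > 0.5
        print(np.unique(embeddings_to_instances(emb, fg)))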