145 research outputs found

    IST Austria Thesis

    Get PDF
    Fabrication of curved shells plays an important role in modern design, industry, and science. Among their remarkable properties are, for example, the aesthetics of organic shapes, the ability to evenly distribute loads, and efficient flow separation. They find applications across vast length scales, ranging from skyscraper architecture to microscopic devices. At the same time, the design of curved shells and their manufacturing process pose a variety of challenges, which this thesis addresses from several perspectives. In particular, this thesis presents approaches based on the transformation of initially flat sheets into target curved surfaces. This involves problems of interactive design of shells under nontrivial mechanical constraints, inverse design of complex structural materials, and data-driven modeling of delicate and time-dependent physical properties. In addition, two newly developed self-morphing mechanisms targeting the flat-to-curved transformation are presented. In architecture, doubly curved surfaces can be realized as cold bent glass panelizations: originally flat glass panels are bent into frames and remain stressed. This is a cost-efficient fabrication approach compared to hot bending, in which glass panels are shaped plastically. However, such constructions are prone to breaking during bending, and it is highly nontrivial to navigate the design space while keeping the panels fabricable and aesthetically pleasing at the same time. We introduce an interactive design system for cold bent glass façades, a scenario for which previously even offline optimization had not been sufficiently developed. Our method is based on a deep learning approach that provides fast, high-precision estimation of glass panel shape and stress while handling the multimodality of the panel shape. Fabrication of smaller objects, at scales below 1 m, can also greatly benefit from shaping originally flat sheets. In this respect, we designed new self-morphing shell mechanisms that transform from an initial flat state to a doubly curved state with high precision and detail. Our so-called CurveUps demonstrate how geometric information can be encoded into a flat shell. Furthermore, we explored the frontiers of programmable materials and showed how temporal information can additionally be encoded into a flat shell. This allows prescribing deformation sequences for doubly curved surfaces and thus facilitates self-collision avoidance, enabling complex shapes and functionalities that would otherwise be impossible. Both of these methods include inverse design tools that keep the user in the design loop.
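
    The abstract describes a learned surrogate that predicts panel shape and stress while handling shape multimodality. As an illustrative sketch only (the thesis's actual architecture, dimensions, and names are not given here; everything below is an assumption), a small mixture-density-style network in PyTorch can capture the idea of predicting several candidate shape modes plus a stress estimate from boundary-frame parameters:

```python
# Hypothetical sketch: a mixture-density-style surrogate mapping boundary-frame
# parameters to several candidate shape-coefficient modes plus a stress value.
# All layer sizes, input/output dimensions, and names are illustrative
# assumptions, not the thesis's actual model.
import torch
import torch.nn as nn

class PanelSurrogate(nn.Module):
    def __init__(self, n_frame=12, n_shape=10, n_modes=2):
        super().__init__()
        self.n_modes, self.n_shape = n_modes, n_shape
        self.trunk = nn.Sequential(
            nn.Linear(n_frame, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
        )
        # One set of shape coefficients and one mixture weight per mode,
        # so the network can represent a multimodal shape distribution.
        self.shape_heads = nn.Linear(256, n_modes * n_shape)
        self.mode_logits = nn.Linear(256, n_modes)
        self.stress_head = nn.Linear(256, 1)  # e.g. peak tensile stress

    def forward(self, frame_params):
        h = self.trunk(frame_params)
        shapes = self.shape_heads(h).view(-1, self.n_modes, self.n_shape)
        weights = torch.softmax(self.mode_logits(h), dim=-1)
        return shapes, weights, self.stress_head(h)

model = PanelSurrogate()
shapes, weights, stress = model(torch.randn(4, 12))  # a batch of 4 frames
```

    In an interactive system, a surrogate of this kind would stand in for an expensive shell simulation in the inner design loop, which is what makes real-time feedback on fabricability plausible.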

    Fast Volume Rendering and Deformation Algorithms

    Full text link
    Volume rendering is a technique for the simultaneous visualization of surfaces and inner structures of objects. However, the huge number of volume primitives (voxels) in a volume leads to high computational cost. In this dissertation I developed two algorithms for the acceleration of volume rendering and volume deformation. The first algorithm accelerates ray casting of volumes. Previous ray casting acceleration techniques such as space-leaping and early ray termination are only efficient when most voxels in a volume are either opaque or transparent; when many voxels are semi-transparent, the rendering time increases considerably. Our new algorithm improves the performance of ray casting of semi-transparently mapped volumes by exploiting opacity coherency in object space, leading to a speedup factor between 1.90 and 3.49 in rendering semi-transparent volumes. The acceleration is realized with the help of pre-computed coherency distances. We developed an efficient algorithm to encode the coherency information, which requires less than 12 seconds for data sets with about 8 million voxels. The second algorithm is for volume deformation. Unlike traditional methods, our method incorporates the two stages of volume deformation, i.e. deformation and rendering, into a unified process. Instead of deforming each voxel to generate an intermediate deformed volume, the algorithm follows inversely deformed rays to generate the desired deformation. The calculations and memory for generating the intermediate volume are thus saved. Deformation continuity is achieved by adaptive ray division, which matches the amplitude of the local deformation. We propose approaches for shading and opacity adjustment which guarantee the visual plausibility of the deformation results. We achieve an additional deformation speedup factor of 2.34 to 6.58 by incorporating early ray termination, space-leaping, and the coherency acceleration technique into the new deformation algorithm.
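
    The interplay of early ray termination, space-leaping, and pre-computed coherency distances can be illustrated with a minimal front-to-back compositing loop. The sketch below is a generic illustration under assumed data structures (a per-voxel opacity volume and a same-shaped array of coherency distances), not the dissertation's implementation:

```python
# Minimal front-to-back ray casting with two accelerations from the abstract:
# early ray termination (stop once accumulated opacity is near 1) and
# coherency-based leaping (step by a precomputed distance within which the
# opacity is nearly constant). Data layout and bounds handling are assumptions.
import numpy as np

def cast_ray(opacity_vol, coherency, origin, direction, step=0.5,
             termination=0.99):
    """Composite opacity along one ray through a scalar opacity volume."""
    direction = direction / np.linalg.norm(direction)
    acc_color, acc_alpha = 0.0, 0.0
    t, t_exit = 0.0, 2.0 * float(max(opacity_vol.shape))  # crude exit bound
    while t < t_exit and acc_alpha < termination:          # early termination
        i, j, k = (origin + t * direction).astype(int)
        if not (0 <= i < opacity_vol.shape[0]
                and 0 <= j < opacity_vol.shape[1]
                and 0 <= k < opacity_vol.shape[2]):
            break
        a = float(opacity_vol[i, j, k])
        acc_color += (1.0 - acc_alpha) * a      # white emissive "color"
        acc_alpha += (1.0 - acc_alpha) * a
        # Leap: within the stored coherency distance the opacity is nearly
        # constant, so a larger step is safe even in semi-transparent regions.
        t += max(step, float(coherency[i, j, k]))
    return acc_color, acc_alpha

vol = np.random.default_rng(0).uniform(0.0, 0.1, (64, 64, 64))
coh = np.full(vol.shape, 2.0)  # stub coherency distances
print(cast_ray(vol, coh, np.array([0.0, 32.0, 32.0]),
               np.array([1.0, 0.0, 0.0])))
```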

    Quantitative Image Simulation and Analysis of Nanoparticles

    Get PDF

    Fractal image compression and the self-affinity assumption : a stochastic signal modelling perspective

    Get PDF
    Bibliography: p. 208-225.
    Fractal image compression is a comparatively new technique which has gained considerable attention in the popular technical press, and more recently in the research literature. The most significant advantages claimed are high reconstruction quality at low coding rates, rapid decoding, and "resolution independence" in the sense that an encoded image may be decoded at a higher resolution than the original. While many of the claims published in the popular technical press are clearly extravagant, it appears from the rapidly growing body of published research that fractal image compression is capable of performance comparable with that of other techniques enjoying the benefit of a considerably more robust theoretical foundation. So called because of the similarities between the form of image representation and a mechanism widely used in generating deterministic fractal images, fractal compression represents an image by the parameters of a set of affine transforms on image blocks under which the image is approximately invariant. Although the conditions imposed on these transforms may be shown to be sufficient to guarantee that an approximation of the original image can be reconstructed, there is no obvious theoretical reason to expect this to represent an efficient representation for image coding purposes. The usual analogy with vector quantisation, in which each image is considered to be represented in terms of code vectors extracted from the image itself, is instructive, but transforms the fundamental problem into one of understanding why this construction results in an efficient codebook. The signal property required for such a codebook to be effective, termed "self-affinity", is poorly understood. A stochastic signal model based examination of this property is the primary contribution of this dissertation. The most significant findings (subject to some important restrictions) are that "self-affinity" is not a natural consequence of common statistical assumptions but requires particular conditions which are inadequately characterised by second order statistics, and that "natural" images are only marginally "self-affine", to the extent that fractal image compression is effective, but not more so than comparable standard vector quantisation techniques.
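
    The block-transform representation lends itself to a compact illustration of decoding: starting from an arbitrary image, the stored affine transforms are applied repeatedly, and contractivity (|s| < 1) drives convergence to the encoded image. The sketch below uses an assumed, simplified transform format for illustration:

```python
# Illustrative sketch of fractal (PIFS-style) decoding. Each range block is
# approximated by a scaled-and-offset, downsampled domain block; iterating the
# full set of contractive transforms converges to the encoded image regardless
# of the starting image. The transform tuple format is an assumption.
import numpy as np

def decode(transforms, shape=(16, 16), block=8, iters=20):
    """transforms: list of ((ry, rx), (dy, dx), s, o); each block x block range
    block is set to s * (2x2-averaged 2block x 2block domain block) + o."""
    img = np.zeros(shape)
    for _ in range(iters):
        out = np.empty_like(img)
        for (ry, rx), (dy, dx), s, o in transforms:
            dom = img[dy:dy + 2 * block, dx:dx + 2 * block]
            dom = dom.reshape(block, 2, block, 2).mean(axis=(1, 3))  # downsample
            out[ry:ry + block, rx:rx + block] = s * dom + o          # affine map
        img = out
    return img

# Four range blocks, all mapped from the single 16x16 domain at (0, 0);
# |s| = 0.5 < 1 guarantees convergence from the all-zero starting image.
transforms = [((ry, rx), (0, 0), 0.5, 0.1)
              for ry in (0, 8) for rx in (0, 8)]
print(decode(transforms).mean())  # converges toward 0.2 = o / (1 - s)
```

    Encoding is the expensive direction: for each range block a search finds the domain block and coefficients minimizing the approximation error, which is where the vector-quantisation analogy discussed above arises.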

    Mathematical and Data-driven Pattern Representation with Applications in Image Processing, Computer Graphics, and Infinite Dimensional Dynamical Data Mining

    Get PDF
    Patterns represent the spatial or temporal regularities intrinsic to various phenomena in nature, society, art, and science. From rigid ones with well-defined generative rules to flexible ones implied by unstructured data, patterns can be placed on a spectrum. On one extreme, patterns are completely described by algebraic systems, where each individual pattern is obtained by repeatedly applying simple operations to primitive elements. On the other extreme, patterns are perceived as visual or frequency regularities without any prior knowledge of the underlying mechanisms. In this thesis, we aim to demonstrate mathematical techniques for representing patterns across this spectrum, which leads to qualitative analysis of the patterns' properties and quantitative prediction of the modeled behaviors from various perspectives. We investigate lattice patterns from materials science, shape patterns from computer graphics, submanifold patterns encountered in point cloud processing, color perception patterns applied in underwater image processing, dynamic patterns from spatial-temporal data, and low-rank patterns exploited in medical image reconstruction. For different patterns, and based on their dependence on structured or unstructured data, we present suitable mathematical representations using techniques ranging from group theory to deep neural networks.

    Creating and Recognizing 3D Objects

    Get PDF
    This thesis investigates 3D Computer Vision, a research topic that is gathering ever-increasing attention thanks to the increasingly widespread availability of affordable 3D visual sensors, in particular consumer-grade RGB-D cameras. The contribution of this dissertation is twofold. First, the work addresses how to compactly represent the content of images acquired with RGB-D cameras. Second, the thesis focuses on 3D Reconstruction, a key issue in efficiently populating the databases of 3D models deployed in object/category recognition scenarios. As 3D Registration plays a fundamental role in 3D Reconstruction, the first part of the thesis proposes a pipeline for coarse registration of point clouds that is entirely based on the computation of 3D Local Reference Frames (LRF). Unlike related work in the literature, we also propose a comprehensive experimental evaluation based on diverse kinds of data (such as those acquired by laser scanners, RGB-D cameras, and stereo cameras) as well as a quantitative comparison with three other methods. Driven by the ever-lower costs and growing distribution of 3D sensing devices, we expect broad-scale integration of depth sensing into mobile devices to be forthcoming. Accordingly, the thesis investigates merging appearance and shape information for Mobile Visual Search and focuses on encoding RGB-D images as compact binary codes. An extensive experimental analysis on three state-of-the-art datasets, addressing both category and instance recognition scenarios, has led to the development of an RGB-D search engine architecture that attains high recognition rates with particularly moderate bandwidth requirements. Our experiments also include a comparison with the CDVS (Compact Descriptors for Visual Search) pipeline, a candidate to become part of the MPEG-7 standard.
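
    While the thesis's specific RGB-D encoding is not reproduced here, the general compact-binary-code idea can be sketched with plain random-hyperplane hashing: project real-valued descriptors onto random directions, keep the sign bits, and match by Hamming distance. All sizes below are illustrative assumptions, and the hashing scheme is a generic stand-in rather than the thesis's method:

```python
# Generic sketch of compact binary codes for visual search: random-hyperplane
# (sign) hashing of real-valued descriptors, matched by Hamming distance.
# Descriptor dimension, code length, and the random data are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def binarize(descriptors, planes):
    """descriptors: (n, d) float array -> (n, bits) uint8 array of 0/1 bits."""
    return (descriptors @ planes.T > 0).astype(np.uint8)

d, bits = 256, 64                        # descriptor dim, code length
planes = rng.standard_normal((bits, d))  # shared random projection
database = binarize(rng.standard_normal((1000, d)), planes)
query = binarize(rng.standard_normal((1, d)), planes)

# Hamming distance = number of differing bits; the nearest code wins.
dists = (database != query).sum(axis=1)
best = int(np.argmin(dists))
print("best match:", best, "hamming distance:", int(dists[best]))
```

    The bandwidth appeal is direct: a 64-bit code per image region is orders of magnitude smaller than the underlying floating-point descriptor, which is what makes mobile transmission practical.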

    NASA thesaurus. Volume 2: Access vocabulary

    Get PDF
    The Access Vocabulary, which is essentially a permuted index, provides access to any word or number in authorized postable and nonpostable terms. Additional entries include postable and nonpostable terms, other word entries, and pseudo-multiword terms, i.e., permutations of words that contain words within words. The Access Vocabulary contains 40,738 entries that give increased access to the hierarchies in Volume 1, the Hierarchical Listing.
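
    A toy version of a permuted index makes the mechanism concrete: every word of every multiword term becomes an entry point leading back to the full term. The example terms below are invented for illustration and are not taken from the thesaurus:

```python
# Toy permuted index in the spirit of the Access Vocabulary: each word of a
# multiword term is indexed so the full term can be found from any of its
# words. Example terms are invented for illustration.
from collections import defaultdict

def permuted_index(terms):
    index = defaultdict(list)
    for term in terms:
        for word in term.split():
            index[word.upper()].append(term)
    return dict(index)

idx = permuted_index(["LIQUID OXYGEN", "OXYGEN MASKS", "LAUNCH VEHICLES"])
print(idx["OXYGEN"])  # -> ['LIQUID OXYGEN', 'OXYGEN MASKS']
```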

    The application of computational modeling to data visualization

    Get PDF
    Researchers have argued that perceptual issues are important in determining what makes an effective visualization, but they generally provide only descriptive guidelines for transforming perceptual theory into practical designs. In order to bridge the gap between theory and practice in a more rigorous way, a computational model of the primary visual cortex is used to explore the perception of data visualizations. A method is presented for automatically evaluating and optimizing data visualizations for an analytical task using a computational model of human vision. The method relies on a neural network simulation of early perceptual processing in the retina and visual cortex. The neural activity resulting from viewing an information visualization is simulated and evaluated to produce metrics of visualization effectiveness for analytical tasks. Visualization optimization is achieved by applying these effectiveness metrics as the utility function in a hill-climbing algorithm. This method is applied to the evaluation and optimization of two visualization types: 2D flow visualizations and node-link graph visualizations. The computational perceptual model is applied to various visual representations of flow fields evaluated using the advection task of Laidlaw et al. The predictive power of the model is examined by comparing its performance to that of human subjects on the advection task using four flow visualization types. The results show the same overall pattern for humans and the model: in both cases, the best performance was obtained from visualizations containing aligned visual edges. Flow visualization optimization is performed using both streaklet-based and pixel-based visualization parameterizations. An emergent property of the streaklet-based optimization is head-to-tail streaklet alignment, while the pixel-based parameterization yields a LIC-like result. The model is also applied to node-link graph diagram visualizations for a node connectivity task using two-layer node-link diagrams. The model's evaluation of node-link graph visualizations correlates with human performance in terms of both accuracy and response time. Node-link graph visualizations are optimized using the perceptual model. The optimized node-link diagrams exhibit the aesthetic properties associated with good node-link diagram design, such as straight edges, minimal edge crossings, and maximal crossing angles, and yield empirically better performance on the node connectivity task.
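
    The optimization loop itself is straightforward hill climbing: perturb the visualization parameters, re-evaluate the perceptual-model utility, and keep the change only if the score improves. In the sketch below the perceptual model is replaced by a stub utility function, since the neural simulation is far beyond a few lines; the parameter vector and step size are likewise illustrative assumptions:

```python
# Minimal hill climbing over visualization parameters, with a stub utility
# standing in for the visual-cortex simulation described in the abstract.
import numpy as np

rng = np.random.default_rng(1)

def perceptual_utility(params):
    # Stub: any scalar effectiveness score works here; the thesis derives
    # this from simulated neural activity, not from a closed-form function.
    return -np.sum((params - 0.5) ** 2)

def hill_climb(params, steps=2000, sigma=0.05):
    score = perceptual_utility(params)
    for _ in range(steps):
        candidate = params + rng.normal(0.0, sigma, size=params.shape)
        cand_score = perceptual_utility(candidate)
        if cand_score > score:            # accept only improvements
            params, score = candidate, cand_score
    return params, score

# e.g. eight streaklet parameters (positions, lengths, ...) to be optimized
best_params, best_score = hill_climb(rng.uniform(size=8))
print(best_score)
```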

    Complex and Adaptive Dynamical Systems: A Primer

    Full text link
    A thorough, introductory-level treatment is given of the field of quantitative complex system science, with special emphasis on emergence in dynamical systems based on network topologies. Subjects treated include graph theory and small-world networks, a generic introduction to the concepts of dynamical systems theory, random Boolean networks, cellular automata and self-organized criticality, the statistical modeling of Darwinian evolution, synchronization phenomena, and an introduction to the theory of cognitive systems. The book includes chapters on Graph Theory and Small-World Networks; Chaos, Bifurcations and Diffusion; Complexity and Information Theory; Random Boolean Networks; Cellular Automata and Self-Organized Criticality; Darwinian Evolution, Hypercycles and Game Theory; Synchronization Phenomena; and Elements of Cognitive System Theory.
    Comment: unformatted version of the textbook; published by Springer, Complexity Series (2008; second edition 2010).