508 research outputs found

    Collision response analysis and fracture simulation of deformable objects for computer graphics

    Computer animation is a sub-field of computer graphics with an emphasis on the time-dependent description of events of interest. It has been used in many disciplines, such as entertainment, scientific visualization, industrial design, and multimedia. Modeling of deformable objects in a dynamic interaction and/or fracture process has been an active research topic in the past decade. The main objective of this thesis is to provide a new, effective approach to dynamic interaction and fracture simulation. With respect to the dynamic interaction between deformable objects, this thesis proposes a new semi-explicit local collision response analysis (CRA) algorithm which improves on most previous approaches in three respects: computational efficiency, accuracy, and generality. The computational cost of the semi-explicit local CRA algorithm is guaranteed to be O(n) per time step, which is especially desirable for the collision response analysis of complex systems. By using the Lagrange multiplier method, the semi-explicit local CRA algorithm avoids the shortcomings associated with the penalty method and provides an accurate description of detailed local deformation during a collision process. The generic geometric constraint and the Gauss-Seidel iteration for enforcing loading constraints such as the Coulomb friction law make the semi-explicit local CRA algorithm general enough to handle arbitrary oblique collisions. The experimental results indicate that the semi-explicit local CRA approach captures all the key features of collisions between deformable objects and closely matches the theoretical solution of a classic collision problem in solid mechanics. 
For fracture simulation, a new element-split method is proposed which has a sounder mechanical basis than previous approaches in computer graphics and is flexible enough to accommodate different material fracture criteria, so that different failure patterns are obtained accordingly. Quantitative simulation results show that the element-split approach is consistent with the theoretical Mohr's circle analysis and the slip-line theory in plasticity, while qualitative results demonstrate its visual effectiveness.
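The abstract does not give the algorithm's internals, but the combination it names, Lagrange-multiplier contact forces with a Gauss-Seidel sweep that enforces a Coulomb friction bound, can be illustrated with a generic projected Gauss-Seidel solver. Everything below (the function name, the 2D per-contact impulse layout, the parameters) is an illustrative assumption, not the thesis's actual formulation:

```python
import numpy as np

def projected_gauss_seidel(A, b, mu, n_contacts, iters=50):
    """Gauss-Seidel solve of A @ lam = b for contact impulses,
    projecting after each update: normal components are clamped
    non-negative (no adhesion) and tangential components are
    clamped into the Coulomb cone |t| <= mu * n (2D sketch)."""
    lam = np.zeros(2 * n_contacts)        # layout: [n0, t0, n1, t1, ...]
    for _ in range(iters):
        for i in range(2 * n_contacts):
            # residual with lam[i]'s own contribution removed
            r = b[i] - A[i] @ lam + A[i, i] * lam[i]
            lam[i] = r / A[i, i]
            c = i // 2                    # contact this row belongs to
            if i % 2 == 0:                # normal impulse: non-penetration
                lam[i] = max(0.0, lam[i])
            else:                         # tangential impulse: friction law
                bound = mu * lam[2 * c]
                lam[i] = np.clip(lam[i], -bound, bound)
    return lam
```

Each sweep costs O(n) in the number of contact rows for a sparse system, which is consistent with the per-step cost the abstract advertises.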

    Synthesis and Optimization of Reversible Circuits - A Survey

    Reversible logic circuits have been historically motivated by theoretical research in low-power electronics as well as practical improvement of bit-manipulation transforms in cryptography and computer graphics. Recently, reversible circuits have attracted interest as components of quantum algorithms, as well as in photonic and nano-computing technologies where some switching devices offer no signal gain. Research in generating reversible logic distinguishes between circuit synthesis, post-synthesis optimization, and technology mapping. In this survey, we review algorithmic paradigms --- search-based, cycle-based, transformation-based, and BDD-based --- as well as specific algorithms for reversible synthesis, both exact and heuristic. We conclude the survey by outlining key open challenges in the synthesis of reversible and quantum logic, as well as the most common misconceptions. Comment: 34 pages, 15 figures, 2 tables
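The defining property the survey builds on is that a reversible gate computes a permutation of its input space. As a minimal, self-contained illustration (not tied to any particular synthesis algorithm in the survey), the Toffoli (CCNOT) gate below is checked to be a bijection on 3-bit inputs:

```python
from itertools import product

def toffoli(a, b, c):
    """CCNOT gate: flip the target bit c iff both controls a and b are 1."""
    return a, b, c ^ (a & b)

# A logic function is reversible iff its truth table is a bijection:
# distinct inputs always map to distinct outputs.
inputs = list(product((0, 1), repeat=3))
outputs = [toffoli(*bits) for bits in inputs]
assert len(set(outputs)) == len(inputs)   # a permutation of {0,1}^3
```

The same permutation view is what lets cycle-based synthesis methods decompose a target function into transpositions realizable by gate cascades.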

    A parameter-less algorithm for tensor co-clustering


    A tensor-based selection hyper-heuristic for cross-domain heuristic search

    Hyper-heuristics have emerged as automated high level search methodologies that manage a set of low level heuristics for solving computationally hard problems. A generic selection hyper-heuristic combines heuristic selection and move acceptance methods under an iterative single point-based search framework. At each step, the solution in hand is modified after applying a selected heuristic, and a decision is made whether or not the new solution is accepted. In this study, we represent the trail of a hyper-heuristic as a third order tensor. Factorization of such a tensor reveals the latent relationships between the low level heuristics and the hyper-heuristic itself. The proposed learning approach partitions the set of low level heuristics into two subsets, where the heuristics in each subset are associated with a separate move acceptance method. A multi-stage hyper-heuristic is then formed, and while solving a given problem instance, heuristics are allowed to operate only in conjunction with the associated acceptance method at each stage. To the best of our knowledge, this is the first time tensor analysis of the space of heuristics is used as a data science approach to improve the performance of a hyper-heuristic in the prescribed manner. The empirical results across six different problem domains from a benchmark indicate the success of the proposed approach.
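The abstract does not reproduce the paper's exact factorization, but the overall pipeline, build a third order trail tensor, extract a latent factor per heuristic, then split the heuristics into two subsets, can be sketched in a toy form. Here a mode-0 unfolding and the leading singular vector stand in for the full tensor factorization; the function names, tensor shape, and the median split are illustrative assumptions:

```python
import numpy as np

def heuristic_scores(T):
    """Score low level heuristics from a third order trail tensor
    T of shape (heuristic, heuristic, episode): unfold along mode 0
    and take the magnitude of the leading left singular vector as a
    latent 'usage' factor (a rank-1 stand-in for a full CP/Tucker
    factorization)."""
    unfolded = T.reshape(T.shape[0], -1)          # mode-0 unfolding
    u, _, _ = np.linalg.svd(unfolded, full_matrices=False)
    return np.abs(u[:, 0])

def partition(scores):
    """Split heuristics into two subsets around the median score;
    in the paper's scheme each subset would be paired with its own
    move acceptance method."""
    median = np.median(scores)
    high = [i for i, s in enumerate(scores) if s >= median]
    low = [i for i, s in enumerate(scores) if s < median]
    return high, low
```

Usage would be to accumulate the trail tensor while the hyper-heuristic runs a warm-up stage, then re-run with the two subsets bound to their respective acceptance methods.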

    Evaluating and Interpreting Deep Convolutional Neural Networks via Non-negative Matrix Factorization

    With ever greater computational resources and more accessible software, deep neural networks have become ubiquitous across industry and academia. Their remarkable ability to generalize to new samples defies the conventional view, which holds that complex, over-parameterized networks would be prone to overfitting. This apparent discrepancy is exacerbated by our inability to inspect and interpret the high-dimensional, non-linear, latent representations they learn, which has led many to refer to neural networks as "black-boxes". The Law of Parsimony states that "simpler solutions are more likely to be correct than complex ones". Since they perform quite well in practice, a natural question to ask, then, is in what way are neural networks simple? We propose that compression is the answer. Since good generalization requires invariance to irrelevant variations in the input, it is necessary for a network to discard this irrelevant information. As a result, semantically similar samples are mapped to similar representations in neural network deep feature space, where they form simple, low-dimensional structures. Conversely, a network that overfits relies on memorizing individual samples. Such a network cannot discard information as easily. In this thesis we characterize the difference between such networks using the non-negative rank of activation matrices. Relying on the non-negativity of rectified-linear units, the non-negative rank is the smallest number that admits an exact non-negative matrix factorization. We derive an upper bound on the amount of memorization in terms of the non-negative rank, and show it is a natural complexity measure for rectified-linear units. With a focus on deep convolutional neural networks trained to perform object recognition, we show that the two non-negative factors derived from deep network layers decompose the information held therein in an interpretable way. 
The first of these factors provides heatmaps which highlight similarly encoded regions within an input image or image set. We find that these networks learn to detect semantic parts and form a hierarchy, such that parts are further broken down into sub-parts. We quantitatively evaluate the semantic quality of these heatmaps by using them to perform semantic co-segmentation and co-localization. In spite of the convolutional network we use being trained solely with image-level labels, we achieve results comparable to or better than domain-specific state-of-the-art methods for these tasks. The second non-negative factor provides a bag-of-concepts representation for an image or image set. We use this representation to derive global image descriptors for images in a large collection. With these descriptors in hand, we perform two variations of content-based image retrieval, i.e. reverse image search. Using information from one of the non-negative matrix factors, we obtain descriptors which are suitable for finding semantically related images, i.e., those belonging to the same semantic category as the query image. Combining information from both non-negative factors, however, yields descriptors that are suitable for finding other images of the specific instance depicted in the query image, where we again achieve state-of-the-art performance.
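The thesis's full pipeline is not spelled out in the abstract, but the core operation, factorizing a non-negative (ReLU) activation matrix into two non-negative factors, can be sketched with the standard Lee-Seung multiplicative updates. The parameter choices and the interpretation comments below are assumptions for illustration, not the thesis's exact method:

```python
import numpy as np

def nmf(V, rank, iters=200, eps=1e-9):
    """Lee-Seung multiplicative updates for V ~ W @ H with V >= 0.
    Applied to a (spatial-locations x channels) ReLU activation
    matrix, the columns of W act as per-location 'part' heatmaps
    and the rows of H as a bag-of-concepts code."""
    rng = np.random.default_rng(0)
    n, m = V.shape
    W = rng.random((n, rank))
    H = rng.random((rank, m))
    for _ in range(iters):
        # each update is multiplicative, so non-negativity is preserved
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

The smallest `rank` for which such a factorization is exact is the non-negative rank the thesis uses as its complexity measure; in practice one sweeps `rank` and inspects the reconstruction error.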