    Shape Generation using Spatially Partitioned Point Clouds

    We propose a method to generate 3D shapes using point clouds. Given a point-cloud representation of a 3D shape, our method builds a kd-tree to spatially partition the points. This orders them consistently, yielding reasonably good correspondences across all shapes. We then use PCA to derive a linear shape basis over the spatially partitioned points, and optimize the point ordering by iteratively minimizing the PCA reconstruction error. Even with the spatial sorting, the point clouds are inherently noisy, and the resulting distribution over the shape coefficients can be highly multi-modal. We propose to use the expressive power of neural networks to learn this distribution in a generative-adversarial framework. Compared to 3D shape generative models trained on voxel representations, our point-based method is considerably more lightweight and scalable, with little loss of quality. It also outperforms simpler linear factor models such as Probabilistic PCA, both qualitatively and quantitatively, on a number of categories from the ShapeNet dataset. Furthermore, our method can easily incorporate other point attributes such as normal and color information, an additional advantage over voxel-based representations.
    Comment: To appear at BMVC 201
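    Two steps of this pipeline lend themselves to a short sketch: a deterministic kd-tree traversal that puts every cloud's points into a canonical order, and an ordinary PCA fit over the flattened, ordered clouds. The NumPy sketch below is a minimal illustration under stated assumptions: the function names, the median split along the widest axis, and the fixed power-of-two point count are mine, and the paper's further steps (iteratively re-optimizing the ordering, training a GAN over the coefficients) are omitted.

```python
# Minimal sketch: kd-tree point ordering + linear (PCA) shape basis.
# The split rule and names are illustrative assumptions, not the
# authors' exact implementation.
import numpy as np

def kdtree_order(points):
    """Reorder `points` by a balanced kd-tree traversal.

    Recursively splits along the axis of largest extent at the median,
    so equally sized clouds are ordered by the same tree structure.
    A power-of-two point count gives perfectly balanced leaves.
    """
    if len(points) == 1:
        return points
    axis = np.argmax(points.max(axis=0) - points.min(axis=0))
    idx = np.argsort(points[:, axis])
    half = len(points) // 2
    left = kdtree_order(points[idx[:half]])
    right = kdtree_order(points[idx[half:]])
    return np.concatenate([left, right], axis=0)

def pca_shape_basis(clouds, k=64):
    """Fit a k-dimensional linear shape basis over ordered clouds.

    clouds: array of shape (num_shapes, N, 3); each cloud is ordered,
    flattened to a 3N-vector, and PCA is fit via SVD of the centered data.
    """
    X = np.stack([kdtree_order(c).ravel() for c in clouds])  # (S, 3N)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = Vt[:k]                    # top-k principal directions, (k, 3N)
    coeffs = (X - mean) @ basis.T     # per-shape coefficients, (S, k)
    return mean, basis, coeffs

# Usage: 8 random "shapes" of 1024 points each.
rng = np.random.default_rng(0)
clouds = rng.random((8, 1024, 3))
mean, basis, coeffs = pca_shape_basis(clouds, k=4)
recon = coeffs @ basis + mean         # linear reconstruction of each shape
print(coeffs.shape)                   # (8, 4): low-dimensional coefficients
```

    In the full method, a generative-adversarial network would then be trained to sample from the (possibly multi-modal) distribution of these coefficients rather than assuming it is Gaussian, as Probabilistic PCA would.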

    A New General Method to Generate Random Modal Formulae for Testing Decision Procedures

    The recent emergence of heavily optimized modal decision procedures has highlighted the key role of empirical testing in this domain. Unfortunately, extensive empirical testing for modal logics is a recent development, and so far none of the proposed test generators is very satisfactory. To address this, we present a new random generation method that improves on previous methods for generating empirical tests. It fixes and substantially generalizes one of the best-known methods, the random CNF_[]m test, allowing a much wider variety of problems to be generated and covering, in principle, the whole input space. Our new method produces test sets that are much better suited to the current generation of modal decision procedures. We analyze the features of the new method by means of an extensive collection of empirical tests.
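    To make the idea of a random modal CNF generator concrete, here is a rough Python sketch in the spirit of CNF_[]m-style tests. The parameters (number of clauses L, clause length K, propositional atoms N, modalities m, maximum modal depth d, probability p of a purely propositional literal) follow the usual parameterization of such generators, but the sampling scheme itself is an illustrative assumption, not the paper's method.

```python
# Rough sketch of a random modal CNF generator: each literal is a
# (possibly negated) propositional atom or a boxed subclause, and the
# top level is a conjunction of disjunctive clauses. Illustrative only.
import random

def random_literal(N, m, d, p):
    """Sample one literal: a signed atom, or a signed boxed clause."""
    sign = "" if random.random() < 0.5 else "~"
    if d == 0 or random.random() < p:
        return f"{sign}p{random.randrange(N)}"      # propositional atom
    box = random.randrange(m)                       # which modality [i]
    sub = random_clause(3, N, m, d - 1, p)          # boxed subclause
    return f"{sign}[{box}]({sub})"

def random_clause(K, N, m, d, p):
    """Disjunction of K independently sampled literals."""
    return " | ".join(random_literal(N, m, d, p) for _ in range(K))

def random_cnf(L, K, N, m, d, p):
    """Conjunction of L random clauses: one test formula for a prover."""
    return " & ".join(f"({random_clause(K, N, m, d, p)})" for _ in range(L))

random.seed(0)
print(random_cnf(L=4, K=3, N=5, m=2, d=2, p=0.5))
```

    Sweeping such parameters (e.g. the ratio L/N at fixed depth) is what lets a benchmark suite probe both trivially satisfiable and trivially unsatisfiable regions as well as the hard region in between.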

    Multi-modal gated recurrent units for image description

    Using a natural language sentence to describe the content of an image is a challenging but very important task. It is challenging because a description must not only capture the objects contained in the image and the relationships among them, but also be relevant and grammatically correct. In this paper we present a multi-modal embedding model based on gated recurrent units (GRUs) that can generate a variable-length description for a given image. In the training step, we apply a convolutional neural network (CNN) to extract the image feature. The feature is then fed into the multi-modal GRU together with the corresponding sentence representations. The multi-modal GRU learns the inter-modal relations between image and sentence. In the testing step, when an image is fed into our multi-modal GRU model, a sentence describing the image content is generated. The experimental results demonstrate that our multi-modal GRU model achieves state-of-the-art performance on the Flickr8K, Flickr30K and MS COCO datasets.
    Comment: 25 pages, 7 figures, 6 tables, magazin
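    The training-time pipeline (a CNN extracts an image feature, which conditions a GRU over the caption) can be sketched compactly. The PyTorch code below is a minimal, hedged skeleton: the ResNet-18 backbone, the layer dimensions, and initializing the GRU hidden state from the projected image feature are my assumptions; how the image and sentence modalities actually interact inside the multi-modal GRU is the paper's contribution and is not reproduced here.

```python
# Minimal sketch of a CNN + GRU captioner (teacher-forced training pass).
# Backbone choice, dimensions, and image-as-initial-state conditioning
# are illustrative assumptions, not the paper's architecture.
import torch
import torch.nn as nn
import torchvision.models as models

class GRUCaptioner(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        cnn = models.resnet18(weights=None)           # image encoder
        cnn.fc = nn.Identity()                        # keep 512-d pooled feature
        self.cnn = cnn
        self.img_proj = nn.Linear(512, hidden_dim)    # image -> initial state
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)  # next-word scores

    def forward(self, images, captions):
        """Predict each next word of `captions`, conditioned on `images`."""
        h0 = torch.tanh(self.img_proj(self.cnn(images))).unsqueeze(0)
        x = self.embed(captions)                      # (B, T, embed_dim)
        h, _ = self.gru(x, h0)                        # image-conditioned GRU
        return self.out(h)                            # (B, T, vocab_size)

# Usage on dummy data: 2 images, captions of 12 token ids each.
model = GRUCaptioner(vocab_size=10000)
images = torch.randn(2, 3, 224, 224)
captions = torch.randint(0, 10000, (2, 12))
print(model(images, captions).shape)  # torch.Size([2, 12, 10000])
```

    At test time the same model would be run autoregressively: start from a begin-of-sentence token, repeatedly feed the previously generated word back in, and stop at an end-of-sentence token.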