
    A Comparison Between Three Predictive Models of Computational Intelligence

    Time series prediction is an open problem, and many researchers are trying to find new predictive methods and improvements to existing ones. Lately, methods based on neural networks have been used extensively for time series prediction. Support vector machines have also solved some of the problems faced by neural networks and have come to be widely used for time series prediction. The main drawback of these two methods is that they are global models, and in the case of a chaotic time series it is unlikely that such a model can be found. This paper presents a comparison between three predictive models from the field of computational intelligence: one based on neural networks, one based on support vector machines, and one based on chaos theory. We show that the model based on chaos theory is an alternative to the other two methods.
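    The local, chaos-theory-based approach that the abstract contrasts with global models can be sketched as delay embedding plus nearest-neighbour prediction. The embedding dimension, delay, and test signal below are illustrative choices, not the paper's settings.

```python
import numpy as np

def delay_embed(series, dim=3, tau=1):
    """Stack delayed copies of the series into phase-space vectors."""
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau : i * tau + n] for i in range(dim)])

def local_predict(series, dim=3, tau=1):
    """Predict the next value from the successor of the nearest neighbour."""
    emb = delay_embed(series, dim, tau)
    query = emb[-1]                      # current phase-space state
    history = emb[:-1]                   # earlier states with known successors
    nn = np.argmin(np.linalg.norm(history - query, axis=1))
    return series[nn + (dim - 1) * tau + 1]  # value that followed the neighbour

# Usage: a noiseless sine is trivially predictable by this local model.
t = np.linspace(0, 20 * np.pi, 2000)
x = np.sin(t)
pred = local_predict(x[:-1])
print(abs(pred - x[-1]) < 0.01)  # True for this smooth, predictable signal
```

    A local model like this makes no attempt to fit one global function; it only assumes that nearby states in the reconstructed phase space evolve similarly, which is what makes it attractive for chaotic series.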

    Neural Conservation Laws: A Divergence-Free Perspective

    We investigate the parameterization of deep neural networks that by design satisfy the continuity equation, a fundamental conservation law. This is enabled by the observation that any solution of the continuity equation can be represented as a divergence-free vector field. We hence propose building divergence-free neural networks through the concept of differential forms, and with the aid of automatic differentiation, realize two practical constructions. As a result, we can parameterize pairs of densities and vector fields that always exactly satisfy the continuity equation, foregoing the need for extra penalty methods or expensive numerical simulation. Furthermore, we prove these models are universal and so can be used to represent any divergence-free vector field. Finally, we experimentally validate our approaches by computing neural network-based solutions to fluid equations, solving for the Hodge decomposition, and learning dynamical optimal transport maps.
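    In two dimensions, the idea of a divergence-free field by construction can be illustrated by taking the rotated gradient of a scalar potential: mixed partial derivatives commute, so the divergence vanishes identically. This is a minimal sketch, not the paper's differential-forms construction; the potential below is an arbitrary smooth stand-in for a learned network, and derivatives are finite differences rather than automatic differentiation.

```python
import numpy as np

def psi(x, y):
    """Stand-in for a learned scalar potential (e.g. a small MLP)."""
    return np.sin(x) * np.cos(2 * y) + 0.3 * x * y**2

def v(x, y, h=1e-5):
    """Divergence-free field: the rotated gradient (dpsi/dy, -dpsi/dx)."""
    dpsi_dy = (psi(x, y + h) - psi(x, y - h)) / (2 * h)
    dpsi_dx = (psi(x + h, y) - psi(x - h, y)) / (2 * h)
    return np.array([dpsi_dy, -dpsi_dx])

def divergence(x, y, h=1e-4):
    """Numerical divergence of v at (x, y); vanishes up to FD error."""
    dvx_dx = (v(x + h, y)[0] - v(x - h, y)[0]) / (2 * h)
    dvy_dy = (v(x, y + h)[1] - v(x, y - h)[1]) / (2 * h)
    return dvx_dx + dvy_dy

print(abs(divergence(0.7, -1.2)))  # ~0, limited only by finite-difference error
```

    Because the constraint is baked into the parameterization itself, no penalty term is needed to enforce it during training, which is the point the abstract makes.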

    Character-level Intra Attention Network for Natural Language Inference

    Natural language inference (NLI) is a central problem in language understanding. End-to-end artificial neural networks have recently reached state-of-the-art performance in the NLI field. In this paper, we propose the Character-level Intra Attention Network (CIAN) for the NLI task. In our model, we use a character-level convolutional network to replace the standard word embedding layer, and we use intra attention to capture intra-sentence semantics. The proposed CIAN model provides improved results on the newly published MNLI corpus. Comment: EMNLP Workshop RepEval 2017: The Second Workshop on Evaluating Vector Space Representations for NLP
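    The intra-attention component can be sketched as plain scaled dot-product self-attention over one sentence's token representations; CIAN additionally replaces word embeddings with a character-level convolutional encoder, which is omitted in this sketch.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def intra_attention(h):
    """h: (seq_len, dim) token representations from one sentence.

    Each position attends to every position of the same sentence,
    producing context vectors that mix intra-sentence semantics.
    """
    scores = h @ h.T / np.sqrt(h.shape[1])   # (seq_len, seq_len) similarities
    weights = softmax(scores, axis=-1)       # each row sums to 1
    return weights @ h                       # (seq_len, dim) context vectors

rng = np.random.default_rng(0)
h = rng.standard_normal((5, 8))              # 5 tokens, 8-dim features
ctx = intra_attention(h)
print(ctx.shape)  # (5, 8)
```

    Since attention is computed within a single sentence rather than between premise and hypothesis, this is "intra" attention in the sense the abstract uses.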

    DeepSketchHair: Deep Sketch-based 3D Hair Modeling

    We present DeepSketchHair, a deep-learning-based tool for interactive modeling of 3D hair from 2D sketches. Given a 3D bust model as reference, our sketching system takes as input a user-drawn sketch (consisting of a hair contour and a few strokes indicating the hair growth direction within a hair region), and automatically generates a 3D hair model that matches the input sketch both globally and locally. The key enablers of our system are two carefully designed neural networks: S2ONet, which converts an input sketch to a dense 2D hair orientation field, and O2VNet, which maps the 2D orientation field to a 3D vector field. Our system also supports hair editing with additional sketches in new views. This is enabled by another deep neural network, V2VNet, which updates the 3D vector field with respect to the new sketches. All three networks are trained with synthetic data generated from a 3D hairstyle database. We demonstrate the effectiveness and expressiveness of our tool on a variety of hairstyles and also compare our method with prior art.

    Geometric Discretization in Shape analysis

    Discretizations in shape analysis are the main theme of this licentiate thesis, which comprises two papers. The first paper considers the problem of finding a parameterized, time-dependent vector field that warps an initial set of points to a target set of points. The parametrization introduces a restriction on the number of available vector fields. It is shown that this changes the geometric setting of the matching problem, and equations of motion in this new setting are derived. Computational algorithms are provided, together with numerical examples that emphasize the practical importance of regularization. Further, the modified problem is shown to have connections with residual neural networks, meaning that it is possible to study neural networks in terms of shape analysis. The second paper concerns a class of spherical partial differential equations, commonly found in mathematical physics, that describe the evolution of a time-dependent vector field. The flow of the vector field generates a diffeomorphism, for which a discretization method based on quantization theory is derived. The discretization method is geometric in the sense that it preserves the underlying Lie-Poisson structure of the original equations. Numerical examples are provided and potential use cases of the discretization method are discussed, ranging from compressible flows to shape matching.
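    The connection to residual neural networks mentioned in the first paper comes from reading a residual block x_{k+1} = x_k + h f(x_k) as one explicit Euler step along the flow of a time-dependent vector field dx/dt = f(x, t). A minimal sketch, with a rigid rotation field standing in for a learned, parameterized layer:

```python
import numpy as np

def f(x, t):
    """Stand-in vector field: a rigid rotation of the plane."""
    return np.array([-x[1], x[0]])

def resnet_flow(x0, steps=1000, T=np.pi / 2):
    """Warp x0 along the field using `steps` residual blocks (Euler steps)."""
    h = T / steps
    x = np.array(x0, dtype=float)
    for k in range(steps):
        x = x + h * f(x, k * h)   # one residual block = one Euler step
    return x

# Flowing (1, 0) for time pi/2 under a rotation field should land near (0, 1).
print(resnet_flow([1.0, 0.0]))
```

    Viewed this way, the point-matching problem of the first paper and the training of a deep residual network are instances of the same flow-based warping, which is what makes the shape-analysis perspective on neural networks possible.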