
    Probabilistic Inference from Arbitrary Uncertainty using Mixtures of Factorized Generalized Gaussians

    This paper presents a general and efficient framework for probabilistic inference and learning from arbitrary uncertain information. It exploits the calculation properties of finite mixture models, conjugate families, and factorization. Both the joint probability density of the variables and the likelihood function of the (objective or subjective) observation are approximated by a special mixture model, in such a way that any desired conditional distribution can be obtained directly, without numerical integration. We have developed an extended version of the expectation-maximization (EM) algorithm to estimate the parameters of mixture models from uncertain training examples (indirect observations). As a consequence, any piece of exact or uncertain information about both input and output values is handled consistently in the inference and learning stages. This ability, extremely useful in certain situations, is not found in most alternative methods. The proposed framework is formally justified from standard probabilistic principles, and illustrative examples are provided in the fields of nonparametric pattern classification, nonlinear regression, and pattern completion. Finally, experiments on a real application and comparative results over standard databases provide empirical evidence of the utility of the method in a wide range of applications.
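    To make the key closed-form property concrete, the sketch below conditions a two-variable mixture of factorized Gaussians on an observed input: the conditional is again a mixture whose weights are reweighted responsibilities, so no numerical integration is needed. All parameter values are toy numbers, not the output of the paper's extended EM algorithm.

```python
import numpy as np

def gaussian_pdf(x, mean, var):
    """Univariate normal density, vectorized over components."""
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

# Joint model p(x, y) = sum_k pi_k N(x; mx_k, vx_k) N(y; my_k, vy_k)
# (toy parameters, assumed for illustration)
pi = np.array([0.5, 0.3, 0.2])                      # mixing proportions
mx, vx = np.array([-1.0, 0.0, 2.0]), np.array([0.4, 0.3, 0.5])
my, vy = np.array([1.0, -0.5, 0.8]), np.array([0.2, 0.3, 0.1])

def conditional(x_obs):
    """p(y | x = x_obs): again a mixture, obtained without integration."""
    w = pi * gaussian_pdf(x_obs, mx, vx)            # responsibilities of x_obs
    w /= w.sum()                                    # renormalized mixing weights
    return w, my, vy                                # conditional mixture parameters

w, means, variances = conditional(0.5)
print("conditional mean of y given x=0.5:", np.dot(w, means))
```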

    Neural networks in geophysical applications

    Neural networks are increasingly popular in geophysics. Because they are universal approximators, these tools can approximate any continuous function with arbitrary precision. Hence, they may yield important contributions to finding solutions to a variety of geophysical applications. However, knowledge of the many methods and techniques recently developed to increase performance and to facilitate the use of neural networks does not seem to be widespread in the geophysical community. Therefore, the power of these tools has not yet been explored to its full extent. In this paper, techniques are described for faster training, better overall performance (i.e., generalization), and the automatic estimation of network size and architecture.
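    As a minimal illustration of one such generalization technique, the sketch below fits a small feed-forward network with early stopping on a held-out validation split. The data, architecture, and hyperparameters are illustrative choices, not taken from the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy stand-in for a geophysical mapping: learn a smooth 1-D function.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X).ravel() + 0.1 * rng.normal(size=500)

# early_stopping holds out part of the training data and stops when the
# validation score stops improving -- one common guard against overfitting.
net = MLPRegressor(hidden_layer_sizes=(32, 32), early_stopping=True,
                   validation_fraction=0.2, max_iter=2000, random_state=0)
net.fit(X, y)
print("R^2 on training data:", net.score(X, y))
```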

    Multivariate Hawkes Processes for Large-scale Inference

    In this paper, we present a framework for fitting multivariate Hawkes processes to large-scale problems, both in the number of events in the observed history $n$ and in the number of event types $d$ (i.e. dimensions). The proposed Low-Rank Hawkes Process (LRHP) framework introduces a low-rank approximation of the kernel matrix that allows the nonparametric learning of the $d^2$ triggering kernels to be performed in at most $O(ndr^2)$ operations, where $r$ is the rank of the approximation ($r \ll d, n$). This is a major improvement over the existing state-of-the-art inference algorithms, which are $O(nd^2)$. Furthermore, the low-rank approximation allows LRHP to learn representative patterns of interaction between event types, which may be valuable for the analysis of such complex processes in real-world datasets. The efficiency and scalability of our approach are illustrated with numerical experiments on simulated as well as real datasets.
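    A schematic of the low-rank idea, under assumed exponential kernels (which the paper does not prescribe): factorizing the $d \times d$ amplitude matrix as $UV^\top$ lets each past event update an $r$-dimensional state instead of a $d$-dimensional one. All parameter values here are invented for illustration, not the LRHP estimator itself.

```python
import numpy as np

d, r, beta = 100, 5, 1.0                  # event types, rank, decay rate
rng = np.random.default_rng(0)
U, V = rng.uniform(0, 0.1, (d, r)), rng.uniform(0, 0.1, (d, r))
mu = np.full(d, 0.05)                     # baseline intensities

def intensities(history, t):
    """lambda_i(t) = mu_i + sum_j A[d_j, i] * beta * exp(-beta (t - t_j)),
    with the amplitude matrix factorized as A = U @ V.T."""
    s = np.zeros(r)                       # r-dimensional accumulated state
    for t_j, type_j in history:           # O(r) per event instead of O(d)
        s += U[type_j] * beta * np.exp(-beta * (t - t_j))
    return mu + V @ s                     # all d intensities in O(dr)

hist = [(0.2, 3), (0.5, 10), (0.9, 3)]    # (time, event type) pairs
print(intensities(hist, 1.0)[:5])
```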

    04131 Abstracts Collection -- Geometric Properties from Incomplete Data

    From 21.03.04 to 26.03.04, the Dagstuhl Seminar 04131 "Geometric Properties from Incomplete Data" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.

    Fast PRISM: Branch and Bound Hough Transform for Object Class Detection

    This paper addresses the task of efficient object class detection by means of the Hough transform. This approach has been made popular by the Implicit Shape Model (ISM) and has been adopted many times. Although ISM exhibits robust detection performance, its probabilistic formulation is unsatisfactory. The PRincipled Implicit Shape Model (PRISM) overcomes these problems by interpreting Hough voting as a dual implementation of linear sliding-window detection. It thereby gives a sound justification to the voting procedure and imposes minimal constraints. We demonstrate PRISM's flexibility with two complementary implementations: a generatively trained Gaussian Mixture Model as well as a discriminatively trained histogram approach. Both systems achieve state-of-the-art performance. Detections are found by gradient-based or branch-and-bound search, respectively. The latter greatly benefits from PRISM's feature-centric view. It thereby avoids the unfavourable memory trade-off and any on-line pre-processing of the original Efficient Subwindow Search (ESS). Moreover, our approach takes the features' scale value into account, while ESS does not. Finally, we show how to avoid soft-matching and spatial pyramid descriptors during detection without losing their positive effect. This makes algorithms simpler and faster. Both are possible if the object model is properly regularised, and we discuss a modification of SVMs which allows for doing so.
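    A toy sketch of the feature-centric voting view: each local feature adds a small learned score patch to an accumulator, and detection hypotheses are maxima of the resulting linear score map. The vote table and feature positions below are invented for illustration, not a trained PRISM model.

```python
import numpy as np

H, W = 60, 80
score = np.zeros((H, W))                    # accumulated detection score map

# vote[dy, dx]: score contribution of one feature for an object hypothesis
# at offset (dy, dx) relative to the feature (a 5x5 weight patch here)
vote = np.full((5, 5), 0.2)
vote[2, 2] = 1.0

features = [(30, 40), (31, 41), (10, 70)]   # detected feature positions
for fy, fx in features:
    y0, x0 = fy - 2, fx - 2                 # align patch centre on feature
    score[y0:y0 + 5, x0:x0 + 5] += vote     # additive, linear in features

cy, cx = np.unravel_index(score.argmax(), score.shape)
print("best hypothesis at", (cy, cx), "score", score[cy, cx])
```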

    Time as It Could Be Measured in Artificial Living Systems

    Being able to measure time, whether directly or indirectly, is a significant advantage for an organism: it permits the organism to predict regular events and to prepare for them on time. Thus, clocks are ubiquitous in biology. In the present paper, we consider the most minimal abstract pure clocks and investigate their characteristics with respect to their ability to measure time. Among others, we find fundamentally diametral clock characteristics, such as oscillatory behaviour for local time measurement, or decay-based clocks measuring time periods on scales global to the problem. We also include cascades of independent clocks ("clock bags") and composite clocks with controlled dependency; the latter show various regimes of markedly different dynamics.
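    The contrast between the two clock regimes can be made concrete with a toy sketch: an oscillator reads time only locally (phase within a period, hence ambiguous across periods), while a decaying quantity reads globally elapsed time. The rates below are arbitrary illustrative values.

```python
import numpy as np

def oscillator_read(t, period=1.0):
    return (t % period) / period          # local: position within one cycle

def decay_read(t, rate=0.1):
    return np.exp(-rate * t)              # global: monotone over the whole run

for t in (0.25, 1.25, 12.25):
    print(f"t={t:5.2f}  phase={oscillator_read(t):.2f}  "
          f"decay={decay_read(t):.3f}")
# Equal phases at t=0.25 and t=1.25 show the oscillator's ambiguity beyond
# one period; the decay value still separates those instants.
```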

    Agent-Based Computational Economics

    Agent-based computational economics (ACE) is the computational study of economies modeled as evolving systems of autonomous interacting agents. Starting from initial conditions specified by the modeler, the computational economy evolves over time as its constituent agents repeatedly interact with each other and learn from these interactions. ACE is therefore a bottom-up, culture-dish approach to the study of economic systems. This study discusses the key characteristics and goals of the ACE methodology. Eight currently active research areas are highlighted for concrete illustration. Potential advantages and disadvantages of the ACE methodology are considered, along with open questions and possible directions for future research.

    Keywords: Agent-based computational economics; Autonomous agents; Interaction networks; Learning; Evolution; Mechanism design; Computational economics; Object-oriented programming.
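    In the bottom-up ACE spirit, here is a minimal culture-dish sketch: autonomous agents repeatedly interact pairwise in a two-action exchange game and adapt their action propensities by simple reinforcement. The payoff table and learning rule are illustrative assumptions, not drawn from any particular ACE study.

```python
import random

random.seed(0)
PAYOFF = {("C", "C"): (2, 2), ("C", "D"): (0, 3),
          ("D", "C"): (3, 0), ("D", "D"): (1, 1)}

class Agent:
    def __init__(self):
        self.prop = {"C": 1.0, "D": 1.0}      # action propensities

    def act(self):
        total = sum(self.prop.values())
        return random.choices(["C", "D"],
                              weights=[self.prop["C"] / total,
                                       self.prop["D"] / total])[0]

    def learn(self, action, payoff):
        self.prop[action] += payoff           # simple reinforcement update

agents = [Agent() for _ in range(20)]
for step in range(2000):                      # evolution from initial state
    a, b = random.sample(agents, 2)           # random pairwise interaction
    ja, jb = a.act(), b.act()
    pa, pb = PAYOFF[(ja, jb)]
    a.learn(ja, pa)
    b.learn(jb, pb)

coop = sum(ag.prop["C"] / sum(ag.prop.values()) for ag in agents) / len(agents)
print(f"mean cooperation propensity after 2000 interactions: {coop:.2f}")
```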

    Space-dependent turbulence model aggregation using machine learning

    In this article, we propose a data-driven methodology for combining the solutions of a set of competing turbulence models. The individual model predictions are linearly combined to provide an ensemble solution accompanied by estimates of the predictive uncertainty due to the turbulence model choice. First, for a set of training flow configurations, we assign high weights to component models in the regions where they perform best, and vice versa, by introducing a measure of distance between high-fidelity data and individual model predictions. The model weights are then mapped into a space of features representative of the local flow physics and regressed by a Random Forest (RF) algorithm. The RF regressor is finally employed to infer spatial distributions of the model weights for unseen configurations. Predictions for new cases are constructed as a convex linear combination of the underlying models' solutions, while the between-model variance provides information about regions of high model uncertainty. The method is demonstrated for a class of flows through the NACA65 V103 compressor cascade at $Re \approx 3 \times 10^5$. The results show that the aggregated solution outperforms the individual models in accuracy for the quantity used to inform the RF regressor, and performs well for other quantities well correlated with it. The estimated uncertainty intervals are generally consistent with the target high-fidelity data. The present approach therefore represents a viable methodology for a more objective selection and combination of alternative turbulence models in configurations of interest for engineering practice.
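    A schematic version of this pipeline, with synthetic arrays standing in for CFD fields: per-cell weights are derived from the distance between each model's prediction and the high-fidelity data, regressed on local features with a Random Forest, then used for a convex combination (with a between-model variance) on an unseen case. The softmax weighting is an assumed stand-in for the paper's distance measure.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_cells, n_models, n_feat = 2000, 3, 5

feats = rng.normal(size=(n_cells, n_feat))        # local flow features
preds = rng.normal(size=(n_cells, n_models))      # competing model outputs
truth = 0.7 * preds[:, 0] + 0.3 * preds[:, 1]     # mock high-fidelity data

# Weights: high where a model is close to the data (softmax of -|error|)
err = np.abs(preds - truth[:, None])
w = np.exp(-err) / np.exp(-err).sum(axis=1, keepdims=True)

# Regress the weight field on the local features (multi-output RF)
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(feats, w)

# Unseen configuration: infer weights, aggregate, estimate model variance
new_feats = rng.normal(size=(10, n_feat))
new_preds = rng.normal(size=(10, n_models))
w_new = rf.predict(new_feats)
w_new /= w_new.sum(axis=1, keepdims=True)         # re-enforce convexity
ensemble = (w_new * new_preds).sum(axis=1)        # convex combination
variance = (w_new * (new_preds - ensemble[:, None]) ** 2).sum(axis=1)
print(ensemble[:3], variance[:3])
```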