
    Forecasting People Trajectories and Head Poses by Jointly Reasoning on Tracklets and Vislets

    In this work, we explore the correlation between people trajectories and their head orientations. We argue that people trajectory and head pose forecasting can be modelled as a joint problem. Recent approaches to trajectory forecasting leverage short-term trajectories (aka tracklets) of pedestrians to predict their future paths. In addition, sociological cues, such as expected destination or pedestrian interaction, are often combined with tracklets. In this paper, we propose MiXing-LSTM (MX-LSTM) to capture the interplay between positions and head orientations (vislets) thanks to a joint unconstrained optimization of full covariance matrices during the LSTM backpropagation. We additionally exploit the head orientations as a proxy for visual attention when modeling social interactions. MX-LSTM predicts future pedestrian locations and head poses, extending the capabilities of current approaches to long-term trajectory forecasting. Compared to the state of the art, our approach shows better performance on an extensive set of public benchmarks. MX-LSTM is particularly effective when people move slowly, i.e. the most challenging scenario for all other models. The proposed approach also allows for accurate predictions on a longer time horizon. Comment: Accepted at IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019. arXiv admin note: text overlap with arXiv:1805.0065
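
    A minimal sketch (not the authors' implementation) of the core idea: an LSTM consumes tracklet positions together with vislet head-pose angles and emits a bivariate Gaussian over the next position whose full covariance is parameterized through its Cholesky factor and trained by negative log-likelihood. For brevity only the position output is shown, and all layer sizes and names below are assumptions.

        # Sketch of joint tracklet/vislet forecasting (assumed sizes, not the paper's code).
        import torch
        import torch.nn as nn

        class MXLSTMSketch(nn.Module):
            def __init__(self, hidden=64):
                super().__init__()
                # input per time step: (x, y, cos(pose), sin(pose))
                self.lstm = nn.LSTM(input_size=4, hidden_size=hidden, batch_first=True)
                # outputs: mean (2), log-std of diagonal (2), off-diagonal Cholesky term (1)
                self.head = nn.Linear(hidden, 5)

            def forward(self, seq):                      # seq: (B, T, 4)
                h, _ = self.lstm(seq)
                out = self.head(h[:, -1])                # one-step-ahead prediction
                mu, log_sig, rho = out[:, :2], out[:, 2:4], out[:, 4]
                sig = log_sig.exp()
                # lower-triangular Cholesky factor L, so Sigma = L L^T is a full covariance
                L = torch.zeros(seq.size(0), 2, 2, device=seq.device)
                L[:, 0, 0], L[:, 1, 1] = sig[:, 0], sig[:, 1]
                L[:, 1, 0] = rho
                return mu, L

        def nll(mu, L, target):
            # negative log-likelihood of the observed next position
            dist = torch.distributions.MultivariateNormal(mu, scale_tril=L)
            return -dist.log_prob(target).mean()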

    Regularized System Identification

    This open access book provides a comprehensive treatment of recent developments in kernel-based identification that are of interest to anyone engaged in learning dynamic systems from data. The reader is led step by step into an understanding of a novel paradigm that leverages the power of machine learning without losing sight of the system-theoretic principles of black-box identification. The authors’ reformulation of the identification problem in the light of regularization theory not only offers new insight into classical questions, but also paves the way to new and powerful algorithms for a variety of linear and nonlinear problems. Regression methods such as regularization networks and support vector machines are the basis of techniques that extend the function-estimation problem to the estimation of dynamic models. Many examples, also from real-world applications, illustrate the comparative advantages of the new nonparametric approach with respect to classic parametric prediction error methods. The challenges it addresses lie at the intersection of several disciplines, so Regularized System Identification will be of interest to a variety of researchers and practitioners in the areas of control systems, machine learning, statistics, and data science.
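
    A minimal sketch, in the spirit of the book, of kernel-based impulse-response estimation: regularized least squares with a TC ("tuned/correlated") kernel that encodes smoothness and exponential decay. The kernel choice, hyperparameters, and toy data below are assumptions, not the book's examples.

        # Regularized FIR identification with a TC kernel K[i, j] = lam**max(i, j).
        import numpy as np

        def tc_kernel(n, lam=0.9):
            idx = np.arange(1, n + 1)
            return lam ** np.maximum.outer(idx, idx)

        def regularized_fir(u, y, n=50, lam=0.9, sigma2=0.1):
            """Estimate an n-tap impulse response g from input u and output y."""
            N = len(y)
            # regression matrix: y[t] ~ sum_k g[k] * u[t - k]
            Phi = np.zeros((N, n))
            for k in range(n):
                Phi[k:, k] = u[:N - k]
            K = tc_kernel(n, lam)
            # representer form: g_hat = K Phi^T (Phi K Phi^T + sigma2 I)^{-1} y
            A = Phi @ K @ Phi.T + sigma2 * np.eye(N)
            return K @ Phi.T @ np.linalg.solve(A, y)

        # toy usage: recover a decaying impulse response from noisy data
        rng = np.random.default_rng(0)
        u = rng.standard_normal(300)
        g_true = 0.8 ** np.arange(50)
        y = np.convolve(u, g_true)[:300] + 0.1 * rng.standard_normal(300)
        g_hat = regularized_fir(u, y)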

    Mutual information based measures on complex interdependent networks of neuro data sets

    We assume that even the simplest model of the brain is nonlinear and ‘causal’. Proceeding from the first assumption, we need a measure that is able to capture nonlinearity, and hence Mutual Information, whose variants include Transfer Entropy, is chosen. The second assumption of ‘causality’ is defined in relation to prediction, à la Granger causality. Both assumptions lead us to Transfer Entropy. We take the simplest case of Transfer Entropy, redefine it for our purpose of detecting causal lag, and proceed with a systematic investigation of this quantity. We start with the Ising model and then move on to an amended Ising model in which we attempt to replicate ‘causality’. We do the same for a toy model that can be treated analytically, so that simulations can be compared to their theoretical values. Lastly, we tackle a very interesting EEG data set, where Transfer Entropy is applied on different frequency bands to display a possible emergent property of ‘causality’ and to detect possible candidates for the causal lag in the data.
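
    A minimal sketch (plug-in histogram estimator, not the thesis code) of the simplest transfer entropy TE_{X->Y} with single-sample histories and a candidate lag d, i.e. I(Y_{t+1}; X_{t+1-d} | Y_t); scanning d and keeping the maximizer is one way to look for a causal lag. The series are assumed discrete (e.g. Ising spins or thresholded EEG band power), and the toy data are illustrative.

        # Plug-in transfer entropy for discrete 1-D sequences at a given lag.
        import numpy as np
        from collections import Counter

        def transfer_entropy(x, y, lag=1):
            x, y, n = np.asarray(x), np.asarray(y), len(y)
            y_next, y_now, x_past = y[lag:], y[lag - 1:n - 1], x[:n - lag]
            m = len(y_next)
            p_xyz = Counter(zip(y_next, y_now, x_past))   # counts of (y', y, x)
            p_yx = Counter(zip(y_now, x_past))            # counts of (y, x)
            p_yy = Counter(zip(y_next, y_now))            # counts of (y', y)
            p_y = Counter(y_now)                          # counts of y
            te = 0.0
            for (yn, yo, xp), c in p_xyz.items():
                # p(y',y,x) * log2[ p(y'|y,x) / p(y'|y) ]
                te += (c / m) * np.log2((c / p_yx[(yo, xp)]) / (p_yy[(yn, yo)] / p_y[yo]))
            return te

        # toy usage: y is a noisy copy of x delayed by 3 steps
        rng = np.random.default_rng(0)
        x = rng.integers(0, 2, 5000)
        y = np.roll(x, 3) ^ (rng.random(5000) < 0.1)
        best_lag = max(range(1, 8), key=lambda d: transfer_entropy(x, y, d))   # expect 3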

    Rational Covariance Extension, Multivariate Spectral Estimation, and Related Moment Problems: Further Results and Applications

    This dissertation concerns the problem of spectral estimation subject to moment constraints. Its scalar counterpart is well known under the name of rational covariance extension, which has been extensively studied in past decades. The classical covariance extension problem can be reformulated as a truncated trigonometric moment problem, which in general admits infinitely many solutions. In order to achieve positivity and rationality, optimization with entropy-like functionals has been exploited in the literature to select one solution with a fixed zero structure. Thus spectral zeros serve as an additional degree of freedom, and in this way a complete parametrization of rational solutions with bounded degree can be obtained. New theoretical and numerical results are provided in this problem area of systems and control and are summarized in the following. First, a new algorithm for the scalar covariance extension problem formulated in terms of periodic ARMA models is given and its local convergence is demonstrated. The algorithm is formally extended to vector processes and applied to finite-interval model approximation and smoothing problems. Secondly, a general existence result is established for a multivariate spectral estimation problem formulated in a parametric fashion. Efforts are also made to attack the difficult uniqueness question, and some preliminary results are obtained. Moreover, well-posedness in a special case is studied thoroughly, and based on this a numerical continuation solver is developed with a provable convergence property. In addition, it is shown that the solution to the spectral estimation problem is generally not unique in another parametric family of rational spectra advocated in the literature. Thirdly, the problem of image deblurring is formulated and solved in the framework of multidimensional moment theory with a quadratic penalty as regularization.
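
    A minimal sketch of the classical special case behind this dissertation: the maximum-entropy solution of the truncated covariance (trigonometric moment) problem, i.e. the rational covariance extension with all spectral zeros at the origin, computed from the given lags by the Levinson-Durbin recursion. The grid size and the toy lags are assumptions.

        # Maximum-entropy (AR) spectrum matching covariance lags c[0..n].
        import numpy as np

        def levinson(c):
            """AR polynomial a (a[0] = 1) and innovation variance matching lags c."""
            n = len(c) - 1
            a = np.zeros(n + 1)
            a[0], err = 1.0, c[0]
            for k in range(1, n + 1):
                acc = c[k] + np.dot(a[1:k], c[k - 1:0:-1])
                refl = -acc / err                      # reflection coefficient
                a_new = a.copy()
                for j in range(1, k):
                    a_new[j] = a[j] + refl * a[k - j]
                a_new[k] = refl
                a, err = a_new, err * (1.0 - refl ** 2)
            return a, err

        def maxent_spectrum(c, n_grid=512):
            a, sig2 = levinson(np.asarray(c, dtype=float))
            w = np.linspace(-np.pi, np.pi, n_grid)
            A = np.polyval(a[::-1], np.exp(-1j * w))   # A(e^{iw}) = sum_j a[j] e^{-i j w}
            return w, sig2 / np.abs(A) ** 2            # up to the 1/(2*pi) convention

        # toy usage: lags of an AR(1) process with pole 0.7
        c = 0.7 ** np.arange(6) / (1 - 0.7 ** 2)
        w, phi = maxent_spectrum(c)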

    First order algorithms in variational image processing

    Variational methods in imaging are nowadays developing towards a quite universal and flexible tool, allowing for highly successful approaches to tasks like denoising, deblurring, inpainting, segmentation, super-resolution, disparity, and optical flow estimation. The overall structure of such approaches is of the form $\mathcal{D}(Ku) + \alpha \mathcal{R}(u) \rightarrow \min_u$, where the functional $\mathcal{D}$ is a data fidelity term, also depending on some input data $f$ and measuring the deviation of $Ku$ from it, and $\mathcal{R}$ is a regularization functional. Moreover, $K$ is an (often linear) forward operator modeling the dependence of data on an underlying image, and $\alpha$ is a positive regularization parameter. While $\mathcal{D}$ is often smooth and (strictly) convex, current practice almost exclusively uses nonsmooth regularization functionals. The majority of successful techniques use nonsmooth and convex functionals such as the total variation and generalizations thereof, or $\ell_1$-norms of coefficients arising from scalar products with some frame system. The efficient solution of such variational problems in imaging demands appropriate algorithms. Taking into account the specific structure as a sum of two very different terms to be minimized, splitting algorithms are a quite canonical choice. Consequently, this field has revived interest in techniques like operator splittings or augmented Lagrangians. Here we provide an overview of methods currently developed and recent results, as well as some computational studies providing a comparison of different methods and also illustrating their success in applications. Comment: 60 pages, 33 figures
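
    A minimal sketch of one of the simplest first-order splitting methods covered here: proximal gradient (ISTA) for $\mathcal{D}(Ku) = \tfrac12\|Ku - f\|^2$ and $\mathcal{R}(u) = \|u\|_1$. The operator K, data f, and the value of alpha below are illustrative placeholders, not taken from the paper's experiments.

        # ISTA: forward gradient step on the data term, proximal step on alpha*||u||_1.
        import numpy as np

        def soft_threshold(v, tau):
            # proximal operator of tau * ||.||_1
            return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

        def ista(K, f, alpha, n_iter=500):
            u = np.zeros(K.shape[1])
            step = 1.0 / np.linalg.norm(K, 2) ** 2     # 1 / Lipschitz constant of the gradient
            for _ in range(n_iter):
                grad = K.T @ (K @ u - f)               # gradient of 0.5*||K u - f||^2
                u = soft_threshold(u - step * grad, step * alpha)
            return u

        # toy usage: sparse recovery with a random linear operator
        rng = np.random.default_rng(1)
        K = rng.standard_normal((60, 120))
        u_true = np.zeros(120)
        u_true[::15] = 1.0
        f = K @ u_true + 0.01 * rng.standard_normal(60)
        u_hat = ista(K, f, alpha=0.1)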

    Statistical Inference for MCARMA Processes

    Multivariate continuous-time ARMA(p,q) (MCARMA(p,q)) processes are the continuous-time analog of the well-known vector ARMA(p,q) processes. This thesis contributes to the field of statistical inference for MCARMA processes in two ways. In the first part, we study information criteria, which provide a method for selecting a suitable MCARMA process as a model for given data. The second part of the thesis is concerned with robust estimation of the parameters of MCARMA processes.
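
    A minimal sketch of the order-selection step via an information criterion of the generic form IC = -2 log L + penalty * k, minimized over candidate orders (p, q) with q < p. The routine fit_mcarma is hypothetical and stands in for whatever likelihood-based MCARMA estimator is used.

        # Generic AIC/BIC-style order selection over candidate MCARMA(p, q) models.
        import math

        def information_criterion(log_lik, k, N, kind="BIC"):
            penalty = math.log(N) if kind == "BIC" else 2.0   # BIC vs AIC penalty weight
            return -2.0 * log_lik + penalty * k

        def select_order(data, fit_mcarma, max_p=3, kind="BIC"):
            """fit_mcarma(data, p, q) -> (maximized log-likelihood, number of parameters)."""
            best = None
            for p in range(1, max_p + 1):
                for q in range(p):                    # MCARMA(p, q) requires q < p
                    log_lik, k = fit_mcarma(data, p, q)
                    ic = information_criterion(log_lik, k, len(data), kind)
                    if best is None or ic < best[0]:
                        best = (ic, p, q)
            return best   # (criterion value, p, q)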

    Statistical Foundations of Actuarial Learning and its Applications

    This open access book discusses the statistical modeling of insurance problems, a process which comprises data collection, data analysis and statistical model building to forecast insured events that may happen in the future. It presents the mathematical foundations behind these fundamental statistical concepts and shows how they can be applied in daily actuarial practice. Statistical modeling has a wide range of applications, and, depending on the application, the theoretical aspects may be weighted differently: here the main focus is on prediction rather than explanation. Starting with a presentation of state-of-the-art actuarial models, such as generalized linear models, the book then dives into modern machine learning tools such as neural networks and text recognition to improve predictive modeling with complex features. Providing practitioners with detailed guidance on how to apply machine learning methods to real-world data sets, and how to interpret the results without losing sight of the mathematical assumptions on which these methods are based, the book can serve as a modern basis for an actuarial education syllabus.
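
    A minimal sketch of the book's stated starting point, a generalized linear model: Poisson regression for claim counts with log link and an exposure offset, fitted by Newton-Raphson. The features and simulated portfolio below are assumptions for illustration only.

        # Poisson GLM for claim frequencies, fitted by Newton-Raphson (IRLS).
        import numpy as np

        def fit_poisson_glm(X, y, exposure, n_iter=25):
            """X: (N, p) design matrix incl. intercept; y: claim counts; exposure: years at risk."""
            beta = np.zeros(X.shape[1])
            offset = np.log(exposure)
            for _ in range(n_iter):
                mu = np.exp(X @ beta + offset)          # expected number of claims per policy
                grad = X.T @ (y - mu)                   # score of the Poisson log-likelihood
                hess = X.T @ (X * mu[:, None])          # Fisher information (canonical link)
                beta += np.linalg.solve(hess, grad)     # Newton step
            return beta

        # toy usage: simulated portfolio with one rating factor
        rng = np.random.default_rng(2)
        N = 5000
        X = np.column_stack([np.ones(N), rng.standard_normal(N)])
        exposure = rng.uniform(0.5, 1.0, N)
        y = rng.poisson(exposure * np.exp(-2.0 + 0.3 * X[:, 1]))
        beta_hat = fit_poisson_glm(X, y, exposure)      # roughly [-2.0, 0.3]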