
    Time series averaging from a probabilistic interpretation of time-elastic kernel

    In the light of regularized dynamic time warping kernels, this paper reconsiders the concept of time elastic centroid (TEC) for a set of time series. From this perspective, we first show how the TEC can be addressed as a preimage problem. Unfortunately, this preimage problem is ill-posed, may suffer from over-fitting (especially for long time series), and obtaining even a sub-optimal solution involves heavy computational costs. We then derive two new algorithms based on a probabilistic interpretation of kernel alignment matrices, expressed in terms of probability distributions over sets of alignment paths. The first algorithm is an iterative agglomerative heuristic inspired by the state-of-the-art DTW barycenter averaging (DBA) algorithm, proposed specifically for the Dynamic Time Warping measure. The second algorithm performs a classical averaging of the aligned samples but also averages the times of occurrence of the aligned samples; it exploits a straightforward progressive agglomerative heuristic. An experiment on 45 time series datasets compares the classification error rates obtained by first-nearest-neighbor classifiers that represent each category by a single medoid or centroid estimate. It shows that: i) centroid-based approaches significantly outperform medoid-based approaches; ii) in the considered experiments, the two proposed algorithms outperform the state-of-the-art DBA algorithm; and iii) the second algorithm, which averages jointly in the sample space and along the time axis, emerges as the most significantly robust time elastic averaging heuristic, with an interesting noise-reduction capability. Index Terms: time series averaging, time elastic kernel, Dynamic Time Warping, time series clustering and classification
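    As a point of reference for the averaging strategies compared above, here is a minimal sketch of one DBA-style iteration, the baseline heuristic the paper improves upon. The dtw_path and dba_iteration helpers are illustrative names, the series are assumed to be 1-D float arrays, and the paper's own kernel-based algorithms are not reproduced here.

        # Minimal sketch of one DTW barycenter averaging (DBA) pass.
        import numpy as np

        def dtw_path(a, b):
            # Classic O(n*m) DTW; returns the optimal alignment path.
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    D[i, j] = (a[i-1] - b[j-1])**2 + min(D[i-1, j], D[i, j-1], D[i-1, j-1])
            path, (i, j) = [], (n, m)
            while (i, j) != (0, 0):
                path.append((i - 1, j - 1))
                steps = [(i-1, j-1), (i-1, j), (i, j-1)]
                i, j = min(steps, key=lambda s: D[s])
            return path[::-1]

        def dba_iteration(centroid, series_set):
            # One DBA pass: align every series to the centroid, then average
            # all samples mapped to each centroid position.
            sums = np.zeros_like(centroid)
            counts = np.zeros(len(centroid))
            for s in series_set:
                for i, j in dtw_path(centroid, s):
                    sums[i] += s[j]
                    counts[i] += 1
            return sums / np.maximum(counts, 1)

    Iterating dba_iteration until the centroid stabilizes yields the DBA estimate; the paper's probabilistic variants replace the single optimal path with a distribution over alignment paths.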

    Regularized brain reading with shrinkage and smoothing

    Functional neuroimaging measures how the brain responds to complex stimuli. However, sample sizes are modest, noise is substantial, and stimuli are high dimensional. Hence, direct estimates are inherently imprecise and call for regularization. We compare a suite of approaches which regularize via shrinkage: ridge regression, the elastic net (a generalization of ridge regression and the lasso), and a hierarchical Bayesian model based on small area estimation (SAE). We contrast regularization with spatial smoothing and combinations of smoothing and shrinkage. All methods are tested on functional magnetic resonance imaging (fMRI) data from multiple subjects participating in two different experiments related to reading, for both predicting neural response to stimuli and decoding stimuli from responses. Interestingly, when the regularization parameters are chosen by cross-validation independently for every voxel, low/high regularization is chosen in voxels where the classification accuracy is high/low, indicating that the regularization intensity is a good tool for identification of relevant voxels for the cognitive task. Surprisingly, all the regularization methods work about equally well, suggesting that beating basic smoothing and shrinkage will take not only clever methods, but also careful modeling. Comment: Published at http://dx.doi.org/10.1214/15-AOAS837 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org)
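    To make the shrinkage comparison concrete, here is a hedged sketch of per-voxel ridge regression with a cross-validated regularization strength, using scikit-learn's RidgeCV. The fit_per_voxel helper and the alpha grid are assumptions rather than the authors' code, and ElasticNetCV could be swapped in for the elastic net.

        # Per-voxel shrinkage with cross-validated regularization strength.
        import numpy as np
        from sklearn.linear_model import RidgeCV

        def fit_per_voxel(X, Y, alphas=np.logspace(-3, 3, 13)):
            # X: (n_samples, n_features) stimulus design matrix;
            # Y: (n_samples, n_voxels) neural responses.
            models, chosen = [], []
            for v in range(Y.shape[1]):
                m = RidgeCV(alphas=alphas).fit(X, Y[:, v])  # per-voxel CV choice
                models.append(m)
                chosen.append(m.alpha_)
            # A large chosen alpha marks voxels with weak signal, mirroring the
            # reported link between regularization intensity and voxel relevance.
            return models, np.asarray(chosen)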

    Jet Structure in Heavy Ion Collisions

    We review recent theoretical developments in the study of the structure of jets that are produced in ultra-relativistic heavy ion collisions. The core of the review focuses on the dynamics of the parton cascade that is induced by the interactions of a fast parton crossing a quark-gluon plasma. We recall the basic mechanisms responsible for medium-induced radiation, underline the rapid disappearance of coherence effects, and the ensuing probabilistic nature of the medium-induced cascade. We discuss how large radiative corrections modify the classical picture of the gluon cascade, and how these can be absorbed in a renormalization of the jet quenching parameter $\hat q$. Then, we analyze the (wave-)turbulent transport of energy along the medium-induced cascade, and point out the main characteristics of the angular structure of such a cascade. Finally, color decoherence of the in-cone jet structure is discussed. Modest contact with phenomenology is presented towards the end of the review. Comment: Review to appear in QGP 5, 55 pages, 15 figures
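    The probabilistic nature of the medium-induced cascade can be illustrated with a toy Monte Carlo of independent 1 -> 2 branchings. The flat splitting probability below is a deliberate simplification, not the BDMPS-Z-like kernel discussed in the review; it only shows the independent-branching, energy-conserving picture.

        # Toy Monte Carlo of a probabilistic 1 -> 2 branching cascade.
        import random

        random.seed(7)

        def cascade(x, x_min=1e-3):
            # Split a fragment of energy fraction x with a flat splitting
            # fraction z; stop branching below x_min and return the fragments.
            if x < x_min:
                return [x]
            z = random.uniform(0.0, 1.0)
            return cascade(z * x) + cascade((1.0 - z) * x)

        fragments = cascade(1.0)
        assert abs(sum(fragments) - 1.0) < 1e-9  # energy conserved along the cascade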

    An $N$-uniform quantitative Tanaka's theorem for the conservative Kac's $N$-particle system with Maxwell molecules

    This paper considers the space homogeneous Boltzmann equation with Maxwell molecules and arbitrary angular distribution. Following Kac's program, emphasis is laid on the associated conservative Kac's stochastic $N$-particle system, a Markov process with binary collisions conserving energy and total momentum. An explicit Markov coupling (a probabilistic, Markovian coupling of two copies of the process) is constructed, using simultaneous collisions and parallel coupling of each binary random collision on the sphere of collisional directions. The Euclidean distance between the two coupled systems is almost surely decreasing with respect to time, and the associated quadratic coupling creation (the time variation of the averaged squared coupling distance) is computed explicitly. Then, a family (indexed by $\delta > 0$) of $N$-uniform "weak" coupling / coupling creation inequalities is proven, leading to an $N$-uniform power law trend to equilibrium of order ${\sim}_{t \to +\infty}\, t^{-\delta}$, with constants depending on moments of the velocity distributions strictly greater than $2(1+\delta)$. The case of an order $4$ moment is treated explicitly, achieving Kac's program without any chaos propagation analysis. Finally, two counter-examples are suggested indicating that the method: (i) requires the dependence on $>2$-moments, and (ii) cannot provide contractivity in quadratic Wasserstein distance in any case. Comment: arXiv admin note: text overlap with arXiv:1312.225
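    A minimal sketch of the conservative Kac $N$-particle dynamics described above: random binary collisions parametrized by a uniform direction on the sphere, which conserve momentum and kinetic energy exactly. The paper's general angular distribution and the Markov coupling construction are not implemented; kac_step is an illustrative name.

        # Conservative Kac-style N-particle system with Maxwell-molecule-like
        # binary collisions (uniform scattering direction).
        import numpy as np

        rng = np.random.default_rng(0)

        def kac_step(V):
            # V: (N, 3) array of velocities; collide one random pair.
            N = len(V)
            i, j = rng.choice(N, size=2, replace=False)
            center = 0.5 * (V[i] + V[j])
            r = 0.5 * np.linalg.norm(V[i] - V[j])
            sigma = rng.normal(size=3)
            sigma /= np.linalg.norm(sigma)   # uniform direction on the sphere
            V[i], V[j] = center + r * sigma, center - r * sigma
            return V

        V = rng.normal(size=(100, 3))
        E0, P0 = np.sum(V**2), V.sum(axis=0)
        for _ in range(10000):
            V = kac_step(V)
        # Energy and total momentum are conserved collision by collision.
        assert np.allclose(np.sum(V**2), E0) and np.allclose(V.sum(axis=0), P0)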

    Sparse Modeling for Image and Vision Processing

    In recent years, a large amount of multi-disciplinary research has been conducted on sparse models and their applications. In statistics and machine learning, the sparsity principle is used to perform model selection---that is, automatically selecting a simple model among a large collection of them. In signal processing, sparse coding consists of representing data with linear combinations of a few dictionary elements. Subsequently, the corresponding tools have been widely adopted by several scientific communities such as neuroscience, bioinformatics, or computer vision. The goal of this monograph is to offer a self-contained view of sparse modeling for visual recognition and image processing. More specifically, we focus on applications where the dictionary is learned and adapted to data, yielding a compact representation that has been successful in various contexts. Comment: 205 pages, to appear in Foundations and Trends in Computer Graphics and Vision
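    As a concrete instance of sparse coding, here is a hedged sketch that encodes a signal as a sparse combination of dictionary atoms by solving a lasso problem with ISTA (iterative soft-thresholding). The dictionary here is random rather than learned, unlike the learned, data-adapted dictionaries the monograph focuses on.

        # Sparse coding of a signal x over a dictionary D via ISTA.
        import numpy as np

        def ista(D, x, lam=0.1, n_iter=200):
            # min_a 0.5*||x - D a||^2 + lam*||a||_1 by iterative soft-thresholding.
            L = np.linalg.norm(D, 2) ** 2    # Lipschitz constant of the gradient
            a = np.zeros(D.shape[1])
            for _ in range(n_iter):
                grad = D.T @ (D @ a - x)
                a = a - grad / L
                a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
            return a

        rng = np.random.default_rng(0)
        D = rng.normal(size=(64, 256))
        D /= np.linalg.norm(D, axis=0)       # unit-norm atoms
        a_true = np.zeros(256)
        a_true[rng.choice(256, 5, replace=False)] = 1.0
        x = D @ a_true + 0.01 * rng.normal(size=64)
        a_hat = ista(D, x)                   # sparse code for x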

    Entrainment, motion and deposition of coarse particles transported by water over a sloping mobile bed

    In gravel-bed rivers, bedload transport exhibits considerable variability in time and space. Recently, stochastic bedload transport theories have been developed to address the mechanisms and effects of bedload transport fluctuations. Stochastic models involve parameters such as particle diffusivity, entrainment and deposition rates. The lack of hard information on how these parameters vary with flow conditions is a clear impediment to their application to real-world scenarios. In this paper, we determined the closure equations for the above parameters from laboratory experiments. We focused on shallow supercritical flow on a sloping mobile bed in straight channels, a setting that was representative of flow conditions in mountain rivers. Experiments were run at low sediment transport rates under steady nonuniform flow conditions (i.e., the water discharge was kept constant, but bedforms developed and migrated upstream, making flow nonuniform). Using image processing, we reconstructed particle paths to deduce the particle velocity and its probability distribution, particle diffusivity, and rates of deposition and entrainment. We found that on average, particle acceleration, velocity and deposition rate were responsive to local flow conditions, whereas entrainment rate depended strongly on local bed activity. Particle diffusivity varied linearly with the depth-averaged flow velocity. The empirical probability distribution of particle velocity was well approximated by a Gaussian distribution when all particle positions were considered together. In contrast, the particles located in close vicinity to the bed had exponentially distributed velocities. Our experimental results provide closure equations for stochastic or deterministic bedload transport models. Comment: Submitted to Journal of Geophysical Research
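    As an illustration of one of the measured closure parameters, the sketch below estimates particle diffusivity from reconstructed particle paths by fitting the linear growth of the variance of streamwise displacements. The diffusivity helper is a generic estimator under stated assumptions, not the authors' image-processing pipeline.

        # Estimate streamwise particle diffusivity from tracked trajectories.
        import numpy as np

        def diffusivity(paths, dt):
            # paths: list of 1-D arrays of streamwise positions, one sample every
            # dt seconds. Taking the variance of displacements at each lag removes
            # the mean downstream drift, leaving the diffusive spread.
            max_lag = min(len(p) for p in paths) - 1
            lags = np.arange(1, max_lag + 1)
            var = np.array([
                np.mean([np.var(p[k:] - p[:-k]) for p in paths]) for k in lags
            ])
            slope = np.polyfit(lags * dt, var, 1)[0]
            return slope / 2.0    # Var[x(t+s) - x(t)] ~ 2*D*s in one dimension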

    A Few Applications of Seismic Waves: Anisotropy Tomography and All That

    Seismic anisotropy, the variation of seismic wave speed with direction, is an extremely important physical phenomenon. When a certain type of seismic wave (a shear wave) propagates in an anisotropic medium, the component polarized parallel to the fast direction (along which the speed is higher) begins to lead, and the component polarized along the slow direction lags behind (analogous to optical birefringence). This observation of seismic anisotropy may be used to infer several physical properties of the medium through which these waves propagate. Fortunately, Earth's upper mantle shows significant seismic anisotropy due to the preferred crystallographic orientation of its constituent minerals. Therefore, it can provide crucial information regarding the convective flow and stress patterns in the upper mantle. To be more precise, seismic anisotropy can shed light on the detailed inner workings of several geodynamic processes which are inherently anisotropic in nature and therefore invisible to isotropic seismology. Owing to its simplicity, the classical ray theory based formulation is widely used to infer anisotropic structures of the upper mantle. However, due to the lack of vertical resolution of infinite-frequency ray theory based methods, and their numerous other shortcomings even in simplified studies assuming isotropy, it is undesirable to use a ray theory based method in a fully anisotropic framework. The major portion of this thesis is devoted to developing an anisotropy tomography method in a perturbative framework where the 'finite-frequency' or full 'wave' feature is taken into account. This technique proves to be a substantial improvement in terms of localization of the anisotropy of the upper mantle. After benchmarking, it is applied to infer the anisotropic structures beneath the High Lava Plains of Oregon, and as such was able to provide an avenue for reconciling apparently contradictory constraints on anisotropic structures from different measurements. In the last part of the thesis, we briefly discuss a technique (slightly tangential to the main theme of anisotropy, though it seems to enjoy a connection at a more fundamental level) that we develop to obtain an effective description of the physical properties of a general heterogeneous medium (including pure randomness). This is motivated by the fact that, when propagating through small heterogeneities, seismic waves naturally average the elastic properties of the medium, and therefore only an effective physics is realized.
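    The shear-wave splitting described at the start of the abstract can be illustrated with a toy computation: project a shear pulse onto fast and slow polarization axes and delay the slow component. The split_wave helper and its parameters are illustrative of the birefringence analogy only, not the thesis's finite-frequency tomography method.

        # Toy shear-wave splitting: the slow component lags the fast one.
        import numpy as np

        def split_wave(signal, dt, fast_azimuth_deg, delay_s):
            phi = np.radians(fast_azimuth_deg)
            fast = signal * np.cos(phi)      # component along the fast axis
            slow = signal * np.sin(phi)      # component along the slow axis
            shift = int(round(delay_s / dt))
            slow_delayed = np.r_[np.zeros(shift), slow[:-shift]] if shift else slow
            return fast, slow_delayed        # two components, slow arrives late

        t = np.arange(0.0, 2.0, 0.01)
        pulse = np.exp(-((t - 0.5) / 0.05) ** 2)  # Gaussian pulse polarized at 30 deg
        fast, slow = split_wave(pulse, 0.01, 30.0, 0.2)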