
    Herding as a Learning System with Edge-of-Chaos Dynamics

    Herding defines a deterministic dynamical system at the edge of chaos. It generates a sequence of model states and parameters by alternating parameter perturbations with state maximizations, where the sequence of states can be interpreted as "samples" from an associated MRF model. Herding differs from maximum likelihood estimation in that the sequence of parameters does not converge to a fixed point, and it differs from an MCMC posterior sampling approach in that the sequence of states is generated deterministically. Herding may be interpreted as a "perturb and map" method in which the parameter perturbations are generated by a deterministic nonlinear dynamical system rather than drawn randomly from a Gumbel distribution. This chapter studies the distinct statistical characteristics of the herding algorithm and shows that the fast convergence rate of the controlled moments may be attributed to edge-of-chaos dynamics. The herding algorithm can also be generalized to models with latent variables and to a discriminative learning setting. The perceptron cycling theorem ensures that the fast moment matching property is preserved in the more general framework.
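    The alternation of state maximization and parameter perturbation described in the abstract can be sketched on a toy discrete model. This is a minimal illustration under my own assumptions (binary states, feature map phi(s) = s, a hypothetical target moment vector mu), not the chapter's implementation; the empirical moments of the generated states track mu at the fast O(1/T) rate the abstract refers to.

    ```python
    import numpy as np

    def herd(mu, states, T=1000):
        """Deterministic herding: alternate argmax over states with a
        parameter update that moves toward the target moments mu."""
        w = mu.astype(float).copy()          # parameter vector
        samples = []
        for _ in range(T):
            # state maximization: state with the largest inner product
            s = max(states, key=lambda x: float(np.dot(w, x)))
            samples.append(np.asarray(s, dtype=float))
            # deterministic perturbation: no fixed point, no randomness
            w += mu - samples[-1]
        return samples

    states = [np.array(s) for s in [(0, 0), (0, 1), (1, 0), (1, 1)]]
    mu = np.array([0.3, 0.7])                # assumed target moments
    samples = herd(mu, states, T=2000)
    emp = np.mean(samples, axis=0)           # empirical moments approach mu
    ```

    Note that the update never converges: w keeps orbiting, which is exactly the non-convergent edge-of-chaos behavior contrasted with maximum likelihood above.
    
    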

    Bayesian Estimation for Continuous-Time Sparse Stochastic Processes

    We consider continuous-time sparse stochastic processes from which we have only a finite number of noisy/noiseless samples. Our goal is to estimate the noiseless samples (denoising) and the signal in-between (interpolation problem). By relying on tools from the theory of splines, we derive the joint a priori distribution of the samples and show how this probability density function can be factorized. The factorization enables us to tractably implement the maximum a posteriori (MAP) and minimum mean-square error (MMSE) criteria as two statistical approaches for estimating the unknowns. We compare the derived statistical methods with well-known techniques for the recovery of sparse signals, such as the ℓ1-norm and Log (ℓ1–ℓ0 relaxation) regularization methods. The simulation results show that, under certain conditions, the performance of the regularization techniques can be very close to that of the MMSE estimator. Comment: To appear in IEEE TS
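    The MAP-versus-MMSE comparison in the abstract can be illustrated in the simplest scalar setting. This is my own toy example, not the paper's spline-based estimators: denoising y = x + noise with a Laplace prior on x, where the MAP estimate reduces to ℓ1 regularization (soft thresholding) and the MMSE estimate is the posterior mean, computed here by numerical integration.

    ```python
    import numpy as np

    def map_l1(y, s2, b):
        """MAP under a Laplace(b) prior and Gaussian noise with variance s2:
        soft thresholding, i.e. l1-regularized denoising."""
        lam = s2 / b                         # threshold from prior/noise scales
        return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

    def mmse(y, s2, b, lo=-20.0, hi=20.0, n=20001):
        """MMSE = posterior mean, computed by brute-force quadrature."""
        grid = np.linspace(lo, hi, n)
        post = np.exp(-np.abs(grid) / b) * np.exp(-(y - grid) ** 2 / (2 * s2))
        return float(np.sum(grid * post) / np.sum(post))

    y, s2, b = 1.5, 0.25, 1.0                # assumed observation and scales
    x_map = map_l1(y, s2, b)                 # soft-thresholded estimate
    x_mmse = mmse(y, s2, b)                  # posterior-mean estimate
    ```

    Both estimators shrink the observation toward zero; the abstract's point is that under the right conditions the cheap regularization estimate lands close to the optimal MMSE one.
    
    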

    On the Inversion of High Energy Proton

    Inversion of the K-fold stochastic autoconvolution integral equation is an elementary nonlinear problem, yet there are no de facto methods to solve it with finite statistics. To fix this, we introduce a novel inverse algorithm based on a combination of minimization of relative entropy, the Fast Fourier Transform, and a recursive version of Efron's bootstrap. This gives us the power to obtain new perspectives on non-perturbative high energy QCD, such as probing the ab initio principles underlying the approximately negative binomial distributions of observed charged particle final state multiplicities, related to multiparton interactions, the fluctuating structure and profile of the proton, and diffraction. As a proof of concept, we apply the algorithm to ALICE proton-proton charged particle multiplicity measurements taken at different center-of-mass energies and fiducial pseudorapidity intervals at the LHC, available on HEPData. A strong double peak structure emerges from the inversion, barely visible without it. Comment: 29 pages, 10 figures, v2: extended analysis (re-projection ratios, 2D
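    The FFT ingredient of the inversion can be sketched in its naive form. This is my own illustration, not the paper's entropy-regularized bootstrap algorithm: for a discrete K-fold autoconvolution P = p * p * ... * p, the Fourier transform satisfies F{P} = F{p}^K, so a K-th root in Fourier space recovers p. The example assumes an exactly known, noiseless P with no zeros of F{p} on the unit circle; with finite statistics this naive root is fragile, which is precisely why regularization such as the paper's relative-entropy minimization is needed.

    ```python
    import numpy as np

    def invert_autoconvolution(P, K, n, m=256):
        """Recover p (length n) from its exact K-fold autoconvolution P
        via a K-th root in Fourier space, with phase unwrapping on a
        zero-padded (dense) frequency grid."""
        F = np.fft.fft(P, m)                     # zero-pad for dense sampling
        mag = np.abs(F) ** (1.0 / K)             # K-th root of the magnitude
        phase = np.unwrap(np.angle(F)) / K       # unwrap, then divide phase
        p = np.real(np.fft.ifft(mag * np.exp(1j * phase)))[:n]
        p = np.clip(p, 0.0, None)                # enforce non-negativity
        return p / p.sum()

    p_true = np.array([0.2, 0.6, 0.2])           # assumed toy distribution
    K = 3
    P = p_true.copy()
    for _ in range(K - 1):                       # build the K-fold convolution
        P = np.convolve(P, p_true)
    p_rec = invert_autoconvolution(P, K, len(p_true))
    ```

    Zero-padding before the FFT keeps the phase increments between frequency bins small enough for the unwrapping to succeed; without it, the principal K-th root can pick the wrong branch.
    
    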