
    Human Motion Capture Data Tailored Transform Coding

    Human motion capture (mocap) is a widely used technique for digitalizing human movements. With growing usage, compressing mocap data has received increasing attention, since compact data size enables efficient storage and transmission. Our analysis shows that mocap data have some unique characteristics that distinguish them from images and videos. Therefore, directly borrowing image or video compression techniques, such as the discrete cosine transform, does not work well. In this paper, we propose a novel mocap-tailored transform coding algorithm that takes advantage of these features. Our algorithm segments the input mocap sequences into clips, which are represented as 2D matrices. It then computes a set of data-dependent orthogonal bases to transform the matrices to the frequency domain, in which the transform coefficients have significantly less dependency. Finally, compression is obtained by entropy coding of the quantized coefficients and the bases. Our method has low computational cost and can be easily extended to compress mocap databases. It also requires neither training nor complicated parameter setting. Experimental results demonstrate that the proposed scheme significantly outperforms state-of-the-art algorithms in terms of both compression performance and speed.
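
    The abstract outlines a concrete pipeline (clip segmentation, data-dependent orthogonal bases, quantization, entropy coding). Below is a minimal, illustrative Python/NumPy sketch of that pipeline, not the authors' implementation: the SVD is used as one standard way to obtain data-dependent orthogonal bases, and the uniform quantizer, entropy estimate, and all names and parameters are hypothetical stand-ins.

```python
import numpy as np

def compress_clip(clip, k=8, step=0.02):
    """Sketch of data-dependent transform coding for one mocap clip.

    clip: (frames x dofs) 2D matrix of joint-angle/marker values.
    k:    number of basis vectors kept (rate/quality knob).
    step: uniform quantization step for the transform coefficients.
    """
    # Data-dependent orthogonal basis: leading left singular vectors of
    # the clip. (One standard choice; the paper's construction may differ.)
    U, _, _ = np.linalg.svd(clip, full_matrices=False)
    basis = U[:, :k]                                  # orthonormal columns

    coeffs = basis.T @ clip                           # transform to frequency domain
    q = np.round(coeffs / step).astype(np.int32)      # uniform quantization

    # A real entropy coder (e.g., arithmetic coding) would go here; we just
    # report the empirical entropy of the symbols as a proxy for coded size.
    vals, counts = np.unique(q, return_counts=True)
    p = counts / counts.sum()
    bits = -(p * np.log2(p)).sum() * q.size
    return basis, q, bits

def decompress_clip(basis, q, step=0.02):
    return basis @ (q.astype(np.float64) * step)

# Toy usage: 120 frames, 60 degrees of freedom of smooth motion.
t = np.linspace(0, 4 * np.pi, 120)
clip = np.stack([np.sin(t * (1 + i / 60.0)) for i in range(60)], axis=1)
basis, q, bits = compress_clip(clip)
rec = decompress_clip(basis, q)
print(f"~{bits / 8:.0f} bytes, RMSE={np.sqrt(((clip - rec) ** 2).mean()):.4f}")
```

    Note that a complete codec must also spend bits on the basis vectors themselves, which is why the abstract's entropy-coding step covers both the quantized coefficients and the bases.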

    Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)

    The implicit objective of the biennial "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST) is to foster collaboration between international scientific teams by disseminating ideas through both specific oral/poster presentations and free discussions. For its second edition, the iTWIST workshop took place in the medieval and picturesque town of Namur in Belgium, from Wednesday, August 27th to Friday, August 29th, 2014. The workshop was conveniently located in "The Arsenal" building, within walking distance of both hotels and the town center. iTWIST'14 gathered about 70 international participants and featured 9 invited talks, 10 oral presentations, and 14 posters on the following themes, all related to the theory, application, and generalization of the "sparsity paradigm": Sparsity-driven data sensing and processing; Union of low-dimensional subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph sensing/processing; Blind inverse problems and dictionary learning; Sparsity and computational neuroscience; Information theory, geometry and randomness; Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?; Sparse machine learning and inference. Comment: 69 pages, 24 extended abstracts, iTWIST'14 website: http://sites.google.com/site/itwist1

    Geometric deep learning: going beyond Euclidean data

    Many scientific fields study data with an underlying structure that is a non-Euclidean space. Some examples include social networks in computational social sciences, sensor networks in communications, functional networks in brain imaging, regulatory networks in genetics, and meshed surfaces in computer graphics. In many applications, such geometric data are large and complex (in the case of social networks, on the scale of billions), and are natural targets for machine learning techniques. In particular, we would like to use deep neural networks, which have recently proven to be powerful tools for a broad range of problems in computer vision, natural language processing, and audio analysis. However, these tools have been most successful on data with an underlying Euclidean or grid-like structure, and in cases where the invariances of these structures are built into the networks used to model them. Geometric deep learning is an umbrella term for emerging techniques attempting to generalize (structured) deep neural models to non-Euclidean domains such as graphs and manifolds. The purpose of this paper is to overview different examples of geometric deep learning problems and present available solutions, key difficulties, applications, and future research directions in this nascent field.
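
    As one concrete instance of generalizing neural networks to graph domains, the widely used GCN propagation rule of Kipf and Welling (chosen here purely for illustration; the survey covers many such models) aggregates each node's features over its neighborhood using the symmetrically normalized adjacency matrix. A minimal NumPy sketch:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W).

    A: (n x n) adjacency matrix of the graph.
    H: (n x d_in) node features.
    W: (d_in x d_out) learnable weights.
    """
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    d = A_hat.sum(axis=1)                    # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))   # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Toy graph: a 4-node path; 3 input features, 2 output features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 3))
W = rng.normal(size=(3, 2))
print(gcn_layer(A, H, W))   # (4 x 2) updated node features
```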

    Large Field Inflation from Axion Mixing

    We study general multi-axion systems, focusing on the possibility of large field inflation driven by axions. We find that through axion mixing, from a non-diagonal metric on the moduli space and/or from a Stückelberg coupling to a U(1) gauge field, an effectively super-Planckian decay constant can be generated without the need for "alignment" in the axion decay constants. We also investigate the consistency conditions related to the gauge symmetries in multi-axion systems, such as vanishing gauge anomalies and the potential presence of generalized Chern-Simons terms. Our scenario applies generally to field theory models whose axion periodicities are intrinsically sub-Planckian, but it is most naturally realized in string theory. The types of axion mixing invoked in our scenario appear quite commonly in D-brane models, and we present their implementation in type II superstring theory. Explicit stringy models exhibiting all the characteristics of our ideas are constructed within the frameworks of Type IIA intersecting D6-brane models on $T^6/\Omega R$ orientifolds and Type IIB intersecting D7-brane models on Swiss-Cheese Calabi-Yau orientifolds. Comment: v2: references added, typos corrected; v1: 1+85 pages, 4 figures
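
    For orientation only, the following LaTeX note sketches the textbook kinetic-mixing effect behind such enhancements; it is a generic toy example, not this paper's specific construction.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Toy example: N axions $\theta^i \sim \theta^i + 2\pi$ with a
% non-diagonal field-space metric (equal mixing $\epsilon$ between all pairs).
\[
  \mathcal{L} \supset \tfrac{1}{2} K_{ij}\,\partial_\mu\theta^i\,\partial^\mu\theta^j,
  \qquad
  K_{ij} = f^2\bigl[(1-\epsilon)\,\delta_{ij} + \epsilon\bigr],
  \qquad 0 \le \epsilon < 1 .
\]
The all-ones vector is an eigenvector, $K\,\mathbf{1} = f^2\bigl(1+(N-1)\epsilon\bigr)\mathbf{1}$,
so the canonical length of one period along the diagonal direction
$\theta^1 = \dots = \theta^N = t$, $t \in [0, 2\pi]$, is
\[
  2\pi f_{\mathrm{diag}}
  = 2\pi\sqrt{\mathbf{1}^{\top} K\,\mathbf{1}}
  = 2\pi f \sqrt{N\bigl(1+(N-1)\epsilon\bigr)}
  \;\longrightarrow\; 2\pi N f
  \quad (\epsilon \to 1),
\]
an effective decay constant parametrically larger than $f$ even though each
individual axion's decay constant is sub-Planckian. Whether such a long
direction survives in the potential is what consistency conditions of the
kind studied in the paper must decide.
\end{document}
```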

    On the Possibility of Large Axion Moduli Spaces

    We study the diameters of axion moduli spaces, focusing primarily on type IIB compactifications on Calabi-Yau three-folds. In this case, we derive a stringent bound on the diameter in the large volume region of parameter space for Calabi-Yaus with a simplicial Kähler cone. This bound can be violated by Calabi-Yaus with non-simplicial Kähler cones, but additional contributions are then introduced to the effective action which can restrict the field range accessible to the axions. We perform a statistical analysis of simulated moduli spaces, finding in all cases that these additional contributions restrict the diameter so that these moduli spaces are no more likely to yield successful inflation than those with a simplicial Kähler cone or with far fewer axions. Further heuristic arguments for axions in other corners of the duality web suggest that the difficulty observed in hep-th/0303252 of finding an axion decay constant parametrically larger than $M_p$ applies not only to individual axions, but to the diagonals of axion moduli space as well. This observation is shown to follow from the weak gravity conjecture of hep-th/0601001, so it likely applies not only to axions in string theory, but also to axions in any consistent theory of quantum gravity. Comment: 26+11 pages, 9 figures; discussion of relationship to weak gravity conjecture added in v2, references added in v3, minor changes in v4, matches publication version
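
    For context, the axionic form of the weak gravity conjecture invoked here is usually stated as follows; this is the standard formulation, sketched under the assumption that it is the variant the paper relies on.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Standard axionic weak gravity conjecture statement (illustrative).
For an axion of decay constant $f$, the axionic weak gravity conjecture
requires an instanton of Euclidean action $S$ with
\[
  f \, S \;\lesssim\; M_p .
\]
A super-Planckian decay constant $f \gg M_p$ therefore forces $S \lesssim 1$,
so higher harmonics $e^{-kS}\cos(k\theta)$ are unsuppressed, the instanton
expansion breaks down, and the flatness of the potential is spoiled. Applied
to the diagonal directions of a multi-axion moduli space, the same bound
constrains the diameter rather than any single decay constant.
\end{document}
```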

    Perturbative Corrections for Staggered Four-Fermion Operators

    We present results for one-loop matching coefficients between continuum four-fermion operators, defined in the Naive Dimensional Regularization scheme, and staggered fermion operators of various types. We calculate diagrams involving gluon exchange between quark lines, and "penguin" diagrams containing quark loops. For the former we use Landau gauge operators, with and without $O(a)$ improvement, and including the tadpole improvement suggested by Lepage and Mackenzie. For the latter we use gauge-invariant operators. Combined with existing results for two-loop anomalous dimension matrices and one-loop matching coefficients, our results allow a lattice calculation of the amplitudes for $K\bar{K}$ mixing and $K\to\pi\pi$ decays with all corrections of $O(g^2)$ included. We also discuss the mixing of $\Delta S=1$ operators with lower dimension operators, and show that, with staggered fermions, only a single lower dimension operator need be removed by non-perturbative subtraction. Comment: 44 pages latex (uses psfig), 3 ps figures, all bundled using uufiles (correctly this time!), UW/PT-93-
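
    For readers outside lattice QCD, one-loop matching calculations of this kind generically take the schematic form below; the operator basis, sign conventions, and finite coefficients specific to staggered fermions are exactly what the paper computes, so this is orientation only.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Generic structure of one-loop lattice-to-continuum operator matching.
\[
  \mathcal{O}_i^{\mathrm{cont}}(\mu)
  = \sum_j \Bigl[ \delta_{ij}
    + \frac{g^2}{16\pi^2}
      \bigl( -\gamma_{ij} \ln(\mu a) + c_{ij} \bigr)
  \Bigr]\, \mathcal{O}_j^{\mathrm{lat}}(a)
  + O(g^4) + O(a),
\]
where $\gamma_{ij}$ is the one-loop anomalous-dimension matrix, $a$ the
lattice spacing, $\mu$ the continuum renormalization scale (here in the NDR
scheme), and the finite matching coefficients $c_{ij}$ are determined by the
diagram-by-diagram calculation (gluon exchange and penguin contractions).
Sign and normalization conventions for $\gamma_{ij}$ vary in the literature.
\end{document}
```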

    Scheming in the SMEFT... and a reparameterization invariance!

    We explain a reparameterization invariance in the Standard Model Effective Field Theory that is present when considering $\bar{\psi}\psi \rightarrow \bar{\psi}\psi$ scatterings (with $\psi$ a fermion), and how this leads to unconstrained combinations of Wilson coefficients in global data analyses restricted to these measurements. We develop a $\{\hat{m}_W, \hat{m}_Z, \hat{G}_F\}$ input parameter scheme and compare results to the case when an input parameter set $\{\hat{\alpha}, \hat{m}_Z, \hat{G}_F\}$ is used to constrain this effective theory from the global data set, confirming the input parameter independence of the unconstrained combinations of Wilson coefficients and supporting the reparameterization invariance explanation. We discuss some conceptual issues related to these degeneracies that are relevant for LHC data reporting and analysis. Comment: 41 pages, 8 tables, 8 figures; addendum in appendix with results obtained allowing two independent flavor contractions for the operator $Q_{ll}$ in the $U(3)^5$ flavor symmetric limit
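
    Schematically, and independent of the paper's specific operator combination, such an unconstrained direction works as follows; this is a generic linearized-fit illustration, with all symbols hypothetical.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Schematic flat direction in a global EFT fit (illustrative).
Let the observables included in a fit be, to linear order in the Wilson
coefficients $C = (C_1,\dots,C_n)$,
\[
  O_a(C) = O_a^{\mathrm{SM}} + \sum_i M_{a i}\, C_i .
\]
If there is a direction $v$ with $M v = 0$, corresponding to a
reparameterization of the theory that leaves all included amplitudes
invariant, then
\[
  O_a(C + t\, v) = O_a(C) \quad \text{for all } t,
\]
so the data constrain only the components of $C$ orthogonal to $v$: the
combination along $v$ stays flat, independently of which input-parameter
scheme ($\{\hat m_W, \hat m_Z, \hat G_F\}$ or
$\{\hat\alpha, \hat m_Z, \hat G_F\}$) is used to define the fit.
\end{document}
```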

    Tensor Decompositions for Signal Processing Applications: From Two-way to Multiway Component Analysis

    The widespread use of multi-sensor technology and the emergence of big datasets have highlighted the limitations of standard flat-view matrix models and the necessity to move towards more versatile data analysis tools. We show that higher-order tensors (i.e., multiway arrays) enable such a fundamental paradigm shift towards models that are essentially polynomial and whose uniqueness, unlike that of matrix methods, is guaranteed under very mild and natural conditions. Benefiting from the power of multilinear algebra as their mathematical backbone, data analysis techniques using tensor decompositions are shown to have great flexibility in the choice of constraints that match data properties, and to find more general latent components in the data than matrix-based methods. A comprehensive introduction to tensor decompositions is provided from a signal processing perspective, starting from the algebraic foundations, via basic Canonical Polyadic and Tucker models, through to advanced cause-effect and multi-view data analysis schemes. We show that tensor decompositions enable natural generalizations of some commonly used signal processing paradigms, such as canonical correlation and subspace techniques, signal separation, linear regression, feature extraction, and classification. We also cover computational aspects, and point out how ideas from compressed sensing and scientific computing may be used to address the otherwise unmanageable storage and manipulation problems associated with big datasets. The concepts are supported by illustrative real-world case studies illuminating the benefits of the tensor framework, as an efficient and promising tool for modern signal processing, data analysis, and machine learning applications; these benefits also extend to vector/matrix data through tensorization. Keywords: ICA, NMF, CPD, Tucker decomposition, HOSVD, tensor networks, Tensor Train.
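
    To make one of the models named above concrete, here is a minimal NumPy sketch of the truncated higher-order SVD (HOSVD), a standard way to compute a Tucker decomposition; the toy data and function names are illustrative.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: arrange the mode-`mode` fibers as matrix columns."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated HOSVD: T ~= G x_1 U[0] x_2 U[1] x_3 U[2] ... (Tucker form)."""
    U = []
    for n, r in enumerate(ranks):
        # Leading left singular vectors of each mode-n unfolding.
        u, _, _ = np.linalg.svd(unfold(T, n), full_matrices=False)
        U.append(u[:, :r])
    # Core tensor: project T onto the factor subspaces, mode by mode.
    G = T
    for n, u in enumerate(U):
        G = np.moveaxis(np.tensordot(u.T, np.moveaxis(G, n, 0), axes=1), 0, n)
    return G, U

def tucker_reconstruct(G, U):
    """Multiply the core G by each factor matrix along its mode."""
    T = G
    for n, u in enumerate(U):
        T = np.moveaxis(np.tensordot(u, np.moveaxis(T, n, 0), axes=1), 0, n)
    return T

# Toy usage: a 3-way array with exact multilinear ranks (2, 3, 2).
rng = np.random.default_rng(1)
core = rng.normal(size=(2, 3, 2))
A, B, C = rng.normal(size=(10, 2)), rng.normal(size=(12, 3)), rng.normal(size=(8, 2))
T = tucker_reconstruct(core, [A, B, C])
G, U = hosvd(T, ranks=(2, 3, 2))
err = np.linalg.norm(T - tucker_reconstruct(G, U)) / np.linalg.norm(T)
print(f"relative reconstruction error: {err:.2e}")   # ~0 when ranks are exact
```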