
    Image classification by visual bag-of-words refinement and reduction

    This paper presents a new framework for visual bag-of-words (BOW) refinement and reduction that overcomes the drawbacks of the visual BOW model widely used for image classification. Although very influential in the literature, the traditional visual BOW model has two distinct drawbacks. First, for efficiency, the visual vocabulary is commonly constructed by directly clustering the low-level visual feature vectors extracted from local keypoints, without considering the high-level semantics of images. The visual BOW model therefore still suffers from the semantic gap and may incur significant performance degradation in more challenging tasks (e.g. social image classification). Second, thousands of visual words are typically generated to obtain better performance on a relatively large image dataset; with such a large vocabulary, the subsequent image classification can take a prohibitive amount of time. To overcome the first drawback, we develop a graph-based method for visual BOW refinement by exploiting the tags of social images, which are easy to access although noisy. To overcome the second, and for efficient image classification, we further reduce the refined visual BOW model to a much smaller size through semantic spectral clustering. Extensive experimental results show the promising performance of the proposed framework for visual BOW refinement and reduction.
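    As a minimal sketch of the reduction step (the function and toy data below are hypothetical; producing the word-to-cluster labels via semantic spectral clustering is the paper's contribution), merging refined visual words cluster-by-cluster shrinks each BOW histogram:

```python
import numpy as np

def reduce_bow(hist, word_to_cluster, n_clusters):
    """Merge the bins of a visual-word histogram according to a
    semantic clustering of the vocabulary (labels assumed given)."""
    reduced = np.zeros(n_clusters)
    for word, cluster in enumerate(word_to_cluster):
        reduced[cluster] += hist[word]
    return reduced

hist = np.array([3., 1., 0., 2., 4.])   # counts over 5 visual words
labels = np.array([0, 1, 0, 1, 1])      # semantic cluster per word
print(reduce_bow(hist, labels, 2))      # -> [3. 7.]
```

    The same mapping is applied to every image's histogram, so classification cost scales with the number of semantic clusters rather than the original vocabulary size.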

    The M\"obius Domain Wall Fermion Algorithm

    We present a review of the properties of generalized domain wall fermions, based on a (real) M\"obius transformation of the Wilson overlap kernel, discussing their algorithmic efficiency, the degree of explicit chiral violations measured by the residual mass ($m_{res}$), and the Ward-Takahashi identities. The M\"obius class interpolates between Shamir's domain wall operator and Bori\c{c}i's domain wall implementation of Neuberger's overlap operator without increasing the number of Dirac applications per conjugate gradient iteration. A new scaling parameter ($\alpha$) reduces chiral violations at finite fifth dimension ($L_s$) but yields exactly the same overlap action in the limit $L_s \rightarrow \infty$. Through the use of 4d Red/Black preconditioning and optimal tuning of the scaling $\alpha(L_s)$, we show that chiral symmetry violations are typically reduced by an order of magnitude at fixed $L_s$. At large $L_s$ we argue that the observed scaling $m_{res} = O(1/L_s)$ for Shamir is replaced by $m_{res} = O(1/L_s^2)$ for the properly tuned M\"obius algorithm with $\alpha = O(L_s)$.
    Comment: 59 pages, 11 figures
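    As a hedged sketch in standard (assumed) notation not fixed by the abstract, the M\"obius generalization can be summarized by the kernel
\[
H_M \;=\; \gamma_5\, \frac{(b_5 + c_5)\, D_W}{2 + (b_5 - c_5)\, D_W},
\qquad \alpha \;=\; b_5 + c_5,
\]
    with $(b_5, c_5) = (1, 0)$ recovering Shamir's kernel and $(b_5, c_5) = (1, 1)$ Bori\c{c}i's; in the limit $L_s \rightarrow \infty$ both yield the overlap operator $D_{ov} = \tfrac{1+m}{2} + \tfrac{1-m}{2}\, \gamma_5\, \mathrm{sign}(H_M)$.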

    Latent Semantic Learning with Structured Sparse Representation for Human Action Recognition

    This paper proposes a novel latent semantic learning method for extracting high-level features (i.e. latent semantics) from a large vocabulary of abundant mid-level features (i.e. visual keywords) with structured sparse representation, which can help to bridge the semantic gap in the challenging task of human action recognition. To discover the manifold structure of mid-level features, we develop a spectral embedding approach to latent semantic learning based on the L1-graph, without the need to tune any parameter for graph construction as a key step of manifold learning. More importantly, we construct the L1-graph with structured sparse representation, which can be obtained by structured sparse coding with its structured sparsity ensured by novel L1-norm hypergraph regularization over mid-level features. In the new embedding space, we learn latent semantics automatically from abundant mid-level features through spectral clustering. The learnt latent semantics can be readily used for human action recognition with SVM by defining a histogram intersection kernel. Different from traditional latent semantic analysis based on topic models, our latent semantic learning method can explore the manifold structure of mid-level features in both L1-graph construction and spectral embedding, which results in compact but discriminative high-level features. The experimental results on the commonly used KTH action dataset and the unconstrained YouTube action dataset show the superior performance of our method.
    Comment: The short version of this paper appears in ICCV 201
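    A minimal sketch of the histogram intersection kernel mentioned above (the function name and toy data are hypothetical), which compares two histograms by summing the bin-wise minima:

```python
import numpy as np

def hist_intersection_kernel(X, Y):
    """Gram matrix K[i, j] = sum_k min(X[i, k], Y[j, k]) between
    rows of X and rows of Y (each row a latent-semantic histogram)."""
    return np.minimum(X[:, None, :], Y[None, :, :]).sum(axis=2)

X = np.array([[1., 2., 0.],
              [0., 1., 3.]])
K = hist_intersection_kernel(X, X)
# Diagonal entries equal each histogram's total mass; off-diagonal
# entries measure the overlap between two histograms.
print(K)
```

    The resulting Gram matrix can be passed to an SVM as a precomputed kernel, since the intersection kernel is positive definite on non-negative histograms.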

    Functorial quantization and the Guillemin-Sternberg conjecture

    We propose that geometric quantization of symplectic manifolds is the arrow part of a functor, whose object part is deformation quantization of Poisson manifolds. The `quantization commutes with reduction' conjecture of Guillemin and Sternberg then becomes a special case of the functoriality of quantization. In fact, our formulation yields almost unlimited generalizations of the Guillemin-Sternberg conjecture, extending it, for example, to arbitrary Lie groups or even Lie groupoids. Technically, this involves symplectic reduction and Weinstein's dual pairs on the classical side, and Kasparov's bivariant K-theory for C*-algebras (KK-theory) on the quantum side.
    Comment: 15 pages. Proc. Bialowieza 200
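    In its standard formulation (symbols assumed here rather than taken from the abstract), the `quantization commutes with reduction' statement reads
\[
Q\!\left(M /\!/ G\right) \;\cong\; Q(M)^{G},
\]
    where $M /\!/ G = \mu^{-1}(0)/G$ is the Marsden-Weinstein symplectic reduction at the moment map $\mu$ and $Q(M)^{G}$ is the $G$-invariant part of the quantization of $M$.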

    UV dimensional reduction to two from group valued momenta

    We describe a new model of deformed relativistic kinematics based on the group manifold $U(1) \times SU(2)$ as a four-momentum space. We discuss the action of the Lorentz group on such a space and illustrate the deformed composition law for the group-valued momenta. Due to the geometric structure of the group, the deformed kinematics is governed by {\it two} energy scales $\lambda$ and $\kappa$. A relevant feature of the model is that it exhibits a running spectral dimension $d_s$ with the characteristic short-distance reduction to $d_s = 2$ found in most quantum gravity scenarios.
    Comment: 15 pages, 1 figure
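    A hedged numerical illustration (the exponential parametrization below is an assumption for the sketch, not taken from the paper): when spatial momenta live in SU(2), their composition is group multiplication, which is non-commutative, unlike ordinary momentum addition:

```python
import numpy as np

# Pauli matrices.
sigma = [np.array([[0, 1], [1, 0]], complex),
         np.array([[0, -1j], [1j, 0]], complex),
         np.array([[1, 0], [0, -1]], complex)]

def su2(p, kappa=1.0):
    """Map a spatial momentum p to the SU(2) element exp(-i p.sigma/kappa),
    using exp(-i theta n.sigma) = cos(theta) I - i sin(theta) n.sigma."""
    theta = np.linalg.norm(p) / kappa
    if theta == 0:
        return np.eye(2, dtype=complex)
    n_dot_sigma = sum(ni * si for ni, si in zip(np.asarray(p) / (theta * kappa), sigma))
    return np.cos(theta) * np.eye(2) - 1j * np.sin(theta) * n_dot_sigma

g1, g2 = su2([0.3, 0.0, 0.0]), su2([0.0, 0.4, 0.0])
# Group multiplication defines the deformed composition law; reversing
# the order of the factors changes the result.
print(np.allclose(g1 @ g2, g2 @ g1))  # False
```

    The scale $\kappa$ controls how strongly the composition deviates from commutative addition: for momenta much smaller than $\kappa$ the group elements are close to the identity and ordinary addition is recovered.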

    Density of Spherically-Embedded Stiefel and Grassmann Codes

    The density of a code is the fraction of the coding space covered by packing balls centered around the codewords. This paper investigates the density of codes in the complex Stiefel and Grassmann manifolds equipped with the chordal distance. The choice of distance enables the treatment of the manifolds as subspaces of Euclidean hyperspheres. In this geometry, the densest packings are not necessarily equivalent to maximum-minimum-distance codes. Computing a code's density follows from computing: i) the normalized volume of a metric ball and ii) the kissing radius, the radius of the largest balls one can pack around the codewords without overlapping. First, the normalized volume of a metric ball is evaluated by asymptotic approximations. The volume of a small ball can be well-approximated by the volume of a locally-equivalent tangential ball. In order to properly normalize this approximation, the precise volumes of the manifolds induced by their spherical embedding are computed. For larger balls, a hyperspherical cap approximation is used, which is justified by a volume comparison theorem showing that the normalized volume of a ball in the Stiefel or Grassmann manifold is asymptotically equal to the normalized volume of a ball in its embedding sphere as the dimension grows to infinity. Then, bounds on the kissing radius are derived alongside corresponding bounds on the density. Unlike spherical codes or codes in flat spaces, the kissing radius of Grassmann or Stiefel codes cannot be exactly determined from their minimum distance. It is nonetheless possible to derive bounds on density as functions of the minimum distance. Stiefel and Grassmann codes have larger density than their image spherical codes when dimensions tend to infinity. Finally, the bounds on density lead to refinements of the standard Hamming bounds for Stiefel and Grassmann codes.
    Comment: Two-column version (24 pages, 6 figures, 4 tables). To appear in IEEE Transactions on Information Theory
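    A minimal sketch of the chordal distance on the Grassmann manifold via projection matrices, which realizes the spherical embedding described above (the helper function is hypothetical; one common normalization is shown):

```python
import numpy as np

def chordal_distance(A, B):
    """Chordal distance between the subspaces spanned by the orthonormal
    columns of A and B, via their projection matrices:
    d_c = ||A A^H - B B^H||_F / sqrt(2)."""
    PA = A @ A.conj().T
    PB = B @ B.conj().T
    return np.linalg.norm(PA - PB) / np.sqrt(2)

# Two orthogonal lines in R^2: maximally distant points on G(2, 1).
A = np.array([[1.], [0.]])
B = np.array([[0.], [1.]])
print(chordal_distance(A, B))  # 1.0
```

    Because projection matrices have constant trace, the map $A \mapsto A A^H$ places the Grassmannian on a Euclidean hypersphere, which is what makes hyperspherical-cap volume approximations applicable.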

    Modeling of composite beams and plates for static and dynamic analysis

    A rigorous theory and the corresponding computational algorithms were developed for through-the-thickness analysis of composite plates. This type of analysis is needed in order to find the elastic stiffness constants of a plate. Additionally, the analysis is used to post-process the resulting plate solution in order to find approximate three-dimensional displacement, strain, and stress distributions throughout the plate. It was decided that the variational-asymptotical method (VAM) would serve as a suitable framework in which to solve these types of problems. Work during this reporting period has progressed along two lines: (1) further evaluation of neo-classical plate theory (NCPT) as applied to shear-coupled laminates; and (2) continued modeling of plates with nonuniform thickness.
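    As a hedged illustration in standard laminated-plate notation (assumed, not taken from the report), the elastic stiffness constants produced by a through-the-thickness analysis enter the plate constitutive law
\[
\begin{pmatrix} N \\ M \end{pmatrix}
=
\begin{pmatrix} A & B \\ B & D \end{pmatrix}
\begin{pmatrix} \epsilon^{0} \\ \kappa \end{pmatrix},
\]
    where $N$ and $M$ are the force and moment resultants, $\epsilon^{0}$ and $\kappa$ the mid-surface strains and curvatures, and $A$, $B$, $D$ the extension, coupling, and bending stiffness matrices.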