
    Expanding the Family of Grassmannian Kernels: An Embedding Perspective

    Modeling videos and image sets as linear subspaces has proven beneficial for many visual recognition tasks. However, it also incurs challenges arising from the fact that linear subspaces do not obey Euclidean geometry, but lie on a special type of Riemannian manifold known as the Grassmannian. To leverage techniques developed for Euclidean spaces (e.g., support vector machines) with subspaces, several recent studies have proposed to embed the Grassmannian into a Hilbert space by making use of a positive definite kernel. Unfortunately, only two Grassmannian kernels are known, neither of which, as we will show, is universal, which limits their ability to approximate a target function arbitrarily well. Here, we introduce several positive definite Grassmannian kernels, including universal ones, and demonstrate their superiority over previously known kernels in various tasks, such as classification, clustering, sparse coding and hashing.
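
    A minimal sketch may help make the kernel-embedding idea concrete. The snippet below implements the projection kernel, one of the two previously known positive definite Grassmannian kernels the abstract refers to; the function names and dimensions are illustrative.

```python
# Minimal sketch of the projection kernel on the Grassmannian.
# It is positive definite but, as the abstract argues, not universal.
import numpy as np

def orthonormal_basis(A):
    """Return an orthonormal basis for the column span of A (n x d)."""
    Q, _ = np.linalg.qr(A)
    return Q

def projection_kernel(X, Y):
    """Projection kernel k(X, Y) = ||X^T Y||_F^2 between two subspaces,
    each represented by an n x d matrix with orthonormal columns."""
    return np.linalg.norm(X.T @ Y, ord="fro") ** 2

# Usage: compare two 3-dimensional subspaces of R^20, e.g. from image sets.
rng = np.random.default_rng(0)
X = orthonormal_basis(rng.standard_normal((20, 3)))
Y = orthonormal_basis(rng.standard_normal((20, 3)))
print(projection_kernel(X, Y))  # lies in [0, d]; equals d when spans coincide
```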

    About Adaptive Coding on Countable Alphabets: Max-Stable Envelope Classes

    In this paper, we study the problem of lossless universal source coding for stationary memoryless sources on countably infinite alphabets. This task is generally not achievable without restricting the class of sources over which universality is desired. Building on our prior work, we propose natural families of sources characterized by a common dominating envelope. We particularly emphasize the notion of adaptivity, which is the ability to perform as well as an oracle knowing the envelope, without actually knowing it. This is closely related to the notion of hierarchical universal source coding, but with the important difference that families of envelope classes are not discretely indexed and not necessarily nested. Our contribution is to extend the classes of envelopes over which adaptive universal source coding is possible, namely by including max-stable (heavy-tailed) envelopes, which are excellent models in many applications such as natural language modeling. We derive a minimax lower bound on the redundancy of any code on such envelope classes, including an oracle that knows the envelope. We then propose a constructive code that does not use knowledge of the envelope. The code is computationally efficient and is structured to use an Expanding Threshold for Auto-Censoring, and we therefore dub it the ETAC-code. We prove that the ETAC-code achieves the lower bound on the minimax redundancy within a factor logarithmic in the sequence length, and can therefore be qualified as a near-adaptive code over families of heavy-tailed envelopes. For finite and light-tailed envelopes the penalty is even smaller, and the same code closely matches previous results that explicitly made the light-tailed assumption. Our technical results are founded on methods from regular variation theory and concentration of measure.
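
    To illustrate the auto-censoring idea, here is a schematic sketch: symbols below a slowly expanding threshold are handled by an adaptive coder over the censored alphabet, while rarer, larger symbols are escaped and sent explicitly. The threshold schedule and the per-symbol coders below are placeholder assumptions, not the paper's exact ETAC construction.

```python
# Schematic sketch of "expanding threshold auto-censoring" (ETAC-style).
import math

def threshold(t):
    # Hypothetical expanding threshold: grows slowly with the time index t,
    # so that only rare heavy-tailed symbols end up censored.
    return max(1, int(math.sqrt(t + 1)))

def elias_gamma(n):
    """Elias gamma code for a positive integer n, used here to encode
    censored (above-threshold) symbols by their raw value."""
    b = bin(n)[2:]
    return "0" * (len(b) - 1) + b

def etac_encode(xs):
    """Split the stream: symbols <= threshold(t) go to an adaptive coder
    over the censored alphabet (not shown); larger symbols are escaped
    and encoded explicitly with Elias gamma."""
    out = []
    for t, x in enumerate(xs):
        c = threshold(t)
        if x <= c:
            out.append(("in", x, c))             # adaptive coder over {1..c}
        else:
            out.append(("esc", elias_gamma(x)))  # censored: send value itself
    return out

print(etac_encode([1, 2, 9, 1, 30, 2]))
```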

    Modeling of universal K-digital structures

    Chetverikov G.G., Tyshchenko O.O., Zmiivska S.V., Kurinnyi O.V., Horovyi I.U. Theoretical principles for constructing spatial invertible multiple-valued elements and structures have been developed, and their practical application in information systems with k-valued coding has been analyzed and tested. The enumerated properties and functions are, in essence, not only discrete in time but also many-valued.
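
    As a rough illustration of what k-valued (multiple-valued) logic elements compute, the sketch below uses the standard min/max/negation generalizations of Boolean operations for k = 3; this is a generic example, not the authors' specific element design.

```python
# Standard k-valued generalizations of AND / OR / NOT (illustrative only).
K = 3  # ternary logic: truth values 0, 1, 2

def mvl_and(a, b):
    return min(a, b)     # k-valued conjunction

def mvl_or(a, b):
    return max(a, b)     # k-valued disjunction

def mvl_not(a):
    return (K - 1) - a   # k-valued negation (Lukasiewicz-style)

# Print the full truth table for the three operations.
for a in range(K):
    for b in range(K):
        print(a, b, mvl_and(a, b), mvl_or(a, b), mvl_not(a))
```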

    Word Activation Forces Map Word Networks

    Words associate with each other in intricate clusters [1-3]. Yet the brain capably encodes these complex relations into workable networks [4-7], such that the onset of a word in the brain automatically and selectively activates its associates, facilitating language understanding and generation [8-10]. It is believed that the activation strength from one word to another forges and accounts for the latent structures of word networks. This implies that mapping word networks from brains to computers [11,12], which is necessary for various purposes [1,2,13-15], may be achieved by modeling the activation strengths. However, although many investigations of word activation effects have been carried out [8-10,16-20], modeling the activation strengths remains an open problem. Consequently, producing such mappings still requires enormous manual labor [11,12]. Here we show that word activation forces, statistically defined by a formula of the same form as universal gravitation, capture essential information about word networks, leading to a superior approach to the mapping. The approach compatibly encodes syntactic and semantic information into sparse, directed networks and comprehensively highlights the features of individual words. We find that, based on the directed networks, sensible word clusters and hierarchies can be discovered efficiently. These striking results strongly suggest that word activation forces may reveal how word networks are encoded in the brain.
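
    As an illustration of a gravitation-form activation score, the sketch below builds word "masses" from occurrence frequencies and distances from mean positional gaps in a toy corpus. The functional form p(i) * p(j) / d(i, j)^2 is an assumption for illustration, not the paper's exact definition of the word activation force.

```python
# Hedged sketch of a gravitation-style activation score from corpus statistics.
from collections import Counter
import itertools

corpus = "the cat sat on the mat the cat ran".split()

freq = Counter(corpus)
total = len(corpus)
positions = {}
for idx, w in enumerate(corpus):
    positions.setdefault(w, []).append(idx)

def mean_gap(w1, w2):
    """Mean absolute distance between occurrences of w1 and w2."""
    gaps = [abs(i - j) for i in positions[w1] for j in positions[w2]]
    return sum(gaps) / len(gaps)

def activation_force(w1, w2):
    """Gravitation-style score: occurrence 'masses' over squared distance."""
    p1, p2 = freq[w1] / total, freq[w2] / total
    d = mean_gap(w1, w2)
    return p1 * p2 / (d * d)

# Score every word pair; higher values suggest stronger association.
for a, b in itertools.combinations(sorted(freq), 2):
    print(f"{a} ~ {b}: {activation_force(a, b):.5f}")
```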