
    Gluing two affine Yangians of $\mathfrak{gl}_1$

    We construct a four-parameter family of affine Yangian algebras by gluing two copies of the affine Yangian of $\mathfrak{gl}_1$. Our construction allows for gluing operators with arbitrary (integer or half-integer) conformal dimension and arbitrary (bosonic or fermionic) statistics, which is related to the relative framing. The resulting family of algebras is a two-parameter generalization of the $\mathcal{N}=2$ affine Yangian, which is isomorphic to the universal enveloping algebra of $\mathfrak{u}(1)\oplus \mathcal{W}^{\mathcal{N}=2}_{\infty}[\lambda]$. All algebras that we construct have natural representations in terms of "twin plane partitions", a pair of plane partitions appropriately joined along one common leg. We observe that the geometry of twin plane partitions, which determines the algebra, bears striking similarities to the geometry of certain toric Calabi-Yau threefolds.
    Comment: 88 pages, 12 figures
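
    As background for the plane-partition representations mentioned above: single plane partitions are counted by MacMahon's generating function $M(q)=\prod_{n\ge 1}(1-q^n)^{-n}$, the vacuum character underlying the plane-partition representation of the affine Yangian of $\mathfrak{gl}_1$. Below is a minimal sketch computing its coefficients (illustrative context only; the twin-plane-partition counting in the paper is more involved):

        # Coefficients of MacMahon's function M(q) = prod_{n>=1} (1-q^n)^(-n);
        # the coefficient of q^N is the number of plane partitions of N.
        def plane_partition_counts(N):
            coeffs = [0] * (N + 1)
            coeffs[0] = 1
            for n in range(1, N + 1):
                # Multiplying a series by 1/(1-q^n) is a strided prefix sum;
                # applying it n times gives the factor (1-q^n)^(-n).
                for _ in range(n):
                    for k in range(n, N + 1):
                        coeffs[k] += coeffs[k - n]
            return coeffs

        print(plane_partition_counts(6))  # [1, 1, 3, 6, 13, 24, 48]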

    One-shot learning of object categories

    Learning visual models of object categories notoriously requires hundreds or thousands of training examples. We show that it is possible to learn much information about a category from just one, or a handful, of images. The key insight is that, rather than learning from scratch, one can take advantage of knowledge coming from previously learned categories, no matter how different these categories might be. We explore a Bayesian implementation of this idea. Object categories are represented by probabilistic models. Prior knowledge is represented as a probability density function on the parameters of these models. The posterior model for an object category is obtained by updating the prior in the light of one or more observations. We test a simple implementation of our algorithm on a database of 101 diverse object categories. We compare category models learned by an implementation of our Bayesian approach to models learned by maximum likelihood (ML) and maximum a posteriori (MAP) methods. We find that on a database of more than 100 categories, the Bayesian approach produces informative models when the number of training examples is too small for other methods to operate successfully.
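
    The core update can be illustrated with a deliberately simplified model: a 1-D Gaussian feature with a conjugate Normal prior on its mean. (The paper itself uses constellation models with densities over all model parameters; this sketch is an assumption made for illustration, not the authors' model.)

        # One-observation Normal-Normal conjugate update: a prior learned
        # from other categories is combined with a single new example.
        def posterior_mean(prior_mu, prior_var, obs, obs_var):
            post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
            post_mu = post_var * (prior_mu / prior_var + obs / obs_var)
            return post_mu, post_var

        mu, var = posterior_mean(prior_mu=0.0, prior_var=1.0, obs=2.0, obs_var=1.0)
        print(mu, var)  # 1.0 0.5 -- shrunk toward the prior; the ML estimate
                        # from one example would be exactly 2.0, with no
                        # measure of uncertainty.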

    What do we perceive in a glance of a real-world scene?

    What do we see when we glance at a natural scene, and how does it change as the glance becomes longer? We asked naive subjects to report in a free-form format what they saw when looking at briefly presented real-life photographs. Our subjects received no specific information as to the content of each stimulus. Thus, our paradigm differs from previous studies where subjects were cued before a picture was presented and/or were probed with multiple-choice questions. In the first stage, 90 novel grayscale photographs were foveally shown to a group of 22 native-English-speaking subjects. The presentation time was chosen at random from a set of seven possible times (from 27 to 500 ms). A perceptual mask followed each photograph immediately. After each presentation, subjects reported what they had just seen as completely and truthfully as possible. In the second stage, another group of naive individuals was instructed to score each of the descriptions produced by the subjects in the first stage. Individual scores were assigned to more than a hundred different attributes. We show that within a single glance, much object- and scene-level information is perceived by human subjects. The richness of our perception, though, seems asymmetrical. Subjects show a propensity toward perceiving natural scenes as outdoor rather than indoor. The reporting of sensory- or feature-level information of a scene (such as shading and shape) consistently precedes the reporting of semantic-level information. But once subjects recognize more semantic-level components of a scene, there is little evidence suggesting any bias toward either scene-level or object-level recognition.

    Learning Object Categories From Internet Image Searches

    In this paper, we describe a simple approach to learning models of visual object categories from images gathered from Internet image search engines. The images for a given keyword are typically highly variable, with a large fraction being unrelated to the query term, and thus pose a challenging environment from which to learn. By training our models directly from Internet images, we remove the need to laboriously compile training data sets, as required by most other recognition approaches; this opens up the possibility of learning object category models "on-the-fly." We describe two simple approaches, derived from the probabilistic latent semantic analysis (pLSA) technique for text document analysis, that can be used to automatically learn object models from these data. We show two applications of the learned model: first, to rerank the images returned by the search engine, thus improving the quality of the results it returns; and second, to recognize objects in other image data sets.
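
    For concreteness, here is a minimal sketch of the standard pLSA EM iteration on a document-word count matrix. (The paper adapts pLSA to "visual words" extracted from images; this generic version is an illustration, not the authors' exact formulation.)

        import numpy as np

        def plsa(n, n_topics, n_iter=100, seed=0):
            """EM for pLSA on a count matrix n of shape (docs, words)."""
            rng = np.random.default_rng(seed)
            D, W = n.shape
            p_w_z = rng.random((n_topics, W)); p_w_z /= p_w_z.sum(1, keepdims=True)
            p_z_d = rng.random((D, n_topics)); p_z_d /= p_z_d.sum(1, keepdims=True)
            for _ in range(n_iter):
                # E-step: responsibilities p(z | d, w), shape (D, W, Z).
                joint = p_z_d[:, None, :] * p_w_z.T[None, :, :]
                resp = joint / joint.sum(axis=2, keepdims=True)
                # M-step: re-estimate p(w | z) and p(z | d) from weighted counts.
                weighted = n[:, :, None] * resp
                p_w_z = weighted.sum(axis=0).T
                p_w_z /= p_w_z.sum(axis=1, keepdims=True)
                p_z_d = weighted.sum(axis=1)
                p_z_d /= p_z_d.sum(axis=1, keepdims=True)
            return p_w_z, p_z_d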

    Voluntary movement takes shape: the link between movement focusing and sensory input gating

    The aim of the study was to investigate the relationship between motor surround inhibition (mSI) and the modulation of the somatosensory temporal discrimination threshold (STDT) induced by voluntary movement. Seventeen healthy volunteers participated in the study. To assess mSI, we delivered single transcranial magnetic stimulation (TMS) pulses to record motor evoked potentials (MEPs) from the right abductor digiti minimi (ADM; "surround muscle") during brief right little finger flexion. mSI was expressed as the ratio of ADM MEP amplitude during movement to MEP amplitude at rest. We first measured STDT values by assessing the shortest interval at which subjects were able to recognize a pair of electric stimuli, delivered over the volar surface of the right little finger, as separate in time. We then evaluated the STDT by using the same motor task used for mSI. mSI and STDT modulation were evaluated at the same time points during movement. mSI and STDT modulation displayed similar time-dependent changes during little finger movement. In both cases, the modulation was maximal at the onset of the movement and gradually vanished over about 200 ms. Our study provides the first neurophysiological evidence of the relationship between mSI and tactile-motor integration during movement execution.
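
    Written out, the measure described above is simply
    $$\mathrm{mSI} = \frac{\mathrm{MEP}_{\mathrm{ADM,\,movement}}}{\mathrm{MEP}_{\mathrm{ADM,\,rest}}},$$
    so values below 1 indicate that the surround muscle's corticospinal excitability is suppressed during the movement.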

    Constraining dynamical dark energy with a divergence-free parametrization in the presence of spatial curvature and massive neutrinos

    In this paper, we report the results of constraining dynamical dark energy with a divergence-free parametrization, $w(z) = w_0 + w_a\left(\frac{\ln(2+z)}{1+z}-\ln 2\right)$, in the presence of spatial curvature and massive neutrinos, using the 7-yr WMAP temperature and polarization data, the power spectrum of LRGs derived from SDSS DR7, the Type Ia supernova data from the Union2 sample, and the new measurements of $H_0$ from HST, by means of an MCMC global fit. Our focus is on the determination of the spatial curvature, $\Omega_k$, and the total mass of neutrinos, $\sum m_\nu$, in such a dynamical dark energy scenario, and on the influence of these factors on the constraints on the dark energy parameters $w_0$ and $w_a$. We show that $\Omega_k$ and $\sum m_\nu$ can be well constrained in this model; the 95% CL limits are $-0.0153<\Omega_k<0.0167$ and $\sum m_\nu<0.56$ eV. Compared to the case of a flat universe, we find that the error in $w_0$ is amplified by 25.51% and the error in $w_a$ by 0.14%; compared to the case with zero neutrino mass, the error in $w_0$ is amplified by 12.24% and the error in $w_a$ by 1.63%.
    Comment: 5 pages, 2 figures; discussions added; accepted for publication in Physics Letters
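
    The "divergence-free" property of this form can be checked directly from its limits:
    $$w(0)=w_0,\qquad \lim_{z\to\infty}w(z)=w_0-w_a\ln 2,\qquad \lim_{z\to -1^{+}}w(z)=w_0+w_a(1-\ln 2),$$
    so $w(z)$ remains finite over the entire expansion history, including the far future ($z\to -1$), where the widely used CPL form $w(z)=w_0+w_a\,z/(1+z)$ diverges.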

    Lattice-Boltzmann Simulations of the Thermally Driven 2D Square Cavity at High Rayleigh Numbers

    The thermal lattice Boltzmann equation (TLBE) with a multiple-relaxation-time (MRT) collision model is used to simulate the steady thermal convective flows in the two-dimensional square cavity with differentially heated vertical walls at high Rayleigh numbers. The MRT-TLBE consists of two sets of distribution functions, i.e., a D2Q9 model for the mass-momentum equations and a D2Q5 model for the temperature equation. The dimensionless flow parameters are the following: the Prandtl number Pr = 0.71 and the Rayleigh number Ra = $10^6$, $10^7$, and $10^8$. The D2Q9 + D2Q5 MRT-TLBE is shown to be second-order accurate and to be capable of yielding results of benchmark quality, including various Nusselt numbers and local hydrodynamic intensities. Our results also agree well with existing benchmark data obtained by other methods.
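
    As a point of reference for the D2Q9 part of the model, the standard lattice constants and second-order equilibrium are sketched below. (The paper's MRT collision operator and the coupled D2Q5 thermal lattice are not reproduced here; this shows only the generic D2Q9 ingredients.)

        import numpy as np

        # Standard D2Q9 lattice: 9 discrete velocities and their weights.
        c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                      [1, 1], [-1, 1], [-1, -1], [1, -1]])
        w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

        def equilibrium(rho, u):
            # Second-order Maxwellian equilibrium with sound speed c_s^2 = 1/3.
            cu = c @ u                       # c_i . u for each direction
            return rho * w * (1 + 3 * cu + 4.5 * cu**2 - 1.5 * (u @ u))

        # Mass is conserved exactly by construction:
        print(equilibrium(1.0, np.array([0.05, 0.0])).sum())  # 1.0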

    TinyTracker: Ultra-Fast and Ultra-Low-Power Edge Vision In-Sensor for Gaze Estimation

    Intelligent edge vision tasks encounter the critical challenge of ensuring power and latency efficiency due to the typically heavy computational load they impose on edge platforms. This work leverages one of the first "AI in sensor" vision platforms, IMX500 by Sony, to achieve ultra-fast and ultra-low-power end-to-end edge vision applications. We evaluate the IMX500 and compare it to other edge platforms, such as the Google Coral Dev Micro and Sony Spresense, by exploring gaze estimation as a case study. We propose TinyTracker, a highly efficient, fully quantized model for 2D gaze estimation designed to maximize the performance of the edge vision systems considered in this study. TinyTracker achieves a 41x size reduction (600 KB) compared to iTracker [1] without significant loss in gaze estimation accuracy (maximum of 0.16 cm when fully quantized). TinyTracker's deployment on the Sony IMX500 vision sensor results in an end-to-end latency of around 19 ms. The camera takes around 17.9 ms to read, process, and transmit the pixels to the accelerator. The inference time of the network is 0.86 ms, with an additional 0.24 ms for retrieving the results from the sensor. The overall energy consumption of the end-to-end system is 4.9 mJ, including 0.06 mJ for inference. The end-to-end study shows that the IMX500 is 1.7x faster than the Coral Dev Micro (19 ms vs 34.4 ms) and 7x more power efficient (4.9 mJ vs 34.2 mJ).
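
    The quoted end-to-end figures are consistent with the stage timings given in the abstract:

        # Sanity check using the numbers quoted above.
        camera_ms, inference_ms, readout_ms = 17.9, 0.86, 0.24
        print(camera_ms + inference_ms + readout_ms)  # 19.0 ms end-to-end
        print(34.2 / 4.9)                             # ~7x energy advantage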