9,240 research outputs found

    Sketch-a-Net that Beats Humans

    We propose a multi-scale multi-channel deep neural network framework that, for the first time, yields sketch recognition performance surpassing that of humans. Our superior performance is a result of explicitly embedding the unique characteristics of sketches in our model: (i) a network architecture designed for sketch rather than natural photo statistics, (ii) a multi-channel generalisation that encodes sequential ordering in the sketching process, and (iii) a multi-scale network ensemble with joint Bayesian fusion that accounts for the different levels of abstraction exhibited in free-hand sketches. We show that state-of-the-art deep networks specifically engineered for photos of natural objects fail to perform well on sketch recognition, regardless of whether they are trained on photos or sketches. Our network, on the other hand, not only delivers the best performance on the largest human sketch dataset to date, but is also small enough that efficient training is possible using just CPUs.
    Comment: Accepted to BMVC 2015 (oral)
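    A minimal illustrative sketch (not the authors' code) of the multi-channel idea in (ii): stroke ordering is encoded by rasterising cumulative subsets of the time-ordered strokes into separate input channels. The stroke format, channel count, and crude point-based rasterisation below are assumptions for illustration only.

        import numpy as np

        def strokes_to_channels(strokes, size=256, n_channels=3):
            """Encode ordered strokes as cumulative input channels.

            strokes: list of (N_i, 2) float arrays with x, y in [0, 1],
            ordered by drawing time (illustrative format assumption).
            Returns an array of shape (n_channels, size, size).
            """
            # Split the ordered stroke sequence into n_channels temporal groups.
            cuts = np.linspace(0, len(strokes), n_channels + 1).astype(int)
            channels = np.zeros((n_channels, size, size), dtype=np.float32)
            canvas = np.zeros((size, size), dtype=np.float32)
            for c in range(n_channels):
                for stroke in strokes[cuts[c]:cuts[c + 1]]:
                    # Crude point rasterisation; a real pipeline would draw segments.
                    pts = np.clip((stroke * (size - 1)).astype(int), 0, size - 1)
                    canvas[pts[:, 1], pts[:, 0]] = 1.0
                channels[c] = canvas  # cumulative: later channels also see earlier strokes
            return channels

        # Toy usage: two strokes drawn one after the other.
        s1 = np.stack([np.linspace(0.1, 0.9, 50), np.full(50, 0.5)], axis=1)
        s2 = np.stack([np.full(50, 0.5), np.linspace(0.1, 0.9, 50)], axis=1)
        print(strokes_to_channels([s1, s2], size=64, n_channels=2).shape)  # (2, 64, 64)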

    Note On Certain Inequalities for Neuman Means

    In this paper, we give explicit formulas for the Neuman means $N_{AH}$, $N_{HA}$, $N_{AC}$ and $N_{CA}$, and present the best possible upper and lower bounds for these means in terms of combinations of the harmonic mean $H$, the arithmetic mean $A$ and the contraharmonic mean $C$.
    Comment: 9 pages
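    For reference, the three classical means in which the bounds are expressed are, for $a, b > 0$ (standard definitions, stated here for convenience rather than quoted from the paper):

        \[
          H(a,b) = \frac{2ab}{a+b}, \qquad
          A(a,b) = \frac{a+b}{2}, \qquad
          C(a,b) = \frac{a^2 + b^2}{a+b}.
        \]

    These satisfy $H(a,b) \le A(a,b) \le C(a,b)$, with equality if and only if $a = b$, which is what makes combinations of them natural endpoints for two-sided bounds.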

    Deep Spatial-Semantic Attention for Fine-Grained Sketch-Based Image Retrieval

    Human sketches are unique in being able to capture both the spatial topology of a visual object and its subtle appearance details. Fine-grained sketch-based image retrieval (FG-SBIR) leverages these fine-grained characteristics of sketches to conduct instance-level retrieval of photos. Nevertheless, human sketches are often highly abstract and iconic, resulting in severe misalignments with candidate photos, which in turn make subtle visual detail matching difficult. Existing FG-SBIR approaches focus only on coarse holistic matching via deep cross-domain representation learning, and do not explicitly account for fine-grained details and their spatial context. In this paper, a novel deep FG-SBIR model is proposed which differs significantly from existing models in that: (1) it is spatially aware, achieved by introducing an attention module that is sensitive to the spatial position of visual details; (2) it combines coarse and fine semantic information via a shortcut connection fusion block; and (3) it models feature correlation and is robust to misalignments between the extracted features across the two domains by introducing a novel loss based on a higher-order learnable energy function (HOLEF). Extensive experiments show that the proposed deep spatial-semantic attention model significantly outperforms the state of the art.
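    A minimal PyTorch sketch of a spatial attention module in the spirit of point (1) above (an illustrative assumption, not the paper's exact architecture): a 1x1 convolution scores every spatial position of a convolutional feature map, and the map is reweighted so that matching can focus on informative detail regions; the residual-style addition loosely mirrors the shortcut fusion of point (2).

        import torch
        import torch.nn as nn

        class SpatialAttention(nn.Module):
            """Reweight a conv feature map by a learned per-position score."""

            def __init__(self, in_channels):
                super().__init__()
                self.score = nn.Conv2d(in_channels, 1, kernel_size=1)

            def forward(self, feats):                       # feats: (B, C, H, W)
                b, _, h, w = feats.shape
                attn = self.score(feats).view(b, 1, h * w)  # one score per position
                attn = torch.softmax(attn, dim=-1).view(b, 1, h, w)
                return feats + feats * attn                 # shortcut-style fusion

        x = torch.randn(2, 64, 8, 8)                        # dummy feature map
        print(SpatialAttention(64)(x).shape)                # torch.Size([2, 64, 8, 8])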

    $\eta_Q$ meson photoproduction in ultrarelativistic heavy ion collisions

    We calculate the transverse momentum distributions for inclusive $\eta_{c,b}$ meson production via gluon-gluon interactions in photoproduction processes in relativistic heavy ion collisions. The color-singlet (CS) and color-octet (CO) components are considered within the framework of non-relativistic Quantum Chromodynamics (NRQCD) for the production of heavy quarkonium. With the phenomenological values of the color-singlet and color-octet matrix elements, the main contribution to heavy quarkonium production comes from the gluon-gluon interaction with emission of an additional gluon in the initial state. The numerical results indicate that the contribution of photoproduction processes cannot be neglected at mid-rapidity in p-p and Pb-Pb collisions at Large Hadron Collider (LHC) energies.
    Comment: 11 pages, 2 figures
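    Schematically, the NRQCD factorisation invoked above writes the inclusive cross section as a sum over colour-singlet and colour-octet $Q\bar{Q}$ states $n$, each a perturbative short-distance coefficient multiplied by a long-distance matrix element (a standard schematic form, not copied from the paper):

        \[
          \sigma\bigl(gg \to \eta_Q + X\bigr)
            = \sum_{n} \hat{\sigma}\bigl(gg \to Q\bar{Q}[n] + X\bigr)\,
              \langle \mathcal{O}^{\eta_Q}[n] \rangle ,
        \]

    where the $\langle \mathcal{O}^{\eta_Q}[n] \rangle$ are the phenomenological matrix elements referred to in the abstract.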