    Improved repeatability measures for evaluating performance of feature detectors

    The most frequently employed measure for performance characterisation of local feature detectors is repeatability, but it has been observed that this does not necessarily mirror actual performance. Presented are improved repeatability formulations which correlate much better with the true performance of feature detectors. Comparative results for several state-of-the-art feature detectors are presented using these measures; it is found that Hessian-based detectors are generally superior at identifying features when images are subject to various geometric and photometric transformations.
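    For context, the classic repeatability measure that these formulations improve upon counts a keypoint as repeated if, after mapping through the ground-truth homography, it lands near a keypoint detected independently in the second image. A minimal sketch follows (the homography H, the pixel tolerance eps, and the function names are assumptions; the paper's improved variants are not reproduced here):

```python
import numpy as np

def repeatability(kps_a, kps_b, H, eps=1.5):
    """Classic repeatability: fraction of keypoints in image A whose
    projections under homography H land within eps pixels of a keypoint
    detected independently in image B. (Baseline formulation only; the
    paper proposes improved variants.)"""
    kps_a = np.asarray(kps_a, dtype=float)   # (N, 2) x, y coordinates
    kps_b = np.asarray(kps_b, dtype=float)   # (M, 2)
    # Project A's keypoints into B's frame (homogeneous coordinates).
    ones = np.ones((len(kps_a), 1))
    proj = (H @ np.hstack([kps_a, ones]).T).T
    proj = proj[:, :2] / proj[:, 2:3]
    # A keypoint is "repeated" if some detection in B lies within eps.
    d = np.linalg.norm(proj[:, None, :] - kps_b[None, :, :], axis=2)
    repeated = (d.min(axis=1) <= eps).sum()
    return repeated / min(len(kps_a), len(kps_b))
```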

    Keypoint detection by wave propagation

    We propose to rely on the wave equation for the detection of repeatable keypoints invariant up to image scale and rotation and robust to viewpoint variations, blur, and lighting changes. The algorithm exploits the properties of local spatial–temporal extrema of the evolution of image intensities under the wave propagation to highlight salient symmetries at different scales. Although the image structures found by most state-of-the-art detectors, such as blobs and corners, occur typically on highly textured surfaces, salient symmetries are widespread in diverse kinds of images, including those related to poorly textured objects, which are hardly dealt with by current pipelines based on local invariant features. The impact on the overall algorithm of different numerical wave simulation schemes and their parameters is discussed, and a pyramidal approximation to speed up the simulation is proposed and validated. Experiments on publicly available datasets show that the proposed algorithm offers state-of-the-art repeatability on a broad set of different images while detecting regions that can be distinctively described and robustly matched.
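    To make the idea concrete, here is a toy sketch of the approach (the leapfrog finite-difference scheme, step counts, and threshold are assumptions, not the authors' exact numerical scheme): evolve the image under the 2D wave equation and keep pixels where the evolving intensity attains strong spatio-temporal extrema.

```python
import numpy as np
from scipy.ndimage import laplace, maximum_filter

def wave_keypoints(img, steps=20, c=1.0, dt=0.1, nms_size=5):
    """Toy wave-based keypoint detector: evolve the image under the 2D
    wave equation with a leapfrog finite-difference step, accumulate the
    strongest per-pixel response over time, then apply spatial
    non-maximum suppression."""
    u_prev = img.astype(float)
    u = u_prev.copy()                       # zero initial velocity
    response = np.zeros_like(u)
    for _ in range(steps):
        u_next = 2 * u - u_prev + (c * dt) ** 2 * laplace(u)
        u_prev, u = u, u_next
        # Track the strongest response seen at each pixel over time.
        response = np.maximum(response, np.abs(u))
    # Spatial non-maximum suppression plus a crude strength threshold.
    peaks = (response == maximum_filter(response, size=nms_size))
    ys, xs = np.nonzero(peaks & (response > response.mean()))
    return np.stack([xs, ys], axis=1)
```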

    Points Descriptor in Pattern Recognition: A New Approach

    We present in this paper a new approach. First, we extract descriptor points, which are used in pattern recognition, particularly in a corner detection algorithm. Sample images are each tuned by a scale factor, corners are collected, and descriptor key points are gathered at these collected corners; a Hough Transform then uses the collected descriptors for classification, assigning each image point to its equivalence class. Experimentally, using MATLAB, we show high recognition accuracy on the selected object samples.
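    A rough sketch of such a pipeline (the corner detector, binary descriptor, and voting rule below are assumptions standing in for the paper's unspecified details): detect corners, extract descriptors at them, and let descriptors vote for the best-matching class prototype in a Hough-style accumulator.

```python
import numpy as np
from skimage.feature import corner_harris, corner_peaks, BRIEF

def corner_descriptors(img):
    """Detect corners, then extract binary BRIEF descriptors at the
    corner locations (detector/descriptor choices are assumptions)."""
    corners = corner_peaks(corner_harris(img), min_distance=5)
    extractor = BRIEF()
    extractor.extract(img, corners)
    return corners[extractor.mask], extractor.descriptors

def classify_by_voting(descs, class_prototypes):
    """Hough-style accumulator: each descriptor votes for the class whose
    prototype it matches best (Hamming distance); most votes wins."""
    votes = np.zeros(len(class_prototypes), dtype=int)
    for d in descs:
        dists = [np.count_nonzero(d != p) for p in class_prototypes]
        votes[int(np.argmin(dists))] += 1
    return int(np.argmax(votes)), votes
```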

    Pileup Per Particle Identification

    We propose a new method for pileup mitigation by implementing "pileup per particle identification" (PUPPI). For each particle we first define a local shape α which probes the collinear versus soft diffuse structure in the neighborhood of the particle. The former is indicative of particles originating from the hard scatter and the latter of particles originating from pileup interactions. The distribution of α for charged pileup, assumed as a proxy for all pileup, is used on an event-by-event basis to calculate a weight for each particle. The weights describe the degree to which particles are pileup-like and are used to rescale their four-momenta, superseding the need for jet-based corrections. Furthermore, the algorithm flexibly allows combination with other, possibly experimental, probabilistic information associated with particles such as vertexing and timing performance. We demonstrate the algorithm improves over existing methods by looking at jet p_T and jet mass. We also find an improvement on non-jet quantities like missing transverse energy.
    Comment: v2 - 23 pages, 10 figures; update to JHEP version, minor revisions throughout, results unchanged.
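    A hedged sketch of a PUPPI-style weighting (the precise definition of α and the map from α to weights here are assumptions; the paper's exact formulas may differ): compute α from the p_T-weighted activity in a cone around each particle, take the median and RMS of α over charged pileup as the per-event reference, and convert each particle's signed deviation into a weight in [0, 1].

```python
import numpy as np
from scipy.stats import chi2

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance in the (eta, phi) plane with phi wrap-around."""
    dphi = (phi1 - phi2 + np.pi) % (2 * np.pi) - np.pi
    return np.hypot(eta1 - eta2, dphi)

def puppi_weights(pt, eta, phi, is_charged_pu, r0=0.3):
    """PUPPI-style per-particle weights (alpha definition assumed)."""
    n = len(pt)
    alpha = np.full(n, -np.inf)
    for i in range(n):
        dr = delta_r(eta[i], phi[i], eta, phi)
        mask = (dr < r0) & (dr > 0)          # neighbors inside the cone
        if mask.any():
            alpha[i] = np.log(np.sum((pt[mask] / dr[mask]) ** 2))
    # Per-event reference: median and RMS of alpha over charged pileup.
    a_pu = alpha[is_charged_pu & np.isfinite(alpha)]
    med, rms = np.median(a_pu), np.std(a_pu)
    # Signed chi-square: only particles above the pileup median get w -> 1.
    signed = np.sign(alpha - med) * ((alpha - med) / (rms + 1e-9)) ** 2
    w = chi2.cdf(np.clip(signed, 0, None), df=1)
    w[~np.isfinite(alpha)] = 0.0
    return w   # rescale each particle's four-momentum by its weight
```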

    Minimum length effects in black hole physics

    We review the main consequences of the possible existence of a minimum measurable length, of the order of the Planck scale, on quantum effects occurring in black hole physics. In particular, we focus on the ensuing minimum mass for black holes and how modified dispersion relations affect the Hawking decay, both in four space-time dimensions and in models with extra spatial dimensions. In the latter case, we briefly discuss possible phenomenological signatures.
    Comment: 29 pages, 12 figures. To be published in "Quantum Aspects of Black Holes", ed. X. Calmet (Springer, 2014).
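    As a standard illustration of how a minimum length enters (a textbook generalized-uncertainty-principle form with an order-one parameter β; conventions differ across the literature, and this is not necessarily the review's parametrization):

```latex
% GUP with a Planck-scale correction; \beta is dimensionless, O(1).
\Delta x\,\Delta p \;\gtrsim\; \frac{\hbar}{2}\left(1 + \beta\,\ell_{\mathrm{P}}^{2}\,\frac{\Delta p^{2}}{\hbar^{2}}\right)
\;\;\Longrightarrow\;\;
\Delta x_{\min} \sim \sqrt{\beta}\,\ell_{\mathrm{P}} .

% Identifying \Delta x with the horizon size and the Hawking temperature
% with the energy of emitted quanta gives a GUP-corrected temperature that
% becomes complex (evaporation stops) below M_{\min} \sim \sqrt{\beta}\,M_{\mathrm{P}}:
T_{\mathrm{GUP}} \;=\; \frac{M c^{2}}{4\pi k_{\mathrm{B}}\,\beta}
\left(1 - \sqrt{1 - \frac{\beta\,M_{\mathrm{P}}^{2}}{M^{2}}}\right),
\qquad
T_{\mathrm{GUP}} \;\xrightarrow{\;\beta \to 0\;}\; \frac{\hbar c^{3}}{8\pi G k_{\mathrm{B}} M}.
```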

    Towards visualization and searching: a dual-purpose video coding approach

    In modern video applications, the role of the decoded video is much more than filling a screen for visualization. To offer powerful video-enabled applications, it is increasingly critical not only to visualize the decoded video but also to provide efficient searching capabilities for similar content. Video surveillance and personal communication applications are critical examples of these dual visualization and searching requirements. However, current video coding solutions are strongly biased towards the visualization needs. In this context, the goal of this work is to propose a dual-purpose video coding solution targeting both visualization and searching needs by adopting a hybrid coding framework where the usual pixel-based coding approach is combined with a novel feature-based coding approach. In this novel dual-purpose video coding solution, some frames are coded using a set of keypoint matches, which not only allow decoding for visualization, but also provide the decoder valuable feature-related information, extracted at the encoder from the original frames, instrumental for efficient searching. The proposed solution is based on a flexible joint Lagrangian optimization framework where pixel-based and feature-based processing are combined to find the most appropriate trade-off between the visualization and searching performances. Extensive experimental results for the assessment of the proposed dual-purpose video coding solution under meaningful test conditions are presented. The results show the flexibility of the proposed coding solution to achieve different optimization trade-offs, notably competitive performance regarding the state-of-the-art HEVC standard both in terms of visualization and searching performance.
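    The mode decision can be pictured as a joint Lagrangian cost (the cost shape and parameter names below are assumptions; the paper's actual optimization is more elaborate): for each frame or block, choose the coding mode minimizing visualization distortion plus a weighted searching-distortion term, traded off against rate.

```python
def choose_coding_mode(modes, lam, gamma):
    """Pick the coding mode minimizing a joint Lagrangian cost:
    visualization distortion + gamma * searching distortion + lam * rate.
    lam steers the rate-distortion trade-off; gamma steers the
    visualization-vs-searching trade-off."""
    def cost(m):
        return m["d_visual"] + gamma * m["d_search"] + lam * m["rate"]
    return min(modes, key=cost)

# Hypothetical usage: compare a pixel-based mode against a
# keypoint-match-based mode for one frame (numbers are illustrative).
modes = [
    {"name": "pixel",   "d_visual": 2.1, "d_search": 5.0, "rate": 1200},
    {"name": "feature", "d_visual": 4.0, "d_search": 1.2, "rate": 400},
]
best = choose_coding_mode(modes, lam=0.01, gamma=1.5)
print(best["name"])
```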

    Using retinex for point selection in 3D shape registration

    Inspired by retinex theory, we propose a novel method for selecting key points from a depth map of a 3D freeform shape; we also use these key points as a basis for shape registration. To find key points, first, depths are transformed using the Hotelling method and normalized to reduce their dependence on a particular viewpoint. Adaptive smoothing is then applied using weights which decrease with spatial gradient and local inhomogeneity; this preserves local features such as edges and corners while ensuring smoothed depths are not reduced. Key points are those with locally maximal depths, faithfully capturing shape. We show how such key points can be used in an efficient registration process, using two state-of-the-art iterative closest point variants. A comparative study with leading alternatives, using real range images, shows that our approach provides informative, expressive, and repeatable points leading to the most accurate registration results. © 2014 Elsevier Ltd.
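    A loose sketch of the selection stage (the normalization, weighting functions, and filter sizes are assumptions; the paper uses the Hotelling transform and a specific inhomogeneity measure): normalize depths, smooth adaptively with weights that decay with gradient and local variance while never reducing depths, then keep locally maximal depths as key points.

```python
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude, maximum_filter, uniform_filter

def depth_keypoints(depth, iters=10, nms_size=7):
    """Select key points from a depth map: normalize, smooth adaptively
    (weights shrink where gradient/local variance are large), never reduce
    depths, then keep locally maximal depths."""
    d = depth.astype(float)
    d = (d - d.mean()) / (d.std() + 1e-9)    # crude viewpoint normalization
    for _ in range(iters):
        grad = gaussian_gradient_magnitude(d, sigma=1.0)
        local_var = uniform_filter(d**2, 5) - uniform_filter(d, 5) ** 2
        # Weights decay with gradient and inhomogeneity, preserving
        # edges and corners.
        w = np.exp(-grad) * np.exp(-np.clip(local_var, 0, None))
        blended = w * uniform_filter(d, 3) + (1 - w) * d
        d = np.maximum(d, blended)           # smoothed depths never reduced
    peaks = (d == maximum_filter(d, size=nms_size))
    ys, xs = np.nonzero(peaks)
    return np.stack([xs, ys], axis=1)
```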

    Towards object-based image editing
