
    Shape measure for identifying perceptually informative parts of 3D objects

    We propose a mathematical approach for quantifying the shape complexity of 3D surfaces based on perceptual principles of visual saliency. Our curvature variation measure (CVM), a 3D feature, combines surface curvature and information theory by leveraging bandwidth-optimized kernel density estimators. Using a part-decomposition algorithm for digitized 3D objects represented as triangle meshes, we apply our shape measure to transform the low-level mesh representation into a perceptually informative form. We further analyze the effects of noise, sensitivity to digitization, occlusions, and descriptiveness to demonstrate our shape measure on laser-scanned real-world 3D objects.
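    The abstract does not give the exact formulation, but the combination it describes (curvature statistics scored with a bandwidth-optimized kernel density estimator and an information-theoretic measure) can be illustrated with a minimal sketch. Everything below is an assumption for illustration: per-vertex curvatures are taken as given, SciPy's gaussian_kde stands in for the bandwidth-optimized estimator, and differential entropy stands in for the paper's CVM.

```python
# A minimal, hypothetical sketch (not the authors' exact CVM): score the shape
# complexity of one mesh part from its per-vertex curvature samples by estimating
# their distribution with a Gaussian KDE (bandwidth chosen automatically by
# Scott's rule) and taking the differential entropy of that density.
# High entropy ~ widely varying curvature ~ perceptually salient part.
import numpy as np
from scipy.stats import gaussian_kde

def curvature_variation_score(curvatures: np.ndarray, n_grid: int = 512) -> float:
    """Entropy of the estimated curvature distribution for one mesh part."""
    kde = gaussian_kde(curvatures)                 # bandwidth via Scott's rule
    lo, hi = curvatures.min(), curvatures.max()
    pad = 0.1 * (hi - lo + 1e-12)
    grid = np.linspace(lo - pad, hi + pad, n_grid)
    dx = grid[1] - grid[0]
    p = kde(grid)
    p = p / (p.sum() * dx)                         # renormalise on the finite grid
    return float(-(p * np.log(p + 1e-12)).sum() * dx)

# Synthetic usage: a part mixing flat and highly curved regions should score
# higher than a nearly uniform-curvature part.
rng = np.random.default_rng(0)
flat_part  = rng.normal(0.0, 0.01, 2000)
mixed_part = np.concatenate([rng.normal(0.0, 0.01, 1000), rng.normal(0.5, 0.05, 1000)])
print(curvature_variation_score(flat_part), curvature_variation_score(mixed_part))
```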

    Integration of Multispectral Face Recognition and Multi-PTZ Camera Automated Surveillance for Security Applications

    Due to increasing security concerns, a complete security system should consist of two major components: a computer-based face-recognition system and a real-time automated video surveillance system. A computer-based face-recognition system can be used in gate access control for identity authentication. In recent studies, multispectral imaging and fusion of multispectral narrow-band images in the visible spectrum have been employed and shown to enhance recognition performance over conventional broad-band images, especially when the illumination changes. We therefore present an automated method that specifies the optimal spectral ranges under a given illumination. Experimental results verify the consistent performance of our algorithm via the observation that an identical set of spectral band images is selected under all tested conditions. This finding can be applied in practice to design customized sensors for given illuminations, improving face-recognition performance over conventional broad-band images. In addition, once a person is authorized to enter a restricted area, we still need to continuously monitor his or her activities for the sake of security. Because pan-tilt-zoom (PTZ) cameras are capable of covering a panoramic area and maintaining high-resolution imagery for real-time behavior understanding, research on automated surveillance systems with multiple PTZ cameras has become increasingly important. Most existing algorithms require prior knowledge of the intrinsic parameters of the PTZ cameras to infer the relative positioning and orientation among multiple PTZ cameras. To overcome this limitation, we propose a novel mapping algorithm that derives the relative positioning and orientation between two PTZ cameras based on a unified polynomial model. This reduces the dependence on knowledge of the cameras' intrinsic parameters and relative positions. Experimental results demonstrate that our proposed algorithm offers substantially reduced computational complexity and improved flexibility at the cost of slightly decreased pixel accuracy compared with Chen and Wang's method [18].
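    As a rough illustration of the kind of intrinsic-parameter-free mapping described above (the abstract does not give the exact unified polynomial model), the sketch below fits a low-degree polynomial from camera A's (pan, tilt) to camera B's (pan, tilt) using only observed correspondence pairs. The degree, feature set, and function names are assumptions, not the paper's method.

```python
# Rough, hypothetical sketch of an intrinsic-parameter-free pan/tilt mapping
# between two PTZ cameras: fit a low-degree polynomial from camera A's (pan, tilt)
# to camera B's (pan, tilt) using only correspondence pairs in which both cameras
# are aimed at the same target. Degree and feature set are illustrative choices.
import numpy as np

def poly_features(pan: np.ndarray, tilt: np.ndarray) -> np.ndarray:
    """Second-degree polynomial features of (pan, tilt)."""
    return np.column_stack([np.ones_like(pan), pan, tilt,
                            pan * tilt, pan ** 2, tilt ** 2])

def fit_ptz_mapping(pt_a: np.ndarray, pt_b: np.ndarray) -> np.ndarray:
    """Least-squares fit; rows of pt_a and pt_b are corresponding (pan, tilt) poses."""
    X = poly_features(pt_a[:, 0], pt_a[:, 1])
    coeffs, *_ = np.linalg.lstsq(X, pt_b, rcond=None)   # shape (6, 2)
    return coeffs

def map_ptz(coeffs: np.ndarray, pan_a: float, tilt_a: float) -> np.ndarray:
    """Predict camera B's (pan, tilt) for camera A's current pose."""
    x = poly_features(np.array([pan_a]), np.array([tilt_a]))
    return (x @ coeffs)[0]

# Usage with synthetic correspondences (in practice these would be measured pairs).
rng = np.random.default_rng(1)
pt_a = rng.uniform(-30.0, 30.0, size=(50, 2))
pt_b = pt_a @ np.array([[0.9, 0.1], [-0.1, 0.95]]) + np.array([5.0, -2.0])
print(map_ptz(fit_ptz_mapping(pt_a, pt_b), 10.0, -5.0))
```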

    Multifocus image fusion by establishing focal connectivity

    Multifocus fusion is the process of unifying focal information from a set of input images acquired with limited depth of field. In this effort, we present a general-purpose multifocus fusion algorithm that can be applied to varied applications ranging from microscopic to long-range scenes. The main contribution of this paper is the segmentation of the input images into partitions based on focal connectivity. Focal connectivity is established by isolating regions in an input image that fall on the same focal plane. Our method relies on focal connectivity rather than directly on physical properties such as edges for segmentation. It computes sharpness maps for the input images, which are used to isolate image partitions and attribute them to input images; the partitions are then mosaiced seamlessly to form the fused image. Illustrative examples of multifocus fusion using our method are shown, comparisons against existing methods are made, and the results are discussed. Index Terms: depth of focus, focal connectivity, image fusion, image partitioning, multifocus fusion
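    A much-simplified baseline helps show where the sharpness maps fit in, though it is not the paper's method: it decides per pixel instead of per focally connected partition, which is exactly the kind of speckle-prone decision the partition-based approach avoids. The sharpness measure (local energy of the Laplacian) and the function names are illustrative assumptions.

```python
# Much-simplified baseline, not the paper's focal-connectivity partitioning:
# each output pixel is taken from the input image that is locally sharpest
# according to a per-image sharpness map.
import numpy as np
from scipy import ndimage

def sharpness_map(img: np.ndarray, win: int = 9) -> np.ndarray:
    """Local mean of the squared Laplacian response as a per-pixel sharpness score."""
    lap = ndimage.laplace(img.astype(np.float64))
    return ndimage.uniform_filter(lap ** 2, size=win)

def fuse_multifocus(images: list[np.ndarray]) -> np.ndarray:
    """Per-pixel selection of the sharpest input (grayscale images of equal size)."""
    stack = np.stack(images, axis=0)
    sharp = np.stack([sharpness_map(im) for im in images], axis=0)
    best = np.argmax(sharp, axis=0)                      # index of the sharpest input
    return np.take_along_axis(stack, best[None, ...], axis=0)[0]

# Usage: fused = fuse_multifocus([near_focus_img, far_focus_img])
```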

    Tensor voting for robust color edge detection

    The final publication is available at Springer via http://dx.doi.org/10.1007/978-94-007-7584-8_9
    This chapter proposes two robust color edge detection methods based on tensor voting. The first method is a direct adaptation of classical tensor voting to color images, where tensors are initialized with either the gradient or the local color structure tensor. The second method is based on an extension of tensor voting in which the encoding and voting processes are specifically tailored to robust edge detection in color images. In this case, three tensors are used to encode local CIELAB color channels and edginess, while the voting process propagates both color and edginess by applying perception-based rules. Unlike classical tensor voting, the second method considers the context in the voting process. Recall, discriminability, precision, false-alarm rejection, and robustness measurements with respect to three different ground truths have been used to compare the proposed methods with the state of the art. Experimental results show that the proposed methods are competitive, especially in robustness. Moreover, these experiments evidence the difficulty of proposing an edge detector with perfect performance with respect to all features and fields of application. This research has been supported by the Swedish Research Council under the project VR 2012-3512.
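    The sketch below covers only the tensor-initialization step that the first method starts from: the local color structure tensor summed over channels, with edginess taken as the difference of its eigenvalues (the stick saliency). The voting and propagation stages of tensor voting are not reproduced, and the names and parameters are assumptions.

```python
# Sketch of only the tensor-initialisation step (local colour structure tensor);
# the tensor-voting propagation stage is not implemented here.
import numpy as np
from scipy import ndimage

def color_structure_tensor_edges(img: np.ndarray, sigma: float = 1.5) -> np.ndarray:
    """img: H x W x C float image (e.g. RGB or CIELAB); returns per-pixel edginess."""
    Jxx = np.zeros(img.shape[:2])
    Jyy = np.zeros_like(Jxx)
    Jxy = np.zeros_like(Jxx)
    for c in range(img.shape[2]):                        # sum per-channel outer products
        gx = ndimage.sobel(img[..., c], axis=1)
        gy = ndimage.sobel(img[..., c], axis=0)
        Jxx += gx * gx
        Jyy += gy * gy
        Jxy += gx * gy
    # Smooth the tensor components before the eigen-analysis
    Jxx, Jyy, Jxy = (ndimage.gaussian_filter(J, sigma) for J in (Jxx, Jyy, Jxy))
    # Closed-form eigenvalues of the symmetric 2x2 tensor [[Jxx, Jxy], [Jxy, Jyy]]
    tr = Jxx + Jyy
    disc = np.sqrt((Jxx - Jyy) ** 2 + 4.0 * Jxy ** 2)
    lam1, lam2 = 0.5 * (tr + disc), 0.5 * (tr - disc)
    return lam1 - lam2                                   # stick saliency ~ edginess
```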

    Excitation Transfer Engineering in Ce-Doped Oxide Crystalline Scintillators by Codoping with Alkali-Earth Ions

    Time-resolved spectroscopic study of the photoluminescence response to femtosecond pulse excitation and of free-carrier absorption at different wavelengths, thermally stimulated luminescence measurements, and investigation of differential absorption are applied to amend the available data on excitation transfer in GAGG:Ce scintillators, and an electronic energy-level diagram for this single crystal is suggested to explain the influence of codoping with divalent Mg on luminescence kinetics and light yield. The conclusions are generalized by comparing the influence of aliovalent doping in garnets (GAGG:Ce) and oxyorthosilicates (LSO:Ce and YSO:Ce). In both cases, the codoping facilitates energy transfer to radiative Ce3+ centers, while the light yield is increased in the LYSO:Ce system but reduced in GAGG:Ce.
    This work has been supported by the European Social Fund Measure No. 09.3.3-LMT-K-712 activity "Improvement of Researchers' Qualification by Implementing the World-Class R&D Projects" and by grant #14.W03.31.0004 of the Russian Federation Government. The authors are grateful to the CERN Crystal Clear Collaboration and COST Action TD1401 "Fast Advanced Scintillator Timing (FAST)" for support of the collaboration. Additional support: Government Council on Grants, Russian Federation No. 09.3.3-LMT-K-712; CERN; European Cooperation in Science and Technology TD1401; the Institute of Solid State Physics, University of Latvia, as a Center of Excellence, has received funding from the European Union's Horizon 2020 Framework Programme H2020-WIDESPREAD-01-2016-2017-TeamingPhase2 under grant agreement No. 739508, project CAMART.

    Growth dynamics of deciduous species during their life period: A case study of urban green space in India

    It is evident that grass density (GD) and shoot growth rate (SGR) govern the differential settlement of substructure, groundwater recharge, and the stability of green infrastructure. GD and SGR are usually assumed to be constant during the entire life period of vegetation; however, the spatial and temporal dynamics of GD and SGR in urban green space have rarely been explored. The main objective of this study is to explore the spatial and temporal dynamics of GD and SGR in an urban space vegetated with deciduous species (mixed grass, i.e., Poaceae, and Bauhinia purpurea). Field monitoring was conducted in the urban green space for one year (i.e., the life period of the selected species), covering both the growth period and the gradual wilting period. Substantial spatial variation of GD was found during the first six months: GD away from the tree trunk was 1.02–56.3 times higher than that near the tree trunk. No spatial variation of GD was found during the following six months. Unlike GD, SGR was found to vary throughout the entire life period of the mixed grass. In addition, SGR away from the tree trunk was found to be 1.1–4.6 times higher than that near the tree trunk. No relationship was found between GD and rainfall depth, whereas SGR mainly depends on rainfall depth. The hypothesis of uniform GD and SGR during the life period of deciduous species was therefore not supported.

    A method bank for evaluating stereo vision techniques (Eine Methodenbank zur Evaluierung von Stereo-Vision-Verfahren)

    SIGLE record. Available from TIB Hannover: RN 2856(91-09) / FIZ - Fachinformationszentrum Karlsruhe / TIB - Technische Informationsbibliothek. Germany; in German.