
    Dense 3D Face Correspondence

    We present an algorithm that automatically establishes dense correspondences between a large number of 3D faces. Starting from automatically detected sparse correspondences on the outer boundary of 3D faces, the algorithm triangulates existing correspondences and expands them iteratively by matching points of distinctive surface curvature along the triangle edges. After exhausting keypoint matches, further correspondences are established by generating evenly distributed points within triangles by evolving level set geodesic curves from the centroids of large triangles. A deformable model (K3DM) is constructed from the densely corresponded faces, and an algorithm is proposed for morphing the K3DM to fit unseen faces. This algorithm alternates between rigid alignment of an unseen face and regularized morphing of the deformable model. We have extensively evaluated the proposed algorithms on synthetic data and on real 3D faces from the FRGCv2, Bosphorus, BU3DFE and UND Ear databases using quantitative and qualitative benchmarks. Our algorithm achieved dense correspondences with a mean localisation error of 1.28mm on synthetic faces and detected 1414 anthropometric landmarks on unseen real faces from the FRGCv2 database with 3mm precision. Furthermore, our deformable model fitting algorithm achieved 98.5% face recognition accuracy on the FRGCv2 database and 98.6% on the Bosphorus database. Our dense model is also able to generalize to unseen datasets.
    Comment: 24 pages, 12 figures, 6 tables and 3 algorithms
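    The fitting loop the abstract describes, alternating rigid alignment with regularized morphing, can be sketched in miniature. The code below is illustrative only, not the paper's K3DM implementation: it uses a translation-only rigid step and a single deformation mode fitted by ridge-regularized least squares, where a real statistical model would use full Procrustes alignment and many modes.

    ```python
    # Minimal sketch of an alternating fit: rigid alignment (translation-only
    # here, for brevity) followed by regularized morphing of a one-mode
    # deformable model. All names and data shapes are illustrative.

    def centroid(pts):
        n = len(pts)
        return [sum(p[i] for p in pts) / n for i in range(3)]

    def translate(pts, t):
        return [[p[i] + t[i] for i in range(3)] for p in pts]

    def fit(model_mean, mode, target, lam=0.1, iters=5):
        alpha = 0.0  # deformation coefficient of the single mode
        for _ in range(iters):
            # current model instance: mean shape plus alpha times the mode
            inst = [[model_mean[j][i] + alpha * mode[j][i] for i in range(3)]
                    for j in range(len(model_mean))]
            # rigid step: translate the target onto the model instance
            ci, ct = centroid(inst), centroid(target)
            target = translate(target, [ci[k] - ct[k] for k in range(3)])
            # morphing step: ridge-regularized 1-parameter least squares
            num = sum(mode[j][i] * (target[j][i] - model_mean[j][i])
                      for j in range(len(mode)) for i in range(3))
            den = sum(mode[j][i] ** 2
                      for j in range(len(mode)) for i in range(3)) + lam
            alpha = num / den
        return alpha, target
    ```

    On a toy target built as mean + 2·mode plus a translation, the loop recovers a coefficient close to 2 (slightly shrunk toward zero by the regularizer, which is the intended effect of regularized morphing).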

    Constrained correlation functions from the Millennium Simulation

    Context. In previous work, we developed a quasi-Gaussian approximation for the likelihood of correlation functions which, in contrast to the usual Gaussian approach, incorporates fundamental mathematical constraints on correlation functions. The analytical computation of these constraints is only feasible for correlation functions of one-dimensional random fields. Aims. In this work, we aim to obtain corresponding constraints for higher-dimensional random fields and to test them in a more realistic context. Methods. We develop numerical methods to compute the constraints on correlation functions which are also applicable to two- and three-dimensional fields. To test the accuracy of the numerically obtained constraints, we compare them to the analytical results for the one-dimensional case. Finally, we compute correlation functions from the halo catalog of the Millennium Simulation, check whether they obey the constraints, and examine the performance of the transformation used in the construction of the quasi-Gaussian likelihood. Results. We find that our numerical methods of computing the constraints are robust and that the correlation functions measured from the Millennium Simulation obey them. Although the measured correlation functions lie well inside the allowed region of parameter space, i.e., far away from the boundaries of the allowed volume defined by the constraints, we find strong indications that the quasi-Gaussian likelihood yields a substantially more accurate description than the Gaussian one.
    Comment: 11 pages, 13 figures, updated to match version accepted by A&
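    The kind of constraint the abstract refers to can be illustrated numerically: a correlation sequence xi(0)=1, xi(1), xi(2), ... of a 1D random field is admissible only if the Toeplitz correlation matrix it generates is positive (semi-)definite. The sketch below, which is a generic check and not the paper's method, tests Sylvester's criterion on the leading principal minors with a small tolerance.

    ```python
    # Admissibility check for a measured 1D correlation sequence: build the
    # Toeplitz correlation matrix and require all leading principal minors to
    # be non-negative (Sylvester's criterion certifies positive definiteness
    # for strictly positive minors; a tolerance makes it a practical check).

    def det(m):
        # Laplace expansion; fine for the small matrices used here.
        n = len(m)
        if n == 1:
            return m[0][0]
        total = 0.0
        for c in range(n):
            minor = [row[:c] + row[c + 1:] for row in m[1:]]
            total += ((-1) ** c) * m[0][c] * det(minor)
        return total

    def admissible(xi):
        """xi[k] is the correlation at lag k, with xi[0] == 1."""
        n = len(xi)
        toeplitz = [[xi[abs(i - j)] for j in range(n)] for i in range(n)]
        return all(det([row[:k] for row in toeplitz[:k]]) >= -1e-12
                   for k in range(1, n + 1))
    ```

    For three lags this reproduces the known analytic bound xi(2) >= 2*xi(1)**2 - 1: with xi(1)=0.5 the boundary sits at xi(2)=-0.5, so -0.4 is allowed and -0.6 is not.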

    COOPER-framework: A Unified Standard Process for Non-parametric Projects

    Practitioners assess the performance of entities in increasingly large and complicated datasets. Even if non-parametric models such as Data Envelopment Analysis were ever considered simple push-button technologies, this is impossible when many variables are available or when data have to be compiled from several sources. This paper introduces the ‘COOPER-framework’, a comprehensive model for carrying out non-parametric projects. The framework consists of six interrelated phases: Concepts and objectives, On structuring data, Operational models, Performance comparison model, Evaluation, and Result and deployment. Each phase describes necessary steps a researcher should examine for a well-defined and repeatable analysis. The COOPER-framework provides the novice analyst with guidance, structure and advice for a sound non-parametric analysis, while the more experienced analyst benefits from a checklist that ensures important issues are not forgotten. In addition, the use of a standardized framework makes non-parametric assessments more reliable, more repeatable, more manageable, faster and less costly.
    Keywords: DEA, non-parametric efficiency, unified standard process, COOPER-framework
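    Since the six phase names spell out the framework's acronym, a trivial checklist structure makes the mapping explicit. The phase names below come from the abstract; the idea of encoding them as sign-off items is merely an illustration, not part of the paper.

    ```python
    # The six COOPER phases as an ordered checklist; a project walks them in
    # order, and the initial letters spell the framework's name.
    COOPER_PHASES = [
        ("C", "Concepts and objectives"),
        ("O", "On structuring data"),
        ("O", "Operational models"),
        ("P", "Performance comparison model"),
        ("E", "Evaluation"),
        ("R", "Result and deployment"),
    ]

    def acronym(phases):
        return "".join(letter for letter, _ in phases)
    ```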

    A storage and access architecture for efficient query processing in spatial database systems

    Due to the high complexity of objects and queries, and also due to extremely large data volumes, geographic database systems impose stringent requirements on their storage and access architecture with respect to efficient query processing. Performance-improving concepts such as spatial storage and access structures, approximations, object decompositions and multi-phase query processing have been suggested and analyzed as single building blocks. In this paper, we describe a storage and access architecture which is composed of the above building blocks in a modular fashion. Additionally, we incorporate into our architecture a new ingredient, the scene organization, for efficiently supporting set-oriented access in large-area region queries. An experimental performance comparison demonstrates that the concept of scene organization leads to considerable performance improvements for large-area region queries, by a factor of up to 150.
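    The intuition behind set-oriented scene access can be sketched with a simple grid store: objects that are spatially close are kept together in "scene" buckets, so a large-area region query fetches whole buckets in one pass instead of probing objects individually. This is a toy analogue under assumed names and a fixed grid granularity, not the paper's architecture.

    ```python
    # Toy scene organization: bucket objects into grid cells ("scenes") and
    # answer a window query by collecting every scene the window overlaps.
    from collections import defaultdict

    class SceneStore:
        def __init__(self, cell=10.0):
            self.cell = cell
            self.scenes = defaultdict(list)  # (ix, iy) -> objects in scene

        def insert(self, obj, x, y):
            key = (int(x // self.cell), int(y // self.cell))
            self.scenes[key].append(obj)

        def region_query(self, x0, y0, x1, y1):
            # set-oriented access: fetch whole overlapping scenes at once
            hits = []
            for ix in range(int(x0 // self.cell), int(x1 // self.cell) + 1):
                for iy in range(int(y0 // self.cell), int(y1 // self.cell) + 1):
                    hits.extend(self.scenes.get((ix, iy), []))
            return hits
    ```

    The query returns candidates from overlapping scenes; in a multi-phase architecture like the one the abstract describes, a subsequent refinement step would filter these candidates against exact object geometry.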

    Novel applications of spectroscopy to characterize soil variation

    This thesis embodies a collection of novel studies on the use of multivariate information provided by spectroscopic tools, such as Visible and Near-Infrared (Vis-NIR) spectrometers, to represent soil variation. The general structure follows increasing levels of soil complexity, starting from the characterization of soil aggregates and the identification of soil colloids, moving to the recognition of soil horizons and their boundaries in the soil profile, and finally to the depiction of the distribution of soil types in the landscape. Briefly, Chapter 1 is written as a rationale, emphasising the need for up-to-date methodologies for making effective use of the increasing amount of soil information produced worldwide. Chapter 2 presents the development of a new methodology for the measurement of soil aggregate stability and the further use of spectroscopic information to predict its values. Chapter 3 gives examples of the use of Vis-NIR spectral libraries for the prediction of soil properties. Chapter 4 presents the development of a new method for the identification of soil horizons and their boundaries using fuzzy clustering of Vis-NIR spectra. Chapter 5 expands into a new way of measuring the diversity of soils in the landscape, introducing two new indices for measuring soil diversity or “Functional Pedodiversity”, inspired by previous studies in Functional Ecology. Finally, Chapter 6 discusses the main findings of this thesis and looks ahead to issues, challenges and opportunities in the area of spectroscopy and multivariate soil data analysis.
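    The horizon-boundary idea in Chapter 4 rests on fuzzy clustering: each depth in the profile gets graded cluster memberships rather than a hard class, and horizon boundaries appear where memberships cross over. The sketch below is generic two-cluster fuzzy c-means on a single spectral summary value per depth, with assumed toy data; it is not the thesis code, which clusters full Vis-NIR spectra.

    ```python
    # Two-cluster fuzzy c-means on a 1D feature (e.g. one spectral index per
    # depth). Returns cluster centers and per-sample membership grades.
    def fuzzy_cmeans(xs, m=2.0, iters=50):
        centers = [min(xs), max(xs)]  # crude two-cluster initialization
        u = []
        for _ in range(iters):
            u = []
            for x in xs:
                d = [abs(x - c) + 1e-9 for c in centers]  # guard zero distance
                u.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                                    for j in range(2))
                          for i in range(2)])
            # update centers as membership-weighted means
            centers = [sum(u[n][i] ** m * xs[n] for n in range(len(xs))) /
                       sum(u[n][i] ** m for n in range(len(xs)))
                       for i in range(2)]
        return centers, u
    ```

    On a profile whose upper depths read near 0.11 and lower depths near 0.50, the memberships flip between the two groups, which is exactly the crossover one would read as a horizon boundary.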