
    Task-based Augmented Contour Trees with Fibonacci Heaps

    This paper presents a new algorithm for the fast, shared-memory, multi-core computation of augmented contour trees on triangulations. In contrast to most existing parallel algorithms, our technique computes augmented trees, enabling the full extent of contour-tree-based applications, including data segmentation. Our approach completely revisits the traditional, sequential contour tree algorithm to re-formulate all the steps of the computation as a set of independent local tasks. This includes a new computation procedure based on Fibonacci heaps for the join and split trees, the two intermediate data structures used to compute the contour tree, whose constructions are carried out concurrently and efficiently thanks to the dynamic scheduling of task parallelism. We also introduce a new parallel algorithm for the combination of these two trees into the output global contour tree. Overall, this results in superior time performance in practice, both in sequential and in parallel, thanks to the OpenMP task runtime. We report performance numbers that compare our approach to reference sequential and multi-threaded implementations for the computation of augmented merge and contour trees. These experiments demonstrate the run-time efficiency of our approach and its scalability on common workstations, and we demonstrate the utility of our approach in data segmentation applications.
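The join and split trees mentioned above are classically built by a sequential value-ordered sweep with a union-find structure; the paper replaces this step with a Fibonacci-heap-based, task-parallel procedure. As background, here is a minimal sketch of the classical sequential sweep for an augmented join tree (the graph representation and names are illustrative assumptions, not the paper's implementation):

```python
def join_tree(values, adjacency):
    """Sequential sweep computing augmented join-tree arcs.

    values: scalar value per vertex (list indexed by vertex id)
    adjacency: dict mapping each vertex to its neighbour vertices
    Returns a list of arcs (upper_node, lower_node).
    """
    order = sorted(adjacency, key=lambda v: values[v], reverse=True)
    parent = {}   # union-find parent pointers
    cur = {}      # component root -> most recently attached tree node
    arcs = []

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    for v in order:                      # sweep from highest to lowest value
        parent[v] = v
        cur[v] = v
        for u in adjacency[v]:
            if u in parent:              # neighbour already swept (higher value)
                ru, rv = find(u), find(v)
                if ru != rv:             # a component attaches to v here
                    arcs.append((cur[ru], v))
                    parent[ru] = rv
                    cur[rv] = v
    return arcs

# Path graph 0-1-2-3-4 with one interior maximum (vertex 3):
arcs = join_tree([1, 3, 2, 4, 0],
                 {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]})
```

Every swept vertex becomes a tree node, which is what makes the tree "augmented"; the split tree is obtained by the same sweep in increasing value order.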

    Proceedings of the 4th field robot event 2006, Stuttgart/Hohenheim, Germany, 23-24th June 2006

    Very extensive report on the 4th Field Robot Event, held on 23 and 24 June 2006 in Stuttgart/Hohenheim.

    A Study On The Effects Of Noise Level, Cleaning Method, And Vectorization Software On The Quality Of Vector Data.

    In this paper we study different factors that affect vector quality. Noise level, cleaning method, and vectorization software are three factors that may influence the resulting vector data. Real scanned images from the GREC'03 contest are used in the experiment. Three different levels of salt-and-pepper noise (5%, 10%, and 15%) are used. Noisy images are cleaned by six cleaning algorithms, and three different commercial raster-to-vector software packages are then used to vectorize the cleaned images. The Vector Recovery Index (VRI) is the performance evaluation criterion used in this study to judge the quality of the resulting vectors against their ground-truth data. Statistical analysis of the VRI values shows that the vectorization software has the biggest influence on the quality of the resulting vectors.
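The controlled noise factor in the study can be illustrated by simulating salt-and-pepper degradation of a binary raster: a fixed fraction of pixels is flipped at each level. This is a hedged sketch only; the function name and the flat 0/1 pixel representation are assumptions, not the GREC'03 degradation model:

```python
import random

def add_salt_and_pepper(pixels, noise_level, rng=None):
    """Flip a fraction `noise_level` of binary pixels (0 = white, 1 = black).

    pixels: flat list of 0/1 values; returns a new, noisy list.
    """
    rng = rng or random.Random(0)        # seeded for reproducibility
    noisy = list(pixels)
    n_flips = int(len(pixels) * noise_level)
    for i in rng.sample(range(len(pixels)), n_flips):
        noisy[i] = 1 - noisy[i]          # salt on black, pepper on white
    return noisy

clean = [0] * 200                        # an all-white toy "image"
noisy = add_salt_and_pepper(clean, 0.05) # 5% noise level, as in the study
```

Running the three levels (0.05, 0.10, 0.15) against the same clean raster gives inputs whose only varying factor is the noise level, mirroring the experimental design.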

    Automatic Classification of Human Epithelial Type 2 Cell Indirect Immunofluorescence Images using Cell Pyramid Matching

    This paper describes a novel system for automatic classification of images obtained from Anti-Nuclear Antibody (ANA) pathology tests on Human Epithelial type 2 (HEp-2) cells using the Indirect Immunofluorescence (IIF) protocol. The IIF protocol on HEp-2 cells has been the hallmark method to identify the presence of ANAs, due to its high sensitivity and the large range of antigens that can be detected. However, it suffers from numerous shortcomings, such as being subjective as well as time and labour intensive. Computer Aided Diagnostic (CAD) systems have been developed to address these problems; they automatically classify a HEp-2 cell image into one of its known patterns (e.g., speckled, homogeneous). Most of the existing CAD systems use handpicked features to represent a HEp-2 cell image, which may only work in limited scenarios. We propose a novel automatic cell image classification method termed Cell Pyramid Matching (CPM), which comprises regional histograms of visual words coupled with the Multiple Kernel Learning framework. We present a study of several variations of generating histograms and show the efficacy of the system on two publicly available datasets: the ICPR HEp-2 cell classification contest dataset and the SNPHEp-2 dataset. (arXiv admin note: substantial text overlap with arXiv:1304.126)
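The idea of regional histograms of visual words can be pictured with a generic spatial-pyramid sketch: each grid cell at each pyramid level accumulates a bag-of-visual-words histogram over the descriptors falling inside it. This is illustrative background only; CPM's actual region layout and its Multiple Kernel Learning combination differ, and all names here are assumptions:

```python
def pyramid_histograms(words, positions, vocab_size, levels=1):
    """Per-region visual-word histograms over a spatial pyramid.

    words:     visual-word index for each local descriptor
    positions: (x, y) coordinates in [0, 1) for each descriptor
    Returns one histogram (list of counts) per region, coarse to fine.
    """
    hists = []
    for level in range(levels + 1):
        cells = 2 ** level                  # cells x cells grid at this level
        for cy in range(cells):
            for cx in range(cells):
                h = [0] * vocab_size
                for w, (x, y) in zip(words, positions):
                    if int(x * cells) == cx and int(y * cells) == cy:
                        h[w] += 1           # descriptor falls in this cell
                hists.append(h)
    return hists

# Three descriptors, two visual words, one pyramid level:
hists = pyramid_histograms([0, 1, 1],
                           [(0.1, 0.1), (0.9, 0.9), (0.6, 0.2)],
                           vocab_size=2, levels=1)
```

The concatenation of these per-region histograms is the image representation; a kernel-learning stage can then weight regions and histogram variants against each other.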

    A SPATIAL DATABASE MODEL FOR MOBILITY MANAGEMENT

    Abstract. In urban and metropolitan contexts, Traffic Operations Centres (TOCs) use technologies such as Geographic Information Systems (GIS) and Intelligent Transport Systems (ITS) to tackle urban mobility issues. Usually in TOCs, various isolated systems are maintained in parallel (stored in different databases), and data comes from different sources: a challenge in transport management is to transfer disparate data into a unified data management system that preserves access to legacy data and allows multi-thematic analysis. This integration between systems is important for wise policy decisions. This study aims to design a comprehensive and general spatial data model that allows the integration and visualization of traffic components and measures. The activity focuses on the case study of the 5T Agency in Turin, a TOC that manages traffic regulation, public transit fleets and information to users in the metropolitan area of Turin and the Piedmont Region. The idea is not to replace the existing, efficiently implemented systems, but to build on top of them a GIS that spans the different software and DBMS platforms, demonstrating how a spatial and horizontal view of urban mobility issues can support policy and strategy decisions. The modelling activity draws on a review of transport standards and results in a general database schema, which can be reused by other TOCs in their activities, helping the integration and coordination between different TOCs. The final output of the research is an ArcGIS geodatabase, which enables the customised representation of private traffic elements and measures.
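The kind of unified schema the abstract describes, where measures from different legacy systems reference shared network elements, can be sketched as two linked relational tables. This is a minimal, hypothetical sketch in SQLite; the table and column names (and the sample street) are invented for illustration, not drawn from the 5T or ArcGIS model:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE road_element (
    id INTEGER PRIMARY KEY,
    name TEXT,
    geometry_wkt TEXT            -- geometry kept as WKT for portability
);
CREATE TABLE traffic_measure (
    id INTEGER PRIMARY KEY,
    road_element_id INTEGER REFERENCES road_element(id),
    measured_at TEXT,            -- ISO 8601 timestamp
    flow_veh_per_h REAL,
    source_system TEXT           -- legacy system the record came from
);
""")
conn.execute("INSERT INTO road_element VALUES "
             "(1, 'Corso Francia', 'LINESTRING(0 0, 1 1)')")
conn.execute("INSERT INTO traffic_measure VALUES "
             "(1, 1, '2024-01-01T08:00:00', 1200.0, 'ITS-legacy')")

# Multi-thematic query joining measures back to the shared network layer:
rows = conn.execute("""
    SELECT r.name, m.flow_veh_per_h, m.source_system
    FROM traffic_measure m JOIN road_element r ON m.road_element_id = r.id
""").fetchall()
```

Tagging each record with its `source_system` is one way to preserve access to legacy provenance while still querying all systems through a single schema.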