172 research outputs found

    Heap-based Algorithms to Accelerate Fingerprint Matching on Parallel Platforms

    Fingerprints are currently the most widely used biometric trait for identifying individuals. The state-of-the-art algorithms in this area are very accurate, but when the database contains millions of identities, the algorithm must be accelerated. Among these algorithms, Minutia Cylinder-Code (MCC) stands out for its accuracy; however, its computational efficiency is low. In this work, we propose to accelerate the fingerprint matching process with MCC on two parallel platforms: (1) a multi-core server and (2) a Xeon Phi coprocessor. Our proposal uses heaps as an auxiliary structure to compute the global similarity of MCC. Because heap-based algorithms are exhaustive (all elements are accessed), we also explored an indexing algorithm to avoid comparing the query against every fingerprint in the database. Experimental results show a speed-up of up to 97.15x, which is competitive with other state-of-the-art algorithms on GPU and FPGA. To the best of our knowledge, this is the first work on fingerprint identification using a Xeon Phi coprocessor.
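    The core idea, reduced to a minimal sketch: each database fingerprint yields a set of local similarity scores against the query, and a bounded min-heap keeps only the best k of them for the global score. The helper names and the averaging rule below are illustrative assumptions, not the paper's exact MCC consolidation step.

```python
import heapq
from concurrent.futures import ThreadPoolExecutor

def global_similarity(local_scores, k):
    """Consolidate local MCC-style scores: average of the top-k values.

    A bounded min-heap of size k avoids sorting all scores; this is an
    illustrative stand-in for the paper's global-similarity computation.
    """
    heap = []
    for s in local_scores:
        if len(heap) < k:
            heapq.heappush(heap, s)
        elif s > heap[0]:
            heapq.heapreplace(heap, s)  # evict the current k-th best score
    return sum(heap) / len(heap) if heap else 0.0

def identify(query_scores_per_template, k=10, workers=8):
    """Exhaustively score every database template in parallel, rank candidates."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        scores = list(pool.map(lambda ls: global_similarity(ls, k),
                               query_scores_per_template))
    # template indices ordered by global similarity, best match first
    return sorted(range(len(scores)), key=scores.__getitem__, reverse=True)
```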

    HeavyKeeper: An Accurate Algorithm for Finding Top-k Elephant Flows


    Exact string matching algorithms: survey, issues, and future research directions

    String matching has been an extensively studied research domain over the past two decades due to its applications in text, image, signal, and speech processing. As a result, a large number of algorithms exist, and choosing an appropriate string matching algorithm for current applications and their challenges is difficult. Understanding the different string matching approaches (such as exact and approximate string matching algorithms), integrating several algorithms, and modifying algorithms to address related issues are also difficult. This paper presents a survey of single-pattern exact string matching algorithms. The main purpose of this survey is to propose a new classification, identify new directions, and highlight possible challenges, current trends, and future work in the area of string matching algorithms, with a core focus on exact string matching. © 2013 IEEE
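    As a concrete instance of the single-pattern exact matching family the survey classifies, here is a minimal Knuth-Morris-Pratt sketch; KMP is one classic member of that family, not an algorithm the survey itself proposes.

```python
def kmp_search(text, pattern):
    """Return all start indices of pattern in text in O(|text| + |pattern|)."""
    if not pattern:
        return []
    # failure[i]: length of the longest proper prefix of pattern[:i+1]
    # that is also a suffix of it
    failure = [0] * len(pattern)
    j = 0
    for i in range(1, len(pattern)):
        while j and pattern[i] != pattern[j]:
            j = failure[j - 1]
        if pattern[i] == pattern[j]:
            j += 1
        failure[i] = j
    # scan the text, reusing the failure table to avoid re-comparisons
    matches, j = [], 0
    for i, c in enumerate(text):
        while j and c != pattern[j]:
            j = failure[j - 1]
        if c == pattern[j]:
            j += 1
        if j == len(pattern):
            matches.append(i - j + 1)
            j = failure[j - 1]
    return matches

print(kmp_search("abcabcabd", "abcabd"))  # -> [3]
```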

    Graph Processing in Main-Memory Column Stores

    Increasingly, both novel and traditional business applications leverage the advantages of a graph data model, such as schema flexibility and an explicit representation of relationships between entities. As a consequence, companies face the challenge of storing, manipulating, and querying terabytes of graph data for enterprise-critical applications. Although these applications operate on graph-structured data, they still require direct access to the relational data and typically rely on an RDBMS as a single source of truth and access. Existing solutions that perform graph operations on business-critical data either use a combination of SQL and application logic or employ a graph data management system. In the first approach, relying solely on SQL results in poor execution performance because of the functional mismatch between typical graph operations and the relational algebra. Worse, graph algorithms expose a tremendous variety in structure and functionality, caused by their often domain-specific implementations, and can therefore hardly be integrated into a database management system other than with custom coding. Since the majority of these enterprise-critical applications run exclusively on relational DBMSs, employing a specialized system for storing and processing graph data is typically not sensible: besides the maintenance overhead of keeping the systems in sync, combining graph and relational operations is hard to realize, as it requires data transfer across system boundaries.

    Traversal operations are a basic ingredient of graph queries and algorithms and a fundamental component of any database management system that aims at storing, manipulating, and querying graph data. Well-established graph traversal algorithms are standalone implementations relying on optimized data structures. Integrating graph traversals as an operator into a database management system requires a tight integration into the existing database environment and the development of new components, such as a graph topology-aware optimizer with accompanying graph statistics, graph-specific secondary index structures to speed up traversals, and an accompanying graph query language.

    In this thesis, we introduce and describe GRAPHITE, a hybrid graph-relational data management system. GRAPHITE is a performance-oriented graph data management system built into an RDBMS, allowing graph data to be processed seamlessly together with relational data in the same system. We propose a columnar storage representation for graph data to leverage the existing and mature data management and query processing infrastructure of relational database management systems. At the core of GRAPHITE, we propose an execution engine based solely on set operations and graph traversals. Our design is driven by the observation that different graph topologies impose different algorithmic requirements on the design of a graph traversal operator. We derive two graph traversal implementations targeting the most common graph topologies and demonstrate how graph-specific statistics can be leveraged to select the optimal physical traversal operator. To accelerate graph traversals, we devise a set of graph-specific, updateable secondary index structures that improve the performance of vertex neighborhood expansion. Finally, we introduce a domain-specific language with an intuitive programming model to extend graph traversals with custom application logic at runtime. We use the LLVM compiler framework to generate efficient code that tightly integrates the user-specified application logic with our highly optimized built-in graph traversal operators. Our experimental evaluation shows that GRAPHITE can outperform native graph management systems by several orders of magnitude while providing all the features of an RDBMS, such as transaction support, backup and recovery, and security and user management, effectively providing a promising alternative to specialized graph management systems that lack many of these features and require expensive data replication and maintenance processes.
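    A minimal sketch of the kind of set-based traversal the thesis describes: the adjacency is stored as two column arrays (source, target), and one traversal step maps a vertex set to the set of its neighbors. The names and the scan-based expansion are illustrative assumptions, not GRAPHITE's actual operator.

```python
# Columnar edge table: two parallel arrays, one row per edge.
src = [0, 0, 1, 2, 2, 3]
dst = [1, 2, 3, 3, 4, 4]

def expand(frontier):
    """One traversal step: a set of vertices -> the set of their out-neighbors.

    A full column scan stands in for the index-assisted expansion a real
    engine would use.
    """
    return {d for s, d in zip(src, dst) if s in frontier}

def traverse(start, hops):
    """Breadth-first traversal expressed purely as set operations."""
    visited, frontier = {start}, {start}
    for _ in range(hops):
        frontier = expand(frontier) - visited  # newly discovered vertices
        if not frontier:
            break
        visited |= frontier
    return visited

print(traverse(0, 2))  # vertices reachable from vertex 0 within 2 hops
```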

    Incremental parallel and distributed systems

    Incremental computation strives for efficient successive runs of applications by re-executing only those parts of the computation that are affected by a given input change, instead of recomputing everything from scratch. To realize the benefits of incremental computation, researchers and practitioners are developing new systems where the application programmer can provide an efficient update mechanism for changing application data. Unfortunately, most existing solutions are limiting because they not only depart from existing programming models but also require programmers to devise an incremental update mechanism (or a dynamic algorithm) on a per-application basis. In this thesis, we present incremental parallel and distributed systems that enable existing real-world applications to automatically benefit from efficient incremental updates. Our approach requires neither a departure from current models of programming nor the design and implementation of dynamic algorithms. To achieve these goals, we have designed and built the following incremental systems: (i) Incoop, a system for incremental MapReduce computation; (ii) Shredder, a GPU-accelerated system for incremental storage; (iii) Slider, a stream processing platform for incremental sliding-window analytics; and (iv) iThreads, a threading library for parallel incremental computation. Our experience with these systems shows that significant performance gains can be achieved for existing applications without requiring any additional effort from programmers.
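    A minimal sketch of the self-adjusting idea behind these systems: cache sub-computation results keyed by their input, so a changed input re-executes only the affected chunks. This illustrates the general principle, not Incoop's actual task-level memoization.

```python
cache = {}  # (task name, input chunk) -> memoized result

def word_count(chunk):
    """The expensive per-chunk computation (a toy map task)."""
    return len(chunk.split())

def incremental_run(chunks):
    """Re-execute word_count only for chunks missing from the cache."""
    total = 0
    for chunk in chunks:
        key = ("word_count", chunk)
        if key not in cache:
            cache[key] = word_count(chunk)  # computed once per distinct chunk
        total += cache[key]
    return total

docs = ["to be or not to be", "that is the question"]
print(incremental_run(docs))   # first run: both chunks are computed
docs[1] = "that was the question"
print(incremental_run(docs))   # only the changed chunk is recomputed
```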

    A Pervasive Computational Intelligence based Cognitive Security Co-design Framework for Hype-connected Embedded Industrial IoT

    The amplified connectivity of everyday IoT entities can expose various attack trajectories for cybercriminals to execute malevolent attacks. These dangers are amplified further by the resource limitations and heterogeneity of low-budget IoT/IIoT nodes, which make existing centralized and fixed perimeter-oriented security tools inappropriate for dynamic IoT settings. The emulation assessment presented here exemplifies the benefits of implementing a context-aware, co-design-oriented cognitive security method in integrated IIoT settings and delivers insights into the strategy's execution to drive future study. The innovative feature of our system is its capability to cope with irregular network connectivity as well as node limitations in terms of scarce computational ability, limited buffer space (at edge nodes), and finite energy. Based on real-time analytical data, the proposed scheme selects the most probable end-to-end security scheme that fits a given set of node constraints. The paper achieves its goals by recognizing some gaps in the security specific to the node subclass that is vital to our system's operations.
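    A minimal sketch of the constraint-matching step described above: given a node's measured resources, pick the strongest security profile whose costs the node can still afford. The profile names and cost numbers are invented for illustration and are not taken from the paper.

```python
# Candidate end-to-end security profiles, strongest first.
# All costs are illustrative placeholders, not measured values.
PROFILES = [
    {"name": "AES-256 + mutual TLS", "cpu": 0.8, "buffer_kb": 64, "energy": 0.7},
    {"name": "AES-128 + PSK-TLS",    "cpu": 0.5, "buffer_kb": 32, "energy": 0.4},
    {"name": "lightweight MAC only", "cpu": 0.2, "buffer_kb": 8,  "energy": 0.1},
]

def select_profile(node):
    """Return the strongest profile whose resource costs the node can meet."""
    for p in PROFILES:
        if (p["cpu"] <= node["cpu"]
                and p["buffer_kb"] <= node["buffer_kb"]
                and p["energy"] <= node["energy"]):
            return p["name"]
    return None  # nothing fits: the node must be quarantined or proxied

edge_node = {"cpu": 0.6, "buffer_kb": 40, "energy": 0.5}  # real-time telemetry
print(select_profile(edge_node))  # -> "AES-128 + PSK-TLS"
```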