
    Dynamic circuit specialisation for key-based encryption algorithms and DNA alignment

    Parameterised reconfiguration is a method for dynamic circuit specialization on FPGAs. The main advantage of this concept is its high resource efficiency. Additionally, an automated tool flow, TMAP, converts a hardware design into a more resource-efficient run-time reconfigurable design without a large design effort. We start by explaining the core principles behind the dynamic circuit specialization technique. Next, we show the possible gains in encryption applications using an AES encoder. Our AES design shows a 20.6% area gain compared to an unoptimized hardware implementation and a 5.3% gain compared to a manually optimized third-party hardware implementation. We also applied TMAP to Triple-DES and RC6 implementations, achieving 27.8% and 72.7% LUT-area gains respectively. In addition, we discuss a run-time reconfigurable DNA aligner, focusing on optimizations that reduce the dynamic specialization overhead. Our final design is up to 2.80 times more efficient on cheaper FPGAs than the original DNA aligner when at least one DNA sequence is longer than 758 characters; most sequences in DNA alignment are on the order of 2^13 characters long.
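The key idea can be illustrated with a toy partial-evaluation example: inputs that change only infrequently (such as an encryption key) are treated as constants, and each Boolean function is re-evaluated for the current parameter values so that the remaining function fits in a smaller LUT. The sketch below is a minimal, hypothetical illustration of that idea in Python; it is not the TMAP tool flow, which operates on HDL designs.

```python
# Toy illustration (not TMAP itself): specialise a 4-input Boolean function
# by fixing its "parameter" inputs, so the remaining function fits a smaller LUT.
from itertools import product

def specialise(func, n_inputs, param_bits):
    """Partially evaluate `func` for fixed parameter bits.

    func       : callable taking a tuple of n_inputs bits -> 0/1
    n_inputs   : total number of inputs of the original LUT
    param_bits : dict {input_index: fixed_value} for slowly-changing inputs
    Returns the truth table of the specialised (smaller) function.
    """
    free = [i for i in range(n_inputs) if i not in param_bits]
    table = {}
    for values in product((0, 1), repeat=len(free)):
        bits = [0] * n_inputs
        for i, v in zip(free, values):
            bits[i] = v
        for i, v in param_bits.items():
            bits[i] = v
        table[values] = func(tuple(bits))
    return table

# Example: a 4-input function where inputs 2 and 3 are key bits (parameters).
f = lambda b: (b[0] & b[2]) ^ (b[1] & b[3])
print(specialise(f, 4, {2: 1, 3: 0}))   # 2-input specialised truth table
```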

    Evolution Oriented Monitoring oriented to Security Properties for Cloud Applications

    The Internet is changing from an information space into a dynamic computing space, in which data distribution, remotely accessible software services, dynamism, and autonomy are prime attributes. Cloud technology offers a powerful and fast-growing approach to the provision of infrastructure (platform and software services), avoiding the high costs of owning, operating, and maintaining the computational infrastructure required for this purpose. Nevertheless, cloud technology still raises concerns regarding the security, privacy, governance, and compliance of the data and software services offered through it. These concerns stem from the difficulty of verifying the security properties of the different types of applications and services available through cloud technology, and from the uncertainty of their owners and users about the security of their services, and of the applications based on them, once they are deployed and offered through a cloud. This work presents a novel evolution-oriented, cloud-specific monitoring model (including an architecture and a language) that aims at helping cloud application developers design and monitor the behavior and functionality of their applications in a cloud environment. (Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech)

    Doctor of Philosophy

    The embedded system space is characterized by a rapid evolution in the complexity and functionality of applications. In addition, the short time-to-market nature of the business motivates the use of programmable devices capable of meeting the conflicting constraints of low energy, high performance, and short design times. The keys to achieving these conflicting constraints are specialization and maximally extracting the available application parallelism. General-purpose processors are flexible but are either too power hungry or lack the necessary performance. Application-specific integrated circuits (ASICs) efficiently meet the performance and power needs but are inflexible. Programmable domain-specific architectures (DSAs) are an attractive middle ground, but their design requires significant time, resources, and expertise in a variety of specialties, ranging from application algorithms to architecture and, ultimately, circuit design. This dissertation presents CoGenE, a design framework that automates the design of energy-performance-optimal DSAs for embedded systems. For a given application domain and a user-chosen initial architectural specification, CoGenE consists of a Compiler to generate the execution binary, a simulator Generator to collect performance/energy statistics, and an Explorer that modifies the current architecture to improve its energy-performance-area characteristics. This process repeats automatically until the user-specified constraints are achieved, removing or alleviating the time needed to understand the application, manually design the DSA, and generate object code for it. Thus, CoGenE is a new design methodology that represents a significant improvement in performance, energy dissipation, design time, and resources. This dissertation employs the face recognition domain to showcase a flexible architectural design methodology that creates "ASIC-like" DSAs. The DSAs are instruction set architecture (ISA)-independent and achieve good energy-performance characteristics by coscheduling the often conflicting constraints of data access, data movement, and computation through a flexible interconnect. This flexibility, however, comes at the cost of a significant increase in programming complexity and code generation time. To address this problem, the CoGenE compiler employs integer linear programming (ILP)-based 'interconnect-aware' scheduling techniques for automatic code generation. The CoGenE explorer employs an iterative technique to search the complete design space and select a set of energy-performance-optimal candidates. When compared to manual designs, results demonstrate that CoGenE produces superior designs for three application domains: face recognition, speech recognition, and wireless telephony. While CoGenE is well suited to applications that exhibit streaming behavior, multithreaded applications like ray tracing present a different but important challenge. To demonstrate its generality, CoGenE is evaluated in designing a novel multicore N-wide SIMD architecture, known as StreamRay, for the ray tracing domain. CoGenE is used to synthesize the SIMD execution cores, the compiler that generates the application binary, and the interconnection subsystem. Further, separating address and data computations in space reduces data movement and contention for resources, thereby significantly improving performance compared to existing ray tracing approaches.
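A minimal, self-contained sketch of the compile/simulate/explore iteration described above is shown below. The analytic "simulator" and the two architecture knobs are invented for illustration only; CoGenE's actual Compiler, Generator, and Explorer operate on full architectural specifications and real binaries.

```python
# Toy stand-in for the compile -> simulate -> explore loop described above.
# The analytic model and the design knobs (SIMD width, cache size) are
# assumptions for illustration; they are not CoGenE's real tool chain.
import itertools

def simulate(width, cache_kb):
    """Crude analytic model: wider SIMD lowers runtime but costs energy/area."""
    runtime = 100.0 / width + 50.0 / cache_kb        # arbitrary units
    energy  = 5.0 * width + 0.2 * cache_kb
    area    = 2.0 * width + 0.1 * cache_kb
    return {"runtime": runtime, "energy": energy, "area": area}

def explore(constraints):
    """Walk a tiny design space; return the lowest-energy configuration
    that meets the user constraints (the Explorer's role)."""
    best = None
    for width, cache_kb in itertools.product([2, 4, 8, 16], [16, 32, 64, 128]):
        stats = simulate(width, cache_kb)
        if all(stats[k] <= v for k, v in constraints.items()):
            if best is None or stats["energy"] < best[2]["energy"]:
                best = (width, cache_kb, stats)
    return best

print(explore({"runtime": 20.0, "area": 40.0}))
```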

    High performance reconfigurable architectures for biological sequence alignment

    Bioinformatics and computational biology (BCB) is a rapidly developing multidisciplinary field which encompasses a wide range of domains, including genomic sequence alignment. Sequence alignment is a fundamental tool in molecular biology for searching for homology between sequences, and it is currently gaining close attention due to its great impact on quality of life, such as facilitating early disease diagnosis, identifying the characteristics of a newly discovered sequence, and drug engineering. With the vast growth of genomic data, searching for sequence homology over huge databases (often measured in gigabytes) cannot produce results within a realistic time, hence the need for acceleration. Given the exponential increase of biological databases resulting from the human genome project (HGP), supercomputers and other parallel architectures such as special-purpose Very Large Scale Integration (VLSI) chips, Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs) have become popular acceleration platforms. Nevertheless, there are always trade-offs between area, speed, power, cost, development time and reusability when selecting an acceleration platform. FPGAs generally offer more flexibility, higher performance and lower overheads; however, they suffer from a relatively low-level programming model compared with off-the-shelf platforms such as standard microprocessors and GPUs. Due to these limitations, the need has arisen for optimized FPGA core implementations, which are crucial for this technology to become viable in high performance computing (HPC). This research proposes the use of state-of-the-art reprogrammable system-on-chip technology on FPGAs to accelerate three widely used sequence alignment algorithms: the Smith-Waterman with affine gap penalty algorithm, the profile hidden Markov model (HMM) algorithm and the Basic Local Alignment Search Tool (BLAST) algorithm. The three novel aspects of this research are, firstly, that the algorithms are designed and implemented in hardware, with each core achieving the highest performance compared to the state of the art. Secondly, an efficient scheduling strategy based on the double-buffering technique is adopted in the hardware architectures: when the alignment matrix computation task is overlapped with the PE configuration in a folded systolic array, the overall throughput of the core is significantly increased, owing to the bounded PE configuration time and a parallel PE configuration approach that is independent of the number of PEs in the systolic array. In addition, the use of only two configuration elements in the PE optimizes hardware resources and enables the scalability of PE systolic arrays without relying on restricted on-board memory resources. Finally, a new performance metric is devised which facilitates an effective comparison of design performance between different FPGA devices and families. The normalized performance indicator (speed-up per area per process technology) factors out the area and lithography-technology advantages of any FPGA, resulting in fairer comparisons. The cores have been designed using Verilog HDL and prototyped on the Alpha Data ADM-XRC-5LX card with the Virtex-5 XC5VLX110-3FF1153 FPGA.
The implementation results show that the proposed architectures achieved giga cell updates per second (GCUPS) performances of 26.8, 29.5 and 24.2, respectively, for the acceleration of the Smith-Waterman with affine gap penalty algorithm, the profile HMM algorithm and the BLAST algorithm. In terms of speed-up, the designed cores were compared against their corresponding software and the reported FPGA implementations. In the comparison with equivalent software execution, acceleration of the optimal alignment algorithm in hardware yielded an average speed-up of 269x over the SSEARCH 35 software. For the profile HMM-based sequence alignment, the designed core achieved speed-ups of 103x and 8.3x against HMMER 2.0 and the latest version of HMMER (version 3.0), respectively. The implementation of the gapped BLAST with the two-hit method in hardware achieved a greater than tenfold speed-up compared to the latest NCBI BLAST software. For the comparison against other reported FPGA implementations, the proposed normalized performance indicator was used to evaluate the designed architectures fairly. The results showed that the first architecture achieved more than a 50 percent improvement, while acceleration of the profile HMM sequence alignment in hardware gained a normalized speed-up of 1.34. In the case of the gapped BLAST with the two-hit method, the designed core achieved an 11x speed-up after factoring out the advantages of the Virtex-5 FPGA. In addition, further analysis was conducted in terms of cost and power: the core achieved 0.46 MCUPS per dollar spent and 958.1 MCUPS per watt. This shows that FPGAs can be an attractive platform for high-performance computation, offering a smaller area footprint and an economical 'green' solution compared to other acceleration platforms. Higher throughput can be achieved by redeploying the cores on newer, bigger and faster FPGAs with minimal design effort.
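The abstract names the normalized performance indicator (speed-up per area per process technology) without giving its exact formula, so the following is only one plausible reading, with linear scalings in LUT count and feature size assumed purely for illustration.

```python
# Hedged sketch of a "speed-up per area per process technology" style metric.
# The exact normalisation used in the thesis is not given in this abstract,
# so the scaling below (linear in LUT count and in feature size) is an
# assumption for illustration only.

def normalised_speedup(speedup, luts_used, process_nm, ref_luts, ref_nm):
    """Scale a raw speed-up by relative area and relative process technology."""
    area_factor = luts_used / ref_luts      # bigger device -> discount the speed-up
    tech_factor = ref_nm / process_nm       # finer process  -> discount the speed-up
    return speedup / (area_factor * tech_factor)

# Example: a Virtex-5 (65 nm) design compared against a hypothetical 90 nm
# reference design; the LUT counts are illustrative placeholders.
print(normalised_speedup(speedup=269.0, luts_used=20000, process_nm=65,
                         ref_luts=15000, ref_nm=90))
```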

    Cascaded Pose Regression

    We present a fast and accurate algorithm for computing the 2D pose of objects in images called cascaded pose regression (CPR). CPR progressively refines a loosely specified initial guess, where each refinement is carried out by a different regressor. Each regressor performs simple image measurements that are dependent on the output of the previous regressors; the entire system is automatically learned from human annotated training examples. CPR is not restricted to rigid transformations: ‘pose’ is any parameterized variation of the object’s appearance such as the degrees of freedom of deformable and articulated objects. We compare CPR against both standard regression techniques and human performance (computed from redundant human annotations). Experiments on three diverse datasets (mice, faces, fish) suggest CPR is fast (2-3ms per pose estimate), accurate (approaching human performance), and easy to train from small amounts of labeled data
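A minimal sketch of the cascade at test time might look as follows; the pose-indexed feature extractor, the trained stage regressors, and the composition rule are placeholders standing in for the learned components described above.

```python
# Minimal sketch of cascaded pose regression at test time. The feature
# extractor and the trained regressors are placeholders: in CPR the features
# are simple image measurements indexed by the current pose estimate, and
# each stage's regressor is learned from human-annotated examples.
import numpy as np

def cpr_predict(image, init_pose, regressors, pose_indexed_features, compose):
    """Progressively refine `init_pose` with a cascade of regressors."""
    pose = np.asarray(init_pose, dtype=float)
    for stage in regressors:
        feats = pose_indexed_features(image, pose)   # measurements relative to pose
        update = stage(feats)                        # stage-specific pose update
        pose = compose(pose, update)                 # apply update in pose space
    return pose

# Dummy usage with a 2-parameter pose and trivial stand-in stages.
dummy_feats = lambda img, p: p * 0.0 + 1.0
dummy_stage = lambda f: -0.1 * f
compose_add = lambda p, d: p + d
print(cpr_predict(None, [1.0, 2.0], [dummy_stage] * 3, dummy_feats, compose_add))
```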

    Run-Time Reconfiguration for Automatic Hardware/Software Partitioning

    Parameterisable configurations allow very fast run-time reconfiguration in FPGAs. The main advantage of this new concept is the automated tool flow that converts a hardware design into a more resource-efficient run-time reconfigurable design without a large design effort. In this paper, we show that this automated tool flow for run-time reconfiguration can be used to easily optimize a full hardware implementation for area by converting it automatically into a hardware/software implementation. The tool flow can partition the design in a very short time and, at the same time, yield significant area gains. The use of run-time reconfiguration allows us to extend the hardware/software boundary so that more functionality can be moved to software. We explain the core principles behind the run-time reconfiguration technique using the AES encoder as an example. For the AES encoder the manual hardware/software partitioning is clear, and this manual partitioning serves as a comparison to the automated partitioning that uses parameterisable configurations. Several possible AES encoder implementations are compared. Our automatically partitioned AES design shows a 20.6% area gain compared to an unoptimized hardware implementation and a 5.3% gain compared to a manually optimized third-party hardware implementation. In addition, we discuss the results of our technique on other applications, where the hardware/software partitioning is less clear. Among these, a Triple-DES implementation shows a 29.3% area gain using our technique. Based on our AES encoder results, we derive some guidelines for optimizing the impact of parameterisable configurations in general designs.
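As a toy illustration of area-driven partitioning (not the actual TMAP flow, which exploits parameterisable configurations rather than a block-level cost model), one could greedily move the largest software-capable blocks out of hardware until an area budget is met. The block names and LUT counts below are invented for illustration.

```python
# Toy illustration only: choose, per functional block, whether to keep it in
# hardware or move it to software, based purely on estimated LUT area.
# Real partitioning also weighs reconfiguration time and software latency.

def partition(blocks, area_budget):
    """blocks: list of (name, hw_luts, sw_feasible). Greedily move the
    largest software-capable blocks to software until under budget."""
    hw = {name: luts for name, luts, _ in blocks}
    movable = sorted((luts, name) for name, luts, ok in blocks if ok)
    sw = []
    while sum(hw.values()) > area_budget and movable:
        luts, name = movable.pop()      # largest remaining candidate
        del hw[name]
        sw.append(name)
    return hw, sw

blocks = [("key_schedule", 1200, True), ("sbox_layer", 3000, False),
          ("round_ctrl", 400, True)]
print(partition(blocks, area_budget=3500))
```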

    Information-driven navigation

    In recent years, we have witnessed impressive progress in the accuracy and robustness of Visual Odometry (VO) and Simultaneous Localization and Mapping (SLAM). This boost in performance has enabled the first commercial implementations related to augmented reality (AR), virtual reality (VR) and robotics. In this thesis, we develop new probabilistic methods to further improve the accuracy, robustness and efficiency of VO and SLAM. The contributions of our work are published in three main papers and complemented by the release of SID-SLAM, the software containing all our contributions, and of the challenging Minimal Texture dataset. Our first contribution is an information-theoretic approach to point selection for direct and/or feature-based RGB-D VO/SLAM. The aim is to select only the most informative measurements, in order to reduce the size of the optimization problem with minimal impact on accuracy. Our experimental results show that our novel criterion allows us to reduce the number of tracked points down to only 24, achieving state-of-the-art accuracy while reducing the computational demand by up to 10x.
Better uncertainty models for visual measurements improve the accuracy of multi-view structure and motion and lead to more realistic uncertainty estimates of the VO/SLAM states. We derive a novel model for multi-view residual covariances based on perspective deformation, which becomes a crucial element of our information-driven approach. Visual odometry and SLAM systems are typically divided in the literature into two categories, feature-based and direct methods, depending on the type of residuals that are minimized. We combine our two previous contributions in the formulation and implementation of SID-SLAM, the first full semi-direct RGB-D SLAM system that tightly and indistinctly uses features and direct methods within a complete information-driven pipeline. Moreover, we recorded Minimal Texture, an RGB-D dataset with conceptually simple but challenging content and accurate ground truth, to facilitate state-of-the-art research on semi-direct SLAM.
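A generic, hedged sketch of information-driven point selection is given below: candidate measurements are greedily kept according to how much they increase the log-determinant of the accumulated information matrix. This is a common simplification, not the exact criterion or covariance model derived in the thesis.

```python
# Hedged sketch of information-driven point selection: greedily keep the
# measurements whose Jacobians add the most information to the state estimate
# (log-determinant of the accumulated information matrix). This is a generic
# simplification, not the thesis's actual criterion.
import numpy as np

def select_points(jacobians, k, prior=1e-6):
    """jacobians: list of (m x n) measurement Jacobians; keep the k most
    informative ones by greedy log-det gain."""
    n = jacobians[0].shape[1]
    info = prior * np.eye(n)                 # small prior keeps logdet finite
    selected, remaining = [], list(range(len(jacobians)))
    for _ in range(k):
        gains = [np.linalg.slogdet(info + jacobians[i].T @ jacobians[i])[1]
                 for i in remaining]
        best = remaining[int(np.argmax(gains))]
        info += jacobians[best].T @ jacobians[best]
        selected.append(best)
        remaining.remove(best)
    return selected

rng = np.random.default_rng(0)
J = [rng.normal(size=(2, 6)) for _ in range(40)]   # 40 candidate point residuals
print(select_points(J, k=24))                       # mirrors the "24 points" result
```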

    Manifold-Based Sensorimotor Representations for Bootstrapping of Mobile Agents

    The subject of this thesis is the development of a domain-independent algorithm that allows an autonomous system to process sequences of sensorimotor interaction with its environment and to assign a geometric interpretation to its motor capabilities. As the central foundation for such a sensorimotor representation we utilize Lie groups, smooth manifolds endowed with a group structure, which allow for an elegant representation of geometric operations. Expressing motor controls with respect to the manifold structure allows us to transform the sensorimotor interaction sequence into a specific set of data points. Finding a manifold and a transformation that minimize an intrinsic conflict function corresponds to finding a topological structure that best fits the sensorimotor space the entity resides in. Experiments conducted in a virtual environment show the applicability of the approach for different sensor and motor configurations.
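As a hedged, toy reading of the manifold idea, motor commands can be represented as elements of a Lie group such as SE(2), so that composing commands is matrix multiplication and a "conflict" can be measured as the failure of a closed command loop to return to the identity; the thesis's actual conflict function and manifold search are more general.

```python
# Hedged sketch: motor commands represented as SE(2) group elements, so that
# composing commands is matrix multiplication. A simple "conflict" score is
# how far a closed loop of commands lands from the identity. This is a toy
# reading of the manifold idea, not the thesis's actual conflict function.
import numpy as np

def se2(dx, dy, dtheta):
    c, s = np.cos(dtheta), np.sin(dtheta)
    return np.array([[c, -s, dx],
                     [s,  c, dy],
                     [0,  0, 1.0]])

def loop_conflict(commands):
    """Compose a sequence of SE(2) motions and measure deviation from identity."""
    T = np.eye(3)
    for cmd in commands:
        T = T @ se2(*cmd)
    return np.linalg.norm(T - np.eye(3))

# A square path with four 90-degree turns should close (conflict ~ 0).
square = [(1, 0, np.pi / 2)] * 4
print(loop_conflict(square))
```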

    Data-oriented approach for synthetic network traffic generation

    A significant portion of dynamic websites, called web applications, presents dynamic information that can be fetched from a backend pre-emptively or on demand. For popular sites and applications, the increased volume of network calls and connectivity creates high performance requirements for the server-side implementation. A failure to fulfill these requirements might lead to security vulnerabilities, high latency, logical malfunctions, and other issues. Identifying and quantifying performance weaknesses is not trivial: even if measurements are collected, they might not originate from a stable environment that allows a truthful comparison between software versions. Often application programming interface (API) performance issues are found by accident while using the software, although a few more automated methods to find them exist as well, such as automated end-to-end API tests. In this diploma thesis, a web application is studied with the goal of increasing its performance testing capability. A data-oriented approach to creating synthetic network traffic for performance testing is prototyped. The approach consists of recording incoming network traffic at the server and finding user sequences within the traffic, which are then used to create a model of the users. The model can then be utilized to generate synthetic network requests that have the same characteristics as real network traffic. The synthetic requests are used to generate load in a performance test environment. The approach is evaluated in terms of its applicability to the use case.
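A hedged sketch of the recorded-traffic-to-synthetic-requests pipeline is shown below, using a first-order Markov chain over endpoints as a stand-in for the thesis's user model; the endpoints and session structure are invented for illustration.

```python
# Hedged sketch of the pipeline described above: learn a model from recorded
# user sequences and sample synthetic request sequences from it. A first-order
# Markov chain over endpoints stands in for the thesis's user model.
import random
from collections import defaultdict

def fit_markov(sequences):
    """sequences: lists of endpoints per recorded user session."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(["<start>"] + seq, seq + ["<end>"]):
            counts[a][b] += 1
    return {a: (list(nxt), list(nxt.values())) for a, nxt in counts.items()}

def generate(model, rng=random):
    """Sample one synthetic session from the fitted model."""
    state, session = "<start>", []
    while True:
        choices, weights = model[state]
        state = rng.choices(choices, weights=weights)[0]
        if state == "<end>":
            return session
        session.append(state)

recorded = [["/login", "/items", "/items/42", "/logout"],
            ["/login", "/items", "/logout"]]
model = fit_markov(recorded)
print(generate(model))
```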