
    Developing support for FPGAs in the Controller parallel programming model

    Heterogeneous computing appears to be the solution for building ever faster supercomputers capable of solving bigger and more complex problems in different fields of knowledge. To that end, it integrates accelerators with different architectures capable of exploiting the features of problems from different perspectives, thus achieving higher performance. FPGAs are reconfigurable hardware, i.e., they can be modified after manufacture. This allows great flexibility and maximum adaptability to the problem at hand. In addition, they have very low power consumption. All these advantages come with the major drawback of more difficult programming through error-prone HDLs (Hardware Description Languages), such as Verilog or VHDL, and the requirement of advanced knowledge of digital electronics. In recent years, the main FPGA vendors have concentrated their efforts on developing HLS (High Level Synthesis) tools that allow FPGAs to be programmed with C-like high-level programming languages. This has favoured their adoption by the HPC community and their integration into new supercomputers. However, the programmer still has to take care of aspects such as the management of command queues, launch parameters, or data transfers.
    The Controller model is a library that simplifies the management of coordination, communication, and kernel-launching details on hardware accelerators. It transparently exploits their native or vendor-specific programming models, namely OpenCL and CUDA, thus enabling the potential performance obtained by using them in a compiler-agnostic way. It is intended to let the programmer combine the different available hardware resources in heterogeneous environments. This work extends the Controller model through the development of a backend that allows the integration of FPGAs, keeping the changes to the user-facing interface to a minimum. The experimental results show that a significant decrease in programming effort is achieved compared to the native OpenCL implementation. Likewise, a high overlap of computation and communication and a negligible overhead due to the use of the library are attained.
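
    The host-side boilerplate that such a backend hides can be illustrated with a plain OpenCL sketch. The following minimal example shows the command-queue creation, explicit data transfers, and kernel launch that the programmer would otherwise manage by hand; the vadd kernel and fixed sizes are illustrative placeholders, not taken from the Controller library.

```c
/* Minimal OpenCL host sketch of the boilerplate a higher-level model hides:
   context, command queue, transfers, kernel launch. Purely illustrative. */
#include <CL/cl.h>
#include <stdio.h>

static const char *src =
    "__kernel void vadd(__global const float *a, __global const float *b,"
    "                   __global float *c) {"
    "    int i = get_global_id(0);"
    "    c[i] = a[i] + b[i];"
    "}";

int main(void) {
    enum { N = 1024 };
    float a[N], b[N], c[N];
    for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

    cl_platform_id plat; cl_device_id dev; cl_int err;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_DEFAULT, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, &err);
    /* Command-queue management: one of the details left to the programmer. */
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, &err);

    /* Device buffers and explicit host-to-device transfers. */
    cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY, sizeof a, NULL, &err);
    cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY, sizeof b, NULL, &err);
    cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof c, NULL, &err);
    clEnqueueWriteBuffer(q, da, CL_TRUE, 0, sizeof a, a, 0, NULL, NULL);
    clEnqueueWriteBuffer(q, db, CL_TRUE, 0, sizeof b, b, 0, NULL, NULL);

    /* Program build and launch parameters, all managed by hand.
       An FPGA flow would typically load a precompiled binary instead. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "vadd", &err);
    clSetKernelArg(k, 0, sizeof da, &da);
    clSetKernelArg(k, 1, sizeof db, &db);
    clSetKernelArg(k, 2, sizeof dc, &dc);
    size_t gsize = N;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &gsize, NULL, 0, NULL, NULL);

    /* Device-to-host transfer doubles as synchronization (blocking read). */
    clEnqueueReadBuffer(q, dc, CL_TRUE, 0, sizeof c, c, 0, NULL, NULL);
    printf("c[10] = %f\n", c[10]);
    return 0;
}
```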

    Carbon Xerogels: The Bespoke Nanoporous Carbons

    This chapter focuses on the main features of resorcinol-formaldehyde–based carbon xerogels. The first part of the chapter discusses ways of synthesizing these materials and the different variables involved. Then a review of the ways in which the meso- and macroporosity of organic xerogels can be controlled by adjusting the synthesis conditions is undertaken. Special attention is paid to the pH and components of the precursor solution and how these variables are interrelated with each other. The formation of the microporosity during the carbonization or activation processes that give rise to the carbon xerogels is also briefly discussed. Besides the fact that the porosity of these materials can be tailored during the synthesis, another notable characteristic is that, compared with most porous carbons, they possess a relatively high electrical conductivity, which makes them ideal materials for use as electrodes in energy storage devices. Their use in supercapacitors and in lithium-ion batteries is addressed in the last part of the chapter.

    Study of exact and approximate methods for solving the uncapacitated facility location problem

    This work studies the UFLP (Uncapacitated Facility Location Problem), which lays the basis of facility location problems. Since this is a computationally costly problem, exact methods are not particularly useful for large instances. The focus of this work is on local search heuristics and metaheuristics, which enable solving large optimisation problems in practicable times. The algorithms are studied through their implementation in C and the solution of data files with different characteristics, and they are compared with the exact methods. Finally, it is determined which metaheuristic is the most appropriate for solving this problem.
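
    As an illustration of the kind of local-search heuristic studied, the sketch below uses a simple "flip" neighbourhood: toggle one facility open or closed and keep the move whenever the total cost drops. The data layout and instance values are hypothetical placeholders, not the exact implementation of this work.

```c
/* Local-search sketch for the UFLP with a flip neighbourhood.
   f[i]: opening cost of facility i; c[i][j]: cost of serving client j
   from facility i. The instance values are illustrative only. */
#include <stdio.h>
#include <float.h>

#define NF 4   /* facilities */
#define NC 5   /* clients    */

/* Total cost: opening costs plus each client's cheapest open facility. */
static double total_cost(const double f[NF], const double c[NF][NC],
                         const int open[NF]) {
    double cost = 0.0;
    for (int i = 0; i < NF; i++)
        if (open[i]) cost += f[i];
    for (int j = 0; j < NC; j++) {
        double best = DBL_MAX;
        for (int i = 0; i < NF; i++)
            if (open[i] && c[i][j] < best) best = c[i][j];
        if (best == DBL_MAX) return DBL_MAX;   /* infeasible: none open */
        cost += best;
    }
    return cost;
}

int main(void) {
    double f[NF] = {8, 6, 9, 5};
    double c[NF][NC] = {{2, 7, 3, 6, 4},
                        {5, 2, 6, 3, 7},
                        {3, 5, 2, 7, 2},
                        {6, 3, 7, 2, 5}};
    int open[NF] = {1, 1, 1, 1};               /* start with all open */
    double cur = total_cost(f, c, open);

    int improved = 1;
    while (improved) {                         /* first-improvement descent */
        improved = 0;
        for (int i = 0; i < NF; i++) {
            open[i] = !open[i];                /* tentative flip */
            double cand = total_cost(f, c, open);
            if (cand < cur) { cur = cand; improved = 1; }
            else open[i] = !open[i];           /* undo non-improving flip */
        }
    }
    printf("local optimum cost: %.1f\n", cur);
    return 0;
}
```

    Metaheuristics such as those compared in the work typically wrap this kind of descent in a mechanism to escape local optima, e.g., restarts or perturbations of the open/closed vector.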

    A survey of machine and deep learning methods for privacy protection in the Internet of things

    Recent advances in hardware and information technology have accelerated the proliferation of smart and interconnected devices, facilitating the rapid development of the Internet of Things (IoT). IoT applications and services are widely adopted in environments such as smart cities, smart industry, autonomous vehicles, and eHealth. As such, IoT devices are ubiquitously connected, transferring sensitive and personal data without requiring human interaction. Consequently, it is crucial to preserve data privacy. This paper presents a comprehensive survey of recent Machine Learning (ML)- and Deep Learning (DL)-based solutions for privacy protection in IoT. First, we present an in-depth analysis of current privacy threats and attacks. Then, for each ML architecture proposed, we present the implementations, details, and published results. Finally, we identify the most effective solutions for the different threats and attacks. This work is partially supported by the Generalitat de Catalunya under grant 2017 SGR 962 and the HORIZON-GPHOENIX (101070586) and HORIZON-EUVITAMIN-V (101093062) projects.

    Lightweight protection of cryptographic hardware accelerators against differential fault analysis

    © 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
    Hardware acceleration circuits for cryptographic algorithms are widely deployed in a broad range of products. The HW implementations of such algorithms often suffer from a number of vulnerabilities that expose systems to several attacks, e.g., differential fault analysis (DFA). The challenge for designers is to protect cryptographic accelerators in a cost-effective and power-efficient way. In this paper, we propose a lightweight technique for protecting hardware accelerators implementing AES and SHA-2 (two widely used NIST standards) against DFA. The proposed technique exploits partial redundancy to first detect the occurrence of a fault and then react to the attack by obfuscating the output values. An experimental campaign demonstrated that the overhead introduced is 8.32% for AES and 3.88% for SHA-2 in terms of area, and 0.81% for AES and 12.31% for SHA-2 in terms of power, with no reduction in working frequency. Moreover, a comparative analysis showed that our proposal outperforms the most recent related countermeasures.
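
    The detect-and-react idea can be modelled in software: a partially redundant check runs alongside the main computation, and on mismatch an obfuscated value is returned instead of the faulty ciphertext, denying the attacker the correct/faulty pair that DFA needs. The C sketch below is a conceptual model under that assumption, with a toy round function; it is not the paper's hardware design, and in real silicon the fault would be physical rather than visible in software.

```c
/* Conceptual model of detect-and-react against DFA: recompute a slice of
   the round (partial redundancy), compare, and obfuscate on mismatch.
   The round function is a toy stand-in, not AES or SHA-2 logic. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for one cipher round; a real design would use AES/SHA-2 logic. */
static uint32_t round_fn(uint32_t state, uint32_t key) {
    state ^= key;
    return (state << 7) | (state >> 25);   /* rotation as toy diffusion */
}

/* Partial redundancy: recompute only the low byte and compare it with the
   main datapath's result. In hardware the two paths are separate circuits,
   so a fault injected into one of them makes the comparison fail. */
static uint32_t protected_round(uint32_t state, uint32_t key) {
    uint32_t out   = round_fn(state, key);          /* main datapath   */
    uint32_t check = round_fn(state, key) & 0xFF;   /* redundant slice */
    if ((out & 0xFF) != check)
        return (uint32_t)rand();    /* react: obfuscate the output */
    return out;
}

int main(void) {
    uint32_t ct = protected_round(0xDEADBEEFu, 0x13371337u);
    printf("output: %08X\n", ct);
    return 0;
}
```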

    NGS data analysis: a review of major tools and pipeline frameworks for variant discovery

    [EN]The analysis of genetic data has always been a challenge due to the large amount of information available and the difficulty of isolating what is relevant. However, over the years, progress in sequencing techniques has been accompanied by the development of computational techniques, up to the current application of artificial intelligence. The phases of sequence analysis can be summarized as follows: quality assessment, alignment, pre-variant processing, variant calling, and variant annotation. In this article we review and comment on the tools used in each phase of genetic sequencing, and analyze the drawbacks and advantages offered by each of them.
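
    As a concrete example of the first phase, quality assessment, the sketch below computes the mean Phred quality of one read from its FASTQ quality string, assuming the common Phred+33 ASCII encoding. It is an illustration of the metric, not one of the reviewed tools.

```c
/* Quality-assessment sketch: mean Phred score of one read, assuming the
   Phred+33 encoding used by modern FASTQ files ('!' encodes quality 0). */
#include <stdio.h>
#include <string.h>

static double mean_phred(const char *qual) {
    size_t n = strlen(qual);
    if (n == 0) return 0.0;
    long sum = 0;
    for (size_t i = 0; i < n; i++)
        sum += qual[i] - 33;       /* ASCII value minus Phred+33 offset */
    return (double)sum / (double)n;
}

int main(void) {
    /* Fourth line of a FASTQ record: per-base quality characters. */
    const char *qual = "IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII";
    printf("mean Phred quality: %.2f\n", mean_phred(qual));  /* 40.00 */
    return 0;
}
```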

    File formats used in next generation sequencing: A literature review

    [EN]Next-generation sequencing (NGS) has revolutionized the field of genomics, allowing a detailed and precise look at DNA. As this technology advanced, the need arose for standardized file formats to represent, analyze and store the vast data sets produced. In this article, we review the key file formats used in NGS: FASTA, FASTQ, BED, GFF, and VCF. The FASTA format, one of the oldest, provides a basic representation of genomic and protein sequences, identifiable by unique headers. FASTQ is essential for NGS, as it stores both the sequence and the associated quality information. BED provides a tabular representation of genomic loci, while GFF details the localization and structure of genomic features in reference sequences. Finally, VCF has emerged as the predominant standard for documenting genetic variants, from simple SNPs to complex structural variants. The adoption and adaptation of these formats have been fundamental for progress in bioinformatics and genomics. They provide a foundation on which to build sophisticated analyses, from gene discovery and function prediction to the identification of disease-associated variants. With a clear understanding of these formats, researchers and practitioners are better equipped to harness the power and potential of next-generation sequencing. This study has been funded by the AIR Genomics project (file number CCTT3/20/SA/0003), through the 2020 call for R&D projects oriented to the excellence and competitive improvement of the CCTT by the Institute of Business Competitiveness of Castilla y León and FEDER funds.
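
    To make the FASTA description concrete, the following minimal C sketch reads a FASTA stream and reports each record's identifier and sequence length. It relies only on the convention noted above: records begin with a '>' header line followed by one or more sequence lines.

```c
/* Minimal FASTA reader: '>' starts a header line; subsequent lines hold
   the sequence. Prints each identifier and its sequence length. */
#include <stdio.h>
#include <string.h>

int main(void) {
    char line[4096];
    char id[4096] = "";
    size_t seqlen = 0;
    int in_record = 0;

    while (fgets(line, sizeof line, stdin)) {
        line[strcspn(line, "\r\n")] = '\0';   /* strip trailing newline */
        if (line[0] == '>') {                 /* new record header */
            if (in_record)
                printf("%s\t%zu bp\n", id, seqlen);
            strcpy(id, line + 1);             /* identifier after '>' */
            seqlen = 0;
            in_record = 1;
        } else if (in_record) {
            seqlen += strlen(line);           /* accumulate sequence */
        }
    }
    if (in_record)                            /* flush the last record */
        printf("%s\t%zu bp\n", id, seqlen);
    return 0;
}
```

    Run as, e.g., ./fasta_stats < genome.fa; the same header-then-payload pattern is the reason FASTA remains trivially streamable despite its age.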

    Application of Deep Symbolic Learning in NGS

    [EN]The application of Deep Symbolic Learning in genomic analysis has begun to gain traction as a promising approach to interpreting and understanding the vast data sets derived from DNA sequencing. Next-generation sequencing (NGS) techniques have revolutionized the field of clinical genetics and human biology, generating massive volumes of data that require advanced tools for analysis. However, traditional methods are often too abstract or complicated for clinical staff. This work focuses on exploring how Deep Symbolic Learning, a subfield of explainable artificial intelligence (XAI), can be effectively applied to NGS data. A detailed evaluation of the suitability of different architectures will be carried out.

    Integrating Nextflow and AWS for Large-Scale Genomic Analysis: A Hypothetical Case Study

    [EN]This article explores the innovative combination of Nextflow and Amazon Web Services (AWS) to address the challenges inherent in large-scale genomic analysis. Focusing on a hypothetical case called "The Pacific Genome Atlas", it illustrates how a research organization could approach the sequencing and analysis of 10,000 genomes. Although the "Pacific Genome Atlas" is a fictional example used for illustrative purposes only, it highlights the real challenges associated with large genomic projects, such as handling huge volumes of data and the need for intensive computational analysis. Through the integration of Nextflow, a workflow management tool, with the AWS cloud infrastructure, we demonstrate how these challenges can be overcome, offering scalable, flexible and cost-effective solutions for genomic research. The adoption of modern technologies, such as those described in this article, is essential to advance the field of genomics and accelerate scientific discoveries. The present study has been funded by the AIR Genomics project (file number CCTT3/20/SA/0003) through the 2020 call for R&D projects oriented towards excellence and competitive improvement of the CCTT by the Institute of Business Competitiveness of Castilla y León and FEDER funds.

    Deep Symbolic Learning Architecture for Variant Calling in NGS

    [EN]The variant detection process (variant calling) is fundamental in bioinformatics, demanding maximum precision and reliability. This study examines an innovative integration strategy between a traditional pipeline developed in-house and an advanced Intelligent System (IS). Although the original pipeline already had tools based on traditional algorithms, it had limitations, particularly in the detection of rare or unknown variants. Therefore, the IS was introduced with the aim of providing an additional layer of analysis, capitalizing on deep and symbolic learning techniques to improve and enhance previous detections. The main technical challenge lay in interoperability. To overcome this, Nextflow, a scripting language designed to manage complex bioinformatics workflows, was employed. Through Nextflow, communication and efficient data transfer between the original pipeline and the IS were facilitated, thus guaranteeing compatibility and reproducibility. After the variant calling process of the original system, the results were transmitted to the IS, where a meticulous sequence of analysis was implemented, from preprocessing to data fusion. As a result, an optimized set of variants was generated and integrated with the previous results. Variants corroborated by both tools were considered highly reliable, while discrepancies indicated areas for detailed investigation. The product of this integration advanced to subsequent stages of the pipeline, usually annotation or interpretation, contextualizing the variants from biological and clinical perspectives. This adaptation not only maintained the original functionality of the pipeline, but also enhanced it with the IS, establishing a new standard in the variant calling process. This research offers a robust and efficient model for the detection and analysis of genomic variants, highlighting the promise and applicability of blended learning in bioinformatics. This study has been funded by the AIR Genomics project (file number CCTT3/20/SA/0003), through the 2020 call for R&D projects oriented to the excellence and competitive improvement of the CCTT by the Institute of Business Competitiveness of Castilla y León and FEDER funds.
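
    The corroboration step described above amounts to a set comparison: variants reported by both the traditional pipeline and the IS are marked high-confidence, while variants found by only one tool are flagged for review. The C sketch below assumes a simplified variant key of the form chromosome:position:REF:ALT; the keys and the flat comparison are illustrative, not the actual pipeline code.

```c
/* Sketch of the corroboration step between two variant callers.
   Variants are reduced to "chrom:pos:ref:alt" keys; agreement means
   high confidence, disagreement is flagged for manual review. */
#include <stdio.h>
#include <string.h>

static int contains(const char *set[], int n, const char *key) {
    for (int i = 0; i < n; i++)
        if (strcmp(set[i], key) == 0) return 1;
    return 0;
}

int main(void) {
    /* Illustrative call sets from the traditional pipeline and the IS. */
    const char *pipeline[] = {"chr1:1042:A:G", "chr2:2211:C:T",
                              "chr7:915:G:A"};
    const char *is_calls[] = {"chr1:1042:A:G", "chr7:915:G:A",
                              "chrX:77:T:C"};
    int np = 3, ni = 3;

    for (int i = 0; i < np; i++)
        printf("%-16s %s\n", pipeline[i],
               contains(is_calls, ni, pipeline[i])
                   ? "HIGH-CONFIDENCE" : "REVIEW (pipeline only)");
    for (int i = 0; i < ni; i++)
        if (!contains(pipeline, np, is_calls[i]))
            printf("%-16s REVIEW (IS only)\n", is_calls[i]);
    return 0;
}
```

    In the real workflow this comparison would operate on VCF records and be orchestrated as a Nextflow process between the two callers and the annotation stage.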