890 research outputs found

    Improving Programming Support for Hardware Accelerators Through Automata Processing Abstractions

    The adoption of hardware accelerators, such as Field-Programmable Gate Arrays, into general-purpose computation pipelines continues to rise, driven by recent trends in data collection and analysis as well as pressure from challenging physical design constraints in hardware. The architectural designs of many of these accelerators stand in stark contrast to the traditional von Neumann model of CPUs. Consequently, existing programming languages, maintenance tools, and techniques are not directly applicable to these devices, meaning that additional architectural knowledge is required for effective programming and configuration. Current programming models and techniques are akin to assembly-level programming on a CPU, thus placing significant burden on developers tasked with using these architectures. Because programming is currently performed at such low levels of abstraction, the software development process is tedious and challenging and hinders the adoption of hardware accelerators. This dissertation explores the thesis that theoretical finite automata provide a suitable abstraction for bridging the gap between high-level programming models and maintenance tools familiar to developers and the low-level hardware representations that enable high-performance execution on hardware accelerators. We adopt a principled hardware/software co-design methodology to develop a programming model providing the key properties that we observe are necessary for success, namely performance and scalability, ease of use, expressive power, and legacy support. First, we develop a framework that allows developers to port existing, legacy code to run on hardware accelerators by leveraging automata learning algorithms in a novel composition with software verification, string solvers, and high-performance automata architectures. 
Next, we design a domain-specific programming language to aid programmers writing pattern-searching algorithms and develop compilation algorithms to produce finite automata, which support efficient execution on a wide variety of processing architectures. Then, we develop an interactive debugger for our new language, which allows developers to accurately identify the locations of bugs in software while maintaining support for high-throughput data processing. Finally, we develop two new automata-derived accelerator architectures to support additional applications, including the detection of security attacks and the parsing of recursive and tree-structured data. Using empirical studies, logical reasoning, and statistical analyses, we demonstrate that our prototype artifacts scale to real-world applications, maintain manageable overheads, and support developers' use of hardware accelerators. Collectively, the research efforts detailed in this dissertation help ease the adoption and use of hardware accelerators for data analysis applications, while supporting high-performance computation.
    PhD, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    https://deepblue.lib.umich.edu/bitstream/2027.42/155224/1/angstadt_1.pd
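The "finite automata as a bridging abstraction" idea described in this abstract can be sketched in miniature: a search pattern is compiled into a DFA transition table and the table is then driven one input symbol per step, the same streaming discipline that automata accelerators follow (one byte per cycle). The sketch below uses the classic KMP construction as the compilation step; it is a generic illustration, not the dissertation's actual compiler, and all names are illustrative.

```python
def compile_keyword(keyword):
    """Compile `keyword` into a DFA transition table that tracks, after each
    input symbol, the longest prefix of the keyword matched so far."""
    m = len(keyword)
    # failure[i] = length of the longest proper prefix of keyword[:i]
    # that is also a suffix of keyword[:i] (classic KMP failure function)
    failure = [0] * (m + 1)
    k = 0
    for i in range(1, m):
        while k > 0 and keyword[i] != keyword[k]:
            k = failure[k]
        if keyword[i] == keyword[k]:
            k += 1
        failure[i + 1] = k

    delta = {}
    for state in range(m + 1):
        for ch in set(keyword):
            if state < m and ch == keyword[state]:
                delta[(state, ch)] = state + 1
            else:
                # fall back along failure links to the longest reusable prefix
                s = state
                while s > 0 and (s == m or ch != keyword[s]):
                    s = failure[s]
                delta[(state, ch)] = s + 1 if (s < m and ch == keyword[s]) else 0

    return delta, m  # transition table and the accepting state


def stream_match(delta, accept, text):
    """Execute the DFA over `text` one symbol at a time, as an automata
    processor would, and report the end positions of matches."""
    state, hits = 0, []
    for pos, ch in enumerate(text):
        state = delta.get((state, ch), 0)  # symbols outside the alphabet reset
        if state == accept:
            hits.append(pos)
    return hits
```

Because every step is a single table lookup regardless of pattern length, this execution model maps naturally onto hardware that evaluates many such automata in parallel against the same input stream.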

    Tools and Algorithms for the Construction and Analysis of Systems

    This open access book constitutes the proceedings of the 28th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2022, which was held during April 2-7, 2022, in Munich, Germany, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2022. The 46 full papers and 4 short papers presented in this volume were carefully reviewed and selected from 159 submissions. The proceedings also contain 16 tool papers of the affiliated competition SV-COMP and 1 paper consisting of the competition report. TACAS is a forum for researchers, developers, and users interested in rigorously based tools and algorithms for the construction and analysis of systems. The conference aims to bridge the gaps between different communities with this common interest and to support them in their quest to improve the utility, reliability, flexibility, and efficiency of tools and algorithms for building computer-controlled systems.

    Computer Aided Verification

    This open access two-volume set LNCS 10980 and 10981 constitutes the refereed proceedings of the 30th International Conference on Computer Aided Verification, CAV 2018, held in Oxford, UK, in July 2018. The 52 full and 13 tool papers presented together with 3 invited papers and 2 tutorials were carefully reviewed and selected from 215 submissions. The papers cover a wide range of topics and techniques, from algorithmic and logical foundations of verification to practical applications in distributed, networked, cyber-physical, and autonomous systems. They are organized in topical sections on model checking, program analysis using polyhedra, synthesis, learning, runtime verification, hybrid and timed systems, tools, probabilistic systems, static analysis, theory and security, SAT, SMT and decision procedures, concurrency, and CPS, hardware, and industrial applications.

    Aerospace Medicine and Biology: A continuing bibliography with indexes, supplement 144

    This bibliography lists 257 reports, articles, and other documents introduced into the NASA scientific and technical information system in July 1975.

    Anomaly detection of web-based attacks

    Master's thesis in Information Security, presented to the Universidade de Lisboa through the Faculdade de Ciências, 2010. To successfully prevent attacks it is vital to have a complete and accurate detection system. Signature-based intrusion detection systems (IDS) are one of the most popular approaches, but they are not adequate for detection of web-based or novel attacks. The purpose of this project is to study and design an anomaly-based intrusion detection system capable of detecting those kinds of attacks. Anomaly-based IDS can create a model of normal behavior from a set of training data, and then use it to detect novel attacks. In most cases, this model represents more instances than those in the training data set, a characteristic that we designate as generalization and which is necessary for accurate anomaly detection. The accuracy of such systems, which determines their effectiveness, is considerably influenced by the model building phase (often called training), which depends on having attack-free data that resembles the normal operation of the protected application. Having good models is particularly important, or else significant amounts of false positives and false negatives will likely be generated by the IDS during the detection phase. This dissertation details our research on the use of anomaly-based methods to detect attacks against web servers and applications.
Our contributions focus on three different strands: i) advanced training procedures that enable anomaly-based learning systems to perform well even in the presence of complex and dynamic web applications; ii) a system comprising several anomaly detection techniques capable of recognizing and identifying attacks against web servers and applications; and iii) an evaluation of the system and of the most suitable techniques for anomaly detection of web attacks, using a large data set of real-world traffic belonging to a large web application hosted on production servers of a Portuguese ISP.
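The train-then-detect split this abstract describes can be illustrated with one of the simplest models from the anomaly-detection literature: learn the mean and variance of a request parameter's length from attack-free traffic, then bound the probability of a new observation's deviation with Chebyshev's inequality. This is a generic textbook technique, not necessarily one of the exact models the thesis evaluates; the class name, feature, and threshold are all illustrative.

```python
from statistics import mean, pvariance


class LengthModel:
    """Per-parameter length model: trained on attack-free traffic, then used
    to score how anomalous a new parameter length is."""

    def __init__(self):
        self.mu = 0.0
        self.var = 0.0

    def train(self, lengths):
        """Learn mean and variance of parameter lengths from normal traffic."""
        self.mu = mean(lengths)
        self.var = pvariance(lengths, self.mu)

    def score(self, length):
        """Chebyshev bound P(|X - mu| >= d) <= var / d^2, capped at 1.0.
        Lower scores mean more anomalous observations."""
        d = abs(length - self.mu)
        if d == 0:
            return 1.0
        if self.var == 0:
            return 0.0  # any deviation from a constant-length parameter
        return min(1.0, self.var / (d * d))

    def is_anomalous(self, length, threshold=0.1):
        return self.score(length) < threshold
```

A typical use: train on the observed lengths of a `username` parameter, then a 200-character injection payload scores far below the threshold while ordinary values do not. Real systems combine several such models (character distribution, token structure, attribute order) and aggregate their scores.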

    Fundamental Approaches to Software Engineering

    computer software maintenance; computer software selection and evaluation; formal logic; formal methods; formal specification; programming languages; semantics; software engineering; specifications; verification

    Quantification of in situ heterogeneity of contaminants in soil: a fundamental prerequisite to understanding factors controlling plant uptake

    Heterogeneity of contaminants in soils can vary spatially over a range of scales, causing uncertainty in environmental measurements of contaminant concentrations. Sampling designs may aim to reduce the impact of on-site heterogeneity by using composite sampling, increased sample mass, and off-site homogenisation, yet they can overlook the small-scale heterogeneity that has significant implications for plant uptake of contaminants. Moreover, composite sampling and homogenisation may not be relevant to target receptor behaviour (e.g. plants), and studies using simplistic models of heterogeneity have shown that it can significantly impact plant uptake of contaminants. The alternative approach, to accept and quantify heterogeneity, requires further exploration: contaminant heterogeneity is inevitable within soils, and its quantification should enable improved reliability in risk assessment and a better understanding of variability in plant contaminant uptake. This thesis reports the development of a new sampling design to characterise and quantify contaminant heterogeneity at scales from 0.02 m to 20 m, using in situ measurement techniques, and from 0.005 m to 0.0005 m, using ex situ techniques. The design was implemented at two contaminated land sites with contrasting heterogeneity based upon historic anthropogenic activity, and showed heterogeneity varying between contaminants (Pb, Cu and Zn) and across spatial scales. Secondly, this research demonstrates how contaminant heterogeneity measured in situ can be recreated in a pot experiment, at a scale specific to the plant under study. Results from four different plant species demonstrated that existing simplistic models of heterogeneity are an inadequate proxy for plant performance and contaminant uptake under field conditions; significant differences were found in plant contaminant concentrations between simplistic models and those based upon actual site measurements of heterogeneity.
Implications of heterogeneity for plant roots were explored in the final experiment, which showed significant differences in root biomass between patches of differing contaminant concentrations.
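One way to picture what "quantifying heterogeneity" at each sampling scale can look like is to compute a dispersion statistic per scale, such as the relative standard deviation (coefficient of variation) of the measured concentrations. The metric choice and the example data below are assumptions for illustration only, not taken from the thesis.

```python
from statistics import mean, stdev


def rsd_percent(concentrations):
    """Relative standard deviation (coefficient of variation) as a percentage:
    100 * sample standard deviation / mean."""
    return 100.0 * stdev(concentrations) / mean(concentrations)


def heterogeneity_by_scale(measurements):
    """Map each sampling scale (in metres) to the RSD of the concentrations
    measured at that scale; a higher RSD indicates greater heterogeneity."""
    return {scale: rsd_percent(values) for scale, values in measurements.items()}
```

With invented Pb concentrations (mg/kg), closely spaced samples at the 0.02 m scale can show a far higher RSD than samples averaged over 20 m, which is the pattern of scale-dependent heterogeneity the sampling design is built to capture.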