
    Enabling Sophisticated Analysis of x86 Binaries with RevGen

    Current state-of-the-art static analysis tools for binary software operate on ad-hoc intermediate representations (IRs) of the machine code. Therefore, even though IRs facilitate program analysis by abstracting away the source language, it is hard to reuse existing implementations of analysis tools in new endeavors. Recently, a new compiler framework, LLVM, has emerged, together with many analysis tools that use its IR. However, these tools rely on a compiler to generate the IR from source code. We propose RevGen, a tool that automatically converts existing binary programs to the standard LLVM IR, making an increasingly large number of static and dynamic analysis frameworks, as well as run-time instrumentation tools, applicable to legacy software. We show the potential of RevGen by converting several programs and device drivers to LLVM and checking the resulting code with off-the-shelf analysis tools.
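    The abstract does not show RevGen's translator itself; as a rough sketch of what lifting machine code to LLVM IR means, the fragment below uses the llvmlite bindings (an assumption for illustration, not RevGen's actual interface) to build the IR that a trivial x86 sequence might be translated to. All names are hypothetical.

```python
# Illustrative sketch only (llvmlite bindings; not RevGen's actual
# interface): construct the LLVM IR that a trivial x86 fragment such
# as "mov eax, edi; add eax, esi; ret" might be lifted to.
from llvmlite import ir

module = ir.Module(name="lifted")
i32 = ir.IntType(32)

# One IR function per translated code region.
func = ir.Function(module, ir.FunctionType(i32, (i32, i32)), name="lifted_add")
builder = ir.IRBuilder(func.append_basic_block(name="entry"))

# "add eax, esi" becomes a plain IR add; a faithful lifter would also
# model the x86 flags register explicitly.
eax = builder.add(func.args[0], func.args[1], name="eax")
builder.ret(eax)

# The textual IR can now be fed to off-the-shelf LLVM analysis tools.
print(module)
```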

    Detecting Dissimilar Classes of Source Code Defects

    Software maintenance accounts for the majority of software development cost and effort, with its major activities focused on the detection, location, analysis and removal of defects present in the software. Although software defects can originate, and be present, at any phase of the software development life-cycle, the implementation (i.e., source code) contains more than three-fourths of the total defects. Due to the diverse nature of defects, their detection and analysis have to be carried out by equally diverse tools, often necessitating the application of multiple tools for reasonable defect coverage, which directly increases maintenance overhead. Unified detection tools are known to combine different specialized techniques into a single massive core, resulting in operational difficulty and increased maintenance cost. The objective of this research was to find a technique that can detect dissimilar defects using a simplified model and a single methodology, both of which should contribute to an easy-to-acquire solution. Following this goal, a ‘Supervised Automation Framework’ named FlexTax was developed for semi-automatic defect mapping and taxonomy generation, and was then applied to a large-scale real-world defect dataset to generate a comprehensive defect taxonomy, which was verified using machine learning classifiers and manual inspection. This taxonomy, along with an extensive literature survey, was used to understand the properties of different classes of defects and to develop defect similarity metrics. The taxonomy and the similarity metrics were then used to develop a defect detection model and associated techniques, collectively named Symbolic Range Tuple Analysis (SRTA). SRTA relies on symbolic analysis, path summarization and range propagation to detect dissimilar classes of defects using a simplified set of operations. To verify its effectiveness, SRTA was evaluated by processing multiple real-world open-source systems, by direct comparison with three state-of-the-art tools, by a controlled experiment, by using an established benchmark, by comparison with other tools through secondary data, and by a large-scale fault-injection experiment conducted using a mutation-injection framework, which relied on the taxonomy developed earlier for the definition of mutation rules. Experimental results confirmed SRTA’s practicality, generality, scalability and accuracy, and demonstrated its applicability as a new defect detection technique.
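    SRTA itself is not reproduced in the abstract; the sketch below illustrates only the range-propagation ingredient it builds on, flagging a possible division by zero and an out-of-bounds index from propagated value intervals. The Interval class and all constants are hypothetical.

```python
# Minimal sketch of range propagation (not SRTA itself): track a
# [lo, hi] interval per variable and flag a defect candidate when a
# dangerous value lies inside a propagated range.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __sub__(self, other):
        return Interval(self.lo - other.lo, self.hi - other.hi)

    def contains(self, value):
        return self.lo <= value <= self.hi

# Symbolic facts gathered along one summarized path:
x = Interval(0, 10)          # e.g. a loop counter known to stay in [0, 10]
y = x - Interval(5, 5)       # y = x - 5  ->  [-5, 5]

# 'z = 100 / y' is a defect candidate if 0 lies in y's range.
if y.contains(0):
    print("possible division by zero: y in [%d, %d]" % (y.lo, y.hi))

# 'buf[x]' with len(buf) == 8 is a defect candidate if x can reach 8+.
BUF_LEN = 8
if x.hi >= BUF_LEN:
    print("possible out-of-bounds index: x in [%d, %d], len == %d"
          % (x.lo, x.hi, BUF_LEN))
```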

    Liquid interfaces in viscous straining flows: Numerical studies of the selective withdrawal transition

    This paper presents a numerical analysis of the transition from selective withdrawal to viscous entrainment. In our model problem, an interface between two immiscible layers of equal viscosity is deformed by an axisymmetric withdrawal flow, which is driven by a point sink located some distance above the interface in the upper layer. We find that steady-state hump solutions, corresponding to selective withdrawal of liquid from the upper layer, cease to exist above a threshold withdrawal flux, and that this transition corresponds to a saddle-node bifurcation for the hump solutions. Numerical results on the shape evolution of the steady-state interface are compared against previous experimental measurements, and we find good agreement where the data overlap. However, the larger dynamic range of the numerical results allows us to show that the large increase in the curvature of the hump tip near the transition is not consistent with an approach towards a power-law cusp shape, an interpretation previously suggested from inspection of the experimental measurements alone. Instead, the large increase in the curvature at the hump tip reflects a logarithmic coupling between the overall height of the hump and the curvature at its tip.
    Comment: submitted to JF
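    The loss of steady hump solutions above a threshold flux follows the generic saddle-node scenario. As an illustration using the standard normal form (not the paper's own equations), with h the hump amplitude, Q the withdrawal flux and Q* the threshold:

```latex
% Standard saddle-node normal form (illustration, not the paper's model):
\[
  \frac{dh}{dt} = a\,(Q - Q^{*}) + b\,(h - h^{*})^{2}, \qquad a, b > 0,
\]
% with steady states
\[
  h_{\pm} = h^{*} \pm \sqrt{\tfrac{a}{b}\,\bigl(Q^{*} - Q\bigr)},
\]
% which are real only for Q <= Q*: the stable and unstable hump branches
% merge and annihilate at the threshold, so no steady hump exists above it.
```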

    Progress report no. 4

    Statement of responsibility on title-page reads: editors: M.J. Driscoll, D.D. Lanning, I. Kaplan, A.T. Supple; contributors: A. Alvim, G.J. Brown, J.K. Chan, T.P. Choong, M.J. Driscoll, G.A. Ducat, I.A. Forbes, M.V. Gregory, S.Y. Ho, C.M. Hove, O.K. Kadiroglu, R.J. Kennerley, D.D. Lanning, J.L. Lazewatsky, L. Lederman, A.S. Leveckis, V.A. Miethe, P.A. Scheinert, A.M. Thompson, N.E. Todreas, C.P. Tzanos, and P.J. Wood
    Includes bibliographical references
    Progress report; June 30, 1973
    U.S. Atomic Energy Commission contract: AT(11-1)225

    Improving the Quality of C++ Programs with Static Code Analysis

    Static code analysis is the analysis of program code without executing it. Static analysis tools are therefore a useful part of automated software analysis. Typical uses for these tools are detecting software defects and otherwise suspect code. Several algorithms and formal methods are available for code analysis: token pattern matching is used by simpler tools, while more thorough tools prefer formal methods such as abstract interpretation and model checking. The choice of algorithm thus depends on the desired analysis precision and soundness. We introduce the practical problems facing static analysis, especially in the context of C++ software. For static analysis to work satisfactorily, the tool must understand the semantics of the code being analyzed; many tools, particularly open-source ones, fall short here because they cannot correctly parse complex C++. Furthermore, we examine the difficulty of handling the large numbers of warnings these tools issue in mature software projects. As a summary, we present a list of five open-source and six commercial static analysis tools that can analyze C++ source code. To find out how viable it is to integrate static analysis tools into real-world projects, we performed a two-part evaluation. The first part measured the detection accuracy of four open-source and two commercial tools on 30 synthetic test cases. We discovered that Clang excels in this test, although each tool found a different set of defects, reaffirming the idea that multiple tools should be used together. In the second part of the evaluation, we applied these tools to six consecutive point releases of DynaRoad. While none of the tools detected any of the crash defects known in these releases, they proved valuable in finding other, previously unknown problems in our code base. Finally, we detail the integration of three static analysis tools into our existing build process.
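    As a toy illustration of the token-pattern end of that spectrum (not any of the evaluated tools, which parse the code properly), the sketch below flags the classic assignment-in-condition pattern in C++ source with a regular expression. The pattern and sample source are purely illustrative.

```python
# Toy token-pattern check (illustrative; real tools such as Clang
# parse the code instead): flag 'if (x = y)' where an assignment,
# not a comparison, appears in a condition.
import re

# Match 'if (... = ...)' but not '==', '<=', '>=', '!='.
PATTERN = re.compile(r"if\s*\([^()]*[^=<>!]=[^=][^()]*\)")

SOURCE = """\
if (count = limit) { reset(); }   // likely a typo for ==
if (count == limit) { reset(); }  // fine
"""

for lineno, line in enumerate(SOURCE.splitlines(), start=1):
    if PATTERN.search(line):
        print("line %d: possible assignment in condition: %s"
              % (lineno, line.strip()))
```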

    Quantum Algorithms for Solving Hard Constrained Optimization Problems

    In this research, combinatorial optimization techniques for solving constraint problems have been examined, together with a study of the quantum era and of market leaders such as IBM, D-Wave, Google, Xanadu, AWS-Braket and Microsoft. We have learned about their communities, platforms and research status, and studied the postulates of quantum mechanics that underpin the most efficient quantum systems and algorithms. To determine whether Constraint Satisfaction Problems (CSP) can be solved more efficiently with quantum computing, a scenario was defined so that both classical and quantum computing would have a good point of reference. First, the proof of concept focuses on the social worker scheduling problem, and later on batch preparation and order picking as a generalization of the Social Workers Problem (SWP).

    The SWP is a combinatorial optimization problem that can, at best, be solved in exponential time; since the SWP is NP-hard, this motivates approaches beyond classical computation for its resolution. Today, the focus on quantum computing is no longer only its enormous computing power but also the use of its imperfections in this Noisy Intermediate-Scale Quantum (NISQ) era to create a powerful machine learning device that uses the variational principle to solve optimization problems by reducing their complexity class. The thesis proposes a (quadratic) formulation to solve the social workers' scheduling problem efficiently using the Variational Quantum Eigensolver (VQE), the Quantum Approximate Optimization Algorithm (QAOA), the Minimum Eigen Optimizer and the ADMM optimizer. The quantum feasibility of the algorithm has been modelled in QUBO form with Docplex, simulated with Cirq and Or-Tools, and tested on IBMQ computers.

    After analyzing the results of this approach, a scenario was designed to solve the SWP as quantum case-based reasoning (qCBR), both quantumly and classically, thereby contributing a quantum algorithm focused on artificial intelligence and machine learning. qCBR is a machine learning technique that solves new problems using experience, as humans do. Experience is represented as a case memory containing previously solved problems, and a synthesis technique adapts prior experience to the new problem.

    In the definition of the SWP, if the patients are replaced with batches of orders and the social workers with mobile robots, the objective function and the constraints generalize. To this end, a proof of concept and a new formulation, named qRobot, have been proposed to solve picking and batching problems. The proof of concept ran on a Raspberry Pi 4 and tested the integration of quantum computing into mobile robotics on one of the most demanded problems in this industrial sector: picking and batching. It was tested on different technologies, and the results were promising. Furthermore, when extra computing power is needed, the robot parallelizes part of the operations in hybrid (quantum + classical) computing, accessing CPUs and QPUs distributed in a public or private cloud. We also developed a stable (ARM64) environment inside the robot (Raspberry) to run gradient operations and other quantum algorithms on IBMQ, Amazon Braket (D-Wave) and Pennylane, locally or remotely.

    To improve the execution time of variational algorithms in this NISQ era and the next, EVA, a quantum Exponential Value Approximation algorithm, has been proposed. To date, VQE is the flagship of quantum computing. On the market-leading quantum cloud computing platforms, the cost of experimenting with quantum circuits is proportional to the number of circuits run on those platforms: more circuits, higher cost. One thing VQE achieves in this low-qubit era is shallow circuit depth, by dividing the Hamiltonian into a list of many small circuits (Pauli matrices), but this very fact makes simulating with VQE very expensive in the cloud. For this reason, EVA was designed to compute the expected value with a single circuit. Even though the hypothesis of this work has been answered by the studies carried out, further research can still propose new quantum algorithms to improve combinatorial optimization problems.
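    None of the thesis's QUBO models appear in the abstract; the sketch below shows, with purely illustrative coefficients, what a QUBO encoding of a toy SWP-like assignment looks like, solved here by brute-force enumeration of the same objective that VQE or QAOA would minimize variationally on a quantum device.

```python
# Minimal QUBO sketch (illustrative coefficients, not the thesis's
# model): assign 2 workers to 2 visits. Binary x[w, v] is 1 if worker
# w covers visit v; a quadratic penalty enforces "each visit covered
# exactly once". VQE/QAOA would minimize this same energy function.
from itertools import product

travel = {(0, 0): 1, (0, 1): 4, (1, 0): 3, (1, 1): 2}  # cost of w covering v
P = 10  # penalty weight, larger than any feasible cost difference

def qubo_energy(x):
    # Objective: total travel cost of the chosen assignments.
    cost = sum(travel[w, v] * x[w, v] for w in range(2) for v in range(2))
    # Constraint as a quadratic penalty: (sum_w x[w, v] - 1)^2 per visit.
    for v in range(2):
        cost += P * (sum(x[w, v] for w in range(2)) - 1) ** 2
    return cost

keys = list(product(range(2), range(2)))  # all (worker, visit) pairs
best = min((dict(zip(keys, bits)) for bits in product((0, 1), repeat=4)),
           key=qubo_energy)
print("best assignment:", sorted(k for k, v in best.items() if v),
      "energy:", qubo_energy(best))
```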

    Cyber Security of Critical Infrastructures

    Critical infrastructures are vital assets for public safety, economic welfare, and the national security of countries. Their vulnerabilities have increased with the widespread use of information technologies, and as critical national infrastructures become more vulnerable to cyber-attacks, their protection becomes a significant issue for organizations as well as nations. The risks to continued operations, whether from failing to upgrade aging infrastructure or from not meeting mandated regulatory regimes, are considered highly significant, given the demonstrable impact of such circumstances. Due to the rapid increase of sophisticated cyber threats targeting critical infrastructures with significant destructive effects, the cybersecurity of critical infrastructures has become an agenda item for academics, practitioners, and policy makers. A holistic view that covers technical, policy, human, and behavioural aspects is essential to handle the cyber security of critical infrastructures effectively. Moreover, the ability to attribute crimes to criminals is a vital element of avoiding impunity in cyberspace. This book presents both research and practical aspects of cyber security in critical infrastructures. Aligned with the interdisciplinary nature of cyber security, authors from academia, government, and industry have contributed 13 chapters. The issues discussed and analysed include cybersecurity training, maturity assessment frameworks, malware analysis techniques, ransomware attacks, security solutions for industrial control systems, and privacy preservation methods.

    Dagstuhl News January - December 2008

    "Dagstuhl News" is a publication edited especially for the members of the Foundation "Informatikzentrum Schloss Dagstuhl" to thank them for their support. The News give a summary of the scientific work being done in Dagstuhl. Each Dagstuhl Seminar is presented by a small abstract describing the contents and scientific highlights of the seminar as well as the perspectives or challenges of the research topic