
    A comparison and contrast of APKTool and Soot for injecting blockchain calls into Android applications

    The injection of blockchain calls into Android applications is an emerging and important technique for Android application developers. Blockchain technology provides a way of securely storing sensitive data and distributing that data while guaranteeing immutability. This paper compares two compiler-based tools, APKTool and the Soot framework, and examines how each can be used to inject blockchain calls into Android applications. A major contribution of this paper is its comparison of APKTool and Soot for injecting blockchain calls, including the difficulties each tool introduces when implementing such an injection. To the best of our knowledge, neither the Soot framework nor APKTool has previously been used to inject blockchain calls. The reason is the complexity of configuring blockchain calls in Android applications, due in part to the constant changes to the API calls in the Android framework. This presents a challenge because the Soot and APKTool compilers have to be modified to adapt to changes in the Android API.
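
    To make the comparison concrete, the following is a minimal sketch of how such an injection could look with Soot's Jimple API: a BodyTransformer that inserts a static call at the start of every method body. The target class com.example.BlockchainLogger and its record(String) method are hypothetical stand-ins for a real blockchain client; only the Soot calls themselves are standard API.

        import java.util.Map;

        import soot.Body;
        import soot.BodyTransformer;
        import soot.PackManager;
        import soot.Scene;
        import soot.SootMethod;
        import soot.Transform;
        import soot.jimple.InvokeStmt;
        import soot.jimple.Jimple;
        import soot.jimple.JimpleBody;
        import soot.jimple.StringConstant;

        public class BlockchainInjector extends BodyTransformer {

            @Override
            protected void internalTransform(Body body, String phase, Map<String, String> options) {
                // Hypothetical blockchain client; it must be on Soot's classpath
                // (or registered via Scene.v().addBasicClass) to be resolvable.
                SootMethod logMethod = Scene.v()
                        .getSootClass("com.example.BlockchainLogger")
                        .getMethod("void record(java.lang.String)");
                JimpleBody jBody = (JimpleBody) body;
                // Build the call BlockchainLogger.record("<enclosing method signature>").
                InvokeStmt call = Jimple.v().newInvokeStmt(
                        Jimple.v().newStaticInvokeExpr(
                                logMethod.makeRef(),
                                StringConstant.v(body.getMethod().getSignature())));
                // Insert after the identity statements that bind 'this' and parameters.
                jBody.getUnits().insertBefore(call, jBody.getFirstNonIdentityStmt());
            }

            public static void main(String[] args) {
                // Register the transformer in the Jimple transformation pack and let
                // Soot's driver process the input (e.g., an APK with -android-jars).
                PackManager.v().getPack("jtp").add(
                        new Transform("jtp.blockchainInjector", new BlockchainInjector()));
                soot.Main.main(args);
            }
        }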

    Modular Abstract Definitional Interpreters for WebAssembly

    Even though static analyses can improve performance and secure programs against vulnerabilities, no static whole-program analyses exist for WebAssembly (Wasm) to date. Part of the reason is that Wasm has many complex language features, and it is not obvious how to adapt existing analysis frameworks to them. This paper explores how abstract definitional interpretation can be used to develop sophisticated analyses for Wasm and other complex languages efficiently. In particular, we show that the semantics of Wasm can be decomposed into 19 language-independent components that abstract different aspects of Wasm. We have written a highly configurable definitional interpreter for full Wasm 1.0 in 1628 LOC against these components. Analysis developers can instantiate this interpreter with different value and effect abstractions to obtain abstract definitional interpreters that compute inter-procedural control- and data-flow information. This way, we develop the first whole-program dead-code, constant-propagation, and taint analyses for Wasm, each in less than 210 LOC. We evaluate our analyses on 1458 Wasm binaries collected by others in the wild. Our implementation is based on a novel framework for definitional abstract interpretation in Scala that eliminates the scalability issues of prior work.
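
    The core idea of instantiating one definitional interpreter with different value abstractions can be illustrated with a toy sketch (in Java rather than the paper's Scala framework, and over a two-instruction stand-in for Wasm; all names here are illustrative):

        import java.util.ArrayDeque;
        import java.util.Deque;
        import java.util.List;

        // The interpreter is written once against an abstract value domain and
        // instantiated per analysis: a concrete domain yields an ordinary
        // interpreter, an abstract domain yields a static analysis.
        interface ValueDomain<V> {
            V constI32(int n);   // meaning of an i32.const instruction
            V add(V a, V b);     // meaning of i32.add
        }

        final class MiniInterpreter<V> {
            private final ValueDomain<V> dom;
            MiniInterpreter(ValueDomain<V> dom) { this.dom = dom; }

            // A tiny stack language standing in for Wasm: "const n" / "add".
            V run(List<String> program) {
                Deque<V> stack = new ArrayDeque<>();
                for (String ins : program) {
                    if (ins.startsWith("const ")) {
                        stack.push(dom.constI32(Integer.parseInt(ins.substring(6))));
                    } else if (ins.equals("add")) {
                        V b = stack.pop(), a = stack.pop();
                        stack.push(dom.add(a, b));
                    }
                }
                return stack.peek();
            }

            public static void main(String[] args) {
                List<String> prog = List.of("const 2", "const 3", "add");
                // Concrete instantiation: ordinary execution, prints 5.
                System.out.println(new MiniInterpreter<>(new ValueDomain<Integer>() {
                    public Integer constI32(int n) { return n; }
                    public Integer add(Integer a, Integer b) { return a + b; }
                }).run(prog));
                // Abstract instantiation: constant propagation with null as
                // "unknown" (top); prints 5 here, null once any input is unknown.
                System.out.println(new MiniInterpreter<>(new ValueDomain<Integer>() {
                    public Integer constI32(int n) { return n; }
                    public Integer add(Integer a, Integer b) {
                        return (a == null || b == null) ? null : a + b;
                    }
                }).run(prog));
            }
        }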

    Modular Collaborative Program Analysis

    With our world increasingly relying on computers, it is important to ensure the quality, correctness, security, and performance of software systems. Static analysis, which computes properties of computer programs without executing them, has been an important method to achieve this for decades. However, static analysis faces major challenges in increasingly complex programming languages and software systems, as well as increasing and sometimes conflicting demands for soundness, precision, and scalability. In order to cope with these challenges, it is necessary to build static analyses for complex problems from small, independent, yet collaborating modules that can be developed in isolation and combined in a plug-and-play manner. So far, no generic architecture to implement and combine a broad range of dissimilar static analyses exists. The goal of this thesis is thus to design such an architecture and implement it as a generic framework for developing modular, collaborative static analyses. We use several diverse case-study analyses from which we systematically derive requirements to guide the design of the framework. Based on this, we propose a blackboard-architecture style of collaboration between analyses, which we implement in the OPAL framework. We also develop a formal model of our architecture's core concepts and show how it enables freely composing analyses while retaining their soundness guarantees. We showcase and evaluate our architecture using the case-study analyses, each of which shows how important and complex problems of static analysis can be addressed using a modular, collaborative implementation style. In particular, we show how a modular architecture for the construction of call graphs ensures consistent soundness of different algorithms. We show how modular analyses for different aspects of immutability mutually benefit each other. Finally, we show how the analysis of method purity can benefit from the use of other complex analyses in a collaborative manner and from exchanging different analysis implementations that exhibit different characteristics. Each of these case studies improves over the respective state of the art in terms of soundness, precision, and/or scalability, and shows how our architecture enables experimenting with and fine-tuning trade-offs between these qualities.
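
    A deliberately tiny sketch of the blackboard style described above (an illustration of the idea, not the actual OPAL API): analyses collaborate only through a shared property store, and an analysis that depends on a property re-runs whenever a producer refines it.

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;
        import java.util.function.Consumer;

        final class Blackboard {
            private final Map<String, Object> properties = new HashMap<>();
            private final Map<String, List<Consumer<Object>>> dependents = new HashMap<>();

            // An analysis declares a dependency and a continuation to re-run on updates.
            void whenUpdated(String propertyKey, Consumer<Object> continuation) {
                dependents.computeIfAbsent(propertyKey, k -> new ArrayList<>()).add(continuation);
                Object current = properties.get(propertyKey);
                if (current != null) continuation.accept(current); // replay the known value
            }

            // An analysis publishes a (possibly refined) result; dependents re-run.
            void publish(String propertyKey, Object value) {
                properties.put(propertyKey, value);
                for (Consumer<Object> c : dependents.getOrDefault(propertyKey, List.of())) {
                    c.accept(value);
                }
            }

            public static void main(String[] args) {
                Blackboard bb = new Blackboard();
                // E.g., a purity analysis consumes immutability results produced elsewhere.
                bb.whenUpdated("immutability:Point.x",
                        v -> System.out.println("purity analysis re-runs, field is " + v));
                bb.publish("immutability:Point.x", "mutable");            // initial result
                bb.publish("immutability:Point.x", "effectively-final");  // refinement
            }
        }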

    Flow- and context-sensitive points-to analysis using generalized points-to graphs

    © Springer-Verlag GmbH Germany 2016. Bottom-up interprocedural methods of program analysis construct summary flow functions for procedures to capture the effect of their calls and have been used effectively for many analyses. However, these methods seem computationally expensive for flow- and context-sensitive points-to analysis (FCPA), which requires modelling unknown locations accessed indirectly through pointers. Such accesses are commonly handled by using placeholders to explicate unknown locations or by using multiple call-specific summary flow functions. We generalize the concept of points-to relations by using the counts of indirection levels, leaving the unknown locations implicit. This allows us to create summary flow functions in the form of generalized points-to graphs (GPGs) without the need for placeholders. By design, GPGs represent both memory (in terms of classical points-to facts) and memory transformers (in terms of generalized points-to facts). We perform FCPA by progressively reducing generalized points-to facts to classical points-to facts. GPGs distinguish between may and must pointer updates, thereby facilitating strong updates within calling contexts. The size of GPGs is linearly bounded by the number of variables and is independent of the number of statements. Empirical measurements on SPEC benchmarks show that GPGs are indeed compact in spite of large procedure sizes. This allows us to scale FCPA to 158 kLoC using GPGs (compared to 35 kLoC reported by liveness-based FCPA). Thus GPGs hold a promise of efficiency and scalability for FCPA without compromising precision.
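
    The abstract's central device, points-to edges annotated with indirection-level counts instead of placeholder nodes, can be pictured with a small sketch. The (i,j) encodings of the pointer statements below follow the paper's general scheme but are simplified; the code is illustrative, not the paper's implementation.

        import java.util.List;

        // A generalized points-to edge x --(i,j)--> y encodes a pointer assignment
        // with i indirection levels through x on the left-hand side and j
        // indirection levels through y on the right-hand side, so the unknown
        // intermediate locations never need placeholder nodes.
        record GpgEdge(String lhs, int i, String rhs, int j) {
            // A classical points-to fact "lhs points to rhs" is the special case (1,0).
            boolean isClassical() { return i == 1 && j == 0; }
            @Override public String toString() {
                return lhs + " --(" + i + "," + j + ")--> " + rhs;
            }
        }

        class GpgDemo {
            public static void main(String[] args) {
                List<GpgEdge> gpg = List.of(
                        new GpgEdge("x", 1, "y", 0),   // x = &y : already a classical fact
                        new GpgEdge("p", 1, "q", 1),   // p = q  : copy; q's target unknown
                        new GpgEdge("r", 2, "s", 1));  // *r = s : indirect store
                for (GpgEdge e : gpg) {
                    System.out.println(e + (e.isClassical() ? "  [classical]" : ""));
                }
            }
        }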

    Automated Realistic Test Input Generation and Cost Reduction in Service-centric System Testing

    Service-centric System Testing (ScST) is more challenging than testing traditional software due to the complexity of service technologies and the limitations imposed by the SOA environment. One of the most important problems in ScST is realistic test data generation. Realistic test data is often generated manually or drawn from an existing source, making it hard to automate and laborious to produce. Another limitation that makes ScST challenging is the cost associated with invoking services during the testing process. This thesis aims to provide solutions to these problems: automated realistic input generation and cost reduction in ScST. To automate realistic test data generation, the concept of Service-centric Test Data Generation (ScTDG) is presented, in which existing services are used as realistic data sources. ScTDG minimises the need for tester input and the dependence on existing data sources by automatically generating service compositions that can generate the required test data. In experimental analysis, our approach achieved between 93% and 100% success rates in generating realistic data, while state-of-the-art automated test data generation achieved only between 2% and 34%. The thesis addresses cost concerns at the test data generation level by enabling data source selection in ScTDG. Source selection in ScTDG has many dimensions, such as cost, reliability, and availability. This thesis formulates it as an optimisation problem and presents a multi-objective characterisation of service selection in ScTDG, aiming to reduce the cost of test data generation. A cost-aware Pareto-optimal test suite minimisation approach addressing testing cost concerns during test execution is also presented. The approach adapts traditional multi-objective minimisation approaches to the ScST domain by formulating ScST concerns such as invocation cost and test case reliability. In experimental analysis, the approach achieved reductions of between 69% and 98.6% in the monetary cost of service invocations during testing.
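
    As a rough illustration of cost-aware minimisation (a simple greedy heuristic standing in for the thesis's multi-objective Pareto approach; all names are hypothetical), one can repeatedly select the test case that covers the most still-uncovered requirements per unit of invocation cost:

        import java.util.ArrayList;
        import java.util.HashSet;
        import java.util.List;
        import java.util.Set;

        record TestCase(String id, Set<String> covered, double invocationCost) { }

        class GreedyMinimiser {
            static List<TestCase> minimise(List<TestCase> suite) {
                Set<String> uncovered = new HashSet<>();
                suite.forEach(t -> uncovered.addAll(t.covered()));
                List<TestCase> selected = new ArrayList<>();
                while (!uncovered.isEmpty()) {
                    TestCase best = null;
                    double bestRatio = 0;
                    for (TestCase t : suite) {
                        long gain = t.covered().stream().filter(uncovered::contains).count();
                        double ratio = gain / t.invocationCost(); // coverage gained per cost unit
                        if (gain > 0 && ratio > bestRatio) { best = t; bestRatio = ratio; }
                    }
                    if (best == null) break; // remaining requirements unreachable
                    selected.add(best);
                    uncovered.removeAll(best.covered());
                }
                return selected;
            }

            public static void main(String[] args) {
                List<TestCase> suite = List.of(
                        new TestCase("t1", Set.of("b1", "b2"), 0.10),        // cheap
                        new TestCase("t2", Set.of("b1", "b2", "b3"), 1.00),  // expensive superset
                        new TestCase("t3", Set.of("b3"), 0.05));
                // Picks t1 then t3, avoiding the costly t2.
                minimise(suite).forEach(t -> System.out.println(t.id()));
            }
        }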

    Fundamental Approaches to Software Engineering

    This open access book constitutes the proceedings of the 24th International Conference on Fundamental Approaches to Software Engineering, FASE 2021, which took place during March 27–April 1, 2021, and was held as part of the Joint Conferences on Theory and Practice of Software, ETAPS 2021. The conference was planned to take place in Luxembourg but changed to an online format due to the COVID-19 pandemic. The 16 full papers presented in this volume were carefully reviewed and selected from 52 submissions. The book also contains 4 Test-Comp contributions.

    An Adaptable Middleware of Middlewares Based on Services, Components, and Aspects

    This habilitation thesis presents my work on the software engineering of middleware, a domain at the crossroads of distributed computing and software engineering. Middleware is the software layer that abstracts away the heterogeneity of distributed computing technologies and addresses the interoperability, portability, adaptation, and separation-of-concerns needs of distributed applications. My work has been guided by two open research questions: 1) what is the most appropriate programming paradigm for distributed applications? 2) what is the most appropriate organisation for middleware? The first part presents a synthesis of my work and contributions. First, my work addressed the transition from objects to CORBA components, yielding two major contributions: the CorbaScript scripting language, standardised at the OMG, and the OpenCCM platform for developing, deploying, executing, and administering distributed applications based on CORBA components. Second, I focused on the design of highly adaptable middleware frameworks. This work, based on reflective Fractal components, produced an attribute-based programming framework on which three flexible frameworks were built: for transaction management, for the deployment of heterogeneous distributed systems, and for real-time Java components. Finally, my work led to the Services Components Aspects (SCA) model and the FraSCAti middleware of middlewares. The second part zooms in on the FraSCAti project. Its scientific contribution is a reflective middleware for service-oriented computing that combines two original ideas: the notion of a middleware of middlewares and the reflective Services Components Aspects model. Starting from the observation that no universal middleware can cover the needs of all distributed applications, the FraSCAti project proposes an extensible middleware framework for the integration and elegant composition of existing SOA middleware and technologies, that is, a middleware of middlewares. The reflective SCA model, in turn, is the fruitful marriage of the OASIS Service Component Architecture (SCA) standard, the Fractal component model, and aspect-oriented programming (AOP). In this model, everything is a reflective component, making it possible to dynamically adapt business applications, the middleware itself, network communication bindings, and non-functional aspects alike. This contribution has been applied to large-scale service orchestration, the construction of systems of systems, and a distributed multi-cloud platform. The last part takes stock of these contributions and presents my research perspectives, centred on software engineering for cloud computing.
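
    The "everything is a reflective component" principle can be pictured with a minimal sketch (an illustration only, not the Fractal or FraSCAti API): a component whose service bindings are first-class data that can be introspected and rebound at runtime, which is how business logic, middleware plumbing, and non-functional aspects alike become adaptable.

        import java.util.HashMap;
        import java.util.Map;
        import java.util.Set;
        import java.util.function.UnaryOperator;

        final class ReflectiveComponent {
            private final Map<String, UnaryOperator<String>> bindings = new HashMap<>();

            void bind(String service, UnaryOperator<String> impl) { bindings.put(service, impl); }
            Set<String> introspect() { return bindings.keySet(); } // reflective view
            String invoke(String service, String message) { return bindings.get(service).apply(message); }

            public static void main(String[] args) {
                ReflectiveComponent c = new ReflectiveComponent();
                c.bind("greet", msg -> "hello " + msg);
                System.out.println(c.invoke("greet", "world"));
                // Dynamic adaptation: weave a logging aspect around the existing binding.
                UnaryOperator<String> base = c.bindings.get("greet");
                c.bind("greet", msg -> {
                    System.out.println("[log] greet(" + msg + ")");
                    return base.apply(msg);
                });
                System.out.println(c.invoke("greet", "world"));
                System.out.println(c.introspect()); // [greet]
            }
        }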

    Symbolic execution of verification languages and floating-point code

    The focus of this thesis is a program analysis technique named symbolic execution. We present three main contributions to this field. First, an investigation into comparing several state-of-the-art program analysis tools at the level of an intermediate verification language over a large set of benchmarks, and improvements to the state of the art of symbolic execution for this language. This is explored via a new tool, Symbooglix, that operates on the Boogie intermediate verification language. Second, an investigation into performing symbolic execution of floating-point programs via a standardised theory of floating-point arithmetic that is supported by several existing constraint solvers. This is investigated via two independent extensions of the KLEE symbolic execution engine to support reasoning about floating-point operations (with one tool developed by the thesis author). Third, an investigation into the use of coverage-guided fuzzing as a means of solving constraints over finite data types, inspired by the difficulties associated with solving floating-point constraints. The associated prototype tool, JFS, which builds on the LibFuzzer project, can at present be applied to a wide range of SMT queries over bit-vector and floating-point variables, and shows promise on floating-point constraints.
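
    The idea behind JFS, solving an SMT constraint by turning it into a program and searching for an input that satisfies it, can be illustrated with a toy sketch (plain random sampling stands in for coverage-guided fuzzing, and the code is unrelated to the actual JFS/LibFuzzer implementation):

        import java.util.Random;

        class FuzzSolver {
            // Constraint under test: does a finite float x exist with x + 1.0f == x?
            // (It does, e.g. x = 2^24, where 1.0f falls below the rounding granularity.)
            static boolean constraint(float x) {
                return !Float.isInfinite(x) && !Float.isNaN(x) && x + 1.0f == x;
            }

            public static void main(String[] args) {
                Random rng = new Random(42);
                for (int i = 0; i < 1_000_000; i++) {
                    // Sample the whole 32-bit pattern space, not just "nice" floats.
                    float x = Float.intBitsToFloat(rng.nextInt());
                    if (constraint(x)) {
                        System.out.println("sat: x = " + x);
                        return;
                    }
                }
                System.out.println("no model found (search budget exhausted)");
            }
        }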