58 research outputs found

    CIL to Java-bytecode translation for static analysis leveraging

    A formal translation of CIL (i.e., .NET) bytecode into Java bytecode is introduced and proved sound with respect to the language semantics. The resulting code is then analyzed with Julia, an industrial static analyzer of Java bytecode. The overall process of translation and analysis is fast, scales up to industrial programs, and introduces a negligible number of false alarms. The main result of this work is to leverage existing, mature, and sound analyzers for Java bytecode by applying them to the (translated) CIL bytecode.
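
    The core of such a translation is a semantics-preserving mapping from CIL instructions to JVM instructions. The sketch below is a hypothetical, heavily simplified illustration of that idea (a handful of stack-operation mappings for int-typed code, nothing like the paper's full, proved-sound translation):

```python
# Toy CIL -> JVM bytecode mapping (illustrative only; a real translation must
# handle operands, types, the full instruction set, and metadata soundly).
CIL_TO_JVM = {
    "ldarg.0": "iload_0",   # load first argument (assumed int) onto the stack
    "ldc.i4":  "ldc",       # push a 32-bit integer constant (operand elided)
    "add":     "iadd",      # integer addition (assumes int operands)
    "ret":     "ireturn",   # return top of stack (assumes int return type)
}

def translate(cil_method):
    """Translate a list of CIL opcodes into JVM opcodes, one-to-one."""
    jvm = []
    for op in cil_method:
        if op not in CIL_TO_JVM:
            raise NotImplementedError(f"no mapping for CIL opcode {op!r}")
        jvm.append(CIL_TO_JVM[op])
    return jvm

# A method computing `arg0 + c` for an integer constant c:
print(translate(["ldarg.0", "ldc.i4", "add", "ret"]))
# ['iload_0', 'ldc', 'iadd', 'ireturn']
```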

    Code Generation for Efficient Query Processing in Managed Runtimes

    In this paper we examine opportunities arising from the convergence of two trends in data management: in-memory database systems (IMDBs), which have received renewed attention following the availability of affordable, very large main memory systems; and language-integrated query, which transparently integrates database queries with programming languages (thus addressing the famous ‘impedance mismatch’ problem). Language-integrated query not only gives application developers a more convenient way to query external data sources like IMDBs, but also to use the same querying language to query an application’s in-memory collections. The latter offers further transparency to developers as the query language and all data is represented in the data model of the host programming language. However, compared to IMDBs, this additional freedom comes at a higher cost for query evaluation. Our vision is to improve in-memory query processing of application objects by introducing database technologies to managed runtimes. We focus on querying and we leverage query compilation to improve query processing on application objects. We explore different query compilation strategies and study how they improve the performance of query processing over application data. We take C# as the host programming language as it supports language-integrated query through the LINQ framework. Our techniques deliver significant performance improvements over the default LINQ implementation. Our work makes important first steps towards a future where data processing applications will commonly run on machines that can store their entire datasets in-memory, and will be written in a single programming language employing language-integrated query and IMDB-inspired runtimes to provide transparent and highly efficient querying.
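
    Query compilation replaces an interpreted operator pipeline with generated, specialized loop code. The Python sketch below is a hypothetical illustration of that strategy (the paper's system works on C#/LINQ, not Python): it generates and compiles a fused filter-and-project loop from a simple query description.

```python
# Illustrative query-compilation sketch: turn a declarative query into a
# specialized, fused loop instead of evaluating chained iterators.

def compile_query(predicate_src, projection_src):
    """Generate a function equivalent to
       [projection(x) for x in data if predicate(x)]
    but with the predicate and projection inlined into one loop."""
    src = (
        "def query(data):\n"
        "    out = []\n"
        "    for x in data:\n"
        f"        if {predicate_src}:\n"
        f"            out.append({projection_src})\n"
        "    return out\n"
    )
    namespace = {}
    exec(src, namespace)          # compile the generated source once
    return namespace["query"]

# "SELECT x*2 FROM data WHERE x > 10", compiled ahead of use:
query = compile_query("x > 10", "x * 2")
print(query([5, 11, 42]))  # [22, 84]
```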

    Concept Location based on System Dependency Graphs

    Master's dissertation in Computer Engineering. Software maintenance can be seen as the act of correcting errors in the software, improving it, and/or adding new features. This is one of the most difficult and frequent jobs of a software engineer and one of the most important and expensive parts of software development. Maintaining software is complex mainly because, before making a change to the program, software engineers need to find the location, or locations, where the change will be made; in other words, first they need to understand the program. Real applications are often huge, and sometimes these programs are old or were written by someone else, making it difficult to find the location where the change should be applied. There are various techniques for finding these locations while minimizing the time spent, but this phase of software development continues to be one of the most expensive and time-consuming. The objective of this Master's work is to combine Program Comprehension techniques, such as comment analysis and System Dependency Graph visualization, creating a tool that is easy to use and that can simplify and reduce the time spent understanding a program.
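
    Concept location over a dependency graph typically means seeding a search with lexical matches (identifiers, comments) and then expanding along dependency edges. The sketch below is a minimal hypothetical illustration, assuming a graph of program elements with textual labels (not the dissertation's actual tool):

```python
# Toy concept location: seed with keyword matches, expand along dependencies.
from collections import deque

# Hypothetical system dependency graph: node -> (comment text, dependencies)
SDG = {
    "parse_config": ("reads and validates the config file", ["load_file"]),
    "load_file":    ("opens a file and returns its bytes", []),
    "apply_config": ("applies validated config to the app", ["parse_config"]),
    "render_ui":    ("draws the main window", []),
}

def locate(keyword, max_hops=1):
    """Return nodes matching the keyword, plus dependencies within max_hops."""
    seeds = [n for n, (text, _) in SDG.items() if keyword in text or keyword in n]
    found, queue = set(seeds), deque((s, 0) for s in seeds)
    while queue:
        node, hops = queue.popleft()
        if hops == max_hops:
            continue
        for dep in SDG[node][1]:
            if dep not in found:
                found.add(dep)
                queue.append((dep, hops + 1))
    return found

print(locate("config"))  # {'parse_config', 'apply_config', 'load_file'}
```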

    Optimizing JavaScript Engines for Modern-day Workloads

    In modern times, we have seen a tremendous increase in the popularity and usage of web-based applications. Applications such as presentation software and word processors, which were traditionally considered desktop applications, are being ported to the web by compiling them to JavaScript. Since JavaScript is the de facto language of the web, JavaScript engine performance significantly affects the overall web application experience. JavaScript, initially intended solely as a client-side scripting language for web browsers, is now being used to implement server-side web applications (node.js) that traditionally have been written in languages like Java. Web application developers expect "C"-like performance out of their applications. Thus, there is a need to reevaluate the optimization strategies implemented in modern-day engines. Thesis statement: I propose that by using run-time and ahead-of-time profiling and type-specialization techniques it is possible to improve the performance of JavaScript engines to cater to the needs of modern-day workloads. In this dissertation, we present an improved synergistic type-specialization strategy for optimized JavaScript code execution, implemented on top of a research JavaScript engine called MuscalietJS. Our technique combines type feedback and type inference to reinforce and augment each other in a unique way. We then present a novel deoptimization strategy that enables type-specialized code generation on top of typed, stack-based virtual machines like the CLR. We also describe a server-side offline profiling technique to collect profile information for web applications, which helps client JavaScript engines (running in the browser) avoid deoptimizations and improve the performance of the applications. Finally, we describe a technique to improve the performance of server-side JavaScript code by making use of intelligent profile caching and two new type stability heuristics.
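
    Type feedback plus specialization can be sketched as: record the types observed at a site, compile a fast path specialized to them, and fall back (deoptimize) when an unexpected type appears. Below is a hypothetical Python illustration of that control flow, not any engine's actual mechanism:

```python
# Toy type-feedback + specialization: profile a call site, specialize,
# and deoptimize to a generic path when assumptions break.

def make_specializing_add():
    observed = []          # type feedback recorded at this "call site"

    def generic_add(a, b):
        observed.append((type(a), type(b)))
        return a + b       # fully generic, slow path

    def specialized_int_add(a, b):
        # Guard: deoptimize if the type assumption no longer holds.
        if type(a) is int and type(b) is int:
            return a + b   # fast path, specialized for ints
        return generic_add(a, b)

    def add(a, b):
        # After warm-up samples agreeing on (int, int), take the fast path.
        if len(observed) >= 2 and all(t == (int, int) for t in observed):
            return specialized_int_add(a, b)
        return generic_add(a, b)

    return add

add = make_specializing_add()
add(1, 2); add(3, 4)        # warm-up: feedback says (int, int)
print(add(5, 6))            # 11, via the specialized fast path
print(add("a", "b"))        # 'ab', guard fails -> deoptimized generic path
```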

    The 14th Overture Workshop: Towards Analytical Tool Chains

    This report contains the proceedings of the 14th Overture Workshop, organized in connection with the Formal Methods 2016 symposium. It includes nine papers describing technological progress in relation to the Overture/VDM tool support and its connection with other tools such as Crescendo, Symphony, INTO-CPS, TASTE and ViennaTalk.

    On expressing different concurrency paradigms on virtual execution environment

    Virtual execution environments (VEEs) such as the Java Virtual Machine (JVM) and the Microsoft Common Language Runtime (CLR) were designed when the dominant computer architecture featured a von Neumann interface to programs: a single processor hiding all the complexity of parallel computation inside its design. Programs are expressed in an intermediate form executed by the VEE, which defines an abstract computational model whose concurrency model was influenced by these design choices: it basically exposes the multi-threading model of the underlying operating system. Recently, computer systems have introduced computational units in which concurrency is explicit and under program control. Relevant examples are Graphics Processing Units (GPUs, such as those from Nvidia or AMD) and the Cell BE architecture, which allow explicit control of single processing units, local memories and communication channels. Unfortunately, programs designed for virtual machines cannot access these resources, since they are not available through the abstractions provided by the VEE. A major redesign of VEEs seems to be necessary in order to bridge this gap. In this thesis we study the problem of exposing non-von Neumann computing resources within the virtual machine without a redesign of the whole execution infrastructure. We express parallel computations relying on extensible metadata and reflection to encode information. Meta-programming techniques are then used to rewrite the program into an equivalent one that uses the special-purpose underlying architecture. We provide a case study in which this approach is applied to compiling Common Intermediate Language (CIL) methods to multi-core GPUs; we show that it is possible to access these non-standard computing resources without any change to the virtual machine design.
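
    The metadata-plus-metaprogramming idea can be illustrated as: annotate a method, then let a rewriter substitute an equivalent implementation targeting the special-purpose hardware. Below is a hypothetical Python sketch using a decorator as the "metadata" (the thesis targets CIL attributes and GPUs, not Python):

```python
# Toy metadata-driven rewriting: a decorator marks methods as offloadable,
# and a "rewriter" swaps in an accelerator-targeted equivalent at call time.

ACCELERATED = {}   # registry: name -> accelerator implementation

def offloadable(func):
    """Metadata marker: run the accelerated version when one is registered."""
    def dispatch(*args):
        impl = ACCELERATED.get(func.__name__, func)
        return impl(*args)
    return dispatch

@offloadable
def vector_add(a, b):
    # Plain sequential semantics; the "reference" implementation.
    return [x + y for x, y in zip(a, b)]

# A rewriter would generate this from the method body; here we fake it.
def vector_add_gpu(a, b):
    # Stand-in for a generated GPU kernel launch (same observable behavior).
    return [x + y for x, y in zip(a, b)]

ACCELERATED["vector_add"] = vector_add_gpu

print(vector_add([1, 2], [3, 4]))  # [4, 6], dispatched to the "GPU" version
```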

    Towards Automated Performance Analysis of Programs by Runtime Verification

    This thesis makes a contribution to the field of Runtime Verification, a lightweight formal method for the analysis of computational systems. The contribution is made in multiple parts. First, a new language is introduced for the specification of properties at the source-code level of programs. These properties typically concern program performance. Second, automatic monitoring and instrumentation techniques are introduced for the specification language. Third, an approach for explaining violations of these properties by program runs is introduced. Finally, the resulting body of theoretical work is implemented in an extensive ecosystem of tools for program analysis. This ecosystem is described in detail, along with its application to a real-world system at CERN. The work presented in this thesis diverges from past work in the Runtime Verification community. Instead of focusing on maximising the expressiveness of the specification formalism and solving the resulting monitoring and instrumentation problems, it focuses on introducing a language in which properties that often need to be checked over real-world programs can easily be expressed. In the direction of instrumentation, the source-code level of abstraction of our specification language allows an approach to instrumentation that diverges from much previous work. Many previous approaches have treated instrumentation as a problem separate from specification, usually providing a language in which one can describe how instrumentation should be performed. With our specification language, instrumentation can be performed automatically with respect to a specification. Further, an area that has received little attention in the Runtime Verification community is the analysis of verdicts resulting from monitoring programs with respect to specifications. The contributions to this area described in this thesis take the form of tools in the ecosystem. These tools enable detailed exploration of monitoring information, and mark a step towards automated generation of explanations of verdicts. Following the description of the extensive set of tools, this thesis concludes with an in-depth discussion of their application to perform significant analyses of software used at CERN. Ultimately, the work described, including the theoretical foundations and implementations, forms the beginnings of a program analysis project whose aim, through continued development at CERN, is to enable detailed analysis of the performance of programs by software engineers with minimal effort.
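
    A source-level performance property with automatic instrumentation might read: "every call to the monitored function completes within a given time bound". The hypothetical Python sketch below monitors such a property via a decorator standing in for automatic instrumentation (the thesis's actual specification language and tooling differ):

```python
# Toy runtime verification of a timing property: calls to the monitored
# function must complete within a bound; violations are recorded as evidence.
import time

violations = []

def monitor_duration(bound_seconds):
    """Instrument a function so each call is checked against a time bound."""
    def instrument(func):
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = func(*args, **kwargs)
            elapsed = time.perf_counter() - start
            if elapsed > bound_seconds:
                violations.append((func.__name__, elapsed))  # verdict: violated
            return result
        return wrapper
    return instrument

@monitor_duration(bound_seconds=0.01)
def slow_step():
    time.sleep(0.05)   # deliberately exceeds the bound

slow_step()
print(violations)      # [('slow_step', ~0.05)] -- raw material for explanation
```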

    Worst-case resource-usage analysis of java card classic editions application bytecode

    Java Card is the dominant smartcard technology in use today, with over 12 billion Java Card smartcards having shipped globally in the last 15 years. Almost exclusively, the deployed Java Card smartcards are instances of a Classic edition, for which garbage collection is an optional component even in the most recent Classic edition. Poorly written or malicious Java Card applications may drain the available memory of a Java Card Virtual Machine to the point that the card becomes unusable, and undisciplined use of the transaction mechanism may exhaust the available transaction buffers, resulting in programmatic aborts by the Java Card Runtime Environment and so limiting the range of services a Java Card application may successfully offer. Given the size and global nature of the user base, and the commercial importance of Java Card, there is a stunning lack of tools supporting analysis or certification of the memory, transactional or CPU usage of Java Card applications. In this thesis we present a worst-case resource-usage analysis tool for Java Card which is capable of producing worst-case memory-usage and worst-case execution-time estimates for Java Card applications (also known as applets). Our main theoretical contribution is a static analysis of Java Card applets at the bytecode level which conservatively approximates properties of interest affecting memory usage, input-output/APDU usage and transaction usage. This static analysis provides the high-level information for subsequent worst-case resource-usage analysis in our tool, which exploits well-known results and techniques from hard real-time systems. We generate a resource-usage graph per registered applet lifecycle method, with the entry point as the start node and the control flow returning to the Java Card Runtime Environment as the final node. We use the Implicit Path Enumeration Technique to generate and solve Integer Linear Programming problems representing the worst-case memory usage and worst-case execution time.
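
    The Implicit Path Enumeration Technique bounds worst-case cost by maximising the sum of per-block costs times execution counts, subject to control-flow constraints, usually via an ILP solver. For an acyclic control-flow graph that optimum equals the most expensive entry-to-exit path, which the hypothetical sketch below computes directly (real IPET also handles loops via explicit loop-bound constraints):

```python
# Toy worst-case cost bound on an acyclic control-flow graph.
# IPET would maximise sum(cost[n] * count[n]) under ILP flow constraints;
# on a DAG that optimum equals the most expensive entry-to-exit path.
from functools import lru_cache

# Hypothetical CFG: node -> successors, with per-node costs (e.g., cycles).
CFG  = {"entry": ["a", "b"], "a": ["exit"], "b": ["c"], "c": ["exit"], "exit": []}
COST = {"entry": 1, "a": 10, "b": 2, "c": 3, "exit": 1}

@lru_cache(maxsize=None)
def worst_case(node):
    """Worst-case cost from `node` to the final node."""
    succ = CFG[node]
    if not succ:
        return COST[node]
    return COST[node] + max(worst_case(s) for s in succ)

print(worst_case("entry"))  # 12: entry -> a -> exit is the costliest path
```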

    An Expert System for Automatic Software Protection

    The abstract is in the attachment.