
    Modeling assembly program with constraints. A contribution to WCET problem

    Dissertation submitted for the degree of Master in Computational Logic. Model checking with program slicing has been successfully applied to compute the Worst Case Execution Time (WCET) of a program running on given hardware. This method lacks path feasibility analysis and suffers from the following problems: the model checker (MC) explores an exponential number of program paths irrespective of their feasibility, which limits the scalability of this method to multiple-path programs; and the witness trace returned by the MC corresponding to the WCET may not be feasible (executable), which may result in a solution that is not tight, i.e., it overestimates the actual WCET. This thesis complements the above method with path feasibility analysis and addresses these problems. To achieve this, we first validate the witness trace returned by the MC and generate test data if it is executable. For this we generate constraints over a trace and solve a constraint satisfaction problem. Experiments show that 33% of these traces (obtained while computing WCET on standard WCET benchmark programs) are infeasible. Second, we use constraint solving techniques to compute an approximate WCET based solely on the program (without taking into account the hardware characteristics), and suggest some feasible and probable worst-case paths which can produce the WCET. Each of these paths forms an input to the MC. A more precise WCET can then be computed on these paths using the above method; the maximum over all these paths is the WCET. In addition, we provide a mechanism to compute an upper bound on the over-approximation of the WCET computed using the model checking method. This effort of combining constraint solving techniques with model checking takes advantage of their strengths and makes WCET computation scalable and amenable to hardware changes. We use our technique to compute WCET on standard benchmark programs from Mälardalen University and compare our results with results from the model checking method.
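As a minimal illustration of the feasibility check this abstract describes (not the thesis's actual toolchain), the sketch below treats the branch conditions collected along a candidate worst-case trace as a small constraint satisfaction problem and searches for input data that satisfy all of them; if no assignment exists, the witness trace is infeasible. The variable names and conditions are hypothetical.

```python
from itertools import product

def find_test_data(variables, domain, path_conditions):
    """Search for an assignment to `variables` (each over `domain`) that
    satisfies every branch condition collected along a trace.
    Returns a satisfying assignment (test data), or None if the trace
    is infeasible."""
    for values in product(domain, repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(cond(env) for cond in path_conditions):
            return env
    return None

# Hypothetical trace through: if (x > 5) ... if (x < 3) ...
# Both branches on one path give contradictory conditions.
infeasible = [lambda e: e["x"] > 5, lambda e: e["x"] < 3]
feasible   = [lambda e: e["x"] > 5, lambda e: e["x"] + e["y"] == 10]

print(find_test_data(["x"], range(0, 11), infeasible))     # None: trace is infeasible
print(find_test_data(["x", "y"], range(0, 11), feasible))  # {'x': 6, 'y': 4}
```

A real implementation would hand the constraints to a proper solver rather than enumerate the domain, but the structure of the check is the same: satisfiability yields test data, unsatisfiability discards the trace.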

    Software reverse engineering education

    Software Reverse Engineering (SRE) is the practice of analyzing a software system, either in whole or in part, to extract design and implementation information. A typical SRE scenario would involve a software module that has worked for years and carries several rules of a business in its lines of code. Unfortunately, the source code of the application has been lost; what remains is “native” or “binary” code. Reverse engineering skills are also used to detect and neutralize viruses and malware, as well as to protect intellectual property. It became frighteningly apparent during the Y2K crisis that reverse engineering skills were not commonly held amongst programmers. Since that time, much research has been undertaken to formalize the types of activities that fall into the category of reverse engineering so that these skills can be taught to computer programmers and testers. To help address the lack of software reverse engineering education, several peer-reviewed articles on software reverse engineering, re-engineering, reuse, maintenance, evolution, and security were gathered with the objective of developing relevant, practical exercises for instructional purposes. The research revealed that SRE is fairly well described and most of the related activities fall into one of two

    Sawja: Static Analysis Workbench for Java

    Static analysis is a powerful technique for automatic verification of programs but raises major engineering challenges when developing a full-fledged analyzer for a realistic language such as Java. This paper describes the Sawja library: a static analysis framework, fully compliant with Java 6, which provides OCaml modules for efficiently manipulating Java bytecode programs. We present the main features of the library, including (i) efficient functional data structures for representing programs with implicit sharing and lazy parsing, (ii) an intermediate stack-less representation, and (iii) fast computation and manipulation of complete programs.
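The "implicit sharing" mentioned in the abstract can be illustrated by hash-consing: structurally equal subterms are built once and shared, so programs with repeated subexpressions stay compact and structural equality reduces to a pointer comparison. The sketch below is a deliberately simplified illustration in Python (Sawja itself is an OCaml library, and its actual representation differs); the `Expr` class and its constructors are hypothetical.

```python
class Expr:
    """Immutable expression node. `make` hash-conses nodes so that
    structurally equal expressions are the same object in memory."""
    _table = {}

    def __init__(self, op, *children):
        self.op = op
        self.children = children

    @classmethod
    def make(cls, op, *children):
        key = (op, children)          # children are already shared nodes
        node = cls._table.get(key)
        if node is None:
            node = cls(op, *children)
            cls._table[key] = node
        return node

a = Expr.make("var", "x")
b = Expr.make("var", "x")
c = Expr.make("add", a, Expr.make("const", 1))
d = Expr.make("add", b, Expr.make("const", 1))

print(a is b)  # True: one shared node for the repeated subterm
print(c is d)  # True: sharing propagates to enclosing expressions
```

Because nodes are interned bottom-up, deep structural equality checks on large bytecode representations become constant-time identity tests, which is one reason functional, shared representations scale to complete programs.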

    A comparison of code similarity analysers

    Copying and pasting of source code is a common activity in software engineering. Often the code is not copied as it is; it may be modified for various purposes, e.g. refactoring, bug fixing, or even software plagiarism. These code modifications can affect the performance of code similarity analysers, including code clone and plagiarism detectors, to a certain degree. We are interested in two types of code modification in this study: pervasive modifications, i.e. transformations that may have a global effect, and local modifications, i.e. code changes that are contained in a single method or code block. We evaluate 30 code similarity detection techniques and tools using five experimental scenarios for Java source code. These are (1) pervasively modified code, created with tools for source code and bytecode obfuscation, and boiler-plate code, (2) source code normalisation through compilation and decompilation using different decompilers, (3) reuse of optimal configurations over different data sets, (4) tool evaluation using ranked-based measures, and (5) local + global code modifications. Our experimental results show that in the presence of pervasive modifications, some of the general textual similarity measures can offer similar performance to specialised code similarity tools, whilst in the presence of boiler-plate code, highly specialised source code similarity detection techniques and tools outperform textual similarity measures. Our study strongly validates the use of compilation/decompilation as a normalisation technique. Its use reduced false classifications to zero for three of the tools. Moreover, we demonstrate that optimal configurations are very sensitive to a specific data set. After directly applying optimal configurations derived from one data set to another, the tools perform poorly on the new data set. The code similarity analysers are thoroughly evaluated not only based on several well-known pair-based and query-based error measures but also on each specific type of pervasive code modification. This broad, thorough study is the largest in existence and potentially an invaluable guide for future users of similarity detection in source code.
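To make the notion of a "general textual similarity measure" concrete, here is a minimal sketch of one such measure, token n-gram Jaccard similarity. This is an assumed illustration of the measure family the study refers to, not one of the 30 evaluated tools, and the code fragments are hypothetical.

```python
def ngrams(tokens, n=3):
    """All contiguous token n-grams of a token sequence, as a set."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def jaccard_similarity(code_a, code_b, n=3):
    """Token n-gram Jaccard similarity in [0, 1]. Purely textual:
    no parsing, so it survives many pervasive transformations but is
    blind to semantics."""
    a, b = ngrams(code_a.split(), n), ngrams(code_b.split(), n)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

original = "int total = 0 ; for ( int i = 0 ; i < n ; i ++ ) total += a [ i ] ;"
renamed  = "int sum = 0 ; for ( int j = 0 ; j < n ; j ++ ) sum += a [ j ] ;"

print(jaccard_similarity(original, original))       # 1.0
print(jaccard_similarity(original, renamed) > 0.0)  # True: shared trigrams survive renaming
```

Identifier renaming lowers the score but leaves structural trigrams (operators, delimiters, keywords) intact, which is consistent with the study's finding that simple textual measures remain competitive under some pervasive modifications.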

    Reverse Engineering and the Rise of Electronic Vigilantism: Intellectual Property Implications of Lock-Out Programs

    Over the past few years, there has been an abundance of scholarship dealing with the appropriate scope of copyright and patent protection for computer programs. This Article approaches those problems from a slightly different perspective, focusing on the discrete problem of lock-out programs. The choice of lock-out as a paradigm for exploring the interoperability question and the contours of copyright and patent protection of computer programs is informed by two considerations. First, for purposes of the interoperability inquiry, lock-out programs represent an extreme; they are discrete, self-contained modules that are highly innovative in design, yet that serve no purpose other than to regulate access to a computer or computer operating system. Copyright and patent analyses of the lock-out problem highlight a fundamental tension between intellectual property rights and considerations of public access, and so afford a useful vehicle for examining the scope of copyright and patent protection for computer programs generally. Second, lock-out may well become a defining technology of the coming “Information Age.” Pundits have prophesied a “set-top box” in every home that affords a gateway to an “information superhighway” where goods and services may be purchased and information accessed. Whether or not the manufacturer of the set-top box will be able to exclude unauthorized purveyors of goods, services, and information will significantly affect both the structure of the emerging market in information services and the nature of individual participation in that market. The purpose of this Article is twofold. First, the author argues that neither the copyright laws nor the patent laws preclude duplication of protected program features, including “lock” and “key” features, to whatever extent necessary to achieve full compatibility with an unpatented computer system. 
Second, and more generally, she addresses inconsistencies and conceptual flaws in the current understanding of copyright and patent protection for computer programs that emerge during the first inquiry, and proposes doctrinal modifications to resolve them. Although computer programs have been protected by both copyright and patent regimes for years, the precise contours of the protection these regimes afford remain unsettled. For that reason, some scholars, computer lawyers, and computer industry professionals have urged the adoption of sui generis protection for computer programs, but the question of sui generis protection may have become largely irrelevant. The United States has convinced many other countries to follow its lead in extending both copyright and patent protection to computer programs and is unlikely to change course. For better or worse, it seems we are stuck with the existing modes of intellectual property protection for computer programs. However, this Article argues that certain adjustments to the copyright and patent doctrines governing the protection of computer programs are necessary if the intellectual property laws are to continue to serve both their new and their traditional functions. Part I of this Article describes the facts and outcomes of two recent cases: Sega Enterprises Ltd. v. Accolade, Inc. and Atari Games Corp. v. Nintendo of America, Inc., both of which involved attempts to enforce intellectual property rights in lock-out programs. The remainder of the Article takes those cases as a starting point for discussion of the interoperability question and what it reveals about the scope and structure of copyright and patent protection for computer programs. Parts II and III explore the copyright implications of reverse engineering interface specifications and lock-out programs and of using the information gained thereby to create and market a compatible program. 
Part II focuses on the copyright issues resulting from intermediate copying during the reverse engineering process. Part III considers whether the reverse engineer may create a program that duplicates the “key” to the “lock” and other functional features of interoperability-related routines. Part IV addresses issues bearing on the validity of a lock-out patent. Finally, Part V considers whether, in light of the analyses in Parts II, III, and IV, attempts to enforce patents and copyrights against competitors who crack the code for a lock-out program constitute patent or copyright misuse. The Article concludes with some general reflections on the efficacy and viability of the copyright and patent models for intellectual property protection of computer programs.