
    Workload Modeling for Computer Systems Performance Evaluation


    GNU epsilon - an extensible programming language

    Reductionism is a viable strategy for designing and implementing practical programming languages, leading to solutions which are easier to extend, experiment with, and formally analyze. We formally specify and implement an extensible programming language, based on a minimalistic first-order imperative core language plus strong abstraction mechanisms, reflection and self-modification features. The language can be extended to very high levels: by using Lisp-style macros and code-to-code transforms which automatically rewrite high-level expressions into core forms, we define closures and first-class continuations on top of the core. Non-self-modifying programs can be analyzed and formally reasoned about, thanks to the language's simple semantics. We formally develop a static analysis and prove a soundness property with respect to the dynamic semantics. We develop a parallel garbage collector suitable for multi-core machines to permit efficient execution of parallel programs. Comment: 172 pages, PhD thesis.
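
    The closure construction described above can be pictured with a small, purely illustrative sketch. The Python fragment below is an assumption-laden toy, not epsilon's actual representation or transform: it shows the general shape of a closure-conversion pass that lifts each lambda into a named first-order procedure and rewrites the original expression into a hypothetical make-closure core form carrying the captured variables.

        # Illustrative sketch only: closure conversion as a code-to-code transform,
        # in the spirit of defining closures on top of a first-order core.
        # The expression encoding and the "make-closure"/"call-closure" forms are
        # hypothetical, not epsilon's actual data structures.
        lifted = {}        # name -> (params, body) of lifted first-order procedures
        counter = 0

        def fresh(prefix="proc"):
            global counter
            counter += 1
            return f"{prefix}{counter}"

        def free_vars(expr, bound=frozenset()):
            """Variables used in expr but not bound inside it."""
            kind = expr[0]
            if kind == "var":
                return set() if expr[1] in bound else {expr[1]}
            if kind == "lambda":                      # ("lambda", params, body)
                return free_vars(expr[2], bound | set(expr[1]))
            if kind == "call":                        # ("call", fn, args)
                vs = free_vars(expr[1], bound)
                for a in expr[2]:
                    vs |= free_vars(a, bound)
                return vs
            return set()                              # constants and the like

        def convert(expr):
            """Rewrite lambdas into ("make-closure", proc, captured) core forms."""
            kind = expr[0]
            if kind == "lambda":
                params, body = expr[1], expr[2]
                captured = sorted(free_vars(expr))
                name = fresh()
                # The lifted procedure takes the environment as an extra argument;
                # a complete pass would also rewrite captured references to env lookups.
                lifted[name] = (["env"] + params, convert(body))
                return ("make-closure", name, [("var", v) for v in captured])
            if kind == "call":
                return ("call-closure", convert(expr[1]),
                        [convert(a) for a in expr[2]])
            return expr

        # Example: (lambda (y) (+ x y)) captures x (and the free name +).
        ast = ("lambda", ["y"], ("call", ("var", "+"), [("var", "x"), ("var", "y")]))
        print(convert(ast))   # ('make-closure', 'proc1', [('var', '+'), ('var', 'x')])
        print(lifted)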

    Untangling the Web: A Guide To Internet Research

    [Excerpt] Untangling the Web for 2007 is the twelfth edition of a book that started as a small handout. After more than a decade of researching, reading about, using, and trying to understand the Internet, I have come to accept that it is indeed a Sisyphean task. Sometimes I feel that all I can do is to push the rock up to the top of that virtual hill, then stand back and watch as it rolls down again. The Internet—in all its glory of information and misinformation—is for all practical purposes limitless, which of course means we can never know it all, see it all, understand it all, or even imagine all it is and will be. The more we know about the Internet, the more acute is our awareness of what we do not know. The Internet emphasizes the depth of our ignorance because our knowledge can only be finite, while our ignorance must necessarily be infinite. My hope is that Untangling the Web will add to our knowledge of the Internet and the world while recognizing that the rock will always roll back down the hill at the end of the day.

    Building Universal Digital Libraries: An Agenda for Copyright Reform

    This article proposes a series of copyright reforms to pave the way for digital library projects like Project Gutenberg, the Internet Archive, and Google Print, which promise to make much of the world's knowledge easily searchable and accessible from anywhere. Existing law frustrates digital library growth and development by granting overlapping, overbroad, and near-perpetual copyrights in books, art, audiovisual works, and digital content. Digital libraries would benefit from an expanded public domain, revitalized fair use doctrine and originality requirement, rationalized systems for copyright registration and transfer, and a new framework for compensating copyright owners for online infringement without imposing derivative copyright liability on technologists. This article's case for reform begins with rolling back the copyright term extensions of recent years, which were upheld by the Supreme Court in Eldred v. Reno. Indefinitely renewable copyrights threaten to marginalize Internet publishing and online libraries by entangling them in endless disputes regarding the rights to decades- or centuries-old works. Similarly, digital library projects are becoming unnecessarily complicated and expensive to undertake due to the assertion by libraries and copyright holding companies of exclusive rights over unoriginal reproductions of public domain works, and the demands of authors that courts block all productive digital uses of their already published but often out-of-print works. Courts should refuse to allow the markets in digital reproductions to be monopolized in this way, and Congress must introduce greater certainty into copyright licensing by requiring more frequent registration and recordation of rights. Courts should also consider the digitizing of copyrighted works for the benefit of the public to be fair use, particularly where only excerpts of the works are posted online for public perusal. A digital library like Google Print needs a degree of certainty - which existing law does not provide - that it will not be punished for making miles of printed matter instantly searchable in the comfort of one's home, or for rescuing orphan works from obscurity or letting consumers preview a few pages of a book before buying it. Finally, the Supreme Court's recognition of liability for inducement of digital copyright infringement in the Grokster case may have profoundly negative consequences for digital library technology. The article discusses how recent proposals for statutory file-sharing licenses may reduce the bandwidth and storage costs of digital libraries, and thereby make them more comprehensive and accessible.

    Enabling Parallel Execution via Principled Speculation


    Tools and Algorithms for the Construction and Analysis of Systems

    This open access two-volume set constitutes the proceedings of the 27th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2021, which was held during March 27 – April 1, 2021, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2021. The conference was planned to take place in Luxembourg but changed to an online format due to the COVID-19 pandemic. The 41 full papers presented in the proceedings were carefully reviewed and selected from 141 submissions. The volumes also contain 7 tool papers, 6 tool demo papers, and 9 SV-COMP competition papers. The papers are organized in topical sections as follows: Part I: Game Theory; SMT Verification; Probabilities; Timed Systems; Neural Networks; Analysis of Network Communication. Part II: Verification Techniques (not SMT); Case Studies; Proof Generation/Validation; Tool Papers; Tool Demo Papers; SV-Comp Tool Competition Papers.

    Advanced Techniques for Search-Based Program Repair

    Debugging and repairing software defects costs the global economy hundreds of billions of dollars annually, and accounts for as much as 50% of programmers' time. To tackle the burgeoning expense of repair, researchers have proposed the use of novel techniques to automatically localise and repair such defects. Collectively, these techniques are referred to as automated program repair. Despite promising early results, recent studies have demonstrated that existing automated program repair techniques are considerably less effective than previously believed. Current approaches are limited either in terms of the number and kinds of bugs they can fix, the size of patches they can produce, or the programs to which they can be applied. To become economically viable, automated program repair needs to overcome all of these limitations. Search-based repair is the only approach to program repair which may be applied to any bug or program, without assuming the existence of formal specifications. Despite its generality, current search-based techniques are restricted: they are either efficient or capable of fixing multiple-line bugs; no existing technique is both. Furthermore, most techniques rely on the assumption that the material necessary to craft a repair already exists within the faulty program. By using existing code to craft repairs, the size of the search space is vastly reduced, compared to generating code from scratch. However, recent results, which show that almost all repairs generated by a number of search-based techniques can be explained as deletion, lead us to question whether this assumption is valid. In this thesis, we identify the challenges facing search-based program repair, and demonstrate ways of tackling them. We explore if and how the knowledge of candidate patch evaluations can be used to locate the source of bugs. We use software repository mining techniques to discover the form of a better repair model capable of addressing a greater number of bugs. We conduct a theoretical and empirical analysis of existing search algorithms for repair, before demonstrating a more effective alternative, inspired by greedy algorithms. To ensure reproducibility, we propose and use a methodology for conducting high-quality automated program repair research. Finally, we assess our progress towards solving the challenges of search-based program repair, and reflect on the future of the field.
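
    The generate-and-validate loop at the heart of search-based repair can be sketched in a few lines. The Python below is a hypothetical illustration of that general shape (mutate the faulty program using code already present elsewhere in it, score candidates against the test suite, and keep the best-scoring patch); the function names, the single-edit patch space, and the simple greedy acceptance rule are assumptions for exposition, not the thesis's algorithms.

        # Minimal sketch of generate-and-validate, search-based program repair.
        # All names and the patch representation are hypothetical.
        import random

        def candidate_patches(statements, donor_pool):
            """Single-edit patches: delete a statement, or replace it with code
            reused from elsewhere in the program (the redundancy assumption)."""
            for i in range(len(statements)):
                yield ("delete", i, None)
                for donor in donor_pool:
                    yield ("replace", i, donor)

        def apply_patch(statements, patch):
            op, i, donor = patch
            patched = list(statements)
            if op == "delete":
                del patched[i]
            else:
                patched[i] = donor
            return patched

        def fitness(program, tests):
            """Fraction of tests passed; each test maps a program to True/False."""
            return sum(test(program) for test in tests) / len(tests)

        def repair(statements, donor_pool, tests, budget=1000):
            best, best_fit = statements, fitness(statements, tests)
            pool = list(candidate_patches(statements, donor_pool))
            random.shuffle(pool)                      # sample the patch space
            for patch in pool[:budget]:
                patched = apply_patch(statements, patch)
                fit = fitness(patched, tests)
                if fit > best_fit:                    # greedy: keep the best candidate so far
                    best, best_fit = patched, fit
                if best_fit == 1.0:                   # all tests pass: a plausible repair
                    break
            return best, best_fit

    Even this toy makes the abstract's point visible: when the donor pool is drawn from the program itself, many accepted patches amount to deletions, and because fitness sees only the test suite, a patch that passes every test is merely plausible rather than provably correct.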