
    Enabling Additional Parallelism in Asynchronous JavaScript Applications

    JavaScript is a single-threaded programming language, so asynchronous programming is practiced out of necessity to ensure that applications remain responsive in the presence of user input or interactions with file systems and networks. However, many JavaScript applications execute in environments that do exhibit concurrency by, e.g., interacting with multiple or concurrent servers, or by using file systems managed by operating systems that support concurrent I/O. In this paper, we demonstrate that JavaScript programmers often schedule asynchronous I/O operations suboptimally, and that reordering such operations may yield significant performance benefits. Concretely, we define a static side-effect analysis that can be used to determine how asynchronous I/O operations can be refactored so that asynchronous I/O-related requests are made as early as possible, and so that the results of these requests are awaited as late as possible. While our static analysis is potentially unsound, we have not encountered any situations where it suggested reorderings that change program behavior. We evaluate the refactoring on 20 applications that perform file- or network-related I/O. For these applications, we observe average speedups ranging from 0.99% to 53.6% for the tests that execute refactored code (8.1% on average).
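    For illustration only, a minimal TypeScript/Node.js sketch of the request-early, await-late pattern the abstract describes (the file names and surrounding functions are hypothetical; fs/promises is Node's standard promise-based file API):

```typescript
// Hypothetical example: two independent reads that do not depend on each other.
import { readFile } from "fs/promises";

// Before: the second request is not issued until the first one completes.
async function loadSequential(): Promise<number> {
  const config = await readFile("config.json", "utf8");
  const data = await readFile("data.json", "utf8");
  return config.length + data.length;
}

// After the reordering: both requests are issued immediately, and their
// results are awaited only where they are actually needed.
async function loadReordered(): Promise<number> {
  const configPromise = readFile("config.json", "utf8");
  const dataPromise = readFile("data.json", "utf8");
  const config = await configPromise; // awaited as late as possible
  const data = await dataPromise;
  return config.length + data.length;
}
```

    Both functions compute the same result, but the second overlaps the two I/O operations; proving that such a reordering preserves behavior is what the side-effect analysis is for.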

    1st doctoral symposium of the international conference on software language engineering (SLE): collected research abstracts, October 11, 2010, Eindhoven, The Netherlands

    The first Doctoral Symposium to be organised by the series of International Conferences on Software Language Engineering (SLE) will be held on October 11, 2010 in Eindhoven, as part of the 3rd instance of SLE. This conference series aims to integrate the different sub-communities of the software-language engineering community to foster cross-fertilisation and strengthen research overall. The Doctoral Symposium at SLE 2010 aims to contribute towards these goals by providing a forum for both early and late-stage Ph.D. students to present their research and get detailed feedback and advice from researchers both in and out of their particular research area. Consequently, the main objectives of this event are: – to give Ph.D. students an opportunity to write about and present their research; – to provide Ph.D. students with constructive feedback from their peers and from established researchers in their own and in different SLE sub-communities; – to build bridges for potential research collaboration; and – to foster integrated thinking about SLE challenges across sub-communities. All Ph.D. students participating in the Doctoral Symposium submitted an extended abstract describing their doctoral research. Based on a good set of submissions, we were able to accept 13 submissions for participation in the Doctoral Symposium. These proceedings present final revised versions of these accepted research abstracts. We are particularly happy to note that submissions to the Doctoral Symposium covered a wide range of SLE topics drawn from all SLE sub-communities. In selecting submissions for the Doctoral Symposium, we were supported by the members of the Doctoral-Symposium Selection Committee (SC), representing senior researchers from all areas of the SLE community. We would like to thank them for their substantial effort, without which this Doctoral Symposium would not have been possible. Throughout, they have provided reviews that go beyond the normal format of a review, taking extra care to point out potential areas of improvement of the research or its presentation. Hopefully, these reviews will themselves already contribute substantially towards the goals of the symposium and help students improve and advance their work. Furthermore, all submitting students were also asked to provide two reviews for other submissions. The members of the SC went out of their way to comment on the quality of these reviews, helping students improve their reviewing skills.

    Cautiously Optimistic Program Analyses for Secure and Reliable Software

    Modern computer systems still have various security and reliability vulnerabilities. Well-known dynamic analyses can mitigate them using runtime monitors that serve as lifeguards, but the additional work in enforcing these security and safety properties incurs exorbitant performance costs, and such tools are rarely used in practice. Our work addresses this problem by constructing a novel technique: Cautiously Optimistic Program Analysis (COPA). COPA is optimistic: it infers likely program invariants from dynamic observations and assumes them in its static reasoning to precisely identify and elide wasteful runtime monitors. The resulting system is fast, but also ensures soundness by recovering to a conservatively optimized analysis in the rare case that a likely invariant fails at runtime. COPA is also cautious: by carefully restricting optimizations to only safe elisions, the recovery is greatly simplified. It avoids unbounded rollbacks upon recovery, thereby enabling analysis for live production software. We demonstrate the effectiveness of Cautiously Optimistic Program Analyses in three areas. Information-Flow Tracking (IFT) can help prevent security breaches and information leaks, but it is rarely used in practice due to its high performance overhead (>500% for web/email servers); COPA dramatically reduces this cost by eliding wasteful IFT monitors to make it practical (9% overhead, a 4x speedup). Automatic Garbage Collection (GC) in managed languages (e.g. Java) simplifies programming tasks while ensuring memory safety; however, there is no correct GC for weakly-typed languages (e.g. C/C++), and manual memory management is prone to errors that have been exploited in high-profile attacks. We develop the first sound GC for C/C++ and use COPA to optimize its performance (16% overhead). Sequential Consistency (SC) provides intuitive semantics for concurrent programs, simplifying reasoning about their correctness; however, ensuring SC behavior on commodity hardware remains expensive. We use COPA to ensure SC for Java at the language level efficiently and significantly reduce its cost (from 24% down to 5% on x86). COPA provides a way to realize strong software security, reliability and semantic guarantees at practical costs. Ph.D. thesis, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/170027/1/subarno_1.pd
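    As a rough illustration of the optimistic-monitoring idea, here is a heavily simplified TypeScript sketch (not the dissertation's system; the taint-tracking scenario and all names are hypothetical): a likely invariant lets the fast path skip a monitor, and a conservative path takes over once the invariant is observed to fail.

```typescript
// Sketch: the likely invariant "inputs never come from untrusted sources"
// lets the fast path elide the information-flow monitor; if the invariant
// is ever observed to fail, later calls fall back to the monitored path.
type Tainted<T> = { value: T; tainted: boolean };

let invariantHolds = true; // likely invariant inferred from earlier dynamic observations

function readInput(raw: string, fromUntrustedSource: boolean): Tainted<string> {
  if (fromUntrustedSource) invariantHolds = false; // invariant violated: stop eliding monitors
  return { value: raw, tainted: fromUntrustedSource };
}

function writeToSink(input: Tainted<string>): void {
  if (invariantHolds) {
    // Optimistic fast path: the taint check is elided because the invariant
    // implies no tainted value can reach this sink.
    console.log(input.value);
    return;
  }
  // Recovery path: conservative behavior with the monitor re-enabled.
  if (input.tainted) throw new Error("tainted data reached a sensitive sink");
  console.log(input.value);
}
```

    The sketch only shows the fast-path/recovery split; per the abstract, the actual technique restricts itself to safe elisions so that recovery never requires unbounded rollback.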

    Provenance

    Automated metamorphic testing on the analyses of feature models

    Copyright © 2010 Elsevier B.V. All rights reserved.
    Context: A feature model (FM) represents the valid combinations of features in a domain. The automated extraction of information from FMs is a complex task that involves numerous analysis operations, techniques and tools. Current testing methods in this context are manual and rely on the ability of the tester to decide whether the output of an analysis is correct. However, this is acknowledged to be time-consuming, error-prone and in most cases infeasible due to the combinatorial complexity of the analyses; this is known as the oracle problem.
    Objective: In this paper, we propose using metamorphic testing to automate the generation of test data for feature model analysis tools, overcoming the oracle problem. An automated test data generator is presented and evaluated to show the feasibility of our approach.
    Method: We present a set of relations (so-called metamorphic relations) between input FMs and the sets of products they represent. Based on these relations and given an FM and its known set of products, a set of neighbouring FMs together with their corresponding sets of products are automatically generated and used for testing multiple analyses. Complex FMs representing millions of products can be efficiently created by applying this process iteratively.
    Results: Our evaluation results using mutation testing and real faults reveal that most faults can be automatically detected within a few seconds. Two defects were found in FaMa and another two in SPLOT, two real tools for the automated analysis of feature models. Also, we show how our generator outperforms a related manual suite for the automated analysis of feature models and how this suite can be used to guide the automated generation of test cases, obtaining important gains in efficiency.
    Conclusion: Our results show that the application of metamorphic testing in the domain of automated analysis of feature models is efficient and effective in detecting most faults in a few seconds without the need for a human oracle.
    This work has been partially supported by the European Commission (FEDER) and the Spanish Government under CICYT project SETI (TIN2009-07366) and the Andalusian Government project ISABEL (TIC-2533).
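    To make the idea of a metamorphic relation concrete, a toy TypeScript sketch (the data structures and the product-counting "analysis" are hypothetical, not FaMa or SPLOT): adding an independent optional leaf feature should exactly double the number of products, so two runs of the analysis under test can be checked against each other without knowing either correct count in advance.

```typescript
// Toy feature model: a root plus independent optional leaf features.
type FeatureModel = { root: string; optionalLeaves: string[] };

// Toy "analysis under test": with n independent optional leaves,
// the number of valid products is 2^n.
function countProducts(fm: FeatureModel): number {
  return 2 ** fm.optionalLeaves.length;
}

// Follow-up input derived from the original one.
function addOptionalLeaf(fm: FeatureModel, name: string): FeatureModel {
  return { ...fm, optionalLeaves: [...fm.optionalLeaves, name] };
}

// Metamorphic test: no oracle for the absolute product count is needed,
// only the relation between the original and follow-up runs.
const fm: FeatureModel = { root: "App", optionalLeaves: ["Logging", "Cache"] };
const followUp = addOptionalLeaf(fm, "Telemetry");
console.assert(
  countProducts(followUp) === 2 * countProducts(fm),
  "metamorphic relation violated",
);
```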

    Easier Parallel Programming with Provably-Efficient Runtime Schedulers

    Over the past decade, processor manufacturers have pivoted from increasing uniprocessor performance to multicore architectures. However, utilizing this computational power has proved challenging for software developers. Many concurrency platforms and languages have emerged to address parallel programming challenges, yet writing correct and performant parallel code retains a reputation of being one of the hardest tasks a programmer can undertake. This dissertation studies how runtime scheduling systems can be used to make parallel programming easier. We address the difficulty of writing parallel data structures, automatically finding shared-memory bugs, and reproducing non-deterministic synchronization bugs. Each of the systems presented depends on a novel runtime system that provides strong theoretical performance guarantees and performs well in practice.

    Languages of games and play: A systematic mapping study

    Digital games are a powerful means for creating enticing, beautiful, educational, and often highly addictive interactive experiences that impact the lives of billions of players worldwide. We explore what informs the design and construction of good games to learn how to speed up game development. In particular, we study to what extent languages, notations, patterns, and tools can offer experts the theoretical foundations, systematic techniques, and practical solutions they need to raise their productivity and improve the quality of games and play. Despite the growing number of publications on this topic, there is currently no overview describing the state of the art that relates research areas, goals, and applications. As a result, efforts and successes are often one-off, lessons learned go overlooked, language reuse remains minimal, and opportunities for collaboration and synergy are lost. We present a systematic map that identifies relevant publications and gives an overview of research areas and publication venues. In addition, we categorize research perspectives along common objectives, techniques, and approaches, illustrated by summaries of selected languages. Finally, we distill challenges and opportunities for future research and development.

    A case study of agile software development for large-scale safety-critical systems projects

    This study explores the introduction of agile software development within an avionics company engaged in safety-critical system engineering. There is increasing pressure throughout the software industry for development efforts to adopt agile software development in order to respond more rapidly to changing requirements and make more frequent deliveries of systems to customers for review and integration. This pressure is also being experienced in safety-critical industries, where release cycles on typically large and complex systems may run to several years, on projects spanning decades. However, safety-critical system developments are normally highly regulated, which may constrain the adoption of agile software development or require adaptation of selected methods or practices. To investigate this potential conflict, we conducted a series of interviews with practitioners in the company, exploring their experiences of adopting agile software development and the challenges encountered. The study also explores the opportunities for altering the existing software process in the company to better fit agile software development to the constraints of software development for safety-critical systems. We conclude by identifying immediate future research directions to better align the tempo of software development for safety-critical systems with that of agile software development.

    Dynamically fighting bugs: prevention, detection and elimination

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 147-160). This dissertation presents three test-generation techniques that are used to improve software quality. Each of our techniques targets bugs that are found by different stakeholders: developers, testers, and maintainers. We implemented and evaluated our techniques on real code. We present the design of each tool and conduct an experimental evaluation of the tools against available alternatives. Developers need to prevent regression errors when they create new functionality. This dissertation presents a technique that helps developers prevent regression errors in object-oriented programs by automatically generating unit-level regression tests. Our technique generates regression tests by using models created dynamically from example executions. In our evaluation, our technique created effective regression tests and achieved good coverage even for programs with constrained APIs. Testers need to detect bugs in programs. This dissertation presents a technique that helps testers detect and localize bugs in web applications. Our technique automatically creates tests that expose failures by combining dynamic test generation with explicit-state model checking. In our evaluation, our technique discovered hundreds of faults in real applications. Maintainers have to reproduce failing executions in order to eliminate bugs found in deployed programs. This dissertation presents a technique that helps maintainers eliminate bugs by generating tests that reproduce failing executions. Our technique automatically generates tests that reproduce the failed executions by monitoring methods and storing optimized states of method arguments. In our evaluation, our technique reproduced failures with low overhead in real programs. Analyses need to avoid unnecessary computations in order to scale. This dissertation presents a technique that helps our other techniques scale by inferring the mutability classification of arguments. Our technique classifies mutability by combining static analyses with a novel dynamic mutability analysis. In our evaluation, our technique efficiently and correctly classified most of the arguments for programs with more than a hundred thousand lines of code. By Shay Artzi. Ph.D.
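    A toy TypeScript sketch of the capture-and-replay idea behind the failure-reproduction technique (all names, the recording format, and the Jest-style output are hypothetical; per the abstract, the real technique stores optimized argument states rather than raw values): method arguments observed during a failing run are recorded, and a unit test that replays them is generated afterwards.

```typescript
// Record every monitored call so a failing execution can be replayed later.
type Call = { method: string; args: unknown[] };
const recorded: Call[] = [];

// Wrap a function so each invocation is recorded before it runs.
function monitored<A extends unknown[], R>(name: string, fn: (...args: A) => R) {
  return (...args: A): R => {
    recorded.push({ method: name, args });
    return fn(...args);
  };
}

// Emit the source of a regression test that replays the captured calls.
function emitReplayTest(calls: Call[]): string {
  const body = calls
    .map(c => `  ${c.method}(${c.args.map(a => JSON.stringify(a)).join(", ")});`)
    .join("\n");
  return `test("reproduces failing execution", () => {\n${body}\n});`;
}

// Hypothetical function that fails in the field on a locale-specific input.
const parsePrice = monitored("parsePrice", (s: string) => {
  const n = Number(s);
  if (Number.isNaN(n)) throw new Error(`bad price: ${s}`);
  return n;
});

try { parsePrice("12,50"); } catch { /* failure observed during deployment */ }
console.log(emitReplayTest(recorded)); // generated test reproduces the failure
```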