16 research outputs found

    Squeak by Example

    Squeak by Example, intended for both students and developers, will guide you gently through the Squeak language and environment by means of a series of examples and exercises. This book is made available under the Creative Commons Attribution-ShareAlike 3.0 license.

    Pharo by Example

    Pharo is a modern, open-source, fully-featured implementation of the Smalltalk programming language and environment. Pharo is derived from Squeak, a re-implementation of the classic Smalltalk-80 system. Whereas Squeak was developed mainly as a platform for developing experimental educational software, Pharo strives to offer a lean, open-source platform for professional software development, and a robust and stable platform for research and development into dynamic languages and environments. Pharo serves as the reference implementation for the Seaside web development framework.

    Pharo by Example 5

    Pharo by Example, intended for both students and developers, will guide you gently through the Pharo language and environment by means of a series of examples and exercises. This book is made available under the Creative Commons Attribution-ShareAlike 3.0 license.

    Pharo by Example

    This book is made available under the terms of the Creative Commons Attribution-ShareAlike 3.0 license; the PDF can be downloaded free of charge from http://pharobyexample.org/. Pharo by Example, intended for both students and developers, will guide you gently through the Pharo language and environment by means of a series of examples and exercises.

    Test Cases Evolution of Mobile Applications: Model Driven Approach

    Mobile application developers, given considerable freedom, focus on satisfying market requirements and pleasing consumers. They are forced to be creative and productive in short periods of time, and as a result billions of powerful mobile applications are on display every day. Every mobile application therefore needs to change continually and evolve incrementally in order to survive and preserve its ranking among the top applications in the market. Mobile app testers carry a heavy responsibility: the swift, agile change that is intrinsic to mobile apps pushes them to be meticulous, to be aware that things can be different at any time, and to be prepared for unpredicted crashes. Generating or creating test cases from scratch, and selecting the overridden or overloaded test cases each time, is therefore a tedious operation. In software development, the time allocated to testing and correcting defects is substantial (regularly half the total time); it can be reduced by introducing tools and adopting new testing methods. Mobile development also raises new concerns, the most important being the heterogeneity of execution environments and the fragmentation of devices, which affect functionality, performance, and connectivity in different ways. This project studies the evolution of mobile applications and its impact on the evolution of test cases, from their creation until their expiration. A detailed case study of a native open-source Android application is provided, describing many aspects of design, development, and testing, together with an analysis of how mobile apps evolve. The project is based on a model-driven engineering approach in which models are serialized using the standard XMI format, and it presents a protocol for adapting test cases under certain restrictions.
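
    As a purely illustrative aside, the kind of model-driven test-case adaptation this abstract describes can be sketched as follows. The TypeScript below is a minimal sketch under assumed names (WidgetModel, TestCase, adaptTestCases); it is not the project's actual tooling, and in practice the UI model would be read from the XMI serialization rather than written inline.

    // Hypothetical sketch of model-driven test-case adaptation; none of these
    // names come from the project itself.

    interface WidgetModel {
      id: string;          // stable widget identifier in the UI model (e.g. read from an XMI export)
      renamedTo?: string;  // set when the widget was renamed in the evolved model
    }

    interface TestCase {
      name: string;
      widgetIds: string[]; // widgets the test interacts with
    }

    type Verdict = { test: TestCase; status: "reusable" | "adapted" | "expired" };

    // Compare each test case against the evolved UI model: keep tests whose widgets
    // still exist, rewrite renamed references, and expire tests that touch removed widgets.
    function adaptTestCases(tests: TestCase[], evolvedModel: WidgetModel[]): Verdict[] {
      const byId = new Map(evolvedModel.map(w => [w.id, w] as const));
      return tests.map((test): Verdict => {
        let adapted = false;
        const widgetIds: string[] = [];
        for (const id of test.widgetIds) {
          const widget = byId.get(id);
          if (!widget) return { test, status: "expired" };
          widgetIds.push(widget.renamedTo ?? id);
          adapted = adapted || widget.renamedTo !== undefined;
        }
        return { test: { ...test, widgetIds }, status: adapted ? "adapted" : "reusable" };
      });
    }

    // Example: one widget renamed and one removed in the evolved model.
    const model: WidgetModel[] = [{ id: "btnLogin", renamedTo: "btnSignIn" }, { id: "txtUser" }];
    const tests: TestCase[] = [
      { name: "login flow", widgetIds: ["btnLogin", "txtUser"] },
      { name: "legacy banner", widgetIds: ["imgBanner"] },
    ];
    console.log(adaptTestCases(tests, model));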

    Strategies for the intelligent selection of components

    It is becoming common to build applications as component-intensive systems - a mixture of fresh code and existing components. For application developers the selection of components to incorporate is key to overall system quality - so they want the 'best'. For each selection task, the application developer will define requirements for the ideal component and use them to select the most suitable one. While many software selection processes exist, there is a lack of repeatable, usable, flexible, automated processes with tool support. This investigation has focussed on finding and implementing strategies to enhance the selection of software components. The study was built around four research elements, targeting characterisation, process, strategies and evaluation. A post-positivist methodology was used, with the Spiral Development Model (SDM) structuring the investigation. Data for the study were generated using a range of qualitative and quantitative methods, including a survey, a range of case studies, and quasi-experiments focused on the specific tuning of tools and techniques. Evaluation and review are integral to the SDM: a Goal-Question-Metric (GQM)-based approach was applied to every Spiral.
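
    The weighted matching of candidate components against developer-defined requirements that sits at the core of such selection processes can be sketched as below. This TypeScript is only an illustration under assumed names (Requirement, Component, rankComponents) and is not the tooling produced by this investigation.

    // Illustrative sketch: rank candidate components by how well they satisfy
    // weighted requirements. All names and example data are hypothetical.

    interface Component {
      name: string;
      metadata: Record<string, string | number | boolean>;
    }

    interface Requirement {
      name: string;
      weight: number;                       // relative importance, e.g. 0..1
      satisfies: (c: Component) => number;  // degree of satisfaction, 0..1
    }

    // Rank candidates by the weighted sum of requirement satisfaction scores.
    function rankComponents(candidates: Component[], requirements: Requirement[]) {
      return candidates
        .map(c => ({
          component: c,
          score: requirements.reduce((sum, r) => sum + r.weight * r.satisfies(c), 0),
        }))
        .sort((a, b) => b.score - a.score);
    }

    // Example: prefer a permissively licensed, well-documented logging component.
    const candidates: Component[] = [
      { name: "logA", metadata: { license: "MIT", docsPages: 40 } },
      { name: "logB", metadata: { license: "GPL", docsPages: 120 } },
    ];
    const requirements: Requirement[] = [
      { name: "permissive license", weight: 0.7, satisfies: c => (c.metadata.license === "MIT" ? 1 : 0) },
      { name: "documentation", weight: 0.3, satisfies: c => Math.min(1, Number(c.metadata.docsPages) / 100) },
    ];
    console.log(rankComponents(candidates, requirements));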

    A trusted infrastructure for symbolic analysis of event-based web APIs

    JavaScript has been widely adopted for the development of Web applications, being used for both client- and server-side code. Client-side JavaScript programs commonly interact with Web APIs, for instance to capture the user's interaction with the Web page via events. The use of such APIs increases the complexity of JavaScript programs; in fact, most errors in these programs are caused by the misuse of Web APIs. Several approaches exist for detecting errors in client-side JavaScript programs, but they either assume the use of a single API or do not model APIs faithfully, giving rise to inconsistent behaviour and a lack of trust. We address the problem by developing a trustworthy infrastructure for the static analysis of Web APIs. We focus on two aspects of JavaScript programs, event-driven and message-passing programming, as these paradigms are common sources of confusion among developers. We target the DOM event model, JavaScript Promises, and JavaScript async/await, which facilitate event-driven programming. Additionally, we target the message-passing model of the WebMessaging and WebWorkers APIs. We design formal semantics for events and message passing that capture the fundamental operations required by those APIs, as well as API reference implementations which are trustworthy in that they follow the respective standards and have been thoroughly tested against their official test suites. Using our formal semantics and reference implementations, we develop JaVerT.Click, the first static symbolic execution tool for JavaScript supporting both event-based and message-passing APIs. We evaluated both the reference implementations and the symbolic execution engine of JaVerT.Click. By testing the reference implementations against their official test suites, we found coverage gaps and issues in the test suites, most of which have since been fixed. By testing the symbolic execution engine against three open-source libraries, we established the bounded correctness of functional properties and found real bugs.
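
    To make the targeted paradigms concrete, the following TypeScript sketch mixes the API families named above: DOM events, Promises with async/await, and message passing to a WebWorker. It is an illustrative client-side fragment, not code from JaVerT.Click; the worker file name and element id are assumptions.

    // Browser-side sketch of the API mix such an analysis has to model:
    // DOM events, Promises/async-await, and message passing to a WebWorker.
    // "hash.worker.js" and "#go" are illustrative placeholders.

    const worker = new Worker("hash.worker.js");

    // Wrap the worker's message-passing protocol in a Promise so it composes with
    // async/await; a missing or duplicated handler here is exactly the kind of
    // subtle misuse a symbolic analysis aims to detect.
    function hashInWorker(payload: string): Promise<string> {
      return new Promise(resolve => {
        worker.addEventListener("message", (e: MessageEvent<string>) => resolve(e.data), { once: true });
        worker.postMessage(payload);
      });
    }

    // DOM event handling combined with async/await.
    document.querySelector("#go")?.addEventListener("click", async () => {
      const digest = await hashInWorker("hello");
      console.log("digest from worker:", digest);
    });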

    Automated Software Transplantation

    Automated program repair has excited researchers for more than a decade, yet it has still to find full-scale deployment in industry. We report our experience with SAPFIX: the first deployment of automated end-to-end fault fixing, from test case design through to deployed repairs in production code. We have used SAPFIX at Facebook to repair 6 production systems, each consisting of tens of millions of lines of code, which are collectively used by hundreds of millions of people worldwide. In its first three months of operation, SAPFIX produced 55 repair candidates for 57 crashes reported to it, of which 27 have been deemed correct by developers and 14 have been landed into production automatically by SAPFIX. SAPFIX has thus demonstrated the potential of the search-based repair research agenda by deploying, to hundreds of millions of users worldwide, software systems that have been automatically tested and repaired. Automated software transplantation (autotransplantation) is a form of automated software engineering in which search-based software engineering is used to automatically move a functionality of interest from a 'donor' program that implements it into a 'host' program that lacks it. Autotransplantation is a kind of automated program repair in which we repair the 'host' program by augmenting it with the missing functionality. Automated software transplantation would open many exciting avenues for software development: suppose we could autotransplant code from one system into another, entirely unrelated, system, potentially written in a different programming language. Being able to do so might greatly enhance software engineering practice while reducing costs. Automated software transplantation comes in two flavours: monolingual, when the languages of the host and donor programs are the same, and multilingual, when the languages differ. This thesis introduces a theory of automated software transplantation and two algorithms implemented in two tools that realise it: µSCALPEL for monolingual software transplantation and τSCALPEL for multilingual software transplantation. Leveraging lightweight annotation, program analysis identifies an organ (interesting behaviour to transplant); testing validates that the organ exhibits the desired behaviour during its extraction and after its implantation into a host. We report encouraging results: in 14 of 17 monolingual transplantation experiments involving 6 donors and 4 hosts (popular real-world systems), we successfully autotransplanted 6 new functionalities; and in 10 out of 10 multilingual transplantation experiments involving 10 donors and 10 hosts (popular real-world systems written in 4 different programming languages), we successfully autotransplanted 10 new functionalities. That is, we passed all the test suites that validate the new functionalities' behaviour and confirm that the initial program behaviour is preserved. Additionally, we manually checked the behaviour exercised by the organ. Autotransplantation is also very useful: in just 26 hours of computation time we successfully autotransplanted the H.264 video encoding functionality from the x264 system to the VLC media player, a task that the developers of VLC have been doing manually for the past 12 years. We autotransplanted call-graph generation and indentation for C programs into Kate (a popular KDE-based text editor used as an IDE by many C developers), two features currently missing from Kate but requested by its users. Autotransplantation is also efficient: the total runtime across 15 monolingual transplants is five and a half hours, and the total runtime across 10 multilingual transplants is 33 hours.
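
    The workflow described above (annotate, extract an organ, implant it, then validate with regression and acceptance test suites) can be summarised with a conceptual sketch. The TypeScript below only illustrates the control flow; µSCALPEL and τSCALPEL operate on real C programs using search-based techniques, and every name here (Organ, extractOrgan, implant, transplant) is hypothetical.

    // Conceptual sketch of the autotransplantation workflow; all names and stubs
    // are illustrative, not the thesis's implementation.

    interface Program { name: string; source: string; }
    interface Organ { entryPoint: string; code: string; }   // functionality sliced out of the donor
    interface TestSuite { run: (p: Program) => boolean; }   // true if every test passes

    // Stub: a real implementation slices the donor around the annotated entry point.
    function extractOrgan(donor: Program, annotation: string): Organ {
      return { entryPoint: annotation, code: `/* organ sliced from ${donor.name} */` };
    }

    // Stub: a real implementation searches for an insertion point and adapts the organ to the host.
    function implant(host: Program, organ: Organ): Program {
      return { ...host, source: host.source + "\n" + organ.code };
    }

    // A transplant is accepted only if the host's own behaviour is preserved (regression)
    // and the transplanted functionality works in the host (acceptance), mirroring the
    // validation step described in the abstract.
    function transplant(
      donor: Program,
      host: Program,
      annotation: string,
      regression: TestSuite,
      acceptance: TestSuite,
    ): Program | null {
      const organ = extractOrgan(donor, annotation);
      const candidate = implant(host, organ);
      return regression.run(candidate) && acceptance.run(candidate) ? candidate : null;
    }

    // Example with trivially passing suites.
    const result = transplant(
      { name: "x264", source: "/* donor */" },
      { name: "vlc", source: "/* host */" },
      "encode_h264",
      { run: () => true },
      { run: () => true },
    );
    console.log(result !== null ? "transplant accepted" : "transplant rejected");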

    Exploiting the architectural characteristics of software components to improve software reuse

    Software development is a costly process for all but the most trivial systems. One of the commonly known ways of minimizing development costs is to re-use previously built software components. However, a significant problem that source-code re-users encounter is the difficulty of finding components that not only provide the functionality they need but also conform to the architecture of the system they are building. To facilitate finding reusable components there is a need to establish an appropriate mechanism for matching the key architectural characteristics of the available source-code components against the characteristics of the system being built. This research develops a precise characterization of the architectural characteristics of source-code components, and investigates a new way to describe how appropriate components for re-use can be identified and categorized.
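
    As an illustration of the general idea (not the characterisation scheme developed in the thesis), the TypeScript sketch below filters candidate components by a few assumed architectural characteristics before any functional matching; all names and characteristic values are hypothetical.

    // Illustrative sketch: shortlist components whose architectural profile
    // matches the system being built; functional matching would follow.

    interface ArchitecturalProfile {
      interactionStyle: "call-return" | "event-based" | "message-passing";
      packaging: "library" | "service" | "plugin";
      concurrency: "single-threaded" | "thread-safe";
    }

    interface CandidateComponent {
      name: string;
      profile: ArchitecturalProfile;
    }

    // Keep only candidates whose architectural profile matches the target system.
    function architecturallyCompatible(
      candidates: CandidateComponent[], system: ArchitecturalProfile,
    ): CandidateComponent[] {
      return candidates.filter(c =>
        c.profile.interactionStyle === system.interactionStyle &&
        c.profile.packaging === system.packaging &&
        c.profile.concurrency === system.concurrency,
      );
    }

    // Example: an event-based, plugin-oriented, thread-safe host system.
    const host: ArchitecturalProfile = {
      interactionStyle: "event-based", packaging: "plugin", concurrency: "thread-safe",
    };
    const shortlist = architecturallyCompatible(
      [
        { name: "parserX", profile: { interactionStyle: "call-return", packaging: "library", concurrency: "single-threaded" } },
        { name: "busY", profile: { interactionStyle: "event-based", packaging: "plugin", concurrency: "thread-safe" } },
      ],
      host,
    );
    console.log(shortlist.map(c => c.name)); // ["busY"]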

    Engineering Agile Big-Data Systems

    To be effective, data-intensive systems require extensive ongoing customisation to reflect changing user requirements, organisational policies, and the structure and interpretation of the data they hold. Manual customisation is expensive, time-consuming, and error-prone. In large complex systems, the value of the data can be such that exhaustive testing is necessary before any new feature can be added to the existing design. In most cases, the precise details of requirements, policies and data will change during the lifetime of the system, forcing a choice between expensive modification and continued operation with an inefficient design. Engineering Agile Big-Data Systems outlines an approach to dealing with these problems in software and data engineering, describing a methodology for aligning these processes throughout product lifecycles. It discusses tools which can be used to achieve these goals, and, in a number of case studies, shows how the tools and methodology have been used to improve a variety of academic and business systems.