
    The Making of Cloud Applications: An Empirical Study on Software Development for the Cloud

    Cloud computing is gaining more and more traction as a deployment and provisioning model for software. While a large body of research already covers how to optimally operate a cloud system, we still lack insights into how professional software engineers actually use clouds and how the cloud impacts development practices. This paper reports on the first systematic study of how software developers build applications in the cloud. We conducted a mixed-method study, consisting of qualitative interviews with 25 professional developers and a quantitative survey with 294 responses. Our results show that adopting the cloud has a profound impact throughout the software development process, as well as on how developers utilize tools and data in their daily work. Among other things, we found that (1) developers need better means to anticipate runtime problems and to rigorously define metrics for improved fault localization, and (2) although the cloud offers an abundance of operational data, developers still often rely on their experience and intuition rather than on metrics. From our findings, we extracted a set of guidelines for cloud development and identified challenges for researchers and tool vendors.

    Data Provenance Inference in Logic Programming: Reducing Effort of Instance-driven Debugging

    Data provenance allows scientists in different domains to validate their models and algorithms and to uncover anomalies and unexpected behaviors. In previous work, we described on-the-fly interpretation of (Python) scripts to automatically build a workflow provenance graph and then infer fine-grained provenance information based on that graph and the availability of data. To broaden the scope of our approach and demonstrate its viability, in this paper we extend it beyond procedural languages to purely declarative languages such as logic programming under the stable model semantics. For experiments and validation, we use the Answer Set Programming solver oClingo, which makes it possible to formulate and solve stream reasoning problems in a purely declarative fashion. We demonstrate that the benefits of provenance inference over explicit provenance still hold in a declarative setting, and we briefly discuss the potential impact for declarative programming, in particular for instance-driven debugging of the model in declarative problem solving.
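    The abstract does not include the authors' implementation; as a rough, hypothetical Python sketch of the underlying idea (record each operation's inputs and outputs while a script runs, then answer fine-grained provenance queries by backward traversal of the resulting graph), one might imagine something like the following. The ProvenanceTracer class and its method names are invented for illustration only.

    ```python
    from collections import defaultdict

    class ProvenanceTracer:
        """Hypothetical tracer: records each operation's inputs/outputs and
        answers backward provenance queries over the resulting graph."""

        def __init__(self):
            # Maps each data item to the set of items it was directly derived from.
            self.derived_from = defaultdict(set)

        def record(self, operation, inputs, outputs):
            # Called while interpreting a script step: every output depends
            # on every input of the operation that produced it.
            for out in outputs:
                self.derived_from[out].update(inputs)

        def provenance(self, item):
            # Fine-grained provenance = transitive closure of the dependencies.
            seen, stack = set(), [item]
            while stack:
                current = stack.pop()
                for source in self.derived_from[current]:
                    if source not in seen:
                        seen.add(source)
                        stack.append(source)
            return seen

    # Toy two-step workflow.
    tracer = ProvenanceTracer()
    tracer.record("filter", inputs={"raw.csv"}, outputs={"clean.csv"})
    tracer.record("aggregate", inputs={"clean.csv", "params.json"}, outputs={"report.txt"})
    print(tracer.provenance("report.txt"))  # {'clean.csv', 'raw.csv', 'params.json'}
    ```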

    The State of the Art in Language Workbenches. Conclusions from the Language Workbench Challenge

    Language workbenches are tools that provide high-level mechanisms for the implementation of (domain-specific) languages. Language workbenches are an active area of research that also receives many contributions from industry. To compare and discuss existing language workbenches, the annual Language Workbench Challenge was launched in 2011. Each year, participants are challenged to realize a given domain-specific language with their workbenches as a basis for discussion and comparison. In this paper, we describe the state of the art of language workbenches as observed in the previous editions of the Language Workbench Challenge. In particular, we capture the design space of language workbenches in a feature model and show where in this design space the participants of the 2013 Language Workbench Challenge reside. We compare these workbenches based on a DSL for questionnaires that was realized in all of them.
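    The questionnaire DSL used in the challenge is not reproduced in the abstract; purely as a loose illustration of what a questionnaire language with conditional questions might boil down to, here is a hypothetical Python sketch of an embedded (internal) version. The Question and Questionnaire names and their fields are invented and are not the challenge's QL definition.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Question:
        ident: str        # name of the answer variable
        label: str        # text shown to the respondent
        parse: callable   # converts the raw input string into the answer value
        # Guard: the question is asked only when this predicate over earlier answers holds.
        condition: callable = lambda answers: True

    @dataclass
    class Questionnaire:
        title: str
        questions: list = field(default_factory=list)

        def run(self, read=input):
            answers = {}
            for q in self.questions:
                if q.condition(answers):
                    answers[q.ident] = q.parse(read(f"{q.label} "))
            return answers

    # Toy questionnaire with one conditional question.
    form = Questionnaire("Housing", [
        Question("owns_house", "Do you own a house? (True/False)", lambda s: s == "True"),
        Question("value", "Estimated value?", int,
                 condition=lambda a: a.get("owns_house", False)),
    ])
    # answers = form.run()  # would prompt interactively
    ```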

    Using contextual knowledge in interactive fault localization

    Tool support for automated fault localization in program debugging is limited because state-of-the-art algorithms often fail to provide efficient help to the user. They usually offer a ranked list of suspicious code elements, but the fault is not guaranteed to be found among the highest ranks. In Spectrum-Based Fault Localization (SBFL), which uses code coverage information of test cases and their execution outcomes to calculate the ranks, the developer has to investigate several locations before finding the faulty code element. Yet, the knowledge the developer has a priori or acquires during this process is not reused by the SBFL tool. There are existing approaches in which the developer interacts with the SBFL algorithm by giving feedback on the elements of the prioritized list. We propose a new approach called iFL, which extends interactive approaches by exploiting the user's contextual knowledge about the next item in the ranked list (e.g., a statement), with which larger code entities (e.g., a whole function) can be repositioned in the suspiciousness ranking. We also implemented a closely related algorithm proposed by Gong et al., called Talk. First, we evaluated iFL using simulated users and compared the results to SBFL and Talk. Next, we introduced two types of imperfections in the simulation: the user's knowledge and confidence levels. On SIR and Defects4J, the results showed notable improvements in fault localization efficiency, even with strong user imperfections. We then empirically evaluated the effectiveness of the approach with real users in two sets of experiments: a quantitative evaluation of the success of using iFL, and a qualitative evaluation of practical uses of the approach with experienced developers in think-aloud sessions.
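    The abstract does not state which SBFL formula is used; as a minimal sketch of the general mechanism (scoring statements from per-statement coverage and test outcomes, here with the widely used Ochiai formula, plus an invented feedback step that downranks the whole function containing a statement the user judges innocent), consider the Python fragment below. The apply_feedback step is a deliberate simplification for illustration and is not the actual iFL algorithm.

    ```python
    import math

    def ochiai(failed_cov, passed_cov, total_failed):
        """Ochiai suspiciousness: ef / sqrt(total_failed * (ef + ep))."""
        denom = math.sqrt(total_failed * (failed_cov + passed_cov))
        return failed_cov / denom if denom else 0.0

    def rank_statements(coverage, outcomes):
        """coverage: {stmt: set of test ids covering it}; outcomes: {test id: 'pass'|'fail'}."""
        total_failed = sum(1 for o in outcomes.values() if o == "fail")
        scores = {}
        for stmt, tests in coverage.items():
            f = sum(1 for t in tests if outcomes[t] == "fail")
            p = len(tests) - f
            scores[stmt] = ochiai(f, p, total_failed)
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    def apply_feedback(ranking, enclosing_function, innocent_stmt, penalty=0.5):
        """Hypothetical interactive step: if the user judges one statement innocent,
        scale down all statements of its enclosing function (coarse repositioning)."""
        group = enclosing_function[innocent_stmt]
        return sorted(
            ((s, score * penalty if enclosing_function[s] == group else score)
             for s, score in ranking),
            key=lambda kv: kv[1], reverse=True)
    ```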

    The Use of Cloud Services in the Professional Training of Programmers at Higher Education Institutions

    The article analyzes the state of the art and the main tendencies in the development of cloud computing and substantiates the importance of applying cloud technologies in education. The directions of their application in higher education are examined, in particular the creation of a cloud-oriented learning and research environment at educational institutions. The advantages of moving the IT infrastructure of higher education institutions to the cloud are shown: savings on purchasing and updating software and hardware, a reduced need for specially equipped premises, and the creation of an open educational environment. The main directions for studying cloud technologies in the professional training of programmers are characterized: (1) building skills in using cloud services to solve professional tasks; (2) building skills in developing cloud applications, deploying cloud infrastructure, and securing cloud applications and data stores. The article describes cloud services that can be used in learning programming and in carrying out training projects (Ideone, Codenvy, DbDesigner), and briefly characterizes how the Amazon platform can be used to build cloud software development skills.

    MODIFICATIONS AND INNOVATIONS TO TECHNOLOGY ARTIFACTS

    What happens to a technology artifact after it is adopted? It has to evolve within its particular context to be effective; if it does not, it becomes part of the detritus of change, like the many genes without a discernible function in a living organism. In this paper, we report on a study of post-adoption technology behavior that examined how users modified and innovated with technology artifacts. We uncovered three types of changes made to technology artifacts: personalization, customization, and invention. Personalization attempts are modifications that change technology parameters to meet the specificities of the user; customization attempts adapt technology parameters to meet the specificities of the user's environment; and inventions are exaptations of the technology artifact. The paper presents a grounded-theoretic analysis of post-adoption evolution based on in-depth interviews with 20 software engineers in one multinational organization. We identify a life-cycle model that connects the various types of modifications made to technology artifacts and elaborates on how individual and organizational dynamics are linked to the diffusion of innovations. While the research is still in progress and the post-adoption evolution model has to be refined, it has significant value in understanding the full adoption life cycle of technological artifacts and how maximum value is derived from them.

    Automatic verification and validation wizard in web-centred end-user software engineering

    This paper addresses one of the major web end-user software engineering (WEUSE) challenges, namely, how to verify and validate software products built using a life cycle enacted by end-user programmers. Few end-user development support tools implement an engineering life cycle adapted to the needs of end users. End users do not have the programming knowledge, training or experience to perform development tasks requiring creativity. Elsewhere we published a life cycle adapted to this challenge. With the support of a wizard, end-user programmers follow this life cycle and develop rich internet applications (RIA) to meet specific end-user requirements. However, end-user programmers regard verification and validation activities as secondary or unnecessary for opportunistic programming tasks. Hence, although the solutions that they develop may satisfy specific requirements, it is impossible to guarantee the quality or the reusability of this software either for this user or for other developments by future end-user programmers. The challenge, then, is to find means of adopting a verification and validation workflow and adding verification and validation activities to the existing WEUSE life cycle, without users having to make substantial changes to the type of work that they do or to their priorities. In this paper, we set out a verification and validation life cycle supported by a wizard that walks the user through test case-based component, integration and acceptance testing. This wizard is well aligned with WEUSE's characteristic informality, ambiguity and opportunism. Users applying this verification and validation process manage to find bugs and errors that they would otherwise be unable to identify, and they receive instructions for error correction. This assures that their composite applications are of better quality and can be reliably reused. We also report a user study in which users develop web software with and without a wizard to drive verification and validation. The aim of this user study is to confirm the applicability and effectiveness of our wizard in the verification and validation of a RIA. Funding: European Union (UE) GA FP7-216048, FP7-285248 and FP7-258862; Ministerio de Economía y Competitividad TIN2016-76956-C3-2-R (POLOLAS) and TIN2015-71938-RED.
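    The abstract does not show what the wizard-generated test cases look like; as a rough, hypothetical Python sketch of the three testing levels it walks the user through (component, integration and acceptance), using an invented merge_listings component standing in for a composite web application, one might picture:

    ```python
    import unittest

    def merge_listings(weather, events):
        """Hypothetical composite component: joins a weather feed with an
        events feed into the items shown to the end user."""
        return [f"{e['name']} ({weather.get(e['city'], 'unknown')})" for e in events]

    class ComponentTest(unittest.TestCase):
        # Component level: the unit works on hand-written inputs.
        def test_merge_single_event(self):
            out = merge_listings({"Seville": "sunny"}, [{"name": "Feria", "city": "Seville"}])
            self.assertEqual(out, ["Feria (sunny)"])

    class IntegrationTest(unittest.TestCase):
        # Integration level: the unit works when wired to stubbed data sources.
        def test_merge_with_stub_sources(self):
            weather_stub = {"Lisbon": "rain"}
            events_stub = [{"name": "Book fair", "city": "Lisbon"}]
            self.assertIn("rain", merge_listings(weather_stub, events_stub)[0])

    class AcceptanceTest(unittest.TestCase):
        # Acceptance level: a user-visible expectation over the whole output.
        def test_every_event_is_listed(self):
            events = [{"name": "A", "city": "X"}, {"name": "B", "city": "Y"}]
            self.assertEqual(len(merge_listings({}, events)), len(events))

    if __name__ == "__main__":
        unittest.main()
    ```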

    Proceedings of The Rust-Edu Workshop

    The 2022 Rust-Edu Workshop was an experiment. We wanted to gather together as many thought leaders as we could attract in the area of Rust education, with an emphasis on academic-facing ideas. We hoped that productive discussions and future collaborations would result. Given the quick preparation and the difficulties of an international remote event, I am very happy to report a grand success. We had more than 27 participants from timezones around the globe. We had eight talks, four refereed papers and statements from 15 participants. Everyone seemed to have a good time, and I can say that I learned a ton. These proceedings are loosely organized: they represent a mere compilation of the excellent submitted work. I hope you'll find this material as pleasant and useful as I have. Bart Massey, 30 August 2022

    Open Source Software in Complex Domains: Current Perceptions in the Embedded Systems Area

    With Nokia’s 770 and N800 Internet Tablets heavily utilising Open Source software, it is timely to ask whether, and if so to what extent, Open Source has made ingress into complex application domains such as embedded systems. In this paper we report on a qualitative study of perceptions of Open Source software in the secondary software sector, and in particular in companies deploying embedded software. Although the sector is historically associated in Open Source software studies with the uptake of embedded Linux, we find broader acceptance. The level of reasoning about Open Source quality and trust issues found was commensurate with that expressed in the literature. The classical strengths of Open Source, namely mass inspection, ease of conducting trials, longevity and source code access for debugging, were at the forefront of thinking. However, there was an acknowledgement that more guidelines were needed for assessing and incorporating Open Source software in products.