
    Incremental and Modular Context-sensitive Analysis

    Context-sensitive global analysis of large code bases can be expensive, which can make its use impractical during software development. However, there are many situations in which modifications are small and isolated within a few components, and it is desirable to reuse as much as possible of the previous analysis results. This has been achieved to date through incremental global analysis fixpoint algorithms that achieve cost reductions at fine levels of granularity, such as changes in program lines. However, these fine-grained techniques are not directly applicable to modular programs, nor are they designed to take advantage of modular structures. This paper describes, implements, and evaluates an algorithm that performs efficient context-sensitive analysis incrementally on modular partitions of programs. The experimental results show that the proposed modular algorithm achieves significant improvements, in both time and memory consumption, when compared to existing non-modular, fine-grained incremental analysis techniques. Furthermore, thanks to the proposed inter-modular propagation of analysis information, our algorithm also outperforms traditional modular analysis even when analyzing from scratch.

    Comment: 56 pages, 27 figures. To be published in Theory and Practice of Logic Programming. v3 corresponds to the extended version of the ICLP2018 Technical Communication. v4 is the revised version submitted to Theory and Practice of Logic Programming. v5 (this one) is the final author version to be published in TPLP.
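
    As a rough, hypothetical illustration of the core idea (not the paper's actual algorithm), the Python sketch below runs a worklist-style fixpoint over a module dependency graph and re-analyzes only the modules transitively affected by an edit; the module names and the analyze_module stand-in are invented for this example.

        # Minimal sketch of incremental, module-granular fixpoint analysis.
        # All names (the modules, analyze_module) are hypothetical.
        from collections import deque

        # module -> modules that import it (reverse dependencies)
        REVERSE_DEPS = {"list_utils": {"sorting"}, "sorting": {"main"}, "main": set()}

        def analyze_module(mod, summaries):
            # Stand-in for a context-sensitive analysis of one module: its
            # summary depends on the summaries of the modules it imports,
            # so a change in a dependency propagates upward.
            deps = {m for m, users in REVERSE_DEPS.items() if mod in users}
            return (mod, tuple(sorted(summaries.get(d, ()) for d in deps)))

        def incremental_fixpoint(changed, summaries):
            # Re-analyze only modules transitively affected by `changed`.
            worklist = deque(changed)
            while worklist:
                mod = worklist.popleft()
                new_summary = analyze_module(mod, summaries)
                if summaries.get(mod) != new_summary:       # summary changed:
                    summaries[mod] = new_summary
                    worklist.extend(REVERSE_DEPS[mod])      # revisit importers
            return summaries

        summaries = incremental_fixpoint(set(REVERSE_DEPS), {})      # from scratch
        summaries = incremental_fixpoint({"list_utils"}, summaries)  # small edit

    The point of the sketch is the reuse: after the initial run, the second call re-analyzes only list_utils and propagates to its importers only if its summary actually changed, mirroring the coarse, module-level granularity the paper contrasts with line-level incremental fixpoints.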

    Concerns about Cybersecurity: The Implications of the use of ICT for Citizens and Companies

    The widespread use of Information and Communication Technologies (ICT) substantially increases the risks related to information security. In fact, due to the increase in the number and types of cyber attacks, cybersecurity has become a growing concern in today's society. This phenomenon affects not only individual citizens, but also companies and even State entities. Despite the numerous advantages of this "digitalisation" of society, there are several risks, including identity theft, scam emails or phone calls, online fraud, offensive material and child pornography, material promoting racial hatred or religious extremism, access to online services, email account hacking, online banking fraud, cyber extortion, and malicious software. In order to determine the impact that cyber attacks have on society, it is necessary to understand how people and companies use ICT, such as social networks, the information they share, their privacy concerns, and their use of electronic services such as online payments or the cloud. This study is therefore central not only to preventing or minimising current risks by showing what has already been done in this area, but, more importantly, to charting the way forward for preventing or minimising possible risks in the future.

    PyGGI 2.0: Language independent genetic improvement framework

    PyGGI is a research tool for Genetic Improvement (GI) that is designed to be versatile and easy to use. We present version 2.0 of PyGGI, the main feature of which is an XML-based intermediate program representation. It allows users to easily define GI operators and algorithms that can be reused with multiple target languages. Using the new version of PyGGI, we present two case studies. First, we conduct an Automated Program Repair (APR) experiment with the QuixBugs benchmark, which contains defective programs in both Python and Java. Second, we replicate an existing work on runtime improvement through program specialisation for the MiniSAT satisfiability solver. PyGGI 2.0 was able to generate a patch for a bug not previously fixed by any APR tool. It was also able to achieve a 14% runtime improvement in the case of MiniSAT. The presented results show the applicability and the expressiveness of the new version of PyGGI. A video of the tool demo is at: https://youtu.be/PxRUdlRDS40
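
    Independently of PyGGI's actual API (which is not reproduced here), the basic GI recipe the tool automates is easy to sketch: mutate a list-of-statements program representation with classic edit operators and keep variants that score better on a fitness function. Everything below, including the toy program and the fitness stand-in, is invented for illustration.

        # Hedged sketch of a generic genetic-improvement loop; NOT PyGGI's API.
        import random

        SOURCE = ["x = a", "y = b", "return x + y"]   # toy statement-level program

        def mutate(program):
            # One random edit: delete, copy, or swap a statement (the classic
            # statement-level GI operators).
            p = list(program)
            op = random.choice(["delete", "copy", "swap"])
            i, j = random.randrange(len(p)), random.randrange(len(p))
            if op == "delete" and len(p) > 1:
                del p[i]
            elif op == "copy":
                p.insert(i, p[j])
            else:
                p[i], p[j] = p[j], p[i]
            return p

        def fitness(program):
            # Stand-in for running a test suite or measuring runtime.
            return -abs(len(program) - 2)

        best = SOURCE
        for _ in range(100):          # simple first-improvement local search
            candidate = mutate(best)
            if fitness(candidate) >= fitness(best):
                best = candidate
        print(best)

    An XML-based intermediate representation, as in PyGGI 2.0, lets the same operators target any language that can be parsed into that representation, which is what makes the framework language independent.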

    A Decade of Code Comment Quality Assessment: A Systematic Literature Review

    Code comments are important artifacts in software systems and play a paramount role in many software engineering (SE) tasks related to maintenance and program comprehension. However, while it is widely accepted that high quality matters in code comments just as it matters in source code, assessing comment quality in practice is still an open problem. First and foremost, there is no unique definition of quality when it comes to evaluating code comments. The few existing studies on this topic focus instead on specific attributes of quality that can be easily quantified and measured. Existing techniques and corresponding tools may also focus on comments bound to a specific programming language, and may only deal with comments with specific scopes and clear goals (e.g., Javadoc comments at the method level, or in-body comments describing TODOs to be addressed). In this paper, we present a Systematic Literature Review (SLR) of the last decade of research in SE to answer the following research questions: (i) What types of comments do researchers focus on when assessing comment quality? (ii) What quality attributes (QAs) do they consider? (iii) Which tools and techniques do they use to assess comment quality? (iv) How do they evaluate their studies on comment quality assessment in general? Our evaluation, based on the analysis of 2353 papers and the actual review of 47 relevant ones, shows that (i) most studies and techniques focus on comments in Java code, and thus may not generalize to other languages, and (ii) the analyzed studies focus on four main QAs out of a total of 21 QAs identified in the literature, with a clear predominance of checking consistency between comments and the code. We observe that researchers rely on manual assessment and specific heuristics rather than automated assessment of comment quality attributes, with evaluations often involving surveys of students and the authors of the original studies but rarely professional developers.
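
    To make the most-studied QA, consistency between comments and the code, concrete: one toy heuristic in that spirit is to flag function parameters that a docstring never mentions. The sketch below is purely illustrative and far simpler than the tools surveyed in the review.

        # Toy comment/code-consistency heuristic: report parameters that the
        # docstring does not mention. Illustrative only.
        import ast
        import textwrap

        def undocumented_params(source):
            issues = []
            for node in ast.walk(ast.parse(textwrap.dedent(source))):
                if isinstance(node, ast.FunctionDef):
                    doc = ast.get_docstring(node) or ""
                    for arg in node.args.args:
                        if arg.arg not in doc:
                            issues.append((node.name, arg.arg))
            return issues

        code = '''
        def scale(values, factor):
            """Multiply every element of values by a constant."""
            return [v * factor for v in values]
        '''
        print(undocumented_params(code))   # -> [('scale', 'factor')]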

    Decentralized Finance – A Systematic Literature Review and Research Directions

    Decentralized Finance (DeFi) is the (r)evolutionary movement to create a solely code-based, intermediary-independent financial system, a movement which has grown from $4bn to $104bn in assets locked over the last three years. We present the first systematic literature review of the as yet fragmented DeFi research field. By identifying, analyzing, and integrating 83 peer-reviewed DeFi-related publications, our results contribute in five ways. First, we confirm the increasing growth of academic DeFi publications through systematic analysis. Second, we frame DeFi-related literature into three levels of abstraction (micro, meso, and macro) and seven subcategories. Third, we identify Ethereum as the blockchain in main academic focus. Fourth, we show that prototyping is the dominant research method applied, whereas only one paper has used primary research data. Fifth, we derive four prioritized research avenues, namely i) DeFi protocol interaction and aggregation platforms, ii) decentralized off-chain data integration into DeFi, iii) DeFi agents, and iv) regulation.

    Suffolk Journal, vol.82, no.15, 3/27/2019

    https://dc.suffolk.edu/journal/1684/thumbnail.jp

    Recommending Analogical APIs via Knowledge Graph Embedding

    Library migration, which re-implements the same software behavior by using a different library instead of the current one, has been widely observed in software evolution. One essential part of library migration is to find an analogical API that provides the same functionality as the current one. However, given the large number of libraries/APIs, manually finding an analogical API can be very time-consuming and error-prone. Researchers have developed multiple automated analogical API recommendation techniques, among which documentation-based methods have attracted particular interest. Despite their potential, these methods have limitations, such as a lack of comprehensive semantic understanding of documentation and scalability challenges. In this work, we propose KGE4AR, a novel documentation-based approach that leverages knowledge graph (KG) embedding to recommend analogical APIs during library migration. Specifically, KGE4AR proposes a novel unified API KG to comprehensively and structurally represent three types of knowledge in documentation, which can better capture the high-level semantics. KGE4AR then embeds the unified API KG into vectors, enabling more effective and scalable similarity calculation. We build KGE4AR's unified API KG for 35,773 Java libraries and assess it in two API recommendation scenarios: with and without target libraries. Our results show that KGE4AR substantially outperforms state-of-the-art documentation-based techniques in both evaluation scenarios in terms of all metrics (e.g., 47.1%-143.0% and 11.7%-80.6% MRR improvements in the two scenarios, respectively). Additionally, we explore KGE4AR's scalability, confirming that it scales effectively with the growing number of libraries.

    Comment: Accepted by FSE 202
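
    KGE4AR's actual KG schema and embedding model are not reproduced here; the sketch below only illustrates the final retrieval step that such approaches share, ranking candidate APIs by vector similarity. The API names and random embeddings are made up.

        # Illustrative only: rank candidate APIs by embedding similarity.
        # Names and vectors are invented; this is not KGE4AR's model.
        import numpy as np

        rng = np.random.default_rng(0)
        apis = ["guava.Lists.partition", "commons.ListUtils.partition",
                "gson.Gson.toJson", "jackson.ObjectMapper.writeValueAsString"]
        emb = {a: rng.normal(size=16) for a in apis}   # stand-in KG embeddings

        def cosine(u, v):
            return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

        def recommend(query_api, k=2):
            # Rank every other API by cosine similarity to the query embedding.
            q = emb[query_api]
            scores = {a: cosine(q, v) for a, v in emb.items() if a != query_api}
            return sorted(scores, key=scores.get, reverse=True)[:k]

        print(recommend("guava.Lists.partition"))

    With embeddings trained so that APIs sharing KG structure land close together, the top-ranked candidates would be the functional analogs; with random vectors, as here, the ranking is meaningless and the snippet only demonstrates the mechanics.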

    RML: Runtime Monitoring Language

    Runtime verification is a relatively new software verification technique that aims to prove the correctness of a specific run of a program, rather than statically verify its code. The program is instrumented in order to collect all the relevant information, and the resulting trace of events is inspected by a monitor that verifies its compliance with a specification of the expected properties of the system under scrutiny. Many languages exist that can be used to formally express the expected behavior of a system, with different design choices and degrees of expressivity. This thesis presents RML, a specification language designed for runtime verification, with the goal of being completely modular and independent of the instrumentation and of the kind of system being monitored. RML is highly expressive, and allows one to express complex, parametric, non-context-free properties concisely. RML is compiled down to TC, a lower-level calculus, which is fully formalized with a deterministic, rewriting-based semantics. In order to evaluate the approach, an open-source implementation has been developed, and several examples with Node.js programs have been tested. Benchmarks show the ability of the monitors automatically generated from RML specifications to verify complex properties effectively and efficiently.
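
    RML's concrete syntax and the TC calculus are not shown here; as a stand-in, the hand-written Python monitor below checks one simple parametric property over an event trace (every file that is opened is eventually closed, and never closed without a matching open), to illustrate the kind of verdicts a generated monitor produces.

        # Hand-written trace monitor, illustrative only (not RML or TC).
        # Property: every open(fd) is eventually matched by close(fd).
        def monitor(trace):
            open_fds = set()
            for i, (event, fd) in enumerate(trace):
                if event == "open":
                    open_fds.add(fd)
                elif event == "close":
                    if fd not in open_fds:   # close without a matching open
                        return f"violation at event {i}: close({fd}) before open"
                    open_fds.remove(fd)
            return "ok" if not open_fds else f"violation: never closed {open_fds}"

        print(monitor([("open", 3), ("close", 3)]))               # ok
        print(monitor([("open", 3), ("open", 4), ("close", 3)]))  # never closed {4}

    The property is parametric in fd, the kind of parametricity the thesis highlights; RML additionally supports non-context-free properties, which require more expressive machinery than this example.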

    Santurismo: The Commodification of Santería and the Touristic Value of Afro-Cuban Derived Religions in Cuba

    Santurismo (Santería + Turismo) refers to the popular formula of Afro-Cuban religions plus tourism; it initially served the Cuban government in the 1960s to promote Santería as a folkloric product of Cuban identity through staged performances in touristic surroundings. Gradually, it became a coping strategy for Cuban people dealing with political and economic hardship during the Special Period of the 1990s, which led to the emergence of diplo-santería practiced by so-called jinetero-santeros. While the continuous process of commodification of Cuban Santería is marked by local social, economic, and political influences, it also relates to current tendencies in comparable religious and spiritual phenomena at a global level. This research paper is based on an extensive literature review as well as on ethnographic fieldwork conducted in Cuba in 2016. It aims to show in which ways Afro-Cuban religions have worked their way up from a stigmatized and persecuted religious system to a widely valorized religion in the spiritual and touristic sphere. While warning about the consequences of its commodification, it also shows that, over time, Santería has proved to serve as a weapon of resistance and struggle, which is still ongoing in Cuban society today.