
    Combining static and dynamic analysis for the reverse engineering of web applications

    Software has become so complex that it is increasingly hard to have a complete understanding of how a particular system will behave. Web applications, and their user interfaces in particular, are built with a wide variety of technologies, making them particularly hard to debug and maintain. Reverse engineering techniques, either through static analysis of the code or dynamic analysis of the running application, can be used to help gain this understanding. Each type of technique has its limitations. With static analysis it is difficult to achieve good coverage of highly dynamic applications, while dynamic analysis faces problems in guaranteeing that generated models fully capture the behavior of the system. This paper proposes a new hybrid approach for the reverse engineering of web applications' user interfaces. The approach combines dynamic analysis of the application at runtime with static analysis of the source code of the event handlers found during interaction. Information derived from the source code is both added directly to the generated models and used to guide the dynamic analysis. This work is funded by ERDF - European Regional Development Fund through the COMPETE Programme (operational programme for competitiveness) and by National Funds through the FCT - Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) within project FCOMP-01-0124-FEDER-015095. Carlos Eduardo Silva is further funded by the Portuguese Government through FCT, grant SFRH/BD/71136/2010.
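    A minimal sketch of the general idea, not the paper's tool: a dynamically built UI model is enriched with navigation targets recovered by a rough static scan of event-handler source, and the statically hinted transitions are reported back so the crawler can exercise them. The state/handler representation, the regex heuristic, and the toy handler body are all illustrative assumptions.

```python
# Sketch only: merge facts recovered statically from event-handler source
# into a dynamically built UI model (names and heuristics are assumptions).
import re
from dataclasses import dataclass, field

@dataclass
class UiState:
    name: str
    handlers: dict = field(default_factory=dict)   # event -> handler source seen at runtime
    transitions: set = field(default_factory=set)  # navigation targets observed dynamically

def static_targets(handler_src: str) -> set:
    """Statically guess navigation targets mentioned in a handler body."""
    return set(re.findall(r"""(?:location\.href\s*=|window\.open\()\s*['"]([^'"]+)""",
                          handler_src))

def enrich(model: dict) -> dict:
    """Add statically hinted transitions to the generated model and report
    which ones the dynamic analysis has not exercised yet."""
    pending = {}
    for state in model.values():
        for src in state.handlers.values():
            hinted = static_targets(src) - state.transitions
            if hinted:
                pending.setdefault(state.name, set()).update(hinted)
                state.transitions |= hinted   # grow the generated model
    return pending

# Toy usage: the handler's source reveals a transition that interaction
# alone has not triggered yet.
login = UiState("login",
                handlers={"click#submit": "if (ok) { location.href = '/dashboard'; }"})
print(enrich({"login": login}))   # {'login': {'/dashboard'}}
```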

    An executable formal semantics of PHP with applications to program analysis

    Nowadays, many important activities in our lives involve the web. However, the software and protocols on which web applications are based were not designed with the appropriate level of security in mind. Many web applications have reached a level of complexity for which testing, code reviews and human inspection are no longer sufficient quality-assurance guarantees. Tools that employ static analysis techniques are needed in order to explore all possible execution paths through an application and guarantee the absence of undesirable behaviours. To make sure that an analysis captures the properties of interest, and to navigate the trade-offs between efficiency and precision, it is necessary to base the design and development of static analysis tools on a firm understanding of the language to be analysed. When this underlying knowledge is missing or erroneous, tools cannot be trusted no matter what advanced techniques they use to perform their task. In this thesis, we introduce KPHP, the first executable formal semantics of PHP, one of the most popular languages for server-side web programming. We then demonstrate its practical relevance by developing two verification tools of increasing complexity on top of it: a simple verifier based on symbolic execution and LTL model checking, and a general-purpose, fully configurable and extensible static analyser based on Abstract Interpretation. Our LTL-based tool leverages the existing symbolic execution and model checking support offered by K, our semantics framework of choice, and constitutes a first proof of concept of the usefulness of our semantics. Our abstract interpreter, on the other hand, represents a more significant and novel contribution to the field of static analysis of dynamic scripting languages (PHP in particular). Although our tool is still a prototype and therefore not well suited to handling large real-world codebases, we demonstrate how our semantics-based, principled approach to the development of verification tools has led to the design of static analyses that outperform existing tools and approaches in terms of supported language features, precision, and breadth of possible applications.
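    As a hedged illustration of the abstract-interpretation idea the thesis builds on (this is not KPHP, the K framework, or a PHP semantics), the sketch below interprets a toy expression language over a sign domain, showing how abstract values soundly over-approximate concrete ones at the cost of precision.

```python
# Illustrative only: a tiny abstract interpreter over a sign domain for a toy
# expression language; the language, domain and examples are assumptions.
NEG, ZERO, POS, TOP = "neg", "zero", "pos", "top"   # abstract sign values

def alpha(n: int) -> str:
    """Abstraction: map a concrete integer to its sign."""
    return ZERO if n == 0 else (POS if n > 0 else NEG)

def add(a: str, b: str) -> str:
    """Abstract addition; TOP means 'any sign' (precision is lost, soundly)."""
    if ZERO in (a, b):
        return b if a == ZERO else a
    return a if a == b else TOP    # pos+neg, or anything involving TOP

def mul(a: str, b: str) -> str:
    """Abstract multiplication on signs."""
    if ZERO in (a, b):
        return ZERO
    if TOP in (a, b):
        return TOP
    return POS if a == b else NEG

def eval_abs(expr, env):
    """Evaluate ('num', n) | ('var', x) | ('+', e1, e2) | ('*', e1, e2)."""
    tag = expr[0]
    if tag == "num":
        return alpha(expr[1])
    if tag == "var":
        return env[expr[1]]
    left, right = eval_abs(expr[1], env), eval_abs(expr[2], env)
    return add(left, right) if tag == "+" else mul(left, right)

# x is known to be positive, y is unknown: x*x + 1 is provably positive,
# while x + y cannot be pinned down in this domain.
env = {"x": POS, "y": TOP}
print(eval_abs(("+", ("*", ("var", "x"), ("var", "x")), ("num", 1)), env))  # pos
print(eval_abs(("+", ("var", "x"), ("var", "y")), env))                     # top
```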

    Microservice Transition and its Granularity Problem: A Systematic Mapping Study

    Microservices have gained wide recognition and acceptance in software industries as an emerging architectural style for autonomic, scalable, and more reliable computing. The transition to microservices has been highly motivated by the need for better alignment of technical design decisions with improving value potentials of architectures. Despite microservices' popularity, research still lacks a disciplined understanding of the transition and a consensus on the principles and activities underlying "micro-ing" architectures. In this paper, we report on a systematic mapping study that consolidates various views, approaches and activities that commonly assist in the transition to microservices. The study aims to provide a better understanding of the transition; it also contributes a working definition of the transition and the technical activities underlying it. We refer to the transition and the technical activities leading to microservice architectures as microservitization. We then shed light on a fundamental problem of microservitization: microservice granularity and reasoning about its adaptation as a first-class entity. This study reviews the state of the art and practice related to reasoning about microservice granularity; it covers the modelling approaches, aspects considered, guidelines and processes used to reason about microservice granularity. It also identifies opportunities for future research and development in this area. Comment: 36 pages including references, 6 figures, and 3 tables
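    As a hedged illustration only (none of this comes from the mapping study), the sketch below treats a microservice's granularity as a first-class, measurable entity about which split/merge decisions can be reasoned; the metrics and thresholds are invented for the example.

```python
# Invented example: granularity as a first-class entity with a rough heuristic.
from dataclasses import dataclass

@dataclass
class GranularitySnapshot:
    service: str
    operations: int        # number of exposed API operations
    owned_entities: int    # data entities the service is the source of truth for
    runtime_callees: int   # distinct services it calls per request, on average

def adaptation_hint(g: GranularitySnapshot,
                    max_operations: int = 20,
                    max_callees: int = 5) -> str:
    """Very rough split/merge heuristic over the granularity snapshot."""
    if g.operations > max_operations or g.owned_entities > max_operations // 2:
        return f"{g.service}: consider splitting (too coarse-grained)"
    if g.runtime_callees > max_callees and g.operations <= 3:
        return f"{g.service}: consider merging (too fine-grained and chatty)"
    return f"{g.service}: granularity looks acceptable"

print(adaptation_hint(GranularitySnapshot("orders", operations=34,
                                          owned_entities=12, runtime_callees=4)))
print(adaptation_hint(GranularitySnapshot("vat-rate", operations=1,
                                          owned_entities=1, runtime_callees=8)))
```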

    Community detection applied on big linked data

    The Linked Open Data (LOD) Cloud has more than tripled its sources in just six years (from 295 sources in 2011 to 1,163 datasets in 2017). The current Web of Data contains more than 150 billion triples. We are witnessing a staggering growth in the production and consumption of LOD and the generation of increasingly large datasets. In this scenario, providing researchers, domain experts, but also business people and citizens with visual representations and intuitive interactions can significantly aid the exploration and understanding of the domains and knowledge represented by Linked Data. Various tools and web applications have been developed to enable the navigation and browsing of the Web of Data. However, these tools fall short in producing high-level representations of large datasets and in supporting users in the exploration and querying of these big sources. Following this trend, we devised a new method and a tool called H-BOLD (High level visualizations on Big Open Linked Data). H-BOLD enables the exploratory search and multilevel analysis of Linked Open Data, offering different levels of abstraction on Big Linked Data. Through user interaction and the dynamic adaptation of the graph representing the dataset, users can perform an effective exploration of the dataset, starting from a few classes and progressively adding new ones. The performance and portability of H-BOLD have been evaluated on the SPARQL endpoints listed on SPARQL ENDPOINT STATUS. The effectiveness of H-BOLD as a visualization tool is demonstrated through a user study.
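    A rough sketch of the underlying idea, not H-BOLD's actual pipeline: summarise a Linked Data source as a class-level graph whose edge weights count the triples linking instances of two classes, then cluster it so that each community can serve as one expandable super-node in a high-level view. The class-pair counts are invented, and the use of networkx's greedy modularity clustering is an assumption.

```python
# Sketch: cluster a class-level summary graph of a Linked Data source.
# Requires networkx; the class-pair counts below are invented toy data.
from collections import Counter
import networkx as nx
from networkx.algorithms import community

# (subject class, object class) -> number of linking triples, as could be
# obtained from a SPARQL endpoint with a COUNT query grouped by class pair.
class_links = Counter({
    ("Person", "Article"): 120, ("Person", "Organisation"): 40,
    ("Article", "Journal"): 90,  ("Gene", "Protein"): 200,
    ("Protein", "Pathway"): 150, ("Article", "Gene"): 5,
})

G = nx.Graph()
for (c1, c2), n in class_links.items():
    G.add_edge(c1, c2, weight=n)

# Greedy modularity clustering yields the coarse level of abstraction:
# each community can be rendered as one super-node the user then expands.
for i, com in enumerate(community.greedy_modularity_communities(G, weight="weight")):
    print(f"cluster {i}: {sorted(com)}")
```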

    Measuring third party tracker power across web and mobile

    Third-party networks collect vast amounts of data about users via websites and mobile applications. Consolidations among tracker companies can significantly increase their individual tracking capabilities, prompting scrutiny by competition regulators. Traditional measures of market share, based on revenue or sales, fail to represent the tracking capability of a tracker, especially if it spans both web and mobile. This paper proposes a new approach to measuring the concentration of tracking capability, based on the reach of a tracker across popular websites and apps. Our results reveal that tracker prominence and parent-subsidiary relationships have a significant impact on accurately measuring concentration.
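    A minimal sketch of a reach-based concentration measure in the spirit of the paper's argument; the toy presence data, the ownership map, and the Herfindahl-style index are illustrative assumptions, not the paper's methodology. Subsidiaries are folded into their parent before reach is computed, so consolidation visibly raises the parent's share.

```python
# Toy reach-based concentration measure (all data and thresholds are invented).
from collections import defaultdict

# Which trackers were observed on which popular sites/apps.
presence = {
    "doubleclick.net":      {"news.example", "shop.example", "app.weather"},
    "google-analytics.com": {"news.example", "blog.example"},
    "facebook.com":         {"shop.example", "app.social"},
    "smalltracker.io":      {"blog.example"},
}
# Parent-subsidiary relationships: subsidiaries are folded into the parent.
parent = {"doubleclick.net": "Google", "google-analytics.com": "Google",
          "facebook.com": "Meta", "smalltracker.io": "SmallCo"}

universe = set().union(*presence.values())            # all observed sites/apps
coverage = defaultdict(set)
for tracker, sites in presence.items():
    coverage[parent[tracker]] |= sites                # consolidate by owner

reach = {owner: len(sites) / len(universe) for owner, sites in coverage.items()}
total = sum(reach.values())
shares = {owner: r / total for owner, r in reach.items()}   # normalised shares
hhi = sum(s * s for s in shares.values())                   # Herfindahl-style index

print(reach)            # consolidation lifts Google's reach above either subsidiary alone
print(round(hhi, 3))
```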

    Island time and the interplay between ecology and evolution in species diversification.

    Research on the dynamics of biodiversity has progressed tremendously over recent years, although in two separate directions - ecological, to determine change over space at a given time, and evolutionary, to understand change over time. Integration of these approaches has remained elusive. Archipelagoes with a known geological chronology provide an opportunity to study ecological interactions over evolutionary time. Here, I focus on the Hawaiian archipelago and summarize the development of ecological and evolutionary research; I emphasize spiders because they have attributes allowing analysis of ecological affinities in concert with diversification. Within this framework, I highlight recent insights from the island chronosequence, in particular the importance of (i) selection and genetic drift in generating diversity; (ii) fusion and fission in fostering diversification; and (iii) variability upon which selection can act. Insights into biodiversity dynamics at the nexus of ecology and evolution are now achievable by integrating new tools, in particular (i) ecological metrics (interaction networks, maximum entropy inference) across the chronosequence to uncover community dynamics and (ii) genomic tools to understand contemporaneous microevolutionary change. The work can inform applications of invasion and restoration ecology by elucidating the importance of changes in abundances, interaction strengths, and rates of evolutionary response in shaping biodiversity
