21 research outputs found

    Dissecting the Workload of a Major Adult Video Portal

    Adult content constitutes a major source of Internet traffic. As with many other platforms, these sites are incentivized to engage users and retain them on the site. This engagement (e.g., through recommendations) shapes the journeys users take through such sites. Using data from a large content delivery network, we explore session journeys within an adult website from two perspectives. We first inspect the corpus available on the platform, and then investigate session access patterns. We make a number of observations that could be exploited for optimizing delivery, e.g., that users often skip within video streams.

    Adequate monitoring of service compositions

    Monitoring is essential to validate the runtime behaviour of dynamic distributed systems. However, while monitors can report relevant events as they occur, by their very nature they say nothing about events that do not happen. In service-oriented applications it would be desirable to have means to assess the thoroughness of the interactions among the monitored services. When some events, message sequences, or interaction patterns have not been observed for a while, one could promptly check whether something is going wrong. In this paper, we introduce the novel notion of monitoring adequacy, which is generic and can be defined on different entities. We then define two adequacy criteria for service compositions and implement a proof-of-concept adequate monitoring framework. We validate the approach on two case studies, the Travel Reservation System and the Future Market choreographies.
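The core idea of monitoring adequacy can be sketched in a few lines: track when each expected interaction pattern was last observed, and flag patterns that have been silent for too long. The sketch below is a minimal illustration under assumed names (`AdequacyMonitor`, `stale_patterns` are hypothetical), not the paper's actual criteria or framework.

```python
# Minimal sketch of an adequacy check: expected interaction patterns that
# stay unobserved beyond a silence threshold are flagged for inspection.
class AdequacyMonitor:
    def __init__(self, expected_patterns, max_silence):
        # max_silence: seconds a pattern may go unobserved before flagging
        self.max_silence = max_silence
        self.last_seen = {p: None for p in expected_patterns}

    def observe(self, pattern, timestamp):
        # Record the latest occurrence of a monitored pattern
        if pattern in self.last_seen:
            self.last_seen[pattern] = timestamp

    def stale_patterns(self, now):
        # Patterns never seen, or silent for longer than the threshold
        return [p for p, t in self.last_seen.items()
                if t is None or now - t > self.max_silence]

monitor = AdequacyMonitor(["book_flight", "confirm_payment"], max_silence=60)
monitor.observe("book_flight", timestamp=100)
print(monitor.stale_patterns(now=130))  # ['confirm_payment']
```

A flagged pattern is not necessarily a failure: as the abstract notes, its absence is a prompt to check whether something is going wrong.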

    Smart Contract Testing: Challenges and Opportunities

    Blockchain technologies have found important and concrete applications in the real world. Active solutions leverage Smart Contracts for the management of cryptocurrencies, sensitive data, and other valuable assets. One of the core objectives of blockchain-oriented software engineering (BOSE) is ensuring that Smart Contracts receive adequate pre-release testing to guarantee the deployment of reliable code. However, the novelty and the complexity of the blockchain environment pose new challenges to the validation and verification of Smart Contract-based software. In this paper, we analyze the aforementioned challenges to foster the discussion on the specific topic of Smart Contract testing and identify relevant research directions.

    SuMo: A Mutation Testing Strategy for Solidity Smart Contracts

    Smart Contracts are software programs that are deployed and executed within a blockchain infrastructure. Due to their immutable nature, which directly results from the specific characteristics of the deployment infrastructure, smart contracts must be thoroughly tested before their release. Testing is one of the main activities that can improve the reliability of a smart contract and possibly prevent considerable loss of valuable assets. It is therefore important to provide testers with tools that let them assess the activity they performed. Mutation testing is a powerful approach for assessing the fault-detection capability of a test suite. In this paper, we propose SuMo, a novel mutation testing tool for Ethereum Smart Contracts. SuMo implements a set of 44 mutation operators that were designed starting from the latest Solidity documentation and from well-known mutation testing tools. These make it possible to simulate a wide variety of faults that smart contract developers can introduce. The set of operators was designed to limit the generation of stillborn mutants, which slow down the mutation testing process and limit the usability of the tool. We report a first evaluation of SuMo on open-source projects for which test suites were available. The results are encouraging and suggest that SuMo can effectively help developers deliver more reliable smart contracts.
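To illustrate what a single mutation operator does (this is a toy example, not one of SuMo's 44 operators), the sketch below swaps a relational operator in a line of Solidity source, producing one mutant per occurrence; a test suite "kills" a mutant when some test fails against it.

```python
import re

def mutate_relational(source):
    """Yield one mutant per occurrence of '>=', replacing it with '<'.

    A classic relational-operator mutation: it simulates an off-by-one or
    inverted boundary check that a developer could plausibly introduce.
    """
    for match in re.finditer(r">=", source):
        yield source[:match.start()] + "<" + source[match.end():]

contract_line = "require(balance >= amount);"
for mutant in mutate_relational(contract_line):
    print(mutant)  # require(balance < amount);
```

Real mutation tools generate such variants systematically over the whole source, compile each mutant, and discard "stillborn" ones that fail to compile before running the test suite.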

    A Data Extraction Methodology for Ethereum Smart Contracts

    The broader adoption of blockchain for creating decentralised applications has raised interest in employing analysis techniques to support continuous improvement. Data extraction is crucial in this context, as it permits a better understanding of how applications behave. However, due to the variety of data sources (e.g., transactions and events) and the characterisation of the blockchain structure, several challenges arise in automatically extracting data. In particular, retrieving smart contract state changes remains unexplored despite its potential usage for discovering unexpected behaviour. For such reasons, this work proposes a methodology and a supporting tool for extracting data from smart contract executions and state changes. The obtained data is then offered in a form that can easily be converted to purpose-specific standards. The methodology was tested on the PancakeSwap Ethereum bridge smart contract.

    ReSuMo: Regression Mutation Testing for Solidity Smart Contracts

    Mutation testing is a powerful test adequacy assessment technique that can guarantee the deployment of more reliable Smart Contract code. Developers add new features, fix bugs, and refactor modern distributed applications at a quick pace, so they must perform continuous re-testing to ensure that the project's evolution does not break existing functionality. However, regularly re-running the entire test suite can be time-intensive, especially when mutation testing is involved. This paper presents ReSuMo, the first regression mutation testing approach and tool for Solidity Smart Contracts. ReSuMo uses a static, file-level technique to select a subset of Smart Contracts to mutate and a subset of test files to re-run during a regression mutation testing campaign. ReSuMo incrementally updates the mutation testing results considering the outcomes of the old program version; in this way, it can speed up mutation testing on evolving projects without compromising the mutation score.
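A file-level regression selection can be sketched as follows: hash each contract's source, compare against the digests from the previous campaign, and select only changed contracts plus the test files that reference them. This is an illustrative simplification under assumed names (`select_for_retest` is hypothetical), not ReSuMo's actual implementation.

```python
import hashlib

def digest(text):
    # Content hash used to detect whether a file changed between campaigns
    return hashlib.sha256(text.encode()).hexdigest()

def select_for_retest(contracts, tests, previous_digests):
    # contracts: {file name: source}; tests: {test name: set of contracts used}
    changed = {name for name, src in contracts.items()
               if previous_digests.get(name) != digest(src)}
    # Re-run only tests touching at least one changed contract
    tests_to_run = {t for t, used in tests.items() if used & changed}
    return changed, tests_to_run

contracts = {"Token.sol": "contract Token { uint s; }",
             "Vault.sol": "contract Vault { uint fee = 2; }"}
tests = {"token.test.js": {"Token.sol"},
         "vault.test.js": {"Vault.sol", "Token.sol"}}
old = {"Token.sol": digest("contract Token { uint s; }"),
       "Vault.sol": digest("contract Vault { uint fee = 1; }")}  # Vault changed

changed, rerun = select_for_retest(contracts, tests, old)
print(changed)  # {'Vault.sol'}
print(rerun)    # {'vault.test.js'}
```

Only the mutants of changed files need re-evaluation; results for unchanged files can be carried over from the previous campaign, which is where the speed-up comes from.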

    CATANA: Replay Testing for the Ethereum Blockchain

    Blockchain technology is increasingly being adopted in various domains where the immutability of recorded information can foster trust among stakeholders. However, upgradeability mechanisms such as the proxy pattern permit modifying the terms encoded by a Smart Contract even after its deployment. Ensuring that such changes do not impact previous users is of paramount importance. This paper introduces CATANA, a replay testing approach for proxy-based Ethereum applications. Experiments conducted on real-world projects demonstrate the viability of using the public history of transactions to evaluate new versions of a deployed contract and perform more reliable upgrades.
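The essence of replay testing can be sketched schematically: re-execute the recorded transaction history against the upgraded logic and report any divergence from the original behaviour. The code below is an illustrative abstraction (plain Python objects stand in for contract versions), not CATANA's implementation, which operates on actual on-chain transactions.

```python
def replay(history, old_impl, new_impl):
    # history: list of (function_name, args) recorded from past transactions
    divergences = []
    for fn, args in history:
        old_out = getattr(old_impl, fn)(*args)
        new_out = getattr(new_impl, fn)(*args)
        if old_out != new_out:
            # The upgrade changes observable behaviour for this call
            divergences.append((fn, args, old_out, new_out))
    return divergences

class V1:
    def fee(self, amount):
        return amount // 100        # 1% fee

class V2:
    def fee(self, amount):
        return amount * 2 // 100    # 2% fee: a behaviour change

history = [("fee", (100,)), ("fee", (50,))]
print(replay(history, V1(), V2()))  # both recorded calls diverge
```

An empty divergence list does not prove the upgrade is safe, only that it preserves behaviour on the inputs actually seen so far.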

    Process variance analysis and configuration in the Public Administration sector

    This paper presents a three-layered methodology to compare variants of services offered by Municipalities, with the main aim of improving business process re-engineering as well as other significant phases of the software life cycle, such as configuration and maintenance. The methodology makes it possible to detect discrepancies or alignments among service variants. It relies on execution logs and applies clustering algorithms to reduce the huge amount of available logs to a few clusters of "equivalent" executions. Variance mining then becomes a cornerstone for comparing cluster representatives, and it enables analysis of the services offered, or of those a specific Municipality would like to offer. The methodology has been validated on real case studies.

    Label-independent feature engineering-based clustering in Public Administration Event Logs

    Process mining algorithms infer business models by analyzing log files derived from the execution of business activities in organizations. In this paper, a label-independent clustering methodology is proposed. It allows an analysis that is completely agnostic with respect to the nature and domain knowledge of the process logs. The methodology is totally data-driven and is based on features that do not depend on activity labels and do not need model extraction at all, thus not requiring the four quality dimensions of a mining discovery algorithm to be satisfied. Due to its independence from activity labels, the methodology is very flexible and applicable in different scenarios. It was tested on the process logs of a municipality of twenty thousand inhabitants, showing good performance when evaluated using a process discovery algorithm.
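To make "label-independent features" concrete, the sketch below describes each trace only by structural properties (its length and how often events repeat); activity labels are used solely to count repetitions and never appear as feature values. The feature set is an assumed illustration, not the one used in the paper.

```python
def trace_features(trace):
    # trace: ordered list of activity labels from one process execution.
    # Labels are only compared for equality, never used as values, so the
    # features carry no domain knowledge about the process.
    length = len(trace)
    distinct = len(set(trace))
    repetition_ratio = 1 - distinct / length if length else 0.0
    return (length, round(repetition_ratio, 2))

log = [
    ["a", "b", "c"],            # short trace, no repeated activities
    ["a", "b", "a", "b", "a"],  # longer trace, heavy repetition (loops)
]
print([trace_features(t) for t in log])  # [(3, 0.0), (5, 0.6)]
```

Traces mapped to such vectors can then be fed to any standard clustering algorithm, with the resulting clusters evaluated afterwards by a process discovery algorithm, as the abstract describes.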