
    A two-level Markov model for packet loss in UDP/IP-based real-time video applications targeting residential users

    The packet loss characteristics of Internet paths that include residential broadband links are not well understood, and there are no good models for their behaviour. This complicates the design of real-time video applications targeting home users, since it is difficult to choose appropriate error correction and concealment algorithms without a good model for the types of loss observed. Using measurements of residential broadband networks in the UK and Finland, we show that existing models for packet loss, such as the Gilbert model and simple hidden Markov models, do not effectively model the loss patterns seen in this environment. We present a new two-level Markov model for packet loss that can more accurately describe the characteristics of these links, and quantify the effectiveness of this model. We demonstrate that our new packet loss model allows for improved application design, by using it to model the performance of forward error correction on such links.
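    A minimal sketch of the two-state Gilbert baseline the abstract argues against may help fix ideas: loss is driven by a hidden Good/Bad chain, and the paper's two-level model layers a second chain over these transitions. All parameter values below are illustrative, not measurements from the paper.

        import random

        def gilbert_loss_trace(n, p_gb=0.05, p_bg=0.30, loss_in_bad=1.0, seed=1):
            """Simulate a packet loss trace under the classical Gilbert model.

            Two hidden states: Good (no loss) and Bad (loss with probability
            loss_in_bad). p_gb = P(Good -> Bad), p_bg = P(Bad -> Good).
            """
            rng = random.Random(seed)
            state_bad = False
            trace = []
            for _ in range(n):
                if state_bad:
                    lost = rng.random() < loss_in_bad
                    if rng.random() < p_bg:
                        state_bad = False
                else:
                    lost = False
                    if rng.random() < p_gb:
                        state_bad = True
                trace.append(lost)
            return trace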

    Evaluating implicit feedback models using searcher simulations

    In this article we describe an evaluation of relevance feedback (RF) algorithms using searcher simulations. Since these algorithms select additional terms for query modification based on inferences made from searcher interaction, not on relevance information searchers explicitly provide (as in traditional RF), we refer to them as implicit feedback models. We introduce six different models that base their decisions on the interactions of searchers and use different approaches to rank query modification terms. The aim of this article is to determine which of these models should be used to assist searchers in the systems we develop. To evaluate these models we used searcher simulations that afforded us more control over the experimental conditions than experiments with human subjects and allowed complex interaction to be modeled without the need for costly human experimentation. The simulation-based evaluation methodology measures how well the models learn the distribution of terms across relevant documents (i.e., learn what information is relevant) and how well they improve search effectiveness (i.e., create effective search queries). Our findings show that an implicit feedback model based on Jeffrey's rule of conditioning outperformed the other models under investigation.
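    The abstract does not spell the winning model out, but Jeffrey's rule of conditioning has a compact closed form: it revises a probability when the conditioning evidence is itself uncertain, which suits implicit feedback, where relevance is only inferred from clicks or dwell time rather than stated. A hedged one-function sketch (how interactions map to the uncertain relevance probability is an assumption, not the paper's specification):

        def jeffrey_update(p_term_given_rel, p_term_given_nonrel, p_rel):
            """Jeffrey's rule of conditioning for a single term:

            P'(term) = P(term | rel) * P'(rel) + P(term | not rel) * (1 - P'(rel)),

            where P'(rel) is the inferred (uncertain) probability that the
            viewed document is relevant, e.g. estimated from dwell time.
            """
            return p_term_given_rel * p_rel + p_term_given_nonrel * (1.0 - p_rel)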

    Harvey: A Greybox Fuzzer for Smart Contracts

    We present Harvey, an industrial greybox fuzzer for smart contracts, which are programs managing accounts on a blockchain. Greybox fuzzing is a lightweight test-generation approach that effectively detects bugs and security vulnerabilities. However, greybox fuzzers randomly mutate program inputs to exercise new paths; this makes it challenging to cover code that is guarded by narrow checks, which are satisfied by no more than a few input values. Moreover, most real-world smart contracts transition through many different states during their lifetime, e.g., for every bid in an auction. To explore these states and thereby detect deep vulnerabilities, a greybox fuzzer would need to generate sequences of contract transactions, e.g., by creating bids from multiple users, while at the same time keeping the search space and test suite tractable. In this experience paper, we explain how Harvey alleviates both challenges with two key fuzzing techniques and distill the main lessons learned. First, Harvey extends standard greybox fuzzing with a method for predicting new inputs that are more likely to cover new paths or reveal vulnerabilities in smart contracts. Second, it fuzzes transaction sequences in a targeted and demand-driven way. We have evaluated our approach on 27 real-world contracts. Our experiments show that the underlying techniques significantly increase Harvey's effectiveness in achieving high coverage and detecting vulnerabilities, in most cases orders of magnitude faster; they also reveal new insights about contract code.
    Comment: arXiv admin note: substantial text overlap with arXiv:1807.0787
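    For orientation, the baseline that Harvey extends is the standard coverage-guided greybox loop sketched below; the paper's two contributions (input prediction and demand-driven transaction-sequence fuzzing) replace the blind random-mutation step. This is a generic sketch, not Harvey's actual algorithm; execute and mutate are assumed callbacks.

        import random

        def greybox_fuzz(execute, mutate, seeds, budget=10_000):
            """Standard greybox loop: keep an input as a seed only if it
            reaches coverage not seen before. execute(x) returns the set of
            covered branches; mutate(x) returns a randomly mutated input.
            """
            corpus = list(seeds)
            seen = set()
            for inp in corpus:
                seen |= execute(inp)
            for _ in range(budget):
                candidate = mutate(random.choice(corpus))
                cov = execute(candidate)
                if cov - seen:          # new branch reached: keep the input
                    seen |= cov
                    corpus.append(candidate)
            return corpus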

    Intelligent intrusion detection in low power IoTs


    A simulated study of implicit feedback models

    In this paper we report on a study of implicit feedback models for unobtrusively tracking the information needs of searchers. Such models use relevance information gathered from searcher interaction and can be a potential substitute for explicit relevance feedback. We introduce a variety of implicit feedback models designed to enhance an Information Retrieval (IR) system's representation of searchers' information needs. To benchmark their performance we use a simulation-centric evaluation methodology that measures how well each model learns relevance and improves search effectiveness. The results show that a heuristic-based binary voting model and one based on Jeffrey's rule of conditioning [5] outperform the other models under investigation.
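    The abstract names the binary voting model only in passing; a plausible reading, sketched here purely as an assumption rather than the paper's exact model, is that every observed interaction casts one vote per distinct term in the text the searcher attended to, and the top-voted terms are used for query modification.

        from collections import Counter

        def binary_vote_terms(viewed_texts, top_k=10):
            """Heuristic binary-voting sketch: one vote per distinct term per
            viewed text; rank query-modification terms by total votes."""
            votes = Counter()
            for text in viewed_texts:
                for term in set(text.lower().split()):
                    votes[term] += 1
            return [term for term, _ in votes.most_common(top_k)]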

    Targeted Greybox Fuzzing with Static Lookahead Analysis

    Automatic test generation typically aims to generate inputs that explore new paths in the program under test in order to find bugs. Existing work has, therefore, focused on guiding the exploration toward program parts that are more likely to contain bugs by using an offline static analysis. In this paper, we introduce a novel technique for targeted greybox fuzzing using an online static analysis that guides the fuzzer toward a set of target locations, for instance, located in recently modified parts of the program. This is achieved by first semantically analyzing each program path that is explored by an input in the fuzzer's test suite. The results of this analysis are then used to control the fuzzer's specialized power schedule, which determines how often to fuzz inputs from the test suite. We implemented our technique by extending a state-of-the-art, industrial fuzzer for Ethereum smart contracts and evaluate its effectiveness on 27 real-world benchmarks. Using an online analysis is particularly suitable for the domain of smart contracts since it does not require any code instrumentation; instrumenting contracts changes their semantics. Our experiments show that targeted fuzzing significantly outperforms standard greybox fuzzing for reaching 83% of the challenging target locations (up to a 14x median speed-up).
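    The power schedule itself is easy to picture: seeds whose explored paths come closer to a target location receive more fuzzing energy. The exponential shaping function below is an illustrative assumption; the paper derives its guidance from the online static lookahead analysis, which this sketch abstracts as a distance callback.

        def power_schedule(corpus, distance, base_energy=16, max_energy=1024):
            """Assign fuzzing energy per seed, exponentially favouring seeds
            whose paths are closer to the targets (distance 0 = target hit).
            distance(seed) is assumed to come from the online path analysis."""
            return {
                seed: min(max_energy, base_energy * 2 ** max(0, 8 - distance(seed)))
                for seed in corpus
            }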

    Searching for test data with feature diversity

    There is an implicit assumption in software testing that more diverse and varied test data is needed for effective testing and to achieve different types and levels of coverage. Generic approaches based on information theory to measure, and thus implicitly to create, diverse data have also been proposed. However, if the tester is able to identify features of the test data that are important for the particular domain or context in which the testing is being performed, generic diversity measures such as these may be neither sufficient nor efficient for creating test inputs that show diversity in terms of those features. Here we investigate different approaches to find data that are diverse according to a specific set of features, such as length or depth of recursion. Even though these features are less general than measures based on information theory, their use may give a tester more direct control over the type of diversity present in the test data. Our experiments are carried out in the context of a general test data generation framework that can generate both numerical and highly structured data. We compare random sampling for feature diversity to different search-based approaches and find hill climbing search to be efficient. The experiments highlight many trade-offs that need to be taken into account when searching for diversity. We argue that recurrent test data generation motivates building statistical models that can then help to more quickly achieve feature diversity.
    Comment: This version was submitted on April 14th 201
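    One concrete instantiation of the search they find efficient, written here as a hedged sketch: hill climbing that accepts a mutation only when it increases the minimum pairwise distance between the inputs' feature vectors (e.g. length and recursion depth). The mutate and features callbacks are assumptions standing in for the paper's generation framework.

        import random

        def hill_climb_diverse(inputs, mutate, features, steps=1000):
            """Greedy hill climbing for feature diversity over a fixed-size
            population (needs at least two inputs)."""
            def min_pairwise(pop):
                vecs = [features(x) for x in pop]
                return min(
                    sum((a - b) ** 2 for a, b in zip(u, v))
                    for i, u in enumerate(vecs)
                    for v in vecs[i + 1:]
                )

            pop = list(inputs)
            score = min_pairwise(pop)
            for _ in range(steps):
                i = random.randrange(len(pop))
                candidate = pop[:i] + [mutate(pop[i])] + pop[i + 1:]
                new_score = min_pairwise(candidate)
                if new_score > score:       # keep only improving moves
                    pop, score = candidate, new_score
            return pop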

    Defect prediction with bad smells in code

    Background: Defect prediction in software can be highly beneficial for development projects, when prediction is highly effective and defect-prone areas are predicted correctly. One of the key elements of effective software defect prediction is proper selection of the metrics used for dataset preparation.
    Objective: The purpose of this research is to verify whether code smell metrics, collected using the Microsoft CodeAnalysis tool and added to a basic metric set, can improve defect prediction in an industrial software development project.
    Results: We verified whether extending the dataset with code-smell-sourced metrics changes the effectiveness of defect prediction by comparing prediction results for datasets with and without code-smell-oriented metrics. We observed only a small improvement in the effectiveness of defect prediction when the dataset extended with bad smell metrics was used: the average accuracy value increased by 0.0091 and stayed within the margin of error. However, when only code-smell-based metrics were used for prediction (without the basic metric set), the process yielded surprisingly high accuracy (0.8249) and F-measure (0.8286) results. We also elaborate on the data anomalies and problems we observed when two different metric sources were used to prepare one consistent dataset.
    Conclusion: Extending the dataset with code-smell-sourced metrics does not significantly improve prediction effectiveness; the achieved result did not compensate for the effort needed to collect the additional metrics. However, we observed that defect prediction based on code smells alone is still highly effective and can be used especially where other metrics can hardly be collected.
    Comment: Chapter 10 in Software Engineering: Improving Practice through Research (B. Hnatkowska and M. Śmiałek, eds.), pp. 163-176, 201
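    The comparison protocol behind these numbers can be reproduced in a few lines; the sketch below is an assumption about the setup (the file name, column names, and choice of classifier are hypothetical placeholders, and the paper's metrics come from Microsoft CodeAnalysis rather than this toy list).

        import pandas as pd
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        df = pd.read_csv("defect_dataset.csv")        # hypothetical dataset
        basic = ["loc", "cyclomatic_complexity", "coupling"]
        smells = ["long_method", "god_class", "feature_envy"]
        y = df["defective"]

        # Train the same classifier on basic metrics, basic + smells, and
        # smells only, then compare cross-validated accuracy and F-measure.
        for name, cols in [("basic", basic),
                           ("basic+smells", basic + smells),
                           ("smells only", smells)]:
            clf = RandomForestClassifier(random_state=0)
            acc = cross_val_score(clf, df[cols], y, cv=10, scoring="accuracy")
            f1 = cross_val_score(clf, df[cols], y, cv=10, scoring="f1")
            print(f"{name}: accuracy={acc.mean():.4f} f1={f1.mean():.4f}")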
