
    On the Intrinsic Locality Properties of Web Reference Streams

    There has been considerable work done in the study of Web reference streams: sequences of requests for Web objects. In particular, many studies have looked at the locality properties of such streams, because of the impact of locality on the design and performance of caching and prefetching systems. However, a general framework for understanding why reference streams exhibit given locality properties has not yet emerged. In this work we take a first step in this direction, based on viewing the Web as a set of reference streams that are transformed by Web components (clients, servers, and intermediaries). We propose a graph-based framework for describing this collection of streams and components. We identify three basic stream transformations that occur at nodes of the graph: aggregation, disaggregation and filtering, and we show how these transformations can be used to abstract the effects of different Web components on their associated reference streams. This view allows a structured approach to the analysis of why reference streams show given properties at different points in the Web. Applying this approach to the study of locality requires good metrics for locality. These metrics must meet three criteria: 1) they must accurately capture temporal locality; 2) they must be independent of trace artifacts such as trace length; and 3) they must not involve manual procedures or model-based assumptions. We describe two metrics meeting these criteria that each capture a different kind of temporal locality in reference streams. The popularity component of temporal locality is captured by entropy, while the correlation component is captured by the interreference coefficient of variation. We argue that these metrics are more natural and more useful than previously proposed metrics for temporal locality. We use this framework to analyze a diverse set of Web reference traces. We find that this framework can shed light on how and why locality properties vary across different locations in the Web topology. For example, we find that filtering and aggregation have opposing effects on the popularity component of temporal locality, which helps to explain why multilevel caching can be effective in the Web. Furthermore, we find that all transformations tend to diminish the correlation component of temporal locality, which has implications for the utility of different cache replacement policies at different points in the Web.
    National Science Foundation (ANI-9986397, ANI-0095988); CNPq-Brazil
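
    As a rough, hedged illustration of the two metrics named above, the Python sketch below computes the entropy of the object popularity distribution and the coefficient of variation of interreference distances for a toy reference stream. The function names, the normalization of the entropy, and the trace-wide (rather than per-object) treatment of interreference gaps are simplifying assumptions, not the authors' exact definitions.

        import math
        from collections import Counter

        def popularity_entropy(stream):
            """Entropy of the object popularity distribution (popularity component)."""
            counts = Counter(stream)
            n = len(stream)
            h = -sum((c / n) * math.log2(c / n) for c in counts.values())
            # Normalize by log2(#distinct objects) so traces of different lengths
            # and working-set sizes are easier to compare (an assumption).
            return h / math.log2(len(counts)) if len(counts) > 1 else 0.0

        def interreference_cv(stream):
            """Coefficient of variation of interreference distances (correlation component)."""
            last_seen, gaps = {}, []
            for i, obj in enumerate(stream):
                if obj in last_seen:
                    gaps.append(i - last_seen[obj])
                last_seen[obj] = i
            if not gaps:
                return 0.0
            mean = sum(gaps) / len(gaps)
            var = sum((g - mean) ** 2 for g in gaps) / len(gaps)
            return math.sqrt(var) / mean

        trace = ["a", "b", "a", "a", "c", "b", "a", "d", "a", "b"]
        print(popularity_entropy(trace), interreference_cv(trace))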

    Generating Predicate Callback Summaries for the Android Framework

    One of the challenges of analyzing, testing and debugging Android apps is that the potential execution orders of callbacks are missing from the apps' source code. However, bugs, vulnerabilities and refactoring transformations have been found to be related to callback sequences. Existing work on control flow analysis of Android apps has mainly focused on analyzing GUI events. GUI events, although being a key part of determining the control flow of Android apps, do not offer a complete picture. Our observation is that, orthogonal to GUI events, Android API calls also play an important role in determining the order of callbacks. In the past, such control flow information has been modeled manually. This paper presents a complementary solution for constructing program paths for Android apps. We propose a specification technique, called Predicate Callback Summary (PCS), that represents the callback control flow information (including callback sequences as well as the conditions under which the callbacks are invoked) in Android API methods, and we develop static analysis techniques to automatically compute and apply such summaries to construct apps' callback sequences. Our experiments show that by applying PCSs we are able to construct Android apps' control flow graphs, including inter-callback relations, and also to detect infeasible paths involving multiple callbacks. Such control flow information can help program analysis and testing tools report more precise results. Our detailed experimental data is available at: http://goo.gl/NBPrKs
    Comment: 11 pages
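
    The abstract does not spell out the concrete form of a PCS, but conceptually it couples callback orderings with the conditions under which each callback is invoked. The sketch below is a hypothetical, minimal rendering of that idea in Python; the class and field names and the example edges are illustrative only and do not reproduce the paper's format.

        from dataclasses import dataclass, field

        @dataclass
        class CallbackEdge:
            """One possible transition between callbacks, guarded by a predicate."""
            source: str      # callback that has just run, e.g. "onBind"
            target: str      # callback that may run next
            predicate: str   # condition under which the transition is feasible

        @dataclass
        class PredicateCallbackSummary:
            """Summary of the callback control flow induced by one framework API method."""
            api_method: str
            edges: list = field(default_factory=list)

            def successors(self, callback, env):
                """Callbacks reachable from `callback` whose predicates hold in `env`."""
                return [e.target for e in self.edges
                        if e.source == callback and env.get(e.predicate, False)]

        # Hypothetical summary for a service-binding API call.
        pcs = PredicateCallbackSummary(
            api_method="Context.bindService",
            edges=[CallbackEdge("bindService", "onBind", "binding_succeeded"),
                   CallbackEdge("onBind", "onServiceConnected", "binder_not_null")])
        print(pcs.successors("onBind", {"binder_not_null": True}))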

    Conformance Checking Based on Multi-Perspective Declarative Process Models

    Process mining is a family of techniques that aim at analyzing business process execution data recorded in event logs. Conformance checking is a branch of this discipline embracing approaches for verifying whether the behavior of a process, as recorded in a log, is in line with some expected behaviors provided in the form of a process model. The majority of these approaches require the input process model to be procedural (e.g., a Petri net). However, in turbulent environments, characterized by high variability, the process behavior is less stable and predictable. In these environments, procedural process models are less suitable for describing a business process. Declarative specifications, working under an open world assumption, allow the modeler to express several possible execution paths as a compact set of constraints. Any process execution that does not contradict these constraints is allowed. One of the open challenges in the context of conformance checking with declarative models is the capability of supporting multi-perspective specifications. In this paper, we close this gap by providing a framework for conformance checking based on MP-Declare, a multi-perspective version of the declarative process modeling language Declare. The approach has been implemented in the process mining tool ProM and has been evaluated in three real-life case studies.
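
    As a rough illustration of what a multi-perspective declarative constraint involves, the sketch below checks a Declare-style "response" constraint augmented with a data condition against a single trace of an event log. The event format, the constraint, and the function name are assumptions made for illustration; they are not the input format of the ProM implementation.

        def check_response_with_data(trace, activation, target, data_condition):
            """MP-Declare-style response constraint on one trace: every `activation`
            event satisfying `data_condition` must eventually be followed by `target`."""
            violations = 0
            for i, event in enumerate(trace):
                if event["activity"] == activation and data_condition(event):
                    if not any(e["activity"] == target for e in trace[i + 1:]):
                        violations += 1
            return violations

        # Toy trace: each event carries an activity name and a data payload.
        trace = [
            {"activity": "Create Order", "amount": 1200},
            {"activity": "Approve Order", "amount": 1200},
            {"activity": "Create Order", "amount": 3000},  # never approved -> violation
        ]
        print(check_response_with_data(
            trace, "Create Order", "Approve Order",
            data_condition=lambda e: e["amount"] > 1000))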

    Moving Object Trajectories Meta-Model And Spatio-Temporal Queries

    In this paper, a general framework for moving object trajectories is put forward to allow independent applications that process trajectory data to benefit from a high level of interoperability and information sharing, as well as efficient answers to a wide range of complex trajectory queries. Our proposed meta-model is based on an ontology and event approach, incorporates existing representations of trajectories, and integrates new patterns such as the space-time path to describe activities in geographical space-time. We introduce the concept of recursive Regions of Interest and handle mobile object trajectories captured with diverse spatio-temporal sampling protocols and different available sensors, a task for which traditional data models alone are inadequate.
    Comment: International Journal of Database Management Systems (IJDMS) Vol.4, No.2, April 201
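
    A minimal sketch of how the event-based view of a trajectory and recursive Regions of Interest could be represented; the class names and fields below are hypothetical and only illustrate the notions mentioned in the abstract, not the actual ontology.

        from dataclasses import dataclass, field
        from typing import List, Optional

        @dataclass
        class RegionOfInterest:
            """A named region that may recursively contain finer-grained regions."""
            name: str
            sub_regions: List["RegionOfInterest"] = field(default_factory=list)

        @dataclass
        class TrajectoryEvent:
            """One observation on a space-time path: position, time, optional semantics."""
            x: float
            y: float
            timestamp: float
            activity: Optional[str] = None
            region: Optional[RegionOfInterest] = None

        @dataclass
        class Trajectory:
            moving_object_id: str
            events: List[TrajectoryEvent] = field(default_factory=list)

            def visited_regions(self):
                """Distinct regions crossed by the space-time path, in visit order."""
                seen, order = set(), []
                for e in self.events:
                    if e.region and e.region.name not in seen:
                        seen.add(e.region.name)
                        order.append(e.region.name)
                return order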

    IMITATOR II: A Tool for Solving the Good Parameters Problem in Timed Automata

    We present here Imitator II, a new version of Imitator, a tool implementing the "inverse method" for parametric timed automata: given a reference valuation of the parameters, it synthesizes a constraint such that, for any valuation satisfying this constraint, the system behaves the same as under the reference valuation in terms of traces, i.e., alternating sequences of locations and actions. Imitator II also implements the "behavioral cartography algorithm", allowing us to solve the following good parameters problem: find a set of valuations within a given bounded parametric domain for which the system behaves well. We present new features and optimizations of the tool, and give results of applications to various examples of asynchronous circuits and communication protocols.
    Comment: In Proceedings INFINITY 2010, arXiv:1010.611
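
    A rough sketch of the behavioral cartography loop described above: sample parameter valuations in the bounded domain, call the inverse method on every point not yet covered, and collect the resulting tiles. The `inverse_method` argument is a placeholder for the constraint synthesis that the tool actually performs; the toy oracle at the end only exists to make the sketch runnable.

        from itertools import product

        def behavioral_cartography(domain, inverse_method, step=1):
            """Cover a bounded integer parameter domain with tiles.  `domain` maps
            parameter names to (lo, hi) bounds; `inverse_method(point)` must return
            a predicate over valuations (the synthesized constraint)."""
            names = sorted(domain)
            grid = product(*(range(domain[n][0], domain[n][1] + 1, step) for n in names))
            tiles = []
            for values in grid:
                point = dict(zip(names, values))
                if any(tile(point) for tile in tiles):
                    continue                      # point already covered by an earlier tile
                tiles.append(inverse_method(point))
            return tiles

        # Toy oracle: pretend each tile groups valuations by the parity of p1 + p2.
        toy_oracle = lambda ref: (lambda v, parity=(ref["p1"] + ref["p2"]) % 2:
                                  (v["p1"] + v["p2"]) % 2 == parity)
        print(len(behavioral_cartography({"p1": (0, 3), "p2": (0, 3)}, toy_oracle)))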

    COST Action IC 1402 ArVI: Runtime Verification Beyond Monitoring -- Activity Report of Working Group 1

    This report presents the activities of the first working group of the COST Action ArVI, Runtime Verification beyond Monitoring. The report aims to provide an overview of some of the major core aspects involved in Runtime Verification. Runtime Verification is the field of research dedicated to the analysis of system executions. It is often seen as a discipline that studies how a system run satisfies or violates correctness properties. The report presents a taxonomy of Runtime Verification (RV) and the terminology associated with the main concepts of the field. The report also develops the concept of instrumentation, the various ways to instrument systems, and the fundamental role of instrumentation in designing an RV framework. We also discuss how RV interplays with other verification techniques such as model-checking, deductive verification, model learning, testing, and runtime assertion checking. Finally, we propose challenges in monitoring quantitative and statistical data beyond detecting property violations.
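
    As a minimal illustration of runtime monitoring, the sketch below replays a recorded event trace through a monitor for a simple safety property ("no send after close"). The property, the event names, and the verdicts are invented for illustration and are not taken from the report.

        class NoSendAfterCloseMonitor:
            """Online monitor for the safety property: after a 'close' event,
            no further 'send' event may occur."""

            def __init__(self):
                self.closed = False
                self.verdict = "inconclusive"

            def step(self, event):
                if self.verdict == "violation":
                    return self.verdict          # the verdict stays final once reached
                if event == "close":
                    self.closed = True
                elif event == "send" and self.closed:
                    self.verdict = "violation"
                return self.verdict

        # Instrumentation would normally emit these events from the running system;
        # here we simply replay a recorded trace.
        monitor = NoSendAfterCloseMonitor()
        for e in ["open", "send", "close", "send"]:
            print(e, "->", monitor.step(e))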