5 research outputs found

    Run-Time Monitoring of Timing Constraints: A Survey of Methods and Tools

    Despite the availability of static analysis methods to achieve a correct-by-construction design in terms of timing behavior, violations of timing constraints can still occur at run-time for a variety of reasons. The aim of monitoring system performance with respect to timing constraints is to detect violations of the timing specifications, or to predict them from current system performance data. Considerable work has been dedicated to efficient performance monitoring approaches over the past years. This paper presents a survey and classification of those approaches to help researchers gain a better overview of the methods and developments in monitoring the timing behavior of systems. The approaches are classified according to aspects considered important in developing a monitoring system, e.g., the use of additional hardware and the data collection approach. The paper also describes how these different methods work, along with the advantages and downsides of each.
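
    As a concrete illustration of the kind of software-based monitor such surveys cover, the sketch below times each task invocation and records deadline misses. It is a minimal example under assumed conventions, not a method from any surveyed paper; the class name TimingMonitor and the per-task deadline_ms parameter are invented for illustration.

```python
# Minimal sketch of a software-based run-time monitor that detects
# deadline violations; names and structure are illustrative only.
import time

class TimingMonitor:
    def __init__(self, deadline_ms):
        self.deadline_ms = deadline_ms   # assumed per-task deadline
        self.violations = []             # (task_id, elapsed_ms) records

    def observe(self, task_id, func, *args, **kwargs):
        """Run a task, measure its response time, and log a violation
        if the measured time exceeds the deadline."""
        start = time.perf_counter()
        result = func(*args, **kwargs)
        elapsed_ms = (time.perf_counter() - start) * 1000.0
        if elapsed_ms > self.deadline_ms:
            self.violations.append((task_id, elapsed_ms))
        return result

monitor = TimingMonitor(deadline_ms=5.0)
monitor.observe("task-1", sum, range(1_000_000))
print(monitor.violations)
```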

    Runtime monitoring of timing constraints in distributed real-time systems

    Embedded real-time systems often operate under strict timing and dependability constraints. To ensure responsiveness, these systems must be able to provide the expected services in a timely manner even in the presence of faults. In this paper, we describe a run-time environment for monitoring timing constraints in distributed real-time systems. In particular, we focus on the problem of detecting violations of timing assertions in an environment in which the real-time tasks run on multiple processors, and timing constraints can be either inter-processor or intra-processor constraints. Constraint violations are detected at the earliest possible time by deriving and checking intermediate constraints from the user-specified constraints. If the violations must be detected as early as possible, then the problem of minimizing the number of messages to be exchanged between the processors becomes intractable. We characterize a sub-class of timing constraints that occur commonly in distributed real-time systems and whose message requirements can be minimized. We also take into account the drift among the various processor clocks when detecting a violation of a timing assertion. Finally, we describe a prototype implementation of a distributed run-time monitor.
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/48087/1/11241_2005_Article_BF01088521.pd
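
    To make the clock-drift issue concrete, here is a minimal sketch (not the paper's algorithm) of a conservative check for an inter-processor constraint of the form "event B must occur within d seconds of event A", where the two events are timestamped on different processors whose clocks may disagree by at most EPSILON. All names and the drift bound are assumptions for illustration.

```python
EPSILON = 0.002  # assumed worst-case clock drift between processors (s)

def constraint_violated(t_a, t_b, d):
    """Flag a violation only if it holds under every admissible drift:
    the true elapsed time lies in [(t_b - t_a) - EPSILON,
    (t_b - t_a) + EPSILON]."""
    return (t_b - t_a) - EPSILON > d

def local_check_deadline(t_a, d):
    """A derived intermediate constraint: if B has not been observed by
    this local time, the violation can be signalled immediately rather
    than waiting for B to arrive."""
    return t_a + d + EPSILON

print(constraint_violated(t_a=0.000, t_b=0.015, d=0.010))  # True
```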

    Monitoring Computer Systems: An Intelligent Approach

    Monitoring modern computer systems is increasingly difficult due to their peculiar characteristics. To cope with this situation, the dissertation develops an approach to intelligent monitoring. The resulting model consists of three major designs: representing targets, controlling data collection, and autonomously refining monitoring performance. The model explores a more declarative object-oriented model by introducing virtual objects to dynamically compose abstract representations, while treating conventional hard-wired hierarchies and predefined object classes as primitive structures. Taking the representational framework as a reasoning bed, the design of the controlling mechanisms adopts default reasoning backed by ordered constraints, so that the amount of data collected, the level of detail, the semantics, and the resolution of observation can be appropriately controlled. The refining mechanisms classify invoked knowledge and update the classified knowledge based on feedback from monitoring. The approach is first designed and then formally specified. Applications of the resulting model are examined and an operational prototype is implemented. The dissertation thus establishes a basis for an approach to intelligent monitoring, one equipped to deal effectively with the difficulties that arise in monitoring modern computer systems.
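
    The idea of controlling data collection through default reasoning over ordered constraints can be sketched as a rule list in which the first applicable rule wins and a default applies when none fires. This is an illustrative toy, not the dissertation's formal model; the rule contents and level names are invented.

```python
# Ordered rules: earlier rules take precedence over later ones.
RULES = [
    (lambda s: s["cpu_load"] > 0.9,  "detailed"),  # overload: collect everything
    (lambda s: s["alert_count"] > 0, "elevated"),  # anomalies seen: more detail
]
DEFAULT_LEVEL = "coarse"  # default conclusion when no rule applies

def collection_level(state):
    """Pick the data-collection level by checking rules in order."""
    for applies, level in RULES:
        if applies(state):
            return level
    return DEFAULT_LEVEL

print(collection_level({"cpu_load": 0.95, "alert_count": 0}))  # detailed
print(collection_level({"cpu_load": 0.30, "alert_count": 0}))  # coarse
```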

    Web page performance analysis

    Computer systems play an increasingly crucial and ubiquitous role in human endeavour by carrying out or facilitating tasks and providing information and services. How much work these systems can accomplish, within a certain amount of time, using a certain amount of resources, characterises the systems’ performance, which is a major concern when the systems are planned, designed, implemented, and deployed, and as they evolve. As one of the most popular computer systems, the Web is inevitably scrutinised in terms of performance analysis that deals with its speed, capacity, resource utilisation, and availability. Performance analyses for the Web are normally done from the perspective of the Web servers and the underlying network (the Internet). This research, on the other hand, approaches Web performance analysis from the perspective of Web pages. The performance metric of interest here is response time, which is studied as an attribute of Web pages rather than purely a result of network and server conditions. A framework consisting of the measurement, modelling, and monitoring (3Ms) of Web pages, revolving around response time, is adopted to support the performance analysis activity. The measurement module enables Web page response time to be measured and supports the modelling module, which in turn provides references for the monitoring module; the monitoring module estimates response time. The three modules are used across the software development lifecycle to ensure that developed Web pages deliver at worst a satisfactory response time (within a maximum acceptable time), and preferably a much better one, thereby maximising the efficiency of the pages. The framework proposes a systematic way to understand response time as it relates to specific characteristics of Web pages, and explains how the response time of individual Web pages can be examined and improved.
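
    A minimal version of the measurement module might look like the sketch below: fetch a page, time the transfer, and compare against a maximum acceptable response time. The URL and the 2-second threshold are placeholders, and this measures only the HTML fetch, not rendering or the page's embedded resources.

```python
import time
import urllib.request

MAX_ACCEPTABLE_S = 2.0  # assumed maximum acceptable response time

def measure_response_time(url):
    """Return the wall-clock time to fetch the page body, in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as response:
        response.read()  # drain the body so transfer time is included
    return time.perf_counter() - start

elapsed = measure_response_time("https://example.com/")
verdict = "satisfactory" if elapsed <= MAX_ACCEPTABLE_S else "too slow"
print(f"{elapsed:.3f}s ({verdict})")
```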

    Online Trace Reordering for Efficient Representation of Event Partial Orders

    Distributed and parallel applications not only have distributed state but are often inherently non-deterministic, making them significantly more challenging to monitor and debug. Additionally, a significant challenge when working with distributed and parallel applications has to do with the fundamental requirement of determining the order in which certain actions are performed by the application. A naive approach for ordering actions would be to impose a single order on all actions, i.e., given any two actions or events, one must happen before the other. A global order, however, is often misleading, e.g., two events in two different processes may be causally independent yet one may have occurred before the other. A partial order of events, therefore, serves as the fundamental data structure for ordering events in distributed and parallel applications. Traditionally, Fidge/Mattern timestamps have been used for representing event partial orders. The size of the vector timestamp depends on the number of parallel entities (traces) in the application, e.g., processes or threads. A major limitation of Fidge/Mattern timestamps is that the total size of timestamps does not scale for large systems with hundreds or thousands of traces. Taylor proposed an efficient offset-based scheme for representing large event partial orders by representing deltas between timestamps of successive events. The offset-based schemes have been shown to be significantly more space efficient when traces that communicate the most are close to each other for generating the deltas (offsets). In Taylor’s offset-based schemes the optimal order of traces is computed offline. In this work we adapt the offset-based schemes to dynamically reorder traces and demonstrate that very efficient scalable representations of event partial orders can be generated in an online setting, requiring as few as 100 bytes/event for storing partial order event data for applications with around 1000 processes.
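
    The contrast between full Fidge/Mattern vector timestamps and an offset-based encoding can be sketched as follows. The update rule is the standard vector-clock rule; the encoding shown, which stores only the changed entries as (index, delta) pairs, is a simplified stand-in for Taylor's scheme, whose actual encoding details differ.

```python
def next_timestamp(prev, trace_id, received=None):
    """Fidge/Mattern update: increment this trace's own entry, and on a
    message receipt take the component-wise max with the sender's
    timestamp."""
    ts = list(prev)
    ts[trace_id] += 1
    if received is not None:
        ts = [max(a, b) for a, b in zip(ts, received)]
    return ts

def encode_offsets(prev, curr):
    """Store only the entries that changed between successive events:
    a few (index, delta) pairs instead of one integer per trace."""
    return [(i, c - p) for i, (p, c) in enumerate(zip(prev, curr)) if c != p]

prev = [3, 0, 7, 2]  # a system with 4 traces
curr = next_timestamp(prev, trace_id=0, received=[1, 5, 7, 2])
print(curr)                        # [4, 5, 7, 2]
print(encode_offsets(prev, curr))  # [(0, 1), (1, 5)]
```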