
    Systems with Session-based Workloads: Assessing Performance and Reliability

    Many systems, including the Web and Software as a Service (SaaS), are best characterized by session-based workloads. Empirical studies have shown that Web session arrivals exhibit long-range dependence and that the number of requests in a session is well modeled by skewed or heavy-tailed distributions. However, models that account for session workloads with these empirically observed phenomena, and studies of their impact on performance and reliability metrics, are lacking.

    For assessing performance, we use a feedback queue to account for session-based workloads in a physically meaningful way and use simulation to analyze the behavior of the Web system under a long-range dependent (LRD) session arrival process and a skewed distribution for the number of requests in a session. Our results show that the percentage of dropped sessions, the mean queue length, the mean waiting time, and the useful server utilization are all affected by the LRD session arrivals and by the statistics of the number of requests within a session. The impact is higher when the long-range dependence is more prominent. Interestingly, both the request arrival process and the request departure process are long-range dependent, even when session arrivals are Poisson. This indicates that LRD at the request level can result from the existence of sessions.

    For assessing reliability, we propose a framework that integrates (1) the Web workloads defined in terms of user sessions, (2) the user navigation patterns through the Web site, and (3) the reliability estimates of the Web requests based on the system architecture; we then give a detailed reliability model of a Web system based on the proposed framework. We recognize the difficulty of solving the proposed model and use simulation to obtain the results. Finally, we use statistical design of experiments to quantify the results and to determine which factors have the highest impact on the system's reliability. Our results show that some two-way and three-way interactions are very important for the session reliability of Web systems.
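    The following is a minimal sketch of the kind of experiment described above, not the thesis's actual simulator: sessions arrive, each carries a heavy-tailed (Pareto) number of requests, and the resulting request stream is pushed through a single FIFO server with a finite buffer to observe drops and waiting times. All parameters are illustrative, and within-session requests are spaced by exponential think times here rather than by the response-dependent feedback loop of the original model.

```python
import heapq
import random

def pareto_requests(alpha=1.5, xmin=1.0):
    """Heavy-tailed number of requests in a session (Pareto, rounded down, at least 1)."""
    u = 1.0 - random.random()                           # u in (0, 1]
    return max(1, int(xmin / (u ** (1.0 / alpha))))

def generate_requests(n_sessions=2000, session_rate=5.0, think_rate=2.0):
    """Return sorted request arrival times tagged with their session id."""
    arrivals, t = [], 0.0
    for sid in range(n_sessions):
        t += random.expovariate(session_rate)           # Poisson session arrivals
        r = t
        for _ in range(pareto_requests()):
            arrivals.append((r, sid))
            r += random.expovariate(think_rate)         # think time before next request
    arrivals.sort()
    return arrivals

def simulate(arrivals, service_rate=12.0, buffer_size=50):
    """Single-server FIFO queue with a finite buffer; returns drop ratio and mean wait."""
    server_free_at, dropped, waits = 0.0, 0, []
    in_system = []                                       # min-heap of completion times
    for t, _sid in arrivals:
        while in_system and in_system[0] <= t:           # drain finished requests
            heapq.heappop(in_system)
        if len(in_system) >= buffer_size:                # buffer full: drop the request
            dropped += 1
            continue
        start = max(t, server_free_at)
        server_free_at = start + random.expovariate(service_rate)
        heapq.heappush(in_system, server_free_at)
        waits.append(start - t)                          # queueing delay
    return dropped / len(arrivals), sum(waits) / len(waits)

drop_ratio, mean_wait = simulate(generate_requests())
print(f"dropped: {drop_ratio:.3%}, mean wait: {mean_wait:.4f}")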

    Parameter dependencies for reusable performance specifications of software components

    To avoid design-related performance problems, model-driven performance prediction methods analyse the response times, throughputs, and resource utilizations of software architectures before and during implementation. This thesis proposes new modeling languages and corresponding model transformations, which allow a reusable description of how the performance of software components depends on the usage profile. Predictions based on these new methods can support performance-related design decisions.
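    A minimal sketch of the underlying idea, using illustrative names and formulas rather than the thesis's notation: a component declares its resource demand as a function of usage-profile parameters, so the same specification can be re-evaluated for new usage contexts without re-measuring the component.

```python
def upload_demand(file_size_kb: float, compression: bool) -> float:
    """CPU demand (ms) of a hypothetical 'upload' service, parameterised by its usage."""
    base = 0.02 * file_size_kb                  # demand grows with input size
    return base + (0.05 * file_size_kb if compression else 0.0)

def predict_response_time(demand_ms: float, cpu_speedup: float, utilisation: float) -> float:
    """Rough M/M/1-style response-time estimate on a CPU under contention."""
    service = demand_ms / cpu_speedup
    return service / (1.0 - utilisation)        # queueing inflation as load rises

# The same component specification, evaluated for two different usage profiles.
for profile in [{"file_size_kb": 200, "compression": False},
                {"file_size_kb": 5000, "compression": True}]:
    d = upload_demand(**profile)
    rt = predict_response_time(d, cpu_speedup=2.0, utilisation=0.6)
    print(profile, "->", round(rt, 2), "ms")
```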

    Effective process times for aggregate modeling of manufacturing systems


    Automated Improvement of Software Architecture Models for Performance and Other Quality Attributes


    Online Modeling and Tuning of Parallel Stream Processing Systems

    Writing performant computer programs is hard. Code for high-performance applications is profiled, tweaked, and refactored for months, specifically for the hardware on which it is to run. Consumer application code doesn't get the benefit of the endless massaging that high-performance code receives, even though heterogeneous processor environments are beginning to resemble those in more performance-oriented arenas. This thesis offers a path to performant, parallel code (through stream processing) that is tuned online and automatically adapts to the environment it is given. This approach has the potential to reduce the tuning costs associated with high-performance code and brings the benefit of performance tuning to consumer applications where it would otherwise be cost prohibitive. This thesis introduces a stream processing library and multiple techniques to enable its online modeling and tuning.

    Stream processing (also termed data-flow programming) is a compute paradigm that views an application as a set of logical kernels connected via communication links, or streams. Stream processing is increasingly used by computational-x and x-informatics fields (e.g., biology, astrophysics) where the focus is on safe and fast parallelization of specific big-data applications. A major advantage of stream processing is that it enables parallelization without requiring manual end-user management of the non-deterministic behavior often characteristic of more traditional parallel processing methods. Many big-data and high-performance applications involve high-throughput processing, necessitating the use of many parallel compute kernels on several compute cores. Optimizing the orchestration of kernels has been the focus of much theoretical and empirical modeling work. Purely theoretical parallel programming models can fail when the assumptions implicit within the model are mismatched with reality (i.e., the model is incorrectly applied), and it is often unclear whether the assumptions are actually being met, even when verified under controlled conditions. Full empirical optimization solves this problem by extensively searching the range of likely configurations under native operating conditions; this, however, is expensive in both time and energy. For large, massively parallel systems, even deciding which modeling paradigm to use is often prohibitively expensive and, unfortunately, transient (with workload and hardware). In an ideal world, a parallel run-time would re-optimize an application continuously to match its environment, with little additional overhead. This work presents methods aimed at doing just that through low-overhead instrumentation, modeling, and optimization. Online optimization provides a good trade-off between static optimization and online heuristics; to enable it, modeling decisions must be fast and relatively accurate.

    Online modeling and optimization of a stream processing system first requires a stream processing framework that is amenable to the intended type of dynamic manipulation. To fill this void, we developed the RaftLib C++ template library, which enables use of the stream processing paradigm for C++ applications (it is the run-time that underpins almost all the work in this dissertation). An application topology is specified by the user; almost everything else is optimizable by the run-time. RaftLib takes advantage of the knowledge gained during the design of several prior streaming languages (notably Auto-Pipe). The resultant framework enables online migration of tasks, auto-parallelization, online buffer reallocation, and other useful dynamic behaviors that were not available in many previous stream processing systems. Several benchmark applications have been designed to assess the performance gains of our approaches and to compare performance with other leading stream processing frameworks.

    Information is essential to any modeling task; to that end, a low-overhead instrumentation framework has been developed that is both dynamic and adaptive. Discovering a fast and relatively optimal configuration for a stream processing application often requires solving for buffer sizes within a finite-capacity queueing network, and we show that a generalized gain/loss network flow model can bootstrap this process under certain conditions. Any modeling effort requires that a model be selected, often a highly manual task involving many expensive operations. This dissertation demonstrates that machine learning methods (such as a support vector machine) can successfully select models at run-time for a streaming application. The full set of approaches is incorporated into the open-source RaftLib framework.
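    The following is a generic illustration of the stream-processing (data-flow) paradigm described above: compute kernels connected by bounded streams, each running on its own thread. It is a sketch in Python, not RaftLib's C++ API; the bounded buffer sizes are exactly the kind of knob an online tuner would adjust at run time.

```python
import threading
import queue

SENTINEL = object()                      # marks end-of-stream

def producer(out_stream, n=1000):
    """Source kernel: emits integers into its output stream."""
    for i in range(n):
        out_stream.put(i)
    out_stream.put(SENTINEL)

def square(in_stream, out_stream):
    """Intermediate kernel: reads, transforms, and forwards items."""
    while (item := in_stream.get()) is not SENTINEL:
        out_stream.put(item * item)
    out_stream.put(SENTINEL)

def consumer(in_stream, result):
    """Sink kernel: accumulates a final result."""
    total = 0
    while (item := in_stream.get()) is not SENTINEL:
        total += item
    result.append(total)

# Wire the topology: producer -> square -> consumer, with bounded streams.
a, b, result = queue.Queue(maxsize=64), queue.Queue(maxsize=64), []
threads = [threading.Thread(target=producer, args=(a,)),
           threading.Thread(target=square, args=(a, b)),
           threading.Thread(target=consumer, args=(b, result))]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("sum of squares:", result[0])
```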

    Perspectives on trading cost and availability for corrective maintenance at the equipment type level

    Characterising maintenance costs has always been challenging due to a lack of accurate prior cost data and the uncertainties around equipment usage and reliability. Since preventive maintenance does not completely prevent corrective repairs in demanding environments, any unscheduled maintenance can have a large impact on the overall maintenance costs. This introduces the requirement to set up support contracts with minimum baseline solutions that meet the target demand within certain costs and risks. This article investigates a process developed to estimate the performance-based support contract costs attributed to corrective maintenance, which can play a dominant role in the through-life support of high-value assets. The case context for the paper is the UK Ministry of Defence. The developed approach allows benchmarking of support contract solutions and enables efficient planning decisions. Emphasis is placed on learning from feedback, and on testing and validating current methodologies for estimating corrective maintenance costs and availability at the Equipment Type level. Equipment Types are interacting sub-equipments that have unique availability requirements and hence a much larger impact on the capital maintenance expenditure. The presented case studies demonstrate the applicability of the approach in achieving savings and improved availability estimates.
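    A minimal Monte Carlo sketch of the kind of trade-off discussed above, with purely illustrative parameters (not MoD data): estimate the annual corrective-maintenance cost and achieved availability of a single equipment item from an assumed failure rate, repair-time distribution, and cost per repair.

```python
import random

def simulate_equipment(hours=8760.0, mtbf=1200.0, mean_repair_h=36.0, cost_per_repair=15000.0):
    """One simulated operating year for a single equipment item."""
    t, downtime, cost = 0.0, 0.0, 0.0
    while True:
        t += random.expovariate(1.0 / mtbf)             # time to next corrective failure
        if t >= hours:
            break
        repair = random.expovariate(1.0 / mean_repair_h)
        downtime += repair                               # unavailable while under repair
        cost += cost_per_repair
        t += repair
    availability = 1.0 - downtime / hours
    return cost, availability

runs = [simulate_equipment() for _ in range(5000)]
costs, avails = zip(*runs)
print(f"mean annual corrective cost: {sum(costs) / len(costs):,.0f}")
print(f"mean availability: {sum(avails) / len(avails):.3f}")
```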

    Pedestrian Dynamics: Modeling and Analyzing Cognitive Processes and Traffic Flows to Evaluate Facility Service Level

    Walking is the oldest and foremost mode of transportation throughout history, and its prevalence has increased. An effective pedestrian model is crucial for evaluating pedestrian facility service levels and for enhancing pedestrian safety, performance, and satisfaction. The objectives of this study were to: (1) validate the efficacy of a queueing network model that predicts cognitive information processing time and task performance; (2) develop a generalized queueing-network-based cognitive information processing model that can be applied to construct pedestrian cognitive structure and estimate reaction time from the first moment of the service time distribution; (3) investigate pedestrian behavior through naturalistic and experimental observations to analyze the effects of environmental settings and psychological factors on pedestrians; and (4) develop pedestrian level of service (LOS) metrics that are quick and practical for identifying improvement points in pedestrian facility design. Two empirical and two analytical studies were conducted to address the research objectives. The first study investigated the efficacy of a queueing network for modeling and predicting cognitive information processing time. A motion capture system was used to collect detailed pedestrian movement, and the reaction time predicted with the queueing network was compared with the empirical results to validate the model; no significant difference was found with respect to mean reaction time. The second study developed a generalized queueing network system so that a task can be modeled with the approximated queueing network and the first moment of any service time distribution; again, there was no significant difference between the empirical results and the proposed model with respect to mean reaction time. The third study investigated methods to quantify pedestrian traffic behavior and analyzed physical and cognitive behavior from real-world observation and a field experiment. Footage from indoor and outdoor corridors was used to quantify pedestrian behavior, and the effects of environmental setting and/or psychological factors on travel performance were tested. Finally, ad hoc and tailor-made LOS metrics were presented for simple, realistic service level assessments; the proposed methodologies comprised space-revision LOS, delay-based LOS, preferred-walking-speed-based LOS, and a 'blocking probability'.
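    A minimal sketch of two of the ideas above, with stage names and LOS thresholds that are illustrative assumptions rather than the study's calibrated values: (1) predicting mean reaction time as the sum of the first moments of the service times along a serial route through a cognitive queueing network, and (2) a delay-based level-of-service grade for a pedestrian facility.

```python
def mean_reaction_time(stage_means_ms):
    """First-moment approximation: expected reaction time along a serial route."""
    return sum(stage_means_ms)

# Hypothetical perceptual -> cognitive -> motor stage means (ms).
print("predicted reaction time:", mean_reaction_time([100.0, 150.0, 120.0]), "ms")

def delay_based_los(actual_time_s, free_flow_time_s, thresholds=(1.1, 1.3, 1.6, 2.0, 2.5)):
    """Grade A-F from the ratio of actual to free-flow travel time (illustrative cutoffs)."""
    ratio = actual_time_s / free_flow_time_s
    for grade, cutoff in zip("ABCDE", thresholds):
        if ratio <= cutoff:
            return grade
    return "F"

print("corridor LOS:", delay_based_los(actual_time_s=75.0, free_flow_time_s=60.0))
```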

    Critical Fault-Detecting Time Evaluation in Software with Discrete Compound Poisson Models

    Software developers predict their product’s failure rate using reliability growth models that are typically based on nonhomogeneous Poisson (NHP) processes. In this article, we extend that practice to a nonhomogeneous discrete compound Poisson process that allows for multiple faults of a system at the same time point. Along with traditional reliability metrics such as the average number of failures in a time interval, we propose an alternative reliability index called the critical fault-detecting time, in order to provide more information for software managers making software quality evaluations and critical market policy decisions. We illustrate the significant potential for improved analysis using wireless failure data as well as simulated data.
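    A minimal simulation sketch of a discrete compound Poisson failure process, using an illustrative intensity function and batch-size distribution rather than anything fitted to the article's wireless data: failure events arrive from a nonhomogeneous Poisson process, each event exposes a geometric number of faults, and a "critical fault-detecting time" is read off as the first time the cumulative fault count reaches a chosen threshold.

```python
import math
import random

def nhpp_event_times(horizon=100.0, lambda_max=2.0):
    """Thinning algorithm for a decaying intensity lambda(t) = lambda_max * exp(-t / 50)."""
    times, t = [], 0.0
    while True:
        t += random.expovariate(lambda_max)
        if t > horizon:
            return times
        if random.random() <= math.exp(-t / 50.0):       # accept with prob lambda(t) / lambda_max
            times.append(t)

def geometric(p=0.6):
    """Number of faults exposed by one failure event (geometric on {1, 2, ...})."""
    k = 1
    while random.random() > p:
        k += 1
    return k

def critical_time(threshold=40):
    """First time the cumulative fault count reaches `threshold` (None if never)."""
    faults = 0
    for t in nhpp_event_times():
        faults += geometric()
        if faults >= threshold:
            return t
    return None

samples = [critical_time() for _ in range(2000)]
reached = [t for t in samples if t is not None]
print(f"P(reach threshold within horizon): {len(reached) / len(samples):.2f}")
print(f"mean critical fault-detecting time: {sum(reached) / len(reached):.1f}")
```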