
    Modeling bursts and heavy tails in human dynamics

    Current models of human dynamics, used from risk assessment to communications, assume that human actions are randomly distributed in time and thus well approximated by Poisson processes. We provide direct evidence that for five human activity patterns the timing of individual human actions follows non-Poisson statistics, characterized by bursts of rapidly occurring events separated by long periods of inactivity. We show that the bursty nature of human behavior is a consequence of a decision-based queueing process: when individuals execute tasks based on some perceived priority, the timing of the tasks will be heavy tailed, with most tasks being rapidly executed while a few experience very long waiting times. We discuss two queueing models that capture human activity. The first model assumes that there are no limitations on the number of tasks an individual can handle at any time, predicting that the waiting times of the individual tasks follow a heavy-tailed distribution with exponent alpha=3/2. The second model imposes limitations on the queue length, resulting in alpha=1. We provide empirical evidence supporting the relevance of these two models to human activity patterns. Finally, we discuss possible extensions of the proposed queueing models and outline some future challenges in exploring the statistical mechanisms of human dynamics. (Comment: RevTeX, 19 pages, 8 figures)
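    As a quick illustration of the second (length-limited queue) model, the sketch below simulates a small priority queue in which the highest-priority task is executed with probability p; the function name and parameter values are illustrative, not taken from the paper.

```python
import random

def simulate_priority_queue(L=2, p=0.999, steps=200_000, seed=1):
    """Toy fixed-length priority queue: at every step the highest-priority
    task is executed with probability p (otherwise a random task is), and the
    freed slot is refilled with a new task of random priority. Returns the
    waiting times (in steps) of the executed tasks."""
    rng = random.Random(seed)
    queue = [(rng.random(), 0) for _ in range(L)]  # (priority, arrival step)
    waits = []
    for t in range(1, steps + 1):
        if rng.random() < p:
            i = max(range(L), key=lambda k: queue[k][0])  # pick highest priority
        else:
            i = rng.randrange(L)                          # pick a random task
        waits.append(t - queue[i][1])                     # record waiting time
        queue[i] = (rng.random(), t)                      # new task arrives
    return waits

waits = simulate_priority_queue()
# For p close to 1, a log-binned histogram of `waits` is expected to show a
# heavy tail roughly compatible with P(tau) ~ tau^(-1), i.e. the alpha = 1 regime.
```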

    Improving Responsiveness of Time-Sensitive Applications by Exploiting Dynamic Task Dependencies

    In this paper, a mechanism is presented for reducing priority inversion in multi-programmed computing systems. In contrast to well-known approaches from the literature, this paper tackles cases where the dependency relationships among tasks cannot be known in advance by the operating system (OS). The presented mechanism allows tasks to explicitly declare these relationships, enabling the OS scheduler to take advantage of the information and trigger priority inheritance, resulting in reduced priority inversion. We present a prototype implementation of the concept within the Linux kernel, in the form of modifications to the standard POSIX condition variables code, along with an extensive evaluation including a quantitative assessment of the benefits for applications making use of the technique, as well as comprehensive overhead measurements. We also present an associated technique for theoretical schedulability analysis of a system using the new mechanism, which is useful for determining whether all tasks can meet their deadlines in the specific scenario of tasks interacting only through remote procedure calls and under partitioned scheduling.
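    The sketch below is an illustrative toy model of the priority-inheritance idea the mechanism builds on, not the paper's Linux/POSIX implementation: once a task declares that it waits on a resource held by another task, the holder temporarily runs at the waiter's priority, so intermediate-priority tasks can no longer prolong the blocking. All class and function names are hypothetical.

```python
class Task:
    """Toy fixed-priority task; higher number means more important."""
    def __init__(self, name, base_priority):
        self.name = name
        self.base_priority = base_priority
        self.inherited = 0          # highest priority donated by a blocked waiter

    @property
    def effective_priority(self):
        return max(self.base_priority, self.inherited)

def declare_dependency(waiter, holder):
    """The waiter tells the scheduler it depends on the holder (e.g. the holder
    must release a lock or answer an RPC); the holder inherits its priority."""
    holder.inherited = max(holder.inherited, waiter.effective_priority)

low, mid, high = Task("low", 1), Task("mid", 5), Task("high", 10)
declare_dependency(high, low)       # high is blocked on a resource that low holds
runnable = [low, mid]               # high itself is blocked, so it cannot run
next_to_run = max(runnable, key=lambda t: t.effective_priority)
print(next_to_run.name)             # -> "low": it now runs at priority 10, so
                                    # "mid" cannot cause unbounded priority inversion
```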

    BRAHMA(+): A Framework for Resource Scaling of Streaming and ASAP Time-Varying Workflows

    Automatic scaling of complex software-as-a-service application workflows is one of the most important problems concerning resource management in clouds. In this paper, we study the automatic workflow resource scaling problem for streaming and ASAP workflows, and its time-varying variant where the workflow resource requirements change over time. Service components of streaming workflows execute concurrently, while those of ASAP workflows execute sequentially. We propose an intelligent framework, BRAHMA(+), which learns the workflow behavior and constructs a knowledge base that serves as its decision-making engine. The proposed resource provisioning algorithms leverage this learned information, curated in the knowledge base, to make informed and intelligent scaling decisions. Additionally, BRAHMA(+) employs online-learning strategies to keep the knowledge base up to date, thereby accommodating changes in the workflow resource requirements over time. We evaluate the proposed algorithms using CloudSim simulations. Results on streaming and ASAP workflows, with both static and time-varying resource requirements, show that the proposed algorithms are effective and produce good cost-quality trade-offs. The proactive and hybrid algorithms meet the service level agreements and restrict deadline violations to a small fraction (3%-5% in the considered scenarios), while suffering only a marginal increase in average cost per component compared to the baseline algorithms.
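    As a rough sketch of the knowledge-base idea (not BRAHMA(+)'s actual implementation), the snippet below keeps a per-component resource estimate that is updated online and queried for proactive provisioning decisions; all names and parameters are illustrative.

```python
import math
from collections import defaultdict

class ScalingKnowledgeBase:
    """Toy knowledge base: maps (component, load bucket) to a learned resource
    estimate and updates it online with an exponential moving average."""

    def __init__(self, alpha=0.3, default=1.0):
        self.alpha = alpha                            # learning rate of the online update
        self.estimates = defaultdict(lambda: default)

    def recommend(self, component, load_bucket):
        # Proactive decision: provision the learned estimate, rounded up.
        return math.ceil(self.estimates[(component, load_bucket)])

    def observe(self, component, load_bucket, resources_needed):
        # Online-learning step that keeps the knowledge base up to date as the
        # workflow's resource requirements drift over time.
        key = (component, load_bucket)
        self.estimates[key] = ((1 - self.alpha) * self.estimates[key]
                               + self.alpha * resources_needed)

kb = ScalingKnowledgeBase()
kb.observe("transcode-stage", "high-load", 6)
kb.observe("transcode-stage", "high-load", 8)
print(kb.recommend("transcode-stage", "high-load"))   # estimate grows toward observed demand
```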

    AN EA-BASED APPROACH TO VALUATE ENTERPRISE TRANSFORMATION: THE CASE OF IS INVESTMENTS ENABLING ON DEMAND INTEGRATION OF SERVICE PROVIDERS

    Determining the value contribution of IS investments is a crucial task for supporting conscious decisions, e.g. about the scope of these investments or for or against their implementation. IS investments transform an enterprise not only in its IS-related architecture, but often enable enhancements within the business-related architecture as well. Valuating IS investments from an integral point of view therefore means measuring the value contribution to all affected artifacts of an enterprise. Enterprise architecture (EA), used as a coordinating framework to valuate enterprise transformation, may help to support this goal. We propose a valuation approach for IS investments based on EA that offers two advantages: since EA captures all artifacts of an enterprise and their relationships, the impact of IS investments on all architectural layers can be identified and attributed to the IS investments as an integral value. Furthermore, EA provides (detailed) models of all changed artifacts. These models can be used to support the valuation of the IS investments' impact on all affected artifacts. To demonstrate how this valuation approach can be tailored to a concrete IS investment, we apply it to the exemplary case of valuating an IS investment enabling the on-demand integration of service providers. To this end, we model the enhancements enabled by this IS investment on the business and business process architecture, drawing on the basic optimization problem of capacity planning within a certain business process. A case study of the payment transaction process of a banking transactions provider finally shows the applicability of the valuation approach.
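    To make the underlying capacity planning trade-off concrete, the sketch below contrasts fixed internal capacity with routing overflow demand on demand to external service providers; all cost figures, names, and the demand model are hypothetical and only illustrate the kind of optimization the valuation could draw on.

```python
import random

def expected_cost(internal_capacity, demand_samples,
                  internal_unit_cost=1.0, on_demand_unit_cost=1.8):
    """Expected processing cost per period: internal capacity is paid for
    whether used or not, and demand exceeding it is routed on demand to
    external service providers at a higher unit price (illustrative values)."""
    total = 0.0
    for demand in demand_samples:
        overflow = max(0.0, demand - internal_capacity)
        total += internal_capacity * internal_unit_cost + overflow * on_demand_unit_cost
    return total / len(demand_samples)

rng = random.Random(0)
demand = [rng.gauss(100, 25) for _ in range(10_000)]           # hypothetical demand per period
costs = {c: expected_cost(c, demand) for c in range(60, 141, 10)}
best_capacity = min(costs, key=costs.get)
# The value contribution of enabling on-demand integration can then be read off
# as the cost difference against a "capacity must cover peak demand" baseline.
```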

    Performance Analysis of Live-Virtual-Constructive and Distributed Virtual Simulations: Defining Requirements in Terms of Temporal Consistency

    This research extends the knowledge of live-virtual-constructive (LVC) and distributed virtual simulations (DVS) through a detailed analysis and characterization of their underlying computing architecture. LVCs are characterized as a set of asynchronous simulation applications, each serving as both a producer and a consumer of shared state data. In terms of data aging characteristics, LVCs are found to be first-order linear systems. System performance is quantified via two opposing factors: the consistency of the distributed state space, and the response time or interaction quality of the autonomous simulation applications. A framework is developed that defines temporal data consistency requirements such that the objectives of the simulation are satisfied. Additionally, to develop simulations that reliably execute in real time and accurately model hierarchical systems, two real-time design patterns are developed: a tailored version of the model-view-controller architecture pattern along with a companion Component pattern. Together they provide a basis for hierarchical simulation models, graphical displays, and network I/O in a real-time environment. For both LVCs and DVSs, the relationship between consistency and interactivity is established by mapping the threads created by a simulation application to factors that control both interactivity and shared state consistency throughout a distributed environment.
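    A minimal sketch of a temporal data consistency check, assuming the worst-case age of a shared state item is bounded by the producer's update period plus transport latency plus the consumer's frame time; the function names and timing values are illustrative, not the framework developed in the paper.

```python
def worst_case_state_age(update_period_s, network_latency_s, consumer_frame_s):
    """Upper bound on how old a piece of shared state can be when a consuming
    simulation application reads it: one full producer period, plus transport
    latency, plus the consumer's own frame time (assumed bound, for illustration)."""
    return update_period_s + network_latency_s + consumer_frame_s

def meets_temporal_consistency(requirement_s, **timing):
    """True if the worst-case data age stays within the consistency requirement."""
    return worst_case_state_age(**timing) <= requirement_s

# Example: a 50 ms consistency requirement on an entity's shared position
ok = meets_temporal_consistency(
    requirement_s=0.050,
    update_period_s=0.020,     # producer publishes at 50 Hz
    network_latency_s=0.015,
    consumer_frame_s=0.010,
)
print(ok)  # True: a 45 ms worst-case age satisfies the 50 ms requirement
```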

    Dynamic Priority Rules for Combining On-Demand Passenger Transportation and Transportation of Goods

    Urban on-demand transportation services are booming, in both passenger transportation and the transportation of goods. The types of service differ in timeliness and compensation and, until now, providers have operated separate fleets for each type of service. While this may ensure sufficient resources for lucrative passenger transportation, the separation also leaves consolidation potential untapped. In this paper, we propose combining both services in an anticipatory way that ensures high passenger service rates while simultaneously transporting a large number of goods. To this end, we introduce a dynamic priority policy that uses a time-dependent percentage of vehicles mainly to serve passengers. To find effective time-dependent parametrizations given a limited number of runtime-expensive simulations, we apply Bayesian Optimization. We show that our anticipatory policy increases revenue and service rates significantly, while a myopic combination of the services may actually lead to inferior performance compared to using two separate fleets.
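    A minimal sketch of what a time-dependent priority policy could look like: a parametrized share of the fleet is primarily reserved for passengers in each time-of-day bucket. The bucket boundaries and parameter values are placeholders that, in the setting described above, would be tuned via Bayesian Optimization over simulation runs.

```python
def passenger_share(hour, params):
    """Fraction of the fleet primarily reserved for passengers at a given hour.
    The buckets and values are hypothetical; in practice they would be tuned
    against runtime-expensive simulations (e.g. via Bayesian Optimization)."""
    if 7 <= hour < 10 or 16 <= hour < 19:      # commuting peaks
        return params["peak"]
    if 10 <= hour < 16:                        # midday
        return params["midday"]
    return params["off_peak"]                  # evening / night

def assign_fleet(fleet_size, hour, params):
    """Split the fleet into passenger-priority and goods-priority vehicles."""
    n_passenger = round(passenger_share(hour, params) * fleet_size)
    return n_passenger, fleet_size - n_passenger

params = {"peak": 0.9, "midday": 0.6, "off_peak": 0.4}   # placeholder parametrization
print(assign_fleet(100, 8, params))    # (90, 10) during the morning peak
print(assign_fleet(100, 13, params))   # (60, 40) around midday
```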