15,781 research outputs found

    Predicting Worst-Case Execution Time Trends in Long-Lived Real-Time Systems

    In some long-lived real-time systems, it is not uncommon for the execution times of some tasks to exhibit trends. For hard and firm real-time systems, it is important to ensure that these trends will not jeopardize the system. In this paper, we first introduce the notion of dynamic worst-case execution time (dWCET), which offers a new perspective that can help a system predict potential timing failures and optimize resource allocation. We then provide a comprehensive review of trend prediction methods. In the evaluation, we conduct a comparative study of dWCET trend prediction: four prediction methods, combined with three data selection processes, are applied in an evaluation framework. The results show the importance of data preprocessing and suggest that non-parametric estimators perform better than parametric methods.
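
    For intuition, a minimal C sketch is given below. It is not the paper's evaluation framework: it contrasts one parametric predictor (an ordinary least-squares trend line, extrapolated forward) with one simple non-parametric predictor (the maximum over a recent window) on a hypothetical series of measured execution times; the data, window size, and prediction horizon are invented for illustration.

        /* Illustrative sketch only: one parametric and one non-parametric
         * predictor over observed execution times. The paper's four methods
         * and its data-selection steps are not reproduced here. */
        #include <stdio.h>
        #include <stddef.h>

        /* Fit y = a + b*x by ordinary least squares and extrapolate to x = horizon. */
        static double linear_trend(const double *y, size_t n, double horizon)
        {
            double sx = 0, sy = 0, sxx = 0, sxy = 0;
            for (size_t i = 0; i < n; i++) {
                sx  += i;              sy  += y[i];
                sxx += (double)i * i;  sxy += i * y[i];
            }
            double b = (n * sxy - sx * sy) / (n * sxx - sx * sx);
            double a = (sy - b * sx) / n;
            return a + b * horizon;
        }

        /* Non-parametric alternative: maximum over the most recent window. */
        static double window_max(const double *y, size_t n, size_t window)
        {
            double m = y[n - 1];
            for (size_t i = (n > window ? n - window : 0); i < n; i++)
                if (y[i] > m) m = y[i];
            return m;
        }

        int main(void)
        {
            /* Hypothetical measured execution times (ms) with a slow upward drift. */
            double exec_ms[] = { 4.1, 4.0, 4.3, 4.2, 4.4, 4.6, 4.5, 4.8, 4.7, 5.0 };
            size_t n = sizeof exec_ms / sizeof exec_ms[0];

            printf("parametric dWCET estimate at job %zu: %.2f ms\n",
                   n + 5, linear_trend(exec_ms, n, (double)(n + 5)));
            printf("non-parametric estimate (window max): %.2f ms\n",
                   window_max(exec_ms, n, 5));
            return 0;
        }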

    On the classification and evaluation of prefetching schemes


    A self-adapting latency/power tradeoff model for replicated search engines

    For many search settings, distributed/replicated search engines deploy a large number of machines to ensure efficient retrieval. This paper investigates how the power consumption of a replicated search engine can be automatically reduced when the system has low contention, without compromising its efficiency. We propose a novel self-adapting model to analyse the trade-off between latency and power consumption for distributed search engines. When query volumes are high and there is contention for resources, the model automatically increases the number of active machines in the system to maintain acceptable query response times. On the other hand, when the system load is low and queries can be served easily, the model reduces the number of active machines, leading to power savings. The model bases its decisions on the current and historical query loads of the search engine. Our proposal is formulated as a general dynamic decision problem, which can be solved quickly by dynamic programming in response to changing query loads. Thorough experiments are conducted to validate the usefulness of the proposed adaptive model using historical Web search traffic submitted to a commercial search engine. Our results show that the proposed self-adapting model can achieve an energy saving of 33% while degrading mean query completion time by only 10 ms, compared to a baseline that provisions replicas based on the previous day's traffic.
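
    As a rough illustration of the kind of dynamic decision problem described above, the C sketch below runs a small backward dynamic program over a short horizon of forecast query loads, trading off a per-replica power cost, an overload (latency) penalty, and a switching cost. The cost model and every constant are assumptions made for the example; they are not the paper's model or parameters.

        /* Sketch of a dynamic-programming policy for choosing how many replicas
         * to keep active over a short horizon of forecast query loads. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <math.h>

        #define MAX_M   8      /* replicas available            */
        #define HORIZON 4      /* decision steps planned ahead  */

        static const double load[HORIZON]  = { 120, 300, 450, 150 }; /* queries/s */
        static const double capacity       = 100;  /* queries/s per replica        */
        static const double power_cost     = 1.0;  /* per active replica per step  */
        static const double latency_weight = 5.0;  /* penalty when utilization > 1 */
        static const double switch_cost    = 0.5;  /* per replica turned on or off */

        /* Stage cost of running m replicas under a given load. */
        static double stage_cost(int m, double qps)
        {
            double util = qps / (m * capacity);
            double overload = util > 1.0 ? util - 1.0 : 0.0;
            return power_cost * m + latency_weight * overload * qps;
        }

        int main(void)
        {
            double V[HORIZON + 1][MAX_M + 1] = { 0 };  /* value function          */
            int    act[HORIZON][MAX_M + 1];            /* best m given previous m */

            for (int t = HORIZON - 1; t >= 0; t--) {
                for (int prev = 1; prev <= MAX_M; prev++) {
                    double best = INFINITY;
                    int best_m = prev;
                    for (int m = 1; m <= MAX_M; m++) {
                        double c = stage_cost(m, load[t])
                                 + switch_cost * abs(m - prev)
                                 + V[t + 1][m];
                        if (c < best) { best = c; best_m = m; }
                    }
                    V[t][prev] = best;
                    act[t][prev] = best_m;
                }
            }

            /* Replay the optimal plan starting from 4 active replicas. */
            int m = 4;
            for (int t = 0; t < HORIZON; t++) {
                m = act[t][m];
                printf("step %d: load %.0f q/s -> keep %d replicas active\n",
                       t, load[t], m);
            }
            return 0;
        }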

    Emulating and evaluating hybrid memory for managed languages on NUMA hardware

    Non-volatile memory (NVM) has the potential to become a mainstream memory technology and challenge DRAM. Researchers evaluating the speed, endurance, and abstractions of hybrid memories with DRAM and NVM typically use simulation, making it easy to evaluate the impact of different hardware technologies and parameters. Simulation is, however, extremely slow, limiting the applications and datasets in the evaluation. Simulation also precludes critical workloads, especially those written in managed languages such as Java and C#. Good methodology embraces a variety of techniques for evaluating new ideas, expanding the experimental scope, and uncovering new insights. This paper introduces a platform to emulate hybrid memory for managed languages using commodity NUMA servers. Emulation complements simulation but offers richer software experimentation. We use a thread-local socket to emulate DRAM and a remote socket to emulate NVM. We use standard C library routines to allocate heap memory on the DRAM and NVM sockets for use with explicit memory management or garbage collection. We evaluate the emulator using various configurations of write-rationing garbage collectors that improve NVM lifetimes by limiting writes to NVM, using 15 applications and various datasets and workload configurations. We show that emulation and simulation confirm each other's trends in terms of writes to NVM for different software configurations, increasing our confidence in predicting future system effects. Emulation brings novel insights, such as the non-linear effect of multi-programmed workloads on NVM writes and the fact that Java applications write significantly more than their C++ equivalents. We make our software infrastructure publicly available to advance the evaluation of novel memory management schemes on hybrid memories.
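
    A minimal sketch of the allocation side of such an emulator is given below, assuming a two-socket Linux machine and the libnuma C API (numa_run_on_node, numa_alloc_onnode, numa_free; compile with -lnuma). Node 0 stands in for DRAM and node 1 for NVM; the node numbers and heap sizes are assumptions for the example, not values from the paper.

        /* Socket-directed allocation sketch: local-socket memory plays DRAM,
         * remote-socket memory plays NVM. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <numa.h>

        int main(void)
        {
            if (numa_available() < 0) {
                fprintf(stderr, "NUMA is not available on this system\n");
                return EXIT_FAILURE;
            }

            /* Keep the running thread on node 0 so its "DRAM" really is local. */
            numa_run_on_node(0);

            size_t heap_bytes = 64 * 1024 * 1024;

            /* Emulated DRAM: heap memory bound to the local socket. */
            char *dram_heap = numa_alloc_onnode(heap_bytes, 0);
            /* Emulated NVM: heap memory bound to the remote socket (higher latency). */
            char *nvm_heap  = numa_alloc_onnode(heap_bytes, 1);

            if (!dram_heap || !nvm_heap) {
                fprintf(stderr, "allocation failed\n");
                return EXIT_FAILURE;
            }

            /* A runtime or garbage collector would now place write-heavy objects
             * in dram_heap and long-lived, rarely written objects in nvm_heap. */
            dram_heap[0] = 1;
            nvm_heap[0]  = 1;

            numa_free(dram_heap, heap_bytes);
            numa_free(nvm_heap, heap_bytes);
            return EXIT_SUCCESS;
        }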

    Flexible and Adaptive Real-Time Task Scheduling in Cyber-Physical Control Systems

    In a Cyber-Physical Control System (CPCS), there is often a mix of hard real-time tasks, which have stringent timing requirements, and soft real-time tasks, which are computationally intensive. The task scheduling of such systems is challenging and requires flexible schemes that can meet the timing requirements without being over-conservative. Fixed-priority scheduling (FPS) is a scheduling policy that has been widely used in industry. However, as an open-loop scheduler, FPS adapts poorly to changing system dynamics and takes no feedback from historical operation. As the working conditions of a CPCS change due to both internal and external factors, an improved scheduling scheme is required that can adapt to changes without a costly system redesign. In recent years, there has been considerable research interest in the co-design of control and scheduling systems, which explicitly considers task scheduling during the design of a controller. Many of these works reveal the possibility of adapting control periods at run-time in order to accommodate varying resource requirements and to optimise CPU utilization. It has also been shown that control quality can be traded off against resource usage. In this thesis, an adaptive real-time scheduling framework for CPCSs is presented. The adaptive scheduler has a hierarchical structure and is built on top of a traditional FPS scheduler. The idea of dynamic worst-case execution time is introduced, and its causes and methods for identifying the existence of a trend are discussed. An adaptation method that uses monitored statistical information to update control task periods is then introduced. Finally, this method is extended with a dual-period model that can switch between multiple operational modes at run-time. The proposed framework can potentially be extended in many directions, some of which are discussed as future work. All proposals in this thesis are supported by extensive analysis and evaluation.
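
    As a simplified illustration of period adaptation (not the thesis's dual-period scheme), the C sketch below rescales a control task's period from a monitored execution-time estimate so that the task's utilization stays within an assumed budget, clamped to an allowed period range; every constant here is hypothetical.

        /* Toy period adaptation: choose the period T so that C/T stays within
         * a per-task utilization budget, within [T_MIN_MS, T_MAX_MS]. */
        #include <stdio.h>

        #define T_MIN_MS  10.0   /* shortest period the controller tolerates     */
        #define T_MAX_MS  50.0   /* longest period before control quality drops  */
        #define U_BUDGET   0.20  /* utilization share granted to this task       */

        /* Pick a new period from a monitored execution-time estimate
         * (e.g. a dWCET trend prediction). */
        static double adapt_period(double exec_estimate_ms)
        {
            double t = exec_estimate_ms / U_BUDGET;
            if (t < T_MIN_MS) t = T_MIN_MS;
            if (t > T_MAX_MS) t = T_MAX_MS;
            return t;
        }

        int main(void)
        {
            /* Hypothetical dWCET estimates drifting upwards over time. */
            double estimates[] = { 2.0, 2.4, 3.1, 4.5, 6.0, 9.0, 12.0 };
            size_t n = sizeof estimates / sizeof estimates[0];
            for (size_t i = 0; i < n; i++)
                printf("dWCET %.1f ms -> period %.1f ms (utilization %.2f)\n",
                       estimates[i], adapt_period(estimates[i]),
                       estimates[i] / adapt_period(estimates[i]));
            return 0;
        }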

    Performance analysis and optimization of the Java memory system


    The Illusion of the Perpetual Money Machine

    We argue that the present crisis and stalling economy continuing since 2007 are rooted in the delusionary belief in policies based on a "perpetual money machine" type of thinking. We document strong evidence that, since the early 1980s, consumption has been increasingly funded by smaller savings, booming financial profits, wealth extracted from house price appreciation and explosive debt. This is in stark contrast with the productivity-fueled growth that was seen in the 1950s and 1960s. This transition, starting in the early 1980s, was further supported by a climate of deregulation and a massive growth in financial derivatives designed to spread and diversify the risks globally. The result has been a succession of bubbles and crashes, including the worldwide stock market bubble and great crash of October 1987, the savings and loans crisis of the 1980s, the burst in 1991 of the enormous Japanese real estate and stock market bubbles, the emerging markets bubbles and crashes in 1994 and 1997, the LTCM crisis of 1998, the dotcom bubble bursting in 2000, the recent house price bubbles, the financialization bubble via special investment vehicles, the stock market bubble, the commodity and oil bubbles and the debt bubbles, all developing jointly and feeding on each other. Rather than still hoping that real wealth will come out of money creation, we need fundamentally new ways of thinking. In uncertain times, it is essential, more than ever, to think in scenarios: what can happen in the future, and what would be the effect on your wealth and capital? How can you protect against adverse scenarios? We thus end by examining the question "what can we do?" from the macro level, discussing the fundamental issue of incentives and of constructing and predicting scenarios as well as developing investment insights. (Comment: 27 pages, 18 figures; Notenstein Academy White Paper Series.)

    Prochlo: Strong Privacy for Analytics in the Crowd

    The large-scale monitoring of computer users' software activities has become commonplace, e.g., for application telemetry, error reporting, or demographic profiling. This paper describes a principled systems architecture, Encode, Shuffle, Analyze (ESA), for performing such monitoring with high utility while also protecting user privacy. The ESA design, and its Prochlo implementation, are informed by our practical experiences with an existing, large deployment of privacy-preserving software monitoring. (Abstract continues; see the paper.)
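
    A toy C sketch of the Encode-Shuffle-Analyze split follows: clients encode reports without identifiers, a shuffler permutes the batch so that arrival order carries no information, and the analyzer aggregates only the shuffled, identifier-free data. This is a teaching sketch of the pipeline shape under those simplifying assumptions, not Prochlo's implementation.

        /* Minimal ESA-shaped pipeline: encode strips identity, shuffle breaks
         * ordering, analyze sees only the anonymous batch. */
        #include <stdio.h>
        #include <stdlib.h>

        typedef struct { int metric; } EncodedReport;   /* no user identifier */

        /* Encode: keep only the metric of interest, discarding who sent it. */
        static EncodedReport encode(int user_id, int metric)
        {
            (void)user_id;                   /* deliberately dropped */
            EncodedReport r = { metric };
            return r;
        }

        /* Shuffle: Fisher-Yates permutation so arrival order leaks nothing. */
        static void shuffle(EncodedReport *batch, size_t n)
        {
            for (size_t i = n - 1; i > 0; i--) {
                size_t j = (size_t)rand() % (i + 1);
                EncodedReport tmp = batch[i];
                batch[i] = batch[j];
                batch[j] = tmp;
            }
        }

        /* Analyze: aggregate over the anonymous, shuffled batch. */
        static double analyze(const EncodedReport *batch, size_t n)
        {
            long sum = 0;
            for (size_t i = 0; i < n; i++) sum += batch[i].metric;
            return (double)sum / n;
        }

        int main(void)
        {
            int metrics[4] = { 3, 7, 2, 9 };
            EncodedReport batch[4];
            for (int u = 0; u < 4; u++)
                batch[u] = encode(1000 + u, metrics[u]);  /* ids never leave encode() */

            shuffle(batch, 4);
            printf("mean of shuffled, identifier-free reports: %.2f\n",
                   analyze(batch, 4));
            return 0;
        }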