    A Survey of Green Networking Research

    Reduction of unnecessary energy consumption is becoming a major concern in wired networking, because of the potential economic benefits and the expected environmental impact. These issues, usually referred to as "green networking", relate to embedding energy-awareness in the design, the devices, and the protocols of networks. In this work, we first formulate a more precise definition of the "green" attribute. We furthermore identify a few paradigms that are the key enablers of energy-aware networking research. We then overview the current state of the art and provide a taxonomy of the relevant work, with a special focus on wired networking. At a high level, we identify four branches of green networking research that stem from different observations on the root causes of energy waste, namely (i) Adaptive Link Rate, (ii) Interface Proxying, (iii) Energy-aware Infrastructures and (iv) Energy-aware Applications. In this work, we not only explore specific proposals pertaining to each of the above branches, but also offer a perspective for future research. Comment: Index Terms: Green Networking; Wired Networks; Adaptive Link Rate; Interface Proxying; Energy-aware Infrastructures; Energy-aware Applications. 18 pages, 6 figures, 2 tables
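    Of the branches listed above, Adaptive Link Rate is the most mechanical to illustrate: a link observes its recent utilization and steps its operating rate up or down, trading idle capacity for lower energy draw. The Python sketch below is only an illustration of that idea under assumed values; the rate ladder, the upshift/downshift thresholds, and the load trace are hypothetical and do not come from the survey.

        # Toy sketch of the Adaptive Link Rate idea: step the link rate down when
        # traffic is light and back up when the link gets busy. The rate ladder,
        # thresholds, and load samples are illustrative assumptions only.

        SUPPORTED_RATES_MBPS = [100, 1000, 10000]   # hypothetical NIC rate ladder
        UPSHIFT_UTILIZATION = 0.8                   # step up when the link is this busy
        DOWNSHIFT_UTILIZATION = 0.2                 # step down when the link is this idle

        def next_rate(current_rate: int, offered_load_mbps: float) -> int:
            """Pick the link rate for the next interval from the observed load."""
            utilization = offered_load_mbps / current_rate
            idx = SUPPORTED_RATES_MBPS.index(current_rate)
            if utilization > UPSHIFT_UTILIZATION and idx < len(SUPPORTED_RATES_MBPS) - 1:
                return SUPPORTED_RATES_MBPS[idx + 1]   # avoid queueing delay
            if utilization < DOWNSHIFT_UTILIZATION and idx > 0:
                return SUPPORTED_RATES_MBPS[idx - 1]   # save energy at low load
            return current_rate

        if __name__ == "__main__":
            rate = 1000
            for load in [50, 30, 900, 9500, 400]:      # offered load in Mb/s
                rate = next_rate(rate, load)
                print(f"load={load:>5} Mb/s -> link rate {rate} Mb/s")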

    PowerPack: Energy Profiling and Analysis of High-Performance Systems and Applications

    Energy efficiency is a major concern in modern high-performance computing system design. In the past few years, there has been mounting evidence that power usage limits system scale and computing density, and thus, ultimately, system performance. However, despite the impact of power and energy on the computer systems community, few studies provide insight into where and how power is consumed on high-performance systems and applications. In previous work, we designed a framework called PowerPack that was the first tool to isolate the power consumption of devices including disks, memory, NICs, and processors in a high-performance cluster and correlate these measurements to application functions. In this work, we extend our framework to support systems with multicore, multiprocessor-based nodes, and then provide in-depth analyses of the energy consumption of parallel applications on clusters of these systems. These analyses include the impact of chip multiprocessing on power and energy efficiency and its interaction with application execution. In addition, we use PowerPack to study the power dynamics and energy efficiency of dynamic voltage and frequency scaling (DVFS) techniques on clusters. Our experiments reveal conclusively how intelligent DVFS scheduling can enhance system energy efficiency while maintaining performance.
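    As a rough illustration of the DVFS scheduling intuition in the abstract (not PowerPack's actual implementation), the Python sketch below lowers the CPU frequency during phases that are not compute-bound, where the slowdown is small relative to the power saved. The phase labels, frequency ladder, and cubic power model are assumptions made for illustration only.

        # Illustrative DVFS policy sketch (not PowerPack itself): run non-CPU-bound
        # phases at a lower frequency. Phase labels, frequencies, and the cubic
        # power model are assumptions for illustration.

        FREQS_GHZ = [1.2, 1.8, 2.4]                  # hypothetical frequency ladder

        def pick_frequency(phase: str) -> float:
            """Map an application phase to a CPU frequency."""
            if phase in ("mpi_wait", "memory_bound"):
                return FREQS_GHZ[0]                  # little slowdown, large power saving
            if phase == "mixed":
                return FREQS_GHZ[1]
            return FREQS_GHZ[-1]                     # compute-bound: run at full speed

        def dynamic_power_watts(freq_ghz: float) -> float:
            """Toy dynamic-power model: P roughly scales with f^3."""
            return 20.0 * (freq_ghz / FREQS_GHZ[-1]) ** 3

        if __name__ == "__main__":
            for phase in ["compute", "mpi_wait", "memory_bound", "compute", "mixed"]:
                f = pick_frequency(phase)
                print(f"{phase:>12}: {f} GHz, ~{dynamic_power_watts(f):.1f} W dynamic")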

    A cell outage management framework for dense heterogeneous networks

    In this paper, we present a novel cell outage management (COM) framework for heterogeneous networks with split control and data planes, a candidate architecture for meeting future capacity, quality-of-service, and energy efficiency demands. In such an architecture, the control and data functionalities are not necessarily handled by the same node. The control base stations (BSs) manage the transmission of control information and user equipment (UE) mobility, whereas the data BSs handle UE data. An implication of this split architecture is that an outage to a BS in one plane has to be compensated by other BSs in the same plane. Our COM framework addresses this challenge by incorporating two distinct cell outage detection (COD) algorithms to cope with the idiosyncrasies of both data and control planes. The COD algorithm for control cells leverages the relatively large number of UEs in the control cell to gather large-scale minimization-of-drive-test report data and detects an outage by applying machine learning and anomaly detection techniques. To improve outage detection accuracy, we also investigate and compare the performance of two anomaly detection algorithms, i.e., k-nearest-neighbor- and local-outlier-factor-based anomaly detectors, within the control COD. For data cell COD, on the other hand, we propose a heuristic Grey-prediction-based approach, which can work with the small number of UEs in the data cell by exploiting the fact that the control BS manages UE-data BS connectivity and receives periodic updates of the reference signal received power statistics between the UEs and the data BSs in its coverage. The detection accuracy of the heuristic data COD algorithm is further improved by exploiting the Fourier series of the residual error that is inherent to a Grey prediction model. Our COM framework integrates these two COD algorithms with a cell outage compensation (COC) algorithm that can be applied to both planes. Our COC solution utilizes an actor-critic-based reinforcement learning algorithm, which optimizes the capacity and coverage of the identified outage zone in a plane by adjusting the antenna gain and transmission power of the surrounding BSs in that plane. The simulation results show that the proposed framework can detect both data and control cell outages and compensate for the detected outages in a reliable manner.
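    A minimal Python sketch of the k-nearest-neighbor flavor of anomaly detection mentioned above: each UE report is treated as a small feature vector (here, assumed RSRP/RSRQ pairs), and a report whose mean distance to its k nearest neighbors in a healthy reference set is large is flagged as a possible outage indicator. The feature values, k, and the threshold are illustrative assumptions, not parameters from the paper.

        # Sketch of k-NN-style anomaly scoring over UE measurement reports.
        # Feature values, k, and the threshold are assumed for illustration.

        import math

        def knn_anomaly_score(report, reference, k=3):
            """Mean Euclidean distance from `report` to its k nearest reference reports."""
            dists = sorted(math.dist(report, r) for r in reference)
            return sum(dists[:k]) / k

        if __name__ == "__main__":
            # Hypothetical healthy reports: (RSRP in dBm, RSRQ in dB)
            healthy = [(-85, -9), (-88, -10), (-83, -8), (-90, -11), (-86, -9)]
            threshold = 6.0                           # assumed decision threshold

            for report in [(-87, -10), (-115, -19)]:  # second report suggests an outage
                score = knn_anomaly_score(report, healthy)
                status = "ANOMALY (possible outage)" if score > threshold else "normal"
                print(f"report {report}: score={score:.1f} -> {status}")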

    Uncertainty Wedge Analysis: Quantifying the Impact of Sparse Sound Speed Profiling Regimes on Sounding Uncertainty

    Recent advances in real-time monitoring of uncertainty due to refraction have demonstrated the power of estimating and visualizing uncertainty over the entire potential sounding space. This representation format, referred to as an uncertainty wedge, can be used to help solve difficult survey planning problems regarding the spatio-temporal variability of the watercolumn. Though initially developed to work in-line with underway watercolumn sampling hardware (e.g. moving vessel profilers), uncertainty wedge analysis techniques are extensible to problems associated with low-density watercolumn sampling in which only a few sound speed casts are gathered per day. Because uncertainty wedge analysis techniques require no sounding data, the overhead of post-processing soundings is avoided when one needs to quickly ascertain the impact of a particular sampling regime. In keeping with the spirit of the underlying real-time monitoring tools, a just-in-time analysis of sound speed casts can help the field operator assess the effects of watercolumn variability during acquisition and objectively seek a watercolumn sampling regime that balances the opposing goals of maximizing survey efficiency and maintaining reasonable sounding accuracy. In this work, we investigate the particular problem of estimating the uncertainty that would be associated with a particular low-density sound speed sampling regime. A pre-analysis technique is proposed in which a high-density set of sound speed profiles provides a baseline against which various low-density sampling regimes can be tested, the end goal being to ascertain the penalty in sounding confidence associated with a particular low-density sampling regime. In other words, by knowing too much about the watercolumn, one can objectively quantify the impact of not knowing enough. In addition to the goal-seeking field application outlined earlier, this allows for more confident attribution of uncertainty to soundings, a marked improvement over current approaches to refraction uncertainty estimation.
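    The pre-analysis idea lends itself to a simple sketch: treat a dense day of casts as the baseline, retain only every Nth cast to emulate a low-density regime, and score the regime by how far the dropped casts deviate from the nearest retained cast. The Python sketch below makes the simplifying assumption that each cast is summarized by a single surface sound speed rather than a full profile and ray trace; the cast values and decimation factors are hypothetical.

        # Sketch of the pre-analysis: decimate a dense set of casts and score the
        # resulting regime by the worst deviation from the nearest retained cast.
        # Cast values and decimation factors are hypothetical.

        def regime_penalty(cast_speeds, keep_every):
            """Worst deviation (m/s) between any cast and the nearest retained cast."""
            kept = list(range(0, len(cast_speeds), keep_every))
            worst = 0.0
            for i, speed in enumerate(cast_speeds):
                nearest = min(kept, key=lambda j: abs(j - i))
                worst = max(worst, abs(speed - cast_speeds[nearest]))
            return worst

        if __name__ == "__main__":
            # Hypothetical surface sound speeds (m/s) from a dense day of casts
            dense_casts = [1480.0, 1480.5, 1481.2, 1483.0, 1485.5, 1486.0, 1484.2, 1482.1]
            for n in (1, 2, 4):
                penalty = regime_penalty(dense_casts, n)
                print(f"keep every {n} cast(s): worst deviation {penalty:.1f} m/s")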

    ALSEP termination report

    The Apollo Lunar Surface Experiments Package (ALSEP) final report was prepared when support operations were terminated on September 30, 1977, and NASA discontinued the receiving and processing of scientific data transmitted from equipment deployed on the lunar surface. The ALSEP experiments (Apollo 11 to Apollo 17) are described and pertinent operational history is given for each experiment. The ALSEP data processing and distribution are described together with an extensive discussion on archiving. Engineering closeout tests and results are given, and the status and configuration of the experiments at termination are documented. Significant science findings are summarized by selected investigators. Significant operational data and recommendations are also included.

    The future of laboratory medicine - A 2014 perspective.

    Predicting the future is a difficult task. Not surprisingly, there are many examples of predictions and assumptions that have proved to be wrong. This review surveys the many predictions, beginning in 1887, about the future of laboratory medicine and its sub-specialties such as clinical chemistry and molecular pathology. It provides a commentary on the accuracy of the predictions and offers opinions on emerging technologies, economic factors and social developments that may play a role in shaping the future of laboratory medicine.

    The potential of glycomics as prognostic biomarkers in liver disease and liver transplantation

    The study of glycomics is a novel and fascinating approach for the development of biomarkers. It has become clear that in the field of liver disease specific glycomic patterns are present in specific disease states, which has led to the development of diagnostic biomarkers. In this manuscript, we will describe two new applications of this technology for the development of prognostic biomarkers. The first biomarker is associated with the risk of hepatocellular carcinoma development in patients with compensated cirrhosis. The second biomarker is present in perfusate and is related to the risk of primary non-function occurrence after liver transplantation. The technology used for these biomarkers could easily be implemented on routine capillary electrophoresis equipment.