
    Autonomic Overload Management For Large-Scale Virtualized Network Functions

    The explosion of data traffic in telecommunication networks has been impressive in the last few years. To keep up with the high demand while staying profitable, Telcos are embracing the Network Function Virtualization (NFV) paradigm by shifting from hardware network appliances to software virtual network functions, which are expected to support extremely large-scale architectures while providing both high performance and high reliability. The main objective of this dissertation is to provide frameworks and techniques to enable proper overload detection and mitigation for the emerging virtualized, software-based network services. The thesis contribution is threefold. First, it proposes a novel approach to quickly detect performance anomalies in complex and large-scale VNF services. Second, it presents NFV-Throttle, an autonomic overload control framework that protects NFV services from overload within a short period of time, preserving the QoS of traffic flows admitted by network services in response to both traffic spikes (up to 10x the available capacity) and capacity reduction due to infrastructure problems (such as CPU contention). Third, it proposes DRACO, which manages overload problems arising in novel large-scale multi-tier applications, such as complex stateful network functions in which the state is spread across modern key-value stores to achieve both scalability and performance. DRACO performs fine-grained admission control, tuning the amount and type of traffic according to the datastore node dependencies among the tiers (which are dynamically discovered at run-time) and to the current capacity of individual nodes, in order to mitigate overloads and prevent hot-spots. This thesis presents the implementation details and an extensive experimental evaluation of all the above overload management solutions by means of a virtualized IP Multimedia Subsystem (IMS), which provides modern multimedia services for Telco operators, such as videoconferencing and VoLTE, and which is one of the top use cases of NFV technology.
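
    The throttling behaviour described above can be illustrated with a minimal sketch (hypothetical class and parameter names, not the dissertation's actual NFV-Throttle implementation): a controller periodically compares the measured utilization of a VNF against a saturation threshold and adapts the fraction of newly arriving requests that are admitted, so that already-admitted flows keep their QoS.

    import random

    class OverloadThrottle:
        """Toy admission controller: reject a fraction of *new* requests when
        the monitored VNF approaches saturation, leaving admitted flows intact."""

        def __init__(self, target_util=0.8, step=0.1):
            self.target_util = target_util   # utilization level to defend (assumed)
            self.step = step                 # how aggressively to adapt (assumed)
            self.admit_ratio = 1.0           # fraction of new requests admitted

        def update(self, measured_util):
            # Shrink the admitted fraction when above target, grow it when below.
            if measured_util > self.target_util:
                self.admit_ratio = max(0.0, self.admit_ratio - self.step)
            else:
                self.admit_ratio = min(1.0, self.admit_ratio + self.step)

        def admit(self):
            return random.random() < self.admit_ratio

    throttle = OverloadThrottle()
    throttle.update(measured_util=0.95)   # e.g. during a traffic spike
    print(throttle.admit_ratio)           # -> 0.9, new sessions partially rejected

    In the real framework the monitored signals, the adaptation law, and the enforcement point would of course be more elaborate; the sketch only conveys the closed-loop idea.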

    The NASA Aerospace Battery Safety Handbook

    This handbook has been written to acquaint those involved with batteries with the information necessary for the safe handling, storage, and disposal of these energy storage devices. Included in the document is a discussion of cell and battery design considerations and the role of the components within a cell. The cell and battery hazards are related to user- and/or manufacturer-induced causes. The Johnson Space Center (JSC) Payload Safety Guidelines for battery use in Shuttle applications are also provided. The electrochemical systems are divided into zinc anode and lithium anode primaries, secondary cells, and fuel cells. Each system is briefly described, typical applications are given, advantages and disadvantages are tabulated, and, most importantly, the safety hazards associated with its use are given.

    Enabling virtualization technologies for enhanced cloud computing

    Cloud Computing is a ubiquitous technology that offers various services for individual users, small businesses, and large-scale organizations. Data-center owners maintain clusters of thousands of machines and lease out resources like CPU, memory, network bandwidth, and storage to clients. For organizations, cloud computing provides the means to offload server infrastructure and obtain resources on demand, which reduces setup costs as well as maintenance overheads. For individuals, cloud computing offers platforms, resources, and services that would otherwise be unavailable to them. At the core of cloud computing are various virtualization technologies and the resulting Virtual Machines (VMs). Virtualization enables cloud providers to host multiple VMs on a single Physical Machine (PM). The hallmark of VMs is the inability of the end-user to distinguish them from actual PMs. VMs give cloud owners essential features such as live migration, the process of moving a VM from one PM to another while the VM is running. Features of the cloud such as fault tolerance, geographical server placement, energy management, resource management, big data processing, and parallel computing depend heavily on virtualization technologies. Improvements and breakthroughs in these technologies directly lead to new possibilities in the cloud. This thesis identifies and proposes innovations for such underlying VM technologies and tests their performance on a cluster of 16 machines with real-world benchmarks. Specifically, it addresses server load prediction, VM consolidation, live migration, and memory sharing. First, a unique VM resource load prediction mechanism based on Chaos Theory is introduced that predicts server workloads with high accuracy. Based on these predictions, VMs are dynamically and autonomously relocated to different PMs in the cluster in an attempt to conserve energy. Experimental evaluations with a prototype on real-world data-center load traces show that up to 80% of the unused PMs can be freed up and repurposed, with Service Level Objective (SLO) violations as low as 3%. Second, issues in live migration of VMs are analyzed, based on which a new distributed approach is presented that allows network-efficient live migration of VMs. The approach amortizes the transfer of memory pages over the life of the VM, thus reducing network traffic during critical live migration. The prototype reduces network usage by up to 45% and lowers the required time by up to 40% for live migration on various real-world loads. Finally, a memory sharing and management approach called ACE-M is demonstrated that enables VMs to share and utilize all the memory available in the cluster remotely. Along with predictions on network and memory, this approach allows VMs to run applications with memory requirements much higher than those physically available locally. It is experimentally shown that ACE-M reduces memory performance degradation by about 75% and achieves a 40% lower network response time for memory-intensive VMs. A combination of these innovations to the virtualization technologies can minimize performance degradation of various VM attributes, which will ultimately lead to a better end-user experience.
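
    As a rough illustration of the consolidation step (hypothetical function and capacity values; the thesis's Chaos-Theory-based predictor and relocation policy are not reproduced here), the sketch below packs VMs onto as few physical machines as their predicted loads allow, so that the remaining PMs can be freed or repurposed.

    def consolidate(predicted_load, pm_capacity=1.0):
        """First-fit-decreasing placement of VMs (by predicted CPU load) onto PMs.
        Returns a list of PMs, each a list of (vm, load) pairs; PMs that are never
        created correspond to machines that can be powered down or repurposed."""
        pms = []
        for vm, load in sorted(predicted_load.items(), key=lambda kv: -kv[1]):
            for pm in pms:
                if sum(l for _, l in pm) + load <= pm_capacity:
                    pm.append((vm, load))
                    break
            else:
                pms.append([(vm, load)])
        return pms

    # Example: four VMs whose predicted loads fit on two of the available PMs.
    placement = consolidate({"vm1": 0.6, "vm2": 0.5, "vm3": 0.3, "vm4": 0.4})
    print(len(placement))  # -> 2 PMs used, the rest can be freed

    In practice the placement decision would also weigh migration cost and SLO risk, which is exactly what the prediction accuracy is meant to support.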

    Chance Constrained Optimal Power Flow: Risk-Aware Network Control under Uncertainty

    When uncontrollable resources fluctuate, Optimal Power Flow (OPF), routinely used by the electric power industry to redispatch hourly controllable generation (coal, gas and hydro plants) over control areas of transmission networks, can result in grid instability and, potentially, cascading outages. This risk arises because OPF dispatch is computed without awareness of major uncertainty, in particular fluctuations in renewable output. As a result, grid operation under OPF with renewable variability can lead to frequent conditions where power line flow ratings are significantly exceeded. Such a condition, which is borne out by simulations of real grids, would likely result in automatic line tripping to protect lines from thermal stress, a risky and undesirable outcome which compromises stability. Smart grid goals include a commitment to large penetration of highly fluctuating renewables, thus calling for a reconsideration of current practices, in particular the use of standard OPF. Our Chance Constrained (CC) OPF corrects the problem and mitigates dangerous renewable fluctuations with minimal changes to the current operational procedure. Assuming availability of a reliable wind forecast parameterizing the distribution function of the uncertain generation, our CC-OPF satisfies all the constraints with high probability while simultaneously minimizing the cost of economic redispatch. CC-OPF allows efficient implementation, e.g. solving a typical instance over the 2746-bus Polish network in 20 s on a standard laptop.
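
    The modeling idea can be written compactly in generic notation (not copied from the paper): rather than enforcing every line limit deterministically, each flow limit is required to hold with probability at least 1 − ε under the forecast distribution of the renewable fluctuations ω:

        \min_{p} \; \mathbb{E}_{\omega}\!\left[ c(p, \omega) \right]
        \quad \text{s.t.} \quad
        \mathbb{P}_{\omega}\!\left( |f_{\ell}(p, \omega)| \le \bar{f}_{\ell} \right) \ge 1 - \varepsilon
        \qquad \text{for every line } \ell,

    where p is the controllable dispatch, f_{\ell}(p, ω) the resulting flow on line \ell, and \bar{f}_{\ell} its rating. For Gaussian forecast errors such chance constraints typically admit a second-order cone reformulation, which is consistent with the efficient solve times reported above.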

    Deletion of the hemopexin or heme oxygenase-2 gene aggravates brain injury following stroma-free hemoglobin-induced intracerebral hemorrhage

    BACKGROUND: Following intracerebral hemorrhage (ICH), red blood cells release massive amounts of toxic heme that causes local brain injury. Hemopexin (Hpx) has the highest binding affinity for heme and participates in its transport, while heme oxygenase 2 (HO2) is the rate-limiting enzyme for the degradation of heme. Microglia are the resident macrophages in the brain; however, the significance and role of HO2 and Hpx in microglial clearance of toxic heme (iron-protoporphyrin IX) after ICH remain understudied. Accordingly, we postulated that global deletion of constitutive HO2 or Hpx would worsen ICH outcomes. METHODS: Intracerebral injection of stroma-free hemoglobin (SFHb) was used in our study to induce ICH. Hpx knockout (Hpx(−/−)) or HO2 knockout (HO2(−/−)) mice were injected with 10 μL of SFHb in the striatum. After injection, behavioral/functional tests were performed, along with anatomical analyses. Iron deposition and neuronal degeneration were depicted by Perls’ and Fluoro-Jade B staining, respectively. Immunohistochemistry with anti-ionized calcium-binding adapter protein 1 (Iba1) was used to estimate activated microglial cells around the injured site. RESULTS: This study shows that deleting Hpx or HO2 aggravated SFHb-induced brain injury. Compared to wild-type littermates, larger lesion volumes were observed in Hpx(−/−) and HO2(−/−) mice, which also bore more degenerating neurons in the peri-lesion area 24 h post-injection. Interestingly, fewer Iba1-positive microglial cells were detected in the peri-lesion area of Hpx(−/−) and HO2(−/−) mice, which was associated with markedly increased numbers of iron-positive microglial cells. Moreover, Iba1-positive microglial cells increased from 24 to 72 h post-injection, accompanied by improved neurologic deficits in Hpx(−/−) and HO2(−/−) mice. These results suggest that Iba1-positive microglial cells can engulf extracellular SFHb and provide protective effects after ICH. We then treated cultured primary microglial cells with SFHb at low and high concentrations. The results show that microglial cells actively take up extracellular SFHb. Of interest, we also found that iron overload in microglia significantly reduces the Iba1 expression level and consequently inhibits microglial phagocytosis. CONCLUSIONS: This study suggests that microglial cells contribute to hemoglobin-heme clearance after ICH; however, the resultant iron overload in microglia appears to decrease Iba1 expression and further inhibit microglial phagocytosis.

    Reliable and Energy-Efficient Mixed-Criticality Real-Time On-Chip Systems

    Multi- and many-core embedded systems are increasingly becoming the target for many applications that require high performance under varying conditions. A resulting challenge is the control and reliable operation of such complex multiprocessing architectures under changes, e.g., high temperature and degradation. In mixed-criticality systems, where many applications with varying criticalities are consolidated on the same execution platform, fundamental isolation requirements to guarantee non-interference of critical functions are crucially important. While Networks-on-Chip (NoCs) are the prevalent solution for providing scalable and efficient interconnects for multiprocessing architectures, their associated energy consumption has increased immensely. Specifically, hard real-time NoCs must exhibit limited energy consumption, as thermal runaway in such a core shared resource jeopardizes the guarantees of the whole system. Thus, dynamic energy management of NoCs, as opposed to the static solutions of related work, is highly necessary to save energy and decrease temperature while preserving essential temporal requirements. In this thesis, we introduce a centralized management scheme to provide energy-aware NoCs for hard real-time systems. The design relies on an energy control network, developed on top of an existing switch arbitration network, to allow isolation between energy optimization and data transmission. The energy control layer includes local units called Power-Aware NoC controllers that dynamically optimize NoC energy depending on the global state and the applications' temporal requirements. Furthermore, to adapt to abnormal situations that might occur in the system due to degradation, we extend the concept of NoC energy control to the entire system scope. That is, online resource management employing hierarchical control layers to treat system degradation (imminent core failures) is supported. The mechanism applies system reconfiguration that involves workload migration. For mixed-criticality systems, it allows flexible boundaries between safety-critical and non-critical subsystems so that the reconfiguration can be applied safely, preserving fundamental safety requirements and temporal predictability. Simulation- and formal-analysis-based experiments on various realistic use cases and benchmarks show significant improvements in NoC energy savings and in the treatment of system degradation for mixed-criticality systems, improving dependability over the status quo.
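
    A highly simplified sketch of the centralized control loop described above (hypothetical interfaces and numbers; the thesis's Power-Aware NoC controllers are not reproduced here, and the use of per-router voltage/frequency levels is an assumption for illustration): each router reports its load, and the controller picks the lowest operating level whose slowdown still keeps the estimated worst-case latency within the hard real-time budget.

    def plan_dvfs_levels(router_loads, latency_budget, base_latency, levels=(1.0, 0.8, 0.6)):
        """Pick, per router, the lowest DVFS level whose slowdown still keeps the
        estimated worst-case traversal latency within the hard real-time budget."""
        plan = {}
        for router, load in router_loads.items():
            for level in sorted(levels):          # try the most aggressive saving first
                est_latency = base_latency[router] * (1.0 / level) * (1.0 + load)
                if est_latency <= latency_budget:
                    plan[router] = level
                    break
            else:
                plan[router] = max(levels)        # no saving possible, run at full speed
        return plan

    plan = plan_dvfs_levels(
        router_loads={"r0": 0.2, "r1": 0.7},
        latency_budget=12.0,
        base_latency={"r0": 4.0, "r1": 5.0},
    )
    print(plan)   # -> {'r0': 0.6, 'r1': 0.8}

    The key design point the sketch mirrors is the separation of concerns: energy decisions are taken out of band, while the data network keeps its timing guarantees.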

    A method of hardware qualification for flight by analyses, similarity and integrated testing

    The results are described of a study on four pieces of flight hardware from the Saturn IU and S-4B stages to determine whether the objectives of the formal qualification tests on that hardware could have been obtained within that program by methods other than performing the qualification tests. These methods include qualification by analyses, similarity, and integrated testing, i.e., distribution of the objectives among the other tests in the program. It was found that it is feasible to delete the requirements for formal qualification testing, provided that this is done early enough in the program to allow adequate planning for accomplishing the qualification objectives by other means. Additionally, a scorekeeping system was defined that is simple, straightforward, and easy to implement. This scorekeeping system provides complete visibility of equivalent qualification status at any point during the program. A set of ground rules for implementing this study was established as a result of findings on the specific items of hardware studied.
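
    The scorekeeping idea lends itself to a very small data-structure sketch (entirely illustrative; the objectives, part numbers, and class names below are hypothetical and nothing in the study prescribes a software implementation): each qualification objective is mapped to the analysis, similarity argument, or integrated test that covers it, so the equivalent qualification status is visible at any point in the program.

    from collections import defaultdict

    class QualScoreboard:
        """Track which qualification objectives are covered, and by what means."""

        def __init__(self, objectives):
            self.objectives = set(objectives)
            self.coverage = defaultdict(list)   # objective -> list of (means, reference)

        def record(self, objective, means, reference):
            self.coverage[objective].append((means, reference))

        def status(self):
            covered = {o for o in self.objectives if self.coverage[o]}
            return {"covered": sorted(covered),
                    "open": sorted(self.objectives - covered)}

    board = QualScoreboard(["vibration", "thermal-vacuum", "EMI"])
    board.record("vibration", "similarity", "previously qualified unit (illustrative)")
    board.record("thermal-vacuum", "integrated test", "stage acceptance test (illustrative)")
    print(board.status())   # "EMI" still open -> not yet equivalently qualified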