
    Adaptive runtime techniques for power and resource management on multi-core systems

    Energy-related costs are among the major contributors to the total cost of ownership of data centers and high-performance computing (HPC) clusters. As a result, future data centers must be energy-efficient to meet the continuously increasing computational demand. Constraining the power consumption of the servers is a widely used approach for managing energy costs and complying with power delivery limitations. In tandem, virtualization has become common practice, as it reduces hardware and power requirements by enabling consolidation of multiple applications onto a smaller set of physical resources. However, administration and management of data center resources have become more complex due to the growing number of virtualized servers installed in data centers. Therefore, designing autonomous and adaptive energy-efficiency approaches is crucial to achieving sustainable and cost-efficient operation in data centers. Many modern data centers running enterprise workloads successfully implement energy-efficiency approaches today. However, the nature of multi-threaded applications, which are becoming more common in all computing domains, brings additional design and management challenges. Tackling these challenges requires a deeper understanding of the interactions between the applications and the underlying hardware nodes. Although cluster-level management techniques bring significant benefits, node-level techniques provide more visibility into application characteristics, which can then be used to further improve the overall energy efficiency of data centers. This thesis proposes adaptive runtime power and resource management techniques for multi-core systems. It demonstrates that taking multi-threaded workload characteristics into account during management significantly improves the energy efficiency of the server nodes, which are the basic building blocks of data centers. The key distinguishing features of this work are as follows: we implement the proposed runtime techniques on state-of-the-art commodity multi-core servers and show that their energy efficiency can be significantly improved by (1) taking multi-threaded, application-specific characteristics into account while making resource allocation decisions, (2) accurately tracking dynamically changing power constraints using low-overhead, application-aware runtime techniques, and (3) coordinating dynamic adaptive decisions at various layers of the computing stack, specifically at the system and application levels. Our results show that efficient resource distribution under power constraints yields energy savings of up to 24% compared to existing approaches, along with the ability to meet power constraints 98% of the time for a diverse set of multi-threaded applications.
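
One of the core mechanisms the abstract describes is tracking a dynamically changing power constraint with a low-overhead runtime loop. The following is a minimal Python sketch of such a feedback controller, offered purely as illustration: the simulated read_power_watts stands in for real power telemetry (e.g. hardware energy counters), the discrete DVFS levels are invented for the example, and the thesis's actual techniques are application-aware and considerably more sophisticated.

```python
import random

# Illustrative stand-in for hardware power telemetry; a real controller
# would read energy counters and drive frequency-scaling interfaces.
LEVELS = 10

def read_power_watts(level: int) -> float:
    """Simulated power draw: roughly proportional to the DVFS level."""
    return 20.0 + 8.0 * level + random.uniform(-2.0, 2.0)

def track_power_cap(cap_watts: float, steps: int = 50) -> int:
    """Feedback loop: throttle down when over the cap, step back up
    when there is comfortable headroom. Returns the final DVFS level."""
    level = LEVELS - 1                              # start at full performance
    for _ in range(steps):
        power = read_power_watts(level)
        if power > cap_watts and level > 0:
            level -= 1                              # over budget: one step down
        elif power < 0.9 * cap_watts and level < LEVELS - 1:
            level += 1                              # headroom: one step back up
    return level

print(track_power_cap(cap_watts=60.0))
```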

    Cyber Defense Remediation in Energy Delivery Systems

    The integration of Information Technology (IT) and Operational Technology (OT) in Cyber-Physical Systems (CPS) has resulted in increased efficiency and facilitated real-time information acquisition, processing, and decision making. However, the increase in automation technology and the use of the internet for connecting, remotely controlling, and supervising systems and facilities has also increased the likelihood of cybersecurity threats that can impact the safety of humans and property. There is a need to assess cybersecurity risks in the power grid, nuclear plants, chemical factories, etc., to gain insight into the likelihood of safety hazards. Quantitative cybersecurity risk assessment leads to informed cyber defense remediation and ensures the presence of a mitigation plan to prevent safety hazards. In this dissertation, using Energy Delivery Systems (EDS) as a use case to contextualize a CPS, we address key research challenges in managing cyber risk for cyber defense remediation. First, we developed a platform for modeling and analyzing the effect of cyber threats and random system faults on the safety of EDS that could lead to catastrophic damage. We developed a data-driven attack-graph and fault-graph based model to characterize the exploitability and impact of threats in EDS. We created an operational impact assessment to quantify the damages. Finally, we developed a strategic response decision capability that presents optimal mitigation actions and policies balancing the tradeoff between operational resilience (tactical risk) and strategic risk. Next, we addressed the challenge of managing tactical risk through a prioritized cyber defense remediation plan, which is critical for effective risk management in EDS. Due to the complexity of EDS in terms of the heterogeneous blend of IT, OT, and Industrial Control Systems (ICS), their scale, and their critical process tasks, prioritized remediation should be applied gradually to protect critical assets. We proposed a methodology for prioritizing cyber risk remediation plans by detecting and evaluating critical EDS node paths. We evaluated critical node characteristics based on architectural position, centrality measures derived from connectivity and the frequency of network traffic, and the amount of electrical power a node controls. The model also examines the relationship between cost models for allocating budget to remove vulnerabilities on critical nodes and their impact on gradual readiness. The proposed cost models were empirically validated by computing node criticality in an existing ICS network test-bed. Two cost models were examined; although they differed, we found no correlation between the type of cost model and either the most damaging attack path or critical node readiness. Finally, we proposed a time-varying dynamical model for cyber defense remediation in EDS. We use a stochastic evolutionary game model to simulate the dynamic adversarial interaction of cyber attack and defense, and we leverage the Logit Quantal Response Dynamics (LQRD) model to quantify real-world players' cognitive differences. We proposed an optimal decision-making approach that calculates the stable evolutionary equilibrium and balances defense costs and benefits. Case studies on EDS indicate that the proposed method can help the defender predict possible attack actions, select the optimal defense strategy over time, and gain the maximum defense payoff.
We also leveraged software-defined networking (SDN) in EDS for dynamic cyber defense remediation. We presented an approach to aid the dynamic selection of security controls in an SDN-enabled EDS and achieve a tradeoff between providing security and Quality of Service (QoS). We modeled security costs based on end-to-end packet delay and throughput. We proposed a non-dominated-sorting-based multi-objective optimization framework that can be implemented within an SDN controller to address the joint problem of optimizing between security and QoS parameters with a time complexity of O(MN²), where M is the number of objective functions and N is the population size per generation. We presented simulation results that illustrate how data availability and data integrity can be achieved while maintaining QoS constraints.
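
The O(MN²) complexity quoted above matches the fast non-dominated sort popularized by NSGA-II; the following compact Python sketch of that sort is offered as an illustration (an assumption about the variant used, not the dissertation's exact implementation). Each solution is a tuple of M objective values to be minimized, e.g. (packet delay, security cost).

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective (all minimized)
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def fast_nondominated_sort(population):
    """Partition solutions into Pareto fronts in O(M * N^2) comparisons."""
    n = len(population)
    dominated_by = [[] for _ in range(n)]   # solutions that i dominates
    counts = [0] * n                        # how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(population[i], population[j]):
                dominated_by[i].append(j)
            elif dominates(population[j], population[i]):
                counts[i] += 1
        if counts[i] == 0:
            fronts[0].append(i)             # non-dominated: first front
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in dominated_by[i]:
                counts[j] -= 1
                if counts[j] == 0:
                    nxt.append(j)           # only dominated by earlier fronts
        fronts.append(nxt)
    return fronts[:-1]

# Example: (packet delay, security cost) pairs for candidate control sets
print(fast_nondominated_sort([(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]))
```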

    High throughput image compression and decompression on GPUs

    This work investigates possibilities to create a high-throughput, GPU-friendly, intra-only, Wavelet-based video compression algorithm optimized for visually lossless applications. Addressing the key observation that JPEG 2000's entropy coder is a bottleneck and might be overly complex for high-bit-rate scenarios, various algorithmic alterations are proposed. First, JPEG 2000's Selective Arithmetic Coding mode is realized on the GPU, but the gains in throughput are shown to be limited. Instead, two independent alterations not compliant with the standard are proposed, which (1) give up the concept of intra-bit-plane truncation points so that each bit plane is processed in a single pass (single-pass mode) and (2) introduce a true raw-coding mode that is fully parallelizable per sample and does not require any context modeling. Next, an alternative block coder from the literature, the Bitplane Coder with Parallel Coefficient Processing (BPC-PaCo), is evaluated. Since it trades signal adaptiveness for increased parallelism, it is shown here how a stationary probability model averaged over a set of test sequences yields competitive compression efficiency. A combination of BPC-PaCo with the single-pass mode is proposed and shown to increase the speedup with respect to the original JPEG 2000 entropy coder from 2.15x (BPC-PaCo with two passes) to 2.6x (proposed BPC-PaCo with single-pass mode) at the marginal cost of increasing the PSNR penalty by 0.3 dB to at most 1 dB. Furthermore, a parallel algorithm is presented that determines the optimal code-block bit-stream truncation points (given an available bit-rate budget) and builds the entire code stream on the GPU, reducing the amount of data that has to be transferred back into host memory to a minimum. A theoretical runtime model is formulated that allows the runtime of a kernel on one GPU to be predicted from benchmarking results on another. Lastly, the first JPEG XS GPU decoder is presented and evaluated. JPEG XS was designed to be a low-complexity codec and, for the first time, explicitly demanded GPU-friendliness in its call for proposals. At bit rates above 1 bpp, the decoder is around 2x faster than the original JPEG 2000 and 1.5x faster than JPEG 2000 with the fastest evaluated entropy coder (BPC-PaCo with single-pass mode). With a GeForce GTX 1080, a decoding throughput of around 200 fps is achieved for a UHD 4:4:4 sequence.
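
As background for the bitplane-oriented coders discussed above, here is a minimal Python sketch of how quantized wavelet coefficients decompose into a sign plane and magnitude bitplanes. It is illustrative only: the real JPEG 2000 and BPC-PaCo coders add context modeling and arithmetic coding on top, but the per-sample independence visible here is what a raw-coding mode exploits for GPU parallelism.

```python
def to_bitplanes(coeffs, num_planes=8):
    """Split signed integer coefficients into a sign plane and magnitude
    bitplanes, most significant plane first. Each plane can be processed
    independently per sample, which makes raw coding of whole bitplanes
    trivially parallelizable on a GPU."""
    signs = [1 if c < 0 else 0 for c in coeffs]
    mags = [abs(c) for c in coeffs]
    planes = []
    for p in range(num_planes - 1, -1, -1):     # MSB plane first
        planes.append([(m >> p) & 1 for m in mags])
    return signs, planes

signs, planes = to_bitplanes([5, -3, 0, 12], num_planes=4)
print(signs)    # [0, 1, 0, 0]
print(planes)   # [[0, 0, 0, 1], [1, 0, 0, 1], [0, 1, 0, 0], [1, 1, 0, 0]]
```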

    Airborne Wind Shear Detection and Warning Systems: First Combined Manufacturers' and Technologists' Conference

    The purpose of the meeting was to transfer significant, ongoing results gained during the first year of the joint NASA/FAA Airborne Wind Shear Program to industry and to pose problems of current concern to the combined group. It also provided a forum for manufacturers to review forward-looking technology concepts and for technologists to gain an understanding of FAA certification requirements and the problems encountered by manufacturers during the development of airborne equipment.

    A framework for economic analysis of network architectures

    This thesis first surveys and summarizes state-of-the-art studies from two research areas in Software Defined Networking (SDN) architecture: (i) control plane scalability and (ii) Quality of Service (QoS)-related problems. It also outlines the potential challenges and open problems that need to be addressed for more scalable SDN control planes and for better, more complete QoS capabilities in SDN networks. The thesis then presents a hierarchical SDN design along with an inter-AS QoS-guaranteed routing approach. This design addresses the scalability problems of the control plane and the privacy concerns of inter-AS QoS routing philosophies in SDN. After exploring the roots of control plane scalability problems in SDN, the thesis proposes a metric to quantitatively evaluate control plane scalability in SDN. Later, the thesis presents a general framework for economic analysis of network architectures and designs. To this end, it defines and utilizes two metrics, Unit Service Cost Scalability and Cost-to-Service, to evaluate how SDN architecture performs compared to MPLS architecture in terms of the unit cost of a service and the cost of introducing a new service, along with mathematical models to calculate the Capital Expenditures (CAPEX) and Operational Expenditures (OPEX) of a network. Moreover, the thesis studies the problem of optimal final pricing for services by proposing an optimal pricing scheme for a service request with QoS in an SDN environment, aiming to maximize the benefits of both service providers and customers. Finally, the thesis investigates how programmable network architectures, i.e. SDN, affect network economics compared to traditional network architectures, i.e. MPLS, in the case of failures, along with exploring the economic impact of failures in different SDN control plane models.
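
As an illustration of the kind of metric involved, a hedged Python sketch follows. It assumes unit service cost takes the simple form of total expenditure over the study period divided by the number of service units delivered; the thesis defines its own CAPEX/OPEX models, and the figures below are invented purely for the example.

```python
def unit_service_cost(capex: float, opex_per_year: float,
                      years: int, services_delivered: int) -> float:
    """Illustrative unit-cost metric: total cost of ownership over the
    study period divided by the number of service units delivered.
    A scalability analysis then asks how this quantity behaves as
    services_delivered grows under each architecture."""
    total_cost = capex + opex_per_year * years
    return total_cost / services_delivered

# Toy comparison in the spirit of the SDN-vs-MPLS analysis (numbers invented):
sdn = unit_service_cost(capex=1.0e6, opex_per_year=2.0e5, years=5, services_delivered=50_000)
mpls = unit_service_cost(capex=1.5e6, opex_per_year=3.0e5, years=5, services_delivered=50_000)
print(f"SDN: {sdn:.2f} per service, MPLS: {mpls:.2f} per service")
```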

    Neural and behavioral bases of innate behaviors

    Recently, ethological studies of animal behavior have uncovered its complexity, while neuroscientific work has begun unraveling the neural bases of behavior. Improvements in the algorithmic understanding of behavior and neural function have contributed to recent breakthroughs in robotics and artificial intelligence systems. Yet animals' decision-making and motor control remain unequalled by human-engineered systems, and the continued investigation of the behavioral and neural bases of these abilities is crucial for understanding brain function and informing further technological developments. In my PhD work, I first investigate escape path selection in mice presented with threat, demonstrating how mice combine rapidly acquired spatial knowledge with an innate choice heuristic to inform decision-making. This strategy minimizes the requirement for trial-and-error learning and yields accurate decision-making by combining knowledge acquired on an evolutionary time scale with that acquired by the individual. Future work aimed at understanding how these sources of information are combined in the brain to inform decision-making may lead to more efficient artificial learning agents. Next, I studied goal-directed locomotion behavior, in which mice move rapidly through an environment to reach a goal location. Successful goal-directed locomotion requires substantial navigation and motor control skills and, additionally, sophisticated planning and control of movements while moving at high speed. Detailed behavioral quantification and comparison to a control-theoretic model demonstrated that mice do possess such planning skills, allowing them to execute rapid and efficient trajectories to a goal. Population-level extracellular recordings of neural activity during goal-directed locomotion were also used to begin uncovering the neural bases of planning during locomotion. Altogether, my work combined accurate quantification of animal movements with theoretical models of optimal behavior to understand behavior at a computational level, aiming to provide crucial information for future studies on the neural bases of innate behaviors and to aid the development of novel artificial learning systems.

    Distributed real-time physics for scalable and streamed games and simulation

    In this study, a solution for delivering scalable real-time physics simulations is proposed. Although high-performance computing simulations of physics-related problems do exist, these are not real-time and do not model the intricate real-time interactions of rigid bodies used for visual effect in video games (favouring accuracy over real-time performance). As such, this study presents the first approach to real-time delivery of scalable, commercial-grade, video-game-quality physics, termed Aura Projection (AP). This approach takes the physics engine out of the player's machine and deploys it across standard cloud-based infrastructures. The simulation world is divided into regions that are then allocated to multiple servers, and a server maintains the physics for all simulated objects in its region. The contribution of this study is the ability to maintain a scalable simulation by allowing object interaction across region boundaries using predictive migration techniques. AP allows each object to project an aura that is used to determine object migration across servers, ensuring seamless physics interactions between objects. AP allows player interaction at any point in real time (influencing the simulation) in the same manner as any video game. This study measures and evaluates both the scalability of AP and the correctness of collisions within AP through experimentation and benchmarking. The experiments show that AP is a solution to scalable real-time physics by measuring computational workload as computational resources increase. AP also demonstrates that collisions between rigid bodies can be simulated correctly within a scalable real-time physics simulation, even when rigid bodies intersect server-region boundaries; this is demonstrated through comparison of a distributed AP simulation to a single, centralised simulation. We believe that AP is the first successful demonstration of scalable real-time physics in an academic setting.
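
A minimal sketch of the aura idea as described above: each object projects an aura (modelled here as a simple radius around its position along one partition axis), and whenever that aura overlaps a neighbouring server's region the object must also be known to that server, so collisions across the boundary resolve seamlessly. The 1-D grid partitioning and all names are illustrative assumptions, not the thesis's actual scheme.

```python
from dataclasses import dataclass

REGION_SIZE = 100.0   # illustrative: world divided into a 1-D strip of regions

@dataclass
class Body:
    x: float          # position along the partition axis
    aura: float       # aura radius: interaction range projected by the object

def home_region(body: Body) -> int:
    """Region (server) that owns and simulates the object."""
    return int(body.x // REGION_SIZE)

def regions_touched(body: Body) -> set[int]:
    """All regions the object's aura overlaps; any region beyond the home
    region must hold a replica so boundary collisions resolve seamlessly."""
    lo = int((body.x - body.aura) // REGION_SIZE)
    hi = int((body.x + body.aura) // REGION_SIZE)
    return set(range(lo, hi + 1))

b = Body(x=198.0, aura=5.0)
print(home_region(b))          # 1: owned by the second region's server
print(regions_touched(b))      # {1, 2}: aura crosses into the next region
```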

    A Phenomenological Examination of Virtual Game Developers' Experiences Using Jacob's Ladder Pre-Production Design Tactic

    Edutainment refers to curriculum and instruction designed with a clear educational purpose, including multi-faceted virtual learning game design. Tools such as the Jacob's Ladder pre-production design tactic have been developed to ensure that the voices of both engineers and educators are heard. However, it is unclear how development team members experience and perceive their collaborative work while designing a virtual game using such tactics. This phenomenological study examined the experiences of agile software team members using the Jacob's Ladder pre-production design tactic as an interdisciplinary collaboration tool while designing a virtual learning game. Seven design team members (3 educators and 4 engineers) participated in semi-structured interviews, and transcripts were analyzed via an inductive coding process that led to the development of key themes. Findings indicated that using the Jacob's Ladder design tactic influenced the experience of the team by keeping the team focused on common goals and learner needs, organizing the team's work, supporting interdisciplinary collaboration, and promoting shared understanding of the software platform's limitations. Individuals played various roles, appreciated diverse views, recognized prior experience and idea sharing, and felt the design tactic supported flexibility for interdisciplinary collaboration. By linking integration strategies to interdisciplinary collaboration, findings from this study may be used by organizational leaders to consider best practices in team building for virtual learning game design, which will further support the development of effective games and the growth of the edutainment industry.

    Detection of optical water quality parameters for eutrophic waters by high resolution remote sensing
