
    Architecting Efficient Data Centers.

    Data center power consumption has become a key constraint in continuing to scale Internet services. As our society’s reliance on “the Cloud” continues to grow, companies require an ever-increasing amount of computational capacity to support their customers. Massive warehouse-scale data centers have emerged, requiring 30 MW or more of total power capacity. Over the lifetime of a typical high-scale data center, power-related costs make up 50% of the total cost of ownership (TCO). Furthermore, the aggregate effect of data center power consumption across the country cannot be ignored: in total, data center energy usage has reached approximately 2% of aggregate consumption in the United States and continues to grow. This thesis addresses the need to increase computational efficiency in the face of this growing problem. It proposes a new class of power management techniques: coordinated full-system idle low-power modes that increase the energy proportionality of modern servers. First, we introduce the PowerNap server architecture, a coordinated full-system idle low-power mode which transitions in and out of an ultra-low-power nap state to save power during brief idle periods. While effective for uniprocessor systems, PowerNap relies on full-system idleness, and we show that such idleness disappears as the number of cores per processor continues to increase. We expose this problem in a case study of Google Web search, in which we demonstrate that coordinated full-system active power modes are necessary to reach energy proportionality and that PowerNap is ineffective because of a lack of idleness. To recover full-system idleness, we introduce DreamWeaver, architectural support for deep sleep. DreamWeaver allows a server to exchange latency for full-system idleness, making PowerNap-enabled servers effective and providing a better latency-power savings tradeoff than existing approaches. Finally, this thesis investigates workloads which achieve efficiency through methodical cluster provisioning techniques. Using the popular memcached workload, this thesis provides examples of provisioning clusters for cost-efficiency given latency, throughput, and data set size targets.
    Ph.D. Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/91499/1/meisner_1.pd
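
    As a rough illustration of the idleness argument above (not the thesis's own model), the back-of-the-envelope sketch below estimates how much energy a PowerNap-style full-system nap mode could save as a function of idle-period length. The active power, nap power, and transition time are assumed, illustrative values, not measurements from the thesis.

    # Minimal sketch of a PowerNap-style savings estimate (illustrative only).
    # The power levels and transition time below are placeholder assumptions.

    def powernap_energy_savings(mean_busy_s, mean_idle_s, transition_s,
                                p_active_w=300.0, p_nap_w=10.0):
        """Return the fraction of energy saved by napping during idle periods.

        Each idle period pays one wake transition at full power; idle time
        shorter than the transition cannot be napped at all.
        """
        nappable = max(mean_idle_s - transition_s, 0.0)
        # Energy per busy+idle cycle without napping (always at active power).
        e_base = p_active_w * (mean_busy_s + mean_idle_s)
        # With napping: active while busy and during the transition, napping otherwise.
        e_nap = p_active_w * (mean_busy_s + (mean_idle_s - nappable)) + p_nap_w * nappable
        return 1.0 - e_nap / e_base

    # Short idle periods (many cores, little full-system idleness) save little;
    # longer idle periods save much more.
    print(powernap_energy_savings(mean_busy_s=0.010, mean_idle_s=0.002, transition_s=0.001))
    print(powernap_energy_savings(mean_busy_s=0.010, mean_idle_s=0.100, transition_s=0.001))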

    The development of a power system simulator using multiple microprocessors


    A PC-based data acquisition system for sub-atomic physics measurements

    Modern particle physics measurements are heavily dependent upon automated data acquisition systems (DAQ) to collect and process experiment-generated information. One research group from the University of Saskatchewan utilizes a DAQ known as the Lucid data acquisition and analysis system. This thesis examines the project undertaken to upgrade the hardware and software components of Lucid. To establish the effectiveness of the system upgrades, several performance metrics were obtained, including the system's dead time and input/output bandwidth. Hardware upgrades to Lucid consisted of replacing its aging digitization equipment with modern, faster-converting Versa-Module Eurobus (VME) technology and replacing the instrumentation processing platform with common PC hardware. The new processor platform is coupled to the instrumentation modules via a fiber-optic bridging device, the sis1100/3100 from Struck Innovative Systems. The software systems of Lucid were also modified to follow suit with the new hardware. Originally constructed to utilize a proprietary real-time operating system, the data acquisition application was ported to run under the freely available Real-Time Executive for Multiprocessor Systems (RTEMS). The device driver software provided with the sis1100/3100 interface also had to be ported for use under the RTEMS-based system. Performance measurements of the upgraded DAQ indicate that the dead time has been reduced from the order of milliseconds to the order of several tens of microseconds. This increased capability means that Lucid's users may acquire significantly more data in a shorter period of time, thereby decreasing both the statistical uncertainties and the data collection duration associated with a given experiment.
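
    To see why the dead-time reduction matters, the sketch below applies the standard non-paralyzable dead-time model, in which each accepted event blocks the system for a fixed interval. The model and the 10 kHz event rate are textbook assumptions for illustration, not figures from the thesis.

    # Standard non-paralyzable dead-time model (illustrative; not Lucid's
    # actual acquisition logic, and the event rate is an assumption).

    def recorded_fraction(true_rate_hz, dead_time_s):
        """Fraction of events recorded when each accepted event blocks
        the system for dead_time_s (non-paralyzable model)."""
        return 1.0 / (1.0 + true_rate_hz * dead_time_s)

    rate = 10_000.0  # assumed event rate, 10 kHz
    for tau in (1e-3, 50e-6):  # millisecond-scale vs. tens-of-microseconds dead time
        print(f"dead time {tau*1e6:7.1f} us -> {recorded_fraction(rate, tau):.1%} of events kept")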

    RICIS Symposium 1992: Mission and Safety Critical Systems Research and Applications

    This conference deals with computer systems that control systems whose failure to operate correctly could produce the loss of life and/or property: mission- and safety-critical systems. Topics covered are: the work of standards groups, computer systems design and architecture, software reliability, process control systems, knowledge-based expert systems, and computer and telecommunication protocols.

    Aligning Business Process Quality and Information System Quality

    Business processes and information systems mutually affect each other in non-trivial ways. Frequently, the business process design and the information system design are not well aligned. This means that business processes are designed without taking the information system impact into account, and vice versa. Missing alignment at design time often results in quality problems at runtime, such as long response times of information systems, long process execution times, overloaded information systems, or interrupted processes. Aligning business process quality and information system quality at design time requires solving the following problems (P). Business process quality and information system quality have to be characterized. P1: In contrast to information system quality, which is specified in the ISO/IEC 9126 standard, for example, there is no common and comprehensive understanding of business process quality. P2: Beyond that, current business process modeling notations do not aim to represent quality aspects. The impact of a business process on the quality of an information system, and vice versa, is unknown at design time. P3: The mutual impact between business processes and information systems must be predicted at design time. In this thesis, the Business Process Quality Reference-Model (BPQRM), a quality model for business processes, is introduced. The model allows for a comprehensive characterization of business process quality (P1). The BPQRM is applied successfully in a case study to identify potential for process quality improvement in practice. Based on the BPQRM, an existing process modeling notation is extended by model elements to represent quality aspects (P2). Simulation is a powerful means to predict the impact of a business process on the quality of an information system, and vice versa, at design time. This thesis proposes two simulation approaches to predict the mutual impact between business processes and information systems in terms of performance (P3). The approach Business IT Impact Simulation (BIIS) defines interfaces between the business process simulation and the information system simulation; performance-relevant information is exchanged via these interfaces between both simulations. When business process simulation and information system simulation are used in isolation, workload burstiness is not adequately reflected. This is especially true for occasional, volatile peak loads. Workload burstiness can significantly affect the performance of business processes and information systems. The approach Integrated Business IT Impact Simulation (IntBIIS), which integrates business processes and information systems in a single simulation, allows workload burstiness to be reflected correctly. The simulation approaches support the comparison of design alternatives and the verification of a given design against requirements. A case study confirms the feasibility of the approach in practice and its acceptance from the practitioners’ point of view.
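
    The toy single-server simulation below illustrates the burstiness point made above: at the same mean arrival rate, bursty arrivals produce markedly longer response times than steady arrivals. It is a generic illustration only, not the BIIS or IntBIIS approaches, and all numbers are assumed.

    # Toy FIFO single-server simulation comparing steady vs. bursty arrivals
    # at the same mean rate (illustrative only; not BIIS/IntBIIS).
    import random

    def simulate(arrivals, service_s=0.8):
        """FIFO single server with fixed service time; returns mean response time."""
        free_at, total = 0.0, 0.0
        for t in arrivals:
            start = max(t, free_at)
            free_at = start + service_s
            total += free_at - t
        return total / len(arrivals)

    random.seed(1)
    n, mean_gap = 10_000, 1.0
    # Steady load: exponential inter-arrival times (Poisson process).
    steady, t = [], 0.0
    for _ in range(n):
        t += random.expovariate(1.0 / mean_gap)
        steady.append(t)
    # Bursty load: same mean rate, but arrivals come in tight bursts of 10.
    bursty, t = [], 0.0
    for i in range(n):
        t += 10 * mean_gap if i % 10 == 0 else 0.01
        bursty.append(t)

    print("steady mean response:", round(simulate(steady), 2))
    print("bursty mean response:", round(simulate(bursty), 2))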

    Reusable Reentry Satellite (RRS) system design study

    The Reusable Reentry Satellite (RRS) is intended to provide investigators in several biological disciplines with a relatively inexpensive method to access space for up to 60 days with eventual recovery on Earth. The RRS will permit totally intact, relatively soft recovery of the vehicle, system refurbishment, and reflight with new and varied payloads. The RRS is to be capable of three reflights per year over a 10-year program lifetime. The RRS vehicle will have a large and readily accessible volume near the vehicle center of gravity for the Payload Module (PM) containing the experiment hardware. The vehicle is configured to permit the experimenter late access to the PM prior to launch and rapid access following recovery. The RRS will operate in one of two modes: (1) as a free-flying spacecraft in orbit, allowed to drift in attitude to provide an acceleration environment of less than 10^-5 g (the acceleration environment during orbital trim maneuvers will be less than 10^-3 g); and (2) as an artificial gravity system which spins at controlled rates to provide an artificial gravity of up to 1.5 Earth g. The RRS system will be designed to be rugged, easily maintained, and economically refurbishable for the next flight. Some systems may be designed to be replaced rather than refurbished, if cost effective and capable of meeting the specified turnaround time. The minimum time between recovery and reflight will be approximately 60 days. The PMs will be designed to be relatively autonomous, with experiments that require few commands and limited telemetry. Mass data storage will be accommodated in the PM. The hardware development and implementation phase is currently expected to start in 1991, with a first launch in late 1993.
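
    The 1.5 g artificial-gravity figure implies a spin rate that depends on the spin radius, which is not given in this summary. The short calculation below uses the standard relation a = omega^2 * r with assumed radii purely for illustration.

    # Spin rate needed for a given artificial-gravity level, a = omega^2 * r.
    # The spin radii below are assumptions; the RRS study does not specify one here.
    import math

    G0 = 9.81  # m/s^2, one Earth g

    def spin_rpm(g_level, radius_m):
        omega = math.sqrt(g_level * G0 / radius_m)  # angular rate in rad/s
        return omega * 60.0 / (2.0 * math.pi)

    for r in (1.0, 2.0):  # assumed spin radii in meters
        print(f"radius {r} m -> {spin_rpm(1.5, r):.1f} rpm for 1.5 g")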

    An Architectural Framework for Performance Analysis: Supporting the Design, Configuration, and Control of DIS/HLA Simulations

    Technology advances are providing greater capabilities for most distributed computing environments. However, the advances in capabilities are paralleled by progressively increasing amounts of system complexity. In many instances, this complexity can lead to a lack of understanding regarding bottlenecks in the run-time performance of distributed applications. This is especially true in the domain of distributed simulations, where a myriad of enabling technologies are used as building blocks to provide large-scale, geographically dispersed, dynamic virtual worlds. Persons responsible for the design, configuration, and control of distributed simulations need to understand the impact of decisions made regarding the allocation and use of the logical and physical resources that comprise a distributed simulation environment, and how those decisions affect run-time performance. Distributed Interactive Simulation (DIS) and High Level Architecture (HLA) simulation applications historically provide some of the most demanding distributed computing environments in terms of performance, and as such have a justified need for performance information sufficient to support decision-makers trying to improve system behavior. This research addresses two fundamental questions: (1) Is there an analysis framework suitable for characterizing DIS and HLA simulation performance? (2) What kind of mechanism can be used to adequately monitor, measure, and collect performance data to support different performance analysis objectives for DIS and HLA simulations? This thesis presents a unified, architectural framework for DIS and HLA simulations, provides details on a performance monitoring system, and shows its effectiveness through a series of use cases that include practical applications of the framework to support real-world U.S. Department of Defense (DoD) programs. The thesis also discusses the robustness of the constructed framework and its applicability to performance analysis of more general distributed computing applications.
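
    The abstract does not detail the monitoring mechanism, but the kind of data it implies can be pictured with the minimal, hypothetical collector below: sender timestamps carried with state updates are used to derive per-entity update latency and counts. The class and its interface are illustrative assumptions, not the thesis's monitoring system.

    # Hypothetical, minimal performance-data collector for a distributed
    # simulation (illustrative; not the monitoring system from the thesis).
    import time
    from collections import defaultdict

    class PerfMonitor:
        def __init__(self):
            self.latencies = defaultdict(list)   # entity id -> observed latencies (s)
            self.counts = defaultdict(int)       # entity id -> number of updates

        def record_update(self, entity_id, sent_timestamp):
            """Call when a state update for entity_id is received; the sender's
            timestamp is assumed to travel with the update."""
            self.latencies[entity_id].append(time.time() - sent_timestamp)
            self.counts[entity_id] += 1

        def report(self):
            for eid, lats in self.latencies.items():
                print(f"{eid}: {self.counts[eid]} updates, "
                      f"mean latency {sum(lats)/len(lats)*1000:.2f} ms")

    # Usage: feed it the timestamps carried in received updates.
    mon = PerfMonitor()
    mon.record_update("tank-42", time.time() - 0.015)  # simulated 15 ms old update
    mon.report()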

    High performance network function virtualization for user-oriented services

    The Network Function Virtualization (NFV) paradigm proposes to transform the network functions that today run on dedicated and often closed appliances (e.g., firewall, WAN accelerator) into pure software images, called Virtual Network Functions (VNFs), which can be consolidated and executed on high-volume standard servers. In this context, this dissertation focuses on the possibility of enabling each single end user (and not only network operators) to set up network services by means of NFV, allowing them to customize the set of services that are active on their Internet connection. This goal mainly requires addressing flexibility and performance issues. Regarding the former, it is important: (i) to support services including both network (e.g., firewall) and cloud (e.g., storage server) applications; (ii) to allow the user to define the service with an intuitive and high-level abstraction, hiding infrastructure-layer details. With respect to performance, multiple software-based services operating on the user's traffic should not introduce penalties in the user's Internet experience. This dissertation solves the above issues by proposing a number of improvements in the context of Network Function Virtualization, both in terms of high-level models and architectures to define and instantiate network services, and in terms of mechanisms to efficiently interconnect VNFs. Experimental results demonstrate that the goal of allowing end users to deploy services operating on their own traffic is feasible without impacting the Internet experience.
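
    One way to picture the kind of high-level, user-facing abstraction described above is a small declarative service description that an orchestrator would map onto VNF instances and traffic-steering rules. The structure and names below are illustrative assumptions, not the dissertation's actual model.

    # Illustrative per-user service description (hypothetical; not the
    # dissertation's abstraction): an ordered chain of network functions
    # plus cloud applications attached to one user's Internet connection.
    user_service = {
        "user": "alice",
        "chain": [                      # traffic traverses these in order
            {"function": "firewall", "config": {"block": ["*.ads.example"]}},
            {"function": "parental-control", "config": {"profile": "kids"}},
        ],
        "cloud_apps": [
            {"function": "personal-storage", "config": {"quota_gb": 50}},
        ],
    }

    def describe(service):
        chain = " -> ".join(step["function"] for step in service["chain"])
        print(f"{service['user']}: internet -> {chain} -> home network")
        for app in service["cloud_apps"]:
            print(f"  plus cloud app: {app['function']}")

    describe(user_service)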

    Simulation modelling software approaches to manufacturing problems

    Increased competition in many industries has resulted in a greater emphasis on developing and using advanced manufacturing systems to improve productivity and reduce costs. The complexity and dynamic behaviour of such systems make simulation modelling one of the most popular methods to facilitate the design and assess operating strategies of these systems. The growing need for the use of simulation is reflected by a growth in the number of simulation languages and data-driven simulators in the software market. This thesis investigates which characteristics typical manufacturing simulators possess, and how user requirements can be better fulfilled. For the purpose of software evaluation, a case study has been carried out on a real manufacturing system. Several simulation models of an automated system for electrostatic powder coating have been developed using different simulators. In addition to the evaluation of these simulators, a comprehensive evaluation framework has been developed to facilitate selection of simulation software for modelling manufacturing systems. Different hierarchies of evaluation criteria have been established for different software purposes. In particular, the criteria that have to be satisfied for users in education differ from those for users in industry. A survey has also been conducted involving a number of users of software for manufacturing simulation. The purpose of the survey was to investigate users' opinions about simulation software, and the features they would like to see incorporated in simulation software. A methodology for simulation software selection is also derived. It consists of guidelines related to the actions to be taken and factors to be considered during the evaluation and selection of simulation software. On the basis of all the findings, proposals are made on how manufacturing simulators can be improved, both for use in education and in industry. These software improvements should result in a reduction in the amount of time and effort needed for simulation model development, and therefore make simulation more beneficial.
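
    The idea that different purposes (education versus industry) lead to different criteria hierarchies can be illustrated with a simple weighted-sum scoring sketch. The criteria, weights, and scores below are invented for illustration and are not taken from the thesis's evaluation framework.

    # Weighted-sum scoring sketch for simulator selection (all numbers invented).
    criteria_weights = {
        "education": {"ease_of_learning": 0.5, "modelling_power": 0.2, "cost": 0.3},
        "industry":  {"ease_of_learning": 0.1, "modelling_power": 0.6, "cost": 0.3},
    }
    # Scores on a 1-10 scale for two hypothetical simulators.
    scores = {
        "Simulator A": {"ease_of_learning": 9, "modelling_power": 5, "cost": 8},
        "Simulator B": {"ease_of_learning": 4, "modelling_power": 9, "cost": 6},
    }

    for purpose, weights in criteria_weights.items():
        ranking = sorted(scores, key=lambda s: -sum(weights[c] * scores[s][c] for c in weights))
        print(purpose, "ranking:", ranking)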