    Dynamic Scale-out Mechanisms for Partitioned Shared-Nothing Databases

    For a database system used in pay-per-use cloud environments, elastic scaling becomes an essential feature, allowing costs to be minimized while accommodating load fluctuations. One approach to scalability involves horizontal database partitioning and dynamic migration of partitions between servers. We define a scale-out operation as the combination of provisioning a new server followed by the migration of one or more partitions to the newly allocated server. In this thesis we study the efficiency of different implementations of the scale-out operation in the context of online transaction processing (OLTP) workloads. We designed and implemented three migration mechanisms featuring different strategies for data transfer. The first is based on SnowFlock, a modification of the Xen hypervisor, and uses on-demand block transfers for both server provisioning and partition migration. The second is implemented in a database management system (DBMS) and uses bulk transfers for partition migration, optimized for higher bandwidth utilization. The third is a conventional application that uses SQL commands to copy partitions between servers. We perform an experimental comparison of these scale-out mechanisms for disk-bound and CPU-bound configurations. When comparing the mechanisms, we analyze their impact on whole-system performance and on the experience of individual clients.
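
    A minimal, self-contained sketch (illustrative only, not the thesis's implementation) of the scale-out operation as the abstract defines it: provision a new server, then migrate one or more partitions to it. The Server class and function names are hypothetical stand-ins for a real shared-nothing cluster.

    ```python
    # Hypothetical sketch of the two-phase scale-out operation described above.
    from dataclasses import dataclass, field

    @dataclass
    class Server:
        name: str
        partitions: list = field(default_factory=list)

    def scale_out(cluster, donor, moved):
        """Provision a new server, then migrate the given partitions off the donor."""
        new_server = Server(name=f"server-{len(cluster)}")  # phase 1: provisioning
        cluster.append(new_server)
        for p in moved:
            donor.partitions.remove(p)       # phase 2: partition migration; the
            new_server.partitions.append(p)  # thesis compares three transfer
                                             # strategies for this step
        return new_server

    # Example: offload one of two partitions onto a freshly provisioned server.
    cluster = [Server("server-0", ["p0", "p1"])]
    scale_out(cluster, cluster[0], ["p1"])
    ```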

    Fourth NASA Goddard Conference on Mass Storage Systems and Technologies

    This report contains copies of all the technical papers received in time for publication prior to the Fourth Goddard Conference on Mass Storage Systems and Technologies, held March 28-30, 1995, at the University of Maryland, University College Conference Center, in College Park, Maryland. This series of conferences continues to serve as a unique medium for the exchange of information on topics relating to the ingestion and management of substantial amounts of data and the attendant problems involved. This year's discussion topics include new storage technology, stability of recorded media, performance studies, storage system solutions, the National Information Infrastructure (Infobahn), the future of storage technology, and lessons learned from various projects. There will also be an update on the IEEE Mass Storage System Reference Model Version 5, on which the final vote was taken in July 1994.

    Just-in-time Hardware generation for abstracted reconfigurable computing

    This thesis addresses the use of reconfigurable hardware in computing platforms, in order to harness the performance benefits of dedicated hardware whilst maintaining the flexibility associated with software. Although the reconfigurable computing concept is not new, the low-level nature of the supporting tools normally used, together with the consequent limited level of abstraction and resultant lack of backwards compatibility, has prevented the widespread adoption of this technology. In addition, bandwidth and architectural limitations have seriously constrained the potential improvements in performance. A review of existing approaches and tool flows is conducted to highlight the current problems being faced in this field. The objective of the work presented in this thesis is to introduce a radically new approach to reconfigurable computing tool flows. The runtime-based tool flow introduces complete abstraction between the application developer and the underlying hardware. This new technique eliminates the ease-of-use and backwards-compatibility issues that have plagued the reconfigurable computing concept, and could pave the way for viable mainstream reconfigurable computing platforms. An easy-to-use, cycle-accurate behavioural modelling system is also presented, which was used extensively during the early exploration of new concepts and architectures. Some performance improvements produced by the new reconfigurable computing tool flow, when applied to both a MIPS-based embedded platform and the Cray XD1, are also presented. These results are then analysed, and the hardware and software factors affecting the performance increases that were obtained are discussed, together with potential techniques that could be used to further increase the performance of the system. Lastly, a heterogeneous computing concept is proposed, in which a computer system containing multiple types of computational resource is envisaged, each having its own strengths and weaknesses (e.g. DSPs, CPUs, FPGAs). A revolutionary new method of fully exploiting the potential of such a system, whilst maintaining scalability, backwards compatibility, and ease of use, is also presented.

    A predictive troubleshooting model for early engagement

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering; and (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; in conjunction with the Leaders for Global Operations Program at MIT, 2013. Cataloged from PDF version of thesis. Includes bibliographical references (p. 76-77). Raytheon Integrated Defense Systems (IDS) is home to Circuit Card Assembly, the department responsible for the production of circuit card assemblies from across all of Raytheon's businesses. Circuit Card Assembly includes manufacturing, test, quality, finance and other groups, functioning as its own business within Raytheon IDS. Circuit Card Assembly competes with external vendors for contracts from Raytheon businesses outside of IDS, thus the pursuit of competitive advantage in the form of technology, quality and throughput is a continuous activity. Circuit Card Assembly spends upwards of a million dollars each year, in labor alone, troubleshooting circuit card assemblies that fail first-pass testing, with additional costs associated with reprocessing and material replacement. This thesis describes the creation of a design tool that improves electrical design for test, reducing wasteful troubleshooting on hundreds of products each year, saving tens of thousands of dollars on high-cost programs, with incremental yearly savings totaling in the hundreds of thousands, and a net present value of over $2.5 million in labor savings. The tool provides designers with real-time feedback regarding the impact their design decisions have on expected troubleshooting activity, and provides guidance to improve troubleshootability. The tool reduces spending on non-value-added activity by an average of 50%, while at the same time helping fulfill Circuit Card Assembly's mission to engage design teams at the earliest stages of product development, before potentially costly decisions are finalized and pass beyond Circuit Card Assembly's ability to influence. The subject of interaction between groups in different functional silos, between independent Raytheon businesses, and with seemingly disparate incentives is investigated as it pertains to the development of the design-for-test tool. The method of action of the design tool, at a personal or organizational level, is to raise awareness of total product cost and allow disparate teams to communicate in the same language with a more complete understanding of how to achieve corporate-level goals. Communicating effectively across business and functional barriers is the greatest achievement of the new tool, but also the greatest roll-out and developmental challenge. The tool is part of a suite of similar activities driving towards operational excellence within CCA. by Glenn Bergevin. M.B.A. S.M.
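
    As a rough illustration of how the quoted figures could relate (every number below is a hypothetical assumption, not taken from the thesis), recurring yearly labor savings in the hundreds of thousands of dollars can discount to a net present value above $2.5 million:

    ```python
    # Hypothetical back-of-the-envelope NPV check; the $400k/year figure,
    # 10-year horizon, and 8% discount rate are assumptions, not thesis data.
    def npv(rate, cashflows):
        """Net present value of yearly cashflows, first flow one year out."""
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows, start=1))

    savings = [400_000] * 10                         # assumed yearly labor savings
    print(f"NPV at 8%: ${npv(0.08, savings):,.0f}")  # -> NPV at 8%: $2,684,033
    ```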

    A quality of service based framework for dynamic, dependable systems

    There is currently much UK government and industry interest in the integration of complex computer-based systems, including those in the military domain. These systems can include both mission-critical and safety-critical applications, and therefore require the dependable communication of data. Current modular military systems requiring such performance guarantees are mostly based on parameters and system states fixed at design time, thus allowing a predictable estimate of performance. These systems can exhibit a limited degree of reconfiguration, but this is typically within the constraints of a predefined set of configurations. The ability to reconfigure systems more dynamically could lead to further increased flexibility and adaptability, resulting in better use of existing assets. Current software architecture models that are capable of providing this flexibility, however, tend to lack support for dependable performance. This thesis explores the benefits, for the dependability of future dynamic systems built on a publish/subscribe model, of using Quality of Service (QoS) methods to map application-level data communication requirements to available network resources. Through this, original contributions to knowledge are made, including: a QoS framework that specifies a way of defining flexible levels of QoS characteristics and their use in the negotiation of network resources, a simulation-based evaluation of the QoS framework and specifically the choice of negotiation algorithm used, and a test-bed-based feasibility study. Simulation experiments comparing different methods of QoS negotiation give a clear indication that the proposed QoS framework and flexible negotiation algorithm can provide a benefit in terms of system utility, resource utilisation, and system stability. The choice of negotiation algorithm has a particularly strong impact on these system properties. The cost of these benefits comes in the processing power and execution time required to reach a decision on the acceptance of a subscriber. It is suggested, given this cost, that when computational resources are limited, a simpler priority-based negotiation algorithm should be used; where system resources are more abundant, the flexible negotiation algorithm proposed within the QoS framework can offer further benefits. Through the implementation of the QoS framework on a test-bed, within an emulator based on an existing military avionics software architecture, both the technical challenges that will need to be overcome and, more importantly, the potential viability of including the QoS framework have been demonstrated.
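
    A minimal sketch, under assumed names and data structures (not the thesis's actual framework), of the kind of simple priority-based negotiation suggested for resource-limited nodes: a new subscriber is admitted only if it fits within the link capacity, preempting lower-priority subscriptions when necessary.

    ```python
    # Illustrative priority-based admission sketch; Subscription and negotiate
    # are assumed names, not taken from the thesis's QoS framework.
    from dataclasses import dataclass

    @dataclass
    class Subscription:
        topic: str
        priority: int     # higher value = more important
        bandwidth: float  # requested share of the link, e.g. kbit/s

    def negotiate(active, request, capacity):
        """Admit the request, preempting lower-priority subscriptions if needed."""
        keep = [s for s in active if s.priority >= request.priority]
        budget = capacity - sum(s.bandwidth for s in keep) - request.bandwidth
        if budget < 0:
            return None  # rejected: cannot fit even after preempting lower priorities
        # Re-admit preempted subscriptions, highest priority first, while room remains.
        for s in sorted((s for s in active if s.priority < request.priority),
                        key=lambda s: s.priority, reverse=True):
            if s.bandwidth <= budget:
                keep.append(s)
                budget -= s.bandwidth
        return keep + [request]

    active = [Subscription("radar", 3, 400.0), Subscription("video", 1, 500.0)]
    print(negotiate(active, Subscription("alerts", 5, 300.0), capacity=1000.0))
    # video (priority 1) is preempted to fit the higher-priority alerts stream
    ```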

    Techniques for power system simulation using multiple processors

    The thesis describes development work undertaken to improve the speed of a real-time power system simulator used for the development and testing of control schemes. The solution of large, highly sparse matrices was targeted, because this is the most time-consuming part of the current simulator. Major improvements in the speed of the matrix ordering phase of the solution were achieved through the development of a new ordering strategy. This was thoroughly investigated, and is shown to provide important additional improvements over standard ordering methods, reducing path length and minimising potential pipeline stalls. Alterations were made to the remainder of the solution process which provided more flexibility in scheduling calculations. This was used to dramatically ease the run-time generation of efficient code dedicated to the solution of one matrix structure, and also to reduce memory requirements. A survey of the available microprocessors was performed, which concluded that a special-purpose design could best implement the code generated at run-time, and a design was produced using a microprogrammable floating-point processor, matched to the code produced by the earlier work. A method of splitting the matrix solution onto parallel processors was investigated, and two methods of producing network splits were developed and their results compared. The best results from each method were found to agree well, with a predicted three-fold speed-up for the matrix solution of the C.E.G.B. transmission system from the use of six processors. This gain will increase for the whole simulator. Finally, a parallel processing topology was developed to exploit the partitioned network and produce the necessary structures for the remainder of the solution process.
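
    For context, a sketch of the classic minimum-degree ordering, a standard ordering method of the kind the thesis's new strategy is compared against (the graph representation and names are illustrative, not the thesis's code): eliminating the lowest-degree node first keeps fill-in, and hence work, low during sparse factorisation.

    ```python
    # Minimum-degree ordering sketch: the sparse matrix is an adjacency-set
    # graph; eliminating a node connects its neighbours, mimicking the
    # fill-in produced by sparse LU factorisation.
    def minimum_degree_order(adj):
        """adj: dict node -> set of neighbours (symmetric). Returns elimination order."""
        adj = {v: set(n) for v, n in adj.items()}   # work on a copy
        order = []
        while adj:
            v = min(adj, key=lambda u: len(adj[u]))  # lowest current degree
            neighbours = adj.pop(v)
            for a in neighbours:
                adj[a].discard(v)
                adj[a] |= neighbours - {a}           # add fill-in edges
            order.append(v)
        return order

    # A 5-bus example network: leaf buses are eliminated first.
    grid = {0: {1}, 1: {0, 2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
    print(minimum_degree_order(grid))  # -> [0, 4, 1, 2, 3]
    ```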

    Innovation for maintenance technology improvements

    These proceedings collect 34 submitted entries (32 papers and 2 abstracts) from the 33rd meeting of the Mechanical Failures Prevention Group, whose subject was maintenance technology improvement through innovation. Areas of special emphasis included maintenance concepts, maintenance analysis systems, improved maintenance processes, innovative maintenance diagnostics and maintenance indicators, and technology improvements for power plant applications.

    The 1991 Goddard Conference on Space Applications of Artificial Intelligence

    The purpose of this annual conference is to provide a forum in which current research and development directed at space applications of artificial intelligence can be presented and discussed. The papers in these proceedings fall into the following areas: planning and scheduling, fault monitoring/diagnosis/recovery, machine vision, robotics, system development, information management, knowledge acquisition and representation, distributed systems, tools, neural networks, and miscellaneous applications.

    Aeronautical Engineering. A continuing bibliography, supplement 115

    This bibliography lists 273 reports, articles, and other documents introduced into the NASA scientific and technical information system in October 1979.

    The 1990 Goddard Conference on Space Applications of Artificial Intelligence

    The papers presented at the 1990 Goddard Conference on Space Applications of Artificial Intelligence are given. The purpose of this annual conference is to provide a forum in which current research and development directed at space applications of artificial intelligence can be presented and discussed. The papers in the proceedings fall into the following areas: Planning and Scheduling, Fault Monitoring/Diagnosis, Image Processing and Machine Vision, Robotics/Intelligent Control, Development Methodologies, Information Management, and Knowledge Acquisition.