7 research outputs found

    Hardware Support for Advanced Data Management Systems

    This thesis considers the problem of the optimal hardware architecture for advanced data management systems, of which the REL system can be considered a prototype. Exploring the space of architectures requires a new technique that applies widely varying workloads, performance constraints, and heuristic configuration rules to an analytic queueing network model to develop cost functions covering a representative range of organizational requirements. The model computes the cost functions, which are the ultimate basis for comparison of architectures, from a technology forecast. The discussion shows the application of the modeling technique to thirty trial architectures reflecting the major classifications of database machine architectures and memory technologies. The results suggest practical design considerations for advanced data management systems.
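
    The thesis develops the full model; purely as a hypothetical illustration of the core idea, the C++ sketch below sizes a single M/M/1 service center so that its mean response time meets a performance constraint, then prices the required capacity using a per-unit cost of the kind a technology forecast might supply. All rates, constraints, and cost figures here are invented for illustration.

        #include <cstdio>

        // Hypothetical sketch: size one M/M/1 service center so its mean
        // response time R = 1 / (mu - lambda) meets a constraint, then cost
        // the required capacity from a forecast price per unit of service rate.
        int main() {
            const double lambda        = 50.0;  // workload: arrivals per second
            const double rMax          = 0.05;  // constraint: max mean response time (s)
            const double costPerUnitMu = 120.0; // forecast: cost per request/s of capacity

            // R <= rMax  implies  mu >= lambda + 1/rMax.
            const double mu   = lambda + 1.0 / rMax;
            const double cost = mu * costPerUnitMu;

            std::printf("required service rate %.1f req/s, cost %.2f\n", mu, cost);
            return 0;
        }

    Repeating such a calculation across candidate architectures, workloads, and forecast years is what yields the cost functions the abstract describes as the basis for comparison.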

    Submicron Systems Architecture: Semiannual Technical Report

    No abstract available

    A benchmarking methodology for the centralized-database computer with expandable and parallel database processors and stores

    In this paper a benchmarking methodology for a new kind of database computer is introduced. The emergence, in both the research community and the commercial world, of this kind of database computer (known as the multiple-backend database computer), where each system is configured with two or more identical processors and their associated stores for concurrent execution of transactions and for parallel processing of a centralized database spread over the separate stores, is evident. The motivation for and characterization of the multiple-backend database computer are first given. Also evident are the need for, and lack of, a methodology for benchmarking the new computer, either with a variable number of backends for the same database or with a fixed number of backends for databases of different capacities. The measures (benchmarks) for the new computer are articulated and established, and the design of the methodology for conducting the measurements is then given. Because of the novelty of the database computer architecture, the benchmarking methodology is rather elaborate and somewhat complicated. To aid understanding of the methodology, a concrete sample is given herein; this sample also illustrates the use of the methodology. Meanwhile, a CAD system which computerizes the benchmarking methodology, for systematically assisting the design of test databases and test-transaction mixes, for automatically tallying the design data and workloads, and for completely generating the test databases and test-transaction mixes, is being implemented.
    Prepared for: Chief of Naval Research, Arlington, VA. http://archive.org/details/benchmarkingmeth00demu 61153N; RRO14-08-01; N0001485WR24046. Approved for public release; distribution is unlimited.
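
    The report details the actual measures and test designs; purely as a hypothetical sketch of the benchmarking pattern described, the C++ harness below times the same test-transaction mix while the number of backends doubles and the database is held fixed. The mix runner here is a stub that merely sleeps; in a real measurement it would issue the test-transaction mix to the database computer.

        #include <chrono>
        #include <cstdio>
        #include <thread>

        // Hypothetical stand-in for issuing the test-transaction mix; here it
        // just sleeps in inverse proportion to the number of backends.
        static void runTransactionMix(int backends) {
            std::this_thread::sleep_for(std::chrono::milliseconds(400 / backends));
        }

        int main() {
            // Vary the number of backends for the same database and record
            // the elapsed time of the identical transaction mix each run.
            for (int backends = 1; backends <= 8; backends *= 2) {
                auto t0 = std::chrono::steady_clock::now();
                runTransactionMix(backends);
                auto t1 = std::chrono::steady_clock::now();
                double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
                std::printf("%d backend(s): %.1f ms for the mix\n", backends, ms);
            }
            return 0;
        }

    The complementary experiment the abstract mentions, a fixed number of backends with databases of different capacities, would use the same harness with the loop variable moved to the database size.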

    Submicron Systems Architecture Project : Semiannual Technical Report

    The Mosaic C is an experimental fine-grain multicomputer based on single-chip nodes. The Mosaic C chip includes 64KB of fast dynamic RAM, a processor, a packet interface, ROM for bootstrap and self-test, and a two-dimensional self-timed router. The chip architecture provides low-overhead and low-latency handling of message packets, and high memory and network bandwidth. Sixty-four Mosaic chips are packaged by tape-automated bonding (TAB) in an 8 x 8 array on circuit boards that can, in turn, be arrayed in two dimensions to build arbitrarily large machines. These 8 x 8 boards are now in prototype production under a subcontract with Hewlett-Packard. We are planning to construct a 16K-node Mosaic C system from 256 of these boards. The suite of Mosaic C hardware also includes host-interface boards and high-speed communication cables. The hardware developments and activities of the past eight months are described in section 2.1. The programming system that we are developing for the Mosaic C is based on the same message-passing, reactive-process, computational model that we have used with earlier multicomputers, but the model is implemented for the Mosaic in a way that supports fine-grain concurrency. A process executes only in response to receiving a message, and may in execution send messages, create new processes, and modify its persistent variables before it either exits or becomes dormant in preparation for receiving another message. These computations are expressed in an object-oriented programming notation, a derivative of C++ called C+-. The computational model and the C+- programming notation are described in section 2.2. The Mosaic C runtime system, which is written in C+-, provides automatic process placement and highly distributed management of system resources. The Mosaic C runtime system is described in section 2.3.
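
    No C+- code appears in the abstract; the following plain C++ sketch (all names invented) illustrates the reactive-process model it describes: a process runs only when a message arrives, may update its persistent variables and send further messages while handling it, and then returns to dormancy until the next message.

        #include <cstdio>
        #include <deque>
        #include <vector>

        // Hypothetical message and process types; the "network" is modeled
        // as a simple FIFO queue of undelivered messages.
        struct Message { int target; int payload; };

        struct Process {
            int id;
            long total = 0;                        // persistent variable
            void receive(const Message& m, std::deque<Message>& net) {
                total += m.payload;                // modify persistent state
                if (m.payload > 1)                 // optionally send more messages
                    net.push_back({id, m.payload / 2});
                std::printf("process %d handled %d (total %ld)\n",
                            id, m.payload, total);
            }                                      // dormant again on return
        };

        int main() {
            std::vector<Process> procs{{0}, {1}};
            std::deque<Message> net{{0, 8}, {1, 3}};   // initial messages
            while (!net.empty()) {                     // deliver until quiescent
                Message m = net.front(); net.pop_front();
                procs[m.target].receive(m, net);
            }
            return 0;
        }

    In the actual system, process creation, placement, and message delivery are handled by the C+- notation and the Mosaic C runtime rather than by an explicit loop as here.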

    Submicron Systems Architecture: Semiannual Technical Report

    No abstract available

    Submicron Systems Architecture Project: Semiannual Technical Report

    No abstract available

    An Epidemiology of Big Data

    Federal legislation designed to transform the U.S. healthcare system and the emergence of mobile technology are among the common drivers that have contributed to a data explosion, with industry analysts and stakeholders proclaiming this decade the big data decade in healthcare (Horowitz, 2012). But a precise definition of big data is hazy (Dumbill, 2013). Instead, the healthcare industry relies mainly on metaphors, buzzwords, and slogans that fail to provide information about big data's content, value, or purposes for existence (Burns, 2011). Bollier and Firestone (2010) even suggest big data does not really exist in healthcare (p. 29). While federal policymakers and other healthcare stakeholders struggle with the adoption of Meaningful Use Standards, International Classification of Diseases-10 (ICD-10), and electronic health record interoperability standards, big data in healthcare remains a widely misunderstood phenomenon. Borgman (2012) found that by studying how data are created, handled, and managed in multi-disciplinary collaborations, we can inform science policy and practice (p. 12). Through the narratives of nine leaders representing three key stakeholder classes in the healthcare ecosystem (government, providers, and consumers), this phenomenological research study explored a fundamental question: within and across the narratives of three key healthcare stakeholder classes, what are the important categories of meaning or current themes about big data in healthcare? This research is significant because it: (1) produces new thematic insights about the meaning of big data in healthcare through narrative inquiry; (2) offers an agile framework of big data that can be deployed across all industries; and (3) makes a unique contribution to scholarly qualitative literature about the phenomenon of big data in healthcare for future research on topics including the diffusion and spread of health information across networks, mixed-methods studies about big data, standards development, and health policy.