
    Proceedings of the NSSDC Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications

    The proceedings of the National Space Science Data Center Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications held July 23 through 25, 1991 at the NASA/Goddard Space Flight Center are presented. The program includes a keynote address, invited technical papers, and selected technical presentations to provide a broad forum for the discussion of a number of important issues in the field of mass storage systems. Topics include magnetic disk and tape technologies, optical disk and tape, software storage and file management systems, and experiences with the use of a large, distributed storage system. The technical presentations describe integrated mass storage systems that are expected to be available commercially. Also included is a series of presentations from Federal Government organizations and research institutions covering their mass storage requirements for the 1990's

    Is There a Market for Work Group Servers? Evaluating Market Level Demand Elasticities Using Micro and Macro Models

    This paper contains an empirical analysis of demand for "work-group" (or low-end) servers. Servers are at the centre of many US and EU anti-trust debates, including the Hewlett-Packard/Compaq merger and investigations into the activities of Microsoft. One question in these policy decisions is whether a high share of work group servers indicates anything about short-run market power. To investigate price elasticities we use model-level panel data on transaction prices, sales and characteristics of practically every server in the world. We contrast estimates from the traditional "macro" approaches that aggregate across brands and modern "micro" approaches that use brand-level information (including both "distance metric" and logit based approaches). We find that the macro approaches lead to overestimates of consumer price sensitivity. Our preferred micro-based estimates of the market level elasticity of demand for work group servers are around 0.3 to 0.6 (compared to 1 to 1.3 in the macro estimates). Even at the higher range of the estimates, however, we find that demand elasticities are sufficiently low to imply a distinct "anti-trust" market for work group servers and their operating systems. It is unsurprising that firms with large shares of work group servers have come under some antitrust scrutiny.
    Keywords: demand elasticities, network servers, computers, anti-trust
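    The market-level elasticity the abstract reports can be illustrated with a minimal log-log regression sketch. This is not the paper's actual model (which uses brand-level panel data and distance-metric/logit approaches); it is a toy estimation on synthetic data, with all names and numbers assumed for illustration, showing what an elasticity of around 0.5 in magnitude means: the OLS slope of log quantity on log price.

    ```python
    # Illustrative sketch only (synthetic data, not the paper's model):
    # a log-log regression whose slope estimates the price elasticity of demand.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    log_price = rng.normal(7.0, 0.3, n)   # hypothetical log transaction prices
    true_elasticity = -0.5                # magnitude in the paper's micro range (0.3-0.6)
    log_qty = 10.0 + true_elasticity * log_price + rng.normal(0, 0.1, n)

    # OLS of log quantity on log price; the slope is the elasticity estimate
    X = np.column_stack([np.ones(n), log_price])
    beta, *_ = np.linalg.lstsq(X, log_qty, rcond=None)
    elasticity_hat = beta[1]
    print(f"estimated elasticity: {elasticity_hat:.2f}")
    ```

    An elasticity near 0.5 in magnitude means a 10% price increase reduces quantity demanded by roughly 5%, which is why the authors argue demand is inelastic enough to define a distinct anti-trust market.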

    Methodologies for CIM systems integration in small batch manufacturing

    This thesis is concerned with identifying the problems and constraints faced by small batch manufacturing companies during the implementation of Computer Integrated Manufacturing (CIM). The main aim of this work is to recommend generic solutions to these problems with particular regard to those constraints arising because of the need for CIM systems integration involving both new and existing systems and procedures. The work has involved the application of modern computer technologies, including suitable hardware and software tools, in an industrial environment. Since the research has been undertaken with particular emphasis on the industrial implementor's viewpoint, it is supported by the results of a two phased implementation of computer based control systems within the machine shop of a manufacturing company. This involved the specific implementation of a Distributed Numerical Control system on a single machine in a group technology cell of machines, followed by the evolution of this system into Cell and Machine Management Systems to provide a comprehensive decision support and information distribution facility for the foremen and operators within the cell. The work also required the integration of these systems with existing Factory level manufacturing control and CADCAM functions. Alternative approaches have been investigated which may have been applicable under differing conditions, and the implications that this specific work has for CIM systems integration in small batch manufacturing companies have been evaluated with regard not only to the users within an industrial company but also the systems suppliers external to the company.
The work has resulted in certain generic contributions to knowledge by complementing CIM systems integration research with regard to problems encountered; cost implications; the use of appropriate methodologies, including the role of emerging international standard methods, tools and technologies; and also the importance of 'human integration' when implementing CIM systems in a real industrial situation

    Factors shaping the evolution of electronic documentation systems

    The main goal is to prepare the space station technical and managerial structure for likely changes in the creation, capture, transfer, and utilization of knowledge. By anticipating advances, the design of Space Station Project (SSP) information systems can be tailored to facilitate a progression of increasingly sophisticated strategies as the space station evolves. Future generations of advanced information systems will use increases in power to deliver environmentally meaningful, contextually targeted, interconnected data (knowledge). The concept of a Knowledge Base Management System emerges when the problem is framed as how information systems can perform such a conversion of raw data. Such a system would include traditional management functions for large space databases. Added artificial intelligence features might encompass co-existing knowledge representation schemes; effective control structures for deductive, plausible, and inductive reasoning; means for knowledge acquisition, refinement, and validation; explanation facilities; and dynamic human intervention. The major areas covered include: alternative knowledge representation approaches; advanced user interface capabilities; computer-supported cooperative work; the evolution of information system hardware; standardization, compatibility, and connectivity; and organizational impacts of information intensive environments

    NSSDC Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications, volume 1

    Papers and viewgraphs from the conference are presented. This conference served as a broad forum for the discussion of a number of important issues in the field of mass storage systems. Topics include magnetic disk and tape technologies, optical disks and tape, software storage and file management systems, and experiences with the use of a large, distributed storage system. The technical presentations describe, among other things, integrated mass storage systems that are expected to be available commercially. Also included is a series of presentations from Federal Government organizations and research institutions covering their mass storage requirements for the 1990's

    Deep Space Network information system architecture study

    The purpose of this article is to describe an architecture for the Deep Space Network (DSN) information system in the years 2000-2010 and to provide guidelines for its evolution during the 1990s. The study scope is defined to be from the front-end areas at the antennas to the end users (spacecraft teams, principal investigators, archival storage systems, and non-NASA partners). The architectural vision provides guidance for major DSN implementation efforts during the next decade. A strong motivation for the study is an expected dramatic improvement in information-systems technologies, such as the following: computer processing, automation technology (including knowledge-based systems), networking and data transport, software and hardware engineering, and human-interface technology. The proposed Ground Information System has the following major features: unified architecture from the front-end area to the end user; open-systems standards to achieve interoperability; DSN production of level 0 data; delivery of level 0 data from the Deep Space Communications Complex, if desired; dedicated telemetry processors for each receiver; security against unauthorized access and errors; and highly automated monitor and control

    Domain Computing: The Next Generation of Computing

    Computers are indispensable in our daily lives. The first generation of computing started the era of human automation computing. These machines' computational resources, however, were completely centralized in local machines. With the appearance of networks, the second generation of computing significantly improved data availability and portability so that computing resources could be efficiently shared among the networks. The service-oriented third generation of computing provided functionality by breaking down applications into services, on-demand computing through utility and cloud infrastructures, as well as ubiquitous access from widespread geographical networks. Services as primary computing resources are spread far, from local to worldwide. These services loosely couple applications and servers, which allows services to scale up easily with higher availability. The complexity of locating, utilizing and optimizing computational resources becomes even more challenging as these resources become more available, fault-tolerant, scalable, better performing, and spatially distributed. The critical question becomes how applications dynamically utilize and optimize unique/duplicate/competitive resources at runtime in the most efficient and effective way without code changes, while also providing highly available, scalable, secure and easily developed services. Domain computing proposes a new way to manage computational resources and applications. Domain computing dynamically manages resources within logical entities, domains, without being bound to physical machines, so that application functionality can be extended at runtime. Moreover, domain computing introduces domains as a replacement for the traditional computer in order to run applications, and links different computational resources that are distributed over networks into domains so that a user can greatly improve and optimize resource utilization at a global level.
By negotiating with different layers, domain computing dynamically links different resources, shares resources and cooperates with domains at runtime so applications can more quickly adapt to dynamically changing environments and gain better performance. Also, domain computing presents a new way to develop applications which are resource stateless based. In this work, a prototype system was built and the performance of its various aspects has been examined, including network throughput, response time, variance, resource publishing and subscription, and secured communications
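    The core idea above can be sketched in a few lines of code. This is a hypothetical illustration, not the thesis's actual system or API: a domain acts as a logical entity that groups resources from different networked machines, lets resources join at runtime, and selects among them, so an application talks to the domain rather than to any fixed physical machine. All class and field names here are assumptions.

    ```python
    # Minimal, hypothetical sketch of a "domain" grouping distributed resources.
    # Names and the least-loaded selection policy are illustrative assumptions.
    from dataclasses import dataclass, field

    @dataclass
    class Resource:
        name: str
        host: str          # a resource may live on any networked machine
        load: float = 0.0  # current utilization, 0.0 to 1.0

    @dataclass
    class Domain:
        name: str
        resources: list = field(default_factory=list)

        def register(self, resource: Resource) -> None:
            # resources can join the domain at runtime, extending what the
            # domain offers without code changes in applications that use it
            self.resources.append(resource)

        def acquire(self) -> Resource:
            # pick the least-loaded resource, wherever it is hosted
            return min(self.resources, key=lambda r: r.load)

    d = Domain("image-processing")
    d.register(Resource("gpu-a", host="node1", load=0.7))
    d.register(Resource("gpu-b", host="node2", load=0.2))
    print(d.acquire().name)  # → gpu-b
    ```

    The application holds only a reference to the domain; which physical resource serves a request is decided at runtime, which is the decoupling the abstract attributes to domain computing.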

    Database machines in support of very large databases

    Software database management systems were developed in response to the needs of early data processing applications. Database machine research developed as a result of certain performance deficiencies of these software systems. This thesis discusses the history of database machines designed to improve the performance of database processing and focuses primarily on the Teradata DBC/1012, the only successfully marketed database machine that supports very large databases today. Also reviewed is the response of IBM to the performance needs of its database customers; this response has been in terms of improvements in both software and hardware support for database processing. In conclusion, an analysis is made of the future of database machines, in particular the DBC/1012, in light of recent IBM enhancements and its immense customer base