4,011 research outputs found

    Deep Space Network information system architecture study

    The purpose of this article is to describe an architecture for the Deep Space Network (DSN) information system in the years 2000-2010 and to provide guidelines for its evolution during the 1990s. The study scope runs from the front-end areas at the antennas to the end users (spacecraft teams, principal investigators, archival storage systems, and non-NASA partners). The architectural vision provides guidance for major DSN implementation efforts during the next decade. A strong motivation for the study is an expected dramatic improvement in information-systems technologies, including computer processing, automation technology (including knowledge-based systems), networking and data transport, software and hardware engineering, and human-interface technology. The proposed Ground Information System has the following major features: a unified architecture from the front-end area to the end user; open-systems standards to achieve interoperability; DSN production of level 0 data; delivery of level 0 data from the Deep Space Communications Complex, if desired; dedicated telemetry processors for each receiver; security against unauthorized access and errors; and highly automated monitoring and control.
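
    As a rough, hypothetical illustration of two of these features, level 0 data production and dedicated telemetry processors per receiver, the Python sketch below models a level 0 product as a time-tagged, otherwise unprocessed record and dedicates one processor object to each receiver. All names and fields are assumptions for illustration, not drawn from the DSN design.

        from dataclasses import dataclass

        @dataclass
        class Level0Record:
            """Hypothetical level 0 product: time-tagged raw telemetry, no science processing."""
            spacecraft_id: int
            earth_received_time: float  # seconds past epoch, station clock
            frame: bytes                # transfer frame as received, after decoding

        class TelemetryProcessor:
            """One instance is dedicated to a single receiver, per the proposed architecture."""
            def __init__(self, receiver_id: str):
                self.receiver_id = receiver_id

            def to_level0(self, spacecraft_id: int, ert: float, frame: bytes) -> Level0Record:
                # Level 0 production: attach identification and time tags, nothing more.
                return Level0Record(spacecraft_id, ert, frame)

        # One dedicated processor per receiver at the Deep Space Communications Complex.
        processors = {rid: TelemetryProcessor(rid) for rid in ("RCV-1", "RCV-2")}
        record = processors["RCV-1"].to_level0(74, 662256000.0, b"\x1a\xcf\xfc\x1d")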

    Ground Systems Development Environment (GSDE) interface requirements analysis

    A set of procedural and functional requirements is presented for the interface between software development environments and the software integration and test systems used for Space Station ground systems software. The requirements focus on the need for centralized configuration management of software as it transitions from development to formal, target-based testing. This report concludes the GSDE Interface Requirements study. A summary is presented of findings concerning the interface itself, possible interface and prototyping directions for further study, and the results of the investigation of the Cronus distributed applications environment.
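
    The centralized transition discipline that the requirements emphasize can be sketched as a small state machine in which software must pass through a controlled configuration-management library before it reaches formal, target-based test. The states and names in this Python sketch are illustrative assumptions, not GSDE terminology.

        from enum import Enum, auto

        class CMState(Enum):
            DEVELOPMENT = auto()  # under the developer's control
            CONTROLLED = auto()   # baselined in the central CM library
            FORMAL_TEST = auto()  # released to target-based integration and test

        # Every path to formal test passes through centralized CM; rework goes back.
        ALLOWED = {
            CMState.DEVELOPMENT: {CMState.CONTROLLED},
            CMState.CONTROLLED: {CMState.FORMAL_TEST, CMState.DEVELOPMENT},
            CMState.FORMAL_TEST: {CMState.DEVELOPMENT},
        }

        def transition(item: dict, new_state: CMState) -> dict:
            if new_state not in ALLOWED[item["state"]]:
                raise ValueError(f"illegal CM transition: {item['state']} -> {new_state}")
            return {**item, "state": new_state}

        unit = {"name": "gsde_module", "version": "1.0", "state": CMState.DEVELOPMENT}
        unit = transition(unit, CMState.CONTROLLED)   # check into the central CM library
        unit = transition(unit, CMState.FORMAL_TEST)  # hand off to target-based testing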

    Functional requirements document for the Earth Observing System Data and Information System (EOSDIS) Scientific Computing Facilities (SCF) of the NASA/MSFC Earth Science and Applications Division, 1992

    Five scientists at MSFC/ESAD have EOS SCF investigator status. Each SCF has unique tasks that require the establishment of a computing facility dedicated to accomplishing them. An SCF Working Group was established at ESAD with the charter of defining the computing requirements of the individual SCFs and recommending options for meeting those requirements. The primary goal of the working group was to determine which computing needs can be satisfied using shared resources or separate but compatible resources, and which needs require unique individual resources. The requirements investigated included CPU-intensive vector and scalar processing, visualization, data storage, connectivity, and I/O peripherals. A review of computer industry directions and a market survey of computing hardware provided information about important industry standards and candidate computing platforms. It was determined that the total SCF computing requirements might be most effectively met using a hierarchy consisting of shared and individual resources. This hierarchy comprises five major system types: (1) a supercomputer-class vector processor; (2) a high-end scalar multiprocessor workstation; (3) a file server; (4) a few medium- to high-end visualization workstations; and (5) several low- to medium-range personal graphics workstations. Specific recommendations for meeting the needs of each of these types are presented.
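
    One way to read the recommended hierarchy is as a mapping from the investigated requirement classes to the five system types, with the shared/individual split made explicit. The Python sketch below is an interpretation for illustration only; the assignments of requirements to tiers are assumptions, not the working group's actual allocations.

        # Hypothetical allocation of requirement classes to the five-tier hierarchy.
        HIERARCHY = {
            "supercomputer-class vector processor": {"shared": True, "serves": ["vector processing"]},
            "high-end scalar multiprocessor workstation": {"shared": True, "serves": ["scalar processing"]},
            "file server": {"shared": True, "serves": ["data storage", "connectivity", "I/O peripherals"]},
            "medium- to high-end visualization workstation": {"shared": False, "serves": ["visualization"]},
            "low- to medium-range personal graphics workstation": {"shared": False, "serves": ["personal graphics"]},
        }

        def candidate_tiers(requirement: str) -> list[str]:
            """Tiers that could satisfy a given requirement class."""
            return [tier for tier, info in HIERARCHY.items() if requirement in info["serves"]]

        print(candidate_tiers("visualization"))  # -> ['medium- to high-end visualization workstation']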

    APPLICATIONS: Financial Risk and Financial Risk Management Technology (RMT): Issues and Advances

    Methods for sound risk management are of increasing interest among Wall Street investment banking and brokerage firms in the aftermath of the October 1987 stock market crash. As knowledge of advanced technology applications in risk management increases, financial firms are finding innovative, practical ways to use them to insulate themselves. Recent developments in models, in software and hardware, and in the market data used to track risk are all advances in Risk Management Technology (RMT). These advances have affected all three stages of risk management: the identification, the measurement, and the formulation of strategies to control financial risk. This article discusses the advances made in five areas of RMT: communication software, object-oriented programming, parallel processing, neural nets, and artificial intelligence. Systems based on any of these areas may be used to add value to the business of a firm. A business value linkage analysis shows how the utility of advanced systems can be measured to justify their costs. (Information Systems Working Papers Series)
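
    To make the measurement stage concrete, here is a generic Python sketch of one textbook risk measure, historical value-at-risk, estimated from past returns. It is a standard illustration with made-up numbers, not one of the RMT systems the article surveys.

        import math

        def historical_var(returns, confidence=0.95):
            """Historical value-at-risk: the loss level not exceeded with the given
            confidence, read off the empirical distribution of past returns."""
            losses = sorted(-r for r in returns)           # losses in ascending order
            idx = math.ceil(confidence * len(losses)) - 1  # empirical quantile index
            return losses[idx]

        daily_returns = [0.004, -0.012, 0.007, -0.031, 0.010, -0.002, 0.015, -0.018]
        print(f"95% one-day VaR: {historical_var(daily_returns):.1%} of portfolio value")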

    A comparison of integration architectures

    This paper presents GenSIF, a Generic Systems Integration Framework. GenSIF features a pre-planned development process on a domain-wide basis and facilitates system integration and project coordination for very large, complex, and distributed systems. Domain analysis, integration architecture design, and infrastructure design are identified as the three main components of GenSIF. We then map Bellcore's OSCA interoperability architecture, ANSA, IBM's SAA, and Bull's DCM into GenSIF and, using the GenSIF concepts, compare each of these architectures. GenSIF serves as a general framework for evaluating and positioning specific architectures. The OSCA architecture is used to discuss the impact of vendor architectures on application development. All opinions expressed in this paper, especially with regard to the OSCA architecture, are the opinions of the author and do not necessarily reflect the point of view of any of the mentioned companies.

    An incremental database access method for autonomous interoperable databases

    We investigated a number of design and performance issues of interoperable database management systems (DBMSs). The major results of our investigation were obtained in the areas of client-server database architectures for heterogeneous DBMSs, incremental computation models, buffer management techniques, and query optimization. We finished a prototype of an advanced client-server workstation-based DBMS which allows access to multiple heterogeneous commercial DBMSs. Experiments and simulations were then run to compare its performance with that of standard client-server architectures. The focus of this research was on adaptive optimization methods for heterogeneous database systems. Adaptive buffer management accounts for the random and object-oriented access methods for which no known characterization of the access patterns exists. Adaptive query optimization means that value distributions and selectivities, which play the most significant role in query plan evaluation, are continuously refined to reflect actual values, as opposed to static ones computed off-line. Query feedback is a concept that was first introduced to the literature by our group. We employed query feedback both for adaptive buffer management and for computing value distributions and selectivities. For adaptive buffer management, we use the page faults of prior executions to achieve more 'informed' management decisions. For the estimation of the distributions and the selectivities, we use curve-fitting techniques, such as least squares and splines, to regress on these values.
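
    The query-feedback loop for selectivity estimation can be sketched as follows: each executed query reports the predicate value it ran with and the selectivity actually observed, and a least-squares fit over that accumulated feedback continuously replaces static, off-line statistics. This Python sketch uses a polynomial least-squares fit; the class and method names are illustrative assumptions, and the paper's techniques also include splines.

        import numpy as np

        class FeedbackSelectivityEstimator:
            """Refine a selectivity curve sel(v) from query feedback instead of static statistics."""
            def __init__(self, degree=2):
                self.degree = degree
                self.values, self.observed = [], []  # feedback: (predicate value, observed selectivity)
                self.coeffs = None

            def feedback(self, value, selectivity):
                """Record the selectivity actually observed when a query ran with this value."""
                self.values.append(value)
                self.observed.append(selectivity)
                if len(self.values) > self.degree:   # enough points for a least-squares fit
                    self.coeffs = np.polyfit(self.values, self.observed, self.degree)

            def estimate(self, value, default=0.1):
                """Selectivity estimate for the optimizer; falls back to a default before any fit."""
                if self.coeffs is None:
                    return default
                return float(np.clip(np.polyval(self.coeffs, value), 0.0, 1.0))

        est = FeedbackSelectivityEstimator()
        for v, s in [(10, 0.02), (50, 0.11), (90, 0.35), (120, 0.52)]:
            est.feedback(v, s)        # each completed query refines the curve
        print(est.estimate(70))       # used the next time the optimizer sees value 70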

    Data Abstraction Mechanisms in Sina/st

    This paper describes a new data abstraction mechanism in an object-oriented model of computing. The data abstraction mechanism described here was devised in the context of the design of the Sina/st language. In Sina/st, no language constructs have been adopted for specifying inheritance or delegation; rather, we introduce simpler mechanisms that can support a wide range of code-sharing strategies without selecting one of them as a language feature. Sina/st also provides stronger data encapsulation than most existing object-oriented languages. The language has been implemented on the SUN 3 workstation using Smalltalk.
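
    The paper's own mechanisms are not reproduced here, but the general idea of supporting code sharing without committing to inheritance or delegation as a language construct can be illustrated with explicit message forwarding: an object holds a reference to a provider and forwards the messages it does not handle itself, which subsumes several sharing strategies. A minimal Python sketch with illustrative names:

        class Counter:
            """Provider object whose behavior will be shared by forwarding."""
            def __init__(self):
                self.count = 0
            def increment(self):
                self.count += 1
                return self.count

        class BoundedCounter:
            """Shares Counter's code by forwarding rather than inheriting; no
            inheritance construct is needed, only an object reference."""
            def __init__(self, limit):
                self._provider = Counter()
                self.limit = limit
            def increment(self):
                if self._provider.count >= self.limit:
                    raise OverflowError("limit reached")
                return self._provider.increment()
            def __getattr__(self, name):
                # Any message not understood here is forwarded to the provider.
                return getattr(self._provider, name)

        c = BoundedCounter(limit=2)
        c.increment()
        c.increment()
        print(c.count)  # 2, answered by the provider via forwarding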