
    Deep Space Network information system architecture study

    The purpose of this article is to describe an architecture for the Deep Space Network (DSN) information system in the years 2000-2010 and to provide guidelines for its evolution during the 1990s. The study scope is defined to be from the front-end areas at the antennas to the end users (spacecraft teams, principal investigators, archival storage systems, and non-NASA partners). The architectural vision provides guidance for major DSN implementation efforts during the next decade. A strong motivation for the study is an expected dramatic improvement in information-systems technologies, such as the following: computer processing, automation technology (including knowledge-based systems), networking and data transport, software and hardware engineering, and human-interface technology. The proposed Ground Information System has the following major features: unified architecture from the front-end area to the end user; open-systems standards to achieve interoperability; DSN production of level 0 data; delivery of level 0 data from the Deep Space Communications Complex, if desired; dedicated telemetry processors for each receiver; security against unauthorized access and errors; and highly automated monitor and control.

    Adaptive object management for distributed systems

    This thesis describes an architecture supporting the management of pluggable software components and evaluates it against the requirement for an enterprise integration platform for the manufacturing and petrochemical industries. In a distributed environment, we need mechanisms to manage objects and their interactions. At the least, we must be able to create objects in different processes on different nodes; we must be able to link them together so that they can pass messages to each other across the network; and we must deliver their messages in a timely and reliable manner. Object-based environments which support these services already exist, for example ANSAware (ANSA, 1989), DEC's ObjectBroker (ACA, 1992), and Iona's Orbix (Orbix, 1994). Yet such environments provide limited support for composing applications from pluggable components. Pluggability is the ability to install and configure a component into an environment dynamically when the component is used, without specifying static dependencies between components when they are produced. Pluggability is supported to a degree by dynamic binding. Components may be programmed to import references to other components and to explore their interfaces at runtime, without using static type dependencies. Yet this overloads the component with the responsibility to explore bindings. What is still generally missing is an efficient general-purpose binding model for managing bindings between independently produced components. In addition, existing environments provide no clear strategy for dealing with fine-grained objects. The overhead of runtime binding and remote messaging will severely reduce performance where there are many objects with complex patterns of interaction. We need an adaptive approach to managing configurations of pluggable components according to the needs and constraints of the environment. Management is made difficult by embedding bindings in component implementations and by relying on strong typing as the only means of verifying and validating bindings. To solve these problems, we have built a set of configuration tools on top of an existing distributed support environment. Specification tools facilitate the construction of independent pluggable components. Visual composition tools facilitate the configuration of components into applications and the verification of composite behaviours. A configuration model is constructed which maintains the environmental state. Adaptive management is made possible by changing the management policy according to this state. Such policy changes affect the location of objects, their bindings, and the choice of messaging system.
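
    A minimal sketch of the binding idea described above, in Python with hypothetical names (ConfigurationModel, install, and bind are illustrative, not the thesis's actual tools): bindings between pluggable components live in a configuration model and are injected when a component is plugged in, rather than being compiled into the components as static dependencies.

```python
# Illustrative sketch only; names and structure are assumptions,
# not the thesis's actual configuration tools.

class ConfigurationModel:
    """Holds the environmental state: which components exist and how they are bound."""

    def __init__(self):
        self._components = {}   # component name -> instance
        self._bindings = {}     # (consumer name, port) -> provider name

    def install(self, name, factory):
        # Create the component dynamically when it is plugged in.
        self._components[name] = factory()

    def bind(self, consumer, port, provider):
        # Record the binding in the model, not in the component's code,
        # then inject the provider into the consumer's port.
        self._bindings[(consumer, port)] = provider
        setattr(self._components[consumer], port, self._components[provider])

    def component(self, name):
        return self._components[name]


class Logger:
    def write(self, msg):
        print(f"[log] {msg}")


class OrderService:
    # The 'log' port carries no static dependency on a concrete Logger;
    # the configuration model fills it in at composition time.
    log = None

    def place_order(self, item):
        self.log.write(f"order placed: {item}")


if __name__ == "__main__":
    model = ConfigurationModel()
    model.install("logger", Logger)
    model.install("orders", OrderService)
    model.bind("orders", "log", "logger")
    model.component("orders").place_order("valve-42")
```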

    NASA/ASEE Summer Faculty Fellowship Program

    This document is a collection of technical reports on research conducted by the participants in the 1996 NASA/ASEE Summer Faculty Fellowship Program at the Kennedy Space Center (KSC). This was the twelfth year that a NASA/ASEE program had been conducted at KSC. The 1996 program was administered by the University of Central Florida in cooperation with KSC. The program was operated under the auspices of the American Society for Engineering Education (ASEE) with sponsorship and funding from the Office of Educational Affairs, NASA Headquarters, Washington, DC, and KSC. The KSC program was one of nine such Aeronautics and Space Research Programs funded by NASA in 1996. The NASA/ASEE Program is intended to be a two-year program to allow in-depth research by university faculty members. The editors of this document were responsible for selecting appropriately qualified faculty to address some of the many problems of current interest to NASA/KSC.

    A technology reference model for client/server software development

    In today's highly competitive global economy, information resources representing enterprise-wide information are essential to the survival of an organization. The development of and increase in the use of personal computers and data communication networks are supporting or, in many cases, replacing the traditional computing mainstay of corporations. The client/server model combines mainframe programming with desktop applications on personal computers. The aim of the research is to compile a technology model for the development of client/server software. A comprehensive overview of the individual components of the client/server system is given. The different methodologies, tools and techniques that can be used are reviewed, as well as client/server-specific design issues. The research is intended to create a road map in the form of a Technology Reference Model for Client/Server Software Development. Computing; M. Sc. (Information Systems).

    NASA/OAI Collaborative Aerospace Internship and Fellowship Program

    The NASA/OAI Collaborative Aerospace Internship and Fellowship Program is a collaborative undertaking by the Office of Educational Programs at the NASA Lewis Research Center and the Department of Workforce Enhancement at the Ohio Aerospace Institute. This program provides 12- or 14-week internships for undergraduate and graduate students of science and engineering, and for secondary school teachers. Each participant is assigned a NASA mentor who facilitates a research assignment. An important aspect of the program is that it includes students with diverse social, cultural and economic backgrounds. The purpose of this report is to document the program accomplishments for 1996.

    RICIS Symposium 1992: Mission and Safety Critical Systems Research and Applications

    This conference deals with computer systems that control systems whose failure to operate correctly could produce loss of life and/or property, that is, mission- and safety-critical systems. Topics covered are: the work of standards groups, computer systems design and architecture, software reliability, process control systems, knowledge-based expert systems, and computer and telecommunication protocols.

    Predictive YASIR: High Security with Lower Latency in Legacy SCADA

    Message authentication with low latency is necessary to ensure secure operations in legacy industrial control networks, such as power grid networks. Previous authentication solutions by our lab and others looked at single messages and incurred noticeable latency. To reduce this latency, we develop Predictive YASIR, a bump-in-the-wire device that looks at broader patterns of messages. The device (1) predicts the incoming plaintext based on previous observations; (2) compresses, encrypts, and authenticates data online; and (3) pre-sends part of the ciphertext before receiving the whole plaintext. We demonstrate the performance properties of this approach by implementing it in the Scalable Simulation Framework and testing it on the Modbus/ASCII protocol, which is widely used in the power grid, oil and gas, manufacturing, and water treatment control networks. By looking at broader message patterns and using predictive analysis, our results demonstrate a 15.48 +/- 0.35% improvement in latency over the previous most efficient solution. The simulation code is available from http://www.cs.dartmouth.edu/~pyasir/
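
    A minimal sketch of the pre-send idea in Python (illustrative only: the real YASIR wire format, compression, and cipher are not reproduced, and the HMAC-only "protect" step below is a stand-in assumption for encrypt-then-MAC): once the observed prefix of a frame matches a previously seen frame, the device optimistically protects the predicted remainder so ciphertext can be released before the full plaintext has arrived.

```python
import hmac
import hashlib


class PredictiveSender:
    """Toy bump-in-the-wire sender that pre-sends protected output from a predicted frame."""

    def __init__(self, key: bytes):
        self.key = key
        self.history = set()   # previously observed complete frames

    def predict(self, prefix: bytes):
        # Return a remembered frame that starts with the observed prefix, if any.
        for frame in self.history:
            if frame.startswith(prefix):
                return frame
        return None

    def protect(self, plaintext: bytes) -> bytes:
        # Stand-in for encrypt-then-MAC; a real device would use a stream cipher
        # so protected bytes can be released incrementally.
        tag = hmac.new(self.key, plaintext, hashlib.sha256).digest()[:8]
        return plaintext + tag

    def on_bytes(self, prefix: bytes):
        # Called as bytes trickle in: pre-send if the rest of the frame is predictable.
        guess = self.predict(prefix)
        return self.protect(guess) if guess is not None else None

    def on_complete(self, plaintext: bytes) -> bytes:
        # Called when the full frame has arrived: learn it and protect it normally.
        self.history.add(plaintext)
        return self.protect(plaintext)


if __name__ == "__main__":
    dev = PredictiveSender(b"shared-key")
    dev.on_complete(b":010300000002FA\r\n")   # learn a Modbus/ASCII read request
    early = dev.on_bytes(b":0103")            # later, predict from the observed prefix
    print(early is not None)                  # protected output ready before the full frame
```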

    Distributed decision-making in electric power system transmission maintenance scheduling using Multi-Agent Systems (MAS)

    In this work, motivated by the need to coordinate transmission maintenance scheduling among a multiplicity of self-interested entities in the restructured power industry, a distributed decision support framework based on multiagent negotiation systems (MANS) is developed. An innovative risk-based transmission maintenance optimization procedure is introduced. Several models for linking condition monitoring information to the equipment's instantaneous failure probability are presented, which enable quantitative evaluation of the effectiveness of maintenance activities in terms of system cumulative risk reduction. Methodologies of statistical processing, equipment deterioration evaluation and time-dependent failure probability calculation are also described. A novel framework capable of facilitating distributed decision-making through multiagent negotiation is developed. A multiagent negotiation model is developed and illustrated that accounts for uncertainty and enables social rationality. Some issues of multiagent negotiation convergence and scalability are discussed. The relationships between agent-based negotiation and auction systems are also identified. A four-step MAS design methodology for constructing multiagent systems for power system applications is presented. A generic multiagent negotiation system, capable of inter-agent communication and distributed decision support through inter-agent negotiations, is implemented. A multiagent system framework for facilitating the automated integration of condition monitoring information and maintenance scheduling for power transformers is developed. Simulations of multiagent negotiation-based maintenance scheduling among several independent utilities are provided. The multiagent approach is shown to be a viable alternative to the traditional centralized optimization paradigm in today's deregulated environment. This multiagent system framework not only facilitates the decision-making among competing power system entities, but also provides a tool to use in studying competitive industry relative to monopolistic industry.
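
    A minimal sketch of the cumulative-risk idea in Python, assuming a Weibull deterioration model and perfect (age-resetting) maintenance; both are assumptions for illustration and do not reproduce the dissertation's actual failure-probability models. Risk over a planning horizon is the cumulative failure probability times the failure cost, and maintenance at time t_m reduces it by resetting the equipment's effective age.

```python
import math


def cum_failure_prob(t: float, eta: float = 10.0, beta: float = 2.5) -> float:
    # Weibull cumulative distribution: F(t) = 1 - exp(-(t / eta) ** beta).
    # eta (characteristic life) and beta (shape) are illustrative values.
    return 1.0 - math.exp(-((t / eta) ** beta))


def cumulative_risk(horizon: float, consequence_cost: float, maintain_at: float = None) -> float:
    # Risk = probability of failing within the horizon * cost of a failure.
    if maintain_at is None or maintain_at >= horizon:
        return consequence_cost * cum_failure_prob(horizon)
    # Assume perfect maintenance: survive to t_m, then restart as-new for the remainder.
    p_fail_before = cum_failure_prob(maintain_at)
    p_fail_after = (1.0 - p_fail_before) * cum_failure_prob(horizon - maintain_at)
    return consequence_cost * (p_fail_before + p_fail_after)


if __name__ == "__main__":
    base = cumulative_risk(horizon=8.0, consequence_cost=1e6)
    maintained = cumulative_risk(horizon=8.0, consequence_cost=1e6, maintain_at=4.0)
    print(f"risk without maintenance: {base:,.0f}")
    print(f"risk with maintenance at t=4: {maintained:,.0f}")
    print(f"cumulative risk reduction: {base - maintained:,.0f}")
```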

    A semi-formal comparison between the Common Object Request Broker Architecture (CORBA) and the Distributed Component Object Model (DCOM)

    The way in which application systems and software are built has changed dramatically over the past few years. This is mainly due to advances in hardware technology and programming languages, as well as the requirement to build better software application systems in less time. The importance of worldwide communication between systems is also growing exponentially. People are using network-based applications daily, communicating not only locally, but also globally. The Internet, the global network, therefore plays a significant role in the development of new software. Distributed object computing is one of the computing paradigms that promise to meet the need to develop client/server application systems that communicate over heterogeneous environments. This study, of limited scope, concentrates on one crucial element without which distributed object computing cannot be implemented. This element is the communication software, also called middleware, which allows objects situated on different hardware platforms to communicate over a network. Two of the most important middleware standards for distributed object computing today are the Common Object Request Broker Architecture (CORBA) from the Object Management Group, and the Distributed Component Object Model (DCOM) from Microsoft Corporation. Each of these standards is implemented in commercially available products, allowing distributed objects to communicate over heterogeneous networks. In studying each of the middleware standards, a formal way of comparing CORBA and DCOM is presented, namely meta-modelling. For each of these two distributed object infrastructures (middleware), meta-models are constructed. Based on this uniform and unbiased approach, a comparison of the two distributed object infrastructures is then performed. The results are given as a set of tables in which the differences and similarities of each distributed object infrastructure are exhibited. By adopting this approach, errors caused by misunderstanding or misinterpretation are minimised. Consequently, an accurate and unbiased comparison between CORBA and DCOM is made possible, which constitutes the main aim of this dissertation. Computing; M. Sc. (Computer Science).
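
    A minimal sketch of the model-driven comparison idea in Python (the concept-to-feature mappings below are commonly cited CORBA/DCOM correspondences, not the dissertation's actual meta-models): each infrastructure is described as a small model of named concepts, and the comparison table is generated from the two models rather than from informal reading.

```python
# Illustrative concept mappings; the dissertation's meta-models are richer.
CORBA = {
    "interface definition": "OMG IDL",
    "object reference": "IOR",
    "wire protocol": "GIOP/IIOP",
    "dynamic invocation": "DII",
}
DCOM = {
    "interface definition": "MIDL",
    "object reference": "OBJREF",
    "wire protocol": "ORPC (DCE RPC)",
    "dynamic invocation": "IDispatch",
}


def compare(model_a: dict, model_b: dict):
    # Tabulate both models concept by concept; "-" marks a concept one model lacks.
    rows = []
    for concept in sorted(set(model_a) | set(model_b)):
        rows.append((concept, model_a.get(concept, "-"), model_b.get(concept, "-")))
    return rows


if __name__ == "__main__":
    print(f"{'Concept':<22}{'CORBA':<16}{'DCOM'}")
    for concept, corba_feature, dcom_feature in compare(CORBA, DCOM):
        print(f"{concept:<22}{corba_feature:<16}{dcom_feature}")
```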