Deep Space Network information system architecture study
The purpose of this article is to describe an architecture for the Deep Space Network (DSN) information system in the years 2000-2010 and to provide guidelines for its evolution during the 1990s. The scope of the study extends from the front-end areas at the antennas to the end users (spacecraft teams, principal investigators, archival storage systems, and non-NASA partners). The architectural vision provides guidance for major DSN implementation efforts during the next decade. A strong motivation for the study is an expected dramatic improvement in information-system technologies, including computer processing, automation technology (including knowledge-based systems), networking and data transport, software and hardware engineering, and human-interface technology. The proposed Ground Information System has the following major features: a unified architecture from the front-end area to the end user; open-systems standards to achieve interoperability; DSN production of level 0 data; delivery of level 0 data from the Deep Space Communications Complex, if desired; dedicated telemetry processors for each receiver; security against unauthorized access and errors; and highly automated monitor and control.
The role of the host in a cooperating mainframe and workstation environment, volumes 1 and 2
In recent years, advances in computer systems have prompted a move from centralized computing, based on timesharing a large mainframe computer, to distributed computing, based on a connected set of engineering workstations. A major factor in this shift is the increased performance and lower cost of engineering workstations. The move from centralized to distributed computing has raised challenges associated with the residency of application programs within the system: in a combined system of multiple engineering workstations attached to a mainframe host, how should a system designer assign applications between the larger mainframe host and the smaller, yet powerful, workstations? The concepts related to real-time data processing are analyzed, and systems are presented that use a host mainframe and a number of engineering workstations interconnected by a local area network. In most cases, distributed systems can be classified as having a single function or multiple functions, and as executing programs in real time or non-real time. In a system of multiple computers, the degree of autonomy of the computers is important; a system with one master control computer generally differs in reliability, performance, and complexity from a system in which all computers share control. This research is concerned with establishing general criteria for software residency decisions (host or workstation) for a diverse yet coupled group of users (the clustered workstations) which may need a shared resource (the mainframe) to perform their functions.
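The residency question the abstract poses can be made concrete with a toy heuristic. The following sketch is illustrative only and is not taken from the study; the criteria names and weights are hypothetical assumptions.

```python
def residency(app):
    """Return 'host' or 'workstation' for an application profile.

    `app` is a dict with hypothetical criteria:
      - 'shared_data' : needs the mainframe's shared resource (bool)
      - 'realtime'    : hard real-time response required (bool)
      - 'cpu_demand'  : 0.0-1.0 relative processing load
      - 'interactive' : tight human-interface loop (bool)
    """
    host_score = 0
    if app.get("shared_data"):
        host_score += 2   # shared data favours central residency
    if app.get("cpu_demand", 0.0) > 0.8:
        host_score += 1   # very heavy loads favour the larger host
    if app.get("realtime"):
        host_score -= 1   # local execution avoids network latency
    if app.get("interactive"):
        host_score -= 2   # interactive work belongs near the user
    return "host" if host_score > 0 else "workstation"
```

A data-heavy batch job that needs the shared resource scores toward the host, while an interactive real-time task scores toward a workstation; a real set of criteria would of course be derived from measured workloads, as the research describes.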
Java operating systems: design and implementation
Journal Article

Language-based extensible systems such as Java use type safety to provide memory safety in a single address space. Memory safety alone, however, is not sufficient to protect different applications from each other. Such systems must support a process model that enables the control and management of computational resources. In particular, language-based extensible systems must support resource control mechanisms analogous to those in standard operating systems: they must support the separation of processes and limit their use of resources, while still supporting safe and efficient interprocess communication.
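The resource-control idea described above, a per-process budget that bounds consumption so one application cannot starve another, can be sketched minimally as follows. This is my own illustration of the general mechanism, not the paper's design (the class and method names are hypothetical):

```python
class ResourceBudget:
    """Per-process accounting for one resource (e.g. bytes of heap)."""

    def __init__(self, limit):
        self.limit = limit   # maximum units this process may hold
        self.used = 0

    def allocate(self, n):
        """Charge n units; reject the request if it exceeds the budget."""
        if self.used + n > self.limit:
            raise MemoryError("per-process resource limit exceeded")
        self.used += n

    def release(self, n):
        """Return n units to the budget."""
        self.used = max(0, self.used - n)
```

In a language-based system, the runtime would consult such an accounting object on every allocation attributed to a process, enforcing separation without hardware address spaces.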
Multicast communications in distributed systems
PhD Thesis

One of the numerous results of recent developments in communication networks and distributed systems has been an increased interest in the study of applications and protocols for communications between multiple, as opposed to single, entities such as processes and computers. For example, in replicated file storage, a process attempts to store a file on several file servers, rather than one. Multiple-entity communications, which allow one-to-many and many-to-one communications, are known as multicast communications.

This thesis examines some of the ways in which the architectures of computer networks and distributed systems can affect the design and development of multicast communication applications and protocols. To assist in this examination, the thesis presents three contributions. First, a set of classification schemes is developed for use in the description and analysis of various multicast communication strategies. Second, a general set of multicast communication primitives is presented, unrelated to any specific network or distributed system, yet efficiently implementable on a variety of networks. Third, the primitives are used to obtain experimental results for a study of intranetwork and internetwork multicast communications.

Postgraduate Scholarship: the Natural Sciences and Engineering Research Council of Canada. Overseas Research Student Award: the Committee of Vice-Chancellors and Principals of the Universities of the United Kingdom.
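The one-to-many primitive at the heart of multicast communication can be modelled in a few lines. The sketch below is an in-process illustration of the general idea (join a group, send once, deliver to all other members), not the thesis's actual primitives:

```python
class MulticastGroup:
    """Toy one-to-many multicast: one send() reaches every member."""

    def __init__(self):
        self._members = {}          # member name -> delivery callback

    def join(self, name, deliver):
        self._members[name] = deliver

    def leave(self, name):
        self._members.pop(name, None)

    def send(self, sender, message):
        """Deliver message to all members other than the sender;
        return the names of the members reached."""
        delivered = []
        for name, deliver in self._members.items():
            if name != sender:
                deliver(message)
                delivered.append(name)
        return delivered
```

The replicated-file-storage example from the abstract maps onto this directly: the client joins a group with the file servers and issues a single store request, rather than one request per server. A real implementation would layer this on a network transport (e.g. IP multicast) rather than in-process callbacks.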
An improved infrastructure for the IceCube realtime system
The IceCube realtime alert system has been operating since 2016. It provides prompt alerts on high-energy neutrino events to the astroparticle physics community. The localization regions for the incoming directions of neutrinos are published through NASA's Gamma-ray Coordinates Network (GCN). The IceCube realtime system consists of infrastructure dedicated to the selection of alert events, the reconstruction of their topology and arrival direction, the calculation of directional uncertainty contours, and the distribution of the event information through public alert networks. Using a message-based workflow management system, a dedicated software service (SkyDriver) provides a representational state transfer (REST) interface to parallelized reconstruction algorithms. In this contribution, we outline the improvements to the internal infrastructure of the IceCube realtime system that aim to streamline the internal handling of neutrino events, their distribution to the SkyDriver interface, the collection of the reconstruction results, and their conversion into human- and machine-readable alerts to be publicly distributed through different alert networks. An approach for the long-term storage and cataloging of alert events according to the findability, accessibility, interoperability, and reusability (FAIR) principles is outlined.

Comment: Presented at the 38th International Cosmic Ray Conference (ICRC2023). See arXiv:2307.13047 for all IceCube contributions. 8 pages, 3 figures.
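The event flow the abstract describes (selection, reconstruction via the SkyDriver interface, conversion into a public alert) can be sketched as a simple pipeline. Everything below is a hypothetical stand-in: the thresholds, field names, and the hard-coded reconstruction result are illustrative assumptions, not IceCube's actual software or data.

```python
def select_alert_events(events, energy_threshold=100.0):
    """Keep only candidate events above an illustrative energy cut."""
    return [e for e in events if e["energy_tev"] >= energy_threshold]

def reconstruct(event):
    """Stand-in for submitting an event to the SkyDriver REST interface
    and collecting the parallelized reconstruction result; the direction
    and uncertainty values here are placeholders."""
    return {**event, "ra": 77.4, "dec": 5.7, "err_deg": 0.5}

def to_alert(event):
    """Convert a reconstructed event into a machine-readable alert record
    suitable for distribution through a public alert network."""
    return {
        "type": "neutrino-alert",
        "ra": event["ra"],
        "dec": event["dec"],
        "err_deg": event["err_deg"],
    }

# One pass through the pipeline: only the high-energy event survives
# selection and is turned into an alert.
events = [{"energy_tev": 290.0}, {"energy_tev": 12.0}]
alerts = [to_alert(reconstruct(e)) for e in select_alert_events(events)]
```

In the real system these stages are decoupled by a message-based workflow manager rather than direct function calls, so reconstruction can run in parallel and results are collected asynchronously.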