13 research outputs found

    Navigation/traffic control satellite mission study. Volume 3 - System concepts

    Satellite network for air traffic control, solar flare warning, and collision avoidance

    DEPENDABILITY BENCHMARKING OF NETWORK FUNCTION VIRTUALIZATION

    Network Function Virtualization (NFV) is an emerging networking paradigm that aims to reduce costs and time-to-market, improve manageability, and foster competition and innovative services. NFV exploits virtualization and cloud computing technologies to turn physical network functions into Virtualized Network Functions (VNFs), which are implemented in software and run as Virtual Machines (VMs) on commodity hardware located in high-performance data centers, namely Network Function Virtualization Infrastructures (NFVIs). The NFV paradigm relies on cloud computing and virtualization technologies to provide carrier-grade services, i.e., services that are highly reliable and available, with fast and automatic failure recovery mechanisms. The availability of many virtualization solutions for NFV raises the question of which virtualization technology should be adopted for NFV in order to fulfill these requirements. Currently, there are limited solutions for analyzing, in quantitative terms, the performance and reliability trade-offs that are important concerns for the adoption of NFV. This thesis deals with the assessment of the reliability and performance of NFV systems. It proposes a methodology, which includes context, measures, and faultloads, for conducting dependability benchmarks in NFV according to the general principles of dependability benchmarking. To this aim, a fault injection framework has been designed and implemented for the virtualization technologies used as case studies in this thesis. This framework is successfully used to conduct an extensive experimental campaign, in which we compare two candidate virtualization technologies for NFV adoption: the commercial, hypervisor-based virtualization platform VMware vSphere, and the open-source, container-based virtualization platform Docker. These technologies are assessed in the context of a high-availability, NFV-oriented IP Multimedia Subsystem (IMS). The analysis of the experimental results reveals that i) fault management mechanisms are crucial in NFV in order to provide accurate failure detection and to start the subsequent failover actions, and ii) fault injection proves to be a valuable way to introduce uncommon scenarios in the NFVI, which can be fundamental for providing a highly reliable service in production.
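    The experimental workflow described in this abstract — inject a fault into a virtualized function, then measure how long the fault-management mechanism takes to notice — can be illustrated with a minimal sketch. All names and the polling model here are invented for illustration, not taken from the thesis's framework:

    ```python
    import random

    class Replica:
        """Toy model of a VNF replica that can be crash-injected."""
        def __init__(self, name):
            self.name = name
            self.alive = True

        def inject_crash(self):
            # Fault injection: force the replica into a failed state.
            self.alive = False

    def run_campaign(replicas, detection_interval_s=1.0):
        """One fault-injection experiment per replica: crash it, then
        model a periodic health checker that detects the crash on its
        next poll, so detection latency is uniform over one interval."""
        latencies = {}
        for r in replicas:
            r.inject_crash()
            latencies[r.name] = random.uniform(0.0, detection_interval_s)
        return latencies

    replicas = [Replica(f"vnf-{i}") for i in range(3)]
    latencies = run_campaign(replicas)
    ```

    A real benchmark would replace `inject_crash` with hypervisor- or container-level fault injection and read detection latency from the monitored system's own failover logs.
    
    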

    Reconfigurable G and C computer study for space station use. Volume 2 - Final technical report Final report, 29 Dec. 1969 - 31 Jan. 1971

    Design and development of reconfigurable guidance and control computer for space station applications - Vol.

    Fifth Conference on Artificial Intelligence for Space Applications

    The Fifth Conference on Artificial Intelligence for Space Applications brings together diverse technical and scientific work in order to help those who employ AI methods in space applications to identify common goals and to address issues of general interest in the AI community. Topics include the following: automation for Space Station; intelligent control, testing, and fault diagnosis; robotics and vision; planning and scheduling; simulation, modeling, and tutoring; development tools and automatic programming; knowledge representation and acquisition; and knowledge base/data base integration

    Risk management of student-run small satellite programs

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2007. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 145-148). This paper proposes an approach for failure mode identification in university-affiliated, small satellite programs. These small programs have a unique set of risks due to many factors, including a typically inexperienced workforce, limited corporate knowledge, and a high student turnover rate. Only those risks unique to small, student-run satellite programs are presented. Technical risks and mitigation strategies of student and industry satellites are also discussed. Additionally, several risk management strategies are explored, and the advantages and disadvantages of these risk-related tools and techniques are examined. To aid the process of risk identification in these particular programs, a master logic diagram (MLD) for small satellites was created to help identify potential initiating events that could lead to failures during the mission. To validate the MLD, a case study and multiple experiments are presented and analyzed. This master logic diagram approach is shown to provide an effective method of risk identification that can be easily adapted to small, student-run satellite programs. By Elizabeth Deems. S.M.
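    A master logic diagram is essentially a tree that decomposes a top-level mission failure into lower-level conditions, with candidate initiating events at the leaves. A minimal sketch of that structure, with failure labels invented for illustration (not taken from the thesis's MLD):

    ```python
    class MLDNode:
        """Node in a toy master logic diagram: a failure condition,
        optionally decomposed into contributing sub-conditions."""
        def __init__(self, label, children=()):
            self.label = label
            self.children = list(children)

    def initiating_events(node):
        """Depth-first walk returning the leaf labels: the candidate
        initiating events a risk-identification review would examine."""
        if not node.children:
            return [node.label]
        events = []
        for child in node.children:
            events.extend(initiating_events(child))
        return events

    mld = MLDNode("Mission failure", [
        MLDNode("Loss of power", [
            MLDNode("Solar panel deployment failure"),
            MLDNode("Battery cell short"),
        ]),
        MLDNode("Loss of communication", [
            MLDNode("Antenna deployment failure"),
        ]),
    ])

    events = initiating_events(mld)
    ```

    Enumerating the leaves gives a checklist that an inexperienced student team can review item by item, which is the property the thesis credits the MLD approach with.
    
    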

    Scalability of RAID systems

    RAID systems (Redundant Arrays of Inexpensive Disks) have dominated backend storage systems for more than two decades and have grown continuously in size and complexity. Currently they face unprecedented challenges from data intensive applications such as image processing, transaction processing and data warehousing. As the size of RAID systems increases, designers are faced with both performance and reliability challenges. These challenges include limited back-end network bandwidth, physical interconnect failures, correlated disk failures and long disk reconstruction time. This thesis studies the scalability of RAID systems in terms of both performance and reliability through simulation, using a discrete event driven simulator for RAID systems (SIMRAID) developed as part of this project. SIMRAID incorporates two benchmark workload generators, based on the SPC-1 and Iometer benchmark specifications. Each component of SIMRAID is highly parameterised, enabling it to explore a large design space. To improve the simulation speed, SIMRAID develops a set of abstraction techniques to extract the behaviour of the interconnection protocol without losing accuracy. Finally, to meet the technology trend toward heterogeneous storage architectures, SIMRAID develops a framework that allows easy modelling of different types of device and interconnection technique. Simulation experiments were first carried out on performance aspects of scalability. They were designed to answer two questions: (1) given a number of disks, which factors affect back-end network bandwidth requirements; (2) given an interconnection network, how many disks can be connected to the system. The results show that the bandwidth requirement per disk is primarily determined by workload features and stripe unit size (a smaller stripe unit size has better scalability than a larger one), with cache size and RAID algorithm having very little effect on this value. 
The maximum number of disks is limited, as would be expected, by the back-end network bandwidth. Studies of reliability have led to three proposals to improve the reliability and scalability of RAID systems. Firstly, a novel data layout called PCDSDF is proposed. PCDSDF combines the advantages of orthogonal data layouts and parity declustering data layouts, so that it can not only survive multiple disk failures caused by physical interconnect failures or correlated disk failures, but also offers good degraded-mode and rebuild performance. The generating process of PCDSDF is deterministic and time-efficient. The number of stripes per rotation (namely the number of stripes needed to achieve rebuild workload balance) is small. Analysis shows that the PCDSDF data layout can significantly improve system reliability. Simulations performed on SIMRAID confirm the good performance of PCDSDF, which is comparable to other parity declustering data layouts, such as RELPR. Secondly, a system architecture and rebuilding mechanism have been designed, aimed at fast disk reconstruction. This architecture is based on parity declustering data layouts and a disk-oriented reconstruction algorithm. It uses stripe groups instead of stripes as the basic distribution unit so that it can make use of the sequential nature of the rebuilding workload. The design space of system factors such as parity declustering ratio, chunk size, private buffer size of surviving disks and free buffer size is explored to provide guidelines for storage system design. Thirdly, an efficient distributed hot spare allocation and assignment algorithm for general parity declustering data layouts has been developed. This algorithm avoids conflict problems in the process of assigning distributed spare space for the units on the failed disk. Simulation results show that it effectively solves the write bottleneck problem and, at the same time, causes only a small increase in the average response time to user requests.
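    The core idea behind parity declustering — placing each stripe on a different subset of a larger disk array so that rebuild reads after a failure are shared by all survivors — can be sketched with a simple combinatorial layout. This is a generic balanced layout for illustration, not the PCDSDF construction from the thesis:

    ```python
    from itertools import combinations

    def declustered_layout(n_disks, stripe_width):
        """Assign each stripe to one `stripe_width`-disk subset, cycling
        through every subset once per rotation, so each disk carries the
        same number of stripe units and rebuild work is spread evenly."""
        return list(combinations(range(n_disks), stripe_width))

    layout = declustered_layout(n_disks=5, stripe_width=3)

    # Balance: every disk participates in the same number of stripes.
    counts = [sum(1 for stripe in layout if d in stripe) for d in range(5)]

    # Declustering: when disk 0 fails, the stripes it belonged to
    # touch every surviving disk, so no single survivor bottlenecks.
    survivors_hit = {d for stripe in layout if 0 in stripe
                       for d in stripe if d != 0}
    ```

    With 5 disks and 3-disk stripes there are 10 stripes per rotation and each disk appears in 6 of them; practical layouts like PCDSDF aim for the same balance with far fewer stripes per rotation.
    
    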

    AAS/GSFC 13th International Symposium on Space Flight Dynamics

    This conference proceedings preprint includes papers and abstracts presented at the 13th International Symposium on Space Flight Dynamics. Cosponsored by American Astronautical Society and the Guidance, Navigation and Control Center of the Goddard Space Flight Center, this symposium featured technical papers on a wide range of issues related to orbit-attitude prediction, determination, and control; attitude sensor calibration; attitude dynamics; and mission design

    Programming Languages and Systems

    This open access book constitutes the proceedings of the 29th European Symposium on Programming, ESOP 2020, which was planned to take place in Dublin, Ireland, in April 2020, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2020. The actual ETAPS 2020 meeting was postponed due to the COVID-19 pandemic. The papers deal with fundamental issues in the specification, design, analysis, and implementation of programming languages and systems.

    Distributed on-line safety monitor based on safety assessment model and multi-agent system

    On-line safety monitoring, i.e. the tasks of fault detection and diagnosis, alarm annunciation, and fault control, is essential in the operational phase of critical systems. Over the last 30 years, considerable work in this area has resulted in approaches that exploit models of the normal operational behaviour and failure of a system. Typically, these models incorporate on-line knowledge of the monitored system and enable qualitative and quantitative reasoning about the symptoms, causes and possible effects of faults. Recently, monitors that exploit knowledge derived from the application of off-line safety assessment techniques have been proposed. The motivation for that work has been the observation that, in current practice, vast amounts of knowledge derived from off-line safety assessments cease to be useful following the certification and deployment of a system. The concept is potentially very useful. However, the monitors that have been proposed so far are limited in their potential because they are monolithic and centralised, and therefore have limited applicability in systems that have a distributed nature and incorporate large numbers of components that interact collaboratively in dynamic cooperative structures. On the other hand, recent work on multi-agent systems shows that the distributed reasoning paradigm could cope with the nature of such systems. This thesis proposes a distributed on-line safety monitor which combines the benefits of using knowledge derived from off-line safety assessments with the benefits of the distributed reasoning of a multi-agent system. The monitor consists of a multi-agent system incorporating a number of Belief-Desire-Intention (BDI) agents which operate on a distributed monitoring model that contains reference knowledge derived from off-line safety assessments. Guided by the monitoring model, agents are hierarchically deployed to observe the operational conditions across various levels of the hierarchy of the monitored system and work collaboratively to integrate and deliver safety monitoring tasks. These tasks include detection of parameter deviations, diagnosis of underlying causes, alarm annunciation and application of fault corrective measures. In order to avoid alarm avalanches and latent misleading alarms, the monitor optimises alarm annunciation by suppressing unimportant and false alarms, filtering spurious sensory measurements and incorporating helpful alarm information that is announced at the correct time. The thesis discusses the relevant literature, describes the structure and algorithms of the proposed monitor, and shows through experiments the benefits of the monitor, which range from increasing the composability, extensibility and flexibility of on-line safety monitoring to ultimately developing an effective and cost-effective monitor. The approach is evaluated in two case studies, and in the light of the results the thesis discusses both the limitations of the approach and its relative merits compared to earlier safety monitoring concepts.
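    The alarm-optimisation behaviour the abstract describes — filtering spurious one-off readings and suppressing repeated annunciations of an already-active alarm — can be sketched with a toy monitoring agent. The threshold logic and parameter names below are invented for illustration and are not the thesis's algorithm:

    ```python
    class MonitorAgent:
        """Toy agent that annunciates an alarm only after a parameter
        deviates for `confirm` consecutive samples (filtering spurious
        spikes) and never re-announces an alarm that is still active."""
        def __init__(self, limit, confirm=2):
            self.limit = limit
            self.confirm = confirm
            self.streak = 0        # consecutive out-of-limit samples
            self.active = False    # alarm currently annunciated?

        def observe(self, value):
            """Return True only when a new alarm should be annunciated."""
            if value > self.limit:
                self.streak += 1
                if self.streak >= self.confirm and not self.active:
                    self.active = True
                    return True
            else:
                self.streak = 0
                self.active = False
            return False

    agent = MonitorAgent(limit=100.0)
    readings = [90, 120, 95, 110, 130, 125, 105]
    alarms = [agent.observe(v) for v in readings]
    ```

    The single spike to 120 is filtered as spurious, and the sustained deviation from 110 onwards raises exactly one alarm rather than one per sample; in the distributed monitor, each hierarchically deployed agent would apply this kind of policy to its own level of the system.
    
    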