663 research outputs found

    Prototype of Fault Adaptive Embedded Software for Large-Scale Real-Time Systems

    This paper describes a comprehensive prototype of large-scale fault adaptive embedded software developed for the proposed Fermilab BTeV high energy physics experiment. Lightweight self-optimizing agents embedded within Level 1 of the prototype are responsible for proactive and reactive monitoring and mitigation based on specified layers of competence. The agents are self-protecting, detecting cascading failures using a distributed approach. Adaptive, reconfigurable, and mobile objects for reliability are designed to be self-configuring and to adapt automatically to dynamically changing environments. These objects provide a self-healing layer with the ability to discover, diagnose, and react to discontinuities in real-time processing. A generic modeling environment was developed to facilitate design and implementation of hardware resource specifications, application data flow, and failure mitigation strategies. Level 1 of the planned BTeV trigger system alone will consist of 2500 DSPs, so the number of components and the intractable fault scenarios involved make it impossible to design an “expert system” that applies traditional centralized mitigation strategies based on rules capturing every possible system state. Instead, a distributed reactive approach is implemented using the tools and methodologies developed by the Real-Time Embedded Systems group.
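
    As an illustration only (not drawn from the paper), the sketch below shows what a lightweight per-node agent with layered, escalating mitigation might look like; the names, thresholds, and mitigation actions are invented for this example.

    # Hypothetical sketch of a monitoring agent with layers of competence,
    # loosely in the spirit of the proactive/reactive agents described above.
    # Thresholds and actions are illustrative, not BTeV values.
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class MitigationLayer:
        """One layer of competence: a trigger on a metric plus a local action."""
        name: str
        triggered: Callable[[float], bool]   # does this layer fire for the metric?
        act: Callable[[], str]               # local, reactive mitigation

    class NodeAgent:
        """Per-node agent: tries layers in order and escalates to a
        neighbouring agent only when no local layer handles the fault."""

        def __init__(self, node_id: str, layers: List[MitigationLayer]):
            self.node_id = node_id
            self.layers = layers

        def handle(self, queue_occupancy: float) -> str:
            for layer in self.layers:
                if layer.triggered(queue_occupancy):
                    return f"{self.node_id}: {layer.act()}"
            return f"{self.node_id}: escalate to neighbouring agent"

    agent = NodeAgent(
        "dsp-0042",
        [
            MitigationLayer("throttle", lambda q: q > 0.9, lambda: "drop low-priority events"),
            MitigationLayer("rebalance", lambda q: q > 0.7, lambda: "migrate work to a spare DSP"),
        ],
    )
    print(agent.handle(0.95))  # queue nearly full -> local throttling
    print(agent.handle(0.50))  # no local layer applies -> escalate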

    A Taxonomy of Data Grids for Distributed Data Sharing, Management and Processing

    Data Grids have been adopted as the platform for scientific communities that need to share, access, transport, process and manage large data collections distributed worldwide. They combine high-end computing technologies with high-performance networking and wide-area storage management techniques. In this paper, we discuss the key concepts behind Data Grids and compare them with other data sharing and distribution paradigms such as content delivery networks, peer-to-peer networks and distributed databases. We then provide comprehensive taxonomies that cover various aspects of architecture, data transportation, data replication, and resource allocation and scheduling. Finally, we map the proposed taxonomy to various Data Grid systems, not only to validate the taxonomy but also to identify areas for future exploration. Through this taxonomy, we aim to categorise existing systems to better understand their goals and their methodology, which helps evaluate their applicability to similar problems. The taxonomy also provides a "gap analysis" of the area, through which researchers can identify new issues for investigation. We also hope that the proposed taxonomy and mapping give new practitioners an easy way to understand this complex area of research. (Comment: 46 pages, 16 figures, Technical Report)
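
    As an illustration only (the axis names paraphrase the abstract and are not the paper's exact categories), the sketch below encodes the kind of mapping such a taxonomy enables, classifying a hypothetical grid middleware along a few of the dimensions mentioned above.

    # Illustrative encoding of a taxonomy mapping; all values are invented.
    from dataclasses import dataclass

    @dataclass
    class DataGridClassification:
        organization: str     # e.g. hierarchical, federated, hybrid
        data_transport: str   # e.g. bulk transfer, overlay-based streaming
        replication: str      # e.g. static, dynamic, economy-driven
        scheduling: str       # e.g. data-aware, compute-only, coupled

    # Classifying a hypothetical middleware against these axes.
    example = DataGridClassification(
        organization="hierarchical",
        data_transport="parallel-stream bulk transfer",
        replication="dynamic, popularity-driven",
        scheduling="data-aware job placement",
    )
    print(example)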

    Assessing the Integrity Level of Machine Protection Systems, from Risk Analysis to Reliability Data

    Through the application of several different methods of analysis, it was clearly shown that a risk analysis conducted on an industrial machine, taking the human factor into account from the beginning, has a significant impact on the SIL assigned to each Safety Integrity Function (SIF). The same phenomenon occurs in the verification and computation of the performance level for each device of the press. By comparing these methods and the results obtained through a further case study carried out during the spring at Trinity College Dublin, it was possible to define a useful logical model of analysis to describe and assess the risks related to the human interface, and to verify and calculate the safety integrity level of the machine's safety functions. This methodology is a combination of Integrated Dynamic Decision Analysis (IDDA) and the Technique for Human Error Rate Prediction (THERP). The application of the decision analysis was made possible through a careful reconstruction of the operating procedure for the use of the press, using an ad hoc Failure Mode and Effects Analysis (FMEA) template. The system under study was described by IDDA as a sequence of random events, where the probability values derived from the THERP model for human error and from the calculation method set out by the relevant standard (EN IEC 62061) for dangerous failures of safety devices. This integrated approach makes it possible to account for human factors in greater detail and in a quantitative way, describing where and why the operator can cheat or bypass the safety system, unlike other risk-assessment methods that can only identify where the man-machine interface should be analyzed in depth.
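
    As a minimal sketch only (the thesis methodology integrating IDDA and THERP is far richer), the example below shows how a dangerous-failure rate per hour maps onto SIL bands in the high-demand mode of IEC 61508 / EN IEC 62061, and how a human-error contribution of the kind THERP quantifies can dominate the result; all numeric inputs are invented.

    # Sketch: SIL banding from a dangerous failure rate per hour (PFH_D),
    # high-demand mode.  Inputs are invented, not taken from the thesis.
    def sil_from_pfh(pfh_d: float) -> int:
        """Return the SIL band for a dangerous failure rate per hour."""
        if pfh_d < 1e-8:
            return 4
        if pfh_d < 1e-7:
            return 3
        if pfh_d < 1e-6:
            return 2
        if pfh_d < 1e-5:
            return 1
        return 0  # does not reach SIL 1

    # Invented example: hardware channel of a press guard vs. an operator
    # by-pass scenario expressed as an equivalent hourly error rate.
    hardware_pfh = 3.0e-8        # dangerous hardware failures per hour (assumed)
    human_bypass_pfh = 5.0e-7    # THERP-style human error contribution (assumed)

    print(sil_from_pfh(hardware_pfh))                     # 3: hardware alone
    print(sil_from_pfh(hardware_pfh + human_bypass_pfh))  # 2: human factor degrades the claim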

    A Literature Survey on Resource Management Techniques, Issues and Challenges in Cloud Computing

    Cloud computing is a large-scale distributed computing paradigm that provides on-demand services for clients. Cloud clients use web browsers, mobile apps, thin clients, or terminal emulators to request and control their cloud resources at any time and from anywhere over the network. As more companies shift their data to the cloud and more people become aware of the advantages of storing data there, the growing number of cloud computing infrastructures and the large volumes of data involved increase management complexity for cloud providers. We surveyed state-of-the-art resource management techniques for IaaS (Infrastructure as a Service) in cloud computing, and then put forward the major issues in the deployment of cloud infrastructure that must be addressed to avoid poor service delivery.
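
    As an illustration only (the policy, host sizes, and requests are invented, not taken from the survey), the sketch below makes one family of IaaS resource-management techniques concrete: a first-fit placement of VM requests onto hosts.

    # Toy first-fit VM placement; a real cloud resource manager would also
    # handle memory, migration, overcommit, and scale-out.
    from typing import Dict, List, Optional

    def first_fit(request_cores: int, hosts: List[Dict]) -> Optional[str]:
        """Place a VM on the first host with enough free cores, else reject."""
        for host in hosts:
            if host["free_cores"] >= request_cores:
                host["free_cores"] -= request_cores
                return host["name"]
        return None  # would trigger queueing or scale-out in practice

    hosts = [{"name": "h1", "free_cores": 8}, {"name": "h2", "free_cores": 16}]
    for vm_cores in (4, 8, 8, 16):
        print(vm_cores, "->", first_fit(vm_cores, hosts))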

    Coordinated Fault-Tolerance for High-Performance Computing Final Project Report

    • …