228 research outputs found

    Development of a Security Methodology for Cooperative Information Systems: The CooPSIS Project

    Since networks and computing systems are vital components of today's life, it is of utmost importance to endow them with the capability to survive physical and logical faults, as well as malicious or deliberate attacks. When an information system is obtained by federating pre-existing local systems, a methodology is needed to integrate security policies and mechanisms under a uniform structure. Therefore, in building distributed information systems, a methodology for the analysis, design and implementation of the security requirements of data and processes is essential for obtaining mutual trust between cooperating organizations. Moreover, when the information system is built as a cooperative set of e-services, security is related to the type of data, to the sensitivity context of the cooperative processes and to the security characteristics of the communication paradigms. The CoopSIS (Cooperative Secure Information Systems) project aims to develop methods and tools for the analysis, design, implementation and evaluation of secure and survivable distributed information systems of a cooperative type, with experimentation in particular in the public administration domain. This paper presents the basic issues of a methodology being conceived to build a trusted cooperative environment, where data sensitivity parameters and the security requirements of processes are taken into account. The milestone phases of the security development methodology in the context of this project are illustrated.
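
    As a hedged illustration of how data sensitivity parameters and the security characteristics of a communication channel might jointly drive a policy decision, consider the minimal Python sketch below; all names (Sensitivity, is_access_allowed, the channel flag) are invented for illustration and are not part of the CoopSIS methodology itself.

    # Hypothetical sketch: a policy check combining data sensitivity with the
    # security characteristics of the communication channel.
    from enum import IntEnum

    class Sensitivity(IntEnum):
        PUBLIC = 0
        INTERNAL = 1
        CONFIDENTIAL = 2
        SECRET = 3

    def is_access_allowed(data_sensitivity: Sensitivity,
                          clearance: Sensitivity,
                          channel_is_encrypted: bool) -> bool:
        """Allow access only if the requester's clearance covers the data and
        sensitive data never crosses an unprotected channel."""
        if clearance < data_sensitivity:
            return False
        if data_sensitivity >= Sensitivity.CONFIDENTIAL and not channel_is_encrypted:
            return False
        return True

    # A confidential record requested over an encrypted channel is allowed.
    print(is_access_allowed(Sensitivity.CONFIDENTIAL, Sensitivity.SECRET, True))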

    Operability of Mobile Agent Applications in a Protected Environment

    There is a shift toward increasingly heterogeneous networks in today's communications environment. Such diversity requires that network operators have greater experience and increased training. Managing these diverse networks, especially in institutions, requires the collection of large quantities of data from a dependable network, which must be analyzed before any management activity can be commenced. In this research, we examine the operability of mobile agents in a protected network environment.

    Reconfigurable middleware architectures for large scale sensor networks

    Wireless sensor networks, in an effort to be energy efficient, typically lack the high-level abstractions of advanced programming languages. Though stark, the dichotomy between these two paradigms can be overcome. The SENSIX software framework, described in this dissertation, uniquely integrates constraint-dominated wireless sensor networks with the flexibility of object-oriented programming models, without violating the principles of either. Though these two computing paradigms are contradictory in many ways, SENSIX bridges them to yield a dynamic middleware abstraction unifying low-level resource-aware task reconfiguration and high-level object recomposition. Through the layered approach of SENSIX, the software developer creates a domain-specific sensing architecture by defining a customized task specification and utilizing object inheritance. In addition, SENSIX performs better at large scales (on the order of 1000 nodes or more) than other sensor network middleware that does not include such unified facilities for vertical integration.
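
    The following Python sketch illustrates, under assumed names (SensingTask, TemperatureAlarmTask, read_temperature_sensor), the general idea of deriving a domain-specific sensing task through object inheritance; it is not SENSIX's actual API, which layers this abstraction over resource-constrained nodes.

    from abc import ABC, abstractmethod

    def read_temperature_sensor() -> float:
        return 22.5  # stub standing in for a node's hardware driver call

    class SensingTask(ABC):
        """Generic task specification: the middleware calls sample() each duty cycle."""
        def __init__(self, period_ms: int):
            self.period_ms = period_ms

        @abstractmethod
        def sample(self) -> float:
            ...

        def should_report(self, value: float) -> bool:
            return True  # default behaviour: report every sample

    class TemperatureAlarmTask(SensingTask):
        """Domain-specific refinement: report only threshold crossings to save radio energy."""
        def __init__(self, period_ms: int, threshold_c: float):
            super().__init__(period_ms)
            self.threshold_c = threshold_c

        def sample(self) -> float:
            return read_temperature_sensor()

        def should_report(self, value: float) -> bool:
            return value > self.threshold_c

    task = TemperatureAlarmTask(period_ms=5000, threshold_c=30.0)
    value = task.sample()
    print(value, task.should_report(value))  # 22.5 False: below threshold, stay silent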

    Online failure prediction in air traffic control systems

    This thesis introduces a novel approach to online failure prediction for mission-critical distributed systems whose distinctive features are that it is black-box, non-intrusive and online. The approach combines Complex Event Processing (CEP) and Hidden Markov Models (HMM) to analyze symptoms of failures that may appear as anomalous conditions of performance metrics identified for this purpose. The thesis presents an architecture named CASPER, based on CEP and HMM, that relies only on information sniffed from the communication network of a mission-critical system to predict anomalies that can lead to software failures. An instance of CASPER has been implemented, trained and tuned to monitor a real Air Traffic Control (ATC) system developed by Selex ES, a Finmeccanica company. An extensive experimental evaluation of CASPER is presented. The obtained results show (i) a very low percentage of false positives under both normal and stress conditions, and (ii) a failure prediction time long enough for the system to apply appropriate recovery procedures.
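
    To make the HMM side of such a predictor concrete, here is a minimal Python sketch that scores a window of discretized symptom symbols against a "safe" and an "unsafe" model using the standard forward algorithm; the model parameters below are invented for illustration, whereas CASPER trains its models on traffic sniffed from the ATC network.

    def forward_likelihood(obs, pi, A, B):
        """Standard HMM forward algorithm: returns P(obs | model)."""
        n = len(pi)
        alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
        for t in range(1, len(obs)):
            alpha = [sum(alpha[j] * A[j][i] for j in range(n)) * B[i][obs[t]]
                     for i in range(n)]
        return sum(alpha)

    # Observation symbols: 0 = metrics nominal, 1 = mildly anomalous, 2 = anomalous.
    SAFE   = dict(pi=[0.9, 0.1], A=[[0.95, 0.05], [0.30, 0.70]],
                  B=[[0.85, 0.10, 0.05], [0.40, 0.40, 0.20]])
    UNSAFE = dict(pi=[0.5, 0.5], A=[[0.60, 0.40], [0.10, 0.90]],
                  B=[[0.30, 0.40, 0.30], [0.05, 0.35, 0.60]])

    window = [0, 1, 1, 2, 2, 2]  # symptoms extracted by the CEP stage
    if forward_likelihood(window, **UNSAFE) > forward_likelihood(window, **SAFE):
        print("failure predicted: trigger recovery procedures")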

    Design and implementation of resilient systems using a component-based approach

    Evolution during service life is mandatory, particularly for long-lived systems. Dependable systems, which continuously deliver trustworthy services, must evolve to accommodate changes, e.g., new fault tolerance requirements or variations in available resources. The addition of this evolutionary dimension to dependability leads to the notion of resilient computing. Among the various aspects of resilience, we focus on adaptivity. Dependability relies on fault-tolerant computing at runtime, applications being augmented with fault tolerance mechanisms (FTMs). As such, on-line adaptation of FTMs is a key challenge towards resilience. In related work, on-line adaptation of FTMs is most often performed in a preprogrammed manner or consists in tuning some parameters. Besides, FTMs are replaced monolithically: all the envisaged FTMs must be known at design time and deployed from the beginning. However, dynamics occurs along multiple dimensions, and developing a system for the worst-case scenario is impossible. Based on runtime observations, new FTMs can be developed off-line and integrated on-line. We denote this ability as agile adaptation, as opposed to preprogrammed adaptation. In this thesis, we present an approach for developing flexible fault-tolerant systems in which FTMs can be adapted at runtime in an agile manner through fine-grained modifications that minimize the impact on the initial architecture. We first propose a classification of a set of existing FTMs based on criteria such as fault model, application characteristics and necessary resources. Next, we analyze these FTMs and extract a generic execution scheme which pinpoints the common parts and the variable features between them. Then, we demonstrate the use of state-of-the-art tools and concepts from the field of software engineering, such as component-based software engineering and reflective component-based middleware, for developing a library of fine-grained adaptive FTMs. We evaluate the agility of the approach and illustrate its usability through two examples of integrating the library: first, in a design-driven development process for applications in pervasive computing and, second, in a toolkit for developing applications for wireless sensor networks (WSNs).
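
    A minimal Python sketch of such a generic execution scheme with swappable variation points follows; the component names are hypothetical and do not reproduce the thesis's component-based middleware.

    # Common FTM skeleton with two variation points (detection, recovery) that
    # can be swapped at runtime without touching the rest of the mechanism.
    class FTM:
        def __init__(self, detector, recovery):
            self.detector = detector
            self.recovery = recovery

        def replace(self, detector=None, recovery=None):
            """Fine-grained, agile adaptation: swap only the changed component."""
            if detector: self.detector = detector
            if recovery: self.recovery = recovery

        def execute(self, request, service):
            reply = service(request)
            if self.detector(request, reply):
                return self.recovery(request, service)
            return reply

    # Two interchangeable variation points.
    crash_detector = lambda req, rep: rep is None
    retry_recovery = lambda req, service: service(req)

    ftm = FTM(crash_detector, retry_recovery)
    flaky_calls = iter([None, "ack"])  # first call fails, the retry succeeds
    print(ftm.execute("msg", lambda req: next(flaky_calls)))  # -> "ack"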

    Intrusion Tolerance: Concepts and Design Principles. A Tutorial

    In traditional dependability, fault tolerance has been the workhorse of the many solutions published over the years. Classical security-related work has, on the other hand, privileged, with few exceptions, intrusion prevention, or intrusion detection without systematic forms of processing the intrusion symptoms. A new approach has slowly emerged during the past decade and gained impressive momentum recently: intrusion tolerance. The purpose of this tutorial is to explain the underlying concepts and design principles. The tutorial reviews previous results in the light of intrusion tolerance (IT), introduces the fundamental ideas behind IT, and presents recent advances of the state of the art, coming from European and US research efforts devoted to IT. The program of the tutorial addresses: a review of the dependability and security background; an introduction to the fundamental concepts of intrusion tolerance (IT); intrusion-aware fault models; intrusion prevention; intrusion detection; IT strategies and mechanisms; design methodologies for IT systems; and examples of IT systems and protocols.
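
    One classic mechanism in the IT repertoire is masking intrusions through replication and voting: with 2f+1 diverse replicas, up to f compromised replies are outvoted. The short Python sketch below illustrates the quorum idea generically; it is not code from the tutorial.

    from collections import Counter

    def voted_reply(replies, f):
        """Return the value reported by at least f+1 of the 2f+1 replicas,
        or None if no value reaches the quorum (detected but not masked)."""
        assert len(replies) == 2 * f + 1
        value, count = Counter(replies).most_common(1)[0]
        return value if count >= f + 1 else None

    # One of three replicas is compromised (f = 1): the intrusion is masked.
    print(voted_reply(["balance=100", "balance=100", "balance=999999"], f=1))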

    Integrating legacy mainframe systems: architectural issues and solutions

    For more than 30 years, mainframe computers have been the backbone of computing systems throughout the world. Even today it is estimated that some 80% of the world's data is held on such machines. However, new business requirements and pressure from evolving technologies, such as the Internet, are pushing these existing systems to their limits and they are reaching breaking point. The banking and financial sectors in particular have relied on mainframes the longest to do their business, and as a result it is they that feel these pressures the most. In recent years various solutions have emerged for re-engineering these legacy systems. It quickly became clear that completely rewriting them was not feasible, so various integration strategies emerged. Of these, the CORBA standard by the Object Management Group emerged as the strongest, providing a standards-based solution that enabled mainframe applications to become peers in a distributed computing environment. However, the requirements did not stop there. The mainframe systems were reliable, secure, scalable and fast, so any integration strategy had to ensure that the new distributed systems did not lose any of these benefits. Various patterns, or general solutions, for meeting these requirements have arisen, and this research looks at applying some of them to mainframe-based CORBA applications. The purpose of this research is to examine some of the issues involved in making mainframe-based legacy applications inter-operate with newer object-oriented technologies.
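
    The following Python sketch illustrates the wrapping style such integration strategies rely on: a record-oriented mainframe transaction hidden behind an object-oriented facade so distributed clients see a modern interface. The fixed-width record layout and all names here are invented for illustration; a real CORBA solution would declare the facade's interface in IDL instead.

    class LegacyTransactionGateway:
        """Stand-in for the mainframe transaction monitor (e.g., a CICS bridge)."""
        def call(self, record: str) -> str:
            # Echo a canned fixed-width reply: transaction code + balance field.
            return record[:8] + "0001250075"

    class AccountService:
        """Object-oriented facade: translates method calls into legacy records."""
        def __init__(self, gateway: LegacyTransactionGateway):
            self.gateway = gateway

        def get_balance(self, account_id: str) -> float:
            request = f"BALINQ  {account_id:>12}"  # fixed-width request record
            reply = self.gateway.call(request)
            return int(reply[8:18]) / 100          # parse the packed balance field

    print(AccountService(LegacyTransactionGateway()).get_balance("42"))  # 12500.75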

    INSENS: Intrusion-tolerant routing for wireless sensor networks

    This paper describes an INtrusion-tolerant routing protocol for wireless SEnsor NetworkS (INSENS). INSENS securely and efficiently constructs tree-structured routing for wireless sensor networks (WSNs). The key objective of an INSENS network is to tolerate damage caused by an intruder who has compromised deployed sensor nodes and is intent on injecting, modifying, or blocking packets. To limit or localize the damage caused by such an intruder, INSENS incorporates distributed lightweight security mechanisms, including efficient one-way hash chains and nested keyed message authentication codes that defend against wormhole attacks, as well as multipath routing. Adapting to WSN characteristics, the design of INSENS also pushes complexity away from resource-poor sensor nodes towards resource-rich base stations. An enhanced single-phase version of INSENS scales to large networks, integrates bidirectional verification to defend against rushing attacks, accommodates multipath routing to multiple base stations, enables secure joining and leaving, and incorporates a novel pairwise key setup scheme based on transitory global keys that is more resilient than LEAP. Simulation results are presented to demonstrate and assess the tolerance of INSENS to various attacks launched by an adversary. A prototype implementation of INSENS over a network of MICA2 motes is presented to evaluate the costs incurred.
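
    A minimal Python sketch of the one-way hash chain primitive mentioned above follows; it shows only the commit-and-disclose idea (the sender commits to the end of the chain and later releases preimages in reverse order, each verifiable with one cheap hash), not INSENS's full key setup or routing protocol.

    import hashlib

    def h(x: bytes) -> bytes:
        return hashlib.sha256(x).digest()

    def make_chain(seed: bytes, n: int) -> list:
        """Return [seed, h(seed), ..., h^n(seed)]; the last element is the commitment."""
        chain = [seed]
        for _ in range(n):
            chain.append(h(chain[-1]))
        return chain

    chain = make_chain(b"base-station-secret", n=100)
    commitment = chain[-1]  # preloaded on every sensor node before deployment

    # Round 1: the base station discloses the next preimage; a node verifies
    # authenticity with a single hash and slides the commitment forward.
    disclosed = chain[-2]
    assert h(disclosed) == commitment
    commitment = disclosed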

    Quality assessment technique for ubiquitous software and middleware

    The new paradigm of computing or information systems is ubiquitous computing. Its technology-oriented issues have led researchers to pay much attention to feasibility studies of the technologies rather than to building quality assurance indices or guidelines. In this context, measuring quality is the key to developing high-quality ubiquitous computing products. For this reason, various quality models have been defined, adopted and enhanced over the years; for example, the recognised standard quality model (ISO/IEC 9126) is the result of a consensus on a software quality model with three levels: characteristics, sub-characteristics, and metrics. However, it is very unlikely that this scheme is directly applicable to ubiquitous computing environments, which differ considerably from conventional software, and much attention is therefore being given to reformulating existing methods and, especially, to elaborating new assessment techniques for ubiquitous computing environments. This paper selects appropriate quality characteristics for the ubiquitous computing environment, which can be used as the quality target for both ubiquitous computing product evaluation processes and development processes. Further, each of the quality characteristics has been expanded with evaluation questions and metrics, and in some cases with measures. In addition, this quality model has been applied in an industrial ubiquitous computing setting. The results show that while the approach is sound, some parts need further development in the future.
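
    As a hedged illustration of the three-level scheme (characteristics, sub-characteristics, metrics), the following Python sketch aggregates normalized metric measurements into a characteristic score; the weights and metric names are invented for illustration, not those derived in the paper.

    # Illustrative three-level quality model: characteristic -> sub-characteristics
    # -> weighted metrics, with measured values normalized to [0, 1].
    quality_model = {
        "reliability": {
            "fault tolerance": {"recovery_success_rate": 0.6, "mttr_score": 0.4},
            "maturity":        {"defect_density_score": 1.0},
        },
    }

    measurements = {
        "recovery_success_rate": 0.92,
        "mttr_score": 0.75,
        "defect_density_score": 0.80,
    }

    def characteristic_score(characteristic: str) -> float:
        subs = quality_model[characteristic]
        # Average the sub-characteristics, each a weighted sum of its metrics.
        return sum(
            sum(w * measurements[m] for m, w in metrics.items())
            for metrics in subs.values()
        ) / len(subs)

    print(f"reliability = {characteristic_score('reliability'):.2f}")  # 0.83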