
    Autonomic computing architecture for SCADA cyber security

    Cognitive computing refers to intelligent computing platforms built on artificial intelligence, machine learning, and other innovative technologies. These technologies can be used to design systems that mimic the human brain, learning about their environment and autonomously predicting impending anomalous situations. IBM first used the term ‘Autonomic Computing’ in 2001 to combat the looming complexity crisis (Ganek and Corbi, 2003). The concept is inspired by the human biological autonomic system. An autonomic system is self-healing, self-regulating, self-optimising and self-protecting (Ganek and Corbi, 2003). Such a system should therefore be able to protect itself against both malicious attacks and unintended operator mistakes.
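    As a concrete illustration of the self-protecting loop such an architecture implies, the sketch below (ours, not the paper's) monitors a simulated telemetry stream, learns a statistical baseline, and isolates the node when a reading deviates anomalously. The names read_sensor and isolate_node, the window size, and the threshold are all hypothetical stand-ins.

```python
# Minimal sketch of an autonomic (monitor/analyse/plan/execute) loop for a
# SCADA node: learn a baseline from telemetry and self-protect on anomalies.
# read_sensor, isolate_node and the thresholds are hypothetical stand-ins.
import random
import statistics
import time

BASELINE_WINDOW = 100        # readings used to learn "normal" behaviour
ANOMALY_SIGMAS = 3.0         # deviation that triggers self-protection

def read_sensor() -> float:
    # Hypothetical stand-in for a real SCADA telemetry source.
    return random.gauss(50.0, 2.0)

def isolate_node() -> None:
    # Hypothetical stand-in for a self-protecting action.
    print("anomaly detected: isolating node")

def autonomic_loop(cycles: int = 500) -> None:
    history: list[float] = []
    for _ in range(cycles):
        value = read_sensor()                          # Monitor
        if len(history) >= BASELINE_WINDOW:
            mean = statistics.mean(history)
            stdev = statistics.stdev(history) or 1e-9  # Analyse
            if abs(value - mean) > ANOMALY_SIGMAS * stdev:
                isolate_node()                         # Plan + Execute
        history.append(value)
        history = history[-BASELINE_WINDOW:]           # sliding baseline
        time.sleep(0.01)

autonomic_loop()
```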

    Stochastic model checking for predicting component failures and service availability

    When a component fails in a critical communications service, how urgent is a repair? If we repair within 1 hour, 2 hours, or n hours, how does this affect the likelihood of service failure? Can a formal model support assessing the impact, prioritisation, and scheduling of repairs in the event of component failures, and the forecasting of maintenance costs? These are some of the questions posed to us by a large organisation, and here we report on our experience of developing a stochastic framework based on a discrete space model and temporal logic to answer them. We define and explore standard steady-state and transient temporal logic properties concerning the likelihood of service failure within given time bounds and the forecasting of maintenance costs, and we introduce a new concept of envelopes of behaviour that quantify the effect of the status of lower-level components on service availability. The resulting model is highly parameterised, and user interaction for experimentation is supported by a lightweight, web-based interface.
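    The abstract does not give the underlying model, but the kind of transient property it poses ("if we repair within n hours, how likely is service failure?") can be sketched over a toy three-state discrete-time Markov chain. The chain, its rates, and the assumption that a second fault downs the service are illustrative choices of ours, not the paper's model.

```python
# Toy transient property: P(service failed within t hours) as a function of
# the per-hour repair probability, over a 3-state discrete-time Markov chain.
import numpy as np

def failure_within(t_hours: int, p_fail: float, p_repair: float) -> float:
    # States: 0 = OK, 1 = degraded (one component down), 2 = service failed.
    p_second = 0.05                  # chance a second fault downs the service
    assert p_repair + p_second <= 1.0
    P = np.array([
        [1.0 - p_fail, p_fail,                    0.0     ],
        [p_repair,     1.0 - p_repair - p_second, p_second],
        [0.0,          0.0,                       1.0     ],  # absorbing, so
    ])                                    # the result is "within t", not "at t"
    dist = np.array([1.0, 0.0, 0.0])      # start fully healthy
    for _ in range(t_hours):
        dist = dist @ P                   # one-hour transition steps
    return float(dist[2])

# Faster repair lowers the transient failure probability:
for p_repair in (0.9, 0.5, 0.125):        # roughly: repair in ~1, ~2, ~8 hours
    print(f"p_repair={p_repair}: P(fail within 24h) = "
          f"{failure_within(24, 0.01, p_repair):.4f}")
```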

    Development of a Security Methodology for Cooperative Information Systems: The CooPSIS Project

    Since networks and computing systems are vital components of today's life, it is of utmost importance to endow them with the capability to survive physical and logical faults, as well as malicious or deliberate attacks. When an information system is obtained by federating pre-existing local systems, a methodology is needed to integrate security policies and mechanisms under a uniform structure. Therefore, in building distributed information systems, a methodology for the analysis, design and implementation of the security requirements of data and processes is essential for obtaining mutual trust between cooperating organizations. Moreover, when the information system is built as a cooperative set of e-services, security is related to the type of data, to the sensitivity context of the cooperative processes, and to the security characteristics of the communication paradigms. The CoopSIS (Cooperative Secure Information Systems) project aims to develop methods and tools for the analysis, design, implementation and evaluation of secure and survivable distributed information systems of a cooperative type, with particular experimentation in the Public Administration domain. This paper presents the basic issues of a methodology being conceived to build a trusted cooperative environment in which data sensitivity parameters and the security requirements of processes are taken into account. The milestone phases of the security development methodology in the context of this project are illustrated.
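    To make the idea of deriving security requirements from data sensitivity and communication characteristics concrete, here is an illustrative sketch of ours (not the CoopSIS methodology itself); the sensitivity labels, trust levels, and mechanism mapping are entirely hypothetical.

```python
# Illustrative mapping from data sensitivity and channel trust to the
# security mechanisms an inter-organization e-service exchange must use.
# Labels and mapping are hypothetical, not taken from the CoopSIS project.
from dataclasses import dataclass

@dataclass(frozen=True)
class Exchange:
    data_sensitivity: str    # "public" | "internal" | "confidential"
    channel_trust: str       # "trusted" | "untrusted"

def required_mechanisms(ex: Exchange) -> set[str]:
    mechanisms = {"authentication"}            # baseline for any federation
    if ex.data_sensitivity != "public":
        mechanisms.add("encryption-in-transit")
    if ex.data_sensitivity == "confidential":
        mechanisms |= {"end-to-end-encryption", "audit-logging"}
    if ex.channel_trust == "untrusted":
        mechanisms.add("mutual-tls")
    return mechanisms

print(required_mechanisms(Exchange("confidential", "untrusted")))
```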

    A Survivable Distributed Database against Byzantine Failure

    Distributed database systems have been very useful in making a wide range of information available to users across the world. However, there are growing security concerns arising from the use of distributed systems, particularly those attached to critical systems. More than ever before, data in distributed databases are susceptible to attacks, failures, or accidents, owing to the explosion of knowledge in network and database technologies. The imperfection of existing security mechanisms, coupled with heightened and growing concerns about intrusion, attack, compromise, or outright Byzantine failure, is a further contributing factor. The importance of survivable distributed databases, in the face of Byzantine failure, to other emerging technologies motivates this research. Furthermore, it has been observed that most existing work on distributed databases dwells only on maintaining data integrity and availability in the face of attack; few works address the availability or survivability of distributed databases in the face of internal factors such as internal sabotage or storage defects. In this paper, an architecture for entrenching the survivability of distributed databases against Byzantine failures is proposed. The architecture is based on re-creating the data of a failing database server once a set threshold value is reached. The proposed architecture is tested and found to improve the probability of survivability in the distributed database where it was implemented from 99.2% to 99.6%.
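    A minimal sketch of the threshold-triggered recovery idea the abstract describes follows; the health scoring, the threshold value, and the copy step are hypothetical stand-ins of ours, not the paper's architecture.

```python
# Threshold-triggered recovery sketch: when a replica's health score falls
# below a set threshold, its data is re-created on a standby server.
THRESHOLD = 0.8   # illustrative health threshold

class Replica:
    def __init__(self, name: str, health: float):
        self.name, self.health = name, health

def recreate_on_standby(failing: Replica, standby: Replica) -> None:
    # Hypothetical stand-in for copying the failing replica's data.
    print(f"re-creating data from {failing.name} on {standby.name}")
    standby.health = 1.0            # fresh copy assumed healthy

def monitor(replicas: list[Replica], standby: Replica) -> None:
    for r in replicas:
        if r.health < THRESHOLD:    # degraded / Byzantine behaviour detected
            recreate_on_standby(r, standby)

monitor([Replica("db1", 0.95), Replica("db2", 0.55)], Replica("spare", 1.0))
```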

    Recovery Model for Survivable System through Resource Reconfiguration

    A survivable system is able to fulfil its mission in a timely manner, even in the presence of attacks, failures, or accidents. It is not always possible to anticipate every type of attack, failure, or accident in a system, or to predict and protect against every such threat. Consequently, recovering from any damage caused by threats becomes an important consideration. This research proposes a recovery model to enhance system survivability. The model focuses on how to preserve the system and resume its critical services when an incident occurs, by reconfiguring the damaged critical service resources from available resources without affecting the stability and functioning of the system. Three critical requisite conditions govern this recovery model: the number of pre-empted non-critical service resources, the response time of resource allocation, and the cost of reconfiguration; these are used in several scenarios to find and re-allocate available resources for the reconfiguration. A brief specification in the Z language is also given as a preliminary proof before implementation. To validate the viability of the approach, two case studies of real-time systems, the delivery units of a post office and the computer system of a company, are provided to demonstrate the continued running of critical services. The adoption of fault tolerance and survivability using redundancy re-allocation in this recovery model is discussed from a new perspective. Compared with the closest work by other researchers, the model is shown to handle more than single faults and to reconfigure damaged resources with minimal disruption to other services.
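    The three conditions suggest a constrained selection problem, sketched below under our own assumptions: a greedy, cheapest-first plan that pre-empts non-critical resources for the damaged critical service while respecting limits on the pre-empted count and allocation response time. The Resource fields, limits, and greedy strategy are illustrative, not the paper's specification.

```python
# Greedy resource-reconfiguration sketch bounded by the abstract's three
# conditions: pre-empted count, allocation response time, and cost.
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    capacity: int         # units this resource can contribute
    alloc_time_s: float   # time to re-allocate it
    cost: float           # cost of pre-empting / reconfiguring it

def plan_reconfiguration(pool: list[Resource], needed: int,
                         max_preempted: int, max_time_s: float) -> list[Resource]:
    """Cheapest-first plan; returns [] if no feasible plan exists."""
    plan: list[Resource] = []
    got, total_time = 0, 0.0
    for r in sorted(pool, key=lambda r: r.cost):
        if got >= needed:
            break
        if len(plan) < max_preempted and total_time + r.alloc_time_s <= max_time_s:
            plan.append(r)
            got += r.capacity
            total_time += r.alloc_time_s
    return plan if got >= needed else []

pool = [Resource("batch-worker", 2, 1.0, 3.0),
        Resource("report-node", 1, 0.5, 1.0),
        Resource("cache-node", 2, 2.0, 2.0)]
print(plan_reconfiguration(pool, needed=3, max_preempted=2, max_time_s=3.0))
```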