
    Methodologies synthesis

    This deliverable deals with the modelling and analysis of interdependencies between critical infrastructures, focusing on two interdependent infrastructures studied in the context of CRUTIAL: the electric power infrastructure and the information infrastructures supporting management, control and maintenance functionality. The main objectives are: 1) to investigate the main challenges to be addressed in the analysis and modelling of interdependencies; 2) to review the modelling methodologies and tools that can be used to address these challenges and to support the evaluation of the impact of interdependencies on the dependability and resilience of the service delivered to users; and 3) to present the preliminary directions investigated so far by the CRUTIAL consortium for describing and modelling interdependencies.

    Security Analysis and Improvement Model for Web-based Applications

    Today the web has become a major conduit for information. As the World Wide Web's popularity continues to increase, information security on the web has become an increasing concern. Web information security is related to availability, confidentiality, and data integrity. According to reports from http://www.securityfocus.com in May 2006, operating systems accounted for 9% of vulnerabilities, web-based software systems for 61%, and other applications for 30%. In this dissertation, I present a security analysis model based on the Markov Process Model. Risk analysis is conducted using a fuzzy logic method and information entropy theory. In a web-based application system, security risk depends mostly on the current states of the software and hardware systems and is independent of the system's past states. Therefore, web-based applications can be approximately modeled by the Markov Process Model. A web-based application can be conceptually expressed through the Markov chain state space of discrete states (web_client_good; web_server_good, web_server_vulnerable, web_server_attacked, web_server_security_failed; database_server_good, database_server_vulnerable, database_server_attacked, database_server_security_failed). The vulnerable behavior and system response of web-based applications are analyzed in this dissertation. The analyses focus on functional availability-related aspects: the probability of reaching a particular security-failed state and the mean time to security failure of a system. A vulnerability risk index is classified into three levels as an indicator of the level of security (low, high, and failed). An illustrative application example is provided. As the second objective of this dissertation, I propose a security improvement model for web-based applications using GeoIP services and formal methods. In the security improvement model, web access is authenticated through role-based access control, using user logins, remote IP addresses, and physical locations as subject credentials, combined with the requested objects and privilege modes. Access control algorithms are developed for subjects, objects, and access privileges. A secure implementation architecture is presented. In summary, this dissertation develops a security analysis and improvement model for web-based applications. Future work will address validation of the Markov Process Model once security data become easier to collect, and the security improvement model will be evaluated with respect to performance.
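    Read as an absorbing Markov chain, the two availability figures named above (probability of reaching a security-failed state, mean time to security failure) follow from standard computations. The sketch below illustrates this for the web-server subset of the state space only; the transition probabilities are illustrative placeholders, not values from the dissertation.

```python
# Minimal sketch: absorbing Markov chain over the web-server states named in
# the abstract. Transition probabilities are illustrative assumptions.
import numpy as np

states = ["web_server_good", "web_server_vulnerable",
          "web_server_attacked", "web_server_security_failed"]

# Row-stochastic transition matrix P[i][j] = P(next = j | current = i).
P = np.array([
    [0.90, 0.08, 0.02, 0.00],   # good
    [0.30, 0.50, 0.15, 0.05],   # vulnerable
    [0.10, 0.20, 0.50, 0.20],   # attacked
    [0.00, 0.00, 0.00, 1.00],   # security_failed (absorbing)
])

Q = P[:3, :3]                         # transitions among transient states
N = np.linalg.inv(np.eye(3) - Q)      # fundamental matrix: expected visits
mean_time_to_failure = N.sum(axis=1)  # expected steps until security failure

# Probability of having reached the security-failed state within k steps,
# starting from the "good" state.
k = 50
p_failed_within_k = np.linalg.matrix_power(P, k)[0, 3]

print(dict(zip(states[:3], mean_time_to_failure)))
print(f"P(failure within {k} steps | start good) = {p_failed_within_k:.3f}")
```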

    Revised reference model

    This document contains an update of the HIDENETS Reference Model, whose preliminary version was introduced in D1.1. The Reference Model describes the overall approach to the development and assessment of end-to-end resilience solutions. As such, it presents a framework which, due to its abstraction level, is not restricted to the HIDENETS car-to-car and car-to-infrastructure applications and use cases. Starting from a condensed summary of the dependability terminology used, the network architecture, comprising the ad hoc and infrastructure domains, is presented together with the definition of the main networking elements and the software architecture of the mobile nodes. The concept of architectural hybridization and its inclusion in HIDENETS-like dependability solutions is described subsequently. A set of communication- and middleware-level services, following the architectural hybridization concept and motivated by the dependability and resilience challenges raised by HIDENETS-like scenarios, is then described. Besides architectural solutions, the Reference Model addresses the assessment of dependability solutions in HIDENETS-like scenarios using quantitative evaluations, realized by a combination of top-down and bottom-up modelling, as well as verification via test scenarios. In order to allow for fault prevention in the software development phase of HIDENETS-like applications, generic UML-based modelling approaches with a focus on dependability-related aspects are described. The HIDENETS Reference Model provides the framework in which the detailed solutions of the HIDENETS project are being developed, while at the same time facilitating the same task for non-vehicular scenarios and applications.

    Developing a distributed electronic health-record store for India

    The DIGHT project is addressing the problem of building a scalable and highly available information store for the Electronic Health Records (EHRs) of the over one billion citizens of India.

    A dependability framework for WSN-based aquatic monitoring systems

    Wireless Sensor Networks (WSN) are being progressively used in several application areas, particularly to collect data and monitor physical processes. Sensor nodes used in environmental monitoring applications, such as aquatic sensor networks, are often subject to harsh environmental conditions while monitoring complex phenomena. Non-functional requirements, like reliability, security or availability, are increasingly important and must be accounted for in application development. For that purpose, there is a large body of knowledge on dependability techniques for distributed systems, which provides a good basis for understanding how to satisfy these non-functional requirements in WSN-based monitoring applications. Given the data-centric nature of monitoring applications, it is of particular importance to ensure that data is reliable or, more generically, that it has the necessary quality. The problem of ensuring the desired quality of data for dependable monitoring using WSNs is studied herein. From a dependability-oriented perspective, the possible impairments to dependability and the prominent existing solutions to solve or mitigate these impairments are reviewed. Despite the variety of components that may form a WSN-based monitoring system, particular attention is given to understanding which faults can affect sensors, how they can affect the quality of the information, and how this quality can be improved and quantified. Open research issues for the specific case of aquatic monitoring applications are also discussed. One of the challenges in achieving dependable system behavior is to overcome the external disturbances affecting sensor measurements and to detect failure patterns in sensor data. This is a particular problem in environmental monitoring, due to the difficulty of distinguishing faulty behavior from the representation of a natural phenomenon. Existing solutions for failure detection assume that physical processes can be accurately modeled, or that deviations are large enough to be detected with coarse techniques, or, more commonly, that the sensor network is dense enough to provide value-redundant sensors. This thesis defines a new methodology for dependable data quality in environmental monitoring systems, aiming to detect faulty measurements and increase the quality of sensor data. The methodology is presented as a generically applicable design that can be employed on any environmental sensor network dataset. It is evaluated on several datasets from different WSNs, using machine learning to model the behavior of each sensor by exploiting the correlated data provided by neighboring sensors. Data fusion strategies are explored in order to effectively detect potential failures of each sensor and, simultaneously, to distinguish truly abnormal measurements from deviations due to natural phenomena. This is accomplished through the successful application of the methodology to detect and correct outlier, offset and drift failures in datasets from real monitoring networks. In the future, the methodology can be applied to optimize the data quality control processes of new and already operating monitoring networks, and to assist in network maintenance operations.
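    The neighbor-correlation idea can be illustrated with a short sketch: a regression model predicts one sensor from its correlated neighbors, and measurements whose residuals are anomalously large are flagged as candidate outlier, offset or drift failures. The column names, the choice of regressor and the threshold below are assumptions for illustration, not the thesis' actual pipeline.

```python
# Illustrative sketch (not the thesis' pipeline): model one sensor from its
# correlated neighbors and flag measurements whose residuals are anomalous.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

def flag_suspect_measurements(df: pd.DataFrame, target: str,
                              neighbors: list, k: float = 4.0) -> pd.Series:
    """Mark measurements of `target` that deviate from what its neighboring
    sensors predict (candidate outlier/offset/drift failures)."""
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(df[neighbors], df[target])
    residuals = df[target] - model.predict(df[neighbors])
    # Robust threshold: k times the median absolute deviation of the residuals.
    deviation = (residuals - residuals.median()).abs()
    return deviation > k * deviation.median()

# Hypothetical usage with water-temperature columns from neighboring buoys:
# suspects = flag_suspect_measurements(readings, "temp_buoy_3",
#                                      ["temp_buoy_1", "temp_buoy_2", "temp_buoy_4"])
```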