3,283 research outputs found

    Model checking a decentralized storage deduplication protocol

    Fifth Latin-American Symposium on Dependable Computing (LADC). Deduplication of live storage volumes in a cloud computing environment is better done by post-processing: by delaying the discovery and removal of duplicate data until after I/O requests have been concluded, the impact on latency can be minimized. Compared to traditional deduplication in backup systems, which can be done in-line and in a centralized fashion, distribution and concurrency lead to increased complexity. This paper outlines a deduplication algorithm for a typical cloud infrastructure with a common storage pool and summarizes how model checking with the TLA+ toolset was used to uncover and correct some subtle concurrency issues.
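
    As a rough illustration of the post-processing idea (not the algorithm verified in the paper), the sketch below assumes a content-addressed shared pool and a background pass that folds duplicate blocks into it after writes have completed; all names are hypothetical.

```python
import hashlib

# Hypothetical sketch of post-processing deduplication: writes complete
# immediately against private blocks, and a background pass later folds
# duplicate blocks into a shared, content-addressed pool.

class Volume:
    def __init__(self):
        self.blocks = {}   # logical offset -> raw bytes (not yet deduplicated)
        self.refs = {}     # logical offset -> content hash (deduplicated)

class DedupPool:
    def __init__(self):
        self.store = {}    # content hash -> block data
        self.refcount = {} # content hash -> number of referencing offsets

    def intern(self, data: bytes) -> str:
        """Store a block at most once and return its content hash."""
        h = hashlib.sha256(data).hexdigest()
        if h not in self.store:
            self.store[h] = data
        self.refcount[h] = self.refcount.get(h, 0) + 1
        return h

def write(volume: Volume, offset: int, data: bytes) -> None:
    # Foreground path: no hashing, no pool lookup, so request latency is unaffected.
    volume.blocks[offset] = data

def dedup_pass(volume: Volume, pool: DedupPool) -> None:
    # Background path: discover and remove duplicates after I/O has concluded.
    for offset, data in list(volume.blocks.items()):
        volume.refs[offset] = pool.intern(data)
        del volume.blocks[offset]   # the data now lives only in the shared pool

def read(volume: Volume, pool: DedupPool, offset: int) -> bytes:
    if offset in volume.blocks:     # written but not yet deduplicated
        return volume.blocks[offset]
    return pool.store[volume.refs[offset]]
```

    The subtle part, and what model checking with TLA+ is well suited to explore, is what happens when a foreground write races with the background pass; the single-threaded sketch above deliberately sidesteps that.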

    Achieving eventual leader election in WS-discovery

    Fifth Latin-American Symposium on Dependable Computing (LADC). The Devices Profile for Web Services (DPWS) provides the foundation for seamless deployment, autonomous configuration, and joint operation of various computing devices in environments ranging from simple personal multimedia setups and home automation to complex industrial equipment and large data centers. In particular, WS-Discovery provides dynamic rendezvous for clients and services embodied in such devices. Unfortunately, the failure detection implicit in this standard is very limited, both because it embodies static timing assumptions and because it omits liveness monitoring, leading to undesirable situations in demanding application scenarios. In this paper we identify these undesirable outcomes and propose an extension of WS-Discovery that allows failure detection to achieve eventual leader election, thus preventing them.
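
    A minimal sketch of the kind of liveness monitoring and eventual leader election the paper argues for, assuming heartbeat-style Hello announcements and an adaptive timeout; the class name, timeout policy, and lowest-id leader rule are illustrative assumptions, not the WS-Discovery extension proposed in the paper.

```python
import time

# Hypothetical sketch of eventual leader election layered on a WS-Discovery
# style membership view: every node periodically multicasts a Hello, peers
# that stop announcing themselves are suspected after a timeout, and the
# smallest id among non-suspected nodes is taken as the leader. This is a
# generic Omega-style construction, not the paper's actual protocol.

class EventualLeaderDetector:
    def __init__(self, my_id: str, timeout: float = 5.0):
        self.my_id = my_id
        self.timeout = timeout                       # grows when we suspect wrongly
        self.last_heard = {my_id: time.monotonic()}  # node id -> last Hello seen

    def on_hello(self, node_id: str) -> None:
        """Liveness monitoring: record every Hello announcement we receive."""
        if node_id in self.last_heard and self._suspected(node_id):
            # A node we suspected turned out to be alive: back off the timeout
            # so that, eventually, correct nodes are no longer falsely suspected.
            self.timeout *= 2
        self.last_heard[node_id] = time.monotonic()

    def _suspected(self, node_id: str) -> bool:
        return time.monotonic() - self.last_heard[node_id] > self.timeout

    def leader(self) -> str:
        """Smallest id among the nodes currently believed to be alive."""
        alive = [n for n in self.last_heard
                 if n == self.my_id or not self._suspected(n)]
        return min(alive)
```

    Growing the timeout on a false suspicion is what gives the "eventual" guarantee: after some point, correct nodes stop being wrongly suspected and all processes converge on the same live leader.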

    Contents EATCS bulletin number 55, February 1995


    "Security Model for a Central Bank in Latin America using Blockchain"

    "Banking institutions in Latin America are the target of increasingly sophisticated and advanced cyber-attacks and threats, which increase every year and leave substantial economic losses, due to the high level of global interconnection and digitization of their operations. The objective of this work is to design a model to guarantee information security in a Central Bank in Latin America using Blockchain technology. Exploratory research, observation and inductive and deductive methods are used to propose Blockchain solutions in a Central Bank. The results are a model for secure transactions in Blockchain, Smart Contract functions and a data management process. It was concluded that the security model for a central bank provides high level of information management and storage of transactions in a secure and immutable way.

    Comparative Analysis Of Fault-Tolerance Techniques For Space Applications

    Fault-tolerance techniques enable a system or application to continue working even if a fault or error occurs. It is therefore vital to choose the fault-tolerance technique best suited to the application. For real-time embedded systems in a space project, the importance of such techniques becomes even more critical: in space applications there is little or no possibility of maintenance, and the occurrence of faults may lead to serious consequences in terms of partial or complete mission failure. This paper compares various fault-tolerance techniques for space applications and discusses the suitability of each technique in particular scenarios. The study of fault-tolerance techniques relevant to real-time embedded systems and on-board space applications (satellites) is given due importance. The study not only summarizes fault-tolerance techniques but also describes their strengths, and it outlines future trends of fault-tolerance techniques in space applications. This effort may help space system engineers and scientists select a suitable fault-tolerance technique for their mission.
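
    The abstract does not detail individual techniques, but a classic example of the kind of technique such comparisons cover is triple modular redundancy (TMR); the sketch below is a generic illustration under that assumption, not taken from the paper.

```python
from collections import Counter

# Illustrative sketch of triple modular redundancy (TMR): run three replicas
# of a computation and let a majority voter mask a single faulty result.

def tmr(replicas, *args):
    """Run three redundant implementations and return the majority result."""
    results = [replica(*args) for replica in replicas]
    value, votes = Counter(results).most_common(1)[0]
    if votes < 2:
        # No majority: all three replicas disagree, so the fault cannot be
        # masked and must be escalated (e.g. retry, enter safe mode).
        raise RuntimeError("TMR voter found no majority: " + repr(results))
    return value

# Example: the second replica suffers a simulated bit flip but is outvoted.
healthy = lambda x: x * 2
faulty = lambda x: (x * 2) ^ 0x01      # simulated single-event upset
print(tmr([healthy, faulty, healthy], 21))   # prints 42
```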

    Uso de riscos na validação de sistemas baseados em componentes (Use of risks in the validation of component-based systems)

    Advisors: Eliane Martins, Henrique Santos do Carmo Madeira. Doctoral thesis - Universidade Estadual de Campinas, Instituto de Computação. Doutorado (Doutor em Ciência da Computação).
    Abstract (translated from the Portuguese resumo): Modern society is increasingly dependent on the services provided by computers and, consequently, on the software executed to provide those services. Given the growing tendency to develop software products from reusable components, the dependability of the software, that is, the assurance that the software will work properly, rests on the dependability of the components being integrated. Components are usually acquired from third parties or produced by other development teams, so the criteria used in the components' testing phase are rarely available. This lack of information, together with the fact that a component is used in a system and computational environment it was not produced for, makes component reuse a risk for the integrating system. Traditional studies of the risk of a software component define two factors that characterize risk: the probability that a fault exists in the component and the impact it causes on the computational system. This work proposes the use of risk analysis to select injection and monitoring points for fault-injection campaigns. It also proposes an experimental approach to evaluate the risk a component poses to a system. To estimate the probability that a fault exists in the component, software metrics were combined in a statistical model. The impact of the manifestation of a fault in the system was estimated experimentally using fault injection. With this approach, risk evaluation becomes generic and repeatable, grounded on well-defined measurements. The methodology can therefore be used as a risk benchmark for components and can be applied when it is necessary to choose, among several components that provide the same functionality, the best component for a computational system. The results obtained by applying this approach in case studies allowed us to choose the best component, considering the users' various objectives and needs.
    Abstract: Today's societies have become increasingly dependent on information services. A corollary is that we have also become increasingly dependent on the computer software products that provide such services. The increasing tendency of software development to employ reusable components means that software dependability has become even more reliant on the dependability of the integrated components. Components are usually acquired from third parties or developed by unknown development teams, so the criteria employed in the testing phase of component-based systems are hardly ever available. This lack of information, coupled with the use of components that were not specifically developed for a particular system and computational environment, makes component reutilization risky for the integrating system. Traditional studies on the risk of software components suggest that two aspects must be considered when risk assessment is performed, namely the probability of a residual fault in a software component, and the probability of such a fault being activated and its impact on the computational system. The present work proposes the use of risk analysis to select the injection and monitoring points for fault-injection campaigns. It also proposes an experimental approach to evaluate the risk a particular component may represent to a system. In order to determine the probability of a residual fault in the component, software metrics are combined in a statistical model. The impact of fault activation is estimated using fault injection. Through this experimental approach, risk evaluation becomes replicable and grounded on well-defined measurements. In this way, the methodology can be used as a components' risk benchmark, and can be employed when it is necessary to choose the most suitable among several functionally similar components for a particular computational system. The results obtained in the application of this approach to specific case studies allowed us to choose the best component in each case, without jeopardizing the diverse objectives and needs of their users.
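
    A rough sketch of the two-factor risk estimate described above, with the fault probability taken from a logistic combination of code metrics and the impact from fault-injection outcomes; the metric weights, severity scale, and function names are illustrative assumptions, not the thesis's calibrated model.

```python
import math

# Hypothetical sketch of the two-factor risk estimate:
#   risk = P(residual fault in the component) x impact of activating it.
# Weights and severity values below are illustrative assumptions.

def fault_probability(metrics: dict, weights: dict, bias: float = -4.0) -> float:
    """Combine static code metrics (e.g. cyclomatic complexity, KLOC) in a
    logistic model to estimate the probability of a residual fault."""
    score = bias + sum(weights[m] * metrics[m] for m in weights)
    return 1.0 / (1.0 + math.exp(-score))

def impact(injection_outcomes: list, severity: dict) -> float:
    """Average severity observed over a fault-injection campaign, where each
    outcome is a label such as 'no_effect', 'wrong_output' or 'crash'."""
    return sum(severity[o] for o in injection_outcomes) / len(injection_outcomes)

def component_risk(metrics, weights, outcomes, severity) -> float:
    return fault_probability(metrics, weights) * impact(outcomes, severity)

# Using the risk score as a benchmark between two functionally equivalent
# components, as the thesis suggests (numbers are made up):
weights = {"cyclomatic": 0.08, "loc_kilo": 0.9}
severity = {"no_effect": 0.0, "wrong_output": 0.6, "crash": 1.0}
risk_a = component_risk({"cyclomatic": 25, "loc_kilo": 3.0}, weights,
                        ["no_effect", "wrong_output", "crash"], severity)
risk_b = component_risk({"cyclomatic": 12, "loc_kilo": 1.2}, weights,
                        ["no_effect", "no_effect", "wrong_output"], severity)
print("choose component B" if risk_b < risk_a else "choose component A")
```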