
    A Method for Patching Interleaving-Replay Attacks in Faulty Security Protocols

    The verification of security protocols has attracted a lot of interest in the formal methods community, yielding two main verification approaches: i) state exploration, e.g. FDR [Gavin Lowe. Breaking and fixing the Needham-Schroeder public-key protocol using FDR. In TACAS'96: Proceedings of the Second International Workshop on Tools and Algorithms for Construction and Analysis of Systems, pages 147–166, London, UK, 1996. Springer-Verlag] and OFMC [D.A. Basin, S. Mödersheim, and L. Viganò. An on-the-fly model-checker for security protocol analysis. In D. Gollmann and E. Snekkenes, editors, ESORICS'03: 8th European Symposium on Research in Computer Security, number 2808 in Lecture Notes in Computer Science, pages 253–270, Gjøvik, Norway, 2003. Springer-Verlag]; and ii) theorem proving, e.g. the Isabelle inductive method [Lawrence C. Paulson. The inductive approach to verifying cryptographic protocols. Journal of Computer Security, 6(1-2):85–128, 1998] and Coral [G. Steel, A. Bundy, and M. Maidl. Attacking the Asokan-Ginzboorg protocol for key distribution in an ad-hoc Bluetooth network using Coral. In H. König, M. Heiner, and A. Wolisz, editors, IFIP TC6/WG 6.1: Proceedings of the 23rd IFIP International Conference on Formal Techniques for Networked and Distributed Systems, volume 2767, pages 1–10, Berlin, Germany, 2003. FORTE 2003 (work in progress papers)]. Complementing formal methods, Abadi and Needham's principles aim to guide the design of security protocols so as to make them simple and, hopefully, correct [M. Abadi and R. Needham. Prudent engineering practice for cryptographic protocols. IEEE Transactions on Software Engineering, 22(1):6–15, 1996]. We are interested in a problem related to verification but far less explored: the correction of faulty security protocols. Experience has shown that the analysis of counterexamples or failed proof attempts often holds the key to completing proofs and to correcting a faulty model. In this paper, we introduce a method for patching faulty security protocols that are susceptible to an interleaving-replay attack. Our method uses Abadi and Needham's principles of prudent engineering practice for cryptographic protocols to guide both the localisation of the fault in a protocol and the proposition of candidate patches. We have tested our method with encouraging results. The test set includes 21 faulty security protocols borrowed from the Clark-Jacob library [J. Clark and J. Jacob. A survey of authentication protocol literature: Version 1.0. Technical report, Department of Computer Science, University of York, November 1997. A complete specification of the Clark-Jacob library in CAPSL is available at http://www.cs.sri.com/millen/capsl/].
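
    As a concrete illustration of the kind of fault and patch this abstract refers to, the sketch below (Python, with hypothetical helper names; it is not the authors' tool) recalls the classic interleaving-replay situation in the Needham-Schroeder public-key protocol. Lowe's fix adds the responder's name to message 2, in line with Abadi and Needham's explicitness principle, and a deliberately crude check flags encrypted messages that name no participant.

```python
# Illustrative sketch only: the Needham-Schroeder public-key protocol (NSPK),
# Lowe's patched variant, and a rough "name names explicitly" check inspired by
# Abadi-Needham's principles. All names and heuristics here are assumptions.

from dataclasses import dataclass


@dataclass(frozen=True)
class Enc:
    """A message encrypted under the public key of `reader`."""
    reader: str
    payload: tuple


def nspk(a, b, na, nb):
    """Original NSPK (vulnerable to Lowe's interleaving/man-in-the-middle attack)."""
    return [
        Enc(b, (na, a)),      # 1. A -> B: {Na, A}pk(B)
        Enc(a, (na, nb)),     # 2. B -> A: {Na, Nb}pk(A)   <- B's name is missing
        Enc(b, (nb,)),        # 3. A -> B: {Nb}pk(B)
    ]


def nspk_lowe(a, b, na, nb):
    """Lowe's patch: message 2 explicitly names B, so A can detect a relayed reply."""
    return [
        Enc(b, (na, a)),
        Enc(a, (na, nb, b)),  # 2'. B -> A: {Na, Nb, B}pk(A)
        Enc(b, (nb,)),
    ]


def violates_explicitness(msgs, participants):
    """Flag encrypted messages whose payload mentions no participant name,
    a rough reading of the 'name names explicitly' principle."""
    return [i + 1 for i, m in enumerate(msgs)
            if not any(p in m.payload for p in participants)]


if __name__ == "__main__":
    print(violates_explicitness(nspk("A", "B", "Na", "Nb"), {"A", "B"}))       # [2, 3]
    print(violates_explicitness(nspk_lowe("A", "B", "Na", "Nb"), {"A", "B"}))  # [3]
```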

    A Systematic Framework for Radio Frequency Identification (RFID) Hazard Mitigation in the Blood Transfusion Supply Chain from Donation to Distribution

    The RFID Consortium is developing what will be the first FDA-approved use of radio frequency identification (RFID) technology to identify, track, manage, and monitor blood throughout the entire blood transfusion supply chain. The iTrace™ is an innovative technological system designed to optimize the procedures currently employed when tracing blood from the donor to the recipient. As with all novel technologies, it is essential to consider not only the advantages but also the potential harms that may arise from using the system. The deployment of the iTrace™ consists of two phases: 1) Phase One, application of the iTrace™ from the donor to blood center distribution, and 2) Phase Two, application of the iTrace™ from blood center distribution to transfusion. This dissertation seeks to identify the hazards that may occur when using the iTrace™ during Phase One, and to assess the mitigation and correction processes that combat these hazards. A thorough examination of verification and validation tests, as well as of the system design, requirements, and standard operating procedures, was performed to qualify and quantify each hazard into specific categories of severity and likelihood. A traceability matrix was also established to link each hazard with its associated tests and/or features. Furthermore, a series of analyses was conducted to determine whether the benefits of implementing the iTrace™ outweighed the risks and whether the mitigation and correction strategies for the hazards were effective. Ultimately, this dissertation serves as a usable, generalizable framework for the management of RFID-related hazards in the blood transfusion supply chain from donor to blood center distribution.
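
    A minimal sketch of how such a hazard analysis might be coded, assuming hypothetical hazard IDs, severity/likelihood scales, and test names (not the RFID Consortium's actual categories): each hazard is scored by severity times likelihood, and a traceability matrix links it to the verification tests and design features that mitigate it.

```python
# Hypothetical hazard-analysis sketch: scales, hazard IDs and test names are
# illustrative assumptions, not the iTrace(TM) program's actual artifacts.

SEVERITY = {"negligible": 1, "minor": 2, "serious": 3, "critical": 4}
LIKELIHOOD = {"improbable": 1, "remote": 2, "occasional": 3, "probable": 4}


def risk_class(severity: str, likelihood: str) -> str:
    """Map a (severity, likelihood) pair to an acceptability class."""
    score = SEVERITY[severity] * LIKELIHOOD[likelihood]
    if score <= 3:
        return "acceptable"
    if score <= 8:
        return "acceptable with mitigation"
    return "unacceptable"


# Traceability matrix: hazard id -> tests / features addressing it.
traceability = {
    "HZ-01 wrong unit scanned": ["VV-012 barcode/RFID cross-check", "SOP-04"],
    "HZ-02 tag read failure":   ["VV-020 read-range test", "REQ-031 retry logic"],
}

hazards = [
    ("HZ-01 wrong unit scanned", "critical", "remote"),
    ("HZ-02 tag read failure", "minor", "occasional"),
]

for hid, sev, lik in hazards:
    print(hid, "->", risk_class(sev, lik), "| mitigated by:", traceability[hid])
```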

    Critique of Architectures for Long-Term Digital Preservation

    Evolving technology and fading human memory threaten the long-term intelligibility of many kinds of documents. Furthermore, some records are susceptible to improper alterations that make them untrustworthy. Trusted Digital Repositories (TDRs) and Trustworthy Digital Objects (TDOs) seem to be the only broadly applicable digital preservation methodologies proposed. We argue that the TDR approach has shortfalls as a method for long-term digital preservation of sensitive information. Comparison of the TDR and TDO methodologies suggests differentiating near-term preservation measures from what is needed for the long term. TDO methodology addresses these needs, providing the means to make digital documents durably intelligible. It uses EDP standards for a few file formats and XML structures for text documents. For other information formats, intelligibility is assured by using a virtual computer. To protect sensitive information, that is, content whose inappropriate alteration might mislead its readers, the integrity and authenticity of each TDO is made testable by embedded public-key cryptographic message digests and signatures. Key authenticity is protected recursively in a social hierarchy. The proper focus for long-term preservation technology is signed packages that each combine a record collection with its metadata and that also bind context: Trustworthy Digital Objects.
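
    The testable-integrity mechanism described for TDOs can be illustrated with a short sketch that embeds a message digest and a public-key signature in a signed package. The example below is an assumption-laden illustration, not the authors' format: it uses Python's cryptography package with Ed25519 keys, invents the field names, and omits the recursive key-certification hierarchy the abstract mentions.

```python
# Illustrative sketch of a signed preservation package: embedded SHA-256 digest
# plus an Ed25519 signature over the package body. Field names are hypothetical.

import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519


def make_package(record: bytes, metadata: dict,
                 signer: ed25519.Ed25519PrivateKey) -> dict:
    """Bundle a record's metadata and digest, then sign the bundle."""
    body = {"metadata": metadata, "sha256": hashlib.sha256(record).hexdigest()}
    signature = signer.sign(json.dumps(body, sort_keys=True).encode())
    return {"body": body, "signature": signature.hex()}


def verify_package(record: bytes, package: dict,
                   public_key: ed25519.Ed25519PublicKey) -> bool:
    """Integrity: digest must match; authenticity: signature must verify."""
    body = package["body"]
    if hashlib.sha256(record).hexdigest() != body["sha256"]:
        return False  # content was altered
    try:
        public_key.verify(bytes.fromhex(package["signature"]),
                          json.dumps(body, sort_keys=True).encode())
        return True
    except InvalidSignature:
        return False  # signature does not check out


if __name__ == "__main__":
    key = ed25519.Ed25519PrivateKey.generate()
    pkg = make_package(b"registry record, 1994", {"creator": "registry office"}, key)
    print(verify_package(b"registry record, 1994", pkg, key.public_key()))  # True
    print(verify_package(b"tampered record", pkg, key.public_key()))        # False
```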

    Analysis of Security Protocols in Embedded Systems


    Using Deep RNA Sequencing for the Structural Annotation of the Laccaria bicolor Mycorrhizal Transcriptome

    BACKGROUND: Accurate structural annotation is important for prediction of function and required for in vitro approaches to characterize or validate the gene expression products. Despite significant efforts in the field, determination of the gene structure from genomic data alone is a challenging and inaccurate process. The ease of acquisition of transcriptomic sequence provides a direct route to identify expressed sequences and determine the correct gene structure. METHODOLOGY: We developed methods to utilize RNA-seq data to correct errors in the structural annotation and extend the boundaries of current gene models using assembly approaches. The methods were validated with a transcriptomic data set derived from the fungus Laccaria bicolor, which develops a mycorrhizal symbiotic association with the roots of many tree species. Our analysis focused on the subset of 1501 gene models that are differentially expressed in the free-living vs. mycorrhizal transcriptome and are expected to be important elements related to carbon metabolism, membrane permeability and transport, and intracellular signaling. Of the set of 1501 gene models, 1439 (96%) yielded modified gene models in which all error flags were resolved and the sequences aligned to the genomic sequence. The remaining 4% (62 gene models) either had deviations from transcriptomic data that could not be spanned or generated sequence that did not align to genomic sequence. The outcome of this process is a set of high confidence gene models that can be reliably used for experimental characterization of protein function. CONCLUSIONS: 69% of expressed mycorrhizal JGI "best" gene models deviated from the transcript sequence derived by this method. The transcriptomic sequence enabled correction of a majority of the structural inconsistencies and resulted in a set of validated models for 96% of the mycorrhizal genes. The method described here can be applied to improve gene structural annotation in other species, provided that there is a sequenced genome and a set of gene models.
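
    A simplified sketch of the kind of validation step described above, with hypothetical gene-model IDs, coordinates and heuristics (this is not the authors' pipeline): a model is kept as high confidence only if its assembled transcript aligns to the genome and the alignment covers every annotated exon; otherwise it is flagged for revision.

```python
# Hypothetical illustration of gene-model validation against RNA-seq evidence.
# Coordinates, identifiers and the coverage heuristic are assumptions.

def covers(exon, alignment_blocks):
    """True if some aligned transcript block spans the annotated exon."""
    start, end = exon
    return any(b_start <= start and end <= b_end
               for b_start, b_end in alignment_blocks)


def classify(gene_models, transcript_alignments):
    """gene_models: id -> list of (start, end) exons.
    transcript_alignments: id -> list of aligned (start, end) genomic blocks,
    or None if the assembled transcript failed to align to the genome."""
    confident, flagged = [], []
    for gid, exons in gene_models.items():
        blocks = transcript_alignments.get(gid)
        if blocks and all(covers(e, blocks) for e in exons):
            confident.append(gid)
        else:
            flagged.append(gid)  # boundary disagreement or no alignment
    return confident, flagged


models = {"GM_0001": [(100, 250), (400, 520)],
          "GM_0002": [(1000, 1180)]}
alignments = {"GM_0001": [(90, 260), (395, 530)],
              "GM_0002": None}
print(classify(models, alignments))  # (['GM_0001'], ['GM_0002'])
```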

    Observing the clouds: a survey and taxonomy of cloud monitoring

    This research was supported by a Royal Society Industry Fellowship and an Amazon Web Services (AWS) grant. Monitoring is an important aspect of designing and maintaining large-scale systems. Cloud computing presents a unique set of challenges to monitoring, including on-demand infrastructure, unprecedented scalability, rapid elasticity and performance uncertainty. There is a wide range of monitoring tools originating from cluster and high-performance computing, grid computing and enterprise computing, as well as a series of newer bespoke tools designed exclusively for cloud monitoring. These tools share a number of common elements and designs, which address the demands of cloud monitoring to various degrees. This paper performs an exhaustive survey of contemporary monitoring tools, from which we derive a taxonomy that examines how effectively existing tools and designs meet the challenges of cloud monitoring. We conclude by examining the socio-technical aspects of monitoring, and investigate the engineering challenges and practices behind implementing monitoring strategies for cloud computing.

    Reliability and security at the dawn of electronic bank transfers in the 1970s-1980s

    From a historical perspective, the concepts of reliability and computing security in the early 1970s, when electronic data transfer processes were in their infancy, are especially interesting for their implications for technological change and the business of banking. The cases of Japan, Spain and Germany, in terms of their national banking networks, provide an interesting field of analysis of the implications that online data transfer systems had for banking institutions. Concerns about the reliability of computing processes and digital security were the key factors. These innovations laid the foundation for the advancement of networks and new banking services that would open up unprecedented horizons in what was to become known as service banking.