
    A Design Theory for Secure Semantic E-Business Processes (SSEBP)

    This dissertation develops and evaluates a design theory. We follow the design science approach (Hevner et al., 2004) to answer the following research question: "How can we formulate a design theory to guide the analysis and design of Secure Semantic eBusiness Processes (SSeBP)?" The goals of the SSeBP design theory are to (i) unambiguously represent the information and knowledge resources involved in eBusiness processes, so as to resolve semantic conflicts and integrate heterogeneous information systems; (ii) analyze and model business processes that include access control mechanisms to prevent unauthorized access to resources; and (iii) facilitate the coordination of eBusiness process activities and resources by modeling their dependencies. Business process modeling techniques such as the Business Process Modeling Notation (BPMN) (BPMI, 2004) and UML Activity Diagrams (OMG, 2003) lack theoretical foundations and are difficult to verify for correctness and completeness (Soffer and Wand, 2007). The current literature on secure information systems design methods is theoretically underdeveloped and treats security as a non-functional requirement and an afterthought (Siponen et al., 2006; Mouratidis et al., 2005). The SSeBP design theory is one of the first attempts at providing theoretically grounded guidance for designing richer secure eBusiness processes that support secure, coordinated, and seamless knowledge exchange among business partners in a value chain. It allows for the inclusion of non-repudiation mechanisms in the analysis and design of eBusiness processes, which lays the foundation for auditing and for compliance with regulations such as Sarbanes-Oxley. The SSeBP design theory is evaluated through a rigorous multi-method evaluation approach comprising descriptive, observational, and experimental evaluation. First, it is validated by modeling the business processes of an industry standard, the Collaborative Planning, Forecasting, and Replenishment (CPFR) approach. Our model enhances CPFR by incorporating security requirements into the process model, which are critically lacking in the current CPFR technical guidelines. Second, we model the demand forecasting and capacity planning business processes of two large organizations to evaluate the efficacy and utility of the SSeBP design theory in capturing the realistic requirements and complex nuances of real inter-organizational business processes. Finally, we empirically evaluate SSeBP against enhanced Use Cases (Siponen et al., 2006) and UML activity diagrams for informational equivalence (Larkin and Simon, 1987) and for its utility in generating situational awareness (Endsley, 1995) of the security and coordination requirements of a business process. The specific contribution of this dissertation is a design theory (SSeBP) that presents a novel and holistic approach, contributing to the IS knowledge base by filling an existing research gap in the design of information systems that support secure and coordinated business processes. The proposed design theory provides practitioners with the meta-design and the design process, including the system components and principles, to guide the analysis and design of eBusiness processes that are secure and coordinated.
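    The dissertation's actual meta-design is not reproduced in this abstract, but the flavour of goals (ii) and (iii), access-controlled activities with explicit activity-resource dependencies, can be illustrated with a minimal Python sketch. The class, role, and resource names below are hypothetical and are not taken from SSeBP.

        from dataclasses import dataclass, field

        @dataclass(frozen=True)
        class Resource:
            name: str
            allowed_roles: frozenset        # roles authorized to access this resource

        @dataclass
        class Activity:
            name: str
            performer_role: str
            uses: list = field(default_factory=list)        # resources the activity reads/writes
            depends_on: list = field(default_factory=list)  # activities that must complete first

        def access_violations(process):
            """Return (activity, resource) pairs where the performer lacks authorization."""
            return [(a.name, r.name)
                    for a in process
                    for r in a.uses
                    if a.performer_role not in r.allowed_roles]

        # Hypothetical demand-forecasting fragment (illustrative names only).
        forecast = Resource("demand_forecast", frozenset({"planner", "supplier_planner"}))
        create   = Activity("create_forecast", "planner", uses=[forecast])
        share    = Activity("share_forecast", "clerk", uses=[forecast], depends_on=[create])

        print(access_violations([create, share]))   # [('share_forecast', 'demand_forecast')]

    A fuller model in the spirit of the design theory would also attach non-repudiation evidence, such as signed acknowledgements, to each inter-organizational exchange.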

    Semantic and logical foundations of global computing: Papers from the EU-FET global computing initiative (2001–2005)

    Overview of the contents of the volume "Semantic and logical foundations of global computing".

    Quantitative Verification: Formal Guarantees for Timeliness, Reliability and Performance

    Computerised systems appear in almost all aspects of our daily lives, often in safety-critical scenarios such as embedded control systems in cars and aircraft or medical devices such as pacemakers and sensors. We are thus increasingly reliant on these systems working correctly, despite often operating in unpredictable or unreliable environments. Designers of such devices need ways to guarantee that they will operate in a reliable and efficient manner. Quantitative verification is a technique for analysing quantitative aspects of a system's design, such as timeliness, reliability or performance. It applies formal methods, based on a rigorous analysis of a mathematical model of the system, to automatically prove certain precisely specified properties, e.g. "the airbag will always deploy within 20 milliseconds after a crash" or "the probability of both sensors failing simultaneously is less than 0.001". The ability to formally guarantee quantitative properties of this kind is beneficial across a wide range of application domains. For example, in safety-critical systems, it may be essential to establish credible bounds on the probability with which certain failures or combinations of failures can occur. In embedded control systems, it is often important to comply with strict constraints on timing or resources. More generally, being able to derive guarantees on precisely specified levels of performance or efficiency is a valuable tool in the design of, for example, wireless networking protocols, robotic systems or power management algorithms, to name but a few. This report gives a short introduction to quantitative verification, focusing in particular on a widely used technique called model checking, and its generalisation to the analysis of quantitative aspects of a system such as timing, probabilistic behaviour or resource usage. The intended audience is industrial designers and developers of systems such as those highlighted above who could benefit from the application of quantitative verification, but lack expertise in formal verification or modelling.
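    In practice such guarantees are produced by probabilistic model checkers (for example PRISM), but the core computation, bounded probabilistic reachability on a Markov-chain model of the system, can be sketched in a few lines of Python. The three-state model and its transition probabilities below are invented purely for illustration and are not taken from the report.

        # Minimal sketch: probability of reaching a 'fail' state within k steps of a
        # discrete-time Markov chain, computed by value iteration.
        P = {
            "ok":       {"ok": 0.989, "degraded": 0.010, "fail": 0.001},
            "degraded": {"ok": 0.900, "degraded": 0.050, "fail": 0.050},
            "fail":     {"fail": 1.0},
        }

        def prob_reach_within(P, target, k):
            """Pr(reach target within k steps), computed for every state of the chain."""
            p = {s: (1.0 if s == target else 0.0) for s in P}
            for _ in range(k):
                p = {s: (1.0 if s == target else
                         sum(pr * p[t] for t, pr in P[s].items()))
                     for s in P}
            return p

        # Probability of failure within 1000 steps, starting from the 'ok' state;
        # a verification tool would compare this value against a required bound.
        print(prob_reach_within(P, "fail", 1000)["ok"])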

    Autonomic computing architecture for SCADA cyber security

    Cognitive computing relates to intelligent computing platforms that are based on the disciplines of artificial intelligence, machine learning, and other innovative technologies. These technologies can be used to design systems that mimic the human brain to learn about their environment and can autonomously predict an impending anomalous situation. IBM first used the term ‘Autonomic Computing’ in 2001 to combat the looming complexity crisis (Ganek and Corbi, 2003). The concept has been inspired by the human biological autonomic system. An autonomic system is self-healing, self-regulating, self-optimising and self-protecting (Ganek and Corbi, 2003). Therefore, the system should be able to protect itself against both malicious attacks and unintended mistakes by the operator.
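    The self-* behaviour described above is usually organised as a monitor-analyse-plan-execute (MAPE) control loop. The Python sketch below shows that loop shape in a deliberately simplified form; the sensor tags, expected ranges, and the 'isolate' response are hypothetical and not drawn from any real SCADA deployment.

        # Rough sketch of a self-protecting autonomic (MAPE-style) loop.
        EXPECTED_RANGE = {"pressure_kpa": (80, 120), "flow_lps": (10, 50)}

        def monitor(raw_readings):
            # Keep only the tags the loop knows how to reason about.
            return {k: v for k, v in raw_readings.items() if k in EXPECTED_RANGE}

        def analyse(readings):
            # Flag values outside their expected operating range.
            return {name: value for name, value in readings.items()
                    if not EXPECTED_RANGE[name][0] <= value <= EXPECTED_RANGE[name][1]}

        def plan(anomalies):
            # Self-protecting response: isolate each anomalous subsystem.
            return [f"isolate:{name}" for name in anomalies]

        def execute(actions):
            for action in actions:
                print("executing", action)

        def autonomic_cycle(raw_readings):
            execute(plan(analyse(monitor(raw_readings))))

        autonomic_cycle({"pressure_kpa": 135, "flow_lps": 30, "unknown_tag": 7})
        # -> executing isolate:pressure_kpa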

    Curriculum Guidelines for Undergraduate Programs in Data Science

    The Park City Math Institute (PCMI) 2016 Summer Undergraduate Faculty Program met for the purpose of composing guidelines for undergraduate programs in Data Science. The group consisted of 25 undergraduate faculty from a variety of institutions in the U.S., primarily from the disciplines of mathematics, statistics and computer science. These guidelines are meant to provide some structure for institutions planning for or revising a major in Data Science.

    Postprocessing for quantum random number generators: entropy evaluation and randomness extraction

    Quantum random-number generators (QRNGs) can offer a means to generate information-theoretically provable random numbers, in principle. In practice, unfortunately, the quantum randomness is inevitably mixed with classical randomness due to classical noise. To distill this quantum randomness, one needs to quantify the randomness of the source and apply a randomness extractor. Here, we propose a generic framework for evaluating the quantum randomness of real-life QRNGs by min-entropy, and apply it to two different existing quantum random-number systems in the literature. Moreover, we provide a guideline for QRNG data postprocessing, for which we implement two information-theoretically provable randomness extractors: the Toeplitz-hashing extractor and Trevisan's extractor.
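    A Toeplitz-hashing extractor of the kind implemented in the paper reduces to a binary matrix-vector product over GF(2), with the output length chosen below the min-entropy estimate of the source. The NumPy sketch below illustrates the construction; the bit lengths and the simulated raw data are arbitrary and are not taken from the paper.

        import numpy as np

        def toeplitz_extract(raw_bits, seed_bits, out_len):
            """Extract out_len near-uniform bits from raw_bits via a seeded Toeplitz
            matrix; out_len should be chosen below the source's min-entropy estimate.
            Requires len(seed_bits) == len(raw_bits) + out_len - 1."""
            n, m = len(raw_bits), out_len
            assert len(seed_bits) == n + m - 1
            # T[i, j] = seed_bits[i - j + n - 1]: entries are constant along diagonals.
            T = np.array([[seed_bits[i - j + n - 1] for j in range(n)] for i in range(m)],
                         dtype=int)
            return (T @ np.asarray(raw_bits, dtype=int)) % 2

        rng  = np.random.default_rng(0)                  # stand-in for the raw QRNG output
        raw  = rng.integers(0, 2, size=256)              # 256 raw, only partly random, bits
        seed = rng.integers(0, 2, size=256 + 100 - 1)    # uniform seed for the extractor
        print(toeplitz_extract(raw, seed, out_len=100))  # 100 extracted bits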

    Applying Formal Methods to Networking: Theory, Techniques and Applications

    Despite its great importance, modern network infrastructure is remarkable for the lack of rigor in its engineering. The Internet, which began as a research experiment, was never designed to handle the users and applications it hosts today. The lack of formalization of the Internet architecture meant limited abstractions and modularity, especially for the control and management planes, so every new need required a new protocol built from scratch. This led to an unwieldy, ossified Internet architecture resistant to any attempts at formal verification, and an Internet culture where expediency and pragmatism are favored over formal correctness. Fortunately, recent work in the space of clean-slate Internet design, especially the software-defined networking (SDN) paradigm, offers the Internet community another chance to develop the right kind of architecture and abstractions. This has also led to a great resurgence of interest in applying formal methods to the specification, verification, and synthesis of networking protocols and applications. In this paper, we present a self-contained tutorial of the formidable amount of work that has been done in formal methods, and present a survey of its applications to networking.
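    As a toy illustration of the kind of property network verification tools check, the sketch below exhaustively follows a hypothetical set of static forwarding rules to test reachability, loop freedom, and the absence of black holes. Real verifiers reason over far richer models (header spaces, dynamic control planes); the topology and rules here are invented.

        # Toy sketch: checking delivery, loops, and black holes over static forwarding rules.
        FORWARDING = {               # node -> {destination: next hop}
            "A": {"H2": "B"},
            "B": {"H2": "C"},
            "C": {"H2": "H2"},
        }

        def trace(src, dst, max_hops=32):
            """Follow forwarding rules hop by hop; report loops and black holes."""
            path, node = [src], src
            while node != dst:
                if len(path) > max_hops:
                    return path, "loop suspected"
                nxt = FORWARDING.get(node, {}).get(dst)
                if nxt is None:
                    return path, "black hole (no rule)"
                if nxt in path:
                    return path + [nxt], "forwarding loop"
                path.append(nxt)
                node = nxt
            return path, "delivered"

        print(trace("A", "H2"))   # (['A', 'B', 'C', 'H2'], 'delivered')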

    Formal Verification of Security Protocol Implementations: A Survey

    Automated formal verification of security protocols has been mostly focused on analyzing high-level abstract models which, however, are significantly different from real protocol implementations written in programming languages. Recently, some researchers have started investigating techniques that bring automated formal proofs closer to real implementations. This paper surveys these attempts, focusing on approaches that target the application code that implements the protocol logic, rather than the libraries that implement the cryptography. In these approaches, the libraries are assumed to correctly implement some model, and the aim is to derive formal proofs that, under this assumption, give assurance about the application code that implements the protocol logic. The two main approaches, model extraction and code generation, are presented, along with the main techniques adopted for each approach.