93 research outputs found

    Towards a Self-Forensics Property in the ASSL Toolset

    This preliminary conceptual work discusses a notion of self-forensics as an autonomic property to augment the Autonomic System Specification Language (ASSL) framework of formal specification tools for autonomic systems. The core of the proposed methodology leverages existing designs, theoretical results, and implementing systems to enable rapid completion and validation of the experiments and results initiated in this work. Specifically, we leverage the ASSL toolkit to add the self-forensics autonomic property (SFAP), enabling generation of Java-based Object-Oriented Intensional Programming (JOOIP) language code laced with traces of Forensic Lucid to encode contextual forensic evidence and other expressions.
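    The abstract above describes generating JOOIP code laced with Forensic Lucid fragments that encode contextual forensic evidence. As a rough, language-neutral illustration of that idea only, the following Python sketch shows how a self-forensics property might record observations as dimension:tag contexts that could later be rendered as context literals; the class and method names are hypothetical assumptions and are not part of the ASSL, JOOIP or Forensic Lucid toolsets.

```python
# Hypothetical illustration only: a minimal sketch of how a self-forensics
# property might record contextual evidence as dimension:tag pairs, loosely
# resembling the hierarchical contexts that Forensic Lucid expressions encode.
# Names (SelfForensicsLog, Observation, to_context_literals) are assumptions.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any

@dataclass
class Observation:
    """One piece of forensic evidence with its evaluation context."""
    dimensions: dict[str, Any]   # e.g. {"component": "scheduler", "event": "restart"}
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class SelfForensicsLog:
    """Accumulates observations and renders them as context-like literals."""
    def __init__(self) -> None:
        self.observations: list[Observation] = []

    def observe(self, **dimensions: Any) -> None:
        self.observations.append(Observation(dimensions=dimensions))

    def to_context_literals(self) -> list[str]:
        # Render each observation as a [dim:tag, ...] literal, in the spirit of
        # the contexts a generated Forensic Lucid fragment would carry.
        literals = []
        for ob in self.observations:
            pairs = ", ".join(f"{d}:{v!r}" for d, v in ob.dimensions.items())
            literals.append(f"[{pairs}, timestamp:{ob.timestamp!r}]")
        return literals

if __name__ == "__main__":
    log = SelfForensicsLog()
    log.observe(component="scheduler", event="restart", reason="watchdog")
    for literal in log.to_context_literals():
        print(literal)
```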

    Towards a Formal Reactive Autonomic Systems Framework using Category Theory

    Software complexity is the main obstacle to further progress in the IT industry, as the difficulty of managing complex and massive computing systems goes well beyond IT administrators’ capabilities. One of the remaining options is autonomic computing, which helps address complexity by using technology to manage technology, hiding and removing low-level complexities from end users. Real-time reactive systems are among the most complex systems and have become increasingly heterogeneous and intelligent. We therefore want to add autonomic features to real-time reactive systems by building a formal framework, the Reactive Autonomic Systems Framework (RASF), which supports the specification, modeling and development of Reactive Autonomic Systems (RAS). With autonomic behavior, real-time reactive systems become more self-managing and more adaptive to their environment. Formal methods are proven approaches to ensuring the correct operation of complex interacting systems. However, many current formal approaches lack appropriate mechanisms to specify RAS and have not adequately addressed the verification of self-management behavior, which is one of the most important features of RAS. The management of evolving specifications and the analysis of changes require a specification structure that can isolate changes within a small number of components and analyze the impact of a change on interconnected components. Category theory has been proposed as a framework offering that structure; it has a rich body of theory for reasoning about objects and their relations. Furthermore, category theory adopts a correct-by-construction approach, by which components can be specified, proved and composed in a way that preserves their properties. In the multi-agent community, the agent-based approach is considered a natural way to model and implement autonomic systems, as the abilities of an autonomous agent map readily to the self-management behaviors of autonomic systems. Thus, many ideas from the Multi-Agent Systems (MAS) community, such as self-management behavior, automatic group formation, interfacing and evolution, can be adapted to implement autonomic systems. Therefore, to achieve our research goal, we need to i) build an architecture and corresponding communication mechanism for modeling both the reactive and the autonomic behavior of RAS, ii) formally specify this architecture, communication and behavior using category theory, iii) design and implement the architecture, communication and behavior of the RAS model using the MAS approach, and iv) illustrate our RASF methodology and approach with case studies.
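    To make the correct-by-construction composition idea concrete, here is a small, hypothetical Python sketch: components are treated as objects carrying guaranteed properties, specification morphisms record which properties they preserve, and composition yields only what both legs preserve. This is purely illustrative and is not the categorical machinery actually used in RASF; all names are assumptions.

```python
# Hypothetical sketch, not the RASF formalism: it illustrates the
# correct-by-construction idea that composing specification morphisms
# preserves the properties each morphism carries.

from dataclasses import dataclass

@dataclass(frozen=True)
class SpecObject:
    name: str
    properties: frozenset[str]        # properties the component guarantees

@dataclass(frozen=True)
class SpecMorphism:
    source: SpecObject
    target: SpecObject
    preserved: frozenset[str]         # properties this refinement step preserves

    def __post_init__(self):
        assert self.preserved <= self.source.properties, "cannot preserve what the source lacks"
        assert self.preserved <= self.target.properties, "target must exhibit preserved properties"

def compose(g: SpecMorphism, f: SpecMorphism) -> SpecMorphism:
    """g after f: properties preserved end-to-end are those preserved by both legs."""
    assert f.target == g.source, "morphisms not composable"
    return SpecMorphism(f.source, g.target, f.preserved & g.preserved)

# Example: a reactive component refined into an autonomic one, then deployed.
reactive  = SpecObject("ReactiveComponent", frozenset({"responds-to-stimuli"}))
autonomic = SpecObject("AutonomicComponent", frozenset({"responds-to-stimuli", "self-healing"}))
deployed  = SpecObject("DeployedGroup", frozenset({"responds-to-stimuli", "self-healing"}))

f = SpecMorphism(reactive, autonomic, frozenset({"responds-to-stimuli"}))
g = SpecMorphism(autonomic, deployed, frozenset({"responds-to-stimuli", "self-healing"}))
print(compose(g, f).preserved)   # frozenset({'responds-to-stimuli'})
```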

    Intensional Cyberforensics

    This work focuses on the application of intensional logic to cyberforensic analysis; its benefits and difficulties are compared with those of the finite-state-automata approach. This work extends the use of the intensional programming paradigm to the modeling and implementation of a cyberforensics investigation process with backtracing of event reconstruction, in which evidence is modeled by multidimensional hierarchical contexts, and proofs or disproofs of claims are undertaken in an eductive manner of evaluation. This approach is a practical, context-aware improvement over the finite state automata (FSA) approach we have seen in previous work. As a base implementation language model, we use in this approach a new dialect of the Lucid programming language, called Forensic Lucid, and we focus on defining hierarchical contexts based on intensional logic for the distributed evaluation of cyberforensic expressions. We also augment the work with credibility factors surrounding digital evidence and witness accounts, which have not been previously modeled. The Forensic Lucid programming language, used for this intensional cyberforensic analysis, is formally presented through its syntax and operational semantics. In large part, the language is based on its predecessor and codecessor Lucid dialects, such as GIPL, Indexical Lucid, Lucx, Objective Lucid, and JOOIP, bound by the underlying intensional programming paradigm. Comment: 412 pages, 94 figures, 18 tables, 19 algorithms and listings; PhD thesis; v2 corrects some typos and refs; also available on Spectrum at http://spectrum.library.concordia.ca/977460
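    As a loose illustration of two ideas from this abstract (evidence modeled as nested contexts, and credibility factors attached to evidence and witness accounts), the Python sketch below evaluates a claim by credibility-weighted support. It is not Forensic Lucid and not the eductive evaluation model of the thesis; the data structures and the combination rule are assumptions made for illustration.

```python
# Hypothetical Python sketch: evidence carries a nested dimension:tag context
# and a credibility factor; a claim's support is the credibility-weighted
# fraction of evidence in its favour. Names and the rule are illustrative only.

from dataclasses import dataclass

@dataclass
class Evidence:
    context: dict            # nested context, e.g. {"printer": {"queue": "empty"}}
    supports_claim: bool     # whether this observation supports the claim
    credibility: float       # evidence/witness credibility in [0, 1]

def support(evidence: list[Evidence]) -> float:
    """Credibility-weighted fraction of evidence supporting the claim."""
    total = sum(e.credibility for e in evidence)
    if total == 0:
        return 0.0
    favourable = sum(e.credibility for e in evidence if e.supports_claim)
    return favourable / total

case = [
    Evidence({"printer": {"queue": "empty"}, "witness": "manufacturer"}, True, 0.9),
    Evidence({"printer": {"queue": "empty"}, "witness": "operator"}, False, 0.4),
]
print(f"claim support: {support(case):.2f}")   # claim support: 0.69
```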

    Design and investigation of scalable multicast recursive protocols for wired and wireless ad hoc networks

    The ever-increasing demand for content distribution and media streaming over the Internet has created the need for efficient methods of delivering information. One of the most promising approaches is based on multicasting. However, multicast solutions have to cope with several constraints as well as being able to perform in different environments such as wired, wireless, and ad hoc environments. Additionally, the scale and size of the Internet introduce another dimension of difficulty. Providing scalable multicast for mobile hosts in wireless environments and in mobile ad hoc networks (MANETs) is a challenging problem. In the past few years, several protocols have been proposed to provide efficient multicast over the Internet, but they did not adequately address the scalability issue. In this thesis, scalable multicast protocols for wired, wireless and wireless ad hoc networks are proposed and evaluated. These protocols share the idea of building up a multicast tree gradually and recursively, as multicast group members join and leave, using a dynamic branching node-based tree (DBT) concept. The DBT uses a pair of branching node messages (BNMs). These messages traverse between a set of dynamically assigned branching node routers (BNRs) to build the multicast tree. In the proposed protocols, only the branching node routers (BNRs) carry state information, and only about their next BNRs rather than about the multicast group members; this keeps the control packet header size fixed as the multicast group grows, addressing the scalability problem. The join/leave process of multicast group members is also carried out locally, which yields low join/leave latency. The proposed protocols are: the Scalable Recursive Multicast protocol (SReM), based on the DBT concept described above; the Mobile Scalable Recursive Multicast protocol (MoSReM), which extends SReM by taking end-host mobility into account and performing an efficient roaming process; and the Scalable Ad hoc Recursive Multicast protocol (SARM), which supports mobility for all nodes and provides efficient link recovery when nodes move. Cost analysis and extensive simulation show that the proposed protocols have many positive features: fixed-size control messages, scalability, low end-to-end delay, high packet delivery rate and low normalized routing overhead. The thesis concludes by discussing the contributions of the proposed protocols to scalable multicast in the Internet community.
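    The following Python sketch is a deliberately simplified, hypothetical illustration of the DBT join idea described above: multicast state lives only at branching node routers (BNRs), and a join is grafted locally at the first on-tree router found along the new member's path toward the source, so the control message carries a fixed amount of information regardless of group size. The router model, path representation and message handling are assumptions and do not reproduce the actual BNM exchange of SReM/MoSReM/SARM.

```python
# Hypothetical sketch of the dynamic branching node-based tree (DBT) join:
# only on-tree routers (BNRs) hold downstream state; off-tree routers on the
# member's path keep no multicast state in this simplified model.

class Router:
    def __init__(self, name: str):
        self.name = name
        self.downstream: set[str] = set()   # branches known to this BNR

    @property
    def on_tree(self) -> bool:
        return bool(self.downstream)

def join(path_to_source: list[Router], new_member: str) -> None:
    """Graft new_member at the first on-tree router along its path to the source."""
    for router in path_to_source:
        if router.on_tree:
            # Local graft: only this BNR updates its state, and the join
            # message it handled had a fixed size independent of group size.
            router.downstream.add(new_member)
            return
    # No on-tree router found: the router nearest the source starts the tree.
    path_to_source[-1].downstream.add(new_member)

a, b, c = Router("A"), Router("B"), Router("C")   # C is nearest the source
c.downstream.add("receiver-1")                    # an existing branch at BNR C
join([a, b, c], "receiver-2")                     # grafts locally at C
print(sorted(c.downstream))                       # ['receiver-1', 'receiver-2']
print(a.downstream, b.downstream)                 # set() set() -- A and B hold no state
```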

    Engineering Resilient Space Systems

    Several distinct trends will influence space exploration missions in the next decade. Destinations are becoming more remote and mysterious, science questions more sophisticated, and, as mission experience accumulates, the most accessible targets are visited, advancing the knowledge frontier to more difficult, harsh, and inaccessible environments. This leads to new challenges including: hazardous conditions that limit mission lifetime, such as high radiation levels surrounding interesting destinations like Europa or toxic atmospheres of planetary bodies like Venus; unconstrained environments with navigation hazards, such as free-floating active small bodies; multielement missions required to answer more sophisticated questions, such as Mars Sample Return (MSR); and long-range missions, such as Kuiper belt exploration, that must survive equipment failures over the span of decades. These missions will need to be successful without a priori knowledge of the most efficient data collection techniques for optimum science return. Science objectives will have to be revised ‘on the fly’, with new data collection and navigation decisions on short timescales. Yet, even as science objectives are becoming more ambitious, several critical resources remain unchanged. Since physics imposes insurmountable light-time delays, anticipated improvements to the Deep Space Network (DSN) will only marginally improve the bandwidth and communications cadence to remote spacecraft. Fiscal resources are increasingly limited, resulting in fewer flagship missions, smaller spacecraft, and less subsystem redundancy. As missions visit more distant and formidable locations, the job of the operations team becomes more challenging, seemingly inconsistent with the trend of shrinking mission budgets for operations support. How can we continue to explore challenging new locations without increasing risk or system complexity? These challenges are present, to some degree, for the entire Decadal Survey mission portfolio, as documented in Vision and Voyages for Planetary Science in the Decade 2013–2022 (National Research Council, 2011), but are especially acute for the following mission examples, identified in our recently completed KISS Engineering Resilient Space Systems (ERSS) study: 1. A Venus lander, designed to sample the atmosphere and surface of Venus, would have to perform science operations as components and subsystems degrade and fail; 2. A Trojan asteroid tour spacecraft would spend significant time cruising to its ultimate destination (essentially hibernating to save on operations costs), then upon arrival, would have to act as its own surveyor, finding new objects and targets of opportunity as it approaches each asteroid, requiring response on short notice; and 3. A MSR campaign would not only be required to perform fast reconnaissance over long distances on the surface of Mars, interact with an unknown physical surface, and handle degradations and faults, but would also contain multiple components (launch vehicle, cruise stage, entry and landing vehicle, surface rover, ascent vehicle, orbiting cache, and Earth return vehicle) that dramatically increase the need for resilience to failure across the complex system. The concept of resilience and its relevance and application in various domains was a focus during the study, with several definitions of resilience proposed and discussed. 
While there was substantial variation in the specifics, there was a common conceptual core that emerged—adaptation in the presence of changing circumstances. These changes were couched in various ways—anomalies, disruptions, discoveries—but they all ultimately had to do with changes in underlying assumptions. Invalid assumptions, whether due to unexpected changes in the environment, or an inadequate understanding of interactions within the system, may cause unexpected or unintended system behavior. A system is resilient if it continues to perform the intended functions in the presence of invalid assumptions. Our study focused on areas of resilience that we felt needed additional exploration and integration, namely system and software architectures and capabilities, and autonomy technologies. (While also an important consideration, resilience in hardware is being addressed in multiple other venues, including 2 other KISS studies.) The study consisted of two workshops, separated by a seven-month focused study period. The first workshop (Workshop #1) explored the ‘problem space’ as an organizing theme, and the second workshop (Workshop #2) explored the ‘solution space’. In each workshop, focused discussions and exercises were interspersed with presentations from participants and invited speakers. The study period between the two workshops was organized as part of the synthesis activity during the first workshop. The study participants, after spending the initial days of the first workshop discussing the nature of resilience and its impact on future science missions, decided to split into three focus groups, each with a particular thrust, to explore specific ideas further and develop material needed for the second workshop. The three focus groups and areas of exploration were: 1. Reference missions: address/refine the resilience needs by exploring a set of reference missions 2. Capability survey: collect, document, and assess current efforts to develop capabilities and technology that could be used to address the documented needs, both inside and outside NASA 3. Architecture: analyze the impact of architecture on system resilience, and provide principles and guidance for architecting greater resilience in our future systems The key product of the second workshop was a set of capability roadmaps pertaining to the three reference missions selected for their representative coverage of the types of space missions envisioned for the future. From these three roadmaps, we have extracted several common capability patterns that would be appropriate targets for near-term technical development: one focused on graceful degradation of system functionality, a second focused on data understanding for science and engineering applications, and a third focused on hazard avoidance and environmental uncertainty. Continuing work is extending these roadmaps to identify candidate enablers of the capabilities from the following three categories: architecture solutions, technology solutions, and process solutions. The KISS study allowed a collection of diverse and engaged engineers, researchers, and scientists to think deeply about the theory, approaches, and technical issues involved in developing and applying resilience capabilities. The conclusions summarize the varied and disparate discussions that occurred during the study, and include new insights about the nature of the challenge and potential solutions: 1. There is a clear and definitive need for more resilient space systems. 
During our study period, the key scientists/engineers we engaged to understand potential future missions confirmed the scientific and risk reduction value of greater resilience in the systems used to perform these missions. 2. Resilience can be quantified in measurable terms—project cost, mission risk, and quality of science return. In order to consider resilience properly in the set of engineering trades performed during the design, integration, and operation of space systems, the benefits and costs of resilience need to be quantified. We believe, based on the work done during the study, that appropriate metrics to measure resilience must relate to risk, cost, and science quality/opportunity. Additional work is required to explicitly tie design decisions to these first-order concerns. 3. There are many existing basic technologies that can be applied to engineering resilient space systems. Through the discussions during the study, we found many varied approaches and research that address the various facets of resilience, some within NASA, and many more beyond. Examples from civil architecture, Department of Defense (DoD) / Defense Advanced Research Projects Agency (DARPA) initiatives, ‘smart’ power grid control, cyber-physical systems, software architecture, and application of formal verification methods for software were identified and discussed. The variety and scope of related efforts is encouraging and presents many opportunities for collaboration and development, and we expect many collaborative proposals and joint research as a result of the study. 4. Use of principled architectural approaches is key to managing complexity and integrating disparate technologies. The main challenge inherent in considering highly resilient space systems is that the increase in capability can result in an increase in complexity with all of the risks and costs associated with more complex systems. What is needed is a better way of conceiving space systems that enables incorporation of capabilities without increasing complexity. We believe principled architecting approaches provide the needed means to convey a unified understanding of the system to primary stakeholders, thereby controlling complexity in the conception and development of resilient systems, and enabling the integration of disparate approaches and technologies. A representative architectural example is included in Appendix F. 5. Developing trusted resilience capabilities will require a diverse yet strategically directed research program. Despite the interest in, and benefits of, deploying resilient space systems, to date, there has been a notable lack of meaningful demonstrated progress in systems capable of working in hazardous, uncertain situations. The roadmaps completed during the study, and documented in this report, provide the basis for a real funded plan that considers the required fundamental work and evolution of needed capabilities. Exploring space is a challenging and difficult endeavor. Future space missions will require more resilience in order to perform the desired science in new environments under constraints of development and operations cost, acceptable risk, and communications delays. Development of space systems with resilient capabilities has the potential to expand the limits of possibility, revolutionizing space science by enabling as yet unforeseen missions and breakthrough science observations. Our KISS study provided an essential venue for the consideration of these challenges and goals. 
Additional work and future steps are needed to realize the potential of resilient systems—this study provided the necessary catalyst to begin this process.

    Context-Aware and Adaptive Usage Control Model

    Information protection is a key issue for the acceptance and adoption of pervasive computing systems, where various portable devices such as smart phones, Personal Digital Assistants (PDAs) and laptop computers are used to share information and to access digital resources via wireless connection to the Internet. Because these devices are resource-constrained and highly mobile, changes in the environmental or device context can greatly affect the security of the system. A proper security mechanism must be put in place that can cope with changing environmental and system contexts. The Usage CONtrol (UCON) model is the latest major enhancement of the traditional access control models, enabling mutability of subject and object attributes and continuity of control over the usage of resources. In UCON, the access permission decision is based on three factors: authorisations, obligations and conditions. While authorisations and obligations are requirements that must be fulfilled by the subject and the object, conditions are subject- and object-independent requirements that must be satisfied by the environment. As a consequence, access permission may be revoked (and the access stopped) as a result of changes in the environment, regardless of whether the authorisation and obligation requirements are met. This is a major shortcoming of the UCON model in pervasive computing systems, which constantly strive to adapt to environmental changes so as to minimise disruptions to the user. We propose a Context-Aware and Adaptive Usage Control (CA-UCON) model which extends the traditional UCON model to enable adaptation to environmental changes, with the aim of preserving continuity of access. Indeed, when the authorisation and obligation requirements are fulfilled by the subject and object but the condition requirements fail due to changes in the environmental or system context, the proposed CA-UCON model triggers specific actions to adapt to the new situation, so as to ensure continuity of usage. We then propose an architecture for the CA-UCON model, presenting its various components. In this architecture, the adaptation decision is integrated with the usage decision; a comprehensive definition of each component is given and the functions it performs are described. We also propose a novel computational model of the CA-UCON architecture, formally specified as a finite state machine. It demonstrates how a subject's access request is handled in the CA-UCON model, including details of how access is revoked and which actions are undertaken due to context changes. The extension of the original UCON architecture can be understood from this model. The formal specification of CA-UCON is presented using the Calculus of Context-aware Ambients (CCA). This mathematical notation is considered suitable for modelling mobile and context-aware systems and has been preferred over alternatives for the following reasons: (i) mobility and context awareness are primitive constructs in CCA; (ii) a system's properties can be formally analysed; (iii) most importantly, CCA specifications are executable, allowing early validation of system properties and accelerated development of prototypes. To evaluate the CA-UCON model, a real-world case study of a ubiquitous learning (u-learning) system is selected, and a CA-UCON model for the u-learning system is proposed. 
    This model is then formalised in CCA, and the resulting specification is executed and analysed using a CCA execution environment. Finally, we investigate enforcement approaches for the CA-UCON model. We present the CA-UCON reference monitor architecture with its components, and then demonstrate three types of enforcement architecture for the CA-UCON model: centralised, distributed and hybrid. These are discussed in detail, including an analysis of their merits and drawbacks.
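    As a minimal sketch of the finite-state-machine view described above, assuming simplified state names and decision predicates that are not the authors' CCA specification, the Python fragment below shows the CA-UCON behaviour of interest: when conditions fail during usage, the model first attempts adaptation and only revokes access if no adaptation is possible.

```python
# Hypothetical sketch of CA-UCON as a small finite state machine. State names,
# predicates and the adaptation rule are assumptions made for illustration.

from enum import Enum, auto

class State(Enum):
    REQUESTING = auto()
    ACCESSING = auto()
    ADAPTING = auto()
    REVOKED = auto()
    DENIED = auto()

def decide(authorised: bool, obligations_met: bool, conditions_ok: bool) -> State:
    """Initial usage decision, as in UCON: all three factors must hold."""
    if authorised and obligations_met and conditions_ok:
        return State.ACCESSING
    return State.DENIED

def on_context_change(state: State, conditions_ok: bool, can_adapt: bool) -> State:
    """Ongoing control: CA-UCON adapts to condition failures instead of revoking outright."""
    if state is not State.ACCESSING or conditions_ok:
        return state
    return State.ADAPTING if can_adapt else State.REVOKED

# A u-learning style scenario: access granted, then the environment changes.
s = decide(authorised=True, obligations_met=True, conditions_ok=True)
s = on_context_change(s, conditions_ok=False, can_adapt=True)
print(s)   # State.ADAPTING -- e.g. switch delivery mode rather than stop the session
```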

    Intensional Cyberforensics

    This work focuses on the application of intensional logic to cyberforensic analysis; its benefits and difficulties are compared with those of the finite-state-automata approach. This work extends the use of the intensional programming paradigm to the modeling and implementation of a cyberforensics investigation process with backtracing of event reconstruction, in which evidence is modeled by multidimensional hierarchical contexts, and proofs or disproofs of claims are undertaken in an eductive manner of evaluation. This approach is a practical, context-aware improvement over the finite state automata (FSA) approach we have seen in previous work. As a base implementation language model, we use in this approach a new dialect of the Lucid programming language, called Forensic Lucid, and we focus on defining hierarchical contexts based on intensional logic for the distributed evaluation of cyberforensic expressions. We also augment the work with credibility factors surrounding digital evidence and witness accounts, which have not been previously modeled. The Forensic Lucid programming language, used for this intensional cyberforensic analysis, is formally presented through its syntax and operational semantics. In large part, the language is based on its predecessor and codecessor Lucid dialects, such as GIPL, Indexical Lucid, Lucx, Objective Lucid, MARFL, and JOOIP, bound by the underlying intensional programming paradigm.
