
    The relationship between search based software engineering and predictive modeling

    Search Based Software Engineering (SBSE) is an approach to software engineering in which search based optimization algorithms are used to identify optimal or near-optimal solutions and to yield insight. SBSE techniques can cater for multiple, possibly competing objectives and/or constraints, and for applications where the potential solution space is large and complex. This paper provides a brief overview of SBSE, explaining some of the ways in which it has already been applied to the construction of predictive models. There is a mutually beneficial relationship between predictive models and SBSE. The paper sets out eleven open problem areas for Search Based Predictive Modeling and describes how predictive models also have a role to play in improving SBSE.
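    To make the search-based idea concrete, here is a minimal hill-climbing sketch in Python (my illustration, not code from the paper): a single-objective search over the two parameters of a linear predictive model, with fitness defined as negative mean squared error on synthetic data. All names and numbers are assumptions for illustration.

    import random

    # Hypothetical training data: (x, y) pairs from a noisy line y = 2x + 1.
    DATA = [(x, 2.0 * x + 1.0 + random.uniform(-0.5, 0.5)) for x in range(20)]

    def fitness(params):
        """Fitness = negative mean squared error of the linear model y = a*x + b."""
        a, b = params
        return -sum((a * x + b - y) ** 2 for x, y in DATA) / len(DATA)

    def hill_climb(steps=5000, step_size=0.05):
        """Search the (a, b) space, accepting only fitness improvements."""
        current = (random.uniform(-5, 5), random.uniform(-5, 5))
        best = fitness(current)
        for _ in range(steps):
            candidate = tuple(p + random.uniform(-step_size, step_size) for p in current)
            score = fitness(candidate)
            if score > best:                 # greedy acceptance
                current, best = candidate, score
        return current, best

    params, score = hill_climb()
    print("best (a, b):", params, "fitness:", score)

    Real SBSE work typically uses genetic algorithms or other metaheuristics, often with multiple objectives; the accept-only-improvements loop above is the simplest member of that family.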

    DAG-Based Attack and Defense Modeling: Don't Miss the Forest for the Attack Trees

    This paper presents the current state of the art in attack and defense modeling approaches that are based on directed acyclic graphs (DAGs). DAGs allow for a hierarchical decomposition of complex scenarios into simple, easily understandable and quantifiable actions. Methods based on threat trees and Bayesian networks are two well-known approaches to security modeling. However, there exist more than 30 DAG-based methodologies, each having different features and goals. The objective of this survey is to present a complete overview of graphical attack and defense modeling techniques based on DAGs. This consists of summarizing the existing methodologies, comparing their features, and proposing a taxonomy of the described formalisms. This article also supports the selection of an adequate modeling technique depending on user requirements.
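    The survey is method-agnostic, but a core quantitative idea shared by many DAG-based formalisms, propagating values bottom-up through AND/OR refinements, can be sketched briefly. The node structure, cost semantics and numbers below are illustrative assumptions, not a formalism taken from the paper.

    from functools import lru_cache

    # Illustrative attack DAG: a node is either ("AND"|"OR", children) or a leaf cost.
    DAG = {
        "steal_data":     ("OR",  ["remote_exploit", "insider"]),
        "remote_exploit": ("AND", ["find_vuln", "bypass_ids"]),
        "find_vuln":      5.0,    # leaf: attacker effort, arbitrary units
        "bypass_ids":     8.0,
        "insider":        20.0,
    }

    @lru_cache(maxsize=None)
    def min_cost(node):
        """Cheapest attack: OR picks the cheapest child, AND must pay for all."""
        entry = DAG[node]
        if isinstance(entry, float):         # leaf action
            return entry
        op, children = entry
        costs = [min_cost(child) for child in children]
        return min(costs) if op == "OR" else sum(costs)

    print(min_cost("steal_data"))            # 13.0: the remote path undercuts the insider

    Because the structure is a DAG rather than a tree, a subgoal shared by several parents appears once and is evaluated once (the memoization above reflects this); that sharing is exactly what plain tree-based methods cannot express.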

    Control-Flow Security

    Computer security is a topic of paramount importance in computing today. Though enormous effort has been expended to reduce the software attack surface, vulnerabilities remain. In contemporary attacks, subverting the control-flow of an application is often the cornerstone of a successful attempt to compromise a system. This subversion, known as a control-flow attack, remains an essential building block of many software exploits. This dissertation proposes a multi-pronged approach to securing software control-flow to harden the software attack surface. The primary domain of this dissertation is the elimination of the basic mechanism in software that enables control-flow attacks. I address the prevalence of such attacks by going to the heart of the problem, removing all of the operations that inject runtime data into program control. This novel approach, Control-Data Isolation, provides protection by subtracting the root of the problem: indirect control-flow. Previous works have attempted to address control-flow attacks by layering additional complexity in an effort to shield software from attack. In this work, I take a subtractive approach, subtracting the primary cause of both contemporary and classic control-flow attacks. This novel approach to security advances the state of the art in control-flow security by ensuring the integrity of the programmer-intended control-flow graph of an application at runtime. Further, this dissertation provides methodologies to eliminate the barriers to adoption of control-data isolation while simultaneously moving ahead to reduce future attacks. The secondary domain of this dissertation is a technique which leverages the processes by which software is engineered, tested, and executed to pinpoint the statements in software most likely to be exploited by an attacker, defined as the Dynamic Control Frontier. Rather than reacting to successful attacks by patching software, the approach in this dissertation moves ahead of the attacker and identifies the susceptible code regions before they are compromised. In total, this dissertation combines software and hardware design techniques to eliminate contemporary control-flow attacks. Further, it demonstrates the efficacy and viability of a subtractive approach to software security, eliminating the elements underlying security vulnerabilities.
    PhD, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/133304/1/warthur_1.pd
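    As a loose analogy for control-data isolation (the dissertation targets machine-level indirect branches, such as calls through registers; Python has no such construct, so this sketch only mirrors the principle): runtime data may select among programmer-enumerated targets, but must never name a target directly. All identifiers below are hypothetical.

    def export_report():
        return "report exported"

    def delete_account():
        return "account deleted"

    def run_unsafe(action_name):
        # Indirect control flow: untrusted data *names* the call target directly,
        # so crafted input can reach any function in scope.
        return globals()[action_name]()

    # Control-data-isolated pattern: runtime data merely selects among
    # programmer-enumerated targets; no target injection is possible.
    ALLOWED = {"export": export_report, "delete": delete_account}

    def run_isolated(action_name):
        handler = ALLOWED.get(action_name)
        if handler is None:
            raise ValueError(f"unknown action: {action_name!r}")
        return handler()

    print(run_isolated("export"))            # legal target, dispatched directly
    try:
        run_isolated("os.system")            # rejected: not in the target set
    except ValueError as err:
        print(err)

    The isolated version preserves the programmer-intended control-flow graph: every reachable target is written down statically, so no input can add an edge to it.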

    An investigation into hazard-centric analysis of complex autonomous systems

    This thesis proposes the hypothesis that a conventional, and essentially manual, HAZOP process can be improved with information obtained from model-based dynamic simulation, using a Monte Carlo approach, to update a Bayesian belief model representing the expected relations between causes and effects, and thereby produce an enhanced HAZOP. The work considers how the expertise of a hazard and operability study team might be augmented with access to behavioural models, simulations and belief inference models. This incorporates models of dynamically complex system behaviour, considering where these might contribute to the expertise of a hazard and operability study team, and how they might bolster trust in the portrayal of system behaviour. Using a questionnaire containing behavioural outputs from a representative systems model, responses were collected from a group with relevant domain expertise. From this it is argued that the quality of analysis depends upon the experience and expertise of the participants, but that this might be artificially augmented using probabilistic data derived from a system dynamics model. Consequently, Monte Carlo simulations of an improved exemplar system dynamics model are used to condition a behavioural inference model and also to generate measures of emergence associated with the deviation parameter used in the study. A Bayesian approach towards probability is adopted because particular events and combinations of circumstances are effectively unique or hypothetical, and perhaps irreproducible in practice. It is shown that a Bayesian model, representing beliefs expressed in a hazard and operability study and conditioned by the likely occurrence of flaw events causing specific deviant behaviour, as evidenced in the simulated system behaviour, may combine intuitive estimates based upon experience and expertise with quantitative statistical information representing plausible evidence of safety constraint violation. A further behavioural measure identifies potential emergent behaviour by way of a Lyapunov exponent. Together these improvements enhance the awareness of potential hazard cases.
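    As a toy illustration of the simulation-conditioned belief update (assumed numbers and model, not the thesis's): Monte Carlo runs of a system model count occurrences of a deviation, and that evidence conditions a Beta-Bernoulli belief about the deviation's probability, mirroring how simulation evidence could sharpen a HAZOP team's prior estimate.

    import random

    def simulate_once():
        """Stand-in for one Monte Carlo run of the system dynamics model:
        True if the deviation of interest (e.g. 'flow exceeds limit') occurs."""
        flow = random.gauss(1.0, 0.3)
        return flow > 1.5

    # HAZOP team's prior belief, encoded as Beta(alpha, beta): prior mean 0.10.
    alpha, beta = 1.0, 9.0

    runs = 10_000
    hits = sum(simulate_once() for _ in range(runs))

    # Conjugate Bayesian update: Beta prior + Bernoulli evidence -> Beta posterior.
    alpha_post, beta_post = alpha + hits, beta + runs - hits
    posterior_mean = alpha_post / (alpha_post + beta_post)

    print(f"simulated deviation rate: {hits / runs:.3f}")
    print(f"posterior P(deviation):   {posterior_mean:.3f}")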

    An investigation into computer and network curricula

    This thesis consists of a series of internationally published, peer-reviewed journal and conference research papers that analyse the educational and training needs of undergraduate Information Technology (IT) students within the area of Computer and Network Technology (CNT) education. Research by Maj et al. has found that accredited computing science curricula can fail to meet the expectations of employers in the field of CNT: “It was found that none of these students could perform first line maintenance on a Personal Computer (PC) to a professional standard with due regard to safety, both to themselves and the equipment. Neither could they install communication cards, cables and network operating system or manage a population of networked PCs to an acceptable commercial standard without further extensive training. It is noteworthy that none of the students interviewed had ever opened a PC. It is significant that all those interviewed for this study had successfully completed all the units on computer architecture and communication engineering” (Maj, Robbins, Shaw, & Duley, 1998). The students' curricula at that time lacked units in which they gained hands-on experience in modern PC hardware or networking skills. This was despite the fact that their computing science course was level one accredited, the highest accreditation level offered by the Australian Computer Society (ACS). The results of the initial survey in Western Australia led to the introduction of two new units within the Computing Science Degree at Edith Cowan University (ECU): Computer Installation & Maintenance (CIM) and Network Installation & Maintenance (NIM) (Maj, Fetherston, Charlesworth, & Robbins, 1998). Uniquely within an Australian university context, these new syllabi require students to work on real equipment. Such experience excludes digital circuit investigation, which is still a recommended approach by the Association for Computing Machinery (ACM) for computer architecture units (ACM, 2001, p. 97). Instead, the CIM unit employs a top-down approach based initially upon students' everyday experiences, which is more in accordance with constructivist educational theory and practice. These papers propose an alternative model of IT education that helps to accommodate the educational and vocational needs of IT students in the context of continual rapid changes and developments in technology. The ACM have recognised the need for variation, noting that “There are many effective ways to organize a curriculum even for a particular set of goals and objectives” (Tucker et al., 1991, p. 70). A possible major contribution to new knowledge of these papers relates to how high-level abstract bandwidth (B-Node) models may contribute to the understanding of why and how computer and networking technology systems have developed over time. Because these models are de-coupled from the underlying technology, which is subject to rapid change, they may help to future-proof student knowledge and understanding of the ongoing and future development of computer and networking systems. The de-coupling is achieved through abstraction based upon bandwidth or throughput rather than the specific implementation of the underlying technologies. One of the underlying problems is that computing systems tend to change faster than the ability of most educational institutions to respond.
    Abstraction and the use of B-Node models could help educational models respond more quickly to changes in the field, and could also help to introduce an element of future-proofing into the education of IT students. The importance of abstraction has been noted by the ACM, who state: “Levels of Abstraction: the nature and use of abstraction in computing; the use of abstraction in managing complexity, structuring systems, hiding details, and capturing recurring patterns; the ability to represent an entity or system by abstractions having different levels of detail and specificity” (ACM, 1991b). Bloom et al. note the importance of abstraction, listing under the heading “Knowledge of the universals and abstractions in a field” the objective: “Knowledge of the major schemes and patterns by which phenomena and ideas are organized. These are the large structures, theories, and generalizations which dominate a subject or field or problems. These are the highest levels of abstraction and complexity” (Bloom, Engelhart, Furst, Hill, & Krathwohl, 1956, p. 203). Abstractions can be applied to computer and networking technology to provide students with common fundamental concepts regardless of the particular underlying technological implementation, and to help avoid the rapid redundancy of detailed knowledge of modern computer and networking technology implementation and hands-on skills acquisition. Again, the ACM note that “Enduring computing concepts include ideas that transcend any specific vendor, package or skill set... While skills are fleeting, fundamental concepts are enduring and provide long lasting benefits to students, critically important in a rapidly changing discipline” (ACM, 2001, p. 70). These abstractions can also be reinforced by experiential learning tied to commercial practices. In this context, the other possibly major contribution to new knowledge provided by this thesis is an efficient, scalable and flexible model for assessing the hands-on skills and understanding of IT students. This is a form of Competency-Based Assessment (CBA), which has been successfully tested as part of this research and subsequently implemented at ECU. This is the first time within this field that this specific type of research has been undertaken within the university sector in Australia. Hands-on experience and understanding can become outdated, hence the need for the future-proofing provided by B-Node models. The three major research questions of this study are:
    • Is it possible to develop a new, high-level abstraction model for use in CNT education?
    • Is it possible to have CNT curricula that are more directly relevant to both student and employer expectations without suffering from rapid obsolescence?
    • Can an effective, efficient and meaningful assessment be undertaken to test students' hands-on skills and understandings?
    The ACM Special Interest Group on Data Communication (SIGCOMM) workshop report on Computer Networking, Curriculum Designs and Educational Challenges notes a list of teaching approaches: “... the more ‘hands-on' laboratory approach versus the more traditional in-class lecture-based approach; the bottom-up approach towards subject matter versus the top-down approach” (Kurose, Leibeherr, Ostermann, & Ott-Boisseau, 2002, para 1). Bandwidth considerations are approached from the PC hardware level and at each of the seven layers of the International Standards Organisation (ISO) Open Systems Interconnection (OSI) reference model.
    It is believed that this research is of significance to computing education. However, further research is needed.
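    The B-Node idea, characterizing each subsystem by bandwidth rather than by implementation, can be caricatured in a few lines of Python. This sketch is my reading of the abstract, not the thesis's formalism, and the component figures are invented.

    from dataclasses import dataclass

    @dataclass
    class BNode:
        """A bandwidth node: any subsystem reduced to its throughput.
        The figures below are invented, not measurements."""
        name: str
        bandwidth_mb_s: float

    def pipeline_throughput(nodes):
        """Serially connected B-Nodes: end-to-end throughput is the slowest link."""
        bottleneck = min(nodes, key=lambda n: n.bandwidth_mb_s)
        return bottleneck.name, bottleneck.bandwidth_mb_s

    path = [
        BNode("disk", 150.0),
        BNode("memory bus", 3200.0),
        BNode("NIC", 125.0),       # roughly 1 Gb/s Ethernet
        BNode("WAN link", 12.5),   # roughly 100 Mb/s
    ]
    print(pipeline_throughput(path))   # ('WAN link', 12.5)

    Because the model speaks only of throughput, the same analysis survives any change in the underlying technology: swap the invented numbers and the reasoning is unchanged, which is the future-proofing argument in miniature.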

    Resilience-Building Technologies: State of Knowledge -- ReSIST NoE Deliverable D12

    This document is the first product of work package WP2, "Resilience-building and -scaling technologies", in the programme of jointly executed research (JER) of the ReSIST Network of Excellence.

    Summer/Autumn 2004 Full Issue


    Security of Cyber-Physical Systems

    Cyber-physical system (CPS) innovations, in conjunction with their sibling computational and technological advancements, have positively impacted our society, leading to the establishment of new horizons of service excellence in a variety of fields of application. With the rapid increase in the use of CPSs in safety-critical infrastructures, their safety and security are the top priorities of next-generation designs. The potential consequences of CPS insecurity are far-reaching enough to make CPS security a core element of the CPS research agenda. Faults, failures, and cyber-physical attacks lead to variations in the dynamics of CPSs and cause instability and the malfunction of normal operations. This reprint discusses the existing vulnerabilities and focuses on detection, prevention, and compensation techniques to improve the security of safety-critical systems.
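    One representative detection technique from this literature is residual-based anomaly detection: compare each sensor reading against a model prediction and raise an alarm when the residual exceeds a threshold. The plant model, noise levels, bias attack and threshold below are all illustrative assumptions, not taken from the reprint.

    import random

    def plant_model(x):
        """Predicted next state of a hypothetical first-order process."""
        return 0.9 * x

    def detect_attacks(readings, threshold=0.5):
        """Flag time steps where |measured - predicted| exceeds the threshold."""
        alarms, x_est = [], readings[0]
        for t, z in enumerate(readings[1:], start=1):
            predicted = plant_model(x_est)
            if abs(z - predicted) > threshold:
                alarms.append(t)       # possible fault or cyber-physical attack
                x_est = predicted      # distrust the reading; coast on the model
            else:
                x_est = z              # accept the reading
        return alarms

    # Simulate normal operation, then a sensor bias injected from step 30 on.
    x, readings = 5.0, []
    for t in range(50):
        x = plant_model(x) + random.gauss(0, 0.05)
        readings.append(x + (2.0 if t >= 30 else 0.0))
    print("alarms at steps:", detect_attacks(readings))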

    Service-based Fault Tolerance for Cyber-Physical Systems: A Systems Engineering Approach

    Cyber-physical systems (CPSs) comprise networked computing units that monitor and control physical processes in feedback loops. CPSs have the potential to change the ways people and computers interact with the physical world by enabling new ways to control and optimize systems through improved connectivity and computing capabilities. Compared to classical control theory, these systems involve greater unpredictability, which may affect the stability and dynamics of the physical subsystems. Further uncertainty is introduced by the dynamic and open computing environments with rapidly changing connections and system configurations. However, due to interactions with the physical world, the dependable operation and tolerance of failures in both cyber and physical components are essential requirements for these systems.
    The problem of achieving dependable operation for open and networked control systems is approached using a systems engineering process to gain an understanding of the problem domain, since fault tolerance cannot be solved only as a software problem: CPSs require close coordination among hardware, software and physical objects. The research methodology consists of developing a concept design, implementing prototypes, and empirically testing the prototypes. Even though modularity has been acknowledged as a key element of fault tolerance, the fault tolerance of highly modular service-oriented architectures (SOAs) has been sparsely researched, especially in distributed real-time systems. This thesis proposes and implements an approach based on a loosely coupled real-time SOA to provide fault tolerance for a teleoperation system.
    Based on empirical experiments, modularity at the service level can be used to support fault tolerance (i.e., the isolation and recovery of faults). Fault recovery can be achieved for certain categories of faults (i.e., non-deterministic and aging-related) based on loose coupling and diverse operation modes. The proposed architecture also supports the straightforward integration of fault tolerance patterns, such as FAIL-SAFE, HEARTBEAT, ESCALATION and SERVICE MANAGER, which are used in the prototype systems to support dependability requirements. For service failures, systems rely on fail-safe behaviours, diverse modes of operation and fault escalation to backup services. Instead of time-bounded reconfiguration, services operate with best-effort capabilities, providing resilience for the system. This enables, for example, on-the-fly service changes, smooth recoveries from service failures and adaptations to new computing environments, which are essential requirements for CPSs.
    The results are combined into a systems engineering approach to dependability, which includes an analysis of the role of safety-critical requirements in control system software architecture design, an architectural design, a dependability-case development approach for CPSs, and domain-specific fault taxonomies that support dependability case development and system reliability analyses. Other contributions of this work include three new patterns for fault tolerance in CPSs: DATA-CENTRIC ARCHITECTURE, LET IT CRASH and SERVICE MANAGER. These are presented together with a pattern language that shows how they relate to other patterns available for the domain.
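    A minimal single-process sketch of two of the named patterns, HEARTBEAT and ESCALATION (with FAIL-SAFE as the last resort), under assumed interfaces: the thesis prototypes are distributed real-time services, so this Python sketch shows only the control logic, with toy stand-ins for the services themselves.

    import time

    class Service:
        """Toy stand-in for a networked service that answers liveness probes."""
        def __init__(self, name, fail_after=None):
            self.name, self.fail_after, self.beats = name, fail_after, 0

        def heartbeat(self):
            self.beats += 1
            return self.fail_after is None or self.beats < self.fail_after

    class HeartbeatMonitor:
        """HEARTBEAT: poll the active service; ESCALATION: after repeated misses,
        fail over to the next backup; FAIL-SAFE: halt if no backups remain."""
        def __init__(self, services, max_misses=3):
            self.services, self.max_misses = list(services), max_misses

        def run(self, ticks, interval_s=0.0):
            active, misses = 0, 0
            for _ in range(ticks):
                if self.services[active].heartbeat():
                    misses = 0
                else:
                    misses += 1
                    if misses >= self.max_misses:
                        active, misses = active + 1, 0   # escalate to backup
                        if active >= len(self.services):
                            print("no backups left: entering fail-safe state")
                            return
                        print("escalated to:", self.services[active].name)
                time.sleep(interval_s)

    HeartbeatMonitor([Service("primary", fail_after=5), Service("backup")]).run(20)

    Tolerating missed heartbeats up to a limit before escalating is the design choice that lets non-deterministic, transient faults recover in place, while persistent failures still trigger failover.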