    What makes industries believe in formal methods

    The introduction of formal methods in the design and development departments of an industrial company has far-reaching and long-lasting consequences. In fact, it changes the whole environment of methods, tools and skills that determine the design culture of that company. A decision to replace current design practice with formal methods therefore appears a vital one, and is not taken lightly. The past has shown that efforts to introduce formal methods in industry have faced considerable controversy and opposition at various hierarchical levels in companies, resulting in only a marginal spread of such methods. This paper revisits the requirements for formal description techniques and identifies some critical success factors and inhibiting factors associated with the introduction of formal methods into industrial practice. One of the inhibiting factors is the often-encountered inability of the formal model to express and manipulate the design concerns that determine the world of the engineer. This factor motivated our research in the area of architectural and implementation design concepts. The last two sections of this paper report on some results of this research.

    Hazard Contribution Modes of Machine Learning Components

    Amongst the essential steps to be taken towards developing and deploying safe systems with embedded learning-enabled components (LECs), i.e., software components that use machine learning (ML), are to analyze and understand the contribution of the constituent LECs to safety, and to assure that those contributions have been appropriately managed. This paper addresses both steps by, first, introducing the notion of hazard contribution modes (HCMs), a categorization of the ways in which the ML elements of LECs can contribute to hazardous system states; and, second, describing how argumentation patterns can capture the reasoning used to assure HCM mitigation. Our framework is generic in the sense that the categories of HCMs developed (i) can admit different learning schemes, i.e., supervised, unsupervised, and reinforcement learning, and (ii) are not dependent on the type of system in which the LECs are embedded, i.e., it covers both cyber and cyber-physical systems. One of the goals of this work is to serve as a starting point for systematizing HCM analysis towards eventually automating it in a tool.
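
    As a rough illustration of what such a categorization could look like in practice, here is a minimal Python sketch, with all names (LearningScheme, HazardContributionMode, the example entries) invented for illustration rather than taken from the paper: each catalogued HCM records the learning schemes under which it can arise and the top-level claim of the argument pattern assuring its mitigation.

        from dataclasses import dataclass
        from enum import Enum, auto

        class LearningScheme(Enum):
            # The three learning schemes the HCM categories admit.
            SUPERVISED = auto()
            UNSUPERVISED = auto()
            REINFORCEMENT = auto()

        @dataclass
        class HazardContributionMode:
            # One way an ML element of an LEC can contribute to a
            # hazardous system state; field names are illustrative.
            name: str
            schemes: frozenset
            mitigation_claim: str = ""  # top-level assurance claim, if any

        def unmitigated(catalogue):
            # HCMs that still lack an assurance argument.
            return [m for m in catalogue if not m.mitigation_claim]

        catalogue = [
            HazardContributionMode(
                name="erroneous output inside the specified operating domain",
                schemes=frozenset({LearningScheme.SUPERVISED}),
                mitigation_claim="residual misclassification risk is acceptably managed",
            ),
            HazardContributionMode(
                name="unsafe exploration during policy learning",
                schemes=frozenset({LearningScheme.REINFORCEMENT}),
            ),
        ]
        print([m.name for m in unmitigated(catalogue)])

    A check like unmitigated is the kind of step that the envisaged tool support could eventually automate over a full HCM catalogue.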

    Engineering simulations for cancer systems biology

    Computer simulation can be used to inform in vivo and in vitro experimentation, enabling rapid, low-cost hypothesis generation and directing experimental design in order to test those hypotheses. In this way, in silico models become a scientific instrument for investigation, and so should be developed to high standards, be carefully calibrated, and have their findings presented in such a way that they may be reproduced. Here, we outline a framework that supports developing simulations as scientific instruments, and we select cancer systems biology as an exemplar domain, with a particular focus on cellular signalling models. We consider the challenges of lack of data, incomplete knowledge and modelling in the context of a rapidly changing knowledge base. Our framework comprises a process that clearly separates scientific and engineering concerns in model and simulation development, and an argumentation approach to documenting models as a rigorous way of recording assumptions and knowledge gaps. We propose interactive, dynamic visualisation tools to enable the biological community to interact with cellular signalling models directly for experimental design. There is a mismatch in scale between these cellular models and the tissue structures that are affected by tumours, and bridging this gap requires substantial computational resources. We present concurrent programming as a technology for linking scales without losing important details through model simplification. We discuss the value of combining this technology, interactive visualisation, argumentation and model separation to support the development of multi-scale models that represent biologically plausible cells arranged in biologically plausible structures, and that model cell behaviour, interactions and response to therapeutic interventions.
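
    As a toy illustration of the concurrent-programming idea, the Python sketch below (all names and values invented, and vastly simpler than a real signalling model) runs a cell-scale model and a tissue-scale observer concurrently and links the two scales through a message queue, rather than simplifying one scale into the other.

        import queue
        import threading

        cell_to_tissue = queue.Queue()  # channel linking the two scales

        def cell_scale_model(steps):
            signal = 1.0
            for t in range(steps):
                signal *= 0.9                    # toy decay of a signalling species
                cell_to_tissue.put((t, signal))  # publish state to the tissue scale
            cell_to_tissue.put(None)             # sentinel: simulation finished

        def tissue_scale_model():
            while True:
                msg = cell_to_tissue.get()
                if msg is None:
                    break
                t, signal = msg
                # A real model would update tissue structure from many cells.
                print(f"step {t}: aggregate signal {signal:.3f}")

        threading.Thread(target=cell_scale_model, args=(5,)).start()
        tissue_scale_model()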

    Maximising transparency in a doctoral thesis: The complexities of writing about the use of QSR*NVIVO within a grounded theory study

    This paper discusses the challenges of providing a transparent account of the use of the software programme QSR*NVIVO (QSR 2000) within a Grounded Theory framework (Glaser and Strauss 1967; Strauss and Corbin 1998). Psychology students are increasingly pursuing qualitative research projects, to the extent that the UK Economic and Social Research Council (ESRC) advises that students should have skill in the use of computer-assisted qualitative data analysis software (CAQDAS) (Economic and Social Research Council 2001). Unlike quantitative studies, rigid formulae do not exist for writing up qualitative projects for doctoral theses. Most authors, however, agree that transparency is essential when communicating the findings of qualitative research. Sparkes (2001) recommends that evaluative criteria for qualitative research should be commensurable with the aims, objectives, and epistemological assumptions of the research project. Likewise, the use of CAQDAS should vary according to the research methodology followed, and thus researchers should include a discussion of how CAQDAS was used. This paper describes how the evolving processes of coding data, writing memos, categorising, and theorising were integrated into the written thesis. The structure of the written document is described, including considerations about restructuring and the difficulties of writing about an iterative process within a linear document.

    Defining the Proper Scope of Internet Patents: If We Don't Know Where We Want to Go, We're Unlikely to Get There

    Part I of this Article addresses the appropriateness of protecting Internet innovations under the current patent regime. It concludes that the doctrinal, historical and policy arguments require different outcomes for computing innovation (patentable subject matter) and competitive arts innovation (at best a difficult fit). Part II argues that the new electronic economy has given rise to a particular kind of competitive arts market failure (interference with first-to-move lead-time incentives) which must be addressed. It concludes, however, that tinkering with the existing patent or copyright regimes is not only complex but also poses significant risks, and should be avoided. Part III sketches the outlines of a proposed competitive arts regime, combining the qualification features of patent law with the more nuanced approach to rights and remedies of copyright law. Part IV concludes by outlining a number of interim measures necessary to mitigate the effects of protecting the competitive arts under traditional patent law while awaiting the arrival of the new regime.

    Securing Safety in Collaborative Cyber-Physical Systems through Fault Criticality Analysis

    Collaborative Cyber-Physical Systems (CCPS) are systems that contain tightly coupled physical and cyber components and massively interconnected subsystems, and that collaborate to achieve a common goal. The safety of a single Cyber-Physical System (CPS) can be achieved by following safety standards such as ISO 26262 and IEC 61508, or by applying hazard analysis techniques. However, due to the complex, highly interconnected, heterogeneous, and collaborative nature of CCPS, a fault in one CPS's components can trigger many other faults in collaborating CPSs. Therefore, a safety assurance technique based on fault criticality analysis is required to ensure safety in CCPS. This paper presents a Fault Criticality Matrix (FCM), implemented in our tool CPSTracer, which records data such as the identified fault, its criticality, and its safety guard. The proposed FCM is based on composite hazard analysis and on content-based relationships among the hazard analysis artifacts, and it ensures that the safety guard controls the identified faults at design time; thus, faults can be effectively managed and controlled at the design phase to ensure the safe development of CPSs. To validate our approach, we introduce a case study on a platooning system (a collaborative CPS). We perform the criticality analysis of the platooning system using the FCM in our tool. After the detailed fault criticality analysis, we investigate the results to check the appropriateness and effectiveness of the approach against two research questions. By simulating the platooning system, we also show that its collision rate without the FCM was considerably higher than after the fault criticality analysis with the FCM.

    Comment: This paper is an extended version of an article submitted to KCSE-202
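
    The following minimal Python sketch suggests how an FCM row and a design-time check might look; the field names and platooning entries are invented for illustration, and the paper's FCM and CPSTracer tool hold considerably richer hazard-analysis artifacts.

        from dataclasses import dataclass
        from typing import List, Optional

        @dataclass
        class FCMEntry:
            # One row of a Fault Criticality Matrix (illustrative fields).
            fault: str
            criticality: int             # e.g. 1 (negligible) .. 4 (catastrophic)
            safety_guard: Optional[str]  # design-time mitigation, if one exists

        def unguarded_critical(fcm: List[FCMEntry], threshold: int = 3):
            # Design-time check: critical faults not yet controlled by a guard.
            return [e.fault for e in fcm
                    if e.criticality >= threshold and e.safety_guard is None]

        # Hypothetical platooning entries: a fault in one vehicle can
        # propagate to its collaborators, hence the high criticality.
        fcm = [
            FCMEntry("lead-vehicle lidar dropout", 4,
                     "fall back to V2V distance estimate"),
            FCMEntry("delayed V2V braking message", 3, None),
        ]
        print(unguarded_critical(fcm))  # -> ['delayed V2V braking message']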

    Creativity and Information Systems: A Theoretical and Empirical Investigation of Creativity in IS

    Be productive. Since the industrial revolution, managers have had an almost singular focus on equipping employees with productivity tools in productivity-supportive environments. Information technologies, systems designed to increase productivity, entered the marketplace in the 1980s and were initially credited with the subsequent boom. Eventually, innovation was shown to be the primary spark, and the managerial focus shifted. Increasingly, the imperative is: be creative. This dissertation investigates how a technology environment designed to be fast and mechanistic influences the slow and organic act of creativity. Creativity, the production of novel and useful solutions, can be an elusive subject and has a varied history within Information Systems (IS) research, so the first essay is devoted to conducting an historical analysis of creativity research across several domains and developing a holistic, technologically aware framework for researching creativity in modern organizations. IS literature published in the Senior Scholars' journals is then mapped to the proposed framework as a means of identifying unexplored regions of the creativity phenomenon. This essay concludes with a discussion of future directions for creativity research within IS. The second essay integrates task-technology fit and conservation of resources theory and employs an experimental design to explore the task of being creative with an IS. Borrowing from fine arts research, the concept of IS Mastery is introduced as a resource which, when deployed efficiently, acts to conserve resources and enhance performance on cognitively demanding creative tasks. The third essay investigates an expectedly strong but unexpectedly negative relationship between technology fit and creative performance. This finding launches an exploration into alternative study designs, theoretical models and performance measures in search of the true nature of the relationship between creativity and technology fit. The essay concludes with an updated map of the technology-to-performance chain. These essays contribute to IS research by creating a technology-aware creativity framework for motivating and positioning future research, by showing that an IS is neither a neutral nor a frictionless collaborator in creative tasks, and by exposing the inhibiting effects of a well-fitting technology on creative performance.

    Verifiable self-certifying autonomous systems

    Autonomous systems are increasingly being used in safety- and mission-critical domains, including aviation, manufacturing, healthcare and the automotive industry. Systems for such domains are often verified with respect to essential requirements set by a regulator, as part of a process called certification. In principle, autonomous systems can be deployed if they can be certified for use. However, certification is especially challenging, as the conditions of both the system and its environment will surely change, limiting the effective use of the system. In this paper we discuss the technological and regulatory background for such systems, and introduce an architectural framework that supports verifiably correct dynamic self-certification by the system, potentially allowing deployed systems to operate more safely and effectively.
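
    As a loose sketch of the dynamic self-certification idea, the Python fragment below (every name and condition here is an assumption for illustration, not the paper's framework) re-checks a stand-in certification condition as the system and environment change, and drops to conservative operation when the check fails.

        def certificate_holds(system_state, environment):
            # Stand-in for re-running the verification behind certification.
            return system_state["sensors_ok"] and environment["visibility"] >= 0.5

        def self_certifying_controller(observations):
            for system_state, environment in observations:
                if certificate_holds(system_state, environment):
                    yield "full autonomy"
                else:
                    yield "fallback mode"  # conservative until re-certified

        modes = self_certifying_controller([
            ({"sensors_ok": True}, {"visibility": 0.9}),
            ({"sensors_ok": True}, {"visibility": 0.2}),  # environment changed
        ])
        print(list(modes))  # -> ['full autonomy', 'fallback mode']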