
    Automatic allocation of safety requirements to components of a software product line

    Safety critical systems developed as part of a product line must still comply with safety standards. Standards use the concept of Safety Integrity Levels (SILs) to drive the assignment of system safety requirements to components of a system under design. However, for a Software Product Line (SPL), the safety requirements that need to be allocated to a component may vary across products. Variation in design can change the possible hazards incurred in each product and their causes, and can alter the safety requirements placed on individual components in different SPL products. Establishing common SILs for components of a large-scale SPL by considering all possible usage scenarios is desirable for economies of scale, but it also poses challenges to the safety engineering process. In this paper, we propose a method for automatic allocation of SILs to components of a product line. The approach is applied to a Hybrid Braking System SPL design.
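
    To make the allocation idea concrete, here is a minimal, hypothetical sketch: if each product's hazard analysis yields a required SIL per shared component, one conservative way to establish common SILs is to take the most stringent requirement over all products. The component names, SIL values and the max-over-products rule are illustrative assumptions, not the paper's actual algorithm.

```python
# Minimal sketch of cross-product SIL allocation for an SPL.
# All names, numbers and the max-over-products rule are assumptions
# made for illustration, not the paper's method.

from typing import Dict

# Required SIL per component, derived separately from each product's
# hazard analysis (hypothetical numbers).
per_product_sils: Dict[str, Dict[str, int]] = {
    "product_A": {"brake_pedal": 3, "wheel_controller": 2},
    "product_B": {"brake_pedal": 2, "wheel_controller": 4},
}

def allocate_common_sils(products: Dict[str, Dict[str, int]]) -> Dict[str, int]:
    """Assign each shared component the most stringent SIL it needs
    in any product, so one implementation satisfies all usage scenarios."""
    common: Dict[str, int] = {}
    for sils in products.values():
        for component, sil in sils.items():
            common[component] = max(common.get(component, 0), sil)
    return common

print(allocate_common_sils(per_product_sils))
# {'brake_pedal': 3, 'wheel_controller': 4}
```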

    Factory modelling: data guidance for analysing production, utility and building architecture systems

    Work on energy and resource reduction in factories is dependent on the availability of data. Typically, available sources are incomplete or inappropriate for direct use, and manipulation is required. Identifying new improvement opportunities through simulation across factory production, utility and building architecture domains requires analysis of model feasibility, particularly in terms of system data composition, input resolution and simulation result fidelity. This paper reviews literature on developing appropriate model data for assessing energy and material flows at factory level. Gaps are found in guidance for analysis and integration of resource flows across system boundaries. The process by which data was prepared, input and iteratively developed alongside conceptual and simulation models is described. The case of a large-scale UK manufacturer is presented alongside discussions of the challenges associated with factory-level modelling, and the insights gained from understanding the effect of data clarity on system performance.
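
    As an illustration of the data-composition issues discussed here, the sketch below shows one possible way to record resource flows that cross production, utility and building boundaries, including their input resolution. All field names, units and values are assumptions for the example, not taken from the paper.

```python
# Illustrative sketch only: representing a resource flow that crosses
# production, utility and building-architecture system boundaries.
# Field names, units and values are invented for the example.

from dataclasses import dataclass

@dataclass
class ResourceFlow:
    source_system: str      # e.g. "utility"
    sink_system: str        # e.g. "production"
    resource: str           # e.g. "compressed_air"
    rate: float             # flow rate in the given unit
    unit: str               # e.g. "m3/h"
    resolution_s: int       # input data resolution in seconds

flows = [
    ResourceFlow("utility", "production", "compressed_air", 120.0, "m3/h", 60),
    ResourceFlow("production", "building", "waste_heat", 15.0, "kW", 900),
]

# Mixed input resolutions are one of the integration issues raised above:
coarsest = max(f.resolution_s for f in flows)
print(f"Simulation time step limited to {coarsest} s by the coarsest input")
```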

    Analysis of Software Binaries for Reengineering-Driven Product Line Architecture – An Industrial Case Study

    This paper describes a method for recovering software architectures from a set of similar (but unrelated) software products in binary form. One intention is to drive refactoring into software product lines, combining architecture recovery with runtime binary analysis and existing clustering methods. Using our runtime binary analysis, we create graphs that capture the dependencies between different software parts. These are clustered into smaller component graphs that group software parts with high interaction into larger entities. The component graphs serve as a basis for further software product line work. In this paper, we concentrate on the analysis part of the method and the graph clustering. We apply the graph clustering method to a real application in the context of automation/robot configuration software tools. (In Proceedings FMSPLE 2015, arXiv:1504.0301)
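
    The sketch below illustrates the clustering step in miniature: parts of a binary whose runtime interaction weight exceeds a threshold are merged into one component using union-find. The edge data and threshold are invented, and union-find is a simple stand-in for the paper's actual clustering method.

```python
# Sketch of the clustering idea: group binary parts whose runtime
# dependency weight (e.g. call counts) exceeds a threshold, using
# union-find as a simple stand-in for the paper's clustering method.
# Edge data and the threshold are invented for the example.

from collections import defaultdict

edges = [  # (part_a, part_b, interaction_weight) - hypothetical data
    ("parser.dll", "config.dll", 40),
    ("config.dll", "robot_io.dll", 35),
    ("ui.dll", "parser.dll", 2),
]

parent: dict = {}

def find(x):
    """Return the representative of x's component (with path compression)."""
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

THRESHOLD = 10
for a, b, w in edges:
    find(a); find(b)              # register both nodes
    if w >= THRESHOLD:            # merge only strongly interacting parts
        union(a, b)

components = defaultdict(list)
for node in parent:
    components[find(node)].append(node)
print(list(components.values()))
# e.g. [['parser.dll', 'config.dll', 'robot_io.dll'], ['ui.dll']]
```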

    ANCHOR: logically-centralized security for Software-Defined Networks

    While the centralization of SDN brought advantages such as a faster pace of innovation, it also disrupted some of the natural defenses of traditional architectures against different threats. The literature on SDN has mostly been concerned with the functional side, despite some specific works concerning non-functional properties like 'security' or 'dependability'. Though addressing the latter in an ad-hoc, piecemeal way may work, it will most likely lead to efficiency and effectiveness problems. We claim that the enforcement of non-functional properties as a pillar of SDN robustness calls for a systemic approach. As a general concept, we propose ANCHOR, a subsystem architecture that promotes the logical centralization of non-functional properties. To show the effectiveness of the concept, we focus on 'security' in this paper: we identify the current security gaps in SDNs and we populate the architecture middleware with the appropriate security mechanisms, in a global and consistent manner. Essential security mechanisms provided by ANCHOR include reliable entropy and resilient pseudo-random generators, and protocols for secure registration and association of SDN devices. We claim and justify in the paper that centralizing such mechanisms is key to their effectiveness, as it allows us to: define and enforce global policies for those properties; reduce the complexity of controllers and forwarding devices; ensure higher levels of robustness for critical services; foster interoperability of the non-functional property enforcement mechanisms; and promote the security and resilience of the architecture itself. We discuss design and implementation aspects, and we prove and evaluate our algorithms and mechanisms, including the formalisation of the main protocols and the verification of their core security properties using the Tamarin prover. (42 pages, 4 figures, 3 tables, 5 algorithms, 139 references)
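
    As a rough illustration of what logically centralized association could look like, the sketch below has an "anchor" derive a per-device key at registration and later verify message authenticity. The message formats and the HMAC-based key derivation are assumptions made for the example; they are not ANCHOR's actual protocols.

```python
# Hedged sketch in the spirit of ANCHOR: a logically centralized
# "anchor" registers devices and issues per-device keys. The use of
# HMAC as a key-derivation and authentication primitive here is an
# illustrative assumption, not the protocol from the paper.

import hmac, hashlib, secrets

class Anchor:
    def __init__(self):
        self.master = secrets.token_bytes(32)   # stands in for a reliable entropy source
        self.devices = {}

    def register(self, device_id: str) -> bytes:
        # Derive a per-device key so compromising one device does not
        # expose the others (simple HMAC-based derivation).
        key = hmac.new(self.master, device_id.encode(), hashlib.sha256).digest()
        self.devices[device_id] = key
        return key   # in reality delivered over a protected channel

    def verify(self, device_id: str, msg: bytes, tag: bytes) -> bool:
        key = self.devices[device_id]
        expected = hmac.new(key, msg, hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)

anchor = Anchor()
k = anchor.register("switch-01")
msg = b"flow_mod: drop 10.0.0.0/8"
tag = hmac.new(k, msg, hashlib.sha256).digest()
print(anchor.verify("switch-01", msg, tag))  # True
```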

    Engineering simulations for cancer systems biology

    Computer simulation can be used to inform in vivo and in vitro experimentation, enabling rapid, low-cost hypothesis generation and directing experimental design in order to test those hypotheses. In this way, in silico models become a scientific instrument for investigation, and so should be developed to high standards, be carefully calibrated, and have their findings presented in such a way that they may be reproduced. Here, we outline a framework that supports developing simulations as scientific instruments, and we select cancer systems biology as an exemplar domain, with a particular focus on cellular signalling models. We consider the challenges of lack of data, incomplete knowledge and modelling in the context of a rapidly changing knowledge base. Our framework comprises a process to clearly separate scientific and engineering concerns in model and simulation development, and an argumentation approach to documenting models as a rigorous way of recording assumptions and knowledge gaps. We propose interactive, dynamic visualisation tools to enable the biological community to interact with cellular signalling models directly for experimental design. There is a mismatch in scale between these cellular models and the tissue structures that are affected by tumours, and bridging this gap requires substantial computational resources. We present concurrent programming as a technology to link scales without losing important details through model simplification. We discuss the value of combining this technology, interactive visualisation, argumentation and model separation to support the development of multi-scale models that represent biologically plausible cells arranged in biologically plausible structures, and that model cell behaviour, interactions and response to therapeutic interventions.
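
    The sketch below illustrates, under stated assumptions, the concurrency idea for linking scales: cells run as concurrent workers emitting signalling events, and a tissue-level aggregator consumes them, so cell-level detail is retained rather than averaged away up front. The dynamics and numbers are toys invented for the example.

```python
# Toy sketch of concurrency linking scales: concurrent cell workers
# emit signalling events; a tissue-level consumer aggregates them.
# The signalling dynamics are invented for illustration.

import threading, queue, random

events: queue.Queue = queue.Queue()

def cell(cell_id: int, steps: int) -> None:
    level = 1.0
    for _ in range(steps):
        level *= random.uniform(0.9, 1.1)   # toy signalling dynamics
        events.put((cell_id, level))

threads = [threading.Thread(target=cell, args=(i, 100)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Tissue scale: aggregate the per-cell signals into a summary statistic.
total, n = 0.0, 0
while not events.empty():
    _, level = events.get()
    total += level
    n += 1
print(f"mean signal across tissue: {total / n:.3f}")
```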

    What use are formal design and analysis methods to telecommunications services?

    Have formal methods failed, or will they fail, to help us solve the problems of detecting and resolving feature interactions in telecommunications software? This paper contains a SWOT (Strengths, Weaknesses, Opportunities and Threats) analysis of the use of formal design and analysis methods in feature interaction analysis, and makes some suggestions for future research.
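
    As a toy example of the kind of feature interaction such methods aim to detect, the sketch below flags events to which two telephony features assign conflicting actions. The feature definitions are invented for illustration.

```python
# Toy illustration of a feature interaction: two telephony features
# prescribe different actions for the same event. The features and
# rules are invented for the example.

features = {
    "call_forwarding": {"incoming_call_while_busy": "forward"},
    "voicemail":       {"incoming_call_while_busy": "record"},
}

def detect_interactions(features):
    """Flag events to which two features assign different actions."""
    seen, conflicts = {}, []
    for name, rules in features.items():
        for event, action in rules.items():
            if event in seen and seen[event][1] != action:
                conflicts.append((seen[event][0], name, event))
            seen[event] = (name, action)
    return conflicts

print(detect_interactions(features))
# [('call_forwarding', 'voicemail', 'incoming_call_while_busy')]
```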

    Integrating IVHM and Asset Design

    Integrated Vehicle Health Management (IVHM) describes a set of capabilities that enable effective and efficient maintenance and operation of the target vehicle. It accounts for the collection of data, the conduct of analysis, and the support of the decision-making process for sustainment and operation. The design of IVHM systems endeavours to account for all causes of failure in a disciplined, systems engineering manner. With industry striving to reduce through-life cost, IVHM is a powerful tool for giving forewarning of impending failure and hence control over the outcome. Benefits have been realised from this approach across a number of different sectors but, hindering our ability to realise further benefit from this maturing technology, is the fact that IVHM is still treated as an add-on to the design of the asset, rather than as a sub-system in its own right, fully integrated with the asset design. Elevating and integrating IVHM in this way will enable architectures to be chosen that accommodate health-ready sub-systems from the supply chain, and design trade-offs to be made, to name but two major benefits. Barriers to IVHM being integrated with the asset design are examined in this paper. The paper presents progress in overcoming them, and suggests potential solutions for those that remain. It addresses IVHM system design from a systems engineering perspective and describes its integration with the asset design within an industrial design process.
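
    A minimal sketch of the data-to-decision chain described above, assuming invented sensor names and thresholds: collect a signal, analyse its trend, and support a maintenance decision early enough to give control over the outcome.

```python
# Hedged sketch of an IVHM data-to-decision chain: collect sensor
# data, analyse the trend, support a maintenance decision. Sensor
# names, thresholds and readings are illustrative assumptions.

from statistics import mean

def assess(vibration_history: list, warn: float = 4.0) -> str:
    """Trend a vibration signal and flag impending failure early,
    giving the forewarning (and control over the outcome) IVHM targets."""
    recent = mean(vibration_history[-5:])
    baseline = mean(vibration_history[:5])
    if recent > warn:
        return "schedule maintenance now"
    if recent > 1.5 * baseline:
        return "degradation trend: plan maintenance"
    return "healthy"

readings = [1.0, 1.1, 0.9, 1.0, 1.2, 1.4, 1.6, 1.9, 2.1, 2.4]
print(assess(readings))   # "degradation trend: plan maintenance"
```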

    Evaluating kernels on Xeon Phi to accelerate Gysela application

    This work describes the challenges presented by porting parts of the Gysela code to the Intel Xeon Phi coprocessor, as well as techniques used for optimization, vectorization and tuning that can be applied to other applications. We evaluate the performance of some generic micro-benchmarks on the Phi versus Intel Sandy Bridge. Several interpolation kernels useful for the Gysela application are analyzed and their performance is shown. Some memory-bound and compute-bound kernels are accelerated by a factor of 2 on the Phi device compared to the Sandy Bridge architecture. Nevertheless, it is hard, if not impossible, to reach a large fraction of the peak performance on the Phi device, especially for real-life applications such as Gysela. A collateral benefit of this optimization and tuning work is that the execution time of Gysela (using 4D advections) has decreased on a standard architecture such as Intel Sandy Bridge. (Submitted to ESAIM proceedings for CEMRACS 2014 summer school; reviewed version)
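
    To illustrate the scalar-versus-vectorized contrast underlying this kind of kernel tuning, the sketch below uses NumPy vectorization as a stand-in for SIMD on the Phi, with linear interpolation replacing the paper's actual interpolation kernels.

```python
# Illustrative only: scalar vs vectorized interpolation, with NumPy
# vectorization standing in for SIMD on Xeon Phi. Linear interpolation
# replaces the paper's actual interpolation kernels.

import numpy as np

def interp_scalar(x, xp, fp):
    out = np.empty_like(x)
    for i, xi in enumerate(x):          # one element per iteration
        j = np.searchsorted(xp, xi) - 1
        t = (xi - xp[j]) / (xp[j + 1] - xp[j])
        out[i] = (1 - t) * fp[j] + t * fp[j + 1]
    return out

def interp_vector(x, xp, fp):
    j = np.searchsorted(xp, x) - 1      # whole array at once
    t = (x - xp[j]) / (xp[j + 1] - xp[j])
    return (1 - t) * fp[j] + t * fp[j + 1]

xp = np.linspace(0.0, 1.0, 64)          # uniform grid
fp = np.sin(2 * np.pi * xp)             # sampled function values
x = np.random.uniform(0.01, 0.99, 10_000)
assert np.allclose(interp_scalar(x, xp, fp), interp_vector(x, xp, fp))
```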
