
    Specification: The Biggest Bottleneck in Formal Methods and Autonomy

    Advancement of AI-enhanced control in autonomous systems stands on the shoulders of formal methods, which make possible the rigorous safety analysis autonomous systems require. An aircraft cannot operate autonomously unless it has design-time reasoning to ensure correct operation of the autopilot and runtime reasoning to ensure system health management, i.e., the ability to detect and respond to off-nominal situations. Formal methods are highly dependent on the specifications over which they reason; there is no escaping the “garbage in, garbage out” reality. Specification is difficult, unglamorous, and arguably the biggest bottleneck facing verification and validation of aerospace and other autonomous systems. This VSTTE invited talk and paper examine the outlook for the practice of formal specification and highlight the ongoing challenges of specification, from design time to runtime system health management. We exemplify these challenges for specifications in Linear Temporal Logic (LTL), though the focus is not limited to that specification language. We pose challenge questions for specification that will shape both the future of formal methods and our ability to more automatically verify and validate autonomous systems of greater variety and scale. We call for further research into LTL Genesis.
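    As a concrete, hedged illustration of how easily intent and formula diverge (this example is ours, not taken from the talk): an engineer who means "every request is eventually acknowledged" must write the LTL response pattern, not a superficially similar formula.

        G (request -> F ack)    -- intended: every request is eventually acknowledged
        F ack                   -- too weak: a single acknowledgement, ever, suffices
        G (request -> ack)      -- too strong: demands an instantaneous response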

    Survey on Safety Evidence Change Impact Analysis in Practice: Detailed Description and Analysis

    Critical systems must comply with safety standards in many application domains. This involves gathering safety evidence in the form of artefacts such as safety analyses, system specifications, and testing results. These artefacts can evolve during a system’s lifecycle, and impact analysis might be necessary to guarantee that system safety and compliance are not jeopardised. Although extensive research has been conducted on impact analysis and on safety evidence management, little is known about how safety evidence change impact analysis is addressed in practice. This technical report presents a survey targeted at filling this gap by analysing the circumstances under which safety evidence change impact analysis is addressed, the tool support used, and the challenges faced. We obtained 97 valid responses representing 16 application domains, 28 countries, and 47 safety standards. The results suggest that most projects deal with safety evidence change impact analysis during system development, mainly starting from system specifications; that the level of automation in the process is low; and that insufficient tool support is the most frequent challenge. Other notable findings are that safety case evolution should probably be better managed, that no commercial impact analysis tool has been reported as used for all artefact types, and that experience and automation do not seem to greatly help in avoiding challenges.
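    As a hedged sketch of the kind of automation respondents reported missing (our illustration, not an artefact of the survey): change impact analysis over traceability links can be cast as a reachability walk, here over a hypothetical trace graph with invented artefact names.

        from collections import deque

        # Hypothetical traceability links: artefact -> artefacts derived from it
        TRACES = {
            "hazard analysis": ["system specification"],
            "system specification": ["design", "test results"],
            "design": ["test results"],
        }

        def impacted(changed_artefact):
            """Breadth-first walk listing all downstream safety evidence."""
            seen, queue = set(), deque([changed_artefact])
            while queue:
                for succ in TRACES.get(queue.popleft(), []):
                    if succ not in seen:
                        seen.add(succ)
                        queue.append(succ)
            return seen

        print(impacted("system specification"))  # -> {'design', 'test results'}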

    Safety related cyber-attacks identification and assessment for autonomous inland ships

    Recent advances in the maritime industry include the research and development of sophisticated new ships, among them autonomous ships. The autonomy concept, however, comes at the cost of additional complexity, introduced by the number of systems that must be installed on board and ashore, the software intensiveness of the complete system, the interactions between systems, components, and humans, and the increased connectivity. All of the above increases system vulnerability to cyber-attacks, which may lead to unavailability or hazardous behaviour of critical ship systems. The aim of this study is the identification of safety-related cyber-attacks on the navigation and propulsion systems of an autonomous inland ship, as well as the safety enhancement of the ship systems design. For this purpose, the Cyber Preliminary Hazard Analysis method is employed, supported by a literature review of system vulnerabilities and potential cyber-attacks. The Formal Safety Assessment risk matrix is employed to rank the hazardous scenarios. The results demonstrate that a number of critical scenarios can arise on the investigated autonomous vessel due to known vulnerabilities; these can be sufficiently controlled by introducing appropriate modifications to the system design.
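    A minimal sketch of risk-matrix ranking in the spirit of the IMO Formal Safety Assessment convention, where the risk index is the sum of logarithmic frequency and severity indices; the scenarios and index values below are invented for illustration, not taken from the study.

        # Each hazardous scenario: (frequency index FI, severity index SI)
        scenarios = {
            "GNSS spoofing of the navigation system": (5, 3),
            "Malware on the propulsion controller": (3, 4),
            "Denial of service on the shore control link": (4, 2),
        }

        # FSA convention: risk index RI = FI + SI (both on log scales)
        for name, (fi, si) in sorted(scenarios.items(),
                                     key=lambda kv: sum(kv[1]),
                                     reverse=True):
            print(f"RI={fi + si}  {name}")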

    Formal Requirements Analysis and Specification-Based Testing in Cyber-Physical Systems

    Formal requirements analysis plays an important role in the design of safety- and security-critical complex systems such as Cyber-Physical Systems (CPS). It can help detect problems early in the system development life-cycle, reducing time and cost to completion. Moreover, its results can be employed at the end of the process to validate the implemented system, guiding the testing phase. Despite its importance, requirements analysis is still largely carried out manually due to the intrinsic difficulty of dealing with natural-language requirements, the most common way to represent them. However, manual reviews are time-consuming and error-prone, reducing the potential benefit of the requirements engineering process. Automation can be achieved with formal methods, but their application is still limited by their complexity and by the lack of specialized tools. In this work we focus on the analysis of requirements for the design of CPS, and on how to automate some of the activities related to such analysis. We first study how to formalize requirements expressed in a structured English language, encode them in linear temporal logic, check their consistency with off-the-shelf model checkers, and find minimal sets of conflicting requirements in case of inconsistency. We then present a new methodology to automatically generate tests from requirements and execute them on a given system, without requiring knowledge of its internal structure. Finally, we provide a set of tools that implement the studied algorithms and offer easy-to-use interfaces to ease their adoption by users.
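    A minimal sketch of the last step above, finding a minimal set of conflicting requirements by deletion-based minimisation (our illustration, not the thesis's tool); is_consistent is a hypothetical stub that would, in practice, check satisfiability of the conjoined LTL encodings with an off-the-shelf model checker.

        def is_consistent(requirements):
            # Hypothetical: in practice, check satisfiability of the
            # conjunction of the requirements' LTL encodings.
            raise NotImplementedError

        def minimal_conflict(requirements):
            """Shrink an inconsistent set until every requirement is needed."""
            assert not is_consistent(requirements)
            core = list(requirements)
            for req in list(core):
                trial = [r for r in core if r is not req]
                if not is_consistent(trial):
                    core = trial  # req was not needed for the conflict
            return core  # removing any remaining element restores consistency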

    Systems Engineering

    The book “Systems Engineering: Practice and Theory” is a collection of articles written by developers and researchers from around the globe. Most present methodologies for individual Systems Engineering processes; others consider issues of adjacent knowledge areas and sub-areas that significantly contribute to systems development, operation, and maintenance. Case studies include aircraft, spacecraft, and space systems development, post-analysis of data collected during the operation of large systems, etc. The collection addresses important issues related to “bottlenecks” of Systems Engineering, such as the complexity, reliability, and safety of different kinds of systems; the creation, operation, and maintenance of services; system-human communication; and the management tasks performed during system projects. This book is for people who are interested in the current state of the Systems Engineering knowledge area and for systems engineers involved in its different activities. Some articles may be a valuable source for university lecturers and students; most of the case studies can be used directly in Systems Engineering courses as illustrative material.

    Control-Theoretical Perspective in Feedback-Based Systems Testing

    Self-Adaptive Systems (SAS) and Cyber-Physical Systems (CPS) have received significant attention in recent computer engineering research, owing to their ability to improve the level of autonomy of engineering artefacts. In both cases, this increase in autonomy is achieved through feedback: the iteration of sensing and actuation to, respectively, acquire knowledge about the current state of said artefacts and steer them toward a desired state or behaviour. In this thesis we discuss the challenges that the introduction of feedback poses on the verification and validation process for such systems, more specifically on their testing. We highlight three types of new challenges with respect to traditional software testing: the alteration of the testing input definition, the alteration of the testing output definition, and the intertwining of components of different natures. These challenges affect how we can define the different elements of the testing process: coverage criteria, testing set-ups, test-case generation strategies, and oracles. This thesis consists of a collection of three papers and contributes to the definition of each of these testing elements. In terms of coverage criteria for SAS, Paper I proposes casting the testing problem as a semi-infinite optimisation problem. This makes it possible to leverage Scenario Theory from the field of robust control and to provide a worst-case probabilistic bound on a given performance metric of the system under test. Concerning the definition of testing set-ups for control-based CPS, Paper II investigates the implications of using different abstractions (i.e., implemented or emulated components) for the significance of the testing. The paper provides evidence that refutes the common assumption in previous literature of a hierarchy among commonly used testing set-ups. Finally, regarding test-case generation and oracle definition, Paper III defines the problem of stress testing control-based CPS software. We contribute to the generation and identification of stress test cases for such software by proposing a novel test-case parametrisation. Leveraging the proposed parametrisation, we define metamorphic relations on the expected behaviour of the system under test, and use these relations to develop a stress-testing approach and sanity checks on the testing results.
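    A minimal sketch of a worst-case probabilistic bound in the spirit of Paper I, using the standard scenario-approach sample bound (our simplification, not the paper's exact formulation; run_test is a hypothetical stand-in for executing the system under test on one scenario): taking the maximum of the metric over N i.i.d. scenarios, a fresh scenario exceeds that maximum with probability at most eps, with confidence 1 - (1 - eps)**N.

        import math
        import random

        def scenarios_needed(eps, beta):
            # Smallest N such that (1 - eps)**N <= beta
            return math.ceil(math.log(beta) / math.log(1.0 - eps))

        def worst_case_bound(run_test, eps=0.05, beta=1e-6):
            n = scenarios_needed(eps, beta)
            # With confidence 1 - beta: P(metric of a fresh scenario > bound) <= eps
            return max(run_test(random.random()) for _ in range(n))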

    Analysing call graphs for software architecture quality profiling

    Master's dissertation in Informatics Engineering. Risk assessment is an important topic for financial institutions nowadays, especially in the context of loan applications and credit scoring. Some of these institutions have already implemented their own custom credit scoring systems to evaluate their clients’ risk, supporting the loan-application decision with this indicator. In fact, the information gathered by financial institutions constitutes a valuable source of data for the creation of information assets from which credit scoring mechanisms may be developed. Historically, most financial institutions have based their decision mechanisms on regression algorithms; however, these algorithms are no longer considered the state of the art in decision algorithms. This fact has led to interest in researching new types of learning algorithms from machine learning able to deal with the credit scoring problem. The work presented in this dissertation aims to evaluate state-of-the-art algorithms for credit decision, proposing new optimisations to improve their performance. In parallel, a suggestion system for credit scoring is also proposed, in order to give insight into how algorithms produce decisions on clients’ loan applications, to provide clients with a means of learning how to improve their chances of being granted a loan, and to develop client profiles that suit specific credit conditions and purposes. Finally, all the components studied and developed are combined in a platform able to deal with the credit scoring problem through an expert system implemented upon a multi-agent system. The use of multi-agent systems to solve complex problems in today’s world is not a new approach; nevertheless, there has been growing interest in using their properties in conjunction with machine learning and data mining techniques to build efficient systems. The work presented aims to demonstrate the viability and utility of this type of system for the credit scoring problem.
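    A minimal sketch, on synthetic stand-in data, of the kind of baseline comparison the dissertation describes: a regression-based scorer against a more recent learning algorithm (our illustration; no claim about the dissertation's actual datasets, algorithms, or results).

        from sklearn.datasets import make_classification
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        # Synthetic stand-in for loan-application records
        X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        for model in (LogisticRegression(max_iter=1000),
                      GradientBoostingClassifier(random_state=0)):
            prob = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
            print(type(model).__name__, round(roc_auc_score(y_te, prob), 3))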

    Improving time predictability of shared hardware resources in real-time multicore systems : emphasis on the space domain

    Critical Real-Time Embedded Systems (CRTES) follow a verification and validation process covering timing and functional correctness. This process includes timing analysis, which provides Worst-Case Execution Time (WCET) estimates as evidence that the execution time of the system, or parts of it, remains within the deadlines. A key design principle for CRTES is incremental qualification, whereby each software component can be subject to verification and validation independently of any other component, with obvious cost benefits. At the timing level, this requires time composability, such that the timing behaviour of a function is not affected by other functions. CRTES are experiencing unprecedented growth, with rising performance demands that have motivated the use of multicore architectures. Multicores can provide the required performance and bring the potential of integrating several software functions onto the same hardware. However, multicore contention in the access to shared hardware resources creates a dependence of the execution time of a task on the rest of the tasks running simultaneously. This dependence threatens time predictability and jeopardizes time composability. In this thesis we analyse and propose hardware solutions, applicable to current multicore designs for CRTES, that improve time predictability and time composability, focusing on the on-chip bus and the memory controller. At the hardware level, we propose new bus and memory controller designs that control and mitigate contention between different cores and allow time composability by design, also in the context of mixed-criticality systems. At the analysis level, we propose contention prediction models that factor in the impact of contenders and do not need modifications to the hardware. We also propose a set of Performance Monitoring Counters (PMCs) that provide evidence about the contention. We place special emphasis on the Space domain, focusing on the Cobham Gaisler NGMP multicore processor, which is currently being assessed by the European Space Agency for its future missions.
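    A hedged, deliberately simplified sketch of a contention prediction model of the kind described (ours, not the thesis's actual model): bound the execution time as the time in isolation plus, for every shared-resource access counted by PMCs, the worst-case stall each contender can inject.

        def contention_bound(isolation_time_s, accesses, n_contenders, worst_stall_s):
            """Upper-bound execution time under bus/memory-controller sharing.

            accesses      -- task's shared-resource accesses (read from PMCs)
            n_contenders  -- cores that may compete for the shared resource
            worst_stall_s -- worst-case delay one contender adds per access
            """
            return isolation_time_s + accesses * n_contenders * worst_stall_s

        # E.g. 10 ms in isolation, 200k accesses, 3 contenders, 30 ns stalls
        print(contention_bound(10e-3, 200_000, 3, 30e-9))  # 0.028 s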