13 research outputs found

    Security and Safety Assurance for Aerospace Embedded Systems

    Get PDF
    The paper starts with the list of basic principles that guided the development of the SEISES security and safety assurance framework. We then present the SEISES structure and provide examples of assurance objectives and related assurance activities. We detail the convergence that we have identified between safety and security assurance activities. Finally, we introduce the three demonstrators and summarize the main lessons learnt from these experiments. We conclude by summarizing the results of the SEISES project, comparing them with other approaches to joint safety and security assurance, and listing promising directions for further research.

    Quality Properties of Execution Tracing, an Empirical Study

    Get PDF
    The authors are grateful to all the professionals who participated in the focus groups; they also express special thanks to the management of the companies involved for making the organisation of the focus groups possible. Data, including the results of the data coding process, are made available in the appendix.

    The quality of execution tracing greatly affects the time needed to locate errors in software components; moreover, in the majority of cases execution tracing is the most suitable tool for post-mortem analysis of failures in the field. Nevertheless, software product quality models do not at present adequately consider execution tracing quality, nor do they define the quality properties of this important entity in an acceptable manner. Defining these quality properties is the first step towards creating a quality model for execution tracing. The current research fills this gap by identifying and defining the variables, i.e., the quality properties, on the basis of which the quality of execution tracing can be judged. The study analyses the experiences of software professionals in focus groups at multinational companies, and also scrutinises the literature to elicit the mentioned quality properties. Moreover, the study contributes to knowledge with a combination of methods for computing the saturation point that determines the number of necessary focus groups. Furthermore, to pay special attention to validity, the authors considered not only the indicators of qualitative research (credibility, transferability, dependability, and confirmability) but also content, construct, internal and external validity.
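The saturation-point idea mentioned in the abstract can be illustrated with a small sketch (a simplification, not the study's actual combination of methods): data collection stops once a new focus group contributes only a small fraction of previously unseen codes. The threshold and the per-group code sets below are invented for the example.

```python
# Illustrative saturation check for qualitative coding across focus groups.
# Threshold and code sets are hypothetical; they do not come from the study.

def saturation_reached(groups, threshold=0.05):
    """Return the 1-based index of the first group whose newly contributed
    codes fall below `threshold` of all codes seen so far, or None."""
    seen = set()
    for i, codes in enumerate(groups, start=1):
        new = set(codes) - seen
        seen |= set(codes)
        if i > 1 and len(new) / len(seen) < threshold:
            return i
    return None

focus_groups = [
    {"accuracy", "legibility", "consistency"},
    {"accuracy", "completeness", "performance"},
    {"legibility", "completeness"},          # contributes no new codes
]
print(saturation_reached(focus_groups))      # → 3
```

The third group adds nothing new, so saturation is declared there; with real data the threshold would be justified methodologically rather than picked ad hoc.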

    Intégration de la sûreté de fonctionnement dans les processus d'ingénierie système (Integrating Dependability into Systems Engineering Processes)

    Get PDF
    The integration of various technologies, notably computing and electronics, makes today's systems increasingly complex. They exhibit behaviours that are more elaborate and harder to predict, involve a larger number of interacting components, and/or perform higher-level functions. In parallel with this growing complexity, the competitiveness of the global market imposes ever stricter cost and schedule constraints on system developers. The same pressure applies to system quality, especially when human lives or significant financial stakes are at risk. Developers are therefore compelled to adopt a rigorous design approach to meet the requirements of the desired system and to satisfy the various constraints (cost, schedule, quality, dependability, ...). Several methodological approaches to guiding system design are defined through systems engineering standards. Our work is based on the EIA-632 standard, which is widely used, particularly in the aeronautical and military fields. It aims to improve the systems engineering processes described by EIA-632 so as to incorporate a global and explicit consideration of dependability. Until now, dependability has been achieved by reusing generic models after studying and developing each function independently, so the risks associated with integrating several technologies received no specific consideration. For this reason, we propose to address dependability requirements at the global level, as early as possible in the development phase, and then to flow them down to the lower levels, building on the EIA-632 processes, which we extend. We also propose an original method for flowing down dependability requirements based on fault trees and FMECA, together with a SysML-based information model to support our approach. An example from the aeronautical field illustrates our proposals.
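A minimal sketch of the kind of fault-tree evaluation that underlies such a flow-down of dependability requirements (the gate structure, probabilities, and requirement value below are invented for illustration, not taken from the thesis):

```python
# Sketch: evaluating a tiny fault tree so a top-level dependability
# requirement can be checked against lower-level failure probabilities.
# Assumes independent basic events; the tree is a made-up example.

def and_gate(*p):
    """Failure requires all inputs to fail."""
    out = 1.0
    for x in p:
        out *= x
    return out

def or_gate(*p):
    """Failure occurs if any input fails."""
    out = 1.0
    for x in p:
        out *= (1.0 - x)
    return 1.0 - out

# Hypothetical basic-event probabilities
sensor_a, sensor_b, actuator = 1e-3, 1e-3, 5e-4

# Top event: loss of function = (both sensors fail) OR (actuator fails)
top = or_gate(and_gate(sensor_a, sensor_b), actuator)
requirement = 1e-3   # hypothetical top-level requirement
print(f"top event p = {top:.2e}, requirement met: {top <= requirement}")
```

Run the other way round, the same arithmetic supports allocation: given the top-level target, budgets for the lower-level events can be chosen so the gates still satisfy it.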

    Upper Bounds for the Selection of the Cryptographic Key Lifetimes: Bounding the Risk of Key Exposure in the Presence of Faults

    Get PDF
    With physical attacks threatening the security of current cryptographic schemes, no security policy can be developed without taking into account the physical nature of computation. In this article we first introduce the notion of Cryptographic Key Failure Tolerance, then we offer a framework for determining upper bounds on the key lifetimes of any cryptographic scheme used in the presence of faults, given a desired (negligible) error bound on the risk of key exposure. Finally, we emphasize the importance of choosing keys and designing schemes with good failure-tolerance values, and recommend minimal values for this metric. In fact, under standard environmental conditions, cryptographic keys that are especially susceptible to erroneous computations (e.g., RSA keys used with CRT-based implementations) are exposed with a probability greater than a standard error bound (e.g., 2^-40) after operational times shorter than one year, if the failure rate of the cryptographic infrastructure is greater than 1.04×10^-16 failures/hour.
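The quoted figures can be reproduced with a back-of-the-envelope sketch: assuming exposure risk accumulates roughly linearly as failure rate times operating time (a simplification of the paper's framework), the maximum key lifetime for a given error bound follows directly.

```python
# Sketch: upper bound on key lifetime from a failure rate and an error bound.
# Assumes risk ≈ failure_rate * hours, a simplification of the paper's model.

def max_key_lifetime_hours(failure_rate_per_hour: float,
                           error_bound: float) -> float:
    """Largest operating time (hours) keeping exposure risk below error_bound."""
    return error_bound / failure_rate_per_hour

# Values quoted in the abstract: error bound 2^-40, rate 1.04e-16 failures/hour
hours = max_key_lifetime_hours(1.04e-16, 2.0 ** -40)
print(round(hours), "hours ≈", round(hours / (24 * 365), 2), "years")
```

The result, roughly 8,745 hours, is just under one year, which is consistent with the abstract's claim that such keys are exposed "after operational times shorter than one year".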

    Data Science Techniques for Modelling Execution Tracing Quality

    Get PDF
    This research presents how to handle a research problem when the research variables are still unknown and no quantitative study is yet possible: how to identify the research variables so that quantitative research becomes feasible, how to collect data by means of the identified variables, and how to carry out modelling that takes the specificities of the problem domain into account. Validation is also encompassed in the scope of modelling in the current study. Thus, the work presented in this thesis comprises the typical stages a complex data science problem requires, including qualitative and quantitative research, data collection, modelling of vagueness and uncertainty, and the leverage of artificial intelligence to gain insights that are impossible with traditional methods. The problem domain of the research encompasses software product quality modelling and assessment, with particular focus on execution tracing quality; the terms execution tracing and logging are used interchangeably throughout the thesis. The research methods and mathematical tools used make it possible to consider the uncertainty and vagueness inherent in the quality measurement and assessment process, through which reality can be approximated more faithfully than with plain statistical modelling techniques. Furthermore, the modelling approach offers direct insight into the problem domain through the application of linguistic rules, which is an additional advantage.

    The thesis reports (1) an in-depth investigation of all the identified software product quality models; (2) a unified summary of these models with their terminologies and concepts; (3) the identification of the variables influencing execution tracing quality; (4) the quality model constructed to describe execution tracing quality; and (5) the link between the constructed quality model and that of the ISO/IEC 25010 standard, with the possibility of tailoring it to specific project needs. Further work, outside the frame of this PhD thesis, would also be useful, as presented in the study: (1) defining application-project profiles to assist in tailoring the quality model for execution tracing to specific application and project domains, and (2) approximating the present quality model, within defined bounds, by simpler mathematical approaches. In conclusion, the research contributes to (1) supporting the daily work of software professionals who need to analyse execution traces; (2) raising awareness that execution tracing quality has a huge impact on software development, software maintenance, and the professionals involved in the different stages of the software development life cycle; (3) providing a framework in which present endeavours towards log improvement can be placed; and (4) suggesting an extension of the ISO/IEC 25010 standard by linking the constructed quality model to it. In addition, within the scope of the qualitative research methodology, the thesis contributes to knowledge of research methods by determining a saturation point during the data collection process.
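The linguistic-rule modelling mentioned above can be illustrated with a toy Mamdani-style fuzzy inference step. The membership functions, rules, and inputs below are invented for the example; they are not the thesis's actual model.

```python
# Toy fuzzy-rule evaluation for execution tracing quality.
# Inputs lie in [0, 1]; min implements AND, which is a common Mamdani-style
# choice. All shapes and rules are illustrative, not taken from the thesis.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def trace_quality(accuracy, legibility):
    low_acc  = tri(accuracy,  -0.5, 0.0, 0.5)
    high_acc = tri(accuracy,   0.5, 1.0, 1.5)
    high_leg = tri(legibility, 0.5, 1.0, 1.5)
    # Rule 1: IF accuracy is high AND legibility is high THEN quality is good
    good = min(high_acc, high_leg)
    # Rule 2: IF accuracy is low THEN quality is poor
    poor = low_acc
    # Crisp score in [0, 1]: relative weight of the "good" rule
    return good / (good + poor) if good + poor else 0.5

print(round(trace_quality(0.9, 0.8), 2))
```

The appeal the abstract points to is visible even in this toy: the rules stay readable as sentences while still producing a numeric quality score.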

    Aide à l'Évolution Logicielle dans les Organisations (Supporting Software Evolution in Organizations)

    Get PDF
    Software systems are now so intrinsically part of our lives that we no longer see them. They run our phones, our cars, our leisure activities, our banks, our shops, our cities... This places a significant burden on the software industry: all these systems need to be updated, corrected, and enhanced as users and consumers express new needs. As a result, most software engineering activity can be classified as software maintenance, “the totality of activities required to provide cost-effective support to a software system”.

    In an ecosystem where the processing power of computers, and many other relevant metrics such as disk capacity or network bandwidth, doubles every 18 months (“Moore's law”), technologies evolve at a fast pace. In this ecosystem, software maintenance suffers from the drawback of having to address the past (past languages, existing systems, old technologies). It is often ill-perceived and treated as a punishment. Because of this, solutions and tools for software maintenance have long lagged far behind those for new software development. For example, the antique approach of manually inserting traces into the source code to understand the execution path is still entirely current.

    All my research activity has focused on helping people do software maintenance in better conditions or more efficiently. A holistic approach to the problem must consider the software that has to be maintained, the people doing it, and the organization in which and for which it is done. I have therefore studied different facets of the problem, presented in three parts in this document. Software: the source code is the centrepiece of the maintenance activity. Whatever the task (e.g., enhancement or bug correction), it typically comes down to understanding the current source code and finding out what to change and/or add to make it behave as expected. I studied how to monitor the evolution of the source code, how to prevent its decay, and how to remedy bad situations. People: one of the fundamental assets of people dealing with maintenance is the knowledge they have of computer science (programming techniques), of the application domain, and of the software itself. It is highly significant that 40% to 60% of software maintenance time is spent reading code to understand what it does, how it does it, and how it can be changed. Organization: organizations can have a strong impact on the way activities such as software maintenance are performed by their individual members. The support offered within the organization, the constraints it imposes, and the cultural environment all affect how easy or difficult the tasks are and therefore how well or badly they can be done. I studied some of the software maintenance processes that organizations use.

    In this document, the various research topics I have addressed are organized in a logical way that does not always respect the chronological order of events. I wished to highlight not only the results of the research, through the publications that attest to them, but also the collaborations that made them possible, with students and fellow researchers. For each result presented here, I have tried to summarize the discussion of the prior state of the art and the result itself as concisely as possible: first because more details can easily be found in the referenced publications, and also because some of this research is quite old and has sometimes fallen into the realm of “common sense”.

    Modélisation conjointe de la sûreté et de la sécurité pour l’évaluation des risques dans les systèmes cyber-physiques (Joint Safety and Security Modelling for Risk Assessment in Cyber-Physical Systems)

    Get PDF
    Cyber-physical systems (CPS) are systems that embed programmable components in order to control a physical process or infrastructure. CPS are now widely used in industries such as energy, aeronautics, automotive, medicine, and chemicals. Among existing CPS stand SCADA (Supervisory Control And Data Acquisition) systems, which offer the means necessary to control and supervise critical infrastructures. Their failure or malfunction can have adverse consequences for the system and its environment.

    SCADA systems used to be isolated and based on simple components and proprietary standards. They nowadays increasingly integrate information and communication technologies (ICT) in order to facilitate supervision and control of the industrial process and to reduce operating costs. This trend makes SCADA systems more complex and exposes them to cyber-attacks exploiting vulnerabilities already present in ICT components. Such attacks can reach critical components within the system and alter its behaviour, causing safety-related harm.

    Throughout this dissertation we associate safety with accidental risks originating from the system, and security with malicious risks, focusing on cyber-attacks. In this context of industrial systems supervised by modern SCADA systems, safety and security requirements and risks converge and can interact. A joint risk analysis covering both safety and security aspects is necessary to identify these interactions and optimize risk management.

    In this thesis, we first give a comprehensive survey of existing approaches that consider both safety and security issues for industrial systems, and we highlight their shortcomings against four criteria that we believe essential for a good model-based approach: formal; automatic; qualitative and quantitative; and robust (i.e., easily integrating changes to the system into the model). Next, we propose a new model-based approach for joint safety and security risk analysis, S-cube (SCADA Safety and Security modeling), which satisfies all the above criteria. The S-cube approach makes it possible to model CPS formally and yields the associated qualitative and quantitative risk analysis. Thanks to graphical modelling, S-cube allows the system architecture to be specified and different hypotheses about it to be considered easily. It then automatically generates the safety and security risk scenarios likely to occur on this architecture that lead to a given undesirable event, together with an estimation of their probabilities.

    The S-cube approach is based on a knowledge base that describes the typical components of industrial architectures, encompassing the information, process control, and instrumentation levels. This knowledge base was built upon a taxonomy of attacks and failure modes and a hierarchical top-down reasoning mechanism, and was implemented using the Figaro modelling language and its associated tools. To build the model of a system, the user only has to describe graphically the physical and functional (in terms of software and data flows) architectures of the system. The association of the knowledge base and the system architecture produces a dynamic state-based model: a Continuous-Time Markov Chain (CTMC). Because of the combinatorial explosion of states, this CTMC cannot be built exhaustively, but it can be explored in two ways: by searching for sequences leading to an undesirable event, or by Monte Carlo simulation. This yields both qualitative and quantitative results.

    We finally illustrate the S-cube approach on a realistic case study, a pumped-storage hydroelectric plant, to show its ability to yield a holistic analysis encompassing safety and security risks on such a system. We investigate the results obtained in order to identify potential safety-security interactions and give recommendations.
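The Monte Carlo exploration of a CTMC mentioned in this abstract can be sketched on a toy model. The three-state chain and its rates below are invented for illustration; the models S-cube generates are far larger and are built from its knowledge base.

```python
import random

# Toy CTMC: OK --attack--> COMPROMISED --propagation--> UNSAFE (absorbing).
# Rates are per hour and purely illustrative.
RATES = {
    "OK":          [("COMPROMISED", 1e-3)],
    "COMPROMISED": [("UNSAFE", 1e-2), ("OK", 5e-2)],  # repair returns to OK
    "UNSAFE":      [],                                 # absorbing state
}

def reaches_unsafe(horizon_h, rng):
    """One simulated trajectory: does it hit UNSAFE within the horizon?"""
    state, t = "OK", 0.0
    while RATES[state]:
        total = sum(r for _, r in RATES[state])
        t += rng.expovariate(total)            # exponential holding time
        if t > horizon_h:
            return False
        u, acc = rng.random() * total, 0.0     # pick transition by rate
        for nxt, r in RATES[state]:
            acc += r
            if u <= acc:
                state = nxt
                break
    return state == "UNSAFE"

rng = random.Random(42)
n = 20_000
p = sum(reaches_unsafe(8760, rng) for _ in range(n)) / n
print(f"estimated P(unsafe within 1 year) ≈ {p:.3f}")
```

The same trajectories that feed the probability estimate also record the transition sequences taken, which is how a simulation-based exploration can yield qualitative scenarios alongside the quantitative result.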

    Applying patterns in embedded systems design for managing quality attributes and their trade-offs

    Get PDF
    Embedded systems comprise one of the most important types of software-intensive systems, as they are pervasive and used in daily life more than any other type, e.g., in cars or in electrical appliances. When such a system operates under hard constraints whose violation can lead to catastrophic events, it is classified as a critical embedded system (CES). The quality attributes related to these hard constraints are called critical quality attributes (CQAs). For example, the performance of the cruise-control or self-driving software in a car is critical, as it can potentially relate to harming human lives. Despite the growing body of knowledge on engineering CESs, there is still a lack of approaches that can support their design while managing CQAs and their trade-offs with non-critical attributes (e.g., maintainability and reusability). To address this gap, the state of research and practice on designing CESs and managing quality trade-offs was explored, approaches to improving their design were identified, and the merit of these approaches was empirically investigated. When designing software, one common approach is to organize its components according to well-known structures named design patterns. However, these patterns may be avoided in some classes of systems, such as CESs, as they are sometimes associated with a detriment to CQAs. In short, the findings reported in the thesis suggest that, when applicable, design patterns can promote CQAs while supporting the management of trade-offs. The thesis also reports on a phenomenon, namely pattern grime, and on factors that can influence the extent of the observed benefits.

    Enhancing Trust – A Unified Meta-Model for Software Security Vulnerability Analysis

    Get PDF
    Over the last decade, a globalization of the software industry has taken place, facilitating the sharing and reuse of code across existing project boundaries. At the same time, such global reuse introduces new challenges to the software engineering community: not only is code implementation shared across systems, but so are any vulnerabilities it is exposed to. Hence, vulnerabilities found in APIs no longer affect only individual projects but can spread across projects and even global software-ecosystem borders. Tracing such vulnerabilities on a global scale becomes an inherently difficult task, with many of the resources required for the analysis not only growing at unprecedented rates but also being spread across heterogeneous sources. Software developers struggle to identify and locate the data required to take full advantage of these resources. The Semantic Web and its supporting technology stack have been widely promoted to model, integrate, and support interoperability among heterogeneous data sources. This dissertation introduces four major contributions to address these challenges. (1) It provides a literature review of the use of software vulnerability databases (SVDBs) in the software engineering community. (2) Based on findings from this literature review, we present SEVONT, a Semantic Web based modeling approach to support a formal and semi-automated approach to unifying vulnerability information resources. SEVONT introduces a multi-layer knowledge model which not only provides a unified knowledge representation but also captures software vulnerability information at different levels of abstraction to allow seamless integration, analysis, and reuse of the modeled knowledge. The modeling approach takes advantage of Formal Concept Analysis (FCA) to guide knowledge engineers in identifying reusable knowledge concepts and modeling them. (3) A Security Vulnerability Analysis Framework (SV-AF) is introduced as an instantiation of the SEVONT knowledge model to support evidence-based vulnerability detection. The framework integrates vulnerability ontologies (and data) with existing software engineering ontologies, allowing the use of Semantic Web reasoning services to trace and assess the impact of security vulnerabilities across project boundaries. Several case studies are presented to illustrate the applicability and flexibility of our modeling approach, demonstrating that it can not only unify heterogeneous vulnerability data sources but also enable new types of vulnerability analysis.
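The cross-project tracing idea can be sketched with a minimal in-memory triple store and a wildcard query, in the spirit of the Semantic Web machinery the dissertation builds on. All project, library, and CVE names below are hypothetical; SEVONT itself uses ontologies and reasoners rather than this toy matcher.

```python
# Sketch: tracing a vulnerability across project boundaries via triples.
# Every identifier here is an invented example, not real vulnerability data.

triples = {
    ("app-alpha",  "dependsOn",        "lib-parser"),
    ("app-beta",   "dependsOn",        "lib-parser"),
    ("lib-parser", "hasVulnerability", "CVE-XXXX-0001"),
}

def query(s=None, p=None, o=None):
    """Match triples; None acts as a wildcard, like a SPARQL variable."""
    return [(ts, tp, to) for ts, tp, to in triples
            if s in (None, ts) and p in (None, tp) and o in (None, to)]

def affected_projects(cve):
    """Projects that depend on any artifact carrying the given vulnerability."""
    vulnerable = {s for s, _, _ in query(p="hasVulnerability", o=cve)}
    return sorted(s for lib in vulnerable
                  for s, _, _ in query(p="dependsOn", o=lib))

print(affected_projects("CVE-XXXX-0001"))   # → ['app-alpha', 'app-beta']
```

The two-hop join here (vulnerability → library → dependent projects) is exactly the kind of traversal that, at scale, motivates delegating the work to Semantic Web reasoning services instead of ad hoc code.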