48 research outputs found

    Modéliser et analyser les risques de propagations dans les projets complexes : application au développement de nouveaux véhicules [Modeling and analyzing propagation risks in complex projects: application to new vehicle development]

    The management of complex projects requires orchestrating the cooperation of hundreds of individuals from various companies, professions and backgrounds, working on thousands of activities, deliverables, and risks. Moreover, these numerous project elements are increasingly interconnected, and no decision or action is independent. This growing complexity is one of the greatest challenges of project management and one of the causes of project failure in terms of cost overruns and time delays. For instance, in the automotive industry, increasing market orientation and the growing complexity of automotive products have changed the management structure of vehicle development projects from a hierarchical structure to a networked one that includes not only the manufacturer but also numerous suppliers. Dependencies between project elements increase risks, since problems in one element may propagate to other directly or indirectly dependent elements. Complexity generates a number of phenomena, positive or negative, isolated or in chains, local or global, that interfere to varying degrees with the convergence of the project towards its goals. The aim of this thesis is thus to reduce the risks associated with the complexity of vehicle development projects by increasing both the understanding of this complexity and the coordination of project actors. To do so, the first research question is how to prioritize actions that mitigate complexity-related risks. The second research question is how to organize and coordinate actors so that they can cope efficiently with the previously identified complexity-related phenomena. The first question is addressed by modeling project complexity and by analyzing complexity-related phenomena within the project, at two levels. First, a high-level, factor-based descriptive model is proposed; it makes it possible to measure and prioritize the project areas where complexity may have the most impact. Second, a low-level, graph-based model is proposed, based on a finer modeling of project elements and their interdependencies. Contributions are made to the complete modeling process, including the automation of some data-gathering steps, in order to increase performance and reduce effort and the risk of error. The two models can be used in sequence: a first high-level measure can narrow the focus to certain areas of the project, to which the low-level model is then applied, with a gain in overall efficiency and impact. Based on these models, contributions are made to anticipate the potential behavior of the project. Topological and propagation analyses are proposed to detect and prioritize critical elements and critical interdependencies, while broadening the sense of the polysemous word “critical”. The second research question is addressed by introducing a clustering methodology that proposes groups of actors in new product development projects, especially for actors involved in many deliverable-related interdependencies across different phases of the project life cycle. This increases coordination between interdependent actors who are not always formally connected through the hierarchical structure of the project organization, and brings the project organization closer to what a networked structure should be. The industrial application in the automotive sector has shown promising results for the contributions to both research questions.
    Finally, the proposed methodology is discussed in terms of genericity and appears to be applicable to a wide set of complex projects for decision support.
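    As a purely illustrative sketch of the kind of low-level, graph-based propagation analysis described above (the element names, edges, and ranking criterion below are hypothetical, not the thesis's actual model), project elements can be represented as nodes of a directed dependency graph and ranked by how many other elements a problem starting at them could reach:

# Hypothetical propagation analysis on a small project dependency graph.
# Edge "A -> B" means a problem in A may propagate to B.
from collections import deque

dependencies = {
    "supplier_delay":   ["body_design"],
    "body_design":      ["crash_simulation", "tooling_order"],
    "crash_simulation": ["homologation"],
    "tooling_order":    ["pilot_production"],
    "homologation":     [],
    "pilot_production": [],
}

def reachable(graph, start):
    """Return the set of elements a disturbance at `start` can reach."""
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for succ in graph.get(node, []):
            if succ not in seen:
                seen.add(succ)
                queue.append(succ)
    return seen

# Rank elements by propagation reach, a simple proxy for "criticality".
for node in sorted(dependencies, key=lambda n: len(reachable(dependencies, n)), reverse=True):
    print(node, "->", sorted(reachable(dependencies, node)))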

    Benchmarking Eventually Consistent Distributed Storage Systems

    Cloud storage services and NoSQL systems typically offer only "Eventual Consistency", a rather weak guarantee covering a broad range of potential data consistency behavior. The degree of actual (in-)consistency, however, is unknown. This work presents novel solutions for determining the degree of (in-)consistency via simulation and benchmarking, as well as the necessary means to resolve inconsistencies by leveraging this information.
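    One common experimental way to quantify such (in-)consistency, in the spirit of the benchmarking described above though not necessarily the authors' exact procedure, is to write a unique token and then poll reads until it becomes visible, recording the elapsed staleness window. A minimal sketch, where `StoreClient` is a hypothetical in-memory stand-in for the client of the system under test:

# Sketch of a staleness measurement for an eventually consistent store.
# StoreClient is a placeholder; substitute the real client of the system
# under test (writer and reader may target different replicas or regions).
import time
import uuid

class StoreClient:
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)

def measure_staleness(writer, reader, key="probe", timeout_s=10.0, poll_s=0.01):
    """Write a unique token, poll until it is visible through `reader`,
    and return the observed staleness window in seconds (None on timeout)."""
    token = uuid.uuid4().hex
    start = time.monotonic()
    writer.put(key, token)
    while time.monotonic() - start < timeout_s:
        if reader.get(key) == token:
            return time.monotonic() - start
        time.sleep(poll_s)
    return None

client = StoreClient()
print("observed staleness (s):", measure_staleness(client, client))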

    Data-Driven Methods for Data Center Operations Support

    During the last decade, cloud technologies have been evolving at an impressive pace, such that we are now living in a cloud-native era where developers can leverage an unprecedented landscape of (possibly managed) services for orchestration, compute, storage, load-balancing, monitoring, etc. The possibility to have on-demand access to a diverse set of configurable virtualized resources allows for building more elastic, flexible and highly-resilient distributed applications. Behind the scenes, cloud providers sustain the heavy burden of maintaining the underlying infrastructures, consisting of large-scale distributed systems, partitioned and replicated among many geographically dispersed data centers to guarantee scalability, robustness to failures, high availability and low latency. The larger the scale, the more cloud providers have to deal with complex interactions among the various components, such that monitoring, diagnosing and troubleshooting issues become incredibly daunting tasks. To keep up with these challenges, development and operations practices have undergone significant transformations, especially in terms of improving the automation that makes releasing new software, and responding to unforeseen issues, faster and more sustainable at scale. The resulting paradigm is nowadays referred to as DevOps. However, while such automation can be very sophisticated, traditional DevOps practices fundamentally rely on reactive mechanisms that typically require careful manual tuning and supervision from human experts. To minimize the risk of outages, and the related costs, it is crucial to provide DevOps teams with suitable tools that enable a proactive approach to data center operations. This work presents a comprehensive data-driven framework to address the most relevant problems that can be experienced in large-scale distributed cloud infrastructures. These environments are indeed characterized by the availability of large amounts of diverse data, collected at each level of the stack, such as: time series (e.g., physical host measurements, virtual machine or container metrics, networking component logs, application KPIs); graphs (e.g., network topologies, fault graphs reporting dependencies among hardware and software components, performance-issue propagation networks); and text (e.g., source code, system logs, version control system history, code review feedback). Such data are also typically updated with relatively high frequency, and subject to distribution drifts caused by continuous configuration changes to the underlying infrastructure. In such a highly dynamic scenario, traditional model-driven approaches alone may be inadequate for capturing the complexity of the interactions among system components. DevOps teams would certainly benefit from robust data-driven methods to support their decisions based on historical information. For instance, effective anomaly detection capabilities may help in conducting more precise and efficient root-cause analysis, while leveraging accurate forecasting and intelligent control strategies would improve resource management. Given their ability to deal with high-dimensional, complex data, Deep Learning-based methods are the most straightforward option for the realization of the aforementioned support tools. On the other hand, because of their complexity, such models often require substantial processing power, and suitable hardware, to be operated effectively at scale. These aspects must be carefully addressed when applying such methods in the context of data center operations: automated operations approaches must be dependable and cost-efficient, so as not to degrade the services they are built to improve.
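    As one concrete instance of the kind of data-driven support described above (a baseline illustration only, not the deep-learning methods the thesis develops), a rolling z-score detector over a host metric is a common starting point for anomaly detection; the CPU readings below are synthetic:

# Baseline anomaly detector for a data-center metric time series.
# The readings are synthetic; window and threshold would be tuned per metric.
from statistics import mean, stdev

def rolling_zscore_anomalies(values, window=20, threshold=3.0):
    """Flag indices whose value deviates from the trailing window mean
    by more than `threshold` standard deviations."""
    anomalies = []
    for i in range(window, len(values)):
        past = values[i - window:i]
        mu, sigma = mean(past), stdev(past)
        if sigma > 0 and abs(values[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

cpu = [50.0 + (i % 5) for i in range(100)]   # synthetic steady load
cpu[70] = 95.0                               # injected spike
print("anomalous sample indices:", rolling_zscore_anomalies(cpu))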

    Complexity in financial market. Modeling psychological behavior in agent-based models and order book models

    The fundamental idea developed throughout this work is the introduction of new metrics in the Social Sciences (Economics, Finance, opinion dynamics, etc.). The concept of a metric, that is, the concept of measure, is usually neglected by mainstream theories of Economics and Finance. Financial markets are the natural starting point of such an approach to the Social Sciences, because a systematic approach can be undertaken and the methods of Physics have proven very effective there. In fact, for about a decade there has been a huge amount of high-frequency data from stock exchanges, which makes it possible to perform experimental procedures as in the Natural Sciences. Financial markets thus appear as a perfect playground where models can be tested and where the repeatability of empirical evidence is a well-established feature, unlike, for instance, Macroeconomics and Microeconomics. Finance has therefore been the first point of contact for the interdisciplinary application of methods and tools derived from Physics, and it has also been the starting point of this work. We investigated the origin of the so-called Stylized Facts of financial markets (i.e., the statistical properties of financial time series) in the framework of agent-based models. We found that Stylized Facts can be interpreted as a finite-size effect in terms of the number of effectively independent agents (i.e., strategies), which turns out to be a key variable for understanding the self-organization of financial markets. As a second issue, we focused our attention on order book dynamics from both a theoretical and a data-oriented point of view. We developed a zero-intelligence model in order to investigate the role of vanishing liquidity in the price response to incoming orders. Within the framework of this model we analyzed the effect of introducing strategies, pointing out that simple strategic behaviors can explain bursts of intermittency and long-memory effects. We also quantitatively showed that there exists a feedback effect in markets, the self-fulfilling prophecy, which is the mechanism through which technical trading can exist and work. This is very interesting quantitative evidence of the self-reinforcement of agents' beliefs. Last but not least, nowadays we live in a computerized and networked society where many of our actions leave a digital trace and affect other people's actions. This has led to the emergence of a new data-driven research field. In this work we highlight how non-financial data can be used to track financial activity: in detail, we investigate query-log volumes, i.e., the volumes of searches for a specific query submitted by users to a search engine, as a proxy for trading volumes, and we find that users' activity on the Yahoo! search engine anticipates trading volume by one to two days. Unlike Finance, Economics is far from an ideal candidate for exporting the methodology of the Natural Sciences because of the lack of empirical data: controlled (and repeatable) experiments are totally artificial, while real experiments are almost uncontrollable and non-repeatable due to the high degree of non-stationarity of economic systems. However, the application of methods derived from complexity science to the Economics of Growth is one of the most important achievements of the work developed here. The basic idea is to study the network defined by international trade flows and to introduce a (non-monetary) metric to measure the complexity and competitiveness of countries' productive systems. In addition, we are able to define a metric for product quality which goes beyond the traditional economic measure of quality given in terms of the hours of qualified labour needed to produce a good. The method provides impressive results in predicting the economic growth of countries and offers many opportunities for improvement and generalization.
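    In published work from this research line, the non-monetary country and product metrics mentioned above are typically computed as a coupled iterative map over the binary country-product export matrix (the so-called fitness-complexity algorithm); whether this exact variant is the one used in the thesis is an assumption, and the 3x4 matrix below is made up for illustration:

# Sketch of the fitness-complexity iteration on a binary export matrix M,
# where M[c][p] = 1 if country c competitively exports product p.
def fitness_complexity(M, iterations=20):
    n_c, n_p = len(M), len(M[0])
    F = [1.0] * n_c   # country fitness
    Q = [1.0] * n_p   # product complexity
    for _ in range(iterations):
        F_new = [sum(M[c][p] * Q[p] for p in range(n_p)) for c in range(n_c)]
        Q_new = [1.0 / sum(M[c][p] / F[c] for c in range(n_c)) for p in range(n_p)]
        F = [f * n_c / sum(F_new) for f in F_new]   # renormalize to unit mean
        Q = [q * n_p / sum(Q_new) for q in Q_new]
    return F, Q

M = [[1, 1, 1, 1],   # diversified country
     [1, 1, 0, 0],
     [0, 0, 0, 1]]   # exports only one ubiquitous product
F, Q = fitness_complexity(M)
print("country fitness:   ", [round(f, 3) for f in F])
print("product complexity:", [round(q, 3) for q in Q])

    In this toy example the diversified country ends up with the highest fitness, while the product also exported by the least fit country receives the lowest complexity, which is the qualitative behavior the metric is designed to capture.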

    Cyber-Physical Threat Intelligence for Critical Infrastructures Security

    Modern critical infrastructures can be considered as large-scale Cyber-Physical Systems (CPS). Therefore, when designing, implementing, and operating systems for Critical Infrastructure Protection (CIP), the boundaries between physical security and cybersecurity are blurred. Emerging systems for Critical Infrastructures Security and Protection must therefore consider integrated approaches that emphasize the interplay between cybersecurity and physical security techniques. Hence, there is a need for a new type of integrated security intelligence, i.e., Cyber-Physical Threat Intelligence (CPTI). This book presents novel solutions for integrated Cyber-Physical Threat Intelligence for infrastructures in various sectors, such as Industrial Sites and Plants, Air Transport, Gas, Healthcare, and Finance. The solutions rely on novel methods and technologies, such as integrated modelling for cyber-physical systems, novel reliance indicators, and data-driven approaches including Big Data analytics and Artificial Intelligence (AI). Some of the presented approaches are sector-agnostic, i.e., applicable to different sectors with a fair customization effort. Nevertheless, the book also presents the peculiar challenges of specific sectors and how they can be addressed. The presented solutions consider the European policy context for Security, Cybersecurity, and Critical Infrastructure Protection, as laid out by the European Commission (EC) to support its Member States in protecting and ensuring the resilience of their critical infrastructures. Most of the co-authors and contributors are from European Research and Technology Organizations, as well as from European Critical Infrastructure Operators. Hence, the presented solutions respect the European approach to CIP, as reflected in the pillars of the European policy framework. The latter includes, for example, the Directive on security of network and information systems (NIS Directive), the Directive on protecting European Critical Infrastructures, the General Data Protection Regulation (GDPR), and the Cybersecurity Act Regulation. The sector-specific solutions described in the book have been developed and validated in the scope of several European Commission (EC) co-funded projects on Critical Infrastructure Protection (CIP), which focus on the listed sectors. Overall, the book illustrates a rich set of systems, technologies, and applications that critical infrastructure operators could consult to shape their future strategies. It also provides a catalogue of CPTI case studies in different sectors, which could be useful for security consultants and practitioners as well.

    Handling Information and its Propagation to Engineer Complex Embedded Systems

    In today's data-driven technology, it is easy to assume that information is at our fingertips, ready to be exploited. Research methodologies and tools are often built on top of this assumption. However, this illusion of abundance often breaks when attempting to transfer existing techniques to industrial applications. For instance, research has produced various methodologies to optimize the resource usage of large complex systems, such as the avionics of the Airbus A380. These approaches require knowledge of certain metrics such as execution time, memory consumption, communication delays, etc. The design of these complex systems, however, employs a mix of expertise from different fields (likely with limited knowledge in software engineering), which might lead to incomplete or missing specifications. Moreover, the unavailability of relevant information makes it difficult to properly describe the system, predict its behavior, and improve its performance. We fall back on probabilistic models and machine learning techniques to address this lack of relevant information. Probability theory, especially, has great potential to describe partially observable systems. Our objective is to provide approaches and solutions to produce relevant information. This enables a proper description of complex systems to ease integration, and allows the use of existing optimization techniques. Our first step is to tackle one of the difficulties encountered during system integration: ensuring the proper timing behavior of critical systems. Due to technology scaling, and with the growing reliance on multi-core architectures, the overhead of software running on different cores and sharing memory space is no longer negligible. To this end, we extend the real-time system tool-kit with a static probabilistic timing analysis technique that accurately estimates the execution time of software with an awareness of shared-memory contention. The model is then incorporated into a simulator for scheduling multi-processor real-time systems.
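    A basic building block behind static probabilistic timing analysis of this kind is the combination of discrete execution-time distributions by convolution. The sketch below assumes independence between the two distributions and uses made-up cycle counts; the thesis's actual contention model is more refined:

# Combining independent per-block execution-time distributions by discrete
# convolution, then computing an exceedance probability for a cycle budget.
from collections import defaultdict

def convolve(dist_a, dist_b):
    """Distribution of the sum of two independent discrete random variables."""
    out = defaultdict(float)
    for ta, pa in dist_a.items():
        for tb, pb in dist_b.items():
            out[ta + tb] += pa * pb
    return dict(out)

def exceedance(dist, budget):
    """Probability that the total execution time exceeds `budget` cycles."""
    return sum(p for t, p in dist.items() if t > budget)

compute_block = {100: 0.9, 140: 0.1}            # e.g., cache hit vs. miss
memory_access = {10: 0.7, 60: 0.25, 120: 0.05}  # e.g., contention levels
total = convolve(compute_block, memory_access)
print("P(time > 200 cycles) =", round(exceedance(total, 200), 4))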

    Integrated application of compositional and behavioural safety analysis

    To address challenges arising in the safety assessment of critical engineering systems, research has recently focused on automating the synthesis of predictive models of system failure from design representations. In one approach, known as compositional safety analysis, system failure models such as fault trees and Failure Modes and Effects Analyses (FMEAs) are constructed from component failure models using a process of composition. Another approach has looked into automating system safety analysis via the application of formal verification techniques such as model checking on behavioural models of the system represented as state automata. So far, compositional safety analysis and formal verification have been developed separately and seen as two competing paradigms for model-based safety analysis. This thesis shows that it is possible to move the terms of this debate forward and use the two paradigms synergistically in the context of an advanced safety assessment process. The thesis develops a systematic approach in which compositional safety analysis provides the basis for the systematic construction and refinement of state automata that record the transition of a system from normal to degraded and failed states. These state automata can be further enhanced and then model-checked to verify the satisfaction of safety properties. The development of such models in current practice is ad hoc and relies only on expert knowledge; it is rationalised and systematised in the proposed approach, which is a key contribution of this thesis. Overall, the approach combines the advantages of compositional safety analysis, such as simplicity, efficiency and scalability, with the benefits of formal verification, such as the ability to automatically verify safety requirements on dynamic models of the system, and leads to an improved model-based safety analysis process. In the context of this process, a novel generic mechanism is also proposed for modelling the detectability of errors which typically arise as a result of component faults and then propagate through the architecture. This mechanism is used to derive analyses that can aid decisions on appropriate detection and recovery mechanisms in the system model. The thesis starts with an investigation of the potential for useful integration of compositional and formal safety analysis techniques. The approach is then developed in detail and guidelines for the analysis and refinement of system models are given. Finally, the process is evaluated in three case studies that were iteratively performed on increasingly refined and improved models of aircraft and automotive braking and cruise control systems. In the light of the results of these studies, the thesis concludes that the integration of compositional and formal safety analysis techniques is feasible and potentially useful in the design of safety-critical systems.
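    For illustration of the fault-tree side of compositional safety analysis (the gate structure and probabilities below are hypothetical, not drawn from the thesis's case studies), a minimal evaluator over AND/OR gates with independent basic events looks like this:

# Minimal fault-tree evaluation sketch. Compositional approaches such as the
# one discussed above synthesise trees like this from component failure models.
def evaluate(node, basic_probs):
    """Return the top-event probability of a fault tree given independent
    basic-event probabilities. A node is either a basic-event name or a
    tuple (gate, [children]) with gate in {"AND", "OR"}."""
    if isinstance(node, str):
        return basic_probs[node]
    gate, children = node
    child_probs = [evaluate(c, basic_probs) for c in children]
    if gate == "AND":
        prob = 1.0
        for p in child_probs:
            prob *= p
        return prob
    if gate == "OR":            # 1 minus the product of complements
        prob = 1.0
        for p in child_probs:
            prob *= (1.0 - p)
        return 1.0 - prob
    raise ValueError(f"unknown gate: {gate}")

# Hypothetical top event: braking function lost if both wheel-speed sensors
# fail, or the ECU fails.
tree = ("OR", [("AND", ["sensor_1_fails", "sensor_2_fails"]), "ecu_fails"])
probs = {"sensor_1_fails": 1e-3, "sensor_2_fails": 1e-3, "ecu_fails": 1e-5}
print("top event probability ~", evaluate(tree, probs))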
