Modelling and Evaluation of Multimodal Interchange Hubs (a hybrid multiscale approach based on Batches Petri nets)
Multimodal interchange hubs (PEM, pôles d'échanges multimodaux) are today the showcase of transport networks. A hub is a complex transportation system whose proper operation conditions that of the transport network as a whole. In order to prevent, reduce or, where possible, avoid the malfunctions, incidents and accidents that daily affect a large number of sites in operation, and which in turn damage the image and attractiveness of public transport in general, it becomes necessary to systematize the a priori and a posteriori evaluation of hub performance, as has been done successfully in other fields such as industrial production systems. The objective of this research is therefore to develop a simulation model for measuring a set of quantitative performance indicators dedicated to these hubs. The proposed simulation model is based on Batches Petri nets, an extension of Hybrid Petri nets particularly well suited to modelling complex multiscale systems. This modelling paradigm also offers formal analysis techniques for the verification and control of the resulting model. The proposed simulation model will be useful for defining and scheduling short-term maintenance operations in order to prevent latent or probable malfunctions. For hub projects under construction, it will inform the choices preceding the design of the various components of these sites by assisting designers in sizing and planning procedures.

A Multimodal Hub is a complex transportation system whose role is to interconnect several public and private transportation modes in order to promote intermodality. Because of many observed problems (such as recurrent congestion inside stations, high transfer times, long queues in front of services, etc.), which contribute to deteriorating the image of public transport in general, it becomes increasingly important for transit authorities to be able to perform performance measurements to identify the causes of these problems and find solutions. The main goal of the PhD thesis is to propose a simulation model for evaluating the main performance factors of multimodal transportation hubs. Among the most important quantitative factors, we can mention occupancy rates, queue lengths, mean service times, evacuation times, and measures related to intermodality such as connection times and waiting times. The suggested simulation model is based on Batches Petri nets, an extension of Hybrid Petri nets. This paradigm is suitable for our study because it offers a multiscale modular modelling approach which allows mastering the complexity of the studied system. Besides, it offers formal analysis techniques for checking and design (control) purposes. This simulation model can be used for (i) evaluating existing multimodal hubs, (ii) validating design projects for new multimodal hubs, and (iii) assisting designers during sizing and planning procedures.
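Batches Petri nets are far richer than can be shown in a short sketch, but an ordinary discrete Petri net already illustrates how a hub component can be simulated and measured. The toy model below (passengers queueing at a single ticket gate, with arrivals outpacing service) and all its numbers are illustrative assumptions, not the thesis model.

```python
# Places hold token counts; here tokens are passengers or the free gate.
places = {"queue": 0, "gate_free": 1, "served": 0}

# Each transition: (input places, output places). A transition is enabled
# when every input place holds enough tokens.
transitions = {
    "arrive": ({}, {"queue": 1}),                        # source transition
    "enter":  ({"queue": 1, "gate_free": 1}, {"served": 1}),
    "leave":  ({"served": 1}, {"gate_free": 1}),
}

def enabled(name):
    ins, _ = transitions[name]
    return all(places[p] >= n for p, n in ins.items())

def fire(name):
    ins, outs = transitions[name]
    for p, n in ins.items():
        places[p] -= n
    for p, n in outs.items():
        places[p] += n

# Time-stepped run: two arrivals per step, one service at most.
queue_lengths = []
for step in range(10):
    fire("arrive")
    fire("arrive")          # arrivals outpace the single gate
    if enabled("enter"):
        fire("enter")
    if enabled("leave"):
        fire("leave")
    queue_lengths.append(places["queue"])

print("queue length over time:", queue_lengths)
print("max queue length:", max(queue_lengths))
```

Even this toy run exhibits the congestion indicator mentioned in the abstract: the queue length grows by one passenger per step because the arrival rate exceeds the gate's service rate.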
Static Code Analysis: an Integrated, Tool-Supported Method Developed by Thales to Study Real-to-Integer Conversion and Overflows in SIL4 Software Components
This chapter presents a method based on interval arithmetic to analyse the computation risks arising from the integer conversion of specifications described over the real numbers. The work also addresses the risks of overflow and division by zero in arithmetic processing chains. The method detailed in this chapter was developed and applied in the context of railway command and control systems. An application of the method to an algorithm that controls the safe speed of trains in a CBTC system is presented.
Efficient Method Developed by Thales for Safety Evaluation of Real-to-Integer Discretization and Overflows in SIL4 Software
This book presents real examples of the formal technique called "abstract interpretation" currently being used in various industrial fields: railway, aeronautics, space, automotive, etc. The current literature seems to only provide very general books on formal techniques. The purpose of this book is to present students and researchers, in a single book, with the wealth of experience of people who are intrinsically involved in the realization and evaluation of software-based safety-critical systems. As the authors are people currently working within the industry, the usual problems of confidentiality, which can occur with other books, are not an issue, making it possible to supply new useful information (photos, architectural plans, real examples). This chapter introduces a method based on interval arithmetic to analyse computation risks due to integer conversions of an infinite-precision specification. The work also tackles the overflow and division-by-zero problems in arithmetic computation chains. The method was developed and applied within the context of railway command and control systems, and the chapter presents a case study on a speed control algorithm.
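A minimal sketch of the underlying idea (not Thales' actual tool): propagate value intervals through an arithmetic chain and flag the two risks the chapter names, 32-bit integer overflow and division by zero. The toy computation and all bounds below are illustrative assumptions.

```python
INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

class Interval:
    """Closed interval [lo, hi] propagated through arithmetic operations."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # The extrema of a product lie among the four corner products.
        corners = [self.lo * other.lo, self.lo * other.hi,
                   self.hi * other.lo, self.hi * other.hi]
        return Interval(min(corners), max(corners))

    def contains_zero(self):          # division-by-zero risk for a divisor
        return self.lo <= 0 <= self.hi

    def fits_int32(self):             # overflow risk after integer conversion
        return INT32_MIN <= self.lo and self.hi <= INT32_MAX

# Toy fixed-point chain: scaled = speed * gain + offset.
speed  = Interval(0, 400)            # e.g. speed in 0.1 km/h units
gain   = Interval(10_000, 20_000)    # fixed-point scaling factor
offset = Interval(-500, 500)

scaled = speed * gain + offset
print("result interval:", (scaled.lo, scaled.hi))
print("fits in int32:", scaled.fits_int32())

big = scaled * Interval(1000, 1000)  # further scaling overflows int32
print("scaled-up fits in int32:", big.fits_int32())

divisor = Interval(-1, 3)
print("division-by-zero risk:", divisor.contains_zero())
```

Running the chain shows `scaled` staying within the `int32` range while the further-scaled value does not, which is exactly the kind of verdict such an analysis reports per arithmetic node.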
Safe Design of Stable Neural Networks for Fault Detection in Small UAVs
Stability of a machine learning model is the extent to which the model continues to operate correctly despite small perturbations in its inputs. A formal way to measure stability is the Lipschitz constant of the model, which makes it possible to evaluate how small perturbations in the inputs affect the output variations. Variations in the outputs may lead to high errors for regression tasks or unintended class changes for classification tasks. Verifying the stability of ML models is crucial in many industrial domains such as aeronautics, space, automotive, etc. It has been recognized that data-driven models are intrinsically extremely sensitive to small perturbations of the inputs. Therefore, designing methods for verifying the stability of ML models is important for manufacturers developing safety-critical products. In this work, we focus on Small Unmanned Aerial Vehicles (UAVs), which are at the forefront of new technology solutions for intelligent systems. However, real-time fault detection/diagnosis in such UAVs remains a challenge, from data collection to prediction tasks. This work presents an application of neural networks to detect elevon positioning faults in real time. We show the efficiency of a formal method based on the Lipschitz constant for quantifying the stability of neural network models. We also present how this method can be coupled with spectral normalization constraints at the design phase to control the internal parameters of the model and make it more stable while keeping a high level of performance (accuracy-stability trade-off).
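A hedged sketch of the two ingredients named above, not the paper's exact procedure: for a feed-forward network with 1-Lipschitz activations (ReLU, tanh), the product of the spectral norms of the weight matrices upper-bounds the network's Lipschitz constant, and spectral normalization rescales each weight matrix at design time so that this bound meets a target. The layer shapes and the target value are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = [rng.standard_normal((16, 8)),
           rng.standard_normal((8, 4)),
           rng.standard_normal((4, 1))]

def spectral_norm(W):
    # Largest singular value of the weight matrix.
    return np.linalg.svd(W, compute_uv=False)[0]

def lipschitz_upper_bound(ws):
    # Product of layer spectral norms bounds the end-to-end Lipschitz constant
    # when every activation is itself 1-Lipschitz.
    bound = 1.0
    for W in ws:
        bound *= spectral_norm(W)
    return bound

print("raw bound:", lipschitz_upper_bound(weights))

# Spectral normalization toward a per-layer target so the product is <= L_target.
L_target = 2.0
per_layer = L_target ** (1 / len(weights))
normalized = [W * min(1.0, per_layer / spectral_norm(W)) for W in weights]

print("normalized bound:", lipschitz_upper_bound(normalized))
```

Rescaling never increases a layer's spectral norm above the per-layer target, so the product (and hence the stability bound) is guaranteed to satisfy the safety target, at the cost of shrinking the weights.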
A Quantitative Analysis Of The Robustness Of Neural Networks For Tabular Data
This paper presents a quantitative approach to demonstrating the robustness of neural networks for tabular data. Such data form the backbone of the data structures found in most industrial applications. We analyse the effect of various techniques widely used in neural network practice, such as regularization of weights, addition of noise to the data, and positivity constraints. This analysis is performed using three state-of-the-art techniques which provide mathematical proofs of robustness, in terms of the Lipschitz constant, for feed-forward networks. The experiments are carried out on two prediction tasks and one classification task. Our work brings insights into building robust neural network architectures for safety-critical systems that require certification or approval from a competent authority.
An Adversarial Attacker for Neural Networks in Regression Problems
Adversarial attacks against neural networks, and defenses against them, have mostly been investigated in classification scenarios. However, adversarial attacks in a regression setting remain understudied, although they play a critical role in a large portion of safety-critical applications. In this work, we present an adversarial attacker for regression tasks, derived from the algebraic properties of the Jacobian of the network. We show that our attacker successfully fools the neural network, and we measure its effectiveness in reducing the estimation performance. We present a white-box adversarial attacker to support engineers in designing safety-critical regression machine learning models. We present our results on various open-source and real industrial tabular datasets. In particular, the proposed adversarial attacker outperforms attackers based on random perturbations of the inputs. Our analysis relies on the quantification of the fooling error as well as various error metrics. A noteworthy feature of our attacker is that it allows us to optimally attack a subset of inputs, which may be helpful for analysing the sensitivity of specific inputs.
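A hedged sketch of a Jacobian-based white-box attack for regression. The paper's attacker is derived from algebraic properties of the Jacobian; this toy version merely perturbs along the sign of the Jacobian (FGSM-style) and compares the resulting output shift with a random perturbation of the same size. The two-layer network and all sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.standard_normal((6, 4)), rng.standard_normal(6)
w2 = rng.standard_normal(6)

def predict(x):
    # Toy regressor: y = w2 . tanh(W1 x + b1), scalar output.
    return w2 @ np.tanh(W1 @ x + b1)

def jacobian(x):
    # Analytic dy/dx for the toy regressor above.
    h = np.tanh(W1 @ x + b1)
    return W1.T @ (w2 * (1.0 - h**2))

x = rng.standard_normal(4)
eps = 0.05

x_adv = x + eps * np.sign(jacobian(x))         # Jacobian-guided perturbation
x_rnd = x + eps * rng.choice([-1.0, 1.0], 4)   # random sign flip, same L_inf size

fooling_error = abs(predict(x_adv) - predict(x))
random_error  = abs(predict(x_rnd) - predict(x))
print("Jacobian attack error:", fooling_error)
print("random attack error:  ", random_error)
```

To first order, the Jacobian-guided direction maximizes the output change over all perturbations of the same L-infinity norm, which is why such attacks generally dominate random perturbations of equal size.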
Multivariate Lipschitz Analysis of the Stability of Neural Networks
The stability of neural networks with respect to adversarial perturbations has been extensively studied. One of the main strategies consists of quantifying the Lipschitz regularity of neural networks. In this paper, we introduce a multivariate Lipschitz-constant-based stability analysis of fully connected neural networks, allowing us to capture the influence of each input or group of inputs on the network's stability. Our approach relies on a suitable re-normalization of the input space, with the objective of performing a more precise analysis than the one provided by a global Lipschitz constant. We investigate the mathematical properties of the proposed multivariate Lipschitz analysis and show its usefulness in better understanding the sensitivity of the neural network with regard to groups of inputs. We display the results of this analysis through a new representation designed for machine learning practitioners and safety engineers, termed a Lipschitz star. The Lipschitz star is a graphical and practical tool for analyzing the sensitivity of a neural network model during its development with regard to different combinations of inputs. By leveraging this tool, we show that it is possible to build robust-by-design models using spectral normalization techniques for controlling the stability of a neural network, given a safety Lipschitz target. Thanks to our multivariate Lipschitz analysis, we can also measure the efficiency of adversarial training in inference tasks. We perform experiments on various open-access tabular datasets, and also on a real Thales Air Mobility industrial application subject to certification requirements.
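A hedged sketch of the idea behind a per-input sensitivity analysis, not the paper's exact estimator: for a one-hidden-layer network with 1-Lipschitz activations, restricting the first weight matrix to a single input's column bounds the network's sensitivity to that input alone, and the resulting per-input values are what a "Lipschitz star" plot would display. The network and sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_inputs = 4
W1 = rng.standard_normal((8, n_inputs))
w2 = rng.standard_normal((1, 8))

def spectral_norm(W):
    return np.linalg.svd(W, compute_uv=False)[0]

def partial_lipschitz_bound(i):
    # Keep only input i's column of the first layer: with 1-Lipschitz
    # activations this bounds the sensitivity to variations of input i alone.
    col = W1[:, [i]]
    return spectral_norm(col) * spectral_norm(w2)

global_bound = spectral_norm(W1) * spectral_norm(w2)
star = [partial_lipschitz_bound(i) for i in range(n_inputs)]

for i, v in enumerate(star):
    print(f"input {i}: partial bound {v:.3f}")
print(f"global bound: {global_bound:.3f}")
```

Each per-input bound is at most the global bound (a column's norm never exceeds the matrix's spectral norm), so the per-input values refine, rather than contradict, the global Lipschitz analysis.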