36 research outputs found

    Enhancing the Automotive E/E Architecture Utilising Container-Based Electronic Control Units

    Over the past 40 years, with the advent of computing technology and embedded systems such as Electronic Control Units (ECUs), cars have moved from solely mechanical control to predominantly digital control. Whilst improvements have been realised in terms of passenger safety and vehicle efficiency, the rising number of ECUs has created several issues for the automotive industry. These include greater demands placed on power, increased vehicle weight, growing hardware and software complexity, dependency on software, limited software life expectancy, ad-hoc methods for automotive software updates, and rising costs for the vehicle manufacturer and consumer. As the modern motor car enters the autonomous age, these issues are predicted to intensify because there will be an even greater reliance on computing hardware and software to support new driving functions. To address the issues highlighted above, a number of solutions that aid hardware consolidation and promote software reusability have been proposed. However, these depend on bespoke embedded hardware, and there remains a lack of clearly defined mechanisms for updating ECU software. This research moves away from these current practices and identifies many similarities between the datacentre and the automotive Electronic and Electrical (E/E) architecture, demonstrating that virtualisation technologies, which have provided many benefits to the datacentre, can be replicated within an automotive context. Specifically, the research presents a comprehensive study of the Central Processing Unit (CPU) and memory resources required and consumed to support a container-based ECU automotive function. The research reveals that lightweight container virtualisation offers many advantages. A container-based ECU can promote consolidation and enhance the automotive E/E architecture through power, weight and cost savings, as well as enabling a robust mechanism to facilitate software updates throughout the lifetime of a vehicle. Furthermore, this research demonstrates that there are opportunities to adopt this approach within the automotive industry and, more broadly, within industries that utilise embedded systems.
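
    As a rough illustration of the kind of CPU and memory measurement described above, a container's resource consumption can be sampled from the Linux cgroup v2 interface. The sketch below is a minimal example under assumed conditions (a cgroup v2 host and an illustrative cgroup path for a hypothetical "ecu-function" container); it is not the instrumentation used in the study.

        # Minimal sketch: sample CPU time and memory for one container via cgroup v2.
        # The cgroup path and the container name "ecu-function" are illustrative only.
        from pathlib import Path
        import time

        CGROUP = Path("/sys/fs/cgroup/system.slice/docker-ecu-function.scope")

        def read_cpu_usec() -> int:
            # cpu.stat contains lines such as "usage_usec 123456"
            for line in (CGROUP / "cpu.stat").read_text().splitlines():
                key, value = line.split()
                if key == "usage_usec":
                    return int(value)
            raise RuntimeError("usage_usec not found in cpu.stat")

        def read_memory_bytes() -> int:
            # memory.current reports the cgroup's current memory usage in bytes
            return int((CGROUP / "memory.current").read_text())

        if __name__ == "__main__":
            t0, cpu0 = time.time(), read_cpu_usec()
            time.sleep(5)  # sampling interval
            t1, cpu1 = time.time(), read_cpu_usec()
            cpu_pct = (cpu1 - cpu0) / ((t1 - t0) * 1e6) * 100
            print(f"CPU: {cpu_pct:.1f}%  memory: {read_memory_bytes() / 2**20:.1f} MiB")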

    Software Technologies - 8th International Joint Conference, ICSOFT 2013 : Revised Selected Papers


    TOWARDS CHANGE VALIDATION IN DYNAMIC SYSTEM UPDATING FRAMEWORKS

    Dynamic Software Updating (DSU) provides mechanisms to update a program without stopping its execution. An indiscriminate update that does not consider the current state of the computation can undermine the stability of the running application. Automatically determining a safe moment, i.e., a point in time at which the updating process may start, is still an open problem that is usually neglected by existing DSU systems. The program developer is best placed to know the program semantics, the logical relations between two successive versions, and the constraints that must be respected before the update can proceed. Therefore, a set of meta-data has been introduced that can be used to express the constraints of the update; these constraints must be checked at dynamic update time. A runtime validator has thus been designed and implemented to verify these constraints before the update process starts. The validator is independent of existing DSU systems and can be plugged into a DSU as a pre-update component. An architecture for validation has been proposed that includes the DSU, the running program, the validator, and their communications. Along with the ability to describe restrictions using meta-data, a method has been presented to extract some constraints automatically. The gradual transition from the old version to the new version requires that the running application frequently switch between executing old and new code for a transient period. Although this swinging-execution phenomenon is inevitable, its starting point can be chosen. To address this issue, an automatic method has been proposed to determine which parts of the code are unsafe to participate in the swinging execution. The method has been implemented as a static analyzer that can annotate the unsafe parts of the code as constraints. This approach is demonstrated on the evolution of various versions of three different long-running software systems and compared with other approaches. Although the approach has been evaluated by evolving various programs, the impact of different kinds of changes on dynamic updates is not entirely clear. Studying the effect of these changes can also identify code smells in the program with respect to dynamic updating. For the first time, code smells have been introduced that may cause a run-time or syntax error during the dynamic update process. A set of candidate error-prone patterns has been developed based on programming language features and the possible changes for each feature. This set of 75 patterns was inspected with three distinct DSUs to identify problematic cases as code smells. Additionally, the error-prone pattern set can be used as a reference set by other DSUs to measure their own flexibility.
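
    A pre-update validator of the kind described above, which checks developer-supplied constraints (meta-data) against the state of the running program before the update is allowed to proceed, could be sketched roughly as follows. All names and the constraint format are hypothetical illustrations, not the API of the implemented validator or of any particular DSU system.

        # Sketch of a DSU-independent pre-update validator: each constraint is a
        # developer-supplied predicate over the running program's state, and the
        # update is released only when every constraint holds.
        from dataclasses import dataclass
        from typing import Any, Callable, Mapping

        @dataclass
        class Constraint:
            name: str
            holds: Callable[[Mapping[str, Any]], bool]  # predicate over program state

        def validate(state: Mapping[str, Any], constraints: list[Constraint]) -> list[str]:
            """Return names of violated constraints; an empty list means it is safe to update."""
            return [c.name for c in constraints if not c.holds(state)]

        def request_update(state: Mapping[str, Any],
                           constraints: list[Constraint],
                           apply_update: Callable[[], None]) -> bool:
            violated = validate(state, constraints)
            if violated:
                print("update deferred, violated constraints:", violated)
                return False
            apply_update()  # hand control back to the DSU to perform the actual update
            return True

        # Example: only update when no transaction is open and work queues are drained.
        constraints = [
            Constraint("no_open_transactions", lambda s: s["open_transactions"] == 0),
            Constraint("queues_drained", lambda s: s["queued_jobs"] == 0),
        ]
        request_update({"open_transactions": 0, "queued_jobs": 3},
                       constraints, lambda: print("updating"))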

    Du génie logiciel pour déployer, gérer et reconfigurer les logiciels (Software engineering for deploying, managing and reconfiguring software)

    As a discipline, software engineering embraces various schools of thought, yet remains consistent in its objective: to support the effective and inexpensive production of software by contributing mathematical frameworks, methods and tools. As a result, the software production process has been partly automated and, as of today, astronomical amounts of code are produced daily. This rapidly and massively produced software is required for all the computing equipment that has invaded our daily lives in various forms (PC, tablet, phone, refrigerator, car, etc.). In this world of large-scale software consumption, it is somewhat surprising that the management of software after its production remains dominated by manual practices: searching in lists, downloading items one by one and installing them by hand. In this context, since 2001, the year of my PhD defence, my research activities have aimed at providing mathematical frameworks, methods and tools to deploy, distribute and update software at large scale. These research activities were mainly conducted in Brest at the computer science department of Telecom Bretagne, as part of the PASS team of IRISA. This document (my Habilitation à Diriger des Recherches) puts into perspective my scientific contributions, the projects undertaken, the research students trained and my investment as a teacher. My scientific contributions can be divided into five parts: mathematical models and associated algorithms for dependency management in software deployment; software component models; processes and tools for massive software deployment; dynamic update of programs at runtime; and languages for the design and implementation of software development processes. All of these works complement each other and together point towards methods and tools for large-scale software deployment.
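
    As a toy illustration of the dependency-management theme above, an installation order that respects package dependencies can be computed by a topological sort of the dependency graph. The sketch below is a generic example with an invented package graph; it is not one of the models or algorithms contributed in this work.

        # Toy dependency resolution: compute an installation order so that every
        # dependency is installed before the packages that depend on it.
        # The package names and the graph are purely illustrative.
        from graphlib import TopologicalSorter  # Python 3.9+

        # package -> set of packages it depends on
        deps = {
            "app":     {"libnet", "libui"},
            "libui":   {"libcore"},
            "libnet":  {"libcore"},
            "libcore": set(),
        }

        order = list(TopologicalSorter(deps).static_order())
        print("install order:", order)  # e.g. ['libcore', 'libui', 'libnet', 'app']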

    Plates-formes et mises à jour dynamiques configurables (Configurable platforms and dynamic updates)

    Dynamic software updating allows applications to be modified without interrupting the services they provide. Because today's systems rely heavily on software and on its availability, such a capability is an important concern. Many mechanisms, with diverse requirements and properties, enable dynamic updates. They are used by platforms targeting specific types of applications and/or updates. While the specialisation of these platforms makes the development of dynamic updates easier, it can leave a platform ill-suited to unforeseen updates. A solution is to select and combine the mechanisms best suited to each update, in order to guarantee better compatibility of platforms with different kinds of applications and updates. The three contributions detailed in this thesis pursue this objective: studying existing platforms and identifying generic models of platforms and updates; studying the requirements and properties of update mechanisms, as well as their ability to be combined; and developing configurable platforms that allow the best-suited mechanisms to be selected for each update. These contributions open avenues towards a new generation of platforms and towards new uses of dynamic updates. The third contribution led to the development of Pymoult, a configurable platform for Python programs. Pymoult provides several mechanisms through a high-level API suited to the design of dynamic updates.
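
    To make the idea of a dynamic update mechanism concrete, the sketch below swaps a function's implementation in a running Python program at a chosen point, so the service loop picks up the new version without restarting. It is a generic illustration of one such mechanism, not Pymoult's actual API; all names are invented.

        # Generic illustration of one dynamic-update mechanism in Python: rebind a
        # module-level function between two of its invocations so that the running
        # loop executes the new version without being restarted.
        import threading
        import time

        def get_greeting(name: str) -> str:          # version 1
            return f"Hello, {name}"

        def updated_get_greeting(name: str) -> str:  # version 2, to be hot-swapped in
            return f"Bonjour, {name}"

        def service_loop(stop: threading.Event) -> None:
            while not stop.is_set():
                print(get_greeting("world"))         # resolves the current global binding
                time.sleep(0.5)

        stop = threading.Event()
        threading.Thread(target=service_loop, args=(stop,), daemon=True).start()
        time.sleep(1.5)
        get_greeting = updated_get_greeting          # the "update": rebind at a safe point
        time.sleep(1.5)
        stop.set()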

    Matching Possible Mitigations to Cyber Threats: A Document-Driven Decision Support Systems Approach

    Cyber systems are ubiquitous in all aspects of society. At the same time, breaches of cyber systems continue to be front-page news (Calfas, 2018; Equifax, 2017) and, despite more than a decade of heightened focus on cybersecurity, the threat continues to evolve and grow, costing up to $575 billion globally each year (Center for Strategic and International Studies, 2014; Gosler & Von Thaer, 2013; Microsoft, 2016; Verizon, 2017). To address possible impacts due to cyber threats, information system (IS) stakeholders must assess the risks they face. Following a risk assessment, the next step is to determine mitigations to counter the threats that pose unacceptably high risks. The literature contains a robust collection of studies on optimizing mitigation selections, but they universally assume that a starting list of appropriate mitigations for specific threats already exists from which to down-select. In current practice, producing this starting list is largely a manual process, and it is challenging because it requires detailed cybersecurity knowledge from highly decentralized sources, is often deeply technical in nature, and is primarily described in textual form, leading to dependence on human experts to interpret the knowledge for each specific context. At the same time, cybersecurity experts remain in short supply relative to demand, and the gap between supply and demand continues to grow (Center for Cyber Safety and Education, 2017; Kauflin, 2017; Libicki, Senty, & Pollak, 2014). Thus, an approach is needed to help cybersecurity experts (CSEs) cut through the volume of available mitigations and select those that are potentially viable against specific threats. This dissertation explores the application of machine learning and text retrieval techniques to automate the matching of relevant mitigations to cyber threats, where both are expressed as unstructured or semi-structured English-language text. Using the Design Science Research Methodology (Hevner & March, 2004; Peffers, Tuunanen, Rothenberger, & Chatterjee, 2007), we consider a number of possible designs for the matcher, ultimately selecting a supervised machine learning approach that combines two techniques: support vector machine classification and latent semantic analysis. The selected approach demonstrates high recall for mitigation documents in the relevant class, bolstering confidence that potentially viable mitigations will not be overlooked. It also has a strong ability to discern documents in the non-relevant class, allowing approximately 97% of non-relevant mitigations to be excluded automatically, greatly reducing the CSE's workload compared with purely manual matching. A false positive rate of up to 3% prevents fully automated mitigation selection and requires the CSE to reject a few false positives. This research contributes to theory a method for automatically mapping mitigations to threats when both are expressed as English-language text documents. This artifact represents a novel machine learning approach to threat-mitigation mapping. The research also contributes an instantiation of the artifact for demonstration and evaluation. From a practical perspective, the artifact benefits all threat-informed cyber risk assessment approaches, whether formal or ad hoc, by aiding decision-making for the cybersecurity experts whose job it is to mitigate the identified cyber threats.
    In addition, an automated approach makes mitigation selection more repeatable, facilitates knowledge reuse, extends the reach of cybersecurity experts, and is extensible to accommodate the continued evolution of both cyber threats and mitigations. Moreover, the mitigations selected for each threat can serve as inputs into multifactor analyses of alternatives, both automated and manual, thereby bridging the gap between cyber risk assessment and final mitigation selection.
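
    A minimal sketch of the kind of matcher described above, combining latent semantic analysis over TF-IDF features with a support vector machine classifier, is shown below using scikit-learn. The tiny training set and labels are invented for illustration and do not come from the dissertation's data.

        # Sketch of the matching approach: TF-IDF + truncated SVD (latent semantic
        # analysis) feeding a linear support vector machine that labels each candidate
        # mitigation document as relevant or not. Documents and labels are fabricated.
        from sklearn.pipeline import Pipeline
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.decomposition import TruncatedSVD
        from sklearn.svm import LinearSVC

        train_docs = [
            "Apply vendor patches to remediate the SQL injection vulnerability",
            "Enable multi-factor authentication to counter credential phishing",
            "Annual fire safety training for facilities staff",
            "Cafeteria menu and catering schedule for the quarter",
        ]
        train_labels = [1, 1, 0, 0]  # 1 = relevant mitigation, 0 = not relevant

        matcher = Pipeline([
            ("tfidf", TfidfVectorizer(stop_words="english")),
            ("lsa", TruncatedSVD(n_components=2, random_state=0)),
            ("svm", LinearSVC()),
        ])
        matcher.fit(train_docs, train_labels)

        candidates = ["Patch management policy for database servers",
                      "Holiday party planning committee notes"]
        print(matcher.predict(candidates))  # e.g. [1 0]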

    Marshall Space Flight Center Research and Technology Report 2019

    Today, our calling to explore is greater than ever before, and here at Marshall Space Flight Center we make human deep space exploration possible. A key goal for Artemis is demonstrating and perfecting on the Moon the capabilities and technologies needed for humans to get to Mars. This year's report features 10 of the Agency's 16 Technology Areas, and I am proud of Marshall's role in creating solutions for so many of these daunting technical challenges. Many of these projects will lead to a sustainable in-space architecture for human space exploration that will allow us to travel to the Moon, on to Mars, and beyond. Others are developing new scientific instruments capable of providing an unprecedented glimpse into our universe. NASA has led the charge in space exploration for more than six decades, and through the Artemis program we will build on our work in low Earth orbit and pave the way to the Moon and Mars. At Marshall, we leverage the skills and interests of the international community to conduct scientific research, develop and demonstrate technology, and train international crews to operate farther from Earth for longer periods of time than ever before: first at the lunar surface, then on to our next giant leap, the human exploration of Mars. While each project in this report seeks to advance new technology and challenge conventions, it is important to recognize the diversity of activities and people supporting our mission. This report not only showcases the Center's capabilities and our partnerships, it also highlights the progress our people have achieved in the past year. These scientists, researchers and innovators are why Marshall and NASA will continue to be leaders in innovation, exploration, and discovery for years to come.

    The web-based simulation and information service for multi-hazard impact chains. Design document.

    The overall objective of the PARATUS project and its platform is the co-development of a web-based simulation and information service for first and second responders and other stakeholders to evaluate the impact chains of multi-hazard events, with particular emphasis on cross-border and cascading impacts. This deliverable provides a first impression of the platform and its components. A central theme in the PARATUS project is the co-development of the tools with stakeholders; the central stakeholders within the four application case studies are therefore full project partners and will be directly involved in the development of the platform. We foresee that the PARATUS platform will have two major blocks: an information service, which provides static (or regularly updated) information, and a simulation service, a dynamic component in which stakeholders can interactively work with the tools in the platform. The PARATUS project will also ensure that documentation (e.g., documentation accompanying the software) is publicly available via the project website and other trusted repositories. Deliverable 4.1 was submitted to the European Commission on 31/07/2023 and is awaiting approval by the Research Executive Agency; this version may therefore not represent the final version of the deliverable.
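
    As a rough sketch of the two-block split described above (a static information service alongside a dynamic simulation service), the snippet below exposes both behind one small web API using Flask. The endpoints, parameters and toy impact calculation are invented placeholders and do not reflect the actual PARATUS platform design.

        # Toy illustration of an information service (static/reference data) and a
        # simulation service (interactive computation) behind a single web API.
        # Endpoint names, parameters and the impact model are placeholders only.
        from flask import Flask, jsonify, request

        app = Flask(__name__)

        HAZARD_PROFILES = {
            "flood": {"region": "cross-border basin (example)", "return_period_years": 100},
        }

        @app.get("/info/<hazard>")
        def info(hazard: str):
            # Information service: static or regularly updated reference data.
            return jsonify(HAZARD_PROFILES.get(hazard, {}))

        @app.post("/simulate")
        def simulate():
            # Simulation service: stakeholders interactively explore impact chains.
            params = request.get_json(force=True)
            affected = int(params.get("exposed_population", 0) * params.get("impact_fraction", 0.1))
            return jsonify({"directly_affected": affected,
                            "cascading_impacts": {"power_outage": affected > 1000}})

        if __name__ == "__main__":
            app.run(port=8080)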

    High performance scientific computing in applications with direct finite element simulation

    xiii, 133 p.
    The prediction of separated flow, including the stall of a complete aircraft, using computational fluid dynamics (CFD) is considered one of the grand challenges to be solved by 2030, according to NASA. The nonlinear Navier-Stokes equations provide the mathematical formulation for fluid flow in three-dimensional space; however, classical solutions, existence and uniqueness are still lacking. Since brute-force computation is intractable for predictive simulation of a complete aircraft, one could use direct numerical simulation (DNS), but it is prohibitively expensive because it must resolve turbulence at a cost scaling as Re^(9/4). Other methods, such as the statistically averaged Reynolds-Averaged Navier-Stokes (RANS), the spatially averaged Large Eddy Simulation (LES) and the hybrid Detached Eddy Simulation (DES), require fewer degrees of freedom, but they all have to be tuned to benchmark problems and, in addition, the mesh near the walls has to be very fine to resolve the boundary layers, which makes the computational cost very high. Above all, the results are sensitive to, for example, explicit parameters in the method, the mesh, etc.
    As a solution to this challenge, we present the adaptive Direct FEM Simulation (DFS) methodology with numerical tripping, a predictive, parameter-free family of methods for turbulent flow. We solved the JAXA Standard Model (JSM) aircraft at a realistic Reynolds number, presented as part of the 3rd High Lift Prediction Workshop. We predicted lift (Cl) within 5% error versus experiment, drag (Cd) within 10% error, and stall within 1 degree of angle of attack. The workshop identified a likely experimental error of the order of 10% for the drag results. The simulation is 10 times faster and cheaper compared to traditional or existing CFD approaches. The efficiency comes mainly from the slip boundary condition, which allows coarse meshes near the walls, from goal-oriented adaptive error control, which refines the mesh only where needed, and from large time steps using a Schur-type fixed-point iteration method, without compromising the accuracy of the simulation results.
    We also present a generalisation of DFS to variable density, validated against the well-established MARIN benchmark problem; the results show good agreement with the experimental pressure-sensor measurements. We then used this methodology to solve two applications involving multiphase flow. The first concerns the flushing of a rainwater storage tank (Bilbao water consortium), where we predicted that the water height in the tank has a significant influence on how the flow behaves downstream of the tank gate (valve). The second concerns the design of a nozzle for 3D printing, for which we developed an efficient design with a focused jet flow to avoid oxidation and heating at the tip of the nozzle during a melting process.
    Finally, we present parallelisation on multiple GPUs and on the Kalray embedded architecture. Almost all of today's supercomputers have heterogeneous architectures, such as CPU+GPU or other accelerators, so it is essential to develop computational frameworks that take advantage of them. CFD only began to develop in the 1960s, once sufficient computational power was available, and it is therefore essential to use and evaluate these accelerators for CFD computations. GPUs have a different architecture from traditional CPUs: a GPU has many more cores, which makes it a good option for parallel computation. For multiple GPUs, we developed a stencil computation applied to the simulation of geological folds; we explored halo computation and used CUDA streams to optimise computation and communication time. The resulting performance gain was 23% on four GPUs with the Fermi architecture, and the corresponding improvement on four Kepler GPUs was 47%.
    This research was carried out at the Basque Center for Applied Mathematics (BCAM), within the CFD Computational Technology (CFDCT), and at the School of Electrical Engineering and Computer Science (Royal Institute of Technology, Stockholm, Sweden). It was supported by Fundacion Obra Social "la Caixa", the Severo Ochoa Excellence research centre 2014-2018 SEV-2013-0323, the Severo Ochoa Excellence research centre 2018-2022 SEV-2017-0718, the BERC program 2014-2017, the BERC program 2018-2021, the MSO4SC European project and Elkartek. This work was performed using the computing infrastructure of SNIC (Swedish National Infrastructure for Computing).
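
    The Re^(9/4) cost quoted above follows from the standard Kolmogorov scaling argument (a textbook estimate, not specific to this thesis):

        % The Kolmogorov scale eta relates to the integral scale L as eta/L ~ Re^{-3/4},
        % so resolving a three-dimensional domain of size L^3 down to eta requires
        % N ~ (L/eta)^3 ~ Re^{9/4} grid points, which is what makes brute-force DNS
        % of a complete aircraft intractable.
        \[
          \frac{\eta}{L} \sim Re^{-3/4}
          \qquad\Longrightarrow\qquad
          N \sim \left(\frac{L}{\eta}\right)^{3} \sim Re^{9/4}.
        \]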