
    GPU devices for safety-critical systems: a survey

    Graphics Processing Unit (GPU) devices and their associated software programming languages and frameworks can deliver the computing performance required to facilitate the development of next-generation high-performance safety-critical systems, such as autonomous driving systems. However, integrating complex, parallel, and computationally demanding software functions with different safety-criticality levels on GPU devices with shared hardware resources gives rise to several safety-certification challenges. This survey categorizes and provides an overview of research contributions that address GPU devices' random hardware failures, systematic failures, and independence of execution.
    This work has been partially supported by the European Research Council under Horizon 2020 (grant agreements No. 772773 and 871465), the Spanish Ministry of Science and Innovation under grant PID2019-107255GB, the HiPEAC Network of Excellence, and the Basque Government under grant KK-2019-00035. The Spanish Ministry of Economy and Competitiveness has also partially supported Leonidas Kosmidis with a Juan de la Cierva Incorporación postdoctoral fellowship (FJCI-2020-045931-I).
    Peer reviewed. Postprint (author's final draft).

    Towards a Secure and Resilient Vehicle Design: Methodologies, Principles and Guidelines

    The advent of autonomous and connected vehicles has brought new cyber security challenges to the automotive industry. It requires vehicles to be designed to remain dependable in the event of cyber-attacks. A modern vehicle can contain over 150 computers, over 100 million lines of code, and various connection interfaces such as USB ports, WiFi, Bluetooth, and 4G/5G. Continuous technological advancement within the automotive industry enables safety improvements through increased control of, e.g., brakes, steering, and the engine. Although the technology is beneficial, its complexity has the side effect of giving rise to a multitude of vulnerabilities that may be exploited in cyber-attacks. Consequently, there is an increase in regulations demanding compliance with vehicle cyber security and resilience requirements, which state that vehicles should be designed to be resilient to cyber-attacks, with the capability to detect and appropriately respond to them. Moreover, requirements for automotive digital forensic capabilities are beginning to emerge. Failures in automated driving functions can be caused by hardware and software failures as well as cyber security issues, and it is imperative to investigate the cause of such failures. However, there is currently no clear guidance on how to comply with these regulations from a technical perspective.
    In this thesis, we propose a methodology to predict and mitigate vulnerabilities in vehicles using a systematic approach for security analysis; this methodology is further used to develop a framework ensuring a resilient and secure vehicle design with respect to a multitude of analyzed vehicle cyber-attacks. Moreover, we review and analyze scientific literature on resilience techniques, fault tolerance, and dependability for attack detection, mitigation, recovery, and resilience endurance; these techniques are then incorporated into the above-mentioned framework. Finally, to meet requirements to patch the increasing number of bugs in vehicle software quickly and securely, we propose a versatile framework for vehicle software updates.

    Fuzzing Deep Learning Compilers with HirGen

    Deep Learning (DL) compilers are widely adopted to optimize advanced DL models for efficient deployment on diverse hardware, and their quality has a profound effect on the quality of the compiled models. A recent bug study shows that the optimization of high-level intermediate representation (IR) is the most error-prone compilation stage: bugs in this stage account for 44.92% of all collected bugs. However, existing testing techniques do not consider high-level optimization-related features (e.g., high-level IR) and are therefore weak at exposing bugs at this stage. To bridge this gap, we propose HirGen, an automated testing technique that aims to effectively expose coding mistakes in the optimization of high-level IR. The design of HirGen includes 1) three coverage criteria to generate diverse and valid computational graphs; 2) full use of the high-level IR's language features to generate diverse IRs; and 3) three test oracles inspired by both differential testing and metamorphic testing. HirGen has successfully detected 21 bugs in TVM, with 17 confirmed and 12 fixed. Further, we construct four baselines using state-of-the-art DL compiler fuzzers that can cover the high-level optimization stage. Our experimental results show that HirGen can detect 10 crashes and inconsistencies within 48 hours that the baselines cannot. We further validate the usefulness of our proposed coverage criteria and test oracles in our evaluation.
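    The differential-testing oracle that HirGen draws on can be illustrated with a toy sketch: two independently implemented "pipelines" for the same computation are run on random inputs, and any disagreement beyond a tolerance is flagged as a potential bug. The polynomial-evaluation pipelines below are illustrative stand-ins, not HirGen's actual TVM-based setup.

```python
import math
import random

def eval_naive(coeffs, x):
    # Reference "pipeline": evaluate sum(c_i * x**i) term by term.
    return sum(c * x ** i for i, c in enumerate(coeffs))

def eval_horner(coeffs, x):
    # "Optimized pipeline": Horner's rule, algebraically equivalent.
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

def differential_oracle(trials=1000, tol=1e-6):
    # Run both pipelines on random inputs; report any disagreement
    # beyond the tolerance as a candidate bug.
    rng = random.Random(0)
    mismatches = []
    for _ in range(trials):
        coeffs = [rng.uniform(-2, 2) for _ in range(rng.randint(1, 6))]
        x = rng.uniform(-2, 2)
        a, b = eval_naive(coeffs, x), eval_horner(coeffs, x)
        if not math.isclose(a, b, rel_tol=tol, abs_tol=tol):
            mismatches.append((coeffs, x, a, b))
    return mismatches

print(len(differential_oracle()))  # → 0: the two pipelines agree
```

    A real compiler fuzzer replaces the two functions with the same model compiled at different optimization levels; the oracle logic stays the same.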

    IMUs: validation, gait analysis and system’s implementation

    Master's dissertation in Biomedical Engineering (specialization in Medical Electronics). Falls are a prevalent problem in today's society, and their number has increased greatly in the last fifteen years. Some falls result in injuries, and the cost associated with their treatment is high. This is a complex problem that requires several steps to be tackled. In particular, it is crucial to develop strategies that recognize the mode of locomotion, indicating the state of the subject in various situations: normal gait, the step before a fall (pre-fall), and the fall itself. This thesis therefore aims to develop a strategy capable of identifying these situations based on a wearable system that collects information on and analyses the human gait. The strategy consists, essentially, in the construction and use of Associative Skill Memories (ASMs) as tools for recognizing locomotion modes. At an early stage, the capabilities of ASMs for the different modes of locomotion were studied, and a classifier was developed based on a set of ASMs. Subsequently, a neural network classifier based on deep learning, a technique now widely used in data classification, was used to classify the same modes of locomotion in a similar way. These classifiers were implemented and compared, providing a tool with good accuracy in recognizing the modes of locomotion. To implement this strategy, it was first necessary to carry out extremely important support work. An inertial measurement unit (IMU) system was chosen due to its great potential for monitoring ambulatory activities in the home environment. This system, which combines inertial and magnetic sensors and is able to monitor gait parameters in real time, was validated and calibrated. It was then used to collect data from healthy subjects who mimicked falls.
    Results have shown that the accuracy of the classifiers was quite acceptable; the neural-network-based classifier presented the best results, with 92.71% accuracy. As future work, it is proposed to apply these strategies in real time in order to prevent the occurrence of falls.
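    As a rough illustration of the window-based classification idea (not the thesis's actual ASM or deep-learning models), the sketch below extracts simple features from synthetic accelerometer windows and assigns each window to the nearest class centroid; all signal parameters here are invented.

```python
import math
import random

def window_features(samples):
    # Mean and standard deviation of acceleration magnitude per window.
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    return (mean, math.sqrt(var))

def make_window(mode, rng):
    # Synthetic accelerometer magnitudes (in g): falls show large,
    # erratic spikes; these class parameters are illustrative only.
    base = {"normal": 1.0, "pre-fall": 1.3, "fall": 2.5}[mode]
    spread = {"normal": 0.05, "pre-fall": 0.2, "fall": 0.6}[mode]
    return [rng.gauss(base, spread) for _ in range(50)]

def train_centroids(modes, rng, per_mode=30):
    # Average the feature vectors of labeled windows per class.
    cents = {}
    for m in modes:
        feats = [window_features(make_window(m, rng)) for _ in range(per_mode)]
        cents[m] = tuple(sum(f[i] for f in feats) / per_mode for i in range(2))
    return cents

def classify(window, cents):
    # Nearest-centroid decision in feature space.
    f = window_features(window)
    return min(cents, key=lambda m: sum((f[i] - cents[m][i]) ** 2 for i in range(2)))

rng = random.Random(1)
cents = train_centroids(["normal", "pre-fall", "fall"], rng)
correct = sum(classify(make_window(m, rng), cents) == m
              for m in ["normal", "pre-fall", "fall"] for _ in range(20))
print(f"accuracy: {correct}/60")
```

    The thesis's classifiers operate on richer multi-sensor features, but the pipeline shape (windowing, feature extraction, per-mode model, decision) is the same.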

    RISK ASSESSMENT AND MITIGATION OF TELECOM EQUIPMENT UNDER FREE AIR COOLING CONDITIONS

    In recent years, about 40% of the total energy consumed by data centers has been devoted to cooling infrastructure. One way to save energy is free air cooling (FAC), which uses outside air as the primary cooling medium, instead of air conditioning, to reduce the energy needed to cool data centers. Despite the energy savings, implementing free air cooling changes the operating environment, which may adversely affect the performance and reliability of telecom equipment. This thesis reviews the challenges and risks posed by free air cooling. Increased temperature, uncontrolled humidity, and possible contamination may make some failure mechanisms, e.g., conductive anodic filament (CAF) formation and corrosion, more active. If the local temperatures of some hot spots go beyond the recommended operating conditions (RoC), equipment performance may be affected. In this thesis, a methodology is proposed to identify the impact of free air cooling on telecom equipment performance. It uses performance variations under traditional air conditioning (A/C) to create a baseline and compares performance variation under the variable temperature and humidity representative of FAC against that baseline. This method can help data centers determine an appropriate operating environment based on service requirements when FAC is implemented. In addition, a statistics-based approach is developed to identify the appropriate metric for comparing performance variations. This is the first study focusing on the impact of FAC on telecom equipment performance. The thesis also proposes a multi-stage (design, test, and operation) approach to mitigate the reliability risks of telecom equipment under free air cooling conditions. Specifically, a prognostics-based approach is proposed to mitigate reliability risks at the operation stage, and a case study is presented to show the implementation process. This approach does not interrupt data center services and does not consume additional useful life of the telecom equipment, allowing the implementation of FAC in data centers that were not originally designed for this cooling method.
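    The baseline idea can be sketched as follows: characterize a performance metric under conventional A/C, then flag FAC operating points whose variation falls outside the baseline band. The metric, data, and 3-sigma band below are illustrative assumptions, not the thesis's actual measurements.

```python
import random
import statistics

rng = random.Random(7)

# Baseline: a performance metric measured under traditional A/C.
baseline = [rng.gauss(100.0, 1.0) for _ in range(200)]
mu = statistics.fmean(baseline)
sigma = statistics.stdev(baseline)

def within_baseline(value, k=3.0):
    # An FAC measurement is acceptable if it stays inside the
    # k-sigma band established under A/C conditions.
    return abs(value - mu) <= k * sigma

# Simulated FAC measurements: mild drift vs. a hot-spot excursion.
fac_ok = [rng.gauss(100.2, 1.1) for _ in range(50)]
fac_hot = [rng.gauss(106.0, 2.0) for _ in range(50)]

ok = sum(within_baseline(v) for v in fac_ok)
hot = sum(within_baseline(v) for v in fac_hot)
print(ok, "of 50 mild-drift FAC points in band")
print(hot, "of 50 hot-spot FAC points in band")
```

    A real deployment would use the thesis's statistics-based metric selection rather than a fixed sigma band, but the comparison-against-baseline structure is the same.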

    STANDARDIZING FUNCTIONAL SAFETY ASSESSMENTS FOR OFF-THE-SHELF INSTRUMENTATION AND CONTROLS

    It is typical for digital instrumentation and controls used to manage significant risk to undergo substantial scrutiny: the equipment must be proven to have the necessary level of design integrity. The details of the scrutiny vary by industry, but the ultimate goal is to provide sufficient evidence that the equipment will operate successfully when performing its required functions. To stand up to the scrutiny and, more importantly, to successfully perform the required safety functions, the equipment must be designed to defend against random hardware failures and to prevent systematic faults. These design activities must also be documented in a manner that sufficiently proves their adequacy. The variability in the requirements of different industries makes this task difficult for instrumentation and controls equipment manufacturers. To help manufacturers deal with these differences, a standardization of requirements is needed to facilitate clear communication of expectations. The IEC 61508 set of standards exists to fulfill this role, but it is not yet universally embraced. Once it is, various industries, from nuclear power generation to oil & gas production, will benefit from a wider range of equipment that has been designed to perform in these critical roles and that includes the evidence necessary to prove its integrity. Manufacturers will then enjoy the benefit of a larger customer base interested in their products. The use of IEC 61508 will also help industries avoid significant uncertainty when selecting commercial off-the-shelf equipment. It cannot currently be assumed that a typical commercial manufacturer's equipment designs and associated design activities will be adequate for success in these high-risk applications. In contrast, a manufacturer that seeks to comply with IEC 61508 and to achieve certification by an independent third party can be assumed to be better suited to meeting the needs of these demanding applications. Using such manufacturers helps avoid substantial uncertainty and risk.

    The risk mitigation strategy taxonomy and generated risk event effect neutralization method

    In the design of new products and systems, the mitigation of potential failures is very important. The earlier in a product's design mitigation can be performed, the lower its cost and the easier it is to implement. Currently, however, most mitigation strategies rely on the expertise of the engineers designing a product, and while failure-mode models do exist to help, there are no guidelines for making product changes to reduce risk. To help alleviate this, the risk mitigation strategy taxonomy is created from an empirical collection of mitigation strategies used in industry for failure mitigation, creating a consistent set of definitions for electromechanical risk mitigation strategies. By storing mitigation data in this consistent format, the data can be used to evaluate and compare different mitigation strategies. Applying this, the Generated Risk Event Effect Neutralization (GREEN) method generates mitigation strategies for a product during conceptual design, when changes are easiest to implement and cost the least. The GREEN method then compares the strategies and selects the best one based on the popularity, likelihood change, and consequence change that result from implementing them --Abstract, page iv
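    The compare-and-select step can be sketched as a simple weighted scoring over candidate strategies. The strategy names, delta values, and weights below are hypothetical placeholders, not the taxonomy's real data or the GREEN method's actual scoring function.

```python
# Candidate mitigation strategies:
# (name, popularity in 0..1, change in failure likelihood, change in consequence)
# Negative deltas mean the strategy reduces that risk component.
strategies = [
    ("add redundancy",        0.8, -0.30, -0.05),
    ("derate component",      0.6, -0.15, -0.10),
    ("add protective shield", 0.4, -0.05, -0.40),
]

def score(pop, d_likelihood, d_consequence, w=(0.2, 0.4, 0.4)):
    # Higher popularity and larger risk reductions (more negative
    # deltas) yield a higher score; weights are illustrative.
    return w[0] * pop - w[1] * d_likelihood - w[2] * d_consequence

best = max(strategies, key=lambda s: score(*s[1:]))
print(best[0])  # → add redundancy
```

    Storing every strategy in this uniform (popularity, likelihood change, consequence change) format is what makes such direct comparison possible.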

    Design and validation of decision and control systems in automated driving

    xxvi, 148 p. Over the last decade, a growing trend toward vehicle automation has emerged, generating a significant change in mobility that will profoundly affect people's way of life, freight logistics, and other transport-dependent sectors. In the development of automated driving in structured environments, safety and comfort, as part of the new driving functionalities, are not yet described in a standardized way. Since testing methods increasingly rely on simulation techniques, existing developments must be adapted to this process. For example, since path-tracking technologies are essential enablers, exhaustive verification must be applied to related applications such as vehicle motion control and parameter estimation. Furthermore, in-vehicle technologies must be robust enough to meet safety requirements, improving redundancy and supporting fail-safe operation. Given these premises, this doctoral thesis aims at the design and implementation of a framework for achieving Automated Driving Systems (ADS) that considers crucial aspects such as real-time execution, robustness, operating range, and simple parameter tuning. To develop the contributions of this work, a study of the current state of the art in highly automated driving technologies is carried out. A two-step method is then proposed that addresses the validation of both simulation vehicle models and the ADS. New model-based predictive formulations are introduced to improve safety and comfort in the path-tracking process. Finally, malfunction scenarios are evaluated to improve safety in urban environments, and an alternative positioning-estimation strategy is proposed to minimize risk conditions.
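    Model-based predictive path-tracking formulations like those mentioned above build on a vehicle prediction model. The sketch below uses the standard kinematic bicycle model with a one-step predictive steering choice; the wheelbase, speed, step size, and candidate steering angles are illustrative assumptions, not the thesis's actual controller.

```python
import math

def step(state, v, steer, wheelbase=2.7, dt=0.1):
    # Kinematic bicycle model, forward-Euler integration.
    # state = (x, y, heading); steer is the front-wheel angle in rad.
    x, y, th = state
    th += v * math.tan(steer) / wheelbase * dt
    x += v * math.cos(th) * dt
    y += v * math.sin(th) * dt
    return (x, y, th)

def choose_steer(state, target, candidates=(-0.3, -0.1, 0.0, 0.1, 0.3), v=5.0):
    # One-step predictive control: pick the candidate steering angle
    # whose predicted next position is closest to the target point.
    def cost(s):
        x, y, _ = step(state, v, s)
        return (x - target[0]) ** 2 + (y - target[1]) ** 2
    return min(candidates, key=cost)

# Drive from the origin toward a target point up and to the left.
state = (0.0, 0.0, 0.0)
for _ in range(30):
    state = step(state, 5.0, choose_steer(state, (10.0, 5.0)))
print(round(state[0], 1), round(state[1], 1))
```

    A full MPC formulation would optimize a control sequence over a longer horizon subject to comfort and actuator constraints; this one-step version only shows how the prediction model enters the loop.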