
    A Survey on Security Threats and Countermeasures in IEEE Test Standards

    Editor's note: Test infrastructure has been shown to be a portal for hackers. This article reviews the threats and countermeasures for IEEE test infrastructure standards.

    Quantifiable Assurance: From IPs to Platforms

    Hardware vulnerabilities are generally considered more difficult to fix than software ones because they persist after fabrication. It is therefore crucial to assess security and fix vulnerabilities at earlier design phases, such as the Register Transfer Level (RTL) and gate level. The focus of existing security assessment techniques is mainly twofold: first, they check the security of Intellectual Property (IP) blocks separately; second, they assess security against individual threats, assuming the threats are orthogonal. We argue that IP-level security assessment is not sufficient. Eventually, the IPs are placed in a platform, such as a system-on-chip (SoC), where each IP is surrounded by other IPs connected through glue logic and shared/private buses. Hence, we must develop a methodology to assess platform-level security by considering both the IP-level security and the impact of the additional parameters introduced during platform integration. Another important factor is that threats are not always orthogonal: improving security against one threat may affect security against others. Hence, to build a secure platform, we must first answer the following questions: What additional parameters are introduced during platform integration? How do we define and characterize the impact of these parameters on security? How do the mitigation techniques for one threat impact others? This paper aims to answer these questions and proposes techniques for quantifiable assurance by quantitatively estimating and measuring the security of a platform at the pre-silicon stages. We also touch upon the notion of security optimization and present challenges for future research directions.
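    As a rough illustration of what platform-level quantifiable assurance could look like, the sketch below combines hypothetical IP-level scores with an integration penalty into a per-threat platform score. The IP names, threat classes, weakest-link combination rule, and penalty factor are illustrative assumptions, not the scoring model proposed in the paper.

```python
# Hypothetical sketch of platform-level security aggregation.
# The IP names, threat classes, and weighting scheme below are illustrative
# assumptions, not the paper's scoring model.
from dataclasses import dataclass, field

THREATS = ("fault_injection", "side_channel", "ip_piracy")

@dataclass
class IPBlock:
    name: str
    # IP-level security score per threat, in [0, 1] (1 = most secure).
    scores: dict = field(default_factory=dict)

def platform_score(ips, threat, integration_penalty):
    """Combine IP-level scores into a platform-level score for one threat.

    integration_penalty models parameters introduced at integration time
    (shared buses, glue logic, exposed debug ports) as a multiplicative
    weakening factor in [0, 1].
    """
    weakest_ip = min(ip.scores.get(threat, 0.0) for ip in ips)
    return weakest_ip * integration_penalty

ips = [
    IPBlock("crypto_core", {"fault_injection": 0.8, "side_channel": 0.6, "ip_piracy": 0.9}),
    IPBlock("mem_ctrl",    {"fault_injection": 0.7, "side_channel": 0.9, "ip_piracy": 0.5}),
]

for threat in THREATS:
    print(threat, round(platform_score(ips, threat, integration_penalty=0.85), 3))
```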

    Human-Centered Automation for Resilience in Acquiring Construction Field Information

    Resilient acquisition of timely, detailed job site information plays a pivotal role in maintaining the productivity and safety of construction projects with busy schedules, dynamic workspaces, and unexpected events. In the field, construction information acquisition typically involves three types of activities: sensor-based inspection, manual inspection, and communication. Human interventions play critical roles in all three, and a resilient information acquisition system is needed for safer and more productive construction. Various automation technologies could improve human performance by proactively providing the knowledge needed to use equipment, improving situation awareness in multi-person collaborations, and reducing the mental workload of operators and inspectors. Unfortunately, few studies consider human factors in automation techniques for construction field information acquisition. Fully utilizing these techniques requires a systematic synthesis of the interactions among humans, tasks, and the construction workspace, so that the complexity of information acquisition tasks is reduced and people can complete them reliably. Such a synthesis of human factors in field data collection and analysis paves the path towards "Human-Centered Automation" (HCA) in construction management. HCA can form a computational framework that supports resilient field data collection while accounting for human factors and unexpected events on dynamic job sites. This dissertation presents an HCA framework for resilient construction field information acquisition and results from examining three HCA approaches that support three use cases of construction field data collection and analysis. The first is an automated data collection planning method that assists construction inspectors with 3D laser scan planning to achieve comprehensive and efficient data collection. The second is a Bayesian model-based approach that automatically aggregates the common sense of people on the internet to identify job site risks from large numbers of job site pictures. The third is an automatic communication protocol optimization approach that maximizes the team situation awareness of construction workers and leads to early detection of workflow delays and critical path changes. Data collection and simulation experiments extensively validate these three HCA approaches.
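    To make the idea of Bayesian aggregation of crowd judgments concrete, here is a minimal sketch that pools many annotators' yes/no judgments about a hazard in a job-site picture with a simple Beta-Bernoulli posterior. The vote counts and uniform prior are assumed for illustration and are not the dissertation's actual model.

```python
# Illustrative sketch (not the dissertation's model): aggregating the
# "common sense" judgments of many internet annotators about whether a
# job-site picture shows a hazard, using a Beta-Bernoulli posterior.

def hazard_posterior(yes_votes, no_votes, alpha=1.0, beta=1.0):
    """Posterior mean probability that a hazard is present, given
    independent yes/no votes and a Beta(alpha, beta) prior."""
    return (alpha + yes_votes) / (alpha + beta + yes_votes + no_votes)

# Example: 14 of 20 annotators flag a missing guardrail in one picture.
p = hazard_posterior(yes_votes=14, no_votes=6)
print(f"P(hazard | votes) = {p:.2f}")   # ~0.68 with a uniform prior
```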

    Low-cost and efficient fault detection and diagnosis schemes for modern cores

    Continuous improvements in transistor scaling, together with microarchitectural advances, have made possible the widespread adoption of high-performance processors across all market segments. However, the growing reliability threats induced by technology scaling and by the complexity of designs are challenging the production of cheap yet robust systems. Soft error trends are alarming, especially for combinational logic, and parity and ECC codes are becoming insufficient as combinational logic turns into the dominant source of soft errors. Furthermore, experts warn about the need to also address intermittent and permanent faults during processor runtime, as increasing temperatures and device variations will accelerate inherent aging phenomena. These challenges especially threaten the commodity segments, which impose requirements that existing fault tolerance mechanisms cannot meet. Current techniques based on redundant execution were devised at a time when high penalties were accepted for the sake of high reliability levels. Novel lightweight techniques are therefore needed to enable fault protection in the mass-market segments. The complexity of designs is also making post-silicon validation extremely expensive. Validation costs exceed design costs, and the number of discovered bugs is growing, both during validation and once products hit the market. Fault localization and diagnosis are the biggest bottlenecks, magnified by huge detection latencies, limited internal observability, and costly server farms used to generate test outputs. This thesis explores two directions to address some of the critical challenges introduced by unreliable technologies and by the limitations of current validation approaches. We first explore mechanisms for comprehensively detecting multiple sources of failure in modern processors during their lifetime, including transient, intermittent, and permanent faults as well as design bugs. Our solutions embrace a paradigm where fault tolerance is built by exploiting high-level microarchitectural invariants that are reusable across designs, rather than relying on re-execution or ad-hoc block-level protection. To do so, we decompose the basic functionalities of processors into high-level tasks and propose three novel runtime verification solutions that, combined, enable global error detection: a computation/register dataflow checker, a memory dataflow checker, and a control flow checker. The techniques use the concept of end-to-end signatures and allow designers to adjust the fault coverage to their needs by trading off area, power and performance. Our fault injection studies reveal that our methods provide high coverage levels while incurring significantly lower performance, power and area costs than existing techniques. This thesis then extends the applicability of the proposed error detection schemes to the validation phases. We present a fault localization and diagnosis solution for the memory dataflow by combining our error detection mechanism, a new low-cost logging mechanism, and a diagnosis program. Selected internal activity is continuously traced and kept in a memory-resident log whose capacity can be expanded to suit validation needs. The solution can catch undiscovered bugs, reducing the dependence on simulation farms that compute golden outputs. Upon error detection, the diagnosis algorithm analyzes the log to automatically locate the bug and determine its root cause.
Our evaluations show that very high localization coverage and diagnosis accuracy can be obtained at very low performance and area costs. The net result is a simplification of current debugging practices, which are extremely manual, time-consuming and cumbersome. Altogether, the integrated solutions proposed in this thesis enable the industry to deliver more reliable and correct processors as technology evolves into more complex designs and more vulnerable transistors.
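    As a toy illustration of signature-based control-flow checking, one of the three checker families mentioned above, the sketch below verifies an executed trace of basic-block signatures against a legal control-flow graph. The signatures and graph are invented for the example; the thesis' hardware checkers are far more elaborate.

```python
# Minimal illustrative sketch (not the thesis implementation) of
# signature-based control-flow checking: each basic block carries a
# compile-time signature, and a runtime monitor verifies that every
# observed transition follows a legal edge of the control-flow graph.

# Hypothetical control-flow graph: block signature -> allowed successor signatures.
CFG_EDGES = {
    0xA1: {0xB2, 0xC3},
    0xB2: {0xD4},
    0xC3: {0xD4},
    0xD4: set(),          # exit block
}

def check_control_flow(trace):
    """Return the index of the first illegal transition, or None if the
    executed trace of block signatures is consistent with the CFG."""
    for i in range(len(trace) - 1):
        cur, nxt = trace[i], trace[i + 1]
        if nxt not in CFG_EDGES.get(cur, set()):
            return i + 1          # error detected at this point in the trace
    return None

print(check_control_flow([0xA1, 0xB2, 0xD4]))   # None -> no error
print(check_control_flow([0xA1, 0xD4]))         # 1 -> illegal jump detected
```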

    Design of the software development and verification system (SWDVS) for shuttle NASA study task 35

    An overview of the Software Development and Verification System (SWDVS) for the space shuttle is presented. The design considerations, goals, assumptions, and major features of the design are examined. A scenario is developed that shows three people involved in flight software development using the SWDVS in response to a program change request. The SWDVS is described from the standpoint of the different groups of people with different responsibilities in the shuttle program, to show the functional requirements that influenced the SWDVS design. The software elements of the SWDVS that satisfy the requirements of the different groups are identified.

    Secure Physical Design

    An integrated circuit is subject to a number of attacks, including information leakage, side-channel attacks, fault injection, malicious change, reverse engineering, and piracy. The majority of these attacks take advantage of the physical placement and routing of cells and interconnects. Several measures have already been proposed to deal with security issues in high-level functional design and logic synthesis. However, to ensure an end-to-end trustworthy IC design flow, it is necessary to have security sign-off during the physical design flow. This paper presents a secure physical design roadmap to enable an end-to-end trustworthy IC design flow. The paper also discusses the use of AI/ML to establish security at the layout level. Major research challenges in obtaining a secure physical design are also discussed.
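    To give a flavor of what one layout-level security sign-off rule might look like, the hypothetical sketch below flags non-critical routing segments that run too close to a security-critical net, a crude proxy for crosstalk or probing leakage risk. The net names, geometry model, and spacing threshold are assumptions, not checks defined in the paper.

```python
# Hypothetical layout-level security check: flag routing segments of
# non-critical nets that run closer than a minimum spacing to a
# security-critical net. Names and threshold are illustrative only.
from dataclasses import dataclass

@dataclass
class Segment:
    net: str
    x: float        # segment centerline x-coordinate (same metal layer assumed)
    y_lo: float
    y_hi: float

CRITICAL_NETS = {"aes_key_bus"}
MIN_SPACING = 0.5   # illustrative spacing threshold, in microns

def leakage_violations(segments):
    """Return (critical_net, other_net) pairs whose vertical segments
    overlap in y and sit closer than MIN_SPACING in x."""
    crit = [s for s in segments if s.net in CRITICAL_NETS]
    rest = [s for s in segments if s.net not in CRITICAL_NETS]
    out = []
    for c in crit:
        for r in rest:
            overlaps = not (r.y_hi < c.y_lo or r.y_lo > c.y_hi)
            if overlaps and abs(c.x - r.x) < MIN_SPACING:
                out.append((c.net, r.net))
    return out

layout = [
    Segment("aes_key_bus", x=1.0, y_lo=0.0, y_hi=5.0),
    Segment("dbg_scan",    x=1.3, y_lo=2.0, y_hi=6.0),   # too close -> flagged
    Segment("clk_root",    x=4.0, y_lo=0.0, y_hi=5.0),
]
print(leakage_violations(layout))   # [('aes_key_bus', 'dbg_scan')]
```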

    Intelligent Economic Alarm Processor (IEAP)

    The advent of electricity market deregulation has placed great emphasis on the availability of information, the analysis of that information, and the subsequent decision-making to optimize system operation in a competitive environment. This creates a need for better ways of correlating market activity with the physical grid operating states in real time and sharing such information among market participants. Choices of command and control actions may have different financial consequences for market participants and severely impact their profits. This work provides a solution, the Intelligent Economic Alarm Processor (IEAP), to be implemented in a control center to assist the grid operator in rapidly identifying faulted sections and managing market operations. Fault section estimation is difficult when multiple faults, failures of protection devices, and false data are involved. A Fuzzy Reasoning Petri Nets (FRPN) approach is proposed to tackle these complexities: the fuzzy reasoning, starting from protection system status data and ending with an estimate of the faulted power system section, is formulated with Petri nets, and the reasoning process is implemented by matrix operations. Next, to feed the FRPN model with more accurate inputs, the failure rates of the protection devices are analyzed. A new approach is introduced to assess a circuit breaker's life cycle, or deterioration stages, using its control circuit data. Unlike traditional "mean time" criteria, the deterioration stages are mathematically defined by setting limits on various performance indices, and the model can be automatically updated as new real-time condition-based data become available to assess the circuit breaker's operating performance using probability distributions. Finally, the economic alarm processor module first analyzes the fault severity based on information retrieved from the fault section estimation module, and reports the resulting changes in locational marginal prices (LMPs), total generation cost, congestion revenue, and related quantities along with electricity market schedules and trends. It then suggests restorative actions to optimize the overall system benefit. When market participants receive such information in advance, they can anticipate the system operator's restorative actions and their competitors' reactions to them.
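    As an illustration of how fuzzy reasoning over a Petri net can be carried out with matrix operations, the toy sketch below propagates alarm truth degrees through a single protection rule to a fault hypothesis. The places, rule, and certainty factor are invented for the example, and the actual FRPN formulation used in this work may differ.

```python
# Toy fuzzy-reasoning-Petri-net step implemented with simple array operations
# (illustrative only; not the FRPN formulation of this work). Places hold
# truth degrees of protection-status propositions; transitions encode rules
# such as "breaker tripped AND relay operated -> line section faulted".
import numpy as np

# Places: p0 "relay R1 operated", p1 "breaker B1 tripped", p2 "section S1 faulted"
theta = np.array([0.9, 0.8, 0.0])        # initial truth degrees from alarms
mu    = np.array([0.95])                 # certainty factor of the single rule

# Incidence matrices: rows = places, cols = transitions (rules).
I = np.array([[1],                       # p0 is an input of rule t0
              [1],                       # p1 is an input of rule t0
              [0]])
O = np.array([[0],
              [0],
              [1]])                      # p2 is the conclusion of rule t0

def frpn_step(theta, I, O, mu):
    """One reasoning step: fire every rule with the min of its input truth
    degrees scaled by its certainty factor, then keep the max belief per place."""
    firing = np.array([
        min(theta[p] for p in range(len(theta)) if I[p, t]) * mu[t]
        for t in range(I.shape[1])
    ])
    return np.maximum(theta, O @ firing)

prev = None
while prev is None or not np.allclose(theta, prev):
    prev, theta = theta, frpn_step(theta, I, O, mu)

print(theta)   # -> [0.9, 0.8, 0.76]: belief that section S1 is faulted
```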