350 research outputs found

    Correct-by-construction microarchitectural pipelining

    This paper presents a method for correct-by-construction microarchitectural pipelining that handles cyclic systems with dependencies between iterations. Our method combines previously known bypass and retiming transformations with a few transformations valid only for elastic systems with early evaluation (namely, empty FIFO insertion, FIFO capacity sizing, insertion of anti-tokens, and introduction of early-evaluation multiplexors). By converting the design to a synchronous elastic form and then applying this extended set of transformations, one can pipeline a functional specification with an automatically generated distributed controller that implements stalling logic, resolving data hazards off the critical path of the design. We have developed an interactive toolkit for exploring elastic microarchitectural transformations. The method is illustrated by pipelining a few simple examples of instruction set architecture (ISA) specifications.

    Policies of System Level Pipeline Modeling

    Pipelining is a well-understood and often-used implementation technique for increasing the performance of a hardware system. We develop several SystemC/C++ modeling techniques that allow us to quickly model, simulate, and evaluate pipelines. We employ a small domain-specific language (DSL) based on resource usage patterns that automates the drudgery of the boilerplate code needed to configure connectivity in simulation models. The DSL is embedded directly in the host modeling language, SystemC/C++. Additionally, we develop several techniques for parameterizing a pipeline's behavior based on policies of function, communication, and timing (performance modeling).
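The policy-based parameterization the abstract describes can be illustrated in miniature. The sketch below is a plain-Python analogue (not the paper's SystemC/C++ code; all class and policy names are ours) in which each stage receives its function and its timing as separate, swappable policy objects:

```python
# Minimal sketch of policy-based pipeline modelling: each stage takes a
# function policy (what it computes) and a latency policy (how many cycles
# it occupies), mirroring the separation of function and timing policies.
class Stage:
    def __init__(self, name, func_policy, latency_policy):
        self.name = name
        self.func = func_policy          # function policy: transformation applied
        self.latency = latency_policy    # timing policy: cycles consumed

class Pipeline:
    def __init__(self, stages):
        self.stages = stages

    def run(self, items):
        """Process items through all stages; return results and total cycles.

        Serial model for clarity: it ignores stage overlap between items.
        """
        total_cycles = 0
        results = []
        for item in items:
            for stage in self.stages:
                item = stage.func(item)
                total_cycles += stage.latency(item)
            results.append(item)
        return results, total_cycles

# Example: a two-stage "decode then execute" pipeline with fixed latencies.
pipe = Pipeline([
    Stage("decode", lambda x: x * 2, lambda _: 1),
    Stage("execute", lambda x: x + 1, lambda _: 2),
])
print(pipe.run([1, 2, 3]))  # ([3, 5, 7], 9)
```

Swapping the latency lambdas or the stage functions reconfigures the model without touching the `Pipeline` class, which is the essence of the policy-based design the paper advocates.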

    Recover Data about Detected Defects of Underground Metal Elements of Constructions in Amazon Elasticsearch Service

    This paper examines data manipulation in terms of data recovery using cloud computing and a search engine. Accidental deletion or problems with the remote service cause information loss, with unpredictable consequences, since the data must be re-collected; in some cases this is not possible due to system features. The primary purpose of this work is to offer solutions for recovering data on detected defects of underground metal structural elements using modern information technologies. The main factors that affect the durability of underground metal structural elements are the external action of the soil environment and constant maintenance-free use. Defects can occur in several places, so control must be carried out along the entire length of the underground network. To avoid the loss of essential data, approaches for recovery using Amazon Web Services and a developed web service based on the REST architecture are considered. A general algorithm is proposed for the system that collects and monitors data on defects of underground metal structural elements. The study results show the possibility of data recovery for the developed system using automatic snapshots or backup data duplication.
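As a concrete illustration of the snapshot-based recovery path, the sketch below builds the two REST calls commonly used for manual snapshots in Amazon Elasticsearch Service: registering an S3 snapshot repository and taking a snapshot. This is a hedged sketch, not code from the paper; the repository, bucket, and role names are placeholders, and the requests are only constructed here, not sent.

```python
import json

# Sketch of the two snapshot REST calls: register an S3-backed snapshot
# repository, then take a named snapshot of all indices. All names below
# are illustrative placeholders.
def register_repository(repo_name, bucket, role_arn):
    """Build the PUT request that registers an S3 snapshot repository."""
    return {
        "method": "PUT",
        "path": f"/_snapshot/{repo_name}",
        "body": json.dumps({
            "type": "s3",
            "settings": {"bucket": bucket, "role_arn": role_arn},
        }),
    }

def create_snapshot(repo_name, snapshot_name):
    """Build the PUT request that takes a named snapshot of all indices."""
    return {
        "method": "PUT",
        "path": f"/_snapshot/{repo_name}/{snapshot_name}",
        "body": json.dumps({"indices": "*", "include_global_state": True}),
    }

req = register_repository("defect-backups", "defect-data-bucket",
                          "arn:aws:iam::123456789012:role/es-snapshots")
print(req["path"])  # /_snapshot/defect-backups
```

Restoring from such a snapshot is the mirror-image call (`POST /_snapshot/<repo>/<snapshot>/_restore`), which is what makes the duplication worthwhile when the primary cluster loses data.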

    New trends for conducting hazard & operability (HAZOP) studies in continuous chemical processes

    Identifying hazards is fundamental for ensuring the safe design and operation of a system in process plants and other facilities. Several techniques are available to identify hazardous situations, all of which require rigorous, thorough, and systematic application by a multi-disciplinary team of experts. Success rests upon first identifying and subsequently analyzing possible scenarios that can cause accidents with different degrees of severity. While hazard identification may be the most important stage of risk management, it depends on subjective factors (e.g., human observation, good judgment and intuition, creativity, expertise, knowledge), which introduce bias. Without a structured identification system, hazards can be overlooked, entailing incomplete risk evaluations and potential loss. The present Thesis focuses on developing both managerial and technical aspects intended to standardize one of the most widely used techniques for hazard identification: the HAZard & OPerability (HAZOP) study. These criteria have been carefully implemented not only to ensure that most hazardous scenarios will be identified, but also that the requirements of the US OSHA PSM Rule, the EPA RMP, and the Seveso Directive will be met.
Chapter I introduces the main research topic, from the process safety concept up to the evidence that more detailed information is required from the related regulations. A review of regulations (i.e., US and European legislation) focused on hazard identification has been conducted, highlighting the absence of specific criteria for performing techniques intended to identify what can go wrong. Chapter II introduces the risk management system required to analyze the risk of chemical process facilities, and justifies that the hazard identification stage is the foundation of Process Safety. An overview of the key Process Hazard Analyses (PHA) is then given, and the specific HAZOP weaknesses and strengths are highlighted to establish the first steps to focus on. Chapter III establishes the scope, the purpose, and the specific objectives that the research covers. It answers the following questions: why the present research is performed, which elements are included, and what has been considered in reaching the final conclusions of the manuscript. Chapter IV gathers HAZOP-related literature from books, guidelines, standards, major journals, and conference proceedings, with the purpose of classifying the research conducted over the years and finally defining the HAZOP state of the art. Additionally, according to the information collected, the current HAZOP limitations are emphasized, along with the research needs that should be considered for improving and advancing HAZOP. Chapter V analyzes the data collected while preparing, organizing, executing, and writing up HAZOPs in five petroleum-refining processes. A statistical analysis has been performed to extract guidance and conclusions supporting the established criteria for conducting HAZOP studies effectively. Chapter VI establishes the whole set of actions that have to be taken into account to ensure a well-planned and well-executed HAZOP study. Both technical and management issues are addressed, with criteria supported by the previous chapters of the manuscript. Chapter VI itself is the result of the present research, and could be used as a guideline not only for team leaders, but also for any party interested in performing HAZOPs in continuous chemical processes. Chapter VII states the final conclusions of the research: interested parties are informed of the hazard-identification gaps present in current process safety regulations, the key limitations of the HAZOP study, and finally the criteria established to cover the research needs that have been found.
Annex I proposes the key tools (ready-to-use tables, figures, and checklists) for conducting HAZOPs in continuous chemical processes. The information layout is structured according to the proposed HAZOP Management System and is intended to provide concise and structured documentation to be used as a reference book when conducting HAZOPs. Annex II overviews the most relevant petroleum-refining processes, highlighting key factors to take into account from the point of view of process safety and hazard identification (i.e., HAZOP); key health and safety information on specific petroleum-refining units is provided as valuable guidance during brainstorming sessions. Annex III presents the complete set of data collected during the field work of the present research and analyzed in Chapter V of the manuscript, together with a statistical summary of the key variables treated during the analysis. Finally, the Nomenclature, References, and Abbreviations & Acronyms used and cited throughout the manuscript are listed.
Additionally, a Glossary of key terms related to the Process Safety field is included.
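The core HAZOP mechanic that the thesis sets out to standardize, i.e. systematically crossing guide words with process parameters to enumerate deviations, is easy to sketch. The guide-word and parameter lists below are the generic textbook ones, not the thesis's tables:

```python
# Sketch of HAZOP deviation generation: guide words are crossed with
# process parameters so that no combination is overlooked, then pairings
# that are not physically meaningful are filtered out.
GUIDE_WORDS = ["NO", "MORE", "LESS", "REVERSE", "AS WELL AS", "PART OF", "OTHER THAN"]
PARAMETERS = ["flow", "pressure", "temperature", "level"]

# Not every pairing makes physical sense; a real study maintains such a filter.
MEANINGLESS = {("REVERSE", "pressure"), ("REVERSE", "temperature"), ("REVERSE", "level")}

def generate_deviations(guide_words, parameters, excluded):
    """Return every meaningful guide-word/parameter deviation to examine."""
    return [f"{gw} {param}"
            for param in parameters
            for gw in guide_words
            if (gw, param) not in excluded]

deviations = generate_deviations(GUIDE_WORDS, PARAMETERS, MEANINGLESS)
print(len(deviations))  # 25
print(deviations[0])    # NO flow
```

Each generated deviation (e.g. "NO flow") then becomes a prompt for the multi-disciplinary team to brainstorm causes, consequences, and safeguards, which is where the thesis's management criteria apply.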

    Automatic verification of pipelined microprocessors

    Thesis (M.S.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1994. Includes bibliographical references (p. 71-72). By Vishal Lalit Bhagwati.

    2023 Projects Day Booklet


    New techniques for functional testing of microprocessor based systems

    Electronic devices may be affected by failures, for example due to physical defects. These defects may be introduced during the manufacturing process, as well as during the normal operating life of the device due to aging. Detecting all these defects is not a trivial task, especially in complex systems such as processor cores. Nevertheless, safety-critical applications do not tolerate failures; this is the reason why testing such devices is needed, so as to guarantee correct behavior at any time. Moreover, testing is a key parameter for assessing the quality of a manufactured product. Consolidated testing techniques are based on special Design for Testability (DfT) features added to the original design to facilitate test effectiveness. Design, integration, and usage of the available DfT for testing purposes are fully supported by commercial EDA tools; hence, approaches based on DfT are the standard solutions adopted by silicon vendors for testing their devices. Tests exploiting the available DfT, such as scan chains, manipulate the internal state of the system differently from the normal functional mode, passing through unreachable configurations. Alternative solutions that do not violate the functional mode are defined as functional tests. In microprocessor-based systems, functional testing techniques include software-based self-test (SBST), i.e., a piece of software (referred to as a test program) that is uploaded into the system's available memory and executed, with the purpose of exciting a specific part of the system and observing the effects of possible defects affecting it. SBST has been widely studied by the research community for years, but its adoption by industry is quite recent. My research activities have been mainly focused on the industrial perspective of SBST.
The problem of providing an effective development flow and guidelines for integrating SBST into the available operating systems has been tackled, and results have been provided on microprocessor-based systems for the automotive domain. Remarkably, new algorithms have also been introduced with respect to state-of-the-art approaches, which can be systematically implemented to enrich SBST suites of test programs for modern microprocessor-based systems. The proposed development flow and algorithms are currently employed in real electronic control units for automotive products. Moreover, a special hardware infrastructure purposely embedded in modern devices for interconnecting the numerous on-board instruments has been an interest of my research as well. This solution is known as reconfigurable scan networks (RSNs), and its practical adoption is growing fast as new standards have been created. Test and diagnosis methodologies have been proposed targeting specific RSN features, aimed at checking whether the reconfigurability of such networks has been corrupted by defects and, in that case, at identifying the defective elements of the network. The contribution of my work in this field has also been included in the first suite of public-domain benchmark networks.
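The SBST concept described above can be sketched in miniature. In practice a test program is assembly code running on the processor itself; the Python below is only an analogy (all names are ours), showing the typical pattern of deterministic stimuli whose responses are compressed into a signature and compared against a precomputed golden value:

```python
# Sketch of the software-based self-test (SBST) idea: apply deterministic
# stimuli to a functional unit, fold the responses into one signature, and
# compare it with a golden signature from a known-good unit.
def adder(a, b):
    return (a + b) & 0xFFFFFFFF  # model of a 32-bit adder under test

def run_test_program(unit, patterns):
    """Execute the stimuli and fold every response into one signature."""
    signature = 0
    for a, b in patterns:
        response = unit(a, b)
        # Simple rotate-and-xor compression (a stand-in for a MISR).
        signature = ((signature << 1) | (signature >> 31)) & 0xFFFFFFFF
        signature ^= response
    return signature

PATTERNS = [(0x0, 0x0), (0xFFFFFFFF, 0x1), (0xAAAAAAAA, 0x55555555)]
GOLDEN = run_test_program(adder, PATTERNS)  # computed once on a good unit

def faulty_adder(a, b):
    return adder(a, b) & ~0x1   # stuck-at-0 fault on the low result bit

print(run_test_program(adder, PATTERNS) == GOLDEN)         # True
print(run_test_program(faulty_adder, PATTERNS) == GOLDEN)  # False
```

On real hardware the "unit" is exercised through carefully chosen instruction sequences, and observing the signature mismatch is what reveals the defect without any DfT intrusion into functional mode.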

    Architectural Exploration of KeyRing Self-Timed Processors

    Over the last few years, microprocessors have had to increase their performance while keeping their power envelope within tight bounds, as dictated by the needs of various markets: from the ultra-low-power requirements of the IoT, to the electrical power consumption budget of enterprise servers, by way of passive cooling and day-long battery life in mobile devices. This high demand for power-efficient processors, coupled with the limitations of technology scaling (which no longer provides improved performance at constant power density), is leading designers to explore new microarchitectures with the goal of pulling more performance out of a fixed power budget. This work enters into this trend by proposing a new processor microarchitecture, called KeyRing, with a low-power design intent. The switching activity of integrated circuits (i.e., transistors switching on and off) directly affects their dynamic power consumption. Circuit-level design techniques such as clock-gating are widely adopted, as they dramatically reduce the impact of the global clock in synchronous circuits, which constitutes the main source of switching activity. The KeyRing microarchitecture presented in this work uses an asynchronous clocking scheme that relies on decentralized synchronization mechanisms to reduce the switching activity of circuits. It is derived from the AnARM, a power-efficient ARM processor developed by Octasic using an ad hoc asynchronous microarchitecture. Although it delivers better power efficiency than synchronous alternatives, it is for the most part incompatible with standard timing-driven synthesis and Static Timing Analysis (STA). In addition, its design style does not fit well within the existing asynchronous design paradigms. This work lays the foundations for a more rigorous definition of this rather unorthodox design style, using circuits and methods coming from the asynchronous literature. The resulting KeyRing microarchitecture is developed in combination with Electronic Design Automation (EDA) methods that alleviate incompatibility issues related to ad hoc clocking, enabling timing-driven optimization and verification of KeyRing circuits using industry-standard design flows. In addition to bridging the gap with standard design practices, this work also proposes comprehensive experimental protocols that aim to strengthen the causal relation between the reported asynchronous microarchitecture and a reduced power consumption compared with synchronous alternatives. The main achievement of this work is a framework that enables the architectural exploration of circuits using the KeyRing microarchitecture.

    Learner-focussed methodology for improving the resilience of training organisations in complex environments

    Organisations are increasingly relying on resilience to adapt to uncertain and evolving operational environments, whilst continuing to achieve their requirements and addressing pathologies in their organisational design and operations. The challenge is exacerbated by growing organisational complexity, competing priorities, and unintended consequences of various system modifications and trade-off decisions. Addressing these challenges requires a comprehensive approach to organisational resilience. Despite the proliferation of resilience research in the academic literature, organisational resilience practitioners lack a holistic, practical methodology for designing and maintaining resilience in continuously operating and mature organisations. The research reports its comprehensive approach to ensuring desired organisational performance and resilience characteristics. The key aspects of resilience and organisation are determined through an extensive literature review and key stakeholder engagements, followed by the establishment of the current ‘as is’ state of a Defence training organisation, characterised by a mature design, complexity, and the need for uninterrupted delivery of its functions in continuous operations. The research combines resilience conceptualisation and organisational design review outcomes to formulate its approach to the organisational transition from the current ‘as is’ to the future ‘to be’ state, securing long-term delivery of the required outputs under diverse stressors. The approach is based on an original resilience framework and architecture; new resilience measures introduced via a survey instrument; and non-traditional application of various systems thinking and modelling and simulation methodologies to review and modify a mature and fully operational training organisation targeting resilience.
The approach was applied in more than 20 Defence training establishments at different levels of aggregation over three years, yielding indicative results and real benefits to the participating organisations; research limitations, contributions, and continuous improvement strategies are also reported. Although a Defence training organisation context is used here, the principles of the research approach may be applied to any organisation. Future research directions concern further quantification of organisational resilience aspects, such as the effect of their interrelationships on organisational performance and organisational importance ratings; expanding the scope of organisational context from training to other organisational types; and developing automation approaches for resilience survey data analysis and reporting.

    Combining qualitative and quantitative reasoning to support hazard identification by computer

    This thesis investigates the proposition that use must be made of quantitative information to control the reporting of hazard scenarios in automatically generated HAZOP reports. HAZOP is a successful and widely accepted technique for the identification of process hazards. However, it requires an expensive commitment of time and personnel near the end of a project. Use of a HAZOP emulation tool before conventional HAZOP could speed up the examination of routine hazards, or identify deficiencies in the design of a plant. Qualitative models of process equipment can efficiently model fault propagation in chemical plants. However, purely qualitative models lack the representational power to model many constraints in real plants, resulting in indiscriminate reporting of failure scenarios. In the AutoHAZID computer program, qualitative reasoning is used to emulate HAZOP. Signed directed graph (SDG) models of equipment are used to build a graph model of the plant. This graph is searched to find links between faults and consequences, which are reported as hazardous scenarios associated with process variable deviations. However, factors not represented in the SDG, such as the fluids in the plant, often affect the feasibility of scenarios. Support for the qualitative model system, in the form of quantitative judgements to assess the feasibility of certain hazards, was investigated and is reported here. This thesis also describes the novel "Fluid Modelling System" (FMS), which now provides this quantitative support mechanism in AutoHAZID. The FMS allows the attachment of conditions to SDG arcs. Fault paths are validated by testing the conditions along their arcs, and infeasible scenarios are removed. In the FMS, numerical limits on process variable deviations have been used to assess the sufficiency of a given fault to cause any linked consequence. In a number of case studies, use of the FMS in AutoHAZID has improved the focus of the automatically generated HAZOP results.
This thesis describes qualitative model-based methods for identifying process hazards by computer, in particular AutoHAZID. It identifies a range of problems where the purely qualitative approach is inadequate and demonstrates how such problems can be tackled by selective use of quantitative information about the plant or the fluids in it. The conclusion is that quantitative knowledge is required to support the qualitative reasoning in hazard identification by computer.
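The FMS idea of pruning infeasible fault paths can be sketched as a graph search in which arcs optionally carry feasibility conditions. The toy plant graph and the boiling-point condition below are illustrative assumptions, not taken from the thesis:

```python
# Sketch of SDG search with quantitative arc conditions: arcs may carry a
# predicate over plant/fluid properties, and a fault-to-consequence path is
# reported only if every condition along it holds in the given context.
def find_feasible_paths(arcs, fault, consequence, context):
    """Depth-first search returning fault-to-consequence paths whose arc
    conditions all evaluate true in the given context."""
    paths = []

    def dfs(node, path):
        if node == consequence:
            paths.append(path)
            return
        for src, dst, condition in arcs:
            if src == node and dst not in path:
                if condition is None or condition(context):
                    dfs(dst, path + [dst])

    dfs(fault, [fault])
    return paths

ARCS = [
    ("pump_fails",  "no_flow",      None),
    ("no_flow",     "overheating",  None),
    # Overpressure is only credible if the fluid can boil at plant conditions.
    ("overheating", "overpressure", lambda ctx: ctx["fluid_bp_C"] < ctx["max_temp_C"]),
]

volatile = {"fluid_bp_C": 80, "max_temp_C": 150}
inert    = {"fluid_bp_C": 300, "max_temp_C": 150}
print(find_feasible_paths(ARCS, "pump_fails", "overpressure", volatile))  # one path
print(find_feasible_paths(ARCS, "pump_fails", "overpressure", inert))     # []
```

The same fault propagation graph thus yields the overpressure scenario for a volatile fluid but suppresses it for an inert one, which is exactly the kind of fluid-dependent filtering the FMS adds to AutoHAZID.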