145 research outputs found

    Memory-Based Shallow Parsing

    We present memory-based learning approaches to shallow parsing and apply them to five tasks: base noun phrase identification, arbitrary base phrase recognition, clause detection, noun phrase parsing, and full parsing. We use feature selection techniques and system combination methods to improve the performance of the memory-based learner. Our approach is evaluated on standard data sets and the results are compared with those of other systems. This comparison shows that our approach works well for base phrase identification, while its application to recognizing embedded structures leaves room for improvement.
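    A minimal Python sketch of the memory-based (instance-based) idea described above: every training instance is stored verbatim and a 1-nearest-neighbour lookup with an overlap metric assigns IOB chunk tags. The feature window and toy data are illustrative assumptions, not the paper's actual feature set or corpus.

```python
# Memory-based chunking sketch: store all (feature window, chunk tag) pairs and
# classify new tokens with 1-NN under a simple overlap metric.

def features(words, tags, i):
    """Window of the current POS tag plus its left/right neighbours."""
    return (
        tags[i - 1] if i > 0 else "<S>",
        tags[i],
        tags[i + 1] if i < len(tags) - 1 else "</S>",
    )

def train(sentences):
    """Store every training instance verbatim: the 'memory' in memory-based learning."""
    memory = []
    for words, tags, chunks in sentences:
        for i in range(len(words)):
            memory.append((features(words, tags, i), chunks[i]))
    return memory

def classify(memory, feat):
    """1-NN with the overlap metric: the most similar stored instance wins."""
    def overlap(a, b):
        return sum(x == y for x, y in zip(a, b))
    return max(memory, key=lambda m: overlap(m[0], feat))[1]

# Toy training data: (words, POS tags, IOB chunk tags)
train_data = [
    (["The", "cat", "sleeps"], ["DT", "NN", "VBZ"], ["B-NP", "I-NP", "O"]),
    (["A", "dog", "barks"],    ["DT", "NN", "VBZ"], ["B-NP", "I-NP", "O"]),
]
memory = train(train_data)

test_words, test_tags = ["The", "dog", "runs"], ["DT", "NN", "VBZ"]
pred = [classify(memory, features(test_words, test_tags, i)) for i in range(3)]
print(list(zip(test_words, pred)))   # [('The', 'B-NP'), ('dog', 'I-NP'), ('runs', 'O')]
```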

    Signal Correction Scheme for Combinational Automation Devices Based on Boolean Complement with Parity Checking of Calculations

    A system structure for error correction in calculations is proposed that is simpler than the known structures based on duplication and triplication of blocks with majority voting on signal values. The new fault-tolerant structure is well suited to automation devices with combinational logic. In synthesizing the fault-tolerant structure, the parity method is used to establish that a fault has occurred in the monitored logic unit, and the Boolean complement method is used to determine which output functions were calculated incorrectly and to generate the signals for their correction, so that the values of incorrectly calculated functions can be restored. A structural diagram and a description of the error correction system are given. A synthesis algorithm for the checking circuitry is described that minimizes the complexity of its technical implementation. Experimental results with benchmark combinational circuits confirm the high efficiency of the proposed system structure with error correction.
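    A hedged Python sketch of the detection/correction principle described above: a parity predictor flags that some output of the combinational block is wrong, and a complement block supplies correction bits that are XOR-ed onto the outputs. The example function, the injected stuck-at fault, and the trivial complement block are assumptions for illustration only; in the paper the complement logic is synthesized from the circuit itself.

```python
# Parity detection + Boolean-complement correction on a toy 2-output block.

def golden(a, b, c):
    """Reference combinational function (fault-free specification)."""
    return (a & b, b ^ c)                      # two output bits

def faulty(a, b, c):
    """The same block with a stuck-at-1 fault injected on output 0."""
    _, f1 = golden(a, b, c)
    return (1, f1)                             # output 0 stuck at 1

def parity(bits):
    p = 0
    for bit in bits:
        p ^= bit
    return p

def corrected(a, b, c):
    outputs = list(faulty(a, b, c))
    predicted = parity(golden(a, b, c))        # parity predictor (assumed fault-free)
    if parity(outputs) != predicted:           # parity mismatch -> a fault occurred
        # Complement block: in this toy it simply flips the suspect output 0;
        # in the real method this logic is synthesized for the circuit at hand.
        outputs[0] ^= 1
    return tuple(outputs)

for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            assert corrected(a, b, c) == golden(a, b, c)
print("all input patterns corrected")
```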

    Design and Implementation of Fault Tolerant Adders on Field Programmable Gate Arrays

    Fault tolerance on various adder architectures implemented on Field Programmable Gate Arrays (FPGAs) is studied in this thesis. This involves developing error detection and correction techniques for the sparse Kogge-Stone adder and comparing them with Triple Modular Redundancy (TMR) techniques. Fault tolerance is implemented on a Kogge-Stone adder by taking advantage of the inherent redundancy in the carry tree. On a sparse Kogge-Stone adder, fault tolerance is realized by introducing additional ripple carry adders into the design. The implementation of this fault tolerance approach on the sparse Kogge-Stone adder is successfully completed and verified by introducing faults either on the ripple carry adder or in the carry tree. Two types of Xilinx FPGAs were used in this study: the Spartan 3E and the Virtex 5. The fault tolerant adders were analyzed in terms of their delay and resource utilization as a function of adder width. The results of this research provide important design guidelines for the implementation of fault tolerant adders on FPGAs. The Triple Modular Redundancy Ripple Carry Adder (TMR-RCA) is the most resource-efficient approach for fault tolerant design on an FPGA due to its simplicity and its ability to take advantage of the fast-carry chain. However, for very large bit widths, there are indications that the sparse Kogge-Stone adder offers superior performance over an RCA when implemented on an FPGA. Two fault tolerant approaches were implemented using a sparse Kogge-Stone architecture. First, a fault tolerant sparse Kogge-Stone adder is designed by taking advantage of the existing ripple carry adders in the architecture and adopting an approach similar to the TMR-RCA, inserting two additional ripple carry adders into the design. Second, a graceful degradation approach is implemented with the sparse Kogge-Stone adder. In this approach, a faulty block is permanently replaced with a spare block. As the spare block is initially used for fault checking, the fault tolerant capability of the circuit is degraded in order to continue fault-free operation. Compared with the fault tolerant Kogge-Stone adder, the adder delay of the graceful degradation approach is smaller by approximately 1 ns in measured results and 2 ns in synthesis results, independent of the bit width. However, the resource utilization is similar for both adders.
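    As an illustration of the TMR-RCA idea discussed above, the following Python sketch models three ripple-carry adder copies whose outputs pass through a bitwise majority voter, masking a fault in any single copy. The behavioural model and the single-bit fault injection are assumptions made for the demo, not the thesis's FPGA implementation.

```python
# TMR over ripple-carry adders: three copies plus a bitwise majority voter.

def ripple_carry_add(a, b, width=8):
    """Bit-serial ripple-carry addition, returned as a list of sum bits (LSB first)."""
    carry, bits = 0, []
    for i in range(width):
        x, y = (a >> i) & 1, (b >> i) & 1
        bits.append(x ^ y ^ carry)
        carry = (x & y) | (carry & (x ^ y))
    return bits

def majority(x, y, z):
    return (x & y) | (y & z) | (x & z)

def tmr_add(a, b, width=8, fault_in_copy=None, fault_bit=0):
    copies = [ripple_carry_add(a, b, width) for _ in range(3)]
    if fault_in_copy is not None:                  # inject a single-bit fault in one copy
        copies[fault_in_copy][fault_bit] ^= 1
    voted = [majority(c0, c1, c2) for c0, c1, c2 in zip(*copies)]
    return sum(bit << i for i, bit in enumerate(voted))

assert tmr_add(100, 27, fault_in_copy=1, fault_bit=3) == (100 + 27) & 0xFF
print("single faulty adder copy masked by the voter")
```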

    Error correction circuit structures based on Boolean complement with calculation checking by a code with summation of weighted transitions from bit to bit

    New structures of fault-tolerant digital computing devices and systems are proposed, based on the use of the Boolean complement principle for identifying corrupted signals together with concurrent checking circuits. The latter are implemented using a code with summation of weighted transitions from bit to bit in the information vector, whose weight coefficients form a sequence of increasing powers of two. This summation code detects any combination of corruptions at the outputs of the object under diagnosis except the simultaneous corruption of all outputs, which rarely occurs in practice. Four structures are described: a structure with dual modular redundancy in which the calculations of the main block are checked by the chosen code; a structure with dual modular redundancy in which the calculations of the standby block are checked by the chosen code; a structure in which the calculations of the main block are checked by the chosen code and corrupted signals are identified by a Boolean-complement unit; and a structure with a Boolean-complement unit that identifies corrupted signals and directly checks the calculations itself. Examples of synthesizing fault-tolerant devices are given, and their efficiency is assessed against the traditional structure of fault-tolerant devices and systems based on triple modular redundancy with majority correction of signals. Results of experiments with the LG’93 and MCNC benchmark combinational circuits are reported, also demonstrating the efficiency of the proposed fault-tolerant structures. Using the Boolean complement principle allows the synthesis of fault-tolerant digital devices and systems that require neither direct redundancy nor the introduction of modular redundancy, which in practice can substantially reduce the structural redundancy of the final object.
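    A hedged Python sketch of the checking code named in the title: the check symbol is taken here as the sum of transitions between adjacent bits of the information vector, each transition weighted by an increasing power of two. The exact transition and weight conventions are assumptions for illustration; the demo also shows the one undetectable case mentioned in the abstract, simultaneous inversion of all bits.

```python
# Weighted-transition summation check over an information vector.

def weighted_transition_checksum(bits):
    """Sum 2**i for every position i where adjacent bits differ."""
    return sum((bits[i] ^ bits[i + 1]) << i for i in range(len(bits) - 1))

def detects(bits, flipped_positions):
    """True if flipping the given bits changes the check symbol (error detected)."""
    corrupted = bits[:]
    for p in flipped_positions:
        corrupted[p] ^= 1
    return weighted_transition_checksum(corrupted) != weighted_transition_checksum(bits)

vector = [1, 0, 1, 1, 0, 0, 1, 0]
print(weighted_transition_checksum(vector))             # 107 for this vector
print(detects(vector, [2]))                              # True: single-bit error detected
print(detects(vector, [0, 3, 5]))                        # True: this multi-bit error is detected too
print(detects(vector, list(range(len(vector)))))         # False: inverting all bits at once escapes the check
```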

    Single event upset hardened embedded domain specific reconfigurable architecture


    A GHz-range, High-resolution Multi-modulus Prescaler for Extreme Environment Applications

    The generation of a precise, low-noise, reliable clock source is critical to developing mixed-signal and digital electronic systems. The applications of such a clock source are greatly expanded if it can be configured to output different clock frequencies. The phase-locked loop (PLL) is a well-documented architecture for realizing this configurable clock source. Central to the configurability of a PLL is a multi-modulus divider. The resolution of this divider (or prescaler) dictates the resolution of the configurable PLL output frequency. In integrated PLL designs, such a multi-modulus prescaler is usually sourced from a GHz-range voltage-controlled oscillator. Therefore, a fully-integrated PLL ASIC requires the development of a high-speed, high-resolution multi-modulus prescaler. The design challenges associated with developing such a prescaler are compounded when the application requires the device to operate in an extreme environment. In these extreme environments (often extra-terrestrial), wide temperature ranges and radiation effects can adversely affect the operation of electronic systems. Even more problematic, extreme temperatures and ionizing radiation can cause permanent damage to electronic devices. Typical commercial-off-the-shelf (COTS) components are not able to withstand such an environment, and any electronics operating in these extreme conditions must be designed to accommodate such operation. This dissertation describes the development of a high-speed, high-resolution, multi-modulus prescaler capable of operating in an extreme environment. The prescaler has been developed using current-mode logic (CML) on a 180-nm silicon-germanium (SiGe) BiCMOS process. It is capable of operating up to at least 5.4 GHz over a division range of 16-48 with a total of 27 configurable moduli. The prescaler is designed to provide excellent ionizing radiation hardness, single-event latch-up (SEL) immunity, and single-event upset (SEU) resistance over a temperature range of −180°C to 125°C.
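    A purely behavioural Python sketch of what a multi-modulus divider does: one output tick is produced every N input clock edges, with N programmable. It models only the divide-by-N function, not the CML circuitry, the 16-48 modulus encoding, or any of the radiation-hardening measures of the dissertation.

```python
# Behavioural divide-by-N model for a programmable prescaler.

def divide(clock_edges, modulus):
    """Yield the index of every input edge that produces an output tick."""
    count = 0
    for edge in range(clock_edges):
        count += 1
        if count == modulus:
            count = 0
            yield edge

input_edges = 96
for n in (16, 24, 48):
    ticks = list(divide(input_edges, n))
    # e.g. modulus 16 -> 6 ticks, i.e. output frequency = input frequency / 16
    print(f"modulus {n}: {len(ticks)} output ticks from {input_edges} input edges")
```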

    Power-Aware Resilience for Exascale Computing

    To enable future scientific breakthroughs and discoveries, the next generation of scientific applications will require exascale computing performance to support the execution of predictive models and the analysis of massive quantities of data, with significantly higher resolution and fidelity than is possible within existing computing infrastructure. Delivering exascale performance will require massive parallelism, which could result in a computing system with over a million sockets, each supporting many cores, yielding a system with millions of components, including memory modules, communication networks, and storage devices. This increase in the number of components significantly increases the propensity of exascale computing systems to faults, while driving power consumption and operating costs to unforeseen heights. To achieve exascale performance, two challenges must be addressed: resilience to failures and adherence to power budget constraints. These two objectives conflict insofar as performance is concerned, since achieving high performance may push system components past their thermal limits and increase the likelihood of failure. On current systems, the dominant resilience technique is checkpoint/restart. It is believed, however, that this technique alone will not scale to the level necessary to support future systems. Therefore, alternative methods have been suggested to augment checkpoint/restart, for example process replication. In this thesis, we present a new fault tolerance model called shadow replication that addresses resilience and power simultaneously. Shadow replication associates a shadow process with each main process, similar to traditional replication; however, the shadow process executes at a reduced speed. Shadow replication reduces energy consumption and produces solutions faster than checkpoint/restart and other replication methods in power-limited environments, reducing energy consumption by up to 25%, depending upon the application type, system parameters, and failure rates. The major contribution of this thesis is the development of shadow replication, a power-aware fault tolerant computational model. The second contribution is an execution model applying shadow replication to future high performance exascale-class systems. The third is a framework to analyze and simulate the power and energy consumption of fault tolerance methods in high performance computing systems. Lastly, to demonstrate the viability of shadow replication, an implementation is presented for the Message Passing Interface.
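    A small Python model of the shadow replication idea, under assumed speeds and a single-failure scenario (not the thesis's parameters): the shadow runs at reduced speed alongside the main process and accelerates to full speed if the main process fails.

```python
# Toy timing model for shadow replication with one main/shadow pair.

def completion_time(work, main_speed=1.0, shadow_speed=0.5, fail_time=None):
    """Time until the result is available, with an optional main-process failure."""
    if fail_time is None or fail_time >= work / main_speed:
        return work / main_speed                 # main finishes normally
    done_by_shadow = shadow_speed * fail_time    # progress the slow shadow already made
    remaining = work - done_by_shadow
    return fail_time + remaining / main_speed    # shadow accelerates to full speed

work = 100.0
print(completion_time(work))                     # no failure: 100.0
print(completion_time(work, fail_time=40.0))     # failure at t=40: 40 + (100 - 20)/1.0 = 120.0
```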

    New Fault Detection, Mitigation and Injection Strategies for Current and Forthcoming Challenges of HW Embedded Designs

    Thesis by compendium. The relevance of electronics to the safety of everyday devices has only been growing, as an ever larger share of their functionality is assigned to electronic components. This, of course, comes along with a constant need for higher performance to fulfill such functionality requirements while keeping power consumption and cost low. In this scenario, industry is struggling to provide technology that meets all the performance, power, and price specifications, at the cost of increased vulnerability to several types of known faults and the appearance of new ones. To cope with the new and growing faults in these systems, designers have been using traditional techniques from safety-critical applications, which generally offer suboptimal results. In fact, modern embedded architectures offer the possibility of optimizing dependability properties by enabling the interaction of the hardware, firmware, and software levels in the process; however, this potential has not yet been fully realized. Advances at every level in that direction are needed if flexible, robust, resilient, and cost-effective fault tolerance is to be achieved. The work presented here focuses on the hardware level, with a potential integration into a holistic approach as a background consideration. The efforts in this thesis have focused on several issues: (i) to introduce the additional fault models required to adequately represent the physical effects emerging in modern manufacturing technologies, (ii) to provide tools and methods to efficiently inject both the proposed fault models and classical ones, (iii) to analyze the optimum method for assessing the robustness of systems through extensive fault injection and subsequent correlation with higher-level layers, in an effort to cut development time and cost, (iv) to provide new detection methodologies that cope with the challenges captured by the proposed fault models, (v) to propose mitigation strategies focused on tackling such new threat scenarios, and (vi) to devise an automated methodology for deploying many fault tolerance mechanisms in a systematic, robust way. The outcomes of the thesis constitute a suite of tools and methods to help the designer of critical systems develop robust, validated, on-time designs tailored to the application. Espinosa García, J. (2016). New Fault Detection, Mitigation and Injection Strategies for Current and Forthcoming Challenges of HW Embedded Designs [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/73146
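    A minimal Python sketch of simulation-based fault injection, one of the activities described above: a toy gate-level netlist is evaluated fault-free and then with single stuck-at faults forced on internal nets, and the outputs are compared to see whether each fault propagates or is masked. The netlist, stimuli, and fault model are illustrative assumptions, not the thesis's tools.

```python
# Stuck-at fault injection on a toy gate-level netlist.

NETLIST = [            # (output net, gate type, input nets)
    ("n1", "AND", ("a", "b")),
    ("n2", "OR",  ("n1", "c")),
    ("y",  "XOR", ("n2", "a")),
]
GATES = {"AND": lambda x, y: x & y, "OR": lambda x, y: x | y, "XOR": lambda x, y: x ^ y}

def evaluate(inputs, fault=None):
    """Evaluate the netlist; `fault` is an optional (net, stuck_value) pair."""
    nets = dict(inputs)
    for out, gate, ins in NETLIST:
        nets[out] = GATES[gate](*(nets[i] for i in ins))
        if fault and out == fault[0]:
            nets[out] = fault[1]              # force the stuck-at value on this net
    return nets["y"]

stimuli = {"a": 1, "b": 0, "c": 1}
golden = evaluate(stimuli)
for net in ("n1", "n2"):
    for stuck in (0, 1):
        observed = evaluate(stimuli, fault=(net, stuck))
        effect = "propagates to y" if observed != golden else "masked"
        print(f"stuck-at-{stuck} on {net}: {effect}")
```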

    Hierarchical Strategies for Fault-Tolerance in Reconfigurable Architectures

    This thesis presents a novel hierarchical fault-tolerance methodology for fault recovery in reconfigurable devices. As the semiconductor industry moves to producing ever smaller transistors, the number of faults occurring increases. At current technology nodes, unavoidable variations in production cause transistor devices to perform outside of ideal ranges. This variability manifests as faults at higher levels and has a knock-on effect on yields. In some ways, fault tolerance has never been more important. To better explore the area of variability, a novel reconfigurable architecture was designed: the Programmable Analogue and Digital Array (PAnDA). By allowing reconfiguration from the transistor level to the logic block level, PAnDA enables design space exploration in hardware that was previously only available through simulation. The main advantage of this is that design modifications can be tested almost instantaneously, as opposed to running time-consuming transistor-level simulations. As a result of this design, each level of PAnDA’s configuration contains structural homogeneity, allowing multiple implementations of the same circuit on the same hardware. This potentially creates opportunities for fault tolerance through reconfiguration, and so experimental work is performed to discover how best to utilise these properties of PAnDA. The findings show that it is possible to optimise the reconfiguration in the event of a fault, even if the nature and location of the fault are unknown.
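    A toy Python sketch of fault recovery through reconfiguration, the property explored above: the same logical circuit has several functionally equivalent mappings onto physical blocks, and when a block fails the controller selects a mapping that avoids it. The resource model and configurations are illustrative assumptions, not PAnDA’s actual fabric.

```python
# Reconfiguration-based fault recovery: pick a mapping that avoids faulty blocks.

CONFIGURATIONS = [               # each configuration uses a different set of physical blocks
    {"blocks": {0, 1, 2}},
    {"blocks": {1, 3, 4}},
    {"blocks": {2, 4, 5}},
]

def recover(faulty_blocks):
    """Return the index of the first configuration whose resources avoid all faulty blocks."""
    for index, cfg in enumerate(CONFIGURATIONS):
        if not cfg["blocks"] & faulty_blocks:
            return index
    return None                  # no fault-free mapping left

print(recover({1}))      # 2: configuration 2 avoids block 1
print(recover({0, 4}))   # None: every mapping touches a faulty block
```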