
    Two Years of Economic Reforms in Russia. Main Results

    The goal of this article is to present the general results of the Russian economic reforms in the period from November 1991 to November 1993, with particular emphasis on macroeconomic policy and the failed attempts at fiscal stabilization. Such a profile of the article results from the personal interests of the author, the specific conditions in which Russia found itself after the collapse of the USSR (the existence of a rouble zone with a series of independent central banks), as well as from the importance of macroeconomic balance (often underappreciated) in the success of the transformation from a planned to a market economy. Furthermore, macroeconomic policy is precisely the element of the chosen transformation strategy that most differentiates Russia from the countries of the so-called Visegrad Group. The article also leaves aside a deeper analysis of the political situation, even though it has had a significant influence on the course of the reform process. The article focuses mainly on the second year of the Russian transformation. This makes it possible to present relatively current economic results and to formulate more advanced conclusions. Furthermore, the initial program assumptions and the course of events in the first year of transformation (1992) have already been analyzed in a number of studies [Aslund and Layard, 1993; Blanchard et al., 1993; Dąbrowski et al., 1993]. It must, however, be clearly emphasized that it is still too early to formulate definitive evaluations of what occurred in Russia in the years 1991-1993. This will require, as the Polish example illustrates, a somewhat longer time span. An additional complication is the low quality of statistical data on Russia. This concerns not only GDP and inflation statistics, which, in light of the experiences of other post-communist countries, seems obvious, but also monetary and fiscal statistics.
Due to this, the conclusions presented in this article should be treated as preliminary, and may be revised as new events unfold and more accurate statistical data become available. The content of the article is organized as follows: Point 2 presents a concise characterization of the starting point of the reforms at the end of 1991. Point 3 describes the process of liberalizing domestic prices and the market. Point 4 deals with the meanders of liberalization policy in foreign trade. Point 5 presents a synthetic picture of privatization policy. Point 6 describes macroeconomic policy in 1992, while Point 7 covers the stabilization efforts of 1993. Point 8 gives a short history of the collapse of the rouble zone. Point 9 contains conclusions and an attempt to forecast future events. Keywords: transition, Russia, liberalization, deregulation, privatization, rouble zone

    Single event upset hardened embedded domain specific reconfigurable architecture


    Large-displacement Lightweight Armor

    Randomly entangled fibers forming loosely bound nonwoven structures are evaluated for use in lightweight armor applications. These materials sacrifice volumetric efficiency in order to realize a reduction in mass versus traditional armor materials, while maintaining equivalent ballistic performance. The primary material characterized, polyester fiberfill, is shown to have improved ballistic performance over control samples of monolithic polyester as well as 1095 steel sheets. The response of fiberfill is investigated at a variety of strain rates, from quasistatic to ballistic, under compression, tension, and shear deformation to elucidate the mechanisms at work during ballistic defeat. Fiberfill’s primary mechanisms during loading are fiber reorientation, fiber unfurling, and frictional sliding. Frictional sliding, coupled with high macroscopic strain to failure, is thought to be the source of the high specific ballistic performance of fiberfill materials. The proposed armor is tested for penetration resistance against spherical and cylindrical 7.62 mm projectiles fired from a gas gun. A constitutive model incorporating the relevant deformation mechanisms of texture evolution and progressive damage is developed and implemented in Abaqus/Explicit in order to expedite further research on ballistic nonwoven fabrics.

    Fault Tolerant Electronic System Design

    Due to technology scaling, which means reduced transistor size, higher density, lower voltage, and more aggressive clock frequencies, VLSI devices are becoming more sensitive to soft errors. Especially for devices used in safety- and mission-critical applications, dependability and reliability are becoming increasingly important constraints during the development of systems based on them. Other phenomena (e.g., aging and wear-out effects) also have negative impacts on the reliability of modern circuits. Recent research shows that even at sea level, radiation particles can still induce soft errors in electronic systems. On one hand, processor-based systems are commonly used in a wide variety of applications, including safety-critical and high-availability missions, e.g., in the automotive, biomedical, and aerospace domains. In these fields, an error may produce catastrophic consequences. Thus, dependability is a primary target that must be achieved while taking into account tight constraints in terms of cost, performance, power, and time to market. Since standards and regulations (e.g., ISO 26262, DO-254, IEC 61508) clearly specify the targets to be achieved and the methods to prove their achievement, techniques working at the system level are particularly attractive. On the other hand, Field Programmable Gate Array (FPGA) devices are becoming more and more attractive, also in safety- and mission-critical applications, due to the high performance, low power consumption, and flexibility for reconfiguration they provide. Two types of FPGAs are commonly used, distinguished by their configuration memory cell technology, i.e., SRAM-based and Flash-based FPGAs.
For SRAM-based FPGAs, the SRAM cells of the configuration memory are highly susceptible to radiation-induced effects, which can lead to system failure; and for Flash-based FPGAs, even though their non-volatile configuration memory cells are almost immune to Single Event Upsets induced by energetic particles, the floating gate switches and the logic cells in the configuration tiles can still suffer from Single Event Effects when hit by a highly charged particle. Thus, analysis and mitigation techniques for Single Event Effects on FPGAs are becoming increasingly important in the design flow, especially when reliability is one of the main requirements.

    The 1999 Center for Simulation of Dynamic Response in Materials Annual Technical Report

    Introduction: This annual report describes research accomplishments for FY 99 of the Center for Simulation of Dynamic Response of Materials. The Center is constructing a virtual shock physics facility in which the full three-dimensional response of a variety of target materials can be computed for a wide range of compressive, tensional, and shear loadings, including those produced by detonation of energetic materials. The goals are to facilitate computation of a variety of experiments in which strong shock and detonation waves are made to impinge on targets consisting of various combinations of materials, compute the subsequent dynamic response of the target materials, and validate these computations against experimental data.

    Robust design of deep-submicron digital circuits

    With the increasing probability of faults in digital circuits, systems developed for critical environments such as nuclear power plants, aircraft, and space applications must be certified according to industrial standards. This thesis is the result of a CIFRE cooperation between Électricité de France (EDF) R&D and Télécom ParisTech. EDF is one of the world's largest electricity producers and operates numerous nuclear power plants. The instrumentation and control systems used in these plants are based on electronic devices, which must be certified according to industrial standards such as IEC 62566, IEC 60987, and IEC 61513 because of the criticality of the nuclear environment. In particular, the use of programmable devices such as FPGAs can be considered a challenge, since the functionality of the device is defined by the designer only after its physical design. The work presented in this dissertation concerns the design of new methods for analyzing, as well as improving, the reliability of digital circuits. The design of circuits to operate in critical environments, such as those used in control-command systems at nuclear power plants, is becoming a great challenge with technology scaling. These circuits have to pass through a number of tests and analysis procedures in order to be qualified to operate. In the case of nuclear power plants, safety is considered a very high-priority constraint, and circuits designed to operate in such critical environments must be in accordance with several technical standards such as IEC 62566, IEC 60987, and IEC 61513. In such standards, reliability is treated as a main consideration, and methods to analyze and improve circuit reliability are highly required.
The present dissertation introduces some methods to analyze and to improve the reliability of circuits in order to facilitate their qualification according to the aforementioned technical standards. Concerning reliability analysis, we first present a fault-injection-based tool used to assess the reliability of digital circuits. Next, we introduce a method to evaluate the reliability of circuits taking into account the ability of a given application to tolerate errors. Concerning reliability improvement techniques, two different strategies to selectively harden a circuit are first proposed. Finally, a method to automatically partition a TMR design based on a given reliability requirement is introduced.

    Resilience of an embedded architecture using hardware redundancy

    In the last decade, the dominance of the general computing systems market has been overtaken by embedded systems, with billions of units manufactured every year. Embedded systems appear in contexts where continuous operation is of utmost importance and where the consequences of failure can be profound. Nowadays, radiation poses a serious threat to the reliable operation of safety-critical systems. Fault avoidance techniques, such as radiation hardening, have been commonly used in space applications. However, radiation-hardened components are expensive, lag behind commercial components in performance, and do not provide 100% fault elimination. Without fault-tolerant mechanisms, many of these faults can become errors at the application or system level, which in turn can result in catastrophic failures. In this work we study the concepts of fault tolerance and dependability and extend these concepts, providing our own definition of resilience. We analyse the physics of radiation-induced faults, the damage mechanisms of particles, and the process that leads to computing failures. We provide extensive taxonomies of 1) existing fault-tolerant techniques and of 2) the effects of radiation in state-of-the-art electronics, analysing and comparing their characteristics. We propose a detailed model of faults and provide a classification of the different types of faults at various levels. We introduce an algorithm of fault tolerance and define the system states and actions necessary to implement it. We introduce novel hardware and system software techniques that provide a more efficient combination of reliability, performance and power consumption than existing techniques. We propose a new element of the system, called the syndrome, that is the core of a resilient architecture whose software and hardware can adapt to reliable and unreliable environments. We implement a software simulator and disassembler and introduce a testing framework in combination with ERA’s assembler and commercial hardware simulators.