
    Architectural Solutions for NanoMagnet Logic

    The successful era of CMOS technology is coming to an end. The limit on minimum fabrication dimensions of transistors and the increasing leakage power hinder the technological scaling that has characterized the last decades. This problem has been addressed in several ways by changing the architectures implemented in CMOS, adopting parallel processors and thus increasing throughput at the same operating frequency. However, architectural alternatives cannot be the definitive answer to the continuous increase in performance dictated by Moore’s law; the problem must also be addressed from a technological point of view. Several alternative technologies that could substitute CMOS in the coming years are currently under study. Among them, magnetic technologies such as NanoMagnet Logic (NML) are interesting because they dissipate no leakage power. Moreover, magnets have memory capability, so it is possible to merge logic and memory in the same device. However, magnetic circuits, and NML in particular in this research, also have some important drawbacks that need to be addressed: first, the circuit clock frequency is limited to 100 MHz to avoid errors in data propagation; second, there is a coupling between circuit layout and timing, so longer wires have longer latency. These drawbacks are intrinsic to the technology and therefore cannot be avoided; the only option is to limit their impact from an architectural point of view. The first step followed in the research path of this thesis is indeed the choice and optimization of architectures able to deal with the problems of NML.
Systolic Arrays are identified as an ideal solution for this technology because they are regular structures with local interconnections that limit the long latency of wires; moreover, they are composed of several Processing Elements that work in parallel, thus exploiting parallelization to increase throughput (limiting the impact of the low clock frequency). Through the analysis of Systolic Arrays for NML, several possible improvements have been identified and addressed: 1) a rigorous method to increase throughput with interleaving has been defined, providing equations to estimate the number of operations to be interleaved and the rules for supplying inputs; 2) a latency-insensitive circuit has been designed that exploits a data communication protocol between processing elements to avoid data synchronization problems. This feature has been exploited to design a latency-insensitive Systolic Array able to execute the Floyd-Steinberg dithering algorithm. All the improvements presented in this framework apply to Systolic Arrays implemented in any technology, so they can also be exploited to increase the performance of today’s CMOS parallel circuits. This research path is presented in Chapter 3. While Systolic Arrays are an interesting solution for NML, their usage can be quite limited because they are normally application-specific. The second research path addresses this problem: a Reconfigurable Systolic Array is presented that can be programmed to execute several algorithms. This architecture has been tested by implementing many algorithms, including FIR and IIR filters, the Discrete Cosine Transform, and Matrix Multiplication. This research path is presented in Chapter 4. In common Von Neumann architectures, the logic and memory parts of the circuit are separated; today, bus communication between logic and memory represents the bottleneck of the system.
This problem is addressed by presenting Logic-In-Memory (LIM), an architecture where memory elements are merged into logic ones. This research path aims at defining a real LIM architecture, which has been done in two steps. The first step is an architecture composed of three layers: memory, routing, and logic. In the second step, the routing plane is removed and its features are inherited by the memory plane. In this solution, a pyramidal memory model is used, where memories near logic elements contain the most frequently used data, while other memory layers contain the remaining data and the instruction set. This circuit has been tested with odd-even sort algorithms and benchmarked against GPUs and ASICs. This research path is presented in Chapter 5. MagnetoElastic NML (ME-NML) is a technological improvement of the NML principle, proposed by researchers at Politecnico di Torino, where the clock system is based on the stretch induced in a piezoelectric substrate when a voltage is applied to its boundaries. The main advantage of this solution is that it consumes much less power than the classic clock implementation. This technology had not yet been investigated from an architectural point of view or with complex circuits. In this research field, a standard methodology for the design of ME-NML circuits has been proposed, based on a Standard Cell Library and an enhanced VHDL model. The effectiveness of this methodology has been proved by designing a Galois Field Multiplier. Moreover, the serial-parallel trade-off in ME-NML has been investigated by designing three different solutions for the Multiply and Accumulate structure. This research path is presented in Chapter 6. While ME-NML is an extremely interesting technology, it needs to be combined with other, faster technologies to obtain a truly competitive system. Signal interfaces between NML and other technologies (mainly CMOS) have rarely been presented in the literature.
A mixed-technology multiplexer is designed and presented as the basis for a CMOS-to-NML interface. The reverse interface (from ME-NML to CMOS) is instead based on a sensing circuit exploiting the Faraday effect: a change in the polarization of a magnet induces an electric field that can be used to generate an input signal for a CMOS circuit. This research path is presented in Chapter 7. The research work presented in this thesis represents a fundamental milestone on the path towards nanotechnologies. The most important achievement is the design and simulation of complex circuits with NML, benchmarking this technology with real application examples. The characterization of a technology through complex functions is a major step that had not yet been addressed in the literature for NML. Indeed, only in this way is it possible to detect in advance any weakness of NanoMagnet Logic that cannot be discovered by considering only small circuits. Moreover, the architectural improvements introduced in this thesis, although technology-driven, can actually be applied to any technology; we have demonstrated the advantages that derive from applying them to CMOS circuits. This thesis therefore represents a major step in two directions: the first is the enhancement of NML technology; the second is a general improvement of parallel architectures and the development of the new Logic-In-Memory paradigm.
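The Floyd-Steinberg dithering algorithm executed by the latency-insensitive Systolic Array is a standard error-diffusion scheme. As a point of reference, a minimal sequential sketch in plain Python is shown below; the array/NML mapping from the thesis is not reproduced here, and the function name and list-of-lists image representation are illustrative choices.

```python
# Minimal sketch of Floyd-Steinberg error-diffusion dithering, the
# algorithm the latency-insensitive Systolic Array executes. This is the
# plain sequential formulation, not the thesis's NML implementation.

def floyd_steinberg(image, rows, cols):
    """Dither a grayscale image (list of lists, values 0-255) to 0/255."""
    img = [row[:] for row in image]  # work on a copy
    for y in range(rows):
        for x in range(cols):
            old = img[y][x]
            new = 255 if old >= 128 else 0
            img[y][x] = new
            err = old - new
            # Standard diffusion weights: 7/16 right, 3/16 down-left,
            # 5/16 down, 1/16 down-right.
            if x + 1 < cols:
                img[y][x + 1] += err * 7 / 16
            if y + 1 < rows:
                if x > 0:
                    img[y + 1][x - 1] += err * 3 / 16
                img[y + 1][x] += err * 5 / 16
                if x + 1 < cols:
                    img[y + 1][x + 1] += err * 1 / 16
    return img
```

The raster-order dependency (each pixel consumes error from its left and upper neighbors) is what makes the algorithm a natural stress test for data synchronization between processing elements.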

    NASA Tech Briefs, October 2001

    Topics include: a special coverage section on composites and plastics, electronic components and systems, software, mechanics, physical sciences, information sciences, books and reports, and special sections of Photonics Tech Briefs and Motion Control Tech Briefs.

    Multi-Agent System Based Distributed Voltage Control in Distribution Systems

    The distribution system is one of the most complex entities of the electric power grid, and sustaining voltage quality up to customer premises, with the introduction of Distributed Generation (DG), is one of the most challenging control problems. Previously, SCADA in cohesion with Wide Area Measurement Systems (WAMS) was a dependable control strategy, yet as the ever-growing and complex distribution system advances towards the Smart Grid, control strategies are becoming more and more distributed rather than centralized. A detailed literature review of voltage control methods, ranging from centralized to fully distributed agent-based control, is conducted. In the light of previous research, a distributed voltage control based on a Multi-Agent System is proposed, as agent-based control strategies are becoming better known day by day due to their autonomous control and decision-making capacity. To make the proposed algorithm fully distributed, token traversal through the network and agent communication are used to remove voltage violations with minimal communication and system measurements. Following instantaneous voltage control at the load nodes, a penalty function is employed to keep the voltage profile throughout the network as close as possible to the nominal value, with minimum network losses and minimum voltage deviation. The devised control algorithm is validated on a greenfield distribution network based on realistic loading data. The agents and the control logic are coded in MATLAB®. A sensitivity analysis based on DG penetration is performed to give a complete overview of the proposed methodology.
The principal objective of the technique is to keep the voltage within the standard limit of ±10% of nominal at all load nodes, while instantly utilizing voltage control entities such as DGs, Static VAR Compensators (SVCs), and On-Load Tap Changers (OLTCs). In addition, the minimization of network losses and of voltage deviation from nominal is accomplished by the penalty function implementation.
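The exact penalty function used in the control algorithm is not given here; one common formulation, sketched below under that assumption, combines a quadratic pull toward nominal voltage with a heavily weighted term for nodes outside the ±10% band. All names and weight values are illustrative.

```python
# Illustrative sketch (not the thesis's exact formulation) of a penalty
# function that keeps node voltages within ±10% of nominal while pulling
# the whole profile toward 1.0 p.u.

NOMINAL = 1.0              # per-unit nominal voltage
LIMIT = 0.10               # ±10% statutory band
VIOLATION_WEIGHT = 1000.0  # large weight so violations dominate the cost

def voltage_penalty(voltages_pu):
    """Sum of quadratic deviations from nominal, plus a heavy penalty
    for any node outside the ±10% band."""
    cost = 0.0
    for v in voltages_pu:
        cost += (v - NOMINAL) ** 2          # keep profile near nominal
        excess = abs(v - NOMINAL) - LIMIT   # distance outside the band
        if excess > 0:
            cost += VIOLATION_WEIGHT * excess ** 2
    return cost
```

With such a cost, the agents' control actions (DG setpoints, SVC output, OLTC taps) can be ranked by how much they reduce the total penalty, which simultaneously enforces the band and flattens the profile.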

    Low Power Memory/Memristor Devices and Systems

    This reprint focuses on achieving low-power computation using memristive devices. The topic was designed as a convenient reference point: it contains a mix of techniques, from the fundamental manufacturing of memristive devices all the way to applications such as physically unclonable functions, and also covers perspectives on, e.g., in-memory computing, which is inextricably linked with emerging memory devices such as memristors. Finally, the reprint contains a few articles representing how other communities (from typical CMOS design to photonics) are fighting on their own fronts in the quest towards low-power computation, as a comparison with the memristor literature. We hope that readers will enjoy discovering the articles within.

    Space construction system analysis. Part 2: Platform definition

    The top-level system requirements are summarized and the accompanying conceptual design for an engineering and technology verification platform (ETVP) system is presented. An encompassing statement of the system objectives that drive the system requirements is given, and the major mission and subsystem requirements are described, with emphasis on the advanced communications technology mission payload. The platform design is defined and used as a reference configuration for end-to-end space construction analyses. The preferred construction methods and processes, the important interactions between the platform design and the construction system design and operation, and the technology development efforts required to support the design and space construction of the ETVP are outlined.

    Navigation/traffic control satellite mission study. Volume 3 - Selected navigation/traffic control satellite system analysis and equipment definition. Final report

    L-band and VHF voice communication in satellite navigation and traffic control networks.

    Doctor of Philosophy

    Recent breakthroughs in silicon photonics technology are enabling the integration of optical devices into silicon-based semiconductor processes. Photonics technology enables high-speed, high-bandwidth, and high-fidelity communications on the chip scale, an important development in an increasingly communications-oriented semiconductor world. Significant developments in silicon photonic manufacturing and integration are also enabling investigations into applications beyond traditional telecom: sensing, filtering, signal processing, quantum technology, and even optical computing. In effect, we are now seeing a convergence of communications and computation, where the traditional roles of optics and microelectronics are becoming blurred. As applications for opto-electronic integrated circuits (OEICs) are developed and manufacturing capabilities expand, design support is necessary to fully exploit the potential of this optics technology. Such design support for moving beyond custom design to automated synthesis and optimization is not well developed. Scalability requires abstractions, which in turn enable and require the use of optimization algorithms and design methodology flows. Design automation represents an opportunity to take OEIC design to a larger scale, facilitating design-space exploration and laying the foundation for current and future optical applications, thus fully realizing the potential of this technology. This dissertation proposes design automation for integrated optic system design. Using a building-block model for optical devices, we provide an EDA-inspired design flow and methodologies for optical design automation. Underlying these flows and methodologies are new supporting techniques in behavioral and physical synthesis, as well as device-resynthesis techniques for thermal-aware system integration.
We also provide modeling for optical devices and determine the optimization and constraint parameters that guide the automation techniques. Our techniques and methodologies are then applied to the design and optimization of optical circuits and devices, and experimental results are analyzed to evaluate their efficacy. We conclude with discussions on the contributions and limitations of the approaches in the context of optical design automation, and describe the tremendous opportunities for future research in design automation for integrated optics.

    Challenges of Inductive Electric Vehicle Charging Systems in both Stationary and Dynamic Modes

    Inductive power transfer (IPT) is an emerging technology applicable over a wide power range, including electric vehicles, electric aircraft, wheelchairs, cellphones, and scooters. Among these, inductive Electric Vehicle (EV) charging has gained great interest in the last decade due to its many merits, namely contactless operation, greater convenience, and a fully automated charging process. However, inductive EV charging systems also raise many issues and concerns, which are addressed in this dissertation. One critical challenge addressed here is a virtual-inertia-based IPT controller that prevents the undesirable dynamics imposed on the grid by the increasing number of EVs. Another adverse issue solved in this dissertation is detecting metal-object intrusion into the charging zone of IPT systems before it leads to heat generation on the metal or risk of fire. Moreover, a new self-controlled multi-power-level IPT controller is developed that enables EV charging-level regulation over a wide power range, suitable for applications from golf-cart charging (light-duty EV) to trucks (heavy-duty EV). The proposed controller has many merits, including ease of implementation, cost-effectiveness, and lower complexity than conventional PWM methods. Additionally, the online estimation of IPT parameters from primary-side measurements, including coupling factor, battery current, and battery voltage, is introduced; the developed method can find immediate application in the development of adaptive controllers for static and dynamic inductive charging systems. Finally, the last objective of this research is physics-based design optimization techniques for the magnetic structures of inductive EV charging systems for dynamic applications (charging while in motion).
New configurations of IPT transmitting couplers, with the objectives of high power density, low power loss, low cost, and reduced electromagnetic emission, are designed and developed in the lab.

    The Winning Hybrid - A case study of isomorphism in the airline industry

    The deregulated scheduled passenger airline industry is in a constant state of motion as managers continually adapt their business models to meet the challenging market environment. Such adaptation has led to a variety of airlines populating the industry; from the birth of low-cost carriers to the transformation of state-owned behemoths into lean and successful carriers. These dynamics challenge airline managers to continuously acclimate their business models and to understand industry evolution. This doctoral dissertation addresses the issue of industry evolution and attempts to propose future airline business models based on airline behavior. The intention is to improve understanding of industry evolution, propose a method for constructing future business models, and aid airline management in future strategic decisions. Three central themes are raised in the research: business model heterogeneity and its impact on airline performance, innovation and imitation as a justification for business model heterogeneity, and future business models grounded on airline innovation and imitation. Each theme forms the basis for one of the project’s three analyses. The research is categorized according to the customary industrial segmentation of full-service carriers, low-cost carriers, and regional carriers. The findings show that business model heterogeneity is evident at varying degrees in the industry, and that there is a positive relationship between the level of adherence to a strategic group’s traditional business model and financial performance. This indicates that airlines that abide by their strategic group’s traditional business model perform better than those that differentiate themselves from the traditional business model. The low-cost carrier group is the most heterogeneous while the full-service carrier group is the most homogenous, which one may attribute to the historical emergence of these two groups.
Results from a global survey distributed to airline CEOs show that business model differentiation is predicated on both innovation and imitation. The research shows that all airlines innovate; however, business model changes based on this phenomenon may afford an airline an advantage only for a limited time, as imitation is prolific in the industry. Airline behavior indicates that airlines populating the periphery of their strategic group are more prone to imitate other strategic groups. In addition, it is shown that airlines that closely adhere to their strategic group’s traditional business model are more likely to imitate airlines within their own strategic group. The final analysis builds on the presence of innovation and imitation in the industry and incorporates these concepts into algebraic analyses that determine the unique combinations that continuously lead to a positive operating margin. The business model results suggest that the clear, historical distinctions between the strategic groups in the industry are becoming blurred, and that a winning hybrid may emerge.

    Design, Analysis and Test of Logic Circuits under Uncertainty.

    Integrated circuits are increasingly susceptible to uncertainty caused by soft errors, inherently probabilistic devices, and manufacturing variability. As device technologies scale, these effects become detrimental to circuit reliability. To address this, we develop methods for analyzing, designing, and testing circuits subject to probabilistic effects. Our main contributions are: 1) a fast soft-error rate (SER) analyzer that uses functional-simulation signatures to capture error effects; 2) novel design techniques that improve reliability with little area and performance overhead; 3) a matrix-based reliability-analysis framework that captures many types of probabilistic faults; and 4) test-generation/compaction methods aimed at probabilistic faults in logic circuits. SER analysis must account for the main error-masking mechanisms in ICs: logic, timing, and electrical masking. We relate logic masking to the testability of circuit nodes and utilize functional-simulation signatures, i.e., partial truth tables, to efficiently compute testability measures (signal probability and observability). To account for timing masking, we compute error-latching windows (ELWs) from timing analysis information. Electrical masking is incorporated into our estimates through derating factors for gate error probabilities. The SER of a circuit is computed by combining the effects of all three masking mechanisms within our SER analyzer, AnSER. Using AnSER, we develop several low-overhead techniques that increase reliability, including: 1) an SER-aware design method that uses redundancy already present within the circuit; 2) a technique that resynthesizes small logic windows to improve area and reliability; and 3) a post-placement gate-relocation technique that increases timing masking by decreasing ELWs. We develop the probabilistic transfer matrix (PTM) modeling framework to analyze effects beyond soft errors.
PTMs are compressed into algebraic decision diagrams (ADDs) to improve computational efficiency, and several ADD algorithms are developed to extract reliability and error-susceptibility information from PTMs representing circuits. We propose new algorithms for circuit testing under probabilistic faults, which require a reformulation of existing test techniques. For instance, a test vector may need to be repeated many times to detect a fault, and different vectors detect the same fault with different probabilities. We develop test generation methods that account for these differences, and integer linear programming (ILP) formulations to optimize test sets.
    Ph.D., Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/61584/1/smita_1.pd
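The probabilistic transfer matrix framework mentioned above represents a gate with k inputs and m outputs as a 2^k × 2^m row-stochastic matrix, with serial gate composition corresponding to matrix multiplication. A minimal pure-Python sketch of that idea follows (illustrative only; the ADD compression used in the dissertation is not reproduced here).

```python
# Minimal sketch of the probabilistic transfer matrix (PTM) idea: a gate
# with k inputs and m outputs is a 2^k x 2^m row-stochastic matrix, and
# serially composed gates correspond to matrix multiplication.

def matmul(a, b):
    """Plain matrix product of two lists-of-lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def faulty_gate_ptm(truth_column, p):
    """PTM of a 1-output gate that flips its output with probability p.
    truth_column[i] is the correct output for input combination i."""
    return [[1 - p, p] if out == 0 else [p, 1 - p]
            for out in truth_column]

# AND gate (inputs 00,01,10,11 -> outputs 0,0,0,1) with 5% output-flip
# probability, followed by an ideal inverter (its ITM, i.e. a fault-free PTM).
AND_PTM = faulty_gate_ptm([0, 0, 0, 1], 0.05)
NOT_ITM = [[0.0, 1.0], [1.0, 0.0]]

# Serial composition: the resulting matrix gives, for each input
# combination, the output distribution of the faulty AND-then-NOT chain.
NAND_PTM = matmul(AND_PTM, NOT_ITM)
```

Comparing a circuit's PTM against its ideal transfer matrix row by row yields per-input error probabilities, which is the kind of information the ADD algorithms extract at scale.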