13 research outputs found

    An Analog VLSI Deep Machine Learning Implementation

    Machine learning systems provide automated data processing and see a wide range of applications. Direct processing of raw high-dimensional data such as images and video by machine learning systems is impractical, both due to prohibitive power consumption and due to the “curse of dimensionality,” which makes learning tasks exponentially more difficult as dimension increases. Deep machine learning (DML) mimics the hierarchical presentation of information in the human brain to achieve robust automated feature extraction, reducing the dimension of such data. However, the computational complexity of DML systems limits large-scale implementations in standard digital computers. Custom analog signal processing (ASP) can yield much higher energy efficiency than digital signal processing (DSP), presenting a means of overcoming these limitations. The purpose of this work is to develop an analog implementation of a DML system. First, an analog memory is proposed as an essential component of the learning system. It uses charge trapped on a floating gate to store an analog value in a non-volatile way. The memory is compatible with a standard digital CMOS process and allows random-accessible bi-directional updates without the need for an on-chip charge pump or high-voltage switch. Second, an architecture and circuits are developed to realize an online k-means clustering algorithm in analog signal processing. It achieves automatic recognition of underlying data patterns and online extraction of data statistical parameters. This unsupervised learning system constitutes the computation node in the deep machine learning hierarchy. Third, a 3-layer, 7-node analog deep machine learning engine is designed, featuring online unsupervised trainability and non-volatile floating-gate analog storage. It utilizes a massively parallel reconfigurable current-mode analog architecture to realize efficient computation, and algorithm-level feedback is leveraged to provide robustness to circuit imperfections in analog signal processing. At a processing speed of 8300 input vectors per second, it achieves a peak energy efficiency of 1×10^12 operations per second per watt. In addition, an ultra-low-power tunable bump circuit is presented to provide similarity measures in analog signal processing. It incorporates a novel wide-input-range tunable pseudo-differential transconductor. The circuit demonstrates tunability of bump center, width and height with a power consumption significantly lower than previous works.
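    To make the clustering contribution concrete, the following minimal Python sketch (an illustration, not the paper's circuit) implements online k-means with a winner-take-all update, the algorithm class the analog computation node realizes; the decaying learning rate and random initialization are assumptions.

        # Online k-means: each sample updates one centroid at a time, the software
        # analogue of the chip's analog clustering node (illustrative only).
        import numpy as np

        def online_kmeans(stream, k, dim, seed=0):
            rng = np.random.default_rng(seed)
            centroids = rng.normal(size=(k, dim))   # analog memories would hold these
            counts = np.zeros(k)                    # per-centroid update counts
            for x in stream:
                j = int(np.argmin(np.linalg.norm(centroids - x, axis=1)))  # winner
                counts[j] += 1
                eta = 1.0 / counts[j]               # decaying learning rate
                centroids[j] += eta * (x - centroids[j])  # incremental mean update
            return centroids

        # Example: cluster 1000 two-dimensional input vectors into 3 groups.
        data = np.random.default_rng(1).normal(size=(1000, 2))
        print(online_kmeans(data, k=3, dim=2))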

    Aging sensor for CMOS memory cells

    Master's dissertation, Engineering and Technology, Instituto Superior de Engenharia, Universidade do Algarve, 2016. Complementary Metal Oxide Semiconductor (CMOS) memories occupy a significant share of the area of integrated circuits and, as fabrication technologies scale to ever smaller dimensions, performance and reliability problems arise. Effects such as BTI (Bias Temperature Instability), TDDB (Time Dependent Dielectric Breakdown), HCI (Hot Carrier Injection), and EM (Electromigration) degrade the physical parameters of field-effect transistors (MOSFETs), altering their electrical properties over time. BTI can be subdivided into NBTI (Negative BTI) and PBTI (Positive BTI). NBTI is the dominant effect in the degradation and aging of CMOS transistors, affecting PMOS transistors, while PBTI is especially relevant to the degradation of NMOS transistors. The degradation caused by these effects manifests itself in the transistors as an increase in the magnitude of the threshold voltage |Vth| over time. This transistor degradation is called aging; the effects are cumulative and have a large impact on circuit performance, in particular when other parametric variations occur. Such additional parametric variations include process (P), voltage (V), and temperature (T) variations, referred to collectively with aging as PVTA (Process, Voltage, Temperature and Aging). Random Access Memory (RAM) cells, in particular static (SRAM) and dynamic (DRAM) memories, have precise read and write times. As memory cells age over time, due to the degradation of the properties of the MOSFET transistors, the performance of the memory cells also degrades. This performance degradation is therefore the result of slow transitions, which occur because aged MOSFETs switch later than fresh ones. Performance degradation in memories due to slow transitions can translate into slower reads and writes, as well as changes in the storage capability of the memory. This property can be expressed through the Static Noise Margin (SNM). The SNM is reduced as MOSFETs age and, when the SNM value is low, the cell loses its storage capability and becomes more vulnerable to noise sources. The SNM is therefore a figure that allows benchmarking and comparing the characteristics of the memory under aging or other parametric variations that may occur. The aging of CMOS memories thus translates into the occurrence of memory errors over time, which is undesirable, especially in critical systems. The work presented in this dissertation aims at developing an aging and performance sensor for CMOS memories, detecting and signaling externally the aging of SRAM memory cells through constant monitoring of their performance. The aging and performance sensor is connected to the bit line of the memory cell and actively monitors the read and write operations carried out during memory operation.
The aging sensor consists of two blocks: a transition detector and a pulse detector. The transition detector is composed of eight inverters and an XOR logic gate implemented with pass gates. The inverters have different P/N transistor size ratios, yielding switching times at different voltage values. Thus, when the inverters with different switching voltages are driven by the same input signal and connected to an XOR gate, a pulse is generated at the output whenever there is a transition on the bit line. The pulse therefore has a duration proportional to the switching time of the input signal, which in this particular case corresponds to the memory read and write operations. When aging occurs and transitions become slower, the pulses last longer than those generated in a fresh SRAM. The generated pulses then pass through a delay element, which delays and inverts them, ensuring that the pulse duration is sufficient for detection to take place. The resulting pulse is fed to the next block of the aging and performance sensor, a pulse detector circuit. The pulse detector implements a CMOS NOR gate, controlled by a clock signal and by the inverted pulses. When both NOR inputs are '0', the resulting output is '1', thereby creating a detection window. The aging sensor is adjusted for each implementation so that, in a fresh memory cell, the inverted pulses are aligned in time with the clock pulses. This adjustment is made at design time, as a function of the operating frequency required for the cell, both by sizing the delay element (tuning its delay) and by setting the clock period. As circuit aging proceeds and transistor switching becomes slower, the pulse duration increases and the pulses eventually enter the detection window, triggering a signal at the sensor output. Thus, if unstable read or write operations occur, that is, operations whose execution times exceed expectations or whose logic levels are degraded, the aging and performance sensor outputs '1', signaling critical performance for the operation performed; otherwise the output is '0', indicating that no error is observed in the performance of the read and write operations. The transistors of the aging and performance sensor are sized according to the implementation; for example, the selected transistor models, supply voltages, or the number of memory cells connected to the bit line influence the prior sizing of the sensor, since both the memory performance and the sensor behavior depend on the operating conditions. Other previously proposed solutions available in the literature, namely the embedded on-chip aging sensor OCAS (On-Chip Aging Sensor), can detect aging in an SRAM due to NBTI. However, the OCAS solution applies only to a set of SRAM cells connected to a bit line; it cannot be applied individually to other memory cells such as a DRAM, and it does not account for the PBTI effect.
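A minimal behavioral sketch in Python (an assumption for illustration, not the dissertation's transistor-level design) captures the detection principle: the XOR pulse width tracks the bit-line transition time, and the flag is raised only when the pulse outlasts the margin tuned at design time via the delay element and the clock period.

    # Behavioral model of the detection window (illustrative, not the netlist).
    def aging_flag(transition_ns: float, fresh_ns: float, margin_ns: float) -> bool:
        """True when the transition pulse outlasts the tuned detection margin."""
        pulse_width = transition_ns          # XOR pulse width ~ transition duration
        return pulse_width > fresh_ns + margin_ns

    # A fresh cell (1.0 ns transitions) stays silent; a transition slowed by
    # ~10% or more crosses into the detection window and is flagged.
    print(aging_flag(1.00, 1.0, 0.05))  # False: new memory, aligned with clock
    print(aging_flag(1.12, 1.0, 0.05))  # True: aged cell, slow read/write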
Another existing solution, the Scout flip-flop, used in ASIC (Application Specific Integrated Circuit) applications in synchronous digital circuits, also acts as a local performance sensor and responds predictively when monitoring delay faults, relying on detection windows. That solution was not designed for monitoring read and write operations in SRAM and DRAM memories. Nevertheless, given the way it operates, it is the closest to the solution proposed in this work, since its operation is based on flagging delayed signals. In this dissertation, SPICE (Simulation Program with Integrated Circuit Emphasis) simulations are used to validate and test the aging and performance sensor. The case study to which the sensor is applied is a CMOS 6-transistor SRAM memory, together with its peripheral circuits, namely the sense amplifier and the precharge and equalization circuit, developed in 65 nm and 22 nm CMOS technology using the Berkeley Predictive Technology Models (PTM) for the MOSFETs. The sensor is developed and tested at 65 nm and 22 nm with the PTM models, allowing the aging and performance sensor to be characterized and also assessing how aging degrades the SRAM read and write operations, as well as its storage capability and robustness to noise. Finally, the simulations presented prove that the aging and performance sensor developed in this master's thesis successfully monitors the performance and aging of SRAM memory circuits, overcoming the challenges of previously available solutions for memory aging. It was verified that, in the presence of aging causing a degradation of 10% or more, the aging and performance sensor effectively detects the performance degradation and signals the errors. Its use in DRAM memories, although possible, was not tested in this dissertation and is reserved for future work.

    Addressing the RRAM Reliability and Radiation Soft-Errors in the Memory Systems

    With the continuous and aggressive technology scaling, the design of memory systems becomes very challenging. The desire for high-capacity, reliable, and energy-efficient memory arrays is rising rapidly. However, from the technology side, the increasing leakage power and the restrictions resulting from manufacturing limitations complicate the design of memory systems. In addition, with the new machine learning applications, which require a tremendous amount of mathematical operations to be completed in a timely manner, interest in neuromorphic systems has increased in recent years. Emerging Non-Volatile Memory (NVM) devices have been suggested for incorporation in the design of memory arrays due to their small size and their ability to reduce leakage power, since they can retain their data even in the absence of a power supply. Compared to other novel NVM devices, the Resistive Random Access Memory (RRAM) device has many advantages, including its low programming requirements, the large ratio between its high and low resistive states, and its compatibility with the Complementary Metal Oxide Semiconductor (CMOS) fabrication process. The RRAM device also suffers from disadvantages, including instability in its switching dynamics and sensitivity to process variations. Yet among the chief issues hindering the deployment of RRAM arrays in products are RRAM reliability and radiation soft-errors. RRAM reliability soft-errors result from the diffusion of oxygen vacancies out of the conductive channels within the oxide material of the device. Radiation soft-errors, on the other hand, are caused by highly energetic cosmic rays incident on the junction of the MOS device used as a selector for the RRAM cell. Both of these soft-errors cause an unintentional change of the resistive state of the RRAM device. While there is research work in the literature addressing some of the RRAM disadvantages, such as the switching-dynamics instability, there is no dedicated work discussing the impact of RRAM soft-errors on the various designs into which the RRAM device is integrated and how the soft-errors can be automatically detected and fixed. In this thesis, we bring attention to the need to consider RRAM soft-errors to avoid degradation in design performance. In addition, using previously reported and experimentally verified SPICE models, together with widely adopted system-level simulators and test benches, various solutions are provided to automatically detect and fix the degradation in design performance due to RRAM soft-errors. The main focus of this work is to propose methodologies which solve or improve the robustness of memory systems to RRAM soft-errors. These memories are expected to be incorporated in the current and future platforms running advanced machine learning applications. In more detail, the main contributions of this thesis can be summarized as:
    - Provide an in-depth analysis of the impact of RRAM soft-errors on the performance of RRAM-based designs.
    - Provide a new SRAM cell which uses the RRAM device to reduce the SRAM leakage power with minimal impact on its read and write operations. This new SRAM cell can be incorporated in the Graphical Processing Unit (GPU) designs currently used in the implementation of machine learning platforms.
    - Provide circuit and system solutions to resolve the reliability and radiation soft-errors in RRAM arrays. These solutions can automatically detect and fix the soft-errors with minimal impact on the delay and energy consumption of the memory array.
    - Develop a framework to estimate the effect of RRAM soft-errors on the performance of RRAM-based neuromorphic systems. This provides, for the first time, a generic methodology through which device-level RRAM soft-errors are mapped to the overall performance of neuromorphic systems. Our analysis shows that the accuracy of an RRAM-based neuromorphic system can degrade by more than 48% due to RRAM soft-errors.
    - Provide two algorithms to automatically detect and restore the degradation in RRAM-based neuromorphic systems due to RRAM soft-errors. The system- and circuit-level techniques to implement these algorithms are also explained in this work.
    In conclusion, this work offers initial steps toward enabling the use of RRAM devices in products by tackling one of their best-known challenges: RRAM reliability and radiation soft-errors. Despite using experimentally verified SPICE models and widely popular system simulators and test benches, the solutions provided in this thesis need to be verified in future work through fabrication, in order to study the impact of other RRAM technology shortcomings, including: a) the instability in its switching dynamics due to the stochastic nature of oxygen-vacancy movement, and b) its sensitivity to process variations.
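    As a concrete illustration of the detect-and-restore idea, the Python sketch below scrubs an RRAM weight array by re-programming cells whose state has drifted from a stored target; the read/write helpers, tolerance, and golden copy are illustrative assumptions, not the thesis's circuits or algorithms.

        # Scrub loop: detect drifted RRAM cells and restore them (illustrative).
        import numpy as np

        def scrub(read_cell, write_cell, golden, tol=0.1):
            """Re-program cells whose conductance drifted past tol of the target."""
            restored = 0
            for idx, g_target in np.ndenumerate(golden):
                if abs(read_cell(idx) - g_target) > tol * abs(g_target):
                    write_cell(idx, g_target)      # soft-error detected: restore
                    restored += 1
            return restored

        # Toy usage: a 4x4 weight array with two drifted cells.
        golden = np.ones((4, 4))
        array = golden.copy()
        array[0, 1] = 0.5   # reliability soft-error (vacancy diffusion)
        array[2, 3] = 1.4   # radiation-induced state flip
        print(scrub(lambda i: array[i], lambda i, v: array.__setitem__(i, v), golden))  # 2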

    Leveraging Non-Volatile Memory in Modern Storage Management Architectures

    Non-volatile memory technologies (NVM) introduce a novel class of devices that combine characteristics of both storage and main memory. Like storage, NVM is not only persistent but also denser and cheaper than DRAM. Like DRAM, NVM is byte-addressable and has much lower access latency than storage. In recent years, NVM has gained a lot of attention both in academia and in the data management industry, with views ranging from skepticism to overexcitement. Some critics claim that NVM is not cheap enough to replace flash-based SSDs nor fast enough to replace DRAM, while others see it simply as a storage device. Supporters of NVM have observed that its low latency and byte-addressability require radical changes and a complete rewrite of storage management architectures. This thesis takes a moderate stance between these two views. We consider that, while NVM might not replace flash-based SSDs or DRAM in the near future, it has the potential to reduce the gap between them. Furthermore, treating NVM as a regular storage medium does not fully leverage its byte-addressability and low latency. On the other hand, completely redesigning systems to be NVM-centric is impractical. Proposals that attempt to leverage NVM to simplify storage management result in completely new architectures that face the same challenges that are already well understood and addressed by the traditional architectures. Therefore, we take three common storage management architectures as a starting point and propose incremental changes to enable them to better leverage NVM. First, in the context of log-structured merge-trees, we investigate the impact of storing data in NVM and devise methods to enable small-granularity accesses and NVM-aware caching policies. Second, in the context of B+Trees, we propose to extend the buffer pool and describe a technique based on the concept of optimistic consistency to handle corrupted pages in NVM. Third, we employ NVM to enable larger capacity and reduced costs in an index+log key-value store, and combine it with other techniques to build a system that achieves low tail latency.
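    As a rough sketch of the optimistic-consistency idea for NVM-resident pages (hypothetical names; the thesis's actual repair path is its own design), each page carries a checksum that is written with the payload and verified on read, with mismatches routed to a repair callback:

        # Optimistic read of an NVM page: verify a checksum instead of locking
        # against torn writes; fall back to repair on mismatch (illustrative).
        import zlib

        PAGE_SIZE = 4096

        def write_page(buf: bytearray, payload: bytes) -> None:
            """Store payload plus CRC32 so corruption is detectable on read."""
            buf[:4] = zlib.crc32(payload).to_bytes(4, "little")
            buf[4:4 + len(payload)] = payload

        def read_page(buf: bytearray, length: int, repair):
            """Optimistically read; invoke the repair path when the CRC fails."""
            payload = bytes(buf[4:4 + length])
            if zlib.crc32(payload).to_bytes(4, "little") != bytes(buf[:4]):
                return repair()     # corrupted or torn page: recover a good copy
            return payload

        page = bytearray(PAGE_SIZE)
        write_page(page, b"tuple data")
        page[10] ^= 0xFF            # simulate a corrupted byte in NVM
        print(read_page(page, 10, repair=lambda: b"recovered from log"))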
This thesis aims to describe and evaluate these techniques in order to enable storage management architectures to leverage NVM and achieve increased performance and lower costs, without major architectural changes. Outline:
    1 Introduction
      1.1 Non-Volatile Memory
      1.2 Challenges
      1.3 Non-Volatile Memory & Database Systems
      1.4 Contributions and Outline
    2 Background
      2.1 Non-Volatile Memory
        2.1.1 Types of NVM
        2.1.2 Access Modes
        2.1.3 Byte-addressability and Persistency
        2.1.4 Performance
      2.2 Related Work
      2.3 Case Study: Persistent Tree Structures
        2.3.1 Persistent Trees
        2.3.2 Evaluation
    3 Log-Structured Merge-Trees
      3.1 LSM and NVM
      3.2 LSM Architecture
        3.2.1 LevelDB
      3.3 Persistent Memory Environment
      3.4 2Q Cache Policy for NVM
      3.5 Evaluation
        3.5.1 Write Performance
        3.5.2 Read Performance
        3.5.3 Mixed Workloads
      3.6 Additional Case Study: RocksDB
        3.6.1 Evaluation
    4 B+Trees
      4.1 B+Tree and NVM
        4.1.1 Category #1: Buffer Extension
        4.1.2 Category #2: DRAM Buffered Access
        4.1.3 Category #3: Persistent Trees
      4.2 Persistent Buffer Pool with Optimistic Consistency
        4.2.1 Architecture and Assumptions
        4.2.2 Embracing Corruption
      4.3 Detecting Corruption
        4.3.1 Embracing Corruption
      4.4 Repairing Corruptions
      4.5 Performance Evaluation and Expectations
        4.5.1 Checksums Overhead
        4.5.2 Runtime and Recovery
      4.6 Discussion
    5 Index+Log Key-Value Stores
      5.1 The Case for Tail Latency
      5.2 Goals and Overview
      5.3 Execution Model
        5.3.1 Reactive Systems and Actor Model
        5.3.2 Message-Passing Communication
        5.3.3 Cooperative Multitasking
      5.4 Log-Structured Storage
      5.5 Networking
      5.6 Implementation Details
        5.6.1 NVM Allocation on RStore
        5.6.2 Log-Structured Storage and Indexing
        5.6.3 Garbage Collection
        5.6.4 Logging and Recovery
      5.7 Systems Operations
      5.8 Evaluation
        5.8.1 Methodology
        5.8.2 Environment
        5.8.3 Other Systems
        5.8.4 Throughput Scalability
        5.8.5 Tail Latency
        5.8.6 Scans
        5.8.7 Memory Consumption
      5.9 Related Work
    6 Conclusion
    Bibliography
    A PiBench

    Technical workflow in TV coverage of Mountain Bike events

    This project focuses on the technical requirements for the television production of a live-action sports event, specifically Red Bull's Mountain Bike competitions in the 2010 Catalan Cup. There are three main stages: the TV production on location, the editing, and the delivery of the final video to the customer. Each stage requires several technical decisions to be made, and this document provides detailed and comprehensible steps to achieve each purpose.

    2022 roadmap on neuromorphic computing and engineering

    Modern computation based on von Neumann architecture is now a mature cutting-edge science. In the von Neumann architecture, processing and memory units are implemented as separate blocks interchanging data intensively and continuously. This data transfer is responsible for a large part of the power consumption. The next generation of computer technology is expected to solve problems at the exascale, with 10^18 calculations each second. Even though these future computers will be incredibly powerful, if they are based on von Neumann type architectures they will consume between 20 and 30 megawatts of power and will not have intrinsic physically built-in capabilities to learn or deal with complex data as our brain does. These needs can be addressed by neuromorphic computing systems, which are inspired by the biological concepts of the human brain. This new generation of computers has the potential to be used for the storage and processing of large amounts of digital information with much lower power consumption than conventional processors. Among their potential future applications, an important niche is moving the control from data centers to edge devices. The aim of this roadmap is to present a snapshot of the present state of neuromorphic technology and provide an opinion on the challenges and opportunities that the future holds in the major areas of neuromorphic technology, namely materials, devices, neuromorphic circuits, neuromorphic algorithms, applications, and ethics. The roadmap is a collection of perspectives where leading researchers in the neuromorphic community provide their own view about the current state and the future challenges for each research area. We hope that this roadmap will be a useful resource, providing a concise yet comprehensive introduction for readers outside this field and for those who are just entering it, as well as future perspectives for those who are well established in the neuromorphic computing community.

    NASA Tech Briefs, October 1991

    Topics: New Product Ideas; NASA TU Services; Electronic Components and Circuits; Electronic Systems; Physical Sciences; Materials; Computer Programs; Mechanics; Machinery; Fabrication Technology; Mathematics and Information Sciences; Life Sciences

    7th International Conference on Higher Education Advances (HEAd'21)

    Information and communication technologies, together with new teaching paradigms, are reshaping the learning environment. The International Conference on Higher Education Advances (HEAd) aims to become a forum for researchers and practitioners to exchange ideas, experiences, opinions and research results relating to the preparation of students and the organization of educational systems. Doménech I De Soria, J.; Merello Giménez, P.; Poza Plaza, EDL. (2021). 7th International Conference on Higher Education Advances (HEAd'21). Editorial Universitat Politècnica de València. https://doi.org/10.4995/HEAD21.2021.13621