41 research outputs found

    A programmable triangular neighborhood function for a Kohonen self-organizing map implemented on chip

    Get PDF
    An efficient transistor-level implementation of a flexible, programmable Triangular Function (TF) that can be used as a Triangular Neighborhood Function (TNF) in ultra-low-power self-organizing maps (SOMs) realized as Application-Specific Integrated Circuits (ASICs) is presented. The proposed TNF block is a component of a larger neighborhood mechanism whose role is to determine the distance between the winning neuron and all neighboring neurons. Detailed simulations carried out for the software model of such a network show that the TNF is a good approximation of the Gaussian Neighborhood Function (GNF) while being much easier to implement in hardware. The overall mechanism is very fast. In 0.18 μm CMOS technology, the distances to all neighboring neurons are determined in parallel within no more than 11 ns for an example neighborhood range, R, of 15. The TNF blocks in particular neurons require another 6 ns to calculate the output values used directly in the adaptation process; this is also performed in parallel in all neurons. As a result, after the winning neuron has been determined, the entire map is ready for adaptation within 17 ns, even for large numbers of neurons. This feature allows for the realization of ultra-low-power SOMs that are a hundred times faster than similar SOMs realized on a PC. The signal resolution at the output of the TNF block has a dominant impact on the overall energy consumption as well as the silicon area. Detailed system-level simulations of the SOM show that even for low resolutions of 3 to 6 bits, the learning abilities of the SOM are not affected. The circuit performance has been verified by means of transistor-level HSPICE simulations carried out for different transistor models and different values of supply voltage and ambient temperature, a typical procedure for commercial chips that makes the obtained results reliable
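
    As a point of reference, the sketch below shows in NumPy (not in hardware) how a triangular neighborhood function can stand in for the Gaussian one during SOM adaptation. The map size, neighborhood range R, learning rate and random data are illustrative assumptions, not parameters taken from the article, and the described circuit computes these quantities in parallel rather than with array operations.

```python
# Minimal sketch: triangular neighborhood function (TNF) as a stand-in for the
# Gaussian neighborhood function (GNF) in one SOM adaptation step.
# All numeric values below are illustrative assumptions.
import numpy as np

def gaussian_neighborhood(d, R):
    """Classic Gaussian neighborhood: exp(-d^2 / (2*R^2))."""
    return np.exp(-(d ** 2) / (2.0 * R ** 2))

def triangular_neighborhood(d, R):
    """Triangular approximation: falls linearly from 1 at d=0 to 0 at d=R."""
    return np.maximum(0.0, 1.0 - d / R)

def som_update(weights, x, winner, grid, R=15, eta=0.1):
    """Move each neuron toward input x, scaled by the TNF of its grid
    distance to the winning neuron."""
    d = np.abs(grid - grid[winner]).sum(axis=1)   # Manhattan distance on the map
    h = triangular_neighborhood(d, R)             # neighborhood strength per neuron
    return weights + eta * h[:, None] * (x - weights)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    grid = np.array([(i, j) for i in range(10) for j in range(10)])  # 10x10 map
    weights = rng.random((100, 3))
    x = rng.random(3)                                      # one input vector
    winner = np.argmin(((weights - x) ** 2).sum(axis=1))   # winner-takes-all step
    weights = som_update(weights, x, winner, grid)
```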

    Single Electron Devices and Circuit Architectures: Modeling Techniques, Dynamic Characteristics, and Reliability Analysis

    Get PDF
    Single Electron (SE) technology is an important approach to enabling further feature-size reduction and circuit performance improvement. However, its unique operating mechanism requires new methods for device modeling, circuit behavior description, and reliability analysis. In this thesis, a new macro-model of the SE turnstile is developed to describe its physical characteristics for large-scale circuit simulation and design. Based on this model, several novel circuit architectures are proposed and implemented to further demonstrate the advantages of the SE technique. The dynamic behavior of SE circuits, which differs from that of their CMOS counterparts, is also investigated using a statistical method. With the unreliable nature of SE devices in mind, a fast, recursive algorithm is developed to evaluate the reliability of SE logic circuits in a more efficient and effective manner

    A Flexible, Low-Power, Programmable Unsupervised Neural Network Based on Microcontrollers for Medical Applications

    Get PDF
    We present an implementation and laboratory tests of a winner-takes-all (WTA) artificial neural network (NN) on two microcontrollers (μC) with the ARM Cortex-M3 and the AVR cores. The prospective application of this device is in a wireless body sensor network (WBSN) for on-line analysis of electrocardiographic (ECG) and electromyographic (EMG) biomedical signals. The proposed device will be used as a base station in the WBSN, acquiring and analysing the signals from the sensors placed on the human body. The proposed system is equipped with an analog-to-digital converter (ADC) and allows for multi-channel acquisition of analog signals, preprocessing (filtering) and further analysis
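
    For illustration only, the following sketch shows the core winner-takes-all rule in Python; the article's implementation runs on the microcontrollers themselves and processes ADC samples rather than the random stand-in data used here. The neuron count, window length and example frame are assumptions made for the sketch.

```python
# Minimal sketch of the winner-takes-all (WTA) rule: the "winner" is the
# neuron whose weight vector is closest (in squared Euclidean distance)
# to the current input window of the signal.
import numpy as np

def wta_winner(weights, x):
    """Return the index of the neuron closest to input vector x."""
    distances = ((weights - x) ** 2).sum(axis=1)   # squared Euclidean distances
    return int(np.argmin(distances))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    neurons = rng.random((8, 32))        # 8 neurons, 32-sample signal window (assumed)
    ecg_window = rng.random(32)          # stand-in for a preprocessed ECG frame
    print("winning neuron:", wta_winner(neurons, ecg_window))
```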

    On microelectronic self-learning cognitive chip systems

    Get PDF
    After a brief review of machine learning techniques and applications, this Ph.D. thesis examines several approaches to implementing machine learning architectures and algorithms in hardware within our laboratory. This interdisciplinary background motivates the novel approaches we intend to pursue: innovative hardware implementations of dynamically self-reconfigurable logic for enhanced self-adaptive, self-(re)organizing and eventually self-assembling machine learning systems, while developing this particular new area of research. After reviewing relevant background on robotic control methods and the most recent advanced cognitive controllers, the thesis argues that, among the many well-known ways of designing operational technologies, the design methodologies of leading-edge high-tech devices such as cognitive chips, which may well lead to intelligent machines exhibiting conscious phenomena, should crucially be restricted to extremely well-defined constraints. Roboticists also need such constraints as specifications to help decide up front on otherwise infinitely free hardware/software design details. In addition, and most importantly, we propose these specifications as methodological guidelines tightly related to ethics and to the now well-identified workings of the human body and of its psyche

    Machine Learning

    Get PDF
    Machine Learning can be defined in various ways, but broadly refers to a scientific domain concerned with the design and development of theoretical and implementation tools for building systems that exhibit some human-like intelligent behavior. More specifically, machine learning addresses the ability of such systems to improve automatically through experience

    Understanding Quantum Technologies 2022

    Full text link
    Understanding Quantum Technologies 2022 is a creative-commons ebook that provides a unique 360-degree overview of quantum technologies, from science and technology to geopolitical and societal issues. It covers quantum physics history, quantum physics 101, gate-based quantum computing, quantum computing engineering (including quantum error correction and quantum computing energetics), quantum computing hardware (all qubit types, including the quantum annealing and quantum simulation paradigms: history, science, research, implementation and vendors), quantum enabling technologies (cryogenics, control electronics, photonics, component fabs, raw materials), quantum computing algorithms, software development tools and use cases, unconventional computing (potential alternatives to quantum and classical computing), quantum telecommunications and cryptography, quantum sensing, quantum technologies around the world, the societal impact of quantum technologies and even quantum fake sciences. The main audience is computer science engineers, developers and IT specialists, as well as quantum scientists and students who want to acquire a global view of how quantum technologies work, and particularly quantum computing. This version is an extensive update to the 2021 edition published in October 2021. Comment: 1132 pages, 920 figures, Letter format

    Right Research

    Get PDF
    "Educational institutions play an instrumental role in social and political change, and are responsible for the environmental and social ethics of their institutional practices. The essays in this volume critically examine scholarly research practices in the age of the Anthropocene, and ask what accountability educators and researchers have in ‘righting’ their relationship to the environment. The volume further calls attention to the geographical, financial, legal and political barriers that might limit scholarly dialogue by excluding researchers from participating in traditional modes of scholarly conversation. As such, Right Research is a bold invitation to the academic community to rigorous self-reflection on what their research looks like, how it is conducted, and how it might be developed so as to increase accessibility and sustainability, and decrease carbon footprint. The volume follows a three-part structure that bridges conceptual and practical concerns: the first section challenges our assumptions about how sustainability is defined, measured and practiced; the second section showcases artist-researchers whose work engages with the impact of humans on our environment; while the third section investigates how academic spaces can model eco-conscious behaviour. This timely volume responds to an increased demand for environmentally sustainable research, and is outstanding not only in its interdisciplinarity, but its embrace of non-traditional formats, spanning academic articles, creative acts, personal reflections and dialogues. Right Research will be a valuable resource for educators and researchers interested in developing and hybridizing their scholarly communication formats in the face of the current climate crisis.

    Segmentación y detección de objetos en imágenes y vídeo mediante inteligencia computacional

    Get PDF
    This thesis deals with the processing and analysis of images and video by computer systems. First, an introduction specifies the context, objectives and methodology. The background is then presented: the fundamentals of video surveillance, the existing difficulties and various state-of-the-art algorithms, followed by the main characteristics of deep learning, intelligent transportation and PTZ-camera systems, ending with the evaluation of methods and different datasets. Three parts follow. The first covers the studies on segmentation: different models developed for object detection, using both generic and specific hardware and in specific domains, as well as a study of how reducing image size affects algorithm performance. The second part describes the works that use a PTZ camera. The first work tracks the most anomalous object in the scene, with the system itself deciding which objects are anomalous and which are not; the second presents a system that tells the camera which movements to make based on the output of a non-panoramic background model improved with a growing neural gas. The third part addresses the studies related to intelligent transportation, such as the classification of the vehicles appearing in traffic sequences. The first work applies traditional techniques such as segmentation and feature extraction; the second uses segmentation and convolutional networks, complemented by a study of image resizing to supply each network with the required input format; and the third employs a model that detects and classifies objects and then estimates the pollution generated by the vehicles. Finally, the conclusions drawn from this thesis and possible future lines of research are presented. Thesis defense date: 17 December 2018
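
    For orientation, the sketch below shows a generic segmentation-then-detection pipeline of the kind the thesis builds on, using OpenCV's MOG2 background subtractor and contour-based detection. The specific models developed in the thesis (including the non-panoramic background model with a growing neural gas and the convolutional classifiers) are not reproduced here; the subtractor choice, shadow threshold and minimum area are illustrative assumptions.

```python
# Minimal sketch: foreground segmentation with a background model, then
# object detection as connected foreground regions (OpenCV 4.x API).
import cv2

def detect_objects(video_path, min_area=500):
    cap = cv2.VideoCapture(video_path)
    backsub = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
    detections = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = backsub.apply(frame)                                   # foreground mask
        mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]    # drop shadow pixels
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        boxes = [cv2.boundingRect(c) for c in contours
                 if cv2.contourArea(c) >= min_area]                   # one box per object
        detections.append(boxes)
    cap.release()
    return detections
```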