10 research outputs found

    Automated Synthesis of Unconventional Computing Systems

    Despite decades of advancements, modern computing systems based on the von Neumann architecture still carry its shortcomings. Moore's law, which had substantially masked the effects of the inherent memory-processor bottleneck of the von Neumann architecture, has slowed down as transistor dimensions approach atomic scales. At the same time, modern computational requirements, driven by machine learning, pattern recognition, artificial intelligence, data mining, and IoT, are growing at the fastest pace ever. By their inherent nature, these applications are particularly affected by communication bottlenecks, because processing them requires a large number of simple operations involving data retrieval and storage. The need to address the problems of conventional computing systems at a fundamental level has given rise to several unconventional computing paradigms. In this dissertation, we have made advancements toward the automated synthesis of two such paradigms: in-memory computing and stochastic computing. In-memory computing circumvents the problem of limited communication bandwidth by unifying processing and storage at the same physical locations. The advent of nanoelectronic devices in the last decade has made in-memory computing an energy-, area-, and cost-effective alternative to conventional computing. We have used Binary Decision Diagrams (BDDs) for in-memory computing on memristor crossbars. Specifically, we have used Free-BDDs, a special class of binary decision diagrams, for synthesizing crossbars for flow-based in-memory computing. Stochastic computing is a re-emerging discipline with several times smaller area and power requirements than conventional computing systems. It is especially suited for fault-tolerant applications such as image processing, artificial intelligence, and pattern recognition. We have proposed a decision-procedures-based iterative algorithm to synthesize Linear Finite State Machines (LFSMs) for stochastically computing non-linear functions such as polynomials, exponentials, and hyperbolic functions.
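    To illustrate the kind of target such FSM synthesis addresses (this is the classic saturating-counter construction attributed to Brown and Card, shown only as a generic example, not the dissertation's own algorithm), the Python sketch below simulates a finite-state machine that approximates a hyperbolic tangent on bipolar stochastic bitstreams. The state count and stream length are arbitrary choices for the demonstration.

        import math
        import random

        def stochastic_tanh(x, n_states=16, length=200_000):
            """Simulate a saturating up/down counter FSM driven by a bipolar
            stochastic bitstream encoding x in [-1, 1]. The output stream's
            bipolar value approximates tanh(n_states/2 * x).
            n_states and length are illustrative, not values from the source."""
            p_one = (x + 1) / 2                # bipolar encoding: P(1) = (x + 1) / 2
            state = n_states // 2
            ones = 0
            for _ in range(length):
                bit = random.random() < p_one  # draw one input bit
                # saturating counter: move up on 1, down on 0
                state = min(state + 1, n_states - 1) if bit else max(state - 1, 0)
                ones += state >= n_states // 2 # output 1 in the upper half of states
            return 2 * ones / length - 1       # decode the output stream back to [-1, 1]

        if __name__ == "__main__":
            for x in (-0.5, -0.1, 0.1, 0.5):
                print(x, round(stochastic_tanh(x), 3), round(math.tanh(8 * x), 3))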

    Evaluation of automated organ segmentation for total-body PET-CT

    The ability to diagnose and treat patients rapidly and accurately is substantially facilitated by medical images. Radiologists' visual assessment of medical images is crucial to their interpretation. Segmenting images for diagnostic purposes is a key step in the medical imaging process. The purpose of medical image segmentation is to locate and isolate 'Regions of Interest' (ROI) within a medical image. Several medical uses rely on this procedure, including diagnosis, patient management, and medical research. Medical image segmentation has applications beyond diagnosis and treatment planning. Quantitative information extracted from medical images by segmentation can be employed in the development of new diagnostic and treatment procedures. In addition, image segmentation is a critical procedure in several image-processing tasks, including image fusion and registration. Image registration is used to construct a single, high-resolution, high-contrast image of an object or organ from several images. A more complete picture of the patient's anatomy can be obtained through image fusion, which entails integrating multiple images from different modalities such as computed tomography (CT) and magnetic resonance imaging (MRI). Once images are obtained using imaging technologies, they go through post-processing procedures before being analyzed. One of the primary and essential steps in post-processing is image segmentation, which involves dividing the images into parts and utilizing only the relevant sections for analysis. This project explores various imaging technologies and tools that can be utilized for image segmentation. Many open-source imaging tools are available for segmenting medical images across various applications. The objective of this study is to use the Jaccard index to evaluate the degree of similarity between the segmentations produced by various medical image visualization and analysis programs.
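    As a reminder of the metric, the Jaccard index of two binary segmentation masks A and B is |A ∩ B| / |A ∪ B|. The minimal NumPy sketch below computes it for two masks; it is only an illustration of the measure, not the evaluation pipeline used in the study.

        import numpy as np

        def jaccard_index(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
            """Jaccard index (intersection over union) of two boolean masks."""
            a = mask_a.astype(bool)
            b = mask_b.astype(bool)
            union = np.logical_or(a, b).sum()
            if union == 0:                      # both masks empty: define similarity as 1.0
                return 1.0
            return np.logical_and(a, b).sum() / union

        # toy example with two overlapping 2-D masks
        a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True
        b = np.zeros((4, 4), dtype=bool); b[2:4, 2:4] = True
        print(jaccard_index(a, b))              # 1 overlapping pixel / 7 pixels in the union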

    The Logic of Random Pulses: Stochastic Computing.

    Recent developments in the field of electronics have produced nano-scale devices whose operation can only be described in probabilistic terms. In contrast with the conventional deterministic computing that has dominated the digital world for decades, we investigate a fundamentally different technique that is probabilistic by nature, namely, stochastic computing (SC). In SC, numbers are represented by bit-streams of 0's and 1's, in which the probability of seeing a 1 denotes the value of the number. The main benefit of SC is that complicated arithmetic computation can be performed by simple logic circuits. For example, a single (logic) AND gate performs multiplication. The dissertation begins with a comprehensive survey of SC and its applications. We highlight its main challenges, which include long computation time and low accuracy, as well as the lack of general design methods. We then address some of the more important challenges. We introduce a new SC design method, called STRAUSS, that generates efficient SC circuits for arbitrary target functions. We then address the problems arising from correlation among stochastic numbers (SNs). In particular, we show that, contrary to general belief, correlation can sometimes serve as a resource in SC design. We also show that unlike conventional circuits, SC circuits can tolerate high error rates and are hence useful in some new applications that involve nondeterministic behavior in the underlying circuitry. Finally, we show how SC's properties can be exploited in the design of an efficient vision chip that is suitable for retinal implants. In particular, we show that SC circuits can directly operate on signals with neural encoding, which eliminates the need for data conversion.
    PhD dissertation, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/113561/1/alaghi_1.pd
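    To make the AND-gate multiplication concrete (a generic SC illustration, not code from the dissertation): if two independent unipolar bitstreams carry probabilities p and q, ANDing them bitwise yields a stream whose probability of a 1 is p*q. A short Python sketch, with stream length chosen arbitrarily:

        import random

        def to_bitstream(p, length):
            """Unipolar stochastic number: each bit is 1 with probability p."""
            return [random.random() < p for _ in range(length)]

        def estimate(bits):
            """Decode a unipolar bitstream back to a probability estimate."""
            return sum(bits) / len(bits)

        length = 100_000
        a = to_bitstream(0.4, length)               # encodes 0.4
        b = to_bitstream(0.5, length)               # encodes 0.5 (independent stream)
        product = [x and y for x, y in zip(a, b)]   # bitwise AND acts as a multiplier
        print(estimate(product))                    # ~0.20, i.e. 0.4 * 0.5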

    New Views for Stochastic Computing: From Time-Encoding to Deterministic Processing

    University of Minnesota Ph.D. dissertation. July 2018. Major: Electrical/Computer Engineering. Advisor: David Lilja. 1 computer file (PDF); xi, 149 pages.
    Stochastic computing (SC), a paradigm first introduced in the 1960s, has received considerable attention in recent years as a potential approach for emerging technologies and "post-CMOS" computing. Logical computation is performed on random bitstreams where the signal value is encoded by the probability of obtaining a one versus a zero. This unconventional representation of data offers some intriguing advantages over conventional weighted binary. Implementing complex functions with simple hardware (e.g., multiplication using a single AND gate), tolerating soft errors (i.e., bit flips), and progressive precision are the primary advantages of SC. The obvious disadvantage, however, is latency. A stochastic representation is exponentially longer than conventional binary radix. Long latencies translate into high energy consumption, often higher than that of the binary counterpart. Generating bit-streams is also costly. Factoring in the cost of the bit-stream generators, the overall hardware cost of an SC implementation is often comparable to a conventional binary implementation. This dissertation begins by proposing a highly unorthodox idea: performing computation with digital constructs on time-encoded analog signals. We introduce a new, energy-efficient, high-performance, and much less costly approach for SC using time-encoded pulse signals. We explore the design and implementation of arithmetic operations on time-encoded data and discuss the advantages, challenges, and potential applications. Experimental results on image-processing applications show up to 99% performance speedup, 98% saving in energy dissipation, and 40% area reduction compared to prior stochastic implementations. We further introduce a low-cost approach for synthesizing sorting-network circuits based on deterministic unary bit-streams. Synthesis results show more than 90% area and power savings compared to the costs of the conventional binary implementation. Time-based encoding of data is then exploited for fast and energy-efficient processing of data with the developed sorting circuits. Poor progressive precision is the main challenge with the recently developed deterministic methods of SC. We propose a high-quality down-sampling method which significantly improves the processing time and the energy consumption of these deterministic methods by pseudo-randomizing bitstreams. We also propose two novel deterministic methods of processing bitstreams by using low-discrepancy sequences. We further introduce a new advantage of the SC paradigm: the skew tolerance of SC circuits. We exploit this advantage in developing polysynchronous clocking, a design strategy for optimizing the clock distribution network of SC systems. Finally, as the first study of its kind to the best of our knowledge, we rethink the memory system design for SC. We propose a seamless stochastic system, StochMem, which features analog memory to trade the energy and area overhead of data conversion for computation accuracy.
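    A hedged sketch of the sorting idea mentioned above: on unary (thermometer-coded) bit-streams, a compare-and-swap element reduces to a single AND gate for the minimum and a single OR gate for the maximum, which is the kind of building block unary sorting networks rely on. The Python below only demonstrates the principle on toy streams; the stream length is an arbitrary choice for the example.

        def unary(value, length):
            """Thermometer/unary encoding: `value` ones followed by zeros."""
            return [1] * value + [0] * (length - value)

        def decode(bits):
            return sum(bits)

        def compare_and_swap(a_bits, b_bits):
            """Min via bitwise AND, max via bitwise OR -- valid for unary streams."""
            low  = [x & y for x, y in zip(a_bits, b_bits)]
            high = [x | y for x, y in zip(a_bits, b_bits)]
            return low, high

        N = 8                                   # stream length (illustrative)
        a, b = unary(3, N), unary(6, N)
        low, high = compare_and_swap(a, b)
        print(decode(low), decode(high))        # 3 6 -> min and max recovered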

    Entropy in Image Analysis II

    Image analysis is a fundamental task for any application where extracting information from images is required. The analysis requires highly sophisticated numerical and analytical methods, particularly for applications in medicine, security, and other fields where the results of the processing are of vital importance. This is evident from the articles composing the Special Issue "Entropy in Image Analysis II", in which the authors used widely tested methods to verify their results. Readers of the present volume will appreciate the richness of the methods and applications, in particular for medical imaging and image security, and the remarkable cross-fertilization among the proposed research areas.

    Computer Science & Technology Series : XVIII Argentine Congress of Computer Science. Selected papers

    CACIC’12 was the eighteenth Congress in the CACIC series. It was organized by the School of Computer Science and Engineering at the Universidad Nacional del Sur. The Congress included 13 Workshops with 178 accepted papers, 5 Conferences, 2 invited tutorials, several meetings related to Computer Science Education (professors, PhD students, curricula), and an International School with 5 courses. CACIC 2012 was organized following the traditional Congress format, with 13 Workshops covering a diversity of dimensions of Computer Science research. Each topic was supervised by a committee of 3-5 chairs from different universities. The call for papers attracted a total of 302 submissions. An average of 2.5 review reports were collected for each paper, for a grand total of 752 review reports involving about 410 different reviewers. A total of 178 full papers, involving 496 authors and 83 universities, were accepted, and 27 of them were selected for this book.
    Red de Universidades con Carreras en Informática (RedUNCI).

    Development of quality assurance procedures and methods for the CBM Silicon Tracking System

    The Compressed Baryonic Matter (CBM) experiment at the future Facility for Antiproton and Ion Research (FAIR) aims to study the properties of nuclear matter at high net-baryon densities and moderate temperatures. It is expected that, utilizing ultra-relativistic heavy-ion collisions, a phase transition from hadronic matter to QCD matter will be probed. Among the key objectives are the determination of the nature and order of the transition (deconfinement and/or chiral) and the observation of a critical end-point. To measure and determine the physics phenomena occurring in these collisions, appropriate detectors are required. The Silicon Tracking System (STS) is the key detector for reconstructing the charged-particle tracks created in heavy-ion collisions. In order to assure the necessary detector performance, about 900 silicon microstrip sensors must be checked and tested for their quality. For these tasks, highly efficient and highly automated procedures and methods have to be developed. The first part of this dissertation reports on a novel automated inspection system developed for the optical quality control of silicon microstrip sensors. The proposed methods and procedures allow scanning the individual sensors to recognize and classify sensor defects, such as surface scratches, implant defects, and metalization-layer lithography defects. To separate and classify these defects, various image-processing algorithms based on machine vision are used. The silicon sensors are also characterized geometrically to ensure the mechanical precision targeted for the detector assembly procedures. Since the STS detector will be operated in a high-radiation environment with a total non-ionizing radiation dose of up to 1x10^14 n_eq/cm^2 over 6 years of operation, the silicon sensors need to be kept in the temperature range of -5 to -10 °C at all times to minimize reverse-annealing effects and to avoid thermal runaway. The second part of this work is devoted to the development and optimization of the design of the cooling bodies, which remove the more than 40 kW of thermal energy produced by the front-end readout electronics. In particular, thermodynamic models were developed to estimate the cooling regimes, and thermal simulations of the cooling bodies were carried out. Based on the performed calculations, an innovative bi-phase CO2 cooling system with up to 200 W of cooling power was built, which allowed the simulated cooling-body designs to be verified experimentally.
    German abstract (in English translation): At the future Facility for Antiproton and Ion Research (FAIR), the Compressed Baryonic Matter (CBM) experiment will study nuclear matter at high baryon densities and moderate temperatures. The phase transition between hadronic and QCD matter can be probed by means of ultra-relativistic heavy-ion collisions. The most important goals are the determination of the nature of the transition (deconfinement and/or chiral phase transition) and the study of the critical end-point in the phase diagram. To investigate these phenomena, suitable detector systems are required. The Silicon Tracking System (STS) is the central detector with which the tracks of the charged particles produced in heavy-ion collisions are reconstructed. To ensure the full functionality of the STS, the more than 900 silicon microstrip sensors must be inspected and tested before assembly. For this purpose, highly efficient and automated procedures and methods have to be developed. The first part of this dissertation reports on an automated optical inspection system. The system makes it possible to examine the individual silicon sensors for potential surface defects and to classify them. Examples of such defects are scratches on the surface, implant defects, and lithography defects of the metalization layer. Several machine-vision image-processing algorithms are used to recognize these defects. In addition, the geometrical parameters of the sensors that are important for the assembly of the STS are optically inspected. The STS detector will be operated at extremely high collision rates. Over an operating time of 6 years, a radiation dose of up to 1x10^14 n_eq/cm^2 will be accumulated, which leads to a significant increase of the leakage current and ultimately defines the end-of-life criterion. The silicon sensors must therefore be cooled to -5 to -10 °C in order to minimize reverse-annealing effects and to delay thermal runaway. At the same time, the readout electronics produce more than 40 kW of thermal energy close to the sensors, which must be removed completely by cooling bodies. The second part of this dissertation is devoted to the optimization of the cooling bodies. For this, thermodynamic models were implemented and corresponding thermal simulations were carried out. Within the scope of this work, a 200 W CO2 cooling system was built, which made it possible to verify the model calculations and simulations of cooling with two-phase CO2.
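    As a purely illustrative sketch of the kind of machine-vision step described above (not the actual inspection software), classical OpenCV operations can flag candidate surface scratches on a sensor image. The threshold values, elongation cut, and minimum contour area below are arbitrary assumptions for the example, and the file name in the usage comment is hypothetical.

        import cv2
        import numpy as np

        def find_scratch_candidates(image_path, min_area=50.0):
            """Flag long, thin edge features as possible surface scratches.
            Thresholds and the area cut are illustrative, not calibrated values."""
            gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
            if gray is None:
                raise FileNotFoundError(image_path)
            blurred = cv2.GaussianBlur(gray, (5, 5), 0)        # suppress sensor noise
            edges = cv2.Canny(blurred, 50, 150)                # edge map of the surface
            # close small gaps so a scratch shows up as one connected contour
            closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, np.ones((3, 3), np.uint8))
            # OpenCV 4.x: findContours returns (contours, hierarchy)
            contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            candidates = []
            for c in contours:
                area = cv2.contourArea(c)
                x, y, w, h = cv2.boundingRect(c)
                elongation = max(w, h) / max(1, min(w, h))
                if area >= min_area and elongation > 4:        # keep long, thin features
                    candidates.append((x, y, w, h))
            return candidates

        # example usage (hypothetical file name):
        # print(find_scratch_candidates("sensor_tile.png"))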