
    XOR multiplexing technique for nanocomputers

    In emerging nanotechnologies, a significant percentage of components may be faulty due to the manufacturing process. To build reliable systems from unreliable nano-scale components, fault-tolerant architectures must be designed. This paper presents a novel fault-tolerant technique for nanocomputers, the XOR multiplexing technique: a hardware redundancy scheme based on massive duplication of unreliable components. We analyze the error distributions of a single XOR multiplexing unit and of a multi-stage XOR multiplexing system, and compare them to those of the corresponding NAND multiplexing unit and multi-stage NAND multiplexing system. Simulation results show that XOR multiplexing is more reliable than NAND multiplexing. Bifurcation theory is used to analyze the fault tolerance of the system, and the results show that the XOR multiplexing technique has a high fault-tolerant ability. Like NAND multiplexing, it is a potentially effective fault-tolerance technique for future nanoelectronics.
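
    To make the multiplexing idea concrete, the sketch below simulates a single von Neumann-style multiplexing stage in Python and compares a NAND bundle with an XOR bundle. It is a minimal illustration, not the paper's exact construction: the bundle size, the per-gate fault probability, and the random-pairing step are assumptions chosen only for the demo.

```python
import numpy as np

def multiplex_stage(x, y, gate, fault_prob, rng):
    """One redundant-multiplexing stage: randomly pair the two input
    bundles, apply the gate to every pair, and flip each output with
    probability fault_prob to model a faulty device."""
    paired = rng.permutation(y)                 # random pairing of the redundant wires
    out = gate(x, paired)
    faults = rng.random(out.shape) < fault_prob
    return np.logical_xor(out, faults)          # a fault flips the gate output

rng = np.random.default_rng(0)
N = 10_000                                      # redundancy (bundle size), assumed

# Input bundles carrying a clean logical 1 and a clean logical 0.
ones = np.ones(N, dtype=bool)
zeros = np.zeros(N, dtype=bool)

for name, gate in [("NAND", lambda a, b: ~(a & b)),
                   ("XOR",  np.logical_xor)]:
    out = multiplex_stage(ones, zeros, gate, fault_prob=0.01, rng=rng)
    # NAND(1, 0) = 1 and XOR(1, 0) = 1, so the error fraction is the
    # share of wires in the output bundle that ended up at 0.
    print(f"{name}: output error fraction = {1 - out.mean():.4f}")
```

    Iterating the stage, with restorative stages in between, would approximate the multi-stage error distributions analyzed in the paper.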

    2022 roadmap on neuromorphic computing and engineering

    Modern computation based on the von Neumann architecture is now a mature, cutting-edge science. In the von Neumann architecture, processing and memory units are implemented as separate blocks that exchange data intensively and continuously. This data transfer is responsible for a large part of the power consumption. The next generation of computing technology is expected to solve problems at the exascale, with 10^18 calculations each second. Even though these future computers will be incredibly powerful, if they are based on von Neumann-type architectures they will consume between 20 and 30 megawatts of power and will not have intrinsic, physically built-in capabilities to learn or deal with complex data as our brain does. These needs can be addressed by neuromorphic computing systems, which are inspired by the biological concepts of the human brain. This new generation of computers has the potential to be used for the storage and processing of large amounts of digital information with much lower power consumption than conventional processors. Among their potential future applications, an important niche is moving control from data centers to edge devices. The aim of this roadmap is to present a snapshot of the present state of neuromorphic technology and provide an opinion on the challenges and opportunities that the future holds in the major areas of neuromorphic technology, namely materials, devices, neuromorphic circuits, neuromorphic algorithms, applications, and ethics. The roadmap is a collection of perspectives in which leading researchers in the neuromorphic community provide their own view of the current state and the future challenges for each research area. We hope that this roadmap will be a useful resource, providing a concise yet comprehensive introduction for readers outside this field and for those who are just entering it, as well as future perspectives for those who are well established in the neuromorphic computing community.

    Miniaturizing High Throughput Droplet Assays For Ultrasensitive Molecular Detection On A Portable Platform

    Digital droplet assays, in which biological samples are compartmentalized into millions of femtoliter-volume droplets and interrogated individually, have generated enormous enthusiasm for their ability to detect biomarkers with single-molecule sensitivity. These assays have untapped potential for point-of-care diagnostics but are mainly confined to laboratory settings due to the instrumentation necessary to serially generate, control, and measure millions of compartments. To address this challenge, we developed an optofluidic platform that miniaturizes digital assays into a mobile format by parallelizing their operation. This technology has three key innovations: (1) the integration and parallel operation of a hundred droplet generators on a single chip, operating >100x faster than a single droplet generator; (2) fluorescence detection of droplets >100x faster than conventional in-flow detection, using time-domain encoded mobile-phone imaging; and (3) the integration of on-chip delay lines and sample processing to allow serum-to-answer device operation. By using this time-domain modulation together with cloud computing, we overcome the low frame rate of digital imaging and achieve throughputs of one million droplets per second. To demonstrate the power of this approach, we performed a duplex digital enzyme-linked immunosorbent assay (ELISA) in serum, showing a 1000x improvement over standard ELISA and matching the performance of the existing laboratory-based gold-standard digital ELISA system. This work has broad potential for ultrasensitive, highly multiplexed detection in a mobile format. Building on our initial demonstration, we explored the following: (i) we demonstrated that the platform can be extended to >100x multiplexing by using time-domain encoded light sources to detect color-coded beads that each correspond to a unique assay, (ii) we demonstrated that the platform can be extended to the detection of nucleic acids by implementing the polymerase chain reaction, and (iii) we demonstrated that sensitivity can be improved with a nanoparticle-enhanced ELISA. Clinical applications can be expanded to measure numerous biomarkers simultaneously, such as surface markers, proteins, and nucleic acids. Ultimately, by building a robust device suitable for low-cost implementation with ultrasensitive capabilities, this platform can be used as a tool to quantify numerous medical conditions and help physicians choose optimal treatment strategies to enable personalized medicine in a cost-effective manner.
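
    Digital assays of this kind quantify concentration from the fraction of positive droplets via Poisson statistics. The short Python sketch below shows that standard calculation; the droplet volume and counts are hypothetical values chosen for illustration, not figures from this work.

```python
import numpy as np

def droplet_concentration(n_positive, n_total, droplet_volume_fl):
    """Poisson ("digital") quantification: the mean number of target
    molecules per droplet is lambda = -ln(1 - fraction_positive)."""
    frac_positive = n_positive / n_total
    lam = -np.log(1.0 - frac_positive)          # molecules per droplet
    volume_l = droplet_volume_fl * 1e-15        # femtolitres -> litres
    molar = lam / (volume_l * 6.022e23)         # divide by Avogadro's number
    return lam, molar

# Hypothetical readout: 4,000 positive droplets out of one million,
# with 50 fL droplets (values chosen only for illustration).
lam, molar = droplet_concentration(4_000, 1_000_000, droplet_volume_fl=50.0)
print(f"mean occupancy = {lam:.2e} molecules/droplet, concentration = {molar:.2e} M")
```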

    Energy efficient hybrid computing systems using spin devices

    Emerging spin devices such as magnetic tunnel junctions (MTJs), spin valves, and domain wall magnets (DWMs) have opened new avenues for spin-based logic design. This work explored potential computing applications that can exploit such devices for higher energy efficiency and performance. The proposed applications involve hybrid design schemes, in which charge-based devices supplement the spin devices to gain large benefits at the system level. As an example, lateral spin valves (LSVs) involve switching of nanomagnets using spin-polarized current injection through a metallic channel such as Cu. Such spin-torque-based devices possess several interesting properties that can be exploited for ultra-low-power computation. The analog characteristics of spin currents facilitate non-Boolean computation such as majority evaluation, which can be used to model a neuron. The magneto-metallic neurons can operate at an ultra-low terminal voltage of ~20 mV, resulting in small computation power. Moreover, since nanomagnets inherently act as memory elements, these devices can facilitate the integration of logic and memory in interesting ways. The spin-based neurons can be integrated with CMOS and other emerging devices, leading to different classes of neuromorphic/non-von Neumann architectures. The spin-based designs involve 'mixed-mode' processing and hence can provide very compact and ultra-low-energy solutions for complex computation blocks, both digital and analog. Such low-power hybrid designs can be suitable for various data-processing applications like cognitive computing, associative memory, and current-mode on-chip global interconnects. Simulation results for these applications, based on a device-circuit co-simulation framework, predict more than ~100x improvement in computation energy compared to state-of-the-art CMOS designs, for optimal spin-device parameters.
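
    The core primitive described above, a current-mode majority (threshold) decision, can be captured by a simple behavioral model. The Python sketch below is such a toy illustration; the weights and inputs are assumptions, and it is not a device-level model of the lateral spin valves studied in the thesis.

```python
import numpy as np

def majority_neuron(inputs, weights, threshold=0.0):
    """Behavioral sketch of a current-mode majority neuron: weighted spin
    currents sum on a shared node and the output nanomagnet settles into
    one of two states depending on the sign of the net current."""
    net_current = float(np.dot(weights, inputs))
    return 1 if net_current > threshold else -1

# Three-input majority gate: equal weights, bipolar (+1 / -1) inputs.
weights = np.ones(3)
for x in [(+1, +1, -1), (-1, +1, -1), (-1, -1, -1), (+1, +1, +1)]:
    print(x, "->", majority_neuron(np.array(x), weights))
```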

    Joint Communication and Positioning based on Channel Estimation

    Mobile wireless communication systems have rapidly and globally become an integral part of everyday life and have brought forth the internet of things. With the evolution of mobile wireless communication systems, joint communication and positioning becomes increasingly important and enables a growing range of new applications. Humanity has already grown used to having access to multimedia data everywhere at every time and thereby employing all sorts of location-based services. Global navigation satellite systems can provide highly accurate positioning results whenever a line-of-sight path is available. Unfortunately, harsh physical environments are known to degrade the performance of existing systems. Therefore, ground-based systems can assist the position estimation gained by satellite systems. Determining positioning-relevant information from a unified signal structure designed for a ground-based joint communication and positioning system can either complement existing systems or substitute for them. Such a system framework promises to enhance the existing systems by enabling highly accurate and reliable positioning performance and increased coverage. Furthermore, the unified signal structure yields synergetic effects. In this thesis, I propose a channel-estimation-based joint communication and positioning system that employs a virtual training matrix. This matrix consists of a relatively small percentage of training symbols plus the detected communication data itself. A core semi-blind estimation approach iteratively includes the already detected data to accurately determine the positioning-relevant parameters, mutually exchanging information between the communication part and the positioning part of the receiver and thereby creating synergy. I propose a generalized system framework, suitable to be used in conjunction with various communication system techniques. The most critical positioning-relevant parameter, the time-of-arrival, is part of a physical multipath parameter vector. Estimating the time-of-arrival therefore means solving a global, non-linear, multi-dimensional optimization problem; more precisely, it means solving the so-called inverse problem. I thoroughly assess various problem formulations and variations thereof, including several different measurements and estimation algorithms. A significant challenge when solving the inverse problem to determine the positioning-relevant path parameters is imposed by realistic multipath channels. Most parameter estimation algorithms have proven to perform well in moderate multipath environments, and it is mathematically straightforward to optimize this performance in the sense that the number of observations has to exceed the number of parameters to be estimated. The typical parameter estimation problem, on the other hand, is based on channel estimates and assumes that so-called snapshot measurements are available. In the case of realistic channel models, however, the number of observations does not necessarily exceed the number of unknowns. In this thesis, I overcome this problem by proposing a method to reduce the problem dimensionality via joint model order selection and parameter estimation. Employing the approximated and estimated parameter covariance matrix inherently constrains the estimation problem’s model order selection to result in optimal parameter estimation performance and hence optimal positioning performance.
To compare these results with the optimally achievable solution, I introduce a focused order-related lower bound in this thesis. Additionally, I use soft information as a weighting matrix to enhance the performance of the positioning algorithm. To demonstrate the feasibility and the interplay of the proposed system components, I utilize a prototype system based on multi-layer interleave division multiple access. The proposed system framework and the investigated techniques can be employed for multiple existing systems or build the basis for future joint communication and positioning systems. The assessed estimation algorithms are transferable to all kinds of joint communication and positioning system designs. This thesis demonstrates their capability to, in principle, successfully cope with challenging estimation problems stemming from harsh physical environments.
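
    The central idea of the virtual training matrix, re-using detected data alongside the pilots to refine the channel estimate, can be illustrated with a deliberately simplified example. The Python sketch below uses a single-tap flat-fading channel and hard QPSK decisions; the block length, pilot count, and noise level are assumptions, and the thesis's multi-layer interleave division multiple access system is far more elaborate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy flat-fading block: y[n] = h * s[n] + noise, with a handful of pilots.
n_sym, n_pilot = 200, 8
h_true = 0.8 * np.exp(1j * 0.3)
symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], n_sym) / np.sqrt(2)
noise = 0.05 * (rng.normal(size=n_sym) + 1j * rng.normal(size=n_sym))
y = h_true * symbols + noise

def detect_qpsk(z):
    """Hard QPSK decisions used to turn detected data into virtual training."""
    return (np.sign(z.real) + 1j * np.sign(z.imag)) / np.sqrt(2)

# Step 1: least-squares channel estimate from the pilots alone.
p, yp = symbols[:n_pilot], y[:n_pilot]
h_hat = np.vdot(p, yp) / np.vdot(p, p)

# Step 2: semi-blind refinement -- detected data symbols join the pilots in
# an enlarged "virtual training" vector and the estimate is recomputed.
for _ in range(3):
    detected = detect_qpsk(y / h_hat)
    virtual_train = np.concatenate([p, detected[n_pilot:]])
    h_hat = np.vdot(virtual_train, y) / np.vdot(virtual_train, virtual_train)

print("true h:", h_true, "  estimate:", h_hat)
```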

    Circuits and Systems Advances in Near Threshold Computing

    Modern society is witnessing a sea change in ubiquitous computing, in which people have embraced computing systems as an indispensable part of day-to-day existence. The computation, storage, and communication abilities of smartphones, for example, have undergone monumental changes over the past decade. At the same time, the global emphasis on creating and sustaining green environments is leading to a rapid and ongoing proliferation of edge computing systems and applications. As a broad spectrum of healthcare, home, and transport applications shifts to the edge of the network, near-threshold computing (NTC) is emerging as one of the promising low-power computing platforms. An NTC device sets its supply voltage close to its threshold voltage, dramatically reducing its energy consumption. Despite showing substantial promise in terms of energy efficiency, NTC is yet to see wide-scale commercial adoption, because circuits and systems operating in the NTC regime suffer from several problems, including increased sensitivity to process variation, reliability issues, performance degradation, and security vulnerabilities. To realize its potential, we need designs, techniques, and solutions to overcome these challenges associated with NTC circuits and systems. The readers of this book will be able to familiarize themselves with recent advances in electronic systems, focusing on near-threshold computing.
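
    A back-of-the-envelope calculation illustrates why near-threshold operation is attractive: dynamic switching energy scales roughly with C·Vdd^2, so lowering the supply from its nominal value toward the threshold voltage cuts energy per operation quadratically, at the cost of speed. The numbers in the Python sketch below are hypothetical and are not taken from the book.

```python
# Dynamic switching energy scales roughly as E = C_eff * Vdd^2.
# The effective capacitance and supply voltages below are hypothetical,
# chosen only to show the quadratic energy reduction near threshold.
c_eff = 1e-9                                  # effective switched capacitance, F
for vdd in (1.0, 0.8, 0.6, 0.4):              # nominal supply down to near-threshold, V
    energy_nj = c_eff * vdd ** 2 * 1e9
    print(f"Vdd = {vdd:.1f} V  ->  ~{energy_nj:.2f} nJ per switching cycle")
```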

    DICOM for EIT

    With EIT starting to be used in routine clinical practice [1], it is important that clinically relevant information is portable between hospital data management systems. DICOM formats are widely used clinically and cover many imaging modalities, though not specifically EIT. We describe how existing DICOM specifications can be repurposed as an interim solution and as a basis from which a consensus EIT DICOM ‘Supplement’ (an extension to the standard) can be written.
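
    As one hedged illustration of the repurposing idea, the Python sketch below writes a single reconstructed EIT frame into a Secondary Capture DICOM object using the pydicom library. The choice of Secondary Capture, the modality code, and the patient and series attributes are assumptions made for the demo; the abstract does not state which DICOM objects the authors actually repurpose.

```python
import numpy as np
from pydicom.dataset import FileDataset, FileMetaDataset
from pydicom.uid import ExplicitVRLittleEndian, generate_uid

# One hypothetical reconstructed EIT frame, rescaled to 16-bit greyscale.
frame = (np.random.rand(64, 64) * 65535).astype(np.uint16)

file_meta = FileMetaDataset()
file_meta.MediaStorageSOPClassUID = "1.2.840.10008.5.1.4.1.1.7"   # Secondary Capture
file_meta.MediaStorageSOPInstanceUID = generate_uid()
file_meta.TransferSyntaxUID = ExplicitVRLittleEndian

ds = FileDataset("eit_frame.dcm", {}, file_meta=file_meta, preamble=b"\0" * 128)
ds.SOPClassUID = file_meta.MediaStorageSOPClassUID
ds.SOPInstanceUID = file_meta.MediaStorageSOPInstanceUID
ds.Modality = "OT"                          # "other": no EIT-specific modality exists
ds.SeriesDescription = "EIT conductivity change image"   # hypothetical description
ds.PatientName = "Anon^EIT"
ds.PatientID = "EIT0001"

ds.Rows, ds.Columns = frame.shape
ds.SamplesPerPixel = 1
ds.PhotometricInterpretation = "MONOCHROME2"
ds.BitsAllocated = 16
ds.BitsStored = 16
ds.HighBit = 15
ds.PixelRepresentation = 0                  # unsigned integers
ds.PixelData = frame.tobytes()

ds.save_as("eit_frame.dcm")
```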

    Estimation of thorax shape for forward modelling in lungs EIT

    Thorax models for pre-term babies are developed from CT scans of newborns, and their effect on image reconstruction is evaluated in comparison with other available models.

    Rapid generation of subject-specific thorax forward models

    For real-time monitoring of lung function using accurate patient geometry, shape information needs to be acquired and a forward model generated rapidly. This paper shows that warping a cylindrical model to an acquired shape results in meshes of acceptable quality in terms of stretch and aspect ratio.
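
    A two-dimensional toy version of the warping step can be sketched in a few lines of Python: nodes of a disc mesh (a cylinder cross-section) are scaled radially onto a thorax-like contour, and a simple edge-length aspect ratio is computed before and after. The contour, node placement, and quality metric are assumptions for illustration and are not the paper's actual pipeline.

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)

# Nodes of a rough disc mesh (the cross-section of a cylindrical model).
theta = rng.uniform(0.0, 2.0 * np.pi, 400)
radius = np.sqrt(rng.uniform(0.0, 1.0, 400))        # sqrt gives uniform area density
nodes = np.column_stack([radius * np.cos(theta), radius * np.sin(theta)])
mesh = Delaunay(nodes)

def thorax_radius(angle):
    """Hypothetical smooth thorax-like contour, wider than it is deep."""
    return 1.0 + 0.3 * np.cos(2.0 * angle)

# Warp every node radially so the unit circle maps onto the target contour.
angles = np.arctan2(nodes[:, 1], nodes[:, 0])
warped = nodes * thorax_radius(angles)[:, None]

def worst_aspect_ratio(points, simplices):
    """Longest-to-shortest edge ratio per triangle; 1 is ideal."""
    p = points[simplices]                            # shape (n_tri, 3, 2)
    edges = np.linalg.norm(p - np.roll(p, 1, axis=1), axis=2)
    return (edges.max(axis=1) / edges.min(axis=1)).max()

print("worst aspect ratio before warp:", worst_aspect_ratio(nodes, mesh.simplices))
print("worst aspect ratio after warp: ", worst_aspect_ratio(warped, mesh.simplices))
```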