Optimization of state assignment in a finite state machine: evaluation of a simulated annealing approach
In this research, we investigate the application of the Simulated Annealing algorithm to the state assignment problem in finite state machines. State assignment is a classic NP-complete problem in digital systems design and directly impacts area and power costs as well as design time. The solutions found in the literature use population-based methods that consume additional computing resources. The Simulated Annealing algorithm was chosen because it does not maintain a population while seeking a solution. The objective of this research is therefore to evaluate the impact on solution quality of using the Simulated Annealing approach. The proposed solution is evaluated on the LGSynth89 benchmark and compared with other state-of-the-art approaches. The experimental simulations show an average loss in solution quality of 11% alongside an average gain in processing performance of 86%. The results indicate that it is possible to incur small quality losses in exchange for a significant increase in processing performance.
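The annealing loop the abstract describes can be sketched as follows. The cost model (transition frequency times Hamming distance between the codes assigned to adjacent states) and all names are illustrative assumptions, not the paper's actual implementation.

```python
import math
import random

# Hypothetical toy instance: 4 states with weighted transitions between them.
# States that transition often should receive codes with small Hamming distance.
TRANSITIONS = {(0, 1): 5, (1, 2): 3, (2, 3): 4, (3, 0): 2}
CODES = [0b00, 0b01, 0b10, 0b11]  # 2-bit codes to assign to the 4 states


def cost(assignment):
    # Sum over transitions of weight * Hamming distance between assigned codes.
    return sum(w * bin(assignment[a] ^ assignment[b]).count("1")
               for (a, b), w in TRANSITIONS.items())


def anneal(seed=0, t0=5.0, cooling=0.95, steps=500):
    rng = random.Random(seed)
    cur = CODES[:]                       # current assignment: state i -> cur[i]
    cur_cost = cost(cur)
    best, best_cost = cur[:], cur_cost
    t = t0
    for _ in range(steps):
        i, j = rng.sample(range(len(cur)), 2)
        cand = cur[:]
        cand[i], cand[j] = cand[j], cand[i]   # neighbor move: swap two codes
        delta = cost(cand) - cur_cost
        # Accept improvements always; accept worsenings with Boltzmann probability.
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            cur, cur_cost = cand, cur_cost + delta
            if cur_cost < best_cost:
                best, best_cost = cur[:], cur_cost
        t = max(t * cooling, 1e-9)            # geometric cooling schedule
    return best, best_cost
```

Unlike the population-based methods the abstract contrasts against, this loop keeps only one candidate assignment in memory at a time.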
CMOS Ising Machines with Coupled Bistable Nodes
Ising machines use physics to naturally guide a dynamical system towards an optimal state, which can be read out as a heuristic solution to a combinatorial optimization problem. Designs that use nature as a computing mechanism can lead to higher performance and/or lower operating costs; quantum annealers are a prominent example of such efforts. However, existing Ising machines are generally bulky and energy intensive. Those drawbacks might translate into intrinsic advantages at some larger scale in the future, but for now, integrated electronic designs allow more immediate applications. We propose one such design that uses bistable nodes coupled with programmable, variable strengths. The design is fully CMOS compatible for on-chip applications and demonstrates competitive solution quality with significantly superior execution time and energy.
Comment: 11 pages, 12 figures, 2 tables, 5 sections
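A minimal sketch of the principle behind coupled bistable nodes: each spin repeatedly snaps to the state favored by its weighted neighbors, so the system settles into a local minimum of the Ising energy. The 4-node coupling matrix `J` and all names are illustrative assumptions, not the paper's circuit design.

```python
import random

# Hypothetical 4-spin instance; J[i][j] > 0 couples spins i and j to align,
# J[i][j] < 0 couples them to anti-align.
J = [[0, 1, -1, 0],
     [1, 0, 1, -1],
     [-1, 1, 0, 1],
     [0, -1, 1, 0]]


def energy(s):
    # Ising energy H = -sum_{i<j} J_ij * s_i * s_j
    n = len(s)
    return -sum(J[i][j] * s[i] * s[j] for i in range(n) for j in range(i + 1, n))


def relax(seed=0, sweeps=20):
    rng = random.Random(seed)
    s = [rng.choice([-1, 1]) for _ in range(len(J))]  # random initial spins
    for _ in range(sweeps):
        for i in range(len(s)):
            # Local field seen by node i from its coupled neighbors.
            field = sum(J[i][j] * s[j] for j in range(len(s)) if j != i)
            if field != 0:
                s[i] = 1 if field > 0 else -1  # bistable node snaps to stable state
    return s, energy(s)
```

Each snap can only lower (never raise) the energy, so the final state is stable against any single-spin flip; the hardware analogue lets circuit dynamics perform this descent in parallel.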
Approximate In-Memory Computing on ReRAMs
Computing systems have seen tremendous growth over the past few decades in their capabilities, efficiency, and deployment use cases. This growth has been driven by progress in lithography techniques and improvements in synthesis tools, architectures, and power management. However, there is a growing disparity between computing power and the demands on modern computing systems. The standard von Neumann architecture has separate data storage and data processing locations and therefore suffers from a memory-processor communication bottleneck, commonly referred to as the 'memory wall'. The relatively slower progress in memory technology compared with processing units has continued to exacerbate the memory wall problem. As feature sizes in the CMOS logic family shrink further, quantum tunneling effects become more prominent. Simultaneously, chip transistor density is already so high that all transistors cannot be powered up at the same time without violating temperature constraints, a phenomenon known as dark silicon. Coupled with this, leakage currents increase at smaller feature sizes, resulting in a breakdown of Dennard scaling. These challenges cannot be met without fundamental changes to current computing paradigms. One viable solution is in-memory computing, where computing and storage are performed alongside each other. A number of emerging memory fabrics, such as ReRAMs, STT-RAMs, and PCM RAMs, are capable of performing logic in-memory. ReRAMs offer high storage density, extremely low power consumption, and a low cost of fabrication; these advantages stem from the simple nature of their basic constituent elements, which allows nano-scale fabrication. We use flow-based computing on ReRAM crossbars, which exploits the natural sneak paths in those crossbars.
Another concurrent development in computing is the maturation of domains that are error resilient while being highly data and power intensive. These include machine learning, pattern recognition, computer vision, image processing, and networking. This shift in the nature of computing workloads has given weight to the idea of approximate computing, in which device efficiency is improved by sacrificing tolerable amounts of accuracy in computation. We present a mathematically rigorous foundation for the synthesis of approximate logic and its mapping to ReRAM crossbars using search-based and graphical methods.
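Flow-based computing, as the abstract uses the term, can be pictured as follows: each conducting (low-resistance) crosspoint device links its row wire to its column wire, and a Boolean function evaluates to 1 exactly when a sneak path connects a source row to a destination row. The sketch below follows that interpretation; the grid encoding, literal convention, and all names are assumptions for illustration, not the dissertation's actual synthesis method.

```python
from collections import deque

def evaluate(grid, inputs, src_row=0, dst_row=None):
    """Return True iff a sneak path of conducting devices links src_row to dst_row.

    grid[i][j] names the literal stored at crosspoint (row i, column j);
    "1" is an always-ON device, "0" always-OFF, other names are looked up
    in the `inputs` dict. Rows and columns form a bipartite graph whose
    edges are the conducting devices; BFS checks path existence.
    """
    rows, cols = len(grid), len(grid[0])
    if dst_row is None:
        dst_row = rows - 1
    on = lambda lit: inputs.get(lit, lit == "1")  # device conducts iff literal is 1
    seen, q = {("r", src_row)}, deque([("r", src_row)])
    while q:
        kind, k = q.popleft()
        if kind == "r":   # from row k, cross every conducting device to its column
            nxt = (("c", j) for j in range(cols) if on(grid[k][j]))
        else:             # from column k, cross back to rows
            nxt = (("r", i) for i in range(rows) if on(grid[i][k]))
        for node in nxt:
            if node not in seen:
                seen.add(node)
                q.append(node)
    return ("r", dst_row) in seen

# Toy crossbars: a 2x1 grid forces the path through both devices (AND),
# while a 2x2 grid with always-ON bottom devices offers two parallel paths (OR).
AND_GRID = [["a"], ["b"]]
OR_GRID = [["a", "b"], ["1", "1"]]
```

Under this picture, approximation corresponds to choosing a crossbar whose path-existence function only approximately matches the target logic, trading accuracy for a smaller array.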
Quantum Cognitive Modeling: New Applications and Systems Research Directions
Expanding the benefits of quantum computing to new domains remains a challenging task. Quantum applications are concentrated in only a few domains, and, driven by these few, the quantum stack is limited in supporting the development or execution demands of new applications. In this work, we address this problem by identifying both a new application domain and new directions to shape the quantum stack. We introduce computational cognitive models as a new class of quantum applications. Such models have been crucial in understanding and replicating human intelligence, and our work connects them with quantum computing for the first time. Next, we analyze these applications to make the case for redesigning the quantum stack for programmability and better performance. Among the research opportunities we uncover, we study two simple ideas of quantum cloud scheduling using data from gate-based and annealing-based quantum computers. On the respective systems, these ideas can enable parallel execution and improve throughput. Our work is a contribution towards realizing versatile quantum systems that can broaden the impact of quantum computing on science and society.