Configurable multiple value encoders using semi floating-gate
This thesis presents a new multiple-valued encoder with a re-configurable radix. The proposed circuits use serial cyclic D/A conversion and semi floating-gate (SFG) inverters for a compact design and a high functional capacity per device. A re-configurable radix is not supported by existing SFG-inverter-based multiple-valued encoders, which rely on parallel binary-weighted D/A conversion. The study covers least significant bit-first (LSB), least significant bit-first with alternate bit inversion (LSB ABI) and most significant bit-first (MSB) digital input codes.
The serial cyclic D/A converters with LSB and LSB ABI input codes are implemented in a double-poly 0.35 µm AMS process. Measured results are provided and analyzed using standard static D/A converter performance measures. The circuits are tested using the practical radices 4, 8 and 16. Experimental results demonstrate that serial cyclic D/A converters using SFG inverters are feasible. Compared to related work on cyclic D/A conversion, the proposed circuits require both fewer devices and less die area.
Several new techniques are identified for extending the resolution beyond radix 4, 8 and 16 MVL applications. These include an error-correction algorithm called least significant bit-first with alternate bit inversion (LSB ABI), a sample-and-hold clock scheme and a dual data-rate (DDR) mode of D/A converter operation. The techniques are implemented on a chip and measured results are provided.
The thesis also includes simulation work on several new SFG-based circuits: a ternary serial D/A converter, an MSB-first serial D/A converter and a multiple-valued frequency divider with a re-configurable modulus.
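As a behavioural illustration of serial cyclic conversion (a plain software model, not the SFG circuitry itself): each clock cycle adds the next digit's contribution to the running value and divides by the radix, so an N-digit word converges to Vref · value / radix^N. The function name and parameter values below are illustrative.

```python
def cyclic_dac(digits, radix=4, vref=1.0):
    """Behavioural model of LSB-first serial cyclic D/A conversion.

    Each cycle the running voltage is summed with the next digit's
    contribution and divided by the radix, so after N cycles the
    output equals vref * value / radix**N.
    """
    v = 0.0
    for d in digits:          # digits supplied least-significant first
        v = (v + d * vref) / radix
    return v

# digits (3, 2, 1) LSB-first in radix 4 encode 1*16 + 2*4 + 3 = 27,
# so the output settles at 27/64 of vref
print(cyclic_dac([3, 2, 1], radix=4))  # 0.421875
```

The same loop with `radix=2` models an ordinary binary cyclic (algorithmic) DAC, which is why a single structure supports a re-configurable radix.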
An Optimal Gate Design for the Synthesis of Ternary Logic Circuits
Department of Electrical Engineering

Over the last few decades, CMOS-based digital circuits have been steadily developed. However, because of power density limits, device scaling may soon come to an end, and new approaches to circuit design are required. Multi-valued logic (MVL) is one such approach: it increases the radix of computation to lower the complexity of the circuit. For MVL implementation, ternary logic circuit designs have been proposed previously, though they have not shown advantages over binary logic because of unoptimized synthesis techniques.
In this thesis, we propose a methodology for designing ternary gates by modeling the pull-up and pull-down operations of the gates. Our methodology makes it possible to synthesize ternary gates with a minimum number of transistors. HSPICE simulation results show that our ternary designs achieve significant power-delay-product reductions: 49% in the ternary full adder and 62% in the ternary multiplier compared to the existing methodology. We have also compared the number of transistors in CMOS-based binary logic circuits and ternary device-based logic circuits.
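For readers unfamiliar with ternary primitives, the gates in question can be summarised behaviourally: in unbalanced ternary {0, 1, 2}, the standard inverter maps x to 2 − x, while AND and OR generalise to min and max. The sketch below is a plain truth-table model, not the transistor-level pull-up/pull-down design proposed in the thesis.

```python
def sti(x):
    """Standard ternary inverter over {0, 1, 2}: x -> 2 - x."""
    return 2 - x

def tand(a, b):
    """Ternary AND generalises to min."""
    return min(a, b)

def tor(a, b):
    """Ternary OR generalises to max."""
    return max(a, b)

def ternary_full_adder(a, b, cin):
    """One-trit full adder: sum digit and carry in base 3."""
    s = a + b + cin
    return s % 3, s // 3

print(ternary_full_adder(2, 2, 1))  # (2, 1): 2 + 2 + 1 = 5 = 1*3 + 2
```

The full adder and multiplier compared in the thesis are built from compositions of gates with exactly these truth tables.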
We also propose a methodology for using ternary values effectively in sequential logic. The proposed ternary D flip-flop is designed to operate on all four edges of a ternary clock signal. A quad-edge-triggered ternary D flip-flop (QETDFF) is designed with static gates using CNTFETs. From HSPICE simulation results, we have confirmed that the power-delay product (PDP) of the QETDFF is reduced by 82.31% compared to the state-of-the-art ternary D flip-flop. We further synthesize a ternary serial adder using the QETDFF; its PDP is reduced by 98.23% compared to the state-of-the-art design.
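Behaviourally, quad-edge triggering means the flip-flop latches on every transition of a ternary clock, whose one period 0→1→2→1→0 contains four edges. A minimal software model of that timing behaviour (not the CNTFET circuit) might look like this:

```python
class QuadEdgeTernaryDFF:
    """Behavioural model: the output latches D on every transition
    (any of the four edges) of a ternary clock over {0, 1, 2}."""

    def __init__(self):
        self.q = 0
        self._prev = 0

    def tick(self, d, clk):
        if clk != self._prev:   # 0->1, 1->2, 2->1 and 1->0 all trigger
            self.q = d
        self._prev = clk
        return self.q

ff = QuadEdgeTernaryDFF()
outputs = [ff.tick(d, c) for d, c in
           [(1, 0), (2, 1), (0, 2), (1, 1), (2, 0)]]
print(outputs)  # [0, 2, 0, 1, 2]
```

Latching four times per clock period is what lets the serial adder built from this flip-flop process data at a multiple of the clock rate, which is the source of the reported PDP gains.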
Integrated Circuits/Microchips
With the world marching inexorably towards the fourth industrial revolution (IR 4.0), we now embrace lives with artificial intelligence (AI), the Internet of Things (IoT), virtual reality (VR) and 5G technology. Wherever we are, whatever we are doing, there are electronic devices that we rely on indispensably. While some of these technologies, such as those fueled by smart, autonomous systems, seem futuristic, others have existed for quite a while. These devices range from simple home appliances and entertainment media to complex aeronautical instruments. Clearly, the daily lives of mankind today are interwoven seamlessly with electronics. Surprising as it may seem, the cornerstone that empowers these electronic devices is nothing more than a diminutive semiconductor block. More colloquially referred to as a Very-Large-Scale-Integration (VLSI) chip, an integrated circuit (IC) chip or simply a microchip, this semiconductor block, approximately the size of a grain of rice, is composed of millions to billions of transistors. The transistors are interconnected in such a way that electrical circuitry for particular applications can be realized. Some of these chips serve specific permanent applications and are known as Application-Specific Integrated Circuits (ASICs), while others are computing processors that can be programmed for diverse applications. A computing processor, together with its supporting hardware and user interfaces, is known as an embedded system. In this book, a variety of topics related to microchips are extensively illustrated, encompassing the physics of the microchip device as well as its design methods and applications.
Exploitation of Unintentional Information Leakage from Integrated Circuits
Unintentional electromagnetic emissions are used to recognize or verify the identity of a unique integrated circuit (IC) based on fabrication process-induced variations, in a manner analogous to biometric human identification. The effectiveness of the technique is demonstrated through an extensive empirical study, with results indicating correct device identification success rates of greater than 99.5%, and average verification equal error rates (EERs) of less than 0.05% for 40 near-identical devices. The proposed approach is suitable for security applications involving commodity commercial ICs, with substantial cost and scalability advantages over existing approaches. A systematic leakage mapping methodology is also proposed to comprehensively assess the information leakage of arbitrary block cipher implementations, and to quantitatively bound an arbitrary implementation's resistance to the general class of differential side-channel analysis techniques. The framework is demonstrated using the well-known Hamming Weight and Hamming Distance leakage models, and its effectiveness is demonstrated through the empirical assessment of two typical unprotected implementations of the Advanced Encryption Standard. The assessment results are empirically validated against correlation-based differential power and electromagnetic analysis attacks.
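To make the leakage models concrete: under a Hamming-weight model, the side-channel signal at a given instant is assumed proportional to the number of set bits in an intermediate value, and a correlation attack ranks key guesses by how well that prediction matches the measured traces. The sketch below simulates this on the 4-bit PRESENT S-box, used here purely as a small, convenient nonlinear target (the thesis assesses AES); trace counts and noise levels are illustrative.

```python
import numpy as np

# 4-bit PRESENT S-box, a stand-in nonlinear target for the demo
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def hw(x):
    """Hamming weight: number of set bits."""
    return bin(int(x)).count("1")

def cpa(plaintexts, traces):
    """Rank 4-bit key guesses by |correlation| under a HW model."""
    scores = [abs(np.corrcoef([hw(SBOX[int(p) ^ k]) for p in plaintexts],
                              traces)[0, 1]) for k in range(16)]
    return int(np.argmax(scores))

rng = np.random.default_rng(1)
key = 0x9
pts = rng.integers(0, 16, 300)
# simulated leakage: HW of the S-box output plus Gaussian noise
traces = np.array([hw(SBOX[p ^ key]) for p in pts]) + rng.normal(0, 0.5, 300)
print(hex(cpa(pts, traces)))  # recovers 0x9
```

The leakage-mapping methodology described above essentially asks, for every intermediate of the cipher, how large this correlation can be made, and uses that to bound resistance against the whole attack class.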
Silicon Photomultiplier Readout Electronics for Imaging Calorimetry Applications
Experiments at future linear colliders will reach an unprecedented measurement resolution for standard-model processes and the search for new physics. In order to exploit the full potential of the clean initial state in electron-positron collisions, a jet energy resolution of 3-4% is required, which is not achievable with classical calorimetry approaches. The detectors will be optimized for the use of particle-flow algorithms to achieve the required energy resolution, resulting in the need for highly granular (imaging) and compact calorimetry systems.
This work covers the development of specialized readout electronics for scintillator-based calorimeters read out by silicon photomultipliers. The readout electronics are required to provide a precise charge-measurement capability over a large dynamic range and to be fully integrated and self-triggered. In order to allow a calibration of the calorimeters, the readout electronics must be capable of measuring the detector gain from the response to signals at the level of a few photons. Noise contributions affecting the resolution of the gain calibration are discussed and used for the circuit optimization. Due to the high channel density, the power consumption of the front-end electronics is extremely limited, requiring power-pulsing techniques to minimize it.
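The gain measurement exploits the single-photon resolution of SiPMs: at low light intensity the charge spectrum shows equidistant photo-electron peaks, and the gain is the spacing between adjacent peaks. The sketch below illustrates the idea on simulated data; the binning, the naive peak search and all parameter values are illustrative choices, and a real calibration would fit a multi-Gaussian model to the spectrum.

```python
import numpy as np

def sipm_gain(charges, bins=200, window=5):
    """Estimate SiPM gain as the median spacing between photo-electron
    peaks in a low-light charge spectrum (illustrative sketch)."""
    hist, edges = np.histogram(charges, bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2
    thresh = 0.1 * hist.max()
    # a bin is a peak if it is the local maximum of its +-window bins
    peaks = [centers[i] for i in range(window, bins - window)
             if hist[i] == hist[i - window:i + window + 1].max()
             and hist[i] > thresh]
    return float(np.median(np.diff(peaks)))

rng = np.random.default_rng(0)
npe = rng.poisson(2.0, 20000)                      # photo-electrons per event
charges = npe * 10.0 + rng.normal(0, 1.0, 20000)   # true gain = 10 (arb. units)
print(sipm_gain(charges))  # close to 10
```

This is exactly why the readout electronics must resolve few-photon signals: if electronic noise smears the peaks into each other, the spacing, and hence the gain, can no longer be extracted.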
NASA Tech Briefs, September 1993
Topics include: Microelectronics; Electronic Components and Circuits; Electronic Systems; Physical Sciences; Materials; Computer Programs; Mechanics; Machinery/Automation; Manufacturing/Fabrication; Mathematics and Information Sciences; Life Sciences; Books and Reports
Intrinsic Hardware Evolution on the Transistor Level
This thesis presents a novel approach to the automated synthesis of analog circuits. Evolutionary algorithms are used in conjunction with a fitness evaluation on a dedicated ASIC that serves as the analog substrate for the newly bred candidate solutions. The advantage of evaluating the candidate circuits directly in hardware is twofold. First, it may speed up the evolutionary algorithms, because hardware tests can usually be performed faster than simulations. Second, the evolved circuits are guaranteed to work on a real piece of silicon. The proposed approach is realized as a hardware evolution system consisting of an IBM-compatible general-purpose computer that hosts the evolutionary algorithm, an FPGA-based mixed-signal test board, and the analog substrate. The latter is designed as a Field-Programmable Transistor Array (FPTA) whose programmable transistor cells can be almost freely connected. The transistor cells can be configured to adopt one out of 75 different channel geometries. The chip was produced in a 0.6 µm CMOS process and provides ample means for the input and output of analog signals. The configuration is stored in SRAM cells embedded in the programmable transistor cells. The hardware evolution system is used for numerous evolution experiments targeted at a wide variety of circuit functionalities. These comprise logic gates, Gaussian function circuits, D/A converters, low- and highpass filters, tone discriminators, and comparators. The experimental results are thoroughly analyzed and discussed with respect to related work.
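The evolutionary loop described above can be sketched generically. In the thesis, evaluating fitness means downloading each genome (a cell configuration) to the FPTA and measuring the real circuit; in the stand-in below, a software objective (OneMax, counting ones) replaces the hardware in the loop, and the population size, mutation rate and genome length are all illustrative.

```python
import random

def evolve(fitness, genome_len=32, pop=20, gens=100, p_mut=0.05):
    """Generic elitist genetic algorithm; fitness() would normally
    configure the FPTA with the genome and measure the circuit."""
    population = [[random.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop // 2]            # keep the best half
        children = []
        for _ in range(pop - len(parents)):
            a, b = random.sample(parents, 2)       # one-point crossover
            cut = random.randrange(genome_len)
            child = a[:cut] + b[cut:]
            children.append([1 - g if random.random() < p_mut else g
                             for g in child])      # bit-flip mutation
        population = parents + children
    return max(population, key=fitness)

# stand-in objective: count of ones, instead of a measured circuit response
random.seed(0)
best = evolve(lambda g: sum(g))
print(sum(best))  # high fitness, approaching the 32-bit optimum
```

The twofold advantage claimed in the abstract maps directly onto this loop: the `fitness` call dominates the runtime, so fast hardware tests speed up the whole search, and because it measures real silicon, the returned champion is known to work outside simulation.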
Dead in the long room
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University London. The thesis comprises a crime novel, Dead in the Long Room, and an accompanying extended critical essay. Dead in the Long Room is a novel written in homage to the Golden Age of crime and the many writers that genre represents. It embodies Gulddal & Rolls' (2016) notion of the creative-critical nexus - the idea that literary texts do not exist as distinct from critical endeavour, but are in themselves exercises in critical response. Dead in the Long Room is therefore to be conceived as what I have termed elsewhere (Green 2021) 'enacted criticism'. The critical essay that follows the novel offers an extended critical commentary on and engagement with my processes as a writer in the production of the novel. Drawing on critical resources derived from, inter alia, Bruner, Barthes, Bakhtin and Todorov, it engages in sustained critical fashion with the genre of Golden Age detective fiction - a form that is eminently self-reflexive and that is frequently marked by a deep awareness of its own 'constructedness' - and the ways in which textual interaction is conceived. Taken together, the novel and the critical essay are considered as examples of the kinds of 'poetics' envisioned by Lasky (2013).
Comparison of the vocabularies of the Gregg shorthand dictionary and Horn-Peterson's basic vocabulary of business letters
This study is a comparative analysis of the vocabularies of Horn and Peterson's The Basic Vocabulary of Business Letters and the Gregg Shorthand Dictionary. Both books purport to present a list of the words most frequently encountered by stenographers and students of shorthand. The Basic Vocabulary of Business Letters, published "in answer to repeated requests for data on the words appearing most frequently in business letters," is a frequency list specific to business writing. Although the book carries the copyright date of 1943, the vocabulary was compiled much earlier. The listings constitute a part of the data used in the preparation of the 10,000 words making up the ranked frequency list compiled by Ernest Horn and staff and published in 1926 under the title A Basic Writing Vocabulary: 10,000 Words Most Commonly Used in Writing. The introduction to that publication gives credit to Miss Cora Crowder for the contribution of her Master's study at the University of Minnesota concerning words found in business writing. With additional data from supplementary sources, the complete listing represents twenty-six classes of business, as follows: 1. Miscellaneous; 2. Florists; 3. Automobile manufacturers and sales companies
Portable computers for real-time signal processing: EEG analysis as a case study
Recent advances in both digital hardware and digital signal theory have led to a rapid expansion in the importance and application of computer-aided measurement (CAM) techniques. Of these advances, the emergence of cheap microprocessor technology of sufficient processing power and speed to support some of the real-time signal-processing tasks encountered in CAM, is probably the single most important factor.
The Roving Slave Processor (RSP) represents a novel extension to the field of CAM. The RSP is a basic hardware unit comprising, in its simplest form, a central processor, a memory system and a means of input-output. By the use of a microprocessor, it is possible to reduce the complete system to very small dimensions, i.e. to construct a portable computer.
The unit is wholly dependent upon a master computer for the provision of all fundamental peripherals (e.g. teletype, reader-punch, etc.) and for all program preparation. To provide these facilities, a special-purpose interface has been constructed. The RSP is, however, capable of disconnected operation, and this is shown to lead to a very efficient and economical means by which to perform CAM operations. The design and development of two prototypes is described, with particular attention given to the choice of processor, the storage system and the link to the master computer. Some consideration has also been given to the problem of how the RSPs should be programmed, and a scheme based on a high-level calling system is detailed. Problems of reliability, both hardware and software, are also discussed.
An application of the RSP technique in the very demanding field of real-time EEG analysis is described, with particular attention given to the development of an automatic spike-detector algorithm. The occurrence of spikes in the EEG signal is of particular clinical significance, as it is indicative of the onset of an epileptic attack. Sharp waves, slow waves and all other abnormal behaviour have been omitted from this study.
A system based on a filtered first-order difference of the EEG signal has been developed and is described. Very encouraging results have been obtained, with a 95% success rate for the abnormal spikes occurring in a series of test records.
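A minimal modern sketch of that idea - smoothing the first-order difference of the signal and thresholding it against a robust noise estimate - is shown below. The filter length, threshold multiple and synthetic test signal are illustrative assumptions, not the thesis's actual parameters or data.

```python
import numpy as np

def detect_spikes(x, threshold=6.0, ksize=5):
    """Flag samples where the smoothed first-order difference of the
    signal exceeds a multiple of its robust noise spread."""
    d = np.diff(x)                          # first-order difference
    f = np.convolve(d, np.ones(ksize) / ksize, mode="same")  # smooth it
    sigma = np.median(np.abs(f)) / 0.6745   # robust estimate of noise spread
    return np.where(np.abs(f) > threshold * sigma)[0]

rng = np.random.default_rng(0)
eeg = rng.normal(0.0, 1.0, 2000)            # synthetic background activity
eeg[1000] += 25.0                           # one injected synthetic spike
hits = detect_spikes(eeg)
print(hits)  # indices clustered around sample 1000
```

The first-order difference is what makes the detector sensitive to the fast rise of a spike while remaining cheap enough for a small real-time processor - the constraint that motivated the RSP in the first place.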
Finally, techniques for the production of a miniature version of the RSP, which may be attached to and conveniently carried by a patient, are discussed.