Efficient Procedure Improving Precision of High Conditioned Matrices in Electronic Circuits Analysis
In this article, we propose several improvements to the SPICE simulator. The first is based on a functional implementation of device models; its advantages are demonstrated on the basic Shichman-Hodges model of the MOS transistor. The article starts with a description of the primary algorithms used in the SPICE simulator for the solution of circuits with nonlinear devices and identifies the problems that can occur during simulation. The main part of the article is devoted to an improved factorization procedure for the simulation of nonlinear electronic circuits. The primary intention of the proposed method is to increase the final precision of the result for ill-conditioned (high-condition-number) linear systems. The procedure is based on the use of iterative methods for the solution of both the nonlinear and the linear equations. Combining those methods in one iterative process reduces memory consumption during simulation and can significantly improve simulation precision. The procedure also allows evaluation with user-definable precision in a very efficient way.
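The combination of a direct factorization with an iterative correction loop is commonly realized as iterative refinement. The article's exact procedure is not reproduced in this abstract, but a minimal pure-Python sketch, with the residual evaluated in exact rational arithmetic via `fractions`, illustrates how such a scheme recovers precision on an ill-conditioned system such as a Hilbert matrix:

```python
from fractions import Fraction

def solve(A, b):
    """Gaussian elimination with partial pivoting; works for float or Fraction."""
    n = len(b)
    M = [list(row) + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0] * n
    for i in range(n - 1, -1, -1):
        s = M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / M[i][i]
    return x

def refine(A, b, x, iters=5):
    """Iterative refinement: residual in exact arithmetic, correction in doubles."""
    Af = [[Fraction(a) for a in row] for row in A]
    bf = [Fraction(v) for v in b]
    for _ in range(iters):
        r = [float(bf[i] - sum(Af[i][j] * Fraction(x[j]) for j in range(len(x))))
             for i in range(len(x))]
        d = solve(A, r)              # correction solved in ordinary doubles
        x = [xi + di for xi, di in zip(x, d)]
    return x

# Ill-conditioned example: 8x8 Hilbert matrix (condition number ~1.5e10)
n = 8
A = [[1.0 / (i + j + 1) for j in range(n)] for i in range(n)]
b = [sum(row) for row in A]
x0 = solve(A, b)                     # plain double-precision solve
x1 = refine(A, b, x0)                # refined solution
```

Because the residual is evaluated exactly, each refinement step removes most of the error committed by the double-precision factorization; only the residual vector needs extra precision, not a second copy of the matrix. (A real implementation would also reuse the factorization instead of calling `solve` again for each correction.)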
Algorithms for Analysis of Nonlinear High-Frequency Circuits
The most efficient simulation solvers use composite procedures that adaptively rearrange
computation algorithms to maximize simulation performance. Fast and stable processing
optimized for a given simulation problem is essential for any modern simulator. It is
characteristic of electronic circuit analysis that the complexity of a simulation is
determined by the circuit size and the device models used. The implementation of
electronic device models in the SPICE program follows a traditional approach that allows
fast computation, but further modification of a model can be problematic.
The first fundamental aim of this thesis is scalability of the simulation, based on an
adaptive internal solver that composes different algorithms according to the properties
of the simulation problem in order to maximize performance. For small circuits, simple
and straightforward methods prove faster: they rely on arithmetic operations without
unnecessary conditional jumps and memory rearrangements that cannot be effectively
optimized by a compiler. The size limit for small simulation problems depends on the
capabilities of the computing machine; on a present-day PC it is about fifty independent
voltage nodes, below which the inefficiency of the calculation procedure plays no role in
overall processor performance. The scalable solver must also handle the simulation of
large-scale circuits correctly, which requires an entirely different approach from
standard-size circuits. Properties of electronic circuit simulation that previously
played only a minor role suddenly gain significance for circuits with several thousand
voltage nodes. In those cases, iterative algorithms based on Krylov subspace methods
outperform standard direct methods. This thesis also proposes unique techniques for
indexing large-scale sparse matrix systems, with the primary purpose of reducing the
memory required to store sparse matrices during simulation.
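The thesis's specific indexing scheme is not described in this abstract. As a baseline for comparison, the standard compressed sparse row (CSR) layout already stores an n-by-n system in memory proportional to its nonzeros, and its matrix-vector product is the only operation a Krylov method such as conjugate gradients needs. A minimal sketch on the sparsity pattern of a resistor ladder:

```python
def to_csr(dense):
    """Compress a dense matrix into CSR arrays: values, column indices, row pointers."""
    vals, cols, ptrs = [], [], [0]
    for row in dense:
        for j, a in enumerate(row):
            if a != 0.0:
                vals.append(a)
                cols.append(j)
        ptrs.append(len(vals))
    return vals, cols, ptrs

def csr_matvec(csr, x):
    vals, cols, ptrs = csr
    return [sum(vals[k] * x[cols[k]] for k in range(ptrs[i], ptrs[i + 1]))
            for i in range(len(ptrs) - 1)]

def conjugate_gradient(csr, b, tol=1e-12, max_iter=100):
    """Krylov solver for symmetric positive definite systems; needs only matvec."""
    n = len(b)
    x = [0.0] * n
    r = list(b)                      # residual of the zero initial guess
    p = list(r)
    rs = sum(v * v for v in r)
    for _ in range(max_iter):
        Ap = csr_matvec(csr, p)
        alpha = rs / sum(pi * ai for pi, ai in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        rs_new = sum(v * v for v in r)
        if rs_new < tol * tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# 1-D Laplacian (tridiagonal, SPD): the nodal matrix of a uniform resistor ladder
n = 6
dense = [[2.0 if i == j else -1.0 if abs(i - j) == 1 else 0.0 for j in range(n)]
         for i in range(n)]
A = to_csr(dense)
b = [1.0] * n
x = conjugate_gradient(A, b)
```

For a circuit with thousands of voltage nodes the dense matrix would never be formed; the CSR arrays hold only the 3n - 2 nonzeros of this pattern, and the solver touches nothing else.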
The second fundamental aim of this thesis is automatic adaptivity of device model
definitions with respect to the current simulation state and settings. This principle,
denoted the Functional Chaining mechanism, is based on an automatically self-modifying
procedure that utilizes a functional computation layer during the simulation process. It
can significantly improve the performance of mapping circuit variables to device models,
and it allows autonomous redefinition of simulation algorithms during analysis with the
intention of reducing computation time. The core idea builds on programming principles
from functional programming languages; the thesis also presents possibilities for
reimplementation in modern object-oriented languages.
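The abstract does not define Functional Chaining in detail. The following sketch only illustrates the underlying functional-programming idea of composing a device evaluator from parameter-specialized closures, so that per-step dispatch and repeated parameter lookups are removed from the inner simulation loop; all names and models here are hypothetical simplifications, not the thesis's mechanism:

```python
from math import exp

def make_linear_resistor(g):
    # Closure specialized to a fixed conductance: no parameter lookup at eval time.
    return lambda v: g * v

def make_diode(i_s, vt):
    # Shockley diode current, specialized to saturation current and thermal voltage.
    return lambda v: i_s * (exp(v / vt) - 1.0)

def chain(*stages):
    """Compose per-device current contributions into a single branch evaluator."""
    def evaluate(v):
        return sum(stage(v) for stage in stages)
    return evaluate

# Build the evaluator once; a simulator could rebuild (re-specialize) the chain
# whenever the analysis type or operating conditions change, instead of testing
# those conditions on every Newton iteration.
branch_current = chain(make_linear_resistor(1e-3), make_diode(1e-14, 0.02585))
i = branch_current(0.6)   # total branch current at 0.6 V
```

The design point is that the chain is data: it can be inspected, reordered, or replaced mid-simulation, which is one plausible reading of an "automatically self-modifying procedure".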
The third fundamental aim of this thesis focuses on simulation accuracy and reliability.
Arbitrary-precision variable types can directly increase simulation accuracy, but on the
other hand they can significantly decrease simulation performance. The final chapters
provide several algorithms intended to improve simulation accuracy and suppress the
computation errors of floating-point data types.
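One widely used technique for suppressing floating-point rounding errors without paying the cost of arbitrary-precision types is compensated summation. The abstract does not say which algorithms the thesis uses; Neumaier's variant of Kahan summation gives the flavor:

```python
def neumaier_sum(values):
    """Compensated summation: carry the rounding error of each addition."""
    s = 0.0   # running sum
    c = 0.0   # running compensation for lost low-order bits
    for v in values:
        t = s + v
        if abs(s) >= abs(v):
            c += (s - t) + v   # low-order bits of v were rounded away
        else:
            c += (v - t) + s   # low-order bits of s were rounded away
        s = t
    return s + c
```

On the sequence [1e16, 1.0, -1e16] a naive left-to-right sum returns 0.0, because the 1.0 is entirely absorbed by rounding at magnitude 1e16, while the compensated sum returns the exact result 1.0.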
Theory and Practice of Computing with Excitable Dynamics
Reservoir computing (RC) is a promising paradigm for time series processing. In this paradigm, the desired output is computed by combining measurements of an excitable system that responds to time-dependent exogenous stimuli. The excitable system is called a reservoir and measurements of its state are combined using a readout layer to produce a target output. The power of RC is attributed to an emergent short-term memory in dynamical systems and has been analyzed mathematically for both linear and nonlinear dynamical systems. The theory of RC treats only the macroscopic properties of the reservoir, without reference to the underlying medium it is made of. As a result, RC is particularly attractive for building computational devices using emerging technologies whose structure is not exactly controllable, such as self-assembled nanoscale circuits. RC has lacked a formal framework for performance analysis and prediction that goes beyond memory properties. To provide such a framework, here a mathematical theory of memory and information processing in ordered and disordered linear dynamical systems is developed. This theory analyzes the optimal readout layer for a given task. The focus of the theory is a standard model of RC, the echo state network (ESN). An ESN consists of a fixed recurrent neural network that is driven by an external signal. The dynamics of the network are then combined linearly with readout weights to produce the desired output. The readout weights are calculated using linear regression.
Using an analysis of regression equations, the readout weights can be calculated using only the statistical properties of the reservoir dynamics, the input signal, and the desired output. The readout layer weights can be calculated from a priori knowledge of the desired function to be computed and the weight matrix of the reservoir. This formulation explicitly depends on the input weights, the reservoir weights, and the statistics of the target function. This formulation is used to bound the expected error of the system for a given target function. The effects of input-output correlation and complex network structure in the reservoir on the computational performance of the system have been mathematically characterized. Far from the chaotic regime, ordered linear networks exhibit a homogeneous decay of memory in different dimensions, which keeps the input history coherent. As disorder is introduced in the structure of the network, memory decay becomes inhomogeneous along different dimensions, causing decoherence in the input history and degradation in task-solving performance. Close to the chaotic regime, the ordered systems show loss of temporal information in the input history, and therefore inability to solve tasks. However, by introducing disorder, and therefore heterogeneous decay of memory, the temporal information of the input history is preserved and the task-solving performance is recovered. Thus for systems at the edge of chaos, disordered structure may enhance temporal information processing. Although the current framework only applies to linear systems, in principle it can be used to describe the properties of physical reservoir computing, e.g., photonic RC using short coherence-length light.
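The linear ESN setting analyzed in the abstract can be reproduced in a few lines: a fixed random recurrent network is driven by an input signal, and a ridge-regression readout is trained to recover a delayed copy of the input (a one-step memory task). This is a generic sketch under conventional ESN assumptions, not the authors' exact construction:

```python
import random

def make_esn(n, seed=0, scale=0.8):
    """Fixed random linear reservoir; rescaled so its infinity-norm is below 1,
    which guarantees fading memory (the echo state property)."""
    rng = random.Random(seed)
    W = [[rng.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
    norm = max(sum(abs(a) for a in row) for row in W)
    W = [[a * scale / norm for a in row] for row in W]
    w_in = [rng.uniform(-1, 1) for _ in range(n)]
    return W, w_in

def run_reservoir(W, w_in, u):
    """x(t) = W x(t-1) + w_in u(t): purely linear dynamics."""
    n = len(w_in)
    x = [0.0] * n
    states = []
    for ut in u:
        x = [sum(W[i][j] * x[j] for j in range(n)) + w_in[i] * ut
             for i in range(n)]
        states.append(list(x))
    return states

def train_readout(states, targets, ridge=1e-8):
    """Ridge regression: solve (S^T S + ridge*I) w = S^T y by elimination."""
    n = len(states[0])
    A = [[sum(s[i] * s[j] for s in states) + (ridge if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    b = [sum(s[i] * y for s, y in zip(states, targets)) for i in range(n)]
    M = [row + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    w = [0.0] * n
    for i in range(n - 1, -1, -1):
        w[i] = (M[i][n] - sum(M[i][j] * w[j] for j in range(i + 1, n))) / M[i][i]
    return w

# One-step memory task: predict u(t-1) from the reservoir state at time t.
rng = random.Random(1)
u = [rng.uniform(-0.5, 0.5) for _ in range(300)]
W, w_in = make_esn(20)
states = run_reservoir(W, w_in, u)
washout = 50                         # discard the initial transient
S = states[washout:]
y = [u[t - 1] for t in range(washout, len(u))]
w = train_readout(S, y)
preds = [sum(wi * si for wi, si in zip(w, s)) for s in S]
```

Note that training never touches the reservoir weights: as in the abstract, all task-specific computation is concentrated in the linear readout, and the reservoir only has to keep a coherent trace of the recent input history.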
Recent Trends in Communication Networks
In recent years there have been many developments in communication technology, which have greatly enhanced the computing power of small handheld resource-constrained mobile devices. Different generations of communication technology have evolved. This has led to new research on the communication of large volumes of data over different transmission media and on the design of different communication protocols. Another direction of research concerns secure and error-free communication between sender and receiver despite the possible presence of an eavesdropper. To meet the communication requirements of huge amounts of multimedia streaming data, much research has been carried out on the design of proper overlay networks. The book addresses new research techniques that have evolved to handle these challenges.
2007 Research and Technology
The NASA Glenn Research Center is pushing the envelope of research and technology in aeronautics, space exploration, science, and space operations. Our research in aeropropulsion, structures and materials, and instrumentation and controls is enabling next-generation transportation systems that are faster, more environmentally friendly, more fuel efficient, and safer. Our research and development of space flight systems is enabling advanced power, propulsion, communications, and human health systems that will advance the exploration of our solar system. This report selectively summarizes the NASA Glenn Research Center's research and technology accomplishments for fiscal year 2007. Comprising 104 short articles submitted by the staff scientists and engineers, the report is organized into six major sections: Aeropropulsion, Power and Space Propulsion, Communications, Space Processes and Experiments, Instrumentation and Controls, and Structures and Materials. It is not intended to be a comprehensive summary of all the research and technology work done over the past fiscal year; most of the work is reported in Glenn-published technical reports, journal articles, and presentations. For each article in this report, a Glenn contact person has been identified, and where possible, a reference document is listed so that additional information can be easily obtained.
Cutting Edge Nanotechnology
The main purpose of this book is to describe important issues in various types of devices, ranging from conventional transistors (the opening chapters of the book) to molecular electronic devices, whose fabrication and operation are discussed in the last few chapters. As such, this book can serve as a guide for identifying important areas of research in micro-, nano-, and molecular electronics. We deeply acknowledge the valuable contributions that each of the authors made in writing these excellent chapters.
Advanced Photonic Sciences
The new emerging field of photonics has attracted significant interest from many societies, professionals and researchers around the world. The great importance of this field is due to its applicability and possible utilization in almost all scientific and industrial areas. This book presents some advanced research topics in photonics. It consists of 16 chapters organized into three sections: Integrated Photonics, Photonic Materials and Photonic Applications. It can be said that this book is a good contribution toward paving the way for further innovations in photonic technology. The chapters have been written and reviewed by well-experienced researchers in their fields; in their contributions they demonstrate profound knowledge and expertise for individuals interested in this expanding field. The book will be a good reference for experienced professionals, academics and researchers, as well as for young researchers just starting their careers in this field.