A functional link network based adaptive power system stabilizer
An on-line identifier based on a Functional Link Network (FLN) and a Pole-Shift (PS) controller for power system stabilizer (PSS) application are presented in this thesis. To achieve satisfactory PSS performance over a wide range of operating conditions, it is desirable to adapt the PSS parameters in real time. Artificial Neural Networks (ANNs) transform inputs from a low-dimensional space into a high-dimensional nonlinear hidden-unit space, which gives them the ability to model the nonlinear characteristics of the power system. Their ability to learn makes ANNs particularly suitable for adaptive control techniques.
On-line identification obtains a mathematical model at each sampling period to track the dynamic behavior of the plant. An ANN identifier consisting of a Functional Link Network (FLN) is used to identify the model parameters. The FLN model eliminates the need for a hidden layer while retaining the nonlinear mapping capability of a neural network by using functionally enhanced inputs. Such a network can be conveniently used for function approximation with a faster convergence rate and a lower computational load.
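As an illustration of the enhanced-input idea, a trigonometric functional expansion (one common choice for FLNs; the exact basis used in the thesis is not specified here) maps each input through sine and cosine terms, so that a single linear layer acting on the expanded features can fit nonlinear functions:

```python
import numpy as np

def fln_expand(x, order=2):
    """Trigonometric functional expansion of an input vector x.

    Each input component is augmented with sin(k*pi*x) and cos(k*pi*x)
    terms up to the given order; these enhanced inputs replace the
    hidden layer of a conventional multilayer network.
    """
    feats = [x]
    for k in range(1, order + 1):
        feats.append(np.sin(k * np.pi * x))
        feats.append(np.cos(k * np.pi * x))
    return np.concatenate(feats)

# A single linear layer w applied to fln_expand(x) then models a
# nonlinear map; w can be updated on-line, e.g. by recursive least
# squares, which keeps the computational load low.
```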
The Pole Assignment (PA) algorithm most commonly used for adaptive control assigns the closed-loop poles to fixed locations within the unit circle in the z-plane, which may not be optimal across different operating conditions. In this thesis, a PS type of adaptive control algorithm is used instead. Rather than assigning the closed-loop poles to fixed locations, this algorithm assumes that the characteristic polynomial of the closed-loop system has the same form as that of the open-loop system, and shifts the open-loop poles radially towards the centre of the unit circle in the z-plane by a shifting factor α according to some rules. No coefficients need to be tuned manually, so the manual parameter tuning that is a drawback of conventional power system stabilizers is minimized. The PS control algorithm uses the on-line updated ARMA parameters to calculate the new closed-loop poles of the system, which always remain inside the unit circle in the z-plane.
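The radial shift has a convenient closed form: if the open-loop characteristic polynomial is A(z) = z^n + a_1 z^(n-1) + ... + a_n, then scaling its k-th coefficient by α^k yields the polynomial whose roots are the open-loop poles multiplied by α. A minimal sketch of this identity (illustrative only; the thesis additionally chooses α on-line according to its own rules):

```python
import numpy as np

def pole_shift(a, alpha):
    """Shift the roots of a monic polynomial radially by a factor alpha.

    a     : coefficients [1, a1, ..., an] in descending powers of z,
            i.e. the open-loop characteristic polynomial A(z).
    alpha : shifting factor, 0 < alpha < 1, pulling the poles toward
            the centre of the unit circle.

    If p_i are the roots of A(z), the returned coefficients describe
    the polynomial with roots alpha * p_i, because
    B(z) = alpha^n * A(z / alpha) has coefficients a_k * alpha^k.
    """
    a = np.asarray(a, dtype=float)
    return a * alpha ** np.arange(len(a))
```

For example, A(z) = z^2 - 1.5z + 0.56 has poles 0.8 and 0.7; `pole_shift([1, -1.5, 0.56], 0.5)` gives [1, -0.75, 0.14], whose roots 0.4 and 0.35 lie well inside the unit circle.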
Simulation studies on a single-machine infinite-bus system and on a multi-machine power system under various operating-condition changes verify the effectiveness of the combined FLN identifier and PS controller in damping the local-mode and multi-mode oscillations occurring in the system. These studies show that adaptive PSSs (APSSs) offer significant benefits over conventional PSSs: improved performance and no requirement for parameter tuning.
Digital Filters
Recent technological advances allow a great number of system signals to be measured easily and at low cost. The main problem is that usually only a fraction of the signal is useful for a given purpose, for example maintenance, DVD recorders, computers, electric/electronic circuits, econometrics, or optimization. Digital filters are the most versatile, practical and effective methods for extracting the necessary information from the signal. They can be dynamic, so they can be adjusted automatically or manually to external and internal conditions. Presented in this book are the most advanced digital filters, including different case studies and the most relevant literature.
FLEXIBLE LOW-COST HW/SW ARCHITECTURES FOR TEST, CALIBRATION AND CONDITIONING OF MEMS SENSOR SYSTEMS
In recent years, smart sensors based on Micro-Electro-Mechanical Systems (MEMS) have spread widely across fields such as automotive, biomedical, optical and consumer applications, and they nowadays represent the state of the art.
Their diffusion is due to their capability to measure physical and chemical quantities using miniaturized components.
Developing this kind of architecture requires a very complex design flow because of the heterogeneity of its components: mechanical parts typical of the MEMS sensor are combined with electronic components for interfacing and conditioning.
Testing activities are of considerable importance in these kinds of systems and concern various phases of the life-cycle of a MEMS-based system. From the design phase of the sensor onward, validating the design by extracting its characteristic parameters is important, because these parameters are needed to design the sensor interface circuit. Moreover, this kind of architecture requires techniques for calibration and for evaluation of the whole system, in addition to the traditional methods for testing the control circuitry.
The first part of this research work addresses testing optimization through the development of different hardware/software architectures for the different testing stages of the development flow of a MEMS-based system. A flexible and low-cost platform for the characterization and prototyping of MEMS sensors has been developed in order to provide an environment that also supports the design of the sensor interface. To reduce the re-engineering time required during verification testing, a universal client-server architecture has been designed to provide a unique framework for testing different kinds of devices using different development environments and programming languages. Because using ATE during the engineering phase of the calibration algorithm is expensive in terms of the ATE's occupation time, since it requires interrupting the production process, a flexible and easily adaptable low-cost hardware/software architecture for calibration and performance evaluation has been developed. It allows the calibration algorithm to be developed in a user-friendly environment that also permits small- and medium-volume production.
The second part of the research work deals with a topic that is becoming ever more important in the field of MEMS sensor applications: the capability to combine information extracted from different types of sensors (typically accelerometers, gyroscopes and magnetometers) to obtain more complex information. In this context, two different sensor-fusion algorithms have been analyzed and developed. The first is a fully software algorithm that has been used to estimate how much the errors in MEMS sensor data affect the parameters computed by a sensor-fusion algorithm; the second is a sensor-fusion algorithm based on a simplified Kalman filter. Starting from this algorithm, a bit-true model in Mathworks Simulink(TM) has been created as a system study for the implementation of the algorithm on chip.
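As a rough illustration of the simplified-Kalman idea (the thesis's actual filter and its bit-true fixed-point model are not reproduced here), a one-dimensional filter can fuse a gyroscope rate, integrated in the predict step, with an absolute angle derived from the accelerometer in the correct step:

```python
class SimpleKalman1D:
    """Toy 1-D Kalman filter fusing gyroscope and accelerometer data."""

    def __init__(self, q=0.001, r=0.03):
        self.angle = 0.0   # fused angle estimate
        self.p = 1.0       # estimate variance
        self.q = q         # process noise: distrust of gyro integration
        self.r = r         # measurement noise: distrust of accelerometer

    def step(self, gyro_rate, accel_angle, dt):
        # Predict: integrate the gyro rate; uncertainty grows.
        self.angle += gyro_rate * dt
        self.p += self.q
        # Correct: blend in the accelerometer-derived angle.
        k = self.p / (self.p + self.r)        # Kalman gain
        self.angle += k * (accel_angle - self.angle)
        self.p *= 1.0 - k
        return self.angle
```

Gyro drift would otherwise accumulate through the integration, while the accelerometer angle is noisy but drift-free; the gain k weighs the two sources automatically.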
Frequency Translation Loops for RF Filtering - Theory and Design
Modern wireless transceivers are required to operate over a wide range of frequencies in order to support the multitude of currently available wireless standards. Wideband operation also enables future systems that aim for better utilization of the available spectrum through dynamic allocation. As such, co-existence problems like harmonic mixing and phase noise become a main concern. In particular, dealing with interference scenarios is crucial since they directly translate to higher linearity requirements in a receiver. With CMOS driving the consumer electronics market due to low cost and high level of integration demands, the continued increase in speed, mainly intended for digital applications, offers new possibilities for RF design to improve the linearity of front-end receivers. Furthermore, the readily available switches in CMOS have proven to be a viable alternative to traditional active mixers for frequency translation due to their high linearity, low flicker noise, and, most recently recognized, their impedance transformation properties. In this thesis, frequency translation feedback loops employing passive mixers are explored as a means to relax the linearity requirements in a front-end receiver by providing channel selectivity as early as possible in the receiver chain. The proposed receiver architecture employing such a loop addresses some of the most common problems of integrated RF filters, while maintaining their inherent tunability. Through a simplified and intuitive analysis, the operation of the receiver is first examined and the design parameters affecting the filter characteristics, such as bandwidth and stop-band rejection, are determined. A systematic procedure for analyzing the linearity of the receiver reveals the possibility of LNA distortion canceling, which decouples the trade-off between noise, linearity and harmonic radiation. Next, a detailed analysis of frequency translation loops using passive mixers is developed.
Only a highly simplified analysis of such loops is commonly available in the literature. The analysis is based on an iterative procedure to address the complexity introduced by the presence of LO harmonics in the loop and the lack of reverse isolation in the mixers, and results in highly accurate expressions for the harmonic and noise transfer functions of the system. Compared to the alternative of applying general LPTV theory, the procedure developed offers more intuition into the operation of the system and only requires knowledge of basic Fourier analysis. The solution is shown to be capable of predicting trade-offs arising due to harmonic mixing and loop stability requirements, and is therefore useful for both system design and optimization. Finally, as a proof of concept, a chip prototype is designed in a standard 65nm CMOS process. The design occupies +12dBm. As such, the work presented in this thesis aims to provide a highly-integrated means for programmable RF channel selection in wideband receivers. The topic offers several possibilities for further research, either in terms of extending the viability of the system, for example by providing higher-order filtering, or by improving performance, such as noise
Broadband Doherty Power Amplifiers with Enhanced Linearity for Emerging Radio Transmitters
The ever-increasing demand for utilizing wireless spectra has led to the development of spectrally efficient radio systems. While these systems offer much higher data throughput, they employ more sophisticated modulation schemes, which result in wideband signals with high peak-to-average power ratios. These signal characteristics significantly complicate the design of RF transmitters, particularly power amplifiers, in terms of power efficiency and linearity requirements. Furthermore, upcoming wireless standards such as Long-Term Evolution Advanced (LTE-A) require the adoption of carrier aggregation, which incorporates multiple component carriers to yield aggregated channels of larger bandwidth (up to 100 MHz). On the other hand, the emerging systems are expected to support legacy standards with minimum area, cost, and power overhead, and thus call for highly efficient, linear, broadband power amplifiers capable of efficiently amplifying concurrent modulated signals located over a broad carrier frequency range.
This thesis focuses on Doherty power amplifiers (DPAs) with extended high-efficiency range, enhanced bandwidth and improved linearity as a solution for high-efficiency multi-band multi-standard transmitters. It addresses three major concerns associated with DPAs, namely, back-off efficiency, bandwidth, and linearity. The thesis begins with a detailed theoretical analysis of two-way and three-way Doherty configurations, from which the governing equations are derived. This is followed by a comprehensive study of bandwidth limitations in DPA variants.
As the first contribution, it is shown that the two existing three-way Doherty structures, i.e., the conventional and the modified DPA, have inherently broadband characteristics and are thus promising solutions for multi-standard base station transmitters.
As a proof of concept, a 30-W three-way modified Doherty amplifier was designed and implemented using packaged GaN transistors over 0.73-0.98 GHz. The prototype was successfully linearized under modulated signals with up to 20 MHz modulation bandwidth.
To further improve the linearizability of the DPAs under wideband and multi-band modulated signals, this thesis investigates major sources of static and dynamic nonlinearity in two-way DPAs both at device and circuit levels and explores circuit techniques to mitigate them. Furthermore, the challenges of applying the Doherty technique for concurrent transmission of multiple modulated signals are tackled.
The most significant contribution of this thesis is to develop a novel waveform engineering approach to designing ultrawideband DPAs. This approach completely reformulates the DPA's output combiner conditions in order to accommodate complex-valued load modulation. Moreover, it relaxes the harmonic termination requirements of the DPAs to further enlarge the Doherty design space, thereby enhancing the bandwidth. A 50-W waveform-engineered two-way DPA prototype was designed for 1.5-2.5 GHz range and was successfully linearized under intra- and inter-band carrier-aggregated signals with up to 600 MHz carrier spacing.
Lastly, an input matching network design methodology is proposed for broadband DPAs. This methodology uses the novel concept of "current contours" to minimize the degradation of DPA bandwidth, efficiency and linearity caused by device input non-idealities.
Geometric Accuracy Testing, Evaluation and Applicability of Space Imagery to the Small Scale Topographic Mapping of the Sudan
The geometric accuracy, interpretability and applicability of space imagery for the production of small-scale topographic maps of the Sudan have been assessed. Two test areas were selected. The first, in central Sudan, includes the area between the Blue Nile and the White Nile and extends to Atbara in the Nile Province. The second, in the Red Sea Hills area, has modern 1:100,000 scale topographic map coverage and has been covered by six types of images: Landsat MSS, TM and RBV; MOMS; Metric Camera (MC); and Large Format Camera (LFC). Geometric accuracy testing was carried out using a test field of well-defined control points whose terrain coordinates were obtained from the existing maps. The same points were measured on each of the images in a Zeiss Jena stereocomparator (Stecometer C II) and transformed into the terrain coordinate system using polynomial transformations in the case of the scanner and RBV images, and space resection/intersection, relative/absolute orientation and bundle adjustment in the case of the MC and LFC photographs. The two sets of coordinates were then compared. The planimetric accuracies (root mean square errors) obtained for the scanner and RBV images were: Landsat MSS +/-80 m; TM +/-45 m; RBV +/-40 m; and MOMS +/-28 m. The accuracies of the 3-dimensional coordinates obtained from the photographs were: MC: X = +/-16 m, Y = +/-16 m, Z = +/-30 m; LFC: X = +/-14 m, Y = +/-14 m, Z = +/-20 m. The planimetric accuracy figures are compatible with the specifications for topographic maps at 1:250,000 scale in the case of MSS, 1:125,000 scale in the case of TM and RBV, and 1:100,000 scale in the case of MOMS. The planimetric accuracies (vector = +/-20 m) achieved with the two space cameras are compatible with topographic mapping at 1:60,000 to 1:70,000 scale.
However, the spot height accuracies of +/-20 to +/-30 m - equivalent to a contour interval of 50 to 60 m - fall short of the heighting accuracies required for 1:60,000 to 1:100,000 scale mapping. The interpretation tests carried out on the MSS, TM and RBV images showed that, while the main terrain features (hills, ridges, wadis, etc.) can be mapped reasonably well, there was an almost complete failure to pick up the cultural features - towns, villages, roads, railways, etc. - present in the test areas. The high-resolution MOMS images and the space photographs were much more satisfactory in this respect, though the cultural features were still difficult to pick up because the buildings and roads are built of local material and exhibit little contrast on the images.
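The planimetric figures quoted above are root mean square errors of residuals at check points after the fitted transformation. A minimal sketch of that computation for a first-order (affine) polynomial transformation, with synthetic coordinates (the study also used higher-order polynomials and photogrammetric adjustment, not shown here):

```python
import numpy as np

def planimetric_rmse(image_xy, terrain_xy):
    """Fit an affine (first-order polynomial) map from image to terrain
    coordinates by least squares and return the planimetric RMSE of the
    residuals at the control points.

    image_xy, terrain_xy : (n, 2) arrays of point coordinates.
    """
    x, y = image_xy[:, 0], image_xy[:, 1]
    design = np.column_stack([np.ones_like(x), x, y])
    sq = np.zeros(len(x))
    for t in terrain_xy.T:            # solve X and Y channels separately
        coef, *_ = np.linalg.lstsq(design, t, rcond=None)
        sq += (t - design @ coef) ** 2
    return np.sqrt(sq.mean())         # vector RMSE in terrain units
```

Fed with the measured stereocomparator coordinates and the map-derived terrain coordinates, such a routine yields accuracy figures of the kind reported per sensor.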
Analysis and synthesis of self-synchronizing chaotic systems
Includes bibliographical references (p. 225-228). Supported by the U.S. Air Force Office of Scientific Research (AFOSR-91-0034-C). Supported by the U.S. Navy Office of Naval Research (N00014-91-C-0125, N00014-93-1-0686). Supported by Lockheed Sanders, Inc. Kevin M. Cuomo.
Theory of measurement-based quantum computing
In the study of quantum computation, data is represented in terms of linear
operators which form a generalized model of probability, and computations are
most commonly described as products of unitary transformations, which are the
transformations which preserve the quality of the data in a precise sense. This
naturally leads to "unitary circuit models", which are models of computation in
which unitary operators are expressed as a product of "elementary" unitary
transformations. However, unitary transformations can also be effected as a
composition of operations which are not all unitary themselves: the "one-way
measurement model" is one such model of quantum computation.
In this thesis, we examine the relationship between representations of
unitary operators and decompositions of those operators in the one-way
measurement model. In particular, we consider different circumstances under
which a procedure in the one-way measurement model can be described as
simulating a unitary circuit, by considering the combinatorial structures which
are common to unitary circuits and two simple constructions of one-way based
procedures. These structures lead to a characterization of the one-way
measurement patterns which arise from these constructions, which can then be
related to efficiently testable properties of graphs. We also consider how
these characterizations provide automatic techniques for obtaining complete
measurement-based decompositions, from unitary transformations which are
specified by operator expressions bearing a formal resemblance to path
integrals. These techniques are presented as a possible means to devise new
algorithms in the one-way measurement model, independently of algorithms in the
unitary circuit model.
Comment: Ph.D. thesis in Combinatorics and Optimization. 199 pages main text, 26 PDF figures. Official electronic version available at http://hdl.handle.net/10012/413
Improved methods for single-particle cryogenic electron microscopy
Biological macromolecules such as enzymes are nanoscale machines. This is true in a concrete sense: if the atomic structure of a biological macromolecule can be obtained, the theories of mechanics and intermolecular forces can be applied to explain how the machine works in terms that engineers would understand, including motors, ratchets, gates and transducers. Nevertheless, biological macromolecules are complex, fragile and extremely small, so obtaining their structures is a challenging experimental endeavor. Single-particle cryogenic electron microscopy (cryo-EM) is a technique for determining the 3D structure of a biological macromolecule from a large set of 2D electron micrographs of individual structurally-identical particles. To obtain such images, a solution of the macromolecules must be prepared in the frozen-hydrated state, embedded in a thin electron-transparent glassy film of water. This specimen must then be imaged with a very short exposure to avoid radiation damage. A powerful computer must then be used to sort, align, and average the 2D particle images to back-calculate the 3D structure. At its best, cryo-EM can determine the structures of biological macromolecules to atomic resolution. In practice, this goal is usually not achieved. Cryo-EM has gotten significantly more powerful in the past few years due to improvements in equipment and methodology. Several of the most significant advances originated in the labs of David Agard and Yifan Cheng at UCSF. When I began my PhD with Yifan, the spirit in the lab was that cryo-EM could keep getting better and better: with enough engineering, determining the 3D structure of an arbitrary biological macromolecule would be as routine an experiment as gel electrophoresis or DNA sequencing. Inspired, I took on projects in the lab that I thought would move the field closer to that goal. 
In the first chapter of this thesis, I describe work I did supporting a project initiated by David Agard and his long-time scientific programmer Shawn Zheng. They developed and implemented an algorithm, MotionCor2, for correcting the complex, anisotropic movements that occur when a frozen-hydrated specimen interacts with the high-energy electron beam. My role was to benchmark MotionCor2 on a panel of real-world 3D reconstruction tasks. I was able to show that MotionCor2 restored the highest-resolution details in the images, ultimately yielding significantly better structures than simpler algorithms. For me, this project highlighted the importance of benchmarking an algorithm for use in routine real-world conditions with the right metrics. In chapter 1, I include the manuscript for the MotionCor2 study, formatted to highlight my contributions that were moved to the supplement in the original publication in Nature Methods. One of the major remaining issues with cryo-EM is sample preparation: preparing the thin freestanding films of frozen-hydrated particles necessarily exposes those particles to air-water interfaces. Many fragile macromolecular complexes denature when exposed to such interfaces, preventing structure determination with cryo-EM. In chapters 2 and 3, I describe my efforts to develop a simple, robust approach to stabilizing fragile macromolecular complexes during the vitrification process. In chapter 2, I develop a method for coating EM grids with an electron-transparent and functionalizable graphene-oxide support film. I demonstrate that such GO grids are compatible with high-resolution structure determination. This work was published in the Journal of Structural Biology in 2018. In chapter 3, I extend this work by functionalizing GO grids with nucleic acids, enabling routine structure determination of uncrosslinked chromatin specimens.
In on-going work, I used nucleic acid grids to solve high-resolution structures of a highly fragile specimen, the Snf2h-nucleosome complex, and analyzed the conformational heterogeneity of the nucleosome substrate. These results were made possible by the nucleic acid grid, as the other major approach for stabilizing chromatin specimens, chemical crosslinking, did not work for this specimen. Perhaps the most fundamental problem with single-particle cryo-EM is the radiation sensitivity of frozen-hydrated macromolecules. To image biological matter with electrons is to destroy it, so obtaining images of undamaged specimens requires very short, highly undersampled exposures. The resultant images are extremely noisy and low-contrast, with most particles barely visible above the background. In chapter 4, I describe a novel computational approach to generating contrast in cryo-EM. Using a recently described machine learning strategy for training a parameterized denoising algorithm, I developed a computer program, restore, that denoises cryo-EM images, greatly enhancing their contrast and interpretability. This program leverages recent advances in computer vision and deep learning which have not yet been widely used in cryo-EM image processing algorithms. To characterize the performance of the algorithm on real-world data, I extended conventional metrics for image resolution to measure how an arbitrary transformation affects images at different spatial frequencies. These novel metrics are general and may be useful for characterizing other nonlinear reconstruction algorithms in cryo-EM and medical imaging. Finally, I showed that denoised cryo-EM images maintain the high-resolution information required for accurate 3D reconstruction. Denoising can be applied to conventional cryo-EM images and can be reversed whenever necessary. I have made the software for the restore program publicly available and have submitted a manuscript for peer-reviewed publication.