Functional Regression
Functional data analysis (FDA) involves the analysis of data whose ideal
units of observation are functions defined on some continuous domain, and the
observed data consist of a sample of functions taken from some population,
sampled on a discrete grid. Ramsay and Silverman's 1997 textbook sparked the
development of this field, which has accelerated in the past 10 years to become
one of the fastest growing areas of statistics, fueled by the growing number of
applications yielding this type of data. One unique characteristic of FDA is
the need to combine information both across and within functions, which Ramsay
and Silverman called replication and regularization, respectively. This article
will focus on functional regression, the area of FDA that has received the most
attention in applications and methodological development. First will be an
introduction to basis functions, key building blocks for regularization in
functional regression methods, followed by an overview of functional regression
methods, split into three types: [1] functional predictor regression
(scalar-on-function), [2] functional response regression (function-on-scalar)
and [3] function-on-function regression. For each, the role of replication and
regularization will be discussed and the methodological development described
in a roughly chronological manner, at times deviating from the historical
timeline to group together similar methods. The primary focus is on modeling
and methodology, highlighting the modeling structures that have been developed
and the various regularization approaches employed. At the end is a brief
discussion describing potential areas of future development in this field.
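The role of basis functions in scalar-on-function regression can be made concrete with a minimal numpy sketch. Everything here (the curves, the sine basis, the ridge penalty, and all parameter values) is our own illustration, not taken from the article: the coefficient function beta(t) is expanded in a small basis and estimated by regularized least squares from a sample of discretely observed curves.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 200, 100                        # number of curves, grid points per curve
t = np.linspace(0, 1, T)
dt = t[1] - t[0]

# True coefficient function beta(t), chosen for illustration
beta_true = np.sin(2 * np.pi * t)

# Functional predictors: smooth random curves observed on the discrete grid
X = sum(rng.normal(size=(n, 1)) * np.sin((k + 1) * np.pi * t)
        for k in range(4))

# Scalar responses: y_i = integral of X_i(t) * beta(t) dt + noise
y = X @ beta_true * dt + rng.normal(scale=0.01, size=n)

# Regularization: expand beta(t) in a low-dimensional sine basis and
# ridge-penalize the basis coefficients
K = 7
Phi = np.column_stack([np.ones(T)] +
                      [np.sin((k + 1) * np.pi * t) for k in range(K)])
Z = X @ Phi * dt                       # n x (K+1) design matrix
lam = 1e-3
b = np.linalg.solve(Z.T @ Z + lam * np.eye(K + 1), Z.T @ y)
beta_hat = Phi @ b                     # estimated coefficient function
```

In this sketch, replication enters through the n sampled curves, while regularization enters through the low-dimensional basis and the ridge penalty, mirroring the two ingredients the abstract highlights.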
Sub-Nyquist Sampling: Bridging Theory and Practice
Sampling theory encompasses all aspects related to the conversion of
continuous-time signals to discrete streams of numbers. The famous
Shannon-Nyquist theorem has become a landmark in the development of digital
signal processing. In modern applications, an increasing number of functions
is being pushed into sophisticated software algorithms, leaving only the
delicate, finely tuned tasks to the circuit level.
In this paper, we review sampling strategies which target reduction of the
ADC rate below Nyquist. Our survey covers classic works from the early 1950s
through recent publications from the past several years.
The prime focus is bridging theory and practice, that is, pinpointing the
potential of sub-Nyquist strategies to move from the math to the hardware. In
that spirit, we integrate contemporary theoretical viewpoints, which study
signal modeling in a union of subspaces, together with a taste of practical
aspects, namely how the avant-garde modalities boil down to concrete signal
processing systems. Our hope is that this presentation style will attract the
interest of both researchers and engineers, promote the sub-Nyquist premise
into practical applications, and encourage further research into this exciting
new frontier.
Comment: 48 pages, 18 figures, to appear in IEEE Signal Processing Magazine
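To make the Nyquist barrier concrete, the following self-contained numpy sketch (with arbitrarily chosen frequencies, not taken from the survey) shows why naive rate reduction fails: a tone above half the sampling rate is indistinguishable from its alias, which is why sub-Nyquist methods must lean on structural models such as unions of subspaces.

```python
import numpy as np

fs = 10.0                  # sampling rate in Hz; Nyquist rate for a 7 Hz tone
n = np.arange(50)          # would be 14 Hz, so we are sub-Nyquist here
t = n / fs

x_fast = np.sin(2 * np.pi * 7.0 * t)    # 7 Hz tone
x_alias = np.sin(2 * np.pi * -3.0 * t)  # its alias at 7 - 10 = -3 Hz

# The two tones produce identical samples (up to floating point),
# so no sample-domain processing can tell them apart without a prior model.
max_gap = np.max(np.abs(x_fast - x_alias))
```

Breaking this ambiguity with analog preprocessing (filterbanks, modulators) before the ADC is exactly the hardware question the survey addresses.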
Modeling, control and simulation of control-affine nonlinear systems with state-dependent transfer functions
There has been no known research that applies nonlinear transfer functions to a nonlinear control problem. The prevailing belief is that nonlinear systems have no transfer functions: the Laplace transformation required to define a transfer function is not mathematically tractable when the coefficients of the differential equation are functions of the state, output, and control variables. In other words, a transfer function is not defined for systems that do not obey the principle of superposition, and only linear systems obey this principle. This dissertation therefore represents the first research to demonstrate how transfer functions can be used to represent, and to design feedback control for, nonlinear systems.
Real systems are inherently nonlinear. A few important examples include an aerospace vehicle whose mass parameter varies because of fuel consumption, artificial pancreas and HIV drug-delivery systems in the biomedical field, robot arms and magnetic levitation systems in mechanical engineering, and phase-locked loops in electrical engineering. The subject of nonlinear system control, however, is more of an art than a science. There is no unified framework for analysis and design, and the success of a design usually depends on the designer's experience. All the available theory and design tools, e.g., the whole subject of linear algebra, are based on systems described with linear models, which obey the principle of superposition. Control system design by linearization, based on an approximate linear time-invariant (LTI) design model, is the closest thing to a general design framework available for nonlinear systems.
The most important problem in a control system designed by linearization is the variation of the design-model parameters during operation. This problem is the result of assuming a constant-parameter, LTI design model for a real system that is actually nonlinear or has a variable-parameter model; in other words, a real system does not have the constant parameters its LTI design model approximates. The problem is important enough that specific design methods, such as robust control and Horowitz quantitative feedback theory, have been developed to address it. As the system is operated further and further outside the approximately linear range, the problem gets worse. Furthermore, a controller designed by linearization is not a tracking controller: it is a regulator that usually cannot track a varying reference input.
Investigated in the research presented in this dissertation is a nonlinear transfer-function-based control method, i.e., one based on a model represented with varying parameters and therefore a natural solution to the model-parameter-variation problem of design by linearization. The class of applicable nonlinear and time-varying systems comprises those that are affine in their control input, so that they can be described by the central concept of this scheme, a state-dependent transfer function (SDTF). The introduction of this concept of a nonlinear transfer-function design model, and of the feedback control scheme based on it, are the contributions of the research presented in this dissertation.
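The SDTF idea can be sketched on a toy scalar system. This example, its coefficients, and the gain k are our own illustration under stated assumptions, not the dissertation's actual design: write a control-affine system x' = A(x)x + Bu, read G(s; x) = B/(s - A(x)) as a state-dependent transfer function, and choose the feedback pointwise in x so the closed-loop pole sits at s = -k for every state.

```python
import numpy as np

# Toy control-affine system: x' = -x**3 + u, rewritten in state-dependent
# form x' = A(x)*x + B*u with A(x) = -x**2 and B = 1. The state-dependent
# transfer function is G(s; x) = B / (s - A(x)); the feedback
# u = -(A(x) + k)/B * x places the closed-loop pole at s = -k for every x.

def simulate(k=2.0, x0=1.5, dt=1e-3, steps=5000):
    x = x0
    for _ in range(steps):
        A = -x**2
        u = -(A + k) * x          # pointwise pole placement
        x += dt * (A * x + u)     # Euler step; closed loop is x' = -k*x
    return x

x_final = simulate()              # decays roughly like 1.5 * exp(-k * t)
```

With k = 2 over 5 seconds, the state contracts toward the origin at the designed linear rate even though the open-loop dynamics are cubic, which is the appeal of the varying-parameter design model.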
Channel Capacity under Sub-Nyquist Nonuniform Sampling
This paper investigates the effect of sub-Nyquist sampling upon the capacity
of an analog channel. The channel is assumed to be a linear time-invariant
Gaussian channel, where perfect channel knowledge is available at both the
transmitter and the receiver. We consider a general class of right-invertible
time-preserving sampling methods which include irregular nonuniform sampling,
and characterize in closed form the channel capacity achievable by this class
of sampling methods, under a sampling rate and power constraint. Our results
indicate that the optimal sampling structures extract the set of
frequencies that exhibits the highest signal-to-noise ratio among all spectral
sets of measure equal to the sampling rate. This can be attained through
filterbank sampling with uniform sampling at each branch with possibly
different rates, or through a single branch of modulation and filtering
followed by uniform sampling. These results reveal that for a large class of
channels, employing irregular nonuniform sampling sets, while typically
complicated to realize, does not provide capacity gain over uniform sampling
sets with appropriate preprocessing. Our findings demonstrate that aliasing or
scrambling of spectral components does not provide capacity gain, which is in
contrast to the benefits obtained from random mixing in spectrum-blind
compressive sampling schemes.
Comment: accepted to IEEE Transactions on Information Theory, 201
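The highest-SNR frequency-set characterization admits a simple numerical illustration. The SNR profile, band, and rates below are hypothetical and the discretization is ours; the sketch only shows the selection rule the abstract describes: integrate log2(1 + SNR) over the spectral set of measure equal to the sampling rate with the largest SNR.

```python
import numpy as np

f = np.linspace(0, 1, 1000)            # normalized frequency band of width 1
df = f[1] - f[0]
snr = 10 * np.exp(-8 * (f - 0.3)**2)   # hypothetical per-frequency SNR profile

def sampled_capacity(fs):
    """Capacity (bits/s) from the best frequency set of total measure fs."""
    m = int(round(fs / df))            # number of frequency bins to keep
    best = np.sort(snr)[::-1][:m]      # bins with the highest SNR
    return df * np.sum(np.log2(1.0 + best))

c_quarter = sampled_capacity(0.25)     # sub-Nyquist rate: a quarter of the band
c_full = sampled_capacity(1.0)         # full-rate (Nyquist) benchmark
```

Because the selected set depends only on the SNR ordering, it can be reached by filterbank or modulate-and-filter sampling with uniform branches, consistent with the paper's finding that irregular nonuniform sets add no capacity.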
Exploiting Microstructural Instabilities in Solids and Structures: From Metamaterials to Structural Transitions
Instabilities in solids and structures are ubiquitous across all length and time scales, and engineering design principles have commonly aimed at preventing instability. Over the past two decades, however, engineering mechanics has undergone a paradigm shift, away from avoiding instability and toward taking advantage of it. At the core of all instabilities, both at the microstructural scale in materials and at the macroscopic, structural level, lies a nonconvex potential energy landscape that is responsible, e.g., for phase transitions and domain switching, localization, pattern formation, and structural buckling and snapping. Deliberately driving a system close to, into, and beyond the unstable regime has been exploited to create new material systems with superior, interesting, or extreme physical properties. Here, we review the state of the art in utilizing mechanical instabilities in solids and structures at the microstructural level in order to control macroscopic (meta)material performance. After a brief theoretical review, we discuss examples of utilizing material instabilities (from phase transitions and ferroelectric switching to extreme composites) as well as examples of exploiting structural instabilities in acoustic and mechanical metamaterials.
A physically oriented method for quantitative magnetic resonance imaging
Quantitative magnetic resonance imaging (qMRI) denotes the task of estimating the values of magnetic and tissue parameters, e.g., the relaxation times T1 and T2, the proton density ρ, and others. Recently, in [Ma et al., Nature, 2013], an approach named Magnetic Resonance Fingerprinting (MRF) was introduced that is capable of simultaneously recovering these parameters using a two-step procedure: (i) a series of magnetization maps are created, and then (ii) these are matched to parameters with the help of a pre-computed dictionary (Bloch manifold). In this paper, we first put MRF and its variants in the perspective of optimization and inverse problems, providing some mathematical insight into these methods. Motivated by the fact that the Bloch manifold is non-convex and that the accuracy of MRF-type algorithms is limited by the discretization size of the dictionary, we propose a novel, physically oriented method for qMRI. In contrast to the conventional two-step models, our model is dictionary-free and is described by a single non-linear equation, governed by an operator for which we prove differentiability and other properties. This non-linear equation is efficiently solved via robust Newton-type methods. The effectiveness of our method for noisy and undersampled data is shown both analytically and via numerical examples, where improvement over MRF and its variants is also observed.
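The dictionary-free, Newton-type idea can be sketched on a toy relaxation model. The signal model exp(-t/T2), the noise level, and all values below are our own illustration, not the paper's operator: rather than matching against a precomputed grid of candidate T2 values, we solve the nonlinear fitting problem directly by Gauss-Newton, so the answer is not limited to a dictionary's resolution.

```python
import numpy as np

# Toy qMRI-style recovery: signal s(T2) = exp(-t / T2), sampled with noise;
# recover T2 by Gauss-Newton on the nonlinear least-squares residual.
t = np.linspace(0.01, 1.0, 50)
T2_true = 0.137
rng = np.random.default_rng(1)
data = np.exp(-t / T2_true) + rng.normal(scale=1e-3, size=t.size)

T2 = 0.3                               # initial guess
for _ in range(20):
    s = np.exp(-t / T2)
    r = s - data                       # residual
    J = s * t / T2**2                  # derivative ds/dT2
    T2 -= (J @ r) / (J @ J)            # Gauss-Newton update
```

A dictionary-matching baseline on a grid of step 0.01 could be off by up to half a grid step, whereas the iterative solve refines T2 continuously; this is the discretization limitation the abstract points to.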
TOWARDS OPTIMAL OPERATION AND CONTROL OF EMERGING ELECTRIC DISTRIBUTION NETWORKS
The growing integration of components enabled by power-electronics converters causes low inertia in the evolving electric distribution networks, which also suffer from uncertainties due to renewable energy sources, electric demands, and anomalies caused by physical or cyber attacks. These issues are addressed in this dissertation. First, a virtual synchronous generator (VSG) solution is provided for solar photovoltaics (PVs) to address the issues of low inertia and system uncertainties. Furthermore, for a campus AC microgrid, coordinated control of the PV-VSG and a combined heat and power (CHP) unit is proposed and validated. Second, for islanded AC microgrids composed of SGs and PVs, an improved three-layer predictive hierarchical power management framework is presented to provide economic operation and cyber-physical security while reducing uncertainties. This scheme provides superior frequency regulation capability and maintains low system operating costs. Third, a decentralized strategy for coordinating adaptive controls of PVs and battery energy storage systems (BESSs) in islanded DC nanogrids is presented. Finally, for transient stability evaluation (TSE) of emerging electric distribution networks dominated by EV supercharging stations, a data-driven region of attraction (ROA) estimation approach is presented. The proposed data-driven method is more computationally efficient than traditional model-based methods, and it also allows real-time ROA estimation for emerging electric distribution networks with complex dynamics.