136 research outputs found
Transient dynamics under structured perturbations: bridging unstructured and structured pseudospectra
The structured ε-stability radius is introduced as a quantity to
assess the robustness of transient bounds of solutions to linear differential
equations under structured perturbations of the matrix. This applies to general
linear structures such as complex or real matrices with a given sparsity
pattern or with restricted range and corange, or special classes such as
Toeplitz matrices. The notion conceptually combines unstructured and structured
pseudospectra in a joint pseudospectrum, allowing for the use of resolvent
bounds as with unstructured pseudospectra and for structured perturbations as
with structured pseudospectra. We propose and study an algorithm for computing
the structured ε-stability radius. This algorithm solves eigenvalue
optimization problems via suitably discretized rank-1 matrix differential
equations that originate from a gradient system. The proposed algorithm has
essentially the same computational cost as the known rank-1 algorithms for
computing unstructured and structured stability radii. Numerical experiments
illustrate the behavior of the algorithm.
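For orientation, the unstructured complex stability radius of a Hurwitz matrix A, one ingredient the joint pseudospectrum generalizes, equals the minimum over real ω of σ_min(A − iωI). A brute-force grid scan sketches this quantity (this is a point of reference only, not the paper's rank-1 ODE algorithm; the example matrix is illustrative):

```python
import numpy as np

def complex_stability_radius(A, omegas):
    """Grid approximation of r(A) = min over real omega of
    sigma_min(A - 1j*omega*I), for a Hurwitz matrix A."""
    n = A.shape[0]
    return min(np.linalg.svd(A - 1j * w * np.eye(n), compute_uv=False)[-1]
               for w in omegas)

A = np.array([[-1.0, 10.0], [0.0, -2.0]])     # Hurwitz: eigenvalues -1, -2
r = complex_stability_radius(A, np.linspace(-20.0, 20.0, 2001))
```

No complex perturbation of norm below r destabilizes A; rank-1 algorithms of the kind studied here reach such minimizers without a dense frequency sweep.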
Approximating the Real Structured Stability Radius with Frobenius Norm Bounded Perturbations
We propose a fast method to approximate the real stability radius of a linear
dynamical system with output feedback, where the perturbations are restricted
to be real valued and bounded with respect to the Frobenius norm. Our work
builds on a number of scalable algorithms that have been proposed in recent
years, ranging from methods that approximate the complex or real pseudospectral
abscissa and radius of large sparse matrices (and generalizations of these
methods for pseudospectra to spectral value sets) to algorithms for
approximating the complex stability radius (the reciprocal of the H∞
norm). Although our algorithm is guaranteed to find only upper bounds on the
real stability radius, it seems quite effective in practice. As far as we know,
this is the first algorithm that addresses the Frobenius-norm version of this
problem. Because the cost mainly consists of computing, for a sequence of
matrices, the eigenvalue with maximal real part (for continuous-time systems)
or maximal modulus (for discrete-time systems), our algorithm remains very
efficient for large-scale systems provided that the system matrices are sparse.
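The per-step kernel described here, the rightmost eigenvalue of a large sparse matrix, can be sketched with SciPy's ARPACK interface (the matrix below is an arbitrary shifted random stand-in, not a system from the paper):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 2000
A = sp.random(n, n, density=1e-3, format="csr", random_state=0) \
    - 5.0 * sp.eye(n, format="csr")        # shifted to be comfortably stable

# Eigenvalue with maximal real part (continuous-time case); the discrete-time
# case would instead ask for maximal modulus via which="LM".
vals = spla.eigs(A, k=1, which="LR", return_eigenvectors=False)
alpha = vals[0].real                        # spectral abscissa estimate
```

Only matrix-vector products with A are needed, which is why sparsity keeps the overall iteration cheap.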
On real structured controllability/stabilizability/stability radius: Complexity and unified rank-relaxation based methods
This paper addresses the real structured controllability, stabilizability,
and stability radii (RSCR, RSSZR, and RSSR, respectively) of linear systems,
which involve determining the distance (in terms of matrix norms) between a
(possibly large-scale) system and its nearest uncontrollable, unstabilizable,
and unstable systems, respectively, with a prescribed affine structure. This
paper makes two main contributions. First, by demonstrating that determining
the feasibilities of RSCR and RSSZR is NP-hard when the perturbations have a
general affine parameterization, we prove that computing these radii is
NP-hard. Additionally, we prove the NP-hardness of a problem related to the
RSSR. These hardness results are independent of the matrix norm used. Second,
we develop unified rank-relaxation based algorithms for these problems, which
can handle both Frobenius-norm and 2-norm based problems and share
the same framework for the RSCR, RSSZR, and RSSR problems. These algorithms
utilize the low-rank structure of the original problems and relax the
corresponding rank constraints with a regularized truncated nuclear norm term.
Moreover, a modified version of these algorithms can find local optima with
performance specifications on the perturbations, under appropriate conditions.
Finally, simulations suggest that the proposed methods, despite their simple
framework, can find local optima as good as those of several existing methods.
Comment: To appear in Systems & Control Letters.
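The rank surrogate mentioned above can be illustrated numerically: the truncated nuclear norm (the sum of singular values beyond the largest r) vanishes exactly when rank ≤ r, which is what lets it stand in for a hard rank constraint. A small sketch on illustrative matrices:

```python
import numpy as np

def truncated_nuclear_norm(M, r):
    """Sum of all singular values except the r largest; zero iff rank(M) <= r."""
    s = np.linalg.svd(M, compute_uv=False)
    return s[r:].sum()

rng = np.random.default_rng(1)
low_rank = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 6))  # rank <= 2
generic = rng.standard_normal((6, 6))                                 # rank 6

t_low = truncated_nuclear_norm(low_rank, 2)   # ~0: rank constraint satisfied
t_gen = truncated_nuclear_norm(generic, 2)    # > 0: penalized by the relaxation
```

In the paper this term enters the objective with a regularization weight, so minimizing it pushes iterates toward the required low-rank structure.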
ML4Chem: A Machine Learning Package for Chemistry and Materials Science
ML4Chem is an open-source machine learning library for chemistry and
materials science. It provides an extensible platform for developing and
deploying machine learning models and pipelines, and it targets both
non-expert and expert users. ML4Chem follows user-experience design principles
and offers the tools needed to go from data preparation to inference. Here we
introduce its atomistic
module for the implementation, deployment, and reproducibility of atom-centered
models. This module is composed of six core building blocks: data,
featurization, models, model optimization, inference, and visualization. We
present their functionality and ease of use with demonstrations utilizing
neural networks and kernel ridge regression algorithms.
Comment: 32 pages, 11 figures.
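ML4Chem's own API aside, one of the two algorithm families it demonstrates, kernel ridge regression, can be sketched generically with scikit-learn (the features and target below are synthetic stand-ins, not ML4Chem calls):

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 2))   # stand-in for atom-centered features
y = np.sin(X).sum(axis=1)                   # stand-in for a target property

model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=1.0)
model.fit(X[:150], y[:150])                 # train on 150 samples
score = model.score(X[150:], y[150:])       # R^2 on the 50 held-out samples
```

The same fit/predict pattern, with real atom-centered featurization in front, is what an atomistic pipeline wires together.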
Compressed Sensing in the Presence of Side Information
Reconstruction of continuous signals from a number of their discrete samples is central to digital signal processing. Digital devices can only process discrete data, so processing continuous signals requires discretization.
After discretization, the possibility of uniquely reconstructing the source signals from their samples is crucial. Classical sampling theory provides bounds on the sampling rate for unique source reconstruction, known as the Nyquist sampling rate. Recently, a new sampling scheme, Compressive Sensing (CS), has been formulated for sparse signals.
CS is an active area of research in signal processing. It has revolutionized the classical sampling theorems and has provided a new scheme to sample and reconstruct sparse signals uniquely, below Nyquist sampling rates. A signal is called (approximately) sparse when a relatively large number of its elements are (approximately) equal to zero. For the class of sparse signals, sparsity can be viewed as prior information about the source signal. CS has found numerous applications and has improved some image acquisition devices.
Interesting instances of CS arise when, apart from sparsity, side information is available about the source signals. The side information can concern the source structure, its distribution, etc. Such cases can be viewed as extensions of classical CS, and in them we are interested in incorporating the side information either to improve the quality of the source reconstruction or to decrease the number of samples required for accurate reconstruction.
A general CS problem can be transformed into an equivalent optimization problem. In this thesis, a special case of CS with side information about the feasible region of the equivalent optimization problem is studied. It is shown that in such cases the uniqueness and stability of the equivalent optimization problem still hold. An efficient reconstruction method is then proposed. To demonstrate the practical value of the proposed scheme, the algorithm is applied to two real-world applications: image deblurring in optical imaging and surface reconstruction in the gradient field. Experimental results are provided to further investigate and confirm the effectiveness and usefulness of the proposed scheme.
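The classical CS setting the thesis extends (recovering a sparse signal from far fewer measurements than its ambient dimension, without side information) can be sketched with orthogonal matching pursuit; all dimensions below are illustrative:

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(42)
n, m, k = 256, 64, 5                          # ambient dim, measurements, sparsity
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)           # k-sparse ground truth

A = rng.standard_normal((m, n)) / np.sqrt(m)  # Gaussian sensing matrix
b = A @ x                                     # m << n noiseless measurements

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
omp.fit(A, b)
err = np.linalg.norm(omp.coef_ - x)           # exact recovery w.h.p.
```

Side information of the kind studied in the thesis would further constrain the feasible region of the underlying optimization problem, reducing the number of measurements m needed.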
Recent Progress in Image Deblurring
This paper comprehensively reviews the recent development of image
deblurring, including non-blind/blind, spatially invariant/variant deblurring
techniques. Indeed, these techniques share the same objective of inferring a
latent sharp image from one or several corresponding blurry images, while the
blind deblurring techniques are also required to derive an accurate blur
kernel. Considering the critical role of image restoration in modern imaging
systems to provide high-quality images under complex environments such as
motion, undesirable lighting conditions, and imperfect system components, image
deblurring has attracted growing attention in recent years. From the
viewpoint of how they handle ill-posedness, a crucial issue in deblurring
tasks, existing methods can be grouped into five categories: Bayesian inference
framework, variational methods, sparse representation-based methods,
homography-based modeling, and region-based methods. Despite this progress,
image deblurring, especially the blind case, remains limited by complex
application conditions that make the blur kernel hard to obtain and often
spatially variant. We provide a holistic
understanding and deep insight into image deblurring in this review. An
analysis of the empirical evidence for representative methods, practical
issues, as well as a discussion of promising future directions are also
presented.
Comment: 53 pages, 17 figures.
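For the non-blind, spatially invariant case surveyed above, the classical frequency-domain baseline is Wiener deconvolution; a self-contained sketch on synthetic data (the regularization constant k and the image are illustrative):

```python
import numpy as np

def wiener_deblur(blurred, psf, k=1e-4):
    """Non-blind Wiener deconvolution: X = conj(H) / (|H|^2 + k) * B.
    The constant k regularizes the ill-posed division where |H| is small."""
    H = np.fft.fft2(psf, s=blurred.shape)
    B = np.fft.fft2(blurred)
    X = np.conj(H) / (np.abs(H) ** 2 + k) * B
    return np.real(np.fft.ifft2(X))

rng = np.random.default_rng(0)
img = rng.uniform(size=(64, 64))                       # synthetic sharp image
psf = np.ones((5, 5)) / 25.0                           # 5x5 box blur kernel
H = np.fft.fft2(psf, s=img.shape)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * H))  # circular blur
restored = wiener_deblur(blurred, psf)
```

Blind methods must additionally estimate the psf itself, which is exactly the harder problem the review discusses.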
Modelling of Channelization in Unconsolidated Formations Due to Liquid Injection
Fluid injection has been increasingly implemented in oil and gas producing fields to optimize hydrocarbon production in both young and mature oil fields across the globe. This situation arises from the realization that early planning of secondary and tertiary recovery methods may improve project economics significantly. Injectors are therefore intended to perform at high injection rates with low bottomhole flowing pressures for as long as possible. In the case of injectors drilled in unconsolidated formations, deficient performance often seems to be the norm. With a majority of these fields being offshore developments in which capital expenditures are high, a profound need for solutions arises. Among the wide range of problems leading to injectivity impairment in injectors targeting poorly consolidated formations, research on rock failure caused by fluid injection operations appears to be at its earliest stages; hence it is the main subject of the present study. Some studies addressing this issue attempt to modify existing fracture mechanics theory to explain rock failure. Although many of the proposed theories properly explain some observations and are founded on well-derived scientific principles, they fail to explain the formation sand erosion and transportation often observed at these wells. To study the failure of poorly cemented sands, dynamic fluid flow equations are implemented using the Finite Element Method coupled with an erosional model. Different scenarios are implemented in two and three dimensions. The study shows that the drag forces created by the injected fluid weaken and damage the target formation. The damage is a consequence of the pressure gradient as the fluid travels through the porous medium.
The latter observation leads to the conclusion that limiting injection rates to avoid damage may not be sufficient; a more appropriate approach is to control the fluid velocity at the sandface given the fluid viscosity, relative permeability, and porosity. If proper injection parameters are achieved, the injectivity decline in these wells can be minimized. The observations from this study can be used to better design wells and their completions.
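Controlling the sandface fluid velocity for given rock and fluid properties reduces, in the simplest single-phase view, to Darcy's law; a back-of-the-envelope sketch with illustrative SI values:

```python
def darcy_velocity(k_abs, k_rel, mu, dp_dx):
    """Darcy flux v = -(k_abs * k_rel / mu) * dp/dx (single phase, 1-D)."""
    return -(k_abs * k_rel / mu) * dp_dx

k_abs = 1e-12    # absolute permeability, m^2 (about 1 darcy)
k_rel = 0.8      # relative permeability to the injected fluid
mu = 1e-3        # fluid viscosity, Pa*s (water)
dp_dx = -1e5     # pressure gradient along flow, Pa/m (pressure falls downstream)
v = darcy_velocity(k_abs, k_rel, mu, dp_dx)   # m/s, positive along flow
```

Holding v below an erosion threshold at the sandface, rather than capping the rate alone, is the control strategy the study's conclusion points toward.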
Intrusion detection by machine learning = Behatolás detektálás gépi tanulás által
Since the early days of information technology, there have been many stakeholders who used its technological capabilities for their own benefit, whether through legal operations or through illegal access to computational assets and sensitive information. Every year, businesses invest large amounts of effort into upgrading their IT infrastructure, yet, even today, they are unprepared to protect their most valuable assets: data and knowledge. This lack of protection was the main motivation for this dissertation. In this study, intrusion detection, a field of information security, is evaluated through the use of several machine learning models performing signature and hybrid detection. This is a challenging field, mainly due to the high velocity and imbalanced nature of network traffic. To construct machine learning models capable of intrusion detection, the applied methodologies were the CRISP-DM process model, designed to help data scientists with the planning, creation, and integration of machine learning models into a business information infrastructure, and design science research, which is concerned with answering research questions through information technology artefacts. The two methodologies have a lot in common, which is further elaborated in the study. The goals of this dissertation were twofold: first, to create an intrusion detector providing a high level of detection performance, measured using accuracy and recall; and second, to identify techniques that can increase intrusion detection performance. Of the designed models, a hybrid autoencoder + stacking neural network model achieved detection performance comparable to the best models reported in the related literature, with good detection of minority classes. To achieve this result, the techniques identified were synthetic sampling, advanced hyperparameter optimization, model ensembles, and autoencoder networks.
In addition, the dissertation sets up a soft hierarchy among the different detection techniques in terms of performance and provides a brief outlook on potential future practical applications of network intrusion detection models.
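The dissertation's best model pairs an autoencoder with a stacking neural network; a generic stacking ensemble (not the dissertation's architecture, and trained on synthetic imbalanced data rather than network traffic) can be sketched with scikit-learn:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Imbalanced stand-in for benign vs. attack flow records (9:1).
X, y = make_classification(n_samples=1000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                              random_state=0)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
)
stack.fit(X_tr, y_tr)
acc = stack.score(X_te, y_te)                 # held-out accuracy
```

On real traffic, recall on the minority (attack) classes matters as much as accuracy, which is why the dissertation reports both.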