Fuzzy Inference Systems Optimization
This paper compares various optimization methods for fuzzy inference system
optimization. The optimization methods compared are genetic algorithm, particle
swarm optimization, and simulated annealing. When these techniques were
implemented, the performance of each within the fuzzy inference system
classification task was found to be context-dependent.
Comment: Paper Submitted to INTEC
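As an illustration of one of the compared techniques, the sketch below implements a generic simulated annealing loop on a toy quadratic objective standing in for a fuzzy-system tuning error; the objective, step size, and cooling schedule are illustrative assumptions, not the paper's setup.

```python
import math
import random

def simulated_annealing(objective, x0, step=0.5, t0=1.0, cooling=0.95, iters=200, seed=0):
    """Minimize `objective` over a real parameter vector by simulated annealing."""
    rng = random.Random(seed)
    x, fx = list(x0), objective(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(iters):
        # Propose a random perturbation of one coordinate.
        cand = list(x)
        i = rng.randrange(len(cand))
        cand[i] += rng.uniform(-step, step)
        fc = objective(cand)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling  # geometric cooling schedule
    return best, fbest

# Toy stand-in for a fuzzy-system tuning objective: quadratic bowl at (1, -2).
obj = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2
params, err = simulated_annealing(obj, [0.0, 0.0])
```

The same loop applies to genuine fuzzy membership-function parameters by swapping in a classification-error objective.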
Learning Manifolds from Non-stationary Streaming Data
Streaming adaptations of manifold learning based dimensionality reduction
methods, such as Isomap, are based on the assumption that a small initial batch
of observations is enough for exact learning of the manifold, while remaining
streaming data instances can be cheaply mapped to this manifold. However, there
are no theoretical results to show that this core assumption is valid.
Moreover, such methods typically assume that the underlying data distribution
is stationary, and they are not equipped to detect or handle sudden changes or
gradual drifts in the distribution that may occur while the data is streaming.
We present theoretical results showing that the quality of the learned
manifold converges asymptotically as the data size increases. We then show
that a Gaussian Process Regression (GPR) model, which uses a manifold-specific
kernel function and is trained on an initial batch of sufficient size, can
closely approximate the state-of-the-art streaming Isomap algorithms. The
predictive variance obtained from the GPR prediction is then shown to be an
effective detector of changes in the underlying data distribution. Results on
several synthetic and real data sets show that the resulting algorithm can
effectively learn a lower-dimensional representation of high-dimensional data
in a streaming setting, while identifying shifts in the generative
distribution.
Comment: 27 pages, 9 figures
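A minimal sketch of the idea that GPR predictive variance can flag distribution shift, assuming a plain RBF kernel rather than the paper's manifold-specific kernel: points far from the training batch receive high predictive variance.

```python
import numpy as np

def rbf(a, b, length=1.0, var=1.0):
    """Squared-exponential kernel between row-vector sets a and b."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / length**2)

def gpr_predict(Xtr, ytr, Xte, noise=1e-2):
    """Standard GP regression: posterior mean and variance at test points."""
    K = rbf(Xtr, Xtr) + noise * np.eye(len(Xtr))
    Ks, Kss = rbf(Xte, Xtr), rbf(Xte, Xte)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, ytr))
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(Kss) - (v**2).sum(0)
    return mean, var

rng = np.random.default_rng(0)
Xtr = rng.uniform(-2, 2, size=(50, 1))   # initial batch
ytr = np.sin(Xtr[:, 0])
# In-distribution point: low predictive variance; far-away point: high
# variance, flagging a potential shift in the generating distribution.
_, var_in = gpr_predict(Xtr, ytr, np.array([[0.1]]))
_, var_out = gpr_predict(Xtr, ytr, np.array([[8.0]]))
drift_flag = var_out[0] > 10 * var_in[0]
```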
Toward a Robust Crowd-labeling Framework using Expert Evaluation and Pairwise Comparison
Crowd-labeling emerged from the need to label large-scale and complex data, a
tedious, expensive, and time-consuming task. One of the main challenges in the
crowd-labeling task is to control for or determine in advance the proportion of
low-quality/malicious labelers. If that proportion grows too high, there is
often a phase transition leading to a steep, non-linear drop in labeling
accuracy as noted by Karger et al. [2014]. To address these challenges, we
propose a new framework called Expert Label Injected Crowd Estimation (ELICE)
and extend it to different versions and variants that delay phase transition
leading to better labeling accuracy. ELICE automatically combines and boosts
bulk crowd labels, supported by expert labels for a limited number of
instances from the dataset. The expert labels help to estimate the individual
ability of crowd labelers and difficulty of each instance, both of which are
used to aggregate the labels. Empirical evaluation shows the superiority of
ELICE compared to other state-of-the-art methods. We also derive a lower
bound on the number of expert-labeled instances needed to estimate crowd
ability and dataset difficulty, as well as to obtain better-quality labels.
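The core mechanism, label aggregation weighted by ability estimated on an expert-labeled subset, can be sketched as follows; this is a simplified illustration, not the ELICE algorithm itself (it ignores, for instance, per-instance difficulty).

```python
import numpy as np

def aggregate_with_experts(crowd, expert_idx, expert_labels):
    """crowd: (n_labelers, n_items) matrix of +/-1 labels.
    Ability of each labeler = accuracy on the expert-labeled subset,
    mapped to a weight in [-1, 1]; items are decided by weighted vote."""
    acc = (crowd[:, expert_idx] == expert_labels).mean(axis=1)
    weights = 2 * acc - 1          # 0.5 accuracy -> weight 0 (random labeler)
    scores = weights @ crowd       # weighted vote per item
    return np.sign(scores)

rng = np.random.default_rng(1)
truth = rng.choice([-1, 1], size=40)
# Two mostly reliable labelers and one adversarial labeler.
good = np.where(rng.random((2, 40)) < 0.9, truth, -truth)
bad = -truth[None, :]
crowd = np.vstack([good, bad])
# Experts label the first five instances.
est = aggregate_with_experts(crowd, np.arange(5), truth[:5])
accuracy = (est == truth).mean()
```

Note that the adversarial labeler receives a negative weight, so its votes are usefully inverted rather than merely discounted.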
Context-Specific Validation of Data-Driven Models
With the increasing use of data-driven models to control robotic systems, it
has become important to develop a methodology for validating such models before
they can be deployed to design a controller for the actual system.
Specifically, it must be ensured that the controller designed for a learned
model would perform as expected on the actual physical system. We propose a
context-specific validation framework to quantify the quality of a learned
model based on a distance measure between the closed-loop actual system and the
learned model. We then propose an active sampling scheme to compute a
probabilistic upper bound on this distance in a sample-efficient manner. The
proposed framework validates the learned model against only those behaviors of
the system that are relevant for the purpose for which we intend to use this
model, and does not require any a priori knowledge of the system dynamics.
Several simulations illustrate the practicality of the proposed framework for
validating models of real-world systems and, consequently, for controller
synthesis.
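A simplified sketch of the sampling idea, using plain Monte Carlo over initial conditions rather than the paper's active sampling scheme; the one-dimensional dynamics, the learned model, and the controller are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(2)

def true_step(x, u):      # hypothetical "actual" dynamics
    return 0.9 * x + 0.5 * np.sin(u)

def learned_step(x, u):   # hypothetical learned model (sin(u) linearized as u)
    return 0.88 * x + 0.5 * u

def closed_loop_distance(x0, horizon=20):
    """Max distance between closed-loop trajectories of the actual system
    and the learned model under the same proportional controller."""
    xs, xm, d = x0, x0, 0.0
    for _ in range(horizon):
        u = -0.5 * xs            # controller acting on the true system state
        xs = true_step(xs, u)
        xm = learned_step(xm, u)
        d = max(d, abs(xs - xm))
    return d

# Sample only the contexts that matter: initial conditions in the
# operating range of interest.
n = 200
samples = [closed_loop_distance(rng.uniform(-1, 1)) for _ in range(n)]
# For i.i.d. draws, a fresh context exceeds the observed maximum with
# probability at most 1/(n+1), so the empirical max serves as a
# probabilistic upper bound on the closed-loop model-system distance.
bound = max(samples)
```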
Risk Mitigation for Dynamic State Estimation Against Cyber Attacks and Unknown Inputs
Phasor measurement units (PMUs) can be effectively utilized for the
monitoring and control of the power grid. As the cyber-world becomes
increasingly embedded into power grids, the risks of this inevitable evolution
become serious. In this paper, we present a risk mitigation strategy, based on
dynamic state estimation, to eliminate threat levels from the grid's unknown
inputs and potential cyber-attacks. The strategy requires (a) potentially
incomplete knowledge of power system models and parameters and (b) real-time
PMU measurements.
First, we utilize a dynamic state estimator, based on higher-order
representations of power system dynamics, for simultaneous estimation of
states and unknown inputs.
Second, estimates of cyber-attacks are obtained through an attack detection
algorithm. Third, the estimation and detection components are seamlessly
utilized in an optimization framework to determine the most impacted PMU
measurements. Finally, a risk mitigation strategy is proposed to guarantee the
elimination of threats from attacks, ensuring the observability of the power
system through available, safe measurements. Case studies are included to
validate the proposed approach. Insightful suggestions, extensions, and open
problems are also posed.
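To give the flavor of the detection step, the sketch below uses the textbook residual-based (chi-square) bad-data detector from static state estimation; it is not the paper's specific attack-detection algorithm, and the measurement matrix and injected bias are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)
n_meas, sigma = 30, 0.05
H = rng.standard_normal((n_meas, 4))   # synthetic measurement matrix (PMU channels)
x = rng.standard_normal(4)             # true state

def residual_stat(z):
    """Chi-square statistic of the least-squares measurement residual."""
    x_hat = np.linalg.lstsq(H, z, rcond=None)[0]
    r = z - H @ x_hat
    return (r @ r) / sigma**2

z_clean = H @ x + sigma * rng.standard_normal(n_meas)
z_attacked = z_clean.copy()
z_attacked[3] += 1.0                   # bias injection on one measurement channel

# With 30 - 4 = 26 degrees of freedom, a chi-square threshold around 45
# (roughly the 0.99 quantile) separates clean from attacked measurements;
# flagged channels can then be excluded while preserving observability.
clean_stat, attack_stat = residual_stat(z_clean), residual_stat(z_attacked)
```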
Component Based Modeling of Ultrasound Signals
This work proposes a component-based model for the raw ultrasound signals
acquired by the transducer elements. Based on this approach, before undergoing
the standard digital processing chain, every sampled raw signal is first
decomposed into a smooth background signal and a strong reflectors component.
The decomposition allows a processing scheme suited to each component to be
applied individually. We demonstrate the potential benefit of this approach
for image enhancement, by suppressing side-lobe artifacts, and for improved
digital data compression. Applying our proposed processing schemes to real
cardiac ultrasound data, we show that by separating the two components and
compressing them individually, a more than twenty-fold reduction in data size
is achieved while retaining the image content.
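A minimal sketch of such a two-component decomposition on a synthetic sampled line, using a moving average for the smooth background and residual thresholding for the strong reflectors; the window size and threshold are illustrative choices, not the paper's decomposition.

```python
import numpy as np

def decompose(signal, window=15, k=3.0):
    """Split a sampled line into a smooth background (moving average)
    and a sparse strong-reflector component (large residual peaks)."""
    kernel = np.ones(window) / window
    background = np.convolve(signal, kernel, mode="same")
    residual = signal - background
    mask = np.abs(residual) > k * residual.std()
    reflectors = np.where(mask, residual, 0.0)
    return background, reflectors

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 500)
smooth = np.sin(2 * np.pi * 3 * t)              # slowly varying background
line = smooth + 0.05 * rng.standard_normal(500)
line[[120, 300]] += 4.0                         # two strong reflectors
bg, refl = decompose(line)
n_detected = np.count_nonzero(refl)
```

Because the smooth part is low-bandwidth and the reflector part is sparse, each component on its own admits a far more compact representation than the raw signal.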
Efficient Model Identification for Tensegrity Locomotion
This paper aims to identify, in a practical manner, unknown physical
parameters, such as the mechanical models of actuated robot links, that are
critical in dynamic robotic tasks. Key features include the use of an
off-the-shelf physics engine and the Bayesian optimization framework. The task
being considered is locomotion with a high-dimensional, compliant Tensegrity
robot. A key insight, in this case, is the need to project the model
identification challenge into an appropriate lower dimensional space for
efficiency. Comparisons with alternatives indicate that the proposed method
can identify the parameters more accurately within the given time budget,
which also results in more precise locomotion control.
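One ingredient, searching in a lower-dimensional projection of the parameter space, can be sketched as below; for brevity it uses random search in place of Bayesian optimization, and the quadratic "simulation error" and fixed random projection are hypothetical stand-ins for the physics engine and the paper's projection.

```python
import numpy as np

rng = np.random.default_rng(5)

dim_full, dim_low = 20, 2
A = rng.standard_normal((dim_full, dim_low))   # fixed random projection

p_true = A @ np.array([0.7, -0.3])             # ground-truth parameters (in range of A)

def sim_error(p):
    """Stand-in for 'run the physics engine, compare to observed locomotion'."""
    return float(np.sum((p - p_true) ** 2))

def identify(budget=300):
    """Search only the low-dimensional space z; evaluate in the full space."""
    best_z, best_err = None, np.inf
    for _ in range(budget):
        z = rng.uniform(-1, 1, size=dim_low)
        err = sim_error(A @ z)
        if err < best_err:
            best_z, best_err = z, err
    return best_z, best_err

z_hat, err = identify()
```

The same structure holds when the inner loop is replaced by a Bayesian optimization surrogate: the key saving comes from the budget being spent in the low-dimensional space.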
Gradient-Free Learning Based on the Kernel and the Range Space
In this article, we show that solving the system of linear equations by
manipulating the kernel and the range space is equivalent to solving the
problem of least-squares error approximation. This lays the groundwork for a
gradient-free learning search when the system can be expressed in the form of a
linear matrix equation. When the nonlinear activation function is invertible,
the learning problem of a fully-connected multilayer feedforward neural network
can be easily adapted to this novel learning framework. By a series of kernel
and range space manipulations, such network learning boils down to solving a
set of cross-coupling equations. With the weights randomly initialized, the
equations can be decoupled, and the resulting network solution shows
relatively good learning capability on real-world data sets of small to
moderate dimensions. Based on the structural information of the matrix
equation, the network representation is found to be dependent on the number of
data samples and the output dimension.
Comment: The idea of kernel and range projection was first introduced at the
IEEE/ACIS ICIS conference held in Singapore in June 2018. This article
presents a full development of the method, supported by extensive numerical
results.
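For a single layer with an invertible activation, the gradient-free solve described above reduces to inverting the activation and then solving a linear matrix equation by least squares; a minimal numpy sketch (network and data are synthetic):

```python
import numpy as np

rng = np.random.default_rng(6)

# Target mapping to learn: Y = tanh(X @ W_true)
n, d_in, d_out = 100, 5, 3
X = rng.standard_normal((n, d_in))
W_true = 0.5 * rng.standard_normal((d_in, d_out))
Y = np.tanh(X @ W_true)

# Gradient-free solve: invert the activation, then solve the linear
# matrix equation X W = arctanh(Y) in the least-squares sense via the
# pseudoinverse (a range-space projection of the targets onto X).
W_hat = np.linalg.pinv(X) @ np.arctanh(Y)
err = np.abs(np.tanh(X @ W_hat) - Y).max()
```

For multilayer networks the article's cross-coupled equations replace this single solve, but each decoupled step has the same pseudoinverse form.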
Investigating Flight Envelope Variation Predictability of Impaired Aircraft using Least-Squares Regression Analysis
Aircraft failures alter the aircraft dynamics and cause the maneuvering flight
envelope to change. Such envelope variations are nonlinear and generally
unpredictable by the pilot, as they are governed by the aircraft's complex
dynamics. Hence, in order to prevent in-flight Loss of Control, it is crucial
to predict the impaired aircraft's flight envelope variation due to any a
priori unknown failure degree. This paper investigates the predictability
of the number of trim points within the maneuvering flight envelope and its
centroid using both linear and nonlinear least-squares estimation methods. To
do so, various polynomial models and nonlinear models based on the hyperbolic
tangent function are developed and compared. These models incorporate the
factors influencing the envelope variations as inputs and estimate the
centroid and the number of trim points of the maneuvering flight envelope at
any intended failure degree. Results indicate that both the polynomial and
hyperbolic tangent function-based models are capable of predicting the
impaired flight envelope variation with good precision. Furthermore, it is
shown that the regression equation of the best polynomial fit enables direct
assessment of the impaired aircraft's flight envelope contraction and
displacement sensitivity to the specific parameters characterizing aircraft
failure and flight condition.
Comment: Accepted version, Journal of Aerospace Information Systems
Deep learning based inverse method for layout design
Layout design with complex constraints is a challenging problem to solve due
to the non-uniqueness of the solution and the difficulties in incorporating the
constraints into conventional optimization-based methods. In this paper, we
propose a design method based on the recently developed machine learning
technique, Variational Autoencoder (VAE). We utilize the learning capability of
the VAE to learn the constraints and the generative capability of the VAE to
generate design candidates that automatically satisfy all the constraints. As
such, no constraints need to be imposed during the design stage. In addition,
we show that the VAE network is also capable of learning the underlying physics
of the design problem, leading to an efficient design tool that does not need
any physical simulation once the network is constructed. We demonstrate the
performance of the method on two cases: inverse design of surface diffusion
induced morphology change, and mask design for optical microlithography.
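A toy sketch of the VAE machinery the method relies on: one forward pass computing the ELBO for a design vector, with reparameterized latent sampling. The weights are random and untrained, the dimensions arbitrary, and the Gaussian reconstruction term is one common choice; none of this reproduces the paper's network.

```python
import numpy as np

rng = np.random.default_rng(7)

d_x, d_h, d_z = 16, 8, 2      # design dimension, hidden size, latent size

# Randomly initialized (untrained) encoder/decoder weights, for shape only.
We, Wmu, Wlv = (rng.standard_normal(s) * 0.1
                for s in [(d_x, d_h), (d_h, d_z), (d_h, d_z)])
Wd = rng.standard_normal((d_z, d_x)) * 0.1

def elbo(x):
    """Evidence lower bound for one design vector x in [0, 1]^d_x."""
    h = np.tanh(x @ We)                                         # encoder
    mu, logvar = h @ Wmu, h @ Wlv                               # q(z|x) parameters
    z = mu + rng.standard_normal(d_z) * np.exp(0.5 * logvar)    # reparameterization
    x_hat = 1 / (1 + np.exp(-(z @ Wd)))                         # decoder
    recon = -np.sum((x - x_hat) ** 2)                           # Gaussian recon term
    kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1 - logvar)      # KL(q || N(0, I))
    return recon - kl

x = rng.uniform(0, 1, d_x)
value = elbo(x)
```

Once such a network is trained on constraint-satisfying layouts, new candidates come from sampling z ~ N(0, I) and decoding, with no constraints imposed at design time.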