Decorrelation of Neutral Vector Variables: Theory and Applications
In this paper, we propose novel strategies for neutral vector variable
decorrelation. Two fundamental invertible transformations, namely serial
nonlinear transformation and parallel nonlinear transformation, are proposed to
carry out the decorrelation. Because a neutral vector variable is not
multivariate Gaussian distributed, conventional principal component
analysis (PCA) cannot yield mutually independent scalar variables. With the two
proposed transformations, a highly negatively correlated neutral vector can be
transformed into a set of mutually independent scalar variables with the same
degrees of freedom. We also evaluate the decorrelation performance for
vectors generated from a single Dirichlet distribution and from a mixture of
Dirichlet distributions. Mutual independence is verified with the distance
correlation measure. The advantages of the proposed decorrelation
strategies are studied extensively and demonstrated with synthesized data and
practical application evaluations.
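As a rough illustration of the serial route, the sketch below applies a stick-breaking-style ratio transform, one standard serial nonlinear transformation for Dirichlet-distributed neutral vectors (not necessarily the exact construction of the paper), and compares a sample distance correlation before and after; the alpha parameters and sample sizes are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def distance_correlation(x, y):
    # Biased (V-statistic) sample distance correlation between two 1-D samples.
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    a = np.abs(x[:, None] - x[None, :])
    b = np.abs(y[:, None] - y[None, :])
    A = a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()
    B = b - b.mean(axis=0) - b.mean(axis=1)[:, None] + b.mean()
    dcov2 = (A * B).mean()
    dvar = np.sqrt((A * A).mean() * (B * B).mean())
    return float(np.sqrt(dcov2 / dvar))

# Neutral vector example: Dirichlet samples sum to one, so their components
# are strongly negatively correlated (alpha chosen arbitrarily).
alpha = np.array([2.0, 3.0, 4.0, 5.0])
X = rng.dirichlet(alpha, size=5000)

# Serial (stick-breaking style) nonlinear transformation: each component is
# rescaled by the probability mass left over by the preceding components.
# For a Dirichlet vector the resulting ratios are mutually independent Betas.
remaining = 1.0 - np.cumsum(X[:, :-1], axis=1)   # mass left after components 0..k
U = np.empty_like(X[:, :-1])
U[:, 0] = X[:, 0]
U[:, 1:] = X[:, 1:-1] / remaining[:, :-1]

print("dCor(x1, x2) before:", round(distance_correlation(X[:, 0], X[:, 1]), 3))
print("dCor(u1, u2) after: ", round(distance_correlation(U[:, 0], U[:, 1]), 3))

The first distance correlation is clearly nonzero because the components share the unit-sum constraint; after the transform it drops to the small values expected of independent variables.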
A Methodology for the Diagnostic of Aircraft Engine Based on Indicators Aggregation
Aircraft engine manufacturers collect large amounts of engine-related data
during flights. These data are used to detect anomalies in the engines in order
to help companies optimize their maintenance costs. This article introduces and
studies a generic methodology for building automatic detection of early signs
of anomalies in a way that is understandable by the human operators who make
the final maintenance decision. The main idea of the method is to generate a
very large number of binary indicators based on parametric anomaly scores
designed by experts, complemented by simple aggregations of those scores. The
best indicators are selected via a classical forward scheme, leading to a much
reduced number of indicators that are tuned to a data set. We illustrate the
value of the method on simulated data that contain realistic early signs of
anomalies.
Comment: Proceedings of the 14th Industrial Conference, ICDM 2014, St. Petersburg, Russian Federation (2014).
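A minimal sketch of the indicator-generation and forward-selection idea follows; the quantile thresholds, the "any selected indicator fires" combination rule, the balanced-accuracy criterion and the simulated data are all assumptions for illustration, not the parametric scores or selection criterion of the paper.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a few expert-designed parametric anomaly scores per
# flight, with labels marking simulated early signs of anomaly.
n_flights = 2000
scores = rng.normal(size=(n_flights, 5))    # expert anomaly scores
labels = rng.random(n_flights) < 0.05       # rare anomalous flights
scores[labels] += 1.0                       # anomalies shift the scores upward

# Step 1: turn each score into several binary indicators by thresholding
# at high quantiles.
thresholds = np.quantile(scores, [0.90, 0.95, 0.99], axis=0)   # shape (3, 5)
indicators = np.concatenate(
    [scores > thresholds[q] for q in range(thresholds.shape[0])], axis=1
)  # (n_flights, 15) boolean indicators

def balanced_accuracy(pred, y):
    tpr = pred[y].mean() if y.any() else 0.0
    tnr = (~pred[~y]).mean() if (~y).any() else 0.0
    return 0.5 * (tpr + tnr)

# Step 2: classical greedy forward selection of indicators; the combined
# detector is a simple "any selected indicator fires" rule.
selected, best = [], 0.0
remaining = list(range(indicators.shape[1]))
while remaining:
    gains = []
    for j in remaining:
        pred = indicators[:, selected + [j]].any(axis=1)
        gains.append(balanced_accuracy(pred, labels))
    if max(gains) <= best:
        break
    j_best = remaining[int(np.argmax(gains))]
    best = max(gains)
    selected.append(j_best)
    remaining.remove(j_best)

print("selected indicators:", selected, "balanced accuracy:", round(best, 3))

Because each retained indicator is a thresholded expert score, the selected set stays interpretable to the operators who make the final maintenance decision.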
What May Visualization Processes Optimize?
In this paper, we present an abstract model of visualization and inference
processes and describe an information-theoretic measure for optimizing such
processes. In order to obtain such an abstraction, we first examined six
classes of workflows in data analysis and visualization, and identified four
levels of typical visualization components, namely disseminative,
observational, analytical and model-developmental visualization. We noticed a
common phenomenon across these levels of visualization: the transformation of
data spaces (referred to as alphabets) usually corresponds to a reduction of
maximal entropy along a workflow. Based on this observation,
we establish an information-theoretic measure of cost-benefit ratio that may be
used as a cost function for optimizing a data visualization process. To
demonstrate the validity of this measure, we examined a number of successful
visualization processes in the literature, and showed that the
information-theoretic measure can mathematically explain the advantages of such
processes over possible alternatives.
Comment: 10 pages.
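The sketch below illustrates only the alphabet-compression side of such a measure: a data alphabet is mapped to a much smaller visual alphabet, and the resulting entropy reduction is divided by a cost term. The bin counts, the unit cost and the omission of any distortion term are assumptions for illustration and do not reproduce the paper's exact formula.

import numpy as np

def entropy_bits(p):
    # Shannon entropy (in bits) of a discrete distribution.
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)

# Hypothetical input alphabet: raw values quantised to 256 levels.
data = rng.normal(size=10_000)
counts_in, _ = np.histogram(data, bins=256)
p_in = counts_in / counts_in.sum()

# Visualization step: the same data shown with a 16-level colour encoding,
# i.e. a transformation to a much smaller output alphabet.
counts_out, _ = np.histogram(data, bins=16)
p_out = counts_out / counts_out.sum()

alphabet_compression = entropy_bits(p_in) - entropy_bits(p_out)  # benefit proxy
cost = 1.0   # hypothetical cost of performing and reading this step

print("maximal entropy:", np.log2(256), "->", np.log2(16), "bits")
print("entropy reduction:", round(alphabet_compression, 3), "bits")
print("benefit/cost ratio:", round(alphabet_compression / cost, 3))

Each step of a workflow can be scored this way, so a pipeline of transformations accumulates a benefit-per-cost profile that can be compared against alternative designs.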