Implementing the Gaia Astrometric Global Iterative Solution (AGIS) in Java
This paper provides a description of the Java software framework which has
been constructed to run the Astrometric Global Iterative Solution for the Gaia
mission. This is the mathematical framework to provide the rigid reference
frame for Gaia observations from the Gaia data itself. This process makes Gaia
a self calibrated, and input catalogue independent, mission. The framework is
highly distributed typically running on a cluster of machines with a database
back end. All code is written in the Java language. We describe the overall
architecture and some of the details of the implementation.
Comment: Accepted for Experimental Astronomy
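The global iterative solution alternates between blocks of unknowns until the reference frame is self-consistent. A minimal, single-machine sketch of such a block-iterative least-squares scheme (a toy linear model in Python, not the actual distributed Java/AGIS code; the two blocks, `A_s`, `A_a`, and all sizes are illustrative assumptions) might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear model: observations depend on a "source" parameter block s and an
# "attitude" parameter block a (hypothetical stand-ins for the real unknowns).
n_obs, n_s, n_a = 200, 10, 5
A_s = rng.normal(size=(n_obs, n_s))
A_a = rng.normal(size=(n_obs, n_a))
s_true = rng.normal(size=n_s)
a_true = rng.normal(size=n_a)
obs = A_s @ s_true + A_a @ a_true + rng.normal(scale=0.01, size=n_obs)

# Block-iterative (Gauss-Seidel style) solution: solve for one block while
# holding the other fixed, then swap, and repeat until the updates are small.
s = np.zeros(n_s)
a = np.zeros(n_a)
for _ in range(100):
    s_new, *_ = np.linalg.lstsq(A_s, obs - A_a @ a, rcond=None)
    a_new, *_ = np.linalg.lstsq(A_a, obs - A_s @ s_new, rcond=None)
    converged = (np.max(np.abs(s_new - s)) < 1e-9
                 and np.max(np.abs(a_new - a)) < 1e-9)
    s, a = s_new, a_new
    if converged:
        break
```

In the real system each block solve is distributed across a cluster with a database back end; this sketch only shows the alternating structure of the iteration.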
Supervised cross-modal factor analysis for multiple modal data classification
In this paper we study the problem of learning from multi-modal data for
the purpose of document classification. In this problem, each document is
composed of two different modalities of data, i.e., an image and a text.
Cross-modal factor analysis (CFA) has been proposed to project the two
modalities of data to a shared data space, so that the classification of an
image or a text can be performed directly in this space. A disadvantage of
CFA is that it ignores the supervision information. In this paper, we
improve CFA by incorporating the supervision information to represent and
classify both the image and text modalities of documents. We project both
image and text data to a shared data space by factor analysis, and then
train a class label predictor in the shared space to use the class label
information. The factor analysis parameters and the predictor parameters
are learned jointly by solving a single objective function. With this
objective function, we minimize the distance between the projections of the
image and text of the same document, and the classification error of the
projection measured by a hinge loss function. The objective function is
optimized by an alternating optimization strategy in an iterative
algorithm. Experiments on two multi-modal document data sets show the
advantage of the proposed algorithm over other CFA methods.
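The joint objective (agreement between the two projections plus a hinge loss in the shared space) can be illustrated with a simplified NumPy sketch. This uses plain joint subgradient descent rather than the paper's alternating scheme, and the dimensions, `lam`, `lr`, and all variable names are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multi-modal documents: paired image and text features (hypothetical dims).
n, d_img, d_txt, k = 40, 20, 15, 5
X_img = rng.normal(size=(n, d_img))
X_txt = rng.normal(size=(n, d_txt))
y = rng.choice([-1.0, 1.0], size=n)              # binary class labels

W_img = rng.normal(scale=0.1, size=(d_img, k))   # image factor (projection) matrix
W_txt = rng.normal(scale=0.1, size=(d_txt, k))   # text factor (projection) matrix
w = np.zeros(k)                                  # linear predictor in shared space

lam, lr = 0.5, 0.01
history = []
for _ in range(300):
    P_img, P_txt = X_img @ W_img, X_txt @ W_txt
    diff = P_img - P_txt                          # same document, two modalities
    P = 0.5 * (P_img + P_txt)                     # shared-space representation
    margin = y * (P @ w)
    # Objective: projection distance + hinge-loss classification error.
    history.append(np.mean(diff ** 2)
                   + lam * np.mean(np.maximum(0.0, 1.0 - margin)))
    active = (margin < 1.0).astype(float)         # documents inside the margin
    g_P = -(active * y)[:, None] * w / n          # hinge subgradient w.r.t. P
    W_img -= lr * (2 * X_img.T @ diff / n + lam * 0.5 * X_img.T @ g_P)
    W_txt -= lr * (-2 * X_txt.T @ diff / n + lam * 0.5 * X_txt.T @ g_P)
    w -= lr * lam * (P.T @ (-(active * y)) / n)
```

The two gradient terms per factor matrix mirror the two terms of the objective: one pulls the paired projections together, the other reduces the hinge loss of the shared-space predictor.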
Sketching for Large-Scale Learning of Mixture Models
Learning parameters from voluminous data can be prohibitive in terms of
memory and computational requirements. We propose a "compressive learning"
framework where we estimate model parameters from a sketch of the training
data. This sketch is a collection of generalized moments of the underlying
probability distribution of the data. It can be computed in a single pass on
the training set, and is easily computable on streams or distributed datasets.
The proposed framework shares similarities with compressive sensing, which aims
at drastically reducing the dimension of high-dimensional signals while
preserving the ability to reconstruct them. To perform the estimation task, we
derive an iterative algorithm analogous to sparse reconstruction algorithms in
the context of linear inverse problems. We exemplify our framework with the
compressive estimation of a Gaussian Mixture Model (GMM), providing heuristics
on the choice of the sketching procedure and theoretical guarantees of
reconstruction. We experimentally show on synthetic data that the proposed
algorithm yields results comparable to the classical Expectation-Maximization
(EM) technique while requiring significantly less memory and fewer computations
when the number of database elements is large. We further demonstrate the
potential of the approach on real large-scale data (over 10^8 training samples)
for the task of model-based speaker verification. Finally, we draw some
connections between the proposed framework and approximate Hilbert space
embedding of probability distributions using random features. We show that the
proposed sketching operator can be seen as an innovative method to design
translation-invariant kernels adapted to the analysis of GMMs. We also use this
theoretical framework to derive information preservation guarantees, in the
spirit of infinite-dimensional compressive sensing.
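The sketch described above, a collection of generalized moments computed in a single pass, can be illustrated with empirical characteristic-function samples at random frequencies (closely related to random Fourier features). The data, the frequency matrix `Omega`, and the sizes below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set drawn from a 2-component Gaussian mixture (hypothetical).
n, d, m = 5000, 2, 64
X = np.concatenate([rng.normal(-2.0, 1.0, size=(n // 2, d)),
                    rng.normal(+2.0, 1.0, size=(n // 2, d))])

# Sketching operator: m random frequencies; each sketch entry is an empirical
# generalized moment E[exp(i w^T x)] of the data distribution.
Omega = rng.normal(scale=1.0, size=(m, d))

def sketch(X, Omega):
    # One pass over the data; constant memory in n once computed on a stream.
    return np.exp(1j * (X @ Omega.T)).mean(axis=0)

z = sketch(X, Omega)   # complex vector of length m summarizing the whole set
```

Because each entry is an average, the sketch of a dataset is the (size-weighted) average of the sketches of its parts, which is what makes it easy to compute on streams or distributed datasets.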
A Generic Conceptual Model for Risk Analysis in a Multi-agent Based Collaborative Design Environment
Organised by: Cranfield University
This paper presents a generic conceptual model of risk evaluation in order to manage risk through
related constraints and variables in a multi-agent collaborative design environment. Initially, a hierarchical
constraint network is developed to map constraints and variables. Then, an effective approximation
technique named the Risk Assessment Matrix is adopted to evaluate the risk level and rank priorities after
probability quantification and consequence validation. Additionally, an Intelligent Data-based Reasoning
Methodology is extended to deal with risk mitigation by combining inductive learning methods and reasoning
consistency algorithms with feasible solution strategies. Finally, two empirical studies were conducted to
validate the effectiveness and feasibility of the conceptual model.
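A Risk Assessment Matrix maps a probability rating and a consequence rating to a qualitative risk level used for prioritisation. The sketch below uses a 1-5 rating scale, a simple product rule, and threshold values that are all illustrative assumptions, not the paper's actual matrix:

```python
def risk_level(probability, consequence):
    """Map 1-5 probability and consequence ratings to a qualitative risk level.

    The product rule and thresholds here are one common convention
    (hypothetical), chosen only to illustrate the matrix lookup.
    """
    score = probability * consequence
    if score <= 4:
        return "low"
    if score <= 9:
        return "medium"
    if score <= 16:
        return "high"
    return "extreme"
```

In a collaborative design setting, each (constraint, variable) risk item would be rated by the agents involved and then ranked by the resulting level.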