Towards Approximate Model Transformations
As the size and complexity of models grow, there is a need for novel mechanisms and tools for transforming them. This is required, for example, when model transformations must produce target models without access to the complete source models or within very short time, as happens with streaming models, or with very large models for which the transformation algorithms become too slow to be of practical use if the complete population of the model is examined.
In this paper we introduce Approximate Model Transformations, which aim at producing target models that are accurate enough to provide meaningful and useful results in an efficient way, without having to be fully correct. So to speak, this kind of transformation trades accuracy for execution performance. In particular, we redefine the traditional OCL operators used to query models (e.g., allInstances, select, collect) by adopting sampling techniques, and we analyse the accuracy of the results of approximate model transformations.

Universidad de Málaga, Campus de Excelencia Internacional Andalucía Tech. European Commission under the ICT Policy Support Programme (grant no. 317859). Research Project TIN2011-23795
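The core idea of sampling-based query operators can be illustrated with a minimal sketch. This is not the paper's implementation; the function names (`approx_all_instances`, `approx_select`) and the scaling of the estimate are illustrative assumptions, showing only how querying a random sample instead of the full model population trades accuracy for speed:

```python
import random

def approx_all_instances(model_elements, sample_fraction=0.1, seed=0):
    # Hypothetical sampling-based variant of OCL allInstances:
    # return a random subset of the model population instead of all of it.
    rng = random.Random(seed)
    k = max(1, int(len(model_elements) * sample_fraction))
    return rng.sample(model_elements, k)

def approx_select(elements, predicate):
    # select behaves as usual, but runs over the sample only.
    return [e for e in elements if predicate(e)]

# Toy model population: 10,000 elements; query asks how many are "even".
elements = list(range(10_000))
sample = approx_all_instances(elements, sample_fraction=0.05)
matches = approx_select(sample, lambda x: x % 2 == 0)

# Scale the sample count back up to estimate the exact answer (5,000).
estimated = len(matches) / 0.05
```

The estimate is approximate (it fluctuates around the true count of 5,000), but the query touches only 5% of the elements, which is the accuracy-for-performance trade-off the abstract describes.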
Learning Independent Causal Mechanisms
Statistical learning relies upon data sampled from a distribution, and we
usually do not care what actually generated it in the first place. From the
point of view of causal modeling, the structure of each distribution is induced
by physical mechanisms that give rise to dependences between observables.
Mechanisms, however, can be meaningful autonomous modules of generative models
that make sense beyond a particular entailed data distribution, lending
themselves to transfer between problems. We develop an algorithm to recover a
set of independent (inverse) mechanisms from a set of transformed data points.
The approach is unsupervised and based on a set of experts that compete for
data generated by the mechanisms, driving specialization. We analyze the
proposed method in a series of experiments on image data. Each expert learns to
map a subset of the transformed data back to a reference distribution. The
learned mechanisms generalize to novel domains. We discuss implications for
transfer learning and links to recent trends in generative modeling.

Comment: ICML 201
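The competitive training scheme can be sketched in a toy one-dimensional setting. This is an illustrative assumption, not the paper's image-domain setup: mechanisms are scalar shifts, each "expert" is a single learnable shift parameter, and the reference distribution is centred at 0, so the expert whose reconstruction lands closest to 0 wins the point and is the only one updated:

```python
import random

def train_competing_experts(points, n_experts=2, lr=0.1, epochs=30, seed=1):
    # Toy winner-take-all training: each expert holds a scalar shift theta,
    # its inverse mechanism maps x -> x - theta. The expert whose output is
    # closest to the reference distribution (centred at 0) wins the point.
    rng = random.Random(seed)
    thetas = [rng.uniform(-1, 1) for _ in range(n_experts)]
    for _ in range(epochs):
        for x in points:
            scores = [abs(x - t) for t in thetas]
            winner = scores.index(min(scores))
            # Only the winner updates, which drives specialization:
            # each expert converges to the shift of one mechanism.
            thetas[winner] += lr * (x - thetas[winner])
    return thetas

# Reference samples near 0, transformed by two mechanisms: shift +5, shift -3.
rng = random.Random(0)
data = [5 + rng.gauss(0, 0.1) for _ in range(100)] + \
       [-3 + rng.gauss(0, 0.1) for _ in range(100)]
rng.shuffle(data)

thetas = sorted(train_competing_experts(data))
# One expert recovers the -3 shift, the other the +5 shift.
```

The unsupervised specialization emerges purely from the competition: no point is ever labeled with the mechanism that generated it, yet each expert ends up inverting exactly one of the two shifts.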