Monotone Pieces Analysis for Qualitative Modeling
It is a crucial task to build qualitative models of industrial applications for model-based diagnosis. A Model Abstraction procedure is designed to automatically transform a quantitative model into a qualitative model. If the data are monotone, the behavior can easily be abstracted using the corners of the bounding rectangle; hence, many existing model abstraction approaches rely on monotonicity. But robustly detecting monotone pieces in scattered data obtained from numerical simulation or experiments is not a trivial problem. This paper introduces an approach based on scale-dependent monotonicity: the notion that monotonicity can be defined relative to a scale. Real-valued functions defined on a finite set of reals (e.g., simulation results) can be partitioned into quasi-monotone segments. The endpoints of the monotone segments serve as the initial set of landmarks for qualitative model abstraction, which proceeds as an iterative refinement process starting from those landmarks. The monotonicity analysis presented here can be used in constructing many other kinds of qualitative models; it is robust and computationally efficient.
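The abstract does not include code; the following is a minimal sketch of the idea of scale-dependent monotonicity, under the assumption that a reversal smaller than a chosen scale is treated as noise. The function name `monotone_pieces` and the exact segmentation rule are illustrative, not the paper's algorithm.

```python
def monotone_pieces(ys, scale):
    """Partition a sequence of function values into quasi-monotone
    segments: within a segment, reversals smaller than `scale` are
    treated as noise. Returns (start, end) index pairs, end inclusive;
    segment endpoints can serve as landmarks for model abstraction."""
    pieces = []
    start = 0
    direction = 0            # 0 = unknown, +1 = rising, -1 = falling
    extreme = ys[0]          # running extreme value of the current piece
    extreme_i = 0
    for i in range(1, len(ys)):
        y = ys[i]
        if direction >= 0 and y >= extreme:
            extreme, extreme_i, direction = y, i, +1
        elif direction <= 0 and y <= extreme:
            extreme, extreme_i, direction = y, i, -1
        elif abs(y - extreme) > scale:
            # Reversal larger than the scale: close the current piece
            # at its extreme point and start the next piece there.
            pieces.append((start, extreme_i))
            start = extreme_i
            direction = -direction
            extreme, extreme_i = y, i
    pieces.append((start, len(ys) - 1))
    return pieces
```

On data with a small dip, a coarse scale yields one rising piece while a fine scale splits at the dip, illustrating how the detected monotone structure depends on the chosen scale.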
On the Asymptotic Behaviour of Cosmological Models in Scalar-Tensor Theories of Gravity
We study the qualitative properties of cosmological models in scalar-tensor
theories of gravity by exploiting the formal equivalence of these theories with
general relativity minimally coupled to a scalar field under a conformal
transformation and field redefinition. In particular, we investigate the
asymptotic behaviour of spatially homogeneous cosmological models in a class of
scalar-tensor theories which are conformally equivalent to general relativistic
Bianchi cosmologies with a scalar field and an exponential potential whose
qualitative features have been studied previously. Particular attention is
focussed on those scalar-tensor theory cosmological models, which are shown to
be self-similar, that correspond to general relativistic models that play an
important rôle in describing the asymptotic behaviour of more general
models (e.g., those cosmological models that act as early-time and late-time
attractors). Comment: 22 pages, submitted to Phys Rev
In All Likelihood, Deep Belief Is Not Enough
Statistical models of natural stimuli provide an important tool for
researchers in the fields of machine learning and computational neuroscience. A
canonical way to quantitatively assess and compare the performance of
statistical models is given by the likelihood. One class of statistical models
which has recently gained increasing popularity and has been applied to a
variety of complex data are deep belief networks. Analyses of these models,
however, have been typically limited to qualitative analyses based on samples
due to the computationally intractable nature of the model likelihood.
Motivated by these circumstances, the present article provides a consistent
estimator for the likelihood that is both computationally tractable and simple
to apply in practice. Using this estimator, a deep belief network which has
been suggested for the modeling of natural image patches is quantitatively
investigated and compared to other models of natural image patches. Contrary to
earlier claims based on qualitative results, the results presented in this
article provide evidence that the model under investigation is not a
particularly good model for natural image patches.
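The abstract does not specify the estimator's construction; a standard way to build a consistent estimator of an intractable likelihood is importance sampling over the latent variables. The sketch below illustrates that general technique, not the paper's specific estimator; `log_joint`, `sample_h`, and `log_q` are hypothetical user-supplied callables.

```python
import numpy as np

def log_likelihood_estimate(x, log_joint, sample_h, log_q, n_samples=5000):
    # Importance-sampling estimate of log p(x) = log sum_h p(x, h):
    # draw h ~ q and average the weights p(x, h) / q(h). The estimate
    # is consistent: it converges to log p(x) as n_samples grows.
    log_w = np.array([
        log_joint(x, h) - log_q(h)
        for h in (sample_h() for _ in range(n_samples))
    ])
    m = log_w.max()  # log-mean-exp, for numerical stability
    return m + np.log(np.mean(np.exp(log_w - m)))
```

A useful sanity check is a small latent-variable model (e.g. a two-component Gaussian mixture) whose exact marginal likelihood is tractable, so the estimate can be compared against the true value.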
Quantitative Analysis of Saliency Models
Previous saliency detection research required the reader to evaluate
performance qualitatively, based on renderings of saliency maps on a few
shapes. This qualitative approach meant it was unclear which saliency models
were better, or how well they compared to human perception. This paper provides
a quantitative evaluation framework that addresses this issue. In the first
quantitative analysis of 3D computational saliency models, we evaluate four
computational saliency models and two baseline models against ground-truth
saliency collected in previous work. Comment: 10 pages