A Qualitative and Quantitative Evaluation of 8 Clear Sky Models
We provide a qualitative and quantitative evaluation of 8 clear sky models
used in Computer Graphics. We compare the models with each other as well as
with measurements and with a reference model from the physics community. After
a short summary of the physics of the problem, we present the measurements and
the reference model, and how we "invert" it to get the model parameters. We
then give an overview of each CG model, and detail its scope, its algorithmic
complexity, and its results using the same parameters as in the reference
model. We also compare the models with a perceptual study. Our quantitative
results confirm that the fewer simplifications and approximations used to
solve the physical equations, the more accurate the results. We conclude
with a discussion of the advantages and drawbacks of each model, and how to
further improve their accuracy.
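To make the parameter "inversion" step concrete, here is a minimal sketch of fitting a sky model's free parameters against a reference by least squares. The `reference_radiance` and two-parameter `cg_sky_model` functions are hypothetical stand-ins, not the paper's models.

```python
# Fit a CG sky model's free parameters so its radiance matches a
# reference model over a range of view angles. Both model functions
# below are illustrative stand-ins.
import numpy as np
from scipy.optimize import least_squares

angles = np.linspace(0.0, np.pi / 2, 50)  # view zenith angles (radians)

def reference_radiance(theta):
    # Stand-in for the physics reference model's sky radiance.
    return 1.0 + 2.0 * np.cos(theta)

def cg_sky_model(theta, a, b):
    # Stand-in analytic CG model with two free parameters.
    return a + b * np.cos(theta)

def residuals(params):
    a, b = params
    return cg_sky_model(angles, a, b) - reference_radiance(angles)

fit = least_squares(residuals, x0=[0.5, 0.5])
rmse = np.sqrt(np.mean(fit.fun ** 2))
print(f"fitted params: {fit.x}, RMSE vs. reference: {rmse:.3e}")
```

The same residual-based comparison, run with each CG model's best-fit parameters, yields the kind of quantitative accuracy ranking the abstract describes.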
Qualitative System Identification from Imperfect Data
Experience in the physical sciences suggests that the only realistic means of
understanding complex systems is through the use of mathematical models.
Typically, this has come to mean the identification of quantitative models
expressed as differential equations. Quantitative modelling works best when the
structure of the model (i.e., the form of the equations) is known; and the
primary concern is one of estimating the values of the parameters in the model.
For complex biological systems, the model-structure is rarely known and the
modeler has to deal with both model-identification and parameter-estimation. In
this paper we are concerned with providing automated assistance to the first of
these problems. Specifically, we examine the identification by machine of the
structural relationships between experimentally observed variables. These
relationships will be expressed in the form of qualitative abstractions of a
quantitative model. Such qualitative models may not only provide clues to the
precise quantitative model, but also assist in understanding the essence of
that model. Our position in this paper is that background knowledge
incorporating system modelling principles can be used to constrain effectively
the set of good qualitative models. Utilising the model-identification
framework provided by Inductive Logic Programming (ILP) we present empirical
support for this position using a series of increasingly complex artificial
datasets. The results are obtained with qualitative and quantitative data
subject to varying amounts of noise and different degrees of sparsity. The
results also point to the presence of a set of qualitative states, which we
term kernel subsets, that may be necessary for a qualitative model-learner to
learn correct models. We demonstrate scalability of the method to biological
system modelling by identification of the glycolysis metabolic pathway from
data.
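As a concrete example of the qualitative abstraction the paper builds on, the sketch below maps a quantitative time series to qualitative states given by the sign of each value and its direction of change. The three-valued signs and the noise threshold `eps` are illustrative choices, not the paper's exact formalism or its ILP machinery.

```python
# Map quantitative samples to qualitative (magnitude, derivative) states,
# e.g. ('+', 'inc') for a positive, increasing variable.
import numpy as np

def qualitative_states(values, eps=1e-3):
    """Abstract a 1-D time series into (sign, direction-of-change) pairs."""
    states = []
    deltas = np.diff(values)
    for v, dv in zip(values[:-1], deltas):
        mag = "0" if abs(v) < eps else ("+" if v > 0 else "-")
        der = "std" if abs(dv) < eps else ("inc" if dv > 0 else "dec")
        states.append((mag, der))
    return states

t = np.linspace(0, 2 * np.pi, 9)
print(qualitative_states(np.sin(t)))  # sine wave: rises, falls, rises again
```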
Qualitative and quantitative models for ordinal data analysis
In this paper, we explore and compare classical regression and ordinal data models when quantitative data are related to a qualitative assessment. Specifically, we test the approach on a data set of graduated students, and we check the relative performance and the interpretative content of the models. Some further comments conclude the paper.
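A minimal sketch of the kind of comparison the abstract describes, assuming synthetic data and statsmodels' `OrderedModel`: a classical regression treats the ordinal grades as numbers, while the ordered-logit model respects their ordinal nature. The variables and thresholds are invented for illustration, not drawn from the paper's data set.

```python
# Compare classical OLS regression with an ordered-logit model on a
# synthetic ordinal outcome (grade 0 < 1 < 2) driven by one quantitative score.
import numpy as np
import statsmodels.api as sm
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
score = rng.normal(size=300)                   # quantitative predictor
latent = 1.5 * score + rng.logistic(size=300)  # unobserved propensity
grade = np.digitize(latent, [-1.0, 1.0])       # ordinal outcome: 0, 1, 2

ols = sm.OLS(grade, sm.add_constant(score)).fit()   # grades as plain numbers
ordered = OrderedModel(grade, score[:, None], distr="logit").fit(
    method="bfgs", disp=False)

print("OLS slope:        ", ols.params[1])
print("ordered-logit beta:", ordered.params[0])
```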
Machine Learning Framework to Identify Individuals at Risk of Rapid Progression of Coronary Atherosclerosis: From the PARADIGM Registry.
Background: Rapid coronary plaque progression (RPP) is associated with incident cardiovascular events. To date, no method exists for identifying individuals at risk of RPP at a single point in time. This study integrated coronary computed tomography angiography-determined qualitative and quantitative plaque features within a machine learning (ML) framework to determine its performance for predicting RPP.

Methods and Results: Qualitative and quantitative coronary computed tomography angiography plaque characterization was performed in 1083 patients who underwent serial coronary computed tomography angiography from the PARADIGM (Progression of Atherosclerotic Plaque Determined by Computed Tomographic Angiography Imaging) registry. RPP was defined as an annual progression of percentage atheroma volume ≥1.0%. We employed the following ML models: model 1, clinical variables; model 2, model 1 plus qualitative plaque features; model 3, model 2 plus quantitative plaque features. The ML models were compared with the atherosclerotic cardiovascular disease risk score, the Duke coronary artery disease score, and a logistic regression statistical model. 224 patients (21%) were identified as having RPP. Feature selection in the ML framework identified quantitative computed tomography variables as the highest-ranking features, followed by qualitative computed tomography variables and clinical/laboratory variables. ML model 3 exhibited the highest discriminatory performance in identifying individuals who would experience RPP when compared with the atherosclerotic cardiovascular disease risk score, the other ML models, and the statistical model (area under the receiver operating characteristic curve in ML model 3, 0.83 [95% CI 0.78-0.89], versus atherosclerotic cardiovascular disease risk score, 0.60 [0.52-0.67]; Duke coronary artery disease score, 0.74 [0.68-0.79]; ML model 1, 0.62 [0.55-0.69]; ML model 2, 0.73 [0.67-0.80]; all P<0.001; statistical model, 0.81 [0.75-0.87], P=0.128).

Conclusions: Within this ML framework, quantitative atherosclerosis characterization was shown to be the most important feature, ahead of clinical, laboratory, and qualitative measures, in identifying patients at risk of RPP.
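The RPP label itself is straightforward to compute from two serial scans. A minimal sketch with hypothetical field names; only the ≥1.0% per year threshold comes from the abstract.

```python
# Flag rapid plaque progression (RPP): annualized change in percent
# atheroma volume (PAV) of at least 1.0% per year between two CCTA scans.
from dataclasses import dataclass

@dataclass
class SerialScan:
    pav_baseline: float   # percent atheroma volume at first scan (%)
    pav_followup: float   # percent atheroma volume at follow-up scan (%)
    years_between: float  # scan interval in years

def is_rpp(scan: SerialScan, threshold: float = 1.0) -> bool:
    """Return True when annualized PAV progression meets the RPP threshold."""
    annual_change = (scan.pav_followup - scan.pav_baseline) / scan.years_between
    return annual_change >= threshold

# 2.9% absolute progression over 2 years -> 1.45%/year -> RPP.
print(is_rpp(SerialScan(pav_baseline=4.2, pav_followup=7.1, years_between=2.0)))
```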
Quantitative Analysis of Saliency Models
Previous saliency detection research required the reader to evaluate
performance qualitatively, based on renderings of saliency maps on a few
shapes. This qualitative approach meant it was unclear which saliency models
were better, or how well they compared to human perception. This paper provides
a quantitative evaluation framework that addresses this issue. In the first
quantitative analysis of 3D computational saliency models, we evaluate four
computational saliency models and two baseline models against ground-truth
saliency collected in previous work.
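One way such a quantitative evaluation can be scored is sketched below: a rank correlation between predicted and ground-truth per-vertex saliency, so monotone rescalings of a model's output do not affect its score. The metric choice is an assumption for illustration, not necessarily the paper's framework.

```python
# Score a predicted saliency map against ground truth with Spearman rank
# correlation; values near 1 indicate agreement, near 0 indicate chance.
import numpy as np
from scipy.stats import spearmanr

def saliency_score(predicted, ground_truth):
    """Spearman rank correlation between two per-vertex saliency maps."""
    rho, _ = spearmanr(predicted, ground_truth)
    return rho

rng = np.random.default_rng(1)
truth = rng.random(1000)                          # ground-truth saliency
good_model = truth + 0.2 * rng.normal(size=1000)  # noisy but informative
baseline = rng.random(1000)                       # uninformative baseline
print(saliency_score(good_model, truth), saliency_score(baseline, truth))
```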
Weighted Modal Transition Systems
Specification theories as a tool in model-driven development processes of
component-based software systems have recently attracted considerable
attention. Current specification theories are, however, qualitative in nature,
and therefore fragile in the sense that the inevitable approximation of systems
by models, combined with the fundamental unpredictability of hardware
platforms, makes it difficult to transfer conclusions about the behavior, based
on models, to the actual system. Hence this approach is arguably unsuited for
modern software systems. We propose here the first specification theory which
allows one to capture quantitative aspects during the refinement and
implementation process, thus alleviating the problems of the qualitative setting.
Our proposed quantitative specification framework uses weighted modal
transition systems as a formal model of specifications. These are labeled
transition systems with the additional feature that they can model optional
behavior which may or may not be implemented by the system. Satisfaction and
refinement are lifted from the well-known qualitative to our quantitative
setting, by introducing a notion of distances between weighted modal transition
systems. We show that quantitative versions of parallel composition as well as
quotient (the dual to parallel composition) inherit the properties from the
Boolean setting.
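To make the central objects concrete, the sketch below models a weighted modal transition system with may-transitions (optional behavior) and must-transitions (required behavior, a subset of may), plus a naive modal-refinement check that relaxes exact weight matching to a tolerance `eps`. This illustrates the quantitative-refinement idea; it is not the paper's distance construction.

```python
# Weighted modal transition systems with a greatest-fixpoint refinement
# check: impl refines spec when impl's may-behavior is allowed by spec
# and spec's must-behavior is implemented by impl, with transition
# weights allowed to differ by at most eps.
from dataclasses import dataclass, field

@dataclass
class WMTS:
    initial: str
    may: set = field(default_factory=set)   # tuples (src, action, weight, dst)
    must: set = field(default_factory=set)  # subset of `may`

def refines(impl: WMTS, spec: WMTS, eps: float = 0.0) -> bool:
    """Check modal refinement of spec by impl, up to weight tolerance eps."""
    def states(m):
        return {m.initial} | {s for (s, _, _, _) in m.may} | {d for (_, _, _, d) in m.may}
    # Start from all state pairs and prune violating ones until stable.
    rel = {(p, q) for p in states(impl) for q in states(spec)}
    changed = True
    while changed:
        changed = False
        for (p, q) in list(rel):
            ok_may = all(
                any(a == b and abs(w - v) <= eps and (p2, q2) in rel
                    for (q1, b, v, q2) in spec.may if q1 == q)
                for (p1, a, w, p2) in impl.may if p1 == p)
            ok_must = all(
                any(a == b and abs(w - v) <= eps and (p2, q2) in rel
                    for (p1, a, w, p2) in impl.must if p1 == p)
                for (q1, b, v, q2) in spec.must if q1 == q)
            if not (ok_may and ok_must):
                rel.discard((p, q))
                changed = True
    return (impl.initial, spec.initial) in rel

spec = WMTS("s0", may={("s0", "send", 2.0, "s0")})
impl = WMTS("i0", may={("i0", "send", 2.1, "i0")}, must={("i0", "send", 2.1, "i0")})
print(refines(impl, spec, eps=0.2))  # True: weights differ by only 0.1
```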
Scale-Based Monotonicity Analysis in Qualitative Modelling with Flat Segments
Qualitative models are often more suitable than classical quantitative models in tasks such as Model-based Diagnosis (MBD), explaining system behavior, and designing novel devices from first principles. Monotonicity is an important feature to leverage when constructing qualitative models. Detecting monotonic pieces robustly and efficiently from sensor or simulation data remains an open problem. This paper presents scale-based monotonicity: the notion that monotonicity can be defined relative to a scale. Real-valued functions defined on a finite set of reals, e.g. sensor data or simulation results, can be partitioned into quasi-monotonic segments, i.e. segments monotonic with respect to a scale, in linear time. A novel segmentation algorithm is introduced along with a scale-based definition of "flatness".
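A greedy single-pass sketch of the idea: counter-movements no larger than `scale` are treated as flat, and a new segment starts only when the series backtracks by more than the scale. This illustrates scale-based quasi-monotonic segmentation in linear time; it is not the paper's exact algorithm.

```python
# Partition a sequence into quasi-monotonic index ranges: within each
# segment the series rises (or falls) overall, and any reversal is no
# larger than `scale`. Adjacent segments share their boundary extreme.
def quasi_monotonic_segments(values, scale):
    """Split `values` into index ranges monotonic up to fluctuations <= scale."""
    segments, start = [], 0
    direction = 0                      # 0 unknown, +1 rising, -1 falling
    ext, ext_i = values[0], 0          # extreme of the current run, its index
    for i in range(1, len(values)):
        v = values[i]
        if direction * (v - ext) > 0 or (direction == 0 and abs(v - ext) > scale):
            if direction == 0:
                direction = 1 if v > ext else -1
            ext, ext_i = v, i          # the run continues in its direction
        elif direction * (ext - v) > scale:      # reversal beyond the scale
            segments.append((start, ext_i))      # close segment at its extreme
            start, direction = ext_i, -direction
            ext, ext_i = v, i
    segments.append((start, len(values) - 1))
    return segments

data = [0.0, 1.0, 2.0, 1.9, 2.1, 3.0, 0.5, -1.0]
print(quasi_monotonic_segments(data, scale=0.5))  # [(0, 5), (5, 7)]
```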