A Neutrosophic Description Logic
Description Logics (DLs) are appropriate, widely used logics for managing structured knowledge. They allow reasoning about individuals and concepts, i.e. sets of individuals with common properties. Typically, DLs are limited to dealing with crisp, well-defined concepts, that is, concepts for which the question of whether an individual is an instance has a yes/no answer. More often than not, the concepts encountered in the real world do not have precisely defined criteria of membership: we may say that an individual is an instance of a concept only to a certain degree, depending on the individual's properties. The DLs that deal with such fuzzy concepts are called fuzzy DLs. In order to deal with fuzzy, incomplete, indeterminate, and inconsistent concepts, we need to extend the fuzzy DLs by combining neutrosophic logic with a classical DL. In particular, concepts become neutrosophic (here neutrosophic means fuzzy, incomplete, indeterminate, and inconsistent), so reasoning about neutrosophic concepts is supported. We define the syntax and semantics of this logic and describe its properties.
Comment: 18 pages. Presented at the IEEE International Conference on Granular Computing, Georgia State University, Atlanta, USA, May 200
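As an illustration of degree-valued membership, here is a minimal Python sketch of a neutrosophic membership triple. The class name, the (t, i, f) representation, and the min/max connective are illustrative assumptions in the spirit of fuzzy-logic semantics, not the syntax defined in the paper.

```python
from dataclasses import dataclass

@dataclass
class NeutrosophicDegree:
    """Membership of an individual in a concept, as degrees of
    truth (t), indeterminacy (i), and falsity (f), each in [0, 1]."""
    t: float
    i: float
    f: float

    def conj(self, other: "NeutrosophicDegree") -> "NeutrosophicDegree":
        # Hypothetical min/max connective for concept intersection
        # (an assumption here, mirroring Zadeh-style fuzzy semantics).
        return NeutrosophicDegree(min(self.t, other.t),
                                  max(self.i, other.i),
                                  max(self.f, other.f))

# An individual might belong to TallPerson to degree 0.7, with
# indeterminacy 0.2 and falsity 0.1; likewise for Athlete.
tall = NeutrosophicDegree(t=0.7, i=0.2, f=0.1)
athlete = NeutrosophicDegree(t=0.9, i=0.05, f=0.05)
print(tall.conj(athlete))  # membership in the intersection concept
```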
Inferior Alveolar Nerve Block in Pediatric Patients: Quantitative Assessment
Pediatric inferior alveolar (IA) nerve block landmarks are poorly defined, and quantitative assessments of their efficacy are unavailable. We assessed the landmarks and planes employed in pediatric IA blocks, together with a new landmark, to establish their value.
Consensus and meta-analysis regulatory networks for combining multiple microarray gene expression datasets
Microarray data is a key source of experimental data for modelling gene regulatory interactions from expression levels. With the rapid increase of publicly available microarray data comes the opportunity to produce regulatory network models based on multiple datasets. Such models are potentially more robust with greater confidence, and place less reliance on a single dataset. However, combining datasets directly can be difficult, as experiments are often conducted on different microarray platforms and in different laboratories, leading to inherent biases in the data that are not always removed through pre-processing such as normalisation. In this paper we compare two frameworks for combining microarray datasets to model regulatory networks: pre- and post-learning aggregation. In pre-learning approaches, such as using simple scale-normalisation prior to the concatenation of datasets, a model is learnt from a combined dataset, whilst in post-learning aggregation individual models are learnt from each dataset and the models are combined. We present two novel approaches for post-learning aggregation, each based on aggregating high-level features of Bayesian network models that have been generated from different microarray expression datasets. Meta-analysis Bayesian networks are based on combining statistical confidences attached to network edges, whilst Consensus Bayesian networks identify consistent network features across all datasets. We apply both approaches to multiple datasets from synthetic and real (Escherichia coli and yeast) networks and demonstrate that both methods can improve on networks learnt from a single dataset or from an aggregated dataset formed using standard scale-normalisation.
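A minimal sketch of the two post-learning aggregation ideas in Python, assuming each dataset has already yielded a Bayesian network summarized as edge-confidence scores. The function names, the mean-confidence combination rule, and the threshold are illustrative assumptions, not the paper's exact estimators.

```python
from collections import defaultdict

def consensus_network(edge_sets):
    """Consensus idea: keep only edges present in every per-dataset network."""
    common = set(edge_sets[0])
    for edges in edge_sets[1:]:
        common &= set(edges)
    return common

def meta_analysis_network(edge_confidences, threshold=0.5):
    """Meta-analysis idea: combine per-dataset edge confidences (here a
    simple mean, treating an absent edge as confidence 0) and keep edges
    whose combined confidence clears a threshold."""
    totals = defaultdict(float)
    for conf in edge_confidences:
        for edge, c in conf.items():
            totals[edge] += c
    n = len(edge_confidences)
    return {e: s / n for e, s in totals.items() if s / n >= threshold}

# Two (hypothetical) per-dataset networks over E. coli genes:
d1 = {("lexA", "recA"): 0.9, ("crp", "araC"): 0.4}
d2 = {("lexA", "recA"): 0.8, ("fnr", "narG"): 0.7}
print(consensus_network([list(d1), list(d2)]))  # {('lexA', 'recA')}
print(meta_analysis_network([d1, d2]))          # edges with mean conf >= 0.5
```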
Thinking About Causation: A Causal Language with Epistemic Operators
In this paper we propose a formal framework for modeling the interaction of causal and (qualitative) epistemic reasoning. To this purpose, we extend the notion of a causal model [11, 16, 17, 26] with a representation of the epistemic state of an agent. On the side of the object language, we add operators to express knowledge and the act of observing new information. We provide a sound and complete axiomatization of the logic, and discuss the relation of this framework to causal team semantics.
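To make the causal-model half concrete, here is a toy structural causal model with an intervention operator in Python. The one-function-per-variable representation and the do() helper are a generic illustration of causal models in the style of the cited literature, not the paper's epistemic language.

```python
def evaluate(model, exogenous):
    """Solve a recursive structural causal model: `model` maps each
    endogenous variable to a function of the values computed so far
    (variables are assumed to be listed in causal order)."""
    values = dict(exogenous)
    for var, fn in model.items():
        values[var] = fn(values)
    return values

def do(model, var, value):
    """Intervention: replace var's structural equation by a constant."""
    intervened = dict(model)
    intervened[var] = lambda _: value
    return intervened

# Toy model: rain -> sprinkler -> wet grass.
model = {
    "sprinkler": lambda v: not v["rain"],
    "wet": lambda v: v["rain"] or v["sprinkler"],
}
print(evaluate(model, {"rain": False})["wet"])                          # True
print(evaluate(do(model, "sprinkler", False), {"rain": False})["wet"])  # False
```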
Conformative Filtering for Implicit Feedback Data
Implicit feedback is the simplest form of user feedback that can be used for
item recommendation. It is easy to collect and is domain independent. However,
there is a lack of negative examples. Previous work tackles this problem by assuming that users are not interested, or less interested, in the unconsumed items. Those assumptions are often severely violated, since
non-consumption can be due to factors like unawareness or lack of resources.
Therefore, non-consumption by a user does not always mean disinterest or
irrelevance. In this paper, we propose a novel method called Conformative
Filtering (CoF) to address the issue. The motivating observation is that if
there is a large group of users who share the same taste and none of them have
consumed an item before, then it is likely that the item is not of interest to
the group. We perform multidimensional clustering on implicit feedback data
using hierarchical latent tree analysis (HLTA) to identify user 'taste' groups
and make recommendations for a user based on her memberships in the groups and
on the past behavior of the groups. Experiments on two real-world datasets from
different domains show that CoF has superior performance compared to several
common baselines.
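A minimal sketch of the group-based scoring step in Python, assuming the taste groups have already been identified (the paper obtains them with HLTA, which is not reproduced here). The soft membership weights and the popularity-weighted score are illustrative assumptions.

```python
import numpy as np

def cof_scores(membership, consumed):
    """Score items for each user from the past behaviour of their groups.

    membership: (n_users, n_groups) soft group-membership weights.
    consumed:   (n_users, n_items) binary implicit-feedback matrix.
    """
    # Weighted fraction of each group's members that consumed each item.
    group_mass = membership.sum(axis=0)                         # (n_groups,)
    group_pref = membership.T @ consumed / group_mass[:, None]  # (n_groups, n_items)
    # A user's score is the membership-weighted mix of group preferences.
    return membership @ group_pref                              # (n_users, n_items)

rng = np.random.default_rng(0)
membership = rng.dirichlet(np.ones(3), size=5)        # 5 users, 3 groups
consumed = (rng.random((5, 8)) < 0.3).astype(float)   # 8 items
scores = cof_scores(membership, consumed)
# Recommend the highest-scoring items the user has not consumed yet.
print(np.argsort(-scores[0] * (1 - consumed[0]))[:3])
```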
Scaling Analysis of Affinity Propagation
We analyze and exploit some scaling properties of the Affinity Propagation (AP) clustering algorithm proposed by Frey and Dueck (2007). First we observe that a divide-and-conquer strategy, used on a large data set, hierarchically reduces the complexity to O(N^((h+2)/(h+1))), for a data set of size N and a depth h of the hierarchical strategy. For a data set embedded in a d-dimensional space, we show that this is obtained without notably damaging the precision except in dimension d = 2. In fact, for d larger than 2 the relative loss in precision scales like (2/d)^(h/(h+1)). Finally, under some conditions we observe that there is a value s_c of the penalty coefficient s, a free parameter used to fix the number of clusters, which separates a fragmentation phase (for s < s_c) from a coalescent one (for s > s_c) of the underlying hidden cluster structure. At this precise point a self-similarity property holds, which can be exploited by the hierarchical strategy to actually locate its position. From this observation, a strategy based on AP can be defined to find out how many clusters are present in a given dataset.
Comment: 28 pages, 14 figures, Inria research report
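A minimal sketch of the one-level (h = 1) divide-and-conquer strategy using scikit-learn's AffinityPropagation: cluster sqrt(N)-sized chunks, then re-cluster the resulting exemplars, for roughly O(N^(3/2)) work instead of O(N^2). The chunking scheme and fixed random seeds are illustrative simplifications, not the paper's exact procedure.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

def hierarchical_ap(X, seed=0):
    """One-level divide-and-conquer Affinity Propagation (h = 1)."""
    n = len(X)
    k = max(1, int(np.sqrt(n)))   # ~sqrt(N) chunks of ~sqrt(N) points each
    order = np.random.default_rng(seed).permutation(n)
    exemplars = []
    for chunk in np.array_split(order, k):
        ap = AffinityPropagation(random_state=seed).fit(X[chunk])
        exemplars.extend(chunk[ap.cluster_centers_indices_])  # global indices
    # Second level: run AP on the exemplars only.
    top = AffinityPropagation(random_state=seed).fit(X[exemplars])
    return X[np.asarray(exemplars)[top.cluster_centers_indices_]]

# Two well-separated Gaussian blobs of 200 points each.
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 2)) + np.repeat([[0, 0], [6, 6]], 200, axis=0)
print(len(hierarchical_ap(X)))   # number of exemplars found at the top level
```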
Structure of an archaeal PCNA1-PCNA2-FEN1 complex: elucidating PCNA subunit and client enzyme specificity.
The archaeal/eukaryotic proliferating cell nuclear antigen (PCNA) toroidal clamp interacts with a host of DNA-modifying enzymes, providing a stable anchorage and enhancing their respective processivities. Given the broad range of enzymes with which PCNA has been shown to interact, relatively little is known about the mode of assembly of functionally meaningful combinations of enzymes on the PCNA clamp. We have determined the X-ray crystal structure of the Sulfolobus solfataricus PCNA1-PCNA2 heterodimer, bound to a single copy of the flap endonuclease FEN1, at 2.9 Å resolution. We demonstrate the specificity of interaction of the PCNA subunits to form the PCNA1-PCNA2-PCNA3 heterotrimer, and provide a rationale for the specific interaction of the C-terminal PIP-box motif of FEN1 with the PCNA1 subunit. The structure explains the specificity of the individual archaeal PCNA subunits for selected repair enzyme 'clients', and provides insights into the co-ordinated assembly of sequential enzymatic steps in PCNA-scaffolded DNA repair cascades.
Estimation of Controlled Direct Effects
When regression models adjust for mediators on the causal path from exposure to outcome, the regression coefficient of exposure is commonly viewed as a measure of the direct exposure effect. This interpretation can be misleading, even with a randomly assigned exposure, because adjustment for post-exposure measurements introduces bias whenever their association with the outcome is confounded by more than just the exposure. By the same token, adjustment for such confounders remains problematic when these are themselves affected by the exposure. Robins accommodated this by introducing linear structural nested direct effect models with direct effect parameters that can be estimated by using inverse probability weighting by a conditional distribution of the mediator. The resulting estimators are consistent but inefficient, and can be extremely unstable when the mediator is absolutely continuous. We develop direct effect estimators which are not only more efficient but also consistent under a less demanding model for a conditional expectation of the outcome. We find that the one estimator which avoids inverse probability weighting altogether performs best. This estimator is intuitive, computationally straightforward and, as demonstrated by simulation, competes extremely well with ordinary least squares estimators in settings where standard regression is valid.
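The opening claim, that adjusting for a confounded mediator biases the exposure coefficient even under randomization, is easy to check by simulation. A minimal sketch with made-up structural equations (the variable names and effect sizes are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
A = rng.binomial(1, 0.5, n).astype(float)    # randomized exposure
U = rng.normal(size=n)                       # unmeasured mediator-outcome confounder
M = 0.8 * A + U + rng.normal(size=n)         # mediator, confounded by U
Y = 1.0 * M + 1.0 * U + rng.normal(size=n)   # true direct effect of A is zero

def coef_on_first(y, *cols):
    """OLS coefficient on the first regressor (after an intercept)."""
    X = np.column_stack([np.ones(n), *cols])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

print(coef_on_first(Y, A))     # total effect, ~0.8 (all mediated through M)
print(coef_on_first(Y, A, M))  # naive "direct effect": biased away from zero
```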
Type Ia Supernova Light Curve Inference: Hierarchical Bayesian Analysis in the Near Infrared
We present a comprehensive statistical analysis of the properties of Type Ia
SN light curves in the near infrared using recent data from PAIRITEL and the
literature. We construct a hierarchical Bayesian framework, incorporating
several uncertainties including photometric error, peculiar velocities, dust
extinction and intrinsic variations, for coherent statistical inference. SN Ia
light curve inferences are drawn from the global posterior probability of
parameters describing both individual supernovae and the population conditioned
on the entire SN Ia NIR dataset. The logical structure of the hierarchical
model is represented by a directed acyclic graph. Fully Bayesian analysis of
the model and data is enabled by an efficient MCMC algorithm exploiting the
conditional structure using Gibbs sampling. We apply this framework to the
JHK_s SN Ia light curve data. A new light curve model captures the observed
J-band light curve shape variations. The intrinsic variances in peak absolute
magnitudes are: sigma(M_J) = 0.17 +/- 0.03, sigma(M_H) = 0.11 +/- 0.03, and
sigma(M_Ks) = 0.19 +/- 0.04. We describe the first quantitative evidence for
correlations between the NIR absolute magnitudes and J-band light curve shapes,
and demonstrate their utility for distance estimation. The average residual in
the Hubble diagram for the training set SN at cz > 2000 km/s is 0.10 mag. The
new application of bootstrap cross-validation to SN Ia light curve inference
tests the sensitivity of the model fit to the finite sample and estimates the
prediction error at 0.15 mag. These results demonstrate that SN Ia NIR light
curves are as effective as optical light curves, and, because they are less
vulnerable to dust absorption, they have great potential as precise and
accurate cosmological distance indicators.
Comment: 24 pages, 15 figures, 4 tables. Accepted for publication in ApJ. Corrected typo, added references, minor edit
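As a toy analogue of the hierarchical inference described above, here is a Gibbs sampler in Python for a normal-normal model of peak absolute magnitudes (population mean mu, intrinsic scatter tau, per-SN photometric error sigma_i). This is a textbook reduction for illustration, not the paper's full light-curve model, and all numbers are simulated.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated peak magnitudes: true M_i ~ N(mu, tau^2), observed with noise.
mu_true, tau_true, n_sn = -18.3, 0.15, 40
sigma = rng.uniform(0.05, 0.20, n_sn)            # per-SN photometric errors
m_obs = rng.normal(rng.normal(mu_true, tau_true, n_sn), sigma)

mu, tau2 = m_obs.mean(), 0.1**2
samples = []
for _ in range(5000):
    # 1) latent magnitudes | mu, tau2 (conjugate normal update)
    var = 1.0 / (1.0 / sigma**2 + 1.0 / tau2)
    M = rng.normal(var * (m_obs / sigma**2 + mu / tau2), np.sqrt(var))
    # 2) population mean | latents (flat prior)
    mu = rng.normal(M.mean(), np.sqrt(tau2 / n_sn))
    # 3) intrinsic variance | latents (scale-invariant prior on tau2)
    tau2 = np.sum((M - mu) ** 2) / rng.chisquare(n_sn)
    samples.append((mu, np.sqrt(tau2)))

mu_s, tau_s = np.array(samples[1000:]).T
print(f"mu = {mu_s.mean():.2f}, intrinsic sigma = {tau_s.mean():.2f}")
```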
Quantitative microstructure characterization of Ag nanoparticle sintered joints for power die attachment
Samples of sintered Ag joints for power die attachment were prepared using a paste of Ag nanoparticles at 240 °C and 5 MPa for 3 to 17 minutes. Their microstructural features were quantitatively characterized with scanning electron microscopy, transmission electron microscopy, X-ray diffraction, and image analysis. The resulting normalized thickness, pore size, and porosity decreased, and the grain size increased, with increasing sintering time. A time dependence of the form t^(1/n), with n close to 2 or 3, can further be derived for the kinetics of the thinning, densification, and grain growth within the sintered Ag joints.
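A minimal sketch of extracting the kinetic exponent n by a log-log fit of a measured quantity against sintering time; the numbers below are made up for illustration, not measurements from the paper.

```python
import numpy as np

# Illustrative (not measured) grain sizes d(t) ~ d0 * t^(1/n) at times t [min].
t = np.array([3.0, 5.0, 9.0, 13.0, 17.0])
d = np.array([48.0, 57.0, 70.0, 79.0, 86.0])   # grain size, nm (made up)

# Fit log d = log d0 + (1/n) * log t, so the slope estimates 1/n.
slope, intercept = np.polyfit(np.log(t), np.log(d), 1)
print(f"1/n = {slope:.2f}  ->  n = {1/slope:.1f}")   # n close to 3 here
```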