Ontology of core data mining entities
In this article, we present OntoDM-core, an ontology of core data mining
entities. OntoDM-core defines the most essential data mining entities in a three-layered
ontological structure comprising a specification, an implementation and an application
layer. It provides a representational framework for the description of mining
structured data, and in addition provides taxonomies of datasets, data mining tasks,
generalizations, data mining algorithms and constraints, based on the type of data.
OntoDM-core is designed to support a wide range of applications/use cases, such as
semantic annotation of data mining algorithms, datasets and results; annotation of
QSAR studies in the context of drug discovery investigations; and disambiguation of
terms in text mining. The ontology has been thoroughly assessed following the practices
in ontology engineering, is fully interoperable with many domain resources and
is easy to extend.
The Interaction of Yer Deletion and Nasal Assimilation in Optimality Theory
The problem of opacity presents a challenge for generative phonology. This paper examines the process of Nasal Assimilation in Polish rendered opaque by the process of Vowel Deletion in Optimality Theory (Prince & Smolensky, 1993), which is currently a dominant model for phonological analysis. The opaque interaction of the two processes exposes the inadequacy of standard Optimality Theory, arising from the fact that standard OT is a non-derivational theory. It is argued that only by introducing intermediate levels can Optimality Theory deal with complex cases of opaque interactions.
Explaining the Justificatory Asymmetry between Statistical and Individualized Evidence
In some cases, there appears to be an asymmetry in the evidential value of statistical and more individualized evidence. For example, while I may accept that Alex is guilty based on eyewitness testimony that is 80% likely to be accurate, it does not seem permissible to do so based on the fact that 80% of a group that Alex is a member of are guilty. In this paper I suggest that rather than reflecting a deep defect in statistical evidence, this asymmetry might arise from a general constraint on rational inquiry. Plausibly, the degree of evidential support needed to justify taking a proposition to be true depends on the stakes of error. While relying on statistical evidence plausibly raises the stakes by introducing new kinds of risk to members of the reference class, paradigmatically 'individualized' evidence, that is, evidence tracing back to A's voluntary behavior, can lower the stakes. The net result explains the apparent evidential asymmetry without positing a deep difference in the brute justificatory power of different types of evidence.
Revisiting Theories with Enhanced Higgs Couplings to Weak Gauge Bosons
Based on recent LHC Higgs analyses and in anticipation of future results we
revisit theories where Higgs bosons can couple to weak gauge bosons with
enhanced strength relative to the Standard Model value. Specifically, we look
at the Georgi-Machacek model and its generalizations where higher "spin"
representations of SU(2)_L break electroweak symmetry while maintaining
custodial SU(2). In these theories, there is not only a Higgs-like boson but
partner Higgs scalars transforming under representations of custodial SU(2),
leading to a rich phenomenology. These theories serve as a consistent
theoretical and experimental framework to explain enhanced couplings to gauge
bosons, including fermiophobic Higgses. We focus on the phenomenology of a
neutral scalar partner to the Higgs, which is determined once the Higgs
couplings are specified. Depending on the parameter space, this partner could
i) have enhanced fermion and gauge boson couplings and should be searched for
at high mass (> 600 GeV), ii) have suppressed couplings and could be searched
for at lower masses, where the Standard Model Higgs has already been ruled out,
or iii) have fermiophilic couplings, where it can be searched for in heavy
Higgs and top resonance searches. In the first two regions, the partner also
has substantial decay rates into a pair of Higgs bosons. We touch briefly on
the more model-dependent effects of the nontrivial SU(2)_C multiplets, which
have exotic signals, such as a doubly-charged Higgs. We also discuss how the
loop induced effects of these scalars tend to reduce the Higgs decay rate to
photons, adding an additional uncertainty when extracting the couplings for the
Higgs boson.
Information structure in linguistic theory and in speech production: validation of a cross-linguistic data set
The aim of this paper is to validate a dataset collected by means of production experiments which are part of the Questionnaire on Information Structure. The experiments generate a range of information structure contexts that have been observed in the literature to induce specific constructions. This paper compares the speech production results from a subset of these experiments with specific claims about the reflexes of information structure in four different languages. The results allow us to evaluate, and in most cases validate, the efficacy of our elicitation paradigms, to identify potentially fruitful avenues of future research, and to highlight issues involved in interpreting speech production data of this kind.
Massive Gravity Theories and limits of Ghost-free Bigravity models
We construct a class of theories which extend New Massive Gravity to higher
orders in curvature in any dimension. The lagrangians arise as limits of a new
class of bimetric theories of Lovelock gravity, which are unitary theories free
from the Boulware-Deser ghost. These Lovelock bigravity models represent the
most general non-chiral ghost-free theories of an interacting massless and
massive spin-two field in any dimension. The scaling limit is taken in such a
way that unitarity is explicitly broken, but the Boulware-Deser ghost remains
absent. This automatically implies the existence of a holographic c-theorem
for these theories. We also show that the Born-Infeld extension of New Massive
Gravity falls into our class of models demonstrating that this theory is also
free of the Boulware-Deser ghost. These results extend existing connections
between New Massive Gravity, bigravity theories, Galileon theories and
holographic c-theorems.
Modelling the formation of phonotactic restrictions across the mental lexicon
Experimental data shows that adult learners of an artificial language with a phonotactic restriction learned this restriction better when being trained on word types (e.g. when they were presented with 80 different words twice each) than when being trained on word tokens (e.g. when presented with 40 different words four times each) (Hamann & Ernestus submitted). These findings support Pierrehumbert's (2003) observation that phonotactic co-occurrence restrictions are formed across lexical entries, since only lexical levels of representation can be sensitive to type frequencies.
A Deflationary Account of Mental Representation
Among the cognitive capacities of evolved creatures is the capacity to represent. Theories in cognitive neuroscience typically explain our manifest representational capacities by positing internal representations, but there is little agreement about how these representations function, especially with the relatively recent proliferation of connectionist, dynamical, embodied, and enactive approaches to cognition. In this talk I sketch an account of the nature and function of representation in cognitive neuroscience that couples a realist construal of representational vehicles with a pragmatic account of mental content. I call the resulting package a deflationary account of mental representation and I argue that it avoids the problems that afflict competing accounts.