When almost is not even close: Remarks on the approximability of HDTP
A growing number of researchers in Cognitive Science advocate the thesis that human cognitive capacities are constrained by computational tractability. If correct, this thesis can also be expected to have far-reaching consequences for work in Artificial General Intelligence: models and systems considered as a basis for the development of general cognitive architectures with human-like performance would also have to comply with tractability constraints, making in-depth complexity-theoretic analysis a necessary and important part of the standard research and development cycle from a rather early stage onwards. In this paper we present an application case study for such an analysis, based on results from a parametrized complexity and approximation-theoretic analysis of the Heuristic-Driven Theory Projection (HDTP) analogy-making framework.
ACUOS: A System for Modular ACU Generalization with Subtyping and Inheritance
The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-11558-0_40.

Computing generalizers is relevant in a wide spectrum of automated reasoning areas where analogical reasoning and inductive inference are needed. The ACUOS system computes a complete and minimal set of semantic generalizers (also called "anti-unifiers") of two structures in a typed language modulo a set of equational axioms. By supporting types and any (modular) combination of the associativity (A), commutativity (C), and unity (U) algebraic axioms for function symbols, ACUOS allows reasoning about typed data structures, e.g. lists, trees, and (multi-)sets, and typical hierarchical/structural relations such as is-a and part-of. This paper discusses the modular ACU generalization tool ACUOS and illustrates its use in a classical artificial intelligence problem.
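The central notion above, the generalizer (anti-unifier) of two structures, can be illustrated with plain syntactic first-order anti-unification (Plotkin's least general generalization). This is a minimal sketch with a hypothetical term encoding (nested tuples); it deliberately ignores the order-sorted types and A/C/U equational axioms that ACUOS actually handles, which are far more involved:

```python
def lgg(t1, t2, table=None):
    """Least general generalization of two first-order terms.

    Terms are strings (constants/variables) or tuples (f, arg1, ...).
    Toy illustration only: ACUOS additionally works modulo types and
    A/C/U axioms, which this syntactic version does not.
    """
    if table is None:
        table = {}
    # Same function symbol and arity: generalize argument-wise.
    if isinstance(t1, tuple) and isinstance(t2, tuple) \
            and t1[0] == t2[0] and len(t1) == len(t2):
        return (t1[0],) + tuple(lgg(a, b, table)
                                for a, b in zip(t1[1:], t2[1:]))
    if t1 == t2:
        return t1
    # Disagreement pair: introduce a fresh variable, reused
    # consistently so repeated disagreements share one variable.
    key = (t1, t2)
    if key not in table:
        table[key] = f"X{len(table)}"
    return table[key]
```

For example, `lgg(("f","a","a"), ("f","b","b"))` yields `("f","X0","X0")`: the shared variable records that both disagreement positions disagree in the same way, which is what makes the generalization least general.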
Recommended from our members
Theory blending: extended algorithmic aspects and examples
In Cognitive Science, conceptual blending has been proposed as an important cognitive mechanism that facilitates the creation of new concepts and ideas by constrained combination of available knowledge. It thereby provides a possible theoretical foundation for modeling high-level cognitive faculties such as the ability to understand, learn, and create new concepts and theories. Quite often, the development of new mathematical theories and results is based on the combination of previously independent concepts, potentially even originating from distinct subareas of mathematics. Conceptual blending promises to offer a framework for modeling and re-creating this form of mathematical concept invention with computational means. This paper describes a logic-based framework which allows a formal treatment of theory blending (a subform of the general notion of conceptual blending with high relevance for applications in mathematics), discusses an interactive algorithm for blending within the framework, and provides several illustrating worked examples from mathematics.
An NLO QCD analysis of inclusive cross-section and jet-production data from the ZEUS experiment
The ZEUS inclusive differential cross-section data from HERA, for charged and neutral current processes taken with e+ and e- beams, together with differential cross-section data on inclusive jet production in e+ p scattering and dijet production in \gamma p scattering, have been used in a new NLO QCD analysis to extract the parton distribution functions of the proton. The input of jet data constrains the gluon and allows an accurate extraction of \alpha_s(M_Z) at NLO: \alpha_s(M_Z) = 0.1183 \pm 0.0028 (exp.) \pm 0.0008 (model). An additional uncertainty from the choice of scales is estimated as \pm 0.005. This is the first extraction of \alpha_s(M_Z) from HERA data alone.

Comment: 37 pages, 14 figures, to be submitted to EPJC. PDFs available at http://durpdg.dur.ac.uk/hepdata in LHAPDFv
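The extracted value \alpha_s(M_Z) fixes the strong coupling at every other scale through the QCD renormalization group. As a rough illustration only (the ZEUS fit itself is NLO, i.e. two-loop running with flavour thresholds), the one-loop running from the measured value can be sketched as follows, with the M_Z value and fixed nf = 5 being standard assumptions, not numbers from this analysis:

```python
import math

def alpha_s_one_loop(q, alpha_mz=0.1183, mz=91.1876, nf=5):
    """One-loop running of the strong coupling from alpha_s(M_Z).

    Illustrative sketch: the actual extraction uses NLO (two-loop)
    evolution with flavour thresholds. alpha_mz is the central value
    quoted above; mz and nf are assumed standard inputs.
    """
    b0 = (33 - 2 * nf) / (12 * math.pi)  # one-loop beta coefficient
    return alpha_mz / (1 + alpha_mz * b0 * math.log(q ** 2 / mz ** 2))
```

The coupling grows toward lower scales and shrinks toward higher ones, which is why jet data spanning a range of scales constrain the fit.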
Search for the standard model Higgs boson decaying to a bb pair in events with no charged leptons and large missing transverse energy using the full CDF data set
We report on a search for the standard model Higgs boson produced in association with a vector boson in the full data set of proton-antiproton collisions at sqrt(s) = 1.96 TeV recorded by the CDF II detector at the Tevatron, corresponding to an integrated luminosity of 9.45 fb-1. We consider events having no identified charged lepton, a transverse energy imbalance, and two or three jets, of which at least one is consistent with originating from the decay of a bottom quark. We place 95% credibility level upper limits on the production cross section times standard model branching fraction for several mass hypotheses between 90 and 150 GeV/c^2. For a Higgs boson mass of 125 GeV/c^2, the observed (expected) limit is 6.7 (3.6) times the standard model prediction.

Comment: Accepted by Phys. Rev. Lett.
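The "95% credibility level upper limit" quoted in this and the following abstracts is a Bayesian construction. A minimal sketch for a single counting channel with a flat prior on the signal yield and a known background is given below; the actual CDF analyses use multi-bin, multi-channel likelihoods with priors on systematic uncertainties, so this is only the shape of the idea:

```python
import math

def poisson_pmf(n, mu):
    """Poisson probability of observing n events given mean mu."""
    return math.exp(-mu) * mu ** n / math.factorial(n)

def bayesian_upper_limit(n_obs, bkg, cl=0.95, s_max=50.0, steps=100000):
    """95% credibility upper limit on a signal yield s >= 0.

    Single counting channel, flat prior on s, background known exactly.
    Sketch only: real analyses marginalize systematics over many bins.
    """
    ds = s_max / steps
    # Posterior density on a grid of signal hypotheses (up to normalization).
    grid = [poisson_pmf(n_obs, i * ds + bkg) for i in range(steps + 1)]
    total = sum(grid)
    acc = 0.0
    for i, w in enumerate(grid):
        acc += w
        if acc >= cl * total:  # 95% of posterior mass accumulated
            return i * ds
    return s_max
```

With zero observed events and zero background this reproduces the textbook 95% limit of about 3.0 events. Dividing such a limit on the signal yield by the standard model expectation gives the "times the standard model prediction" numbers quoted in these abstracts.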
Search for the standard model Higgs boson decaying to a bb pair in events with one charged lepton and large missing transverse energy using the full CDF data set
We present a search for the standard model Higgs boson produced in association with a W boson in sqrt(s) = 1.96 TeV p-pbar collision data collected with the CDF II detector at the Tevatron, corresponding to an integrated luminosity of 9.45 fb-1. In events consistent with the decay of the Higgs boson to a bottom-quark pair and the W boson to an electron or muon and a neutrino, we set 95% credibility level upper limits on the WH production cross section times the H->bb branching ratio as a function of Higgs boson mass. At a Higgs boson mass of 125 GeV/c2 we observe (expect) a limit of 4.9 (2.8) times the standard model value.

Comment: Submitted to Phys. Rev. Lett. (v2 contains clarifications suggested by PRL)
Evidence for the exclusive decay Bc+- to J/psi pi+- and measurement of the mass of the Bc meson
We report first evidence for a fully reconstructed decay mode of the B_c^{\pm} meson in the channel B_c^{\pm} \to J/psi \pi^{\pm}, with J/psi \to mu^+ mu^-. The analysis is based on an integrated luminosity of 360 pb^{-1} in p\bar{p} collisions at 1.96 TeV center-of-mass energy collected by the Collider Detector at Fermilab. We observe 14.6 \pm 4.6 signal events with a background of 7.1 \pm 0.9 events, and a fit to the J/psi \pi^{\pm} mass spectrum yields a B_c^{\pm} mass of 6285.7 \pm 5.3 (stat) \pm 1.2 (syst) MeV/c^2. The probability of a peak of this magnitude occurring by random fluctuation in the search region is estimated as 0.012%.

Comment: 7 pages, 3 figures. Version 3, accepted by PR
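The "probability of random fluctuation" quoted above is, in its simplest counting-experiment form, the Poisson probability that the background alone produces at least the observed number of events. A sketch follows; note this is illustrative only, since the quoted 0.012% comes from the mass-spectrum fit and accounts for the width of the search region, not from this bare formula:

```python
import math

def poisson_pvalue(n_obs, bkg):
    """P(n >= n_obs) for a Poisson background with mean bkg.

    Counting-experiment analogue of a background-fluctuation
    probability. Illustrative: the actual estimate in the analysis
    is fit-based and includes the look-elsewhere effect.
    """
    # Complement of the cumulative probability P(n < n_obs), summed exactly.
    return 1.0 - sum(math.exp(-bkg) * bkg ** k / math.factorial(k)
                     for k in range(n_obs))
```

Larger observed counts over the same background give smaller p-values, i.e. stronger evidence against the background-only hypothesis.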
Top quark mass measurement using the template method at CDF
We present a measurement of the top quark mass in the lepton+jets and dilepton channels of ttbar decays using the template method. The data sample corresponds to an integrated luminosity of 5.6 fb-1 of ppbar collisions at the Tevatron with sqrt(s) = 1.96 TeV, collected with the CDF II detector. The measurement is performed by constructing templates of three kinematic variables in the lepton+jets channel and two kinematic variables in the dilepton channel. The variables are two reconstructed top quark masses from different jets-to-quarks combinations and the invariant mass of two jets from the W boson decay in the lepton+jets channel, and a reconstructed top quark mass and mT2, a variable related to the transverse mass in events with two missing particles, in the dilepton channel. The simultaneous fit of the templates from signal and background events in the lepton+jets and dilepton channels to the data yields the measured top quark mass.

Comment: submitted to Phys. Rev.
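The template method described above compares binned distributions of data to simulated templates generated at different top-mass hypotheses and picks the hypothesis that fits best. A toy one-variable version, with hypothetical templates and a plain binned Poisson likelihood (the real measurement fits several variables, signal plus background, in two channels simultaneously and interpolates between mass points), can be sketched as:

```python
import math

def neg_log_likelihood(data, template):
    """Binned Poisson negative log-likelihood of observed counts
    given expected counts (constant terms dropped)."""
    return sum(mu - n * math.log(mu) for n, mu in zip(data, template))

def template_fit(data, templates):
    """Return the mass hypothesis whose template best describes the data.

    `templates` maps a hypothesis (e.g. a top-mass value in GeV/c^2)
    to a list of expected bin counts. Toy sketch: the actual analysis
    interpolates templates continuously and fits channels jointly.
    """
    return min(templates, key=lambda m: neg_log_likelihood(data, templates[m]))
```

For instance, data peaking in the middle bin of a three-bin distribution would select the hypothesis whose template also peaks there.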
Search for the standard model Higgs boson decaying to a bb pair in events with two oppositely-charged leptons using the full CDF data set
We present a search for the standard model Higgs boson produced in association with a Z boson in data collected with the CDF II detector at the Tevatron, corresponding to an integrated luminosity of 9.45 fb-1. In events consistent with the decay of the Higgs boson to a bottom-quark pair and the Z boson to electron or muon pairs, we set 95% credibility level upper limits on the ZH production cross section times the H -> bb branching ratio as a function of Higgs boson mass. At a Higgs boson mass of 125 GeV/c^2 we observe (expect) a limit of 7.1 (3.9) times the standard model value.

Comment: To be submitted to Phys. Rev. Lett.
Evidence for t\bar{t}\gamma Production and Measurement of \sigma_{t\bar{t}\gamma} / \sigma_{t\bar{t}}
Using data corresponding to 6.0 fb-1 of ppbar collisions at sqrt(s) = 1.96 TeV collected by the CDF II detector, we present a cross section measurement of top-quark pair production with an additional radiated photon. The events are selected by looking for a lepton, a photon, significant transverse momentum imbalance, large total transverse energy, and three or more jets, with at least one identified as containing a b quark. The ttbar+photon sample requires the photon to have 10 GeV or more of transverse energy and to be in the central region. Using an event selection optimized for the ttbar+photon candidate sample, we measure the production cross sections of the two samples and their ratio. Control samples in the dilepton+photon and lepton+photon+MET channels are constructed to aid in decay product identification and background measurements. We observe 30 ttbar+photon candidate events compared to the standard model expectation of 26.9 +/- 3.4 events. We measure the ttbar+photon cross section to be 0.18 +/- 0.08 pb, and the ratio of the cross section of ttbar+photon to ttbar to be 0.024 +/- 0.009. Assuming no ttbar+photon production, we observe a probability of 0.0015 of the background events alone producing 30 events or more, corresponding to 3.0 standard deviations.

Comment: 9 pages, 3 figures
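The conversion from the background-only probability p = 0.0015 to "3.0 standard deviations" is the usual one-sided Gaussian significance, the z-score at which the upper tail of a standard normal equals p. A minimal sketch using the Python standard library:

```python
from statistics import NormalDist

def significance(p_value):
    """One-sided Gaussian significance (z-score) for a given p-value:
    the z at which the standard normal upper tail equals p_value."""
    return NormalDist().inv_cdf(1.0 - p_value)
```

Evaluating `significance(0.0015)` gives roughly 2.97, i.e. the 3.0 standard deviations quoted in the abstract.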