
    Co-Evolutionary Learning for Cognitive Computer Generated Entities

    In this paper, a hybrid approach is advocated for learning the behaviour of computer generated entities (CGEs) in a serious-gaming setting: an agent equipped with a cognitive model is enhanced with Machine Learning (ML) capabilities. This allows the agent to exhibit human-like behaviour without requiring an expert to define all parameters explicitly. In particular, the ML approach uses co-evolution as its learning paradigm. An evaluation in the domain of one-versus-one air combat shows promising results.
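The co-evolutionary learning loop described above can be sketched as two populations whose fitness is defined only by head-to-head play. This is a minimal illustration; the game, the scalar "agent" representation, and all parameters are our own assumptions, not the paper's actual cognitive model or air-combat simulation.

```python
import random

# Toy co-evolution: two populations evolve against each other; a
# candidate's fitness is its average score in matches vs. the opponents.
def play(a, b):
    # Hypothetical noisy zero-sum game: the larger parameter tends to win.
    return 1.0 if a + random.gauss(0, 0.1) > b else 0.0

def evolve(pop, opponents, mutation=0.05):
    scored = [(sum(play(x, o) for o in opponents) / len(opponents), x)
              for x in pop]
    scored.sort(reverse=True)                    # best fitness first
    parents = [x for _, x in scored[:len(pop) // 2]]
    children = [p + random.gauss(0, mutation) for p in parents]
    return parents + children                    # elitism + mutated copies

red = [random.random() for _ in range(20)]
blue = [random.random() for _ in range(20)]
for generation in range(50):
    # Both sides are evaluated against the opponents' *previous* generation.
    red, blue = evolve(red, blue), evolve(blue, red)
```

Because fitness is relative, each population provides an ever-moving target for the other, which is the core appeal of co-evolution when no absolute fitness function is available.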

    Monte Carlo Methods for Estimating Interfacial Free Energies and Line Tensions

    Excess contributions to the free energy due to interfaces occur in many problems of the statistical physics of condensed matter when coexistence between different phases is possible (e.g. wetting phenomena, nucleation, crystal growth, etc.). This article reviews two methods to estimate both interfacial free energies and line tensions by Monte Carlo simulations of simple models (e.g. the Ising model, a symmetrical binary Lennard-Jones fluid exhibiting a miscibility gap, and a simple Lennard-Jones fluid). The first method is based on thermodynamic integration. It is useful for studying flat and inclined interfaces on Ising lattices, and also allows the estimation of line tensions of three-phase contact lines where the interfaces meet walls (on which "surface fields" may act). A generalization to off-lattice systems is described as well. The second method is based on sampling the order-parameter distribution of the system throughout the two-phase coexistence region of the model. The interfacial free energies both of flat interfaces and of (spherical or cylindrical) droplets or bubbles can be estimated, including in systems with walls, where sphere-cap-shaped wall-attached droplets occur. The curvature dependence of the interfacial free energy is discussed, and estimates for the line tensions are compared to results from the thermodynamic integration method. Basic limitations of all these methods are critically discussed, and an outlook on other approaches is given.
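The second method above (sampling the order-parameter distribution) can be sketched on a tiny 2D Ising lattice: record the histogram P(m) of the magnetisation during a Metropolis run; the interfacial free energy then follows, up to finite-size corrections, from the depth of the two-phase valley, beta*F = ln[P(m_peak)/P(m=0)]. All parameters below are illustrative, and plain Metropolis sampling only sketches the idea: real studies use multicanonical or umbrella sampling to reach the rare m ≈ 0 configurations.

```python
import math
import random

L, beta, sweeps = 8, 0.5, 5000          # tiny lattice, T below T_c
spins = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
hist = {}                               # histogram of total magnetisation

def delta_E(i, j):
    # Energy change of flipping spin (i, j); periodic boundaries.
    s = spins[i][j]
    nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
          + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
    return 2 * s * nb

for sweep in range(sweeps):
    for _ in range(L * L):              # one Metropolis sweep
        i, j = random.randrange(L), random.randrange(L)
        dE = delta_E(i, j)
        if dE <= 0 or random.random() < math.exp(-beta * dE):
            spins[i][j] *= -1
    m = sum(map(sum, spins))
    hist[m] = hist.get(m, 0) + 1

# Ratio of peak to valley of P(m); the valley (m = 0) corresponds to a
# configuration containing two interfaces spanning the periodic box.
p_peak, p_mid = max(hist.values()), hist.get(0, 1)
print("beta*F estimate:", math.log(p_peak / p_mid))
```

In the two-phase region P(m) is sharply bimodal, so without biased sampling the m ≈ 0 bin is almost never visited; the fallback value 1 above stands in for the missing statistics that multicanonical reweighting would supply.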

    Cognitive Architectures, or How to Build an Artificial Mind

    Cognitive architectures (CAs) are an attempt to create computational models that integrate knowledge about how the mind works. Their task is to implement concrete schemes of cognitive functions, making it possible to test those functions on a wide range of problems. Many cognitive architectures have been developed to simulate communication between humans and complex machines (HCI, Human-Computer Interfaces), to simulate reaction times, and to reproduce various psychophysical relationships. To some extent this can be achieved by building models of the cognitive system at the symbolic level, with knowledge in the form of logical rules. There are also projects that attempt to link cognitive processes to the activation of modules representing specific brain areas, in line with observations from functional magnetic resonance imaging (fMRI) experiments. A large group consists of architectures based on the logical approach, which aim to simulate higher cognitive functions, above all the processes of thinking and reasoning. Some cognitive-architecture projects bring together large research groups that have been active for many decades. In general, cognitive architectures can be divided into three large groups: symbolic architectures (based on a functional understanding of cognitive processes); emergent architectures, based on connectionist models; and hybrid architectures, using both neural models and symbolic rules. In recent years, interest in architectures inspired by neurobiology (BICA, Brain Inspired Cognitive Architectures) has grown considerably. How should the various architectures be classified, what challenges should be set for them, how should progress in their development be assessed, and what do we still lack to create a complete model of the mind? A critical review of existing cognitive architectures, their limitations and capabilities, allows general conclusions to be drawn about the directions of their development, and allows us to put forward our own proposals for building a new architecture.

    Towards Machine Wald

    The past century has seen a steady increase in the need to estimate and predict complex systems and to make (possibly critical) decisions with limited information. Although computers have made possible the numerical evaluation of sophisticated statistical models, these models are still designed \emph{by humans} because there is currently no known recipe or algorithm for dividing the design of a statistical model into a sequence of arithmetic operations. Indeed, enabling computers to \emph{think} as \emph{humans} can when faced with uncertainty is challenging in several major ways: (1) finding optimal statistical models has yet to be formulated as a well-posed problem when information on the system of interest is incomplete and comes in the form of a complex combination of sample data, partial knowledge of constitutive relations, and a limited description of the distribution of input random variables; (2) the space of admissible scenarios, along with the space of relevant information, assumptions, and/or beliefs, tends to be infinite-dimensional, whereas calculus on a computer is necessarily discrete and finite. With this purpose in mind, this paper explores the foundations of a rigorous framework for the scientific computation of optimal statistical estimators/models and reviews their connections with Decision Theory, Machine Learning, Bayesian Inference, Stochastic Optimization, Robust Optimization, Optimal Uncertainty Quantification, and Information-Based Complexity. Comment: 37 pages
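The Wald-style notion of an "optimal statistical estimator" that this abstract builds on can be made concrete in a tiny numerical example: among a family of shrinkage estimators of a Bernoulli mean, pick the one minimising the worst-case mean-squared error over all parameter values. The estimator family and grid sizes below are our own choices for illustration, not the paper's framework; the classical result is that the minimax shrinkage weight is a = sqrt(n)/2.

```python
from math import comb, sqrt

n = 20  # number of Bernoulli trials

def worst_case_mse(a, grid=201):
    # Worst-case (over theta) MSE of the estimator (k + a) / (n + 2a),
    # where k ~ Binomial(n, theta); the expectation over k is exact.
    worst = 0.0
    for g in range(grid):
        theta = g / (grid - 1)
        mse = sum(comb(n, k) * theta**k * (1 - theta)**(n - k)
                  * ((k + a) / (n + 2 * a) - theta)**2
                  for k in range(n + 1))
        worst = max(worst, mse)
    return worst

# Minimise worst-case risk over a coarse grid of shrinkage weights a.
best_a = min((a / 10 for a in range(0, 51)), key=worst_case_mse)
print(best_a, "theory:", sqrt(n) / 2)  # grid optimum lands near sqrt(n)/2
```

The point of the exercise is that "design the estimator" here really has been reduced to a finite sequence of arithmetic operations; the paper's question is how far that reduction can be pushed when the admissible scenarios are infinite-dimensional.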

    CANDELS : constraining the AGN-merger connection with host morphologies at z ~ 2

    Using Hubble Space Telescope/WFC3 imaging taken as part of the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey, we examine the role that major galaxy mergers play in triggering active galactic nucleus (AGN) activity at z ~ 2. Our sample consists of 72 moderate-luminosity (L_X ~ 10^42-44 erg s^-1) AGNs at 1.5 < z < 2.5 that are selected using the 4 Ms Chandra observations in the Chandra Deep Field South, the deepest X-ray observations to date. Employing visual classifications, we have analyzed the rest-frame optical morphologies of the AGN host galaxies and compared them to a mass-matched control sample of 216 non-active galaxies at the same redshift. We find that most of the AGNs reside in disk galaxies (51.4 +5.8/-5.9 %), while a smaller percentage are found in spheroids (27.8 +5.8/-4.6 %). Roughly 16.7 +5.3/-3.5 % of the AGN hosts have highly disturbed morphologies and appear to be involved in a major merger or interaction, while most of the hosts (55.6 +5.6/-5.9 %) appear relatively relaxed and undisturbed. These fractions are statistically consistent with the fraction of control galaxies that show similar morphological disturbances. These results suggest that the hosts of moderate-luminosity AGNs are no more likely to be involved in an ongoing merger or interaction relative to non-active galaxies of similar mass at z ~ 2. The high disk fraction observed among the AGN hosts also appears to be at odds with predictions that merger-driven accretion should be the dominant AGN fueling mode at z ~ 2, even at moderate X-ray luminosities. Although we cannot rule out that minor mergers are responsible for triggering these systems, the presence of a large population of relatively undisturbed disk-like hosts suggests that the stochastic accretion of gas plays a greater role in fueling AGN activity at z ~ 2 than previously thought.
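The asymmetric error bars quoted on the morphology fractions are confidence intervals on binomial proportions. A minimal sketch, using the Wilson score interval for the roughly 12 disturbed hosts out of 72 implied by the quoted 16.7% (the paper's exact statistical treatment may differ):

```python
from math import sqrt

def wilson(k, n, z=1.0):
    # Wilson score interval for a binomial fraction k/n;
    # z = 1.0 gives a ~68% interval, matching 1-sigma error bars.
    p = k / n
    centre = (p + z * z / (2 * n)) / (1 + z * z / n)
    half = (z / (1 + z * z / n)) * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

lo, hi = wilson(12, 72)
print(f"fraction = {12/72:.3f}, 68% interval = ({lo:.3f}, {hi:.3f})")
```

Unlike the naive normal approximation, the Wilson interval stays inside [0, 1] and remains asymmetric for small counts, which is why quoted uncertainties like +5.3/-3.5 differ above and below the central value.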

    Search for a W' boson decaying to a bottom quark and a top quark in pp collisions at sqrt(s) = 7 TeV

    Results are presented from a search for a W' boson using a dataset corresponding to 5.0 inverse femtobarns of integrated luminosity collected during 2011 by the CMS experiment at the LHC in pp collisions at sqrt(s) = 7 TeV. The W' boson is modeled as a heavy W boson, but different scenarios for the couplings to fermions are considered, involving both left-handed and right-handed chiral projections of the fermions, as well as an arbitrary mixture of the two. The search is performed in the decay channel W' to t b, leading to a final-state signature with a single lepton (e, mu), missing transverse energy, and jets, at least one of which is tagged as a b-jet. A W' boson that couples to fermions with the same coupling constant as the W, but to the right-handed rather than left-handed chiral projections, is excluded for masses below 1.85 TeV at the 95% confidence level. For the first time using LHC data, constraints have been placed on the W' gauge coupling for a set of left- and right-handed coupling combinations. These results represent a significant improvement over previously published limits. Comment: Submitted to Physics Letters B. Replaced with version published

    A collision in 2009 as the origin of the debris trail of asteroid P/2010 A2

    The peculiar object P/2010 A2 was discovered by the LINEAR near-Earth asteroid survey in January 2010 and given a cometary designation due to the presence of a trail of material, although there was no central condensation or coma. The appearance of this object, in an asteroidal orbit (small eccentricity and inclination) in the inner main asteroid belt, attracted attention as a potential new member of the recently recognized class of 'Main Belt Comets' (MBCs). If confirmed, this new object would greatly expand the range in heliocentric distance over which MBCs are found. Here we present observations taken from the unique viewing geometry provided by ESA's Rosetta spacecraft, far from the Earth, that demonstrate that the trail is due to a single event rather than a period of cometary activity, in agreement with independent results from the Hubble Space Telescope (HST). The trail is made up of relatively large particles of millimetre to centimetre size that remain close to the parent asteroid. The shape of the trail can be explained by an initial impact ejecting large clumps of debris that disintegrated and dispersed almost immediately. We determine that this was an asteroid collision that occurred around February 10, 2009. Comment: Published in Nature on 14/10/2010. 25 pages, includes supplementary material

    Piecewise Boolean Algebras and Their Domains

    We characterise piecewise Boolean domains, that is, those domains that arise as Boolean subalgebras of a piecewise Boolean algebra. This leads to equivalent descriptions of the category of piecewise Boolean algebras: either as piecewise Boolean domains equipped with an orientation, or as full structure sheaves on piecewise Boolean domains. Comment: 11 pages