
    Man and Machine: Questions of Risk, Trust and Accountability in Today's AI Technology

    Artificial Intelligence began as a field probing some of the most fundamental questions of science: the nature of intelligence and the design of intelligent artifacts. It has since grown into a discipline that is deeply entwined with commerce and society. Today's AI technology, such as expert systems and intelligent assistants, poses some difficult questions of risk, trust and accountability. In this paper, we present these concerns, examining them in the context of the historical developments that have shaped the nature and direction of AI research. We also suggest the exploration and further development of two paradigms, human intelligence-machine cooperation and a sociological view of intelligence, which might help address some of these concerns.

    Using Cross-Lingual Explicit Semantic Analysis for Improving Ontology Translation

    The Semantic Web aims to allow machines to make inferences using the explicit conceptualisations contained in ontologies. By pointing to ontologies, Semantic Web-based applications are able to interoperate and share common information easily. Nevertheless, multilingual semantic applications are still rare, because most online ontologies are monolingual in English. Solving this issue requires techniques for ontology localisation and translation. However, traditional machine translation is difficult to apply to ontologies, because ontology labels tend to be short and linguistically different from free text. In this paper, we propose an approach that enhances machine translation of ontologies by exploiting the well-structured concept descriptions contained in the ontology. In particular, our approach leverages the semantics contained in the ontology by using Cross-Lingual Explicit Semantic Analysis (CLESA) for context-based disambiguation in phrase-based Statistical Machine Translation (SMT). The work is novel in that, to the best of our knowledge, CLESA has not previously been applied in SMT.
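    The core of ESA-style context-based disambiguation is comparing concept vectors by cosine similarity and picking the closest translation candidate. A minimal sketch follows; the toy concept vectors and candidate labels are invented for illustration, whereas real CLESA would project text onto a shared cross-lingual concept space (e.g. derived from Wikipedia).

```python
from math import sqrt

def cosine(u, v):
    # Cosine similarity between two sparse concept vectors (dict: concept -> weight).
    dot = sum(u[k] * v[k] for k in u if k in v)
    nu = sqrt(sum(w * w for w in u.values()))
    nv = sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def disambiguate(source_vector, candidates):
    # Choose the target-language candidate whose concept vector is closest
    # to the source label's context vector.
    return max(candidates, key=lambda name: cosine(source_vector, candidates[name]))

# Toy vectors: an ambiguous source label whose ontology context is financial.
context = {"finance": 0.9, "institution": 0.4}
candidates = {
    "banco (financial)": {"finance": 0.8, "institution": 0.5},
    "banco (bench)": {"furniture": 0.9, "seat": 0.6},
}
print(disambiguate(context, candidates))  # → banco (financial)
```

    In an SMT pipeline this score would act as one feature among several, biasing the decoder toward the semantically consistent translation of each ontology label.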

    Deep Reinforcement Learning from Self-Play in Imperfect-Information Games

    Many real-world applications can be described as large-scale games of imperfect information. To deal with these challenging domains, prior work has focused on computing Nash equilibria in a handcrafted abstraction of the domain. In this paper we introduce the first scalable end-to-end approach to learning approximate Nash equilibria without prior domain knowledge. Our method combines fictitious self-play with deep reinforcement learning. When applied to Leduc poker, Neural Fictitious Self-Play (NFSP) approached a Nash equilibrium, whereas common reinforcement learning methods diverged. In Limit Texas Hold'em, a poker game of real-world scale, NFSP learnt a strategy that approached the performance of state-of-the-art, superhuman algorithms based on significant domain expertise.
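    The central NFSP mechanism is that each agent acts from a mixture of a best-response policy (trained by reinforcement learning) and its own time-averaged policy (trained by supervised learning on its past best-response actions). The sketch below shows only that mixing logic in tabular form; the two-action setting, the anticipatory parameter value, and the class name are illustrative assumptions, not the paper's neural-network implementation.

```python
import random

ETA = 0.1  # anticipatory parameter: probability of playing the best response

class NFSPAgentSketch:
    def __init__(self, n_actions):
        self.n_actions = n_actions
        self.q = [0.0] * n_actions          # best-response values (learned by RL in NFSP)
        self.avg_counts = [0] * n_actions   # data for the supervised average policy

    def best_response(self):
        # Greedy action under the current best-response value estimates.
        return max(range(self.n_actions), key=lambda a: self.q[a])

    def average_policy(self):
        # Empirical distribution over the agent's own past best-response actions.
        total = sum(self.avg_counts)
        if total == 0:
            return [1.0 / self.n_actions] * self.n_actions
        return [c / total for c in self.avg_counts]

    def act(self):
        # With probability ETA play the best response and record it for the
        # average policy; otherwise sample from the average policy itself.
        if random.random() < ETA:
            a = self.best_response()
            self.avg_counts[a] += 1
            return a
        probs = self.average_policy()
        r, acc = random.random(), 0.0
        for a, p in enumerate(probs):
            acc += p
            if r <= acc:
                return a
        return self.n_actions - 1
```

    As the best-response actions accumulate, the average policy converges toward the agent's long-run behaviour, which is the quantity that approximates a Nash equilibrium in fictitious self-play.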

    Experimental set-up for investigation of fault diagnosis of a centrifugal pump

    Centrifugal pumps are complex machines that can experience different types of fault. Condition monitoring can be used for centrifugal pump fault detection through vibration analysis of mechanical and hydraulic forces. Vibration analysis methods can be combined with artificial intelligence systems to build an automatic diagnostic method. An automatic fault diagnosis approach can minimise human error and provide precise machine fault classification. This work introduces an approach to centrifugal pump fault diagnosis based on artificial intelligence and genetic algorithm systems. An overview of future work, the research methodology and the proposed experimental setup is presented and discussed, and the expected results and outcomes of the experimental work are illustrated.
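    Vibration-based diagnosis of this kind typically starts from scalar features of the vibration signal that an AI classifier can consume. A minimal sketch of two standard features follows, using a synthetic signal; the impact magnitude and spacing are invented for illustration and do not correspond to the paper's experimental setup.

```python
import math
import random

def rms(signal):
    # Root-mean-square amplitude: a standard vibration severity indicator.
    return math.sqrt(sum(x * x for x in signal) / len(signal))

def kurtosis(signal):
    # Kurtosis: impulsive faults (e.g. bearing defects) push it well above
    # the ~3.0 expected for Gaussian vibration.
    n = len(signal)
    mean = sum(signal) / n
    var = sum((x - mean) ** 2 for x in signal) / n
    return sum((x - mean) ** 4 for x in signal) / (n * var ** 2)

random.seed(0)
healthy = [random.gauss(0.0, 1.0) for _ in range(4096)]
faulty = list(healthy)
for i in range(0, len(faulty), 256):   # inject periodic impacts
    faulty[i] += 8.0

print(rms(healthy), kurtosis(healthy))
print(rms(faulty), kurtosis(faulty))
```

    Feature vectors like (RMS, kurtosis, ...) per signal segment are the usual input to the classification stage, whether that stage is a genetic-algorithm-tuned classifier or another AI method.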

    Biometrics for Emotion Detection (BED): Exploring the combination of Speech and ECG

    The paradigm Biometrics for Emotion Detection (BED) is introduced, which enables unobtrusive emotion recognition that takes varying environments into account. It uses the electrocardiogram (ECG) and speech, a powerful but rarely used combination, to unravel people’s emotions. BED was applied in two environments (i.e., office and home-like) in which 40 people watched 6 film scenes. It is shown that both heart rate variability (derived from the ECG) and, when people’s gender is taken into account, the standard deviation of the fundamental frequency of speech indicate people’s experienced emotions. As such, these measures validate each other. Moreover, it is found that people’s environment can indeed influence the emotions they experience. These results indicate that BED might become an important paradigm for unobtrusive emotion detection.
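    Heart rate variability is computed from the beat-to-beat (RR) intervals extracted from the ECG. A minimal sketch of one common time-domain HRV measure, SDNN (the standard deviation of the RR intervals), is shown below; the interval values are invented for illustration and are not data from the study.

```python
import statistics

def sdnn(rr_intervals_ms):
    # SDNN: standard deviation of the beat-to-beat (RR) intervals in ms,
    # one of the simplest time-domain HRV measures.
    return statistics.stdev(rr_intervals_ms)

# Invented RR series (ms): a calmer state typically shows higher variability
# than an aroused one, in which the heartbeat is faster and more regular.
relaxed = [812, 790, 845, 801, 830, 795, 822]
stressed = [640, 645, 638, 642, 639, 644, 641]

print(sdnn(relaxed))   # noticeably larger than for the stressed series
print(sdnn(stressed))
```

    The speech-side counterpart in the paper, the standard deviation of the fundamental frequency, is the same statistic applied to per-frame F0 estimates instead of RR intervals.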

    Masquerade attack detection through observation planning for multi-robot systems

    The increasing adoption of autonomous mobile robots comes with a rising concern over the security of these systems. In this work, we examine the dangers that an adversary could pose in a multi-agent robot system. We show that conventional multi-agent plans are vulnerable to strong attackers masquerading as properly functioning agents. We propose a novel technique to incorporate attack detection into the multi-agent path-finding problem through the simultaneous synthesis of observation plans. We show that by specially crafting the multi-agent plan, the induced inter-agent observations can provide introspective monitoring guarantees: any adversarial agent that plans to break the system-wide security specification must necessarily violate the induced observation plan.

    Enaction-Based Artificial Intelligence: Toward Coevolution with Humans in the Loop

    This article deals with the links between the enaction paradigm and artificial intelligence. Enaction is considered a metaphor for artificial intelligence, as a number of the notions it deals with are deemed incompatible with the phenomenal field of the virtual. After explaining this stance, we review previous work on this issue in artificial life and robotics, focusing on the lack of recognition of co-evolution at the heart of these approaches. We propose to explicitly integrate the evolution of the environment into our approach in order to refine the ontogenesis of the artificial system, and to compare it with the enaction paradigm. The growing complexity of the ontogenetic mechanisms to be activated can therefore be compensated by an interactive guidance system emanating from the environment. This proposition does not, however, resolve the question of the relevance of the meaning created by the machine (sense-making). Such reflections lead us to integrate human interaction into this environment in order to construct relevant meaning in terms of participative artificial intelligence. This raises a number of questions with regard to setting up an enactive interaction. The article concludes by exploring a number of issues, enabling us to associate current approaches with the principles of morphogenesis, guidance, the phenomenology of interactions and the use of minimal enactive interfaces in setting up experiments which will deal with the problem of artificial intelligence in a variety of enaction-based ways.

    Artificial life meets computational creativity?

    I review the history of work in Artificial Life on the problem of the open-ended evolutionary growth of complexity in computational worlds. This is then put into the context of evolutionary epistemology and human creativity.

    Attempts to regulate artificial intelligence: regulatory practices from the United States, the European Union, and the People's Republic of China

    One of the most astonishing and forward-looking achievements of this century is artificial intelligence, or AI for short. Artificial intelligence is intelligence exhibited by machines, in contrast with the natural intelligence displayed by humans and other animals. A growing number of industries apply artificial intelligence, and their number is expected to grow further in the coming years. AI applications can help people analyse complex problems and identify effective solutions. In addition, AI technologies are planned for use in an increasing number of industries and businesses, which stimulates the emergence of new fields and the development of new kinds of technologies. Despite the various potential benefits, the development and application of AI algorithms also raises current and future challenges. It is therefore particularly important to pay attention to how algorithms are developed and applied: if this is done carelessly, mishandling the technology can have serious consequences. Further obstacles arise as well, since one of the most significant problems with artificial intelligence is its growing complexity, which makes the algorithms in use difficult to understand and evaluate. As a result, the main ethical questions surrounding artificial intelligence must be examined when drafting regulation. Excessive regulation, moreover, may stifle innovation and hinder the development of better AI applications. If we want to exploit all the benefits of the technology, we must ensure that development is carried out ethically and that AI is used in ways that benefit humanity as a whole. Based on these considerations, the aim of this research is to compare and contrast the different regulatory strategies adopted by the USA, the EU, and China.