38 research outputs found

    A black art: Ontology, data, and the Tower of Babel problem

    Computational ontologies are a new type of emerging scientific media (Smith, 2016) that process large quantities of heterogeneous data about portions of reality. Applied computational ontologies are used to semantically integrate (Heiler, 1995; Pileggi & Fernandez-Llatas, 2012) divergent data in order to represent reality; in so doing, they alter conceptions of materiality and produce new realities based on levels of informational granularity and abstraction (Floridi, 2011), resulting in a new type of informational ontology (Iliadis, 2013), the critical analysis of which requires new methods and frameworks. Currently, there is a lack of literature addressing the theoretical, social, and critical dimensions of such informational ontologies, applied computational ontologies, and the interdisciplinary communities of practice (Brown & Duguid, 1991; Wenger, 1998) that produce them. This dissertation fills a lacuna in communicative work in an emerging subfield of Science and Technology Studies (Latour & Woolgar, 1979) known as Critical Data Studies (boyd & Crawford, 2012; Dalton & Thatcher, 2014; Kitchin & Lauriault, 2014) by adopting a critical framework to analyze the systems of thought that inform applied computational ontology, while offering insight into its realism-based methods and philosophical frameworks in order to gauge their ethical import. Since the early 1990s, computational ontologies have been used to organize massive amounts of heterogeneous data by individuating reality into computable parts, attributes, and relations. This dissertation provides a theory of computational ontologies as technologies of individuation (Simondon, 2005) that translate disparate data to produce informational cohesion. By technologies of individuation I mean engineered artifacts whose purpose is to partition portions of reality into computable informational objects.
    I argue that data are metastable entities and that computational ontologies restrain heterogeneous data via a process of translation to produce semantic interoperability. In this way, I show that computational ontologies effectively re-ontologize (Floridi, 2013) and produce reality, and thus have ethical consequences, specifically in terms of their application to social reality and social ontology (Searle, 2006). I use the Basic Formal Ontology (Arp, Smith, & Spear, 2015)—the world’s most widely used upper-level ontology—as a case study, analyzing its methods and the ethical issues arising from its social application in the Military Ontology, before recommending an ethical framework. “Ontology” is a term used in philosophy and computer science in related but different ways—philosophical ontology typically concerns metaphysics, while computational ontology typically concerns databases. This dissertation provides a critical history and theory of ontology and of the interdisciplinary teams of researchers that came to adopt methods from philosophical ontology to build, persuade, and reason with applied computational ontology. Following a critical communication approach, I define applied computational ontology construction as a solution to a communication problem among scientists who seek to create semantic interoperability among data, and I argue that applied ontology is philosophical, informational in nature, and communicatively constituted (McPhee & Zaug, 2000). The primary aim is to explain how philosophy informs applied computational ontology while showing how such ontologies became instantiated in material organizations, how to study them, and what their ethical implications are.
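    The individuation described above—partitioning reality into computable parts, attributes, and relations so that divergent data become semantically interoperable—can be pictured with a minimal sketch. All class, attribute, and mapping names below are hypothetical illustrations, not drawn from the Basic Formal Ontology:

    ```python
    # Minimal sketch: an "ontology" as a shared vocabulary of classes,
    # attributes, and relations, used to translate two heterogeneous
    # records into one interoperable form. Names are illustrative only.

    ONTOLOGY = {
        "classes": {"Person", "Organization"},
        "attributes": {"Person": {"name", "birth_year"}},
        "relations": {"member_of": ("Person", "Organization")},
    }

    # Two data sources describe the same portion of reality differently.
    source_a = {"full_name": "Ada Lovelace", "born": 1815}
    source_b = {"nm": "Ada Lovelace", "yob": 1815}

    # Per-source mappings into the shared ontology's attribute names.
    MAPPINGS = {
        "a": {"full_name": "name", "born": "birth_year"},
        "b": {"nm": "name", "yob": "birth_year"},
    }

    def translate(record, mapping):
        """Re-express a source record in the ontology's vocabulary."""
        return {mapping[k]: v for k, v in record.items() if k in mapping}

    unified_a = translate(source_a, MAPPINGS["a"])
    unified_b = translate(source_b, MAPPINGS["b"])
    assert unified_a == unified_b  # semantic interoperability achieved
    ```

    The "translation" step is where the dissertation locates the ethical stakes: the mapping decides which features of each source survive into the unified representation.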

    Intelligence artificielle: Les défis actuels et l'action d'Inria - Livre blanc Inria

    Livre blanc Inria N°01. Inria white papers look at major current challenges in informatics and mathematics and present actions conducted by our project-teams to address these challenges. This document is the first produced by the Strategic Technology Monitoring & Prospective Studies Unit. Thanks to a reactive observation system, this unit plays a lead role in supporting Inria's development of its strategic and scientific orientations. It also enables the institute to anticipate the impact of digital sciences on all social and economic domains. The white paper was coordinated by Bertrand Braunschweig with contributions from 45 researchers from Inria and from our partners. Special thanks to Peter Sturm for his precise and complete review, and to the STIP service of the Saclay – Île-de-France centre for the final proofreading of the French version.

    On the role of Computational Logic in Data Science: representing, learning, reasoning, and explaining knowledge

    In this thesis we discuss in what ways computational logic (CL) and data science (DS) can jointly contribute to the management of knowledge within the scope of modern and future artificial intelligence (AI), and how technically sound software technologies can be realised along the path. An agent-oriented mindset permeates the whole discussion, stressing the pivotal role of autonomous agents in exploiting both means to reach higher degrees of intelligence. Accordingly, the goals of this thesis are manifold. First, we elicit the analogies and differences between CL and DS, looking for possible synergies and complementarities along four major knowledge-related dimensions, namely representation, acquisition (a.k.a. learning), inference (a.k.a. reasoning), and explanation. In this regard, we propose a conceptual framework through which bridges between these disciplines can be described and designed. We then survey the current state of the art of AI technologies with respect to their capability to support bridging CL and DS in practice. After identifying gaps and opportunities, we propose the notion of the logic ecosystem as a new conceptual, architectural, and technological solution supporting the incremental integration of symbolic and sub-symbolic AI. Finally, we discuss how our notion of the logic ecosystem can be reified into actual software technology and extended in many DS-related directions.
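    One way to picture the symbolic/sub-symbolic integration discussed above is a loop in which a learned model emits symbolic facts and a logic layer reasons over them while recording an explanation. The sketch below is purely hypothetical and not the thesis's architecture: the `perceive` stub stands in for a trained classifier, and the rule set is invented for illustration:

    ```python
    # Hypothetical sketch: a sub-symbolic component (a trivial stub standing
    # in for a trained classifier) produces facts as triples, and a symbolic
    # forward-chaining layer derives new knowledge plus an explanation trace.

    def perceive(image_id):
        """Stand-in for a learned model: maps raw input to a symbolic fact."""
        labels = {"img1": "cat", "img2": "dog"}
        return ("is_a", image_id, labels[image_id])

    # Each rule: if (pred, subject, obj) holds, derive (pred, subject, new_obj).
    RULES = [
        (("is_a", "cat"), "mammal"),
        (("is_a", "dog"), "mammal"),
        (("is_a", "mammal"), "animal"),
    ]

    def forward_chain(facts):
        """Apply rules to a fixpoint, recording each firing as an explanation."""
        derived, trace = set(facts), []
        changed = True
        while changed:
            changed = False
            for (pred, obj), new_obj in RULES:
                for (p, subj, o) in list(derived):
                    if p == pred and o == obj and (p, subj, new_obj) not in derived:
                        derived.add((p, subj, new_obj))
                        trace.append(f"{subj} is_a {obj} => {subj} is_a {new_obj}")
                        changed = True
        return derived, trace

    facts, trace = forward_chain({perceive("img1")})
    ```

    The trace covers the explanation dimension: every derived fact is justified by the rule that produced it, something the sub-symbolic component alone cannot provide.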

    Sparks of Artificial General Intelligence: Early experiments with GPT-4

    Full text link
    Artificial intelligence (AI) researchers have been developing and refining large language models (LLMs) that exhibit remarkable capabilities across a variety of domains and tasks, challenging our understanding of learning and cognition. The latest model developed by OpenAI, GPT-4, was trained using an unprecedented scale of compute and data. In this paper, we report on our investigation of an early version of GPT-4, when it was still in active development by OpenAI. We contend that (this early version of) GPT-4 is part of a new cohort of LLMs (along with ChatGPT and Google's PaLM, for example) that exhibit more general intelligence than previous AI models. We discuss the rising capabilities and implications of these models. We demonstrate that, beyond its mastery of language, GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting. Moreover, in all of these tasks, GPT-4's performance is strikingly close to human-level performance, and often vastly surpasses prior models such as ChatGPT. Given the breadth and depth of GPT-4's capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system. In our exploration of GPT-4, we put special emphasis on discovering its limitations, and we discuss the challenges ahead for advancing towards deeper and more comprehensive versions of AGI, including the possible need for pursuing a new paradigm that moves beyond next-word prediction. We conclude with reflections on the societal influences of this recent technological leap and on future research directions.

    Domain-sensitive topic management in a modular conversational agent framework

    Flexible non-task-oriented conversational agents require content for generating responses and mechanisms for choosing appropriate topics to drive interactions with users. Structured knowledge resources such as ontologies are a useful mechanism for representing conversational topics. In order to develop the topic-management mechanism, we addressed a number of research issues related to the development of the required infrastructure. First, we address the issue of heavy human involvement in the construction of knowledge resources by proposing a four-stage automatic process for building domain-specific ontologies. These ontologies are comprised of a set of subtaxonomies obtained from WordNet, an electronic dictionary that arranges concepts in a hierarchical structure. The roots of these subtaxonomies are obtained from Wikipedia’s article links, or wikilinks, under the hypothesis that wikilinks provide a sense of relatedness from the article consulted to their destinations. With the knowledge structures defined, we explore the possibility of using semantic relatedness over these domain-specific ontologies as a means to propose conversational topics in a coherent manner. For this, we examine different automatic measures of semantic relatedness to determine which best correlate with human judgements obtained from an automatically constructed dataset. We then examine the question of whether domain information influences the human perception of semantic relatedness in a way that automatic measures do not replicate. This study requires us to design and implement a process to build datasets with pairs of concepts like those used in the literature to evaluate automatic measures of semantic relatedness, but with domain information associated.
    This study shows, with statistical significance, that existing measures of semantic relatedness do not take domain into consideration, and that including domain as a factor in this calculation can enhance the agreement of automatic measures with human assessments. Finally, this artificially constructed measure is integrated into the Toy’s dialogue manager in order to help with the real-time selection of conversational topics. This supplements our result that the use of semantic relatedness seems to produce more coherent and interesting topic transitions than existing mechanisms.
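    The path-based family of relatedness measures examined above can be illustrated with Wu–Palmer similarity computed over a small hand-built taxonomy. The thesis works over WordNet subtaxonomies rooted in wikilinks; the toy tree and concept names below are stand-ins for illustration only:

    ```python
    # Toy taxonomy (child -> parent), standing in for a WordNet subtaxonomy.
    PARENT = {
        "cat": "feline", "feline": "mammal",
        "dog": "canine", "canine": "mammal",
        "mammal": "animal", "animal": None,
    }

    def depth(node):
        """Depth of a node, counting the root as depth 1."""
        d = 1
        while PARENT[node] is not None:
            node = PARENT[node]
            d += 1
        return d

    def ancestors(node):
        """Node plus its ancestors, ordered from the node up to the root."""
        out = [node]
        while PARENT[node] is not None:
            node = PARENT[node]
            out.append(node)
        return out

    def wu_palmer(a, b):
        """Wu-Palmer similarity: 2*depth(lcs) / (depth(a) + depth(b))."""
        common = set(ancestors(b))
        # First common ancestor walking up from `a` is the deepest one.
        lcs = next(n for n in ancestors(a) if n in common)
        return 2 * depth(lcs) / (depth(a) + depth(b))
    ```

    Here `wu_palmer("cat", "dog")` yields 0.5 (their deepest shared ancestor is "mammal"), while identical concepts score 1.0. Note that nothing in the formula consults domain information, which is exactly the gap the study identifies.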

    Ontology design and management for eCare services


    An Intelligent Multi-Agent System Approach to Automating Safety Features for On-Line Real Time Communications: Agent Mediated Information Exchange

    Child safety online is a growing problem. Governmental attempts to highlight and combat this issue have not been as successful as hoped, and there are still highly publicised cases of children, young people and vulnerable adults coming to harm as a result of unsafe online practices. This thesis presents the research, design and development of a prototype system called SafeChat, which provides a safer environment for children interacting in online environments. In order to combat such a complex problem, it is necessary to integrate various artificial intelligence technologies and autonomous systems. The SafeChat prototype system discussed within this research has been implemented in the Java Agent Development Environment (JADE) and utilises Protégé ontology development, reasoning and natural language processing techniques. To evaluate system performance, comprehensive testing to measure its effectiveness in detecting potential risk to the user (e.g. a child) is under continuing development. Initial results of system testing are encouraging and demonstrate its effectiveness in identifying different levels of threat during online conversation. The potential impact of this work is considerable when it is used as a plug-in to popular communications software such as Facebook Messenger, Skype and WhatsApp. SafeChat provides a safer environment for children to communicate, identifying potential and actual threats whilst maintaining the privacy of their discourse. The SafeChat system could be easily adapted to provide autonomous solutions in other areas of online threat, such as cyberbullying and radicalisation.
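    The "different levels of threat" mentioned above suggest a tiered scoring scheme. The sketch below is purely illustrative and is not SafeChat's implementation (which uses JADE agents, a Protégé ontology and NLP); the phrase tiers, weights and thresholds are all invented for this example:

    ```python
    # Illustrative sketch of tiered threat scoring for chat messages.
    # All phrases, weights and thresholds are hypothetical examples.

    THREAT_TERMS = {
        "where do you live": 3,   # personal-information solicitation
        "keep this secret": 3,    # isolation tactic
        "send a photo": 2,
        "how old are you": 1,
    }

    # (minimum score, level name), checked from most to least severe.
    LEVELS = [(5, "high"), (2, "medium"), (1, "low"), (0, "none")]

    def threat_level(message):
        """Sum the weights of matched phrases and map to a named level."""
        text = message.lower()
        score = sum(w for phrase, w in THREAT_TERMS.items() if phrase in text)
        return next(name for threshold, name in LEVELS if score >= threshold)
    ```

    A real system would replace substring matching with the NLP and ontology-backed reasoning the thesis describes, but the tiered output is what a mediating agent would act on.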