
    The Embodied Mind

    Modern cognitive science cannot be understood without recent developments in computer science, artificial intelligence (AI), robotics, neuroscience, biology, linguistics, and psychology. Classic analytic philosophy as well as traditional AI assumed that all kinds of knowledge must be explicitly represented by formal or programming languages. This assumption is in contradiction to recent insights into the biology of evolution and the developmental psychology of the human organism. Most of our knowledge is implicit and unconscious. It is not formally represented, but embodied knowledge which is learnt by doing and understood by bodily interaction with ecological niches and social environments. That is true not only for low-level skills, but even for high-level domains of categorization, language, and abstract thinking. Embodied cognitive science, AI, and robotics try to build the embodied mind in an artificial evolution. From a philosophical point of view, it is amazing that the new ideas of embodied mind and robotics have deep roots in the history of philosophy.

    Doctor of Philosophy

    Manual annotation of clinical texts is often used as a method of generating reference standards that provide data for training and evaluation of Natural Language Processing (NLP) systems. Manually annotating clinical texts is time-consuming, expensive, and requires considerable cognitive effort on the part of human reviewers. Furthermore, reference standards must be generated in ways that produce consistent and reliable data, but must also be valid in order to adequately evaluate the performance of those systems. The amount of labeled data necessary varies depending on the level of analysis, the complexity of the clinical use case, and the methods that will be used to develop automated machine systems for information extraction and classification. Evaluating methods that potentially reduce cost and manual workload, introduce task efficiencies, and reduce the amount of labeled data necessary to train NLP tools for specific clinical use cases is an active area of research inquiry in the clinical NLP domain. This dissertation integrates a mixed-methods approach, using methodologies from cognitive science and artificial intelligence, with manual annotation of clinical texts. Aim 1 of this dissertation identifies factors that affect manual annotation of clinical texts. These factors are further explored by evaluating approaches that may introduce efficiencies into manual review tasks applied to two different NLP development areas: semantic annotation of clinical concepts and identification of information representing Protected Health Information (PHI) as defined by HIPAA. Both experiments integrate different priming mechanisms using noninteractive and machine-assisted methods. The main hypothesis for this research is that integrating pre-annotation or other machine-assisted methods within manual annotation workflows will improve the efficiency of manual annotation tasks without diminishing the quality of the generated reference standards.
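    To make the idea of machine-assisted pre-annotation concrete, the following is a minimal sketch, not the dissertation's actual system, of a dictionary- and regex-based pre-annotator that proposes candidate PHI-like spans for human reviewers to confirm or correct. The span categories, patterns, and example note are assumptions made purely for illustration.

```python
import re
from dataclasses import dataclass


@dataclass
class CandidateSpan:
    start: int   # character offset where the candidate span begins
    end: int     # character offset where it ends
    label: str   # proposed category, to be confirmed or rejected by a reviewer
    text: str    # surface text of the span


# Hypothetical patterns for a few PHI-like categories (illustrative only,
# not the dissertation's annotation schema or rule set).
PATTERNS = {
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "MRN": re.compile(r"\bMRN[:\s]*\d{6,10}\b"),
}


def pre_annotate(note: str) -> list[CandidateSpan]:
    """Propose candidate spans so reviewers correct suggestions
    instead of annotating every document from scratch."""
    candidates = []
    for label, pattern in PATTERNS.items():
        for match in pattern.finditer(note):
            candidates.append(
                CandidateSpan(match.start(), match.end(), label, match.group())
            )
    # Sort by position so a review interface can walk the note left to right.
    return sorted(candidates, key=lambda c: c.start)


if __name__ == "__main__":
    note = "Patient seen on 03/14/2019, MRN: 00482913, callback 801-555-0142."
    for span in pre_annotate(note):
        print(span.label, repr(span.text), span.start, span.end)
```

    In a workflow like the one the dissertation evaluates, such pre-annotations would be loaded into the annotation tool as suggestions, shifting the reviewer's task from exhaustive reading to verification.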

    A New Constructivist AI: From Manual Methods to Self-Constructive Systems

    The development of artificial intelligence (AI) systems has to date been largely one of manual labor. This constructionist approach to AI has resulted in systems with limited-domain application and severe performance brittleness. No AI architecture to date incorporates, in a single system, the many features that make natural intelligence general-purpose, including system-wide attention, analogy-making, system-wide learning, and various other complex transversal functions. Going beyond current AI systems will require significantly more complex system architecture than attempted to date. The heavy reliance on direct human specification and intervention in constructionist AI brings severe theoretical and practical limitations to any system built that way. One way to address the challenge of artificial general intelligence (AGI) is replacing a top-down architectural design approach with methods that allow the system to manage its own growth. This calls for a fundamental shift from hand-crafting to self-organizing architectures and self-generated code – what we call a constructivist AI approach, in reference to the self-constructive principles on which it must be based. Methodologies employed for constructivist AI will be very different from today’s software development methods; instead of relying on direct design of mental functions and their implementation in a cognitive architecture, they must address the principles – the “seeds” – from which a cognitive architecture can automatically grow. In this paper I describe the argument in detail and examine some of the implications of this impending paradigm shift.
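    As a loose illustration of the contrast being drawn, here is a toy sketch, entirely my own assumption rather than the paper's architecture, of a "seed" system that grows new predictor modules on demand instead of having every module specified by hand.

```python
import random


class Predictor:
    """A trivial 'module': tracks a running mean estimate for one context."""

    def __init__(self):
        self.estimate = 0.0
        self.count = 0

    def update(self, observation: float) -> float:
        error = abs(observation - self.estimate)
        self.count += 1
        self.estimate += (observation - self.estimate) / self.count
        return error


class SeedSystem:
    """Grows a new predictor whenever an unfamiliar context appears,
    rather than having all modules hand-crafted in advance."""

    def __init__(self):
        self.modules: dict[str, Predictor] = {}

    def observe(self, context: str, value: float) -> float:
        if context not in self.modules:
            # Self-constructive step: the architecture expands on demand.
            self.modules[context] = Predictor()
        return self.modules[context].update(value)


if __name__ == "__main__":
    system = SeedSystem()
    for _ in range(20):
        context = random.choice(["light", "sound"])
        value = 1.0 if context == "light" else 5.0
        system.observe(context, value)
    print("modules grown:", sorted(system.modules))
```

    The toy obviously captures none of the transversal functions the paper lists; it only shows the direction of the shift, from specifying components to specifying the rules by which components come into being.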

    A Case for Machine Ethics in Modeling Human-Level Intelligent Agents

    This paper focuses on the research field of machine ethics and how it relates to a technological singularity, a hypothesized future event in which artificial machines would have greater-than-human-level intelligence. One problem related to the singularity centers on whether human values and norms would survive such an event. To help ensure this, a number of artificial intelligence researchers have opted to focus on the development of artificial moral agents: machines capable of moral reasoning, judgment, and decision-making. To date, different frameworks for how to arrive at these agents have been put forward. However, there seems to be no hard consensus as to which framework would likely yield a positive result. With the body of work they have already contributed to the study of moral agency, philosophers are well placed to add to the growing literature on artificial moral agency. In doing so, they could also consider how this concept bears on other important philosophical concepts.

    Embodied cognition: A field guide

    The nature of cognition is being reconsidered. Instead of emphasizing formal operations on abstract symbols, the new approach foregrounds the fact that cognition is a situated activity, and suggests that thinking beings ought therefore to be considered first and foremost as acting beings. The essay reviews recent work in Embodied Cognition, provides a concise guide to its principles, attitudes, and goals, and identifies the physical grounding project as its central research focus.

    Enaction-Based Artificial Intelligence: Toward Coevolution with Humans in the Loop

    This article deals with the links between the enaction paradigm and artificial intelligence. Enaction is considered a metaphor for artificial intelligence, as a number of the notions which it deals with are deemed incompatible with the phenomenal field of the virtual. After explaining this stance, we shall review previous works regarding this issue in terms of artificial life and robotics. We shall focus on the lack of recognition of co-evolution at the heart of these approaches. We propose to explicitly integrate the evolution of the environment into our approach in order to refine the ontogenesis of the artificial system, and to compare it with the enaction paradigm. The growing complexity of the ontogenetic mechanisms to be activated can therefore be compensated by an interactive guidance system emanating from the environment. This proposition does not, however, resolve the question of the relevance of the meaning created by the machine (sense-making). Such reflections lead us to integrate human interaction into this environment in order to construct relevant meaning in terms of participative artificial intelligence. This raises a number of questions with regard to setting up an enactive interaction. The article concludes by exploring a number of issues, thereby enabling us to associate current approaches with the principles of morphogenesis, guidance, the phenomenology of interactions, and the use of minimal enactive interfaces in setting up experiments which will deal with the problem of artificial intelligence in a variety of enaction-based ways.
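    A minimal toy sketch of the kind of loop the article points toward: an agent and an environment that adapt to each other, with a human-supplied guidance signal folded into the agent's updates. All class names, the drift dynamics, and the scalar-feedback interface are assumptions made for illustration, not the authors' implementation.

```python
import random


class Environment:
    """Toy environment whose dynamics drift toward the agent's actions,
    so agent and environment co-evolve rather than the agent adapting alone."""

    def __init__(self):
        self.state = 0.0

    def step(self, action: float) -> float:
        # The environment's own parameter shifts slightly toward the action.
        self.state += 0.1 * (action - self.state)
        return self.state


class Agent:
    """Agent whose policy is nudged both by environmental outcomes
    and by human guidance (a participative loop)."""

    def __init__(self):
        self.policy = random.uniform(-1.0, 1.0)

    def act(self) -> float:
        return self.policy + random.gauss(0.0, 0.1)

    def update(self, outcome: float, feedback: float) -> None:
        # Blend ecological coupling (outcome) with human guidance (feedback).
        self.policy += 0.05 * (outcome - self.policy) + 0.05 * feedback


def human_feedback(outcome: float) -> float:
    """Stand-in for the human in the loop: approves outcomes near a preferred value."""
    preferred = 0.5
    return 1.0 if abs(outcome - preferred) < 0.2 else -1.0


if __name__ == "__main__":
    env, agent = Environment(), Agent()
    for _ in range(200):
        outcome = env.step(agent.act())
        agent.update(outcome, human_feedback(outcome))
    print(f"final policy={agent.policy:.2f}, env state={env.state:.2f}")
```

    The sketch only shows the coupling structure, agent, drifting environment, and human signal in one loop; it says nothing about whether the meaning constructed in such a loop is relevant to the machine itself, which is precisely the sense-making question the article raises.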