
    A Defense of Pure Connectionism

    Connectionism is an approach to neural-networks-based cognitive modeling that encompasses the recent deep learning movement in artificial intelligence. It came of age in the 1980s, with its roots in cybernetics and earlier attempts to model the brain as a system of simple parallel processors. Connectionist models center on statistical inference within neural networks with empirically learnable parameters, which can be represented as graphical models. More recent approaches focus on learning and inference within hierarchical generative models. Contra influential and ongoing critiques, I argue in this dissertation that the connectionist approach to cognitive science possesses in principle (and, as is becoming increasingly clear, in practice) the resources to model even the richest and most distinctly human cognitive capacities, such as abstract, conceptual thought and natural language comprehension and production. Consonant with much previous philosophical work on connectionism, I argue that a core principle—that proximal representations in a vector space have similar semantic values—is the key to a successful connectionist account of the systematicity and productivity of thought, language, and other core cognitive phenomena. My work here differs from preceding work in philosophy in several respects: (1) I compare a wide variety of connectionist responses to the systematicity challenge and isolate two main strands that are both historically important and reflected in ongoing work today: (a) vector symbolic architectures and (b) (compositional) vector space semantic models; (2) I consider very recent applications of these approaches, including their deployment on large-scale machine learning tasks such as machine translation; (3) I argue, again on the basis mostly of recent developments, for a continuity in representation and processing across natural language, image processing and other domains; (4) I explicitly link broad, abstract features of connectionist representation to recent proposals in cognitive science similar in spirit, such as hierarchical Bayesian and free energy minimization approaches, and offer a single rebuttal of criticisms of these related paradigms; (5) I critique recent alternative proposals that argue for a hybrid Classical (i.e. serial symbolic)/statistical model of mind; (6) I argue that defending the most plausible form of a connectionist cognitive architecture requires rethinking certain distinctions that have figured prominently in the history of the philosophy of mind and language, such as that between word- and phrase-level semantic content, and between inference and association.
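
    To make two of the abstract's named ideas concrete, here is a minimal sketch (not drawn from the dissertation itself) of a vector symbolic architecture using holographic reduced representations, where circular convolution binds role vectors to filler vectors, together with cosine similarity as the measure behind the principle that proximal vectors carry similar semantic values. All vectors are random toy data.

```python
# Toy holographic reduced representations (a vector symbolic architecture).
# Binding is circular convolution; unbinding is circular correlation, which
# recovers fillers only approximately for random Gaussian vectors.
import numpy as np

rng = np.random.default_rng(0)
DIM = 1024  # high dimensionality keeps random vectors nearly orthogonal

def rand_vec():
    return rng.normal(0, 1 / np.sqrt(DIM), DIM)

def bind(a, b):
    # Circular convolution via FFT: binds a role vector to a filler vector.
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=DIM)

def unbind(c, a):
    # Circular correlation (convolution with the approximate inverse of a).
    return np.fft.irfft(np.fft.rfft(c) * np.conj(np.fft.rfft(a)), n=DIM)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

agent, patient, alice, bob = (rand_vec() for _ in range(4))
# "Alice (agent), Bob (patient)" superposed into a single vector:
sentence = bind(agent, alice) + bind(patient, bob)

# Unbinding the agent role yields a vector close to `alice`, not `bob`,
# illustrating how role/filler structure supports systematicity.
print(cosine(unbind(sentence, agent), alice))  # clearly positive
print(cosine(unbind(sentence, agent), bob))    # near 0
```

    The same distributed vector thus encodes compositional structure without any serial symbolic machinery, which is the kind of resource the dissertation appeals to in answering the systematicity challenge.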

    Symbol Emergence in Robotics: A Survey

    Humans can learn the use of language through physical interaction with their environment and semiotic communication with other people. It is very important to obtain a computational understanding of how humans form a symbol system and acquire semiotic skills through autonomous mental development. Recently, many studies have been conducted on constructing robotic systems and machine-learning methods that can learn the use of language through embodied multimodal interaction with their environment and with other systems. Understanding human social interactions, and developing a robot that can smoothly communicate with human users over the long term, requires an understanding of the dynamics of symbol systems. The embodied cognition and social interaction of participants gradually change a symbol system in a constructive manner. In this paper, we introduce a field of research called symbol emergence in robotics (SER). SER is a constructive approach to an emergent symbol system: one that is socially self-organized through both semiotic communication and physical interaction among autonomous cognitive developmental agents, i.e., humans and developmental robots. Specifically, we describe state-of-the-art research topics in SER, e.g., multimodal categorization, word discovery, and double articulation analysis, which enable a robot to obtain words and their embodied meanings from raw sensory-motor information, including visual, haptic, and auditory information and acoustic speech signals, in a totally unsupervised manner. Finally, we suggest future directions for research in SER.
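
    As a rough illustration of multimodal categorization, the toy sketch below clusters synthetic visual, haptic, and auditory feature vectors without any labels. Note that actual SER work typically uses multimodal latent Dirichlet allocation or nonparametric Bayesian models; the k-means used here is a deliberate stand-in, and the data and dimensions are invented.

```python
# Toy unsupervised multimodal categorization: object categories emerge
# from co-occurrence structure across sensory modalities, with no labels.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
N_OBJECTS, N_CATEGORIES = 90, 3

# Each latent category has a prototype per modality; observations are noisy.
protos = {m: rng.normal(size=(N_CATEGORIES, d))
          for m, d in [("visual", 16), ("haptic", 8), ("audio", 12)]}
true_cat = rng.integers(0, N_CATEGORIES, N_OBJECTS)
obs = np.hstack([
    protos[m][true_cat] + 0.3 * rng.normal(size=(N_OBJECTS, protos[m].shape[1]))
    for m in protos
])

# Cluster the fused multimodal features; recovered clusters match the
# latent categories up to a relabeling of cluster ids.
pred = KMeans(n_clusters=N_CATEGORIES, n_init=10, random_state=0).fit_predict(obs)
print(pred[:10])
print(true_cat[:10])
```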

    Recipe for Disaster

    Today’s rapid advances in algorithmic processes generate predictions through common applications, including speech recognition, natural language (text) generation, search-engine prediction, social media personalization, and product recommendations. These processes rapidly sort through streams of computational calculations and personal digital footprints to predict, make decisions, translate, and attempt to mimic human cognitive function as closely as possible; this is known as machine learning. The project Recipe for Disaster was developed by exploring automation in technology, specifically through machine learning and recurrent neural networks. These algorithmic models consume large amounts of data, continuously adapting to and learning from it, and in return predict and produce data of their own. Using a recurrent neural network (a subset of machine learning) and a dataset of over 800 internet-sourced food recipes, Recipe for Disaster is a video, photographic, and installation-based exploration of five recipes generated by the computer itself. The generated recipes are translated into how-to-style videos modeled after popular social media tropes, accompanied by photographs of each resulting food dish. The food photographs resemble imagery common in consumer culture, but present the disjointed results of the generated recipes. The videos, photographs, and installation are all displayed through variations of screens and screen-like components, forming a bridge between the viewer and notions of digital media consumption. Recipe for Disaster functions as a critique of the loss of human agency through algorithmic models, while simultaneously recognizing food consumption as an intrinsic element of being human. In discussing how machine learning and predictive models have become more deeply integrated into the systems we use day to day, this project mimics information and media shared through, and created by, those systems. It is a response to the hidden complexities of systems and structures, questioning the effectiveness of predictions made by machines and how they might be affecting information and media literacy, visual semiotics, culture, and overall human behavior and development.
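
    For readers unfamiliar with the underlying technique, the sketch below shows a minimal character-level recurrent network of the general kind the project describes: it learns to predict the next character of recipe text and then samples new text one character at a time. The corpus string, architecture, and hyperparameters are illustrative placeholders, not the project's actual model or data.

```python
# Minimal character-level LSTM text generator (PyTorch).
import torch
import torch.nn as nn

corpus = "Mix 2 cups flour with 1 cup sugar. Bake at 350F for 20 minutes. "
chars = sorted(set(corpus))
stoi = {c: i for i, c in enumerate(chars)}

class CharRNN(nn.Module):
    def __init__(self, vocab, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.rnn = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x, state=None):
        h, state = self.rnn(self.embed(x), state)
        return self.head(h), state

model = CharRNN(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=3e-3)
data = torch.tensor([stoi[c] for c in corpus]).unsqueeze(0)

# Train to predict each next character from the preceding ones.
for step in range(200):
    logits, _ = model(data[:, :-1])
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, len(chars)), data[:, 1:].reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Sample a new "recipe" one character at a time.
idx, state, out = data[:, :1], None, []
for _ in range(80):
    logits, state = model(idx, state)
    idx = torch.multinomial(torch.softmax(logits[:, -1], -1), 1)
    out.append(chars[idx.item()])
print("".join(out))
```

    On a corpus this small the samples are mostly garbled; trained on hundreds of real recipes, such a model produces the superficially plausible but disjointed instructions the project stages as videos and photographs.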