
    Developmental Bootstrapping of AIs

    Although some current AIs surpass human abilities in closed artificial worlds such as board games, their abilities in the real world are limited. They make strange mistakes and do not notice them. They cannot be instructed easily, fail to use common sense, and lack curiosity. They do not make good collaborators. Mainstream approaches for creating AIs are the traditional manually-constructed symbolic AI approach and generative and deep learning AI approaches, including large language models (LLMs). These systems are not well suited for creating robust and trustworthy AIs. Although it is outside of the mainstream, the developmental bootstrapping approach has more potential. In developmental bootstrapping, AIs develop competences like human children do. They start with innate competences. They interact with the environment and learn from their interactions. They incrementally extend their innate competences with self-developed competences. They interact with and learn from people and establish perceptual, cognitive, and common grounding. They acquire the competences they need through bootstrapping. However, developmental robotics has not yet produced AIs with robust adult-level competences. Projects have typically stopped at the Toddler Barrier, corresponding to human infant development at about two years of age, before speech is fluent. They also do not bridge the Reading Barrier, to skillfully and skeptically draw on the socially developed information resources that power current LLMs. The next competences in human cognitive development involve intrinsic motivation, imitation learning, imagination, coordination, and communication. This position paper lays out the logic, prospects, gaps, and challenges for extending the practice of developmental bootstrapping to acquire further competences and create robust, resilient, and human-compatible AIs. (Comment: 102 pages, 29 figures)

    CLiFF Notes: Research In Natural Language Processing at the University of Pennsylvania

    The Computational Linguistics Feedback Forum (CLiFF) is a group of students and faculty who gather once a week to discuss the members' current research. As the word feedback suggests, the group's purpose is the sharing of ideas. The group also promotes interdisciplinary contacts between researchers who share an interest in Cognitive Science. There is no single theme describing the research in Natural Language Processing at Penn. There is work done in CCG, Tree Adjoining Grammars, intonation, statistical methods, plan inference, instruction understanding, incremental interpretation, language acquisition, syntactic parsing, causal reasoning, free word order languages, and many other areas. With this in mind, rather than trying to summarize the varied work currently underway here at Penn, we suggest reading the following abstracts to see how the students and faculty themselves describe their work. Their abstracts illustrate the diversity of interests among the researchers, explain the areas of common interest, and describe some very interesting work in Cognitive Science. This report is a collection of abstracts from both faculty and graduate students in Computer Science, Psychology, and Linguistics. We pride ourselves on the close working relations between these groups, as we believe that the communication among the different departments and the ongoing inter-departmental research not only improves the quality of our work, but makes much of that work possible.

    Artificial Intelligence Through the Eyes of the Public

    Artificial Intelligence is becoming a popular field in computer science. In this report we explored its history, major accomplishments, and the visions of its creators. We looked at how Artificial Intelligence experts influence reporting and engineered a survey to gauge public opinion. We also examined expert predictions concerning the future of the field as well as media coverage of its recent accomplishments. These results were then used to explore the links between expert opinion, public opinion, and media coverage.

    Artificial intelligence as writing: knowledge-based hypertext systems as a medium for communication

    This thesis is an exploration of a new metaphor for artificial intelligence (AI). Traditionally, the computer within AI has been viewed as an agent, one with which the user engages in a conversation. More recently, certain researchers have proposed the notion that artificial intelligence (and indeed computing in general) can be more appropriately seen as a form of writing. Initially this thesis reviews the literature in this area, and aspects of AI which support the approach. Features of writing are then described which show parallels with AI. This then allows us to take lessons from the history and development of both traditional writing and the new computer-based writing systems to inform the design of a new type of artificial intelligence system. A design based on these features, called Running Texts, is presented through a number of small examples. Issues that arise from these, and possible future developments based on the implementation, are then discussed. A rationale is proposed for users choosing to learn a system such as Running Texts, as benefits from the psychological and social implications of writing can be applied to AI systems when they are seen as writing. The same parallels point out potential problems, and suggest new ways to see the relation between AI and thought.