Extending Machine Language Models toward Human-Level Language Understanding
Language is crucial for human intelligence, but what exactly is its role? We
take language to be a part of a system for understanding and communicating
about situations. The human ability to understand and communicate about
situations emerges gradually from experience and depends on domain-general
principles of biological neural networks: connection-based learning,
distributed representation, and context-sensitive, mutual constraint
satisfaction-based processing. Current artificial language processing systems
rely on the same domain-general principles, embodied in artificial neural
networks. Indeed, recent progress in this field depends on \emph{query-based
attention}, which extends the ability of these systems to exploit context and
has contributed to remarkable breakthroughs. Nevertheless, most current models
focus exclusively on language-internal tasks, limiting their ability to perform
tasks that depend on understanding situations. These systems also lack memory
for the contents of prior situations outside of a fixed contextual span. We
describe the organization of the brain's distributed understanding system,
which includes a fast learning system that addresses the memory problem. We
sketch a framework for future models of understanding, drawing equally on
cognitive neuroscience and artificial intelligence and exploiting query-based
attention. We highlight relevant current directions and consider the further
developments needed to fully capture human-level language understanding in a
computational system.
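The query-based attention this abstract refers to is, in its standard form, scaled dot-product attention: each query vector is compared against all keys, and the resulting weights mix the values into a context-sensitive output. The following minimal NumPy sketch is illustrative only — the shapes, names, and toy data are assumptions, not details from the paper:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the row max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention: each query attends over all keys,
    # producing a weighted mixture of the values.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # (n_queries, n_keys)
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))   # 2 query vectors, dimension 4
K = rng.normal(size=(5, 4))   # 5 key vectors
V = rng.normal(size=(5, 3))   # 5 value vectors, dimension 3
out, w = attention(Q, K, V)
print(out.shape, w.shape)     # (2, 3) (2, 5)
```

Because the weights depend on the queries themselves, the same values contribute differently in different contexts — the mutual-constraint-satisfaction flavor the abstract emphasizes.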
A New Framework for Machine Intelligence: Concepts and Prototype
Machine learning (ML) and artificial intelligence (AI) have become hot topics
in many information processing areas, from chatbots to scientific data
analysis. At the same time, there is uncertainty about the possibility of
extending predominant ML technologies to become general solutions with
continuous learning capabilities. Here, a simple, yet comprehensive,
theoretical framework for intelligent systems is presented. A combination of
Mirror Compositional Representations (MCR) and a Solution-Critic Loop (SCL) is
proposed as a generic approach for different types of problems. A prototype
implementation is presented for document comparison using the English Wikipedia
corpus.
Flexibly Instructable Agents
This paper presents an approach to learning from situated, interactive
tutorial instruction within an ongoing agent. Tutorial instruction is a
flexible (and thus powerful) paradigm for teaching tasks because it allows an
instructor to communicate whatever types of knowledge an agent might need in
whatever situations might arise. To support this flexibility, however, the
agent must be able to learn multiple kinds of knowledge from a broad range of
instructional interactions. Our approach, called situated explanation, achieves
such learning through a combination of analytic and inductive techniques. It
combines a form of explanation-based learning that is situated for each
instruction with a full suite of contextually guided responses to incomplete
explanations. The approach is implemented in an agent called Instructo-Soar
that learns hierarchies of new tasks and other domain knowledge from
interactive natural language instructions. Instructo-Soar meets three key
requirements of flexible instructability that distinguish it from previous
systems: (1) it can take known or unknown commands at any instruction point;
(2) it can handle instructions that apply to either its current situation or to
a hypothetical situation specified in language (as in, for instance,
conditional instructions); and (3) it can learn, from instructions, each class
of knowledge it uses to perform tasks.
Mapping Big Data into Knowledge Space with Cognitive Cyber-Infrastructure
Big data research has attracted great attention in science, technology,
industry and society. It is developing with the evolving scientific paradigm,
the fourth industrial revolution, and the transformational innovation of
technologies. However, its nature and fundamental challenges have not been fully
recognized, and a methodology of its own has not yet been formed. This paper explores
and answers the following questions: What is big data? What are the basic
methods for representing, managing and analyzing big data? What is the
relationship between big data and knowledge? Can we find a mapping from big
data into knowledge space? What kind of infrastructure is required to support
not only big data management and analysis but also knowledge discovery, sharing
and management? What is the relationship between big data and the scientific paradigm?
What is the nature and fundamental challenge of big data computing? A
multi-dimensional perspective is presented toward a methodology of big data
computing.
Understanding Negations in Information Processing: Learning from Replicating Human Behavior
Information systems experience an ever-growing volume of unstructured data,
particularly in the form of textual materials. This represents a rich source of
information from which one can create value for people, organizations and
businesses. For instance, recommender systems can benefit from automatically
understanding preferences based on user reviews or social media. However, it is
difficult for computer programs to correctly infer meaning from narrative
content. One major challenge is negations that invert the interpretation of
words and sentences. As a remedy, this paper proposes a novel learning strategy
to detect negations: we apply reinforcement learning to find a policy that
replicates the human perception of negations based on an exogenous response,
such as a user rating for reviews. Our method yields several benefits, as it
eliminates the former need for expensive and subjective manual labeling in an
intermediate stage. Moreover, the inferred policy can be used to derive
statistical inferences and implications regarding how humans process and act on
negations.
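The learning strategy this abstract describes — finding a policy that reproduces negation handling from an exogenous signal alone, with no labeled negations — can be caricatured with tabular Monte-Carlo reinforcement learning. Everything below (the toy lexicon, the (token, negation-flag) state, the ±1 user rating as reward) is my own illustrative assumption, not the paper's actual formulation:

```python
import random

# Toy reviews: (tokens, exogenous user rating in {-1, +1}). Illustrative only.
LEXICON = {"good": 1, "bad": -1, "great": 1}
REVIEWS = [
    (["not", "good"], -1), (["good"], 1),
    (["not", "bad"], 1),   (["bad"], -1),
    (["never", "great"], -1), (["great"], 1),
]

Q, N = {}, {}  # action values and visit counts, keyed by ((token, flag), action)
def q(s, a): return Q.get((s, a), 0.0)

def run(tokens, explore=0.0):
    """Walk the tokens; action 1 toggles the negation scope, action 0 keeps it."""
    flip, total, visited = False, 0, []
    for tok in tokens:
        s = (tok, flip)
        a = random.randrange(2) if random.random() < explore else int(q(s, 1) > q(s, 0))
        visited.append((s, a))
        if a == 1:
            flip = not flip
        pol = LEXICON.get(tok, 0)
        total += -pol if flip else pol   # negated scope inverts polarity
    return (1 if total >= 0 else -1), visited

random.seed(0)
for _ in range(3000):
    tokens, rating = random.choice(REVIEWS)
    pred, visited = run(tokens, explore=0.2)
    r = 1.0 if pred == rating else -1.0      # reward comes only from the rating
    for sa in visited:                        # every-visit Monte-Carlo averaging
        N[sa] = N.get(sa, 0) + 1
        old = Q.get(sa, 0.0)
        Q[sa] = old + (r - old) / N[sa]

print(sum(run(t)[0] == y for t, y in REVIEWS), "/", len(REVIEWS), "ratings reproduced")
```

The point of the sketch is the one the abstract makes: no intermediate negation labels are ever supplied, yet the policy learns to toggle its scope on tokens like "not" because doing so is what makes the induced sentiment match the user rating.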
Proceedings of the 1st Workshop on Robotics Challenges and Vision (RCV2013)
Technology assessment of advanced automation for space missions
Six general classes of technology requirements derived during the mission definition phase of the study were identified as having maximum importance and urgency: autonomous world-model-based information systems; learning and hypothesis formation; natural language and other man-machine communication; space manufacturing; teleoperators and robot systems; and computer science and technology.
Human Computation and Convergence
Humans are the most effective integrators and producers of information,
directly and through the use of information-processing inventions. As these
inventions become increasingly sophisticated, the substantive role of humans in
processing information will tend toward capabilities that derive from our most
complex cognitive processes, e.g., abstraction, creativity, and applied world
knowledge. Through the advancement of human computation - methods that leverage
the respective strengths of humans and machines in distributed
information-processing systems - formerly discrete processes will combine
synergistically into increasingly integrated and complex information processing
systems. These new, collective systems will exhibit an unprecedented degree of
predictive accuracy in modeling physical and techno-social processes, and may
ultimately coalesce into a single unified predictive organism, with the
capacity to address society's most wicked problems and achieve planetary
homeostasis.
Artificial Intelligence in the Context of Human Consciousness
Artificial intelligence (AI) can be defined as the ability of a machine to learn and make decisions based on acquired information. AI’s development has incited rampant public speculation regarding the singularity theory: a futuristic phase in which intelligent machines are capable of creating increasingly intelligent systems. Its implications, combined with the close relationship between humanity and its machines, make understanding both natural and artificial intelligence imperative. Researchers are continuing to discover the natural processes responsible for essential human skills like decision-making, understanding language, and performing multiple processes simultaneously. Artificial intelligence attempts to simulate these functions through techniques like artificial neural networks, Markov Decision Processes, Human Language Technology, and Multi-Agent Systems, which rely upon a combination of mathematical models and hardware.