What we know about learning: How we must change the school experience
Dr. Roger Schank was the founder of the renowned Institute for the Learning Sciences at Northwestern University, where he is John P. Evans Professor Emeritus in Computer Science, Education and Psychology. He was Professor of Computer Science and Psychology at Yale University and Director of the Yale Artificial Intelligence Project. He was a visiting professor at the University of Paris VII, an Assistant Professor of Computer Science and Linguistics at Stanford University, and a research fellow at the Institute for Semantics and Cognition in Switzerland. He also served as the Distinguished Career Professor in the School of Computer Science at Carnegie Mellon University. He is a fellow of the AAAI, was a founder of the Cognitive Science Society, and co-founded the Journal of Cognitive Science. He holds a Ph.D. in linguistics from the University of Texas.
In 1994, he founded Cognitive Arts Corporation, a company that designs and builds high quality multimedia simulations for use in corporate training and for online university-level courses. The latter were built in partnership with Columbia University.
In 2002 he founded Socratic Arts, a company that is devoted to making high-quality e-learning affordable for both businesses and schools.
Recommended from our members
Boy meets goal, boy loses goal, boy gets goal: the nature of feedback between goal-based simulation and understanding systems
We are designing a goal-based planning and simulation system called REACTOR for a multiple-actor world in which partially formulated plans are monitored during execution, providing feedback to the planner. Plan failures that occur are diagnosed by a combination of top-down (plan-synthesis) and bottom-up (plan-understanding) techniques, allowing an informed choice of response to the error. By maintaining separate belief spaces for each actor, we simulate planners who themselves simulate the planning and plan-understanding of other actors.
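The per-actor belief spaces described above can be sketched minimally. The classes and field names below are illustrative assumptions, not REACTOR's actual implementation; the point is only that each actor holds its own world model plus nested models of what other actors believe, which may diverge from reality:

```python
from dataclasses import dataclass, field

@dataclass
class Actor:
    """Illustrative sketch of per-actor belief spaces (not REACTOR itself):
    each actor keeps its own beliefs, plus models of other actors' beliefs,
    so it can simulate their planning and plan-understanding."""
    name: str
    beliefs: dict = field(default_factory=dict)  # this actor's own world model
    models: dict = field(default_factory=dict)   # nested models of other actors

    def observe(self, fact, value):
        self.beliefs[fact] = value

    def simulate(self, other):
        # plan against the *modeled* beliefs of the other actor,
        # which may diverge from that actor's real beliefs
        return self.models.setdefault(other, {})

alice, bob = Actor("alice"), Actor("bob")
alice.observe("door_locked", True)
bob.observe("door_locked", True)
# Alice (wrongly) believes Bob does not know the door is locked
alice.simulate("bob")["door_locked"] = False
print(alice.simulate("bob")["door_locked"], bob.beliefs["door_locked"])
# -> False True  (Alice's model of Bob diverges from Bob's actual beliefs)
```

Keeping the nested models separate from the actors' real beliefs is what lets a planner reason about misconceptions rather than assuming a single shared world state.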
Generating Rembrandt: Artificial Intelligence, Copyright, and Accountability in the 3A Era - The Human-like Authors Are Already Here - A New Model
Artificial intelligence (AI) systems are creative, unpredictable, independent, autonomous, rational, evolving, capable of data collection, communicative, efficient, accurate, and have free choice among alternatives. Similar to humans, AI systems can autonomously create and generate creative works. The use of AI systems in the production of works, either for personal or manufacturing purposes, has become common in the 3A era of automated, autonomous, and advanced technology. Despite this progress, there is a deep and common concern in modern society that AI technology will become uncontrollable. There is therefore a call for social and legal tools for controlling AI systems' functions and outcomes. This Article addresses the questions of the copyrightability of artworks generated by AI systems: ownership and accountability. The Article debates who should enjoy the benefits of copyright protection and who should be responsible for the infringement of rights and damages caused by AI systems that independently produce creative works. Subsequently, this Article presents the AI Multi-Player paradigm, arguing against the imposition of these rights and responsibilities on the AI systems themselves or on the different stakeholders, mainly the programmers who develop such systems. Most importantly, this Article proposes the adoption of a new model of accountability for works generated by AI systems: the AI Work Made for Hire (WMFH) model, which views the AI system as a creative employee or independent contractor of the user. Under this proposed model, ownership, control, and responsibility would be imposed on the humans or legal entities that use AI systems and enjoy their benefits. This model accurately reflects the human-like features of AI systems; it is justified by the theories behind copyright protection; and it serves as a practical solution to assuage the fears behind AI systems.
In addition, this model unveils the powers behind the operation of AI systems; hence, it efficiently imposes accountability on clearly identifiable persons or legal entities. Since AI systems are copyrightable algorithms, this Article reflects on the accountability for AI systems in other legal regimes, such as tort or criminal law, and in various industries using these systems.
Minds, Brains and Programs
This article can be viewed as an attempt to explore the consequences of two propositions. (1) Intentionality in human beings (and animals) is a product of causal features of the brain. I assume this is an empirical fact about the actual causal relations between mental processes and brains; it says simply that certain brain processes are sufficient for intentionality. (2) Instantiating a computer program is never by itself a sufficient condition of intentionality. The main argument of this paper is directed at establishing this claim. The form of the argument is to show how a human agent could instantiate the program and still not have the relevant intentionality. These two propositions have the following consequences. (3) The explanation of how the brain produces intentionality cannot be that it does it by instantiating a computer program. This is a strict logical consequence of 1 and 2. (4) Any mechanism capable of producing intentionality must have causal powers equal to those of the brain. This is meant to be a trivial consequence of 1. (5) Any attempt literally to create intentionality artificially (strong AI) could not succeed just by designing programs but would have to duplicate the causal powers of the human brain. This follows from 2 and 4.
The NOMAD system: expectation-based detection and correction of errors during understanding of syntactically and semantically ill-formed text
Most large text-understanding systems have been designed under the assumption that the input text will be in reasonably "neat" form (for example, newspaper stories and other edited texts). However, a great deal of natural language text (for example, memos, messages, rough drafts, conversation transcripts, etc.) has features that differ significantly from "neat" texts, posing special problems for readers, such as misspelled words, missing words, poor syntactic construction, unclear or ambiguous interpretation, missing crucial punctuation, etc. Our solution to these problems is to make use of expectations, based both on knowledge of surface English and on world knowledge of the situation being described. These syntactic and semantic expectations can be used to figure out unknown words from context, constrain the possible word senses of words with multiple meanings (ambiguity), fill in missing words (ellipsis), and resolve referents (anaphora). This method of using expectations to aid the understanding of "scruffy" texts has been incorporated into a working computer program called NOMAD, which understands scruffy texts in the domain of Navy ship-to-shore messages.
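The expectation-based correction idea can be illustrated with a toy sketch. The lexicon, the category labels, and the `correct` helper below are hypothetical stand-ins, not NOMAD's actual code; they only show how a semantic expectation can pick among spelling-correction candidates:

```python
import difflib

# Toy lexicon with crude semantic "expectations": each word's category.
# Both the entries and the categories are invented for illustration.
LEXICON = {
    "contact": "event", "sighted": "event", "sank": "event",
    "kashin": "ship", "destroyer": "ship", "torpedo": "weapon",
}

def correct(word, expected_category):
    """Sketch of expectation-based correction: if a token is unknown,
    propose the closest known word whose category matches what the
    parser currently expects at this point in the sentence."""
    if word in LEXICON:
        return word
    candidates = difflib.get_close_matches(word, LEXICON, n=3, cutoff=0.6)
    for c in candidates:
        if LEXICON[c] == expected_category:
            return c
    return word  # leave the token alone if no expectation fits

# "sihgted" is misspelled; after the subject, an event verb is expected
print(correct("sihgted", "event"))  # -> "sighted"
```

The key point is that surface similarity alone over-generates candidates; intersecting it with what the situation semantically expects is what makes the correction safe.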
Parsing with parallelism: a spreading-activation model of inference processing during text understanding
The past decade of research in Natural Language Processing has universally recognized that, since natural language input is almost always ambiguous with respect to its pragmatic implications, its syntactic parse, and even its lexical analysis (i.e., the choice of the correct word sense for an ambiguous word), processing natural language input requires decisions about word meanings, syntactic structure, and pragmatic inferences. The lexical, syntactic, and pragmatic levels of inferencing are not as disparate as they have often been treated in both psychological and artificial intelligence research; in fact, these three levels of analysis interact to form a joint interpretation of text. ATLAST (A Three-level Language Analysis SysTem) is an implemented integration of human language understanding at the lexical, the syntactic, and the pragmatic levels. For psychological validity, ATLAST is based on the results of experiments with human subjects. The ATLAST model uses a new architecture which was developed to incorporate three features: spreading-activation memory, two-stage syntax, and parallel processing of syntax and semantics. It is also a new framework within which to interpret and tackle unsolved problems through implementation and experimentation.
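The spreading-activation memory mentioned above can be sketched in a few lines. The concept graph, decay factor, and update rule below are illustrative assumptions, not ATLAST's actual architecture; the sketch only shows how context activation can disambiguate a word sense:

```python
# Toy concept graph: edges connect semantically related concepts
# (entirely invented for illustration).
GRAPH = {
    "bank(money)": ["loan", "deposit"],
    "bank(river)": ["water", "shore"],
    "loan": ["bank(money)"], "deposit": ["bank(money)"],
    "water": ["bank(river)"], "shore": ["bank(river)"],
}

def spread(seeds, steps=2, decay=0.5):
    """Minimal spreading-activation sketch: activation starts at the
    context words and propagates along graph edges, attenuated by a
    decay factor at each step."""
    act = {n: 0.0 for n in GRAPH}
    for s in seeds:
        act[s] = 1.0
    for _ in range(steps):
        nxt = dict(act)
        for node, a in act.items():
            for nb in GRAPH[node]:
                nxt[nb] += decay * a
        act = nxt
    return act

# the context word "loan" activates the financial sense of "bank"
act = spread(["loan"])
senses = ["bank(money)", "bank(river)"]
print(max(senses, key=act.get))  # -> "bank(money)"
```

Because activation from all context words accumulates in parallel, the lexical decision falls out of the same mechanism that carries pragmatic inference, which is the integration the abstract argues for.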
Self-weighted Multiple Kernel Learning for Graph-based Clustering and Semi-supervised Classification
The multiple kernel learning (MKL) method is generally believed to perform better than a single kernel method. However, some empirical studies show that this is not always true: the combination of multiple kernels may yield even worse performance than using a single kernel. There are two possible reasons for the failure: (i) most existing MKL methods assume that the optimal kernel is a linear combination of base kernels, which may not hold true; and (ii) some kernel weights are inappropriately assigned due to noise and carelessly designed algorithms. In this paper, we propose a novel MKL framework by following two intuitive assumptions: (i) each kernel is a perturbation of the consensus kernel; and (ii) the kernel that is close to the consensus kernel should be assigned a large weight. Impressively, the proposed method can automatically assign an appropriate weight to each kernel without introducing additional parameters, as existing methods do. The proposed framework is integrated into a unified framework for graph-based clustering and semi-supervised classification. We have conducted experiments on multiple benchmark datasets and our empirical results verify the superiority of the proposed framework.

Comment: Accepted by IJCAI 2018; code is available.
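The self-weighting idea behind the two assumptions can be sketched as a simple alternating scheme. This is an illustrative sketch, not the paper's exact optimization: the specific update (weights inversely proportional to each kernel's distance from the consensus, followed by normalization) and the toy kernels are assumptions made for the example:

```python
import numpy as np

def self_weighted_kernel(kernels, n_iter=20, eps=1e-12):
    """Illustrative sketch of self-weighted kernel fusion: alternate
    between (a) forming a consensus kernel as the weighted average of
    the base kernels and (b) reweighting each kernel inversely to its
    distance from the consensus, so kernels close to the consensus
    dominate and noisy kernels are suppressed."""
    m = len(kernels)
    w = np.full(m, 1.0 / m)  # start from uniform weights
    for _ in range(n_iter):
        consensus = sum(wi * Ki for wi, Ki in zip(w, kernels))
        # Frobenius distance of each base kernel from the consensus
        d = np.array([np.linalg.norm(Ki - consensus) for Ki in kernels])
        w = 1.0 / (2.0 * d + eps)  # closer => larger weight
        w /= w.sum()               # normalize to a convex combination
    return consensus, w

# toy base kernels: two RBF-style Gram matrices plus one unrelated
# "noise" kernel that should receive a small weight
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 3))
sq = np.sum((X[:, None] - X[None]) ** 2, axis=-1)
K1 = np.exp(-0.5 * sq)
K2 = np.exp(-0.1 * sq)
A = rng.normal(size=(10, 10))
K3 = A @ A.T  # random PSD matrix unrelated to the data
consensus, w = self_weighted_kernel([K1, K2, K3])
print(w)  # the noise kernel K3 gets the smallest weight
```

Note that the weights fall out of the distances themselves; unlike many MKL formulations, no extra regularization parameter has to be tuned for the weighting, which mirrors the parameter-free claim in the abstract.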