Programs for machine learning. Part II
Part I of this paper described the community unit as one of the indirect and implicit means we employ in specifying the behavior of a proposed learning system. It has been pointed out that, for complex problems, the community unit's vision tends to be too narrow and restricted because of its piecemeal manner of attacking problems. This second part of the paper describes a planning mechanism which attempts to overcome this difficulty by taking a larger view of a given task. After surveying the task in general, the planning mechanism subdivides the task into a hierarchy of subtasks, each by itself presumably easier to perform than the original task. This hierarchy of subtasks comprises a rough sketch of a possible course of action which guides the community unit. To manage classes of problems and to make efficient use of past experience, an induction mechanism is proposed. The induction mechanism will take a still larger view by considering the system's past experience with various problems and by attempting to apply that experience to related problems which have not previously been encountered.
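The planning idea above, surveying a task and subdividing it into a hierarchy of presumably easier subtasks, can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's mechanism; the names `plan` and `solve` and the use of a list as a stand-in "task" are all hypothetical.

```python
def solve(task):
    """Directly perform a task that is small enough to attack head-on.

    Here the "work" is just summing a list, a stand-in for a real subtask.
    """
    return sum(task)

def plan(task, limit=2):
    """Subdivide a task into a hierarchy of subtasks, then solve the leaves.

    Mirrors the abstract's idea: survey the whole task, split it into
    subtasks that are each presumably easier than the original, and let
    that rough sketch of a course of action guide execution.
    """
    if len(task) <= limit:                  # small enough: act directly
        return solve(task)
    mid = len(task) // 2                    # rough sketch: two subtasks
    return plan(task[:mid], limit) + plan(task[mid:], limit)

result = plan(list(range(10)))              # leaves solved, results combined
print(result)                               # prints 45
```

The recursion bottoms out when a subtask fits within the solver's "narrow vision," which is exactly the restriction the planning mechanism is meant to work around.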
Artificial Intelligence’s Fair Use Crisis
As automation supplants more forms of labor, creative expression still seems like a distinctly human enterprise. This may someday change: by ingesting works of authorship as “training data,” computer programs can teach themselves to write natural prose, compose music, and generate movies. Machine learning is an artificial intelligence (“AI”) technology with immense potential and a commensurate appetite for copyrighted works. In the United States, the copyright law mechanism most likely to facilitate machine learning’s uses of protected data is the fair use doctrine. However, current fair use doctrine threatens either to derail the progress of machine learning or to disenfranchise the human creators whose work makes it possible.
This Article addresses the problem in three Parts: using popular machine learning datasets and research as case studies, Part I describes how programs “learn” from corpora of copyrighted works and catalogs the legal risks of this practice. It concludes that fair use may not protect expressive machine learning applications, including the burgeoning field of natural language generation. Part II explains that applying today’s fair use doctrine to expressive machine learning will yield one of two undesirable outcomes: if U.S. courts reject the fair use defense for machine learning, valuable innovation may move to another jurisdiction or halt entirely; alternatively, if courts find the technology to be fair use, sophisticated software may divert rightful earnings from the authors of input data. This dilemma shows that fair use may no longer serve its historical purpose. Traditionally, fair use is understood to benefit the public by fostering expressive activity. Today, the doctrine increasingly serves the economic interests of powerful firms at the expense of disempowered individual rights holders. Finally, in Part III, this Article contemplates changes in doctrine and policy that could address these problems. It concludes that the United States’ interest in avoiding both prongs of AI’s fair use dilemma offers a novel justification for redistributive measures that could promote social equity alongside technological progress.
Computer Aided Verification
This open access two-volume set LNCS 13371 and 13372 constitutes the refereed proceedings of the 34th International Conference on Computer Aided Verification, CAV 2022, which was held in Haifa, Israel, in August 2022. The 40 full papers presented together with 9 tool papers and 2 case studies were carefully reviewed and selected from 209 submissions. The papers were organized in the following topical sections: Part I: invited papers; formal methods for probabilistic programs; formal methods for neural networks; software verification and model checking; hyperproperties and security; formal methods for hardware, cyber-physical, and hybrid systems. Part II: probabilistic techniques; automata and logic; deductive verification and decision procedures; machine learning; synthesis and concurrency. This is an open access book.
Sparsity Methods for Systems and Control
The method of sparsity has been attracting a lot of attention not only in fields such as signal processing, machine learning, and statistics, but also in systems and control. The method is known as compressed sensing, compressive sampling, sparse representation, or sparse modeling. More recently, the sparsity method has been applied to systems and control to design resource-aware control systems. This book gives a comprehensive guide to sparsity methods for systems and control, from standard sparsity methods in finite-dimensional vector spaces (Part I) to optimal control methods in infinite-dimensional function spaces (Part II). The primary objective of this book is to show how to use sparsity methods for several engineering problems. For this, the author provides MATLAB programs by which the reader can try sparsity methods for themselves. Readers will obtain a deep understanding of sparsity methods by running these MATLAB programs. Sparsity Methods for Systems and Control is suitable for graduate-level university courses, though it should also be comprehensible to undergraduate students who have a basic knowledge of linear algebra and elementary calculus. Part II of the book, in particular, should appeal to professional researchers and engineers who are interested in applying sparsity methods to systems and control.
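To make the compressed-sensing idea concrete: a sparse signal can often be recovered from far fewer measurements than its dimension by l1-regularized least squares. The sketch below uses ISTA (iterative soft-thresholding), one standard sparsity method; it is an illustrative Python/NumPy example under assumed parameters, not a program from the book (the book's examples are in MATLAB).

```python
import numpy as np

# Recover a 5-sparse, 200-dimensional signal from only 50 random
# measurements by solving min ||Ax - y||^2/2 + lam*||x||_1 with ISTA.
rng = np.random.default_rng(0)
n, m, k = 50, 200, 5
A = rng.standard_normal((n, m)) / np.sqrt(n)     # random sensing matrix
x_true = np.zeros(m)
x_true[rng.choice(m, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                                   # compressed measurements

lam = 0.01                                       # l1 weight (illustrative)
L = np.linalg.norm(A, 2) ** 2                    # Lipschitz const. of gradient
x = np.zeros(m)
for _ in range(2000):
    z = x - A.T @ (A @ x - y) / L                # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold

err = np.linalg.norm(x - x_true)                 # small: signal recovered
```

The soft-thresholding step is what promotes sparsity: coordinates whose gradient update stays below the threshold are set exactly to zero.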
The Dawn of Fully Automated Contract Drafting: Machine Learning Breathes New Life Into a Decades-Old Promise
Technological advances within contract drafting software have seemingly plateaued. Despite the decades-long hopes and promises of many commentators, critics doubt this technology will ever fully automate the drafting process. But while there has been a lack of innovation in contract drafting software, technological advances have continued to improve contract review and analysis programs. “Machine learning,” the leading innovative force in these areas, has proven incredibly efficient, performing in mere minutes tasks that would otherwise take a team of lawyers tens of hours. Some contract drafting programs have already experimented with machine learning capabilities, and this technology may pave the way for the full automation of contract drafting. Although intellectual property, data access, and ethical obstacles may delay complete integration of machine learning into contract drafting, full automation is likely still viable.
Enhancing Undergraduate AI Courses through Machine Learning Projects
It is generally recognized that an undergraduate introductory Artificial Intelligence course is challenging to teach. This is, in part, due to the diverse and seemingly disconnected core topics that are typically covered. The paper presents work funded by the National Science Foundation to address this problem and to enhance the student learning experience in the course. Our work involves the development of an adaptable framework for the presentation of core AI topics through a unifying theme of machine learning. A suite of hands-on semester-long projects is developed, each involving the design and implementation of a learning system that enhances a commonly deployed application. The projects use machine learning as a unifying theme to tie together the core AI topics. In this paper, we will first provide an overview of our model and the projects being developed, and will then present in some detail our experiences with one of the projects, Web User Profiling, which we have used in our AI class.
Building Program Vector Representations for Deep Learning
Deep learning has made significant breakthroughs in various fields of artificial intelligence. Advantages of deep learning include the ability to capture highly complicated features, weak involvement of human engineering, etc. However, it is still virtually impossible to use deep learning to analyze programs, since deep architectures cannot be trained effectively with pure backpropagation. In this pioneering paper, we propose the "coding criterion" to build program vector representations, which are the premise of deep learning for program analysis. Our representation learning approach directly makes deep learning a reality in this new field. We evaluate the learned vector representations both qualitatively and quantitatively. We conclude, based on the experiments, that the coding criterion is successful in building program representations. To evaluate whether deep learning is beneficial for program analysis, we feed the representations to deep neural networks, and achieve higher accuracy in the program classification task than "shallow" methods, such as logistic regression and the support vector machine. This result confirms the feasibility of deep learning to analyze programs. It also gives primary evidence of its success in this new field. We believe deep learning will become an outstanding technique for program analysis in the near future.
Comment: This paper was submitted to ICSE'1
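The premise of the abstract, turning a program into a fixed-size vector that a neural network can consume, can be illustrated with a toy embedding-and-pooling sketch. This is a hypothetical simplification, not the paper's "coding criterion": the node-type list, the embedding table, and the averaging pool are all assumptions for illustration.

```python
import numpy as np

# Toy program vector: embed each AST node type and average-pool.
# In the paper the embeddings are *learned*; here they are random
# placeholders, just to show the shape of the representation.
rng = np.random.default_rng(1)
NODE_TYPES = ["FuncDef", "If", "For", "Assign", "Call", "Return"]
DIM = 8
embed = {t: rng.standard_normal(DIM) for t in NODE_TYPES}

def program_vector(node_types):
    """Map a program (as a sequence of AST node types) to one DIM-vector."""
    return np.mean([embed[t] for t in node_types], axis=0)

v = program_vector(["FuncDef", "For", "If", "Assign", "Return"])
print(v.shape)   # prints (8,): fixed-size input for a deep classifier
```

A vector like `v` is what would then be fed to a deep network (or, as the baseline comparison in the abstract suggests, to logistic regression or an SVM) for program classification.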