18 research outputs found

    Fair Use and Machine Learning

    Get PDF
    There would be a beaten path to the maker of software that could reliably state whether a use of a copyrighted work was protected as fair use. But applying machine learning to fair use faces considerable hurdles. Fair use has generated hundreds of reported cases, but machine learning works best with examples in greater numbers. More examples may be available from mining the decision-making of websites, from having humans judge fair use examples just as they label images to teach self-driving cars, and from using machine learning itself to generate examples. Beyond the number of examples, the form of the data is more abstract than the concrete examples on which machine learning has succeeded, such as computer vision and viewing recommendations, and even than machine translation, where the operative unit was the sentence rather than a concept distributed across a document. But techniques presently in use do find patterns in data to build more abstract features, and then repeat the process to build still more abstract features. It may be that such automated processes can provide the necessary conceptual blocks. In addition, tools drawn from knowledge engineering (ironically, the branch of artificial intelligence that of late has been eclipsed by machine learning) may extract concepts from such data as judicial opinions. Such tools would include new methods of knowledge representation and automated tagging. If the data questions are overcome, machine learning provides intriguing possibilities, but it also faces challenges from the nature of fair use law. Artificial neural networks have shown formidable performance in classification, yet classifying fair use examples raises a number of questions. Fair use law is often considered contradictory, vague, and unpredictable; in computer science terminology, the data is “noisy.” That inconsistency could flummox artificial neural networks, or the networks could disclose consistencies that have eluded commentators. Other algorithms such as nearest neighbor and support vector machines could likewise both use and test legal reasoning by analogy. Another approach, decision trees, may be simpler than the others in some respects, but it can work on smaller data sets (addressing one of the data issues above) and provides something that machine learning often lacks: transparency. Decision trees disclose their decision-making process, whereas neural networks, especially deep learning, are opaque black boxes. Finally, unsupervised machine learning could be used to explore fair use case law for patterns, whether consistent structures in its jurisprudence or biases that have played an undisclosed role. Any patterns found, however, should be treated as possibilities pending testing by other means.
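
    The transparency argument for decision trees can be illustrated with a small sketch. Everything below is hypothetical and not the article's method: it assumes hand-labelled disputes encoded as numeric scores for the four statutory fair use factors (an encoding the article does not prescribe), invents a tiny toy data set, and uses scikit-learn's decision tree, whose learned rules can then be printed and inspected.

    # A minimal sketch, assuming hypothetical per-factor scores and labels.
    from sklearn.tree import DecisionTreeClassifier, export_text

    FEATURES = ["purpose", "nature", "amount", "market_effect"]

    # Each row scores the four factors on a -1..1 scale (illustrative only);
    # the label is 1 where fair use was found, 0 where it was not.
    X = [
        [ 0.8,  0.5, -0.2,  0.6],   # e.g. transformative parody, small excerpt
        [-0.7, -0.4, -0.9, -0.8],   # e.g. verbatim commercial copying
        [ 0.6,  0.3,  0.4,  0.2],
        [-0.5,  0.1, -0.6, -0.7],
    ]
    y = [1, 0, 1, 0]

    clf = DecisionTreeClassifier(max_depth=3, random_state=0)
    clf.fit(X, y)

    # The tree discloses its decision-making process, the transparency
    # advantage the abstract attributes to this family of algorithms.
    print(export_text(clf, feature_names=FEATURES))
    print(clf.predict([[0.7, 0.4, -0.1, 0.5]]))  # classify a new dispute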

    Matching sets of features for efficient retrieval and recognition

    Get PDF
    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006. Includes bibliographical references (p. 145-153). By Kristen Lorraine Grauman.
    In numerous domains it is useful to represent a single example by the collection of local features or parts that comprise it. In computer vision in particular, local image features are a powerful way to describe images of objects and scenes. Their stability under variable image conditions is critical for success in a wide range of recognition and retrieval applications. However, many conventional similarity measures and machine learning algorithms assume vector inputs. Comparing and learning from images represented by sets of local features is therefore challenging, since each set may vary in cardinality and its elements lack a meaningful ordering. In this thesis I present computationally efficient techniques to handle comparisons, learning, and indexing with examples represented by sets of features. The primary goal of this research is to design and demonstrate algorithms that can effectively accommodate this useful representation in a way that scales with both the representation size and the number of images available for indexing or learning. I introduce the pyramid match algorithm, which efficiently forms an implicit partial matching between two sets of feature vectors. The matching has a linear time complexity, naturally forms a Mercer kernel, and is robust to clutter or outlier features, a critical advantage for handling images with variable backgrounds, occlusions, and viewpoint changes. I provide bounds on the expected error relative to the optimal partial matching. For very large databases, even extremely efficient pairwise comparisons may not offer adequately responsive query times. I show how to perform sub-linear time retrievals under the matching measure with randomized hashing techniques, even when input sets have varying numbers of features. My results are focused on several important vision tasks, including applications to content-based image retrieval, discriminative classification for object recognition, kernel regression, and unsupervised learning of categories. I show how the dramatic increase in performance enables accurate and flexible image comparisons to be made on large-scale data sets, and removes the need to artificially limit the number of local descriptions used per image when learning visual categories.
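
    The core intuition of the pyramid match, an implicit partial matching formed from multi-resolution histograms whose intersections are combined with weights that favour finer bins, can be sketched briefly. The toy code below only illustrates that intuition for low-dimensional points in the unit square; the bin counts, number of levels, and random feature sets are arbitrary choices, and the thesis's full construction (error bounds, Mercer kernel argument, hashing extensions) is not reproduced here.

    import numpy as np

    def grid_histogram(points, bins_per_dim):
        """Count points per cell of a regular grid over [0, 1)^d."""
        idx = np.clip((points * bins_per_dim).astype(int), 0, bins_per_dim - 1)
        hist = {}
        for cell in map(tuple, idx):
            hist[cell] = hist.get(cell, 0) + 1
        return hist

    def intersection(h1, h2):
        """Histogram intersection: matches implied by co-occupied cells."""
        return sum(min(count, h2.get(cell, 0)) for cell, count in h1.items())

    def pyramid_match(X, Y, levels=4):
        """Weighted count of new matches found as the grid coarsens
        (level 0 = finest bins; the weight halves at each coarser level)."""
        previous, score = 0.0, 0.0
        for i in range(levels):
            bins = 2 ** (levels - 1 - i)             # finest grid first
            inter = intersection(grid_histogram(X, bins),
                                 grid_histogram(Y, bins))
            score += (inter - previous) / (2 ** i)   # new matches, geometric weight
            previous = inter
        return score

    rng = np.random.default_rng(0)
    A = rng.random((30, 2))   # two sets of 2-D "local features" in [0, 1)^2
    B = rng.random((25, 2))
    print(pyramid_match(A, B), pyramid_match(A, A))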

    The Data Science Design Manual

    Get PDF

    Acta Cybernetica : Volume 21. Number 3.

    Get PDF

    Law Informs Code: A Legal Informatics Approach to Aligning Artificial Intelligence with Humans

    Get PDF
    We are currently unable to specify human goals and societal values in a way that reliably directs AI behavior. Law-making and legal interpretation form a computational engine that converts opaque human values into legible directives. "Law Informs Code" is the research agenda embedding legal knowledge and reasoning in AI. Similar to how parties to a legal contract cannot foresee every potential contingency of their future relationship, and legislators cannot predict all the circumstances under which their proposed bills will be applied, we cannot ex ante specify rules that provably direct good AI behavior. Legal theory and practice have developed arrays of tools to address these specification problems. For instance, legal standards allow humans to develop shared understandings and adapt them to novel situations. In contrast to more prosaic uses of the law (e.g., as a deterrent of bad behavior through the threat of sanction), when leveraged as an expression of how humans communicate their goals and of what society values, Law Informs Code. We describe how data generated by legal processes (methods of law-making, statutory interpretation, contract drafting, applications of legal standards, legal reasoning, etc.) can facilitate the robust specification of inherently vague human goals. This increases human-AI alignment and the local usefulness of AI. Toward society-AI alignment, we present a framework for understanding law as the applied philosophy of multi-agent alignment. Although law is partly a reflection of historically contingent political power - and thus not a perfect aggregation of citizen preferences - if properly parsed, its distillation offers the most legitimate computational comprehension of societal values available. If law eventually informs powerful AI, engaging in the deliberative political process to improve law takes on even more meaning. Comment: Forthcoming in Northwestern Journal of Technology and Intellectual Property, Volume 2

    Adaptive Phishing Detection System using Machine Learning

    Full text link
    Despite the availability of toolbars and studies in phishing, the number of phishing attacks has been increasing in the past years. It remains a challenge to develop robust phishing detection systems due to the continuous change of attack models. We attempt to address this by designing an adaptive phishing detection system with the ability to continually learn and detect phishing robustly. In the first work, we demonstrate a systematic way to develop a novel phishing detection approach using a compression algorithm. We also propose the use of compression ratio as a novel machine learning feature, which significantly improves machine learning based phishing detection over previous studies. Our proposed method outperforms the use of best-performing HTML-based features in past studies, with a true positive rate of 80.04%. In the following work, we propose a feature-free method using Normalised Compression Distance (NCD), a metric which computes the similarity of two websites by compressing them, eliminating the need to perform any feature extraction. This method examines the HTML of webpages and computes their similarity with known phishing websites. Our approach is feasible to deploy in real systems with a processing time of roughly 0.3 seconds, and significantly outperforms previous methods in detecting phishing websites, with an AUC score of 98.68%, a G-mean score of 94.47%, a high true positive rate (TPR) of around 90%, while maintaining a low false positive rate (FPR) of 0.58%. We also discuss the implications of the automation offered by AutoML frameworks for the role of human experts and data scientists in the domain of phishing detection. Our work investigates whether models that are built using AutoML frameworks can outperform the results achieved by human data scientists on phishing datasets, and analyses the relationship between the performances and various data complexity measures. There remain many challenges for building a real-world phishing detection system using AutoML frameworks due to the current support only for supervised classification problems, leading to the need for labelled data, and the inability to update the AutoML-based models incrementally. This indicates that experts with knowledge in the domain of phishing and cybersecurity are still essential in phishing detection.
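
    The feature-free comparison described above rests on the Normalised Compression Distance, NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C(.) is the compressed length of a string. The sketch below is an illustration only: zlib stands in for whichever compressor the thesis actually used, and the HTML snippets and flagging threshold are invented.

    import zlib

    def c(data: bytes) -> int:
        """Compressed length of a byte string (zlib is an arbitrary stand-in)."""
        return len(zlib.compress(data, 9))

    def ncd(x: bytes, y: bytes) -> float:
        """Normalised Compression Distance: near 0 when the two documents
        compress well together (similar), near 1 when they share little."""
        cx, cy, cxy = c(x), c(y), c(x + y)
        return (cxy - min(cx, cy)) / max(cx, cy)

    suspect = b"<html><form action='login.php'>Enter your bank password</form></html>"
    known_phish = b"<html><form action='login.php'>Enter your password now</form></html>"
    benign = b"<html><h1>Weather</h1><p>Sunny tomorrow with light winds.</p></html>"

    # A page would be flagged when its distance to some known phishing page
    # falls below a tuned threshold (0.5 here is purely illustrative).
    for name, page in [("phish", known_phish), ("benign", benign)]:
        print(name, ncd(suspect, page), ncd(suspect, page) < 0.5)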

    Learning Functional Prepositions

    Full text link
    In first language acquisition, what does it mean for a grammatical category to have been acquired, and what are the mechanisms by which children learn functional categories in general? In the context of prepositions (Ps), if the lexical/functional divide cuts through the P category, as has been suggested in the theoretical literature, then constructivist accounts of language acquisition would predict that children develop adult-like competence with the more abstract units, functional Ps, at a slower rate compared to their acquisition of lexical Ps. Nativists instead assume that the features of functional P are made available by Universal Grammar (UG), and are mapped as quickly, if not faster, than the semantic features of their lexical counterparts. Conversely, if Ps are either all lexical or all functional, on both accounts of acquisition we should observe few differences in learning. Three empirical studies of the development of P were conducted via computer analysis of the English and Spanish sub-corpora of the CHILDES database. Study 1 analyzed errors in child usage of Ps, finding almost no errors of commission in either language, but that the English learners lag in their production of functional Ps relative to lexical Ps. That no such delay was found in the Spanish data suggests that the English pattern is not universal. Studies 2 and 3 applied novel measures of phrasal (P head + nominal complement) productivity to the data. Study 2 examined prepositional phrases (PPs) whose head-complement pairs appeared in both child and adult speech, while Study 3 considered PPs produced by children that never occurred in adult speech. In both studies the productivity of functional Ps for English children developed faster than that of lexical Ps. In Spanish there were few differences, suggesting that children had already mastered both orders of Ps early in acquisition. These empirical results suggest that at least in English P is indeed a split category, and that children acquire the syntax of the functional subset very quickly, committing almost no errors. The UG position is thus supported. Next, the dissertation investigates a 'soft nativist' acquisition strategy that combines distributional analysis of the input, minimal a priori knowledge of the possible co-occurrence of morphosyntactic features associated with functional elements, and linguistic knowledge that is presumably acquired via the experience of pragmatic, communicative situations. The output of the analysis consists of a mapping of morphemes to the feature bundles of nominative pronouns for English and Spanish, plus specific claims about the sort of knowledge required from experience. The acquisition model is then extended to adpositions, to examine what, if anything, distributional analysis can tell us about the functional sequences of PPs. The results confirm the theoretical position according to which spatiotemporal Ps are lexical in character, rooting their own extended projections, and that functional Ps express an aspectual sequence in the functional superstructure of the PP.

    Semantic discovery and reuse of business process patterns

    Get PDF
    Patterns currently play an important role in modern information systems (IS) development, but their use has mainly been restricted to the design and implementation phases of the development lifecycle. Given the increasing significance of business modelling in IS development, patterns have the potential to provide a viable solution for promoting reusability of recurrent generalized models in the very early stages of development. As a statement of research in progress, this paper focuses on business process patterns and proposes an initial methodological framework for the discovery and reuse of business process patterns within the IS development lifecycle. The framework borrows ideas from the domain engineering literature and proposes the use of semantics to drive both the discovery of patterns and their reuse.

    Calcul de centralité et identification de structures de communautés dans les graphes de documents

    Get PDF
    In this thesis, we are interested in characterizing large collections of documents (using the links between them) in order to facilitate their use and exploitation by humans or by software tools. We first addressed the problem of centrality computation in document graphs. We described the main existing centrality algorithms, focusing on the TKC (Tightly Knit Community) problem that affects most recent centrality measures. We then proposed three new centrality algorithms (MHITS, NHITS and DocRank) that tackle the TKC effect. The proposed algorithms were evaluated and compared to existing approaches, and evaluation criteria were proposed to measure the TKC effect. In a second step, we investigated the problem of unsupervised document clustering. Specifically, we considered this clustering as a task of community structure identification (CSI) in document graphs. We described the main existing CSI approaches, distinguishing those based on a generative model from the algorithmic or classical ones. We then proposed a generative model (SPCE) based on smoothing and on an appropriate initialization for CSI in sparse graphs. The SPCE model was evaluated and validated by comparing it to other CSI approaches. Finally, we showed that the SPCE model can be extended to take into account simultaneously the links and the content of documents.
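
    The MHITS, NHITS and DocRank algorithms are not specified in the abstract; as background, the sketch below shows the classic HITS hub/authority iteration that such variants build on, over a hypothetical document graph given as adjacency lists (document -> documents it links to). The TKC effect arises precisely because this mutual reinforcement can concentrate scores inside one densely interlinked subgraph.

    import math

    def hits(links, iterations=50):
        """Return (hub, authority) scores for each node of a directed graph."""
        nodes = set(links) | {t for targets in links.values() for t in targets}
        hub = {n: 1.0 for n in nodes}
        auth = {n: 1.0 for n in nodes}
        for _ in range(iterations):
            # Authority score: sum of hub scores of the documents pointing to it.
            auth = {n: 0.0 for n in nodes}
            for src, targets in links.items():
                for t in targets:
                    auth[t] += hub[src]
            norm = math.sqrt(sum(v * v for v in auth.values())) or 1.0
            auth = {n: v / norm for n, v in auth.items()}
            # Hub score: sum of authority scores of the documents it points to.
            hub = {n: sum(auth[t] for t in links.get(n, ())) for n in nodes}
            norm = math.sqrt(sum(v * v for v in hub.values())) or 1.0
            hub = {n: v / norm for n, v in hub.items()}
        return hub, auth

    graph = {"d1": ["d2", "d3"], "d2": ["d3"], "d4": ["d3"]}  # toy document graph
    hub, auth = hits(graph)
    print(sorted(auth.items(), key=lambda kv: -kv[1]))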