309 research outputs found

    Should Some Independent Contractors Be Redefined as Employees under Labor Law


    Investigating Privacy Policies using PolicyLint Tool

    Organizations primarily inform users about their data collection and sharing practices through privacy policies. Recent research has proposed tools that help users comprehend these lengthy and intricate legal documents by summarizing their collection and sharing statements. These tools have a significant flaw, however: they overlook the possibility of contradictions within a single policy. This paper introduces PolicyLint, a privacy policy analysis tool that simultaneously considers negation and varying semantic levels of data objects and entities. PolicyLint does this by using sentence-level natural language processing to automatically construct ontologies from a large corpus of privacy policies and to capture both positive and negative statements about data collection and sharing. Using PolicyLint, I examined the policies of 300 apps and found that some contained contradictions that could indicate false statements. I manually checked 100 contradictions and identified troubling patterns, including misleading presentation, attempts to redefine commonly understood terms, and the collection or sharing of data from which sensitive information can be derived, thereby enabling tracking. Overall, PolicyLint significantly advances automated privacy policy analysis.
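
    A minimal sketch of the kind of contradiction flagged here is given below. It is not PolicyLint's implementation or API; the hand-written ontology, the Statement tuple, and the find_contradictions helper are hypothetical stand-ins for the ontologies and sentence-level NLP the tool actually derives from a large policy corpus. The sketch only shows how a negative statement about a general data type ("we do not collect personal information") can contradict a positive statement about a more specific one ("we collect your email address").

    from dataclasses import dataclass

    # Hypothetical subsumption ontology: specific data object -> more general parent.
    DATA_ONTOLOGY = {
        "email address": "personal information",
        "device identifier": "personal information",
        "personal information": None,
    }

    def subsumes(general, specific):
        """True if `general` equals `specific` or is an ancestor of it in the ontology."""
        node = specific
        while node is not None:
            if node == general:
                return True
            node = DATA_ONTOLOGY.get(node)
        return False

    @dataclass(frozen=True)
    class Statement:
        collects: bool    # True = "we collect/share", False = "we do not collect/share"
        data_object: str  # e.g. "email address"

    def find_contradictions(statements):
        """Pair each negative statement with positive statements about the same
        or a more specific data object."""
        negatives = [s for s in statements if not s.collects]
        positives = [s for s in statements if s.collects]
        return [(n, p) for n in negatives for p in positives
                if subsumes(n.data_object, p.data_object)]

    policy = [
        Statement(collects=False, data_object="personal information"),
        Statement(collects=True, data_object="email address"),
    ]
    for neg, pos in find_contradictions(policy):
        print(f"denies collecting '{neg.data_object}' but collects '{pos.data_object}'")

    PolicyLint itself additionally reasons over the entities involved in collection and sharing and mines its ontologies automatically from policy text; the sketch isolates only the interaction between negation and semantic subsumption of data objects.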

    Employment by Design: Employees, Independent Contractors and the Theory of the Firm

    Employment laws protect “employees” and impose duties on their “employers.” In the modern working world, however, “employee” and “employer” status is not always clear. The status of some workers and the firms they serve can be ambiguous, especially when the workers work as individuals not organized as firms. Individual workers might be “employees,” but they might also be self-employed individuals working as “independent contractors.” Even if it is clear that workers are someone’s “employees,” the identity of the employer can be unclear. If one firm pays “employees” to work mainly or exclusively for another firm that pays the first firm for the work, which firm is the “employer” of the employees?

    On link predictions in complex networks with an application to ontologies and semantics

    It is assumed that ontologies can be represented and treated as networks and that these networks show properties of so-called complex networks. Just like ontologies, “our current pictures of many networks are substantially incomplete” (Clauset et al., 2008, p. 3ff.). For this reason, networks have been analyzed and methods for identifying missing edges have been proposed. The goal of this thesis is to show how treating and understanding an ontology as a network can be used to extend and improve existing ontologies, and how measures from graph theory and techniques developed in recent years for social network analysis and other complex networks can be applied to semantic networks in the form of ontologies. Given a sufficiently large amount of data, here data organized according to an ontology, and the relations defined in that ontology, the goal is to find patterns that help reveal information implicitly present in the ontology. Unlike reasoning and methods of inference, the approach does not rely on predefined patterns of relations; instead, it identifies patterns of relations and other structural information in the ontology graph and uses them to calculate probabilities of as yet unknown relations between entities. The methods adopted from network theory and the social sciences presented in this thesis are expected to considerably reduce the work and time needed to build an ontology by automating the process. They are believed to be applicable to any ontology and can be used in either a supervised or an unsupervised fashion to automatically identify missing relations, add new information, and thereby enlarge the data set and increase the information explicitly available in an ontology.

    As seen in the IBM Watson example, different knowledge bases are applied in NLP tasks. An ontology like WordNet contains lexical and semantic knowledge on lexemes, while general knowledge ontologies like Freebase and DBpedia contain information on entities of the non-linguistic world. In this thesis, examples from both kinds of ontologies are used: WordNet and DBpedia. WordNet is a manually crafted resource that establishes a network of representations of word senses, connected to the word forms used to express them, and connects these senses and forms with lexical and semantic relations in machine-readable form. As will be shown, although a lot of work has been put into WordNet, it can still be improved. While it already contains many lexical and semantic relations, it is not possible to distinguish between polysemous and homonymous words. As will be explained later, such a distinction can be useful for NLP problems concerning word sense disambiguation and hence for QA. Using graph- and network-based centrality and path measures, the goal is to train a machine learning model that can identify new, missing relations in the ontology and apply them across the whole data set (i.e., WordNet). The approach presented here will be based on a deep analysis of the ontology and the network structure it exposes. Using different measures from graph theory as features, together with a set of manually created examples (a so-called training set), a supervised machine learning approach will be presented and evaluated, showing the benefit of interpreting an ontology as a network compared to approaches that do not take the network structure into account.

    DBpedia is an ontology derived from Wikipedia. The structured information given in Wikipedia infoboxes is parsed, and relations conforming to an underlying ontology are extracted. Unlike Wikipedia, DBpedia contains only the small amount of structured information (e.g., the infoboxes of each page) and not the large amount of unstructured information (i.e., the free text) of Wikipedia pages. Hence DBpedia is missing a large number of possible relations that are described in Wikipedia. Compared to Freebase, an ontology used and maintained by Google, DBpedia is also quite incomplete. This, and the fact that Wikipedia is expected to be usable for checking possible results, makes DBpedia a good subject of investigation. The approach to extending DBpedia presented in this thesis will be based on a thorough analysis of the network structure and the assumed evolution of the network, which points to the locations in the network where information is most likely to be missing. Since the structure of the ontology and the resulting network is assumed to reveal patterns connected to certain relations defined in the ontology, these patterns can be used to identify what kind of relation is missing between two entities of the ontology. This will be done using unsupervised methods from the fields of data mining and machine learning.
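
    A minimal sketch of the supervised, feature-based link-prediction idea described above follows. It assumes an undirected toy graph, a handful of standard measures (Jaccard coefficient, Adamic-Adar index, preferential attachment, shortest-path distance) as features, and a logistic-regression classifier; none of these choices is taken from the thesis itself, and a real experiment on WordNet or DBpedia would also hold the positive edges out of the graph before computing their features.

    import networkx as nx
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    def pair_features(G, pairs):
        """Neighbourhood- and path-based features for candidate (u, v) node pairs."""
        jac = {(u, v): s for u, v, s in nx.jaccard_coefficient(G, pairs)}
        aa = {(u, v): s for u, v, s in nx.adamic_adar_index(G, pairs)}
        pa = {(u, v): s for u, v, s in nx.preferential_attachment(G, pairs)}
        rows = []
        for u, v in pairs:
            try:
                dist = nx.shortest_path_length(G, u, v)
            except nx.NetworkXNoPath:
                dist = G.number_of_nodes()  # sentinel for disconnected pairs
            rows.append([jac[(u, v)], aa[(u, v)], pa[(u, v)], dist])
        return np.array(rows)

    # Toy undirected graph standing in for a WordNet- or DBpedia-derived network.
    G = nx.karate_club_graph()

    # Labelled training pairs: known edges as positives, sampled non-edges as
    # negatives (a real experiment would remove the positive edges from G before
    # computing their features, to avoid leakage).
    positives = list(G.edges())[:30]
    negatives = list(nx.non_edges(G))[:30]
    pairs = positives + negatives
    labels = [1] * len(positives) + [0] * len(negatives)

    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    model.fit(pair_features(G, pairs), labels)

    # Rank further non-edges; high scores suggest relations missing from the graph.
    candidates = list(nx.non_edges(G))[30:60]
    scores = model.predict_proba(pair_features(G, candidates))[:, 1]
    for (u, v), score in sorted(zip(candidates, scores), key=lambda t: -t[1])[:5]:
        print(f"candidate edge ({u}, {v}): score {score:.2f}")

    For the unsupervised DBpedia setting mentioned at the end of the abstract, the same kind of structural features could instead be fed to clustering or frequent-pattern mining rather than to a classifier trained on labelled examples.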

    Deference and intervention in the characterisation of work contracts

    This thesis examines judicial approaches to the characterisation of work contracts in Australia. Important consequences flow from the characterisation of a contract as one of employment. A significant number of labour statutes bestow rights and protections upon employees only, thereby excluding other types of workers, such as independent contractors, from their coverage. Employing entities seeking to avoid statutory labour obligations use various contractual techniques to disguise employees as independent contractors. In some cases, courts have afforded deference to these contractual arrangements. In other cases, courts have adopted an interventionist approach, disregarding or according limited weight to the terms of the written contract, and focusing instead on the underlying substance of the relationship. There is, however, an absence of clarity as to the conceptual and doctrinal justifications for such intervention, resulting in judicial oscillations between deference and intervention. This thesis argues that Australian courts should adopt the interventionist approach to the characterisation of work contracts. It presents the conceptual and doctrinal justifications for the interventionist approach and constructs a two-stage analytical framework for the application of this approach by the courts. In the course of elucidating and defending the interventionist approach to characterisation, this thesis addresses broader conceptual questions concerning the distinction between formalism and substantivism in common law adjudication, the interaction of common law and statute, and the normative tensions that arise when the norms of public regulation are channelled through the vehicle of private law. The thesis focuses primarily on Australian law, though it also draws upon the law of the United Kingdom, United States and Canada, where relevant. The increasing diversity of work arrangements in the modern economy, fuelled in part by the emergence of the gig economy in recent years, has placed strains upon the common law's architecture for identifying the beneficiary of labour law's protections. This thesis seeks to make a contribution to the important task of reconstructing that architecture and consolidating its conceptual and doctrinal foundations. The thesis takes the form of a thesis by compilation, comprising an integrative chapter and seven sole-authored peer-reviewed journal articles.

    Psychology at Indiana University: A Centennial Review and Compendium

    Preface -- The legacy of the Laboratory (1888-1988): a history of the Department of Psychology at Indiana University / by James H. Capshew -- Appendices: A. Psychology Faculty, 1885-1988 -- B. Department Chairs, 1885-1988 -- C. Psychological Clinic Directors, 1922-1988 -- D. Graduate Degrees, 1886-1987 -- E. Bibliography of William Lowe Bryan -- Index to Appendices A-C -- The faculty in 1988