140 research outputs found

    What’s behind the ag-data logo? An examination of voluntary agricultural-data codes of practice

    In this article, we analyse agricultural data (ag-data) codes of practice. After the introduction, Part II examines the emergence of ag-data codes of practice and provides two case studies—the American Farm Bureau’s Privacy and Security Principles for Farm Data and New Zealand’s Farm Data Code of Practice—that illustrate that the ultimate aims of ag-data codes of practice are inextricably linked to consent, disclosure, transparency and, ultimately, the building of trust. Part III highlights the commonalities and challenges of ag-data codes of practice. Part IV offers several concluding observations. Most notably, while ag-data codes of practice may help change practices and convert complex details about ag-data contracts into something tangible, understandable and useable, it is important for agricultural industries not to hastily or uncritically accept or adopt them. There need to be clear objectives and a clear direction in which stakeholders want to take ag-data practices. In other words, stakeholders need to be sure about what they are trying, and able, to achieve with ag-data codes of practice. Ag-data codes of practice need credible administration, accreditation and monitoring. There also needs to be a way of reviewing and evaluating the codes that is more meaningful than simple metrics such as membership numbers: for example, we need to know whether the codes raise awareness and education around data practices and, perhaps most importantly, whether they encourage changes in attitudes and behaviours around the access to and use of ag-data.

    Seq2seq is All You Need for Coreference Resolution

    Existing work on coreference resolution suggests that task-specific models are necessary to achieve state-of-the-art performance. In this work, we present compelling evidence that such models are not necessary. We finetune a pretrained seq2seq transformer to map an input document to a tagged sequence encoding the coreference annotation. Despite its extreme simplicity, our model outperforms or closely matches the best coreference systems in the literature on an array of datasets. We also propose an especially simple seq2seq approach that generates only tagged spans rather than the spans interleaved with the original text. Our analysis shows that model size, the amount of supervision, and the choice of sequence representation are key factors in performance. Comment: EMNLP 202
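    To make the "tagged sequence" idea concrete, here is a minimal sketch of how coreference clusters could be linearized into a tagged output string, the kind of target a seq2seq model might be trained to emit. The tag format (`<m id=k> ... </m>`) and the function name are illustrative assumptions, not the paper's exact encoding scheme.

    ```python
    # Hypothetical linearization of coreference clusters into a tagged
    # sequence. The <m id=k> ... </m> tag scheme is an assumption for
    # illustration, not the scheme used in the paper.

    def linearize(tokens, clusters):
        """Wrap each mention span in tags carrying its cluster id.

        tokens:   list of word strings
        clusters: list of clusters, each a list of (start, end) inclusive
                  token spans referring to the same entity
        """
        opens, closes = {}, {}
        for cid, spans in enumerate(clusters):
            for start, end in spans:
                opens.setdefault(start, []).append(cid)
                closes.setdefault(end, []).append(cid)

        out = []
        for i, tok in enumerate(tokens):
            for cid in opens.get(i, []):
                out.append(f"<m id={cid}>")
            out.append(tok)
            for cid in closes.get(i, []):
                out.append("</m>")
        return " ".join(out)

    tokens = "John said he was tired".split()
    clusters = [[(0, 0), (2, 2)]]  # "John" and "he" corefer
    print(linearize(tokens, clusters))
    # -> <m id=0> John </m> said <m id=0> he </m> was tired
    ```

    A model trained on such pairs learns to reproduce the document with mention tags inserted; the abstract's second variant would instead emit only the tagged spans, dropping the interleaved original text.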