    Multidisciplinary perspectives on Artificial Intelligence and the law

    This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence (‘AI’) and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics, and although AI was initially allowed to largely develop without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.

    Measuring the Impact of China’s Digital Heritage: Developing Multidimensional Impact Indicators for Digital Museum Resources

    This research investigates how best to assess the impact of China’s digital heritage, focusing on digital museum resources. It is motivated by the need for tools to help governing bodies and heritage organisations assess the impact of digital heritage resources. The research sits at the intersection of Chinese cultural heritage, digital heritage, and impact assessment (IA) studies, which together form the theoretical framework of the thesis. Informed by the Balanced Value Impact (BVI) Model, this thesis addresses the following questions: 1. How do Western heritage discourses and Chinese culture shape ‘cultural heritage’ and the museum digital ecosystem in modern China? 2. Which indicators demonstrate the multidimensional impacts of digital museum resources in China, and how should the BVI Model be adapted to fit the Chinese cultural landscape? 3. How do different stakeholders perceive these impact indicators, and what are the implications for impact indicator development and application? The research applies a mixed-methods approach, combining desk research, surveys, and interviews with both public audiences and museum professionals. The findings identify 18 impact indicators, covering economic, social, innovation, and operational dimensions. Notably, the perceived usefulness and importance of the indicators vary within and between public participants and museum professionals. The study finds the BVI Model helpful in guiding the indicator development process, particularly in laying a solid foundation to inform decision-making. The Strategic Perspectives and Value Lenses provide a structure to organise the various indicators and keep them focused on the impact objectives. However, the findings also suggest that the Value Lenses are merely signifiers; their signified meanings change with cultural contexts and should be examined whenever the Model is applied in a different cultural setting. This research addresses the absence of digital resource IA in China’s heritage sector. It contributes to the field of IA for digital heritage within and beyond the Chinese context by challenging the current target-setting culture in performance evaluation. Moreover, it confirms the utility of the BVI Model while modifying it to fit China’s unique cultural setting. As a whole, the thesis demonstrates the value of using multidimensional impact indicators for evidence-based decision-making and better museum practices in the digital domain.

    Protecting Privacy in Indian Schools: Regulating AI-based Technologies' Design, Development and Deployment

    Education is one of the priority areas for the Indian government, where Artificial Intelligence (AI) technologies are touted to bring digital transformation. Several Indian states have started deploying facial recognition-enabled CCTV cameras, emotion recognition technologies, fingerprint scanners, and radio frequency identification (RFID) tags in their schools, not only to provide personalised recommendations, ensure student security, and predict student drop-out rates, but also to compile 360-degree information on each student. Further, integrating Aadhaar (a digital identity card based on biometric data) across AI technologies and learning management systems (LMS) renders schools a ‘panopticon’. Certain technologies or systems, such as Aadhaar, CCTV cameras, GPS systems, RFID tags, and learning management systems, are used primarily for continuous data collection, storage, and retention. Though they cannot be termed AI technologies per se, they are fundamental to designing and developing AI systems such as facial, fingerprint, and emotion recognition technologies. The large volumes of student data collected rapidly by the former are used to build the algorithms behind the latter. Once trained using machine learning (ML) techniques, these algorithms learn correlations across multiple datasets to predict each student’s identity, decisions, grades, learning growth, tendency to drop out, and other behavioural characteristics. Such autonomous and repetitive collection, processing, storage, and retention of student data, in the absence of effective data protection legislation, endangers student privacy. The algorithmic predictions of AI technologies are an avatar of the data fed into the system: an AI model is only as good as the people collecting the data, processing it into relevant and useful outputs, and regularly evaluating the inputs going into the model, and it can produce inaccurate predictions if relevant data are overlooked. However, the belief of the state, school administrations, and parents in AI technologies as a panacea for student security and educational development overlooks the context in which ‘data practices’ are conducted. The right to privacy in an AI age is inextricably connected to the data practices through which data gets ‘cooked’; data protection legislation that operates without understanding and regulating such practices will remain ineffective in safeguarding privacy. The thesis undertakes interdisciplinary research to better understand the interplay between the data practices of AI technologies and the social practices of an Indian school, an interplay that the present Indian data protection legislation overlooks, endangering students’ privacy from the design and development stages of an AI model through to deployment. The thesis recommends that the Indian legislature frame legislation better equipped for the AI/ML age, and offers the Indian judiciary guidance on evaluating the legality and reasonableness of designing, developing, and deploying such technologies in schools.

    Language integrated relational lenses

    Relational databases are ubiquitous. Such monolithic databases accumulate large amounts of data, yet applications typically only work on small portions of the data at a time. A subset of the database defined as a computation on the underlying tables is called a view. Querying views is helpful, but it is also desirable to update them and have these changes be applied to the underlying database. This view update problem has been the subject of much previous work, but support from database servers is limited and only rarely available. Lenses are a popular approach to bidirectional transformations, a generalization of the view update problem in databases to arbitrary data. However, perhaps surprisingly, lenses have seldom actually been used to implement updatable views in databases. Bohannon, Pierce and Vaughan propose an approach to updatable views called relational lenses. However, to the best of our knowledge, this proposal has not been implemented or evaluated prior to the work reported in this thesis. This thesis proposes programming language support for relational lenses. Language integrated relational lenses support expressive and efficient view updates without relying on updatable view support from the database server. By integrating relational lenses into the programming language, application development becomes easier and less error-prone, avoiding the impedance mismatch of having two programming languages. Integrating relational lenses into the language poses additional challenges. As defined by Bohannon et al., relational lenses completely recompute the database, making them inefficient as the database scales. The other challenge is that some parts of the well-formedness conditions are too general for implementation: Bohannon et al. specify predicates using possibly infinite abstract sets and define the type checking rules using relational algebra. Incremental relational lenses equip relational lenses with change-propagating semantics that map small changes to the view into (potentially) small changes to the source tables. We prove that our incremental semantics are functionally equivalent to the non-incremental semantics, and our experimental results show orders-of-magnitude improvement over the non-incremental approach. This thesis introduces a concrete predicate syntax, shows how the required checks are performed on these predicates, and shows that they satisfy the abstract predicate specifications. We discuss trade-offs between static predicates that are fully known at compile time and dynamic predicates that are only known during execution, and introduce hybrid predicates that take inspiration from both approaches. This thesis adapts the typing rules for relational lenses from sequential composition to a functional style of sub-expressions. We prove that any well-typed functional relational lens expression can derive a well-typed sequential lens. We use these additions to relational lenses as the foundation for two practical implementations: an extension of the Links functional language and a library written in Haskell. The second implementation demonstrates how type-level computation can be used to implement relational lenses without changes to the compiler. These two implementations attest to the possibility of turning relational lenses into a practical language feature.
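    To make the lens terminology above concrete, the following is a minimal, hypothetical Haskell sketch of a "select" lens over an in-memory relation: get extracts the view of rows satisfying a predicate, and put translates an updated view back into the source table. The names and types are illustrative assumptions, not the API of the Links extension or the Haskell library described in the thesis, and the sketch omits the incremental, change-propagating semantics; it only shows the get/put structure that relational lenses build on.

        -- Minimal, hypothetical sketch of a lens over an in-memory relation.
        -- Illustrative only; not the thesis's actual API.
        import qualified Data.Map.Strict as Map
        import Data.Map.Strict (Map)

        -- A relation: rows keyed by a primary key.
        type Relation k row = Map k row

        -- A lens pairs view extraction ('get') with an update translator
        -- ('put') that pushes changes on the view back to the source.
        data Lens s v = Lens { get :: s -> v, put :: s -> v -> s }

        -- A toy "select" lens: the view holds only rows satisfying the predicate.
        -- 'put' merges the updated view back, keeping the rows that were hidden.
        selectLens :: Ord k => (row -> Bool) -> Lens (Relation k row) (Relation k row)
        selectLens p = Lens
          { get = Map.filter p
          , put = \src view -> Map.union view (Map.filter (not . p) src)
          }

        -- Usage: update a salary in the "tech" view and propagate it back.
        example :: Relation Int (String, Int)
        example =
          let source = Map.fromList [(1, ("sales", 100)), (2, ("tech", 200))]
              l      = selectLens (\(dept, _) -> dept == "tech")
              view'  = Map.insert 2 ("tech", 250) (get l source)
          in put l source view'   -- row 2 is updated, row 1 is untouched

        main :: IO ()
        main = print example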

    Improving the SEP licensing framework by revising SSOs’ IPR policies

    This thesis examines the SEP licensing framework with a view to understanding whether it can be improved by revising IPR policies. ICT standardisation, which provides interoperability, is one of the building blocks of the modern economy. Put simply, without standards there would be no IoT and, for example, consumers would only be able to connect to a wireless network with devices specifically built for that network. Standards are not a new phenomenon; however, they have become more complex with the increasing importance of technology, which has in turn made them more dependent on patented technologies (i.e. SEPs). SEPs cause complications in standardisation as they require SEP owners and potential licensees to negotiate and agree on usually complex licensing agreements. Although SSOs have attempted to regulate this relationship with their IPR policies, these policies now seem unable to keep up with the changing dynamics and needs of standardisation. Dysfunctions in the system not only affect competition in the relevant markets, they also prejudice consumers’ interests, for example by passing on higher prices to cover supra-competitive royalties. In particular, since the first Rambus case in the US, competition/antitrust agencies and courts have been dealing with SEP-related issues, and the EU has recently been considering addressing some of them with legislation. Conversely, this research derives from the notion that active standardisation participants are better equipped to deal with SEP-related issues, and that flexible IPR policies are more suitable for addressing these issues in the dynamic standardisation ecosystem. Against this backdrop, this comparative research aims to identify areas where the SEP licensing framework can be improved by reforming IPR policies, and it develops proposals that SSOs can implement, using black-letter and empirical research methods.

    Sociotechnical Imaginaries, the Future and the Third Offset Strategy


    Archaeological palaeoenvironmental archives: challenges and potential

    This Arts and Humanities Research Council (AHRC) sponsored collaborative doctoral project represents one of the most significant efforts to collate quantitative and qualitative data that can elucidate practices related to archaeological palaeoenvironmental archiving in England. The research has revealed that archived palaeoenvironmental remains are valuable resources for archaeological research and can clarify subjects that include the adoption and importation of exotic species, plant and insect invasion, human health and diet, and plant and animal husbandry practices. In addition to scientific research, archived palaeoenvironmental remains can provide evidence-based narratives of human resilience and climate change and offer evidence of the scientific process, making them ideal resources for public science engagement. These areas of potential have been recognised at a critical time, given that waterlogged palaeoenvironmental remains at significant sites such as Star Carr, Must Farm, and Flag Fen, as well as archaeological deposits in towns and cities, are at risk of decay due to climate change-related factors and unsustainable agricultural practices. Innovative approaches to collecting and archiving palaeoenvironmental remains and maintaining existing archives will permit the creation of an accessible and thorough national resource that can serve archaeologists and researchers in the related fields of biology and natural history. Furthermore, a concerted effort to recognise absences in archaeological archives, matched by an effort to supply these deficiencies, can produce a resource that contributes to an enduring geographical and temporal record of England’s biodiversity, which can be used in perpetuity in the face of diminishing archaeological and contemporary natural resources. To realise these opportunities, particular challenges must be overcome. The most prominent of these are inconsistent collection policies, resulting from pressures associated with shortages in storage capacity and declining specialist knowledge in museums and repositories, combined with variable curation practices. Many of these challenges can be resolved by developing a dedicated storage facility focused on the ongoing conservation and curation of palaeoenvironmental remains. Combined with an OASIS+ module designed to handle and disseminate data pertaining to palaeoenvironmental archives, this would make the remains findable, accessible, and interoperable with biological archives and collections worldwide. Providing a national centre for curating palaeoenvironmental remains and a dedicated digital repository will require significant funding, and funding sources could be identified through collaboration with other disciplines. If sufficient funding cannot be secured, options requiring less financial investment, such as high-level archive audits and the production of guidance documents, can still assist all stakeholders with the improved curation, management, and promotion of the archived resource.

    Making sense of solid for data governance and GDPR

    Solid is a radical new paradigm that decentralises control of data from central organisations to individuals, seeking to empower individuals to take active control of who uses their data and how. To realise this vision, the use-cases and implementations of Solid must also be consistent with the relevant privacy and data protection regulations, such as the GDPR. Doing so, however, first requires an understanding of all the actors, roles, and processes involved in a use-case, which must then be aligned with the GDPR’s concepts to identify the relevant obligations and investigate compliance with them. To assist with this process, we describe Solid as a variation of ‘cloud technology’ and adapt the existing standardised terminologies and paradigms from ISO/IEC standards. We then investigate the applicability of the GDPR’s requirements to Solid-based implementations, along with an exploration of how existing issues arising from GDPR enforcement also apply to Solid. Finally, we outline the path forward through specific extensions to Solid’s specifications that mitigate known issues and enable the realisation of its benefits.