
    A Holistic Framework for Complex Big Data Governance

    Big data assets are large datasets that organisations can leverage if the capabilities exist, but they also bring considerable challenges. Despite the benefits that can be realised, the lack of proper big data governance is a major barrier, making the processing and control of this data exceptionally difficult to execute correctly. More specifically, organisations reportedly struggle to incorporate big data governance into their existing structures and business models to derive value from big data initiatives. Big data governance is an emerging research domain, gaining attention from both Information Systems scholars and the practitioner community. Nonetheless, there appears to have been limited scientific research in the area, and most existing data governance approaches are limited because they do not address end-to-end aspects of how big data could be governed. Furthermore, no suitable framework for handling data governance against big data complexities was found to be available. The main contribution of the work presented in this thesis is to address this requirement by advancing research in this field and designing a novel holistic big data governance framework capable of supporting global organisations to effectively manage big data as an asset, thereby obtaining value from their big data initiatives. An extensive systematic literature review was conducted to uncover the published content that reflects the current state of knowledge in big data governance. To facilitate the creation of the proposed framework, a grounded theory methodology was used to analyse openly available parliamentary inquiry data sources, with a particular focus on identifying the core criteria for big data governance. The resulting framework provides new knowledge by identifying several big data governance building blocks, classified as strategic goals, execution stages, enablers and 22 core big data governance components, to ensure an effective big data governance programme. Moreover, the thesis findings indicate that big data complexities extend to the ethical side of big data governance, and this is taken into consideration in the framework design: an ‘ethics by design’ component is proposed to show how this can be addressed in a structured way.

    Adaptive architecture: Regulating human building interaction

    In this paper we explore the regulatory, technical and interactional implications of Adaptive Architecture, a novel trend emerging in the built environment. We provide a comprehensive description of the emergence and history of the term, with reference to the current state of the art and the policy foundations supporting it, e.g. smart city initiatives and building regulations. As Adaptive Architecture is underpinned by the Internet of Things (IoT), we are interested in how the regulatory and surveillance issues posed by the IoT manifest in buildings as well. To support our analysis, we utilise a prominent concept from architecture, Stewart Brand’s Shearing Layers model, which describes the different physical layers of a building and how they relate to temporal change. To ground our analysis, we use three cases of Adaptive Architecture: an IoT device (the Nest Smart Cam IQ); an Adaptive Architecture research prototype (ExoBuilding); and a commercial deployment (the Edge). In bringing together Shearing Layers, Adaptive Architecture and the challenges therein, we frame our analysis under five key themes, guided by emerging information privacy and security regulations. We explore the issues Adaptive Architecture needs to face for: A – ‘Physical and information security’; B – ‘Establishing responsibility’; C – ‘Occupant rights over flows, collection, use and control of personal data’; D – ‘Visibility of emotions and bodies’; and E – ‘Surveillance of everyday routine activities’. We conclude by summarising key challenges for Adaptive Architecture, regulation and the future of human building interaction.

    The European legal approach to Open Science and research data

    This dissertation proposes an analysis of the governance of European scientific research, focusing on the emergence of the Open Science paradigm: a new way of doing science, oriented towards the openness of every phase of the scientific research process and able to take full advantage of digital ICTs. The emergence of this paradigm is relatively recent, but in recent years it has become increasingly relevant. The European institutions have expressed a clear intention to embrace the Open Science paradigm (e.g., the European Open Science Cloud, EOSC, or the establishment of the Horizon Europe programme). This dissertation provides a conceptual framework for the multiple interventions of the European institutions in the field of Open Science, addressing the major legal challenges of its implementation. The study investigates the notion of Open Science, proposing a definition that takes into account all its dimensions related to the human and fundamental rights framework in which Open Science is grounded. The inquiry addresses the legal challenges related to the openness of research data, in light of the European Open Data framework and the impact of the GDPR on the context of Open Science. The last part of the study is devoted to the infrastructural dimension of the Open Science paradigm, exploring e-infrastructures. The focus is on a specific type of computational infrastructure: the High Performance Computing (HPC) facility. The adoption of HPC for research is analysed from the European perspective, investigating the EuroHPC project, and from the local perspective, through a case study of the HPC facility of the University of Luxembourg (ULHPC). This dissertation underlines the relevance of legal coordination between all actors and phases of the process in order to develop and implement the Open Science paradigm in adherence to the underlying human and fundamental rights.

    Design of a FAIR digital data health infrastructure in Africa for COVID-19 reporting and research

    The limited volume of COVID-19 data from Africa raises concerns for global genome research, which requires a diversity of genotypes for accurate disease prediction, including on the provenance of new SARS-CoV-2 mutations. The Virus Outbreak Data Network (VODAN)-Africa studied the possibility of increasing the production of clinical data, finding concerns about data ownership and the limited use of health data for quality treatment at the point of care. To address this, VODAN-Africa developed an architecture to record clinical health data and research data collected on the incidence of COVID-19, producing these as human- and machine-readable data objects in a distributed architecture of locally governed, linked data. This architecture supports analytics at the point of care and, through data visiting across facilities, generic analytics. An algorithm was run across FAIR Data Points to visit the distributed data and produce aggregate findings. The FAIR data architecture is deployed in Uganda, Ethiopia, Liberia, Nigeria, Kenya, Somalia, Tanzania, Zimbabwe, and Tunisia.
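
    The abstract does not publish the visiting algorithm itself, so the following is only a minimal illustrative sketch of the "data visiting" pattern it describes: an aggregate query is sent to each locally governed FAIR Data Point and only summary counts, not patient-level records, leave the facility. The endpoint path, response schema, and facility URLs are assumptions for illustration, not the VODAN-Africa API.

    # Illustrative sketch of data visiting across FAIR Data Points.
    # The "/aggregate" endpoint and the {"positive", "tested"} response
    # schema are hypothetical, not part of the FAIR Data Point spec.
    import requests

    # Hypothetical FAIR Data Point deployments at participating facilities.
    FAIR_DATA_POINTS = [
        "https://fdp.facility-a.example.org",
        "https://fdp.facility-b.example.org",
    ]

    def visit(fdp_base_url: str, query: dict) -> dict:
        """Run one aggregate query at a single, locally governed FAIR Data Point."""
        resp = requests.post(f"{fdp_base_url}/aggregate", json=query, timeout=30)
        resp.raise_for_status()
        return resp.json()  # e.g. {"positive": 12, "tested": 240}

    def aggregate_across_facilities(query: dict) -> dict:
        """Combine per-facility summaries into one aggregate finding."""
        totals = {"positive": 0, "tested": 0}
        for fdp in FAIR_DATA_POINTS:
            summary = visit(fdp, query)
            totals["positive"] += summary.get("positive", 0)
            totals["tested"] += summary.get("tested", 0)
        return totals

    if __name__ == "__main__":
        print(aggregate_across_facilities({"observation": "covid19_pcr_result"}))

    The design point the sketch tries to convey is that the data stay under local governance at each facility; only the query travels to the data and only aggregates travel back.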

    Provenance documentation to enable explainable and trustworthy AI: A literature review

    Recently, artificial intelligence (AI) and machine learning (ML) models have demonstrated remarkable progress, with applications developed in various domains. It is also increasingly discussed that AI and ML models and applications should be transparent, explainable, and trustworthy. Accordingly, the field of Explainable AI (XAI) is expanding rapidly. XAI holds substantial promise for improving trust and transparency in AI-based systems by explaining how complex models such as deep neural networks (DNNs) produce their outcomes. Moreover, many researchers and practitioners consider that using provenance to explain these complex models will help improve transparency in AI-based systems. In this paper, we conduct a systematic literature review of provenance, XAI, and trustworthy AI (TAI) to explain the fundamental concepts and illustrate the potential of using provenance as a medium to help accomplish explainability in AI-based systems. We also discuss the patterns of recent developments in this area and offer a vision for research in the near future. We hope this literature review will serve as a starting point for scholars and practitioners interested in learning about essential components of provenance, XAI, and TAI.
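
    To make the idea of provenance as an explanation medium concrete, here is a minimal sketch, not drawn from the reviewed papers, of how a single ML prediction could be documented with the W3C PROV data model using the Python "prov" package. All identifiers are hypothetical.

    # Minimal PROV sketch: the trained model is a software agent, the input
    # features and the prediction are entities, and the inference run is an
    # activity linking them. Identifiers below are invented for illustration.
    from prov.model import ProvDocument

    doc = ProvDocument()
    doc.add_namespace("ex", "http://example.org/")

    model = doc.agent("ex:risk-model-v3", {"prov:type": "prov:SoftwareAgent"})
    features = doc.entity("ex:applicant-42-features")
    prediction = doc.entity("ex:applicant-42-score")
    inference = doc.activity("ex:inference-run-0001")

    doc.used(inference, features)            # the run consumed the input features
    doc.wasAssociatedWith(inference, model)  # the model carried out the run
    doc.wasGeneratedBy(prediction, inference)  # the score was produced by that run
    doc.wasDerivedFrom(prediction, features)   # and is traceable back to its inputs

    # PROV-N rendering of the provenance graph, usable as an explanation artefact.
    print(doc.get_provn())

    Such a record does not explain the model's internal reasoning, but it documents which model, inputs, and run produced a given outcome, which is the transparency role the review attributes to provenance.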

    Data Spaces

    This open access book aims to educate data space designers to understand what is required to create a successful data space. It explores cutting-edge theory, technologies, methodologies, and best practices for data spaces for both industrial and personal data, and provides the reader with a basis for understanding the design, deployment, and future directions of data spaces. The book captures the early lessons and experience of creating data spaces and arranges these contributions into three parts covering design, deployment, and future directions respectively. The first part explores the design space of data spaces; its chapters detail organisational design for data spaces, data platforms, data governance, federated learning, personal data sharing, data marketplaces, and hybrid artificial intelligence for data spaces. The second part describes the use of data spaces within real-world deployments; its chapters are co-authored with industry experts and include case studies of data spaces in sectors including Industry 4.0, food safety, FinTech, health care, and energy. The third and final part details future directions for data spaces, including challenges and opportunities for common European data spaces and privacy-preserving techniques for trustworthy data sharing. The book is of interest to two primary audiences: first, researchers interested in data management and data sharing, and second, practitioners and industry experts engaged in data-driven systems where the sharing and exchange of data within an ecosystem are critical.

    Patient generated health data and electronic health record integration, governance and socio-technical issues: A narrative review

    Patients’ health records have the potential to include patient generated health data (PGHD), which can aid in the provision of personalized care. Access to these data can allow healthcare professionals to receive additional information that will assist in decision-making and the provision of additional support. Given the diverse sources of PGHD, this review aims to provide evidence on PGHD integration with electronic health records (EHR), models and standards for PGHD exchange with EHR, and PGHD-EHR policy design and development. The review also addresses governance and socio-technical considerations in PGHD management. Databases used for the review include PubMed, Scopus, ScienceDirect, IEEE Xplore, SpringerLink and ACM Digital Library. The review reveals the significance, but current deficiency, of provenance, trust and contextual information as part of PGHD integration with EHR. We also find limited work on data quality, and on new data sources and associated data elements, within the design of existing standards developed for PGHD integration. New data sources from emerging technologies such as mixed reality, virtual reality, interactive voice response systems, and social media are rarely considered. The review recommends well-developed designs and policies for PGHD-EHR integration that promote data quality, patient autonomy, privacy, and enhanced trust.
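
    As an illustration of the kind of standards-based exchange the review surveys, the sketch below shows one way a patient-generated reading (a home blood-pressure measurement) could be expressed as an HL7 FHIR R4 Observation, with the device reference carrying part of the contextual information the review finds is often missing. The review does not prescribe this particular standard or resource; the identifiers and endpoint-free example here are hypothetical.

    # Illustrative FHIR R4 Observation for patient-generated data, built as a
    # plain Python dict. Patient and device references are hypothetical.
    import json

    observation = {
        "resourceType": "Observation",
        "status": "final",
        "category": [{"coding": [{
            "system": "http://terminology.hl7.org/CodeSystem/observation-category",
            "code": "vital-signs"}]}],
        "code": {"coding": [{
            "system": "http://loinc.org",
            "code": "85354-9",
            "display": "Blood pressure panel with all children optional"}]},
        "subject": {"reference": "Patient/example-patient"},
        "effectiveDateTime": "2024-01-15T08:30:00Z",
        # Capturing the source device is one way to convey the provenance and
        # context that the review identifies as currently deficient in PGHD.
        "device": {"reference": "Device/home-bp-cuff-example"},
        "component": [
            {"code": {"coding": [{"system": "http://loinc.org", "code": "8480-6",
                                  "display": "Systolic blood pressure"}]},
             "valueQuantity": {"value": 128, "unit": "mmHg",
                               "system": "http://unitsofmeasure.org",
                               "code": "mm[Hg]"}},
            {"code": {"coding": [{"system": "http://loinc.org", "code": "8462-4",
                                  "display": "Diastolic blood pressure"}]},
             "valueQuantity": {"value": 82, "unit": "mmHg",
                               "system": "http://unitsofmeasure.org",
                               "code": "mm[Hg]"}},
        ],
    }

    print(json.dumps(observation, indent=2))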