
    A Survey on Forensics and Compliance Auditing for Critical Infrastructure Protection

    The broadening dependency of modern societies on essential services provided by Critical Infrastructures is increasing the relevance of their trustworthiness. However, Critical Infrastructures are attractive targets for cyberattacks, due to the potential for considerable impact, not just at the economic level but also in terms of physical damage and even loss of human life. Complementing traditional security mechanisms, forensics and compliance auditing play an important role in ensuring Critical Infrastructure trustworthiness. Compliance auditing checks whether security measures are in place and compliant with standards and internal policies, while forensics assists the investigation of past security incidents. Since these two areas overlap significantly in terms of data sources, tools and techniques, they can be merged into unified Forensics and Compliance Auditing (FCA) frameworks. In this paper, we survey the latest developments, methodologies, challenges, and solutions addressing forensics and compliance auditing in the scope of Critical Infrastructure Protection. The survey focuses on contributions capable of tackling the requirements imposed by massively distributed and complex Industrial Automation and Control Systems: handling large volumes of heterogeneous data (which can be noisy, ambiguous, and redundant) for analytic purposes, with adequate performance and reliability. The results produced a taxonomy of the FCA field whose key categories reflect the relevant topics in the literature. The collected knowledge also led to a reference FCA architecture, proposed as a generic template for a converged platform. These results are intended to guide future research on forensics and compliance auditing for Critical Infrastructure Protection.

    Recalibrating machine learning for social biases: demonstrating a new methodology through a case study classifying gender biases in archival documentation

    This thesis proposes a recalibration of Machine Learning for social biases to minimize harms from existing approaches and practices in the field. Prioritizing quality over quantity, accuracy over efficiency, representativeness over convenience, and situated thinking over universal thinking, the thesis demonstrates an alternative approach to creating Machine Learning models. Drawing on GLAM, the Humanities, the Social Sciences, and Design, the thesis focuses on understanding and communicating biases in a specific use case. A total of 11,888 metadata descriptions from the University of Edinburgh Heritage Collections' Archives catalog were manually annotated for gender biases, and text classification models were then trained on the resulting dataset of 55,260 annotations. Evaluations of the models' performance demonstrate that annotating gender biases can be automated; however, the subjectivity of bias as a concept complicates the generalizability of any one approach. The contributions are: (1) an interdisciplinary and participatory Bias-Aware Methodology, (2) a Taxonomy of Gendered and Gender Biased Language, (3) data annotated for gender biased language, (4) gender biased text classification models, and (5) a human-centered approach to model evaluation. The contributions have implications for Machine Learning, demonstrating how bias is inherent to all data and models; more specifically for Natural Language Processing, providing an annotation taxonomy, annotated datasets and classification models for analyzing gender biased language at scale; for the Gallery, Library, Archives, and Museum sector, offering guidance to institutions seeking to reconcile with histories of marginalizing communities through their documentation practices; and for historians, who utilize cultural heritage documentation to study and interpret the past. Through a real-world application of the Bias-Aware Methodology in a case study, the thesis illustrates the need to shift away from removing social biases and towards acknowledging them, creating data and models that surface the uncertainty and multiplicity characteristic of human societies.
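    As a concrete illustration of the annotate-then-classify workflow described above, the sketch below trains a text classifier on labelled catalogue descriptions. It is a minimal sketch only: it assumes scikit-learn, and the example descriptions, label names and model choice are hypothetical placeholders rather than the thesis's actual taxonomy or models.

        # Minimal sketch: training a classifier on annotated metadata
        # descriptions. scikit-learn is assumed; the data and the labels
        # below are hypothetical placeholders.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import classification_report

        # Hypothetical annotated examples: (description, label) pairs.
        annotated = [
            ("Papers of Mrs. John Smith, wife of the noted surgeon", "biased"),
            ("Diary of a lady traveller, author unknown", "biased"),
            ("Photographs of the founder's daughters at the opening", "biased"),
            ("Letters from Dr. E. Brown to her students", "not-biased"),
            ("Minutes of the estate committee, 1854-1871", "not-biased"),
            ("Ledger of accounts for the northern estate", "not-biased"),
        ]
        texts, labels = zip(*annotated)

        X_train, X_test, y_train, y_test = train_test_split(
            texts, labels, test_size=0.33, random_state=0, stratify=labels)

        vectorizer = TfidfVectorizer(ngram_range=(1, 2))
        model = LogisticRegression(max_iter=1000)
        model.fit(vectorizer.fit_transform(X_train), y_train)

        # Scores alone are not the point: the thesis pairs metrics like
        # these with human-centered review of what the model gets wrong.
        print(classification_report(
            y_test, model.predict(vectorizer.transform(X_test))))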

    Configuration Management of Distributed Systems over Unreliable and Hostile Networks

    Economic incentives of large criminal profits and the threat of legal consequences have pushed criminals to continuously improve their malware, especially command and control channels. This thesis applied concepts from successful malware command and control to explore the survivability and resilience of benign configuration management systems. This work expands on existing stage models of the malware life cycle to contribute a new model for identifying malware concepts applicable to benign configuration management. The Hidden Master architecture is a contribution to master-agent network communication. In the Hidden Master architecture, communication between master and agent is asynchronous and can operate through intermediate nodes. This protects the master secret key, which gives full control of all computers participating in configuration management. Multiple improvements to idempotent configuration were proposed, including the definition of the minimal base resource dependency model, simplified resource revalidation, and the use of an imperative general-purpose language for defining idempotent configuration. Following the constructive research approach, the improvements to configuration management were designed into two prototypes. This allowed validation in laboratory testing, in two case studies, and in expert interviews. In laboratory testing, the Hidden Master prototype was more resilient than leading configuration management tools under high load and low memory conditions, and against packet loss and corruption. Only the research prototype was adaptable to a network without a stable topology, due to the asynchronous nature of the Hidden Master architecture. The main case study used the research prototype in a complex environment to deploy a multi-room, authenticated audiovisual system for a client of the organization deploying the configuration. The case studies indicated that an imperative general-purpose language can be used for idempotent configuration in real life, for defining new configurations in unexpected situations using the base resources and abstracting those using standard language features, and that such a system seems easy to learn. Potential business benefits were identified and evaluated using individual semi-structured expert interviews. Respondents agreed that the models and the Hidden Master architecture could reduce costs and risks, improve developer productivity, and allow faster time-to-market. Protection of master secret keys and the reduced need for incident response were seen as key drivers for improved security. Low-cost geographic scaling and leveraging the file-serving capabilities of commodity servers were seen to improve scaling and resiliency. Respondents identified jurisdictional legal limitations on encryption and requirements for cloud operator auditing as factors potentially limiting the full use of some concepts.
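    To make the idea of idempotent configuration in an imperative general-purpose language concrete, here is a minimal sketch in Python. It is not the thesis's prototype; the resource name and its semantics are illustrative assumptions only.

        # Minimal sketch of an idempotent configuration resource written in
        # an imperative general-purpose language. Illustrative only, not the
        # thesis's prototype; all names are hypothetical.
        def file_content(path: str, content: str) -> bool:
            """Ensure `path` exists with exactly `content`.

            Returns True if a change was made, False if the system already
            matched the desired state, so re-applying it is a no-op.
            """
            try:
                with open(path, "r", encoding="utf-8") as f:
                    if f.read() == content:
                        return False  # Desired state already holds.
            except FileNotFoundError:
                pass  # File missing: fall through and create it.
            with open(path, "w", encoding="utf-8") as f:
                f.write(content)
            return True

        # Idempotence is what lets an agent safely re-apply configuration
        # after packet loss or an interrupted earlier run:
        print(file_content("/tmp/motd", "welcome\n"))  # True on first run
        print(file_content("/tmp/motd", "welcome\n"))  # False afterwards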

    Displacement and the Humanities: Manifestos from the Ancient to the Present

    This is the final version, available on open access from MDPI via the DOI in this record. This is a reprint of articles from the Special Issue published online in the open access journal Humanities (ISSN 2076-0787) (available at: https://www.mdpi.com/journal/humanities/special_issues/Manifestos Ancient Present). This volume brings together the work of practitioners, communities, artists and other researchers from multiple disciplines. Seeking to provoke a discourse around displacement within and beyond the field of Humanities, it positions historical cases and debates, some reaching into the ancient past, within diverse geo-chronological contexts and current world urgencies. In adopting an innovative dialogic structure between practitioners on the ground, from architects and urban planners to artists, and academics working across subject areas, the volume is a proposition to: remap priorities for current research agendas; open up disciplines, critically analysing their approaches; address the socio-political responsibilities that we have as scholars and practitioners; and provide an alternative site of discourse for contemporary concerns about displacement. Ultimately, this volume aims to provoke future work and collaborations (hence, manifestos) not only in the historical and literary fields, but in wider research concerned with human mobility and the challenges confronting people who are out of place, lacking rights, protection and belonging.

    Multidisciplinary perspectives on Artificial Intelligence and the law

    This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence (‘AI’) and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics – and although AI was initially allowed to largely develop without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.

    Complete and easy type inference for first-class polymorphism

    The Hindley-Milner (HM) typing discipline is remarkable in that it allows statically typing programs without requiring the programmer to annotate programs with types themselves. This is due to the HM system offering complete type inference, meaning that if a program is well typed, the inference algorithm is able to determine all the necessary typing information. Let bindings implicitly perform generalisation, allowing a let-bound variable to receive the most general possible type, which in turn may be instantiated appropriately at each of the variable’s use sites. As a result, the HM type system has since become the foundation for type inference in programming languages such as Haskell as well as the ML family of languages, and has been extended in a multitude of ways. The original HM system only supports prenex polymorphism, where type variables are universally quantified only at the outermost level. This precludes many useful programs, such as passing a data structure to a function in the form of a fold function, which would need to be polymorphic in the type of the accumulator; this, however, would require a nested quantifier in the type of the overall function. As a result, one direction of extending the HM system is to add support for first-class polymorphism, allowing arbitrarily nested quantifiers and instantiating type variables with polymorphic types. In such systems, restrictions are necessary to retain decidability of type inference. This work presents FreezeML, a novel approach for integrating first-class polymorphism into the HM system, focused on simplicity. It eschews sophisticated yet hard-to-grasp heuristics in the type system and does not extend the language of types, while still requiring only modest amounts of annotation. In particular, FreezeML leverages the mechanisms for generalisation and instantiation that are already at the heart of ML. Generalisation and instantiation are performed by let bindings and variables, respectively, but extended to types beyond prenex polymorphism. The defining feature of FreezeML is the ability to freeze variables, which prevents the usual instantiation of their types, allowing them instead to keep their original, fully polymorphic types. We demonstrate that FreezeML is as expressive as System F by providing translations in both directions between the two systems. Further, we prove that FreezeML is a conservative extension of ML: when considering only ML programs, FreezeML accepts exactly the same programs as ML itself.

    We show that type inference for FreezeML can easily be integrated into HM-like type systems by presenting a sound and complete inference algorithm for FreezeML that extends Algorithm W, the original inference algorithm for the HM system. Since the inception of Algorithm W in the 1970s, type inference for the HM system and its descendants has been modernised by approaches that involve constraint solving, which proved to be more modular and extensible. In such systems, a term is translated to a logical constraint whose solutions correspond to the types of the original term; a solver for such constraints may then be defined independently. To this end, we demonstrate such a constraint-based inference approach for FreezeML. We also discuss the effects of integrating the value restriction into FreezeML and provide detailed comparisons with other approaches to first-class polymorphism in ML, alongside a collection of examples found in the literature.
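    To make the fold example above concrete, the two kinds of types can be written out in standard notation (a sketch; the names fold and useFold are illustrative, not taken from the thesis):

        % Prenex (HM) polymorphism: all quantifiers at the outermost level.
        fold : \forall a\, b.\; (a \to b \to b) \to b \to \mathit{List}\,a \to b

        % First-class polymorphism: taking fold itself as an argument places
        % a quantifier inside the function type, beyond what prenex HM can
        % express.
        useFold : (\forall a\, b.\; (a \to b \to b) \to b \to \mathit{List}\,a \to b) \to \mathit{Result}

    In FreezeML, freezing the variable fold at its use site suppresses the usual instantiation of its type, so it can be passed to useFold at its fully polymorphic type.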

    “Giving better and giving more”? Examining the roles of philanthropy advisors in elite philanthropy

    Over the past 15 years, philanthropy advisors (PhAds) have grown in prominence within financial institutions, family offices and independent consultancies (Beeston and Breeze 2023; Harrington 2016; Ostrander 2007; Sklair and Glucksberg 2021). More recently, the outbreak of the Covid-19 pandemic has heightened awareness of the significance of philanthropic advisory services within elite philanthropy. Yet the roles and contributions of PhAds remain under-researched, notwithstanding the long-standing interest in unpacking the “black box” of elite philanthropy (Odendahl 1990; Ostrander 2007; Ostrower 1995), particularly in relation to donor-centred philanthropy (Ostrander and Schervish 1990; Ostrander 2007) and elite power, and increased scrutiny of the role of philanthropic intermediaries (Kumar and Brooks 2021) and professional advisors (Harrington 2016) in elite philanthropy. The thesis explores the roles of PhAds by asking three questions: What are the roles of PhAds in elite philanthropy? How do PhAds shape narratives of legitimacy within elite philanthropic practices? And what can analysing the roles of PhAds, in the context of pandemic responses, add to existing understandings of elite philanthropy? To address these questions, the thesis took the form of a multi-method qualitative study, based on 34 interviews with philanthropy practitioners, participant and non-participant observation, and document analysis. The study drew on industry grey literature, comprising online materials that include websites, training materials, handbooks, reports and webinars produced by philanthropy advisors. Data was collected between 2019 and 2021. By incorporating insights from critical elite studies and philanthropy research, and by examining how philanthropy advisors enable donor control and elite agency, this thesis advances understanding of the meaning-making processes of philanthropy advisors, integrating concepts from research on elite identities (Khan 2012; Sherman 2017; Maclean et al. 2015) and on the broader role of philanthropy in legitimising elites and wealth accumulation (Kantola and Kuusela 2019; Harrington 2016; McGoey and Thiel 2018; Sklair and Glucksberg 2021). The research finds that PhAds form part of an emergent industry, acting as brokers, intermediaries and boundary spanners within elite philanthropy. It examines the legitimising accounts (Creed et al. 2002) used by PhAds to understand how they relate to and shape systems of meaning for the role of philanthropists, philanthropy and themselves (philanthropy advice services). The findings emphasise the central role of social impact claims, with philanthropy advice understood as a way of increasing the social impact of philanthropy and with PhAds characterising their roles as enabling clients to “give better and give more”. The thesis discusses PhAds’ understanding of their roles in the identity formation of their clients through a “philanthropic learning journey”, an affective and experiential process that aims at the self-realisation of the philanthropist. This contributes to studies on identity and meaning-making in elite philanthropy, highlighting the roles of advisors in the formation of positive wealth identities (Harrington 2016; Maclean et al. 2015; Sklair and Glucksberg 2021). The thesis also explores the ways in which PhAds and philanthropy advice services legitimised the role of elite philanthropy in philanthropic responses to the Covid-19 pandemic.
In summary, the project offers two key contributions in building on existing studies of philanthropy advice services and practitioners. Firstly, it provides rich qualitative evidence on under-researched philanthropy advisors and demonstrates their roles as professional enablers of elite philanthropy; secondly, it expands debates on the legitimising practices of elite philanthropy (McGoey and Thiel 2018; McGoey 2021; Sklair and Glucksberg 2021) by evidencing how donor-centred practices are justified by PhAds as a means to an end.

    Trusted Provenance with Blockchain - A Blockchain-based Provenance Tracking System for Virtual Aircraft Component Manufacturing

    The importance of provenance in the digital age has led to significant interest in using blockchain technology for tamper-proof storage of provenance data. This thesis proposes a blockchain-based provenance tracking system for the certification of aircraft components. The aim is to design and implement a system that can ensure the trustworthy, tamper-resistant storage of provenance documents originating from an aircraft manufacturing process. To achieve this, the thesis presents a systematic literature review, which provides a comprehensive overview of existing work in the field of provenance and blockchain technology. After identifying strategies for storing provenance data on a blockchain, a system was designed to meet the requirements of stakeholders in the aviation industry, which were gathered systematically through stakeholder interviews. The system was implemented using a combination of smart contracts and a graphical user interface to provide tamper-resistant, traceable storage of the relevant data on a transparent blockchain. An evaluation against the requirements identified during the requirements engineering process found that the proposed system meets all of them. Overall, this thesis offers insight into a potential application of blockchain technology in the aviation industry and provides a valuable resource for researchers and industry professionals seeking to leverage blockchain technology for provenance tracking and certification purposes.
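    As a toy illustration of the tamper-evidence property that a blockchain provides here, the following Python sketch chains provenance records together by hash. It is a stand-in for intuition only, not the smart-contract system the thesis implements, and the record fields and names are hypothetical.

        # Minimal sketch of tamper-evident provenance storage as a hash
        # chain. Illustrative only; not the thesis's smart-contract system.
        import hashlib
        import json

        def record_hash(record: dict, prev_hash: str) -> str:
            payload = json.dumps(record, sort_keys=True) + prev_hash
            return hashlib.sha256(payload.encode("utf-8")).hexdigest()

        class ProvenanceChain:
            def __init__(self):
                self.entries = []  # list of (record, hash) pairs

            def append(self, record: dict) -> str:
                prev = self.entries[-1][1] if self.entries else "0" * 64
                h = record_hash(record, prev)
                self.entries.append((record, h))
                return h

            def verify(self) -> bool:
                """Recompute every link; any edited record breaks the chain."""
                prev = "0" * 64
                for record, h in self.entries:
                    if record_hash(record, prev) != h:
                        return False
                    prev = h
                return True

        chain = ProvenanceChain()
        chain.append({"component": "VC-001", "step": "forged", "by": "SupplierA"})
        chain.append({"component": "VC-001", "step": "certified", "by": "Inspector7"})
        assert chain.verify()
        chain.entries[0][0]["by"] = "Mallory"   # tampering with history...
        assert not chain.verify()               # ...is detected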

    Making sense of Solid for data governance and GDPR

    Solid is a radical new paradigm that decentralises control of data from central organisations to individuals, seeking to empower individuals with active control over who uses their data and how. To realise this vision, Solid's use-cases and implementations must also be consistent with the relevant privacy and data protection regulations, such as the GDPR. Doing so first requires an understanding of all actors, roles, and processes involved in a use-case, which then need to be aligned with the GDPR's concepts to identify the relevant obligations, whose compliance can then be investigated. To assist with this process, we describe Solid as a variation of 'cloud technology' and adapt the existing standardised terminologies and paradigms from ISO/IEC standards. We then investigate the applicability of the GDPR's requirements to Solid-based implementations, along with an exploration of how existing issues arising from GDPR enforcement also apply to Solid. Finally, we outline a path forward through specific extensions to Solid's specifications that mitigate known issues and enable the realisation of its benefits.
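    The alignment exercise the paper describes, enumerating the actors in a Solid use-case and tagging each with a candidate GDPR concept, can be pictured with a small sketch. The role assignments below are illustrative placeholders, not the paper's conclusions; its point is that the correct role depends on the concrete use-case.

        # Purely illustrative sketch of aligning Solid actors with GDPR
        # concepts. All assignments are hypothetical placeholders.
        from dataclasses import dataclass

        @dataclass
        class Actor:
            name: str        # actor in the Solid ecosystem
            gdpr_role: str   # candidate GDPR concept to align with
            note: str        # why the alignment needs case-by-case review

        use_case = [
            Actor("pod owner (individual)", "data subject",
                  "also exercises control over access to the pod"),
            Actor("pod provider", "controller and/or processor",
                  "depends on how far it determines purposes and means"),
            Actor("application developer", "controller and/or processor",
                  "depends on what the app does with data read from the pod"),
        ]

        for a in use_case:
            print(f"{a.name}: {a.gdpr_role} ({a.note})")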