    Systematizing Decentralization and Privacy: Lessons from 15 Years of Research and Deployments

    Decentralized systems are a subset of distributed systems where multiple authorities control different components and no authority is fully trusted by all. This implies that any component in a decentralized system is potentially adversarial. We review fifteen years of research on decentralization and privacy, and provide an overview of key systems, as well as key insights for designers of future systems. We show that decentralized designs can enhance privacy, integrity, and availability, but also require careful trade-offs in terms of system complexity, properties provided, and degree of decentralization. These trade-offs need to be understood and navigated by designers. We argue that a combination of insights from cryptography, distributed systems, and mechanism design, aligned with the development of adequate incentives, is necessary to build scalable and successful privacy-preserving decentralized systems.

    Redescribing Health Privacy: The Importance of Health Policy

    Current conversations about health information policy often rest on three broad assumptions. First, many perceive a tension between regulation and innovation. We often hear that privacy regulations are keeping researchers, companies, and providers from aggregating the data they need to promote innovation. Second, aggregation of fragmented data is seen as a threat to its proper regulation, creating the risk of breaches and other misuse. Third, a prime directive for technicians and policymakers is to give patients ever more granular methods of control over data. This article questions and complicates those assumptions, which I deem (respectively) the Privacy Threat to Research, the Aggregation Threat to Privacy, and the Control Solution. This article is also intended to enrich our concepts of “fragmentation” and “integration” in health care. There is a good deal of sloganeering around “firewalls” and “vertical integration” as idealized implementations of “fragmentation” and “integration” (respectively). The problem, though, is that terms like these (as well as “disruption”) are insufficiently normative to guide large-scale health system change. They describe, but they do not adequately prescribe. By examining those instances where a) regulation promotes innovation, and b) increasing (some kinds of) availability of data actually enhances security, confidentiality, and privacy protections, this article attempts to give a richer account of the ethics of fragmentation and integration in the U.S. health care system. But it also has a darker side, highlighting the inevitable conflicts of values created in a “reputation society” driven by stigmatizing social sorting systems. Personal data control may exacerbate social inequalities. Data aggregation may increase both our powers of research and our vulnerability to breach. The health data policymaking landscape of the next decade will feature a series of intractable conflicts between these important social values.

    Hazard Contribution Modes of Machine Learning Components

    Amongst the essential steps to be taken towards developing and deploying safe systems with embedded learning-enabled components (LECs), i.e., software components that use machine learning (ML), are to analyze and understand the contribution of the constituent LECs to safety, and to assure that those contributions have been appropriately managed. This paper addresses both steps by, first, introducing the notion of hazard contribution modes (HCMs), a categorization of the ways in which the ML elements of LECs can contribute to hazardous system states; and, second, describing how argumentation patterns can capture the reasoning that can be used to assure HCM mitigation. Our framework is generic in the sense that the categories of HCMs developed i) can admit different learning schemes, i.e., supervised, unsupervised, and reinforcement learning, and ii) are not dependent on the type of system in which the LECs are embedded, i.e., both cyber and cyber-physical systems. One of the goals of this work is to serve as a starting point for systematizing LEC analysis, towards eventually automating it in a tool.

    Information Systems Security Policy Violation: Systematic Literature Review on Behavior Threats by Internal Agents

    Systematic literature review (SLR) addresses the question of structured literature searches when dealing with a potentially large number of literature sources. An example of a large body of literature where SLR would be beneficial can be found in the information systems security literature that touches on internal agents’ behavior and tendencies to violate security policies. Upon close examination, very few studies have used SLR in this domain. This work presents an insightful approach to how SLR may be applied in the domain of information systems security. To address this gap, the article presents a summary of the SLR approach contextualized in the domain of IS security. Rigor and relevance are systematized in the work through a pre-selection and coding of literature using Atlas.ti. The outcome of the SLR process outlined in this work is a presentation of the literature in three pre-determined schemes, namely: the theories that have been used in the information systems security violations literature; a categorization of security violations as presented in the literature; and the contexts in which these violations occur. The work concludes by presenting suggestions for future research.

    Analyzing the Impacts of Emerging Technologies on Workforce Skills: A Case Study of Industrial Engineering in the Context of the Industrial Internet of Things

    New technologies can result in major disruptions and change paradigms that were once well established. Methods have been developed to forecast new technologies and to analyze their impacts in terms of processes, products, and services. However, the current literature does not provide answers on how to forecast changes in terms of skills and knowledge, given an emerging technology. This thesis aims to fill this literature gap by developing a structured method to forecast the required set of skills for emerging technologies and to compare it with the current skills of the workforce. The method relies on breaking the emerging technology down into smaller components, so that skills can be identified for each component. A case study was conducted to implement and test the proposed method. In this case study, the impacts of the Industrial Internet of Things (IIoT) on engineering skills and knowledge were assessed. Text data analytics validated IIoT as an emerging technology, thus justifying the case study based on engineering and manufacturing discussions. The set of skills required for IIoT was compared to the current skills developed by Industrial Engineering students at the University of Windsor. Text data analytics was also used to evaluate the importance of each IIoT component by measuring how strongly associated individual components are with IIoT. Therefore, existing skill gaps between the current Industrial Engineering program and IIoT requirements were not only mapped, but also weighted.
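    The weighting step described in this abstract — measuring how strongly each component is associated with the parent technology — could be sketched as a simple per-document co-occurrence ratio. This is an illustrative assumption only: the thesis does not specify its text-analytics pipeline here, and the function and corpus below are hypothetical.

    ```python
    def association_weights(documents, technology, components):
        """Score each component by the fraction of technology-mentioning
        documents in which it also appears (simple co-occurrence ratio)."""
        tech_docs = [d.lower() for d in documents if technology.lower() in d.lower()]
        weights = {}
        for comp in components:
            hits = sum(comp.lower() in d for d in tech_docs)
            weights[comp] = hits / len(tech_docs) if tech_docs else 0.0
        return weights

    # Hypothetical mini-corpus of engineering/manufacturing discussions
    docs = [
        "IIoT platforms rely on sensors and cloud computing",
        "Sensors stream data to edge devices in IIoT deployments",
        "Cloud computing pricing models",
    ]
    print(association_weights(docs, "IIoT", ["sensors", "cloud computing", "edge"]))
    ```

    A ratio near 1.0 would mark a component as central to the technology, giving the skill gaps attached to that component a higher weight; a real pipeline would likely use a corpus-level association measure (e.g., PMI or TF–IDF) instead of raw co-occurrence.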

    DOING INFORMATION SYSTEMS: INFORMATIZING AND SYSTEMATIZING FROM A PRACTICE LENS

    This study applies the theory of practice to view the information systems (IS) field in terms of its essential activity—what it does as an intellectual enterprise. Drawing from Foucault, Bourdieu, Pickering, and other practice theorists, it defines the IS field as continuously informatizing and systematizing its objects of study. Each of these two activities is elaborated into three dimensions: informatizing is characterized as automating, informating, and complexing; systematizing is characterized as analysing/synthesizing, sensemaking, and enacting. These dimensions are mapped into themes that can be characteristically said to be IS research and, based on each of their essential activities, provide a theoretically coherent image of research in IS that connects the dots despite the field’s apparent theoretical diversity and incongruity. Focusing on what the IS field does builds a distinctive identity for the field, opens up possibilities for theorizing the IT artefact, and enables IS researchers to theorize not only traditional IS topics, but especially novel, unpredictable, and emergent socio-technical phenomena. By bringing the IS field back to its core concepts—information and system—the performative act of doing IS in both its discursive and non-discursive practices holds the potential for enhancing the intellectual and social relevance of the IS field.