5 research outputs found

    Informationsläckor och organisationer (Information Leaks and Organizations)

    Information leakage has become an increasingly serious problem as the business world has grown ever more competitive. Information leakage is a diffuse and complex phenomenon, since it can occur in many different ways, which makes it difficult for organizations to protect themselves against it. This study examines the public authorities Lunds Kommun (the Municipality of Lund) and Försvarsmakten (the Swedish Armed Forces), investigating which action plans are in place for handling and responding to information leakage, and which consequences information leakage can entail. Our approach was to collect literature suited to the subject and aligned with our research question. The results show that our informants are aware of information leakage and take it into account, and that they use action plans to respond to information leakage, though to varying degrees.

    Meanings of Security: A Constructivist Inquiry into the Context of Information Security Policy Development Post 9/11

    Security is a term that appears to be used in a variety of ways and to have a number of meanings. In policy discussions, there may be reference to information security, national security, network security, online security, and other kinds of security. In an environment where technological innovation occurs at an ever-increasing rate, where policy makers look to technological experts for advice, and where information security policy is developed, it is important to consider these variations in meaning. This constructivist inquiry explores the context in which information security policy is developed and inquires into the meanings, assumptions, and values of those who engage in policy discourse. The guiding research question, "What is the meaning of security?", asks participants in federal and state government, colleges and universities, and the private and non-profit sectors about their understandings of security. The findings of this inquiry, presented in a narrative case study report, and the implications of this case study provide a richer understanding of the multiple meanings of security in the context in which information is selected and presented to policy makers, advice is given, and policy decisions are made. The multiple perspectives offered by diverse research participants provide valuable insights into the complex world in which information security policy development takes place. While the goal of this research is understanding, the use of thick description in the narrative may aid in the transferability necessary for readers to make use of this research in other settings. Lessons learned are included, along with implications for policy makers and for future research.

    Preserving Privacy in Data Release

    Data sharing and dissemination play a key role in our information society. Not only do they prove to be advantageous to the involved parties, but they can also be fruitful to society at large (e.g., new treatments for rare diseases can be discovered based on real clinical trials shared by hospitals and pharmaceutical companies). Advancements in Information and Communication Technology (ICT) make the process of releasing a data collection simpler than ever. The availability of novel computing paradigms, such as data outsourcing and cloud computing, makes scalable, reliable, and fast infrastructures a dream come true at reasonable cost. As a natural consequence of this scenario, data owners often rely on external storage servers for releasing their data collections, thus delegating the burden of data storage and management to the service provider. Unfortunately, the price to be paid when releasing a collection of data comes in the form of unprecedented privacy risks. Data collections often include sensitive information, not intended for disclosure, that should be properly protected. The problem of protecting privacy in data release has long held the attention of the research and development communities. However, the richness of released data, the large number of available sources, and the emerging outsourcing/cloud scenarios raise novel problems, not addressed by traditional approaches, which call for enhanced solutions. In this thesis, we define a comprehensive approach for protecting sensitive information when large collections of data are publicly or selectively released by their owners. In a nutshell, this requires protecting data explicitly included in the release, protecting information not explicitly released but that could be exposed by the release, and ensuring that access to released data is allowed only to authorized parties according to the data owners' policies.
    More specifically, these three aspects translate into three requirements, addressed by this thesis, which can be summarized as follows. The first requirement is the protection of data explicitly included in a release. While intuitive, this requirement is complicated by the fact that privacy-enhancing techniques should not prevent recipients from performing legitimate analysis on the released data but, on the contrary, should ensure sufficient visibility over non-sensitive information. We therefore propose a solution, based on a novel formulation of the fragmentation approach, that vertically fragments a data collection so as to satisfy requirements for both information protection and visibility, and we complement it with an effective means for enriching the utility of the released data. The second requirement is the protection of data not explicitly included in a release. As a matter of fact, even a collection of non-sensitive data might enable recipients to infer (possibly sensitive) information that is not explicitly disclosed but somehow depends on the released information (e.g., releasing the treatment with which a patient is being cared for can leak information about her disease). To address this requirement, starting from a real case study, we propose a solution for counteracting the inference of sensitive information that can be drawn by observing peculiar value distributions in the released data collection. The third requirement is access control enforcement. Available solutions fall short for a variety of reasons. Traditional access control mechanisms are based on a reference monitor and do not fit outsourcing/cloud scenarios, since the data owner is not willing, and the cloud storage server is not trusted, to enforce the access control policy. Recent solutions for access control enforcement in outsourcing scenarios assume outsourced data to be read-only and cannot easily manage (dynamic) write authorizations.
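The vertical fragmentation idea above can be illustrated with a toy greedy sketch: attributes are placed into fragments so that no fragment fully contains a confidentiality constraint. All names and the placement strategy are assumptions for the example; the thesis itself formulates the problem with reduced OBDDs.

```python
# Illustrative sketch only: toy greedy vertical fragmentation.
# The thesis's actual OBDD-based technique is not reproduced here.

def fragment(attributes, constraints):
    """Place each attribute into the first fragment that would not come
    to fully contain any confidentiality constraint (a set of attributes
    whose joint release is considered sensitive)."""
    fragments = []
    for attr in attributes:
        for frag in fragments:
            if not any(c <= frag | {attr} for c in constraints):
                frag.add(attr)
                break
        else:
            fragments.append({attr})
    return fragments

# Name+Disease and Disease+Salary must never appear in the same fragment.
attrs = ["Name", "Disease", "Salary", "ZIP"]
constraints = [{"Name", "Disease"}, {"Disease", "Salary"}]
frags = fragment(attrs, constraints)
```

A greedy pass like this can over-fragment; the minimal-fragmentation contribution described below in the abstract is precisely about avoiding that.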
    We therefore propose an approach for efficiently supporting grant and revoke of write authorizations, building upon the selective encryption approach, and we also define a subscription-based authorization policy to fit real-world scenarios where users pay for a service and access the resources made available during their subscriptions. The main contributions of this thesis can therefore be summarized as follows. With respect to the protection of data explicitly included in a release, our original results are: i) a novel modeling of the fragmentation problem; ii) an efficient technique for computing a fragmentation, based on reduced Ordered Binary Decision Diagrams (OBDDs) to formulate the conditions that a fragmentation must satisfy; iii) the computation of a minimal fragmentation that does not fragment data more than necessary, with the definition of both an exact and a heuristic algorithm, the latter providing faster computation while closely approximating the exact solutions; and iv) the definition of loose associations, a sanitized form of the sensitive associations broken by fragmentation that can be safely released, specifically extended to operate on arbitrary fragmentations. With respect to the protection of data not explicitly included in a release, our original results are: i) the definition of a novel and unresolved inference scenario, arising from a real case study where data items are incrementally released upon request; ii) the definition of several metrics to assess the inference exposure due to a data release, based upon the concepts of mutual information, the Kullback-Leibler distance between distributions, Pearson's cumulative statistic, and Dixon's coefficient; and iii) the identification of a safe release with respect to the considered inference channel and the definition of the controls to be enforced to guarantee that no sensitive information is leaked by releasing non-sensitive data items.
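One of the exposure metrics named above, the Kullback-Leibler distance, can be sketched as follows: compare the value distribution in the released data against a baseline distribution and flag releases that are suspiciously skewed. The function names and the threshold are assumptions for illustration, not taken from the thesis.

```python
# Illustrative sketch: KL distance as an inference-exposure measure.
import math

def kl_divergence(observed, expected):
    """KL(observed || expected); both map value -> probability."""
    return sum(p * math.log(p / expected[v])
               for v, p in observed.items() if p > 0)

def release_looks_safe(observed, expected, threshold=0.05):
    """Flag a release whose value distribution deviates too much
    from the baseline (hypothetical threshold)."""
    return kl_divergence(observed, expected) <= threshold

baseline = {"A": 0.5, "B": 0.5}
assert release_looks_safe({"A": 0.5, "B": 0.5}, baseline)
assert not release_looks_safe({"A": 0.9, "B": 0.1}, baseline)
```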
    With respect to access control enforcement, our original results are: i) the management of dynamic write authorizations, by defining a solution based on selective encryption for efficiently and effectively supporting grant and revoke of write authorizations; ii) the definition of an effective technique to guarantee data integrity, so as to allow the data owner and the users to verify that modifications to a resource have been produced only by authorized users; and iii) the modeling and enforcement of a subscription-based authorization policy, to support scenarios where both the set of users and the set of resources change frequently over time and users' authorizations are based on their subscriptions.
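The flavor of cryptographically enforced write authorization can be conveyed with a toy model: writers share a per-resource key, the (policy-untrusted) server accepts an update only if its integrity tag verifies, and revocation rotates the key. This is a minimal sketch under assumed names, not the thesis's actual selective-encryption construction.

```python
# Toy model of key-based write control: grant = share key, revoke = rotate.
import hmac
import hashlib
import secrets

class Resource:
    """A stored resource whose updates are gated by an HMAC tag that
    only holders of the current write key can produce."""

    def __init__(self):
        self.write_key = secrets.token_bytes(32)

    @staticmethod
    def tag(data, key):
        return hmac.new(key, data, hashlib.sha256).digest()

    def server_accepts(self, new_content, presented_tag):
        # The server never sees the policy; it only checks the tag.
        return hmac.compare_digest(presented_tag,
                                   self.tag(new_content, self.write_key))

    def revoke_write(self):
        # Rotating the key invalidates tags from revoked writers; the new
        # key would be redistributed to the remaining authorized writers.
        self.write_key = secrets.token_bytes(32)

r = Resource()
writer_key = r.write_key                              # grant: share the key
update = b"new version"
ok = r.server_accepts(update, Resource.tag(update, writer_key))      # accepted
r.revoke_write()
stale = r.server_accepts(update, Resource.tag(update, writer_key))   # rejected
```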

    Maximizing Sharing of Protected Information

    ... In this paper we address the problem of classifying information by enforcing explicit data classification as well as inference and association constraints. We formulate the problem of determining a classification that ensures satisfaction of the constraints while at the same time guaranteeing that information will not be overclassified. We present an approach to the solution of this problem and give an algorithm implementing it, which is linear in simple cases and quadratic in the general case. We also analyze a variant of the problem that is NP-complete.
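One simple reading of classification under inference constraints can be sketched as a fixpoint: a constraint (premises, inferred) says that seeing every premise lets a recipient infer the protected item, so some premise must be classified at least as high as that item; to limit overclassification, the premise needing the smallest raise is upgraded. All names and the greedy upgrade rule are illustrative assumptions, not the paper's algorithm.

```python
# Toy fixpoint for classification with inference constraints.

def classify(explicit, constraints):
    """explicit: attr -> level (int); constraints: [(premises, inferred)].
    Raise premise levels until every constraint has a premise classified
    at least as high as the item it lets recipients infer."""
    levels = dict(explicit)
    changed = True
    while changed:
        changed = False
        for premises, inferred in constraints:
            if max(levels[p] for p in premises) < levels[inferred]:
                # Upgrade the highest-classified premise: smallest raise.
                top = max(premises, key=lambda p: levels[p])
                levels[top] = levels[inferred]
                changed = True
    return levels

# Treatment (public) plus Ward (low) jointly reveal Disease (high):
result = classify({"Treatment": 0, "Ward": 1, "Disease": 2},
                  [(("Treatment", "Ward"), "Disease")])
# Ward is raised to level 2; Treatment stays public.
```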