
    Unauthorized Access

    Going beyond current books on privacy and security, this book proposes specific solutions to public policy issues pertaining to online privacy and security. Requiring no technical or legal expertise, it provides a practical framework to address ethical and legal issues. The authors explore the well-established connection between social norms, privacy, security, and technological structure. They also discuss how rapid technological developments have created novel situations that lack relevant norms, and they present ways to develop these norms for protecting informational privacy and ensuring sufficient information security.

    Evaluating the data privacy of mobile applications through crowdsourcing

    Consumers are largely unaware of the use made of the data they generate through smart devices, or of its GDPR compliance, since such information is typically hidden behind vague privacy policy documents, which are often lengthy, difficult to read (containing legal terms and definitions), and frequently changing. This paper describes the activities of the CAP-A project, whose aim is to apply crowdsourcing techniques to evaluate the privacy friendliness of apps and to allow users to better understand the content of privacy policy documents and, consequently, the privacy implications of using any given mobile app. To achieve this, we developed a set of tools that assist users in expressing their own privacy concerns and expectations and in assessing mobile apps’ privacy properties through collective intelligence.
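
    As an illustration of the collective-intelligence step only (the names, rating scale, and aggregation rule below are assumptions, not CAP-A’s actual model), crowdsourced assessment can be reduced to aggregating per-user concern ratings into a per-app privacy-friendliness score:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical crowd annotations: (app_id, user_id, concern),
# with concern rated from 1 (no concern) to 5 (severe concern).
annotations = [
    ("com.example.flashlight", "u1", 5),
    ("com.example.flashlight", "u2", 4),
    ("com.example.notes", "u1", 1),
    ("com.example.notes", "u3", 2),
]

def friendliness_scores(rows):
    """Aggregate concern ratings into a 0-100 friendliness score per app."""
    by_app = defaultdict(list)
    for app, _user, concern in rows:
        by_app[app].append(concern)
    # Invert mean concern (1..5) onto 0..100; higher = more privacy friendly.
    return {app: round(100 * (5 - mean(c)) / 4) for app, c in by_app.items()}

print(friendliness_scores(annotations))
# {'com.example.flashlight': 12, 'com.example.notes': 88}
```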

    Faculty Perspectives - Fall 2015

    This issue features excerpts from exciting and innovative works by faculty members Kimberly Bailey, Alexander Boni-Saenz and Richard Warner. Also featured: an excerpt from Daniel M. Katz's The MIT School of Law?, which discusses new directions for legal education.
    https://scholarship.kentlaw.iit.edu/fac_perspectives/1004/thumbnail.jp

    Debugging the Tallinn Manual 2.0's Application of the Due Diligence Principle to Cyber Operations

    As global cyber connectivity increases, so do the opportunities for large-scale nefarious cyber operations. These novel circumstances have necessitated the application of old-world customs to an increasingly complex world. To meet this challenge, the Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations was created. The Manual provides 154 black letter rules detailing how international law applies to cyber operations during peacetime. Of particular import is the Manual’s interpretation of the due diligence principle. This principle, which defines the contours of a state’s obligation to prevent its territory from being used to inflict extraterritorial harm, is increasingly significant in light of the above-mentioned increase in global network connectivity. It is with regard to this principle that the Manual’s application is flawed. However, because of the principle’s inherent flexibility and the unique nature of cyber risks, there are patches that are consistent with international law and would better serve global peace and security.

    Faculty Perspectives - Fall 2013

    This is the inaugural edition of Faculty Perspectives, a publication that highlights new faculty scholarship at IIT Chicago-Kent College of Law. This issue features four excerpts from recent or forthcoming faculty articles and books.
    https://scholarship.kentlaw.iit.edu/fac_perspectives/1000/thumbnail.jp

    SkyCDS: A resilient content delivery service based on diversified cloud storage

    Cloud-based storage is a popular outsourcing solution for organizations that deliver contents to end-users. However, there is a need for contingency plans to ensure service provision when the provider either suffers outages or goes out of business. This paper presents SkyCDS: a resilient content delivery service based on a publish/subscribe overlay over diversified cloud storage. SkyCDS splits content delivery into metadata and content storage flow layers. The metadata flow layer is based on publish/subscribe patterns for insourcing metadata control back to the content owner. The storage layer is based on information dispersal over multiple cloud locations, with which organizations outsource content storage in a controlled manner. In SkyCDS, content dispersion is performed on the publisher side and content retrieval on the end-user side (the subscriber), which reduces the load on the organization side to metadata management only. SkyCDS also lowers the overhead of the content dispersion and retrieval processes by taking advantage of multi-core technology. A new allocation strategy based on cloud storage diversification, together with failure-masking mechanisms, minimizes the side effects of temporary and permanent cloud service outages and of vendor lock-in. We developed a SkyCDS prototype that was evaluated using synthetic workloads and a case study with real traces. Publish/subscribe queuing patterns were evaluated using a simulation tool based on metrics characterized from the experimental evaluation. The evaluation revealed the feasibility of SkyCDS in terms of performance, reliability and storage space profitability. It also shows a novel way to compare storage/delivery options through risk assessment. The work presented in this paper has been partially supported by the EU under the COST programme Action IC1305, Network for Sustainable Ultrascale Computing (NESUS).
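
    As a minimal sketch of the dispersal idea (the Cloud class and the single-parity XOR layout below are illustrative assumptions, not the paper’s dispersal algorithm), the publisher spreads fragments over several providers and keeps only metadata, while the subscriber can mask one provider outage on retrieval:

```python
import functools
from typing import Optional

class Cloud:
    """Stand-in for one cloud storage provider (assumed interface)."""
    def __init__(self, name: str):
        self.name = name
        self._blobs: dict[str, bytes] = {}
        self.online = True

    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data

    def get(self, key: str) -> Optional[bytes]:
        return self._blobs.get(key) if self.online else None

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def disperse(content: bytes, clouds: list[Cloud], key: str) -> dict:
    """Publisher side: n-1 data fragments plus one XOR parity fragment."""
    n = len(clouds) - 1                       # last provider holds parity
    size = -(-len(content) // n)              # ceil(len/n)
    frags = [content[i*size:(i+1)*size].ljust(size, b"\0") for i in range(n)]
    parity = functools.reduce(xor, frags)
    for cloud, frag in zip(clouds, frags + [parity]):
        cloud.put(key, frag)
    # Metadata stays with the content owner (the pub/sub layer in the paper).
    return {"key": key, "length": len(content)}

def retrieve(meta: dict, clouds: list[Cloud]) -> bytes:
    """Subscriber side: mask a single provider outage via the parity."""
    frags = [c.get(meta["key"]) for c in clouds]
    data, parity = frags[:-1], frags[-1]
    missing = [i for i, f in enumerate(data) if f is None]
    if len(missing) == 1 and parity is not None:
        survivors = [f for f in data if f is not None]
        data[missing[0]] = functools.reduce(xor, survivors + [parity])
    elif missing:
        raise IOError("too many providers offline for this one-parity sketch")
    return b"".join(data)[:meta["length"]]

# One provider goes down; the content is still recoverable.
clouds = [Cloud(f"provider-{i}") for i in range(4)]
meta = disperse(b"report contents " * 100, clouds, "report-v1")
clouds[1].online = False                      # simulate an outage
assert retrieve(meta, clouds) == b"report contents " * 100
```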

    A gearbox model for processing large volumes of data by using pipeline systems encapsulated into virtual containers

    Software pipelines enable organizations to chain applications for adding value to contents (e.g., confidentiality, reliability, and integrity) before either sharing them with partners or sending them to the cloud. However, the pipeline components add overhead when processing large volumes of data, which can become critical in real-world scenarios. This paper presents a gearbox model for processing large volumes of data using pipeline systems encapsulated in virtual containers. In this model, the gears represent applications, whereas gearboxes represent software pipelines. The model was implemented as a collaborative system that automatically performs gear-up (by using parallel patterns) and/or gear-down (by using in-memory storage) until all gears produce uniform data processing velocities. This reduces the delays and bottlenecks produced by the heterogeneous performance of the applications included in software pipelines. A new container tool was designed to encapsulate both the collaborative system and the software pipelines into a virtual container and deploy it on IT infrastructures. We conducted case studies to evaluate the performance of the model when processing medical images and PDF repositories. The incorporation of a capsule into a cloud storage service for pre-processing medical imagery was also studied. The experimental evaluation revealed the feasibility of applying the gearbox model to the deployment of software pipelines in real-world scenarios, as it can significantly improve the end-user service experience when pre-processing large-scale data in comparison with state-of-the-art solutions such as Sacbe and Parsl. This work has been partially supported by the Spanish Ministerio de Economia y Competitividad under the project grant TIN2016-79637-P, “Towards Unification of HPC and Big Data paradigms”.
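
    As a minimal sketch of the gearbox idea (a thread-per-gear design with assumed names, not the paper’s implementation), the code below derives a worker count per stage from measured per-item costs (gear-up) and buffers between stages with a bounded in-memory queue (gear-down), so both stages approach one processing velocity:

```python
import math
import threading
from queue import Queue

SENTINEL = None  # end-of-stream marker

def gear_ratio(stage_costs: list[float]) -> list[int]:
    """Workers per stage, proportional to its measured per-item cost."""
    fastest = min(stage_costs)
    return [math.ceil(cost / fastest) for cost in stage_costs]

def run_stage(fn, inq: Queue, outq: Queue, workers: int) -> None:
    """One gear: `workers` parallel threads draining inq into outq."""
    def worker():
        while True:
            item = inq.get()
            if item is SENTINEL:
                inq.put(SENTINEL)  # re-post so sibling workers stop too
                break
            outq.put(fn(item))
    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    outq.put(SENTINEL)

# Hypothetical two-stage pipeline: stage B costs ~4x stage A per item,
# so gear-up gives it 4 workers; the bounded middle queue is the buffer.
stage_a = lambda x: x + 1
stage_b = lambda x: x * 2
workers = gear_ratio([0.005, 0.020])          # -> [1, 4]
q_in, q_mid, q_out = Queue(), Queue(maxsize=64), Queue()
for i in range(10):
    q_in.put(i)
q_in.put(SENTINEL)
t1 = threading.Thread(target=run_stage, args=(stage_a, q_in, q_mid, workers[0]))
t2 = threading.Thread(target=run_stage, args=(stage_b, q_mid, q_out, workers[1]))
t1.start(); t2.start(); t1.join(); t2.join()
results = []
while (v := q_out.get()) is not SENTINEL:
    results.append(v)
print(sorted(results))  # [2, 4, 6, ..., 20]
```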

    Chicago-Kent Magazine - Fall 2013

    https://scholarship.kentlaw.iit.edu/ckmagazine/1000/thumbnail.jp
