
    The Case for Quantum Key Distribution

    Quantum key distribution (QKD) promises secure key agreement by using quantum mechanical systems. We argue that QKD will be an important part of future cryptographic infrastructures: it can provide long-term confidentiality for encrypted information without reliance on computational assumptions. Although QKD still requires authentication to prevent man-in-the-middle attacks, it can make use of either information-theoretically secure symmetric-key authentication or computationally secure public-key authentication. Even when using public-key authentication, we argue that QKD still offers stronger security than classical key agreement.
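
    The canonical example of the information-theoretically secure symmetric-key authentication mentioned here is a Wegman-Carter style one-time MAC. Below is a minimal sketch, assuming a polynomial universal hash over GF(p) masked with a one-time pad; the prime, block size, and key handling are illustrative, not a production design.

    ```python
    import secrets

    P = 2**127 - 1  # a Mersenne prime defining the field GF(P)

    def keygen():
        """One-time key: a hash key and a pad, both uniform over GF(P)."""
        return secrets.randbelow(P), secrets.randbelow(P)

    def tag(message: bytes, key) -> int:
        """Polynomial universal hash of 8-byte blocks, masked by the pad.
        A production scheme would also encode the message length."""
        k, pad = key
        acc = 0
        for i in range(0, len(message), 8):
            block = int.from_bytes(message[i:i + 8], "big")
            acc = (acc * k + block) % P
        return (acc + pad) % P

    # Both parties share (k, pad) in advance; the key must never be reused.
    key = keygen()
    t = tag(b"basis choices for this QKD round", key)
    assert t == tag(b"basis choices for this QKD round", key)
    ```

    The security guarantee is unconditional: because the pad is used once, even a computationally unbounded adversary forges a valid tag with probability at most roughly (message blocks)/P.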

    Privacy and Accountability in Black-Box Medicine

    Black-box medicine—the use of big data and sophisticated machine learning techniques for health-care applications—could be the future of personalized medicine. Black-box medicine promises to make it easier to diagnose rare diseases and conditions, identify the most promising treatments, and allocate scarce resources among different patients. But to succeed, it must overcome two separate but related problems: patient privacy and algorithmic accountability. Privacy is a problem because researchers need access to huge amounts of patient health information to generate useful medical predictions. And accountability is a problem because black-box algorithms must be verified by outsiders to ensure they are accurate and unbiased, but this means giving outsiders access to this health information. This article examines the tension between the twin goals of privacy and accountability and develops a framework for balancing that tension. It proposes three pillars for an effective system of privacy-preserving accountability: substantive limitations on the collection, use, and disclosure of patient information; independent gatekeepers regulating information sharing between those developing and verifying black-box algorithms; and information-security requirements to prevent unintentional disclosures of patient information. The article examines and draws on a similar debate in the field of clinical trials, where disclosing information from past trials can lead to new treatments but also threatens patient privacy.

    A segmentation method for shared protection in WDM networks

    Shared-link and shared-path protection have been recognized as preferred schemes for protecting traffic flows against network failures. In recent years, another method, referred to as shared segment protection, has been studied as an alternative; it is more flexible and efficient in terms of capacity utilization and restoration time. However, to the best of our knowledge, this method has mostly been studied in dynamic provisioning scenarios, in which the search for restoration paths is performed dynamically after a failure has occurred. In this paper, based on the path-segmentation idea, we propose a method to generate good candidate routes for traffic demands in static provisioning. These candidates are used as input parameters of an Integer Linear Programming (ILP) model for shared backup protection. Numerical results show that the capacity efficiency resulting from these candidates is much better than that of the best-known Shared Backup Path Protection (SBPP) schemes. In addition, although the restoration time of our scheme is slightly longer than that of schemes implementing link protection, it is still faster than that of path protection schemes.
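
    To illustrate the path-segmentation idea behind candidate generation, here is a minimal sketch (our own reconstruction, not the authors' algorithm): a working path is cut into fixed-length segments, and a link-disjoint detour is sought for each segment. The graph, segment length, and function names are assumptions for illustration.

    ```python
    import networkx as nx

    def segment_candidates(g: nx.Graph, src, dst, seg_len=2):
        """Return (working_path, {segment: backup_path}) candidates."""
        working = nx.shortest_path(g, src, dst)
        backups = {}
        for i in range(0, len(working) - 1, seg_len):
            seg = working[i:i + seg_len + 1]        # nodes of one segment
            h = g.copy()
            h.remove_edges_from(zip(seg, seg[1:]))  # fail the segment's links
            try:
                backups[tuple(seg)] = nx.shortest_path(h, seg[0], seg[-1])
            except nx.NetworkXNoPath:
                pass                                # no disjoint detour exists
        return working, backups

    # Toy 6-node ring with one chord, purely illustrative.
    g = nx.cycle_graph(6)
    g.add_edge(0, 3)
    print(segment_candidates(g, 0, 4))
    ```

    In the static-provisioning setting described above, route pairs generated this way would then become input columns for the shared-backup-protection ILP.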

    Community tracking in a cMOOC and nomadic learner behavior identification on a connectivist rhizomatic learning network

    This article contributes to the literature on connectivism, connectivist MOOCs (cMOOCs) and rhizomatic learning by examining participant interactions, community formation and nomadic learner behavior in a particular cMOOC, #rhizo15, facilitated for six weeks by Dave Cormier. It focuses in particular on what can be learned by observing Twitter interactions. Following an explanatory mixed-methods research design, Social Network Analysis (SNA) and content analysis were employed. SNA was used at the macro, meso and micro levels, and content analysis of one week of the MOOC was conducted using the Community of Inquiry framework. The macro-level analysis demonstrates that communities in a rhizomatic connectivist network have chaotic relationships with other communities in different dimensions (clarified by the use of hashtags of concurrent, past and future events). A key finding at the meso level was that, as #rhizo15 progressed and the number of active participants decreased, interaction in the overall network increased. The micro-level analysis further reveals that, even though fully online, open online ecosystems readily facilitate the formation of community. The content analysis of week-3 tweets demonstrated that cognitive presence was observed most frequently, while teaching presence (the teaching behaviors of both facilitator and participants) was observed least. This research recognizes the limitations of looking only at Twitter when #rhizo15 conversations occurred over multiple platforms frequented by overlapping but not identical groups of people. However, it provides a valuable partial perspective at the macro, meso and micro levels that contributes to our understanding of community-building in cMOOCs.
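
    For readers unfamiliar with multi-level SNA, the sketch below (illustrative only, not the study's actual pipeline) shows the three levels on a toy Twitter mention graph: community structure at the macro level, within-community interaction density at the meso level, and individual betweenness centrality at the micro level. The edge list and usernames are invented.

    ```python
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    # Hypothetical mention edges: (tweeter, mentioned_user).
    edges = [("ann", "bob"), ("bob", "cat"), ("cat", "ann"),
             ("dan", "eve"), ("eve", "dan"), ("ann", "dan")]
    g = nx.Graph(edges)

    # Macro: detect communities across the whole network.
    communities = greedy_modularity_communities(g)

    # Meso: interaction density inside each community.
    for c in communities:
        sub = g.subgraph(c)
        print(sorted(c), "density:", round(nx.density(sub), 2))

    # Micro: who brokers between communities (betweenness centrality).
    print(nx.betweenness_centrality(g))
    ```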

    Developing an Efficient DMCIS with Next-Generation Wireless Networks

    The impact of extreme events across the globe is extraordinary: it continues to handicap the advancement of struggling developing societies and threatens most industrialized countries. Various fields of Information and Communication Technology have been widely used for disaster management, but only to a limited extent; there remains tremendous potential for increasing efficiency and effectiveness in coping with disasters through emerging wireless network technologies. Early warning, response to the particular situation and proper recovery are among the main focuses of an efficient disaster management system today. Considering these aspects, in this paper we propose a framework for developing an efficient Disaster Management Communications and Information System (DMCIS), which benefits from the exploitation of emerging wireless network technologies combined with other networking and data-processing technologies.

    A Secure Lightweight Approach of Node Membership Verification in Dense HDSN

    In this paper, we consider a particular type of deployment scenario of a distributed sensor network (DSN), in which sensors of different types and categories are densely deployed in the same target area. In this network, the sensors are associated with different groups based on their functional types, and after deployment they collaborate with one another within the same group to carry out any task assigned to that group. We term this sort of DSN a heterogeneous distributed sensor network (HDSN). Considering this scenario, we propose a secure membership verification mechanism using a one-way accumulator (OWA), which ensures that, before collaborating on a particular task, any pair of nodes in the same deployment group can verify each other's legitimacy of membership. Our scheme also supports the addition and deletion of members (nodes) in a particular group in the HDSN. Our analysis shows that the proposed scheme works well in conjunction with other security mechanisms for sensor networks and is very effective in resisting any adversary's attempt to be included in a legitimate group in the network.
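
    To make the one-way accumulator idea concrete, here is a minimal sketch of membership verification with an RSA-style (Benaloh-de Mare) accumulator. The toy modulus, public base, and the mapping of node IDs to exponents are illustrative assumptions, not the paper's construction or parameters.

    ```python
    N = 3233          # toy RSA modulus (61 * 53); real schemes use ~2048 bits
    G = 2             # public base

    def accumulate(members):
        """Accumulator value over all group members' exponents."""
        acc = G
        for m in members:
            acc = pow(acc, m, N)
        return acc

    def witness(members, x):
        """Witness for x: the accumulator computed over everyone but x."""
        return accumulate([m for m in members if m != x])

    def verify(acc, x, w):
        """A node proves membership by showing pow(w, x, N) == acc."""
        return pow(w, x, N) == acc

    group = [3, 5, 7]              # node IDs mapped to exponents
    acc = accumulate(group)
    assert verify(acc, 5, witness(group, 5))       # legitimate member passes
    assert not verify(acc, 11, witness(group, 5))  # outsider fails
    ```

    Because exponentiation is quasi-commutative, the accumulator value is independent of member order, and adding or deleting a node only requires recomputing the public value and the affected witnesses, which matches the group-membership updates the scheme supports.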

    Why (and How) Networks Should Run Themselves

    The proliferation of networked devices, systems, and applications that we depend on every day makes managing networks more important than ever. The increasing security, availability, and performance demands of these applications suggest that these increasingly difficult network management problems be solved in real time, across a complex web of interacting protocols and systems. Alas, just as the importance of network management has increased, the network has grown so complex that it is seemingly unmanageable. In this new era, network management requires a fundamentally new approach. Instead of optimizations based on closed-form analysis of individual protocols, network operators need data-driven, machine-learning-based models of end-to-end and application performance based on high-level policy goals and a holistic view of the underlying components. Instead of anomaly detection algorithms that operate on offline analysis of network traces, operators need classification and detection algorithms that can make real-time, closed-loop decisions. Networks should learn to drive themselves. This paper explores this concept, discussing how we might attain this ambitious goal by more closely coupling measurement with real-time control and by relying on learning for inference and prediction about a networked application or system, as opposed to closed-form analysis of individual protocols.
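
    As one concrete reading of "real-time, closed-loop decisions", the sketch below (a minimal illustration, not the paper's system) couples an online EWMA model of a streaming latency measurement with an immediate control action; the ClosedLoopDetector class and the reroute_traffic() hook are hypothetical.

    ```python
    class ClosedLoopDetector:
        """EWMA model of a latency metric with a simple threshold detector."""

        def __init__(self, alpha=0.1, k=3.0):
            self.alpha, self.k = alpha, k
            self.mean, self.var = None, 0.0

        def observe(self, latency_ms: float) -> bool:
            """Update the model; return True when a control action was taken."""
            if self.mean is None:
                self.mean = latency_ms
                return False
            dev = latency_ms - self.mean
            # Flag samples more than k estimated std-devs from the mean;
            # the 1.0 ms^2 variance floor avoids firing during warm-up.
            anomalous = dev * dev > self.k * self.k * max(self.var, 1.0)
            if anomalous:
                self.reroute_traffic()  # closed loop: act on detection now
            # Online update of the mean and variance estimates.
            self.mean += self.alpha * dev
            self.var = (1 - self.alpha) * (self.var + self.alpha * dev * dev)
            return anomalous

        def reroute_traffic(self):
            print("anomaly detected: shifting flows to backup path")

    d = ClosedLoopDetector()
    for sample in [20, 21, 19, 22, 20, 80, 21]:
        d.observe(sample)
    ```

    The contrast with offline trace analysis is that the model and the control decision live in the same loop: each measurement updates the model and can trigger an action before the next sample arrives.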