
    Distributed Key Generation for the Internet

    Although distributed key generation (DKG) has been studied for some time, it has never been examined outside of the synchronous setting. We present the first realistic DKG architecture for use over the Internet. We propose a practical system model and define an efficient verifiable secret sharing scheme in it. We observe the necessity of Byzantine agreement for asynchronous DKG and analyze the difficulty of using a randomized protocol for it. Using our verifiable secret sharing scheme and a leader-based agreement protocol, we then design a DKG protocol for public-key cryptography. Finally, along with traditional proactive security, we also introduce group modification primitives in our system.
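The verifiable secret sharing underpinning such DKG protocols can be illustrated with Feldman's VSS, where public commitments let each party check its share. The sketch below is a toy instance with illustratively small parameters, not the paper's construction; a full DKG would have every party run the dealer role and sum the results.

```python
import random

# Toy parameters (real deployments use large safe primes): p = 2q + 1,
# and g generates the order-q subgroup of Z_p*.
P, Q, G = 23, 11, 4

def deal(secret, t, n):
    """Dealer side of Feldman VSS: t-of-n sharing of `secret` in Z_Q."""
    coeffs = [secret] + [random.randrange(Q) for _ in range(t - 1)]
    commitments = [pow(G, a, P) for a in coeffs]            # broadcast publicly
    shares = {j: sum(a * pow(j, i, Q) for i, a in enumerate(coeffs)) % Q
              for j in range(1, n + 1)}                     # one private share per party
    return shares, commitments

def verify(j, share, commitments):
    """Any party checks its share against the public commitments."""
    lhs = pow(G, share, P)
    rhs = 1
    for i, c in enumerate(commitments):
        rhs = rhs * pow(c, pow(j, i, Q), P) % P
    return lhs == rhs

def reconstruct(shares, t):
    """Lagrange interpolation at x = 0 over Z_Q from any t shares."""
    pts = list(shares.items())[:t]
    secret = 0
    for j, s in pts:
        num = den = 1
        for k, _ in pts:
            if k != j:
                num = num * (-k) % Q
                den = den * (j - k) % Q
        secret = (secret + s * num * pow(den, -1, Q)) % Q
    return secret

shares, comm = deal(secret=7, t=3, n=5)
assert all(verify(j, s, comm) for j, s in shares.items())
assert reconstruct(shares, 3) == 7
```

The commitment check is what makes the sharing verifiable: a corrupted dealer handing out an inconsistent share is caught immediately, without revealing the secret.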

    A Distributed Public Key Infrastructure Based on Threshold Cryptography for the HiiMap Next Generation Internet Architecture

In this article, a security extension for the HiiMap Next Generation Internet Architecture is presented. We consider a public key infrastructure that is integrated into the mapping infrastructure of the locator/identifier-split addressing scheme. The security approach is based on threshold cryptography, which enables the sharing of keys among the mapping servers and thereby fosters a more trustworthy and fair approach for a Next Generation Internet Architecture than the current state of the art. Additionally, we give an evaluation based on IETF AAA recommendations for security-related systems.
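Threshold sharing of a key among n mapping servers, so that any t of them can recover it but fewer learn nothing, is classically done with Shamir's scheme. A minimal sketch follows; the field size and parameters are illustrative and not taken from the article.

```python
import random

PRIME = 2**127 - 1  # Mersenne prime; field large enough for a 126-bit key

def split_key(key_int, t, n):
    """Shamir t-of-n split: no single mapping server learns the key."""
    coeffs = [key_int] + [random.randrange(PRIME) for _ in range(t - 1)]
    f = lambda x: sum(a * pow(x, i, PRIME) for i, a in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def combine(shares):
    """Any t shares recover the key via Lagrange interpolation at x = 0."""
    secret = 0
    for x_j, y_j in shares:
        num = den = 1
        for x_k, _ in shares:
            if x_k != x_j:
                num = num * (-x_k) % PRIME
                den = den * (x_j - x_k) % PRIME
        secret = (secret + y_j * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = random.getrandbits(126)
shares = split_key(key, t=3, n=5)
assert combine(shares[:3]) == key
assert combine(shares[2:5]) == key  # any 3 of the 5 servers suffice
```

Because reconstruction needs any t servers, the infrastructure tolerates the compromise or failure of up to t - 1 mapping servers without losing or leaking the key.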

    Access Controlled

Reports on a new generation of Internet controls that establish a new normative terrain in which surveillance and censorship are routine. Internet filtering, censorship of Web content, and online surveillance are increasing in scale, scope, and sophistication around the world, in democratic countries as well as in authoritarian states. The first generation of Internet controls consisted largely of building firewalls at key Internet gateways; China's famous “Great Firewall of China” is one of the first national Internet filtering systems. Today, the new tools for Internet control that are emerging go beyond mere denial of information. These new techniques, which aim to normalize (or even legalize) Internet control, include targeted viruses and the strategically timed deployment of distributed denial-of-service (DDoS) attacks, surveillance at key points of the Internet's infrastructure, take-down notices, stringent terms-of-usage policies, and national information-shaping strategies. Access Controlled reports on this new normative terrain. The book, a project from the OpenNet Initiative (ONI), a collaboration of the Citizen Lab at the University of Toronto's Munk Centre for International Studies, Harvard's Berkman Center for Internet and Society, and the SecDev Group, offers six substantial chapters that analyze Internet control in both Western and Eastern Europe and a section of shorter regional reports and country profiles drawn from material gathered by the ONI around the world through a combination of technical interrogation and field research methods.

    Soft peer review: social software and distributed scientific evaluation

The debate on the prospects of peer review in the Internet age and the increasing criticism leveled against the dominant role of impact factor indicators are calling for new measurable criteria to assess scientific quality. Usage-based metrics offer a new avenue to scientific quality assessment but face the same risks as first-generation search engines that used unreliable metrics (such as raw traffic data) to estimate content quality. In this article I analyze the contribution that social bookmarking systems can provide to the problem of usage-based metrics for scientific evaluation. I suggest that collaboratively aggregated metadata may help fill the gap between traditional citation-based criteria and raw usage factors. I submit that bottom-up, distributed evaluation models such as those afforded by social bookmarking will challenge more traditional quality assessment models in terms of coverage, efficiency and scalability. Services aggregating user-related quality indicators for online scientific content will come to occupy a key function in the scholarly communication system.
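As a toy illustration of turning collaboratively aggregated metadata into a usage-based indicator that is harder to inflate than raw traffic, one might weight distinct bookmarking users and tag diversity. The fields and weights below are invented for the example, not taken from the article.

```python
from collections import Counter

def soft_review_score(records, w_users=0.7, w_tags=0.3):
    """Toy usage-based indicator: distinct bookmarking users (harder to
    inflate than raw hit counts) plus tag diversity as a proxy for the
    breadth of an item's audience. Weights are illustrative only."""
    users = {r["user"] for r in records}
    tags = Counter(t for r in records for t in r["tags"])
    return w_users * len(users) + w_tags * len(tags)

bookmarks = [
    {"user": "alice", "tags": ["peer-review", "metrics"]},
    {"user": "bob",   "tags": ["metrics"]},
    {"user": "alice", "tags": ["open-science"]},
]
print(soft_review_score(bookmarks))  # 2 users, 3 distinct tags -> 2.3
```

Counting distinct users rather than raw bookmark events is the kind of choice that separates curated social metadata from the unreliable raw-traffic metrics the article warns about.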

    Compiler Design for Distributed Quantum Computing

In distributed quantum computing architectures, with the network and communications functionalities provided by the Quantum Internet, remote quantum processing units (QPUs) can communicate and cooperate for executing computational tasks that single NISQ devices cannot handle by themselves. To this aim, distributed quantum computing requires a new generation of quantum compilers for mapping any quantum algorithm to any distributed quantum computing architecture. With this perspective, in this paper we first discuss the main challenges arising in compiler design for distributed quantum computing. Then, we analytically derive an upper bound on the overhead induced by quantum compilation for distributed quantum computing. The derived bound accounts for the overhead induced by the underlying computing architecture as well as the additional overhead induced by the sub-optimal quantum compiler, expressly designed within the paper to achieve three key features: generality, efficiency, and effectiveness. Finally, we validate the analytical results and confirm the validity of the compiler design through an extensive performance analysis.
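A simplified view of why compilation for distributed architectures induces overhead: every two-qubit gate whose operands sit on different QPUs needs remote interaction (e.g. one EPR pair in a telegate model), so the qubit placement directly determines a communication cost. The sketch below is a toy counting model, not the paper's analytical bound.

```python
def remote_gate_overhead(circuit, placement):
    """Count two-qubit gates whose operands are mapped to different QPUs.
    In a simple telegate model each such gate consumes one EPR pair, so
    this count lower-bounds the communication overhead of a placement
    (a toy model, not the paper's derived upper bound)."""
    return sum(1 for q0, q1 in circuit if placement[q0] != placement[q1])

# Toy circuit: a list of two-qubit gates as (control, target) pairs.
circuit = [(0, 1), (1, 2), (2, 3), (0, 3)]

# Two candidate placements of four qubits onto QPUs A and B.
split_pairs = {0: "A", 1: "A", 2: "B", 3: "B"}
split_odd   = {0: "A", 1: "B", 2: "A", 3: "B"}

print(remote_gate_overhead(circuit, split_pairs))  # gates (1,2) and (0,3) are remote
print(remote_gate_overhead(circuit, split_odd))    # all four gates are remote
```

Minimizing this count over placements is essentially a graph-partitioning problem, which is one reason general-purpose distributed compilation is hard and sub-optimal heuristics are used.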

    A Distributed Shared Key Generation Procedure Using Fractional Keys

We present a new class of distributed key generation and recovery algorithms suitable for group communication systems where the group membership is either static or slowly time-varying, and must be tightly controlled. The proposed key generation approach allows entities which may have only partial trust in each other to jointly generate a shared key without the aid of an external third party. The group collectively generates and maintains a dynamic group parameter, and the shared key is generated using a strong, one-way function of this parameter. This scheme also provides perfect forward secrecy. The validity of key generation can be checked using verifiable secret sharing techniques. The key retrieval method does not require the keys to be stored in an external retrieval center. We note that many Internet-based applications may have these requirements. Fulfillment of these requirements is realized through the use of fractional keys, a distributed technique recently developed to enhance the security of distributed systems in a non-cryptographic manner.
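The general shape of the scheme, with each member holding only its own contribution and the shared key derived as a one-way function of the combined group parameter, can be sketched as follows. XOR combination and SHA-256 are illustrative stand-ins here, not the paper's fractional-key construction.

```python
import hashlib
import secrets

class GroupMember:
    """Each member holds only its own fractional contribution."""
    def __init__(self):
        self.fraction = secrets.token_bytes(32)

def group_key(members):
    """Combine the members' fractions into a dynamic group parameter,
    then derive the shared key through a strong one-way function of it.
    (XOR and SHA-256 are illustrative stand-ins for the paper's scheme.)"""
    param = bytes(32)
    for m in members:
        param = bytes(a ^ b for a, b in zip(param, m.fraction))
    return hashlib.sha256(param).hexdigest()

members = [GroupMember() for _ in range(4)]
k1 = group_key(members)
members[0].fraction = secrets.token_bytes(32)  # one member rekeys...
k2 = group_key(members)
assert k1 != k2  # ...old and new keys differ; past keys cannot be rederived
```

No external third party ever sees the key: it exists only as the one-way image of a parameter that no single member can compute alone.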

    6G wireless systems : a vision, architectural elements, and future directions

    Internet of everything (IoE)-based smart services are expected to gain immense popularity in the future, which raises the need for next-generation wireless networks. Although fifth-generation (5G) networks can support various IoE services, they might not be able to completely fulfill the requirements of novel applications. Sixth-generation (6G) wireless systems are envisioned to overcome 5G network limitations. In this article, we explore recent advances made toward enabling 6G systems. We devise a taxonomy based on key enabling technologies, use cases, emerging machine learning schemes, communication technologies, networking technologies, and computing technologies. Furthermore, we identify and discuss open research challenges, such as artificial-intelligence-based adaptive transceivers, intelligent wireless energy harvesting, decentralized and secure business models, intelligent cell-less architecture, and distributed security models. We propose practical guidelines including deep Q-learning and federated learning-based transceivers, blockchain-based secure business models, homomorphic encryption, and distributed-ledger-based authentication schemes to cope with these challenges. Finally, we outline and recommend several future directions. © 2013 IEEE
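Among the proposed guidelines, federated learning lets UEs train locally and share only model weights, never raw data. The server-side aggregation step (FedAvg) can be sketched in a few lines; the models and dataset sizes below are invented for the example.

```python
def federated_average(client_weights, client_sizes):
    """FedAvg aggregation: the server averages locally trained model
    weights, weighted by each client's dataset size, without ever
    seeing the clients' raw data (a simplified sketch)."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(dim)]

# Three UEs report 2-parameter models trained on datasets of different sizes.
weights = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
sizes = [100, 100, 200]
print(federated_average(weights, sizes))  # [0.5, 0.5]
```

Keeping data on-device is what makes this scheme attractive for the privacy-sensitive IoE services the article surveys; only the aggregated model leaves the network edge.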

    Extending Ambient Intelligence to the Internet of Things: New Challenges for QoC Management

Quality of Context (QoC) awareness is recognized as a key point for the success of context-aware computing solutions. At a time when the Internet of Things, Cloud Computing, and Ambient Intelligence paradigms bring together new opportunities for more complex context computation, the next generation of Multiscale Distributed Context Managers (MDCM) is facing new challenges concerning QoC management. This paper presents how our QoCIM framework can help application developers to manage the whole QoC life-cycle by providing genericity, openness and uniformity. Its usages are illustrated, both at design time and at runtime, in the case of an urban pollution context- and QoC-aware scenario.

    Orchestrating Service Migration for Low Power MEC-Enabled IoT Devices

Multi-Access Edge Computing (MEC) is a key enabling technology for Fifth Generation (5G) mobile networks. MEC facilitates distributed cloud computing capabilities and an information technology service environment for applications and services at the edges of mobile networks. This architectural modification serves to reduce congestion and latency and to improve the performance of edge-colocated applications and devices. In this paper, we demonstrate how reactive service migration can be orchestrated for low-power MEC-enabled Internet of Things (IoT) devices, using open-source Kubernetes as the container orchestration system. Our demo is based on a traditional client-server system, from the user equipment (UE) over Long Term Evolution (LTE) to the MEC server. As the use-case scenario, we post-process live video received over Web Real-Time Communication (WebRTC). We then integrate Kubernetes orchestration with S1 handovers, demonstrating a MEC-based software-defined network (SDN): edge applications reactively follow the UE within the radio access network (RAN), enabling low latency. The collected data are used to analyze the benefits of the low-power MEC-enabled IoT device scheme, in which the end-to-end (E2E) latency and power requirements of the UE are improved. We further discuss the challenges of implementing such schemes and future research directions therein.
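The reactive-migration policy amounts to a small control loop: after each S1 handover, move the edge service to the MEC host colocated with the UE's new base station, and only when the serving host actually changes. The eNB-to-host mapping and mobility trace below are hypothetical; a real deployment would enact each move through an orchestrator such as Kubernetes rather than this toy loop.

```python
# Hypothetical mapping from base stations (eNBs) to colocated MEC hosts.
ENB_TO_MEC_HOST = {"enb-1": "mec-host-a", "enb-2": "mec-host-b"}

class EdgeService:
    """Toy stand-in for a containerized edge application."""
    def __init__(self, host):
        self.host = host
        self.migrations = 0

    def on_handover(self, target_enb):
        """Reactive policy: migrate only when the serving MEC host changes,
        so the service stays one hop from the UE's current eNB."""
        target_host = ENB_TO_MEC_HOST[target_enb]
        if target_host != self.host:
            self.host = target_host
            self.migrations += 1

svc = EdgeService(host="mec-host-a")
for enb in ["enb-1", "enb-2", "enb-2", "enb-1"]:  # invented UE mobility trace
    svc.on_handover(enb)
print(svc.host, svc.migrations)  # mec-host-a 2
```

Migrating only on a host change avoids pointless container restarts during ping-pong handovers between eNBs served by the same MEC host, which matters for the power budget of a low-power IoT UE.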