
    File management in a mobile DHT-based P2P environment

    The emergence of mobile P2P systems is largely due to the evolution of mobile devices into powerful information processing units. The relatively structured context that results from mapping mobile behaviour patterns onto P2P models is, however, constrained by the vulnerabilities of P2P networks and the inherent limitations of mobile devices: while the implementation of P2P models gives rise to security and reliability issues, the deployment of mobile devices is subject to efficiency constraints. This paper presents the development and deployment of a mobile P2P system based on distributed hash tables (DHTs), taking the secure, reliable, and efficient dispersal of files as an application. Reliability was addressed by providing two methods for file dispersal: replication and erasure coding. Security constraints were catered for by incorporating an authentication mechanism and three encryption schemes, and lightweight versions of the various algorithms were selected to meet the efficiency requirements.
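
    The abstract names replication and erasure coding as the two dispersal methods but gives no implementation detail. The Python sketch below is only an illustration of the trade-off between them, using a single XOR parity fragment as a stand-in for the paper's (unspecified) erasure code and SHA-1 content hashes as hypothetical DHT keys; none of these names come from the paper.

```python
import hashlib
from functools import reduce

def replicate(data: bytes, n: int) -> list[bytes]:
    """Replication: each of n peers stores a full copy, so any one
    surviving peer can restore the file (storage cost: n * |data|)."""
    return [data] * n

def xor_parity_disperse(data: bytes, k: int) -> list[bytes]:
    """Toy erasure code: k data fragments plus one XOR parity
    fragment; any k of the k+1 fragments rebuild the file, at a
    storage cost of only (k+1)/k * |data|. A real deployment would
    use Reed-Solomon codes to tolerate more than one loss."""
    size = -(-len(data) // k)  # ceil(len(data) / k)
    frags = [data[i * size:(i + 1) * size].ljust(size, b"\0")
             for i in range(k)]
    parity = bytes(reduce(lambda x, y: x ^ y, col) for col in zip(*frags))
    return frags + [parity]

def recover(survivors: list[bytes]) -> bytes:
    """Rebuild the single missing fragment by XOR-ing the survivors
    (works whether the lost fragment was data or parity)."""
    return bytes(reduce(lambda x, y: x ^ y, col) for col in zip(*survivors))

def dht_key(fragment: bytes) -> str:
    """Fragments are placed in the DHT under their content hash."""
    return hashlib.sha1(fragment).hexdigest()

frags = xor_parity_disperse(b"mobile P2P file dispersal demo!!", k=4)
lost = frags.pop(1)            # simulate one unreachable peer
assert recover(frags) == lost  # rebuilt from the remaining four
```

    Replication survives more failures per stored byte only at small scale; the erasure-coded variant reaches comparable availability with far less storage, which matters under the device constraints the paper targets.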

    Network security mechanisms and implementations for the next generation reliable fast data transfer protocol - UDT

    TCP protocol variants (such as FAST, BIC, XCP, Scalable, and High Speed) have demonstrated improved performance in simulation and in several limited network experiments. However, practical use of these protocols is still very limited because of implementation and installation difficulties. Users who need to transfer bulk data (e.g., in cloud/grid computing) usually turn to application-level solutions, where these variants do not fare well. Among the protocols considered at the application level are User Datagram Protocol (UDP)-based protocols such as UDT (UDP-based Data Transfer Protocol). UDT is one of the most recently developed transport protocols with congestion-control algorithms. It was developed to support next-generation high-speed networks, including wide-area optical networks, and is considered a state-of-the-art protocol addressing infrastructure requirements for transmitting data over high-speed networks. Its development, however, creates new vulnerabilities because, like many other protocols, it relies solely on the existing security mechanisms of current protocols such as the Transmission Control Protocol (TCP) and UDP. Both UDT and the decades-old TCP/UDP lack a well-thought-out security architecture that addresses the problems of today's networks.

    In this dissertation, we investigate UDT security issues and offer important contributions to the field of network security. The choice of UDT is significant for several reasons: as a newly designed next-generation protocol, UDT is considered one of the most promising and fastest protocols operating on top of UDP. It is a reliable, UDP-based, application-level data-transport protocol intended for data-intensive applications over wide-area high-speed networks. It can transfer data within a highly configurable framework and can accommodate various congestion-control algorithms. Its proven success at transferring terabytes of data gathered from outer space across long distances is a testament to its significant commercial promise. Our objective is to examine a range of security methods used on existing mature protocols such as TCP and UDP and to evaluate their viability for UDT. We highlight the security limitations of UDT and determine the threshold of feasible security schemes within the constraints under which UDT was designed and developed. Subsequently, we provide ways of securing applications and traffic that use the UDT protocol, and we offer recommendations for securing UDT. We create security mechanisms tailored for UDT and propose a new security architecture that can assist network designers, security investigators, and users who want to incorporate security when implementing UDT across wide-area networks. We then conduct practical experiments on UDT using our security mechanisms and explore the use of other existing TCP/UDP security mechanisms with UDT. To analyse the security mechanisms, we carry out a formal proof of correctness using Protocol Composition Logic (PCL) to help determine their applicability. This approach is modular, comprising a separate proof of each protocol section and providing insight into the network environment in which each section can be reliably employed. Moreover, the proof holds for a variety of failure-recovery strategies and other implementation and configuration options.

    We derive our technique from the PCL analyses of TLS and Kerberos in the literature, while maintaining the novelty of our work for UDT, particularly our newly developed mechanisms UDT-AO, UDT-DTLS, and UDT-Kerberos (GSS-API), which together form our proposed UDT security architecture. We further analyse this architecture using rewrite systems and automata, and we outline and use a symbolic analysis approach to verify the proposed architecture effectively. This approach allows dataflow replication in the implementation of selected mechanisms integrated into the architecture; by using the properties of the rewrite systems to represent specific flows within the architecture, it provides a theoretical and reliable method for performing the analysis. We introduce abstract representations of the components that compose the architecture and conduct our investigation through structural, semantic, and query analyses. The result of this work, the first in the literature, is a more robust theoretical and practical representation of a UDT security architecture that is viable for other high-speed network protocols.
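
    The abstract names UDT-AO only by analogy; its actual design is not given here. As a rough, hypothetical illustration of what per-packet authentication layered on a UDP-based transport can look like (modelled loosely on TCP-AO's keyed-MAC idea, not on the dissertation's mechanism), consider:

```python
import hmac
import hashlib
import struct

MAC_LEN = 32  # HMAC-SHA256 tag length in bytes

def seal(key: bytes, seq: int, payload: bytes) -> bytes:
    """Append a MAC computed over (sequence number || payload); the
    sequence number binds the tag to its position in the stream,
    blocking straightforward replay of recorded datagrams."""
    header = struct.pack("!I", seq)
    mac = hmac.new(key, header + payload, hashlib.sha256).digest()
    return header + payload + mac

def unseal(key: bytes, datagram: bytes) -> tuple[int, bytes]:
    """Verify and strip the MAC; raise ValueError on tampering."""
    header, payload = datagram[:4], datagram[4:-MAC_LEN]
    mac = datagram[-MAC_LEN:]
    expected = hmac.new(key, header + payload, hashlib.sha256).digest()
    if not hmac.compare_digest(mac, expected):
        raise ValueError("datagram failed authentication")
    (seq,) = struct.unpack("!I", header)
    return seq, payload

key = b"pre-shared-demo-key"
seq, data = unseal(key, seal(key, 7, b"UDT application payload"))
assert (seq, data) == (7, b"UDT application payload")
```

    The other two named mechanisms sit at different layers: DTLS (as in UDT-DTLS) would encrypt and authenticate the datagrams below the application, and Kerberos via GSS-API would replace the pre-shared key above with ticket-based key establishment.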

    Design and implementation of a testbed using IoT and P2P technologies: improving reliability by a fuzzy-based approach

    The internet of things (IoT) is a new type of internet application which enables objects to be active participants alongside the other members of the network. In P2P systems, each peer has to obtain information about other peers and propagate it through neighbouring peers; in reality, however, a peer might be faulty or might send incorrect information. In our previous work, we implemented a P2P platform called JXTA-overlay, which provides a set of basic functionalities (primitives) intended to be as complete as possible to satisfy the needs of most JXTA-based applications. In this paper, we present the implementation of a testbed using IoT and P2P technologies, together with two fuzzy-based peer reliability systems (FPRS1 and FPRS2) that improve the reliability of the proposed approach. FPRS2 is more complex than FPRS1, but it makes the platform more reliable.
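
    The abstract does not spell out the FPRS rule bases, so the following is only a minimal sketch of the general shape of such a fuzzy peer-reliability system; the two inputs (interaction success rate and response latency), the membership ranges, and the rule outputs are all invented for illustration.

```python
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def peer_reliability(success_rate: float, latency_ms: float) -> float:
    """Toy two-input fuzzy inference: fuzzify inputs, fire min-AND
    rules, then defuzzify as a weighted average of rule outputs."""
    good_history = tri(success_rate, 0.5, 1.0, 1.5)
    bad_history = tri(success_rate, -0.5, 0.0, 0.5)
    fast = tri(latency_ms, -100.0, 0.0, 200.0)
    slow = tri(latency_ms, 100.0, 400.0, 10_000.0)
    rules = [
        (min(good_history, fast), 0.9),  # good peer, responsive -> reliable
        (min(good_history, slow), 0.6),  # good but sluggish
        (min(bad_history, fast), 0.4),   # responsive but flaky
        (min(bad_history, slow), 0.1),   # flaky and sluggish -> unreliable
    ]
    total = sum(w for w, _ in rules)
    return sum(w * out for w, out in rules) / total if total else 0.0

print(peer_reliability(0.95, 50.0))  # high score: forward data via this peer
```

    A second system with more inputs and a larger rule base (the FPRS1/FPRS2 split) trades inference cost for finer peer discrimination, which matches the paper's complexity-versus-reliability observation.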

    Cloud scheduling optimization: a reactive model to enable dynamic deployment of virtual machine instantiations

    This study proposes a model to support the decision-making process of the cloud policy for deploying virtual machines in cloud environments. We explore two configurations: a static case, in which virtual machines are generated according to the cloud orchestration, and a dynamic case, in which virtual machines are reactively adapted to job submissions, using migration, to optimise performance time metrics. We integrate both solutions in the same simulator to measure the performance of various combinations of virtual machines, jobs, and hosts in terms of average execution time and total simulation time. We conclude that the dynamic configuration is advantageous, as it offers optimised job-execution performance.
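
    As a toy illustration of the static/dynamic contrast (the model, policies, and numbers below are hypothetical, not the paper's simulator), a least-loaded placement rule standing in for reactive migration can be compared against a fixed assignment decided up front:

```python
import random

def simulate(jobs: list[float], n_vms: int, dynamic: bool) -> float:
    """Toy makespan model: each VM accumulates the durations of the
    jobs placed on it; the slowest VM bounds total completion time."""
    queues = [0.0] * n_vms
    for i, duration in enumerate(jobs):
        if dynamic:
            # Reactive policy: place on the least-loaded VM, standing
            # in for migrating work toward idle capacity on arrival.
            target = min(range(n_vms), key=queues.__getitem__)
        else:
            # Static policy: round-robin fixed by the orchestrator,
            # oblivious to the actual load at submission time.
            target = i % n_vms
        queues[target] += duration
    return max(queues)

random.seed(1)
jobs = [random.expovariate(1 / 20) for _ in range(200)]  # skewed durations
print("static  makespan:", round(simulate(jobs, 8, dynamic=False), 1))
print("dynamic makespan:", round(simulate(jobs, 8, dynamic=True), 1))
```

    With skewed job durations the reactive policy typically yields a noticeably shorter makespan, which is the qualitative effect the study reports for its dynamic configuration.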

    Meta-scheduling Issues in Interoperable HPCs, Grids and Clouds

    Over the last few years, interoperability among resources has emerged as one of the most challenging research topics. Yet the complexity common to the architectures (e.g., heterogeneity) and the target that each computational paradigm, including HPC, grids, and clouds, aims to achieve (e.g., flexibility) remain the same: to efficiently orchestrate resources in a distributed computing fashion by bridging the gap between local and remote participants. This is closely related to scheduling, one of the most important issues in designing a cooperative resource management system, especially in large-scale settings such as grids and clouds. Within this context, meta-scheduling offers additional functionality in interoperable resource management because of its agility in handling sudden variations and dynamic situations in user demand. Accordingly, for inter-infrastructures, including InterCloud, a decentralised meta-scheduling scheme can overcome issues such as consolidated administration management, bottlenecks, and local information exposure. In this work, we detail the fundamental issues in developing an effective interoperable meta-scheduler for e-infrastructures in general and InterCloud in particular. Finally, we describe a simulation and experimental configuration based on real grid workload traces to demonstrate the interoperable setting, and we provide experimental results as part of a strategic plan for integrating future meta-schedulers.
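
    To make the decentralised meta-scheduling idea concrete, here is a hypothetical sketch (not the authors' system): each site keeps only local state, exposes a single aggregate load figure to its neighbours, and forwards a job only when a neighbour advertises lower load, so no central coordinator or full information exposure is needed.

```python
from dataclasses import dataclass, field

@dataclass
class Site:
    """One administrative domain (an HPC centre, a grid, or a cloud)."""
    name: str
    load: float = 0.0
    neighbours: list["Site"] = field(default_factory=list)

    def advertised_load(self) -> float:
        # Only an aggregate figure is exposed, not internal queues,
        # which sidesteps the local-information-exposure issue.
        return self.load

    def submit(self, job_cost: float) -> "Site":
        """Decentralised meta-scheduling: run locally unless some
        neighbour advertises lower load even after taking the job."""
        best = min(self.neighbours, key=Site.advertised_load, default=self)
        target = best if best.advertised_load() + job_cost < self.load else self
        target.load += job_cost
        return target

# Three interoperable sites in a full mesh; any site accepts submissions.
a, b, c = Site("HPC"), Site("Grid"), Site("Cloud")
a.neighbours, b.neighbours, c.neighbours = [b, c], [a, c], [a, b]
for cost in [5.0, 5.0, 5.0, 2.0, 8.0]:
    placed = a.submit(cost)
    print(f"job({cost}) -> {placed.name}")
```

    Jobs spill over to neighbouring infrastructures only once the local site becomes comparatively loaded, avoiding both a central bottleneck and a single point of administrative control.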

    Centralized micro-clouds: an infrastructure for service distribution in collaborative smart devices

    In the current information-driven society, the massive use and impact of communications and mobile devices challenge the design of communication networks. This highlights the emergence of a new Internet structure, the Internet of Things, which refers to the transformation of physical objects into smart objects and to the communication among them. The communication of such objects will offer an augmented infrastructure that is formed dynamically and on the fly from transient links among objects. This matches the concept behind cloud computing: to provide a computer-based environment in which various services are available to everyday users anywhere and at any time. Our vision encompasses a dynamic micro-cloud environment formed from devices that share computational power, inter-linking smart objects and smart mobile devices available in a smart environment that can be formed dynamically. The proposed micro-cloud notion will be of particular significance for maintaining the required quality of service in dynamic scenarios such as emergency and disaster situations. To represent such a system, we focus on developing this architecture into a novel simulation toolkit that allows the replication of Internet of Things scenarios.
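
    As a minimal sketch of the micro-cloud formation this vision describes (all names, fields, and thresholds below are invented, not the authors' architecture), devices that share computational power could be pooled greedily until a service's demand is covered:

```python
from dataclasses import dataclass

@dataclass
class SmartDevice:
    name: str
    cpu_mips: int     # computational power the device offers to share
    reachable: bool   # transient link state at this instant

def form_micro_cloud(devices: list[SmartDevice],
                     demand_mips: int) -> list[SmartDevice] | None:
    """Greedily pool reachable devices, largest contributors first,
    until aggregate capacity covers the demand. Membership is meant
    to be recomputed whenever links appear or vanish, reflecting the
    transient, on-the-fly topology the vision describes."""
    pool, capacity = [], 0
    for dev in sorted(devices, key=lambda d: -d.cpu_mips):
        if not dev.reachable:
            continue
        pool.append(dev)
        capacity += dev.cpu_mips
        if capacity >= demand_mips:
            return pool
    return None  # insufficient nearby capacity; degrade the service

nearby = [SmartDevice("phone-1", 800, True),
          SmartDevice("tablet-1", 1200, True),
          SmartDevice("sensor-1", 50, False)]
print(form_micro_cloud(nearby, demand_mips=1500))
```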

    A study of migration models for reducing power consumption in server clusters

    Doctoral thesis (Doctor of Engineering), Hosei University.