
    Network Coding for Distributed Cloud, Fog and Data Center Storage

    An extensive research survey on data integrity and deduplication towards privacy in cloud storage

    Owing to the highly distributed nature of cloud storage systems, incorporating a high degree of security for vulnerable data is a challenging task. Among the various security concerns, data privacy remains one of the unsolved problems in this regard. The prime reason is that existing approaches to data privacy do not offer data integrity and secure data deduplication at the same time, both of which are essential to ensure strong resistance against all forms of dynamic threats over cloud and Internet systems. Data integrity and data deduplication are therefore closely associated phenomena that influence data privacy. This manuscript discusses the explicit research contributions toward data integrity, data privacy, and data deduplication. It also highlights the potential open research issues, followed by a discussion of possible future directions of work toward addressing the existing problems.
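
    The survey itself does not fix a concrete construction, but a common building block that reconciles deduplication with encrypted storage is convergent encryption, where the encryption key is derived from the file content so that identical files produce identical ciphertexts. The sketch below is only an illustration of that idea under assumed names; it uses the third-party Python cryptography package and a content hash as a combined deduplication index and integrity tag.

        # Illustrative convergent-encryption sketch (an assumption for this survey topic,
        # not a scheme taken from the surveyed papers). Requires the 'cryptography' package.
        import hashlib
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM

        def convergent_encrypt(plaintext: bytes):
            key = hashlib.sha256(plaintext).digest()               # content-derived key
            nonce = hashlib.sha256(b"nonce" + key).digest()[:12]   # deterministic 96-bit nonce
            ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
            tag = hashlib.sha256(ciphertext).hexdigest().encode()  # dedup/integrity index
            return key, ciphertext, tag

        def store(cloud: dict, plaintext: bytes) -> bytes:
            key, ciphertext, tag = convergent_encrypt(plaintext)
            cloud.setdefault(tag, ciphertext)       # duplicate uploads collapse to one copy
            return key                              # the owner keeps (or wraps) the key

        def verify(cloud: dict, tag: bytes) -> bool:
            # Integrity check: the stored ciphertext must still hash to its index.
            return hashlib.sha256(cloud[tag]).hexdigest().encode() == tag

        cloud = {}
        k1 = store(cloud, b"same report")
        k2 = store(cloud, b"same report")           # deduplicated upload
        print(len(cloud), k1 == k2, all(verify(cloud, t) for t in cloud))  # 1 True True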

    A Survey on Load Balancing Algorithms for VM Placement in Cloud Computing

    The emergence of cloud computing based on virtualization technologies brings huge opportunities to host virtual resources at low cost without the need to own any infrastructure. Virtualization technologies enable users to acquire and configure resources and be charged on a pay-per-use basis. However, cloud data centers mostly comprise heterogeneous commodity servers hosting multiple virtual machines (VMs) with potentially varied specifications and fluctuating resource usage, which may cause imbalanced resource utilization within servers and lead to performance degradation and service-level agreement (SLA) violations. To achieve efficient scheduling, these challenges should be addressed and solved using load balancing strategies, a problem that has been proven to be NP-hard. From multiple perspectives, this work identifies the challenges and analyzes existing algorithms for allocating VMs to PMs in infrastructure clouds, with a particular focus on load balancing. A detailed classification targeting load balancing algorithms for VM placement in cloud data centers is presented, and the surveyed algorithms are classified accordingly. The goal of this paper is to provide a comprehensive and comparative understanding of the existing literature and to aid researchers by providing insight for potential future enhancements. Comment: 22 pages, 4 figures, 4 tables; in press.
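
    As one concrete illustration of the placement problem the survey targets, the sketch below shows a simple greedy load-balancing baseline (an assumption, not an algorithm taken from the survey): VMs are sorted by demand and each is assigned to the feasible physical machine whose utilization after placement would be lowest.

        # Minimal greedy load-balancing VM placement sketch; a single CPU dimension is assumed.
        from dataclasses import dataclass, field

        @dataclass
        class PM:
            capacity: float
            used: float = 0.0
            vms: list = field(default_factory=list)

            def utilization_after(self, demand: float) -> float:
                return (self.used + demand) / self.capacity

        def place_vms(vm_demands: list, pms: list) -> None:
            for demand in sorted(vm_demands, reverse=True):        # largest VMs first
                feasible = [p for p in pms if p.used + demand <= p.capacity]
                if not feasible:
                    raise RuntimeError("no PM can host this VM")
                target = min(feasible, key=lambda p: p.utilization_after(demand))
                target.used += demand
                target.vms.append(demand)

        pms = [PM(16.0), PM(16.0), PM(8.0)]
        place_vms([4.0, 6.0, 2.0, 3.0, 5.0], pms)
        print([round(p.used / p.capacity, 2) for p in pms])        # balanced: [0.5, 0.5, 0.5]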

    Design and optimization of optical grids and clouds

    Development in Key Share Management to Protect Data over Cloud

    User data may be stored in a cloud to take advantage of its scalability, accessibility, and economics. However, data of a sensitive nature must be protected from being read in the clear by an untrusted cloud provider. This has triggered a large body of research, resulting in numerous proposals targeting the various cloud security threats. A key management scheme is proposed in which encrypted key shares are stored in the cloud and automatically deleted based on the passage of time or user activity. The process does not require additional coordination by the data owner, which is an advantage for the very large population of resource-constrained mobile users. The rate of expiration may be controlled through the initial allocation of shares and the heuristics for removal. A simulation of the scheme, as well as its implementation on commercial mobile and cloud platforms, demonstrates its practical performance.
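
    The abstract does not detail the share construction, so the sketch below illustrates the general idea with the simplest possible stand-in: an n-of-n XOR split in which every share is needed to rebuild the data key, stored with per-share expiry times so that the key becomes unrecoverable once any share is purged. Names and the time-to-live policy are assumptions for illustration only.

        # Toy time-limited key shares; an XOR n-of-n split stands in for the paper's scheme.
        import os, time

        def xor_bytes(a: bytes, b: bytes) -> bytes:
            return bytes(x ^ y for x, y in zip(a, b))

        def split_key(key: bytes, n: int) -> list:
            shares = [os.urandom(len(key)) for _ in range(n - 1)]
            last = key
            for s in shares:                       # key = XOR of all n shares
                last = xor_bytes(last, s)
            return shares + [last]

        def recover_key(shares: list) -> bytes:
            key = shares[0]
            for s in shares[1:]:
                key = xor_bytes(key, s)
            return key

        class ShareStore:
            """Toy cloud store that forgets shares once their time-to-live expires."""
            def __init__(self):
                self._shares = {}                  # share_id -> (share, expiry time)

            def put(self, share_id: str, share: bytes, ttl_seconds: float) -> None:
                self._shares[share_id] = (share, time.time() + ttl_seconds)

            def live_shares(self) -> list:
                now = time.time()                  # purge expired shares lazily
                self._shares = {k: v for k, v in self._shares.items() if v[1] > now}
                return [s for s, _ in self._shares.values()]

        data_key = os.urandom(32)
        cloud = ShareStore()
        for i, share in enumerate(split_key(data_key, 3)):
            cloud.put(f"share-{i}", share, ttl_seconds=3600 * (i + 1))  # staggered expiry
        print(recover_key(cloud.live_shares()) == data_key)             # True while all shares live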

    Resource allocation in disaggregated optical networks

    The recently introduced disaggregation model is gaining interest due to its benefits when compared with traditional models. In essence, it consists of the separation of traditional hardware appliances (e.g., servers, network nodes) into commodity components, which are then mounted independently into customized physical infrastructures. Such an approach allows telecommunication operators and service providers to appropriately size their infrastructure and grow as needed. One of the key benefits of the disaggregation model is breaking vendor lock-in, pushing towards interoperability between equipment from different vendors with minimal standardization of software and hardware specifications, and allowing operators to build the best solutions for their needs. Moreover, efficient scaling is another important benefit introduced by the disaggregation approach. Due to these benefits, among others, the disaggregation model is gaining momentum and is being adopted in multiple fields and domains of today's telecom infrastructures. In this regard, the scenario under study in this master's thesis focuses on disaggregated optical transport networks. Disaggregation allows for more open and customized optical networks, reducing both capital and operational expenditures for infrastructure owners. However, despite these positive aspects, disaggregated optical networks face several challenges, the most important being the degradation of network performance when compared with traditional integrated solutions. In this regard, this thesis investigates the impact of disaggregation on optical networks and studies regeneration as a potential solution to compensate for the performance degradation. Under this premise, optimal solutions for regenerator placement, exploiting the inherent grooming capabilities of regenerators, are proposed and evaluated.
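
    As a small illustration of the regenerator placement problem the thesis studies, the sketch below uses a deliberately simplified length-based reach model (an assumption; it ignores grooming and the impairment models an actual formulation would use): walk a lightpath and insert a regenerator at the last node reachable before the accumulated distance exceeds the transparent reach.

        # Greedy regenerator placement over a single lightpath (simplified reach model).
        def place_regenerators(link_lengths_km: list, reach_km: float) -> list:
            """Return indices of intermediate nodes where regeneration is needed."""
            regen_nodes, accumulated = [], 0.0
            for i, length in enumerate(link_lengths_km):
                if length > reach_km:
                    raise ValueError(f"link {i} alone exceeds the transparent reach")
                if accumulated + length > reach_km:
                    regen_nodes.append(i)          # regenerate at the node before link i
                    accumulated = 0.0
                accumulated += length
            return regen_nodes

        # Example: a 5-link path with a 1500 km transparent reach.
        print(place_regenerators([600, 700, 400, 900, 500], reach_km=1500))  # -> [2, 4]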

    SDSF : social-networking trust based distributed data storage and co-operative information fusion.

    As of 2014, about 2.5 quintillion bytes of data are created each day, and 90% of the data in the world was created in the last two years alone. This data can be stored on external hard drives, on unused space in peer-to-peer (P2P) networks, or, using the currently more popular approach, in the Cloud. When users store their data in the Cloud, the entire data set is exposed to the administrators of the services, who can view and possibly misuse it. With the growing popularity and usage of Cloud storage services such as Google Drive and Dropbox, concerns about privacy and security are increasing. Searching for content or documents in this distributed stored data, given the rate of data generation, is a big challenge. Information fusion is used to extract information based on the user's query and to combine the data and learn useful information. This problem is challenging when the data sources are distributed and heterogeneous in nature and the trustworthiness of the documents may vary. This thesis proposes two innovative solutions to resolve both of these problems. Firstly, to remedy the security and privacy of stored data, we propose an innovative Social-based Distributed Data Storage and Trust based co-operative Information Fusion Framework (SDSF). The main objective is to create a framework that provides a secure storage system without overloading a single system, using a P2P-like approach. This framework allows users to share storage resources among friends and acquaintances without compromising security or privacy while enjoying all the benefits that Cloud storage offers. The system fragments the data and encodes it to securely store it on the unused storage capacity of the data owner's friends' resources. The system thus gives the user centralized control over the selection of peers to store the data. Secondly, to retrieve the stored distributed data, the proposed system also performs the fusion from distributed sources. The technique uses several algorithms to ensure the correctness of the query used to retrieve and combine the data, improving information fusion accuracy and efficiency when combining heterogeneous, distributed, and massive data on the Cloud for time-critical operations. We demonstrate that the retrieved documents are genuine when trust scores are also used while retrieving the data sources. The thesis makes several research contributions. First, we implement Social Storage using erasure coding. Erasure coding fragments the data, encodes it, and, through the introduction of redundancy, resolves issues resulting from device failures. Second, we exploit the inherent concept of trust embedded in social networks to determine the nodes and build a secure network where the fragmented data should be stored, since the social network consists of friends, family, and acquaintances. The trust between friends and the availability of the devices allow the user to make an informed choice about where the information should be stored, using 'k' optimal paths. Thirdly, for retrieving this distributed stored data, we propose information fusion on distributed data using a combination of Enhanced N-grams (to ensure correctness of the query), Semantic Machine Learning (to extract documents based on context rather than just a bag of words, also considering the trust score), and Map Reduce (NSM) algorithms. Lastly, we evaluate the performance of distributed storage in SDSF using erasure coding, identify the social storage providers based on trust, and evaluate their trustworthiness. We also evaluate the performance of our information fusion algorithms in distributed storage systems. Thus, the system using the SDSF framework implements the beneficial features of P2P networks and Cloud storage while avoiding the pitfalls of these systems. The multi-layered encryption ensures that all other users, including the system administrators, cannot decode the stored data. The application of the NSM algorithm improves the effectiveness of fusion, since a large number of genuine documents is retrieved for fusion.
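
    To make the storage side of SDSF concrete, the sketch below is a toy stand-in under assumed names: a single XOR parity fragment replaces the stronger erasure codes (e.g., Reed-Solomon) used for real deployments, and fragments are handed to the most trusted friends, so the data survives the loss of any one fragment.

        # Toy "social storage" sketch: k data fragments + 1 XOR parity, trust-ranked placement.
        def xor(a: bytes, b: bytes) -> bytes:
            return bytes(x ^ y for x, y in zip(a, b))

        def split_with_parity(data: bytes, k: int) -> list:
            size = -(-len(data) // k)                               # ceiling division
            frags = [data[i * size:(i + 1) * size].ljust(size, b"\0") for i in range(k)]
            parity = frags[0]
            for f in frags[1:]:
                parity = xor(parity, f)
            return frags + [parity]                                 # k data + 1 parity fragment

        def reconstruct(frags: list, original_len: int) -> bytes:
            present = [f for f in frags if f is not None]
            assert len(present) >= len(frags) - 1, "toy code tolerates only one lost fragment"
            if len(present) < len(frags):                           # rebuild the missing fragment
                rebuilt = present[0]
                for f in present[1:]:
                    rebuilt = xor(rebuilt, f)
                frags = [rebuilt if f is None else f for f in frags]
            return b"".join(frags[:-1])[:original_len]              # drop parity and padding

        def place_on_friends(frags: list, trust: dict) -> dict:
            # Assign each fragment to a distinct friend, most trusted first (assumed policy).
            ranked = sorted(trust, key=trust.get, reverse=True)
            return dict(zip(ranked, frags))

        data = b"quarterly-report.pdf contents ..."
        trust = {"ana": 0.9, "bo": 0.7, "cy": 0.8, "di": 0.6, "ed": 0.5}
        frags = split_with_parity(data, k=4)
        placement = place_on_friends(frags, trust)
        placement["cy"] = None                                      # one friend goes offline
        ranked = sorted(trust, key=trust.get, reverse=True)
        print(reconstruct([placement[f] for f in ranked], len(data)) == data)  # -> True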