83 research outputs found

    An Overlay Architecture for Personalized Object Access and Sharing in a Peer-to-Peer Environment

    Due to its exponential growth and decentralized nature, the Internet has evolved into a chaotic repository, making it difficult for users to discover and access resources of interest to them. As a result, users have to deal with the problem of information overload. The emergence of the Semantic Web provides Internet users with the ability to associate explicit, self-described semantics with resources. This ability will in turn facilitate the development of ontology-based resource discovery tools that help users retrieve information efficiently. However, it is widely believed that the Semantic Web of the future will be a complex web of smaller ontologies, mostly created by groups of web users who share a similar interest, referred to as Communities of Interest. This thesis proposes a solution to the information overload problem using a user-driven framework, referred to as a Personalized Web, that allows individual users to organize themselves into Communities of Interest based on ontologies agreed upon by all community members. Within this framework, users can define and augment their personalized views of the Internet by associating specific properties and attributes with resources and by defining constraint functions and rules that govern the interpretation of the semantics associated with those resources. Such views capture the users' interests and can be integrated into a user-defined Personalized Web. As a proof of concept, a Personalized Web architecture is developed that employs ontology-based semantics and a structured Peer-to-Peer overlay network to provide a foundation for semantically based resource indexing and advertising. To investigate mechanisms that support resource advertising and retrieval in the Personalized Web architecture, three agent-driven advertising and retrieval schemes, the Aggressive scheme, the Crawler-based scheme, and the Minimum-Cover-Rule scheme, were implemented and evaluated in both stable and churn environments. In addition to the development of a Personalized Web architecture that deals with typical web resources, this thesis uses a case study to explore the potential of the architecture to support future web service workflow applications. The results of this investigation demonstrate that the architecture can support the automation of service discovery, negotiation, and invocation, allowing service consumers to realize a personalized web service workflow. Further investigation will be required to improve the performance of this automation and to allow it to be performed in a secure and robust manner. To support the next-generation Internet, further exploration will also be needed to develop a Personalized Web that includes ubiquitous and pervasive resources.
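
    The abstract does not spell out the indexing mechanics, so the following is only a rough sketch, with assumed names and a dictionary standing in for the overlay, of how resources annotated with ontology concepts might be advertised and retrieved on a structured Peer-to-Peer index (loosely in the spirit of the Aggressive scheme, which advertises under every concept).

        import hashlib

        def key_for(concept: str) -> str:
            # Hash an ontology concept URI into the overlay keyspace (160-bit here).
            return hashlib.sha1(concept.encode()).hexdigest()

        class OverlayIndex:
            """Stand-in for the structured P2P overlay; a dict replaces the DHT."""
            def __init__(self):
                self.store = {}

            def advertise(self, resource_url, concepts):
                # Publish the resource under every concept it is annotated with.
                for c in concepts:
                    self.store.setdefault(key_for(c), set()).add(resource_url)

            def retrieve(self, concept):
                return self.store.get(key_for(concept), set())

        index = OverlayIndex()
        index.advertise("http://example.org/p2p-survey.pdf",       # hypothetical resource
                        ["coi:PeerToPeer", "coi:SemanticWeb"])     # hypothetical concept URIs
        print(index.retrieve("coi:SemanticWeb"))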

    A framework for the dynamic management of Peer-to-Peer overlays

    Peer-to-Peer (P2P) applications have been associated with inefficient operation, interference with other network services and large operational costs for network providers. This thesis presents a framework which can help ISPs address these issues by means of intelligent management of peer behaviour. The proposed approach involves limited control of P2P overlays without interfering with the fundamental characteristics of peer autonomy and decentralised operation. At the core of the management framework lies the Active Virtual Peer (AVP). Essentially intelligent peers operated by the network providers, the AVPs interact with the overlay from within, minimising redundant or inefficient traffic, enhancing overlay stability and facilitating the efficient and balanced use of available peer and network resources. They offer an "insider's" view of the overlay and permit the management of P2P functions in a compatible and non-intrusive manner. AVPs can support multiple P2P protocols and coordinate to perform functions collectively. To account for the multi-faceted nature of P2P applications and to allow the incorporation of modern techniques and protocols as they appear, the framework is based on a modular architecture. Core modules for overlay control and transit traffic minimisation are presented. Towards the latter, a number of suitable P2P content caching strategies are proposed. Using a purpose-built P2P network simulator and small-scale experiments, it is demonstrated that the introduction of AVPs inside the network can significantly reduce inter-AS traffic, minimise costly multi-hop flows, increase overlay stability and load balancing, and offer improved peer transfer performance.
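
    As an illustration of the transit-traffic idea (the caching policy and sizes below are assumptions, not the thesis design), an AVP placed inside an ISP could answer overlay queries from a local cache and only fall back to a costly inter-AS fetch on a miss.

        from collections import Counter

        class AVPCache:
            """Toy Active-Virtual-Peer-style cache serving content from inside the AS."""
            def __init__(self, capacity=100):
                self.capacity = capacity
                self.objects = {}        # content_id -> payload held inside the AS
                self.hits = Counter()    # request counts, used for LFU eviction

            def handle_query(self, content_id, fetch_remote):
                self.hits[content_id] += 1
                if content_id in self.objects:
                    return self.objects[content_id]     # intra-AS hit, no transit traffic
                data = fetch_remote(content_id)         # costly inter-AS transfer
                if len(self.objects) >= self.capacity:
                    coldest = min(self.objects, key=lambda c: self.hits[c])
                    del self.objects[coldest]
                self.objects[content_id] = data
                return data

        cache = AVPCache(capacity=2)
        print(cache.handle_query("chunk-17", lambda cid: b"payload of " + cid.encode()))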

    A Location-Aware Middleware Framework for Collaborative Visual Information Discovery and Retrieval

    This work addresses the problem of scalable location-aware distributed indexing to enable the leveraging of collaborative effort for the construction and maintenance of world-scale visual maps and models, which could support numerous activities including navigation, visual localization, persistent surveillance, structure from motion, and hazard or disaster detection. Current distributed approaches to mapping and modeling fail to incorporate global geospatial addressing and are limited in their ability to customize search. Our solution is a peer-to-peer middleware framework based on XOR-distance routing which employs a Hilbert space-filling curve addressing scheme in a novel distributed geographic index. This allows for a universal addressing scheme supporting publish and search in dynamic environments while ensuring global availability of the model and scalability with respect to geographic size and number of users. The framework is evaluated using large-scale network simulations and a search application that supports visual navigation in real-world experiments.
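
    The abstract names the two building blocks, a Hilbert curve address and XOR-distance routing, without giving parameters; the sketch below, with an assumed grid order and the textbook Hilbert mapping, shows how a geotag could be turned into an overlay key and assigned to the XOR-closest node.

        ORDER = 16                  # assumed grid order: a (2^16 x 2^16) cell grid over the globe
        N = 1 << ORDER

        def xy2d(n, x, y):
            # Classic Hilbert-curve mapping: grid cell (x, y) -> distance d along the curve.
            d, s = 0, n // 2
            while s > 0:
                rx = 1 if (x & s) > 0 else 0
                ry = 1 if (y & s) > 0 else 0
                d += s * s * ((3 * rx) ^ ry)
                if ry == 0:                       # rotate/flip the quadrant
                    if rx == 1:
                        x, y = n - 1 - x, n - 1 - y
                    x, y = y, x
                s //= 2
            return d

        def geo_key(lat, lon):
            # Quantize latitude/longitude onto the grid, then linearize with the Hilbert curve.
            x = int((lon + 180.0) / 360.0 * (N - 1))
            y = int((lat + 90.0) / 180.0 * (N - 1))
            return xy2d(N, x, y)

        def responsible_node(key, node_ids):
            # XOR-distance routing: the node whose ID is XOR-closest to the key stores the entry.
            return min(node_ids, key=lambda nid: nid ^ key)

        print(responsible_node(geo_key(48.8584, 2.2945), [0x1F2A, 0x9C01, 0x7777]))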

    An Indexation and Discovery Architecture for Semantic Web Services and its Application in Bioinformatics

    Recently, much research effort has been devoted to the discovery of relevant Web services. It is widely recognized that adding semantics to service descriptions is the solution to this challenge. Web services with explicit semantic annotation are called Semantic Web Services (SWS). This research proposes an indexation and discovery architecture for SWS, together with a prototype application in the area of bioinformatics. In this approach, an SWS repository is created and maintained by crawling both ontology-oriented UDDI registries and Web sites that host SWS. For a given service request, the proposed system invokes the matching algorithm and returns a candidate set in which different degrees of matching are considered. This approach adds flexibility to the current industry standards by offering more choices to both service requesters and publishers. The prototype developed in this research also shows the value that can be added by using SWS in application areas such as bioinformatics.
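
    The abstract mentions returning candidates with different degrees of matching but not the algorithm itself; a common degree-of-match ladder from the SWS literature (exact, plug-in, subsumes, fail), evaluated over a made-up bioinformatics ontology fragment, would look roughly like this:

        SUBCLASS_OF = {                      # hypothetical ontology fragment
            "ProteinSequence": "Sequence",
            "DNASequence": "Sequence",
            "Sequence": "BioData",
        }

        def ancestors(concept):
            chain = []
            while concept in SUBCLASS_OF:
                concept = SUBCLASS_OF[concept]
                chain.append(concept)
            return chain

        def degree_of_match(advertised, requested):
            # Rank how well a service's advertised output concept fits the request.
            if advertised == requested:
                return "exact"
            if advertised in ancestors(requested):
                return "plug-in"      # advertised concept is more general, covers the request
            if requested in ancestors(advertised):
                return "subsumes"     # advertised concept is more specific than requested
            return "fail"

        print(degree_of_match("Sequence", "ProteinSequence"))   # plug-in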

    Blocks' Network: Redesign Architecture based on Blockchain Technology

    The Internet is a global network that uses communication protocols. It is considered the most important system humanity has achieved, and one that no one can abandon. However, this technology has become a weapon that threatens the privacy of users, especially in the client-server model, where data is stored and managed privately. Additionally, users have no power over the data stored on a private server, which means users' data may be intercepted by a government or sold by the service provider for profit. Blockchain is a technology that, if appropriately used, can solve the issues of the client-server model. However, blockchain technology uses a consensus protocol, which creates an incontrovertible system of agreement among members across a distributed network. The consensus protocol deliberately slows all members down from generating blocks too fast in order to control the network's creation pattern, which leads to scalability and latency problems. The proposed system presents a platform that leverages a modernized blockchain called Blocks' Network. The system takes into consideration the privacy and confidentiality issues of the client-server model and the scalability and latency issues of blockchain technology. Blocks' Network is a public, permissioned network that uses a multi-dimensional hash to generate multiple chains in a systematic way. The proposed platform is an assembly point for users to create a decentralized network using P2P protocols. The system has a high data flow due to frequent use by participants (for example, use of the Internet), and it stores all network traffic openly on Blocks' Network.
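
    The paper does not define its multi-dimensional hash in this abstract, so the fragment below is only a rough illustration of the underlying idea: sharding records across several parallel hash-linked chains so that no single chain becomes the scalability and latency bottleneck.

        import hashlib, json, time

        K = 4                                # assumed number of parallel chains
        chains = [[{"prev": "0" * 64, "data": "genesis"}] for _ in range(K)]

        def digest(obj):
            return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

        def append_record(record):
            # Pick a chain from the record's hash, then link the new block to that chain's tip.
            i = int(digest(record), 16) % K
            chains[i].append({"prev": digest(chains[i][-1]),
                              "time": time.time(),
                              "data": record})
            return i

        print(append_record({"user": "alice", "payload": "traffic-entry-1"}))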

    Large-scale sensor-rich video management and delivery

    Ph.D. (Doctor of Philosophy)

    PEER TO PEER DIGITAL RIGHTS MANAGEMENT USING BLOCKCHAIN

    Content distribution networks deliver content such as videos, apps, and music to users through servers deployed in multiple datacenters to increase the availability and delivery speed of content. The motivation of this work is to create a content distribution network that maintains a consumer's rights and access to works they have purchased indefinitely. If a user purchases content from a traditional content distribution network, they lose access to the content when the service is no longer available. The system uses a peer-to-peer network for content distribution along with a blockchain for digital rights management. This combination may give users indefinite access to purchased works. The system also benefits content rights owners because they can sell their content at lower cost by distributing costs among the community of peers.
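
    As a hedged sketch (the record fields and names are assumptions, not the paper's design), the interaction reduces to a peer consulting an append-only purchase ledger, standing in for the blockchain, before serving a chunk from the peer-to-peer network.

        purchase_ledger = set()        # (buyer_id, content_id) pairs recorded on-chain

        def record_purchase(buyer_id, content_id):
            # In the real system this would be a blockchain transaction, not a set insert.
            purchase_ledger.add((buyer_id, content_id))

        def serve_chunk(peer_store, buyer_id, content_id, chunk_index):
            # Any peer can verify rights against the shared ledger before uploading.
            if (buyer_id, content_id) not in purchase_ledger:
                raise PermissionError("no purchase record on the ledger")
            return peer_store[content_id][chunk_index]

        record_purchase("alice", "album-42")
        print(serve_chunk({"album-42": [b"chunk0", b"chunk1"]}, "alice", "album-42", 1))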

    OverCite: a cooperative digital research library

    Thesis (S.M.) -- Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005. Includes bibliographical references (leaves 47-50). CiteSeer is a well-known online resource for the computer science research community, allowing users to search and browse a large archive of research papers. Unfortunately, its current centralized incarnation is costly to run. Although members of the community would presumably be willing to donate hardware and bandwidth at their own sites to assist CiteSeer, the current architecture does not facilitate such distribution of resources. OverCite is a design for a new architecture for a distributed and cooperative research library based on a distributed hash table (DHT). The new architecture harnesses donated resources at many sites to provide document search and retrieval service to researchers worldwide. A preliminary evaluation of an initial OverCite prototype shows that it can service more queries per second than a centralized system, and that it increases total storage capacity by a factor of n/4 in a system of n nodes. OverCite can exploit these additional resources by supporting new features such as document alerts, and by scaling to larger data sets. By Jeremy Stribling (S.M.).
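
    The prototype's real layout is described in the thesis itself; as a minimal sketch under assumptions (consistent hashing onto a ring, with a replication factor of 4 chosen only to mirror the n/4 figure), storing a paper keyed by its content hash might look like this:

        import hashlib
        from bisect import bisect_left

        R = 4                                         # assumed replication factor
        node_ids = sorted(int(hashlib.sha1(f"node{i}".encode()).hexdigest(), 16)
                          for i in range(32))         # 32 toy node IDs on the ring

        def responsible_nodes(doc_bytes):
            # Key the document by its content hash and replicate to R successors on the ring.
            key = int(hashlib.sha1(doc_bytes).hexdigest(), 16)
            start = bisect_left(node_ids, key) % len(node_ids)
            return [node_ids[(start + k) % len(node_ids)] for k in range(R)]

        print(responsible_nodes(b"%PDF-1.4 example paper bytes"))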

    A Content Delivery Model for Online Video

    Online video accounts for a large and growing portion of all Internet traffic. In order to cut bandwidth costs, it is necessary to use the available bandwidth of users to offload video downloads. Assuming that users can only keep and distribute one video at any given time, the goal is to determine the global user cache distribution that achieves maximum peer traffic. The system model contains three different parties: viewers, idlers and servers. Viewers are peers who are currently viewing a video. Idlers are peers who are not currently viewing a video but are available to upload to others. Finally, servers can upload any video to any user and have infinite capacity. Every video maintains a first-in-first-out viewer queue which contains all the viewers for that video. Each viewer downloads from the peer that arrived before it, with the earliest-arriving peer downloading from the server. Thus, the server must upload to one peer whenever the viewer queue is not empty. The aim of the idlers is to act as a server for a particular video, thereby eliminating all server traffic for that video. By using the popularity of videos, the number of idlers and some assumptions on the viewer arrival process, the optimal global video distribution in the user caches can be determined.
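
    The queue discipline described above translates almost directly into code; the sketch below (the data structures are illustrative choices, not the paper's) returns the upload source for each arriving viewer: the previous arrival, an idler caching the video, or the server.

        from collections import defaultdict, deque

        viewer_queues = defaultdict(deque)     # video_id -> FIFO queue of viewer ids
        idler_cache = {}                       # video_id -> idler currently caching that video

        def join(video_id, viewer_id):
            # Return the upload source for a newly arrived viewer of video_id.
            q = viewer_queues[video_id]
            if q:
                source = q[-1]                     # download from the previous arrival
            elif video_id in idler_cache:
                source = idler_cache[video_id]     # an idler eliminates server traffic
            else:
                source = "server"                  # otherwise the server feeds the queue head
            q.append(viewer_id)
            return source

        print(join("video-7", "peer-A"))   # 'server' (no idler caches video-7 yet)
        print(join("video-7", "peer-B"))   # 'peer-A'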

    A formal method for rule analysis and validation in distributed data aggregation service

    The usage of Cloud Services has increased rapidly in recent years. Data management systems behind any Cloud Service are a major concern when it comes to scalability, flexibility and reliability, because they are implemented in a distributed way. A Distributed Data Aggregation Service relying on a storage system meets these demands and serves as a repository back-end for complex analysis and automatic mining of any type of data. In this paper we continue our previous work on data management in Cloud storage. We present a formal approach to expressing retrieval and aggregation rules with a compact yet powerful tool called Rule Markup Language. Our extended solution proposes a standard form for schemas and uses the tool to match the rules to the XML form of the structured data in order to obtain the unstructured entries from the BlobSeer data storage system. This allows the Distributed Data Aggregation Service (DDAS) to bypass several steps when processing a retrieval request. Our new architecture is more loosely coupled, with a separate module, the new tool, used for
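
    The paper's Rule Markup Language and BlobSeer integration are not reproduced here; purely as an illustration of matching a retrieval rule against the XML form of structured metadata, a rule reduced to a tag/value test can be evaluated with Python's standard ElementTree module.

        import xml.etree.ElementTree as ET

        metadata = ET.fromstring(
            "<entries>"
            "  <entry id='b1'><type>log</type><size>2048</size></entry>"
            "  <entry id='b2'><type>image</type><size>512</size></entry>"
            "</entries>")

        def apply_rule(tree, tag, value):
            # Rule: select every entry whose child <tag> equals the given value.
            return [e.get("id") for e in tree.findall("entry")
                    if e.findtext(tag) == value]

        print(apply_rule(metadata, "type", "log"))    # ['b1']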