487 research outputs found

    Efficient and adaptive congestion control for heterogeneous delay-tolerant networks

    Get PDF
    Detecting and dealing with congestion in delay-tolerant networks (DTNs) is an important and challenging problem. Current DTN forwarding algorithms typically direct traffic towards more central nodes in order to maximise delivery ratios and minimise delays, but as traffic demands increase these nodes may become saturated and unusable. We propose CafRep, an adaptive congestion-aware protocol that detects and reacts to congested nodes and congested parts of the network using implicit hybrid contact and resource congestion heuristics. CafRep exploits a localised, relative-utility-based approach to offload traffic from more to less congested parts of the network, and to replicate at an adaptively lower rate in parts of the network with non-uniform congestion levels. We extensively evaluate our work against benchmark and competitive protocols across a range of metrics over three real connectivity and GPS traces: Sassy [44], San Francisco Cabs [45] and Infocom 2006 [33]. We show that CafRep performs well independently of network connectivity and mobility patterns, and consistently outperforms state-of-the-art DTN forwarding algorithms in the face of increasing rates of congestion. CafRep maintains higher availability and success ratios while keeping delays, packet loss rates and delivery cost low. We test CafRep with two application scenarios, fixed-rate traffic and real-world Facebook application traffic demands, showing that regardless of the type of traffic it delivers, CafRep reduces congestion and improves forwarding performance.
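    As a rough illustration of the relative-utility idea, the sketch below scores a peer by combining a contact-based centrality estimate with resource headroom, copies a message only on a utility gain, and scales the copy rate down as the peer's congestion grows. All field names, weights and thresholds are assumptions for illustration, not CafRep's actual heuristics.

```python
# Minimal sketch of a congestion-aware relative-utility forwarding rule in
# the spirit of CafRep. Field names, weights and thresholds are
# illustrative assumptions, not the protocol's actual heuristics.
from dataclasses import dataclass

@dataclass
class NodeState:
    centrality: float   # contact-based utility estimate in [0, 1]
    buffer_free: float  # fraction of buffer space still free
    drop_rate: float    # recent packet-drop rate in [0, 1]

def utility(n: NodeState, w_contact: float = 0.5, w_resource: float = 0.5) -> float:
    """Hybrid utility: contact centrality discounted by congestion signals."""
    return w_contact * n.centrality + w_resource * n.buffer_free * (1.0 - n.drop_rate)

def should_replicate(me: NodeState, peer: NodeState, margin: float = 0.1) -> bool:
    """Copy a message only on a relative utility gain, steering traffic
    away from congested nodes and regions."""
    return utility(peer) > utility(me) + margin

def replication_rate(peer: NodeState, base_rate: float = 1.0) -> float:
    """Adaptively lower the copy rate as the peer's congestion grows."""
    return base_rate * peer.buffer_free * (1.0 - peer.drop_rate)
```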

    Distribuição de conteúdos over-the-top multimédia em redes sem fios

    Get PDF
    Master's dissertation in Electronics and Telecommunications Engineering. Nowadays, the Internet is considered an essential good: there is a constant need to communicate, but also to access and share content. The growing use of the Internet, together with the increased bandwidth provided by telecommunications operators, has created excellent conditions for the growth of Over-The-Top (OTT) multimedia services, as demonstrated by the success of Netflix and YouTube. OTT services encompass the delivery of video and audio over the Internet without direct control by the telecommunications operators, an attractive low-cost and profitable proposition. Although OTT delivery is appealing, it has some limitations. For the model to keep growing while maintaining high Quality-of-Experience (QoE) standards, an enhanced content distribution architecture is needed: one that adapts to the various content types and uses resources carefully, providing OTT services with good quality for the user in an efficient and scalable way that meets the requirements imposed by current and future mobile networks. This dissertation focuses on content distribution in wireless networks through a cache model distributed among the access points, thereby increasing the effective cache size and reducing the traffic towards the servers and caches of the aggregation layer above, which improves scalability and frees bandwidth on the upstream servers. The distributed cache model was tested in three scenarios: the consumer is at home with fixed access; the consumer moves between several access points in the street; and the consumer is on a high-speed train. Several solutions, such as Redis2, Cachelot and Memcached, were evaluated as caches, along with several proxies, to match the required features. Two content-distribution algorithms were also tested, namely Consistent Hashing and Rendezvous Hashing. The dissertation further integrated an existing prefetching proposal, which places content in the caches before consumers request it. In the end, the distributed model combined with prefetching was shown to improve consumer QoE and to reduce the load on the upstream servers.
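    Of the two placement algorithms the dissertation compares, rendezvous (highest-random-weight) hashing is easy to sketch: every client hashes the content key against each candidate cache and picks the highest score, so all clients agree on placement without coordination, and the departure of one cache only remaps the keys it owned. The access-point names below are hypothetical.

```python
# Illustrative rendezvous (highest-random-weight) hashing, one of the two
# placement algorithms the dissertation evaluates, mapping a content key
# to an access-point cache. Access-point names are hypothetical.
import hashlib

def _score(cache_id: str, key: str) -> int:
    digest = hashlib.sha256(f"{cache_id}:{key}".encode()).hexdigest()
    return int(digest, 16)

def pick_cache(caches: list[str], key: str) -> str:
    """Every client computes the same winner independently; when a cache
    leaves, only the keys it owned are remapped."""
    return max(caches, key=lambda c: _score(c, key))

aps = ["ap-home", "ap-street-1", "ap-train-3"]  # hypothetical access points
print(pick_cache(aps, "/video/segment_0042.ts"))
```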

    ESG-CET Final Progress Report

    Full text link

    Content Replication and Placement Schemes for Wireless Mesh Networks

    No full text
    Recently, Wireless Mesh Networks (WMNs) have attracted much interest from both academia and industry, due to their potential to provide alternative broadband wireless Internet connectivity. However, for reasons such as multi-hop forwarding and dynamic wireless link characteristics, the performance of current WMNs is rather low when clients solicit Web content. With the evolution of advanced mobile computing devices, it is anticipated that the demand for bandwidth-onerous popular content (especially multimedia content) in WMNs will increase dramatically in the near future. Content replication is a popular approach for outsourcing content on behalf of the origin content provider. This area has been well explored in the context of the wired Internet, but has received comparatively little attention from the research community when it comes to WMNs. A number of replica placement algorithms have been designed specifically for the Internet, but they do not consider the special features of wireless networks, such as insufficient bandwidth, low server capacity and contention to access the wireless medium. This thesis studies the technical challenges encountered when transforming the traditional model of multi-hop WMNs from an access network into a content network. We advance the thesis that enabling packet-relaying mesh routers to act as replica servers for popular content, such as media streaming, results in significant performance improvement. Such support from infrastructure mesh routers benefits from knowledge of the underlying network topology (i.e., information about the physical connections between network nodes is available at mesh routers). The use of cross-layer information from lower layers opens the door to developing efficient replication schemes that account for the specific features of WMNs (e.g., contention between nodes for access to the wireless medium, and traffic interference), and can exploit the underutilized resources (e.g., storage and bandwidth) at mesh routers, enabling those infrastructure nodes to participate in content distribution and play the role of replica servers. Our main contribution is the design of two lightweight, distributed and scalable object replication schemes for WMNs. The first scheme follows a hierarchical approach, while the second follows a flat one. The challenge is to replicate content as close as possible to the requesting clients, and thus reduce the access latency per object, while minimizing the number of replicas. The two schemes address the questions of where, and how many, replicas should be placed in the WMN. In our schemes, we consider the underlying topology jointly with link-quality metrics to improve the quality of experience. We show through simulation tests that the schemes significantly enhance the performance of a WMN in terms of reducing access cost, bandwidth consumption and computation/communication cost.
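    The where-and-how-many question is a facility-location-style problem. A minimal greedy sketch, under the assumption of a known client-to-router access-cost matrix, adds replica sites while the marginal drop in total access cost stays worthwhile; the thesis' actual schemes additionally fold in link quality and wireless contention, which this sketch omits.

```python
# Greedy replica-placement sketch for the flavor of problem the thesis
# addresses: keep adding mesh routers as replica sites while the marginal
# drop in total access cost is worthwhile. The cost matrix and stopping
# threshold are illustrative only.
def place_replicas(cost, router_ids, min_gain=1.0):
    """cost[c][r]: access cost from client c to router r (e.g. weighted hops)."""
    def total_cost(sites):
        if not sites:
            return float("inf")
        return sum(min(row[r] for r in sites) for row in cost)

    chosen = []
    while True:
        best, best_cost = None, total_cost(chosen)
        for r in router_ids:
            if r in chosen:
                continue
            c = total_cost(chosen + [r])
            if c < best_cost - min_gain:  # marginal gain still worthwhile
                best, best_cost = r, c
        if best is None:
            return chosen
        chosen.append(best)

# Example: 3 clients, 3 candidate routers; greedy picks router 1, then 0.
print(place_replicas([[1, 4, 9], [2, 1, 8], [9, 2, 1]], [0, 1, 2]))
```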

    SDSF : social-networking trust based distributed data storage and co-operative information fusion.

    Get PDF
    As of 2014, about 2.5 quintillion bytes of data are created each day, and 90% of the data in the world was created in the last two years alone. This data can be stored on external hard drives, on unused space in peer-to-peer (P2P) networks, or, in the currently more popular approach, in the Cloud. When users store their data in the Cloud, the entire data set is exposed to the administrators of the services, who can view and possibly misuse it. With the growing popularity and usage of Cloud storage services like Google Drive and Dropbox, concerns about privacy and security are increasing. Given the rate of data generation, searching for content or documents in this distributed stored data is a big challenge. Information fusion is used to extract information based on the user's query and to combine the data and learn useful information. This problem is challenging when the data sources are distributed and heterogeneous and the trustworthiness of the documents varies. This thesis proposes two innovative solutions to these problems. Firstly, to remedy the security and privacy of stored data, we propose an innovative Social-based Distributed Data Storage and Trust based co-operative Information Fusion Framework (SDSF). The main objective is a framework that provides a secure storage system without overloading a single system, using a P2P-like approach. The framework allows users to share storage resources among friends and acquaintances without compromising security or privacy, while enjoying all the benefits that Cloud storage offers. The system fragments the data and encodes it to store it securely on the unused storage capacity of the data owner's friends' resources. The system thus gives the user centralized control over the selection of peers to store the data. Secondly, to retrieve the stored distributed data, the proposed system also performs fusion from distributed sources. The technique uses several algorithms to ensure the correctness of the query that is used to retrieve and combine the data, improving information fusion accuracy and efficiency when combining heterogeneous, distributed and massive data on the Cloud for time-critical operations. We demonstrate that the retrieved documents are genuine when trust scores are also used while retrieving the data sources. The thesis makes several research contributions. First, we implement Social Storage using erasure coding. Erasure coding fragments the data, encodes it, and, through the introduction of redundancy, resolves issues resulting from device failures. Second, we exploit the concept of trust inherent in social networks to determine the nodes and build a secure network where the fragmented data should be stored, since the social network consists of friends, family and acquaintances. The trust between friends and the availability of their devices allow the user to make an informed choice about where the information should be stored, using 'k' optimal paths. Thirdly, to retrieve this distributed stored data, we propose information fusion over distributed data using a combination of Enhanced N-grams (to ensure correctness of the query), Semantic Machine Learning (to extract documents based on context rather than just a bag of words, while also considering the trust score) and MapReduce: together, the NSM algorithms.
    Lastly, we evaluate the performance of SDSF's distributed storage using erasure coding, identify the social storage providers based on trust, and evaluate their trustworthiness. We also evaluate the performance of our information fusion algorithms in distributed storage systems. Thus, the system using the SDSF framework implements the beneficial features of P2P networks and Cloud storage while avoiding the pitfalls of these systems. The multi-layered encryption ensures that all other users, including the system administrators, cannot decode the stored data. The application of the NSM algorithms improves the effectiveness of fusion, since a large number of genuine documents is retrieved for fusion.
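    The fragment-and-encode step can be illustrated with the simplest possible erasure code: split a blob into k equal fragments and add one XOR parity fragment, so any single lost fragment is recoverable. This (k+1, k) scheme is only a sketch; SDSF itself would use a stronger code (and encryption) to tolerate multiple peer failures.

```python
# Simplest-possible illustration of fragment-and-encode social storage:
# k padded data fragments plus one XOR parity fragment survive the loss
# of any single fragment. Illustrative only; not SDSF's actual code.
def encode(data: bytes, k: int = 4) -> list:
    """Return k padded data fragments followed by one XOR parity fragment."""
    size = -(-len(data) // k)  # ceiling division
    frags = [data[i * size:(i + 1) * size].ljust(size, b"\0") for i in range(k)]
    parity = bytearray(size)
    for frag in frags:
        for i, byte in enumerate(frag):
            parity[i] ^= byte
    return frags + [bytes(parity)]

def recover(frags: list, missing: int) -> bytes:
    """Rebuild the single missing fragment by XOR-ing all survivors."""
    size = len(next(f for f in frags if f is not None))
    out = bytearray(size)
    for j, frag in enumerate(frags):
        if j != missing:
            for i, byte in enumerate(frag):
                out[i] ^= byte
    return bytes(out)

frags = encode(b"social storage demo data", k=4)
frags[2] = None                   # simulate one friend's device failing
print(recover(frags, missing=2))  # the lost fragment is reconstructed
```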

    Exploring digital music online : user acceptance and adoption of online music services

    Get PDF
    Master's dissertation in Business Sciences. Online digital music has changed dramatically since the emergence of Napster (in 1999) as a file-sharing system and the establishment of electronic commerce. The transformation of the music industry's value chain has enabled Online Music Services (OMS) to serve as reintermediaries in the way the music product is delivered to consumers. However, a need to understand end-user behavior towards OMS has been recognized and highlighted by academic authors. To explore this aspect, we extended the UTAUT2 framework (an IT/IS user acceptance model) to study OMS through individual Behavioral Intention and Use. The UTAUT2 model was applied with the main purposes of validating its applicability in this environment and identifying additional determinants of OMS acceptance and adoption. A quantitative approach was undertaken and data were collected from a sample of 329 individuals. Partial Least Squares (PLS) path modeling was used to assess the relationships within our model. Our findings verify the suitability of the UTAUT2 constructs in an OMS setting, as well as the significance of Ideology of Consumer Rights and File-Sharing Expertise in the formation of Behavioral Intention and Use, respectively. Moreover, File-Sharing Judgment was found to have a statistically non-significant impact on Behavioral Intention. Several theoretical and practical implications are provided to enhance OMS providers' comprehension of consumer behavior.

    Transferring big data across the globe

    Get PDF
    Transmitting data via the Internet is a routine and common task for users today. The amount of data being transmitted by the average user has increased dramatically over the past few years. Transferring a gigabyte of data over an entire day was once normal; users now transmit multiple gigabytes in a single hour. With the influx of big data and massive scientific data sets measured in tens of petabytes, users have the propensity to transfer even larger amounts of data. When data sets of this magnitude are transferred on public or shared networks, the performance of all workloads in the system is impacted. This dissertation addresses the issues and challenges inherent in transferring big data over shared networks. A survey of current transfer techniques is provided, and these techniques are evaluated in simulated, experimental and live environments. The main contribution of this dissertation is the development of a new model for big data transfers, called Nice, which is based on a store-and-forward methodology instead of an end-to-end approach. The Nice model ensures that big data transfers only occur when there is idle bandwidth that can be repurposed for these large transfers, improving overall performance and significantly reducing transmission time. The model allows for efficient transfers regardless of time-zone differences or variations in bandwidth between sender and receiver. Nice is the first model that addresses the challenges of transferring big data across the globe.
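    A hedged sketch of the store-and-forward idea: a relay stages incoming chunks and drains its queue only while a utilization probe reports idle headroom, so foreground traffic keeps priority. The probe function and thresholds below are placeholders for illustration, not the dissertation's actual mechanism.

```python
# Sketch of a store-and-forward relay in the spirit of Nice: staged chunks
# are forwarded only when the link is mostly idle, so interactive traffic
# keeps priority. Probe and thresholds are illustrative assumptions.
import time
from collections import deque

def link_utilization() -> float:
    """Placeholder probe; a real relay would read interface counters."""
    return 0.3

def relay(staged: deque, send, idle_threshold: float = 0.5, poll_s: float = 1.0):
    """Drain the staged queue hop by hop only when idle headroom exists."""
    while staged:
        if link_utilization() < idle_threshold:
            send(staged.popleft())   # idle bandwidth: repurpose it
        else:
            time.sleep(poll_s)       # busy: back off and retry later

relay(deque([b"chunk-0", b"chunk-1"]), send=lambda c: print("sent", c))
```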

    Planning broadband infrastructure - a reference model

    Get PDF

    Effective and Economical Content Delivery and Storage Strategies for Cloud Systems

    Get PDF
    Cloud computing has proved to be an effective infrastructure for hosting various applications and providing reliable, stable services. Content delivery and storage are two main services provided by the cloud. A high-performance cloud can reduce costs for both cloud providers and customers while providing high application performance to cloud clients. The performance of such cloud-based services is closely related to three issues. First, when delivering content from the cloud to users, it is important to reduce payment costs and transmission time. Second, when transferring content between cloud datacenters, it is important to reduce the payment costs to the Internet service providers (ISPs). Third, when storing content in the datacenters, it is crucial to reduce file read latency and the power consumption of the datacenters. In this dissertation, we study how to effectively deliver and store content on the cloud, with a focus on cloud gaming and video streaming services. In particular, we aim to address three problems: i) a cost-efficient cloud computing system to support thin-client Massively Multiplayer Online Games (MMOGs): how to achieve high Quality of Service (QoS) in cloud gaming and reduce cloud bandwidth consumption; ii) cost-efficient inter-datacenter video scheduling: how to reduce the bandwidth payment cost by fully utilizing link bandwidth when cloud providers transfer videos between datacenters; iii) energy-efficient adaptive file replication: how to adapt to time-varying file popularities to achieve a good tradeoff between data availability and efficiency, and to reduce the power consumption of the datacenters. We propose methods to solve each of these challenges and, as a result, build a cloud system comprising a cost-efficient system to support cloud clients, an inter-datacenter video scheduling algorithm for video transmission on the cloud, and an adaptive file replication algorithm for the cloud storage system. The cloud system not only benefits cloud providers by reducing cloud costs, but also benefits cloud customers by reducing their payment costs and improving cloud application performance (i.e., user experience). Finally, we conducted extensive experiments on many testbeds, including PeerSim, PlanetLab, EC2 and a real-world cluster, which demonstrate the efficiency and effectiveness of our proposed methods. In future work, we will study how to further improve the user experience of receiving content and reduce the cost of content transfer.
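    For the third problem, popularity-adaptive replication can be sketched as an exponentially weighted request-rate tracker that resizes each file's replica set, trading availability against storage and power. All parameters below are illustrative assumptions rather than the dissertation's tuned values.

```python
# Illustrative popularity-adaptive replication: track an exponentially
# weighted request rate per file and resize its replica set within
# availability/power bounds. Parameters are assumptions for illustration.
class AdaptiveReplicator:
    def __init__(self, alpha=0.2, reqs_per_replica=100.0, r_min=1, r_max=8):
        self.alpha = alpha                        # EWMA smoothing factor
        self.reqs_per_replica = reqs_per_replica  # load one replica can serve
        self.r_min, self.r_max = r_min, r_max     # availability/power bounds
        self.rate = {}                            # file -> smoothed requests

    def observe(self, file_id: str, requests_this_interval: int) -> int:
        """Update the file's popularity estimate; return its target replica count."""
        prev = self.rate.get(file_id, 0.0)
        cur = (1 - self.alpha) * prev + self.alpha * requests_this_interval
        self.rate[file_id] = cur
        return max(self.r_min, min(self.r_max, round(cur / self.reqs_per_replica)))

rep = AdaptiveReplicator()
for demand in (550, 700, 40):   # popularity rises, then fades
    print(rep.observe("movie-42.mp4", demand))
```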