51 research outputs found

    Data Replication for Improving Data Accessibility in Ad Hoc Networks

    In ad hoc networks, due to frequent network partitions, data accessibility is lower than in conventional fixed networks. In this paper, we address this problem by replicating data items on mobile hosts. First, we propose three replica allocation methods, assuming that data items are not updated. In these three methods, we take into account the access frequency from mobile hosts to each data item and the status of the network connection. Then, we extend the proposed methods by considering aperiodic updates and integrating user profiles consisting of mobile users' schedules, access behavior, and read/write patterns. We also show the results of simulation experiments evaluating the performance of our proposed methods.
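As a minimal illustration of the first kind of allocation method described above, a host can simply fill its replica memory with the items it accesses most often. The sketch below is a hypothetical simplification (function and item names are invented, and the paper's actual methods also account for network connectivity and replica sharing among neighbors):

```python
# Hypothetical sketch of frequency-based replica allocation: each mobile host
# replicates the data items it accesses most frequently, up to its memory
# capacity. Item names and frequencies are invented.

def allocate_replicas(access_freq, capacity):
    """access_freq: dict mapping item id -> this host's access frequency.
    Returns the set of items this host should replicate."""
    ranked = sorted(access_freq, key=access_freq.get, reverse=True)
    return set(ranked[:capacity])

host_freq = {"D1": 0.5, "D2": 0.1, "D3": 0.3, "D4": 0.05}
replicas = allocate_replicas(host_freq, 2)  # the two hottest items
```

The extended methods in the paper additionally avoid duplicating replicas among connected hosts, which a sketch this small does not capture.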

    POLICY-BASED MIDDLEWARE FOR MOBILE CLOUD COMPUTING

    Mobile devices are the dominant interface for interacting with online services as well as an efficient platform for cloud data consumption. Cloud computing allows the delivery of applications/functionalities as services over the Internet and provides the software/hardware infrastructure to host these services in a scalable manner. In mobile cloud computing, the apps running on a mobile device use cloud-hosted services to overcome the resource constraints of the host device. This approach allows mobile devices to outsource resource-consuming tasks. Furthermore, as the number of devices owned by a single user increases, there is a growing demand for cross-platform application deployment to ensure a consistent user experience. However, mobile devices communicate through unstable wireless networks to access the data and services hosted in the cloud. The major challenges that mobile clients face when accessing services hosted in the cloud are network latency and synchronization of data. To address these challenges, this research proposes an architecture built around a policy-based middleware that supports users in accessing cloud-hosted digital assets and services via an application across multiple mobile devices in a seamless manner. The major contribution of this thesis is identifying the different information used to configure the behavior of the middleware towards reliable and consistent communication between mobile clients and cloud-hosted services. Finally, the advantages of the policy-based middleware architecture are illustrated by experiments conducted on a proof-of-concept prototype.

    Adaptive Pull-Based Data Freshness Policies for Diverse Update Patterns

    An important challenge to effective data delivery in wide area environments is maintaining the data freshness of objects using solutions that can scale to a large number of clients without incurring significant server overhead. Policies for maintaining data freshness are traditionally either push-based or pull-based. Push-based policies involve pushing data updates by servers; they may not scale to a large number of clients. Pull-based policies require clients to contact servers to check for updates; their effectiveness is limited by the difficulty of predicting updates. Models to predict updates generally rely on some knowledge of past updates. Their accuracy of prediction may vary, and determining the most appropriate model is non-trivial. In this paper, we present an adaptive pull-based solution to this challenge. We first present several techniques that use update history to estimate the freshness of cached objects, and identify update patterns for which each technique is most effective. We then introduce adaptive policies that can (automatically) choose a policy for an object based on its observed update patterns. Our proposed policies improve the freshness of cached data and reduce costly contacts with remote servers without incurring the large server overhead of push-based policies, and can scale to a large number of clients. Using trace data from a data-intensive website as well as two email logs, we show that our adaptive policies can adapt to diverse update patterns and provide significant improvement compared to a single policy. (UMIACS-TR-2004-01)
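To make the idea concrete, a simple history-based freshness estimator might use the mean inter-update interval to decide whether a pull can be skipped. This is an assumed illustration of one such technique, not the paper's exact estimator:

```python
# Assumed illustration of a history-based freshness estimate (not the paper's
# exact technique): if the next update, predicted from the mean gap between
# past updates, is still in the future, treat the cached copy as fresh and
# skip the server contact.

def likely_fresh(update_times, now):
    """update_times: ascending timestamps of past updates at the server.
    Returns True if the cached copy is probably still fresh at `now`."""
    if len(update_times) < 2:
        return False  # no usable history: be conservative, contact the server
    gaps = [b - a for a, b in zip(update_times, update_times[1:])]
    mean_gap = sum(gaps) / len(gaps)
    return now < update_times[-1] + mean_gap
```

An adaptive policy in the paper's spirit would monitor how well such an estimator tracks an object's observed update pattern and switch estimators when it performs poorly.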

    Improving Data Delivery in Wide Area and Mobile Environments

    The popularity of the Internet has dramatically increased the diversity of clients and applications that access data across wide area networks and mobile environments. Data delivery in these environments presents several challenges. First, applications often have diverse requirements with respect to the latency of their requests and the recency of data, and traditional data delivery architectures do not provide interfaces to express these requirements. Second, it is difficult to accurately estimate when objects are updated. Existing solutions either require servers to notify clients (push-based), which adds overhead at servers and may not scale, or require clients to contact servers (pull-based), which relies on estimates that are often inaccurate in practice. Third, cache managers need a flexible and scalable way to determine whether an object in the cache meets a client's latency and recency preferences. Finally, mobile clients who access data over wireless networks share limited wireless bandwidth and typically have different QoS requirements for different applications. In this dissertation we address these challenges using two complementary techniques: client profiles and server cooperation. Client profiles are a set of parameters that enable clients to communicate application-specific latency and recency preferences to caches and wireless base stations. Profiles are used by cache managers to decide whether to deliver a cached object to the client or to validate the object at a remote server, and for scheduling data delivery to mobile clients. Server cooperation enables servers to provide resource information to cache managers, which cache managers use to estimate the recency of cached objects. The main contributions of this dissertation are as follows. First, we present a flexible and scalable architecture to support client profiles that is straightforward to implement at a cache or wireless base station. Second, we present techniques to improve estimates of the recency of cached objects using server cooperation, by increasing the amount of information servers provide to caches. Third, for mobile clients, we present a framework for incorporating profiles into the cache utilization, downloading, and scheduling decisions at a wireless base station. We evaluate client profiles and server cooperation using synthetic and trace data. Finally, we present an implementation of profiles and experimental results.
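A client profile of the kind described above can be sketched as a small record of latency and recency preferences consulted by the cache manager. The field names and decision rule here are assumptions for illustration, not the dissertation's actual interface:

```python
# Assumed sketch of a client profile and the cache manager's decision:
# serve from cache when the object is recent enough for this client, or
# when validating at the origin would violate the client's latency bound.
from dataclasses import dataclass

@dataclass
class Profile:
    max_staleness: float   # seconds of staleness the client tolerates
    max_latency: float     # seconds the client is willing to wait

def serve_from_cache(profile, object_age, validation_latency):
    if object_age <= profile.max_staleness:
        return True                                  # recent enough
    return validation_latency > profile.max_latency  # too slow to validate
```

A base station could apply the same record when scheduling downloads, prioritizing clients whose latency bounds are tightest.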

    A differentiated proposal of three dimension i/o performance characterization model focusing on storage environments

    The I/O bottleneck remains a central issue in high-performance environments. Cloud computing, high-performance computing (HPC) and big data environments share many underlying difficulties in delivering data at the rate requested by high-performance applications. This increases the possibility of bottlenecks being created throughout the application feeding process by hardware devices located at the bottom of the storage system layer. In recent years, many researchers have proposed solutions to improve the I/O architecture using different approaches. Some take advantage of hardware devices, while others focus on sophisticated software approaches. However, due to the complexity of dealing with high-performance environments, creating solutions to improve I/O performance in both software and hardware is challenging and gives researchers many opportunities. Classifying these improvements along different dimensions allows researchers to understand how the improvements have been built over the years and how the field progresses. In addition, it allows future efforts to be directed to research topics that have developed at a lower rate, balancing the overall development process. This research presents a three-dimensional characterization model for classifying research works on I/O performance improvements for large-scale storage computing facilities. This classification model can also be used as a guideline framework to summarize research works, providing an overview of the current scenario. We also used the proposed model to perform a systematic literature mapping covering ten years of research on I/O performance improvements in storage environments. This study classified hundreds of distinct research works, identifying which hardware, software, and storage systems received the most attention over the years, which proposal elements were most researched, and where these elements were evaluated.
In order to justify the importance of this model and the development of solutions targeting I/O performance improvements, we evaluated a subset of these improvements using a real and complete experimentation environment, Grid5000. Analyses of different scenarios using a synthetic I/O benchmark demonstrate how the throughput and latency parameters behave when performing different I/O operations using distinct storage technologies and approaches.
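As a concrete illustration of how the throughput and latency parameters discussed above can be obtained from a synthetic benchmark, the sketch below times a sequential-write workload. It is a minimal stand-in for a real I/O benchmark, with illustrative sizes:

```python
# Minimal synthetic sequential-write micro-benchmark (illustrative only):
# throughput is total bytes over elapsed wall time, latency is the mean
# time of an individual write call.
import os
import tempfile
import time

def measure_write(total_bytes, block_size):
    buf = b"\0" * block_size
    latencies = []
    with tempfile.NamedTemporaryFile(delete=True) as f:
        start = time.perf_counter()
        for _ in range(total_bytes // block_size):
            t0 = time.perf_counter()
            f.write(buf)
            latencies.append(time.perf_counter() - t0)
        f.flush()
        os.fsync(f.fileno())           # force data to the storage device
        elapsed = time.perf_counter() - start
    throughput = total_bytes / elapsed            # bytes per second
    mean_latency = sum(latencies) / len(latencies)  # seconds per write
    return throughput, mean_latency
```

A real benchmark such as those surveyed would also vary the access pattern (random versus sequential, read versus write) and the underlying storage technology.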

    Deterministic Object Management in Large Distributed Systems

    Caching is a widely used technique to improve the scalability of distributed systems. A central issue with caching is maintaining object replicas consistent with their master copies. Large distributed systems, such as the Web, typically deploy heuristic-based consistency mechanisms, which increase delay and place extra load on the servers, while not providing guarantees that cached copies served to clients are up-to-date. Server-driven invalidation has been proposed as an approach to strong cache consistency, but it requires servers to keep track of which objects are cached by which clients. We propose an alternative approach to strong cache consistency, called MONARCH, which does not require servers to maintain per-client state. Our approach builds on a few key observations. Large and popular sites, which attract the majority of the traffic, construct their pages from distinct components with various characteristics. Components may have different content types, change characteristics, and semantics. These components are merged together to produce a monolithic page, and the information about their uniqueness is lost. In our view, pages should serve as containers holding distinct objects with heterogeneous type and change characteristics while preserving the boundaries between these objects. Servers compile object characteristics and information about relationships between containers and embedded objects into explicit object management commands. Servers piggyback these commands onto existing request/response traffic so that client caches can use these commands to make object management decisions. The use of explicit content control commands is a deterministic, rather than heuristic, object management mechanism that gives content providers more control over their content. The deterministic object management with strong cache consistency offered by MONARCH allows content providers to make more of their content cacheable. 
Furthermore, MONARCH enables content providers to expose the internal structure of their pages to clients. We evaluated MONARCH using simulations with content collected from real Web sites. The results show that MONARCH provides strong cache consistency for all objects, even for unpredictably changing ones, and incurs smaller byte and message overhead than heuristic policies. The results also show that as the request arrival rate or the number of clients increases, the amount of server state maintained by MONARCH remains the same, while the amount of server state incurred by server-driven invalidation mechanisms grows.
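The piggybacking mechanism can be illustrated with a toy cache that applies invalidation commands attached to an ordinary response. The command names and response layout are invented for illustration; MONARCH's real command set is richer:

```python
# Toy cache applying piggybacked object-management commands (command names
# invented for illustration): the response for a container page carries an
# explicit "invalidate" command for an embedded object that has changed,
# so no per-client state is needed at the server.

def apply_commands(cache, response):
    """cache: dict mapping URL -> body."""
    for cmd, obj in response.get("commands", []):
        if cmd == "invalidate":
            cache.pop(obj, None)        # embedded object changed: drop it
    cache[response["url"]] = response["body"]
    return cache

cache = {"/logo.png": b"old-logo", "/page": b"old-page"}
resp = {"url": "/page", "body": b"new-page",
        "commands": [("invalidate", "/logo.png")]}
apply_commands(cache, resp)
```

Because the commands ride on existing request/response traffic, the server's state stays constant as clients are added, which is the scaling property the evaluation above measures.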

    Handling Live Sensor Data on the Semantic Web

    The increased linking of objects in the Internet of Things and the ubiquitous flood of data and information require new technologies for data processing and data storage, in particular on the Internet and the Semantic Web. Because of human limitations in data collection and analysis, more and more automatic methods are used. Above all, sensors and similar data producers are very accurate, fast and versatile, and can provide continuous monitoring even in places that are hard for people to reach. Traditional information processing, however, has focused on documents or document-related information, which have different requirements than sensor data: the main focus is static information of a certain scope, in contrast to large quantities of live data that are only meaningful when combined with other data and background information. The paper evaluates the current status quo in the processing of sensor and sensor-related data with the help of the promising approaches of the Semantic Web and the Linked Data movement. This includes the use of existing sensor standards such as the Sensor Web Enablement (SWE) as well as the utilization of various ontologies. Based on a proposed abstract approach for the development of a semantic application, covering the process from data collection to presentation, important points, such as modeling, deploying and evaluating semantic sensor data, are discussed. Besides related work on current and future developments addressing known difficulties of RDF/OWL, such as the handling of time, space and physical units, a sample application demonstrates the key points. In addition, techniques for the spread of information, such as polling, notification and streaming, are covered, with examples of data stream management systems (DSMS) for processing real-time data. Finally, the overview points out remaining weaknesses and thereby enables the improvement of existing solutions, in order to easily develop semantic sensor applications in the future.
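As a toy contrast between the pull (polling) and push (notification) dissemination patterns mentioned above, consider a sensor object that supports both; a DSMS would generalize the push path to continuous queries over the stream. All names here are illustrative:

```python
# Illustrative sensor supporting both dissemination patterns discussed above:
# clients may poll the latest value (pull) or register a callback that is
# invoked on every new reading (push/notification).

class Sensor:
    def __init__(self):
        self.value = None
        self.subscribers = []

    def publish(self, v):
        """Push: store the reading and notify all subscribers."""
        self.value = v
        for callback in self.subscribers:
            callback(v)

    def read(self):
        """Pull: a polling client fetches the current value."""
        return self.value

seen = []
s = Sensor()
s.subscribers.append(seen.append)   # push-style consumer
s.publish(21.5)                     # one temperature reading arrives
```

Polling wastes requests when data changes rarely, while notification keeps clients current at the cost of server-side subscriber state, the same trade-off that motivates streaming DSMS designs.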

    Anycast services and its applications

    Anycast in the next-generation Internet Protocol is a hot topic in computer networks research. It has promising potential but also many challenges, such as architecture, routing, Quality-of-Service, anycast in ad hoc networks, application-layer anycast, etc. In this thesis, we tackle some important topics among them. The thesis first presents an introduction to anycast, followed by related work. Then, as our major contributions, a number of challenging issues are addressed in the following chapters. We tackle the anycast routing problem by proposing a requirement-based probing algorithm at the application layer for anycast routing. Compared with the existing periodic probing routing algorithm, the proposed routing algorithm improves performance in terms of delay. We address the reliable service problem with the design of a twin-server model for anycast servers, providing a transparent and reliable service for all anycast queries. We address the load balancing problem of anycast servers by proposing new job deviation strategies, to provide a similar Quality-of-Service to all clients of anycast servers. We apply the mesh routing methodology to anycast routing in ad hoc networking environments, which provides a reliable routing service and uses far fewer network resources. We combine the anycast protocol and the multicast protocol to provide a bidirectional service, and apply the service to Web-based database applications, achieving better query efficiency and data synchronization. Finally, we propose a new Internet-based service, minicast, as the combination of the anycast and multicast protocols. Such a service has potential applications in information retrieval, parallel computing, cache queries, etc. We show that the minicast service consumes fewer network resources while providing the same services. The last chapter of the thesis presents the conclusions and discusses future work.
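The requirement-based probing idea can be sketched as follows: the client probes the anycast group only when its current server no longer meets the delay requirement, instead of probing on a fixed period. This is a hypothetical simplification with synthetic delay measurements:

```python
# Hypothetical requirement-based server selection: keep the current anycast
# group member while it satisfies the delay requirement; probe the group and
# switch only when the requirement is violated. Delay values are synthetic.

def select_server(current, delays, max_delay):
    """delays: measured round-trip delay (ms) per anycast group member.
    Returns the member that should serve the next request."""
    if delays[current] <= max_delay:
        return current                    # requirement met: no probing cost
    return min(delays, key=delays.get)    # probe the group, pick the fastest
```

Compared with periodic probing, this only pays the probing cost when the delay requirement is actually violated, which is where the delay improvement described above comes from.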

    Distributed mobility management for a flat architecture in 5G mobile networks: solutions, analysis and experimental validation

    In recent years, the commercial deployment of data services in mobile networks has been evolving quickly, providing enhanced radio access technologies and more efficient network architectures. Nowadays, mobile users enjoy broadband and ubiquitous wireless access through their portable devices, like smartphones and tablets, exploiting the connectivity offered by modern 4G networks. Nevertheless, the technological evolution keeps moving towards the development of next-generation networks, or 5G, aiming at further improving the current system in order to cope with the huge data traffic growth foreseen in the coming years. One possible research direction is to innovate the mobile network architecture by designing a flat system. Indeed, current systems are built upon a centralized and hierarchical structure, where multiple access networks are connected to a central core hosting crucial network functions, e.g., charging, control and maintenance, as well as mobility management, which is the main topic of this thesis. In such a centralized mobility management system, users' traffic is aggregated at some key nodes in the core, called mobility anchors. Thus, an anchor can easily handle a user's mobility by redirecting traffic flows to his/her location, but i) it poses scalability issues, ii) it represents a single point of failure, and iii) the routing path is in general suboptimal. These problems can be overcome by moving to a flat architecture, adopting a Distributed Mobility Management (DMM) system, where the centralized anchor is removed. This thesis develops within the DMM framework, presenting the design, analysis, implementation and experimental validation of several DMM protocols. In this work we describe original protocols for client-based and network-based mobility management, as well as a hybrid solution.
We study our solutions analytically to evaluate their signaling cost, packet delivery cost, and the latency introduced in handling a handover event. Finally, we assess the validity of some of our protocols with experiments run over a network prototype built in our lab implementing these solutions. Doctoral Programme in Telematics Engineering. Committee: President: Arturo Azcorra Saloña; Secretary: Ramón Agüero Calvo; Member: Jouni Korhone

    Opportunistic Data Services in Least Developed Countries: Benefits, Challenges and Feasibility Issues

    […] facilitator in establishing primary education, reducing mortality or supporting commercial initiatives in Least Developed Countries. The main barrier to the development of IT services in these regions is not only the lack of communication facilities, but also the lack of consistent information systems, security procedures, economic and legal support, and political commitment. In this paper, we propose the vision of an infrastructure-less data platform well suited to the development of innovative IT services in Least Developed Countries. We propose a participatory approach, called Folk-IS, where each individual implements a small subset of a complete information system, using highly secure, portable and low-cost personal devices as well as opportunistic networking, without the need for any form of infrastructure. In this paper, we focus on the exploitation and feasibility analysis of the Folk-IS vision. We also review the technical challenges that are specific to this approach.