92 research outputs found

    CHORUS Deliverable 2.1: State of the Art on Multimedia Search Engines

    Based on the information provided by European projects and national initiatives related to multimedia search, as well as by domain experts who participated in the CHORUS Think-Tanks and workshops, this document reports on the state of the art in multimedia content search from a technical and socio-economic perspective. The technical perspective includes an up-to-date view of content-based indexing and retrieval technologies, multimedia search in the context of mobile devices and peer-to-peer networks, and an overview of current evaluation and benchmarking initiatives that measure the performance of multimedia search engines. From a socio-economic perspective, we survey the impact and legal consequences of these technical advances and point out future directions of research.

    Security and privacy in network naming

    Security and Privacy are now at the forefront of modern concerns and drive a significant part of the debate on digital society. One aspect with significant bearing on both topics is the naming of resources in the network, because it directly impacts how networks work, but also affects how security mechanisms are implemented and what the privacy implications of metadata disclosure are. This issue is further exacerbated by interoperability mechanisms that make this information increasingly available regardless of the intended scope. This work focuses on the implications of naming for security and privacy in the namespaces used by network protocols, and in particular on the implementation of solutions that provide additional security through naming policies or that increase privacy. To achieve this, different techniques are used either to embed security information in existing namespaces or to minimise privacy exposure. The former allows bootstrapping secure transport protocols on top of insecure discovery protocols, while the latter introduces privacy policies as part of name assignment and resolution. The main vehicle for the implementation of these solutions is general-purpose protocols and services; however, there is a strong parallel with ongoing research topics that leverage name resolution systems for interoperability, such as the Internet of Things (IoT) and Information Centric Networks (ICN), where these approaches are also applicable. Programa Doutoral em Informática.
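
    The abstract above mentions embedding security information in existing namespaces so that secure transport can be bootstrapped on top of insecure discovery protocols. Below is a minimal sketch of that general idea, assuming a hypothetical label format in which a service instance name carries a fingerprint of the endpoint's public key; the function names and parameters are illustrative, not the thesis implementation.

    # Minimal sketch (not the thesis implementation): embedding a key fingerprint
    # in a discovery name so that a client can verify the endpoint it resolves to.
    # The label format and helper names below are illustrative assumptions.
    import base64
    import hashlib

    def fingerprint_label(public_key_bytes: bytes, bits: int = 120) -> str:
        """Derive a short, name-safe label from a public key."""
        digest = hashlib.sha256(public_key_bytes).digest()
        truncated = digest[: bits // 8]
        # base32 keeps the label case-insensitive, which suits DNS-style names
        return base64.b32encode(truncated).decode("ascii").rstrip("=").lower()

    def service_instance_name(service: str, public_key_bytes: bytes) -> str:
        """Build a self-certifying instance name such as '<fp>._myapp._tcp.local'."""
        return f"{fingerprint_label(public_key_bytes)}.{service}"

    def verify_peer(name: str, presented_key: bytes) -> bool:
        """Check that the key presented during a handshake matches the name."""
        return name.split(".", 1)[0] == fingerprint_label(presented_key)

    if __name__ == "__main__":
        key = b"example public key material"
        name = service_instance_name("_myapp._tcp.local", key)
        print(name, verify_peer(name, key))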

    A Peer-to-Peer Network Framework Utilising the Public Mobile Telephone Network

    P2P (Peer-to-Peer) technologies are well established and have now become accepted as a mainstream networking approach. However, the explosion of participating users has not been replicated within the mobile networking domain. Until recently, the lack of suitable hardware and wireless network infrastructure to support P2P activities was perceived as contributing to the problem. This has changed with the ready availability of handsets that have ample processing resources and operate over an almost ubiquitous mobile telephone network. Coupled with this has been a proliferation of software applications written for the more capable 'smartphone' handsets. P2P systems have not naturally integrated and evolved into the mobile telephone ecosystem in the way that 'client-server' operating techniques have. However, as the number of clients for a particular mobile application increases, providing the 'server-side' data storage infrastructure becomes more onerous. P2P systems offer mobile telephone applications a way to circumvent this data storage issue by dispersing it across a network of the participating users' handsets. The main goal of this work was to produce a P2P Application Framework that supports developers in creating mobile telephone applications that use distributed storage. Effort was devoted to determining appropriate design requirements for a mobile-handset-based P2P system. Some of these requirements relate to the limitations of the host hardware, such as power consumption; others relate to the network upon which the handsets operate, such as connectivity. The thesis reviews current P2P technologies to assess which were viable to form the technology foundation for the framework. The aim was not to re-invent a P2P system design, but rather to adopt an existing one for mobile operation. Built upon the foundations of a prototype application, the P2P framework resulting from modifications and enhancements grants access via a simple API (Application Programming Interface) to a subset of Nokia 'smartphone' devices. Unhindered operation across all mobile telephone networks is possible through a proprietary application implementing NAT (Network Address Translation) traversal techniques. Recognising that handsets operate with limited resources, further optimisation of the P2P framework was also investigated. Energy consumption was a parameter chosen for further examination because of its impact on handset participation time. This work has proven that operating applications in conjunction with a P2P data storage framework, connected via the mobile telephone network, is technically feasible. It also shows that opportunity remains for further research to realise the full potential of this data storage technique.
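
    The framework is said to expose distributed storage to applications through a simple API. The thesis API itself, which targeted Nokia smartphones over a proprietary NAT-traversal layer, is not reproduced in the abstract; the sketch below only illustrates, under that caveat, the kind of key-value surface such a framework might present to application developers. All class and method names are assumptions.

    # Illustrative sketch of a minimal distributed-storage API such a framework
    # might expose to applications; this is not the thesis API, which targeted
    # Nokia smartphones via a proprietary NAT-traversal layer.
    from abc import ABC, abstractmethod
    from typing import Dict, Optional

    class P2PStorage(ABC):
        @abstractmethod
        def put(self, key: str, value: bytes, ttl_seconds: int = 3600) -> None:
            """Replicate a value onto participating handsets."""

        @abstractmethod
        def get(self, key: str) -> Optional[bytes]:
            """Fetch a value from the overlay, or None if it is unreachable."""

    class InMemoryStorage(P2PStorage):
        """Single-node stand-in used for local testing of application code."""

        def __init__(self) -> None:
            self._data: Dict[str, bytes] = {}

        def put(self, key: str, value: bytes, ttl_seconds: int = 3600) -> None:
            self._data[key] = value

        def get(self, key: str) -> Optional[bytes]:
            return self._data.get(key)

    if __name__ == "__main__":
        store: P2PStorage = InMemoryStorage()
        store.put("profile:alice", b'{"status": "online"}')
        print(store.get("profile:alice"))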

    Distributed Late-binding Micro-scheduling and Data Caching for Data-Intensive Workflows

    Unpublished thesis of the Universidad Complutense de Madrid, Facultad de Informática, Departamento de Arquitectura de Computadores y Automática, defended on 06-07-2015. Today's world is flooded with huge amounts of digital information coming from very diverse sources, and everything indicates that this trend will intensify in the future. Neither industry, nor society in general, nor, very particularly, science remains indifferent to this fact. On the contrary, they strive to obtain the maximum benefit from this information, which means that it must be captured, transferred, stored and processed promptly and efficiently, using a wide range of computational resources. This task, however, is not always simple. A representative example of the challenges posed by the management and processing of large quantities of data is that of the particle physics experiments at the Large Hadron Collider (LHC) in Geneva, which must handle tens of petabytes of information every year. Building on the experience of one of these collaborations, we have studied the main problems concerning the management of massive data volumes and the execution of the vast workflows that need to consume them. In this context, we have developed a general-purpose architecture for the scheduling and execution of workflows with significant data requirements, which we have called Task Queue. This new system takes advantage of the agent-based late-binding model that has helped the LHC experiments overcome the problems associated with the heterogeneity and complexity of large grid computing infrastructures. Our proposal presents several improvements over existing systems. The execution agents of the Task Queue architecture share a Distributed Hash Table (DHT) and perform task assignment cooperatively. In this way, the scalability problems of centralised assignment algorithms are avoided and execution times are improved. This scalability allows us to perform fine-grained micro-scheduling, which in turn enables new functionality, such as a distributed cache on the execution nodes and the use of data location information in task assignment decisions. This improves the efficiency of data processing and helps relieve the usually congested grid storage services. In addition, our system is more robust against problems in the interaction with the central task queue and behaves better in situations with demanding data access patterns or in the absence of local storage services. All of this has been demonstrated in an extensive series of evaluation tests. Since our distributed task scheduling procedure requires the use of broadcast messages, we have also carried out an in-depth study of the possible approaches to implementing this operation on top of the Kademlia DHT, which is used for the shared data cache. Kademlia offers routing to individual nodes but does not include any broadcast primitive. Our work describes the peculiarities of this system, particularly its XOR-based metric, and analytically studies which broadcast techniques can be used with it.
We have also developed a model that estimates node coverage as a function of the probability that each individual message reaches its destination correctly. As validation, the algorithms have been implemented and exhaustively evaluated. In addition, we propose several techniques to improve the protocols in adverse situations, for example when the system exhibits high node churn or the delivery error rate is not negligible. These techniques include redundancy, resubmission and flooding, as well as combinations of them. We present an analysis of the strengths and weaknesses of the different algorithms and of the aforementioned complementary techniques.
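
    For readers unfamiliar with Kademlia, the XOR metric mentioned above partitions the identifier space into buckets by shared-prefix length, which is what prefix-based broadcast schemes exploit: each bucket, i.e. each subtree of the identifier space, is delegated to one contact inside it. The sketch below illustrates that partitioning on a toy 8-bit identifier space; it is a simplification, not the broadcast algorithms or the coverage model analysed in the thesis.

    # Sketch of the XOR distance and the prefix-based partitioning of the ID
    # space that bucket-splitting broadcasts over Kademlia rely on. The names
    # and the 8-bit identifier space are illustrative only.
    from typing import Dict, List

    ID_BITS = 8

    def xor_distance(a: int, b: int) -> int:
        """Kademlia distance between two node identifiers."""
        return a ^ b

    def bucket_index(self_id: int, other_id: int) -> int:
        """Index of the k-bucket: position of the highest differing bit."""
        return xor_distance(self_id, other_id).bit_length() - 1

    def broadcast_partitions(self_id: int, contacts: List[int]) -> Dict[int, List[int]]:
        """Group known contacts by bucket; a broadcast delegates each group
        (one subtree of the ID space) to a single contact inside it."""
        groups: Dict[int, List[int]] = {}
        for contact in contacts:
            if contact != self_id:
                groups.setdefault(bucket_index(self_id, contact), []).append(contact)
        return groups

    if __name__ == "__main__":
        me = 0b10110010
        peers = [0b10110011, 0b10100000, 0b01011100, 0b11000001, 0b10111111]
        for bucket, members in sorted(broadcast_partitions(me, peers).items()):
            print(f"bucket {bucket}: delegate to one of {[bin(m) for m in members]}")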

    Referee-based architectures for massively multiplayer online games

    Networked computer games are played amongst players on different hosts across the Internet. Massively Multiplayer Online Games (MMOG) are network games in which thousands of players participate simultaneously in each instance of the virtual world. Current commercial MMOG use a Client/Server (C/S) architecture in which the server simulates and validates the game, and notifies players about the current game state. While C/S is very popular, it has several limitations: (i) C/S has poor scalability, as the server is a bandwidth and processing bottleneck; (ii) all updates must be routed through the server, reducing responsiveness; (iii) players with lower client-to-server delay than their opponents have an unfair advantage, as they can respond to game events faster; and (iv) the server is a single point of failure. The Mirrored Server (MS) architecture uses multiple mirrored servers connected via a private network. MS achieves better scalability, responsiveness, fairness, and reliability than C/S; however, as updates are still routed through the mirrored servers, these problems are not eliminated. P2P network game architectures allow players to exchange updates directly, maximising scalability, responsiveness, and fairness, while removing the single point of failure. However, P2P games are vulnerable to cheating. Several P2P architectures have been proposed to detect and/or prevent game cheating; nevertheless, they only address a subset of cheating methods. Further, these solutions require costly distributed validation algorithms that increase game delay and bandwidth, and prevent players with high latency from participating. In this thesis we propose a new cheat classification that reflects the levels at which the cheats occur: game, application, protocol, or infrastructure. We also propose three network game architectures: the Referee Anti-Cheat Scheme (RACS), the Mirrored Referee Anti-Cheat Scheme (MRACS), and the Distributed Referee Anti-Cheat Scheme (DRACS), which maximise game scalability, responsiveness, and fairness, while maintaining cheat detection/prevention equal to that in C/S. Each proposed architecture utilises one or more trusted referees to validate the game simulation - similar to the server in C/S - while allowing players to exchange updates directly - similar to peers in P2P. RACS is a hybrid C/S and P2P architecture that improves C/S by using a referee in the server. RACS allows honest players to exchange updates directly with each other, with a copy sent to the referee for validation. By allowing P2P communication, RACS has better responsiveness and fairness than C/S. Further, as the referee is not required to forward updates, it has better bandwidth and processing scalability. The RACS protocol could be applied to any existing C/S game. Compared to P2P protocols, RACS has lower delay and allows players with high delay to participate. Like many P2P architectures, RACS divides time into rounds. We have proposed two efficient solutions to find the optimal round length such that the total system delay is minimised. MRACS combines the RACS and MS architectures. A referee is used at each mirror to validate player updates, while allowing players to exchange updates directly. By using multiple mirrored referees, the bandwidth required by each referee, and the player-to-mirror delays, are reduced; this improves the scalability, responsiveness and fairness of RACS, while removing its single point of failure.
Through direct communication, MRACS improves upon MS in terms of responsiveness, fairness, and scalability. To maximise responsiveness, we have defined and solved the Client-to-Mirror Assignment (CMA) problem, which assigns clients to mirrors such that the total delay is minimised and no mirror is overloaded. We have proposed two sets of efficient solutions to CMA: the optimal J-SA/L-SA and the faster heuristic J-Greedy/L-Greedy. DRACS uses referees distributed to player hosts to minimise the publisher/developer infrastructure, and to maximise responsiveness and/or fairness. To prevent colluding players from cheating, DRACS requires every update to be validated by multiple unaffiliated referees, providing cheat detection/prevention equal to that in C/S. We have formally defined the Referee Selection Problem (RSP), which selects a set of referees from the untrusted peers such that responsiveness and/or fairness are maximised, while ensuring the probability of the majority of referees colluding is below a pre-defined threshold. We have proposed two efficient algorithms, SRS-1 and SRS-2, to solve this problem. We have evaluated the performance of RACS, MRACS, and DRACS analytically and using simulations. We have shown analytically that RACS, MRACS and DRACS have cheat detection/prevention equivalent to that in C/S. Our analysis shows that RACS has better scalability and responsiveness than C/S, and that MRACS has better scalability and responsiveness than C/S, RACS, and MS. As there are currently no publicly available traces from MMOG, we have constructed artificial but realistic inputs. We have used these inputs in all simulations in this thesis to show the benefits of our proposed architectures and algorithms.
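
    The core RACS idea described above is that players exchange updates directly while copying them to a trusted referee that validates the simulation round by round. The sketch below illustrates that referee-side flow with a single hypothetical movement rule; the update format, rule, and round handling are placeholders rather than the RACS protocol itself.

    # Hedged sketch of the referee-side flow in a RACS-like scheme: peers send
    # updates to each other and copy them to a referee, which validates each
    # round. The Update format and movement rule are hypothetical placeholders.
    from dataclasses import dataclass
    from typing import Dict, List, Tuple

    MAX_SPEED = 5.0  # illustrative game rule: maximum movement per round

    @dataclass
    class Update:
        player: str
        round_no: int
        dx: float
        dy: float

    class Referee:
        def __init__(self) -> None:
            self.positions: Dict[str, Tuple[float, float]] = {}

        def validate(self, u: Update) -> bool:
            """Accept the update only if it obeys the movement rule."""
            return (u.dx ** 2 + u.dy ** 2) ** 0.5 <= MAX_SPEED

        def apply_round(self, updates: List[Update]) -> List[str]:
            """Apply valid updates; report players whose updates were rejected."""
            rejected = []
            for u in updates:
                if self.validate(u):
                    x, y = self.positions.get(u.player, (0.0, 0.0))
                    self.positions[u.player] = (x + u.dx, y + u.dy)
                else:
                    rejected.append(u.player)
            return rejected

    if __name__ == "__main__":
        referee = Referee()
        round_updates = [Update("alice", 1, 3.0, 4.0), Update("bob", 1, 9.0, 0.0)]
        print("rejected:", referee.apply_round(round_updates))  # bob exceeds MAX_SPEED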

    A framework for the dynamic management of Peer-to-Peer overlays

    Peer-to-Peer (P2P) applications have been associated with inefficient operation, interference with other network services and large operational costs for network providers. This thesis presents a framework which can help ISPs address these issues by means of intelligent management of peer behaviour. The proposed approach involves limited control of P2P overlays without interfering with the fundamental characteristics of peer autonomy and decentralised operation. At the core of the management framework lies the Active Virtual Peer (AVP). Essentially intelligent peers operated by the network providers, the AVPs interact with the overlay from within, minimising redundant or inefficient traffic, enhancing overlay stability and facilitating the efficient and balanced use of available peer and network resources. They offer an “insider’s” view of the overlay and permit the management of P2P functions in a compatible and non-intrusive manner. AVPs can support multiple P2P protocols and coordinate to perform functions collectively. To account for the multi-faceted nature of P2P applications and to allow the incorporation of modern techniques and protocols as they appear, the framework is based on a modular architecture. Core modules for overlay control and transit traffic minimisation are presented. Towards the latter, a number of suitable P2P content caching strategies are proposed. Using a purpose-built P2P network simulator and small-scale experiments, it is demonstrated that the introduction of AVPs inside the network can significantly reduce inter-AS traffic, minimise costly multi-hop flows, increase overlay stability and load balancing, and offer improved peer transfer performance.
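
    One way to picture the transit-traffic minimisation performed by an AVP is as a locality preference when choosing content sources: serve from the AVP cache if possible, otherwise prefer peers inside the provider's own AS. The sketch below expresses that preference as a simple source-ranking rule; the ASN, data structures, and ranking criteria are illustrative assumptions, not modules of the framework itself.

    # Illustrative sketch of the locality preference an AVP-style managed peer
    # could apply when choosing where to fetch content from: serve from its own
    # cache, else prefer peers in the home AS, else external peers. The ASN and
    # data structures are assumptions, not modules of the framework.
    from dataclasses import dataclass
    from typing import List

    HOME_AS = 64512  # illustrative (private-range) ASN of the managing provider

    @dataclass
    class Source:
        peer_id: str
        asn: int
        rtt_ms: float

    def rank_sources(sources: List[Source], cached_locally: bool) -> List[Source]:
        """Order candidate sources so intra-AS transfers are tried first."""
        if cached_locally:
            return []  # serve from the AVP cache; no transit traffic at all
        return sorted(sources, key=lambda s: (s.asn != HOME_AS, s.rtt_ms))

    if __name__ == "__main__":
        candidates = [
            Source("ext-1", 3320, 40.0),
            Source("local-1", 64512, 12.0),
            Source("ext-2", 2914, 8.0),
        ]
        print([s.peer_id for s in rank_sources(candidates, cached_locally=False)])
        # -> ['local-1', 'ext-2', 'ext-1']: home-AS peer first despite higher RTT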

    Supporting Device Mobility and State Distribution through Indirection, Topological Isomorphism and Evolutionary Algorithms

    The Internet of Things will result in the deployment of many billions of wireless embedded systems, creating interactive pervasive environments. These pervasive networks will provide seamless access to sensors and actuators, enabling organisations and individuals to control and monitor their environment. The majority of devices attached to the Internet of Things will be static. However, it is anticipated that with the advent of body and vehicular networks we will see many mobile Internet of Things devices. During emergency situations, the flow of data across the Internet of Things may be disrupted, giving rise to a requirement for machine-to-machine interaction within the remaining environment. Current approaches to routing on the Internet and in wireless sensor networks fail to address the requirements of mobility and of isolated operation during failure, or to deal with the imbalance caused by either initial or failing topologies when applying geographic coordinate-based peer-to-peer storage mechanisms. The use of global and local DHT mechanisms to facilitate improved reachability and data redundancy is explored in this thesis, resulting in the development of an architecture to support the global reachability of static and mobile Internet of Things devices. This is achieved through the development of a global indirection mechanism supporting position-relative wireless environments. To support the distribution and preservation of device state within the wireless domain, a new geospatial keying mechanism is presented; this enables a device to persist state within an overlay with certain guarantees as to its survival. The guarantees relating to geospatial storage rely on the balanced allocation of distributed information. This thesis details a mechanism to balance the address space utilising evolutionary techniques. Following the generation of an initial balanced topology, we present a protocol that applies topological isomorphism to provide continued balancing and reachability of data following partial network failure. This dissertation details the analysis of the proposed protocols and their evaluation through simulation. The results show that the proposed architecture operates within the capabilities of the devices that operate in this space. The evaluation of geospatial keying within the wireless domain showed that the mechanism presented provides better device state preservation than the random placement exhibited when state is stored in overlay DHT schemes. Experiments confirm device storage imbalance when using geographic routing; however, the results provided in this thesis show that the use of genetic algorithms can provide an improved identity assignment through the application of alternating fitness between reachability and ideal key displacement. This topology, as is commonly found in geographical routing, was susceptible to imbalance following device failure. The use of topological isomorphism provided an improvement over existing geographical routing protocols in counteracting the unreachability and imbalance caused by failure.
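
    The dissertation's balancing step is described as alternating fitness between reachability and ideal key displacement. As a hedged illustration of the displacement side only, the sketch below scores an assignment of overlay keys by how far the sorted keys sit from evenly spaced positions on the ring; the key-space size and scoring are assumptions, not the actual fitness function used.

    # Hedged sketch of one plausible key-displacement fitness term: penalise how
    # far each assigned key sits from evenly spaced "ideal" positions on the ring.
    # The key-space size and scoring are assumptions, not the dissertation's
    # actual fitness function (which alternates with a reachability objective).
    from typing import List

    KEY_SPACE = 2 ** 16  # small identifier ring for illustration

    def ring_distance(a: int, b: int) -> int:
        d = abs(a - b) % KEY_SPACE
        return min(d, KEY_SPACE - d)

    def displacement_fitness(assigned_keys: List[int]) -> float:
        """Lower is better: mean distance of sorted keys from ideal even spacing."""
        n = len(assigned_keys)
        ideal = [round(i * KEY_SPACE / n) % KEY_SPACE for i in range(n)]
        actual = sorted(assigned_keys)
        return sum(ring_distance(a, i) for a, i in zip(actual, ideal)) / n

    if __name__ == "__main__":
        balanced = [0, 16384, 32768, 49152]
        clustered = [0, 100, 200, 300]
        print(displacement_fitness(balanced), displacement_fitness(clustered))
        # a balanced assignment scores 0.0; a clustered one scores much worse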

    Dynamic data placement and discovery in wide-area networks

    The workloads of online services and applications such as social networks, sensor data platforms and web search engines have become increasingly global and dynamic, setting new challenges for providing users with low-latency access to data. To achieve this, these services typically leverage a multi-site, wide-area networked infrastructure. Data access latency in such an infrastructure depends on the network paths between users and data, which are determined by the data placement and discovery strategies. Current strategies are static: they offer low latencies upon deployment but worse performance under a dynamic workload. We propose dynamic data placement and discovery strategies for wide-area networked infrastructures, which adapt to the data access workload. We achieve this with data activity correlation (DAC), an application-agnostic approach for determining the correlations between data items based on access pattern similarities. By dynamically clustering data according to DAC, network traffic within clusters is kept local. We utilise DAC as a key component in reducing access latencies for two application scenarios, each emphasising a different aspect of the problem. The first scenario assumes the fixed placement of data at sites, and thus focusses on data discovery. This is the case for a global sensor discovery platform, which aims to provide low-latency discovery of sensor metadata. We present a self-organising hierarchical infrastructure consisting of multiple DAC clusters, maintained with an online and distributed split-and-merge algorithm. This reduces the number of sites visited, and thus the latency, during discovery for a variety of workloads. The second scenario focusses on data placement. This is the case for global online services that leverage a multi-data-centre deployment to provide users with low-latency access to data. We present a geo-dynamic partitioning middleware, which maintains DAC clusters with an online elastic partition algorithm. It supports the geo-aware placement of partitions across data centres according to the workload. This provides globally distributed users with low-latency access to data for static and dynamic workloads.
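
    DAC is described as deriving correlations between data items from access-pattern similarity. One common way to express such a similarity, shown in the sketch below, is cosine similarity over per-region access-count vectors; the metric, region breakdown, and counts are illustrative and may differ from the DAC formulation and its online split-and-merge or elastic partitioning algorithms.

    # Minimal sketch of access-pattern correlation in the spirit of DAC: each
    # data item is represented by a vector of access counts per client region
    # and items are compared with cosine similarity. The metric, regions and
    # counts are illustrative; the DAC formulation and its online split-and-merge
    # and elastic partitioning algorithms are not reproduced here.
    import math
    from typing import List

    def cosine(u: List[float], v: List[float]) -> float:
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norm if norm else 0.0

    # access counts per region: [EU, US, ASIA]  (illustrative workload)
    accesses = {
        "item-a": [120.0, 5.0, 2.0],
        "item-b": [110.0, 8.0, 1.0],
        "item-c": [3.0, 90.0, 80.0],
    }

    if __name__ == "__main__":
        print("a~b", round(cosine(accesses["item-a"], accesses["item-b"]), 3))
        print("a~c", round(cosine(accesses["item-a"], accesses["item-c"]), 3))
        # items a and b correlate strongly and would be clustered together,
        # keeping their traffic local to the sites that serve the EU workload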

    Efficient service discovery in wide area networks

    Living in an increasingly networked world, with an abundance of services available to consumers, the consumer electronics market is enjoying a boom. The average consumer in the developed world may own several networked devices such as games consoles, mobile phones, PDAs, laptops and desktops, wireless picture frames and printers, to name but a few. With this growing number of networked devices comes a growing demand for services, defined here as functions requested by a client and provided by a networked node. For example, a client may wish to download and share music or pictures, find and use printer services, or look up information (e.g. train times, cinema bookings). It is notable that a significant proportion of networked devices are now mobile. Mobile devices introduce a new dynamic to the service discovery problem, such as lower battery and processing power and more expensive bandwidth. Device owners expect to access services not only in their immediate proximity, but further afield (e.g. in their homes and offices). Solving these problems is the focus of this research. This thesis offers two alternative approaches to service discovery in Wide Area Networks (WANs). Firstly, a unique combination of the Session Initiation Protocol (SIP) and the OSGi middleware technology is presented to provide both mobility and service discovery capability in WANs. Through experimentation, this technique is shown to be successful where the number of operating domains is small, but it does not scale well. To address the issue of scalability, this thesis proposes the use of Peer-to-Peer (P2P) service overlays as a medium for service discovery in WANs. To confirm that P2P overlays can in fact support service discovery, a technique utilising the Distributed Hash Table (DHT) functionality of distributed systems is used to store and retrieve service advertisements. Through simulation, this is shown to be both a scalable and a flexible service discovery technique. However, the problems associated with P2P networks with respect to efficiency are well documented. In a novel approach to reducing messaging costs in P2P networks, multi-destination multicast is used. Two well-known P2P overlays are extended using the Explicit Multi-Unicast (XCAST) protocol. The resulting analysis of this extension provides a strong argument for multiple P2P maintenance algorithms co-existing in a single P2P overlay to provide adaptable performance. A novel multi-tier P2P overlay system is presented, which is tailored for service-rich mobile devices and which provides an efficient platform for service discovery.
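
    Storing and retrieving service advertisements over a DHT, as proposed above, typically amounts to hashing the service type to a key and publishing advertisements under that key, so any node can resolve them with a single lookup. The sketch below illustrates this with a dictionary standing in for the real overlay; the record format and helper names are assumptions.

    # Sketch of DHT-backed service discovery: advertisements are published under
    # the hash of their service type, so any node can answer "who offers
    # printing?" with a single lookup. The dictionary stands in for a real DHT
    # overlay and the advertisement format is an assumption.
    import hashlib
    import json
    from typing import Dict, List

    def service_key(service_type: str) -> str:
        """Map a service type to a DHT key."""
        return hashlib.sha1(service_type.lower().encode("utf-8")).hexdigest()

    class FakeDHT:
        """Stand-in for a real DHT overlay offering the same append/get surface."""

        def __init__(self) -> None:
            self._table: Dict[str, List[str]] = {}

        def append(self, key: str, value: str) -> None:
            self._table.setdefault(key, []).append(value)

        def get(self, key: str) -> List[str]:
            return self._table.get(key, [])

    if __name__ == "__main__":
        dht = FakeDHT()
        advert = json.dumps({"type": "printer", "host": "192.0.2.10", "port": 631})
        dht.append(service_key("printer"), advert)
        print(dht.get(service_key("printer")))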

    Real-time video streaming using peer-to-peer for video distribution

    The growth of the Internet has led to research and development of several new and useful applications, including video streaming. Commercial experiments are under way to determine the feasibility of multimedia broadcasting using packet-based data networks alongside traditional over-the-air broadcasting. Broadcasting companies are offering low-cost or free versions of video content online to both gauge and generate popularity. In addition to television broadcasting, video streaming is used in a number of application areas including video conferencing, telecommuting and long-distance education. Large-scale video streaming has not become as widespread or widely deployed as could be expected. The reason for this is the high bandwidth requirement (and thus high cost) associated with video data. Provision of a constant stream of video data on a medium to large scale typically consumes a significant amount of bandwidth. An effect of this is that encoding bit rates are lowered and consequently video quality is degraded, resulting in even slower uptake rates for video streaming services. The aim of this dissertation is to investigate peer-to-peer streaming as a potential solution to this bandwidth problem. The proposed peer-to-peer based solution relies on end-user co-operation for video data distribution. This approach is highly effective in reducing the outgoing bandwidth requirement of the video streaming server. End users redistribute received video chunks amongst their respective peers and in so doing increase the potential capacity of the entire network to support more clients. A secondary effect of such a system is that encoding capabilities (including higher encoding bit rates or the encoding of additional sub-channels) can be enhanced. Peer-to-peer distribution enables any regular user to stream video to large streaming networks with many viewers. This research includes a detailed overview of the fields of video streaming and peer-to-peer networking. Techniques for optimal video preparation and data distribution were investigated. A variety of academic and commercial peer-to-peer based multimedia broadcasting systems were analysed as a means to further define and place the proposed implementation in context with respect to other peercasting implementations. A proof-of-concept of the proposed implementation was developed, mathematically analysed and simulated in a typical deployment scenario. Analysis was carried out to predict simulation performance and as a form of design evaluation and verification. The analysis highlighted some critical areas, which resulted in adaptations to the initial design as well as conditions under which performance can be guaranteed. A simulation of the proof-of-concept system was used to determine the extent of bandwidth savings for the video server. The aim of the simulations was to show that it is possible to encode and deliver video data in real time over a peer-to-peer network. The proposed system met expectations and showed significant bandwidth savings for a substantially large video streaming audience. The implementation was able to encode video in real time and continually stream video packets on time to connected peers, while continually supporting network growth by connecting additional peers (or stream viewers).
The system performed well under typical real-world restrictions on available bandwidth capacity. Dissertation (MEng), University of Pretoria, 2009. Electrical, Electronic and Computer Engineering.
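
    The bandwidth saving claimed above comes from peers re-uploading received chunks so that the server no longer has to serve every viewer directly. Below is a back-of-the-envelope sketch of that effect, under the simplifying assumption that each connected peer contributes a fixed fraction of the stream bit rate; the numbers are illustrative, not the dissertation's measured results.

    # Back-of-the-envelope sketch of the server bandwidth saving that peer
    # redistribution yields, assuming each connected peer re-uploads a fixed
    # fraction of the stream bit rate. The numbers are illustrative only.
    def server_upload_mbps(viewers: int, stream_mbps: float, peer_upload_ratio: float) -> float:
        """The server covers whatever demand the peers' uploads cannot supply."""
        demand = viewers * stream_mbps
        peer_supply = viewers * stream_mbps * peer_upload_ratio
        return max(stream_mbps, demand - peer_supply)  # at least one full stream

    if __name__ == "__main__":
        for ratio in (0.0, 0.5, 0.9):
            need = server_upload_mbps(viewers=1000, stream_mbps=1.0, peer_upload_ratio=ratio)
            print(f"peer upload ratio {ratio:.1f}: server needs ~{need:.0f} Mbps")
        # 0.0 -> 1000 Mbps (pure client/server); 0.9 -> ~100 Mbps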