336 research outputs found

    Design and implementation of architectures for the deployment of secure community wireless networks

    Full text link
    Recent advances in communication technologies, together with the proliferation of new computing devices, are shaping our environment into a ubiquitous Internet. The Internet offers a global platform for low-cost access to a vast range of telecommunication services, such as e-mail, e-commerce, tele-education, tele-health and tele-medicine. However, even in the most developed countries, a large number of rural areas are still poorly equipped with basic telecommunications infrastructure. Some efforts are under way to remedy this lack of infrastructure, but they remain insufficient. To this end, this thesis presents RuralNet, a community wireless network that provides personalized Internet access to the subscribers of a rural area. The goal of this work has been to develop a new architecture offering flexible and secure Internet access for isolated rural areas. RuralNet combines the mesh network paradigm with low-cost embedded wireless devices to offer a wide range of Internet-based services and applications. The solution developed for RuralNet can cover large areas at low cost, and it can easily be deployed and extended in terms of both coverage and the services offered. Since implementing and evaluating RuralNet at scale would require considerable cost and labor, simulation and emulation were adopted as a cost-saving alternative. To this end we developed Castadiva, a flexible emulator designed for the evaluation of MANETs and mesh networks. Castadiva is an emulator based on low-cost devices, used to evaluate the protocols and applications developed. Hortelano Otero, J. (2011). Design and implementation of architectures for the deployment of secure community wireless networks [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/10079
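    The abstract does not detail how an emulator like Castadiva enforces a virtual topology on real hardware, but a common low-cost approach is to make each physical node drop frames from every node that is not its neighbour in the emulated graph, so a fully connected testbed behaves like a sparse multi-hop mesh. The sketch below illustrates that idea only; the node names, MAC addresses and the use of iptables MAC filtering are illustrative assumptions, not Castadiva's actual implementation.

        # Hedged sketch: emulate a sparse mesh on a fully connected Wi-Fi
        # testbed by dropping frames from non-neighbours. Names are made up.
        TOPOLOGY = {                      # virtual adjacency: node -> neighbours
            "A": {"B"},
            "B": {"A", "C"},
            "C": {"B"},
        }
        MACS = {"A": "02:00:00:00:00:01",
                "B": "02:00:00:00:00:02",
                "C": "02:00:00:00:00:03"}

        def rules_for(node):
            """Build iptables commands dropping traffic from non-neighbours."""
            cmds = []
            for other, mac in MACS.items():
                if other != node and other not in TOPOLOGY[node]:
                    cmds.append(["iptables", "-A", "INPUT",
                                 "-m", "mac", "--mac-source", mac, "-j", "DROP"])
            return cmds

        for cmd in rules_for("A"):
            print(" ".join(cmd))   # on a real node: subprocess.run(cmd, check=True)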

    Performance analysis and application development of hybrid WiMAX-WiFi IP video surveillance systems

    Get PDF
    Traditional Closed Circuit Television (CCTV) analogue cameras installed in buildings and other areas of security interest necessitate the use of cable lines. However, analogue systems are limited by distance, and storing analogue data requires huge space or bandwidth. Wired systems are also prone to vandalism; they cannot be installed in hostile terrain, nor in heritage sites where cabling would distort the original design. Currently, there is a paradigm shift towards wireless solutions (WiMAX, Wi-Fi, 3G, 4G) to complement and in some cases replace wired systems. This thesis proposes a wireless solution, the Fourth-Generation Surveillance System (4GSS): a hybrid WiMAX-WiFi video surveillance system. The performance of the hybrid WiMAX-WiFi system is analysed and compared with conventional WiMAX surveillance models. Video surveillance models and an algorithm that exploits the advantages of both WiMAX and Wi-Fi, for scenarios with fixed and mobile wireless cameras, have been proposed, simulated and compared with mathematical/analytical models. The hybrid WiMAX-WiFi video surveillance model has been extended to include a wireless mesh configuration on the Wi-Fi side, to improve scalability and reliability. A performance analysis of the hybrid WiMAX-WiFi system with an appropriate mobility model has been carried out for the case of mobile cameras. A security software application for mobile smartphones that sends surveillance images to either local or remote servers has been developed. The developed software has been tested, evaluated and deployed in low-bandwidth Wi-Fi wireless network environments. WiMAX is a wireless metropolitan access network technology that provides broadband services to connected customers. Major modules and units of WiMAX include the Customer Premises Equipment (CPE), the Access Service Network (ASN), which consists of one or more Base Stations (BS), and the Connectivity Service Network (CSN). Various interfaces exist between each unit and module. WiMAX is based on the IEEE 802.16 family of standards. Wi-Fi, on the other hand, is a wireless access network technology operating at the local area scale; it is based on the IEEE 802.11 standards.
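    As a concrete illustration of the smartphone application described above, the following is a minimal sketch of a client that uploads surveillance snapshots over an unreliable, low-bandwidth Wi-Fi link, retrying with exponential backoff. The endpoint URL, timeout and retry policy are assumptions for illustration, not the thesis' actual design.

        # Hedged sketch: push one surveillance frame to a local or remote
        # server, tolerating the transient failures typical of a weak Wi-Fi link.
        import time
        import requests

        SERVER_URL = "http://surveillance.example.org/upload"   # placeholder

        def upload_snapshot(path, retries=3, backoff_s=2.0):
            """POST one JPEG frame; back off exponentially between attempts."""
            for attempt in range(retries):
                try:
                    with open(path, "rb") as f:
                        resp = requests.post(SERVER_URL,
                                             files={"frame": f}, timeout=10)
                    if resp.ok:
                        return True
                except requests.RequestException:
                    pass                        # transient radio error: retry
                time.sleep(backoff_s * (2 ** attempt))
            return False

        if __name__ == "__main__":
            print("uploaded:", upload_snapshot("frame_0001.jpg"))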

    Dissecting HTTP/2 and QUIC : measurement, evaluation and optimization

    Get PDF
    Joint-supervision doctoral thesis (cotutelle): Universitat Politècnica de Catalunya and Université catholique de Louvain. The Internet is evolving from the perspective of both usage and connectivity. The meteoric rise of smartphones has not only facilitated connectivity for the masses, it has also increased their appetite for more responsive applications. The widespread availability of wireless networks has caused a paradigm shift in the way we access the Internet. This shift has resulted in a new trend where traditional applications are being migrated to the cloud, e.g., Microsoft Office 365, Google Apps, etc. As a result, modern web content has become extremely complex and requires efficient web delivery protocols to maintain users' experience regardless of the technology they use to connect to the Internet and despite variations in the quality of users' Internet connectivity. To achieve this goal, efforts have been put into optimizing existing web and transport protocols, designing new low-latency transport protocols and introducing enhancements in the WiFi MAC layer. In recent years, several improvements have been introduced in the HTTP protocol, resulting in the HTTP/2 standard, which allows more efficient use of network resources and a reduced perception of latency. The QUIC transport protocol is another example of these ambitious efforts. Initially developed by Google as an experiment, the protocol has already made phenomenal strides, thanks to its support in Google's servers and the Chrome browser. However, there is a lack of sufficient understanding and evaluation of these new protocols across a range of environments, which opens new opportunities for research in this direction. This thesis provides a comprehensive study of the behavior, usage and performance of HTTP/2 and QUIC, and advances them by implementing several optimizations. First, in order to understand the behavior of HTTP/1 and HTTP/2 traffic, we analyze datasets of passive measurements collected in various operational networks and discover that they have very different characteristics. This calls for a reappraisal of traffic models, as well as of HTTP traffic simulation and benchmarking approaches that were built on an understanding of HTTP/1 traffic only and may no longer be valid for modern web traffic. We develop a machine learning-based method, compatible with existing flow monitoring systems, for the classification of encrypted web traffic into the appropriate HTTP versions. This will enable network administrators to identify HTTP/1 and HTTP/2 flows for network management tasks such as traffic shaping or prioritization. We also investigate the behavior of HTTP/2 stream multiplexing in the wild. We devise a methodology for the analysis of large datasets of network traffic, comprising over 200 million flows, to quantify the usage of HTTP/2 multiplexing in the wild and to understand its implications for network infrastructure. Next, we show with the help of emulations that HTTP/2 exhibits poor performance in adverse scenarios such as under high packet losses or network congestion. We confirm that the use of a single connection sometimes impairs the application performance of HTTP/2 and implement an optimization in the Chromium browser to make it more robust in such scenarios. Finally, we collect and analyze QUIC and TCP traffic in a production wireless mesh network. Our results show that while QUIC outperforms TCP in fixed networks, it exhibits significantly lower performance than TCP when there are wireless links in the end-to-end path.
    To see why this is the case, we carefully examine how delay variations, which are common in wireless networks, impact the congestion control and loss detection algorithms of QUIC. We also explore the interaction of the QUIC transport with advanced link-layer features of WiFi such as frame aggregation. We fine-tune QUIC based on our findings and show a notable increase in performance.
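    To make the flow-classification step above concrete, here is a minimal sketch of the general approach: train a supervised classifier on per-flow features that a flow monitor can observe without decrypting traffic, then label each encrypted flow as HTTP/1 or HTTP/2. The feature set, model choice and synthetic data are illustrative assumptions, not the thesis' actual method.

        # Hedged sketch: classify encrypted flows into HTTP versions from
        # flow-level features only. The data is random, purely to show the API.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        # Per-flow features, e.g. [bytes up, bytes down, packets, mean packet
        # size, duration]; real values would come from the flow monitor.
        X = rng.random((1000, 5))
        y = rng.integers(0, 2, 1000)       # 0 = HTTP/1, 1 = HTTP/2 (synthetic)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                                  random_state=0)
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X_tr, y_tr)
        print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")  # ~0.5 here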
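    QUIC's sensitivity to wireless delay variation can be seen directly in the standardized time-threshold loss detection (RFC 9002): a packet is declared lost once it is older than 9/8 of the larger of the smoothed and latest RTT samples. The sketch below, with illustrative numbers only, shows how a single delay spike (for example from WiFi frame aggregation) stretches that threshold; conversely, when jitter subsides faster than the smoothed RTT adapts, packets that are merely delayed can cross it and be spuriously retransmitted.

        # Simplified rendering of QUIC's time-threshold loss detection
        # (RFC 9002): a packet is lost if it was sent more than
        #   max(9/8 * max(smoothed_rtt, latest_rtt), granularity)
        # before a later packet was acknowledged. RTT values are illustrative.
        K_TIME_THRESHOLD = 9 / 8
        K_GRANULARITY_S = 0.001            # 1 ms timer granularity

        def loss_time_threshold(smoothed_rtt, latest_rtt):
            return max(K_TIME_THRESHOLD * max(smoothed_rtt, latest_rtt),
                       K_GRANULARITY_S)

        # A stable wired path vs. a jittery wireless path:
        print(loss_time_threshold(0.030, 0.031))   # ~0.035 s
        print(loss_time_threshold(0.030, 0.120))   # ~0.135 s after a delay spike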

    Accurate Human Tissue Characterization for Energy-Efficient Wireless On-Body Communications

    Get PDF
    The demand for Wireless Body Sensor Networks (WBSNs) is rapidly increasing due to the revolution in wearable systems, demonstrated by the penetration of on-body sensors into hospitals, sports medicine and general health-care practice. In a WBSN, the body acts as a communication channel for the propagation of electromagnetic (EM) waves, where losses are mainly due to the absorption of power in the tissue. This paper shows the effects of the dielectric properties of biological tissues on signal strength and, for the first time, relates these effects to human body composition. After a careful analysis of the results, this work proposes a reactive algorithm for power transmission that alleviates the effect of body movement and body type. This policy achieves up to 40.8% energy savings in a realistic scenario with no performance overhead.
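    A reactive transmission-power policy of this kind can be summarized in a few lines: raise the transmit power only when the received signal approaches the receiver sensitivity (for instance while the body is moving) and lower it again when the channel improves. The following sketch is a generic illustration; the thresholds, step sizes and power limits are assumptions, not values from the paper.

        # Hedged sketch of a reactive on-body transmit-power loop driven by
        # RSSI feedback. All numeric values are illustrative assumptions.
        SENSITIVITY_DBM = -90.0            # assumed receiver sensitivity
        MARGIN_DB = 5.0                    # safety margin above sensitivity
        STEP_DB = 2.0
        TX_MIN_DBM, TX_MAX_DBM = -20.0, 0.0

        def adapt_tx_power(tx_dbm, rssi_dbm):
            """One control step reacting to the latest RSSI report."""
            if rssi_dbm < SENSITIVITY_DBM + MARGIN_DB:        # link at risk
                tx_dbm = min(tx_dbm + STEP_DB, TX_MAX_DBM)
            elif rssi_dbm > SENSITIVITY_DBM + 2 * MARGIN_DB:  # headroom: save
                tx_dbm = max(tx_dbm - STEP_DB, TX_MIN_DBM)
            return tx_dbm

        tx = -10.0
        for rssi in (-70, -88, -92, -86, -75):  # RSSI trace while body moves
            tx = adapt_tx_power(tx, rssi)
            print(f"rssi={rssi} dBm -> tx={tx:.0f} dBm")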

    Actas da 10ª Conferência sobre Redes de Computadores

    Get PDF
    Universidade do Minho; CCTC; Centro Algoritmi; Cisco Systems; IEEE Portugal Section

    An elementary proposition on the dynamic routing problem in wireless networks of sensors

    Get PDF
    The routing problem (finding an optimal route from one point in a computer network to another) is surrounded by impossibility results. These results are usually expressed as lower and upper bounds on the set of nodes (or the set of links) of a network and represent the complexity of a solution to the routing problem (a routing function). The routing problem dealt with here, in particular, is a dynamic one (it accounts for network dynamics) and concerns wireless networks of sensors. Sensors form wireless links of limited capacity and time-variable quality to route messages amongst themselves. It is desired that sensors self-organize ad hoc in order to successfully carry out a routing task, e.g. provide daily soil erosion reports for a monitored watershed, or provide immediate indications of an imminent volcanic eruption, in spite of network dynamics. Link dynamics are the first barrier to finding an optimal route between a node x and a node y in a sensor network. The uncertainty of the outcome (the best next hop) of a routing function lies partially with the quality fluctuations of wireless links. Take, for example, a static network. It is known that, given the set of nodes and their link weights (or costs), a node can compute optimal routes by running, say, Dijkstra's algorithm. Link dynamics, however, mean that costs are not static. Hence, sensors need a metric (a measurable quantity of uncertainty) to monitor for fluctuations, either improvements or degradations of quality or load; when a fluctuation is sufficiently large (say, by Delta), sensors ought to update their costs and seek another route. Therein lies the other fundamental barrier to finding an optimal route: complexity. A crude argument would suggest that sensors (and their links) have an upper bound on the number of messages they can transmit, receive and store due to resource constraints. Such messages can be application traffic, in which case they are desirable, or control traffic, in which case they should be kept minimal. The first type of traffic is demand, and a user should provision for it accordingly. The second type of traffic is overhead, and it is necessary if a routing system (or scheme) is to ensure its fidelity to the application requirements (policy). It is possible for a routing scheme to approximate optimal routes (by Delta) by reducing its message and/or memory complexity. The common denominator of the routing problem and the desire to minimize overhead while approximating optimal routes is Delta, the deviation (or stretch) of a computed route from an optimal one, as computed by a node that has instantaneous knowledge of the set of all nodes and their interaction costs (an oracle). This dissertation deals with both problems in unison. To do so, it needs to translate the policy space (the user objectives) into a metric space (routing objectives). It does so by means of a cost function that normalizes metrics into a number of hops. Then it proceeds to devise, design, and implement a scheme that computes minimum-hop-count routes with manageable complexity. The theory presented is founded on (well-ordered) sets with respect to an elementary proposition: that a route from a source x to a destination y can be computed either by y sending an advertisement to the set of all nodes, or by x sending a query to the set of all nodes; henceforth the proactive method (of y) and the reactive method (of x), respectively.
    The debate between proactive and reactive routing protocols appears in many instances of the routing problem (e.g. routing in mobile networks, routing in delay-tolerant networks, compact routing), and it is focused on whether nodes should know a priori all routes and then select the best one (with the proactive method), or whether each node could simply search for a (hopefully best) route on demand (with the reactive method). The proactive method is stateful, as it requires the entire metric space - the set of nodes and their interaction costs - in memory (in a routing table). The routes computed by the proactive method are optimal, and the lower and upper bounds of proactive schemes match those of an oracle. Any attempt to reduce the proactive overhead, e.g. by introducing hierarchies, will result in sub-optimal routes (of known stretch). The reactive method is stateless, as it requires no information whatsoever to compute a route. Reactive schemes - at least as they are presently understood - compute sub-optimal routes (and, thus far, of unknown stretch). This dissertation attempts to answer the following question: "What is the least amount of state required to compute an optimal route from a source to a destination?" A hybrid routing scheme is used to investigate this question, one that uses the proactive method to compute routes to near destinations and the reactive method for distant destinations. It is shown that there are cases where hybrid schemes can converge to optimal routes, despite possessing incomplete routing state, and that the necessary and sufficient condition to compute optimal routes with local state alone is related neither to the size nor to the density of a network; it is rather the circumference (the size of the largest cycle) of a network that matters. Counterexamples, where local state is insufficient, are discussed to derive the worst-case stretch. The theory is augmented with simulation results and a small experimental testbed to motivate the discussion on how the policy space (user requirements) can translate into metric spaces and how different metrics affect performance. On the debate between proactive and reactive protocols, it is shown that the two classes are equivalent. The dissertation concludes with a discussion on the applicability of its results and poses some open problems.
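    As a small worked illustration of the oracle baseline used throughout this abstract, the sketch below computes optimal route costs with Dijkstra's algorithm over a toy graph and then measures the stretch (Delta) of a sub-optimal route as the ratio of its cost to the oracle's optimum. The graph and the chosen route are illustrative, not taken from the dissertation.

        # Oracle view: with full knowledge of nodes and link costs, Dijkstra
        # yields optimal routes; stretch = computed cost / optimal cost.
        import heapq

        GRAPH = {  # node -> {neighbour: link cost (e.g. normalized hops)}
            "x": {"a": 1, "b": 4},
            "a": {"x": 1, "y": 5},
            "b": {"x": 4, "y": 1},
            "y": {"a": 5, "b": 1},
        }

        def dijkstra(src):
            dist = {src: 0.0}
            pq = [(0.0, src)]
            while pq:
                d, u = heapq.heappop(pq)
                if d > dist.get(u, float("inf")):
                    continue
                for v, w in GRAPH[u].items():
                    if d + w < dist.get(v, float("inf")):
                        dist[v] = d + w
                        heapq.heappush(pq, (d + w, v))
            return dist

        opt = dijkstra("x")["y"]                      # oracle-optimal cost x -> y
        computed = GRAPH["x"]["a"] + GRAPH["a"]["y"]  # some scheme's route x-a-y
        print(f"optimal={opt}, computed={computed}, stretch={computed / opt:.2f}")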

    Efficiency and Sustainability of the Distributed Renewable Hybrid Power Systems Based on the Energy Internet, Blockchain Technology and Smart Contracts

    Get PDF
    The climate changes that are visible today are a challenge for the global research community. In this context, renewable energy sources, fuel cell systems, and other energy-generating sources must be optimally combined and connected to the grid using advanced energy transaction methods. This book presents the latest solutions for implementing fuel cells and renewable energy in mobile and stationary applications, such as hybrid and microgrid power systems based on the energy Internet, blockchain technology, and smart contracts; we hope they are of interest to readers working in these fields.