603 research outputs found

    Characterizing and Improving the Reliability of Broadband Internet Access

    Full text link
    In this paper, we empirically demonstrate the growing importance of reliability by measuring its effect on user behavior. We present an approach for broadband reliability characterization using data collected by many emerging national initiatives to study broadband, and apply it to the data gathered by the Federal Communications Commission's Measuring Broadband America project. Motivated by our findings, we present the design, implementation, and evaluation of a practical approach for improving the reliability of broadband Internet access with multihoming. Comment: 15 pages, 14 figures, 6 tables
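    The reliability gain from multihoming can be illustrated with a simple availability calculation. Assuming independent link failures (an illustrative assumption, not a result from the paper), a multihomed connection is down only when every access link is down at once:

```python
def combined_availability(availabilities):
    """Availability of a multihomed connection that stays up as long as
    at least one access link is up, assuming independent link failures."""
    unavailable = 1.0
    for a in availabilities:
        unavailable *= (1.0 - a)  # probability that this link is also down
    return 1.0 - unavailable

# Two links at 99% availability each yield roughly 99.99% combined.
print(round(combined_availability([0.99, 0.99]), 4))  # → 0.9999
```

    Under this model, adding a second 99%-available link cuts expected downtime from roughly 88 hours to under an hour per year.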

    The Road Ahead for Networking: A Survey on ICN-IP Coexistence Solutions

    Full text link
    In recent years, the Internet has experienced an unexpected paradigm shift in its usage model, which has pushed researchers towards the design of the Information-Centric Networking (ICN) paradigm as a possible replacement for the existing architecture. Even though both academia and industry have investigated the feasibility and effectiveness of ICN, achieving a complete replacement of the Internet Protocol (IP) is a challenging task. Some research groups have already addressed coexistence by designing their own architectures, but none of them is a definitive solution for moving towards the future Internet while the underlying network remains unaltered. To design such an architecture, the research community now needs a comprehensive overview of the existing solutions that have so far addressed coexistence. The purpose of this paper is to reach this goal by providing the first comprehensive survey and classification of the coexistence architectures according to their features (i.e., deployment approach, deployment scenarios, addressed coexistence requirements, and architecture or technology used) and evaluation parameters (i.e., challenges emerging during deployment and the runtime behaviour of an architecture). We believe that this paper fills the gap on the way towards the design of a definitive coexistence architecture. Comment: 23 pages, 16 figures, 3 tables

    Machine learning models for traffic classification in electromagnetic nano-networks

    Get PDF
    Wireless electromagnetic nano-networks connect a growing number of nano-sensors, and the traffic volumes they generate have increased dramatically, enabling various applications of the Internet of Nano-Things. Nano-network traffic classification is increasingly challenging: different types of flows must be analyzed to study the overall performance of a nano-network that connects to the Internet through micro/nano-gateways. Traditional classification techniques exist, such as the port-based and load-based techniques; however, the most promising recent technique is machine learning. Because machine learning models have a great impact on traffic classification and network performance evaluation in general, it is difficult to declare which model is the best, or the most suitable, for analyzing the large volumes of traffic collected in operational nano-networks. In this paper, we study the classification of nano-network traffic captured by a micro/nano-gateway, applying five supervised machine learning algorithms to analyze the traffic and distinguish nano-network traffic from traditional traffic. The proposed models are experimentally evaluated and compared to identify the most adequate classifier for nano-network traffic, which achieves very good accuracy and performance scores relative to the other classifiers. This work was supported in part by the "Agencia Estatal de Investigación" of the "Ministerio de Ciencia e Innovación" of Spain under Project PID2019-108713RB-C51/MCIN/AEI/10.13039/501100011033, and in part by the "Agència de Gestió d'Ajuts Universitaris i de Recerca" (AGAUR) of the "Generalitat de Catalunya" under Grant 2021FI_B2 00091. Postprint (published version)
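    As a minimal, hypothetical illustration of supervised flow classification (the paper's actual algorithms and features are not detailed above), a nearest-neighbour classifier over simple per-flow features such as mean packet size and mean inter-arrival time could be sketched as:

```python
import math

def knn_classify(flows, labels, query, k=3):
    """Classify a flow by majority vote among its k nearest labelled flows.
    flows: list of feature vectors; labels: class of each training flow."""
    dists = sorted((math.dist(f, query), lab) for f, lab in zip(flows, labels))
    votes = [lab for _, lab in dists[:k]]
    return max(set(votes), key=votes.count)

# Hypothetical training flows: (mean packet size in bytes, mean inter-arrival in s)
flows = [(40, 0.1), (45, 0.12), (1400, 0.9), (1350, 0.85)]
labels = ["nano", "nano", "bulk", "bulk"]
print(knn_classify(flows, labels, (42, 0.11)))  # → nano
```

    In practice, a study like this would compare several such supervised models on labelled gateway traces rather than rely on a single classifier.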

    Best effort measurement based congestion control

    Get PDF

    Internet sharing in community networks

    Get PDF
    Cotutela Universitat Politècnica de Catalunya i Instituto Superior Técnico. The majority of the world's population has no Internet access, or no adequate access. This implies that the Internet cannot provide universal service, reaching everyone without discrimination. Global access to the Internet for all requires the expansion of network infrastructures and a dramatic reduction in Internet access costs, especially in less developed geographical regions. Local communities come together to build their own network infrastructures, known as Community Networks, and provide accessible and affordable local and Internet inter-networking. Sharing resources, such as infrastructure or Internet access, is encouraged at all levels in order to lower the cost of connectivity and services. Communities can develop their own network infrastructures as a commons, using several interconnected sub-networks when the scale requires it, and sharing several Internet gateways among their participants. Shared Internet access is offered through web proxy gateways, where individuals or organisations share the full or spare capacity of their Internet connections with other participants. However, these gateway nodes may be overloaded by demand, and their Internet capacity may degrade due to lack of regulation. This thesis investigates whether shared Internet access in community networks can be utilized to provide universal Internet access. As a first step in this direction, we explored the characteristics, limitations and usability of a concrete shared Internet web proxy service in community networks. Based on our findings, we studied and proposed mechanisms to improve the user experience and fairness of Internet-sharing web proxy services in community networks, without introducing significant overhead to the network and other services. More specifically, we proposed a scalable client-side Internet gateway selection mechanism suitable for heterogeneous environments such as community networks. Finally, we studied and proposed techniques for sharing spare Internet capacity without deteriorating the contributors' performance. Postprint (published version)
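    A client-side gateway selection mechanism of the kind proposed could, for example, score each shared gateway by a weighted combination of measured round-trip time and reported load; the scoring rule and names below are illustrative assumptions, not the thesis's actual algorithm:

```python
def select_gateway(gateways, alpha=0.5):
    """Pick the gateway with the lowest combined score.
    gateways: dict name -> (rtt_ms, load_fraction); alpha weights
    normalized latency against load, both mapped into [0, 1]."""
    max_rtt = max(rtt for rtt, _ in gateways.values()) or 1.0
    def score(entry):
        rtt, load = entry
        return alpha * (rtt / max_rtt) + (1 - alpha) * load
    return min(gateways, key=lambda name: score(gateways[name]))

# A nearby but heavily loaded gateway can lose to a farther, idle one.
print(select_gateway({"gw-a": (30, 0.9), "gw-b": (60, 0.2)}))  # → gw-b
```

    Running the selection on the client keeps the mechanism scalable: no central coordinator is needed, which suits heterogeneous community networks.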

    Reducing Internet Latency: A Survey of Techniques and Their Merits

    Get PDF
    Bob Briscoe, Anna Brunstrom, Andreas Petlund, David Hayes, David Ros, Ing-Jyh Tsang, Stein Gjessing, Gorry Fairhurst, Carsten Griwodz, Michael Welzl. Peer reviewed. Preprint

    User-Centric Quality of Service Provisioning in IP Networks

    Get PDF
    The Internet has become the preferred transport medium for almost every type of communication, continuing to grow both in terms of the number of users and of delivered services. Efforts have been made to ensure that time-sensitive applications receive sufficient resources and subsequently receive an acceptable Quality of Service (QoS). However, typical Internet users no longer use a single service at a given point in time; they are instead engaged in a multimedia-rich experience comprising many different concurrent services. Given the scalability problems raised by the diversity of the users and traffic, in conjunction with their increasing expectations, the task of QoS provisioning can no longer be approached from the perspective of providing priority to specific traffic types over coexisting services, either through explicit resource reservation or through traffic classification using static policies, as is the case with the current approach to QoS provisioning, Differentiated Services (Diffserv). This current use of static resource allocation and traffic shaping methods reveals a distinct lack of synergy between current QoS practices and user activities, highlighting the need for a QoS solution that reflects the user's services. The aim of this thesis is to investigate and propose a novel QoS architecture which considers the activities of the user and manages resources from a user-centric perspective. The research begins with a comprehensive examination of existing QoS technologies and mechanisms, arguing that current QoS practices are too static in their configuration and typically give priority to specific individual services rather than considering the user experience. The analysis also reveals the potential threat that unresponsive application traffic presents to coexisting Internet services and QoS efforts, and introduces the requirement for a balance between application QoS and fairness. 
    This thesis proposes a novel architecture, the Congestion Aware Packet Scheduler (CAPS), which manages and controls traffic at the point of service aggregation in order to optimise the overall QoS of the user experience. The CAPS architecture, in contrast to traditional QoS alternatives, places no predetermined precedence on any specific traffic type; instead, it adapts QoS policies to each individual's Internet traffic profile and dynamically controls the ratio of user services to maintain an optimised QoS experience. The rationale behind this approach is to enable a QoS-optimised experience for every Internet user, not just those using preferred services. Furthermore, unresponsive bandwidth-intensive applications, such as Peer-to-Peer, are managed fairly while minimising their impact on coexisting services. The CAPS architecture has been validated through extensive simulations, with the topologies used replicating the complexity and scale of real ISP network infrastructures. The results show that for a number of different user-traffic profiles, the proposed approach achieves an improved aggregate QoS for each user when compared with Best-effort Internet, traditional Diffserv and Weighted-RED configurations. Furthermore, the results demonstrate that the proposed architecture not only provides an optimised QoS to the user, irrespective of their traffic profile, but, through the avoidance of static resource allocation, can adapt with the Internet user as their use of services changes. France Telecom
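    The core idea of adapting QoS policy to each individual's traffic profile, rather than to a global class priority, can be sketched with a proportional-share rule; this is an illustrative simplification, not the actual CAPS scheduler:

```python
def per_user_shares(profile, capacity):
    """Split one user's access capacity across that user's own service
    classes in proportion to their recent usage, instead of applying a
    static, network-wide priority. profile: dict service -> recent bytes."""
    total = sum(profile.values())
    if total == 0:
        share = capacity / len(profile)          # no history: split evenly
        return {service: share for service in profile}
    return {service: capacity * b / total for service, b in profile.items()}

# A user whose recent traffic is 3/4 P2P gets 3/4 of their capacity for P2P.
print(per_user_shares({"voip": 1, "p2p": 3}, 8))  # → {'voip': 2.0, 'p2p': 6.0}
```

    A real scheduler would additionally cap unresponsive classes and react to congestion signals, but the per-user, profile-driven split is the contrast with static Diffserv-style priorities.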

    Secure VoIP Performance Measurement

    Get PDF
    This project presents a mechanism for the instrumentation of secure VoIP calls. The experiments were run under different network conditions and security systems, with VoIP services such as Google Talk, Express Talk and Skype under test. The project allowed analysis of the voice quality of the VoIP services based on the Mean Opinion Score (MOS) values generated by Perceptual Evaluation of Speech Quality (PESQ). The quality of the audio streams produced was subject to end-to-end delay, jitter, packet loss and extra processing in the networking hardware and end devices due to Internetworking Layer security or Transport Layer security implementations. The MOS values were mapped to Perceptual Evaluation of Speech Quality for wideband (PESQ-WB) scores. From these PESQ-WB scores, graphs of the mean of 10 runs, and box-and-whisker plots for each parameter, were drawn and analysed in order to deduce the quality of each VoIP service. The E-model was used to predict network readiness, and the Common Vulnerability Scoring System (CVSS) was used to predict network vulnerabilities. The project also provided a mechanism to measure the throughput for each test case. The overall performance of each VoIP service was determined by its PESQ-WB scores, CVSS scores and throughput. The experiments demonstrated the relationship among VoIP performance, VoIP security and VoIP service type. They also suggested that, compared with an unsecured IPIP tunnel, Internetworking Layer security such as IPSec ESP, or Transport Layer security such as OpenVPN TLS, would improve VoIP security by reducing the vulnerabilities of the media part of the VoIP signal. Moreover, adding a security layer has little impact on VoIP voice quality.
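    The E-model's network-readiness prediction rests on a transmission rating factor R, which ITU-T G.107 maps to an estimated MOS as follows (the mapping itself is standard; packaging it as a standalone function is a sketch):

```python
def r_to_mos(r):
    """ITU-T G.107 E-model: map the transmission rating factor R
    to an estimated Mean Opinion Score."""
    if r < 0:
        return 1.0
    if r > 100:
        return 4.5
    return 1.0 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6

# The G.107 default R of 93.2 corresponds to a MOS of about 4.41.
print(round(r_to_mos(93.2), 2))  # → 4.41
```

    Impairments such as delay, packet loss or codec distortion each subtract from R, so the same mapping turns measured network conditions into a predicted listening quality.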

    Ambient intelligence in buildings : design and development of an interoperable Internet of Things platform

    Get PDF
    For many years, people and governments have been warned about the increasing levels of pollution and greenhouse gas (GHG) emissions that are endangering our lives on this planet. The Information and Communication Technology sector, usually known as the ICT sector and responsible for the computerization of society, has been pinpointed as one of the most important contributors to this problem. Many efforts, however, have been made to shift the trend towards the utilization of renewable resources, such as wind or solar power. Even though governments have agreed to follow this path and avoid the usage of non-renewable energies, it is not enough. Although the ICT sector might seem an added problem due to the number of connected devices, technology improvements and hardware optimization enable new ways of fighting global warming and GHG emissions. The aforementioned computerization has forced companies to evolve their work into a computer-assisted one, and companies now establish their main headquarters inside buildings for work coordination, connection and management. Buildings are therefore becoming one of the most important issues regarding energy consumption. To cope with this problem, the Internet of Things (IoT) offers new paradigms and alternatives for leading the change. IoT is commonly defined as the network of physical and virtual objects that are capable of collecting surrounding data and exchanging it among themselves or through the Internet. Thanks to these networks, it is possible to monitor virtually any metric inside buildings and then use this information to build efficient automated systems, commonly known as Building Energy Management Systems (BEMS), capable of drawing conclusions on how to optimally and efficiently manage the resources of the building. 
    ICT companies have foreseen this market opportunity, which, paired with the appearance of smaller, more efficient and more durable sensors, allows the development of efficient IoT systems. However, the lack of agreement and standardization creates a chaotic IoT landscape, and horizontal connectivity between such systems is still a challenge. Moreover, the vast amount of data to process requires Big Data techniques to guarantee close-to-real-time responses. This thesis initially presents a standard Cloud-based IoT architecture that tries to cope with the aforementioned problems by employing a Cloud middleware that obfuscates the underlying hardware architecture and permits the aggregation of data from multiple heterogeneous sources. Sensor information is also exposed to any third-party client after authentication. The use of automated IoT systems for managing building resources requires high reliability, resilience and availability: the loss of sensor data is not permitted, owing to the negative consequences it might have, such as disruptive resource management. It is therefore mandatory to provide backup options to sensor networks in order to guarantee correct functioning in case of partial network disconnections. Additionally, the placement of the sensors inside the building must guarantee minimal energy consumption while fulfilling the sensing requirements. Finally, a building resource management use case is presented by means of a simulation tool. The tool draws on occupants' probabilistic models and environmental condition models to actuate upon building elements and ensure optimal and efficient functioning. Occupants' comfort is also taken into consideration, and the trade-off between the two metrics is studied. 
    The thesis also improves the sensor placement problem for heterogeneous wireless sensor networks by adding clustering constraints, to ensure that each sensor type can capture its corresponding metric, and protection constraints, by enabling secondary transmission routes; for large homogeneous wireless sensor networks, it studies how to increase resilience by identifying the most critical sensors. All the presented work is meant to deliver insights and tools for current and future IoT system implementations by setting the basis for standardization agreements yet to happen. Postprint (published version)
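    A standard way to identify the most critical sensors, i.e. those whose failure alone would partition the network, is to compute the articulation points of the connectivity graph; this generic sketch is not necessarily the method used in the thesis:

```python
def articulation_points(adj):
    """Return the nodes whose removal disconnects the graph (Tarjan's
    low-link DFS). adj: dict node -> list of neighbours (undirected).
    Recursive, so suited to small and medium-sized graphs."""
    disc, low, critical = {}, {}, set()
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in adj[u]:
            if v == parent:
                continue
            if v in disc:                      # back edge to an ancestor
                low[u] = min(low[u], disc[v])
            else:
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if parent is not None and low[v] >= disc[u]:
                    critical.add(u)            # u separates v's subtree
        if parent is None and children > 1:
            critical.add(u)                    # root with >1 DFS subtree

    for node in adj:
        if node not in disc:
            dfs(node, None)
    return critical

# In a chain a - b - c, sensor b is the single point of failure.
print(articulation_points({"a": ["b"], "b": ["a", "c"], "c": ["b"]}))  # → {'b'}
```

    Sensors flagged this way are natural candidates for the backup routes and redundancy the abstract calls for, since protecting them removes single points of failure.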