
Harmonization of Computer Network Monitoring (Tietoverkkojen valvonnan yhdenmukaistaminen)

    As modern society grows increasingly dependent on computer networks, especially as the Internet of Things gains popularity, the need to monitor computer networks and their associated devices increases. In addition, the number of cyber attacks is rising, and certain malware, such as Mirai, specifically targets network devices. Monitoring computer networks and devices effectively requires effective solutions for collecting and storing the information. This thesis designs and implements a novel network monitoring system. The presented system can use state-of-the-art network monitoring protocols and harmonizes the collected information into a common data model. This design allows effective queries and further processing of the collected information. The system is evaluated by comparing it against the requirements imposed on it, by assessing how much of the information provided by several protocols can be harmonized, and by assessing the suitability of the chosen data model. The protocol overheads of the network monitoring protocols used are also evaluated. The presented system was found to fulfil the imposed requirements. Approximately 21% of the information provided by the chosen network monitoring protocols could be harmonized into the chosen data model format. This share is sufficient for effectively querying, combining, and further processing the information, and it can be improved by extending the data model and improving the information processing. The chosen data model was also shown to be suitable for the use case presented in this thesis.
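    To make the harmonization idea concrete, the following minimal Python sketch maps records from two monitoring protocols into one common schema so that a single query spans both sources. The field names, OIDs, and mappings are illustrative assumptions, not the thesis's actual data model.

```python
# Hedged sketch: harmonizing monitoring data from different protocols
# into one common record format. Field names and protocol mappings are
# illustrative guesses, not the mappings used in the thesis.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CommonRecord:
    """Unified record: one schema regardless of the source protocol."""
    device: str
    metric: str
    value: float
    unit: Optional[str] = None

def harmonize_snmp(varbind: dict) -> CommonRecord:
    # e.g. {"host": "r1", "oid": "1.3.6.1.2.1.2.2.1.10.1", "value": 1234}
    return CommonRecord(device=varbind["host"], metric=varbind["oid"],
                        value=float(varbind["value"]), unit="octets")

def harmonize_netconf(leaf: dict) -> CommonRecord:
    # e.g. {"device": "r1", "xpath": "/interfaces/interface[1]/in-octets", "text": "5678"}
    return CommonRecord(device=leaf["device"], metric=leaf["xpath"],
                        value=float(leaf["text"]), unit="octets")

records = [
    harmonize_snmp({"host": "r1", "oid": "1.3.6.1.2.1.2.2.1.10.1", "value": 1234}),
    harmonize_netconf({"device": "r1", "xpath": "/interfaces/interface[1]/in-octets",
                       "text": "5678"}),
]
# One query now works across both sources:
print(max(r.value for r in records if r.metric.endswith(("10.1", "in-octets"))))
```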

    An ontology-driven architecture for data integration and management in home-based telemonitoring scenarios

    The shift from traditional medical care to the use of new technology and engineering innovations is nowadays an interesting and growing research area, motivated mainly by a growing population with chronic conditions and disabilities. By means of information and communications technologies (ICTs), telemedicine systems offer a good solution for providing medical care at a distance to any person, in any place, at any time. Although significant contributions have been made in this field in recent decades, telemedicine, and e-health scenarios in general, still pose numerous challenges that need to be addressed by researchers in order to take maximum advantage of the benefits these systems provide and to support their long-term implementation. The goal of this research thesis is to make contributions in the field of home-based telemonitoring scenarios. By periodically collecting patients' clinical data and transferring them to physicians located at remote sites, patient health status supervision and feedback provision become possible. This type of telemedicine system guarantees patient supervision while reducing costs (enabling more autonomous patient care and avoiding hospital overflows). Furthermore, patients' quality of life and empowerment are improved. Specifically, this research investigates how a new architecture based on ontologies can be successfully used to address the main challenges presented in home-based telemonitoring scenarios. These challenges include data integration, personalized care, multi-chronic conditions, and clinical and technical management; they are the principal issues presented and discussed in this thesis. The proposed ontology-based architecture takes into account both practical and conceptual integration issues and the transfer of data between the end points of the telemonitoring scenario (i.e., communication and message exchange). The architecture includes two layers: 1) a conceptual layer and 2) a data and communication layer. On the one hand, the conceptual layer, based on ontologies, is proposed to unify the management procedure and integrate incoming data from all the sources involved in the telemonitoring process. On the other hand, the data and communication layer, based on web service technologies, is proposed to provide practical back-up to the use of the ontology, to provide a real implementation of the tasks it describes, and thus to provide a means of exchanging data. This architecture takes advantage of the combination of ontologies, rules, web services, and the autonomic computing paradigm. All are well-known technologies and popular solutions applied in the semantic web domain and the network management field. A review of these technologies and of related works that have made use of them is presented in this thesis in order to understand how they can be combined successfully to provide a solution for telemonitoring scenarios. The design and development of the ontology used in the conceptual layer led to the study of the autonomic computing paradigm and its combination with ontologies. In addition, the OWL (Web Ontology Language) language was studied and selected to express the required knowledge in the ontology, while the SPARQL language was examined for its effective use in defining rules. As an outcome of these research tasks, the HOTMES (Home Ontology for Integrated Management in Telemonitoring Scenarios) ontology, presented in this thesis, was developed.
The combination of the HOTMES ontology with SPARQL rules to provide a flexible solution for personalising management tasks and adapting the methodology to different management purposes is also discussed. The use of Web Services (WSs) was investigated to support the exchange of information defined in the conceptual layer of the architecture. A generic ontology-based solution was designed to integrate data and management procedures in the data and communication layer of the architecture. This is an innovative REST-inspired architecture that allows information contained in an ontology to be exchanged in a generic manner. This layer structure and its communication method give the approach scalability and re-usability features. The application of the HOTMES-based architecture has been studied for clinical purposes following three simple methodological stages described in this thesis. Data and management integration for context-aware and personalized monitoring services for patients with chronic conditions in the telemonitoring scenario are thus addressed. In particular, the extension of the HOTMES ontology defines a patient profile. These profiles, in combination with individual rules, provide clinical guidelines aiming to monitor and evaluate the evolution of the patient's health status. This research involved a multi-disciplinary collaboration in which clinicians had an essential role both in the ontology definition and in the validation of the proposed approach. Patient profiles were defined for 16 different diseases. Finally, two solutions were explored and compared in this thesis to address the remote technical management of all the devices that comprise the telemonitoring scenario. The first solution was based on the HOTMES ontology-based architecture. The second was based on the most popular TCP/IP management architecture, SNMP (Simple Network Management Protocol). As a general conclusion, it has been demonstrated that the combination of ontologies, rules, WSs and the autonomic computing paradigm takes advantage of the main benefits these technologies offer in terms of knowledge representation, workflow organization, data transfer, personalization of services, and self-management capabilities. It has been proven that ontologies can be successfully used to provide clear descriptions of managed data (both clinical and technical) and of ways of managing such information. This represents a further step towards the possibility of establishing more effective home-based telemonitoring systems and thus improving the remote care of patients with chronic diseases.
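As an illustration of the conceptual layer's approach, the hedged Python sketch below stores telemonitoring facts as RDF triples and evaluates a SPARQL "rule" over them with rdflib. The namespace, class names, and threshold are invented for the example; they are not the actual HOTMES definitions.

```python
# Sketch, in the spirit of HOTMES: telemonitoring facts in an RDF graph,
# with a personalised rule expressed as a SPARQL query. All names below
# are placeholders, not the real HOTMES ontology.
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

EX = Namespace("http://example.org/hotmes#")  # placeholder namespace
g = Graph()
g.add((EX.reading42, RDF.type, EX.BloodPressureReading))
g.add((EX.reading42, EX.systolic, Literal(168, datatype=XSD.integer)))
g.add((EX.reading42, EX.belongsTo, EX.patient7))

# Rule: flag readings above a (hypothetical) per-profile threshold.
ALERT_RULE = """
PREFIX ex: <http://example.org/hotmes#>
SELECT ?patient ?value WHERE {
    ?r a ex:BloodPressureReading ;
       ex:systolic ?value ;
       ex:belongsTo ?patient .
    FILTER (?value > 160)
}
"""
for patient, value in g.query(ALERT_RULE):
    print(f"alert: {patient} systolic={value}")
```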

    Managing the Transition from SNMP to NETCONF: Comparing Dual-Stack and Protocol Gateway Hybrid Approaches

    As industries become increasingly automated and pressured to seek business advantages, they often face operational constraints that make modernization and security more challenging. Such constraints include low operating budgets, long operational lifetimes, and infeasible network/device upgrade or modification paths. In order to bypass these constraints with minimal risk of disruption and to "do no harm", network administrators have come to rely on dual-stack approaches, which allow legacy protocols to co-exist with modern ones. For example, if SNMP is required for managing legacy devices and a newer protocol (NETCONF) is required for modern devices, then administrators simply modify firewall Access Control Lists (ACLs) to allow passage of both protocols. In today's networks, firewalls are ubiquitous, relatively inexpensive, and able to support multiple protocols (hence dual-stack) while providing network security. While investigating how to secure legacy devices in heterogeneous networks, it was determined that dual-stack firewall approaches do not provide adequate protection beyond layer-three filtering of the IP stack. Therefore, the NETCONF/SNMP Protocol Gateway hybrid (NSPG) was developed as an alternative for environments where security is necessary but legacy devices are infeasible to upgrade, replace, or modify. The NSPG allows network administrators to use only a single modern protocol (NETCONF) instead of both NETCONF and SNMP, and to enforce additional security controls without modifying existing deployments. It has been demonstrated that legacy devices can be securely managed in a protocol-agnostic manner using low-cost commodity hardware (e.g., the Raspberry Pi platform) with administrator-derived XML-based configuration policies.
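    The following sketch illustrates the kind of translation an NSPG-style gateway performs, using pysnmp's high-level API to serve a NETCONF-side request from a legacy SNMP device. The XPath-to-OID mapping is a hypothetical stand-in for the administrator-derived XML policies, not the NSPG's actual format.

```python
# Minimal sketch of the translation core of a NETCONF-to-SNMP gateway.
# Assumes pysnmp's high-level API; the mapping table is an illustrative
# guess at what an administrator-derived policy might encode.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

# Hypothetical mapping from NETCONF-side paths to SNMP OIDs.
XPATH_TO_OID = {
    "/system/hostname": "1.3.6.1.2.1.1.5.0",    # sysName.0
    "/system/description": "1.3.6.1.2.1.1.1.0", # sysDescr.0
}

def gateway_get(legacy_host: str, xpath: str, community: str = "public") -> str:
    """Serve a NETCONF-side <get> for `xpath` by issuing an SNMP GET."""
    oid = XPATH_TO_OID[xpath]
    err_ind, err_stat, _, var_binds = next(getCmd(
        SnmpEngine(), CommunityData(community),
        UdpTransportTarget((legacy_host, 161)), ContextData(),
        ObjectType(ObjectIdentity(oid))))
    if err_ind or err_stat:
        raise RuntimeError(str(err_ind or err_stat))
    return str(var_binds[0][1])

# print(gateway_get("192.0.2.10", "/system/hostname"))
```

A gateway structured this way lets the firewall admit NETCONF only, with SNMP confined to the segment between the gateway and the legacy device.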

    Design and implementation of a fault management service for heterogeneous networks using the TINA Network Resource Architecture

    Master of Science in Engineering.
    Faults are unavoidable and cause network downtime and degradation in large, complex communication networks. Fault management capabilities that rectify these faults are critical to improving network reliability. Current communication networks are moving towards the distributed computing environment, enabling them to transport heterogeneous multimedia information across end-to-end connections. An advanced fault management system is thus required for such communication networks. Fault management provides information on the status of the network by locating, detecting, identifying, isolating, and correcting network problems, thereby increasing network reliability. The TINA (Telecommunication Information Networking Architecture) standards define a Network Resource Architecture (NRA) that provides the framework of a transport network capable of carrying heterogeneous multimedia information across heterogeneous networks. TINA also defines a Management Architecture that follows the functional-area organization defined in the OSI (Open Systems Interconnection) Management Framework, namely fault, configuration, accounting, performance, and security management (FCAPS). The aim of this project is to utilise the TINA NRA and Management Architecture concepts and principles to design and implement a distributed Fault Management Service for heterogeneous networks. The design presented here utilises TINA's fault management specifications, together with UML modelling tools, to develop this Fault Management Service. The design incorporates the use of CORBA and SNMP to provide distributed management functionality capable of providing fault management support across heterogeneous networks. The generic nature of the fault management service is tested on the SATINA Trial platform, which consists of both an ATM network and an IP MPLS network. The report concludes that the Fault Management Service is applicable to any connection-oriented network that is modelled using the TINA NRA specification and principles.
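    The sketch below illustrates, in Python, the locate/isolate/correct fault-management flow the abstract describes. The event fields and handlers are assumptions for illustration, not TINA NRA interface definitions.

```python
# Illustrative sketch of a fault-management lifecycle: a raw symptom
# (e.g. an SNMP linkDown trap) is located on the resource model,
# affected connections are isolated, and a corrective action is taken.
# All names are invented for the example.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class FaultEvent:
    resource: str   # e.g. an ATM port or an MPLS LSP endpoint
    symptom: str    # raw indication, e.g. an SNMP linkDown trap
    detected: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def locate(event: FaultEvent) -> str:
    """Map the raw symptom onto a network-resource model element."""
    return f"NRA element for {event.resource}"

def isolate(event: FaultEvent) -> list[str]:
    """Return the connections affected by the fault, so they can be rerouted."""
    return [f"connection over {event.resource}"]

def correct(event: FaultEvent) -> None:
    print(f"rerouting {isolate(event)} away from {locate(event)}")

correct(FaultEvent(resource="atm-switch-3/port-1", symptom="linkDown"))
```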

    Discovery and Group Communication for Constrained Internet of Things Devices using the Constrained Application Protocol

    The ubiquitous Internet is rapidly spreading to new domains. This expansion of the Internet is comparable in scale to the spread of the Internet in the '90s. The resulting Internet is now commonly referred to as the Internet of Things (IoT) and is expected to connect about 50 billion devices by the year 2020. This means that in just five years from the time of writing this PhD, the number of interconnected devices will exceed the number of humans sevenfold. It is further expected that the majority of these IoT devices will be resource-constrained embedded devices such as sensors and actuators. Sensors collect information about the physical world and inject this information into the virtual world. Processing and reasoning can then occur, and decisions can be taken to act upon the physical world by injecting feedback to actuators. The integration of embedded devices into the Internet introduces new challenges, since many of the existing Internet technologies and protocols were not designed for this class of constrained devices. These devices are typically optimized for low cost and low power consumption; they thus have very limited power, memory, and processing resources and have long sleep periods. The networks formed by these embedded devices are also constrained and have characteristics different from those typical in today's Internet: high packet loss, low throughput, frequent topology changes, and small useful payload sizes. They are referred to as Low-power and Lossy Networks (LLNs). It is therefore in most cases infeasible to run standard Internet protocols on this class of constrained devices and/or LLNs. New or adapted protocols that take into consideration the capabilities of the constrained devices and the characteristics of LLNs are required. In the past few years, there have been many efforts to enable the extension of Internet technologies to constrained devices. Initially, most of these efforts focused on the networking layer. However, the expansion of the Internet in the '90s was not due to introducing new or better networking protocols; it was a result of introducing the World Wide Web (WWW), which made it easy to integrate services and applications. One of the essential technologies underpinning the WWW was the Hypertext Transfer Protocol (HTTP). Today, HTTP has become a key protocol in the realization of scalable web services built around the Representational State Transfer (REST) paradigm. The REST architectural style enables the realization of scalable and well-performing services using uniform and simple interfaces. The availability of an embedded counterpart of HTTP and the REST architecture could boost the uptake of the IoT. Therefore, more recently, work started on the integration of constrained devices in the Internet at the service level. The Internet Engineering Task Force (IETF) Constrained RESTful Environments (CoRE) working group has realized the REST architecture in a form suitable for the most constrained nodes and networks. To that end, the Constrained Application Protocol (CoAP) was introduced: a specialized RESTful web transfer protocol for use with constrained networks and nodes. CoAP realizes a subset of the REST mechanisms offered by HTTP, but is optimized for Machine-to-Machine (M2M) applications. This PhD research builds upon CoAP to enable a better integration of constrained devices in the IoT and examines proposed CoAP solutions theoretically and experimentally, proposing alternatives where appropriate.
The first part of this PhD proposes a mechanism that facilitates the deployment of sensor networks and enables the discovery, end-to-end connectivity, and service usage of newly deployed sensor nodes. The proposed approach uses CoAP and combines it with the Domain Name System (DNS) in order to enable the use of user-friendly Fully Qualified Domain Names (FQDNs) for addressing sensor nodes. It includes the automatic discovery of sensors and sensor gateways and the translation of HTTP to CoAP, thus making the sensor resources globally discoverable and accessible from any Internet-connected client using either IPv6 addresses or DNS names, via either HTTP or CoAP. As such, the proposed approach provides a feasible and flexible solution for achieving hierarchical self-organization with a minimum of pre-configuration. By doing so we minimize costly human interventions and eliminate the need to introduce new protocols dedicated to the discovery and organization of resources. This reduces both cost and the implementation footprint on the constrained devices. The second, larger, part of this PhD focuses on using CoAP to realize communication with groups of resources. In many IoT application domains, sensors or actuators need to be addressed as groups rather than individually, since individual resources might not be sufficient or useful. A simple example is that all lights in a room should go on or off as a result of the user toggling the light switch. As not all IoT applications may need group communication, the CoRE working group did not include it in the base CoAP specification. This way the base protocol is kept as efficient and as simple as possible, so that it runs on even the most constrained devices. Group communication and other features that might not be needed by all devices are standardized in a set of optional, separate extensions. We first examined the proposed CoAP extension for group communication, which utilizes Internet Protocol version 6 (IPv6) multicast. We highlight its strengths and weaknesses and propose our own complementary solution that uses unicast to realize group communication. Our solution offers capabilities beyond simple group communication. For example, we provide a validation mechanism that performs several checks on the group members to make sure that combining them is possible. We also allow the client to request that the results of the individual members be processed before they are sent to the client. For example, the client can request to obtain only the maximum value over all individual members. Another important optional extension to CoAP allows clients to continuously observe resources by registering their interest in receiving notifications from CoAP servers once the values of the observed resources change. By using this publish/subscribe mechanism, the client does not need to continuously poll a resource to find out whether its value has changed. This typically leads to more efficient communication patterns that preserve valuable device and LLN resources. Unfortunately, CoAP observe does not work together with the CoAP group communication extension, since the observe extension assumes unicast communication while the group communication extension supports only multicast communication. In this PhD we propose extending our own group communication solution to offer group observation capabilities.
By combining group observation with group processing features, it becomes possible to notify the client only about certain changes to the observed group (e.g., when the maximum value over all group members has changed). Acknowledging that both multicast and unicast have strengths and weaknesses, we propose to extend our unicast-based solution with certain multicast features. By doing so we try to combine the strengths of both approaches to obtain a better overall group communication solution that is flexible and can be tailored to the needs of the use case. Together, the proposed mechanisms represent a powerful and comprehensive solution to the challenging problem of group communication with constrained devices. We have evaluated the solutions proposed in this PhD extensively and in a variety of forms. Where possible, we have derived theoretical models and have conducted numerous simulations to validate them. We have also experimentally evaluated those solutions and compared them with other proposed solutions, first using a small demo box and later on two large-scale wireless sensor testbeds under different test conditions. The first testbed is located in a large, shielded room, which allows testing under controlled conditions. The second testbed is located inside an operational office building and thus allows testing under normal operating conditions. Those tests revealed performance issues and some other problems, and we have provided solutions and suggestions for tackling them. Apart from the main contributions, two other relevant outcomes of this PhD are described in the appendices. In the first appendix we review the most important IETF standardization efforts related to the IoT and show that, with the introduction of CoAP, a complete set of standard protocols has become available to cover the complete networking stack, thus making the step from the IoT to the Web of Things (WoT). Using only standard protocols makes it possible to integrate devices from various vendors into one big WoT accessible to humans and machines alike. In the second appendix, we provide an alternative solution for grouping constrained devices by using virtualization techniques. Our approach focuses on the objects, both resource-constrained and non-constrained, that need to cooperate, by integrating them into a secured virtual network, named an Internet of Things Virtual Network or IoT-VN. Inside this IoT-VN, full end-to-end communication can take place through the use of protocols that take the limitations of the most resource-constrained devices into account. We describe how this concept maps to several generic use cases and, as such, can constitute a valid alternative approach for supporting selected applications.
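As a rough illustration of the unicast-based group communication idea, the following Python sketch (using the aiocoap library) fans a GET out to each group member and applies a "maximum" processing step before returning a single result. The member URIs and resource layout are invented; this is a sketch of the technique, not the thesis's implementation.

```python
# Sketch: unicast-based CoAP group communication with a server-side-style
# "max" processing step, using aiocoap. Member URIs are hypothetical.
import asyncio
from aiocoap import Context, Message, GET

async def fetch(ctx, uri: str) -> float:
    """Unicast GET to one group member; payload assumed to be a number."""
    response = await ctx.request(Message(code=GET, uri=uri)).response
    return float(response.payload.decode())

async def group_max(member_uris: list[str]) -> float:
    """Fan out unicast requests to all members, return only the maximum."""
    ctx = await Context.create_client_context()
    values = await asyncio.gather(*(fetch(ctx, u) for u in member_uris))
    return max(values)

members = ["coap://[2001:db8::1]/sensors/temp",
           "coap://[2001:db8::2]/sensors/temp"]
print(asyncio.run(group_max(members)))
```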


    Judging traffic differentiation as network neutrality violation according to internet regulation

    Network Neutrality (NN) is a principle establishing that traffic generated by Internet applications should be treated equally and should not be affected by arbitrary interference, degradation, or interruption. Despite this common sense, NN has multiple definitions spread across the academic literature, which differ primarily on what constitutes the proper level of equality for the network to be considered neutral. NN definitions may also be included in regulations that control activities on the Internet. However, regulations are set by regulators whose acts are valid within a geographical area, named a jurisdiction. Thus, both academia and regulations provide multiple, heterogeneous NN definitions. In this thesis, the regulations are used as guidelines for detecting NN violations, which, under this approach, are the adoption of traffic management practices prohibited by regulators. Detection solutions can then provide helpful information for users to support claims against illegal traffic management practices. However, state-of-the-art solutions either adopt strict academic definitions (e.g., all traffic must be treated equally), which is not realistic, or adopt the regulatory definitions of a single jurisdiction, which ignores that multiple jurisdictions may be traversed in an end-to-end network path. An impact analysis showed that, under certain circumstances, from 39% to 48% of the detected Traffic Differentiations (TDs) are not NN violations when the regulations are considered, exposing that the regulatory aspect must not be ignored. This thesis proposes a Regulation Assessment step to be performed after TD detection. This step shall consider all NN definitions that may be found in an end-to-end network path and point out an NN violation when they are violated. A service is proposed to perform this step on behalf of TD detection solutions, given the unfeasibility of every solution implementing the required functionalities. A Proof-of-Concept (PoC) prototype was developed based on the requirements identified along with the impact analysis, and was evaluated using information about TDs detected by a state-of-the-art solution. The verdicts were inconclusive (whether the TD is an NN violation or not) for a quarter of the scenarios, due to lack of information about the traversed network paths and the occurrence zones (where in the network path the TD is suspected of being deployed). However, the literature already proposes approaches to obtain such information. These results should encourage the proponents of TD detection solutions to collect this data and submit it for Regulation Assessment.
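    A minimal sketch of the Regulation Assessment idea: given a detected traffic differentiation practice and the jurisdictions traversed by the network path, return a verdict. The rule entries below are invented placeholders, not actual regulations.

```python
# Hedged sketch of a Regulation Assessment step. The per-jurisdiction
# rules are illustrative placeholders, not real regulatory content.
from enum import Enum

class Verdict(Enum):
    VIOLATION = "NN violation"
    ALLOWED = "not a violation"
    INCONCLUSIVE = "inconclusive"

# practice -> prohibited? per jurisdiction (invented examples)
RULES = {
    "EU": {"throttling-by-app": True, "paid-prioritization": True},
    "BR": {"throttling-by-app": True},
}

def assess(td_practice: str, path_jurisdictions: list | None) -> Verdict:
    if not path_jurisdictions:
        # Unknown path: we cannot tell which regulations apply.
        return Verdict.INCONCLUSIVE
    if any(RULES.get(j, {}).get(td_practice, False) for j in path_jurisdictions):
        return Verdict.VIOLATION
    # Practice not prohibited by any traversed regulator: treated as permitted.
    return Verdict.ALLOWED

print(assess("throttling-by-app", ["EU", "BR"]))  # Verdict.VIOLATION
print(assess("paid-prioritization", None))        # Verdict.INCONCLUSIVE
```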

    The development of a discovery and control environment for networked audio devices based on a study of current audio control protocols

    This dissertation develops a standard device model for networked audio devices and introduces a novel discovery and control environment that uses the developed device model. The proposed standard device model is derived from a study of current audio control protocols. Both the functional capabilities and the design principles of audio control protocols are investigated, with an emphasis on Open Sound Control, SNMP, IEC-62379, AES64, CopperLan, and UPnP. An abstract model of networked audio devices is developed, and the model is implemented in each of the previously mentioned control protocols. This model is also used within a novel discovery and control environment designed around a distributed associative memory termed an object space. This environment challenges the accepted notions of the functionality provided by a control protocol. The study concludes by comparing the salient features of the different control protocols encountered in this study. Different approaches to control protocol design are considered, and several design heuristics for control protocols are proposed.
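    The object-space idea can be illustrated with a toy, in-process associative memory: devices write parameter objects into the space, and controllers retrieve them by pattern matching. A real object space is distributed across the network; this sketch, with invented field names, only shows the interaction style.

```python
# Toy sketch of an object space: a shared associative memory queried
# by pattern matching rather than by device-specific addressing.
# Field names are invented for illustration.
class ObjectSpace:
    def __init__(self):
        self._objects = []

    def write(self, obj: dict) -> None:
        self._objects.append(obj)

    def read(self, **pattern):
        """Return all objects whose fields match every key in `pattern`."""
        return [o for o in self._objects
                if all(o.get(k) == v for k, v in pattern.items())]

space = ObjectSpace()
space.write({"device": "mixer-1", "param": "gain", "channel": 3, "value": -6.0})
space.write({"device": "amp-2", "param": "mute", "value": True})

# A controller discovers every gain parameter without knowing device specifics:
print(space.read(param="gain"))
```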

    Standards as interdependent artifacts: the case of the Internet

    Thesis (Ph.D.), Massachusetts Institute of Technology, Engineering Systems Division, 2008. Includes bibliographical references.
    This thesis has explored a new idea: viewing standards as interdependent artifacts and studying them with network analysis tools. Using the set of Internet standards as an example, the research of this thesis includes the citation network, the author affiliation network, and the co-author network of the Internet standards over the period 1989 to 2004. The major network analysis tools used include cohesive subgroup decomposition (using the algorithm by Newman and Girvan), regular equivalence class decomposition (using the REGE algorithm and the method developed in this thesis), nodal prestige and acquaintance (both calculated with Kleinberg's technique), and other social network analysis tools. Qualitative analyses of the historical and technical context of the standards, as well as statistical analyses of various kinds, are also used in this research. A major finding of this thesis is that, for the understanding of the Internet, it is beneficial to consider its standards as interdependent artifacts. Because the basic mission of the Internet (i.e., to be an interoperable system that enables various services and applications) is enabled not by one or a few but by a great number of standards developed upon each other, studying the standards only as stand-alone specifications cannot really produce meaningful understanding of a workable system. Therefore, the general approaches and methodologies introduced in this thesis, which we label a systems approach, are a necessary addition to the existing approaches. A key finding of this thesis is that the citation network of the Internet standards can be decomposed into functionally coherent subgroups by using the Newman-Girvan algorithm. This result shows that the (normative) citations among the standards can meaningfully be used to help us better manage and monitor the standards system. The results in this thesis indicate that organizing the development efforts of the Internet standards into (now) 121 Working Groups was done in a manner reasonably consistent with achieving a modular (and thus more evolvable) standards system. A second decomposition of the standards network was achieved by employing the REGE algorithm together with a new method developed in this thesis (see the Appendix) for identifying regular equivalence classes. Five meaningful subgroups of the Internet standards were identified, each of which occupies a specific position and plays a specific role in the network. The five positions are reflected in the names we have assigned to them: the Foundations, the Established, the Transients, the Newcomers, and the Stand-alones. The life cycle among these positions was uncovered and is one of the insights that the systems approach to this standards system gives relative to the evolution of the overall standards system. Another insight concerning the evolution of the standards system is the development of a predictive model for the promotion of standards to a new status (i.e., Proposed, Draft, and Internet Standard as the three ascending statuses). This model also has practical potential for managers of standards-setting organizations and for firms (and individuals) interested in participating efficiently in standards-setting processes.
The model's prediction is based on assessing the implicit social influence of a standard (based upon betweenness centrality, a social network metric, of the standard's authors) and the apparent importance of the standard to the network (based upon calculating the standard's prestige from the citation network). A deeper understanding of the factors that go into this model was also developed through the analysis of the factors that can predict increased prestige over time for a standard. The overall systems approach and the tools developed and demonstrated in this thesis for the study of the Internet standards can be applied to other standards systems. Application (and extension) to the World Wide Web, the electric power system, mobile communication, and others would, we believe, lead to important improvements in our practical and scholarly understanding of these systems.
    by Mo-Han Hsieh. Ph.D.
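    The network-analysis recipe is straightforward to reproduce with standard tooling. The sketch below uses networkx's Girvan-Newman community detection and betweenness centrality on a toy citation graph; the edges and author names are invented, not real RFC data.

```python
# Sketch of the thesis's analysis recipe with networkx, on invented data.
import networkx as nx
from networkx.algorithms.community import girvan_newman

cites = nx.DiGraph()  # edge u -> v means "standard u cites standard v"
cites.add_edges_from([("RFC2", "RFC1"), ("RFC3", "RFC1"),
                      ("RFC4", "RFC3"), ("RFC5", "RFC4")])

# A simple prestige proxy: how heavily a standard is cited (in-degree here;
# the thesis uses Kleinberg-style scores).
prestige = dict(cites.in_degree())

# Newman-Girvan decomposition into cohesive subgroups (first split).
first_split = next(girvan_newman(cites.to_undirected()))
print(prestige, [sorted(c) for c in first_split])

# Social influence of authors via betweenness centrality on a co-author graph.
coauthors = nx.Graph([("alice", "bob"), ("bob", "carol")])
print(nx.betweenness_centrality(coauthors))
```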

    A web services-based framework for efficient monitoring and event reporting

    Network and Service Management (NSM) is a research discipline with significant research contributions over the last 25 years. Despite the numerous standardised solutions that have been proposed for NSM, the quest for an "all-encompassing technology" still continues. A technology recently introduced to address NSM problems is Web Services (WS). Despite the research effort put into WS and their potential for addressing NSM objectives, issues such as efficiency and interoperability need to be solved before WS can be used for NSM. This thesis looks at two techniques to increase the efficiency of WS management applications so that the latter can be used for efficient monitoring and event reporting. The first is a query tool we built that can be used for efficient retrieval of management state data close to the devices where the data are hosted. The second technique is policies used to delegate a number of tasks from a manager to an agent, making WS-based event reporting systems more efficient. We tested the performance of these mechanisms by incorporating them in a custom monitoring and event reporting framework and supporting systems we built, comparing them against other mechanisms (XPath) that have been proposed for the same tasks, as well as against previous technologies such as SNMP. Through these tests we have shown that these mechanisms allow us to use WS efficiently in various monitoring and event reporting scenarios. Having shown the potential of our techniques, we also present the design and implementation challenges of building a GUI tool to support and enhance the above systems with extra capabilities. In summary, we expect that the other problems WS face will be solved in the near future, making WS a capable platform for NSM.
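    The delegation idea can be sketched as follows: the manager ships a policy to the agent, and the agent evaluates events locally so that only matching events cross the network as WS notifications. The policy shape below is an illustrative assumption, not the thesis's policy language.

```python
# Sketch of manager-to-agent policy delegation for event reporting.
# The Policy structure and event fields are invented for illustration.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Policy:
    """Manager-authored rule that the agent evaluates locally."""
    name: str
    condition: Callable[[dict], bool]

# A delegated policy: report only high CPU readings.
policies = [Policy("cpu-high",
                   lambda e: e.get("metric") == "cpu" and e.get("value", 0) > 90)]

def agent_handle(event: dict) -> None:
    # Only events matching a delegated policy are sent as WS notifications;
    # everything else is suppressed at the agent, saving bandwidth.
    if any(p.condition(event) for p in policies):
        print(f"notify manager: {event}")

agent_handle({"metric": "cpu", "value": 95})  # reported
agent_handle({"metric": "cpu", "value": 20})  # suppressed
```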