829 research outputs found

    Ensuring interoperability between network elements in next generation networks

    Next Generation Networks (NGNs), based on the Internet Protocol (IP), implement services such as IP-based telephony and are beginning to replace classic telephony systems. Due to the development and implementation of powerful new services, these systems are becoming increasingly complex. Implementing these new services (typically software-based network elements) is often accompanied by unexpected and erratic behaviour, which can manifest as interoperability problems. The reason for this is insufficient testing at the developing companies. The testing of such products is by nature a costly and time-consuming exercise and is therefore cut down to what is considered the maximum acceptable level. Ensuring interoperability between network elements is a known challenge. However, there exists no concept of which testing methods should be utilised to achieve an acceptable level of quality. The objective of this thesis was to improve the interoperability between network elements in NGNs by creating a testing scheme comprising three diverse testing methods: conformance testing, interoperability testing and post-hoc analysis. In the first project, a novel conformance testing methodology for developing sets of conformance test cases for service specifications in NGNs was proposed. This methodology significantly improves the chance of interoperability and provides a considerable enhancement to the currently used interoperability tests. It was evaluated by successfully applying it to the Presence Service. The second report proposed a post-hoc methodology which enables the identification of the ultimate causes of interoperability problems in an NGN in daily operation. The new methods were implemented in the tool IMPACT (IP-Based Multi Protocol Posthoc Analyzer and Conformance Tester), which stores all messages exchanged between network elements in a database. Using SQL queries, the causes of errors can be found efficiently.
Overall, the presented testing scheme significantly improves the chance that network elements interoperate successfully by providing new methods. Beyond that, the quality of the software product is raised by mapping these methods to phases in a process model and providing well-defined guidance on which test method is best suited at each stage.
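The abstract describes IMPACT as storing all exchanged messages in a database so that error causes can be located with SQL queries. The tool's actual schema is not given, so the table layout, column names, and SIP-style example trace below are assumptions; this is only a minimal sketch of the post-hoc analysis idea, using Python's built-in sqlite3 module.

```python
import sqlite3

# Hypothetical message-trace schema; the real IMPACT schema is not published
# in the abstract, so every name here is an assumption.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE messages (
    id INTEGER PRIMARY KEY,
    src TEXT, dst TEXT,   -- network elements exchanging the message
    protocol TEXT,        -- e.g. SIP for IP-based telephony
    method TEXT,          -- request method or response status
    call_id TEXT          -- correlates messages belonging to one dialog
)""")

# A captured trace: one INVITE dialog completes, another never gets a response.
rows = [
    ("proxyA", "proxyB", "SIP", "INVITE", "call-1"),
    ("proxyB", "proxyA", "SIP", "200 OK", "call-1"),
    ("proxyA", "proxyC", "SIP", "INVITE", "call-2"),  # no answer recorded
]
conn.executemany(
    "INSERT INTO messages (src, dst, protocol, method, call_id) "
    "VALUES (?, ?, ?, ?, ?)", rows)

# Find dialogs where a request was sent but no success response came back --
# a typical symptom of an interoperability problem between two elements.
unanswered = conn.execute("""
    SELECT req.call_id, req.src, req.dst
    FROM messages req
    WHERE req.method = 'INVITE'
      AND NOT EXISTS (
          SELECT 1 FROM messages resp
          WHERE resp.call_id = req.call_id AND resp.method LIKE '2%'
      )
""").fetchall()
print(unanswered)  # dialogs that never received a success response
```

A query like this narrows thousands of captured messages down to the few dialogs where interoperation actually broke, which is the efficiency gain the abstract claims for the SQL-based approach.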

    CLOCIS: Cloud-based conformance testing framework for IoT devices in the future internet

    In recent years, the Internet of Things (IoT) has not only become ubiquitous in daily life but has also emerged as a pivotal technology across various sectors, including smart factories and smart cities. Consequently, there is a pressing need to ensure the consistent and uninterrupted delivery of IoT services. Conformance testing has thus become an integral aspect of IoT technologies. However, traditional methods of IoT conformance testing fall short of addressing the evolving requirements put forth by both industry and academia. Historically, IoT testing has necessitated a visit to a testing laboratory, implying that both the testing systems and the testers must be co-located. Furthermore, there is a notable absence of a comprehensive method for testing an array of IoT standards, especially given their inherent heterogeneity. With a surge in the development of diverse IoT standards, crafting an appropriate testing environment poses challenges. To address these concerns, this article introduces a method for remote IoT conformance testing, underpinned by a novel conceptual architecture termed CLOCIS. This architecture encompasses an extensible approach tailored to a myriad of IoT standards. Moreover, we elucidate the methods and procedures integral to testing IoT devices. CLOCIS, predicated on this conceptual framework, is implemented, and to attest to its viability, we undertake IoT conformance testing and present the results. When leveraging CLOCIS, small and medium-sized enterprises (SMEs) and entities in the throes of IoT service development stand to benefit from a reduced time to market and cost-efficient testing procedures. Additionally, this innovation holds promise for IoT standardization communities, enabling them to champion their standards with renewed vigor.
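The abstract's core idea is that a device under test and the testing system need not be co-located: a test suite for a given standard is evaluated remotely against the device's observed behaviour. The CLOCIS API itself is not described here, so the suite structure, test-case names, and device records below are invented stand-ins; this is a sketch of the concept, not the framework's interface.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class TestCase:
    # Hypothetical conformance test case: a name plus a predicate over the
    # device's recorded responses. Real suites would encode standard clauses.
    name: str
    check: Callable[[dict], bool]

def run_remote_suite(device_responses, suite):
    """Evaluate a device's recorded responses against a standard's test cases,
    producing a per-test verdict without any on-site equipment."""
    return {tc.name: "pass" if tc.check(device_responses) else "fail"
            for tc in suite}

# Assumed test cases for a fictional IoT standard profile.
suite = [
    TestCase("responds-to-discovery", lambda r: "discovery" in r),
    TestCase("reports-firmware-version", lambda r: r.get("version", "").count(".") == 2),
]

good_device = {"discovery": "ok", "version": "1.0.3"}
bad_device = {"version": "2"}
print(run_remote_suite(good_device, suite))
print(run_remote_suite(bad_device, suite))
```

In a deployment along the lines the article describes, the suite would live in the cloud and the device responses would arrive over the network, which is what spares SMEs the trip to a testing laboratory.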

    Center For Distributed Interactive Simulation Testing: Volume I Technical Proposal

    Proposal for creating a Center for Distributed Interactive Simulation Testing, which would provide facilities and capabilities for conducting conformance, interoperability and performance testing for the evolving Department of Defense-sponsored Military Standard for Distributed Interactive Simulation.

    Ensuring Interoperability for the Internet of Things: Experience with Testing the CoAP Protocol

    Constrained Application Protocol (CoAP) is a specialized web transfer protocol designed to realize interoperation with constrained networks and nodes for machine-to-machine applications such as smart energy and building automation. As an important ubiquitous application protocol for the future Internet of Things, CoAP will potentially be implemented by a wide range of smart devices to achieve cooperative services. Therefore, a high level of interoperability between CoAP implementations is crucial. In this context, CoAP Plugtest, the first formal CoAP interoperability testing event, was held in Paris in March 2012 to motivate vendors to verify the interoperability of their equipment. The event turned out to be successful thanks to our contribution, including the test method and tool. This paper presents the testing method and procedure for the CoAP Plugtest event. To carry out the tests, a set of test objectives concerning the most important properties of CoAP was selected and used to measure the interoperability of CoAP implementations. The verification process was automated by implementing a test validation tool based on the technique of passive testing. Using the test tool, a number of devices were successfully tested.
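Passive testing, as used by the Plugtest validation tool, derives verdicts purely from observed traffic rather than by stimulating the devices. The Plugtest tool's actual test objectives are not listed in the abstract; the sketch below illustrates the technique on one well-known CoAP property from RFC 7252, with an invented trace: every Confirmable (CON) message must be acknowledged by an ACK carrying the same Message ID.

```python
# A minimal passive-testing check in the spirit of the Plugtest tool:
# the trace and field layout are invented stand-ins, not the tool's format.
def check_con_acknowledged(trace):
    """Verdict per CON message: 'pass' if some observed ACK carries the
    same Message ID (CoAP reliability rule, RFC 7252), else 'fail'."""
    acked_mids = {m["mid"] for m in trace if m["type"] == "ACK"}
    return {m["mid"]: ("pass" if m["mid"] in acked_mids else "fail")
            for m in trace if m["type"] == "CON"}

trace = [
    {"type": "CON", "mid": 0x1234, "code": "GET"},
    {"type": "ACK", "mid": 0x1234, "code": "2.05"},
    {"type": "CON", "mid": 0x1235, "code": "GET"},  # never acknowledged
]
print(check_con_acknowledged(trace))
```

Because the check only reads captured messages, it can run on traces recorded during an interoperability event without interfering with the devices under test, which is exactly what makes passive testing suitable for a Plugtest setting.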

    Assessing and Improving Interoperability of Distributed Systems

    Achieving interoperability of distributed systems offers means for the development of new and innovative business solutions. Interoperability allows the combination of existing services, provided on different systems, into new or extended services. Such an integration can also increase the reliability of the provided services. However, achieving and assessing interoperability is a technical challenge that requires high effort in terms of time and cost. The reasons are manifold and include differing implementations of standards as well as the provision of proprietary interfaces.
The implementations need to be engineered to be interoperable, and techniques that assess and improve interoperability systematically are required. To ensure reliable interoperation between systems, interoperability needs to be assessed and improved in a systematic manner. To this end, we present the Interoperability Assessment and Improvement (IAI) process, which describes in three phases how the interoperability of distributed homogeneous and heterogeneous systems can be improved and assessed systematically. The interoperability assessment is achieved by means of interoperability testing, which is typically performed manually. For the automation of interoperability test execution, we present a new methodology that includes a generic development process for a complete and automated interoperability test system. This methodology provides means for a formalized and systematic assessment of systems' interoperability in an automated manner. Compared to manual interoperability testing, the application of our methodology has the following benefits: wider test coverage, consistent test execution, and test repeatability. We evaluate the IAI process and the methodology for automated interoperability testing in three case studies. In the first case study, we instantiate the IAI process and the methodology for Internet Protocol Multimedia Subsystem (IMS) networks, which were previously assessed for interoperability only manually. In the second and third case studies, we apply the IAI process to assess and improve the interoperability of grid and cloud computing systems. Their interoperability assessment and improvement is challenging since, in contrast to IMS networks, cloud and grid systems are heterogeneous. We develop integration and interoperability solutions for grids and Infrastructure as a Service (IaaS) clouds as well as for grids and Platform as a Service (PaaS) clouds. These solutions are unique and foster complementary usage of grids and clouds, simplified migration of grid applications into the cloud, and efficient resource utilization. In addition, we assess the interoperability of the grid-cloud interoperability solutions. While the tests for grid-IaaS clouds were performed manually, we successfully applied our methodology for automated interoperability testing to the assessment of grid-PaaS cloud interoperability. These interoperability assessments are unique in the grid-cloud community and provide a basis for the development of standardized interfaces improving the interoperability between grids and clouds.
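An automated interoperability test, as the IAI methodology frames it, drives two systems through a shared scenario and derives the verdict from both sides' observations instead of relying on a human operator. The thesis's actual test system is not reproduced here; the stubs, scenario names, and accept/handle interface below are invented to sketch the shape of such a harness.

```python
# Hedged sketch of one automated interoperability test run; the grid/cloud
# endpoints are replaced by in-memory stubs, and all names are assumptions.
def interop_test(client, server, scenario):
    """Drive one scenario end to end and derive a verdict from whether the
    client accepts what the server produced."""
    request = client.send(scenario)
    response = server.handle(request)
    return "pass" if client.accepts(response) else "fail"

class StubClient:
    def send(self, scenario):
        return {"op": scenario, "format": "json"}
    def accepts(self, response):
        return response.get("status") == "ok"

class StubServer:
    def handle(self, request):
        # Interoperable only for the operations this side implements --
        # the kind of gap an interoperability test is meant to expose.
        if request["op"] in {"submit-job", "query-status"}:
            return {"status": "ok"}
        return {"status": "unsupported"}

results = {s: interop_test(StubClient(), StubServer(), s)
           for s in ["submit-job", "query-status", "migrate-job"]}
print(results)
```

Running the whole scenario table this way is what buys the benefits the abstract lists over manual testing: every scenario is executed the same way each time, and reruns are cheap.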

    A Services' Frameworks And Support Services For Environmental Information Communities

    For environmental datasets to be used effectively via the Internet, they must present standardized data and metadata services and link the two. The Open Geospatial Consortium's (OGC) web services (WFS, WMS, CSW, etc.) have seen widespread use over many years; however, few organizations have deployed information architectures based solely on OGC standards for all their datasets. Collections of organizations within a thematically based community certainly cannot realistically be expected to do so. To enable flexibility in service use, we present a services framework: a Data Brokering Layer (DBL). A DBL presents access to data and metadata services for datasets, and links between them, in a standardized manner based on Linked Data and Semantic Web principles. By specifying regular access methods to any data or metadata service relevant to a dataset, community organizers allow a wide range of services to be used within their community. Additionally, a community service profile testing service, a Conformance Service, may be run that reveals the day-to-day status of all of a community's services, allowing both better end-user experiences and assurance that data providers' data is acceptable to the community and remains available for use. We present the DBL and Conformance Service designs as well as a whole-of-community architecture that facilitates the use of the two. We describe implementations of them within two Australian environmental information communities, eReefs and Bioregional Assessments, and plans for wider deployment.
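The Conformance Service's role is to probe each registered community service day to day and report whether it still answers as its profile requires. The paper's actual registry format and probe logic are not given, so the endpoints and probes below are invented stand-ins; a real deployment would issue HTTP requests against the community's OGC services (WFS, WMS, CSW) instead of calling local functions.

```python
# Hypothetical community registry: service name -> profile and a probe that
# returns the service's answer, or None when the service is unreachable.
registry = {
    "wfs-coral-data": {"profile": "WFS", "probe": lambda: "FeatureCollection"},
    "csw-metadata":   {"profile": "CSW", "probe": lambda: None},  # down today
}

def conformance_report(registry):
    """One day-to-day status sweep over every registered service."""
    report = {}
    for name, svc in registry.items():
        answer = svc["probe"]()
        report[name] = "available" if answer else "unavailable"
    return report

print(conformance_report(registry))
```

Publishing such a sweep daily is what lets a community organizer see at a glance that a provider's service has dropped out before end users hit the gap.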

    Acta Cybernetica: Volume 14, Number 2.


    Analysis of Web Protocols Evolution on Internet Traffic

    This research focuses on the analysis of ten years of Internet traffic, from 2004 until 2013, captured and measured by Mawi Lab at a link connecting Japan to the United States of America. The collected traffic was analysed for each of the days in that period, and conjointly over the whole timeframe. The initial research questions included testing the hypothesis of whether changes in Internet applications and Internet usage patterns were observable in the generated traffic. Several protocols were thoroughly analysed, including HTTP, HTTPS, TCP, UDP, IPv4, IPv6, SMTP and DNS. The effect of the transition from IPv4 to IPv6 was also analysed. Conclusions were drawn, the research questions were answered, and the research hypothesis was confirmed.
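The core computation in a study like this is longitudinal aggregation: per-protocol traffic shares computed per day or per year, then compared across the decade. The thesis's pipeline is not described here, so the record format below is an invented stand-in; the real input would be parsed MAWI pcap traces rather than an in-memory list.

```python
from collections import Counter, defaultdict

# Invented (year, protocol) observations standing in for parsed trace records.
records = [
    (2004, "HTTP"), (2004, "HTTP"), (2004, "SMTP"),
    (2013, "HTTPS"), (2013, "HTTPS"), (2013, "HTTP"),
]

# Count packets per protocol within each year.
by_year = defaultdict(Counter)
for year, proto in records:
    by_year[year][proto] += 1

# Convert counts to per-year shares so that years of different traffic
# volumes can be compared directly.
shares = {year: {p: n / sum(c.values()) for p, n in c.items()}
          for year, c in by_year.items()}
print(shares[2013]["HTTPS"])  # fraction of 2013 packets that were HTTPS
```

Comparing such shares across years is how a shift like the HTTP-to-HTTPS transition, or the IPv4-to-IPv6 migration, becomes observable in the traffic itself.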