
    Design and Implement a Hybrid WebRTC Signalling Mechanism for Unidirectional & Bi-directional Video Conferencing

    WebRTC (Web Real-Time Communication) is a technology that enables browser-to-browser communication; however, it does not define a signalling mechanism, so one must be negotiated to create a connection between peers. The main aim of this paper is to design and implement a hybrid WebRTC signalling mechanism named WebNSM for video conferencing, based on the Socket.io API and Firefox. WebNSM was designed over a combination of different topologies, namely simplex, star and mesh, and therefore supports several kinds of communication at the same time: one-to-one (unidirectional/bi-directional), one-to-many (unidirectional) and many-to-many (bi-directional), without any download or installation. WebRTC video conferencing was carried out over LAN and WAN networks, including an evaluation of WebRTC resource usage covering bandwidth consumption, CPU performance, memory usage, Quality of Experience (QoE), and the calculation of maximum links and RTPs. This paper presents a novel signalling mechanism that connects different users, devices and networks to offer multi-party video conferencing over various topologies simultaneously, along with typical features such as using a single server, determining the room initiator, and keeping communication active even if the initiator or another peer leaves. This scenario highlights the resource limitations and the use of different topologies in WebRTC video conferencing.
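
    As a rough illustration of the kind of signalling relay such a mechanism builds on, the sketch below uses a Socket.io server to forward SDP offers/answers and ICE candidates between peers in a conference room. The event names, room logic and initiator rule are assumptions for illustration; the actual WebNSM message format is not given in the abstract.

```typescript
// Minimal Socket.io signalling relay sketch (event names are illustrative,
// not the actual WebNSM protocol). Run with: npx ts-node signalling.ts
import { Server } from "socket.io";

const io = new Server(3000, { cors: { origin: "*" } });

io.on("connection", (socket) => {
  // A peer joins a named conference room; the first joiner becomes initiator.
  socket.on("join", (room: string) => {
    socket.join(room);
    const size = io.sockets.adapter.rooms.get(room)?.size ?? 1;
    socket.emit("role", size === 1 ? "initiator" : "participant");
    socket.to(room).emit("peer-joined", socket.id);
  });

  // Relay SDP offers/answers and ICE candidates to the other peers in the room.
  for (const event of ["offer", "answer", "ice-candidate"]) {
    socket.on(event, (room: string, payload: unknown) => {
      socket.to(room).emit(event, socket.id, payload);
    });
  }
});
```

    In a mesh topology each pair of peers runs this offer/answer exchange once; keeping a room's lifetime independent of any single socket is what lets a conference survive the initiator leaving.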

    Models and mechanisms for tangible user interfaces

    Thesis (M.S.)--Massachusetts Institute of Technology, Program in Media Arts & Sciences, 1997. Includes bibliographical references (leaves 79-82). Brygg Anders Ullmer. M.S.

    IP and ATM integration: A New paradigm in multi-service internetworking

    ATM is a widespread technology adopted by many to support advanced data communication, in particular efficient Internet service provision. The expected challenges of multimedia communication, together with the increasingly massive use of IP-based applications, urgently require networking solutions to be redesigned in terms of both new functionality and enhanced performance. However, the networking context is affected by so many changes, and to some extent chaotic growth, that any approach based on a structured and complex top-down architecture is unlikely to be applicable. Instead, an approach based on finding the best match between realistic service requirements and the pragmatic, intelligent use of the technical opportunities available on the product market seems more appropriate. By following this approach, innovations and improvements can be introduced at different times, without necessarily complying with a coherent overall design. With the aim of pursuing feasible innovations in the different networking aspects, we look at both IP and ATM internetworking in order to investigate a few of the most crucial issues related to the IP and ATM integration perspective. This research also addresses various means of internetworking the Internet Protocol (IP) and Asynchronous Transfer Mode (ATM), with the objective of identifying the best possible means of delivering the Quality of Service (QoS) required by multi-service applications while exploiting the meritorious features that IP and ATM have to offer. Although IP and ATM have often been viewed as competitors, their complementary strengths and limitations form a natural alliance that combines the best aspects of both technologies. For instance, one limitation of ATM networks has been the relatively large gap between the speed of the network paths and the control operations needed to configure those data paths to meet changing user needs. IP's greatest strength, on the other hand, is its inherent flexibility and its capacity to adapt rapidly to changing conditions. These complementary strengths and limitations make it natural to combine IP with ATM to obtain the best that each has to offer. Over time, many models and architectures have evolved for IP/ATM internetworking, and they have shaped the fundamental thinking on the subject. These technologies, architectures, models and implementations are reviewed in detail to address the issues that arise in integrating them in a multi-service enterprise network. The objective is to recommend the best means of interworking the two, exploiting the salient features of each to provide a faster, more reliable, scalable, robust and QoS-aware network in the most economical manner. How IP will be carried over ATM once a commercial worldwide ATM network is deployed is not addressed, as the details of such a network remain in too great a state of flux to specify anything concrete. Our research findings culminate in a strong recommendation that the best model to adopt, in light of the impending integrated-service requirements of future multi-service environments, is an ATM core with IP at the edges, realizing the best of both technologies and delivering QoS guarantees seamlessly to any node in the enterprise.
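
    One concrete face of the recommended "ATM core with IP at the edges" model is that an edge device must map IP traffic classes onto ATM service categories when selecting virtual circuits. The pairing below is a plausible illustration only; the thesis's own mapping is not reproduced in the abstract.

```typescript
// Illustrative edge mapping from IP traffic classes to the standard ATM
// service categories. The particular pairing is an assumption, not a mapping
// prescribed by the thesis.
type AtmServiceCategory = "CBR" | "rt-VBR" | "nrt-VBR" | "ABR" | "UBR";

const classToAtm: Record<string, AtmServiceCategory> = {
  voice: "CBR",               // constant rate, delay-sensitive
  video: "rt-VBR",            // bursty but real-time
  "priority-data": "nrt-VBR", // bursty and loss-sensitive, not delay-critical
  "bulk-data": "ABR",         // elastic traffic using leftover bandwidth
  "best-effort": "UBR",       // no guarantees
};

// An IP edge router would pick the virtual-circuit type per traffic class:
function selectVc(ipClass: string): AtmServiceCategory {
  return classToAtm[ipClass] ?? "UBR";
}

console.log(selectVc("voice")); // -> "CBR"
```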

    Defining interoperability standards: A case study of public health observatory websites

    The Association of Public Health Observatories (APHO) is a group of region-based health-information providers. Each PHO publishes health-related data for its specific region, and each observatory has taken a national lead in one or more key health areas, such as 'cancer' or 'obesity'. In 2003, a project was initiated to develop 'interoperability' between public health observatory websites, so that the national resources published by one lead observatory could be found on the websites of every other PHO. The APHO interoperability project defined a set of requirements for each PHO: websites should comply with current government data standards and provide web services that allow data to be searched in real time across different PHOs. This thesis describes the production of an interoperable website for the North East Public Health Observatory (NEPHO) and the problems faced during implementation in complying with the APHO interoperability requirements. The areas of interoperability, e-Government and metadata were investigated, specifically for their suitability to NEPHO, and an action list of tasks necessary to achieve the project aims was drawn up. The project resulted in the successful introduction of a new NEPHO website that complies with the APHO and e-Government requirements; however, interoperability with other organisations has been difficult to achieve. This thesis describes how other organisations approached the same APHO interoperability criteria and questions whether the national project governance could be improved.
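
    The requirement that data be searchable in real time across PHOs implies each observatory exposing a search web service that the others can query. The sketch below shows one way a federated search client could look; the endpoint path, response shape and host names are entirely hypothetical, as the abstract does not specify the actual interface.

```typescript
// Hypothetical federated search across observatory web services. Endpoint
// paths, hosts and fields are invented for illustration. Requires a runtime
// with the fetch API (Node 18+).
interface SearchResult { title: string; url: string; observatory: string }

const observatories = [
  "https://nepho.example.org",     // placeholder hosts, not real addresses
  "https://other-pho.example.org",
];

async function federatedSearch(term: string): Promise<SearchResult[]> {
  const queries = observatories.map(async (base) => {
    const res = await fetch(`${base}/search?q=${encodeURIComponent(term)}`);
    if (!res.ok) return [] as SearchResult[]; // tolerate an unreachable PHO
    return (await res.json()) as SearchResult[];
  });
  return (await Promise.all(queries)).flat();
}

federatedSearch("obesity").then((hits) => console.log(hits.length, "results"));
```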

    Enhancing cooperation in wireless networks using different concepts of game theory

    Optimizing radio resources within a network and across cooperating heterogeneous networks is the focus of this PhD thesis. Cooperation in a multi-network environment is tackled by investigating network selection mechanisms, which play an important role in ensuring quality of service for users in a multi-network environment. Churning of mobile users from one service provider to another is already common when people change contracts, and in a heterogeneous communication environment where mobile users are free to choose the best wireless service, real-time selection is expected to become a common feature. This real-time selection affects both the technical and the economic aspects of wireless network operations. Next-generation wireless networks will enable a dynamic environment in which nodes of the same or even different network operators can interact and cooperate to improve their performance. Cooperation has emerged as a novel communication paradigm that can yield tremendous performance gains from the physical layer all the way up to the application layer. Game theory, and in particular coalitional game theory, is a highly suitable mathematical tool for modelling cooperation between wireless networks and is investigated in this thesis. The churning behaviour of wireless service users is modelled using evolutionary game theory in the context of WLAN access points and WiMAX networks. This approach illustrates how to improve user-perceived QoS in heterogeneous networks using a two-layered optimization. The top layer addresses the prediction of the network a user would choose, where the criteria are offered bit rate, price, mobility support and reputation. At the second level, conditional on the strategies chosen by the users, the network provider hypothetically reconfigures the network, subject to the network constraints of bandwidth and acceptable SNR, and optimizes the network coverage to support users who would otherwise not be serviced adequately. This forms an iterative cycle until a solution is found that optimizes user satisfaction subject to the adjustments the network provider can make to mitigate the binding constraints; this solution is then applied to the real network. The evolutionary equilibrium, which is used to compute the average number of users choosing each wireless service, is taken as the solution. This thesis also proposes a fair and practical cooperation framework in which base stations belonging to the same network provider cooperate to serve each other's customers. How this cooperation can increase their aggregate payoffs through efficient utilization of resources is shown for the case of dynamic frequency allocation. This cooperation framework needs to determine the cooperating partner intelligently and to provide a rational basis for sharing the aggregate payoff between the cooperative partners, so that the coalition remains stable. The optimum cooperation strategy, which involves the allocation of channels to mobile customers, can be obtained as the solution of linear programming optimizations.
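
    The evolutionary equilibrium used to compute the average share of users on each wireless service can be illustrated with textbook replicator dynamics. The payoff numbers below are made up; the thesis derives payoffs from offered bit rate, price, mobility support and reputation.

```typescript
// Discrete-time replicator dynamics over network choices (e.g. WLAN vs WiMAX).
// The share of users on a network grows when its payoff beats the average.
function replicatorStep(
  shares: number[],
  payoff: (i: number, x: number[]) => number,
  dt = 0.1,
): number[] {
  const f = shares.map((_, i) => payoff(i, shares));
  const avg = f.reduce((sum, fi, i) => sum + fi * shares[i], 0);
  const next = shares.map((xi, i) => xi + dt * xi * (f[i] - avg));
  const total = next.reduce((sum, v) => sum + v, 0);
  return next.map((v) => v / total); // re-normalise to a probability vector
}

// Toy congestion-sensitive payoffs: a network is less attractive when fuller.
const baseUtility = [1.0, 0.8]; // illustrative WLAN and WiMAX utilities
const payoff = (i: number, x: number[]) => baseUtility[i] * (1 - x[i]);

let x = [0.5, 0.5];
for (let t = 0; t < 200; t++) x = replicatorStep(x, payoff);
console.log(x); // converges to the evolutionary equilibrium of user shares
```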

    Collaborative development of predictive toxicology applications

    OpenTox provides an interoperable, standards-based Framework for the support of predictive toxicology data management, algorithms, modelling, validation and reporting. It is relevant to satisfying the chemical safety assessment requirements of the REACH legislation, as it supports access to experimental data, (Quantitative) Structure-Activity Relationship models, and toxicological information through an integrating platform that adheres to regulatory requirements and OECD validation principles. Initial research defined the essential components of the Framework, including the approach to data access, schema and management; the use of controlled vocabularies and ontologies; the architecture, web service and communications protocols; and the selection and integration of algorithms for predictive modelling. OpenTox provides end-user-oriented tools for non-computational specialists, risk assessors, and toxicological experts, in addition to Application Programming Interfaces (APIs) for developers of new applications. OpenTox actively supports public standards for data representation, interfaces, vocabularies and ontologies, Open Source approaches to core platform components, and community-based collaboration approaches, so as to advance system interoperability goals.
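
    Since OpenTox components are exposed as web services, a developer-facing interaction reduces to HTTP calls against resource URIs. The sketch below shows the general call pattern only; the host is a placeholder, and the exact resource paths and representations should be taken from the OpenTox API specifications rather than from this sketch.

```typescript
// Sketch of a client for an OpenTox-style REST service. Host and path are
// placeholders; consult the actual OpenTox API documents for the real
// interface. Requires a runtime with the fetch API (Node 18+).
const base = "https://opentox.example.org"; // placeholder service host

// Fetch a dataset representation, negotiating the format via the Accept
// header (RDF being a natural choice for an ontology-backed framework).
async function getDataset(id: string): Promise<string> {
  const res = await fetch(`${base}/dataset/${id}`, {
    headers: { Accept: "application/rdf+xml" },
  });
  if (!res.ok) throw new Error(`dataset ${id}: HTTP ${res.status}`);
  return res.text();
}

getDataset("42").then((rdf) => console.log(rdf.slice(0, 200)));
```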

    A Survey on Smart Grid Communication Infrastructures: Motivations, Requirements and Challenges

    Distributed Simulation in Industry

    Csaba Attila Boer was born in Satu Mare, Romania, on 29 October 1975. He completed his secondary education at Kölcsey Ferenc High School in Satu Mare in 1994. In the same year he started his higher education at Babeş-Bolyai University, Faculty of Mathematics and Computer Science, Cluj-Napoca, Romania, where he received his B.Sc. degree in Computer Science in 1998 and his M.Sc. degree, with a major in Information Systems and a specialization in Designing and Implementing Complex Systems, in 1999. During these years he obtained fellowships at Eötvös Loránd University and at the Computer and Automation Research Institute of the Hungarian Academy of Sciences, Budapest, Hungary, within the Central European Exchange Program for University Studies (CEEPUS). Since 1999 he has been affiliated with the Computer Science Department, Faculty of Economics, at Erasmus University Rotterdam, The Netherlands. There he worked as a researcher for one year, studying the storage and retrieval of discrete-event simulation models, research that resulted in three scientific articles. Between 2000 and 2004 he was associated with the same department as a Ph.D. candidate, researching distributed simulation and its application in industry. His topic being close to the research carried out at the Faculty of Technology, Policy and Management, Delft University of Technology, and the BETADE research program, he started to collaborate with researchers from these groups, getting involved in two joint practical case-study projects. This collaboration resulted in seven joint scientific articles presented at various international conferences. Furthermore, Csaba has maintained international contacts with researchers in the distributed simulation area, and has twice been invited to Brunel University, London, to present on the application of distributed simulation in industry. Currently, he is working as a simulation consultant.
    While distributed simulation is widely accepted and applied in defence, it has not yet gained ground in industry. In this thesis we investigate the reasons behind this phenomenon by surveying the expectations of industry with respect to distributed simulation solutions. Simulation models in industry are mainly designed and developed in commercial-off-the-shelf (COTS) simulation packages. The existing distributed simulation architectures in defence, however, do not focus on coupling models created in COTS simulation packages. Therefore, to motivate the industrial community to accept and use distributed simulation, one should strive to make it possible to couple models built in these packages without requiring too much extra effort from modellers. In this thesis, based on a survey of experts in the domain, we propose a list of requirements for designing and developing distributed simulation architectures that would encourage the industrial community to accept and apply distributed simulation. Furthermore, we present a lightweight distributed simulation architecture that has been successfully applied in two industrial projects and satisfies the proposed requirements to a large extent.
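
    The abstract does not detail the lightweight architecture itself, but coupling two COTS simulation models minimally requires exchanging timestamped events and never letting one model advance past the time at which its partner could still affect it. The toy conservative-synchronization sketch below illustrates that idea; all names and the lookahead mechanism shown are generic assumptions, not the thesis's design.

```typescript
// Toy conservative time synchronization between two coupled simulation models.
// Each model promises to send no event earlier than now + lookahead; a model
// may only advance to the minimum such promise among its partners.
interface TimestampedEvent { time: number; payload: string }

class Federate {
  now = 0;
  inbox: TimestampedEvent[] = [];
  constructor(public name: string, public lookahead: number) {}

  // Earliest time at which this federate could still send an event.
  earliestOutput(): number {
    return this.now + this.lookahead;
  }

  // It is safe to process every queued event up to the partner's promise.
  advanceTo(partnerPromise: number): void {
    this.inbox
      .filter((e) => e.time <= partnerPromise)
      .sort((a, b) => a.time - b.time)
      .forEach((e) => console.log(`${this.name} @${e.time}: ${e.payload}`));
    this.inbox = this.inbox.filter((e) => e.time > partnerPromise);
    this.now = Math.max(this.now, partnerPromise);
  }
}

const port = new Federate("port-model", 5);
const crane = new Federate("crane-model", 5);
crane.inbox.push({ time: 3, payload: "ship arrived" });
crane.advanceTo(port.earliestOutput()); // crane may safely advance to t = 5
```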

    The Third Annual NASA Science Internet User Working Group Conference

    The NASA Science Internet (NSI) User Support Office (USO) sponsored the Third Annual NSI User Working Group (NSIUWG) Conference from March 30 through April 3, 1992, in Greenbelt, MD. Approximately 130 NSI users attended to learn more about the NSI, hear from projects that use NSI, and receive updates on new networking technologies and services. This report contains material relevant to the conference: copies of the agenda, meeting summaries, presentations, and descriptions of exhibitors. Plenary sessions featured a variety of speakers, including NSI project management, scientists, NSI user project managers whose projects and applications make effective use of NSI, and notable members of the larger Internet community. The conference also included exhibits of advanced networking applications; tutorials on internetworking, computer security, and networking technologies; and user subgroup meetings on the future direction of the conference, networking, and user services and applications.

    Contributions to Edge Computing

    Efforts related to the Internet of Things (IoT), Cyber-Physical Systems (CPS), Machine-to-Machine (M2M) technologies, the Industrial Internet, and Smart Cities aim to improve society through the coordination of distributed devices and the analysis of the resulting data. By the year 2020 there will be an estimated 50 billion network-connected devices globally and 43 trillion gigabytes of electronic data. Current practices of moving data directly from end-devices to remote and potentially distant cloud computing services will not be sufficient to manage future device and data growth. Edge computing is the migration of computational functionality to the sources of data generation. The importance of edge computing increases with the size and complexity of devices and the resulting data. In addition, the coordination of global edge-to-edge communications, shared resources, high-level application scheduling, monitoring, measurement, and Quality of Service (QoS) enforcement will be critical to addressing the rapid growth of connected devices and the associated data. We present a new distributed agent-based framework designed to address the challenges of edge computing. This actor-model framework implementation is designed to manage large numbers of geographically distributed services, composed of heterogeneous resources and communication protocols, in support of low-latency real-time streaming applications. As part of this framework, an application description language was developed and implemented. Using this language, a number of high-level management modules were implemented, including solutions for resource and workload comparison, performance observation, scheduling, and provisioning. A number of hypothetical and real-world use cases are described to support the framework implementation.
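
    A minimal flavour of the actor model underlying such a framework is sketched below: each agent owns its state, drains a mailbox one message at a time, and communicates only by asynchronous sends. The class and message names are illustrative, not the framework's actual API.

```typescript
// Minimal actor sketch: serialized message processing over a private mailbox.
// Names are illustrative, not the framework's actual interface.
type Message = { type: "measure"; value: number } | { type: "report" };

class EdgeAgent {
  private mailbox: Message[] = [];
  private readings: number[] = [];
  private draining = false;

  send(msg: Message): void {
    this.mailbox.push(msg);
    if (!this.draining) void this.drain();
  }

  private async drain(): Promise<void> {
    this.draining = true;
    while (this.mailbox.length > 0) {
      const msg = this.mailbox.shift()!;
      if (msg.type === "measure") {
        this.readings.push(msg.value); // ingest a sensor reading at the edge
      } else {
        const mean =
          this.readings.reduce((a, b) => a + b, 0) / this.readings.length;
        console.log("mean reading:", mean);
      }
      await Promise.resolve(); // yield while keeping handling serialized
    }
    this.draining = false;
  }
}

const agent = new EdgeAgent();
agent.send({ type: "measure", value: 1.5 });
agent.send({ type: "measure", value: 2.5 });
agent.send({ type: "report" }); // -> mean reading: 2
```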