
    Service composition based on SIP peer-to-peer networks

    Today the telecommunication market faces customers requesting new telecommunication services, especially value-added services. The concept of Next Generation Networks (NGN) appears to be a solution to this, and it has found its way into the telecommunication area. These customer expectations have emerged in the context of NGN and the associated migration of telecommunication networks from traditional circuit-switched towards packet-switched networks. One fundamental aspect of the NGN concept is to move the intelligence of services out of the switching plane onto separate Service Delivery Platforms, using SIP (Session Initiation Protocol) to provide the required signalling functionality. Through this migration towards NGN, SIP has emerged as the major signalling protocol for IP (Internet Protocol) based NGN. In contrast to ISDN (Integrated Services Digital Network) and IN (Intelligent Network), this leads to significantly lower dependences between network and services, and it allows new services to be implemented much more easily and quickly. In addition, concepts from IT (Information Technology), namely SOA (Service-Oriented Architecture), have strongly influenced the telecommunication sector, driven by the amalgamation of IT and telecommunications. The benefit of applying SOA to telecommunication services is the acceleration of service creation and delivery. The main features of SOA are that services are reusable, discoverable, combinable and independently accessible from any location. Integrating these features offers broader flexibility and efficiency for varying service demands. This thesis proposes a novel framework for service provisioning and composition in SIP-based peer-to-peer networks that applies the principles of SOA. One key contribution of the framework is its approach to performing service provisioning and composition via SIP. Based on this, the framework provides a flexible and fast way to request the creation of composite services. Furthermore, the framework makes it possible to request and combine multimodal value-added services, meaning that they are no longer limited to particular media types such as audio, video and text. The proposed framework has been validated by a prototype implementation.
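    The idea of requesting a composite service through SIP signalling can be illustrated with a minimal sketch. Everything here is hypothetical: the broker address, the body format and the media type are invented for illustration and are not the thesis's actual protocol.

```python
# Illustrative sketch (not the thesis implementation): building a SIP INVITE
# whose body asks a hypothetical composition broker to chain reusable
# component services into one composite service.

def build_composition_invite(caller: str, broker: str, components: list[str]) -> str:
    """Assemble a SIP INVITE requesting composition of the given services."""
    body = "\r\n".join(f"service: {name}" for name in components)
    headers = [
        f"INVITE sip:{broker} SIP/2.0",
        f"From: <sip:{caller}>",
        f"To: <sip:{broker}>",
        "Content-Type: application/composite-service+text",  # hypothetical media type
        f"Content-Length: {len(body)}",
    ]
    return "\r\n".join(headers) + "\r\n\r\n" + body

# Example: chain three multimodal component services into one composite service.
invite = build_composition_invite(
    "alice@example.org", "composer@example.org",
    ["speech-to-text", "translation", "text-to-speech"],
)
```

    The point of the sketch is only that SIP's existing request/response machinery can carry a composition request, so no new protocol is needed between peers.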

    CollaborationBus: An Editor for the Easy Configuration of Complex Ubiquitous Environments

    Early sensor-based infrastructures were often developed by experts with a thorough knowledge of the base technology for sensing information, for processing the captured data, and for adapting the system's behaviour accordingly. In this paper we argue that end-users, too, should be able to configure Ubiquitous Computing environments. We introduce the CollaborationBus application: a graphical editor that abstracts from the base technology and thereby allows a broad range of users to configure Ubiquitous Computing environments. By composing pipelines, users can easily specify information flows from selected sensors, via optional filters that process the sensor data, to actuators that change the system behaviour according to the users' wishes. Users can compose pipelines for both home and work environments. An integrated sharing mechanism allows them to share their own compositions, and to reuse and build upon others' compositions. Real-time visualisations help them understand how information flows through their pipelines. In this paper we present the concept, implementation, and early user feedback of the CollaborationBus application.
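    The sensor-filter-actuator pipeline idea described above can be sketched in a few lines. The component names (clamp filter, heater actuator) are invented for illustration and are not CollaborationBus APIs.

```python
# Minimal sketch of a sensor -> filter -> actuator pipeline: each sensor
# reading passes through a chain of filters and then drives an actuator.

from typing import Callable, Iterable

def run_pipeline(sensor_readings: Iterable[float],
                 filters: list[Callable[[float], float]],
                 actuator: Callable[[float], str]) -> list[str]:
    """Push each reading through the filter chain, then into the actuator."""
    actions = []
    for value in sensor_readings:
        for f in filters:
            value = f(value)          # optional processing step
        actions.append(actuator(value))  # actuator reacts to filtered value
    return actions

# Example: a temperature sensor, a clamping filter, a heater actuator.
clamp = lambda v: min(max(v, 15.0), 30.0)
heater = lambda v: "heat on" if v < 20.0 else "heat off"
actions = run_pipeline([12.5, 22.0, 35.0], [clamp], heater)
# actions == ["heat on", "heat off", "heat off"]
```

    A graphical editor like the one described essentially lets users wire up such chains without writing the code themselves.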

    Formal Specification Language for Vehicular Ad-Hoc Networks

    A Vehicular Ad-Hoc Network (VANET) is a form of mobile ad-hoc (wireless) network, originally used to provide safety and comfort for passengers, and currently used to establish Dedicated Short Range Communications (DSRC) among nearby vehicles (V2V communications) and between vehicles and nearby fixed roadside infrastructure (V2I communications). VANETs are also used to warn drivers of possible collisions, to signal road-sign alarms, and to enable automatic payment at road tolls and car parks. VANETs are commonly found in Intelligent Transportation Systems (ITS). VANETs are a current and near-future hot research topic, targeted by many researchers developing applications and protocols specifically for them. A problem facing all VANET researchers, however, is the lack of a formal specification language for the VANET systems, protocols, applications and scenarios they propose. A specification language is a formal language used during system design, analysis, and requirements analysis. Using a formal specification language, a researcher can show what a system does, not how. As a contribution of our research, we have created a formal specification language for VANETs. We use some Roman characters and basic symbols to represent VANET systems and applications. In addition, we have created combined symbols to represent actions and operations of a VANET system and its participating devices. Our formal specification language covers many aspects of VANETs and offers validity and consistency tests for the specified systems. Using our specification language, we present three case studies based on a VANET system model we created, apply the validity and consistency tests to them, and show how to describe a VANET system and its applications with our formal specification language.
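    To make the notion of a validity test concrete, here is a hedged sketch of the kind of rule such a language might enforce: every action must reference only declared participants. The entity names, action tuples and the rule itself are illustrative, not the notation defined in the paper.

```python
# Illustrative validity test for a toy VANET specification: an action
# (sender, operation, receiver) is valid only if both endpoints are
# declared entities (vehicles or roadside units).

def validity_test(entities: set[str], actions: list[tuple[str, str, str]]) -> bool:
    """Return True iff every action connects two declared entities."""
    return all(src in entities and dst in entities for src, _, dst in actions)

spec_entities = {"V1", "V2", "RSU1"}          # two vehicles, one roadside unit
spec_actions = [
    ("V1", "warn_collision", "V2"),           # V2V safety message
    ("V2", "pay_toll", "RSU1"),               # V2I transaction
]
valid = validity_test(spec_entities, spec_actions)
invalid = validity_test(spec_entities, [("V1", "warn_collision", "V9")])  # V9 undeclared
```

    A consistency test would layer further rules on top, e.g. that no two actions assign contradictory states to the same device.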

    Advanced Knowledge Technologies at the Midterm: Tools and Methods for the Semantic Web

    The University of Edinburgh and research sponsors are authorised to reproduce and distribute reprints and on-line copies for their purposes notwithstanding any copyright annotation hereon. The views and conclusions contained herein are the author's and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of other parties.

    In a celebrated essay on the new electronic media, Marshall McLuhan wrote in 1962: "Our private senses are not closed systems but are endlessly translated into each other in that experience which we call consciousness. Our extended senses, tools, technologies, through the ages, have been closed systems incapable of interplay or collective awareness. Now, in the electric age, the very instantaneous nature of co-existence among our technological instruments has created a crisis quite new in human history. Our extended faculties and senses now constitute a single field of experience which demands that they become collectively conscious. Our technologies, like our private senses, now demand an interplay and ratio that makes rational co-existence possible. As long as our technologies were as slow as the wheel or the alphabet or money, the fact that they were separate, closed systems was socially and psychically supportable. This is not true now when sight and sound and movement are simultaneous and global in extent." (McLuhan 1962, p. 5, emphasis in original)

    Over forty years later, the seamless interplay that McLuhan demanded between our technologies is still barely visible. McLuhan's predictions of the spread, and increased importance, of electronic media have of course been borne out, and the worlds of business, science and knowledge storage and transfer have been revolutionised.
    Yet the integration of electronic systems as open systems remains in its infancy. Advanced Knowledge Technologies (AKT) aims to address this problem: to create a view of knowledge and its management across its lifecycle, and to research and create the services and technologies that such unification will require. Half way through its six-year span, the results are beginning to come through, and this paper will explore some of the services, technologies and methodologies that have been developed. We hope to give a sense in this paper of the potential for the next three years, to discuss the insights and lessons learnt in the first phase of the project, and to articulate the challenges and issues that remain.

    The WWW provided the original context that made the AKT approach to knowledge management (KM) possible. When AKT was initially proposed in 1999, it brought together an interdisciplinary consortium with the technological breadth and complementarity to create the conditions for a unified approach to knowledge across its lifecycle. The combination of this expertise, and the time and space afforded the consortium by the IRC structure, suggested the opportunity for a concerted effort to develop an approach to advanced knowledge technologies, based on the WWW as a basic infrastructure. The technological context of AKT altered for the better in the short period between the development of the proposal and the beginning of the project itself, with the development of the semantic web (SW), which foresaw much more intelligent manipulation and querying of knowledge. The opportunities that the SW provided for, e.g., more intelligent retrieval put AKT at the centre of information technology innovation and knowledge management services; the AKT skill set would clearly be central for the exploitation of those opportunities. The SW, as an extension of the WWW, provides an interesting set of constraints on the knowledge management services AKT tries to provide.
    As a medium for the semantically informed coordination of information, the SW has suggested a number of ways in which the objectives of AKT can be achieved, most obviously through the provision of knowledge management services delivered over the web, as opposed to the creation and provision of technologies to manage knowledge. AKT is working on the assumption that many web services will be developed and provided for users. The KM problem in the near future will be one of deciding which services are needed and of coordinating them. Many of these services will be largely or entirely legacies of the WWW, and so the capabilities of the services will vary. As well as providing useful KM services in their own right, AKT will aim to exploit this opportunity by reasoning over services, brokering between them, and providing essential meta-services for SW knowledge service management.

    Ontologies will be a crucial tool for the SW. The AKT consortium brings together a great deal of expertise on ontologies, and ontologies were always going to be a key part of the strategy. All kinds of knowledge sharing and transfer activities will be mediated by ontologies, and ontology management will be an important enabling task. Different applications will need to cope with inconsistent ontologies, or with the problems that will follow the automatic creation of ontologies (e.g. merging pre-existing ontologies to create a third). Ontology mapping, and the elimination of conflicts of reference, will be important tasks. All of these issues are discussed along with our proposed technologies. Similarly, specifications of tasks will be used for the deployment of knowledge services over the SW, but in general it cannot be expected that in the medium term there will be standards for task (or service) specifications.
    The brokering meta-services that are envisaged will have to deal with this heterogeneity. The emerging picture of the SW is one of great opportunity, but it will not be a well-ordered, certain or consistent environment. It will comprise many repositories of legacy data, outdated and inconsistent stores, and requirements for common understandings across divergent formalisms. There is clearly a role for standards to play in bringing much of this context together, and AKT is playing a significant role in these efforts. But standards take time to emerge, they take political power to enforce, and they have been known to stifle innovation (in the short term). AKT is keen to understand the balance between principled inference and statistical processing of web content. Logical inference on the Web is tough: complex queries using traditional AI inference methods bring most distributed computer systems to their knees. Do we set up semantically well-behaved areas of the Web? Is any part of the Web in which semantic hygiene prevails interesting enough to reason in? These and many other questions need to be addressed if we are to provide effective knowledge technologies for our content on the web.

    Communication-oriented Project Management Solution

    The growing popularity of distributed software development makes the development process more adaptive and flexible in terms of human resources. But in order to sustain such a process, an additional burden is placed on communication between the customer, team members and project managers. Contemporary software development practice offers a number of smart and handy tools that help make this communication more fluent and convenient. However, none of those tools tackles the problem of integrating multiple communication sources into a single working system. This paper presents a solution to this problem by introducing a collaboration tool for distributed software development. The collaboration tool is oriented towards integrating multiple communication sources and providing analytical information on the software development project.
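    The integration idea can be sketched with a simple adapter pattern: each communication channel gets an adapter that normalises its messages into one shared record format, so analytics can run over a single merged stream. The channel names, field names and sample data below are invented for illustration, not the tool's actual API.

```python
# Illustrative sketch: adapters normalise messages from different
# communication sources into one record type, enabling unified analytics.

from dataclasses import dataclass

@dataclass
class Message:
    channel: str
    author: str
    text: str

def from_email(raw: dict) -> Message:
    """Adapter for a hypothetical e-mail source."""
    return Message("email", raw["from"], raw["subject"] + ": " + raw["body"])

def from_chat(raw: dict) -> Message:
    """Adapter for a hypothetical chat source."""
    return Message("chat", raw["user"], raw["text"])

unified = [
    from_email({"from": "customer@example.com", "subject": "Bug", "body": "login fails"}),
    from_chat({"user": "dev1", "text": "fix deployed"}),
]

# Simple project analytics over the merged stream:
messages_per_channel: dict[str, int] = {}
for m in unified:
    messages_per_channel[m.channel] = messages_per_channel.get(m.channel, 0) + 1
```

    Adding a new source then means writing one more adapter rather than changing the analytics.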

    TalkyCars: A Distributed Software Platform for Cooperative Perception among Connected Autonomous Vehicles based on Cellular-V2X Communication

    Autonomous vehicles are required to operate among highly mixed traffic during their early market-introduction phase, relying solely on local sensors with limited range. Exhaustively comprehending and navigating complex urban environments may not be feasible with sufficient reliability using that approach alone. Addressing this challenge, intelligent vehicles can virtually extend their perception range beyond their line of sight by using Vehicle-to-Everything (V2X) communication with surrounding traffic participants to perform cooperative perception. Since existing solutions face a variety of limitations, including lack of comprehensiveness, universality and scalability, this thesis aims to conceptualize, implement and evaluate an end-to-end cooperative perception system using novel techniques. First, a comprehensive yet extensible modeling approach for dynamic traffic scenes is proposed, based on probabilistic entity-relationship models; it accounts for uncertain environments and combines low-level attributes with high-level relational and semantic knowledge in a generic way. Second, the design of a holistic, distributed software architecture based on edge-computing principles is proposed as a foundation for multi-vehicle high-level sensor fusion. In contrast to most existing approaches, the presented solution relies on Cellular-V2X communication in 5G networks and employs geographically distributed fusion nodes as part of a client-server configuration. A modular proof-of-concept implementation is evaluated in different simulated scenarios to assess the system's performance both qualitatively and quantitatively. Experimental results show that the proposed system scales adequately to meet certain minimum requirements and yields an average improvement in overall perception quality of approximately 27%.
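    The multi-vehicle high-level fusion idea can be sketched simply: observations of the same object reported by different vehicles are merged by confidence-weighted averaging. Matching by a shared grid cell below is a stand-in for the thesis's geographic partitioning; the data and field names are illustrative, not the TalkyCars implementation.

```python
# Simplified sketch of cooperative perception fusion: observations that fall
# into the same grid cell are merged into one confidence-weighted estimate.

from collections import defaultdict

def fuse(observations: list[dict]) -> dict:
    """observations: {'cell': str, 'x': float, 'y': float, 'conf': float}.
    Returns one fused (x, y) position per grid cell."""
    by_cell = defaultdict(list)
    for obs in observations:
        by_cell[obs["cell"]].append(obs)
    fused = {}
    for cell, group in by_cell.items():
        total = sum(o["conf"] for o in group)
        fused[cell] = (
            sum(o["x"] * o["conf"] for o in group) / total,
            sum(o["y"] * o["conf"] for o in group) / total,
        )
    return fused

# Two vehicles report the same pedestrian in cell "a1" with different confidence;
# the fused x lands close to the high-confidence report (~10.2).
result = fuse([
    {"cell": "a1", "x": 10.0, "y": 5.0, "conf": 0.9},
    {"cell": "a1", "x": 12.0, "y": 5.0, "conf": 0.1},
])
```

    In the client-server configuration described above, this fusion step would run on a geographically responsible edge node rather than in each vehicle.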

    A REST Model for High Throughput Scheduling in Computational Grids

    Current grid computing architectures have been based on cluster management and batch queuing systems, extended to a distributed, federated domain. These have shown shortcomings in terms of scalability, stability, and modularity. To address these problems, this dissertation applies architectural styles from the Internet and Web to the domain of generic computational grids. Using the REST style, a flexible model for grid resource interaction is developed which removes the need for any centralised services or specific protocols, thereby allowing a range of implementations and the layering of further functionality. The context for resource interaction is a generalisation and formalisation of the Condor ClassAd match-making mechanism. This set-theoretic model is described in depth, including the advantages and features it realises. The RESTful style is also motivated by operational experience with existing grid infrastructures, and by the design, operation, and performance of a proto-RESTful grid middleware package named DIRAC. This package was designed to provide the LHCb particle physics experiment's "off-line" computational infrastructure, and was first exercised during a six-month data challenge which utilised over 670 years of CPU time and produced 98 TB of data through 300,000 tasks executed at computing centres around the world. The design of DIRAC and performance measures from the data challenge are reported. The main contribution of this work is the development of a REST model for grid resource interaction. In particular, it allows resource templating for scheduling queues, which provides a novel distributed and scalable approach to resource scheduling on the grid.
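    The ClassAd-style match-making that the dissertation generalises can be illustrated with a toy example: each side (task and resource) publishes attributes plus a requirements predicate over the other side's attributes, and a match succeeds only when both predicates hold. The attribute names and policies below are invented for illustration, not Condor's actual ClassAd syntax.

```python
# Toy sketch of symmetric ClassAd-style match-making: a task matches a
# resource iff each side's 'requirements' predicate accepts the other side.

def matches(task: dict, resource: dict) -> bool:
    """Symmetric match: both requirements predicates must hold."""
    return task["requirements"](resource) and resource["requirements"](task)

task = {
    "cpu_hours": 4,
    "requirements": lambda r: r["os"] == "linux" and r["memory_gb"] >= 2,
}
worker = {
    "os": "linux",
    "memory_gb": 8,
    "requirements": lambda t: t["cpu_hours"] <= 8,   # hypothetical queue policy
}

ok = matches(task, worker)            # both predicates hold
rejected = matches({"cpu_hours": 24,  # exceeds the worker's queue policy
                    "requirements": lambda r: True}, worker)
```

    Resource templating for scheduling queues, as described above, amounts to publishing such predicate-bearing descriptions for whole classes of resources rather than individual machines.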