10 research outputs found

    Ontology-driven dynamic discovery and distributed coordination of a robot swarm

    Swarm robotic systems rely heavily on dynamic interactions to provide interoperability between the different autonomous robots. In current systems, interactions between robots are programmed into the applications controlling them. Incorporating service discovery into these applications allows the robots to dynamically discover other devices. However, since most of these mechanisms use syntax-based matching, the robots cannot reason about the offered functionality. Moreover, as contextual information is often not included in the matching process, it is impossible for robots to select the most suitable device under the current context. This paper aims to tackle these issues by proposing a framework for semantic service discovery in a dynamically changing environment. A semantic layer was added to an existing discovery protocol, offering a semantic interface. Using this framework, services can be searched for based on what they offer, with the services best suiting the current context yielding the highest matching scores.
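    As a rough, illustrative sketch of the kind of context-aware semantic matchmaking described above (the concept hierarchy, weights, service names, and context attributes are all invented here, not taken from the paper), a candidate service could be scored by combining its semantic match degree with a context-suitability term:

```python
# Minimal sketch (not the paper's implementation) of context-weighted semantic
# service matching: each service advertises the ontology concept it offers,
# the match degree against the requested concept is combined with a context
# suitability score, and the best-scoring service is selected.

# Toy concept hierarchy: child -> parent (stand-in for a real ontology/reasoner).
HIERARCHY = {
    "GripperService": "ManipulationService",
    "ManipulationService": "RobotService",
    "CameraService": "SensingService",
    "SensingService": "RobotService",
}

def is_subconcept(concept, ancestor):
    """True if `concept` equals or specializes `ancestor` in the toy hierarchy."""
    while concept is not None:
        if concept == ancestor:
            return True
        concept = HIERARCHY.get(concept)
    return False

def semantic_degree(offered, requested):
    """Classic match degrees: exact > plugin (offer specializes request) > fail."""
    if offered == requested:
        return 1.0
    if is_subconcept(offered, requested):
        return 0.8
    return 0.0

def match_score(offered, requested, query_context, service_context):
    """Combine the semantic degree with a simple context overlap (e.g. zone, battery)."""
    degree = semantic_degree(offered, requested)
    if degree == 0.0:
        return 0.0
    overlap = len(query_context & service_context) / max(len(query_context), 1)
    return 0.7 * degree + 0.3 * overlap

services = [
    {"name": "robot-A/gripper", "offers": "GripperService", "context": {"zone-1", "battery-ok"}},
    {"name": "robot-B/arm", "offers": "ManipulationService", "context": {"zone-2", "battery-ok"}},
]
query = {"requested": "ManipulationService", "context": {"zone-1", "battery-ok"}}

best = max(services, key=lambda s: match_score(s["offers"], query["requested"],
                                               query["context"], s["context"]))
print(best["name"])  # robot-A/gripper: only a plugin match, but the best context fit
```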

    A purely logic-based approach to approximate matching of Semantic Web Services

    Most current approaches to matchmaking of semantic Web services utilize hybrid strategies consisting of logic- and non-logic-based similarity measures (or even no logic-based similarity at all). This is mainly because pure logic-based matchers achieve good precision but very low recall. We present a purely logic-based matcher implementation based on approximate subsumption and extend this approach to take additional information about the taxonomy of the background ontology into account. Our aim is to provide a purely logic-based matchmaker implementation that also achieves reasonable recall without a large impact on precision.
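    A toy sketch of the general idea behind approximate subsumption over a background taxonomy (the taxonomy, the coverage ratio, and all names are illustrative assumptions, not the authors' matcher): instead of failing when strict subsumption does not hold, count how many of the request's taxonomy features the offer covers.

```python
# Illustrative sketch of approximate subsumption: rather than requiring the offer
# to be strictly subsumed by the request, measure how many of the request's
# defining features (here: its ancestor concepts in the background taxonomy)
# the offer already covers, yielding a graded match instead of a hard failure.

TAXONOMY = {  # concept -> direct superconcepts
    "EBookOrdering": {"BookOrdering"},
    "BookOrdering": {"MediaOrdering"},
    "MediaOrdering": {"Ordering"},
    "DVDOrdering": {"MediaOrdering"},
}

def ancestors(concept):
    """All superconcepts of `concept` in the background taxonomy, including itself."""
    seen, stack = set(), [concept]
    while stack:
        c = stack.pop()
        if c not in seen:
            seen.add(c)
            stack.extend(TAXONOMY.get(c, ()))
    return seen

def approximate_subsumption(offer, request):
    """Share of the request's taxonomy features covered by the offer (1.0 = subsumed)."""
    req, off = ancestors(request), ancestors(offer)
    return len(req & off) / len(req)

print(approximate_subsumption("DVDOrdering", "EBookOrdering"))   # partial match, not 0
print(approximate_subsumption("EBookOrdering", "BookOrdering"))  # full (logical) match
```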

    Semantics take the SOA registry to the next level: an empirical study in a telecom company

    We describe an empirical study of the creation of a Semantic Service Registry in the Operations Support Systems (OSS) department of a telecom company, addressing the emerging problem of finding the right services for building new business processes in a steadily growing service pool. We show how to obtain an ontology for the telecom domain, annotate services with it, and thus benefit from semantic technologies to find those services effectively through description logic inference. We designed and implemented a proof of concept that provides a matching degree even when the cardinality of the service elements in the query differs from the cardinality of the service elements being sought. This is relevant for web service reusability and flexibility. Our solutions are overviewed and a set of lessons learned is discussed.
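    A minimal sketch, under assumptions of our own, of how a matching degree can still be produced when the query and the registered service expose different numbers of input/output elements; the greedy pairing, the weights, and the element names are illustrative, not the study's implementation:

```python
# Hedged sketch: pair query elements with service elements greedily and normalize
# over the larger cardinality, so a cardinality mismatch lowers the degree instead
# of making the match fail outright.

def element_similarity(a, b):
    """Stand-in for DL-based concept matching; here: exact name match only."""
    return 1.0 if a == b else 0.0

def degree(query_elems, service_elems):
    if not query_elems and not service_elems:
        return 1.0
    remaining = list(service_elems)
    total = 0.0
    for q in query_elems:
        if not remaining:
            break
        best = max(remaining, key=lambda s: element_similarity(q, s))
        total += element_similarity(q, best)
        remaining.remove(best)
    return total / max(len(query_elems), len(service_elems))

query = {"inputs": ["CustomerId"], "outputs": ["Invoice", "DueDate"]}
service = {"inputs": ["CustomerId", "ContractId"], "outputs": ["Invoice"]}

score = 0.5 * degree(query["inputs"], service["inputs"]) + \
        0.5 * degree(query["outputs"], service["outputs"])
print(round(score, 2))  # a partial degree despite the cardinality mismatch
```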

    Developing a semantic web-based distributed model management system: Experiences and lessons learned

    Distributed model management systems (DMMSs) are decision support systems focused on managing decision models throughout the modeling lifecycle and across the extended enterprise. The advent and proliferation of web services and semantic web technologies offer the possibility of sharing and reusing models in a distributed setting. This paper presents the design and implementation of a semantic web-based DMMS. Key lessons learned and the technical and organizational issues encountered are summarized, and directions for future research are outlined. From a technical perspective, future research will need to explore the viability of tools specifically designed to facilitate the semantic annotation of models, to specify and validate SA-SMML, and to extend the white-box approach presented in this paper to model types not amenable to structured modeling. From an organizational perspective, further research is needed on adoption issues and on business models that would ensure sustainable support for such systems in the service enterprise.

    Approaches to Addressing Service Selection Ties in Ad Hoc Mobile Cloud Computing


    Knowledge-driven architecture composition

    Service interoperability for embedded devices is a mandatory feature for dynamically changing Internet-of-Things and Industry 4.0 software platforms. Service interoperability is achieved on a technical, syntactic, and semantic level. If service interoperability is achieved on all layers, the plug-and-play functionality known from USB storage sticks or printer drivers becomes feasible. As a result, micro-batch-size production, individualized automation solutions, or job-order production become affordable. However, interoperability at the semantic layer is still a problem for the maturing class of IoT systems. Current solutions to achieve semantic integration of IoT devices' heterogeneous services include standards, machine-understandable service descriptions, and the implementation of software adapters. Standardization bodies such as the VDMA tackle the problem by providing a reference software architecture and an information meta-model for building up domain standards. For instance, the universal machine technology interface (UMATI) facilitates the data exchange between machines, components, and installations, and their integration into a customer- and user-specific IT ecosystem for mechanical engineering and plant construction worldwide. Automated component integration approaches fill the gap for software interfaces that do not rely on a global standard. These approaches translate required software interfaces into provided ones based on the needed architectural styles (e.g., client-server, layered, publish-subscribe, or cloud-based) using additional component descriptions. Interoperability at the semantic layer is achieved by relying on a shared domain vocabulary (e.g., an ontology) and service description (e.g., SAWSDL) used by all devices involved. If these service descriptions are available, together with machine-understandable knowledge of how to integrate software components on the functional and behavioral level, plug-and-play scenarios are feasible. However, both standards and formal service descriptions cannot be applied effectively to IoT systems because they rely on the assumption that the semantic domain is completely known when they are written down. This assumption rarely holds, as an increasing number of decentrally developed and connected IoT devices will exist (an estimated 30.73 billion in 2020 and 75.44 billion in 2025). If standards are applied in IoT systems, they must be updated continuously so that they contain the most recent domain knowledge agreed upon centrally and ahead of application. Although formal descriptions of concrete integration contexts can be created in a decentralized manner, they still rely on the assumption that the knowledge, once written down, is complete. Hence, if an interoperable service from a new device becomes available that was not considered in the initial integration context, the formal descriptions must be updated continuously. Both the formalization effort and keeping standards up to date result in too much additional engineering effort. Consequently, practitioners resort to implementing software adapters manually. However, this manual solution hardly scales with the increasing number of IoT devices. In this work, we introduce a novel engineering method that explicitly allows for an incomplete semantic domain description without losing the ability for automated IoT system integration. Dropping the completeness claim requires the management of incomplete integration knowledge.
    By sharing integration knowledge centrally, we assist the system integrator in automating software adapter generation. In addition to existing approaches, we enable semantic integration for services by making integration knowledge reusable. In a study with students, we show empirically that integration effort can be lowered in a home automation context.
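    The following sketch illustrates, under assumptions not stated in the abstract, how centrally shared integration knowledge could drive automated adapter generation; the device, interface, and mapping names are hypothetical and stand in for whatever the method actually shares:

```python
# A minimal sketch of reusing shared integration knowledge to generate a software
# adapter: the repository maps operations of a required interface onto operations
# of a provided device service, and the adapter is synthesized from those mappings.

# Shared (and intentionally incomplete) integration knowledge: required op -> provided op.
INTEGRATION_KNOWLEDGE = {
    ("LightControl", "switch_on"): ("HueBridge", "set_state", {"on": True}),
    ("LightControl", "switch_off"): ("HueBridge", "set_state", {"on": False}),
}

class HueBridge:  # hypothetical provided device service
    def set_state(self, on):
        print(f"bridge: on={on}")

def generate_adapter(required_interface, provided_service):
    """Build an adapter object whose methods delegate according to the shared knowledge."""
    class Adapter:
        pass
    for (iface, required_op), (_, provided_op, fixed_args) in INTEGRATION_KNOWLEDGE.items():
        if iface != required_interface:
            continue
        def call(self, _op=provided_op, _args=fixed_args):
            return getattr(provided_service, _op)(**_args)
        setattr(Adapter, required_op, call)
    return Adapter()

light = generate_adapter("LightControl", HueBridge())
light.switch_on()   # delegated to HueBridge.set_state(on=True)
light.switch_off()  # delegated to HueBridge.set_state(on=False)
```

    If a new device offers a compatible service that was not foreseen, only the shared mapping table needs to grow; the adapter can be regenerated, which is the kind of reuse the abstract argues for.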

    Peer-to-peer, multi-agent interaction adapted to a web architecture

    The Internet and the Web have brought in a new era of information sharing and opened up countless opportunities for people to rethink and redefine communication. With the development of network-related technologies, a Client/Server architecture has become dominant in the application layer of the Internet. Nowadays network nodes sit behind firewalls and Network Address Translation, and the centralised design of the Client/Server architecture limits communication between users on the client side. Achieving the conflicting goals of data privacy and data openness is difficult, and in many cases the difficulty is compounded by the differing solutions adopted by different organisations and companies. Building a more decentralised or distributed environment for people to freely share their knowledge has become a pressing challenge, and we need to understand how to adapt the pervasive Client/Server architecture to this more fluid environment. This thesis describes a novel framework by which network nodes or humans can interact and share knowledge with each other through formal service-choreography specifications in a decentralised manner. The platform allows peers to publish, discover and (un)subscribe to those specifications in the form of Interaction Models (IMs). Peer groups can be dynamically formed and disbanded based on the interaction logs of peers. IMs are published in HTML documents as normal Web pages indexable by search engines and associated with lightweight annotations which semantically enhance the embedded IM elements and, at the same time, make IM publications comply with the Linked Data principles. The execution of IMs is decentralised on each peer via conventional Web browsers, potentially giving the system access to a very large user community. In this thesis, after developing a proof-of-concept implementation, we carry out case studies of the resulting functionality and evaluate the implementation across several metrics. An increasing number of service providers have begun to look for customers proactively, and we believe that in the near future we will not search for services but rather services will find us through our peer communities. Our approach shows how a peer-to-peer architecture for this purpose can be built on top of a conventional Client/Server Web infrastructure.
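    A hedged sketch of publishing an Interaction Model as an annotated, indexable web page and letting a peer discover the roles it advertises; the vocabulary, attribute names, and functions are assumptions for illustration, not the thesis's implementation, which uses Linked Data vocabularies and browser-side execution:

```python
# Illustrative sketch: render an Interaction Model (IM) as a normal web page with
# lightweight annotations, then let a peer scan the page for roles it could play
# and record a subscription.

import re

def publish_im(im_id, roles):
    """Render the IM as a search-engine-indexable HTML fragment with role annotations."""
    items = "".join(f'<li property="im:role">{r}</li>' for r in roles)
    return f'<div typeof="im:InteractionModel" resource="#{im_id}"><ul>{items}</ul></div>'

def discover_roles(html):
    """A peer scanning a published page for the roles advertised by the IM."""
    return re.findall(r'property="im:role">([^<]+)</li>', html)

page = publish_im("book-purchase", ["buyer", "seller", "shipper"])
subscriptions = {}
for role in discover_roles(page):
    subscriptions.setdefault("book-purchase", []).append(role)

print(subscriptions)  # {'book-purchase': ['buyer', 'seller', 'shipper']}
```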

    Serviceorientiertes Text Mining am Beispiel von Entitätsextrahierenden Diensten (Service-oriented text mining using the example of entity-extracting services)

    Today, the majority of business-relevant knowledge exists as unstructured information in the form of text data on web pages, in office documents, or in forum posts. A large number of text-mining solutions have been developed to extract and exploit this unstructured information. Many of these systems have recently been made accessible as web services in order to simplify their use and integration. Combining several such text-mining services to solve concrete extraction tasks appears promising, since existing strengths can be exploited, weaknesses of the individual systems can be mitigated, and the use of text-mining solutions can be simplified. This thesis addresses the flexible combination of text-mining services in a service-oriented system and extends the state of the art with dedicated methods for selecting text-mining services, aggregating their results, and mapping the classification schemes they employ. First, the currently existing service landscape is analysed and, building on that, an ontology for the functional description of the services is provided, enabling function-driven selection and combination of text-mining services. Furthermore, using entity-extracting services as an example, algorithms for the quality-improving combination of extraction results are developed and extensively evaluated. The work is complemented by additional mapping and integration processes that ensure applicability in heterogeneous service landscapes in which different classification schemes are used. Finally, options for transferring the approach to other text-mining methods are discussed.
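    A hedged sketch of one simple way to aggregate entity annotations returned by several text-mining web services (the service names, entities, and vote threshold are illustrative; the thesis develops and evaluates its own combination algorithms):

```python
# Illustrative majority-voting aggregation: entities confirmed by several services
# are kept, which exploits the strengths of individual extractors and dampens
# their individual errors.

from collections import Counter

results_per_service = {
    "service-A": {("Siemens AG", "ORG"), ("München", "LOC")},
    "service-B": {("Siemens AG", "ORG"), ("Munich", "LOC")},
    "service-C": {("Siemens AG", "ORG"), ("München", "LOC"), ("AG", "ORG")},
}

def aggregate(results, min_votes=2):
    """Keep every (surface form, type) pair confirmed by at least `min_votes` services."""
    votes = Counter(entity for entities in results.values() for entity in entities)
    return {entity for entity, count in votes.items() if count >= min_votes}

print(aggregate(results_per_service))
# {('Siemens AG', 'ORG'), ('München', 'LOC')}
```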