
    Knowledge Representation in the Internet of Things: Semantic Modelling and Its Applications

    Semantic modelling provides a potential basis for interoperation among different systems and applications in the Internet of Things (IoT). However, current work has mostly focused on IoT resource management rather than on the access and utilisation of the information generated by the “Things”. We present the design of a comprehensive and lightweight semantic description model for knowledge representation in the IoT domain. The design follows widely recognised best practices in knowledge engineering and ontology modelling. Users can extend the model by linking to external ontologies, knowledge bases or existing linked data. Scalable access to IoT services and resources is achieved through a distributed semantic storage design. The usefulness of the model is also illustrated through an IoT service discovery method.
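    As a rough illustration of what such a lightweight, extensible description might look like (a sketch only: the ex: vocabulary and the choice of SSN terms below are assumptions, not the model defined in the paper), an IoT resource can be described in RDF and linked to an external ontology:

```python
# Minimal sketch of a lightweight semantic description for an IoT resource.
# The ex: vocabulary and the use of SSN terms are illustrative assumptions,
# not the description model defined in the paper.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS, XSD

EX = Namespace("http://example.org/iot#")        # hypothetical local vocabulary
SSN = Namespace("http://www.w3.org/ns/ssn/")     # external ontology to link to

g = Graph()
g.bind("ex", EX)
g.bind("ssn", SSN)

sensor = URIRef("http://example.org/devices/temp-42")
g.add((sensor, RDF.type, SSN.System))                 # link to the external ontology
g.add((sensor, RDF.type, EX.TemperatureSensor))       # local, lightweight typing
g.add((sensor, RDFS.label, Literal("Room 101 temperature")))
g.add((sensor, EX.unit, Literal("Celsius")))
g.add((sensor, EX.latestValue, Literal(21.5, datatype=XSD.double)))

print(g.serialize(format="turtle"))
```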

    Semantic Web Service Composition using Planning and Ontology Concept Relevance

    This paper presents PORSCE II, a system that combines planning and ontology concept relevance for automatically composing semantic web services. The presented approach includes transformation of the web service composition problem into a planning problem, enhancement with semantic awareness and relaxation, and solution through external planners. The produced plans are visualized and their accuracy is assessed.
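    A minimal sketch of the underlying transformation (not PORSCE II itself; the services, concepts and goal below are invented for illustration) treats each web service as a planning operator whose preconditions are its input concepts and whose effects are its output concepts, and chains operators forward until the goal concept is produced:

```python
# Sketch: web service composition recast as forward-chaining planning.
# Services, concepts and the goal are hypothetical examples.
from collections import namedtuple

Service = namedtuple("Service", ["name", "inputs", "outputs"])

services = [
    Service("GeocodeAddress", {"Address"}, {"Coordinates"}),
    Service("NearbyHotels",   {"Coordinates"}, {"HotelList"}),
    Service("BookHotel",      {"HotelList", "CreditCard"}, {"Booking"}),
]

def compose(available, goal, services):
    """Greedy forward chaining: apply any service whose inputs are satisfied
    until the goal concept is produced. Returns the ordered plan or None."""
    state, plan = set(available), []
    while goal not in state:
        applicable = [s for s in services
                      if s.inputs <= state and not s.outputs <= state]
        if not applicable:
            return None          # no remaining service adds anything new
        chosen = applicable[0]
        plan.append(chosen.name)
        state |= chosen.outputs
    return plan

print(compose({"Address", "CreditCard"}, "Booking", services))
# -> ['GeocodeAddress', 'NearbyHotels', 'BookHotel']
```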

    An Investigation into Dynamic Web Service Composition Using a Simulation Framework

    [Motivation] Web Services technology has emerged as a promising solution for creating distributed systems, with the potential to overcome the limitations of former distributed system technologies. Web services provide a platform-independent framework that enables companies to run their business services over the internet. Therefore, many techniques and tools are being developed to create business-to-business/business-to-customer applications. In particular, researchers are exploring ways to build new services from existing services by dynamically composing services from a range of resources. [Aim] This thesis aims to identify the technologies and strategies currently being explored for organising the dynamic composition of Web services, and to determine how extensively each of these has been demonstrated and assessed. In addition, the thesis studies the matchmaking and selection processes, which are essential for Web service composition. [Research Method] We undertook a mapping study of empirical papers published over the period 2000 to 2009. The aim of the mapping study was to identify the technologies and strategies currently being explored for organising the composition of Web services, and to determine how extensively each of these has been demonstrated and assessed. We then built a simulation framework to carry out experiments on composition strategies. The first experiment compared the results of a close replication of an existing study with the original results in order to evaluate our close replication study. The simulation framework was then used to investigate the use of a QoS model for supporting the selection process, comparing it with the ranking technique in terms of performance. [Results] The mapping study found 1172 papers that matched our search terms, from which 94 were classified as providing practical demonstration of ideas related to dynamic composition. We analysed 68 of these in more detail. Only 29 provided a 'formal' empirical evaluation. From these, we selected a 'baseline' study to test our simulation model. Running the experiments using simulated datasets showed that, in the first experiment, the results of the close replication study and the original study were similar in terms of their profile. In the second experiment, the results demonstrated that the QoS model was better than the ranking mechanism at selecting the composite plan with the highest quality score. [Conclusions] No one approach to service composition seemed to meet all needs, but a number have been investigated more extensively than others. The similarity between the results of the close replication and the original study showed the validity of our simulation framework and demonstrated that the results of the original study can be replicated. Using the simulation, it was demonstrated that the QoS model performed better than the ranking mechanism in terms of the overall quality of the selected plan. The overall objective of this research was to develop a generic life-cycle model for Web service composition from a mapping study of the literature; this was then used to run simulations to replicate studies on matchmaking and to compare selection methods.
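    To make the selection comparison concrete, the following sketch shows one common way a QoS score for a composite plan can be computed and used to pick the best candidate; the aggregation rules, attribute names and weights are assumptions for illustration, not necessarily those used in the thesis:

```python
# Sketch of QoS-driven selection among candidate composite plans.
# Aggregation rules, attribute names and weights are illustrative assumptions.
import math

def plan_qos(plan):
    """Aggregate per-service QoS into plan-level QoS for a sequential plan."""
    return {
        "time": sum(s["time"] for s in plan),                        # response times add up
        "cost": sum(s["cost"] for s in plan),                        # costs add up
        "availability": math.prod(s["availability"] for s in plan),  # availabilities multiply
    }

def score(qos, weights):
    """Weighted score: lower time/cost is better, higher availability is better."""
    return (weights["time"] / (1.0 + qos["time"])
            + weights["cost"] / (1.0 + qos["cost"])
            + weights["availability"] * qos["availability"])

plans = {
    "planA": [{"time": 120, "cost": 2.0, "availability": 0.99},
              {"time": 200, "cost": 1.0, "availability": 0.95}],
    "planB": [{"time": 400, "cost": 0.5, "availability": 0.99}],
}
weights = {"time": 0.4, "cost": 0.2, "availability": 0.4}

best = max(plans, key=lambda name: score(plan_qos(plans[name]), weights))
print("selected:", best)
```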

    Modelling Web Service Composition for Deductive Web Mining

    Composition of simpler web services into custom applications is understood as a promising technique for answering information requests in a heterogeneous and changing environment. This is also relevant for applications characterised as deductive web mining (DWM). We suggest using problem-solving methods (PSMs) as templates for composed services. We developed a multi-dimensional, ontology-based framework and a collection of PSMs, which make it possible to characterise DWM applications at an abstract level; we describe several existing applications in this framework. We show that the heterogeneity and unboundedness of the web demand some modifications of the PSM paradigm as used in the context of traditional artificial intelligence. Finally, as a simple proof of concept, we simulate automated DWM service composition on a small collection of services, PSM-based templates, data objects and ontological knowledge, all implemented in Prolog.
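    The paper's proof of concept is implemented in Prolog; as a very rough, hedged Python sketch of the template idea (the task roles and service capabilities below are invented), a PSM can be treated as an ordered list of abstract tasks that is instantiated by binding concrete services advertising matching capabilities:

```python
# Sketch: instantiate a problem-solving-method (PSM) template by binding
# concrete services to its abstract tasks. Roles and services are hypothetical;
# the paper's own proof of concept is written in Prolog.
classification_psm = ["abstract", "match", "refine"]   # ordered abstract task roles

services = {
    "PageAbstractor": {"solves": "abstract"},
    "ConceptMatcher": {"solves": "match"},
    "ResultRefiner":  {"solves": "refine"},
}

def instantiate(psm, services):
    """Bind one capable service to each abstract task, in template order."""
    bindings = []
    for task in psm:
        candidates = [name for name, meta in services.items()
                      if meta["solves"] == task]
        if not candidates:
            return None          # template cannot be instantiated
        bindings.append((task, candidates[0]))
    return bindings

print(instantiate(classification_psm, services))
```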

    Advanced Knowledge Technologies at the Midterm: Tools and Methods for the Semantic Web

    The University of Edinburgh and research sponsors are authorised to reproduce and distribute reprints and on-line copies for their purposes notwithstanding any copyright annotation hereon. The views and conclusions contained herein are the author's and shouldn't be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of other parties.

    In a celebrated essay on the new electronic media, Marshall McLuhan wrote in 1962: "Our private senses are not closed systems but are endlessly translated into each other in that experience which we call consciousness. Our extended senses, tools, technologies, through the ages, have been closed systems incapable of interplay or collective awareness. Now, in the electric age, the very instantaneous nature of co-existence among our technological instruments has created a crisis quite new in human history. Our extended faculties and senses now constitute a single field of experience which demands that they become collectively conscious. Our technologies, like our private senses, now demand an interplay and ratio that makes rational co-existence possible. As long as our technologies were as slow as the wheel or the alphabet or money, the fact that they were separate, closed systems was socially and psychically supportable. This is not true now when sight and sound and movement are simultaneous and global in extent." (McLuhan 1962, p.5, emphasis in original)

    Over forty years later, the seamless interplay that McLuhan demanded between our technologies is still barely visible. McLuhan's predictions of the spread, and increased importance, of electronic media have of course been borne out, and the worlds of business, science and knowledge storage and transfer have been revolutionised. Yet the integration of electronic systems as open systems remains in its infancy.

    Advanced Knowledge Technologies (AKT) aims to address this problem: to create a view of knowledge and its management across its lifecycle, and to research and create the services and technologies that such unification will require. Half way through its six-year span, the results are beginning to come through, and this paper will explore some of the services, technologies and methodologies that have been developed. We hope to give a sense in this paper of the potential for the next three years, to discuss the insights and lessons learnt in the first phase of the project, and to articulate the challenges and issues that remain.

    The WWW provided the original context that made the AKT approach to knowledge management (KM) possible. AKT was initially proposed in 1999; it brought together an interdisciplinary consortium with the technological breadth and complementarity to create the conditions for a unified approach to knowledge across its lifecycle. The combination of this expertise, and the time and space afforded the consortium by the IRC structure, suggested the opportunity for a concerted effort to develop an approach to advanced knowledge technologies, based on the WWW as a basic infrastructure.

    The technological context of AKT altered for the better in the short period between the development of the proposal and the beginning of the project itself, with the development of the semantic web (SW), which foresaw much more intelligent manipulation and querying of knowledge. The opportunities that the SW provided for, e.g., more intelligent retrieval put AKT at the centre of information technology innovation and knowledge management services; the AKT skill set would clearly be central for the exploitation of those opportunities.

    The SW, as an extension of the WWW, provides an interesting set of constraints on the knowledge management services AKT tries to provide. As a medium for the semantically-informed coordination of information, it has suggested a number of ways in which the objectives of AKT can be achieved, most obviously through the provision of knowledge management services delivered over the web, as opposed to the creation and provision of technologies to manage knowledge.

    AKT is working on the assumption that many web services will be developed and provided for users. The KM problem in the near future will be one of deciding which services are needed and of coordinating them. Many of these services will be largely or entirely legacies of the WWW, and so the capabilities of the services will vary. As well as providing useful KM services in their own right, AKT will be aiming to exploit this opportunity by reasoning over services, brokering between them, and providing essential meta-services for SW knowledge service management.

    Ontologies will be a crucial tool for the SW. The AKT consortium brings a great deal of expertise on ontologies together, and ontologies were always going to be a key part of the strategy. All kinds of knowledge sharing and transfer activities will be mediated by ontologies, and ontology management will be an important enabling task. Different applications will need to cope with inconsistent ontologies, or with the problems that will follow the automatic creation of ontologies (e.g. merging of pre-existing ontologies to create a third). Ontology mapping, and the elimination of conflicts of reference, will be important tasks. All of these issues are discussed along with our proposed technologies.

    Similarly, specifications of tasks will be used for the deployment of knowledge services over the SW, but in general it cannot be expected that in the medium term there will be standards for task (or service) specifications. The brokering meta-services that are envisaged will have to deal with this heterogeneity.

    The emerging picture of the SW is one of great opportunity, but it will not be a well-ordered, certain or consistent environment. It will comprise many repositories of legacy data, outdated and inconsistent stores, and requirements for common understandings across divergent formalisms. There is clearly a role for standards to play in bringing much of this context together, and AKT is playing a significant role in these efforts. But standards take time to emerge, they take political power to enforce, and they have been known to stifle innovation (in the short term). AKT is keen to understand the balance between principled inference and statistical processing of web content. Logical inference on the Web is tough: complex queries using traditional AI inference methods bring most distributed computer systems to their knees. Do we set up semantically well-behaved areas of the Web? Is any part of the Web in which semantic hygiene prevails interesting enough to reason in? These and many other questions need to be addressed if we are to provide effective knowledge technologies for our content on the web.

    Proceedings of the International Workshop on Enterprise Interoperability (IWEI 2008)


    Towards Intelligent Assistance for a Data Mining Process

    A data mining (DM) process involves multiple stages. A simple, but typical, process might include preprocessing data, applying a data-mining algorithm, and postprocessing the mining results. There are many possible choices for each stage, and only some combinations are valid. Because of the large space and non-trivial interactions, both novices and data-mining specialists need assistance in composing and selecting DM processes. Extending notions developed for statistical expert systems, we present a prototype Intelligent Discovery Assistant (IDA), which provides users with (i) systematic enumerations of valid DM processes, so that important, potentially fruitful options are not overlooked, and (ii) effective rankings of these valid processes by different criteria, to facilitate the choice of DM processes to execute. We use the prototype to show that an IDA can indeed provide useful enumerations and effective rankings in the context of simple classification processes. We discuss how an IDA could be an important tool for knowledge sharing among a team of data miners. Finally, we illustrate the claims with a comprehensive demonstration of cost-sensitive classification using a more involved process and data from the 1998 KDDCUP competition. NYU, Stern School of Business, IOMS Department, Center for Digital Economy Research.
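    As a hedged sketch of the two IDA functions described above (systematic enumeration of valid processes and ranking by a chosen criterion), the toy operator catalogue below is invented for illustration and is not the prototype from the paper:

```python
# Sketch: enumerate valid DM processes from an operator catalogue, then rank them.
# Operators, constraints and scores are invented for illustration only.
from itertools import product

preprocessors = [
    {"name": "none",   "handles_missing": False, "speed": 1.0},
    {"name": "impute", "handles_missing": True,  "speed": 0.8},
]
learners = [
    {"name": "naive_bayes", "needs_complete_data": False, "produces_tree": False,
     "accuracy": 0.70, "speed": 1.0},
    {"name": "c4.5",        "needs_complete_data": True,  "produces_tree": True,
     "accuracy": 0.80, "speed": 0.6},
]
postprocessors = [
    {"name": "none",  "needs_tree": False, "speed": 1.0},
    {"name": "prune", "needs_tree": True,  "speed": 0.9},
]

def valid(pre, learn, post, data_has_missing=True):
    """A process is valid only if every operator's requirements are met."""
    if data_has_missing and learn["needs_complete_data"] and not pre["handles_missing"]:
        return False
    if post["needs_tree"] and not learn["produces_tree"]:
        return False
    return True

processes = [(p, l, q)
             for p, l, q in product(preprocessors, learners, postprocessors)
             if valid(p, l, q)]

# Rank by expected accuracy first, then by combined speed.
ranked = sorted(processes,
                key=lambda proc: (proc[1]["accuracy"],
                                  proc[0]["speed"] * proc[1]["speed"] * proc[2]["speed"]),
                reverse=True)
for pre, learn, post in ranked:
    print(pre["name"], "->", learn["name"], "->", post["name"])
```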