
    Managing Uncertainty and Vagueness in Semantic Web

    The Semantic Web has been designed for carrying out tasks without human intervention. In this context, the term machine-processable information has been introduced. In most Semantic Web tasks we come across incomplete information, namely uncertainty and vagueness. For this reason, a method that represents uncertainty and vagueness under a common framework has to be defined. Semantic Web technologies are defined through the Semantic Web Stack and are based on a clear formal foundation; therefore, any representation scheme should be aligned with these technologies and be formally defined. As ontologies are central to knowledge representation in the Semantic Web, any such framework is desirable to be built upon them. In our work, we have defined an approach for representing uncertainty and vagueness under a common framework. Uncertainty is represented through the Dempster-Shafer model, whereas vagueness is represented through Fuzzy Logic and Fuzzy Sets. To this end, we have defined a theoretical framework aimed at a combination of the classical crisp Description Logic ALC with a Dempster-Shafer module, and as a next step we added fuzziness to this model. Throughout our work, we have implemented metaontologies in order to represent uncertain and vague concepts, and we have then tested our methodology in real-world applications.
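    As a quick illustration of the machinery referred to above (a generic textbook formulation, not notation taken from the thesis), two independent Dempster-Shafer mass functions m1 and m2 over a frame of discernment Θ are combined by Dempster's rule, and belief and plausibility bound the uncertainty of each hypothesis:

```latex
% Dempster's rule of combination and the derived belief/plausibility
% measures (generic formulation, not the thesis' own notation).
\[
  (m_1 \oplus m_2)(A) \;=\; \frac{1}{1-K} \sum_{B \cap C = A} m_1(B)\, m_2(C),
  \qquad A \neq \emptyset,
  \qquad K \;=\; \sum_{B \cap C = \emptyset} m_1(B)\, m_2(C),
\]
\[
  \operatorname{Bel}(A) \;=\; \sum_{\emptyset \neq B \subseteq A} m(B),
  \qquad
  \operatorname{Pl}(A) \;=\; \sum_{B \cap A \neq \emptyset} m(B).
\]
```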

    Multi-Attribute Decision Making using Weighted Description Logics

    We introduce a framework based on Description Logics which can be used to encode and solve decision problems by combining DL inference services with utility theory to represent the preferences of the agent. The novelty of the approach is that we consider ABoxes as alternatives, and weighted concept and role assertions as preferences over possible outcomes. We discuss a relevant use case to show the benefits of the approach from the decision-theory point of view.
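    A minimal sketch of the idea as described in the abstract: alternatives are ABoxes (sets of assertions), preferences are weighted assertions, and the utility of an alternative is the total weight of the preferred assertions it contains. The names, the tuple encoding of assertions, and the naive "membership instead of DL entailment" check are assumptions of this sketch, not the paper's actual formalization.

```python
# Hypothetical sketch: score ABoxes (alternatives) by the weights of the
# preferred assertions they contain; a real system would use DL entailment
# rather than plain set membership.
from typing import Dict, Set, Tuple

Assertion = Tuple[str, str]  # e.g. ("NearBeach", "h") or ("hasRating:5", "h")

def utility(abox: Set[Assertion], preferences: Dict[Assertion, float]) -> float:
    """Sum the weights of preferred assertions that hold in the ABox."""
    return sum(w for assertion, w in preferences.items() if assertion in abox)

def best_alternative(aboxes: Dict[str, Set[Assertion]],
                     preferences: Dict[Assertion, float]) -> str:
    """Pick the alternative (ABox) with maximal utility."""
    return max(aboxes, key=lambda name: utility(aboxes[name], preferences))

if __name__ == "__main__":
    preferences = {("NearBeach", "h"): 0.7, ("hasRating:5", "h"): 0.3}
    aboxes = {
        "hotel_a": {("NearBeach", "h"), ("hasRating:4", "h")},
        "hotel_b": {("hasRating:5", "h")},
    }
    print(best_alternative(aboxes, preferences))  # -> "hotel_a"
```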

    Expressive probabilistic description logics

    The work in this paper is directed towards sophisticated formalisms for reasoning under probabilistic uncertainty in ontologies in the Semantic Web. Ontologies play a central role in the development of the Semantic Web, since they provide a precise definition of shared terms in web resources. They are expressed in the standardized web ontology language OWL, which consists of the three increasingly expressive sublanguages OWL Lite, OWL DL, and OWL Full. The sublanguages OWL Lite and OWL DL have a formal semantics and reasoning support through a mapping to the expressive description logics SHIF(D) and SHOIN(D), respectively. In this paper, we present the expressive probabilistic description logics P-SHIF(D) and P-SHOIN(D), which are probabilistic extensions of these description logics. They allow for expressing rich terminological probabilistic knowledge about concepts and roles as well as assertional probabilistic knowledge about instances of concepts and roles. They are semantically based on the notion of probabilistic lexicographic entailment from probabilistic default reasoning, which naturally interprets this terminological and assertional probabilistic knowledge as knowledge about random and concrete instances, respectively. As an important additional feature, they also allow for expressing terminological default knowledge, which is semantically interpreted as in Lehmann's lexicographic entailment in default reasoning from conditional knowledge bases. Another important feature of this extension of SHIF(D) and SHOIN(D) by probabilistic uncertainty is that it can be applied to other classical description logics as well. We then present sound and complete algorithms for the main reasoning problems in the new probabilistic description logics, which are based on reductions to reasoning in their classical counterparts and to solving linear optimization problems. In particular, this shows the important result that reasoning in the new probabilistic description logics is decidable/computable. Furthermore, we also analyze the computational complexity of the main reasoning problems in the new probabilistic description logics in the general as well as restricted cases.
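    For readers unfamiliar with the formalism: terminological probabilistic knowledge in such logics is commonly written as conditional constraints of the form (D|C)[l,u], and under a probability distribution over possible worlds a constraint of this kind amounts to linear conditions, which is what makes the reduction to linear optimization mentioned above possible. The rendering below is a standard textbook-style sketch, not an excerpt from the paper.

```latex
% Standard-style rendering (not quoted from the paper): a conditional
% constraint (D|C)[l,u] asserts that the conditional probability of D
% given C lies in the interval [l,u].
\[
  (D \mid C)[l,u]
  \quad\Longleftrightarrow\quad
  l \cdot \Pr(C) \;\le\; \Pr(C \sqcap D) \;\le\; u \cdot \Pr(C),
\]
% where Pr is induced by a probability function \mu over possible worlds W:
\[
  \Pr(E) \;=\; \sum_{w \,\in\, W,\; w \,\models\, E} \mu(w).
\]
```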

    Optimization and inference under fuzzy numerical constraints

    Extensive research has been done in the areas of Constraint Satisfaction with discrete/integer and real domain ranges. Multiple platforms and systems that deal with these kinds of domains have been developed and appropriately optimized. Nevertheless, due to the incomplete and possibly vague nature of real-life problems, modeling a crisp and adequately strict satisfaction problem may not always be easy or even appropriate, and modeling incomplete knowledge or solving an incomplete/relaxed representation of a problem is a much harder issue to tackle. Additionally, practical modeling requirements and search optimizations require specific domain knowledge in order to be implemented, making the creation of a more generic optimization framework an even harder problem. In this thesis, we will study the problem of modeling and utilizing incomplete and fuzzy constraints, as well as possible optimization strategies. As constraint satisfaction problems usually contain hard-coded constraints based on specific problem and domain knowledge, we will investigate whether strategies and generic heuristics exist for inferring new constraint rules. Such additional rules could optimize the search process by imposing stricter constraints and thus pruning the search space, or provide useful insight to the researcher concerning the nature of the investigated problem.
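    A minimal sketch of how a fuzzy numerical constraint can be modeled, assuming trapezoidal membership functions and min-combination of satisfaction degrees as in classical fuzzy CSP formulations; the function names and example values are illustrative assumptions, not the thesis' actual model.

```python
# Sketch: fuzzy numerical constraints as trapezoidal membership functions;
# an assignment's overall satisfaction degree is the minimum degree over
# all constraints (min-combination).
def trapezoid(a: float, b: float, c: float, d: float):
    """Membership function rising on [a,b], flat on [b,c], falling on [c,d]."""
    def mu(x: float) -> float:
        if x <= a or x >= d:
            return 0.0
        if b <= x <= c:
            return 1.0
        if x < b:
            return (x - a) / (b - a)
        return (d - x) / (d - c)
    return mu

def satisfaction(assignment: dict, fuzzy_constraints: list) -> float:
    """Degree to which an assignment satisfies all fuzzy constraints."""
    return min(mu(assignment[var]) for var, mu in fuzzy_constraints)

if __name__ == "__main__":
    # "x should be roughly between 10 and 20", "y should be close to 5"
    constraints = [("x", trapezoid(8, 10, 20, 25)), ("y", trapezoid(3, 5, 5, 7))]
    print(satisfaction({"x": 12, "y": 6}, constraints))  # -> 0.5
```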

    Formal Description of Web Services for Expressive Matchmaking


    Semantics-aware planning methodology for automatic web service composition

    Service-Oriented Computing (SOC) has been a major research topic in the past years. It is based on the idea of composing distributed applications, even in heterogeneous environments, by discovering and invoking network-available Web Services to accomplish some complex task when no existing service can satisfy the user request on its own. Service-Oriented Architecture (SOA) is a key design principle that facilitates building these autonomous, platform-independent Web Services. However, in distributed environments, using services without considering their underlying semantics, whether functional semantics or quality guarantees, can negatively affect a composition process by raising intermittent failures or leading to slow performance. More recently, Artificial Intelligence (AI) planning technologies have been exploited to facilitate automated composition, but most AI-planning-based algorithms do not scale well as the number of Web Services increases, and there is no guarantee that a solution to a composition problem will be found even if one exists. AI Planning Graph addresses various limitations of traditional AI planning by providing a unique search space in a directed layered graph. However, the existing AI Planning Graph algorithm focuses only on finding complete solutions, without taking into account services that do not contribute to the goals; as a result, building such a graph can fail when many services are available, even though most of them are irrelevant to the goals. This dissertation puts forward the concept of a more intelligent planning mechanism that combines semantics-aware service selection with a goal-directed planning algorithm. Based on this concept, a new planning system called Semantics Enhanced web service Mining (SEwsMining) has been developed. Semantics-aware service selection is achieved by calculating on-demand multi-attribute semantic similarity based on semantic annotations (QWSMO-Lite). The planning algorithm is a substantial revision of the AI GraphPlan algorithm; to reduce the size of the planning graph, a bi-directional planning strategy has been developed.
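    An illustrative sketch, in the spirit of the goal-directed, semantics-aware service selection described above: candidate services are pre-filtered by a crude annotation-overlap similarity before a planning graph is built. The similarity measure, threshold, and service structure are assumptions of this sketch, not SEwsMining's actual design.

```python
# Sketch: pre-filter services whose output annotations are similar enough to
# the goal, so the planning graph is built from a smaller candidate set.
from dataclasses import dataclass
from typing import FrozenSet, List

@dataclass(frozen=True)
class Service:
    name: str
    inputs: FrozenSet[str]   # semantic annotations of required inputs
    outputs: FrozenSet[str]  # semantic annotations of produced outputs

def jaccard(a: FrozenSet[str], b: FrozenSet[str]) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def select_relevant(services: List[Service], goal: FrozenSet[str],
                    threshold: float = 0.25) -> List[Service]:
    """Keep only services whose outputs are sufficiently similar to the goal."""
    return [s for s in services if jaccard(s.outputs, goal) >= threshold]

if __name__ == "__main__":
    goal = frozenset({"HotelBooking", "Invoice"})
    services = [
        Service("BookHotel", frozenset({"City", "Date"}), frozenset({"HotelBooking"})),
        Service("WeatherInfo", frozenset({"City"}), frozenset({"Forecast"})),
    ]
    print([s.name for s in select_relevant(services, goal)])  # -> ['BookHotel']
```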

    An Investigation into Dynamic Web Service Composition Using a Simulation Framework

    [Motivation] Web Services technology has emerged as a promising solution for creating distributed systems, with the potential to overcome the limitations of former distributed system technologies. Web services provide a platform-independent framework that enables companies to run their business services over the internet. Therefore, many techniques and tools are being developed to create business-to-business/business-to-customer applications. In particular, researchers are exploring ways to build new services from existing services by dynamically composing services from a range of resources. [Aim] This thesis aims to identify the technologies and strategies currently being explored for organising the dynamic composition of Web services, and to determine how extensively each of these has been demonstrated and assessed. In addition, the thesis studies the matchmaking and selection processes, which are essential for Web service composition. [Research Method] We undertook a mapping study of empirical papers published over the period 2000 to 2009. The aim of the mapping study was to identify the technologies and strategies currently being explored for organising the composition of Web services, and to determine how extensively each of these has been demonstrated and assessed. We then built a simulation framework to carry out experiments on composition strategies. The first experiment compared the results of a close replication of an existing study with the original results in order to evaluate our replication. The simulation framework was then used to investigate the use of a QoS model for supporting the selection process, comparing it with a ranking technique in terms of performance. [Results] The mapping study found 1172 papers that matched our search terms, of which 94 were classified as providing practical demonstration of ideas related to dynamic composition. We analysed 68 of these in more detail; only 29 provided a 'formal' empirical evaluation. From these, we selected a 'baseline' study to test our simulation model. Running the experiments on simulated datasets showed that, in the first experiment, the results of the close replication study and the original study were similar in terms of their profile. In the second experiment, the results demonstrated that the QoS model was better than the ranking mechanism at selecting the composite plan with the highest quality score. [Conclusions] No one approach to service composition seemed to meet all needs, but a number have been investigated more extensively than others. The similarity between the results of the close replication and the original study shows the validity of our simulation framework and that the results of the original study can be replicated. Using the simulation, it was demonstrated that the QoS model performed better than the ranking mechanism in terms of the overall quality of the selected plan. The overall objective of this research was to develop a generic life-cycle model for Web service composition from a mapping study of the literature; this was then used to run simulations to replicate studies on matchmaking and compare selection methods.
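    As a hedged illustration of what a weighted QoS model for scoring composite plans can look like: raw QoS attributes are normalized onto [0,1] and combined as a weighted sum. The attribute names, normalization bounds, and weights are assumptions of this sketch; the thesis' actual QoS model and the ranking baseline it was compared against may differ.

```python
# Sketch: weighted sum of normalized QoS attributes for a composite plan.
def normalize(value: float, worst: float, best: float) -> float:
    """Map a raw QoS value onto [0,1], where 1 is best."""
    if worst == best:
        return 1.0
    return (value - worst) / (best - worst)

def plan_score(plan: dict, weights: dict) -> float:
    """Weighted sum of normalized QoS attributes of a composite plan."""
    # For response time, lower is better, so the worst/best bounds are swapped.
    scores = {
        "availability":  normalize(plan["availability"], 0.0, 1.0),
        "reliability":   normalize(plan["reliability"], 0.0, 1.0),
        "response_time": normalize(plan["response_time"], worst=2000.0, best=0.0),
    }
    return sum(weights[attr] * scores[attr] for attr in weights)

if __name__ == "__main__":
    plan = {"availability": 0.95, "reliability": 0.9, "response_time": 400.0}
    weights = {"availability": 0.4, "reliability": 0.4, "response_time": 0.2}
    print(round(plan_score(plan, weights), 3))  # -> 0.9
```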

    A Geospatial Service Model and Catalog for Discovery and Orchestration

    The goal of this research is to provide a supporting Web services architecture, consisting of a service model and catalog, that allows discovery and automatic orchestration of geospatial Web services. First, a methodology for supporting geospatial Web services with existing orchestration tools is presented. Geospatial services are automatically translated into SOAP/WSDL services by a portable service wrapper; their data layers are exposed as atomic functions, while WSDL extensions provide syntactic metadata. Compliant services are modeled using the description logic capabilities of the Web Ontology Language (OWL). The resulting geospatial service model serves a number of functions. It provides a basic taxonomy of geospatial Web services that is useful for templating service compositions, and it contains the necessary annotations to allow discovery of services. Importantly, the model defines a number of logical relationships between its internal concepts which allow inconsistency detection for the model as a whole and for individual service instances as they are added to the catalog. These logical relationships have the additional benefit of supporting automatic classification of geospatial service individuals when they are added to the service catalog. The geospatial service catalog is backed by the description logic model and supports queries that are more complex than those available using standard relational data models, such as the capability to query using concept hierarchies. An example orchestration system demonstrates the use of the geospatial service catalog for query evaluation in an automatic orchestration system (both fully and semi-automatic orchestration). Computational complexity analysis and experimental performance analysis identify potential performance problems in the geospatial service catalog. Solutions to these performance issues are presented in the form of partitioning service instance realization, low-cost pre-filtering of service instances, and pre-processing realization. The resulting model and catalog provide an architecture to support automatic orchestration capable of complementing the multiple service composition algorithms that currently exist. Importantly, the geospatial service model and catalog go beyond simply supporting orchestration systems: by providing a general solution to the modeling and discovery of geospatial Web services, they are useful in any geospatial Web service enterprise.
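    A minimal sketch of the "query using concept hierarchies" idea: a catalog lookup returns services typed by the requested concept or by any of its (transitive) subconcepts. The toy hierarchy, catalog structure, and names are illustrative assumptions, not the OWL model described in the abstract.

```python
# Sketch: answer a catalog query by expanding the requested concept to all
# of its transitive subconcepts and matching service types against that set.
from typing import Dict, List, Set

def descendants(concept: str, subclass_of: Dict[str, str]) -> Set[str]:
    """Collect the concept plus every concept that (transitively) specializes it."""
    result = {concept}
    changed = True
    while changed:
        changed = False
        for child, parent in subclass_of.items():
            if parent in result and child not in result:
                result.add(child)
                changed = True
    return result

def query_catalog(catalog: Dict[str, str], concept: str,
                  subclass_of: Dict[str, str]) -> List[str]:
    """Return catalog services whose type falls under the queried concept."""
    wanted = descendants(concept, subclass_of)
    return [service for service, ctype in catalog.items() if ctype in wanted]

if __name__ == "__main__":
    subclass_of = {"ElevationService": "GeospatialService",
                   "HydrologyService": "GeospatialService"}
    catalog = {"srtm_dem": "ElevationService", "gauge_feed": "HydrologyService"}
    print(query_catalog(catalog, "GeospatialService", subclass_of))
    # -> ['srtm_dem', 'gauge_feed']
```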

    Human Resources and Performance in Social Enterprises: Evidence from Microfinance Institutions


    Cross-domain Recommendations based on semantically-enhanced User Web Behavior

    Information seeking in the Web can be facilitated by recommender systems that guide users in a personalized manner to relevant resources within the large space of possible options in the Web. This work investigates how to model people's Web behavior at multiple sites and learn to predict future preferences, in order to generate relevant cross-domain recommendations. This thesis contributes novel techniques for building cross-domain recommender systems in an open Web setting.