    Movie Recommendation System based on Dempster-Shafer theory

    In this thesis, we study the subject of Handling Uncertainty in Recommendation Systems. We implemented a Collaborative Filtering movie recommendation system using the Python programming language. We use Dempster-Shafer theory to propagate the uncertainties arising from imperfections in user ratings into the decision-making process. By converting ratings into mass functions and pignistic probabilities, we measure the similarities between users and apply Dempster's rule of combination to predict user ratings for movies they have not yet rated.
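
    The pipeline sketched in this abstract (ratings mapped to mass functions, fused with Dempster's rule of combination, and read off through pignistic probabilities) can be illustrated with a short Python sketch. The rating frame, the rating-to-mass encoding, and the two neighbours' evidence below are illustrative assumptions, not the thesis implementation.

        from itertools import product

        FRAME = frozenset({1, 2, 3, 4, 5})  # hypothetical frame of rating levels

        def dempster_combine(m1, m2):
            """Combine two mass functions (dict: frozenset -> mass) with Dempster's rule."""
            combined, conflict = {}, 0.0
            for (a, x), (b, y) in product(m1.items(), m2.items()):
                inter = a & b
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + x * y
                else:
                    conflict += x * y
            if conflict >= 1.0:
                raise ValueError("total conflict: sources cannot be combined")
            return {s: v / (1.0 - conflict) for s, v in combined.items()}

        def pignistic(m):
            """Pignistic transform: spread each focal set's mass uniformly over its elements."""
            betp = {w: 0.0 for w in FRAME}
            for focal, mass in m.items():
                for w in focal:
                    betp[w] += mass / len(focal)
            return betp

        # Illustrative evidence from two similar users about an unseen movie.
        m_user_a = {frozenset({4}): 0.7, FRAME: 0.3}     # leans towards rating 4
        m_user_b = {frozenset({4, 5}): 0.6, FRAME: 0.4}  # says "4 or 5"
        fused = dempster_combine(m_user_a, m_user_b)
        betp = pignistic(fused)
        prediction = max(betp, key=betp.get)             # most plausible rating after fusion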

    Clouds, p-boxes, fuzzy sets, and other uncertainty representations in higher dimensions

    Uncertainty modeling in real-life applications faces serious problems such as the curse of dimensionality and a lack of sufficient statistical data. In this paper we give a survey of methods for uncertainty handling and report the latest progress towards real-life applications with respect to these problems. We compare different methods and highlight their relationships, and we give an intuitive introduction to the concept of potential clouds, our latest approach, which successfully copes with both higher dimensions and incomplete information.
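
    As a rough illustration of one of the representations surveyed here, the sketch below encodes a p-box as a pair of bounding CDFs and reads off interval bounds on P(X <= x). The normal envelopes, grid, and threshold are purely illustrative assumptions.

        import numpy as np
        from scipy import stats

        # Hypothetical p-box: all distributions whose CDF lies between two envelopes.
        xs = np.linspace(-3.0, 3.0, 301)
        lower_cdf = stats.norm(loc=0.5, scale=1.0).cdf(xs)   # lower envelope of the CDF
        upper_cdf = stats.norm(loc=-0.5, scale=1.0).cdf(xs)  # upper envelope of the CDF

        def prob_interval(threshold):
            """Bounds on P(X <= threshold) over every distribution inside the p-box."""
            i = np.searchsorted(xs, threshold)
            return lower_cdf[i], upper_cdf[i]

        lo, hi = prob_interval(0.0)  # P(X <= 0) is only known to lie in [lo, hi]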

    Argumentation systems and belief functions

    Uncertain knowledge can be represented in the framework of argumentation systems. In this framework, uncertainty is expressed using so-called assumptions. Depending on the setting of the assumptions, a given hypothesis of interest can be proved or falsified. The main goal of assumption-based reasoning is to determine the set of all supporting arguments for a given hypothesis, where a supporting argument is a particular setting of the assumptions. Assigning probabilities to the assumptions leads to the framework of probabilistic argumentation systems and allows an additional quantitative judgement of a given hypothesis. One way to compute the degree of support for a given hypothesis is to compute the corresponding set of supporting arguments first and then derive the desired result from it. The problem with this approach is that the set of supporting arguments is sometimes very large and cannot be represented explicitly. This thesis proposes an alternative way of computing degrees of support which is often superior to the first approach. Instead of computing a symbolic result from which the numerical result is derived, we avoid symbolic computations altogether. This is possible because the degree of support corresponds to the notion of normalized belief in Dempster-Shafer theory. We show how a probabilistic argumentation system can be transformed into a set of independent mass functions. For efficient computation, Shenoy's local computation framework is used, in which computation is based on a message-passing scheme in a join tree. Four different architectures can be used for propagating potentials in the join tree; these architectures correspond to a complete compilation of the knowledge, which allows queries to be answered quickly. In contrast, this thesis proposes a new method which corresponds to a partial compilation of the knowledge. This method is particularly interesting if there are only a few queries. In addition, it can avoid having to reconstruct the join tree in order to answer a given query. Finally, the language ABEL is presented, which allows probabilistic argumentation systems to be expressed in a convenient way. We show how several examples from different domains can be modeled using ABEL. These examples are also used to point out important aspects of the computational theory presented in the first chapters of this thesis.
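
    The correspondence between degree of support and normalized belief can be illustrated with a small brute-force sketch. The assumption probabilities, the contradiction constraint, and the rule deriving the hypothesis below are illustrative assumptions, and the explicit enumeration merely stands in for the join-tree computation described in the thesis.

        from itertools import product

        assumptions = {"a1": 0.8, "a2": 0.6}  # hypothetical P(assumption holds)

        def contradicts(setting):
            # Illustrative constraint: a1 and a2 cannot both be false.
            return not setting["a1"] and not setting["a2"]

        def supports_h(setting):
            # Illustrative rule: the hypothesis h is entailed whenever a1 holds.
            return setting["a1"]

        def degree_of_support():
            support = inconsistency = 0.0
            names = list(assumptions)
            for values in product([True, False], repeat=len(names)):
                setting = dict(zip(names, values))
                p = 1.0
                for n in names:
                    p *= assumptions[n] if setting[n] else 1.0 - assumptions[n]
                if contradicts(setting):
                    inconsistency += p      # probability mass of contradictory settings
                elif supports_h(setting):
                    support += p            # mass of consistent settings entailing h
            return support / (1.0 - inconsistency)  # normalized belief in h

        dsp_h = degree_of_support()  # 0.8 / 0.92 for the numbers above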

    Uncertainty management in multidisciplinary design of critical safety systems

    Managing the uncertainty in the multidisciplinary design of safety-critical systems requires not a single approach or methodology for dealing with uncertainty but a set of different strategies and scalable computational tools (for instance, tools that exploit the computational power of cluster and grid computing). The availability of multiple tools and approaches for dealing with uncertainties allows cross-validation of the results and increases confidence in the performed analysis. This paper presents a unified theory and an integrated, open, general-purpose computational framework to deal with scarce data and with aleatory and epistemic uncertainties. It allows the different tasks necessary to manage uncertainty to be solved, such as uncertainty characterization, sensitivity analysis, uncertainty quantification, and robust design. The proposed computational framework is generally applicable to different problems in different fields and is numerically efficient and scalable, allowing for a significant reduction of the computational time required for uncertainty management and robust design. The applicability of the proposed approach is demonstrated by solving the multidisciplinary design of a critical system proposed by the NASA Langley Research Center in its multidisciplinary uncertainty quantification challenge problem.
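
    One standard way to keep aleatory and epistemic uncertainty separate, in the spirit of the tasks listed above, is a double-loop Monte Carlo scheme, sketched below. The toy performance function g, the epistemic interval for theta, and the failure threshold are illustrative assumptions and not part of the paper's framework.

        import random

        def g(x, theta):
            """Toy performance function: the system 'fails' when g exceeds a threshold."""
            return theta * x + 0.1 * x * x

        def failure_probability(theta, n_inner=10_000, threshold=2.0):
            failures = 0
            for _ in range(n_inner):
                x = random.gauss(1.0, 0.3)       # inner loop: aleatory variability
                if g(x, theta) > threshold:
                    failures += 1
            return failures / n_inner

        # Outer loop: theta is epistemic, only known to lie in an interval.
        pfs = [failure_probability(random.uniform(1.2, 1.8)) for _ in range(50)]
        pf_bounds = (min(pfs), max(pfs))         # band of plausible failure probabilities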

    Confidence-based decision-making support for multi-sensor systems

    We live in a world where computer systems are omnipresent and are connected to more and more sensors. Ranging from small individual electronic assistants like smartphones to complex autonomous robots, and from personal wearable health devices to professional eHealth frameworks, all these systems use the sensors' data in order to make appropriate decisions according to the context they measure. However, in addition to complete failures leading to a lack of data delivery, these sensors can also send bad data due to environmental influences, which can sometimes be hard for the computer system to detect when checking each sensor individually. The computer system should be able to use its set of sensors as a whole in order to mitigate the influence of malfunctioning sensors, to overcome the absence of data coming from broken sensors, and to handle possibly conflicting information coming from several sensors. In this thesis, we propose a computational model based on a two-layer software architecture to overcome this challenge. In the first layer, classification algorithms check for malfunctioning sensors and attribute a confidence value to each sensor. In the second layer, a rule-based proactive engine builds a representation of the context of the system and uses it, along with some empirical knowledge about the weaknesses of the different sensors, to further adjust this confidence value. The system then checks for conflicting data between sensors. This can be done by having several sensors that measure the same parameters, or by having multiple sensors that can be used together to calculate an estimate of a parameter given by another sensor. A confidence value is calculated for this estimate as well, based on the confidence values of the related sensors. The successive design refinement steps of our model are shown over the course of three experiments. The first two experiments, located in the eHealth domain, were used to better identify the challenges of such multi-sensor systems, while the third experiment, which consists of a virtual robot simulation, acts as a proof of concept for the semi-generic model proposed in this thesis.
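
    The two-layer idea described above can be sketched as follows. The sensors, thresholds, penalty factors, and the wheel-speed cross-check are made-up assumptions used only to show how per-sensor confidences and a conflict check might interact; they are not the thesis architecture itself.

        def layer1_confidence(reading, valid_range=(0.0, 100.0), max_jump=10.0, last=None):
            """Layer 1: per-sensor confidence from simple range and plausibility checks."""
            lo, hi = valid_range
            conf = 1.0
            if not (lo <= reading <= hi):
                conf *= 0.1                      # outside the sensor's physical range
            if last is not None and abs(reading - last) > max_jump:
                conf *= 0.5                      # implausibly fast change between readings
            return conf

        def layer2_cross_check(speed_reading, wheel_rpm, confidences,
                               wheel_circumference=2.0, tolerance=2.0):
            """Layer 2: estimate speed from wheel RPM and penalise both sources on conflict."""
            estimated_speed = wheel_rpm / 60.0 * wheel_circumference  # metres per second
            conflict = abs(estimated_speed - speed_reading) > tolerance
            if conflict:
                confidences["speed"] *= 0.5
                confidences["wheel"] *= 0.5
            return estimated_speed, conflict

        confidences = {
            "speed": layer1_confidence(12.0, last=11.5),
            "wheel": layer1_confidence(400.0, valid_range=(0.0, 3000.0), max_jump=100.0, last=395.0),
        }
        estimate, conflict = layer2_cross_check(12.0, 400.0, confidences)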

    Managing Uncertainty and Vagueness in Semantic Web

    The Semantic Web has been designed to carry out tasks without human intervention. In this context, the term machine-processable information has been introduced. In most Semantic Web tasks, we come across issues of information incompleteness, namely uncertainty and vagueness. For this reason, a method that represents uncertainty and vagueness under a common framework has to be defined. Semantic Web technologies are defined through the Semantic Web Stack and are based on a clear formal foundation. Therefore, any representation scheme should be aligned with these technologies and be formally defined. As the concept of ontologies is central to representing knowledge in the Semantic Web, any such framework is desirably built upon it. In our work, we have defined an approach for representing uncertainty and vagueness under a common framework. Uncertainty is represented through the Dempster-Shafer model, whereas vagueness is represented through Fuzzy Logic and Fuzzy Sets. To this end, we have defined a theoretical framework aimed at a combination of the classical crisp description logic ALC with a Dempster-Shafer module. As a next step, we added fuzziness to this model. Throughout our work, we have implemented metaontologies in order to represent uncertain and vague concepts, and we have then tested our methodology in real-world applications.
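
    The kind of annotation described above can be sketched in a few lines: a mass function over an assertion and its negation yields a belief/plausibility interval for uncertainty, while a membership function captures vagueness. The concept "Tall", the numbers, and the membership shape are illustrative assumptions, not the metaontology itself.

        def belief_plausibility(mass_c, mass_not_c):
            """Mass not assigned to C or not-C stays on the whole frame (ignorance)."""
            mass_theta = 1.0 - mass_c - mass_not_c
            belief = mass_c                     # evidence committed exactly to C
            plausibility = mass_c + mass_theta  # everything not committed against C
            return belief, plausibility

        def fuzzy_tall(height_cm):
            """Vagueness: a simple piecewise-linear membership function for 'Tall'."""
            if height_cm <= 170:
                return 0.0
            if height_cm >= 190:
                return 1.0
            return (height_cm - 170) / 20.0

        # Uncertain assertion Tall(john): 0.6 for Tall, 0.1 against, 0.3 unassigned.
        bel, pl = belief_plausibility(0.6, 0.1)  # certainty interval [0.6, 0.9]
        mu = fuzzy_tall(183)                     # vague membership degree 0.65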

    Automatic road network extraction from high resolution satellite imagery using spectral classification methods

    Road networks play an important role in a number of geospatial applications, such as cartographic, infrastructure planning, and traffic routing software. Automatic and semi-automatic road network extraction techniques have significantly increased the extraction rate of road networks, but automated processes still yield some erroneous and incomplete results, and costly human intervention is still required to evaluate results and correct errors. With the aim of improving the accuracy of road extraction systems, three objectives are defined in this thesis. Firstly, the study seeks to develop a flexible semi-automated road extraction system capable of extracting roads from QuickBird satellite imagery. The second objective is to integrate a variety of algorithms within the road network extraction system; the benefits of using each of these algorithms within the proposed road extraction system are illustrated. Finally, a fully automated system is proposed by incorporating a number of the algorithms investigated throughout the thesis. Dissertation (MSc), University of Pretoria, 2010.
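
    As a rough sketch of spectral classification for road candidates (not the thesis pipeline), the code below clusters per-pixel band values of a four-band image with k-means and picks a candidate cluster. The band count, number of clusters, and the "darkest cluster" heuristic are illustrative assumptions.

        import numpy as np
        from sklearn.cluster import KMeans

        def spectral_road_candidates(image, n_clusters=6):
            """image: (rows, cols, bands) array; returns a boolean road-candidate mask."""
            rows, cols, bands = image.shape
            pixels = image.reshape(-1, bands).astype(float)
            labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(pixels)
            # Crude heuristic: asphalt tends to be among the darker spectral clusters.
            cluster_means = [pixels[labels == k].mean() for k in range(n_clusters)]
            road_cluster = int(np.argmin(cluster_means))
            return (labels == road_cluster).reshape(rows, cols)

        # Example on a random stand-in for a 4-band QuickBird tile.
        fake_tile = np.random.randint(0, 2047, size=(64, 64, 4))
        mask = spectral_road_candidates(fake_tile)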