
    The Encyclopedia of Neutrosophic Researchers - vol. 1

    This is the first volume of the Encyclopedia of Neutrosophic Researchers, edited from materials offered by the authors who responded to the editor’s invitation. The authors are listed alphabetically. The introduction contains a short history of neutrosophics, together with links to the main papers and books. Neutrosophic set, neutrosophic logic, neutrosophic probability, neutrosophic statistics, neutrosophic measure, neutrosophic precalculus, neutrosophic calculus, and so on are gaining significant attention for solving many real-life problems that involve uncertainty, imprecision, vagueness, incompleteness, inconsistency, and indeterminacy. In recent years the fields of neutrosophics have been extended and applied in various areas, such as artificial intelligence, data mining, soft computing, decision making in incomplete / indeterminate / inconsistent information systems, image processing, computational modelling, robotics, medical diagnosis, biomedical engineering, investment problems, economic forecasting, social science, and humanistic and practical achievements.
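
    As a brief mathematical aside (standard background on the central notion, stated in its commonly used single-valued form and not taken from any particular paper in the volume), a neutrosophic set assigns independent truth, indeterminacy, and falsity degrees:

        % Background sketch only: a single-valued neutrosophic set A over a universe X.
        \[
          A = \{\, \langle x,\; T_A(x),\; I_A(x),\; F_A(x) \rangle : x \in X \,\},
          \qquad T_A(x),\, I_A(x),\, F_A(x) \in [0,1],
        \]
        % The three degrees are independent of one another, so only
        \[
          0 \;\le\; T_A(x) + I_A(x) + F_A(x) \;\le\; 3
        \]
        % is required, in contrast to fuzzy or intuitionistic fuzzy sets.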

    Stable non-standard imprecise probabilities

    Stability arises as the consistency criterion in a betting interpretation for hyperreal imprecise previsions, that is, imprecise previsions (and probabilities) which may take infinitesimal values. The purpose of this work is to extend the notion of stable coherence introduced in [8] to conditional hyperreal imprecise probabilities. Our investigation extends the de Finetti-Walley operational characterisation of (imprecise) previsions to conditioning on events which are considered "practically impossible" but not "logically impossible".
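
    For orientation, the classical real-valued betting criterion this work builds on can be stated as follows; this is textbook Walley-style coherence for lower previsions, given as background, not the stable or hyperreal extension developed in the paper:

        % Background only: coherence of a lower prevision \underline{P} on a set of
        % gambles \mathcal{K} over a possibility space \Omega. \underline{P} is coherent
        % iff for all n \ge 0, all f_0, f_1, \dots, f_n \in \mathcal{K}, and all
        % non-negative reals \lambda_0, \lambda_1, \dots, \lambda_n,
        \[
          \sup_{\omega \in \Omega}
          \left[ \sum_{i=1}^{n} \lambda_i \bigl( f_i(\omega) - \underline{P}(f_i) \bigr)
                 - \lambda_0 \bigl( f_0(\omega) - \underline{P}(f_0) \bigr) \right] \;\ge\; 0.
        \]
        % Taking \lambda_0 = 0 yields the weaker "avoiding sure loss" condition. Roughly,
        % the paper moves from the reals to the hyperreals so that conditioning on events
        % of infinitesimal (but nonzero) probability, the "practically impossible" ones,
        % remains well-behaved.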

    Automatic Generation of Complex Ontology Alignments

    The Linked Open Data (LOD) cloud is composed of data repositories. The data in these repositories are described by vocabularies, also called ontologies. Each ontology has its own terminology and model, which makes the ontologies heterogeneous. To make the ontologies and the data they describe interoperable, ontology alignments establish correspondences, or links, between their entities. Many ontology matching systems generate simple correspondences, i.e., they link one entity to another. However, to overcome ontology heterogeneity, more expressive correspondences are sometimes needed. Finding this kind of correspondence is a tedious task that should be automated. In this thesis, an automatic complex matching approach based on a user's knowledge needs and common instances is proposed. The field of complex alignment is relatively recent, and little work addresses the evaluation of such alignments. To fill this gap, an automatic complex alignment evaluation system based on instance comparison is proposed, complemented by an artificial dataset on the conference domain that extends a well-known alignment evaluation dataset.
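
    The distinction between simple and complex correspondences, and the kind of instance-based comparison such an evaluation can rely on, can be illustrated with a minimal sketch; all entity names, instance sets, and the overlap score below are hypothetical and not taken from the thesis or its datasets:

        # Illustrative sketch only: a minimal instance-overlap score of the kind an
        # instance-based evaluation of (complex) correspondences can rely on.
        # Entity names, instance sets, and values are hypothetical.

        def overlap_confidence(instances_source: set, instances_target: set) -> float:
            """Jaccard overlap between the instances covered by the two members
            of a candidate correspondence (simple or complex)."""
            if not instances_source and not instances_target:
                return 0.0
            return len(instances_source & instances_target) / len(instances_source | instances_target)

        # A simple correspondence links one entity to one entity, e.g.
        #   o1:Paper  <->  o2:Article
        # A complex correspondence links an entity to a constructed expression, e.g.
        #   o1:AcceptedPaper  <->  o2:Paper and (o2:hasDecision some o2:Acceptance)
        # Both kinds can be scored the same way once their instances are retrieved:
        accepted_papers_o1 = {"paper12", "paper37", "paper58"}                 # instances of o1:AcceptedPaper
        accepted_papers_o2 = {"paper12", "paper37", "paper58", "paper99"}      # instances of the o2 expression

        score = overlap_confidence(accepted_papers_o1, accepted_papers_o2)
        print(f"candidate correspondence confidence: {score:.2f}")  # 0.75 in this toy example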

    Formal Methods of Argumentation as Models of Engineering Design Decisions and Processes

    Complex engineering projects comprise many individual design decisions. As these decisions are made over the course of months, even years, and across different teams of engineers, it is common for them to be based on different, possibly conflicting assumptions. The longer these inconsistencies go undetected, the costlier they are to resolve, so it is important to spot them as early as possible. There is currently no software aimed explicitly at detecting inconsistencies in interrelated design decisions; this thesis is a step towards the development of such tools. We use formal methods of argumentation, a branch of artificial intelligence, as the foundation of a logical model of design decisions capable of handling inconsistency. The thesis has three parts. First, argumentation is used to model the pros and cons of individual decisions and to reason about the possible worlds in which these arguments are justified. In the second part we study sequences of interrelated decisions. We identify cases where the arguments in one decision invalidate the justification for another decision, and develop a measure of the impact that choosing a specific option has on the consistency of the overall design. The final part of the thesis is concerned with non-deductive arguments, which are used in design debates, for example to draw analogies between past and current problems. Our model integrates deductive and non-deductive arguments side by side. This work is supported by our collaboration with the engineering department of Queen’s University Belfast and an industrial partner. The thesis contains two case studies of realistic problems, and parts of it were implemented as software prototypes. We also give theoretical results demonstrating the internal consistency of our model.
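
    As a hedged illustration of the kind of formal machinery involved (standard Dung-style abstract argumentation, not the thesis's own model), the sketch below computes which arguments are justified under the grounded semantics; the arguments and attacks are made up for the example:

        # Illustrative sketch (assumed background, not taken from the thesis): computing
        # the grounded extension of a Dung-style abstract argumentation framework, the
        # kind of semantics used to decide which pro/con arguments are justified.

        def grounded_extension(arguments, attacks):
            """arguments: iterable of argument names; attacks: set of (attacker, target)."""
            attackers_of = {a: {x for (x, y) in attacks if y == a} for a in arguments}
            extension = set()
            changed = True
            while changed:
                changed = False
                for a in arguments:
                    if a in extension:
                        continue
                    # a is acceptable if every attacker of a is itself attacked by the extension
                    if all(any((d, b) in attacks for d in extension) for b in attackers_of[a]):
                        extension.add(a)
                        changed = True
            return extension

        # Toy design debate: "use_steel" is attacked by "too_heavy",
        # which is in turn attacked by "new_alloy_available".
        args = {"use_steel", "too_heavy", "new_alloy_available"}
        atts = {("too_heavy", "use_steel"), ("new_alloy_available", "too_heavy")}
        print(sorted(grounded_extension(args, atts)))  # ['new_alloy_available', 'use_steel']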

    Proceedings of the 5th International Workshop "What can FCA do for Artificial Intelligence?", FCA4AI 2016 (co-located with ECAI 2016, The Hague, Netherlands, August 30th 2016)

    These are the proceedings of the fifth edition of the FCA4AI workshop (http://www.fca4ai.hse.ru/). Formal Concept Analysis (FCA) is a mathematically well-founded theory aimed at data analysis and classification that can be used for many purposes, especially for Artificial Intelligence (AI) needs. The objective of the FCA4AI workshop is to investigate two main issues: how FCA can support various AI activities (knowledge discovery, knowledge representation and reasoning, learning, data mining, NLP, information retrieval), and how FCA can be extended in order to help AI researchers solve new and complex problems in their domains. Accordingly, topics of interest are related to the following: (i) extensions of FCA for AI: pattern structures, projections, abstractions; (ii) knowledge discovery based on FCA: classification, data mining, pattern mining, functional dependencies, biclustering, stability, visualization; (iii) knowledge processing based on concept lattices: modeling, representation, reasoning; (iv) application domains: natural language processing, information retrieval, recommendation, mining of the web of data and of social networks, etc.
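
    For readers unfamiliar with FCA, the basic construction the workshop builds on can be shown on a toy example; the context, objects, and attributes below are made up and purely illustrative:

        # Illustrative sketch (toy example, not from the proceedings): enumerating the
        # formal concepts of a small binary context. A formal concept is a pair
        # (extent, intent) such that extent' = intent and intent' = extent under the
        # usual derivation operators.

        from itertools import chain, combinations

        # Toy context: objects x attributes (True means "object has attribute").
        objects = ["doc1", "doc2", "doc3"]
        attributes = ["ai", "fca", "nlp"]
        incidence = {
            ("doc1", "ai"): True, ("doc1", "fca"): True,
            ("doc2", "ai"): True, ("doc2", "nlp"): True,
            ("doc3", "fca"): True,
        }

        def intent(objs):
            """Attributes shared by all objects in objs."""
            return frozenset(a for a in attributes
                             if all(incidence.get((o, a), False) for o in objs))

        def extent(attrs):
            """Objects having all attributes in attrs."""
            return frozenset(o for o in objects
                             if all(incidence.get((o, a), False) for a in attrs))

        # Brute force over all attribute subsets; fine for a toy context.
        concepts = set()
        for attrs in chain.from_iterable(combinations(attributes, r) for r in range(len(attributes) + 1)):
            e = extent(frozenset(attrs))
            concepts.add((e, intent(e)))

        for e, i in sorted(concepts, key=lambda c: (len(c[0]), sorted(c[0]))):
            print(sorted(e), sorted(i))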

    Efficient Algorithms for Social Influence Problems in Large Networks

    In recent years, a dizzying explosion of data and information has resulted from social networks with millions to billions of users, such as Facebook, YouTube, Twitter, and LinkedIn. Users rely on online social networks (OSNs) to quickly exchange information, communicate with other users, and keep their information up to date. The challenge of spreading information on social networks that arises in practice requires effective information management solutions, such as disseminating useful information, maximizing the influence of information transmission, and preventing disinformation, rumors, and viruses from spreading. Motivated by these issues, we investigate the problem of information diffusion on OSNs. We study this problem based on two models, Independent Cascade (IC) and Linear Threshold (LT), and on classical Influence Maximization (IM) in online social networks. In addition, we investigate various aspects of IM problems, such as budget variations, topics of interest, multiple competitors, and others. Moreover, we apply the theory of combinatorial optimization to one of the current concerns in social networks: maximizing influence over groups and topics. In general, the main goals of the Ph.D. thesis proposal are as follows.
    1. We investigate the Multi-Threshold problem for IM, a variant of the IM problem with threshold constraints. We propose an efficient algorithm that performs IM for multiple thresholds in a social network; in particular, we develop a novel algorithmic framework that can use the solution for a smaller threshold to find those of larger ones.
    2. We study the Group Influence Maximization problem and introduce an efficient group influence maximization algorithm, which offers more advantages than maximizing each node's influence in the network, using a novel sampling technique to estimate the epsilon group function. We also devise an approximation algorithm to evaluate multiple candidate solutions with a theoretical guarantee.
    3. We investigate an approach to the Influence Maximization problem with k topics under constraints in a social network. More specifically, we study a streaming algorithm combined with an optimization algorithm to improve the approximation quality and theoretical guarantees in terms of solution quality and running time.
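
    To make the underlying IM setting concrete, the sketch below shows the standard greedy baseline with Monte Carlo simulation of the Independent Cascade model; this is generic background, not the thesis's algorithms, and the graph, probabilities, and budget are toy values:

        # Illustrative sketch (standard greedy IM baseline, not the thesis's methods):
        # estimate influence spread under the Independent Cascade (IC) model by Monte
        # Carlo simulation and pick seed nodes greedily.

        import random

        def ic_spread(graph, seeds, trials=200):
            """Average number of nodes activated under IC, given per-edge probabilities."""
            total = 0
            for _ in range(trials):
                active, frontier = set(seeds), list(seeds)
                while frontier:
                    node = frontier.pop()
                    for neighbor, p in graph.get(node, []):
                        if neighbor not in active and random.random() < p:
                            active.add(neighbor)
                            frontier.append(neighbor)
                total += len(active)
            return total / trials

        def greedy_im(graph, k):
            """Greedily add the node with the largest estimated marginal spread gain."""
            seeds = set()
            for _ in range(k):
                best = max((n for n in graph if n not in seeds),
                           key=lambda n: ic_spread(graph, seeds | {n}))
                seeds.add(best)
            return seeds

        # Toy directed graph: node -> list of (neighbor, activation probability).
        toy_graph = {
            "a": [("b", 0.4), ("c", 0.4)],
            "b": [("d", 0.3)],
            "c": [("d", 0.3), ("e", 0.2)],
            "d": [("e", 0.5)],
            "e": [],
        }
        print(greedy_im(toy_graph, k=2))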