    Inconsistency and Incompleteness in Relational Databases and Logic Programs

    The aim of this thesis is to study the role played by negation in databases and to develop data models that can handle inconsistent and incomplete information. We develop models that also allow incompleteness through disjunctive information, under both the closed-world assumption (CWA) and the open-world assumption (OWA), in relational databases. In the area of logic programming, extended logic programs allow explicit representation of negative information. As a result, a number of extended logic programs have an inconsistent semantics. We present a translation of extended logic programs to normal logic programs that is more tolerant of inconsistencies. Extended logic programs have also been widely used to compute the repairs of an inconsistent database. We present some preliminary ideas on how source information can be incorporated into the repair program in order to produce a subset of the set of all repairs, based on a preference for certain sources over others.
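    A minimal sketch of the repair idea in Python, using a hypothetical one-relation database and a key constraint (the schema and data are illustrative, not from the thesis): repairs under tuple deletion are exactly the maximal consistent subsets of the facts.

        from itertools import combinations

        # Toy inconsistent database: one relation emp(name, dept) with a
        # key constraint on name (each employee has exactly one department).
        facts = {("alice", "sales"), ("alice", "hr"), ("bob", "it")}

        def consistent(db):
            names = [name for (name, _dept) in db]
            return len(names) == len(set(names))

        def repairs(db):
            """All maximal consistent subsets of db (repairs by tuple deletion)."""
            candidates = [set(s)
                          for k in range(len(db) + 1)
                          for s in combinations(db, k)
                          if consistent(set(s))]
            return [s for s in candidates if not any(s < t for t in candidates)]

        for r in repairs(facts):
            print(sorted(r))
        # Two repairs: one keeps ("alice", "sales"), the other ("alice", "hr");
        # ("bob", "it") survives in both.

    A source-aware preference of the kind sketched in the thesis would then select a subset of these repairs, for example only those retaining the facts contributed by a trusted source.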

    More than the sum of its parts – pattern mining, neural networks, and how they complement each other

    In this thesis we explore pattern mining and deep learning. Although often seen as orthogonal, these fields complement each other, and we propose to combine them to gain from each other’s strengths. We first show how to efficiently discover succinct and non-redundant sets of patterns that provide insight into data beyond conjunctive statements. We leverage the interpretability of such patterns to unveil how and which information flows through neural networks, as well as what characterizes their decisions. Conversely, we show how to combine continuous optimization with pattern discovery, proposing a neural network that directly encodes discrete patterns, which allows us to apply pattern mining at a scale orders of magnitude larger than previously possible. Large neural networks are, however, exceedingly expensive to train, for which ‘lottery tickets’ – small, well-trainable sub-networks in randomly initialized neural networks – offer a remedy. We identify theoretical limitations of strong tickets and overcome them by equipping these tickets with the property of universal approximation. To analyze whether limitations in ticket sparsity are algorithmic or fundamental, we propose a framework to plant and hide lottery tickets. With novel ticket benchmarks we then conclude that the limitation is likely algorithmic, encouraging further developments, for which our framework offers a means to measure progress.
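    A minimal sketch of the pruning step behind lottery tickets, assuming plain NumPy and a single weight matrix (the shapes and sparsity level are illustrative): a ticket is obtained by masking out the smallest-magnitude trained weights and rewinding the survivors to their initial values.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical single layer: weights at initialization and after training.
        w_init = rng.normal(size=(256, 128))
        w_trained = w_init + 0.1 * rng.normal(size=w_init.shape)

        def ticket_mask(trained, sparsity=0.9):
            """Keep the (1 - sparsity) fraction of largest-magnitude weights."""
            threshold = np.quantile(np.abs(trained), sparsity)
            return (np.abs(trained) >= threshold).astype(trained.dtype)

        mask = ticket_mask(w_trained, sparsity=0.9)
        ticket = mask * w_init  # rewind surviving weights to their initial values

        print(f"kept {mask.mean():.1%} of the weights")
        # The ticket is then retrained with the mask held fixed; a 'strong'
        # ticket is one that performs well even without retraining.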

    Advances and Applications of Dezert-Smarandache Theory (DSmT), Vol. 1

    The Dezert-Smarandache Theory (DSmT) of plausible and paradoxical reasoning is a natural extension of the classical Dempster-Shafer Theory (DST), but with fundamental differences. DSmT makes it possible to formally combine any types of independent sources of information represented in terms of belief functions, but it is mainly focused on the fusion of uncertain, highly conflicting, and imprecise quantitative or qualitative sources of evidence. DSmT can solve complex static or dynamic fusion problems beyond the limits of the DST framework, especially when conflicts between sources become large and when the refinement of the frame of the problem under consideration becomes inaccessible because of the vague, relative, and imprecise nature of its elements. DSmT is used in cybernetics, robotics, medicine, the military, and other engineering applications where the fusion of sensors' information is required.
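    As a point of reference for what DSmT generalizes, here is a minimal numeric sketch of Dempster's classical rule of combination on a two-element frame (toy masses, for illustration only); DSmT departs from this rule by also allowing mass on intersections of hypotheses and by redistributing conflict differently.

        # Two independent sources assign belief masses over the frame {A, B};
        # "AB" denotes the ignorance A-or-B.
        m1 = {"A": 0.6, "B": 0.1, "AB": 0.3}
        m2 = {"A": 0.2, "B": 0.5, "AB": 0.3}

        def intersect(x, y):
            if x == "AB":
                return y
            if y == "AB":
                return x
            return x if x == y else None  # None = empty intersection (conflict)

        def dempster(m1, m2):
            combined, conflict = {}, 0.0
            for x, px in m1.items():
                for y, py in m2.items():
                    z = intersect(x, y)
                    if z is None:
                        conflict += px * py
                    else:
                        combined[z] = combined.get(z, 0.0) + px * py
            # Dempster's normalization spreads the conflicting mass over the rest;
            # it is exactly this step that behaves badly under high conflict.
            return {z: p / (1.0 - conflict) for z, p in combined.items()}, conflict

        result, k = dempster(m1, m2)
        print(result, f"conflict = {k:.2f}")  # conflict = 0.32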

    Advances and Applications of DSmT for Information Fusion

    This book is devoted to an emerging branch of information fusion based on a new approach to modelling the fusion problem when the information provided by the sources is both uncertain and (highly) conflicting. This approach, known in the literature as DSmT (Dezert-Smarandache Theory), proposes new, useful rules of combination.

    Value Co-Creation in Smart Services: A Functional Affordances Perspective on Smart Personal Assistants

    In the realm of smart services, smart personal assistants (SPAs) have become a popular medium for value co-creation between service providers and users. The market success of SPAs is largely based on their innovative material properties, such as natural language user interfaces, machine learning-powered request handling and service provision, and anthropomorphism. In different combinations, these properties offer users entirely new ways to intuitively and interactively achieve their goals and thus co-create value with service providers. But how does the nature of the SPA shape value co-creation processes? In this paper, we look through a functional affordances lens to theorize about the effects of different types of SPAs (i.e., with different combinations of material properties) on users’ value co-creation processes. Specifically, we collected SPAs from research and practice by reviewing scientific literature and web resources, developed a taxonomy of SPAs’ material properties, and performed a cluster analysis to group SPAs of a similar nature. We then derived 2 general and 11 cluster-specific propositions on how different material properties of SPAs can yield different affordances for value co-creation. With our work, we point out that smart services require researchers and practitioners to fundamentally rethink value co-creation as well as revise affordance theory to address the dynamic nature of smart technology as a service counterpart.
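    A minimal sketch of the clustering step, assuming hypothetical 0/1 codings of SPAs over the taxonomy's dimensions and scikit-learn (the dimensions and data below are invented for illustration):

        import numpy as np
        from sklearn.cluster import AgglomerativeClustering

        # Each row codes one SPA over hypothetical taxonomy dimensions, e.g.
        # [voice interface, ML request handling, embodiment, proactive behavior].
        spas = np.array([
            [1, 1, 0, 0],  # plain voice assistant
            [1, 1, 1, 1],  # embodied, proactive voice assistant
            [0, 1, 0, 0],  # text-only chatbot
            [0, 1, 1, 0],  # text chatbot with an avatar
            [1, 0, 0, 0],  # scripted voice menu
        ])

        labels = AgglomerativeClustering(n_clusters=3).fit_predict(spas)
        print(labels)  # SPAs sharing a label have similar material properties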

    A new approach of top-down induction of decision trees for knowledge discovery

    Top-down induction of decision trees is the most popular technique for classification in the field of data mining and knowledge discovery. Quinlan developed the basic decision-tree induction algorithm, ID3 (1984), and extended it to C4.5 (1993). There is a large body of work on decision-making nodes that test a single attribute (so-called first-order decisions). Murphy and Pazzani (1991) addressed multiple-attribute conditions at decision-making nodes and showed that higher-order decisions generate smaller decision trees with better accuracy. However, searching the combinations of multiple-attribute decisions is NP-complete.

    We develop a new algorithm for second-order decision-tree induction (SODI) for nominal attributes. The induction rules of first-order decision trees are combined by 'AND' logic only, whereas those of SODI consist of 'AND', 'OR', and 'OTHERWISE' logic. SODI generates more accurate results and smaller decision trees than first-order decision-tree inductions.

    Quinlan used information gain via the VC-dimension (Vapnik-Chervonenkis; Vapnik, 1995) to cluster the observed values of each numerical attribute. However, many researchers have discovered the weaknesses of VC-dimension analysis. Bennett (1997) applied support vector machines (SVMs) to decision-tree induction. We suggest a heuristic algorithm, SVMM (SVM for Multi-category), that combines a TDIDT scheme with SVMs. This thesis also addresses how to solve multiclass classification problems.

    Our final goal for this thesis is IDSS (Induction of Decision Trees using SODI and SVMM): we address how to combine SODI and SVMM in top-down decision-tree induction so as to minimize the generalized penalty cost.
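    A minimal sketch of the ID3-style information gain that first-order induction uses to pick a single splitting attribute (plain Python, toy data):

        import math
        from collections import Counter

        def entropy(labels):
            n = len(labels)
            return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

        def information_gain(rows, labels, attr):
            """Gain of a first-order decision: split on one nominal attribute."""
            n = len(labels)
            gain = entropy(labels)
            for value in {row[attr] for row in rows}:
                subset = [lab for row, lab in zip(rows, labels) if row[attr] == value]
                gain -= (len(subset) / n) * entropy(subset)
            return gain

        rows = [{"outlook": "sunny"}, {"outlook": "sunny"},
                {"outlook": "rain"}, {"outlook": "rain"}]
        labels = ["no", "no", "yes", "yes"]
        print(information_gain(rows, labels, "outlook"))  # 1.0: a perfect split

    A second-order scheme such as SODI would instead score combinations of attribute conditions joined by 'AND' and 'OR', trading a larger search space for smaller and more accurate trees.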

    Multispace & Multistructure. Neutrosophic Transdisciplinarity (100 Collected Papers of Sciences), Vol. IV

    The fourth volume in my book series of “Collected Papers” includes 100 published and unpublished articles, notes, (preliminary) drafts containing ideas to be further investigated, scientific souvenirs, scientific blogs, project proposals, small experiments, solved and unsolved problems and conjectures, updated or alternative versions of previous papers, short or long humanistic essays, and letters to the editors, all collected over the previous three decades (1980-2010), though most are from the last decade (2000-2010); some were lost and found, while others are extended, diversified, or improved versions. This is an eclectic tome of 800 pages with papers in various fields of science, listed alphabetically: astronomy, biology, calculus, chemistry, computer programming codification, economics and business and politics, education and administration, game theory, geometry, graph theory, information fusion, neutrosophic logic and set, non-Euclidean geometry, number theory, paradoxes, philosophy of science, psychology, quantum physics, scientific research methods, and statistics. It reflects my long preoccupation and collaboration, as author, co-author, translator, co-translator, and editor, with many scientists from around the world. Many topics in this book are incipient and need to be expanded in future explorations.

    Hinge-Loss Markov Random Fields and Probabilistic Soft Logic: A Scalable Approach to Structured Prediction

    A fundamental challenge in developing impactful artificial intelligence technologies is balancing the ability to model rich, structured domains with the ability to scale to big data. Many important problem areas are both richly structured and large scale, from social and biological networks, to knowledge graphs and the Web, to images, video, and natural language. In this thesis I introduce two new formalisms for modeling structured data, distinguished from previous approaches by their ability to both capture rich structure and scale to big data. The first, hinge-loss Markov random fields (HL-MRFs), is a new kind of probabilistic graphical model that generalizes different approaches to convex inference. I unite three views of inference from the randomized algorithms, probabilistic graphical models, and fuzzy logic communities, showing that all three views lead to the same inference objective. I then derive HL-MRFs by generalizing this unified objective. The second new formalism, probabilistic soft logic (PSL), is a probabilistic programming language that makes HL-MRFs easy to define, refine, and reuse for relational data. PSL uses a syntax based on first-order logic to compactly specify complex models. I next introduce an algorithm for inferring most-probable variable assignments (MAP inference) for HL-MRFs that is extremely scalable, much more so than commercially available software, because it uses message passing to leverage the sparse dependency structures common in inference tasks. I then show how to learn the parameters of HL-MRFs using a number of learning objectives. The learned HL-MRFs are as accurate as traditional, discrete models, but much more scalable. To enable HL-MRFs and PSL to capture even richer dependencies, I then extend learning to support latent variables, i.e., variables without training labels. To overcome the bottleneck of repeated inferences required during learning, I introduce paired-dual learning, which interleaves inference and parameter updates. Paired-dual learning learns accurate models and is also scalable, often completing before traditional methods make even one parameter update. Together, these algorithms enable HL-MRFs and PSL to model rich, structured data at scales not previously possible.
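    A minimal sketch of why MAP inference in an HL-MRF is convex, using the Ɓukasiewicz relaxation of a single PSL-style rule (the rule, weights, and observed values are invented for illustration; PSL's actual solver uses consensus message passing rather than this toy gradient loop):

        # Rule: Friends(a, b) & Votes(a, p) -> Votes(b, p).
        # Its Lukasiewicz relaxation yields the hinge-loss potential
        # max(0, f + va - 1 - vb), convex in the unknown truth value vb in [0, 1].
        f, va = 0.9, 0.8   # observed truth values (hypothetical)
        weight = 2.0       # rule weight
        prior = 1.0        # weight of a squared prior pulling vb toward 0.5

        def energy(vb):
            return weight * max(0.0, f + va - 1.0 - vb) + prior * (vb - 0.5) ** 2

        # MAP inference: minimize the convex energy by projected (sub)gradient descent.
        vb = 0.0
        for _ in range(1000):
            grad = -weight * (f + va - 1.0 - vb > 0) + 2 * prior * (vb - 0.5)
            vb = min(1.0, max(0.0, vb - 0.01 * grad))

        print(f"MAP value of Votes(b, p): {vb:.2f}")  # about 0.70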