325 research outputs found

    The Hyperdimensional Transform: a Holographic Representation of Functions

    Full text link
    Integral transforms are invaluable mathematical tools to map functions into spaces where they are easier to characterize. We introduce the hyperdimensional transform as a new kind of integral transform. It converts square-integrable functions into noise-robust, holographic, high-dimensional representations called hyperdimensional vectors. The central idea is to approximate a function by a linear combination of random functions. We formally introduce a set of stochastic, orthogonal basis functions and define the hyperdimensional transform and its inverse. We discuss general transform-related properties such as its uniqueness, approximation properties of the inverse transform, and the representation of integrals and derivatives. The hyperdimensional transform offers a powerful, flexible framework that connects closely with other integral transforms, such as the Fourier, Laplace, and fuzzy transforms. Moreover, it provides theoretical foundations and new insights for the field of hyperdimensional computing, a computing paradigm that is rapidly gaining attention for efficient and explainable machine learning algorithms, with potential applications in statistical modelling and machine learning. In addition, we provide straightforward and easily understandable code, which can function as a tutorial and allows for the reproduction of the demonstrated examples, from computing the transform to solving differential equations.
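    The following is a minimal sketch of the central idea as stated above: approximate a square-integrable function by a linear combination of random basis functions and treat the resulting coefficient vector as its high-dimensional representation. The random-Fourier-style basis, the grid, and the least-squares fit are illustrative assumptions, not the construction defined in the paper.

        # Minimal sketch (not the paper's construction): encode a function as the
        # least-squares coefficients of a linear combination of random basis
        # functions, then reconstruct it from that coefficient vector.
        import numpy as np

        rng = np.random.default_rng(0)
        D = 2000                                 # dimensionality of the representation
        xs = np.linspace(0.0, 1.0, 400)          # evaluation grid on [0, 1]

        # Random (here: random-Fourier-style) basis functions -- an illustrative choice.
        freqs = rng.normal(scale=10.0, size=D)
        phases = rng.uniform(0.0, 2 * np.pi, size=D)
        Phi = np.cos(np.outer(xs, freqs) + phases) / np.sqrt(D)   # (400, D) design matrix

        f = np.sin(2 * np.pi * xs) + 0.5 * xs**2  # target square-integrable function

        # "Transform": coefficients of the best linear combination (least squares).
        coeffs, *_ = np.linalg.lstsq(Phi, f, rcond=None)

        # "Inverse transform": reconstruct the function from the coefficient vector.
        f_hat = Phi @ coeffs
        print("max reconstruction error:", np.max(np.abs(f - f_hat)))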

    Neutrosophic SuperHyperAlgebra and New Types of Topologies

    Get PDF
    In general, a system S (which may be a company, association, institution, society, country, etc.) is formed by sub-systems S_i { or P(S), the powerset of S }, and each sub-system S_i is formed by sub-sub-systems S_ij { or P(P(S)) = P^2(S) }, and so on. That is why the n-th PowerSet of a Set S, defined recursively and denoted by P^n(S) = P(P^{n-1}(S)), was introduced: to better describe the organization of people, beings, objects, etc. in our real world. The n-th PowerSet was used in defining the SuperHyperOperation and SuperHyperAxiom, and their corresponding Neutrosophic SuperHyperOperation and Neutrosophic SuperHyperAxiom, in order to build the SuperHyperAlgebra and Neutrosophic SuperHyperAlgebra. In general, in any field of knowledge, one in fact encounters SuperHyperStructures. Also, six new types of topologies have been introduced in recent years (2019-2022): Refined Neutrosophic Topology, Refined Neutrosophic Crisp Topology, NeutroTopology, AntiTopology, SuperHyperTopology, and Neutrosophic SuperHyperTopology.
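    For illustration only, here is a small sketch of the recursive n-th PowerSet P^n(S) = P(P^{n-1}(S)) referenced above; the helper names are hypothetical, and since the size grows as an iterated exponential, only tiny sets and small n are feasible.

        # Minimal sketch of the recursive n-th powerset P^n(S) = P(P^{n-1}(S)).
        # Sets of sets are modelled with frozenset so they remain hashable.
        from itertools import chain, combinations

        def powerset(s):
            """All subsets of s, each returned as a frozenset."""
            items = list(s)
            return frozenset(
                frozenset(c) for c in chain.from_iterable(
                    combinations(items, r) for r in range(len(items) + 1)
                )
            )

        def nth_powerset(s, n):
            """P^n(S): apply the powerset operator n times (P^0(S) = S)."""
            for _ in range(n):
                s = powerset(s)
            return s

        S = frozenset({"a", "b"})
        print(len(nth_powerset(S, 1)))   # |P(S)|   = 4
        print(len(nth_powerset(S, 2)))   # |P^2(S)| = 2**4 = 16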

    Fuzzy Systems

    Get PDF
    This book presents some recent specialized works of theoretical study in the domain of fuzzy systems. Over eight sections and fifteen chapters, the volume addresses fuzzy systems concepts and promotes their practical application in the following thematic areas: fuzzy mathematics, decision making, clustering, adaptive neural fuzzy inference systems, control systems, process monitoring, green infrastructure, and medicine. The studies published in the book develop new theoretical concepts that improve the properties and performance of fuzzy systems. This book is a useful resource for specialists, engineers, professors, and students.

    Startup’s critical failure factors dynamic modeling using FCM

    Get PDF
    The emergence of startups and their influence on a country's economic growth has become a significant concern for governments. The failure of these ventures leads to a substantial depletion of financial resources and workforce, with detrimental effects on a country's economic climate. At various stages of a startup's lifecycle, numerous factors can affect its growth and lead to failure. Scholars and authors have primarily directed their attention toward studying the successes of these ventures. A review of previous research on critical failure factors (CFFs) reveals a dearth of studies that comprehensively investigate all failure factors and their interdependent influences. This study investigates and categorizes the failure factors across the various stages of a startup's life cycle to provide deeper insight into how they might interact and reinforce one another. Drawing on expert perspectives, the authors construct fuzzy cognitive maps (FCMs) to visualize the CFFs within entrepreneurial ventures and examine these factors' influence across the four growth stages of a venture. Our primary aim is to construct a model that captures the complexities and uncertainties surrounding startup failure, unveiling the concealed interconnections among CFFs. The FCM model empowers entrepreneurs to anticipate potential failures under diverse scenarios based on the dynamic behavior of these factors. The proposed model equips entrepreneurs and decision-makers with a comprehensive understanding of the collective influence exerted by various factors on the failure of entrepreneurial ventures.
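    The sketch below illustrates, with placeholder concepts and weights that are assumptions for illustration rather than the expert-elicited values of this study, how a fuzzy cognitive map propagates activations so that a failure scenario can be simulated to a steady state.

        # Minimal fuzzy cognitive map (FCM) sketch; concepts and weights are placeholders.
        import numpy as np

        concepts = ["funding shortage", "team conflict", "product-market misfit", "failure risk"]

        # W[i, j]: causal influence of concept i on concept j, in [-1, 1].
        W = np.array([
            [0.0, 0.3, 0.0, 0.6],
            [0.0, 0.0, 0.4, 0.5],
            [0.2, 0.0, 0.0, 0.7],
            [0.0, 0.0, 0.0, 0.0],
        ])

        def sigmoid(x, lam=1.0):
            return 1.0 / (1.0 + np.exp(-lam * x))

        def simulate(a0, steps=20):
            """Iterate a(t+1) = sigmoid(a(t) + a(t) @ W) until activations settle."""
            a = np.asarray(a0, dtype=float)
            for _ in range(steps):
                a = sigmoid(a + a @ W)
            return a

        # Scenario: strong funding shortage and moderate team conflict at the outset.
        final = simulate([0.9, 0.5, 0.1, 0.0])
        for name, value in zip(concepts, final):
            print(f"{name:>24}: {value:.2f}")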

    Advances in Reinforcement Learning

    Get PDF
    Reinforcement Learning (RL) is a very dynamic area in terms of theory and application. This book brings together many different aspects of current research on the several fields associated with RL, a field that has been growing rapidly and producing a wide variety of learning algorithms for different applications. Comprising 24 chapters, it covers a broad variety of topics in RL and their application in autonomous systems. A set of chapters provides a general overview of RL, while the other chapters focus mostly on applications of RL paradigms: Game Theory, Multi-Agent Theory, Robotics, Networking Technologies, Vehicular Navigation, Medicine, and Industrial Logistics.

    A Systematic Survey on Deep Generative Models for Graph Generation

    Full text link
    Graphs are important data representations for describing objects and their relationships, and they appear in a wide diversity of real-world scenarios. As one of the critical problems in this area, graph generation concerns learning the distribution of given graphs and generating novel graphs. Owing to their wide range of applications, generative models for graphs have a rich history; traditionally, however, they were hand-crafted and capable of modeling only a few statistical properties of graphs. Recent advances in deep generative models for graph generation are an important step towards improving the fidelity of generated graphs and pave the way for new kinds of applications. This article provides an extensive overview of the literature in the field of deep generative models for graph generation. Firstly, the formal definition of deep generative models for graph generation and the necessary preliminary knowledge are provided. Secondly, two taxonomies of deep generative models, for unconditional and conditional graph generation respectively, are proposed, and the existing works in each are compared and analyzed. After that, an overview of the evaluation metrics in this specific domain is provided. Finally, the applications that deep graph generation enables are summarized and five promising future research directions are highlighted.
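    As a point of contrast with the deep models surveyed, the sketch below shows a classic hand-crafted graph generative model, the Erdős–Rényi model, which is fitted to and reproduces only a single statistic of the observed graphs (edge density); the example graph and names are made up for illustration.

        # Illustration of a traditional, hand-crafted graph generative model
        # (Erdos-Renyi): it captures only edge density, the kind of limitation
        # that deep graph generators are meant to overcome.
        import random

        def erdos_renyi(n, p, seed=None):
            """Sample an undirected graph on n nodes; each edge appears with probability p."""
            rng = random.Random(seed)
            return [(i, j) for i in range(n) for j in range(i + 1, n) if rng.random() < p]

        # "Fit" the model to an observed graph: estimate p from its edge density.
        observed_edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
        n_nodes = 4
        p_hat = len(observed_edges) / (n_nodes * (n_nodes - 1) / 2)

        # Generate a "novel" graph from the learned distribution.
        print(erdos_renyi(n_nodes, p_hat, seed=42))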

    Full Issue

    Get PDF

    Inquisitive Pattern Recognition

    Get PDF
    The Department of Defense and the Department of the Air Force have funded automatic target recognition for several decades with varied success. Automatic target recognition is founded upon pattern recognition. In this work, we present new pattern recognition concepts, specifically in the area of classification, and propose new techniques that allow one to determine when a classifier is being arrogant. Clearly, arrogance in classification is an undesirable attribute. A human is being arrogant when their expressed conviction in a decision overstates their actual experience in making similar decisions. Likewise, given an input feature vector, we say a classifier is arrogant in its classification if its veracity is high yet its experience is low. Conversely, a classifier is non-arrogant in its classification if there is a reasonable balance between its veracity and its experience. We quantify this balance and discuss new techniques that detect arrogance in a classifier. Inquisitiveness is in many ways the opposite of arrogance. In nature, inquisitiveness is an eagerness for knowledge characterized by the drive to question, to seek a deeper understanding, and to challenge assumptions. The human capacity to doubt present beliefs allows us to acquire new experiences and to learn from our mistakes. Within the discrete world of computers, inquisitive pattern recognition is the constructive investigation and exploitation of conflict in information. This research defines inquisitiveness within the context of self-supervised machine learning and introduces mathematical theory and computational methods for quantifying incompleteness, that is, for isolating unstable, non-representational regions in present information models.
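    The abstract does not spell out its quantitative measure, but one hedged way to illustrate the veracity-versus-experience balance is to compare a classifier's stated confidence with a proxy for local experience, such as how close the query lies to the training data; everything below (names, the distance-based proxy, the constants) is an assumption for illustration, not the work's actual metric.

        # Hedged illustration: flag a prediction as "arrogant" when stated confidence
        # is high but experience -- proxied by proximity to training data -- is low.
        import numpy as np

        def experience(x, X_train, k=5):
            """Crude experience proxy: inverse mean distance to the k nearest training points."""
            d = np.sort(np.linalg.norm(X_train - x, axis=1))[:k]
            return 1.0 / (1.0 + d.mean())

        def arrogance(confidence, x, X_train, k=5):
            """High when stated confidence outstrips local experience."""
            return confidence - experience(x, X_train, k)

        rng = np.random.default_rng(0)
        X_train = rng.normal(size=(200, 2))      # training data the classifier has seen

        x_near = np.array([0.1, 0.0])            # query inside the training region
        x_far = np.array([8.0, 8.0])             # query far outside it
        for x in (x_near, x_far):
            print(f"confidence 0.95 at {x}: arrogance = {arrogance(0.95, x, X_train):.2f}")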

    Metasemantics and fuzzy mathematics

    Get PDF
    The present thesis is an inquiry into the metasemantics of natural languages, with a particular focus on the philosophical motivations for countenancing degreed formal frameworks for both psychosemantics and truth-conditional semantics. Chapter 1 offers a bird's eye view of our overall research project and the key questions that we set out to address. Chapter 2 provides a self-contained overview of the main empirical findings in the cognitive science of concepts and categorisation. This scientific background is offered in light of the fact that most variants of psychologically-informed semantics see our network of concepts as providing the raw materials on which lexical and sentential meanings supervene. Consequently, the metaphysical study of internalistically-construed meanings and the empirical study of our mental categories are overlapping research projects. Chapter 3 closely investigates a selection of species of conceptual semantics, together with reasons for adopting or disavowing them. We note that our ultimate aim is not to defend these perspectives on the study of meaning, but to argue that the project of making them formally precise naturally invites the adoption of degreed mathematical frameworks (e.g. probabilistic or fuzzy). In Chapter 4, we switch to the orthodox framework of truth-conditional semantics, and we present the limitations of a philosophical position that we call "classicism about vagueness". In the process, we formulate an empirical hypothesis about the psychological pull of the inductive soritical premiss and we raise an original objection, based on computability theory, against the epistemicist position. Chapter 5 makes a different case for the adoption of degreed semantic frameworks, based on their (quasi-)superior treatments of the paradoxes of vagueness. Hence, the adoption of tools that allow for graded membership is well-motivated under both semantic internalism and semantic externalism. At the end of this chapter, we defend an unexplored view of vagueness that we call "practical fuzzicism". Chapter 6, viz. the final chapter, is a metamathematical enquiry into both the fuzzy model-theoretic semantics and the fuzzy Davidsonian semantics for formal languages of type-free truth in which precise truth-predications can be expressed.
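    As a small illustration of the graded-membership tools argued for above: a fuzzy membership function assigns degrees in [0, 1] where classical semantics forces a sharp cutoff. The predicate "tall", the sigmoid curve, and its parameters are illustrative choices, not the thesis's formal apparatus.

        # Graded (fuzzy) membership versus classical, bivalent membership.
        import math

        def tall(height_cm, midpoint=180.0, steepness=0.25):
            """Degree in [0, 1] to which a person of the given height counts as tall."""
            return 1.0 / (1.0 + math.exp(-steepness * (height_cm - midpoint)))

        def classical_tall(height_cm, cutoff=180.0):
            """Classical counterpart: tall or not, with a sharp cutoff."""
            return height_cm >= cutoff

        for h in (165, 175, 180, 185, 195):
            print(f"{h} cm: fuzzy {tall(h):.2f}  classical {classical_tall(h)}")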