
    A Novel Machine Learning Classifier Based on a Qualia Modeling Agent (QMA)

    This dissertation addresses a problem in supervised machine learning (ML) classification: the target variable, i.e., the variable a classifier predicts, has to be identified before training begins and cannot change during training and testing. This research develops a computational agent that overcomes this problem. The Qualia Modeling Agent (QMA) is modeled after two cognitive theories: Stanovich's tripartite framework, which proposes that learning results from interactions between conscious and unconscious processes, and the Integrated Information Theory (IIT) of Consciousness, which proposes that the fundamental structural elements of consciousness are qualia. By modeling the informational relationships of qualia, the QMA retains and reasons over data sets in a non-ontological, non-hierarchical qualia space (QS). This novel computational approach supports concept drift by allowing the target variable to change ad infinitum without re-training, while achieving classification accuracy comparable to or greater than that of benchmark classifiers. Additionally, the research produced a functioning model of Stanovich's framework and a computationally tractable working representation of qualia that, when exposed to new examples, is able to match the causal structure and generate new inferences.
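
    As a rough illustration of the behaviour this abstract highlights (a classifier whose target variable can be switched without re-training), the sketch below stores whole examples and lets any attribute be designated as the target at prediction time. It is not the QMA itself and contains none of the qualia-space or IIT machinery; the class name, overlap measure, and toy data are assumptions made purely for illustration.

    ```python
    # Minimal sketch (not the QMA): an instance store in which any attribute can
    # be designated as the target at prediction time, so the "target variable"
    # can change without retraining. Names and the overlap measure are assumed.
    from collections import Counter

    class SwitchableTargetClassifier:
        def __init__(self):
            self.examples = []            # each example is a dict: attribute -> value

        def fit(self, examples):
            self.examples = list(examples)   # just retain the data; nothing is target-specific
            return self

        def predict(self, query, target, k=3):
            """Predict `target` for `query` (a dict of the other attributes)."""
            def overlap(ex):
                # count attributes on which the stored example agrees with the query
                return sum(1 for a, v in query.items() if ex.get(a) == v)
            neighbours = sorted(self.examples, key=overlap, reverse=True)[:k]
            votes = Counter(ex[target] for ex in neighbours if target in ex)
            return votes.most_common(1)[0][0] if votes else None

    # Usage: the same fitted model answers queries for different target variables.
    clf = SwitchableTargetClassifier().fit([
        {"colour": "red", "shape": "round", "fruit": "apple"},
        {"colour": "yellow", "shape": "long", "fruit": "banana"},
        {"colour": "red", "shape": "long", "fruit": "chilli"},
    ])
    print(clf.predict({"colour": "yellow", "shape": "long"}, target="fruit", k=1))
    print(clf.predict({"fruit": "apple", "shape": "round"}, target="colour", k=1))
    ```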

    A Hybrid Simulation Methodology To Evaluate Network Centric Decision Making Under Extreme Events

    Network centric operations and network centric warfare have generated a new area of research focused on determining how hierarchical organizations composed of human beings and machines make decisions in collaborative environments. One of the most stressful scenarios for these kinds of organizations is the so-called extreme event. This dissertation provides a hybrid simulation methodology, based on classical simulation paradigms combined with social network analysis, for evaluating and improving organizational structures and procedures, mainly the incident command systems and plans for facing such extreme events. Accordingly, we provide a methodology for generating hypotheses and afterwards testing organizational procedures either in real training systems or in simulation models with validated data. Because the organization changes its dyadic relationships dynamically over time, we propose to capture the longitudinal digraph over time and analyze it by means of its adjacency matrix. Thus, using an object-oriented approach, three domains are proposed for better understanding the performance and surrounding environment of an emergency management organization. System dynamics is used for modeling the critical infrastructure linked to the warning alerts of a given organization at federal, state and local levels. Discrete simulation based on the defined concept of community of state enables us to control the complete model. Discrete event simulation allows us to create entities that represent the data and resource flows within the organization. We propose that cognitive models might be well suited to our methodology. For instance, we show how team performance decays over time, according to the Yerkes-Dodson curve, affecting the measures of performance of the whole organizational system. Accordingly, we suggest that the hybrid model could be applied to other types of organizations, such as military peacekeeping operations and joint task forces. Along with providing insight into organizations, the methodology supports the analysis of the after action review (AAR), based on data collected from command and control systems or so-called training scenarios. Furthermore, a rich set of mathematical measures arises from the hybrid models, such as the triad census, dyad census, eigenvalues, utilization, and feedback loops, which provides a strong foundation for studying an emergency management organization. Future research will be necessary for analyzing real data and validating the proposed methodology.
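
    A minimal sketch of the social-network-analysis slice of such a methodology: each time-stamped snapshot of the organization's dyadic relationships is held as a digraph and analyzed through its adjacency matrix (triad census, eigenvalues). The snapshot data and role names below are invented placeholders, not validated data, and the dissertation's simulation components are not reproduced.

    ```python
    # Capture the organization's dyadic relationships as a sequence of
    # time-stamped digraphs, then analyze each snapshot via its adjacency
    # matrix: triad census and eigenvalues as structural measures.
    import networkx as nx
    import numpy as np

    # Illustrative snapshots only; role names are placeholders.
    snapshots = {
        0: [("incident_cmd", "ops_chief"), ("ops_chief", "field_unit_1")],
        1: [("incident_cmd", "ops_chief"), ("ops_chief", "field_unit_1"),
            ("field_unit_1", "ops_chief"), ("incident_cmd", "planning")],
    }

    for t, edges in snapshots.items():
        G = nx.DiGraph(edges)              # longitudinal digraph at time t
        A = nx.to_numpy_array(G)           # adjacency matrix of the snapshot
        triads = nx.triadic_census(G)      # counts of structural triad types
        eigvals = np.linalg.eigvals(A)     # spectral view of the structure
        print(t, A.shape, max(abs(eigvals)),
              {k: v for k, v in triads.items() if v})
    ```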

    Multilayer Networks

    In most natural and engineered systems, a set of entities interact with each other in complicated patterns that can encompass multiple types of relationships, change in time, and include other types of complications. Such systems include multiple subsystems and layers of connectivity, and it is important to take such "multilayer" features into account to try to improve our understanding of complex systems. Consequently, it is necessary to generalize "traditional" network theory by developing (and validating) a framework and associated tools to study multilayer systems in a comprehensive fashion. The origins of such efforts date back several decades and arose in multiple disciplines, and now the study of multilayer networks has become one of the most important directions in network science. In this paper, we discuss the history of multilayer networks (and related concepts) and review the exploding body of work on such networks. To unify the disparate terminology in the large body of recent work, we discuss a general framework for multilayer networks, construct a dictionary of terminology to relate the numerous existing concepts to each other, and provide a thorough discussion that compares, contrasts, and translates between related notions such as multilayer networks, multiplex networks, interdependent networks, networks of networks, and many others. We also survey and discuss existing data sets that can be represented as multilayer networks. We review attempts to generalize single-layer-network diagnostics to multilayer networks. We also discuss the rapidly expanding research on multilayer-network models and notions like community structure, connected components, tensor decompositions, and various types of dynamical processes on multilayer networks. We conclude with a summary and an outlook. (Comment: Working paper; 59 pages, 8 figures.)
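
    One representation commonly used in this literature for the simplest multilayer case, a node-aligned multiplex network, is the supra-adjacency matrix, with intra-layer adjacency matrices on the diagonal blocks and identity-shaped inter-layer couplings on the off-diagonal blocks. The sketch below builds one for two made-up layers; the coupling weight and layer contents are assumptions.

    ```python
    # Supra-adjacency matrix for a two-layer multiplex network:
    # block matrix [[A1, omega*I], [omega*I, A2]] over the same node set.
    import numpy as np
    import networkx as nx

    nodes = ["a", "b", "c"]

    layer1 = nx.Graph()                       # e.g. a friendship layer (made up)
    layer1.add_nodes_from(nodes)
    layer1.add_edges_from([("a", "b"), ("b", "c")])

    layer2 = nx.Graph()                       # e.g. a co-working layer (made up)
    layer2.add_nodes_from(nodes)
    layer2.add_edges_from([("a", "c")])

    A1 = nx.to_numpy_array(layer1, nodelist=nodes)
    A2 = nx.to_numpy_array(layer2, nodelist=nodes)
    omega = 1.0                               # inter-layer coupling weight (assumed)
    I = omega * np.eye(len(nodes))

    supra = np.block([[A1, I], [I, A2]])
    print(supra.shape)                        # (6, 6): 3 nodes x 2 layers
    ```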

    Towards Deep Learning with Competing Generalisation Objectives

    The unreasonable effectiveness of Deep Learning continues to deliver unprecedented Artificial Intelligence capabilities to billions of people. Growing datasets and technological advances keep extending the reach of expressive model architectures trained through efficient optimisations. Thus, deep learning approaches continue to provide increasingly proficient subroutines for, among other tasks, computer vision and natural interaction through speech and text. Due to their scalable learning and inference priors, higher performance is often gained cost-effectively through largely automatic training. As a result, new and improved capabilities empower more people while the costs of access drop. The resulting opportunities and challenges have profoundly influenced research. Quality attributes of scalable software have become central desiderata of deep learning paradigms, including reusability, efficiency, robustness and safety. Ongoing research into continual, meta- and robust learning aims to maximise such scalability metrics in addition to multiple generalisation criteria, despite possible conflicts. A significant challenge is to satisfy competing criteria automatically and cost-effectively. In this thesis, we introduce a unifying perspective on learning with competing generalisation objectives and make three additional contributions. When autonomous learning through multi-criteria optimisation is impractical, it is reasonable to ask whether knowledge of appropriate trade-offs could make it simultaneously effective and efficient. Informed by explicit trade-offs of interest to particular applications, we developed and evaluated bespoke model architecture priors. We introduced a novel architecture for sim-to-real transfer of robotic control policies by learning progressively to generalise anew. Competing desiderata of continual learning were balanced through disjoint capacity and hierarchical reuse of previously learnt representations. We then proposed a new state-of-the-art meta-learning approach, showing that meta-trained hypernetworks efficiently store and flexibly reuse knowledge for new generalisation criteria through few-shot gradient-based optimisation. Finally, we characterised empirical trade-offs between the many desiderata of adversarial robustness and demonstrated a novel defensive capability of implicit neural networks to hinder many attacks simultaneously.
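
    A toy sketch of the hypernetwork idea mentioned above: a small network generates the weights of a task network from a task embedding, so few-shot adaptation can proceed by gradient steps on that embedding alone. This is not the thesis's architecture; the sizes, names, and loss are assumptions.

    ```python
    # Toy hypernetwork: maps a task embedding to the weights of a small linear
    # task network. Gradients flow back into the task embedding, which is the
    # quantity a few-shot gradient-based adaptation loop would update.
    import torch
    import torch.nn as nn

    class HyperNet(nn.Module):
        def __init__(self, task_dim=8, in_dim=4, out_dim=2):
            super().__init__()
            self.in_dim, self.out_dim = in_dim, out_dim
            # generates a full weight matrix plus bias for the task network
            self.generator = nn.Linear(task_dim, in_dim * out_dim + out_dim)

        def forward(self, task_embedding, x):
            params = self.generator(task_embedding)
            W = params[: self.in_dim * self.out_dim].view(self.out_dim, self.in_dim)
            b = params[self.in_dim * self.out_dim:]
            return x @ W.t() + b        # task network applied with generated weights

    hyper = HyperNet()
    task = torch.randn(8, requires_grad=True)   # per-task embedding to adapt
    x = torch.randn(5, 4)
    y = hyper(task, x)                          # (5, 2) predictions for this task
    loss = y.pow(2).mean()                      # placeholder loss for the sketch
    loss.backward()                             # gradients reach the task embedding
    ```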

    Model of models -- Part 1

    This paper proposes a new cognitive model, acting as the main component of an AGI agent. The model is introduced in its mature intelligence state, as an extension of previous models, DENN and especially AKREM, by including operational models (frames/classes) and will. The model's core assumption is that cognition is about operating on accumulated knowledge under the guidance of an appropriate will. We also assume that actions, as part of knowledge, learn to become aligned with will during the evolution phase that precedes the mature intelligence state. In addition, the model is largely based on a duality principle in every known aspect of intelligence, such as exhibiting both top-down and bottom-up model learning, generalization versus specialization, and more. Furthermore, a holistic approach is advocated for AGI design, and cognition under constraints, i.e., efficiency, is proposed in the form of reusability and simplicity. Finally, reaching the mature state is described via a cognitive evolution from infancy to adulthood, utilizing a consolidation principle. The final product of this cognitive model is a dynamic operational memory of models and instances. Lastly, some examples and preliminary ideas for the evolution phase toward the mature state are presented. (Comment: arXiv admin note: text overlap with arXiv:2301.1355)
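
    A very loose sketch of the "dynamic operational memory of models and instances" described above, under the assumption that models behave like frame-like classes with slots and operations, instances point back to their model, and a simple "will" (here just a goal predicate) selects which operation to apply. All names and structures are illustrative, not taken from the paper.

    ```python
    # Memory of models (frame-like classes with slots and operations) and
    # instances; a goal predicate stands in for "will" when choosing an action.
    class Model:
        def __init__(self, name, slots, operations):
            self.name, self.slots, self.operations = name, slots, operations

    class Instance:
        def __init__(self, model, values):
            self.model = model
            self.values = dict(values)          # slot -> value

    memory = {"models": {}, "instances": []}

    door = Model("door", slots=["state"],
                 operations={"open": lambda v: {**v, "state": "open"}})
    memory["models"]["door"] = door
    memory["instances"].append(Instance(door, {"state": "closed"}))

    def act(instance, will):
        """Apply the first operation of the instance's model that satisfies `will`."""
        for name, op in instance.model.operations.items():
            candidate = op(instance.values)
            if will(candidate):
                instance.values = candidate
                return name
        return None

    print(act(memory["instances"][0], will=lambda v: v.get("state") == "open"))
    ```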

    Dynamic Community Detection Method of a Social Network Based on Node Embedding Representation

    Node embedding enables network structure feature learning and representation for social network community detection. However, traditional node embedding methods focus only on a node's individual feature representation and ignore the global topological feature representation of the network, and traditional community detection methods cannot use the static node vectors they produce to calculate the dynamic features of the topological structure. In this study, an incremental dynamic community detection model based on a graph neural network node embedding representation is proposed, comprising the following aspects. A node embedding model based on an influence random walk enriches the information in the node feature vector representation, which improves the performance of the initial static community detection, whose results are used as the original structure for dynamic community detection. By combining a cohesion coefficient with ordinary modularity, a new modularity calculation method is proposed that uses incremental training to obtain node vector representations and detect dynamic communities through both coarse- and fine-grained adjustments. A performance analysis on two dynamic network datasets shows that the proposed method outperforms benchmark algorithms in terms of time complexity, community detection accuracy, and other indicators.
    Funding: National Natural Science Foundation of China (61802258, 61572326); Natural Science Foundation of Shanghai (18ZR1428300)
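
    A sketch of two building blocks named in the abstract: random walks that could feed a node embedding (here biased by degree as a crude stand-in for the paper's influence measure) and a modularity score for candidate communities. The paper's cohesion coefficient, graph neural network, and incremental training are not reproduced; the graph and the two-way split are toy placeholders.

    ```python
    # Influence-biased random walks (degree used as a proxy for influence) plus
    # ordinary modularity for candidate communities on a toy graph.
    import random
    import networkx as nx
    from networkx.algorithms.community import modularity

    def influence_walks(G, walk_len=10, walks_per_node=5):
        walks = []
        for start in G.nodes():
            for _ in range(walks_per_node):
                walk, node = [start], start
                for _ in range(walk_len - 1):
                    nbrs = list(G.neighbors(node))
                    if not nbrs:
                        break
                    weights = [G.degree(n) for n in nbrs]   # degree as influence proxy
                    node = random.choices(nbrs, weights=weights)[0]
                    walk.append(node)
                walks.append(walk)
        return walks        # these sequences would feed a skip-gram style embedder

    G = nx.karate_club_graph()
    walks = influence_walks(G)
    communities = [set(range(0, 17)), set(range(17, 34))]   # toy two-way split
    print(len(walks), modularity(G, communities))
    ```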