A Novel Machine Learning Classifier Based on a Qualia Modeling Agent (QMA)
This dissertation addresses a problem found in supervised machine learning (ML) classification: the target variable, i.e., the variable a classifier predicts, has to be identified before training begins and cannot change during training and testing. This research develops a computational agent that overcomes this problem. The Qualia Modeling Agent (QMA) is modeled after two cognitive theories: Stanovich's tripartite framework, which proposes that learning results from interactions between conscious and unconscious processes; and the Integrated Information Theory (IIT) of Consciousness, which proposes that the fundamental structural elements of consciousness are qualia. By modeling the informational relationships of qualia, the QMA allows for retaining and reasoning over data sets in a non-ontological, non-hierarchical qualia space (QS). This novel computational approach supports concept drift by allowing the target variable to change ad infinitum without re-training, while achieving classification accuracy comparable to or greater than benchmark classifiers. Additionally, the research produced a functioning model of Stanovich's framework, and a computationally tractable working solution for a representation of qualia which, when exposed to new examples, is able to match the causal structure and generate new inferences.
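The problem shape the abstract describes can be illustrated with a lazy, instance-based sketch. This is not the QMA algorithm; it only shows how retaining whole examples (rather than compiling them into a model of one fixed target) lets the target variable change after training. All names and data below are invented for illustration.

```python
# Illustrative sketch only (NOT the QMA): an instance store in which the
# target attribute can change after "training" without refitting anything.

def classify(store, query, target):
    """Predict `target` for `query` by 1-nearest-neighbour distance
    over whichever other attributes the query shares with each example."""
    def dist(example):
        keys = [k for k in example if k != target and k in query]
        return sum((example[k] - query[k]) ** 2 for k in keys)
    return min(store, key=dist)[target]

# Retained examples: every attribute is kept; none is privileged as "the" target.
store = [
    {"size": 1.0, "weight": 1.2, "label": 0},
    {"size": 5.0, "weight": 4.8, "label": 1},
]

# Target = "label" ...
print(classify(store, {"size": 4.9, "weight": 5.1}, target="label"))  # 1
# ... later the target changes to "size" -- no retraining step occurred.
print(classify(store, {"weight": 1.1, "label": 0}, target="size"))    # 1.0
```

A conventional fitted classifier would have to be retrained from scratch for the second query; an instance store only changes which attribute it reads off.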
Hypernetworks Analysis of RoboCup Interactions
Robotic soccer simulations are controlled environments in which the rich variety of interactions among agents make them good candidates to be studied as complex adaptive systems. The challenge is to create an autonomous team of soccer agents that can adapt and improve its behaviour as it plays other teams. By analogy with chess, the movements of the soccer agents and the ball form ever-changing networks as players in one team form structures that give their team an advantage. For example, the Defender's Dilemma involves relationships between an attacker with the ball, a team-mate and a defender. The defender must choose between tackling the player with the ball, or taking a position to intercept a pass to the other attacker. Since these structures involve more than two interacting entities, it is necessary to go beyond networks to multidimensional hypernetworks. In this context, this thesis investigates: (i) is it possible to identify patterns of play that lead a team to obtain an advantage? (ii) is it possible to forecast, with a good degree of accuracy, whether a certain game action or sequence of game actions is going to be successful, before it has been completed? and (iii) is it possible to make behavioural patterns emerge in the game without specifying the behavioural rules in detail? To investigate these research questions we devised two methods to analyse the interactions between robotic players, one based on traditional programming and one based on Deep Learning. The first method identified thousands of Defender's Dilemma configurations from RoboCup 2D simulator games and found a statistically significant association between winning and the creation of the Defender's Dilemma by the attackers of the winning team. The second method showed that a feedforward Artificial Neural Network trained on thousands of games can take as input the current game configuration and forecast, to a high degree of accuracy, whether the current action will end in a goal or not.
Finally, we designed our own fast and simple robotic soccer simulator for investigating Reinforcement Learning. This showed that Reinforcement Learning using Proximal Policy Optimization could train two agents in the task of scoring a goal, using only basic actions without pre-built hand-programmed skills. These experiments provide evidence that it is possible: to identify advantageous patterns of play; to forecast whether an action or sequence of actions will be successful; and to make behavioural patterns emerge in the game without specifying the behavioural rules in detail.
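As a hedged sketch of the kind of configuration matching the first method performs (the thesis's actual geometric criteria are not given here, so the test and the `radius` threshold below are invented assumptions), a Defender's Dilemma can be framed as a predicate over three player positions:

```python
import math

def dist(a, b):
    """Euclidean distance between two (x, y) pitch coordinates."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def defenders_dilemma(carrier, teammate, defender, radius=5.0):
    """Hypothetical criterion: a lone defender is close enough both to the
    ball carrier and to the passing lane (approximated by its midpoint)
    that he must choose between tackling and intercepting."""
    midpoint = ((carrier[0] + teammate[0]) / 2, (carrier[1] + teammate[1]) / 2)
    return dist(defender, carrier) < radius and dist(defender, midpoint) < radius

print(defenders_dilemma((0, 0), (8, 0), (3, 1)))   # True: defender torn between options
print(defenders_dilemma((0, 0), (8, 0), (20, 0)))  # False: defender out of play
```

Scanning every (carrier, teammate, defender) triple in each logged game frame with a predicate of this shape is one plausible way such configurations could be counted at scale.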
Applications of Complexity Theory in an English Metropolitan Police Force
This research addresses the question “Can the methods of complexity theory be used in UK policing as an enabling tool for policy intervention, in providing evidence of possible consequences before policies are implemented?”
A literature study shows complexity theory is without consensus on epistemology and application. A methodology is developed for exploring the motivational consequences of policies on the workforce, involving the building of mathematical models using hypernetwork theory as the basis for computer simulation, and offering a promising route to engage with practitioners. It is illustrated by the motivation of Police Community Support Officers (PCSOs) in the Neighbourhood Policing system of Greater Manchester Police. The computational model is based on qualitative data collected using Thematic Coding. This produced 'behavioural codes' defined as vertices. The six most prevalent were selected and combined into a hypersimplex:
Purpose (feedback)
Availability of supervision
Threat of harm
Relevance to role
Orientation to geographic responsibility
Lone working
The acronym of these vertices, PATROL, facilitated the design of the simulation. It also offers a conceptual model in which combinations of vertices connect in given scenarios, supporting dialogue between the policy maker and the computer modeller, and giving insights into simulated possible policy consequences for staff. This enables the policy maker to explore sufficient conditions for outcomes that satisfy the policy objectives, but not at the expense of staff motivation.
Experiments established two concepts, combinatorial compensation and combinatorial tempering, as adaptations of simulated policy conditions. The nature of hypernetworks introduces non-linearity into policy design: multiple dimensions are considered together to achieve objectives, and the effects of combinations are not predictable from individual dimensions.
A central tenet of the thesis is that complexity science can be applied without computer programming skills, and that 'modelling' can be done long before writing code. Following this, there is an iterative interaction in which the policy maker uses the program and requests new or changed functionality from the programmer.
The method is proposed as a general framework for Agent-Based Modelling (ABM) of human systems.
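A minimal sketch of the representation described above, with the six behavioural codes as vertices of a hypersimplex; the relation label, the [0, 1] scoring, and the multiplicative combination rule are illustrative assumptions, not the thesis's model.

```python
# Illustrative sketch: a hypersimplex is an ordered set of vertices bound
# by an explicit relation, written <v1, ..., vn; R>.
VERTICES = ("Purpose", "Availability of supervision", "Threat of harm",
            "Relevance to role", "Orientation to geographic responsibility",
            "Lone working")

def motivation(levels, relation="R_motivation"):
    """Hypothetical non-linear combination: motivation is the product of
    per-vertex levels in [0, 1], so one weak vertex drags the whole
    combination down and the result cannot be read off any single dimension."""
    score = 1.0
    for v in VERTICES:
        score *= levels[v]
    return (relation, score)

baseline = {v: 0.8 for v in VERTICES}
# A simulated policy: more lone working, but clearer purpose feedback --
# the question is whether one dimension compensates for the other.
policy = dict(baseline, **{"Lone working": 0.4, "Purpose": 1.0})
print(motivation(baseline)[1] > motivation(policy)[1])  # True
```

Under this toy rule the raised vertex does not fully compensate, which is exactly the kind of combinatorial effect a policy maker could not have predicted dimension by dimension.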
A Hybrid Simulation Methodology to Evaluate Network Centric Decision Making Under Extreme Events
Network centric operations and network centric warfare have generated a new area of research focused on determining how hierarchical organizations composed of human beings and machines make decisions in collaborative environments. One of the most stressful scenarios for these kinds of organizations is the so-called extreme event. This dissertation provides a hybrid simulation methodology, based on classical simulation paradigms combined with social network analysis, for evaluating and improving organizational structures and procedures, mainly incident command systems and plans for facing such extreme events. Accordingly, we provide a methodology for generating hypotheses and afterwards testing organizational procedures either in real training systems or in simulation models with validated data. Because the organization changes its dyadic relationships dynamically over time, we propose to capture the longitudinal digraph in time and analyze it by means of its adjacency matrix. Thus, using an object-oriented approach, three domains are proposed for better understanding the performance and the surrounding environment of an emergency management organization. System dynamics is used for modeling the critical infrastructure linked to the warning alerts of a given organization at federal, state and local levels. Discrete simulation based on the defined concept of community of state enables us to control the complete model. Discrete event simulation allows us to create entities that represent the data and resource flows within the organization. We propose that cognitive models may be well suited to our methodology; for instance, we show how team performance decays in time, according to the Yerkes-Dodson curve, affecting the measures of performance of the whole organizational system.
Accordingly, we suggest that the hybrid model could be applied to other types of organizations, such as military peacekeeping operations and joint task forces. Along with providing insight about organizations, the methodology supports the analysis of the after action review (AAR), based on data collected from command and control systems or the so-called training scenarios. Furthermore, a rich set of mathematical measures arises from the hybrid models, such as triad census, dyad census, eigenvalues, utilization, and feedback loops, which provides a strong foundation for studying an emergency management organization. Future research will be necessary for analyzing real data and validating the proposed methodology.
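The longitudinal-digraph idea can be sketched as a sequence of adjacency matrices, one per time step, from which measures such as the dyad census are read off; the tiny four-actor snapshot below is invented for illustration.

```python
def dyad_census(adj):
    """Count mutual, asymmetric and null dyads in a digraph given as an
    n x n 0/1 adjacency matrix (one snapshot of the longitudinal digraph)."""
    n = len(adj)
    mutual = asym = null = 0
    for i in range(n):
        for j in range(i + 1, n):
            ties = adj[i][j] + adj[j][i]
            if ties == 2:
                mutual += 1
            elif ties == 1:
                asym += 1
            else:
                null += 1
    return {"mutual": mutual, "asymmetric": asym, "null": null}

# Dyadic relationships among 4 actors at one time step t.
A_t = [
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
]
print(dyad_census(A_t))  # {'mutual': 1, 'asymmetric': 2, 'null': 3}
```

Applying the same function to each snapshot in the time series yields a trajectory of census counts, one simple way organizational change could be tracked over an exercise.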
A Programmable Streaming Framework for Extreme-Scale Scientific Visualizations
Emerging computational and acquisition technologies are empowering scientists to conduct simulations and experiments on an unprecedented scale. These advancements can push the frontiers of science and technology with groundbreaking discoveries. However, they also pose significant challenges to traditional scientific visualization workflows. Firstly, the data generated by modern scientific studies using these technologies tends to be extremely large and complex, often resulting in slow processing and rendering times; this demands the development of visualization algorithms that can effectively scale with the size of the data. Secondly, state-of-the-art simulations and experiments produce data at extraordinary rates, complicating the task of generating valuable visualization results for scientists; there is therefore a pressing need for more adaptive and intelligent visualization workflows. Lastly, although new computer hardware and architectures can speed up the visualization process, significant performance variations still exist among visualization algorithms due to differing design choices, so optimizing algorithms to better leverage emerging hardware features remains an ongoing necessity.
This dissertation addresses the aforementioned challenges by introducing a programmable streaming framework enhanced with implicit neural representation, designed for visualizing extreme-scale scientific data. Specifically, it unfolds three innovative methodologies. Firstly, the framework offers a reactive and declarative programming language for streamlining image generation, layout and interaction creation, and I/O processes, eliminating the need for users to manually control all visualization parameters and procedures. This language enables scientists to define highly adaptive visualization workflows through high-level, rule-based grammars; the system then automatically optimizes the low-level implementation according to these specifications, facilitating the creation of more efficient visualization workflows with simpler coding. Secondly, the framework features a scalable, hardware-accelerated streaming visualization system that allows visualization processes to run concurrently with I/O operations. This system not only achieves state-of-the-art scalability but also effectively manages complex, multi-resolution data structures; it delivers accurate rendering outcomes, reduces memory usage, and leverages emerging hardware capabilities more efficiently. Finally, the framework integrates implicit neural representation (INR) techniques for data compression and interactive visualization. The use of INRs significantly reduces data size while preserving high-frequency details; additionally, it enables direct access to spatial locations at any desired resolution, obviating the need for decompression or interpolation.
In summary, this dissertation addresses long-standing challenges inherent in extreme-scale scientific visualization by introducing novel designs and methodologies. The presented framework not only enables more efficient and adaptive visualization workflows but also leverages the latest hardware acceleration and data compression techniques. The implications of these advancements extend beyond mere technical improvements; they pave the way for deeper insights and discoveries across a broad spectrum of scientific studies. This research therefore represents a significant leap forward, with the potential to transform the landscape of scientific visualization.
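The direct-access property of an INR can be sketched with a toy network: the "compressed" data is just the network's weights, and the field is sampled at any continuous coordinate by a forward pass, with no decompression or interpolation stage. The weights below are fixed by hand purely for illustration and encode no real dataset.

```python
import math

# Toy implicit neural representation: field(x, y) is encoded entirely in a
# tiny fixed MLP, so any coordinate can be sampled directly from the weights.
W1 = [[1.0, 0.0], [0.0, 1.0], [0.5, -0.5]]  # 2 -> 3 hidden-layer weights
B1 = [0.0, 0.1, -0.1]                        # hidden biases
W2 = [0.3, -0.2, 0.7]                        # 3 -> 1 output weights

def field(x, y):
    """One forward pass = one field sample at a continuous coordinate."""
    hidden = [math.tanh(w[0] * x + w[1] * y + b) for w, b in zip(W1, B1)]
    return sum(w * h for w, h in zip(W2, hidden))

# The same representation serves any resolution: no stored grid is ever
# decompressed, and no interpolation between grid samples is needed.
coarse = [field(i / 4, i / 4) for i in range(5)]
fine   = [field(i / 400, i / 400) for i in range(401)]
print(len(coarse), len(fine))  # 5 401
```

A real INR would be trained to fit the scientific data and would use a far larger network; the point here is only the access pattern that the abstract contrasts with decompression-based pipelines.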
Multilayer Networks
In most natural and engineered systems, a set of entities interact with each other in complicated patterns that can encompass multiple types of relationships, change in time, and include other types of complications. Such systems include multiple subsystems and layers of connectivity, and it is important to take such "multilayer" features into account to try to improve our understanding of complex systems. Consequently, it is necessary to generalize "traditional" network theory by developing (and validating) a framework and associated tools to study multilayer systems in a comprehensive fashion. The origins of such efforts date back several decades and arose in multiple disciplines, and now the study of multilayer networks has become one of the most important directions in network science. In this paper, we discuss the history of multilayer networks (and related concepts) and review the exploding body of work on such networks. To unify the disparate terminology in the large body of recent work, we discuss a general framework for multilayer networks, construct a dictionary of terminology to relate the numerous existing concepts to each other, and provide a thorough discussion that compares, contrasts, and translates between related notions such as multilayer networks, multiplex networks, interdependent networks, networks of networks, and many others. We also survey and discuss existing data sets that can be represented as multilayer networks. We review attempts to generalize single-layer-network diagnostics to multilayer networks. We also discuss the rapidly expanding research on multilayer-network models and notions like community structure, connected components, tensor decompositions, and various types of dynamical processes on multilayer networks. We conclude with a summary and an outlook.
Comment: Working paper; 59 pages, 8 figures
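A common formal device in this literature is the supra-adjacency matrix of a node-aligned multiplex network: intra-layer adjacency matrices on the block diagonal, plus a coupling between each node's replicas across layers. The two-layer, three-node example and the coupling strength `omega` below are illustrative.

```python
def supra_adjacency(layers, omega=1.0):
    """Build the supra-adjacency matrix of a node-aligned multiplex network:
    block-diagonal intra-layer adjacencies, plus coupling `omega` between
    replicas of the same node in different layers."""
    L, n = len(layers), len(layers[0])
    N = L * n
    S = [[0.0] * N for _ in range(N)]
    for a, adj in enumerate(layers):            # intra-layer blocks
        for i in range(n):
            for j in range(n):
                S[a * n + i][a * n + j] = float(adj[i][j])
    for i in range(n):                          # inter-layer coupling
        for a in range(L):
            for b in range(L):
                if a != b:
                    S[a * n + i][b * n + i] = omega
    return S

layer1 = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]  # a 3-node path in layer 1
layer2 = [[0, 0, 1], [0, 0, 0], [1, 0, 0]]  # a single edge in layer 2
S = supra_adjacency([layer1, layer2])
print(len(S), S[0][3])  # 6 1.0  (node 0's two replicas are coupled)
```

Many single-layer diagnostics (spectra, walks, centralities) are generalized in the surveyed work precisely by applying them to this larger matrix.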
Automatic Multilevel Feature Abstraction in Adaptable Machine Vision Systems
Vision is a complex task which can be accomplished with apparent ease by biological systems, but for which the design of artificial systems is difficult. Although machine vision systems can be successfully designed for a specific task, under certain conditions, they are likely to fail if circumstances change. This was the motivation for the research into ways in which systems can be self-designing and adaptable to new visual tasks. The research was conducted in three vital areas of concern for machine vision systems.
The first area is finding a suitable architecture for forming an appropriate representation for the current task. The research investigated the application of Hypernetworks theory to building a multilevel, generally-applicable representation, through repeated application of a fundamental 'self-similarity' principle, that parts of objects assembled under a particular relation at one level, form whole objects at the next. Results show that this is potentially a powerful approach for autonomously generating an adaptable system-architecture suitable for multiple visual tasks.
The second area is the autonomous extraction of suitable low-level features, which the research investigated through random generation of minimally-constrained pixel-configurations and algorithmic generation of homogeneous and heterogeneous polygons. The results suggest that, despite the simplicity of the features making them vulnerable to image transformations, these are promising approaches worth developing further.
The third area is automatic feature selection. The research explored management of 'dimensionality' and of 'combinatorial explosion', as well as how to locate relevant features at multiple representation levels, in the context of 'emergence' of structure. Results indicate that this approach can find useful 'intermediate-level' constructs through analysis of the connectivity of the simplices representing objects at higher levels.
The research concludes that the proposed novel approaches to tackling the above issues, in particular the application of hypernetworks to the formation of multilevel representations and the resulting emergence of higher-level structure, are fruitful.
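The 'self-similarity' principle, that parts assembled under a particular relation at one level form whole objects at the next, can be sketched as a repeatable aggregation step; the relation names and part labels below are invented for illustration.

```python
# Illustrative sketch of multilevel assembly: level-k parts combined under
# an explicit relation become a single named whole at level k+1.
def assemble(parts, relation, name):
    """Return a level-(k+1) structure from level-k parts."""
    return {"name": name, "relation": relation, "parts": list(parts)}

# Level 0 -> 1: pixel configurations assemble into edges and corners.
edge   = assemble(["px_a", "px_b", "px_c"], "R_collinear", "edge_1")
corner = assemble(["px_d", "px_e"], "R_orthogonal", "corner_1")

# Level 1 -> 2: the same principle, reapplied, yields an object outline.
outline = assemble([edge, corner], "R_adjacent", "outline_1")

def depth(s):
    """Number of assembly levels below a structure (a raw pixel is level 0)."""
    if isinstance(s, str):
        return 0
    return 1 + max(depth(p) for p in s["parts"])

print(depth(outline))  # 2
```

Because the same `assemble` step applies at every level, the hierarchy's depth is open-ended, which is the sense in which the representation can grow to fit new visual tasks.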
Towards Deep Learning with Competing Generalisation Objectives
The unreasonable effectiveness of Deep Learning continues to deliver unprecedented Artificial Intelligence capabilities to billions of people. Growing datasets and technological advances keep extending the reach of expressive model architectures trained through efficient optimisations. Thus, deep learning approaches continue to provide increasingly proficient subroutines for, among others, computer vision and natural interaction through speech and text. Due to their scalable learning and inference priors, higher performance is often gained cost-effectively through largely automatic training. As a result, new and improved capabilities empower more people while the costs of access drop.
The arising opportunities and challenges have profoundly influenced research. Quality attributes of scalable software became central desiderata of deep learning paradigms, including reusability, efficiency, robustness and safety. Ongoing research into continual, meta- and robust learning aims to maximise such scalability metrics in addition to multiple generalisation criteria, despite possible conflicts. A significant challenge is to satisfy competing criteria automatically and cost-effectively.
In this thesis, we introduce a unifying perspective on learning with competing generalisation objectives and make three additional contributions. When autonomous learning through multi-criteria optimisation is impractical, it is reasonable to ask whether knowledge of appropriate trade-offs could make it simultaneously effective and efficient. Informed by explicit trade-offs of interest to particular applications, we developed and evaluated bespoke model architecture priors. We introduced a novel architecture for sim-to-real transfer of robotic control policies by learning progressively to generalise anew. Competing desiderata of continual learning were balanced through disjoint capacity and hierarchical reuse of previously learnt representations. A new state-of-the-art meta-learning approach is then proposed: we showed that meta-trained hypernetworks efficiently store and flexibly reuse knowledge for new generalisation criteria through few-shot gradient-based optimisation. Finally, we characterised empirical trade-offs between the many desiderata of adversarial robustness and demonstrated a novel defensive capability of implicit neural networks to hinder many attacks simultaneously.
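A hypernetwork, in this sense, is a network whose outputs are the weights of another (target) network. A minimal linear sketch, with invented sizes and no relation to the thesis's actual architecture, is:

```python
import random

random.seed(0)

# Hypernetwork: maps a task embedding to the weights of a target model.
# Dimensions and the purely linear forms here are illustrative assumptions.
EMB, T_IN = 3, 2
n_target_weights = T_IN + 1                      # weights + bias of one unit
H = [[random.uniform(-1, 1) for _ in range(EMB)] for _ in range(n_target_weights)]

def hyper(embedding):
    """Generate the target network's weights from a task embedding."""
    return [sum(h * e for h, e in zip(row, embedding)) for row in H]

def target(weights, x):
    """Target model: one linear unit whose parameters came from `hyper`."""
    w, b = weights[:T_IN], weights[T_IN]
    return sum(wi * xi for wi, xi in zip(w, x)) + b

# Two task embeddings yield two different target models, with no per-task
# training of the target itself -- adaptation lives in the embedding.
w_task_a = hyper([1.0, 0.0, 0.0])
w_task_b = hyper([0.0, 1.0, 0.0])
print(target(w_task_a, [1.0, 2.0]), target(w_task_b, [1.0, 2.0]))
```

In few-shot meta-learning, only the small embedding (and/or the hypernetwork) is optimised for a new criterion, which is why such schemes can store and reuse knowledge cheaply.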
Model of models -- Part 1
This paper proposes a new cognitive model, acting as the main component of an AGI agent. The model is introduced in its mature intelligence state, as an extension of previous models, DENN and especially AKREM, by including operational models (frames/classes) and will. The model's core assumption is that cognition is about operating on accumulated knowledge, with the guidance of an appropriate will. We also assume that the actions, as part of knowledge, are learned to align with will during the evolution phase that precedes the mature intelligence state. In addition, the model is mainly based on the duality principle in every known intelligent aspect, such as exhibiting both top-down and bottom-up model learning, generalization versus specialization, and more. Furthermore, a holistic approach is advocated for AGI design, and cognition under constraints or efficiency is proposed, in the form of reusability and simplicity. Finally, reaching this mature state is described via a cognitive evolution from infancy to adulthood, utilizing a consolidation principle. The final product of this cognitive model is a dynamic operational memory of models and instances. Lastly, some examples and preliminary ideas for the evolution phase to reach the mature state are presented.
Comment: arXiv admin note: text overlap with arXiv:2301.1355
Dynamic Community Detection Method of a Social Network Based on Node Embedding Representation
Copyright © 2022 by the authors. The node embedding method enables network structure feature learning and representation for social network community detection. However, traditional node embedding methods focus only on a node's individual feature representation and ignore the global topological feature representation of the network. Traditional community detection methods cannot use the static node vectors from traditional node embedding to calculate the dynamic features of the topological structure. In this study, an incremental dynamic community detection model based on a graph neural network node embedding representation is proposed, comprising the following aspects. A node embedding model based on influence random walk improves the information richness of the node feature vector representation, which improves the performance of the initial static community detection, whose results are used as the original structure for dynamic community detection. By combining a cohesion coefficient with ordinary modularity, a new modularity calculation method is proposed that uses an incremental training method to obtain node vector representations, detecting a dynamic community from the perspectives of coarse- and fine-grained adjustments. A performance analysis based on two dynamic network datasets shows that the proposed method performs better than benchmark algorithms in terms of time complexity, community detection accuracy, and other indicators.
Funding: National Natural Science Foundation of China (61802258, 61572326); Natural Science Foundation of Shanghai (18ZR1428300)
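Ordinary (Newman) modularity, one ingredient of the combined measure described above, can be sketched directly; the cohesion coefficient and the incremental update are specific to the paper and are not reproduced here, and the toy graph is invented.

```python
def modularity(edges, communities):
    """Newman modularity Q = (1/2m) * sum_ij (A_ij - k_i*k_j/2m) * delta(c_i, c_j)
    for an undirected graph given as an edge list and a community partition."""
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    comm = {node: c for c, nodes in enumerate(communities) for node in nodes}
    q = 0.0
    for i in deg:
        for j in deg:
            if comm[i] != comm[j]:
                continue
            a_ij = sum(1 for e in edges if e in ((i, j), (j, i)))
            q += a_ij - deg[i] * deg[j] / (2 * m)
    return q / (2 * m)

# Two triangles joined by a bridge: the natural split scores higher
# than an arbitrary one, which is what any detection method optimizes.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
good = modularity(edges, [{0, 1, 2}, {3, 4, 5}])
bad  = modularity(edges, [{0, 3}, {1, 2, 4, 5}])
print(good > bad)  # True
```

A dynamic method re-evaluates a measure of this kind as edges arrive; the paper's contribution is to do so incrementally, with the cohesion term added, rather than recomputing from scratch.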