
    Boolean Factor Analysis by an Attractor Neural Network

    Methods for discovering the hidden structure of high-dimensional binary data rank among the most important challenges currently facing the machine learning community. The literature contains many approaches that attempt to solve this hitherto rather ill-defined task. Boolean factor analysis (BFA), the subject of this work, represents the hidden structure of binary data as a Boolean superposition of binary factors complying with the BFA generative model of signals, and a criterion of optimality for the BFA solution is given. In these terms, BFA is a well-defined task completely analogous to linear factor analysis. The main contributions of the dissertation are as follows. First, an efficient BFA method was developed, based on an original attractor neural network with increasing activity (ANNIA), which was subsequently improved by combining it with the expectation-maximization (EM) method, yielding the LANNIA method. Second, the characteristics of ANNIA that are important for the functioning of both the ANNIA and LANNIA methods were analyzed. The functioning of both methods was then validated on artificially generated data sets covering the full range of the generative model's parameters. Next, the methods were applied to real-world data from different areas of science to demonstrate their contribution to this type of analysis. Finally, the BFA method was compared with related methods, including an analysis of their applicability.
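
    To make the generative model concrete, the following is a minimal sketch (plain NumPy, with illustrative dimensions and variable names not taken from the thesis) of binary data formed as a Boolean superposition of binary factors, as the abstract describes; it is not the ANNIA or LANNIA algorithm itself.

        import numpy as np

        rng = np.random.default_rng(0)

        # Illustrative dimensions: n_signals observed binary vectors over n_attributes,
        # generated from n_factors latent binary factors.
        n_signals, n_attributes, n_factors = 1000, 50, 8

        # Binary factor loadings: each factor switches on a sparse subset of attributes.
        factors = (rng.random((n_factors, n_attributes)) < 0.15).astype(np.uint8)

        # Binary factor scores: which factors are active in each signal.
        scores = (rng.random((n_signals, n_factors)) < 0.3).astype(np.uint8)

        # Boolean superposition: an attribute is 1 if ANY active factor loads on it,
        # i.e. the Boolean (OR/AND) matrix product of scores and factors.
        signals = (scores @ factors > 0).astype(np.uint8)

        # A placeholder for observation noise: flip a small fraction of bits.
        noise = (rng.random(signals.shape) < 0.01).astype(np.uint8)
        observed = signals ^ noise

        print(observed.shape, float(observed.mean()))  # shape and bit density of the data

    Under this model an observed attribute is 1 whenever at least one active factor loads on it, which is what distinguishes Boolean superposition from the additive mixing assumed by linear factor analysis.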

    AI of Brain and Cognitive Sciences: From the Perspective of First Principles

    In recent years, we have witnessed the great success of AI in various applications, including image classification, game playing, protein structure analysis, language translation, and content generation. Despite these powerful applications, there are still many tasks in our daily life that are rather simple for humans but pose great challenges to AI. These include image and language understanding, few-shot learning, abstract concepts, and low-energy-cost computing. Thus, learning from the brain remains a promising way to shed light on the development of next-generation AI. The brain is arguably the only known intelligent machine in the universe, the product of evolution for animals surviving in the natural environment. At the behavioral level, psychology and cognitive sciences have demonstrated that human and animal brains can execute very intelligent high-level cognitive functions. At the structural level, cognitive and computational neurosciences have unveiled that the brain has extremely complicated but elegant network forms to support its functions. Over the years, researchers have been gathering knowledge about the structure and functions of the brain, and this process has accelerated recently with the initiation of large-scale brain projects worldwide. Here, we argue that the general principles of brain function are the most valuable things to inspire the development of AI. These general principles are the standard rules by which the brain extracts, represents, manipulates, and retrieves information, and here we call them the first principles of the brain. This paper collects six such first principles: attractor networks, criticality, random networks, sparse coding, relational memory, and perceptual learning. For each topic, we review its biological background, fundamental property, potential application to AI, and future development. Comment: 59 pages, 5 figures, review article.
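
    As an illustration of the first of these principles, the attractor network, here is a minimal textbook-style Hopfield sketch (my own toy example in NumPy, not code from the paper): a corrupted pattern is pulled back to a stored attractor by the recurrent dynamics.

        import numpy as np

        rng = np.random.default_rng(1)

        # Store a few random +/-1 patterns in a Hopfield-style network with Hebbian weights.
        n_units, n_patterns = 100, 5
        patterns = rng.choice([-1, 1], size=(n_patterns, n_units))

        W = (patterns.T @ patterns) / n_units  # outer-product (Hebbian) learning rule
        np.fill_diagonal(W, 0.0)               # no self-connections

        # Start from a corrupted copy of the first pattern and let the dynamics settle.
        state = patterns[0].copy()
        flipped = rng.choice(n_units, size=20, replace=False)
        state[flipped] *= -1

        for _ in range(10):                    # synchronous sign updates
            state = np.sign(W @ state).astype(int)
            state[state == 0] = 1

        # An overlap of 1.0 means the state has fallen back into the stored attractor.
        print("overlap with stored pattern:", (state @ patterns[0]) / n_units)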

    UNCOVERING PATTERNS IN COMPLEX DATA WITH RESERVOIR COMPUTING AND NETWORK ANALYTICS: A DYNAMICAL SYSTEMS APPROACH

    In this thesis, we explore methods of uncovering underlying patterns in complex data, and of making predictions, through machine learning and network science. With the availability of more data, machine learning for data analysis has advanced rapidly. However, there is a general lack of approaches that might allow us to 'open the black box'. In the machine learning part of this thesis, we primarily use an architecture called Reservoir Computing for time-series prediction and image classification, while exploring how information is encoded in the reservoir dynamics. First, we investigate the ways in which a Reservoir Computer (RC) learns concepts such as 'similar' and 'different', and relationships such as 'blurring', 'rotation', etc. between image pairs, and generalizes these concepts to different classes unseen during training. We observe that the high-dimensional reservoir dynamics display different patterns for different relationships. This clustering allows RCs to perform significantly better in generalization with limited training than state-of-the-art pair-based convolutional/deep Siamese Neural Networks. Second, we demonstrate the utility of an RC in the separation of superimposed chaotic signals. We assume no knowledge of the dynamical equations that produce the signals, and require only that the training data consist of finite time samples of the component signals. We find that our method significantly outperforms the optimal linear solution to the separation problem, the Wiener filter. To understand how representations of signals are encoded in an RC during learning, we study its dynamical properties when trained to predict chaotic Lorenz signals. We do so by using a novel mathematical fixed-point-finding technique called directional fibers. We find that, after training, the high-dimensional RC dynamics include fixed points that map to the known Lorenz fixed points, but the RC also has spurious fixed points, which are relevant to how its predictions break down. While machine learning is a useful data-processing tool, its success often relies on a useful representation of the system's information. In contrast, systems with a large number of interacting components may be better analyzed by modeling them as networks. While numerous advances in network science have helped us analyze such systems, tools that identify properties of networks modeling multivariate time-evolving data (such as disease data) are limited. We close this gap by introducing a novel data-driven, network-based Trajectory Profile Clustering (TPC) algorithm for 1) identification of disease subtypes and 2) early prediction of subtype/disease progression patterns. TPC identifies subtypes by clustering patients with similar disease trajectory profiles derived from bipartite patient-variable networks. Applying TPC to a Parkinson's dataset, we identify 3 distinct subtypes. Additionally, we show that TPC predicts disease subtype 4 years in advance with 74% accuracy.
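
    For readers unfamiliar with the Reservoir Computing architecture referred to above, the following is a minimal echo state network sketch for one-step time-series prediction (plain NumPy; the sizes, scalings, and the sine-wave input are illustrative stand-ins, not the thesis's Lorenz setup): a fixed random recurrent reservoir is driven by the input, and only a linear readout is trained, here by ridge regression.

        import numpy as np

        rng = np.random.default_rng(2)

        # Toy scalar input series (a sine here; the thesis works with chaotic signals).
        T = 2000
        u = np.sin(0.1 * np.arange(T))

        # Fixed random reservoir: input weights and recurrent weights, rescaled
        # so that the spectral radius of the recurrent matrix is below 1.
        n_res = 300
        W_in = rng.uniform(-0.5, 0.5, size=n_res)
        W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
        W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

        # Drive the reservoir and collect its states.
        x = np.zeros(n_res)
        states = np.zeros((T, n_res))
        for t in range(T):
            x = np.tanh(W @ x + W_in * u[t])
            states[t] = x

        # Train only the linear readout (ridge regression) to predict the next value.
        washout, reg = 100, 1e-6
        X, y = states[washout:-1], u[washout + 1:]
        W_out = np.linalg.solve(X.T @ X + reg * np.eye(n_res), X.T @ y)

        pred = states[:-1] @ W_out
        print("MSE over the training span:", np.mean((pred[washout:] - u[washout + 1:]) ** 2))

    The design point this illustrates is that the reservoir itself is never trained; all learning is confined to the linear readout on the high-dimensional reservoir states.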

    Brain-Inspired Computational Intelligence via Predictive Coding

    Artificial intelligence (AI) is rapidly becoming one of the key technologies of this century. The majority of results in AI thus far have been achieved using deep neural networks trained with the error backpropagation learning algorithm. However, the ubiquitous adoption of this approach has highlighted some important limitations, such as substantial computational cost, difficulty in quantifying uncertainty, lack of robustness, unreliability, and biological implausibility. It is possible that addressing these limitations will require schemes that are inspired and guided by neuroscience theories. One such theory, called predictive coding (PC), has shown promising performance in machine intelligence tasks, exhibiting exciting properties that make it potentially valuable for the machine learning community: PC can model information processing in different brain areas, can be used in cognitive control and robotics, and has a solid mathematical grounding in variational inference, offering a powerful inversion scheme for a specific class of continuous-state generative models. With the hope of foregrounding research in this direction, we survey the literature that has contributed to this perspective, highlighting the many ways that PC might play a role in the future of machine learning and computational intelligence at large. Comment: 37 pages, 9 figures.
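
    As a rough illustration of the core predictive coding computation the survey discusses, the sketch below implements a two-layer linear toy model (hypothetical names and sizes, not the survey's formulation): inference runs gradient descent on the squared prediction error to settle the latent state, and learning updates the weights from the residual errors.

        import numpy as np

        rng = np.random.default_rng(3)

        # Two-layer linear toy: latent state mu predicts the observation y through weights W.
        d_obs, d_latent = 10, 4

        def infer(y, W, n_steps=50, lr=0.1):
            """Settle the latent state by gradient descent on the squared prediction error."""
            mu = np.zeros(d_latent)
            for _ in range(n_steps):
                eps = y - W @ mu              # prediction error at the observation layer
                mu += lr * (W.T @ eps - mu)   # error drives mu; -mu is a simple prior/decay term
            return mu

        def learn(Y, W, n_epochs=30, lr=0.01):
            """Hebbian-like weight update from prediction errors after inference settles."""
            for _ in range(n_epochs):
                for y in Y:
                    mu = infer(y, W)
                    eps = y - W @ mu
                    W += lr * np.outer(eps, mu)
            return W

        # Data from a hidden linear model, to check that learning reduces the error.
        W_true = rng.normal(size=(d_obs, d_latent))
        Y = rng.normal(size=(200, d_latent)) @ W_true.T

        W = learn(Y, rng.normal(scale=0.1, size=(d_obs, d_latent)))
        mse = np.mean([np.sum((y - W @ infer(y, W)) ** 2) for y in Y])
        print("mean squared prediction error after learning:", mse)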

    Dynamical Systems in Spiking Neuromorphic Hardware

    Dynamical systems are universal computers. They can perceive stimuli, remember, learn from feedback, plan sequences of actions, and coordinate complex behavioural responses. The Neural Engineering Framework (NEF) provides a general recipe for formulating models of such systems as coupled sets of nonlinear differential equations and compiling them onto recurrently connected spiking neural networks, akin to a programming language for spiking models of computation. The Nengo software ecosystem supports the NEF and compiles such models onto neuromorphic hardware. In this thesis, we analyze the theory driving the success of the NEF, and expose several core principles underpinning its correctness, scalability, completeness, robustness, and extensibility. We also derive novel theoretical extensions to the framework that enable it to far more effectively leverage a wide variety of dynamics in digital hardware, and to exploit the device-level physics in analog hardware. At the same time, we propose a novel set of spiking algorithms that recruit an optimal nonlinear encoding of time, which we call the Delay Network (DN). Backpropagation across stacked layers of DNs dramatically outperforms stacked Long Short-Term Memory (LSTM) networks, a state-of-the-art deep recurrent architecture, in accuracy and training time on a continuous-time memory task and a chaotic time-series prediction benchmark. The basic component of this network is shown to function on state-of-the-art spiking neuromorphic hardware, including Braindrop and Loihi. This implementation approaches the energy efficiency of the human brain in the former case, and the precision of conventional computation in the latter.
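
    A small numerical sketch of the kind of mapping the NEF recipe relies on for linear dynamics: with a first-order synaptic low-pass filter of time constant tau, a desired system dx/dt = Ax + Bu can be realised by feeding back tau*A + I and feeding in tau*B. Neurons and the encoding/decoding steps are abstracted away here, and the integrator example and constants are illustrative, not taken from the thesis.

        import numpy as np

        # Check numerically that, through a first-order synapse of time constant tau,
        # the transforms A' = tau*A + I and B' = tau*B reproduce dx/dt = A x + B u.
        tau, dt, T = 0.1, 1e-3, 5.0
        A = np.array([[0.0]])      # desired system: a pure integrator, dx/dt = u
        B = np.array([[1.0]])
        Ap = tau * A + np.eye(1)   # recurrent transform
        Bp = tau * B               # input transform

        steps = int(T / dt)
        u = np.array([1.0])        # constant input; the integrator should ramp to T

        x_direct = np.zeros(1)     # Euler integration of the desired system
        x_filtered = np.zeros(1)   # the same state reached through the synaptic filter
        for _ in range(steps):
            x_direct = x_direct + dt * (A @ x_direct + B @ u)
            # synapse dynamics: tau * dx/dt = -x + (A' x + B' u)
            x_filtered = x_filtered + (dt / tau) * (-x_filtered + Ap @ x_filtered + Bp @ u)

        print(x_direct, x_filtered)  # both approach T = 5.0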