
    Modelling the Developing Mind: From Structure to Change

    This paper presents a theory of cognitive change. The theory assumes that the fundamental causes of cognitive change reside in the architecture of mind; thus, the architecture of mind as specified by the theory is described first. It is assumed that the mind is a three-level universe involving (1) a processing system that constrains processing potentials, (2) a set of specialized capacity systems that guide understanding of different reality and knowledge domains, and (3) a hypercognitive system that monitors and controls the functioning of all other systems. The paper then specifies the types of change that may occur in cognitive development (changes within the levels of mind, changes in the relations between structures across levels, changes in the efficiency of a structure) and a series of general mechanisms (e.g., metarepresentation) and more specific mechanisms (e.g., bridging, interweaving, and fusion) that bring these changes about. It is argued that different types of change require different mechanisms. Finally, a general model of the nature of cognitive development is offered. The relations between the proposed theory and other theories and research in cognitive development and cognitive neuroscience are discussed throughout the paper.

    SELECTING NEURAL NETWORK ARCHITECTURE FOR INVESTMENT PROFITABILITY PREDICTIONS

    After production and operations, finance and investment are among the most frequent areas of neural network application in business. There is still no standardized paradigm for determining the efficiency of a given NN architecture in a particular problem domain. The selection of an NN architecture needs to take into account the type of problem and the nature of the data in the model, as well as strategies based on result comparison. The paper reviews previous research in the area and suggests a forward strategy for selecting the best NN algorithm and structure. Since the strategy includes both parameter-based and variable-based testing, it can be used for selecting NN architectures as well as for extracting models. Backpropagation, radial basis function, modular, LVQ, and probabilistic neural network algorithms were tested on two independent data sets: stock market and credit scoring data. The results show that neural networks achieve better accuracy than multiple regression and logistic regression models. Since the strategy is model-independent, it can also be used by researchers and professionals in other areas of application.
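    A minimal sketch of the result-comparison idea described in this abstract: train several candidate models on the same data and keep the one with the best cross-validated accuracy. The dataset is synthetic (a stand-in for the paper's credit scoring data), and the candidate set (scikit-learn's LogisticRegression and MLPClassifier) is illustrative rather than the paper's exact algorithms.

        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.neural_network import MLPClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        # Synthetic binary task standing in for the paper's credit scoring data.
        X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

        # Candidate models: a regression baseline and two NN architectures.
        candidates = {
            "logistic_regression": make_pipeline(
                StandardScaler(), LogisticRegression(max_iter=1000)),
            "mlp_one_hidden_layer": make_pipeline(
                StandardScaler(),
                MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)),
            "mlp_two_hidden_layers": make_pipeline(
                StandardScaler(),
                MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)),
        }

        # Score every candidate by 5-fold cross-validated accuracy, keep the best.
        scores = {name: cross_val_score(model, X, y, cv=5).mean()
                  for name, model in candidates.items()}
        best = max(scores, key=scores.get)
        for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
            print(f"{name}: mean accuracy {score:.3f}")
        print(f"selected model: {best}")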

    Investigation of the CasCor family of learning algorithms


    Automated Architecture Design for Deep Neural Networks

    Machine learning has made tremendous progress in recent years and received large amounts of public attention. Though we are still far from designing a complete artificially intelligent agent, machine learning has brought us many applications in which computers solve human learning tasks remarkably well. Much of this progress comes from a recent trend within machine learning called deep learning. Deep learning models are responsible for many state-of-the-art applications of machine learning. Despite their success, deep learning models are hard to train, very difficult to understand, and often so complex that training is only possible on very large GPU clusters. Much work has been done on enabling neural networks to learn efficiently, but the design and architecture of such networks is still mostly done manually, through trial and error and expert knowledge. This thesis inspects different approaches, existing and novel, to automating the design of deep feedforward neural networks, in an attempt to create less complex models with good performance that take away the burden of deciding on an architecture and make it more efficient to design and train such deep networks.
    Comment: Undergraduate Thesis
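    One simple strategy in this spirit is random search over architecture hyperparameters. The sketch below is a generic illustration of that idea, not the thesis's specific method: it samples feedforward architectures of varying depth and width, trains each with scikit-learn's MLPClassifier, and keeps the one with the best validation accuracy.

        import random

        from sklearn.datasets import load_digits
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier

        X, y = load_digits(return_X_y=True)
        X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

        def sample_architecture(rng):
            # Draw 1-3 hidden layers, each with 16-128 units.
            depth = rng.randint(1, 3)
            return tuple(rng.choice([16, 32, 64, 128]) for _ in range(depth))

        rng = random.Random(0)
        best_arch, best_acc = None, 0.0
        for _ in range(10):  # small search budget, for illustration only
            arch = sample_architecture(rng)
            model = MLPClassifier(hidden_layer_sizes=arch, max_iter=500, random_state=0)
            model.fit(X_train, y_train)
            acc = model.score(X_val, y_val)
            if acc > best_acc:
                best_arch, best_acc = arch, acc

        print(f"best architecture: {best_arch}, validation accuracy: {best_acc:.3f}")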