
    Deep Neural Networks - A Brief History

    Full text link
    Introduction to deep neural networks and their history. Comment: 14 pages, 14 figures

    Processing of regular and irregular past tense morphology in highly proficient second language learners of English: a self-paced reading study

    Get PDF
    Dual-system models suggest that English past tense morphology involves two processing routes: rule application for regular verbs and memory retrieval for irregular verbs (Pinker, 1999). In second language (L2) processing research, Ullman (2001a) suggested that both verb types are retrieved from memory, but more recently Clahsen and Felser (2006) and Ullman (2004) argued that past tense rule application can be automatised with experience by L2 learners. To address this controversy, we tested highly proficient Greek-English learners with naturalistic or classroom L2 exposure against native English speakers in a self-paced reading task involving past tense forms embedded in plausible sentences. Our results suggest that, irrespective of the type of exposure, proficient L2 learners with extended L2 exposure apply rule-based processing.

    Of the Helmholtz Club, South-Californian seedbed for visual and cognitive neuroscience, and its patron Francis Crick

    Get PDF
    Taking up the view that semi-institutional gatherings such as clubs, societies, and research schools have been instrumental in creating sheltered spaces from which many a 20th-century project-driven interdisciplinary research programme could develop and become established within the institutions of science, the paper explores the history of one such gathering from its inception in the early 1980s into the 2000s: the Helmholtz Club, which brought together scientists from research fields as varied as neuroanatomy, neurophysiology, psychophysics, computer science and engineering, all of whom had an interest in the study of the visual system and of higher cognitive functions relying on visual perception, such as visual consciousness. It argues that British molecular biologist turned South Californian neuroscientist Francis Crick had an early and lasting influence over the Helmholtz Club, of which he was a founding pillar, and that from its inception the club served as a constitutive element in his long-term plans for a neuroscience of vision and of cognition. Further, it argues that in this role the Helmholtz Club served many purposes, the foremost of which was to be a social forum for interdisciplinary discussion, where ‘discussion’ was not mere talk but was imbued with an epistemic value and, as such, carefully cultivated. Finally, it questions what counts as ‘doing science’ and, in turn, definitions of success and failure, and provides some material evidence towards re-appraising the successfulness of Crick’s contribution to the neurosciences.

    Conjunction and Negation of Natural Concepts: A Quantum-theoretic Modeling

    Full text link
    We perform two experiments to investigate the effects of negation on the combination of natural concepts. In the first experiment, we test the membership weights of a list of exemplars with respect to two concepts, e.g., Fruits and Vegetables, and their conjunction Fruits And Vegetables. In the second experiment, we test the membership weights of the same list of exemplars with respect to the same two concepts, but negating the second, e.g., Fruits and Not Vegetables, and again their conjunction Fruits And Not Vegetables. The collected data confirm existing results on conceptual combination, namely, they show dramatic deviations from the predictions of classical (fuzzy set) logic and probability theory. More precisely, they exhibit conceptual vagueness, gradedness of membership, and overextension and double overextension of membership weights with respect to the given conjunctions. We then show that the quantum probability model in Fock space recently elaborated to model Hampton's data on concept conjunction (Hampton, 1988a) and disjunction (Hampton, 1988b) faithfully accords with the collected data. Our quantum-theoretic modeling enables us to describe these non-classical effects in terms of genuine quantum effects, namely 'contextuality', 'superposition', 'interference' and 'emergence'. The obtained results confirm and strengthen the analysis in Aerts (2009a) and Sozzo (2014) on the identification of quantum aspects in experiments on conceptual vagueness. Our results fit within the general research on the identification of quantum structures in cognitive and decision processes. Comment: 32 pages, standard latex, no figures, 16 tables. arXiv admin note: text overlap with arXiv:1311.6050; and text overlap with arXiv:0805.3850 by other authors
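    The central quantities in this abstract can be illustrated with a short sketch. The overextension and double overextension checks follow the standard fuzzy-set criteria the abstract invokes; the two-sector Fock-space formula, its sector weight, and its interference phase are assumptions based on Aerts and Sozzo's earlier modeling, not values reported here.

```python
# Minimal sketch (not the authors' code): given hypothetical membership weights
# mu_A, mu_B for two concepts and mu_AB for their conjunction, flag the
# non-classical effects described in the abstract. Classical (fuzzy set)
# probability requires mu_AB <= min(mu_A, mu_B); a violation is
# "overextension", and mu_AB > max(mu_A, mu_B) is "double overextension".
# The Fock-space prediction uses a two-sector form assumed from earlier
# Aerts/Sozzo work; m2 and theta are free parameters, not values from this paper.

import math

def classify(mu_A, mu_B, mu_AB):
    """Return the Hampton-style classification of a conjunction weight."""
    if mu_AB > max(mu_A, mu_B):
        return "double overextension"
    if mu_AB > min(mu_A, mu_B):
        return "overextension"
    return "classically admissible"

def fock_space_conjunction(mu_A, mu_B, m2, theta):
    """Assumed two-sector Fock-space estimate of mu(A and B).

    m2 weights the classical (product) sector, 1 - m2 weights the
    interference sector, and theta is the interference phase.
    """
    classical = mu_A * mu_B
    interference = 0.5 * (mu_A + mu_B) + math.sqrt(mu_A * mu_B) * math.cos(theta)
    return m2 * classical + (1.0 - m2) * interference

if __name__ == "__main__":
    # Hypothetical exemplar: weights for Fruits, Vegetables, Fruits And Vegetables.
    mu_A, mu_B, mu_AB = 0.8, 0.3, 0.5
    print(classify(mu_A, mu_B, mu_AB))                              # -> overextension
    print(round(fock_space_conjunction(mu_A, mu_B, m2=0.4, theta=1.0), 3))
```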

    An analog feedback associative memory

    Get PDF
    A method for the storage of analog vectors, i.e., vectors whose components are real-valued, is developed for the Hopfield continuous-time network. An important requirement is that each memory vector must be an asymptotically stable (i.e., attractive) equilibrium of the network. Some of the limitations imposed by the continuous Hopfield model on the set of vectors that can be stored are pointed out. These limitations can be relieved by choosing a network containing visible as well as hidden units. An architecture consisting of several hidden layers and a visible layer, connected in a circular fashion, is considered. It is proved that the two-layer case is guaranteed to store any given set of analog vectors provided their number does not exceed one plus the number of neurons in the hidden layer. A learning algorithm that correctly adjusts the locations of the equilibria and guarantees their asymptotic stability is developed. Simulation results confirm the effectiveness of the approach.
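    As a rough illustration of the setting (not the paper's architecture or learning algorithm), the sketch below simulates a small continuous-time Hopfield network and checks that a chosen analog vector is an attractive equilibrium; the weight scale and bias construction are assumptions chosen so the toy example is stable without any learning.

```python
# Minimal sketch, assuming the standard continuous-time Hopfield dynamics
# du/dt = -u + W @ tanh(u) + b with visible state v = tanh(u). Setting
# b = u_star - W @ tanh(u_star), with u_star = arctanh(v_star), pins the
# chosen analog vector v_star as an equilibrium; keeping W small makes it
# attractive. The paper's contribution (hidden units plus a learning rule
# for arbitrary vectors) is not reproduced here.

import numpy as np

rng = np.random.default_rng(0)

v_star = np.array([0.6, -0.3, 0.8])        # analog memory, components in (-1, 1)
u_star = np.arctanh(v_star)                # internal state at the equilibrium
W = 0.1 * rng.standard_normal((3, 3))      # small weights -> stable equilibrium
b = u_star - W @ np.tanh(u_star)           # bias that pins the equilibrium

def simulate(u, steps=2000, dt=0.01):
    """Euler-integrate the continuous Hopfield dynamics and return tanh(u)."""
    for _ in range(steps):
        u = u + dt * (-u + W @ np.tanh(u) + b)
    return np.tanh(u)

# Perturb the stored memory and check that the dynamics flow back to it.
recalled = simulate(u_star + 0.2 * rng.standard_normal(3))
print(np.round(recalled, 3), np.allclose(recalled, v_star, atol=1e-2))
```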

    Building Machines That Learn and Think Like People

    Get PDF
    Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models. Comment: In press at Behavioral and Brain Sciences. Open call for commentary proposals (until Nov. 22, 2016). https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/information/calls-for-commentary/open-calls-for-commentar

    Deep Learning the Effects of Photon Sensors on the Event Reconstruction Performance in an Antineutrino Detector

    Full text link
    We provide a fast, deep-learning-based approach for evaluating how the photon sensors in an antineutrino detector affect its event reconstruction performance. This work is an attempt to harness the power of deep learning for detector design and upgrade planning. Using the Daya Bay detector as a benchmark case and vertex reconstruction performance as the objective for the deep neural network, we find that the photomultiplier tubes (PMTs) differ in their relative importance to the vertex reconstruction. More importantly, the vertex position resolutions for the Daya Bay detector follow an approximately multi-exponential relationship with respect to the number of PMTs and, hence, the coverage. This could also assist in deciding on the merits of installing additional PMTs in future detector plans. The approach could easily be used with other objectives in place of vertex reconstruction.
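    For a concrete picture of the objective, the sketch below trains a small fully connected regression network from per-PMT charges to a vertex position on synthetic toy data; the channel count, layer sizes, and data are assumptions, not the paper's network or the Daya Bay simulation.

```python
# Illustrative sketch only (not the paper's architecture or data): a small
# fully connected regression network mapping a per-PMT charge pattern to a
# reconstructed vertex position, the kind of objective the abstract describes.
# n_pmt, the layer sizes, and the synthetic toy data are all assumptions.

import torch
from torch import nn

n_pmt = 192                                   # hypothetical number of PMT channels
model = nn.Sequential(
    nn.Linear(n_pmt, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 3),                         # predicted (x, y, z) vertex
)

# Toy stand-in for simulated events: random charges and random true vertices.
charges = torch.rand(1024, n_pmt)
vertices = torch.rand(1024, 3)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(charges), vertices)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")

# To probe sensor importance in the spirit of the abstract, one could zero out
# the charge of a given PMT (or a subset) and measure the change in vertex
# resolution on a validation set.
```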