    If deep learning is the answer, then what is the question?

    Neuroscience research is undergoing a minor revolution. Recent advances in machine learning and artificial intelligence (AI) research have opened up new ways of thinking about neural computation. Many researchers are excited by the possibility that deep neural networks may offer theories of perception, cognition and action for biological brains. This perspective has the potential to radically reshape our approach to understanding neural systems, because the computations performed by deep networks are learned from experience, not endowed by the researcher. If so, how can neuroscientists use deep networks to model and understand biological brains? What is the outlook for neuroscientists who seek to characterise computations or neural codes, or who wish to understand perception, attention, memory, and executive functions? In this Perspective, our goal is to offer a roadmap for systems neuroscience research in the age of deep learning. We discuss the conceptual and methodological challenges of comparing behaviour, learning dynamics, and neural representation in artificial and biological systems. We highlight new research questions that have emerged for neuroscience as a direct consequence of recent advances in machine learning.
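
    A minimal illustration of one way such comparisons are made in practice is sketched below: representational similarity analysis (RSA) correlates the pairwise-distance structure of a network layer with that of recorded neural responses. This is not taken from the Perspective; the simulated arrays, their shapes, and the correlation-distance metric are all assumptions.

```python
# Hypothetical RSA sketch (not from the Perspective): compare the
# representational geometry of a network layer with (simulated) neural data.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

n_stimuli, n_units, n_neurons = 20, 128, 50
network_activity = rng.normal(size=(n_stimuli, n_units))   # e.g. a hidden layer
neural_activity = rng.normal(size=(n_stimuli, n_neurons))  # e.g. recorded responses

# Representational dissimilarity structure: condensed pairwise distances
# between stimulus-evoked patterns in each system.
rdm_network = pdist(network_activity, metric="correlation")
rdm_brain = pdist(neural_activity, metric="correlation")

# Second-order similarity: how alike are the two representational geometries?
rho, _ = spearmanr(rdm_network, rdm_brain)
print(f"RSA correlation (Spearman rho): {rho:.3f}")
```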

    Performance-based standards for South African car-carriers

    Until recently, car-carriers in South Africa operated under abnormal load permits allowing a limited relaxation of legal height and length limits. This practice is being phased out, and exemption will only be granted if a car-carrier complies with the Australian Performance-Based Standards (PBS) scheme. A low-speed turning model was developed in Matlab® and used to benchmark the tail swing performance of the existing South African car-carrier fleet. About 80 per cent of the fleet was shown not to comply with the 0.30 m tail swing limit, owing to South Africa’s inadequate rear overhang legislation, which permits a tail swing of up to 1.25 m. TruckSim® was used to conduct detailed PBS assessments of two car-carrier designs. Critical performance areas were identified, most notably yaw damping and tail swing for the truck and tag-trailer combination, and maximum of difference and difference of maxima for the tractor and semitrailer combination. These were remedied through appropriate design modifications. The Matlab® model was shown to be versatile, accurate and efficient, with potential for future application. The TruckSim® assessments highlighted complexities unique to car-carriers in a PBS context and showed how these may be addressed. This research has shown the benefit of PBS for heavy vehicles, and has guided car-carrier design to improve safety.
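
    The low-speed turning analysis has a simple geometric intuition: in a steady low-speed turn, the rear corner of a rigid unit sweeps a larger radius than the body side, and the excess grows with rear overhang. The sketch below is a rough steady-state approximation under that assumption; it is not the thesis's Matlab® model or the PBS tail-swing test procedure (which is defined over a transient 90° turn entry), and all dimensions are made up.

```python
# Rough geometric sketch (not the thesis's Matlab model): steady-state
# approximation of how far the outer rear corner of a rigid vehicle swings
# outboard of its own body path during a low-speed turn. Treat only as an
# order-of-magnitude illustration of why rear overhang drives tail swing.
import math

def tail_swing_estimate(rear_axle_radius_m, width_m, rear_overhang_m):
    """Outswing of the outer rear corner beyond the outer body side (metres)."""
    outer_side_radius = rear_axle_radius_m + width_m / 2.0
    corner_radius = math.hypot(outer_side_radius, rear_overhang_m)
    return corner_radius - outer_side_radius

# Illustrative (made-up) dimensions for a car-carrier rear unit.
for overhang in (2.0, 3.0, 4.0):
    swing = tail_swing_estimate(rear_axle_radius_m=10.0, width_m=2.5,
                                rear_overhang_m=overhang)
    print(f"rear overhang {overhang:.1f} m -> approx. outswing {swing:.2f} m")
```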

    Neural knowledge assembly in humans and neural networks

    Human understanding of the world can change rapidly when new information comes to light, such as when a plot twist occurs in a work of fiction. This flexible "knowledge assembly" requires few-shot reorganization of neural codes for relations among objects and events. However, existing computational theories are largely silent about how this could occur. Here, participants learned a transitive ordering among novel objects within two distinct contexts before exposure to new knowledge that revealed how the two contexts were linked. Blood-oxygen-level-dependent (BOLD) signals in dorsal frontoparietal cortical areas revealed that objects were rapidly and dramatically rearranged on the neural manifold after minimal exposure to linking information. We then adapted online stochastic gradient descent to permit similarly rapid knowledge assembly in a neural network model.
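
    To make the modelling step concrete, the sketch below shows plain online stochastic gradient descent learning a one-dimensional ordering from pairwise comparisons, followed by a few "linking" trials. It is a generic toy, not the paper's adapted optimiser; the logistic loss, object labels and all hyperparameters are assumptions.

```python
# Generic toy (not the paper's model): plain online SGD learning scalar
# "rank" values for six objects (A..F) from pairwise comparisons, trained
# in two separate contexts and then given a few linking comparisons.
import numpy as np

rng = np.random.default_rng(1)
values = rng.normal(scale=0.1, size=6)   # one scalar embedding per object
lr = 0.1

def update(i, j):
    """One online SGD step on a logistic loss for 'object i ranks above object j'."""
    p = 1.0 / (1.0 + np.exp(-(values[i] - values[j])))
    grad = p - 1.0                        # gradient w.r.t. (values[i] - values[j])
    values[i] -= lr * grad
    values[j] += lr * grad

# Phase 1: learn two separate transitive chains, A>B>C and D>E>F.
for _ in range(200):
    for i, j in [(0, 1), (1, 2), (3, 4), (4, 5)]:
        update(i, j)

# Phase 2: a handful of "linking" trials revealing that C ranks above D.
for _ in range(10):
    update(2, 3)

# With plain online SGD only the directly linked items (C and D) move in
# phase 2, which is why an adapted optimiser is needed for full reassembly.
print(np.round(values, 2))
```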

    Orthogonal representations for robust context-dependent task performance in brains and neural networks

    How do neural populations code for multiple, potentially conflicting tasks? Here we used computational simulations involving neural networks to define “lazy” and “rich” coding solutions to this context-dependent decision-making problem, which trade off learning speed for robustness. During lazy learning the input dimensionality is expanded by random projections to the network hidden layer, whereas in rich learning hidden units acquire structured representations that privilege relevant over irrelevant features. For context-dependent decision-making, one rich solution is to project task representations onto low-dimensional and orthogonal manifolds. Using behavioral testing and neuroimaging in humans and analysis of neural signals from macaque prefrontal cortex, we report evidence for neural coding patterns in biological brains whose dimensionality and neural geometry are consistent with the rich learning regime.
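
    One way to make the "low-dimensional and orthogonal manifolds" claim measurable is to compute principal angles between the subspaces spanned by the top principal components of hidden activity in each context, as in the hypothetical sketch below. The simulated activity and the choice of two components per context are assumptions, not the paper's data or analysis code.

```python
# Hypothetical analysis sketch: quantify whether two task contexts occupy
# orthogonal low-dimensional manifolds via principal angles between the
# subspaces spanned by their top principal components.
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(0)

def top_pcs(activity, k=2):
    """Orthonormal basis (n_units x k) for the top-k principal components."""
    centred = activity - activity.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return vt[:k].T

n_trials, n_units = 200, 50
# Toy hidden-layer activity: context A varies mostly along units 0-1,
# context B along units 2-3, mimicking a "rich" orthogonal solution.
context_a = rng.normal(size=(n_trials, n_units)) * 0.1
context_a[:, :2] += rng.normal(size=(n_trials, 2)) * 3.0
context_b = rng.normal(size=(n_trials, n_units)) * 0.1
context_b[:, 2:4] += rng.normal(size=(n_trials, 2)) * 3.0

angles = np.degrees(subspace_angles(top_pcs(context_a), top_pcs(context_b)))
print("principal angles (deg):", np.round(angles, 1))  # near 90 => orthogonal
```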

    Are task representations gated in macaque prefrontal cortex?

    A recent paper (Flesch et al., 2022) describes behavioural and neural data suggesting that task representations are gated in the prefrontal cortex in both humans and macaques. This short note proposes an alternative explanation for the reported results from the macaque data.

    Abrupt and spontaneous strategy switches emerge in simple regularised neural networks

    Humans sometimes have an insight that leads to a sudden and drastic performance improvement on the task they are working on. Such sudden strategy adaptations are often linked to insight, considered to be a unique aspect of human cognition tied to complex processes such as creativity or meta-cognitive reasoning. Here, we take a learning perspective and ask whether insight-like behaviour can occur in simple artificial neural networks, even when the models only learn to form input-output associations through gradual gradient descent. We compared learning dynamics in humans and regularised neural networks in a perceptual decision task that included a hidden regularity allowing the task to be solved more efficiently. Our results show that only some humans discover this regularity, and that their behaviour is marked by a sudden and abrupt strategy switch reflecting an aha-moment. Notably, we find that simple neural networks with a gradual learning rule and a constant learning rate closely mimic the behavioural characteristics of human insight-like switches, exhibiting a delay of insight, suddenness, and selective occurrence in only some networks. Analyses of network architectures and learning dynamics revealed that insight-like behaviour crucially depended on a regularised gating mechanism and noise added to gradient updates, which allowed the networks to accumulate "silent knowledge" that is initially suppressed by regularised (attentional) gating. This suggests that insight-like behaviour can arise naturally from gradual learning in simple neural networks, where it reflects the combined influences of noise, gating and regularisation.
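
    The three ingredients named above (multiplicative gating, regularisation that suppresses unused gates, and noise on gradient updates) can be written down in a few lines. The sketch below is only a schematic of those ingredients with made-up hyperparameters and a toy task; it is not the authors' architecture or training setup, and whether a delayed, abrupt gate opening occurs depends on the noise and initialisation, echoing the selective occurrence reported in the abstract.

```python
# Schematic of the ingredients named in the abstract: a gated linear readout
# trained by noisy, L1-regularised online SGD. Hyperparameters and the task
# are made up; this is not the authors' implementation.
import numpy as np

rng = np.random.default_rng(0)

w = np.zeros(2)              # readout weights for [obvious cue, hidden regularity]
g = np.array([1.0, 0.05])    # attention-like gates; the second starts suppressed
lr, l1, noise_sd = 0.05, 0.01, 0.05
history = []

for t in range(4000):
    x = rng.normal(size=2)
    y = x[0] + x[1]                          # both cues carry signal
    err = (g * w) @ x - y
    grad_w = err * g * x                     # d(0.5 * err**2) / dw
    grad_g = err * w * x + l1 * np.sign(g)   # L1 pressure shrinks unused gates
    w -= lr * (grad_w + rng.normal(scale=noise_sd, size=2))
    g -= lr * (grad_g + rng.normal(scale=noise_sd, size=2))
    if t % 500 == 0:
        history.append((t, round(float(g[1]), 2)))

# Track the gate on the initially suppressed cue over training; an abrupt
# late increase would be the insight-like switch discussed in the abstract.
print("gate on the hidden cue over training:", history)
```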

    Reproducibility in high-throughput density functional theory: a comparison of AFLOW, Materials Project, and OQMD

    A central challenge in high-throughput density functional theory (HT-DFT) calculations is selecting a combination of input parameters and post-processing techniques that can be used across all materials classes, while also managing accuracy-cost tradeoffs. To investigate the effects of these parameter choices, we consolidate three large HT-DFT databases: Automatic-FLOW (AFLOW), the Materials Project (MP), and the Open Quantum Materials Database (OQMD), and compare reported properties across each pair of databases for materials calculated using the same initial crystal structure. We find that HT-DFT formation energies and volumes are generally more reproducible than band gaps and total magnetizations; for instance, a notable fraction of records disagree on whether a material is metallic (up to 7%) or magnetic (up to 15%). The variance between calculated properties is as high as 0.105 eV/atom (median relative absolute difference, or MRAD, of 6%) for formation energy, 0.65 Å³/atom (MRAD of 4%) for volume, 0.21 eV (MRAD of 9%) for band gap, and 0.15 μB/formula unit (MRAD of 8%) for total magnetization, comparable to the differences between DFT and experiment. We trace some of the larger discrepancies to choices involving pseudopotentials, the DFT+U formalism, and elemental reference states, and argue that further standardization of HT-DFT would be beneficial to reproducibility.
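
    For concreteness, the sketch below computes a median relative absolute difference (MRAD) over paired database entries. The abstract does not spell out the normalisation, so the denominator used here (the mean magnitude of the two reported values) and the example numbers are assumptions.

```python
# Sketch of a paired-database comparison metric like the abstract's MRAD
# (median relative absolute difference). The normalisation -- mean magnitude
# of the two reported values -- is an assumption; the paper may differ.
import numpy as np

def mrad(values_db1, values_db2):
    """Median of |a - b| / mean(|a|, |b|), expressed as a percentage."""
    a = np.asarray(values_db1, dtype=float)
    b = np.asarray(values_db2, dtype=float)
    denom = 0.5 * (np.abs(a) + np.abs(b))
    rel = np.abs(a - b) / np.where(denom > 0, denom, np.nan)
    return 100.0 * np.nanmedian(rel)

# Illustrative (made-up) formation energies in eV/atom for the same
# structures as reported by two databases.
db1 = np.array([-1.20, -0.45, -2.31, -0.02, -3.10])
db2 = np.array([-1.15, -0.50, -2.28, -0.05, -3.00])
print(f"MRAD: {mrad(db1, db2):.1f}%")
print(f"median absolute difference: {np.median(np.abs(db1 - db2)):.3f} eV/atom")
```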

    Simple inhibitors of histone deacetylase activity that combine features of short-chain fatty acid and hydroxamic acid inhibitors

    Butyric acid and trichostatin A (TSA) are anti-cancer compounds that cause the upregulation of genes involved in differentiation and cell cycle regulation by inhibiting histone deacetylase (HDAC) activity. In this study we have synthesized and evaluated compounds that combine the bioavailability of short-chain fatty acids, like butyric acid, with the bidentate binding ability of TSA. A series of analogs were made to examine the effects of chain length, simple aromatic cap groups, and substituted hydroxamates on the compounds' ability to inhibit rat-liver HDAC using a fluorometric assay. In keeping with previous structure-activity relationships, the most effective inhibitors consisted of longer chains and hydroxamic acid groups. It was found that 5-phenylvaleric hydroxamic acid and 4-benzoylbutyric hydroxamic acid were the most potent inhibitors, with IC50 values of 5 μM and 133 μM, respectively.
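
    IC50 values from a fluorometric assay of this kind are typically obtained by fitting a dose-response curve to residual enzyme activity. The sketch below fits a four-parameter logistic model to made-up readings; it is a generic illustration, not the study's data or analysis.

```python
# Generic sketch (made-up data, not the study's analysis): estimating an IC50
# from fluorometric HDAC-activity readings by fitting a four-parameter
# logistic dose-response curve.
import numpy as np
from scipy.optimize import curve_fit

def four_param_logistic(conc, bottom, top, ic50, hill):
    """Residual activity (%) as a function of inhibitor concentration (uM)."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Hypothetical inhibitor concentrations (uM) and residual HDAC activity (%).
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
activity = np.array([98.0, 95.0, 82.0, 60.0, 34.0, 15.0, 7.0])

params, _ = curve_fit(four_param_logistic, conc, activity,
                      p0=[5.0, 100.0, 5.0, 1.0],
                      bounds=([0.0, 50.0, 0.01, 0.1], [50.0, 120.0, 1000.0, 5.0]))
bottom, top, ic50, hill = params
print(f"estimated IC50: {ic50:.1f} uM (Hill slope {hill:.2f})")
```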