7,672 research outputs found

    Undergraduate Catalog of Studies, 2023-2024


    Computational techniques to interpret the neural code underlying complex cognitive processes

    Advances in large-scale neural recording technology have significantly improved the capacity to further elucidate the neural code underlying complex cognitive processes. This thesis investigated two research questions in rodent models. First, what is the role of the hippocampus in memory, and specifically what is the underlying neural code that contributes to spatial memory and navigational decision-making? Second, how is social cognition represented in the medial prefrontal cortex at the level of individual neurons? The thesis begins by investigating memory and social cognition in the context of healthy and diseased states using non-invasive methods (i.e. fMRI and animal behavioural studies). The main body of the thesis then shifts to developing our fundamental understanding of the neural mechanisms underpinning these cognitive processes by applying computational techniques to analyse stable large-scale neural recordings. To achieve this, tailored calcium imaging and behaviour preprocessing computational pipelines were developed and optimised for use in social interaction and spatial navigation experimental analysis. In parallel, a review was conducted on methods for multivariate/neural population analysis. A comparison of multiple neural manifold learning (NML) algorithms identified that non-linear algorithms such as UMAP are more adaptable across datasets of varying noise and behavioural complexity. Furthermore, the review visualises how NML can be applied to disease states in the brain and introduces the secondary analyses that can be used to enhance or characterise a neural manifold. Lastly, the preprocessing and analytical pipelines were combined to investigate the neural mechanisms involved in social cognition and spatial memory. The social cognition study explored how neural firing in the medial prefrontal cortex changed as a function of the social dominance paradigm, the "Tube Test". The univariate analysis identified an ensemble of behaviourally tuned neurons that fire preferentially during specific behaviours such as "pushing" or "retreating", for the animal's own behaviour and/or the competitor's behaviour. Furthermore, in dominant animals, the neural population exhibited greater average firing than that of subordinate animals. Next, to investigate spatial memory, a spatial recency task was used, in which rats learnt to navigate towards one of three reward locations and then recall the rewarded location of the session. During the task, over 1000 neurons were recorded from the hippocampal CA1 region of five rats over multiple sessions. Multivariate analysis revealed that the sequence of neurons encoding an animal's spatial position leading up to a rewarded location was also active in the decision period before the animal navigated to the rewarded location. This result suggests that prospective replay of neural sequences in the hippocampal CA1 region could provide a mechanism by which decision-making is supported.
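    The review's comparison of NML algorithms centres on non-linear embeddings such as UMAP. As a hedged illustration of what such an embedding step might look like (not the thesis's actual pipeline), the sketch below applies the umap-learn package to a placeholder time-by-neurons activity matrix; the data, shapes, preprocessing, and parameters are all assumptions.

```python
# Minimal sketch: embedding neural population activity with UMAP.
# `activity` stands in for a (time_bins x neurons) matrix of calcium
# signals; the data, shapes, and parameters are illustrative only.
import numpy as np
import umap  # pip install umap-learn

rng = np.random.default_rng(0)
activity = rng.poisson(lam=1.0, size=(2000, 150)).astype(float)  # placeholder data

# z-score each neuron before manifold learning
activity = (activity - activity.mean(axis=0)) / (activity.std(axis=0) + 1e-9)

reducer = umap.UMAP(n_components=3, n_neighbors=30, min_dist=0.1, random_state=0)
embedding = reducer.fit_transform(activity)  # (time_bins x 3) low-dimensional trajectory
print(embedding.shape)
```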

    An examination of the verbal behaviour of intergroup discrimination

    This thesis examined relationships between psychological flexibility, psychological inflexibility, prejudicial attitudes, and dehumanization across three cross-sectional studies, with an additional proposed experimental study. Psychological flexibility refers to mindful attention to the present moment, willing acceptance of private experiences, and engaging in behaviours congruent with one's freely chosen values. Inflexibility, on the other hand, indicates a tendency to suppress unwanted thoughts and emotions, entanglement with one's thoughts, and rigid behavioural patterns. Study 1 found limited correlations between inflexibility and sexism, racism, homonegativity, and dehumanization. Study 2 demonstrated more consistent positive associations between inflexibility and prejudice. Study 3 controlled for right-wing authoritarianism and social dominance orientation, finding that inflexibility predicted hostile sexism and racism beyond these factors. While showing some relationships, particularly with sexism and racism, psychological inflexibility did not consistently correlate with varied prejudices across studies. The proposed randomized controlled trial aims to evaluate an Acceptance and Commitment Therapy intervention to reduce sexism through enhanced psychological flexibility. Overall, the findings provide mixed support for the utility of flexibility-based skills in addressing complex societal prejudices. Research should continue examining flexibility integrated with socio-cultural approaches to promote equity.
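    The incremental-validity test in Study 3 is a standard hierarchical regression. The sketch below shows, under assumed variable names and synthetic data (none of which come from the thesis), how such a model might be fitted with statsmodels.

```python
# Illustrative sketch (not the thesis's analysis script): does inflexibility
# predict hostile sexism beyond right-wing authoritarianism (RWA) and social
# dominance orientation (SDO)? Variable names and data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "rwa": rng.normal(size=n),
    "sdo": rng.normal(size=n),
    "inflexibility": rng.normal(size=n),
})
df["hostile_sexism"] = (0.4 * df["rwa"] + 0.3 * df["sdo"]
                        + 0.2 * df["inflexibility"] + rng.normal(size=n))

base = smf.ols("hostile_sexism ~ rwa + sdo", data=df).fit()
full = smf.ols("hostile_sexism ~ rwa + sdo + inflexibility", data=df).fit()

# Incremental variance explained by inflexibility beyond RWA and SDO
print("Delta R^2:", full.rsquared - base.rsquared)
print(full.params)
```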

    Classical and quantum algorithms for scaling problems

    This thesis is concerned with scaling problems, which have a plethora of connections to different areas of mathematics, physics and computer science. Although many structural aspects of these problems are understood by now, we only know how to solve them efficiently in special cases. We give new algorithms for non-commutative scaling problems with complexity guarantees that match the prior state of the art. To this end, we extend the well-known (self-concordance based) interior-point method (IPM) framework to Riemannian manifolds, motivated by its success in the commutative setting. Moreover, the IPM framework does not obviously suffer from the same obstructions to efficiency as previous methods. It also yields the first high-precision algorithms for other natural geometric problems in non-positive curvature. For the (commutative) problems of matrix scaling and balancing, we show that quantum algorithms can outperform the (already very efficient) state-of-the-art classical algorithms. Their time complexity can be sublinear in the input size; in certain parameter regimes they are also optimal, whereas in others we show no quantum speedup over the classical methods is possible. Along the way, we provide improvements over the long-standing state of the art for searching for all marked elements in a list, and computing the sum of a list of numbers. We identify a new application in the context of tensor networks for quantum many-body physics. We define a computable canonical form for uniform projected entangled pair states (as the solution to a scaling problem), circumventing previously known undecidability results. We also show, by characterizing the invariant polynomials, that the canonical form is determined by evaluating the tensor network contractions on networks of bounded size.
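    For readers unfamiliar with the problem class, the commutative special case of matrix scaling has a classical alternating algorithm, the Sinkhorn iteration. The sketch below illustrates only that problem setting; it is not the thesis's Riemannian interior-point method or its quantum algorithms, and all parameters are assumptions.

```python
# Sketch of the (commutative) matrix scaling problem via Sinkhorn iteration:
# find positive diagonal scalings x, y so that diag(x) A diag(y) is doubly
# stochastic. Illustrates the problem setting only.
import numpy as np

def sinkhorn_scale(A, iters=500, tol=1e-9):
    A = np.asarray(A, dtype=float)
    y = np.ones(A.shape[1])
    for _ in range(iters):
        x = 1.0 / (A @ y)        # rescale rows to sum to 1
        y = 1.0 / (A.T @ x)      # rescale columns to sum to 1
        S = np.diag(x) @ A @ np.diag(y)
        if (np.abs(S.sum(axis=1) - 1).max() < tol
                and np.abs(S.sum(axis=0) - 1).max() < tol):
            break
    return x, y, S

A = np.random.rand(4, 4) + 0.1   # strictly positive matrices are scalable
x, y, S = sinkhorn_scale(A)
print(S.sum(axis=0), S.sum(axis=1))  # both close to all-ones
```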

    Backpropagation Beyond the Gradient

    Automatic differentiation is a key enabler of deep learning: previously, practitioners were limited to models for which they could manually compute derivatives. Now, they can create sophisticated models with almost no restrictions and train them using first-order, i.e. gradient, information. Popular libraries like PyTorch and TensorFlow compute this gradient efficiently, automatically, and conveniently with a single line of code. Under the hood, reverse-mode automatic differentiation, or gradient backpropagation, powers the gradient computation in these libraries. Their entire design centers around gradient backpropagation. These frameworks are specialized around one specific task: computing the average gradient in a mini-batch. This specialization often complicates the extraction of other information like higher-order statistical moments of the gradient, or higher-order derivatives like the Hessian. It limits practitioners and researchers to methods that rely on the gradient. Arguably, this hampers the field from exploring the potential of higher-order information, and there is evidence that focusing solely on the gradient has not led to significant recent advances in deep learning optimization. To advance algorithmic research and inspire novel ideas, information beyond the batch-averaged gradient must be made available at the same level of computational efficiency, automation, and convenience. This thesis presents approaches to simplify experimentation with rich information beyond the gradient by making it more readily accessible. We present an implementation of these ideas as an extension to the backpropagation procedure in PyTorch. Using this newly accessible information, we demonstrate possible use cases by (i) showing how it can inform our understanding of neural network training by building a diagnostic tool, and (ii) enabling novel methods to efficiently compute and approximate curvature information. First, we extend gradient backpropagation for sequential feedforward models to Hessian backpropagation, which enables computing approximate per-layer curvature. This perspective unifies recently proposed block-diagonal curvature approximations. Like gradient backpropagation, the computation of these second-order derivatives is modular, and therefore simple to automate and extend to new operations. Based on the insight that rich information beyond the gradient can be computed efficiently and at the same time, we extend the backpropagation in PyTorch with the BackPACK library. It provides efficient and convenient access to statistical moments of the gradient and approximate curvature information, often at a small overhead compared to computing just the gradient. Next, we showcase the utility of such information to better understand neural network training. We build the Cockpit library that visualizes what is happening inside the model during training through various instruments that rely on BackPACK's statistics. We show how Cockpit provides a meaningful statistical summary report to the deep learning engineer to identify bugs in their machine learning pipeline, guide hyperparameter tuning, and study deep learning phenomena. Finally, we use BackPACK's extended automatic differentiation functionality to develop ViViT, an approach to efficiently compute curvature information, in particular curvature noise. It uses the low-rank structure of the generalized Gauss-Newton approximation to the Hessian and addresses shortcomings in existing curvature approximations. Through monitoring curvature noise, we demonstrate how ViViT's information helps in understanding the challenges of making second-order optimization methods work in practice. This work develops new tools to experiment more easily with higher-order information in complex deep learning models. These tools have impacted works on Bayesian applications with Laplace approximations, out-of-distribution generalization, differential privacy, and the design of automatic differentiation systems. They constitute one important step towards developing and establishing more efficient deep learning algorithms.
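    The abstract describes BackPACK as an extension of PyTorch's backward pass. The snippet below is a minimal usage sketch based on BackPACK's documented interface (extend the model and loss, then request extra quantities inside a backpack context); extension names and attributes may differ across versions, so treat it as illustrative rather than the thesis's code.

```python
# Minimal sketch of BackPACK-style extended backpropagation: request
# per-parameter gradient variance and a diagonal GGN curvature estimate
# alongside the usual averaged gradient.
import torch
from backpack import backpack, extend
from backpack.extensions import Variance, DiagGGNExact

model = extend(torch.nn.Sequential(
    torch.nn.Linear(10, 5), torch.nn.ReLU(), torch.nn.Linear(5, 2)))
lossfunc = extend(torch.nn.CrossEntropyLoss())

X, y = torch.randn(32, 10), torch.randint(0, 2, (32,))
loss = lossfunc(model(X), y)

with backpack(Variance(), DiagGGNExact()):
    loss.backward()

for p in model.parameters():
    # .grad is the usual mini-batch gradient; the extra attributes are
    # populated by BackPACK's extended backward pass.
    print(p.grad.shape, p.variance.shape, p.diag_ggn_exact.shape)
```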

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Machine learning applications in search algorithms for gravitational waves from compact binary mergers

    Gravitational waves from compact binary mergers are now routinely observed by Earth-bound detectors. These observations enable exciting new science, as they have opened a new window to the Universe. However, extracting gravitational-wave signals from the noisy detector data is a challenging problem. The most sensitive search algorithms for compact binary mergers use matched filtering, an algorithm that compares the data with a set of expected template signals. As detectors are upgraded and more sophisticated signal models become available, the number of required templates will increase, which can make some sources computationally prohibitive to search for. The computational cost is of particular concern when low-latency alerts need to be issued to maximize the time for electromagnetic follow-up observations. One potential solution to reduce computational requirements, explored over the last decade, is machine learning. However, different proposed deep learning searches target varying parameter spaces and use metrics that are not always comparable to the existing literature. Consequently, a clear picture of the capabilities of machine learning searches has been sorely missing. In this thesis, we closely examine the sensitivity of various deep learning gravitational-wave search algorithms and introduce new methods to detect signals from binary black hole and binary neutron star mergers at previously untested statistical confidence levels. By using the sensitive distance as our core metric, we allow for a direct comparison of our algorithms to state-of-the-art search pipelines. As part of this thesis, we organized a global mock data challenge to create a benchmark for machine learning search algorithms targeting compact binaries. The tools developed in this thesis are thus made available to the wider community by publishing them as open-source software. Our studies show that, depending on the parameter space, deep learning gravitational-wave search algorithms are already competitive with current production search pipelines. We also find that strategies developed for traditional searches can be effectively adapted to their machine learning counterparts. In regions where matched filtering becomes computationally expensive, the available deep learning algorithms are also limited in their capability. We find reduced sensitivity to long-duration signals compared to the excellent results for short-duration binary black hole signals.
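    The matched-filtering idea the abstract refers to can be demonstrated in a few lines. The toy sketch below (not a production pipeline) correlates data containing white noise with a known template and reads off a peak signal-to-noise ratio; real searches whiten by the detector's power spectral density and scan a bank of templates. The sample rate, template shape, and injection amplitude are assumptions.

```python
# Toy matched filter: correlate noisy data against a known template and
# normalize to obtain a signal-to-noise ratio time series.
import numpy as np

fs = 4096                                   # assumed sample rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
# Simple chirp-like template with a Gaussian envelope near t = 0.9 s.
template = np.sin(2 * np.pi * (50 + 200 * t) * t) * np.exp(-((t - 0.9) ** 2) / 0.01)

rng = np.random.default_rng(1)
noise = rng.normal(scale=1.0, size=t.size)
data = noise + 0.5 * template               # inject a weak signal

# Correlate and normalize by the template norm and noise level.
corr = np.correlate(data, template, mode="same")
snr = corr / (np.linalg.norm(template) * noise.std())
print("peak |SNR|:", np.abs(snr).max())
```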

    On the path integration system of insects: there and back again

    Navigation is an essential capability of animate organisms and robots. Among animate organisms, insects are of particular interest because they are capable of a variety of navigation competencies, solving challenging problems with limited resources and thereby providing inspiration for robot navigation. Ants, bees and other insects are able to return to their nest using a navigation strategy known as path integration. During path integration, the animal maintains a running estimate of the distance and direction to its nest as it travels. This estimate, known as the 'home vector', enables the animal to return to its nest. Path integration was also the technique used by sea navigators of the past to cross the open seas. To perform path integration, both sailors and insects need access to two pieces of information: their direction and their speed of motion over time. Neurons encoding the heading and speed have been found to converge on a highly conserved region of the insect brain, the central complex. It is, therefore, believed that the central complex is key to the computations pertaining to path integration. However, several questions remain about the exact structure of the neuronal circuit that tracks the animal's heading, how it differs between insect species, and how the speed and direction are integrated into a home vector and maintained in memory. In this thesis, I have combined behavioural, anatomical, and physiological data with computational modelling and agent simulations to tackle these questions. Analysis of the internal compass circuit of two insect species with highly divergent ecologies, the fruit fly Drosophila melanogaster and the desert locust Schistocerca gregaria, revealed that despite 400 million years of evolutionary divergence, both species share a fundamentally common internal compass circuit that keeps track of the animal's heading. However, subtle differences in the neuronal morphologies result in distinct circuit dynamics adapted to the ecology of each species, thereby providing insights into how neural circuits evolved to accommodate species-specific behaviours. Fast-moving insects need to update their home vector memory continuously as they move, yet they can remember it for several hours. This conjunction of fast updating and long persistence of the home vector does not directly map to current short-, mid-, and long-term memory accounts. An extensive literature review revealed a lack of available memory models that could support the home vector memory requirements. A comparison of existing behavioural data with the homing behaviour of simulated robot agents illustrated that the prevalent hypothesis, which posits that the neural substrate of the path integration memory is a bump attractor network, is contradicted by behavioural evidence. An investigation of the type of memory utilised during path integration revealed that cold-induced anaesthesia disrupts the ability of ants to return to their nest, but it does not eliminate their ability to move in the correct homing direction. Using computational modelling and simulated agents, I argue that the best explanation for this phenomenon is not two separate memories differently affected by temperature but a shared memory that encodes both the direction and distance. The results presented in this thesis shed some more light on the labyrinth that researchers of animal navigation have been exploring in their attempts to unravel a few more rounds of Ariadne's thread back to its origin. The findings provide valuable insights into the path integration system of insects and inspiration for future memory research, advancing path integration techniques in robotics, and developing novel neuromorphic solutions to computational problems.
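    As a purely illustrative sketch of the path-integration computation described above (accumulating heading and speed into a home vector and reading off the homing direction), and not of the thesis's central-complex circuit models, the toy example below keeps a Cartesian home vector; all numbers are assumptions.

```python
# Toy path integration: accumulate a home vector from heading and speed
# samples, then read off the homing direction and distance to the nest.
import numpy as np

def integrate_path(headings_rad, speeds, dt=0.1):
    """Return the home vector (pointing from the agent back to the nest)."""
    steps = (np.stack([np.cos(headings_rad), np.sin(headings_rad)], axis=1)
             * speeds[:, None] * dt)
    position = steps.sum(axis=0)   # outbound displacement from the nest
    return -position               # home vector is the reverse displacement

rng = np.random.default_rng(2)
headings = np.cumsum(rng.normal(scale=0.2, size=500))  # random outbound walk
speeds = np.full(500, 0.05)                            # assumed constant speed

home = integrate_path(headings, speeds)
print("homing direction (rad):", np.arctan2(home[1], home[0]),
      "distance to nest:", np.linalg.norm(home))
```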