408 research outputs found

    2017 GREAT Day Program

    SUNY Geneseo’s Eleventh Annual GREAT Day.

    Hemodynamic Quantifications By Contrast-Enhanced Ultrasound: From In-Vitro Modelling To Clinical Validation


    Neural function approximation on graphs: shape modelling, graph discrimination & compression

    Graphs serve as a versatile mathematical abstraction of real-world phenomena in numerous scientific disciplines. This thesis is part of the Geometric Deep Learning subject area, a family of learning paradigms that capitalise on the increasing volume of non-Euclidean data so as to solve real-world tasks in a data-driven manner. In particular, we focus on the topic of graph function approximation using neural networks, which lies at the heart of many relevant methods. In the first part of the thesis, we contribute to the understanding and design of Graph Neural Networks (GNNs). Initially, we investigate the problem of learning on signals supported on a fixed graph. We show that treating graph signals as general graph spaces is restrictive and that conventional GNNs have limited expressivity. Instead, we expose a more enlightening perspective by drawing parallels between graph signals and signals on Euclidean grids, such as images and audio. Accordingly, we propose a permutation-sensitive GNN based on an operator analogous to shifts in grids and instantiate it on 3D meshes for shape modelling (Spiral Convolutions). We then focus on learning on general graph spaces, and in particular on functions that are invariant to graph isomorphism. We identify a fundamental trade-off between invariance, expressivity and computational complexity, which we address with a symmetry-breaking mechanism based on substructure encodings (Graph Substructure Networks). Substructures are shown to be a powerful tool that provably improves expressivity while controlling computational complexity, and a useful inductive bias in network science and chemistry. In the second part of the thesis, we discuss the problem of graph compression, where we analyse the information-theoretic principles and the connections with graph generative models. We show that another inevitable trade-off surfaces, now between computational complexity and compression quality, due to graph isomorphism. We propose a substructure-based dictionary coder, Partition and Code (PnC), with theoretical guarantees, which can be adapted to different graph distributions by estimating its parameters from observations. Additionally, contrary to the majority of neural compressors, PnC is parameter and sample efficient and is therefore of wide practical relevance. Finally, within this framework, substructures are further illustrated as a decisive archetype for learning problems on graph spaces.
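A minimal sketch of the substructure-encoding idea behind Graph Substructure Networks, using triangle counts as the substructure and NetworkX for the counting; the feature choice and the two example graphs are illustrative assumptions, and the thesis feeds general subgraph isomorphism counts into a message-passing GNN rather than using raw per-node features.

```python
# Sketch: augment node features with substructure counts, the symmetry-
# breaking idea behind Graph Substructure Networks (GSN). Triangle counts
# are used here purely for illustration; the thesis uses general subgraph
# isomorphism counts as inputs to a message-passing GNN.
import networkx as nx
import numpy as np

def substructure_features(G: nx.Graph) -> np.ndarray:
    """Per-node features: [degree, number of triangles the node belongs to]."""
    deg = dict(G.degree())
    tri = nx.triangles(G)
    return np.array([[deg[v], tri[v]] for v in G.nodes()], dtype=float)

# Two 2-regular graphs on six nodes that degree-based message passing
# (1-WL) cannot distinguish, but triangle counts can.
cycle6 = nx.cycle_graph(6)                                      # no triangles
two_triangles = nx.disjoint_union(nx.complete_graph(3),
                                  nx.complete_graph(3))         # two disjoint triangles

print(substructure_features(cycle6))          # triangle column is all zeros
print(substructure_features(two_triangles))   # triangle column is all ones
```

The two example graphs are the classic pair that plain message passing cannot tell apart, which is exactly the kind of symmetry the substructure encodings are designed to break.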

    Brain Computations and Connectivity [2nd edition]

    This is an open access title available under the terms of a CC BY-NC-ND 4.0 International licence. It is free to read on the Oxford Academic platform and offered as a free PDF download from OUP and selected open access locations. Brain Computations and Connectivity is about how the brain works. In order to understand this, it is essential to know what is computed by different brain systems, and how the computations are performed. The aim of this book is to elucidate what is computed in different brain systems, and to describe current biologically plausible computational approaches and models of how each of these brain systems computes. Understanding the brain in this way has enormous potential for understanding ourselves better in health and in disease. Potential applications of this understanding are to the treatment of the brain in disease, and to artificial intelligence, which will benefit from knowledge of how the brain performs many of its extraordinarily impressive functions. This book is pioneering in taking this approach to brain function: to consider what is computed by many of our brain systems, and how it is computed; and it updates, with much new evidence including the connectivity of the human brain, the earlier book Rolls (2021) Brain Computations: What and How, Oxford University Press. Brain Computations and Connectivity will be of interest to all scientists interested in brain function and how the brain works, whether they are from neuroscience, from medical sciences including neurology and psychiatry, from the area of computational science including machine learning and artificial intelligence, or from areas such as theoretical physics.


    Measuring spectrally resolved information processing in neural data

    Background: The human brain, an incredibly complex biological system comprising billions of neurons and trillions of synapses, possesses remarkable capabilities for information processing and distributed computations. Neurons, the fundamental building blocks, perform elementary operations on their inputs and collaborate extensively to execute intricate computations, giving rise to cognitive functions and behavior. Notably, distributed information processing in the brain heavily relies on rhythmic neural activity characterized by synchronized oscillations at specific frequencies. These oscillations play a crucial role in coordinating brain activity and facilitating communication between different neural circuits [1], effectively acting as temporal windows that enable efficient information exchange within specific frequency ranges. To understand distributed information processing in neural systems, breaking it down into its components, i.e., information transfer, storage, and modification, can be helpful, but this requires precise mathematical definitions for each component. Thankfully, these definitions have recently become available [2]. Information theory is a natural choice for measuring information processing, as it offers a mathematically complete description of the concepts of information and communication. The fundamental information-processing operations are considered essential prerequisites for achieving universal information processing in any system [3]. By quantifying and analyzing these operations, we gain valuable insights into the brain's complex computations and cognitive abilities. As information processing in the brain is intricately tied to rhythmic behavior, there is a need to establish a connection between information-theoretic measures and frequency components. Previous attempts to achieve frequency-resolved information-theoretic measures have mostly relied on narrowband filtering [4], which comes with known issues such as phase shifts and high false-positive rates [5], or on simplifying the computation to a few variables [6], which risks missing important information in the analysed brain signals. Therefore, the current work aims to establish a frequency-resolved measure of two crucial components of information processing: information transfer and information storage. By proposing methodological advancements, this research seeks to shed light on the role of neural oscillations in information processing within the brain. Furthermore, a more comprehensive investigation was carried out on the communication between two critical brain regions responsible for motor inhibition in the frontal cortex: the right inferior frontal gyrus (rIFG) and the pre-supplementary motor area (pre-SMA). Here, neural oscillations in the beta band (12-30 Hz) have been proposed to play a pivotal role in response inhibition. A long-standing question in the field has been to disentangle which of these two brain areas first signals the stopping process and drives the other [7]. Furthermore, it was hypothesized that beta oscillations carry the information transfer between these regions. The present work addresses these methodological problems and investigates spectral information processing in neural data in three studies. Study 1 focuses on the critical role of information transfer, measured by transfer entropy, in distributed computation. Understanding the patterns of information transfer is essential for unraveling the computational algorithms in complex systems, such as the brain.
As many natural systems rely on rhythmic processes for distributed computations, a frequency-resolved measure of information transfer becomes highly valuable. To address this, a novel algorithm is presented that efficiently identifies the frequencies responsible for sending and receiving information in a network. The approach utilizes the invertible maximal overlap discrete wavelet transform (MODWT) to create surrogate data for computing transfer entropy, eliminating issues associated with phase shifts and filtering. However, measuring frequency-resolved information transfer poses a partial information decomposition problem [8] that is yet to be fully resolved. The algorithm's performance is validated on simulated data and applied to human magnetoencephalography (MEG) and ferret local field potential (LFP) recordings. In human MEG, the study unveils a complex spectral configuration of cortical information transmission, showing top-down information flow from very high frequencies (above 100 Hz) to both similarly high frequencies and frequencies around 20 Hz in the temporal cortex. Contrary to the current assumption, the findings suggest that low frequencies do not solely send information to high frequencies. In the ferret LFP, the prefrontal cortex transmits information at low frequencies, specifically within the 4-8 Hz range, while on the receiving end V1 prefers to operate at very high frequencies (> 125 Hz). The spectrally resolved transfer entropy promises to deepen our understanding of rhythmic information exchange in natural systems, shedding light on the computational role of oscillations in cognitive functions. Study 2 focuses on the second fundamental aspect of information processing: active information storage (AIS). AIS estimates how much information in the next measurement of a process can be predicted from its past state. In processes that either produce little information (low entropy) or are highly unpredictable, AIS is low, whereas processes that are predictable but visit many different states with equal probabilities exhibit high AIS [9]. Within this context, we introduced a novel spectrally resolved AIS. Utilizing intracortical recordings of neural activity in anesthetized ferrets before and after loss of consciousness (LOC), the study reveals that the modulation of AIS by anesthesia is highly specific to different frequency bands, cortical layers, and brain regions. The effects of anesthesia on AIS are most prominent in the supragranular layers for the high/low gamma band, while the alpha/beta band exhibits the strongest decrease in AIS at infragranular layers, in accordance with predictive coding theory. Additionally, isoflurane impacts local information processing in a frequency-specific manner: for instance, increases in isoflurane concentration lead to a decrease in AIS in the alpha frequency range but to an increase in AIS in the delta frequency range (< 2 Hz). In sum, analyzing spectrally resolved AIS provides valuable insights into changes in cortical information processing under anesthesia. With rhythmic neural activity playing a significant role in biological neural systems, introducing frequency-specific components into active information storage allows a deeper understanding of local information processing in different brain areas and under various conditions.
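A minimal sketch of the surrogate logic behind the spectrally resolved transfer entropy of Study 1, under stand-in assumptions: PyWavelets' stationary wavelet transform replaces the MODWT, a coarse histogram estimator with history length one replaces the estimators used in the thesis, and toy signals replace MEG/LFP data.

```python
# Sketch: quantify how much transfer entropy from x to y is carried by one
# wavelet scale, by shuffling that scale's detail coefficients in x and
# recomputing the measure on the surrogate. Assumptions: PyWavelets' SWT as
# a stand-in for the MODWT, and a crude histogram TE estimator (history 1).
import numpy as np
import pywt

def transfer_entropy(x, y, bins=8):
    """Histogram estimate of transfer entropy x -> y in bits, history length 1."""
    def disc(s):
        edges = np.linspace(s.min(), s.max(), bins + 1)[1:-1]
        return np.digitize(s, edges)                  # values in 0 .. bins-1
    xs, ys = disc(np.asarray(x)), disc(np.asarray(y))
    y_next, y_past, x_past = ys[1:], ys[:-1], xs[:-1]
    joint = np.zeros((bins, bins, bins))
    np.add.at(joint, (y_next, y_past, x_past), 1.0)   # counts of (y_{t+1}, y_t, x_t)
    joint /= joint.sum()
    p_yp_xp = joint.sum(axis=0, keepdims=True)        # p(y_t, x_t)
    p_yn_yp = joint.sum(axis=2, keepdims=True)        # p(y_{t+1}, y_t)
    p_yp = joint.sum(axis=(0, 2), keepdims=True)      # p(y_t)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = joint * np.log2(joint * p_yp / (p_yp_xp * p_yn_yp))
    return float(np.nansum(terms))

def scale_scrambled_surrogate(x, scale, n_levels=5, wavelet="db4", seed=0):
    """Surrogate of x with the detail coefficients at one SWT scale shuffled,
    destroying the temporal structure carried in that frequency band only."""
    rng = np.random.default_rng(seed)
    coeffs = [list(c) for c in pywt.swt(x, wavelet, level=n_levels)]  # coarsest scale first
    coeffs[scale][1] = rng.permutation(coeffs[scale][1])
    return pywt.iswt([tuple(c) for c in coeffs], wavelet)

# Toy usage: y is driven by the immediate past of x, so scrambling the scales
# of x that carry that dependence lowers the measured transfer entropy.
rng = np.random.default_rng(1)
x = rng.standard_normal(1024)                          # length divisible by 2**n_levels
y = np.roll(x, 1) + 0.5 * rng.standard_normal(1024)
baseline = transfer_entropy(x, y)
for scale in range(5):
    drop = baseline - transfer_entropy(scale_scrambled_surrogate(x, scale), y)
    print(f"scale {scale}: transfer-entropy drop {drop:.3f} bits")
```

The drop relative to the unmodified signal gives a rough, band-specific attribution of the information the source sends within that frequency band.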
In Study 3, to further verify the pivotal role of neural oscillations in information processing, we investigated the neural network mechanisms underlying response inhibition. A long-standing debate has centered on identifying the cortical initiator of response inhibition in the beta band, with two main regions proposed: the rIFG and the pre-SMA. This third study aimed to determine which of these regions is activated first and drives the information exchange with the other. Using high-temporal-resolution MEG and a relatively large cohort of subjects, we demonstrate that the rIFG is activated significantly earlier than the pre-SMA. The onset of beta-band activity in the rIFG occurred at around 140 ms after the STOP signal. Further analyses showed that the beta-band activity in the rIFG was crucial for successful stopping, as evidenced by its predictive value for stopping performance. Connectivity analysis revealed that the rIFG sends information in the beta band to the pre-SMA but not vice versa, emphasizing the rIFG's dominance in the response inhibition process. The results provide strong support for the hypothesis that the rIFG initiates stopping and utilizes beta-band oscillations for this purpose. These findings have significant implications, suggesting the possibility of spatially localized, oscillation-based interventions for response inhibition. Conclusion: The present work proposes a novel algorithm for uncovering the frequencies at which information is transferred between sources and targets in the brain, providing valuable insights into the computational dynamics of neural processes. The spectrally resolved transfer entropy was successfully applied to experimental neural data from intracranial recordings in ferrets and MEG recordings in humans. Furthermore, the study of active information storage (AIS) under anesthesia revealed that spectrally resolved AIS offers unique insights beyond traditional spectral power analysis. By examining changes in neural information processing, the study demonstrates how AIS analysis can deepen the understanding of anesthesia's effects on cortical information processing. Moreover, the third study's findings provide strong evidence supporting the critical role of beta oscillations in information processing, particularly in response inhibition. The research demonstrates that beta oscillations in the rIFG function as the key initiator of the response inhibition process, acting as a top-down control mechanism. The identification of beta oscillations as a crucial factor in information processing opens possibilities for further research and targeted interventions in neurological disorders. Taken together, the current work highlights the role of spectrally resolved information processing in neural systems by not only introducing novel algorithms, but also successfully applying them to experimental oscillatory neural activity in relation to low-level cortical information processing (anesthesia) as well as high-level processes (cognitive response inhibition).

    Flexible estimation of temporal point processes and graphs

    Handling complex data types with spatial structures, temporal dependencies, or discrete values is generally a challenge in statistics and machine learning. In recent years, there has been an increasing need for methodological and theoretical work to analyse non-standard data types, for instance, data collected on protein structures, gene interactions, social networks or physical sensors. In this thesis, I will propose a methodology and provide theoretical guarantees for analysing two general types of discrete data emerging from interactive phenomena, namely temporal point processes and graphs. On the one hand, temporal point processes are stochastic processes used to model event data, i.e., data that comes as discrete points in time or space where some phenomenon occurs. Some of the most successful applications of these discrete processes include online messages, financial transactions, earthquake strikes, and neuronal spikes. The popularity of these processes notably comes from their ability to model unobserved interactions and dependencies between temporally and spatially distant events. However, statistical methods for point processes generally rely on estimating a latent, unobserved, stochastic intensity process. In this context, designing flexible models and consistent estimation methods is often a challenging task. On the other hand, graphs are structures made of nodes (or agents) and edges (or links), where an edge represents an interaction or relationship between two nodes. Graphs are ubiquitously used to model real-world social, transport, and mobility networks, where edges can correspond to virtual exchanges, physical connections between places, or migrations across geographical areas. Graphs are also used to represent correlations and lead-lag relationships between time series, and local dependence between random objects. Graphs are typical examples of non-Euclidean data, where adequate distance measures, similarity functions, and generative models need to be formalised. In the deep learning community, graphs have become particularly popular within the field of geometric deep learning. Structure and dependence can both be modelled by temporal point processes and graphs, although predominantly, the former act on the temporal domain while the latter conceptualise spatial interactions. Nonetheless, some statistical models combine graphs and point processes in order to account for both spatial and temporal dependencies. For instance, temporal point processes have been used to model the birth times of edges and nodes in temporal graphs. Moreover, some multivariate point process models have a latent graph parameter governing the pairwise causal relationships between the components of the process. In this thesis, I will notably study such a model, called the Hawkes model, as well as graphs evolving in time. This thesis aims to design inference methods that provide flexibility in the contexts of temporal point processes and graphs. This manuscript is presented in an integrated format, with four main chapters and two appendices. Chapters 2 and 3 are dedicated to the study of Bayesian nonparametric inference methods in the generalised Hawkes point process model. While Chapter 2 provides theoretical guarantees for existing methods, Chapter 3 also proposes, analyses, and evaluates a novel variational Bayes methodology.
The other main chapters introduce and study model-free inference approaches for two estimation problems on graphs, namely spectral methods for the signed graph clustering problem in Chapter 4, and a deep learning algorithm for the network change point detection task on temporal graphs in Chapter 5. Additionally, Chapter 1 provides an introduction and background preliminaries on point processes and graphs. Chapter 6 concludes this thesis with a summary and critical reflection on the works in this manuscript, and proposals for future research. Finally, the appendices contain two supplementary papers. The first one, in Appendix A, initiated after the COVID-19 outbreak in March 2020, is an application of a discrete-time Hawkes model to COVID-related death counts during the first wave of the pandemic. The second work, in Appendix B, was conducted during an internship at Amazon Research in 2021, and proposes an explainability method for anomaly detection models acting on multivariate time series.
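A minimal sketch of the self-exciting (Hawkes) intensity underlying Chapters 2 and 3, with an exponential kernel and Ogata-style thinning for simulation; the kernel form and parameter values are illustrative assumptions, not the generalised, nonparametric setting studied in the thesis.

```python
# Sketch of a univariate Hawkes (self-exciting) point process with an
# exponential kernel, simulated by thinning. The kernel and parameter
# values (mu, alpha, beta) are illustrative placeholders, not the
# generalised nonparametric model analysed in the thesis.
import numpy as np

def hawkes_intensity(t, events, mu=0.5, alpha=0.8, beta=1.5):
    """lambda(t) = mu + sum over past events t_i < t of alpha * exp(-beta * (t - t_i))."""
    past = events[events < t]
    return mu + alpha * np.exp(-beta * (t - past)).sum()

def simulate_hawkes(T, mu=0.5, alpha=0.8, beta=1.5, seed=0):
    """Simulate event times on [0, T] by thinning a dominating Poisson process."""
    rng = np.random.default_rng(seed)
    events, t = [], 0.0
    while True:
        # lambda(t) + alpha bounds the intensity until the next event,
        # because the exponential kernel only decays between events.
        lam_bar = hawkes_intensity(t, np.array(events), mu, alpha, beta) + alpha
        t += rng.exponential(1.0 / lam_bar)           # candidate event time
        if t >= T:
            break
        if rng.uniform() * lam_bar <= hawkes_intensity(t, np.array(events), mu, alpha, beta):
            events.append(t)                          # accept the candidate
    return np.array(events)

events = simulate_hawkes(T=100.0)
print(f"{len(events)} events on [0, 100]; branching ratio alpha/beta = {0.8 / 1.5:.2f} < 1")
```

The latent intensity process mentioned in the abstract is exactly this conditional intensity: each accepted event excites the process, raising the probability of further events until the excitation decays.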

    Prediction of poor health in small ruminants and companion animals with accelerometers and machine learning

    Global warming is one of the biggest challenges of our time, and significant efforts are being undertaken by academics, industries and other actors to tackle the problem. In the agricultural field, precision farming is part of the solution to environmental sustainability and has been researched increasingly in recent years. Indeed, it has the potential to effectively increase livestock yield and decrease production carbon footprint while maintaining welfare. The thesis begins with a review of developments in automated animal monitoring and then moves on to a case study of a health monitoring system for small ruminants in South Africa. As a demonstration and validation of the potential use case of the system, the method we propose is then applied to another study, which investigates feline health. Low- and middle-income countries will be strongly affected by the changing climate and its impacts. We devise our method based on two South African small-scale sheep and goat farms, where assessment of the health status of individual animals is a key step in the timely and targeted treatment of infections, which is critical in the fight against anthelmintic and antimicrobial resistance. The FAMACHA scoring system has been used successfully to detect anaemia caused by infection with the parasitic nematode Haemonchus contortus in small ruminants and is an effective way to identify individuals in need of treatment. However, assessing FAMACHA is labour-intensive and costly, as individuals must be manually examined at frequent intervals. Here, we used accelerometers to measure the individual activity of extensively grazed small ruminants exposed to natural Haemonchus contortus worm infection in southern Africa over long time scales (13+ months). When combined with machine learning for missing-data imputation and classification, we find that this activity data can predict poorer health, as well as which individuals respond to treatment, with precision of up to 80%. We demonstrate that these classifiers remain robust over time. Interpretation of the trained classifiers reveals that poorer health can be predicted mainly by night-time activity levels in the sheep. Our study reveals behavioural patterns across two small ruminant species that low-cost biologgers can exploit to detect subtle changes in animal health and enable timely and targeted intervention. This has real potential to improve economic outcomes and animal welfare, as well as to limit the use of anthelmintic drugs and reduce the selection pressure for anthelmintic resistance, in both commercial and resource-poor communal farming. The validation of the proposed techniques with a different study group is discussed in the latter part of the thesis. We used accelerometry data from indoor cats equipped with wearable accelerometers, in conjunction with their health status, to detect signs of degenerative joint disease, and adapted our machine-learning pipeline to analyse bursts of high activity in the cats. We were able to classify high-activity events with precision of up to 70% despite the relatively small dataset, adding further evidence for the viability of animal health monitoring with accelerometers.
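A minimal sketch of the kind of pipeline described above (imputation of missing accelerometer readings followed by classification of activity features), using scikit-learn with placeholder data; the feature layout, imputer, and classifier are illustrative assumptions rather than the pipeline actually used in the thesis.

```python
# Sketch of an imputation + classification pipeline on summarised
# accelerometer activity, in the spirit of the approach described above.
# The synthetic data, feature layout, imputer, and classifier are
# illustrative placeholders, not the pipeline used in the thesis.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# One row per animal-observation period of summarised activity features
# (e.g. mean day/night activity counts); label 1 = poor health.
X = rng.standard_normal((200, 6))
y = (X[:, 3] < -0.2).astype(int)            # toy label tied to a "night activity" column
X[rng.random(X.shape) < 0.1] = np.nan       # ~10% missing readings from sensor dropout

model = make_pipeline(
    SimpleImputer(strategy="median"),       # stand-in for the missing-data imputation step
    StandardScaler(),
    RandomForestClassifier(n_estimators=200, random_state=0),
)
precision = cross_val_score(model, X, y, cv=5, scoring="precision")
print(f"cross-validated precision: {precision.mean():.2f}")
```

Precision is reported here because, as in the abstract, the practically relevant question is how often an animal flagged as unhealthy truly needs intervention.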