
    Study of the convergence of the Meshless Lattice Boltzmann Method in Taylor-Green and annular channel flows

    The Meshless Lattice Boltzmann Method (MLBM) is a numerical tool that relieves the standard Lattice Boltzmann Method (LBM) from regular lattices and, at the same time, decouples the space and velocity discretizations. In this study, we investigate the numerical convergence of MLBM in two benchmark tests: the Taylor-Green vortex and annular (bent) channel flow. We compare our MLBM results to LBM and to the analytical solution of the Navier-Stokes equations. We investigate the method's convergence in terms of the discretization parameter, the interpolation order, and the LBM streaming distance refinement. We observe that MLBM outperforms LBM in terms of the error value for the same number of nodes discretizing the domain. We find that LBM errors at a given streaming distance δx and timestep length δt are the asymptotic lower bounds of the MLBM errors at the same streaming distance and timestep length. Finally, we suggest an expression for the MLBM error that consists of the LBM error plus additional terms related to the semi-Lagrangian nature of the method itself.
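    A minimal sketch of the kind of refinement study described above: estimating an observed convergence order from errors measured at successive node spacings. The spacing and error values below are illustrative placeholders, not the paper's data.

```python
# Fit err ≈ C * h^p in log-log space; the slope p is the observed convergence order.
import numpy as np

h = np.array([0.04, 0.02, 0.01, 0.005])           # node spacings (refinement levels)
err = np.array([3.2e-2, 8.5e-3, 2.2e-3, 5.6e-4])  # e.g. L2 error vs. the analytical solution

p, logC = np.polyfit(np.log(h), np.log(err), 1)
print(f"observed convergence order p ≈ {p:.2f}")
```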

    Deep learning for diffusion in porous media

    We adopt convolutional neural networks (CNNs) to predict basic properties of porous media. Two media types are considered: one mimics sandstone, and the other mimics systems derived from the extracellular space of biological tissues. The Lattice Boltzmann Method is used to obtain the labeled data necessary for supervised learning. We distinguish two tasks. In the first, networks based on the analysis of the system's geometry predict porosity and the effective diffusion coefficient. In the second, networks reconstruct the system's geometry and the concentration map. For the first task, we propose two types of CNN models: the C-Net and the encoder part of the U-Net. Both networks are modified by adding a self-normalization module. The models predict with reasonable accuracy, but only within the data type they were trained on. For instance, the model trained on sandstone-like samples overshoots or undershoots on biological-like samples. For the second task, we propose using the U-Net architecture. It accurately reconstructs the concentration fields. Moreover, the network trained on one data type works well for the other. For instance, the model trained on sandstone-like samples works perfectly on biological-like samples.
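    A minimal sketch of the first task (geometry in, scalar properties out): a small CNN that maps a binary 2D pore/solid map to porosity and an effective diffusion coefficient. This is an illustrative stand-in, not the paper's C-Net or U-Net encoder; the layer sizes and the 128x128 input resolution are assumptions.

```python
import torch
import torch.nn as nn

class PropertyRegressor(nn.Module):
    """Toy CNN regressor: binary geometry -> (porosity, effective diffusivity)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.AdaptiveAvgPool2d(1),                               # global average pooling
        )
        self.head = nn.Linear(64, 2)  # two outputs: porosity and D_eff

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = PropertyRegressor()
geometry = torch.rand(4, 1, 128, 128).round()  # batch of binary pore/solid maps
print(model(geometry).shape)                   # torch.Size([4, 2])
```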

    Neurophysiological markers of successful learning in healthy aging

    The capacity to learn and memorize is a key determinant of quality of life but is known to decline to varying degrees with age. However, the neural correlates of memory formation and the critical features that determine the extent to which aging affects learning are still not well understood. By employing a visual sequence learning task, we were able to track the behavioral and neurophysiological markers of gradual learning over several repetitions, which is not possible in traditional approaches that rely on a remembered vs. forgotten comparison. On the neurophysiological level, we focused on two learning-related centro-parietal event-related potential (ERP) components: the expectancy-driven P300 and the memory-related broader positivity (BP). Our results revealed that although both age groups showed significant learning progress, young individuals learned faster and remembered more stimuli than older participants. Successful learning was directly linked to a decrease in P300 and BP amplitudes. However, young participants showed larger P300 amplitudes with a sharper decrease during learning, even after correcting for the observed age-related increase in P300 latency and P300 peak variability. Additionally, the P300 amplitude predicted learning success in both age groups and showed good test-retest reliability. The memory formation process, reflected by the BP amplitude, revealed a similar level of engagement in both age groups; however, this engagement did not translate into the same learning progress in the older participants. We suggest that the slower and more variable timing of the stimulus identification process reflected in the P300 means that, although older participants engage the memory formation process, there is less time for it to translate the categorical stimulus location information into a solidified memory trace. These results highlight the important role of the P300 and BP as neurophysiological markers of learning and may enable the development of preventive measures for cognitive decline.
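    A minimal sketch of how a component amplitude like the P300 can be tracked over the course of learning: the mean amplitude in a fixed time window at centro-parietal channels, averaged per repetition block. The sampling rate, channel picks, 300-500 ms window, and trial counts are illustrative assumptions, not the study's parameters.

```python
import numpy as np

fs = 500                                   # sampling rate in Hz (assumed)
t = np.arange(-0.2, 0.8, 1 / fs)           # epoch time axis: -200 to 800 ms
n_trials, n_channels = 120, 3              # e.g. Cz, CPz, Pz (assumed picks)
epochs = np.random.randn(n_trials, n_channels, t.size)  # placeholder epoched EEG (µV)

win = (t >= 0.3) & (t <= 0.5)              # assumed P300 measurement window
# Mean amplitude per trial: average across channels, then across the window.
p300_amp = epochs[:, :, win].mean(axis=(1, 2))

# Learning curve: component amplitude per block of sequence repetitions.
blocks = p300_amp.reshape(-1, 10).mean(axis=1)  # 12 blocks of 10 trials
print(blocks.round(2))
```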

    The AI Neuropsychologist: Automatic scoring of memory deficits with deep learning

    Memory deficits are a hallmark of many neurological and psychiatric conditions. The Rey-Osterrieth complex figure (ROCF) is the state-of-the-art tool used by neuropsychologists across the globe to assess the degree of non-verbal visual memory deterioration. To obtain a score, a trained clinician inspects a patient’s ROCF drawing and quantifies deviations from the original figure. This manual procedure is time-consuming, and scores vary with the clinician’s experience, motivation, and tiredness. Here, we leverage novel deep learning architectures to automate the rating of memory deficits. For this, a multi-head convolutional neural network was trained on 20,225 ROCF drawings. Unbiased ground-truth ROCF scores were obtained from crowdsourced human intelligence. The neural network outperforms both online raters and clinicians. Our AI-powered scoring system provides healthcare institutions worldwide with a digital tool to assess performance on the ROCF test from hand-drawn images objectively, reliably, and time-efficiently.
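    A minimal sketch of a multi-head CNN in the spirit described above: a shared convolutional trunk with one regression head per scored figure element (the ROCF is conventionally scored over 18 elements). The layer sizes and 256x256 grayscale input are assumptions; this is not the paper's architecture.

```python
import torch
import torch.nn as nn

class ROCFScorer(nn.Module):
    """Toy multi-head scorer: one head per ROCF element, summed to a total score."""
    def __init__(self, n_elements=18):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),    # 256 -> 128
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
            nn.AdaptiveAvgPool2d(1),
        )
        self.heads = nn.ModuleList([nn.Linear(128, 1) for _ in range(n_elements)])

    def forward(self, x):
        z = self.trunk(x).flatten(1)
        per_element = torch.cat([head(z) for head in self.heads], dim=1)
        return per_element, per_element.sum(dim=1)  # element scores, total score

drawings = torch.rand(2, 1, 256, 256)      # batch of digitized hand drawings
elements, total = ROCFScorer()(drawings)
print(elements.shape, total.shape)         # torch.Size([2, 18]) torch.Size([2])
```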

    Contralateral delay activity as a marker of visual working memory capacity: a multi-site registered replication

    Visual working memory (VWM) is a temporary storage system capable of retaining information that can be accessed and manipulated by higher cognitive processes, thereby facilitating a wide range of cognitive functions. Electroencephalography (EEG) is used to study the neural correlates of VWM with high temporal precision, and one commonly used EEG measure is an event-related potential called the contralateral delay activity (CDA). In a landmark study, Vogel and Machizawa (2004) found that the CDA amplitude increases with the number of items stored in VWM and plateaus around three to four items, which is thought to represent the typical adult working memory capacity. Critically, this study also showed that the increase in CDA amplitude between two-item and four-item arrays correlated with individual subjects’ VWM performance. Although these results have been supported by subsequent studies, a recent study suggested that the number of subjects used in experiments investigating the CDA may not be sufficient to detect differences in set size or to provide a reliable account of the relationship between behaviorally measured VWM capacity and CDA amplitude. To address this, the current study, as part of the #EEGManyLabs project, aims to conduct a multi-site replication of Vogel and Machizawa's (2004) seminal study on a large sample of participants, with a pre-registered analysis plan. In doing so, we aim to deepen our understanding of the neural correlates of visual working memory.
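    A minimal sketch of the standard CDA computation: a contralateral-minus-ipsilateral difference wave at posterior electrodes, averaged over the retention interval, per memory set size. The sampling rate, 400-900 ms window, trial counts, and set sizes are illustrative assumptions.

```python
import numpy as np

fs = 250
t = np.arange(-0.2, 1.2, 1 / fs)   # epoch relative to memory-array onset (s)
set_sizes = [2, 4]
n_trials = 100
# Placeholder epochs (trials x time) at posterior sites contra-/ipsilateral
# to the cued hemifield, one pair of arrays per set size.
contra = {s: np.random.randn(n_trials, t.size) for s in set_sizes}
ipsi = {s: np.random.randn(n_trials, t.size) for s in set_sizes}

delay = (t >= 0.4) & (t <= 0.9)    # assumed retention-interval window
cda = {s: (contra[s] - ipsi[s]).mean(axis=0) for s in set_sizes}  # difference waves
amp = {s: cda[s][delay].mean() for s in set_sizes}                # mean CDA amplitude

# Set-size effect (4-item minus 2-item amplitude): the individual-difference
# measure that Vogel and Machizawa related to VWM capacity.
print(amp[4] - amp[2])
```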

    UZH Reproducibility Day Demo


    Data


    Experimental task
