343 research outputs found

    Review on the Brownian Dynamics Simulation of Bead-Rod-Spring Models Encountered in Computational Rheology

    Get PDF
    Kinetic theory is a mathematical framework intended to relate directly the most relevant characteristics of the molecular structure to the rheological behavior of the bulk system. In other words, kinetic theory is a micro-to-macro approach for solving the flow of complex fluids that circumvents the use of closure relations and offers a better physical description of the phenomena involved in the flow processes. Cornerstone models in kinetic theory employ beads, rods and springs to mimic the molecular structure of the complex fluid. The generalized bead-rod-spring chain includes the most basic models in kinetic theory: the freely jointed bead-spring chain and the freely jointed bead-rod chain. The configuration of simple coarse-grained models can be represented by an equivalent Fokker-Planck (FP) diffusion equation, which describes the evolution of the configuration distribution function in the physical and configurational spaces. The FP equation can be a complex mathematical object, given its multidimensionality, and solving it explicitly can become a difficult task. Moreover, in some cases, obtaining an equivalent FP equation is not possible given the complexity of the coarse-grained molecular model. Brownian dynamics (BD) can be employed as an alternative, ensemble-based numerical method for approximating the configuration distribution function of a given kinetic-theory model; it avoids deriving and/or solving an equivalent FP equation explicitly. The validity of this discrete approach rests on the mathematical equivalence between a continuous diffusion equation and a stochastic differential equation, as demonstrated by Itô in the 1940s. This paper presents a review of the fundamental issues in the BD simulation of the linear viscoelastic behavior of bead-rod-spring coarse-grained models in dilute solution. In the first part of this work, the BD numerical technique is introduced.
An overview of the mathematical framework of BD and a review of its scope of applications are presented. Subsequently, the links between the rheology of complex fluids, kinetic theory and the BD technique are established in the light of the stochastic nature of the bead-rod-spring models. Finally, the pertinence of the present state-of-the-art review is explained in terms of the increasing interest in stochastic micro-to-macro approaches for solving complex-fluid problems. In the second part of this paper, a detailed description of the BD algorithm used for simulating a small-amplitude oscillatory deformation test is given. Dynamic properties are employed throughout this work to characterise the linear viscoelastic behavior of bead-rod-spring models in dilute solution. In the third and fourth parts of this article, an extensive discussion of the main issues of a BD simulation in the linear viscoelasticity of dilute suspensions is presented in the light of the classical multi-bead-spring chain model and the multi-bead-rod chain model, respectively. Kinematic formulations, integration schemes and expressions for calculating the stress tensor are revised for several classical models: the Rouse and Zimm theories in the case of multi-bead-spring chains, and the Kramers chain and semi-flexible filaments in the case of multi-bead-rod chains. The implemented BD technique is, on the one hand, validated against the known analytical or exact numerical solutions of the equivalent FP equations for those classic kinetic-theory models and, on the other hand, verified through an analysis of the main numerical issues involved in a BD simulation. Finally, the review is closed with some concluding remarks.
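The FP-SDE equivalence invoked in this abstract can be made concrete in a few lines. The sketch below is a minimal Euler-Maruyama Brownian dynamics integration of an ensemble of Hookean dumbbells in nondimensional units (kT = H = drag ζ = 1 are illustrative choices, not settings from the paper), for which the connector vector obeys dQ = -2Q dt + 2 dW and the FP equilibrium solution gives ⟨|Q|²⟩ = 3kT/H = 3:

```python
import numpy as np

# Euler-Maruyama Brownian dynamics for an ensemble of Hookean dumbbells.
# Units chosen so that kT = H = zeta = 1, giving dQ = -2 Q dt + 2 dW.
rng = np.random.default_rng(0)
n_traj, dt, n_steps = 20000, 1e-3, 2000
Q = rng.standard_normal((n_traj, 3))      # start at the analytic equilibrium

for _ in range(n_steps):
    dW = np.sqrt(dt) * rng.standard_normal(Q.shape)
    Q += -2.0 * Q * dt + 2.0 * dW         # drift: spring force; noise: Brownian kicks

# The ensemble average <|Q|^2> approximates the second moment of the FP
# equilibrium distribution, 3 kT/H = 3; the Kramers stress follows from <Q Q>.
msq = np.mean(np.sum(Q**2, axis=1))
print(msq)
```

The same scheme extends to multi-bead chains by stacking connector vectors and adding the spring or constraint forces of the chosen model.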

    Fiber consistency measures on brain tracts from digital streamline, stochastic and global tractography

    Get PDF
    Tractography is the process used to estimate the structure of the nerve fibres inside the brain in vivo from Magnetic Resonance (MR) data. Several tractography methods exist, generally divided into local and global ones. The former attempt to reconstruct each fibre separately, whereas the latter attempt to reconstruct all the neuronal structures at once, searching for the configuration that best fits the given data. Such global methods have been shown to be more accurate and reliable than local tractography methods on synthetic data. However, to date there are no studies that define the relationship between the MR acquisition parameters and the results of stochastic or global tractography on real data. This Master's thesis aims to show the influence of certain acquisition parameters, such as the diffusion factor of the acquisition sequences, the voxel spacing or the number of gradients, on the variability of the tractographies obtained. Teoría de la Señal, Comunicaciones e Ingeniería Telemática. Máster en Investigación en Tecnologías de la Información y las Comunicaciones.

    New tractography methods based on parametric models of white matter fibre dispersion

    Get PDF
    Diffusion weighted magnetic resonance imaging (DW-MRI) is a powerful imaging technique that can probe the complex structure of the body, revealing structural trends which exist at scales far below the voxel resolution. Tractography utilises the information derived from DW-MRI to examine the structure of white matter. Using information derived from DW-MRI, tractography can estimate connectivity between distinct, functional cortical and sub-cortical regions of grey matter. Understanding how separate functional regions of the brain are connected as part of a network is key to understanding how the brain works. Tractography has been used to delineate many known white matter structures and has also revealed structures not fully understood from anatomy owing to the limitations of histological examination. However, tractography still has many shortcomings: there are many anatomical features for which tractography algorithms are known to fail, leading to discrepancies between known anatomy and tractography results. With the aim of approaching a complete picture of the human connectome via tractography, we seek to address the shortcomings of current tractography techniques by exploiting new advances in the modelling techniques used in DW-MRI, which provide a more accurate representation of the underlying white matter anatomy. This thesis introduces a methodology for fully utilising new tissue models in DW-MRI to improve tractography. It is known from histology that there are regions of white matter where fibres disperse or curve rapidly at length scales below the DW-MRI voxel resolution. One area where dispersion is particularly prominent is the corona radiata. New DW-MRI models capture dispersion using specialised parametric probability distributions. We present novel tractography algorithms that use these parametric models of dispersion to improve connectivity estimation in areas of dispersing fibres.
We first present an algorithm utilising the new parametric models of dispersion for tractography in a simple Bayesian framework. We then present an extension to this algorithm which introduces a framework to pool information from multiple voxels in the neighbourhood surrounding the tract in order to better estimate connectivity, introducing the new concept of the neighbourhood-informed orientation distribution function (NI-ODF). Specifically, using neighbourhood exploration we address the ambiguity arising in 'fanning polarity'. In regions of dispersing fibres, the antipodal symmetry inherent in DW-MRI makes it impossible to resolve the polarity of a dispersing fibre configuration from a local voxel-wise model in isolation; by pooling information from neighbouring voxels, we show that this issue can be addressed. We evaluate the newly proposed tractography methods using synthetic phantoms simulating canonical fibre configurations and validate their ability to effectively navigate regions of dispersing fibres and resolve fanning polarity. We then validate that the algorithms perform effectively on real in vivo data, using DW-MRI data from 5 healthy subjects. We show that by utilising models of dispersion, we recover a wider range of connectivity compared to other standard algorithms when tracking through an area of the brain known to have significant white matter fibre dispersion - the corona radiata. We then examine the impact of the new algorithm on global connectivity estimates in the brain. We find that whole-brain connectivity networks derived using the new tractography method feature strong connectivity between frontal lobe regions. This is in contrast to networks derived using competing tractography methods which do not account for sub-voxel fibre dispersion.
We also compare thalamo-cortical connectivity estimated using the newly proposed tractography method with that obtained from a competing tractography method, finding that the recovered connectivity profiles are largely similar, with some differences in thalamo-cortical connections to regions of the frontal lobe. The results suggest that fibre dispersion is an important structural feature to model in the basis of a tractography algorithm, as it has a strong effect on connectivity estimation.
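The fanning-polarity ambiguity described in this abstract admits a compact illustration. The snippet below is a hypothetical, deliberately simplified stand-in for neighbourhood pooling (it is not the thesis' NI-ODF machinery): a voxel-wise orientation estimate is defined only up to sign, and the sign is fixed by agreement with the mean orientation of the neighbouring voxels:

```python
import numpy as np

# A voxel-wise model returns an orientation only up to sign (v and -v are
# equivalent under the antipodal symmetry of DW-MRI). One simple way to
# disambiguate, in the spirit of pooling neighbourhood information, is to
# pick the sign that best agrees with the mean neighbouring orientation.
# Function name and data are illustrative assumptions.
def resolve_polarity(axis, neighbour_dirs):
    """Return +axis or -axis, whichever aligns with the neighbourhood mean."""
    mean_dir = np.mean(neighbour_dirs, axis=0)
    return axis if np.dot(axis, mean_dir) >= 0 else -axis

# Neighbours mostly point along +z; the local axis estimate came out flipped.
neighbours = np.array([[0.1, 0.0, 0.99], [-0.1, 0.1, 0.98], [0.0, -0.1, 0.99]])
local_axis = np.array([0.0, 0.0, -1.0])   # sign-ambiguous voxel-wise estimate
print(resolve_polarity(local_axis, neighbours))   # flips to point along +z
```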

    On noise, uncertainty and inference for computational diffusion MRI

    Get PDF
    Diffusion Magnetic Resonance Imaging (dMRI) has revolutionised the way brain microstructure and connectivity can be studied. Despite its unique potential in mapping the whole brain, biophysical properties are inferred from measurements rather than being directly observed. This indirect mapping from noisy data creates challenges and introduces uncertainty in the estimated properties. Hence, dMRI frameworks capable of dealing with noise and uncertainty quantification are of great importance and are the topic of this thesis. First, we look into approaches for reducing uncertainty by de-noising the dMRI signal. Thermal noise can have detrimental effects for modalities where the information resides in the signal attenuation, such as dMRI, which has inherently low-SNR data. We highlight the dual effect of noise, both in increasing variance and in introducing bias. We then design a framework for evaluating denoising approaches in a principled manner. By setting objective criteria based on what a well-behaved denoising algorithm should offer, we provide a bespoke dataset and a set of evaluations. We demonstrate that common magnitude-based denoising approaches usually reduce noise-related variance in the signal, but do not address the bias effects introduced by the noise floor. Our framework also allows us to better characterise scenarios where denoising can be beneficial (e.g. when done in the complex domain) and can open new opportunities, such as pushing spatio-temporal resolution boundaries. Subsequently, we look into approaches for mapping uncertainty and design two inference frameworks for dMRI models, one using classical Bayesian methods and another using more recent data-driven algorithms. In the first approach, we build upon the univariate random-walk Metropolis-Hastings MCMC, an extensively used sampling method, to sample from the posterior distribution of model parameters given the data.
We devise an efficient adaptive multivariate MCMC scheme, relying upon the assumption that groups of model parameters can be jointly estimated if a proper covariance matrix is defined. In doing so, our algorithm increases the sampling efficiency, while preserving the accuracy and precision of estimates. We show results using both synthetic and in-vivo dMRI data. In the second approach, we resort to Simulation-Based Inference (SBI), a data-driven approach that avoids the need for iterative model inversions. This is achieved by using neural density estimators to learn the inverse mapping from the forward generative process (simulations) to the parameters of interest that have generated those simulations. Addressing the problem via learning approaches offers the opportunity to achieve inference amortisation, boosting efficiency by avoiding the necessity of repeating the inference process for each new unseen dataset. It also allows inversion of forward processes (i.e. a series of processing steps) rather than only models. We explore different neural network architectures to perform conditional density estimation of the posterior distribution of parameters. Results and comparisons against MCMC suggest speed-ups of 2-3 orders of magnitude in the inference process while maintaining accuracy in the estimates.
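As a rough illustration of the kind of adaptive multivariate scheme this abstract describes (a generic adaptive random-walk Metropolis sketch, not the thesis' algorithm), the snippet below learns a joint proposal covariance from the chain history while sampling a toy correlated two-parameter "posterior":

```python
import numpy as np

# Adaptive multivariate random-walk Metropolis: the proposal covariance is
# re-estimated from the chain so that correlated parameters are proposed
# jointly. The target is a toy correlated 2-D Gaussian standing in for a
# dMRI-model posterior (illustrative only).
rng = np.random.default_rng(1)
target_cov = np.array([[1.0, 0.9], [0.9, 1.0]])
target_prec = np.linalg.inv(target_cov)

def log_post(x):
    return -0.5 * x @ target_prec @ x

n_samples, dim = 20000, 2
scale = 2.38**2 / dim                  # classic optimal-scaling factor
chain = np.zeros((n_samples, dim))
x, lp = chain[0], log_post(chain[0])
prop_cov = np.eye(dim)                 # start with an uninformed proposal

for i in range(1, n_samples):
    if i % 500 == 0 and i > 1000:      # periodically re-learn joint covariance
        prop_cov = np.cov(chain[:i].T) + 1e-6 * np.eye(dim)
    x_new = x + rng.multivariate_normal(np.zeros(dim), scale * prop_cov)
    lp_new = log_post(x_new)
    if np.log(rng.uniform()) < lp_new - lp:   # Metropolis accept/reject
        x, lp = x_new, lp_new
    chain[i] = x

print(np.cov(chain[5000:].T))          # should approach target_cov
```

Because correlated parameters are proposed jointly along the learned covariance, acceptance stays reasonable even when parameters are strongly coupled, which is the efficiency gain the abstract alludes to.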

    Multiscale Methods for Random Composite Materials

    Get PDF
    Simulation of material behaviour is not only a vital tool in accelerating product development and increasing design efficiency but also in advancing our fundamental understanding of materials. While homogeneous, isotropic materials are often simple to simulate, advanced, anisotropic materials pose a more sizeable challenge. In simulating entire composite components, such as a 25m aircraft wing made by stacking several 0.25mm thick plies, finite element models typically exceed millions or even a billion unknowns. This problem is exacerbated by the inclusion of sub-millimetre manufacturing defects for two reasons. Firstly, a finer resolution is required, which makes the problem larger. Secondly, defects introduce randomness. Traditionally, this randomness or uncertainty has been quantified heuristically, since commercial codes are largely unsuccessful in solving problems of this size. This thesis develops a rigorous uncertainty quantification (UQ) framework permitted by a state-of-the-art finite element package, dune-composites, also developed here, designed for but not limited to composite applications. A key feature of this open-source package is a robust, parallel and scalable preconditioner, GenEO, that guarantees constant iteration counts independent of problem size. It boasts near-perfect scaling properties, in both a strong and a weak sense, on over 15,000 cores. It is numerically verified by solving industrially motivated problems containing upwards of 200 million unknowns. Equipped with the capability of solving expensive models, a novel stochastic framework is developed to quantify variability in part performance arising from localised out-of-plane defects. Theoretical part strength is determined for independent samples drawn from a distribution inferred from B-scans of wrinkles.
Supported by the literature, the results indicate a strong dependence between maximum misalignment angle and strength knockdown, based on which an engineering model is presented to allow rapid estimation of residual strength, bypassing expensive simulations. The engineering model itself is built from a large set of simulations of residual strength, each of which is computed using the following two-step approach. First, a novel parametric representation of wrinkles is developed, where the spread of parameters defines the wrinkle distribution. Second, expensive forward models are solved only for independent wrinkles using dune-composites. Besides scalability, the other key feature of dune-composites, the GenEO coarse space, doubles as an excellent multiscale basis, which is exploited to build high-quality reduced-order models that are orders of magnitude smaller. This is important because it enables multiple coarse solves for the cost of one fine solve. In an MCMC framework, where many solves are wasted in arriving at the next independent sample, this is a sought-after quality because it greatly increases the effective sample size for a fixed computational budget, thus providing a route to high-fidelity UQ. This thesis exploits both the new solvers and the multiscale methods developed here to design an efficient Bayesian framework to carry out previously intractable (large-scale) simulations calibrated by experimental data. These new capabilities provide the basis for future work on modelling random heterogeneous materials, while also offering scope for building virtual test programs including nonlinear analyses, all of which can be implemented within a probabilistic setting.
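To make the two-step Monte Carlo workflow concrete, the sketch below draws independent wrinkle samples from an assumed parametric form, a sinusoid under a Gaussian envelope, and computes the maximum misalignment angle that the engineering model correlates with strength knockdown. Both the functional form and the parameter ranges are illustrative assumptions, not the distribution inferred from the B-scans:

```python
import numpy as np

# A common parametric idealisation of an out-of-plane wrinkle:
#   w(x) = a * sin(2*pi*x / wl) * exp(-x^2 / (2*s^2))
# The maximum fibre-misalignment angle, max |atan(dw/dx)|, is the quantity
# correlated with strength knockdown. Values here are illustrative.
def max_misalignment_deg(amp, wavelength, envelope, n=20001):
    x = np.linspace(-5 * envelope, 5 * envelope, n)
    w = amp * np.sin(2 * np.pi * x / wavelength) * np.exp(-x**2 / (2 * envelope**2))
    slope = np.gradient(w, x)              # dw/dx by finite differences
    return np.degrees(np.max(np.abs(np.arctan(slope))))

# Draw independent wrinkle samples from an assumed amplitude distribution and
# collect the misalignment statistic, mimicking the sampling step above.
rng = np.random.default_rng(2)
angles = [max_misalignment_deg(rng.uniform(0.1, 0.5), 10.0, 5.0) for _ in range(200)]
print(np.mean(angles), np.max(angles))
```

In the thesis' workflow, each sampled wrinkle would instead drive an expensive forward strength simulation; the cheap angle statistic here only stands in for that step.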

    Improving the Tractography Pipeline: on Evaluation, Segmentation, and Visualization

    Get PDF
    Recent advances in tractography allow for connectomes to be constructed in vivo. These have applications, for example, in brain tumor surgery and in understanding brain development and diseases. The large size of the data produced by these methods leads to a variety of problems, including how to evaluate tractography outputs, the development of faster processing algorithms for tractography and clustering, and the development of advanced visualization methods for verification and exploration. This thesis presents several advances in these fields. First, an evaluation is presented of the robustness to noise of multiple commonly used tractography algorithms. It employs a Monte–Carlo simulation of measurement noise on a constructed ground-truth dataset. As a result of this evaluation, evidence for the robustness of global tractography is found, and algorithmic sources of uncertainty are identified. The second contribution is a fast clustering algorithm for tractography data based on k–means and vector fields for representing the flow of each cluster. It is demonstrated that this algorithm can handle large tractography datasets due to its linear time and memory complexity, and that it can effectively integrate interrupted fibers that would be rejected as outliers by other algorithms. Furthermore, a visualization for the exploration of structural connectomes is presented. It uses illustrative rendering techniques for efficient presentation of connecting fiber bundles in context in anatomical space. Visual hints are employed to improve the perception of spatial relations. Finally, a visualization method with application to the exploration and verification of probabilistic tractography is presented, which improves on the previously presented Fiber Stippling technique. It is demonstrated that the method is able to show multiple overlapping tracts in context and correctly present crossing fiber configurations.
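The fast-clustering idea, linear-time k-means on fixed-length resamplings of each fiber, can be sketched as follows (a pure-NumPy toy version under assumed data; the thesis' algorithm additionally represents each cluster's flow with a vector field):

```python
import numpy as np

# Resample every streamline to a fixed number of points, flatten to a feature
# vector, and run plain k-means, which is linear in the number of fibers per
# iteration.
def resample(streamline, n=10):
    """Arc-length parameterised resampling to n points."""
    d = np.r_[0, np.cumsum(np.linalg.norm(np.diff(streamline, axis=0), axis=1))]
    t = np.linspace(0, d[-1], n)
    return np.column_stack([np.interp(t, d, streamline[:, i]) for i in range(3)])

def kmeans(X, k, iters=20):
    centers = [X[0]]                       # farthest-point initialisation:
    for _ in range(k - 1):                 # deterministic, well separated
        d2 = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[np.argmax(d2)])
    centers = np.array(centers)
    for _ in range(iters):
        labels = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        centers = np.array([X[labels == j].mean(0) for j in range(k)])
    return labels

# Two synthetic bundles: straight fibers along z vs. along x, with jitter.
rng = np.random.default_rng(3)
bundle = lambda end: [np.linspace([0, 0, 0], end, 30) + rng.normal(0, .05, (30, 3))
                      for _ in range(50)]
fibers = bundle(np.array([0, 0, 10.])) + bundle(np.array([10., 0, 0]))
X = np.array([resample(f).ravel() for f in fibers])   # 100 x 30 feature matrix
labels = kmeans(X, k=2)
print(labels[:50].mean(), labels[50:].mean())   # the two bundles separate
```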

    Time-bin encoding for optical quantum computing

    Get PDF
    Scalability has been a longstanding issue in implementing large-scale photonic experiments for optical quantum computing. Traditional encodings based on the polarisation or spatial degrees of freedom become extremely resource-demanding when the number of modes becomes large, as the need for many nonclassical sources of light and the number of beam splitters required become unfeasible. Alternatively, time-bin encoding paves the way to overcome some of these limitations, as it only requires a single quantum light source and can be scaled to many temporal modes through judicious choice of pulse sequence and delays. Such an apparatus constitutes an important step toward large-scale experiments with low resource consumption. This work focuses on the time-bin encoding implementation. First, we assess its feasibility by thoroughly investigating its performance through numerical simulations under realistic conditions. We identify the critical components of the architecture and find that it can achieve performances comparable to state-of-the-art devices. Moreover, we consider two implementation approaches, in fibre and free space, and enumerate their strengths and weaknesses. Subsequently, we delve into the lab to explore these schemes and the key components involved therein. For the fibre case, we report the first implementation of time-bin encoded Gaussian boson sampling and use the samples obtained from the device to search for dense subgraphs of sizes three and four in a 10-node graph. Finally, we complement the study of the time-bin encoding with two side projects that contribute to the broad spectrum of enabling techniques for quantum information science. First, we demonstrate the ability to perform photon-number resolving measurements with a commercial superconducting nanowire single-photon detector system and apply it to improve the statistics of a heralded single-photon source. 
Second, we demonstrate that by employing a phase-tunable coherent state, we can fully characterise a multimode Gaussian state through the low-order photon statistics alone.

    Modeling, Characterizing and Reconstructing Mesoscale Microstructural Evolution in Particulate Processing and Solid-State Sintering

    Get PDF
    In material science, microstructure plays a key role in determining properties, which in turn determine the utility of the material. However, effectively measuring microstructure evolution in real time remains a challenge. To date, a wide range of advanced experimental techniques have been developed and applied to characterize material microstructure and structural evolution on different length and time scales. Most of these methods can only resolve 2D structural features within a narrow range of length scales, and only for a single snapshot or a series of snapshots. The currently available 3D microstructure characterization techniques are usually destructive and require slicing and polishing the samples each time a picture is taken. Simulation methods, on the other hand, are cheap, sample-free and versatile, free of the physical limitations, such as extreme temperature or pressure, that are prominent issues for experimental methods. Yet the majority of simulation methods are limited to specific circumstances: for example, first-principles computation can only handle several thousand atoms, molecular dynamics can only efficiently simulate a few seconds of evolution of a system with several million particles, and the finite element method can only be used in continuous media. Such limitations make these individual methods insufficient for simulating, with experimental-level accuracy, the macroscopic processes that a material sample undergoes. Therefore, it is highly desirable to develop a framework that integrates different simulation schemes from various scales to model complicated microstructure evolution and the corresponding properties.
Guided by this objective, we have worked towards incorporating a collection of simulation methods, including the finite element method (FEM), cellular automata (CA), kinetic Monte Carlo (kMC), stochastic reconstruction methods and the Discrete Element Method (DEM), to generate an integrated computational materials engineering platform (ICMEP), which could enable us to effectively model microstructure evolution and use the simulated microstructure for subsequent performance analysis. In this thesis, we introduce some cases of building coupled modeling schemes and present preliminary results in solid-state sintering. For example, we use a coupled DEM and kinetic Monte Carlo method to simulate solid-state sintering, and a coupled FEM and cellular automata method to model microstructure evolution during selective laser sintering of a titanium alloy. Current results indicate that joining models from different length and time scales is fruitful in terms of understanding and describing the microstructure evolution of a macroscopic physical process from various perspectives. Doctoral Dissertation, Materials Science and Engineering, 201
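As a flavour of the Monte Carlo building block mentioned above, the sketch below runs a two-dimensional Potts-model Metropolis Monte Carlo, a standard idealisation of grain coarsening often coupled to DEM or FEM in sintering studies; the lattice size, temperature and sweep count are illustrative assumptions, not taken from this work:

```python
import numpy as np

# 2-D Potts-model Monte Carlo for grain coarsening. Each lattice site carries
# a grain orientation in {0..q-1}; the energy is the count of unlike
# neighbours (grain-boundary energy, J = 1), and Metropolis moves relabel
# sites, lowering boundary energy as grains coarsen.
rng = np.random.default_rng(4)
L, q, kT = 24, 8, 0.5                      # lattice size, orientations, temperature
grid = rng.integers(q, size=(L, L))

def boundary_energy(g):
    """Total unlike-neighbour count over the periodic lattice."""
    return int((g != np.roll(g, 1, 0)).sum() + (g != np.roll(g, 1, 1)).sum())

def site_energy(g, i, j, s):
    nbrs = [g[(i - 1) % L, j], g[(i + 1) % L, j], g[i, (j - 1) % L], g[i, (j + 1) % L]]
    return sum(s != n for n in nbrs)

e0 = boundary_energy(grid)
for _ in range(100 * L * L):               # ~100 Metropolis sweeps
    i, j = rng.integers(L, size=2)
    s_new = rng.integers(q)
    dE = site_energy(grid, i, j, s_new) - site_energy(grid, i, j, grid[i, j])
    if dE <= 0 or rng.uniform() < np.exp(-dE / kT):
        grid[i, j] = s_new
print(e0, boundary_energy(grid))           # boundary energy drops as grains coarsen
```

In a coupled scheme of the kind the abstract describes, steps like this would alternate with a DEM or FEM solve that updates the particle packing or temperature field feeding the kMC rates.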