
    Regularization modeling for large-eddy simulation of homogeneous isotropic decaying turbulence

    Inviscid regularization modeling of turbulent flow is investigated. Homogeneous, isotropic, decaying turbulence is simulated at a range of filter widths. A coarse-graining of turbulent flow arises from the direct regularization of the convective nonlinearity in the Navier–Stokes equations. The regularization is translated into its corresponding sub-filter model to close the equations for large-eddy simulation (LES). The accuracy with which primary turbulent flow features are captured by this modeling is investigated for the Leray regularization, the Navier–Stokes-α formulation (NS-α), the simplified Bardina model and a modified Leray approach. On a PDE level, each regularization principle is known to possess a unique, strong solution with known regularity properties. When used as turbulence closure for numerical simulations, significant differences between these models are observed. Through a comparison with direct numerical simulation (DNS) results, a detailed assessment of these regularization principles is made. The regularization models retain much of the small-scale variability in the solution. The smaller resolved scales are dominated by the specific sub-filter model adopted. In terms of accuracy, the Leray model is generally closest to the filtered DNS results, the modified Leray model is least accurate, and the simplified Bardina and NS-α models fall in between. This rough ordering is based on the energy decay, the Taylor Reynolds number and the velocity skewness, and on detailed characteristics of the energy dynamics, including spectra of the energy, the energy transfer and the transfer power. At filter widths up to about 10% of the computational domain size, the Leray and NS-α predictions were found to correlate well with the filtered DNS data. Each of the regularization models underestimates the energy decay rate and overestimates the tail of the energy spectrum. For several of the regularization models, the correspondence with unfiltered DNS spectra was often observed to be closer than with filtered DNS spectra.
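The Leray idea of regularizing only the convective nonlinearity can be illustrated in one dimension: the advecting velocity is replaced by a smoothed copy of itself, while the advected field is left unfiltered. A minimal periodic sketch (the spectral Gaussian filter and the filter width `delta` are illustrative choices, not the paper's setup):

```python
import numpy as np

def gaussian_filter_spectral(u, delta, L=2 * np.pi):
    """Filter a periodic 1-D field with a Gaussian kernel of width delta."""
    n = u.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)          # wavenumbers
    return np.real(np.fft.ifft(np.fft.fft(u) * np.exp(-(delta * k) ** 2 / 24)))

def convective_terms(u, delta):
    """Return the unregularized (u du/dx) and Leray (u_bar du/dx) terms."""
    n = u.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=2 * np.pi / n)
    dudx = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
    u_bar = gaussian_filter_spectral(u, delta)          # smoothed advecting velocity
    return u * dudx, u_bar * dudx

x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
u = np.sin(x) + 0.3 * np.sin(7 * x)
ns_term, leray_term = convective_terms(u, delta=0.5)
```

The filtered advecting velocity damps the high-wavenumber content that drives the nonlinear cascade, which is the mechanism behind the coarse-graining described above.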

    Leray and LANS-α modeling of turbulent mixing

    Mathematical regularisation of the nonlinear terms in the Navier-Stokes equations provides a systematic approach to deriving subgrid closures for numerical simulations of turbulent flow. By construction, these subgrid closures imply existence and uniqueness of strong solutions to the corresponding modelled system of equations. We will consider the large eddy interpretation of two such mathematical regularisation principles, i.e., Leray and LANS-α regularisation. The Leray principle introduces a smoothed transport velocity as part of the regularised convective nonlinearity. The LANS-α principle extends the Leray formulation in a natural way in which a filtered Kelvin circulation theorem, incorporating the smoothed transport velocity, is explicitly satisfied. These regularisation principles give rise to implied subgrid closures which will be applied in large eddy simulation of turbulent mixing. Comparison with filtered direct numerical simulation data, and with predictions obtained from popular dynamic eddy-viscosity modelling, shows that these mathematical regularisation models are considerably more accurate, at a lower computational cost. Comment: 42 pages, 12 figures
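The two regularisation principles can be written schematically as follows, with ū the filtered (smoothed) velocity; the precise filter and notation in the paper may differ:

```latex
% Leray: only the transport velocity is smoothed
\partial_t u + (\bar{u}\cdot\nabla)\,u + \nabla p = \nu\,\Delta u,
\qquad \nabla\cdot\bar{u} = 0
% LANS-\alpha: an extra term restores a filtered Kelvin circulation theorem
\partial_t v + (\bar{u}\cdot\nabla)\,v + (\nabla\bar{u})^{T}\!\cdot v + \nabla p
  = \nu\,\Delta v,
\qquad v = \bigl(1-\alpha^{2}\Delta\bigr)\,\bar{u}
```

Setting α → 0 (so v = ū = u) recovers the Navier-Stokes equations in both cases, which is why these models are called regularisations rather than ad hoc closures.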

    Analysis of spatial relationships in three dimensions: tools for the study of nerve cell patterning

    Background: Multiple technologies have been brought to bear on understanding the three-dimensional morphology of individual neurons and glia within the brain, but little progress has been made on understanding the rules controlling cellular patterning. We describe new MATLAB-based software tools, now available to the scientific community, permitting the calculation of spatial statistics associated with 3D point patterns. The analyses are largely derived from the Delaunay tessellation of the field, including the nearest neighbor and Voronoi domain analyses, and from the spatial autocorrelogram. Results: Our tools enable the analysis of the spatial relationship between neurons within the central nervous system in 3D, and permit the modeling of these fields based on lattice-like simulations, and on simulations of minimal-distance spacing rules. Here we demonstrate the utility of our analysis methods to discriminate between two different simulated neuronal populations. Conclusion: Together, these tools can be used to reveal the presence of nerve cell patterning and to model its foundation, in turn informing on the potential developmental mechanisms that govern its establishment. Furthermore, in conjunction with analyses of dendritic morphology, they can be used to determine the degree of dendritic coverage within a volume of tissue exhibited by mature nerve cells.
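As a flavour of the kind of statistic such tools compute, a nearest-neighbour analysis of a simulated 3D point field can be sketched in a few lines. This uses SciPy's k-d tree rather than the authors' MATLAB code, and the regularity index shown (mean nearest-neighbour distance divided by its standard deviation) is a common summary, not necessarily the one implemented there:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
points = rng.uniform(0, 100, size=(500, 3))   # 500 simulated somata in a 100-unit cube

tree = cKDTree(points)
# k=2 because the closest neighbour of each query point is the point itself
dists, _ = tree.query(points, k=2)
nn = dists[:, 1]                              # nearest-neighbour distances

regularity_index = nn.mean() / nn.std()       # higher => more regular mosaic
```

A regular (lattice-like) mosaic yields a higher regularity index than a random one, which is how such statistics discriminate between the simulated populations mentioned above.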

    Numerical simulation of electrocardiograms for full cardiac cycles in healthy and pathological conditions

    This work is dedicated to the simulation of full cycles of the electrical activity of the heart and the corresponding body surface potential. The model is based on a realistic torso and heart anatomy, including ventricles and atria. A distinctive feature of our approach is the modelling of the atria as a surface, which is the kind of data typically provided by medical imaging for thin volumes. The bidomain equations are considered in their usual formulation in the ventricles, and in a surface formulation on the atria. Two ionic models are used: the Courtemanche-Ramirez-Nattel model on the atria, and the "Minimal model for human Ventricular action potentials" (MV) by Bueno-Orovio, Cherry and Fenton in the ventricles. The heart is weakly coupled to the torso by a Robin boundary condition based on a resistor-capacitor transmission condition. Various ECGs are simulated in healthy and pathological conditions (left and right bundle branch blocks, Bachmann's bundle block, Wolff-Parkinson-White syndrome). To assess the numerical ECGs, we use several qualitative and quantitative criteria found in the medical literature. Our simulator can also be used to generate the signals measured by a vest of electrodes. This capability is illustrated at the end of the article.
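A resistor-capacitor transmission condition of the kind mentioned above can be sketched schematically as a Robin condition on the heart-torso interface. Here u_e and u_T are the extracellular and torso potentials, σ_T the torso conductivity, and R_p, C_p lumped interface resistance and capacitance; this is one plausible form for illustration, not necessarily the exact condition used in the paper:

```latex
% current through the interface balances a lumped RC element
\sigma_T\,\nabla u_T \cdot n
  \;=\; \frac{1}{R_p}\,\bigl(u_e - u_T\bigr)
  \;+\; C_p\,\partial_t\bigl(u_e - u_T\bigr)
```

The limit R_p → 0 (with C_p = 0) recovers the usual continuity condition u_e = u_T of a fully coupled heart-torso model, so the RC condition interpolates between strong coupling and an isolated heart.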

    Quantitative expansion microscopy for the characterization of the spectrin periodic skeleton of axons using fluorescence microscopy

    Fluorescent nanoscopy approaches have been used to characterize the periodic organization of actin, spectrin and associated proteins in neuronal axons and dendrites. This membrane-associated periodic skeleton (MPS) is conserved across animals, suggesting it is a fundamental component of neuronal extensions. The nanoscale architecture of the arrangement (190 nm) is below the resolution limit of conventional fluorescent microscopy. Fluorescent nanoscopy, on the other hand, requires costly equipment and special analysis routines, which remain inaccessible to most research groups. This report aims to resolve this issue by using protein-retention expansion microscopy (pro-ExM) to reveal the MPS of axons. ExM uses reagents and equipment that are readily accessible in most neurobiology laboratories. We first explore means to accurately estimate the expansion factors of protein structures within cells. We then describe the protocol that produces an expanded specimen that can be examined with any fluorescent microscope, allowing quantitative nanoscale characterization of the MPS. We validate ExM results by direct comparison to stimulated emission depletion (STED) nanoscopy. We conclude that ExM facilitates three-dimensional, multicolor and quantitative characterization of the MPS using accessible reagents and conventional fluorescent microscopes.
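Estimating an expansion factor comes down to comparing matched landmark distances before and after gel expansion. A minimal NumPy sketch (the landmark coordinates and the ~4.1x factor are made-up illustration, not data from the study):

```python
import numpy as np

def expansion_factor(pre, post):
    """Estimate an isotropic expansion factor from matched landmark
    coordinates measured before and after expansion.
    pre, post: (n, 2) arrays of xy positions of the same n fiducials."""
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    # use all pairwise distances so the estimate is translation-invariant
    d_pre = np.linalg.norm(pre[:, None] - pre[None, :], axis=-1)
    d_post = np.linalg.norm(post[:, None] - post[None, :], axis=-1)
    mask = ~np.eye(len(pre), dtype=bool)          # exclude zero self-distances
    return np.mean(d_post[mask] / d_pre[mask])

landmarks = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
factor = expansion_factor(landmarks, 4.1 * landmarks + 3.0)  # shifted and scaled
```

Using distance ratios rather than raw coordinates makes the estimate insensitive to translation and rotation of the expanded gel, which is why landmark-based calibration is the usual approach.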

    Optical flow estimation using steered-L1 norm

    Motion is a very important part of understanding the visual picture of the surrounding environment. In image processing it involves the estimation of displacements for image points in an image sequence. In this context, dense optical flow estimation is concerned with the computation of pixel displacements in a sequence of images, and has therefore been used widely in the fields of image processing and computer vision. Much research has been dedicated to enabling accurate and fast motion computation in image sequences. Despite recent advances, there is still room for improvement, and optical flow algorithms still suffer from several issues, such as motion discontinuities, occlusion handling, and robustness to illumination changes. This thesis investigates the topic of optical flow and its applications. It addresses several issues in the computation of dense optical flow and proposes solutions. Specifically, this thesis is divided into two main parts dedicated to two main areas of interest in optical flow. In the first part, image registration using optical flow is investigated. Both local and global optical flow methods have been used for image registration. An image registration approach based on an improved version of the combined local-global method of optical flow computation is proposed. A bilateral filter was used in this optical flow method to improve the edge-preserving performance. It is shown that image registration via this method gives more robust results compared to the local and global optical flow methods previously investigated. The second part of this thesis encompasses the main contribution of this research, which is an improved total variation L1 norm. A smoothness term is used in the optical flow energy function to regularise this function. The L1 norm is a plausible choice for such a term because of its performance in preserving edges; however, this term is known to be isotropic and hence decreases the penalisation near motion boundaries in all directions. The proposed improved L1 smoothness term (termed here the steered-L1 norm) demonstrates similar performance across motion boundaries but improves the penalisation performance along such boundaries.
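The contrast between an isotropic L1 smoothness penalty and a direction-aware one can be sketched as follows. The weighting scheme here is purely illustrative and is not the thesis' steered-L1 formulation:

```python
import numpy as np

def tv_l1_penalty(u):
    """Isotropic L1 smoothness: penalises flow gradients equally in all directions."""
    ux, uy = np.gradient(u)
    return np.abs(ux).sum() + np.abs(uy).sum()

def steered_l1_penalty(u, img):
    """Direction-aware L1 penalty: reduces the penalty on flow variation
    across strong image edges while keeping full penalisation along them."""
    gx, gy = np.gradient(img)
    mag = np.hypot(gx, gy) + 1e-8
    nx, ny = gx / mag, gy / mag              # unit image-gradient direction
    ux, uy = np.gradient(u)
    across = np.abs(ux * nx + uy * ny)       # flow variation across the edge
    along = np.abs(-ux * ny + uy * nx)       # flow variation along the edge
    w = np.exp(-mag)                         # illustrative edge-dependent weight
    return (w * across + along).sum()

# a flow field with a step discontinuity aligned with an image edge
flow = np.outer(np.ones(16), np.r_[np.zeros(8), np.ones(8)])
image = flow.copy()
```

On this edge-aligned step the steered penalty is smaller than the isotropic one, so a minimiser is less inclined to smooth the flow across the motion boundary.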

    Stochastic simulation and spatial statistics of large datasets using parallel computing

    Lattice models are a way of representing spatial locations in a grid where each cell is in a certain state and evolves according to transition rules and rates dependent on a surrounding neighbourhood. These models are capable of describing many phenomena such as the simulation and growth of a forest fire front. These spatial simulation models as well as spatial descriptive statistics such as Ripley's K-function have wide applicability in spatial statistics but in general do not scale well for large datasets. Parallel computing (high performance computing) is one solution that can provide limited scalability to these applications. This is done using the message passing interface (MPI) framework implemented in R through the Rmpi package. Other useful techniques in spatial statistics such as point pattern reconstruction and Markov Chain Monte Carlo (MCMC) methods are discussed from a parallel computing perspective as well. In particular, an improved point pattern reconstruction is given and implemented in parallel. Single chain MCMC methods are also examined and improved upon to give faster convergence using parallel computing. Optimizations, and complications that arise from parallelizing existing spatial statistics algorithms are discussed and methods are implemented in an accompanying R package, parspatstat.
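The K-function's outer loop over points is embarrassingly parallel, which is what makes an MPI decomposition work: each rank counts neighbours for its own chunk of points, and the partial counts are summed. A serial sketch of that decomposition (in Python rather than R/Rmpi, and without the edge corrections a real estimator needs):

```python
import numpy as np

def pair_counts(chunk, all_points, r):
    """Neighbour counts for one chunk of points -- the unit of work each
    MPI rank (or worker process) would handle independently."""
    d = np.linalg.norm(chunk[:, None, :] - all_points[None, :, :], axis=-1)
    # exclude self-pairs (zero distance), count neighbours within radius r
    return int(((d > 0) & (d <= r)).sum())

def ripley_k(all_points, r, area, n_chunks=4):
    """Naive (uncorrected) Ripley's K: the outer loop is split into
    independent chunks, mirroring a parallel implementation."""
    n = len(all_points)
    lam = n / area                            # point intensity
    total = sum(pair_counts(chunk, all_points, r)
                for chunk in np.array_split(all_points, n_chunks))
    return total / (lam * n)

rng = np.random.default_rng(1)
pts = rng.uniform(0, 1, size=(200, 2))
k_est = ripley_k(pts, r=0.1, area=1.0)        # CSR expectation is pi * r^2
```

Because each chunk's pair counts are independent, the chunked result is identical to the single-process one, and only the final sum needs communication (an MPI reduce).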

    Variational Methods for the Modelling of Inelastic Solids

    This workshop brought together two communities working on the same topic from different perspectives. It strengthened the exchange of ideas between experts from both mathematics and mechanics working on a wide range of questions related to the understanding and the prediction of processes in solids. Common tools in the analysis include the development of models within the broad framework of continuum mechanics, calculus of variations, nonlinear partial differential equations, nonlinear functional analysis, Gamma convergence, dimension reduction, homogenization, discretization methods and numerical simulations. The applications of these theories include, but are not limited to: nonlinear models in plasticity; microscopic theories at different scales; the role of pattern-forming processes; effective theories and effects in singular structures like blisters or folding patterns in thin sheets; passage from atomistic or discrete models to continuum models; interaction of scales; and passage from the consideration of one specific time step to the continuous evolution of the system, including the evolution of appropriate measures of the internal structure of the system.

    Variational methods for shape and image registrations.

    Estimation and analysis of deformation, either rigid or non-rigid, is an active area of research in various medical imaging and computer vision applications. Its importance stems from the inherent inter- and intra-variability in biological and biomedical object shapes and from the dynamic nature of the scenes usually dealt with in computer vision research. For instance, quantifying the growth of a tumor, recognizing a person's face, tracking a facial expression, or retrieving an object inside a database requires the estimation of some sort of motion or deformation undergone by the object of interest. To solve these and other similar problems, registration comes into play. This is the process of bringing into correspondence two or more data sets. Depending on the application at hand, these data sets can be, for instance, gray scale/color images or objects' outlines. In the latter case, one talks about shape registration, while in the former case, one talks about image/volume registration. In some situations, combinations of different types of data can be used complementarily to establish point correspondences. One of the most important image analysis tools that greatly benefits from the process of registration, and which will be addressed in this dissertation, is image segmentation. This process consists of localizing objects in images. Several challenges are encountered in image segmentation, including noise, gray scale inhomogeneities, and occlusions. To cope with such issues, the shape information is often incorporated as a statistical model into the segmentation process. Building such statistical models requires a good and accurate shape alignment approach. In addition, segmenting anatomical structures can be accurately solved through the registration of the input data set with a predefined anatomical atlas. Variational approaches for shape/image registration and segmentation have received huge interest in the past few years.
Unlike traditional discrete approaches, the variational methods are based on continuous modelling of the input data through the use of Partial Differential Equations (PDEs). This brings into benefit the extensive literature on theory and numerical methods proposed to solve PDEs. This dissertation addresses the registration problem from a variational point of view, with more focus on shape registration. First, a novel variational framework for global-to-local shape registration is proposed. The input shapes are implicitly represented through their signed distance maps. A new Sum-of-Squared-Differences (SSD) criterion, which measures the disparity between the implicit representations of the input shapes, is introduced to recover the global alignment parameters. This new criterion has advantages over some existing ones in accurately handling scale variations. In addition, the proposed alignment model is less expensive computationally. Complementary to the global registration field, the local deformation field is explicitly established between the two globally aligned shapes, by minimizing a new energy functional. This functional incrementally and simultaneously updates the displacement field while keeping the corresponding implicit representation of the globally warped source shape as close to a signed distance function as possible. This is done under some regularization constraints that enforce the smoothness of the recovered deformations. The overall process leads to a coupled set of equations that are simultaneously solved through a gradient descent scheme. Several applications, where the developed tools play a major role, are addressed throughout this dissertation. For instance, some insight is given as to how one can solve the challenging problem of three dimensional face recognition in the presence of facial expressions. Statistical modelling of shapes will be presented as a way of benefiting from the proposed shape registration framework. 
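The global-alignment step can be illustrated on toy binary shapes: build signed distance maps, then search for the alignment parameters that minimize the SSD between them. A minimal sketch using SciPy distance transforms and a brute-force search over integer x-translations only; the dissertation's actual criterion and optimization are richer:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    """Signed distance map of a binary shape: negative inside, positive outside."""
    inside = distance_transform_edt(mask)
    outside = distance_transform_edt(~mask)
    return outside - inside

def ssd(phi1, phi2):
    """Sum-of-Squared-Differences between two implicit shape representations."""
    return float(((phi1 - phi2) ** 2).sum())

def disk(n, cx, cy, r):
    """Binary disk of radius r centred at (cx, cy) on an n x n grid."""
    y, x = np.mgrid[:n, :n]
    return (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2

fixed = signed_distance(disk(64, 32, 32, 10))
# brute-force search over x-translation for the best global alignment
best_dx = min(range(-5, 6),
              key=lambda dx: ssd(fixed, signed_distance(disk(64, 32 + dx, 32, 10))))
```

Because the signed distance map varies smoothly away from the shape boundary, the SSD between such maps gives a well-behaved objective for gradient-based alignment, unlike an SSD computed directly on binary masks.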
Second, this dissertation will visit th