47 research outputs found

    Well-posedness of a mathematical model for Alzheimer's disease

    We consider the existence and uniqueness of solutions of an initial-boundary value problem for a coupled system of PDEs arising in a model for Alzheimer's disease. Apart from reaction-diffusion equations, the system contains a transport equation in a bounded interval for a probability measure which is related to the malfunctioning of neurons. The main ingredients of the existence proof are: the method of characteristics for the transport equation, a priori estimates for solutions of the reaction-diffusion equations, a variant of the classical contraction theorem, and the Wasserstein metric for the part concerning the probability measure. We stress that none of the hypotheses on the data are mathematical artefacts: all are naturally imposed by modelling considerations, and in particular the use of a probability measure is natural from the modelling point of view. The nontrivial part of the analysis is the suitable combination of the various mathematical tools, which is not quite routine and requires various technical adjustments.
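    The Wasserstein metric is the tool that handles the measure-valued unknown here. As an illustrative aside (not taken from the paper), the 1-Wasserstein distance between two one-dimensional empirical distributions can be computed with SciPy; the two sample arrays below are hypothetical stand-ins for the neuron-malfunction distribution at two stages of the disease.

        import numpy as np
        from scipy.stats import wasserstein_distance

        # Hypothetical empirical samples of the neuron-malfunction parameter,
        # valued in the bounded interval [0, 1] as in the model.
        rng = np.random.default_rng(0)
        f_healthy = rng.beta(2.0, 8.0, size=1000)   # mass concentrated near 0
        f_impaired = rng.beta(5.0, 3.0, size=1000)  # mass shifted toward 1

        # 1-Wasserstein distance between the two empirical measures.
        print(f"W1 distance: {wasserstein_distance(f_healthy, f_impaired):.4f}")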

    The Synergistic Interplay of Amyloid Beta and Tau Proteins in Alzheimer's Disease: A Compartmental Mathematical Model

    The purpose of this Note is to present and discuss some mathematical results concerning a compartmental model for the synergistic interplay of Amyloid beta and tau proteins in the onset and progression of Alzheimer's disease. We model the possible mechanisms of interaction between the two proteins by a system of Smoluchowski equations for the Amyloid beta concentration, an evolution equation for the dynamics of misfolded tau, and a kinetic-type transport equation for a function taking into account the degree of malfunctioning of neurons. We provide a well-posedness result for our system of equations. This work extends results obtained in collaboration with M. Bertsch, B. Franchi and A. Tosin.
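    For orientation (in our notation, not necessarily the authors'), the Smoluchowski part of such models evolves the concentrations u_j of Amyloid beta polymers of length j by binary coagulation:

        \partial_t u_j \;=\; \frac{1}{2}\sum_{i=1}^{j-1} a_{i,j-i}\, u_i\, u_{j-i} \;-\; u_j \sum_{i \ge 1} a_{j,i}\, u_i, \qquad j \ge 1,

    where the a_{i,j} are coagulation rates; spatial versions add a diffusion term d_j \Delta u_j, while a compartmental model like the one above couples such systems across compartments. This is then coupled with the evolution equation for misfolded tau and the transport equation for the degree of neuronal malfunctioning.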

    Asymptotic and blow-up dynamics of Keller-Segel chemotaxis equations in scale of Banach spaces.

    Doctor of Philosophy in Mathematics, Statistics and Computer Science. University of KwaZulu-Natal, Durban, 2015. Abstract available in PDF file.

    Mathematical Modeling of Prion Disease

    The prion hypothesis, once a heretical violation of the central dogma of molecular biology, has become an accepted mechanism used to explain a host of progressive neurodegenerative diseases in mammals and heritable phenotypes in yeast. From the beginning, mathematical models have been an essential tool in studying prion and other protein misfolding/aggregation processes. In this work, we review some of the major mathematical studies that have contributed to our understanding of prion disease and discuss trends in current and future studies.
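    One concrete example from this literature is the classical nucleated polymerization model of Masel, Jansen and Nowak, written here in its standard form for the monomer concentration x, the number of polymers y, and the total polymerized mass z:

        \dot{x} = \lambda - d\,x - \beta\,x\,y + n(n-1)\,b\,y,
        \dot{y} = b\,z - a\,y - (2n-1)\,b\,y,
        \dot{z} = \beta\,x\,y - a\,z - n(n-1)\,b\,y.

    Here \lambda is monomer production, d and a are degradation rates, \beta is the elongation rate, b is the fragmentation rate per bond, and n is the minimal stable nucleus size; fragments shorter than n disintegrate back into monomers, which is the origin of the n(n-1)\,b\,y terms.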

    Doctor of Philosophy

    An important aspect of medical research is the understanding of anatomy and its relation to function in the human body. For instance, identifying changes in the brain associated with cognitive decline helps in understanding the process of aging and age-related neurological disorders. The field of computational anatomy provides a rich mathematical setting for statistical analysis of complex geometrical structures seen in 3D medical images. At its core, computational anatomy is based on the representation of anatomical shape and its variability as elements of a nonflat manifold of diffeomorphisms with an associated Riemannian structure. Although such manifolds effectively represent natural biological variability, intrinsic methods of statistical analysis within these spaces remain largely undeveloped. This dissertation contributes two critical missing pieces for statistics on diffeomorphisms: (1) multivariate regression models for cross-sectional study of shapes, and (2) generalization of classical Euclidean, mixed-effects models to manifolds for longitudinal studies. These models are based on the principle that statistics on manifold-valued information must respect the intrinsic geometry of that space. The multivariate regression methods provide statistical descriptors of the relationships of anatomy with clinical indicators. The novel theory of hierarchical geodesic models (HGMs) is developed as a natural generalization of hierarchical linear models (HLMs) to describe longitudinal data on curved manifolds. Using a hierarchy of geodesics, the HGMs address the challenge of modeling shape data with the unbalanced designs that typically arise in follow-up medical studies. More generally, this research establishes a mathematical foundation to study the dynamics of changes in anatomy and the associated clinical progression over time. This dissertation also provides efficient algorithms that utilize state-of-the-art high performance computing architectures to solve these models on large-scale, longitudinal imaging data. These manifold-based methods are applied to predictive modeling of neurological disorders such as Alzheimer's disease. Overall, this dissertation enables clinicians and researchers to better utilize the structural information available in medical images.
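    To make the "respect the intrinsic geometry" principle concrete, here is a minimal sketch (ours, far simpler than the dissertation's HGM machinery) of the most basic intrinsic statistic, the Fréchet mean, computed on the unit sphere by iterating its exponential and logarithm maps; the data are hypothetical unit vectors.

        import numpy as np

        def sphere_exp(p, v):
            """Exponential map: follow the geodesic from p along tangent vector v."""
            norm_v = np.linalg.norm(v)
            if norm_v < 1e-12:
                return p
            return np.cos(norm_v) * p + np.sin(norm_v) * (v / norm_v)

        def sphere_log(p, q):
            """Logarithm map: tangent vector at p pointing along the geodesic to q."""
            cos_theta = np.clip(np.dot(p, q), -1.0, 1.0)
            theta = np.arccos(cos_theta)
            if theta < 1e-12:
                return np.zeros_like(p)
            w = q - cos_theta * p               # component of q orthogonal to p
            return theta * w / np.linalg.norm(w)

        def frechet_mean(points, n_iter=50):
            """Minimize the sum of squared geodesic distances by gradient descent."""
            mean = points[0].copy()
            for _ in range(n_iter):
                avg = np.mean([sphere_log(mean, q) for q in points], axis=0)
                mean = sphere_exp(mean, avg)
            return mean

        # Hypothetical data: noisy directions clustered around the north pole.
        rng = np.random.default_rng(1)
        pts = rng.normal([0.0, 0.0, 1.0], 0.1, size=(20, 3))
        pts /= np.linalg.norm(pts, axis=1, keepdims=True)
        print(frechet_mean(pts))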

    On a stochastic particle model of the Keller-Segel equation and its macroscopic limit

    The aim of this thesis is to derive the two-dimensional Keller-Segel equation for chemotaxis from a stochastic system of N interacting particles in the situation in which bounded solutions are guaranteed to exist globally in time, that is, in the case of subcritical chemosensitivity.
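    A minimal sketch of such a particle system (our illustration; the regularization parameter eps stands in for whatever cutoff the thesis uses): N particles in the plane follow the Euler-Maruyama discretization of dX_i = (chi/N) sum_{j != i} K_eps(X_i - X_j) dt + sqrt(2) dB_i with an attractive Coulomb-type kernel.

        import numpy as np

        def simulate_ks_particles(N=200, chi=1.0, eps=0.05, dt=1e-3, steps=2000, seed=0):
            """Euler-Maruyama simulation of a regularized 2D Keller-Segel particle system."""
            rng = np.random.default_rng(seed)
            X = rng.normal(0.0, 1.0, size=(N, 2))         # initial positions
            for _ in range(steps):
                diff = X[None, :, :] - X[:, None, :]       # diff[i, j] = X_j - X_i
                dist2 = np.sum(diff**2, axis=-1) + eps**2  # regularized squared distances
                np.fill_diagonal(dist2, np.inf)            # no self-interaction
                drift = (chi / (2 * np.pi * N)) * np.sum(diff / dist2[..., None], axis=1)
                X += drift * dt + np.sqrt(2 * dt) * rng.normal(size=(N, 2))
            return X

        X_final = simulate_ks_particles()
        print("mean |X|:", np.linalg.norm(X_final, axis=1).mean())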

    Magnetoencephalography

    This is a practical book on MEG that covers a wide range of topics. The book begins with a series of reviews on the use of MEG for clinical applications and for the study of cognitive functions in various diseases, with one chapter focusing specifically on studies of memory with MEG. Subsequent sections describe source localization issues, the use of beamformer and dipole source methods, phase-based analyses, and a step-by-step guide to using dipoles for epilepsy spike analyses. The book ends with a section describing new innovations in MEG systems, namely an on-line, real-time MEG data acquisition system, novel applications for MEG research, and a proposal for a helium re-circulation system. With such breadth of topics, there will be a chapter of interest to every MEG researcher or clinician.

    Using state-of-the-art inverse problem techniques to develop reconstruction methods for fluorescence diffuse optical tomography

    An inverse problem is a mathematical framework used to obtain information about a physical object or system from observed measurements. It typically arises when we wish to recover internal properties from external measurements, and it has many applications in science and technology, such as medical imaging, geophysical imaging, image deblurring, image inpainting, electromagnetic scattering, acoustics, machine learning, mathematical finance, and physics. The main goal of this PhD thesis was to use state-of-the-art inverse problem techniques to develop modern reconstruction methods for solving the fluorescence diffuse optical tomography (fDOT) problem. fDOT is a molecular imaging technique that enables the quantification of tomographic (3D) bio-distributions of fluorescent tracers in small animals. One of the main difficulties in fDOT is that the high absorption and scattering of biological tissues lead to an ill-posed inverse problem, with multiple non-unique and unstable solutions; the problem therefore requires regularization to achieve a stable solution. The so-called "non-contact" fDOT scanners use CCD cameras as virtual detectors instead of optic fibers in contact with the sample. These non-contact systems generate huge datasets that lead to a computationally demanding inverse problem, so techniques that minimize the size of the acquired datasets without losing image performance are highly desirable. The first part of this thesis addresses the optimization of experimental setups to reduce the dataset size using l₂-based regularization techniques; the second part, building on the success of l₁ regularization techniques for denoising and image reconstruction, is devoted to advanced regularization using l₁-based techniques; and the last part introduces compressed sensing (CS) theory, which enables a further reduction of the acquired dataset size. The main contributions of this thesis are:
    1) A feasibility study (to our knowledge, the first for fDOT) of the automatic U-curve method to select the regularization parameter (l₂-norm). The U-curve method proved to be an excellent automatic method for dealing with large datasets because it reduces the search for the regularization parameter to a suitable interval (see the sketch at the end of this abstract).
    2) With an automatic method in place to choose the l₂ regularization parameter for fDOT, singular value analysis (SVA) of the fDOT forward matrix was used to maximize the information content of the acquired measurements and minimize the computational cost. It was shown for the first time that large meshes can be reduced in the z direction without any loss in imaging performance, while reducing computational times and memory requirements.
    3) For l₁-based regularization techniques, we presented a novel iterative algorithm, ART-SB, that combines the strength of the algebraic reconstruction technique (ART) in handling large datasets with Split Bregman (SB) denoising, an approach shown to be optimal for total variation (TV) denoising. SB has been implemented in a cost-efficient way to handle large datasets, which makes ART-SB more computationally efficient than previous TV-based reconstruction algorithms and most splitting approaches.
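    To make contribution 3 concrete, here is a minimal generic sketch of the ART half of ART-SB: a classical Kaczmarz sweep that touches one matrix row at a time, which is what makes ART suitable for large fDOT datasets. The matrix, data, and the commented-out denoising hook are hypothetical stand-ins, not the thesis implementation.

        import numpy as np

        def art_sweep(A, b, x, relax=1.0):
            """One ART (Kaczmarz) sweep: project x onto each row's hyperplane in turn."""
            for i in range(A.shape[0]):
                a_i = A[i]
                x = x + relax * (b[i] - a_i @ x) / (a_i @ a_i) * a_i
            return x

        # Hypothetical usage: alternate ART sweeps with a denoising step, as in ART-SB.
        rng = np.random.default_rng(3)
        A = rng.normal(size=(120, 60))
        x_true = rng.normal(size=60)
        b = A @ x_true
        x = np.zeros(60)
        for _ in range(20):
            x = art_sweep(A, b, x)
            # x = split_bregman_tv_denoise(x)  # placeholder for the SB step (not shown)
        print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))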
    4) Finally, we proposed a novel approach to CS for fDOT, named the SB-SVA iterative method. This approach is based on the analysis-based co-sparse representation model, in which an analysis operator multiplies the image, transforming it into a sparse one. Taking advantage of the CS-SB algorithm, we restrict the solution reached at each CS-SB iteration to a space where the singular values of the forward matrix and the sparsity structure combine in a beneficial manner, that is, where very small singular values are not associated with nonzero entries of the sparse solution. In this way, SB-SVA indirectly enforces the well-conditioning of the forward matrix while designing (learning) the analysis operator and finding the solution. Furthermore, SB-SVA outperforms the CS-SB algorithm in terms of image quality and needs fewer acquisition parameters. All the approaches presented here have been validated with experimental data.
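    As promised above, here is an illustrative sketch of the U-curve selection of the l₂ (Tikhonov) regularization parameter from contribution 1: it evaluates U(lam) = 1/||A x_lam - b||^2 + 1/||x_lam||^2 over a grid of parameters and takes the minimizer. The toy ill-conditioned matrix stands in for the fDOT forward matrix; this is the general method, not the thesis code.

        import numpy as np

        def tikhonov_solve(Um, s, Vt, b, lam):
            """x_lam = argmin ||Ax - b||^2 + lam^2 ||x||^2, via the SVD A = Um @ diag(s) @ Vt."""
            f = s / (s**2 + lam**2)                # filtered inverse singular values
            return Vt.T @ (f * (Um.T @ b))

        def u_curve_lambda(A, b, lambdas):
            """Return the lambda minimizing U(lam) = 1/||A x_lam - b||^2 + 1/||x_lam||^2."""
            Um, s, Vt = np.linalg.svd(A, full_matrices=False)
            scores = []
            for lam in lambdas:
                x = tikhonov_solve(Um, s, Vt, b, lam)
                scores.append(1.0 / np.linalg.norm(A @ x - b)**2
                              + 1.0 / np.linalg.norm(x)**2)
            return lambdas[int(np.argmin(scores))]

        # Hypothetical ill-conditioned toy problem standing in for the fDOT matrix.
        rng = np.random.default_rng(2)
        A = rng.normal(size=(100, 80)) @ np.diag(np.logspace(0, -6, 80))
        b = A @ rng.normal(size=80) + 1e-3 * rng.normal(size=100)
        print("selected lambda:", u_curve_lambda(A, b, np.logspace(-8, 1, 50)))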