A comparative study of altered hemodynamics in iliac vein compression syndrome
Introduction: Iliac vein compression syndrome (IVCS) is present in over 20% of the population and is associated with left leg pain, swelling, and thrombosis. IVCS symptoms are thought to be induced by altered pelvic hemodynamics; however, the hemodynamic differences between IVCS and healthy patients remain poorly characterized. To elucidate those differences, we carried out a patient-specific, computational modeling comparative study.
Methods: Computed tomography and ultrasound velocity and area data were used to build and validate computational models for a cohort of IVCS (N = 4, Subject group) and control (N = 4, Control group) patients. Flow, cross-sectional area, and shear rate were compared between the right common iliac vein (RCIV) and left common iliac vein (LCIV) for each group and between the Subject and Control groups for the same vessel.
Results: For the IVCS patients, LCIV mean shear rate was higher than RCIV mean shear rate (550 ± 103 s⁻¹ vs. 113 ± 48 s⁻¹, p = 0.0009). Furthermore, LCIV mean shear rate was higher in the Subject group than in the Control group (550 ± 103 s⁻¹ vs. 75 ± 37 s⁻¹, p = 0.0001). Lastly, the LCIV/RCIV shear rate ratio was 4.6 times greater in the Subject group than in the Control group (6.56 ± 0.9 vs. 1.43 ± 0.6, p = 0.00008).
Discussion: Our analyses revealed that IVCS patients have elevated shear rates, which may explain a higher thrombosis risk and suggest that their thrombus initiation process may share aspects of arterial thrombosis. We have identified hemodynamic metrics that revealed profound differences between IVCS patients and Controls, and between RCIV and LCIV in the IVCS patients. Based on these metrics, we propose that non-invasive measurement of shear rate may aid with stratification of patients with moderate compression, in which treatment is highly variable.
More investigation is needed to assess the prognostic value of shear rate and shear rate ratio as clinical metrics and to understand the mechanisms of thrombus formation in IVCS patients.
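The shear-rate contrast the study reports can be illustrated with a back-of-the-envelope Poiseuille estimate (a simplification, not the patient-specific CFD models used in the study; the flow and area values below are hypothetical):

```python
import math

def wall_shear_rate(flow_ml_s: float, area_cm2: float) -> float:
    """Poiseuille wall shear rate gamma_w = 4*Q/(pi*r^3), in s^-1.

    Assumes a circular lumen and fully developed laminar flow, so the
    equivalent radius is recovered from the measured area A = pi*r^2.
    Inputs: Q in mL/s (= cm^3/s), A in cm^2.
    """
    r = math.sqrt(area_cm2 / math.pi)          # equivalent radius, cm
    return 4.0 * flow_ml_s / (math.pi * r**3)  # s^-1

# Hypothetical values: a compressed LCIV carrying the same flow as the
# RCIV through a quarter of the lumen area.
lciv = wall_shear_rate(flow_ml_s=8.0, area_cm2=0.4)
rciv = wall_shear_rate(flow_ml_s=8.0, area_cm2=1.6)
print(lciv / rciv)  # (1.6/0.4)**1.5 = 8.0
```

Because wall shear rate scales as A^(-3/2) at fixed flow, even moderate compression of the LCIV lumen produces a large shear-rate ratio, consistent with the elevated LCIV/RCIV ratios reported above.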
Understanding and controlling structural distortions underlying superconductivity in lanthanum cuprates
The suppression of superconductivity in layered lanthanum cuprates near x = 1/8 coincides with a structural phase transition from a low-temperature orthorhombic to a low-temperature tetragonal phase. The low-temperature phases are characterised by a static tilt of the CuO6 octahedra away from the layering axis in distinct directions. It remained an open question whether the orthorhombic-to-tetragonal phase transition would only occur in the context of competing electronic orders in the lanthanum cuprates.
This thesis proposes a new approach to studying the orthorhombic-to-tetragonal phase transition using the novel La2MgO4 system. La2MgO4 adopts the layered Ruddlesden-Popper structure of the lanthanum cuprates but lacks the strong electron correlations and octahedral distortions associated with the Jahn-Teller active Cu site. Combining first-principles simulations using density-functional theory with experimental data on La2MgO4, the context in which these structural phases can occur is detailed, outlining the key parameters determining the stability of the phase which suppresses bulk superconductivity. The same sequence of structural phase transitions occurs in La2MgO4 as in La1.875Ba0.125CuO4, and the tetragonal phase is stabilised via steric effects beyond a critical octahedral tilt magnitude. Larger Jahn-Teller distortions favour the orthorhombic phase.
The effect of isotropic and anisotropic pressure on La2MgO4 and La2CuO4 is explored. These results form the basis for a structural mechanism to understand the experimental trends of the bulk superconducting transition temperature under uniaxial pressure. Finally, the justification for the methodology used throughout this thesis to simulate these systems is provided, highlighting that DFT+U accurately describes their atomic and electronic structure.
Peering into the Dark: Investigating dark matter and neutrinos with cosmology and astrophysics
The ΛCDM model of modern cosmology provides a highly accurate description of our universe.
However, it relies on two mysterious components, dark matter and dark energy. The cold dark matter
paradigm does not provide a satisfying description of its particle nature, nor any link to the Standard
Model of particle physics.
I investigate the consequences for cosmological structure formation in models with a coupling
between dark matter and Standard Model neutrinos, as well as probes of primordial black holes as
dark matter.
I examine the impact that such an interaction would have through both linear perturbation theory and
nonlinear N-body simulations. I present limits on the possible interaction strength from cosmic
microwave background, large scale structure, and galaxy population data, as well as forecasts on the
future sensitivity. I provide an analysis of what is necessary to distinguish the cosmological impact of
interacting dark matter from similar effects. Intensity mapping of the 21 cm line of neutral hydrogen at
high redshift using next generation observatories, such as the SKA, would provide the strongest
constraints yet on such interactions, and may be able to distinguish between different scenarios
causing suppressed small scale structure. I also present a novel type of probe of structure formation,
using the cosmological gravitational wave signal of high redshift compact binary mergers to provide
information about structure formation, and thus the behaviour of dark matter. Such observations
would also provide competitive constraints.
Finally, I investigate primordial black holes as an alternative dark matter candidate, presenting an
analysis and framework for the evolution of extended mass populations over cosmological time and
computing the present day gamma ray signal, as well as the allowed local evaporation rate. This is
used to set constraints on the allowed population of low mass primordial black holes, and the
likelihood of witnessing an evaporation.
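For context, the mass scale below which primordial black holes would already have evaporated follows from the textbook Hawking lifetime formula; a minimal sketch (standard result, ignoring greybody and particle degrees-of-freedom factors, not the extended-mass-population framework of the thesis):

```python
import math

# Hawking evaporation lifetime t = 5120*pi*G^2*M^3 / (hbar*c^4).
# Degrees-of-freedom corrections to the emitted spectrum shift the
# result by an O(1) factor and are neglected here.
G    = 6.674e-11   # m^3 kg^-1 s^-2
hbar = 1.0546e-34  # J s
c    = 2.998e8     # m/s

def lifetime(mass_kg: float) -> float:
    """Evaporation time in seconds for a black hole of the given mass."""
    return 5120 * math.pi * G**2 * mass_kg**3 / (hbar * c**4)

def critical_mass(t_s: float) -> float:
    """Mass whose lifetime equals t_s: lighter PBHs have already evaporated."""
    return (t_s * hbar * c**4 / (5120 * math.pi * G**2)) ** (1.0 / 3.0)

T_UNIVERSE = 4.35e17  # s, roughly 13.8 Gyr
print(critical_mass(T_UNIVERSE))  # ~2e11 kg
```

The steep M^3 scaling is why present-day gamma-ray signals and the local evaporation rate constrain only the low-mass end of a PBH population.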
TensorMD: Scalable Tensor-Diagram based Machine Learning Interatomic Potential on Heterogeneous Many-Core Processors
Molecular dynamics simulations have emerged as a potent tool for
investigating the physical properties and kinetic behaviors of materials at the
atomic scale, particularly in extreme conditions. Ab initio accuracy is now
achievable with machine learning based interatomic potentials. With recent
advancements in high-performance computing, highly accurate and large-scale
simulations become feasible. This study introduces TensorMD, a new machine
learning interatomic potential (MLIP) model that integrates physical principles
and tensor diagrams. The tensor formalism provides a more efficient computation
and greater flexibility for use with other scientific codes. Additionally, we
propose several portable optimization strategies and develop a highly
optimized version for the new Sunway supercomputer. Our optimized TensorMD can
achieve unprecedented performance on the new Sunway, enabling simulations of up
to 52 billion atoms with a time-to-solution of 31 ps/step/atom, setting new
records for HPC + AI + MD.
The future of cosmology? A case for CMB spectral distortions
This thesis treats the topic of CMB Spectral Distortions (SDs), which
represent any deviation from a pure black body shape of the CMB energy
spectrum. As such, they can be used to probe the inflationary, expansion and
thermal evolution of the universe both within ΛCDM and beyond it. The
currently missing observation of this rich probe of the universe makes it an
ideal target for future observational campaigns. In fact, while the
ΛCDM signal guarantees a discovery, the sensitivity to a wide variety
of new physics opens the door to an enormous uncharted territory. In light of
these considerations, the thesis opens by reviewing the topic of CMB SDs in a
pedagogical and illustrative fashion, aimed at awakening the interest of the
broader community. This introductory premise sets the stage for the first main
contribution of the thesis to the field of SDs: their implementation in the
Boltzmann solver CLASS and the parameter inference code MontePython. The
CLASS+MontePython pipeline is publicly available and fast, includes all sources
of SDs within ΛCDM and many others beyond it, and allows one to
consistently account for any observational setup. By means of these numerical
tools, the second main contribution of the thesis consists in showcasing the
versatility and competitiveness of SDs for several cosmological models as well
as for a number of different mission designs. Among others, the results cover
features in the primordial power spectrum, primordial gravitational waves,
non-standard dark matter properties, primordial black holes, primordial
magnetic fields and the Hubble tension. Finally, the manuscript is interspersed
with 20 follow-up ideas that naturally extend the work carried out so far,
highlighting how rich in unexplored possibilities the field of CMB SDs still
is. The hope is that these suggestions will become a propeller for further
interesting developments.Comment: PhD thesis. Pedagogical review of theory, experimental status and
numerical tools (CLASS+MontePython) with broad overview of applications.
Includes 20 original follow-up ideas.
LoopTune: Optimizing Tensor Computations with Reinforcement Learning
Advanced compiler technology is crucial for enabling machine learning
applications to run on novel hardware, but traditional compilers fail to
deliver performance, popular auto-tuners have long search times and
expert-optimized libraries introduce unsustainable costs. To address this, we
developed LoopTune, a deep reinforcement learning compiler that optimizes
tensor computations in deep learning models for the CPU. LoopTune optimizes
tensor traversal order while using the ultra-fast lightweight code generator
LoopNest to perform hardware-specific optimizations. With a novel graph-based
representation and action space, LoopTune speeds up LoopNest by 3.2x,
generating an order of magnitude faster code than TVM, 2.8x faster than
MetaSchedule, and 1.08x faster than AutoTVM, consistently performing at the
level of the hand-tuned library Numpy. Moreover, LoopTune tunes code on the
order of seconds.
Novel neural architectures & algorithms for efficient inference
In the last decade, the machine learning universe embraced deep neural networks (DNNs) wholeheartedly with the advent of neural architectures such as recurrent neural networks (RNNs), convolutional neural networks (CNNs), transformers, etc. These models have empowered many applications, such as ChatGPT, Imagen, etc., and have achieved state-of-the-art (SOTA) performance on many vision, speech, and language modeling tasks. However, SOTA performance comes with various issues, such as large model size, compute-intensive training, increased inference latency, higher working memory, etc. This thesis aims at improving the resource efficiency of neural architectures, i.e., significantly reducing the computational, storage, and energy consumption of a DNN without any significant loss in performance.
Towards this goal, we explore novel neural architectures as well as training algorithms that allow low-capacity models to achieve near SOTA performance. We divide this thesis into two dimensions: \textit{Efficient Low Complexity Models}, and \textit{Input Hardness Adaptive Models}.
Along the first dimension, i.e., \textit{Efficient Low Complexity Models}, we improve DNN performance by addressing instabilities in the existing architectures and training methods. We propose novel neural architectures inspired by ordinary differential equations (ODEs) to reinforce input signals and attend to salient feature regions. In addition, we show that carefully designed training schemes improve the performance of existing neural networks. We divide this exploration into two parts:
\textsc{(a) Efficient Low Complexity RNNs.} We improve RNN resource efficiency by addressing poor gradients, noise amplification, and BPTT training issues. First, we improve RNNs by solving ODEs that eliminate vanishing and exploding gradients during training. To do so, we present Incremental Recurrent Neural Networks (iRNNs) that keep track of increments in the equilibrium surface. Next, we propose Time Adaptive RNNs that mitigate the noise propagation issue in RNNs by modulating the time constants in the ODE-based transition function. We empirically demonstrate the superiority of ODE-based neural architectures over existing RNNs. Finally, we propose the Forward Propagation Through Time (FPTT) algorithm for training RNNs. We show that FPTT yields significant gains compared to the conventional Backpropagation Through Time (BPTT) scheme.
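The ODE-based updates described above can be sketched generically as an Euler-discretized recurrent cell (an illustrative toy, not the exact iRNN or Time Adaptive RNN formulation):

```python
import numpy as np

def ode_rnn_step(h, x, W, U, b, eps=0.1):
    """One Euler step of an ODE-style recurrent update:
        h <- h + eps * tanh(W @ h + U @ x + b)
    The incremental form makes each update a small perturbation of the
    previous state, which is the intuition behind taming vanishing and
    exploding gradients (generic sketch, not the thesis architectures).
    """
    return h + eps * np.tanh(W @ h + U @ x + b)

rng = np.random.default_rng(0)
d_h, d_x, T = 16, 8, 100
W = rng.standard_normal((d_h, d_h)) / np.sqrt(d_h)
U = rng.standard_normal((d_h, d_x)) / np.sqrt(d_x)
b = np.zeros(d_h)

h = np.zeros(d_h)
for t in range(T):
    h = ode_rnn_step(h, rng.standard_normal(d_x), W, U, b)
# Each coordinate moves by at most eps per step (tanh is bounded),
# so the state norm is provably bounded by eps * T * sqrt(d_h).
print(np.linalg.norm(h))
```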
\textsc{(b) Efficient Low Complexity CNNs.} Next, we improve CNN architectures by reducing their resource usage. CNNs require greater depth to generate high-level features, resulting in computationally expensive models. We design a novel residual block, the Global layer, that constrains the input and output features by approximately solving partial differential equations (PDEs). It yields better receptive fields than traditional convolutional blocks and thus results in shallower networks. Further, we reduce the model footprint by enforcing a novel inductive bias that formulates the output of a residual block as a spatial interpolation between high-compute anchor pixels and low-compute cheaper pixels. This results in spatially interpolated convolutional blocks (SI-CNNs) that have better compute and performance trade-offs. Finally, we propose an algorithm that enforces various distributional constraints during training in order to achieve better generalization. We refer to this scheme as distributionally constrained learning (DCL).
In the second dimension, i.e., \textit{Input Hardness Adaptive Models}, we introduce the notion of the hardness of any input relative to any architecture. In the first dimension, a neural network allocates the same resources, such as compute, storage, and working memory, for all the inputs. It inherently assumes that all examples are equally hard for a model. In this dimension, we challenge this assumption using input hardness as our reasoning that some inputs are relatively easy for a network to predict compared to others. Input hardness enables us to create selective classifiers wherein a low-capacity network handles simple inputs while abstaining from a prediction on the complex inputs. Next, we create hybrid models that route the hard inputs from the low-capacity abstaining network to a high-capacity expert model. We design various architectures that adhere to this hybrid inference style. Further, input hardness enables us to selectively distill the knowledge of a high-capacity model into a low-capacity model by cleverly discarding hard inputs during the distillation procedure.
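The hybrid inference style described above can be sketched with a confidence-threshold router (a minimal illustration; the max-probability rule and the stand-in models are hypothetical, not the architectures proposed in the thesis):

```python
def hybrid_predict(x, small_model, big_model, threshold=0.9):
    """Route by input hardness: the low-capacity model predicts when it
    is confident, and abstains (defers to the expert) otherwise.
    Confidence here is the max class probability; models are callables
    returning a list of class probabilities.
    """
    probs = small_model(x)
    conf = max(probs)
    if conf >= threshold:                     # easy input: small model answers
        return probs.index(conf), "small"
    big = big_model(x)                        # hard input: defer to the expert
    return big.index(max(big)), "big"

# Hypothetical stand-in models: confident on positive inputs only.
small = lambda x: [0.97, 0.02, 0.01] if x > 0 else [0.40, 0.35, 0.25]
big   = lambda x: [0.10, 0.80, 0.10]

print(hybrid_predict(1.0, small, big))   # -> (0, 'small')
print(hybrid_predict(-1.0, small, big))  # -> (1, 'big')
```

In practice the compute saving comes from most inputs being "easy": the expensive model only runs on the abstained fraction.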
Finally, we conclude this thesis by sketching out various interesting future research directions that emerge as an extension of different ideas explored in this work
Surface-Based tools for Characterizing the Human Brain Cortical Morphology
Thesis by compendium of publications.
The cortex of the human brain is highly convoluted. These characteristic convolutions
present advantages over lissencephalic brains. For instance, gyrification allows an expansion
of cortical surface area without significantly increasing the cranial volume, thus
facilitating the passage of the head through the birth canal. Studying the human brain's
cortical morphology and the processes leading to the cortical folds has been critical for an
increased understanding of the pathological processes driving psychiatric disorders such
as schizophrenia, bipolar disorders, autism, or major depression. Furthermore, charting
the normal developmental changes in cortical morphology during adolescence or aging
can be of great importance for detecting deviances that may be precursors for pathology.
However, the exact mechanisms that drive cortical folding remain largely unknown.
The accurate characterization of the neurodevelopment processes is challenging. Multiple
mechanisms co-occur at a molecular or cellular level and can only be studied through
the analysis of ex-vivo samples, usually from animal models. Magnetic Resonance Imaging
can partially fill this gap, allowing the macroscopic manifestations of these processes
to be portrayed in vivo.
Different metrics have been defined to measure cortical structure to describe the brain's
morphological changes and infer the associated microstructural events. Metrics such as
cortical thickness, surface area, or cortical volume help establish a relation between the
measured voxels on a magnetic resonance image and the underlying biological processes.
However, the existing methods present limitations or room for improvement.
Methods extracting the lines representing the gyral and sulcal morphology tend to
over- or underestimate the total length. These lines can provide important information
about how sulcal and gyral regions function differently due to their distinctive ontogenesis.
Nevertheless, some methods label every small fold on the cortical surface as a sulcal
fundus, thus losing the perspective of lines that travel through the deeper zones of a sulcal
basin. On the other hand, some methods are too restrictive, labeling sulcal fundi only for
a handful of primary folds.
To overcome this issue, we have proposed a Laplacian-collapse-based algorithm that
can delineate the lines traversing the top regions of the gyri and the fundi of the sulci
while avoiding anastomotic sulci. For this, the cortex, represented as a 3D surface, is segmented
into gyral and sulcal surfaces according to the curvature and depth at every point
of the mesh. Each resulting surface is spatially filtered, smoothing the boundaries. Then,
a Laplacian-collapse-based algorithm is applied to obtain a thinned representation of the
morphology of each structure. These thin curves are processed to detect where the extremities
or endpoints lie. Finally, sulcal fundi and gyral crown lines are obtained by
eroding the surfaces while preserving the structure topology and connectivity between
the endpoints. The assessment of the presented algorithm showed that the labeled sulcal lines were close to the proposed ground truth length values while crossing through the
deeper (and more curved) regions. The tool also obtained reproducibility scores better or
similar to those of previous algorithms.
A second limitation of the existing metrics concerns the measurement of sulcal width.
This metric, understood as the physical distance between the points on opposite sulcal
banks, can be useful for detecting cortical flattening or for complementing the information
provided by cortical thickness, the gyrification index, or similar features. Nevertheless,
existing methods only provided averaged measurements for different predefined sulcal
regions, greatly restricting the utility of sulcal width measures and ignoring the intra-region
variability.
Regarding this, we developed a method that estimates the distance from each sulcal
point in the cortex to its corresponding opposite, thus providing a per-vertex map of the
physical sulcal distances. For this, the cortical surface is sampled at different depth levels,
detecting the points where the sulcal banks change. The points corresponding to each sulcal
wall are matched with the closest point on a different one. The distance between those
points is the sulcal width. The algorithm was validated against a simulated sulcus that
resembles a simple fold. Then the tool was used on a real dataset and compared against
two widely used sulcal width estimation methods, averaging the proposed algorithm's
values into the same region definition those reference tools use. The resulting values were
similar for the proposed and the reference methods, thus demonstrating the algorithm's
accuracy.
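The matching step can be illustrated with a nearest-neighbour sketch between two opposing walls (a simplified stand-in operating on synthetic point clouds rather than the actual cortical mesh; the parallel-wall "sulcus" plays the role of the simulated fold used for validation):

```python
import numpy as np

def sulcal_width(bank_a: np.ndarray, bank_b: np.ndarray) -> np.ndarray:
    """Per-point sulcal width: each point on one sulcal wall is matched
    with the closest point on the opposite wall, and the width is the
    Euclidean distance between them.
    bank_a: (n, 3) points; bank_b: (m, 3) points; returns (n,) widths.
    """
    # Pairwise distances between walls, then min over the opposite wall.
    d = np.linalg.norm(bank_a[:, None, :] - bank_b[None, :, :], axis=-1)
    return d.min(axis=1)

# Toy "sulcus": two parallel vertical walls 2 mm apart.
z = np.linspace(0.0, 10.0, 50)
wall_a = np.stack([np.zeros_like(z), np.zeros_like(z), z], axis=1)
wall_b = np.stack([np.full_like(z, 2.0), np.zeros_like(z), z], axis=1)
print(sulcal_width(wall_a, wall_b).mean())  # 2.0 for this synthetic fold
```

Averaging such per-vertex widths within a predefined region reproduces the region-level summaries of the reference tools, while the raw map retains the intra-region variability they discard.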
Finally, both algorithms were tested on a real aging population dataset to prove the
methods' potential in a use-case scenario. The main idea was to elucidate fine-grained
morphological changes in the human cortex with aging by conducting three analyses: a
comparison of the age-dependencies of cortical thickness in gyral and sulcal lines, an
analysis of how the sulcal and gyral length changes with age, and a vertex-wise study of
sulcal width and cortical thickness.
These analyses showed a general flattening of the cortex with aging, with interesting
findings such as a differential age-dependency of cortical thinning in the sulcal and
gyral regions. By demonstrating that our method can detect this difference, our results
can pave the way for future in vivo studies focusing on macro- and microscopic changes
specific to gyri or sulci. Our method can generate new brain-based biomarkers specific
to sulci and gyri, and these can be used on large samples to establish normative models
to which patients can be compared. In parallel, the vertex-wise analyses show that sulcal
width is very sensitive to changes during aging, independent of cortical thickness. This
corroborates the concept of sulcal width as a metric that explains, at least in part, the unique
variance of morphology not fully captured by existing metrics. Our method allows for
sulcal width vertex-wise analyses that were not possible previously, potentially changing
our understanding of how changes in sulcal width shape cortical morphology.
In conclusion, this thesis presents two new tools, open source and publicly available, for estimating cortical surface-based morphometrics. The methods have been validated
and assessed against existing algorithms. They have also been tested on a real dataset,
providing new, exciting insights into cortical morphology and showing their potential for
defining innovative biomarkers.Programa de Doctorado en Ciencia y TecnologĂa BiomĂ©dica por la Universidad Carlos III de MadridPresidente: Juan Domingo Gispert LĂłpez.- Secretario: Norberto Malpica GonzĂĄlez de Vega.- Vocal: Gemma Cristina MontĂ© Rubi
Cosmology with the Laser Interferometer Space Antenna
The Laser Interferometer Space Antenna (LISA) has two scientific objectives of cosmological focus: to probe the expansion rate of the universe, and to understand stochastic gravitational-wave backgrounds and their implications for early universe and particle physics, from the MeV to the Planck scale. However, the range of potential cosmological applications of gravitational-wave observations extends well beyond these two objectives. This publication presents a summary of the state of the art in LISA cosmology, theory and methods, and identifies new opportunities to use gravitational-wave observations by LISA to probe the universe
Pressure gradients in molecular dynamics simulations of nano-confined fluid flow
A detailed understanding of the behaviour of lubricants in narrow gaps is crucial for a range of medical and industrial applications. The fundamental equations of hydrodynamics provide accurate solutions as long as the contacting bodies are sufficiently far apart. Under extreme loading conditions, however, deviations from the Navier-Stokes-Fourier equations are observed. This is mainly due to the importance of atomic effects, which no longer permit a homogenized treatment within continuum theories, so that the fluid must be treated as a collection of discrete particles. The multiscale character of the problem becomes even more apparent in the boundary-lubrication regime. In this regime, the lubricant is driven by pressure gradients arising from the variation of the gap height between the contacting bodies. In atomistic modeling, non-equilibrium molecular dynamics (NEMD) simulations of periodic, representative volume elements (RVEs) are commonly used, in which the lubricant film is confined by flat walls. Because of the periodicity, imposing pressure gradients in such models is a hurdle. In this work, the "pump" method was developed to introduce pressure gradients in periodic systems by applying a local perturbation that induces a pressure-driven flow of the lubricant while conserving momentum. Either the mass flux or the pressure gradient can be chosen as the independent variable by prescribing atomic forces. The method was tested for compressible fluids with different wetting properties and in combination with various thermostatting strategies. The thermodynamic field quantities of the lubricant, namely pressure, temperature, and velocity, are measured in gap heights down to three molecular diameters.
The pump method can be applied to channels of arbitrary geometry, which enables its use for studying hydrodynamic cavitation -- a phenomenon that is ubiquitous in nature but has hardly been studied at the molecular level so far. To this end, the channel geometry was optimized by means of a sensitivity analysis. Subsequently, the lifetime of the cavitation bubbles, as well as their growth and collapse, was compared with theoretical hydrodynamic predictions. Within a multiscale approach to lubrication problems, the pump method can be used to set the boundary conditions of a molecular system consistently with continuum simulations.
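In the continuum limit, the mass flux and the pressure gradient that the method treats as interchangeable independent variables are linked by the plane-Poiseuille relation; a minimal sketch of that consistency check (continuum formula only, with illustrative values, not the NEMD method itself):

```python
def poiseuille_flux(grad_p: float, h: float, mu: float) -> float:
    """Volumetric flux per unit width for plane Poiseuille flow between
    parallel walls a distance h apart: q = (-dp/dx) * h**3 / (12 * mu).
    Continuum (Navier-Stokes) result; in gaps of a few molecular
    diameters the MD measurements deviate from it.
    """
    return grad_p * h**3 / (12.0 * mu)

def grad_p_for_flux(q: float, h: float, mu: float) -> float:
    """Inverse relation: pressure gradient needed to drive a target flux q."""
    return 12.0 * mu * q / h**3

# Round-trip consistency with illustrative SI values
# (h = 2 nm gap, water-like viscosity mu = 1e-3 Pa s).
g = grad_p_for_flux(q=1e-9, h=2e-9, mu=1e-3)
print(poiseuille_flux(g, 2e-9, 1e-3))  # recovers 1e-9
```

The h**3 dependence is why tiny gap-height variations between the contacting bodies generate the strong pressure gradients that drive the lubricant in the boundary regime.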