A Comparative Study of Population-Graph Construction Methods and Graph Neural Networks for Brain Age Regression
The difference between the chronological and biological brain age of a
subject can be an important biomarker for neurodegenerative diseases, thus
brain age estimation can be crucial in clinical settings. One way to
incorporate multimodal information into this estimation is through population
graphs, which combine various types of imaging data and capture the
associations among individuals within a population. In medical imaging,
population graphs have demonstrated promising results, mostly for
classification tasks. In most cases, the graph structure is pre-defined and
remains static during training. However, extracting population graphs is a
non-trivial task and can significantly impact the performance of Graph Neural
Networks (GNNs), which are sensitive to the graph structure. In this work, we
highlight the importance of meaningful graph construction and experiment with
different population-graph construction methods, evaluating their effect on GNN
performance in brain age estimation. We use the homophily metric and graph
visualizations to gain valuable quantitative and qualitative insights on the
extracted graph structures. For the experimental evaluation, we leverage the UK
Biobank dataset, which offers many imaging and non-imaging phenotypes. Our
results indicate that architectures highly sensitive to the graph structure,
such as Graph Convolutional Network (GCN) and Graph Attention Network (GAT),
struggle with low homophily graphs, while other architectures, such as
GraphSAGE and Chebyshev convolution, are more robust across different homophily ratios. We
conclude that static graph construction approaches are potentially insufficient
for the task of brain age estimation and make recommendations for alternative
research directions.
Comment: Accepted at GRAIL, MICCAI 202
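The homophily analysis the abstract describes can be sketched as an edge-homophily computation: the fraction of edges connecting "similar" nodes. For a regression target such as brain age, one hedged variant thresholds the label difference between endpoints. The function name and the tolerance value below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def edge_homophily(edges, labels, tol=5.0):
    """Fraction of edges whose endpoint labels differ by at most `tol`.

    For a classification task, set tol=0 and pass integer class labels.
    (Illustrative sketch; the paper's exact homophily metric may differ.)
    """
    edges = np.asarray(edges)            # shape (E, 2): node index pairs
    labels = np.asarray(labels, float)   # shape (N,): per-subject target, e.g. age
    diffs = np.abs(labels[edges[:, 0]] - labels[edges[:, 1]])
    return float(np.mean(diffs <= tol))

# Toy population graph: 4 subjects with ages, connected by 3 edges.
ages = [62.0, 64.0, 80.0, 81.0]
edges = [(0, 1), (2, 3), (0, 2)]
ratio = edge_homophily(edges, ages, tol=5.0)  # 2 of the 3 edges link similar ages
```

A low ratio here would flag the kind of graph on which, per the results above, GCN and GAT tend to struggle.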
SuNeRF: Validation of a 3D Global Reconstruction of the Solar Corona Using Simulated EUV Images
Extreme Ultraviolet (EUV) light emitted by the Sun impacts satellite
operations and communications and affects the habitability of planets.
Currently, EUV-observing instruments are constrained to viewing the Sun from
its equator (i.e., ecliptic), limiting our ability to forecast EUV emission for
other viewpoints (e.g. solar poles), and to generalize our knowledge of the
Sun-Earth system to other host stars. In this work, we adapt Neural Radiance
Fields (NeRFs) to the physical properties of the Sun and demonstrate that
non-ecliptic viewpoints can be reconstructed from observations limited to the
solar ecliptic. To validate our approach, we train on simulations of solar EUV
emission that provide a ground truth for all viewpoints. Our model accurately
reconstructs the simulated 3D structure of the Sun, achieving a peak
signal-to-noise ratio of 43.3 dB and a mean absolute relative error of 0.3%
for non-ecliptic viewpoints. Our method provides a consistent 3D reconstruction
of the Sun from a limited number of viewpoints, thus highlighting the potential
to create a virtual instrument for satellite observations of the Sun. Its
extension to real observations will provide the missing link to compare the Sun
to other stars and to improve space-weather forecasting.
Comment: Accepted at Machine Learning and the Physical Sciences workshop,
NeurIPS 202
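The two reported validation metrics, peak signal-to-noise ratio (in dB) and mean absolute relative error, can be sketched as follows. The function names and the `eps` guard against zero-valued pixels are illustrative assumptions, not the paper's evaluation code.

```python
import numpy as np

def psnr(pred, target, data_range=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, data_range]."""
    mse = np.mean((np.asarray(pred, float) - np.asarray(target, float)) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

def mean_abs_rel_error(pred, target, eps=1e-8):
    """Mean absolute relative error; eps avoids division by zero-valued pixels."""
    pred, target = np.asarray(pred, float), np.asarray(target, float)
    return float(np.mean(np.abs(pred - target) / (np.abs(target) + eps)))

# Toy check: a reconstruction off by a constant 0.1 on a unit-valued image.
target = np.ones((4, 4))
pred = target + 0.1
quality_db = psnr(pred, target)            # MSE = 0.01, so 20 dB
rel_err = mean_abs_rel_error(pred, target)
```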
Sun Neural Radiance Fields (SuNeRFs): From Images to 4D Models of the Solar Atmosphere
EUV-observing instruments are limited in their numbers and have mainly been constrained to viewing the Sun from the ecliptic. For example, the Solar Dynamics Observatory (SDO; 2010-present) provides images of the Sun in EUV from the perspective of the Earth-Sun line. Two additional viewpoints are provided by the STEREO twin satellites pulling Ahead (STEREO-A; 2006-present) and falling Behind (STEREO-B; 2006-2014) of Earth's orbit. No satellites observe the solar poles directly. However, a complete image of the 3D Sun is required to fully understand the dynamics of the Sun (from eruptive events to space weather in the solar system), to forecast EUV radiation to protect our assets in space, to relate the Sun to other stars in the universe, and to generalize our knowledge of the Sun-Earth system to other host stars. To maximize the science return of multiple viewpoints, we propose a novel approach that unifies and smoothly integrates data from multiple perspectives into a consistent 3D representation of the solar corona. More specifically, we leverage Neural Radiance Fields (NeRFs), neural networks that achieve state-of-the-art 3D scene representation and generate novel views from a limited number of input images. We adapt a Sun NeRF (SuNeRF) to generate a physically consistent representation of the 3D Sun, with the inclusion of radiative transfer and geometric ray sampling that matches the physical reality of optically thin plasma in the solar atmosphere. SuNeRFs leverage existing multi-viewpoint observations and act as virtual instruments that can fly out of the ecliptic, view the poles, and be placed anywhere in the solar system to generate novel views. Our pipeline is an example of how novel deep learning techniques can significantly enhance observational capabilities through the creation of virtual instruments.
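The rendering adaptation the abstract describes, radiative transfer matched to optically thin plasma, amounts to replacing standard NeRF absorption-weighted compositing with a plain emission integral along each ray: sampled emissivities are summed (weighted by step length) rather than attenuated. A minimal sketch, with a made-up Gaussian emissivity field standing in for the learned network:

```python
import numpy as np

def render_ray_optically_thin(emissivity_fn, origin, direction,
                              t_near, t_far, n_samples=64):
    """Integrate emission along a ray with no absorption (optically thin plasma).

    `emissivity_fn` maps 3D points to non-negative emissivity; here it stands in
    for the learned SuNeRF network (illustrative sketch, not the actual model).
    """
    t = np.linspace(t_near, t_far, n_samples)
    dt = t[1] - t[0]
    points = origin[None, :] + t[:, None] * direction[None, :]  # (n_samples, 3)
    emission = emissivity_fn(points)                            # (n_samples,)
    return float(np.sum(emission) * dt)  # Riemann sum of the emission integral

# Toy emissivity: a Gaussian "bright blob" centered at the origin.
blob = lambda p: np.exp(-np.sum(p ** 2, axis=-1))
pixel = render_ray_optically_thin(blob, np.array([0.0, 0.0, -5.0]),
                                  np.array([0.0, 0.0, 1.0]), 0.0, 10.0)
```

Because there is no attenuation term, the rendered pixel depends only on the total emission along the ray, which is what makes views from unobserved angles (poles, far side) geometrically consistent with the training views.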