Dual Meta-Learning with Longitudinally Generalized Regularization for One-Shot Brain Tissue Segmentation Across the Human Lifespan
Brain tissue segmentation is essential for neuroscience and clinical studies.
However, segmentation on longitudinal data is challenging due to dynamic brain
changes across the lifespan. Previous research mainly focuses on
self-supervision with regularization and loses longitudinal generalization
when fine-tuned on a specific age group. In this paper, we propose a dual
meta-learning paradigm to learn longitudinally consistent representations
that persist through fine-tuning. Specifically, we learn a plug-and-play feature
extractor that extracts longitudinally consistent anatomical representations via
meta-feature learning, and a well-initialized task head for fine-tuning via
meta-initialization learning. In addition, two class-aware regularizations are
proposed to encourage longitudinal consistency. Experimental results on the
iSeg2019 and ADNI datasets demonstrate the effectiveness of our method. Our
code is available at https://github.com/ladderlab-xjtu/DuMeta.
Comment: ICCV 202
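The two meta-objectives above (a consistent feature extractor plus a well-initialized task head) share the bilevel structure of generic meta-initialization learning: adapt on each task with an inner gradient step, then update the shared initialization through the adapted weights. A minimal numpy sketch on toy quadratic tasks (illustrative only; `meta_init`, the task centers, and the step sizes are hypothetical stand-ins, not the DuMeta code):

```python
def meta_init(centers, alpha=0.1, beta=0.05, steps=500):
    """MAML-style meta-initialization on toy tasks f_c(w) = (w - c)^2."""
    w = 0.0  # shared initialization (the "task head" stand-in)
    for _ in range(steps):
        g = 0.0
        for c in centers:
            # inner loop: one gradient step of task-specific fine-tuning
            w_adapt = w - alpha * 2.0 * (w - c)
            # outer loop: gradient of the post-adaptation loss w.r.t. w
            # (chain rule: dw_adapt/dw = 1 - 2*alpha)
            g += 2.0 * (w_adapt - c) * (1.0 - 2.0 * alpha)
        w -= beta * g / len(centers)
    return w

# the learned initialization sits between the two task optima,
# so a single fine-tuning step works well on either task
w_star = meta_init([1.0, 3.0])
```

The point of the sketch is that the outer update differentiates *through* the inner fine-tuning step, which is what lets the initialization, rather than any single task solution, be optimized.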
Bilevel optimization, deep learning and fractional Laplacian regularization with applications in tomography
The article of record as published may be located at https://doi.org/10.1088/1361-6420/ab80d7. Funded by Naval Postgraduate School.
In this work we consider a generalized bilevel optimization framework for solving inverse problems. We introduce the fractional Laplacian as a regularizer to improve the reconstruction quality, and compare it with total variation regularization. We emphasize that the key advantage of using the fractional Laplacian as a regularizer is that it leads to a linear operator, as opposed to total variation regularization, which results in a nonlinear degenerate operator. Inspired by residual neural networks, to learn the optimal strength of regularization and the exponent of the fractional Laplacian, we develop a dedicated bilevel optimization neural network with a variable depth for a general regularized inverse problem. We illustrate how to incorporate various regularizer choices into our proposed network. As an example, we consider tomographic reconstruction as a model problem and show an improvement in reconstruction quality, especially for limited data, via fractional Laplacian regularization. We successfully learn the regularization strength and the fractional exponent via our proposed bilevel optimization neural network. We observe that the fractional Laplacian regularization outperforms total variation regularization. This is especially encouraging, and important, in the case of limited and noisy data.
The first and third authors are partially supported by NSF grants DMS-1818772, DMS-1913004, the Air Force Office of Scientific Research under Award No. FA9550-19-1-0036, and the Department of the Navy, Naval Postgraduate School under Award No. N00244-20-1-0005. The third author is also partially supported by a Provost award at George Mason University under the Industrial Immersion Program. The second author is partially supported by DOE Office of Science under Contract No. DE-AC02-06CH11357.
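The linearity advantage claimed above can be made concrete: on a periodic grid the fractional Laplacian (-Δ)^s is a Fourier multiplier with symbol |k|^(2s), so applying it is a linear spectral filter. A minimal numpy sketch (a generic spectral discretization, not the paper's bilevel network; the function name and grid choice are illustrative):

```python
import numpy as np

def frac_laplacian_1d(u, s, L=2 * np.pi):
    """Spectral (-Laplacian)^s of u on a periodic grid of length L.

    Linear in u: each Fourier mode is just scaled by |k|^(2s).
    """
    n = u.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)  # integer wavenumbers for L = 2*pi
    return np.real(np.fft.ifft(np.abs(k) ** (2 * s) * np.fft.fft(u)))
```

Sanity check: for s = 1/2, sin(x) is an eigenfunction with eigenvalue 1 and cos(2x) with eigenvalue 2, so the operator is easy to verify mode by mode, and the exponent s enters only through the multiplier, which is what makes learning it in a bilevel scheme tractable.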
Bilevel Training Schemes in Imaging for Total Variation--Type Functionals with Convex Integrands
In the context of image processing, given a homogeneous, linear differential
operator of fixed order with constant coefficients, we study a class of
variational problems whose regularizing terms depend on the operator.
Precisely, the regularizers are integrals of spatially inhomogeneous integrands
with convex dependence on the differential operator applied to the image
function. The setting is made rigorous by means of the theory of Radon measures
and of suitable function spaces adapted to the operator. We prove the lower
semicontinuity of the functionals at stake and existence of minimizers for the
corresponding variational problems. Then, we embed the latter into a bilevel
scheme in order to automatically compute the space-dependent regularization
parameters, thus allowing for good flexibility and preservation of details in
the reconstructed image. We establish existence of optima for the scheme and we
finally substantiate its feasibility by numerical examples in image denoising.
The cases that we treat are Huber versions of the first- and second-order total
variation, with both the Huber parameter and the regularization parameter being
spatially dependent. Notably, the spatially dependent version of second-order
total variation produces high-quality reconstructions when compared to
regularizations of similar type, and the introduction of the spatially
dependent Huber parameter leads to a further enhancement of the image details.
Comment: 27 pages, 6 figures
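The spatially dependent Huber construction above can be illustrated on a 1-D denoising toy: per-pixel weights lam_i and Huber parameters gamma_i penalize finite differences of the signal, quadratically for small jumps and linearly for large ones. A hedged sketch by plain gradient descent (function names, step sizes, and the discrete setting are illustrative; the paper works in a far more general measure-theoretic framework):

```python
import numpy as np

def huber(t, gamma):
    """Huber function: quadratic near zero, linear in the tails."""
    return np.where(np.abs(t) <= gamma, t ** 2 / (2 * gamma), np.abs(t) - gamma / 2)

def huber_grad(t, gamma):
    """Derivative of the Huber function: t/gamma clipped to [-1, 1]."""
    return np.clip(t / gamma, -1.0, 1.0)

def denoise(f, lam, gamma, tau=0.1, iters=300):
    """Minimize 0.5*||u - f||^2 + sum_i lam_i * huber(u_{i+1} - u_i, gamma_i)
    by gradient descent; lam and gamma may vary per pixel."""
    u = f.copy()
    for _ in range(iters):
        d = np.diff(u)
        g = lam[:-1] * huber_grad(d, gamma[:-1])
        grad = u - f
        grad[:-1] -= g   # d(u_{i+1}-u_i)/du_i = -1
        grad[1:] += g    # d(u_{i+1}-u_i)/du_{i+1} = +1
        u -= tau * grad
    return u
```

Because lam and gamma are arrays, a bilevel scheme of the kind described in the abstract could in principle tune them per pixel, smoothing flat regions strongly while leaving edge pixels lightly penalized.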
Direct stellarator coil optimization for nested magnetic surfaces with precise quasisymmetry
We present a robust optimization algorithm for the design of electromagnetic
coils that generate vacuum magnetic fields with nested flux surfaces and
precise quasisymmetry. The method is based on a bilevel optimization problem,
where the outer coil optimization is constrained by a set of inner
least-squares optimization problems whose solutions describe magnetic surfaces.
The outer optimization objective targets coils that generate a field with
nested magnetic surfaces and good quasisymmetry. The inner optimization
problems identify magnetic surfaces when they exist, and approximate surfaces
in the presence of magnetic islands or chaos. We show that this formulation can
be used to heal islands and chaos, thus producing coils that result in magnetic
fields with precise quasisymmetry. We show that the method can be initialized
with coils from the traditional two stage coil design process, as well as coils
from a near axis expansion optimization. We present a numerical example where
island chains are healed and quasisymmetry is optimized up to surfaces with
aspect ratio 6. Another numerical example illustrates that the aspect ratio of
nested flux surfaces with optimized quasisymmetry can be decreased from 6 to
approximately 4. The last example shows that our approach is robust even with a
cold start using coils from a near-axis expansion optimization.
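The outer/inner structure described above, an outer coil objective constrained by inner least-squares surface solves, can be mimicked in a scalar toy: every outer evaluation first solves an inner least-squares problem to convergence, then scores its solution. A hedged numpy sketch with made-up stand-ins for the physics (none of the functions below model actual coils or flux surfaces):

```python
import numpy as np

def inner_surface(c, iters=100, lr=0.4):
    """Inner least-squares solve: minimize (s - c**2)**2 over s.

    Stand-in for the inner surface solves; converges to s = c**2.
    """
    s = 0.0
    for _ in range(iters):
        s -= lr * 2.0 * (s - c ** 2)  # gradient step on the squared residual
    return s

def outer_objective(c):
    """Outer objective scored at the inner solution.

    Target value 2.0 is a stand-in for 'good quasisymmetry'.
    """
    return (inner_surface(c) - 2.0) ** 2

# crude outer search over the 'coil' parameter c
cs = np.linspace(0.0, 3.0, 3001)
c_best = cs[np.argmin([outer_objective(c) for c in cs])]
```

The defining feature, mirrored from the abstract, is that the outer variable never sees the inner residuals directly; it is evaluated only through the converged inner solutions, exactly as the coil parameters are evaluated through the computed magnetic surfaces.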
Entropic regularization approach for mathematical programs with equilibrium constraints
A new smoothing approach based on entropic perturbation is proposed for solving mathematical programs with equilibrium constraints. Some of the desirable properties of the smoothing function are shown. The viability of the proposed approach is supported by a computational study on a set of well-known test problems.
Keywords: Entropic regularization; Smoothing approach; Mathematical programs with equilibrium constraints
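Entropic (log-sum-exp) perturbations of this kind are commonly used to smooth the nonsmooth min-function that encodes complementarity: a >= 0, b >= 0, a*b = 0 holds iff min(a, b) = 0. A sketch of one standard such smoothing, with uniform error mu*ln(2) (the paper's exact smoothing function may differ):

```python
import numpy as np

def entropic_min(a, b, mu):
    """Smooth approximation -mu*log(exp(-a/mu) + exp(-b/mu)) of min(a, b).

    Written in a numerically stable form by factoring out the minimum;
    the approximation error is at most mu*ln(2), attained at a == b.
    """
    m = np.minimum(a, b)
    return m - mu * np.log(np.exp(-(a - m) / mu) + np.exp(-(b - m) / mu))
```

Replacing each complementarity condition min(a_i, b_i) = 0 by entropic_min(a_i, b_i, mu) = 0 yields a smooth system, and driving mu to zero recovers the original equilibrium constraints, which is the usual template for smoothing methods on such programs.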
Squeeze, Recover and Relabel: Dataset Condensation at ImageNet Scale From A New Perspective
We present a new dataset condensation framework termed Squeeze, Recover and
Relabel (SRe2L) that decouples the bilevel optimization of model and
synthetic data during training, to handle varying scales of datasets, model
architectures and image resolutions for effective dataset condensation. The
proposed method demonstrates flexibility across diverse dataset scales and
exhibits multiple advantages in terms of arbitrary resolutions of synthesized
images, low training cost and memory consumption with high-resolution training,
and the ability to scale up to arbitrary evaluation network architectures.
Extensive experiments are conducted on Tiny-ImageNet and full ImageNet-1K
datasets. Under 50 IPC, our approach achieves the highest 42.5% and 60.8%
validation accuracy on Tiny-ImageNet and ImageNet-1K, outperforming all
previous state-of-the-art methods by margins of 14.5% and 32.9%, respectively.
Our approach also outperforms MTT by approximately 52× (ConvNet-4) and
16× (ResNet-18) in speed, with 11.6× and 6.4× less memory consumption
during data synthesis. Our code and condensed
datasets of 50, 200 IPC with 4K recovery budget are available at
https://zeyuanyin.github.io/projects/SRe2L/.
Comment: Technical report
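The Relabel stage assigns soft labels to the recovered images from a pre-trained teacher instead of hard class labels. A hedged sketch of generic temperature-softened relabeling (knowledge-distillation style; `relabel`, the temperature, and the toy logits are illustrative, not the exact SRe2L pipeline):

```python
import numpy as np

def softmax(z, T=1.0):
    """Row-wise softmax with temperature T (numerically stabilized)."""
    z = z / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def relabel(teacher_logits, T=4.0):
    """Replace hard labels with temperature-softened teacher predictions.

    Higher T spreads probability mass across classes, preserving the
    teacher's ranking while exposing inter-class similarity.
    """
    return softmax(teacher_logits, T)
```

Because softmax with any positive temperature preserves the argmax, the soft labels agree with the teacher's top-1 prediction while additionally encoding how confusable the remaining classes are, which is the usual motivation for relabeling condensed data.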