A localisation theorem for singularity categories of proper dg algebras
Given a recollement of three proper dg algebras over a noetherian commutative
ring, e.g. three algebras which are finitely generated over the base ring,
which extends one step downwards, it is shown that there is a short exact
sequence of their singularity categories. This allows us to recover and
generalise some known results on singularity categories of finite-dimensional
algebras.
Comment: Section 3 is new; in Section 4 the base ring is changed from a field to a commutative noetherian ring, and in Section 6 the base ring is changed from a field to an arbitrary commutative ring.
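The shape of the result can be sketched in the standard notation for recollements (the symbols below are illustrative, not quoted from the paper): a recollement of derived categories with outer terms C and A and middle term B induces, under the paper's hypotheses, a short exact sequence of the associated singularity categories.

```latex
% Schematic only: a recollement of the derived categories of three
% proper dg algebras A, B, C (notation assumed, not the paper's own)
% \[
%   \mathcal{D}(C) \longrightarrow \mathcal{D}(B) \longrightarrow \mathcal{D}(A)
% \]
% is shown to induce a short exact sequence of singularity categories,
% with D_{sg}(-) = D^b(-)/\mathrm{perf}(-):
\[
  0 \longrightarrow \mathcal{D}_{\mathrm{sg}}(C)
    \longrightarrow \mathcal{D}_{\mathrm{sg}}(B)
    \longrightarrow \mathcal{D}_{\mathrm{sg}}(A)
    \longrightarrow 0
\]
```

Here exactness is meant in the sense of Verdier quotients of triangulated categories; the precise placement of the outer terms depends on the conventions of the recollement.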
Solving Inverse Obstacle Scattering Problem with Latent Surface Representations
We propose a novel iterative numerical method to solve the three-dimensional
inverse obstacle scattering problem of recovering the shape of the obstacle
from far-field measurements. To address the inherent ill-posed nature of the
inverse problem, we advocate the use of a trained latent representation of
surfaces as the generative prior. This prior enjoys excellent expressivity
within the given class of shapes, and meanwhile, the latent dimensionality is
low, which greatly facilitates the computation. Thus, the admissible manifold
of surfaces is realistic and the resulting optimization problem is less
ill-posed. We employ the shape derivative to evolve the latent surface
representation, by minimizing the loss, and we provide a local convergence
analysis of a gradient descent type algorithm to a stationary point of the
loss. We present several numerical examples, including backscattered and
phaseless data, to showcase the effectiveness of the proposed algorithm.
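The core optimization loop described above can be sketched with stand-in operators. In this toy version (every operator here is a linear placeholder, not the paper's trained decoder or scattering solver), a low-dimensional latent code `z` is evolved by gradient descent so that the decoded "surface" `G @ z`, pushed through a surrogate far-field map `F`, matches the measurements.

```python
import numpy as np

# Toy sketch of latent-space inversion: all operators are stand-ins,
# not the paper's generative prior or forward scattering solver.
rng = np.random.default_rng(0)

d_latent, d_surface, d_data = 4, 20, 30
G = rng.standard_normal((d_surface, d_latent))   # stand-in generative prior
F = rng.standard_normal((d_data, d_surface))     # stand-in far-field operator

z_true = rng.standard_normal(d_latent)
y = F @ G @ z_true                               # synthetic far-field data

def loss(z):
    r = F @ G @ z - y
    return 0.5 * r @ r

def grad(z):
    # chain rule: d/dz 0.5 * ||F G z - y||^2 = (F G)^T (F G z - y)
    return (F @ G).T @ (F @ G @ z - y)

z = np.zeros(d_latent)                           # initial latent code
step = 1.0 / np.linalg.norm(F @ G, 2) ** 2       # 1/L for an L-smooth loss
for _ in range(2000):
    z -= step * grad(z)

print(f"final loss: {loss(z):.2e}")
```

Because the latent dimension is small and the composite map has full column rank here, plain gradient descent recovers the latent code; in the actual inverse scattering setting the forward map is nonlinear and the gradient comes from the shape derivative instead.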
Learning Robust Representation for Joint Grading of Ophthalmic Diseases via Adaptive Curriculum and Feature Disentanglement
Diabetic retinopathy (DR) and diabetic macular edema (DME) are leading causes
of permanent blindness worldwide. Designing an automatic grading system with
good generalization ability for DR and DME is vital in clinical practice.
However, prior works either grade DR or DME independently, without considering
internal correlations between them, or grade them jointly by shared feature
representation, yet ignoring potential generalization issues caused by
difficult samples and data bias. Aiming to address these problems, we propose a
framework for joint grading with the dynamic difficulty-aware weighted loss
(DAW) and the dual-stream disentangled learning architecture (DETACH). Inspired
by curriculum learning, DAW learns from simple samples to difficult samples
dynamically via measuring difficulty adaptively. DETACH separates features of
grading tasks to avoid potential emphasis on the bias. With the addition of DAW
and DETACH, the model learns robust disentangled feature representations to
explore internal correlations between DR and DME and achieve better grading
performance. Experiments on three benchmarks show the effectiveness and
robustness of our framework under both the intra-dataset and cross-dataset
tests.
Comment: Accepted by MICCAI2
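The curriculum idea behind a dynamic difficulty-aware weighted loss can be illustrated with a minimal sketch (the weighting formula below is a generic curriculum-style guess, not the paper's DAW loss): per-sample difficulty is measured adaptively from the current per-sample loss, and a schedule shifts emphasis from easy samples early in training to all samples later.

```python
import numpy as np

# Illustrative curriculum-style weighting; the formula is an assumption,
# not the DAW loss from the paper.
def difficulty_aware_weights(per_sample_loss, epoch, total_epochs):
    # difficulty measured adaptively as the normalized per-sample loss
    difficulty = per_sample_loss / (per_sample_loss.max() + 1e-12)
    # schedule: gamma decays from 1 (easy-focused) to 0 (uniform weights)
    gamma = 1.0 - epoch / total_epochs
    weights = (1.0 - difficulty) ** gamma
    return weights / weights.sum()

def weighted_loss(per_sample_loss, epoch, total_epochs):
    w = difficulty_aware_weights(per_sample_loss, epoch, total_epochs)
    return float(np.sum(w * per_sample_loss))

losses = np.array([0.1, 0.5, 2.0, 4.0])   # two easy and two hard samples
early = weighted_loss(losses, epoch=0, total_epochs=100)    # easy-focused
late = weighted_loss(losses, epoch=100, total_epochs=100)   # uniform
print(early, late)
```

Early in training the hard samples are down-weighted, so the weighted loss is dominated by easy samples; by the final epoch the weights are uniform and every sample contributes fully.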
Recollements of partially wrapped Fukaya categories and surface cuts
In this paper we use recollements to investigate partially wrapped Fukaya
categories of surfaces with marked points. In particular, we show that cutting
surfaces gives rise to recollements of the corresponding partially wrapped
Fukaya categories. Our approach is based on the fact that the partially wrapped
Fukaya category of a surface with marked points is triangle equivalent to the
perfect derived category of a homologically smooth and proper graded gentle
algebra with zero differential as shown by Haiden, Katzarkov and Kontsevich.
Using this, we study particular generators of partially wrapped Fukaya
categories, namely full exceptional sequences, silting objects and
simple-minded collections. In particular, we fully characterise the existence
of full exceptional sequences and we give an example of a partially wrapped
Fukaya category which does not admit a silting object, that is, a generator with
no positive self-extensions.
Comment: 30 pages. Comments are welcome. v2: changes to title, abstract and introduction.
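For orientation, the general shape of a recollement of triangulated categories (standard notation in the sense of Beilinson, Bernstein and Deligne; how the cut pieces of the surface occupy the outer slots is not specified here) is the following diagram of six functors.

```latex
% Standard recollement diagram; which Fukaya categories of the cut
% pieces sit on the left and right is left unspecified here.
\[
  \mathcal{T}' \;
  \underset{\;i^{!}\;}{\overset{\;i^{*}\;}{
    \mathrel{\substack{\longleftarrow\\[-2pt] \overset{i_{*}}{\longrightarrow}\\[-2pt] \longleftarrow}}
  }} \;
  \mathcal{T} \;
  \underset{\;j_{*}\;}{\overset{\;j_{!}\;}{
    \mathrel{\substack{\longleftarrow\\[-2pt] \overset{j^{*}}{\longrightarrow}\\[-2pt] \longleftarrow}}
  }} \;
  \mathcal{T}''
\]
% with (i^*, i_*), (i_*, i^!), (j_!, j^*), (j^*, j_*) adjoint pairs,
% i_* and j_!, j_* fully faithful, and j^* i_* = 0.
```

Cutting the surface then produces, per the abstract, recollements of this shape among the partially wrapped Fukaya categories of the original surface and the cut pieces.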