Distance-Decay Relationship for Biological Wastewater Treatment Plants.
Patterns in the spatial distribution of organisms provide important information about the mechanisms underlying biodiversity and the complexity of ecosystems. One of the best-documented spatial patterns is the distance-decay relationship, a universal biogeographic pattern observed repeatedly for plant and animal communities, and in particular for microorganisms in natural ecosystems such as soil, ocean, and salt marsh sediment. However, it is uncertain whether microorganisms exhibit a distance-decay pattern in engineered ecosystems. Therefore, we measured the distance-decay relationship across various microbial functional and phylogenetic groups in 26 biological wastewater treatment plants (WWTPs) in China using a functional gene array (GeoChip 4.2). We found that microbial communities of activated sludge in WWTPs exhibited a significant but very weak distance-decay relationship. The taxon-area z values for different functional and phylogenetic groups were <0.0065, about 1 to 2 orders of magnitude lower than those observed in microbial communities elsewhere. Variation-partitioning analysis (VPA) showed that the relationships were driven by both environmental heterogeneity and geographic distance. Collectively, these results provide new insights into the spatial scaling of microbial communities in engineered ecosystems and highlight the importance of environmental heterogeneity and geographic distance in shaping biogeographic patterns.
Importance: Determining the distance-decay relationship of microbial biodiversity is important but challenging in microbial ecology. All studies to date are based on natural environments; thus, it remains unclear whether such a relationship exists in engineered ecosystems. The present study shows that there is a very weak distance-decay relationship in an engineered ecosystem (WWTPs) at the regional-to-continental scale. This study makes a fundamental contribution to a mechanistic, predictive understanding of microbial biogeography.
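Taxon-area z values of this kind are commonly estimated from the slope of a log-log regression of community similarity against geographic distance. A minimal sketch of that fit in Python; the similarity data below are simulated placeholders, not the GeoChip measurements:

import numpy as np

def distance_decay_slope(distance_km, similarity):
    # Fit log10(similarity) = intercept + slope * log10(distance);
    # the magnitude of the (negative) slope quantifies distance decay.
    mask = (distance_km > 0) & (similarity > 0)
    slope, intercept = np.polyfit(np.log10(distance_km[mask]),
                                  np.log10(similarity[mask]), 1)
    return slope, intercept

# Hypothetical pairwise similarities among 26 plants (26*25/2 = 325 pairs);
# a slope magnitude below ~0.0065 would mirror the weak decay reported.
rng = np.random.default_rng(42)
d = rng.uniform(10.0, 3000.0, size=325)          # pairwise distances, km
s = 0.8 * d ** -0.006 * rng.normal(1.0, 0.02, size=325)
print(distance_decay_slope(d, s))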
UAE: Universal Anatomical Embedding on Multi-modality Medical Images
Identifying specific anatomical structures (\textit{e.g.}, lesions or
landmarks) in medical images plays a fundamental role in medical image
analysis. Exemplar-based landmark detection methods are receiving increasing attention because they can detect arbitrary anatomical points at inference time without needing landmark annotations during training. They use self-supervised learning to acquire a discriminative embedding for each voxel within the image. These approaches can identify corresponding landmarks through nearest-neighbor matching and have demonstrated promising results across various tasks. However,
current methods still face challenges in: (1) differentiating voxels with
similar appearance but different semantic meanings (\textit{e.g.}, two adjacent
structures without clear borders); (2) matching voxels with similar semantics
but markedly different appearance (\textit{e.g.}, the same vessel before and
after contrast injection); and (3) cross-modality matching (\textit{e.g.},
CT-MRI landmark-based registration). To overcome these challenges, we propose
universal anatomical embedding (UAE), which is a unified framework designed to
learn appearance, semantic, and cross-modality anatomical embeddings.
Specifically, UAE incorporates three key innovations: (1) semantic embedding
learning with prototypical contrastive loss; (2) a fixed-point-based matching
strategy; and (3) an iterative approach for cross-modality embedding learning.
We thoroughly evaluated UAE across intra- and inter-modality tasks, including one-shot landmark detection, lesion tracking on longitudinal CT scans, and CT-MRI affine/rigid registration with varying fields of view. Our results suggest that UAE outperforms state-of-the-art methods, offering a robust and versatile approach for landmark-based medical image analysis tasks. Code and trained models are available at \url{https://shorturl.at/bgsB3}.
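At inference time, exemplar-based detection of this kind reduces to a nearest-neighbor lookup in embedding space. A minimal sketch in Python, assuming voxel embeddings have already been extracted (UAE itself additionally fuses appearance, semantic, and cross-modality embeddings and uses a fixed-point-based matching strategy not shown here):

import numpy as np

def match_landmark(query_emb, cand_embs, cand_coords):
    # query_emb: (C,) embedding of the annotated voxel in the exemplar.
    # cand_embs: (N, C) embeddings of candidate voxels in the target image.
    # cand_coords: (N, 3) voxel coordinates of those candidates.
    q = query_emb / np.linalg.norm(query_emb)
    c = cand_embs / np.linalg.norm(cand_embs, axis=1, keepdims=True)
    sim = c @ q                       # cosine similarity to every candidate
    best = int(np.argmax(sim))
    return cand_coords[best], float(sim[best])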
Anatomy-Aware Lymph Node Detection in Chest CT using Implicit Station Stratification
Finding abnormal lymph nodes in radiological images is highly important for
various medical tasks such as cancer metastasis staging and radiotherapy
planning. Lymph nodes (LNs) are small glands scattered throughout the body.
They are grouped into various LN stations according to their anatomical locations. The CT imaging appearance and context of LNs in different
stations vary significantly, posing challenges for automated detection,
especially for pathological LNs. Motivated by this observation, we propose a
novel end-to-end framework to improve LN detection performance by leveraging
their station information. We design a multi-head detector and make each head
focus on differentiating the LN and non-LN structures of certain stations.
Pseudo station labels are generated by an LN station classifier as a form of
multi-task learning during training, so we do not need another explicit LN
station prediction model during inference. Our algorithm is evaluated on 82
patients with lung cancer and 91 patients with esophageal cancer. The proposed
implicit station stratification method improves the detection sensitivity of
thoracic lymph nodes from 65.1% to 71.4% and from 80.3% to 85.5% at 2 false
positives per patient on the two datasets, respectively, which significantly
outperforms existing state-of-the-art baseline techniques such as nnUNet, nnDetection, and LENS.
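One way to read the implicit stratification idea in code: a station classifier gates several LN/non-LN heads, so station prediction is absorbed into the detector rather than run as a separate model at inference. A simplified PyTorch sketch; the dimensions and soft gating are illustrative assumptions, not the paper's exact design:

import torch
import torch.nn as nn

class StationStratifiedHead(nn.Module):
    def __init__(self, feat_dim=256, num_stations=4):
        super().__init__()
        self.station_cls = nn.Linear(feat_dim, num_stations)
        self.heads = nn.ModuleList(
            [nn.Linear(feat_dim, 2) for _ in range(num_stations)])

    def forward(self, feats):                         # feats: (B, feat_dim)
        w = self.station_cls(feats).softmax(dim=-1)   # soft pseudo-station weights
        logits = torch.stack([h(feats) for h in self.heads], dim=1)  # (B, S, 2)
        return (w.unsqueeze(-1) * logits).sum(dim=1)  # station-weighted LN scores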
SAME++: A Self-supervised Anatomical eMbeddings Enhanced medical image registration framework using stable sampling and regularized transformation
Image registration is a fundamental medical image analysis task. Ideally,
registration should focus on aligning semantically corresponding voxels, i.e.,
the same anatomical locations. However, existing methods often optimize
similarity measures computed directly on intensities or on hand-crafted
features, which lack anatomical semantic information. These similarity measures
may lead to sub-optimal solutions where large deformations, complex anatomical
differences, or cross-modality imagery exist. In this work, we introduce a fast
and accurate method for unsupervised 3D medical image registration building on
top of a Self-supervised Anatomical eMbedding (SAM) algorithm, which is capable
of computing dense anatomical correspondences between two images at the voxel
level. We name our approach SAM-Enhanced registration (SAME++), which
decomposes image registration into four steps: affine transformation, coarse
deformation, deep non-parametric transformation, and instance optimization.
Using SAM embeddings, we enhance these steps by finding more coherent
correspondence and providing features with better semantic guidance. We
extensively evaluated SAME++ using more than 50 labeled organs on three
challenging inter-subject registration tasks of different body parts. As a
complete registration framework, SAME++ markedly outperforms leading methods in terms of Dice score while being orders of magnitude faster than numerical optimization-based methods. Code is available at
\url{https://github.com/alibaba-damo-academy/same}
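Given dense anatomical correspondences, the affine step of such a pipeline can be solved in closed form. A minimal sketch of a point-set affine fit by least squares; the actual SAME++ affine step adds stable sampling and outlier handling not shown here:

import numpy as np

def fit_affine(src_pts, dst_pts):
    # Least-squares affine transform mapping src_pts onto dst_pts.
    # src_pts, dst_pts: (N, 3) matched coordinates, e.g., from dense
    # embedding correspondences. Returns a 3x4 matrix [A | t].
    n = src_pts.shape[0]
    X = np.hstack([src_pts, np.ones((n, 1))])         # homogeneous coordinates
    M, *_ = np.linalg.lstsq(X, dst_pts, rcond=None)   # (4, 3) solution
    return M.T                                        # (3, 4)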
Parse and Recall: Towards Accurate Lung Nodule Malignancy Prediction like Radiologists
Lung cancer is a leading cause of death worldwide and early screening is
critical for improving survival outcomes. In clinical practice, the contextual structure of a nodule and the accumulated experience of radiologists are the two core elements that determine how accurately benign and malignant nodules are identified. Contextual information provides comprehensive information about
nodules such as location, shape, and peripheral vessels, and experienced
radiologists can search for clues from previous cases as a reference to enrich
the basis of decision-making. In this paper, we propose a radiologist-inspired
method to simulate the diagnostic process of radiologists, which is composed of
context parsing and prototype recalling modules. The context parsing module
first segments the context structure of nodules and then aggregates contextual
information for a more comprehensive understanding of the nodule. The prototype
recalling module utilizes prototype-based learning to condense previously learned cases into prototypes for comparative analysis; these prototypes are updated online in a momentum-based manner during training. Building on the two modules, our method
leverages both the intrinsic characteristics of the nodules and the external
knowledge accumulated from other nodules to achieve a sound diagnosis. To meet
the needs of both low-dose and noncontrast screening, we collect a large-scale
dataset of 12,852 and 4,029 nodules from low-dose and noncontrast CTs
respectively, each with pathology- or follow-up-confirmed labels. Experiments
on several datasets demonstrate that our method achieves advanced screening performance in both low-dose and noncontrast scenarios.
Comment: MICCAI 202
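The momentum-based online update typically means an exponential moving average of batch features per prototype. A minimal sketch; the momentum value and normalization are assumptions rather than the paper's exact settings:

import torch
import torch.nn.functional as F

def update_prototype(proto, feats, labels, cls_id, momentum=0.99):
    # EMA ("momentum") update of one class prototype from a batch.
    # proto: (C,) running prototype; feats: (B, C) nodule features;
    # labels: (B,) class ids. Returns the refreshed, normalized prototype.
    sel = feats[labels == cls_id]
    if sel.shape[0] == 0:
        return proto
    proto = momentum * proto + (1.0 - momentum) * sel.mean(dim=0)
    return F.normalize(proto, dim=0)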
Continual Segment: Towards a Single, Unified and Accessible Continual Segmentation Model of 143 Whole-body Organs in CT Scans
Deep learning empowers mainstream medical image segmentation methods. Nevertheless, current deep segmentation approaches cannot efficiently and effectively adapt and update a trained model when new incremental segmentation classes (with or without new training datasets) must be added. In real clinical environments, it is preferable that segmentation models can be dynamically extended to segment new organs/tumors without (re-)accessing previous training datasets, given the obstacles of patient privacy and data storage. This process can be viewed as a continual semantic segmentation (CSS) problem, which remains understudied for multi-organ segmentation. In this work, we propose a new architectural CSS learning
framework to learn a single deep segmentation model for segmenting a total of
143 whole-body organs. Using the encoder/decoder network structure, we
demonstrate that a continually-trained then frozen encoder coupled with
incrementally-added decoders can extract and preserve sufficiently
representative image features for new classes to be subsequently and validly
segmented. To maintain a single network model complexity, we trim each decoder
progressively using neural architecture search and teacher-student based
knowledge distillation. To accommodate both healthy and pathological organs appearing in different datasets, a novel anomaly-aware and confidence learning module is proposed to merge overlapping organ predictions originating from different decoders. Trained and validated on 3D CT scans of 2500+ patients from four datasets, our single network can segment a total of 143 whole-body organs with very high accuracy, closely reaching the upper-bound performance level of training four separate segmentation models (i.e., one model per dataset/task).
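Structurally, the continually-trained-then-frozen encoder with incrementally added decoders can be sketched as below; this is an architectural illustration only, leaving out the decoder trimming, distillation, and anomaly-aware prediction merging:

import torch.nn as nn

class ContinualSegmenter(nn.Module):
    # Shared encoder trained in the first step, then frozen; each new
    # group of classes gets its own decoder.
    def __init__(self, encoder, first_decoder):
        super().__init__()
        self.encoder = encoder
        self.decoders = nn.ModuleList([first_decoder])

    def add_task(self, new_decoder):
        for p in self.encoder.parameters():
            p.requires_grad = False        # freeze the shared representation
        self.decoders.append(new_decoder)  # only the new decoder trains

    def forward(self, x):
        feats = self.encoder(x)
        return [decoder(feats) for decoder in self.decoders]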
Effective Lymph Nodes Detection in CT Scans Using Location Debiased Query Selection and Contrastive Query Representation in Transformer
Lymph node (LN) assessment is a critical, indispensable yet very challenging
task in the routine clinical workflow of radiology and oncology. Accurate LN
analysis is essential for cancer diagnosis, staging, and treatment planning.
Finding scattered, low-contrast, clinically relevant LNs in 3D CT is difficult even for experienced physicians and is subject to high inter-observer variation. Previous automatic LN detection works typically yield limited
recall and high false positives (FPs) due to adjacent anatomies with similar
image intensities, shapes, or textures (vessels, muscles, esophagus, etc). In
this work, we propose a new LN DEtection TRansformer, named LN-DETR, to achieve
more accurate performance. We enhance the 2D backbone with multi-scale 2.5D feature fusion to incorporate 3D context explicitly and, more importantly, make two main contributions to improve the representation quality of LN queries. 1)
Considering that LN boundaries are often unclear, an IoU prediction head and a
location debiased query selection are proposed to select LN queries of higher
localization accuracy as the decoder query's initialization. 2) To reduce FPs,
query contrastive learning is employed to explicitly reinforce LN queries
towards their best-matched ground-truth queries over unmatched query
predictions. Trained and tested on 3D CT scans of 1067 patients (with 10,000+ labeled LNs) by combining seven LN datasets from different body parts (neck,
chest, and abdomen) and pathologies/cancers, our method significantly improves
the performance of previous leading methods by > 4-5% average recall at the
same FP rates in both internal and external testing. We further evaluate on the
universal lesion detection task using NIH DeepLesion benchmark, and our method
achieves the top performance of 88.46% averaged recall across 0.5 to 4 FPs per
image, compared with other leading reported results.
Comment: Technical report
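The query contrastive component can be approximated with an InfoNCE-style objective that treats the ground-truth-matched query as the positive and unmatched queries as negatives. A generic stand-in sketch, not LN-DETR's exact loss:

import torch
import torch.nn.functional as F

def query_contrastive_loss(matched_q, gt_emb, unmatched_q, tau=0.07):
    # matched_q: (C,) matched decoder query; gt_emb: (C,) its
    # ground-truth-aligned embedding; unmatched_q: (K, C) negatives.
    a = F.normalize(matched_q, dim=0)
    pos = F.normalize(gt_emb, dim=0)
    neg = F.normalize(unmatched_q, dim=1)
    logits = torch.cat([(a * pos).sum().unsqueeze(0), neg @ a]) / tau
    target = torch.zeros(1, dtype=torch.long)    # positive sits at index 0
    return F.cross_entropy(logits.unsqueeze(0), target)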
Matching in the Wild: Learning Anatomical Embeddings for Multi-Modality Images
Radiotherapists require accurate registration of MR/CT images to effectively
use information from both modalities. In a typical registration pipeline, rigid
or affine transformations are applied to roughly align the fixed and moving
images before proceeding with the deformation step. While recent learning-based
methods have shown promising results in the rigid/affine step, these methods
often require images with similar field-of-view (FOV) for successful alignment.
As a result, aligning images with different FOVs remains a challenging task.
Self-supervised landmark detection methods like self-supervised Anatomical
eMbedding (SAM) have emerged as a useful tool for mapping and cropping images
to similar FOVs. However, these methods are currently limited to intra-modality
use only. To address this limitation and enable cross-modality matching, we
propose a new approach called Cross-SAM. Our approach utilizes a novel
iterative process that alternates between embedding learning and CT-MRI
registration. We start by applying aggressive contrast augmentation on both CT
and MRI images to train a SAM model. We then use this SAM to identify
corresponding regions on paired images using robust grid-points matching,
followed by a point-set based affine/rigid registration, and a deformable
fine-tuning step to produce registered paired images. We use these registered pairs to enhance the matching ability of SAM, and this procedure is repeated iteratively. We use the final model for cross-modality matching tasks. We
evaluated our approach on two CT-MRI affine registration datasets and found
that Cross-SAM achieved robust affine registration on both datasets,
significantly outperforming other methods and achieving state-of-the-art
performance
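The alternation between embedding learning and registration reads naturally as a loop. A schematic Python sketch, in which every helper function (train_sam, grid_match, fit_point_affine, deformable_refine, finetune_sam) is a hypothetical placeholder for the corresponding stage described above:

def train_cross_sam(ct_scans, mr_scans, num_rounds=3):
    # Train an initial SAM with aggressive contrast augmentation, then
    # alternate: match grid points, register, and fine-tune on the pairs.
    model = train_sam(ct_scans + mr_scans, augment="aggressive_contrast")
    for _ in range(num_rounds):
        registered_pairs = []
        for ct, mr in zip(ct_scans, mr_scans):
            matches = grid_match(model, ct, mr)    # robust grid-point matching
            affine = fit_point_affine(matches)     # point-set affine/rigid step
            registered_pairs.append(deformable_refine(ct, mr, affine))
        model = finetune_sam(model, registered_pairs)
    return model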