10,000+ Times Accelerated Robust Subset Selection (ARSS)
Subset selection from massive data with noisy information is increasingly
popular in various applications. The problem remains highly challenging, as
current methods are generally slow and sensitive to outliers. To address these
two issues, we propose an accelerated robust subset selection (ARSS) method.
Specifically, this is the first attempt in the subset selection area to employ
a -norm based measure for the representation loss, preventing large errors
from dominating our objective. As a result, robustness against outlier
elements is greatly enhanced. Moreover, the data size is generally much larger
than the feature length, i.e. . Based on this observation, we propose a
speed-up solver (via ALM and equivalent derivations) that greatly reduces the
computational cost, theoretically from to . Extensive experiments on ten
benchmark datasets verify that our method not only outperforms
state-of-the-art methods, but also runs 10,000+ times faster than the most
related method.
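The robustness claim above rests on a standard observation: under a squared loss, a single large residual can dominate the whole objective, whereas a norm that grows more slowly bounds each sample's influence. The following toy sketch (illustrative only, not the ARSS algorithm; the residual values are invented) makes the effect concrete:

```python
import numpy as np

# Toy illustration (not the ARSS algorithm itself): why a non-squared
# loss keeps outliers from dominating a representation objective.
# `residuals` are hypothetical per-sample reconstruction errors; the
# last one is an outlier.
residuals = np.array([0.1, 0.2, 0.15, 10.0])

squared_loss = np.sum(residuals ** 2)     # outlier dominates the total
l1_loss = np.sum(np.abs(residuals))       # grows linearly in the error

outlier_share_sq = residuals[-1] ** 2 / squared_loss   # > 99% of the loss
outlier_share_l1 = abs(residuals[-1]) / l1_loss        # noticeably smaller
```

Under the squared loss the outlier carries essentially the entire objective, so minimizing it amounts to fitting the outlier; the linear-growth loss leaves the inliers a meaningful share.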
GFI1 downregulation promotes inflammation-linked metastasis of colorectal cancer.
Inflammation is frequently associated with the initiation, progression, and metastasis of colorectal cancer (CRC). Here, we unveil a CRC-specific metastatic programme that is triggered via the transcriptional repressor GFI1. Using data from a large cohort of clinical samples, including inflammatory bowel disease and CRC, and a cellular model of CRC progression mediated by cross-talk between the cancer cell and the inflammatory microenvironment, we identified GFI1 as a gating regulator responsible for a constitutively activated signalling circuit that renders CRC cells competent for metastatic spread. Further analysis of mouse models with metastatic CRC and human clinical specimens reinforced the influence of GFI1 downregulation in promoting CRC metastatic spread. This is the first time such a role for GFI1 has been uncovered in a human solid tumour such as CRC. Our results imply that GFI1 is a potential therapeutic target for interfering with inflammation-induced CRC progression and spread.
DEEP REPRESENTATION LEARNING ON GIGA-PIXEL WHOLE SLIDE IMAGES
I present my work towards solving a fundamental, challenging and valuable problem in automatically processing giga-pixel whole slide pathology images (WSIs): their representation. Specifically, I target the combination of three critical aspects of the problem: (1) it is not feasible, from an engineering standpoint, to feed WSIs directly into existing convolutional neural networks because they are too large; (2) parameters pre-trained in other domains may not transfer effectively to pathology images; and (3) both image samples and annotations for those images are rarely available. To evaluate the effectiveness of the developed methods, I mainly focus on primary and important applications in medicine: survival prediction and clinical outcome prediction. I approach the solution step by step. First, I address the core problem of making survival predictions from images by proposing the DeepConvSurv network, which integrates the strengths of a convolutional neural network with the Cox proportional hazards model. To better utilize patients' clinical information, and to partially address the difficulty of training models on small medical datasets, I propose DeepMultiSurv, built on DeepConvSurv, which trains multiple tasks on multi-modality data. Because annotations on WSIs are scarce, I further create an innovative framework named WSISA that performs survival prediction from all of a patient's WSIs without any annotations from pathologists. Last but not least, I develop a powerful end-to-end WSI representation learning method, WSINet, which addresses the three major challenges efficiently and effectively. Owing to its compelling end-to-end learning and representation nature, WSINet can be adopted in various WSI-based applications such as survival prediction, tumor subtype classification, and biochemical indicator prediction.
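The pairing of a neural network with the Cox proportional hazards model described above usually means training the network's risk outputs with the negative Cox partial log-likelihood. A minimal sketch of that loss follows; the function name and the toy patient data are illustrative (not from the thesis), and ties in survival times are ignored for simplicity:

```python
import numpy as np

# Hedged sketch of the negative Cox partial log-likelihood that
# DeepConvSurv-style models attach to a network's risk outputs.
def cox_neg_log_likelihood(risk, time, event):
    """risk: predicted log-hazard per patient; time: survival/censoring
    time; event: 1 if the death was observed, 0 if censored."""
    order = np.argsort(-time)                  # descending survival time
    risk, event = risk[order], event[order]
    # running sum over the risk set: patients with time >= current time
    log_risk_set = np.log(np.cumsum(np.exp(risk)))
    n_events = max(event.sum(), 1)
    # only observed events contribute to the partial likelihood
    return -np.sum((risk - log_risk_set) * event) / n_events

# Toy usage: three patients, two observed events, one censored.
loss = cox_neg_log_likelihood(
    risk=np.array([0.5, -0.2, 0.1]),
    time=np.array([2.0, 5.0, 3.0]),
    event=np.array([1, 0, 1]),
)
```

In a DeepConvSurv-style setup, `risk` would be the scalar output of the convolutional network for each patient's image, and this loss would be minimized by backpropagation.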
Scribble Hides Class: Promoting Scribble-Based Weakly-Supervised Semantic Segmentation with Its Class Label
Scribble-based weakly-supervised semantic segmentation, which relies on sparse
scribble supervision, is gaining traction because it reduces annotation costs
compared with fully annotated alternatives. Existing methods primarily
generate pseudo-labels by diffusing labels from annotated pixels to unlabeled
ones using local cues. However, this diffusion process fails to exploit global
semantics and class-specific cues, which are important for semantic
segmentation. In this
study, we propose a class-driven scribble promotion network, which utilizes
both scribble annotations and pseudo-labels informed by image-level classes and
global semantics for supervision. Since directly adopting pseudo-labels might
misguide the segmentation model, we design a localization rectification
module to correct foreground representations in the feature space. To further
combine the advantages of both forms of supervision, we also introduce a
distance entropy loss for uncertainty reduction, which adapts per-pixel
confidence weights according to the reliable region determined by the scribble
and the pseudo-label boundary. Experiments on the ScribbleSup dataset with
scribble annotations of varying quality show that our method outperforms all
previous approaches, demonstrating its superiority and robustness. The code is
available at
https://github.com/Zxl19990529/Class-driven-Scribble-Promotion-Network
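The per-pixel confidence weighting described above can be sketched in a simplified form: trust the scribble supervision more where a pixel lies close to an annotated stroke, and fall back to the pseudo-label elsewhere. This is an illustrative stand-in, not the paper's exact distance entropy formulation; `tau` is a made-up temperature and `d_scribble` denotes each pixel's distance to the nearest scribble pixel:

```python
import numpy as np

# Illustrative sketch (not the paper's exact loss): blend per-pixel
# scribble and pseudo-label losses with a distance-based confidence.
def blended_loss(loss_scribble, loss_pseudo, d_scribble, tau=5.0):
    # confidence in the scribble decays with distance from the stroke
    w = np.exp(-d_scribble / tau)
    # weighted per-pixel combination, averaged over the image
    return np.mean(w * loss_scribble + (1.0 - w) * loss_pseudo)
```

At a pixel on the scribble (`d_scribble == 0`) the weight is 1 and only the scribble term contributes; far from any stroke the weight decays to 0 and the pseudo-label term takes over.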
Low-Rank Mixture-of-Experts for Continual Medical Image Segmentation
The primary goal of continual learning (CL) in the medical image segmentation
field is to solve the "catastrophic forgetting" problem, in which a model
completely forgets previously learned features when it is extended to new
categories (class-level) or tasks (task-level). Because of privacy protection,
the labels of historical data are inaccessible. Prevalent continual learning
methods primarily focus on generating pseudo-labels for old datasets to force
the model to retain the learned features. However, incorrect pseudo-labels may
corrupt the learned features, leading to a new problem: the better the model
is trained on the old task, the worse it performs on the new tasks. To avoid
this problem, we propose a network that introduces a data-specific
Mixture-of-Experts (MoE) structure to handle new tasks or categories, ensuring
that the network parameters of previous tasks are unaffected or only minimally
impacted. To further overcome the substantial memory cost of introducing
additional structures, we propose a low-rank strategy that significantly
reduces it. We validate our method on both class-level and task-level
continual learning challenges. Extensive experiments on multiple datasets show
that our model outperforms all other methods.
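The memory saving from the low-rank strategy follows from a standard factorization argument: storing two thin factors instead of a full weight matrix cuts an expert's parameter count from d² to 2dr. The sketch below uses illustrative sizes (not the paper's configuration) to show the arithmetic and the factored forward pass:

```python
import numpy as np

# Sketch of the low-rank idea behind the memory saving: each new expert
# stores two thin factors A (d x r) and B (r x d) instead of a full
# d x d weight. All names and sizes here are illustrative.
d, r = 512, 8
rng = np.random.default_rng(0)
A = rng.standard_normal((d, r))
B = rng.standard_normal((r, d))

full_params = d * d                    # 262,144 for a dense expert
low_rank_params = d * r + r * d        # 8,192 for the factored expert
ratio = low_rank_params / full_params  # 0.03125, i.e. 1/32 of the memory

x = rng.standard_normal(d)
y = (x @ A) @ B                        # expert forward pass, rank <= r
```

The trade-off is that the effective weight `A @ B` has rank at most `r`, so `r` controls both the memory budget and the expressiveness of each added expert.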
- …
