We're Not Using Videos Effectively: An Updated Domain Adaptive Video Segmentation Baseline
There has been abundant work in unsupervised domain adaptation for semantic
segmentation (DAS) seeking to adapt a model trained on images from a labeled
source domain to an unlabeled target domain. While the vast majority of prior
work has studied this as a frame-level Image-DAS problem, a few Video-DAS works
have sought to additionally leverage the temporal signal present in adjacent
frames. However, Video-DAS works have historically studied a distinct set of
benchmarks from Image-DAS, with minimal cross-benchmarking. In this work, we
address this gap. Surprisingly, we find that (1) even after carefully
controlling for data and model architecture, state-of-the-art Image-DAS methods
(HRDA and HRDA+MIC) outperform Video-DAS methods on established Video-DAS
benchmarks (+14.5 mIoU on Viper→CityscapesSeq, +19.0 mIoU on
Synthia→CityscapesSeq), and (2) naive combinations of Image-DAS and
Video-DAS techniques only lead to marginal improvements across datasets. To
avoid siloed progress between Image-DAS and Video-DAS, we open-source our
codebase with support for a comprehensive set of Video-DAS and Image-DAS
methods on a common benchmark. Code available at
https://github.com/SimarKareer/UnifiedVideoDA
Comment: TMLR 202
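The gains above are reported in mIoU (mean Intersection-over-Union), the standard semantic-segmentation metric. As a quick reference, here is a minimal sketch of the metric; the function name `mean_iou` is ours, not taken from the released codebase:

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean Intersection-over-Union between two integer label maps."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:                 # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))
```

On a toy 2x2 pair, e.g. `pred = [[0, 1], [1, 1]]` vs. `target = [[0, 1], [0, 1]]`, class 0 scores IoU 1/2 and class 1 scores 2/3, so the mean is 7/12.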
Cali-Sketch: Stroke Calibration and Completion for High-Quality Face Image Generation from Poorly-Drawn Sketches
Image generation task has received increasing attention because of its wide
application in security and entertainment. Sketch-based face generation offers
more engaging interaction and better image quality thanks to the supervision a sketch provides.
However, when a sketch that is poorly aligned with the true face is given as input,
existing supervised image-to-image translation methods often cannot generate
acceptable photo-realistic face images. To address this problem, in this paper
we propose Cali-Sketch, a poorly-drawn-sketch to photo-realistic-image
generation method. Cali-Sketch explicitly models stroke calibration and image
generation using two constituent networks: a Stroke Calibration Network (SCN),
which calibrates strokes of facial features and enriches facial details while
preserving the original intent features; and an Image Synthesis Network (ISN),
which translates the calibrated and enriched sketches to photo-realistic face
images. In this way, we manage to decouple a difficult cross-domain translation
problem into two easier steps. Extensive experiments verify that the face
photos generated by Cali-Sketch are both photo-realistic and faithful to the
input sketches, compared with state-of-the-art methods.
Comment: 10 pages, 12 figures
PØDA: Prompt-driven Zero-shot Domain Adaptation
Domain adaptation has been vastly investigated in computer vision but still
requires access to target images at train time, which might be intractable in
some uncommon conditions. In this paper, we propose the task of 'Prompt-driven
Zero-shot Domain Adaptation', where we adapt a model trained on a source domain
using only a single general textual description of the target domain, i.e., a
prompt. First, we leverage a pretrained contrastive vision-language model
(CLIP) to optimize affine transformations of source features, steering them
towards target text embeddings, while preserving their content and semantics.
Second, we show that augmented features can be used to perform zero-shot domain
adaptation for semantic segmentation. Experiments demonstrate that our method
significantly outperforms CLIP-based style transfer baselines on several
datasets for the downstream task at hand. Our prompt-driven approach even
outperforms one-shot unsupervised domain adaptation on some datasets, and gives
comparable results on others. Our code is available at
https://github.com/astra-vision/PODA
Comment: Project page: https://astra-vision.github.io/PODA
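The core idea of the abstract, optimizing affine transformations of source features so they move toward a target text embedding, can be sketched in a few lines. This is an illustrative toy example only, not the authors' code: `source_feats` and `text_emb` are random stand-ins for CLIP image features and a CLIP text embedding, and the optimization loop simply learns a per-channel scale and shift that increases cosine similarity to the target-prompt embedding:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
dim = 512
source_feats = torch.randn(32, dim)   # placeholder for CLIP image features
text_emb = torch.randn(dim)           # placeholder for CLIP text embedding

# Learnable per-channel affine transform (scale, shift) of source features.
scale = torch.ones(dim, requires_grad=True)
shift = torch.zeros(dim, requires_grad=True)
opt = torch.optim.Adam([scale, shift], lr=0.05)

losses = []
for _ in range(100):
    opt.zero_grad()
    steered = source_feats * scale + shift              # affine steering
    cos = F.cosine_similarity(steered, text_emb.unsqueeze(0), dim=1)
    loss = (1.0 - cos).mean()                           # pull toward the prompt
    loss.backward()
    opt.step()
    losses.append(loss.item())
```

In the actual method the steered features additionally preserve the content and semantics of the source features; this sketch shows only the steering objective.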
Cancer diagnosis using deep learning: A bibliographic review
In this paper, we first describe the basics of the field of cancer diagnosis, covering the steps of cancer diagnosis and the typical classification methods used by doctors, to give readers a historical view of cancer classification techniques. These methods include the Asymmetry, Border, Color and Diameter (ABCD) method, the seven-point detection method, the Menzies method, and pattern analysis. They are used regularly by doctors for cancer diagnosis, although they are not considered very efficient at delivering strong performance. Moreover, with a broad audience in mind, the basic evaluation criteria are also discussed. The criteria include the receiver operating characteristic curve (ROC curve), area under the ROC curve (AUC), F1 score, accuracy, specificity, sensitivity, precision, Dice coefficient, average accuracy, and Jaccard index.
Previously used methods are considered inefficient, calling for better and smarter methods for cancer diagnosis. Artificial intelligence is therefore gaining attention as a way to build better diagnostic tools; in particular, deep neural networks can be successfully used for intelligent image analysis. This study outlines the basic framework of how such machine learning operates on medical imaging: pre-processing, image segmentation, and post-processing. The second part of this manuscript describes the different deep learning techniques, such as convolutional neural networks (CNNs), generative adversarial networks (GANs), deep autoencoders (DANs), restricted Boltzmann machines (RBMs), stacked autoencoders (SAEs), convolutional autoencoders (CAEs), recurrent neural networks (RNNs), long short-term memory (LSTM), multi-scale convolutional neural networks (M-CNN), and multi-instance learning convolutional neural networks (MIL-CNN). For each technique, we provide Python code so that interested readers can experiment with the cited algorithms on their own diagnostic problems.
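Several of the evaluation criteria listed above reduce to simple ratios of confusion-matrix counts. A minimal sketch for the binary case (the function name `binary_metrics` is ours, not from the reviewed works; class 1 is taken as positive):

```python
import numpy as np

def binary_metrics(pred, target):
    """Common evaluation criteria computed from binary label arrays."""
    tp = np.sum((pred == 1) & (target == 1))
    fp = np.sum((pred == 1) & (target == 0))
    fn = np.sum((pred == 0) & (target == 1))
    tn = np.sum((pred == 0) & (target == 0))
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)          # recall / true-positive rate
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    dice = 2 * tp / (2 * tp + fp + fn)    # equals F1 for binary masks
    jaccard = tp / (tp + fp + fn)         # intersection over union
    return dict(precision=precision, sensitivity=sensitivity,
                specificity=specificity, accuracy=accuracy,
                f1=f1, dice=dice, jaccard=jaccard)
```

Note that for binary masks the Dice coefficient coincides with the F1 score, while the Jaccard index is always the smaller of the two whenever the overlap is imperfect.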
The third part of this manuscript compiles the successfully applied deep learning models for different types of cancer. Given the length of the manuscript, we restrict ourselves to breast cancer, lung cancer, brain cancer, and skin cancer. The purpose of this bibliographic review is to give researchers who choose to implement deep learning and artificial neural networks for cancer diagnosis a from-scratch view of the state-of-the-art achievements.