100 research outputs found
Rethinking Image Forgery Detection via Contrastive Learning and Unsupervised Clustering
Image forgery detection aims to detect and locate forged regions in an image.
Most existing forgery detection algorithms formulate the task as a
classification problem, classifying pixels as forged or pristine. However, the definition of forged and
pristine pixels is only relative within one single image, e.g., a forged region
in image A is actually a pristine one in its source image B (splicing forgery).
Such a relative definition has been severely overlooked by existing methods,
which unnecessarily mix forged (pristine) regions across different images into
the same category. To resolve this dilemma, we propose the FOrensic ContrAstive
cLustering (FOCAL) method, a novel, simple yet very effective paradigm based on
contrastive learning and unsupervised clustering for image forgery
detection. Specifically, FOCAL 1) utilizes pixel-level contrastive learning to
supervise the high-level forensic feature extraction in an image-by-image
manner, explicitly reflecting the above relative definition; 2) employs an
on-the-fly unsupervised clustering algorithm (instead of a trained one) to
cluster the learned features into forged/pristine categories, further
suppressing cross-image influence from the training data; and 3) allows the
detection performance to be further boosted via simple feature-level
concatenation, without retraining. Extensive experimental results over six public
testing datasets demonstrate that our proposed FOCAL significantly outperforms
the state-of-the-art competing algorithms by large margins: +24.3% on Coverage,
+18.6% on Columbia, +17.5% on FF++, +14.2% on MISD, +13.5% on CASIA and +10.3%
on NIST in terms of IoU. The paradigm of FOCAL could bring fresh insights and
serve as a novel benchmark for the image forgery detection task. The code is
available at https://github.com/HighwayWu/FOCAL
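The two components described above can be illustrated with a minimal NumPy sketch: an InfoNCE-style contrastive loss computed within a single image (so "forged" and "pristine" remain relative labels), followed by an on-the-fly 2-way clustering of the features at test time. Function names and the farthest-point initialization are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def per_image_contrastive_loss(feats, mask, tau=0.1):
    """InfoNCE-style pixel contrastive loss computed within ONE image.

    feats: (N, D) L2-normalized per-pixel features.
    mask:  (N,) 0/1 labels (pristine/forged), valid only relative to this image.
    """
    sim = feats @ feats.T / tau                    # pairwise similarities
    np.fill_diagonal(sim, -np.inf)                 # exclude self-pairs
    same = (mask[:, None] == mask[None, :])        # positives share a label
    np.fill_diagonal(same, False)
    log_prob = sim - np.log(np.exp(sim).sum(1, keepdims=True))
    pos_counts = np.maximum(same.sum(1), 1)
    # negative mean log-probability over each anchor's positives
    return -(np.where(same, log_prob, 0.0).sum(1) / pos_counts).mean()

def cluster_two_way(feats, iters=10):
    """On-the-fly 2-way k-means (no trained classifier) over pixel features."""
    c0 = feats[0]
    c1 = feats[((feats - c0) ** 2).sum(1).argmax()]  # farthest-point init
    centers = np.stack([c0, c1]).astype(float)
    for _ in range(iters):
        assign = ((feats[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        for k in (0, 1):
            if (assign == k).any():
                centers[k] = feats[assign == k].mean(0)
    return assign
```

With features that separate forged from pristine pixels, the loss is lower when the mask matches the feature groups, and the clustering recovers the two groups without any trained classification head.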
Deep Learning for Genomics: A Concise Overview
Advancements in genomic research such as high-throughput sequencing
techniques have driven modern genomic studies into "big data" disciplines. This
data explosion is constantly challenging conventional methods used in genomics.
In parallel with the urgent demand for robust algorithms, deep learning has
succeeded in a variety of fields such as vision, speech, and text processing.
Yet genomics poses unique challenges for deep learning, since we expect deep
learning to achieve a superhuman intelligence that explores beyond our current
knowledge to interpret the genome. A powerful deep learning model should rely on
insightful utilization of task-specific knowledge. In this paper, we briefly
discuss the strengths of different deep learning models from a genomic
perspective so as to fit each particular task with a proper deep architecture,
and remark on practical considerations of developing modern deep learning
architectures for genomics. We also provide a concise review of deep learning
applications in various aspects of genomic research, and point out potential
opportunities and obstacles for future genomics applications.
Comment: Invited chapter for Springer Book: Handbook of Deep Learning
Application
Weakly Supervised 3D Open-vocabulary Segmentation
Open-vocabulary segmentation of 3D scenes is a fundamental function of human
perception and thus a crucial objective in computer vision research. However,
this task is heavily impeded by the lack of large-scale and diverse 3D
open-vocabulary segmentation datasets for training robust and generalizable
models. Distilling knowledge from pre-trained 2D open-vocabulary segmentation
models helps, but it compromises the open-vocabulary capability, as the 2D
models are mostly fine-tuned on close-vocabulary datasets. We tackle the challenges
in 3D open-vocabulary segmentation by exploiting pre-trained foundation models
CLIP and DINO in a weakly supervised manner. Specifically, given only the
open-vocabulary text descriptions of the objects in a scene, we distill the
open-vocabulary multimodal knowledge and object reasoning capability of CLIP
and DINO into a neural radiance field (NeRF), which effectively lifts 2D
features into view-consistent 3D segmentation. A notable aspect of our approach
is that it does not require any manual segmentation annotations for either the
foundation models or the distillation process. Extensive experiments show that
our method even outperforms fully supervised models trained with segmentation
annotations in certain scenes, suggesting that 3D open-vocabulary segmentation
can be effectively learned from 2D images and text-image pairs. Code is
available at \url{https://github.com/Kunhao-Liu/3D-OVS}.
Comment: Accepted to NeurIPS 202
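The labeling step implied by this distillation can be sketched simply: once per-pixel (or per-3D-point) features are aligned with a CLIP-like text embedding space, open-vocabulary segmentation reduces to a nearest-text lookup over the class descriptions. The following NumPy sketch is an illustration under that assumption, not the paper's actual pipeline:

```python
import numpy as np

def open_vocab_segment(point_feats, text_embeds):
    """Assign each pixel/3D point the text class with highest cosine similarity.

    point_feats: (N, D) features distilled from a CLIP-like model.
    text_embeds: (C, D) embeddings of the open-vocabulary class descriptions.
    Both are assumed L2-normalized; names here are illustrative.
    """
    logits = point_feats @ text_embeds.T   # (N, C) cosine similarities
    return logits.argmax(axis=1)           # per-point class index
```

Because the class set enters only through `text_embeds`, new categories can be segmented at test time simply by embedding new text descriptions, with no segmentation annotations involved.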
An In-Depth Statistical Review of Retinal Image Processing Models from a Clinical Perspective
The burgeoning field of retinal image processing is critical in facilitating early diagnosis and treatment of retinal diseases, which are amongst the leading causes of vision impairment globally. Despite rapid advancements, existing machine learning models for retinal image processing are characterized by significant limitations, including disparities in pre-processing, segmentation, and classification methodologies, as well as inconsistencies in post-processing operations. These limitations hinder the realization of accurate, reliable, and clinically relevant outcomes. This paper provides an in-depth statistical review of extant machine learning models used in retinal image processing, meticulously comparing them based on their internal operating characteristics and performance levels. By adopting a robust analytical approach, our review delineates the strengths and weaknesses of current models, offering comprehensive insights that are instrumental in guiding future research and development in this domain. Furthermore, this review underscores the potential clinical impacts of these models, highlighting their pivotal role in enhancing diagnostic accuracy, prognostic assessments, and therapeutic interventions for retinal disorders. In conclusion, our work not only bridges the existing knowledge gap in the literature but also paves the way for the evolution of more sophisticated and clinically aligned retinal image processing models, ultimately contributing to improved patient outcomes and advancements in ophthalmic care.
Multimedia Forensics
This book is open access. Media forensics has never been more relevant to societal life. Not only does media content represent an ever-increasing share of the data traveling on the net and the preferred means of communication for most users, it has also become an integral part of the most innovative applications in the digital information ecosystem serving various sectors of society, from entertainment to journalism to politics. Undoubtedly, advances in deep learning and computational imaging have contributed significantly to this outcome. The underlying technologies that drive this trend, however, also pose a profound challenge in establishing trust in what we see, hear, and read, and make media content the preferred target of malicious attacks. In this new threat landscape, powered by innovative imaging technologies and sophisticated tools based on autoencoders and generative adversarial networks, this book fills an important gap. It presents a comprehensive review of state-of-the-art forensics capabilities relating to media attribution, integrity and authenticity verification, and counter forensics. Its content is developed to provide practitioners, researchers, photo and video enthusiasts, and students with a holistic view of the field.
A Dimensional Structure based Knowledge Distillation Method for Cross-Modal Learning
Due to limitations in data quality, some essential visual tasks are difficult
to perform independently. Introducing previously unavailable information to
transfer informative dark knowledge has been a common way to solve such hard
tasks. However, research on why transferred knowledge works has not been
extensively explored. To address this issue, in this paper, we discover the
correlation between feature discriminability and dimensional structure (DS) by
analyzing and observing features extracted from simple and hard tasks. On this
basis, we express DS using deep channel-wise correlation and intermediate
spatial distribution, and propose a novel cross-modal knowledge distillation
(CMKD) method for better supervised cross-modal learning (CML) performance. The
proposed method enforces output features to be channel-wise independent and
intermediate ones to be uniformly distributed, thereby learning semantically
irrelevant features from the hard task to boost its accuracy. This is
especially useful in specific applications where the performance gap between
dual modalities is relatively large. Furthermore, we collect a real-world CML
dataset to promote community development. The dataset contains more than 10,000
paired optical and radar images and is continuously being updated. Experimental
results on real-world and benchmark datasets validate the effectiveness of the
proposed method
Investigation of Solar Flare Classification to Identify Optimal Performance
A solar flare is an intense, short-lived brightening observed on the Sun. Solar flares consist of high-energy photons and particles, which induce strong electric fields and currents and can therefore disrupt space-borne and ground-based technological systems. Extracting informative flare features for prediction is also a challenging task. Convolutional Neural Networks have gained significant popularity in classification and localization tasks. This paper focuses on classifying solar flares that emerged in different years by stacking convolutional layers followed by max-pooling layers. Following AlexNet, the pooling layers employed in this paper use overlapping pooling. Two activation functions, ELU and CReLU, are also compared, to investigate how many convolutional layers with a particular activation function yield the best results on this dataset, as datasets in this domain are always small. The proposed investigation can further inform novel solar flare prediction systems.
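The building blocks named above (ELU, CReLU, and AlexNet-style overlapping pooling, where the pooling window is larger than the stride) can be sketched in NumPy as follows; these are the standard definitions, not the paper's code:

```python
import numpy as np

def elu(x, alpha=1.0):
    """ELU activation: identity for x > 0, alpha * (exp(x) - 1) otherwise."""
    return np.where(x > 0, x, alpha * (np.exp(x) - 1))

def crelu(x):
    """CReLU: concatenate ReLU(x) and ReLU(-x), doubling the channel count."""
    return np.concatenate([np.maximum(x, 0), np.maximum(-x, 0)], axis=-1)

def overlapping_max_pool(x, size=3, stride=2):
    """AlexNet-style overlapping max pooling (size > stride) on a 2D map."""
    h, w = x.shape
    out_h = (h - size) // stride + 1
    out_w = (w - size) // stride + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            window = x[i * stride:i * stride + size,
                       j * stride:j * stride + size]
            out[i, j] = window.max()
    return out
```

With `size=3, stride=2` adjacent pooling windows share a row/column of inputs, which AlexNet reported as slightly reducing overfitting compared to non-overlapping pooling; CReLU doubles the feature channels, which can help when, as here, the dataset is small.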