100 research outputs found
Automated High-resolution Earth Observation Image Interpretation: Outcome of the 2020 Gaofen Challenge
In this article, we introduce the 2020 Gaofen Challenge and its relevant scientific outcomes. The 2020 Gaofen Challenge is an international competition organized by the China High-Resolution Earth Observation Conference Committee and the Aerospace Information Research Institute, Chinese Academy of Sciences, and technically cosponsored by the IEEE Geoscience and Remote Sensing Society and the International Society for Photogrammetry and Remote Sensing. It aims to promote the academic development of automated high-resolution earth observation image interpretation. Six independent tracks were organized in this challenge, covering challenging problems in object detection and semantic segmentation. With the development of convolutional neural networks, deep-learning-based methods have achieved good performance on image interpretation. In this article, we report the details and the best-performing methods presented so far in the scope of this challenge.
Object Detection in 20 Years: A Survey
Object detection, as one of the most fundamental and challenging problems in
computer vision, has received great attention in recent years. Its development
in the past two decades can be regarded as an epitome of computer vision
history. If we think of today's object detection as a technical aesthetics
under the power of deep learning, then turning back the clock 20 years we would
witness the wisdom of cold weapon era. This paper extensively reviews 400+
papers of object detection in the light of its technical evolution, spanning
over a quarter-century's time (from the 1990s to 2019). A number of topics have
been covered in this paper, including the milestone detectors in history,
detection datasets, metrics, fundamental building blocks of the detection
system, speed up techniques, and the recent state of the art detection methods.
This paper also reviews some important detection applications, such as
pedestrian detection, face detection, text detection, etc., and makes an in-depth
analysis of their challenges as well as technical improvements in recent years.
Comment: This work has been submitted to the IEEE TPAMI for possible publication.
Rethinking RGB-D Salient Object Detection: Models, Data Sets, and Large-Scale Benchmarks
The use of RGB-D information for salient object detection has been
extensively explored in recent years. However, relatively few efforts have been
put towards modeling salient object detection in real-world human activity
scenes with RGBD. In this work, we fill the gap by making the following
contributions to RGB-D salient object detection. (1) We carefully collect a new
SIP (salient person) dataset, which consists of ~1K high-resolution images that
cover diverse real-world scenes from various viewpoints, poses, occlusions,
illuminations, and backgrounds. (2) We conduct a large-scale (and, so far, the
most comprehensive) benchmark comparing contemporary methods, which has long
been missing in the field and can serve as a baseline for future research. We
systematically summarize 32 popular models and evaluate 18 of the 32 models
on seven datasets containing a total of about 97K images. (3) We propose a
simple general architecture, called Deep Depth-Depurator Network (D3Net). It
consists of a depth depurator unit (DDU) and a three-stream feature learning
module (FLM), which performs low-quality depth map filtering and cross-modal
feature learning respectively. These components form a nested structure and are
elaborately designed to be learned jointly. D3Net exceeds the performance of
any prior contenders across all five metrics under consideration, thus serving
as a strong model to advance research in this field. We also demonstrate that
D3Net can be used to efficiently extract salient object masks from real scenes,
enabling effective background-changing applications at a speed of 65 fps on a
single GPU. All the saliency maps, our new SIP dataset, the D3Net model, and
the evaluation tools are publicly available at
https://github.com/DengPingFan/D3NetBenchmark.
Comment: Accepted in TNNLS20. 15 pages, 12 figures. Code: https://github.com/DengPingFan/D3NetBenchmark
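The depth-depurator idea above can be sketched in a few lines: score each depth map's quality and gate it out of the cross-modal fusion when the score is low. The quality heuristic (fraction of depth values in a plausible normalized range) and the threshold below are illustrative assumptions, not the learned unit described in the paper.

```python
def depth_quality(depth, valid_min=0.05, valid_max=0.95):
    """Fraction of pixels whose normalized depth falls in a plausible range."""
    flat = [v for row in depth for v in row]
    ok = sum(1 for v in flat if valid_min <= v <= valid_max)
    return ok / len(flat)

def gated_fusion(rgb_feat, depth_feat, depth, threshold=0.5):
    """Fuse RGB and depth features, dropping depth when its map looks noisy."""
    if depth_quality(depth) < threshold:
        return rgb_feat  # depurate: fall back to the RGB-only stream
    return [r + d for r, d in zip(rgb_feat, depth_feat)]  # cross-modal sum

clean = [[0.2, 0.6], [0.4, 0.8]]   # all values in range -> quality 1.0
noisy = [[0.0, 1.0], [0.0, 0.01]]  # saturated/invalid -> quality 0.0
print(gated_fusion([1.0, 2.0], [0.5, 0.5], clean))  # [1.5, 2.5]
print(gated_fusion([1.0, 2.0], [0.5, 0.5], noisy))  # [1.0, 2.0]
```

The gate makes the failure mode explicit: a low-quality depth map contributes nothing rather than polluting the fused features.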
Salient Object Detection via Integrity Learning
Although current salient object detection (SOD) methods have achieved impressive
progress, they still fall short when it comes to the integrity of the
predicted salient regions. We define the concept of integrity at both the micro
and macro level. Specifically, at the micro level, the model should highlight
all parts that belong to a certain salient object, while at the macro level,
the model needs to discover all salient objects from the given image scene. To
facilitate integrity learning for salient object detection, we design a novel
Integrity Cognition Network (ICON), which explores three important components
to learn strong integrity features. 1) Unlike the existing models that focus
more on feature discriminability, we introduce a diverse feature aggregation
(DFA) component to aggregate features with various receptive fields (i.e.,
kernel shape and context) and increase the feature diversity. Such diversity is
the foundation for mining the integral salient objects. 2) Based on the DFA
features, we introduce the integrity channel enhancement (ICE) component with
the goal of enhancing feature channels that highlight the integral salient
objects at the macro level, while suppressing the other distracting ones. 3)
After extracting the enhanced features, the part-whole verification (PWV)
method is employed to determine whether the part and whole object features have
strong agreement. Such part-whole agreements can further improve the
micro-level integrity for each salient object. To demonstrate the effectiveness
of ICON, comprehensive experiments are conducted on seven challenging
benchmarks, where promising results are achieved.
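The part-whole verification step can be illustrated with a toy agreement check: compare each part's feature vector against the whole-object feature and flag parts that disagree. The cosine-similarity measure and fixed threshold below are assumptions for illustration; ICON's actual PWV module is learned, not a hand-set rule.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def parts_agree(part_feats, whole_feat, threshold=0.8):
    """True if every part feature points in roughly the whole's direction."""
    return all(cosine(p, whole_feat) >= threshold for p in part_feats)

whole = [1.0, 1.0, 0.0]
good_parts = [[0.9, 1.1, 0.0], [1.0, 0.8, 0.1]]
stray_part = [[0.0, 0.0, 1.0]]  # orthogonal feature: likely not this object
print(parts_agree(good_parts, whole))               # True
print(parts_agree(good_parts + stray_part, whole))  # False
```

A part that fails the agreement check is a candidate for the micro-level integrity errors the abstract describes: a region the model should either re-attach to the object or suppress.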
Deep learning in crowd counting: A survey
Counting high-density objects quickly and accurately is a popular area of research. Crowd counting has significant social and economic value and is a major focus in artificial intelligence. Despite many advancements in this field, many of them are not widely known, especially in terms of research data. The authors proposed a three-tier standardised dataset taxonomy (TSDT), which divides datasets into small-scale, large-scale and hyper-scale according to different application scenarios. This taxonomy can help researchers make more efficient use of datasets and improve the performance of AI algorithms in specific fields. Additionally, the authors proposed a new evaluation index for the clarity of a dataset: average pixels occupied by each object (APO). This index is better suited than image resolution to evaluating the clarity of a dataset for the object counting task. Moreover, the authors classified crowd counting methods from a data-driven perspective into multi-scale networks, single-column networks, multi-column networks, multi-task networks, attention networks and weakly supervised networks, and introduced the classic crowd counting methods of each class. The authors classified the 36 existing datasets according to the three-tier standardised dataset taxonomy and discussed and evaluated these datasets. They evaluated the performance of more than 100 methods from the past five years on popular datasets at each level. Recently, progress in research on small-scale datasets has slowed down: there are few new datasets and algorithms targeting them, and studies focused on large- or hyper-scale datasets appear to be reaching a saturation point. The combined use of multiple approaches has begun to be a major research direction. The authors discussed the theoretical and practical challenges of crowd counting from the perspective of data, algorithms and computing resources.
The field of crowd counting is moving towards combining multiple methods and requires fresh, targeted datasets. Despite advancements, the field still faces challenges such as handling real-world scenarios and processing large crowds in real-time. Researchers are exploring transfer learning to overcome the limitations of small datasets. The development of effective algorithms for crowd counting remains a challenging and important task in computer vision and AI, with many opportunities for future research.
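The APO index mentioned above can be sketched under a simple assumption: take it as the image's total pixel count divided by the number of annotated objects, so sparser crowds score higher (each object occupies more pixels and is "clearer"). The survey's exact definition may differ in detail; this is only the plausible core of the metric.

```python
def apo(width, height, num_objects):
    """Average pixels occupied per annotated object (assumed formulation)."""
    if num_objects == 0:
        raise ValueError("image has no annotated objects")
    return (width * height) / num_objects

# A 1080p frame with 500 annotated heads vs. 50: the sparser scene gives
# each object far more pixels, i.e. a 'clearer' dataset for counting.
print(apo(1920, 1080, 500))  # 4147.2
print(apo(1920, 1080, 50))   # 41472.0
```

Unlike raw resolution, this number changes with crowd density, which is why the authors argue it better reflects dataset clarity for counting tasks.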
U-Net and its variants for medical image segmentation: theory and applications
U-net is an image segmentation technique developed primarily for medical
image analysis that can precisely segment images using a scarce amount of
training data. These traits provide U-net with a very high utility within the
medical imaging community and have resulted in extensive adoption of U-net as
the primary tool for segmentation tasks in medical imaging. The success of
U-net is evident in its widespread use in all major image modalities from CT
scans and MRI to X-rays and microscopy. Furthermore, while U-net is largely a
segmentation tool, there have been instances of the use of U-net in other
applications. As the potential of U-net is still increasing, in this review we
look at the various developments that have been made in the U-net architecture
and provide observations on recent trends. We examine the various innovations
that have been made in deep learning and discuss how these tools facilitate
U-net. Furthermore, we look at image modalities and application areas where
U-net has been applied.
Comment: 42 pages, in IEEE Access.
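The U-net pattern the review surveys can be reduced to a toy sketch: a contracting path that halves resolution, an expanding path that doubles it back, and a skip connection that merges encoder features into the decoder at the matching scale. Real U-nets use learned convolutions and concatenation; here the "features" are plain lists so only the data flow is illustrated.

```python
def downsample(x):
    """Halve resolution by averaging adjacent pairs (stand-in for pooling)."""
    return [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]

def upsample(x):
    """Double resolution by repeating each value (stand-in for up-conv)."""
    return [v for v in x for _ in range(2)]

def unet_like(x):
    skip = x                    # keep full-resolution features for the skip
    bottleneck = downsample(x)  # contracting path
    up = upsample(bottleneck)   # expanding path
    # skip connection: merge coarse context with fine detail per position
    return [(s + u) / 2 for s, u in zip(skip, up)]

signal = [0.0, 4.0, 2.0, 2.0]
print(unet_like(signal))  # [1.0, 3.0, 2.0, 2.0]
```

The skip connection is the key design choice: without it, the fine spatial detail discarded by downsampling could not be recovered, which is exactly what makes U-net effective at precise segmentation from scarce training data.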