Bounding Box-Free Instance Segmentation Using Semi-Supervised Learning for Generating a City-Scale Vehicle Dataset
Vehicle classification is an active computer vision topic, with studies ranging
from ground-view to top-view imagery. In remote sensing, top-view images
support the analysis of city patterns, vehicle concentration, and traffic
management, among other applications. However, there are some difficulties when
aiming for pixel-wise classification: (a) most vehicle classification studies
use object detection methods, and most publicly available datasets are designed
for this task, (b) creating instance segmentation datasets is laborious, and
(c) traditional instance segmentation methods underperform on this task since
the objects are small. Thus, the present research objectives are: (1) propose a
novel semi-supervised iterative learning approach using GIS software, (2)
propose a box-free instance segmentation approach, and (3) provide a city-scale
vehicle dataset. The iterative learning procedure considered: (1) label a small
number of vehicles, (2) train on those samples, (3) use the model to classify
the entire image, (4) convert the image prediction into a polygon shapefile,
(5) correct some areas with errors and include them in the training data, and
(6) repeat until results are satisfactory. To separate instances, we considered
vehicle interiors and vehicle borders as separate classes, and the DL model was
a U-Net with an EfficientNet-B7 backbone. When the borders are removed, the vehicle interior
becomes isolated, allowing for unique object identification. To recover the
deleted 1-pixel borders, we proposed a simple method to expand each prediction.
The results show better pixel-wise metrics than Mask R-CNN (82% versus 67%
IoU). In the per-object analysis, overall accuracy, precision, and recall were
all above 90%. This pipeline applies to any remote sensing target and is very
efficient for segmentation and dataset generation. Comment: 38 pages, 10 figures, submitted to journal
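The border-based instance separation described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' actual implementation: it assumes a binary "vehicle interior" mask from which the 1-pixel borders have already been erased, uses connected-component labeling to assign a unique id to each isolated interior, and approximates the proposed border-recovery step by growing every instance by one pixel.

```python
import numpy as np
from scipy import ndimage

def interiors_to_instances(interior_mask: np.ndarray) -> np.ndarray:
    """Turn a binary 'vehicle interior' mask (borders already erased)
    into an instance map, then grow each instance by one pixel to
    approximate the deleted 1-pixel borders."""
    # Connected components: each isolated interior gets a unique id.
    labels, _ = ndimage.label(interior_mask)
    # Grow every instance by one pixel; grey_dilation writes the largest
    # neighboring label into adjacent background pixels.
    grown = ndimage.grey_dilation(labels, size=(3, 3))
    return np.where(labels == 0, grown, labels)

# Two toy 'vehicles' separated by an erased 1-pixel border column.
mask = np.zeros((5, 7), dtype=np.uint8)
mask[1:4, 1:3] = 1   # interior of vehicle 1
mask[1:4, 4:6] = 1   # interior of vehicle 2
inst = interiors_to_instances(mask)
print(int(inst.max()))  # → 2 instances found
```

Ties in the 1-pixel gap are resolved toward the larger label here; the paper's exact expansion rule may differ.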
FastAER Det: Fast Aerial Embedded Real-Time Detection
Automated detection of objects in aerial imagery is the basis for many applications, such as search and rescue operations, activity monitoring, or mapping. However, in many cases it is beneficial to employ a detector on board the aerial platform in order to avoid latencies, make basic decisions within the platform, and save transmission bandwidth. In this work, we address the task of designing such an on-board aerial object detector, which must meet certain requirements in accuracy, inference speed, and power consumption. For this, we first outline a generally applicable design process for such on-board methods and then follow this process to develop our own set of models for the task. Specifically, we first optimize a baseline model with regard to accuracy without increasing runtime. We then propose a fast detection head to significantly improve runtime at little cost in accuracy. Finally, we discuss several aspects to consider during deployment and in the runtime environment. Our resulting four models, which operate at 15, 30, 60, and 90 FPS on an embedded Jetson AGX device, are published for future benchmarking and comparison by the community.
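Verifying an FPS requirement like the 15–90 FPS targets above amounts to timing repeated inference on a fixed frame. A minimal, hedged sketch (the detector here is a stand-in lambda, not the paper's model; on real hardware you would pass the actual inference call):

```python
import time
import numpy as np

def measure_fps(detector, frame, warmup=10, runs=100):
    """Estimate end-to-end inference FPS by averaging over many runs."""
    for _ in range(warmup):            # let caches and lazy init settle
        detector(frame)
    start = time.perf_counter()
    for _ in range(runs):
        detector(frame)
    elapsed = time.perf_counter() - start
    return runs / elapsed

# Stand-in detector: sleeps ~11 ms per frame, i.e. just under 90 FPS.
frame = np.zeros((512, 512, 3), dtype=np.uint8)
fps = measure_fps(lambda f: time.sleep(0.011), frame)
print(round(fps))
```

On accelerator hardware the call must also synchronize the device before stopping the clock, or the measurement only captures kernel-launch time.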
Object Detection in 20 Years: A Survey
Object detection, as one of the most fundamental and challenging problems in
computer vision, has received great attention in recent years. Its development
in the past two decades can be regarded as an epitome of computer vision
history. If we think of today's object detection as a technical aesthetics
under the power of deep learning, then turning back the clock 20 years we would
witness the wisdom of the cold-weapon era. This paper extensively reviews 400+
papers of object detection in the light of its technical evolution, spanning
over a quarter-century's time (from the 1990s to 2019). A number of topics have
been covered in this paper, including the milestone detectors in history,
detection datasets, metrics, fundamental building blocks of the detection
system, speed up techniques, and the recent state of the art detection methods.
This paper also reviews some important detection applications, such as
pedestrian detection, face detection, text detection, etc., and makes an
in-depth analysis of their challenges as well as technical improvements in
recent years. Comment: This work has been submitted to the IEEE TPAMI for
possible publication
Deep learning & remote sensing: pushing the frontiers in image segmentation
Dissertation (Master's in Informatics), Universidade de Brasília, Instituto de Ciências Exatas, Departamento de Ciência da Computação, Brasília, 2022.
Image segmentation aims to simplify the understanding of digital images. Deep learning-based
methods using convolutional neural networks have been game-changing, allowing the exploration
of different tasks (e.g., semantic, instance, and panoptic segmentation). Semantic segmentation
assigns a class to every pixel in an image, instance segmentation classifies objects at a pixel
level with a unique identifier for each target, and panoptic segmentation combines instance-level predictions with different backgrounds. Remote sensing data largely benefits from those
methods, being very suitable for developing new DL algorithms and creating solutions using
top-view images. However, some peculiarities prevent remote sensing using orbital and aerial
imagery from growing when compared to traditional ground-level images (e.g., camera photos):
(1) The images are extensive, (2) it presents different characteristics (e.g., number of channels
and image format), (3) a high number of pre-processes and post-processes steps (e.g., extracting
patches and classifying large scenes), and (4) most open software for labeling and deep learning applications are not friendly to remote sensing due to the aforementioned reasons. This
dissertation aimed to improve all three main categories of image segmentation. Within the instance segmentation domain, we proposed three experiments. First, we enhanced the box-based
instance segmentation approach for classifying large scenes, allowing practical pipelines to be
implemented. Second, we created a bounding-box free method to reach instance segmentation
results by using semantic segmentation models in a scenario with sparse objects. Third, we
improved the previous method for crowded scenes and developed the first study considering
semi-supervised learning using remote sensing and GIS data. Subsequently, in the panoptic
segmentation domain, we presented the first remote sensing panoptic segmentation dataset, containing fourteen classes, and provided software and a methodology for converting GIS data into
the panoptic segmentation format. Since our first study considered RGB images, we extended
our approach to multispectral data. Finally, we leveraged the box-free method initially designed
for instance segmentation to the panoptic segmentation task. This dissertation analyzed various
segmentation methods and image types, and the developed solutions enable the exploration of
new tasks (such as panoptic segmentation), the simplification of labeling data (using the proposed semi-supervised learning procedure), and a simplified way to obtain instance and panoptic
predictions using simple semantic segmentation models.
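A core detail of any GIS-to-COCO conversion like the one described above is encoding per-pixel segment ids into the COCO panoptic PNG format, where each id is spread across the three RGB channels as id = R + 256·G + 256²·B. A minimal sketch of that encoding (the function names are illustrative, not the dissertation's actual tooling):

```python
import numpy as np

def ids_to_panoptic_png(seg_ids: np.ndarray) -> np.ndarray:
    """Encode a 2-D array of segment ids into an RGB image using the
    COCO panoptic convention: id = R + 256*G + 256**2*B."""
    rgb = np.zeros(seg_ids.shape + (3,), dtype=np.uint8)
    rgb[..., 0] = seg_ids % 256            # low byte -> red channel
    rgb[..., 1] = (seg_ids // 256) % 256   # middle byte -> green channel
    rgb[..., 2] = seg_ids // 256 ** 2      # high byte -> blue channel
    return rgb

def panoptic_png_to_ids(rgb: np.ndarray) -> np.ndarray:
    """Inverse: recover segment ids from the RGB encoding."""
    rgb = rgb.astype(np.uint32)
    return rgb[..., 0] + 256 * rgb[..., 1] + 256 ** 2 * rgb[..., 2]

# Round trip over ids that exercise all three bytes (0 is background).
ids = np.array([[0, 1], [256, 70000]], dtype=np.uint32)
assert np.array_equal(panoptic_png_to_ids(ids_to_panoptic_png(ids)), ids)
```

The accompanying JSON (`segments_info` with category, area, and bounding box per id) is the other half of the format; only the pixel encoding is shown here.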