A Survey on Evolutionary Computation for Computer Vision and Image Analysis: Past, Present, and Future Trends
Computer vision (CV) is a broad and important field of artificial intelligence covering a wide range of applications. Image analysis is a major task in CV that aims to extract, analyse and understand the visual content of images. However, image-related tasks are very challenging due to many factors, e.g., high variations across images, high dimensionality, the need for domain expertise, and image distortions. Evolutionary computation (EC) approaches have been widely used for image analysis with significant achievements. However, there is no comprehensive survey of existing EC approaches to image analysis. To fill this gap, this paper provides a comprehensive survey covering all essential EC approaches to important image analysis tasks, including edge detection, image segmentation, image feature analysis, image classification, object detection, and others. This survey aims to provide a better understanding of evolutionary computer vision (ECV) by discussing the contributions of different approaches and exploring how and why EC is used for CV and image analysis. The applications, challenges, issues, and trends associated with this research field are also discussed and summarised to provide further guidelines and opportunities for future research.
BSUV-Net: a fully-convolutional neural network for background subtraction of unseen videos
Background subtraction is a basic task in computer vision and video processing, often applied as a pre-processing step for object tracking, people recognition, etc. Recently, a number of successful background-subtraction algorithms have been proposed; however, nearly all of the top-performing ones are supervised. Crucially, their success relies upon the availability of some annotated frames of the test video during training. Consequently, their performance on completely “unseen” videos is undocumented in the literature. In this work, we propose a new, supervised background-subtraction algorithm for unseen videos (BSUV-Net) based on a fully-convolutional neural network. The input to our network consists of the current frame and two background frames captured at different time scales, along with their semantic segmentation maps. In order to reduce the chance of overfitting, we also introduce a new data-augmentation technique which mitigates the impact of illumination difference between the background frames and the current frame. On the CDNet-2014 dataset, BSUV-Net outperforms state-of-the-art algorithms evaluated on unseen videos in terms of several metrics including F-measure, recall and precision.
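The abstract describes a network input built from the current frame, two background frames captured at different time scales, and their semantic segmentation maps. A minimal sketch of how such an input could be assembled, assuming RGB frames and single-channel segmentation maps (the channel layout and ordering are illustrative assumptions, not taken from the paper):

```python
def stack_bsuv_input(current, recent_bg, empty_bg,
                     current_seg, recent_seg, empty_seg):
    """Concatenate frames and segmentation maps along the channel axis.

    Each frame is a list of 3 colour channels; each segmentation map is a
    single channel; a channel is an H x W nested list. The ordering
    (long-term background, recent background, current frame) is an
    assumption for illustration only.
    """
    channels = []
    for frame, seg in ((empty_bg, empty_seg),
                       (recent_bg, recent_seg),
                       (current, current_seg)):
        channels.extend(frame)   # 3 colour channels per frame
        channels.append(seg)     # 1 segmentation channel per frame
    return channels              # 3 * (3 + 1) = 12 channels total
```

With this layout, each RGB frame contributes four channels (three colour plus one segmentation), so the stacked input has twelve channels in total.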
Kartezio: Evolutionary Design of Explainable Pipelines for Biomedical Image Analysis
An unresolved issue in contemporary biomedicine is the overwhelming number and diversity of complex images that require annotation, analysis and interpretation. Recent advances in Deep Learning have revolutionized the field of computer vision, creating algorithms that compete with human experts in image segmentation tasks. Crucially, however, these frameworks require large human-annotated datasets for training, and the resulting models are difficult to interpret. In this study, we introduce Kartezio, a modular Cartesian Genetic Programming-based computational strategy that generates transparent and easily interpretable image processing pipelines by iteratively assembling and parameterizing computer vision functions. The pipelines thus generated exhibit comparable precision to state-of-the-art Deep Learning approaches on instance segmentation tasks, while requiring drastically smaller training datasets, a feature which confers tremendous flexibility, speed, and functionality to this approach. We also deployed Kartezio to solve semantic and instance segmentation problems in four real-world Use Cases, and showcase its utility in imaging contexts ranging from high-resolution microscopy to clinical pathology. By successfully implementing Kartezio on a portfolio of images ranging from subcellular structures to tumoral tissue, we demonstrated the flexibility, robustness and practical utility of this fully explicable evolutionary designer for semantic and instance segmentation.
Comment: 36 pages, 6 main Figures. The Extended Data Movie is available at the following link: https://www.youtube.com/watch?v=r74gdzb6hdA. The source code is available on Github: https://github.com/KevinCortacero/Kartezi
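Kartezio's Cartesian Genetic Programming core can be sketched, very roughly, as an evolutionary loop over genomes that encode chains of image-processing functions. The toy function set, linear genome encoding, pixel-accuracy fitness and (1 + lambda) loop below are illustrative assumptions over tiny pure-Python "images", not the authors' actual implementation:

```python
import random

def threshold(img):
    """Binarize at a fixed level (toy stand-in for a CV primitive)."""
    return [[255 if p >= 128 else 0 for p in row] for row in img]

def invert(img):
    return [[255 - p for p in row] for row in img]

def identity(img):
    return [row[:] for row in img]

FUNCTIONS = [threshold, invert, identity]

def evaluate(genome, img):
    """Apply the encoded functions in sequence: a degenerate single-path
    CGP graph, kept linear here for clarity."""
    out = img
    for fi in genome:
        out = FUNCTIONS[fi](out)
    return out

def fitness(genome, img, target):
    """Pixel accuracy of the evolved pipeline against a target mask."""
    pred = evaluate(genome, img)
    return sum(p == t for pr, tr in zip(pred, target)
               for p, t in zip(pr, tr))

def one_plus_lambda(img, target, generations=50, lam=4,
                    length=3, mut_rate=0.3, seed=0):
    """(1 + lambda) evolution strategy, the loop commonly used with CGP."""
    rng = random.Random(seed)
    parent = [rng.randrange(len(FUNCTIONS)) for _ in range(length)]
    best = fitness(parent, img, target)
    for _ in range(generations):
        for _ in range(lam):
            child = [rng.randrange(len(FUNCTIONS)) if rng.random() < mut_rate
                     else g for g in parent]
            score = fitness(child, img, target)
            if score >= best:           # ties accepted: neutral drift
                parent, best = child, score
    return parent, best
```

The winning genome reads off directly as an ordered list of named functions, which is what makes this style of pipeline easy to inspect compared to a trained network.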
Semantic Segmentation Network Stacking with Genetic Programming
Bakurov, I., Buzzelli, M., Schettini, R., Castelli, M., & Vanneschi, L. (2023). Semantic Segmentation Network Stacking with Genetic Programming. Genetic Programming and Evolvable Machines, 24(2 — Special Issue on Highlights of Genetic Programming 2022 Events), 1-37. https://doi.org/10.1007/s10710-023-09464-0
Funding: Open access funding provided by FCT|FCCN (b-on). This work was supported by national funds through the FCT (Fundação para a Ciência e a Tecnologia) via the projects GADgET (DSAIPA/DS/0022/2018), AICE (DSAIPA/DS/0113/2019), UIDB/04152/2020 - Centro de Investigação em Gestão de Informação (MagIC)/NOVA IMS, and by the grant SFRH/BD/137277/2018.
Semantic segmentation consists of classifying each pixel of an image and constitutes an essential step towards scene recognition and understanding. Deep convolutional encoder-decoder neural networks now constitute state-of-the-art methods in the field of semantic segmentation. The segmentation of street scenes for automotive applications constitutes an important application field of such networks and introduces a set of imperative exigencies. Since the models need to be executed on self-driving vehicles to make fast decisions in response to a constantly changing environment, they are expected not only to operate reliably but also to process the input images rapidly. In this paper, we explore genetic programming (GP) as a meta-model that combines four different efficiency-oriented networks for the analysis of urban scenes. Notably, we present and examine two approaches. In the first approach, we represent solutions as GP trees that combine the networks' outputs, such that each output class's prediction is obtained through the same meta-model. In the second approach, we propose representing solutions as lists of GP trees, each designed to provide a unique meta-model for a given target class.
The main objective is to develop efficient and accurate combination models that can be easily interpreted, thereby allowing us to gather hints on how to improve the existing networks. The experiments performed on the Cityscapes dataset of urban scene images with semantic pixel-wise annotations confirm the effectiveness of the proposed approach. Specifically, our best-performing models improve the systems' generalization ability by approximately 5% compared to traditional ensembles and 30% compared to the less performing state-of-the-art CNN, and show competitive results with respect to state-of-the-art ensembles. Additionally, they are small in size, allow interpretability, and use fewer features due to GP's automatic feature selection.
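The first approach described above, a single GP tree whose leaves are the base networks' class-probability outputs, can be sketched as follows; the function set and the example tree are illustrative assumptions, not the authors' evolved meta-models:

```python
import operator

def eval_tree(tree, outputs):
    """Evaluate a GP meta-model for one pixel and one class.

    tree: an int leaf (index of a base network) or a tuple
          (binary_op, left_subtree, right_subtree).
    outputs: per-network probabilities for that pixel and class.
    """
    if isinstance(tree, int):
        return outputs[tree]
    op, left, right = tree
    return op(eval_tree(left, outputs), eval_tree(right, outputs))

# Hypothetical meta-model: the sum of two networks' confidences,
# gated (multiplied) by a third network's confidence.
tree = (operator.mul, (operator.add, 0, 1), 2)
```

For instance, `eval_tree(tree, [0.8, 0.6, 0.5])` evaluates to 0.5 * (0.8 + 0.6) = 0.7 (up to floating-point rounding). The paper's second approach would evolve one such tree per target class rather than sharing a single meta-model.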