Towards Explainability in Monocular Depth Estimation
The estimation of depth in two-dimensional images has long been a challenging
and extensively studied subject in computer vision. Recently, significant
progress has been made with the emergence of Deep Learning-based approaches,
which have proven highly successful. This paper focuses on the explainability
of monocular depth estimation methods, in terms of how humans perceive depth.
This preliminary study concentrates on one of the most significant visual cues,
relative size, which is prominent in almost all viewed images. We designed an
experiment that mimics the corresponding experiments on human subjects and
tested state-of-the-art methods to indirectly assess their explainability in
the context defined. In addition, we observed that measuring accuracy requires
particular care, and we propose a dedicated approach to this end. The results
show a mean accuracy of around 77% across methods, with some methods performing
markedly better, thus indirectly revealing their potential to uncover monocular
depth cues such as relative size.
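The relative-size check described above can be made concrete with a simple pairwise test: for two objects of the same class and comparable physical size, the one with the smaller image footprint should be predicted as farther away. The following sketch illustrates one way such a test could be operationalized; the function names, the bounding-box format, and the assumption that depth values grow with distance are illustrative choices, not details taken from the paper.

    import numpy as np

    def median_depth(depth_map, box):
        # Median predicted depth inside a bounding box given as (x0, y0, x1, y1).
        x0, y0, x1, y1 = box
        return float(np.median(depth_map[y0:y1, x0:x1]))

    def respects_relative_size(depth_map, box_small, box_large):
        # For two same-class objects of comparable physical size, the one with
        # the smaller image footprint should receive a larger (farther) depth.
        return median_depth(depth_map, box_small) > median_depth(depth_map, box_large)

    # Hypothetical evaluation over annotated pairs:
    # pairs = [(depth_map, box_of_smaller_object, box_of_larger_object), ...]
    # accuracy = np.mean([respects_relative_size(d, s, l) for d, s, l in pairs])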
VeSTIS: A Versatile Semi-Automatic Taxon Identification System from Digital Images
In this work we present a flexible Open Source software platform
for training classifiers capable of identifying the taxonomy of a specimen from
digital images. We demonstrate the performance of our system in a pilot
study, building a feed-forward artificial neural network to effectively classify
five different species of marine annelid worms of the class Polychaeta. We
also discuss the extensibility of the system and its potential uses, either as
a research tool or for assisting routine taxon identification procedures.
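As a rough illustration of the kind of feed-forward classifier described above, the sketch below trains a small multilayer perceptron on pre-extracted image feature vectors for five classes. It uses scikit-learn for brevity; the feature dimensionality, layer sizes, and randomly generated placeholder data are assumptions, not details of the VeSTIS implementation.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # Placeholder data: one feature vector per specimen image and an integer
    # label in {0..4} standing in for the five polychaete species.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 128))
    y = rng.integers(0, 5, size=500)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0)

    # Small feed-forward network with a single hidden layer.
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    clf.fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))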
Color reduction and estimation of the number of dominant colors by using a self-growing and self-organized neural gas
A new method for color reduction in a digital image is proposed, which is based on the development of a new neural network classifier and on a new method for the Estimation of the Most Important Classes (EMIC). The proposed neural network combines the features of the well-known Growing Neural Gas (GNG) and the Kohonen Self-Organized Feature Map (KSOFM) neural networks. We call the new neural network Self-Growing and Self-Organized Neural Gas (SGONG). This combination produces a new neural network with outstanding features. The proposed technique utilizes the GNG mechanism of growing the neural lattice and the KSOFM learning adaptation mechanism. In addition, by introducing a number of criteria that govern the insertion and removal of neurons, it is able to automatically determine the number of created neurons and their topology. Moreover, by applying the EMIC method, the produced classes can be filtered and the most important ones identified. The combination of SGONG and EMIC retains the isolated and significant colors with the minimum number of color classes. Both techniques can be fed with color as well as spatial features; for this reason, a similarity function is used for vector comparison. The method is applicable to any type of color image and can accommodate any color space.
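To make the growing and pruning idea concrete, the sketch below implements a much-simplified, GNG-flavored color quantizer: neurons compete for pixels, a new neuron is inserted near the neuron with the largest accumulated error, and neurons that rarely win are removed. It only illustrates the mechanism; the thresholds, the winner-only update rule, and the stopping criterion are assumptions and do not reproduce the actual SGONG or EMIC algorithms.

    import numpy as np

    def simplified_growing_quantizer(pixels, max_neurons=16, epochs=5,
                                     lr=0.05, min_win_frac=0.01):
        # Much-simplified growing/pruning color quantizer (illustrative only).
        # pixels: (N, 3) float array of RGB values in [0, 1].
        # Returns the final neuron (color class) positions.
        rng = np.random.default_rng(0)
        neurons = pixels[rng.choice(len(pixels), size=2, replace=False)].copy()

        for _ in range(epochs):
            wins = np.zeros(len(neurons))
            errors = np.zeros(len(neurons))
            for p in pixels[rng.permutation(len(pixels))[:5000]]:  # subsample
                d = np.linalg.norm(neurons - p, axis=1)
                w = int(np.argmin(d))                # winning neuron
                neurons[w] += lr * (p - neurons[w])  # winner-only adaptation
                wins[w] += 1
                errors[w] += d[w]

            # Pruning: drop neurons that rarely win, keeping at least two.
            keep = wins >= min_win_frac * max(wins.sum(), 1.0)
            if keep.sum() < 2:
                keep[np.argsort(wins)[-2:]] = True
            neurons, errors = neurons[keep], errors[keep]

            # Growing: insert a neuron near the one with the largest error.
            if len(neurons) < max_neurons:
                worst = int(np.argmax(errors))
                neurons = np.vstack([neurons,
                                     neurons[worst] + rng.normal(scale=0.02, size=3)])

        return neurons

    # Usage with a hypothetical image array img of shape (H, W, 3), values in [0, 1]:
    # palette = simplified_growing_quantizer(img.reshape(-1, 3))
    # labels = np.argmin(np.linalg.norm(img.reshape(-1, 3)[:, None] - palette, axis=2), axis=1)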