MMFO: modified moth flame optimization algorithm for region based RGB color image segmentation
Region-based color image segmentation is an elementary step in image processing and computer vision. It is a region-growing approach in which an RGB color image is divided into different clusters based on pixel properties, and it faces the problem of multidimensionality: the color image is treated as a five-dimensional problem, with three dimensions in color (RGB) and two in geometry (the luminosity and chromaticity layers). In this paper, conversion to the L*a*b color space is used to remove one dimension, and the geometric component is flattened into an array, removing a further dimension. The paper introduces MMFO (Modified Moth Flame Optimization), an improved bio-inspired algorithm for RGB color image segmentation. In simulations, MMFO performs region-based color image segmentation with lower computation times than PSO and GA for all test images, and the method yields clear segments according to the colors present and the number of clusters used during segmentation.
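The pipeline this abstract describes — convert RGB to L*a*b, then cluster pixels by their chromaticity (a*, b*) channels — can be sketched as below. A plain k-means stands in for the MMFO optimiser (the moth-flame update rules are not reproduced here), and the two-colour toy "image" is an illustrative assumption:

```python
import numpy as np

def rgb_to_lab(rgb):
    """Convert an (N, 3) array of sRGB values in [0, 1] to CIE L*a*b* (D65)."""
    # sRGB -> linear RGB
    lin = np.where(rgb > 0.04045, ((rgb + 0.055) / 1.055) ** 2.4, rgb / 12.92)
    # linear RGB -> XYZ
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = lin @ M.T
    xyz = xyz / np.array([0.95047, 1.0, 1.08883])   # normalise by white point
    f = np.where(xyz > 0.008856, np.cbrt(xyz), 7.787 * xyz + 16.0 / 116.0)
    L = 116.0 * f[:, 1] - 16.0
    a = 500.0 * (f[:, 0] - f[:, 1])
    b = 200.0 * (f[:, 1] - f[:, 2])
    return np.stack([L, a, b], axis=1)

def kmeans(points, k, iters=20):
    """Plain k-means; a stand-in for the paper's MMFO optimiser."""
    # Simple deterministic init for this sketch: first and last point.
    centers = points[[0, len(points) - 1]].copy()
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        d = np.linalg.norm(points[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels

# Toy "image": 100 pixels drawn from two colour groups (reddish and bluish).
pixels = np.vstack([np.tile([0.9, 0.1, 0.1], (50, 1)),
                    np.tile([0.1, 0.1, 0.9], (50, 1))])
lab = rgb_to_lab(pixels)
labels = kmeans(lab[:, 1:], k=2)   # cluster on a*, b* only (chromaticity)
```

Clustering on the a*/b* plane alone is exactly the dimensionality reduction the abstract motivates: luminosity is set aside and pixels are grouped by chromatic content.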
RGB-D Salient Object Detection: A Survey
Salient object detection (SOD), which simulates the human visual perception
system to locate the most attractive object(s) in a scene, has been widely
applied to various computer vision tasks. Now, with the advent of depth
sensors, depth maps with affluent spatial information that can be beneficial in
boosting the performance of SOD, can easily be captured. Although various RGB-D
based SOD models with promising performance have been proposed over the past
several years, an in-depth understanding of these models and challenges in this
topic remains lacking. In this paper, we provide a comprehensive survey of
RGB-D based SOD models from various perspectives, and review related benchmark
datasets in detail. Further, considering that the light field can also provide
depth maps, we review SOD models and popular benchmark datasets from this
domain as well. Moreover, to investigate the SOD ability of existing models, we
carry out a comprehensive evaluation, as well as attribute-based evaluation of
several representative RGB-D based SOD models. Finally, we discuss several
challenges and open directions of RGB-D based SOD for future research. All
collected models, benchmark datasets, source code links, datasets constructed
for attribute-based evaluation, and codes for evaluation will be made publicly
available at https://github.com/taozh2017/RGBDSODsurvey
Comment: 24 pages, 12 figures. Has been accepted by Computational Visual Media
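The comprehensive evaluation this survey describes typically relies on standard SOD metrics. A minimal sketch of two common ones, MAE and the thresholded F-measure with the customary beta^2 = 0.3, follows; the 4x4 mask and prediction are illustrative assumptions, not data from the survey:

```python
import numpy as np

def mae(pred, gt):
    """Mean Absolute Error between a saliency map and a binary ground truth."""
    return float(np.abs(pred - gt).mean())

def f_measure(pred, gt, threshold=0.5, beta2=0.3):
    """F-measure at a fixed threshold; beta^2 = 0.3 is the usual SOD choice."""
    binary = pred >= threshold
    tp = np.logical_and(binary, gt == 1).sum()
    precision = tp / max(binary.sum(), 1)
    recall = tp / max((gt == 1).sum(), 1)
    if precision + recall == 0:
        return 0.0
    return float((1 + beta2) * precision * recall / (beta2 * precision + recall))

gt = np.zeros((4, 4))
gt[1:3, 1:3] = 1          # ground-truth object mask (a 2x2 square)
pred = gt * 0.9           # near-perfect predicted saliency map

score_mae = mae(pred, gt)        # 0.1 error on 4 of 16 pixels -> 0.025
score_f = f_measure(pred, gt)    # thresholded map matches gt exactly -> 1.0
```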
MatSpectNet: Material Segmentation Network with Domain-Aware and Physically-Constrained Hyperspectral Reconstruction
Achieving accurate material segmentation for 3-channel RGB images is
challenging due to the considerable variation in a material's appearance.
Hyperspectral images, which are sets of spectral measurements sampled at
multiple wavelengths, theoretically offer distinct information for material
identification, as variations in intensity of electromagnetic radiation
reflected by a surface depend on the material composition of a scene. However,
existing hyperspectral datasets are impoverished regarding the number of images
and material categories for the dense material segmentation task, and
collecting and annotating hyperspectral images with a spectral camera is
prohibitively expensive. To address this, we propose a new model, the
MatSpectNet to segment materials with recovered hyperspectral images from RGB
images. The network leverages the principles of colour perception in modern
cameras to constrain the reconstructed hyperspectral images and employs the
domain adaptation method to generalise the hyperspectral reconstruction
capability from a spectral recovery dataset to material segmentation datasets.
The reconstructed hyperspectral images are further filtered using learned
response curves and enhanced with human perception. The performance of
MatSpectNet is evaluated on the LMD dataset as well as the OpenSurfaces
dataset. Our experiments demonstrate that MatSpectNet attains a 1.60% increase
in average pixel accuracy and a 3.42% improvement in mean class accuracy
compared with the most recent publication. The project code is attached to the
supplementary material and will be published on GitHub.
Comment: 7 pages main paper
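The colour-perception principle this abstract leverages reduces to the fact that a camera's RGB value is the incoming spectrum integrated against per-channel spectral response curves, rgb_c = sum_l S_c(l) * x(l). A toy sketch of that projection follows; the Gaussian response curves and the sample spectrum are hypothetical stand-ins, not MatSpectNet's learned curves:

```python
import numpy as np

# Wavelength samples in nm (31 bands across the visible range).
wavelengths = np.linspace(400, 700, 31)

def gaussian(mu, sigma=30.0):
    """A smooth bump centred at wavelength mu; purely illustrative."""
    return np.exp(-0.5 * ((wavelengths - mu) / sigma) ** 2)

# Hypothetical camera response curves for the R, G, B channels, shape (3, 31).
responses = np.stack([gaussian(610), gaussian(540), gaussian(460)])

def project_to_rgb(spectra, responses):
    """Integrate each spectrum against the channel responses: (N, 31) -> (N, 3)."""
    return spectra @ responses.T

spectrum = gaussian(610)                         # a reddish surface spectrum
rgb = project_to_rgb(spectrum[None, :], responses)[0]
```

Constraining a reconstructed hyperspectral image so that this projection reproduces the observed RGB pixel is the kind of physical consistency check the abstract refers to: a reddish spectrum must project to a value with R > G > B.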
A Survey on FPGA-Based Sensor Systems: Towards Intelligent and Reconfigurable Low-Power Sensors for Computer Vision, Control and Signal Processing
The current trend in the evolution of sensor systems seeks ways to provide greater accuracy and resolution while decreasing size and power consumption. Field Programmable Gate Arrays (FPGAs) provide reprogrammable hardware that can be exploited to obtain a reconfigurable sensor system. This adaptation capability enables the implementation of complex applications using partial reconfiguration at very low power consumption. For highly demanding tasks, FPGAs have been favored for the high efficiency afforded by their architectural flexibility (parallelism, on-chip memory, etc.), their reconfigurability, and their performance in implementing algorithms. FPGAs have improved the performance of sensor systems and have triggered a clear increase in their use in new fields of application. A new generation of smarter, reconfigurable, lower-power sensors is being developed in Spain based on FPGAs. This paper presents a review of these developments, describes the FPGA technologies employed by the different research groups, and provides an overview of future research within this field.
The research leading to these results has received funding from the Spanish Government and European FEDER funds (DPI2012-32390), the Valencia Regional Government (PROMETEO/2013/085) and the University of Alicante (GRE12-17)