Citrus Fruit Feature Extraction using Colpromatix Color Code Model
Citrus fruit can be classified more precisely and economically under natural illumination conditions. The aim of this paper was to develop robust feature extraction techniques that discover citrus fruit features of different dimensions and under different illumination conditions. To identify an object residing in an image, the image must be described or represented by certain features. This paper proposes a citrus fruit feature extraction process for deriving the classification. The proposed system performs two tasks: i) image pre-processing, carried out with a hybrid noise filter to remove noise; and ii) citrus fruit feature extraction, using the new Colpromatix color space model together with size, texture, shape, and coarseness descriptors. Shape is an important visual feature of an image. Different feature representation and description techniques are discussed in this review paper. Feature extraction techniques play an important role in systems for object recognition, matching, extraction, and analysis. The paper also presents a comparison between various techniques
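The abstract does not specify how the Colpromatix model computes its features, so the following is only a generic sketch of color-statistics feature extraction in an alternative (HSV) color space; the function name, the choice of HSV, and the mean/standard-deviation features are all illustrative assumptions, not the paper's method:

```python
import colorsys
from statistics import mean, pstdev

def color_features(pixels):
    """Simple per-channel HSV statistics as a feature vector.

    `pixels` is a list of (r, g, b) tuples with values in [0, 1].
    The mean and standard deviation of each HSV channel stand in
    for the paper's (unspecified) color-code features.
    """
    hsv = [colorsys.rgb_to_hsv(r, g, b) for r, g, b in pixels]
    features = []
    for channel in zip(*hsv):          # iterate over H, S, V columns
        features.append(mean(channel))
        features.append(pstdev(channel))
    return features  # [mean_H, std_H, mean_S, std_S, mean_V, std_V]

# A small orange-ish patch: high red, medium green, low blue.
patch = [(0.9, 0.5, 0.1), (0.85, 0.45, 0.05), (0.95, 0.55, 0.15)]
feats = color_features(patch)
```

Such a vector would then be fed to a downstream classifier; a real system would add the size, texture, shape, and coarseness descriptors the abstract lists.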
Enhancing face recognition at a distance using super resolution
Surveillance video is generally characterized by low-resolution and blurred images. Decreases in image resolution lead to loss of high-frequency facial components, which is expected to adversely affect recognition rates. Super resolution (SR) is a technique used to generate a higher resolution image from a given low-resolution, degraded image. Dictionary-based super resolution pre-processing techniques have been developed to overcome the problem of low-resolution images in face recognition. However, the super resolution reconstruction process is ill-posed and produces visual artifacts that can be distracting to humans and/or degrade machine feature extraction and face recognition algorithms. In this paper, we investigate the impact on face recognition of two existing super-resolution methods that reconstruct a high-resolution image from single or multiple low-resolution images. We propose an alternative scheme that is based on dictionaries in high-frequency wavelet subbands. The performance of the proposed method will be evaluated on databases of high- and low-resolution images captured under different illumination conditions and at different distances. We shall demonstrate that the proposed approach at level-3 DWT decomposition has superior performance in comparison to the other super resolution methods
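The scheme above operates on high-frequency wavelet subbands. As a minimal illustration of where those subbands come from, here is one level of a 2-D Haar discrete wavelet transform in plain NumPy; the dictionary learning and reconstruction steps of the paper are not shown, and the Haar basis is only an assumed example of a DWT:

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar wavelet transform.

    Returns the approximation subband (LL) and the three
    high-frequency subbands (LH, HL, HH). Dictionary-based SR
    methods of the kind the abstract describes learn coupled
    low/high-resolution dictionaries on subbands such as these.
    """
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0      # low-low: coarse image
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return ll, lh, hl, hh

img = np.arange(16, dtype=float).reshape(4, 4)
ll, lh, hl, hh = haar_dwt2(img)
```

A level-3 decomposition, as evaluated in the paper, would apply the same transform recursively to the LL subband twice more.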
Fusion of Multispectral Data Through Illumination-aware Deep Neural Networks for Pedestrian Detection
Multispectral pedestrian detection has received extensive attention in recent years as a promising solution for robust human target detection in around-the-clock applications (e.g. security surveillance and autonomous driving). In this paper, we demonstrate that illumination information encoded in multispectral images can be utilized to significantly boost pedestrian detection performance. A novel illumination-aware weighting mechanism is presented to accurately depict the illumination condition of a scene. This illumination information is incorporated into two-stream deep convolutional neural networks to learn multispectral human-related features under different illumination conditions (daytime and nighttime). Moreover, we utilize illumination information together with multispectral data to generate more accurate semantic segmentation, which is used to boost pedestrian detection accuracy. Putting all of the pieces together, we present a powerful framework for multispectral pedestrian detection based on multi-task learning of illumination-aware pedestrian detection and semantic segmentation. Our proposed method is trained end-to-end using a well-designed multi-task loss function and outperforms state-of-the-art approaches on the KAIST multispectral pedestrian dataset
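The core idea of an illumination-aware weighting mechanism can be sketched as a scalar gate that blends the two streams. The following NumPy snippet is only an illustrative sketch: the function name, the sigmoid gating, and the scalar `illum_score` input are assumptions, whereas the paper uses learned two-stream deep networks:

```python
import numpy as np

def illumination_gate(rgb_feat, thermal_feat, illum_score):
    """Blend colour and thermal feature maps by an illumination weight.

    `illum_score` is a scalar logit from a (hypothetical) illumination
    estimation branch; a sigmoid maps it to w in (0, 1). Daytime
    scenes (w near 1) lean on the RGB stream, nighttime scenes (w near
    0) on the thermal stream.
    """
    w = 1.0 / (1.0 + np.exp(-illum_score))      # sigmoid
    return w * rgb_feat + (1.0 - w) * thermal_feat

rgb = np.full((2, 2), 0.8)      # strong colour-stream response
thermal = np.full((2, 2), 0.2)  # weaker thermal-stream response
fused_day = illumination_gate(rgb, thermal, illum_score=4.0)    # daytime
fused_night = illumination_gate(rgb, thermal, illum_score=-4.0) # nighttime
```

In the paper's framework this weighting is learned jointly with detection and semantic segmentation through a multi-task loss, rather than applied as a fixed post-hoc blend.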
Ergonomics of the Operative Field in Paediatric Minimal Access Surgery