Real-time image dehazing by superpixels segmentation and guidance filter
Haze and fog strongly degrade image quality, and dehazing and defogging methods are applied to counteract this. For this purpose, an effective and automatic dehazing method is proposed. To dehaze a hazy image, two key parameters must be estimated: the atmospheric light and the transmission map. For atmospheric light estimation, superpixel segmentation is used to partition the input image. The intensities within each superpixel are then summed and compared across superpixels to extract the superpixel of maximum intensity. Extracting the maximum-intensity superpixel from an outdoor hazy image automatically selects the haziest region (the atmospheric light). The per-channel intensities of this superpixel are therefore taken as the atmospheric light in the proposed algorithm. Secondly, an initial transmission map is estimated from the measured atmospheric light. The transmission map is then refined with a rolling guidance filter, which preserves image details such as textures, structures, and edges in the final dehazed output. Finally, the haze-free image is produced by combining the atmospheric light and the refined transmission map through the haze imaging model. Detailed experiments on several publicly available datasets show that the proposed model achieves higher accuracy and restores higher-quality dehazed images than state-of-the-art models. The proposed model could be deployed in real-time applications such as image processing, remote sensing, underwater image enhancement, video-guided transportation, outdoor surveillance, and driver-assistance systems.
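The pipeline described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: `dehaze` inverts the standard haze imaging model I(x) = J(x)·t(x) + A·(1 − t(x)) given a transmission map and atmospheric light, and `estimate_atmospheric_light` shows the maximum-intensity-superpixel idea, assuming the segmentation labels (e.g. from SLIC) are computed elsewhere. Function names and the `t_min` clamp are illustrative choices.

```python
import numpy as np

def estimate_atmospheric_light(image, labels):
    """Sketch of the paper's idea: pick the superpixel with the highest
    summed intensity and use its mean per-channel intensity as the
    atmospheric light A. `labels` is an integer segmentation map
    (assumed precomputed, e.g. by SLIC)."""
    gray = image.mean(axis=2)
    best_label = max(np.unique(labels), key=lambda l: gray[labels == l].sum())
    mask = labels == best_label
    return image[mask].mean(axis=0)

def dehaze(image, transmission, atmospheric_light, t_min=0.1):
    """Recover a haze-free image J from the haze imaging model
    I(x) = J(x) * t(x) + A * (1 - t(x)), i.e. J = (I - A) / t + A.
    Transmission is clamped below by t_min to avoid amplifying noise."""
    t = np.maximum(transmission, t_min)[..., np.newaxis]
    A = np.asarray(atmospheric_light, dtype=np.float64)
    J = (image - A) / t + A
    return np.clip(J, 0.0, 1.0)
```

In practice the transmission map passed to `dehaze` would first be refined (here, by a rolling guidance filter) so that its discontinuities align with image edges.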
Hyperspectral Image Classification -- Traditional to Deep Models: A Survey for Future Prospects
Hyperspectral Imaging (HSI) has been extensively utilized in many real-life applications because it benefits from the detailed spectral information contained in each pixel. Notably, the complex characteristics of HSI data, i.e., the nonlinear relation between the captured spectral information and the corresponding object, make accurate classification challenging for traditional methods. In the last few years, Deep Learning (DL) has been established as a powerful feature extractor that effectively addresses the nonlinear problems arising in a number of computer vision tasks. This has prompted the deployment of DL for HSI classification (HSIC), which has shown good performance. This survey provides a systematic overview of DL for HSIC and compares state-of-the-art strategies on the topic. First, we summarize the main challenges traditional machine learning faces in HSIC and then show how DL addresses these problems. The survey then breaks down state-of-the-art DL frameworks into spectral-feature, spatial-feature, and joint spatial-spectral-feature approaches to systematically analyze their achievements for HSIC, along with future research directions. Moreover, we consider the fact that DL requires a large number of labeled training examples, whereas acquiring that many labels for HSIC is costly and time-consuming. The survey therefore discusses strategies to improve the generalization performance of DL models, which may provide guidelines for future work.
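Joint spatial-spectral frameworks of the kind surveyed here typically feed a small spatial window around each pixel, with all bands retained, into a classifier. A minimal sketch of that common preprocessing step is shown below; the function name, window size, and reflective padding are illustrative assumptions, not details from any specific surveyed method.

```python
import numpy as np

def extract_patches(cube, coords, window=5):
    """Extract window x window x bands patches centered at the given
    (row, col) pixel coordinates from an HSI cube of shape (H, W, B).
    Border pixels are handled with reflective padding. Such patches are
    a typical input to spatial-spectral DL classifiers (e.g. 3-D CNNs);
    exact details vary between papers."""
    pad = window // 2
    padded = np.pad(cube, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
    patches = [padded[r:r + window, c:c + window, :] for r, c in coords]
    return np.stack(patches)
```

A purely spectral-feature method would instead use only the 1-D spectrum `cube[r, c, :]`, which is why spatial-spectral approaches tend to be more robust to noisy individual pixels.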
Unsupervised geometrical feature learning from hyperspectral data
© 2016 IEEE. Hyperspectral technology has made significant advances in the past two decades. Current sensors onboard airborne and space-borne platforms cover large areas of the Earth's surface with unprecedented spectral resolutions. These characteristics enable a myriad of applications requiring fine identification of materials. Quite often, these applications rely on complicated methods of data analysis; in essence, the challenges include high dimensionality, spectral mixing, and atmospheric effects. This paper presents a robust unsupervised method to efficiently overcome these issues. The proposed algorithm performs three core tasks to obtain good results: i) optimizing the weights within a fixed threshold value for pure-pixel estimation, ii) finding the best averaged, weighted endmember signatures with similarity error below the threshold value, and iii) iterating until a fixed number of averaged weighted endmembers is chosen. Experimental results on both real and synthetic data demonstrate that the proposed method is more robust and accurate than other geometrical methods.
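The grouping-and-averaging idea behind tasks i)-iii) can be illustrated with a greedy sketch: merge candidate pure-pixel spectra into a group whenever their spectral angle to the group's running mean stays below a threshold, and return the group means as endmember estimates. This is only an illustration of the averaging concept; the paper's actual weighting scheme and stopping rule are not reproduced here, and the spectral angle is a standard similarity measure assumed for this sketch.

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral angle (radians) between two spectra, a common
    similarity measure in hyperspectral analysis."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def average_similar_spectra(candidates, threshold):
    """Greedy illustration of the averaging idea: each candidate
    pure-pixel spectrum joins the first group whose mean it matches
    within `threshold` radians, otherwise it starts a new group.
    Group means serve as averaged endmember estimates."""
    groups = []
    for s in candidates:
        for g in groups:
            if spectral_angle(s, np.mean(g, axis=0)) < threshold:
                g.append(s)
                break
        else:
            groups.append([s])
    return [np.mean(g, axis=0) for g in groups]
```

Averaging many near-identical pure pixels in this way reduces sensor noise in the final endmember signatures, which is one reason averaged endmembers tend to be more robust than single extracted pixels.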