
    Fast single image defogging with robust sky detection

    Haze, usually caused by atmospheric conditions, is a source of unreliability for computer vision applications in outdoor scenarios. The Dark Channel Prior (DCP) has shown remarkable results in image defogging, but with three main limitations: 1) high time consumption, 2) artifact generation, and 3) sky-region over-saturation. Current work has therefore focused on improving processing time without losing restoration quality and on avoiding image artifacts during defogging. Hence, this research proposes a novel methodology based on depth approximations through the DCP, local Shannon entropy, and the Fast Guided Filter, to reduce artifacts and improve image recovery in sky regions with low computation time. The proposed method's performance is assessed using more than 500 images from three datasets: the Hybrid Subjective Testing Set from Realistic Single Image Dehazing (HSTS-RESIDE), the Synthetic Objective Testing Set from RESIDE (SOTS-RESIDE), and HazeRD. Experimental results demonstrate that the proposed approach outperforms state-of-the-art methods in the reviewed literature, validated qualitatively and quantitatively through the Peak Signal-to-Noise Ratio (PSNR), the Naturalness Image Quality Evaluator (NIQE), and the Structural SIMilarity (SSIM) index on the recovered images, considering different visual ranges under distinct illumination and contrast conditions. When analyzing images of various resolutions, the proposed method shows the lowest processing time under similar software and hardware conditions. This work was supported in part by the Centro de Investigaciones en Óptica (CIO) and the Consejo Nacional de Ciencia y Tecnología (CONACYT), and in part by the Barcelona Supercomputing Center.
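    As a minimal illustration of the dark-channel step this abstract builds on, the sketch below estimates a DCP transmission map with NumPy. The patch size, `omega`, and function names are illustrative choices, not values from the paper; the Shannon-entropy and Fast Guided Filter stages are omitted.

```python
import numpy as np

def dark_channel(image, patch=15):
    """Per-pixel minimum over the RGB channels, followed by a local
    minimum filter of size patch x patch (image is HxWx3, values in [0, 1])."""
    mins = image.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode="edge")
    out = np.empty_like(mins)
    for i in range(mins.shape[0]):
        for j in range(mins.shape[1]):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def estimate_transmission(image, atmosphere, omega=0.95, patch=15):
    """DCP transmission estimate: t(x) = 1 - omega * dark_channel(I / A)."""
    return 1.0 - omega * dark_channel(image / atmosphere, patch)
```

    The scene radiance would then be recovered as J = (I - A) / max(t, t0) + A; the guided-filter refinement of t is what the paper accelerates.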

    Modelling on-demand preprocessing framework towards practical approach in clinical analysis of diabetic retinopathy

    Diabetic retinopathy (DR) is a complication of diabetes and a prime cause of vision loss in middle-aged people. A timely screening and diagnosis process can reduce the risk of blindness. Fundus imaging is the preferred modality in the clinical analysis of DR. However, raw fundus images are usually subject to artifacts, noise, and low and varied contrast, which makes them very hard to process for human visual systems and automated systems alike. In the existing literature, many solutions have been proposed to enhance the fundus image, but such approaches are particular and limited to a specific objective and cannot address multiple kinds of fundus images. This paper presents an on-demand preprocessing framework that integrates different techniques to address geometrical issues, random noise, and comprehensive contrast enhancement. The performance of each preprocessing step is evaluated against the peak signal-to-noise ratio (PSNR), and the brightness of the enhanced image is quantified. The motive of this paper is to offer a flexible preprocessing mechanism that can meet image enhancement needs based on different preprocessing requirements, to improve the quality of fundus imaging towards early-stage diabetic retinopathy identification.
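    Two of the building blocks such a framework evaluates, histogram-based contrast enhancement and PSNR, can be sketched as below. This is a generic illustration, not the paper's implementation; the function names are made up for the example.

```python
import numpy as np

def hist_equalize(channel):
    """Global histogram equalization of an 8-bit image channel."""
    hist = np.bincount(channel.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) * 255.0 / max(cdf.max() - cdf.min(), 1.0)
    return cdf.astype(np.uint8)[channel]          # lookup-table remap

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak * peak / mse)
```

    In fundus work the green channel is often the input to such enhancement, since it carries the strongest vessel contrast.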

    A Study on Image Quality Improvement for Underwater Imaging Systems

    Underwater survey systems have numerous scientific and industrial applications in geology, biology, mining, and archeology, involving tasks such as ecological studies, environmental damage assessment, and archeological prospection. For two decades, underwater imaging systems have mainly been carried by Underwater Vehicles (UVs) surveying lakes and oceans. Obtaining good visibility of objects has been difficult to achieve because of the physical properties of the medium. Sonar has usually been used for the detection and recognition of targets in ocean and underwater environments; however, because of the low quality of sonar imagery, optical vision sensors are used instead for short-range identification. Optical imaging provides short-range, high-resolution visual information of the ocean floor. However, due to the physical properties of light transmission in water, optical underwater images usually exhibit poor visibility. Light is highly attenuated as it travels through the ocean; consequently, imaged scenes appear poorly contrasted and hazy. Underwater image processing techniques are therefore important for improving the quality of underwater images. In contrast to common photographs, underwater optical images suffer from poor visibility owing to the medium, which causes scattering, color distortion, and absorption. Large suspended particles cause scattering similar to the scattering of light in fog or turbid water. Color distortion occurs because different wavelengths are attenuated to different degrees in water; consequently, images of underwater environments are dominated by a bluish tone, because longer wavelengths are attenuated more quickly.
Absorption of light in water substantially reduces its intensity, and the random attenuation of light causes a hazy appearance, as light backscattered by the water along the line of sight considerably degrades image contrast. In particular, objects more than 10 meters from the observation point become almost indiscernible, because colors fade as their characteristic wavelengths are filtered out according to the distance traveled by light in water. Traditional image processing methods are therefore not well suited to such images. This thesis proposes strategies and solutions to the above problems of underwater survey systems, contributing image pre-processing, denoising, dehazing, inhomogeneity correction, color correction, and fusion technologies for underwater image quality improvement. The main content of this thesis is as follows. First, Chapter 1 provides a comprehensive review of the current and most prominent underwater imaging systems, presenting a classification criterion based on the main features and performance of the existing systems. After analyzing the challenges of underwater imaging systems, hardware-based and non-hardware-based approaches are introduced. This thesis is concerned with image-processing-based technologies, one of the non-hardware approaches, and applies recent methods to process low-quality underwater images. Different sonar imaging systems, such as side-scan sonar and multi-beam sonar, are deployed on a wide range of equipment, and each acquires images with different characteristics: side-scan sonar acquires high-quality imagery of the seafloor with very high spatial resolution but poor locational accuracy, whereas multi-beam sonar obtains high-precision position and depth measurements at seafloor points.
To fully utilize the information from these two types of sonar, Chapter 2 fuses the two kinds of sonar data. Considering the sonar image formation principle, for the low-frequency curvelet coefficients we use the maximum local energy method to calculate the energy of the two sonar images; for the high-frequency curvelet coefficients, we take the absolute maximum as the measurement. The main attributes are: first, the multi-resolution analysis method is well adapted to curved singularities and point singularities, which is useful for sonar intensity image enhancement; second, maximum local energy performs well on intensity sonar images, achieving good fusion results [42]. In Chapter 3, after analyzing the underwater laser imaging system, a Bayesian Contourlet Estimator of Bessel K Form (BCE-BKF) based denoising algorithm is proposed. We take the BCE-BKF probability density function (PDF) to model neighborhoods of contourlet coefficients. According to the proposed PDF model, we design a maximum a posteriori (MAP) estimator, which relies on a Bayesian statistical representation of the contourlet coefficients of noisy images. The denoised laser images have better contrast than those of competing methods. The proposed method has three clear virtues: first, the contourlet transform decomposes images more effectively than the curvelet and wavelet transforms by using an elliptical sampling grid; second, the BCE-BKF model is more effective at representing the contourlet coefficients of noisy images; third, the BCE-BKF model takes full account of the correlation between coefficients [107]. In Chapter 4, we describe a novel method to enhance underwater images by dehazing. In underwater optical imaging, absorption, scattering, and color distortion are the three major issues. Light rays traveling through water are scattered and absorbed according to their wavelength.
Scattering is caused by large suspended particles that degrade optical images captured underwater, and color distortion occurs because different wavelengths are attenuated to different degrees in water, so images of underwater environments are dominated by a bluish tone. Our key contribution is a fast image and video dehazing algorithm that compensates for the attenuation discrepancy along the propagation path and takes into account the possible presence of an artificial lighting source [108]. In Chapter 5, we describe a novel method for enhancing underwater optical images and videos using a guided multilayer filter and wavelength compensation. In certain circumstances, such as disaster recovery, the underwater environment must be monitored immediately by support robots or other underwater survey systems; however, due to the inherent optical properties of the complex underwater environment, the captured images and videos are seriously distorted. Our key contributions are a novel depth- and wavelength-based underwater imaging model that compensates for the attenuation discrepancy along the propagation path, and a fast guided multilayer filtering enhancement algorithm. The enhanced images are characterized by a reduced noise level, better exposure of dark regions, and improved global contrast, with the finest details and edges enhanced significantly [109]. The performance and benefits of the proposed approaches are summarized in Chapter 6. Comprehensive experiments and extensive comparisons with existing related techniques demonstrate the accuracy and effectiveness of the proposed methods. Doctoral dissertation, Kyushu Institute of Technology; degree number: Kou-haku No. 367; degree conferred: March 25, 2014. Chapters: 1 Introduction | 2 Multi-Source Images Fusion | 3 Laser Images Denoising | 4 Optical Image Dehazing | 5 Shallow Water De-scattering | 6 Conclusions. Kyushu Institute of Technology, 2013.
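    The wavelength compensation idea running through Chapters 4 and 5 can be illustrated with a simple Beer-Lambert inversion. The per-channel attenuation coefficients below are rough stand-ins chosen only to show red fading fastest; they are not values from the thesis.

```python
import numpy as np

# Illustrative per-channel attenuation coefficients in 1/m (red attenuates
# fastest in water); these numbers are assumptions, not thesis values.
BETA_RGB = np.array([0.60, 0.10, 0.05])

def compensate_attenuation(image, distance_m):
    """Invert a simple Beer-Lambert model I = J * exp(-beta * d):
    multiply each channel by exp(beta * d) and clip to [0, 1]."""
    gain = np.exp(BETA_RGB * distance_m)       # broadcast over the color axis
    return np.clip(image * gain, 0.0, 1.0)
```

    A full method would also estimate the water-path distance per pixel and handle backscatter from artificial lighting, which this sketch ignores.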

    Video Image Enhancement and Machine Learning Pipeline for Underwater Animal Detection and Classification at Cabled Observatories

    Correction of an affiliation in Sensors 2023, 23, 16. https://doi.org/10.3390/s23010016. An understanding of marine ecosystems and their biodiversity is relevant to the sustainable use of the goods and services they offer. Since marine areas host complex ecosystems, it is important to develop spatially widespread monitoring networks capable of providing large amounts of multiparametric information, encompassing both biotic and abiotic variables, and describing the ecological dynamics of the observed species. In this context, imaging devices are valuable tools that complement other biological and oceanographic monitoring devices. Nevertheless, large amounts of images or movies cannot all be processed manually, and autonomous routines for recognizing relevant content, classification, and tagging are urgently needed. In this work, we propose a pipeline for the analysis of visual data that integrates video/image annotation tools for defining, training, and validating datasets with video/image enhancement and machine and deep learning approaches. Such a pipeline is required to achieve good performance in the recognition and classification of mobile and sessile megafauna, in order to obtain integrated information on spatial distribution and temporal dynamics. A prototype implementation of the analysis pipeline is provided in the context of deep-sea videos taken by one of the fixed cameras of the LoVe Ocean Observatory network off the Lofoten Islands (Norway), at 260 m depth in the Barents Sea; it has shown good classification results on an independent test dataset, with an accuracy of 76.18% and an area under the curve (AUC) of 87.59%. This work was developed within the framework of Tecnoterra (ICM-CSIC/UPC) and the following project activities: ARIM (Autonomous Robotic Sea-Floor Infrastructure for Benthopelagic Monitoring; MarTERA ERA-Net Cofund) and RESBIO (TEC2017-87861-R; Ministerio de Ciencia, Innovación y Universidades).

    Three dimensional reconstruction of plant roots via low energy x-ray computed tomography

    Plant roots are vital organs for water and nutrient uptake. The structure and spatial distribution of plant roots in the soil affect a plant's physiological functions, such as soil-based resource acquisition, yield, and the ability to live under abiotic stress. Visualizing and quantifying root configuration below the ground can help identify the phenotypic traits responsible for a plant's physiological functions. Existing efforts have successfully employed X-ray computed tomography to visualize plant roots in three dimensions and to quantify their complexity in a non-invasive and non-destructive manner; however, they used expensive and less accessible industrial or medical tomographic systems. This research uses an inexpensive, lab-built X-ray computed tomography (CT) system, operating at lower energy levels (30 kV-40 kV), to obtain two-dimensional projections of a plant root from different viewpoints. I propose image processing pipelines to segment roots and generate a three-dimensional model of the root system architecture from the two-dimensional projections. Observing that a Gaussian-shaped curve can approximate the cross-sectional intensity profile of a root segment, I propose a novel multi-scale matched filtering with a two-dimensional Gaussian kernel to enhance the root system. The filter assumes different orientations to highlight root segments grown in different directions. The roots are isolated from the background by manual thresholding, followed by mathematical morphology to reduce spurious noise. The segmented images are then reconstructed by filtered back-projection to generate a three-dimensional model of the plant root system. The results show that the proposed method yields a structurally consistent three-dimensional model of the plant root image set obtained in air, whereas alternate methods could not process the image set.
    For plant root images collected in air, the three-dimensional model generated from the proposed matched-guided filtering and filtered back-projection has a better contrast measure (0.0036) than that of the three-dimensional model created from raw images (0.099). For plant root images captured in soil, the proposed multi-scale matched filtering produced better receiver operating characteristic curves than the raw images. Compared to Otsu's thresholding, multi-scale root enhancement and thresholding reduced the average false positive rate from 0.344 to 0.042 and improved the average F1 score from 0.4 to 0.775. Experimental results show that the proposed root enhancement methods are robust to the number of orientational filters chosen but sensitive to the selected filter length; small filters are preferred, since increasing the filter length increases the number of false positives around root segments.
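    The oriented multi-scale matched filtering described above can be sketched as follows: a zero-mean Gaussian-profile kernel is rotated through several orientations and the maximum response over scales and angles is kept. The kernel construction, scales, and orientation count are illustrative assumptions, not the dissertation's exact parameters.

```python
import numpy as np

def convolve2d_same(image, kernel):
    """Naive 'same'-size 2-D correlation with edge padding."""
    kh, kw = kernel.shape
    padded = np.pad(image, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="edge")
    out = np.empty(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def gaussian_line_kernel(sigma, length, angle_deg):
    """Zero-mean matched-filter kernel: a Gaussian profile across a line
    segment of the given length, rotated by angle_deg."""
    half = length // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    theta = np.deg2rad(angle_deg)
    u = xs * np.cos(theta) + ys * np.sin(theta)    # along the line
    v = -xs * np.sin(theta) + ys * np.cos(theta)   # across the line
    kernel = np.exp(-v ** 2 / (2.0 * sigma ** 2))
    kernel[np.abs(u) > half] = 0.0                 # truncate the segment
    support = kernel > 0
    kernel[support] -= kernel[support].mean()      # zero mean on the support
    return kernel

def matched_filter_response(image, sigmas=(1.0, 2.0), length=9, n_angles=6):
    """Maximum response over all scales and orientations."""
    best = np.full(image.shape, -np.inf)
    for sigma in sigmas:
        for k in range(n_angles):
            ker = gaussian_line_kernel(sigma, length, 180.0 * k / n_angles)
            best = np.maximum(best, convolve2d_same(image, ker))
    return best
```

    Because each kernel is zero-mean, flat background regions give near-zero response while elongated bright structures respond strongly, which is what makes subsequent thresholding effective.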

    Automatic 2D-to-3D conversion of single low depth-of-field images

    This research presents a novel approach to the automatic rendering of 3D stereoscopic disparity image pairs from single 2D low depth-of-field (LDOF) images. Initially, a depth map is produced by assigning depth to every delineated object and region in the image; subsequently, the left and right disparity images are produced through depth image-based rendering (DIBR). The objects and regions in the image are first assigned to one of six proposed groups or labels. Labelling is performed in two stages: the first involves the delineation of the dominant object-of-interest (OOI); the second involves the global object and region grouping of the non-OOI regions. The matting of the OOI is also performed in two stages. Initially, the in-focus foreground or region-of-interest (ROI) is separated from the out-of-focus background through the correlation of edge, gradient, and higher-order statistics (HOS) saliencies. Refinement of the ROI is performed using k-means segmentation and CIEDE2000 colour-difference matching. Subsequently, the OOI is extracted from within the ROI through analysis of the dominant gradients and edge saliencies, together with k-means segmentation. Depth is assigned to each of the six labels by correlating Gestalt-based principles with vanishing-point estimation, gradient plane approximation, and depth from defocus (DfD). To minimise some of the dis-occlusions generated by the 3D warping sub-process within DIBR, the depth map is pre-smoothed using an asymmetric bilateral filter. Hole-filling of the remaining dis-occlusions is performed through nearest-neighbour horizontal interpolation, which incorporates depth as well as direction of warp. To minimise the effects of lateral striations, specific directional Gaussian and circular averaging smoothing is applied independently to each view, with additional average filtering applied to the border transitions.
    Each stage of the proposed model is benchmarked against data from several significant publications. Novel contributions are made in the sub-speciality fields of ROI estimation, OOI matting, LDOF image classification, Gestalt-based region categorisation, vanishing-point detection, relative depth assignment, and hole-filling or inpainting. An important contribution is also made to the overall knowledge base of automatic 2D-to-3D conversion techniques through the collation of existing information, the expansion of existing methods, and the development of newer concepts.
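    The DIBR warping and nearest-neighbour hole-filling steps can be illustrated on a single scanline. This is a simplified sketch (integer disparities, left-to-right filling only, no depth- or warp-direction awareness), not the thesis's method; the function names are invented for the example.

```python
import numpy as np

def warp_scanline(colors, depth, max_disp=4):
    """Shift each pixel of one scanline by a disparity proportional to its
    depth; positions nothing maps to remain holes (NaN)."""
    out = np.full(colors.size, np.nan)
    disp = np.round(max_disp * depth).astype(int)
    for x in range(colors.size):
        tx = x + disp[x]
        if 0 <= tx < colors.size:
            out[tx] = colors[x]
    return out

def fill_holes(row):
    """Nearest-neighbour horizontal hole filling: propagate the last
    valid value from the left."""
    filled = row.copy()
    last = 0.0
    for x in range(filled.size):
        if np.isnan(filled[x]):
            filled[x] = last
        else:
            last = filled[x]
    return filled
```

    Pre-smoothing the depth map, as the abstract describes, shrinks the disparity discontinuities that create these holes in the first place.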

    Advances in Image Processing, Analysis and Recognition Technology

    For many decades, researchers have been trying to make computer analysis of images as effective as human vision. For this purpose, many algorithms and systems have been created. The whole process covers various stages, including image processing, representation and recognition. The results of this work can be applied to many computer-assisted areas of everyday life: they improve particular activities and provide handy tools, which are sometimes only for entertainment but quite often significantly increase our safety. The range of practical applications of image processing algorithms is particularly wide, and the rapid growth of computational power has allowed for the development of more sophisticated and effective algorithms and tools. Although significant progress has been made so far, many issues remain, resulting in the need for novel approaches.

    Computer Vision for Marine Environmental Monitoring

    Osterloff J. Computer Vision for Marine Environmental Monitoring. Bielefeld: Universität Bielefeld; 2018. Ocean exploration using imaging techniques has become very popular as camera systems have become affordable and the technology has developed further. Marine imaging provides a unique opportunity to monitor the marine environment: visual exploration using images makes it possible to study the variety of fauna, flora, and geological structures of the marine environment. This monitoring creates a bottleneck, as manual evaluation of the large amounts of underwater image data is very time consuming. Information encapsulated in the images needs to be extracted so that it can be included in statistical analyses: objects of interest (OOIs) have to be localized and identified in the recorded images. To overcome this bottleneck, computer vision (CV) is applied in this thesis to extract the image information (semi-)automatically. A pre-evaluation of the images by marking OOIs manually, i.e. the manual annotation process, is necessary to provide examples for the applied CV methods. Five major challenges are identified in this thesis for applying CV to marine environmental monitoring; they can be grouped into challenges caused by underwater image acquisition and challenges caused by the use of manual annotations for machine learning (ML). The image acquisition challenges are the optical properties of water, e.g. wavelength-dependent attenuation, and the dynamics of these properties, as varying amounts of matter in the water column affect colors and illumination in the images. The manual annotation challenges for applying ML to underwater images are the low number of available manual annotations, the quality of the annotations in terms of correctness and reproducibility, and their spatial uncertainty. The latter arises from allowing spatial imprecision to speed up the manual annotation process, e.g.
using point annotations instead of fully outlining OOIs at the pixel level. These challenges are addressed individually in four new CV approaches, which make it possible to extract new biologically relevant information from time-series images recorded underwater. Manual annotations provide the ground truth for the CV systems and therefore for the included ML. Placing annotations manually in underwater images is a challenging task; to assess their quality in terms of correctness and reproducibility, a detailed quality assessment for manual annotations is presented, including the computation of a gold standard to increase the quality of the ground truth for the ML. In the individually tailored CV systems, different ML algorithms are applied and adapted for marine environmental monitoring purposes, covering a broad variety of unsupervised and supervised methods, including deep learning. Depending on the biologically motivated research question, the systems are evaluated individually. The first two CV systems were developed for the _in-situ_ monitoring of the sessile species _Lophelia pertusa_. Visual information about the cold-water coral is extracted automatically from time-series images recorded by a fixed underwater observatory (FUO) located at 260 m depth, 22 km off the Norwegian coast. The color change of the cold-water coral reef over time is quantified, and the polyp activity of the imaged coral is estimated (semi-)automatically. These systems make it possible, for the first time, to document an _in-situ_ color change of a _Lophelia pertusa_ coral reef and to estimate polyp activity over half a year with a temporal resolution of one hour. The third CV system presented in this thesis monitors a mobile species, shrimp, _in-situ_. Shrimp are semitransparent, creating additional challenges for localization and identification in images using CV.
Shrimp are localized and identified in time-series images recorded by the same FUO, and changes in spatial distribution and temporal occurrence are observed by comparing two different time periods. The last CV system presented in this thesis quantifies the impact of sedimentation on calcareous algae samples in a _wet-lab_ experiment: the size and color change of the imaged samples over time is quantified using a consumer camera and a color reference plate placed in the field of view of each recorded image. Extracting biologically relevant information from underwater images is only the first step for marine environmental monitoring; the extracted image information, such as behavior or color change, needs to be related to other environmental parameters. Therefore, data science methods are also applied in this thesis to unveil some of the relations between individual species' information extracted semi-automatically from underwater images and other environmental parameters.
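    The reference-plate colour correction used in the wet-lab experiment can be sketched as a per-channel gain that maps the imaged reference patch onto its known colour. The function and values below are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

def correct_with_reference(image, patch_observed, patch_true):
    """Per-channel linear gain so that the imaged reference patch matches
    its known true color (a minimal stand-in for color calibration)."""
    gain = np.asarray(patch_true) / np.asarray(patch_observed)
    return np.clip(image * gain, 0.0, 1.0)
```

    With the same plate present in every frame, recomputing the gain per image cancels slow illumination drift so that measured colour change reflects the sample rather than the lighting.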

    Biometric Systems

    Biometric authentication has been widely used for access control and security systems over the past few years. The purpose of this book is to provide readers with the life cycle of different biometric authentication systems, from design and development to qualification and final application. The major systems discussed include fingerprint identification, face recognition, iris segmentation and classification, signature verification, and other miscellaneous systems covering management policies of biometrics, reliability measures, pressure-based typing and signature verification, bio-chemical systems, and behavioral characteristics. In summary, this book provides students and researchers with different approaches to developing biometric authentication systems, and at the same time includes state-of-the-art approaches to their design and development. The approaches have been thoroughly tested on standard databases and in real-world applications.