227 research outputs found

    An underwater image enhancement by reducing speckle noise using modified anisotropic diffusion filter

    Underwater images usually suffer from quality degradation such as low contrast, blurred details, color deviation, non-uniform lighting, and noise. Over the last few decades, much research has been devoted to the restoration and enhancement of degraded underwater images. In this paper, we propose a novel algorithm that combines a modified anisotropic diffusion filter with a dynamic color balancing strategy. The proposed algorithm couples effective noise reduction and edge preservation with dynamic color correction to produce uniform lighting and minimize speckle noise. Furthermore, we reanalyze the contributions and limitations of existing underwater image restoration and enhancement methods. Finally, we provide detailed objective evaluations and comparisons across various underwater scenarios for the challenges above, along with subjective studies, which show that our proposed method significantly improves image quality
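    The paper's modified filter is not reproduced here, but the classic Perona-Malik anisotropic diffusion it builds on can be sketched as follows (a minimal illustration assuming a grayscale float image and periodic borders for brevity; parameter values are illustrative):

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=10, kappa=30.0, gamma=0.2):
    """Classic Perona-Malik diffusion: iteratively smooths the image
    while an edge-stopping function suppresses diffusion across
    strong gradients, preserving edges."""
    u = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # differences to the four neighbours (periodic border via roll)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # conduction coefficients g(d) = exp(-(d/kappa)^2):
        # close to 1 in flat regions, close to 0 across edges
        cn = np.exp(-(dn / kappa) ** 2)
        cs = np.exp(-(ds / kappa) ** 2)
        ce = np.exp(-(de / kappa) ** 2)
        cw = np.exp(-(dw / kappa) ** 2)
        # explicit update; gamma <= 0.25 keeps the scheme stable
        u += gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```

    Smaller `kappa` values preserve more edges; more iterations give stronger smoothing.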

    Variational models for color image processing in the RGB space inspired by human vision (Mémoire d'Habilitation à Diriger des Recherches in Mathematics)

    The research I have developed so far can be divided into four main categories: variational models for perception-based colour correction, histogram transfer, high-dynamic-range image processing, and the statistics of natural colour images. These topics are closely interconnected, since colour is a strongly interdisciplinary subject

    Contrast limited histogram equalisation revisited

    Histogram-based tone adjustment algorithms have been used in a number of different computer vision applications in recent years. One of the primary benefits of using the image histogram to derive the tone curve is that the scene content drives the enhancement, i.e., each image gets a unique tone curve. Perhaps the best-known image enhancement algorithm, Histogram Equalisation (HE), is a contrast adjustment algorithm that uses the image histogram directly to define a tone curve that brings out image details. However, HE often produces tone curves with large slopes that generate unpleasing reproductions. Contrast Limited Histogram Equalisation (CLHE) builds naturally upon HE and constrains the slopes of the tone curve so that the reproductions look better. Indeed, in almost all cases CLHE is preferred to HE. In this thesis we explore the CLHE algorithm in detail and highlight its shortcomings. We explore and discuss several approaches aimed at overcoming the limitations of CLHE, while also considering modern histogram-based tone adjustment algorithms. The work in this thesis is motivated by the fact that CLHE is very popular in the modern literature; CLHE also ships in many thousands of cameras, due to its inclusion in the Apical Iridix tone mapper
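    As a concrete illustration of the CLHE idea, the following sketch clips the histogram at a ceiling, redistributes the excess in a single pass (the canonical algorithm iterates until no bin exceeds the ceiling), and integrates the result into a tone curve. This is a simplified reading, not the thesis implementation:

```python
import numpy as np

def clhe_tone_curve(img, clip_limit=2.0, bins=256):
    """Contrast-limited HE: clip histogram bins above clip_limit times
    the mean bin count, spread the clipped excess uniformly over all
    bins, then take the normalised cumulative sum as the tone curve."""
    hist, _ = np.histogram(img.ravel(), bins=bins, range=(0, bins))
    ceiling = clip_limit * hist.mean()
    excess = np.maximum(hist - ceiling, 0).sum()
    hist = np.minimum(hist, ceiling) + excess / bins  # one-pass redistribute
    cdf = np.cumsum(hist)
    curve = (bins - 1) * cdf / cdf[-1]
    return np.round(curve).astype(np.uint8)

def apply_curve(img, curve):
    """Map each pixel through the derived tone curve."""
    return curve[img]
```

    Because the clipped histogram is still non-negative, the resulting tone curve is monotone, which is what limits its slope relative to plain HE.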

    Visibility recovery on images acquired in attenuating media. Application to underwater, fog, and mammographic imaging

    When acquired in attenuating media, digital images often suffer from a particularly complex degradation that reduces their visual quality, hindering their suitability for further computational applications or simply decreasing their visual pleasantness for the user. In these cases, mathematical image processing reveals itself as an ideal tool to recover some of the information lost during the degradation process. In this dissertation, we deal with three such practical scenarios in which this problem is especially relevant, namely, underwater image enhancement, fog removal, and mammographic image processing. In the case of digital mammograms, X-ray beams traverse human tissue, and electronic detectors capture them as they reach the other side. However, the superposition of three-dimensional structures onto a two-dimensional image produces low-contrast images in which structures of interest suffer from diminished visibility, obstructing diagnosis tasks. Regarding fog removal, the loss of contrast is produced by atmospheric conditions, and white colour takes over the scene uniformly as distance increases, also reducing visibility. For underwater images, there is an added difficulty, since colour is not lost uniformly; instead, red colours decay the fastest, and green and blue colours typically dominate the acquired images. To address all these challenges, in this dissertation we develop new methodologies that rely on: a) physical models of the observed degradation, and b) the calculus of variations. Equipped with this powerful machinery, we design novel theoretical and computational tools, including image-dependent functional energies that capture the particularities of each degradation model. These energies are composed of different integral terms that are simultaneously minimized by means of efficient numerical schemes, producing a clean, visually pleasant and useful output image, with better contrast and increased visibility.
In every considered application, we provide comprehensive qualitative (visual) and quantitative experimental results to validate our methods, confirming that the developed techniques outperform other existing approaches in the literature
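    The functional energies in the dissertation are far richer, but the general recipe sketched above, an image-dependent energy with competing integral terms minimized by an explicit iterative scheme, can be illustrated on a toy quadratic energy (all parameters here are illustrative, and periodic boundaries are used for brevity):

```python
import numpy as np

def variational_smooth(f, lam=0.5, n_iter=200, tau=0.1):
    """Gradient descent on the toy energy
        E(u) = 0.5*||u - f||^2 + 0.5*lam*||grad u||^2,
    i.e. a data-fidelity term competing with a smoothness term.
    Much simpler than the thesis energies, but minimized by the
    same kind of explicit iterative scheme."""
    u = f.copy()
    for _ in range(n_iter):
        # discrete 5-point Laplacian (periodic boundary)
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
               + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
        grad_E = (u - f) - lam * lap   # functional gradient of E
        u -= tau * grad_E              # explicit descent step
    return u
```

    Each descent step trades fidelity to the observed image `f` against smoothness, which is the same tension the dissertation's energies resolve with physically motivated terms.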

    LAPSE: Low-Overhead Adaptive Power Saving and Contrast Enhancement for OLEDs

    Organic Light Emitting Diode (OLED) display panels are becoming increasingly popular, especially in mobile devices; one of their key characteristics is that their power consumption strongly depends on the displayed image. In this paper we propose LAPSE, a new methodology to concurrently reduce the energy consumed by an OLED display and enhance the contrast of the displayed image, relying on image-specific pixel-by-pixel transformations. Unlike previous approaches, LAPSE focuses specifically on reducing the overheads required to implement the transformation at runtime. To this end, we propose a transformation that can be executed in real time, either in software with low time overhead, or in a hardware accelerator with a small area and low energy budget. Despite the significant reduction in complexity, we obtain results comparable to those achieved with more complex approaches in terms of power saving and image quality. Moreover, our method makes it easy to explore the full quality-versus-power tradeoff by acting on a few basic parameters; thus, it enables runtime selection among multiple display quality settings, according to the status of the system
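    LAPSE's actual transformation and power model are specific to the paper, but the general idea, a per-pixel transform that lowers emitted light while stretching contrast, evaluated against a content-dependent power estimate, can be sketched as follows (the channel weights and parameters are invented for illustration, not measured panel coefficients):

```python
import numpy as np

# Simplified OLED power model: consumption grows with emitted light,
# weighted per channel (blue subpixels are typically least efficient).
CHANNEL_WEIGHTS = np.array([0.3, 0.3, 0.4])

def estimate_power(rgb):
    """Relative panel power for a float RGB image in [0, 1]."""
    return float((rgb * CHANNEL_WEIGHTS).sum())

def darken_with_contrast(rgb, scale=0.85, gain=1.1):
    """Toy pixel-by-pixel transform: stretch values around mid-grey
    (preserving contrast), then scale brightness down (saving power)."""
    out = (rgb - 0.5) * gain + 0.5        # contrast stretch about 0.5
    out = np.clip(out * scale, 0.0, 1.0)  # global dimming, clamp to range
    return out
```

    With these parameters the transform is pointwise non-increasing, so the estimated power of the output never exceeds that of the input.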

    Deep Learning for Decision Making and Autonomous Complex Systems

    Deep learning consists of various machine learning algorithms that aim to learn multiple levels of abstraction from data in a hierarchical manner. It is a tool for constructing models from data that mimic a real-world process without exceedingly tedious modelling of the actual process. We show that deep learning is a viable solution for decision making in mechanical engineering problems and complex physical systems. In this work, we demonstrate the application of this data-driven method to the design of microfluidic devices, where it serves as a map between the user-defined cross-sectional shape of the flow and the corresponding arrangement of micropillars in the flow channel that produces the flow deformation. We also present how deep learning can be used in the early detection of combustion instability for prognostics and health monitoring of a combustion engine, so that appropriate measures can be taken to prevent the detrimental effects of unstable combustion. One of the applications in complex systems concerns robotic path planning via the systematic learning of policies and associated rewards. In this context, a deep architecture is implemented to infer the expected value of the information gained by performing an action, based on the states of the environment. We also apply deep learning-based methods to enhance natural low-light images in the context of a surveillance framework and autonomous robots. Further, we examine how machine learning methods can be used to perform root-cause analysis in cyber-physical systems subjected to a wide variety of operational anomalies. In all studies, the proposed frameworks demonstrate promising feasibility and provide credible results for large-scale implementation in industry

    Improved Human Face Recognition by Introducing a New CNN Arrangement and Hierarchical Method

    Human face recognition has become one of the most attractive topics in the field of biometrics due to its wide applications. The face is the part of the body that carries the most identifying information in human interactions. Features such as the composition of facial components, skin tone, the face's central axis, and the distances between the eyes, alongside other biometrics, are used unconsciously by the brain to distinguish a person. Indeed, analyzing facial features may be the first method humans use to identify a person. As one of the main biometric measures, face recognition has been utilized in various commercial applications over the past two decades, from banking to smart advertisement and from border security to mobile applications. These examples show how far these methods have come, and we can confidently say that face recognition techniques have reached an acceptable level of accuracy for some real-life applications. However, other applications could still benefit from improvement. The increasing demand for the topic, together with the fact that nowadays almost all the necessary infrastructure is in place, makes face recognition an appealing research area. When evaluating the quality of a face recognition method, the main benchmarks to consider are accuracy, speed, and complexity. Other aspects of an algorithm can of course be measured, such as size, precision, and cost, but each of these ultimately contributes to one or more of the three main criteria. Then again, although existing algorithms achieve a significant level of accuracy, there is still much room for improvement in speed and complexity. In addition, the accuracy of these methods depends strongly on the properties of the face images.
In other words, uncontrolled situations and variables such as head pose, occlusion, lighting, and image noise can affect the results dramatically. Human face recognition systems are used for either identification or verification. In verification, the system's main goal is to check whether an input matches a pre-determined tag or a person's ID. Almost every face recognition system consists of four major steps: pre-processing, face detection, feature extraction, and classification. Improvement in each of these steps leads to overall enhancement of the system. In this work, the main objective is to propose new, improved and enhanced methods for each of these steps, to evaluate the results by comparing them with other existing techniques, and to investigate the outcome of the proposed system.
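    The four-step structure described above can be wired up as a simple skeleton, with each stage as an interchangeable callable (the stage implementations below are placeholders, not the methods proposed in the thesis):

```python
from dataclasses import dataclass
from typing import Callable
import numpy as np

@dataclass
class FaceRecognitionPipeline:
    """The four canonical stages: pre-processing, face detection,
    feature extraction, and classification, composed in order."""
    preprocess: Callable
    detect: Callable
    extract: Callable
    classify: Callable

    def run(self, image):
        img = self.preprocess(image)   # e.g. normalisation, enhancement
        face = self.detect(img)        # e.g. crop the face region
        features = self.extract(face)  # e.g. embeddings or descriptors
        return self.classify(features) # e.g. identity label or match flag
```

    Keeping the stages decoupled like this is what makes it possible to improve each step independently, as the thesis sets out to do.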

    Low-Overhead Adaptive Brightness Scaling for Energy Reduction in OLED Displays

    Organic Light Emitting Diode (OLED) is rapidly emerging as the mainstream mobile display technology. This poses new challenges for the design of energy-saving solutions for OLED displays, specifically intended for interactive devices such as smartphones, smartwatches and tablets. To date, the standard solution is brightness scaling. However, the amount of scaling is typically set statically (either by the user, through a settings knob, or by the system in response to predefined events such as low-battery status) and independently of the displayed image. In this work we describe a smart computing technique called Low-Overhead Adaptive Brightness Scaling (LABS) that overcomes these limitations. In LABS, the optimal content-dependent brightness scaling factor is determined automatically for each displayed image, on a frame-by-frame basis, with a computational cost low enough for real-time usage. The basic form of LABS achieves more than 35% power reduction on average when applied to different image datasets, while keeping the Mean Structural Similarity Index (MSSIM) between the original and transformed images above 97%
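    The LABS selection procedure itself is not reproduced here, but a per-frame search for the strongest dimming factor that keeps a quality metric above a floor can be sketched as follows (using PSNR as a stand-in for MSSIM, with invented parameter values, for a float image in [0, 1]):

```python
import numpy as np

def adaptive_brightness_scale(img, quality_floor=30.0, step=0.02):
    """Return the strongest dimming factor whose PSNR against the
    original frame stays above quality_floor. PSNR falls monotonically
    as the scale drops, so the scan can stop at the first failure."""
    best = 1.0
    for scale in np.arange(1.0, 0.5, -step):
        dimmed = img * scale
        mse = np.mean((img - dimmed) ** 2)
        psnr = 10 * np.log10(1.0 / mse) if mse > 0 else np.inf
        if psnr >= quality_floor:
            best = scale  # still acceptable: keep dimming
        else:
            break         # quality floor crossed: stop
    return best
```

    Raising `quality_floor` trades power savings for fidelity, mirroring the quality-versus-power knob described in the abstract.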

    A novel face recognition system in unconstrained environments using a convolutional neural network

    The performance of most face recognition systems (FRS) in unconstrained environments is widely noted to be sub-optimal. One reason for this poor performance may be the lack of highly effective image pre-processing approaches, which are typically required before the feature extraction and classification stages. Furthermore, only a minimal set of face recognition issues is typically considered in most FRS, limiting their wide applicability in real-life scenarios. It is therefore envisaged that developing more effective pre-processing techniques, in addition to selecting the correct features for classification, will significantly improve the performance of FRS. The thesis investigates different research works on FRS, their techniques, and their challenges in unconstrained environments. The thesis proposes a novel image enhancement technique as a pre-processing approach for FRS. The proposed enhancement technique improves the overall FRS model, resulting in increased recognition performance. A selection of novel hybrid features, extracted from the enhanced facial images within the dataset, is also presented to improve recognition performance. The thesis proposes a novel evaluation function as a component of the image enhancement technique to improve face recognition in unconstrained environments. A defined scale mechanism within the evaluation function evaluates the enhanced images such that extreme values indicate images that are too dark or too bright. The proposed algorithm enables the system to automatically select the most appropriate enhanced face image without human intervention. The proposed algorithm was evaluated using standard parameters and is demonstrated to outperform existing image enhancement techniques both quantitatively and qualitatively.
The thesis confirms the effectiveness of the proposed image enhancement technique for face recognition in unconstrained environments using a convolutional neural network. Furthermore, the thesis presents a selection of hybrid features from the enhanced image that results in effective image classification. Different face datasets were selected, and each face image was enhanced using the proposed and existing image enhancement techniques prior to feature selection and classification. Experiments on the different face datasets showed increased and better performance using the proposed approach. The thesis shows that using an effective image enhancement technique as a pre-processing step can improve the performance of FRS compared with using unenhanced face images. Extracting the right features from the enhanced face dataset has also been shown to be an important factor in improving FRS. The thesis makes use of standard face datasets to confirm the effectiveness of the proposed method. On the LFW face dataset, an improved recognition rate was obtained when considering all the facial conditions within the dataset.
Thesis (PhD)--University of Pretoria, 2018. Electrical, Electronic and Computer Engineering. CSIR-DST Inter programme bursary.

    Textural Difference Enhancement based on Image Component Analysis

    In this thesis, we propose a novel image enhancement method to magnify the textural differences in images with respect to human visual characteristics. The method is intended as a preprocessing step to improve the performance of texture-based image segmentation algorithms. We propose novel measurements of the six Tamura texture features (coarseness, contrast, directionality, line-likeness, regularity and roughness). Each feature follows its original interpretation of a certain texture characteristic, but is measured using local low-level features, e.g., the direction of local edges, the dynamic range of local pixel intensities, and the kurtosis and skewness of the local image histogram. A discriminant texture feature selection method based on principal component analysis (PCA) is then proposed to find the most representative characteristics for describing textural differences in the image. We decompose the image into pairwise components representing each texture characteristic strongly and weakly, respectively. A set of wavelet-based soft thresholding methods is proposed as the dictionaries of morphological component analysis (MCA) to sparsely highlight each characteristic, strongly and weakly, from the image. The wavelet-based thresholding methods are proposed in pairs, so each of the resulting pairwise components exhibits one characteristic either strongly or weakly. We propose various wavelet-based manipulation methods to enhance the components separately. For each component representing a certain texture characteristic, a non-linear function is proposed to manipulate the wavelet coefficients of the component so that the corresponding characteristic is accentuated independently, with little effect on the other characteristics. Furthermore, the above three methods are combined into a uniform image enhancement framework.
Firstly, the texture characteristics differentiating the textures in the image are found. Secondly, the image is decomposed into components exhibiting these texture characteristics respectively. Thirdly, each component is manipulated to accentuate the corresponding texture characteristic. After recombining the manipulated components, the image is enhanced with the textural differences magnified with respect to the selected texture characteristics. The proposed textural difference enhancement method is applied prior to both grayscale and colour image segmentation algorithms. The convincing results in improving the performance of different segmentation algorithms demonstrate the potential of the proposed method
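    The thesis' wavelet dictionaries and non-linear manipulations are specific to the method, but the soft-thresholding building block it relies on, together with one level of a 1-D Haar transform, can be sketched as follows (a generic illustration, not the proposed pairwise dictionaries):

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Soft thresholding: shrink coefficients toward zero by t,
    zeroing those smaller than t in magnitude (sparsification)."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

def haar_level(x):
    """One level of a 1-D Haar transform: approximation and detail
    coefficients for an even-length signal."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def inv_haar_level(a, d):
    """Exact inverse of haar_level."""
    out = np.empty(a.size * 2)
    out[0::2] = (a + d) / np.sqrt(2)
    out[1::2] = (a - d) / np.sqrt(2)
    return out
```

    Thresholding the detail coefficients before the inverse transform suppresses weak high-frequency content; manipulating them with other non-linear functions, as the thesis does per texture characteristic, accentuates it instead.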