8 research outputs found

    Detail and contrast enhancement in images using dithering and fusion

    This thesis focuses on two applications of wavelet transforms to image enhancement: image fusion and image dithering. Firstly, to improve the quality of a fused image, an image fusion technique based on the transform domain is proposed as part of this research. The proposed fusion technique is also extended to reduce the temporal redundancy associated with the processing. Experimental results show better performance of the proposed methods over other methods; in addition, improvements are achieved in image contrast, in the amount of image detail captured, and in processing time when compared to existing methods. Secondly, of all the present image dithering methods, error diffusion-based dithering is the most widely used and explored. Despite its great success, error diffusion has been lacking in image enhancement aspects because of the softening effects it causes. To compensate for these softening effects, wavelet-based dithering was introduced. Although wavelet-based dithering works well in removing the softening effects, it is based on the discrete wavelet transform and therefore suffers from poor directionality and a lack of shift invariance, the properties responsible for making the resultant images look sharp and crisp. Hence, a new method named complex wavelet-based dithering is introduced as part of this research to compensate for the softening effects. Images processed by the proposed method emphasise detail and exhibit better contrast characteristics than existing methods.
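    A transform-domain fusion rule of the kind described above can be sketched in a few lines. This is a minimal illustration using a single-level real Haar transform, an average rule on the approximation band, and a maximum-magnitude rule on the detail bands; the function names are ours, and the thesis's actual complex-wavelet pipeline is more elaborate:

```python
import numpy as np

def haar2d(img):
    """Single-level 2-D Haar transform: returns (LL, LH, HL, HH) subbands."""
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0
    lh = (a - b + c - d) / 4.0
    hl = (a + b - c - d) / 4.0
    hh = (a - b - c + d) / 4.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll - lh + hl - hh
    out[1::2, 0::2] = ll + lh - hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

def fuse(img1, img2):
    """Fuse two registered images: average the approximation band,
    keep the larger-magnitude coefficient in each detail band."""
    s1, s2 = haar2d(img1), haar2d(img2)
    ll = (s1[0] + s2[0]) / 2.0
    details = [np.where(np.abs(d1) >= np.abs(d2), d1, d2)
               for d1, d2 in zip(s1[1:], s2[1:])]
    return ihaar2d(ll, *details)
```

    Because the detail-band rule keeps the stronger coefficient from either input, edges and textures from both images survive in the fused result; fusing an image with itself returns the image unchanged.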

    Computational experiment of error diffusion dithering for depth reduction in images

    The halftone technique is a process that employs patterns of black and white dots to reduce the number of gray levels in an image. Because the human visual system tends to soften the distinction between points with different shades, these patterns of black and white dots produce a visual effect as if the image were composed of shades of gray. The technique is quite old and is widely used for printing images in newspapers and magazines, where only black (ink) and white (paper) levels are available. There are several methods for generating halftone images. In this article we explore dithering with error diffusion and present an analysis of different halftone techniques that use error diffusion to change the bit depth of an image. The results showed that the bit depth of the image is reduced to 1/8 per channel; this halftone technique can be used to reduce the file size of an image, losing information but achieving good results depending on the context.
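    The classic error-diffusion scheme explored in articles like this one is Floyd-Steinberg, which quantises each pixel in raster order and pushes the quantisation error onto the four unprocessed neighbours with weights 7/16, 3/16, 5/16, and 1/16. A minimal sketch (the `levels` parameter is our illustrative generalisation for output depths other than binary):

```python
import numpy as np

def floyd_steinberg(img, levels=2):
    """Error-diffusion dithering with Floyd-Steinberg weights (7,3,5,1)/16.
    img: 2-D float array in [0, 1]; returns an array quantised to `levels` values."""
    out = img.astype(float).copy()
    h, w = out.shape
    step = 1.0 / (levels - 1)
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            new = np.clip(round(old / step) * step, 0.0, 1.0)
            out[y, x] = new
            err = old - new  # quantisation error, diffused to unprocessed pixels
            if x + 1 < w:
                out[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    out[y + 1, x - 1] += err * 3 / 16
                out[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    out[y + 1, x + 1] += err * 1 / 16
    return out
```

    On a uniform mid-gray input the output is a binary dot pattern whose local density approximates the original intensity, which is exactly the illusion the article describes.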

    Perceptually inspired image estimation and enhancement

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences, 2009. Includes bibliographical references (p. 137-144). In this thesis, we present three image estimation and enhancement algorithms inspired by human vision. In the first part of the thesis, we propose an algorithm for mapping one image to another based on the statistics of a training set. Many vision problems can be cast as image mapping problems, such as estimating reflectance from luminance, estimating shape from shading, and separating signal and noise. Such problems are typically under-constrained, and yet humans are remarkably good at solving them. Classic computational theories attribute this ability of the human visual system to its use of intuitive regularities of the world, e.g., surfaces tend to be piecewise constant. In recent years, there has been considerable interest in deriving more sophisticated statistical constraints from natural images, but because of the high-dimensional nature of images, representing and utilizing the learned models remains a challenge. Our techniques produce models that are very easy to store and to query. We show these techniques to be effective for a number of applications: removing noise from images, estimating a sharp image from a blurry one, decomposing an image into reflectance and illumination, and interpreting lightness illusions. In the second part of the thesis, we present an algorithm for compressing the dynamic range of an image while retaining important visual detail. The human visual system confronts a serious challenge with dynamic range, in that the physical world has an extremely high dynamic range while neurons have low dynamic ranges. The human visual system performs dynamic range compression by applying automatic gain control, in both the retina and the visual cortex.
Taking inspiration from that, we designed techniques that involve multi-scale subband transforms and smooth gain control on subband coefficients, and that resemble the contrast gain control mechanism in the visual cortex. We show our techniques to be successful in producing dynamic-range-compressed images without compromising the visibility of detail or introducing artifacts. We also show that the techniques can be adapted for the related problem of "companding", in which a high dynamic range image is converted to a low dynamic range image, saved using fewer bits, and later expanded back to high dynamic range with minimal loss of visual quality. In the third part of the thesis, we propose a technique that enables a user to easily localize image and video editing by drawing a small number of rough scribbles. Image segmentation, usually treated as an unsupervised clustering problem, is extremely difficult to solve. With a minimal degree of user supervision, however, we are able to generate selection masks of good quality. Our technique learns a classifier using the user-scribbled pixels as training examples and uses the classifier to classify the rest of the pixels into distinct classes. It then uses the classification results as per-pixel data terms, combines them with a smoothness term that respects color discontinuities, and generates better results than state-of-the-art algorithms for interactive segmentation. By Yuanzhen Li, Ph.D.
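    The smooth gain control idea can be illustrated with a drastically simplified two-band sketch: split the image into a low-pass band and its residual detail, attenuate the low-pass band with a smooth power-law gain, and recombine. The box blur and the `gamma` and `eps` values here are hypothetical choices for illustration; the thesis itself uses multi-scale subband transforms rather than a single split:

```python
import numpy as np

def blur(img, k=5):
    """Separable box blur with edge replication, as a cheap low-pass filter."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    ker = np.ones(k) / k
    tmp = np.apply_along_axis(lambda r: np.convolve(r, ker, mode='valid'), 1, p)
    return np.apply_along_axis(lambda c: np.convolve(c, ker, mode='valid'), 0, tmp)

def compress_range(img, gamma=0.6, eps=1e-2):
    """Two-band dynamic-range compression: attenuate the low-pass band with a
    smooth, amplitude-dependent gain; leave the high-pass detail intact."""
    low = blur(img)
    high = img - low
    gain = (np.abs(low) + eps) ** (gamma - 1.0)  # smooth power-law gain
    return gain * low + high
```

    Because only the smooth low-pass band is attenuated while the detail band passes through unchanged, the overall range shrinks without flattening local contrast, which is the behaviour the abstract describes.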

    Connecting mathematical models for image processing and neural networks

    This thesis deals with the connections between mathematical models for image processing and deep learning. While data-driven deep learning models such as neural networks are flexible and perform well, they are often used as a black box. This makes it hard to provide theoretical model guarantees and scientific insights. On the other hand, more traditional, model-driven approaches such as diffusion, wavelet shrinkage, and variational models offer a rich set of mathematical foundations. Our goal is to transfer these foundations to neural networks. To this end, we pursue three strategies. First, we design trainable variants of traditional models and reduce their parameter set after training to obtain transparent and adaptive models. Moreover, we investigate the architectural design of numerical solvers for partial differential equations and translate them into building blocks of popular neural network architectures. This yields criteria for stable networks and inspires novel design concepts. Lastly, we present novel hybrid models for inpainting that rely on our theoretical findings. These strategies provide three ways of combining the best of the two worlds of model- and data-driven approaches. Our work contributes to the overarching goal of closing the gap in performance and understanding that still exists between these worlds.
    ERC Advanced Grant INCOVI
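    The correspondence between numerical solvers and network building blocks mentioned above can be seen in the simplest case: one explicit step of linear diffusion has the form u <- u + tau * Lu, which is structurally a residual (skip) connection of the kind used in ResNets. A minimal sketch (our illustration, not the thesis's construction):

```python
import numpy as np

def diffusion_step(u, tau=0.2):
    """One explicit step of 2-D linear diffusion, u <- u + tau * Laplacian(u),
    with replicated (zero-flux) boundaries. The 'identity plus update' form
    mirrors a ResNet skip connection; tau <= 0.25 keeps the scheme stable."""
    p = np.pad(u, 1, mode='edge')
    lap = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * u
    return u + tau * lap
```

    The step preserves the mean gray value and smooths the signal, two of the classical guarantees that this line of work aims to carry over to trained networks.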

    The 2005 HST Calibration Workshop Hubble After the Transition to Two-Gyro Mode

    The 2005 HST Calibration Workshop was held at the Space Telescope Science Institute on October 26, 2005 to bring together members of the observing community, the instrument development teams, and the STScI instrument support teams to share information and techniques. Presentations covered the two-gyro performance of HST and FGS, advances in the calibration of a number of instruments, the results from other instruments after their return from space, and the status of still others scheduled for installation during the next servicing mission. Cross-calibration between HST and JWST was discussed, as well as the new Guide Star Catalog and advances in data analysis software. This book contains the published record of the workshop, while all the talks and posters are available electronically on the workshop Web site.

    Digital watermarking and novel security devices

    EThOS - Electronic Theses Online Service, GB, United Kingdom

    Contrast enhancement of dithered images using complex wavelets and novel amplification factors

    No full text
    Dithering creates an illusion of continuous-tone output on a binary device. Error diffusion-based dithering, or halftoning, is an efficient technique used primarily in printing. In this paper, error diffusion halftoning is used to achieve image dithering based on complex wavelets. As in wavelet-based dithering, Floyd-Steinberg error diffusion is incorporated, but in addition a new set of subband amplification factors is introduced in the proposed method to further enhance the contrast of the dithered image. Experimental results show that the proposed method is superior to state-of-the-art methods in terms of both subjective and objective assessments.
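    The subband amplification step can be sketched with a single-level real Haar transform standing in for the paper's complex wavelets. The amplification factors below are hypothetical placeholders, not the paper's values, and in the full pipeline the boosted image would subsequently be passed through Floyd-Steinberg error diffusion:

```python
import numpy as np

def boost_subbands(img, factors=(1.5, 1.5, 2.0)):
    """Single-level Haar decomposition; scale the LH, HL, HH detail bands by
    the given (hypothetical) amplification factors; reconstruct and clip."""
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    ll = (a + b + c + d) / 4
    lh = (a - b + c - d) / 4 * factors[0]  # horizontal detail
    hl = (a + b - c - d) / 4 * factors[1]  # vertical detail
    hh = (a - b - c + d) / 4 * factors[2]  # diagonal detail
    out = np.empty_like(img, dtype=float)
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll - lh + hl - hh
    out[1::2, 0::2] = ll + lh - hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return np.clip(out, 0.0, 1.0)
```

    With all factors set to 1 the transform round-trips exactly, so any contrast change comes purely from the amplification of the detail bands.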

    Compressive learning: new models and applications

    Today’s world is fuelled by data. From self-driving cars to agriculture, massive amounts of data are used to fit learning models that provide valuable insights and predictions. Such insights come at a significant price, as many traditional learning procedures have memory and computational costs that scale with the size of the data. This quickly becomes prohibitive, even when substantial resources are available. A new way of learning is therefore needed to allow efficient model fitting in the 21st century. The birth of compressive learning in recent years has provided a novel solution to the bottleneck of learning from big data. At the core of the compressive learning framework is the construction of a so-called sketch: a compact representation of the data that provides sufficient information for specific learning tasks. In this thesis we extend the compressive learning framework to a host of new models and applications. In the first part of the thesis, we consider the group of semi-parametric models and demonstrate the unique advantages and challenges associated with creating a compressive learning paradigm for these particular models. Concentrating on the independent component analysis model, we develop a framework of algorithms and theory enabling orders of magnitude of compression in memory complexity compared to existing methods. In the second part of the thesis, we develop a compressive learning framework for the emerging technology of single-photon counting lidar. We demonstrate that forming a sketch of the time-of-flight data circumvents the inherent data-transfer bottleneck of existing lidar techniques.
Finally, we extend the compressive lidar technology by developing both an efficient sketch-based detection algorithm that can detect the presence of a surface solely from the sketch, and a sketched plug-and-play framework that can integrate existing powerful denoisers that are robust to noisy lidar scenes with low photon counts.
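    The central object here, the sketch, can be illustrated as an average of random Fourier features of the data: a fixed-size summary whose length is independent of the number of samples and which can be updated one sample at a time. This is a generic illustration of the idea, not the thesis's specific construction:

```python
import numpy as np

def sketch(X, Omega):
    """Empirical sketch: mean of random Fourier features exp(i * x^T w).
    X: (n, d) data matrix; Omega: (d, m) matrix of random frequencies.
    Returns a length-m complex vector whose size does not depend on n."""
    return np.exp(1j * X @ Omega).mean(axis=0)
```

    Because the sketch is a plain average over samples, it can be computed in one streaming pass and merged across data chunks, which is what makes it attractive for big-data and lidar acquisition bottlenecks.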