
    Noise-Enhanced and Human Visual System-Driven Image Processing: Algorithms and Performance Limits

    This dissertation investigates image processing based on stochastic resonance (SR) noise and human visual system (HVS) properties. It develops several novel frameworks and algorithms for object detection in images, image enhancement, and image segmentation, as well as a method to estimate the performance limit of image segmentation algorithms. Object detection in images is a fundamental problem whose goal is to decide whether an object of interest is present or absent in a given image. We develop a framework and algorithm that enhance the detection performance of suboptimal detectors using SR noise: a suitable dose of noise is added to the original image data to obtain a performance improvement. Micro-calcification detection is used as an illustrative example, and comparative experiments on a large set of images verify the effectiveness of the approach. Image enhancement plays an important role in many vision tasks. We develop two image enhancement approaches. The first is based on SR noise, HVS-driven image quality metrics, and constrained multi-objective optimization (MOO), and aims to refine existing suboptimal image enhancement methods. The second is based on a selective enhancement framework, under which we develop several image enhancement algorithms. Both approaches are applied to many low-quality images and outperform many existing enhancement algorithms. Image segmentation is critical to image analysis. We present two segmentation algorithms driven by HVS properties, incorporating human visual perception factors into the segmentation procedure and encoding prior expectations about the segmentation results into the objective functions through Markov random fields (MRF). Our experiments show that the presented algorithms achieve higher segmentation accuracy than many representative segmentation and clustering algorithms in the literature. A performance limit, or performance bound, is useful for evaluating different image segmentation algorithms and for analyzing the segmentability of given image content. We formulate image segmentation as a parameter estimation problem and derive a lower bound on the segmentation error, i.e., the mean square error (MSE) of the pixel labels, using a modified Cramér-Rao bound (CRB). The derivation rests on a biased-estimator assumption, whose reasonableness is verified in the dissertation. Experimental results demonstrate the validity of the derived bound.
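
    A minimal sketch of the SR-noise idea under stated assumptions: a generic score-based detector (the names detector, images, labels, and threshold are hypothetical placeholders), additive Gaussian noise standing in for the injected SR noise, and a grid search over the noise level. The dissertation's actual noise design and micro-calcification detector are not reproduced here.

```python
import numpy as np

def accuracy(detector, images, labels, threshold):
    """Fraction of images the (suboptimal) detector classifies correctly."""
    scores = np.array([detector(img) for img in images])
    return np.mean((scores > threshold) == labels)

def sr_enhanced_detection(detector, images, labels, threshold,
                          sigmas=np.linspace(0.0, 0.5, 26), trials=20, seed=0):
    """Stochastic resonance sweep: add Gaussian noise of increasing strength
    and keep the dose that maximizes accuracy. For a suboptimal detector,
    a nonzero sigma can outperform the noise-free case (sigma = 0)."""
    rng = np.random.default_rng(seed)
    best_sigma, best_acc = 0.0, accuracy(detector, images, labels, threshold)
    for sigma in sigmas[1:]:
        accs = [accuracy(detector,
                         [img + rng.normal(0.0, sigma, img.shape) for img in images],
                         labels, threshold)
                for _ in range(trials)]
        acc = float(np.mean(accs))
        if acc > best_acc:
            best_sigma, best_acc = sigma, acc
    return best_sigma, best_acc
```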

    An importance driven genetic algorithm for the halftoning process

    Most evolutionary approaches to halftoning have been concerned with its paramount goal: achieving an accurate reproduction of local grayscale intensities while avoiding the introduction of artifacts. A secondary concern has been the preservation of edges in the halftoned image. In this paper, we introduce a new evolutionary approach based on an importance function. The approach has two main characteristics: first, it can produce results similar to many other halftoning techniques; second, by changing the importance function, areas of the image with high variance can be highlighted.
    III Workshop de Computación Gráfica, Imágenes y Visualización (WCGIV), Red de Universidades con Carreras en Informática (RedUNCI)
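
    As a sketch of how an importance function can steer an evolutionary halftoner, the toy genetic algorithm below scores each candidate halftone by an importance-weighted tone error against a low-pass (crude HVS model) view of the image. The Gaussian blur, operator choices, and all parameters are illustrative assumptions, not the paper's settings; passing a local-variance map as the importance reproduces the second characteristic above, emphasizing high-variance areas.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fitness(halftone, gray, importance):
    """Importance-weighted tone error: compare a low-pass filtered halftone
    (a crude HVS model) to the original, weighting each pixel's error."""
    perceived = gaussian_filter(halftone.astype(float), sigma=1.5)
    return -np.sum(importance * (perceived - gray) ** 2)

def evolve(gray, importance, pop_size=30, generations=200,
           p_mut=0.002, seed=0):
    """gray, importance: float arrays in [0, 1] of the same shape."""
    rng = np.random.default_rng(seed)
    pop = [(rng.random(gray.shape) < gray).astype(np.uint8)
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda h: fitness(h, gray, importance), reverse=True)
        elite = pop[: pop_size // 2]                  # truncation selection
        children = []
        for _ in range(pop_size - len(elite)):
            a, b = rng.choice(len(elite), 2, replace=False)
            mask = rng.random(gray.shape) < 0.5       # uniform crossover
            child = np.where(mask, elite[a], elite[b])
            flip = rng.random(gray.shape) < p_mut     # bit-flip mutation
            children.append(np.where(flip, 1 - child, child).astype(np.uint8))
        pop = elite + children
    return pop[0]
```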

    ADAPTIVE SAMPLING METHODS FOR TESTING AUTONOMOUS SYSTEMS

    In this dissertation, I propose a software-in-the-loop testing architecture that uses adaptive sampling to generate test suites for intelligent systems by identifying transitions in high-level mission criteria. Simulation-based testing depends on the ability to intelligently create test cases that reveal the greatest information about the performance of the system in the fewest number of runs. To this end, I focus on the discovery and analysis of performance boundaries: locations in the testing space where a small change in the test configuration leads to large changes in the vehicle's behavior. These boundaries can be used to characterize the regions of stable performance and to identify the critical factors that affect autonomous decision-making software. By creating meta-models that predict the locations of these boundaries, we can efficiently query the system and find informative test scenarios. These algorithms form the backbone of the Range Adversarial Planning Tool (RAPT), a software system used at naval testing facilities to identify the environmental triggers that will cause faults in the safety behavior of unmanned underwater vehicles (UUVs). This system was used to develop UUV field tests that were validated on a hardware platform at the Keyport Naval Testing Facility. Moving test cases from simulation to deployment in the field required new analytical tools, capable of handling uncertainty in the vehicle's performance and large datasets with high-dimensional outputs. The approach has also been applied to the generation of self-righting plans for unmanned ground vehicles (UGVs) using topological transition graphs; to create these graphs, I developed a set of manifold sampling and clustering algorithms that identify paths through stable regions of the configuration space. Finally, I introduce an imitation learning approach for generating surrogate models of the target system's control policy. These surrogate agents can be used in place of the true autonomy to enable faster-than-real-time simulations. Together, these tools for experimental design and behavioral modeling provide a new way of analyzing the performance of robotic and intelligent systems and give a designer actionable feedback.
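
    A hedged sketch of the boundary-focused adaptive sampling loop described above, assuming a black-box simulate(x) that returns 1 for mission success and 0 for a fault. It uses a random-forest meta-model and an ambiguity acquisition rule (test the candidate whose predicted success probability is nearest 0.5); RAPT's actual meta-models and acquisition strategy may differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def adaptive_boundary_sampling(simulate, bounds, n_init=20, n_iter=80,
                               n_candidates=2000, seed=0):
    """Active learning for performance boundaries: fit a meta-model to
    pass/fail outcomes, then repeatedly run the test configuration the
    model is least certain about."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T                       # bounds: [(min, max), ...]
    X = rng.uniform(lo, hi, size=(n_init, len(bounds)))
    y = np.array([simulate(x) for x in X])            # 1 = success, 0 = fault
    for _ in range(n_iter):
        if len(np.unique(y)) < 2:                     # no transition seen yet:
            x_next = rng.uniform(lo, hi)              # keep exploring at random
        else:
            model = RandomForestClassifier(n_estimators=100,
                                           random_state=seed).fit(X, y)
            cand = rng.uniform(lo, hi, size=(n_candidates, len(bounds)))
            p = model.predict_proba(cand)[:, 1]
            x_next = cand[np.argmin(np.abs(p - 0.5))] # most ambiguous case
        X = np.vstack([X, x_next])
        y = np.append(y, simulate(x_next))
    return X, y
```

    Configurations collected this way cluster around the performance boundary, which is where field tests reveal the most about the autonomy's failure modes.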

    Efficient Halftoning via Deep Reinforcement Learning

    Halftoning aims to reproduce a continuous-tone image with pixels whose intensities are constrained to two discrete levels. The technique is deployed on virtually every printer, and most printers adopt fast methods (e.g., ordered dithering, error diffusion) that fail to render the structural details on which a halftone's quality depends. Prior methods that instead pursue visual quality by searching for an optimal halftone suffer from high computational cost. In this paper, we propose a fast and structure-aware halftoning method via a data-driven approach. Specifically, we formulate halftoning as a reinforcement learning problem in which each binary pixel's value is an action chosen by a virtual agent with a shared fully convolutional neural network (CNN) policy. In the offline phase, an effective gradient estimator is used to train the agents to produce high-quality halftones in a single action step. Halftones can then be generated online by one fast CNN inference. We further propose a novel anisotropy-suppressing loss function that yields the desirable blue-noise property. Finally, we find that optimizing SSIM can produce holes in flat areas, which can be avoided by weighting the metric with the contone's contrast map. Experiments show that our framework can effectively train a lightweight CNN, 15x faster than previous structure-aware methods, to generate blue-noise halftones with satisfactory visual quality. We also present a prototype of deep multitoning to demonstrate the extensibility of our method.
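
    A minimal PyTorch sketch of the shared fully convolutional policy described above: the network maps a contone image to a per-pixel Bernoulli action probability, so training can sample binary actions while inference is a single forward pass. The three-layer architecture is an illustrative assumption; the paper's gradient estimator, anisotropy-suppressing loss, and contrast-weighted SSIM are not reproduced here.

```python
import torch
import torch.nn as nn

class HalftonePolicy(nn.Module):
    """Shared fully convolutional policy: one Bernoulli action per pixel."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1))

    def forward(self, contone):                 # contone: (B, 1, H, W) in [0, 1]
        return torch.sigmoid(self.net(contone)) # P(pixel action = white)

policy = HalftonePolicy()
gray = torch.rand(1, 1, 64, 64)                 # toy contone input
p = policy(gray)
sampled = torch.bernoulli(p)                    # training: sample all actions at once
halftone = (p > 0.5).float()                    # inference: one deterministic pass
```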

    Digital Color Imaging

    This paper surveys current technology and research in the area of digital color imaging. In order to establish the background and lay down terminology, fundamental concepts of color perception and measurement are first presented using vector-space notation and terminology. Present-day color recording and reproduction systems are reviewed along with the common mathematical models used for representing these devices. Algorithms for processing color images for display and communication are surveyed, and a forecast of research trends is attempted. An extensive bibliography is provided.
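
    As a small worked example of the vector-space formulation the survey uses, the snippet below represents a colour as a 3-vector and a linear device model as a 3x3 matrix, using the standard linear-sRGB-to-CIE-XYZ matrix for the D65 white point (a textbook constant, not a value taken from this paper).

```python
import numpy as np

# Linear device model: CIE XYZ tristimulus vector = M @ linear sRGB vector.
M_RGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                         [0.2126, 0.7152, 0.0722],
                         [0.0193, 0.1192, 0.9505]])

def linear_rgb_to_xyz(rgb):
    """rgb: (..., 3) array of linear (not gamma-encoded) sRGB values."""
    return rgb @ M_RGB_TO_XYZ.T

white = linear_rgb_to_xyz(np.array([1.0, 1.0, 1.0]))
# ~[0.9505, 1.0000, 1.0888]: the D65 white point, as expected.
```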

    Digital halftoning and the physical reconstruction function

    Originally presented as the author's Ph.D. thesis, Massachusetts Institute of Technology, 1986. Bibliography: p. 397-405. "This work has been supported by the Digital Equipment Corporation." By Robert A. Ulichney.

    Wholetoning: Synthesizing Abstract Black-and-White Illustrations

    Black-and-white imagery is a popular and interesting depiction technique in the visual arts, in which varying tints and shades of a single colour are used. Within the realm of black-and-white images, there is a set of black-and-white illustrations that depict only salient features, ignoring details and reducing colour to pure black and white with no intermediate tones. These illustrations hold tremendous potential to enrich decoration, human communication, and entertainment. Producing abstract black-and-white illustrations by hand is a time-consuming and difficult process that requires both artistic talent and technical expertise. Previous work has not explored this style of illustration in much depth, and simple approaches such as thresholding are insufficient for stylization and artistic control. I use the word wholetoning to refer to illustrations that feature a high degree of shape and tone abstraction. In this thesis, I explore computer algorithms for generating wholetoned illustrations. First, I offer a general-purpose framework, "artistic thresholding", to control the generation of wholetoned illustrations in an intuitive way. The basic artistic thresholding algorithm is an optimization framework based on simulated annealing that produces the final bi-level result. I design an extensible objective function from observations of many wholetoned images; the function is a weighted sum over terms that encode features common to wholetoned illustrations. Based on the framework, I then explore two specific wholetoned styles: papercutting and representational calligraphy. I define a paper-cut design as a wholetoned image with connectivity constraints that ensure it can be cut out of a single piece of paper. My computer-generated papercutting technique can convert an original wholetoned image into a paper-cut design, and it can also synthesize the stylized geometric patterns often found in traditional designs. Representational calligraphy is defined as a wholetoned image with the constraint that all depiction elements must be letters. The procedure for generating representational calligraphy designs is formalized as a "calligraphic packing" problem, for which I provide a semi-automatic technique that warps a sequence of letters to fit a shape while preserving their readability.
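
    A hedged sketch of the artistic-thresholding skeleton: simulated annealing over bi-level images with an objective that is a weighted sum of terms. The two terms used here (tone fidelity under a Gaussian blur and a boundary-smoothness penalty) and all weights are illustrative stand-ins for the thesis's feature terms, and the per-flip recomputation of the full objective is deliberately simple rather than efficient.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def objective(binary, gray, w_tone=1.0, w_smooth=0.3):
    """Weighted sum of terms over a bi-level image (illustrative terms only):
    tone fidelity plus a penalty on jagged black/white boundaries."""
    tone = np.sum((gaussian_filter(binary.astype(float), 2.0) - gray) ** 2)
    smooth = (np.sum(binary[:, 1:] != binary[:, :-1]) +
              np.sum(binary[1:, :] != binary[:-1, :]))
    return w_tone * tone + w_smooth * smooth

def artistic_threshold(gray, steps=50_000, T0=1.0, cooling=0.9999, seed=0):
    """Simulated annealing: flip a random pixel, accept if the objective
    drops or with Boltzmann probability exp(-dE / T); slowly cool T."""
    rng = np.random.default_rng(seed)
    binary = (gray > 0.5).astype(np.uint8)       # plain threshold as the start
    E, T = objective(binary, gray), T0
    for _ in range(steps):
        i = rng.integers(gray.shape[0])
        j = rng.integers(gray.shape[1])
        binary[i, j] ^= 1                        # propose a single-pixel flip
        E_new = objective(binary, gray)
        if E_new < E or rng.random() < np.exp((E - E_new) / T):
            E = E_new                            # accept the flip
        else:
            binary[i, j] ^= 1                    # reject: undo the flip
        T *= cooling
    return binary
```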