
    Directional edge and texture representations for image processing

    An efficient representation for natural images is of fundamental importance in image processing and analysis. The commonly used separable transforms such as wavelets are not best suited for images due to their inability to exploit directional regularities such as edges and oriented textural patterns, while most of the recently proposed directional schemes cannot represent these two types of features in a unified transform. This thesis focuses on the development of directional representations for images which can capture both edges and textures in a multiresolution manner. The thesis first considers the problem of extracting linear features with the multiresolution Fourier transform (MFT). Building on a previous MFT-based linear feature model, the work extends the extraction method to the case in which the image is corrupted by noise. The problem is tackled by combining a "Signal+Noise" frequency model, a refinement stage and a robust classification scheme. As a result, the MFT is able to perform linear feature analysis on noisy images on which previous methods failed. A new set of transforms, the multiscale polar cosine transforms (MPCT), is also proposed in order to represent textures. The MPCT can be regarded as a real-valued MFT with similar basis functions of oriented sinusoids. It is shown that the transform can represent textural patches more efficiently than the conventional Fourier basis. With a directional best cosine basis, the MPCT packet (MPCPT) is shown to be an efficient representation for edges and textures, despite its high computational burden. The problem of representing edges and textures in a fixed transform with lower complexity is then considered. This is achieved by applying a Gaussian frequency filter, which matches the dispersion of the magnitude spectrum, to the local MFT coefficients. This is particularly effective in denoising natural images, due to its ability to preserve both types of feature. Further improvements can be made by employing the information given by the linear feature extraction process in the filter's configuration. The denoising results compare favourably against other state-of-the-art directional representations.
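    The thesis's filter design is only summarized above; a minimal sketch of the general idea, assuming a plain blockwise FFT in place of the local MFT and a simple second-moment estimate of the spectral dispersion (both assumptions, not the thesis's exact construction), might look like:

```python
import numpy as np

def gaussian_frequency_filter(block):
    """Attenuate a local block's spectrum with a Gaussian whose width is
    matched to the dispersion of the magnitude spectrum (illustrative only)."""
    F = np.fft.fftshift(np.fft.fft2(block))          # local frequency content
    mag = np.abs(F)

    # Frequency coordinates centred on DC.
    n = block.shape[0]
    u = np.arange(n) - n // 2
    U, V = np.meshgrid(u, u, indexing="ij")

    # Dispersion of the magnitude spectrum: second moment about DC
    # (an assumed estimator, not necessarily the thesis's definition).
    w = mag / (mag.sum() + 1e-12)
    sigma2 = (w * (U**2 + V**2)).sum()

    # Gaussian mask matched to that dispersion, applied to the coefficients.
    H = np.exp(-(U**2 + V**2) / (2.0 * sigma2 + 1e-12))
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))

# Example: filter one 32x32 block of a noisy image (synthetic data).
noisy_block = np.random.default_rng(0).normal(size=(32, 32))
denoised_block = gaussian_frequency_filter(noisy_block)
```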

    Exploration, Registration, and Analysis of High-Throughput 3D Microscopy Data from the Knife-Edge Scanning Microscope

    Advances in high-throughput, high-volume microscopy techniques have enabled the acquisition of extremely detailed anatomical structures of human or animal organs. The Knife-Edge Scanning Microscope (KESM) is one of the first instruments to produce sub-micrometer resolution (~1 µm³ voxel size) data from whole small animal brains. We successfully imaged, using the KESM, entire mouse brains stained with Golgi (neuronal morphology), India ink (vascular network), and Nissl (soma distribution). Our data sets fill the gap left by most existing data sets, which have only partial organ coverage or orders-of-magnitude lower resolution. However, even though we have such unprecedented data sets, we still do not have a suitable informatics platform to visualize and quantitatively analyze them. This dissertation is designed to address three key gaps: (1) due to the large volume (several teravoxels) and the multiscale nature of the data, visualization alone is a huge challenge, let alone quantitative connectivity analysis; (2) the size of the uncompressed KESM data exceeds a few terabytes, and to compare and combine it with other data sets from different imaging modalities, the KESM data must be registered to a standard coordinate space; and (3) quantitative analysis that seeks to count every neuron in our massive, growing, and sparsely labeled data is a serious challenge. The goals of my dissertation are as follows: (1) develop an online neuroinformatics framework for efficient visualization and analysis of the multiscale KESM data sets, (2) develop a robust landmark-based 3D registration method for mapping the KESM Nissl-stained entire mouse brain data into the Waxholm Space (a canonical coordinate system for the mouse brain), and (3) develop a scalable, incremental learning algorithm for cell detection in high-resolution KESM Nissl data. For the web-based neuroinformatics framework, I prepared multiscale data sets at different zoom levels from the original data sets. I then extended the Google Maps API to develop atlas features such as scale bars, panel browsing, and transparent overlay for 3D rendering. Next, I adapted the OpenLayers API, a free mapping and layering API supporting functionality similar to the Google Maps API. Furthermore, I prepared multiscale data sets in vector graphics to improve page loading time by reducing the file size. To better appreciate the full 3D morphology of the objects embedded in the data volumes, I developed a WebGL-based approach that complements the web-based framework for interactive viewing. For the registration work, I adapted and customized a stable 2D rigid deformation method to map our data sets to the Waxholm Space. For the analysis of neuronal distribution, I designed and implemented a scalable, effective quantitative analysis method using supervised learning. I utilized Principal Components Analysis (PCA) in a supervised manner and implemented the algorithm using MapReduce parallelization. I expect my frameworks to enable effective exploration and analysis of our KESM data sets. In addition, I expect my approaches to be broadly applicable to the analysis of other high-throughput medical imaging data.
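    The cell-detection pipeline is described above only at a high level; a minimal single-machine sketch of the supervised use of PCA it names (with hypothetical `cell_patches` and `background_patches` arrays standing in for labeled KESM Nissl training data, and a simple nearest-centroid rule in place of the dissertation's actual classifier and MapReduce layout) could look like:

```python
import numpy as np

def fit_pca(patches, n_components=16):
    """Fit a PCA basis to flattened training patches (rows = samples)."""
    X = patches.reshape(len(patches), -1).astype(float)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]              # mean vector and principal axes

def project(patches, mean, components):
    X = patches.reshape(len(patches), -1).astype(float)
    return (X - mean) @ components.T             # low-dimensional features

# Hypothetical labeled training patches (e.g. 15x15 pixel windows).
rng = np.random.default_rng(0)
cell_patches = rng.normal(1.0, 0.2, size=(200, 15, 15))
background_patches = rng.normal(0.0, 0.2, size=(200, 15, 15))

train = np.concatenate([cell_patches, background_patches])
labels = np.array([1] * len(cell_patches) + [0] * len(background_patches))

mean, comps = fit_pca(train)
features = project(train, mean, comps)

# Nearest-class-centroid rule in PCA space (a stand-in decision rule).
centroids = np.stack([features[labels == c].mean(axis=0) for c in (0, 1)])

def classify(patches):
    f = project(patches, mean, comps)
    d = np.linalg.norm(f[:, None, :] - centroids[None], axis=2)
    return d.argmin(axis=1)                      # 1 = cell, 0 = background

print(classify(cell_patches[:5]))
```

    In a MapReduce setting, each mapper would classify patches from its own subvolume independently, which is what makes a per-patch rule like this easy to parallelize.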

    Laser scanner jitter characterization, page content analysis for optimal rendering, and understanding image graininess

    Chapter 1 concerns laser scanner jitter in the electrophotographic (EP) process, which is widely used in imaging systems such as laser printers and office copiers. In the EP process, laser scanner jitter is a common artifact that mainly appears along the scan direction due to the condition of the polygon facets. Prior studies have not focused on the periodic characteristic of laser scanner jitter in terms of modeling and analysis. This chapter addresses the periodic characteristic of laser scanner jitter in a mathematical model. In the Fourier domain, we derive an analytic expression for laser scanner jitter in general, and extend the expression assuming a sinusoidal displacement. This leads to a simple closed-form expression in terms of Bessel functions of the first kind. We further examine the relationship between the continuous-space halftone image and the periodic laser scanner jitter. The simulation results show that our proposed mathematical model predicts the phenomenon of laser scanner jitter effectively, when compared to the characterization using a test pattern consisting of a flat field with 25% dot coverage. However, there are some mismatches between the analytical spectrum and the spectrum of the processed scanned test target. We improve the experimental results by directly estimating the displacement instead of assuming a sinusoidal displacement. This gives a better prediction of the phenomenon of laser scanner jitter.
    In Chapter 2, we describe a segmentation-based object map correction algorithm, which can be integrated into a new imaging pipeline for laser electrophotographic (EP) printers. This new imaging pipeline incorporates the idea of object-oriented halftoning, which applies different halftone screens to different regions of the page to improve the overall print quality. In particular, smooth areas are halftoned with a low-frequency screen to provide more stable printing, whereas detail areas are halftoned with a high-frequency screen, since this will better reproduce the object detail. In this case, the object detail also serves to mask any print defects that arise from the use of a high-frequency screen. These regions are defined by the initial object map, which is translated from the page description language (PDL). However, the object-type information obtained from the PDL may be incorrect. Some smooth areas may be labeled as raster, causing them to be halftoned with a high-frequency screen, rather than being labeled as vector, which would result in them being rendered with a low-frequency screen. To correct the misclassification, we propose an object map correction algorithm that combines information from the incorrect object map with information obtained by segmentation of the continuous-tone RGB rasterized page image. Finally, the rendered image can be halftoned by the object-oriented halftoning approach, based on the corrected object map. Preliminary experimental results indicate the benefits of our algorithm, combined with the new imaging pipeline, in terms of correction of misclassification errors.
    In Chapter 3, we describe a study to understand image graininess. With the emergence of high-end digital printing technologies, it is of interest to analyze the nature and causes of image graininess in order to understand the factors that prevent high-end digital presses from achieving the same print quality as commercial offset presses. We want to understand how image graininess relates to the halftoning technology and the marking technology.
    This chapter provides three different approaches to understanding image graininess. First, we perform a Fourier-based analysis of regular and irregular periodic, clustered-dot halftone textures. With high-end digital printing technology, irregular screens can be considered since they can achieve a better approximation to the screen sets used for commercial offset presses. This is due to the fact that the elements of the periodicity matrix of an irregular screen are rational numbers, rather than integers, which would be the case for a regular screen. From the analytical results, we show that irregular halftone textures generate new frequency components near the spectrum origin, and these frequency components are low enough to be visible to the human viewer. Regular halftone textures, however, do not have these frequency components. In addition, we provide a metric to measure the nonuniformity of a given halftone texture. The metric indicates that the nonuniformity of irregular halftone textures is higher than that of regular halftone textures. Furthermore, a method to visualize the nonuniformity of given halftone textures is described. The analysis shows that irregular halftone textures are grainier than regular halftone textures. Second, we analyze the regular and irregular periodic, clustered-dot halftone textures by calculating three spatial statistics. First, we consider the disparity between the lattice points generated by the periodicity matrix and the centroids of the dot clusters. Next, we consider the area of the dot clusters in regular and irregular halftone textures. Third, we calculate the compactness of the dot clusters in the regular and irregular halftone textures. The disparity between the centroids of irregular dot clusters and the lattice points generated by the irregular screen is larger than the disparity between the centroids of regular dot clusters and the lattice points generated by the regular screen. Irregular halftone textures have higher variance in the histogram of dot-cluster area. In addition, the compactness measurement shows that irregular dot clusters are less compact than regular dot clusters, whereas a clustered-dot halftoning algorithm aims to produce dot clusters that are as compact as possible. Lastly, we examine the current marking technology by printing the same halftone pattern on different substrates, glossy and polyester media. The experimental results show that the current marking technology provides better print quality on glossy media than on polyester media. Based on these three approaches, we conclude that the current halftoning technology introduces image graininess in the spatial domain because of the non-integer elements in the periodicity matrix of the irregular screen and the finite addressability of the marking engine. In addition, the geometric characteristics of irregular dot clusters are more irregular than those of regular dot clusters. Finally, the marking technology yields inconsistent print quality across substrates.
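    The closed-form expression from Chapter 1 is not reproduced in this abstract; as a hedged illustration of why a purely sinusoidal displacement leads to Bessel functions of the first kind, the standard Jacobi-Anger expansion (with assumed jitter amplitude a and angular frequency ω_j) spreads each spectral component of the halftone into sidebands:

```latex
% Jacobi-Anger expansion (standard identity, not the thesis's exact model):
%   e^{jz\sin\theta} = \sum_{n=-\infty}^{\infty} J_n(z)\, e^{jn\theta}.
% Applying it to a spectral component e^{jux} displaced by d(x) = a\sin(\omega_j x):
\[
  e^{\,ju\bigl(x - a\sin(\omega_j x)\bigr)}
  \;=\; \sum_{n=-\infty}^{\infty} J_n(ua)\, e^{\,j(u - n\omega_j)x},
\]
% i.e. sidebands appear at offsets n\omega_j around every halftone frequency u,
% with amplitudes given by Bessel functions of the first kind, J_n(ua).
```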

    Anisotropic Neutron Imaging

    Anisotropic neutron imaging presents a unique method to process images in order to produce a human-readable, singular, isotropic image from a set of anisotropic neutron images. These images were created using a Söller-slit collimator in a rotating device to change the azimuthal orientation of the slits with respect to the imaging plane. A multi-level 2D discrete wavelet transform (2D-DWT) was used to extract the information contained within each image and fuse the data into an isotropic-resolution image. The footprint of the experimental system is small when compared to other neutron radiography facilities of comparable L/D, and it has the advantage of a relatively low acquisition time while maintaining high-resolution image capture. The 2D-DWT algorithm developed within this dissertation is able to enhance the sharpness of the edges within the final image and remove the artifacts created by the slit-type collimator, which increased human readability in the test circumstances.
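    The fusion algorithm itself is not detailed in this abstract; a minimal sketch of a generic multilevel 2D-DWT fusion of several co-registered anisotropic views (using PyWavelets, with an averaging rule for the approximation band and a maximum-magnitude rule for the detail bands, which may differ from the dissertation's actual rules) might look like:

```python
import numpy as np
import pywt  # PyWavelets

def fuse_anisotropic(images, wavelet="db4", level=3):
    """Fuse several co-registered anisotropic images into one image by
    combining their multilevel 2D-DWT coefficients (illustrative rules only)."""
    decomps = [pywt.wavedec2(img, wavelet, level=level) for img in images]

    # Average the coarse approximation band across views.
    fused = [np.mean([d[0] for d in decomps], axis=0)]

    # For each detail band, keep the coefficient with the largest magnitude.
    for lvl in range(1, level + 1):
        bands = []
        for b in range(3):  # horizontal, vertical, diagonal detail bands
            stack = np.stack([d[lvl][b] for d in decomps])
            idx = np.abs(stack).argmax(axis=0)
            bands.append(np.take_along_axis(stack, idx[None], axis=0)[0])
        fused.append(tuple(bands))

    return pywt.waverec2(fused, wavelet)

# Hypothetical stack of images taken at different slit orientations.
rng = np.random.default_rng(1)
views = [rng.normal(size=(256, 256)) for _ in range(4)]
isotropic = fuse_anisotropic(views)
```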

    Automated Complexity-Sensitive Image Fusion

    To construct a complete representation of a scene with environmental obstacles such as fog, smoke, darkness, or textural homogeneity, multisensor video streams captured in different modalities are considered. A computational method for automatically fusing multimodal image streams into a highly informative and unified stream is proposed. The method consists of the following steps:
    1. Image registration is performed to align video frames in the visible band over time, adapting to the nonplanarity of the scene by automatically subdividing the image domain into regions approximating planar patches (a per-region alignment is sketched after this abstract).
    2. Wavelet coefficients are computed for each of the input frames in each modality.
    3. Corresponding regions and points are compared using spatial and temporal information across various scales.
    4. Decision rules based on the results of multimodal image analysis are used to combine the wavelet coefficients from different modalities.
    5. The combined wavelet coefficients are inverted to produce an output frame containing useful information gathered from the available modalities.
    Experiments show that the proposed system is capable of producing fused output containing the characteristics of color visible-spectrum imagery while adding information exclusive to infrared imagery, with attractive visual and informational properties.
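    Step 1 above is described only in outline; a minimal sketch of one way to realize per-region planar alignment (using OpenCV feature matching, and a hypothetical fixed grid in place of the automatic subdivision into planar patches) might be:

```python
import cv2
import numpy as np

def register_region(ref_roi, mov_roi):
    """Estimate a homography aligning one region of a moving frame to the
    reference frame, treating the region as approximately planar."""
    orb = cv2.ORB_create(500)
    k1, d1 = orb.detectAndCompute(ref_roi, None)
    k2, d2 = orb.detectAndCompute(mov_roi, None)
    if d1 is None or d2 is None:
        return None
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    if len(matches) < 8:
        return None
    src = np.float32([k2[m.trainIdx].pt for m in matches])   # moving points
    dst = np.float32([k1[m.queryIdx].pt for m in matches])   # reference points
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H

def register_frame(ref, mov, grid=(2, 2)):
    """Warp each grid cell of `mov` onto `ref` with its own homography
    (a fixed grid stands in for the automatic planar-patch subdivision)."""
    h, w = ref.shape[:2]
    out = np.zeros_like(ref)
    for gy in range(grid[0]):
        for gx in range(grid[1]):
            ys, ye = gy * h // grid[0], (gy + 1) * h // grid[0]
            xs, xe = gx * w // grid[1], (gx + 1) * w // grid[1]
            H = register_region(ref[ys:ye, xs:xe], mov[ys:ye, xs:xe])
            if H is not None:
                out[ys:ye, xs:xe] = cv2.warpPerspective(
                    mov[ys:ye, xs:xe], H, (xe - xs, ye - ys))
            else:
                out[ys:ye, xs:xe] = mov[ys:ye, xs:xe]
    return out
```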

    AXMEDIS 2008

    The AXMEDIS International Conference series aims to explore all subjects and topics related to cross-media and digital-media content production, processing, management, standards, representation, sharing, protection and rights management, and to address the latest developments and future trends of the technologies and their applications, impacts and exploitation. The AXMEDIS events offer venues for exchanging concepts, requirements, prototypes, research ideas, and findings which can contribute to academic research and also benefit business and industrial communities. In the Internet and digital era, cross-media production and distribution represent key developments and innovations that are fostered by emergent technologies to ensure better value for money while optimising productivity and market coverage.