
    Laser scanner jitter characterization, page content analysis for optimal rendering, and understanding image graininess

    Chapter 1 concerns the electrophotographic (EP) process, which is widely used in imaging systems such as laser printers and office copiers. In the EP process, laser scanner jitter is a common artifact that appears mainly along the scan direction and arises from the condition of the polygon facets. Prior studies have not focused on modeling and analyzing the periodic characteristic of laser scanner jitter. This chapter incorporates that periodic characteristic into a mathematical model. In the Fourier domain, we derive an analytic expression for laser scanner jitter in general, and then specialize it under the assumption of a sinusoidal displacement. This leads to a simple closed-form expression in terms of Bessel functions of the first kind. We further examine the relationship between the continuous-space halftone image and the periodic laser scanner jitter. Simulation results show that the proposed mathematical model predicts the jitter phenomenon effectively when compared to a characterization based on a test pattern consisting of a flat field with 25% dot coverage. However, there are some mismatches between the analytical spectrum and the spectrum of the processed scanned test target. We improve the experimental results by estimating the displacement directly instead of assuming a sinusoidal displacement, which gives a better prediction of the laser scanner jitter phenomenon.
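    Closed forms of this kind for a sinusoidal displacement typically arise from the classical connection between sinusoidal phase modulation and Bessel functions of the first kind (the Jacobi-Anger expansion). The short sketch below is illustrative only and is not the chapter's model: it numerically checks that the sideband magnitudes of a sinusoidally jittered complex exponential match |J_n(a)|; the amplitude and frequency values are hypothetical.

        # Illustrative sketch: sidebands of a sinusoidal phase modulation match
        # Bessel functions of the first kind (Jacobi-Anger expansion).
        import numpy as np
        from scipy.special import jv

        a = 0.8          # jitter (phase-modulation) amplitude, radians (hypothetical)
        f0 = 8           # jitter frequency in cycles per record (hypothetical)
        N = 4096         # number of samples

        t = np.arange(N) / N
        signal = np.exp(1j * a * np.sin(2 * np.pi * f0 * t))   # phase-modulated carrier

        spectrum = np.fft.fft(signal) / N
        for n in range(4):                                     # first few sidebands
            numeric = abs(spectrum[(n * f0) % N])
            analytic = abs(jv(n, a))                           # |J_n(a)|
            print(f"harmonic {n}: |FFT| = {numeric:.4f}, |J_n(a)| = {analytic:.4f}")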
    In Chapter 2, we describe a segmentation-based object map correction algorithm that can be integrated into a new imaging pipeline for laser electrophotographic (EP) printers. This pipeline incorporates the idea of object-oriented halftoning, which applies different halftone screens to different regions of the page to improve the overall print quality. In particular, smooth areas are halftoned with a low-frequency screen to provide more stable printing, whereas detail areas are halftoned with a high-frequency screen, since this better reproduces the object detail; the object detail also serves to mask any print defects that arise from the use of a high-frequency screen. These regions are defined by the initial object map, which is translated from the page description language (PDL). However, the object-type information obtained from the PDL may be incorrect: some smooth areas may be labeled as raster, causing them to be halftoned with a high-frequency screen, rather than being labeled as vector, which would result in them being rendered with a low-frequency screen. To correct such misclassifications, we propose an object map correction algorithm that combines information from the incorrect object map with information obtained by segmenting the continuous-tone RGB rasterized page image. The rendered image can then be halftoned by the object-oriented halftoning approach, based on the corrected object map. Preliminary experimental results indicate the benefits of our algorithm, combined with the new imaging pipeline, in terms of correcting misclassification errors.

    In Chapter 3, we describe a study of image graininess. With the emergence of high-end digital printing technologies, it is of interest to analyze the nature and causes of image graininess in order to understand the factors that prevent high-end digital presses from achieving the same print quality as commercial offset presses. In particular, we want to understand how image graininess relates to the halftoning and marking technologies. This chapter provides three different approaches to understanding image graininess.

    First, we perform a Fourier-based analysis of regular and irregular periodic, clustered-dot halftone textures. With high-end digital printing technology, irregular screens can be considered, since they can better approximate the screen sets used for commercial offset presses; this is because the elements of the periodicity matrix of an irregular screen are rational numbers, rather than the integers required for a regular screen. The analytical results show that irregular halftone textures generate new frequency components near the spectrum origin, and these components are at low enough frequencies to be visible to a human viewer; regular halftone textures do not have these components. In addition, we provide a metric to measure the nonuniformity of a given halftone texture. The metric indicates that the nonuniformity of irregular halftone textures is higher than that of regular halftone textures. Furthermore, a method to visualize the nonuniformity of given halftone textures is described. The analysis shows that irregular halftone textures are grainier than regular halftone textures.

    Second, we analyze regular and irregular periodic, clustered-dot halftone textures by calculating three spatial statistics: the disparity between the lattice points generated by the periodicity matrix and the centroids of the dot clusters, the area of the dot clusters, and the compactness of the dot clusters. The disparity between the centroids of irregular dot clusters and the lattice points generated by the irregular screen is larger than the corresponding disparity for regular dot clusters and the regular screen. Irregular halftone textures have a higher variance in the histogram of dot-cluster area. In addition, the compactness measurement shows that irregular dot clusters are less compact than regular dot clusters, whereas a clustered-dot halftoning algorithm aims to produce dot clusters that are as compact as possible.

    Lastly, we examine the current marking technology by printing the same halftone pattern on different substrates, glossy and polyester media. The experimental results show that the current marking technology provides better print quality on glossy media than on polyester media.

    From these three approaches, we conclude that the current halftoning technology introduces image graininess in the spatial domain because of the non-integer elements in the periodicity matrix of the irregular screen and the finite addressability of the marking engine; that the geometric characteristics of irregular dot clusters are more irregular than those of regular dot clusters; and that the marking technology yields inconsistent print quality across substrates.
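    As an illustration of the kind of spatial statistics described for Chapter 3, the sketch below labels the dot clusters in a small binary halftone patch and reports each cluster's area, centroid, and an isoperimetric compactness score (4*pi*area/perimeter^2, roughly 1 for a disc). It is a hedged example on a toy patch, not the chapter's actual measurement code or data.

        # Illustrative only: label dot clusters in a toy binary halftone patch and
        # compute per-cluster area, centroid, and an isoperimetric compactness score.
        import numpy as np
        from skimage.measure import label, regionprops

        patch = np.zeros((32, 32), dtype=bool)
        patch[4:9, 4:9] = True       # a compact, nearly square cluster
        patch[18:20, 6:26] = True    # an elongated, less compact cluster

        labels = label(patch, connectivity=2)
        for region in regionprops(labels):
            compactness = 4 * np.pi * region.area / max(region.perimeter, 1) ** 2
            cy, cx = region.centroid
            print(f"cluster at ({cy:.1f}, {cx:.1f}): area={region.area}, "
                  f"compactness={compactness:.2f}")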

    Optimum Implementation of Compound Compression of a Computer Screen for Real-Time Transmission in Low Network Bandwidth Environments

    Remote working is becoming increasingly prevalent. A large part of remote working involves sharing computer screens between servers and clients. The image content presented when sharing computer screens consists of both natural, camera-captured image data and computer-generated graphics and text. The attributes of natural camera-captured image data differ greatly from those of computer-generated image data. An image containing a mixture of both is known as a compound image. The research presented in this thesis focuses on the challenge of constructing a compound compression strategy that applies the ‘best fit’ compression algorithm to the mixed content found in a compound image. The research also involves analysis and classification of the types of data a given compound image may contain. While researching optimal types of compression, consideration is given to the computational overhead of a given algorithm, because the research is being developed for real-time systems such as cloud computing services, where latency has a detrimental impact on the end-user experience. Previous and current state-of-the-art video codecs have been researched, along with many of the most recent publications from academia, in order to design and implement a novel, low-complexity compound compression algorithm suitable for real-time transmission. The compound compression algorithm will utilise a mixture of lossless and lossy compression algorithms, with parameters that can be used to control the performance of the algorithm. An objective image quality assessment is needed to determine whether the proposed algorithm can produce an acceptable-quality image after processing. A traditional metric, Peak Signal to Noise Ratio, will be used alongside the more modern Structural Similarity Index to define the quality of the decompressed image. Finally, the compression strategy will be tested on a set of generated compound images. Using open-source software, the same images will be compressed with the previous and current state-of-the-art video codecs to compare the three main metrics: compression ratio, computational complexity and objective image quality.
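    A common first step in compound compression of screen content, of the kind described above, is to classify each block as computer-generated (text/graphics) or natural image so that a lossless or lossy coder can be applied accordingly. The sketch below shows one simple colour-count heuristic for such a classification; the block size, threshold and classification rule are assumptions for illustration, not the classifier developed in this thesis.

        # Hypothetical block classifier for compound images: blocks with few
        # distinct colours are treated as computer-generated (text/graphics)
        # and would be routed to a lossless coder; the rest to a lossy coder.
        import numpy as np

        def classify_blocks(image, block=16, max_colours=32):
            """Return a boolean map: True = text/graphics block, False = natural block."""
            h, w = image.shape[:2]
            flags = np.zeros((h // block, w // block), dtype=bool)
            for by in range(flags.shape[0]):
                for bx in range(flags.shape[1]):
                    tile = image[by*block:(by+1)*block, bx*block:(bx+1)*block]
                    colours = np.unique(tile.reshape(-1, tile.shape[-1]), axis=0)
                    flags[by, bx] = len(colours) <= max_colours
            return flags

        # Synthetic compound image: left half flat "graphics" colours,
        # right half random "natural" content.
        img = np.zeros((64, 64, 3), dtype=np.uint8)
        img[:, 32:] = np.random.randint(0, 256, (64, 32, 3), dtype=np.uint8)
        print(classify_blocks(img))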

    Investigation of the effects of image compression on the geometric quality of digital photogrammetric imagery

    We are living in a decade where the use of digital images is becoming increasingly important. Photographs are now converted into digital form, and direct acquisition of digital images is becoming increasingly important as sensors and the associated electronics improve. Unlike images in analogue form, the digital representation of images allows visual information to be easily manipulated in useful ways. One practical problem of digital image representation is that it requires a very large number of bits; hence one encounters a fairly large volume of data in a digital production environment if images are stored uncompressed on disk. With the rapid advances in sensor technology and digital electronics, the number of bits grows even larger in softcopy photogrammetry, remote sensing and multimedia GIS. As a result, it is desirable to find efficient representations for digital images in order to reduce the memory required for storage, improve the data access rate from storage devices, and reduce the time required for transfer across communication channels. The component of digital image processing that deals with this problem is called image compression. Image compression is a necessity for the utilisation of large digital images in softcopy photogrammetry, remote sensing and multimedia GIS. Numerous image compression standards exist today with the common goals of reducing the number of bits needed to store images and of facilitating the interchange of compressed image data between various devices and applications. The JPEG image compression standard is one alternative for carrying out the image compression task. This standard was formed under the auspices of ISO and CCITT for the purpose of developing an international standard for the compression and decompression of continuous-tone, still-frame, monochrome and colour images. The JPEG standard algorithm falls into three general categories: the baseline sequential process, which provides a simple and efficient algorithm for most image coding applications; the extended DCT-based process, which allows the baseline system to satisfy a broader range of applications; and an independent lossless process for applications demanding that type of compression. This thesis experimentally investigates the geometric degradations resulting from lossy JPEG compression of photogrammetric imagery at various quality factors. The effects and the suitability of JPEG lossy image compression for industrial photogrammetric imagery are investigated, with examples drawn from the extraction of targets in close-range photogrammetric imagery. In the experiments, JPEG was used to compress and decompress a set of test images. The algorithm was tested on digital images containing various levels of entropy (a measure of the information content of an image) captured with different imaging capabilities. Residual data were obtained by taking the pixel-by-pixel difference between the original data and the reconstructed data. The root mean square (RMS) error of the residual was used as the quality measure to judge the images produced by JPEG (DCT-based) compression. Two techniques, TIFF (LZW) compression and JPEG (DCT-based) compression, were compared with respect to the compression ratios achieved. JPEG (DCT-based) yields better compression ratios and appears to be a good choice for image compression.
    Further in the investigation, it was found that, for grey-scale images, the best compression ratios were obtained with quality factors between 60 and 90 (i.e., compression ratios of about 1:10 to 1:20). At these quality factors the reconstructed data show virtually no degradation in visual or geometric quality for the application at hand. Recently, many fast and efficient image file formats have also been developed to store, organise and display images in an efficient way. Almost every image file format incorporates some kind of compression method to manage data within commonplace networks and storage devices. The current major file formats used in softcopy photogrammetry, remote sensing and multimedia GIS were also investigated. It was found that the choice of a particular image file format for a given application generally involves several interdependent considerations, including quality, flexibility, computation, storage and transmission. The suitability of a file format for a given purpose is best determined by knowing its original purpose. Some formats (e.g., TIFF, JPEG) are widely used and serve as exchange formats; others are adapted to the needs of particular applications or particular operating systems.
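    The experiment described above can be reproduced in outline with a few lines of code: compress a grey-scale image at several JPEG quality factors, then compute the compression ratio and the RMS error of the pixel-by-pixel residual. The sketch below is a minimal, hedged example using Pillow; the input file name and the quality factors are placeholders, not the thesis test set.

        # Minimal sketch: JPEG-compress a grey-scale image at several quality
        # factors and report compression ratio and RMS error of the residual.
        import io
        import numpy as np
        from PIL import Image

        original = Image.open("test_image.tif").convert("L")   # placeholder input
        ref = np.asarray(original, dtype=float)

        for quality in (30, 60, 75, 90):
            buf = io.BytesIO()
            original.save(buf, format="JPEG", quality=quality)
            rec = np.asarray(Image.open(buf), dtype=float)

            rms = np.sqrt(np.mean((ref - rec) ** 2))
            ratio = ref.size / buf.getbuffer().nbytes   # 8-bit original bytes / JPEG bytes
            print(f"quality {quality}: ratio {ratio:.1f}:1, RMS error {rms:.2f}")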

    Connected Attribute Filtering Based on Contour Smoothness


    Patch-based Denoising Algorithms for Single and Multi-view Images

    In general, all single and multi-view digital images are captured using sensors, and they are often contaminated with noise, an undesired random signal. Such noise can also be introduced during transmission or by lossy image compression. Reducing the noise and enhancing such images is among the fundamental digital image processing tasks, and improving the performance of image denoising methods would greatly benefit single- and multi-view image processing techniques, e.g. segmentation and disparity map computation. Patch-based denoising methods have recently emerged as the state-of-the-art approach for various additive noise levels. This thesis proposes two patch-based denoising methods, for single and multi-view images respectively. For single image denoising, a modification to the block-matching 3D (BM3D) algorithm is proposed: an adaptive collaborative thresholding filter, consisting of a classification map and a set of thresholding levels and operators, which is exploited when the collaborative hard-thresholding step is applied. Moreover, the collaborative Wiener filtering is improved by assigning greater weight to more similar patches. For the denoising of multi-view images, this thesis proposes algorithms that take a pair of noisy images captured from two different directions at the same time (stereoscopic images). The structural similarity, maximum difference, or singular value decomposition-based similarity metrics are used to identify the locations of similar search windows in the input images, and the non-local means algorithm is adapted to filter these noisy multi-view images. The performance of both methods has been evaluated quantitatively and qualitatively through a number of experiments using the peak signal-to-noise ratio and the mean structural similarity measure. Experimental results show that the proposed algorithm for single image denoising outperforms the original BM3D algorithm at various noise levels, and that the proposed algorithm for multi-view image denoising can effectively reduce noise and help estimate more accurate disparity maps at various noise levels.
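    The quantitative evaluation described above (PSNR and mean SSIM on images corrupted by additive noise) can be sketched as follows. This hedged example uses scikit-image's stock non-local means purely as a stand-in denoiser; it is not the thesis's modified BM3D or stereo algorithm, and the noise level and filter parameters are illustrative.

        # Illustrative evaluation: add Gaussian noise, denoise with a stock
        # non-local means filter, and report PSNR and mean SSIM.
        import numpy as np
        from skimage import data, img_as_float
        from skimage.restoration import denoise_nl_means, estimate_sigma
        from skimage.metrics import peak_signal_noise_ratio, structural_similarity

        clean = img_as_float(data.camera())
        noisy = np.clip(clean + np.random.normal(0.0, 0.08, clean.shape), 0.0, 1.0)

        sigma = np.mean(estimate_sigma(noisy))
        denoised = denoise_nl_means(noisy, h=1.15 * sigma, patch_size=5,
                                    patch_distance=6, fast_mode=True)

        for name, img in (("noisy", noisy), ("denoised", denoised)):
            psnr = peak_signal_noise_ratio(clean, img, data_range=1.0)
            ssim = structural_similarity(clean, img, data_range=1.0)
            print(f"{name}: PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}")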

    An intelligent system for the classification and selection of novel and efficient lossless image compression algorithms

    We are currently living in an era revolutionised by the development of smart phones and digital cameras. Most people use phones and cameras in every aspect of their lives. With this development comes a high level of competition between the technology companies developing these devices, each trying to enhance its products to meet new market demands. One of the most sought-after criteria of any smart phone or digital camera is the camera's resolution. Digital imaging and its applications are growing rapidly; as a result, image sizes are increasing, and with this increase comes the challenge of storing these large images and transferring them over networks. With the increase in image size, interest in image compression is increasing as well, in order to improve storage size and transfer time. In this study, the researcher proposes two new lossless image compression algorithms. Both focus on decreasing the image size by reducing the image bit-depth, using well-defined methods for reducing the correlation between the image intensities. The first proposed lossless image compression algorithm is called Column Subtraction Compression (CSC). It aims to decrease the image size without losing any image information, using a colour transformation method as a pre-processing phase, followed by the proposed column subtraction function to decrease the image size. The algorithm is specially designed for compressing natural images. The CSC algorithm was evaluated on colour images and compared against benchmark schemes obtained from (Khan et al., 2017). It achieved the best compression size among the existing methods, enhancing the average storage saving of the BBWCA, JPEG 2000 LS, KMTF-BWCA, HEVC and basic BWCA algorithms by 2.5%, 15.6%, 41.6%, 7.8% and 45.07% respectively. The CSC algorithm's simple implementation benefits the execution time and makes it one of the fastest algorithms, needing less than 0.5 seconds for compressing and decompressing natural images obtained from (Khan et al., 2017). The proposed algorithm needs only 19.36 seconds to compress and decompress all 10 images from the Kodak image set, while BWCA, KMTF-BWCA and BBWCA need 398.5 s, 429.24 s and 475.38 s respectively. Nevertheless, the CSC algorithm achieves a lower compression ratio when compressing low-resolution images, since it was designed for high-resolution images. To solve this issue, the researcher proposes the Low-Resolution Column Subtraction Compression (LRCSC) algorithm to improve the CSC compression ratio on low-resolution images. The LRCSC algorithm starts by using the CSC algorithm as a pre-processing phase, followed by the Huffman algorithm and Run-Length Encoding (RLE) to decrease the image size in a final compression phase. LRCSC enhanced the average storage saving of the CSC algorithm for raster map images, achieving a 13.68% better compression size. The LRCSC algorithm reduces the raster map image set size by 96% of the original size, but does not reach the best results when compared with PNG, GIF, BLiSE and BBWCA, whose storage savings are 97.42%, 98.33%, 98.92% and 98.93% respectively. The LRCSC algorithm improves the compression execution time while providing an acceptable compression ratio. Both proposed algorithms are effective with any image type, whether colour or greyscale.
    The proposed algorithms save a considerable amount of storage and dramatically decrease the execution time. Finally, to take full advantage of the two newly developed algorithms, a new system is developed that runs both algorithms on the same input image and then suggests the appropriate algorithm to be used for the decompression phase.
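    The core idea of subtracting columns to decorrelate pixel intensities before entropy coding can be illustrated with a generic, fully reversible column-differencing transform. The sketch below only illustrates that general idea; it is not the CSC or LRCSC algorithm, whose exact colour transformation and coding stages are defined in the thesis.

        # Generic column-wise differencing (mod 256) and its exact inverse.
        # The residual image usually has lower entropy on natural images, so a
        # Huffman or run-length coder applied afterwards produces a smaller file.
        import numpy as np

        def column_diff(img):
            """Keep the first column; store every other column as a difference (mod 256)."""
            out = img.astype(np.int16)
            out[:, 1:] = (out[:, 1:] - out[:, :-1]) % 256
            return out.astype(np.uint8)

        def column_undiff(res):
            """Exact inverse: cumulative sum across columns (mod 256)."""
            return (np.cumsum(res.astype(np.int64), axis=1) % 256).astype(np.uint8)

        img = np.random.randint(0, 256, (4, 6), dtype=np.uint8)
        assert np.array_equal(column_undiff(column_diff(img)), img)   # lossless round trip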