
    Discovering Regularity in Point Clouds of Urban Scenes

    Despite the apparent chaos of the urban environment, cities are actually replete with regularity. From the grid of streets laid out over the earth, to the lattice of windows thrown up into the sky, periodic regularity abounds in the urban scene. Just as salient, though less uniform, are the self-similar branching patterns of trees and vegetation that line streets and fill parks. We propose novel methods for discovering these regularities in 3D range scans acquired by a time-of-flight laser sensor. The applications of this regularity information are broad, and we present two original algorithms. The first exploits the efficiency of the Fourier transform for the real-time detection of periodicity in building facades. Periodic regularity is discovered online by doing a plane sweep across the scene and analyzing the frequency space of each column in the sweep. The simplicity and online nature of this algorithm allow it to be embedded in scanner hardware, making periodicity detection a built-in feature of future 3D cameras. We demonstrate the usefulness of periodicity in view registration, compression, segmentation, and facade reconstruction. The second algorithm leverages the hierarchical decomposition and locality in space of the wavelet transform to find stochastic parameters for procedural models that succinctly describe vegetation. These procedural models facilitate the generation of virtual worlds for architecture, gaming, and augmented reality. The self-similarity of vegetation can be inferred using multi-resolution analysis to discover the underlying branching patterns. We present a unified framework of these tools, enabling the modeling, transmission, and compression of high-resolution, accurate, and immersive 3D images.
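
    The column-wise frequency test described above is simple enough to sketch. The fragment below, a minimal Python illustration rather than the authors' implementation, scores one sweep column by how much of its spectral energy concentrates in a single non-DC FFT bin; the stand-in range image, the scoring rule, and the 0.3 threshold are all assumptions.

        import numpy as np

        def column_periodicity(depth_column):
            """Score how periodic one sweep column is from its FFT magnitude.

            A strongly periodic column (e.g., a row of windows) concentrates
            energy in one non-DC frequency bin; an irregular column spreads
            energy across many bins.
            """
            signal = depth_column - depth_column.mean()        # remove the DC offset
            spectrum = np.abs(np.fft.rfft(signal))
            spectrum[0] = 0.0                                  # ignore residual DC
            peak = int(spectrum.argmax())
            score = spectrum[peak] / (spectrum.sum() + 1e-12)  # peak energy fraction
            period = len(depth_column) / peak if peak else float("inf")
            return score, period

        # Sweep across the scene: test each column of the range image.
        range_image = np.random.rand(128, 256)                 # stand-in for a ToF scan
        periodic_cols = [j for j in range(range_image.shape[1])
                         if column_periodicity(range_image[:, j])[0] > 0.3]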

    Map online system using internet-based image catalogue

    Digital maps carry geodata, such as coordinates, that is essential to topographic and thematic mapping; this geodata is especially meaningful in the military field. Because the maps embed this information, the image files are large, and larger files demand more storage and cause longer loading times. These conditions make raw digital maps unsuitable for an image catalogue approach in an internet environment. With compression, the image size can be reduced while the image quality is preserved with little change. This report focuses on an image compression technique based on wavelet technology, which it argues outperforms other current image compression techniques. The compressed images are applied to a system called Map Online, which uses an internet-based image catalogue approach. The system allows users to buy maps online, download the maps they have purchased, and search for maps using several meaningful keywords. The system is expected to be used by Jabatan Ukur dan Pemetaan Malaysia (JUPEM) to help realize the organization's vision.
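
    As a rough illustration of the wavelet compression the report relies on, the sketch below keeps only the largest transform coefficients before reconstructing; the PyWavelets library, the biorthogonal wavelet, and the keep fraction are assumptions of this sketch, not choices stated in the report.

        import numpy as np
        import pywt  # PyWavelets; an assumption -- the report does not name a library

        def compress_map(image, wavelet="bior4.4", level=3, keep=0.05):
            """Keep only the largest `keep` fraction of wavelet coefficients,
            zero the rest, and reconstruct a visually similar image whose
            sparse transform is much cheaper to store and transmit."""
            coeffs = pywt.wavedec2(image, wavelet, level=level)
            arr, slices = pywt.coeffs_to_array(coeffs)
            cutoff = np.quantile(np.abs(arr), 1.0 - keep)   # magnitude threshold
            arr[np.abs(arr) < cutoff] = 0.0                 # sparsify the transform
            coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
            return pywt.waverec2(coeffs, wavelet)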

    A State Table SPIHT Approach for Modified Curvelet-based Medical Image Compression

    Medical imaging plays a significant role in clinical practice, but storing and transferring large volumes of images can be complex and inefficient. This paper presents a new compression technique that combines the fast discrete curvelet transform (FDCvT) with a state table set partitioning in hierarchical trees (STS) encoding scheme. The curvelet transform is an extension of the wavelet transform that represents data by scale and position. The medical image is first decomposed with the FDCvT algorithm, which produces symmetrical values for the detail coefficients; these coefficients are modified to improve the efficiency of the algorithm. The curvelet coefficients are then encoded using STS and differential pulse-code modulation (DPCM): the coarse coefficients, which contain the greatest amount of energy, are encoded with DPCM, while the finest and modified detail coefficients are encoded with STS. A variety of medical modalities, including computed tomography (CT), positron emission tomography (PET), and magnetic resonance imaging (MRI), are used to verify the performance of the proposed technique. Various quality metrics, including peak signal-to-noise ratio (PSNR), compression ratio (CR), and structural similarity index (SSIM), are used to evaluate the compression results, and the encoding time (ET) and decoding time (DT) are also measured. The experimental results show that the PET image obtains the highest PSNR and CR values, the CT image yields a high-quality reconstruction with an SSIM of 0.96 and the fastest ET of 0.13 seconds, and the MRI image has the shortest DT at 0.23 seconds.
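
    The DPCM step applied to the coarse band is compact enough to show directly; this toy version, which differences the raveled coarse coefficients, sketches the general technique rather than the paper's exact implementation.

        import numpy as np

        def dpcm_encode(coarse):
            """DPCM: store the first value, then successive differences.
            Coarse curvelet coefficients vary slowly, so the residuals are
            small and cheap to entropy-code."""
            flat = np.asarray(coarse, dtype=float).ravel()
            return np.diff(flat, prepend=0.0)   # first residual equals the first value

        def dpcm_decode(residuals):
            return np.cumsum(residuals)         # running sum inverts the differencing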

    Digital Image Access & Retrieval

    The 33rd Annual Clinic on Library Applications of Data Processing, held at the University of Illinois at Urbana-Champaign in March 1996, addressed the theme of "Digital Image Access & Retrieval." The papers from this conference cover a wide range of topics concerning digital imaging technology for visual resource collections, in three general areas: (1) systems, planning, and implementation; (2) automatic and semi-automatic indexing; and (3) preservation, with the bulk of the conference focusing on indexing and retrieval.

    Detection and classification of non-stationary signals using sparse representations in adaptive dictionaries

    Automatic classification of non-stationary radio frequency (RF) signals is of particular interest in persistent surveillance and remote sensing applications. Such signals are often acquired in noisy, cluttered environments, and may be characterized by complex or unknown analytical models, making feature extraction and classification difficult. This thesis proposes an adaptive classification approach for poorly characterized targets and backgrounds based on sparse representations in non-analytical dictionaries learned from data. Conventional analytical orthogonal dictionaries, e.g., Short-Time Fourier and Wavelet Transforms, can be suboptimal for classification of non-stationary signals, as they provide a rigid tiling of the time-frequency space and are not specifically designed for a particular signal class. They generally do not lead to sparse decompositions (i.e., with very few non-zero coefficients), and their use in classification requires separate feature-selection algorithms. Pursuit-type decompositions in analytical overcomplete (non-orthogonal) dictionaries yield sparse representations by design, and work well for signals that are similar to the dictionary elements. The pursuit search, however, has a high computational cost, and the method can perform poorly in the presence of realistic noise and clutter. One such overcomplete analytical dictionary method is also analyzed in this thesis for comparative purposes. The main thrust of the thesis is learning discriminative RF dictionaries directly from data, without relying on analytical constraints or additional knowledge about the signal characteristics. A pursuit search is used over the learned dictionaries to generate sparse classification features in order to identify time windows that contain a target pulse. Two state-of-the-art dictionary learning methods are compared, the K-SVD algorithm and Hebbian learning, in terms of their classification performance as a function of dictionary training parameters. Additionally, a novel hybrid dictionary algorithm is introduced, demonstrating better performance and higher robustness to noise. The issue of dictionary dimensionality is explored, and this thesis demonstrates that undercomplete learned dictionaries are suitable for non-stationary RF classification. Results on simulated data sets with varying background clutter and noise levels are presented. Lastly, unsupervised classification with undercomplete learned dictionaries is also demonstrated in satellite imagery analysis.
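
    A minimal sketch of the learn-then-pursue pipeline, with scikit-learn's online dictionary learner and OMP sparse codes standing in for the thesis's K-SVD and Hebbian learners; the data shapes, atom count, and linear classifier are placeholders.

        import numpy as np
        from sklearn.decomposition import MiniBatchDictionaryLearning
        from sklearn.svm import LinearSVC

        # Stand-in data: RF time windows and labels (1 = window contains a pulse).
        windows = np.random.randn(500, 64)
        labels = np.random.randint(0, 2, 500)

        # Learn an undercomplete dictionary (32 atoms for 64-sample windows)
        # and produce sparse codes via a pursuit (OMP) as classification features.
        dico = MiniBatchDictionaryLearning(
            n_components=32,
            transform_algorithm="omp",
            transform_n_nonzero_coefs=5,       # enforce very few non-zero coefficients
            random_state=0,
        )
        codes = dico.fit(windows).transform(windows)
        clf = LinearSVC().fit(codes, labels)   # classify windows by their sparse codes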

    A Flexible and Scalable Authentication Scheme for JPEG 2000 Image Codestreams

    JPEG2000 is an emerging standard for still image compression and is becoming the solution of choice for many digital imaging fields and applications. An important aspect of JPEG2000 is its "compress once, decompress many ways" property [1]: it allows extraction of various sub-images (e.g., images with various resolutions, pixel fidelities, tiles, and components) all from a single compressed image codestream. In this paper, we present a flexible and scalable authentication scheme for JPEG2000 images based on the Merkle hash tree and digital signatures. Our scheme is fully compatible with JPEG2000 and possesses a "sign once, verify many ways" property: it allows users to verify the authenticity and integrity of different sub-images extracted from a single compressed codestream protected with a single digital signature.
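
    The "sign once, verify many ways" property rests on a standard Merkle hash tree. The sketch below shows the root computation that would be signed once; the segmentation into leaves and the use of SHA-256 are assumptions of this sketch, not the paper's exact construction.

        import hashlib

        def sha(data: bytes) -> bytes:
            return hashlib.sha256(data).digest()

        def merkle_root(segments):
            """Hash codestream segments into a Merkle tree and return the root.
            A verifier of any sub-image recomputes its own leaf hashes and uses
            the sibling hashes on the path to re-derive this signed root."""
            level = [sha(s) for s in segments]
            while len(level) > 1:
                if len(level) % 2:                       # duplicate last node if odd
                    level.append(level[-1])
                level = [sha(level[i] + level[i + 1])
                         for i in range(0, len(level), 2)]
            return level[0]

        # e.g., one leaf per packet or quality layer of the codestream
        root = merkle_root([b"layer0", b"layer1", b"layer2", b"layer3"])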

    High ratio wavelet video compression through real-time rate-distortion estimation.

    Thesis (M.Sc.Eng.), University of Natal, Durban, 2003. The success of the wavelet transform in the compression of still images has prompted an expanding effort to apply this transform to the compression of video. Most existing video compression methods incorporate techniques from still image compression, such techniques being abundant, well defined, and successful. This dissertation commences with a thorough review and comparison of wavelet still image compression techniques, followed by an examination of wavelet video compression techniques. Since the most effective current video compression systems are DCT-based, a comparison between these and the wavelet techniques is also given. Based on this review, the dissertation then presents a new, low-complexity, wavelet video compression scheme. Noting from a complexity study that the generation of temporally decorrelated residual frames represents a significant computational burden, this scheme uses the simplest such technique: difference frames. In the case of local motion, these difference frames exhibit strong spatial clustering of significant coefficients. A simple spatial syntax is created by splitting the difference frame into tiles, so that advantage of the spatial clustering may be taken by adaptive bit allocation between the tiles. This is the central idea of the method. To minimize the total distortion of the frame, the scheme uses a new ρ-domain rate-distortion estimation scheme with global numerical optimization to predict the optimal distribution of bits between tiles. Each tile is then independently wavelet transformed and compressed using the SPIHT technique. Throughout the design process computational efficiency was the design imperative, leading to a real-time, software-only video compression scheme. The scheme is finally compared to both the current video compression standards and the leading wavelet schemes from the literature in terms of computational complexity and visual quality. For local-motion scenes the proposed algorithm executes approximately an order of magnitude faster than these methods and produces output of similar quality. The algorithm is found to be suitable for implementation in mobile and embedded devices due to its moderate memory and computational requirements.
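
    The central idea, difference frames plus adaptive bit allocation across tiles, can be sketched as follows; the energy-proportional split is a simple stand-in for the thesis's ρ-domain rate-distortion optimization, and the frame data, grid size, and bit budget are placeholders.

        import numpy as np

        def allocate_bits(diff_frame, grid=4, budget_bits=200_000):
            """Split a difference frame into a grid of tiles and share the bit
            budget in proportion to each tile's energy, so tiles with strong
            local motion receive more bits before SPIHT coding."""
            h, w = diff_frame.shape
            th, tw = h // grid, w // grid
            energy = np.array([np.sum(diff_frame[r*th:(r+1)*th, c*tw:(c+1)*tw] ** 2)
                               for r in range(grid) for c in range(grid)])
            return (budget_bits * energy / energy.sum()).astype(int)

        prev = np.random.rand(256, 256)              # previous frame (stand-in)
        curr = np.random.rand(256, 256)              # current frame (stand-in)
        bits_per_tile = allocate_bits(curr - prev)   # then wavelet + SPIHT each tile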

    A flexible hardware architecture for 2-D discrete wavelet transform: design and FPGA implementation

    The Discrete Wavelet Transform (DWT) is a powerful signal processing tool that has recently gained widespread acceptance in the field of digital image processing. The multiresolution analysis provided by the DWT addresses the shortcomings of the Fourier Transform and its derivatives. The DWT has proven useful in the area of image compression, where it replaces the Discrete Cosine Transform (DCT) in the new JPEG2000 and MPEG-4 image and video compression standards. The Cohen-Daubechies-Feauveau (CDF) 5/3 and CDF 9/7 DWTs are used for reversible lossless and irreversible lossy compression encoders in the JPEG2000 standard, respectively. The design and implementation of a flexible hardware architecture for the 2-D DWT is presented in this thesis. This architecture can be configured to perform both the forward and inverse DWT for any DWT family, using fixed-point arithmetic and no auxiliary memory. The Lifting Scheme method is used to perform the DWT instead of the less efficient convolution-based methods. The DWT core is modeled using MATLAB and highly parameterized VHDL, and the VHDL model is synthesized to a Xilinx FPGA to prove hardware functionality. The CDF 5/3 and CDF 9/7 versions of the DWT are both modeled and used as comparisons throughout this thesis. The DWT core is used in conjunction with a very simple image denoising module to demonstrate its potential for image processing techniques. The CDF 5/3 hardware produces results identical to its theoretical MATLAB model, while the fixed-point CDF 9/7 deviates only slightly from its floating-point MATLAB model, with a ~59 dB PSNR deviation for nine levels of DWT decomposition. The execution time for performing both DWTs is nearly identical, at ~14 clock cycles per image pixel for one level of DWT decomposition. The hardware area generated for the CDF 5/3 is ~16,000 gates, using only 5% of the Xilinx FPGA hardware area, with a 2.185 MHz maximum clock speed and 24 mW power consumption. The simple wavelet image denoising techniques resulted in cleaned images of up to ~27 dB PSNR.
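
    The reversible CDF 5/3 lifting steps used in JPEG2000 are compact enough to show in full. This one-dimensional, even-length Python sketch mirrors the integer predict/update structure such hardware implements; it is a software illustration under those assumptions, not the thesis's VHDL.

        import numpy as np

        def cdf53_forward(x):
            """One level of the reversible CDF 5/3 DWT via lifting.
            Predict: d[n] = x[2n+1] - floor((x[2n] + x[2n+2]) / 2)
            Update:  s[n] = x[2n]   + floor((d[n-1] + d[n] + 2) / 4)
            Symmetric extension at the borders; even-length input assumed."""
            x = np.asarray(x, dtype=np.int64)
            even, odd = x[0::2], x[1::2]
            right = np.append(even[1:], even[-1])   # mirror for the last odd sample
            d = odd - ((even + right) >> 1)         # detail (high-pass) band
            left = np.insert(d[:-1], 0, d[0])       # mirror for the first even sample
            s = even + ((left + d + 2) >> 2)        # approximation (low-pass) band
            return s, d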

    Doctor of Philosophy

    Balancing the trade-off between the spatial and temporal quality of interactive computer graphics imagery is one of the fundamental design challenges in the construction of rendering systems. Inexpensive interactive rendering hardware may deliver a high level of temporal performance if the level of spatial image quality is sufficiently constrained. In these cases, the spatial fidelity level is an independent parameter of the system and temporal performance is a dependent variable; the spatial quality parameter is selected by the designer based on the anticipated graphics workload. Interactive ray tracing is one example: the algorithm is often selected for its ability to deliver a high level of spatial fidelity, and the relatively lower level of temporal performance is readily accepted. This dissertation proposes an algorithm to perform fine-grained adjustments to the trade-off between the spatial quality of images produced by an interactive renderer and the temporal performance or quality of the rendered image sequence. The approach first determines the minimum amount of sampling work necessary to achieve a certain fidelity level, and then allows the surplus capacity to be directed towards spatial or temporal fidelity improvement. The algorithm consists of an efficient parallel spatial and temporal adaptive rendering mechanism and a control optimization problem that adjusts the sampling rate based on a characterization of the rendered imagery and constraints on the capacity of the rendering system.
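
    A toy version of the sampling-rate control loop described above might look like the following; the proportional rule, parameter names, and bounds are illustrative assumptions, not the dissertation's optimizer.

        def adjust_sampling(spp, frame_ms, target_ms=16.7, min_spp=1, max_spp=64):
            """If the last frame beat the time budget, spend the surplus capacity
            on more samples per pixel (spatial fidelity); if it overran, back off
            so temporal performance recovers."""
            headroom = target_ms / frame_ms       # > 1 means surplus capacity
            spp = int(round(spp * headroom))      # proportional adjustment
            return max(min_spp, min(max_spp, spp))

        spp = 4
        for frame_ms in (12.0, 9.0, 20.0):        # measured frame times (stand-ins)
            spp = adjust_sampling(spp, frame_ms)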