
    Cost-effective Hardware Design of a SPIHT Compression Algorithm

    Master's thesis -- Seoul National University Graduate School, Department of Electrical and Computer Engineering, February 2015. Jae Ha Kim. Set Partitioning In Hierarchical Trees (SPIHT) is one of the most popular embedded coding algorithms for wavelet-coded images. It allows progressive transmission of information and gives high coding efficiency. In addition, entropy coding of the bit stream with an arithmetic coder can be omitted with only a small loss in performance, which allows a cheaper and faster hardware design. In this dissertation, a cost-effective design of a SPIHT-based algorithm is proposed. In this algorithm, an image is partitioned into 1x64 blocks, each of which is transformed by the DWT to generate wavelet coefficients. The wavelet coefficients are then coded by SPIHT to generate the bit stream. Because the data structures of the DWT and SPIHT do not match, large buffers are required. To reduce the buffers, a new data structure for the wavelet coefficients and a partitioned SPIHT are proposed: a wavelet block is partitioned into small sub-blocks, each of which is compressed independently. To minimize the distortion caused by sub-block-based compression, a bit-allocation scheme is proposed. The proposed design is implemented in both software and hardware. Experimental results show that it reduces the buffer size while minimizing the degradation of the rate-distortion performance, and that it outperforms previous designs in hardware cost. Contents: I. Introduction; II. Basic Architecture of the Compression Algorithm; III. A Partitioned NLS Algorithm; IV. Adjustment of the Target Bit Lengths for Individual Sub-blocks; V. Experimental Results; VI. Conclusion; References; Abstract.
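
    As a rough, hypothetical illustration of the sub-block bit-allocation idea described above, the sketch below shares a fixed bit budget among the sub-blocks of a 1x64 wavelet block in proportion to their energy. The proportional rule, the function name allocate_bits, and all parameter values are assumptions for illustration; the thesis's actual allocation scheme and the SPIHT coder itself are not reproduced here.

        # Hypothetical sketch: energy-proportional bit allocation for sub-blocks
        # of a 1x64 wavelet coefficient block (not the thesis's exact scheme).
        import numpy as np

        def allocate_bits(coeffs, num_subblocks=4, total_bits=256):
            """Split a 1x64 coefficient block into sub-blocks and share a bit
            budget roughly in proportion to each sub-block's energy."""
            subblocks = np.array_split(np.asarray(coeffs, dtype=float), num_subblocks)
            energies = np.array([np.sum(sb ** 2) for sb in subblocks])
            if energies.sum() == 0:
                shares = np.full(num_subblocks, 1.0 / num_subblocks)
            else:
                shares = energies / energies.sum()
            budgets = np.floor(shares * total_bits).astype(int)
            budgets[np.argmax(budgets)] += total_bits - budgets.sum()  # hand leftover bits to the largest share
            return list(zip(subblocks, budgets))

        # Example: a synthetic 1x64 block whose energy is concentrated in the low band.
        block = np.concatenate([np.random.randn(16) * 8, np.random.randn(48)])
        for i, (sb, bits) in enumerate(allocate_bits(block)):
            print(f"sub-block {i}: {bits} bits")

    Each sub-block receives its own budget and can then be coded independently, which is what allows the encoder to work with small buffers instead of holding an entire wavelet block.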

    Wildfire Monitoring Based on Energy Efficient Clustering Approach for FANETS

    Forest fires are a significant threat to the stability of the ecological system. Several attempts have been made to detect forest fires using a variety of approaches, including optical fire sensors and satellite-based technologies, all of which have been largely unsuccessful. Today, research on flying ad hoc networks (FANETs) is a thriving field, and such networks can be applied successfully to this problem. This paper describes a unique clustering approach that identifies the presence of a fire zone in a forest and transfers all sensed data to a base station as soon as feasible via wireless communication, so that the fire department can take the steps required to prevent the spread of the fire. This study proposes an efficient clustering approach to deal with routing and energy challenges and extend the lifetime of unmanned aerial vehicles (UAVs) in the event of a forest fire. The restricted energy and high mobility of FANET nodes directly impact their flight duration and routing; as a result, it is vital to enhance the lifetime of wireless sensor networks (WSNs) to maintain high system availability. Our proposed algorithm, EE-SS, regulates the energy usage of nodes while taking into account the features of a disaster region and other factors. For firefighting, sensor nodes are placed throughout the forest zone to collect the data points essential for identifying forest fires, and they are divided into distinct clusters. All of the sensor nodes in a cluster communicate their packets to the base station continually through the cluster head. When FANET nodes communicate with one another, their transmission range is constantly adjusted to meet their operating requirements. This paper examines the existing clustering techniques for forest fire detection restricted to wireless sensor networks and their limitations. Our newly designed algorithm chooses the most optimal cluster heads (CHs) based on their fitness, reducing the routing overhead and increasing the system's efficiency. Simulation results of the proposed method are compared with existing approaches such as LEACH, LEACH-C, PSO-HAS, and SEED. The evaluation is carried out with respect to overall energy usage, residual energy, the count of live nodes, the network lifetime, and the time it takes to build a cluster. As a result, our proposed EE-SS algorithm outperforms all of the considered state-of-the-art algorithms.
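
    To make the cluster-head selection step concrete, here is a minimal sketch in which each node's fitness combines residual energy and distance to the base station, and the highest-fitness nodes become cluster heads. The weights and the exact fitness form are assumptions for illustration only and do not reproduce the EE-SS fitness function evaluated in the paper.

        # Illustrative fitness-based cluster-head selection; the weights and
        # fitness form are assumptions, not the EE-SS formulation.
        import math
        import random

        def fitness(node, base_station, w_energy=0.7, w_dist=0.3, max_dist=1000.0):
            dist = math.dist(node["pos"], base_station)
            return w_energy * node["energy"] + w_dist * (1.0 - dist / max_dist)

        def select_cluster_heads(nodes, base_station, num_heads=3):
            """Rank nodes by fitness and pick the top ones as cluster heads."""
            ranked = sorted(nodes, key=lambda n: fitness(n, base_station), reverse=True)
            return ranked[:num_heads]

        nodes = [{"id": i,
                  "energy": random.random(),  # normalised residual energy
                  "pos": (random.uniform(0, 1000), random.uniform(0, 1000))}
                 for i in range(20)]
        for ch in select_cluster_heads(nodes, base_station=(0.0, 0.0)):
            print(f"cluster head {ch['id']} (energy={ch['energy']:.2f})")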

    Selection of Wavelet Basis Function for Image Compression: A Review

    Wavelets are being suggested as a platform for various tasks in image processing. The advantage of wavelets lies in their time-frequency resolution. The availability of different basis functions, in the form of different wavelets, has made wavelet analysis a destination for many applications. The performance of a particular technique depends on the wavelet coefficients obtained after applying the wavelet transform, and the coefficients for a specific input signal depend on the basis function used in the transform. Toward this end, this paper presents different basis functions and their features. Since image compression has relied on the wavelet transform to a large extent for the past few decades, the basis function for image compression should be selected with care. The factors influencing the performance of image compression are also presented.
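
    As one rough, hedged way to compare candidate bases for compression, the sketch below measures energy compaction, i.e. the share of signal energy captured by the largest 5% of wavelet coefficients, for a few wavelets. It assumes the PyWavelets package is installed; the 5% threshold and the random stand-in image are arbitrary illustrative choices, not the methodology of this review.

        # Compare candidate wavelet bases by energy compaction of the
        # transform coefficients (requires PyWavelets: pip install PyWavelets).
        import numpy as np
        import pywt

        def energy_compaction(image, wavelet, level=3, keep=0.05):
            """Fraction of total energy held by the largest `keep` share of coefficients."""
            coeffs = pywt.wavedec2(image, wavelet, level=level)
            arr, _ = pywt.coeffs_to_array(coeffs)
            flat = np.sort(np.abs(arr).ravel())[::-1]
            k = max(1, int(keep * flat.size))
            return float(np.sum(flat[:k] ** 2) / np.sum(flat ** 2))

        image = np.random.rand(128, 128)  # stand-in for a real test image
        for name in ["haar", "db4", "bior4.4", "sym8"]:
            print(f"{name}: top-5% coefficients hold {energy_compaction(image, name):.3f} of the energy")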

    Dimensionality reduction and sparse representations in computer vision

    The proliferation of camera-equipped devices, such as netbooks, smartphones and game stations, has led to a significant increase in the production of visual content. This visual information could be used for understanding the environment and offering a natural interface between users and their surroundings. However, the massive amounts of data and the high computational cost associated with them encumber the transfer of sophisticated vision algorithms to real-life systems, especially ones that exhibit resource limitations such as restrictions in available memory, processing power and bandwidth. One approach to tackling these issues is to generate compact and descriptive representations of image data by exploiting inherent redundancies. We propose the investigation of dimensionality reduction and sparse representations in order to accomplish this task. In dimensionality reduction, the aim is to reduce the dimensions of the space where image data reside in order to allow resource-constrained systems to handle them and, ideally, provide a more insightful description. This goal is achieved by exploiting the inherent redundancies that many classes of images exhibit, such as faces under different illumination conditions and objects seen from different viewpoints. We explore the description of natural images by low-dimensional non-linear models called image manifolds and investigate the performance of computer vision tasks such as recognition and classification using these low-dimensional models. In addition to dimensionality reduction, we study a novel approach to representing images as sparse linear combinations of dictionary examples. We investigate how sparse image representations can be used for a variety of tasks, including low-level image modeling and higher-level semantic information extraction. Using tools from dimensionality reduction and sparse representation, we propose the application of these methods in three hierarchical image layers, namely low-level features, mid-level structures and high-level attributes. Low-level features are image descriptors that can be extracted directly from the raw image pixels and include pixel intensities, histograms, and gradients. In the first part of this work, we explore how various techniques in dimensionality reduction, ranging from traditional image compression to the recently proposed Random Projections method, affect the performance of computer vision algorithms such as face detection and face recognition. In addition, we discuss a method that is able to increase the spatial resolution of a single image, without using any training examples, according to the sparse representations framework. In the second part, we explore mid-level structures, including image manifolds and sparse models, which are produced by abstracting information from low-level features and offer compact modeling of high-dimensional data. We propose novel techniques for generating more descriptive image representations and investigate their application in face recognition and object tracking. In the third part of this work, we propose the investigation of a novel framework for representing the semantic content of images. This framework employs high-level semantic attributes that aim to bridge the gap between the visual information of an image and its textual description by utilizing low-level features and mid-level structures. This innovative paradigm offers revolutionary possibilities, including recognizing the category of an object from purely textual information without providing any explicit visual example.
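
    A minimal sketch of the Random Projections idea mentioned above, under illustrative assumptions: high-dimensional image descriptors are multiplied by a Gaussian random matrix to obtain a much lower-dimensional representation whose pairwise distances are roughly preserved (the Johnson-Lindenstrauss property). The dimensions chosen here are arbitrary and are not the settings used in this work.

        # Random projection of high-dimensional descriptors to a lower dimension,
        # checking that pairwise distances are approximately preserved.
        import numpy as np

        rng = np.random.default_rng(0)
        n, d, k = 100, 4096, 256          # 100 descriptors, 4096-D (e.g. flattened 64x64 patches), reduced to 256-D
        X = rng.standard_normal((n, d))   # stand-in for real image features

        R = rng.standard_normal((d, k)) / np.sqrt(k)  # random projection matrix
        Y = X @ R                                     # reduced representation

        for i, j in [(0, 1), (2, 3), (4, 5)]:
            before = np.linalg.norm(X[i] - X[j])
            after = np.linalg.norm(Y[i] - Y[j])
            print(f"pair ({i},{j}): original distance {before:.1f}, projected {after:.1f}")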

    Development of Some Efficient Lossless and Lossy Hybrid Image Compression Schemes

    Digital imaging generates a large amount of data which needs to be compressed, without loss of relevant information, to economize storage space and allow speedy data transfer. Though both storage and transmission medium capacities have been continuously increasing over the last two decades, they do not match the present requirements. Many lossless and lossy image compression schemes exist for compressing images in the spatial domain and in the transform domain. Employing more than one traditional image compression algorithm results in hybrid image compression techniques. Based on the existing schemes, novel hybrid image compression schemes are developed in this doctoral research work to compress images effectively while maintaining their quality.

    Wavelet-based image compression for mobile applications.

    The transmission of digital colour images is rapidly becoming popular on mobile telephones, Personal Digital Assistant (PDA) technology and other wireless image services. However, transmitting digital colour images via mobile devices is badly affected by low air bandwidth. Advances in communication channels (for example, 3G networks) go some way towards addressing this problem, but the rapid increase in traffic and the demand for ever better quality images mean that effective data compression techniques are essential for transmitting and storing digital images. The main objective of this thesis is to offer a novel image compression technique that can help to overcome the bandwidth problem. This thesis has investigated and implemented three different wavelet-based compression schemes, with a focus on a compression method suitable for mobile applications. The first algorithm is a dual wavelet compression algorithm, which is a modified conventional wavelet compression method. The algorithm uses different wavelet filters to decompose the luminance and chrominance components separately; in addition, different levels of decomposition can be applied to each component. The second algorithm is a segmented wavelet-based scheme, which segments an image into its smooth and non-smooth parts and then applies different wavelet filters to the segmented parts. Finally, the third algorithm is the hybrid wavelet-based compression system (HWCS), where the subject of interest is cropped and then compressed using a wavelet-based method. The background detail is reduced by averaging, and the background is sent separately from the compressed subject of interest. The final image is reconstructed by replacing the averaged background pixels with the compressed cropped image. For each algorithm, the experimental results presented in this thesis clearly demonstrate that the encoder output can be effectively reduced while maintaining acceptable visual image quality, particularly when compared with a conventional wavelet-based compression scheme.
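
    A toy sketch of the HWCS reconstruction step described above, assuming for illustration that the subject of interest is a fixed rectangular crop and that the background is reduced by simple block averaging; the thesis compresses the crop with a wavelet coder, which is omitted here so the paste-back step stays visible.

        # Toy HWCS-style reconstruction: coarse (averaged) background plus a
        # full-detail region of interest pasted back in. ROI coordinates and
        # block size are arbitrary illustrative choices.
        import numpy as np

        def average_background(image, block=8):
            """Replace each block x block tile with its mean value."""
            out = image.astype(float).copy()
            h, w = image.shape
            for y in range(0, h, block):
                for x in range(0, w, block):
                    out[y:y + block, x:x + block] = image[y:y + block, x:x + block].mean()
            return out

        image = np.random.rand(64, 64)                  # stand-in for a real frame
        roi = (slice(16, 48), slice(16, 48))            # assumed crop of the subject of interest
        background = average_background(image)          # coarse background, sent separately
        reconstructed = background.copy()
        reconstructed[roi] = image[roi]                 # replace averaged pixels with the (compressed) crop
        print("background-only error:", np.abs(background - image).mean())
        print("after ROI paste-back: ", np.abs(reconstructed - image).mean())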

    Remote Sensing Data Compression

    A huge amount of data is acquired nowadays by different remote sensing systems installed on satellites, aircraft, and UAVs. The acquired data then have to be transferred to image processing centres, stored, and/or delivered to customers. In restricted scenarios, data compression is strongly desired or even necessary. A wide diversity of coding methods can be used, depending on the requirements and their priority. In addition, the types and properties of images differ a lot, so practical implementation aspects have to be taken into account. The Special Issue paper collection taken as the basis of this book touches on all of the aforementioned items to some degree, giving the reader an opportunity to learn about recent developments and research directions in the field of image compression. In particular, lossless and near-lossless compression of multi- and hyperspectral images remains a current topic, since such images constitute data arrays of extremely large size with rich information that can be retrieved from them for various applications. Another important aspect is the impact of lossy compression on image classification and segmentation, where a reasonable compromise between the characteristics of compression and the final tasks of data processing has to be achieved. The problems of data transmission from UAV-based acquisition platforms, as well as the use of FPGAs and neural networks, have also become very important. Finally, attempts to apply compressive sensing approaches in remote sensing image processing with positive outcomes are observed. We hope that readers will find our book useful and interesting.

    Impact of Feature Representation on Remote Sensing Image Retrieval

    Remote sensing images are acquired using special platforms and sensors and are classified as aerial, multispectral, and hyperspectral images. Multispectral and hyperspectral images are represented by large spectral vectors compared with normal Red, Green, Blue (RGB) images; hence, retrieving remote sensing images from large archives is a challenging task. Remote sensing image retrieval mainly consists of feature representation as the first step and finding images similar to a query image as the second step. Feature representation plays an important part in the performance of the retrieval process. This research work focuses on the impact of feature representation of remote sensing images on retrieval performance. The study shows that more discriminative features of remote sensing images are needed to improve the performance of the remote sensing image retrieval process.
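
    As a bare-bones sketch of the two retrieval steps described above, the example below represents each archive image with a placeholder descriptor (the mean spectrum of a hyperspectral cube) and ranks the archive by cosine similarity to the query. Both the descriptor and the similarity measure are illustrative assumptions; the feature representations actually studied in this work are not reproduced here.

        # Two-step retrieval sketch: (1) feature representation, (2) similarity
        # ranking against a query. The mean-spectrum descriptor is a placeholder.
        import numpy as np

        def describe(cube):
            """Placeholder descriptor: mean spectrum over all pixels of an HxWxB cube."""
            return cube.reshape(-1, cube.shape[-1]).mean(axis=0)

        def retrieve(query_vec, archive_vecs, top_k=3):
            q = query_vec / np.linalg.norm(query_vec)
            A = archive_vecs / np.linalg.norm(archive_vecs, axis=1, keepdims=True)
            scores = A @ q                      # cosine similarity to the query
            return np.argsort(scores)[::-1][:top_k]

        rng = np.random.default_rng(1)
        archive = [rng.random((32, 32, 64)) for _ in range(10)]   # ten synthetic hyperspectral cubes
        vecs = np.stack([describe(c) for c in archive])
        query = describe(rng.random((32, 32, 64)))
        print("closest archive images:", retrieve(query, vecs))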
