
    Depth-first search embedded wavelet algorithm for hardware implementation

    The emerging technology of image communication over wireless transmission channels requires that several new challenges be met simultaneously at the algorithm and architecture levels. At the algorithm level, desirable features include high coding performance, bit stream scalability, robustness to transmission errors, and suitability for content-based coding schemes. At the architecture level, efficient architectures are required for building portable devices with small size and low power consumption. An important question is whether a single coding algorithm can be designed to meet these diverse requirements. Recently, researchers working on improving different features have converged on a set of coding schemes commonly known as embedded wavelet algorithms. Currently, these algorithms enjoy the highest coding performances reported in the literature. In addition, embedded wavelet algorithms have the natural feature of being able to meet a target bit rate precisely. Furthermore, work on improving the robustness of these algorithms has shown much promise. The potential of embedded wavelet techniques has been acknowledged by their inclusion in the new JPEG2000 and MPEG-4 image and video coding standards.
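
    To make the idea of a depth-first embedded wavelet scan concrete, the following is a minimal illustrative sketch, not the paper's algorithm: wavelet coefficients are organized in a quad-tree across scales, and a depth-first traversal emits one significance symbol per visited coefficient, pruning an entire insignificant subtree with a single "zerotree" symbol. The `Node` structure, symbol names, and threshold value are assumptions for illustration only.

```python
# Illustrative sketch of a depth-first, zerotree-style significance scan over
# a quad-tree of wavelet coefficients. Names and structures are assumptions,
# not the paper's hardware algorithm.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Node:
    """One wavelet coefficient plus its four children at the next finer scale."""
    value: float
    children: List["Node"] = field(default_factory=list)


def subtree_insignificant(node: Node, threshold: float) -> bool:
    """True if the coefficient and all of its descendants are below threshold."""
    return abs(node.value) < threshold and all(
        subtree_insignificant(c, threshold) for c in node.children
    )


def depth_first_scan(node: Node, threshold: float, symbols: List[str]) -> None:
    """Depth-first scan: prune whole insignificant subtrees with one 'Z' symbol,
    otherwise emit the node's significance bit and descend into its children."""
    if subtree_insignificant(node, threshold):
        symbols.append("Z")                    # one symbol covers the whole subtree
        return
    symbols.append("1" if abs(node.value) >= threshold else "0")
    for child in node.children:
        depth_first_scan(child, threshold, symbols)


# Toy example: a one-level tree scanned at threshold 16.
root = Node(23.0, [Node(4.0), Node(-18.0), Node(2.0), Node(7.0)])
out: List[str] = []
depth_first_scan(root, 16.0, out)
print(out)   # ['1', 'Z', '1', 'Z', 'Z']: coarse-to-fine order, localized memory access
```

    A depth-first order like this keeps the working set local to one subtree at a time, which is one reason such traversals are attractive for hardware implementations with limited on-chip memory.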

    Flexi-WVSNP-DASH: A Wireless Video Sensor Network Platform for the Internet of Things

    Video capture, storage, and distribution in wireless video sensor networks (WVSNs) critically depend on the resources of the nodes forming the sensor networks. In the era of big data, the Internet of Things (IoT), and distributed demand and solutions, there is a need for multi-dimensional data to be part of the sensor network data that is easily accessible and consumable by humans as well as machines. Images and video are expected to become as ubiquitous as scalar data is in traditional sensor networks. The inception of video streaming over the Internet heralded relentless research into effective ways of distributing video in a scalable and cost-effective manner. There have been novel implementation attempts across several network layers. Due to the inherent complications of backward compatibility and the need for standardization across network layers, attention has refocused on addressing most of the video distribution at the application layer. As a result, a few video streaming solutions over the Hypertext Transfer Protocol (HTTP) have been proposed. Most notable are Apple's HTTP Live Streaming (HLS) and the Moving Picture Experts Group's Dynamic Adaptive Streaming over HTTP (MPEG-DASH). These frameworks do not address the typical and future WVSN use cases. A highly flexible Wireless Video Sensor Network Platform and compatible DASH (WVSNP-DASH) are introduced. The platform's goal is to usher in video as a data element that can be integrated into traditional and non-Internet networks. A low-cost, scalable node is built from the ground up to be fully compatible with the Internet of Things machine-to-machine (M2M) concept, as well as to be easily re-targeted to new applications in a short time. The Flexi-WVSNP design includes a multi-radio node, a middleware for sensor operation and communication, a cross-platform client-facing data retriever/player framework, scalable security, and a cohesive but decoupled hardware and software design.
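
    For readers unfamiliar with HTTP adaptive streaming, the following is a minimal sketch of the client-side idea behind frameworks such as MPEG-DASH and HLS: the client picks the highest-bitrate representation that fits its measured throughput and fetches the next media segment over plain HTTP. The URLs, bitrates, and smoothing constants below are hypothetical; this is not the WVSNP-DASH implementation.

```python
# Illustrative sketch of HTTP adaptive streaming (DASH/HLS style): choose a
# representation that fits measured throughput, fetch one segment, update the
# throughput estimate. All URLs and parameters are hypothetical.

import time
import urllib.request

# Hypothetical representations: (bitrate in bits per second, URL template).
REPRESENTATIONS = [
    (250_000,   "http://example.com/video/250k/seg_{:04d}.m4s"),
    (500_000,   "http://example.com/video/500k/seg_{:04d}.m4s"),
    (1_000_000, "http://example.com/video/1000k/seg_{:04d}.m4s"),
]


def choose_representation(throughput_bps: float, safety: float = 0.8):
    """Pick the highest bitrate that stays under a safety margin of throughput."""
    feasible = [r for r in REPRESENTATIONS if r[0] <= throughput_bps * safety]
    return feasible[-1] if feasible else REPRESENTATIONS[0]


def fetch_segment(index: int, throughput_bps: float):
    """Download one segment; return (segment bytes, updated throughput estimate)."""
    bitrate, template = choose_representation(throughput_bps)
    start = time.monotonic()
    data = urllib.request.urlopen(template.format(index)).read()
    elapsed = max(time.monotonic() - start, 1e-3)
    measured = 8 * len(data) / elapsed           # bits per second for this segment
    # Exponential smoothing so a single fast or slow segment does not cause
    # the quality selection to oscillate.
    return data, 0.7 * throughput_bps + 0.3 * measured
```

    Because all adaptation logic sits in the client and the transport is plain HTTP, this approach works through ordinary web servers and caches, which is why application-layer streaming has been favored over changes to lower network layers.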

    Side Information Generation in Distributed Video Coding

    The Distributed Video Coding (DVC) paradigm is based largely on two theorems of information theory and coding: the Slepian-Wolf theorem and the Wyner-Ziv theorem, introduced in 1973 and 1976 respectively. DVC bypasses the need to perform Motion Compensation (MC) and Motion Estimation (ME), which are largely responsible for the complexity of encoders in devices. Instead, DVC relies on exploiting the source statistics, fully or partially, at the decoder only. Wyner-Ziv coding, a particular case of DVC, is explored in detail in this thesis. In this scenario, two correlated sources are encoded independently, while the encoded streams are decoded jointly at a single decoder that exploits the correlation between them. Although the study of distributed coding dates back to the 1970s, practical efforts and developments in the field began only in the last decade. Upcoming applications (such as video surveillance, mobile cameras, and wireless sensor networks) can rely on DVC, as they do not have high computational capabilities and/or high storage capacity. Current coding paradigms, the MPEG-x and H.26x standards, predict frames by means of Motion Compensation and Motion Estimation, which leads to a highly complex encoder. In WZ coding, by contrast, the correlation between temporally adjacent frames is exploited only at the decoder, which results in a fairly low-complexity encoder. The main objective of this thesis is to investigate an improved scheme for Side Information (SI) generation in the DVC framework. SI frames, available at the decoder, are generated by means of a Radial Basis Function Network (RBFN). Frames are estimated block-by-block from decoded key frames. The RBFN is trained offline using training patterns from different frames collected from standard video sequences.
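
    As a rough illustration of RBFN-based side information generation, the sketch below predicts each block of an intermediate frame from the co-located blocks of two decoded key frames using a small radial basis function network trained offline. The block size, number of centers, kernel width, and random training data are assumptions for illustration, not the thesis's configuration.

```python
# Illustrative sketch (assumptions, not the thesis's exact scheme): predict a
# side-information block from the co-located blocks of two decoded key frames
# with a small Gaussian radial basis function network.

import numpy as np


def rbf_features(x: np.ndarray, centers: np.ndarray, gamma: float) -> np.ndarray:
    """Gaussian RBF activations of each input vector with respect to fixed centers."""
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)


def train_rbfn(X: np.ndarray, Y: np.ndarray, n_centers: int = 32, gamma: float = 1e-3):
    """Offline training: randomly chosen centers plus least-squares output weights."""
    rng = np.random.default_rng(0)
    centers = X[rng.choice(len(X), n_centers, replace=False)]
    H = rbf_features(X, centers, gamma)
    W, *_ = np.linalg.lstsq(H, Y, rcond=None)
    return centers, W, gamma


def predict_blocks(X: np.ndarray, model) -> np.ndarray:
    """Estimate side-information blocks from key-frame block pairs."""
    centers, W, gamma = model
    return rbf_features(X, centers, gamma) @ W


# Toy usage with hypothetical 8x8 blocks: each input is the pair of co-located
# key-frame blocks (2 * 64 pixels), each target is the 64-pixel SI block.
X_train = np.random.rand(500, 128)     # stand-in for block pairs gathered offline
Y_train = np.random.rand(500, 64)      # stand-in for the corresponding SI blocks
model = train_rbfn(X_train, Y_train)
si_block = predict_blocks(np.random.rand(1, 128), model)   # one predicted block
```

    In a DVC setting the point of such a predictor is that all of this work happens at the decoder: the encoder never performs motion estimation, and the quality of the predicted SI frame determines how few Wyner-Ziv parity bits the decoder must request.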