An application specific low bit-rate video compression system geared towards vehicle tracking.
Thesis (M.Sc.Eng.), University of Natal, Durban, 2003.
The ability to communicate over a low bit-rate transmission channel has become essential. In the past, transmission over low bit-rate channels, such as wireless links, has
typically been reserved for speech and low-volume data. However, there is currently a great deal of interest in the ability to transmit streaming video over such a link. These transmission channels are
generally bandwidth limited, hence bit-rates need to be low. Video, on the other hand, requires large amounts of bandwidth for real-time streaming applications. Existing video compression standards
such as MPEG-1/2 have succeeded in reducing the bandwidth required for transmission by exploiting redundant video information in both the spatial and temporal domains. However, such compression
systems are geared towards general applications and hence tend not to be suitable for low bit-rate applications. The objective of this work is to implement a system suited to such applications. Following an investigation into the field of video compression, existing techniques have been adapted and integrated into an application-specific low bit-rate video compression system. The implemented system is application specific in that it has been designed to track vehicles of reasonable size within an otherwise static scene. Low bit-rate video is achieved by separating a video scene into two areas of interest, namely the background scene and the objects that move with reference to this background. Once the background has been compressed and
transmitted to the decoder, the only data subsequently transmitted is that resulting from the segmentation and tracking of vehicles within the scene. This data is normally small in comparison with that of the background scene, and therefore, by only updating the background periodically, the resulting average output bit-rate is low. The implemented system is divided into two parts, namely a still-image encoder and decoder based on a Variable Block-Size Discrete Cosine Transform, and a context-specific encoder and decoder that tracks vehicles in motion within a video scene. The encoder system has been implemented on the
Philips TriMedia TM-1300 digital signal processor (DSP). The encoder is able to capture streaming video, compress individual video frames, and track objects in motion within a video scene. The decoder, on the other hand, has been implemented on the host PC into which the TriMedia DSP is plugged. A graphical user interface allows a system operator to control the compression system by
configuring various compression variables. For demonstration purposes, the host PC displays the decoded video stream as well as calculated rate metrics such as peak signal-to-noise ratio and resultant bit-rate. The implementation of the compression system is described, incorporating application examples and results. Conclusions are drawn and suggestions for further improvement are offered.
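The background/foreground split described above can be sketched as simple frame differencing against a stored background frame. This is a minimal illustration, not the thesis's actual segmentation algorithm; the threshold value and frame dimensions are hypothetical:

```python
import numpy as np

def segment_moving_objects(frame, background, threshold=25):
    """Mark pixels that differ from the static background scene.

    The threshold is a hypothetical tuning parameter; real vehicle
    segmentation is more involved than plain per-pixel differencing.
    """
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > threshold  # boolean foreground mask

# Toy 8x8 greyscale scene: a flat background with one moving "vehicle" block.
background = np.full((8, 8), 100, dtype=np.uint8)
frame = background.copy()
frame[2:4, 2:5] = 200  # the vehicle occupies a 2x3 block of pixels

mask = segment_moving_objects(frame, background)
# Only the masked region is encoded and sent after the initial background
# transmission, which is why the average output bit-rate stays low.
print(int(mask.sum()))  # 6 foreground pixels out of 64
```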
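The peak signal-to-noise ratio reported as a rate metric is derived from the mean squared error between original and decoded frames. A minimal sketch, assuming 8-bit samples (peak value 255); the sample frames are illustrative:

```python
import numpy as np

def psnr(original, decoded, peak=255.0):
    """PSNR in dB between an original and a decoded frame (8-bit assumed)."""
    mse = np.mean((original.astype(np.float64) - decoded.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)

orig = np.array([[100, 100], [100, 100]], dtype=np.uint8)
dec = np.array([[100, 110], [100, 100]], dtype=np.uint8)  # one pixel off by 10
print(round(psnr(orig, dec), 2))  # 34.15 dB
```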
Flexi-WVSNP-DASH: A Wireless Video Sensor Network Platform for the Internet of Things
abstract: Video capture, storage, and distribution in wireless video sensor networks
(WVSNs) critically depends on the resources of the nodes forming the sensor
networks. In the era of big data, Internet of Things (IoT), and distributed
demand and solutions, there is a need for multi-dimensional data to be part of
the sensor network data that is easily accessible and consumable by humans as
well as machines. Images and video are expected to become as ubiquitous as
scalar data in traditional sensor networks. The inception of video streaming
over the Internet heralded relentless research into effective ways of
distributing video in a scalable and cost-effective way. There have been novel
implementation attempts across several network layers. Due to the inherent
complications of backward compatibility and the need for standardization across
network layers, attention has refocused on addressing video distribution at
the application layer. As a result, a few video
streaming solutions over the Hypertext Transfer Protocol (HTTP) have been
proposed. Most notable are Apple's HTTP Live Streaming (HLS) and the Moving
Picture Experts Group's Dynamic Adaptive Streaming over HTTP (MPEG-DASH). These
frameworks do not address the typical and future WVSN use cases. A highly
flexible Wireless Video Sensor Network Platform and compatible DASH (WVSNP-DASH)
are introduced. The platform's goal is to usher video as a data element that
can be integrated into traditional and non-Internet networks. A low cost,
scalable node is built from the ground up to be fully compatible with the
Internet of Things machine-to-machine (M2M) concept and to be easily
re-targeted to new applications in a short time. The Flexi-WVSNP design
includes a multi-radio node, middleware for sensor operation and
communication, a cross-platform client-facing data retriever/player framework,
scalable security, and a cohesive but decoupled hardware and software design.
Doctoral Dissertation, Electrical Engineering, 201
Parallel scalability of video decoders
An important question is whether emerging
and future applications exhibit sufficient parallelism, in particular thread-level parallelism, to exploit the large numbers of cores future chip multiprocessors (CMPs)
are expected to contain. As a case study we investigate the parallelism available in video decoders, an important application domain now and in the future. Specifically,
we analyze the parallel scalability of the H.264 decoding process. First we discuss the data structures and dependencies of H.264 and show which types of parallelism it allows to be exploited. We also show that previously proposed parallelization strategies, such as slice-level, frame-level, and intra-frame macroblock (MB) level parallelism, are not sufficiently scalable.
Based on the observation that inter-frame dependencies have a limited spatial range we propose a new parallelization strategy, called Dynamic 3D-Wave. It allows certain MBs of consecutive frames to be decoded
in parallel. Using this new strategy we analyze the limits to the available MB-level parallelism in H.264. Using real movie sequences we find a maximum MB parallelism ranging from 4000 to 7000. We also perform
a case study to assess the practical value and possibilities of a highly parallelized H.264 application. The results show that H.264 exhibits sufficient parallelism to efficiently exploit the capabilities of future many-core CMPs.
Peer reviewed. Postprint (published version).
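The intra-frame MB dependencies underlying these strategies can be illustrated with the well-known 2D-wave schedule: an MB depends on its left, top, top-left, and top-right neighbours, so MB(x, y) becomes decodable at step x + 2y. The sketch below models only this intra-frame wavefront; the Dynamic 3D-Wave additionally overlaps consecutive frames, which is not shown here:

```python
def wavefront_schedule(mb_cols, mb_rows):
    """Group macroblocks by the earliest 2D-wave step at which they can decode.

    MB(x, y) depends on its left, top, top-left, and top-right neighbours,
    so it becomes decodable at step x + 2*y; all MBs that share a step are
    mutually independent and can be decoded in parallel.
    """
    steps = {}
    for y in range(mb_rows):
        for x in range(mb_cols):
            steps.setdefault(x + 2 * y, []).append((x, y))
    return steps

# A 1080p frame is 120x68 macroblocks (16x16 pixels each).
schedule = wavefront_schedule(120, 68)
peak = max(len(mbs) for mbs in schedule.values())
print(peak)  # 60 MBs decodable in parallel at the widest point of the wave
```

The peak of 60 independent MBs per frame shows why purely intra-frame parallelism saturates, and why overlapping MBs from consecutive frames raises the ceiling by orders of magnitude.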
Novel block-based motion estimation and segmentation for video coding
EThOS - Electronic Theses Online Service, United Kingdom
Architectures for Adaptive Low-Power Embedded Multimedia Systems
This Ph.D. thesis describes novel hardware/software architectures for adaptive low-power embedded multimedia systems. Novel techniques for run-time adaptive energy management are proposed, such that both hardware and software adapt together to react to unpredictable scenarios. A complete power-aware H.264 video encoder was developed. Comparison with the state of the art demonstrates significant energy savings while meeting the performance constraint and keeping the video quality degradation unnoticeable.
DESIGN METHODOLOGIES FOR RELIABLE AND ENERGY-EFFICIENT MULTIPROCESSOR SYSTEMS
Ph.D., Doctor of Philosophy
The Fifth NASA Symposium on VLSI Design
The fifth annual NASA Symposium on VLSI Design had 13 sessions, including Radiation Effects, Architectures, Mixed Signal, Design Techniques, Fault Testing, Synthesis, Signal Processing, and other featured presentations. The symposium provides insights into developments in VLSI and digital systems which can be used to increase data systems performance. The presentations share insights into next-generation advances that will serve as a basis for future VLSI design.
Towards Computational Efficiency of Next Generation Multimedia Systems
To address the throughput demands of complex applications (like multimedia), a next-generation system designer needs to co-design and co-optimize the hardware and software layers. Hardware/software knobs must be tuned in synergy to increase throughput efficiency. This thesis provides such algorithmic and architectural solutions while considering new technology challenges (power capping and memory aging). The goal is to maximize throughput efficiency under timing and hardware constraints.
NASA SERC 1990 Symposium on VLSI Design
This document contains papers presented at the first annual NASA Symposium on VLSI Design. NASA's involvement in this event demonstrates a need for research and development in high performance computing. High performance computing addresses problems faced by the scientific and industrial communities. High performance computing is needed in: (1) real-time manipulation of large data sets; (2) advanced systems control of spacecraft; (3) digital data transmission, error correction, and image compression; and (4) expert system control of spacecraft. Clearly, a valuable technology in meeting these needs is Very Large Scale Integration (VLSI). This conference addresses the following issues in VLSI design: (1) system architectures; (2) electronics; (3) algorithms; and (4) CAD tools.