Scalable wavelet-based coding of irregular meshes with interactive region-of-interest support
This paper proposes a novel functionality in wavelet-based irregular mesh coding: interactive region-of-interest (ROI) support. The proposed approach enables the user to define arbitrary ROIs at the decoder side and to prioritize and decode these regions at arbitrarily high granularity levels. In this context, a novel adaptive wavelet transform for irregular meshes is proposed, which enables: 1) varying the resolution across the surface at arbitrarily fine granularity levels and 2) dynamic tiling, which adapts the tile sizes to the local sampling densities at each resolution level. The proposed tiling approach enables a rate-distortion-optimal distribution of rate across spatial regions. When limiting the highest-resolution ROI to the visible regions, the fine granularity of the proposed adaptive wavelet transform reduces the required amount of graphics memory by up to 50%. Furthermore, the required graphics memory for an arbitrarily small ROI becomes negligible compared to rendering without ROI support, independent of any tiling decisions. Random access is provided by a novel dynamic tiling approach, which proves to be particularly beneficial for large models of over 10^6 to 10^7 vertices. The experiments show that the dynamic tiling introduces a limited lossless rate penalty compared to an equivalent codec without ROI support. Additionally, rate savings of up to 85% are observed while decoding ROIs of tens of thousands of vertices.
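The ROI-prioritized progressive decoding described above can be illustrated with a toy scheduling sketch. Everything here is hypothetical (the tile/level model and all names are illustrative, not the paper's actual codec): base-resolution data for every tile streams first so the whole mesh renders coarsely, then the refinement levels of ROI tiles, then the rest.

```python
def roi_stream_order(tile_levels, roi_tiles):
    """Order (tile, level) decode units for ROI-prioritized streaming.

    tile_levels: {tile_id: number_of_resolution_levels} (hypothetical model)
    roi_tiles:   set of tile ids inside the user-selected ROI

    Base levels (level 0) of all tiles come first, then ROI refinements
    coarse-to-fine, then the remaining refinements.
    """
    units = [(t, lvl) for t, n in tile_levels.items() for lvl in range(n)]
    return sorted(units, key=lambda u: (u[1] != 0, u[0] not in roi_tiles, u[1]))

order = roi_stream_order({"a": 3, "b": 3}, roi_tiles={"a"})
# base layers of both tiles stream first, then tile "a" is refined fully
```

With this ordering a small ROI only ever pulls the refinement bits of its own tiles, which is the intuition behind the negligible graphics-memory cost reported for small ROIs.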
Fast Deep Multi-patch Hierarchical Network for Nonhomogeneous Image Dehazing
Recently, CNN-based end-to-end deep learning methods have achieved superior results in image dehazing, but they tend to fail drastically on non-homogeneous haze. Apart from that, existing popular multi-scale approaches are runtime-intensive and memory-inefficient. In this context, we propose a fast Deep Multi-patch Hierarchical Network to restore non-homogeneously hazed images by aggregating features from multiple image patches taken from different spatial sections of the hazed image, using fewer network parameters. Our proposed method is quite robust across environments with varying densities of haze or fog in the scene, and it is very lightweight: the total size of the model is around 21.7 MB. It also provides faster runtime than current multi-scale methods, with an average runtime of 0.0145 s to process a 1200x1600 HD-quality image. Finally, we show the superiority of this network over other state-of-the-art models on dense haze removal.
Comment: CVPR Workshops Proceedings 202
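The multi-patch idea can be sketched in a few lines of NumPy. This is a toy illustration, not the paper's network: `extract_features` stands in for a CNN encoder, and the (1, 2, 4) grid hierarchy is assumed from the multi-patch description.

```python
import numpy as np

def extract_features(patch):
    # Stand-in for a CNN encoder: per-channel mean as a toy feature vector.
    return patch.mean(axis=(0, 1))

def multi_patch_features(img, levels=(1, 2, 4)):
    """Collect features from non-overlapping patch grids at several scales:
    the whole image, a 2x2 grid, and a 4x4 grid (1 + 4 + 16 = 21 patches)."""
    h, w, _ = img.shape
    feats = []
    for n in levels:
        ph, pw = h // n, w // n
        for i in range(n):
            for j in range(n):
                patch = img[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw]
                feats.append(extract_features(patch))
    return np.stack(feats)

img = np.random.rand(128, 128, 3)      # dummy hazy image
f = multi_patch_features(img)          # (21, 3) with the toy encoder
```

Because each patch is processed independently before aggregation, spatially varying (non-homogeneous) haze in one region does not dominate the features of another.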
Random Linear Network Coding for 5G Mobile Video Delivery
An exponential increase in mobile video delivery will continue with the demand for higher-resolution, multi-view and large-scale multicast video services. The novel fifth-generation (5G) 3GPP New Radio (NR) standard will bring a number of new opportunities for optimizing video delivery across both 5G core and radio access networks. One of the promising approaches for video quality adaptation, throughput enhancement and erasure protection is the use of packet-level random linear network coding (RLNC). In this review paper, we discuss the integration of RLNC into the 5G NR standard, building upon the ideas and opportunities identified in 4G LTE. We explicitly identify and discuss in detail novel 5G NR features that provide support for RLNC-based video delivery in 5G, thus pointing to promising avenues for future research.
Comment: Invited paper for Special Issue "Network and Rateless Coding for Video Streaming" - MDPI Information
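Packet-level RLNC itself is compact enough to sketch. The minimal example below works over GF(2), whereas practical deployments typically use GF(2^8); the function names are illustrative, not from any 3GPP specification. Coded packets are random XOR combinations of the source packets, and the receiver recovers the sources by Gaussian elimination once it has collected a full-rank set.

```python
import numpy as np

rng = np.random.default_rng(0)

def rlnc_encode(packets, n_coded):
    """Produce n_coded packets as random GF(2) combinations of the sources."""
    coeffs = rng.integers(0, 2, size=(n_coded, len(packets)), dtype=np.uint8)
    coded = (coeffs @ np.array(packets, dtype=np.uint8)) % 2  # XOR = add mod 2
    return coeffs, coded

def rlnc_decode(coeffs, coded):
    """Gaussian elimination over GF(2); returns the source packets, or None
    if the received combinations do not yet have full rank."""
    k = coeffs.shape[1]
    A = np.concatenate([coeffs, coded], axis=1) % 2
    row = 0
    for col in range(k):
        pivot = next((r for r in range(row, len(A)) if A[r, col]), None)
        if pivot is None:
            return None                   # need more innovative packets
        A[[row, pivot]] = A[[pivot, row]]
        for r in range(len(A)):
            if r != row and A[r, col]:
                A[r] = (A[r] + A[row]) % 2
        row += 1
    return A[:k, k:]

src = [[1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 0]]   # three 4-bit packets
decoded = None
while decoded is None:                    # receiver collects until full rank
    coeffs, coded = rlnc_encode(src, n_coded=6)
    decoded = rlnc_decode(coeffs, coded)
```

Sending six coded packets for three sources gives the erasure protection mentioned in the abstract: any three linearly independent combinations suffice, so up to three losses are tolerated without retransmission.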
Bounds on Contention Management in Radio Networks
The local broadcast problem assumes that processes in a wireless network are
provided messages, one by one, that must be delivered to their neighbors. In
this paper, we prove tight bounds for this problem in two well-studied wireless
network models: the classical model, in which links are reliable and collisions
consistent, and the more recent dual graph model, which introduces unreliable
edges. Our results prove that the Decay strategy, commonly used for local
broadcast in the classical setting, is optimal. They also establish a
separation between the two models, proving that the dual graph setting is
strictly harder than the classical setting with respect to this primitive.
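The Decay strategy referenced above is simple to state, and a single-hop simulation conveys the intuition. This is a hedged sketch under simplifying assumptions (one receiver, no multi-hop topology, no dual-graph unreliability): every still-active sender transmits each round and then stays active with probability 1/2, so after roughly log n rounds there is a good chance that exactly one sender transmits, which is the only way the receiver hears anything.

```python
import math
import random

def decay_phase(senders, rounds):
    """One phase of the Decay protocol on a single-hop network (a sketch).

    Every still-active sender transmits each round, then stays active with
    probability 1/2.  The receiver decodes a message in a round iff exactly
    one sender transmitted; otherwise the round is silence or a collision.
    """
    active = set(senders)
    for _ in range(rounds):
        if len(active) == 1:
            return next(iter(active))     # collision-free round: success
        active = {s for s in active if random.random() < 0.5}
        if not active:
            return None                   # all senders dropped out
    return None

random.seed(1)
senders = list(range(16))
# O(log n) rounds per phase, matching the classical Decay analysis
winner = decay_phase(senders, rounds=2 * math.ceil(math.log2(len(senders))) + 2)
```

A phase succeeds with constant probability, so repeating phases gives local broadcast in O(log n) rounds per message with high probability in the classical model.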
Image-based compression, prioritized transmission and progressive rendering of circular light fields (CLFs) for ancient Chinese artifacts
This paper proposes an efficient algorithm for the compression, prioritized transmission and progressive rendering of circular light fields (CLFs) for ancient Chinese artifacts. It employs a wavelet coder to achieve spatial scalability, dividing the compressed data into a lower-resolution base layer and an additional enhancement layer. The enhancement layer is coded into packets as in JPEG2000, while the base layer is coded using disparity-compensated prediction (DCP). The frame structure is designed to provide efficient access to the compressed data in order to support selective transmission and decoding. The depth and alpha maps are coded analogously. A prioritized transmission scheme which supports interactive progressive rendering is also proposed to further reduce the latency and response time of rendering. © 2010 IEEE. The 2010 IEEE Asia Pacific Conference on Circuits and Systems (APCCAS), Kuala Lumpur, Malaysia, 6-9 December 2010. In IEEE APCCAS Proceedings, 2010, p. 340-34
Management of spatial data for visualization on mobile devices
Vector-based mapping is emerging as a preferred format in Location-based Services (LBS), because it can deliver an up-to-date and interactive map visualization. The Progressive Transmission (PT) technique has been developed to enable the efficient transmission of vector data over the internet by delivering various incremental levels of detail (LoD). However, it is still challenging to apply this technique in a mobile context due to many inherent limitations of mobile devices, such as small screen size, slow processors and limited memory. Taking account of these limitations, PT has been extended by developing a framework of efficient data management for the visualization of spatial data on mobile devices. A data generalization framework is proposed and implemented in a software application. This application can significantly reduce the volume of data for transmission and enable quick access to a simplified version of the data while preserving appropriate visualization quality. Using volunteered geographic information as a case study, the framework shows flexibility in delivering up-to-date spatial information from dynamic data sources.
Three models of PT are designed and implemented to transmit the additional LoD refinements: a full-scale PT as an inverse of generalisation, a view-dependent PT, and a heuristic optimised view-dependent PT. These models are evaluated with user trials and application examples. The heuristic optimised view-dependent PT has shown a significant enhancement over the traditional PT in terms of bandwidth saving and smoothness of transitions.
A parallel data management strategy associated with three corresponding algorithms has been developed to handle LoD spatial data on mobile clients. This strategy enables map rendering to be performed in parallel with a process which retrieves the data for the next map location the user will require. A view-dependent approach has been integrated to monitor the volume of each LoD for the visible area. The demonstration of a flexible rendering style shows its potential use in visualizing dynamic geoprocessed data. Future work may extend this to integrate topological constraints and semantic constraints for enhancing the vector map visualization.
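The view-dependent PT idea can be sketched as a selection problem: of all LoD refinements, transmit only those intersecting the current viewport, coarsest first, within a byte budget. This is a hypothetical simplification of the thesis's models (names, the bounding-box refinement model, and the budget are all assumptions):

```python
from dataclasses import dataclass

@dataclass
class Refinement:
    level: int          # LoD level (0 = coarsest base map)
    bbox: tuple         # (xmin, ymin, xmax, ymax) in map coordinates
    payload_bytes: int

def intersects(a, b):
    """Axis-aligned bounding-box overlap test."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def view_dependent_pt(refinements, viewport, budget_bytes):
    """Select refinements to transmit: only those visible in the viewport,
    coarsest levels first, until the byte budget is spent."""
    visible = [r for r in refinements if intersects(r.bbox, viewport)]
    visible.sort(key=lambda r: r.level)
    chosen, spent = [], 0
    for r in visible:
        if spent + r.payload_bytes > budget_bytes:
            break
        chosen.append(r)
        spent += r.payload_bytes
    return chosen

refs = [Refinement(0, (0, 0, 10, 10), 100),
        Refinement(1, (0, 0, 5, 5), 200),
        Refinement(1, (5, 5, 10, 10), 200),   # off-screen: skipped
        Refinement(2, (0, 0, 2, 2), 400)]      # too costly for the budget
chosen = view_dependent_pt(refs, viewport=(0, 0, 4, 4), budget_bytes=500)
```

Restricting transmission to the visible area is what yields the bandwidth savings reported for the view-dependent models over full-scale PT.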
3D oceanographic data compression using 3D-ODETLAP
This paper describes a 3D environmental data compression technique for oceanographic datasets. With proper point selection, our method approximates uncompressed marine data using an over-determined system of linear equations based on, but essentially different from, the Laplacian partial differential equation. This approximation is then refined via an error metric. These two steps alternate until a predefined satisfactory approximation is found. Using several different datasets and metrics, we demonstrate that our method has an excellent compression ratio. To further evaluate our method, we compare it with 3D-SPIHT. 3D-ODETLAP averages 20% better compression than 3D-SPIHT on our eight test datasets from the World Ocean Atlas 2005. Our method provides up to approximately six times better compression on datasets with relatively small variance. Meanwhile, with the same approximate mean error, we demonstrate a significantly smaller maximum error compared to 3D-SPIHT, and we provide a feature to keep the maximum error under a user-defined limit.
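The core mechanism — an over-determined linear system mixing discrete-Laplacian smoothness equations with exact-value equations at selected points, solved in the least-squares sense — can be sketched in 1-D. This is a simplified toy, not the paper's 3-D formulation, and the `smoothness` weight is an assumed knob:

```python
import numpy as np

def odetlap_approx(known_idx, known_val, n, smoothness=1.0):
    """Least-squares reconstruction of an n-sample 1-D signal from a few
    selected points, ODETLAP-style.  One smoothness equation
    u[i-1] - 2*u[i] + u[i+1] = 0 per interior sample, plus one exact-value
    equation per selected point, gives an over-determined system that
    numpy solves in the least-squares sense."""
    rows, rhs = [], []
    for i in range(1, n - 1):                 # smoothness equations
        row = np.zeros(n)
        row[i - 1], row[i], row[i + 1] = smoothness, -2 * smoothness, smoothness
        rows.append(row)
        rhs.append(0.0)
    for i, v in zip(known_idx, known_val):    # data-fitting equations
        row = np.zeros(n)
        row[i] = 1.0
        rows.append(row)
        rhs.append(v)
    u, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return u

# 11 equations, 10 unknowns; the known points lie on a line, so the
# least-squares solution reproduces that line exactly.
u = odetlap_approx([0, 4, 9], [0.0, 4.0, 9.0], n=10)
```

Compression comes from storing only the selected points: the decoder re-solves the system, and the point-selection/refinement loop in the abstract keeps adding points where the reconstruction error exceeds the metric.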
A multi-camera approach to image-based rendering and 3-D/multiview display of ancient Chinese artifacts