Ubiquitous Scalable Graphics: An End-to-End Framework using Wavelets
Advances in ubiquitous displays and wireless communications have fueled the emergence of exciting mobile graphics applications, including 3D virtual product catalogs, 3D maps, security monitoring systems, and mobile games. The current trend of using cameras to capture geometry, material reflectance, and other graphics elements means that very high resolution inputs are available for rendering extremely photorealistic scenes. However, captured graphics content can be many gigabytes in size and must be simplified before it can be used on small mobile devices, which have limited memory, screen size, and battery energy. Scaling and converting graphics content to a suitable rendering format involves running several software tools, and selecting the best resolution for a target mobile device is often done by trial and error, all of which takes time. Wireless errors can also corrupt transmitted content, and aggressive compression is needed for low-bandwidth wireless networks. Most rendering algorithms are currently optimized for visual realism and speed, but are not resource or energy efficient on mobile devices. This dissertation focuses on improving rendering performance by reducing the impact of these problems with UbiWave, an end-to-end framework that enables real-time mobile access to high resolution graphics using wavelets. The framework tackles the simplification, transmission, and resource-efficient rendering of wavelet-based graphics content on mobile devices by utilizing 1) a Perceptual Error Metric (PoI) for automatically computing the best resolution of graphics content for a given mobile display, eliminating guesswork and saving resources; 2) Unequal Error Protection (UEP) to improve resilience to wireless errors; 3) an Energy-efficient Adaptive Real-time Rendering (EARR) heuristic to balance energy consumption, rendering speed, and image quality; and 4) an energy-efficient streaming technique. The results facilitate a new class of mobile graphics applications that gracefully adapt the lowest acceptable rendering resolution to wireless network conditions and to the resources and battery energy available on the mobile device.
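The resolution-selection idea in point 1) can be illustrated with a short sketch. The snippet below is a minimal illustration under assumptions of my own, not the dissertation's method: plain normalized MSE stands in for the PoI metric, PyWavelets (`pywt`) supplies the wavelet transform, the image is assumed grayscale with power-of-two dimensions, and the function name `pick_resolution` and threshold value are hypothetical.

```python
import numpy as np
import pywt  # PyWavelets

def pick_resolution(image, max_nmse=1e-3, wavelet="haar", levels=5):
    """Drop as many fine wavelet detail levels as possible while keeping
    the reconstruction error under a display-dependent threshold.
    Plain normalized MSE is a stand-in for the dissertation's PoI metric."""
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    scale = (image.max() - image.min()) ** 2 + 1e-12
    for k in range(levels, 0, -1):                   # most aggressive cut first
        trial = list(coeffs)
        for i in range(len(trial) - k, len(trial)):  # zero the k finest detail levels
            trial[i] = tuple(np.zeros_like(band) for band in trial[i])
        recon = pywt.waverec2(trial, wavelet)
        if np.mean((recon - image) ** 2) / scale <= max_nmse:
            return k, recon          # content can be downscaled by a factor of 2**k
    return 0, image                  # full resolution is needed

# Example with a synthetic 256x256 gradient image.
img = np.outer(np.linspace(0, 1, 256), np.linspace(0, 1, 256))
k, _ = pick_resolution(img)
print(f"dropped {k} detail levels")
```

Because the candidate resolutions are tested from coarsest to finest, the loop returns as soon as the cheapest acceptable representation is found, which mirrors the "lowest acceptable rendering resolution" goal stated in the abstract.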
Green compressive sampling reconstruction in IoT networks
In this paper, we address the problem of green Compressed Sensing (CS) reconstruction within Internet of Things (IoT) networks, both in terms of computing architecture and reconstruction algorithms. The approach is novel since, unlike most of the literature dealing with energy-efficient gathering of the CS measurements, we focus on the energy efficiency of the signal reconstruction stage given the CS measurements. As a first novel contribution, we present an analysis of the energy consumption within the IoT network under two computing architectures. In the first, reconstruction takes place within the IoT network and the reconstructed data are encoded and transmitted out of the IoT network; in the second, all the CS measurements are forwarded to off-network devices for reconstruction and storage, i.e., reconstruction is off-loaded. Our analysis shows that the two architectures differ significantly in terms of consumed energy, and it outlines a theoretically motivated criterion for selecting a green CS reconstruction computing architecture. Specifically, we present a decision function that determines which architecture outperforms the other in terms of energy efficiency. The decision function depends on a few IoT network features, such as the network size, the sink connectivity, and other system parameters. As a second novel contribution, we show how to move beyond the classical performance comparison of different CS reconstruction algorithms, which is usually carried out w.r.t. the achieved accuracy. Specifically, we consider the consumed energy and analyze the energy vs. accuracy trade-off. The presented approach, which jointly considers signal processing and IoT network issues, is a relevant contribution to designing green compressive sampling architectures in IoT networks.
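The architecture-selection criterion can be sketched as a simple energy comparison. The model below is an illustrative assumption of mine, not the paper's actual decision function: it weighs the energy of in-network reconstruction plus transmission of the encoded signal against the energy of forwarding the raw CS measurements, and every parameter name is hypothetical.

```python
def choose_architecture(n_samples, m_measurements, bits_per_sample,
                        e_tx_per_bit, e_cpu_per_flop, flops_per_reconstruction,
                        compression_ratio=0.3):
    """Pick the greener CS reconstruction architecture.

    In-network : reconstruct locally, then transmit the encoded signal.
    Off-loaded : transmit all CS measurements; reconstruction happens off-network.
    All parameters are hypothetical stand-ins for the paper's model."""
    e_in = (e_cpu_per_flop * flops_per_reconstruction
            + e_tx_per_bit * n_samples * bits_per_sample * compression_ratio)
    e_off = e_tx_per_bit * m_measurements * bits_per_sample
    return ("in-network", e_in) if e_in < e_off else ("off-loaded", e_off)

# Example: a sparse signal with n=1024 samples and m=256 CS measurements.
print(choose_architecture(1024, 256, 16, e_tx_per_bit=50e-9,
                          e_cpu_per_flop=1e-9, flops_per_reconstruction=5e6))
```

With these illustrative numbers, forwarding the compact measurement vector costs less than reconstructing locally, which matches the intuition that off-loading wins when the measurements are much smaller than the encoded reconstruction and local computation is expensive.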
Semantic Compression for Edge-Assisted Systems
A novel semantic approach to data selection and compression is presented for the dynamic adaptation of IoT data processing and transmission within "wireless islands", where a set of sensing devices (sensors) are interconnected through one-hop wireless links to a computational resource via a local access point. The core of the proposed technique is a cooperative framework where local classifiers at the mobile nodes are dynamically crafted and updated based on the current state of the observed system, the global processing objective, and the characteristics of the sensors and data streams. The edge processor plays a key role by establishing a link between content and operations within the distributed system. The local classifiers are designed to filter the data streams and provide only the needed information to the global classifier at the edge processor, thus minimizing bandwidth usage. However, the better the accuracy of these local classifiers, the larger the energy necessary to run them at the individual sensors. A formulation of the optimization problem for the dynamic construction of the classifiers under bandwidth and energy constraints is proposed and demonstrated on a synthetic example.

Comment: Presented at the Information Theory and Applications Workshop (ITA), February 17, 201
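The filtering behavior the abstract describes can be made concrete with a toy sketch. This is a minimal illustration under my own assumptions, not the paper's formulation: the local classifier is a fixed logistic model, the confidence threshold is the knob trading bandwidth against local decision quality, and names like `LocalFilter` are hypothetical.

```python
import numpy as np

class LocalFilter:
    """Toy local classifier: decide on-sensor when confident, otherwise
    forward the raw sample to the global classifier at the edge."""

    def __init__(self, weights, bias, confidence_threshold=0.8):
        self.w, self.b = weights, bias
        self.tau = confidence_threshold

    def process(self, x):
        p = 1.0 / (1.0 + np.exp(-(self.w @ x + self.b)))  # logistic score
        confidence = max(p, 1.0 - p)
        if confidence >= self.tau:
            return ("local", int(p > 0.5))  # decided locally, nothing transmitted
        return ("forward", x)               # uncertain: send the sample upstream

rng = np.random.default_rng(0)
f = LocalFilter(weights=rng.normal(size=4), bias=0.0)
sent = sum(f.process(rng.normal(size=4))[0] == "forward" for _ in range(1000))
print(f"forwarded {sent}/1000 samples to the edge classifier")
```

Raising `confidence_threshold` forwards more samples (more bandwidth, better global accuracy), while a more accurate local model reduces forwarding at the cost of on-sensor energy, which is exactly the trade-off the paper's optimization problem balances.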
JALAD: Joint Accuracy- and Latency-Aware Deep Structure Decoupling for Edge-Cloud Execution
Recent years have witnessed a rapid growth of deep-network based services and applications. A practical and critical problem has thus emerged: how to effectively deploy deep neural network models such that they can be executed efficiently. Conventional cloud-based approaches usually run the deep models in data center servers, causing large latency because a significant amount of data has to be transferred from the edge of the network to the data center. In this paper, we propose JALAD, a joint accuracy- and latency-aware execution framework, which decouples a deep neural network so that part of it runs at edge devices and the other part inside the conventional cloud, while only a minimal amount of data has to be transferred between them. Though the idea seems straightforward, we face challenges including i) how to find the best partition of a deep structure; ii) how to deploy the component at an edge device that has only limited computation power; and iii) how to minimize the overall execution latency. Our answers to these questions are a set of strategies in JALAD, including 1) a normalization-based in-layer data compression strategy that jointly considers compression rate and model accuracy; 2) a latency-aware deep decoupling strategy to minimize the overall execution latency; and 3) an edge-cloud structure adaptation strategy that dynamically changes the decoupling for different network conditions. Experiments demonstrate that our solution can significantly reduce the execution latency: it speeds up the overall inference execution with a guaranteed model accuracy loss.

Comment: conference, copyright transferred to IEE
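The partition search in challenge i) amounts to scanning the candidate cut points of a sequential network and minimizing edge compute, uplink transfer, and cloud compute time. The sketch below is an illustrative model under my own assumptions (the per-layer timings, output sizes, and the function name `best_partition` are hypothetical inputs), not JALAD's actual algorithm:

```python
def best_partition(edge_ms, cloud_ms, out_mb, input_mb, uplink_mbps):
    """Scan all cut points of a sequential network and return the cut
    minimizing edge compute + uplink transfer + cloud compute latency.

    edge_ms[i]  : latency of layer i on the edge device (ms)
    cloud_ms[i] : latency of layer i in the cloud (ms)
    out_mb[i]   : size of layer i's output feature map (MB)"""
    n = len(edge_ms)
    best_cut, best_lat = 0, float("inf")
    for cut in range(n + 1):                      # layers [0, cut) run on the edge
        sent = input_mb if cut == 0 else out_mb[cut - 1]
        lat = (sum(edge_ms[:cut])
               + sent * 8 / uplink_mbps * 1000    # MB -> Mb -> ms over the uplink
               + sum(cloud_ms[cut:]))
        if lat < best_lat:
            best_cut, best_lat = cut, lat
    return best_cut, best_lat

# Example: 4-layer net; the edge is slow, but feature maps shrink with depth,
# so cutting after layer 3 avoids shipping the large early activations.
print(best_partition(edge_ms=[40, 60, 300, 200], cloud_ms=[4, 6, 3, 2],
                     out_mb=[8.0, 2.0, 0.5, 0.01], input_mb=6.0, uplink_mbps=20))
```

Note that the transferred tensor size, not just compute speed, drives the choice of cut, which is why JALAD pairs the partition decision with in-layer compression and re-evaluates it as network conditions change.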