Optimal Energy Allocation for Kalman Filtering over Packet Dropping Links with Imperfect Acknowledgments and Energy Harvesting Constraints
This paper presents a design methodology for optimal transmission energy
allocation at a sensor equipped with energy harvesting technology for remote
state estimation of linear stochastic dynamical systems. In this framework,
sensor measurements, which are noisy versions of the system states, are sent to
the receiver over a packet dropping communication channel. The packet dropout
probabilities of the channel depend on both the sensor's transmission energies
and time-varying wireless fading channel gains. The sensor has access to an
energy harvesting source which, unlike conventional batteries with fixed energy
storage, is everlasting but unreliable. The receiver
performs optimal state estimation with random packet dropouts to minimize the
estimation error covariances based on received measurements. The receiver also
sends packet receipt acknowledgments to the sensor via an erroneous feedback
communication channel which is itself packet dropping.
The objective is to design optimal transmission energy allocation at the
energy harvesting sensor to minimize either a finite-time horizon sum or a long
term average (infinite-time horizon) of the trace of the expected estimation
error covariance of the receiver's Kalman filter. These problems are formulated
as Markov decision processes with imperfect state information. The optimal
transmission energy allocation policies are obtained by the use of dynamic
programming techniques. Using the concept of submodularity, the structure of
the optimal transmission energy policies is studied. Suboptimal solutions are
also discussed which are far less computationally intensive than optimal
solutions. Numerical simulation results are presented illustrating the
performance of the energy allocation algorithms.
Comment: Submitted to IEEE Transactions on Automatic Control. arXiv admin
note: text overlap with arXiv:1402.663
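The expected estimation error covariance discussed above evolves by a Kalman/Riccati recursion whose measurement update applies only when a packet arrives. The following is a minimal scalar sketch; the parameters A, C, Q, R, the exponential energy-to-drop-probability map, and the transmit energy are illustrative assumptions, not the paper's model:

```python
import numpy as np

# Hypothetical scalar system parameters (assumed for illustration)
A, C, Q, R = 1.2, 1.0, 0.5, 0.2

def riccati_step(P, received):
    """One step of the Kalman error-covariance recursion with packet drops.
    If the packet is received, apply the measurement update; otherwise the
    covariance propagates open-loop."""
    P_pred = A * P * A + Q                        # time update
    if received:
        K = P_pred * C / (C * P_pred * C + R)     # Kalman gain
        return (1 - K * C) * P_pred               # measurement update
    return P_pred

def drop_prob(energy, gain=1.0):
    """Assumed channel model: higher transmit energy, lower drop probability."""
    return np.exp(-gain * energy)

# Monte Carlo estimate of the average error covariance at a fixed energy level
rng = np.random.default_rng(0)
P, traces = 1.0, []
for _ in range(1000):
    received = rng.random() > drop_prob(energy=1.0)
    P = riccati_step(P, received)
    traces.append(P)
print(np.mean(traces))
```

Raising `energy` in this toy model lowers the drop probability and hence the average covariance; this is the trade-off the optimal allocation policies balance against the harvested-energy budget.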
Restricted Strip Covering and the Sensor Cover Problem
Given a set of objects with durations (jobs) that cover a base region, can we
schedule the jobs to maximize the duration the original region remains covered?
We call this problem the sensor cover problem. This problem arises in the
context of covering a region with sensors. For example, suppose you wish to
monitor activity along a fence by sensors placed at various fixed locations.
Each sensor has a range and limited battery life. The problem is to schedule
when to turn on the sensors so that the fence is fully monitored for as long as
possible. This one-dimensional problem involves intervals on the real line.
Associating a duration with each interval yields a set of rectangles in space
and time, each specified by a pair of fixed horizontal endpoints and a height. The
objective is to assign a position to each rectangle to maximize the height at
which the spanning interval is fully covered. We call this one-dimensional
problem restricted strip covering. If we replace the covering constraint by a
packing constraint, the problem is identical to dynamic storage allocation, a
scheduling problem that is a restricted case of the strip packing problem. We
show that the restricted strip covering problem is NP-hard and present an O(log
log n)-approximation algorithm. We present better approximations or exact
algorithms for some special cases. For the uniform-duration case of restricted
strip covering we give a polynomial-time, exact algorithm but prove that the
uniform-duration case for higher-dimensional regions is NP-hard. Finally, we
consider regions that are arbitrary sets, and we present an O(log
n)-approximation algorithm.
Comment: 14 pages, 6 figures
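The fence-monitoring example can be made concrete with a simple round-based greedy: repeatedly find a greedy interval cover of the fence among sensors with battery left, and run that cover for the smallest remaining battery in it. This heuristic is only a sketch (the paper's O(log log n)-approximation is more involved); the sensor encoding and numbers are assumed:

```python
def greedy_cover(fence, sensors):
    """Toy sensor-cover scheduler.

    sensors: dict name -> (l, r, battery), an interval [l, r) with a
    battery budget. Repeatedly build an interval cover of [0, fence)
    by always taking the live sensor covering the current position that
    reaches furthest right, then run that cover for the minimum remaining
    battery among its sensors. Returns the total time the fence is covered.
    """
    battery = {k: b for k, (l, r, b) in sensors.items()}
    total = 0.0
    while True:
        pos, chosen = 0.0, []
        while pos < fence:
            best = None
            for k, (l, r, b) in sensors.items():
                if battery[k] > 0 and l <= pos < r:
                    if best is None or r > sensors[best][1]:
                        best = k
            if best is None:
                return total          # the fence can no longer be covered
            chosen.append(best)
            pos = sensors[best][1]    # jump to the right end of the pick
        t = min(battery[k] for k in chosen)   # run this cover as long as possible
        for k in chosen:
            battery[k] -= t
        total += t
```

For example, two sensors both covering the whole fence with batteries 2 and 3 yield a covered duration of 5, since they are scheduled in disjoint rounds rather than simultaneously.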
Channel Fragmentation in Dynamic Spectrum Access Systems - a Theoretical Study
Dynamic Spectrum Access systems exploit temporarily available spectrum
(`white spaces') and can spread transmissions over a number of non-contiguous
sub-channels. Such methods are highly beneficial in terms of spectrum
utilization. However, excessive fragmentation degrades performance and hence
offsets the benefits. Thus, there is a need to study these processes so as to
determine how to ensure acceptable levels of fragmentation. Hence, we present
experimental and analytical results derived from a mathematical model. We model
a system operating at capacity serving requests for bandwidth by assigning a
collection of gaps (sub-channels) with no limitations on the fragment size. Our
main theoretical result shows that even if fragments can be arbitrarily small,
the system does not degrade with time. Namely, the average total number of
fragments remains bounded. Within the very difficult class of dynamic
fragmentation models (including models of storage fragmentation), this result
appears to be the first of its kind. Extensive experimental results describe
behavior, at times unexpected, of fragmentation under different algorithms. Our
model also applies to dynamic linked-list storage allocation, and provides a
novel analysis in that domain. We prove that, interestingly, the 50% rule of
the classical (non-fragmented) allocation model carries over to our model.
Overall, the paper provides insights into the potential behavior of practical
fragmentation algorithms.
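The bounded-fragmentation claim can be probed with a toy simulation: arriving requests grab arbitrary free sub-channels (fragmenting freely), departures release them, and we track the total number of maximal fragments over time. The arrival/departure probabilities and request sizes below are assumptions, purely illustrative of the model class:

```python
import random

def simulate(capacity=64, horizon=2000, seed=1):
    """Toy fragmented channel allocation model.

    Slots 0..capacity-1 each hold a request id or None. Arrivals take up
    to `size` free slots wherever they are (no contiguity constraint);
    departures free all slots of a random live request. Returns the
    time-averaged total number of fragments, i.e. maximal runs of equal
    ownership (free runs included).
    """
    rng = random.Random(seed)
    slots = [None] * capacity           # slot -> request id or None
    live, next_id, frag_counts = {}, 0, []
    for _ in range(horizon):
        if live and rng.random() < 0.5:         # departure
            rid = rng.choice(list(live))
            for i in live.pop(rid):
                slots[i] = None
        else:                                   # arrival
            free = [i for i, s in enumerate(slots) if s is None]
            take = free[:rng.randint(1, 8)]     # grab the first free slots
            if take:
                for i in take:
                    slots[i] = next_id
                live[next_id] = take
                next_id += 1
        # count maximal runs of equal ownership across the channel
        frags = sum(1 for i in range(capacity)
                    if i == 0 or slots[i] != slots[i - 1])
        frag_counts.append(frags)
    return sum(frag_counts) / len(frag_counts)
```

In this toy model the average fragment count settles well below the worst case of one fragment per slot, consistent with the paper's result that the expected total number of fragments stays bounded over time.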
A Taxonomy for Management and Optimization of Multiple Resources in Edge Computing
Edge computing is promoted to meet increasing performance needs of
data-driven services using computational and storage resources close to the end
devices, at the edge of the current network. To achieve higher performance in
this new paradigm, one has to consider how to use resources efficiently across
all three layers of the architecture: end devices, edge devices, and the
cloud. While cloud capacity is elastically extendable, end devices and edge
devices are to various degrees resource-constrained. Hence, efficient
resource management is essential to make edge computing a reality. In this
work, we first present terminology and architectures to characterize current
works within the field of edge computing. Then, we review a wide range of
recent articles and categorize relevant aspects in terms of 4 perspectives:
resource type, resource management objective, resource location, and resource
use. This taxonomy and the ensuing analysis is used to identify some gaps in
the existing research. Among several research gaps, we found that research on
data, storage, and energy as resources is less prevalent, and that the
estimation, discovery, and sharing objectives are less extensively studied. As for resource
types, the most well-studied resources are computation and communication
resources. Our analysis shows that resource management at the edge requires a
deeper understanding of how methods applied at different levels and geared
towards different resource types interact. Specifically, the impact of mobility
and of collaboration schemes requiring incentives is expected to differ in
edge architectures compared to classic cloud solutions. Finally, we find
that fewer works are dedicated to the study of non-functional properties or to
quantifying the footprint of resource management techniques, including
edge-specific means of migrating data and services.
Comment: Accepted in the Special Issue Mobile Edge Computing of the Wireless
Communications and Mobile Computing journal
Stochastic Dynamic Cache Partitioning for Encrypted Content Delivery
In-network caching is an appealing solution to cope with the increasing
bandwidth demand of video, audio and data transfer over the Internet.
Nonetheless, an increasing share of content delivery services adopt encryption
through HTTPS, which is not compatible with traditional ISP-managed approaches
like transparent and proxy caching. This raises the need for solutions
involving both Internet Service Providers (ISP) and Content Providers (CP): by
design, the solution should preserve business-critical CP information (e.g.,
content popularity, user preferences) on the one hand, while allowing for a
deeper integration of caches in the ISP architecture (e.g., in 5G femto-cells)
on the other hand.
In this paper we address this issue by considering a content-oblivious
ISP-operated cache. The ISP allocates the cache storage to various content
providers so as to maximize the bandwidth savings provided by the cache: the
main novelty lies in the fact that, to protect business-critical information,
ISPs only need to measure the aggregated miss rates of the individual CPs and
do not need to be aware of the objects that are requested, as in classic
caching. We propose a cache allocation algorithm based on a perturbed
stochastic subgradient method, and prove that the algorithm converges close to
the allocation that maximizes the overall cache hit rate. We use extensive
simulations to validate the algorithm and to assess its convergence rate under
stationary and non-stationary content popularity. Our results (i) attest to the
feasibility of content-oblivious caches and (ii) show that the proposed
algorithm achieves within 10% of the global optimum in our evaluation.
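The cache-partitioning idea, a perturbed stochastic subgradient ascent over per-CP allocations that observes only aggregate hit/miss counts, can be sketched as follows. The concave hit-rate curves below stand in for the miss rates an ISP would actually measure; the simultaneous-perturbation gradient estimate, the simplex projection, and every parameter value are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def project_simplex(x, total):
    """Euclidean projection onto {x >= 0, sum(x) = total}."""
    u = np.sort(x)[::-1]
    css = np.cumsum(u) - total
    rho = np.nonzero(u - css / (np.arange(len(x)) + 1) > 0)[0][-1]
    return np.maximum(x - css[rho] / (rho + 1), 0.0)

# Assumed concave hit-rate curves for 3 content providers (a toy stand-in
# for the aggregate miss-rate measurements the ISP would collect)
w = np.array([5.0, 3.0, 2.0])
tau = np.array([20.0, 10.0, 5.0])
def hit_rate(x):
    return (w * (1 - np.exp(-x / tau))).sum()

C = 30.0                               # total cache storage to partition
x = np.full(3, C / 3)                  # start from a uniform split
rng = np.random.default_rng(0)
for k in range(1, 2001):
    delta = rng.choice([-1.0, 1.0], size=3)   # random perturbation direction
    eps, step = 0.5, 5.0 / k                  # diminishing step size
    # two-sided perturbed estimate of the hit-rate subgradient
    g = (hit_rate(x + eps * delta) - hit_rate(x - eps * delta)) / (2 * eps) * delta
    x = project_simplex(x + step * g, C)      # ascend, stay feasible
print(x, hit_rate(x))
```

The projection keeps the allocation on the storage budget at every step, while the two-sided perturbation needs only two aggregate hit-rate observations per iteration, matching the paper's premise that the ISP never inspects which objects are requested.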