Participation and Data Valuation in IoT Data Markets through Distributed Coalitions
This paper considers a market for trading Internet of Things (IoT) data that
is used to train machine learning models. The data, either raw or processed, is
supplied to the market platform through a network and the price of such data is
controlled based on the value it brings to the machine learning model. We
explore the correlation property of data in a game-theoretical setting to
eventually derive a simplified distributed solution for a data trading
mechanism that emphasizes the mutual benefit of devices and the market. The key
proposal is an efficient algorithm for markets that jointly addresses the
challenges of availability and heterogeneity in participation, as well as the
transfer of trust and the economic value of data exchange in IoT networks. The
proposed approach establishes the data market by reinforcing collaboration
opportunities between devices with correlated data to avoid information leakage.
Therein, we develop a network-wide optimization problem that maximizes the
social value of coalitions among IoT devices with similar data types; at the
same time, it minimizes the cost due to network externalities, i.e., the impact
of information leakage due to data correlation, as well as the opportunity
costs. Finally, we reveal the structure of the formulated problem as a
distributed coalition game and solve it following the simplified
split-and-merge algorithm. Simulation results show the efficacy of our proposed
mechanism design toward a trusted IoT data market, with up to a 32.72% gain in
the average payoff for each seller.
Comment: 14 pages. Submitted for possible publication.
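The distributed solution above can be pictured as a split-and-merge loop over coalitions. The following is a minimal Python sketch; the value function (members' base values, plus a bonus for pooling correlated data, minus a per-member opportunity cost) is an illustrative stand-in for the paper's network-wide objective, not its exact formulation.

```python
from itertools import combinations

def coalition_value(coalition, value, corr, opp_cost=0.5):
    """Illustrative coalition utility: members' base values, plus a bonus
    for pooling correlated data (avoided leakage), minus a per-member
    opportunity cost. Not the paper's exact objective."""
    base = sum(value[d] for d in coalition)
    bonus = sum(corr[a][b] for a, b in combinations(sorted(coalition), 2))
    return base + bonus - opp_cost * len(coalition)

def split_and_merge(devices, value, corr):
    """Start from singleton coalitions; merge a pair whenever the merge
    strictly raises total value, split a coalition back into singletons
    when that helps, and stop at a stable partition."""
    parts = [frozenset([d]) for d in devices]
    changed = True
    while changed:
        changed = False
        for a, b in combinations(list(parts), 2):      # merge step
            if coalition_value(a | b, value, corr) > (
                    coalition_value(a, value, corr)
                    + coalition_value(b, value, corr)):
                parts.remove(a); parts.remove(b); parts.append(a | b)
                changed = True
                break
        if changed:
            continue
        for c in list(parts):                          # split step
            if len(c) > 1:
                singles = sum(coalition_value(frozenset([d]), value, corr)
                              for d in c)
                if singles > coalition_value(c, value, corr):
                    parts.remove(c)
                    parts.extend(frozenset([d]) for d in c)
                    changed = True
                    break
    return parts
```

With two strongly correlated sellers and one independent one, the loop groups the correlated pair and leaves the third device alone, which mirrors the paper's intuition that correlated devices gain by coalescing.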
Goal-Oriented Communications in Federated Learning via Feedback on Risk-Averse Participation
We treat the problem of client selection in a Federated Learning (FL) setup,
where the learning objective and the local incentives of the participants are
used to formulate a goal-oriented communication problem. Specifically, we
incorporate the risk-averse nature of participants and obtain a
communication-efficient on-device performance, while relying on feedback from
the Parameter Server (\texttt{PS}). Each client decides on a transmission plan,
i.e., when not to participate in FL, based on its intrinsic incentive: the
value its participation adds to the trained global model. Poor updates not only
degrade the global model's performance at added communication cost but also
propagate the performance loss to other participating devices. We cast the
relevance of local updates as
\emph{semantic information} for developing local transmission strategies, i.e.,
making a decision on when to ``not transmit". The devices use feedback about
the state of the PS and evaluate their contributions in training the learning
model in each aggregation period, which eventually lowers the number of
occupied connections. Simulation results validate the efficacy of our proposed
approach, with gains in communication-link utilization compared with the
baselines.
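The per-client "transmit or abstain" rule can be sketched as a simple threshold test. The rule below and the risk-aversion parameter `rho` are assumptions for illustration; the paper's incentive model feeds richer PS feedback into this decision.

```python
def should_transmit(local_loss, global_loss_feedback, comm_cost, rho=1.0):
    """A client transmits only when the expected drop in global loss from
    its update outweighs the communication cost, scaled by a
    risk-aversion factor rho (higher rho = more conservative client).
    The gain proxy here is an illustrative assumption."""
    expected_gain = global_loss_feedback - local_loss
    return expected_gain > rho * comm_cost

def select_round(clients, global_loss_feedback, comm_cost=0.05, rho=1.0):
    """Subset of clients (id -> local loss) that choose to participate in
    this aggregation period, given PS feedback on the global loss."""
    return [cid for cid, loss in clients.items()
            if should_transmit(loss, global_loss_feedback, comm_cost, rho)]
```

A client whose local loss already sits below the fed-back global loss expects to help and transmits; one whose update would likely hurt the global model stays silent, freeing its communication link.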
Scheduling Policy for Value-of-Information (VoI) in Trajectory Estimation for Digital Twins
This paper presents an approach to schedule observations from different
sensors in an environment to ensure their timely delivery and build a digital
twin (DT) model of the system dynamics. At the cloud platform, DT models
estimate and predict the system's state, then compute the optimal scheduling
policy and resource allocation strategy to be executed in the physical world.
However, given limited network resources, partial state vector information, and
measurement errors at the distributed sensing agents, the acquisition of data
(i.e., observations) for efficient state estimation of system dynamics is a
non-trivial problem. We propose a Value of Information (VoI)-based algorithm
that provides a polynomial-time solution for selecting the most informative
subset of sensing agents to improve confidence in the state estimation of DT
models. Numerical results confirm that the proposed method outperforms other
benchmarks, reducing the communication overhead by half while maintaining the
required estimation accuracy.
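The polynomial-time selection can be pictured as a greedy loop over sensing agents. The per-agent VoI scores and the pairwise-redundancy discount below are illustrative placeholders for the paper's information measure, not its actual criterion.

```python
def greedy_voi_subset(voi, redundancy, budget):
    """Greedy VoI selection: repeatedly add the sensing agent whose
    marginal VoI, discounted by redundancy with agents already chosen,
    is largest; stop at the budget or when no agent still adds value.
    Polynomial time: O(n^2 * budget) for n agents."""
    chosen, candidates = [], set(voi)
    for _ in range(min(budget, len(voi))):
        def gain(s):
            overlap = sum(redundancy.get((s, c), redundancy.get((c, s), 0.0))
                          for c in chosen)
            return voi[s] - overlap
        best = max(candidates, key=gain)
        if gain(best) <= 0:
            break
        chosen.append(best)
        candidates.remove(best)
    return chosen
```

Note how a highly informative but redundant agent is skipped in favor of an independent one: that is the behavior that lets the scheduler halve communication overhead without losing estimation accuracy.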
Ruin Theory for Dynamic Spectrum Allocation in LTE-U Networks
LTE in the unlicensed band (LTE-U) is a promising solution to overcome the
scarcity of the wireless spectrum. However, to reap the benefits of LTE-U, it
is essential to maintain its effective coexistence with WiFi systems. Such a
coexistence, hence, constitutes a major challenge for LTE-U deployment. In this
paper, the problem of unlicensed spectrum sharing between WiFi and LTE-U
systems is studied. In particular, a fair time-sharing model based on \emph{ruin
theory} is proposed to share redundant spectral resources from the unlicensed
band with LTE-U without jeopardizing the performance of the WiFi system.
Fairness between WiFi and LTE-U is maintained by applying the concept of the
probability of ruin, which is used to perform efficient duty-cycle allocation
in LTE-U so as to provide fairness to the WiFi system and maintain a certain
level of WiFi performance. Simulation results show that the
proposed ruin-based algorithm provides better fairness to the WiFi system than
equal duty-cycle sharing between WiFi and LTE-U.
Comment: Accepted in IEEE Communications Letters (09-Dec-2018).
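A minimal Monte Carlo sketch of the ruin-probability idea follows. The surplus model (WiFi capacity margin as initial surplus, a deterministic "premium" income, exponential traffic "claims") and every parameter are illustrative assumptions, not the letter's calibrated model.

```python
import random

def ruin_probability(u0, premium, claim_mean, horizon=100, trials=500, seed=7):
    """Estimate P(surplus ever drops below 0) for the classical-style
    process U_t = u0 + premium*t - sum of exponential claims, by
    Monte Carlo over a finite horizon (an illustrative discretization)."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(trials):
        surplus = u0
        for _ in range(horizon):
            surplus += premium - rng.expovariate(1.0 / claim_mean)
            if surplus < 0:
                ruined += 1
                break
    return ruined / trials

def max_safe_duty_cycle(u0, claim_mean, target=0.1, step=0.05):
    """Largest LTE-U duty cycle d such that WiFi's ruin probability,
    with its 'premium' shrunk by (1 - d), stays below `target`."""
    d, best = 0.0, 0.0
    while d <= 1.0:
        premium = (1.0 - d) * 2.0 * claim_mean
        if ruin_probability(u0, premium, claim_mean) <= target:
            best = d
        d = round(d + step, 2)
    return best
```

The allocator hands LTE-U the largest share of airtime that still keeps WiFi's ruin probability under the fairness target, which is the spirit of the duty-cycle rule in the abstract.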
Provenance-enabled Packet Path Tracing in the RPL-based Internet of Things
The interconnection of resource-constrained and globally accessible things with
the untrusted and unreliable Internet makes them vulnerable to attacks,
including data forging, false data injection, and packet dropping, that affect
applications with critical decision-making processes. For data trustworthiness,
reliance on provenance is considered to be an effective mechanism that tracks
both data acquisition and data transmission. However, provenance management for
sensor networks introduces several challenges, such as low energy, bandwidth
consumption, and efficient storage. This paper attempts to identify packet
drops (whether malicious or caused by network disruptions) and to detect faulty
or misbehaving nodes in the Routing Protocol for Low-Power and Lossy Networks
(RPL) by following a bi-fold provenance-enabled packet path tracing (PPPT)
approach. First, system-level ordered-provenance information encapsulates the
data-generating nodes and the forwarding nodes in the data packet.
Second, to closely monitor dropped packets, node-level provenance in
the form of the packet sequence number is enclosed as a routing entry in the
routing table of each participating node. Lossless in nature, both approaches
keep the provenance size within the processing and storage requirements of
IoT devices. Finally, we evaluate the efficacy of the proposed scheme with
respect to provenance size, provenance generation time, and energy consumption.
Comment: 14 pages, 18 figures.
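The bi-fold idea can be sketched with two tiny data structures: per-packet path provenance and a per-node record of the last sequence number seen. Everything below is simplified for illustration; real RPL nodes would compress provenance to meet the storage limits the abstract mentions.

```python
class Node:
    """One forwarding node holding node-level provenance: the last
    packet sequence number it has seen from each source."""
    def __init__(self, node_id):
        self.node_id = node_id
        self.routing_table = {}          # src -> last sequence number seen

    def forward(self, packet):
        packet["path"].append(self.node_id)        # system-level provenance
        self.routing_table[packet["src"]] = packet["seq"]
        return packet

def locate_drop(path_nodes, src):
    """Walk the forwarding path and return the first hop (a, b) where
    the recorded sequence number decreases, i.e. where packets from
    `src` stopped arriving; None if no gap is found."""
    for a, b in zip(path_nodes, path_nodes[1:]):
        if b.routing_table.get(src, -1) < a.routing_table.get(src, -1):
            return (a.node_id, b.node_id)
    return None
```

Comparing sequence numbers hop by hop localizes the drop to a single link, after which the upstream or downstream node of that link can be flagged as faulty or misbehaving.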
Value-Based Reinforcement Learning for Digital Twins in Cloud Computing
The setup considered in the paper consists of sensors in a Networked Control
System that are used to build a digital twin (DT) model of the system dynamics.
The focus is on control, scheduling, and resource allocation for sensory
observation to ensure timely delivery to the DT model deployed in the cloud.
Low latency and communication timeliness are instrumental in ensuring that the
DT model can accurately estimate and predict system states. However, acquiring
data for efficient state estimation and control computing poses a non-trivial
problem given the limited network resources, partial state vector information,
and measurement errors encountered at distributed sensors. We propose the
REinforcement learning and Variational Extended Kalman filter with Robust
Belief (REVERB), which leverages a reinforcement learning solution combined
with a Value of Information-based algorithm for performing optimal control and
selecting the most informative sensors to satisfy the prediction accuracy of
DT. Numerical results demonstrate that the DT platform can offer satisfactory
performance while reducing the communication overhead by up to a factor of
five.
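The estimation side of the pipeline above can be sketched in scalar form: one Kalman predict/update step plus a variance-reduction rule for picking the next sensor as a Value-of-Information proxy. The RL controller and the variational/robust-belief machinery of REVERB are omitted, and all numbers are illustrative.

```python
def kalman_step(x, P, z, R, A=1.0, Q=0.01):
    """One scalar Kalman predict+update cycle: propagate the state x and
    variance P through dynamics A with process noise Q, then correct
    with measurement z of noise variance R."""
    x_pred, P_pred = A * x, A * P * A + Q         # predict
    K = P_pred / (P_pred + R)                     # Kalman gain
    return x_pred + K * (z - x_pred), (1.0 - K) * P_pred

def pick_sensor(P_pred, sensor_noise):
    """Choose the sensor (name -> noise variance R) whose measurement
    leaves the smallest posterior variance -- a simple VoI proxy."""
    def posterior_var(R):
        return (1.0 - P_pred / (P_pred + R)) * P_pred
    return min(sensor_noise, key=lambda s: posterior_var(sensor_noise[s]))
```

Querying only the sensor that shrinks the belief variance most per step is how the DT can keep its prediction accuracy while cutting the communication overhead.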
Latency-sensitive Service Delivery with UAV-Assisted 5G Networks
In this letter, a novel framework is developed to deliver critical, spread-out
URLLC services by deploying unmanned aerial vehicles (UAVs) in an
out-of-coverage area. To this end, the resource optimization problem, i.e., resource
blocks (RBs) and power allocation, and optimal UAV deployment strategy are
studied for UAV-assisted 5G networks to jointly maximize the average sum-rate
and minimize the transmit power of UAV while satisfying the URLLC requirements.
To cope with sporadic URLLC traffic, an efficient online URLLC
traffic prediction model based on Gaussian Process Regression (GPR) is
proposed, which derives the optimal URLLC scheduling and transmit power strategy. The
formulated problem is revealed to be a mixed-integer nonlinear program (MINLP),
which is solved using the introduced successive minimization algorithm.
Finally, simulation results are provided to show the efficiency of our proposed
solution approach.
Comment: Accepted in IEEE Wireless Communications Letters.
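GPR-based traffic prediction can be sketched in plain Python (no numerical libraries): an RBF-kernel GP posterior mean over a few past arrival counts. The kernel, its hyperparameters, and the tiny Gaussian-elimination solver are illustrative simplifications of the letter's predictor.

```python
import math

def rbf(a, b, length=1.0):
    """Squared-exponential (RBF) kernel with an illustrative lengthscale."""
    return math.exp(-((a - b) ** 2) / (2.0 * length ** 2))

def solve(A, y):
    """Solve A x = y by Gauss-Jordan elimination with partial pivoting
    (fine for the tiny systems used here)."""
    n = len(A)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def gpr_predict(times, counts, t_new, noise=1e-4):
    """Posterior mean of a zero-mean GP at t_new, given past (time,
    traffic count) observations with small measurement noise."""
    K = [[rbf(a, b) + (noise if i == j else 0.0)
          for j, b in enumerate(times)] for i, a in enumerate(times)]
    alpha = solve(K, counts)
    return sum(rbf(t_new, t) * a for t, a in zip(times, alpha))
```

Near observed epochs the predictor reproduces the recorded traffic, while far from any data it falls back to the prior mean of zero; a scheduler can reserve RBs and transmit power around the predicted arrivals.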