Multi-Robot Task Allocation and Scheduling with Spatio-Temporal and Energy Constraints
Autonomy in multi-robot systems is bounded by the coordination among the agents. Coordination implies simultaneous task decomposition, task allocation, team formation, task scheduling, and routing, collectively termed task planning. In many real-world applications of multi-robot systems, such as commercial cleaning, delivery systems, warehousing, and inventory management, spatial and temporal constraints, variable execution times, and energy limitations need to be integrated into the planning module. Spatial constraints comprise the locations of the tasks, their reachability, and the structure of the environment; temporal constraints express task completion deadlines. There has been significant research on multi-robot task allocation involving spatio-temporal constraints. However, limited attention has been paid to combining them with team formation and non-instantaneous task execution times. We achieve team formation by including quota constraints, which ensure that the number of robots required to perform a task is scheduled. We introduce and integrate task activation (time) windows with the team effort of multiple robots performing tasks for a given duration. Additionally, while visiting tasks in space, the energy budget limits the robots' operation time. We model energy depletion as a function of time to ensure long-term operation through periodic visits to recharging stations. Research on task planning approaches that combine all these conditions is still lacking. In this thesis, we propose two variants of the Team Orienteering Problem with task activation windows and a limited energy budget to formulate simultaneous task allocation and scheduling as an optimization problem. A complete mixed integer linear programming (MILP) formulation for both variants is presented in this work, implemented using the Gurobi Optimizer and analyzed for scalability.
This work compares different objectives of the formulation, such as maximizing the number of tasks visited, minimizing the total distance travelled, and/or maximizing the reward, to suit various applications. Finally, analysis of the optimal solutions reveals trends in task selection based on travel cost, task completion rewards, the robot's energy level, and the time left until task inactivation.
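As a toy illustration of the scheduling problem the MILP encodes, the sketch below brute-forces a single-robot instance with task activation windows and an energy budget. The task data, 1-D positions, energy model, and unit travel speed are all invented for illustration and are far simpler than the thesis's multi-robot formulation:

```python
from itertools import permutations

# Hypothetical tasks: 1-D position, activation window (open, close),
# execution duration, and completion reward.
tasks = {
    "A": {"pos": 2, "window": (0, 6), "duration": 1, "reward": 5},
    "B": {"pos": 5, "window": (3, 10), "duration": 2, "reward": 8},
    "C": {"pos": 9, "window": (0, 4), "duration": 1, "reward": 4},
}

def evaluate(order, energy_budget=12, speed=1.0):
    """Total reward of visiting tasks in `order`, or None if infeasible."""
    t, pos, energy, reward = 0.0, 0, energy_budget, 0
    for name in order:
        task = tasks[name]
        travel = abs(task["pos"] - pos)
        t += travel / speed              # travel takes time...
        energy -= travel                 # ...and depletes the energy budget
        t = max(t, task["window"][0])    # wait until the window opens
        if t + task["duration"] > task["window"][1] or energy < 0:
            return None                  # window closed or battery empty
        t += task["duration"]            # non-instantaneous execution
        pos = task["pos"]
        reward += task["reward"]
    return reward

# Enumerate every subset ordering and keep the best feasible schedule.
best = max(
    (evaluate(p), p)
    for n in range(len(tasks) + 1)
    for p in permutations(tasks, n)
    if evaluate(p) is not None
)
print(best)  # (13, ('A', 'B')): C's window closes before it can be reached
```

A MILP solver replaces this exponential enumeration with branch-and-bound over binary routing variables, which is what makes the thesis's formulation scale beyond toy instances.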
Receding Horizon Temporal Logic Control for Finite Deterministic Systems
This paper considers receding horizon control of finite deterministic
systems, which must satisfy a high level, rich specification expressed as a
linear temporal logic formula. Under the assumption that time-varying rewards
are associated with states of the system and they can be observed in real-time,
the control objective is to maximize the collected reward while satisfying the
high level task specification. In order to properly react to the changing
rewards, a controller synthesis framework inspired by model predictive control
is proposed, where the rewards are locally optimized at each time-step over a
finite horizon, and the immediate optimal control is applied. By enforcing
appropriate constraints, the infinite trajectory produced by the controller is
guaranteed to satisfy the desired temporal logic formula. Simulation results
demonstrate the effectiveness of the approach.
Comment: Technical report accompanying a paper to be presented at ACC 201
Extending Demand Response to Tenants in Cloud Data Centers via Non-intrusive Workload Flexibility Pricing
Participating in demand response programs is a promising tool for reducing
energy costs in data centers by modulating energy consumption. Towards this
end, data centers can employ a rich set of resource management knobs, such as
workload shifting and dynamic server provisioning. Nonetheless, these knobs may
not be readily available in a cloud data center (CDC) that serves cloud
tenants/users, because workloads in CDCs are managed by the tenants themselves,
who are typically charged under usage-based or flat-rate pricing and often
have no incentive to cooperate with the CDC operator on demand response and
cost saving. To break this "split incentive" hurdle, a few recent
studies have tried market-based mechanisms, such as dynamic pricing, inside
CDCs. However, such mechanisms often rely on complex designs that are hard to
implement and difficult for tenants to cope with. To address this limitation, we
propose a novel incentive mechanism that is not dynamic, i.e., it keeps the
pricing of cloud resources unchanged for a long period. While it charges tenants
under Usage-based Pricing (UP) as today's major cloud operators do, it rewards
tenants in proportion to the deadline lengths they set for completing their
workloads. This new mechanism is called Usage-based Pricing with Monetary
Reward (UPMR). We demonstrate the
effectiveness of UPMR both analytically and empirically. We show that UPMR can
reduce the CDC operator's energy cost by 12.9% while increasing its profit by
4.9%, compared to the state-of-the-art approaches used by today's CDC operators
to charge their tenants.
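The UPMR rule can be illustrated with a minimal billing sketch; the price and reward-rate constants below are invented for illustration, not taken from the paper:

```python
# Hypothetical UPMR billing: tenants pay the usual usage-based price but
# earn a monetary reward proportional to the deadline they declare, which
# gives the operator scheduling flexibility for demand response.
UP_PRICE = 0.05      # $ per server-hour (static usage-based price)
REWARD_RATE = 0.002  # $ per server-hour, per hour of declared deadline

def upmr_bill(server_hours, deadline_hours):
    charge = UP_PRICE * server_hours
    reward = REWARD_RATE * server_hours * deadline_hours
    return round(charge - reward, 2)

# A tenant granting a 10-hour deadline pays less than one demanding 1 hour,
# even though the posted price never changes.
print(upmr_bill(100, 1))   # 4.8
print(upmr_bill(100, 10))  # 3.0
```

The point of the design is that, unlike dynamic pricing, nothing here varies over time: tenants face one fixed price and one fixed reward rate, so the mechanism is simple to reason about.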
Joint Transmission and Energy Transfer Policies for Energy Harvesting Devices with Finite Batteries
One of the main concerns in traditional Wireless Sensor Networks (WSNs) is
energy efficiency. In this work, we analyze two techniques that can extend
network lifetime. The first is Ambient Energy Harvesting (EH), i.e., the
capability of the devices to gather energy from the environment; the
second is Wireless Energy Transfer (ET), which can be used to exchange
energy among devices. We study the combination of these techniques, showing
that they can be used jointly to improve the system performance. We consider a
transmitter-receiver pair, showing how the improvement from ET depends on the
statistics of the energy arrivals and the energy consumption of the devices.
With the aim of maximizing a reward function, e.g., the average transmission
rate, we find performance upper bounds with and without ET, define both online
and offline optimization problems, and present results based on realistic
energy arrivals in indoor and outdoor environments. We show that ET can
significantly improve the system performance even when a sizable fraction of
the transmitted energy is wasted and that, in some scenarios, the online
approach can achieve close-to-optimal performance.
Comment: 16 pages, 12 figure
Markov Decision Processes with Applications in Wireless Sensor Networks: A Survey
Wireless sensor networks (WSNs) consist of autonomous and resource-limited
devices. The devices cooperate to monitor one or more physical phenomena within
an area of interest. WSNs operate as stochastic systems because of randomness
in the monitored environments. For long service time and low maintenance cost,
WSNs require adaptive and robust methods to address data exchange, topology
formulation, resource and power optimization, sensing coverage and object
detection, and security challenges. In these problems, sensor nodes must make
optimized decisions from a set of available strategies to achieve design
goals. This survey reviews numerous applications of the Markov decision process
(MDP) framework, a powerful decision-making tool to develop adaptive algorithms
and protocols for WSNs. Furthermore, various solution methods are discussed and
compared to serve as a guide for using MDPs in WSNs.
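As a concrete instance of the MDP framework the survey reviews, the toy below runs value iteration on an invented duty-cycling problem (sleep vs. sense as a function of battery state); all states, rewards, and transition probabilities are hypothetical:

```python
# Hypothetical sensor-node MDP: in each battery state the node chooses
# "sense" (earns reward, drains energy) or "sleep" (lets the battery recover).
states = ["low", "high"]
actions = ["sleep", "sense"]
transitions = {  # transitions[s][a] = [(next_state, probability), ...]
    "low":  {"sleep": [("high", 0.8), ("low", 0.2)],
             "sense": [("low", 1.0)]},
    "high": {"sleep": [("high", 1.0)],
             "sense": [("low", 0.6), ("high", 0.4)]},
}
rewards = {"low":  {"sleep": 0.0, "sense": 0.5},
           "high": {"sleep": 0.0, "sense": 2.0}}
GAMMA = 0.9

def q_value(s, a, V):
    return rewards[s][a] + GAMMA * sum(p * V[t] for t, p in transitions[s][a])

def value_iteration(eps=1e-6):
    """Iterate the Bellman optimality update until the values converge."""
    V = {s: 0.0 for s in states}
    while True:
        V_new = {s: max(q_value(s, a, V) for a in actions) for s in states}
        if max(abs(V_new[s] - V[s]) for s in states) < eps:
            return V_new
        V = V_new

V = value_iteration()
policy = {s: max(actions, key=lambda a: q_value(s, a, V)) for s in states}
print(policy)  # {'low': 'sleep', 'high': 'sense'}
```

The resulting policy senses only when the battery is high and sleeps to recharge otherwise: exactly the kind of adaptive, long-service-time behavior the survey motivates, here obtained from a dozen lines of value iteration.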