Markov Decision Processes with Applications in Wireless Sensor Networks: A Survey
Wireless sensor networks (WSNs) consist of autonomous and resource-limited
devices. The devices cooperate to monitor one or more physical phenomena within
an area of interest. WSNs operate as stochastic systems because of randomness
in the monitored environments. For long service time and low maintenance cost,
WSNs require adaptive and robust methods to address data exchange, topology
formulation, resource and power optimization, sensing coverage and object
detection, and security challenges. In these problems, sensor nodes must make
optimized decisions from a set of accessible strategies to achieve design
goals. This survey reviews numerous applications of the Markov decision process
(MDP) framework, a powerful decision-making tool to develop adaptive algorithms
and protocols for WSNs. Furthermore, various solution methods are discussed and
compared to serve as a guide for using MDPs in WSNs.
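The kind of MDP-based decision making the survey covers can be illustrated with a minimal value-iteration sketch. The two-state battery model, its transition probabilities, and its rewards below are hypothetical, chosen only to show the mechanics of solving an MDP, not taken from the survey:

```python
# Toy MDP for a sensor node choosing between "sense" (earn detection reward,
# risk draining the battery) and "sleep" (earn nothing, chance to recharge).
# States, transitions, and rewards are illustrative assumptions.
states = ["high_battery", "low_battery"]
actions = ["sense", "sleep"]

# P[(s, a)] -> list of (next_state, probability); R[(s, a)] -> immediate reward.
P = {
    ("high_battery", "sense"): [("high_battery", 0.7), ("low_battery", 0.3)],
    ("high_battery", "sleep"): [("high_battery", 1.0)],
    ("low_battery",  "sense"): [("low_battery", 1.0)],
    ("low_battery",  "sleep"): [("high_battery", 0.5), ("low_battery", 0.5)],
}
R = {
    ("high_battery", "sense"): 2.0,
    ("high_battery", "sleep"): 0.0,
    ("low_battery",  "sense"): 0.5,
    ("low_battery",  "sleep"): 0.0,
}

GAMMA = 0.9  # discount factor

def value_iteration(tol=1e-6):
    """Iterate the Bellman optimality update until values converge."""
    V = {s: 0.0 for s in states}
    while True:
        V_new = {
            s: max(
                R[(s, a)] + GAMMA * sum(p * V[s2] for s2, p in P[(s, a)])
                for a in actions
            )
            for s in states
        }
        if max(abs(V_new[s] - V[s]) for s in states) < tol:
            return V_new
        V = V_new

V = value_iteration()
# Greedy policy extraction from the converged value function.
policy = {
    s: max(
        actions,
        key=lambda a: R[(s, a)] + GAMMA * sum(p * V[s2] for s2, p in P[(s, a)]),
    )
    for s in states
}
```

In this toy instance the optimal policy senses when the battery is high and sleeps when it is low, which is exactly the adaptive energy-aware behavior the surveyed WSN applications seek.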
Task Runtime Prediction in Scientific Workflows Using an Online Incremental Learning Approach
Many algorithms in workflow scheduling and resource provisioning rely on the
performance estimation of tasks to produce a scheduling plan. A profiler that
is capable of modeling the execution of tasks and predicting their runtime
accurately, therefore, becomes an essential part of any Workflow Management
System (WMS). With the emergence of multi-tenant Workflow as a Service (WaaS)
platforms that use clouds for deploying scientific workflows, task runtime
prediction becomes more challenging because it requires the processing of a
significant amount of data in a near real-time scenario while dealing with the
performance variability of cloud resources. Hence, relying on methods such as
profiling tasks' execution data using basic statistical description (e.g.,
mean, standard deviation) or batch offline regression techniques to estimate
the runtime may not be suitable for such environments. In this paper, we
propose an online incremental learning approach to predict the runtime of tasks
in scientific workflows in clouds. To improve the performance of the
predictions, we harness fine-grained resources monitoring data in the form of
time-series records of CPU utilization, memory usage, and I/O activities that
are reflecting the unique characteristics of a task's execution. We compare our
solution to a state-of-the-art approach that exploits the resources monitoring
data based on regression machine learning technique. From our experiments, the
proposed strategy improves performance, reducing the prediction error by up to
29.89% compared to the state-of-the-art solutions.
Comment: Accepted for presentation at the main conference track of the 11th
IEEE/ACM International Conference on Utility and Cloud Computing
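The core idea of updating a runtime predictor one task at a time, rather than batch-retraining, can be sketched with plain stochastic gradient descent on a linear model. The feature set (CPU utilization, memory, I/O rate), learning rate, and class name below are illustrative assumptions, not the paper's exact model:

```python
# Minimal sketch of online incremental task-runtime prediction (hypothetical
# features and model; the paper's approach is richer). The predictor is
# updated with one SGD step per completed task, so it adapts to cloud
# performance variability without offline batch retraining.

class OnlineRuntimePredictor:
    def __init__(self, n_features, lr=0.01):
        self.w = [0.0] * n_features  # one weight per monitored resource feature
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        """Predicted runtime (seconds) for feature vector x."""
        return sum(wi * xi for wi, xi in zip(self.w, x)) + self.b

    def update(self, x, runtime):
        """One incremental SGD step on squared error for a finished task."""
        err = self.predict(x) - runtime
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]
        self.b -= self.lr * err

# Usage: features might be (mean CPU utilization, memory GB, I/O MB/s),
# streamed to the profiler as tasks complete.
model = OnlineRuntimePredictor(n_features=3)
stream = [((0.8, 2.0, 1.5), 120.0), ((0.4, 1.0, 0.5), 60.0)] * 200
for features, observed_runtime in stream:
    model.update(features, observed_runtime)
```

The design point is that `update` costs O(features) per task, which is what makes the approach viable in the near real-time, multi-tenant WaaS setting the abstract describes.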
Control and Communication Protocols that Enable Smart Building Microgrids
Recent communication, computation, and technology advances coupled with
climate change concerns have transformed the near future prospects of
electricity transmission, and, more notably, distribution systems and
microgrids. Distributed resources (wind and solar generation, combined heat and
power) and flexible loads (storage, computing, EV, HVAC) make it imperative to
increase investment and improve operational efficiency. Commercial and
residential buildings, being the largest energy consumption group among
flexible loads in microgrids, have the largest potential and flexibility to
provide demand side management. Recent advances in networked systems and the
anticipated breakthroughs of the Internet of Things will enable significant
advances in the demand response capabilities of intelligent networks of
power-consuming devices such as HVAC components, water heaters, and buildings.
In this paper, a new operating framework, called packetized direct load control
(PDLC), is proposed based on the notion of quantization of energy demand. This
control protocol is built on top of two communication protocols that carry
either complete or binary information regarding the operation status of the
appliances. We discuss the optimal demand side operation for both protocols and
analytically derive the performance differences between the protocols. We
propose an optimal reservation strategy for traditional and renewable energy
for the PDLC in both day-ahead and real-time markets. Finally, we discuss the
fundamental trade-off between achieving controllability and endowing
flexibility.
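The quantization-of-demand idea behind PDLC can be sketched as appliances requesting fixed-size "energy packets" that a controller admits up to a reserved capacity; this also shows the binary-information protocol, where each appliance reports only an on/off request bit. Packet size, capacity, and the request model below are hypothetical illustrations, not the paper's parameters:

```python
# Toy sketch of packetized direct load control (PDLC): demand is quantized
# into fixed-size packets, and a controller grants packet requests until the
# capacity reserved in the day-ahead market is exhausted. All numbers are
# illustrative assumptions.
import random

PACKET_KW = 1.0    # the quantum of demand: each granted packet draws 1 kW
CAPACITY_KW = 5.0  # capacity reserved ahead of time for this control period

def pdlc_round(requests, capacity_kw=CAPACITY_KW):
    """Grant one-packet requests first-come-first-served within capacity.

    requests: appliance ids, each asking for one packet (a single on/off
    bit of information, as in the binary-information protocol).
    Returns (granted, deferred).
    """
    max_packets = int(capacity_kw // PACKET_KW)
    return requests[:max_packets], requests[max_packets:]

random.seed(0)
appliances = [f"hvac_{i}" for i in range(8)]
# Each appliance independently requests a packet with some probability.
requests = [a for a in appliances if random.random() < 0.8]
granted, deferred = pdlc_round(requests)
```

Deferred appliances simply retry in the next period, which is how quantization converts a continuous control problem into admission control over discrete packets.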