The Impact of Stealthy Attacks on Smart Grid Performance: Tradeoffs and Implications
The smart grid is envisioned to significantly enhance the efficiency of
energy consumption, by utilizing two-way communication channels between
consumers and operators. For example, operators can opportunistically leverage
the delay tolerance of energy demands in order to balance the energy load over
time, and hence, reduce the total operational cost. This opportunity, however,
comes with security threats, as the grid becomes more vulnerable to
cyber-attacks. In this paper, we study the impact of such malicious
cyber-attacks on the energy efficiency of the grid in a simplified setup. More
precisely, we consider a simple model where the energy demands of the smart
grid consumers are intercepted and altered by an active attacker before they
arrive at the operator, who is equipped with limited intrusion detection
capabilities. We formulate the resulting optimization problems faced by the
operator and the attacker and propose several scheduling and attack strategies
for both parties. Interestingly, our results show that, as opposed to
facilitating cost reduction in the smart grid, increasing the delay tolerance
of the energy demands potentially allows the attacker to force increased costs
on the system. This highlights the need for carefully constructed and robust
intrusion detection mechanisms at the operator.
Comment: Technical report; this work was accepted to IEEE Transactions on Control of Network Systems, 2016. arXiv admin note: substantial text overlap with arXiv:1209.176
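As a toy illustration of the tradeoff the abstract describes (a hypothetical sketch, not the paper's formulation), the following snippet greedily schedules delay-tolerant unit demands to flatten load under a convex cost, and shows how an attacker who rewrites the reported time windows removes that slack and forces a higher realized cost:

```python
# Hypothetical sketch: an operator balances delay-tolerant unit demands over
# T slots to minimize a convex cost sum(load_t^2). An attacker who alters the
# reported windows can undo the balancing; wider true windows (more delay
# tolerance) mean more leverage for the attacker.

def schedule(demands, T):
    """Greedy load balancing: place each demand (arrival, deadline)
    in the currently least-loaded feasible slot."""
    load = [0.0] * T
    for arrival, deadline in demands:
        slot = min(range(arrival, deadline + 1), key=lambda t: load[t])
        load[slot] += 1.0
    return load

def cost(load):
    return sum(x * x for x in load)

T = 6
# True demands: each can be served anywhere in its [arrival, deadline] window.
true_demands = [(0, 5), (0, 5), (1, 5), (2, 5), (3, 5), (4, 5)]
honest = schedule(true_demands, T)

# Attack: shrink every reported window to a single slot, so the operator
# cannot exploit the demands' delay tolerance at all.
attacked = schedule([(a, a) for a, _ in true_demands], T)

print("cost without attack:", cost(honest))    # balanced load -> 6.0
print("cost under attack:  ", cost(attacked))  # concentrated load -> 8.0
```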
Multi-Path Alpha-Fair Resource Allocation at Scale in Distributed Software Defined Networks
The performance of computer networks relies on how bandwidth is shared among
different flows. Fair resource allocation is a challenging problem particularly
when the flows evolve over time. To address this issue, bandwidth sharing
techniques that quickly react to the traffic fluctuations are of interest,
especially in large scale settings with hundreds of nodes and thousands of
flows. In this context, we propose a distributed algorithm based on the
Alternating Direction Method of Multipliers (ADMM) that tackles the multi-path
fair resource allocation problem in a distributed SDN control architecture. Our
ADMM-based algorithm continuously generates a sequence of resource allocation
solutions converging to the fair allocation while always remaining feasible, a
property that standard primal-dual decomposition methods often lack. Thanks to
the distribution of all compute-intensive operations, we demonstrate that we
can handle large instances at scale.
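For intuition about the ADMM decomposition (a minimal single-link, single-path sketch, not the paper's distributed multi-path algorithm), the snippet below solves weighted proportional fairness (alpha = 1) by consensus splitting. Note that the z-iterate is produced by a projection and is therefore feasible at every iteration, the property the abstract highlights:

```python
import math

def project_capacity(w, c):
    """Euclidean projection of w onto the feasible set {z >= 0, sum(z) <= c}."""
    w = [max(v, 0.0) for v in w]
    if sum(w) <= c:
        return w
    # Otherwise project onto the simplex {z >= 0, sum(z) = c}.
    s = sorted(w, reverse=True)
    cum, theta = 0.0, 0.0
    for i, v in enumerate(s, start=1):
        cum += v
        t = (cum - c) / i
        if v - t > 0:
            theta = t
    return [max(v - theta, 0.0) for v in w]

def admm_alpha1(weights, c, rho=1.0, iters=300):
    """Maximize sum_i w_i*log(x_i) s.t. sum_i x_i <= c, via ADMM on the
    split min -sum w_i*log(x_i) + I_feasible(z) with consensus x = z."""
    n = len(weights)
    z = [c / n] * n
    u = [0.0] * n
    for _ in range(iters):
        # x-update has a closed form: rho*x^2 - rho*v*x - w_i = 0, v = z_i - u_i.
        x = [(v + math.sqrt(v * v + 4.0 * w / rho)) / 2.0
             for w, v in zip(weights, (z[i] - u[i] for i in range(n)))]
        # z-update is a projection, so every z-iterate is feasible.
        z = project_capacity([x[i] + u[i] for i in range(n)], c)
        # Scaled dual update.
        u = [u[i] + x[i] - z[i] for i in range(n)]
    return z

print(admm_alpha1(weights=[1, 1, 2, 4], c=8.0))  # tends to [1, 1, 2, 4]
```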
Design of multimedia processor based on metric computation
Media-processing applications, such as signal processing, 2D and 3D graphics
rendering, and image compression, are the dominant workloads in many embedded
systems today. The real-time constraints of these media applications place
taxing demands on today's processors, which must deliver high performance at
low cost, low power, and short design time. To meet those challenges, a fast
and efficient strategy is to upgrade a low-cost general-purpose processor core.
This approach is based on the personalization of a general RISC processor core
according to the target multimedia application's requirements. Thus, if the
extra cost is justified, the general-purpose processor (GPP) core can be
extended with instruction-level coprocessors, coarse-grain dedicated hardware,
ad hoc memories, or new GPP cores. In this way the final design solution is
tailored to the application requirements. The proposed approach is based on
three main steps: the first is the analysis of the targeted application using
efficient metrics; the second is the selection of the appropriate architecture
template according to the first step's results and recommendations; the third
is the architecture generation. The approach is validated on various image and
video algorithms, demonstrating its feasibility.
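A hypothetical sketch of the first step (the metric names and thresholds below are invented for illustration and are not the paper's actual metrics): profile the operation mix of an application kernel, then map the profile to architecture-template recommendations:

```python
# Illustrative only: toy operation-mix metrics driving the choice of a
# RISC-core extension; real metric sets would be far richer.
from collections import Counter

def analyse(trace):
    """Compute an operation-mix profile from a list of executed op names."""
    mix = Counter(trace)
    total = sum(mix.values())
    return {op: count / total for op, count in mix.items()}

def recommend(profile):
    """Map metric values to (hypothetical) architecture-template choices."""
    recs = []
    if profile.get("mac", 0) > 0.3:
        recs.append("coarse-grain MAC/DSP coprocessor")
    if profile.get("load", 0) + profile.get("store", 0) > 0.4:
        recs.append("ad hoc scratchpad memory")
    if profile.get("simd_cand", 0) > 0.2:
        recs.append("instruction-level SIMD coprocessor")
    return recs or ["keep plain GPP core (extra cost not justified)"]

# Toy trace of a filtering kernel: MAC-dominated and memory-heavy.
trace = ["load"] * 30 + ["mac"] * 40 + ["store"] * 15 + ["branch"] * 15
profile = analyse(trace)
print(profile)
print(recommend(profile))
```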
A statistical method for estimating activity uncertainty parameters to improve project forecasting
Just like any physical system, projects have entropy that must be managed by spending energy. The entropy is the project's tendency to move to a state of disorder (schedule delays, cost overruns), and the energy process is an inherent part of any project management methodology. In order to manage the inherent uncertainty of these projects, accurate estimates (for durations, costs, resources, …) are crucial to make informed decisions. Without these estimates, managers have to fall back on their own intuition and experience, which are undoubtedly crucial for making decisions, but are often subject to biases and hard to quantify. This paper builds further on two published calibration methods that aim to extract data from real projects and calibrate them to better estimate the parameters of the probability distributions of activity durations. Both methods rely on the lognormal distribution model to estimate uncertainty in activity durations and perform a sequence of statistical hypothesis tests that take the possible presence of two human biases into account. Based on these two existing methods, a new so-called statistical partitioning heuristic is presented that integrates the best elements of the two methods to further improve the accuracy of estimating the distribution of activity duration uncertainty. A computational experiment has been carried out on an empirical database of 83 projects. The experiment shows that the new statistical partitioning method performs at least as well as, and often better than, the two existing calibration methods. The improvement allows a better quantification of activity duration uncertainty, which eventually leads to a better prediction of the project schedule and more realistic expectations about the project outcomes. Consequently, the project manager will be able to better cope with the inherent uncertainty (entropy) of projects with a minimum of managerial effort (energy).
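As a rough sketch of the calibration idea (not the paper's full statistical partitioning heuristic; the handling of the two human biases is omitted), one can test the log-normality of actual-to-planned duration ratios and, if the test passes, fit the lognormal parameters for use in forecasting:

```python
# Minimal sketch: if actual/planned duration ratios are lognormal, their logs
# should pass a normality test; the fitted mean/std then parameterize the
# activity-duration distributions used for schedule prediction.
import numpy as np
from scipy import stats

def calibrate(actual, planned, alpha=0.05):
    """Fit a lognormal model to duration ratios; flag data that fails."""
    ratios = np.asarray(actual, float) / np.asarray(planned, float)
    logs = np.log(ratios)
    _, p = stats.shapiro(logs)          # H0: log-ratios are normal
    if p < alpha:
        return None                     # needs partitioning / bias correction
    return {"mu": logs.mean(), "sigma": logs.std(ddof=1), "p_value": p}

rng = np.random.default_rng(7)
planned = rng.uniform(5, 20, size=40)
actual = planned * rng.lognormal(mean=0.1, sigma=0.3, size=40)  # mild overruns
print(calibrate(actual, planned))
```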
Performance optimization and energy efficiency of big-data computing workflows
Next-generation e-science is producing colossal amounts of data, now frequently termed Big Data, on the order of terabytes at present and petabytes or even exabytes in the foreseeable future. These scientific applications typically feature data-intensive workflows comprised of moldable parallel computing jobs, such as MapReduce, with intricate inter-job dependencies. The granularity of task partitioning in each moldable job of such big data workflows has a significant impact on workflow completion time, energy consumption, and financial cost if executed in clouds, which remains largely unexplored. This dissertation conducts an in-depth investigation into the properties of moldable jobs and provides an experiment-based validation of the performance model where the total workload of a moldable job increases along with the degree of parallelism. Furthermore, this dissertation conducts rigorous research on workflow execution dynamics in resource sharing environments and explores the interactions between workflow mapping and task scheduling on various computing platforms. A workflow optimization architecture is developed to seamlessly integrate three interrelated technical components, i.e., resource allocation, job mapping, and task scheduling.
Cloud computing provides a cost-effective computing platform for big data workflows where moldable parallel computing models are widely applied to meet stringent performance requirements. Based on the moldable parallel computing performance model, a big-data workflow mapping model is constructed and a workflow mapping problem is formulated to minimize workflow makespan under a budget constraint in public clouds. This dissertation shows this problem to be strongly NP-complete and designs i) a fully polynomial-time approximation scheme for a special case with a pipeline-structured workflow executed on virtual machines of a single class, and ii) a heuristic for a generalized problem with an arbitrary directed acyclic graph-structured workflow executed on virtual machines of multiple classes. The performance superiority of the proposed solution is illustrated by extensive simulation-based results in Hadoop/YARN in comparison with existing workflow mapping models and algorithms.
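To make the problem shape concrete (an illustrative greedy, not the dissertation's FPTAS or heuristic), consider the pipeline special case: choose one VM option per stage so that total time is minimized subject to a cost budget. A simple strategy starts from the cheapest mapping and repeatedly buys the upgrade with the best time saved per extra dollar:

```python
# Illustrative greedy for budget-constrained pipeline mapping.
# stages: per stage, a list of VM options as (time, cost) pairs.
def map_pipeline(stages, budget):
    # Start from the cheapest option of every stage.
    choice = [min(range(len(opts)), key=lambda i: opts[i][1]) for opts in stages]
    spent = sum(stages[s][choice[s]][1] for s in range(len(stages)))
    while True:
        best = None  # (time_saved_per_dollar, stage, option)
        for s, opts in enumerate(stages):
            t0, c0 = opts[choice[s]]
            for i, (t, c) in enumerate(opts):
                if t < t0 and spent - c0 + c <= budget:
                    gain = (t0 - t) / max(c - c0, 1e-9)
                    if best is None or gain > best[0]:
                        best = (gain, s, i)
        if best is None:
            break
        _, s, i = best
        spent += stages[s][i][1] - stages[s][choice[s]][1]
        choice[s] = i
    makespan = sum(stages[s][choice[s]][0] for s in range(len(stages)))
    return choice, makespan, spent

# Three pipeline stages, each with (time, cost) per VM class.
stages = [[(10, 1), (6, 2), (3, 5)],
          [(8, 1), (4, 3)],
          [(12, 2), (7, 4), (5, 6)]]
print(map_pipeline(stages, budget=10))  # e.g. makespan 17 at cost 9
```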
Considering that large-scale workflows for big data analytics have become a main consumer of energy in data centers, this dissertation also delves into the problem of static workflow mapping to minimize the dynamic energy consumption of a workflow request under a deadline constraint in Hadoop clusters, which is shown to be strongly NP-hard. A fully polynomial-time approximation scheme is designed for a special case with a pipeline-structured workflow on a homogeneous cluster, and a heuristic is designed for the generalized problem with an arbitrary directed acyclic graph-structured workflow on a heterogeneous cluster. This problem is further extended to a dynamic version with deadline-constrained MapReduce workflows to minimize dynamic energy consumption in Hadoop clusters. This dissertation proposes a semi-dynamic online scheduling algorithm based on adaptive task partitioning to reduce dynamic energy consumption while meeting performance requirements from a global perspective, and also develops corresponding system modules for algorithm implementation in the Hadoop ecosystem. The performance superiority of the proposed solutions in terms of dynamic energy saving and deadline miss rate is illustrated by extensive simulation results in comparison with existing algorithms, and further validated through real-life workflow implementation and experiments using the Oozie workflow engine in Hadoop/YARN systems.
Privacy-preserving Security Inference Towards Cloud-Edge Collaborative Using Differential Privacy
Cloud-edge collaborative inference approach splits deep neural networks
(DNNs) into two parts that run collaboratively on resource-constrained edge
devices and cloud servers, aiming at minimizing inference latency and
protecting data privacy. However, even if the raw input data from edge devices
is not directly exposed to the cloud, state-of-the-art attacks targeting
collaborative inference are still able to reconstruct the raw private data from
the intermediate outputs of the exposed local models, introducing serious
privacy risks. In this paper, a secure privacy inference framework for
cloud-edge collaboration is proposed, termed CIS, which supports adaptively
partitioning the network according to the dynamically changing network
bandwidth and fully exploits the computational power of edge devices. To
mitigate the impact of the privacy perturbation, CIS provides a way to achieve
differential privacy protection by adding refined noise to the
intermediate-layer feature maps offloaded to the cloud. Meanwhile, given a
total privacy budget, the budget is allocated according to the rank of the
feature maps generated by different convolution filters, which makes the
inference in the cloud robust to the perturbed data and thus effectively
trades off between privacy and availability. Finally, we construct a real
cloud-edge collaborative inference scenario to verify the effectiveness of CIS
in terms of inference latency and model partitioning on resource-constrained
edge devices. Furthermore, the state-of-the-art cloud-edge collaborative
reconstruction attack is used to evaluate the practical availability of the
end-to-end privacy protection mechanism provided by CIS.