Edge-Cloud Synergy: Unleashing the Potential of Parallel Processing for Big Data Analytics
If an edge-node orchestrator can partition Big Data tasks of variable computational complexity between edge and cloud resources, major reductions in total task completion times can be achieved even at low Wide Area Network (WAN) speeds. The percentage time savings grow with task computational complexity, while low-complexity tasks require higher WAN speeds to benefit. We demonstrate from numerical simulations that low-complexity tasks can also benefit from task partitioning between an edge node and multiple cloud servers. The orchestrator can achieve greater time benefits by rerouting Big Data tasks directly to a single cloud resource if the balance of parameters (WAN speed and the ratio between edge and cloud processing speeds) is favourable.
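The trade-off the abstract describes can be sketched with a toy model. This is not the paper's simulation; the function names, parameters, and the linear transfer/processing model are illustrative assumptions: the edge share runs locally while the cloud share first crosses the WAN and is then processed remotely, and both halves proceed in parallel.

```python
def completion_time(work, edge_speed, cloud_speed, wan_speed, data_size, edge_fraction):
    """Total time when edge_fraction of the work runs on the edge and the
    rest is shipped to the cloud over the WAN (toy linear model)."""
    t_edge = edge_fraction * work / edge_speed
    # The cloud share must first cross the WAN, then be processed remotely.
    t_cloud = (1 - edge_fraction) * (data_size / wan_speed + work / cloud_speed)
    return max(t_edge, t_cloud)  # the two shares execute in parallel

def best_split(work, edge_speed, cloud_speed, wan_speed, data_size, steps=1000):
    """Scan candidate splits; return (completion_time, edge_fraction) of the best."""
    return min(
        (completion_time(work, edge_speed, cloud_speed, wan_speed, data_size, f / steps), f / steps)
        for f in range(steps + 1)
    )

# With a fast WAN, most work migrates to the faster cloud; with a slow WAN,
# the optimal split keeps nearly everything on the edge.
fast_wan = best_split(work=100, edge_speed=1, cloud_speed=10, wan_speed=1000, data_size=100)
slow_wan = best_split(work=100, edge_speed=1, cloud_speed=10, wan_speed=0.001, data_size=100)
```

Under this model, partitioning beats both the edge-only time (`work / edge_speed`) and the cloud-only time whenever the WAN is fast enough, mirroring the abstract's claim that the favourable parameter balance decides between partitioning and rerouting.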
Arboreal Bound Entanglement
In this paper, we discuss the entanglement properties of graph-diagonal
states, with particular emphasis on calculating the threshold for the
transition between the presence and absence of entanglement (i.e. the
separability point). Special consideration is made of the thermal states of
trees, including the linear cluster state. We characterise the type of
entanglement present, and describe the optimal entanglement witnesses and their
implementation on a quantum computer, up to an additive approximation. In the
case of general graphs, we invoke a relation with the partition function of the
classical Ising model, thereby intimating a connection to computational
complexity theoretic tasks. Finally, we show that the entanglement is robust to
some classes of local perturbations.
Comment: 9 pages + appendices, 3 figures
Scheduling Storms and Streams in the Cloud
Motivated by emerging big streaming data processing paradigms (e.g., Twitter
Storm, Streaming MapReduce), we investigate the problem of scheduling graphs
over a large cluster of servers. Each graph is a job, where nodes represent
compute tasks and edges indicate data-flows between these compute tasks. Jobs
(graphs) arrive randomly over time, and upon completion, leave the system. When
a job arrives, the scheduler needs to partition the graph and distribute it
over the servers to satisfy load balancing and cost considerations.
Specifically, neighboring compute tasks in the graph that are mapped to
different servers incur load on the network; thus a mapping of the jobs among
the servers incurs a cost that is proportional to the number of "broken edges".
We propose a low complexity randomized scheduling algorithm that, without
service preemptions, stabilizes the system with graph arrivals/departures; more
importantly, it allows a smooth trade-off between minimizing average
partitioning cost and average queue lengths. Interestingly, to avoid service
preemptions, our approach does not rely on a Gibbs sampler; instead, we show
that the corresponding limiting invariant measure has an interpretation
stemming from a loss system.
Comment: 14 pages
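The partitioning cost in this abstract is proportional to the number of "broken edges", i.e. data-flows whose endpoint tasks land on different servers. A minimal sketch of that cost and the accompanying per-server load (the function names and the dictionary-based graph representation are illustrative, not the paper's notation):

```python
def partition_cost(edges, placement):
    """Count broken edges: data-flows whose endpoints sit on different servers."""
    return sum(1 for u, v in edges if placement[u] != placement[v])

def server_loads(nodes, placement):
    """Tasks hosted per server, for the load-balancing side of the trade-off."""
    loads = {}
    for n in nodes:
        loads[placement[n]] = loads.get(placement[n], 0) + 1
    return loads

# A 4-task path graph mapped onto two servers: cutting the path in the
# middle breaks one edge and balances the load at two tasks per server.
nodes = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3)]
placement = {0: "A", 1: "A", 2: "B", 3: "B"}
```

A scheduler like the one described must weigh this broken-edge count against queue lengths; the sketch only shows the objective being traded off, not the randomized algorithm itself.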
Complexity of scheduling multiprocessor tasks with prespecified processor allocations
We investigate the computational complexity of scheduling multiprocessor tasks with prespecified processor allocations. We consider two criteria: minimizing schedule length and minimizing the sum of the task completion times. In addition, we investigate the complexity of these problems when precedence constraints or release dates are involved.
- …