
    Distributed MST and Broadcast with Fewer Messages, and Faster Gossiping

    We present a distributed minimum spanning tree algorithm with near-optimal round complexity Õ(D + √n) and message complexity Õ(min{n^{3/2}, m}). This is the first algorithm with sublinear message complexity and near-optimal round complexity, and it improves over the recent algorithms of Elkin [PODC'17] and Pandurangan et al. [STOC'17], which have the same round complexity but message complexity Õ(m). Our method also gives the first broadcast algorithm with o(n) time complexity (when that is possible at all, i.e., when D = o(n)) and o(m) messages. Moreover, our method leads to an Õ(√(nD))-round GOSSIP algorithm with bounded-size messages. This is the first such algorithm with a sublinear round complexity.

    Multi-Goal Multi-Agent Path Finding via Decoupled and Integrated Goal Vertex Ordering

    We introduce multi-goal multi-agent path finding (MAPF^{MG}), which generalizes the standard discrete multi-agent path finding (MAPF) problem. While the task in MAPF is to navigate agents in an undirected graph from their starting vertices to one individual goal vertex per agent, MAPF^{MG} assigns each agent multiple goal vertices, and the task is to visit each of them at least once. Solving MAPF^{MG} not only requires finding collision-free paths for the individual agents but also determining the order in which each agent visits its goal vertices so that common objectives, such as the sum-of-costs, are optimized. We suggest two novel algorithms using different paradigms to address MAPF^{MG}: a heuristic search-based algorithm called Hamiltonian-CBS (HCBS) and a compilation-based algorithm built on the SMT paradigm, called SMT-Hamiltonian-CBS (SMT-HCBS). Experimental comparison suggests limitations of the compilation-based approach.
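    The ordering aspect of the problem can be illustrated in isolation. The sketch below, a simplification that ignores inter-agent collisions entirely (the hard part that HCBS and SMT-HCBS actually handle), brute-forces the cheapest goal-visit order for a single agent on an unweighted undirected graph using BFS distances; all function names here are illustrative, not from the paper.

    ```python
    from itertools import permutations
    from collections import deque

    def bfs_dist(adj, src):
        """Single-source shortest-path lengths in an unweighted undirected graph."""
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return dist

    def best_goal_order(adj, start, goals):
        """Brute-force the cheapest order in which one agent visits all its goals."""
        dist = {v: bfs_dist(adj, v) for v in [start] + goals}
        best = None
        for order in permutations(goals):
            cost, cur = 0, start
            for g in order:
                cost += dist[cur][g]
                cur = g
            if best is None or cost < best[0]:
                best = (cost, order)
        return best

    # Path graph 0-1-2-3-4; the agent starts at 2 and must visit 0 and 4.
    adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
    print(best_goal_order(adj, 2, [0, 4]))  # (6, (0, 4))
    ```

    Even this single-agent subproblem is a Hamiltonian-path-style ordering question, which is why the paper's algorithms carry "Hamiltonian" in their names.
    
    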

    Topological Portfolio Selection and Optimization

    Modern portfolio optimization is centered around creating a low-risk portfolio with extensive asset diversification. Following the seminal work of Markowitz, optimal asset allocation can be computed using a constrained optimization model based on empirical covariance. However, covariance is typically estimated from historical lookback observations; it is prone to noise and may inadequately represent future market behavior. As a remedy, information filtering networks from network science can be used to mitigate the noise in empirical covariance estimation and can therefore bring added value to the portfolio construction process. In this paper, we propose the Statistically Robust Information Filtering Network (SR-IFN), which leverages bootstrapping techniques to eliminate unnecessary edges during network formation, further enhancing the network's noise-reduction capability. We apply SR-IFN to index component stock pools in the US, UK, and China to assess its effectiveness. The SR-IFN network is partially disconnected, with isolated nodes representing less-correlated assets, facilitating the selection of peripheral, diversified, and higher-performing portfolios. Performance can be improved further by weighting assets in inverse proportion to their centrality in the resultant network.
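    The final weighting step from the abstract can be sketched as follows. This is a minimal illustration, assuming degree centrality on the filtered network (the paper does not specify which centrality measure it uses), with the network given as a plain adjacency dict:

    ```python
    def inverse_centrality_weights(adjacency):
        """Weight each asset inversely to its degree centrality in the filtered
        network, then normalize to a fully invested portfolio. Isolated nodes
        (the peripheral, less-correlated assets) receive the largest weights."""
        inv = {node: 1.0 / (1 + len(neighbors))  # +1 avoids division by zero for isolated nodes
               for node, neighbors in adjacency.items()}
        total = sum(inv.values())
        return {node: w / total for node, w in inv.items()}

    # Toy filtered network: 'A' is central, 'D' is an isolated node.
    net = {"A": ["B", "C"], "B": ["A"], "C": ["A"], "D": []}
    w = inverse_centrality_weights(net)
    assert abs(sum(w.values()) - 1.0) < 1e-12
    assert w["D"] > w["A"]  # the peripheral asset gets more weight
    ```

    The normalization keeps the portfolio fully invested while shifting allocation toward the periphery of the network, in line with the diversification rationale above.
    
    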

    A Deterministic Algorithm for the MST Problem in Constant Rounds of Congested Clique

    In this paper, we show that the Minimum Spanning Tree problem can be solved \emph{deterministically} in O(1) rounds of the Congested Clique model. In the Congested Clique model, there are n players that perform computation in synchronous rounds. Each round consists of a phase of local computation and a phase of communication, in which each pair of players is allowed to exchange O(log n)-bit messages. The study of this model began with the MST problem: in the paper by Lotker et al. [SPAA'03, SICOMP'05] that defines the Congested Clique model, the authors give a deterministic O(log log n)-round algorithm that improved over a trivial O(log n)-round adaptation of Borůvka's algorithm. There was a sequence of gradual improvements to this result: an O(log log log n)-round algorithm by Hegeman et al. [PODC'15], an O(log* n)-round algorithm by Ghaffari and Parter [PODC'16], and an O(1)-round algorithm by Jurdziński and Nowicki [SODA'18]. All those algorithms were randomized, however, which left open the question of whether any deterministic o(log log n)-round algorithm for the Minimum Spanning Tree problem exists. Our result resolves this question and establishes that O(1) rounds suffice to solve the MST problem in the Congested Clique model, even if we are not allowed to use any randomness. Furthermore, the amount of communication needed by the algorithm makes it applicable to some variants of the MPC model.
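    For reference, the "trivial O(log n)-round adaptation" mentioned above is based on Borůvka's classic algorithm: in each phase, every component selects its lightest outgoing edge, so the number of components at least halves per phase. A sequential sketch (assuming distinct edge weights, which makes the per-phase merges safe):

    ```python
    def boruvka_mst(n, edges):
        """Sequential Borůvka's algorithm. Each phase, every component picks its
        lightest outgoing edge and merges along it; components at least halve
        each phase, giving O(log n) phases. edges: (weight, u, v), distinct weights."""
        parent = list(range(n))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x

        mst, components = [], n
        while components > 1:
            # Lightest edge leaving each component.
            cheapest = {}
            for w, u, v in edges:
                ru, rv = find(u), find(v)
                if ru != rv:
                    for r in (ru, rv):
                        if r not in cheapest or w < cheapest[r][0]:
                            cheapest[r] = (w, u, v)
            # Merge components along the selected edges.
            for w, u, v in cheapest.values():
                ru, rv = find(u), find(v)
                if ru != rv:
                    parent[ru] = rv
                    mst.append((w, u, v))
                    components -= 1
        return sorted(mst)

    # 4-cycle with a chord; the MST uses the edges of weight 1, 2, 3.
    edges = [(1, 0, 1), (2, 1, 2), (3, 2, 3), (4, 3, 0), (5, 0, 2)]
    mst = boruvka_mst(4, edges)
    print(sum(w for w, _, _ in mst))  # 6
    ```

    In the Congested Clique, each such phase can be simulated in O(1) rounds, which yields the O(log n)-round baseline that the results discussed above successively improved down to O(1).
    
    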

    Network Filtering of Spatial-temporal GNN for Multivariate Time-series Prediction

    We propose an architecture for multivariate time-series prediction that integrates a spatial-temporal graph neural network with a filtering module that reduces the inverse correlation matrix to a sparse network structure. In contrast with existing sparsification methods adopted in graph neural networks, our model explicitly leverages time-series filtering to overcome the low signal-to-noise ratio typical of complex-systems data. We present a set of experiments in which we predict future sales volume from a synthetic sales-volume time-series dataset. The proposed spatial-temporal graph neural network outperforms baseline approaches with no graphical information, with fully connected, disconnected, and unfiltered graphs, as well as a state-of-the-art spatial-temporal GNN. Comparison with the Diffusion Convolutional Recurrent Neural Network (DCRNN) suggests that, by combining an (otherwise inferior) GNN with graph sparsification and filtering, one can achieve comparable or better efficacy than the state-of-the-art in multivariate time-series regression.
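    The idea of filtering an inverse correlation matrix into a sparse graph can be sketched in a few lines. This is a hypothetical stand-in for the paper's filtering module, not its actual method: it simply keeps the strongest off-diagonal partial correlations per variable (a top-k heuristic of my choosing, with `keep` an illustrative parameter).

    ```python
    import numpy as np

    def sparsify_precision(returns, keep=2):
        """Estimate the inverse correlation (precision) matrix from observations
        (rows = time steps, columns = variables) and keep only the `keep`
        strongest off-diagonal entries per variable, yielding a sparse
        adjacency matrix suitable for feeding a graph neural network."""
        corr = np.corrcoef(returns, rowvar=False)
        prec = np.linalg.inv(corr)
        n = prec.shape[0]
        adj = np.zeros((n, n), dtype=bool)
        for i in range(n):
            strength = np.abs(prec[i]).copy()
            strength[i] = 0.0                    # ignore the diagonal
            for j in np.argsort(strength)[-keep:]:
                adj[i, j] = adj[j, i] = True     # symmetrize the graph
        return adj

    rng = np.random.default_rng(0)
    x = rng.standard_normal((500, 5))            # 500 observations, 5 variables
    adj = sparsify_precision(x, keep=2)
    assert adj.shape == (5, 5) and not adj.diagonal().any()
    ```

    Large entries of the precision matrix correspond to strong partial correlations between pairs of variables, so thresholding it retains direct dependencies while discarding noise-dominated edges, which is the motivation the abstract gives for filtering before the GNN.
    
    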