ACM|SPAA '24: Proceedings of the 36th ACM Symposium on Parallelism in Algorithms and Architectures
Abstract
SPAA '24, June 17–21, 2024, Nantes, France

We present an O(1)-round fully-scalable deterministic massively parallel algorithm for computing the min-plus matrix multiplication of unit-Monge matrices. We use this to derive an O(log n)-round fully-scalable massively parallel algorithm for solving the exact longest increasing subsequence (LIS) problem. In the fully-scalable MPC regime, this result substantially improves on the previously known algorithm with O(log^4 n) round complexity, and matches the round complexity of the best known algorithm for computing a (1+ε)-approximation of LIS.
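For context, the LIS problem the abstract targets is classically solved sequentially in O(n log n) time via patience sorting. The sketch below illustrates that sequential baseline only, not the paper's massively parallel algorithm; the function name `lis_length` is chosen here for illustration.

```python
import bisect

def lis_length(seq):
    """Classical O(n log n) sequential LIS length via patience sorting.

    tails[k] holds the smallest possible tail element of a strictly
    increasing subsequence of length k + 1 found so far; tails stays
    sorted, so binary search locates each update point.
    """
    tails = []
    for x in seq:
        # Leftmost position with tails[i] >= x (strict increase).
        i = bisect.bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)   # x extends the longest subsequence
        else:
            tails[i] = x      # x improves (lowers) an existing tail
    return len(tails)
```

For example, `lis_length([3, 1, 4, 1, 5, 9, 2, 6])` returns 4 (one witness is 3, 4, 5, 9). The paper's contribution is achieving the exact answer in O(log n) MPC rounds, which this sequential routine does not address.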