
    Phase transitions in project scheduling.

    The analysis of the complexity of combinatorial optimization problems has led to the distinction between problems which are solvable in a polynomially bounded amount of time (classified in P) and problems which are not (classified in NP). This implies that the problems in NP are hard to solve whereas the problems in P are not. However, this analysis is based on worst-case scenarios. The fact that a decision problem is shown to be NP-complete or the fact that an optimization problem is shown to be NP-hard implies that, in the worst case, solving it is very hard. Recent computational results obtained with a well-known NP-hard problem, namely the resource-constrained project scheduling problem, indicate that many instances are actually easy to solve. These results are in line with those recently obtained by researchers in the area of artificial intelligence, which show that many NP-complete problems exhibit so-called phase transitions, resulting in a sudden and dramatic change of computational complexity based on one or more order parameters that are characteristic of the system as a whole. In this paper we provide evidence for the existence of phase transitions in various resource-constrained project scheduling problems. We discuss the use of network complexity measures and resource parameters as potential order parameters. We show that while the network complexity measures seem to reveal continuous easy-hard or hard-easy phase transitions, the resource parameters exhibit an easy-hard-easy transition behaviour.
    Keywords: Networks; Problems; Scheduling; Algorithms
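
    The easy-hard-easy pattern described above is typically probed empirically: generate random instances while sweeping a candidate order parameter (here a resource-strength setting), solve each instance exactly, and record how solution effort varies. The sketch below only illustrates the shape of such an experiment; `solve_rcpsp` and `random_instance` are hypothetical placeholders, not the instance generator or solver used in the paper.

```python
import random
import time

def solve_rcpsp(instance):
    """Hypothetical stand-in for an exact RCPSP solver (e.g. a branch-and-bound
    code); it only simulates work here so the harness runs end to end."""
    time.sleep(0.001)
    return 0  # a real solver would return the optimal makespan

def random_instance(n_activities, resource_strength):
    """Toy instance generator; resource_strength in [0, 1] plays the role of
    the candidate order parameter (0 = very tight capacity, 1 = unconstrained)."""
    return {
        "durations": [random.randint(1, 10) for _ in range(n_activities)],
        "demands": [random.randint(1, 5) for _ in range(n_activities)],
        "resource_strength": resource_strength,
    }

# Sweep the order parameter and record average solve time per setting;
# an easy-hard-easy transition shows up as a peak in the middle of the sweep.
for rs in [i / 10 for i in range(11)]:
    times = []
    for _ in range(20):
        instance = random_instance(n_activities=30, resource_strength=rs)
        start = time.perf_counter()
        solve_rcpsp(instance)
        times.append(time.perf_counter() - start)
    print(f"RS = {rs:.1f}: mean solve time {sum(times) / len(times):.4f} s")
```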

    Isomorphism Checking for Symmetry Reduction

    In this paper, we show how isomorphism checking can be used as an effective technique for symmetry reduction. Reduced state spaces are equivalent to the original ones under a strong notion of bisimilarity which preserves the multiplicity of outgoing transitions, and therefore also preserves stochastic temporal logics. We have implemented this in a setting where states are arbitrary graphs. Since no efficiently computable canonical representation is known for arbitrary graphs modulo isomorphism, we define an isomorphism-predicting hash function on the basis of an existing partition refinement algorithm. As an example, we report a factorial state space reduction on a model of an ad-hoc network connectivity protocol.
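
    The paper's hash function is built on a specific partition refinement algorithm; as a rough illustration of the idea, the sketch below uses plain colour refinement (1-dimensional Weisfeiler-Leman) to hash arbitrary graphs. Isomorphic graphs always receive the same hash, so unequal hashes certify non-isomorphism, while equal hashes are only a prediction that must be confirmed by a full isomorphism check. The function names are illustrative and not taken from the paper or its implementation.

```python
from collections import Counter

def refinement_hash(adj):
    """Colour-refinement hash of a graph given as {node: iterable of neighbours}.
    The returned value is invariant under graph isomorphism."""
    nodes = list(adj)
    colour = {v: 0 for v in nodes}  # start from the uniform colouring
    for _ in range(len(nodes)):     # refinement stabilises within |V| rounds
        signature = {
            v: (colour[v], tuple(sorted(colour[w] for w in adj[v])))
            for v in nodes
        }
        # Relabel the signatures with small integers so colours stay comparable.
        palette = {sig: i for i, sig in enumerate(sorted(set(signature.values())))}
        refined = {v: palette[signature[v]] for v in nodes}
        if refined == colour:       # stable partition reached
            break
        colour = refined
    # The multiset of stable colours (not the node labels) is the invariant.
    return hash(tuple(sorted(Counter(colour.values()).items())))

# Two isomorphic triangles hash identically; a 3-node path hashes differently.
g1 = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
g2 = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
g3 = {0: [1], 1: [0, 2], 2: [1]}
print(refinement_hash(g1) == refinement_hash(g2))  # True
print(refinement_hash(g1) == refinement_hash(g3))  # False
```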

    Gibbs distributions for random partitions generated by a fragmentation process

    In this paper we study random partitions of $\{1,\dots,n\}$, where every cluster of size $j$ can be in any of $w_j$ possible internal states. The Gibbs$(n,k,w)$ distribution is obtained by sampling uniformly among such partitions with $k$ clusters. We provide conditions on the weight sequence $w$ allowing construction of a partition-valued random process where at step $k$ the state has the Gibbs$(n,k,w)$ distribution, so the partition is subject to irreversible fragmentation as time evolves. For a particular one-parameter family of weight sequences $w_j$, the time-reversed process is the discrete Marcus-Lushnikov coalescent process with affine collision rate $K_{i,j} = a + b(i+j)$ for some real numbers $a$ and $b$. Under further restrictions on $a$ and $b$, the fragmentation process can be realized by conditioning a Galton-Watson tree with suitable offspring distribution to have $n$ nodes, and cutting the edges of this tree by random sampling of edges without replacement, to partition the tree into a collection of subtrees. Suitable offspring distributions include the binomial, negative binomial and Poisson distributions.
    Comment: 38 pages, 2 figures, version considerably modified. To appear in the Journal of Statistical Physics
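
    Written out, the Gibbs$(n,k,w)$ distribution sampled above takes the standard Gibbs-partition form (a reconstruction from the description in the abstract, with notation chosen here rather than taken from the paper):

```latex
% Probability of a particular partition of {1,...,n} into k clusters
% B_1,...,B_k, where a cluster of size j carries weight w_j:
\[
  \mathbb{P}\bigl(\Pi_{n,k} = \{B_1,\dots,B_k\}\bigr)
    = \frac{\prod_{i=1}^{k} w_{|B_i|}}{B_{n,k}(w)},
  \qquad
  B_{n,k}(w) = \sum_{\{A_1,\dots,A_k\}\,\vdash\,\{1,\dots,n\}} \prod_{i=1}^{k} w_{|A_i|},
\]
% where the normalising constant B_{n,k}(w), a partial Bell polynomial in the
% weights, sums over all partitions of {1,...,n} into exactly k clusters.
```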

    Expander $\ell_0$-Decoding

    We introduce two new algorithms, Serial-$\ell_0$ and Parallel-$\ell_0$, for solving a large underdetermined linear system of equations $y = Ax \in \mathbb{R}^m$ when it is known that $x \in \mathbb{R}^n$ has at most $k < m$ nonzero entries and that $A$ is the adjacency matrix of an unbalanced left $d$-regular expander graph. The matrices in this class are sparse and allow a highly efficient implementation. A number of algorithms have been designed to work exclusively under this setting, composing the branch of combinatorial compressed sensing (CCS). Serial-$\ell_0$ and Parallel-$\ell_0$ iteratively minimise $\|y - A\hat{x}\|_0$ by successfully combining two desirable features of previous CCS algorithms: the information-preserving strategy of ER, and the parallel updating mechanism of SMP. We are able to link these elements and guarantee convergence in $\mathcal{O}(dn \log k)$ operations by assuming that the signal is dissociated, meaning that all of the $2^k$ subset sums of the support of $x$ are pairwise different. However, we observe empirically that the signal need not be exactly dissociated in practice. Moreover, we observe Serial-$\ell_0$ and Parallel-$\ell_0$ to be able to solve large-scale problems with a larger fraction of nonzeros than other algorithms when the number of measurements is substantially less than the signal length; in particular, they are able to reliably solve for a $k$-sparse vector $x \in \mathbb{R}^n$ from $m$ expander measurements with $n/m = 10^3$ and $k/m$ up to four times greater than what is achievable by $\ell_1$-regularization from dense Gaussian measurements. Additionally, Serial-$\ell_0$ and Parallel-$\ell_0$ are observed to solve large problem sizes in substantially less time than other algorithms for compressed sensing. In particular, Parallel-$\ell_0$ is structured to take advantage of massively parallel architectures.
    Comment: 14 pages, 10 figures
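
    The setting assumed throughout the abstract can be made concrete with a short sketch: build the adjacency matrix of a random left $d$-regular bipartite graph, plant a dissociated $k$-sparse signal, and form the underdetermined measurements $y = Ax$. The helper names below (`left_d_regular_matrix`, `is_dissociated`) are illustrative and unrelated to the paper's code, and the brute-force dissociation check is only feasible for small $k$.

```python
from itertools import combinations
import random

import numpy as np

def left_d_regular_matrix(m, n, d, seed=0):
    """m x n binary matrix with exactly d ones per column, i.e. the adjacency
    matrix of a left d-regular bipartite graph (an expander with high
    probability for suitable m, n and d)."""
    rng = np.random.default_rng(seed)
    A = np.zeros((m, n), dtype=np.int8)
    for col in range(n):
        rows = rng.choice(m, size=d, replace=False)
        A[rows, col] = 1
    return A

def is_dissociated(values, tol=1e-9):
    """Check that all 2^k subset sums of `values` are pairwise different."""
    sums = sorted(sum(s) for r in range(len(values) + 1)
                  for s in combinations(values, r))
    return all(b - a > tol for a, b in zip(sums, sums[1:]))

# Example dimensions: n = 1000 unknowns, m = 100 measurements, d = 7, k = 10.
n, m, d, k = 1000, 100, 7, 10
A = left_d_regular_matrix(m, n, d)
x = np.zeros(n)
support = random.Random(0).sample(range(n), k)
x[support] = [2.0 ** i for i in range(k)]  # powers of two are dissociated
y = A @ x                                  # the underdetermined system y = Ax

print(is_dissociated(x[support].tolist())) # True
print(A.shape, int(A[:, 0].sum()))         # (100, 1000) and d = 7 per column
```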