    Fast optimization algorithms and the cosmological constant

    Denef and Douglas have observed that in certain landscape models the problem of finding small values of the cosmological constant is a large instance of an NP-hard problem. The number of elementary operations (quantum gates) needed to solve this problem by brute-force search exceeds the estimated computational capacity of the observable universe. Here we describe a way out of this puzzling circumstance: despite being NP-hard, the problem of finding a small cosmological constant can be attacked by more sophisticated algorithms whose performance vastly exceeds brute-force search. In fact, in some parameter regimes the average-case complexity is polynomial. We demonstrate this by explicitly finding a cosmological constant of order 10^{-120} in a randomly generated 10^9-dimensional ADK landscape. Comment: 19 pages, 5 figures
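
    In the ADK model the cosmological constant is, schematically, Λ = Λ_bare + Σ_i η_i ε_i with signs η_i = ±1 to be chosen, so finding a tiny |Λ| is essentially the NP-hard number-partitioning problem. As a hedged illustration of why heuristics can vastly outperform brute force on such instances, the sketch below runs the classic Karmarkar-Karp differencing heuristic on a randomly generated toy instance; this is a textbook heuristic, not necessarily the authors' algorithm, and the instance size and distribution are invented for the example.

import heapq
import random

def karmarkar_karp(values):
    # Largest-differencing heuristic for number partitioning: repeatedly
    # replace the two largest magnitudes by their difference. The final
    # residue equals |sum_i eta_i * values[i]| for some sign choice eta_i;
    # recovering the signs themselves would need extra bookkeeping.
    heap = [-v for v in values]  # max-heap via negation
    heapq.heapify(heap)
    while len(heap) > 1:
        a = -heapq.heappop(heap)
        b = -heapq.heappop(heap)
        heapq.heappush(heap, -(a - b))
    return -heap[0] if heap else 0.0

random.seed(0)
# Toy "landscape": 10^4 vacuum-energy contributions whose signs we may choose.
eps = [random.uniform(0.0, 1.0) for _ in range(10_000)]
print("residual |Lambda| ~", karmarkar_karp(eps))

    Brute force would sift through 2^10000 sign assignments; differencing runs in O(N log N) and, for i.i.d. uniform inputs, leaves a residue shrinking like N^(-Theta(log N)), which is why very small values become reachable at large N.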

    Towards Lattice Quantum Chromodynamics on FPGA devices

    In this paper we describe a single-node, double-precision Field Programmable Gate Array (FPGA) implementation of the Conjugate Gradient algorithm in the context of Lattice Quantum Chromodynamics. As a benchmark of our proposal we numerically invert the Dirac-Wilson operator on a 4-dimensional grid on three Xilinx hardware solutions: the Zynq Ultrascale+ evaluation board, the Alveo U250 accelerator, and the VU13P, the largest device available on the market. In our implementation we separate the software and hardware parts in such a way that the entire multiplication by the Dirac operator is performed in hardware, while the rest of the algorithm runs on the host. We find that the FPGA implementation can offer performance comparable with that obtained using current CPUs or Intel's many-core Xeon Phi accelerators. A possible multi-node FPGA-based system is discussed, and we argue that power-efficient High Performance Computing (HPC) systems can be implemented using FPGA devices only. Comment: 17 pages, 4 figures
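
    A minimal host-side sketch of the same split, assuming the standard Conjugate Gradient recurrence: the operator application (the part the paper offloads to the FPGA) is passed in as a callable, while vector updates and reductions stay on the host. CG requires a Hermitian positive-definite operator, so in lattice practice it is typically applied to the normal operator D†D rather than to D itself; the dense test matrix below is an invented stand-in, not lattice data.

import numpy as np

def conjugate_gradient(apply_op, b, tol=1e-12, max_iter=1000):
    # Standard CG for a Hermitian positive-definite operator A, solving
    # A x = b. Here apply_op plays the role of the Dirac-operator matvec
    # that the paper executes in FPGA hardware; everything else is host code.
    x = np.zeros_like(b)
    r = b - apply_op(x)
    p = r.copy()
    rr = np.vdot(r, r).real
    for _ in range(max_iter):
        Ap = apply_op(p)                       # the offloaded kernel
        alpha = rr / np.vdot(p, Ap).real
        x += alpha * p
        r -= alpha * Ap
        rr_new = np.vdot(r, r).real
        if rr_new < tol ** 2:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x

# Invented stand-in operator: a random symmetric positive-definite matrix.
rng = np.random.default_rng(0)
M = rng.standard_normal((64, 64))
A = M @ M.T + 64.0 * np.eye(64)
b = rng.standard_normal(64)
x = conjugate_gradient(lambda v: A @ v, b)
print("residual norm:", np.linalg.norm(A @ x - b))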

    Dynamic load balancing for the distributed mining of molecular structures

    In molecular biology, it is often desirable to find common properties in large numbers of drug candidates. One family of methods stems from the data mining community, where algorithms for finding frequent graphs have received increasing attention in recent years. However, the computational complexity of the underlying problem and the large amount of data to be explored essentially render sequential algorithms useless. In this paper, we present a distributed approach to the frequent subgraph mining problem to discover interesting patterns in molecular compounds. This problem is characterized by a highly irregular search tree, for which no reliable workload prediction is available. We describe the three main aspects of the proposed distributed algorithm, namely a dynamic partitioning of the search space, a distribution process based on a peer-to-peer communication framework, and a novel receiver-initiated load-balancing algorithm. The effectiveness of the distributed method has been evaluated on the well-known National Cancer Institute's HIV-screening data set, where we were able to show close-to-linear speedup in a network of workstations. The proposed approach also allows for dynamic resource aggregation in a non-dedicated computational environment. These features make it suitable for large-scale, multi-domain, heterogeneous environments, such as computational grids.
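
    The receiver-initiated idea can be sketched in a few lines: an idle worker (the receiver) asks a peer to donate part of its private search frontier, so no workload prediction is required. The toy single-process simulation below illustrates this with an artificial random search tree standing in for the irregular subgraph-mining tree; class names and parameters are invented, not taken from the paper.

import random

class Worker:
    def __init__(self, wid):
        self.wid = wid
        self.frontier = []    # private stack of unexplored search-tree nodes
        self.expanded = 0

    def step(self, peers):
        if not self.frontier:
            # Receiver-initiated balancing: the idle worker picks a peer
            # and takes half of that peer's remaining frontier.
            donor = random.choice([p for p in peers if p is not self])
            half = len(donor.frontier) // 2
            self.frontier = donor.frontier[:half]
            donor.frontier = donor.frontier[half:]
            return
        depth = self.frontier.pop()
        self.expanded += 1
        if depth < 12:
            # Toy irregular tree: random branching mimics the unpredictable
            # growth of the frequent-subgraph search space.
            self.frontier += [depth + 1] * random.randint(0, 3)

random.seed(1)
workers = [Worker(i) for i in range(4)]
workers[0].frontier = [0]             # all work starts at a single worker
while any(w.frontier for w in workers):
    for w in workers:
        w.step(workers)
print([w.expanded for w in workers])  # the load spreads across all workers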

    Extremal Optimization for Graph Partitioning

    Extremal optimization is a new general-purpose method for approximating solutions to hard optimization problems. We study the method in detail by way of the NP-hard graph partitioning problem. We discuss the scaling behavior of extremal optimization, focusing on the convergence of the average run as a function of runtime and system size. The method has a single free parameter, which we determine numerically and justify using a simple argument. Our numerical results demonstrate that on random graphs, extremal optimization maintains consistent accuracy for increasing system sizes, with an approximation error decreasing over runtime roughly as a power law t^(-0.4). On geometrically structured graphs, the scaling of results from the average run suggests that these are far from optimal, with large fluctuations between individual trials. But when only the best runs are considered, results consistent with theoretical arguments are recovered. Comment: 34 pages, RevTex4, 1 table and 20 ps-figures included; related papers available at http://www.physics.emory.edu/faculty/boettcher
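
    A hedged sketch of how tau-EO can be applied to balanced bipartitioning (the single free parameter below is the power-law exponent tau): each vertex's fitness is the fraction of its neighbors on its own side, vertices are ranked from worst to best, a rank k is drawn with probability proportional to k^(-tau), and the chosen vertex is swapped with a random vertex from the other side to keep the partition balanced. The sampling shortcut, parameter values, and test graph are illustrative choices, not the paper's exact setup.

import random

def tau_eo_bipartition(adj, tau=1.4, steps=5000):
    # Extremal-optimization (tau-EO) sketch for balanced bipartitioning.
    n = len(adj)
    side = [i % 2 for i in range(n)]             # balanced initial partition

    def fitness(v):
        # Fraction of v's neighbors on v's own side (higher is better).
        return sum(side[u] == side[v] for u in adj[v]) / (len(adj[v]) or 1)

    def cutsize():
        return sum(side[u] != side[v] for v in range(n) for u in adj[v]) // 2

    best, best_side = cutsize(), side[:]
    for _ in range(steps):
        ranked = sorted(range(n), key=fitness)   # worst fitness first
        u = 1.0 - random.random()                # u in (0, 1]
        k = min(int(u ** (-1.0 / (tau - 1.0))), n)   # P(rank k) ~ k^(-tau)
        v = ranked[k - 1]
        w = random.choice([x for x in range(n) if side[x] != side[v]])
        side[v], side[w] = side[w], side[v]      # balance-preserving swap
        c = cutsize()
        if c < best:
            best, best_side = c, side[:]
    return best, best_side

random.seed(0)
n = 32
adj = [set() for _ in range(n)]
while sum(map(len, adj)) < 4 * n:                # random graph, mean degree ~4
    a, b = random.sample(range(n), 2)
    adj[a].add(b)
    adj[b].add(a)
print("best cut found:", tau_eo_bipartition([sorted(s) for s in adj])[0])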

    RRR: Rank-Regret Representative

    Selecting the best items in a dataset is a common task in data exploration. However, the concept of "best" lies in the eye of the beholder: different users may consider different attributes more important and hence arrive at different rankings. Nevertheless, one can remove "dominated" items and create a "representative" subset of the dataset, comprising the "best" items in it. A Pareto-optimal representative is guaranteed to contain the best item of each possible ranking, but it can be almost as big as the full dataset. A smaller representative can be found if we relax the requirement to include the best item for every possible user and instead just limit the users' "regret". Existing work defines regret as the loss in score incurred by limiting consideration to the representative instead of the full dataset, for any chosen ranking function. However, the score is often not a meaningful number, and users may not understand its absolute value; sometimes small ranges in score can include large fractions of the dataset. In contrast, users do understand the notion of rank ordering. Therefore, we instead use the position of the items in the ranked list to define the regret, and propose the rank-regret representative: the minimal subset of the data containing at least one of the top-k items of any possible ranking function. This problem is NP-complete. We use the geometric interpretation of items to bound their ranks on ranges of functions, and utilize notions from combinatorial geometry to develop effective and efficient approximation algorithms for the problem. Experiments on real datasets demonstrate that we can efficiently find small subsets with small rank-regrets.
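
    A brute-force, hedged approximation of the same object (not the paper's combinatorial-geometric algorithm): sample random linear ranking functions, record the top-k of each, and run greedy set cover until every sampled function has at least one of its top-k items in the chosen subset. All names and parameters below are invented for the illustration.

import random

def rank_regret_representative(points, k=1, n_funcs=500, seed=0):
    # Monte-Carlo stand-in for "every possible ranking function": sample
    # nonnegative weight vectors and record each one's top-k item set.
    rng = random.Random(seed)
    d = len(points[0])
    tops = []
    for _ in range(n_funcs):
        w = [rng.random() for _ in range(d)]
        order = sorted(range(len(points)),
                       key=lambda i: -sum(wj * pj for wj, pj in zip(w, points[i])))
        tops.append(set(order[:k]))
    # Greedy set cover: repeatedly pick the item that is a top-k member
    # for the largest number of still-uncovered sampled functions.
    chosen, uncovered = set(), list(range(n_funcs))
    while uncovered:
        best = max(range(len(points)),
                   key=lambda i: sum(i in tops[f] for f in uncovered))
        chosen.add(best)
        uncovered = [f for f in uncovered if best not in tops[f]]
    return chosen

rng = random.Random(1)
data = [(rng.random(), rng.random()) for _ in range(200)]
print("representative:", sorted(rank_regret_representative(data, k=3)))

    Sampling only approximates "any possible ranking function"; the paper's contribution is to replace it with geometric bounds on item ranks over ranges of functions, yielding guarantees for all linear rankings.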