    A hybrid breakout local search and reinforcement learning approach to the vertex separator problem

    The Vertex Separator Problem (VSP) is an NP-hard problem that arises in several important domains and applications. In this paper, we present an improved Breakout Local Search for the VSP, named BLS-RLE. The distinguishing feature of BLS-RLE is a new parameter control mechanism that draws on ideas from reinforcement learning to make an interdependent decision on the number and type of perturbation moves. The mechanism complies with the principle “intensification first, minimal diversification only if needed”, and uses a dedicated sampling strategy for rapid convergence towards a limited set of parameter values that appear most suitable for the given state of the search. Extensive experimental evaluations and statistical comparisons on a wide range of benchmark instances show a significant performance improvement of the proposed algorithm over the existing BLS algorithm for the VSP. Out of the 422 tested instances, BLS-RLE attained the best-known solution in 93.8% of cases, around 20% more than the existing BLS. In addition, we provide detailed analyses that evaluate the importance of the key elements of the proposed method and justify the degree of diversification introduced during perturbation.
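
    The parameter-control idea lends itself to a compact illustration. Below is a minimal, hypothetical epsilon-greedy sketch of reinforcement-learning-style selection of a joint (perturbation strength, move type) pair, in the spirit of “intensification first, minimal diversification only if needed”; it is not the paper's actual BLS-RLE mechanism, whose sampling strategy and reward definition differ.

        import random

        class PerturbationController:
            """Epsilon-greedy choice of a (strength, move_type) perturbation pair.
            A sketch of RL-based parameter control, not the BLS-RLE mechanism."""

            def __init__(self, strengths, move_types, epsilon=0.1):
                self.arms = [(s, t) for s in strengths for t in move_types]
                self.epsilon = epsilon
                self.value = {arm: 0.0 for arm in self.arms}  # running reward estimates
                self.count = {arm: 0 for arm in self.arms}

            def select(self):
                # Exploit the best-scoring pair most of the time; diversify rarely.
                if random.random() < self.epsilon:
                    return random.choice(self.arms)
                return max(self.arms, key=lambda arm: self.value[arm])

            def update(self, arm, reward):
                # Incremental average of the reward (e.g. objective improvement).
                self.count[arm] += 1
                self.value[arm] += (reward - self.value[arm]) / self.count[arm]

        # Hypothetical usage inside a local-search restart loop:
        ctrl = PerturbationController(strengths=[1, 2, 4], move_types=["swap", "shift"])
        arm = ctrl.select()            # pick a perturbation for this escape attempt
        ctrl.update(arm, reward=3.0)   # feed back the observed improvement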

    Constructing a unifying theory of dynamic programming DCOP algorithms via the generalized distributive law

    In this paper we propose a novel message-passing algorithm, the so-called Action-GDL, as an extension of the generalized distributive law (GDL) to efficiently solve distributed constraint optimization problems (DCOPs). Action-GDL provides a unifying perspective on several dynamic programming DCOP algorithms that are based on GDL, such as the DPOP and DCPOP algorithms. We empirically show how Action-GDL, using a novel distributed post-processing heuristic, can outperform DCPOP, and by extension DPOP, even when the latter uses the best arrangement provided by multiple state-of-the-art heuristics.
    Work funded by IEA (TIN2006-15662-C02-01), AT (CONSOLIDER CSD2007-0022, INGENIO 2010) and EVE (TIN2009-14702-C02-01 and 02). Vinyals is supported by the Spanish Ministry of Education (FPU grant AP2006-04636). Peer Reviewed
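
    A toy example makes the GDL connection concrete. The sketch below performs a single max-sum message pass over a two-variable utility, the elementary dynamic programming step that GDL-based algorithms such as DPOP organise along a tree of agents; the domains and utility table are invented for illustration, and this is not the Action-GDL implementation.

        # Two Boolean variables and one utility table standing in for a DCOP constraint.
        domains = {"x1": [0, 1], "x2": [0, 1]}
        table = {(0, 0): 3, (0, 1): 1, (1, 0): 0, (1, 1): 5}

        def utility(v1, v2):
            return table[(v1, v2)]

        # Leaf x2 sends x1 a message: for each value of x1, the best utility over x2.
        # "max" here plays the role of the semiring combination in GDL.
        msg_to_x1 = {v1: max(utility(v1, v2) for v2 in domains["x2"])
                     for v1 in domains["x1"]}

        # x1 decides from the message; x2 then reconstructs its best response.
        best_x1 = max(domains["x1"], key=lambda v1: msg_to_x1[v1])
        best_x2 = max(domains["x2"], key=lambda v2: utility(best_x1, v2))
        print(best_x1, best_x2, utility(best_x1, best_x2))  # -> 1 1 5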

    Scalable Graph Algorithms using Practically Efficient Data Reductions

    Enabling Scalability: Graph Hierarchies and Fault Tolerance

    In this dissertation, we explore approaches to two techniques for building scalable algorithms. First, we look at different graph problems and show how to exploit an input graph's inherent hierarchy to obtain scalable graph algorithms. The second technique takes a step back from concrete algorithmic problems: we consider the case of node failures in large distributed systems and present techniques to quickly recover from them.

    In the first part of the dissertation, we investigate how hierarchies in graphs can be used to scale algorithms to large inputs. We develop algorithms for three graph problems based on two approaches to building hierarchies. The first approach reduces instance sizes for NP-hard problems by applying so-called reduction rules. These rules can be applied in polynomial time. They either find parts of the input that can be solved in polynomial time, or they identify structures that can be contracted (reduced) into smaller structures without loss of information for the specific problem. After solving the reduced instance using an exponential-time algorithm, the previously contracted structures can be uncontracted to obtain an exact solution for the original input. Beyond simple preprocessing, reduction rules can also be used in branch-and-reduce algorithms, where they are successively applied after each branching step to build a hierarchy of problem kernels of increasing computational hardness. We develop reduction-based algorithms for the classical NP-hard problems Maximum Independent Set and Maximum Cut. The second approach is used for route planning in road networks, where we build a hierarchy of road segments based on their importance for long-distance shortest paths. By only considering important road segments when we are far away from the source and destination, we can substantially speed up shortest-path queries.

    In the second part of this dissertation, we take a step back from concrete graph problems and look at more general problems in high-performance computing (HPC). Due to the ever-increasing size and complexity of HPC clusters, we expect hardware and software failures to become more common in massively parallel computations. We present two techniques that let applications recover from failures and resume computation. Both are based on in-memory storage of redundant information and a data distribution that enables fast recovery. The first technique targets general-purpose distributed processing frameworks: we identify data that is redundantly available on multiple machines and only introduce additional work for the remaining data that is available on a single machine. The second technique is a checkpointing library engineered for fast recovery, using a data distribution method that achieves balanced communication loads. Both techniques work in settings where computation after a failure continues with fewer machines than before. This contrasts with many previous approaches, in particular for checkpointing, which focus on systems that keep spare resources available to replace failed machines.

    Overall, we present different techniques that enable scalable algorithms. While some of these techniques are specific to graph problems, we also present tools for fault-tolerant algorithms and applications in a distributed setting. To show that these can be helpful in many different domains, we evaluate them on graph problems and on other applications such as phylogenetic tree inference.
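
    As a concrete instance of the reduction-rule approach, the sketch below applies the classical degree-one rule for Maximum Independent Set: a vertex with at most one neighbour can always be placed in some optimal solution, so it is taken, its neighbour is deleted, and the rule is reapplied until only a reduced kernel remains. This is a generic textbook rule for illustration, not the engineered reduction suite from the dissertation.

        def reduce_low_degree(adj):
            # adj maps vertex -> set of neighbours; returns (forced vertices, kernel).
            adj = {v: set(ns) for v, ns in adj.items()}
            solution = set()
            queue = list(adj)
            while queue:
                v = queue.pop()
                if v not in adj or len(adj[v]) > 1:
                    continue
                solution.add(v)                  # degree <= 1: safe to take v
                for w in {v} | adj[v]:           # delete v and its neighbour, if any
                    for n in adj.pop(w, set()):
                        if n in adj:
                            adj[n].discard(w)
                            queue.append(n)      # n may have become reducible
            return solution, adj

        # A path a-b-c reduces completely: both endpoints are forced, kernel is empty.
        path = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
        print(reduce_low_degree(path))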

    Design and Construction of a Longitudinally Polarized Solid Nuclear Target for CLAS12

    A new polarized nuclear target has been developed, constructed, and deployed at Jefferson Laboratory in Newport News, VA, for use with the upgraded 12 GeV CEBAF (Continuous Electron Beam Accelerator Facility) accelerator and the Hall B CLAS12 (12 GeV CEBAF Large Acceptance Spectrometer) detector array. This ‘APOLLO’ (Ammonia POLarized LOngitudinally) target is a longitudinally polarized, solid-ammonia nuclear target that employs DNP (Dynamic Nuclear Polarization) to induce a net polarization in samples of protons (NH3) and deuterons (ND3) cooled to 1 K via helium evaporation, held in a 5 T polarizing field supplied by the CLAS12 spectrometer, and irradiated with 140 GHz microwave radiation. It was utilized in the RGC (Run Group C) experiment suite through a collaboration of the JLab Target Group, Old Dominion University, Christopher Newport University, the University of Virginia, and the CLAS Collaboration. RGC comprised six experiments that measured multiple spin-dependent observables across a wide kinematic phase space for use in nucleon spin studies. The dimensional constraints imposed by incorporating APOLLO into CLAS12, as well as the considerations necessary to utilize the CLAS12 solenoid, introduced unique challenges to the target design. This document presents the innovative solutions developed for these challenges, including a novel material transport system, superconducting magnetic correction coils, and an all-new, bespoke NMR (Nuclear Magnetic Resonance) system. In addition to a detailed description of the complete target system and an initial report of the RGC experimental run, it also presents a study of quark-hadron duality in the g1 spin structure function based on Hall B EG1b data and pQCD fits from the JAM (Jefferson Lab Angular Momentum) Collaboration.
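
    A back-of-the-envelope check shows why DNP is essential at the quoted operating point. For spin-1/2 protons at thermal equilibrium in a field B at temperature T, the polarization is P = tanh(mu_p B / k_B T); the sketch below, using standard physical constants and the 5 T / 1 K conditions from the abstract, gives roughly 0.5%, far below what DNP can reach.

        import math

        MU_P = 1.410606e-26   # proton magnetic moment, J/T (CODATA)
        K_B = 1.380649e-23    # Boltzmann constant, J/K

        def te_polarization(b_tesla, t_kelvin):
            # Thermal-equilibrium polarization of a spin-1/2 ensemble.
            return math.tanh(MU_P * b_tesla / (K_B * t_kelvin))

        print(f"{te_polarization(5.0, 1.0):.2%}")  # ~0.51% without DNP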

    Optimisation of flow chemistry: tools and algorithms

    The coupling of flow chemistry with automated laboratory equipment has become increasingly common and is used to support the efficient manufacturing of chemicals. A variety of reactors and analytical techniques have been used in such configurations to investigate and optimise the processing conditions of different reactions. However, the integrated reactors used thus far have been constrained to single-phase mixing, greatly limiting the scope of reactions available to such studies. This thesis presents the development and integration of a millilitre-scale CSTR, the fReactor, that is able to process multiphase flows, thus broadening the range of reactions that can be investigated in this way. Following a thorough review of the literature covering the uses of flow chemistry and lab-scale reactor technology, insights are given on the design of a temperature-controlled version of the fReactor with an accuracy of ±0.3 °C, capable of cutting waiting times by 44% compared to the previous reactor. A demonstration of its use is provided in which the product of a multiphasic reaction is analysed automatically under different reaction conditions according to a sampling plan. Metamodeling and cross-validation techniques are applied to these results, and single- and multi-objective optimisations are carried out over the response surface models of different metrics to illustrate the trade-offs between them. These techniques reduced the error incurred by common least-squares polynomial fitting by over 12%. Additionally, the fReactor is demonstrated as a tool for synchrotron X-ray diffraction by successfully assessing the change in polymorph caused by solvent switching, the first synchrotron experiment using this sort of device.

    The remainder of the thesis focuses on applying the same metamodeling and cross-validation techniques to the optimisation of the design of a miniaturised continuous oscillatory baffled reactor. Rather than being driven by physical experimentation, however, these techniques are used in conjunction with computational fluid dynamics. This reactor shows a better residence time distribution than its CSTR counterparts. Notably, baffle offsetting in a plate design of the reactor is identified as a key parameter in producing a narrow residence time distribution and good mixing. Under this configuration it is possible to reduce the RTD variance by 45% and increase the mixing efficiency by 60% compared to the best-performing opposing-baffles geometry.
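
    The metamodeling and cross-validation workflow can be sketched in a few lines. The example below fits polynomial response surfaces of increasing degree to invented (temperature, residence time) -> yield data and uses 5-fold cross-validation to choose the degree; the data, variable names, and model family are assumptions for illustration, not the thesis's fReactor measurements or models.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import PolynomialFeatures
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import cross_val_score

        # Hypothetical data: conditions (temperature in C, residence time in min) vs. yield.
        rng = np.random.default_rng(0)
        X = rng.uniform([20, 1], [80, 10], size=(30, 2))       # sampled conditions
        y = -(X[:, 0] - 55)**2 / 100 - (X[:, 1] - 6)**2 + 95 \
            + rng.normal(0, 0.5, 30)                           # noisy response

        # Cross-validate response surfaces of increasing degree and keep the best.
        best = max(
            range(1, 4),
            key=lambda d: cross_val_score(
                make_pipeline(PolynomialFeatures(d), LinearRegression()),
                X, y, cv=5, scoring="neg_mean_squared_error").mean())
        model = make_pipeline(PolynomialFeatures(best), LinearRegression()).fit(X, y)
        print("chosen degree:", best)
        print("predicted yield at 55 C, 6 min:", model.predict([[55, 6]])[0])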