
    A New Look at the Easy-Hard-Easy Pattern of Combinatorial Search Difficulty

    The easy-hard-easy pattern in the difficulty of combinatorial search problems as constraints are added has been explained as due to a competition between the decrease in number of solutions and increased pruning. We test the generality of this explanation by examining one of its predictions: if the number of solutions is held fixed by the choice of problems, then increased pruning should lead to a monotonic decrease in search cost. Instead, we find the easy-hard-easy pattern in median search cost even when the number of solutions is held constant, for some search methods. This generalizes previous observations of this pattern and shows that the existing theory does not explain the full range of the peak in search cost. In these cases the pattern appears to be due to changes in the size of the minimal unsolvable subproblems, rather than changing numbers of solutions. Comment: See http://www.jair.org/ for any accompanying files.
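
    As a rough illustration of the kind of experiment this abstract describes (a sketch under my own assumptions, not the paper's setup, which additionally holds the number of solutions fixed), the following Python snippet measures the median cost of a naive backtracking search on random 3-SAT as the clause-to-variable ratio grows:

```python
# Sketch under my own assumptions; not the paper's code or problem ensemble.
import random
import statistics

def random_3sat(n_vars, n_clauses, rng):
    """Random 3-SAT: each clause picks 3 distinct variables with random signs."""
    return [[v if rng.random() < 0.5 else -v
             for v in rng.sample(range(1, n_vars + 1), 3)]
            for _ in range(n_clauses)]

def dpll(clauses, assignment, stats):
    """Plain recursive backtracking; counts the search nodes visited."""
    stats["nodes"] += 1
    remaining = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue                       # clause already satisfied
        reduced = [l for l in clause if abs(l) not in assignment]
        if not reduced:
            return False                   # empty clause: dead end, prune
        remaining.append(reduced)
    if not remaining:
        return True                        # every clause satisfied
    var = abs(remaining[0][0])             # naive branching choice
    for value in (True, False):
        assignment[var] = value
        if dpll(remaining, assignment, stats):
            return True
        del assignment[var]
    return False

rng = random.Random(0)
n = 12
for ratio in (2.0, 3.0, 4.3, 5.5, 7.0):
    costs = []
    for _ in range(40):
        stats = {"nodes": 0}
        dpll(random_3sat(n, int(ratio * n), rng), {}, stats)
        costs.append(stats["nodes"])
    print(f"m/n = {ratio:.1f}: median search nodes = {statistics.median(costs)}")
```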

    Single-Step Quantum Search Using Problem Structure

    The structure of satisfiability problems is used to improve search algorithms for quantum computers and reduce their required coherence times by using only a single coherent evaluation of problem properties. The structure of random k-SAT allows determining the asymptotic average behavior of these algorithms, showing they improve on quantum algorithms, such as amplitude amplification, that ignore detailed problem structure but remain exponential for hard problem instances. Compared to good classical methods, the algorithm performs better, on average, for weakly and highly constrained problems but worse for hard cases. The analytic techniques introduced here also apply to other quantum algorithms, supplementing the limited evaluation possible with classical simulations and showing how quantum computing can use ensemble properties of NP search problems. Comment: 39 pages, 12 figures. Revision describes further improvement with multiple steps (section 7). See also http://www.parc.xerox.com/dynamics/www/quantum.htm
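
    For scale only, here is a minimal numpy sketch of the structure-ignoring amplitude-amplification baseline the abstract compares against, with a single marked assignment; it is my own illustration, not the paper's single-step algorithm:

```python
# My own illustration of textbook amplitude amplification, not the paper's method.
import numpy as np

n = 8                                           # number of boolean variables / qubits
dim = 2 ** n
marked = 42                                     # arbitrary index of the satisfying assignment

state = np.full(dim, 1 / np.sqrt(dim))          # uniform superposition over assignments
steps = int(round(np.pi / 4 * np.sqrt(dim)))    # ~ (pi/4) sqrt(N) iterations for one solution
for _ in range(steps):
    state[marked] *= -1                         # oracle: phase-flip the marked assignment
    state = 2 * state.mean() - state            # diffusion: inversion about the mean
print(f"P(measuring the solution) after {steps} iterations: {state[marked] ** 2:.4f}")
```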

    Quantum Computing and Phase Transitions in Combinatorial Search

    We introduce an algorithm for combinatorial search on quantum computers that is capable of significantly concentrating amplitude into solutions for some NP search problems, on average. This is done by exploiting the same aspects of problem structure as used by classical backtrack methods to avoid unproductive search choices. This quantum algorithm is much more likely to find solutions than the simple direct use of quantum parallelism. Furthermore, empirical evaluation on small problems shows this quantum algorithm displays the same phase transition behavior, and at the same location, as seen in many previously studied classical search methods. Specifically, difficult problem instances are concentrated near the abrupt change from underconstrained to overconstrained problems. Comment: See http://www.jair.org/ for an online appendix and other files accompanying this article.
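
    The underconstrained-to-overconstrained transition mentioned above can be reproduced classically on tiny instances; this brute-force sketch (instance sizes and trial counts are my own choices, and finite-size effects smear the crossover, which sits near a clause-to-variable ratio of about 4.3 for random 3-SAT in the large-n limit) estimates the satisfiable fraction across ratios:

```python
# My own brute-force illustration; not the paper's quantum algorithm or ensemble.
import itertools
import random

def satisfiable(n_vars, clauses):
    """Exhaustively test all 2^n assignments."""
    for bits in itertools.product((False, True), repeat=n_vars):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in clause) for clause in clauses):
            return True
    return False

rng = random.Random(1)
n, trials = 10, 30
for ratio in (3.0, 3.5, 4.0, 4.3, 4.6, 5.0, 5.5):
    m = int(ratio * n)
    sat = sum(
        satisfiable(n, [[v if rng.random() < 0.5 else -v
                         for v in rng.sample(range(1, n + 1), 3)]
                        for _ in range(m)])
        for _ in range(trials)
    )
    print(f"m/n = {ratio:.1f}: {sat}/{trials} instances satisfiable")
```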

    Solving Highly Constrained Search Problems with Quantum Computers

    A previously developed quantum search algorithm for solving 1-SAT problems in a single step is generalized to apply to a range of highly constrained k-SAT problems. We identify a bound on the number of clauses in satisfiability problems for which the generalized algorithm can find a solution in a constant number of steps as the number of variables increases. This performance contrasts with the linear growth in the number of steps required by the best classical algorithms, and the exponential number required by classical and quantum methods that ignore the problem structure. In some cases, the algorithm can also guarantee that insoluble problems in fact have no solutions, unlike previously proposed quantum search algorithms.

    The faint-galaxy hosts of gamma-ray bursts

    The observed redshifts and magnitudes of the host galaxies of gamma-ray bursts (GRBs) are compared with the predictions of three basic GRB models, in which the comoving rate density of GRBs is (1) proportional to the cosmic star formation rate density, (2) proportional to the total integrated stellar density and (3) constant. All three models make the assumption that at every epoch the probability of a GRB occurring in a galaxy is proportional to that galaxy's broad-band luminosity. No assumption is made that GRBs are standard candles or even that their luminosity function is narrow. All three rate density models are consistent with the observed GRB host galaxies to date, although model (2) is slightly disfavored relative to the others. Models (1) and (3) make very similar predictions for host galaxy magnitude and redshift distributions; these models will probably not be distinguished without measurements of host-galaxy star-formation rates. The fraction of host galaxies fainter than 28 mag may constrain the faint end of the galaxy luminosity function at high redshift, or, if the fraction is observed to be low, may suggest that the bursters are expelled from low-luminosity hosts. In all models, the probability of finding a z<0.008 GRB among a sample of 11 GRBs is less than 10^(-4), strongly suggesting that GRB 980425, if associated with supernova 1998bw, represents a distinct class of GRBs. Comment: 7 pages, ApJ in press, revised to incorporate yet more new and revised observational results.

    Holocene geomagnetic field in Europe


    Using microsimulation feedback for trip adaptation for realistic traffic in Dallas

    This paper presents a day-to-day re-routing relaxation approach for traffic simulations. Starting from an initial planset for the routes, the route-based microsimulation is executed. The result of the microsimulation is fed into a re-router, which re-routes a certain percentage of all trips. This approach makes the traffic patterns in the microsimulation much more reasonable. Further, it is shown that the method described in this paper can lead to strong oscillations in the solutions. Comment: Accepted by International Journal of Modern Physics C. Complete postscript version including figures in http://www-transims.tsasa.lanl.gov/research_team/papers
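
    A toy sketch of the day-to-day feedback loop described above (a two-route illustration of my own, not the TRANSIMS code or the Dallas network): each simulated "day" a fixed fraction of trips is re-routed onto the currently faster route, and a large re-routing fraction produces the kind of oscillation the abstract mentions, while a small one relaxes smoothly.

```python
# My own two-route toy model of the re-routing relaxation loop; not the paper's code.
import random

def travel_time(load):
    """Toy congestion function: free-flow time plus a load-dependent delay."""
    return 10.0 + 0.01 * load

def relax(n_trips=10_000, fraction=0.5, days=15, seed=0):
    rng = random.Random(seed)
    routes = [rng.choice("AB") for _ in range(n_trips)]       # initial plan set
    for day in range(days):
        load_a = routes.count("A")
        t_a, t_b = travel_time(load_a), travel_time(n_trips - load_a)
        best = "A" if t_a < t_b else "B"                      # the re-router's choice today
        for i in rng.sample(range(n_trips), int(fraction * n_trips)):
            routes[i] = best                                  # re-route this trip
        print(f"day {day:2d}: {load_a:5d} trips on A, t_A = {t_a:6.1f}, t_B = {t_b:6.1f}")

relax(fraction=0.5)      # oscillates strongly; try fraction=0.05 for a smooth relaxation
```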

    Valley current characterization of high current density resonant tunnelling diodes for terahertz-wave applications

    We report valley current characterisation of high current density InGaAs/AlAs/InP resonant tunnelling diodes (RTDs) grown by metal-organic vapour phase epitaxy (MOVPE) for THz emission, with a view to investigating the origin of the valley current and optimizing device performance. By applying a dual-pass fabrication technique, we are able to measure the RTD I-V characteristics for different perimeter/area ratios, which uniquely allows us to investigate the contribution of leakage current to the valley current and its effect on the peak-to-valley current ratio (PVCR) from a single device. Temperature-dependent (20–300 K) characteristics of a device are critically analysed and the effect of temperature on the maximum extractable power (PMAX) and the negative differential conductance (NDC) of the device is investigated. By performing theoretical modelling, we are able to explore the effect of typical variations in structural composition during the growth process on the tunnelling properties of the device, and hence on the device performance.
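
    One standard way to use devices with different perimeter/area (P/A) ratios is to model the valley current as an area term plus a perimeter (surface-leakage) term, I_valley = J_area * A + J_perim * P, so that I_valley / A is linear in P/A and both terms follow from a straight-line fit. The numpy sketch below uses made-up geometries and currents purely to show the bookkeeping; it is not the paper's analysis or data.

```python
# Hedged sketch with hypothetical numbers; not the paper's data or exact model.
import numpy as np

# Hypothetical square mesas (side length in micrometres).
side_um = np.array([2.0, 3.0, 4.0, 6.0, 9.0])
area = side_um ** 2                      # um^2
perimeter = 4.0 * side_um                # um
p_over_a = perimeter / area              # 1/um

# Hypothetical measured valley currents (mA); replace with real I-V data.
i_valley_mA = np.array([0.065, 0.125, 0.205, 0.42, 0.90])

# Fit I/A = J_area + J_perim * (P/A).
j_valley = i_valley_mA / area            # mA / um^2
j_perim, j_area = np.polyfit(p_over_a, j_valley, 1)
print(f"area component      J_area  ~ {j_area:.4f} mA/um^2")
print(f"perimeter component J_perim ~ {j_perim:.4f} mA/um")
```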

    Invoice from T. P. Hogg to Ogden Goelet

    https://digitalcommons.salve.edu/goelet-personal-expenses/1080/thumbnail.jp
