
    An Atypical Survey of Typical-Case Heuristic Algorithms

    Heuristic approaches often do so well that they seem to pretty much always give the right answer. How close can heuristic algorithms get to always giving the right answer, without inducing seismic complexity-theoretic consequences? This article first discusses how a series of results by Berman, Buhrman, Hartmanis, Homer, Longpré, Ogiwara, Schöning, and Watanabe, from the early 1970s through the early 1990s, explicitly or implicitly limited how well heuristic algorithms can do on NP-hard problems. In particular, many desirable levels of heuristic success cannot be obtained unless severe, highly unlikely complexity class collapses occur. Second, we survey work initiated by Goldreich and Wigderson, who showed how, under plausible assumptions, deterministic heuristics for randomized computation can achieve a very high frequency of correctness. Finally, we consider formal ways in which theory can help explain the effectiveness of heuristics that solve NP-hard problems in practice. Comment: This article is currently scheduled to appear in the December 2012 issue of SIGACT News.
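
    Purely as an illustration of the notion of "frequency of correctness" discussed above (and not taken from the survey itself), the following sketch measures how often a simple greedy heuristic for the NP-hard PARTITION problem happens to find an optimal split on small random instances, by comparing it against exhaustive search.

```python
# Illustrative only: frequency of correctness of a greedy heuristic for
# PARTITION, checked against brute force on tiny random instances.
import itertools
import random

def greedy_partition_gap(values):
    """Greedy heuristic: place each value (largest first) in the lighter bin."""
    a = b = 0
    for v in sorted(values, reverse=True):
        if a <= b:
            a += v
        else:
            b += v
    return abs(a - b)

def optimal_partition_gap(values):
    """Brute force over all subsets (exponential; fine for tiny instances)."""
    total = sum(values)
    best = total
    for r in range(len(values) + 1):
        for subset in itertools.combinations(values, r):
            best = min(best, abs(total - 2 * sum(subset)))
    return best

random.seed(0)
trials = 500
correct = sum(
    greedy_partition_gap(vals) == optimal_partition_gap(vals)
    for vals in ([random.randint(1, 50) for _ in range(10)] for _ in range(trials))
)
print(f"greedy matched the optimum on {correct}/{trials} random instances")
```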

    Selected Problems of Determining Critical Loads in Structures with Stable Post-Critical Behaviour.

    This paper presents selected cases of inapplicability of theory-based methods for determining critical loads in thin-walled composite tubes. Eight-layered composite tubes with square cross-sections were subjected to static compression, and two kinds of measuring equipment were employed to register experimental data: strain gauges and the ARAMIS® Digital Image Correlation system. Once the measurement data were collected, five different theory-based methods were applied to determine the critical loads. Cases where it was impossible to apply certain methods, or where doubts arose about the correctness of the results, are presented and analyzed. Moreover, where possible, the theory was equivalently transformed so as to fit the experimental data and calculate the critical loads.
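
    The abstract does not name the five theory-based methods. As a generic, hedged illustration of how a critical (buckling) load can be extracted from experimental load-deflection data, the sketch below implements the classical Southwell-plot construction; it is not claimed to be one of the methods used in the paper.

```python
# Hypothetical illustration: the Southwell-plot method, one classical way of
# estimating a critical (buckling) load from experimental load-deflection data.
import numpy as np

def southwell_critical_load(load, deflection):
    """Estimate the critical load P_cr from measured (load, deflection) pairs.

    The Southwell construction plots deflection/load against deflection;
    near the critical load the points fall on a line whose slope is 1/P_cr.
    """
    load = np.asarray(load, dtype=float)
    deflection = np.asarray(deflection, dtype=float)
    x = deflection
    y = deflection / load
    slope, _intercept = np.polyfit(x, y, 1)   # least-squares line fit
    return 1.0 / slope

# Example with synthetic strain-gauge-like data (arbitrary units).
P = np.array([10, 20, 30, 40, 50, 60])
w = np.array([0.05, 0.12, 0.22, 0.38, 0.65, 1.20])
print(f"Estimated critical load: {southwell_critical_load(P, w):.1f}")
```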

    DiFX2: A more flexible, efficient, robust and powerful software correlator

    Software correlation, where a correlation algorithm written in a high-level language such as C++ is run on commodity computer hardware, has become increasingly attractive for small to medium sized and/or bandwidth constrained radio interferometers. In particular, many long baseline arrays (which typically have fewer than 20 elements and are restricted in observing bandwidth by costly recording hardware and media) have utilized software correlators for rapid, cost-effective correlator upgrades to allow compatibility with new, wider bandwidth recording systems and improve correlator flexibility. The DiFX correlator, made publicly available in 2007, has been a popular choice in such upgrades and is now used for production correlation by a number of observatories and research groups worldwide. Here we describe the evolution in the capabilities of the DiFX correlator over the past three years, including a number of new capabilities, substantial performance improvements, and a large amount of supporting infrastructure to ease use of the code. New capabilities include the ability to correlate a large number of phase centers in a single correlation pass, the extraction of phase calibration tones, correlation of disparate but overlapping sub-bands, the production of rapidly sampled filterbank and kurtosis data at minimal cost, and many more. The latest version of the code is at least 15% faster than the original, and in certain situations many times faster. Finally, we also present detailed test results validating the correctness of the new code. Comment: 28 pages, 9 figures, accepted for publication in PASP.
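
    For readers unfamiliar with software correlation, the sketch below shows the core "FX" operation that underlies correlators of this kind: channelize each station's voltage stream with an FFT ("F"), then cross-multiply conjugated spectra and accumulate ("X"). It is a minimal, assumed illustration, not DiFX's actual (far more elaborate) implementation.

```python
# Minimal FX cross-correlation sketch: FFT each segment, cross-multiply,
# accumulate. Illustrative only; real correlators add fringe rotation,
# delay tracking, fractional-sample correction, etc.
import numpy as np

def fx_correlate(v1, v2, nchan=64):
    """Return the accumulated cross-power spectrum of two voltage streams."""
    nspec = min(len(v1), len(v2)) // nchan
    acc = np.zeros(nchan, dtype=complex)
    for k in range(nspec):
        s1 = np.fft.fft(v1[k * nchan:(k + 1) * nchan])
        s2 = np.fft.fft(v2[k * nchan:(k + 1) * nchan])
        acc += s1 * np.conj(s2)          # cross-multiply and accumulate
    return acc / nspec

# Two noisy copies of the same signal, the second delayed by 3 samples,
# produce a phase slope across the cross-power spectrum.
rng = np.random.default_rng(0)
sig = rng.standard_normal(64 * 1000)
v1 = sig + 0.1 * rng.standard_normal(sig.size)
v2 = np.roll(sig, 3) + 0.1 * rng.standard_normal(sig.size)
spectrum = fx_correlate(v1, v2)
print(np.angle(spectrum[:8]))            # approximately linear phase ramp
```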

    The Complexity of Manipulative Attacks in Nearly Single-Peaked Electorates

    Many electoral bribery, control, and manipulation problems (which we will refer to in general as "manipulative actions" problems) are NP-hard in the general case. It has recently been noted that many of these problems fall into polynomial time if the electorate is single-peaked (i.e., is polarized along some axis/issue). However, real-world electorates are not truly single-peaked. There are usually some mavericks, and so real-world electorates tend to merely be nearly single-peaked. This paper studies the complexity of manipulative-action algorithms for elections over nearly single-peaked electorates, for various notions of nearness and various election systems. We provide instances where even one maverick jumps the manipulative-action complexity up to NP-hardness, but we also provide many instances where a reasonable number of mavericks can be tolerated without increasing the manipulative-action complexity. Comment: 35 pages, also appears as URCS-TR-2011-96
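
    To make the single-peakedness condition concrete, the hedged sketch below tests whether a vote (a strict ranking) is single-peaked with respect to a given left-to-right axis and counts the "maverick" voters that violate it. The paper's formal notions of nearness are richer than this; the code only illustrates the basic definition.

```python
# Illustrative check of single-peakedness with respect to a fixed axis.
def is_single_peaked(vote, axis):
    """vote: candidates from most to least preferred; axis: left-to-right order."""
    rank = {c: i for i, c in enumerate(vote)}      # 0 = most preferred
    p = axis.index(vote[0])                        # position of the peak
    left_ok = all(rank[axis[i - 1]] > rank[axis[i]] for i in range(p, 0, -1))
    right_ok = all(rank[axis[i + 1]] > rank[axis[i]] for i in range(p, len(axis) - 1))
    return left_ok and right_ok

def count_mavericks(profile, axis):
    return sum(not is_single_peaked(v, axis) for v in profile)

axis = ["left", "centre-left", "centre", "centre-right", "right"]
profile = [
    ["centre", "centre-left", "centre-right", "left", "right"],   # single-peaked
    ["left", "right", "centre", "centre-left", "centre-right"],   # maverick
]
print(count_mavericks(profile, axis))   # -> 1
```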

    Verification of Many-Qubit States

    Verification is the task of checking whether a given quantum state is close to an ideal state. In this paper, we show that a variety of many-qubit quantum states can be verified with only sequential single-qubit measurements of Pauli operators. First, we introduce a protocol for verifying ground states of Hamiltonians. We next explain how to verify quantum states generated by a certain class of quantum circuits. We finally propose an adaptive test of stabilizers that enables the verification of all polynomial-time-generated hypergraph states, which include output states of Bremner-Montanaro-Shepherd-type instantaneous quantum polynomial time (IQP) circuits. Importantly, we do not assume that identically and independently distributed copies of the same state are given: our protocols work even if some highly complicated entanglement is created among the copies in any artificial way. As applications, we consider the verification of quantum computational supremacy demonstrations with IQP models, and verifiable blind quantum computing. Comment: 15 pages, 3 figures, published version.
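
    As a hedged, generic illustration of how single-qubit Pauli measurements of stabilizers can certify a state (not the paper's actual protocol), the sketch below lower-bounds the fidelity of a noisy 3-qubit GHZ state from the expectation values of its stabilizer generators, using the standard bound F >= 1 - sum_i (1 - <g_i>)/2.

```python
# Illustration only: fidelity lower bound from stabilizer-generator
# expectation values, each measurable with single-qubit Pauli measurements.
import numpy as np
from functools import reduce

I = np.eye(2); X = np.array([[0, 1], [1, 0]]); Z = np.diag([1.0, -1.0])
kron = lambda *ops: reduce(np.kron, ops)

# Ideal GHZ state and a noisy version (mixed with the maximally mixed state).
ghz = np.zeros(8); ghz[0] = ghz[7] = 1 / np.sqrt(2)
rho_ideal = np.outer(ghz, ghz)
p = 0.1
rho = (1 - p) * rho_ideal + p * np.eye(8) / 8

generators = [kron(X, X, X), kron(Z, Z, I), kron(I, Z, Z)]
expvals = [np.real(np.trace(rho @ g)) for g in generators]

fidelity_bound = 1 - sum((1 - e) / 2 for e in expvals)
true_fidelity = np.real(ghz @ rho @ ghz)
print(f"bound {fidelity_bound:.3f} <= fidelity {true_fidelity:.3f}")
```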

    The Long-Short Story of Movie Description

    Generating descriptions for videos has many applications, including assisting blind people and human-robot interaction. Recent advances in image captioning, together with the release of large-scale movie description datasets such as MPII Movie Description, make it possible to study this task in more depth. Many of the proposed methods for image captioning rely on pre-trained object-classifier CNNs and Long Short-Term Memory recurrent networks (LSTMs) for generating descriptions. While image description focuses on objects, we argue that it is important to distinguish verbs, objects, and places in the challenging setting of movie description. In this work we show how to learn robust visual classifiers from the weak annotations of the sentence descriptions. Based on these visual classifiers we learn how to generate a description using an LSTM. We explore different design choices to build and train the LSTM and achieve the best performance to date on the challenging MPII-MD dataset. We compare and analyze our approach and prior work along various dimensions to better understand the key challenges of the movie description task.
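
    A minimal, hypothetical sketch of the general recipe described above: visual classifier scores (e.g. for verbs, objects, and places) are projected into an LSTM decoder that emits a word sequence. The layer sizes, wiring, and class VisualLSTMDescriber are illustrative assumptions, not the paper's actual architecture.

```python
# Assumed illustration of "visual classifiers feeding an LSTM decoder".
import torch
import torch.nn as nn

class VisualLSTMDescriber(nn.Module):
    def __init__(self, num_visual_labels, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        self.visual_proj = nn.Linear(num_visual_labels, embed_dim)  # classifier scores -> embedding
        self.word_embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, visual_scores, word_ids):
        # Prepend the projected visual input to the embedded word sequence.
        v = self.visual_proj(visual_scores).unsqueeze(1)        # (B, 1, E)
        w = self.word_embed(word_ids)                           # (B, T, E)
        hidden, _ = self.lstm(torch.cat([v, w], dim=1))         # (B, T+1, H)
        return self.out(hidden)                                 # word logits

# Toy usage: 100 visual labels, 1000-word vocabulary, batch of 2 clips.
model = VisualLSTMDescriber(num_visual_labels=100, vocab_size=1000)
scores = torch.rand(2, 100)
words = torch.randint(0, 1000, (2, 7))
print(model(scores, words).shape)   # torch.Size([2, 8, 1000])
```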

    Verification tools for probabilistic forecasts of continuous hydrological variables

    In the present paper we describe some methods for verifying and evaluating probabilistic forecasts of hydrological variables. We propose an extension to continuous-valued variables of a verification method that originated in the meteorological literature for the analysis of binary variables and is based on the use of a suitable cost-loss function to evaluate the quality of the forecasts. We find that this procedure is useful and reliable when it is complemented with other verification tools, borrowed from the economic literature, which are aimed at verifying the statistical correctness of the probabilistic forecast. We illustrate our findings with a detailed application to the evaluation of probabilistic and deterministic forecasts of hourly discharge values.
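
    As a hedged example of a standard verification score for probabilistic forecasts of continuous variables such as discharge (a generic tool, not the cost-loss based procedure proposed in the paper), the sketch below computes the continuous ranked probability score (CRPS) of an ensemble forecast.

```python
# Generic verification illustration: ensemble CRPS against one observation.
import numpy as np

def crps_ensemble(ensemble, observation):
    """CRPS of an ensemble forecast against a single observed value.

    Uses the identity CRPS = E|X - y| - 0.5 * E|X - X'| with X, X' drawn
    independently from the forecast ensemble.
    """
    x = np.asarray(ensemble, dtype=float)
    term1 = np.mean(np.abs(x - observation))
    term2 = 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))
    return term1 - term2

# Toy example: 50-member forecast of hourly discharge (m^3/s) vs. observation.
rng = np.random.default_rng(42)
forecast = rng.normal(loc=120.0, scale=15.0, size=50)
print(f"CRPS = {crps_ensemble(forecast, 130.0):.2f} m^3/s")
```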