
    Study of the Distillability of Werner States Using Entanglement Witnesses and Robust Semidefinite Programs

    We use Robust Semidefinite Programs and Entanglement Witnesses to study the distillability of Werner states. We perform exact numerical calculations which show 2-undistillability in a region of the state space which was previously conjectured to be undistillable. We also introduce bases which yield interesting expressions for the distillability witnesses and for a tensor product of Werner states with an arbitrary number of copies. Comment: 16 pages, 2 figures
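    The paper's robust-SDP machinery is too heavy for a short snippet, but the flavor of witness-style entanglement tests can be illustrated with the simplest such criterion, the Peres-Horodecki positive-partial-transpose (PPT) test, applied to two-qubit Werner states. This is a minimal numpy sketch, not the paper's method; the parameterization rho(p) = p|psi^-><psi^-| + (1-p)I/4 is one common convention for Werner states.

```python
import numpy as np

def werner_state(p):
    """Two-qubit Werner state: p * |psi^-><psi^-| + (1-p) * I/4."""
    psi_minus = np.array([0, 1, -1, 0]) / np.sqrt(2)
    return p * np.outer(psi_minus, psi_minus) + (1 - p) * np.eye(4) / 4

def partial_transpose(rho):
    """Partial transpose over the second qubit of a 4x4 density matrix."""
    r = rho.reshape(2, 2, 2, 2)          # indices: a, b (ket), a', b' (bra)
    return r.transpose(0, 3, 2, 1).reshape(4, 4)   # swap b and b'

for p in [0.2, 1/3, 0.5, 0.9]:
    min_eig = np.linalg.eigvalsh(partial_transpose(werner_state(p))).min()
    verdict = "NPT (entangled)" if min_eig < -1e-12 else "PPT"
    print(f"p = {p:.3f}  min PT eigenvalue = {min_eig:+.4f}  {verdict}")
```

    For two qubits the partial transpose acquires a negative eigenvalue exactly when p > 1/3, which is also the entanglement boundary of this family.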

    Separable Multipartite Mixed States - Operational Asymptotically Necessary and Sufficient Conditions

    We introduce an operational procedure to determine, with arbitrary probability and accuracy, the optimal entanglement witness for every multipartite entangled state. This method provides an operational criterion for separability which is asymptotically necessary and sufficient. Our results are also generalized to detect all different types of multipartite entanglement. Comment: 4 pages, 2 figures, submitted to Physical Review Letters. Revised version with new calculations
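    For intuition on what an entanglement witness does operationally, here is a hedged sketch evaluating a fixed textbook witness W = I/2 - |psi^-><psi^-| (not the optimized witnesses the paper constructs) on the same Werner family: Tr(W rho) is non-negative on every separable state, so a negative value certifies entanglement.

```python
import numpy as np

psi_minus = np.array([0, 1, -1, 0]) / np.sqrt(2)
proj = np.outer(psi_minus, psi_minus)

# Textbook witness: W = I/2 - |psi^-><psi^-|.  Tr(W sigma) >= 0 for every
# separable sigma, so Tr(W rho) < 0 certifies that rho is entangled.
W = np.eye(4) / 2 - proj

for p in [0.2, 0.5, 0.9]:
    rho = p * proj + (1 - p) * np.eye(4) / 4    # Werner state, as above
    val = np.trace(W @ rho).real
    verdict = "entangled (witnessed)" if val < 0 else "not detected by this W"
    print(f"p = {p:.1f}  Tr(W rho) = {val:+.3f}  {verdict}")
```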

    Algorithm Engineering in Robust Optimization

    Robust optimization is a young and emerging field of research that has received a considerable increase in interest over the last decade. In this paper, we argue that the algorithm engineering methodology fits the field of robust optimization very well and yields a rewarding new perspective on both the current state of research and open research directions. To this end we go through the algorithm engineering cycle of design and analysis of concepts, development and implementation of algorithms, and theoretical and experimental evaluation. We show that many ideas of algorithm engineering have already been applied in publications on robust optimization. Most work on robust optimization is devoted to the analysis of concepts and the development of algorithms, some papers deal with the evaluation of a particular concept in case studies, and work on the comparison of concepts is only just starting. What is still missing in many papers on robustness is the link that feeds the results of the experiments back into the design.
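    As a tiny, self-contained illustration of the kind of concept such experiments evaluate, the sketch below solves a strictly robust counterpart under interval uncertainty: a resource constraint must hold for every coefficient vector in a box, which for non-negative variables reduces to using the worst-case (upper) coefficients. The instance and the scipy-based solver choice are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Nominal problem: maximize c^T x subject to a^T x <= b, 0 <= x <= 1.
c = np.array([3.0, 2.0, 4.0])        # profits (illustrative numbers)
a_nom = np.array([2.0, 1.0, 3.0])    # nominal resource usage
a_hi = a_nom * 1.25                  # interval upper bounds (worst case)
b = 4.0

def solve(a):
    # linprog minimizes, so negate c to maximize.
    res = linprog(-c, A_ub=[a], b_ub=[b], bounds=[(0, 1)] * 3)
    return res.x, c @ res.x

x_nom, v_nom = solve(a_nom)   # nominal optimum
x_rob, v_rob = solve(a_hi)    # strictly robust counterpart: hedge against worst case

print("nominal :", np.round(x_nom, 3), "profit", round(v_nom, 3))
print("robust  :", np.round(x_rob, 3), "profit", round(v_rob, 3))
print("price of robustness:", round(v_nom - v_rob, 3))
```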

    Interaction-powered supernovae: Rise-time vs. peak-luminosity correlation and the shock-breakout velocity

    Interaction of supernova (SN) ejecta with the optically thick circumstellar medium (CSM) of a progenitor star can result in a bright, long-lived shock breakout event. Candidates for such SNe include Type IIn and superluminous SNe. If some of these SNe are powered by interaction, then there should be a relation between their peak luminosity, bolometric light-curve rise time, and shock-breakout velocity. Given that the shock velocity during shock breakout is not measured, we expect a correlation, with a significant spread, between the rise time and the peak luminosity of these SNe. Here, we present a sample of 15 SNe IIn for which we have good constraints on their rise time and peak luminosity from observations obtained using the Palomar Transient Factory. We report on a possible correlation between the R-band rise time and peak luminosity of these SNe, with a false-alarm probability of 3%. Assuming that these SNe are powered by interaction, combining these observables and theory allows us to deduce lower limits on the shock-breakout velocity. The lower limits on the shock velocity we find are consistent with what is expected for SNe (i.e., ~10^4 km/s). This supports the suggestion that the early-time light curves of SNe IIn are caused by shock breakout in a dense CSM. We note that such a correlation can arise from other physical mechanisms. Performing such a test on other classes of SNe (e.g., superluminous SNe) can be used to rule out the interaction model for a class of events. Comment: Accepted to ApJ, 6 pages
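    The abstract does not spell out the statistical test, but one standard way to attach a false-alarm probability to such a correlation is a permutation test on a rank correlation coefficient. The sketch below does this on synthetic stand-in data (the rise times, magnitudes, and injected trend are invented for illustration, not the PTF measurements):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical stand-in data: rise times (days) and peak absolute magnitudes
# for a 15-object sample -- NOT the measurements from the paper.
rise = np.array([12, 20, 17, 35, 8, 25, 40, 15, 30, 22, 18, 28, 10, 33, 26.0])
peak = -18.0 - 0.02 * rise + rng.normal(0, 0.4, rise.size)  # trend + scatter

rho, _ = stats.spearmanr(rise, peak)

# False-alarm probability: fraction of random shufflings of one variable
# that produce an equally strong (absolute) rank correlation.
n_perm = 10_000
perm = np.array([stats.spearmanr(rise, rng.permutation(peak))[0]
                 for _ in range(n_perm)])
fap = np.mean(np.abs(perm) >= abs(rho))

print(f"Spearman rho = {rho:+.2f}, false-alarm probability ~ {fap:.3f}")
```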

    Extended Formulations in Mixed-integer Convex Programming

    We present a unifying framework for generating extended formulations for the polyhedral outer approximations used in algorithms for mixed-integer convex programming (MICP). Extended formulations lead to fewer iterations of outer approximation algorithms and generally faster solution times. First, we observe that all MICP instances from the MINLPLIB2 benchmark library are conic representable with standard symmetric and nonsymmetric cones. Conic reformulations are shown to be effective extended formulations themselves because they encode separability structure. For mixed-integer conic-representable problems, we provide the first outer approximation algorithm with finite-time convergence guarantees, opening a path for the use of conic solvers for continuous relaxations. We then connect the popular modeling framework of disciplined convex programming (DCP) to the existence of extended formulations independent of conic representability. We present evidence that our approach can yield significant gains in practice, with the solution of a number of open instances from the MINLPLIB2 benchmark library. Comment: To be presented at IPCO 2016
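    As a toy illustration of outer approximation with an extended (per-term epigraph) formulation, the sketch below minimizes a separable convex function over a small integer grid, adding one gradient cut per term each iteration; enumeration over the grid stands in for a real MILP solver. The instance is invented and the code is not the paper's algorithm, which handles general conic-representable constraints:

```python
import itertools
import math

# Toy separable MICP: minimize f1(x) + f2(y) over integers 0 <= x, y <= 5,
# with f1(x) = exp(0.5 x) and f2(y) = (y - 1.7)^2.  (Illustrative instance.)
f = [lambda x: math.exp(0.5 * x), lambda y: (y - 1.7) ** 2]
df = [lambda x: 0.5 * math.exp(0.5 * x), lambda y: 2 * (y - 1.7)]
grid = list(itertools.product(range(6), repeat=2))

# Extended formulation: one epigraph variable t_i >= f_i(x_i) per term, so
# gradient cuts are added per term instead of on the aggregated objective.
cuts = [[], []]                      # cuts[i]: (slope, intercept) pairs for t_i
xk = (5, 5)                          # arbitrary starting point
for it in range(20):
    for i in range(2):               # linearize each term at the current point
        cuts[i].append((df[i](xk[i]), f[i](xk[i]) - df[i](xk[i]) * xk[i]))
    # Master problem: minimize sum_i max over cuts[i].  Enumeration over the
    # small grid stands in for a MILP solver.
    def master_obj(pt):
        return sum(max(a * pt[i] + b for a, b in cuts[i]) for i in range(2))
    xk = min(grid, key=master_obj)
    lower = master_obj(xk)                       # master optimum = lower bound
    upper = sum(f[i](xk[i]) for i in range(2))   # true objective = upper bound
    print(f"iter {it}: x = {xk}, bounds [{lower:.4f}, {upper:.4f}]")
    if upper - lower < 1e-6:
        break
```

    Since the cut model underestimates each term and is exact at every previously visited point, the bounds close in finitely many iterations on this finite grid.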

    A Look at the Generalized Heron Problem through the Lens of Majorization-Minimization

    In a recent issue of this journal, Mordukhovich et al. pose and solve an interesting non-differentiable generalization of the Heron problem in the framework of modern convex analysis. In the generalized Heron problem one is given k+1 closed convex sets in R^d equipped with the Euclidean norm and asked to find the point in the last set such that the sum of the distances to the first k sets is minimal. In later work the authors generalize the Heron problem even further, relax its convexity assumptions, study its theoretical properties, and pursue subgradient algorithms for solving the convex case. Here, we revisit the original problem solely from the numerical perspective. By exploiting the majorization-minimization (MM) principle of computational statistics and rudimentary techniques from differential calculus, we are able to construct a very fast algorithm for solving the Euclidean version of the generalized Heron problem. Comment: 21 pages, 3 figures
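    A minimal sketch of a Weiszfeld-style MM iteration in the spirit described here, assuming balls for the k sets and a box for the constraint set (so projections are cheap): each distance term is majorized by a quadratic anchored at the current iterate, and the quadratic surrogate is minimized by projecting a weighted average of the set projections back onto the constraint set. The instance, the epsilon guard, and the stopping rule are illustrative choices:

```python
import numpy as np

def proj_ball(x, center, r):
    """Euclidean projection onto the ball ||x - center|| <= r."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= r else center + r * d / n

def proj_box(x, lo, hi):
    """Euclidean projection onto the box lo <= x <= hi."""
    return np.clip(x, lo, hi)

# Generalized Heron problem (illustrative instance): find x in the box C0
# minimizing the sum of distances to k = 3 balls.
balls = [(np.array([5.0, 0.0]), 1.0),
         (np.array([0.0, 5.0]), 1.0),
         (np.array([6.0, 6.0]), 0.5)]
lo, hi = np.array([0.0, 0.0]), np.array([2.0, 2.0])   # the box C0

x = np.array([1.0, 1.0])   # feasible start
eps = 1e-10                # guards the Weiszfeld-type weights against d_i = 0
for _ in range(200):
    projs = [proj_ball(x, c, r) for c, r in balls]
    dists = [max(np.linalg.norm(x - p), eps) for p in projs]
    # MM update: dist(x, C_i) <= ||x - p_i||, which in turn is majorized by a
    # quadratic at the current iterate; the resulting surrogate is minimized
    # by projecting a weighted average of the p_i onto C0.
    w = np.array([1.0 / d for d in dists])
    x_new = proj_box((w[:, None] * np.array(projs)).sum(axis=0) / w.sum(), lo, hi)
    if np.linalg.norm(x_new - x) < 1e-12:
        break
    x = x_new

obj = sum(np.linalg.norm(x - proj_ball(x, c, r)) for c, r in balls)
print("solution:", np.round(x, 4), "objective:", round(obj, 4))
```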

    Precursors prior to Type IIn supernova explosions are common: precursor rates, properties, and correlations

    There is a growing number of supernovae (SNe), mainly of Type IIn, which present an outburst prior to their presumably final explosion. These precursors may affect the SN display, and are likely related to some poorly charted phenomena in the final stages of stellar evolution. Here we present a sample of 16 SNe IIn for which we have Palomar Transient Factory (PTF) observations obtained prior to the SN explosion. By coadding these pre-explosion images in time bins, we search for precursor events. We find five Type IIn SNe that likely have at least one possible precursor event, three of which are reported here for the first time. For each SN we calculate the control time. Based on this analysis we find that precursor events among SNe IIn are common: at the one-sided 99% confidence level, more than 50% of SNe IIn have at least one pre-explosion outburst that is brighter than absolute magnitude -14, taking place up to 1/3 yr prior to the SN explosion. The average rate of such precursor events during the year prior to the SN explosion is likely larger than one per year, and fainter precursors are possibly even more common. We also find possible correlations between the integrated luminosity of the precursor, and the SN total radiated energy, peak luminosity, and rise time. These correlations are expected if the precursors are mass-ejection events, and the early-time light curve of these SNe is powered by interaction of the SN shock and ejecta with optically thick circumstellar material. Comment: 15 pages, 20 figures, submitted to ApJ
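    The core of the search is coadding pre-explosion data in time bins so that a precursor too faint for a single epoch becomes significant in the binned average (the signal-to-noise ratio grows roughly as the square root of the number of epochs). A schematic version on synthetic photometry (all numbers invented; the real search coadds at the image level, not on extracted fluxes):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic pre-explosion photometry (flux units where the SN peak = 1):
# 360 epochs of noise plus a faint precursor hidden in days -120..-90,
# at roughly the single-epoch noise level.
t = np.sort(rng.uniform(-365, 0, 360))        # days relative to explosion
sigma = 0.05                                  # per-epoch flux uncertainty
flux = rng.normal(0, sigma, t.size)
flux[(t > -120) & (t < -90)] += 0.05          # injected precursor

# Coadd in 30-day bins; with equal per-epoch errors the inverse-variance
# average is a plain mean, and the bin error shrinks as sigma / sqrt(N).
edges = np.arange(-360, 1, 30)
for a, b in zip(edges[:-1], edges[1:]):
    m = (t >= a) & (t < b)
    if m.sum() < 3:
        continue
    coadd = flux[m].mean()
    err = sigma / np.sqrt(m.sum())
    flag = "  <-- candidate precursor" if coadd > 3 * err else ""
    print(f"[{a:4.0f}, {b:4.0f}) days: {coadd:+.4f} +/- {err:.4f}{flag}")
```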

    Robustness and Generalization

    We derive generalization bounds for learning algorithms based on their robustness: the property that if a testing sample is "similar" to a training sample, then the testing error is close to the training error. This provides a novel approach, different from complexity or stability arguments, to studying the generalization of learning algorithms. We further show that a weak notion of robustness is both sufficient and necessary for generalizability, which implies that robustness is a fundamental property for learning algorithms to work.
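    A loose empirical proxy for this notion of robustness: partition the input space into K cells and measure the largest train-versus-test loss gap within any shared cell. The sketch below does this for a toy threshold classifier; comparing cell means is a simplification of the per-sample definition in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy setup: 1-D inputs in [0, 1], true label 1 if x > 0.5, a fixed threshold
# classifier at 0.48, and 0/1 loss.
def loss(x, y):
    return ((x > 0.48).astype(int) != y).astype(float)

x_tr = rng.uniform(0, 1, 200); y_tr = (x_tr > 0.5).astype(int)
x_te = rng.uniform(0, 1, 200); y_te = (x_te > 0.5).astype(int)

# Partition [0, 1] into K equal cells and take the largest train/test loss
# gap within any cell that contains samples from both sets.
K = 20
cells_tr = np.minimum((x_tr * K).astype(int), K - 1)
cells_te = np.minimum((x_te * K).astype(int), K - 1)

eps = 0.0
for c in range(K):
    tr, te = cells_tr == c, cells_te == c
    if tr.any() and te.any():
        gap = abs(loss(x_tr[tr], y_tr[tr]).mean() - loss(x_te[te], y_te[te]).mean())
        eps = max(eps, gap)

print(f"K = {K} cells, empirical robustness epsilon ~ {eps:.3f}")
print(f"train error {loss(x_tr, y_tr).mean():.3f}, "
      f"test error {loss(x_te, y_te).mean():.3f}")
```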