3,889 research outputs found

    Lower-bounding procedures for the 2-dimensional cell suppression problem

    To protect confidential data from disclosure, statistical offices apply a technique called cell suppression, which consists of suppressing data from the statistical tables they publish. Because some row and column subtotals are published, omitting just the confidential values does not always guarantee that they cannot be disclosed or estimated within a narrow range. Therefore, to protect confidential data it is often necessary to make complementary suppressions, that is, to also suppress values that are not confidential. Assigning a cost to every complementary suppression, the cell suppression problem is that of finding a set of complementary suppressions with minimum total cost. In this paper new necessary protection conditions are presented. Combining these new conditions with the ones known from the literature, new lower-bounding methods for the cell suppression problem are developed. Theoretical dominance results are proven, and computational experience is reported for randomly generated tables.
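    As a concrete illustration of why complementary suppressions are needed, the toy sketch below (assuming NumPy and SciPy are available; it is not the paper's lower-bounding procedure) builds a small two-dimensional table with published marginals, shows that suppressing only the sensitive cell lets an attacker recover it exactly, and then uses two linear programs to compute the interval an attacker can still infer once complementary suppressions are added.

```python
# Illustrative only: a toy 3x3 frequency table showing why complementary
# suppressions are needed. Not the bounding procedures from the paper.
import numpy as np
from scipy.optimize import linprog

table = np.array([[20, 15, 30],
                  [10, 25,  5],
                  [ 8, 12, 20]])
row_totals = table.sum(axis=1)   # published marginals
col_totals = table.sum(axis=0)   # published marginals

# Suppressing only the sensitive cell (0, 0) is useless: its value is
# recovered exactly from the published row total.
print(row_totals[0] - table[0, 1] - table[0, 2])   # -> 20

# Complementary suppressions: hide (0, 1), (1, 0), (1, 1) as well.
suppressed = [(0, 0), (0, 1), (1, 0), (1, 1)]

# Attacker's best bounds on cell (0, 0): min/max its value subject to the
# published marginals and non-negativity of the hidden entries.
n = len(suppressed)
A_eq, b_eq = [], []
for i in range(2):                       # rows 0 and 1 contain hidden cells
    A_eq.append([1.0 if r == i else 0.0 for (r, c) in suppressed])
    b_eq.append(row_totals[i] - sum(table[i, c] for c in range(3)
                                    if (i, c) not in suppressed))
for j in range(2):                       # columns 0 and 1 contain hidden cells
    A_eq.append([1.0 if c == j else 0.0 for (r, c) in suppressed])
    b_eq.append(col_totals[j] - sum(table[r, j] for r in range(3)
                                    if (r, j) not in suppressed))

obj = np.zeros(n); obj[0] = 1.0          # objective: value of cell (0, 0)
lo = linprog( obj, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * n, method="highs")
hi = linprog(-obj, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * n, method="highs")
print(lo.fun, -hi.fun)                   # -> 0.0 30.0: only a wide interval leaks
```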

    Cell suppression problem: A genetic-based approach

    Cell suppression is one of the most frequently used techniques to prevent the disclosure of sensitive data in statistical tables. Finding the minimum-cost set of nonsensitive entries to suppress, along with the sensitive ones, in order to make a table safe for publication is an NP-hard problem, denoted the cell suppression problem (CSP). In this paper, we present GenSup, a new heuristic for the CSP, which combines the general features of genetic algorithms with safety conditions derived by several authors. The safety conditions are used to develop fast procedures to generate multiple initial solutions and also to recombine, perturb and repair solutions in order to improve their quality. The results obtained for 300 tables, some with more than 90,000 entries, show that GenSup is very effective at finding low-cost sets of complementary suppressions to protect confidential data in two-dimensional tables.
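    The sketch below is a generic genetic-algorithm skeleton for this kind of set-selection problem, not GenSup itself: the `is_safe` and `repair` callbacks are hypothetical placeholders standing in for the safety conditions and repair procedures the abstract describes, and all parameter values are illustrative.

```python
# Generic GA skeleton for choosing complementary suppressions (NOT GenSup):
# 'cost' gives the cost of suppressing each nonsensitive cell; 'is_safe' and
# 'repair' are placeholder callbacks for the table-safety conditions.
import random

def genetic_csp(n_cells, cost, is_safe, repair,
                pop_size=50, generations=200, p_mut=0.02, seed=0):
    rng = random.Random(seed)

    def random_solution():
        # Start from a sparse random suppression pattern, repaired to safety.
        return repair([rng.random() < 0.1 for _ in range(n_cells)])

    def fitness(sol):
        # Total cost of the complementary suppressions (lower is better);
        # unsafe patterns are heavily penalised so they die out.
        total = sum(c for c, s in zip(cost, sol) if s)
        return total if is_safe(sol) else total + sum(cost)

    pop = [random_solution() for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            # Tournament selection of two parents.
            p1 = min(rng.sample(pop, 3), key=fitness)
            p2 = min(rng.sample(pop, 3), key=fitness)
            # Uniform crossover followed by bit-flip mutation.
            child = [a if rng.random() < 0.5 else b for a, b in zip(p1, p2)]
            child = [(not g) if rng.random() < p_mut else g for g in child]
            new_pop.append(repair(child))   # restore safety before evaluation
        pop = new_pop
    return min(pop, key=fitness)
```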

    Exact disclosure prevention in two-dimensional statistical tables

    We propose new formulations for the exact disclosure problem and develop Lagrangian schemes that rely on shortest-path problems to generate near-optimal solutions. Computational experience is reported for 550 tables with up to 40,000 cells. A proven optimal solution was obtained for 95% of the instances, and for each remaining instance a near-optimal solution was computed together with an upper bound on its deviation from the optimum.
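    To show the general shape of a Lagrangian scheme built on shortest-path subproblems, the sketch below dualises the single side constraint of a resource-constrained shortest path and updates the multiplier by subgradient steps. It is a generic illustration under those assumptions, not the paper's disclosure formulation; the graph and step rule are invented for the example.

```python
# Hedged sketch: Lagrangian relaxation whose subproblem is an ordinary
# shortest path (generic resource-constrained shortest path, not the
# disclosure formulation of the paper).
import heapq

def shortest_path(n, arcs, weight):
    # Dijkstra from node 0 to node n-1; 'weight' maps an arc to its current cost.
    dist = [float("inf")] * n
    prev = [None] * n
    dist[0] = 0.0
    heap = [(0.0, 0)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for arc in arcs:
            tail, head = arc[0], arc[1]
            if tail == u and d + weight(arc) < dist[head]:
                dist[head] = d + weight(arc)
                prev[head] = arc
                heapq.heappush(heap, (dist[head], head))
    path, v = [], n - 1
    while prev[v] is not None:
        path.append(prev[v])
        v = prev[v][0]
    return list(reversed(path))

def lagrangian_bound(n, arcs, budget, iters=100):
    # Dualise "total resource <= budget" with multiplier lam >= 0; each
    # subproblem is a shortest path with modified arc costs.
    lam, best_lb = 0.0, float("-inf")
    for k in range(1, iters + 1):
        path = shortest_path(n, arcs, lambda arc: arc[2] + lam * arc[3])
        cost = sum(arc[2] for arc in path)
        resource = sum(arc[3] for arc in path)
        best_lb = max(best_lb, cost + lam * (resource - budget))  # valid lower bound
        lam = max(0.0, lam + (resource - budget) / k)             # subgradient step
    return best_lb

# Arcs are (tail, head, cost, resource); bound the cheapest 0 -> 3 path
# that uses at most 3 units of resource.
arcs = [(0, 1, 1, 3), (0, 2, 4, 1), (1, 3, 1, 3), (2, 3, 1, 1)]
print(lagrangian_bound(4, arcs, budget=3))
```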

    Additional extensions to the NASCAP computer code, volume 2

    Particular attention is given to comparison of the actual response of the SCATHA (Spacecraft Charging AT High Altitudes) P78-2 satellite with theoretical (NASCAP) predictions. Extensive comparisons for a variety of environmental conditions confirm the validity of the NASCAP model. A summary of the capabilities and range of validity of NASCAP is presented, with extensive reference to previously published applications. It is shown that NASCAP is capable of providing quantitatively accurate results when the object and environment are adequately represented and fall within the range of conditions for which NASCAP was intended. Three-dimensional electric field effects play an important role in determining the potential of dielectric surfaces and electrically isolated conducting surfaces, particularly in the presence of artificially imposed high voltages. A theory for such phenomena is presented and applied to the active control experiments carried out on SCATHA, as well as other space and laboratory experiments. Finally, some preliminary work toward modeling large spacecraft in polar Earth orbit is presented. An initial physical model is presented, including charge emission. A simple code based upon the model is described along with code test results.
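    For readers unfamiliar with the charging problem these codes address, the toy sketch below balances ambient electron and ion currents on a negatively charged surface using textbook probe-theory expressions and solves for the equilibrium potential in eclipse. It is purely illustrative under those assumptions and is not the NASCAP model or the polar-orbit code described above; the plasma parameters are placeholders.

```python
# Toy current-balance estimate of an equilibrium surface potential in eclipse
# (standard probe theory for a negatively charged sphere). Illustrative only;
# this is not NASCAP's charging model.
import math
from scipy.optimize import brentq

E_CHARGE = 1.602e-19      # C
K_B = 1.381e-23           # J/K
M_E = 9.109e-31           # kg (electron)
M_I = 1.673e-27           # kg (proton)

def net_current(phi, n=1e6, T_e=2.0e7, T_i=2.0e7, area=1.0):
    """Net current (A) onto a surface at potential phi < 0 volts
    (placeholder density n in m^-3 and temperatures in K)."""
    # Thermal (random) current densities of each species.
    j_e = E_CHARGE * n * math.sqrt(K_B * T_e / (2.0 * math.pi * M_E))
    j_i = E_CHARGE * n * math.sqrt(K_B * T_i / (2.0 * math.pi * M_I))
    # Electrons are repelled (Boltzmann factor); ions are attracted (OML sphere).
    i_e = -j_e * area * math.exp(E_CHARGE * phi / (K_B * T_e))
    i_i = j_i * area * (1.0 - E_CHARGE * phi / (K_B * T_i))
    return i_e + i_i

# Equilibrium: the potential at which electron and ion collection balance.
phi_eq = brentq(net_current, -1e5, -1.0)
print(f"equilibrium potential ~ {phi_eq / 1e3:.1f} kV")
```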

    Performance and structure of single-mode bosonic codes

    The early Gottesman, Kitaev, and Preskill (GKP) proposal for encoding a qubit in an oscillator has recently been followed by cat- and binomial-code proposals. Numerically optimized codes have also been proposed, and we introduce new codes of this type here. These codes have yet to be compared using the same error model; we provide such a comparison by determining the entanglement fidelity of all codes with respect to the bosonic pure-loss channel (i.e., photon loss) after the optimal recovery operation. We then compare achievable communication rates of the combined encoding-error-recovery channel by calculating the channel's hashing bound for each code. Cat and binomial codes perform similarly, with binomial codes outperforming cat codes at small loss rates. Despite not being designed to protect against the pure-loss channel, GKP codes significantly outperform all other codes for most values of the loss rate. We show that the performance of GKP and some binomial codes increases monotonically with increasing average photon number of the codes. In order to corroborate our numerical evidence of the cat/binomial/GKP order of performance occurring at small loss rates, we analytically evaluate the quantum error-correction conditions of those codes. For GKP codes, we find an essential singularity in the entanglement fidelity in the limit of vanishing loss rate. In addition to comparing the codes, we draw parallels between binomial codes and discrete-variable systems. First, we characterize one- and two-mode binomial codes as well as multi-qubit permutation-invariant codes in terms of spin-coherent states. Such a characterization allows us to introduce check operators and error-correction procedures for binomial codes. Second, we introduce a generalization of spin-coherent states, extending our characterization to qudit binomial codes and yielding a new multi-qudit code. Comment: 34 pages, 11 figures, 4 tables. v3: published version. See related talk at https://absuploads.aps.org/presentation.cfm?pid=1351
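    As a minimal numerical companion to the error-correction conditions mentioned above (assuming only NumPy; this is not the paper's entanglement-fidelity or hashing-bound calculation), the sketch below checks the Knill-Laflamme conditions for the smallest binomial code, with logical states (|0⟩ + |4⟩)/√2 and |2⟩, against the error set {identity, single photon loss}.

```python
# Verify the Knill-Laflamme conditions <iL| Ek^dag El |jL> = c_kl * delta_ij
# for the smallest binomial code against the errors {I, a}.
import numpy as np

dim = 10                                    # truncated Fock space
a = np.diag(np.sqrt(np.arange(1, dim)), 1)  # annihilation operator

ket = lambda n: np.eye(dim)[:, n]
zero_L = (ket(0) + ket(4)) / np.sqrt(2)     # |0L> = (|0> + |4>)/sqrt(2)
one_L = ket(2)                              # |1L> = |2>
codewords = [zero_L, one_L]
errors = [np.eye(dim), a]                   # no loss, one photon lost

for k, Ek in enumerate(errors):
    for l, El in enumerate(errors):
        M = np.array([[wi @ Ek.conj().T @ El @ wj for wj in codewords]
                      for wi in codewords])
        # Off-diagonal elements must vanish; diagonal elements must be equal.
        assert np.allclose(M[0, 1], 0) and np.allclose(M[1, 0], 0)
        assert np.isclose(M[0, 0], M[1, 1])
        print(f"E_{k}^dag E_{l}: c = {M[0, 0]:.3f}")
```

    The check passes because both codewords have the same mean photon number (2) and single photon loss maps them onto orthogonal states, which is exactly the structure the spin-coherent-state characterization above exploits.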

    Accurate Transfer Maps for Realistic Beamline Elements: Part I, Straight Elements

    The behavior of orbits in charged-particle beam transport systems, including both linear and circular accelerators as well as final focus sections and spectrometers, can depend sensitively on nonlinear fringe-field and high-order-multipole effects in the various beamline elements. The inclusion of these effects requires a detailed and realistic model of the interior and fringe fields, including their high spatial derivatives. A collection of surface-fitting methods has been developed for extracting this information accurately from 3-dimensional field data on a grid, as provided by various 3-dimensional finite-element field codes. Based on these realistic field models, Lie or other methods may be used to compute accurate design orbits and accurate transfer maps about these orbits. Part I of this work presents a treatment of straight-axis magnetic elements, while Part II will treat bending dipoles with large sagitta. An exactly soluble but numerically challenging model field is used to provide a rigorous collection of performance benchmarks. Comment: Accepted to PRST-AB. Changes: minor figure modifications, reference added, typos corrected.
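    To make the notion of a transfer map concrete, the sketch below integrates trajectories through an idealized hard-edge quadrupole and extracts the linear map about the design orbit by finite differences, comparing it with the analytic focusing-plane matrix. This is only a toy under those assumptions; it uses none of the surface-fitting or Lie-algebraic machinery of the paper, and the strength and length values are placeholders.

```python
# Toy transfer-map extraction: track through a hard-edge quadrupole and
# recover the linear map about the design orbit by finite differences.
import numpy as np

K = 2.0      # quadrupole strength k (1/m^2), illustrative value
L = 0.5      # element length (m), illustrative value

def track(x0, xp0, n_steps=2000):
    """Integrate x'' = -K x through the element with classical RK4."""
    h = L / n_steps
    y = np.array([x0, xp0], dtype=float)
    f = lambda y: np.array([y[1], -K * y[0]])
    for _ in range(n_steps):
        k1 = f(y); k2 = f(y + 0.5*h*k1); k3 = f(y + 0.5*h*k2); k4 = f(y + h*k3)
        y = y + (h / 6.0) * (k1 + 2*k2 + 2*k3 + k4)
    return y

# Linear transfer map about the design orbit, by central finite differences.
eps = 1e-6
M = np.column_stack([(track(eps, 0) - track(-eps, 0)) / (2 * eps),
                     (track(0, eps) - track(0, -eps)) / (2 * eps)])

# Analytic focusing-plane matrix for a hard-edge quadrupole.
w = np.sqrt(K)
M_exact = np.array([[np.cos(w*L),     np.sin(w*L)/w],
                    [-w*np.sin(w*L),  np.cos(w*L)]])
print(np.max(np.abs(M - M_exact)))   # agreement at the integrator's accuracy
```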