
    Differential Privacy for Relational Algebra: Improving the Sensitivity Bounds via Constraint Systems

    Differential privacy is a modern approach in privacy-preserving data analysis to control the amount of information that can be inferred about an individual by querying a database. The most common techniques are based on the introduction of probabilistic noise, often drawn from a Laplace distribution whose scale is parametrized by the sensitivity of the query. In order to maximize the utility of the query, it is crucial to estimate the sensitivity as precisely as possible. In this paper we consider relational algebra, the classical language for queries in relational databases, and we propose a method for computing a bound on the sensitivity of queries in an intuitive and compositional way. We use constraint-based techniques to accumulate the information on the possible values for attributes provided by the various components of the query, thus making it possible to compute tight bounds on the sensitivity.
    Comment: In Proceedings QAPL 2012, arXiv:1207.055
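    To make the noise-calibration step concrete, here is a minimal sketch of the Laplace mechanism that a sensitivity bound feeds into; the counting query, its sensitivity of 1, and the epsilon value are illustrative assumptions, not the paper's constraint-based analysis. A tighter sensitivity bound directly shrinks the noise scale, which is why tight bounds improve utility.
    ```python
    import numpy as np

    def laplace_mechanism(true_answer, sensitivity, epsilon, rng=None):
        """Return a differentially private answer by adding Laplace noise.

        The noise scale is sensitivity / epsilon, so a tighter bound on
        the query's sensitivity directly reduces the noise needed for a
        given privacy budget epsilon.
        """
        if rng is None:
            rng = np.random.default_rng()
        return true_answer + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

    # Example: a counting query has sensitivity 1, since adding or removing
    # one individual changes the count by at most 1.
    private_count = laplace_mechanism(1042, sensitivity=1.0, epsilon=0.5)
    ```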

    Prospects for Measuring Cosmic Microwave Background Spectral Distortions in the Presence of Foregrounds

    Measurements of cosmic microwave background spectral distortions have profound implications for our understanding of physical processes taking place over a vast window in cosmological history. Foreground contamination is unavoidable in such measurements and detailed signal-foreground separation will be necessary to extract cosmological science. We present MCMC-based spectral distortion detection forecasts in the presence of Galactic and extragalactic foregrounds for a range of possible experimental configurations, focusing on the Primordial Inflation Explorer (PIXIE) as a fiducial concept. We consider modifications to the baseline PIXIE mission (operating 12 months in distortion mode), searching for optimal configurations using a Fisher approach. Using only spectral information, we forecast an extended PIXIE mission to detect the expected average non-relativistic and relativistic thermal Sunyaev-Zeldovich distortions at high significance ($194\sigma$ and $11\sigma$, respectively), even in the presence of foregrounds. The $\Lambda$CDM Silk damping $\mu$-type distortion is not detected without additional modifications of the instrument or external data. Galactic synchrotron radiation is the most problematic source of contamination in this respect, an issue that could be mitigated by combining PIXIE data with future ground-based observations at low frequencies ($\nu < 15$-$30$ GHz). Assuming moderate external information on the synchrotron spectrum, we project an upper limit of $|\mu| < 3.6\times 10^{-7}$ (95% c.l.), slightly more than one order of magnitude above the fiducial $\Lambda$CDM signal from the damping of small-scale primordial fluctuations, but a factor of $\simeq 250$ improvement over the current upper limit from COBE/FIRAS. This limit could be further reduced to $|\mu| < 9.4\times 10^{-8}$ (95% c.l.) with more optimistic assumptions about low-frequency information. (Abridged)
    Comment: 16 pages, 11 figures, submitted to MNRAS. Fisher code available at https://github.com/mabitbol/sd_foregrounds. Updated with published version.
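    As a rough illustration of the Fisher approach mentioned above, the sketch below inverts a Fisher matrix built from toy spectral shapes and a flat per-channel noise; the shapes, noise level, and channel layout are placeholders, not the PIXIE instrument or foreground model (the authors' actual Fisher code is at the linked sd_foregrounds repository).
    ```python
    import numpy as np

    # Toy Fisher forecast: parameter errors from a model spectrum S(nu; p)
    # observed in frequency channels with per-channel noise sigma.
    nu = np.linspace(30e9, 3e12, 400)        # channel frequencies [Hz]
    sigma = np.full_like(nu, 5e-26)          # noise per channel (placeholder)

    # Derivatives dS/dp_i for two made-up smooth spectral shapes standing
    # in for the y- and mu-distortion templates.
    dS_dy = 1e-25 * (nu / 1e11) ** 2 * np.exp(-nu / 1e12)
    dS_dmu = 1e-25 * (nu / 1e11) * np.exp(-nu / 5e11)
    D = np.vstack([dS_dy, dS_dmu])

    F = (D / sigma) @ (D / sigma).T          # Fisher matrix F_ij
    cov = np.linalg.inv(F)                   # Cramer-Rao covariance bound
    forecast_1sigma = np.sqrt(np.diag(cov))  # forecast parameter errors
    ```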

    Network meta-analysis of diagnostic test accuracy studies identifies and ranks the optimal diagnostic tests and thresholds for healthcare policy and decision making

    Objective: Network meta-analyses have extensively been used to compare the effectiveness of multiple interventions for healthcare policy and decision-making. However, methods for evaluating the performance of multiple diagnostic tests are less established. In a decision-making context, we are often interested in comparing and ranking the performance of multiple diagnostic tests, at varying levels of test thresholds, in one simultaneous analysis. Study design and setting: Motivated by an example of cognitive impairment diagnosis following stroke, we synthesized data from 13 studies assessing the accuracy of two diagnostic tests: Mini-Mental State Examination (MMSE) and Montreal Cognitive Assessment (MoCA), at two test thresholds each: MMSE <25/30 and <27/30, and MoCA <22/30 and <26/30. Using Markov chain Monte Carlo (MCMC) methods, we fitted a bivariate network meta-analysis model incorporating constraints on increasing test threshold and accounting for the correlations between multiple test accuracy measures from the same study. Results: We developed and successfully fitted a model comparing multiple test/threshold combinations while imposing threshold constraints. Using this model, we found that MoCA at threshold <26/30 appeared to have the best true positive rate, whilst MMSE at threshold <25/30 appeared to have the best true negative rate. Conclusion: The combined analysis of multiple tests at multiple thresholds allowed for more rigorous comparisons between competing diagnostic tests for decision making.
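    One common way to impose the monotone threshold constraint inside an MCMC model is to parameterize the accuracy at the higher cutoff as the accuracy at the lower cutoff plus a non-negative increment on the logit scale. The sketch below shows only that reparameterization idea with made-up numbers; it is not the authors' full bivariate model or the stroke data set.
    ```python
    import numpy as np

    def expit(x):
        return 1.0 / (1.0 + np.exp(-x))

    rng = np.random.default_rng(0)

    # TPR parameter at the lower cutoff (e.g., MMSE < 25/30), logit scale.
    logit_tpr_low = 0.8
    # A non-negative increment guarantees the TPR at the higher cutoff
    # (e.g., MMSE < 27/30) is at least as large; in an MCMC fit the
    # increment would get a prior restricted to [0, inf).
    delta = abs(rng.normal(0.5, 0.2))
    logit_tpr_high = logit_tpr_low + delta

    tpr_low, tpr_high = expit(logit_tpr_low), expit(logit_tpr_high)
    assert tpr_high >= tpr_low   # ordering holds by construction
    ```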

    Introducing the sequential linear programming level-set method for topology optimization

    The authors would like to thank the Numerical Analysis Group at the Rutherford Appleton Laboratory for their FORTRAN HSL packages (HSL, a collection of Fortran codes for large-scale scientific computation; see http://www.hsl.rl.ac.uk/). Dr H Alicia Kim acknowledges the support from the Engineering and Physical Sciences Research Council, grant number EP/M002322/1. Peer reviewed. Publisher PDF.

    Effects of foreign acquisitions on financial constraints, productivity and investment in R&D of target firms in China

    This paper examines whether foreign acquisitions lessen financial constraints and improve investment in research & development (R&D) and productivity of target firms in China, based on a sample of 914 cross-border mergers and acquisitions (CBM&A) over the period 1994-2011. Using investment to cash-flow sensitivity to measure financial constraints, we find that foreign acquisitions in China are associated with a reduction of target firms' financial constraints, irrespective of the ownership type of the target firm. However, the extent of financial constraint reduction is more pronounced for non-SOEs compared to state-owned enterprises (SOEs). This study also provides evidence that foreign acquisitions improve Chinese target firms' productivity and investment in R&D.
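    For readers unfamiliar with the measure, investment to cash-flow sensitivity is typically estimated by regressing investment on cash flow; interacting cash flow with an acquisition indicator then tests whether acquisition weakens the sensitivity. The sketch below uses synthetic data and an illustrative specification, not the paper's sample or econometric design.
    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 914                              # echoes the sample size only by analogy

    cash_flow = rng.normal(0.10, 0.05, n)
    acquired = rng.integers(0, 2, n)     # 1 if the firm was acquired
    investment = (0.05 + 0.6 * cash_flow
                  - 0.3 * acquired * cash_flow   # weaker sensitivity post-acquisition
                  + rng.normal(0.0, 0.02, n))

    X = sm.add_constant(np.column_stack([cash_flow, acquired, acquired * cash_flow]))
    fit = sm.OLS(investment, X).fit()
    # A negative coefficient on the interaction term (last entry) indicates
    # that acquisition reduces investment-cash-flow sensitivity, i.e. it
    # relaxes financial constraints.
    print(fit.params)
    ```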

    The LEECH Exoplanet Imaging Survey: Limits on Planet Occurrence Rates Under Conservative Assumptions

    We present the results of the largest $L^{\prime}$ ($3.8~\mu$m) direct imaging survey for exoplanets to date, the Large Binocular Telescope Interferometer (LBTI) Exozodi Exoplanet Common Hunt (LEECH). We observed 98 stars with spectral types from B to M. Cool planets emit a larger share of their flux in $L^{\prime}$ compared to shorter wavelengths, affording LEECH an advantage in detecting low-mass, old, and cold-start giant planets. We emphasize proximity over youth in our target selection, probing physical separations smaller than other direct imaging surveys. For FGK stars, LEECH outperforms many previous studies, placing tighter constraints on the hot-start planet occurrence frequency interior to $\sim$20 au. For less luminous, cold-start planets, LEECH provides the best constraints on giant-planet frequency interior to $\sim$20 au around FGK stars. Direct imaging survey results depend sensitively on both the choice of evolutionary model (e.g., hot- or cold-start) and assumptions (explicit or implicit) about the shape of the underlying planet distribution, in particular its radial extent. Artificially low limits on the planet occurrence frequency can be derived when the shape of the planet distribution is assumed to extend to very large separations, well beyond typical protoplanetary dust-disk radii ($\lesssim$50 au), and when hot-start models are used exclusively. We place a conservative upper limit on the planet occurrence frequency using cold-start models and planetary population distributions that do not extend beyond typical protoplanetary dust-disk radii. We find that $\lesssim 90\%$ of FGK systems can host a 7 to 10 $M_{\mathrm{Jup}}$ planet from 5 to 50 au. This limit leaves open the possibility that planets in this range are common.
    Comment: 31 pages, 13 figures, accepted to A
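    The logic of an occurrence-rate upper limit from a null (or near-null) result can be sketched as follows: given a per-star completeness to planets in the probed range, zero detections constrain the hosting fraction f through P(0 detections | f). The completeness values below are invented; in the real analysis they depend on the adopted evolutionary model, which is the paper's central point.
    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Invented per-star completeness to planets in the probed mass and
    # separation range; in practice this comes from injection-recovery
    # tests under a chosen evolutionary model (hot- vs cold-start).
    completeness = rng.uniform(0.05, 0.40, 98)
    N_eff = completeness.sum()           # effective number of surveyed stars

    # With zero detections, require P(0 | f) = exp(-f * N_eff) <= 0.05
    # (Poisson approximation) for the 95% upper limit on the hosting
    # fraction f.
    f_upper = -np.log(0.05) / N_eff
    print(f"95% upper limit on occurrence fraction: {f_upper:.2f}")
    ```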

    Really Natural Linear Indexed Type Checking

    Recent works have shown the power of linear indexed type systems for enforcing complex program properties. These systems combine linear types with a language of type-level indices, allowing more fine-grained analyses. Such systems have been fruitfully applied in diverse domains, including implicit complexity and differential privacy. A natural way to enhance the expressiveness of this approach is by allowing the indices to depend on runtime information, in the spirit of dependent types. This approach is used in DFuzz, a language for differential privacy. The DFuzz type system relies on an index language supporting real and natural number arithmetic over constants and variables. Moreover, DFuzz uses a subtyping mechanism to make types more flexible. By themselves, linearity, dependency, and subtyping each require delicate handling when performing type checking or type inference; their combination increases this challenge substantially, as the features can interact in non-trivial ways. In this paper, we study the type-checking problem for DFuzz. We show how we can reduce type checking for (a simple extension of) DFuzz to constraint solving over a first-order theory of naturals and real numbers, which, although undecidable, can often be handled in practice by standard numeric solvers.
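    As a flavor of the reduction, the sketch below hands a small mixed natural/real arithmetic constraint to the Z3 solver via its Python bindings; the constraint itself is invented for illustration and is not one generated by the DFuzz type checker.
    ```python
    from z3 import Int, Real, Solver, sat

    # An invented mixed natural/real constraint of the general flavor such
    # a reduction produces: does there exist a natural k and a real eps
    # with k >= 2, eps > 0.1, and k * eps <= 3?
    k = Int('k')
    eps = Real('eps')

    s = Solver()
    s.add(k >= 2, eps > 0.1, k * eps <= 3)   # nonlinear mixed arithmetic:
                                             # undecidable in general, but
                                             # this instance is easy
    if s.check() == sat:
        print(s.model())                     # e.g. k = 2, eps = 1/2
    else:
        print("no satisfying index assignment")
    ```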

    Human-Machine Collaborative Optimization via Apprenticeship Scheduling

    Coordinating agents to complete a set of tasks with intercoupled temporal and resource constraints is computationally challenging, yet human domain experts can solve these difficult scheduling problems using paradigms learned through years of apprenticeship. A process for manually codifying this domain knowledge within a computational framework is necessary to scale beyond the "single-expert, single-trainee" apprenticeship model. However, human domain experts often have difficulty describing their decision-making processes, causing the codification of this knowledge to become laborious. We propose a new approach for capturing domain-expert heuristics through a pairwise ranking formulation. Our approach is model-free and does not require enumerating or iterating through a large state space. We empirically demonstrate that this approach accurately learns multifaceted heuristics on a synthetic data set incorporating job-shop scheduling and vehicle routing problems, as well as on two real-world data sets consisting of demonstrations of experts solving a weapon-to-target assignment problem and a hospital resource allocation problem. We also demonstrate that policies learned from human scheduling demonstration via apprenticeship learning can substantially improve the efficiency of a branch-and-bound search for an optimal schedule. We employ this human-machine collaborative optimization technique on a variant of the weapon-to-target assignment problem. We demonstrate that this technique generates solutions substantially superior to those produced by human domain experts at a rate up to 9.5 times faster than an optimization approach and can be applied to optimally solve problems twice as complex as those solved by a human demonstrator.
    Comment: Portions of this paper were published in the Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI) in 2016 and in the Proceedings of Robotics: Science and Systems (RSS) in 2016. The paper consists of 50 pages with 11 figures and 4 tables.
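    The pairwise ranking formulation can be sketched as binary classification over feature differences: the expert's chosen task minus a rejected alternative is a positive example, and the reversed difference a negative one. Everything below (features, data, the linear model) is a synthetic stand-in for the paper's demonstrations.
    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    true_w = np.array([1.5, -0.8, 0.3])  # hidden "expert preference" weights

    # Synthetic demonstrations: the task the expert chose tends to score
    # higher under true_w than a rejected alternative.
    chosen = rng.normal(size=(500, 3)) + 0.5 * true_w
    rejected = rng.normal(size=(500, 3))

    # Pairwise ranking as classification of feature differences.
    X = np.vstack([chosen - rejected, rejected - chosen])
    y = np.hstack([np.ones(500), np.zeros(500)])
    ranker = LogisticRegression().fit(X, y)

    def best_task(candidate_features):
        """Score each candidate task and return the index of the best."""
        scores = candidate_features @ ranker.coef_.ravel()
        return int(np.argmax(scores))
    ```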