
    The data privacy matrix project: towards a global alignment of data privacy laws

    Data privacy is an expected right of most citizens around the world, but it faces many legislative challenges in a boundary-less cloud computing and World Wide Web environment. Despite its importance, there is limited research on the gaps and alignment of data privacy laws, and the legal side of the security ecosystem is in a constant effort to catch up. Recent history already offers examples of misalignment causing a great deal of confusion, such as the 'right to be forgotten' case of 2014, in which a Spanish man brought a complaint against Google Spain requesting the removal of a link to an article about an auction of his foreclosed home, for a debt he had subsequently paid; misalignment of data privacy laws further complicated the case. This paper introduces the Waikato Data Privacy Matrix, our global project for the alignment of data privacy laws, focusing on Asia Pacific data privacy laws and their relationships with those of the European Union and the USA. It also suggests potential solutions to some of the issues that may arise when a breach of data privacy occurs, so that individuals have their data privacy protected across boundaries on the Web. With the increase in data processing and storage across different jurisdictions and regions (e.g. public cloud computing), the Waikato Data Privacy Matrix empowers businesses using or providing cloud services to understand the different data privacy requirements across the globe, paving the way for increased cloud adoption and usage.

    Generating Non-Linear Interpolants by Semidefinite Programming

    Interpolation-based techniques have been widely and successfully applied in the verification of hardware and software, e.g. in bounded model checking, CEGAR and SMT, where the hardest part is synthesizing the interpolants. Various approaches have been proposed for discovering interpolants for propositional logic, quantifier-free fragments of first-order theories and their combinations. However, little work in the literature focuses on discovering polynomial interpolants. In this paper, we provide an approach for constructing non-linear interpolants based on semidefinite programming, and show by example how to apply such results to the verification of programs.
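
    As a loose sketch of the core idea (not the authors' algorithm), the search for a polynomial separating two sets of program states can be cast as a semidefinite program. The Python example below, assuming cvxpy and two made-up point sets A and B standing in for the state sets, looks for a convex quadratic that is positive on A and negative on B; the actual approach works on symbolic formulas via sum-of-squares-style constraints rather than sampled points.

```python
# Loose sketch (not the paper's algorithm): find a convex quadratic
#   f(x) = x^T P x + q^T x + r,  with P positive semidefinite,
# such that f >= 1 on the hypothetical point set A and f <= -1 on B.
# The PSD constraint on P makes this a semidefinite program; f then acts
# as a simple polynomial separator (interpolant-like) between the two sets.
import numpy as np
import cvxpy as cp

# Made-up sampled states standing in for the two formulas to separate.
A = np.array([[1.0, 1.0], [2.0, 0.5], [1.5, 2.0]])      # f should be >= 1 here
B = np.array([[-1.0, -1.0], [-2.0, 0.0], [0.0, -2.0]])  # f should be <= -1 here

n = A.shape[1]
P = cp.Variable((n, n), PSD=True)  # positive semidefinite quadratic part
q = cp.Variable(n)
r = cp.Variable()

def f(x):
    # For a fixed point x, f(x) is affine in the unknowns (P, q, r).
    return x @ P @ x + q @ x + r

constraints = [f(a) >= 1 for a in A] + [f(b) <= -1 for b in B]
problem = cp.Problem(cp.Minimize(cp.norm(q)), constraints)  # feasibility is what matters
problem.solve()

print("status:", problem.status)
print("P =", P.value, "q =", q.value, "r =", r.value, sep="\n")
```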

    SMT-based Model Checking for Recursive Programs

    We present an SMT-based symbolic model checking algorithm for safety verification of recursive programs. The algorithm is modular and analyzes procedures individually. Unlike other SMT-based approaches, it maintains both over- and under-approximations of procedure summaries. Under-approximations are used to analyze procedure calls without inlining. Over-approximations are used to block infeasible counterexamples and to detect convergence to a proof. We show that for programs and properties over a decidable theory, the algorithm is guaranteed to find a counterexample if one exists. However, efficiency depends on an oracle for quantifier elimination (QE). For Boolean programs, the algorithm is a polynomial decision procedure, matching the worst-case bounds of the best BDD-based algorithms. For linear arithmetic (integers and rationals), we give an efficient instantiation of the algorithm by applying QE lazily. We use existing interpolation techniques to over-approximate QE and introduce Model Based Projection to under-approximate QE. Empirical evaluation on SV-COMP benchmarks shows that our algorithm improves significantly on the state of the art. (Originally published in the proceedings of CAV 2014.)
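
    As a rough illustration of the setting (not the authors' implementation), safety verification of this kind is commonly phrased as solving Constrained Horn Clauses (CHCs). The sketch below, using Z3's Python API with the spacer engine selected, encodes a made-up loop as CHCs and asks whether a state violating the assertion is reachable.

```python
# Rough illustration (not the paper's implementation): encode the program
#   x = 0; y = 0; while (*) { x++; y++; }  assert(x == y);
# as Constrained Horn Clauses and check safety with Z3's fixedpoint engine.
from z3 import Ints, IntVal, Function, IntSort, BoolSort, Fixedpoint, And, sat

fp = Fixedpoint()
fp.set(engine='spacer')  # SMT-based CHC solver in Z3

x, y = Ints('x y')
inv = Function('inv', IntSort(), IntSort(), BoolSort())  # loop summary / invariant
fp.register_relation(inv)
fp.declare_var(x, y)

fp.rule(inv(IntVal(0), IntVal(0)))     # initial state: x = y = 0
fp.rule(inv(x + 1, y + 1), inv(x, y))  # loop body increments both counters

# Query: is a state violating the assertion (x != y) reachable?
result = fp.query(And(inv(x, y), x != y))
print(result)  # 'unsat' means no such state is reachable, i.e. the assertion holds
if result == sat:
    print(fp.get_answer())  # derivation witnessing the reachable bad state
```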

    Isotopic analysis of faunal material from South Uist, Western Isles, Scotland

    This paper reports the results of stable isotope analysis of faunal bone collagen from a number of Iron Age and later sites on the island of South Uist, in the Western Isles, Scotland. This preliminary investigation into the isotopic signatures of the fauna is part of a larger project to model the interaction between humans, animals, and the broader environment in the Western Isles. The results demonstrate that the island fauna fall within the range of values expected for the UK, confirming the terrestrial herbivorous diets of cattle and sheep. The isotopic composition for pigs suggests that some of these animals had an omnivorous diet, whilst a single red deer value may indicate the consumption of marine foods, such as through grazing on seaweed. However, further analysis is needed to verify this anomalous isotopic ratio.

    Evolutionary trade-offs associated with loss of PmrB function in host-adapted Pseudomonas aeruginosa

    Pseudomonas aeruginosa colonises the upper airway of cystic fibrosis (CF) patients, providing a reservoir of host-adapted genotypes that subsequently establish chronic lung infection. We previously experimentally evolved P. aeruginosa in a murine model of respiratory tract infection and observed early-acquired mutations in pmrB, encoding the sensor kinase of a two-component system, that promoted establishment and persistence of infection. Here, using proteomics, we show downregulation of proteins involved in LPS biosynthesis, antimicrobial resistance and phenazine production in pmrB mutants, and upregulation of proteins involved in adherence, lysozyme resistance and inhibition of the chloride ion channel CFTR, relative to the wild-type strain LESB65. Accordingly, pmrB mutants are susceptible to antibiotic treatment but show enhanced adherence to airway epithelial cells, resistance to lysozyme treatment, and downregulation of host CFTR expression. We propose that P. aeruginosa pmrB mutations in CF patients are subject to an evolutionary trade-off, leading to enhanced colonisation potential, CFTR inhibition and resistance to host defences, but also to increased susceptibility to antibiotics.

    Developing a core outcome set for future infertility research: An international consensus development study

    STUDY QUESTION: Can a core outcome set to standardize outcome selection, collection and reporting across future infertility research be developed?
    SUMMARY ANSWER: A minimum data set, known as a core outcome set, has been developed for randomized controlled trials (RCTs) and systematic reviews evaluating potential treatments for infertility.
    WHAT IS KNOWN ALREADY: Complex issues, including a failure to consider the perspectives of people with fertility problems when selecting outcomes, variations in outcome definitions and the selective reporting of outcomes on the basis of statistical analysis, make the results of infertility research difficult to interpret.
    STUDY DESIGN, SIZE, DURATION: A three-round Delphi survey (372 participants from 41 countries) and a consensus development workshop (30 participants from 27 countries).
    PARTICIPANTS/MATERIALS, SETTING, METHODS: Healthcare professionals, researchers and people with fertility problems were brought together in an open and transparent process using formal consensus science methods.
    MAIN RESULTS AND THE ROLE OF CHANCE: The core outcome set consists of: viable intrauterine pregnancy confirmed by ultrasound (accounting for singleton, twin and higher multiple pregnancy); pregnancy loss (accounting for ectopic pregnancy, miscarriage, stillbirth and termination of pregnancy); live birth; gestational age at delivery; birthweight; neonatal mortality; and major congenital anomaly. Time to pregnancy leading to live birth should be reported when applicable.
    LIMITATIONS, REASONS FOR CAUTION: We used consensus development methods, which have inherent limitations, including the representativeness of the participant sample, Delphi survey attrition and an arbitrary consensus threshold.
    WIDER IMPLICATIONS OF THE FINDINGS: Embedding the core outcome set within RCTs and systematic reviews should ensure the comprehensive selection, collection and reporting of core outcomes. Research funding bodies, the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) statement, and over 80 specialty journals, including the Cochrane Gynaecology and Fertility Group, Fertility and Sterility and Human Reproduction, have committed to implementing this core outcome set.
    STUDY FUNDING/COMPETING INTEREST(S): This research was funded by the Catalyst Fund, Royal Society of New Zealand, Auckland Medical Research Fund and Maurice and Phyllis Paykel Trust. The funder had no role in the design and conduct of the study, the collection, management, analysis or interpretation of data, or manuscript preparation. B.W.J.M. is supported by a National Health and Medical Research Council Practitioner Fellowship (GNT1082548). S.B. was supported by a University of Auckland Foundation Seelye Travelling Fellowship. S.B. reports being the Editor-in-Chief of Human Reproduction Open and an editor of the Cochrane Gynaecology and Fertility Group. J.L.H.E. reports being the Editor Emeritus of Human Reproduction. J.M.L.K. reports research sponsorship from Ferring and Theramex. R.S.L. reports consultancy fees from Abbvie, Bayer, Ferring, Fractyl, Insud Pharma and Kindex, and research sponsorship from Guerbet and Hass Avocado Board. B.W.J.M. reports consultancy fees from Guerbet, iGenomix, Merck, Merck KGaA and ObsEva. C.N. reports being the Co-Editor-in-Chief of Fertility and Sterility and Section Editor of the Journal of Urology, research sponsorship from Ferring, and retaining a financial interest in NexHand. A.S. reports consultancy fees from Guerbet. E.H.Y.N. reports research sponsorship from Merck. N.L.V. reports consultancy and conference fees from Ferring, Merck and Merck Sharp and Dohme. The remaining authors declare no competing interests in relation to the work presented. All authors have completed the disclosure form.

    A new strategy for enhancing imputation quality of rare variants from next-generation sequencing data via combining SNP and exome chip data

    Background: Rare variants have gathered increasing attention as a possible alternative source of missing heritability. Since next-generation sequencing technology is not yet cost-effective for large-scale genomic studies, a widely used alternative approach is imputation. However, the imputation approach may be limited by the low accuracy of the imputed rare variants. To improve the imputation accuracy of rare variants, various approaches have been suggested, including increasing the sample size of the reference panel, using sequencing data from study-specific samples (i.e. specific populations), and using local reference panels built by genotyping or sequencing a subset of study samples. While these approaches mainly utilize reference panels, the imputation accuracy of rare variants can also be increased by using exome chips containing rare variants. The exome chip contains about 250,000 rare variants selected from the variants discovered in about 12,000 sequenced samples. If exome chip data are available for previously genotyped samples, a combined approach using a genotype panel of merged data, including exome chips and SNP chips, should increase the imputation accuracy of rare variants. Results: In this study, we describe a combined imputation that uses both exome chip and SNP chip data simultaneously as a genotype panel. The effectiveness and performance of the combined approach were demonstrated using a reference panel of 848 samples constructed from exome sequencing data from the T2D-GENES consortium and a genotype panel of 5,349 samples consisting of exome chip and SNP chip data. The combined approach increased imputation quality by up to 11 % and genomic coverage for rare variants by up to 117.7 % (MAF < 1 %), compared to imputation using the SNP chip alone. We also investigated the systematic effect of reference panels on imputation quality using five reference panels and three genotype panels. The best-performing approach was the combination of the study-specific reference panel and the genotype panel of combined data. Conclusions: Our study demonstrates that combined datasets, including SNP chips and exome chips, enhance both the imputation quality and the genomic coverage of rare variants.
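
    As a hedged illustration of how imputation quality is typically summarised (the study itself reports quality from its imputation software, not from this snippet), one common per-variant metric is the squared correlation between imputed dosages and masked true genotypes. The sketch below computes that metric for made-up data from two hypothetical runs, one using the SNP chip alone and one using the combined SNP and exome chip panel.

```python
# Hedged illustration (not the study's pipeline): per-variant imputation quality
# as the squared Pearson correlation (r^2) between masked true genotypes
# (0/1/2 copies of the minor allele) and imputed dosages.
import numpy as np

def imputation_r2(true_genotypes: np.ndarray, dosages: np.ndarray) -> float:
    """Squared correlation between masked true genotypes and imputed dosages."""
    if np.std(true_genotypes) == 0 or np.std(dosages) == 0:
        return float("nan")  # monomorphic in this sample; r^2 is undefined
    r = np.corrcoef(true_genotypes, dosages)[0, 1]
    return r ** 2

# Made-up data for one rare variant across ten masked samples.
truth          = np.array([0, 0, 0, 1, 0, 0, 2, 0, 0, 1])
snp_only       = np.array([0.1, 0.0, 0.2, 0.4, 0.1, 0.0, 1.1, 0.3, 0.0, 0.5])
snp_plus_exome = np.array([0.0, 0.0, 0.1, 0.9, 0.0, 0.0, 1.8, 0.1, 0.0, 0.9])

print("SNP chip only      r^2 =", round(imputation_r2(truth, snp_only), 3))
print("SNP + exome chips  r^2 =", round(imputation_r2(truth, snp_plus_exome), 3))
```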

    Genome-wide association and Mendelian randomisation analysis provide insights into the pathogenesis of heart failure

    Heart failure (HF) is a leading cause of morbidity and mortality worldwide. A small proportion of HF cases are attributable to monogenic cardiomyopathies, and existing genome-wide association studies (GWAS) have yielded only limited insights, leaving the observed heritability of HF largely unexplained. We report results from a GWAS meta-analysis of HF comprising 47,309 cases and 930,014 controls. Twelve independent variants at 11 genomic loci are associated with HF, all of which demonstrate one or more associations with coronary artery disease (CAD), atrial fibrillation, or reduced left ventricular function, suggesting shared genetic aetiology. Functional analysis of non-CAD-associated loci implicates genes involved in cardiac development (MYOZ1, SYNPO2L), protein homoeostasis (BAG3), and cellular senescence (CDKN1A). Mendelian randomisation analysis supports causal roles for several HF risk factors, and demonstrates CAD-independent effects for atrial fibrillation, body mass index, and hypertension. These findings extend our knowledge of the pathways underlying HF and may inform new therapeutic strategies.
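
    As a hedged sketch of the kind of analysis described (not the authors' exact pipeline), a standard Mendelian randomisation estimator is the inverse-variance-weighted (IVW) combination of per-variant Wald ratios computed from GWAS summary statistics. The example below applies it to made-up effect sizes for a handful of hypothetical variants.

```python
# Hedged sketch (not the authors' exact pipeline): inverse-variance-weighted
# Mendelian randomisation from summary statistics. Each variant contributes a
# Wald ratio beta_outcome / beta_exposure; the IVW estimate is their weighted
# average, with weights derived from the outcome standard errors.
import numpy as np

# Made-up per-variant effects of instruments on a risk factor (exposure)
# and on heart failure (outcome), with standard errors for the outcome effects.
beta_exposure = np.array([0.12, 0.08, 0.15, 0.10])
beta_outcome  = np.array([0.030, 0.018, 0.040, 0.022])
se_outcome    = np.array([0.010, 0.009, 0.012, 0.008])

wald_ratios = beta_outcome / beta_exposure
weights = (beta_exposure / se_outcome) ** 2  # inverse variance of each Wald ratio

ivw_estimate = np.sum(weights * wald_ratios) / np.sum(weights)
ivw_se = 1.0 / np.sqrt(np.sum(weights))

print(f"IVW causal effect estimate: {ivw_estimate:.3f} (SE {ivw_se:.3f})")
```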