Gatekeepers and Goalposts: The Need for a New Regulatory Paradigm for Whole Genome Sequence Results
The ability to obtain a person’s whole genome sequence for a cost of one thousand dollars is nearly here. Many clinicians expect that this will usher in an era of personalized medicine by allowing the development of individualized disease-risk profiles, preventive medicine strategies, and treatment options. However, it is not clear that the regulatory strategy that currently controls the approval and availability of more limited genetic tests—typically meant to investigate one or a small number of disease or other traits—provides a satisfactory framework for whole genome sequence testing.
This Perspective takes the position that the generation of whole genome sequence testing information needs to be treated differently from the tests and results associated with more traditional diagnostic assays. Part I considers the current regulatory environment and efforts to reform the oversight of genetic tests, in particular the question of whether consumers should be permitted to order whole genome sequence tests without the guidance of a health-care professional. Part II discusses how whole genome sequence tests differ from conventional genetic tests both in the vastly greater amount of information that is generated and in the ways the information can be interpreted and reinterpreted for different purposes at different times. Part III suggests that rather than using the current regulatory approach of concentrating on technical attributes of the whole genome sequence testing process, regulatory approaches should be directed to the tools needed to analyze and apply deoxyribonucleic acid (DNA) sequence information. Such efforts will safeguard patients from adverse outcomes associated with unreliable disease-risk prediction, while improving access to the perceived benefits of whole genome sequence testing.
U.K. HiGEM: impacts of desert dust radiative forcing in a high-resolution atmospheric GCM
This work investigates the impacts of mineral dust aerosol on climate using the atmospheric component of the U.K. High-Resolution Global Environmental Model (HiGEM) with an interactive embedded mineral dust scheme. It extends earlier work by Woodage et al. in which direct radiative forcing due to dust was calculated and in which it was reported that the global total dust burden was increased when this was included in the model. Here this result is analyzed further and the regional and global impacts are investigated. It is found that particle size distribution is critically important: in regions where large, more absorbent dust particles are present, burdens are increased because of the enhanced heating aloft, which strengthens convection, whereas, in areas where smaller, more scattering particles dominate, the surface layers are stabilized and dust emissions are decreased. The consequent changes in dust load and particle size distribution when radiative effects are included make the annual mean global forcing more positive at the top of the atmosphere (0.33 versus 0.05 W m⁻²). Impacts on the West African monsoon are also considered, where Saharan dust brings about a northward shift in the summertime intertropical convergence zone with increased precipitation on its northern side. This contrasts with results from some other studies, but the authors' findings are supported by recent observational data. They argue that the impacts depend crucially on the size distribution and radiative properties of the dust particles, which are poorly known on a global scale and differ here from those used in other models.
Is the Handling Data Cycle about to do a runner?
This article reflects back on a previous review of the Handling Data Cycle (HDC), part of the statistics section of the National Curriculum programmes of study for mathematics. It summarises the findings of the earlier review, considering these in light of recent experiences in school of a group of trainee secondary mathematics teachers. In particular, the effect of there no longer being GCSE statistics coursework is discussed. With reference to the priorities of different educational ideological groups, the article supports a continued emphasis on HDC in our training, despite indications that it is becoming increasingly marginalised in the secondary mathematics curriculum.
High resolution forecast models of water vapour over mountains: comparison of results from the UM and MERIS
Propagation delay due to variable tropospheric water vapor (WV) is one of the most intractable problems for radar interferometry, particularly over mountains. The WV field can be simulated by an atmospheric model, and the difference between the two fields is used to correct the radar interferogram. Here, we report our use of the U.K. Met Office Unified Model in a nested mode to produce high-resolution forecast fields for the 3-km-high Mount Etna volcano. The simulated precipitable water field is validated against that retrieved from the Medium Resolution Imaging Spectrometer (MERIS) radiometer on the Envisat satellite, which has a resolution of 300 m. Two case studies, one from winter (November 24, 2004) and one from summer (June 25, 2005), show that the mismatch between the model and the MERIS fields (rms = 1.1 and 1.6 mm, respectively) is small. One of the main potential sources of error in the models is the timing of the WV field simulation. We show that long-wavelength upper tropospheric troughs of low WV could be identified in both the model output and Meteosat WV imagery for the November 24, 2004 case and used to choose the best time of model output. © 2007 IEEE
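The correction step mentioned above, differencing a model-derived water-vapour delay field against the interferogram, can be sketched in a few lines. Everything specific in this sketch is an assumption of the illustration rather than a detail from the paper: the commonly used conversion of roughly 6.2 mm of zenith wet delay per mm of precipitable water vapour, a simple 1/cos(incidence) mapping to the radar line of sight, and Envisat's C-band wavelength of 5.6 cm.

```python
import math

# Illustrative sketch: phase contribution (radians) of the water-vapour
# difference between two acquisition epochs, computed from model-derived
# precipitable water vapour (PWV). Assumptions (not from the paper):
# zenith wet delay ~= 6.2 x PWV, a 1/cos(incidence) slant mapping,
# and a C-band radar wavelength of 5.6 cm.

PWV_TO_ZWD = 6.2       # PWV (mm) -> zenith wet delay (mm), approximate factor
WAVELENGTH_M = 0.056   # radar wavelength in metres (C-band assumption)

def wv_phase_correction(pwv_master_mm, pwv_slave_mm, incidence_rad):
    """Phase (radians) attributable to the WV difference between two epochs."""
    zwd_m = PWV_TO_ZWD * (pwv_master_mm - pwv_slave_mm) / 1000.0  # metres
    slant_delay_m = zwd_m / math.cos(incidence_rad)               # line of sight
    return 4.0 * math.pi / WAVELENGTH_M * slant_delay_m           # two-way path

# A 1 mm PWV difference at nadir maps to roughly 1.4 rad of phase:
print(round(wv_phase_correction(2.0, 1.0, 0.0), 2))
```

In practice this would be evaluated per pixel over the model grid and the resulting phase screen subtracted from the interferogram; the sub-millimetre rms mismatch reported above suggests the residual error after such a correction would be small.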
Backdoors in Pseudorandom Number Generators: Possibility and Impossibility Results
Inspired by the Dual EC DRBG incident, Dodis et al. (Eurocrypt 2015) initiated the formal study of backdoored PRGs, showing that backdoored PRGs are equivalent to public key encryption schemes, giving constructions for backdoored PRGs (BPRGs), and showing how BPRGs can be "immunised" by careful post-processing of their outputs. In this paper, we continue the foundational line of work initiated by Dodis et al., providing both positive and negative results.
We first revisit the backdoored PRG setting of Dodis et al., showing that PRGs can be more strongly backdoored than was previously envisaged. Specifically, we give efficient constructions of BPRGs for which, given a single generator output, Big Brother can recover the initial state and, therefore, all outputs of the BPRG. Moreover, our constructions are forward-secure in the traditional sense for a PRG, resolving an open question of Dodis et al. in the negative.
We then turn to the question of the effectiveness of backdoors in robust PRNGs with input (cf. Dodis et al., ACM-CCS 2013): generators in which the state can be regularly refreshed using an entropy source, and in which, provided sufficient entropy has been made available since the last refresh, the outputs will appear pseudorandom. The presence of a refresh procedure might suggest that Big Brother could be defeated, since he would not be able to predict the values of the PRNG state backwards or forwards through the high-entropy refreshes. Unfortunately, we show that this intuition is not correct: we are also able to construct robust PRNGs with input that are backdoored in a backwards sense. Namely, given a single output, Big Brother is able to rewind through a number of refresh operations to earlier "phases", and recover all the generator's outputs in those earlier phases.
Finally, and ending on a positive note, we give an impossibility result: we provide a bound on the number of previous phases that Big Brother can compromise as a function of the state size of the generator: smaller states provide more limited backdooring opportunities for Big Brother.
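For readers unfamiliar with the refresh/next model of PRNGs with input mentioned above, the interface can be sketched as follows. This hash-based construction is purely illustrative, an assumption of this summary rather than anything from the paper: refresh folds fresh entropy into the state, while next derives an output and ratchets the state forward so that earlier states cannot be computed from later ones (the forward-security property the backdoored constructions manage to preserve while still being rewindable by Big Brother).

```python
import hashlib

class SketchPRNG:
    """Minimal PRNG-with-input in the refresh/next mold of Dodis et al.
    (illustrative SHA-256 ratchet; not a vetted or backdoor-free design)."""

    def __init__(self):
        self.state = b"\x00" * 32

    def refresh(self, entropy: bytes) -> None:
        # Fold fresh entropy from the environment into the internal state.
        self.state = hashlib.sha256(self.state + entropy).digest()

    def next(self) -> bytes:
        # Derive an output, then ratchet the state forward. Because SHA-256
        # is one-way, compromising the new state does not reveal the old one.
        out = hashlib.sha256(b"out" + self.state).digest()
        self.state = hashlib.sha256(b"next" + self.state).digest()
        return out
```

The abstract's "phases" are the stretches of output between refresh calls; the impossibility result bounds how many such phases a backdoor can reach backwards, in terms of the size of `self.state`.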
Molecular analysis of chromosome 15 in Prader-Willi syndrome
Prader-Willi syndrome is a complex multi-system disorder characterised by mental retardation, obesity with insatiable appetite and hypogonadism. Cytogenetic and molecular studies found that some patients with Prader-Willi syndrome had deletions involving the proximal part of the long arm of their paternally-inherited chromosome 15. Several patients lacking deletions were shown to have inherited both of their chromosomes 15 from their mothers (maternal uniparental disomy). A phenotypically distinct disease, Angelman syndrome, was shown to commonly be due to maternally-inherited deletions of the same region of chromosome 15 or occasionally to paternal uniparental disomy. These findings led to the suggestion that genes subject to genetic imprinting, a phenomenon in which alleles of paternal and maternal origin are differentially expressed, were associated with the development of Prader-Willi syndrome and Angelman syndrome. The work described in this thesis was directed towards the characterisation of the molecular abnormalities found in subjects with Prader-Willi syndrome and the eventual identification of genes that might be involved in the pathogenesis of these imprinted conditions.
Thirty patients with Prader-Willi syndrome were carefully analysed with a set of DNA markers mapping to chromosome 15. Densitometry and family studies with polymorphic markers were used to investigate these patients. Eighteen subjects had paternally-inherited deletions involving 15q11-q13, eight subjects had maternal uniparental disomy for chromosome 15, one individual had inherited two intact chromosomes 15 from his mother and was mosaic for the presence of a paternally-inherited chromosome fragment that included 15q11-q13, while the remaining three patients were not found to have any abnormalities involving chromosome 15. Prader-Willi syndrome patients with the last two sets of findings have not been well characterised in the past. One patient was found to be deleted for a more circumscribed set of DNA loci than those that were missing in the other individuals with deletions. Consideration of the limited deletion found in this patient together with results from other laboratories suggested that the gene(s) responsible for Prader-Willi syndrome were located between two loci mapping to 15q11-q13, D15S13 and D15S10.
Pulsed field gel electrophoresis studies were performed to better characterise the region of chromosome 15 near D15S10. A long range physical map extending over approximately 2800 kb was constructed and several novel CpG islands were identified. These structures are often associated with transcribed sequences and could potentially mark the location of candidate genes for Prader-Willi syndrome or Angelman syndrome. A series of experiments utilising chromosome jumping libraries and yeast artificial chromosomes was performed, aimed at cloning DNA sequences associated with these CpG islands. While these attempts met with only limited success, they demonstrated a valid approach to identifying candidate genes for the diseases under study. One probable CpG island was cloned; however, a very high GC content interfered with attempts to analyse this DNA fragment in detail.
A subject with Prader-Willi syndrome and maternal uniparental disomy for chromosome 15 was additionally shown to have a second genetic disorder, Bloom syndrome. This autosomal recessive condition, which is characterised by an increased incidence of a wide range of malignancies, was thought to have occurred because of the inheritance of two disease gene alleles from a carrier mother. Through the identification of a meiotic crossover event involving the distal portion of the maternally-inherited chromosomes 15, it proved possible to establish the location of the Bloom syndrome gene as being 15q25→qter. This was the first example of this form of analysis being used to regionally localise a disease gene.
The work described in this thesis has allowed the better characterisation of the molecular defects found in patients with Prader-Willi syndrome and related disorders. It has also provided important information about the physical structure of the region deleted in patients with Prader-Willi syndrome. It will be necessary to continue such efforts if attempts to identify the genes responsible for these disorders and understand the basis of genetic imprinting are to be successful.
The TypTop System: Personalized Typo-Tolerant Password Checking
Password checking systems traditionally allow login only if the correct password is submitted. Recent work on typo-tolerant password checking suggests that usability can be improved, with negligible security loss, by allowing a small number of typographical errors. Existing systems, however, can only correct a handful of errors, such as accidentally leaving caps lock on or incorrect capitalization of the first letter in a password. This leaves out numerous kinds of typos made by users, such as transposition errors, substitutions, or capitalization errors elsewhere in a password. Some users therefore receive no benefit from existing typo-tolerance mechanisms.
We introduce personalized typo-tolerant password checking. In our approach, the authentication system learns over time the typos made by a specific user. In experiments using Mechanical Turk, we show that 45% of users would benefit from personalization. We therefore design a system, called TypTop, that securely implements personalized typo-tolerance. Underlying TypTop is a new stateful password-based encryption scheme that can be used to store recent failed login attempts. Our formal analysis shows that security in the face of an attacker that obtains the state of the system reduces to the difficulty of a brute-force dictionary attack against the real password. We implement TypTop for Linux and Mac OS login and report on a proof-of-concept deployment.
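A heavily simplified sketch of the personalization loop described above may help fix ideas. Everything below is an illustration of this summary's own making (class name, cache size, edit-distance threshold, and the PBKDF2 stand-in for a slow password hash are all assumptions): failed attempts are cached, and when the real password is next entered, cached near-misses are promoted to an accepted set. The crucial TypTop ingredient, keeping this state encrypted under the true password so that an attacker who steals it faces a brute-force dictionary attack, is deliberately omitted.

```python
import hashlib
import os

def _hash(pw: str, salt: bytes) -> bytes:
    # PBKDF2 as a stand-in for a slow password hash
    # (iteration count kept low for the sketch).
    return hashlib.pbkdf2_hmac("sha256", pw.encode(), salt, 10_000)

def _edit_distance(a: str, b: str) -> int:
    # Plain Levenshtein distance, used to decide whether a failed
    # attempt was plausibly a typo of the real password.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

class TypoTolerantChecker:
    """Toy personalized typo-tolerance. Unlike TypTop, the pending
    cache is stored in plaintext rather than encrypted under the
    real password, so this is only a sketch of the control flow."""
    CACHE_SIZE = 5

    def __init__(self, password: str):
        self.salt = os.urandom(16)
        self.pw_hash = _hash(password, self.salt)
        self.accepted = set()  # hashes of approved typos
        self.pending = []      # recent failed attempts

    def login(self, attempt: str) -> bool:
        h = _hash(attempt, self.salt)
        if h == self.pw_hash:
            # Correct password entered: promote cached near-misses
            # (edit distance at most 1) to the accepted-typo set.
            for a in self.pending:
                if _edit_distance(a, attempt) <= 1:
                    self.accepted.add(_hash(a, self.salt))
            self.pending.clear()
            return True
        if h in self.accepted:
            return True
        # Failed attempt: remember it for possible later promotion.
        self.pending = (self.pending + [attempt])[-self.CACHE_SIZE:]
        return False
```

For example, a user who habitually types an uppercase first letter would be rejected once, then accepted on every subsequent login with that same typo once the real password has been entered in between, which is the behaviour the paper's Mechanical Turk experiments quantify.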