100 research outputs found
Science data quality assessment for the Large Synoptic Survey Telescope
LSST will have a Science Data Quality Assessment (SDQA) subsystem for the assessment of the data products that will be produced during the course of a 10-year survey. The LSST will produce unprecedented volumes of astronomical data as it surveys the accessible sky every few nights. The SDQA subsystem will enable comparisons of the science data with expectations from prior experience and models, and with established requirements for the survey. While analogous systems have been built for previous large astronomical surveys, SDQA for LSST must meet a unique combination of challenges. Chief among them will be the extraordinary data rate and volume, which restricts the bulk of the quality computations to the automated processing stages, as revisiting the pixels for a post-facto evaluation is prohibitively expensive. The identification of appropriate scientific metrics is driven by the breadth of the expected science, the scope of the time-domain survey, the need to tap the widest possible pool of scientific expertise, and the historical tendency of new quality metrics to be crafted and refined as experience grows. Prior experience suggests that contemplative, off-line quality analyses are essential to distilling new automated quality metrics, so the SDQA architecture must support integrability with a variety of custom and community-based tools, and be flexible to embrace evolving QA demands. Finally, the time-domain nature of LSST means every exposure may be useful for some scientific purpose, so the model of quality thresholds must be sufficiently rich to reflect the quality demands of diverse science aims.
Spatial heterogeneity promotes coexistence of rock-paper-scissor metacommunities
The rock-paper-scissor game -- which is characterized by three strategies
R,P,S, satisfying the non-transitive relations S excludes P, P excludes R, and
R excludes S -- serves as a simple prototype for studying more complex
non-transitive systems. For well-mixed systems where interactions result in
fitness reductions of the losers exceeding fitness gains of the winners,
classical theory predicts that two strategies go extinct. The effects of
spatial heterogeneity and dispersal rates on this outcome are analyzed using a
general framework for evolutionary games in patchy landscapes. The analysis
reveals that coexistence is determined by the rates at which dominant
strategies invade a landscape occupied by the subordinate strategy (e.g. rock
invades a landscape occupied by scissors) and the rates at which subordinate
strategies get excluded in a landscape occupied by the dominant strategy (e.g.
scissors gets excluded in a landscape occupied by rock). These invasion and
exclusion rates correspond to eigenvalues of the linearized dynamics near
single strategy equilibria. Coexistence occurs when the product of the invasion
rates exceeds the product of the exclusion rates. Provided there is sufficient
spatial variation in payoffs, the analysis identifies a critical dispersal rate
required for regional persistence. For dispersal rates below this critical
value, the product of the invasion rates exceeds the product of the exclusion
rates and the rock-paper-scissor metacommunities persist regionally despite
being extinction prone locally. For dispersal rates above the critical value,
the product of the exclusion rates exceeds the product of the invasion rates
and the strategies are extinction prone. These results highlight the delicate
interplay between
spatial heterogeneity and dispersal in mediating long-term outcomes for
evolutionary games. Comment: 31 pages, 5 figures
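The coexistence criterion stated in the abstract can be sketched numerically. The rates below are hypothetical placeholders, not values from the paper; in the paper's framework they would be eigenvalues of the linearized dynamics near single-strategy equilibria.

```python
import math

# Hypothetical invasion rates (dominant strategy invading the subordinate's
# landscape) and exclusion rates (subordinate strategy excluded from the
# dominant's landscape). Real values come from linearized patch dynamics.
invasion = {"rock->scissors": 0.6, "paper->rock": 0.5, "scissors->paper": 0.4}
exclusion = {"scissors|rock": 0.3, "rock|paper": 0.35, "paper|scissors": 0.45}

def persists_regionally(invasion, exclusion):
    """Coexistence criterion from the abstract: the product of the
    invasion rates must exceed the product of the exclusion rates."""
    return math.prod(invasion.values()) > math.prod(exclusion.values())

print(persists_regionally(invasion, exclusion))  # True for these rates
```

With these illustrative numbers the invasion product (0.12) exceeds the exclusion product (about 0.047), so the metacommunity would persist regionally; raising dispersal past the critical rate would flip the inequality.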
Genetic Variation in an Individual Human Exome
There is much interest in characterizing the variation in a human individual, because this may elucidate what contributes significantly to a person's phenotype, thereby enabling personalized genomics. We focus here on the variants in a person's ‘exome,’ which is the set of exons in a genome, because the exome is believed to harbor much of the functional variation. We provide an analysis of the ∼12,500 variants that affect the protein coding portion of an individual's genome. We identified ∼10,400 nonsynonymous single nucleotide polymorphisms (nsSNPs) in this individual, of which ∼15–20% are rare in the human population. We predict ∼1,500 nsSNPs affect protein function and these tend to be heterozygous, rare, or novel. Of the ∼700 coding indels, approximately half tend to have lengths that are a multiple of three, which causes insertions/deletions of amino acids in the corresponding protein, rather than introducing frameshifts. Coding indels also occur frequently at the termini of genes, so even if an indel causes a frameshift, an alternative start or stop site in the gene can still be used to make a functional protein. In summary, we reduced the set of ∼12,500 nonsilent coding variants by ∼8-fold to a set of variants that are most likely to have major effects on their proteins' functions. This is our first glimpse of an individual's exome and a snapshot of the current state of personalized genomics. The majority of coding variants in this individual are common and appear to be functionally neutral. Our results also indicate that some variants can be used to improve the current NCBI human reference genome. As more genomes are sequenced, many rare variants and non-SNP variants will be discovered. We present an approach to analyze the coding variation in humans by proposing multiple bioinformatic methods to home in on possible functional variation.
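The frameshift logic described above reduces to a length-modulo-three test: a coding indel whose length is a multiple of three inserts or deletes whole amino acids, while any other length shifts the reading frame. A minimal sketch, with hypothetical indel lengths:

```python
def classify_indel(length_bp: int) -> str:
    """Classify a coding indel by its effect on the reading frame.
    A length divisible by 3 adds/removes whole codons (in-frame);
    any other length shifts the downstream reading frame."""
    return "in-frame" if length_bp % 3 == 0 else "frameshift"

# Hypothetical indel lengths in base pairs
for n in [3, 6, 4, 1]:
    print(n, classify_indel(n))
```

This is only the first-order rule; as the abstract notes, a frameshift near a gene terminus may still yield a functional protein via an alternative start or stop site.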
Reciprocity as a foundation of financial economics
This paper argues that the foundation of the fundamental theorem of contemporary financial mathematics is the ethical concept of ‘reciprocity’. The argument is based on identifying an equivalence between the contemporary, and ostensibly ‘value neutral’, Fundamental Theorem of Asset Pricing and theories of mathematical probability that emerged in the seventeenth century in the context of the ethical assessment of commercial contracts in a framework of Aristotelian ethics. This observation, the main claim of the paper, is justified on the basis of results from the Ultimatum Game and is analysed within a framework of Pragmatic philosophy. The analysis leads to the explanatory hypothesis that markets are centres of communicative action with reciprocity as a rule of discourse. The purpose of the paper is to reorientate financial economics to emphasise the objectives of cooperation and social cohesion, and to this end we offer specific policy advice.
LSST Science Book, Version 2.0
A survey that can cover the sky in optical bands over wide fields to faint
magnitudes with a fast cadence will enable many of the exciting science
opportunities of the next decade. The Large Synoptic Survey Telescope (LSST)
will have an effective aperture of 6.7 meters and an imaging camera with field
of view of 9.6 deg^2, and will be devoted to a ten-year imaging survey over
20,000 deg^2 south of +15 deg. Each pointing will be imaged 2000 times with
fifteen second exposures in six broad bands from 0.35 to 1.1 microns, to a
total point-source depth of r~27.5. The LSST Science Book describes the basic
parameters of the LSST hardware, software, and observing plans. The book
discusses educational and outreach opportunities, then goes on to describe a
broad range of science that LSST will revolutionize: mapping the inner and
outer Solar System, stellar populations in the Milky Way and nearby galaxies,
the structure of the Milky Way disk and halo and other objects in the Local
Volume, transient and variable objects both at low and high redshift, and the
properties of normal and active galaxies at low and high redshift. It then
turns to far-field cosmological topics, exploring properties of supernovae to
z~1, strong and weak lensing, the large-scale distribution of galaxies and
baryon oscillations, and how these different probes may be combined to
constrain cosmological models and the physics of dark energy. Comment: 596 pages. Also available at full resolution at
http://www.lsst.org/lsst/sciboo
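The survey parameters quoted in the abstract imply some simple back-of-envelope figures. This sketch only restates the abstract's numbers; it is not an official observing plan.

```python
# Survey parameters as stated in the abstract
area_deg2, fov_deg2 = 20_000, 9.6   # survey area; camera field of view
visits, exposure_s = 2000, 15       # visits per pointing; exposure length

# ~2083 distinct pointings tile the survey footprint
pointings = area_deg2 / fov_deg2

# ~8.3 hours of total integration accumulated per pointing over ten years
total_hours = visits * exposure_s / 3600

print(round(pointings), round(total_hours, 1))
```

These totals, spread across six bands and a decade, are what drive the time-domain depth (r~27.5 for coadded point sources) described in the text.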
The Diploid Genome Sequence of an Individual Human
Presented here is a genome sequence of an individual human. It was produced from ∼32 million random DNA fragments, sequenced by Sanger dideoxy technology and assembled into 4,528 scaffolds, comprising 2,810 million bases (Mb) of contiguous sequence with approximately 7.5-fold coverage for any given region. We developed a modified version of the Celera assembler to facilitate the identification and comparison of alternate alleles within this individual diploid genome. Comparison of this genome and the National Center for Biotechnology Information human reference assembly revealed more than 4.1 million DNA variants, encompassing 12.3 Mb. These variants (of which 1,288,319 were novel) included 3,213,401 single nucleotide polymorphisms (SNPs), 53,823 block substitutions (2–206 bp), 292,102 heterozygous insertion/deletion events (indels) (1–571 bp), 559,473 homozygous indels (1–82,711 bp), 90 inversions, as well as numerous segmental duplications and copy number variation regions. Non-SNP DNA variation accounts for 22% of all events identified in the donor; however, these events involve 74% of all variant bases. This suggests an important role for non-SNP genetic alterations in defining the diploid genome structure. Moreover, 44% of genes were heterozygous for one or more variants. Using a novel haplotype assembly strategy, we were able to span 1.5 Gb of genome sequence in segments >200 kb, providing further precision to the diploid nature of the genome. These data depict a definitive molecular portrait of a diploid human genome that provides a starting point for future genome comparisons and enables an era of individualized genomic information.
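The variant counts quoted above can be cross-checked arithmetically: summing the non-SNP categories and dividing by the total event count reproduces the 22% figure.

```python
# Variant counts as quoted in the abstract
snps = 3_213_401
# Non-SNP events: block substitutions, het indels, hom indels, inversions
non_snp = 53_823 + 292_102 + 559_473 + 90

total = snps + non_snp
print(total)                      # 4,118,889 -- "more than 4.1 million"
print(round(non_snp / total, 2))  # 0.22 -- the 22% quoted in the abstract
```

Note the asymmetry the abstract highlights: non-SNP events are 22% of events but, because indels can span tens of kilobases, they account for 74% of variant bases.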
Family-based association study of the BDNF, COMT and serotonin transporter genes and DSM-IV bipolar-I disorder in children
Background: Over the past decade pediatric bipolar disorder has gained recognition as a potentially more severe and heritable form of the disorder. In this report we test for association with genes coding brain-derived neurotrophic factor (BDNF), the serotonin transporter (SLC6A4), and catechol-O-methyltransferase (COMT). Methods: Bipolar-I affected offspring triads (N = 173) were drawn from 522 individuals with 2 parents in 332 nuclear families recruited for genetic studies of pediatric psychopathology at the Clinical and Research Program in Pediatric Psychopharmacology and Adult ADHD at Massachusetts General Hospital. Results: We failed to identify an association with the val66 allele in BDNF (OR = 1.23, p = 0.36), the COMT-l allele (OR = 1.27, p = 0.1), or the HTTLPR short allele (OR = 0.87, p = 0.38). Conclusion: Our study suggests that the markers examined thus far in COMT and SLC6A4 are not associated with pediatric bipolar disorder, and that if the val66met marker in BDNF is associated with pediatric bipolar disorder, the magnitude of the association is much smaller than first reported.
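In family-based designs of this kind, the odds ratio for an allele is commonly estimated from heterozygous parents as the ratio of transmissions to non-transmissions of that allele (a TDT-style estimate). A minimal sketch with hypothetical counts, not data from this study:

```python
def tdt_odds_ratio(transmitted: int, untransmitted: int) -> float:
    """TDT-style odds ratio: among heterozygous parents, the ratio of
    transmissions to non-transmissions of the candidate risk allele.
    OR near 1 indicates no evidence of association."""
    return transmitted / untransmitted

# Hypothetical transmission counts for illustration only
print(round(tdt_odds_ratio(60, 50), 2))  # 1.2
```

Odds ratios close to 1 with non-significant p-values, as reported for all three markers above, are consistent with no detectable association at the study's sample size.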