
    Implementation of a 10.24 GS/s 12-bit Optoelectronics Analog-to-Digital Converter Based on a Polyphase Demultiplexing Architecture

    Abstract: In this paper we present the practical implementation of a high-speed polyphase sampling and demultiplexing architecture for optoelectronic analog-to-digital converters (OADCs). The architecture consists of a one-stage divide-by-eight decimator circuit in which optically-triggered samplers are cascaded to sample an analog input signal and demultiplex different phases of the sampled signal, yielding a low data rate for electronic quantization. An electrical-in to electrical-out data format is maintained through the sampling, demultiplexing, and quantization stages of the architecture, thereby avoiding the need for electrical-to-optical and optical-to-electrical signal conversions. We experimentally demonstrate a 10.24 giga-samples-per-second (GS/s), 12-bit resolution OADC system comprising the optically-triggered sampling circuits integrated with commercial electronic quantizers. Measurements performed on the OADC yielded an effective number of bits (ENOB) of 10.3 bits, a spurious-free dynamic range (SFDR) of -32 dB, and a signal-to-noise and distortion ratio (SNDR) of 63.7 dB.
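    The reported SNDR is consistent with the ENOB via the standard ideal-quantizer relation, ENOB = (SNDR - 1.76 dB)/6.02, and divide-by-eight demultiplexing means each output branch runs at 10.24/8 = 1.28 GS/s. A minimal sketch of both ideas (illustrative only, not the authors' implementation):

```python
def enob(sndr_db):
    """Effective number of bits from SNDR in dB (ideal-quantizer relation)."""
    return (sndr_db - 1.76) / 6.02

def demux_polyphase(samples, n_phases=8):
    """Divide-by-eight polyphase demultiplexer: branch p keeps every
    n_phases-th sample starting at offset p, so each branch runs at
    1/n_phases of the aggregate rate (1.28 GS/s for a 10.24 GS/s input)."""
    return [samples[p::n_phases] for p in range(n_phases)]

print(round(enob(63.7), 1))                 # 10.3
print(demux_polyphase(list(range(16)))[0])  # [0, 8]
```

    With the paper's SNDR of 63.7 dB this gives about 10.3 bits, matching the reported ENOB.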

    A combined long-range phasing and long haplotype imputation method to impute phase for SNP genotypes

    Background: Knowing the phase of marker genotype data can be useful in genome-wide association studies, because it makes it possible to use analysis frameworks that account for identity by descent or parent of origin of alleles, and it can lead to a large increase in data quantities via genotype or sequence imputation. Long-range phasing and haplotype library imputation constitute a fast and accurate method to impute phase for SNP data.

    Methods: A long-range phasing and haplotype library imputation algorithm was developed. It combines information from surrogate parents and long haplotypes to resolve phase in a manner that is not dependent on the family structure of a dataset or on the presence of pedigree information.

    Results: The algorithm performed well in both simulated and real livestock and human datasets, in terms of both phasing accuracy and computational efficiency. The percentage of alleles that could be phased in both simulated and real datasets of varying size generally exceeded 98%, while the percentage of alleles incorrectly phased in simulated data was generally less than 0.5%. Phasing accuracy was affected by dataset size, with lower accuracy for datasets of fewer than 1000 individuals, but was not affected by effective population size, family data structure, presence or absence of pedigree information, or SNP density. The method was computationally fast. Compared to a commonly used statistical method (fastPHASE), the current method made about 8% fewer phasing mistakes and ran about 26 times faster on a small dataset. For larger datasets, the differences in computational time are expected to be even greater. A computer program implementing these methods has been made available.

    Conclusions: The algorithm and software developed in this study make feasible the routine phasing of high-density SNP chips in large datasets.
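    The core idea of haplotype-library imputation can be shown with a toy sketch: given an individual's unphased genotype (coded 0/1/2 copies of the alternate allele), find a known haplotype consistent with the homozygous sites and take its complement as the second haplotype. This is a drastic simplification for illustration, not the published algorithm (which also exploits surrogate parents and long shared segments):

```python
def phase_with_library(genotype, library):
    """Toy haplotype-library phasing: find a library haplotype consistent
    with the genotype's homozygous sites (genotype 0 -> allele 0,
    genotype 2 -> allele 1), then take the complement as the second
    haplotype."""
    for hap in library:
        consistent = all(
            (g != 0 or a == 0) and (g != 2 or a == 1)
            for g, a in zip(genotype, hap)
        )
        if not consistent:
            continue
        comp = [g - a for g, a in zip(genotype, hap)]
        if all(c in (0, 1) for c in comp):
            return hap, comp
    return None  # phase unresolved with this library

library = [[0, 1, 1, 0], [1, 1, 0, 0]]
print(phase_with_library([1, 2, 1, 0], library))  # ([0, 1, 1, 0], [1, 1, 0, 0])
```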

    A method for the allocation of sequencing resources in genotyped livestock populations

    Background: This paper describes a method, called AlphaSeqOpt, for the allocation of sequencing resources in livestock populations with existing phased genomic data, so as to maximise the ability to phase and impute sequenced haplotypes into the whole population.

    Methods: We present two algorithms. The first selects focal individuals that collectively represent the maximum possible portion of the haplotype diversity in the population. The second allocates a fixed sequencing budget among the families of focal individuals to enable phasing of their haplotypes at the sequence level. We tested the performance of the two algorithms in simulated pedigrees. For each pedigree, we evaluated the proportion of population haplotypes that are carried by the focal individuals and compared our results to a variant of the widely used key ancestors approach and to two haplotype-based approaches. We calculated the expected phasing accuracy of the haplotypes of a focal individual at the sequence level, given the proportion of the fixed sequencing budget allocated to its family.

    Results: AlphaSeqOpt maximises the ability to capture and phase the most frequent haplotypes in a population in three ways. First, it selects focal individuals that collectively represent a larger portion of the population haplotype diversity than existing methods. Second, it selects focal individuals from across the pedigree whose haplotypes can be easily phased using family-based phasing and imputation algorithms, thus maximising the ability to impute sequence into the rest of the population. Third, it allocates more of the fixed sequencing budget to focal individuals whose haplotypes are more frequent in the population than to focal individuals whose haplotypes are less frequent. Unlike existing methods, we additionally present an algorithm to allocate part of the sequencing budget to the families (i.e. immediate ancestors) of focal individuals, to ensure that their haplotypes can be phased at the sequence level, which is essential for enabling and maximising subsequent sequence imputation.

    Conclusions: We present a new method for the allocation of a fixed sequencing budget to focal individuals and their families, such that the final sequenced haplotypes, when phased at the sequence level, represent the maximum possible portion of the haplotype diversity in the population that can be sequenced and phased at that budget.
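    Selecting focal individuals that jointly cover as much haplotype diversity as possible is a maximum-coverage problem, which is commonly attacked greedily. A hypothetical sketch of such a greedy selection (not AlphaSeqOpt's actual code; the names and toy data are invented):

```python
from collections import Counter

def select_focal(individuals, n_focal):
    """Greedy selection of focal individuals that maximises the cumulative
    population frequency of the distinct haplotypes they carry."""
    freq = Counter(h for haps in individuals.values() for h in haps)
    covered, chosen = set(), []
    for _ in range(n_focal):
        def gain(ind):
            if ind in chosen:
                return -1
            # frequency of this individual's not-yet-covered haplotypes
            return sum(freq[h] for h in set(individuals[ind]) - covered)
        best = max(individuals, key=gain)
        chosen.append(best)
        covered |= set(individuals[best])
    return chosen, covered

inds = {"A": ["h1", "h2"], "B": ["h1", "h3"], "C": ["h4", "h4"]}
chosen, covered = select_focal(inds, 2)
print(chosen)  # ['A', 'C'] -- C adds more new haplotype frequency than B
```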

    Planck Intermediate Results. IV. The XMM-Newton validation programme for new Planck galaxy clusters

    We present the final results from the XMM-Newton validation follow-up of new Planck galaxy cluster candidates. We observed 15 new candidates, detected with signal-to-noise ratios between 4.0 and 6.1 in the 15.5-month nominal Planck survey. The candidates were selected using ancillary data flags derived from the ROSAT All Sky Survey (RASS) and Digitized Sky Survey all-sky maps, with the aim of pushing into the low-SZ-flux, high-z regime and of testing RASS flags as indicators of candidate reliability. 14 new clusters were detected by XMM-Newton, including 2 double systems. Redshifts lie in the range 0.2 to 0.9, with 6 clusters at z > 0.5. Estimated M500 values range from 2.5 × 10^14 to 8 × 10^14 Msun. We discuss our results in the context of the full XMM-Newton validation programme, in which 51 new clusters have been detected. This includes 4 double and 2 triple systems, some of which are chance projections on the sky of clusters at different z. We find that association with a RASS-BSC source is a robust indicator of the reliability of a candidate, whereas association with an FSC source does not guarantee that the SZ candidate is a bona fide cluster. Nevertheless, most Planck clusters appear in RASS maps, with a significance greater than 2 sigma being a good indication that the candidate is a real cluster. The full sample gives a Planck sensitivity threshold of Y500 ~ 4 × 10^-4 arcmin^2, with an indication of Malmquist bias in the YX-Y500 relation below this level. The corresponding mass threshold depends on z. Systems with M500 > 5 × 10^14 Msun at z > 0.5 are easily detectable with Planck. The newly detected clusters follow the YX-Y500 relation derived from X-ray selected samples. Compared to X-ray selected clusters, the new SZ clusters have, on average, a lower X-ray luminosity for their mass. There is no indication of departure from standard self-similar evolution in the X-ray versus SZ scaling properties. (Abridged)
    Comment: accepted by A&

    Genomic evaluations with many more genotypes

    Background: Genomic evaluations in Holstein dairy cattle have quickly become more reliable over the last two years in many countries, as more animals have been genotyped for 50,000 markers. Evaluations can also include animals genotyped with more or fewer markers using new tools such as the 777,000- or 2,900-marker chips recently introduced for cattle. Gains from more markers can be predicted using simulation, whereas strategies to use fewer markers have been compared using subsets of actual genotypes. The overall cost of selection is reduced by genotyping most animals at less than the highest density and imputing their missing genotypes using haplotypes. Algorithms to combine different densities need to be efficient, because the numbers of genotyped animals and markers may continue to grow quickly.

    Methods: Genotypes for 500,000 markers were simulated for the 33,414 Holsteins that had 50,000-marker genotypes in the North American database. Another 86,465 non-genotyped ancestors were included in the pedigree file, and linkage disequilibrium was generated directly in the base population. Mixed-density datasets were created by keeping 50,000 (every tenth) of the markers for most animals. Missing genotypes were imputed using a combination of population haplotyping and pedigree haplotyping. Reliabilities of genomic evaluations using linear and nonlinear methods were compared.

    Results: Differing marker sets for a large population were combined with just a few hours of computation. About 95% of paternal alleles were determined correctly, and more than 95% of missing genotypes were called correctly. Reliability of breeding values was already high (84.4%) with 50,000 simulated markers. The gain in reliability from increasing the number of markers to 500,000 was only 1.6%, but more than half of that gain resulted from genotyping just 1,406 young bulls at higher density. Linear genomic evaluations had reliabilities 1.5% lower than the nonlinear evaluations with 50,000 markers, and 1.6% lower with 500,000 markers.

    Conclusions: Methods to impute genotypes and compute genomic evaluations were affordable with many more markers. Reliabilities for individual animals can be modified to reflect the success of imputation. Breeders can improve reliability at lower cost by combining marker densities to increase both the number of markers and the number of animals included in genomic evaluation. Larger gains are expected from increasing the number of animals than from increasing the number of markers.
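    The mixed-density design, keeping every tenth marker for most animals and imputing the rest from complete reference genotypes, can be sketched as follows. This is a naive stand-in for illustration, not the combination of population and pedigree haplotyping used in the study:

```python
def to_low_density(genotypes, step=10):
    """Mask all but every `step`-th marker (None = missing), mimicking the
    mixed-density design (most animals keep 50,000 of 500,000 markers)."""
    return [g if i % step == 0 else None for i, g in enumerate(genotypes)]

def impute(low, reference_panel):
    """Fill missing markers from the first complete reference vector that
    agrees with all observed markers."""
    for ref in reference_panel:
        if all(o is None or o == r for o, r in zip(low, ref)):
            return [r if o is None else o for o, r in zip(low, ref)]
    return low  # no consistent reference found

refs = [[0, 1, 1, 0, 2], [2, 1, 0, 0, 1]]
low = to_low_density([0, 1, 1, 0, 2], step=2)  # [0, None, 1, None, 2]
print(impute(low, refs))                       # [0, 1, 1, 0, 2]
```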

    Potential of gene drives with genome editing to increase genetic gain in livestock breeding programs

    Background: This paper uses simulation to explore how gene drives can increase genetic gain in livestock breeding programs. Gene drives are naturally occurring phenomena that cause a mutation on one chromosome to copy itself onto its homologous chromosome.

    Methods: We simulated nine different breeding and editing scenarios with a common overall structure. Each scenario began with 21 generations of selection, followed by 20 generations of selection based on true breeding values, where the breeder used selection alone, selection in combination with genome editing, or selection with genome editing and gene drives. In the scenarios that used gene drives, we varied the probability of successfully incorporating the gene drive. For each scenario, we evaluated genetic gain, genetic variance (σ_A²), rate of change in inbreeding (ΔF), number of distinct quantitative trait nucleotides (QTN) edited, rate of increase in favourable allele frequencies of edited QTN, and the time to fix favourable alleles.

    Results: Gene drives enhanced the benefits of genome editing in seven ways: (1) they amplified the increase in genetic gain brought about by genome editing; (2) they amplified the rate of increase in the frequency of favourable alleles and reduced the time it took to fix them; (3) they enabled more rapid targeting of QTN with lesser effect for genome editing; (4) they distributed fixed editing resources across a larger number of distinct QTN across generations; (5) they focussed editing on a smaller number of QTN within a given generation; (6) they reduced the level of inbreeding when editing a subset of the sires; and (7) they increased the efficiency of converting genetic variation into genetic gain.

    Conclusions: Genome editing in livestock breeding results in short-, medium- and long-term increases in genetic gain. The increase in genetic gain occurs because editing increases the frequency of favourable alleles in the population. Gene drives accelerate the increase in allele frequency caused by editing, which results in even higher genetic gain over a shorter period of time, with no impact on inbreeding.
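    The transmission bias introduced by a gene drive can be sketched deterministically: a heterozygote normally transmits the favourable allele with probability 1/2, but a drive that converts the homologous copy with probability d raises this to (1 + d)/2. A minimal sketch under random mating (the study's simulations additionally include selection, editing, and finite populations):

```python
def next_freq(p, drive=0.0):
    """One generation of random mating. A gene drive converts a
    heterozygote's wild-type allele with probability `drive`, so
    heterozygotes transmit the favourable allele with probability
    (1 + drive) / 2; with drive = 0 the allele is neutral and its
    frequency does not change."""
    return p * p + 2 * p * (1 - p) * (1 + drive) / 2

def generations_to_fix(p, drive, threshold=0.99):
    """Generations until the favourable allele frequency exceeds `threshold`."""
    gens = 0
    while p < threshold:
        p = next_freq(p, drive)
        gens += 1
    return gens

print(generations_to_fix(0.05, drive=0.95))  # a strong drive fixes faster...
print(generations_to_fix(0.05, drive=0.20))  # ...than a weak one
```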

    Tracing Cattle Breeds with Principal Components Analysis Ancestry Informative SNPs

    The recent release of the Bovine HapMap dataset represents the most detailed survey of bovine genetic diversity to date, providing an important resource for the design and development of livestock production. We studied this dataset, comprising more than 30,000 Single Nucleotide Polymorphisms (SNPs) for 19 breeds (13 taurine, 3 zebu, and 3 hybrid breeds), seeking to identify small panels of genetic markers that can be used to trace the breed of unknown cattle samples. Taking advantage of the power of Principal Components Analysis and of algorithms that we have recently described for the selection of Ancestry Informative Markers from genome-wide datasets, we present a decision tree which can be used to accurately infer the origin of individual cattle. In doing so, we present a thorough examination of population genetic structure in modern bovine breeds. Performing extensive cross-validation experiments, we demonstrate that 250-500 carefully selected SNPs suffice to achieve close to 100% prediction accuracy of individual ancestry when this particular set of 19 breeds is considered. Our methods, coupled with the dense genotypic data that is becoming increasingly available, have the potential to become a valuable tool and to have considerable impact on worldwide livestock production. They can be used to inform the design of studies of the genetic basis of economically important traits in cattle, as well as breeding programs and efforts to conserve biodiversity. Furthermore, the SNPs that we have identified can provide a reliable solution for the traceability of breed-specific branded products.
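    Selecting ancestry-informative markers from PCA can be sketched as: run PCA on the centred genotype matrix and rank SNPs by the magnitude of their loadings on the leading principal components. A toy illustration with synthetic data (not the authors' algorithm; here the two simulated "breeds" differ strongly only at the first five SNPs, which the loadings should recover):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 60, 50
freq_a = np.full(m, 0.5)
freq_b = np.full(m, 0.5)
freq_a[:5] = 0.95  # the two breeds differ strongly at SNPs 0-4
freq_b[:5] = 0.05
geno = np.vstack([
    rng.binomial(2, freq_a, size=(n // 2, m)),
    rng.binomial(2, freq_b, size=(n // 2, m)),
]).astype(float)

# PCA via SVD of the centred genotype matrix; SNPs with the largest
# absolute loadings on PC1 are candidate ancestry-informative markers.
centred = geno - geno.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
loadings = np.abs(vt[0])               # SNP loadings on PC1
aims = np.argsort(loadings)[::-1][:5]  # top-5 candidate AIMs
print(sorted(aims.tolist()))
```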

    Short-lived Nuclei in the Early Solar System: Possible AGB Sources

    (Abridged) We review abundances of short-lived nuclides in the early solar system (ESS) and the methods used to determine them. We compare them to the inventory expected from a uniform galactic production model. Within a factor of two, the observed abundances of several isotopes are compatible with this model. I-129 is an exception, with an ESS inventory much lower than expected. The isotopes Pd-107, Fe-60, Ca-41, Cl-36, Al-26, and Be-10 require late addition to the solar nebula. Be-10 is the product of particle irradiation of the solar system, as probably is Cl-36. Late injection by a supernova (SN) cannot be responsible for most short-lived nuclei without excessively producing Mn-53; it can be the source of Mn-53 and maybe Fe-60. If a late SN is responsible for these two nuclei, it still cannot make Pd-107 and other isotopes. We emphasize an AGB star as a source of nuclei, including Fe-60, and explore this possibility with new stellar models. A dilution factor of about 4e-3 gives reasonable amounts of many nuclei. We discuss the role of irradiation for Al-26, Cl-36, and Ca-41. Conflict between scenarios is emphasized, as is the absence of a global interpretation of the existing data. Abundances of actinides indicate a quiescent interval of about 1e8 years for actinide-group production, in order to explain the data on Pu-244 and new bounds on Cm-247. This interval is not compatible with the Hf-182 data, so a separate type of r-process is needed for at least the actinides, distinct from the two types previously identified. The apparent coincidence of the I-129 and trans-actinide time scales suggests that the last actinide contribution was from an r-process that produced actinides without fission recycling, so that the yields at Ba and below were governed by fission.
    Comment: 92 pages, 14 figure files, in press at Nuclear Physics
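    The uniform-production estimate behind statements like the 1e8-year quiescent interval is the standard free-decay formula: for uniform production over a time T followed by an isolation interval Δ, the ESS ratio of a short-lived nuclide with mean life τ to a stable reference scales as (P_R/P_S)(τ/T)exp(-Δ/τ). A sketch with illustrative numbers only (the mean life used is roughly that of I-129):

```python
import math

def ess_ratio(production_ratio, mean_life_yr, uniform_time_yr, quiescent_yr):
    """N_R/N_S for a short-lived nuclide R relative to a stable reference S:
    uniform production for `uniform_time_yr`, then free decay during the
    quiescent interval."""
    return (production_ratio
            * (mean_life_yr / uniform_time_yr)
            * math.exp(-quiescent_yr / mean_life_yr))

# A 1e8-yr quiescent interval suppresses a tau ~ 2.3e7-yr nuclide
# (I-129-like) by roughly a factor of 80 relative to no interval:
with_gap = ess_ratio(1.0, 2.3e7, 1.0e10, 1.0e8)
no_gap = ess_ratio(1.0, 2.3e7, 1.0e10, 0.0)
print(no_gap / with_gap)
```

    This suppression factor is why a long quiescent interval can reconcile a low observed I-129 inventory with uniform production, while leaving shorter-lived nuclides in need of a late source.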

    Comparison of linkage disequilibrium and haplotype diversity on macro- and microchromosomes in chicken

    Background: The chicken (Gallus gallus), like most avian species, has a very distinct karyotype consisting of many micro- and a few macrochromosomes. While it is known that recombination frequencies are much higher for micro- as compared to macrochromosomes, there is limited information on differences in linkage disequilibrium (LD) and haplotype diversity between these two classes of chromosomes. In this study, LD and haplotype diversity were systematically characterized in 371 birds from eight chicken populations (commercial lines, fancy breeds, and red jungle fowl) across macro- and microchromosomes. To this end, we sampled four regions of ~1 cM each on macrochromosomes (GGA1 and GGA2), and four 1.5-2 cM regions on microchromosomes (GGA26 and GGA27), at a high density of 1 SNP every 2 kb (889 SNPs in total).

    Results: At a similar physical distance, LD, haplotype homozygosity, haploblock structure, and haplotype sharing were all lower for the micro- as compared to the macrochromosomes. These differences were consistent across populations. Heterozygosity, genetic differentiation, and derived allele frequencies were also higher for the microchromosomes. Differences in LD, haplotype variation, and haplotype sharing between populations were largely in line with the known demographic history of the commercial chicken. Despite very low levels of LD, as measured by r², for most populations, some haploblock structure was observed, particularly on the macrochromosomes, but haploblock sizes were typically less than 10 kb.

    Conclusion: Differences in LD between micro- and macrochromosomes were almost completely explained by differences in recombination rate. Differences in haplotype diversity and haplotype sharing between micro- and macrochromosomes were explained by differences in recombination rate and genotype variation. Haploblock structure was consistent with the demography of the chicken populations and with differences in recombination rates between micro- and macrochromosomes. The limited haploblock structure and LD suggest that future whole-genome marker assays will need 100,000 or more SNPs to exploit haplotype information. Interpretation and transferability of genetic parameters will need to take into account the size of chromosomes in chicken and, since most birds have microchromosomes, in other avian species as well.
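    The LD statistic r² referred to above is computed from phased haplotype frequencies as r² = D² / (p_A(1 - p_A) p_B(1 - p_B)), where D = p_AB - p_A p_B. A minimal sketch:

```python
def r_squared(hap_a, hap_b):
    """LD statistic r^2 between two biallelic loci from phased haplotypes,
    alleles coded 0/1; hap_a[i] and hap_b[i] come from the same haplotype."""
    n = len(hap_a)
    p_a = sum(hap_a) / n
    p_b = sum(hap_b) / n
    p_ab = sum(a and b for a, b in zip(hap_a, hap_b)) / n
    d = p_ab - p_a * p_b  # disequilibrium coefficient D
    return d * d / (p_a * (1 - p_a) * p_b * (1 - p_b))

# Perfectly correlated loci give r^2 = 1; uncorrelated ones give 0.
print(r_squared([1, 1, 0, 0], [1, 1, 0, 0]))  # 1.0
print(r_squared([1, 0, 1, 0], [1, 1, 0, 0]))  # 0.0
```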