180 research outputs found
Highly oriented nonepitaxially grown L10 FePt films
A method of preparing nonepitaxially grown, highly textured L10 FePt thin films is described. A nearly perfect (001) texture was obtained by direct deposition of FePt films on Corning 7059 glass substrates and subsequent rapid thermal annealing. The ordering and orientation of the L10-phase FePt grains were controlled by the initial as-deposited film structure, and also by the annealing process. Magnetic measurements reveal large perpendicular anisotropy for these (001) textured films. The substrates and processes used for nonepitaxial growth of L10 ordered FePt films are much more compatible with practical applications than those used for epitaxial growth
Orientation-controlled nonepitaxial L10 CoPt and FePt films
We report results on highly oriented, face-centered tetragonal ordered CoPt and FePt thin films grown nonepitaxially by directly depositing films on thermally oxidized Si substrates and subsequent annealing. By controlling the thickness, composition, and annealing conditions, and/or depositing a proper underlayer, nearly perfect (001)-oriented CoPt and FePt films can be obtained. Magnetic measurements reveal large perpendicular anisotropy for such films
A comparison across non-model animals suggests an optimal sequencing depth for de novo transcriptome assembly
Background: The lack of genomic resources can present challenges for studies of non-model organisms. Transcriptome sequencing offers an attractive method to gather information about genes and gene expression without the need for a reference genome. However, it is unclear what sequencing depth is adequate to assemble the transcriptome de novo for these purposes.
Results: We assembled transcriptomes of animals from six different phyla (Annelids, Arthropods, Chordates, Cnidarians, Ctenophores, and Molluscs) at regular increments of reads using Velvet/Oases and Trinity to determine how read count affects the assembly. This included an assembly of mouse heart reads, because these could be compared against the available reference genome. We found qualitative differences between the assemblies of whole animals and those of single tissues. With increasing reads, whole-animal assemblies show a rapid increase in transcripts and discovery of conserved genes, while single-tissue assemblies show a slower discovery of conserved genes, though the assembled transcripts are often longer. A deeper examination of the mouse assemblies shows that with more reads, assembly errors become more frequent, but such errors can be mitigated with more stringent assembly parameters.
Conclusions: These assembly trends suggest that representative assemblies are generated with as few as 20 million reads for tissue samples and 30 million reads for whole animals at RNA-level coverage. These depths provide a good balance between coverage and noise. Beyond 60 million reads, the discovery of new genes is low and sequencing errors in highly expressed genes are likely to accumulate. Finally, siphonophores (polymorphic Cnidarians) are an exception and possibly require alternate assembly strategies
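The saturation behavior described in this abstract can be sketched as a simple rarefaction curve: subsample the reads at increasing depths and count how many distinct genes have been observed at each depth. The read-to-gene mapping below is toy data assuming skewed expression levels, not data from the study:

```python
import random

def rarefaction_curve(read_genes, depths, seed=0):
    """Distinct genes observed after subsampling reads at each depth.

    read_genes: list mapping each read to the gene it came from
    depths: increasing read counts at which to evaluate discovery
    """
    rng = random.Random(seed)
    shuffled = read_genes[:]
    rng.shuffle(shuffled)
    seen, curve, idx = set(), [], 0
    for d in depths:
        seen.update(shuffled[idx:d])
        idx = d
        curve.append(len(seen))
    return curve

# Toy data: 1,000 reads drawn from 50 genes with skewed expression;
# gene discovery saturates well before the full read count is used.
rng = random.Random(1)
reads = rng.choices(range(50), weights=[1 / (g + 1) for g in range(50)], k=1000)
print(rarefaction_curve(reads, [100, 300, 600, 1000]))
```

The flattening of this curve at higher depths mirrors the paper's observation that gene discovery slows beyond a certain read count while errors continue to accumulate.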
Assessing the Impact of Differential Genotyping Errors on Rare Variant Tests of Association
Genotyping errors are well known to affect the power and type I error rate of single marker tests of association. Genotyping errors that occur according to the same process in cases and controls are known as non-differential genotyping errors, whereas genotyping errors that occur with different processes in the cases and controls are known as differential genotyping errors. For single marker tests, non-differential genotyping errors reduce power, while differential genotyping errors increase the type I error rate. However, little is known about the behavior of the new generation of rare variant tests of association in the presence of genotyping errors. In this manuscript we use a comprehensive simulation study to explore the effects of numerous factors on the type I error rate of rare variant tests of association in the presence of differential genotyping error. We find that increased sample size, decreased minor allele frequency, and an increased number of single nucleotide variants (SNVs) included in the test all increase the type I error rate in the presence of differential genotyping errors. We also find that the greater the relative difference in case-control genotyping error rates, the larger the type I error rate. Lastly, as is the case for single marker tests, genotyping errors that misclassify the common homozygote as the heterozygote inflate the type I error rate significantly more than errors that misclassify the heterozygote as the common homozygote. In general, our findings are in line with results from single marker tests. To ensure that type I error inflation does not occur when analyzing next-generation sequencing data, careful consideration of study design (e.g., the use of randomization), caution in meta-analyses and in the use of publicly available controls, and the application of standard quality control metrics are critical
Tunable monoenergetic electron beams from independently controllable laser-wakefield acceleration and injection
We report the results of experiments on laser-wakefield acceleration in a novel two-stage gas target with independently adjustable density and atomic-composition profiles. We were able to tailor these profiles in a way that led to the separation of the processes of electron injection and acceleration and permitted independent control of both. This resulted in the generation of stable, quasimonoenergetic electron beams with central energy tunable in the 50–300 MeV range. For the first time, we are able to independently control the beam charge and energy spread over the entire tunability range
Submillimeter-resolution radiography of shielded structures with laser-accelerated electron beams
We investigate the use of energetic electron beams for high-resolution radiography of flaws embedded in thick solid objects. A bright, monoenergetic electron beam (with energy >100 MeV) was generated by the process of laser-wakefield acceleration through the interaction of 50-TW, 30-fs laser pulses with a supersonic helium jet. The high energy, low divergence, and small source size of these beams make them ideal for high-resolution radiographic studies of cracks or voids embedded in dense materials that are placed at a large distance from the source. We report radiographic imaging of steel with submillimeter resolution
MeV-Energy X Rays from Inverse Compton Scattering with Laser-Wakefield Accelerated Electrons
We report the generation of MeV x rays using an undulator and accelerator that are both driven by the same 100-terawatt laser system. The laser pulse driving the accelerator and the scattering laser pulse are independently optimized to generate a high energy electron beam (>200 MeV) and maximize the output x-ray brightness. The total x-ray photon number was measured to be ∼1×107, the source size was 5 μm, and the beam divergence angle was ∼10 mrad. The x-ray photon energy, peaked at 1 MeV (reaching up to 4 MeV), exceeds the thresholds of fundamental nuclear processes (e.g., pair production and photodisintegration)
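The reported ~1 MeV peak is consistent with the standard estimate for head-on inverse Compton (Thomson) backscatter, E_x ≈ 4γ²E_laser for on-axis photons. The 800 nm drive wavelength below is an assumption (typical for Ti:sapphire 100-TW systems), not stated in the abstract:

```python
# Peak photon energy of head-on inverse Compton (Thomson) backscatter:
# E_x ≈ 4 * gamma^2 * E_laser for photons scattered on axis.
MEC2_MEV = 0.511       # electron rest energy, MeV
E_LASER_EV = 1.55      # 800 nm photon energy (assumed Ti:sapphire laser)

def compton_peak_mev(electron_energy_mev):
    gamma = electron_energy_mev / MEC2_MEV     # Lorentz factor
    return 4 * gamma ** 2 * E_LASER_EV / 1e6   # eV -> MeV

print(f"{compton_peak_mev(200):.2f} MeV")  # -> 0.95 MeV, matching the ~1 MeV peak
```

The quadratic scaling with γ explains why the same electron beam tuned to higher energies reaches the 4 MeV tail reported above.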
Evaluating methods for combining rare variant data in pathway-based tests of genetic association
Analyzing sets of genes in genome-wide association studies is a relatively new approach that aims to capitalize on biological knowledge about the interactions of genes in biological pathways. This approach, called pathway analysis or gene set analysis, has not yet been applied to the analysis of rare variants. Applying pathway analysis to rare variants offers two competing approaches. In the first approach, rare variant statistics are used to generate p-values for each gene (e.g., combined multivariate collapsing [CMC] or weighted-sum [WS]) and the gene-level p-values are combined using standard pathway analysis methods (e.g., gene set enrichment analysis or Fisher’s combined probability method). In the second approach, rare variant methods (e.g., CMC and WS) are applied directly to sets of single-nucleotide polymorphisms (SNPs) representing all SNPs within the genes in a pathway. In this paper we use simulated phenotypes and real next-generation sequencing data from Genetic Analysis Workshop 17 to analyze sets of rare variants with these two competing approaches. The initial results suggest substantial differences between the methods, with Fisher’s combined probability method and the direct application of the WS method yielding the best power. Evidence suggests that the WS method works well in most situations, although Fisher’s method was more likely to be optimal when the number of causal SNPs in the set was low but the risk conferred by the causal SNPs was high
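Fisher's combined probability method mentioned above is straightforward to implement: the statistic X = -2 Σ ln p_i is chi-square distributed with 2k degrees of freedom under the null, and for even degrees of freedom the chi-square tail probability has a closed form. The input p-values below are illustrative, not from the study:

```python
import math

def fisher_combine(pvalues):
    """Fisher's combined probability test.

    X = -2 * sum(ln p_i) is chi-square with 2k degrees of freedom under
    the null; for even df the survival function has a closed form:
    P(X > x) = exp(-x/2) * sum_{j<k} (x/2)^j / j!
    """
    k = len(pvalues)
    half = -sum(math.log(p) for p in pvalues)  # this is x/2
    term, total = 1.0, 1.0
    for j in range(1, k):
        term *= half / j
        total += term
    return math.exp(-half) * total

# Gene-level p-values (illustrative; e.g., from CMC or WS gene tests)
print(fisher_combine([0.04, 0.10, 0.30]))  # -> ~0.036
```

Note that the combined p-value can be smaller than any individual input when several genes show modest signal, which is why this method performed well when many causal SNPs carried moderate risk.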