
    ON THE RENTAL PRICE OF CAPITAL AND THE PROFIT RATE: THE PERILS AND PITFALLS OF TOTAL FACTOR PRODUCTIVITY GROWTH

    This paper considers the implications of the conceptual difference between the rental price of capital, which is embedded in the neoclassical cost identity (output equals the cost of labour plus the cost of capital) and used in growth accounting studies, and the profit rate, which can be derived from the national income and product accounts (NIPA). The neoclassical identity is a "virtual" identity in that it depends on a series of assumptions (constant returns to scale and perfectly competitive factor markets). The income side of the NIPA also provides an accounting identity for output, as the sum of the wage bill plus the surplus. This identity, however, is a "real" one, in the sense that it depends on no assumptions and thus always holds. It is shown that because the neoclassical cost identity and the NIPA income accounting identity are formally equivalent expressions, estimations of aggregate production functions and growth accounting studies are tautologies. Likewise, testing the hypothesis of competitive markets within Hall's (1988) framework gives rise to a null hypothesis that cannot be rejected statistically.
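
    A minimal sketch of the two identities being contrasted, with notation assumed here (Q output, w the wage, L labour, r the rental price of capital, K capital, \Pi the NIPA surplus), written in LaTeX:

        Q = wL + rK      % neoclassical cost identity; requires constant
                         % returns and competitive factor markets
        Q = wL + \Pi     % NIPA income identity; holds by construction
        r^* \equiv \Pi / K \;\Rightarrow\; Q = wL + r^* K

    Defining the ex-post profit rate r* as \Pi/K makes the second expression formally indistinguishable from the first; this is the equivalence that, per the abstract, renders production-function estimation and growth accounting tautological.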

    CORRECTING FOR BIASES WHEN ESTIMATING PRODUCTION FUNCTIONS: AN ILLUSION OF THE LAWS OF ALGEBRA?

    This paper argues that the true cause of the endogeneity bias that allegedly appears when estimating production functions, and which the literature has tried to address since the 1940s, is simply omitted-variable bias arising from an incorrect approximation to an accounting identity. As a result, we question recent attempts to solve the problem by developing new estimators.
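
    A compact sketch of the identity argument the paper builds on, with notation assumed here (a = wL/Q the labour share, hats denoting growth rates). Differentiating the income identity Q = wL + rK gives, in LaTeX:

        \hat{Q} = a(\hat{w} + \hat{L}) + (1-a)(\hat{r} + \hat{K})
        % with roughly constant shares and
        % a\hat{w} + (1-a)\hat{r} \approx \lambda, this integrates to
        Q \approx A e^{\lambda t} L^{a} K^{1-a}

    A regression of this Cobb-Douglas-looking form therefore tracks the identity itself, and mis-approximating the factor-price term \lambda shows up as exactly the kind of omitted-variable bias the paper describes.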

    Stereospecific synthesis of the aglycone of pseudopterosin E

    No description supplied.

    Oxford Nanopore sequencing, hybrid error correction, and de novo assembly of a eukaryotic genome

    Monitoring the progress of DNA molecules through a membrane pore has been postulated as a method for sequencing DNA for several decades. Recently, a nanopore-based sequencing instrument, the Oxford Nanopore MinION, has become available, and we used it to sequence the Saccharomyces cerevisiae genome. To make use of these data, we developed Nanocorr, a novel open-source hybrid error correction algorithm designed specifically for Oxford Nanopore reads, because existing packages were incapable of assembling such long reads (5-50 kbp) at such high error rates (approximately 5% to 40% error). With this new method, we were able to perform a hybrid error correction of the nanopore reads using complementary MiSeq data and produce a de novo assembly that is highly contiguous and accurate: the contig N50 length is more than ten times greater than that of an Illumina-only assembly (678 kbp versus 59.9 kbp), with >99.88% consensus identity relative to the reference. Furthermore, the assembly with the long nanopore reads presents a much more complete representation of the features of the genome and correctly assembles gene cassettes, rRNAs, transposable elements, and other genomic features that were almost entirely absent from the Illumina-only assembly.
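
    Nanocorr itself is available from the paper's source; purely as an illustration of the hybrid error-correction idea, the toy sketch below (the names and alignment format are hypothetical, not Nanocorr's API, and it ignores the indel errors that dominate real nanopore data) replaces each long-read position with the majority base among the accurate short reads aligned over it:

        from collections import Counter

        def correct_long_read(long_read, aligned_short_reads):
            """Toy hybrid correction: aligned_short_reads holds
            (start_offset, sequence) pairs placed on the long read.
            Covered positions take the majority short-read base;
            uncovered positions keep the original (noisy) base."""
            votes = [Counter() for _ in long_read]
            for start, seq in aligned_short_reads:
                for i, base in enumerate(seq):
                    if 0 <= start + i < len(long_read):
                        votes[start + i][base] += 1
            return "".join(
                v.most_common(1)[0][0] if v else orig
                for orig, v in zip(long_read, votes)
            )

        noisy = "ACGTTAGXCAT"  # 'X' stands in for a sequencing error
        shorts = [(4, "TAGG"), (5, "AGGC"), (6, "GGCA")]
        print(correct_long_read(noisy, shorts))  # -> ACGTTAGGCAT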

    The suppression of CMR in Nd(Mn1−xCox)AsO0.95F0.05

    This research is supported by the EPSRC (research grant EP/L002493/1). We also acknowledge the UK Science and Technology Facilities Council (STFC) for provision of beam time at ISIS. Peer reviewed. Postprint.

    A comparative analysis of exome capture

    Background: Human exome resequencing using commercial target capture kits has been, and continues to be, used to sequence large numbers of individuals in the search for variants associated with various human diseases. We rigorously evaluated the capabilities of two solution exome capture kits. These analyses help clarify the strengths and limitations of the resulting data and systematically identify variables that should be considered when using them. Results: Each exome kit performed well at capturing the targets it was designed to capture, which mainly correspond to the consensus coding sequence (CCDS) annotations of the human genome. In addition, based on their respective targets, each capture kit coupled with high-coverage Illumina sequencing produced highly accurate nucleotide calls. However, other databases, such as the Reference Sequence collection (RefSeq), define the exome more broadly, and so, not surprisingly, the exome kits did not capture these additional regions. Conclusions: Commercial exome capture kits provide a very efficient way to sequence select areas of the genome at very high accuracy. Here we provide the data to help guide critical analyses of sequencing data derived from these products.
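
    As a rough illustration of the target-versus-exome comparison described above (the coordinates and names below are invented, not the paper's data), measuring how much of a broader exome annotation a kit's target intervals cover reduces to interval intersection:

        def merge(intervals):
            """Merge overlapping (start, end) half-open intervals."""
            merged = []
            for s, e in sorted(intervals):
                if merged and s <= merged[-1][1]:
                    merged[-1] = (merged[-1][0], max(merged[-1][1], e))
                else:
                    merged.append((s, e))
            return merged

        def covered_fraction(targets, exons):
            """Fraction of exon bases inside the kit's target intervals."""
            targets, exons = merge(targets), merge(exons)
            exon_bases = sum(e - s for s, e in exons)
            overlap = sum(
                max(0, min(ee, te) - max(es, ts))
                for es, ee in exons
                for ts, te in targets
            )
            return overlap / exon_bases

        ccds_like_kit = [(100, 200), (500, 650)]
        refseq_like = [(100, 200), (480, 700), (900, 950)]
        print(round(covered_fraction(ccds_like_kit, refseq_like), 2))  # 0.68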

    The crystal structure and electrical properties of the oxide ion conductor Ba3WNbO8.5

    This research was supported by the Northern Research Partnership and the University of Aberdeen. We also acknowledge the Science and Technology Facilities Council (STFC) for provision of beamtime at ISIS. Peer reviewed. Postprint.

    The Relationship between the Crystal Structure and Electrical Properties of Oxide Ion Conducting Ba3W1.2Nb0.8O8.6

    This research was supported by the University of Aberdeen and EPSRC (research grant EP/L002493/1). We also acknowledge the UK Science and Technology Facilities Council (STFC) for provision of beamtime at ISIS and the ILL. Peer reviewed. Postprint.

    Validation and assessment of variant calling pipelines for next-generation sequencing

    Background: The processing and analysis of the large-scale data generated by next-generation sequencing (NGS) experiments is challenging and is a burgeoning area of new methods development. Several new bioinformatics tools have been developed for calling sequence variants from NGS data. Here, we validate the variant calling of these tools and compare their relative accuracy to determine which data processing pipeline is optimal. Results: We developed a unified pipeline for processing NGS data that encompasses four modules: mapping, filtering, realignment and recalibration, and variant calling. We processed 130 subjects from an ongoing whole exome sequencing study through this pipeline. To evaluate the accuracy of each module, we conducted a series of comparisons between the single nucleotide variant (SNV) calls from the NGS data and either gold-standard Sanger sequencing on a total of 700 variants or array genotyping data on a total of 9,935 single-nucleotide polymorphisms. A head-to-head comparison showed that the Genome Analysis Toolkit (GATK) provided more accurate calls than SAMtools (positive predictive value of 92.55% vs. 80.35%, respectively). Realignment of mapped reads and recalibration of base quality scores before SNV calling proved crucial to accurate variant calling. The GATK HaplotypeCaller algorithm for variant calling outperformed the UnifiedGenotyper algorithm. We also showed a relationship between mapping quality, read depth and allele balance, and SNV call accuracy. However, if best practices are used in data processing, additional filtering based on these metrics provides little gain, and accuracies of >99% are achievable. Conclusions: Our findings will help to determine the best approach for processing NGS data to confidently call variants for downstream analyses. To enable others to implement and replicate our results, all of our code is freely available at http://metamoodics.org/wes
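
    For reference, the headline accuracy metric is straightforward to reproduce: positive predictive value is the share of pipeline calls confirmed by the gold standard. A minimal sketch (the variant representation here is assumed, not the paper's pipeline format):

        def positive_predictive_value(called, confirmed):
            """PPV = true positives / all positive calls; a call is a
            (chromosome, position, alt_allele) tuple and confirmed is
            the set validated by Sanger or array genotyping."""
            called, confirmed = set(called), set(confirmed)
            return len(called & confirmed) / len(called) if called else 0.0

        calls = [("chr1", 12345, "A"), ("chr1", 99999, "T"), ("chr2", 555, "G")]
        sanger = [("chr1", 12345, "A"), ("chr2", 555, "G")]
        print(f"PPV = {positive_predictive_value(calls, sanger):.2%}")  # 66.67%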