
    Confronting Cultural Difference in the Establishment of a Global Zen Community

    As a commercial phenomenon, Zen is recognizable throughout the world as a lucrative brand name that communicates harmony, simplicity, and cosmopolitan elegance. In contrast, the Japanese Zen institution’s attempts to develop Zen into a successful global religion have proven more problematic. Despite initial successes by Japanese clergy in establishing centers of Zen practice throughout Europe and the Americas, the past fifty years have seen the dream of a global Zen community descend into a legacy of controversy, scandals, and schisms over conflicting claims of authority. Looking specifically at the internationalization efforts of the Japanese Sōtō Zen sect, this paper will discuss how essentialist assumptions of culture, religion, and race held by both Japanese and international practitioners continue to directly hinder Zen’s development as a global religion. Complicating matters further are competing ideas of authenticity and legitimacy which have led to groups of dedicated but disenfranchised Zen practitioners becoming alienated from the Japanese Zen institution, all the while trying to maintain a sense of shared “lineage” with the same clergy and temples in Japan. As I will show, recent attempts to reconcile these groups and overcome cultural difference ultimately suggest that these ruptures hide a very different problem: an ideological divide over how participants conceive of the role of religion in their lives, and conflicting cultural expectations about the nature of religious practice itself.

    Choosing Smoothness Parameters for Smoothing Splines by Minimizing an Estimate of Risk

    Smoothing splines are a popular approach to non-parametric regression problems. We use periodic smoothing splines to fit a periodic signal-plus-noise model to data for which we assume there are underlying circadian patterns. In the smoothing spline methodology, choosing an appropriate smoothness parameter is an important step in practice. In this paper, we draw a connection between smoothing splines and REACT estimators that motivates new criteria for choosing the smoothness parameter. The new criteria are compared to three existing methods, namely cross-validation, generalized cross-validation, and the generalized maximum likelihood criterion, by a Monte Carlo simulation and by an application to the study of circadian patterns. For most of the situations presented in the simulations, including the practical example, the new criteria outperform the three existing criteria.
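
    As a point of reference for the parameter-selection problem discussed above, the sketch below picks the smoothness parameter of a cubic smoothing spline by minimizing the generalized cross-validation (GCV) score, one of the existing criteria the paper compares against. It uses base R's smooth.spline on simulated circadian-like data; the paper's periodic splines and REACT-based criteria are not reproduced here.

```r
## Minimal sketch: choose the smoothing parameter by minimizing GCV on a
## simulated "circadian" signal-plus-noise data set. Illustrates one of the
## baseline criteria only, not the REACT-based criteria proposed in the paper.
set.seed(1)
n  <- 200
tt <- seq(0, 1, length.out = n)                 # time over one period
f  <- 2 * sin(2 * pi * tt) + cos(4 * pi * tt)   # smooth underlying signal
y  <- f + rnorm(n, sd = 0.7)                    # observed signal plus noise

spar_grid <- seq(0.2, 1.2, by = 0.05)           # candidate smoothness values
gcv <- sapply(spar_grid, function(s) {
  fit <- smooth.spline(tt, y, spar = s)
  rss <- sum((fit$yin - fit$y)^2)               # residual sum of squares
  n * rss / (n - fit$df)^2                      # GCV(lambda) = n * RSS / (n - df)^2
})

best_fit <- smooth.spline(tt, y, spar = spar_grid[which.min(gcv)])
plot(tt, y, col = "grey"); lines(best_fit, lwd = 2)  # data and GCV-chosen fit
```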

    Risk-based operating limits for dynamic security constrained electric power systems

    In this work we develop a method that provides risk-based security assessment in an operating environment, considering any type of security violation. Particular emphasis is placed on security constraints associated with dynamic system performance. Our work is motivated by the perception that today's deterministic approach to security assessment often results in costly operating restrictions that are not justified by the corresponding low level of risk. A risk-based approach to security assessment is attractive because it balances the system's reliability cost and reliability worth. Our method allows determination of operating limits based on the risk of insecurity at a given operating point. We characterize the operating point in terms of the pre-contingency controllable parameters that most influence post-contingency system performance (the critical parameter set). Total risk at a given operating point is obtained by summing the individual risks associated with defined security violations and their corresponding triggering events. We develop risk expressions that account for fully reliable conventional protection equipment and for passive and active failures of main breakers. This dissertation introduces the concept of limiting operating point functions: curves that give limiting values of the critical parameter for various fault locations on a line and characterize the dependency of the operating limit on fault type and fault location. The limiting operating point functions combine system stability performance and probability-of-instability information. This dissertation includes a detailed study of how excitation systems and other parameters affect limiting operating point functions. We also develop, using probability theory, expressions to calculate the conditional probability of insecurity, given that a fault occurs, for thermal overloads, as well as two approaches for computing the probability of transient instability: one based on the Law of Total Probability and the other on Cartesian products. Finally, we use a modified version of the IEEE Reliability Test System to illustrate risk-based electric power system security assessment and to compare it with traditional deterministic security assessment. We determine operating limits using iso-risk contours drawn in the space of pre-contingency controllable parameters, effectively creating nomograms based on risk. The contours of constant risk in the space of operating parameters provide a risk management tool that allows managers to justify decisions to operate beyond deterministic operating limits when it is economically advantageous to do so.
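
    The risk decomposition described above reduces, at its core, to weighting each possible triggering event by its probability, the chance it actually causes a security violation at the current operating point, and the severity of that violation. The numbers in the sketch below are invented purely for illustration; they are not taken from the dissertation.

```r
## Illustrative risk index for one operating point:
##   risk = sum over triggering events of
##          P(event) * P(security violation | event, operating point) * severity
## All probabilities and severities below are made up for illustration.
events <- data.frame(
  contingency = c("3-phase fault, line A", "1-phase fault, line A", "3-phase fault, line B"),
  p_event     = c(1e-4, 8e-4, 2e-4),    # probability of the triggering event
  p_violation = c(0.60, 0.05, 0.30),    # P(instability or overload | event)
  severity    = c(100, 100, 80)         # consequence in arbitrary cost units
)
events$risk <- with(events, p_event * p_violation * severity)
total_risk  <- sum(events$risk)         # compare across candidate operating points
print(events)
total_risk
```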

    A Statistical Framework for the Analysis of Microarray Probe-Level Data

    Microarrays are an example of the powerful high-throughput genomics tools that are revolutionizing the measurement of biological systems. In this and other technologies, a number of critical steps are required to convert the raw measures into the data relied upon by biologists and clinicians. These data manipulations, referred to as preprocessing, have enormous influence on the quality of the ultimate measurements and studies that rely upon them. Many researchers have previously demonstrated that the use of modern statistical methodology can substantially improve the accuracy and precision of gene expression measurements, relative to ad hoc procedures introduced by designers and manufacturers of the technology. However, further substantial improvements are possible. Microarrays are now being used to measure diverse genomic endpoints, including yeast mutant representations, the presence of SNPs, the presence of deletions/insertions, and protein binding sites by chromatin immunoprecipitation (known as ChIP-chip). In each case, the genomic units of measurement are relatively short DNA molecules referred to as probes. Without an appropriate understanding of the bias and variance of these measurements, biological inferences based upon probe analysis will be compromised. Standard operating procedure for microarray researchers is to use preprocessed data as the starting point for the statistical analyses that produce reported results. This has prevented many researchers from carefully considering their choice of preprocessing methodology. Furthermore, the fact that the preprocessing step greatly affects the stochastic properties of the final statistical summaries is ignored. In this paper we propose a statistical framework that permits the integration of preprocessing into the standard statistical analysis flow of microarray data. We demonstrate its usefulness by applying the idea in three different applications of the technology.
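
    To make the notion of probe-level data concrete, the sketch below summarizes one simulated probe set with an RMA-style additive model fit by median polish. This is the kind of preprocessing step the paper argues should be integrated into, rather than hidden from, the downstream statistical analysis; it is not the proposed framework itself.

```r
## Toy probe-level summarization: log2 intensities for one probe set are
## modeled as sample effect + probe effect and fit robustly by median polish,
## yielding one expression summary per sample (RMA-style illustration only).
set.seed(2)
n_probes  <- 11
n_samples <- 6
true_expr <- rnorm(n_samples, mean = 8, sd = 1)     # per-sample expression
probe_eff <- rnorm(n_probes, sd = 0.5)              # systematic probe effects
log_pm    <- outer(probe_eff, true_expr, "+") +
             matrix(rnorm(n_probes * n_samples, sd = 0.3), n_probes, n_samples)

fit <- medpolish(log_pm, trace.iter = FALSE)        # robust additive decomposition
expr_summary <- fit$overall + fit$col               # expression estimate per sample
round(cbind(true = true_expr, estimate = expr_summary), 2)
```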

    A statistical framework for the analysis of microarray probe-level data

    In microarray technology, a number of critical steps are required to convert the raw measurements into the data relied upon by biologists and clinicians. These data manipulations, referred to as preprocessing, influence the quality of the ultimate measurements and studies that rely upon them. Standard operating procedure for microarray researchers is to use preprocessed data as the starting point for the statistical analyses that produce reported results. This has prevented many researchers from carefully considering their choice of preprocessing methodology. Furthermore, the fact that the preprocessing step affects the stochastic properties of the final statistical summaries is often ignored. In this paper we propose a statistical framework that permits the integration of preprocessing into the standard statistical analysis flow of microarray data. This general framework is relevant in many microarray platforms and motivates targeted analysis methods for specific applications. We demonstrate its usefulness by applying the idea in three different applications of the technology. Comment: Published at http://dx.doi.org/10.1214/07-AOAS116 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).

    Multiple Lab Comparison of Microarray Platforms

    Microarray technology is a powerful tool able to measure RNA expression for thousands of genes at once. Various studies have been published comparing competing platforms, with mixed results: some find agreement, others do not. As the number of researchers starting to use microarrays and the number of cross-platform meta-analysis studies rapidly increase, appropriate platform assessments become more important. Here we present results from a comparison study that offers important improvements over those previously described in the literature. In particular, we note that none of the previously published papers considers differences between labs. For this paper, a consortium of ten labs from the Washington, DC/Baltimore (USA) area was formed to compare three heavily used platforms using identical RNA samples. Appropriate statistical analysis demonstrates that relatively large differences exist between labs using the same platform, but that the results from the best-performing labs agree rather well. Supplemental material is available from http://www.biostat.jhsph.edu/~ririzarr/techcomp

    Model-Based Quality Assessment and Base-Calling for Second-Generation Sequencing Data

    Second-generation sequencing (sec-gen) technology can sequence millions of short fragments of DNA in parallel, and is capable of assembling complex genomes for a small fraction of the price and time of previous technologies. In fact, a recently formed international consortium, the 1000 Genomes Project, plans to fully sequence the genomes of approximately 1,200 people. The prospect of comparative analysis at the sequence level of a large number of samples across multiple populations may be achieved within the next five years. These data present unprecedented challenges in statistical analysis. For instance, analysis operates on millions of short nucleotide sequences, or reads (strings of A, C, G, or T between 30 and 100 characters long), which are the result of complex processing of noisy continuous fluorescence intensity measurements known as base-calling. The complexity of the base-calling discretization process results in reads of widely varying quality within and across sequence samples. This variation in processing quality results in infrequent but systematic errors that we have found to mislead downstream analysis of the discretized sequence read data. For instance, a central goal of the 1000 Genomes Project is to quantify across-sample variation at the single nucleotide level. At this resolution, small error rates in sequencing prove significant, especially for rare variants. Sec-gen sequencing is a relatively new technology for which potential biases and sources of obscuring variation are not yet fully understood. Therefore, modeling and quantifying the uncertainty inherent in the generation of sequence reads is of utmost importance. In this paper we present a simple model to capture uncertainty arising in the base-calling procedure of the Illumina/Solexa GA platform. Model parameters have a straightforward interpretation in terms of the chemistry of base-calling, allowing for informative and easily interpretable metrics that capture the variability in sequencing quality. Our model provides informative estimates that are readily usable in quality assessment tools while significantly improving base-calling performance.
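
    To make the base-calling step concrete, the deliberately naive sketch below calls each cycle's base as the brightest of the four channel intensities and scores confidence by how much that channel dominates the runner-up (similar in spirit to a purity or chastity measure). It is a toy stand-in, not the model proposed in the paper.

```r
## Naive base-caller on simulated four-channel intensities: call the brightest
## channel at each cycle and attach a crude purity score. Toy illustration only,
## not the paper's model of the Illumina/Solexa base-calling chemistry.
set.seed(3)
bases  <- c("A", "C", "G", "T")
cycles <- 36
true_seq  <- sample(bases, cycles, replace = TRUE)
signal_ix <- cbind(seq_len(cycles), match(true_seq, bases))
intensity <- matrix(rexp(cycles * 4, rate = 5), cycles, 4,
                    dimnames = list(NULL, bases))    # background in all channels
intensity[signal_ix] <- intensity[signal_ix] + 1     # add signal in the true channel

called  <- bases[max.col(intensity)]                 # brightest channel per cycle
sorted  <- t(apply(intensity, 1, sort, decreasing = TRUE))
purity  <- sorted[, 1] / (sorted[, 1] + sorted[, 2]) # in (0.5, 1]; higher = cleaner call
mean(called == true_seq)                             # per-read accuracy
summary(purity)
```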

    Accounting for cellular heterogeneity is critical in epigenome-wide association studies

    Background: Epigenome-wide association studies of human disease and other quantitative traits are becoming increasingly common. A series of papers reporting age-related changes in DNA methylation profiles in peripheral blood has already been published. However, blood is a heterogeneous collection of different cell types, each with a very different DNA methylation profile. Results: Using a statistical method that permits estimating the relative proportions of cell types from DNA methylation profiles, we examine data from five previously published studies and find strong evidence of cell composition change across age in blood. We also demonstrate that, in these studies, cellular composition explains much of the observed variability in DNA methylation. Furthermore, we find high levels of confounding between age-related variability and cellular composition at the CpG level. Conclusions: Our findings underscore the importance of considering cell composition variability in epigenetic studies based on whole blood and other heterogeneous tissue sources. We also provide software for estimating and exploring this composition confounding for the Illumina 450k microarray.
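
    The sketch below illustrates the reference-based idea behind this kind of correction: a whole-blood methylation profile is modeled as a weighted average of purified cell-type reference profiles, and the weights (cell proportions) are recovered by regression. The published estimators use constrained optimization; here negatives are simply clipped and the weights renormalized to keep the example base-R only, and all data are simulated.

```r
## Reference-based cell-mixture deconvolution, simplified: recover the cell-type
## proportions that best reconstruct a mixed (whole-blood) methylation profile
## from simulated purified reference profiles. Clipping + renormalization stands
## in for the constrained fit used by published methods.
set.seed(4)
n_cpgs     <- 500
cell_types <- c("granulocyte", "CD4T", "B", "monocyte")
reference  <- matrix(runif(n_cpgs * 4), n_cpgs, 4, dimnames = list(NULL, cell_types))

true_prop   <- setNames(c(0.55, 0.25, 0.10, 0.10), cell_types)   # ground-truth mixture
sample_meth <- drop(reference %*% true_prop) + rnorm(n_cpgs, sd = 0.02)

fit  <- lm(sample_meth ~ reference - 1)          # unconstrained least squares
prop <- pmax(coef(fit), 0)                       # clip negative weights
prop <- prop / sum(prop)                         # renormalize to sum to 1
round(rbind(true = true_prop, estimated = unname(prop)), 3)
```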

    In vitro identification and in silico utilization of interspecies sequence similarities using GeneChip® technology

    BACKGROUND: Genomic approaches in large animal models (canine, ovine, etc.) are challenging due to insufficient genomic information for these species and the lack of availability of corresponding microarray platforms. To address this problem, we speculated that conserved interspecies genetic sequences can be experimentally detected by cross-species hybridization. The Affymetrix platform's probe redundancy offers flexibility in selecting individual probes with high sequence similarities between related species for gene expression analysis. RESULTS: Gene expression profiles of 40 canine samples were generated using the human HG-U133A GeneChip (U133A). Due to interspecies genetic differences, only 14 ± 2% of canine transcripts were detected by U133A probe sets, whereas profiling of 40 human samples detected 49 ± 6% of human transcripts. However, when these probe sets were deconstructed into individual probes and the performance of each probe was examined, we found that 47% of human probes were able to find their targets in canine tissues and generate a detectable hybridization signal. We therefore restricted gene expression analysis to these probes and observed a 60% increase in the number of identified canine transcripts. These results were validated by comparing the transcripts identified by our restricted analysis of the cross-species hybridization with those identified by hybridizing total canine lung mRNA to the new Affymetrix Canine GeneChip®. CONCLUSION: The experimental identification of probes with a detectable hybridization signal, and the restriction of gene expression analysis to those probes, drastically increases transcript detection in canine-human cross-species hybridization, suggesting the possibility of broad utilization of cross-hybridizations of related species using GeneChip technology.
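
    The probe-masking strategy described above can be summarized in a few lines: score every individual probe for a detectable signal in the cross-species hybridizations, and call a transcript present only from the probes that pass. The thresholds and simulated intensities below are invented for illustration; the paper derives detectability from the actual canine hybridization data.

```r
## Sketch of probe masking for cross-species hybridization: keep only probes
## with a detectable signal in most samples, then call a probe set (transcript)
## detected if enough of its probes survive. Data and cutoffs are simulated.
set.seed(5)
n_sets    <- 50
per_set   <- 11
n_samples <- 8
probe_set <- rep(sprintf("ps%02d", seq_len(n_sets)), each = per_set)
hits      <- runif(length(probe_set)) < 0.47        # probes that find a canine target
intensity <- matrix(rnorm(length(probe_set) * n_samples,
                          mean = ifelse(hits, 9, 5), sd = 0.8),
                    nrow = length(probe_set))       # log2 intensities, probes x samples

detectable <- rowMeans(intensity > 7) >= 0.75       # probe-level detection call
kept_sets  <- tapply(detectable, probe_set, sum) >= 3   # >= 3 usable probes per set
sum(kept_sets)                                      # probe sets retained for analysis
```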

    Using the R Package crlmm for Genotyping and Copy Number Estimation

    Genotyping platforms such as Affymetrix can be used to assess genotype-phenotype as well as copy number-phenotype associations at millions of markers. While genotyping algorithms are largely concordant when assessed on HapMap samples, tools to assess copy number changes are more variable and often discordant. One explanation for the discordance is that copy number estimates are susceptible to systematic differences between groups of samples that were processed at different times or by different labs. Analysis algorithms that do not adjust for batch effects are prone to spurious measures of association. The R package crlmm implements a multilevel model that adjusts for batch effects and provides allele-specific estimates of copy number. This paper illustrates a workflow for the estimation of allele-specific copy number and integration of the marker-level estimates with complementary Bioconductor software for inferring regions of copy number gain or loss. All analyses are performed in the statistical environment R.
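
    A rough outline of the kind of crlmm copy-number workflow the paper walks through is sketched below. The function names follow my reading of the crlmm copy-number vignette and should be checked against the current package documentation; the CEL file paths, CDF name, and batch labels are placeholders.

```r
## Outline of a crlmm allele-specific copy-number workflow (to be checked
## against the package vignette; paths, cdfName, and batch are placeholders).
library(crlmm)

celfiles <- list.files("path/to/CEL", pattern = "\\.CEL$", full.names = TRUE)
batch    <- rep(c("plate1", "plate2"), length.out = length(celfiles))  # processing batches

## Preprocess and genotype, recording batch so the multilevel model can adjust
## copy-number estimates for plate/scan-date effects.
cnSet <- genotype(celfiles, cdfName = "genomewidesnp6", batch = batch)

## Add allele-specific copy-number estimates to the same object.
cnSet <- crlmmCopynumber(cnSet)

## Marker-level estimates can then be handed to Bioconductor segmentation
## packages (e.g. DNAcopy or VanillaICE) to infer regions of gain or loss.
```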