
    A multi-treatment experimental system to examine photosynthetic differentiation in the maize leaf

    BACKGROUND: The establishment of C4 photosynthesis in maize is associated with differential accumulation of gene transcripts and proteins between bundle sheath (B) and mesophyll (M) photosynthetic cell types. We physically separated the photosynthetic cell types of the leaf blade to characterize differences in gene expression by microarray analysis. Additional control treatments were used to account for transcriptional changes induced by the cell preparation treatments. To analyse these data, we developed a statistical model for comparing gene expression values derived from multiple, partially confounded, treatment groups. RESULTS: Differential gene expression in the leaves of wild-type maize seedlings was characterized using the latest release of a maize long-oligonucleotide microarray produced by the Maize Array Project consortium. The complete data set is available through the project web site and at the NCBI GEO website under series record GSE3890. Data were analysed both with and without consideration of the stress associated with cell preparation. CONCLUSION: Empirical comparison of the two analyses suggested that accounting for stress reduced the false identification of stress-responsive transcripts as cell-type enriched. Using the model with a stress term, we identified 8% of features as differentially expressed between bundle sheath and mesophyll cell types at a false discovery rate of 5%. An estimate of the overall proportion of differentially accumulating transcripts (1 - π0) suggested that as many as 18% of the genes may be differentially expressed between B and M. The analytical model presented here is generally applicable to gene expression data and demonstrates the statistical elimination of confounding effects, such as stress, in the context of microarray analysis. We discuss the implications of the high degree of differential transcript accumulation observed for both the establishment and the engineering of the C4 syndrome.
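
    The FDR control and the (1 - π0) estimate above rest on standard multiple-testing machinery. As a minimal sketch (not the authors' code), the Benjamini-Hochberg procedure together with Storey's lambda-based estimator of π0, applied to a vector of per-gene p-values, might look like this:

        import numpy as np

        def bh_reject(pvals, alpha=0.05):
            """Benjamini-Hochberg: boolean mask of hypotheses rejected at FDR alpha."""
            p = np.asarray(pvals)
            m = p.size
            order = np.argsort(p)
            below = p[order] <= alpha * np.arange(1, m + 1) / m
            k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
            mask = np.zeros(m, dtype=bool)
            mask[order[:k]] = True       # reject the k smallest p-values
            return mask

        def storey_pi0(pvals, lam=0.5):
            """Storey's estimator: fraction of p-values above lambda, rescaled."""
            p = np.asarray(pvals)
            return min(1.0, np.mean(p > lam) / (1.0 - lam))

        # 1 - storey_pi0(p) estimates the overall proportion of differentially
        # expressed genes, the quantity behind the ~18% figure quoted above.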

    Dynamical Autler-Townes control of a phase qubit

    Routers, switches, and repeaters are essential components of modern information-processing systems, and similar devices will be needed in future superconducting quantum computers. In this work we investigate experimentally the time evolution of Autler-Townes splitting in a superconducting phase qubit under the application of a control tone resonantly coupled to the second transition. A three-level model that includes independently determined parameters for relaxation and dephasing gives excellent agreement with the experiment. The results demonstrate that the qubit can be used as an ON/OFF switch, with an operating timescale of about 100 ns, for the reflection/transmission of photons from an applied probe microwave tone. The ON state is realized when the control tone is strong enough to generate an Autler-Townes doublet, suppressing the absorption of probe-tone photons and resulting in maximal transmission. Comment: 8 pages, 8 figures
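
    As a rough sketch of the underlying physics (standard three-level dressed-state theory, not equations quoted from the paper): with a probe tone of Rabi frequency \Omega_p on the 0-1 transition and a resonant control tone of Rabi frequency \Omega_c on the 1-2 transition, the rotating-frame Hamiltonian and the resulting doublet are

        \[
        H/\hbar = \frac{\Omega_p}{2}\left(|0\rangle\langle 1| + \mathrm{h.c.}\right)
                + \frac{\Omega_c}{2}\left(|1\rangle\langle 2| + \mathrm{h.c.}\right),
        \qquad
        \omega_\pm = \omega_{01} \pm \frac{\Omega_c}{2},
        \]

    so the probe absorption line splits by exactly \Omega_c, and the switch is effectively ON once \Omega_c exceeds the linewidth set by relaxation and dephasing.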

    Efficient coupling of photons to a single molecule and the observation of its resonance fluorescence

    Single dye molecules at cryogenic temperatures display many spectroscopic phenomena known from free atoms and are thus promising candidates for fundamental quantum optical studies. However, the existing techniques for the detection of single molecules have either sacrificed the information on the coherence of the excited state or have been inefficient. Here we show that these problems can be addressed by focusing the excitation light to a spot comparable in size to the absorption cross-section of a molecule. Our detection scheme allows us to explore resonance fluorescence over 9 orders of magnitude of excitation intensity and to separate its coherent and incoherent parts. In the strong-excitation regime, we demonstrate the first observation of the Mollow triplet from a single solid-state emitter. Under weak excitation we report the detection of a single molecule with an incident power as faint as 150 attowatts, paving the way for studying nonlinear effects with only a few photons. Comment: 6 figures
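
    For context, the textbook two-level-emitter results (not numbers from this paper) for how resonance fluorescence divides into coherent and incoherent parts, as a function of the saturation parameter s:

        \[
        s = \frac{\Omega^2/2}{\Delta^2 + \Gamma^2/4},
        \qquad
        \frac{P_{\mathrm{coh}}}{P_{\mathrm{tot}}} = \frac{1}{1+s},
        \qquad
        \frac{P_{\mathrm{inc}}}{P_{\mathrm{tot}}} = \frac{s}{1+s},
        \]

    so coherent scattering dominates at weak excitation (s << 1), while for s >> 1 the incoherent spectrum develops the Mollow triplet, with sidebands displaced from the carrier by roughly the Rabi frequency \Omega.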

    Graphene plasmonics

    Two rich and vibrant fields of investigation, graphene physics and plasmonics, strongly overlap. Not only does graphene possess intrinsic plasmons that are tunable and adjustable, but a combination of graphene with noble-metal nanostructures promises a variety of exciting applications for conventional plasmonics. The versatility of graphene means that graphene-based plasmonics may enable the manufacture of novel optical devices working in different frequency ranges, from terahertz to the visible, with extremely high speed, low driving voltage, low power consumption and compact sizes. Here we review the field emerging at the intersection of graphene physics and plasmonics. Comment: Review article; 12 pages, 6 figures, 99 references (final version available only at the publisher's web site)
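
    For orientation, a textbook result rather than a formula quoted from the review: in the local Drude limit, the plasmons of a graphene sheet with Fermi energy E_F sandwiched between dielectrics \varepsilon_1 and \varepsilon_2 disperse as

        \[
        \omega(q) \approx \sqrt{\frac{e^2 E_F}{2\pi \varepsilon_0 \bar{\varepsilon} \hbar^2}\, q},
        \qquad
        \bar{\varepsilon} = \frac{\varepsilon_1 + \varepsilon_2}{2},
        \]

    so the plasmon frequency tracks E_F, and hence the carrier density n (E_F \propto \sqrt{n}), which is the origin of the gate tunability the review emphasizes.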

    Presenting the Uncertainties of Odds Ratios Using Empirical-Bayes Prediction Intervals

    Quantifying exposure-disease associations is a central issue in epidemiology. Researchers typically present an odds ratio (or the logarithm of the odds ratio, logOR) estimate together with its confidence interval (CI) for each exposure examined. Here the authors advocate using empirical-Bayes-based 'prediction intervals' (PIs) to bound the uncertainty of logORs. The PI approach is applicable to a panel of factors believed to be exchangeable (no extra information, other than the data itself, is available to distinguish some logORs from the others). The authors demonstrate its use in a genetic epidemiological study on age-related macular degeneration (AMD). The proposed PIs enjoy straightforward probabilistic interpretations: a 95% PI has a probability of 0.95 of encompassing the true value, and among a collection of 95% PIs the expected proportion of true values encompassed is 0.95. The PI approach is theoretically more efficient (producing shorter intervals) than the traditional CI approach; in the AMD data, the average efficiency gain is 51.2%. The PI approach is advocated for presenting the uncertainties of many logORs in a study, for its straightforward probabilistic interpretation and higher efficiency while maintaining the nominal coverage probability.
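
    To illustrate the idea, here is a minimal sketch under one simple exchangeable normal-normal formulation (logOR_i ~ N(theta_i, se_i^2) with theta_i ~ N(mu, tau^2)); this model and the moment estimates are assumptions for illustration, not necessarily the authors' exact specification:

        import numpy as np
        from scipy import stats

        def eb_prediction_intervals(logor, se, level=0.95):
            """Empirical-Bayes PIs for a panel of exchangeable log odds ratios."""
            y, s2 = np.asarray(logor), np.asarray(se) ** 2
            mu = y.mean()                               # prior mean (moment estimate)
            tau2 = max(y.var(ddof=1) - s2.mean(), 0.0)  # prior variance (moment estimate)
            shrink = s2 / (s2 + tau2)                   # weight pulled toward mu
            post_mean = shrink * mu + (1 - shrink) * y
            post_sd = np.sqrt((1 - shrink) * s2)
            z = stats.norm.ppf(0.5 + level / 2)
            return post_mean - z * post_sd, post_mean + z * post_sd

        # These intervals are shorter than the usual y_i +/- z*se_i CIs whenever
        # tau2 is finite, which is the source of the efficiency gain reported above.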

    A Condensation-Ordering Mechanism in Nanoparticle-Catalyzed Peptide Aggregation

    Nanoparticles introduced into living cells can strongly promote the aggregation of peptides and proteins. Here we use molecular dynamics simulations to characterise in detail the process by which nanoparticle surfaces catalyse the self-assembly of peptides into fibrillar structures. Simulating a system of hundreds of peptides over the millisecond timescale enables us to show that the mechanism of aggregation involves a first phase, in which small structurally disordered oligomers assemble onto the nanoparticle, and a second phase, in which they evolve into highly ordered beta-sheets as their size increases.

    A frequentist framework of inductive reasoning

    Reacting against the limitation of statistics to decision procedures, R. A. Fisher proposed for inductive reasoning the use of the fiducial distribution, a parameter-space distribution of epistemological probability transferred directly from limiting relative frequencies rather than computed according to the Bayes update rule. The proposal is developed as follows, using the confidence measure of a scalar parameter of interest. (With the restriction to a one-dimensional parameter space, a confidence measure is essentially a fiducial probability distribution free of complications involving ancillary statistics.) A betting game establishes a sense in which confidence measures are the only reliable inferential probability distributions. The equality between the probabilities encoded in a confidence measure and the coverage rates of the corresponding confidence intervals ensures that the measure's rule for assigning confidence levels to hypotheses is uniquely minimax in the game. Although a confidence measure can be computed without any prior distribution, previous knowledge can be incorporated into confidence-based reasoning. To adjust a p-value or confidence interval for prior information, the confidence measure from the observed data can be combined with one or more independent confidence measures representing previous agent opinion. (The former confidence measure may correspond to a posterior distribution with frequentist matching of coverage probabilities.) The representation of subjective knowledge in terms of confidence measures rather than prior probability distributions preserves approximate frequentist validity. Comment: major revision
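
    One compact way to state the coverage-matching property underlying this argument (standard confidence-distribution notation, paraphrased rather than quoted from the paper): a confidence measure for a scalar parameter \theta can be represented by a data-dependent cdf C(\theta; X) that is uniform at the true value, so that

        \[
        C(\theta_0; X) \sim \mathrm{Uniform}(0,1)
        \quad\Longrightarrow\quad
        P_{\theta_0}\!\left[ C^{-1}(\alpha/2) \le \theta_0 \le C^{-1}(1 - \alpha/2) \right] = 1 - \alpha,
        \]

    i.e. the probabilities the measure assigns to interval hypotheses coincide exactly with frequentist coverage rates.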

    Linear, Deterministic, and Order-Invariant Initialization Methods for the K-Means Clustering Algorithm

    Over the past five decades, k-means has become the clustering algorithm of choice in many application domains, primarily due to its simplicity, time/space efficiency, and invariance to the ordering of the data points. Unfortunately, the algorithm's sensitivity to the initial selection of the cluster centers remains its most serious drawback. Numerous initialization methods have been proposed to address this drawback. Many of these methods, however, have time complexity superlinear in the number of data points, which makes them impractical for large data sets. On the other hand, linear methods are often random and/or sensitive to the ordering of the data points, and are generally unreliable in that the quality of their results is unpredictable. It is therefore common practice to perform multiple runs of such methods and take the output of the run that produces the best results; this practice, however, greatly increases the computational requirements of the otherwise highly efficient k-means algorithm. In this chapter, we investigate the empirical performance of six linear, deterministic (non-random), and order-invariant k-means initialization methods on a large and diverse collection of data sets from the UCI Machine Learning Repository. The results demonstrate that two relatively unknown hierarchical initialization methods due to Su and Dy outperform the remaining four methods with respect to two objective effectiveness criteria. In addition, a recent method due to Erisoglu et al. performs surprisingly poorly. Comment: 21 pages, 2 figures, 5 tables, Partitional Clustering Algorithms (Springer, 2014). arXiv admin note: substantial text overlap with arXiv:1304.7465, arXiv:1209.196
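
    To give a flavour of what "linear, deterministic, and order-invariant" means in practice, here is a rough Python sketch in the spirit of Su and Dy's variance-partitioning seeding (an illustrative reconstruction, not their exact algorithm):

        import numpy as np

        def var_part_init(X, k):
            """Deterministic, order-invariant k-means seeding by recursively
            splitting the cluster with the largest within-cluster SSE."""
            clusters = [np.arange(len(X))]             # start with one cluster: all points
            while len(clusters) < k:
                sse = [((X[idx] - X[idx].mean(0)) ** 2).sum() for idx in clusters]
                idx = clusters.pop(int(np.argmax(sse)))
                d = int(np.argmax(X[idx].var(0)))      # axis of largest variance
                cut = X[idx, d].mean()                 # split at the mean along that axis
                left, right = idx[X[idx, d] <= cut], idx[X[idx, d] > cut]
                if len(left) == 0 or len(right) == 0:  # degenerate split: stop early
                    clusters.append(idx)
                    break
                clusters += [left, right]
            return np.array([X[idx].mean(0) for idx in clusters])

        # The seeds depend only on the data's geometry, never on row order or a
        # random state, so repeated k-means runs give identical results.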

    Isotocin controls ion regulation through regulating ionocyte progenitor differentiation and proliferation

    The present study, using zebrafish as a model, explores the role of isotocin, a homolog of oxytocin, in controlling ion-regulatory mechanisms. Treatment with double-deionized water for 24 h significantly stimulated isotocin mRNA expression in zebrafish embryos. Whole-body Cl−, Ca2+, and Na+ contents, mRNA expression of ion transporters and of ionocyte differentiation-related transcription factors, and the number of skin ionocytes all decreased in isotocin morphants. In contrast, overexpression of isotocin increased ionocyte numbers. Isotocin morpholinos significantly suppressed foxi3a mRNA expression, while isotocin cRNA stimulated foxi3a expression at the tail-bud stage of zebrafish embryos. The density of P63 (an epidermal stem cell marker)-positive cells was downregulated by isotocin morpholinos and upregulated by isotocin cRNA. Taken together, these results suggest that isotocin stimulates the proliferation of epidermal stem cells and the differentiation of ionocyte progenitors by regulating the P63 and Foxi3a transcription factors, thereby enhancing the functional activities of ionocytes.