Additive-Free, Low-Temperature Crystallization of Stable α-FAPbI3 Perovskite
Formamidinium lead triiodide (FAPbI3) is attractive for photovoltaic devices due to its optimal bandgap at around 1.45 eV and improved thermal stability compared with methylammonium-based perovskites. Crystallization of phase-pure α-FAPbI3 conventionally requires high-temperature thermal annealing at 150 °C, whilst the obtained α-FAPbI3 is metastable at room temperature. Here, aerosol-assisted crystallization (AAC) is reported, which converts yellow δ-FAPbI3 into black α-FAPbI3 at only 100 °C using precursor solutions containing only lead iodide and formamidinium iodide with no chemical additives. The obtained α-FAPbI3 exhibits remarkably enhanced stability compared to the 150 °C annealed counterparts, in combination with improvements in film crystallinity and photoluminescence yield. Using X-ray diffraction, X-ray scattering, and density functional theory simulations, it is identified that relaxation of residual tensile strain, achieved through the lower annealing temperature and post-crystallization crystal growth during AAC, is the key factor that facilitates the formation of phase-stable α-FAPbI3. This overcomes the strain-induced lattice expansion that is known to cause the metastability of α-FAPbI3. Accordingly, pure FAPbI3 p-i-n solar cells are reported, facilitated by the low-temperature (≤100 °C) AAC processing, which demonstrate increases in both power conversion efficiency and operational stability compared to devices fabricated using 150 °C annealed films.
Association of tamoxifen use and reduced risk of contralateral breast cancer for BRCA1 and BRCA2 mutation carriers
Statistical analysis and significance testing of serial analysis of gene expression data using a Poisson mixture model
Background: Serial analysis of gene expression (SAGE) is used to obtain quantitative snapshots of the transcriptome. These profiles are count-based and are assumed to follow a binomial or Poisson distribution. However, tag counts observed across multiple libraries (for example, one or more groups of biological replicates) have additional variance that cannot be accommodated by this assumption alone. Several models have been proposed to account for this effect, all of which utilize a continuous prior distribution to explain the excess variance. Here, a Poisson mixture model, which assumes the excess variability arises from sampling a mixture of distinct components, is proposed, and its merits are discussed and evaluated.
Results: The goodness of fit of the Poisson mixture model on 15 sets of biological SAGE replicates is compared to that of the previously proposed hierarchical gamma-Poisson (negative binomial) model, and a substantial improvement is seen. In further support of the mixture model, we observe: 1) an increase in the number of mixture components needed to fit the expression of tags representing more than one transcript; and 2) a tendency for components to cluster libraries into the same groups. A confidence score is presented that can identify tags that are differentially expressed between groups of SAGE libraries. Several examples where this test outperforms those previously proposed are highlighted.
Conclusion: The Poisson mixture model performs well as a) a method to represent SAGE data from biological replicates, and b) a basis to assign significance when testing for differential expression between multiple groups of replicates. Code for the R statistical software package is included to assist investigators in applying this model to their own data.
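The R implementation mentioned above belongs to the original work; as a language-neutral illustration of the underlying idea, the minimal sketch below fits a two-component Poisson mixture to tag counts by expectation-maximisation. The function name, component count, initialisation, and tolerance are illustrative assumptions, not the paper's code.

import numpy as np
from scipy.stats import poisson

def fit_poisson_mixture(counts, n_iter=200, tol=1e-8):
    """Fit a two-component Poisson mixture by EM (illustrative sketch)."""
    counts = np.asarray(counts, dtype=float)
    # Crude initialisation: rates straddling the sample mean.
    lam = np.array([counts.mean() * 0.5 + 1e-6, counts.mean() * 1.5 + 1e-6])
    pi = np.array([0.5, 0.5])
    ll_old = -np.inf
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each count.
        log_p = np.log(pi) + poisson.logpmf(counts[:, None], lam)
        log_norm = np.logaddexp(log_p[:, 0], log_p[:, 1])
        resp = np.exp(log_p - log_norm[:, None])
        # M-step: update mixing weights and Poisson rates.
        pi = resp.mean(axis=0)
        lam = (resp * counts[:, None]).sum(axis=0) / resp.sum(axis=0)
        ll = log_norm.sum()
        if ll - ll_old < tol:   # EM log-likelihood is non-decreasing
            break
        ll_old = ll
    return pi, lam, ll

# Example: counts for one tag across six replicate libraries; two clear
# expression levels should be recovered as two components.
pi, lam, ll = fit_poisson_mixture([3, 5, 4, 21, 19, 23])
print(pi, lam, ll)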
Engaging Undergraduates in Science Research: Not Just About Faculty Willingness.
Despite the many benefits of involving undergraduates in research and the growing number of undergraduate research programs, few scholars have investigated the factors that affect faculty members' decisions to involve undergraduates in their research projects. We investigated the individual factors and institutional contexts that predict faculty members' likelihood of engaging undergraduates in their research project(s). Using data from the Higher Education Research Institute's 2007-2008 Faculty Survey, we employed hierarchical generalized linear modeling to analyze data from 4,832 science, technology, engineering, and mathematics (STEM) faculty across 194 institutions to examine how organizational citizenship behavior theory and social exchange theory relate to mentoring students in research. Key findings show that faculty who work in the life sciences and those who receive government funding for their research are more likely to involve undergraduates in their research project(s). In addition, faculty at liberal arts or historically Black colleges are significantly more likely to involve undergraduate students in research. Implications for advancing undergraduate research opportunities are discussed.
Power calculations using exact data simulation: A useful tool for genetic study designs.
Statistical power calculations constitute an essential first step in the planning of scientific studies. If sufficient summary statistics are available, power calculations are in principle straightforward and computationally light. In designs that comprise distinct groups (e.g., MZ and DZ twins), sufficient statistics can be calculated within each group and analyzed in a multi-group model. However, when the number of possible groups is prohibitively large (say, in the hundreds), power calculations on the basis of the summary statistics become impractical. In that case, researchers may resort to Monte Carlo-based power studies, which involve the simulation of hundreds or thousands of replicate samples for each specified set of population parameters. Here we present exact data simulation as a third method of power calculation. Exact data simulation involves a transformation of raw data so that the data fit the hypothesized model exactly. As in power calculation with summary statistics, exact data simulation is computationally light, while the number of groups in the analysis has little bearing on the practicality of the method. The method is applied to three genetic designs for illustrative purposes.
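To illustrate the core transformation, the sketch below (a minimal sketch assuming a multivariate-normal setting; names and conventions are illustrative, not the paper's implementation) rescales raw draws so that their sample mean and covariance equal the hypothesized population values exactly. Fitting a model to such data then yields fit statistics at their expected values, so power follows without Monte Carlo replication.

import numpy as np

def exact_simulate(n, mu, sigma, rng=None):
    """Return an (n, p) sample whose mean and (biased) covariance equal
    mu and sigma exactly."""
    rng = np.random.default_rng(rng)
    p = len(mu)
    z = rng.standard_normal((n, p))
    z -= z.mean(axis=0)                      # exact zero mean
    s = np.cov(z, rowvar=False, bias=True)   # current sample covariance
    # Whiten with the sample Cholesky factor, recolour with the target one.
    z = z @ np.linalg.inv(np.linalg.cholesky(s)).T
    z = z @ np.linalg.cholesky(np.asarray(sigma)).T
    return z + np.asarray(mu)

# Example: 100 'twin pairs' with population correlation .5; the fitted
# model recovers r = .5 exactly, so the fit statistic behaves like its
# expected (noncentrality) value.
data = exact_simulate(100, mu=[0, 0], sigma=[[1, .5], [.5, 1]])
print(data.mean(axis=0))
print(np.cov(data, rowvar=False, bias=True))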
Using a 3D virtual muscle model to link gene expression changes during myogenesis to protein spatial location in muscle
Background: Myogenesis is an ordered process whereby mononucleated muscle precursor cells (myoblasts) fuse into multinucleated myotubes that eventually differentiate into myofibres, involving substantial changes in gene expression and the organisation of structural components of the cells. To gain further insight into the orchestration of these structural changes, we have overlaid the spatial organisation of the protein components of a muscle cell with their gene expression changes during differentiation using a new 3D visualisation tool: the Virtual Muscle 3D (VMus3D).
Folding of the apolipoprotein A1 driven by the salt concentration as a possible mechanism to improve cholesterol trapping
The folding of the cholesterol-trapping apolipoprotein A1 in aqueous solution at increasing ionic strength is studied using atomically detailed molecular dynamics simulations. We calculate various structural properties to characterize the conformation of the protein, such as the radius of gyration, the radial distribution function, and the end-to-end distance. Additionally, we report information using tools specifically tailored for the characterization of proteins, such as the mean smallest distance matrix and the Ramachandran plot. We find that two qualitatively different configurations of this protein are preferred: one where the protein is extended, and one where it forms loops or closed structures. It is argued that the latter promote the association of the protein with cholesterol and other fatty acids.
Comment: 14 pages, 6 figures. To appear in "Selected Topics of Computational and Experimental Fluid Mechanics", Springer, J. Klapp, G. Ruíz, A. Medina, A. López & L. Di G. Sigalotti (eds.), 201
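As a pointer to what two of the structural properties named above measure, the following minimal sketch computes the radius of gyration and the end-to-end distance from an (N, 3) array of atomic coordinates, assuming equal masses; it is an illustration, not the analysis code used in the study.

import numpy as np

def radius_of_gyration(coords):
    # Rg = root-mean-square distance of atoms from the centre of mass.
    com = coords.mean(axis=0)
    return np.sqrt(((coords - com) ** 2).sum(axis=1).mean())

def end_to_end_distance(coords):
    # Distance between the first and last atoms along the chain.
    return np.linalg.norm(coords[-1] - coords[0])

# Demo on random coordinates standing in for a protein conformation.
coords = np.random.default_rng(0).standard_normal((50, 3))
print(radius_of_gyration(coords), end_to_end_distance(coords))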
A review of ECG-based diagnosis support systems for obstructive sleep apnea
Humans need sleep. It is important for physical and psychological recreation. During sleep our consciousness is suspended or at least altered. Hence, our ability to avoid or react to disturbances is reduced. These disturbances can come from external sources or from disorders within the body. Obstructive Sleep Apnea (OSA) is such a disorder. It is caused by obstruction of the upper airways, which causes periods where breathing ceases. In many cases, periods of reduced breathing, known as hypopnea, precede OSA events. The medical background of OSA is well understood, but the traditional diagnosis is expensive, as it requires sophisticated measurements and human interpretation of potentially large amounts of physiological data. Electrocardiogram (ECG) measurements have the potential to reduce the cost of OSA diagnosis by simplifying the measurement process. On the downside, detecting OSA events based on ECG data is a complex task that requires highly skilled practitioners. Computer algorithms can help to detect the subtle signal changes that indicate the presence of a disorder. That approach has the following advantages: computers never tire, processing resources are economical, and progress, in the form of better algorithms, can be easily disseminated as updates over the internet. Furthermore, Computer-Aided Diagnosis (CAD) reduces intra- and inter-observer variability. In this review, we adopt and support the position that computer-based ECG signal interpretation is able to diagnose OSA with a high degree of accuracy.
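As a hedged illustration of a typical first step in such ECG-based pipelines (not a method drawn from any specific system covered by the review), the sketch below locates R peaks and derives the RR-interval series, whose per-epoch variability is commonly fed to apnea classifiers. The refractory distance and prominence threshold are illustrative assumptions.

import numpy as np
from scipy.signal import find_peaks

def rr_intervals(ecg, fs, min_prominence=0.5):
    # R peaks: prominent maxima at least 0.4 s apart (refractory period).
    peaks, _ = find_peaks(ecg, distance=int(0.4 * fs),
                          prominence=min_prominence)
    return np.diff(peaks) / fs   # RR intervals in seconds

# Synthetic demo: 60 s of noise with one crude 'R peak' per second.
fs = 250
ecg = 0.05 * np.random.default_rng(1).standard_normal(60 * fs)
ecg[np.arange(60) * fs] += 1.0
print(rr_intervals(ecg, fs)[:5])   # ~1.0 s intervals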
Random-phase approximation and its applications in computational chemistry and materials science
The random-phase approximation (RPA) as an approach for computing the electronic correlation energy is reviewed. After a brief account of its basic concept and historical development, the paper is devoted to the theoretical formulation of RPA and its applications to realistic systems. With several illustrative applications, we discuss the implications of RPA for computational chemistry and materials science. The computational cost of RPA, which is critical for its widespread use in future applications, is also addressed. In addition, current correction schemes going beyond RPA and directions of further development are discussed.
Comment: 25 pages, 11 figures, published online in J. Mater. Sci. (2012)
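For reference, the adiabatic-connection expression for the RPA correlation energy as it is commonly written in the literature (notation assumed here, not quoted from the review), where χ⁰ is the independent-particle response function and v the bare Coulomb interaction:

% Standard RPA correlation energy (common literature form; notation assumed)
\begin{equation}
  E_c^{\mathrm{RPA}}
    = \frac{1}{2\pi} \int_0^{\infty} \mathrm{d}\omega\,
      \mathrm{Tr}\!\left[ \ln\!\left(1 - \chi^0(\mathrm{i}\omega)\, v\right)
                          + \chi^0(\mathrm{i}\omega)\, v \right]
\end{equation}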