2,030 research outputs found
A Brownian particle in a microscopic periodic potential
We study a model for a massive test particle moving in a microscopic periodic
potential and interacting with a reservoir of light particles. In the regime
considered, the fluctuations in the test particle's momentum resulting from
collisions typically outweigh the shifts in momentum generated by the periodic
force, and so the force is effectively a perturbative contribution. The
mathematical starting point is an idealized reduced dynamics for the test
particle given by a linear Boltzmann equation. In the limit that the mass ratio
of a single reservoir particle to the test particle tends to zero, we show that
there is convergence to the Ornstein-Uhlenbeck process under the standard
normalizations for the test particle variables. Our analysis is primarily
directed towards bounding the perturbative effect of the periodic potential on
the particle's momentum.Comment: 60 pages. We reorganized the article and made a few simplifications
of the conten
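The limiting Ornstein-Uhlenbeck behaviour described above can be illustrated with a simple Euler-Maruyama simulation. This is a generic sketch, not the paper's model: the drift and noise coefficients below are illustrative choices, not the constants derived from the mass-ratio limit.

```python
import numpy as np

def simulate_ou(theta=1.0, sigma=np.sqrt(2.0), dt=1e-2, n_steps=5000,
                n_paths=2000, p0=0.0, seed=0):
    """Euler-Maruyama simulation of dP = -theta * P dt + sigma dW."""
    rng = np.random.default_rng(seed)
    p = np.full(n_paths, p0)
    for _ in range(n_steps):
        p = p - theta * p * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    return p

# the stationary variance of the OU process is sigma^2 / (2*theta) = 1 here
final = simulate_ou()
print(final.mean(), final.var())
```

Running long enough to reach stationarity, the empirical momentum distribution relaxes to the Gaussian stationary law regardless of the initial condition.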
Missing Features Reconstruction Using a Wasserstein Generative Adversarial Imputation Network
Missing data is one of the most common preprocessing problems. In this paper,
we experimentally research the use of generative and non-generative models for
feature reconstruction. Variational Autoencoder with Arbitrary Conditioning
(VAEAC) and Generative Adversarial Imputation Network (GAIN) were researched as
representatives of generative models, while the denoising autoencoder (DAE)
represented non-generative models. Performance of the models is compared to
traditional methods k-nearest neighbors (k-NN) and Multiple Imputation by
Chained Equations (MICE). Moreover, we introduce WGAIN as the Wasserstein
modification of GAIN, which turns out to be the best imputation model when the
degree of missingness is less than or equal to 30%. Experiments were performed
on real-world and artificial datasets with continuous features where different
percentages of features, varying from 10% to 50%, were missing. Evaluation of
algorithms was done by measuring the accuracy of the classification model
previously trained on the uncorrupted dataset. The results show that GAIN and
especially WGAIN are the best imputers regardless of the conditions. In
general, they outperform or are comparable to MICE, k-NN, DAE, and VAEAC.
Comment: Preprint of the conference paper (ICCS 2020), part of the Lecture Notes in Computer Science.
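The evaluation protocol described above (train a classifier on the uncorrupted data, then score it on imputed data) can be sketched with numpy alone. Column-mean imputation stands in here for the GAIN/WGAIN-style imputers, and the two-class Gaussian data and nearest-centroid classifier are illustrative choices, not the paper's datasets or models.

```python
import numpy as np

rng = np.random.default_rng(0)

# two Gaussian classes with continuous features
n, d = 1000, 5
X = np.vstack([rng.normal(0.0, 1.0, (n, d)),
               rng.normal(2.0, 1.0, (n, d))])
y = np.repeat([0, 1], n)

# nearest-centroid classifier "trained" on the uncorrupted data
centroids = np.stack([X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)])

def predict(Z):
    d2 = ((Z[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

# corrupt 30% of the feature values, then impute with column means
# (a simple stand-in for a learned imputation model)
mask = rng.random(X.shape) < 0.3
X_miss = np.where(mask, np.nan, X)
col_means = np.nanmean(X_miss, axis=0)
X_imp = np.where(mask, col_means, X_miss)

acc_clean = (predict(X) == y).mean()
acc_imputed = (predict(X_imp) == y).mean()
print(acc_clean, acc_imputed)
```

The gap between the two accuracies is the quantity the paper uses to rank imputers: a better imputer keeps the downstream accuracy closer to the clean-data baseline.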
Neither fair nor unchangeable but part of the natural order: orientations towards inequality in the face of criticism of the economic system
The magnitude of climate change threats to life on the planet is not matched by the level of current mitigation strategies. To contribute to our understanding of inaction in the face of climate change, the reported study draws upon the pro-status-quo motivations encapsulated within System Justification Theory. In an online questionnaire study, participants (N = 136) initially completed a measure of General System Justification. Participants in a “System-critical” condition were then exposed to information linking environmental problems to the current economic system; participants in a Control condition were exposed to information unrelated to either environmental problems or the economic system. A measure of Economic System Justification was subsequently administered. Regressions of Economic System Justification revealed interactions between General System Justification and Information Type: higher general system justifiers in the System-critical condition rated the economic system as less fair than did their counterparts in the Control condition. However, they also indicated inequality as more natural than did their counterparts in the Control condition. The groups did not differ in terms of beliefs about the economic system being open to change. The results are discussed in terms of how reassurance about the maintenance of the status quo may be bolstered by recourse to beliefs in a natural order.
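The moderated-regression analysis described (an outcome regressed on a continuous predictor, a binary condition, and their product) can be sketched with an ordinary least-squares fit. The simulated coefficients below are illustrative only, not the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 136  # sample size reported in the study

gsj = rng.normal(0.0, 1.0, n)               # General System Justification (centered)
cond = rng.integers(0, 2, n).astype(float)  # 0 = Control, 1 = System-critical
# illustrative data-generating model with a GSJ x condition interaction
esj = 0.5 * gsj - 0.2 * cond - 0.4 * gsj * cond + rng.normal(0.0, 0.3, n)

# design matrix: intercept, GSJ, condition, interaction term
X = np.column_stack([np.ones(n), gsj, cond, gsj * cond])
beta, *_ = np.linalg.lstsq(X, esj, rcond=None)
print(beta)
```

A nonzero coefficient on the product column is what "an interaction between General System Justification and Information Type" means operationally: the slope of the outcome on GSJ differs between the two conditions.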
Differential expression analysis with global network adjustment
<p>Background: Large-scale chromosomal deletions or other non-specific perturbations of the transcriptome can alter the expression of hundreds or thousands of genes, and it is of biological interest to understand which genes are most profoundly affected. We present a method for predicting a gene’s expression as a function of other genes thereby accounting for the effect of transcriptional regulation that confounds the identification of genes differentially expressed relative to a regulatory network. The challenge in constructing such models is that the number of possible regulator transcripts within a global network is on the order of thousands, and the number of biological samples is typically on the order of 10. Nevertheless, there are large gene expression databases that can be used to construct networks that could be helpful in modeling transcriptional regulation in smaller experiments.</p>
<p>Results: We demonstrate a type of penalized regression model that can be estimated from large gene expression databases, and then applied to smaller experiments. The ridge parameter is selected by minimizing the cross-validation error of the predictions in the independent out-sample. This tends to increase the model stability and leads to a much greater degree of parameter shrinkage, but the resulting biased estimation is mitigated by a second round of regression. Nevertheless, the proposed computationally efficient “over-shrinkage” method outperforms previously used LASSO-based techniques. In two independent datasets, we find that the median proportion of explained variability in expression is approximately 25%, and this results in a substantial increase in the signal-to-noise ratio allowing more powerful inferences on differential gene expression leading to biologically intuitive findings. We also show that a large proportion of gene dependencies are conditional on the biological state, which would be impossible with standard differential expression methods.</p>
<p>Conclusions: By adjusting for the effects of the global network on individual genes, both the sensitivity and reliability of differential expression measures are greatly improved.</p>
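The two-stage "over-shrinkage" idea (a heavily shrunk ridge fit, followed by a second regression that undoes the bias) can be sketched in numpy. The data, the penalty value, and the simple in-sample setting below are all illustrative; the paper selects the ridge parameter by cross-validation against an independent out-sample from large expression databases.

```python
import numpy as np

rng = np.random.default_rng(2)

# synthetic "expression" data: one target gene driven by a few regulators,
# with far more candidate regulators (p) than samples (n)
n, p = 50, 200
X = rng.normal(0.0, 1.0, (n, p))
beta_true = np.zeros(p)
beta_true[:5] = 1.0
y = X @ beta_true + rng.normal(0.0, 0.5, n)

def ridge_fit(X, y, lam):
    k = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ y)

# stage 1: deliberately heavy shrinkage
beta1 = ridge_fit(X, y, lam=500.0)
f1 = X @ beta1

# stage 2: regress y on the shrunken predictions; the fitted slope
# rescales them, mitigating the bias introduced by over-shrinkage
A = np.column_stack([np.ones(n), f1])
a, b = np.linalg.lstsq(A, y, rcond=None)[0]
f2 = a + b * f1

mse1 = ((y - f1) ** 2).mean()
mse2 = ((y - f2) ** 2).mean()
print(mse1, mse2, b)
```

The second-stage slope comes out above 1, confirming the first-stage fits were attenuated, and the rescaled predictions can only match or improve the squared error of the raw shrunken fits.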
BRCA1 and BRCA2 mutations in a population-based study of male breast cancer
Background: The contribution of BRCA1 and BRCA2 to the incidence of male breast cancer (MBC)
in the United Kingdom is not known, and the importance of these genes in the increased risk of female
breast cancer associated with a family history of breast cancer in a male first-degree relative is unclear.
Methods: We have carried out a population-based study of 94 MBC cases collected in the UK. We
screened genomic DNA for mutations in BRCA1 and BRCA2 and used family history data from these
cases to calculate the risk of breast cancer to female relatives of MBC cases. We also estimated the
contribution of BRCA1 and BRCA2 to this risk.
Results: Nineteen cases (20%) reported a first-degree relative with breast cancer, of whom seven also
had an affected second-degree relative. The breast cancer risk in female first-degree relatives was 2.4
times (95% confidence interval [CI] = 1.4–4.0) the risk in the general population. No BRCA1 mutation
carriers were identified and five cases were found to carry a mutation in BRCA2. Allowing for a
mutation detection sensitivity of 70%, the carrier frequency for BRCA2 mutations was 8%
(95% CI = 3–19). All the mutation carriers had a family history of breast, ovarian, prostate or
pancreatic cancer. However, BRCA2 accounted for only 15% of the excess familial risk of breast
cancer in female first-degree relatives.
Conclusion: These data suggest that other genes that confer an increased risk for both female and
male breast cancer have yet to be found.
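The reported carrier frequency follows from a simple sensitivity correction applied to the numbers above: five BRCA2 carriers detected among 94 cases, with mutation screening assumed to detect 70% of mutations actually present.

```python
# sensitivity-corrected carrier frequency: observed carriers / (sensitivity * n)
detected = 5
n_cases = 94
sensitivity = 0.70

carrier_freq = detected / (sensitivity * n_cases)
print(f"{carrier_freq:.1%}")  # about 7.6%, which rounds to the reported 8%
```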
Antikaon production in nucleon-nucleon reactions near threshold
The antikaon production cross section from nucleon-nucleon reactions near
threshold is studied in a meson exchange model. We include both pion and kaon
exchange, but neglect the interference between the amplitudes. In the case of pion
exchange the antikaon production cross section can be expressed in terms of the
antikaon production cross section from a pion-nucleon interaction, which we
take from the experimental data if available. Otherwise, a -resonance
exchange model is introduced to relate the different reaction cross sections.
In the case of kaon exchange the antikaon production cross section is related to
the elastic and cross sections, which are again taken from
experimental measurements. We find that the one-meson exchange model gives a
satisfactory fit to the available data for the cross section
at high energies. We compare our predictions for the cross section near
threshold with an earlier empirical parameterization and that from phase space
models.
Comment: 16 pages, LaTeX, 5 postscript figures included, submitted to Z. Phys.
Discovering study-specific gene regulatory networks
This article has been made available through the Brunel Open Access Publishing Fund.
Microarrays are commonly used in biology because of their ability to simultaneously measure thousands of genes under different conditions. Due to their structure, typically containing a high amount of variables but far fewer samples, scalable network analysis techniques are often employed. In particular, consensus approaches have been recently used that combine multiple microarray studies in order to find networks that are more robust. The purpose of this paper, however, is to combine multiple microarray studies to automatically identify subnetworks that are distinctive to specific experimental conditions rather than common to them all. To better understand key regulatory mechanisms and how they change under different conditions, we derive unique networks from multiple independent networks built using glasso, which goes beyond standard correlations. This involves calculating cluster prediction accuracies to detect the most predictive genes for a specific set of conditions. We differentiate between accuracies calculated using cross-validation within a selected cluster of studies (the intra prediction accuracy) and those calculated on a set of independent studies belonging to different study clusters (inter prediction accuracy). Finally, we compare our method's results to related state-of-the-art techniques. We explore how the proposed pipeline performs on both synthetic data and real data (wheat and Fusarium). Our results show that subnetworks can be identified reliably that are specific to subsets of studies and that these networks reflect key mechanisms that are fundamental to the experimental conditions in each of those subsets.
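The glasso networks mentioned above encode conditional rather than marginal dependence via the inverse covariance (precision) matrix. As a minimal unpenalized stand-in, a toy three-gene chain shows how a precision matrix separates direct from indirect relationships; the data and coefficients are illustrative, not from the wheat or Fusarium studies.

```python
import numpy as np

rng = np.random.default_rng(3)

# toy regulatory chain x -> y -> z: x and z are correlated marginally,
# but conditionally independent given y
n = 5000
x = rng.normal(0.0, 1.0, n)
y = 0.8 * x + rng.normal(0.0, 1.0, n)
z = 0.8 * y + rng.normal(0.0, 1.0, n)
data = np.column_stack([x, y, z])

prec = np.linalg.inv(np.cov(data, rowvar=False))

# partial correlation from the precision matrix Theta:
# rho_ij = -Theta_ij / sqrt(Theta_ii * Theta_jj)
d = np.sqrt(np.diag(prec))
partial = -prec / np.outer(d, d)
np.fill_diagonal(partial, 1.0)

marginal = np.corrcoef(data, rowvar=False)
print(marginal[0, 2], partial[0, 2])
```

The marginal x-z correlation is sizeable while the partial correlation is near zero, which is why precision-matrix methods such as glasso "go beyond standard correlations": only the direct edges x-y and y-z survive.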
Thioglycosides are efficient metabolic decoys of glycosylation that reduce selectin-dependent leukocyte adhesion
Metabolic decoys are synthetic analogs of naturally occurring biosynthetic acceptors. These compounds divert cellular biosynthetic pathways by acting as artificial substrates that usurp the activity of natural enzymes. While O-linked glycosides are common, they are only partially effective even at millimolar concentrations. In contrast, we report that N-acetylglucosamine (GlcNAc) incorporated into various thioglycosides robustly truncates cell surface N- and O-linked glycan biosynthesis at 10-100 μM concentrations. The >10-fold greater inhibition is in part due to the resistance of thioglycosides to hydrolysis by intracellular hexosaminidases. The thioglycosides reduce β-galactose incorporation into lactosamine chains, cell surface sialyl Lewis-X expression, and leukocyte rolling on selectin substrates including inflamed endothelial cells under fluid shear. Treatment of granulocytes with thioglycosides prior to infusion into mice inhibited neutrophil homing to sites of acute inflammation and bone marrow by ∼80%-90%. Overall, thioglycosides represent an easy-to-synthesize class of efficient metabolic inhibitors or decoys. They reduce N-/O-linked glycan biosynthesis and inflammatory leukocyte accumulation.
Tate Form and Weak Coupling Limits in F-theory
We consider the weak coupling limit of F-theory in the presence of
non-Abelian gauge groups implemented using the traditional ansatz coming from
Tate's algorithm. We classify the types of singularities that could appear in
the weak coupling limit and explain their resolution. In particular, the weak
coupling limit of SU(n) gauge groups leads to an orientifold theory which
suffers from conifold singularities that do not admit a crepant resolution
compatible with the orientifold involution. We present a simple resolution to
this problem by introducing a new weak coupling regime that admits
singularities compatible with both a crepant resolution and an orientifold
symmetry. We also comment on possible applications of the new limit to model
building. We finally discuss other unexpected phenomena as for example the
existence of several non-equivalent directions to flow from strong to weak
coupling leading to different gauge groups.
Comment: 34 pages.