Fundamental Aspects of the ISM Fractality
The ubiquitous clumpy state of the ISM raises a fundamental and open problem
of physics, which is the correct statistical treatment of systems dominated by
long range interactions. A simple solvable hierarchical model is presented
which explains why systems dominated by gravity prefer to adopt a fractal
dimension around 2 or less, like the cold ISM and large-scale structures. This
is directly related to the general transparency, or blackness, of the Universe.
Comment: 6 pages, LaTeX2e, crckapb macro, no figure, uuencoded compressed tar
file. To be published in the proceedings of the "Dust-Morphology" conference,
Johannesburg, 22-26 January, 1996, D. Block (ed.), Kluwer, Dordrecht.
Physics in Riemann's mathematical papers
Riemann's mathematical papers contain many ideas that arise from physics, and
some of them are motivated by problems from physics. In fact, it is not easy to
separate Riemann's ideas in mathematics from those in physics. Furthermore,
Riemann's philosophical ideas are often in the background of his work on
science. The aim of this chapter is to give an overview of Riemann's
mathematical results based on physical reasoning or motivated by physics. We
also elaborate on the relation with philosophy. While discussing some of
Riemann's philosophical points of view, we review ideas on the same subjects
expressed by Riemann's predecessors, in particular the Greek philosophers,
mainly the pre-Socratics and Aristotle. The final version of this
paper will appear in the book: From Riemann to differential geometry and
relativity (L. Ji, A. Papadopoulos and S. Yamada, ed.) Berlin: Springer, 2017
Constraints on supersymmetry with light third family from LHC data
We present a re-interpretation of the recent ATLAS limits on supersymmetry in
channels with jets (with and without b-tags) and missing energy, in the context
of light third family squarks, while the first two squark families are
inaccessible at the 7 TeV run of the Large Hadron Collider (LHC). In contrast
to interpretations in terms of the high-scale based constrained minimal
supersymmetric standard model (CMSSM), we primarily use the low-scale
parametrisation of the phenomenological MSSM (pMSSM), and translate the limits
in terms of physical masses of the third family squarks. Side by side, we also
investigate the limits in terms of high-scale scalar non-universality, both
with and without low-mass sleptons. Our conclusion is that the limits based on
0-lepton channels are not altered by the mass scale of sleptons and can be
considered more or less model-independent.
Comment: 20 pages, 8 figures, 2 tables. Version published in JHEP.
Giant QCD K-factors beyond NLO
Hadronic observables in Z+jet events can be subject to large NLO corrections
at TeV scales, with K-factors that even reach values of order 50 in some cases.
We develop a method, LoopSim, by which approximate NNLO predictions can be
obtained for such observables, supplementing NLO Z+jet and NLO Z+2-jet results
with a unitarity-based approximation for missing higher loop terms. We first
test the method against known NNLO results for Drell-Yan lepton pt spectra. We
then show our approximate NNLO results for the Z+jet observables. Finally we
examine whether the LoopSim method can provide useful information even in cases
without giant K-factors, with results for observables in dijet events that can
be compared to early LHC data.
Comment: 38 pages, 13 figures; v2 includes additional references.
Prediction and Topological Models in Neuroscience
In the last two decades, philosophy of neuroscience has predominantly focused on explanation. Indeed, it has been argued that mechanistic models are the standard of explanatory success in neuroscience over, among other things, topological models. However, explanatory power is only one virtue of a scientific model. Another is its predictive power. Unfortunately, the notion of prediction has received comparatively little attention in the philosophy of neuroscience, in part because predictions seem disconnected from interventions. In contrast, we argue that topological predictions can and do guide interventions in science, both inside and outside of neuroscience. Topological models allow researchers to predict many phenomena, including diseases, treatment outcomes, aging, and cognition, among others. Moreover, we argue that these predictions also offer strategies for useful interventions. Topology-based predictions play this role regardless of whether they do or can receive a mechanistic interpretation. We conclude by making a case for philosophers to focus on prediction in neuroscience in addition to explanation.
Techniques for Arbuscular Mycorrhiza Inoculum Reduction
It is well established that arbuscular mycorrhizal (AM) fungi can play a significant role in sustainable crop production and environmental conservation. With the increasing awareness of the ecological significance of mycorrhizas and their diversity, research needs to be directed away from simple records of their occurrence or casual speculation of their function (Smith and Read 1997). Rather, the need is for empirical studies and investigations of the quantitative aspects of the distribution of different types and their contribution to the function of ecosystems.
There is no such thing as a fungal effect or a plant effect, but there is an interaction between both symbionts. This results from the AM fungi and plant community size and structure, soil and climatic conditions, and the interplay between all these factors (Kahiluoto et al. 2000). Consequently, it is readily understood that it is the problems associated with methodology that limit our understanding of the functioning and effects of AM fungi within field communities.
Given the ubiquitous presence of AM fungi, a major constraint to evaluating the activity of AM colonisation has been the need to account for the indigenous soil inoculum. This has to be controlled (i.e. reduced or eliminated) if we are to obtain a true control treatment for the analysis of arbuscular mycorrhizas in natural substrates. There are various procedures possible for achieving such an objective, and the purpose of this chapter is to provide details of a number of techniques and to present some evaluation of their advantages and disadvantages.
Although a large number of experiments have investigated the effectiveness of different sterilization procedures for reducing pathogenic soil fungi, little information is available on their impact on beneficial organisms such as AM fungi. Furthermore, some of the techniques have been shown to affect physical and chemical soil characteristics as well as to eliminate soil microorganisms that can interfere with the development of mycorrhizas, and this creates difficulties in interpreting results simply in terms of possible mycorrhizal activity.
An important subject is the differentiation of methods that involve sterilization from those focussed on indigenous inoculum reduction. Soil sterilization aims to destroy or eliminate microbial cells while maintaining the existing chemical and physical characteristics of the soil (Wolf and Skipper 1994). Consequently, it is often used for experiments focussed on specific AM fungi, or to establish a negative control in some other types of study. In contrast, the purpose of inoculum reduction techniques is to create a perturbation that will interfere with mycorrhizal formation, although not necessarily eliminating any component group within the inoculum. Such an approach allows the establishment of different degrees of mycorrhizal formation between treatments and the study of relative effects.
Frequently, the basic techniques used to achieve complete sterilization or merely an inoculum reduction may be similar, with the desired outcome accomplished by adjusting the dosage or intensity of the treatment. The ultimate choice of methodology for establishing an adequate non-mycorrhizal control depends on the design of the particular experiments, the facilities available and the amount of soil requiring treatment.
The degradation of p53 and its major E3 ligase Mdm2 is differentially dependent on the proteasomal ubiquitin receptor S5a.
p53 and its major E3 ligase Mdm2 are both ubiquitinated and targeted to the proteasome for degradation. Despite the importance of this in regulating the p53 pathway, little is known about the mechanisms of proteasomal recognition of ubiquitinated p53 and Mdm2. In this study, we show that knockdown of the proteasomal ubiquitin receptor S5a/PSMD4/Rpn10 inhibits p53 protein degradation and results in the accumulation of ubiquitinated p53. Overexpression of a dominant-negative deletion of S5a lacking its ubiquitin-interacting motifs (UIMs), but which can be incorporated into the proteasome, also causes the stabilization of p53. Furthermore, small interfering RNA (siRNA) rescue experiments confirm that the UIMs of S5a are required for the maintenance of low p53 levels. These observations indicate that S5a participates in the recognition of ubiquitinated p53 by the proteasome. In contrast, targeting S5a has no effect on the rate of degradation of Mdm2, indicating that proteasomal recognition of Mdm2 can be mediated by an S5a-independent pathway. S5a knockdown results in an increase in the transcriptional activity of p53. The selective stabilization of p53 and not Mdm2 provides a mechanism for p53 activation. Depletion of S5a causes a p53-dependent decrease in cell proliferation, demonstrating that p53 can have a dominant role in the response to targeting S5a. This study provides evidence for alternative pathways of proteasomal recognition of p53 and Mdm2. Differences in recognition by the proteasome could provide a means to modulate the relative stability of p53 and Mdm2 in response to cellular signals. In addition, they could be exploited for p53-activating therapies. This work shows that the degradation of proteins by the proteasome can be selectively dependent on S5a in human cells, and that this selectivity can extend to an E3 ubiquitin ligase and its substrate.
Modulation of enhancer looping and differential gene targeting by Epstein-Barr virus transcription factors directs cellular reprogramming
Epstein-Barr virus (EBV) epigenetically reprogrammes B-lymphocytes to drive immortalization and facilitate viral persistence. Host-cell transcription is perturbed principally through the actions of EBV EBNA 2, 3A, 3B and 3C, with cellular genes deregulated by specific combinations of these EBNAs through unknown mechanisms. Comparing human genome binding by these viral transcription factors, we discovered that 25% of binding sites were shared by EBNA 2 and the EBNA 3s and were located predominantly in enhancers. Moreover, 80% of potential EBNA 3A, 3B or 3C target genes were also targeted by EBNA 2, implicating extensive interplay between EBNA 2 and 3 proteins in cellular reprogramming. Investigating shared enhancer sites neighbouring two new targets (WEE1 and CTBP2) we discovered that EBNA 3 proteins repress transcription by modulating enhancer-promoter loop formation to establish repressive chromatin hubs or prevent assembly of active hubs. Re-ChIP analysis revealed that EBNA 2 and 3 proteins do not bind simultaneously at shared sites but compete for binding thereby modulating enhancer-promoter interactions. At an EBNA 3-only intergenic enhancer site between ADAM28 and ADAMDEC1 EBNA 3C was also able to independently direct epigenetic repression of both genes through enhancer-promoter looping. Significantly, studying shared or unique EBNA 3 binding sites at WEE1, CTBP2, ITGAL (LFA-1 alpha chain), BCL2L11 (Bim) and the ADAMs, we also discovered that different sets of EBNA 3 proteins bind regulatory elements in a gene and cell-type specific manner. Binding profiles correlated with the effects of individual EBNA 3 proteins on the expression of these genes, providing a molecular basis for the targeting of different sets of cellular genes by the EBNA 3s. 
Our results therefore highlight the influence of the genomic and cellular context in determining the specificity of gene deregulation by EBV, and provide a paradigm for host-cell reprogramming through modulation of enhancer-promoter interactions by viral transcription factors.
The Formation and Evolution of the First Massive Black Holes
The first massive astrophysical black holes likely formed at high redshifts
(z>10) at the centers of low mass (~10^6 Msun) dark matter concentrations.
These black holes grow by mergers and gas accretion, evolve into the population
of bright quasars observed at lower redshifts, and eventually leave the
supermassive black hole remnants that are ubiquitous at the centers of galaxies
in the nearby universe. The astrophysical processes responsible for the
formation of the earliest seed black holes are poorly understood. The purpose
of this review is threefold: (1) to describe theoretical expectations for the
formation and growth of the earliest black holes within the general paradigm of
hierarchical cold dark matter cosmologies, (2) to summarize several relevant
recent observations that have implications for the formation of the earliest
black holes, and (3) to look into the future and assess the power of
forthcoming observations to probe the physics of the first active galactic
nuclei.
Comment: 39 pages, review for "Supermassive Black Holes in the Distant
Universe", Ed. A. J. Barger, Kluwer Academic Publishers.
