Large cycles in random permutations related to the Heisenberg model
We study the weighted version of the interchange process where a permutation receives weight $\theta^{\#\mathrm{cycles}}$. For $\theta=2$ this is T\'oth's representation of the quantum Heisenberg ferromagnet on the complete graph. We prove, for suitable values of $\theta$, that large cycles appear at `low temperature'.
Comment: 11 pages
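For orientation, a minimal sketch of the kind of measure involved; the notation ($\sigma$ for the permutation, $\beta$ for the time/inverse temperature, $N$ for the number of vertices) is assumed for the example and not taken from the abstract. If $\sigma$ is the permutation obtained by composing the transpositions of an interchange (random-transposition) process run for time $\beta$ on the complete graph $K_N$, the weighted version biases its law by the number of cycles:
\[
  \mathbb{P}^{\theta}_{\beta,N}(\sigma=\pi) \;=\; \frac{\mathbb{E}\big[\theta^{\#\mathrm{cycles}(\sigma)}\,\mathbf{1}\{\sigma=\pi\}\big]}{\mathbb{E}\big[\theta^{\#\mathrm{cycles}(\sigma)}\big]},
\]
so $\theta=1$ recovers the unweighted interchange process and $\theta=2$ gives the representation of the spin-$\frac12$ Heisenberg ferromagnet mentioned above.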
The free energy in a class of quantum spin systems and interchange processes
We study a class of quantum spin systems in the mean-field setting of the complete graph. For spin $S=\frac12$ the model is the Heisenberg ferromagnet; for general spin $S$ it has a probabilistic representation as a cycle-weighted interchange process. We determine the free energy and the critical temperature (recovering results by T\'oth and by Penrose when $S=\frac12$). The critical temperature is shown to coincide (as a function of $S$) with that of the $q=2S+1$ state classical Potts model, and the phase transition is discontinuous when $S\geq 1$.
Comment: 22 pages
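A concrete reading of the statement above, using the identification $\theta=2S+1$ between the spin and the cycle weight (assumed here, as in the standard probabilistic representation):
\[
  S=\tfrac12 \;\Rightarrow\; \theta=2 \;\;(\text{$2$-state Potts: continuous transition}), \qquad
  S=1 \;\Rightarrow\; \theta=3 \;\;(\text{$3$-state Potts: discontinuous transition}).
\]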
Infrared bound and mean-field behaviour in the quantum Ising model
We prove an infrared bound for the transverse field Ising model. This bound is stronger than the previously known infrared bound for the model, and allows us to investigate mean-field behaviour. As an application we show that the critical exponent $\gamma$ for the susceptibility attains its mean-field value $\gamma=1$ in dimension at least 4 (positive temperature), respectively 3 (ground state), with logarithmic corrections in the boundary cases.
Comment: 42 pages, 5 figures, to appear in CM
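For reference, $\gamma$ is the standard susceptibility exponent (textbook notation, not specific to this paper): the susceptibility diverges as the critical point is approached,
\[
  \chi(\beta) \;\sim\; (\beta_c-\beta)^{-\gamma} \quad \text{as } \beta\uparrow\beta_c,
\]
and mean-field behaviour means $\gamma=1$, with logarithmic corrections expected exactly at the upper critical dimension.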
The phase transition of the quantum Ising model is sharp
An analysis is presented of the phase transition of the quantum Ising model
with transverse field on the d-dimensional hypercubic lattice. It is shown that
there is a unique sharp transition. The value of the critical point is
calculated rigorously in one dimension. The first step is to express the
quantum Ising model in terms of a (continuous) classical Ising model in d+1
dimensions. A so-called `random-parity' representation is developed for the
latter model, similar to the random-current representation for the classical
Ising model on a discrete lattice. Certain differential inequalities are
proved. Integration of these inequalities yields the sharpness of the phase
transition, and also a number of other facts concerning the critical and
near-critical behaviour of the model under study.
Comment: Small changes. To appear in the Journal of Statistical Physics
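For context, the Hamiltonian of the transverse-field Ising model has the standard form (the coupling and field strengths $\lambda$, $\delta$ are generic symbols assumed here):
\[
  H \;=\; -\lambda \sum_{\langle x,y\rangle} \sigma^{(3)}_x \sigma^{(3)}_y \;-\; \delta \sum_{x} \sigma^{(1)}_x,
\]
where $\sigma^{(1)}$, $\sigma^{(3)}$ are Pauli matrices, the first sum runs over nearest-neighbour pairs of the hypercubic lattice, and the `continuous classical Ising model in $d+1$ dimensions' referred to above arises from a path-integral (space-time) expansion of $e^{-\beta H}$.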
A practical, bioinformatic workflow system for large data sets generated by next-generation sequencing
Transcriptomics (at the level of single cells, tissues and/or whole organisms) underpins many fields of biomedical science, from understanding basic cellular function in model organisms, to the elucidation of the biological events that govern the development and progression of human diseases, and the exploration of the mechanisms of survival, drug resistance and virulence of pathogens. Next-generation sequencing (NGS) technologies are contributing to a massive expansion of transcriptomics in all fields and are reducing the cost, time and performance barriers presented by conventional approaches. However, bioinformatic tools for the analysis of the sequence data sets produced by these technologies can be daunting to researchers with limited or no expertise in bioinformatics. Here, we constructed a semi-automated bioinformatic workflow system and critically evaluated it for the analysis and annotation of large-scale sequence data sets generated by NGS. We demonstrated its utility for the exploration of differences in the transcriptomes among various stages and both sexes of an economically important parasitic worm (Oesophagostomum dentatum), as well as for the prediction and prioritization of essential molecules (including GTPases, protein kinases and phosphatases) as novel drug target candidates. This workflow system provides a practical tool for the assembly, annotation and analysis of NGS data sets, including for researchers with limited bioinformatics expertise. The custom-written Perl, Python and Unix shell scripts used can be readily modified or adapted to suit many different applications. This system is now used routinely for the analysis of data sets from pathogens of major socio-economic importance and can, in principle, be applied to transcriptomic data sets from any organism.
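To illustrate the kind of small, easily modified script such a workflow relies on, here is a hypothetical Python sketch (not one of the authors' actual Perl/Python/shell scripts): it reads a FASTA file of assembled transcripts, discards sequences below a length threshold, and reports simple summary statistics. The file name, threshold and output conventions are assumptions made for the example.

    # Hypothetical length-filter step for assembled transcripts (illustrative only).
    import sys

    def read_fasta(path):
        """Yield (header, sequence) pairs from a FASTA file."""
        header, chunks = None, []
        with open(path) as handle:
            for line in handle:
                line = line.strip()
                if line.startswith(">"):
                    if header is not None:
                        yield header, "".join(chunks)
                    header, chunks = line[1:], []
                elif line:
                    chunks.append(line)
        if header is not None:
            yield header, "".join(chunks)

    def main(path, min_len=200):
        kept = []
        for header, seq in read_fasta(path):
            if len(seq) >= min_len:
                kept.append(len(seq))
                print(f">{header}\n{seq}")  # filtered FASTA to stdout
        if kept:
            mean_len = sum(kept) / len(kept)
            print(f"# kept {len(kept)} transcripts >= {min_len} bp; mean length {mean_len:.1f} bp",
                  file=sys.stderr)
        else:
            print("# no transcripts passed the length filter", file=sys.stderr)

    if __name__ == "__main__":
        main(sys.argv[1], int(sys.argv[2]) if len(sys.argv) > 2 else 200)

In a pipeline of this kind, such a filter would typically sit between the assembly and annotation stages and could equally be written in Perl or as a shell one-liner, which is the point of keeping the scripts small and modifiable.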
Inter-individual variations of human mercury exposure biomarkers: a cross-sectional assessment
BACKGROUND: Biomarkers for mercury (Hg) exposure have frequently been used to assess exposure and risk in various groups of the general population. We have evaluated the most frequently used biomarkers, and the physiology on which they are based, to explore the inter-individual variations and their suitability for exposure assessment. METHODS: Concentrations of total Hg (THg), inorganic Hg (IHg) and organic Hg (OHg, assumed to be methylmercury; MeHg) were determined in whole blood, red blood cells, plasma, hair and urine from Swedish men and women. An automated multiple-injection cold vapour atomic fluorescence spectrophotometry analytical system for Hg analysis was developed, which provided high sensitivity, accuracy and precision. The distribution of the various mercury forms in the different biological media was explored. RESULTS: About 90% of the mercury found in the red blood cells was in the form of MeHg, with small inter-individual variations, and part of the IHg found in the red blood cells could be attributed to demethylated MeHg. THg in plasma was associated with both IHg and MeHg, with large inter-individual variations in the distribution between red blood cells and plasma. THg in hair reflects MeHg exposure at all exposure levels, and not IHg exposure; the small fraction of IHg in hair most probably emanates from demethylated MeHg. The inter-individual variation in the blood-to-hair ratio was very large. The variability seemed to decrease with increasing OHg in blood, most probably because more frequent fish consumption brings blood concentrations closer to steady state. THg in urine reflected IHg exposure, even at very low IHg exposure levels. CONCLUSION: Using the THg concentration in whole blood as a proxy for MeHg exposure will overestimate the MeHg exposure to a degree that depends on the IHg exposure, which is why speciation of mercury forms is needed. THg in red blood cells and in hair are suitable proxies for MeHg exposure. Using the THg concentration in plasma as a measure of IHg exposure can lead to significant exposure misclassification. THg in urine is a suitable proxy for IHg exposure.
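A small, purely illustrative calculation of the overestimation described in the conclusion (the concentrations below are made-up numbers, not data from the study):

    # Using whole-blood THg as a proxy for MeHg overstates exposure by the IHg share.
    ohg_blood = 2.0   # organic Hg (MeHg) in whole blood, ug/L  -- hypothetical value
    ihg_blood = 0.5   # inorganic Hg in whole blood, ug/L       -- hypothetical value
    thg_blood = ohg_blood + ihg_blood
    overestimate = (thg_blood - ohg_blood) / ohg_blood
    print(f"THg = {thg_blood} ug/L; treating it as MeHg overestimates exposure by {overestimate:.0%}")

With these numbers the apparent `MeHg' exposure is 25% too high, and the bias grows with the relative IHg exposure, which is the argument for speciation.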
The kinematics of swimming and relocation jumps in copepod nauplii
Copepod nauplii move in a world dominated by viscosity. Their swimming-by-jumping propulsion mode, with alternating power and recovery strokes of three pairs of cephalic appendages, is fundamentally different from the way other microplankters move: protozoans move using cilia or flagella, and copepodites are equipped with highly specialized swimming legs. In some species the nauplius may also propel itself more slowly through the water by beating and rotating the appendages in a different, more complex pattern. We use high-speed video to describe jumping and swimming in nauplii of three species of pelagic copepods: Temora longicornis, Oithona davisae and Acartia tonsa. The kinematics of jumping is similar among the three species. Jumps result in a very erratic translation with no phase of passive coasting, and the nauplii move backwards during recovery strokes. This is due to poorly synchronized recovery strokes and a low beat frequency relative to the coasting time scale. For the same reason, the propulsion efficiency of the nauplii is low. Given the universality of the nauplius body plan, it is surprising that they appear to be inefficient jumpers, in contrast to the very efficient larger copepodites. A slow-swimming mode is displayed only by T. longicornis. In this mode, beating of the appendages creates a strong feeding current that is about 10 times faster than the average translation speed of the nauplius. The nauplius is thus essentially hovering when feeding, which results in a higher feeding efficiency than that of a nauplius cruising through the water.
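To make `dominated by viscosity' concrete, a back-of-the-envelope Reynolds number for a slowly swimming nauplius, using order-of-magnitude values assumed for the example (body length $L\approx 0.2\,\mathrm{mm}$, speed $U\approx 1\,\mathrm{mm\,s^{-1}}$, kinematic viscosity of water $\nu\approx 10^{-6}\,\mathrm{m^2\,s^{-1}}$):
\[
  \mathrm{Re} \;=\; \frac{U L}{\nu} \;\approx\; \frac{(10^{-3}\,\mathrm{m\,s^{-1}})(2\times10^{-4}\,\mathrm{m})}{10^{-6}\,\mathrm{m^2\,s^{-1}}} \;=\; 0.2,
\]
well below unity, so viscous forces dominate inertia and passive coasting after a power stroke is damped almost immediately.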