
    Nitrogen forms affect root structure and water uptake in the hybrid poplar

    The study analyses the effects of two forms of nitrogen fertilisation (nitrate and ammonium) on root structure and water uptake in two hybrid poplar (Populus maximowiczii x P. balsamifera) clones in a field experiment. Water uptake was studied with sap flow gauges on individual proximal roots, and coarse root structure was examined by excavating 18 whole-root systems. Finer roots were scanned and analysed for architecture. Nitrogen form did not affect coarse-root system development but had a significant effect on fine-root development: nitrate-treated trees showed higher fine:coarse root ratios and higher specific root lengths than control or ammonium-treated trees. These allocation differences affected the water uptake capacity of the plants, as reflected by the higher sap flow rate in the nitrate treatment. The diameter of proximal roots at the tree base was a good predictor of total root biomass and length, and the diameter of smaller lateral roots likewise predicted lateral root mass, length, surface area and number of tips. The effect of nitrogen fertilisation on fine-root structure thus translated into an effect on fine-root functioning, linking form (architecture) to function (water uptake).
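    The allometric prediction mentioned above (proximal-root diameter as a predictor of total root biomass and length) is typically captured by a power-law fit; a minimal sketch follows, assuming a log-log linear regression with purely hypothetical diameter and biomass values, not data from the study.

```python
# Minimal sketch: fitting an allometric power law (biomass = a * diameter^b)
# by linear regression on log-transformed data. All values are illustrative only.
import numpy as np

diameter_mm = np.array([8.0, 11.0, 15.0, 19.0, 24.0, 30.0])      # proximal root diameters (hypothetical)
biomass_g = np.array([12.0, 30.0, 70.0, 140.0, 260.0, 480.0])    # whole-root biomass (hypothetical)

# Log-log linear fit: log(biomass) = log(a) + b * log(diameter)
b, log_a = np.polyfit(np.log(diameter_mm), np.log(biomass_g), deg=1)
a = np.exp(log_a)
print(f"biomass ≈ {a:.3f} * diameter^{b:.2f}")

# Goodness of fit on the log scale
resid = np.log(biomass_g) - (log_a + b * np.log(diameter_mm))
r2 = 1 - resid.var() / np.log(biomass_g).var()
print(f"R^2 (log scale) = {r2:.3f}")
```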

    Luminescent properties and reduced dimensional behavior of hydrothermally prepared Y2SiO5:Ce nanophosphors

    Hydrothermally prepared nanophosphor Y2SiO5:Ce crystallizes in the P2₁/c structure rather than in the B2/b structure observed in bulk material. Relative to bulk powder, nanophosphors of ∼25-100 nm particle diameter exhibit redshifts of the photoluminescence excitation and emission spectra, reduced self-absorption, enhanced light output, and a medium-dependent radiative lifetime. The photoluminescence data are consistent with the reduced symmetry of the P2₁/c structure and are not necessarily related to the reduced dimensionality of the nanophosphor. In contrast, the medium-dependent lifetime and enhanced light output are attributed to nanoscale behavior; perturbation of the electric field at the Ce ion is responsible for the variable lifetime. © 2006 American Institute of Physics.

    Interactions between Connected Half-Sarcomeres Produce Emergent Mechanical Behavior in a Mathematical Model of Muscle

    Most reductionist theories of muscle attribute a fiber's mechanical properties to the scaled behavior of a single half-sarcomere. Mathematical models of this type can explain many of the known mechanical properties of muscle, but to reproduce the force response elicited by stretching a fast mammalian muscle fiber they have to incorporate a passive mechanical component that becomes ∼300% stiffer under activating conditions. The available experimental data suggest that titin filaments, the most likely source of this passive component, become at most ∼30% stiffer in saturating Ca2+ solutions. The work described in this manuscript used computer modeling to test an alternative systems theory that attributes the stretch response of a mammalian fiber to the composite behavior of a collection of half-sarcomeres. The principal finding was that the stretch response of a chemically permeabilized rabbit psoas fiber could be reproduced with a framework consisting of 300 half-sarcomeres arranged in 6 parallel myofibrils, without requiring titin filaments to stiffen in activating solutions. Ablation of inter-myofibrillar links in the computer simulations lowered isometric force values and reduced energy absorption during a stretch. This computed behavior mimics effects previously observed in experiments using muscles from desmin-deficient mice, in which the connections between Z-disks in adjacent myofibrils are presumably compromised. The current simulations suggest that muscle fibers exhibit emergent properties that reflect interactions between half-sarcomeres and are not properties of a single half-sarcomere in isolation. Full quantitative understanding of a fiber's mechanical properties therefore likely requires detailed analysis of the complete fiber system and cannot be achieved by focusing solely on a single half-sarcomere.
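    To make the "collection of half-sarcomeres" architecture concrete, the toy sketch below treats each half-sarcomere as a simple spring with slightly heterogeneous stiffness, chains them in series within myofibrils, and sums the myofibrils in parallel. It illustrates only the series/parallel composition assumed above, not the cross-bridge model used in the paper.

```python
# Toy illustration of a fibre built from half-sarcomeres: myofibrils are chains
# of half-sarcomeres in series, and the fibre is several chains in parallel.
import numpy as np

rng = np.random.default_rng(0)

N_HS_PER_MYOFIBRIL = 50   # half-sarcomeres in series per myofibril (300 total / 6 myofibrils)
N_MYOFIBRILS = 6          # myofibrils in parallel, as in the abstract's framework

# Heterogeneous half-sarcomere stiffness values (arbitrary units)
stiffness = rng.normal(loc=1.0, scale=0.1, size=(N_MYOFIBRILS, N_HS_PER_MYOFIBRIL))

def fibre_force(total_stretch):
    """Force carried by the fibre for a given whole-fibre stretch.

    Springs in series share one force, so each chain's effective stiffness is the
    reciprocal of its summed compliances; parallel myofibrils sum their forces.
    """
    k_chain = 1.0 / np.sum(1.0 / stiffness, axis=1)   # effective stiffness of each myofibril
    return float(np.sum(k_chain * total_stretch))     # forces add across parallel myofibrils

for stretch in (0.5, 1.0, 2.0):
    print(f"stretch = {stretch:4.1f}  ->  force = {fibre_force(stretch):.3f}")
```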

    Phosphorylated c-Src in the nucleus is associated with improved patient outcome in ER-positive breast cancer

    Elevated c-Src protein expression has been shown in breast cancer, and in vitro evidence suggests a role in endocrine resistance. To investigate whether c-Src is involved in endocrine resistance, we examined the expression of both total and activated c-Src in human breast cancer specimens from a cohort of oestrogen receptor (ER)-positive, tamoxifen-treated breast cancer patients. Tissue microarray technology was employed to analyse 262 tumour specimens taken before tamoxifen treatment, with immunohistochemistry performed using total c-Src and activated c-Src antibodies. Kaplan–Meier survival curves were constructed and log-rank tests performed. A high level of nuclear activated Src was significantly associated with improved overall survival (P=0.047) and lower recurrence rates on tamoxifen (P=0.02); improved patient outcome was seen only with activated Src in the nucleus. Nuclear activated Src expression was significantly associated with node-negative disease and a lower Nottingham Prognostic Index (P<0.05). On subgroup analysis, only ER-positive/progesterone receptor (PgR)-positive tumours were associated with improved survival (P=0.004). This shows that c-Src activity is increased in breast cancer and that activated Src within the nucleus of ER-positive tumours predicts an improved outcome. In ER/PgR-positive disease, activated Src kinase does not appear to be involved in de novo endocrine resistance. Further study is required in ER-negative breast cancer, as this may represent a cohort in which activated Src is associated with poor outcome.
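    As an illustration of the survival analysis described above (Kaplan–Meier curves with a log-rank comparison of two marker-defined groups), here is a minimal sketch using the lifelines package on synthetic follow-up data; the group labels, times and event indicators are invented and do not reproduce the cohort.

```python
# Sketch of a Kaplan-Meier / log-rank workflow on synthetic survival data.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)

# Synthetic follow-up times (months) and event indicators (1 = death/recurrence)
t_high = rng.exponential(scale=120, size=80)   # "high nuclear activated Src" group (hypothetical)
t_low = rng.exponential(scale=80, size=80)     # "low" group (hypothetical)
e_high = rng.integers(0, 2, size=80)
e_low = rng.integers(0, 2, size=80)

# Kaplan-Meier estimate for one group
kmf = KaplanMeierFitter()
kmf.fit(t_high, event_observed=e_high, label="high nuclear pSrc")
print("median survival (high group):", kmf.median_survival_time_)

# Log-rank comparison of the two groups
result = logrank_test(t_high, t_low, event_observed_A=e_high, event_observed_B=e_low)
print(f"log-rank p-value: {result.p_value:.3f}")
```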

    Memory consolidation in the cerebellar cortex

    Several forms of learning, including classical conditioning of the eyeblink, depend upon the cerebellum. In examining mechanisms of eyeblink conditioning in rabbits, reversible inactivations of the control circuitry have begun to dissociate aspects of cerebellar cortical and nuclear function in memory consolidation. It was previously shown that post-training cerebellar cortical, but not nuclear, inactivations with the GABA(A) agonist muscimol prevented consolidation, but these findings left open the question of how final memory storage is partitioned across cortical and nuclear levels. Memory consolidation might be essentially cortical and directly disturbed by the action of muscimol, or it might be nuclear and sensitive to the raised excitability of nuclear neurons following the loss of cortical inhibition. To resolve this question, we simultaneously inactivated cerebellar cortical lobule HVI and the anterior interpositus nucleus of rabbits during the post-training period, thereby protecting the nuclei from the disinhibitory effects of cortical inactivation. Consolidation was impaired by these simultaneous inactivations. Because direct application of muscimol to the nuclei alone has no impact upon consolidation, we conclude that post-training consolidation processes and memory storage for eyeblink conditioning have critical cerebellar cortical components. The findings are consistent with a recent model suggesting that the distribution of learning-related plasticity across cortical and nuclear levels is task-dependent: there can be transfer to nuclear or brainstem levels for the control of high-frequency responses, but learning with lower-frequency response components, such as eyeblink conditioning, remains mainly dependent upon cortical memory storage.

    Dispelling urban myths about default uncertainty factors in chemical risk assessment - Sufficient protection against mixture effects?

    Assessing the detrimental health effects of chemicals requires the extrapolation of experimental data in animals to human populations. This is achieved by applying a default uncertainty factor of 100 to doses not found to be associated with observable effects in laboratory animals. It is commonly assumed that the toxicokinetic and toxicodynamic sub-components of this default uncertainty factor represent worst-case scenarios and that the multiplication of those components yields conservative estimates of safe levels for humans. It is sometimes claimed that this conservatism also offers adequate protection from mixture effects. By analysing the evolution of uncertainty factors from a historical perspective, we show that the default factor and its sub-components are intended to represent adequate rather than worst-case scenarios; the intention of using assessment factors for mixture effects was abandoned thirty years ago. It is also often ignored that the conservatism (or otherwise) of uncertainty factors can only be considered in relation to a defined level of protection. A protection equivalent to an effect magnitude of 0.001-0.0001% over background incidence is generally considered acceptable, but it is impossible to say whether this level of protection is in fact realised with the tolerable doses derived by employing uncertainty factors. Accordingly, it is difficult to assess whether uncertainty factors overestimate or underestimate the sensitivity differences in human populations. It is also often not appreciated that the outcome of probabilistic approaches to the multiplication of sub-factors depends on the choice of probability distributions. The idea that default uncertainty factors are overly conservative worst-case scenarios which can both account for the lack of statistical power in animal experiments and protect against potential mixture effects is therefore ill-founded. We contend that precautionary regulation should provide an incentive to generate better data and recommend adopting a pragmatic, but scientifically better founded, approach to mixture risk assessment. © 2013 Martin et al.; licensee BioMed Central Ltd.
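    The point that probabilistic multiplication of sub-factors depends on the chosen distributions can be illustrated with a small Monte Carlo sketch: the same nominal 10 × 10 default yields different upper percentiles when the toxicokinetic and toxicodynamic sub-factors are sampled from lognormal versus uniform distributions. All distribution parameters below are hypothetical.

```python
# Monte Carlo sketch: the combined uncertainty factor depends on the assumed
# sub-factor distributions, even when both are centred near the nominal value of 10.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Lognormal sub-factors with geometric mean near 10 (hypothetical spread)
tk_lognorm = rng.lognormal(mean=np.log(10), sigma=0.4, size=n)  # toxicokinetic
td_lognorm = rng.lognormal(mean=np.log(10), sigma=0.4, size=n)  # toxicodynamic
combined_lognorm = tk_lognorm * td_lognorm

# Uniform sub-factors spanning a comparable nominal range (hypothetical)
tk_unif = rng.uniform(3, 17, size=n)
td_unif = rng.uniform(3, 17, size=n)
combined_unif = tk_unif * td_unif

for name, combined in [("lognormal", combined_lognorm), ("uniform", combined_unif)]:
    print(f"{name:9s}: median = {np.median(combined):6.1f},  "
          f"95th percentile = {np.percentile(combined, 95):6.1f}")
```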

    Predicting violent infractions in a Swiss state penitentiary: A replication study of the PCL-R in a population of sex and violent offenders

    BACKGROUND: Research conducted with forensic psychiatric patients has found moderate correlations between violence in institutions and psychopathy. It is unclear, though, whether the PCL-R is an accurate instrument for predicting aggressive behavior in prisons; results seem to indicate that the instrument is better suited to predicting verbal rather than physical aggression by prison inmates. METHODS: PCL-R scores were assessed for a sample of 113 imprisoned sex and violent offenders in Switzerland. Logistic regression analyses were used to estimate physical and verbal aggression as a function of the PCL-R sum score, and stratified analyses were additionally conducted for Factors 1 and 2. Infractions were analyzed as to their motives and consequences. RESULTS: The mean PCL-R score was 12 points. Neither the relationship between physical aggression and the PCL-R sum score, nor the relationships between physical aggression and either of the two PCL-R factors, were significant. Both the sum score and Factor 1 predicted the occurrence of verbal aggression (AUC=0.70 and 0.69), while Factor 2 did not. CONCLUSION: Possible explanations are discussed for the weak relationship between PCL-R scores and physically aggressive behavior during imprisonment. Some authors have suggested that a low base rate of violent infractions may explain the non-significant relation between PCL-R score and violence; the base rate in this study, however, at 27%, was not low. It is proposed that the distinction between reactive and instrumental motives for institutional violence must be considered when examining the usefulness of the PCL-R in predicting physical aggression in prison.
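    A minimal sketch of the analysis pattern described (logistic regression of a binary aggression outcome on the PCL-R sum score, summarised by the AUC) is shown below using scikit-learn on simulated scores; the coefficients and data are invented and do not reproduce the study's estimates.

```python
# Sketch: logistic regression of a binary outcome on a single predictor,
# summarised by the area under the ROC curve (AUC). Data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n = 113                                         # sample size mentioned in the abstract

pcl_r = rng.integers(0, 35, size=n).astype(float)      # simulated PCL-R sum scores
logit = -2.5 + 0.08 * pcl_r                            # weak, hypothetical association
verbal_aggression = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = LogisticRegression().fit(pcl_r.reshape(-1, 1), verbal_aggression)
pred = model.predict_proba(pcl_r.reshape(-1, 1))[:, 1]
print(f"AUC = {roc_auc_score(verbal_aggression, pred):.2f}")
```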

    Digestibility of resistant starch containing preparations using two in vitro models

    BACKGROUND: Resistant starch (RS) is known for potential health benefits in the human colon. To investigate these positive effects it is important to be able to predict the amount, and the structure, of starch reaching the large intestine. AIM OF THE STUDY: The aim of this study was to compare two different in vitro models simulating the digestibility of two RS-containing preparations. METHODS: The substrates, high-amylose maize (HAM) containing RS type 2 and retrograded long-chain tapioca maltodextrins (RTmd) containing RS type 3, were digested in vitro using both a batch and a dynamic model. Both preparations were characterized before and after digestion by X-ray and DSC analysis and by measuring their total starch, RS and protein contents. RESULTS: With both digestion models, 60-61 g/100 g of RTmd was indigestible, which agrees well with the 59 g/100 g found in vivo after feeding RTmd to ileostomy patients. In contrast, dynamic and batch in vitro digestion of HAM led to 58 g/100 g and 66 g/100 g RS recovery, respectively; the degradability of HAM is thus more affected by differences in experimental parameters than that of RTmd. The main differences between the two in vitro digestion methods are the enzyme preparations used, the incubation times and the mechanical stress exerted on the substrate. For both preparations, however, the dynamically digested fractions showed lower amounts of analytically determined RS and lower crystallinity. CONCLUSIONS: The two in vitro digestion methods attacked the starch molecules differently, which influenced the starch digestibility of HAM but not of RTmd.

    Variational Methods for Biomolecular Modeling

    Structure, function and dynamics of many biomolecular systems can be characterized by the energetic variational principle and the corresponding systems of partial differential equations (PDEs). This principle allows us to focus on the identification of essential energetic components, the optimal parametrization of energies, and the efficient computational implementation of energy variation or minimization. Because complex biomolecular systems are structurally non-uniform and their interactions occur through contact interfaces, their free energies are associated with various interfaces, such as the solute-solvent interface, molecular binding interfaces, lipid domain interfaces, and membrane surfaces. This fact motivates the inclusion of interface geometry, in particular its curvatures, in the parametrization of free energies. Applications of such interface-geometry-based energetic variational principles are illustrated through three concrete topics: the multiscale modeling of biomolecular electrostatics and solvation that includes the curvature energy of the molecular surface, the formation of microdomains on lipid membranes due to the geometric and molecular mechanics at the lipid interface, and the mean-curvature-driven protein localization on membrane surfaces. By further representing the interface implicitly with a phase field function over the entire domain, one can simulate the dynamics of the interface and the corresponding energy variation by evolving the phase field function, achieving a significant reduction in the number of degrees of freedom and in computational complexity. Strategies for improving the efficiency of computational implementations and for extending the applications to coarse-grained or multiscale molecular simulations are outlined.
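    A generic sketch of the kind of phase-field energetic variational formulation described above is given below, using a standard Ginzburg-Landau-type interface energy and its gradient-flow dynamics; the notation (gamma, epsilon, tau, E_other) is generic and is not the paper's.

```latex
% Generic phase-field energetic variational sketch (Ginzburg--Landau form;
% notation is generic, not the paper's).
\begin{align}
  % Interface energy of the phase field \phi, with surface tension \gamma and
  % interface width \epsilon, plus any remaining free-energy terms E_{\mathrm{other}}[\phi]:
  E[\phi] &= \gamma \int_{\Omega} \left( \frac{\epsilon}{2}\,|\nabla \phi|^{2}
             + \frac{1}{\epsilon}\,\phi^{2}(1-\phi)^{2} \right) \mathrm{d}\mathbf{x}
             + E_{\mathrm{other}}[\phi], \\
  % Interface dynamics as a gradient flow of the total energy,
  % with relaxation parameter \tau:
  \tau\,\frac{\partial \phi}{\partial t} &= -\,\frac{\delta E[\phi]}{\delta \phi}
    = \gamma \left( \epsilon\,\Delta \phi
      - \frac{2}{\epsilon}\,\phi(1-\phi)(1-2\phi) \right)
      - \frac{\delta E_{\mathrm{other}}}{\delta \phi}.
\end{align}
```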