
    Changes in and predictors of length of stay in hospital after surgery for breast cancer between 1997/98 and 2004/05 in two regions of England: a population-based study

    BACKGROUND Decreases in length of stay (LOS) in hospital after breast cancer surgery can be partly attributed to the change to less radical surgery, but many other factors are operating at the patient, surgeon and hospital levels. This study aimed to describe the changes in and predictors of length of stay (LOS) in hospital after surgery for breast cancer between 1997/98 and 2004/05 in two regions of England. METHODS Cases of female invasive breast cancer diagnosed in two English cancer registry regions were linked to Hospital Episode Statistics data for the period 1st April 1997 to 31st March 2005. A subset of records where women underwent mastectomy or breast conserving surgery (BCS) was extracted (n = 44,877). Variations in LOS over the study period were investigated. A multilevel model with patients clustered within surgical teams and NHS Trusts was used to examine associations between LOS and a range of factors. RESULTS Over the study period the proportion of women having a mastectomy reduced from 58% to 52%. The proportion varied from 14% to 80% according to NHS Trust. LOS decreased by 21% from 1997/98 to 2004/05 (LOSratio = 0.79, 95%CI 0.77-0.80). BCS was associated with 33% shorter hospital stays compared to mastectomy (LOSratio = 0.67, 95%CI 0.66-0.68). Older age, advanced disease, presence of comorbidities, lymph node excision and reconstructive surgery were associated with increased LOS. Significant variation remained amongst Trusts and surgical teams. CONCLUSION The number of days spent in hospital after breast cancer surgery has continued to decline for several decades. The change from mastectomy to BCS accounts for only 9% of the overall decrease in LOS. Other explanations include the adoption of new techniques and practices, such as sentinel lymph node biopsy and early discharge. This study has identified wide variation in practice with substantial cost implications for the NHS. Further work is required to explain this variation.
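    For readers unfamiliar with how LOS ratios arise from such models, the sketch below shows one way a multilevel model of log-transformed LOS with a random intercept per NHS Trust could be specified in Python with statsmodels. It uses synthetic data and hypothetical column names, and simplifies the paper's structure (which also nests surgical teams within Trusts); exponentiated fixed-effect coefficients are then read as LOS ratios.

```python
# Minimal sketch, not the authors' analysis: a multilevel model of log(LOS) with a
# random intercept for NHS Trust, fitted to synthetic data with hypothetical columns.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "los_days": rng.gamma(shape=2.0, scale=2.0, size=n) + 1.0,  # synthetic stays (days)
    "bcs": rng.integers(0, 2, size=n),        # 1 = breast conserving surgery, 0 = mastectomy
    "age_group": rng.integers(0, 4, size=n),  # coarse age bands
    "year": rng.integers(1997, 2005, size=n),
    "trust": rng.integers(0, 20, size=n),     # 20 hypothetical NHS Trusts
})

# Random intercept for Trust only; the paper additionally clusters patients within
# surgical teams. Exponentiated fixed-effect coefficients are LOS ratios, e.g.
# exp(beta_bcs) ~= 0.67 would correspond to 33% shorter stays for BCS vs mastectomy.
model = smf.mixedlm("np.log(los_days) ~ bcs + C(age_group) + year", data=df, groups=df["trust"])
result = model.fit()
print(np.exp(result.params["bcs"]))  # LOS ratio for BCS (not meaningful here: data are synthetic)
```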

    The physiological effects of hypobaric hypoxia versus normobaric hypoxia: a systematic review of crossover trials

    Much hypoxia research has been carried out at high altitude in a hypobaric hypoxia (HH) environment. Many research teams seek to replicate high-altitude conditions at lower altitudes in either hypobaric hypoxic or normobaric hypoxic (NH) laboratory conditions. Implicit in this approach is the assumption that the only relevant condition that differs between these settings is the partial pressure of oxygen (PO2), which is commonly presumed to be the principal physiological stimulus to adaptation at high altitude. This systematic review is the first to present an overview of the currently available literature regarding crossover studies relating to the different effects of HH and NH on human physiology. After applying our inclusion and exclusion criteria, 13 studies were deemed eligible for inclusion. Several studies reported a number of variables (e.g. minute ventilation and NO levels) that were different between the two conditions, lending support to the notion that a true physiological difference is indeed present. However, the presence of confounding factors such as time spent in hypoxia, temperature, and humidity, and the limited statistical power due to small sample sizes, limit the conclusions that can be drawn from these findings. Standardisation of the study methods and reporting may aid interpretation of future studies and thereby improve the quality of data in this area. This is important for improving the quality of the data used to understand hypoxia tolerance, both at altitude and in the clinical setting.
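    As a point of reference for the PO2 comparison discussed above, the snippet below works through the standard inspired-oxygen calculation, PiO2 = FiO2 x (Pb - PH2O). The barometric pressure used for the hypobaric case is an approximate value for roughly 4,000 m and is an assumption for illustration, not a figure taken from the reviewed trials.

```python
# Illustrative only: matching inspired PO2 between hypobaric and normobaric hypoxia.
# Pressures in mmHg; the altitude/pressure values are approximate assumptions.
PH2O = 47.0          # water vapour pressure at body temperature
FIO2_AIR = 0.2093    # oxygen fraction of dry air

def inspired_po2(fio2: float, barometric: float) -> float:
    """PiO2 = FiO2 * (Pb - PH2O)."""
    return fio2 * (barometric - PH2O)

# Hypobaric hypoxia: ambient air at reduced pressure (~4,000 m, ~462 mmHg assumed)
pio2_hh = inspired_po2(FIO2_AIR, 462.0)

# Normobaric hypoxia: sea-level pressure, reduced oxygen fraction
fio2_nh = pio2_hh / (760.0 - PH2O)   # FiO2 needed to match the same PiO2
print(f"PiO2 (HH) = {pio2_hh:.1f} mmHg, matching NH FiO2 = {fio2_nh:.3f}")
```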

    Evaluating the role of quality assessment of primary studies in systematic reviews of cancer practice guidelines

    BACKGROUND: The purpose of this study was to evaluate the role of study quality assessment of primary studies in cancer practice guidelines. METHODS: Reliable and valid study quality assessment scales were sought and applied to published reports of trials included in systematic reviews of cancer guidelines. Sensitivity analyses were performed to evaluate the relationship between quality scores and pooled odds ratios (OR) for mortality and need for blood transfusion. RESULTS: Whether trials were classified as high or low quality depended on the scale used to assess them. Although the results of the sensitivity analyses found some variation in the ORs observed, the confidence intervals (CIs) of the pooled effects from each of the analyses of high quality trials overlapped with the CI of the pooled odds of all trials. Quality score was not predictive of the pooled ORs studied here. CONCLUSIONS: Had sensitivity analyses based on study quality been conducted prospectively, it is highly unlikely that different conclusions would have been found or that different clinical recommendations would have emerged in the guidelines.
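    To make the sensitivity analysis concrete, the sketch below pools odds ratios with a standard fixed-effect (inverse-variance) method and repeats the pooling on a "high quality only" subset; the 2x2 trial counts are invented for illustration and are not data from the reviewed guidelines.

```python
# Minimal sketch, not the authors' analysis: fixed-effect (inverse-variance) pooling of
# odds ratios, repeated on a high-quality subset as a sensitivity analysis.
# The trial counts below are invented for illustration.
import numpy as np

# Each trial: (events_treated, n_treated, events_control, n_control, high_quality)
trials = [
    (12, 100, 20, 100, True),
    (30, 250, 45, 250, False),
    (8,  80,  15, 80,  True),
    (25, 200, 33, 200, False),
]

def pooled_or(rows):
    log_ors, weights = [], []
    for a, n1, c, n2, _ in rows:
        b, d = n1 - a, n2 - c                          # non-events in each arm
        log_ors.append(np.log((a * d) / (b * c)))      # log odds ratio
        weights.append(1.0 / (1/a + 1/b + 1/c + 1/d))  # inverse of var(log OR)
    log_ors, weights = np.array(log_ors), np.array(weights)
    pooled = np.sum(weights * log_ors) / np.sum(weights)
    se = 1.0 / np.sqrt(np.sum(weights))
    return np.exp(pooled), (np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se))

or_all, ci_all = pooled_or(trials)
or_hq, ci_hq = pooled_or([t for t in trials if t[4]])
print(f"All trials:        OR={or_all:.2f}, 95% CI ({ci_all[0]:.2f}, {ci_all[1]:.2f})")
print(f"High quality only: OR={or_hq:.2f}, 95% CI ({ci_hq[0]:.2f}, {ci_hq[1]:.2f})")
```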

    Modelling the nucleon wave function from soft and hard processes

    Current light-cone wave functions for the nucleon are unsatisfactory since they are in conflict with data on the nucleon's Dirac form factor at large momentum transfer. Therefore, we attempt a determination of a new wave function respecting theoretical ideas on its parameterization and satisfying the following constraints: it should provide a soft Feynman contribution to the proton's form factor in agreement with data; it should be consistent with current parameterizations of the valence quark distribution functions; and, lastly, it should provide an acceptable value for the J/\psi \to N\bar{N} decay width. The latter process is calculated within the modified perturbative approach to hard exclusive reactions. A simultaneous fit to the three sets of data leads to a wave function whose x-dependent part, the distribution amplitude, shows the same type of asymmetry as those distribution amplitudes constrained by QCD sum rules. The asymmetry is, however, much more moderate than in those amplitudes. Our distribution amplitude resembles the asymptotic one in shape, but the position of the maximum is somewhat shifted. Comment: 32 pages, RevTeX + PS file with 5 figures, uu-encoded and compressed.
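    For orientation, distribution amplitudes of the kind fitted here are usually compared with the asymptotic form; the expressions below are a generic illustration of that convention (the coefficients c_i are placeholders), not the paper's actual parameterization.

```latex
% Generic illustration, not the fitted wave function of the paper.
% Asymptotic nucleon distribution amplitude in the momentum fractions x_1, x_2, x_3:
\phi_{\mathrm{as}}(x_1,x_2,x_3) = 120\, x_1 x_2 x_3,
\qquad
\int [\mathrm{d}x]\,\phi_{\mathrm{as}} = 1,
\qquad
[\mathrm{d}x] \equiv \mathrm{d}x_1\,\mathrm{d}x_2\,\mathrm{d}x_3\,\delta(1 - x_1 - x_2 - x_3).

% Asymmetric DAs (e.g. those constrained by QCD sum rules) are commonly written as the
% asymptotic form multiplied by a low-order polynomial in the x_i, schematically:
\phi(x_1,x_2,x_3) = \phi_{\mathrm{as}}(x_1,x_2,x_3)
  \left[\, 1 + c_1\,(x_1 - x_3) + c_2\,\bigl(x_1^2 + x_3^2\bigr) + \dots \right].
```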

    The emerging structure of the Extended Evolutionary Synthesis: where does Evo-Devo fit in?

    The Extended Evolutionary Synthesis (EES) debate is gaining ground in contemporary evolutionary biology. In parallel, a number of philosophical standpoints have emerged in an attempt to clarify what exactly is represented by the EES. For Massimo Pigliucci, we are in the wake of the newest instantiation of a persisting Kuhnian paradigm; in contrast, Telmo Pievani has contended that the transition to an EES could be best represented as a progressive reformation of a prior Lakatosian scientific research program, with the extension of its Neo-Darwinian core and the addition of a brand-new protective belt of assumptions and auxiliary hypotheses. Here, we argue that those philosophical vantage points are not the only ways to interpret what current proposals to ‘extend’ the Modern Synthesis-derived ‘standard evolutionary theory’ (SET) entail in terms of theoretical change in evolutionary biology. We specifically propose the image of the emergent EES as a vast network of models and interwoven representations that, instantiated in diverse practices, are connected and related in multiple ways. Under that assumption, the EES could be articulated around a paraconsistent network of evolutionary theories (including some elements of the SET), as well as models, practices and representation systems of contemporary evolutionary biology, with edges and nodes that change their position and centrality as a consequence of the co-construction and stabilization of facts and historical discussions revolving around the epistemic goals of this area of the life sciences. We then critically examine the purported structure of the EES—published by Laland and collaborators in 2015—in light of our own network-based proposal. Finally, we consider which epistemic units of Evo-Devo are present or still missing from the EES, in preparation for further analyses of the topic of explanatory integration in this conceptual framework.

    Solar-type dynamo behaviour in fully convective stars without a tachocline

    In solar-type stars (with radiative cores and convective envelopes), the magnetic field powers star spots, flares and other solar phenomena, as well as chromospheric and coronal emission at ultraviolet to X-ray wavelengths. The dynamo responsible for generating the field depends on the shearing of internal magnetic fields by differential rotation. The shearing has long been thought to take place in a boundary layer known as the tachocline between the radiative core and the convective envelope. Fully convective stars do not have a tachocline and their dynamo mechanism is expected to be very different, although its exact form and physical dependencies are not known. Here we report observations of four fully convective stars whose X-ray emission correlates with their rotation periods in the same way as in Sun-like stars. As the X-ray activity–rotation relationship is a well-established proxy for the behaviour of the magnetic dynamo, these results imply that fully convective stars also operate a solar-type dynamo. The lack of a tachocline in fully convective stars therefore suggests that this is not a critical ingredient in the solar dynamo and supports models in which the dynamo originates throughout the convection zone. Comment: 6 pages, 1 figure. Accepted for publication in Nature (28 July 2016). Author's version, including Methods.
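    The activity–rotation proxy referred to above is commonly summarised as a saturated level of L_X/L_bol at fast rotation followed by a power-law decline with Rossby number. The snippet below encodes that broken power law with typical literature values (saturation near 10^-3, slope of about 2), which are assumptions for illustration rather than the relation fitted in this paper.

```python
# Illustrative sketch of the X-ray activity-rotation relation used as a dynamo proxy:
# saturated emission at fast rotation, then a power-law decline with Rossby number.
# Parameter values are typical of the literature, not those derived in the paper.
def lx_over_lbol(rossby, ro_sat=0.1, sat_level=1e-3, beta=2.0):
    """Broken power law: L_X/L_bol vs Rossby number (rotation period / convective turnover time)."""
    if rossby <= ro_sat:
        return sat_level                                  # saturated regime (fast rotators)
    return sat_level * (rossby / ro_sat) ** (-beta)       # unsaturated power-law decline

for ro in (0.01, 0.1, 0.5, 2.0):
    print(f"Ro = {ro:4.2f}  ->  L_X/L_bol ~ {lx_over_lbol(ro):.2e}")
```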

    Evolution of Robustness to Noise and Mutation in Gene Expression Dynamics

    The phenotype of a biological system needs to be robust against mutation in order to be sustained across generations. On the other hand, the phenotype of an individual also needs to be robust against fluctuations of both internal and external origin that are encountered during growth and development. Is there a relationship between these two types of robustness, one during a single generation and the other during evolution? Could stochasticity in gene expression have any relevance to the evolution of these types of robustness? Robustness can be defined by the sharpness of the distribution of phenotypes: the variance of the phenotype distribution due to genetic variation gives a measure of `genetic robustness', while that of isogenic individuals gives a measure of `developmental robustness'. Through simulations of a simple stochastic gene expression network that undergoes mutation and selection, we show that in order for the network to acquire both types of robustness, the phenotypic variance induced by mutations must be smaller than that observed in an isogenic population. As the latter originates from noise in gene expression, this signifies that genetic robustness evolves only when the noise strength in gene expression is larger than some threshold. In such a case, the two variances decrease throughout the evolutionary time course, indicating an increase in robustness. The results reveal how the noise that cells encounter during growth and development shapes networks' robustness to stochasticity in gene expression, which in turn shapes networks' robustness to mutation. The condition for the evolution of robustness, as well as the relationship between genetic and developmental robustness, is derived through the variance of phenotypic fluctuations, which are measurable experimentally. Comment: 25 pages.
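    The two variances compared in this study can be illustrated with a toy genotype-phenotype map (not the paper's gene-network model): the sketch below estimates the isogenic (developmental) variance from expression noise alone and the mutational (genetic) variance across one-step mutants, and checks the condition V_g < V_ip described above.

```python
# Toy sketch, not the paper's model, of the two variances discussed above:
# V_ip = phenotypic variance among isogenic individuals (expression noise only),
# V_g  = phenotypic variance induced by mutations (across one-step mutants).
import numpy as np

rng = np.random.default_rng(1)
N_GENES, NOISE, MUT_SIZE = 10, 0.3, 0.1

def phenotype(genotype, noise=NOISE, reps=200):
    """A simple nonlinear genotype-phenotype map with additive expression noise."""
    base = np.tanh(genotype.sum())
    return base + noise * rng.standard_normal(reps)

genotype = rng.standard_normal(N_GENES)

# Developmental robustness: spread among isogenic individuals.
v_ip = phenotype(genotype).var()

# Genetic robustness: spread of the mean phenotype across random one-gene mutants.
mutant_means = []
for _ in range(200):
    mutant = genotype.copy()
    mutant[rng.integers(N_GENES)] += MUT_SIZE * rng.standard_normal()
    mutant_means.append(phenotype(mutant).mean())
v_g = np.var(mutant_means)

print(f"V_ip = {v_ip:.4f}, V_g = {v_g:.4f}, V_g < V_ip: {v_g < v_ip}")
```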

    How large should whales be?

    The evolution and distribution of species body sizes for terrestrial mammals is well-explained by a macroevolutionary tradeoff between short-term selective advantages and long-term extinction risks from increased species body size, unfolding above the 2g minimum size induced by thermoregulation in air. Here, we consider whether this same tradeoff, formalized as a constrained convection-reaction-diffusion system, can also explain the sizes of fully aquatic mammals, which have not previously been considered. By replacing the terrestrial minimum with a pelagic one, at roughly 7000g, the terrestrial mammal tradeoff model accurately predicts, with no tunable parameters, the observed body masses of all extant cetacean species, including the 175,000,000g Blue Whale. This strong agreement between theory and data suggests that a universal macroevolutionary tradeoff governs body size evolution for all mammals, regardless of their habitat. The dramatic sizes of cetaceans can thus be attributed mainly to the increased convective heat loss in water, which shifts the species size distribution upward and pushes its right tail into ranges inaccessible to terrestrial mammals. Under this macroevolutionary tradeoff, the largest expected species occurs where the rate at which smaller-bodied species move up into large-bodied niches approximately equals the rate at which extinction removes them. Comment: 7 pages, 3 figures, 2 data tables.
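    To illustrate the kind of tradeoff model invoked here (though not its exact formulation or calibration), the sketch below runs a schematic cladogenetic diffusion: descendant masses take slightly upward-biased multiplicative steps above a hard pelagic minimum of about 7000 g, while extinction risk rises with size, so the right tail of the size distribution settles where upward diffusion and extinction balance. All parameter values are assumptions for illustration.

```python
# Schematic sketch with assumed parameters; not the paper's constrained
# convection-reaction-diffusion model or its calibration.
import numpy as np

rng = np.random.default_rng(2)
M_MIN = 7000.0      # grams; pelagic minimum used in place of the terrestrial ~2 g
STEPS = 200
MAX_SPECIES = 2000  # crude cap to keep the sketch fast

masses = [10.0 * M_MIN]                          # start from a single modest lineage
for _ in range(STEPS):
    descendants = []
    for m in masses:
        # Extinction risk grows with size (illustrative functional form).
        p_ext = min(1.0, 0.05 + 0.05 * np.log10(m / M_MIN))
        if rng.random() < p_ext:
            continue
        # Two descendants per surviving species: multiplicative step with a slight
        # upward bias (short-term advantage of larger size), reflected at M_MIN.
        for _ in range(2):
            child = m * np.exp(rng.normal(loc=0.05, scale=0.2))
            descendants.append(max(child, M_MIN))
    masses = descendants[:MAX_SPECIES] or [10.0 * M_MIN]   # restart if the clade dies out
masses = np.array(masses)

print(f"{len(masses)} species, median {np.median(masses):.3g} g, largest {masses.max():.3g} g")
```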

    Uncovering Blind Spots in Urban Carbon Management: The Role of Consumption-Based Carbon Accounting in Bristol, UK

    The rapid urbanisation of the twentieth century, along with the spread of high-consumption urban lifestyles, has led to cities becoming the dominant drivers of global anthropogenic greenhouse gas emissions. Reducing these impacts is crucial, but production-based frameworks of carbon measurement and mitigation—which encompass only a limited part of cities’ carbon footprints—are much more developed and widely applied than consumption-based approaches that consider the embedded carbon effectively imported into a city. Frequently, therefore, cities are left blind to the importance of their wider consumption-related climate impacts, while at the same time lacking effective tools to reduce them. To explore the relevance of these issues, we implement methodologies for assessing production- and consumption-based emissions at the city level and estimate the associated emissions trajectories for Bristol, a major UK city, from 2000 to 2035. We develop mitigation scenarios targeted at reducing the former, considering potential energy, carbon and financial savings in each case. We then compare these mitigation potentials with local government ambitions and Bristol’s consumption-based emissions trajectory. Our results suggest that the city’s consumption-based emissions are three times the production-based emissions, largely due to the impacts of imported food and drink. We find that low-carbon investments of circa £3 billion could reduce production-based emissions by 25% in 2035. However, we also find that this represents <10% of Bristol’s forecast consumption-based emissions for 2035 and is approximately equal to the mitigation achievable by eliminating the city’s current levels of food waste. Such observations suggest that incorporating consumption-based emission statistics into cities’ accounting and decision-making processes could uncover largely unrecognised opportunities for mitigation that are likely to be essential for achieving deep decarbonisation.
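    Consumption-based figures of the kind discussed above are typically derived with environmentally extended input-output analysis. The sketch below shows the core calculation, e (I - A)^-1 y, on a made-up three-sector example; all coefficients, emission intensities and final-demand values are invented for illustration and are not Bristol data.

```python
# Minimal sketch with made-up numbers: consumption-based carbon accounting via an
# environmentally extended input-output model. Emissions embodied in a city's final
# demand are e (I - A)^-1 y, regardless of where production actually takes place.
import numpy as np

sectors = ["energy", "food & drink", "other goods/services"]

A = np.array([            # technical coefficients: inputs per unit of output (illustrative)
    [0.10, 0.15, 0.10],
    [0.02, 0.05, 0.03],
    [0.10, 0.20, 0.15],
])
e = np.array([0.80, 0.40, 0.15])      # ktCO2e per £m of output in each sector (illustrative)
y = np.array([300.0, 500.0, 1200.0])  # the city's final demand in £m (illustrative)

leontief = np.linalg.inv(np.eye(3) - A)   # (I - A)^-1, the total requirements matrix
multipliers = e @ leontief                # ktCO2e embodied per £m of final demand, by sector
footprint = multipliers * y               # embodied emissions attributed to each sector

for name, f in zip(sectors, footprint):
    print(f"{name:22s} {f:8.1f} ktCO2e embodied in final demand")
print(f"{'total consumption-based':22s} {footprint.sum():8.1f} ktCO2e")
```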

    Evolutionary connectionism: algorithmic principles underlying the evolution of biological organisation in evo-devo, evo-eco and evolutionary transitions

    The mechanisms of variation, selection and inheritance, on which evolution by natural selection depends, are not fixed over evolutionary time. Current evolutionary biology is increasingly focussed on understanding how the evolution of developmental organisations modifies the distribution of phenotypic variation, the evolution of ecological relationships modifies the selective environment, and the evolution of reproductive relationships modifies the heritability of the evolutionary unit. The major transitions in evolution, in particular, involve radical changes in developmental, ecological and reproductive organisations that instantiate variation, selection and inheritance at a higher level of biological organisation. However, current evolutionary theory is poorly equipped to describe how these organisations change over evolutionary time and especially how that results in adaptive complexes at successive scales of organisation (the key problem is that evolution is self-referential, i.e. the products of evolution change the parameters of the evolutionary process). Here we first reinterpret the central open questions in these domains from a perspective that emphasises the common underlying themes. We then synthesise the findings from a developing body of work that is building a new theoretical approach to these questions by converting well-understood theory and results from models of cognitive learning. Specifically, connectionist models of memory and learning demonstrate how simple incremental mechanisms, adjusting the relationships between individually-simple components, can produce organisations that exhibit complex system-level behaviours and improve the adaptive capabilities of the system. We use the term “evolutionary connectionism” to recognise that, by functionally equivalent processes, natural selection acting on the relationships within and between evolutionary entities can result in organisations that produce complex system-level behaviours in evolutionary systems and modify the adaptive capabilities of natural selection over time. We review the evidence supporting the functional equivalences between the domains of learning and of evolution, and discuss the potential for this to resolve conceptual problems in our understanding of the evolution of developmental, ecological and reproductive organisations and, in particular, the major evolutionary transitions.
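    As a concrete instance of the connectionist models referenced above (an illustration of the general idea, not a model from the paper), the sketch below implements Hebbian learning in a small Hopfield network: incremental adjustment of pairwise connection strengths is enough to give the network, as a whole, the system-level ability to recall stored patterns from corrupted cues.

```python
# Minimal sketch of a connectionist memory model: Hebbian learning in a Hopfield
# network, where simple local weight updates produce collective pattern recall.
import numpy as np

rng = np.random.default_rng(3)
N = 64
patterns = rng.choice([-1, 1], size=(3, N))      # three random +/-1 patterns to store

# Hebbian rule: strengthen connections between co-active units, weaken anti-correlated ones.
W = np.zeros((N, N))
for p in patterns:
    W += np.outer(p, p)
np.fill_diagonal(W, 0)

def recall(cue, steps=20):
    """Iteratively update unit states towards a stored attractor."""
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

# Corrupt a stored pattern and let the network's collective dynamics restore it.
noisy = patterns[0].copy()
flip = rng.choice(N, size=10, replace=False)
noisy[flip] *= -1
recovered = recall(noisy)
print("overlap with original:", int(np.sum(recovered == patterns[0])), "of", N)
```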