
    Models for pattern formation in somitogenesis: a marriage of cellular and molecular biology

    Somitogenesis, the process by which a bilaterally symmetric pattern of cell aggregations is laid down in a cranio-caudal sequence in early vertebrate development, provides an excellent model system for studying the coupling of interactions at the molecular and cellular levels. Here, we review some of the key experimental results and theoretical models related to this process. We extend a recent chemical pre-pattern model based on the cell cycle (Journal of Theoretical Biology 207 (2000) 305-316) by including cell movement, and show that the resultant model exhibits the correct spatio-temporal dynamics of cell aggregation. We also postulate a model to account for the recently observed spatio-temporal dynamics at the molecular level.
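    The abstract does not reproduce the model equations; as a hedged illustration of the general structure it describes (a chemical pre-pattern coupled to moving cells), a reaction-diffusion system with chemotactic cell movement can be written as below, where u, n, D_u, D_n, chi, f and g are generic placeholders rather than the authors' actual terms:

```latex
% Generic pre-pattern-plus-cell-movement structure (illustrative only):
% u = signalling chemical, n = cell density, \chi = chemotactic sensitivity,
% f and g = unspecified reaction kinetics.
\begin{align}
  \frac{\partial u}{\partial t} &= D_u \nabla^2 u + f(u, n) \\
  \frac{\partial n}{\partial t} &= D_n \nabla^2 n
    - \nabla \cdot \left( \chi\, n\, \nabla u \right) + g(u, n)
\end{align}
```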

    International Energy Trade and the Unfair Trade Laws

    Generalizability of achievement goal profiles across five cultural groups : more similarities than differences

    Previous results have shown possible cultural differences in students’ endorsement of achievement goals and in the relations of those goals with various predictors and outcomes. In this person-centered study, we sought to identify achievement goal profiles and to assess the extent to which these configurations and their associations with predictors and outcomes generalize across cultures. We used a new statistical approach to assess latent profile similarities across adolescents from five cultural backgrounds (N = 2643, including Non-Indigenous Australians, Indigenous Australians, Indigenous Americans, Middle Easterners, and Asians). Our results supported the cross-cultural generalizability of the profiles, their predictors, and their outcomes. Five similar profiles were identified in each cultural group, but their relative frequency differed across cultures. The results revealed the advantages of exploring multidimensional goal profiles.
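    The abstract does not name the statistical machinery it uses; purely as a hypothetical sketch of the person-centered idea (fit latent profiles within each cultural group, then compare the recovered profile structure), one might proceed as follows. The function names and the scikit-learn mixture model are assumptions, not the study's actual method:

```python
# Hypothetical sketch of per-group latent profile analysis; the study's actual
# multi-group profile-similarity tests are more formal than this comparison.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_profiles(goal_scores, n_profiles=5, seed=0):
    """Fit a latent profile model (diagonal Gaussian mixture) to goal scores."""
    return GaussianMixture(n_components=n_profiles, covariance_type="diag",
                           random_state=seed).fit(goal_scores)

def profile_means_by_group(scores_by_group):
    """Profile centroids per cultural group; similar centroid patterns across
    groups would be consistent with cross-cultural generalizability."""
    return {group: np.round(fit_profiles(scores).means_, 2)
            for group, scores in scores_by_group.items()}
```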

    Timing and Reconstruction of the Most Recent Common Ancestor of the Subtype C Clade of Human Immunodeficiency Virus Type 1

    Human immunodeficiency virus type 1 (HIV-1) subtype C is responsible for more than 55% of HIV-1 infections worldwide. When this subtype first emerged is unknown. We have analyzed all available gag (p17 and p24) and env (C2-V3) subtype C sequences with known sampling dates, which ranged from 1983 to 2000. The majority of these sequences come from the Karonga District in Malawi and include some of the earliest known subtype C sequences. Linear regression analyses of sequence divergence estimates (obtained with four different approaches) were plotted against sample year to estimate the year in which there was zero divergence from the reconstructed ancestral sequence. Here we suggest that the most recent common ancestor of subtype C appeared in the mid- to late 1960s. Sensitivity analyses, by which possible biases due to oversampling from one district were explored, gave very similar estimates.
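    As a hedged sketch of the dating logic (fit a line to divergence versus sampling year and extrapolate back to zero divergence), the snippet below uses made-up numbers; the study itself applied four different divergence-estimation approaches to real gag and env sequences:

```python
# Illustrative divergence-vs-time regression; the x-intercept estimates the
# year of zero divergence from the reconstructed ancestral sequence.
import numpy as np

def estimate_mrca_year(sample_years, divergences):
    """Fit divergence = a*year + b and return the year where divergence is zero."""
    a, b = np.polyfit(sample_years, divergences, deg=1)  # least-squares line
    return -b / a  # x-intercept

# Hypothetical data points (the real samples span 1983-2000):
years = np.array([1983, 1987, 1991, 1995, 2000])
divergence = np.array([0.045, 0.055, 0.066, 0.077, 0.090])
print(f"Estimated MRCA year: {estimate_mrca_year(years, divergence):.0f}")  # ~1966
```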

    Toward Open Science at the European Scale: Geospatial Semantic Array Programming for Integrated Environmental Modelling

    [Excerpt] Interfacing science and policy raises challenging issues when large spatial-scale (regional, continental, global) environmental problems need transdisciplinary integration within a context of modelling complexity and multiple sources of uncertainty. This is characteristic of science-based support for environmental policy at the European scale, and key aspects have also long been investigated by European Commission transnational research. Approaches (either of computational science or of policy-making) suitable at a given domain-specific scale may not be appropriate for wide-scale transdisciplinary modelling for environment (WSTMe) and the corresponding policy-making. In WSTMe, the characteristic heterogeneity of available spatial information and the complexity of the required data-transformation modelling (D-TM) call for a paradigm shift in how computational science supports such peculiarly extensive integration processes. In particular, emerging wide-scale integration requirements of typical currently available domain-specific modelling strategies may include increased robustness and scalability along with enhanced transparency and reproducibility. This challenging shift toward open data and reproducible research (open science) is also strongly suggested by the potentially huge (and sometimes neglected) impact of cascading effects of errors within the impressively growing interconnection among domain-specific computational models and frameworks. Concise array-based mathematical formulation and implementation (with array programming tools) have proved helpful in supporting and mitigating the complexity of WSTMe when complemented with generalized modularization and terse array-oriented semantic constraints. This defines the paradigm of Semantic Array Programming (SemAP), where semantic transparency also implies free software use (although black boxes, e.g. legacy code, might easily be semantically interfaced). A new approach for WSTMe has emerged by formalizing unorganized best practices and experience-driven informal patterns. The approach introduces a lightweight (non-intrusive) integration of SemAP and geospatial tools, called Geospatial Semantic Array Programming (GeoSemAP). GeoSemAP exploits the joint semantics provided by SemAP and geospatial tools to split a complex D-TM into logical blocks which are easier to check by means of mathematical array-based and geospatial constraints. Those constraints take the form of precondition, invariant and postcondition semantic checks. This way, even complex WSTMe may be described as the composition of simpler GeoSemAP blocks. GeoSemAP allows intermediate data and information layers to be more easily and formally semantically described, so as to increase the fault-tolerance, transparency and reproducibility of WSTMe. This might also help to better communicate part of the policy-relevant knowledge, which is often difficult to transfer from technical WSTMe to the science-policy interface. [...]
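    As a minimal sketch of the precondition/invariant/postcondition pattern that GeoSemAP applies to a D-TM block, the example below wraps one array transformation in semantic checks. The function, check labels and raster example are illustrative assumptions, not the actual interface of the SemAP tools:

```python
# Minimal sketch: wrap one data-transformation block in array-based semantic
# checks so errors fail loudly instead of cascading into downstream models.
import numpy as np

def check(condition, message):
    """Semantic constraint: abort on violation rather than propagate bad data."""
    if not condition:
        raise ValueError(f"semantic check failed: {message}")

def normalise_layer(raster):
    """Turn a non-negative raster into a probability-like layer."""
    check(raster.ndim == 2, "pre: input must be a 2-D raster")
    check(np.isfinite(raster).all(), "pre: no NaN/Inf cells allowed")
    check((raster >= 0).all(), "pre: values must be non-negative")

    total = raster.sum()
    check(total > 0, "inv: raster must not be identically zero")
    out = raster / total

    check(np.isclose(out.sum(), 1.0), "post: output must sum to 1")
    return out
```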

    Practical approaches to produce high-quality probabilistic predictions and improve risk-based decision making

    Conference theme: 'Digital Water'. Probabilistic predictions provide crucial information regarding the uncertainty of hydrological predictions, which are a key input for risk-based decision-making. High-quality probabilistic predictions provide reliable estimates of water resource system risks, avoiding a false sense of security. However, probabilistic predictions are not widely used in hydrological modelling applications because they are perceived to be difficult to construct and interpret. We present a software tool that provides an easy-to-use and simple approach to produce high-quality probabilistic streamflow predictions. The approach integrates the recommendations from multiple research papers over multiple years to provide guidance on the selection of robust descriptions of uncertainty (residual error models) for a wide range of hydrological applications. This guidance includes the choice of transformation to handle common features of residual errors (heteroscedasticity, skewness, persistence) and techniques that handle a wide range of common objective functions. A case study illustrating the practical benefits of uncertainty analysis for risk-based decision-making is provided. The case study evaluates fish health in two catchments (Mt. McKenzie and Upper Jacobs) in the Barossa Valley, South Australia. The streamflow predictions of environmental flow metrics are combined with a simplified environmental response model to estimate fish health. The outcomes obtained using deterministic streamflow predictions are contrasted with the outcomes obtained from probabilistic predictions. In general, probabilistic predictions provide greater confidence in the predictions of fish health because the uncertainty ranges recognise the differences in the quality of the hydrological predictions at the two sites. The uncertainty ranges were generally high, in the range 40-60% (Mt McKenzie) or 4-20% (Upper Jacobs), for predictions of the frequency of years with poor (or worse) fish health. This analysis provides a richer source of information for risk-averse decision-makers than the single values provided by deterministic predictions.
    Mark Thyer, David McInerney, Dmitri Kavetski, Jason Hunter
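    The software tool itself is not shown in the abstract; as a hedged sketch of the underlying idea (a residual error model calibrated in a transformed space where errors are closer to homoscedastic, then sampled to produce an ensemble), the following uses a simple log transformation. All names and the transformation choice are assumptions, not the tool's actual implementation:

```python
# Illustrative post-processor: calibrate a residual error model on
# log-transformed flows, then sample replicates around new deterministic
# predictions to obtain probabilistic streamflow.
import numpy as np

rng = np.random.default_rng(1)
EPS = 1e-6  # offset so zero flows can be log-transformed

def fit_error_model(q_obs, q_sim):
    """Mean and spread of residuals in log space (handles heteroscedasticity)."""
    resid = np.log(q_obs + EPS) - np.log(q_sim + EPS)
    return resid.mean(), resid.std(ddof=1)

def sample_predictions(q_det, mu, sigma, n_reps=1000):
    """Ensemble of streamflow replicates around the deterministic prediction."""
    noise = rng.normal(mu, sigma, size=(n_reps, q_det.size))
    return np.exp(np.log(q_det + EPS) + noise) - EPS

# e.g. 90% predictive limits: lo, hi = np.percentile(reps, [5, 95], axis=0)
```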

    High-quality probabilistic predictions for existing hydrological models with common objective functions

    Conference theme: 'Digital Water'. Probabilistic predictions describe the uncertainty in modelled streamflow, which is a critical input for many environmental modelling applications. A residual error model typically produces the probabilistic predictions in tandem with a hydrological model that predicts the deterministic streamflow. However, many objective functions that are commonly used to calibrate the parameters of the hydrological model make (implicit) assumptions about the errors that do not match the properties (e.g. heteroscedasticity and skewness) of those errors. The consequence of these assumptions is often low-quality probabilistic predictions of errors, which reduces the practical utility of probabilistic modelling. Our study has two aims: firstly, to evaluate the impact of objective function inconsistency on the quality of probabilistic predictions; secondly, to demonstrate how a simple enhancement to a residual error model can rectify the issues identified with inconsistent objective functions in Aim 1, and thereby improve probabilistic predictions in a wide range of scenarios. Our findings show that the enhanced error model enables high-quality probabilistic predictions to be obtained for a range of catchments and objective functions, without requiring any changes to the hydrological modelling or calibration process. This advance has practical benefits aimed at increasing the uptake of probabilistic predictions in real-world applications: the methods are applicable to existing hydrological models that are already calibrated, simple to implement, easy to use and fast. Finally, these methods are available as an open-source R-shiny application and an R-package function.
    Jason Hunter, Mark Thyer, David McInerney, Dmitri Kavetski
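    One common way to quantify the quality of such probabilistic predictions is a reliability check based on the probability integral transform; the sketch below is generic, and the metric details are assumptions rather than necessarily the paper's exact diagnostics:

```python
# Generic reliability check: for reliable predictions, observations should sit
# at uniformly distributed quantile positions within the predictive ensemble.
import numpy as np

def pit_values(ensemble, observations):
    """Quantile position of each observation in the ensemble.
    ensemble: (n_reps, n_times); observations: (n_times,)."""
    return (ensemble < observations).mean(axis=0)

def reliability_score(pit):
    """Mean absolute deviation of sorted PIT values from uniform quantiles;
    0 indicates perfectly reliable probabilistic predictions."""
    n = pit.size
    uniform = (np.arange(1, n + 1) - 0.5) / n
    return np.abs(np.sort(pit) - uniform).mean()
```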

    Revitalising audit and feedback to improve patient care

    Healthcare systems face challenges in tackling variations in patient care and outcomes. Audit and feedback aims to improve patient care by reviewing clinical performance against explicit standards and directing action towards areas not meeting those standards. It is a widely used foundational component of quality improvement, included in around 60 national clinical audit programmes in the United Kingdom. However, there is currently a gap between what audit and feedback can achieve and what it actually delivers, whether led locally or nationally. Several national audits have been successful in driving improvement and reducing variations in care, such as for stroke and lung cancer, but progress has also been slower than hoped for in other aspects of care (table 1). Audit and feedback has a chequered past [6]. Clinicians might feel threatened rather than supported by top-down feedback and rightly question whether the rewards outweigh the effort invested in poorly designed audit. Healthcare organisations have limited resources to support and act on audit and feedback. Dysfunctional clinical and managerial relationships undermine effective responses to feedback, particularly when it is not clearly part of an integrated approach to quality assurance and improvement. Unsurprisingly, the full potential of audit and feedback has not been realised.