114 research outputs found

    Synthetic organisms and living machines: Positioning the products of synthetic biology at the borderline between living and non-living matter

    The difference between a non-living machine such as a vacuum cleaner and a living organism such as a lion seems obvious. The two types of entities differ in their material constitution, their origin, their development and their purpose. This apparently clear-cut borderline has previously been challenged by fictitious ideas of “artificial organisms” and “living machines” as well as by progress in technology and breeding. The emergence of novel technologies such as artificial life, nanobiotechnology and synthetic biology is definitely blurring the boundary between our understanding of living and non-living matter. This essay discusses where, at the borderline between living and non-living matter, we can position the future products of synthetic biology that belong to the two hybrid categories “synthetic organisms” and “living machines”, and how the approaching realization of such hybrid entities affects our understanding of organisms and machines. For this purpose we focus on the description of three different types of synthetic biology products and the aims assigned to their realization: (1) synthetic minimal cells pursued by protocell synthetic biology, (2) chassis organisms pursued by synthetic genomics, and (3) genetically engineered machines produced by bioengineering. We argue that in the case of synthetic biology the purpose is more decisive than origin and development for categorizing a product as an organism or a machine. This has certain ethical implications, because defining an entity as a machine seems to allow bypassing the discussion about the assignment and evaluation of instrumental and intrinsic values, which can be raised in the case of organisms.

    Geographical distribution of fertility rates in 70 low-income, lower-middle-income, and upper-middle-income countries, 2010–16: a subnational analysis of cross-sectional surveys

    Background Understanding subnational variation in age-specific fertility rates (ASFRs) and total fertility rates (TFRs), and geographical clustering of high fertility and its determinants in low-income and middle-income countries, is increasingly needed for geographical targeting and prioritising of policy. We aimed to identify variation in fertility rates and to describe patterns of selected key fertility determinants in areas of high fertility. Methods We did a subnational analysis of ASFRs and TFRs from the most recent publicly available and nationally representative cross-sectional Demographic and Health Surveys and Multiple Indicator Cluster Surveys collected between 2010 and 2016 for 70 low-income, lower-middle-income, and upper-middle-income countries, across 932 administrative units. We assessed the degree of global spatial autocorrelation using Moran's I statistic and did a spatial cluster analysis using the Getis-Ord Gi* local statistic to examine the geographical clustering of fertility and selected key fertility determinants. Descriptive analysis was used to investigate the distribution of ASFRs and of selected determinants in each cluster. Findings TFR varied from below replacement (2·1 children per woman) in 36 of the 932 subnational regions (mainly located in India, Myanmar, Colombia, and Armenia) to rates of 8 and higher in 14 subnational regions, located in sub-Saharan Africa and Afghanistan. High-fertility clusters were mostly associated with areas of low prevalence of secondary or higher education among women, low use of contraception, and high unmet need for family planning, although exceptions existed. Interpretation Substantial within-country variation in the distribution of fertility rates highlights the need for tailored programmes and strategies in high-fertility cluster areas to increase the use of contraception and access to secondary education, and to reduce unmet need for family planning.
Funding Wellcome Trust, the UK Foreign, Commonwealth and Development Office, and the Bill & Melinda Gates Foundation
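
As an illustrative aside (not part of the study), the global Moran's I statistic used above to assess spatial autocorrelation can be computed in a few lines of Python. The regions, TFR values and contiguity weights below are invented toy assumptions, not study data:

```python
from statistics import mean

def morans_i(values, weights):
    """Global Moran's I for spatial autocorrelation.

    values  : list of observations x_i (e.g. subnational TFRs)
    weights : dict mapping (i, j) index pairs to spatial weight w_ij
    """
    n = len(values)
    xbar = mean(values)
    dev = [x - xbar for x in values]           # deviations from the mean
    w_sum = sum(weights.values())              # total weight W
    num = sum(w * dev[i] * dev[j] for (i, j), w in weights.items())
    den = sum(d * d for d in dev)
    return (n / w_sum) * (num / den)

# Toy example: four regions on a line, binary contiguity weights;
# a high-fertility pair sits next to a low-fertility pair.
tfr = [6.5, 6.1, 2.3, 2.0]
w = {(0, 1): 1, (1, 0): 1, (1, 2): 1, (2, 1): 1, (2, 3): 1, (3, 2): 1}
print(round(morans_i(tfr, w), 3))  # positive: similar values cluster
```

A value near +1 indicates that similar fertility rates cluster in space, near −1 that they alternate, and near 0 no spatial pattern; the Getis-Ord Gi* local statistic then locates the individual hot spots.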

    Utilisation of an operative difficulty grading scale for laparoscopic cholecystectomy

    Background A reliable system for grading the operative difficulty of laparoscopic cholecystectomy would standardise the description of findings and the reporting of outcomes. The aim of this study was to validate a difficulty grading system (Nassar scale), testing its applicability and consistency in two large prospective datasets. Methods Patient and disease-related variables and 30-day outcomes were identified in two prospective cholecystectomy databases: the multicentre prospective cohort of 8820 patients from the recent CholeS Study and a single-surgeon series containing 4089 patients. Operative data and patient outcomes were correlated with the Nassar operative difficulty scale using Kendall’s tau for dichotomous variables or Jonckheere–Terpstra tests for continuous variables. A ROC curve analysis was performed to quantify the predictive accuracy of the scale for each outcome, with continuous outcomes dichotomised prior to analysis. Results A higher operative difficulty grade was consistently associated with worse patient outcomes in both the reference and CholeS cohorts. The median length of stay increased from 0 to 4 days, and the 30-day complication rate from 7.6% to 24.4%, as the difficulty grade increased from 1 to 4/5 (both p < 0.001). In the CholeS cohort, a higher difficulty grade was most strongly associated with conversion to open surgery and 30-day mortality (AUROC = 0.903 and 0.822, respectively). On multivariable analysis, the Nassar operative difficulty scale was a significant independent predictor of operative duration, conversion to open surgery, 30-day complications and 30-day reintervention (all p < 0.001). Conclusion We have shown that an operative difficulty scale can standardise the description of operative findings by surgeons of multiple grades to facilitate audit, training assessment and research. It provides a tool for reporting operative findings, disease severity and technical difficulty, and can be utilised in future research to reliably compare outcomes according to case mix and intraoperative difficulty.
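
As a hedged illustration of the AUROC figures reported above, the statistic can be computed directly from its rank interpretation: the probability that a randomly chosen converted case carries a higher difficulty grade than a randomly chosen non-converted case, with ties counting half. The grades and outcomes below are invented toy data, not CholeS records:

```python
def auroc(labels, scores):
    """AUROC via the Mann-Whitney relationship: the probability that a
    random positive case scores higher than a random negative case,
    counting ties as one half."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy data: higher difficulty grade (1-5) loosely predicts conversion to open.
grade     = [1, 1, 2, 2, 3, 3, 4, 5, 5, 4]
converted = [0, 0, 0, 0, 0, 1, 1, 1, 1, 0]
print(auroc(converted, grade))
```

An AUROC of 0.5 means the grade carries no discriminative information, and 1.0 means perfect separation, which is why values around 0.9 indicate a strong predictor.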

    Effect of Ibandronate on Bending Strength and Toughness of Rodent Cortical Bone; possible implications for fracture prevention

    OBJECTIVES: There remains conflicting evidence regarding cortical bone strength following bisphosphonate therapy. As part of a study to assess the effects of bisphosphonate treatment on the healing of rat tibial fractures, the mechanical properties and radiological density of the uninjured contralateral tibia were assessed. METHODS: Skeletally mature aged rats were used. A total of 14 rats received 1µg/kg ibandronate (iban) daily and 17 rats received 1 ml 0.9% sodium chloride (control) daily. Stress at failure and toughness of the tibial diaphysis were calculated following four-point bending tests. RESULTS: Uninjured cortical bone in the iban group had a significantly greater mean stress at failure of 219.2 MPa (standard deviation (sd) 45.99) compared with the control group (169.46 MPa (sd 43.32)) after only nine weeks of therapy (p < 0.001). Despite this, cortical bone toughness and work to failure were similar. There was no significant difference in radiological density or physical dimensions of the cortical bone. CONCLUSIONS: Iban therapy increases the stress at failure of uninjured cortical bone. This has relevance when normalising the strength of repair in a limb by comparison with the unfractured limb. However, the 20% increase in stress at failure with iban therapy needs to be interpreted with caution, as there was no corresponding increase in toughness or work to failure. Further research is required in this area, especially given the increasing clinical burden of low-energy diaphyseal femoral fractures following prolonged use of bisphosphonates. Cite this article: Bone Joint Res 2015;4:99–10
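
For orientation, the stress at failure in a four-point bending test follows from standard beam theory. The sketch below assumes an idealised rectangular cross-section (real tibiae are irregular, so studies substitute the specimen's measured second moment of area), and the numbers are hypothetical, not taken from this study:

```python
def fourpoint_stress(force, outer_span, inner_span, width, depth):
    """Peak bending stress in four-point bending of a rectangular beam:
        sigma = 3 * F * (L - Li) / (2 * b * d^2)
    where F is total applied load, L and Li are the outer and inner
    support spans, and b, d are the section width and depth.
    With newtons and millimetres the result is in MPa.
    This rectangular-section form is illustrative only."""
    return 3 * force * (outer_span - inner_span) / (2 * width * depth ** 2)

# Hypothetical inputs: 80 N load, 20/10 mm spans, 2 x 2 mm section.
print(fourpoint_stress(force=80, outer_span=20, inner_span=10,
                       width=2, depth=2))  # MPa
```

The key point is that failure stress normalises the breaking load by specimen geometry, which is what allows strength comparisons between treated and control limbs of different sizes.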

    Population‐based cohort study of outcomes following cholecystectomy for benign gallbladder diseases

    Background The aim was to describe the management of benign gallbladder disease and identify characteristics associated with all‐cause 30‐day readmissions and complications in a prospective population‐based cohort. Methods Data were collected on consecutive patients undergoing cholecystectomy in acute UK and Irish hospitals between 1 March and 1 May 2014. Potential explanatory variables influencing all‐cause 30‐day readmissions and complications were analysed by means of multilevel, multivariable logistic regression modelling using a two‐level hierarchical structure with patients (level 1) nested within hospitals (level 2). Results Data were collected on 8909 patients undergoing cholecystectomy from 167 hospitals. Some 1451 cholecystectomies (16·3 per cent) were performed as an emergency, 4165 (46·8 per cent) as elective operations, and 3293 patients (37·0 per cent) had had at least one previous emergency admission but underwent surgery on a delayed basis. The readmission and complication rates at 30 days were 7·1 per cent (633 of 8909) and 10·8 per cent (962 of 8909) respectively. Both readmissions and complications were independently associated with increasing ASA fitness grade, duration of surgery, and increasing numbers of emergency admissions with gallbladder disease before cholecystectomy. No identifiable hospital characteristics were linked to readmissions or complications. Conclusion Readmissions and complications following cholecystectomy are common and associated with patient and disease characteristics.

    Evaluation of appendicitis risk prediction models in adults with suspected appendicitis

    Background Appendicitis is the most common general surgical emergency worldwide, but its diagnosis remains challenging. The aim of this study was to determine whether existing risk prediction models can reliably identify patients presenting to hospital in the UK with acute right iliac fossa (RIF) pain who are at low risk of appendicitis. Methods A systematic search was completed to identify all existing appendicitis risk prediction models. Models were validated using UK data from an international prospective cohort study that captured consecutive patients aged 16–45 years presenting to hospital with acute RIF pain from March to June 2017. The main outcome was the best achievable model specificity (proportion of patients who did not have appendicitis correctly classified as low risk) whilst maintaining a failure rate below 5 per cent (proportion of patients identified as low risk who actually had appendicitis). Results Some 5345 patients across 154 UK hospitals were identified, of whom two‐thirds (3613 of 5345, 67·6 per cent) were women. Women were more than twice as likely as men to undergo surgery with removal of a histologically normal appendix (272 of 964, 28·2 per cent, versus 120 of 993, 12·1 per cent; relative risk 2·33, 95 per cent c.i. 1·92 to 2·84; P < 0·001). Of 15 validated risk prediction models, the Adult Appendicitis Score performed best overall (cut‐off score 8 or less, specificity 63·1 per cent, failure rate 3·7 per cent). The Appendicitis Inflammatory Response Score performed best for men (cut‐off score 2 or less, specificity 24·7 per cent, failure rate 2·4 per cent). Conclusion Women in the UK had a disproportionate risk of admission without surgical intervention and high rates of normal appendicectomy. Risk prediction models that support shared decision‐making by identifying adults in the UK at low risk of appendicitis were identified.
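
The two headline metrics above (specificity and failure rate) condition on different denominators, which is easy to confuse: specificity is computed among patients without appendicitis, whereas failure rate is computed among patients the model flags as low risk. A minimal Python sketch on invented toy data:

```python
def triage_metrics(low_risk, disease):
    """Specificity and failure rate as defined in the abstract.

    low_risk : 1 if the model classifies the patient as low risk
    disease  : 1 if the patient actually had appendicitis
    """
    # Specificity: among patients WITHOUT appendicitis, the fraction
    # correctly flagged as low risk.
    flags_of_healthy = [lr for lr, d in zip(low_risk, disease) if not d]
    specificity = sum(flags_of_healthy) / len(flags_of_healthy)
    # Failure rate: among patients flagged low risk, the fraction who
    # actually had appendicitis.
    disease_of_flagged = [d for lr, d in zip(low_risk, disease) if lr]
    failure_rate = sum(disease_of_flagged) / len(disease_of_flagged)
    return specificity, failure_rate

# Toy cohort of 10 patients (not study data).
disease  = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
low_risk = [0, 0, 1, 1, 1, 1, 0, 0, 1, 1]
spec, fail = triage_metrics(low_risk, disease)
print(round(spec, 3), round(fail, 3))
```

In this framing, a triage model is tuned to maximise specificity (avoiding unnecessary admissions) subject to keeping the failure rate, the clinically dangerous misses, below the 5 per cent threshold.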

    Oxford Advanced Learner's Dictionary of Current English


    Guide to Patterns and Usage in English
