
    Developing combinatorial multi-component therapies (CMCT) of drugs that are more specific and have fewer side effects than traditional one drug therapies

    Drugs designed for a specific target are always found to have multiple effects. Rather than hope that one bullet can be designed to hit only one target, nonlinear interactions across genomic and proteomic networks could be used to design Combinatorial Multi-Component Therapies (CMCT) that are more targeted with fewer side effects. We show here how computational approaches can be used to predict which combinations of drugs would produce the best effects. Using a nonlinear model of how the output effect depends on multiple input drugs, we show that an artificial neural network can accurately predict the effect of all 2^15 = 32,768 combinations of drug inputs using only the limited data of the output effect of the drugs presented one-at-a-time and pairs-at-a-time.
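As a rough sketch of the scale the abstract describes, the code below enumerates the one-at-a-time and pairs-at-a-time training inputs against the full combinatorial space. The neural-network fit itself is omitted; `N_DRUGS` and the enumeration are illustrative bookkeeping, not the authors' code.

```python
from itertools import combinations

N_DRUGS = 15  # binary on/off drug inputs, as in the abstract

def drug_combinations(max_active):
    """Enumerate all on/off input vectors with at most `max_active` drugs on."""
    combos = []
    for k in range(max_active + 1):
        for idx in combinations(range(N_DRUGS), k):
            vec = [0] * N_DRUGS
            for i in idx:
                vec[i] = 1
            combos.append(tuple(vec))
    return combos

# Training data: drugs presented one-at-a-time and pairs-at-a-time
# (plus the no-drug baseline): 1 + 15 + 105 = 121 measurements.
train_inputs = drug_combinations(max_active=2)

# Prediction target: every one of the 2^15 = 32,768 on/off combinations.
full_space = 2 ** N_DRUGS

print(len(train_inputs), full_space)  # 121 32768
```

The point of the paper is precisely this gap: 121 measurements suffice for the network to generalize to all 32,768 combinations.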

    Why do women invest in pre-pregnancy health and care? A qualitative investigation with women attending maternity services

    Background Despite the importance attributed to good pre-pregnancy care and its potential to improve pregnancy and child health outcomes, relatively little is known about why women invest in pre-pregnancy health and care. We sought to gain insight into why women invested in pre-pregnancy health and care. Methods We carried out 20 qualitative in-depth interviews with pregnant or recently pregnant women who were drawn from a survey of antenatal clinic attendees in London, UK. Interviewees were purposively sampled to include high and low investors in pre-pregnancy health and care, with variation in age, partnership status, ethnicity and pre-existing medical conditions. Data analysis was conducted using the Framework method. Results We identified three groups in relation to pre-pregnancy health and care: 1) The “prepared” group, who had high levels of pregnancy planning and mostly positive attitudes to micronutrient supplementation outside of pregnancy, carried out pre-pregnancy activities such as taking folic acid and making changes to diet and lifestyle. 2) The “poor knowledge” group, who also had high levels of pregnancy planning, did not carry out pre-pregnancy activities and described themselves as having poor knowledge. Elsewhere in their interviews they expressed a strong dislike of micronutrient supplementation. 3) The “absent pre-pregnancy period” group had the lowest levels of pregnancy planning and also expressed anti-supplement views. Even discussing the pre-pregnancy period with this group was difficult, as responses to questions quickly shifted to focus on pregnancy itself. Knowledge of folic acid was poor in all groups. Conclusion Different pre-pregnancy care approaches are likely to be needed for each of the groups. Among the “prepared” group, who were proactive and receptive to health messages, greater availability of information and better response from health professionals could improve the range of pre-pregnancy activities carried out. 
Among the “poor knowledge” group, better response from health professionals might yield greater uptake of pre-pregnancy information. A different, general health strategy might be more appropriate for the “absent pre-pregnancy period” group. The fact that general attitudes to micronutrient supplementation were closely related to whether or not women invested in pre-pregnancy health and care was an unanticipated finding and warrants further investigation. This report is independent research commissioned and funded by the Department of Health Policy Research Programme Pre-Pregnancy Health and Care in England: Exploring Implementation and Public Health Impact, 006/0068.

    Satisfiability Checking for Mission-Time LTL

    Mission-time LTL (MLTL) is a bounded variant of MTL over the naturals, designed to generically specify requirements for mission-based system operation common to aircraft, spacecraft, vehicles, and robots. Despite the utility of MLTL as a specification logic, major gaps remain in analyzing MLTL, e.g., for specification debugging or model checking, centering on the absence of any complete MLTL satisfiability checker. We prove that the MLTL satisfiability checking problem is NEXPTIME-complete and that satisfiability checking MLTL0, the variant of MLTL where all intervals start at 0, is PSPACE-complete. We introduce translations for MLTL-to-LTL, MLTL-to-LTLf, MLTL-to-SMV, and MLTL-to-SMT, creating four options for MLTL satisfiability checking. Our extensive experimental evaluation shows that the MLTL-to-SMT translation with the Z3 SMT solver offers the most scalable performance.
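The bounded operators that make MLTL decidable over finite traces can be illustrated directly. Below is a minimal finite-trace evaluator for G[a,b] and F[a,b]; it is not the paper's automata or SMT encodings, and the trace shape and proposition names are hypothetical.

```python
# Finite-trace semantics for the two bounded MLTL operators:
# G[a,b] p (p holds at every offset in [a,b]) and F[a,b] p (at some offset).
# A trace is a list of dicts mapping proposition names to booleans.

def G(a, b, p, trace, t=0):
    """MLTL bounded 'globally': p true at all offsets a..b from time t."""
    return all(p(trace, t + i) for i in range(a, b + 1))

def F(a, b, p, trace, t=0):
    """MLTL bounded 'finally': p true at some offset a..b from time t."""
    return any(p(trace, t + i) for i in range(a, b + 1))

def prop(name):
    """Atomic proposition; false past the end of the trace."""
    return lambda trace, t: t < len(trace) and trace[t].get(name, False)

trace = [{"req": True}, {}, {"ack": True}, {"ack": True}]
print(F(0, 3, prop("ack"), trace))  # True: ack holds somewhere in [0,3]
print(G(2, 3, prop("ack"), trace))  # True: ack holds at steps 2 and 3
```

An SMT encoding in the paper's style would introduce one Boolean variable per proposition per step and assert exactly these conjunctions and disjunctions.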

    Delivery of a Small for Gestational Age Infant and Greater Maternal Risk of Ischemic Heart Disease

    Background: Delivery of a small for gestational age (SGA) infant has been associated with increased maternal risk of ischemic heart disease (IHD). It is uncertain whether giving birth to an SGA infant is a specific determinant of later IHD, independent of other risk factors, or a marker of general poor health. The purpose of this study was to investigate the association between delivery of an SGA infant and maternal risk for IHD in relation to traditional IHD risk factors. Methods and Findings: Risk of maternal IHD was evaluated in a population-based cross-sectional study of 6,608 women with a prior live term birth who participated in the National Health and Nutrition Examination Survey (1999–2006), a probability sample of the U.S. population. Sequence of events was determined from age at last live birth and at diagnosis of IHD. Delivery of an SGA infant is strongly associated with greater maternal risk for IHD (age-adjusted OR; 95% CI: 1.8; 1.2, 2.9; p = 0.012). The association was independent of the family history of IHD, stroke, hypertension and diabetes (family history-adjusted OR; 95% CI: 1.9; 1.2, 3.0; p = 0.011) as well as other risk factors for IHD (risk factor-adjusted OR; 95% CI: 1.7; 1.1, 2.7; p = 0.025). Delivery of an SGA infant was associated with earlier onset of IHD and preceded it by a median of 30 (interquartile range: 20, 36) years. Conclusions: Giving birth to an SGA infant is strongly and independently associated with IHD and a potential risk factor that precedes IHD by decades. A pregnancy that produces an SGA infant may induce long-term cardiovascular changes.
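The headline statistic, an odds ratio with a 95% Wald confidence interval, can be computed from a 2x2 table. The sketch below uses invented counts for illustration only; it is not the NHANES data, and it does not reproduce the study's covariate-adjusted models.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% Wald CI from a 2x2 table:
    exposed cases (a), exposed non-cases (b),
    unexposed cases (c), unexposed non-cases (d)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts chosen only to land near the reported OR of 1.8:
or_, lo, hi = odds_ratio_ci(30, 470, 200, 5908)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

Adjustment for age, family history, and other risk factors, as in the paper, would require logistic regression rather than this crude 2x2 calculation.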

    High Resolution Methylome Map of Rat Indicates Role of Intragenic DNA Methylation in Identification of Coding Region

    DNA methylation is crucial for gene regulation and maintenance of genomic stability. Rat has been a key model system in understanding mammalian systemic physiology; however, a detailed rat methylome remains uncharacterized to date. Here, we present the first high resolution methylome of rat liver generated using a methylated DNA immunoprecipitation and high throughput sequencing (MeDIP-Seq) approach. We observed that within the DNA/RNA repeat elements, simple repeats harbor the highest degree of methylation. Promoter hypomethylation and exon hypermethylation were common features in both RefSeq genes and expressed genes (as evaluated by a proteomic approach). We also found that although CpG islands were generally hypomethylated, about 6% of them were methylated and a large proportion (37%) of methylated islands fell within the exons. Notably, we observed significant differences in methylation of terminal exons (UTRs); methylation was more pronounced in coding/partially coding exons compared to the non-coding exons. Further, events like alternate exon splicing (cassette exons) and intron retention were marked by DNA methylation, and these regions are retained in the final transcript. Thus, we suggest that DNA methylation could play a crucial role in marking coding regions, thereby regulating alternative splicing. Apart from generating the first high resolution methylome map of rat liver tissue, the present study provides several critical insights into methylome organization and extends our understanding of the interplay between epigenome, gene expression and genome stability.
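MeDIP-seq methylation calls ultimately reduce to counting immunoprecipitated reads over annotated regions (promoters, exons, introns, CpG islands). The toy sketch below illustrates that bookkeeping with hypothetical coordinates; it is not the pipeline used in the study.

```python
def region_methylation(read_positions, regions, read_len=36):
    """Crude per-region MeDIP-seq signal: count reads overlapping each region.
    A read [s, s + read_len) overlaps a half-open region [start, end)."""
    counts = {}
    for name, start, end in regions:
        counts[name] = sum(1 for s in read_positions
                           if s < end and s + read_len > start)
    return counts

# Hypothetical toy coordinates, not the rat annotation used in the paper:
regions = [("promoter", 0, 500), ("exon1", 500, 900), ("intron1", 900, 1400)]
reads = [120, 510, 520, 610, 880, 950]
print(region_methylation(reads, regions))
# → {'promoter': 1, 'exon1': 4, 'intron1': 2}
```

A real analysis would normalize these counts for region length, CpG density, and sequencing depth before calling a region hyper- or hypomethylated.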

    Guidelines for the use and interpretation of assays for monitoring autophagy (4th edition)

    In 2008, we published the first set of guidelines for standardizing research in autophagy. Since then, this topic has received increasing attention, and many scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Thus, it is important to formulate on a regular basis updated guidelines for monitoring autophagy in different organisms. Despite numerous reviews, there continues to be confusion regarding acceptable methods to evaluate autophagy, especially in multicellular eukaryotes. Here, we present a set of guidelines for investigators to select and interpret methods to examine autophagy and related processes, and for reviewers to provide realistic and reasonable critiques of reports that are focused on these processes. These guidelines are not meant to be a dogmatic set of rules, because the appropriateness of any assay largely depends on the question being asked and the system being used. Moreover, no individual assay is perfect for every situation, calling for the use of multiple techniques to properly monitor autophagy in each experimental setting. Finally, several core components of the autophagy machinery have been implicated in distinct autophagic processes (canonical and noncanonical autophagy), implying that genetic approaches to block autophagy should rely on targeting two or more autophagy-related genes that ideally participate in distinct steps of the pathway. Along similar lines, because multiple proteins involved in autophagy also regulate other cellular pathways including apoptosis, not all of them can be used as a specific marker for bona fide autophagic responses. Here, we critically discuss current methods of assessing autophagy and the information they can, or cannot, provide. Our ultimate goal is to encourage intellectual and technical innovation in the field.

    Recruitment and Consolidation of Cell Assemblies for Words by Way of Hebbian Learning and Competition in a Multi-Layer Neural Network

    Current cognitive theories postulate either localist representations of knowledge or fully overlapping, distributed ones. We use a connectionist model that closely replicates known anatomical properties of the cerebral cortex and neurophysiological principles to show that Hebbian learning in a multi-layer neural network leads to memory traces (cell assemblies) that are both distributed and anatomically distinct. Taking the example of word learning based on action-perception correlation, we document mechanisms underlying the emergence of these assemblies, especially (i) the recruitment of neurons and consolidation of connections defining the kernel of the assembly, along with (ii) the pruning of the cell assembly’s halo (consisting of very weakly connected cells). We found that, whereas a learning rule mapping covariance led to significant overlap and merging of assemblies, a neurobiologically grounded synaptic plasticity rule with fixed LTP/LTD thresholds produced minimal overlap and prevented merging, exhibiting competitive learning behaviour. Our results are discussed in light of current theories of language and memory. As the simulations with neurobiologically realistic neural networks presented here demonstrate the spontaneous emergence of lexical representations that are both cortically dispersed and anatomically distinct, both localist and distributed cognitive accounts receive partial support.
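A fixed-threshold LTP/LTD rule of the kind contrasted with the covariance rule can be sketched for a single synapse. The thresholds, learning rate, and exact rule form below are illustrative assumptions, not the model's actual equations.

```python
def hebb_fixed_thresholds(w, pre, post, eta=0.01,
                          theta_ltp=0.6, theta_ltd=0.2, w_max=1.0):
    """Sketch of a fixed-threshold Hebbian rule: potentiate when pre- and
    postsynaptic activity are both high; depress when the presynaptic cell
    fires but the postsynaptic response stays below the LTD threshold;
    otherwise leave the weight unchanged. Weights are clipped to [0, w_max]."""
    if pre >= theta_ltp and post >= theta_ltp:
        w = min(w_max, w + eta)   # LTP: coincident strong activity
    elif pre >= theta_ltp and post < theta_ltd:
        w = max(0.0, w - eta)     # LTD: presynaptic firing without response
    return w

w = 0.5
w = hebb_fixed_thresholds(w, pre=0.9, post=0.8)  # LTP step
w = hebb_fixed_thresholds(w, pre=0.9, post=0.1)  # LTD step
print(round(w, 2))  # 0.5: one LTP and one LTD step cancel out
```

The key property the abstract attributes to such rules is competition: a synapse onto a weakly responding cell is actively weakened rather than merely left unstrengthened, which prevents assemblies from merging.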

    Evaluation of appendicitis risk prediction models in adults with suspected appendicitis

    Background Appendicitis is the most common general surgical emergency worldwide, but its diagnosis remains challenging. The aim of this study was to determine whether existing risk prediction models can reliably identify patients presenting to hospital in the UK with acute right iliac fossa (RIF) pain who are at low risk of appendicitis. Methods A systematic search was completed to identify all existing appendicitis risk prediction models. Models were validated using UK data from an international prospective cohort study that captured consecutive patients aged 16–45 years presenting to hospital with acute RIF pain in March to June 2017. The main outcome was best achievable model specificity (proportion of patients who did not have appendicitis correctly classified as low risk) whilst maintaining a failure rate below 5 per cent (proportion of patients identified as low risk who actually had appendicitis). Results Some 5345 patients across 154 UK hospitals were identified, of whom two‐thirds (3613 of 5345, 67·6 per cent) were women. Women were more than twice as likely to undergo surgery with removal of a histologically normal appendix (272 of 964, 28·2 per cent) as men (120 of 993, 12·1 per cent) (relative risk 2·33, 95 per cent c.i. 1·92 to 2·84; P < 0·001). Of 15 validated risk prediction models, the Adult Appendicitis Score performed best (cut‐off score 8 or less, specificity 63·1 per cent, failure rate 3·7 per cent). The Appendicitis Inflammatory Response Score performed best for men (cut‐off score 2 or less, specificity 24·7 per cent, failure rate 2·4 per cent). Conclusion Women in the UK had a disproportionate risk of admission without surgical intervention and had high rates of normal appendicectomy. Risk prediction models to support shared decision‐making by identifying adults in the UK at low risk of appendicitis were identified.
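The two evaluation metrics, specificity and failure rate at a given cut-off, are straightforward to compute once scores and outcomes are known. The sketch below uses invented scores and outcomes, not the cohort data, and an arbitrary cut-off of 8 for illustration.

```python
def evaluate_cutoff(scores, has_appendicitis, cutoff):
    """Classify patients with score <= cutoff as low risk; return
    (specificity, failure_rate) as defined in the abstract:
    specificity = non-appendicitis patients correctly called low risk,
    failure_rate = low-risk patients who actually had appendicitis."""
    low_risk = [s <= cutoff for s in scores]
    negatives = sum(1 for a in has_appendicitis if not a)
    true_low = sum(1 for lr, a in zip(low_risk, has_appendicitis)
                   if lr and not a)
    failures = sum(1 for lr, a in zip(low_risk, has_appendicitis)
                   if lr and a)
    n_low = sum(low_risk)
    specificity = true_low / negatives if negatives else 0.0
    failure_rate = failures / n_low if n_low else 0.0
    return specificity, failure_rate

# Hypothetical scores/outcomes, not the study cohort:
scores = [3, 5, 8, 9, 12, 4, 7, 10]
appy   = [False, False, False, True, True, False, True, False]
spec, fail = evaluate_cutoff(scores, appy, cutoff=8)
print(round(spec, 2), round(fail, 2))  # 0.8 0.2
```

In the study, a cut-off was acceptable only if the failure rate stayed below 5 per cent; the best achievable specificity was then compared across the 15 models.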

    Modal strength reduction in quantified discrete duration calculus

    QDDC is a logic for specifying quantitative timing properties of reactive systems. An automata-theoretic decision procedure for QDDC reduces each formula to a finite-state automaton accepting precisely the models of the formula. This construction has been implemented in a validity/model checking tool for QDDC called DCVALID. Unfortunately, the size of the final automaton, as well as of the intermediate automata encountered in the construction, can sometimes be prohibitively large. In this paper, we present some validity-preserving transformations of QDDC formulae which result in more efficient construction of the formula automaton and hence reduce the validity checking time. The transformations can be computed in linear time. We provide a theoretical as well as an experimental analysis of the improvements in formula automaton size and validity checking time due to our transformations.