
    An examination of factors influencing the choice of therapy for patients with coronary artery disease

    Background A diverse range of factors influences clinicians' decisions regarding the allocation of patients to different treatments for coronary artery disease in routine cardiology clinics. These include demographic measures, risk factors, co-morbidities, measures of objective cardiac disease, symptom reports and functional limitations. This study examined which of these factors differentiated patients receiving angioplasty from medication; bypass surgery from medication; and bypass surgery from angioplasty. Methods Univariate and multivariate logistic regression analyses were conducted on data from 214 coronary artery disease patients who, at the time of recruitment, had received a clinical assessment and been reviewed by their cardiologist to determine the form of treatment they were to undergo: 70 would receive/continue medication, 71 were to undergo angioplasty and 73 were to undergo bypass surgery. Results Analyses differentiating patients receiving angioplasty from medication produced 9 significant univariate predictors, of which 5 were also multivariately significant (left anterior descending artery disease, previous coronary interventions, age, hypertension and frequency of angina). The analyses differentiating patients receiving surgery from angioplasty produced 12 significant univariate predictors, of which 4 were multivariately significant (limitations in mobility range, circumflex artery disease, previous coronary interventions and educational level). The analyses differentiating patients receiving surgery from medication produced 14 significant univariate predictors, of which 4 were multivariately significant (left anterior descending artery disease, previous cerebral events, limitations in mobility range and circumflex artery disease). Conclusion Variables emphasised in clinical guidelines are clearly involved in coronary artery disease treatment decisions. However, variables beyond these may also be important when therapy decisions are made; thus their roles require further investigation.
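    The two-step analysis described (univariate screening followed by multivariate logistic regression) can be sketched in miniature. The snippet below fits a logistic model by plain gradient descent on synthetic data; the single predictor (a stand-in for "number of previous coronary interventions") and all values are invented for illustration, not taken from the study.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Fit P(y=1|x) = sigmoid(w.x + b) by stochastic gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of the log-loss w.r.t. the logit
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

# Invented data: 0 = medication, 1 = angioplasty; the feature is a
# hypothetical count of previous coronary interventions.
X = [[0], [0], [1], [1], [2], [2], [3], [3]]
y = [0, 0, 0, 1, 1, 1, 1, 1]
w, b = fit_logistic(X, y)
p_low = sigmoid(w[0] * 0 + b)   # predicted probability at 0 prior interventions
p_high = sigmoid(w[0] * 3 + b)  # predicted probability at 3 prior interventions
```

    In the real study each candidate factor would first be screened univariately like this, with the significant ones then entered jointly into one multivariate model.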

    On the Hardness of SAT with Community Structure

    Recent attempts to explain the effectiveness of Boolean satisfiability (SAT) solvers based on conflict-driven clause learning (CDCL) on large industrial benchmarks have focused on the concept of community structure. Specifically, industrial benchmarks have been empirically found to have good community structure, and experiments seem to show a correlation between such structure and the efficiency of CDCL. However, in this paper we establish hardness results suggesting that community structure is not sufficient to explain the success of CDCL in practice. First, we formally characterize a property shared by a wide class of metrics capturing community structure, including "modularity". Next, we show that SAT instances with good community structure according to any metric with this property are still NP-hard. Finally, we study a class of random instances generated from the "pseudo-industrial" community attachment model of Giráldez-Cru and Levy. We prove that, with high probability, instances from this model that have relatively few communities but are still highly modular require exponentially long resolution proofs and so are hard for CDCL. We also present experimental evidence that our result continues to hold for instances with many more communities. This indicates that actual industrial instances easily solved by CDCL may have some other relevant structure not captured by the community attachment model. Comment: 23 pages. Full version of a SAT 2016 paper.
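    Modularity, the metric named above, can be computed for a partition of a formula's variable-incidence graph using Newman's formulation Q = Σ_c (e_c/m − (d_c/2m)²), where e_c counts edges inside community c, d_c its total degree, and m all edges. A minimal sketch; the tiny graph and partition below are invented for illustration.

```python
from collections import defaultdict

def modularity(edges, community):
    """Newman modularity Q of a node partition of an undirected graph.
    edges: list of (u, v) pairs; community: dict node -> community id."""
    m = len(edges)
    intra = defaultdict(int)    # edges with both endpoints in community c
    deg_sum = defaultdict(int)  # total degree of community c
    for u, v in edges:
        deg_sum[community[u]] += 1
        deg_sum[community[v]] += 1
        if community[u] == community[v]:
            intra[community[u]] += 1
    return sum(intra[c] / m - (deg_sum[c] / (2 * m)) ** 2
               for c in deg_sum)

# Toy variable-incidence graph: two triangles joined by one bridging edge,
# i.e. two clear "communities" of variables.
edges = [("x1", "x2"), ("x2", "x3"), ("x1", "x3"),
         ("x4", "x5"), ("x5", "x6"), ("x4", "x6"),
         ("x3", "x4")]
part = {"x1": 0, "x2": 0, "x3": 0, "x4": 1, "x5": 1, "x6": 1}
q = modularity(edges, part)  # high Q for this well-separated partition
```

    The paper's point is precisely that a high Q of this kind does not, by itself, make the underlying instance easy for CDCL.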

    An automatic method to generate domain-specific investigator networks using PubMed abstracts

    Background Collaboration among investigators has become critical to scientific research. This includes ad hoc collaboration established through personal contacts as well as formal consortia established by funding agencies. Continued growth in online resources for scientific research and communication has promoted the development of highly networked research communities. Extending these networks globally requires identifying additional investigators in a given domain, profiling their research interests, and collecting current contact information. We present a novel strategy for building investigator networks dynamically and producing detailed investigator profiles using data available in PubMed abstracts. Results We developed a novel strategy to obtain detailed investigator information by automatically parsing the affiliation string in PubMed records. We illustrated the results by using a published literature database in human genome epidemiology (HuGE Pub Lit) as a test case. Our parsing strategy extracted country information from 92.1% of the affiliation strings in a random sample of PubMed records and from 97.0% of HuGE records, with accuracies of 94.0% and 91.0%, respectively. Institution information was parsed from 91.3% of the general PubMed records (accuracy 86.8%) and from 94.2% of HuGE PubMed records (accuracy 87.0%). We demonstrated the application of our approach to the dynamic creation of investigator networks by creating a prototype information system containing a large database of PubMed abstracts relevant to human genome epidemiology (HuGE Pub Lit), indexed using PubMed medical subject headings converted to Unified Medical Language System concepts. Our method was able to identify 70–90% of the investigators/collaborators in three different human genetics fields; it also successfully identified 9 of 10 genetics investigators within the PREBIC network, an existing preterm birth research network. Conclusion We successfully created a web-based prototype capable of creating domain-specific investigator networks, based on an application that accurately generates detailed investigator profiles from PubMed abstracts combined with robust standard vocabularies. This approach could be used in other biomedical fields to efficiently establish domain-specific investigator networks.
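    A minimal sketch of the kind of affiliation-string parsing described, assuming comma-delimited PubMed affiliation strings; the country list and the institution heuristic below are hypothetical simplifications, not the paper's actual rules.

```python
# Hypothetical country gazetteer; a real parser would use a full list
# plus aliases ("UK", "U.S.A.", etc.).
KNOWN_COUNTRIES = {"USA", "United Kingdom", "China", "Germany", "Japan"}

def parse_affiliation(affil):
    """Split a PubMed affiliation string on commas; take the country from
    the last field and guess the institution from the second field
    (departments usually come first). Purely illustrative heuristics."""
    parts = [p.strip().rstrip(".") for p in affil.split(",")]
    country = parts[-1] if parts[-1] in KNOWN_COUNTRIES else None
    institution = parts[1] if len(parts) > 1 else parts[0]
    return institution, country

inst, country = parse_affiliation(
    "Department of Epidemiology, Emory University, Atlanta, USA.")
```

    Profiles built this way can then be linked across records to form the investigator network.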

    Functional cartography of complex metabolic networks

    High-throughput techniques are leading to an explosive growth in the size of biological databases and creating the opportunity to revolutionize our understanding of life and disease. Interpretation of these data remains, however, a major scientific challenge. Here, we propose a methodology that enables us to extract and display information contained in complex networks. Specifically, we demonstrate that one can (i) find functional modules in complex networks, and (ii) classify nodes into universal roles according to their pattern of intra- and inter-module connections. The method thus yields a "cartographic representation" of complex networks. Metabolic networks are among the most challenging biological networks and, arguably, those with the most potential for immediate applicability. We use our method to analyze the metabolic networks of twelve organisms from three different super-kingdoms. We find that, typically, 80% of the nodes are connected only to other nodes within their respective modules, and that nodes with different roles are affected by different evolutionary constraints and pressures. Remarkably, we find that low-degree metabolites that connect different modules are more conserved than hubs whose links are mostly within a single module. Comment: 17 pages, 4 figures. Go to http://amaral.northwestern.edu for the PDF file of the reprint.
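    Classifying nodes by their pattern of intra- and inter-module connections rests on role metrics such as the participation coefficient P_i = 1 − Σ_s (k_is/k_i)², where k_is is node i's links into module s and k_i its total degree. A minimal sketch, with an invented four-node toy network:

```python
from collections import defaultdict

def participation_coefficient(adj, module):
    """Participation coefficient of each node: 0 means all links stay in
    one module; values near 1 mean links spread evenly across modules."""
    P = {}
    for node, neighbours in adj.items():
        k = len(neighbours)
        per_module = defaultdict(int)
        for nb in neighbours:
            per_module[module[nb]] += 1
        P[node] = 1.0 - sum((ks / k) ** 2 for ks in per_module.values())
    return P

# Toy network: "a" links only inside module 0, while "b" bridges modules.
adj = {"a": ["b", "c"], "b": ["a", "c", "d"], "c": ["a", "b"], "d": ["b"]}
module = {"a": 0, "b": 0, "c": 0, "d": 1}
P = participation_coefficient(adj, module)  # P["a"] = 0; P["b"] > 0
```

    Nodes like "b", non-hub metabolites with links into several modules, correspond to the conserved "connector" role highlighted in the abstract.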

    On the Inability of Markov Models to Capture Criticality in Human Mobility

    We examine the non-Markovian nature of human mobility by exposing the inability of Markov models to capture criticality in human mobility. In particular, the assumed Markovian nature of mobility was used to establish a theoretical upper bound on the predictability of human mobility (expressed as a minimum error probability limit), based on temporally correlated entropy. Since its inception, this bound has been widely used and empirically validated using Markov chains. We show that recurrent neural architectures can achieve significantly higher predictability, surpassing this widely used upper bound. To explain this anomaly, we shed light on several underlying assumptions in previous work that have resulted in this bias. By evaluating mobility predictability on real-world datasets, we show that human mobility exhibits scale-invariant long-range correlations, bearing similarity to a power-law decay. This is in contrast to the initial assumption that human mobility follows an exponential decay. This assumption of exponential decay, coupled with Lempel-Ziv compression in computing Fano's inequality, has led to an inaccurate estimation of the predictability upper bound. We show that this approach inflates the entropy, consequently lowering the upper bound on human mobility predictability. We finally highlight that this approach tends to overlook long-range correlations in human mobility. This explains why recurrent neural architectures that are designed to handle long-range structural correlations surpass the previously computed upper bound on mobility predictability.
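    The predictability bound discussed here is typically computed in two steps: a Lempel-Ziv-style entropy estimate of the location sequence, then Fano's inequality solved for the maximum predictability Π_max. A minimal sketch of that pipeline; the toy sequence and the bisection root-finder are illustrative choices, not the original authors' code.

```python
import math

def lz_entropy(seq):
    """Lempel-Ziv entropy estimate in bits per symbol:
    S ~ n*log2(n) / sum(L_i), where L_i is the length of the shortest
    substring starting at position i that does not appear in seq[:i]."""
    n = len(seq)
    total = 0
    for i in range(n):
        L = 1
        while i + L <= n and seq[i:i + L] in seq[:i]:
            L += 1
        total += L
    return n * math.log2(n) / total

def max_predictability(S, N):
    """Solve Fano's inequality S = H(p) + (1-p)*log2(N-1) for the larger
    root p (the predictability upper bound) by bisection on [1/N, 1)."""
    def fano(p):
        H = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
        return H + (1 - p) * math.log2(N - 1)
    lo, hi = 1.0 / N, 1.0 - 1e-12
    for _ in range(200):
        mid = (lo + hi) / 2
        if fano(mid) > S:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

S = lz_entropy("abcd" * 25)      # a perfectly periodic toy trajectory
pmax = max_predictability(S, 4)  # N = 4 distinct locations
```

    The abstract's argument is that for real trajectories with long-range correlations this entropy estimate is inflated, so the resulting Π_max is too low.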

    Investigating the Evolving Knowledge Structures in New Technology Development

    Part 8: Knowledge Management and Information Sharing. The development of new technology has been identified as one of the key enablers of business and economic growth in developed countries. For example, the United Kingdom (UK) has invested £968 million in the creation of Catapult centres to provide ‘pull through’ of low Technology Readiness Level (TRL) research and science. While these Catapults have been instrumental in developing new technologies, the uptake of new technology within industry remains a considerable challenge. One of the reasons for this is skills and competencies, and in particular, defining the new skills and competencies necessary to effectively apply and operate the new technology within the context of the business. Addressing this issue is non-trivial because the skills and competencies cannot be defined a priori and will evolve with the maturity of the technology. Therefore, there is a need to create methods that enable the elicitation and definition of skills and competencies that co-evolve with new technology development; these are referred to herein as knowledge structures. To meet this challenge, this paper reports the results of a dynamic co-word network analysis of the technical documentation from New Technology Development (NTD) programmes at the National Composites Centre (NCC). Through this analysis, emerging knowledge structures can be identified and monitored, and used to inform industry on the skills and competencies required for a technology.
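    Co-word network analysis of the kind described counts how often pairs of terms co-occur within documents, so that clusters of strongly linked terms reveal emerging knowledge structures. A minimal sketch, assuming the documentation has already been reduced to term sets; the example terms are invented.

```python
from collections import Counter
from itertools import combinations

def coword_network(documents, min_count=1):
    """Build a co-word network: nodes are terms, and the weight of edge
    (a, b) counts how many documents mention both a and b. Real pipelines
    would add term extraction and time slicing for the dynamic analysis."""
    edges = Counter()
    for terms in documents:
        for a, b in combinations(sorted(set(terms)), 2):
            edges[(a, b)] += 1
    return {edge: w for edge, w in edges.items() if w >= min_count}

# Hypothetical term sets extracted from three technical documents.
docs = [
    {"composite", "layup", "curing"},
    {"composite", "curing", "inspection"},
    {"layup", "automation"},
]
net = coword_network(docs)
```

    Re-running this over successive time slices of the documentation is what makes the analysis "dynamic": edges that strengthen over time flag skills and competencies becoming central to the technology.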