
    Mining Antagonistic Communities From Social Networks

    In this thesis, we examine the problem of mining antagonistic communities from social networks. In social networks, people with opposite opinions typically behave differently and form sub-communities, each of which contains people sharing some common behaviors. In one scenario, people with opposite opinions differ in their views on a set of items. In another, people explicitly express whom they agree with, like, or trust, as well as whom they disagree with, dislike, or distrust. We define indirect and direct antagonistic groups based on these two scenarios and have developed algorithms to mine both types. For indirect antagonistic group mining, our algorithm explores the search space of all possible antagonistic groups, starting from antagonistic groups of size two and then searching for antagonistic groups of larger sizes. We have also […]
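    The seed step of the level-wise search described above can be sketched as follows. This is an illustrative reduction, not the thesis's algorithm: the opposition test (two users who disagree on every item they rated in common) and the `min_common` threshold are assumptions for the sketch.

    ```python
    from itertools import combinations

    def opposes(ratings, a, b, min_common=3):
        """Hypothetical opposition test: a and b share enough rated
        items and disagree on all of them (an assumption, not the
        thesis's exact measure)."""
        common = set(ratings[a]) & set(ratings[b])
        return len(common) >= min_common and all(
            ratings[a][i] != ratings[b][i] for i in common)

    def mine_antagonistic_pairs(ratings):
        """Level-wise search seed: all antagonistic groups of size two.
        Larger groups would be grown from these pairs."""
        users = list(ratings)
        return [frozenset(p) for p in combinations(users, 2)
                if opposes(ratings, p[0], p[1])]
    ```

    Larger antagonistic groups would then be assembled by extending these size-two seeds, pruning candidates that fail the opposition test.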

    Network Archaeology: Uncovering Ancient Networks from Present-day Interactions

    Often questions arise about old or extinct networks. What proteins interacted in a long-extinct ancestor species of yeast? Who were the central players in the Last.fm social network 3 years ago? Our ability to answer such questions has been limited by the unavailability of past versions of networks. To overcome these limitations, we propose several algorithms for reconstructing a network's history of growth given only the network as it exists today and a generative model by which the network is believed to have evolved. Our likelihood-based method finds a probable previous state of the network by reversing the forward growth model. This approach retains node identities so that the history of individual nodes can be tracked. We apply these algorithms to uncover older, non-extant biological and social networks believed to have grown via several models, including duplication-mutation with complementarity, forest fire, and preferential attachment. Through experiments on both synthetic and real-world data, we find that our algorithms can estimate node arrival times, identify anchor nodes from which new nodes copy links, and reveal significant features of networks that have long since disappeared. (16 pages, 10 figures)
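    A crude caricature of reversing a forward growth model: under preferential attachment, late arrivals tend to have low degree, so greedily peeling minimum-degree nodes yields a rough guess at arrival order. This is a deliberately simplified stand-in for the paper's likelihood-based method, for illustration only.

    ```python
    def peel_history(adj):
        """Greedy reversal sketch: repeatedly remove a minimum-degree
        node, guessing it arrived last (a heuristic that suits
        preferential attachment). Returns an estimated arrival order,
        earliest node first. `adj` maps node -> set of neighbors."""
        adj = {u: set(vs) for u, vs in adj.items()}  # defensive copy
        removed = []
        while adj:
            # break degree ties deterministically by node name
            u = min(adj, key=lambda v: (len(adj[v]), v))
            removed.append(u)
            for v in adj[u]:
                adj[v].discard(u)
            del adj[u]
        return removed[::-1]
    ```

    On a star graph the hub is correctly inferred to be the oldest node; a genuine likelihood-based reversal would instead score each candidate removal by its probability under the assumed growth model.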

    Uncertainty estimation for operational ocean forecast products-a multi-model ensemble for the North Sea and the Baltic Sea

    Multi-model ensembles for sea surface temperature (SST), sea surface salinity (SSS), sea surface currents (SSC), and water transports have been developed for the North Sea and the Baltic Sea using outputs from several operational ocean forecasting models provided by different institutes. The individual models differ in model code, resolution, boundary conditions, atmospheric forcing, and data assimilation. The ensembles are produced on a daily basis. Daily statistics are calculated for each parameter, giving information about the spread of the forecasts through the standard deviation, ensemble mean and median, and coefficient of variation. High forecast uncertainty, particularly for SSS and SSC, was found in the Skagerrak, the Kattegat (the Transition Area between the North Sea and the Baltic Sea), and the Norwegian Channel. Based on the data collected, longer-term statistical analyses have been carried out, such as a comparison with satellite data for SST and an evaluation of the deviation between forecasts on temporal and spatial scales. Regions of high forecast uncertainty for SSS and SSC have been detected in the Transition Area and the Norwegian Channel, where a large spread between the models can evolve due to differences in simulating the frontal structures and their movements. A distinct seasonal pattern could be distinguished for SST, with high uncertainty between the forecasts during summer. Forecasts with relatively high deviation from the multi-model ensemble (MME) products or from the other individual forecasts were detected for each region and each parameter. The comparison with satellite data showed that the error of the MME products is lowest compared to those of the ensemble members.
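    The daily spread statistics listed above (standard deviation, ensemble mean and median, coefficient of variation) are standard reductions over the member forecasts. A minimal sketch for a single grid point and parameter, illustrative rather than the operational code:

    ```python
    from statistics import mean, median, pstdev

    def ensemble_stats(forecasts):
        """Daily spread statistics for one grid point, given a list of
        forecasts for one parameter (e.g. SST in degrees C) from the
        ensemble members."""
        m = mean(forecasts)
        sd = pstdev(forecasts)  # population std dev over the members
        return {
            "mean": m,
            "median": median(forecasts),
            "std": sd,
            # coefficient of variation: spread relative to the mean
            "cv": sd / m if m else float("nan"),
        }
    ```

    In practice this reduction would be applied per grid cell per day, producing maps of ensemble spread such as those used to flag the Transition Area and the Norwegian Channel.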

    BETWEEN BROADCASTING POLITICAL MESSAGES AND INTERACTING WITH VOTERS

    Politicians across Western democracies are increasingly adopting and experimenting with Twitter, particularly during election time. The purpose of this article is to investigate how candidates used it during an election campaign, with the aim of creating a typology of the various ways in which candidates behaved on Twitter. Our research, which included a content analysis of tweets (n = 26,282) from all twittering Conservative, Labour and Liberal Democrat candidates (n = 416) during the 2010 UK General Election campaign, focused on four aspects of tweets: type, interaction, function and topic. By examining candidates' twittering behaviour, the authors show that British politicians mainly used Twitter as a unidirectional form of communication. However, there was a group of candidates who used it to interact with voters by, for example, mobilizing, helping and consulting them, thus tapping into the potential Twitter offers for facilitating a closer relationship with citizens.

    15N photo-CIDNP MAS NMR analysis of reaction centers of Chloracidobacterium thermophilum

    […] -OH Chl a have been shown to be the primary electron acceptors in green sulfur bacteria and heliobacteria, respectively, and thus a Chl a molecule serves this role in all known homodimeric type-1 RCs.

    Development and validation of the Diabetes Numeracy Test (DNT)

    BACKGROUND: Low literacy and numeracy skills are common. Adequate numeracy skills are crucial in the management of diabetes: patients use them to interpret glucose meters, administer medications, follow dietary guidelines, and perform other tasks. Existing literacy scales may not be adequate to assess numeracy skills. This paper describes the development and psychometric properties of the Diabetes Numeracy Test (DNT), the first scale to specifically measure numeracy skills used in diabetes. METHODS: The items of the DNT were developed by an expert panel and refined using cognitive response interviews with potential respondents. The final version of the DNT (43 items) and other relevant measures were administered to a convenience sample of 398 patients with diabetes. Internal reliability was determined by the Kuder-Richardson coefficient (KR-20). An a priori hypothetical model was developed to determine construct validity. A shortened 15-item version, the DNT-15, was created through split-sample analysis. RESULTS: The DNT had excellent internal reliability (KR-20 = 0.95). The DNT was significantly correlated (p < 0.05) with education, income, literacy and math skills, and diabetes knowledge, supporting excellent construct validity. The mean score on the DNT was 61%, and the test took an average of 33 minutes to complete. The DNT-15 also had good internal reliability (KR-20 = 0.90 and 0.89). In split-sample analysis, correlations of the DNT-15 with the full DNT in both sub-samples were high (rho = 0.96 and 0.97, respectively). CONCLUSION: The DNT is a reliable and valid measure of diabetes-related numeracy skills. An equally adequate but more time-efficient version, the DNT-15, can be used for research and clinical purposes to evaluate diabetes-related numeracy.
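    The KR-20 coefficient used above to assess internal reliability has a standard closed form for dichotomous (correct/incorrect) items. A minimal implementation, illustrative rather than the study's own code:

    ```python
    def kr20(responses):
        """Kuder-Richardson 20 reliability for 0/1 item responses.
        `responses` is a list of rows (one per respondent), each row a
        list of k item scores."""
        n = len(responses)
        k = len(responses[0])
        # sum over items of p_i * q_i, where p_i is the proportion
        # answering item i correctly and q_i = 1 - p_i
        pq = 0.0
        for j in range(k):
            p = sum(row[j] for row in responses) / n
            pq += p * (1 - p)
        # population variance of the respondents' total scores
        totals = [sum(row) for row in responses]
        mean_t = sum(totals) / n
        var_t = sum((t - mean_t) ** 2 for t in totals) / n
        return (k / (k - 1)) * (1 - pq / var_t)
    ```

    Values near 1 indicate that the items consistently rank respondents, which is what the reported KR-20 of 0.95 for the 43-item DNT conveys.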

    Modern classification of neoplasms: reconciling differences between morphologic and molecular approaches

    BACKGROUND: For over 150 years, pathologists have relied on histomorphology to classify and diagnose neoplasms. Their success has been stunning, permitting the accurate diagnosis of thousands of different types of neoplasms using only a microscope and a trained eye. In the past two decades, cancer genomics has challenged the supremacy of histomorphology by identifying genetic alterations shared by morphologically diverse tumors and by finding genetic features that distinguish subgroups of morphologically homogeneous tumors. DISCUSSION: The Developmental Lineage Classification and Taxonomy of Neoplasms groups neoplasms by their embryologic origin. The putative value of this classification is based on the expectation that tumors of a common developmental lineage will share common metabolic pathways and common responses to drugs that target these pathways. The purpose of this manuscript is to show that grouping tumors according to their developmental lineage can reconcile certain fundamental discrepancies resulting from morphologic and molecular approaches to neoplasm classification. In this study, six issues in tumor classification are described that exemplify the growing rift between morphologic and molecular approaches to tumor classification: 1) the morphologic separation between epithelial and non-epithelial tumors; 2) the grouping of tumors based on shared cellular functions; 3) the distinction between germ cell tumors and pluripotent tumors of non-germ cell origin; 4) the distinction between tumors that have lost their differentiation and tumors that arise from uncommitted stem cells; 5) the molecular properties shared by morphologically disparate tumors that have a common developmental lineage, and 6) the problem of re-classifying morphologically identical but clinically distinct subsets of tumors. 
The discussion of these issues in the context of describing different methods of tumor classification is intended to underscore the clinical value of a robust tumor classification. SUMMARY: A classification of neoplasms should guide the rational design and selection of a new generation of cancer medications targeted to metabolic pathways. Without a scientifically sound neoplasm classification, biological measurements on individual tumor samples cannot be generalized to class-related tumors, and constitutive properties common to a class of tumors cannot be distinguished from uninformative data in complex and chaotic biological systems. This paper discusses the importance of biological classification and examines several different approaches to the specific problem of tumor classification.

    Towards computerizing intensive care sedation guidelines: design of a rule-based architecture for automated execution of clinical guidelines

    BACKGROUND: Computerized ICUs rely on software services to convey the medical condition of their patients and to assist the staff in making treatment decisions. Such services help clinicians follow clinical guidelines quickly and accurately. However, the development of such services is often time-consuming and error-prone. Consequently, many care-related activities are still conducted according to manually constructed guidelines, which are often ambiguous and lead to unnecessary variations in treatments and costs. The goal of this paper is to present a semi-automatic verification and translation framework capable of turning manually constructed diagrams into ready-to-use programs. This framework combines the strengths of the manual and service-oriented approaches while reducing their disadvantages, aiming to close the gap in communication between the IT and medical domains. This leads to a less time-consuming and error-prone development phase and a shorter clinical evaluation phase. METHODS: A framework is proposed that semi-automatically translates a clinical guideline, expressed as an XML-based flow chart, into a Drools Rule Flow by employing semantic technologies such as ontologies and SWRL. An overview of the architecture is given, and the technology choices are thoroughly motivated. Finally, it is shown how this framework can be integrated into a service-oriented architecture (SOA). RESULTS: The applicability of the Drools rule language for expressing clinical guidelines is evaluated by translating an example guideline, namely the sedation protocol used for the anaesthetization of patients, into a Drools Rule Flow and by executing and deploying this rule-based application as part of a SOA. The results show that the performance of Drools is comparable to that of other technologies such as Web Services and increases with the number of decision nodes present in the Rule Flow. Most delays are introduced by loading the Rule Flows. CONCLUSIONS: The framework is an effective solution for computerizing clinical guidelines, as it allows quick development, evaluation, and human-readable visualization of the rules, and it performs well. By monitoring the patient's parameters to automatically detect exceptional situations and problems, and by notifying the medical staff of tasks that need to be performed, the computerized sedation guideline improves the execution of the guideline.
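    Drools rules themselves are authored in its Java-based rule language; as a language-neutral illustration of what one translated decision node amounts to, the condition/action shape can be sketched as below. The RASS (Richmond Agitation-Sedation Scale) thresholds and actions here are illustrative assumptions, not the sedation protocol from the paper.

    ```python
    # Hypothetical rules mirroring one decision node of a sedation flow
    # chart; field names, thresholds, and actions are illustrative.
    RULES = [
        {"when": lambda p: p["rass"] >= 2,        "then": "increase sedation"},
        {"when": lambda p: p["rass"] <= -4,       "then": "decrease sedation"},
        {"when": lambda p: -3 <= p["rass"] <= 1,  "then": "maintain dose"},
    ]

    def evaluate(patient):
        """Fire the first rule whose condition matches the patient's
        monitored parameters, analogous to traversing one decision
        node in a rule flow."""
        for rule in RULES:
            if rule["when"](patient):
                return rule["then"]
        return "no action"
    ```

    A real Drools Rule Flow adds what this sketch omits: explicit sequencing between nodes, conflict resolution among rules, and deployment of the rule base as a service within the SOA.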