
    The spread of antimalarial drug resistance: A mathematical model with practical implications for ACT drug policies

    Most malaria-endemic countries are implementing a change in antimalarial drug policy to artemisinin combination therapy (ACT). The impact of different drug choices and implementation strategies is uncertain. A comprehensive model was constructed incorporating important epidemiological and biological factors, and used to illustrate the spread of resistance in low and high transmission settings. The model predicts robustly that drug resistance spreads faster in low transmission settings than in high transmission settings, and that in low transmission areas ACTs slow the spread of drug resistance to the partner drug, especially at high coverage rates. This effect decreases exponentially with increasing delay in deploying the ACT and with decreasing coverage rates. A major obstacle to achieving the benefits of high coverage is the current cost of the drugs, which argues strongly for a global subsidy to make ACTs generally available and affordable in endemic areas.
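
    A minimal way to see why coverage and delay matter (an illustrative sketch, not the paper's comprehensive model) is to track the frequency p of resistant parasites under a selection coefficient s set by drug pressure:

        dp/dt = s p (1 - p),    p(t) ≈ p0 e^{s t} while p << 1,

    where s increases with treatment coverage and with the survival advantage that treatment failure confers on resistant parasites. Because p grows exponentially while rare, every unit of delay before deploying a regimen that lowers s (such as an ACT) multiplies the resistant frequency by e^{s Δt}, which is one way to rationalize the exponentially shrinking benefit of delayed deployment.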

    Expressions for Bayesian confidence of drift diffusion observers in fluctuating stimuli tasks

    We introduce a new approach to modelling decision confidence, with the aim of enabling computationally cheap predictions while taking into account, and thereby exploiting, trial-by-trial variability in stochastically fluctuating stimuli. Using the framework of the drift diffusion model of decision making, along with time-dependent thresholds and the idea of a Bayesian confidence readout, we derive expressions for the probability distribution over confidence reports. In line with current models of confidence, the derivations allow for the accumulation of “pipeline” evidence that has been received but not processed by the time of response, the effect of drift rate variability, and metacognitive noise. The expressions are valid for stimuli that change over the course of a trial with normally distributed fluctuations in the evidence they provide. A number of approximations are made to arrive at the final expressions, and we test all approximations via simulation. The derived expressions contain only a small number of standard functions and need to be evaluated only once per trial, making trial-by-trial modelling of confidence data in stochastically fluctuating stimuli tasks more feasible. We conclude by using the expressions to gain insight into the confidence of optimal observers and into empirically observed patterns of confidence.
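
    As a concrete illustration of the kind of readout involved (a simplified sketch with invented parameters, not the paper's derived expressions, which additionally handle pipeline evidence, drift rate variability and metacognitive noise), consider a DDM with a known drift magnitude: the posterior probability of being correct given the accumulated evidence has a closed form that is evaluated once per trial rather than by simulation.

        import numpy as np

        rng = np.random.default_rng(0)

        def simulate_trial(mu=0.5, sigma=1.0, dt=0.001, a=1.0, t_max=5.0):
            # Accumulate normally fluctuating momentary evidence until |x| hits bound a.
            n = int(t_max / dt)
            x = np.cumsum(mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n))
            hits = np.nonzero(np.abs(x) >= a)[0]
            if hits.size == 0:
                return None                    # no response within t_max
            i = hits[0]
            return x[i], (i + 1) * dt          # evidence at response, response time

        def bayesian_confidence(x_final, mu=0.5, sigma=1.0):
            # Posterior probability that the chosen option is correct, given the
            # final accumulated evidence. For a known drift magnitude the readout
            # is a single logistic function of x (elapsed time cancels); with
            # drift rate variability it would not, which is where time-dependent
            # expressions become necessary.
            return 1.0 / (1.0 + np.exp(-2.0 * mu * abs(x_final) / sigma**2))

        trial = simulate_trial()
        if trial is not None:
            x_resp, rt = trial
            print(f"RT = {rt:.3f} s, confidence = {bayesian_confidence(x_resp):.3f}")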

    KA1-targeted regulatory domain mutations activate Chk1 in the absence of DNA damage

    The Chk1 protein kinase is activated in response to DNA damage through ATR-mediated phosphorylation at multiple serine-glutamine (SQ) residues within the C-terminal regulatory domain; however, the molecular mechanism is not understood. Modelling indicates a high probability that this region of Chk1 contains a kinase-associated 1 (KA1) domain, a small, compact protein fold found in multiple protein kinases including SOS2, AMPK and MARK3. We introduced mutations into Chk1 designed to disrupt specific structural elements of the predicted KA1 domain. Remarkably, six of seven Chk1 KA1 mutants exhibit constitutive biological activity (Chk1-CA) in the absence of DNA damage, profoundly arresting cells in the G2 phase of the cell cycle. Cell cycle arrest induced by selected Chk1-CA mutants depends on kinase catalytic activity, which is increased several-fold compared to wild-type; however, phosphorylation of the key ATR regulatory site serine 345 (S345) is not required. Thus, mutations targeting the putative Chk1 KA1 domain confer constitutive biological activity by circumventing the need for ATR-mediated positive regulatory phosphorylation.

    Understanding the Semantics of Ambiguous Tags in Folksonomies

    The use of tags to describe Web resources in a collaborative manner has experienced rising popularity among Web users in recent years. The product of such activity is given the name folksonomy, which can be considered a scheme for organizing information in the users' own way. In this paper, we present a possible way to analyze the tripartite graphs of folksonomies (graphs involving users, tags and resources) and discuss how these elements acquire their meanings through their associations with other elements, a process we call mutual contextualization. In particular, we demonstrate how different meanings of ambiguous tags can be discovered through such analysis of the tripartite graph by studying the tag "sf". We also discuss how the result can be used as a basis for better understanding the nature of folksonomies.
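
    To make the idea of mutual contextualization concrete, here is a toy sketch (invented data and a deliberately naive greedy clustering, not the paper's analysis): resources tagged with the ambiguous tag "sf" are grouped by the overlap of their other tags, and each group corresponds to one sense of the tag.

        from collections import defaultdict

        # Toy (user, tag, resource) triples; "sf" is ambiguous between
        # San Francisco and science fiction. Data are invented for illustration.
        triples = [
            ("u1", "sf", "r1"), ("u1", "travel", "r1"), ("u2", "bridge", "r1"),
            ("u4", "california", "r1"),
            ("u2", "sf", "r2"), ("u3", "california", "r2"),
            ("u3", "sf", "r3"), ("u4", "scifi", "r3"), ("u4", "books", "r3"),
            ("u1", "sf", "r4"), ("u2", "novel", "r4"), ("u3", "scifi", "r4"),
        ]

        # A resource's context is the set of tags assigned to it.
        context = defaultdict(set)
        for _user, tag, resource in triples:
            context[resource].add(tag)

        def jaccard(a, b):
            return len(a & b) / len(a | b)

        # Greedy clustering: resources whose contexts overlap share a sense of "sf".
        clusters = []
        for r in (r for r, tags in context.items() if "sf" in tags):
            ctx = context[r] - {"sf"}
            for cluster in clusters:
                if any(jaccard(ctx, context[o] - {"sf"}) > 0 for o in cluster):
                    cluster.append(r)
                    break
            else:
                clusters.append([r])

        print(clusters)  # [['r1', 'r2'], ['r3', 'r4']]: two senses of "sf"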

    Tag Meaning Disambiguation through Analysis of Tripartite Structure of Folksonomies

    Collaborative tagging systems have become very popular in recent years. Web users use freely chosen tags to describe shared resources, resulting in a folksonomy. One problem of folksonomies is that tags of the same form may carry multiple meanings and represent different concepts. Because such tags are ambiguous, the precision of both the description and the retrieval of shared resources is reduced. We attempt to develop effective methods for disambiguating tags by studying the tripartite structure of folksonomies. This paper describes the network analysis techniques we employ to discover clusters of nodes in networks, and the algorithm for tag disambiguation. Experiments show that the method is very effective at this task.
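
    The abstract does not name the specific clustering technique, so the sketch below uses modularity-based community detection (via networkx) as a stand-in: build a co-occurrence network over the tags that appear alongside the ambiguous tag, and read each community as one sense. Data are invented.

        import networkx as nx
        from networkx.algorithms.community import greedy_modularity_communities

        # Weighted co-occurrence network among tags that appear alongside the
        # ambiguous tag "sf"; weights count shared resources. Toy data.
        G = nx.Graph()
        for a, b, w in [
            ("travel", "bridge", 4), ("travel", "california", 6),
            ("bridge", "california", 3),
            ("scifi", "books", 7), ("scifi", "novel", 5), ("books", "novel", 4),
        ]:
            G.add_edge(a, b, weight=w)

        # Each community of context tags is interpreted as one sense of "sf".
        for sense in greedy_modularity_communities(G, weight="weight"):
            print(sorted(sense))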

    Bayesian confidence in optimal decisions

    The optimal way to make decisions in many circumstances is to track the difference in evidence collected in favour of the options. The drift diffusion model (DDM) implements this approach, and provides an excellent account of decisions and response times. However, existing DDM-based models of confidence exhibit certain deficits, and many theories of confidence have used alternative, non-optimal models of decisions. Motivated by the historical success of the DDM, we ask whether simple extensions to this framework might allow it to better account for confidence. Motivated by the idea that the brain will not duplicate representations of evidence, in all model variants decisions and confidence are based on the same evidence accumulation process. We compare the models to benchmark results, and successfully apply four qualitative tests concerning the relationships between confidence, evidence, and time in a new preregistered study. Using computationally cheap expressions to model confidence on a trial-by-trial basis, we find that a subset of model variants also provides a very good to excellent account of precise quantitative effects observed in confidence data. Specifically, our results favour the hypothesis that confidence reflects the strength of accumulated evidence penalised by the time taken to reach the decision (a Bayesian readout), with a penalty that is not perfectly calibrated to the specific task context. These results suggest there is no need to abandon the DDM or single-accumulator models to successfully account for confidence reports.
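
    A compact way to write the favoured readout (an illustrative parameterization, not the paper's fitted model) is a logistic function of accumulated evidence penalized by elapsed time:

        conf(x, t) = 1 / (1 + exp{-β (|x| - γ t)}),

    where |x| is the strength of accumulated evidence at the moment of decision, γ is the time penalty, and miscalibration corresponds to γ (or β) departing from the Bayes-optimal values for the task at hand.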

    Spread of anti-malarial drug resistance: Mathematical model with implications for ACT drug policies

    BACKGROUND: Most malaria-endemic countries are implementing a change in anti-malarial drug policy to artemisinin-based combination therapy (ACT), but the impact of different drug choices and implementation strategies is uncertain. Data from epidemiological studies across levels of malaria endemicity, and from areas with the highest prevalence of drug resistance such as the borders of Thailand, are certainly valuable, and an appropriate dynamic, data-driven model is a powerful predictive tool for exploring the impact of these strategies quantitatively.
    METHODS: A comprehensive model was constructed incorporating important epidemiological and biological factors of the human, mosquito, parasite and treatment. Model development, identification of the data needed, and parameterization were carried out iteratively to link the model strongly to the empirical evidence. The model provides quantitative measures of outcomes, such as malaria prevalence/incidence and treatment failure, and illustrates the spread of resistance in low and high transmission settings. It was used to evaluate different anti-malarial policy options focusing on ACT deployment.
    RESULTS: The model predicts robustly that drug resistance spreads faster in low transmission settings than in high transmission settings, and that treatment failure is the main force driving the spread of drug resistance. In low transmission settings, ACT slows the spread of resistance to the partner drug, especially at high coverage rates; this effect decreases exponentially with increasing delay in deploying the ACT and with decreasing coverage rates. In high transmission settings, by contrast, drug resistance is driven by the proportion of the human population carrying a residual drug level, which gives resistant parasites a survival advantage. The spread of drug resistance could be slowed by controlling presumptive drug use, avoiding combination therapies containing drugs with mismatched half-lives, and reducing malaria transmission through vector control measures.
    CONCLUSION: This paper demonstrates the use of a comprehensive mathematical model, strongly linked to empirical evidence from extensive data available from various sources, to describe malaria transmission and the spread of drug resistance. The model can be a useful tool to inform the design of treatment policies, particularly at a time when ACT has been endorsed by WHO as first-line treatment for falciparum malaria worldwide.
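
    As a much-reduced illustration of one mechanism behind the RESULTS (a toy two-strain SIS-style sketch with invented parameters, not the paper's comprehensive model), one can let treatment clear sensitive infections faster than resistant ones, and assume, purely for illustration, that a larger fraction of infections is treated in low transmission settings, where more infections are symptomatic:

        import numpy as np
        from scipy.integrate import odeint

        # Is, Ir: fractions infected with drug-sensitive / drug-resistant parasites.
        # Treatment (fraction treated c, extra clearance rate tau) shortens
        # sensitive infections only, giving resistant parasites their advantage.
        def deriv(y, t, beta, r, c, tau):
            Is, Ir = y
            S = 1.0 - Is - Ir
            return [beta * S * Is - (r + c * tau) * Is,
                    beta * S * Ir - r * Ir]

        t = np.linspace(0.0, 3650.0, 5000)  # ten years, in days
        # Assumption for illustration: in low transmission most infections are
        # symptomatic and treated (c = 0.8); in high transmission widespread
        # immunity leaves most infections untreated (c = 0.2).
        settings = {
            "low transmission":  dict(beta=0.03, r=1 / 200, c=0.8, tau=1 / 20),
            "high transmission": dict(beta=0.12, r=1 / 200, c=0.2, tau=1 / 20),
        }
        for name, p in settings.items():
            sol = odeint(deriv, [0.05, 0.001], t, args=tuple(p.values()))
            freq = sol[:, 1] / sol.sum(axis=1)  # resistant share of infections
            print(f"{name}: resistance passes 50% after ~{t[np.argmax(freq > 0.5)]:.0f} days")

    Even this caricature reproduces the headline prediction, with resistance overtaking far sooner where a larger share of infections is drug-exposed, although the paper's model grounds the effect in richer biology (immunity, residual drug levels and vector dynamics).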

    Distributed human computation framework for linked data co-reference resolution

    Distributed Human Computation (DHC) is a technique used to solve computational problems by incorporating the collaborative effort of a large number of humans. It is also a way to tackle AI-complete problems such as natural language processing. The Semantic Web, with its roots in AI, is envisioned as a decentralised world-wide information space for sharing machine-readable data with minimal integration costs. Many research problems in the Semantic Web are considered AI-complete. One example is co-reference resolution, which involves determining whether different URIs refer to the same entity; it is considered a significant hurdle to overcome in the realisation of large-scale Semantic Web applications. In this paper, we propose a framework for building a DHC system on top of the Linked Data Cloud to solve various computational problems. To demonstrate the concept, we focus on handling co-reference resolution in the Semantic Web when integrating distributed datasets. The traditional way to solve this problem is to design machine-learning algorithms, but these are often computationally expensive, error-prone and do not scale. We designed a DHC system named iamResearcher, which solves the author-identity co-reference problem for scientific publications when integrating distributed bibliographic datasets. In our system, we aggregated 6 million bibliographic records from various publication repositories. Users can sign up to the system to audit and align their own publications, thus solving the co-reference problem in a distributed manner. The aggregated results are published to the Linked Data Cloud.
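
    A minimal sketch of the aggregation step such a system needs (hypothetical function and URI names; the abstract does not describe iamResearcher's internals): users vote on whether two author URIs denote the same person, and majority-approved pairs are merged with a union-find structure so that identity is transitive.

        from collections import defaultdict

        votes = defaultdict(list)  # (uri_a, uri_b) -> list of True/False judgements

        def record_vote(uri_a, uri_b, same):
            votes[tuple(sorted((uri_a, uri_b)))].append(same)

        parent = {}

        def find(uri):
            # Union-find with path halving; each root is a canonical identity.
            parent.setdefault(uri, uri)
            while parent[uri] != uri:
                parent[uri] = parent[parent[uri]]
                uri = parent[uri]
            return uri

        def merge_approved(min_votes=3):
            # Merge pairs where a clear majority of auditors said "same entity".
            for (a, b), judgements in votes.items():
                if len(judgements) >= min_votes and sum(judgements) > len(judgements) / 2:
                    parent[find(a)] = find(b)

        # Toy usage: three of four auditors agree the two URIs co-refer.
        for v in (True, True, False, True):
            record_vote("http://dblp.example/j-smith", "http://arxiv.example/jane-smith", v)
        merge_approved()
        print(find("http://dblp.example/j-smith") ==
              find("http://arxiv.example/jane-smith"))  # True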

    NP-hard but no longer hard to solve? Using quantum computing to tackle optimization problems

    In the last decade, public and industrial research funding has moved quantum computing from the early promises of Shor's algorithm, through experiments, to the era of noisy intermediate-scale quantum (NISQ) devices for solving real-world problems. It is likely that quantum methods can efficiently solve certain (NP-)hard optimization problems where classical approaches fail. In this perspective, we examine the field of quantum optimization, in which optimization problems are solved using quantum computers. We demonstrate the approach through a representative use case and discuss the current quality of quantum computers, their solver capabilities, and the difficulties of benchmarking. Although we present a proof of concept rather than a full benchmark, we use the results to emphasize the importance of using appropriate metrics when comparing quantum and classical methods. We conclude with a discussion of recent breakthroughs in quantum optimization and of the field's current status and future directions.
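
    To make "quantum optimization" concrete (a standard textbook formulation, not the paper's use case): many NP-hard problems are first cast as a QUBO, minimizing x^T Q x over binary x, which quantum annealers and QAOA-style solvers accept natively. The tiny MaxCut instance below is solved by classical enumeration, since no quantum hardware is assumed here.

        import itertools
        import numpy as np

        # MaxCut on a 4-node graph written as a QUBO: minimize x^T Q x, x in {0,1}^n.
        # Cutting edge (i, j) contributes x_i + x_j - 2 x_i x_j to the cut size,
        # so minimizing -cut means Q[i,i] -= 1, Q[j,j] -= 1, Q[i,j] += 2 per edge.
        edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
        n = 4
        Q = np.zeros((n, n))
        for i, j in edges:
            Q[i, i] -= 1
            Q[j, j] -= 1
            Q[i, j] += 2

        best = min(itertools.product([0, 1], repeat=n),
                   key=lambda x: np.asarray(x) @ Q @ np.asarray(x))
        cut = sum(best[i] != best[j] for i, j in edges)
        print(f"partition {best} cuts {cut} of {len(edges)} edges")  # 4 of 5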