
    Text and spatial data mining

    Parcellation of the human brain by combining text mining and spatial data mining within a neuroinformatics database. Text mining: Analysis of scientific abstracts. Spatial data mining: Modeling of the distribution of Talairach coordinates. Seek communality between the text representation and the spatial representation by multivariate analysis
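
    A minimal sketch of the kind of multivariate coupling described above, assuming a documents-by-terms text matrix and a documents-by-spatial-bins matrix of Talairach coordinate densities; the matrix sizes, variable names, and the SVD-of-cross-covariance approach are illustrative assumptions, not taken from the paper:

        # Illustrative only: relate a text representation of studies (documents x terms)
        # to a spatial representation (documents x spatial bins of Talairach coordinates)
        # through an SVD of their cross-covariance, one simple multivariate way to expose
        # components shared by the two views.
        import numpy as np

        rng = np.random.default_rng(0)
        n_docs, n_terms, n_bins = 50, 120, 64                         # hypothetical sizes
        X_text = rng.poisson(0.3, (n_docs, n_terms)).astype(float)    # term counts per abstract
        X_space = rng.random((n_docs, n_bins))                        # coordinate densities per study

        # Centre each view over documents.
        X_text -= X_text.mean(axis=0)
        X_space -= X_space.mean(axis=0)

        # Paired singular vectors of the cross-covariance give coupled text/space components.
        U, s, Vt = np.linalg.svd(X_text.T @ X_space, full_matrices=False)
        text_loadings, space_loadings = U[:, :3], Vt[:3].T            # three shared components
        scores_text = X_text @ text_loadings
        scores_space = X_space @ space_loadings
        print(np.corrcoef(scores_text[:, 0], scores_space[:, 0])[0, 1])  # co-variation of the paired document scores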

    Semi-Supervised Kernel PCA

    We present three generalisations of Kernel Principal Components Analysis (KPCA) which incorporate knowledge of the class labels of a subset of the data points. The first, MV-KPCA, penalises within-class variances, similar to Fisher discriminant analysis. The second, LSKPCA, is a hybrid of least squares regression and kernel PCA. The final, LR-KPCA, is an iteratively reweighted version of the previous which achieves a sigmoid loss function on the labelled points. We provide a theoretical risk bound as well as illustrative experiments on real and toy data sets
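
    The abstract does not reproduce the three algorithms; as background for them, here is a minimal unsupervised kernel PCA sketch (the method all three variants generalise), assuming an RBF kernel and using only numpy:

        import numpy as np

        def rbf_kernel(X, gamma=0.5):
            # Pairwise squared distances, then the Gaussian (RBF) kernel matrix.
            sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * sq)

        def kernel_pca(X, n_components=2, gamma=0.5):
            K = rbf_kernel(X, gamma)
            n = K.shape[0]
            one = np.full((n, n), 1.0 / n)
            Kc = K - one @ K - K @ one + one @ K @ one          # double-centre the kernel
            vals, vecs = np.linalg.eigh(Kc)                     # eigenvalues in ascending order
            vals = vals[::-1][:n_components]
            vecs = vecs[:, ::-1][:, :n_components]
            alphas = vecs / np.sqrt(np.maximum(vals, 1e-12))    # normalise expansion coefficients
            return Kc @ alphas                                  # projections of the training points

        X = np.random.default_rng(1).normal(size=(100, 5))
        print(kernel_pca(X, n_components=2).shape)              # (100, 2)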

    Databasing Molecular Neuroimaging

    Most molecular imaging studies rely on analysis of values from brain regions and report descriptive statistics for these values. There are two significant difficulties when comparing molecular neuroimaging studies: 1. Regions differ between studies: e.g., some include values for “temporal cortex” while others do not. 2. Measured and reported values differ between studies and are not comparable: tracers and receptors; transport rates (e.g., K1); distribution volumes; binding potentials; different methods to compute the values
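
    One way to make such values comparable is to record the region, the tracer, and the reported measure explicitly rather than assuming a shared vocabulary across studies. The sketch below is a hypothetical schema, with table and column names invented for illustration, not taken from the paper:

        # Hypothetical schema (names invented for illustration): keep regional values
        # comparable by storing the region, the tracer, and the reported measure
        # (including its unit and computation method) explicitly.
        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
        CREATE TABLE study   (study_id   INTEGER PRIMARY KEY, citation TEXT);
        CREATE TABLE region  (region_id  INTEGER PRIMARY KEY, name TEXT);  -- e.g. 'temporal cortex'
        CREATE TABLE measure (measure_id INTEGER PRIMARY KEY,
                              quantity TEXT,   -- e.g. 'binding potential', 'K1', 'distribution volume'
                              unit TEXT,
                              method TEXT);    -- how the value was computed
        CREATE TABLE observation (
            study_id   INTEGER REFERENCES study(study_id),
            region_id  INTEGER REFERENCES region(region_id),
            measure_id INTEGER REFERENCES measure(measure_id),
            tracer     TEXT,
            value      REAL);
        """)
        con.execute("INSERT INTO region(name) VALUES ('temporal cortex')")
        print(con.execute("SELECT name FROM region").fetchall())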

    Conceptual modelling: Towards detecting modelling errors in engineering applications

    Rapid advancements of modern technologies put high demands on mathematical modelling of engineering systems. Typically, systems are no longer “simple” objects, but rather coupled systems involving multiphysics phenomena, the modelling of which involves coupling of models that describe different phenomena. After constructing a mathematical model, it is essential to analyse the correctness of the coupled models and to detect modelling errors compromising the final modelling result. Broadly, there are two classes of modelling errors: (a) errors related to abstract modelling, e.g., conceptual errors concerning the coherence of a model as a whole, and (b) errors related to concrete modelling or instance modelling, e.g., questions of approximation quality and implementation. Instance modelling errors, on the one hand, are relatively well understood. Abstract modelling errors, on the other, are not appropriately addressed by modern modelling methodologies. The aim of this paper is to initiate a discussion on abstract approaches and their usability for mathematical modelling of engineering systems, with the goal of making it possible to catch conceptual modelling errors early and automatically by computer-assisted tools. To that end, we argue that it is necessary to identify and employ suitable mathematical abstractions to capture an accurate conceptual description of the process of modelling engineering systems
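
    As a toy illustration of what catching conceptual modelling errors automatically could look like (not the paper's formalism), the sketch below attaches interface metadata, here the physical dimension of each coupling variable, to sub-models and rejects an incoherent coupling before the coupled model is ever solved:

        # Toy illustration (not the paper's formalism): attach interface metadata
        # (the physical dimension of each coupling variable) to every sub-model and
        # check the coherence of a proposed coupling before any simulation is run.
        from dataclasses import dataclass, field

        @dataclass
        class Port:
            name: str
            dimension: str          # e.g. 'K' (temperature), 'N' (force), 'W' (heat flow)

        @dataclass
        class Model:
            name: str
            outputs: dict = field(default_factory=dict)   # port name -> Port
            inputs: dict = field(default_factory=dict)

        def check_coupling(source, out_port, target, in_port):
            a, b = source.outputs[out_port], target.inputs[in_port]
            if a.dimension != b.dimension:
                raise TypeError(f"conceptual mismatch: {source.name}.{out_port} [{a.dimension}] "
                                f"-> {target.name}.{in_port} [{b.dimension}]")

        thermal = Model("thermal", outputs={"T": Port("T", "K")})
        struct = Model("structural", inputs={"load": Port("load", "N")})
        try:
            check_coupling(thermal, "T", struct, "load")  # flagged before any simulation runs
        except TypeError as err:
            print(err)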

    Mathematical modelling of the cardiovascular system

    In this paper we will address the problem of developing mathematical models for the numerical simulation of the human circulatory system. In particular, we will focus our attention on the problem of haemodynamics in large human arteries
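
    The paper develops its own models; purely as an illustration of lumped haemodynamic modelling of the large arteries, here is a two-element Windkessel sketch with assumed parameter values:

        # Illustration only (not the models developed in the paper): a two-element
        # Windkessel model, C dP/dt = Q_in(t) - P/R, a classic lumped description of
        # pressure in the large arteries.  Parameter values are assumptions.
        import numpy as np
        from scipy.integrate import solve_ivp

        R, C = 1.0, 1.5            # peripheral resistance [mmHg*s/mL], arterial compliance [mL/mmHg]
        T, T_sys = 0.8, 0.3        # cardiac period and systolic duration [s]

        def q_in(t):
            # Pulsatile inflow from the heart: half-sine during systole, zero in diastole.
            phase = t % T
            return 400.0 * np.sin(np.pi * phase / T_sys) if phase < T_sys else 0.0

        def dP_dt(t, P):
            return [(q_in(t) - P[0] / R) / C]

        sol = solve_ivp(dP_dt, (0.0, 8.0), [80.0], max_step=1e-3)
        last_beat = sol.y[0][sol.t > 8.0 - T]
        print(f"pressure over the last beat: {last_beat.min():.0f}-{last_beat.max():.0f} mmHg")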

    Insights from quantitative and mathematical modelling on the proposed 2030 goal for gambiense human African trypanosomiasis (gHAT)

    Gambiense human African trypanosomiasis (gHAT) is a parasitic, vector-borne neglected tropical disease that has historically affected populations across West and Central Africa and can result in death if untreated. Following from the success of recent intervention programmes against gHAT, the World Health Organization (WHO) has defined a 2030 goal of global elimination of transmission (EOT). The key proposed indicator to measure achievement of the goal is to have zero reported cases. Results of previous mathematical modelling and quantitative analyses are brought together to explore both the implications of the proposed indicator and the feasibility of achieving the WHO goal. Whilst the indicator of zero case reporting is clear and measurable, it is an imperfect proxy for EOT and could arise either before or after EOT is achieved. Lags in reporting of infection and imperfect diagnostic specificity could result in case reporting after EOT, whereas the converse could be true due to underreporting, lack of coverage, and cryptic human and animal reservoirs. At the village scale, the WHO recommendation of continuing active screening until there are three years of zero cases yields a high probability of local EOT, but extrapolating this result to larger spatial scales is complex. Predictive modelling of gHAT has consistently found that EOT by 2030 is unlikely across key endemic regions if current medical-only strategies are not bolstered by improved coverage, reduced time to detection and/or complementary vector control. Unfortunately, projected costs for strategies expected to meet EOT are high in the short term, and strategies that are cost-effective in reducing burden are unlikely to result in EOT by 2030. Future modelling work should aim to provide predictions while taking into account uncertainties in stochastic dynamics and infection reservoirs, as well as assessment of multiple spatial scales, reactive strategies, and measurable proxies of EOT
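
    A toy simulation of one point made above, that zero reported cases is an imperfect proxy for EOT: with under-reporting, several consecutive years of zero reported cases can occur while infection persists. All rates below are assumptions chosen for illustration, not estimates from the reviewed models:

        # Toy calculation (not the gHAT models reviewed above): with under-reporting, a
        # run of years with zero *reported* cases can occur while infection persists,
        # which is why zero case reporting is an imperfect proxy for EOT.
        import numpy as np

        rng = np.random.default_rng(42)
        true_incidence = 2.0        # assumed mean true infections per focus per year
        reporting_prob = 0.3        # assumed probability an infection is detected and reported
        n_sim, n_years = 10_000, 3  # many simulated foci, the three-year zero-case criterion

        true_cases = rng.poisson(true_incidence, size=(n_sim, n_years))
        reported_cases = rng.binomial(true_cases, reporting_prob)

        zero_reported = reported_cases.sum(axis=1) == 0
        still_infected = true_cases.sum(axis=1) > 0
        false_signal = (zero_reported & still_infected).mean()
        print(f"P(three years of zero reported cases despite ongoing infection) ~ {false_signal:.2f}")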

    Teaching mathematical modelling: a research based approach

    A collaborative, research-based laboratory experiment in mathematical modelling was included in a bioprocess engineering laboratory module, taught as part of an interdisciplinary program in biotechnology. The class was divided into six groups of three students and given the task of investigating a novel diafiltration process that is currently the focus of international research. Different aspects of the problem were assigned to each group, and inter-group communication via email was required to ensure that there was a coherent set of objectives for each group and for the class as a whole. The software package Berkeley Madonna was used for all calculations. As well as giving the students an introduction to mathematical modelling and computer programming, this approach helped to illustrate the importance of research in bioprocess engineering. In general, the experiment was well received by the students, and the fact that they were discovering new knowledge generated a degree of enthusiasm. However, many students were consumed by the technical demands of computer programming, especially the attention to detail required, and so did not think deeply about the physical aspects of the system they were modelling. In future years, therefore, consideration will be given to providing students with prior instruction in the use of the software
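
    The coursework used Berkeley Madonna; an equivalent constant-volume diafiltration mass balance is sketched below in Python, with assumed parameter values (rejection coefficients, flow rate, and volume are illustrative, not taken from the module):

        # Illustrative sketch with assumed parameters (not the coursework model): for a
        # solute with rejection coefficient sigma, constant-volume diafiltration obeys
        # V dC/dt = -(1 - sigma) * Q * C, so concentration decays with the number of
        # diavolumes D = Q * t / V.
        from scipy.integrate import solve_ivp

        V, Q = 1.0, 0.2                          # retentate volume [L], buffer flow [L/h]
        sigma_product, sigma_salt = 0.99, 0.0    # retained product vs freely passing salt

        def washout(sigma, t_end=25.0):
            rhs = lambda t, C: [-(1.0 - sigma) * (Q / V) * C[0]]
            return solve_ivp(rhs, (0.0, t_end), [1.0])

        salt, product = washout(sigma_salt), washout(sigma_product)
        diavolumes = Q * 25.0 / V
        print(f"after {diavolumes:.0f} diavolumes: salt remaining {salt.y[0, -1]:.3f}, "
              f"product remaining {product.y[0, -1]:.3f} (fraction of initial)")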