
    Conceptual framework of a novel hybrid methodology between computational fluid dynamics and data mining techniques for medical dataset application

    This thesis proposes a novel hybrid methodology coupling computational fluid dynamics (CFD) with data mining (DM) techniques, applied to a multi-dimensional medical dataset in order to study potential disease development statistically. This approach offers an alternative to the tedious and rigorous CFD methodology currently adopted to study the influence of geometric parameters on hemodynamics in the human abdominal aortic aneurysm, and can be seen as a “marriage” between the medical and computing domains.

    An Artificial Neural Network Framework to Predict Patients with High Likelihood of Chronic Kidney Disease

    Chronic kidney disease (CKD) is an important problem for health and healthcare systems. Predicting which patients will develop CKD is difficult because of the complex nonlinear relationships among the related factors. Using artificial neural networks (ANN) applied to a population aged 17 through 90 years, we achieved 97% classification accuracy based on standard laboratory tests and patient data. The technique was also helpful in determining which features of the data are most predictive: 75% of the features were sufficient to reach this high level of accuracy.
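    The kind of feed-forward classifier the abstract describes can be illustrated with a minimal sketch. This is not the authors' actual model: the data here are synthetic stand-ins for laboratory features, and the architecture (one hidden layer of 8 sigmoid units, plain gradient descent on cross-entropy) is an assumption chosen for brevity.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic "lab test" data: 200 patients, 4 features; the label
    # depends only on the first two features (a stand-in for the real data).
    X = rng.normal(size=(200, 4))
    y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # One hidden layer with 8 units, sigmoid activations throughout.
    W1 = rng.normal(scale=0.5, size=(4, 8)); b1 = np.zeros(8)
    W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

    lr = 0.5
    for _ in range(2000):
        h = sigmoid(X @ W1 + b1)            # hidden activations
        p = sigmoid(h @ W2 + b2)            # predicted probability of CKD
        grad_out = (p - y) / len(X)         # cross-entropy gradient at the output
        grad_h = (grad_out @ W2.T) * h * (1 - h)  # backpropagated to hidden layer
        W2 -= lr * (h.T @ grad_out)
        b2 -= lr * grad_out.sum(axis=0)
        W1 -= lr * (X.T @ grad_h)
        b1 -= lr * grad_h.sum(axis=0)

    accuracy = ((p > 0.5) == y).mean()
    print(f"training accuracy: {accuracy:.2f}")
    ```

    The feature-selection result reported above (75% of features sufficing) could be probed in the same setting by retraining on feature subsets and comparing accuracies.
    
    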

    Disentangling causal webs in the brain using functional Magnetic Resonance Imaging: A review of current approaches

    In the past two decades, functional Magnetic Resonance Imaging has been used to relate neuronal network activity to cognitive processing and behaviour. Recently, this approach has been augmented by algorithms that allow us to infer causal links between component populations of neuronal networks. Multiple inference procedures have been proposed to approach this research question, but so far each method has limitations when it comes to establishing whole-brain connectivity patterns. In this work, we discuss eight ways to infer causality in fMRI research: Bayesian Nets, Dynamical Causal Modelling, Granger Causality, Likelihood Ratios, LiNGAM, Patel's Tau, Structural Equation Modelling, and Transfer Entropy. We finish by formulating some recommendations for future directions in this area.
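    Of the eight methods listed, Granger causality is the most compact to demonstrate: a signal x "Granger-causes" y if x's past improves prediction of y beyond y's own past. The sketch below uses simulated signals and a single lag, far simpler than any fMRI analysis, with the log variance ratio as an illustrative statistic.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Simulate two signals where x drives y with a one-step lag.
    n = 500
    x = rng.normal(size=n)
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = 0.8 * x[t - 1] + 0.2 * rng.normal()

    def residual_variance(target, predictors):
        """Least-squares fit of target on predictors; return residual variance."""
        beta, *_ = np.linalg.lstsq(predictors, target, rcond=None)
        return (target - predictors @ beta).var()

    # Restricted model: y[t] from its own past; full model adds x's past.
    Y = y[1:]
    own = np.column_stack([np.ones(n - 1), y[:-1]])
    full = np.column_stack([own, x[:-1]])

    var_restricted = residual_variance(Y, own)
    var_full = residual_variance(Y, full)

    # Granger statistic: log variance ratio; > 0 means x helps predict y.
    gc = np.log(var_restricted / var_full)
    print(f"Granger statistic x->y: {gc:.2f}")
    ```

    In practice a formal F-test on the two residual sums of squares would be used, and, as the review notes, lag-based inference on slow haemodynamic signals has well-known limitations.
    
    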

    Comparison of existing aneurysm models and their path forward

    The two most important aneurysm types are cerebral aneurysms (CA) and abdominal aortic aneurysms (AAA), together accounting for over 80% of all fatal aneurysm incidences. To minimise aneurysm-related deaths, clinicians require tools that accurately estimate rupture risk. For both aneurysm types, we identify the current state-of-the-art tools for evaluating rupture risk and assess them in terms of clinical applicability. We perform a comprehensive literature review using the Web of Science database. The identified records (3,127) are clustered by modelling approach and aneurysm location in a meta-analysis to quantify scientific relevance and to extract modelling patterns, and are further assessed according to PRISMA guidelines (179 full-text screens). Besides general differences and similarities between CA and AAA, we identify and systematically evaluate four major modelling approaches to aneurysm rupture risk: finite element analysis and computational fluid dynamics as deterministic approaches, and machine learning and assessment tools with dimensionless parameters as stochastic approaches. The latter score highest in the evaluation of their potential as clinical applications for rupture prediction, owing to their readiness level and user friendliness. Deterministic approaches are less likely to be applied in a clinical environment because of their high model complexity; however, because they consider the underlying mechanisms of aneurysm rupture, they are better able to account for unusual patient-specific characteristics than stochastic approaches. We show that increased interdisciplinary exchange between specialists can improve comprehension of this disease and help design tools for a clinical environment. By combining deterministic and stochastic models, the advantages of both approaches can improve accessibility for clinicians and prediction quality for rupture risk.

    User-centered visual analysis using a hybrid reasoning architecture for intensive care units

    One problem with Intensive Care Unit information systems is that, in some cases, the display of data can become very dense. To preserve the overview and readability of the increasing volumes of data, special features are required (e.g., data prioritization, clustering, and selection mechanisms), together with analytical methods (e.g., temporal data abstraction, principal component analysis, and event detection). This paper addresses the problem of improving the integration of the visual and analytical methods applied to medical monitoring systems. We present a knowledge- and machine learning-based approach to support the knowledge discovery process with appropriate analytical and visual methods; it can benefit the development of user interfaces for intelligent monitors that assist with the detection and explanation of new, potentially threatening medical events. The proposed hybrid reasoning architecture provides an interactive graphical user interface for adjusting the parameters of the analytical methods based on the user's task at hand. The action sequences performed by the user on the graphical user interface are consolidated in a dynamic knowledge base with specific hybrid reasoning that integrates symbolic and connectionist approaches. These sequences of expert knowledge acquisition can make knowledge emergence easier during similar experiences and positively impact the monitoring of critical situations. The graphical user interface, incorporating user-centered visual analysis, is exploited to facilitate the natural and effective representation of clinical information for patient care.

    Connectivity models in the neural face perception domain – interfaces to understand the human brain in health and disease?

    The recognition and processing of faces is a core competence of the human brain, in which many neuronal areas are involved. Faces are not only a means to recognize and distinguish between individuals, but also a means to convey the emotions, intentions, or trustworthiness of our counterpart. The processing of faces is an orchestrated interaction of a multitude of neuronal regions. This interplay can be quantified at the neuronal level using so-called effective connectivity analysis. The most common effective connectivity analysis, which is also used in the present work, is Dynamic Causal Modeling. With its help, interregional interactions are modelled at the neuronal level, and at the measurable level (such as with functional magnetic resonance imaging) evidence is found for the probability of the presence of neuronal connections and also their quantitative expression. Effective connectivity analyses can thus reveal the couplings between brain areas during specific cognitive processes, such as face perception. The way we process faces also changes when, for example, mental illness is present: negative emotions such as fear may be perceived disproportionately more intensely, and positive emotions such as joy less intensely. The evaluation of neuronal parameters in face processing could be used in clinical practice, e.g. for the early detection of mental illness or the quantification of therapeutic success. A prerequisite for clinical application is the reliability of the modeling method: results of models should be generalizable and not depend on particular nuances of the modeling. Furthermore, the interpretability of many model parameters turns out to be difficult, yet it is necessary in order to describe causal relationships. In the present dissertation, so-called Dynamic Causal Models are applied in the field of neural face processing. In a first study, a clinical context is used: neural models of emotion regulation in face processing were used to identify potential consequences of risk factors for the development of mental illness on brain connectivity. In another study, the generalizability of neural network models was tested in a healthy population, revealing many limitations of the method as a whole. In a final study, both observed and simulated data were used to uncover further limitations in the interpretation of model parameters.

    Data analysis and machine learning approaches for time series pre- and post- processing pipelines

    In the industrial domain, time series are usually generated continuously by sensors that constantly capture and monitor the operation of machines in real time. It is therefore important that cleaning algorithms support near-real-time operation. Moreover, as the data evolve, the cleaning strategy must change adaptively and incrementally, to avoid having to restart the cleaning process from scratch each time. The aim of this thesis is to test the feasibility of applying machine learning workflows to the data pre-processing stages. To this end, this work proposes methods capable of selecting optimal pre-processing strategies, trained on the available historical data by minimizing empirical loss functions. Specifically, this thesis studies the processes of time-series compression, variable joining, observation imputation, and surrogate model generation, pursuing in each case the optimal selection and combination of multiple strategies. This approach is defined in terms of the characteristics of the data and of the system properties and constraints specified by the user.
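    The core idea of selecting a pre-processing strategy by minimizing empirical loss on historical data can be sketched for the imputation case. This is an illustration, not the thesis's method: the signal is a synthetic noisy sine wave, the candidate strategies (forward fill vs. linear interpolation) and the masking scheme are assumptions chosen to keep the example short.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Historical sensor signal: smooth trend with mild noise.
    t = np.linspace(0, 6 * np.pi, 400)
    signal = np.sin(t) + 0.02 * rng.normal(size=t.size)

    def forward_fill(values, mask):
        # Replace each missing point with the last observed value.
        out = values.copy()
        for i in range(1, out.size):
            if mask[i]:
                out[i] = out[i - 1]
        return out

    def linear_interp(values, mask):
        # Linearly interpolate missing points from observed neighbours.
        out = values.copy()
        idx = np.arange(values.size)
        out[mask] = np.interp(idx[mask], idx[~mask], values[~mask])
        return out

    strategies = {"forward_fill": forward_fill, "linear_interp": linear_interp}

    # Artificially mask 20% of the historical data, then score each strategy
    # by empirical mean-squared error against the known true values.
    mask = rng.random(signal.size) < 0.2
    mask[0] = False                      # keep the first point observed
    corrupted = signal.copy()
    corrupted[mask] = np.nan             # simulate missing observations

    losses = {name: np.mean((fn(corrupted, mask)[mask] - signal[mask]) ** 2)
              for name, fn in strategies.items()}

    best = min(losses, key=losses.get)
    print(f"selected strategy: {best}")
    ```

    On a smooth signal like this one, interpolation should win; on a piecewise-constant signal, forward fill would. That data dependence is exactly why the selection is learned from historical data rather than fixed in advance.
    
    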