586 research outputs found

    Interaction of elastomechanics and fluid dynamics in the human heart: Opportunities and challenges of light coupling strategies

    The human heart is the highly complex centerpiece of the cardiovascular system, permanently, reliably, and autonomously maintaining the blood flow in the body. Computer models reproduce the functionality of the heart in order to conduct simulation studies that provide deeper insights into the underlying phenomena or offer the possibility of varying relevant parameters under fully controlled conditions. Given that cardiovascular diseases are the most frequent cause of death in the countries of the Western hemisphere, contributing to their early diagnosis is of great clinical importance. In this context, computational flow simulations can provide valuable insights into blood flow dynamics and thus offer the opportunity to investigate a central part of the physics of this multiphysics organ. Since the deformation of the endocardial surface drives the blood flow, the effects of elastomechanics must be taken into account as boundary conditions for such flow simulations. To be relevant in a clinical context, however, a middle ground between computational cost and the required accuracy must be found, and the models must be both robust and reliable. This work therefore assesses the opportunities and challenges of light, and hence less complex, coupling strategies, focusing on three key aspects: First, a fluid dynamics solver based on the immersed boundary approach is implemented, since this method stands out for its very robust handling of moving meshes. Its basic functionality was verified for several simplified geometries and showed close agreement with the respective analytical solutions. Comparing the 3D simulation of a realistic geometry of the left heart against a body-fitted mesh description, fundamental global quantities were reproduced correctly. However, variations of the boundary conditions had a large influence on the simulation results. Applying the solver to simulate the influence of pathologies on blood flow patterns yielded results in good agreement with values from the literature. In simulations of mitral valve insufficiency, the regurgitant fraction was visualized using a particle tracking method. For hypertrophic cardiomyopathy, the flow patterns in the left ventricle were assessed using a passive scalar transport to visualize the local concentration of the original blood volume. Since the aforementioned studies considered only a unidirectional flow of information from the elastomechanical model to the flow solver, the feedback of the spatially resolved pressure field from the flow simulations onto the elastomechanics is quantified. A sequential coupling approach is introduced to account for fluid dynamic influences in a beat-to-beat coupling structure. The small deviations of 2 mm in the mechanical solver vanished after just one iteration, suggesting that the feedback of the fluid dynamics is limited in the healthy heart. In summary, boundary conditions must be chosen with care, particularly in fluid dynamics simulations, since their large influence increases the vulnerability of the models. Nevertheless, simplified coupling strategies showed promising results in reproducing global fluid dynamic quantities while reducing the interdependence of the solvers and saving computational effort.
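The passive scalar transport mentioned above can be illustrated in miniature: a concentration field marking the original blood volume is advected by a prescribed velocity and diffuses weakly. The following sketch is ours, not the thesis code; domain, velocity, and diffusivity are illustrative assumptions, and a periodic 1D upwind scheme stands in for the 3D immersed boundary solver.

```python
import numpy as np

# Minimal 1D advection-diffusion sketch of a passive scalar transport,
# as used to visualize the concentration of the "original" blood volume.
# All parameters are illustrative, not taken from the thesis.

nx, L = 200, 1.0          # grid points, domain length [m]
dx = L / nx
u = 0.1                   # prescribed advection velocity [m/s]
D = 1e-4                  # diffusivity [m^2/s]
dt = 0.4 * min(dx / abs(u), dx**2 / (2 * D))  # CFL-limited time step

x = np.linspace(0.0, L, nx)
c = np.where(x < 0.5, 1.0, 0.0)   # scalar = 1 marks the initial blood volume

def step(c):
    # First-order upwind for advection (u > 0), central for diffusion;
    # np.roll makes the sketch periodic and keeps it short.
    adv = -u * (c - np.roll(c, 1)) / dx
    dif = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
    return c + dt * (adv + dif)

for _ in range(500):
    c = step(c)

print(f"scalar bounds after transport: {c.min():.3f} .. {c.max():.3f}")
```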

    Beam scanning by liquid-crystal biasing in a modified SIW structure

    A fixed-frequency beam-scanning 1D antenna based on Liquid Crystals (LCs) is designed for application in 2D scanning with lateral alignment. The 2D array environment imposes full decoupling of adjacent 1D antennas, which often conflicts with the LC requirement of DC biasing: the proposed design accommodates both. The LC medium is placed inside a Substrate Integrated Waveguide (SIW) modified to work as a Groove Gap Waveguide, with radiating slots etched on the upper broad wall so that the structure radiates as a Leaky-Wave Antenna (LWA). This allows effective application of the DC bias voltage needed for tuning the LCs. At the same time, the RF field remains laterally confined, making it possible to lay several antennas in parallel and achieve 2D beam scanning. The design is validated by simulation employing the actual properties of a commercial LC medium.
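To see why tuning the LC permittivity steers the beam at fixed frequency, consider the textbook relation for a uniform leaky-wave antenna: the main beam points at theta ~ arcsin(beta/k0), and for the fundamental mode of a dielectric-filled guide of width a the phase constant is beta = sqrt(eps_r*k0^2 - (pi/a)^2). The sketch below is ours, not from the paper; frequency, guide width, and the LC tuning range are assumed values chosen so the mode stays fast (beta < k0).

```python
import numpy as np

# Illustrative fixed-frequency beam steering of a uniform leaky-wave
# antenna whose phase constant is tuned via the LC permittivity.
# Frequency, guide width, and eps_r range are assumptions, not the
# paper's design values.

c0 = 299_792_458.0                  # speed of light [m/s]
f = 28e9                            # assumed operating frequency [Hz]
a = 3.5e-3                          # assumed guide width [m]
k0 = 2 * np.pi * f / c0

for eps_r in (2.5, 2.7, 2.9, 3.1):  # assumed LC tuning range
    beta = np.sqrt(eps_r * k0**2 - (np.pi / a) ** 2)   # TE10 phase constant
    theta = np.degrees(np.arcsin(min(beta / k0, 1.0))) # beam angle off broadside
    print(f"eps_r = {eps_r:.1f} -> beam angle ~ {theta:5.1f} deg")
```

With these assumed numbers the beam sweeps from roughly 24 to 61 degrees as eps_r varies, which is the mechanism the abstract exploits.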

    Artificial Intelligence for Cognitive Health Assessment: State-of-the-Art, Open Challenges and Future Directions

    The subjectivity and inaccuracy of in-clinic Cognitive Health Assessments (CHA) have led many researchers to explore ways to automate the process to make it more objective and to facilitate the needs of the healthcare industry. Artificial Intelligence (AI) and machine learning (ML) have emerged as the most promising approaches to automate the CHA process. In this paper, we explore the background of CHA and delve into the extensive research recently undertaken in this domain to provide a comprehensive survey of the state-of-the-art. In particular, a careful selection of significant works published in the literature is reviewed to elaborate a range of enabling technologies and AI/ML techniques used for CHA, including conventional supervised and unsupervised machine learning, deep learning, reinforcement learning, natural language processing, and image processing techniques. Furthermore, we provide an overview of various means of data acquisition and the benchmark datasets. Finally, we discuss open issues and challenges in using AI and ML for CHA along with some possible solutions. In summary, this paper presents CHA tools, lists various data acquisition methods for CHA, reviews technological advancements, describes the use of AI for CHA, and discusses open issues and challenges in the CHA domain. We hope this first-of-its-kind survey paper will significantly contribute to identifying research gaps in the complex and rapidly evolving interdisciplinary mental health field.

    2023 GREAT Day Program

    SUNY Geneseo’s Seventeenth Annual GREAT Day. Geneseo Recognizing Excellence, Achievement & Talent Day is a college-wide symposium celebrating the creative and scholarly endeavors of our students. http://www.geneseo.edu/great_day

    Computational modelling and optimal control of interacting particle systems: connecting dynamic density functional theory and PDE-constrained optimization

    Processes that can be described by systems of interacting particles are ubiquitous in nature, society, and industry, ranging from animal flocking, the spread of diseases, and formation of opinions to nano-filtration, brewing, and printing. In real-world applications it is often relevant to not only model a process of interest, but to also optimize it in order to achieve a desired outcome with minimal resources, such as time, money, or energy. Mathematically, the dynamics of interacting particle systems can be described using Dynamic Density Functional Theory (DDFT). The resulting models are nonlinear, nonlocal partial differential equations (PDEs) that include convolution integral terms. Such terms also enter the naturally arising no-flux boundary conditions. Due to the nonlocal, nonlinear nature of such problems they are challenging both to analyse and solve numerically. In order to optimize processes that are modelled by PDEs, one can apply tools from PDE-constrained optimization. The aim here is to drive a quantity of interest towards a target state by varying a control variable. This is constrained by a PDE describing the process of interest, in which the control enters as a model parameter. Such problems can be tackled by deriving and solving the (first-order) optimality system, which couples the PDE model with a second PDE and an algebraic equation. Solving such a system numerically is challenging, since large matrices arise in its discretization, for which efficient solution strategies have to be found. Most work in PDE-constrained optimization addresses problems in which the control is applied linearly, and which are constrained by local, often linear PDEs, since introducing nonlinearity significantly increases the complexity in both the analysis and numerical solution of the optimization problem. However, in order to optimize real-world processes described by nonlinear, nonlocal DDFT models, one has to develop an optimal control framework for such models. The aim is to drive the particles to some desired distribution by applying control either linearly, through a particle source, or bilinearly, through an advective field. The optimization process is constrained by the DDFT model that describes how the particles move under the influence of advection, diffusion, external forces, and particle–particle interactions. In order to tackle this, the (first-order) optimality system is derived, which, since it involves nonlinear (integro-)PDEs that are coupled nonlocally in space and time, is significantly harder than in the standard case. Novel numerical methods are developed, effectively combining pseudospectral methods and iterative solvers, to efficiently and accurately solve such a system. As a next step this framework is extended so that it can capture and optimize industrially relevant processes, such as brewing and nano-filtration. In order to do so, extensions to both the DDFT model and the numerical method are made. Firstly, since industrial processes often involve tubes, funnels, channels, or tanks of various shapes, the PDE model itself, as well as the optimization problem, need to be solved on complicated domains. This is achieved by developing a novel spectral element approach that is compatible with both the PDE solver and the optimal control framework. Secondly, many industrial processes, such as nano-filtration, involve more than one type of particle. Therefore, the DDFT model is extended to describe multiple particle species. Finally, depending on the application of interest, additional physical effects need to be included in the model. In this thesis, to model sedimentation processes in brewing, the model is modified to capture volume exclusion effects.
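The structure of such an optimality system can be conveyed with a far simpler stand-in. The sketch below is our illustration, not the thesis solver: a local linear diffusion equation replaces the nonlocal DDFT model, and finite differences replace the pseudospectral method. It steers the state toward a target profile through a linearly applied source control, computing the objective's gradient from the adjoint equation.

```python
import numpy as np

# Our illustration (not the thesis solver): optimal control of a 1D
# diffusion equation rho_t = D*rho_xx + w with a linearly applied source
# control w(x,t), minimizing
#   J = 0.5*||rho(.,T) - rho_hat||^2 + 0.5*alpha*||w||^2.
# The first-order optimality system couples the state PDE with an adjoint
# PDE (q_t = -D*q_xx, solved backward from q(T) = rho(T) - rho_hat) and
# the gradient condition dJ/dw = alpha*w + q, used here for plain descent.

nx, nt = 100, 400
L, T, D, alpha = 1.0, 0.5, 0.02, 1e-3
dx, dt = L / (nx - 1), T / nt
assert dt <= dx**2 / (2 * D), "explicit scheme would be unstable"

x = np.linspace(0.0, L, nx)
rho_hat = np.exp(-100.0 * (x - 0.5) ** 2)      # target density profile

def lap(v):                                     # Laplacian, Dirichlet BCs
    out = np.zeros_like(v)
    out[1:-1] = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / dx**2
    return out

def forward(w):                                 # state PDE, explicit Euler
    rho = np.zeros(nx)
    for n in range(nt):
        rho = rho + dt * (D * lap(rho) + w[n])
    return rho

def adjoint(rho_T):                             # adjoint PDE, backward in time
    q, qs = rho_T - rho_hat, np.empty((nt, nx))
    for n in reversed(range(nt)):
        qs[n] = q
        q = q + dt * D * lap(q)
    return qs

w = np.zeros((nt, nx))
for it in range(200):                           # gradient descent on w
    grad = alpha * w + adjoint(forward(w))
    w -= 1.0 * grad

misfit = np.sqrt(dx) * np.linalg.norm(forward(w) - rho_hat)
print(f"terminal misfit ||rho(T) - rho_hat||_L2 ~ {misfit:.3e}")
```

In the actual DDFT setting the state and adjoint equations additionally carry the nonlocal convolution terms, which is what makes efficient solvers the core contribution of the thesis.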

    Novel Mixture Allocation Models for Topic Learning

    Unsupervised learning has been an interesting area of research in recent years. Novel algorithms are being built on the basis of unsupervised learning methodologies to solve many real-world problems. Topic modelling is one such fascinating methodology that identifies patterns as topics within data. The introduction of Latent Dirichlet Allocation (LDA) has bolstered research on topic modelling approaches, with modifications specific to the application. However, the basic assumption in LDA of a Dirichlet prior for topic proportions might not be applicable in certain real-world scenarios. Hence, in this thesis we explore the use of the generalized Dirichlet (GD) and Beta-Liouville (BL) distributions as alternative priors for topic proportions. In addition, we assume a mixture of distributions over topic proportions, which provides a better fit to the data. In order to accommodate application of the resulting models to real-time streaming data, we also provide an online learning solution for the models. A supervised version of the learning framework is also provided and is shown to be advantageous when labelled data are available. The topics thus derived, however, may not always be accurate. In order to alleviate this problem, we integrate an interactive approach which uses inputs from the user to improve the quality of the identified topics. We have also adapted our models to interesting applications such as parallel topic extraction from multilingual texts and content-based recommendation systems, demonstrating the adaptability of our proposed models. In the case of multilingual topic extraction, we use global topic proportions sampled from a Dirichlet process (DP) to tackle the problem, and in the case of recommendation systems, we use the co-occurrences of words to our advantage. For inference, we use a variational approach, which simplifies the computation of the solutions. The applications with which we validated our models show their efficiency.
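A concrete way to see the generalized Dirichlet as a drop-in replacement for the Dirichlet prior is its stick-breaking construction from independent Beta draws, which the sketch below implements (our illustration; the hyperparameters are made up). Unlike the Dirichlet, each topic gets its own (a_k, b_k) pair, giving a richer covariance structure over topic proportions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_generalized_dirichlet(a, b, size=1):
    """Sample topic proportions from a generalized Dirichlet prior via its
    stick-breaking construction: nu_k ~ Beta(a_k, b_k) and
    theta_k = nu_k * prod_{j<k} (1 - nu_j); the last component takes the
    remaining mass. Contains the Dirichlet as a special case."""
    K = len(a) + 1                         # number of topics
    nu = rng.beta(a, b, size=(size, K - 1))
    stick = np.cumprod(1.0 - nu, axis=1)   # leftover mass after each break
    theta = np.empty((size, K))
    theta[:, 0] = nu[:, 0]
    theta[:, 1:-1] = nu[:, 1:] * stick[:, :-1]
    theta[:, -1] = stick[:, -1]
    return theta

# Illustrative hyperparameters (not from the thesis): a separate (a_k, b_k)
# pair per topic, which a standard Dirichlet cannot express.
a = np.array([2.0, 1.0, 0.5])
b = np.array([3.0, 1.0, 0.5])
theta = sample_generalized_dirichlet(a, b, size=5)
print(theta.round(3), theta.sum(axis=1))   # each row sums to 1
```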

    Mixture-Based Clustering and Hidden Markov Models for Energy Management and Human Activity Recognition: Novel Approaches and Explainable Applications

    In recent times, the rapid growth of data in various fields of life has created an immense need for powerful tools to extract useful information from data. This has motivated researchers to explore and devise new ideas and methods in the field of machine learning. Mixture models have gained substantial attention due to their ability to handle high-dimensional data efficiently and effectively. However, when adopting mixture models in such spaces, five crucial issues must be addressed: the selection of probability density functions, estimation of mixture parameters, automatic determination of the number of components, identification of features that best discriminate the different components, and incorporation of temporal information. The primary objective of this thesis is to propose a unified model that addresses these interrelated problems. Moreover, this thesis proposes a novel approach that incorporates explainability. This thesis presents innovative mixture-based modelling approaches tailored for diverse applications, such as household energy consumption characterization, energy demand management, fault detection and diagnosis, and human activity recognition. The primary contributions of this thesis encompass the following aspects: Initially, we propose an unsupervised feature selection approach embedded within a finite bounded asymmetric generalized Gaussian mixture model. This model is adept at handling synthetic and real-life smart meter data, utilizing three distinct feature extraction methods. By employing the expectation-maximization algorithm in conjunction with the minimum message length criterion, we are able to concurrently estimate the model parameters, perform model selection, and execute feature selection. This unified optimization process facilitates the identification of household electricity consumption profiles along with the optimal subset of attributes defining each profile. Furthermore, we investigate the impact of household characteristics on electricity usage patterns to pinpoint households that are ideal candidates for demand reduction initiatives. Subsequently, we introduce a semi-supervised learning approach for the mixture of mixtures of bounded asymmetric generalized Gaussian and uniform distributions. The integration of the uniform distribution within the inner mixture bolsters the model's resilience to outliers. In the unsupervised learning approach, the minimum message length criterion is utilized to ascertain the optimal number of mixture components. The proposed models are validated through a range of applications, including chiller fault detection and diagnosis, occupancy estimation, and energy consumption characterization. Additionally, we incorporate explainability into our models and establish a moderate trade-off between prediction accuracy and interpretability. Finally, we devise four novel models for human activity recognition (HAR): the bounded asymmetric generalized Gaussian mixture-based hidden Markov model with feature selection (BAGGM-FSHMM), the bounded asymmetric generalized Gaussian mixture-based hidden Markov model (BAGGM-HMM), the asymmetric generalized Gaussian mixture-based hidden Markov model with feature selection (AGGM-FSHMM), and the asymmetric generalized Gaussian mixture-based hidden Markov model (AGGM-HMM). 
We develop an innovative method for the simultaneous estimation of feature saliencies and model parameters in the BAGGM-FSHMM and AGGM-FSHMM, and integrate the bounded-support asymmetric generalized Gaussian distribution (BAGGD) and the asymmetric generalized Gaussian distribution (AGGD) into the BAGGM-HMM and AGGM-HMM, respectively. The aforementioned models are validated using video-based and sensor-based HAR applications, showcasing their superiority over several mixture-based hidden Markov models (HMMs) across various performance metrics. We demonstrate that incorporating feature selection and a bounded-support distribution independently into a HAR system each yields benefits; combining both concepts results in the most effective of the proposed models.
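The HMM backbone shared by the four models can be shown in a few lines. The sketch below is a generic log-space forward algorithm with plain Gaussian emissions standing in for the (bounded) asymmetric generalized Gaussian mixtures; all parameters are illustrative, not the thesis's.

```python
import numpy as np
from scipy.stats import norm
from scipy.special import logsumexp

# Minimal log-space forward algorithm for a 2-state HMM. Plain Gaussian
# emissions are a stand-in for the (bounded) asymmetric generalized
# Gaussian mixture emissions used in the thesis; all values illustrative.

pi = np.log(np.array([0.6, 0.4]))            # initial state probabilities
A = np.log(np.array([[0.9, 0.1],
                     [0.2, 0.8]]))           # transition matrix
means, stds = np.array([0.0, 3.0]), np.array([1.0, 1.0])

def log_likelihood(obs):
    # logB[t, k] = log p(obs_t | state k)
    logB = norm.logpdf(obs[:, None], means[None, :], stds[None, :])
    alpha = pi + logB[0]
    for t in range(1, len(obs)):
        # alpha_t[k] = logB_t[k] + logsumexp_j(alpha_{t-1}[j] + A[j, k])
        alpha = logB[t] + logsumexp(alpha[:, None] + A, axis=0)
    return logsumexp(alpha)                  # log p(obs_1..T)

obs = np.array([0.1, -0.3, 2.8, 3.2, 2.9])
print(f"log-likelihood: {log_likelihood(obs):.3f}")
```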

    OBOE: an Explainable Text Classification Framework

    Explainable Artificial Intelligence (XAI) has recently gained visibility as one of the main topics of Artificial Intelligence research due to, among other factors, the need to provide a meaningful justification of the reasons behind the decisions of black-box algorithms. Current approaches are based on model-agnostic or ad-hoc solutions and, although there are frameworks that define workflows to generate meaningful explanations, a text classification framework that provides such explanations while considering the different ingredients involved in the classification process (data, model, explanations, and users) is still missing. With the intention of covering this research gap, in this paper we present a text classification framework called OBOE (explanatiOns Based On concEpts), in which these ingredients play an active role in opening the black box. OBOE defines different components whose implementation can be customized, and thus explanations are adapted to specific contexts. We also provide a tailored implementation to show the customization capability of OBOE. Additionally, we performed (a) a validation of the implemented framework to evaluate its performance using different corpora and (b) a user-based evaluation of the explanations provided by OBOE. The latter evaluation shows that the explanations generated in natural language express the reason for the classification results in a way that is comprehensible to non-technical users.
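Although OBOE's actual interfaces are not reproduced here, the kind of customizable-component design the abstract describes can be sketched as follows (all names hypothetical, not OBOE's API): the classifier, the concept-based explainer, and a user profile each contribute to the final natural-language explanation.

```python
from dataclasses import dataclass
from typing import Protocol, Sequence

# Hypothetical sketch of a plug-in architecture in which data, model,
# explanations, and users all play an active role. Component names are
# invented for illustration; they are not OBOE's actual interfaces.

@dataclass
class Explanation:
    label: str
    concepts: Sequence[str]     # concepts supporting the decision
    text: str                   # natural-language rendering for the user

class Classifier(Protocol):
    def predict(self, document: str) -> str: ...

class Explainer(Protocol):
    def explain(self, document: str, label: str) -> Sequence[str]: ...

def explain_classification(doc: str, model: Classifier,
                           explainer: Explainer, expertise: str) -> Explanation:
    label = model.predict(doc)                 # black-box decision
    concepts = explainer.explain(doc, label)   # concepts behind it
    if expertise == "non-technical":           # adapt wording to the user
        text = f"Classified as '{label}' because it mentions: " + ", ".join(concepts)
    else:
        text = f"label={label}; supporting concepts={list(concepts)}"
    return Explanation(label, concepts, text)
```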

    Discriminative calibration: Check Bayesian computation from simulations and flexible classifier

    To check the accuracy of Bayesian computations, it is common to use rank-based simulation-based calibration (SBC). However, SBC has drawbacks: the test statistic is somewhat ad hoc, interactions are difficult to examine, multiple testing is a challenge, and the resulting p-value is not a divergence metric. We propose to replace the marginal rank test with a flexible classification approach that learns test statistics from data. This measure typically has higher statistical power than the SBC rank test and returns an interpretable divergence measure of miscalibration, computed from classification accuracy. The approach can be used with different data-generating processes to address likelihood-free inference as well as traditional inference methods like Markov chain Monte Carlo or variational inference. We illustrate an automated implementation using neural networks and statistically inspired features, and validate the method with numerical and real data experiments. (Published at Neural Information Processing Systems, NeurIPS 2023.)
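The core idea translates into very little code on a toy conjugate model. In the sketch below (our illustration; the feature choice and the miscalibrated alternative are ours, and a logistic regression stands in for the paper's neural classifier), prior draws paired with their simulated data are labeled 0 and posterior draws paired with the same data are labeled 1; a classifier that cannot beat chance indicates calibration, while excess accuracy exposes miscalibration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy conjugate model: theta ~ N(0,1), y|theta ~ N(theta,1), so the exact
# posterior is theta|y ~ N(y/2, 1/2). Class 0: (prior theta, y) pairs;
# class 1: (posterior theta, y) pairs. Near-chance test accuracy indicates
# calibration. All modeling choices here are illustrative.

rng = np.random.default_rng(1)
n = 20_000
theta = rng.normal(0.0, 1.0, n)
y = rng.normal(theta, 1.0)

exact = rng.normal(y / 2, np.sqrt(0.5))    # calibrated posterior draws
overdispersed = rng.normal(y / 2, 1.0)     # miscalibrated (wrong variance)

def check(theta_tilde):
    t = np.concatenate([theta, theta_tilde])
    yy = np.concatenate([y, y])
    r = t - yy / 2                          # posterior-residual feature
    X = np.column_stack([t, yy, r, r**2])   # simple hand-crafted features
    lab = np.repeat([0, 1], n)
    Xtr, Xte, ltr, lte = train_test_split(X, lab, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(Xtr, ltr)
    return clf.score(Xte, lte)              # held-out accuracy

print(f"accuracy, calibrated:    {check(exact):.3f}")         # ~0.5
print(f"accuracy, miscalibrated: {check(overdispersed):.3f}") # clearly > 0.5
```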