
    Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5

    This fifth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions of researchers working in different fields of application and in mathematics, and is available in open access. The contributions collected in this volume have either been published or presented in international conferences, seminars, workshops and journals after the dissemination of the fourth volume in 2015, or they are new. The contributions of each part of this volume are ordered chronologically. The first part of this book presents some theoretical advances on DSmT, dealing mainly with modified Proportional Conflict Redistribution (PCR) rules of combination with degree of intersection, coarsening techniques, interval calculus for PCR thanks to set inversion via interval analysis (SIVIA), rough set classifiers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of the (quasi-)vacuous belief assignment in the fusion of sources of evidence, with their Matlab codes.
Because more applications of DSmT have emerged in the years since the appearance of the fourth book on DSmT in 2015, the second part of this volume covers selected applications of DSmT, mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender systems, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision, human activity recognition, storm prediction, data association for autonomous vehicles, identification of maritime vessels, fusion of support vector machines (SVM), the Silx-Furtif RUST code library for information fusion including PCR rules, and networks for ship classification. Finally, the third part presents interesting contributions related to belief functions in general, published or presented over the years since 2015. These contributions are related to decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, generalization of Bayes' theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, negators of belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classification, and hybrid techniques mixing deep learning with belief functions as well
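
The PCR5 rule that recurs throughout the first two parts admits a compact sketch. Assuming two sources given as basic belief assignments (dicts mapping focal sets to masses over a toy two-element frame; the names and numbers below are illustrative, not taken from the book), PCR5 keeps the conjunctive mass of every non-empty intersection and sends each partial conflict back to the two focal elements that generated it, proportionally to their masses:

```python
from itertools import product

def pcr5(m1, m2):
    """Combine two basic belief assignments (dict: frozenset -> mass)
    with the PCR5 rule."""
    out = {}
    for (x, a), (y, b) in product(m1.items(), m2.items()):
        inter = x & y
        if inter:
            # conjunctive part: mass of the non-empty intersection
            out[inter] = out.get(inter, 0.0) + a * b
        elif a + b > 0:
            # partial conflict a*b goes back to x and y,
            # proportionally to the masses that caused it
            out[x] = out.get(x, 0.0) + a * a * b / (a + b)
            out[y] = out.get(y, 0.0) + a * b * b / (a + b)
    return out
```

Combining m1 = {A: 0.6, B: 0.4} with m2 = {A: 0.2, B: 0.8} this way yields m(A) ≈ 0.352 and m(B) ≈ 0.648, and the masses still sum to one: unlike Dempster's rule, no conflict is discarded through normalization.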

    Leveraging elasticity theory to calculate cell forces: From analytical insights to machine learning

    Living cells possess capabilities to detect and respond to mechanical features of their surroundings. In traction force microscopy, the traction of cells on an elastic substrate is made visible by observing substrate deformation as measured by the movement of embedded marker beads. By describing the substrate by means of elasticity theory, we can calculate the adhesive forces, improving our understanding of cellular function and behavior. In this dissertation, I combine analytical solutions with numerical methods and machine learning techniques to improve traction prediction in a range of experimental applications. I describe how to include the normal traction component in regularization-based Fourier approaches, which I apply to experimental data. I compare the dominant strategies for traction reconstruction, the direct method and inverse, regularization-based approaches, and find that the latter are more precise while the former is more resilient to noise. I find that a point-force-based reconstruction can be used to study the evolution of the force balance in response to microneedle pulling, showing a transition from a dipolar into a monopolar force arrangement. Finally, I show how a conditional invertible neural network not only reconstructs adhesive areas with better localization, but also reveals spatial correlations and variations in the reliability of traction reconstructions
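
The inverse, regularization-based strategy compared in this abstract can be sketched in its simplest, zeroth-order Tikhonov form. Here G is a hypothetical discretized Green's matrix mapping tractions to bead displacements and lam the regularization weight (both names are my own); the regularized Fourier approaches solve the same kind of damped least-squares problem per wave vector:

```python
import numpy as np

def tikhonov_traction(G, u, lam):
    """Zeroth-order Tikhonov estimate of tractions t from displacements u:
    minimize ||G t - u||^2 + lam * ||t||^2 via the normal equations."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ u)
```

Larger lam suppresses noise amplification at the cost of systematically shrinking the estimated tractions, which is the precision/robustness trade-off at the heart of the method comparison.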

    Interaction of elastomechanics and fluid dynamics in the human heart: Opportunities and challenges of light coupling strategies

    The human heart is the highly complex centerpiece of the cardiovascular system, permanently, reliably and autonomously maintaining blood flow through the body. Computer models reproduce the functionality of the heart in order to run simulation studies that provide deeper insight into the underlying phenomena or offer the possibility to vary relevant parameters under fully controlled conditions. Given that cardiovascular diseases are the leading cause of death in the countries of the Western hemisphere, contributing to their early diagnosis is of great clinical importance. In this context, computational flow simulations can provide valuable insight into blood flow dynamics and thus offer the opportunity to study a central area of the physics of this multi-physics organ. Since the deformation of the endocardial surface drives the blood flow, the effects of elastomechanics must be taken into account as boundary conditions for such flow simulations. To be relevant in a clinical context, however, a middle ground must be found between computational cost and the required accuracy, and the models must be both robust and reliable. This work therefore evaluates the opportunities and challenges of light, and hence less complex, coupling strategies with a focus on three key aspects: First, a fluid dynamics solver based on the immersed boundary approach is implemented, since this method stands out for its very robust handling of moving meshes. Its basic functionality was verified for several simplified geometries and showed close agreement with the respective analytical solutions.
Comparing the 3D simulation of a realistic geometry of the left heart against a body-fitted mesh description, basic global quantities were reproduced correctly. However, variations of the boundary conditions showed a large influence on the simulation results. Applying the solver to simulate the influence of pathologies on blood flow patterns yielded results in good agreement with values from the literature. In simulations of mitral valve regurgitation, the backflowing fraction was visualized using a particle tracking method. For hypertrophic cardiomyopathy, the flow patterns in the left ventricle were assessed using a passive scalar transport to visualize the local concentration of the initial blood volume. Since the aforementioned studies considered only a unidirectional flow of information from the elastomechanical model to the flow solver, the feedback of the spatially resolved pressure field from the flow simulations onto the elastomechanics is quantified. A sequential coupling approach is introduced to account for fluid-dynamic influences in a beat-by-beat coupling structure. The small deviations of 2 mm in the mechanical solver vanished after a single iteration, suggesting that the feedback of fluid dynamics in the healthy heart is limited. In summary, boundary conditions must be chosen with care, particularly for fluid dynamics simulations, since their large influence increases the vulnerability of the models. Nevertheless, simplified coupling strategies showed promising results in reproducing global fluid-dynamic quantities while reducing the dependency between the solvers and saving computational effort
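
The sequential beat-by-beat coupling described here reduces, in structure, to a fixed-point iteration between two solvers. The callbacks below are toy stand-ins for the real elastomechanics and immersed-boundary flow solvers; each beat, mechanics is loaded with the pressure feedback computed one beat earlier, and the loop stops once the pressure update falls below a tolerance:

```python
def beat_to_beat_coupling(solve_mechanics, solve_fluid, p0, n_beats=5, tol=1e-3):
    """Sequential (loose) coupling sketch: each beat, the mechanics solver is
    driven by the pressure feedback from the previous beat's fluid solve."""
    p, history = p0, []
    for _ in range(n_beats):
        d = solve_mechanics(p)   # deformation under the current pressure load
        p_new = solve_fluid(d)   # pressure feedback from the flow driven by d
        history.append(abs(p_new - p))
        p = p_new
        if history[-1] < tol:    # feedback converged; stop iterating
            break
    return p, history
```

With well-behaved (contractive) toy solvers the update shrinks geometrically from beat to beat, mirroring the observation above that the small deviations vanish after a single iteration in the healthy heart.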

    Beam scanning by liquid-crystal biasing in a modified SIW structure

    A fixed-frequency beam-scanning 1D antenna based on Liquid Crystals (LCs) is designed for application in 2D scanning with lateral alignment. The 2D array environment imposes full decoupling of adjacent 1D antennas, which often conflicts with the LC requirement of DC biasing: the proposed design accommodates both. The LC medium is placed inside a Substrate Integrated Waveguide (SIW), modified to work as a Groove Gap Waveguide with radiating slots etched in the upper broad wall, so that it radiates as a Leaky-Wave Antenna (LWA). This allows effective application of the DC bias voltage needed for tuning the LCs. At the same time, the RF field remains laterally confined, making it possible to place several antennas in parallel and achieve 2D beam scanning. The design is validated by simulation employing the actual properties of a commercial LC medium
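
The scanning mechanism rests on the standard leaky-wave relation: the main beam leaves at θ ≈ asin(β/k0), and biasing the LC shifts its permittivity and hence the phase constant β. A rough sketch for a TE10-like guided mode follows; the frequency, permittivity range and waveguide width used below are illustrative assumptions, not the paper's design values:

```python
import math

def beam_angle_deg(freq_hz, eps_r, width_m):
    """Main-beam direction (degrees from broadside) of a dielectric-filled
    waveguide LWA: theta = asin(beta / k0), with beta from the TE10
    dispersion relation beta^2 = eps_r * k0^2 - (pi / width)^2."""
    k0 = 2.0 * math.pi * freq_hz / 299_792_458.0
    beta_sq = eps_r * k0**2 - (math.pi / width_m)**2
    if beta_sq <= 0:
        raise ValueError("guided mode below cutoff")
    return math.degrees(math.asin(math.sqrt(beta_sq) / k0))
```

Sweeping the LC permittivity at fixed frequency then sweeps β, and with it the beam angle, which is exactly the fixed-frequency scanning the antenna exploits.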

    Artificial Intelligence for the Edge Computing Paradigm

    With modern technologies moving towards the Internet of Things, where seemingly every financial, private, commercial and medical transaction is carried out by portable and intelligent devices, Machine Learning has found its way into every smart device and application possible. However, Machine Learning cannot be used on the edge directly due to the limited capabilities of small, battery-powered modules. This thesis therefore aims to provide lightweight automated Machine Learning models applied on a standard edge device, the Raspberry Pi: one framework aims to limit parameter tuning while automating feature extraction, while a second performs traditional Machine Learning classification on the edge and can additionally be used for image-based explainable Artificial Intelligence. Also, a commercial Artificial Intelligence software package has been ported to work in a client/server setup on the Raspberry Pi board, where it was incorporated in all of the Machine Learning frameworks presented in this thesis. This dissertation also introduces multiple algorithms that convert images into time-series for classification and explainability, as well as novel time-series feature extraction algorithms applied to biomedical data, while introducing the concept of the Activation Engine, a post-processing block that tunes Neural Networks without the need for particular experience in Machine Learning. Also, a tree-based method for multiclass classification is introduced which outperforms the One-to-Many approach while being less complex than the One-to-One method. The results presented in this thesis exhibit high accuracy when compared with the literature, while remaining efficient in terms of power consumption and inference time.
Additionally, the concepts, methods and algorithms introduced are technically novel; they include:
    • Feature extraction of professionally annotated, and poorly annotated, time-series.
    • The introduction of the Activation Engine post-processing block.
    • A model for global image explainability with inference on the edge.
    • A tree-based algorithm for multiclass classification
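
The tree-based multiclass method is not spelled out in the abstract, but the general idea of reducing a K-class problem to K−1 binary decisions arranged in a tree can be sketched as follows; the nearest-centroid binary learner here is a deliberately simple stand-in, not the classifier the thesis actually uses:

```python
import numpy as np

def fit_centroid(X, mask):
    """Toy binary learner: nearest-centroid decision between the two groups."""
    c_in, c_out = X[mask].mean(axis=0), X[~mask].mean(axis=0)
    return lambda x: np.linalg.norm(x - c_in) < np.linalg.norm(x - c_out)

def make_node(X, y, labels, fit_binary=fit_centroid):
    """Recursively halve the label set; each internal node holds one binary
    classifier deciding which half a sample belongs to (K-1 nodes total)."""
    if len(labels) == 1:
        return labels[0]                        # leaf: a single class label
    left, right = labels[: len(labels) // 2], labels[len(labels) // 2:]
    mask = np.isin(y, left)
    clf = fit_binary(X, mask)                   # True -> descend left
    return (clf,
            make_node(X[mask], y[mask], left, fit_binary),
            make_node(X[~mask], y[~mask], right, fit_binary))

def predict_node(node, x):
    while isinstance(node, tuple):              # walk down to a leaf
        clf, left, right = node
        node = left if clf(x) else right
    return node
```

Against One-to-One, which trains K(K−1)/2 classifiers, the tree needs only K−1, and inference evaluates just the classifiers on one root-to-leaf path, which matters on battery-powered edge hardware.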

    Complexity Science in Human Change

    This reprint encompasses fourteen contributions that offer avenues towards a better understanding of complex systems in human behavior. The phenomena studied here are generally pattern formation processes that originate in social interaction and psychotherapy. Several accounts are also given of coordination in body movements and in physiological, neuronal and linguistic processes. A common denominator of such pattern formation is that the complexity and entropy of the respective systems become reduced spontaneously, which is the hallmark of self-organization. The various methodological approaches to modeling such processes are presented in some detail, and results from the various methods are systematically compared and discussed. Among these approaches are algorithms for the quantification of synchrony by cross-correlational statistics, surrogate control procedures, recurrence mapping and network models. This volume offers an informative and sophisticated resource for scholars of human change, as well as for students at advanced levels, from graduate to post-doctoral. The reprint is multidisciplinary in nature, binding together the fields of medicine, psychology, physics, and neuroscience
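
Of the methods listed, the combination of cross-correlational statistics with surrogate controls is easy to illustrate. In the sketch below (function and parameter names are my own, not from the volume), zero-lag synchrony between two normalized series is scored against a null distribution built from circularly shifted surrogates:

```python
import numpy as np

def synchrony_score(x, y, n_surrogates=200, seed=0):
    """Zero-lag cross-correlation of two series, tested against a null
    distribution of circularly shifted surrogates of the second series."""
    rng = np.random.default_rng(seed)
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    observed = float(np.mean(x * y))
    null = np.array([np.mean(x * np.roll(y, rng.integers(1, len(y))))
                     for _ in range(n_surrogates)])
    # one-sided empirical p-value: chance of matching the observed synchrony
    p = (1 + np.sum(null >= observed)) / (1 + n_surrogates)
    return observed, p
```

Circular shifting preserves each series' own autocorrelation while destroying their alignment, so a small empirical p-value indicates synchrony beyond what the individual dynamics produce by chance; for strongly periodic signals other surrogate schemes are preferable.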

    Characterising Shape Variation in the Human Right Ventricle Using Statistical Shape Analysis: Preliminary Outcomes and Potential for Predicting Hypertension in a Clinical Setting

    Variations in the shape of the human right ventricle (RV) have previously been shown to be predictive of heart function and long-term prognosis in Pulmonary Hypertension (PH), a deadly disease characterised by high blood pressure in the pulmonary arteries. The extent to which ventricular shape is also affected by non-pathological features such as sex, body mass index (BMI) and age is explored in this thesis. If fundamental differences in the shape of a structurally normal RV exist, these might also impact the success of a predictive model. This thesis evaluates the extent to which non-pathological features affect the shape of the RV and determines the best ways, in terms of procedure and analysis, to adapt the model to consistently predict PH. It also identifies areas where the statistical shape analysis procedure is robust, and considers the extent to which specific non-pathological characteristics impact the diagnostic potential of the statistical shape model. Finally, recommendations are made on the next steps in the development of a classification procedure for PH. The dataset was composed of clinically obtained cardiovascular magnetic resonance (CMR) images from two independent sources: the University of Pittsburgh Medical Center and Newcastle University. Shape change is assessed using a 3D statistical shape analysis technique, which topologically maps heart meshes through a harmonic mapping approach to create a unique shape function for each shape. Proper Orthogonal Decomposition (POD) was applied to the complete set of shape functions in order to determine and rank a set of shape features (i.e. modes and corresponding coefficients from the decomposition). The MRI scanning protocol produced the most significant difference in shape: a shape mode associated with detail at the RV apex and ventricular length from apex to base correlated strongly with the MRI sequence used to record each subject.
Qualitatively, a protocol which skipped slices produced a shorter RV with less detail at the apex. Decomposition by sex, age and BMI also yields unique RV shape descriptors which correspond to anatomically meaningful features. The shape features are shown to be able to predict the presence of PH. The predictive model can be improved by including BMI as a factor, but these improvements are mainly concentrated in the identification of healthy subjects
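
The POD step described above is, computationally, a singular value decomposition of the mean-centred shape functions. A minimal sketch, assuming each subject's shape function has been sampled into one row of a matrix S (the matrix layout and names are my own):

```python
import numpy as np

def pod_modes(S):
    """POD of sampled shape functions: each row of S is one subject.
    Returns the mean shape, ranked orthonormal modes, and per-subject
    coefficients such that S = coeffs @ modes + mean."""
    mean = S.mean(axis=0)
    U, sigma, Vt = np.linalg.svd(S - mean, full_matrices=False)
    return mean, Vt, U * sigma
```

The rows of the returned modes are the ranked shape features, and each subject's coefficient vector is the low-dimensional descriptor that can be correlated with sex, age, BMI or scanning protocol, or fed to a PH classifier.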