37 research outputs found

    System- and Data-Driven Methods and Algorithms

    The increasing complexity of models used to predict real-world systems leads to the need for algorithms that replace complex models with far simpler ones while preserving the accuracy of the predictions. This three-volume handbook covers methods as well as applications. This first volume focuses on real-time control theory, data assimilation, real-time visualization, high-dimensional state spaces, and the interaction of different reduction techniques.
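
    To make the reduction idea concrete, here is a minimal sketch of one classic technique from this family, proper orthogonal decomposition (POD) with Galerkin projection; the toy model, dimensions, and tolerance are illustrative and not taken from the handbook.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy full-order model: x' = A x, integrated with explicit Euler.
    n = 400
    A = -np.eye(n) + 0.01 * rng.normal(size=(n, n))
    x = rng.normal(size=n)
    snapshots = []
    for _ in range(200):
        x = x + 0.01 * (A @ x)
        snapshots.append(x.copy())
    S = np.stack(snapshots, axis=1)               # snapshot matrix, (n, 200)

    # POD: the SVD of the snapshots yields an energy-ranked reduced basis.
    U, s, _ = np.linalg.svd(S, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 0.9999)) + 1  # keep 99.99% of the energy
    V = U[:, :r]                                  # reduced basis, r << n

    # Galerkin projection: an r x r surrogate replaces the n x n model.
    A_r = V.T @ A @ V
    ```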

    Applications

    Model Order Reduction

    The increasing complexity of models used to predict real-world systems leads to the need for algorithms that replace complex models with far simpler ones while preserving the accuracy of the predictions. This three-volume handbook covers methods as well as applications. This third volume focuses on applications in engineering, biomedical engineering, computational physics, and computer science.

    Knowledge base ontological debugging guided by linguistic evidence

    As they grow in size, knowledge bases (KBs) tend to include sets of axioms which are intuitively absurd but nonetheless logically consistent. This is particularly true of data expressed in OWL, as part of the Semantic Web framework, which favors the aggregation of sets of statements from multiple sources of knowledge with overlapping signatures. Identifying nonsense is essential if one wants to avoid undesired inferences, but the sparse usage of negation within these datasets generally prevents the detection of such cases on a strictly logical basis. And even if the KB is inconsistent, identifying the axioms responsible for the nonsense remains a non-trivial task. This thesis investigates the use of automatically gathered linguistic evidence to detect and repair violations of common sense within such datasets. The main intuition is to exploit distributional similarity between named individuals of an input KB in order to identify consequences which are unlikely to hold if the rest of the KB does. The repair phase then consists in selecting axioms to be preferably discarded (or at least amended) in order to get rid of the nonsense. A second strategy is also presented, which consists in strengthening the input KB with a foundational ontology in order to obtain an inconsistency, before performing a form of knowledge-base debugging/revision that incorporates this linguistic input. This last step may also be applied directly to an inconsistent input KB. These propositions are evaluated on different sets of statements issued from the Linked Open Data cloud, as well as on datasets of higher quality that were automatically degraded for the evaluation. The results suggest that distributional evidence may indeed constitute a relevant common ground for deciding between conflicting axioms.
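
    As a rough illustration of the distributional-similarity intuition, the sketch below scores a candidate consequence by how dissimilar the embedding vectors of the named individuals it relates are; the function names and the scoring rule are hypothetical stand-ins, not the procedure developed in the thesis.

    ```python
    import numpy as np

    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

    def implausibility(consequence_pairs, embeddings):
        """Score a candidate consequence by the distributional similarity of
        the named individuals it relates: a low average similarity suggests
        nonsense worth repairing. `embeddings` maps individual names to
        vectors (e.g. pre-trained word embeddings)."""
        sims = [cosine(embeddings[a], embeddings[b]) for a, b in consequence_pairs]
        return 1.0 - float(np.mean(sims))

    # Axioms entailing consequences that score above a threshold become
    # candidates for being discarded or amended during the repair phase.
    ```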

    Probabilistic Inference for Model Based Control

    Robotic systems are essential for enhancing productivity, automation, and performing hazardous tasks. Addressing the unpredictability of physical systems, this thesis advances robotic planning and control under uncertainty, introducing learning-based methods for managing uncertain parameters and adapting to changing environments in real time. Our first contribution is a framework using Bayesian statistics for likelihood-free inference of model parameters. This allows complex simulators to be employed in the design of efficient, robust controllers. The method, integrating the unscented transform with a variant of information-theoretic model predictive control, shows better performance in trajectory evaluation than Monte Carlo sampling, easing the computational load in various control and robotics tasks. Next, we reframe robotic planning and control as a Bayesian inference problem, focusing on the posterior distribution of actions and model parameters. An implicit variational inference algorithm, performing Stein Variational Gradient Descent, estimates distributions over model parameters and control inputs in real time. This Bayesian approach effectively handles complex multi-modal posterior distributions, vital for dynamic and realistic robot navigation. Finally, we tackle diversity in high-dimensional spaces. Our approach mitigates the underestimation of uncertainty in posterior distributions, which otherwise leads to locally optimal solutions. Using the theory of rough paths, we develop an algorithm for parallel trajectory optimisation, enhancing solution diversity and avoiding mode collapse. This method extends our variational inference approach for trajectory estimation, employing diversity-enhancing kernels and leveraging the path-signature representation of trajectories. Empirical tests, ranging from 2-D navigation to robotic manipulators in cluttered environments, affirm our method's efficiency, outperforming existing alternatives.
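
    Since the thesis builds on Stein Variational Gradient Descent, a minimal sketch of the textbook SVGD update may help; it uses an RBF kernel with the common median-heuristic bandwidth, and all settings are illustrative rather than those of the real-time estimator described above.

    ```python
    import numpy as np

    def rbf_kernel(X, h=None):
        # Pairwise squared distances between the n particles in X (n, d).
        diff = X[:, None, :] - X[None, :, :]
        sq = np.sum(diff**2, axis=-1)
        if h is None:
            h = np.median(sq) / np.log(X.shape[0] + 1) + 1e-8  # median heuristic
        K = np.exp(-sq / h)
        # grad_K[j, i] = gradient w.r.t. x_j of k(x_j, x_i).
        grad_K = -2.0 / h * diff * K[:, :, None]
        return K, grad_K

    def svgd_step(X, grad_log_p, step=0.1):
        """One SVGD update: phi(x_i) = mean_j [k(x_j, x_i) grad log p(x_j)
        + grad_{x_j} k(x_j, x_i)]. The first term pulls particles toward
        high posterior density; the second keeps them spread out."""
        K, grad_K = rbf_kernel(X)
        phi = (K @ grad_log_p(X) + grad_K.sum(axis=0)) / X.shape[0]
        return X + step * phi

    # Example: particles converge to a standard 2-D Gaussian (grad log p = -x).
    X = np.random.default_rng(0).normal(size=(50, 2)) * 3.0
    for _ in range(300):
        X = svgd_step(X, lambda x: -x)
    ```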

    Non-empirical Force-Field Development for Weakly-Bound Organic Molecules

    This thesis pioneers the development of non-empirical anisotropic atom-atom force-fields for organic molecules, and their use as state-of-the-art intermolecular potentials for modelling the solid state. The long-range electrostatic, polarization, and dispersion terms have been derived directly from the molecular charge density, while the short-range terms are obtained by fitting to the symmetry-adapted perturbation theory (SAPT(DFT)) intermolecular interaction energies of a large number of different dimer configurations. This study aims to establish how far this approach, previously used for small molecules, can be applied to specialty molecules, and whether these potentials improve on the current empirical force-fields FIT and WILLIAMS01. The scaling of the underlying electronic-structure calculations with system size means many adaptations have been made. This project aims to generate force-fields suitable for use in Crystal Structure Prediction (CSP) and for modelling possible polymorphs, particularly high-pressure polymorphs. By accurately modelling the repulsive wall of the potential energy surface, the high-pressure/temperature conditions typically sampled by explosive materials can be studied reliably, as shown in a CSP study of pyridine using a non-empirical potential. This thesis also investigates the transferability of these potentials from the gas phase to the condensed phase, as well as the transferability and importance of the intermolecular interactions of flexible functional groups, in particular NO2 groups. The charge distribution was found to be strongly influenced by variations in the observed NO2 torsion angle and the conformation of the rest of the molecule. This conformation dependence, coupled with the novelty of the methods and the size of the molecules, has made developing non-empirical models for flexible nitro-energetic materials very challenging. The thesis culminates in the development of a bespoke non-empirical force-field for rigid trinitrobenzene and its use in a CSP study.
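
    For orientation, the sketch below evaluates the generic isotropic atom-atom decomposition (exponential exchange-repulsion, -C/r^6 dispersion, point-charge electrostatics) that such potentials refine; the thesis's anisotropic, charge-density-derived terms are more elaborate, and every parameter here is a placeholder.

    ```python
    import numpy as np

    K_E = 332.0637  # Coulomb constant in kcal mol^-1 Angstrom e^-2

    def pair_energy(r, A, B, C, q_i, q_j):
        """Generic atom-atom interaction at separation r (Angstrom):
        exponential exchange-repulsion, leading -C/r^6 dispersion, and
        point-charge electrostatics. A, B, C, q_i, q_j are placeholder
        parameters; non-empirical force-fields derive the analogous terms
        from the molecular charge density instead of fitting experiment."""
        repulsion = A * np.exp(-B * r)
        dispersion = -C / r**6
        electrostatics = K_E * q_i * q_j / r
        return repulsion + dispersion + electrostatics
    ```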

    Multimodal Panoptic Segmentation of 3D Point Clouds

    The understanding and interpretation of complex 3D environments is a key challenge of autonomous driving. Lidar sensors and their recorded point clouds are particularly interesting for this challenge since they provide accurate 3D information about the environment. This work presents a multimodal approach based on deep learning for panoptic segmentation of 3D point clouds. It builds upon and combines three key aspects: a multi-view architecture, temporal feature fusion, and deep sensor fusion.
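
    One standard building block behind deep sensor fusion of lidar and camera data is projecting 3D points into the image plane so image features can be attached to points; the sketch below shows that association step under an assumed pinhole camera model, with hypothetical function and variable names.

    ```python
    import numpy as np

    def project_to_image(points, K, T_cam_lidar):
        """Project lidar points (N, 3) into pixel coordinates given pinhole
        intrinsics K (3, 3) and a homogeneous lidar-to-camera transform
        T_cam_lidar (4, 4). Returns pixel coordinates for the points in
        front of the camera and the mask selecting them."""
        pts_h = np.hstack([points, np.ones((points.shape[0], 1))])
        cam = (T_cam_lidar @ pts_h.T).T[:, :3]   # points in camera frame
        in_front = cam[:, 2] > 0.0               # keep points ahead of the camera
        uv = (K @ cam[in_front].T).T
        return uv[:, :2] / uv[:, 2:3], in_front  # perspective division
    ```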

    Simulation Tools and Developments on Integral Formulations for the Computation of Eddy Currents

    Computational electromagnetics is a discipline that has for many years enabled deep innovations in the study of electromagnetic problems. Even if commercial software nowadays undeniably shows a certain maturity when applied to practical problems, research work still has to be done to go beyond the theoretical limits underlying the various approaches. In this respect, integral formulations still present some open issues. Historically, the exploitation of these formulations to study eddy currents started around the 1990s with the seminal works of R. Albanese, R. Martone, and G. Rubinacci, together with the research activity of L. Kettunen and L. R. Turner, and then with G. Meunier, who more recently rediscovered them. Lately, the contributions of L. Codecasa, R. Specogna, and F. Trevisan have further increased the possibilities offered by this approach by introducing a set of new shape functions for polyhedral grids that are based on a discrete geometrical reinterpretation of the physics of electromagnetic phenomena. One of the main features characterizing integral formulations for computing eddy currents is that they do not require any discretization of the complement of the conductor under study. As a drawback, they lead to fully populated matrices whose assembly is remarkably time-consuming and whose size can sometimes saturate the memory of the computer. In this respect, this thesis presents a new volume integral code for polyhedral grids, describing how a fast and efficient cohomology computation can be implemented to treat non-simply-connected domains as well. Then, some tools are provided for reducing the size, and thus also the assembly time, of the fully populated matrix. More precisely, the attention is focused on the exploitation of cyclic symmetry and on the novel topology-related issues that arise when integral formulations have to be referred only to the symmetry cell of the complete conducting domain, in order not to spoil the block-circulant property of the system matrix when building the cohomology generators or the gauging tree. Furthermore, new iterative methods are also considered as additional approaches to limit the size of the system matrix to be assembled: despite being already known to the computational electromagnetics community, their convergence behaviour has not yet been studied when they are applied to integral formulations such as the one proposed here. Specifically, after presenting a purely iterative scheme derived from the volume integral formulation, whose convergence can be somewhat problematic, we propose a new direct-iterative method based on Krylov subspace techniques and on splitting the domain into multiple conductors, which exhibits a much improved behaviour. The study of these methods leads to new interesting findings to be considered in addition to matrix-compression techniques.
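
    To illustrate why preserving the block-circulant property of the system matrix pays off, the following sketch solves a block-circulant system by decoupling it with a DFT across the symmetry cells; this is the generic linear-algebra mechanism exploited by cyclic symmetry, not code from the thesis.

    ```python
    import numpy as np

    def solve_block_circulant(blocks, f):
        """Solve C u = f where C is block-circulant, C[i, j] = blocks[(i - j) % m],
        as arises when a formulation is restricted to the symmetry cell of a
        cyclically symmetric conductor. A DFT across the cyclic index
        decouples the full system into m independent b x b solves.

        blocks: (m, b, b) first block-column of C; f: (m, b) right-hand side."""
        m = blocks.shape[0]
        B_hat = np.fft.fft(blocks, axis=0)     # block spectrum, (m, b, b)
        f_hat = np.fft.fft(f, axis=0)
        u_hat = np.stack([np.linalg.solve(B_hat[k], f_hat[k]) for k in range(m)])
        return np.fft.ifft(u_hat, axis=0).real  # real-valued for real data
    ```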