
    DEVELOPMENT OF PROBLEM-SPECIFIC MODELING LANGUAGE TO SUPPORT SOFTWARE VARIABILITY IN "SMART HOME" SYSTEMS

    Building conceptual models for software design, particularly for high-tech applications such as smart home systems, is a complex task that significantly affects the efficiency of the development process. One promising way to address this problem is to use domain-specific modeling languages (DSMLs), which can reduce the time and other project resources required to create such systems. The subject of research in this paper is approaches to the development of a DSML for smart home systems as a distinct class of Internet of Things systems. The purpose of this work is to propose an approach to DSML development based on a model of the variability of the properties of such a system. The following tasks are solved: analysis of some existing approaches to the creation of DSMLs; construction of a multifaceted classification of requirements for them; application of these requirements to the design of the syntax of a specific language, DSML-V, for creating variable software in smart home systems; and development of a technological scheme and quantitative metrics for the experimental evaluation of the effectiveness of the proposed approach. The following methods are used: variability modeling based on the property model, formal notations for describing the syntax of the DSML-V language, and the open CASE tool metaDepth. Results: a multifaceted classification of requirements for a broad class of DSMLs is built; the basic syntactic constructs of the DSML-V language are developed to support the software variability properties of smart home systems; a formal description of this syntax is given in Backus-Naur notation; a technological scheme is created for compiling DSML-V specifications into the language of the open CASE tool metaDepth; and the effectiveness of the proposed approach is investigated experimentally using quantitative metrics. Conclusions: the proposed method of developing a specialized problem-oriented language for smart home systems allows multilevel modeling of the variability properties of its software components and increases the efficiency of programming such models by about 14% compared to existing approaches.
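
    The abstract does not reproduce the DSML-V syntax itself. As a loose illustration of the kind of variability (feature) model such a language is meant to capture, the following Python sketch encodes mandatory, optional and alternative features for a hypothetical smart-home product line; all names and constructs here are illustrative assumptions, not DSML-V's actual constructs.

        from dataclasses import dataclass, field

        # Hypothetical illustration of a variability (feature) model for a
        # smart-home product line; DSML-V's actual constructs are not given
        # in the abstract.

        @dataclass
        class Feature:
            name: str
            mandatory: list["Feature"] = field(default_factory=list)    # must be selected
            optional: list["Feature"] = field(default_factory=list)     # may be selected
            alternative: list["Feature"] = field(default_factory=list)  # exactly one

        def valid(root: Feature, selection: set[str]) -> bool:
            """Check a configuration against the variability constraints."""
            if root.name not in selection:
                return False
            if any(f.name not in selection for f in root.mandatory):
                return False
            if root.alternative and sum(f.name in selection for f in root.alternative) != 1:
                return False
            children = root.mandatory + root.optional + root.alternative
            return all(valid(f, selection) for f in children if f.name in selection)

        lighting = Feature("Lighting", alternative=[Feature("Manual"), Feature("Adaptive")])
        home = Feature("SmartHome", mandatory=[lighting], optional=[Feature("Climate")])

        print(valid(home, {"SmartHome", "Lighting", "Adaptive"}))            # True
        print(valid(home, {"SmartHome", "Lighting", "Manual", "Adaptive"}))  # False: two alternatives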

    Meta-ontology fault detection

    Ontology engineering is the field, within knowledge representation, concerned with using logic-based formalisms to represent knowledge, typically in moderately sized knowledge bases called ontologies. How best to develop, use and maintain these ontologies has produced a relatively large body of formal, theoretical and methodological research. One subfield of ontology engineering is ontology debugging, which is concerned with preventing, detecting and repairing errors (or, more generally, pitfalls, bad practices or faults) in ontologies. Due to the logical nature of ontologies and, in particular, entailment, these faults are often hard to both prevent and detect, and have far-reaching consequences. This makes ontology debugging one of the principal challenges to the more widespread adoption of ontologies in applications. Another important subfield of ontology engineering is ontology alignment: combining multiple ontologies to produce results more powerful than the simple sum of the parts. Ontology alignment compounds the issues, difficulties and challenges of ontology debugging by introducing, propagating and exacerbating faults in ontologies. A relevant aspect of the field of ontology debugging is that, because of these challenges and difficulties, research within it is usually notably constrained in scope, focusing on particular aspects of the problem, on certain subdomains, or on specific methodologies. Similarly, the approaches are often ad hoc and related to other approaches only at a conceptual level. There are no well-established and widely used formalisms, definitions or benchmarks that form a foundation for the field of ontology debugging. In this thesis, I tackle the problem of ontology debugging from a more abstract point of view than usual, surveying the existing literature, extracting common ideas, and especially focusing on formulating them in a common language and under a common approach. Meta-ontology fault detection is a framework for detecting faults in ontologies that uses semantic fault patterns to express, in a systematic way, schematic entailments that typically indicate faults. The formalism that I developed to represent these patterns is called existential second-order query logic (abbreviated as ESQ logic). I further reformulated a large proportion of the ideas present in existing research into this framework, as patterns in ESQ logic, providing a pattern catalogue. Most of the work during my PhD has been spent designing and implementing an algorithm to automatically detect arbitrary ESQ patterns in arbitrary ontologies. The result is what we call minimal commitment resolution for ESQ logic: an extension of first-order resolution that draws on important ideas from higher-order unification and implements a novel approach to unification problems using dependency graphs. I have proven important theoretical properties of this algorithm, such as its soundness, its termination (in a certain sense and under certain conditions) and its fairness or completeness in the enumeration of infinite spaces of solutions. Moreover, I have produced an implementation of minimal commitment resolution for ESQ logic in Haskell that passes all unit tests and produces non-trivial results on small examples. However, attempts to apply the algorithm to examples of a more realistic size have proven unsuccessful, with computation times that exceed our tolerance levels. In this thesis, I provide details of the challenges faced in this regard, along with other, successful forms of qualitative evaluation of the meta-ontology fault detection approach; a discussion of what I believe are the main causes of the computational feasibility problems and ideas on how to overcome them; and ideas on other directions of future work that could use the results in the thesis to contribute to foundational formalisms, ideas and approaches to ontology debugging that can properly combine the existing constrained research. It is unclear to me whether minimal commitment resolution for ESQ logic can, in its current shape, be implemented efficiently, but I believe that, at the very least, the theoretical and conceptual underpinnings presented in this thesis will be useful for producing more foundational results in the field.
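
    As a toy illustration of the idea of schematic fault patterns (far simpler than ESQ logic and minimal commitment resolution), the following Python sketch matches one hypothetical pattern, a class subsumed by both another class and its complement, over a small list of asserted subsumption axioms; the encoding is an assumption for illustration only.

        from itertools import combinations

        # Toy illustration only: the thesis's ESQ logic operates on
        # entailments, not just asserted axioms, and is far more general.
        # Axioms: (sub, sup) means sub is subsumed by sup; ("not", X) is the
        # complement of X.
        axioms = [
            ("Dog", "Pet"),
            ("Dog", ("not", "Pet")),   # a deliberately faulty pair
            ("Cat", "Pet"),
        ]

        def contradictory_subsumptions(axioms):
            """Find classes subsumed by both a class and its complement."""
            faults = []
            for (s1, p1), (s2, p2) in combinations(axioms, 2):
                if s1 == s2 and (p2 == ("not", p1) or p1 == ("not", p2)):
                    faults.append(s1)
            return faults

        print(contradictory_subsumptions(axioms))  # ['Dog']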

    Modified Theories of Gravity and Cosmological Applications

    This reprint focuses on recent aspects of gravitational theory and cosmology. It covers subjects of particular interest for modified gravity theories and their applications to cosmology; special attention is given to Einstein–Gauss–Bonnet gravity, f(R) gravity, anisotropic inflation, extra-dimensional theories of gravity, black holes, dark energy, Palatini gravity, anisotropic spacetime, Einstein–Finsler gravity, off-diagonal cosmological solutions, Hawking temperature and scalar-tensor-vector theories.

    Modelling, Monitoring, Control and Optimization for Complex Industrial Processes

    This reprint includes 22 research papers and an editorial, collected from the Special Issue "Modelling, Monitoring, Control and Optimization for Complex Industrial Processes", highlighting recent research advances and emerging research directions in complex industrial processes. This reprint aims to promote the research field and to benefit readers from both academic communities and industrial sectors.

    Efficient finite element methods for solving high-frequency time-harmonic acoustic wave problems in heterogeneous media

    This thesis focuses on the efficient numerical solution of frequency-domain wave propagation problems using finite element methods. In the first part of the manuscript, the development of domain decomposition methods is addressed, with the aim of overcoming the limitations of state-of-the-art direct and iterative solvers. To this end, a non-overlapping substructured domain decomposition method with high-order absorbing conditions used as transmission conditions (HABC DDM) is first extended to deal with cross-points, where more than two subdomains meet. The handling of cross-points is a well-known issue for non-overlapping HABC DDMs. Our methodology provides an efficient solution for lattice-type domain partitions, where the subdomains meet at right angles. The method is based on the introduction of suitable relations and additional transmission variables at the cross-points, and its effectiveness is demonstrated on several test cases. A similar non-overlapping substructured DDM is then proposed with perfectly matched layers instead of HABCs used as transmission conditions (PML DDM). The proposed approach naturally handles cross-points for two-dimensional checkerboard domain partitions through Lagrange multipliers used for the weak coupling between subproblems defined on rectangular subdomains and the surrounding PMLs. Two discretizations for the Lagrange multipliers and several stabilization strategies are proposed and compared. The performance of the HABC and PML DDMs is then compared on test cases of increasing complexity, from two-dimensional wave scattering in homogeneous media to three-dimensional wave propagation in highly heterogeneous media. While the theoretical developments are carried out for the scalar Helmholtz equation for acoustic wave propagation, the extension to elastic wave problems is also considered, highlighting the potential for further generalization to other physical contexts. The second part of the manuscript is devoted to the presentation of the computational tools developed during the thesis and used to produce all the numerical results: GmshFEM, a new C++ finite element library based on the application programming interface of the open-source finite element mesh generator Gmsh; and GmshDDM, a distributed domain decomposition library based on GmshFEM.
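
    To illustrate the basic idea behind impedance-type transmission conditions in a far simpler setting than the thesis's HABC/PML DDMs with cross-points, the following Python sketch runs a Després-type non-overlapping Schwarz iteration for the 1D Helmholtz equation on two subdomains; the finite-difference discretization and all parameter choices are illustrative assumptions.

        import numpy as np

        # Minimal 1D sketch of a non-overlapping Schwarz (Després-type) DDM for
        # u'' + k^2 u = f with impedance transmission conditions du/dn + ik*u = g
        # at the interface. Illustration only: the thesis treats 2D/3D FEM with
        # high-order absorbing conditions or PMLs and cross-points.

        k = 20.0            # wavenumber
        m = 200             # cells per subdomain
        h = 0.5 / m
        ik = 1j * k

        def solve_subdomain(f, g, left_dirichlet):
            """Solve u'' + k^2 u = f with u=0 at the outer end and an impedance
            condition (du/dn + ik*u = g) at the interface end."""
            n = m + 1
            A = np.zeros((n, n), dtype=complex)
            b = np.array(f, dtype=complex)
            for j in range(1, n - 1):
                A[j, j-1] = A[j, j+1] = 1.0 / h**2
                A[j, j] = -2.0 / h**2 + k**2
            if left_dirichlet:           # outer end at index 0, interface at the right
                A[0, 0] = 1.0; b[0] = 0.0
                A[-1, -1] = 1.0/h + ik; A[-1, -2] = -1.0/h; b[-1] = g
            else:                        # interface at index 0, outer end at the right
                A[0, 0] = 1.0/h + ik; A[0, 1] = -1.0/h; b[0] = g
                A[-1, -1] = 1.0; b[-1] = 0.0
            return np.linalg.solve(A, b)

        # Point-source-like right-hand side in the left subdomain.
        f1 = np.zeros(m + 1); f1[m // 2] = 1.0 / h
        f2 = np.zeros(m + 1)

        g1 = g2 = 0.0
        for it in range(50):
            u1 = solve_subdomain(f1, g1, left_dirichlet=True)
            u2 = solve_subdomain(f2, g2, left_dirichlet=False)
            # Exchange: g_i <- 2ik * (neighbour interface trace) - g_j
            g1, g2 = 2*ik*u2[0] - g2, 2*ik*u1[-1] - g1
            jump = abs(u1[-1] - u2[0])   # solution mismatch at the interface
            if jump < 1e-10:
                break
        print(f"iterations: {it+1}, interface jump: {jump:.2e}")

    At the fixed point of the exchange, the two impedance traces match from both sides, which enforces continuity of the solution and of its flux across the interface.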

    Analysing Fairness of Privacy-Utility Mobility Models

    Preserving individuals' privacy when sharing spatio-temporal datasets is critical to prevent re-identification attacks based on unique trajectories. Existing privacy techniques tend to propose ideal privacy-utility tradeoffs but largely ignore the fairness implications of mobility models and whether such techniques perform equally well for different groups of users. The interplay between fairness and privacy-aware models has not been well quantified, and there are hardly any established metrics for measuring fairness in the spatio-temporal context. In this work, we define a set of fairness metrics designed explicitly for human mobility, based on the structural similarity and entropy of trajectories. Under these definitions, we examine the fairness of two state-of-the-art privacy-preserving models that rely on GANs and representation learning to reduce the re-identification rate of users for data sharing. Our results show that while both models guarantee group fairness in terms of demographic parity, they violate individual fairness criteria, indicating that users with highly similar trajectories receive disparate privacy gains. We conclude that the tension between the re-identification task and individual fairness needs to be considered in future spatio-temporal data analysis and modelling to achieve a privacy-preserving, fairness-aware setting.
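
    As a rough illustration of the kind of check implied by the individual-fairness finding (not the paper's actual metrics, which combine structural similarity with entropy), the following Python sketch computes trajectory entropies and flags user pairs with similar trajectories but disparate privacy gains; all names, thresholds and the entropy-as-similarity proxy are illustrative assumptions.

        import numpy as np
        from collections import Counter

        # Illustrative sketch: a trajectory is a sequence of visited location
        # IDs; the per-user "privacy gain" is assumed to come from the
        # privacy-preserving model under study.

        def trajectory_entropy(traj):
            """Shannon entropy (bits) of a user's location-visit distribution."""
            counts = np.array(list(Counter(traj).values()), dtype=float)
            p = counts / counts.sum()
            return float(-(p * np.log2(p)).sum())

        def individual_fairness_violations(trajs, privacy_gain, sim_eps=0.5, gain_eps=0.1):
            """Flag user pairs with similar trajectories (entropy within sim_eps)
            but disparate privacy gains (difference above gain_eps)."""
            H = [trajectory_entropy(t) for t in trajs]
            bad = []
            for i in range(len(trajs)):
                for j in range(i + 1, len(trajs)):
                    if abs(H[i] - H[j]) < sim_eps and abs(privacy_gain[i] - privacy_gain[j]) > gain_eps:
                        bad.append((i, j))
            return bad

        trajs = [["home", "work", "gym"], ["home", "work", "cafe"], ["home"] * 5]
        gains = [0.8, 0.3, 0.5]   # hypothetical per-user privacy gains
        print(individual_fairness_violations(trajs, gains))  # [(0, 1)]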

    Multidimensional adaptive order GP-WENO via kernel-based reconstruction

    This paper presents a fully multidimensional kernel-based reconstruction scheme for finite volume methods applied to systems of hyperbolic conservation laws, with a particular emphasis on the compressible Euler equations. Non-oscillatory reconstruction is achieved through an adaptive-order weighted essentially non-oscillatory (WENO-AO) method cast into a form suited to multidimensional stencils and reconstruction. A kernel-based approach inspired by Gaussian process (GP) modeling is presented here. This approach allows the creation of a scheme of arbitrary order with simply defined multidimensional stencils and substencils. Furthermore, the fully multidimensional nature of the reconstruction allows a more straightforward extension to higher spatial dimensions and removes the need for complicated boundary conditions on intermediate quantities in modified dimension-by-dimension methods. In addition, a new simple-yet-effective set of reconstruction variables is introduced, as well as an easy-to-implement and effective limiter for positivity preservation, both of which could be useful in existing schemes with little modification. The proposed scheme is applied to a suite of stringent and informative benchmark problems to demonstrate its efficacy and utility.
    Comment: Submitted to Journal of Computational Physics April 202
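
    The following Python sketch illustrates the kernel-based ingredient in a minimal form: squared-exponential (GP-style) interpolation weights for a 2D plus-shaped stencil. It is a toy under stated assumptions, using point values rather than the cell averages and WENO-AO nonlinear weighting of the actual scheme.

        import numpy as np

        # Minimal sketch of kernel (GP-style) reconstruction weights on a 2D
        # stencil, assuming a squared-exponential kernel and point values at
        # cell centres. Real GP-WENO schemes integrate the kernel over cells
        # and blend several substencils with WENO-AO nonlinear weights.

        def se_kernel(x, y, ell=1.0):
            return np.exp(-np.sum((x - y)**2) / (2 * ell**2))

        # 5-point plus-shaped stencil of cell centres (unit cell spacing).
        stencil = np.array([[0, 0], [1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)
        target = np.array([0.5, 0.0])   # e.g. midpoint of the right cell face

        K = np.array([[se_kernel(a, b) for b in stencil] for a in stencil])
        kstar = np.array([se_kernel(a, target) for a in stencil])
        w = np.linalg.solve(K, kstar)   # weights: u(target) ~ w @ u_stencil

        u = np.array([1.0, 1.5, 0.5, 1.0, 1.0])  # sample cell values
        print("weights:", np.round(w, 4), " reconstructed value:", w @ u)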

    Computational methods for 3D imaging of neural activity in light-field microscopy

    Light Field Microscopy (LFM) is a 3D imaging technique that captures spatial and angular information from light in a single snapshot. LFM is an appealing technique for applications in biological imaging due to its relatively simple implementation and fast 3D imaging speed. For instance, LFM can help to understand how neurons process information, as shown for functional neuronal calcium imaging. However, traditional volume reconstruction approaches for LFM suffer from low lateral resolution, high computational cost, and reconstruction artifacts near the native object plane. Therefore, in this thesis, we propose computational methods to improve the reconstruction performance of 3D imaging for LFM, with applications to imaging neural activity. First, we study the image formation process and propose methods for the discretization and simplification of the LF system. Discretization is typically performed by computing the discrete impulse response at different input locations defined by a sampling grid. Unlike conventional methods, we propose an approach that uses shift-invariant subspaces to generalize the discretization framework used in LFM. Our approach allows the selection of diverse sampling kernels and sampling intervals; the typical discretization method is a particular case of our formulation. Moreover, we propose a description of the system based on filter banks that fits the physics of the system. The periodic shift-invariance per depth guarantees that the system can be accurately described using filter banks. This description leads to a novel method for reducing the computational time using the singular value decomposition (SVD); our simplification method capitalizes on the inherent low-rank behaviour of the system. Furthermore, we propose rearranging our filter-bank model into a linear convolutional neural network (CNN) that allows more convenient implementation using existing deep-learning software. Then, we study the problem of 3D reconstruction from single light-field images. We propose the shift-invariant-subspace assumption as a prior for volume reconstruction under ideal conditions, and experimentally show that artifact-free (aliasing-free) reconstruction is achievable under these settings. Furthermore, the tools developed to study the forward model are exploited to design a reconstruction algorithm based on ADMM that allows artifact-free 3D reconstruction for real data. Contrary to traditional approaches, our method includes additional priors for reconstruction without dramatically increasing the computational complexity. We extensively evaluate our approach on synthetic and real data and show that it performs better than conventional model-based strategies in computational time, image quality, and artifact reduction. Finally, we exploit deep-learning techniques for reconstruction. Specifically, we propose to use two-photon imaging to enhance the performance of LFM when imaging neurons in brain tissue. The architecture of our network is derived from a sparsity-based reconstruction algorithm, the Iterative Shrinkage-Thresholding Algorithm (ISTA). Furthermore, we propose semi-supervised training based on Generative Adversarial Networks (GANs) that exploits knowledge of the forward model to achieve remarkable reconstruction quality. We propose efficient architectures to compute the forward model using linear CNNs; this allows fast computation of the forward model and complements our reconstruction approach. Our method is tested under adverse conditions: lack of training data, background noise, and non-transparent samples. We experimentally show that our method performs better than model-based reconstruction strategies and typical neural networks for imaging neuronal activity in mammalian brain tissue. Our approach enjoys both the robustness of model-based methods and the reconstruction speed of deep learning.
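
    For background on the unrolling mentioned above, the following is a minimal, textbook ISTA sketch for a generic sparse linear inverse problem; it is not the thesis's trained network, and the problem setup is an illustrative assumption.

        import numpy as np

        # Textbook ISTA for  min_x 0.5*||A x - y||^2 + lam*||x||_1,
        # the algorithm whose unrolled iterations inspire the network
        # architecture described above.

        def soft_threshold(x, t):
            return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

        def ista(A, y, lam=0.1, n_iter=200):
            L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                grad = A.T @ (A @ x - y)       # gradient of the data term
                x = soft_threshold(x - grad / L, lam / L)
            return x

        rng = np.random.default_rng(0)
        A = rng.standard_normal((40, 100))
        x_true = np.zeros(100); x_true[[5, 37, 80]] = [1.0, -2.0, 0.5]
        y = A @ x_true
        x_hat = ista(A, y, lam=0.05)
        print("recovered support:", np.nonzero(np.abs(x_hat) > 0.1)[0])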

    Composable code generation for high order, compatible finite element methods

    It has been widely recognised in HPC communities across the world that exploiting modern computer architectures, including exascale machines, to their full extent requires software communities to adapt their algorithms. Computational methods with a high ratio of floating-point operations to bandwidth are favourable. For solving partial differential equations, which can model many physical problems, high-order finite element methods can calculate approximations with high efficiency when a good solver is employed. Matrix-free algorithms solve the corresponding equations with a high arithmetic intensity. Vectorisation speeds up the operations by applying one instruction to multiple data elements. Another recent development for solving partial differential equations is compatible (mimetic) finite element methods. In particular, with application to geophysical flows, compatible discretisations exhibit the numerical properties required for accurate approximations. Among others, this has been recognised by the UK Met Office, whose new dynamical core for weather and climate forecasting is built on a compatible discretisation. Hybridisation has been proven to be an efficient solver for the corresponding equation systems, because it removes some inter-elemental coupling and localises expensive operations. This thesis combines the recent advances on vectorised, matrix-free, high-order finite element methods in the HPC community on the one hand with hybridised, compatible discretisations in the geophysical community on the other. In previous work, a code generation framework was developed to support the localised linear algebra required for hybridisation. First, the framework is adapted to support vectorisation; it is then extended so that the equations can be solved fully matrix-free. Promising performance results complete the thesis.
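
    As a minimal illustration of the matrix-free idea (applying an operator element by element without assembling a global matrix), the following Python sketch solves a 1D Poisson problem with P1 elements and a hand-rolled conjugate gradient; real high-order matrix-free codes use sum-factorised per-element kernels instead, and everything here is an illustrative assumption.

        import numpy as np

        # Matrix-free illustration: apply the 1D P1 stiffness matrix
        # (homogeneous Dirichlet BCs) element by element, never assembling the
        # global matrix, and solve -u'' = 1 on (0, 1) with conjugate gradients.

        n_el = 64                  # elements on (0, 1)
        h = 1.0 / n_el
        n = n_el - 1               # interior degrees of freedom

        def apply_A(u):
            """y = A u for the P1 stiffness matrix, computed element-wise."""
            full = np.zeros(n_el + 1); full[1:-1] = u      # include boundary zeros
            y = np.zeros(n_el + 1)
            ke = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h  # element stiffness
            for e in range(n_el):
                y[e:e+2] += ke @ full[e:e+2]
            return y[1:-1]

        def cg(apply_op, b, tol=1e-10, maxit=500):
            x = np.zeros_like(b); r = b.copy(); p = r.copy(); rs = r @ r
            for _ in range(maxit):
                Ap = apply_op(p); alpha = rs / (p @ Ap)
                x += alpha * p; r -= alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs) * p; rs = rs_new
            return x

        b = np.full(n, h)          # load vector for f = 1
        u = cg(apply_A, b)
        print("max of u (exact value 1/8 for -u'' = 1):", u.max())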