
    Extending Two-level Information Modeling to the Internet of Things

    Interoperability is a major challenge for the Internet of Things (IoT). The real potential of the IoT lies in facilitating large-scale sharing of high-quality, context-rich information through systems-of-IoT-systems, rather than IoT systems that operate as isolated technology silos. Real large-scale interoperability requires layers of standards, with each layer addressing different interoperability challenges. The SensorThings API data model seeks to tackle data interoperability at the data and informational layers of IoT platforms. The SensorThings API is aligned with the ISO/OGC O&M data standard and, like O&M, it is semi-structured. Semi-structured models allow for variance within implementations for different use cases, which is both necessary and detrimental to systems interoperability. In this paper we propose that the SensorThings API data model should be defined as a set of archetypes, used to capture extensible domain concepts using a two-level modeling IoT systems design approach. Extending two-level modeling to the IoT, using the SensorThings API as a base for domain concept definition, provides a powerful framework to manage variance within system implementations while maintaining semantic interoperability within systems-of-IoT-systems across diverse use cases.
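
A minimal sketch of the two-level idea described in this abstract: a generic, semi-structured reference entity (level one) constrained by a domain archetype (level two). The class and archetype names below are hypothetical illustrations, not the paper's actual SensorThings API archetype definitions.

```python
# Level 1: a generic, stable reference entity (loosely modeled after a
# SensorThings "Observation"). Level 2: an archetype that constrains it for
# one domain use case. All names here are hypothetical.
from dataclasses import dataclass, field
from typing import Any, Optional

@dataclass
class Observation:
    """Reference-model entity: deliberately generic and semi-structured."""
    result: Any
    observed_property: str
    unit: Optional[str] = None
    parameters: dict = field(default_factory=dict)

@dataclass
class Archetype:
    """Domain-level constraints layered on top of the reference model."""
    name: str
    observed_property: str
    allowed_units: set

    def validate(self, obs: Observation) -> list:
        errors = []
        if obs.observed_property != self.observed_property:
            errors.append(f"expected property {self.observed_property!r}")
        if obs.unit not in self.allowed_units:
            errors.append(f"unit {obs.unit!r} not in {self.allowed_units}")
        return errors

AIR_TEMPERATURE = Archetype("AirTemperature", "air_temperature", {"degC", "K"})

obs = Observation(result=21.4, observed_property="air_temperature", unit="degC")
print(AIR_TEMPERATURE.validate(obs))  # [] -> the instance conforms to the archetype
```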

    Semivariogram methods for modeling Whittle-Matérn priors in Bayesian inverse problems

    We present a new technique, based on semivariogram methodology, for obtaining point estimates for use in prior modeling for solving Bayesian inverse problems. This method requires a connection between Gaussian processes with covariance operators defined by the Matérn covariance function and Gaussian processes with precision (inverse-covariance) operators defined by the Green's functions of a class of elliptic stochastic partial differential equations (SPDEs). We present a detailed mathematical description of this connection. We show that there is an equivalence between these two Gaussian processes when the domain is infinite -- for us, ℝ² -- which breaks down when the domain is finite due to the effect of boundary conditions on Green's functions of PDEs. We show how this connection can be re-established using extended domains. We then introduce the semivariogram method for estimating the Matérn covariance parameters, which specify the Gaussian prior needed for stabilizing the inverse problem. Results are extended from the isotropic case to the anisotropic case where the correlation length in one direction is larger than in another. Finally, we consider the situation where the correlation length is spatially dependent rather than constant. We implement each method in two-dimensional image inpainting test cases to show that it works on practical examples.
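
For reference, a standard statement of the Matérn-SPDE connection the abstract builds on; the parametrization below follows the common convention and may differ in details from the paper's.

```latex
% Matérn covariance with smoothness \nu, correlation length \ell, variance \sigma^2:
c(r) \;=\; \sigma^2\,\frac{2^{1-\nu}}{\Gamma(\nu)}\,(\kappa r)^{\nu} K_{\nu}(\kappa r),
\qquad \kappa = \frac{\sqrt{2\nu}}{\ell},
% where K_\nu is the modified Bessel function of the second kind.
% On \mathbb{R}^d, a Gaussian field with this covariance solves the elliptic SPDE
(\kappa^{2} - \Delta)^{\alpha/2}\, u(x) \;=\; \mathcal{W}(x),
\qquad \alpha = \nu + d/2,
% with \mathcal{W} Gaussian white noise and the marginal variance fixed by \kappa,
% \nu and d. On a bounded domain the boundary conditions perturb the Green's
% function, which is exactly the discrepancy the paper repairs via extended domains.
```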

    SMAGEXP: a galaxy tool suite for transcriptomics data meta-analysis

    Background: With the proliferation of available microarray and high-throughput sequencing experiments in the public domain, the use of meta-analysis methods is increasing. In these experiments, where the sample size is often limited, meta-analysis offers the possibility to considerably enhance the statistical power and give more accurate results. For those purposes, it combines either effect sizes or the results of single studies in an appropriate manner. The R packages metaMA and metaRNASeq perform meta-analysis on microarray and NGS data, respectively. They are not interchangeable, as they rely on statistical modeling specific to each technology. Results: SMAGEXP (Statistical Meta-Analysis for Gene EXPression) integrates the metaMA and metaRNASeq packages into Galaxy. We aim to propose a unified way to carry out meta-analysis of gene expression data, while taking care of their specificities. We have developed this tool suite to analyse microarray data from the Gene Expression Omnibus (GEO) database or custom data from Affymetrix microarrays. These data are then combined to carry out meta-analysis using the metaMA package. SMAGEXP also allows combining raw read counts from Next Generation Sequencing (NGS) experiments using DESeq2 and the metaRNASeq package. In both cases, key values, independent of the technology type, are reported to judge the quality of the meta-analysis. These tools are available on the Galaxy main tool shed. Source code, help and installation instructions are available on GitHub. Conclusion: The use of Galaxy offers an easy-to-use gene expression meta-analysis tool suite based on the metaMA and metaRNASeq packages.
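
Packages of this kind typically combine per-study results with standard rules; for reference (not as SMAGEXP's exact formulas), two common p-value combination statistics for k independent studies with p-values p_i are:

```latex
% Fisher's method: under the null of no effect in any study,
X^2 \;=\; -2\sum_{i=1}^{k}\ln p_i \;\sim\; \chi^2_{2k}.
% Inverse-normal (Stouffer-type) combination with study weights w_i
% (often w_i = \sqrt{n_i}):
Z \;=\; \frac{\sum_{i=1}^{k} w_i\,\Phi^{-1}(1-p_i)}{\sqrt{\sum_{i=1}^{k} w_i^{2}}}
\;\sim\; \mathcal{N}(0,1).
```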

    An Automated Planning Model for HRI: Use Cases on Social Assistive Robotics

    Using Automated Planning for the high-level control of robotic architectures is becoming very popular, thanks mainly to its capability to define the tasks to perform in a declarative way. However, classical planning tasks, even in their basic standard Planning Domain Definition Language (PDDL) format, are still very hard for non-expert engineers to formalize when the use case to model is complex. Human-Robot Interaction (HRI) is one of those complex environments. This manuscript describes the rationale followed to design a planning model able to control social autonomous robots interacting with humans. It is the result of the authors’ experience in modeling use cases for Social Assistive Robotics (SAR) in two areas related to healthcare: Comprehensive Geriatric Assessment (CGA) and non-contact rehabilitation therapies for patients with physical impairments. In this work, a general definition of these two use cases in a single planning domain is proposed, which eases the management and integration with the robotic software architecture, as well as the addition of new use cases. Results show that the model is able to capture all the relevant aspects of the Human-Robot Interaction in those scenarios, allowing the robot to autonomously perform the tasks by using a standard planning-execution architecture. This work has been partially funded by the European Union ECHORD++ project (FP7-ICT-601116), and grants TIN2017-88476-C2-2-R and RTI2018-099522-B-C43 of FEDER/Ministerio de Ciencia e Innovación-Ministerio de Universidades-Agencia Estatal de Investigación. Javier García is partially supported by the Comunidad de Madrid funds under the project 2016-T2/TIC-1712.
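
To make the "declarative task definition" point concrete, here is a minimal STRIPS-style sketch: actions are declared as preconditions and effects, and a generic forward search produces the plan. The toy domain below is hypothetical and far simpler than the paper's CGA/rehabilitation PDDL domain.

```python
# Tiny declarative planning domain and a breadth-first forward-search planner.
from collections import deque
from typing import NamedTuple

class Action(NamedTuple):
    name: str
    pre: frozenset      # facts that must hold before the action
    add: frozenset      # facts added by the action
    delete: frozenset   # facts removed by the action

ACTIONS = [
    Action("approach_patient", frozenset({("robot_at", "dock")}),
           frozenset({("robot_at", "patient")}), frozenset({("robot_at", "dock")})),
    Action("greet", frozenset({("robot_at", "patient")}),
           frozenset({("patient_engaged",)}), frozenset()),
    Action("run_assessment", frozenset({("patient_engaged",)}),
           frozenset({("assessment_done",)}), frozenset()),
]

def plan(initial: frozenset, goal: frozenset):
    """Breadth-first search over applicable actions (suitable for toy domains only)."""
    frontier = deque([(initial, [])])
    seen = {initial}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:
            return steps
        for a in ACTIONS:
            if a.pre <= state:
                nxt = (state - a.delete) | a.add
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [a.name]))
    return None

print(plan(frozenset({("robot_at", "dock")}), frozenset({("assessment_done",)})))
# -> ['approach_patient', 'greet', 'run_assessment']
```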

    MITK-ModelFit: A generic open-source framework for model fits and their exploration in medical imaging -- design, implementation and application on the example of DCE-MRI

    Many medical imaging techniques utilize fitting approaches for quantitative parameter estimation and analysis. Common examples are pharmacokinetic modeling in DCE MRI/CT, ADC calculations and IVIM modeling in diffusion-weighted MRI, and Z-spectra analysis in chemical exchange saturation transfer MRI. Most available software tools are limited to a special purpose and do not allow for custom developments and extensions. Furthermore, they are mostly designed as stand-alone solutions using external frameworks and thus cannot be easily incorporated natively into the analysis workflow. We present a framework for medical image fitting tasks that is included in MITK, following a rigorous open-source, well-integrated and operating-system-independent policy. From a software engineering perspective, the local models, the fitting infrastructure and the results representation are abstracted and can thus be easily adapted to any model fitting task on image data, independent of image modality or model. Several ready-to-use libraries for model fitting and use cases, including fit evaluation and visualization, were implemented. Their embedding into MITK allows for easy data loading, pre- and post-processing and thus a natural inclusion of model fitting into an overarching workflow. As an example, we present a comprehensive set of plug-ins for the analysis of DCE MRI data, which we validated on existing and novel digital phantoms, yielding competitive deviations between fit and ground truth. Providing a very flexible environment, our software mainly addresses developers of medical imaging software that includes model fitting algorithms and tools. Additionally, the framework is of high interest to users in the domain of perfusion MRI, as it offers feature-rich, freely available, validated tools to perform pharmacokinetic analysis on DCE MRI data, with both interactive and automated batch processing workflows. Comment: 31 pages, 11 figures. URL: http://mitk.org/wiki/MITK-ModelFi
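
MITK-ModelFit itself is a C++/MITK framework; as a minimal, framework-independent illustration of the kind of voxel-wise fit it abstracts, the sketch below fits the mono-exponential ADC model mentioned in the abstract to one simulated voxel using SciPy.

```python
# Voxel-wise non-linear least-squares fit of the ADC diffusion signal model.
import numpy as np
from scipy.optimize import curve_fit

def adc_model(b, s0, adc):
    """Mono-exponential diffusion signal model: S(b) = S0 * exp(-b * ADC)."""
    return s0 * np.exp(-b * adc)

# Simulated signal for one voxel (b-values in s/mm^2, ADC in mm^2/s).
b_values = np.array([0.0, 200.0, 400.0, 600.0, 800.0])
rng = np.random.default_rng(0)
signal = adc_model(b_values, 1000.0, 1.0e-3) + rng.normal(0.0, 5.0, b_values.size)

# Fit with a rough initial guess; popt holds the estimated (S0, ADC).
popt, pcov = curve_fit(adc_model, b_values, signal, p0=(signal[0], 1e-3))
print(f"fitted S0 = {popt[0]:.1f}, ADC = {popt[1]:.2e} mm^2/s")
```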

    Language Design for Reactive Systems: On Modal Models, Time, and Object Orientation in Lingua Franca and SCCharts

    Reactive systems play a crucial role in the embedded domain. They continuously interact with their environment, handle concurrent operations, and are commonly expected to provide deterministic behavior to enable application in safety-critical systems. In this context, language design is a key aspect, since carefully tailored language constructs can aid in addressing the challenges faced in this domain, as illustrated by the various concurrency models that prevent the known pitfalls of regular threads. Today, many languages exist in this domain and often provide unique characteristics that make them specifically fit for certain use cases. This thesis revolves around two distinctive languages: the actor-oriented polyglot coordination language Lingua Franca and the synchronous statecharts dialect SCCharts. While they take different approaches in providing reactive modeling capabilities, they share clear similarities in their semantics and complement each other in design principles. This thesis analyzes and compares key design aspects in the context of these two languages. For three particularly relevant concepts, it provides and evaluates lean and seamless language extensions that are carefully aligned with the fundamental principles of the underlying language. Specifically, Lingua Franca is extended toward coordinating modal behavior, while SCCharts receives a timed automaton notation with an efficient execution model using dynamic ticks and an extension toward the object-oriented modeling paradigm.

    Semantic-guided predictive modeling and relational learning within industrial knowledge graphs

    The ubiquitous availability of data in today’s manufacturing environments, mainly driven by the extended usage of software and built-in sensing capabilities in automation systems, enables companies to embrace more advanced predictive modeling and analysis in order to optimize processes and usage of equipment. While the potential insight gained from such analysis is high, it often remains untapped, since integration and analysis of data silos from different production domains require high manual effort and are therefore not economic. Addressing these challenges, digital representations of production equipment, so-called digital twins, have emerged, leading the way to semantic interoperability across systems in different domains. From a data modeling point of view, digital twins can be seen as industrial knowledge graphs, which are used as the semantic backbone of manufacturing software systems and data analytics. Due to the prevalent, historically grown and scattered manufacturing software system landscape, which comprises numerous proprietary information models, data sources are highly heterogeneous. Therefore, there is an increasing need for semi-automatic support in data modeling, enabling end-user engineers to model their domain and maintain a unified semantic knowledge graph across the company. Once data modeling and integration are done, further challenges arise, since there has been little research on how knowledge graphs can contribute to the simplification and abstraction of statistical analysis and predictive modeling, especially in manufacturing. In this thesis, new approaches for modeling and maintaining industrial knowledge graphs with a focus on the application of statistical models are presented. First, concerning data modeling, we discuss requirements from several existing standard information models and analytic use cases in the manufacturing and automation system domains and derive a fragment of the OWL 2 language that is expressive enough to cover the required semantics for a broad range of use cases. The prototypical implementation enables domain end-users, i.e. engineers, to extend the base ontology model with intuitive semantics. Furthermore, it supports efficient reasoning and constraint checking via translation to rule-based representations. Based on these models, we propose an architecture for the end-user-facilitated application of statistical models using ontological concepts and ontology-based data access paradigms. In addition, we present an approach for domain-knowledge-driven preparation of predictive models in terms of feature selection and show how schema-level reasoning in the OWL 2 language can be employed for this task within knowledge graphs of industrial automation systems. A production cycle time prediction model in an example application scenario serves as a proof of concept and demonstrates that axiomatized domain knowledge about features can give competitive performance compared to purely data-driven approaches. In the case of high-dimensional data with small sample size, we show that graph kernels of domain ontologies can provide additional information on the degree of variable dependence. Furthermore, a special application of feature selection in graph-structured data is presented, and we develop a method that allows incorporating domain constraints derived from meta-paths in knowledge graphs into a branch-and-bound pattern enumeration algorithm.
Lastly, we discuss maintenance of facts in large-scale industrial knowledge graphs, focusing on latent variable models for the automated population and completion of missing facts. State-of-the-art approaches cannot deal with time-series data in the form of events that naturally occur in industrial applications. Therefore, we present an extension of knowledge graph embedding learning that works in conjunction with data in the form of event logs. Finally, we design several use case scenarios of missing information and evaluate our embedding approach on data coming from a real-world factory environment. We draw the conclusion that industrial knowledge graphs are a powerful tool that can be used by end-users in the manufacturing domain for data modeling and model validation. They are especially suitable for facilitating the application of statistical models in conjunction with background domain knowledge, by providing information about features upfront. Furthermore, relational learning approaches showed great potential to semi-automatically infer missing facts and provide recommendations to production operators on how to keep stored facts in sync with the real world.
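
As a minimal illustration of the embedding-based (latent variable) link prediction this line of work builds on, the sketch below scores candidate facts TransE-style; the entities and triples are hypothetical, and the thesis's event-log extension is not shown.

```python
# TransE-style scoring for knowledge graph completion (illustrative only).
import numpy as np

entities = ["pump_17", "line_3", "sensor_A", "plant_north"]
relations = ["installed_in", "located_in"]
triples = [("pump_17", "installed_in", "line_3"),
           ("line_3", "located_in", "plant_north")]

dim = 16
rng = np.random.default_rng(0)
# In practice E and R are trained, e.g. with a margin ranking loss over the
# observed triples; random vectors keep this sketch short.
E = {e: rng.normal(size=dim) for e in entities}    # entity embeddings
R = {r: rng.normal(size=dim) for r in relations}   # relation embeddings

def score(h, r, t):
    """TransE score: higher (less negative) means h + r is closer to t."""
    return -np.linalg.norm(E[h] + R[r] - E[t])

# Predict the missing tail of (sensor_A, installed_in, ?) by ranking candidates.
candidates = sorted(entities, key=lambda t: score("sensor_A", "installed_in", t),
                    reverse=True)
print(candidates[:3])
```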

    Modeling strategies for multiple scenarios and fast simulations in large systems: applications to fire safety and energy engineering

    The use of computational modeling has become very popular and important in many engineering and physical fields, as it is considered a fast and inexpensive technique to support, and often substitute for, experimental analysis. In fact, system design and analysis can be carried out through computational studies instead of experiments, which are typically demanding in terms of cost and technical resources; sometimes the system characteristics and technical problems make experiments impossible to perform, and the use of computational tools is the only feasible option. The demand for resources for realistic simulation is increasing due to the interest in studying complex and large systems. In this framework, smart modeling approaches and model reduction techniques play a crucial role in making complex and large systems suitable for simulation. Moreover, it should be considered that often more than one simulation is required to perform an analysis. For instance, if a heuristic method is applied to the optimization of a component, the model has to be run a certain number of times. The same problem arises when a certain level of uncertainty affects the system parameters; in this case, many simulations are also required to obtain the desired information. This is why techniques that allow obtaining compact models are an interesting topic nowadays. In this PhD thesis, different reduction approaches and strategies have been used to analyze three energy systems involving large domains and long time scales, one for each category of reduction approach. In all the topics considered, a smart model has been adopted and, when data were available, tested against experimental data. All the models are characterized by large domains and long analysis times; therefore, a method for obtaining a compact model is used in all cases. The considered topics are:
    • Groundwater temperature perturbations due to geothermal heat pump installations, analyzed through a multi-level model.
    • District heating networks (DHN), studied from both the fluid-dynamic and thermal points of view and applied to one of the largest networks in Europe, the Turin district heating system (DHS), through a Proper Orthogonal Decomposition - Radial Basis Function model.
    • Forest fire propagation simulation, carried out using a Proper Orthogonal Decomposition projection model.
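
A minimal sketch of the POD + RBF surrogate idea mentioned for the district heating case, on a synthetic one-dimensional snapshot set (illustrative only; it assumes SciPy's RBFInterpolator and is not the thesis's DHN model):

```python
# Proper Orthogonal Decomposition basis + RBF interpolation of reduced coordinates.
import numpy as np
from scipy.interpolate import RBFInterpolator

# Snapshot matrix: each column is a full-order solution for one parameter value.
params = np.linspace(0.1, 1.0, 10)[:, None]          # training parameters
x = np.linspace(0.0, 1.0, 200)
snapshots = np.column_stack([np.exp(-x / p) for p in params[:, 0]])

# POD basis: truncated left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 4                                                 # reduced dimension
basis = U[:, :r]

# Reduced coordinates of each snapshot, interpolated over the parameter with RBFs.
coords = basis.T @ snapshots                          # shape (r, n_snapshots)
rbf = RBFInterpolator(params, coords.T)               # maps parameter -> coordinates

# Fast surrogate evaluation at a new parameter value.
p_new = np.array([[0.37]])
u_approx = basis @ rbf(p_new).ravel()
u_exact = np.exp(-x / 0.37)
print("relative error:", np.linalg.norm(u_approx - u_exact) / np.linalg.norm(u_exact))
```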

    TranspLanMeta: A metamodel for TranspLan modeling language

    Transparency and transparent decision making are essential requirements in information systems. To this end, a modeling language called TranspLan has been proposed. TranspLan is a domain-specific modeling language designed for analysing and modeling transparency requirements in information systems. This paper presents a metamodel for transparency requirements modeling. We introduce a model-driven approach to the TranspLan language specification to facilitate more efficient use of the language in real-life cases. Metamodeling is an effective method for formally defining domain-specific languages and moving from specifications to computer-aided modeling. In this paper, we propose a metamodel for the TranspLan modeling language, called TranspLanMeta. The metamodeling process helps us transfer the TranspLan language specification into a machine-readable format. The metamodel has been developed with GME (Generic Modeling Environment), which is a configurable toolkit for creating domain-specific modeling and program synthesis environments. By developing TranspLanMeta with GME, an automatically generated modeling tool for the TranspLan language is provided as well. In this way, an effective approach for accelerating software development is followed, and the auto-generated modeling editor is used to define various models. This work provides a formal and practical solution for transparency modeling and a well-defined basis for using transparency requirements models in the further steps of the business process.
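
A tiny illustration of the metamodel/model conformance relationship that GME-style tools enforce; the element kinds and relation names below are hypothetical, not TranspLanMeta's actual constructs.

```python
# Metamodel level: which element kinds exist and which connections are legal;
# model level: concrete elements and connections checked against the metamodel.
from dataclasses import dataclass

METAMODEL = {
    "kinds": {"Stakeholder", "InformationElement"},
    "connections": {("Stakeholder", "requires", "InformationElement")},
}

@dataclass
class Element:
    name: str
    kind: str

@dataclass
class Connection:
    src: Element
    relation: str
    dst: Element

def conforms(elements, connections, metamodel):
    """Check that a model only uses kinds and connections the metamodel allows."""
    ok_kinds = all(e.kind in metamodel["kinds"] for e in elements)
    ok_conns = all((c.src.kind, c.relation, c.dst.kind) in metamodel["connections"]
                   for c in connections)
    return ok_kinds and ok_conns

citizen = Element("citizen", "Stakeholder")
rationale = Element("decision_rationale", "InformationElement")
model = [Connection(citizen, "requires", rationale)]
print(conforms([citizen, rationale], model, METAMODEL))  # True
```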