
    Ontology-Based Consistent Specification of Sensor Data Acquisition Plans in Cross-Domain IoT Platforms

    Nowadays there is a high number of IoT applications that can seldom interact with each other because they are developed within different Vertical IoT Platforms that adopt different standards. Several efforts are devoted to the construction of cross-layered frameworks that facilitate interoperability among cross-domain IoT platforms for the development of horizontal applications. Even if their realization poses different challenges across all layers of the network stack, in this paper we focus on the interoperability issues that arise at the data management layer. Specifically, starting from a flexible multi-granular Spatio-Temporal-Thematic data model according to which events generated by different kinds of sensors can be represented, we propose a Semantic Virtualization approach in which the sensors belonging to different IoT platforms and the schemas of the produced event streams are described in a Domain Ontology, obtained by extending the well-known Semantic Sensor Network ontology. These sensors can then be exploited for the creation of Data Acquisition Plans by means of which the streams of events can be filtered, merged, and aggregated in a meaningful way. A notion of consistency is introduced to bind the output streams of the services contained in the Data Acquisition Plan with the Domain Ontology in order to provide a semantic description of its final output. When these plans meet the consistency constraints, the data they handle are well described at the ontological level, and the data acquisition process thus overcomes the interoperability barriers present in the original sources. The facilities of the StreamLoader prototype are finally presented for supporting the user in the Semantic Virtualization process and in the construction of meaningful Data Acquisition Plans.
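
    As a rough illustration of the Semantic Virtualization step, the sketch below registers a sensor exposed by one platform in a domain ontology that specialises the SSN/SOSA vocabulary, using Python and rdflib. The domain namespace, class names, and the belongsToPlatform property are hypothetical placeholders, not the ontology actually used in the paper.

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS

SSN = Namespace("http://www.w3.org/ns/ssn/")
SOSA = Namespace("http://www.w3.org/ns/sosa/")
DOM = Namespace("http://example.org/smartcity#")  # hypothetical domain namespace

g = Graph()
g.bind("ssn", SSN)
g.bind("sosa", SOSA)
g.bind("dom", DOM)

# A domain-specific sensor class specialising the standard Sensor class
g.add((DOM.TrafficCameraSensor, RDFS.subClassOf, SOSA.Sensor))

# A concrete sensor exposed by one of the federated IoT platforms
sensor = DOM.camera42
g.add((sensor, RDF.type, DOM.TrafficCameraSensor))
g.add((sensor, SOSA.observes, DOM.VehicleCount))
g.add((sensor, DOM.belongsToPlatform, Literal("PlatformA")))  # provenance of the source platform

print(g.serialize(format="turtle"))
```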

    Ontology-based Consistent Specification and Scalable Execution of Sensor Data Acquisition Plans in Cross-Domain IoT Platforms

    Nowadays an increasing number of vertical Internet of Things (IoT) applications are developed within IoT Platforms that often do not interact with each other because they adopt different standards and formats. Several efforts are devoted to the construction of software infrastructures that facilitate interoperability among heterogeneous cross-domain IoT platforms for the realization of horizontal applications. Even if their realization poses different challenges across all layers of the network stack, in this thesis we focus on the interoperability issues that arise at the data management layer. Starting from a flexible multi-granular Spatio-Temporal-Thematic data model according to which events generated by different kinds of sensors can be represented, we propose a Semantic Virtualization approach in which the sensors belonging to different IoT platforms and the schemas of the produced event streams are described in a Domain Ontology, obtained by extending well-known ontologies (the SSN and IoT-Lite ontologies) to the needs of a specific domain. These sensors can then be exploited for the creation of Data Acquisition Plans (DAPs) by means of which the streams of events can be filtered, merged, and aggregated in a meaningful way. Notions of soundness and consistency are introduced to bind the output streams of the services contained in the DAP with the Domain Ontology in order to provide a semantic description of its final output. The facilities of the StreamLoader prototype are finally presented for supporting domain experts in the Semantic Virtualization of the sensors and in the construction of meaningful DAPs. Different graphical facilities have been developed to support domain experts in the development of complex DAPs. The system also provides facilities for their syntax-based translation into the Apache Spark Streaming language and for their real-time execution on a distributed cluster of machines.
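
    Since the abstract mentions translating DAPs into the Apache Spark Streaming language, the following is a minimal PySpark Structured Streaming sketch of the three DAP operators it names (filter, merge, aggregate) over two hypothetical event streams, assuming a simple sensor_id/property/value/event_time schema and local socket sources. It is an illustrative analogue, not the translation produced by StreamLoader.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dap-sketch").getOrCreate()

# Assumed event schema for the virtualized sensor streams
EVENT_SCHEMA = "sensor_id STRING, property STRING, value DOUBLE, event_time TIMESTAMP"

def read_events(port):
    """Read a JSON event stream from a hypothetical per-platform adapter on localhost:<port>."""
    raw = (spark.readStream.format("socket")
           .option("host", "localhost").option("port", port).load())
    return raw.select(F.from_json("value", EVENT_SCHEMA).alias("e")).select("e.*")

platform_a = read_events(9001)
platform_b = read_events(9002)

# Filter each stream, merge them, and aggregate over a time window --
# the three DAP operators named in the abstract.
merged = (platform_a.where(F.col("property") == "temperature")
          .unionByName(platform_b.where(F.col("property") == "temperature")))

averaged = (merged
            .withWatermark("event_time", "1 minute")
            .groupBy(F.window("event_time", "5 minutes"), "sensor_id")
            .agg(F.avg("value").alias("avg_temperature")))

query = averaged.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```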

    Towards a maintenance semantic architecture.

    Technological and software progress, together with the evolution of processes within companies, have highlighted the need to evolve maintenance systems from autonomous systems to cooperative, information-sharing systems based on software platforms. However, this need has given rise to various maintenance platforms. The first part of this study investigates the different types of existing industrial platforms and characterizes them according to two criteria, namely information exchange and relationship intensity. This allowed identifying the e-maintenance architecture as the most efficient current architecture. Despite its effectiveness, the latter can only guarantee technical interoperability between the various components. Therefore, the second part of this study proposes a semantic-knowledge-based architecture, thereby ensuring a higher level of semantic interoperability. To this end, a specific maintenance ontology has been developed.
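
    To make the idea of semantic interoperability through a shared maintenance ontology concrete, here is a minimal rdflib sketch in which records from different maintenance platforms are expressed in a common vocabulary and queried uniformly with SPARQL. The class and property names are hypothetical illustrations, not those of the ontology developed in the paper.

```python
from rdflib import Graph, Namespace, RDF

# Hypothetical shared maintenance vocabulary
MNT = Namespace("http://example.org/maintenance#")

g = Graph()
g.bind("mnt", MNT)

# Records originating from two different maintenance platforms, mapped onto the shared vocabulary
g.add((MNT.pump7, RDF.type, MNT.Equipment))
g.add((MNT.pump7, MNT.hasFailureMode, MNT.BearingWear))
g.add((MNT.order15, RDF.type, MNT.WorkOrder))
g.add((MNT.order15, MNT.targets, MNT.pump7))

# Once both platforms publish triples in the shared vocabulary, a single query spans them
results = g.query("""
    PREFIX mnt: <http://example.org/maintenance#>
    SELECT ?order ?failure WHERE {
        ?order a mnt:WorkOrder ; mnt:targets ?eq .
        ?eq mnt:hasFailureMode ?failure .
    }
""")
for order, failure in results:
    print(order, failure)
```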

    Coalition Battle Management Language (C-BML) Study Group Final Report

    Interoperability across Modeling and Simulation (M&S) and Command and Control (C2) systems continues to be a significant problem for today's warfighters. M&S is well established in military training, but it could also be a valuable asset for planning and mission rehearsal if M&S and C2 systems were able to exchange information, plans, and orders more effectively. To better support the warfighter with M&S-based capabilities, an open, standards-based framework is needed that establishes operational and technical coherence between C2 and M&S systems.

    Multi-Agent Systems

    A multi-agent system (MAS) is a system composed of multiple interacting intelligent agents. Multi-agent systems can be used to solve problems that are difficult or impossible for an individual agent or a monolithic system to solve. Agent systems are open and extensible systems that allow for the deployment of autonomous and proactive software components. Multi-agent systems have been adopted in several application domains.
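
    A minimal sketch of two interacting agents, each with its own mailbox, exchanging counter-offers until agreement. The negotiation protocol and the asyncio-based message passing are illustrative assumptions, not a reference to any specific MAS framework.

```python
import asyncio

class Agent:
    """A minimal autonomous agent: it owns a mailbox and reacts to incoming messages."""
    def __init__(self, name, registry):
        self.name = name
        self.inbox = asyncio.Queue()
        self.registry = registry
        registry[name] = self

    async def send(self, to, content):
        await self.registry[to].inbox.put((self.name, content))

    async def run(self):
        while True:
            sender, content = await self.inbox.get()
            if content == "accepted":
                break
            if content > 10:
                await self.send(sender, content - 1)     # counter-offer
            else:
                print(f"{self.name} accepts offer {content} from {sender}")
                await self.send(sender, "accepted")
                break

async def main():
    registry = {}
    buyer, seller = Agent("buyer", registry), Agent("seller", registry)
    await buyer.send("seller", 15)                       # opening offer
    await asyncio.gather(buyer.run(), seller.run())

asyncio.run(main())
```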

    Combining computer vision and deep learning to enable ultra-scale aerial phenotyping and precision agriculture: A case study of lettuce production: AirSurf-Lettuce

    Aerial imagery is regularly used by crop researchers, growers and farmers to monitor crops during the growing season. To extract meaningful information from large-scale aerial images collected from the field, high-throughput phenotypic analysis solutions are required, which not only produce high-quality measures of key crop traits, but also support professionals in making prompt and reliable crop management decisions. Here, we report AirSurf, an automated and open-source analytic platform that combines modern computer vision, up-to-date machine learning, and modular software engineering in order to measure yield-related phenotypes from ultra-large aerial imagery. To quantify millions of in-field lettuces acquired by fixed-wing light aircraft equipped with normalised difference vegetation index (NDVI) sensors, we customised AirSurf by combining computer vision algorithms and a deep-learning classifier trained with over 100,000 labelled lettuce signals. The tailored platform, AirSurf-Lettuce, is capable of scoring and categorising iceberg lettuces with high accuracy (>98%). Furthermore, novel analysis functions have been developed to map lettuce size distribution across the field, based on which associated global positioning system (GPS) tagged harvest regions have been identified, enabling growers and farmers to conduct precision agricultural practices in order to improve the actual yield as well as crop marketability before the harvest.
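
    A rough sketch of the sliding-window classification idea described above: a small CNN labels fixed-size NDVI patches, and the window is slid across the field image. The window size, stride, network shape, and class labels are assumptions for illustration, not the published AirSurf-Lettuce implementation.

```python
import numpy as np
import tensorflow as tf

WIN, STRIDE = 20, 10   # assumed patch size and stride in pixels

def build_classifier():
    """Small CNN that labels a WINxWIN NDVI patch as background / small / medium / large lettuce."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(WIN, WIN, 1)),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(4, activation="softmax"),
    ])

def score_field(ndvi, model):
    """Slide a window over the NDVI image and keep the class and position of each detection."""
    detections = []
    for y in range(0, ndvi.shape[0] - WIN, STRIDE):
        for x in range(0, ndvi.shape[1] - WIN, STRIDE):
            patch = ndvi[y:y + WIN, x:x + WIN][None, ..., None]
            probs = model.predict(patch, verbose=0)[0]
            label = int(np.argmax(probs))
            if label != 0:                               # 0 = background
                detections.append((y, x, label, float(probs[label])))
    return detections

model = build_classifier()                               # in practice: load trained weights
field = np.random.rand(200, 200).astype("float32")       # stand-in for a real NDVI tile
print(len(score_field(field, model)), "candidate lettuces")
```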

    Approximate Assertional Reasoning Over Expressive Ontologies

    In this thesis, approximate reasoning methods for scalable assertional reasoning are provided whose computational properties can be established in a well-understood way, namely in terms of soundness and completeness, and whose quality can be analyzed in terms of statistical measurements, namely recall and precision. The basic idea of these approximate reasoning methods is to speed up reasoning by trading off the quality of reasoning results against increased speed.
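
    The quality measures mentioned above can be made concrete with a small sketch that compares the answers of an approximate reasoner against those of a complete one; the example answer sets are of course hypothetical.

```python
def precision_recall(approx_answers: set, exact_answers: set):
    """Compare the instances returned by an approximate reasoner with those of a complete one."""
    true_pos = len(approx_answers & exact_answers)
    precision = true_pos / len(approx_answers) if approx_answers else 1.0
    recall = true_pos / len(exact_answers) if exact_answers else 1.0
    return precision, recall

# A sound-but-incomplete approximation never returns wrong instances (precision = 1.0)
# but may miss some (recall < 1.0); the converse holds for complete-but-unsound methods.
approx = {"alice", "bob"}
exact = {"alice", "bob", "carol"}
print(precision_recall(approx, exact))   # (1.0, 0.666...)
```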

    Semantic model-driven framework for validating quality requirements of Internet of Things streaming data

    The rise of the Internet of Things (IoT) has provided platforms largely enhanced by real-time data-driven services for reactive applications and Smart City innovations. However, IoT streaming data are known to be compromised by quality problems, which influence the performance and accuracy of IoT-based reactive services and smart applications. This research investigates the suitability of the semantic approach for the run-time validation of IoT streaming data against quality problems. To realise this aim, Semantic IoT Streaming Data Validation and its framework (SISDaV) are proposed. The novel approach involves technologies for semantic query and reasoning, with semantic rules defined over established relationships with external data sources and with consideration for specific run-time events that can influence the quality of streams. The work specifically targets quality issues relating to inconsistency, plausibility, and incompleteness in IoT streaming data. In particular, the investigation covers various RDF stream processing and rule-based reasoning techniques and the effects of RDF serialisation formats on the reasoning process. The contributions of the work include a hierarchy of IoT data stream quality problems; a lightweight evolving Smart Space and Sensor Measurement Ontology; generic time-aware validation rules; and the SISDaV framework, a unified semantic rule-based validation system for RDF-based IoT streaming data that combines a popular RDF stream processing system with generic, enhanced time-aware rules. The semantic validation process ensures the conformance of the raw streaming data values produced by the IoT nodes with the IoT streaming data quality requirements and the expected values. This is facilitated through a set of generic continuous validation rules, realised by extending the popular Jena rule syntax with a time element. The comparative evaluation of SISDaV assesses its effectiveness and efficiency with respect to the expressivity of the different serialised RDF data formats. The results are interpreted with relevant statistical estimations and performance metrics. The evaluation confirms the feasibility of the framework in terms of containing the semantic validation process within the interval between reads of sensor nodes, as well as its provision of additional requirements that can enhance IoT streaming data processing systems and that are currently missing in most related state-of-the-art RDF stream processing systems. Furthermore, the approach satisfies the main research objectives identified by the study.
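
    As an illustration of the kind of time-aware checks described above (plausibility, completeness, and consistency of successive reads), the sketch below validates consecutive sensor readings in plain Python. The thresholds are assumptions, and SISDaV itself expresses such rules as Jena-style rules extended with a time element over RDF streams rather than in this form.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class Reading:
    sensor_id: str
    value: Optional[float]
    timestamp: datetime

PLAUSIBLE_RANGE = (-40.0, 60.0)          # assumed bounds for an outdoor temperature sensor
MAX_GAP = timedelta(seconds=30)          # assumed maximum interval between reads

def validate(reading: Reading, previous: Optional[Reading]) -> list[str]:
    """Return the quality problems detected for one reading, given the previous one."""
    problems = []
    if reading.value is None:
        problems.append("incomplete: missing value")
    elif not (PLAUSIBLE_RANGE[0] <= reading.value <= PLAUSIBLE_RANGE[1]):
        problems.append(f"implausible: {reading.value} outside {PLAUSIBLE_RANGE}")
    if previous is not None:
        if reading.timestamp <= previous.timestamp:
            problems.append("inconsistent: non-monotonic timestamp")
        elif reading.timestamp - previous.timestamp > MAX_GAP:
            problems.append("incomplete: gap between reads exceeds expected interval")
    return problems

prev = Reading("t1", 21.5, datetime(2024, 1, 1, 12, 0, 0))
curr = Reading("t1", 95.0, datetime(2024, 1, 1, 12, 2, 0))
print(validate(curr, prev))   # reports an implausible value and an over-long gap
```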