13 research outputs found

    TriAL: A navigational algebra for RDF triplestores

    Data integration support for offshore decommissioning waste management

    Offshore oil and gas platforms have a design life of about 25 years, whereas the techniques and tools used for managing their data are constantly evolving. Data captured about platforms during their lifetimes will therefore be in varying forms. Additionally, because many stakeholders are involved with a facility over its life cycle, the information representation of its components varies. These challenges make data integration difficult. Over the years, data integration technology in the oil and gas industry has focused on the needs of asset life cycle stages other than decommissioning, because most assets are only now reaching the end of their design lives. To date, limited work has been done on integrating life cycle data for offshore decommissioning purposes, and reports by industry stakeholders underscore this need. This thesis proposes a method for integrating the common data types relevant to oil and gas decommissioning. The key features of the method are that it (i) ensures semantic homogeneity using knowledge representation languages (Semantic Web) and domain-specific reference data (ISO 15926); and (ii) allows stakeholders to continue using their current applications. Prototypes of the framework have been implemented using open source software, and performance measurements made. The work of this thesis is motivated by the business case of reusing offshore decommissioning waste items. The framework is generic and can be applied whenever there is a need to integrate and query disparate data involving oil and gas assets. The prototypes presented show how the data management challenges of assessing decommissioned offshore facility items for reuse can be addressed. The performance of the prototypes shows that significant time and effort are saved compared to the state-of-the-art solution. The ability to do this effectively and efficiently during decommissioning will advance the oil and gas industry's transition toward a circular economy and help save on cost.
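The core of the integration method described above is mapping heterogeneous source records onto shared reference classes so that one query spans all sources. A minimal sketch of that idea, assuming two hypothetical asset records and an illustrative stand-in for an ISO 15926 reference class (all identifiers here are invented, not from the thesis):

```python
# Hypothetical sketch: mapping two disparate asset records into a shared
# triple model (in the spirit of RDF with ISO 15926-style reference data)
# so that one query spans both sources. All names are illustrative.

# Source A describes a platform item with its own field names.
source_a = {"tag": "P-101", "kind": "centrifugal pump", "mass_kg": 450}
# Source B describes the same class of item with different conventions.
source_b = {"id": "P-102", "type": "pump/centrifugal", "weight": "450kg"}

REF_CLASS = "ref:CentrifugalPump"  # stand-in for a reference-data class

def a_to_triples(rec):
    s = f"asset:{rec['tag']}"
    return [(s, "rdf:type", REF_CLASS), (s, "ref:massKg", rec["mass_kg"])]

def b_to_triples(rec):
    s = f"asset:{rec['id']}"
    mass = int(rec["weight"].rstrip("kg"))  # normalise the unit-suffixed value
    return [(s, "rdf:type", REF_CLASS), (s, "ref:massKg", mass)]

graph = a_to_triples(source_a) + b_to_triples(source_b)

# One query now spans both sources: all centrifugal pumps under 500 kg,
# e.g. as a reuse-suitability screen during decommissioning.
pumps = {s for (s, p, o) in graph if p == "rdf:type" and o == REF_CLASS}
light = sorted(s for (s, p, o) in graph
               if s in pumps and p == "ref:massKg" and o < 500)
print(light)  # ['asset:P-101', 'asset:P-102']
```

In the thesis's actual framework this mapping is done with Semantic Web tooling rather than ad hoc Python, but the principle is the same: sources keep their native formats while queries run against the homogenised model.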

    Enhancement of the usability of SOA services for novice users

    Recently, the automation of service integration has provided a significant advantage in delivering services to novice users. The art of integrating various services is known as service composition; its main purpose is to simplify the development of web applications and facilitate the reuse of services. It is one of the paradigms that delivers services to end-users (i.e. service provisioning) through the outsourcing of web content, and it requires users to share and reuse services in more collaborative ways. Most service composers are effective at integrating web content, but they do not enable universal access across different groups of users, because currently existing content aggregators require complex interactions to create web applications (e.g., Web Service Business Process Execution Language (WS-BPEL)); as a result, not all users are able to use such tools. This trend demands changes in the web tools that end-users use to gain and share information. This research therefore uses Mashups as a service composition technique to allow novice users to integrate publicly available Service Oriented Architecture (SOA) services with minimal active web application development. Mashups, being platforms that integrate disparate web Application Programming Interfaces (APIs) to create user-defined web applications, present a great opportunity for service provisioning. However, their usability for novice users remains unvalidated: Mashup tools are not easy to use, since they require basic programming skills, which makes designing and creating Mashups difficult. Mashup tools access heterogeneous web content through public web APIs, and integrating them becomes complex because web APIs are tailored by different vendors. Moreover, the design of Mashup editors is unnecessarily complex; as a result, users do not know where to start when creating Mashups.
    This research addresses the gap between Mashup tools and usability by designing and implementing a semantically enriched Mashup tool to discover, annotate and compose APIs, improving the utilization of SOA services by novice users. The researchers analysed existing Mashup tools to identify the challenges and weaknesses experienced by novice Mashup users. The findings from this requirement analysis formed the system usability requirements that informed the design and implementation of the proposed Mashup tool. The proposed architecture comprises three layers: composition, annotation and discovery. The researchers developed a simple Mashup tool, the soa-Services Provisioner (SerPro), that allows novice users to create web applications flexibly. Its usability and effectiveness were validated. The proposed Mashup tool enhanced the usability of SOA services: data analysis showed that it was usable by novice users, scoring 72.08 on the System Usability Scale (SUS). Finally, this research discusses its limitations and future work for further improvements.
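The composition idea underlying the mashup approach is that independent web APIs are merged into one user-facing view without the end-user writing integration code. A toy sketch of that idea, with two mocked "APIs" standing in for real REST endpoints (the function names and data are illustrative, not SerPro's actual interface):

```python
# Hypothetical sketch of mashup-style service composition: two independent
# "APIs" (mocked here as plain functions) are composed into one view keyed
# by a shared input. All names and data are illustrative.

def weather_api(city):
    # Stand-in for a public REST endpoint returning JSON.
    return {"city": city, "temp_c": 18}

def events_api(city):
    # Stand-in for a second, independently designed endpoint.
    return {"city": city, "events": ["harbour tour", "jazz night"]}

def compose(city, services):
    """Merge the outputs of several services keyed by a shared input."""
    view = {"city": city}
    for svc in services:
        out = dict(svc(city))
        out.pop("city", None)  # drop the duplicated join key
        view.update(out)
    return view

mashup = compose("Oslo", [weather_api, events_api])
print(mashup)
# {'city': 'Oslo', 'temp_c': 18, 'events': ['harbour tour', 'jazz night']}
```

A tool like the one the thesis proposes hides even this glue behind a visual editor, and adds semantic annotation so that compatible APIs can be discovered automatically rather than wired together by hand.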

    Strategies for context reasoning in assisted-living environments for the elderly

    Leveraging our experience with the traditional approach to ambient assisted living (AAL), which relies on a large spread of heterogeneous technologies in deployments, this thesis studies the possibility of a more stripped-down and complementary approach, where only a reduced hardware subset is deployed, probing a transfer of complexity towards the software side and enhancing the large-scale deployability of the solution.
    Focused on the reasoning aspects of AAL systems, this work identified a semantic inference engine suited to the particular use in these systems, responding to a need in this scientific community. Considering the coarse granularity of the situational data available, dedicated rule-sets with adapted inference strategies are proposed, implemented, and validated using this engine. A novel semantic reasoning mechanism is proposed, based on a cognitively inspired reasoning architecture. Finally, the whole reasoning system is integrated into a fully featured context-aware service framework, powering its context awareness by performing live event processing through complex ontological manipulation. The overall system is validated through in-situ deployments over several months in a nursing home as well as in private homes, which is itself notable in a mainly laboratory-bound research domain.
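The "dedicated rule-sets over coarse situational data" mentioned above can be pictured as forward-chaining rules firing on simple context facts. A minimal toy sketch, assuming an invented fact vocabulary and one illustrative rule (the real system uses a semantic/ontological engine, not this ad hoc chainer):

```python
# Hypothetical sketch of rule-based context reasoning on coarse situational
# facts. The fact vocabulary and the rule are illustrative only.

facts = {("resident", "location", "bathroom"),
         ("resident", "motion", "none"),
         ("time", "elapsed_min", 30)}

def rule_possible_fall(fs):
    # If the resident has been motionless in the bathroom for >= 20 min,
    # infer a possible-fall alert (an assumed threshold, for illustration).
    if (("resident", "location", "bathroom") in fs
            and ("resident", "motion", "none") in fs
            and any(s == "time" and p == "elapsed_min" and o >= 20
                    for s, p, o in fs)):
        return {("system", "alert", "possible_fall")}
    return set()

def forward_chain(fs, rules):
    """Apply rules repeatedly until no new facts are inferred."""
    changed = True
    while changed:
        new = set().union(*(r(fs) for r in rules)) - fs
        changed = bool(new)
        fs |= new
    return fs

inferred = forward_chain(set(facts), [rule_possible_fall])
print(("system", "alert", "possible_fall") in inferred)  # True
```

The point the sketch makes is that even very coarse sensor data (room-level location, binary motion) can drive useful inferences once the rules are adapted to that granularity, which is the design stance the thesis takes.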

    Semantic model-driven framework for validating quality requirements of Internet of Things streaming data

    The rise of the Internet of Things (IoT) has provided platforms, mostly enhanced by real-time data-driven services, for reactive services and Smart City innovations. However, IoT streaming data are known to be compromised by quality problems, which influence the performance and accuracy of IoT-based reactive services and smart applications. This research investigates the suitability of the semantic approach for run-time validation of IoT streaming data against quality problems. To realise this aim, Semantic IoT Streaming Data Validation and its framework (SISDaV) are proposed. The novel approach combines technologies for semantic query and reasoning with semantic rules defined over an established relationship with external data sources, with consideration for specific run-time events that can influence the quality of streams. The work specifically targets quality issues relating to inconsistency, implausibility, and incompleteness in IoT streaming data. In particular, the investigation covers various RDF stream processing and rule-based reasoning techniques, and the effects of RDF serialisation formats on the reasoning process. The contributions of the work include a hierarchy of IoT data stream quality problems; a lightweight, evolving Smart Space and Sensor Measurement Ontology; generic time-aware validation rules; and the SISDaV framework, a unified semantic rule-based validation system for RDF-based IoT streaming data that combines a popular RDF stream processing system with generic, enhanced time-aware rules. The semantic validation process ensures that the raw streaming values produced by IoT nodes conform to IoT streaming data quality requirements and the expected values. This is facilitated through a set of generic continuous validation rules, realised by extending the popular Jena rule syntax with a time element.
    The comparative evaluation of SISDaV considers its effectiveness and efficiency across the differently serialised RDF data formats. The results are interpreted with relevant statistical estimations and performance metrics. The evaluation confirms the feasibility of the framework in terms of containing the semantic validation process within the interval between reads of sensor nodes, and in providing additional capabilities that can enhance IoT streaming data processing systems and are currently missing from most related state-of-the-art RDF stream processing systems. The approach satisfies the main research objectives identified by the study.
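The three quality dimensions the abstract names, inconsistency, implausibility, and incompleteness, can be illustrated with a small time-aware check over a stream of readings. A toy sketch under assumed thresholds (the value range, gap limit, and record shape are invented for illustration; the thesis's actual rules are expressed in an extended Jena rule syntax over RDF streams):

```python
# Hypothetical sketch of time-aware stream validation in the spirit of
# SISDaV: each reading is checked for completeness (field present),
# plausibility (value range) and temporal consistency (ordered, gap-free
# timestamps). Thresholds and record shape are illustrative assumptions.
from datetime import datetime, timedelta

PLAUSIBLE_TEMP = (-40.0, 60.0)    # assumed sensor range in Celsius
MAX_GAP = timedelta(seconds=10)   # assumed max interval between reads

def validate(stream):
    """Yield (reading, problems) pairs for a time-ordered stream."""
    prev_ts = None
    for r in stream:
        problems = []
        if "value" not in r:
            problems.append("incomplete")
        elif not PLAUSIBLE_TEMP[0] <= r["value"] <= PLAUSIBLE_TEMP[1]:
            problems.append("implausible")
        ts = r.get("ts")
        if prev_ts is not None and ts is not None:
            if ts <= prev_ts:
                problems.append("inconsistent_time")
            elif ts - prev_ts > MAX_GAP:
                problems.append("gap")
        prev_ts = ts or prev_ts
        yield r, problems

t0 = datetime(2020, 1, 1, 12, 0, 0)
stream = [{"ts": t0, "value": 21.5},
          {"ts": t0 + timedelta(seconds=5), "value": 999.0},   # implausible
          {"ts": t0 + timedelta(seconds=30), "value": 22.0}]   # gap
print([p for _, p in validate(stream)])
# [[], ['implausible'], ['gap']]
```

The framework's requirement that validation fit "within the interval between reads" is why such rules must stay cheap: each reading is checked against constants and one previous timestamp, not the whole history.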

    Extracting Bounded-level Modules from Deductive RDF Triplestores

    We present a novel semantics for extracting bounded-level modules from RDF ontologies and databases augmented with safe inference rules, à la Datalog. Dealing with a recursive rule language poses challenging issues for defining the module semantics, and also makes module extraction algorithmically unsolvable in some cases. Our results include a set of module extraction algorithms compliant with the novel semantics. Experimental results show that the resulting framework is effective in extracting expressive modules from RDF datasets with formal guarantees, whilst controlling their succinctness.
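The "bounded-level" part of the abstract can be pictured as following links from a signature of interest up to a fixed depth k. A much-simplified sketch of that navigation over a plain triple set (the paper's semantics additionally accounts for Datalog-style inference rules, which this toy version ignores; all data is illustrative):

```python
# Hypothetical sketch of bounded-level module extraction: collect all
# triples reachable from a seed signature within k navigation steps.
# The actual paper handles safe inference rules as well; this does not.
def extract_module(triples, seeds, k):
    """Return the triples reachable from `seeds` in at most k steps."""
    module, frontier, depth = set(), set(seeds), 0
    while frontier and depth < k:
        next_frontier = set()
        for (s, p, o) in triples:
            if s in frontier and (s, p, o) not in module:
                module.add((s, p, o))
                next_frontier.add(o)  # objects become the next frontier
        frontier, depth = next_frontier, depth + 1
    return module

g = [("a", "linkedTo", "b"), ("b", "linkedTo", "c"), ("c", "linkedTo", "d")]
print(sorted(extract_module(g, {"a"}, 2)))
# [('a', 'linkedTo', 'b'), ('b', 'linkedTo', 'c')]
```

The depth bound k is what keeps the extracted module succinct: raising k trades a larger module for more context around the seed entities, which is the tension the paper's formal guarantees address.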
