
    Context-driven progressive enhancement of mobile web applications: a multicriteria decision-making approach

    Personal computing has become all about mobile and embedded devices. As a result, the adoption rate of smartphones is rising rapidly, and this trend has created a need for mobile applications to be available at any time, anywhere, and on any device. Despite the obvious advantages of such immersive mobile applications, software developers increasingly face challenges related to device fragmentation. Current application development solutions are insufficiently prepared to handle the enormous variety of software platforms and hardware characteristics covering the mobile ecosystem. As a result, maintaining a viable balance between development costs and market coverage has turned out to be a challenging issue when developing mobile applications. This article proposes a context-aware software platform for the development and delivery of self-adaptive mobile applications over the Web. An adaptive application composition approach is introduced, capable of autonomously bypassing context-related fragmentation issues. This goal is achieved by incorporating and validating the concept of fine-grained progressive application enhancements based on a multicriteria decision-making strategy.
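
    To make the multicriteria idea concrete, here is a minimal sketch of a weighted-sum decision rule that ranks candidate enhancements of one feature against a device context. All names (`Variant`, the criteria, the weights) are illustrative assumptions, not the platform's actual API; the paper's strategy is only summarized in the abstract.

```python
from dataclasses import dataclass


@dataclass
class Variant:
    """One candidate enhancement of an application feature (hypothetical)."""
    name: str
    scores: dict  # criterion name -> benefit score in [0, 1], higher is better


def weighted_sum(variant: Variant, weights: dict) -> float:
    """Classic weighted-sum MCDM aggregation: sum of weight * score."""
    return sum(w * variant.scores.get(criterion, 0.0)
               for criterion, w in weights.items())


def pick_enhancement(variants, weights):
    """Select the variant scoring highest for the current device context."""
    return max(variants, key=lambda v: weighted_sum(v, weights))


# Weights would be derived from the sensed context (device, network, battery).
weights = {"visual_quality": 0.3, "battery_friendliness": 0.3,
           "bandwidth_friendliness": 0.4}

variants = [
    Variant("static-image-map",
            {"visual_quality": 0.4, "battery_friendliness": 0.9,
             "bandwidth_friendliness": 0.9}),
    Variant("webgl-vector-map",
            {"visual_quality": 0.9, "battery_friendliness": 0.3,
             "bandwidth_friendliness": 0.5}),
]
print(pick_enhancement(variants, weights).name)  # static-image-map for this context
```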

    Dealing with uncertain entities in ontology alignment using rough sets

    Ontology alignment facilitates the exchange of knowledge among heterogeneous data sources. Many approaches to ontology alignment use multiple similarity measures to map entities between ontologies. However, dealing with uncertain entities, for which the employed similarity measures produce conflicting results, remains a key challenge. This paper presents OARS, a rough-set based approach to ontology alignment that achieves a high degree of accuracy in situations where uncertainty arises from the conflicting results generated by different similarity measures. OARS employs a combinational approach, considering both lexical and structural similarity measures. OARS is extensively evaluated on the benchmark ontologies of the Ontology Alignment Evaluation Initiative (OAEI) 2010, and performs best on recall in comparison with a number of alignment systems while generating comparable precision.
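
    A minimal sketch of how rough sets can separate certain from uncertain mappings when similarity measures disagree: pairs that every measure accepts form the lower approximation (certain matches), pairs that at least one measure accepts form the upper approximation, and their difference is the boundary region of uncertain pairs needing further (e.g., structural) evidence. The function and toy measures below are our illustration, not OARS's actual code.

```python
def rough_approximations(pairs, measures, threshold=0.8):
    """Partition candidate entity pairs in rough-set style.

    lower:    pairs every similarity measure rates as a match (certain).
    upper:    pairs at least one measure rates as a match (possible).
    boundary: upper - lower, the uncertain pairs where measures conflict.
    """
    lower, upper = set(), set()
    for pair in pairs:
        votes = [m(*pair) >= threshold for m in measures]
        if all(votes):
            lower.add(pair)
        if any(votes):
            upper.add(pair)
    return lower, upper, upper - lower


# Toy lexical measures standing in for OARS's lexical/structural ones.
exact_sim = lambda a, b: 1.0 if a.lower() == b.lower() else 0.0
prefix_sim = lambda a, b: 1.0 if a[:4].lower() == b[:4].lower() else 0.0

pairs = {("Author", "author"), ("Proceedings", "Proceeding")}
lower, upper, boundary = rough_approximations(pairs, [exact_sim, prefix_sim])
print("certain:", lower)       # {("Author", "author")}
print("uncertain:", boundary)  # {("Proceedings", "Proceeding")}
```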

    Optimizing Ontology Alignments through NSGA-II without Using Reference Alignment

    Ontologies are widely used to solve data heterogeneity problems on the semantic web, but the available ontologies can themselves introduce heterogeneity. To reconcile these ontologies and achieve semantic interoperability, we need to find the relationships among the entities in the various ontologies; the process of identifying them is called ontology alignment. All existing matching systems that use evolutionary approaches to optimize their parameters require a reference alignment between the two ontologies to be given in advance, which can be very expensive to obtain, especially when the ontologies are large. To address this issue, in this paper we propose a novel approach that uses NSGA-II to optimize ontology alignments without a reference alignment. In our approach, an adaptive aggregation strategy is presented to improve the efficiency of the optimization process, and two approximate evaluation measures, namely match coverage and match ratio, are introduced to replace the classic recall and precision against a reference alignment when evaluating the quality of alignments. Experimental results show that our approach is effective and finds solutions very close to those obtained by approaches that use a reference alignment, and that the quality of the alignments is in general better than that of state-of-the-art ontology matching systems such as GOAL and SAMBO.
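
    Since the paper's exact formulas are not given in the abstract, the sketch below shows one plausible, reference-free reading of the two objectives: match coverage as the fraction of entities from either ontology participating in some correspondence (a stand-in for recall), and match ratio as distinct matched entities per correspondence endpoint, penalizing one-to-many mappings (a stand-in for precision). NSGA-II would then maximize both objectives jointly over the matcher's parameter space.

```python
def match_coverage(alignment, entities1, entities2):
    """Fraction of entities from both ontologies appearing in some
    correspondence; a reference-free proxy for recall (our reading,
    not necessarily the paper's exact definition)."""
    matched1 = {e1 for e1, _ in alignment}
    matched2 = {e2 for _, e2 in alignment}
    return (len(matched1) + len(matched2)) / (len(entities1) + len(entities2))


def match_ratio(alignment):
    """Distinct matched entities per correspondence endpoint; drops
    below 1.0 when one entity is mapped to many, acting as a
    reference-free proxy for precision."""
    if not alignment:
        return 0.0
    matched = {e for pair in alignment for e in pair}
    return len(matched) / (2 * len(alignment))


entities1 = {"Person", "Paper", "Venue"}
entities2 = {"Author", "Publication", "Place", "Topic"}
alignment = [("Person", "Author"), ("Paper", "Publication")]
print(match_coverage(alignment, entities1, entities2))  # 4/7 ~ 0.571
print(match_ratio(alignment))                           # 1.0: strictly one-to-one
```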

    Using Ontologies for the Design of Data Warehouses

    Obtaining an implementation of a data warehouse is a complex task that forces designers to acquire wide knowledge of the domain, thus requiring a high level of expertise and making it a failure-prone task. Based on our experience, we have identified a set of situations, encountered in real-world projects, in which we believe the use of ontologies would improve several aspects of the design of data warehouses. The aim of this article is to describe several shortcomings of current data warehouse design approaches and to discuss the benefit of using ontologies to overcome them. This work is a starting point for discussing the convenience of using ontologies in data warehouse design. Comment: 15 pages, 2 figures.

    MinoanER: Schema-Agnostic, Non-Iterative, Massively Parallel Resolution of Web Entities

    Entity Resolution (ER) aims to identify different descriptions in various Knowledge Bases (KBs) that refer to the same entity. ER is challenged by the Variety, Volume, and Veracity of entity descriptions published in the Web of Data. To address these challenges, we propose the MinoanER framework, which simultaneously achieves full automation, support for highly heterogeneous entities, and massive parallelization of the ER process. MinoanER leverages a token-based similarity of entities to define a new metric that derives the similarity of neighboring entities from the most important relations, as indicated solely by statistics. A composite blocking method is employed to capture different sources of matching evidence from the content, neighbors, or names of entities. The search space of candidate pairs for comparison is compactly abstracted by a novel disjunctive blocking graph and processed by a non-iterative, massively parallel matching algorithm that consists of four generic, schema-agnostic matching rules that are quite robust with respect to their internal configuration. We demonstrate that the effectiveness of MinoanER is comparable to existing ER tools over real KBs exhibiting low Variety, but that it significantly outperforms them when matching KBs with high Variety. Comment: Presented at EDBT 2019.
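
    The serial sketch below illustrates the token-based blocking idea at the core of such schema-agnostic ER: every entity enters one block per token appearing in any of its attribute values, and the number of shared blocks serves as a cheap similarity signal over the candidate pairs. This is a single-machine simplification for illustration, not MinoanER's massively parallel implementation or its disjunctive blocking graph.

```python
from collections import defaultdict
from itertools import combinations


def token_blocking(entities):
    """Schema-agnostic token blocking: an entity joins one block per
    token found in any of its attribute values, whatever the schema."""
    blocks = defaultdict(set)
    for eid, attributes in entities.items():
        for value in attributes.values():
            for token in str(value).lower().split():
                blocks[token].add(eid)
    return blocks


def candidate_pairs(blocks):
    """Weight each pair co-occurring in a block by its common-block
    count: a token-based similarity signal over the candidate space."""
    weights = defaultdict(int)
    for ids in blocks.values():
        for pair in combinations(sorted(ids), 2):
            weights[pair] += 1
    return dict(weights)


entities = {
    "kb1:e1": {"name": "Barack Obama", "type": "person"},
    "kb2:e7": {"label": "Obama Barack", "kind": "person"},
    "kb2:e9": {"label": "Michelle Obama"},
}
print(candidate_pairs(token_blocking(entities)))
# ('kb1:e1', 'kb2:e7') shares 3 blocks; the other pairs share only 1
```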

    A Novel Approach for Learning How to Automatically Match Job Offers and Candidate Profiles

    Automatic matching of job offers and job candidates is a major problem for many organizations and job applicants; successfully addressing it could have a positive impact in many countries around the world. In this context, it is widely accepted that semi-automatic matching algorithms between job and candidate profiles would provide a vital technology for making recruitment processes faster, more accurate, and more transparent. In this work, we present our research towards achieving a realistic matching approach that satisfactorily addresses this challenge. The novel approach relies on a matching-learning solution that learns from past solved cases in order to accurately predict the results in new situations. An empirical study shows that our approach is able to beat solutions with no learning capabilities by a wide margin. Comment: 15 pages, 6 figures.
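
    As a hedged sketch of the matching-learning idea, the example below featurizes (offer, candidate) pairs and fits a standard classifier to past solved cases. The two features and the logistic-regression model are our illustrative assumptions, not the paper's actual features or learner.

```python
from sklearn.linear_model import LogisticRegression


def pair_features(offer, candidate):
    """Two illustrative features for an (offer, candidate) pair."""
    skill_overlap = (len(offer["skills"] & candidate["skills"])
                     / max(len(offer["skills"]), 1))
    experience_gap = max(offer["min_years"] - candidate["years"], 0)
    return [skill_overlap, experience_gap]


# Past solved cases: (offer, candidate, was it a successful match?) triples.
history = [
    ({"skills": {"java", "sql"}, "min_years": 3},
     {"skills": {"java", "sql"}, "years": 5}, 1),
    ({"skills": {"java", "sql"}, "min_years": 3},
     {"skills": {"php"}, "years": 1}, 0),
    ({"skills": {"python"}, "min_years": 1},
     {"skills": {"python", "sql"}, "years": 2}, 1),
    ({"skills": {"python"}, "min_years": 5},
     {"skills": {"sql"}, "years": 0}, 0),
]
X = [pair_features(offer, cand) for offer, cand, _ in history]
y = [label for _, _, label in history]

model = LogisticRegression().fit(X, y)

# Predict the match probability for a new offer/candidate pair.
new_pair = pair_features({"skills": {"java"}, "min_years": 2},
                         {"skills": {"java", "sql"}, "years": 4})
print(model.predict_proba([new_pair])[0, 1])
```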

    Self-adaptive Based Model for Ambiguity Resolution of the Linked Data Query for Big Data Analytics

    Integration of heterogeneous data sources is a crucial step in big data analytics, but it creates ambiguity issues during mapping between the sources due to variation in query terms, data structures, and granularity conflicts. However, there is limited research on effective big data integration that addresses this ambiguity issue for big data analytics. This paper introduces a self-adaptive model for big data integration that exploits the data structure during querying in order to mitigate and resolve ambiguities. An assessment of preliminary work on the Geography and Quran datasets is reported to illustrate the feasibility of the proposed model and to motivate future work, such as handling complex queries.
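
    The abstract only sketches the model, so the snippet below is a loose illustration of the underlying idea: rank the candidate source fields for an ambiguous query term by lexical similarity, and use structural context (here, the field's parent entity) to break ties. The scoring scheme and the data are hypothetical, not the paper's.

```python
from difflib import SequenceMatcher


def resolve(term, candidates, context=None):
    """Pick the best source field for an ambiguous query term: lexical
    similarity plus a small bonus when the field's parent entity matches
    the query context (an illustrative scheme, not the paper's model)."""
    def score(candidate):
        lexical = SequenceMatcher(None, term.lower(),
                                  candidate["field"].lower()).ratio()
        structural = 0.2 if context and candidate.get("parent") == context else 0.0
        return lexical + structural
    return max(candidates, key=score)


candidates = [
    {"field": "city", "parent": "Geography"},
    {"field": "citation", "parent": "Quran"},
]
print(resolve("cit", candidates, context="Geography"))  # the Geography field wins
```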