
    Intelligent SPARQL Endpoints: Optimizing Execution Performance by Automatic Query Relaxation and Queue Scheduling

    The Web of Data is widely considered one of the major global repositories, populated with countless interconnected and structured data, and these linked datasets continue to grow sharply and continuously. In this context the SPARQL Protocol and RDF Query Language is commonly used to retrieve and manage stored data by means of SPARQL endpoints, query processing services specifically designed to provide access to these databases. Nevertheless, due to the large amounts of data handled by such endpoints and their structural complexity, these services usually suffer from severe performance issues, including inadmissible processing times. This work aims to overcome this inefficiency by designing a distributed parallel system architecture that improves the performance of SPARQL endpoints through two functionalities: 1) a queuing system to avoid bottlenecks during the execution of SPARQL queries; and 2) an intelligent relaxation of the queries submitted to the endpoint at hand, applied whenever the relaxation and the consequently lowered complexity of the query are beneficial for the overall performance of the system. To this end the system relies on a two-fold optimization criterion: the minimization of the query running time, as predicted by a supervised learning model, and the maximization of the quality of the query results, as quantified by a similarity measure. These two conflicting optimization criteria are efficiently balanced by two bi-objective heuristic algorithms executed sequentially over groups of SPARQL queries. The approach is validated on a prototype through several experiments that evince the applicability of the proposed scheme.
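    The abstract above balances two conflicting criteria: predicted running time and the similarity of the relaxed query's results to the original ones. As a rough illustration of that trade-off only (not the authors' bi-objective heuristic algorithms), the Python sketch below keeps the Pareto-optimal candidates among hypothetical relaxations of a query; the runtime estimates and similarity scores are assumed inputs.

```python
# Illustrative sketch: Pareto filtering of query-relaxation candidates under
# the two criteria named in the abstract. All names and figures are made up.
from dataclasses import dataclass
from typing import List


@dataclass
class Candidate:
    query: str              # a (possibly relaxed) SPARQL query
    predicted_time: float   # runtime estimate from a supervised model, in seconds
    similarity: float       # similarity of its results to the original query, in [0, 1]


def pareto_front(candidates: List[Candidate]) -> List[Candidate]:
    """Keep candidates that are not dominated in (lower time, higher similarity)."""
    front = []
    for c in candidates:
        dominated = any(
            o.predicted_time <= c.predicted_time and o.similarity >= c.similarity
            and (o.predicted_time < c.predicted_time or o.similarity > c.similarity)
            for o in candidates
        )
        if not dominated:
            front.append(c)
    return front


if __name__ == "__main__":
    # The original query plus two hypothetical relaxations of it.
    pool = [
        Candidate("SELECT ... exact ...", predicted_time=12.0, similarity=1.0),
        Candidate("SELECT ... relaxed FILTER ...", predicted_time=3.5, similarity=0.85),
        Candidate("SELECT ... dropped OPTIONAL ...", predicted_time=6.0, similarity=0.80),
    ]
    for c in pareto_front(pool):
        print(f"{c.predicted_time:5.1f}s  similarity={c.similarity:.2f}  {c.query}")
```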

    Nature-inspired heuristics for the multiple-vehicle selective pickup and delivery problem under maximum profit and incentive fairness criteria

    This work focuses on wide-scale freight transportation logistics, motivated by the sharp increase of on-line shopping stores and the rise of the Internet as the most frequently used selling channel during the last decade. This huge ecosystem of one-click-away catalogs has ultimately unleashed the need for efficient algorithms aimed at properly scheduling the underlying transportation resources, especially over the so-called last mile of the distribution chain. In this context the selective pickup and delivery problem focuses on determining the optimal subset of packets that should be picked up from their origin city and delivered to their corresponding destination within a given time frame, often driven by the maximization of the total profit of the courier service company. This manuscript tackles a realistic variant of this problem where the transportation fleet is composed of more than one vehicle, which further complicates the selection of packets due to the subsequent need for coordinating the delivery service from the command center. In particular, the addressed problem includes a second optimization metric aimed at reflecting a fair share of the net benefit among the company staff based on their driven distance. To efficiently solve this optimization problem, several nature-inspired metaheuristic solvers are analyzed and statistically compared to each other under different parameters of the problem setup. Finally, results obtained over a realistic scenario in the province of Bizkaia (Spain) using emulated data are explored so as to shed light on the practical applicability of the analyzed heuristics.
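    To make the two objectives concrete, the following Python sketch evaluates a hypothetical three-vehicle assignment under a total-profit objective and an incentive-fairness objective. The distance and cost model, the fairness measure (Jain's index over per-driver benefit per driven kilometre) and all figures are assumptions for illustration, not taken from the paper.

```python
# Illustrative sketch: the two objective values of a candidate multi-vehicle
# assignment -- company profit and fairness of the benefit share per driven km.


def total_profit(revenues, route_costs):
    """Company profit: revenue of the packets served minus routing costs."""
    return sum(revenues) - sum(route_costs)


def fairness(benefit_per_km):
    """Jain's fairness index over per-driver benefit per driven km (1.0 = perfectly fair)."""
    n = len(benefit_per_km)
    return sum(benefit_per_km) ** 2 / (n * sum(x * x for x in benefit_per_km))


if __name__ == "__main__":
    # Hypothetical three-vehicle assignment.
    revenue_per_vehicle = [120.0, 95.0, 140.0]   # income from the packets each vehicle serves
    cost_per_vehicle = [35.0, 20.0, 55.0]        # fuel/time cost of each route
    km_per_vehicle = [70.0, 40.0, 110.0]         # driven distance of each route

    benefit_per_km = [
        (r - c) / km
        for r, c, km in zip(revenue_per_vehicle, cost_per_vehicle, km_per_vehicle)
    ]
    print("profit  :", total_profit(revenue_per_vehicle, cost_per_vehicle))
    print("fairness:", round(fairness(benefit_per_km), 3))
```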

    A Heuristically Optimized Complex Event Processing Engine for Big Data Stream Analytics

    This paper describes a Big Data stream analytics platform developed within the DEWI project for processing incoming events from wireless sensors installed in a truck. The platform consists of a Complex Event Processing (CEP) engine capable of triggering alarms from a predefined set of rules. In general these rules are characterized by multiple parameters, and finding their optimal values is usually a challenging task. In this paper we explain a methodology based on a meta-heuristic solver used as a wrapper to obtain optimally parameterized rules for the CEP engine. In particular, this approach optimizes CEP rules by refining the parameters that control their behavior according to an alarm detection improvement criterion. As a result, the proposed scheme retrieves the rules parameterized in a detection-optimal fashion. Results for a particular use case, i.e. the fuel level of the vehicle, are discussed to assess the performance gains provided by our method.
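    As a rough illustration of the wrapper idea only (the DEWI platform and its solver are not reproduced here), the Python sketch below tunes the threshold of a hypothetical fuel-level rule by random search, maximizing the F1 score of the triggered alarms over a tiny synthetic event log.

```python
# Illustrative sketch: a metaheuristic-style wrapper (plain random search here)
# that tunes one parameter of a hypothetical CEP rule against labeled events.
import random


def rule_fires(event, drop_threshold):
    """Hypothetical CEP rule: raise an alarm if the fuel level drops faster than the threshold."""
    return event["fuel_drop_per_min"] > drop_threshold


def f1_score(events, drop_threshold):
    tp = sum(1 for e in events if rule_fires(e, drop_threshold) and e["is_anomaly"])
    fp = sum(1 for e in events if rule_fires(e, drop_threshold) and not e["is_anomaly"])
    fn = sum(1 for e in events if not rule_fires(e, drop_threshold) and e["is_anomaly"])
    return 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0


def random_search(events, trials=200, low=0.0, high=5.0, seed=0):
    rng = random.Random(seed)
    best_theta, best_score = None, -1.0
    for _ in range(trials):
        theta = rng.uniform(low, high)
        score = f1_score(events, theta)
        if score > best_score:
            best_theta, best_score = theta, score
    return best_theta, best_score


if __name__ == "__main__":
    # Tiny synthetic event log: fuel-level drop rate (litres/min) plus an anomaly label.
    events = [
        {"fuel_drop_per_min": 0.3, "is_anomaly": False},
        {"fuel_drop_per_min": 0.5, "is_anomaly": False},
        {"fuel_drop_per_min": 2.8, "is_anomaly": True},
        {"fuel_drop_per_min": 3.5, "is_anomaly": True},
    ]
    theta, score = random_search(events)
    print(f"optimized threshold: {theta:.2f} l/min  (F1 = {score:.2f})")
```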

    Linked open data (LOD) and its implementation in libraries: Initiatives and technologies

    The Web of data is becoming one of the largest global information repositories, thanks to initiatives like LOD (linked open data) that facilitate the standardized publication of open data. This paradigm offers great opportunities for libraries, applying semantic technologies to expedite data management and publication and promoting their connection to other repositories, thereby increasing their presence and impact. In order to ensure the future of libraries in the Web of data, it is necessary to raise awareness among librarians about the opportunities and challenges of LOD. With this aim, we present the major initiatives in this area, along with the pioneering organizations in the use of linked data in the library domain.
    European Commission, FP7, 61092
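    As a minimal illustration of how a single bibliographic record can be published as linked open data (not tied to any particular initiative mentioned above), the following Python sketch builds and serializes a small RDF description with rdflib, using made-up URIs and the Dublin Core vocabulary.

```python
# Illustrative sketch: a library record expressed as linked open data.
# The record URIs and the external identifier are hypothetical.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DC, RDF

SCHEMA = Namespace("http://schema.org/")

g = Graph()
book = URIRef("http://example.org/library/record/42")  # hypothetical record URI

g.add((book, RDF.type, SCHEMA.Book))
g.add((book, DC.title, Literal("El ingenioso hidalgo don Quijote de la Mancha")))
g.add((book, DC.creator, Literal("Miguel de Cervantes")))
# Linking to an external authority dataset is what joins the record to the Web of data.
g.add((book, SCHEMA.sameAs, URIRef("http://example.org/authorities/cervantes")))

print(g.serialize(format="turtle"))
```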