
    Smart Trip Alternatives for the Curious

    When searching for flights, current systems often suggest routes involving waiting times at stopovers. There might exist alternative routes which are more attractive from a touristic perspective, because their duration is not necessarily much longer while offering enough time in an appropriate place. Choosing among such alternatives requires additional planning effort to make sure that, e.g., points of interest can conveniently be reached in the allowed time frame. We present a system that automatically computes smart trip alternatives between any two cities. To do so, it searches for points of interest in large semantic datasets, considering the set of accessible areas around each possible layover. It then elects feasible alternatives and displays their differences with respect to the default trip.
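The feasibility test sketched in the abstract (is there enough time at a layover to reach a point of interest and return?) reduces to simple time-budget arithmetic. The following is an illustrative sketch only, not the authors' implementation; the safety margin and all figures are assumptions.

```python
# Hypothetical feasibility check: a point of interest fits a layover if
# round-trip travel plus visit time stays within the layover window,
# minus a safety margin for re-clearing security and boarding.

SAFETY_MARGIN_MIN = 90  # assumed buffer, in minutes

def feasible_pois(layover_minutes, pois):
    """Return the names of POIs that fit in the layover time budget.

    pois: list of (name, one_way_travel_minutes, visit_minutes).
    """
    budget = layover_minutes - SAFETY_MARGIN_MIN
    return [name for name, travel_min, visit_min in pois
            if 2 * travel_min + visit_min <= budget]

pois = [("Cathedral", 25, 45),
        ("Old Town", 20, 120),
        ("Museum", 40, 90)]

print(feasible_pois(240, pois))  # 4-hour layover -> ['Cathedral']
```

The real system replaces the hand-written POI list with lookups in large semantic datasets and the constant margin with per-airport accessibility data.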

    SPARQLGX in Action: Efficient Distributed Evaluation of SPARQL with Apache Spark

    We demonstrate SPARQLGX, our implementation of a distributed SPARQL evaluator. We show that SPARQLGX makes it possible to evaluate SPARQL queries on billions of triples distributed across multiple nodes, while providing attractive performance figures.

    SPARUB: SPARQL UPDATE Benchmark

    One aim of the RDF data model, as standardized by the W3C, is to facilitate the evolution of data over time without requiring all data consumers to be changed. To this end, one of the latest additions to the SPARQL standard query language is an update language for RDF graphs. Research on efficient and scalable SPARQL evaluation methods increasingly relies on standardized methodologies for benchmarking and comparing systems. However, current RDF benchmarks do not support graph updates. We propose and share SPARUB: a benchmark for the SPARQL UPDATE language on RDF graphs. The aim of SPARUB is not to be yet another RDF benchmark; instead, it provides the means to automatically extend and improve existing RDF benchmarks along a new dimension of data updates, while preserving their structure and query scenarios.
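For readers unfamiliar with the update language being benchmarked, the two basic SPARQL UPDATE operations (INSERT DATA and DELETE WHERE) can be mimicked over a plain set of triples. This toy model is not SPARUB itself, only an illustration of the semantics the benchmark exercises.

```python
# Toy RDF graph as a set of (subject, predicate, object) triples,
# with insert/delete operations mirroring SPARQL UPDATE semantics.

def insert_data(graph, triples):
    """INSERT DATA: add ground triples to the graph."""
    graph |= set(triples)

def delete_where(graph, pattern):
    """DELETE WHERE: remove triples matching a pattern (None = wildcard)."""
    def matches(triple):
        return all(p is None or p == t for p, t in zip(pattern, triple))
    graph -= {t for t in graph if matches(t)}

g = set()
insert_data(g, [("alice", "knows", "bob"), ("alice", "age", "30")])
delete_where(g, ("alice", "age", None))   # delete all of alice's age triples
print(sorted(g))  # [('alice', 'knows', 'bob')]
```

A benchmark for these operations must measure not only query latency but how the store behaves as the graph is mutated, which is the dimension SPARUB adds to existing benchmarks.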

    An Experimental Multi-Criteria Classification of Distributed SPARQL Evaluators

    SPARQL is the standard language for querying data in the RDF format. A wide variety of SPARQL evaluators exist, implementing different architectures for both data distribution and the execution of computations. These differences, coupled with evaluator-specific optimizations, make a theoretical comparison between these systems impossible. We propose a new angle for comparing distributed SPARQL evaluators, based on a multi-criteria ranking. We suggest using a set of five features to obtain a finer-grained description of the behavior of distributed evaluators, rather than the more traditional analysis of time performance. To illustrate this method, we conducted experiments pitting ten existing systems against one another, which we then ranked using a reading grid that helps visualize the advantages and limitations of techniques in the field of distributed SPARQL query evaluation.

    SPARQLGX: Efficient Distributed Evaluation of SPARQL with Apache Spark

    SPARQL is the W3C standard query language for querying data expressed in the Resource Description Framework (RDF). The increasing amounts of RDF data available raise a major need and research interest in building efficient and scalable distributed SPARQL query evaluators. In this context, we propose SPARQLGX: our implementation of a distributed RDF datastore based on Apache Spark. SPARQLGX is designed to leverage existing Hadoop infrastructures for evaluating SPARQL queries. SPARQLGX relies on a translation of SPARQL queries into executable Spark code that adopts evaluation strategies according to (1) the storage method used and (2) statistics on the data. We show that SPARQLGX makes it possible to evaluate SPARQL queries on billions of triples distributed across multiple nodes, while providing attractive performance figures. We report on experiments which show how SPARQLGX compares to related state-of-the-art implementations, and we show that our approach scales better than these systems in terms of supported dataset size. With its simple design, SPARQLGX represents an interesting alternative in several scenarios.

    The SPARQLGX System for Distributed Evaluation of SPARQL Queries

    SPARQL is the W3C standard query language for querying data expressed in the Resource Description Framework (RDF). The increasing amounts of data available in the RDF format raise a major need and research interest in building efficient and scalable distributed SPARQL query evaluators. In this context, we propose SPARQLGX: an implementation of a distributed RDF datastore based on Apache Spark. SPARQLGX is designed to leverage existing Hadoop infrastructures for evaluating SPARQL queries efficiently. SPARQLGX relies on an automated translation of SPARQL queries into optimized executable Spark code. We show that SPARQLGX makes it possible to evaluate SPARQL queries on billions of triples distributed across multiple nodes, while providing attractive performance figures. We report on experiments which show how SPARQLGX compares to state-of-the-art implementations, and we show that our approach scales better than other systems in terms of supported dataset size. With its simple design, SPARQLGX represents an interesting alternative in several scenarios.
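The core translation idea (each triple pattern becomes a scan, and patterns sharing a variable become an equi-join) can be sketched in a vendor-neutral way. The real system emits Spark code and chooses join strategies from storage layout and data statistics; this plain-Python toy only illustrates the compilation target's shape.

```python
# Minimal sketch of compiling a SPARQL basic graph pattern into
# scan + join operations over a set of triples ('?x' marks a variable).

def scan(triples, pattern):
    """Yield variable bindings for one triple pattern."""
    for t in triples:
        binding = {}
        for p, v in zip(pattern, t):
            if p.startswith("?"):
                binding[p] = v
            elif p != v:
                break               # constant mismatch: triple rejected
        else:
            yield binding

def join(left, right):
    """Merge compatible bindings (equi-join on shared variables)."""
    right = list(right)
    for a in left:
        for b in right:
            if all(a[k] == b[k] for k in a.keys() & b.keys()):
                yield {**a, **b}

triples = {("alice", "knows", "bob"),
           ("bob", "knows", "carol"),
           ("alice", "age", "30")}

# SELECT ?y WHERE { alice knows ?x . ?x knows ?y }
results = list(join(scan(triples, ("alice", "knows", "?x")),
                    scan(triples, ("?x", "knows", "?y"))))
print(results)  # [{'?x': 'bob', '?y': 'carol'}]
```

In the distributed setting, each scan becomes a filter over a partitioned dataset and the join a shuffle, which is why join ordering and storage-aware strategies dominate performance.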

    Preventing Serialization Vulnerabilities through Transient Field Detection

    Verifying Android applications' source code is essential to ensure users' security. Due to its complex architecture, Android has specific attack surfaces which the community has to investigate in order to discover new vulnerabilities and prevent malicious exploitation as much as possible. Communication mechanisms are one of the Android components that should be carefully checked and analyzed to avoid data leakage or code injection. Android software components can communicate with each other using serialization processes. Developers thereby need to manually add the transient keyword whenever an object field should not be part of the serialization. In particular, field values encoding memory addresses can leave severe vulnerabilities inside applications if they are not explicitly declared transient. In this study, we propose a novel methodology for automatically detecting, at compilation time, all missing transient keywords directly from Android applications' source code. Our method is based on taint analysis, and its implementation provides developers with a useful tool which they can use to improve their code bases. Furthermore, we evaluate our method on a cryptography library as well as on the Telegram application for real-world validation. Our approach is able to retrieve previously found vulnerabilities and, in addition, finds non-exploitable flows hidden within Telegram's code base.
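The paper targets Java's transient keyword; the same class of bug exists in any serialization mechanism. As a heavily simplified analogue (not the paper's method, which statically detects missing exclusions via taint analysis), here is how Python's pickle forces the same manual exclusion of runtime-only fields:

```python
# Fields holding runtime-only state (locks, handles, addresses) must be
# excluded from serialization by hand; forgetting the exclusion either
# fails outright or leaks state on deserialization.

import pickle
import threading

class Session:
    def __init__(self, user):
        self.user = user
        self._lock = threading.Lock()   # runtime-only: must not be pickled

    def __getstate__(self):
        # The manual step whose omission the paper detects automatically:
        # drop the non-serializable field before pickling.
        state = self.__dict__.copy()
        del state["_lock"]
        return state

    def __setstate__(self, state):
        self.__dict__.update(state)
        self._lock = threading.Lock()   # recreate fresh on load

restored = pickle.loads(pickle.dumps(Session("alice")))
print(restored.user)  # alice
```

Without `__getstate__`, pickling this class raises a TypeError (locks are unpicklable); a field that *is* picklable but encodes an address would serialize silently, which is the more dangerous case the paper's static analysis is designed to catch.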

    Clinical and prognostic implications of phenomapping in patients with heart failure receiving cardiac resynchronization therapy

    Despite having an indication for cardiac resynchronization therapy according to current guidelines, patients with heart failure with reduced ejection fraction who receive cardiac resynchronization therapy do not consistently derive benefit from it. Our aim was to determine whether unsupervised clustering analysis (phenomapping) can identify distinct phenogroups of patients with differential outcomes among cardiac resynchronization therapy recipients from routine clinical practice. We used unsupervised hierarchical cluster analysis of phenotypic data after data reduction (55 clinical, biological and echocardiographic variables) to define new phenogroups among 328 patients with heart failure with reduced ejection fraction from routine clinical practice, enrolled before cardiac resynchronization therapy. Clinical outcomes and cardiac resynchronization therapy response rates were studied according to phenogroup. Although all patients met the recommended criteria for cardiac resynchronization therapy implantation, phenomapping analysis classified study participants into four phenogroups that differed distinctively in clinical, biological, electrocardiographic and echocardiographic characteristics and outcomes. Patients from phenogroups 1 and 2 had the best outcome in terms of mortality, associated with cardiac resynchronization therapy response rates of 81% and 78%, respectively. In contrast, patients from phenogroups 3 and 4 had cardiac resynchronization therapy response rates of 39% and 59%, respectively, and the worst outcome, with a considerably increased risk of mortality compared with patients from phenogroup 1 (hazard ratio 3.23, 95% confidence interval 1.9-5.5, and hazard ratio 2.49, 95% confidence interval 1.38-4.50, respectively). Among patients with heart failure with reduced ejection fraction with an indication for cardiac resynchronization therapy from routine clinical practice, phenomapping identifies subgroups of patients with differential clinical, biological and echocardiographic features strongly linked to divergent outcomes and responses to cardiac resynchronization therapy. This approach may help to identify patients who will derive most benefit from cardiac resynchronization therapy in "individualized" clinical practice.
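The core computational step of phenomapping, agglomerative (hierarchical) clustering of patients described by feature vectors, can be shown schematically. This is not the study's pipeline (which clustered 328 patients on 55 reduced variables); the two-feature toy vectors and plain single-linkage merging below are illustrative assumptions.

```python
# Schematic single-linkage agglomerative clustering on toy patient vectors.

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def single_linkage(points, n_clusters):
    """Merge the closest pair of clusters until n_clusters remain."""
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > n_clusters:
        # closest pair = minimum distance between any two member points
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: min(dist(points[a], points[b])
                                      for a in clusters[ij[0]]
                                      for b in clusters[ij[1]]))
        clusters[i] += clusters.pop(j)
    return clusters

# toy patients: (normalized ejection fraction, normalized QRS duration)
patients = [(0.10, 0.90), (0.15, 0.85), (0.80, 0.20), (0.75, 0.25)]
print(single_linkage(patients, 2))  # [[0, 1], [2, 3]]
```

In practice, the choice of linkage, distance metric, and the number of clusters retained (four phenogroups here) all materially shape which patient subgroups emerge.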

    Prospective comparison of speckle tracking longitudinal bidimensional strain between two vendors

    Background: Speckle tracking is a relatively new, largely angle-independent technique used for the evaluation of myocardial longitudinal strain (LS). However, significant differences have been reported between LS values obtained by speckle tracking with the first generation of software products. Aims: To compare LS values obtained with the most recently released equipment from two manufacturers. Methods: Systematic scanning with head-to-head acquisition with no modification of the patient's position was performed in 64 patients with equipment from two different manufacturers, with subsequent off-line post-processing for speckle tracking LS assessment (Philips QLAB 9.0 and General Electric [GE] EchoPAC BT12). The interobserver variability of each software product was tested on a randomly selected set of 20 echocardiograms from the study population. Results: GE and Philips interobserver coefficients of variation (CVs) for global LS (GLS) were 6.63% and 5.87%, respectively, indicating good reproducibility. Reproducibility was very variable for regional and segmental LS values, with CVs ranging from 7.58% to 49.21% with both software products. The concordance correlation coefficient (CCC) between GLS values was high at 0.95, indicating substantial agreement between the two methods. While good agreement was observed between midwall and apical regional strains with the two software products, basal regional strains were poorly correlated. The agreement between the two software products at a segmental level was very variable; the highest correlation was obtained for the apical cap (CCC 0.90) and the poorest for basal segments (CCC range 0.31–0.56). Conclusions: A high level of agreement and reproducibility for global but not for basal regional or segmental LS was found with two vendor-dependent software products. This finding may help to reinforce clinical acceptance of GLS in everyday clinical practice.
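The two agreement statistics this study reports, the coefficient of variation (reproducibility) and Lin's concordance correlation coefficient (inter-vendor agreement), have compact definitions worth making explicit. The strain readings below are invented for illustration, not study data.

```python
# Minimal pure-Python implementations of the study's two agreement metrics.

def mean(xs):
    return sum(xs) / len(xs)

def cv_percent(xs):
    """Coefficient of variation: sample SD as a percentage of the mean."""
    m = mean(xs)
    sd = (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5
    return 100 * sd / abs(m)

def ccc(xs, ys):
    """Lin's concordance correlation coefficient between paired readings."""
    mx, my = mean(xs), mean(ys)
    vx = sum((x - mx) ** 2 for x in xs) / len(xs)
    vy = sum((y - my) ** 2 for y in ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# hypothetical global longitudinal strain readings (%) from two vendors
ge      = [-18.2, -19.5, -17.1, -20.3, -16.8]
philips = [-18.0, -19.9, -17.4, -20.1, -16.5]
print(round(ccc(ge, philips), 3))
```

Unlike Pearson correlation, the CCC penalizes both scale and location shifts between the two vendors' readings, which is why it is the preferred agreement measure here: two systems that are perfectly correlated but offset would still score below 1.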