
    OCB: A Generic Benchmark to Evaluate the Performances of Object-Oriented Database Systems

    We present in this paper a generic object-oriented benchmark, the Object Clustering Benchmark (OCB), designed to evaluate the performance of clustering policies in object-oriented databases. OCB is generic because its sample database may be customized to fit the databases introduced by the main existing benchmarks (e.g., OO1). OCB's current form is clustering-oriented because of its clustering-oriented workload, but it can easily be adapted to other purposes. Lastly, OCB's code is compact and easily portable. OCB has been implemented in a real system (Texas, running on a Sun workstation) in order to test a specific clustering policy called DSTC. A few results concerning this test are presented.
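    Since OCB's genericity comes from a parameterizable sample database, a small sketch may help make this concrete. The Python fragment below is purely illustrative: the parameter names, defaults, and generator logic are hypothetical stand-ins, not OCB's actual specification.

    ```python
    # Illustrative sketch of a parameterizable benchmark database generator.
    # All names and values here are hypothetical, not OCB's real parameters.
    import random

    config = {
        "num_classes": 50,     # size of the class hierarchy
        "num_objects": 20000,  # instances in the sample database
        "max_refs": 10,        # max inter-object references (fan-out)
        "object_size": 256,    # average object size in bytes
    }

    def generate_database(cfg, seed=0):
        random.seed(seed)
        objects = []
        for oid in range(cfg["num_objects"]):
            objects.append({
                "oid": oid,
                "class": random.randrange(cfg["num_classes"]),
                "refs": random.sample(range(cfg["num_objects"]),
                                      random.randint(0, cfg["max_refs"])),
            })
        return objects

    db = generate_database(config)
    ```

    Tuning such parameters is what would let one generator mimic the databases of existing benchmarks such as OO1, which is the sense in which the benchmark is generic.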

    Epistemic Neural Networks

    Intelligence relies on an agent's knowledge of what it does not know. This capability can be assessed based on the quality of joint predictions of labels across multiple inputs. Conventional neural networks lack this capability and, since most research has focused on marginal predictions, this shortcoming has been largely overlooked. We introduce the epistemic neural network (ENN) as an interface for models that represent uncertainty as required to generate useful joint predictions. While prior approaches to uncertainty modeling such as Bayesian neural networks can be expressed as ENNs, this new interface facilitates comparison of joint predictions and the design of novel architectures and algorithms. In particular, we introduce the epinet: an architecture that can supplement any conventional neural network, including large pretrained models, and can be trained with modest incremental computation to estimate uncertainty. With an epinet, conventional neural networks outperform very large ensembles, consisting of hundreds or more particles, at orders of magnitude lower computational cost. We demonstrate this efficacy across synthetic data, ImageNet, and some reinforcement learning tasks. As part of this effort, we open-source our experiment code.
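    To make the ENN interface concrete, here is a minimal NumPy sketch under simplifying assumptions: a toy base network, an invented two-layer epinet, and a Gaussian epistemic index z. The shapes and layer sizes are illustrative only; what reflects the abstract is the pattern of predictions depending on a sampled index z.

    ```python
    # Minimal sketch of the epinet idea: an additive correction network whose
    # output depends on an epistemic index z. Shapes and weights are toy choices.
    import numpy as np

    rng = np.random.default_rng(0)

    def base_net(x):
        # Stand-in for a conventional (possibly pretrained) network: class logits.
        W = np.ones((x.shape[-1], 3)) * 0.1   # hypothetical fixed weights
        return x @ W

    def epinet(features, z, W1, W2):
        # Small MLP taking [features, index z] and returning a logit correction.
        h = np.tanh(np.concatenate([features, z]) @ W1)
        return h @ W2

    def enn_logits(x, z, params):
        # ENN interface: logits depend on both the input x and the index z.
        return base_net(x) + epinet(x, z, *params)

    d, dz, k = 4, 8, 3                        # input dim, index dim, classes
    params = (rng.normal(size=(d + dz, 16)) * 0.1,
              rng.normal(size=(16, k)) * 0.1)

    x = rng.normal(size=d)
    zs = rng.normal(size=(100, dz))           # sampled epistemic indices
    logits = np.stack([enn_logits(x, z, params) for z in zs])
    probs = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)
    print(probs.mean(axis=0))                 # marginal predictive distribution
    ```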

    The Neural Testbed: Evaluating Joint Predictions

    Predictive distributions quantify uncertainties ignored by point estimates. This paper introduces The Neural Testbed: an open-source benchmark for controlled and principled evaluation of agents that generate such predictions. Crucially, the testbed assesses agents not only on the quality of their marginal predictions per input, but also on their joint predictions across many inputs. We evaluate a range of agents using a simple neural-network data generating process. Our results indicate that some popular Bayesian deep learning agents do not fare well with joint predictions, even when they can produce accurate marginal predictions. We also show that the quality of joint predictions drives performance in downstream decision tasks. We find these results are robust across a wide range of generative models, and highlight the practical importance of joint predictions to the community.
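    The marginal/joint distinction is easy to state in code. The sketch below assumes a hypothetical agent whose predictions depend on a sampled belief z, and contrasts the marginal log-loss (probabilities averaged over z per input) with the joint log-loss (probability of all labels together under each z, then averaged).

    ```python
    # Contrast of marginal vs. joint log-loss over a batch of tau inputs.
    # The agent below is a hypothetical stand-in, not the testbed's agents.
    import numpy as np

    rng = np.random.default_rng(1)

    def agent_probs(xs, z):
        # Hypothetical agent: two-class probabilities per input, depending on z.
        logits = xs @ z
        p1 = 1.0 / (1.0 + np.exp(-logits))
        return np.stack([1.0 - p1, p1], axis=-1)      # shape (tau, 2)

    tau, d, n_samples = 10, 4, 1000
    xs = rng.normal(size=(tau, d))
    ys = rng.integers(0, 2, size=tau)
    zs = rng.normal(size=(n_samples, d))

    probs = np.stack([agent_probs(xs, z) for z in zs])  # (n_samples, tau, 2)
    p_y = probs[:, np.arange(tau), ys]                  # prob of true label

    # Marginal log-loss: average over z first, per input, then sum over inputs.
    marginal_nll = -np.log(p_y.mean(axis=0)).sum()
    # Joint log-loss: probability of all tau labels together under each z,
    # averaged over z. This is what separates joint from marginal quality.
    joint_nll = -np.log(p_y.prod(axis=1).mean())
    print(marginal_nll, joint_nll)
    ```

    An agent can do well on the marginal score while doing poorly on the joint score if its sampled beliefs do not capture correlations across inputs, which is exactly the failure mode the testbed is built to expose.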

    Performance Evaluation for Clustering Algorithms in Object-Oriented Database Systems

    It is widely acknowledged that good object clustering is critical to the performance of object-oriented databases. However, object clustering always involves some kind of overhead for the system. The aim of this paper is to propose a modelling methodology for evaluating the performance of different clustering policies. This methodology has been used to compare the performance of three clustering algorithms from the literature (Cactis, CK, and ORION) that we considered representative of current research in the field of object clustering. The actual performance evaluation was carried out through simulation. Our simulation experiments showed that the Cactis algorithm performs better than the ORION algorithm, and that the CK algorithm clearly outperforms both in terms of response time and clustering overhead.
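    The following toy simulation is a hedged sketch of this kind of comparison, not an implementation of Cactis, CK, or ORION: it merely counts page loads when traversing object references under a clustered versus a scattered placement, with invented sizes and a simple LRU cache.

    ```python
    # Toy illustration of why clustering policy matters: page faults incurred
    # while traversing object references. Policies and sizes are invented.
    import random

    random.seed(2)
    NUM_OBJECTS, PAGE_SIZE, CACHE_PAGES = 1000, 10, 20

    # Reference graph: each object points to a few "related" neighbours.
    refs = {o: [min(o + d, NUM_OBJECTS - 1) for d in (1, 2, 3)]
            for o in range(NUM_OBJECTS)}

    def simulate(placement):
        # placement maps object id -> page number; simple LRU cache of pages.
        cache, faults = [], 0
        for _ in range(5000):
            obj = random.randrange(NUM_OBJECTS)
            for target in [obj] + refs[obj]:
                page = placement[target]
                if page in cache:
                    cache.remove(page)      # refresh LRU position
                else:
                    faults += 1
                    if len(cache) >= CACHE_PAGES:
                        cache.pop(0)        # evict least recently used page
                cache.append(page)
        return faults

    clustered = {o: o // PAGE_SIZE for o in range(NUM_OBJECTS)}
    shuffled = list(range(NUM_OBJECTS)); random.shuffle(shuffled)
    scattered = {o: shuffled[o] // PAGE_SIZE for o in range(NUM_OBJECTS)}
    print("clustered faults:", simulate(clustered))
    print("scattered faults:", simulate(scattered))
    ```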

    Évaluation des performances des SGBDOO : un modèle de simulation générique [Performance Evaluation of OODBMSs: A Generic Simulation Model]

    The performance of Object-Oriented Database Management Systems (OODBMSs) remains a topical issue, both for designers and for users. The most widespread approach for evaluating this performance is experimentation, which consists in directly measuring the performance of an existing system. Yet discrete-event random simulation offers several advantages in this context (a priori evaluation, flexibility, low cost...), while remaining little used in the field of object-oriented databases. The aim of this paper is to present a generic simulation model, VOODB, allowing the performance evaluation of various types of OODBMSs. To validate this approach, we simulated the operation of the O2 OODBMS and of the Texas persistent object store. The simulation results obtained were compared with the performance of the real systems, measured by experimentation under the same conditions using the OCB benchmark. The two sets of results proved consistent.
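    A sketch of this validation step, with made-up numbers purely for illustration: simulation output is compared, metric by metric, to measurements taken on the real system under the same workload.

    ```python
    # Hypothetical validation check: relative deviation between simulated and
    # measured performance figures. The values below are invented examples.
    simulated = {"response_time_ms": 42.0, "io_count": 1300}
    measured  = {"response_time_ms": 45.5, "io_count": 1210}

    for metric in simulated:
        rel_err = abs(simulated[metric] - measured[metric]) / measured[metric]
        print(f"{metric}: relative deviation {rel_err:.1%}")
    ```

    Small, consistent deviations across metrics and workloads are what justify treating a simulation model as a valid substitute for experimentation.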