
    MESURE Tool to benchmark Java Card platforms

    The advent of the Java Card standard has been a major turning point in smart card technology. With the growing acceptance of this standard, understanding the performance behavior of these platforms is becoming crucial. To meet this need, we present in this paper a novel benchmarking framework to test and evaluate the performance of Java Card platforms. The MESURE tool is the first framework whose accuracy and effectiveness are independent of the particular Java Card platform tested and of the CAD used.
    Comment: International Journal of Computer Science Issues, Volume 1, pp. 49-57, August 200
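    The abstract describes the goal of the framework rather than its mechanics; as a minimal, hypothetical sketch of the kind of measurement a Java Card benchmark automates, the snippet below times repeated round-trips of a single command and reports summary statistics. The `send_apdu` callable is a placeholder for whatever reader/CAD access layer is actually used; it is not part of MESURE.

```python
import statistics
import time

def benchmark(send_apdu, apdu, warmup=10, runs=100):
    """Time repeated round-trips of one APDU command.

    `send_apdu` is a placeholder for the card-access call supplied by the
    reader/CAD driver; it is assumed to block until the card replies.
    """
    for _ in range(warmup):                      # discard start-up effects
        send_apdu(apdu)
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        send_apdu(apdu)
        timings.append(time.perf_counter() - start)
    return {
        "mean_ms": 1e3 * statistics.mean(timings),
        "stdev_ms": 1e3 * statistics.stdev(timings),
        "min_ms": 1e3 * min(timings),
    }
```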

    How to collect high quality segmentations: use human or computer drawn object boundaries?

    High quality segmentations must be captured consistently for applications such as biomedical image analysis. While human drawn segmentations are often collected because they provide a consistent level of quality, computer drawn segmentations can be collected efficiently and inexpensively. In this paper, we examine how to leverage available human and computer resources to consistently create high quality segmentations, and we propose a quality control methodology. We demonstrate how to apply this approach using crowdsourced and domain expert votes for the "best" segmentation from a collection of human and computer drawn segmentations for 70 objects from a public dataset and 274 objects from biomedical images. We publicly share the library of biomedical images, which includes 1,879 manual annotations of the boundaries of the 274 objects. For the 344 objects, we found that no single segmentation source was preferred and that human annotations were not always preferred over computer annotations. These results motivated us to re-examine the traditional approach to evaluating segmentation algorithms, which compares the segmentations produced by the algorithms to manual annotations on benchmark datasets. We found that algorithm benchmarking results change when the comparison is made to consensus-voted segmentations. Our results led us to propose a new segmentation approach that uses machine learning to predict the optimal segmentation source, and a modified segmentation evaluation approach.
    National Science Foundation (IIS-0910908)
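    The quality-control methodology rests on two operations that are easy to state concretely: fusing several candidate segmentations into a consensus, and scoring an algorithm's output against that consensus rather than against a single manual annotation. A minimal sketch, assuming binary masks stored as NumPy arrays (the crowdsourced voting interface itself is not reproduced here):

```python
import numpy as np

def consensus_mask(masks, threshold=0.5):
    """Majority-vote a set of binary masks into a consensus segmentation."""
    stack = np.stack([m.astype(float) for m in masks])
    return stack.mean(axis=0) >= threshold

def iou(pred, ref):
    """Intersection-over-union between two binary masks."""
    inter = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    return inter / union if union else 1.0

# Illustrative example: three noisy annotations of one object vs. an algorithm's mask.
rng = np.random.default_rng(0)
truth = np.zeros((64, 64), bool)
truth[16:48, 16:48] = True
annotations = [np.logical_xor(truth, rng.random(truth.shape) < 0.02) for _ in range(3)]
algo_mask = np.logical_xor(truth, rng.random(truth.shape) < 0.05)

ref = consensus_mask(annotations)
print("IoU vs. consensus:", round(iou(algo_mask, ref), 3))
```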

    Cautionary Tales of Inapproximability

    Modeling biological questions as classical problems in computer science allows researchers to leverage the wealth of theoretical advancements in that field. Despite countless studies presenting heuristics that report improvements on specific benchmarking data, comparatively little attention has been paid to the theoretical bounds on the performance of practical (polynomial-time) algorithms. Conversely, theoretical studies tend to overstate the generalizability of their conclusions to physical biological processes. In this article we provide a fresh perspective on the concepts of NP-hardness and inapproximability in the computational biology domain, using popular sequence assembly and alignment (mapping) algorithms as illustrative examples. These algorithms exemplify how computer science theory can both (a) lead to substantial improvements in practical performance and (b) highlight areas ripe for future innovation. Importantly, we discuss caveats that seemingly allow the performance of heuristics to exceed their provable bounds.

    Infrastructure for machine learning and computer vision

    Dissertation presented as partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics.
    The infrastructure surrounding machine learning projects is of utmost importance: machine learning projects require data acquisition mechanisms, software for data processing, and a benchmarking platform for evaluating the performance of machine learning algorithms over time. In this report we describe our work on developing such infrastructure for a Europe-based computer vision startup specializing in human behaviour tracking. We discuss the three projects comprising the work: one dedicated to creating a machine learning dataset for human behaviour monitoring, another to developing a screen-camera calibration tool, and a third to setting up a benchmarking platform. The projects were integrated with the core technology of the startup and will continue to be applied in the future.
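    The report's calibration tool is not detailed in the abstract; as a rough sketch of what a screen-camera calibration step typically involves, the snippet below estimates camera intrinsics from chessboard patterns displayed on the screen and captured by the camera, using OpenCV. The capture directory and board geometry are illustrative assumptions, not the startup's actual setup.

```python
import glob
import cv2
import numpy as np

# Chessboard geometry (inner corners) shown on the screen; illustrative values.
BOARD = (9, 6)
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("captures/*.png"):         # captured frames (assumed path)
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

assert img_points, "no chessboard detections found"

# Recover the camera matrix and distortion coefficients.
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection error:", ret)
print("camera matrix:\n", K)
```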

    Quantum Benchmarking: entanglement measures in quantum computers

    Màster Oficial de Ciència i Tecnologia Quàntiques / Quantum Science and Technology, Facultat de Física, Universitat de Barcelona. Academic year 2022-2023. Supervisor: Alba Cervera-Lierta.
    Quantum computation has emerged as a promising paradigm shift in the field of computing, and with the advent of new quantum computers, it has become crucial to assess and quantify their performance. Benchmarking, a well-established practice in the field, plays a vital role in this regard. One effective way to evaluate a quantum computer's capabilities is by measuring the amount of entanglement it exhibits, as entanglement is a fundamental characteristic of quantum systems. In this thesis, we provide a comprehensive overview of the current landscape of quantum benchmarking and propose several protocols for estimating the Rényi entropy of quantum states, which offers valuable insight into their entanglement structure. We present a protocol based on the well-known Swap test, specifically designed for future fault-tolerant devices, as well as a protocol based on randomized measurements that addresses the limitations of current NISQ devices. We have implemented these protocols in the quantum simulation framework Qibo, ensuring efficient and reliable execution on any quantum computer, in particular the one at the Barcelona Supercomputing Center (BSC). Through this work, we aim to contribute to the advancement of quantum benchmarking and to facilitate the assessment of entanglement in quantum computing systems.
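    The Swap-test and randomized-measurement protocols themselves are the subject of the thesis; as a small numerical illustration of the quantity they estimate, the sketch below computes the purity Tr(ρ_A²) of one qubit of an entangled two-qubit state, the corresponding second Rényi entropy S₂ = -log₂ Tr(ρ_A²), and the Swap-test acceptance probability P(0) = (1 + Tr(ρ_A²))/2 that a fault-tolerant implementation would sample.

```python
import numpy as np

# Two-qubit pure state |psi> = cos(t)|00> + sin(t)|11>; entanglement grows with t.
t = np.pi / 6
psi = np.zeros(4, complex)
psi[0], psi[3] = np.cos(t), np.sin(t)

# Reduced density matrix of qubit A: rho_A = Tr_B |psi><psi|.
rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
rho_A = np.trace(rho, axis1=1, axis2=3)

purity = np.trace(rho_A @ rho_A).real   # Tr(rho_A^2), the quantity the Swap test estimates
renyi_2 = -np.log2(purity)              # second Renyi entropy S_2
p0 = (1 + purity) / 2                   # Swap-test probability of measuring |0> on the ancilla

print(f"purity={purity:.4f}  S_2={renyi_2:.4f}  P(0)={p0:.4f}")
```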