143 research outputs found

    Learning SPARQL Queries from Expected Results

    We present LSQ, an algorithm that learns SPARQL queries from a subset of their expected results. The algorithm leverages the grouping, aggregates, and inline values of SPARQL 1.1 in order to move most of the complex computation to a SPARQL endpoint. It operates by building and testing hypotheses expressed as SPARQL queries and uses active learning to collect a small number of learning examples from the user. We provide an open-source implementation and an online interface for testing the algorithm. In the experimental evaluation, we use real queries posed in the past to the official DBpedia SPARQL endpoint and show that the algorithm is able to learn them, recovering 82% of them in less than a minute while asking the user only once.
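
    The abstract does not show the hypothesis queries themselves, but the idea of pushing the verification work to the endpoint can be sketched as follows. This is a minimal illustration under assumptions of my own (the SPARQLWrapper package, example DBpedia classes and properties, and a made-up candidate pattern), not the authors' LSQ implementation: a candidate query pattern is scored against the user-provided expected results by letting the endpoint count, via inline VALUES and aggregation, how many of those results the pattern covers.

```python
# Illustrative sketch only: a "hypothesis" expressed as a SPARQL 1.1 query that
# combines inline VALUES (the expected results supplied by the user) with
# aggregation, so the DBpedia endpoint itself reports how many expected
# results the candidate pattern covers. The names below (dbo:City, dbo:country,
# the three cities) are example assumptions, not taken from the paper.
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "https://dbpedia.org/sparql"  # official DBpedia endpoint, as in the paper's evaluation

HYPOTHESIS = """
PREFIX dbr: <http://dbpedia.org/resource/>
PREFIX dbo: <http://dbpedia.org/ontology/>
SELECT (COUNT(DISTINCT ?x) AS ?covered) WHERE {
  VALUES ?x { dbr:Berlin dbr:Hamburg dbr:Munich }  # expected results given by the user
  ?x a dbo:City ;
     dbo:country dbr:Germany .                     # candidate query pattern being tested
}
"""

sparql = SPARQLWrapper(ENDPOINT)
sparql.setQuery(HYPOTHESIS)
sparql.setReturnFormat(JSON)
bindings = sparql.query().convert()["results"]["bindings"]
covered = int(bindings[0]["covered"]["value"])
print(f"candidate pattern covers {covered} of 3 expected results")
```

    In an active-learning loop of the kind the abstract describes, such coverage counts would tell the learner which hypotheses to keep refining and when it is worth asking the user for another example.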

    Towards Large-Scale Small Object Detection: Survey and Benchmarks

    With the rise of deep convolutional neural networks, object detection has achieved prominent advances in recent years. However, such prosperity cannot camouflage the unsatisfactory situation of Small Object Detection (SOD), one of the notoriously challenging tasks in computer vision, owing to the poor visual appearance and noisy representations caused by the intrinsic structure of small targets. In addition, the lack of large-scale datasets for benchmarking small object detection methods remains a bottleneck. In this paper, we first conduct a thorough review of small object detection. Then, to catalyze the development of SOD, we construct two large-scale Small Object Detection dAtasets (SODA), SODA-D and SODA-A, which focus on the Driving and Aerial scenarios, respectively. SODA-D includes 24,828 high-quality traffic images and 278,433 instances of nine categories. For SODA-A, we harvest 2,513 high-resolution aerial images and annotate 872,069 instances over nine classes. The proposed datasets are, to the best of our knowledge, the first attempt at large-scale benchmarks with a vast collection of exhaustively annotated instances tailored for multi-category SOD. Finally, we evaluate the performance of mainstream methods on SODA. We expect the released benchmarks to facilitate the development of SOD and spawn more breakthroughs in this field. Datasets and code are available at https://shaunyuan22.github.io/SODA

    Explainable AI and Interpretable Computer Vision:From Oversight to Insight

    The increasing availability of big data and computational power has facilitated unprecedented progress in Artificial Intelligence (AI) and Machine Learning (ML). However, complex model architectures have resulted in high-performing yet uninterpretable ‘black boxes’. This prevents users from verifying that the reasoning process aligns with their expectations and intentions. This thesis posits that the sole focus on predictive performance is an unsustainable trajectory, since a model can make the right predictions for the wrong reasons. The research field of Explainable AI (XAI) addresses the black-box nature of AI by generating explanations that present (aspects of) a model's behaviour in human-understandable terms. This thesis supports the transition from oversight to insight, and shows that explainability can give users more insight into every part of the machine learning pipeline: from the training data to the prediction model and the resulting explanations.

    When relying on explanations for judging a model's reasoning process, it is important that the explanations are truthful, relevant and understandable. Part I of this thesis reflects upon explanation quality and identifies 12 desirable properties, including compactness, completeness and correctness. Additionally, it provides an extensive collection of quantitative XAI evaluation methods and analyses their availability in open-source toolkits.

    As an alternative to common post-model explainability, which reverse-engineers an already trained prediction model, Part II of this thesis presents in-model explainability for interpretable computer vision. These image classifiers learn prototypical parts, which are used in an interpretable decision tree or scoring sheet. The models are explainable by design since their reasoning depends on the extent to which an image patch “looks like” a learned part-prototype.

    Part III of this thesis shows that ML can also explain characteristics of a dataset. Because of a model's ability to analyse large amounts of data in little time, extracting hidden patterns can contribute to the validation and potential discovery of domain knowledge, and makes it possible to detect sources of bias and shortcuts early on.

    In conclusion, neither the prediction model, nor the data, nor the explanation method should be handled as a black box. The way forward? AI with a human touch: developing powerful models that learn interpretable features, and using these meaningful features in a decision process that users can understand, validate and adapt. This in-model explainability, such as the part-prototype models from Part II, opens up the opportunity to ‘re-educate’ models with our desired norms, values and reasoning. Enabling human decision-makers to detect and correct undesired model behaviour will contribute towards an effective, but also reliable and responsible, usage of AI.
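
    As a rough illustration of the in-model reasoning described above (and not the models developed in the thesis), the sketch below shows how a scoring-sheet classifier can sit on top of part-prototype similarities: each class score is a weighted sum of "how much does some patch of this image look like this learned prototype" terms, so every contribution to the decision stays traceable. All names, shapes and the random data are assumptions for illustration only.

```python
# Illustrative sketch (not the thesis' models): class scores computed as a
# scoring sheet over part-prototype similarities. Each prototype is a feature
# vector; an image is a set of patch embeddings; the evidence for a prototype
# is its best-matching patch; the class score is a weighted sum of evidence.
import numpy as np

def prototype_scores(patches, prototypes, class_weights):
    """patches: (P, D) patch embeddings of one image
    prototypes: (K, D) learned part-prototype vectors (assumed given)
    class_weights: (C, K) scoring-sheet weights per class
    returns: (C,) class scores and (K,) per-prototype evidence"""
    # cosine similarity of every patch to every prototype
    patches_n = patches / np.linalg.norm(patches, axis=1, keepdims=True)
    protos_n = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sim = patches_n @ protos_n.T            # (P, K) patch-to-prototype similarities
    evidence = sim.max(axis=0)              # best-matching patch per prototype
    return class_weights @ evidence, evidence

# toy usage with random data (49 patches, 128-dim features, 10 prototypes, 5 classes)
rng = np.random.default_rng(0)
scores, evidence = prototype_scores(rng.normal(size=(49, 128)),
                                    rng.normal(size=(10, 128)),
                                    rng.normal(size=(5, 10)))
print("predicted class:", scores.argmax())
```

    A real part-prototype model would learn both the prototypes and the (typically sparse) class weights end-to-end; the point here is only that the final score decomposes into comparisons a human can inspect and, if needed, correct.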

    Handbook of Vascular Biometrics

    This open access handbook provides the first comprehensive overview of biometrics exploiting the shape of human blood vessels for biometric recognition, i.e. vascular biometrics, including finger vein recognition, hand/palm vein recognition, retina recognition, and sclera recognition. After an introductory chapter summarizing the state of the art and the availability of commercial systems, open datasets, and open-source software, individual chapters focus on specific aspects of one of these biometric modalities, including questions of usability, security, and privacy. The book features contributions from both academia and major industrial manufacturers.