
    The K-Space segmentation tool set

    In this paper we describe two applications, created as part of the K-Space Network of Excellence, designed to allow researchers to use and experiment with state-of-the-art methods for spatial segmentation of images and video sequences. The first of these tools is an _Interactive Segmentation Tool_, developed to allow accurate human-guided segmentation of semantic objects from images using different segmentation algorithms. The tool is particularly useful for generating ground-truth segmentations, extracting objects for further processing, and as a general image processing application. The second tool we developed is designed for fully automatic spatial region segmentation of image and video. The tool is web-based; usage requires only a browser. Both the automatic and interactive segmentation tools have been made available online; we anticipate they will be a valuable resource for other researchers.

    Image segmentation, evaluation, and applications

    This thesis aims to advance research in image segmentation by developing robust techniques for evaluating image segmentation algorithms. The key contributions of this work are as follows. First, we investigate the characteristics of existing measures for supervised evaluation of automatic image segmentation algorithms. We show which of these measures is most effective at distinguishing perceptually accurate image segmentation from inaccurate segmentation. We then apply these measures to evaluating four state-of-the-art automatic image segmentation algorithms, and establish which best emulates human perceptual grouping. Second, we develop a complete framework for evaluating interactive segmentation algorithms by means of user experiments. Our system comprises evaluation measures, ground truth data, and implementation software. We validate our proposed measures by showing their correlation with perceived accuracy. We then use our framework to evaluate four popular interactive segmentation algorithms, and demonstrate their performance. Finally, acknowledging that user experiments are sometimes prohibitive in practice, we propose a method of evaluating interactive segmentation by algorithmically simulating the user interactions. We explore four strategies for this simulation, and demonstrate that the best of these produces results very similar to those from the user experiments.
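
    The measures themselves are defined in the thesis; purely as an illustrative sketch, the snippet below computes the Jaccard index (intersection over union), one widely used supervised measure of region accuracy against a ground-truth mask. The function and variable names are ours, not the thesis's.

```python
import numpy as np

def jaccard_index(segmentation, ground_truth):
    """Region overlap between a binary segmentation and its ground truth.

    Both inputs are arrays of the same shape interpreted as boolean masks;
    the score is 1.0 for a perfect match and 0.0 for no overlap.
    """
    seg = np.asarray(segmentation, dtype=bool)
    gt = np.asarray(ground_truth, dtype=bool)
    union = np.logical_or(seg, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as a perfect match
    return np.logical_and(seg, gt).sum() / union

# Example: a predicted object mask overlapping a 3x3 ground-truth region.
pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:3] = True
gt = np.zeros((4, 4), dtype=bool); gt[1:4, 1:4] = True
print(jaccard_index(pred, gt))  # 4 / 9 ~= 0.444
```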

    Image segmentation evaluation using an integrated framework

    In this paper we present a general framework we have developed for running and evaluating automatic image and video segmentation algorithms. This framework was designed to allow effortless integration of existing and forthcoming image segmentation algorithms, and allows researchers to focus more on the development and evaluation of segmentation methods, relying on the framework for encoding/decoding and visualization. We then utilize this framework to automatically evaluate four distinct segmentation algorithms, and present and discuss the results and statistical findings of the experiment.
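
    The paper describes the framework at a system level; a minimal sketch of the kind of driver it implies, assuming a common callable interface for segmenters and a scalar accuracy measure, might look like the following. All names are hypothetical.

```python
def evaluate_segmenters(segmenters, dataset, measure):
    """Run every registered segmentation algorithm over a dataset and
    average an accuracy score against ground truth.

    segmenters: dict mapping algorithm name -> callable(image) -> label map
    dataset:    iterable of (image, ground_truth) pairs
    measure:    callable(label_map, ground_truth) -> float in [0, 1]
    """
    scores = {name: [] for name in segmenters}
    for image, ground_truth in dataset:
        for name, segment in segmenters.items():
            scores[name].append(measure(segment(image), ground_truth))
    return {name: sum(vals) / len(vals) for name, vals in scores.items()}
```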

    Visual analysis for drum sequence transcription

    A system is presented for analysing drum performance video sequences. A novel ellipse detection algorithm is introduced that automatically locates drum tops. This algorithm fits ellipses to edge clusters, and ranks them according to various fitness criteria. A background/foreground segmentation method is then used to extract the silhouette of the drummer and drum sticks. Coupled with a motion intensity feature, this allows for the detection of ‘hits’ in each of the extracted regions. In order to obtain a transcription of the performance, each of these regions is automatically labeled with the corresponding instrument class. A partial audio transcription and color cues are used to measure the compatibility between a region and its label; the Kuhn-Munkres algorithm is then employed to find the optimal labeling. Experimental results demonstrate the ability of visual analysis to enhance the performance of an audio drum transcription system.
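
    The compatibility scores come from the audio transcription and color cues described above; the sketch below shows only the final assignment step, using SciPy's implementation of the Kuhn-Munkres (Hungarian) algorithm on a made-up compatibility matrix.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical compatibility scores: rows are detected drum regions,
# columns are instrument classes (e.g. snare, hi-hat, kick). Higher is better.
compatibility = np.array([
    [0.9, 0.2, 0.1],
    [0.3, 0.8, 0.2],
    [0.1, 0.3, 0.7],
])

# linear_sum_assignment minimises total cost, so negate the scores to find
# the labeling that maximises total compatibility (one label per region).
region_idx, label_idx = linear_sum_assignment(-compatibility)
for r, l in zip(region_idx, label_idx):
    print(f"region {r} -> instrument {l} (score {compatibility[r, l]:.1f})")
```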

    A framework and user interface for automatic region based segmentation algorithms

    In this paper we describe a framework and tool developed for running and evaluating automatic region based segmentation algorithms. The tool was designed to allow simple integration of existing and future segmentation algorithms, both single image based algorithms and those that operate on video data. Our framework supports plug-in segmenters, media decoders, and region-map codecs. We provide several sophisticated implementations of these plug-ins, including a video decoder capable of frame-accurate decoding of a large variety of video formats, an image decoder which also handles a comprehensive collection of formats, and an efficient implementation of a region-map codec. The tool includes both a graphical user interface to allow users to browse, visually inspect, and evaluate the algorithm output, and a batch processing interface for segmentation of large data collections. The application allows researchers to focus more on the development and evaluation of segmentation methods, relying on the framework for encoding/decoding input and output, and on the front end for visualization.
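
    The exact plug-in API is not given in the abstract; as an assumed illustration of the structure it describes, the three plug-in points might be expressed as small abstract interfaces like those below, with the framework supplying the iteration, encoding/decoding, and visualization around them.

```python
from abc import ABC, abstractmethod

class MediaDecoder(ABC):
    """Plug-in that turns an image or video file into a sequence of frames."""
    @abstractmethod
    def frames(self, path):
        ...

class Segmenter(ABC):
    """Plug-in that maps a frame to an integer region-label map of the same size."""
    @abstractmethod
    def segment(self, frame):
        ...

class RegionMapCodec(ABC):
    """Plug-in that reads and writes region-label maps in the framework's format."""
    @abstractmethod
    def encode(self, region_map, path):
        ...

    @abstractmethod
    def decode(self, path):
        ...
```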

    Efficient storage and decoding of SURF feature points

    Practical use of SURF feature points in large-scale indexing and retrieval engines requires an efficient means for storing and decoding these features. This paper investigates several methods for compression and storage of SURF feature points, considering both storage consumption and disk-read efficiency. We compare each scheme with a baseline plain-text encoding scheme as used by many existing SURF implementations. Our final proposed scheme significantly reduces both the time required to load and decode feature points, and the space required to store them on disk.
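
    The paper's specific encoding schemes are not reproduced here; purely to illustrate the gap between plain-text and binary storage, the sketch below packs SURF keypoints (position, scale, orientation and a 64-dimensional descriptor) into fixed-width binary records with NumPy. The field layout is an assumption, not the paper's proposed scheme.

```python
import numpy as np

# One fixed-width record per keypoint: four scalar fields plus a 64-D descriptor,
# all float32, giving 272 bytes per point instead of a long plain-text line.
surf_dtype = np.dtype([
    ("x", np.float32), ("y", np.float32),
    ("scale", np.float32), ("orientation", np.float32),
    ("descriptor", np.float32, (64,)),
])

def save_keypoints(path, keypoints):
    """keypoints: sequence of (x, y, scale, orientation, descriptor) tuples."""
    np.asarray(keypoints, dtype=surf_dtype).tofile(path)

def load_keypoints(path):
    """Read the whole file back as a structured array in one call."""
    return np.fromfile(path, dtype=surf_dtype)
```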

    Comparing data augmentation strategies for deep image classification

    Currently, deep learning requires large volumes of training data to fit accurate models. In practice, however, there is often insufficient training data available and augmentation is used to expand the dataset. Historically, only simple forms of augmentation, such as cropping and horizontal flips, were used. More complex augmentation methods have recently been developed, but it is still unclear which techniques are most effective, and at what stage of the learning process they should be introduced. This paper investigates data augmentation strategies for image classification, including the effectiveness of different forms of augmentation, the dependency on the number of training examples, and when augmentation should be introduced during training. The most accurate results in all experiments are achieved using random erasing, owing to its ability to simulate occlusion. As expected, reducing the number of training examples significantly increases the importance of augmentation, but surprisingly the improvements in generalization do not appear to result solely from augmentation preventing overfitting. Results also indicate that a learning curriculum which introduces augmentation after the initial learning phase has passed is more effective than the standard practice of using augmentation throughout training, and that introducing it too late also reduces accuracy. We find that careful augmentation can improve accuracy by +2.83% to 95.85% using a ResNet model on CIFAR-10, with more dramatic improvements seen when there are fewer training examples. Source code is available at https://git.io/fjPP
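
    Random erasing is an existing augmentation technique evaluated in the paper rather than introduced by it; a minimal NumPy sketch of the idea, masking a random rectangle of the image with noise to simulate occlusion, is shown below. The parameter values are illustrative, not the paper's settings.

```python
import numpy as np

def random_erase(image, erase_prob=0.5, min_area=0.02, max_area=0.2, rng=None):
    """Randomly replace a rectangular patch of an HxWxC uint8 image with noise."""
    rng = np.random.default_rng() if rng is None else rng
    out = image.copy()
    if rng.random() > erase_prob:
        return out  # leave this sample unaugmented
    h, w = image.shape[:2]
    area = rng.uniform(min_area, max_area) * h * w   # patch area as a fraction of the image
    aspect = rng.uniform(0.3, 3.3)                   # patch aspect ratio
    eh = int(round(np.sqrt(area / aspect)))
    ew = int(round(np.sqrt(area * aspect)))
    if 0 < eh < h and 0 < ew < w:
        top = rng.integers(0, h - eh)
        left = rng.integers(0, w - ew)
        out[top:top + eh, left:left + ew] = rng.integers(0, 256, (eh, ew) + image.shape[2:])
    return out
```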

    FastSal: a Computationally Efficient Network for Visual Saliency Prediction

    This paper focuses on the problem of visual saliency prediction, that is, predicting the regions of an image that tend to attract human visual attention, under a constrained computational budget. We modify and test several recent efficient convolutional neural network architectures, such as EfficientNet and MobileNetV2, and compare them with existing state-of-the-art saliency models such as SalGAN and DeepGaze II, both in terms of standard accuracy metrics like AUC and NSS and in terms of computational complexity and model size. We find that MobileNetV2 makes an excellent backbone for a visual saliency model and can be effective even without a complex decoder. We also show that knowledge transfer from a more computationally expensive model like DeepGaze II can be achieved via pseudo-labelling an unlabelled dataset, and that this approach gives results on par with many state-of-the-art algorithms at a fraction of the computational cost and model size. Source code is available at https://github.com/feiyanhu/FastSal
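
    The full training setup is in the paper and repository; the sketch below illustrates only the pseudo-labelling idea, in which a frozen, expensive teacher model produces saliency maps for unlabelled images that then serve as training targets for the lightweight student. The model objects, the loss choice, and the assumption that both networks output logits are placeholders.

```python
import torch
import torch.nn.functional as F

def pseudo_label_step(teacher, student, optimizer, images):
    """One student update on teacher-generated saliency maps for an unlabelled batch."""
    teacher.eval()
    with torch.no_grad():                         # the teacher is frozen
        targets = torch.sigmoid(teacher(images))  # pseudo ground-truth saliency maps
    student.train()
    logits = student(images)
    loss = F.binary_cross_entropy_with_logits(logits, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```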

    Assessing knee OA severity with CNN attention-based end-to-end architectures

    This work proposes a novel end-to-end convolutional neural network (CNN) architecture to automatically quantify the severity of knee osteoarthritis (OA) from X-ray images, incorporating trainable attention modules that act as unsupervised fine-grained detectors of the region of interest (ROI). The proposed attention modules can be applied at different levels and scales across any CNN pipeline, helping the network learn relevant attention patterns over the most informative parts of the image at different resolutions. We test the proposed attention mechanism on existing state-of-the-art CNN architectures as our base models, achieving promising results on the benchmark knee OA datasets from the Osteoarthritis Initiative (OAI) and the Multicenter Osteoarthritis Study (MOST).
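
    The attention module design used in the paper is specific to its architecture; as a generic, assumed illustration of a trainable spatial attention block that can be inserted after any convolutional stage, the PyTorch sketch below learns a single-channel attention map and reweights the features with it.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Learns a per-pixel attention map and reweights the input feature map."""
    def __init__(self, in_channels):
        super().__init__()
        self.score = nn.Conv2d(in_channels, 1, kernel_size=1)  # 1x1 conv -> attention logits

    def forward(self, x):                     # x: (N, C, H, W)
        attn = torch.sigmoid(self.score(x))   # (N, 1, H, W), values in (0, 1)
        return x * attn                       # broadcast the map over all channels

# Usage: drop the block in after a convolutional stage of a backbone CNN.
features = torch.randn(2, 64, 28, 28)
block = SpatialAttention(64)
out = block(features)                         # same shape as the input
```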