3,515 research outputs found
Edge Detection: A Collection of Pixel based Approach for Colored Images
Traditional edge detection algorithms process an image one pixel at a time,
computing for each pixel a value that indicates its edge magnitude and
orientation. Most of these algorithms convert coloured images to gray scale
before detecting edges. However, this conversion reduces the precision of the
recognized edges, producing false and broken edges in the image. This paper
presents a profile-modelling scheme for collections of pixels based on step
and ramp edges, with a view to reducing the false and broken edges in the
image. The generated collection-of-pixels scheme is used with Vector Order
Statistics to reduce the imprecision of edges recognized when converting from
coloured to gray-scale images. The Pratt Figure of Merit (PFOM) is used as a
quantitative comparison between the existing traditional edge detection
algorithms and the developed algorithm as a means of validation. The developed
algorithm obtained a PFOM value of 0.8480, an improvement over the existing
traditional edge detection algorithms.
Comment: 5 pages
K-Space at TRECVid 2007
In this paper we describe the K-Space participation in TRECVid 2007. K-Space participated in two tasks: high-level feature extraction and interactive search. We present our approaches for each of these tasks and provide a brief analysis of our results. Our high-level feature submission utilized multi-modal low-level features, including visual, audio and temporal elements. Specific concept detectors (such as face detectors) developed by K-Space partners were also used. We experimented with different machine learning approaches, including logistic regression and support vector machines (SVM). Finally, we also experimented with both early and late fusion for feature combination. This year we also participated in interactive search, submitting 6 runs. We developed two interfaces which both utilized the same retrieval functionality. Our objective was to measure the effect of context, which was supported to different degrees in each interface, on user performance.
The first of the two systems was a "shot"-based interface, where the results from a query were presented as a ranked list of shots. The second interface was "broadcast"-based, where results were presented as a ranked list of broadcasts. Both systems made use of the outputs of our high-level feature submission as well as low-level visual features.
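The early/late fusion distinction mentioned above can be illustrated with a small sketch: early fusion concatenates the per-modality feature vectors before training a single classifier, while late fusion trains one classifier per modality and combines their scores. The synthetic "visual" and "audio" features below are assumptions for illustration only, not the K-Space features; scikit-learn's LogisticRegression stands in for the paper's classifiers.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, n)
# Two synthetic "modalities" (stand-ins for visual and audio features),
# each weakly correlated with the label y
visual = y[:, None] + rng.normal(0.0, 1.0, (n, 4))
audio = y[:, None] + rng.normal(0.0, 1.0, (n, 3))

# Early fusion: concatenate the modality features, train one classifier
early = LogisticRegression().fit(np.hstack([visual, audio]), y)

# Late fusion: one classifier per modality, then average their scores
clf_v = LogisticRegression().fit(visual, y)
clf_a = LogisticRegression().fit(audio, y)

def late_fusion_proba(v, a):
    """Average the per-modality class probabilities (simple late fusion)."""
    return (clf_v.predict_proba(v) + clf_a.predict_proba(a)) / 2
```

Averaging probabilities is only one late-fusion rule; weighted sums or a second-stage classifier over the per-modality scores are common alternatives.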
Retinal blood vessels extraction using probabilistic modelling
© 2014 Kaba et al.; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. This article has been made available through the Brunel Open Access Publishing Fund.
The analysis of retinal blood vessels plays an important role in detecting and treating retinal diseases. In this paper, we present an automated method to segment the blood vessels of fundus retinal images. The proposed method could support non-intrusive diagnosis in modern ophthalmology for the early detection of retinal diseases, treatment evaluation or clinical study. The method combines bias correction and adaptive histogram equalisation to enhance the appearance of the blood vessels. The blood vessels are then extracted using probabilistic modelling optimised by the expectation-maximisation algorithm. The method is evaluated on fundus retinal images from the STARE and DRIVE datasets, and the experimental results are compared with some recently published retinal blood vessel segmentation methods. The results show that our method achieves the best overall performance, which is comparable to the performance of human experts.
The Department of Information Systems, Computing and Mathematics, Brunel University
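The EM-optimised probabilistic modelling step can be sketched with a two-component Gaussian mixture over pixel intensities, fitted by EM and thresholded by component. This is only a minimal sketch of that one step, assuming vessels are the darker intensity mode; the published pipeline's bias correction and adaptive histogram equalisation are omitted, and `segment_vessels` is an illustrative name.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def segment_vessels(green_channel, n_components=2):
    """Label pixels via a Gaussian mixture fitted with EM (illustrative sketch).

    The mixture is fitted to the pixel intensities; the component with
    the lower mean is treated as the (darker) vessel class. Returns a
    boolean mask the same shape as the input image.
    """
    x = green_channel.reshape(-1, 1).astype(float)
    gmm = GaussianMixture(n_components=n_components, random_state=0).fit(x)
    vessel_label = int(np.argmin(gmm.means_.ravel()))  # darker component
    return (gmm.predict(x) == vessel_label).reshape(green_channel.shape)
```

In practice the green channel of the fundus image is commonly used for vessel segmentation because it offers the highest vessel-to-background contrast.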
- …