
    Edge Detection: A Collection of Pixel based Approach for Colored Images

    Existing traditional edge detection algorithms process a single pixel of an image at a time, calculating a value that gives the edge magnitude and orientation at that pixel. Most of these algorithms convert colour images to grayscale before detecting edges, a step that degrades the precision of the recognized edges and produces false and broken edges in the image. This paper presents a profile-modelling scheme for collections of pixels based on step and ramp edges, with a view to reducing the false and broken edges present in the image. The generated collection-of-pixels scheme is used with Vector Order Statistics to reduce the imprecision of recognized edges introduced by the colour-to-grayscale conversion. The Pratt Figure of Merit (PFOM) is used for quantitative comparison between the existing traditional edge detection algorithms and the developed algorithm as a means of validation. The PFOM value obtained for the developed algorithm is 0.8480, an improvement over the existing traditional edge detection algorithms.
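    The Pratt Figure of Merit cited above has a standard form: the sum of 1/(1 + alpha * d^2) over detected edge pixels, where d is each pixel's distance to the nearest ground-truth edge pixel, normalised by the larger of the detected and ideal edge counts. A minimal sketch (the function name and the conventional alpha = 1/9 scaling constant are assumptions, not taken from the paper):

    ```python
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def pratt_fom(detected, ideal, alpha=1/9):
        """Pratt's Figure of Merit between a detected edge map and a
        ground-truth edge map (both boolean arrays of the same shape)."""
        # Distance from every pixel to the nearest ground-truth edge pixel.
        dist = distance_transform_edt(~ideal)
        n_ideal = ideal.sum()
        n_detected = detected.sum()
        if max(n_ideal, n_detected) == 0:
            return 1.0  # both maps empty: treat as perfect agreement
        # Localisation penalty summed over the detected edge pixels only.
        score = np.sum(1.0 / (1.0 + alpha * dist[detected] ** 2))
        return score / max(n_ideal, n_detected)
    ```

    A score of 1.0 indicates perfect agreement with the reference edge map, which is how values such as the 0.8480 reported above are interpreted.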

    Model-Based Edge Detector for Spectral Imagery Using Sparse Spatiospectral Masks

    Two model-based algorithms for edge detection in spectral imagery are developed that specifically target intrinsic features such as isoluminant edges, which are characterized by a jump in color but not in intensity. Given prior knowledge of the classes of reflectance or emittance spectra associated with candidate objects in a scene, a small set of spectral-band ratios, which most profoundly identify the edge between each pair of materials, is selected to define an edge signature. The bands that form the edge signature are fed into a spatial mask, producing a sparse joint spatiospectral nonlinear operator. The first algorithm achieves edge detection for every material pair by matching the response of the operator at every pixel with the edge signature for the pair of materials. The second algorithm is a classifier-enhanced extension of the first that adaptively accentuates distinctive features before applying the spatiospectral operator. Both algorithms are extensively verified using spectral imagery from the airborne hyperspectral imager and from a dots-in-a-well midinfrared imager. In both cases, the multicolor gradient (MCG) and the hyperspectral/spatial detection of edges (HySPADE) edge detectors are used as a benchmark for comparison. The results demonstrate that the proposed algorithms outperform the MCG and HySPADE edge detectors in accuracy, especially when isoluminant edges are present. By requiring only a few bands as input to the spatiospectral operator, the algorithms enable significant levels of data compression in band selection. In the presented examples, the required operations per pixel are reduced by a factor of 71 with respect to those required by the MCG edge detector.
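    As a rough illustration of the band-ratio idea (not the authors' operator), the sketch below computes a few spectral-band ratios per pixel, applies a simple first-difference spatial mask, and flags pixels whose responses fall within a tolerance of a precomputed edge signature. The function names, the difference mask, and the matching rule are all assumptions for illustration.

    ```python
    import numpy as np

    def band_ratio_edge_response(cube, band_pairs):
        """Illustrative sparse spatiospectral operator: a few spectral band
        ratios followed by a simple horizontal-difference spatial mask.
        cube: (rows, cols, bands) array; band_pairs: list of (i, j) indices."""
        rows, cols, _ = cube.shape
        response = np.zeros((rows, cols, len(band_pairs)))
        for k, (i, j) in enumerate(band_pairs):
            ratio = cube[..., i] / (cube[..., j] + 1e-12)  # avoid divide-by-zero
            # Spatial mask: first difference along columns (a crude edge filter).
            response[..., k] = np.abs(np.diff(ratio, axis=1, prepend=ratio[:, :1]))
        return response

    def match_edge_signature(response, signature, tol=0.1):
        """Flag pixels whose band-ratio responses lie within a tolerance of a
        precomputed edge signature for one material pair (hypothetical rule)."""
        return np.all(np.abs(response - signature) < tol, axis=-1)
    ```

    Because only the few bands entering the ratios are ever read, this structure is what allows the per-pixel operation count and data volume to drop sharply relative to full-gradient detectors such as MCG.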

    Benchmarking of Gaussian boson sampling using two-point correlators

    Gaussian boson sampling is a promising scheme for demonstrating a quantum computational advantage using photonic states that are accessible in a laboratory and thus offer scalable sources of quantum light. In this contribution, we study two-point photon-number correlation functions to gain insight into the interference of Gaussian states in optical networks. We investigate the characteristic features of statistical signatures which enable us to distinguish classical from quantum interference. In contrast to the typical implementation of boson sampling, we find additional contributions to the correlators under study which stem from the phase dependence of Gaussian states and which are not observable when Fock states interfere. Using the first three moments, we formulate the tools required to experimentally observe signatures of quantum interference of Gaussian states using two outputs only. By considering the current architectural limitations in realistic experiments, we further show that a statistically significant discrimination between quantum and classical interference is possible even in the presence of loss, noise, and finite photon-number resolution. Therefore, we formulate and apply a theoretical framework to benchmark the quantum features of Gaussian boson sampling under realistic conditions.
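    The two-point photon-number correlator used in benchmarks of this kind is typically the covariance of the photon numbers in two output modes, C_ij = <n_i n_j> - <n_i><n_j>, estimated from the raw samples. A minimal sketch under the assumption that the samples are stored as a (num_samples, num_modes) array of photon counts (array layout and function name are assumptions):

    ```python
    import numpy as np

    def two_point_correlator(samples, i, j):
        """Empirical two-point photon-number correlator
        C_ij = <n_i n_j> - <n_i><n_j> between output modes i and j."""
        n_i = samples[:, i].astype(float)
        n_j = samples[:, j].astype(float)
        return np.mean(n_i * n_j) - np.mean(n_i) * np.mean(n_j)
    ```

    Comparing such empirical correlators (and higher moments built the same way) against the values predicted for classical versus quantum input light is the kind of two-output test the abstract describes.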

    Color edges extraction using statistical features and automatic threshold technique: application to the breast cancer cells

    BACKGROUND: Color image segmentation has so far been applied in many areas, and many different techniques have recently been developed and proposed. In medical imaging, image segmentation can help doctors follow up a patient's disease from processed breast cancer images. The main objective of this work is to rebuild and enhance each cell from the three component images provided by an input image. Starting from an initial segmentation obtained using statistical features and histogram threshold techniques, the resulting segmentation accurately represents the incomplete and pasted cells and enhances them. This provides real help to doctors, as the cells become clear and easy to count. METHODS: A novel method for color edge extraction based on statistical features and an automatic threshold is presented. The traditional edge detector, based on the first- and second-order neighborhoods describing the relationship between the current pixel and its neighbors, is extended to the statistical domain. Color edges in an image are thus obtained by combining the statistical features with the automatic threshold technique. Finally, a combination rule is used to integrate the edge results obtained for each primitive color over the three color components. RESULTS: Breast cancer cell images were used to evaluate the performance of the proposed method both quantitatively and qualitatively. A visual and a numerical assessment based on the probability of correct classification P(C), the probability of false classification P(f), and the classification accuracy Sens(%) are presented and compared with existing techniques. The proposed method shows its superiority in detecting points that truly belong to the cells and in facilitating the counting of the processed cells. CONCLUSIONS: Computer simulations highlight that the proposed method substantially enhances the segmented image with smaller error rates than other existing algorithms under the same settings (patterns and parameters). Moreover, it provides high classification accuracy, reaching a rate of 97.94%. Additionally, the segmentation method may be extended to other medical imaging modalities with similar properties.
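    A minimal sketch of the overall pipeline, assuming a local standard deviation as the statistical edge feature, Otsu's method standing in for the paper's automatic threshold, and a logical OR as the combination rule over the three color components (all three choices are illustrative substitutes, not the authors' exact operators):

    ```python
    import numpy as np
    from scipy.ndimage import generic_filter
    from skimage.filters import threshold_otsu

    def color_edges(image):
        """Per-channel statistical edge feature + automatic threshold,
        combined over the R, G, B components with a logical OR."""
        edge_map = np.zeros(image.shape[:2], dtype=bool)
        for c in range(3):  # process each primitive color independently
            feature = generic_filter(image[..., c].astype(float), np.std, size=3)
            edge_map |= feature > threshold_otsu(feature)
        return edge_map
    ```

    Processing each component image separately and only then fusing the results is what lets edges that appear in only one color channel survive into the final map.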

    Random convolution ensembles

    A novel method for creating diverse ensembles of image classifiers is proposed. The idea is that, for each base image classifier in the ensemble, a random image transformation is generated and applied to all of the images in the labeled training set. The base classifiers are then learned using features extracted from these randomly transformed versions of the training data, and the result is a highly diverse ensemble of image classifiers. This approach is evaluated on a benchmark pedestrian detection dataset and shown to be effective.
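    A minimal sketch of the idea, assuming grayscale images of equal size, a random convolution kernel as the per-member image transformation (consistent with the title, but the kernel size, the logistic-regression base learner, and majority voting are illustrative assumptions):

    ```python
    import numpy as np
    from scipy.signal import convolve2d
    from sklearn.linear_model import LogisticRegression

    def train_random_convolution_ensemble(images, labels, n_members=10, ksize=5, seed=0):
        """Each ensemble member sees the training images transformed by its
        own random convolution kernel, which drives diversity."""
        rng = np.random.default_rng(seed)
        ensemble = []
        for _ in range(n_members):
            kernel = rng.standard_normal((ksize, ksize))
            feats = np.stack([convolve2d(img, kernel, mode='same').ravel()
                              for img in images])
            clf = LogisticRegression(max_iter=1000).fit(feats, labels)
            ensemble.append((kernel, clf))
        return ensemble

    def predict_ensemble(ensemble, image):
        """Majority vote over the members' predictions for a single image."""
        votes = [clf.predict(convolve2d(image, kernel, mode='same').ravel()[None])[0]
                 for kernel, clf in ensemble]
        return max(set(votes), key=votes.count)
    ```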

    Unsupervised edge map scoring: a statistical complexity approach

    We propose a new Statistical Complexity Measure (SCM) to qualify edge maps without Ground Truth (GT) knowledge. The measure is the product of two indices: an \emph{Equilibrium} index $\mathcal{E}$, obtained by projecting the edge map onto a family of edge patterns, and an \emph{Entropy} index $\mathcal{H}$, defined as a function of the Kolmogorov-Smirnov (KS) statistic. This new measure can be used for performance characterization, which includes: (i)~the specific evaluation of an algorithm (intra-technique process) in order to identify its best parameters, and (ii)~the comparison of different algorithms (inter-technique process) in order to classify them according to their quality. Results obtained on images of the South Florida and Berkeley databases show that our approach significantly improves over Pratt's Figure of Merit (PFoM), the objective reference-based standard for edge map evaluation, as it takes more features into account in its evaluation.
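    A minimal sketch of the product structure $\mathcal{E} \cdot \mathcal{H}$ only. Both indices below are simplified placeholders rather than the paper's definitions: the equilibrium index is taken as the normalised Shannon entropy of a local edge-pattern histogram, and the entropy index as a KS-type distance between that histogram and a flat reference distribution.

    ```python
    import numpy as np

    def statistical_complexity(edge_map, pattern_size=3):
        """Placeholder SCM = (equilibrium of local edge patterns) x (KS-type
        distance of the pattern distribution from uniform). edge_map is a
        binary numpy array."""
        h, w = edge_map.shape
        counts = {}
        # Histogram of all pattern_size x pattern_size binary edge patterns.
        for r in range(h - pattern_size + 1):
            for c in range(w - pattern_size + 1):
                key = edge_map[r:r + pattern_size, c:c + pattern_size].tobytes()
                counts[key] = counts.get(key, 0) + 1
        p = np.sort(np.array(list(counts.values()), dtype=float))
        p /= p.sum()
        # Equilibrium index: how evenly the observed patterns are used.
        equilibrium = -(p * np.log(p)).sum() / np.log(len(p)) if len(p) > 1 else 0.0
        # KS-type distance between the pattern CDF and the uniform CDF.
        cum = np.cumsum(p)
        uniform_cum = np.arange(1, len(p) + 1) / len(p)
        ks_stat = np.abs(cum - uniform_cum).max()
        return equilibrium * ks_stat
    ```

    The key property the product form captures is that a score is high only when both factors are non-trivial, which is what allows edge maps to be ranked without any ground truth.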