
    Two dimensional generalized edge detector

    In this study, the λτ-space image representation and edge detector previously developed by Gökmen and Jain (1997) are extended to two-dimensional space. This extension is important for two reasons. First, the behavior of images in λτ-space is best modeled by two-dimensional smoothing and edge detection filters. Second, since the Generalized Edge Detector (GED) can reproduce many well-known, successful edge detectors, two-dimensional forms of these filters can be constructed with the two-dimensional GED. The smoothing problem is defined as the minimization of a two-dimensional hybrid energy functional formed as a linear combination of membrane and thin-plate models. Gökmen and Jain (1997) solved the equation minimizing the hybrid functional as a one-dimensional partial differential equation under a separability assumption. However, this separable solution does not satisfy the original two-dimensional equation. In this study, the solution of the system of equations minimizing the hybrid functional in two-dimensional space is presented. When compared with the previous filters in terms of type I and type II error characteristics, the derived filters are observed to be less sensitive to noise. The performance of the edge detector and its behavior in λτ-space are demonstrated with experimental results on real and synthetic images. There are many parameters to tune when working with edge detectors, and there is no generally accepted method for finding the best parameter set for a given image. Indeed, edge detector parameters optimized for one image will not be optimal for another. In this study, the optimal GED parameters are determined from the receiver operating characteristic curve computed for a given image. The aim here is to demonstrate the performance limits of GED.
Keywords: Edge detection, regularization theory, scale-space representation, surface reconstruction.

The aim of edge detection is to provide a meaningful description of object boundaries in a scene from the intensity surface. These boundaries are due to discontinuities manifesting themselves as sharp variations in image intensities. Sharp changes in images arise from different sources, such as structure (e.g. texture, occlusion) or illumination (e.g. shadows, highlights). Extracting edges from a still image is arguably the most significant stage of any computer vision algorithm requiring high localization accuracy in the presence of noise. The performance of many contour-based vision algorithms, such as shape-based query, curve-based stereo vision, and edge-based target recognition, is highly dependent on the quality of the detected edges. Therefore, edge detection is an important area of research in computer vision. Despite considerable work and progress on this subject, edge detection remains a challenging research problem due to the lack of a robust and efficient general-purpose algorithm. Most efforts in edge detection have been devoted to developing an optimum edge detector which can resolve the tradeoff between good localization and detection performance. Furthermore, extracting edges at different scales and combining these edges have attracted substantial interest. In the course of developing optimum edge detectors that resolve the tradeoff between localization and detection performance, several different approaches have resulted in either a Gaussian filter or a filter whose shape is very similar to a Gaussian. These filters are also well suited to scale-space edge detection, since the scale of the filter can be controlled by a single parameter.
For instance, in classical scale-space the kernel is a Gaussian, and the scale-space representation is obtained either by convolving the image with Gaussians of increasing standard deviation or, equivalently, by solving the linear heat equation in time. This representation is causal, since the isotropic heat equation satisfies a maximum principle. However, the Gaussian scale-space suffers from serious drawbacks such as over-smoothing and location uncertainty along edges at large scales, due to interactions between nearby edges and displacements. Although these filters are widely used, it is difficult to claim that they provide the desired output for every specific problem. For instance, there are cases where improved localization performance is the primary requirement; in these cases a sub-optimal filter which favors localization becomes more appropriate. It has been shown that the first-order R-filter can deliver improved results on checkerboard and bar images, as well as on some real images, for moderate values of signal-to-noise ratio (SNR). In many vision applications there is a great demand for a general-purpose edge detector which can produce edge maps with very different characteristics, so that one of these edge maps may meet the requirements of the problem under consideration. Gökmen and Jain (1997) obtained an edge detector called the Generalized Edge Detector (GED), capable of producing most of the existing edge detectors. The original problem was formulated on a two-dimensional hybrid model comprising a linear combination of membrane and thin-plate functionals. The smoothing problem was then reduced to the solution of a two-dimensional partial differential equation (PDE), and the filters were obtained for the one-dimensional case by assuming a separable solution.
This study extends the λτ-space edge detection of images to two-dimensional space. The two-dimensional extension of the representation is important because the properties of images in λτ-space are best modeled by two-dimensional smoothing and edge detection filters. Also, since the GED filters encompass most of the well-known edge detectors, two-dimensional versions of these filters can be obtained. The derived filters are more robust to noise than the previous one-dimensional scheme in terms of miss and false-alarm characteristics. There are several parameters to tune when dealing with edge detectors, and usually there is no easy way to find the optimal parameters for a given image; in fact, a set of parameters that is optimal for one image may not be optimal for another. In this study, when the ideal edges of an image are available, we find the optimal GED parameters by exhaustive search over the receiver operating characteristic, in order to show the performance limits of GED.
Keywords: Edge detection, regularization theory, scale-space representation, surface reconstruction
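The membrane/thin-plate hybrid smoothing underlying GED can be sketched in one dimension as a single linear solve. The discretization, parameter names (`lam`, `tau`), and filter form below are illustrative assumptions, not the authors' actual λτ-space filters:

```python
import numpy as np

def hybrid_smooth(d, lam=5.0, tau=0.5):
    """Smooth a 1-D signal d by minimizing the discrete hybrid energy
    sum (u - d)^2 + lam * [(1 - tau) * |D1 u|^2 + tau * |D2 u|^2],
    a linear blend of membrane (first-difference) and thin-plate
    (second-difference) penalties, via one linear solve."""
    n = len(d)
    D1 = np.diff(np.eye(n), 1, axis=0)   # first-difference operator
    D2 = np.diff(np.eye(n), 2, axis=0)   # second-difference operator
    A = np.eye(n) + lam * ((1 - tau) * D1.T @ D1 + tau * D2.T @ D2)
    return np.linalg.solve(A, d)

# Noisy step edge: tau blends between membrane-like and plate-like smoothing.
rng = np.random.default_rng(0)
x = np.r_[np.zeros(50), np.ones(50)] + 0.1 * rng.standard_normal(100)
u = hybrid_smooth(x, lam=5.0, tau=0.5)
```

Sweeping `lam` (overall smoothing) and `tau` (membrane vs. plate mix) mimics, in this toy setting, the two-parameter family of filters discussed in the abstract.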

    Online Nonparametric Anomaly Detection based on Geometric Entropy Minimization

    We consider the online and nonparametric detection of abrupt and persistent anomalies, such as a change in the regular system dynamics at a time instance due to an anomalous event (e.g., a failure or a malicious activity). Combining the simplicity of the nonparametric Geometric Entropy Minimization (GEM) method with the timely detection capability of the Cumulative Sum (CUSUM) algorithm, we propose a computationally efficient online anomaly detection method that is applicable to high-dimensional datasets and at the same time achieves near-optimum average detection delay for a given false alarm constraint. We provide new insights into both GEM and CUSUM, including a new asymptotic analysis for GEM, which enables soft decisions for outlier detection, and a novel interpretation of CUSUM in terms of discrepancy theory, which helps us generalize it to the nonparametric GEM statistic. We show numerically, using both simulated and real datasets, that the proposed nonparametric algorithm attains performance close to the clairvoyant parametric CUSUM test.
    Comment: to appear in IEEE International Symposium on Information Theory (ISIT) 201
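The CUSUM side of the proposed combination can be illustrated with a minimal recursion. The score model below (i.i.d. scores whose mean shifts at the change point) is an illustrative stand-in for the paper's GEM-based statistics:

```python
import numpy as np

def cusum(scores, threshold):
    """Online CUSUM: accumulate the positive drift of per-sample
    anomaly scores (e.g. log-likelihood ratios, or GEM-based
    statistics) and raise an alarm once the running sum exceeds
    the threshold. Returns the alarm index, or None if no alarm."""
    g = 0.0
    for t, s in enumerate(scores):
        g = max(0.0, g + s)   # resetting at zero keeps the test adaptive
        if g >= threshold:
            return t
    return None

rng = np.random.default_rng(0)
# Scores have negative mean before the change at t = 200, positive after.
scores = np.r_[rng.normal(-0.5, 1.0, 200), rng.normal(0.5, 1.0, 200)]
alarm = cusum(scores, threshold=20.0)
```

Raising the threshold lowers the false alarm rate at the cost of a longer detection delay, which is the tradeoff the abstract refers to.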

    Generalized Methodology for Array Processor Design of Real-time Systems

    Many techniques and design tools have been developed for mapping algorithms to array processors. Linear mapping is usually used for regular algorithms. Large and complex problems are not regular by nature, and regularization may cause a computational overhead that prevents meeting real-time deadlines. In this paper, a systematic design methodology for mapping partially-regular as well as regular dependence graphs is presented. In this approach the set of all optimal solutions is generated under the given constraints. Due to the nature of the problem and the tight timing constraints of real-time systems, the set of alternative solutions is limited. An image processing example is discussed.

    DOPING: Generative Data Augmentation for Unsupervised Anomaly Detection with GAN

    Recently, the introduction of the generative adversarial network (GAN) and its variants has enabled the generation of realistic synthetic samples, which have been used to enlarge training sets. Previous work primarily focused on data augmentation for semi-supervised and supervised tasks. In this paper, we instead focus on unsupervised anomaly detection and propose a novel generative data augmentation framework optimized for this task. In particular, we propose to oversample infrequent normal samples, i.e., normal samples that occur with small probability (e.g., rare normal events). We show that these samples are responsible for false positives in anomaly detection. However, oversampling infrequent normal samples is challenging for real-world high-dimensional data with multimodal distributions. To address this challenge, we propose to use a GAN variant known as the adversarial autoencoder (AAE) to transform the high-dimensional multimodal data distributions into low-dimensional unimodal latent distributions with well-defined tail probability. We then systematically oversample at the `edge' of the latent distributions to increase the density of infrequent normal samples. We show that our oversampling pipeline is unified: it is generally applicable to datasets with different complex data distributions. To the best of our knowledge, our method is the first data augmentation technique focused on improving performance in unsupervised anomaly detection. We validate our method by demonstrating consistent improvements across several real-world datasets.
    Comment: Published as a conference paper at ICDM 2018 (IEEE International Conference on Data Mining)
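The edge-oversampling idea can be sketched without a GAN: given low-dimensional latent codes, replicate and jitter the points in the tail of the distribution. The quantile threshold and jitter scale below are illustrative choices, and the toy Gaussian codes stand in for AAE encodings, which this sketch does not implement:

```python
import numpy as np

def oversample_edge(latent, q=0.9, factor=5, noise=0.05, rng=None):
    """Oversample latent codes lying at the 'edge' (low-density tail)
    of a unimodal latent distribution: take points beyond the
    q-quantile of the distance to the latent mean, then replicate
    them with small jitter to densify the tail."""
    if rng is None:
        rng = np.random.default_rng(0)
    center = latent.mean(axis=0)
    dist = np.linalg.norm(latent - center, axis=1)
    edge = latent[dist > np.quantile(dist, q)]
    reps = np.repeat(edge, factor, axis=0)
    return reps + noise * rng.standard_normal(reps.shape)

# Toy unimodal latent codes standing in for AAE encodings.
z = np.random.default_rng(1).standard_normal((1000, 2))
aug = oversample_edge(z, q=0.9, factor=5)
```

Decoding the augmented latent points back through the (omitted) AAE decoder would yield the additional infrequent-normal training samples the abstract describes.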

    Edge and Line Feature Extraction Based on Covariance Models

    Image segmentation based on contour extraction usually involves three stages of image operations: feature extraction, edge detection, and edge linking. This paper is devoted to the first stage: a method to design feature extractors used to detect edges in noisy and/or blurred images. The method relies on a model that describes the existence of image discontinuities (e.g. edges) in terms of covariance functions. The feature extractor transforms the input image into a “log-likelihood ratio” image. Such an image is a good starting point for the edge detection stage, since it represents a balanced trade-off between signal-to-noise ratio and the ability to resolve detailed structures. For 1-D signals, the performance of the edge detector based on this feature extractor is quantitatively assessed by the so-called “average risk measure”, and the results are compared with the performance of 1-D edge detectors known from the literature. Generalizations to 2-D operators are given. Applications to real-world images are presented, showing the capability of the covariance model to build edge and line feature extractors. Finally, it is shown that the covariance model can be coupled to an MRF model of edge configurations so as to arrive at a maximum a posteriori estimate of the edges or lines in the image.
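The log-likelihood-ratio transform can be illustrated with two zero-mean Gaussian patch models. The toy covariances below are assumptions for illustration only; the paper derives its covariance functions from a model of image discontinuities:

```python
import numpy as np

def llr_image(patches, C0, C1):
    """Per-patch log-likelihood ratio of two zero-mean Gaussian
    models: edge model (covariance C1) versus no-edge model (C0).
    Large values indicate evidence for a discontinuity."""
    P0, P1 = np.linalg.inv(C0), np.linalg.inv(C1)
    _, ld0 = np.linalg.slogdet(C0)
    _, ld1 = np.linalg.slogdet(C1)
    quad = 0.5 * np.einsum('ni,ij,nj->n', patches, P0 - P1, patches)
    return quad + 0.5 * (ld0 - ld1)

rng = np.random.default_rng(0)
C0, C1 = np.eye(3), 4.0 * np.eye(3)   # toy covariances
edge_scores = llr_image(rng.multivariate_normal(np.zeros(3), C1, 500), C0, C1)
flat_scores = llr_image(rng.multivariate_normal(np.zeros(3), C0, 500), C0, C1)
```

Thresholding such a score image is the starting point for the edge detection stage mentioned in the abstract.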

    A new self-organizing neural gas model based on Bregman divergences

    In this paper, a new self-organizing neural gas model, which we call the Growing Hierarchical Bregman Neural Gas (GHBNG), is proposed. Our proposal is based on the Growing Hierarchical Neural Gas (GHNG), into which Bregman divergences are incorporated in order to compute the winning neuron. The model has been applied to anomaly detection in video sequences, together with a Faster R-CNN as an object detector module. Experimental results confirm not only the effectiveness of the GHBNG for the detection of anomalous objects in video sequences but also its self-organization capabilities.
    Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tec
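A Bregman divergence generalizes squared Euclidean distance through a convex generator phi: D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>. A minimal sketch of using one to pick the winning neuron, with phi(x) = 0.5·||x||² (which recovers squared Euclidean distance) as an illustrative choice rather than the paper's actual divergence:

```python
import numpy as np

def bregman(x, y, phi, grad_phi):
    """Bregman divergence D_phi(x, y) = phi(x) - phi(y) - <grad_phi(y), x - y>."""
    return phi(x) - phi(y) - grad_phi(y) @ (x - y)

# phi(x) = 0.5 * ||x||^2 recovers the squared Euclidean distance.
phi = lambda v: 0.5 * np.dot(v, v)
grad_phi = lambda v: v

# Winning neuron = prototype with the smallest divergence to the input.
prototypes = np.array([[0.0, 0.0], [3.0, 3.0]])
x = np.array([2.5, 2.9])
winner = int(np.argmin([bregman(x, w, phi, grad_phi) for w in prototypes]))
```

Swapping in a different generator (e.g. negative entropy, which yields the KL divergence) changes the geometry of the winner computation without changing the surrounding algorithm.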

    Gaussian Belief Propagation Based Multiuser Detection

    In this work, we present a novel construction for solving the linear multiuser detection problem using the Gaussian Belief Propagation algorithm. Our algorithm yields an efficient, iterative and distributed implementation of the MMSE detector. We compare our algorithm's performance to a recent result and show improved memory consumption, fewer computation steps and a reduction in the number of messages sent. We prove that recent work by Montanari et al. is an instance of our general algorithm, providing new convergence results for both algorithms.
    Comment: 6 pages, 1 figure; appeared in the 2008 IEEE International Symposium on Information Theory, Toronto, July 2008
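The linear MMSE detector that the GaBP construction computes has the closed form x̂ = (HᵀH + σ²I)⁻¹Hᵀy. The direct solve below, on toy dimensions, is only the reference solution that GaBP converges to; it does not implement the message passing itself:

```python
import numpy as np

def mmse_detect(H, y, sigma2):
    """Linear MMSE multiuser detector: solve
    (H^T H + sigma2 * I) x = H^T y for the transmitted symbols."""
    K = H.shape[1]
    return np.linalg.solve(H.T @ H + sigma2 * np.eye(K), H.T @ y)

rng = np.random.default_rng(0)
H = rng.standard_normal((8, 4))       # toy channel: 8 chips, 4 users
x_true = rng.choice([-1.0, 1.0], 4)   # BPSK symbols
y = H @ x_true + 0.1 * rng.standard_normal(8)
x_hat = np.sign(mmse_detect(H, y, sigma2=0.01))
```

The paper's contribution is computing this same solution iteratively and distributedly via Gaussian Belief Propagation, avoiding the explicit matrix inverse.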