
    Two dimensional generalized edge detector

    In this study, the λτ-space image representation and edge detector previously developed by Gökmen and Jain (1997) are extended to two-dimensional space. This extension is important for two reasons. First, the behavior of images in λτ-space is best modeled with two-dimensional smoothing and edge detection filters. Second, since the Generalized Edge Detector (GED) can produce many of the well-known, successful edge detectors, the two-dimensional forms of these filters can be constructed with the two-dimensional GED. The smoothing problem is defined as the minimization of a two-dimensional hybrid energy functional formed from a linear combination of the membrane and thin-plate models. Gökmen and Jain (1997) solved the equation minimizing the hybrid functional as a one-dimensional partial differential equation under the assumption that it is separable. However, the existing separable solution does not satisfy the original two-dimensional equation. In this study, the solution in two-dimensional space of the system of equations minimizing the hybrid functional is presented. When the derived filters are compared with the previous filters in terms of their type I and type II error characteristics, they are observed to be less sensitive to noise. The performance of the edge detector and its behavior in λτ-space are demonstrated through experimental results on real and synthetic images. There are many parameters to tune when working with edge detectors, and there is no universally valid method for finding the best parameter set for a given image. Indeed, the parameters of an edge detector optimized for one image will not be optimal for another. In this study, the optimal GED parameters are determined from the receiver operating characteristic curve computed for a given image. The aim here is to show the limits of GED's performance.
    Keywords: Edge detection, regularization theory, scale-space representation, surface reconstruction.

    The aim of edge detection is to provide a meaningful description of object boundaries in a scene from the intensity surface. These boundaries are due to discontinuities manifesting themselves as sharp variations in image intensities. There are different sources of sharp changes in images, created either by structure (e.g. texture, occlusion) or by illumination (e.g. shadows, highlights). Extracting edges from a still image is certainly the most significant stage of any computer vision algorithm requiring high localization accuracy in the presence of noise. The performance of many contour-based vision algorithms, such as shape-based query, curve-based stereo vision, and edge-based target recognition, is highly dependent on the quality of the detected edges. Therefore, edge detection is an important area of research in computer vision. Despite considerable work and progress on this subject, edge detection remains a challenging research problem due to the lack of a robust and efficient general-purpose algorithm. Most of the effort in edge detection has been devoted to the development of an optimum edge detector which can resolve the tradeoff between good localization and detection performance. Furthermore, extracting edges at different scales and combining these edges have attracted substantial interest. In the course of developing optimum edge detectors that resolve the tradeoff between localization and detection performance, several different approaches have resulted in either a Gaussian filter or a filter whose shape is very similar to a Gaussian. These filters are also very suitable for scale-space edge detection, since the scale of the filter can be easily controlled by means of a single parameter.
    For instance, in classical scale-space the kernel is a Gaussian, and the scale-space representation is obtained either by convolving the image with a Gaussian of increasing standard deviation or, equivalently, by solving the linear heat equation in time. This representation is causal, since the isotropic heat equation satisfies a maximum principle. However, Gaussian scale-space suffers from serious drawbacks at large scales, such as over-smoothing and location uncertainty along edges due to interactions between nearby edges and displacements. Although these filters are widely used, it is difficult to claim that they provide the desired output for every problem. For instance, in some cases improved localization performance is the primary requirement; a sub-optimum filter which favors localization then becomes more appropriate. It has been shown that the first-order R-filter can deliver improved results on checkerboard and bar images, as well as on some real images, for moderate values of signal-to-noise ratio (SNR). In many vision applications there is great demand for a general-purpose edge detector that can produce edge maps with very different characteristics, so that one of these edge maps may meet the requirements of the problem under consideration. Gökmen and Jain (1997) obtained an edge detector called the Generalized Edge Detector (GED), capable of producing most of the existing edge detectors. The original problem was formulated on a two-dimensional hybrid model comprised of a linear combination of the membrane and thin-plate functionals. The smoothing problem was then reduced to the solution of a two-dimensional partial differential equation (PDE). The filters were obtained for the one-dimensional case by assuming a separable solution.
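The classical Gaussian scale-space construction described above can be sketched in a few lines. The following is a minimal toy illustration (our own example, not from the paper): smoothing with Gaussians of increasing standard deviation builds the scale-space stack, and the peak edge response weakens at coarser scales, which is the localization/detection tradeoff the text refers to.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Toy test image: a vertical step edge plus noise.
rng = np.random.default_rng(0)
image = np.zeros((64, 64))
image[:, 32:] = 1.0
image += 0.1 * rng.standard_normal(image.shape)

# Scale-space stack: convolving with a Gaussian of standard deviation
# sigma is equivalent to running the linear heat equation u_t = Δu
# up to time t = sigma**2 / 2.
sigmas = [1.0, 2.0, 4.0, 8.0]
stack = np.stack([gaussian_filter(image, s) for s in sigmas])

# At coarser scales the edge is over-smoothed: the peak gradient
# magnitude across the step drops as sigma grows.
grads = [np.abs(np.gradient(level, axis=1)).max() for level in stack]
print(grads)
```

The decreasing gradient peaks illustrate why large-scale Gaussian smoothing trades detection robustness for localization accuracy.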
    This study extends edge detection of images in λτ-space to two-dimensional space. The two-dimensional extension of the representation is important, since the properties of images in this space are best modeled by two-dimensional smoothing and edge detection filters. Also, since the GED filters encompass most of the well-known edge detectors, two-dimensional versions of these filters can be obtained. The derived filters are more robust to noise than the previous one-dimensional scheme in terms of miss and false-alarm characteristics. There are several parameters to tune when dealing with edge detectors, and usually there is no easy way to find the optimal parameters for an image. In fact, a set of parameters that is optimal for one image may not be optimal for another. In this study, we find optimal GED parameters for an image whose ideal edges are available, using an exhaustive search over receiver operating characteristic (ROC) curves, in order to show the performance limits of GED. Keywords: Edge detection, regularization theory, scale-space representation, surface reconstruction.
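The ROC-based exhaustive parameter search described in the abstract can be illustrated with a toy detector. Everything below (the synthetic image, the Gaussian-plus-Sobel detector, and the parameter grid) is our own illustrative stand-in, not the GED filters from the paper; it only shows the search strategy: score each parameter setting against the known ideal edge map and keep the one closest to the ideal ROC corner (TPR = 1, FPR = 0).

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

# Synthetic image with a known ("ideal") edge map, standing in for
# the ground truth the paper assumes is available.
image = np.zeros((64, 64))
image[:, 32:] = 1.0
ideal = np.zeros((64, 64), dtype=bool)
ideal[:, 31:33] = True

rng = np.random.default_rng(1)
noisy = image + 0.2 * rng.standard_normal(image.shape)

def edge_map(img, sigma, threshold):
    """Toy detector: Gaussian smoothing + Sobel gradient + threshold.
    The two parameters mimic the scale/threshold knobs of a real GED."""
    smoothed = gaussian_filter(img, sigma)
    mag = np.hypot(sobel(smoothed, axis=0), sobel(smoothed, axis=1))
    return mag > threshold

def roc_point(detected, ideal):
    tp = np.sum(detected & ideal)
    fp = np.sum(detected & ~ideal)
    return tp / ideal.sum(), fp / (~ideal).sum()

def score(params):
    tpr, fpr = roc_point(edge_map(noisy, *params), ideal)
    return (1 - tpr) ** 2 + fpr ** 2  # distance to the ideal ROC corner

# Exhaustive search over the (small) parameter grid.
grid = [(s, t) for s in (0.5, 1.0, 2.0) for t in (0.5, 1.0, 2.0)]
best = min(grid, key=score)
print("best (sigma, threshold):", best)
```

A real GED search would sweep the regularization parameters of the hybrid functional instead, but the scoring loop is the same.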

    Deformable kernels for early vision

    Early vision algorithms often have a first stage of linear filtering that 'extracts' from the image information at multiple scales of resolution and multiple orientations. A common difficulty in the design and implementation of such schemes is that one feels compelled to discretize the space of scales and orientations coarsely in order to reduce computation and storage costs. This discretization produces anisotropies due to a loss of translation-, rotation-, and scaling-invariance that makes early vision algorithms less precise and more difficult to design. This need not be so: one can compute and store efficiently the response of families of linear filters defined on a continuum of orientations and scales. A technique is presented that allows one (1) to compute the best approximation of a given family using linear combinations of a small number of 'basis' functions; and (2) to describe all finite-dimensional families, i.e. the families of filters for which a finite-dimensional representation is possible with no error. The technique is general and can be applied to generating filters in arbitrary dimensions. Experimental results are presented that demonstrate the applicability of the technique to generating multi-orientation multi-scale 2D edge-detection kernels. The implementation issues are also discussed.
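The core idea, approximating a continuum of oriented filters by linear combinations of a few basis kernels, can be illustrated with first-derivative-of-Gaussian filters. The sketch below (our own sizes and variable names, not the paper's construction) samples the family densely over orientation and uses an SVD to show both claims: the family is exactly finite-dimensional (rank 2 here), and any orientation is reconstructed without error from two basis kernels.

```python
import numpy as np

# A dense family of oriented first-derivative-of-Gaussian kernels.
size, sigma = 15, 2.0
ax = np.arange(size) - size // 2
X, Y = np.meshgrid(ax, ax)

def oriented_kernel(theta):
    # Derivative of a Gaussian along direction theta.
    u = X * np.cos(theta) + Y * np.sin(theta)
    g = np.exp(-(X**2 + Y**2) / (2 * sigma**2))
    return -u / sigma**2 * g

thetas = np.linspace(0, np.pi, 64, endpoint=False)
family = np.stack([oriented_kernel(t).ravel() for t in thetas])

# The singular values reveal the intrinsic dimensionality: this
# family is a linear combination of only two fixed functions
# (the x- and y-derivatives of the Gaussian).
s = np.linalg.svd(family, compute_uv=False)
rank = int(np.sum(s > 1e-10 * s[0]))
print("numerical rank:", rank)

# Two basis kernels reconstruct every orientation exactly.
U, S, Vt = np.linalg.svd(family, full_matrices=False)
basis = Vt[:2]                 # two 'basis' kernels
coeffs = family @ basis.T      # steering coefficients per angle
recon = coeffs @ basis
err = np.max(np.abs(recon - family))
print("max reconstruction error:", err)
```

For families that are not exactly finite-dimensional (e.g. over scale as well as orientation), the same SVD truncation gives the best low-rank approximation in the least-squares sense.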

    A robust high-sensitivity algorithm for automated detection of proteins in two-dimensional electrophoresis gels

    The automated interpretation of two-dimensional gel electrophoresis images used in protein separation and analysis presents a formidable problem in the detection and characterization of ill-defined spatial objects. We describe in this paper a hierarchical algorithm that provides a robust, high-sensitivity solution to this problem, which can be easily adapted to a variety of experimental situations. The software implementation of this algorithm functions as part of a complete package designed for general protein gel analysis applications.
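Protein spots in gel images are, at their core, a blob-detection problem. As a generic illustration only (not the paper's hierarchical algorithm), a Laplacian-of-Gaussian detector can locate Gaussian-shaped spots in a synthetic "gel"; the spot positions, sizes, and thresholds below are all our own toy choices.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, minimum_filter

# Synthetic "gel": two Gaussian spots on a noisy background.
rng = np.random.default_rng(2)
img = 0.05 * rng.standard_normal((100, 100))
yy, xx = np.mgrid[0:100, 0:100]
for cy, cx in [(30, 40), (70, 60)]:
    img += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 3.0**2))

# Bright spots on a dark background give strong negative LoG
# responses; keep strong local minima only.
response = gaussian_laplace(img, sigma=3.0)
minima = (response == minimum_filter(response, size=9)) & (response < -0.02)
coords = np.argwhere(minima)
print(coords)
```

A hierarchical detector such as the one described would refine such candidate detections across scales and fit spot models, which matters for the overlapping, ill-defined spots found in real gels.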

    How to Solve Classification and Regression Problems on High-Dimensional Data with a Supervised Extension of Slow Feature Analysis

    Supervised learning from high-dimensional data, e.g., multimedia data, is a challenging task. We propose an extension of slow feature analysis (SFA) for supervised dimensionality reduction called graph-based SFA (GSFA). The algorithm extracts a label-predictive low-dimensional set of features that can be post-processed by typical supervised algorithms to generate the final label or class estimation. GSFA is trained with a so-called training graph, in which the vertices are the samples and the edges represent similarities of the corresponding labels. A new weighted SFA optimization problem is introduced, generalizing the notion of slowness from sequences of samples to such training graphs. We show that GSFA computes an optimal solution to this problem in the considered function space, and propose several types of training graphs. For classification, the most straightforward graph yields features equivalent to those of (nonlinear) Fisher discriminant analysis. Emphasis is on regression, where four different graphs were evaluated experimentally on a subproblem of face detection in photographs. The proposed method is particularly promising when linear models are insufficient, as well as when feature selection is difficult.
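The graph-based slowness objective can be illustrated in the linear case, where it reduces to a generalized eigenvalue problem: minimize the (weighted) squared output differences across graph edges subject to a unit-variance constraint. The toy regression graph below is our own minimal construction under simplifying assumptions (uniform edge weights, a linear feature space), not the paper's full GSFA training procedure.

```python
import numpy as np
from scipy.linalg import eigh

# Toy regression data: 5-D samples whose first coordinate carries
# the label; the remaining coordinates are noise.
rng = np.random.default_rng(3)
labels = rng.uniform(0, 1, 200)
X = np.column_stack([labels, rng.standard_normal((200, 4))])
X -= X.mean(axis=0)

# Training graph: connect samples whose labels are close
# (unit edge weights), mimicking a regression training graph.
i, j = np.triu_indices(200, k=1)
mask = np.abs(labels[i] - labels[j]) < 0.05
i, j = i[mask], j[mask]

D = X[i] - X[j]
A = D.T @ D / len(i)   # slowness: covariance of differences over edges
B = X.T @ X / len(X)   # unit-variance constraint matrix

# The slowest feature is the generalized eigenvector of (A, B)
# with the smallest eigenvalue.
w = eigh(A, B)[1][:, 0]
feature = X @ w

# The extracted feature should track the label closely.
corr = abs(np.corrcoef(feature, labels)[0, 1])
print("correlation with label:", corr)
```

In the nonlinear setting the same eigenproblem is solved in an expanded feature space, which is where GSFA's advantage over purely linear methods appears.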