
    Comparative Analysis of common Edge Detection Algorithms using Pre-processing Technique

    Edge detection is the process of segmenting an image by detecting discontinuities in brightness. So far, several standard segmentation methods have been widely used for edge detection. However, due to the inherent quality of images, these methods prove ineffective if they are applied without any preprocessing. In this paper, an image preprocessing approach is adopted in order to obtain parameters that help the standard edge detection methods perform better. The proposed preprocessing approach applies median filtering to reduce noise in the image, after which edge detection is carried out. The standard edge detection methods are then applied to the preprocessed image. Simulation results show that our preprocessing approach, when used with a standard edge detection method, enhances its performance
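The pipeline this abstract describes (median filtering, then a standard edge operator) can be sketched in a few lines. This is a minimal pure-Python illustration, not the paper's implementation; the 3x3 window, the Sobel operator, and the threshold value are assumptions:

```python
def median_filter(img, k=3):
    """k x k median filter; border pixels are copied unchanged."""
    h, w = len(img), len(img[0])
    r = k // 2
    out = [row[:] for row in img]
    for y in range(r, h - r):
        for x in range(r, w - r):
            window = sorted(img[yy][xx]
                            for yy in range(y - r, y + r + 1)
                            for xx in range(x - r, x + r + 1))
            out[y][x] = window[len(window) // 2]
    return out

def sobel_edges(img, thresh=100):
    """Sobel gradient magnitude, thresholded to a binary edge map."""
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(gx_k[i][j] * img[y - 1 + i][x - 1 + j]
                     for i in range(3) for j in range(3))
            gy = sum(gy_k[i][j] * img[y - 1 + i][x - 1 + j]
                     for i in range(3) for j in range(3))
            if (gx * gx + gy * gy) ** 0.5 >= thresh:
                edges[y][x] = 1
    return edges

# Noisy step edge: dark left half, bright right half, one salt pixel.
img = [[0] * 4 + [255] * 4 for _ in range(8)]
img[3][1] = 255                     # impulse noise
denoised = median_filter(img)
edges = sobel_edges(denoised)
```

Without the median step, the impulse pixel would itself produce spurious edge responses; with it, only the true step boundary survives thresholding.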

    A comparative study of edge detection techniques

    The problem of detecting edges in gray-level digital images is considered. A literature survey of the existing methods is presented. Based on the survey, two methods that are well accepted by a majority of investigators are identified: 1) the Laplacian of Gaussian (LoG) operator, and 2) an optimal detector based on maxima in the gradient magnitude of a Gaussian-smoothed image. The latter was proposed by Canny and will be referred to as Canny's method. The purpose of the thesis is to compare the performance of these popular methods. To broaden the scope of the comparison, two additional methods are considered. The first is one of the simplest methods, based on a first-order approximation of the first derivative of the image; it has the advantage of a relatively low computational cost. The second is an attempt to develop an edge-fitting method based on eigenvector least-squared-error fitting of an intensity profile, developed with the intent of keeping edge localization errors small. All four methods are coded and applied to several digital images, both actual and synthesized. Results show that the LoG method and Canny's method perform quite well in general, which accounts for the popularity of these methods. On the other hand, even the simplest first-derivative method is found to perform well if applied properly. Based on the results of the comparative study, several critical issues related to edge detection are pointed out. Results also indicate the feasibility of the proposed method based on the eigenvector fit. Improvements and recommendations for further work are made
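The simplest baseline in the thesis, a first-order approximation of the image's first derivative, amounts to forward differencing followed by a magnitude threshold. A minimal sketch (the threshold value is an assumption, not taken from the thesis):

```python
def first_derivative_edges(img, thresh=50):
    """Binary edge map from forward-difference gradient magnitude."""
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            gx = img[y][x + 1] - img[y][x]   # horizontal difference
            gy = img[y + 1][x] - img[y][x]   # vertical difference
            if (gx * gx + gy * gy) ** 0.5 >= thresh:
                edges[y][x] = 1
    return edges

# Vertical step edge between columns 2 and 3.
img = [[0, 0, 0, 200, 200, 200] for _ in range(5)]
edges = first_derivative_edges(img)
```

The low computational cost is evident: one subtraction per direction per pixel, versus the convolution with a Gaussian-derived kernel that LoG and Canny's method require.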

    Gravitation-Based Edge Detection in Hyperspectral Images

    Edge detection is one of the key issues in the field of computer vision and remote sensing image analysis. Although many different edge-detection methods have been proposed for gray-scale, color, and multispectral images, they still face difficulties when extracting edge features from hyperspectral images (HSIs), which contain a large number of bands with very narrow gaps in the spectral domain. Inspired by the clustering characteristic of gravitational theory, a novel edge-detection algorithm for HSIs is presented in this paper. In the proposed method, we first construct a joint feature space by combining the spatial and spectral features. Each pixel of the HSI is treated as a celestial object in the joint feature space, which exerts a gravitational force on each of its neighboring pixels. Accordingly, each object travels through the joint feature space until it reaches a stable equilibrium. At equilibrium, the image is smoothed and the edges are enhanced, so that the edge pixels can be easily distinguished by calculating the gravitational potential energy. The proposed edge-detection method is tested on several benchmark HSIs, and the obtained results are compared with those of four state-of-the-art approaches. The experimental results confirm the efficacy of the proposed method
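The gravitational intuition can be sketched in a simplified single-step form: each pixel's feature vector is pulled toward those of its 8-neighbors, with an inverse-square weighting. The constant `g`, the unit masses, the 3x3 neighborhood, and the single update step are all illustrative assumptions rather than the paper's formulation:

```python
import numpy as np

def gravity_step(features, g=0.1):
    """One smoothing step over an (h, w, d) feature image: each pixel
    moves toward its 8-neighbors with inverse-square weighting."""
    h, w, d = features.shape
    out = features.copy()
    for y in range(h):
        for x in range(w):
            force = np.zeros(d)
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if (dy, dx) == (0, 0) or not (0 <= ny < h and 0 <= nx < w):
                        continue
                    diff = features[ny, nx] - features[y, x]
                    r2 = np.dot(diff, diff) + 1e-6   # avoid division by zero
                    force += g * diff / r2           # pull along diff, ~1/r
            out[y, x] = features[y, x] + force
    return out

# One bright outlier in an otherwise flat single-band region.
features = np.zeros((3, 3, 1))
features[1, 1, 0] = 10.0
smoothed = gravity_step(features)
```

Iterating such a step draws similar pixels together in the joint space (smoothing homogeneous regions), while pixels straddling spectral discontinuities remain under tension, which is what the potential-energy criterion then exploits to mark edges.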

    Deep Learning-Based Point Upsampling for Edge Enhancement of 3D-Scanned Data and Its Application to Transparent Visualization

    Large-scale 3D-scanned point clouds enable the accurate and easy recording of complex 3D objects in the real world. The acquired point clouds often describe both the surficial and internal 3D structure of the scanned objects. The recently proposed edge-highlighted transparent visualization method is effective for recognizing the whole 3D structure of such point clouds. This visualization adjusts the degree of opacity to highlight the edges of the 3D-scanned objects, realizing clear transparent viewing of entire 3D structures. However, for 3D-scanned point clouds, the quality of any edge-highlighting visualization depends on the distribution of the extracted edge points. Insufficient density, sparseness, or partial defects in the edge points can lead to unclear edge visualization. Therefore, in this paper, we propose a deep learning-based upsampling method focusing on the edge regions of 3D-scanned point clouds to generate more edge points during the 3D-edge upsampling task. The proposed upsampling network dramatically improves the point-distributional density, uniformity, and connectivity in the edge regions. The results on synthetic and scanned edge data show that our method improves the percentage of edge points by more than 15% compared to the existing point cloud upsampling network. Our upsampling network works well for both sharp and soft edges, and combined use with a noise-eliminating filter also works well. We demonstrate the effectiveness of our upsampling network by applying it to various real 3D-scanned point clouds, and we show that the improved edge point distribution improves the visibility of the edge-highlighted transparent visualization of complex 3D-scanned objects
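The learned upsampler itself is not reproduced here, but the notion of an "edge point" in a scanned cloud can be illustrated with a classical measure: surface variation from local PCA, which is near zero on flat surfaces and larger at creases. The neighborhood size `k` and the synthetic two-plane wedge below are assumptions for illustration only:

```python
import numpy as np

def surface_variation(points, idx, k=9):
    """lambda_min / (lambda_1 + lambda_2 + lambda_3) of the k-nearest-
    neighbor covariance: ~0 on flat patches, larger at creases/edges."""
    d = np.linalg.norm(points - points[idx], axis=1)
    nbrs = points[np.argsort(d)[:k]]
    w = np.sort(np.linalg.eigvalsh(np.cov(nbrs.T)))
    return w[0] / w.sum()

# Synthetic wedge: a horizontal plane meeting a vertical plane at x = 1.
xs = np.linspace(0.0, 1.0, 11)
plane_a = np.array([[x, y, 0.0] for x in xs for y in xs])
plane_b = np.array([[1.0, y, z] for z in xs[1:] for y in xs])
pts = np.vstack([plane_a, plane_b])

flat_idx = int(np.argmin(np.linalg.norm(pts - [0.5, 0.5, 0.0], axis=1)))
crease_idx = int(np.argmin(np.linalg.norm(pts - [1.0, 0.5, 0.0], axis=1)))
flat_var = surface_variation(pts, flat_idx)
crease_var = surface_variation(pts, crease_idx)
```

Points whose variation exceeds a threshold form the edge set whose density and connectivity the paper's network is designed to improve.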

    Best Photo Selection

    The rise of digital photography underlies a clear change in the paradigm of the photography management process for amateur photographers. Nowadays, taking one more photo comes for free, so it is usual for amateurs to take several photos of the same subject in the hope that one of them will match the photographer's quality standards, namely in terms of illumination, focus and framing. Assuming that the framing issue is easily solved by cropping the photo, there is still the need to select which of the well-framed photos, technically similar in terms of illumination and focus, are going to be kept (and, in opposition, which photos are going to be discarded). The process of visual observation on a computer screen in order to select the best photo is inaccurate, and thus generates feelings of insecurity that may lead to no photos being discarded at all. In this work, we propose to address the issue of how to help the amateur photographer select the best photo from a set of similar photos by analysing them in technical terms. The result is a novel workflow supported by a software package, guided by user input, which allows the sorting of the similar photos according to their technical characteristics (illumination and focus) and the user's requirements. As a result, we expect the process of choosing the best photo, and discarding the remaining ones, to become reliable and more comfortable
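The two technical characteristics the workflow sorts by, focus and illumination, can be scored with simple proxies. These particular metrics (variance of a Laplacian for sharpness, closeness of mean brightness to mid-gray for illumination) are our illustrative assumptions, not the authors' implementation:

```python
def focus_score(img):
    """Variance of a 4-neighbour Laplacian: higher means sharper."""
    h, w = len(img), len(img[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            vals.append(img[y - 1][x] + img[y + 1][x] +
                        img[y][x - 1] + img[y][x + 1] - 4 * img[y][x])
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def illumination_score(img):
    """Closeness of mean brightness to mid-gray (0 = worst, 1 = best)."""
    flat = [p for row in img for p in row]
    mean = sum(flat) / len(flat)
    return 1.0 - abs(mean - 127.5) / 127.5

# Two similar shots: a high-contrast (sharp) one and a featureless one.
sharp = [[255 if (x + y) % 2 else 0 for x in range(5)] for y in range(5)]
soft = [[128] * 5 for _ in range(5)]
photos = {"sharp": sharp, "soft": soft}
ranked = sorted(photos, key=lambda k: focus_score(photos[k]), reverse=True)
```

Ranking similar photos by such scores, weighted by the user's stated preferences, is the kind of sorting the proposed workflow performs.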

    Edge Detection by Cost Minimization

    Edge detection is cast as a problem in cost minimization. This is achieved by formulating two cost functions that evaluate the quality of edge configurations. The first is a comparative cost function (CCF), a linear sum of weighted cost factors. It is heuristic in nature and can be applied only to pairs of similar edge configurations, measuring the relative quality between the configurations. The detection of edges is accomplished by a heuristic iterative search algorithm which uses the CCF to evaluate edge quality. The second cost function is the absolute cost function (ACF), also a linear sum of weighted cost factors. The cost factors capture desirable characteristics of edges such as accuracy in localization, thinness, and continuity. Edges are detected by finding the edge configurations that minimize the ACF. We have analyzed the function in terms of the characteristics of the edges in minimum-cost configurations. These characteristics depend directly on the weight associated with each cost factor. Through the analysis of the ACF, we provide guidelines on the choice of weights to achieve certain characteristics of the detected edges. The ACF is minimized using simulated annealing. We have developed a set of strategies for generating next states for the annealing process; this method of generating next states allows the annealing process to be executed largely in parallel. Experimental results are shown which verify the usefulness of the CCF and ACF techniques for edge detection. In comparison, the ACF technique produces better edges than the CCF or other current detection techniques
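A miniature version of the ACF idea can be written down directly: a weighted linear sum of cost factors, minimized by simulated annealing over binary edge configurations. The two cost factors here (fit to a normalized gradient profile and a thinness penalty), their weights, and the 1D setting are illustrative assumptions, not the paper's full formulation:

```python
import math
import random

def acf(config, grad, w_fit=1.0, w_thin=0.5):
    """Cost = missed high-gradient pixels plus spurious edge labels
    (fit term), plus a penalty for each adjacent pair of edge pixels."""
    cost = 0.0
    for i, (e, g) in enumerate(zip(config, grad)):
        cost += w_fit * ((1.0 - g) if e else g)       # fit to gradient
        if e and i > 0 and config[i - 1]:
            cost += w_thin                            # thick-edge penalty
    return cost

def anneal(grad, steps=2000, t0=1.0, seed=0):
    """Minimize the ACF by single-pixel flips with geometric cooling."""
    rng = random.Random(seed)
    config = [0] * len(grad)
    cost = acf(config, grad)
    for s in range(steps):
        t = t0 * (0.995 ** s)                         # cooling schedule
        cand = config[:]
        cand[rng.randrange(len(grad))] ^= 1           # flip one pixel
        c = acf(cand, grad)
        if c < cost or rng.random() < math.exp((cost - c) / max(t, 1e-9)):
            config, cost = cand, c
    return config

# Normalized gradient profile with one strong edge at position 3.
grad = [0.05, 0.1, 0.2, 0.9, 0.15, 0.05]
edges = anneal(grad)
```

Because each proposed state differs by one pixel flip, many such proposals touch disjoint pixels and can be evaluated concurrently, which is the property the paper exploits for parallel execution.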

    Mapping Complex Urban Land Cover from Spaceborne Imagery: The Influence of Spatial Resolution, Spectral Band Set and Classification Approach

    Detailed land cover information is valuable for mapping complex urban environments. Recent enhancements to satellite sensor technology promise fit-for-purpose data, particularly when processed using contemporary classification approaches. We evaluate this promise by comparing the influence of spatial resolution, spectral band set and classification approach for mapping detailed urban land cover in Nottingham, UK. A WorldView-2 image provides the basis for a set of 12 images with varying spatial and spectral characteristics, and these are classified using three different approaches (maximum likelihood (ML), support vector machine (SVM) and object-based image analysis (OBIA)) to yield 36 output land cover maps. Classification accuracy is evaluated independently, and McNemar tests are conducted between all paired outputs (630 pairs in total) to determine which classifications are significantly different. Overall accuracy varied between 35% for ML classification of 30 m spatial resolution, 4-band imagery and 91% for OBIA classification of 2 m spatial resolution, 8-band imagery. The results demonstrate that spatial resolution is clearly the most influential factor when mapping complex urban environments, and modern “very high resolution” (VHR) sensors offer a great advantage here. However, the advanced spectral capabilities provided by some recent sensors, coupled with contemporary classification approaches (especially SVMs and OBIA), can also lead to significant gains in mapping accuracy. Ongoing developments in instrumentation and methodology offer huge potential here and imply that urban mapping opportunities will continue to grow
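The pairwise comparison the study runs 630 times is McNemar's test, which looks only at the discordant pixels (those one classifier labels correctly and the other does not). A self-contained sketch using the standard chi-square form with continuity correction; the reference labels and predictions below are fabricated toy data:

```python
def mcnemar(labels, pred_a, pred_b):
    """McNemar chi-square statistic (with continuity correction) over
    the discordant pairs of two classifiers' paired predictions."""
    b = sum(1 for y, pa, pb in zip(labels, pred_a, pred_b)
            if pa == y and pb != y)     # A right, B wrong
    c = sum(1 for y, pa, pb in zip(labels, pred_a, pred_b)
            if pa != y and pb == y)     # A wrong, B right
    if b + c == 0:
        return 0.0
    return (abs(b - c) - 1) ** 2 / (b + c)

# Hypothetical paired outputs over 20 validation pixels.
labels = [1] * 20
pred_a = [1] * 18 + [0] * 2        # classifier A: 18/20 correct
pred_b = [1] * 6 + [0] * 14        # classifier B: 6/20 correct
chi2 = mcnemar(labels, pred_a, pred_b)
significant = chi2 > 3.841          # 5% level, 1 degree of freedom
```

Because the test conditions on the same validation pixels, it is more appropriate than comparing two overall accuracy figures computed independently.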

    3D Reconstruction of Building Rooftop and Power Line Models in Right-of-Ways Using Airborne LiDAR Data

    The research objectives to be achieved through the thesis are to develop methods for reconstructing models of building and power line (PL) objects of interest in the PL corridor area from airborne LiDAR data. The work is mainly concerned with the model selection problem of deciding which model most optimally represents the given data set. This means that the parametric relations and the geometry of object shapes are unknown and are determined optimally by the verification of hypothetical models. The proposed method therefore achieves high adaptability to the complex geometric forms of building and PL objects. For building modeling, a method of implicit geometric regularization is proposed to rectify building outline vectors corrupted by noisy data. A cost function for the regularization process is designed based on Minimum Description Length (MDL) theory, which favours smaller deviations between a model and the observations as well as orthogonal and parallel properties between polylines. Next, a new approach, called Piecewise Model Growing (PMG), is proposed for 3D PL model reconstruction using a catenary curve model. It grows piecewise to capture all PL points of interest and thus produces a full 3D PL model. However, the proposed method is limited by the complexity of the PL scene, which causes PL modeling errors such as partial-, under- and over-modeling errors. To correct the incompleteness of the PL models, inner-span and across-span analyses are carried out, which lead to erroneous PL segments being replaced by precise PL models. The inner-span analysis is performed based on MDL theory to correct under- and over-modeling errors. The across-span analysis is subsequently carried out to correct partial-modeling errors by finding the start and end positions of PLs, which denote the points of attachment (POA). As a result, this thesis addresses not only geometrically describing building and PL objects but also dealing with the noisy data that causes the incompleteness of models.
In practical terms, the results of building and PL modeling should make it possible to effectively analyze a PL scene and quickly mitigate potentially hazardous scenarios jeopardizing the PL system
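The catenary curve underlying the PMG approach has the form z = c + a·cosh((x - x0)/a) along a span. As a simplified illustration of fitting it to LiDAR samples, the sketch below grid-searches the curvature parameter `a` and approximates the vertex `x0` by the lowest sample point; the parameter range and this vertex heuristic are assumptions, not the thesis's estimation procedure:

```python
import numpy as np

def fit_catenary(x, z, a_grid):
    """Grid-search fit of z = c + a*cosh((x - x0)/a) to sampled points,
    with the vertex x0 taken as the lowest sample. Returns the best
    (residual sum of squares, a, c)."""
    x0 = x[np.argmin(z)]
    best = None
    for a in a_grid:
        model = a * np.cosh((x - x0) / a)
        c = np.mean(z - model)                 # offset by least squares
        rss = float(np.sum((z - (model + c)) ** 2))
        if best is None or rss < best[0]:
            best = (rss, float(a), float(c))
    return best

# Synthetic span: a = 50, vertex at x = 0, vertical offset c = -50.
x = np.linspace(-20.0, 20.0, 41)
z = 50.0 * np.cosh(x / 50.0) - 50.0
rss, a_hat, c_hat = fit_catenary(x, z, np.arange(10.0, 101.0, 1.0))
```

In the piecewise-growing setting, a fit like this would be re-evaluated as each new segment of PL points is absorbed, with the MDL criterion arbitrating between extending the current catenary and starting a new one.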