116 research outputs found

    Parallelizing Algorithm For Object Recognition Based On Edge Detection On Sunfire Cluster.

    Object recognition is one of the essential parts of image processing. It has become a major area for technologies such as biometrics, image recognition, authentication and the accessibility of major security systems.

    Neuro-inspired edge feature fusion using Choquet integrals

    It is known that the human visual system performs a hierarchical information process in which early vision cues (or primitives) are fused in the visual cortex to compose complex shapes and descriptors. While different aspects of the process have been extensively studied, such as lens adaptation or feature detection, other aspects, such as feature fusion, have been mostly left aside. In this work, we elaborate on the fusion of early vision primitives using generalizations of the Choquet integral and novel aggregation operators that have been extensively studied in recent years. We propose to use generalizations of the Choquet integral to sensibly fuse elementary edge cues, in an attempt to model the behaviour of neurons in the early visual cortex. Our proposal leads to a fully-framed edge detection algorithm whose performance is put to the test on state-of-the-art edge detection datasets. The authors gratefully acknowledge the financial support of the Spanish Ministry of Science and Technology (project PID2019-108392GB-I00, AEI/10.13039/501100011033), the Research Services of Universidad Pública de Navarra, CNPq (307781/2016-0, 301618/2019-4), FAPERGS (19/2551-0001660) and PNPD/CAPES (464880/2019-00).
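The discrete Choquet integral that the abstract builds on can be sketched in a few lines. The cardinality-based (symmetric) fuzzy measure below is an illustrative assumption only; the paper's learned measures and generalized integrals are not reproduced here.

```python
# Minimal sketch of a discrete Choquet integral for fusing edge cues.
# The fuzzy measure used below is a simple symmetric one, chosen for
# illustration; it is NOT the measure used in the paper.

def choquet(values, mu):
    """Discrete Choquet integral of `values` w.r.t. fuzzy measure `mu`.

    `mu` maps a frozenset of criterion indices to a weight in [0, 1],
    with mu(empty set) = 0 and mu(all criteria) = 1.
    """
    n = len(values)
    # Sort criteria by descending value.
    order = sorted(range(n), key=lambda i: values[i], reverse=True)
    total = 0.0
    for k, i in enumerate(order):
        x_k = values[i]
        x_next = values[order[k + 1]] if k + 1 < n else 0.0
        coalition = frozenset(order[: k + 1])
        total += (x_k - x_next) * mu[coalition]
    return total

# Symmetric measure mu(A) = (|A| / n) ** 0.5 over three hypothetical
# edge cues (e.g. gradient responses from three orientations).
n = 3
subsets = [(), (0,), (1,), (2,), (0, 1), (0, 2), (1, 2), (0, 1, 2)]
mu = {frozenset(s): (len(s) / n) ** 0.5 for s in subsets}

fused = choquet([0.9, 0.4, 0.1], mu)
```

With an additive measure mu(A) = |A|/n the Choquet integral reduces to the arithmetic mean, which is a handy sanity check; non-additive measures let strong agreement between cues count for more than their average.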

    Edge detection in unorganized 3D point cloud

    The application of 3D laser scanning in the mining industry has increased progressively over the years. This presents an opportunity to visualize and analyze the underground world and potentially save countless man-hours and exposure to safety incidents. This thesis aims to detect the “edges of the rocks” in 3D point clouds collected via scanner; edge detection in point clouds is considered a difficult but meaningful problem. As a solution for noisy and unorganized 3D point clouds, a new method, the EdgeScan method, has been proposed and implemented to detect edges quickly and accurately from the 3D point cloud for real-time systems. The EdgeScan method makes use of 2D edge-processing techniques to represent the edge characteristics of the 3D point cloud with better accuracy. A comparison of the EdgeScan method with other common edge detection methods for 3D point clouds was conducted; the results suggest that the EdgeScan method provides better speed and accuracy, especially for large datasets in real-time systems. Master of Science (MSc) in Computational Science.
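The core idea the abstract describes, bringing 2D edge techniques to bear on an unorganized cloud, can be sketched by rasterizing the points into a 2D depth grid and applying a 2D gradient test. The grid size, resolution and threshold below are illustrative assumptions, not the thesis parameters.

```python
# Rough sketch of the EdgeScan idea: project (x, y, z) points onto a 2D
# depth grid, then mark cells where the depth jumps sharply. All numeric
# parameters here are invented for the example.

def depth_grid(points, res=1.0, w=8, h=8):
    """Rasterize (x, y, z) points into a w x h grid of max-z depth values."""
    grid = [[0.0] * w for _ in range(h)]
    for x, y, z in points:
        i, j = int(y / res), int(x / res)
        if 0 <= i < h and 0 <= j < w:
            grid[i][j] = max(grid[i][j], z)
    return grid

def edge_mask(grid, thresh=0.5):
    """Mark cells whose depth differs sharply from a right/down neighbour."""
    h, w = len(grid), len(grid[0])
    mask = [[False] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            for di, dj in ((1, 0), (0, 1)):
                ni, nj = i + di, j + dj
                if ni < h and nj < w and abs(grid[i][j] - grid[ni][nj]) > thresh:
                    mask[i][j] = True
    return mask

# Synthetic cloud: a flat floor (z = 0.1) with a raised block (z = 1.0)
# occupying x >= 4; the depth step should appear as a vertical edge.
cloud = [(x + 0.5, y + 0.5, 1.0 if x >= 4 else 0.1)
         for x in range(8) for y in range(8)]
mask = edge_mask(depth_grid(cloud))
```

In the synthetic cloud the edge shows up as a solid column in the mask along the floor/block boundary, while flat regions stay empty.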

    Automatic snow layer detection in drone-borne radar data using edge detection and morphology

    The thesis aims to detect the primary interfaces in ground-penetrating radar (GPR) data collected from a snowpack. An airborne drone was used to collect the data, producing a 2D image of the substructures together with GPS and laser-altimeter data. All of these were used in the thesis to develop the method and present the results. The approach focused on simpler image-processing techniques, with more complicated methods to be explored if needed. Ground truth was drawn manually with guidance from a GPR expert. The primary methods used in this thesis were Canny edge detection and morphological operators. Two different techniques were used to detect the two layers because they showed significantly different characteristics. The technique for the top layer achieved a root-mean-square error (RMSE) of 5 cm, which is within the range resolution of the radar system. A quality estimate was also given for the top layer, indicating the quality of the top estimate found through our method. The bottom estimate showed an accuracy of 20 cm because of the complexity of the bottom layer. On the other hand, the method did achieve a cross-correlation of 0.9, meaning it could follow the bottom layer in most datasets even where the exact location was off. In short, the method presented could be applied routinely to estimate the primary interfaces in other GPR data where no method previously existed.
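The two building blocks named in the abstract, an edge detector and morphological operators, can be illustrated on a toy radargram. The real pipeline uses full Canny edge detection on 2D GPR images; the simple vertical-gradient threshold and structuring-element radius here are assumptions for the sketch.

```python
# Toy version of the pipeline's two stages: (1) a vertical-gradient edge
# mask standing in for Canny, (2) a 1-D morphological closing (dilate then
# erode) to bridge small gaps in a detected layer. Thresholds are invented.

def vertical_edges(img, thresh=0.4):
    """Binary mask where the value jumps between adjacent rows."""
    rows, cols = len(img), len(img[0])
    return [[abs(img[r + 1][c] - img[r][c]) > thresh for c in range(cols)]
            for r in range(rows - 1)]

def close_horizontal(row, k=1):
    """1-D morphological closing with radius k: dilation, then erosion."""
    n = len(row)
    dil = [any(row[max(0, i - k): i + k + 1]) for i in range(n)]
    ero = [all(dil[max(0, i - k): min(n, i + k + 1)]) for i in range(n)]
    return ero

# Toy radargram: air (0.0) over snow (1.0), interface between rows 1 and 2,
# with one dropout column where the echo is missing.
img = [[0.0] * 6, [0.0] * 6, [1.0] * 6, [1.0] * 6]
img[2][3] = 0.0  # dropout: interface not visible in column 3
mask = vertical_edges(img)
layer = close_horizontal(mask[1])  # the row where the air/snow jump occurs
```

The closing fills the one-column dropout, which is exactly the role morphology plays after edge detection: turning a broken edge response into a continuous layer estimate.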

    Real Time Pattern Recognition in Digital Video with Applications to Safety in Construction Sites

    In construction sites, various guidelines are provided for the correct use of safety equipment. Many fatalities and injuries occur because these guidelines are not followed and violations are not properly monitored. In order to improve these standards and address the cause, a video-based monitoring tool will be created for a construction site. Based on real-time video obtained from cameras on the site, a classification algorithm will be created that has the intelligence to recognize whether any safety rules have been violated. A classification vector will be built from different classifiers, depending on the properties of the object and the image, to classify a construction site as being in a safe state. If any safety rule is violated, the algorithm issues a real-time alarm event, making management aware of the violation. The steadiness of the system is indicated by the probability of being in a safe state.
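The structure described, per-rule classifiers feeding one safe/unsafe decision, an alarm event, and a running safe-state probability, can be wired up in a few lines. The rule names and frame representation below are invented for illustration; the actual system's classifiers operate on video frames, not dictionaries.

```python
# Sketch of the decision layer only: hypothetical rule checks (names
# invented) are combined into a per-frame verdict, alarms are raised on
# violations, and the safe-state probability summarizes recent history.

def classify_frame(frame, rules):
    """Return the list of violated rule names for one frame."""
    return [name for name, check in rules.items() if not check(frame)]

def safe_state_probability(history):
    """Fraction of recent frames classified safe (the 'steadiness' figure)."""
    if not history:
        return 1.0
    return sum(1 for safe in history if safe) / len(history)

# Hypothetical per-rule classifiers over a dict of detections per frame.
rules = {
    "hardhat": lambda f: f.get("hardhat", False),
    "vest": lambda f: f.get("vest", False),
}

history, alarms = [], []
for frame in [{"hardhat": True, "vest": True},
              {"hardhat": False, "vest": True}]:
    violations = classify_frame(frame, rules)
    history.append(not violations)
    if violations:
        alarms.append(violations)  # real-time alarm event would fire here
```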

    Classification of skin tumours through the analysis of unconstrained images

    Skin cancer is the most frequent malignant neoplasm in Caucasian individuals. According to the Skin Cancer Foundation, the incidence of melanoma, the most malignant of skin tumours, and the resulting mortality have increased exponentially during the past 30 years and continue to grow [1]. Although often intractable in advanced stages, skin cancer in general, and melanoma in particular, can achieve cure ratios of over 95% if detected at an early stage [1,55]. Early screening of lesions is therefore crucial if a cure is to be achieved. Most skin lesion classification systems rely on a human expert supported by dermatoscopy, an enhanced and zoomed photograph of the lesion zone. Nevertheless, and although contrary claims exist, as far as the author knows, classification results are currently rather inaccurate and need to be verified through laboratory analysis of a piece of the lesion’s tissue. The aim of this research was to design and implement a system able to automatically classify skin spots as inoffensive or dangerous with a small margin of error; if possible, with higher accuracy than the results normally achieved by a human expert and certainly better than any existing automatic system. The system described in this thesis meets these criteria. It is able to capture an unconstrained image of the affected skin area and extract a set of relevant features that may lead to, and be representative of, the four main classification characteristics of skin lesions: Asymmetry, Border, Colour and Diameter. These relevant features are then evaluated by a Bayesian statistical process, by both a simple k-Nearest Neighbour and a Fuzzy k-Nearest Neighbour classifier, by a Support Vector Machine and by an Artificial Neural Network, in order to classify the skin spot as either being a melanoma or not. The characteristics selected and used throughout this work are, to the author’s knowledge, combined in an innovative manner.
Rather than simply selecting absolute values of the image characteristics, those numbers were combined into ratios, providing much greater independence from environmental conditions during image capture. During this work, image gathering became one of the most challenging activities. In fact, several of the initially promising sources failed, and so the author had to use all the pictures he could find, namely on the Internet. This limited the test set to only 136 images. Nevertheless, the results were excellent. The algorithms developed were implemented in a fully working system which was extensively tested. It gives a correct classification of between 76% and 92%, depending on the percentage of pictures used to train the system. In particular, the system gave no false negatives. This is crucial, since a system that gives false negatives may deter a patient from seeking further treatment, with a disastrous outcome. These results are achieved by detecting precise edges for every lesion image, extracting the features considered relevant, giving different weights to the various extracted features, and submitting these values to six classification algorithms – k-Nearest Neighbour, Fuzzy k-Nearest Neighbour, Naïve Bayes, Tree Augmented Naïve Bayes, Support Vector Machine and Multilayer Perceptron – in order to determine the most reliable combined process. Training was carried out in a supervised way: all the lesions were classified beforehand by an expert in the field before being subjected to the scrutiny of the system. The author is convinced that the work presented in this PhD thesis is a valid contribution to the field of skin cancer diagnostics. Although its scope is limited – one lesion per image – the results achieved by this arrangement of segmentation, feature extraction and classification algorithms show that this is the right path towards a reliable early screening system.
If and when values for age, gender and lesion evolution can be added to these data as classification features, the results will no doubt become even more accurate, allowing for an improvement in the survival rates of skin cancer patients.
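Two pieces of the pipeline described, a ratio-style shape feature and a k-Nearest Neighbour vote, can be sketched briefly. The circularity ratio, the toy training set and the value of k below are illustrative assumptions, not the thesis's actual features or data.

```python
# Sketch: one ratio feature (border irregularity as perimeter^2 / (4*pi*area),
# which is 1.0 for a circle and grows for ragged borders) plus a minimal
# k-NN majority vote. Feature values and labels are invented.

import math

def border_irregularity(mask):
    """Perimeter^2 / (4*pi*area) of a binary lesion mask."""
    h, w = len(mask), len(mask[0])
    area = sum(sum(row) for row in mask)
    perim = 0
    for i in range(h):
        for j in range(w):
            if mask[i][j]:
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if not (0 <= ni < h and 0 <= nj < w and mask[ni][nj]):
                        perim += 1  # exposed side contributes to perimeter
    return perim * perim / (4 * math.pi * area)

def knn(train, x, k=3):
    """Majority vote among the k nearest (feature-vector, label) pairs."""
    nearest = sorted(train, key=lambda t: sum((a - b) ** 2
                                              for a, b in zip(t[0], x)))[:k]
    votes = [label for _, label in nearest]
    return max(set(votes), key=votes.count)

# Hypothetical (irregularity, asymmetry-ratio) training pairs.
train = [((1.2, 0.10), "benign"), ((1.4, 0.20), "benign"),
         ((3.0, 0.80), "melanoma"), ((2.8, 0.70), "melanoma"),
         ((1.1, 0.15), "benign")]
label = knn(train, (2.9, 0.75))

score = border_irregularity([[1] * 4 for _ in range(4)])  # solid square
```

Because the features are ratios rather than absolute pixel counts, they are unchanged by uniform rescaling of the image, which is the independence from capture conditions the abstract emphasizes.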

    A Panorama on Multiscale Geometric Representations, Intertwining Spatial, Directional and Frequency Selectivity

    The richness of natural images makes the quest for optimal representations in image processing and computer vision challenging. This observation has not prevented the design of image representations that trade off between efficiency and complexity, while achieving accurate rendering of smooth regions as well as reproducing faithful contours and textures. The most recent ones, proposed in the past decade, share a hybrid heritage highlighting the multiscale and oriented nature of edges and patterns in images. This paper presents a panorama of the aforementioned literature on decompositions in multiscale, multi-orientation bases or dictionaries. They typically exhibit redundancy to improve sparsity in the transformed domain and, sometimes, invariance with respect to simple geometric deformations (translation, rotation). Oriented multiscale dictionaries extend traditional wavelet processing and may offer rotation invariance. Highly redundant dictionaries require specific algorithms to simplify the search for an efficient (sparse) representation. We also discuss the extension of multiscale geometric decompositions to non-Euclidean domains such as the sphere or arbitrary meshed surfaces. The etymology of panorama suggests an overview, based on a choice of partially overlapping "pictures". We hope that this paper will contribute to the appreciation and apprehension of a stream of current research directions in image understanding. Comment: 65 pages, 33 figures, 303 references
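The classical baseline that the surveyed oriented, redundant dictionaries generalize is the separable wavelet transform. A one-level 1D Haar analysis/synthesis pair is enough to show the two properties the abstract highlights: sparsity in the transformed domain and perfect reconstruction. This is only the non-redundant starting point, not any of the surveyed constructions.

```python
# One-level 1-D Haar wavelet transform: the coarse/detail split that
# multiscale dictionaries (curvelets, shearlets, ...) build upon.

import math

def haar_step(signal):
    """One analysis level: pairwise scaled sums (approx) and differences (detail)."""
    s = 1 / math.sqrt(2)
    approx = [(a + b) * s for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) * s for a, b in zip(signal[0::2], signal[1::2])]
    return approx, detail

def haar_inverse(approx, detail):
    """Perfect reconstruction from one analysis level."""
    s = 1 / math.sqrt(2)
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) * s, (a - d) * s]
    return out

# A piecewise-constant signal with one "edge": most detail coefficients
# vanish, which is the sparsity the transformed domain buys.
sig = [4.0, 4.0, 4.0, 4.0, 1.0, 1.0, 1.0, 1.0]
approx, detail = haar_step(sig)
rec = haar_inverse(approx, detail)
```

Here every detail coefficient is zero because the jump falls on a pair boundary; oriented 2D dictionaries aim for the same compaction along curved edges, which separable wavelets cannot provide.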

    Automated classification of rainfall systems using statistical characterization.

    A general, completely automated procedure for classifying rainfall systems is developed. The technique is flexible and universally applicable, in that any rainfall system can be classified regardless of size, location, time of day or year, degree of organization, etc. The knowledge obtained from previous research was used to develop a relatively straightforward and unique classification system. To test the performance of the method, results were validated against a subjective classification based upon objective criteria. On an independent random sample, the automated classification system accurately placed events into stratiform, linear and cellular classes 85% of the time.
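The abstract does not state which statistics drive the classification, so the following is an invented stand-in to make the three-class idea concrete: classify a binary rain mask as stratiform (widespread), linear (elongated) or cellular (compact) from two simple shape statistics. Both statistics and all thresholds are illustrative assumptions, not the paper's method.

```python
# Hypothetical rule-based stand-in for a rainfall-system classifier:
# widespread coverage -> stratiform; elongated footprint -> linear;
# compact footprint -> cellular. Thresholds are invented.

def classify_rain(mask):
    cells = [(i, j) for i, row in enumerate(mask)
             for j, v in enumerate(row) if v]
    if not cells:
        return "no rain"
    frac = len(cells) / (len(mask) * len(mask[0]))
    if frac > 0.5:                      # rain covers most of the domain
        return "stratiform"
    rows = [i for i, _ in cells]
    cols = [j for _, j in cells]
    span_i = max(rows) - min(rows) + 1
    span_j = max(cols) - min(cols) + 1
    elongation = max(span_i, span_j) / min(span_i, span_j)
    return "linear" if elongation >= 3 else "cellular"

wide = [[1] * 6 for _ in range(6)]                                 # everywhere
line = [[1 if i == 2 else 0 for j in range(6)] for i in range(6)]  # one band
blob = [[1 if i < 2 and j < 2 else 0 for j in range(6)] for i in range(6)]
```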