    Overlap Removal of Dimensionality Reduction Scatterplot Layouts

    Dimensionality reduction (DR) scatterplot layouts have become a ubiquitous visualization tool for analyzing multidimensional data across many areas. Despite their popularity, scatterplots suffer from occlusion, especially when markers convey information, making it hard for users to estimate the sizes of groups of items and, more importantly, potentially obfuscating items critical to the analysis at hand. Different strategies have been devised to address this issue, either producing overlap-free layouts, which lack the powerful capability of contemporary DR techniques to uncover interesting data patterns, or eliminating overlaps as a post-processing step. Despite the good results of post-processing techniques, the best methods typically expand or distort the scatterplot area, reducing marker sizes (sometimes) to unreadable dimensions and defeating the purpose of removing overlaps. This paper presents a novel post-processing strategy for removing overlaps from DR layouts that faithfully preserves the original layout's characteristics and marker sizes. We show through an extensive comparative evaluation on multiple metrics that the proposed strategy surpasses the state of the art in overlap removal while being two to three orders of magnitude faster on large datasets. (11 pages, 9 figures)
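
    The abstract does not detail the algorithm itself; as a rough illustration of what a size-preserving post-processing step can look like, the Python sketch below pushes overlapping markers apart while keeping marker sizes fixed. The function name, the uniform marker radius, and the simple pairwise repulsion scheme are all assumptions for illustration, not the authors' method.

        import numpy as np

        def remove_overlaps(points, radius, iterations=200):
            """Push apart overlapping markers of fixed radius (illustrative only).

            points : (n, 2) array of scatterplot coordinates.
            radius : common marker radius in layout units (assumed uniform).
            """
            pts = np.asarray(points, dtype=float).copy()
            min_dist = 2.0 * radius
            rng = np.random.default_rng(0)
            for _ in range(iterations):
                moved = False
                for i in range(len(pts)):
                    for j in range(i + 1, len(pts)):
                        delta = pts[j] - pts[i]
                        dist = float(np.hypot(*delta))
                        if dist >= min_dist:
                            continue
                        if dist < 1e-9:  # coincident points: pick a random direction
                            direction = rng.standard_normal(2)
                            direction /= np.linalg.norm(direction)
                        else:
                            direction = delta / dist
                        # Split the required displacement between both markers.
                        shift = 0.5 * (min_dist - dist) * direction
                        pts[i] -= shift
                        pts[j] += shift
                        moved = True
                if not moved:  # layout is already overlap-free
                    break
            return pts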

    Assessment of Levee Erosion using Image Processing and Contextual Cueing

    Soil erosion is one of the most severe land-degradation problems afflicting many parts of the world where the topography is relatively steep. Because steep terrain such as levee slopes and forested mountains is often inaccessible, advanced data-processing techniques can be used to identify and assess high-risk erosion zones. Unlike existing methods that rely on human observation, which can be expensive and error-prone, the proposed approach uses a fully automated algorithm to indicate when an area is at risk of erosion; this is accomplished by processing Landsat imagery and aerial images taken by drones. This paper presents the image-processing algorithm, which identifies the scene of an image by classifying it into one of six categories: levee, mountain, forest, degraded forest, cropland, and grassland/orchard. The paper focuses on automatic scene detection using global features with local representations that capture the gradient structure of an image. The output of this work serves as a contextual cue for erosion assessment, which in turn can be used to predict erosion risks in levees. We also discuss the environmental implications of deferring erosion control in levees.
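
    As a minimal sketch of scene classification from global gradient structure, the snippet below pools a HOG descriptor over the whole image and trains a linear classifier. The descriptor parameters, the fixed image size, and the choice of HOG plus LinearSVC are assumptions, since the abstract does not name the exact features or classifier.

        import numpy as np
        from skimage.feature import hog
        from skimage.transform import resize
        from sklearn.svm import LinearSVC

        # The six scene categories listed in the abstract.
        CATEGORIES = ["levee", "mountain", "forest",
                      "degraded forest", "cropland", "grassland/orchard"]

        def gradient_descriptor(gray, shape=(256, 256)):
            # Global gradient-structure feature: a HOG descriptor pooled over
            # coarse cells of a resized grayscale image (parameters assumed).
            return hog(resize(gray, shape), orientations=8,
                       pixels_per_cell=(32, 32), cells_per_block=(1, 1))

        def train_scene_classifier(train_images, train_labels):
            # train_labels holds category names drawn from CATEGORIES.
            X = np.stack([gradient_descriptor(img) for img in train_images])
            return LinearSVC().fit(X, train_labels)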

    Robust localization and identification of African clawed frogs in digital images

    We study the automatic localization and identification of African clawed frogs (Xenopus laevis sp.) in digital images taken in a laboratory environment. We propose a novel and stable frog-body localization and skin-pattern window extraction algorithm and show that it compensates well for scale and rotation changes. Moreover, it can localize and extract highly overlapping regions (pattern windows) even under intense affine transformations, blurring, Gaussian noise, and intensity transformations. The frog skin pattern (i.e., texture) provides a unique feature for identifying individual frogs. We investigate the suitability of five feature descriptors (Gabor filters, area granulometry, HoG, dense SIFT, and raw pixel values) for representing frog skin patterns and compare their robustness based on identification performance with a nearest-neighbor classifier. Our experiments show that, among the five features tested, raw pixel values performed best against rotation, scale, and blurring modifications, whereas dense SIFT performed best against affine and intensity modifications.
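
    The identification step described here reduces to nearest-neighbor matching over a chosen feature. A minimal sketch using the best-reported raw-pixel feature might look as follows, assuming the pattern windows have already been localized, aligned, and resized to a common shape by the extraction algorithm; the function and variable names are hypothetical.

        import numpy as np

        def identify_frog(query_window, gallery_windows, gallery_ids):
            """1-nearest-neighbor identification over raw pixel values.

            Windows are assumed already aligned and resized to one shape by
            the localization/extraction step.
            """
            q = np.asarray(query_window, dtype=float).ravel()
            best_id, best_dist = None, np.inf
            for window, frog_id in zip(gallery_windows, gallery_ids):
                d = np.linalg.norm(np.asarray(window, dtype=float).ravel() - q)
                if d < best_dist:
                    best_id, best_dist = frog_id, d
            return best_id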

    Comparison of machine learning algorithms for detecting coral reef

    (Received: 2014/07/31; accepted: 2014/09/23) This work focuses on developing a fast coral reef detector for an autonomous underwater vehicle (AUV). Fast detection lets the AUV stabilize with respect to an area of reef as quickly as possible and prevents devastating collisions. We start from the algorithm of Purser et al. (2009) because of its precision. This detector has two parts: feature extraction using Gabor wavelet filters and feature classification using machine learning based on neural networks. Because the neural networks are too slow, we replace them with a classification algorithm based on decision trees. We use a database of 621 images of coral reef in Belize (110 images for training and 511 for testing) and implement the Gabor wavelet filter bank in C++ with the OpenCV library. We compare the accuracy and running time of nine machine learning algorithms, which led to the selection of the decision tree algorithm. Our coral detector runs in 70 ms, compared with the 22 s taken by the algorithm of Purser et al. (2009).
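
    Below is a minimal sketch of a Gabor-filter-bank-plus-decision-tree pipeline, written in Python with OpenCV and scikit-learn rather than the paper's C++ implementation. The kernel parameters and the mean/std pooling of filter responses are illustrative assumptions, not the paper's values.

        import cv2
        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        def gabor_features(gray, n_orientations=4, wavelengths=(8.0, 16.0)):
            # Mean/std of each Gabor-filter response, pooled over the image.
            img = np.asarray(gray, dtype=np.float32)
            feats = []
            for lambd in wavelengths:
                for k in range(n_orientations):
                    theta = k * np.pi / n_orientations
                    kernel = cv2.getGaborKernel((31, 31), 4.0, theta, lambd, 0.5)
                    resp = cv2.filter2D(img, cv2.CV_32F, kernel)
                    feats.extend([float(resp.mean()), float(resp.std())])
            return np.array(feats)

        def train_coral_detector(images, labels):
            # labels: 1 for coral, 0 for non-coral.
            X = np.stack([gabor_features(img) for img in images])
            return DecisionTreeClassifier().fit(X, labels)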

    An Event-Based Neurobiological Recognition System with Orientation Detector for Objects in Multiple Orientations

    This paper proposes a new multi-orientation, event-based neurobiological recognition system for asynchronous address-event representation (AER) image sensors that integrates recognition and tracking. The system can recognize objects in multiple orientations even when the training samples move in only a single orientation. It extracts multi-scale and multi-orientation line features inspired by models of the primate visual cortex. An orientation detector based on a modified Gaussian blob tracking algorithm is introduced for object tracking and orientation detection; the orientation detector and the feature extraction block run simultaneously, with no increase in categorization time. An address lookup table (address LUT) is also presented to adjust the feature maps through address mapping and reordering, after which they are categorized by the trained spiking neural network. The recognition system is evaluated on the MNIST dataset, which has played an important role in the development of computer vision, and accuracy is increased by using both ON and OFF events. AER data acquired by a DVS, such as moving digits, poker cards, and vehicles, are also tested on the system. The experimental results show that the proposed system achieves event-based multi-orientation recognition. The work makes several contributions to event-based vision processing for multi-orientation object recognition: it adds a new tracking-recognition architecture to a feedforward categorization system, introduces an address-reordering approach to classify multi-orientation objects from event-based data, and provides a new way to recognize objects in multiple orientations using only single-orientation training samples.
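
    As a generic sketch of the orientation-detection idea (not the paper's modified algorithm), the class below maintains a running Gaussian over incoming event coordinates and reads the orientation off the covariance's principal eigenvector; the class name and update rate are assumptions.

        import numpy as np

        class GaussianBlobTracker:
            """Track position and orientation from a stream of AER events.

            Each event nudges a running mean and covariance of event
            coordinates; orientation is the angle of the covariance's
            dominant eigenvector.
            """
            def __init__(self, alpha=0.01):
                self.alpha = alpha      # per-event update rate (assumed value)
                self.mean = None
                self.cov = np.eye(2)

            def update(self, x, y):
                p = np.array([x, y], dtype=float)
                if self.mean is None:
                    self.mean = p
                    return
                d = p - self.mean
                self.mean = self.mean + self.alpha * d
                self.cov = (1 - self.alpha) * self.cov + self.alpha * np.outer(d, d)

            def orientation(self):
                # Angle (radians) of the principal axis of the event cloud.
                w, v = np.linalg.eigh(self.cov)
                major = v[:, np.argmax(w)]
                return float(np.arctan2(major[1], major[0]))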

    Online sketch-based image retrieval using keyshape mining of geometrical objects

    Online image retrieval has become an active area of information sharing due to the massive use of the Internet. The key challenges are the semantic gap between low-level visual features and high-level perception and interpretation, the complexity of understanding images, the irregular nature of hand-drawn query input, and the huge number of web images. In addition, state-of-the-art research seeks to combine multiple types of feature representations to close the semantic gap. This study developed a new schema to retrieve images directly from the web repository. It comprises three major phases. First, a new online input representation based on pixel mining was designed to detect sketch shape features and correlate them with the semantic meaning of sketch objects. Second, a training process was developed that uses the Singular Value Decomposition (SVD) technique to obtain common sketch templates; the outcome of this step is a dictionary of sketch templates. Last, the retrieval phase matches and compares the sketch against the image repository using metadata annotation to retrieve the most relevant images. The sequence of processes in this schema converts the drawn input sketch into a string containing the sketch's object elements; the string is matched against the templates dictionary to determine the sketch's metadata name, and this name is sent to a web repository to match and retrieve the relevant images. A series of experiments evaluated the schema against the state of the art reported in the literature on the same datasets, comprising one million images from FlickerIm and 0.2 million images from ImageNet. The schema achieved 100% precision for the first five retrieved images in all cases, whereas the state of the art achieved only 88.8%. The schema addresses many low-level feature obstacles, such as imperfect sketches, rotation, transposition, and scaling, and uses high-level semantics to retrieve accurate images from large databases and the web.
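
    A minimal sketch of the SVD-based template step, assuming training sketches are vectorized into equal-length pixel rows; how the paper maps singular vectors to its templates dictionary is not specified in the abstract, so the interface below is hypothetical.

        import numpy as np

        def common_sketch_templates(sketches, n_templates=10):
            """Extract dominant sketch patterns via SVD.

            sketches : (n_samples, n_pixels) array of vectorized sketches.
            Returns the top right-singular vectors; each row can be reshaped
            back to the sketch image shape to view it as a template.
            """
            X = sketches - sketches.mean(axis=0)           # center the data
            U, S, Vt = np.linalg.svd(X, full_matrices=False)
            return Vt[:n_templates]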

    Optoelectronic Multifractal Wavelet Analysis for Fast and Accurate Detection of Rainfall in Weather Radar Images

    In this thesis we propose an automated process for removing the non-precipitation echoes present in weather radar signals and accurately detecting rainfall. The process employs multifractal analysis using directional Gabor wavelets to detect rain events accurately, and an optoelectronic joint transform correlator is proposed to provide ultra-fast processing and wavelet analysis. Computer simulations show that the proposed algorithm successfully detects rainfall in radar images. Its accuracy is compared against reference results generated under expert supervision. Results are also compared to those of the QC algorithm in the ground validation software (GVS) used by the TRMM Ground Validation Project and to a previous QC algorithm. Several statistical measures computed for different reflectivity ranges show that the proposed algorithm reaches an accuracy as high as 98.95%, exceeding the 97.46% maximum accuracy of the GVS results. The minimum error rate obtained by the proposed algorithm across different dB ranges drops to 1.09%, whereas the GVS results show a minimum error rate of 1.80%. The rain-rate accumulation confirms the algorithm's success in accurately removing non-precipitation echoes and its higher precision in rain-accumulation estimates.
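
    The multifractal analysis and the optoelectronic correlator are beyond a short sketch, but the directional Gabor filtering at the core of the approach can be illustrated as below. The kernel parameters are assumptions, and the rain/clutter discrimination from the resulting energies is left out.

        import cv2
        import numpy as np

        def directional_gabor_energy(reflectivity, n_orientations=8):
            # Stack of directional Gabor response magnitudes over a radar
            # reflectivity image; spatially coherent precipitation echoes
            # respond differently across orientations than speckle-like clutter.
            img = np.asarray(reflectivity, dtype=np.float32)
            energies = []
            for k in range(n_orientations):
                theta = k * np.pi / n_orientations
                kernel = cv2.getGaborKernel((21, 21), 3.0, theta, 10.0, 0.5)
                energies.append(np.abs(cv2.filter2D(img, cv2.CV_32F, kernel)))
            return np.stack(energies)   # shape: (n_orientations, H, W)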

    Fast and robust image feature matching methods for computer vision applications

    Service robotic systems are designed to solve tasks such as recognizing and manipulating objects, understanding natural scenes, and navigating in dynamic, populated environments. Such tasks cannot be modeled in all necessary detail as easily as industrial robot tasks; a service robotic system must therefore be able to sense and interact with the surrounding physical environment through a multitude of sensors and actuators. Environment sensing is one of the core problems limiting the deployment of mobile service robots, since existing sensing systems are either too slow or too expensive. Visual sensing is the most promising route to a cost-effective solution to the mobile robot sensing problem; it is usually achieved with one or several digital cameras placed on the robot or distributed in its environment. Digital cameras are information-rich, relatively inexpensive sensors that can address a number of key problems in robotics and other autonomous intelligent systems, such as visual servoing, robot navigation, object recognition, and pose estimation. The key challenge in exploiting this powerful and inexpensive sensor is devising algorithms that reliably and quickly extract and match the visual information needed to interpret the environment automatically in real time. Although considerable research has been conducted in recent years on algorithms for computer and robot vision, open challenges remain regarding reliability, accuracy, and processing time. The Scale Invariant Feature Transform (SIFT) is one of the most widely used methods and has attracted much attention in the computer vision community because SIFT features are highly distinctive and invariant to scale, rotation, and illumination changes; they are also relatively easy to extract and to match against a large database of local features. However, the SIFT algorithm has two main drawbacks: its computational complexity grows rapidly with the number of keypoints, especially in the matching step, due to the high dimensionality of the SIFT feature descriptor; and SIFT features are not robust to large viewpoint changes. These drawbacks limit the practical use of SIFT for robot vision applications, which often demand real-time performance and must cope with large viewpoint changes. This dissertation proposes three new approaches to address these constraints: speeded-up SIFT feature matching, robust SIFT feature matching, and the inclusion of a closed-loop control structure in object recognition and pose estimation systems. The proposed methods are implemented and tested on the FRIEND II/III service robotic system, and the achieved results show the value of adapting the SIFT algorithm to robot vision applications.
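
    For context only (the dissertation's own speed-up and robustness methods are not reproduced here), one standard way to accelerate SIFT matching is approximate nearest-neighbor search with Lowe's ratio test, sketched below with OpenCV.

        import cv2

        def match_sift(img1, img2, ratio=0.75):
            # Detect SIFT keypoints and 128-D descriptors on both images.
            sift = cv2.SIFT_create()
            kp1, des1 = sift.detectAndCompute(img1, None)
            kp2, des2 = sift.detectAndCompute(img2, None)
            # FLANN KD-trees give approximate nearest neighbors, much faster
            # than brute-force matching on large descriptor sets.
            flann = cv2.FlannBasedMatcher({"algorithm": 1, "trees": 5},
                                          {"checks": 50})
            matches = flann.knnMatch(des1, des2, k=2)
            # Lowe's ratio test discards ambiguous matches.
            good = [m for m, n in (p for p in matches if len(p) == 2)
                    if m.distance < ratio * n.distance]
            return kp1, kp2, good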