6 research outputs found

    Greedy Algorithms for Approximating the Diameter of Machine Learning Datasets in Multidimensional Euclidean Space: Experimental Results

    Finding the diameter of a dataset in multidimensional Euclidean space is a well-established problem with well-known algorithms. However, most algorithms in the literature do not scale well with large data dimensions, as their time complexity grows exponentially in most cases, which makes them impractical. We therefore implemented four simple greedy algorithms for approximating the diameter of a multidimensional dataset, based on minimum/maximum l2 norms, hill climbing search, Tabu search, and Beam search, respectively. The implemented algorithms have near-linear time complexity, scaling near-linearly with both dataset size and dimensionality. The results of experiments conducted on different machine learning datasets demonstrate the efficiency of the implemented algorithms, which can therefore be recommended for finding the diameter whenever machine learning applications need it.
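
    To illustrate how such a greedy approximation stays near-linear, here is a minimal Python sketch of the hill-climbing variant named in the abstract (the function name, restart scheme, and NumPy details are our own assumptions, not the paper's code):

    ```python
    import numpy as np

    def approx_diameter_hill_climb(X, restarts=3, seed=0):
        """Greedily approximate the diameter (largest pairwise Euclidean
        distance) of X, an (n_samples, n_dims) array: from a random start,
        repeatedly jump to the point farthest from the current one."""
        rng = np.random.default_rng(seed)
        best = 0.0
        for _ in range(restarts):
            i = rng.integers(len(X))
            cur = 0.0
            while True:
                dists = np.linalg.norm(X - X[i], axis=1)  # one O(n*d) sweep
                j = int(np.argmax(dists))
                if dists[j] <= cur:
                    break  # no farther point found; this climb has converged
                cur, i = float(dists[j]), j
            best = max(best, cur)
        return best
    ```

    Each sweep costs O(nd) distance computations and the number of hops is small in practice, so a handful of restarts keeps the total cost near-linear in both dataset size and dimension; the result is always a valid lower bound on the true diameter.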

    A new perceptual dissimilarity measure for image retrieval and clustering

    Image retrieval and clustering are two important tools for analysing and organising images. A dissimilarity measure is central to both, and the performance of retrieval and clustering algorithms depends on its effectiveness. Minkowski distance, or more specifically Euclidean distance, is the most widely used dissimilarity measure in image retrieval and clustering. Euclidean distance depends only on the geometric position of two data instances in the feature space and completely ignores the data distribution. However, the data distribution has an effect on human perception: psychologists have argued that two data instances in a dense area are perceived as more dissimilar than the same two instances in a sparser area. Based on this idea, a dissimilarity measure called mp has been proposed to address Euclidean distance's limitation of ignoring the data distribution. mp relies on the data distribution to calculate the dissimilarity between two instances: higher data mass between two instances implies higher dissimilarity, and vice versa. mp relies only on the data distribution and completely ignores geometric distance in its calculations.

    When aggregating dissimilarities between two instances over all dimensions of the feature space, both Euclidean distance and mp give the same priority to every dimension. The final dissimilarity between two instances may therefore be dominated by a few dimensions with relatively large values, and the derived dissimilarity may not align well with human perception. The need to address these limitations of Minkowski distance measures, along with the importance of a dissimilarity measure that considers both geometric distance and the perceptual effect of the data distribution, motivated this thesis. It studies the performance of mp for image retrieval, investigates a new dissimilarity measure that combines Euclidean distance with the data distribution, and studies the performance of such a measure for image retrieval and clustering.

    Our performance study of mp for image retrieval shows that relying only on the data distribution leads to situations where mp's measurements are contrary to human perception. This thesis therefore introduces a new dissimilarity measure called the perceptual dissimilarity measure (PDM), which combines the perceptual effect of the data distribution with Euclidean distance. PDM has two variants. PDM1 improves mp by weighting it with Euclidean distance in situations where mp may not retrieve accurate results. PDM2 considers the effect of the data distribution on the perceived dissimilarity measured by Euclidean distance, weighting Euclidean distance with a logarithmic transform of the data mass. The proposed PDM variants have been used as alternatives to Euclidean distance and mp to improve retrieval accuracy. Our results show that PDM2 consistently performed best compared to Euclidean distance, mp, and PDM1. PDM1's performance was less consistent: it outperformed mp in all experiments but could not outperform Euclidean distance in some cases. Following PDM2's promising retrieval results, we studied its performance for image clustering.
    k-means is the most widely used clustering algorithm in scientific and industrial applications, and k-medoids is the closest clustering algorithm to it. Unlike k-means, which works only with Euclidean distance, k-medoids accepts an arbitrary dissimilarity measure. We used Euclidean distance, mp, and PDM2 as the dissimilarity measure in k-medoids and compared the results with k-means. Our clustering results show that PDM2 performed best overall. This confirms our retrieval results and identifies PDM2 as a suitable dissimilarity measure for both image retrieval and clustering.
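
    To make the idea concrete, the following Python sketch shows one plausible PDM2-style computation: Euclidean distance weighted by a logarithmic transform of the data mass between two instances. The bounding-box notion of mass and the log1p weighting are illustrative assumptions; the abstract does not give the thesis's exact formulation:

    ```python
    import numpy as np

    def data_mass(X, a, b):
        """Count points of X inside the axis-aligned bounding box spanned
        by instances a and b (one simple notion of 'data mass'; the thesis
        may define the region between two instances differently)."""
        lo, hi = np.minimum(a, b), np.maximum(a, b)
        inside = np.all((X >= lo) & (X <= hi), axis=1)
        return int(inside.sum())

    def pdm2(X, a, b):
        """Sketch of a PDM2-style dissimilarity: Euclidean distance
        weighted by a log transform of the data mass between a and b."""
        d = np.linalg.norm(a - b)
        m = data_mass(X, a, b)
        return d * np.log1p(m)  # log1p keeps the weight finite when m == 0
    ```

    A function of this shape can be plugged directly into any k-medoids implementation that accepts an arbitrary dissimilarity measure, mirroring the comparison with Euclidean distance and mp described above.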

    Entropy in Image Analysis II

    Image analysis is a fundamental task for any application where extracting information from images is required. The analysis calls for highly sophisticated numerical and analytical methods, particularly in fields such as medicine and security, where the results of the processing are of vital importance. This is evident from the articles composing the Special Issue "Entropy in Image Analysis II", in which the authors used widely tested methods to verify their results. Readers of the present volume will appreciate the richness of the methods and applications, particularly in medical imaging and image security, and a remarkable cross-fertilization among the proposed research areas.

    Artificial vision for the blind: a bio-inspired approach to pattern recognition

    More than 315 million people worldwide suffer from visual impairments, and several studies suggest that this number will double by 2030 due to the ageing of the population. Current approaches to compensating for the loss of sight consist of either specific aids designed to answer particular needs, or generic systems such as neuroprostheses and sensory substitution devices. These holistic approaches, which try to restore vision as a whole, have proven very inefficient in real-life situations given the low resolution of their output interfaces. To overcome these obstacles, we propose using artificial vision to pre-process visual scenes and provide the user with only the relevant information. We validated this approach through the development of a novel assistive device for the blind called Navig. Through shape recognition and spatialized sound synthesis, the system allows users to locate and grasp objects of interest. It also features navigational aids based on a new positioning method combining GPS, inertial sensors, and the visual detection of geolocalized landmarks. To enhance the performance of the visual module, we also developed, as part of this thesis, a bio-inspired pattern recognition algorithm that uses latency-based coding of visual information, oriented edge representations, and a cascaded architecture combining detections at different resolutions.
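
    For readers unfamiliar with latency-based coding, here is a toy Python sketch of the general idea (stronger feature responses fire earlier); the normalization and time scale are illustrative assumptions, not the thesis's algorithm:

    ```python
    import numpy as np

    def latency_code(responses, t_max=1.0):
        """Toy latency coding: map non-negative feature responses (e.g.
        oriented-edge filter outputs) to spike times in [0, t_max], with
        the strongest response firing first."""
        r = np.clip(np.asarray(responses, dtype=float), 0.0, None)
        peak = r.max()
        if peak == 0:
            return np.full(r.shape, t_max)  # nothing to signal: all spikes late
        return t_max * (1.0 - r / peak)     # strongest response -> time 0
    ```

    The appeal of such a code for a cascaded recognizer is that the most informative responses arrive first, so downstream stages can start deciding before the full response map has been transmitted.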

    Banknote recognition as a CBIR problem

    Automatic banknote recognition is an important aid for visually impaired users, as it may provide complementary evidence to tactile perception. In this paper we propose a framework for banknote recognition based on a traditional Content-Based Image Retrieval pipeline: given a test image, we first extract SURF features, then adopt a Bag of Features representation, and finally associate the image with the banknote amount that ranked best according to a similarity measure of choice. Compared with previous works in the literature, our method is simple, computationally efficient, and does not require a banknote detection stage. To validate the effectiveness and robustness of the proposed approach, we collected several datasets of Euro banknotes under a variety of conditions, including partial occlusion, cluttered background, and rotation, viewpoint, and illumination changes. We report a comparative analysis of different image descriptors and similarity measures and show that the proposed scheme achieves high recognition rates even under rather challenging conditions. In particular, Bag of Features combined with the L2 distance appears to be the best combination for the problem at hand, and performance does not degrade if a dimensionality reduction step is applied.
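
    A minimal sketch of this retrieval pipeline in Python with OpenCV is shown below. ORB stands in for SURF, since SURF requires a non-free opencv-contrib build, and the codebook is assumed to have been built beforehand (e.g. by k-means over training descriptors); both substitutions are ours, not the paper's:

    ```python
    import numpy as np
    import cv2

    def bof_histogram(img_gray, codebook):
        """Bag of Features: detect local descriptors, assign each to its
        nearest codeword, and return an L2-normalized codeword histogram."""
        orb = cv2.ORB_create()
        _, desc = orb.detectAndCompute(img_gray, None)
        if desc is None:
            return np.zeros(len(codebook))  # no keypoints found
        desc = desc.astype(np.float32)
        # nearest codeword for every descriptor
        d = np.linalg.norm(desc[:, None, :] - codebook[None, :, :], axis=2)
        hist = np.bincount(d.argmin(axis=1), minlength=len(codebook)).astype(float)
        n = np.linalg.norm(hist)
        return hist / n if n > 0 else hist

    def rank_by_l2(query_hist, db_hists):
        """Rank gallery images by L2 distance to the query histogram."""
        return np.argsort(np.linalg.norm(db_hists - query_hist, axis=1))
    ```

    Given histograms for a labeled gallery of banknote images, the query is assigned the amount of the top-ranked image, i.e. the Bag-of-Features-plus-L2 combination the paper finds to work best.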