
    Operators for transforming kernels into quasi-local kernels that improve SVM accuracy

    Motivated by the crucial role that locality plays in various learning approaches, we present, in the framework of kernel machines for classification, a novel family of operators on kernels that integrate local information into any kernel, yielding quasi-local kernels. The quasi-local kernels maintain the possibly global properties of the input kernel while increasing the kernel value as points get closer in the feature space of the input kernel, mixing the effect of the input kernel with a kernel that is local in that feature space. Applied to a local kernel, the operators introduce an additional level of locality, equivalent to using a local kernel with non-stationary kernel width. The operators accept two parameters that regulate the width of the exponential influence of points in the locality-dependent component and the balance between the feature-space local component and the input kernel. We address the choice of these parameters with a data-dependent strategy. Experiments carried out with SVMs, applying the operators to traditional kernel functions on a total of 43 datasets with different characteristics and application domains, achieve very good results supported by statistical significance
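
    A minimal sketch of one operator in this spirit: it mixes an input kernel K with an RBF component measured in K's own feature space, with `sigma` and `beta` standing in for the two parameters described above. The exact operator family is the paper's; the function names and the precomputed-Gram usage hint are illustrative assumptions.

```python
import numpy as np

def feature_space_sq_dist(K, X, Y):
    """Squared distance in the feature space induced by kernel K:
    d_K(x, y)^2 = K(x, x) - 2 K(x, y) + K(y, y)."""
    Kxx = np.array([K(x, x) for x in X])
    Kyy = np.array([K(y, y) for y in Y])
    Kxy = np.array([[K(x, y) for y in Y] for x in X])
    return Kxx[:, None] - 2.0 * Kxy + Kyy[None, :]

def quasi_local(K, sigma=1.0, beta=1.0):
    """Return a Gram-matrix function mixing K with an RBF that is local in
    K's own feature space. `sigma` regulates the width of the exponential
    influence, `beta` the balance between the two components. The sum of
    two positive semi-definite kernels is again a valid kernel."""
    def gram(X, Y):
        Kxy = np.array([[K(x, y) for y in Y] for x in X])
        d2 = feature_space_sq_dist(K, X, Y)
        return Kxy + beta * np.exp(-d2 / (2.0 * sigma ** 2))
    return gram

# Example: make a (global) polynomial kernel quasi-local.
poly = lambda x, y: (1.0 + x @ y) ** 2
gram = quasi_local(poly, sigma=0.5, beta=0.3)
X = np.random.default_rng(0).normal(size=(5, 3))
G = gram(X, X)  # precomputed Gram matrix, usable with SVC(kernel="precomputed")
```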

    Transforming Graph Representations for Statistical Relational Learning

    Relational data representations have become an increasingly important topic due to the recent proliferation of network datasets (e.g., social, biological, information networks) and a corresponding increase in the application of statistical relational learning (SRL) algorithms to these domains. In this article, we examine a range of representation issues for graph-based relational data. Since the choice of relational data representation for the nodes, links, and features can dramatically affect the capabilities of SRL algorithms, we survey approaches and opportunities for relational representation transformation designed to improve the performance of these algorithms. This leads us to introduce an intuitive taxonomy for data representation transformations in relational domains that incorporates link transformation and node transformation as symmetric representation tasks. In particular, the transformation tasks for both nodes and links include (i) predicting their existence, (ii) predicting their label or type, (iii) estimating their weight or importance, and (iv) systematically constructing their relevant features. We motivate our taxonomy through detailed examples and use it to survey and compare competing approaches for each of these tasks. We also discuss general conditions for transforming links, nodes, and features. Finally, we highlight challenges that remain to be addressed
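
    As a toy illustration of two of the taxonomy's transformation tasks, the sketch below scores candidate links with a common-neighbors heuristic (link existence prediction) and constructs simple structural node features (feature construction). The graph, helper names, and choice of statistics are hypothetical, not taken from the article.

```python
from itertools import combinations

# A tiny undirected graph as an adjacency map.
edges = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

def common_neighbors(u, v):
    return len(adj[u] & adj[v])

# Link transformation (task i): score non-edges by shared neighborhood.
candidates = [(u, v) for u, v in combinations(adj, 2) if v not in adj[u]]
link_scores = {(u, v): common_neighbors(u, v) for u, v in candidates}

# Node transformation (task iv): derive relational features per node.
def node_features(u):
    degree = len(adj[u])
    # Each triangle through u is counted once per incident edge pair, so halve.
    triangles = sum(common_neighbors(u, w) for w in adj[u]) // 2
    return {"degree": degree, "triangles": triangles}

features = {u: node_features(u) for u in adj}
```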

    Tensor singular spectral analysis for 3D feature extraction in hyperspectral images

    Due to the cubic structure of a hyperspectral image (HSI), characterizing its spectral and spatial properties in three dimensions is challenging. Conventional spectral-spatial methods usually extract spectral and spatial information separately, ignoring their intrinsic correlations. Recently, some 3D feature extraction methods have been developed to extract spectral and spatial features simultaneously, but they rely on local spatial-spectral regions and thus ignore global spectral similarity and spatial consistency. Moreover, some of these methods involve huge numbers of model parameters and therefore require large numbers of training samples. In this paper, a novel Tensor Singular Spectral Analysis (TensorSSA) method is proposed to extract global and low-rank features of an HSI. In TensorSSA, an adaptive embedding operation is first proposed to construct a trajectory tensor corresponding to the entire HSI, which takes full advantage of spatial similarity and yields an adequate representation of the global low-rank properties of the HSI. The obtained trajectory tensor, which contains the global and local spatial and spectral information of the HSI, is then decomposed by the tensor singular value decomposition (t-SVD) to explore its low-rank intrinsic features. Finally, the efficacy of the extracted features is evaluated by the accuracy of image classification with a support vector machine (SVM) classifier. Experimental results on three publicly available datasets fully demonstrate the superiority of the proposed TensorSSA over several state-of-the-art 2D/3D feature extraction and deep learning algorithms, even with a limited number of training samples
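
    A hedged sketch of the t-SVD low-rank step: FFT along the third mode, truncated SVD of each frontal slice in the Fourier domain, and inverse FFT. The adaptive embedding that builds the trajectory tensor is specific to the paper; a random tensor stands in for it here.

```python
import numpy as np

def tsvd_lowrank(T, k):
    """Rank-k approximation under the t-SVD: transform along the third
    mode, truncate the SVD of every frontal slice in the Fourier domain,
    then transform back."""
    Tf = np.fft.fft(T, axis=2)
    out = np.empty_like(Tf)
    for i in range(T.shape[2]):
        U, s, Vh = np.linalg.svd(Tf[:, :, i], full_matrices=False)
        out[:, :, i] = (U[:, :k] * s[:k]) @ Vh[:k, :]
    return np.real(np.fft.ifft(out, axis=2))

# Toy stand-in for a trajectory tensor built from an HSI cube.
rng = np.random.default_rng(1)
T = rng.normal(size=(20, 30, 8))
T_low = tsvd_lowrank(T, k=3)  # low-rank spectral-spatial features
```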

    Correlation features and a structured SVM family for phoneme classification and automatic speech recognition

    The foremost aim of this thesis is to introduce concepts aimed at improving phoneme classification and, by extension, automatic speech recognition. The most distinctive part of the new approach presented here is that the different stages of the analysis, from feature vector creation to classification, are all developed from a common basis. This foundation becomes apparent in the interaction of correlation with the formal structure of a tristate phoneme model, which manifests itself in short-time weak stationarity and the transitions between such segments within phonemes. The tristate layout is a topology that partitions a phoneme, or more generally an observed frame, into three main sections: start, middle, and end. In combination with the well-known Hidden Markov Model (HMM), it models exactly this sequence of quasi-stationary and transitional states. On the basis of weak stationarity and the tristate structure, our approach evolves as follows. A stochastic process such as a speech signal that is short-time weak stationary has first- and second-order moments independent of time t; they are affected only by the time span between observations. This effect is reflected by the (auto)covariance of the process and carries over to (auto)correlation and, to some degree, to cross-correlation. In this light, based on common MFCC feature vectors, we first analyze potential improvements from using autocorrelation data and, motivated by the results, introduce new MFCC autocorrelation features and, later, specific cross-correlation features. In this context we note that, in contrast to the different components (roughly representing the different frequency bands) of a single MFCC vector, identical components across different MFCC vectors are in general not decorrelated. In a subsequent step, the cross-correlation transform is integrated into the support vector classifiers used for phoneme classification, in that a specialized reproducing kernel utilized by the classifiers is deduced directly from the transform. The theoretical prerequisites for the new kernel are derived, and its necessary properties are proven. In line with the new reproducing kernel, a family of classifiers is introduced whose structure, following the features, is modeled directly on the tristate topology and likewise shaped by correlation. Taken together, these concepts aim at a framework that represents and models both the stationary phases and the transitions within acoustic events more adequately than previous recognition and classification systems. Comparative experiments demonstrate the improved recognition rates resulting from the new topology; the framework is then integrated into a standard automatic speech recognition system and evaluated in this context, where comparisons against a standard recognition system again reveal its potential. Finally, prospects and suggestions for further improvements conclude the thesis
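
    One simple way to turn a correlation transform into a valid reproducing kernel is to use it as an explicit feature map, as sketched below. This is an assumption-laden stand-in, not the thesis's exact construction: the function names, the biased autocorrelation estimator, and the lag truncation are all illustrative choices.

```python
import numpy as np

def autocorr_features(x, max_lag):
    """Explicit feature map: biased autocorrelation estimates up to max_lag."""
    x = x - x.mean()
    n = len(x)
    return np.array([np.dot(x[:n - l], x[l:]) / n for l in range(max_lag + 1)])

def corr_kernel(X, Y, max_lag=10):
    """Gram matrix of k(x, y) = <r_x, r_y>: a plain inner product of an
    explicit feature map, hence positive semi-definite and a valid
    reproducing kernel by construction."""
    Rx = np.array([autocorr_features(x, max_lag) for x in X])
    Ry = np.array([autocorr_features(y, max_lag) for y in Y])
    return Rx @ Ry.T

# Usage with a precomputed-kernel SVM, e.g.:
# from sklearn.svm import SVC
# clf = SVC(kernel="precomputed").fit(corr_kernel(frames, frames), labels)
```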

    Representing functional data in reproducing Kernel Hilbert spaces with applications to clustering, classification and time series problems

    In modern data analysis areas such as Image Analysis, Chemometrics or Information Retrieval, the raw data are often complex and their representation in Euclidean spaces is not straightforward. However, most statistical data analysis techniques are designed to deal with points in Euclidean spaces, and hence a representation of the data in some Euclidean coordinate system is always required before multivariate analysis techniques can be applied. This process is crucial to guarantee the success of the data analysis methodologies and is a core contribution of this thesis. In this work we develop general data representation techniques in the framework of Functional Data Analysis (FDA) for classification and clustering problems. In Chapter 1 we motivate the problems to solve, describe the roadmap of the contributions and set up the notation of this work. In Chapter 2 we review some aspects concerning Reproducing Kernel Hilbert Spaces (RKHSs), Regularization Theory, Integral Operators, Support Vector Machines and Kernel Combinations. In Chapter 3 we propose a new methodology to obtain finite-dimensional representations of functional data. The key idea is to consider each functional curve as a point in a general function space and then project these points onto a Reproducing Kernel Hilbert Space (RKHS) with the aid of Regularization theory. We describe the projection methods, analyze their theoretical properties and develop a strategy to select appropriate RKHSs in which to represent the functional data. Following the functional data analysis approach, in Chapter 4 we develop a new procedure to deal with proximity (similarity or distance) matrices in classification problems by studying the connection between proximity measures and a certain class of integral operators. The idea is to come up with a methodology able to estimate an integral operator whose associated kernel function, evaluated at the sample, approximates the sample proximity matrix of the problem. To show the broad scope of application of the methodology, we apply it to three cases: (1) classification problems where the only available information about the data is an asymmetric similarity matrix, (2) partially labeled classification problems, and (3) classification problems where several sources of information are available and can be combined to obtain the discrimination function. In Chapter 5 we propose a spectral framework for information fusion when the sources of information are given by a set of proximity matrices. Our approach is based on the simultaneous diagonalization of the original matrices of the problem and represents a natural way to manage the redundant information involved in the fusion process. In particular, we define a new metric for proximity matrices and propose a method that automatically eliminates the redundant information among a set of matrices when they are combined. We conclude the contributions of the thesis in Chapter 6 with a battery of simulated and real examples devoted to comparing the performance of the proposed methodologies with the state of the art in representation methods. Finally, in Chapter 7 we include a discussion regarding the topics described above and propose some future lines of research that we believe are the natural extensions of the work developed in this thesis
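
    A minimal sketch of the Chapter 3 idea under the standard regularized least-squares formulation: each discretized curve is represented by the coefficient vector of its RKHS projection. The kernel choice, regularization constant, and toy curves are assumptions for illustration.

```python
import numpy as np

def rkhs_coefficients(t, y, kernel, lam=1e-3):
    """Project a discretized curve (t_i, y_i) onto the RKHS of `kernel` by
    regularized least squares: solve (K + lam*n*I) c = y, so the curve is
    represented by f(s) = sum_i c_i k(s, t_i) and, finite-dimensionally,
    by the coefficient vector c."""
    K = kernel(t[:, None], t[None, :])
    n = len(t)
    return np.linalg.solve(K + lam * n * np.eye(n), y)

rbf = lambda a, b: np.exp(-0.5 * (a - b) ** 2 / 0.1)
t = np.linspace(0, 1, 50)
curves = [np.sin(2 * np.pi * t) + 0.1 * np.random.default_rng(i).normal(size=50)
          for i in range(3)]
# Finite-dimensional representation: one coefficient vector per curve,
# ready for standard clustering or classification methods.
reps = np.array([rkhs_coefficients(t, y, rbf) for y in curves])
```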

    Quantum Neural Networks with Qutrits

    Quantum computers, leveraging the principles of quantum physics, have the potential to revolutionize various domains by utilizing quantum bits (qubits) that can exist in superposition and entanglement, allowing for parallel exploration of solutions. Recent advancements in quantum hardware have enabled the realization of high-dimensional quantum states on chip-scale platforms, opening another promising avenue. The use of qudits, quantum systems with more than two levels, not only offers increased information capacity but also exhibits improved resilience against noise and errors. Experimental implementations have successfully showcased the potential of high-dimensional quantum systems to efficiently encode complex quantum circuits, further highlighting their promise for the future of quantum computing. In this thesis, the potential of qutrits to enhance machine learning tasks in quantum computing is explored. The expanded state space offered by qutrits enables richer data representation, capturing intricate patterns and relationships. To this end, employing the mathematical framework of SU(3), the Gell-Mann feature map is introduced to encode information within an 8-dimensional space. This empowers quantum computing systems to process and represent larger amounts of data within a single qutrit. The primary focus of this thesis is classification tasks using qutrits, for which a comparative analysis is conducted between the proposed Gell-Mann feature map, well-established qubit feature maps, and classical machine learning models. Furthermore, optimization techniques within expanded Hilbert spaces are explored, addressing challenges such as vanishing gradients and barren plateau landscapes. This work covers foundational concepts and principles in quantum computing and machine learning to ensure a solid understanding of the subject, and it highlights recent advancements in quantum hardware, with a specific focus on qutrit-based systems. The main objective is to examine the feasibility of the Gell-Mann encoding for multiclass classification in the SU(3) space, demonstrate the viability of expanded Hilbert spaces for machine learning tasks, and establish a robust foundation for working with geometric feature maps. By delving into the design considerations and experimental setups in detail, this research aims to contribute to a broader understanding of the capabilities and limitations of qutrit-based systems in the context of quantum machine learning, contributing to the advancement of quantum computing and its applications in practical domains
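
    A minimal sketch of a Gell-Mann feature map of the kind described, assuming the common construction U(x) = exp(-i Σ_j x_j λ_j) applied to a fixed qutrit state, with a fidelity-style kernel on top; the thesis's exact circuit and encoding conventions may differ.

```python
import numpy as np
from scipy.linalg import expm

# The eight Gell-Mann matrices, the generators of SU(3).
l1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex)
l2 = np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]])
l3 = np.array([[1, 0, 0], [0, -1, 0], [0, 0, 0]], dtype=complex)
l4 = np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]], dtype=complex)
l5 = np.array([[0, 0, -1j], [0, 0, 0], [1j, 0, 0]])
l6 = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=complex)
l7 = np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]])
l8 = np.diag([1, 1, -2]).astype(complex) / np.sqrt(3)
GELL_MANN = [l1, l2, l3, l4, l5, l6, l7, l8]

def gell_mann_feature_map(x):
    """Encode up to 8 features into a qutrit: |psi(x)> = exp(-i sum_j x_j l_j)|0>."""
    H = sum(xj * lj for xj, lj in zip(x, GELL_MANN))
    U = expm(-1j * H)
    psi0 = np.array([1, 0, 0], dtype=complex)
    return U @ psi0

def fidelity_kernel(x, y):
    """Fidelity-style quantum kernel between two encoded samples."""
    return abs(np.vdot(gell_mann_feature_map(x), gell_mann_feature_map(y))) ** 2

print(fidelity_kernel(np.linspace(0, 1, 8), np.linspace(1, 0, 8)))
```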

    Advanced Techniques for Ground Penetrating Radar Imaging

    Ground penetrating radar (GPR) has become one of the key technologies in subsurface sensing and, in general, in non-destructive testing (NDT), since it is able to detect both metallic and nonmetallic targets. GPR for NDT has been successfully introduced in a wide range of sectors, such as mining and geology, glaciology, civil engineering and civil works, archaeology, and security and defense. In recent decades, improvements in georeferencing and positioning systems have enabled the introduction of synthetic aperture radar (SAR) techniques in GPR systems, yielding GPR–SAR systems capable of providing high-resolution microwave images. In parallel, the radiofrequency front-end of GPR systems has been optimized in terms of compactness (e.g., smaller Tx/Rx antennas) and cost. These advances, combined with improvements in autonomous platforms, such as unmanned terrestrial and aerial vehicles, have fostered new fields of application for GPR, where fast and reliable detection capabilities are demanded. In addition, processing techniques have been improved, taking advantage of the research conducted in related fields like inverse scattering and imaging. As a result, novel and robust algorithms have been developed for clutter reduction, automatic target recognition, and efficient processing of large sets of measurements to enable real-time imaging, among others. This Special Issue provides an overview of the state of the art in GPR imaging, focusing on the latest advances from both hardware and software perspectives
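
    A hedged sketch of the delay-and-sum backprojection at the heart of GPR–SAR imaging: each image pixel accumulates the recorded traces at the corresponding two-way travel time. The constant-velocity, common-offset geometry and all array sizes are simplifying assumptions for illustration.

```python
import numpy as np

def backprojection(traces, ant_x, t, x_grid, z_grid, v=1.0e8):
    """Delay-and-sum SAR backprojection for a common-offset GPR B-scan.
    traces[i] is the A-scan recorded at antenna position ant_x[i];
    v is the assumed propagation velocity in the medium (m/s)."""
    dt = t[1] - t[0]
    image = np.zeros((len(z_grid), len(x_grid)))
    for i, xa in enumerate(ant_x):
        for ix, xp in enumerate(x_grid):
            r = np.sqrt((xp - xa) ** 2 + z_grid ** 2)    # one-way path per depth
            idx = np.rint(2.0 * r / v / dt).astype(int)  # two-way travel-time bin
            valid = idx < len(t)
            image[valid, ix] += traces[i][idx[valid]]
    return image

# Toy usage with synthetic data (hypothetical sizes and sampling).
rng = np.random.default_rng(2)
t = np.arange(0, 2e-7, 1e-9)
ant_x = np.linspace(0, 1.0, 21)
traces = rng.normal(size=(21, len(t)))
img = backprojection(traces, ant_x, t,
                     np.linspace(0, 1, 50), np.linspace(0.05, 1, 40))
```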