
    Transforming Bell's Inequalities into State Classifiers with Machine Learning

    Quantum information science has profoundly changed the ways we understand, store, and process information. A major challenge in this field is to find an efficient means of classifying quantum states. For instance, one may want to determine whether a given quantum state is entangled or not. However, a complete characterization of a quantum state, known as quantum state tomography, is in general a resource-consuming operation. An attractive alternative is to use Bell's inequalities as an entanglement witness, which requires only partial information about the quantum state. The problem is that entanglement is necessary but not sufficient for violating Bell's inequalities, making them an unreliable state classifier. Here we aim to solve this problem with machine learning. More precisely, given a family of quantum states, we randomly pick a subset of it to construct a quantum-state classifier that accepts only partial information about each quantum state. Our results indicate that these transformed Bell-type inequalities can perform significantly better than the original Bell's inequalities in classifying entangled states. We further extend our analysis to three-qubit and four-qubit systems, performing classification of quantum states into multiple species. These results demonstrate how tools from machine learning can be applied to solving problems in quantum information science.
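    As a concrete illustration of the idea, the sketch below trains a linear classifier on the four CHSH correlators of two-qubit Werner states, whose entanglement threshold (p > 1/3) is known analytically, so exact labels are available. It is a minimal stand-in for the paper's approach, not the authors' code; the state family, measurement settings, and classifier choice are assumptions.

```python
# Minimal sketch: classify Werner states as entangled/separable from
# CHSH correlators only (partial information), and compare against the
# raw CHSH witness |S| > 2.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

# Pauli matrices and the |Phi+> Bell-state projector.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
P_phi = np.outer(phi_plus, phi_plus.conj())

def werner(p):
    """Werner state: entangled iff p > 1/3."""
    return p * P_phi + (1 - p) * np.eye(4) / 4

# Standard CHSH measurement settings: the classifier only ever sees
# these four correlators, i.e. partial information about the state.
A = [sz, sx]
B = [(sz + sx) / np.sqrt(2), (sz - sx) / np.sqrt(2)]

def correlators(rho):
    return [np.real(np.trace(rho @ np.kron(a, b))) for a in A for b in B]

ps = np.random.uniform(0.0, 1.0, 2000)
X = np.array([correlators(werner(p)) for p in ps])
y = (ps > 1 / 3).astype(int)                      # ground-truth entanglement label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LinearSVC(C=1.0).fit(X_tr, y_tr)
print("learned-classifier accuracy:", clf.score(X_te, y_te))

# The raw CHSH witness misses entangled Werner states with 1/3 < p <= 1/sqrt(2),
# so its accuracy on the same test set is markedly lower.
S = np.abs(X_te[:, 0] + X_te[:, 1] + X_te[:, 2] - X_te[:, 3])
print("CHSH-witness accuracy:", np.mean((S > 2).astype(int) == y_te))
```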

    Kernel combination via debiased object correspondence analysis

    This paper addresses the problem of combining multi-modal kernels when object correspondence information is unavailable between modalities, for instance where feature values are missing or when proprietary databases are used in multi-modal biometrics. The method thus seeks to recover inter-modality kernel information so that classifiers can be built within a composite embedding space. This is achieved through a principled group-wise identification of objects within the differing modal kernel matrices, forming a composite kernel matrix that retains the full freedom of linear kernel combination available in multiple kernel learning. The underlying principle is derived from the notion of tomographic reconstruction, which has been applied successfully in conventional pattern recognition. In setting out this method, we aim to improve upon object-correspondence-insensitive methods, such as kernel matrix combination via the Cartesian product of object sets, to which the method defaults when no pairwise object identifications are discovered. We benchmark the method against the augmented kernel method, an order-insensitive approach derived from the direct sum of the constituent kernel matrices, and also against straightforward additive kernel combination where the correspondence information is given a priori. We find that the proposed method gives rise to substantial performance improvements.
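    For orientation, the sketch below shows the simplest of the baselines mentioned above: fixed-weight additive combination of per-modality kernel matrices when object correspondence is given a priori, fed to an SVM with a precomputed kernel. The data, kernel parameters, and weights are synthetic placeholders, not the paper's setup.

```python
# Illustrative baseline only (not the paper's debiased-correspondence method):
# additive combination of two modality kernels with known row alignment.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, n)
# Two "modalities" describing the same objects; rows are aligned,
# i.e. the object correspondence is known a priori here.
X1 = rng.normal(size=(n, 10)) + y[:, None]        # modality-1 features
X2 = rng.normal(size=(n, 5)) + 0.5 * y[:, None]   # modality-2 features

K1 = rbf_kernel(X1, gamma=0.1)
K2 = rbf_kernel(X2, gamma=0.2)

# Additive combination with fixed weights; multiple kernel learning
# would instead optimise these weights.
w1, w2 = 0.5, 0.5
K = w1 * K1 + w2 * K2

tr = np.arange(n) < 150                           # simple train/test split
clf = SVC(kernel="precomputed").fit(K[np.ix_(tr, tr)], y[tr])
print("additive-kernel accuracy:", clf.score(K[np.ix_(~tr, tr)], y[~tr]))
```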

    Interactive volumetric segmentation for textile micro-tomography data using wavelets and nonlocal means

    This work addresses the segmentation of volumetric images of woven carbon fiber textiles from micro-tomography data. We propose a semi-supervised algorithm to classify carbon fibers that requires sparse input rather than completely labeled images. The main contributions are: (a) the design of effective discriminative classifiers for three-dimensional textile samples, trained on wavelet features, for segmentation; (b) the coupling of the previous step with nonlocal means as a simple, efficient alternative to the Potts model; and (c) a demonstration of the reuse of the classifier on diverse samples containing similar content. We evaluate our work by curating test sets of voxels in the absence of a complete ground truth mask. The algorithm obtains an average F1 score of 0.95 on the test sets and an average F1 score of 0.93 on new samples. We conclude with a discussion of failure cases and propose future directions toward the analysis of spatiotemporal high-resolution micro-tomography images.
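    A minimal 2-D sketch of this pipeline is given below: per-pixel wavelet features, a discriminative classifier trained on a handful of sparse labels, and nonlocal means applied to the resulting probability map in place of a Potts-model regulariser. The data, wavelet, and parameters are illustrative assumptions; the real inputs are 3-D micro-tomography volumes.

```python
# Hedged 2-D stand-in for the semi-supervised segmentation pipeline.
import numpy as np
import pywt
from sklearn.ensemble import RandomForestClassifier
from skimage.restoration import denoise_nl_means

rng = np.random.default_rng(0)
img = rng.normal(size=(128, 128))
img[32:96, 32:96] += 2.0                          # synthetic "fibre" region

# Stationary wavelet transform: detail sub-bands stay at full resolution,
# so each pixel gets one feature per sub-band plus its raw intensity.
coeffs = pywt.swt2(img, wavelet="db2", level=2)   # [(cA, (cH, cV, cD)), ...]
detail_imgs = [np.abs(d) for (_, details) in coeffs for d in details]
feats = np.stack([img] + detail_imgs, axis=-1)
n_feat = feats.shape[-1]

# Sparse input: a few annotated pixels instead of a fully labeled image.
fg = [(64, 64), (50, 50), (80, 70)]
bg = [(5, 5), (10, 120), (120, 10)]
X = np.array([feats[r, c] for r, c in fg + bg])
y = np.array([1] * len(fg) + [0] * len(bg))

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
prob = clf.predict_proba(feats.reshape(-1, n_feat))[:, 1].reshape(img.shape)

# Nonlocal means as a cheap alternative to Potts-model regularisation.
prob_smooth = denoise_nl_means(prob, patch_size=5, patch_distance=7, h=0.1)
mask = prob_smooth > 0.5
print("segmented fraction:", mask.mean())
```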

    Applications of pattern classification to time-domain signals

    Many different kinds of physics are used in sensors that produce time-domain signals, such as ultrasonics, acoustics, seismology, and electromagnetics. The waveforms generated by these sensors are used to measure events or detect flaws in applications ranging from industrial to medical and defense-related domains. Interpreting the signals is challenging because of the complicated physics of the interaction of the fields with the materials and structures under study. Often the method of interpreting the signal varies by application, but automatic detection of events in signals is always useful in order to obtain results quickly with less human error. One method of automatic interpretation of data is pattern classification, a statistical method that assigns predicted labels to raw data associated with known categories. In this work, we use pattern classification techniques to aid the automatic detection of events in signals, using features extracted by a particular application of the wavelet transform, the Dynamic Wavelet Fingerprint (DWFP), as well as features selected through physical interpretation of the individual applications. The wavelet feature extraction method is general for any time-domain signal, and the classification results can be improved by features drawn from the particular domain. The success of this technique is demonstrated through four applications: the development of an ultrasonographic periodontal probe, the identification of flaw type in Lamb wave tomographic scans of an aluminum pipe, the prediction of roof falls in a limestone mine, and the automatic identification of individual Radio Frequency Identification (RFID) tags regardless of their programmed codes. The method has been shown to achieve high accuracy, sometimes as high as 98%.
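    The sketch below is a generic stand-in for this workflow (it does not reproduce the DWFP itself): time-scale features are extracted from synthetic 1-D signals with a continuous wavelet transform and fed to a classifier that detects whether a signal contains a short burst. The signal model, wavelet, summary features, and classifier are assumptions.

```python
# Generic wavelet-feature event detection on synthetic time-domain signals.
import numpy as np
import pywt
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_sig, n_samp = 200, 256

def make_signal(has_event):
    """Noise, optionally with a short windowed sinusoidal burst ("event")."""
    s = rng.normal(scale=0.5, size=n_samp)
    if has_event:
        t0 = rng.integers(50, 200)
        t = np.arange(40)
        s[t0:t0 + 40] += np.sin(2 * np.pi * t / 8) * np.hanning(40)
    return s

y = rng.integers(0, 2, n_sig)
signals = np.array([make_signal(lbl) for lbl in y])

scales = np.arange(1, 33)
def features(sig):
    coefs, _ = pywt.cwt(sig, scales, "morl")      # continuous wavelet transform
    # Crude per-scale summary features; the DWFP instead slices the
    # time-scale surface into a binary "fingerprint" image.
    return np.concatenate([np.abs(coefs).mean(axis=1), np.abs(coefs).max(axis=1)])

X = np.array([features(s) for s in signals])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```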

    Supervised machine learning based multi-task artificial intelligence classification of retinopathies

    Artificial intelligence (AI) classification holds promise as a novel and affordable screening tool for the clinical management of ocular diseases. Rural and underserved areas, which suffer from a lack of access to experienced ophthalmologists, may particularly benefit from this technology. Quantitative optical coherence tomography angiography (OCTA) imaging provides excellent capability to identify subtle vascular distortions, which are useful for classifying retinovascular diseases. However, the application of AI for the differentiation and classification of multiple eye diseases is not yet established. In this study, we demonstrate supervised machine learning based multi-task OCTA classification. We sought 1) to differentiate normal from diseased ocular conditions, 2) to differentiate different ocular disease conditions from each other, and 3) to stage the severity of each ocular condition. Quantitative OCTA features, including blood vessel tortuosity (BVT), blood vascular caliber (BVC), vessel perimeter index (VPI), blood vessel density (BVD), foveal avascular zone (FAZ) area (FAZ-A), and FAZ contour irregularity (FAZ-CI), were automatically extracted from the OCTA images. A stepwise backward elimination approach was employed to identify sensitive OCTA features and optimal feature combinations for the multi-task classification. For a proof-of-concept demonstration, diabetic retinopathy (DR) and sickle cell retinopathy (SCR) were used to validate the supervised machine learning classifier. The presented AI classification methodology is applicable and can be readily extended to other ocular diseases, holding promise to enable a mass-screening platform for clinical deployment and telemedicine. Comment: Supplemental material attached at the end.
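    A hedged sketch of the stepwise backward-elimination step is given below. The six feature names mirror those listed in the abstract, but the data are synthetic and the classifier (a linear SVM) and cross-validated scoring are illustrative assumptions rather than the authors' pipeline.

```python
# Stepwise backward feature elimination on synthetic OCTA-style features.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
names = ["BVT", "BVC", "VPI", "BVD", "FAZ-A", "FAZ-CI"]
n = 120
y = rng.integers(0, 2, n)                         # e.g. control vs. DR (synthetic)
X = rng.normal(size=(n, len(names)))
X[:, 3] += 1.5 * y                                # make BVD informative
X[:, 4] += 1.0 * y                                # make FAZ-A informative

def cv_score(cols):
    """Cross-validated accuracy using only the selected feature columns."""
    return cross_val_score(SVC(kernel="linear"), X[:, cols], y, cv=5).mean()

# Backward elimination: repeatedly drop the feature whose removal hurts
# accuracy the least, stopping once any removal would lower the score.
selected = list(range(len(names)))
best = cv_score(selected)
while len(selected) > 1:
    trials = [(cv_score([c for c in selected if c != d]), d) for d in selected]
    score, drop = max(trials)
    if score < best:
        break
    best, selected = score, [c for c in selected if c != drop]

print("selected features:", [names[c] for c in selected], "accuracy:", best)
```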