
    Automated thematic mapping and change detection of ERTS-A images

    The author has identified the following significant results. For the recognition of terrain types, spatial signatures are developed from the diffraction patterns of small areas of ERTS-1 images. This knowledge is exploited to measure a small number of meaningful spatial features from the digital Fourier transforms of ERTS-1 image cells containing 32 x 32 picture elements. Using these spatial features and a heuristic algorithm, the terrain types in the vicinity of Phoenix, Arizona were recognized by the computer with high accuracy. When the spatial features were then combined with spectral features under the maximum likelihood criterion, the recognition accuracy of terrain types increased substantially. It was determined that the recognition accuracy with the maximum likelihood criterion depends on the statistics of the feature vectors: nonlinear transformations of the feature vectors are required so that the terrain class statistics become approximately Gaussian. It was also determined that, for a given geographic area, the class statistics remain stable over a period of a month but vary substantially between seasons.
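
    As a rough illustration of the kind of pipeline described above, the sketch below computes simple spatial features from the 2-D Fourier transform of a 32 x 32 image cell and classifies them with a Gaussian maximum-likelihood rule. The ring-energy features, the synthetic class statistics, and the log transformation are illustrative assumptions; the original feature definitions and heuristic algorithm are not reproduced here.

```python
import numpy as np

def spatial_features(cell, n_rings=4):
    """Fourier energy in concentric frequency rings of a 32x32 cell
    (an illustrative choice of spatial signature, not the paper's)."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(cell))) ** 2
    h, w = spectrum.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2)
    edges = np.linspace(0, r.max() + 1e-9, n_rings + 1)
    return np.array([spectrum[(r >= lo) & (r < hi)].mean()
                     for lo, hi in zip(edges[:-1], edges[1:])])

def gaussian_ml_classify(x, class_stats):
    """Assign x to the class with the highest Gaussian log-likelihood."""
    best_label, best_ll = None, -np.inf
    for label, (mean, cov) in class_stats.items():
        diff = x - mean
        ll = -0.5 * (diff @ np.linalg.solve(cov, diff)
                     + np.log(np.linalg.det(cov)))
        if ll > best_ll:
            best_label, best_ll = label, ll
    return best_label

# Hypothetical terrain classes with synthetic feature statistics.
class_stats = {
    "desert":   (np.array([1.0, 0.5, 0.2, 0.1]), np.eye(4) * 0.05),
    "cropland": (np.array([0.8, 0.9, 0.6, 0.3]), np.eye(4) * 0.05),
}

cell = np.random.default_rng(0).random((32, 32))
# Log-transforming the raw ring energies is one simple nonlinear
# transformation that tends to make class statistics closer to Gaussian.
x = np.log(spatial_features(cell) + 1e-12)
print(gaussian_ml_classify(x, class_stats))
```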

    Entanglement Detection in the Stabilizer Formalism

    We investigate how stabilizer theory can be used for constructing sufficient conditions for entanglement. First, we show how entanglement witnesses can be derived for a given state, provided some stabilizing operators of the state are known. These witnesses require only a small effort for an experimental implementation and are robust against noise. Second, we demonstrate that nonlinear criteria based on uncertainty relations can also be derived from stabilizing operators. These criteria can sometimes improve the witnesses by adding nonlinear correction terms. All our criteria detect states close to Greenberger-Horne-Zeilinger states, cluster states, and graph states. We show that similar ideas can be used to derive entanglement conditions for states which do not fit the stabilizer formalism, such as the three-qubit W state. We also discuss connections between the witnesses and some Bell inequalities. Comment: 15 pages including 2 figures, revtex4; typos corrected, presentation improved; to appear in PR
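
    As a small illustration of witness-based detection (not the specific stabilizer witnesses constructed in the paper), the sketch below uses the standard projector witness W = (1/2)I - |GHZ><GHZ| for the three-qubit GHZ state: a negative expectation value Tr(W rho) certifies entanglement of states sufficiently close to the GHZ state.

```python
import numpy as np

# |GHZ> = (|000> + |111>) / sqrt(2) as an 8-dimensional vector.
ghz = np.zeros(8)
ghz[0] = ghz[7] = 1 / np.sqrt(2)
proj = np.outer(ghz, ghz)

# Projector-based witness: Tr(W rho) < 0 certifies entanglement.
witness = 0.5 * np.eye(8) - proj

def expectation(rho, op):
    return np.real(np.trace(rho @ op))

# GHZ state mixed with white noise: rho = p * |GHZ><GHZ| + (1 - p) * I/8.
for p in (1.0, 0.6, 0.3):
    rho = p * proj + (1 - p) * np.eye(8) / 8
    val = expectation(rho, witness)
    print(f"p = {p:.1f}: Tr(W rho) = {val:+.3f}",
          "-> entanglement detected" if val < 0 else "-> inconclusive")
```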

    A CASE STUDY ON SUPPORT VECTOR MACHINES VERSUS ARTIFICIAL NEURAL NETWORKS

    The capability of artificial neural networks for pattern recognition of real world problems is well known. In recent years, the support vector machine has been advocated for its structural risk minimization, leading to tolerance margins of decision boundaries. Structures and performances of these pattern classifiers depend on the feature dimension and training data size. The objective of this research is to compare these pattern recognition systems based on a case study. The particular case considered is the classification of hypertensive and normotensive right ventricle (RV) shapes obtained from Magnetic Resonance Image (MRI) sequences. In this case, the feature dimension is reasonable and the available training data set is small, but the decision surface is highly nonlinear. For diagnosis of congenital heart defects, especially those associated with pressure and volume overload problems, a reliable pattern classifier for determining right ventricle function is needed. The RV's global and regional surface-to-volume ratios are assessed from an individual's MRI heart images and used as features for the pattern classifiers. We first considered two linear classification methods: the Fisher linear discriminant and the linear classifier trained by the Ho-Kashyap algorithm. When the data are not linearly separable, artificial neural networks with back-propagation training and radial basis function networks were considered, providing nonlinear decision surfaces. Thirdly, a support vector machine was trained, which gives tolerance margins on both sides of the decision surface. We found in this case study that the back-propagation training of an artificial neural network depends heavily on the selection of initial weights, even when randomized. The support vector machine with radial basis function kernels is easily trained and provides decision tolerance margins, albeit only small ones.
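
    The sketch below mirrors this kind of comparison on a synthetic two-class problem (scikit-learn's make_moons) rather than the RV surface-to-volume features, which are not available here; the model choices and hyperparameters are illustrative.

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic, nonlinearly separable two-class data.
X, y = make_moons(n_samples=200, noise=0.25, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# RBF-kernel SVM: the soft margin (regularized by C) gives a tolerance
# band on both sides of the nonlinear decision surface.
svm = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)

# Back-propagation-trained network: results vary with the random
# initialization of the weights, as the abstract notes.
mlp = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)

print("SVM (RBF) accuracy:", svm.score(X_te, y_te))
print("MLP accuracy:      ", mlp.score(X_te, y_te))
```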

    On the role of distance transformations in Baddeley’s Delta Metric

    Comparison and similarity measurement have been a key topic in computer vision for a long time. There is, indeed, an extensive list of algorithms and measures for image or subimage comparison. The superiority or inferiority of different measures is hard to scrutinize, especially considering the dimensionality of their parameter space and their many different configurations. In this work, we focus on the comparison of binary images and study different variations of Baddeley's Delta Metric, a popular metric for such images. We study the possible parameterizations of the metric, stressing the numerical and behavioural impact of different settings. Specifically, we consider the parameter settings proposed by the original author, as well as the substitution of distance transformations by regularized distance transformations, as recently presented by Brunet and Sills. We take a qualitative perspective on the effects of the settings, and also perform quantitative experiments on separability of datasets for boundary evaluation. The authors gratefully acknowledge the financial support by the Spanish Ministry of Science (project PID2019-108392GB-I00 AEI/FEDER, UE), as well as that by Navarra Servicios y Tecnologías S.A. (NASERTIC).
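
    A minimal sketch of Baddeley's Delta Metric for binary images is given below, using the exact Euclidean distance transform and the cutoff transformation w(t) = min(t, c); the regularized distance transforms discussed in the paper would replace distance_transform_edt in this sketch. The parameter values and test shapes are illustrative.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def baddeley_delta(a, b, p=2, c=10.0):
    """Delta_p between two binary images of equal shape (True = foreground),
    with the cutoff transformation w(t) = min(t, c)."""
    if not a.any() or not b.any():
        raise ValueError("both images need at least one foreground pixel")
    # Distance from every pixel to the nearest foreground pixel of each image.
    da = distance_transform_edt(~a)
    db = distance_transform_edt(~b)
    wa, wb = np.minimum(da, c), np.minimum(db, c)
    return (np.abs(wa - wb) ** p).mean() ** (1.0 / p)

# Two small test shapes: a square and the same square shifted by two pixels.
img1 = np.zeros((32, 32), bool); img1[8:20, 8:20] = True
img2 = np.zeros((32, 32), bool); img2[10:22, 10:22] = True
print(baddeley_delta(img1, img2))
```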