30 research outputs found

    Optimal linear and nonlinear feature extraction based on the minimization of the increased risk of misclassification

    General classes of linear and nonlinear transformations were investigated for reducing the dimensionality of the classification (feature) space so that, for a prescribed dimension m of that space, the increase in the risk of misclassification is minimized.

    Classification by means of B-spline potential functions with applications to remote sensing

    A method is presented for using B-splines as potential functions in the estimation of likelihood functions (probability density functions conditioned on pattern classes) or of the resulting discriminant functions. The consistency of this technique is discussed. Experimental results of using the likelihood functions in the classification of remotely sensed data are given.
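The potential-function idea above can be sketched in a few lines: place a B-spline kernel at each training sample of a class and average, giving an estimate of the class-conditional density. This is a minimal one-dimensional illustration, not the paper's method; the kernel choice (cubic B-spline), the bandwidth parameter h, and the function names are assumptions for the sketch.

```python
def cubic_bspline(t):
    """Cubic B-spline kernel B3(t): nonnegative, integrates to 1,
    and has compact support |t| < 2."""
    a = abs(t)
    if a < 1.0:
        return 2.0 / 3.0 - a * a + a ** 3 / 2.0
    if a < 2.0:
        return (2.0 - a) ** 3 / 6.0
    return 0.0

def likelihood_estimate(x, samples, h=0.5):
    """Potential-function (kernel) estimate of the class-conditional
    density at x, built from the training samples of one pattern class.
    h is an assumed smoothing bandwidth."""
    n = len(samples)
    return sum(cubic_bspline((x - s) / h) for s in samples) / (n * h)
```

A classifier would then assign x to the class whose estimated likelihood (times its prior) is largest.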

    A technique for 3-D robot vision for space applications

    An extension of the MIAG algorithm for recognition and motion-parameter determination of general 3-D polyhedral objects, based on model-matching techniques and using moment invariants as features of the object representation, is discussed. Results of tests conducted on the algorithm under simulated space conditions are presented.
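To illustrate what a moment invariant is, here is the lowest-order Hu-style invariant computed for a 2-D point set: it is unchanged by translation and in-plane rotation. This is only a toy illustration of the feature type; the MIAG algorithm itself works with a fuller invariant set on 3-D polyhedral models, and the function name is an assumption.

```python
def hu_phi1(points):
    """First Hu-style moment invariant, phi1 = eta20 + eta02, for a 2-D
    point set; invariant to translation and in-plane rotation of the set."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    # central second-order moments about the centroid
    mu20 = sum((x - cx) ** 2 for x, _ in points)
    mu02 = sum((y - cy) ** 2 for _, y in points)
    # normalize by mu00^2 (mu00 = number of points for a point set)
    return (mu20 + mu02) / float(n) ** 2
```

Because phi1 is just the mean squared distance to the centroid (normalized), rotating or translating the object leaves it unchanged, which is exactly what makes it usable as a pose-independent feature.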

    Classification improvement by optimal dimensionality reduction when training sets are of small size

    A computer simulation was performed to test the conjecture that, when the sizes of the training sets are small, classification in a subspace of the original data space may give rise to a smaller probability of error than classification in the data space itself; this is because the gain in the accuracy of estimation of the likelihood functions used for classification in the lower-dimensional space (subspace) offsets the loss of information associated with dimensionality reduction (feature extraction). A number of pseudo-random training and data vectors were generated from two four-dimensional Gaussian classes. A special algorithm was used to create an optimal one-dimensional feature space onto which to project the data. When the sizes of the training sets are small, classification of the data in the optimal one-dimensional space is found to yield lower error rates than classification in the original four-dimensional space.
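The simulation above can be sketched as follows: estimate a 1-D projection direction from small training sets, then classify test points by the nearest projected class mean. As a stand-in for the paper's special optimal-feature algorithm, this sketch uses a simple diagonal-covariance Fisher-style direction; all function names and the classifier rule are assumptions.

```python
import random

def diag_fisher_direction(a, b):
    """Per-coordinate Fisher-style projection direction for two classes of
    d-dimensional samples -- a simple stand-in for the paper's optimal
    feature-extraction algorithm."""
    d = len(a[0])
    w = []
    for j in range(d):
        ma = sum(x[j] for x in a) / len(a)
        mb = sum(x[j] for x in b) / len(b)
        va = sum((x[j] - ma) ** 2 for x in a) / len(a)
        vb = sum((x[j] - mb) ** 2 for x in b) / len(b)
        w.append((ma - mb) / (va + vb + 1e-9))  # mean gap over pooled spread
    return w

def error_rate(train_a, train_b, test_a, test_b, w):
    """Error of a nearest-projected-mean classifier in the 1-D feature space."""
    proj = lambda x: sum(xi * wi for xi, wi in zip(x, w))
    ma = sum(proj(x) for x in train_a) / len(train_a)
    mb = sum(proj(x) for x in train_b) / len(train_b)
    wrong = sum(abs(proj(x) - ma) > abs(proj(x) - mb) for x in test_a)
    wrong += sum(abs(proj(x) - mb) >= abs(proj(x) - ma) for x in test_b)
    return wrong / (len(test_a) + len(test_b))
```

Repeating this with small training sets, and comparing against the same classifier run in the full four-dimensional space, is the shape of the experiment the abstract describes.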

    An algorithm for optimal single linear feature extraction from several Gaussian pattern classes

    A computational algorithm is presented for the extraction of an optimal single linear feature from several Gaussian pattern classes. The algorithm minimizes the increase in the probability of misclassification in the transformed (feature) space. Numerical results on the application of this procedure to remotely sensed data from the Purdue C1 flight line as well as LANDSAT data are presented. It was found that classification using the optimal single linear feature yielded a probability of misclassification on the order of 30% less than that obtained by using the best single untransformed feature. Also, the optimal single linear feature gave performance results comparable to those obtained by using the two features that maximized the average divergence.
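The objective being minimized can be made concrete in a toy setting: for two equal-prior 2-D Gaussian classes with diagonal covariances, grid-search the projection direction that minimizes the misclassification probability of the projected 1-D problem. This is an assumed illustrative reduction, not the paper's algorithm (which handles several classes and general covariances); the midpoint-threshold error formula is exact only when the projected variances are equal.

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def proj_error(theta, m0, v0, m1, v1):
    """Equal-prior misclassification estimate after projecting two 2-D
    Gaussians (means m0, m1; diagonal covariances v0, v1) onto the unit
    direction at angle theta, thresholding midway between projected means."""
    w = (math.cos(theta), math.sin(theta))
    mu0 = w[0] * m0[0] + w[1] * m0[1]
    mu1 = w[0] * m1[0] + w[1] * m1[1]
    s0 = math.sqrt(w[0] ** 2 * v0[0] + w[1] ** 2 * v0[1])
    s1 = math.sqrt(w[0] ** 2 * v1[0] + w[1] ** 2 * v1[1])
    if mu0 > mu1:  # order the projected means so class 0 lies to the left
        mu0, mu1, s0, s1 = mu1, mu0, s1, s0
    t = (mu0 + mu1) / 2.0
    return 0.5 * (1.0 - norm_cdf((t - mu0) / s0)) + 0.5 * norm_cdf((t - mu1) / s1)

def best_feature(m0, v0, m1, v1, steps=360):
    """Grid search over direction for the single linear feature that
    minimizes the (approximate) probability of misclassification."""
    return min((proj_error(k * math.pi / steps, m0, v0, m1, v1),
                k * math.pi / steps) for k in range(steps))
```

For well-separated classes the search recovers the direction along the mean difference, and any direction orthogonal to it degrades toward chance-level error.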

    A smart telerobotic system driven by monocular vision

    A robotic system that accepts autonomously generated motion and control commands is described. The system operates on images from the monocular vision of a camera mounted on the robot's end effector, eliminating the need for traditional guidance targets that must be predetermined and specifically identified. The telerobotic vision system presents different views of the targeted object relative to the camera, based on a single camera image and knowledge of the target's solid geometry.

    A Framework for Inverse Queries in Learning Problems

    This report discusses the problem of inverting the nonlinear functions that arise in solutions to statistical learning problems. Such problems arise when inverse queries need to be answered. To provide background for the kinds of inverse queries that can arise in learning, we first briefly present, as examples of the forward problem, the three most common learning problems arising in applications, viz. regression estimation, density estimation and pattern recognition. These learning problems are usually formulated in terms of the estimation of a nonlinear learning function f from a selected class of functions using given training data. Many approaches to this estimation problem give rise to a common general form of f. Once estimated, this f then constitutes a solution to the forward learning problem. Inverse queries are then presented as questions asked of this f that can be formulated as an inverse problem, viz. to find the solution set {x | f(x) = y_0} of the equation y_0 = f(x). The i..
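One elementary way to answer such an inverse query, when the learned f is continuous and monotone on an interval, is bisection on f(x) = y_0. This is a sketch of the inverse-query idea under those assumptions, not the report's framework, which addresses the general (non-monotone, set-valued) case.

```python
def invert_monotone(f, y0, lo, hi, tol=1e-10):
    """Solve f(x) = y0 by bisection on [lo, hi], assuming f is continuous
    and monotone there -- one elementary way to answer an inverse query
    against a fitted learning function f."""
    flo, fhi = f(lo), f(hi)
    if (flo - y0) * (fhi - y0) > 0:
        raise ValueError("y0 is not bracketed by f on [lo, hi]")
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if (f(mid) - y0) * (flo - y0) <= 0:
            hi = mid            # root lies in the left half
        else:
            lo, flo = mid, f(mid)  # root lies in the right half
    return (lo + hi) / 2.0
```

When f is not monotone, the solution set {x | f(x) = y_0} may contain several points, and a practical scheme would bracket and bisect each sign change separately.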