
    Radar-based Feature Design and Multiclass Classification for Road User Recognition

    The classification of individual traffic participants is a complex task, especially in challenging scenarios with multiple road users or under bad weather conditions. Radar sensors offer a way of measuring such scenes that is orthogonal to well-established camera systems. To obtain accurate classification results, 50 different features are extracted from the measurement data and evaluated for their performance. From these features a suitable subset is chosen and passed to random forest and long short-term memory (LSTM) classifiers to obtain class predictions for the radar input. Moreover, it is shown why data imbalance is an inherent problem in automotive radar classification when the dataset is not sufficiently large. To overcome this issue, classifier binarization is used among other techniques in order to better account for underrepresented classes. A new method to couple the resulting probabilities is proposed and compared to others with great success. Final results show substantial improvements over ordinary multiclass classification.
    Comment: 8 pages, 6 figures
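
    The abstract does not spell out the proposed coupling method, so the sketch below only illustrates the general idea of classifier binarization: one-vs-rest random forests on synthetic, imbalanced multiclass data, with the binary probabilities coupled by simple renormalization as a stand-in for the paper's coupling scheme. All data and parameters are hypothetical.

        # Illustrative sketch of classifier binarization (one-vs-rest) with
        # random forests, followed by a simple coupling of the binary
        # probabilities. Synthetic placeholder data; not the paper's method.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        X, y = make_classification(n_samples=2000, n_features=50,
                                   n_informative=20, n_classes=4,
                                   weights=[0.55, 0.25, 0.15, 0.05],
                                   random_state=0)  # imbalanced, like radar data
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

        classes = np.unique(y_tr)
        binary_models = []
        for c in classes:
            # One binary "class c vs. rest" forest; class_weight compensates
            # for the underrepresented classes mentioned in the abstract.
            clf = RandomForestClassifier(n_estimators=200,
                                         class_weight="balanced", random_state=0)
            clf.fit(X_tr, (y_tr == c).astype(int))
            binary_models.append(clf)

        # Couple the binary outputs: stack P(class c | x) and renormalize.
        # (A deliberately naive coupling; the paper proposes a better one.)
        probs = np.column_stack([m.predict_proba(X_te)[:, 1] for m in binary_models])
        probs /= probs.sum(axis=1, keepdims=True)
        y_pred = classes[probs.argmax(axis=1)]
        print("accuracy:", (y_pred == y_te).mean())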

    Spectral Textile Detection in the VNIR/SWIR Band

    Dismount detection, the detection of persons on the ground and outside of a vehicle, has applications in search and rescue, security, and surveillance. Spatial dismount detection methods lose effectiveness at long ranges, and spectral dismount detection currently relies on detecting skin pixels. In scenarios where skin is not exposed, spectral textile detection is a more effective means of detecting dismounts. This thesis demonstrates the effectiveness of spectral textile detectors on both real and simulated hyperspectral remotely sensed data. Feature selection methods determine sets of wavebands relevant to spectral textile detection. Classifiers are trained on hyperspectral contact data with the selected wavebands, and classifier parameters are optimized to improve performance on a training set. Classifiers with optimized parameters are then used to classify contact data with artificially added noise as well as remotely sensed hyperspectral data. Performance is measured as the Area Under the Curve (AUC) of the Receiver Operating Characteristic (ROC) curve. The best performances on the contact data are AUC = 0.892 and AUC = 0.872 for Multilayer Perceptrons (MLPs) and Support Vector Machines (SVMs), respectively. The best performances on the remotely sensed data are AUC = 0.947 and AUC = 0.970 for MLPs and SVMs, respectively. The difference in classifier performance between the contact and remotely sensed data is due to the greater variety of textiles represented in the contact data. Spectral textile detection is more reliable in scenarios with a small variety of textiles.
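
    As a hedged sketch of the evaluation pipeline described above, the following trains an MLP and an SVM on a subset of selected wavebands and compares their ROC AUC. The data are synthetic stand-ins for hyperspectral textile samples, and the waveband subset is a placeholder for the thesis's feature selection step.

        # Compare MLP and SVM detectors by ROC AUC on selected "wavebands".
        # Label 1 = textile, 0 = background; all data are synthetic.
        from sklearn.datasets import make_classification
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier
        from sklearn.svm import SVC
        from sklearn.metrics import roc_auc_score

        X, y = make_classification(n_samples=1500, n_features=40,
                                   n_informative=12, random_state=1)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

        selected = list(range(0, 40, 2))   # placeholder for selected wavebands
        mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=1)
        svm = SVC(probability=True, random_state=1)

        for name, model in [("MLP", mlp), ("SVM", svm)]:
            model.fit(X_tr[:, selected], y_tr)
            scores = model.predict_proba(X_te[:, selected])[:, 1]
            print(name, "AUC =", round(roc_auc_score(y_te, scores), 3))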

    Textile Fingerprinting for Dismount Analysis in the Visible, Near, and Shortwave Infrared Domain

    The ability to accurately and quickly locate an individual, or a dismount, is useful in a variety of situations and environments. A dismount's characteristics such as gender, height, weight, build, and ethnicity could be used as discriminating factors. Hyperspectral imaging (HSI) is widely used in efforts to identify materials based on their spectral signatures. More specifically, HSI has been used for skin and clothing classification and detection. The ability to detect textiles (clothing) provides a discriminating factor that can aid in a more comprehensive detection of dismounts. This thesis demonstrates the application of several feature selection methods (i.e., support vector machines with recursive feature reduction, fast correlation-based filter) to highly dimensional data collected from a spectroradiometer. The classification of the data is accomplished with the selected features and artificial neural networks. A model for uniquely identifying (fingerprinting) textiles is designed, in which color and composition are determined in order to fingerprint a specific textile. An artificial neural network is created based on knowledge of the textile's color and composition, providing a unique fingerprint of the textile. Results show 100% accuracy for color and composition classification, and 98% accuracy for the overall textile fingerprinting process.
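
    A minimal sketch of one of the selection methods named above, read here as SVM recursive feature elimination (scikit-learn's RFE), on synthetic spectroradiometer-like data, followed by a small neural-network classifier on the retained features. The wavelength count and all parameters are invented for illustration.

        # SVM-based recursive feature elimination, then an ANN on the
        # retained features. 200 synthetic stand-in wavelengths.
        from sklearn.datasets import make_classification
        from sklearn.feature_selection import RFE
        from sklearn.svm import LinearSVC
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import cross_val_score

        X, y = make_classification(n_samples=600, n_features=200,
                                   n_informative=15, random_state=2)

        # Recursively drop the wavelengths with the smallest SVM weights.
        selector = RFE(LinearSVC(dual=False), n_features_to_select=20, step=10)
        X_sel = selector.fit_transform(X, y)

        ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=2)
        print("CV accuracy:", cross_val_score(ann, X_sel, y, cv=5).mean())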

    Fault analysis using state-of-the-art classifiers

    Fault analysis is the detection and diagnosis of malfunction in machine operation or process control. Early fault analysis techniques were reserved for highly critical plants, such as nuclear or chemical facilities, where abnormal event prevention is given utmost importance. The techniques developed were the result of decades of technical research and of models based on extensive characterization of equipment behavior; applying them requires in-depth knowledge of the system and expert analysis. Since machine learning algorithms depend on past process data to create a system model, a generic autonomous diagnostic system can be developed for use in common industrial setups. In this thesis, we look into techniques used for fault detection and diagnosis with multi-class and one-class classifiers. First, we study feature selection techniques, and classifier performance is analyzed against the number of selected features. The aim of feature selection is to reduce the impact of irrelevant variables and to reduce the computational burden on the learning algorithm. We introduce the feature selection algorithms as a literature survey; only a few algorithms are implemented to obtain the results. Fault data from a Radio Frequency (RF) generator is used to perform fault detection and diagnosis. A comparison between continuous and discrete fault data is conducted for the Support Vector Machine (SVM) and Radial Basis Function (RBF) network classifiers. In the second part we look into one-class classification techniques and their application to fault detection. One-class techniques were primarily developed to identify one class of objects from all other possible objects. Since all fault occurrences in a system cannot be simulated or recorded, one-class techniques help in identifying abnormal events. We introduce four one-class classifiers and analyze them using the Receiver Operating Characteristic (ROC) curve. We also develop a feature extraction method for the RF generator data, which is used to obtain results for the one-class classifiers and for two-class classification with the RBF network. To apply these techniques for real-time verification, the RIT Fault Prediction software is built. The LabVIEW environment is used to build basic data management and fault detection using the RBF network. This software is standalone and acts as a foundation for future implementations.
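
    The core idea of one-class fault detection, as described above, is to train only on normal operation and flag deviations. A hedged illustration follows, with the RF generator data replaced by synthetic Gaussians and a one-class SVM standing in for the thesis's four classifiers; all parameters are placeholders.

        # Train a one-class SVM on healthy data only, then score a mixed
        # test set with ROC AUC. Synthetic stand-in for the RF generator data.
        import numpy as np
        from sklearn.svm import OneClassSVM
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        normal_train = rng.normal(0.0, 1.0, size=(500, 8))   # healthy operation
        normal_test  = rng.normal(0.0, 1.0, size=(200, 8))
        faults       = rng.normal(3.0, 1.5, size=(50, 8))    # unseen fault mode

        ocsvm = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
        ocsvm.fit(normal_train)              # never sees any fault examples

        X_test = np.vstack([normal_test, faults])
        y_test = np.r_[np.zeros(len(normal_test)), np.ones(len(faults))]
        # decision_function is high for inliers, so negate it as a fault score.
        fault_score = -ocsvm.decision_function(X_test)
        print("AUC =", round(roc_auc_score(y_test, fault_score), 3))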

    Declassification: transforming Java programs to remove intermediate classes

    Computer applications are increasingly being written in object-oriented languages like Java and C++. Object-oriented programming encourages the use of small methods and classes. However, this style of programming introduces considerable overhead, as each method call results in a dynamic dispatch and each field access becomes a pointer dereference to the heap-allocated object. Many of the classes in these programs are included to provide structure rather than to act as reusable code, and can therefore be regarded as intermediate. We have therefore developed an optimisation technique, called declassification, which transforms Java programs into equivalent programs from which these intermediate classes have been removed. The technique involves two phases, analysis and transformation. The analysis identifies intermediate classes for removal; a suitable class is defined to be a class which is used exactly once within a program. The subsequent transformation eliminates these intermediate classes from the program by inlining the fields and methods of each intermediate class within the enclosing class which uses it. In theory, declassification reduces the number of classes which are instantiated and used in a program during its execution. This should reduce the overhead of object creation and maintenance, as child objects are no longer created, and it should also reduce the number of field accesses and dynamic dispatches required by a program to execute. An important feature of the declassification technique, as opposed to other similar techniques, is that it guarantees there will be no increase in code size. An empirical study was conducted on a number of reasonably sized Java programs, and it was found that very few suitable classes were identified for inlining. The results showed that the declassification technique had a small influence on memory consumption and a negligible influence on the run-time performance of these programs. It is therefore concluded that the technique was not successful in optimising the test programs, but further extensions, combined with an intrinsically object-oriented set of test programs, could greatly improve its success.
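
    The thesis transforms Java; the Python analogue below merely sketches the before/after shape of the idea: a class used exactly once is dissolved into its enclosing class, removing one object allocation and one extra dereference per access. Class and field names are invented for illustration.

        # Before: Point is an intermediate class used only by Circle.
        class Point:
            def __init__(self, x, y):
                self.x, self.y = x, y

        class Circle:
            def __init__(self, x, y, r):
                self.center = Point(x, y)   # one extra heap object per Circle
                self.r = r

            def left_edge(self):
                return self.center.x - self.r   # extra dereference

        # After declassification: Point's fields are inlined into Circle.
        class CircleDeclassified:
            def __init__(self, x, y, r):
                self.center_x, self.center_y = x, y   # fields hoisted in
                self.r = r

            def left_edge(self):
                return self.center_x - self.r   # direct field access

        assert Circle(3, 4, 1).left_edge() == CircleDeclassified(3, 4, 1).left_edge()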

    Finding and understanding bugs in C compilers

    Compilers should be correct. To improve the quality of C compilers, we created Csmith, a randomized test-case generation tool, and spent three years using it to find compiler bugs. During this period we reported more than 325 previously unknown bugs to compiler developers. Every compiler we tested was found to crash and also to silently generate wrong code when presented with valid input. In this paper we present our compiler-testing tool and the results of our bug-hunting study. Our first contribution is to advance the state of the art in compiler testing. Unlike previous tools, Csmith generates programs that cover a large subset of C while avoiding the undefined and unspecified behaviors that would destroy its ability to automatically find wrong-code bugs. Our second contribution is a collection of qualitative and quantitative results about the bugs we have found in open-source C compilers.
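
    A minimal differential-testing loop in the spirit of the study above: compile one generated C program with several compilers and flag any disagreement in output as a wrong-code bug candidate. Csmith itself is not reproduced here; generate_program() is a trivial placeholder, and the script assumes gcc and clang are on PATH.

        # Differential compiler testing: same program, several compilers,
        # compare outputs. A disagreement points at a wrong-code bug.
        import subprocess, tempfile, os

        COMPILERS = ["gcc", "clang"]

        def generate_program():
            # Placeholder for a Csmith-style generator; must emit a C program
            # with fully defined behavior (no undefined/unspecified behavior).
            return '#include <stdio.h>\nint main(void){printf("%d\\n", 6*7);return 0;}\n'

        def run_once(compiler, src_path, workdir):
            exe = os.path.join(workdir, "a_" + compiler)
            subprocess.run([compiler, src_path, "-O2", "-o", exe],
                           check=True, capture_output=True)  # crash => crash bug
            out = subprocess.run([exe], capture_output=True, timeout=10, text=True)
            return out.stdout

        with tempfile.TemporaryDirectory() as tmp:
            src = os.path.join(tmp, "case.c")
            with open(src, "w") as f:
                f.write(generate_program())
            outputs = {cc: run_once(cc, src, tmp) for cc in COMPILERS}
            if len(set(outputs.values())) > 1:
                print("wrong-code bug candidate:", outputs)  # compilers disagree
            else:
                print("compilers agree:", outputs[COMPILERS[0]].strip())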

    Fuzzy-Rough Attribute Reduction with Application to Web Categorization

    Due to the explosive growth of electronically stored information, automatic methods must be developed to aid users in maintaining and using this abundance of information effectively. In particular, the sheer volume of redundancy present must be dealt with, leaving only the information-rich data to be processed. This paper presents a novel approach, based on an integrated use of fuzzy and rough set theories, to greatly reduce this data redundancy. Formal concepts of fuzzy-rough attribute reduction are introduced and illustrated with a simple example. The work is applied to the problem of web categorization, considerably reducing dimensionality with minimal loss of information. Experimental results show that fuzzy-rough reduction is more powerful than the conventional rough set-based approach. Classifiers that use a lower-dimensional set of attributes retained by fuzzy-rough reduction outperform those that employ more attributes returned by the existing crisp rough reduction method.
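
    The paper's method is fuzzy-rough; as a hedged baseline, the sketch below implements the conventional crisp rough-set reduction (a QuickReduct-style greedy search) that the abstract compares against: attributes are added while they increase the dependency degree, i.e. the fraction of objects whose equivalence class is pure in the decision. The toy table is invented.

        # Crisp rough-set attribute reduction (QuickReduct-style greedy).
        from collections import defaultdict

        def dependency(rows, attrs, decisions):
            """Fraction of rows whose attrs-equivalence class is decision-pure."""
            groups = defaultdict(set)
            for row, d in zip(rows, decisions):
                groups[tuple(row[a] for a in attrs)].add(d)
            pure = {k for k, ds in groups.items() if len(ds) == 1}
            return sum(tuple(r[a] for a in attrs) in pure for r in rows) / len(rows)

        def quickreduct(rows, n_attrs, decisions):
            full = dependency(rows, range(n_attrs), decisions)
            reduct = []
            while dependency(rows, reduct, decisions) < full:
                best = max((a for a in range(n_attrs) if a not in reduct),
                           key=lambda a: dependency(rows, reduct + [a], decisions))
                reduct.append(best)
            return reduct

        # Toy decision table: 4 conditional attributes, most of them redundant.
        rows = [(0,0,1,0), (0,1,1,1), (1,0,0,0), (1,1,0,1), (0,0,0,0), (1,1,1,1)]
        decisions = [0, 1, 0, 1, 0, 1]
        print("reduct:", quickreduct(rows, 4, decisions))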