
    The detection of globular clusters in galaxies as a data mining problem

    We present an application of self-adaptive supervised learning classifiers, derived from the Machine Learning paradigm, to the identification of candidate Globular Clusters in deep, wide-field, single-band HST images. Several methods provided by the DAME (Data Mining & Exploration) web application were tested and compared on the NGC1399 HST data described in Paolillo (2011). The best results were obtained with a Multi Layer Perceptron trained with a Quasi-Newton learning rule, which achieved a classification accuracy of 98.3%, with a completeness of 97.8% and a contamination of 1.6%. An extensive set of experiments revealed that the use of accurate structural parameters (effective radius, central surface brightness) does improve the final result, but only by 5%. It is also shown that the method can retrieve extreme sources (for instance, very extended objects) that are missed by more traditional approaches.
    Comment: Accepted 2011 December 12; Received 2011 November 28; in original form 2011 October 1
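    The classification setup described above can be sketched with scikit-learn, whose `lbfgs` solver is a quasi-Newton method like the learning rule named in the abstract. This is an illustrative sketch on synthetic data, not the NGC1399 catalog; the three feature columns (magnitude, effective radius, central surface brightness) and the toy labelling rule are assumptions.

    ```python
    # Hedged sketch: a Multi Layer Perceptron with a quasi-Newton (L-BFGS)
    # learning rule separating globular-cluster candidates from contaminants.
    # Data is synthetic; feature names are illustrative assumptions only.
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 1000
    # Assumed feature columns: magnitude, effective radius, central surface brightness
    X = rng.normal(size=(n, 3))
    # Toy labels: sources whose structural parameters fall in a compact region
    y = ((X[:, 1] > 0) & (X[:, 2] > 0)).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(16,), solver="lbfgs",  # quasi-Newton
                        max_iter=500, random_state=0)
    clf.fit(X_tr, y_tr)
    accuracy = clf.score(X_te, y_te)

    # Completeness (recall) and contamination (false-discovery rate),
    # the same figures of merit quoted in the abstract
    pred = clf.predict(X_te)
    tp = int(((pred == 1) & (y_te == 1)).sum())
    fp = int(((pred == 1) & (y_te == 0)).sum())
    fn = int(((pred == 0) & (y_te == 1)).sum())
    completeness = tp / (tp + fn)
    contamination = fp / (tp + fp) if (tp + fp) else 0.0
    print(accuracy, completeness, contamination)
    ```

    The point of computing completeness and contamination separately from accuracy is that, with rare positives, a classifier can score high accuracy while missing most true clusters; the abstract's 97.8% completeness at 1.6% contamination is the stronger claim.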

    Advanced deep learning approaches for biosignal applications

    University of Technology Sydney. Faculty of Engineering and Information Technology.
    A wide gap exists between clinical application results and laboratory observations concerning hand rehabilitation devices: in most instances, laboratory observations show superior outcomes while real-time applications perform poorly. The variable nature of the electromyography (EMG) signal and the limited scope of laboratory applications are the principal reasons for this gap. This thesis aims to introduce and develop a deep learning model capable of learning features from biosignals. The deep learning model is expected to tame the variable nature of the EMG signal, leading to the best available outcomes. Furthermore, the suggested deep learning scheme is trained to learn the features that best match the biosignal application, regardless of the number of classes. This matters because traditional feature extraction is time-consuming and heavily reliant on the user's experience and the application. The objective of this research is accomplished via the following four implemented models.
    1. Developing a deep learning model by implementing a two-stage autoencoder together with different signal representations (spectrogram, wavelet, and wavelet packet) to tame variations of the electromyography signal. A support vector machine, an extreme learning machine with two activation functions (sigmoid and radial basis function), and a softmax layer were used for classification. The classifier fusion layer achieved a testing accuracy of more than 92%, with training accuracy above 98%. The same dataset was also used with superimposed signal representations for a two-stage autoencoder, classified by a softmax layer, support vector machine, k-nearest neighbour, and discriminant analysis; together with classifier fusion, this led to a testing accuracy of more than 90%.
    2. Presenting principal component analysis and independent component analysis for feature learning after applying different signal representation algorithms (spectrogram, wavelet, and wavelet packet). Discriminant analysis, an extreme learning machine, and a support vector machine were used for classification. The two proposed models showed acceptable accuracy along with shorter simulation time; the testing accuracy exceeded 90% with a classifier fusion layer. The Manhattan index was estimated for all features, and only the top 50 features by Manhattan index were retained, decreasing simulation time while maintaining acceptable accuracy.
    3. Introducing a self-organising map for deep learning, whereby the biosignal was represented by spectrograms, wavelets, and wavelet packets. The representation was fed to a layer of self-organising maps, and the system's performance was then evaluated with an extreme learning machine, a self-adaptive evolutionary extreme learning machine, discriminant analysis, and a support vector machine for classification. Adding a classifier fusion layer increased the testing accuracy to 96.60% for ten finger movements, with 99.73% in training. The proposed system showed superior behaviour regarding accuracy and simulation time.
    4. Presenting a deep learning model in which (1) the data was augmented after representing the biosignal by a spectrogram, (2) the augmented signal was represented by a tensor, and (3) the signal was introduced to the two-stage autoencoder. The same dataset was used with traditional pattern recognition for comparison: with a classifier fusion layer, the deep learning scheme achieved 90.25% on ten finger movements versus 87.11% for pattern recognition. In addition, a six-finger-movement dataset acquired from amputee participants reached 91.85% with deep learning and 89.64% with traditional pattern recognition.
    Furthermore, different datasets for different applications were tested using the recommended deep learning model. Feeding the deep learning model with various datasets for different applications gave the model higher fidelity, combined with realistic outcomes and generalisation.
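    The recurring pipeline across the four models (signal representation, stacked feature learning, several classifiers, then a classifier fusion layer) can be sketched as follows. This is not the thesis code: the synthetic two-class "movements", the use of scikit-learn's `MLPRegressor` as a stand-in autoencoder, and the majority-vote fusion rule are all illustrative assumptions.

    ```python
    # Hedged sketch of the described pipeline: spectrogram representation ->
    # two-stage (stacked) autoencoder feature learning -> multiple classifiers
    # -> majority-vote classifier fusion. All data and parameters are synthetic.
    import numpy as np
    from scipy.signal import spectrogram
    from sklearn.neural_network import MLPRegressor
    from sklearn.svm import SVC
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    fs = 1000  # Hz, assumed sampling rate

    def make_signal(freq):
        """Toy one-second 'EMG' trace dominated by a single frequency."""
        t = np.arange(fs) / fs
        return np.sin(2 * np.pi * freq * t) + 0.3 * rng.normal(size=fs)

    # Two toy "finger movements" distinguished by dominant frequency
    signals = [make_signal(f) for f in ([50] * 60 + [120] * 60)]
    y = np.array([0] * 60 + [1] * 60)

    # Spectrogram representation, flattened into one feature vector per trial
    X = np.array([spectrogram(s, fs=fs, nperseg=128)[2].ravel() for s in signals])

    def autoencoder_features(data, n_hidden):
        """Train an autoencoder (input -> hidden -> input) and return hidden codes."""
        ae = MLPRegressor(hidden_layer_sizes=(n_hidden,), activation="relu",
                          max_iter=500, random_state=0)
        ae.fit(data, data)  # reconstruct the input
        # Manual forward pass to the hidden (code) layer
        return np.maximum(0.0, data @ ae.coefs_[0] + ae.intercepts_[0])

    # Two-stage (stacked) autoencoder: the second stage encodes the first's codes
    H1 = autoencoder_features(X, 64)
    H2 = autoencoder_features(H1, 16)

    X_tr, X_te, y_tr, y_te = train_test_split(H2, y, random_state=0, stratify=y)
    clfs = [SVC(), KNeighborsClassifier(3), LinearDiscriminantAnalysis()]
    preds = np.array([c.fit(X_tr, y_tr).predict(X_te) for c in clfs])

    # Classifier fusion layer: simple majority vote over the three predictions
    fused = (preds.sum(axis=0) >= 2).astype(int)
    fusion_acc = (fused == y_te).mean()
    print(fusion_acc)
    ```

    The design intuition carried by the fusion layer is that the individual classifiers make partly uncorrelated errors, so a vote over them is at least as reliable as the median classifier; the thesis reports exactly this pattern, with the fusion layer consistently above its constituents.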

    Adaptive, fast walking in a biped robot under neuronal control and learning

    Human walking is a dynamic, partly self-stabilizing process relying on the interaction of the biomechanical design with its neuronal control. The coordination of this process is a very difficult problem, and it has been suggested that it involves a hierarchy of levels, where the lower ones, e.g., interactions between muscles and the spinal cord, are largely autonomous, and where higher-level control (e.g., cortical) arises only pointwise, as needed. This requires an architecture of several nested sensorimotor loops, in which the walking process provides feedback signals to the walker's sensory systems that can be used to coordinate its movements. To complicate the situation, at a maximal walking speed of more than four leg-lengths per second, the cycle period available to coordinate all these loops is rather short. In this study we present a planar biped robot which uses the design principle of nested loops to combine the self-stabilizing properties of its biomechanical design with several levels of neuronal control. Specifically, we show how to adapt control by including online learning mechanisms based on simulated synaptic plasticity. This robot can walk at high speed (> 3.0 leg-lengths/s), self-adapting to minor disturbances and reacting robustly to abruptly induced gait changes. At the same time, it can learn to walk on different terrains, requiring only a few learning experiences. This study shows that the tight coupling of physical with neuronal control, guided by sensory feedback from the walking pattern itself and combined with synaptic learning, may be a way forward to better understand and solve coordination problems in other complex motor tasks.
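    A minimal sketch of the kind of simulated synaptic plasticity the abstract invokes is a correlation-based (differential Hebbian) rule: a weight on an early, predictive sensor signal grows in proportion to that signal times the temporal derivative of a later reflex signal, so the learned pathway comes to act before the reflex is triggered. The Gaussian signal shapes, timings, and learning rate below are illustrative assumptions, not the robot's actual controller.

    ```python
    # Hedged sketch of online correlation-based synaptic plasticity:
    # dw/dt ∝ (predictive input) * d(reflex signal)/dt.
    # Signals and parameters are toy assumptions for illustration.
    import numpy as np

    steps = 200
    eta = 0.05          # learning rate (assumed)
    w = 0.0             # plastic synaptic weight

    def gait_cycle(delay):
        """Toy sensory trace: a Gaussian bump `delay` steps into the cycle."""
        t = np.arange(steps)
        return np.exp(-0.5 * ((t - delay) / 5.0) ** 2)

    x_pred = gait_cycle(delay=80)     # early, predictive signal (e.g., hip sensor)
    x_reflex = gait_cycle(delay=100)  # later signal that triggers the reflex

    for _ in range(30):               # repeated gait cycles
        for t in range(1, steps):
            # Differential Hebbian update: correlate the predictive input
            # with the rising edge of the reflex signal
            w += eta * x_pred[t] * (x_reflex[t] - x_reflex[t - 1])

    print(w)
    ```

    Because the predictive bump overlaps the reflex signal's rising edge more than its falling edge, the net correlation is positive and the weight grows across cycles; once the learned pathway acts early enough to suppress the reflex, the derivative term vanishes and learning stops by itself, which is what makes such rules usable online.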