
    Unsupervised Shift-invariant Feature Learning from Time-series Data

    Unsupervised feature learning is one of the key components of machine learning and artificial intelligence. Learning features from high-dimensional streaming data is an important and difficult problem that involves a number of challenges. Moreover, feature learning algorithms need to be evaluated and generalized for time series with different patterns and components. A detailed study is needed to clarify when simple algorithms fail to learn features and whether we need more complicated methods. In this thesis, we show that a systematic way to learn meaningful features from time series is to use convolutional, or shift-invariant, versions of unsupervised feature learning. We experimentally compare the shift-invariant versions of clustering, sparse coding and non-negative matrix factorization algorithms for reconstruction, noise separation, prediction, classification and simulating auditory filters from acoustic signals. The results show that the most efficient and highly scalable clustering algorithm, with a simple modification in the inference and learning phases, is able to produce meaningful results. Clustering features are also comparable with sparse coding and non-negative matrix factorization in most of the tasks (e.g. classification) and even more successful in some (e.g. prediction). Shift-invariant sparse coding is also applied to a novel problem, inferring hearing loss from speech signals, and produces promising results. The performance of the algorithms with regard to several important factors, such as time-series components, number of features and size of the receptive field, is also analyzed. The results show a significant positive correlation between clustering performance and the degree of trend, frequency skewness, frequency kurtosis and serial correlation of the data, whereas the correlation is negative for the dataset's average bandwidth. The performance of shift-invariant sparse coding is affected by frequency skewness, frequency kurtosis and serial correlation of the data. Non-negative matrix factorization is influenced by the same data characteristics as clustering.
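    As a rough illustration of the shift-invariant clustering idea described above (not the exact algorithm evaluated in the thesis), the Python sketch below clusters every overlapping window of a 1-D signal with plain k-means, so each centroid acts as a feature that can match its pattern at any offset. The window length, number of clusters and toy signal are arbitrary choices made for the example.

    # Minimal sketch of shift-invariant feature learning from a 1-D time series:
    # k-means over all length-W windows of the signal, so each centroid acts as
    # a shift-invariant "feature" (it can match the pattern at any offset).
    # Illustrative only; not the algorithm evaluated in the thesis.
    import numpy as np

    def sliding_windows(x, width):
        """All overlapping windows of length `width` from signal x."""
        n = len(x) - width + 1
        return np.stack([x[i:i + width] for i in range(n)])

    def kmeans(patches, k, iters=50, seed=0):
        """Plain k-means; centroids become shift-invariant features because
        the training set contains every possible shift of the signal."""
        rng = np.random.default_rng(seed)
        centroids = patches[rng.choice(len(patches), k, replace=False)]
        for _ in range(iters):
            # assign each window to its nearest centroid
            d = ((patches[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
            labels = d.argmin(axis=1)
            # update each centroid as the mean of its assigned windows
            for j in range(k):
                members = patches[labels == j]
                if len(members):
                    centroids[j] = members.mean(axis=0)
        return centroids

    # toy signal: a short pulse repeated at random offsets plus noise
    rng = np.random.default_rng(1)
    x = rng.normal(0, 0.1, 2000)
    pulse = np.hanning(20)
    for start in rng.choice(1900, 40, replace=False):
        x[start:start + 20] += pulse

    features = kmeans(sliding_windows(x, 20), k=4)
    print(features.shape)  # (4, 20): four learned shift-invariant waveforms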

    Adaptive machinery fault diagnosis based on improved shift-invariant sparse coding

    In machinery fault diagnosis, it is common for one kind of fault to occur under several operating conditions, such as different loads and different speeds. When a conventional intelligent fault diagnosis method is trained on only one of these conditions, the resulting classifier produces a high error rate when diagnosing faults across all conditions; this is a problem of robustness. If instead the data from every condition are used for training, robustness improves considerably, but training time grows substantially. To balance these two seemingly conflicting requirements, a method based on shift-invariant sparse coding (SISC) was previously proposed: it learns features from each condition of the same fault, and these features adapt to the other conditions, which addresses the robustness problem, but the algorithm's time efficiency is low. In this paper, by improving the efficiency of shift-invariant sparse coding, we greatly reduce the time spent on learning features. Experiments show that the proposed method outperforms the original SISC algorithm.
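    The core of SISC is decomposing a signal into a sparse sum of shifted copies of short learned bases. The sketch below shows only the inference step, using greedy convolutional matching pursuit with made-up fault-signature bases; it is an assumption-laden illustration of the shift-invariant decomposition, not the improved algorithm proposed in the paper.

    # Hedged sketch of the inference step in shift-invariant sparse coding:
    # greedy convolutional matching pursuit that explains a vibration signal
    # as a sparse sum of shifted copies of short bases. The bases and signal
    # below are toy data; the paper's improved SISC is not reproduced.
    import numpy as np

    def convolutional_matching_pursuit(signal, bases, n_iters=30):
        """Greedily pick (basis, shift, amplitude) triples that best reduce
        the residual; return the activations and the residual signal."""
        residual = signal.astype(float).copy()
        activations = []  # (basis index, shift, amplitude)
        for _ in range(n_iters):
            best = None
            for k, b in enumerate(bases):
                # correlation of the residual with basis k at every shift
                corr = np.correlate(residual, b, mode="valid")
                shift = int(np.abs(corr).argmax())
                score = abs(corr[shift])
                if best is None or score > best[0]:
                    best = (score, k, shift, corr[shift] / (b @ b))
            _, k, shift, amp = best
            residual[shift:shift + len(bases[k])] -= amp * bases[k]
            activations.append((k, shift, amp))
        return activations, residual

    # toy example: two "fault signature" bases embedded at known offsets
    rng = np.random.default_rng(0)
    bases = [np.sin(np.linspace(0, 4 * np.pi, 32)), np.hanning(32)]
    signal = rng.normal(0, 0.05, 1024)
    signal[100:132] += 0.8 * bases[0]
    signal[500:532] += 1.2 * bases[1]

    acts, res = convolutional_matching_pursuit(signal, bases)
    print(acts[:2])  # strongest activations recover basis id, shift, amplitude
    print(np.linalg.norm(res) < np.linalg.norm(signal))  # residual energy dropped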

    Sparse Image Representation with Epitomes

    Sparse coding, which is the decomposition of a vector using only a few basis elements, is widely used in machine learning and image processing. The basis set, also called dictionary, is learned to adapt to specific data. This approach has proven to be very effective in many image processing tasks. Traditionally, the dictionary is an unstructured "flat" set of atoms. In this paper, we study structured dictionaries which are obtained from an epitome, or a set of epitomes. The epitome is itself a small image, and the atoms are all the patches of a chosen size inside this image. This considerably reduces the number of parameters to learn and provides sparse image decompositions with shift-invariance properties. We propose a new formulation and an algorithm for learning the structured dictionaries associated with epitomes, and illustrate their use in image denoising tasks. Comment: Computer Vision and Pattern Recognition, Colorado Springs: United States (2011)
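    The sketch below illustrates the structured-dictionary idea under the assumption of a single square epitome: the atoms are all patches of a chosen size inside a small epitome image, and an image patch is then sparsely decomposed over that dictionary with plain orthogonal matching pursuit. The epitome here is random and the paper's learning algorithm is not reproduced; only the construction of the dictionary from an epitome is shown.

    # Sketch of an epitome-based structured dictionary (assumes one square
    # epitome): the atoms are all P x P patches of the epitome, so far fewer
    # parameters are needed than for an unstructured "flat" dictionary.
    # Sparse decomposition uses plain orthogonal matching pursuit here.
    import numpy as np

    def epitome_dictionary(epitome, patch_size):
        """All patch_size x patch_size patches of the epitome, flattened and
        L2-normalized, form the overlapping (shift-invariant) dictionary."""
        E = epitome.shape[0]
        atoms = []
        for i in range(E - patch_size + 1):
            for j in range(E - patch_size + 1):
                a = epitome[i:i + patch_size, j:j + patch_size].ravel()
                atoms.append(a / (np.linalg.norm(a) + 1e-12))
        return np.stack(atoms, axis=1)  # shape: (patch_size**2, n_atoms)

    def omp(D, x, n_nonzero):
        """Orthogonal matching pursuit: sparse code of x over dictionary D."""
        residual, support = x.copy(), []
        for _ in range(n_nonzero):
            support.append(int(np.abs(D.T @ residual).argmax()))
            coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
            residual = x - D[:, support] @ coeffs
        code = np.zeros(D.shape[1])
        code[support] = coeffs
        return code

    rng = np.random.default_rng(0)
    epitome = rng.random((16, 16))          # a 16x16 epitome
    D = epitome_dictionary(epitome, 8)      # dictionary of all 8x8 patches (81 atoms)
    patch = epitome[3:11, 5:13].ravel()     # a patch the epitome can represent
    code = omp(D, patch, n_nonzero=2)
    print(np.count_nonzero(code), np.linalg.norm(D @ code - patch))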