
    Improved support vector clustering algorithm for color image segmentation

    Color image segmentation has attracted increasing attention in various application fields during the past few years. Essentially, color image segmentation is a process of clustering pixels according to their color. However, traditional clustering methods do not scale well with the number of training samples, which limits their ability to handle massive data effectively. Using an improved approximate Minimum Enclosing Ball algorithm, this article develops a fast support vector clustering algorithm that computes the clusters of a given color image in a kernel-induced space in order to segment it. We prove theoretically that the proposed algorithm converges quickly to the optimum within any given precision. Compared to other popular algorithms, it delivers competitive performance in both training time and accuracy. Color image segmentation experiments on both synthetic and real-world data sets demonstrate the validity of the proposed algorithm.
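    As a hedged illustration of the general idea (segmentation by clustering pixel colors), the sketch below uses scikit-learn's MiniBatchKMeans as a generic, scalable stand-in for the paper's support vector clustering; the image path and cluster count are assumptions, not taken from the article.

```python
# Hedged sketch: color-based segmentation by clustering pixel values.
# MiniBatchKMeans stands in for the paper's support vector clustering.
import numpy as np
from PIL import Image
from sklearn.cluster import MiniBatchKMeans

def segment_by_color(path, n_clusters=4):
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64) / 255.0
    h, w, _ = img.shape
    pixels = img.reshape(-1, 3)                       # one RGB sample per pixel
    km = MiniBatchKMeans(n_clusters=n_clusters, random_state=0)
    labels = km.fit_predict(pixels)                   # scalable pixel clustering
    return labels.reshape(h, w)                       # per-pixel segment labels

if __name__ == "__main__":
    mask = segment_by_color("example.jpg")            # hypothetical input image
    print(mask.shape, np.unique(mask))
```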

    Network anomaly detection: a survey and comparative analysis of stochastic and deterministic methods

    7 pages; one more figure than the final CDC 2013 version. We present five methods for the problem of network anomaly detection. These methods cover most of the common techniques in the anomaly detection field, including Statistical Hypothesis Tests (SHT), Support Vector Machines (SVM) and clustering analysis. We evaluate all methods on a simulated network that consists of nominal data, three flow-level anomalies and one packet-level attack. By analyzing the results, we point out the advantages and disadvantages of each method and conclude that combining the results of the individual methods can yield improved anomaly detection performance.
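    A minimal sketch of one of the surveyed technique families: a one-class SVM trained on nominal traffic and used to flag outliers. The synthetic flow features, noise model and nu parameter are illustrative assumptions rather than the survey's exact setup.

```python
# Hedged sketch: one-class SVM anomaly detection on synthetic flow features.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
nominal = rng.normal(loc=0.0, scale=1.0, size=(500, 3))    # nominal traffic features
anomalous = rng.normal(loc=4.0, scale=1.0, size=(20, 3))   # injected anomalies

detector = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(nominal)
pred = detector.predict(np.vstack([nominal, anomalous]))   # +1 nominal, -1 anomaly
print("flagged anomalies:", int(np.sum(pred == -1)))
```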

    Satellite image segmentation using RVM and Fuzzy clustering

    Image segmentation is a common but still very challenging problem in the area of image processing, with applications in many industrial and medical settings, for example target tracking, object recognition and medical image processing. The task of image segmentation is to divide an image into a number of meaningful pieces on the basis of image features such as color and texture. In this thesis, some recently developed fuzzy clustering algorithms as well as the supervised learning classifier Relevance Vector Machine (RVM) are used to obtain an improved solution. First, various fuzzy clustering algorithms such as FCM and DeFCM are used to produce different clustering solutions, and each solution is then improved by classifying the remaining pixels of the satellite image with the RVM classifier. Results of different supervised learning classifiers, such as the Support Vector Machine (SVM), Relevance Vector Machine (RVM) and K-nearest neighbors (KNN), are compared on the basis of error rate and running time. One of the major drawbacks of any clustering algorithm is its input argument, the number of clusters in the unlabelled data. In this thesis, an attempt is made to estimate the optimal number of clusters present in a satellite image using the Davies-Bouldin index.
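    The cluster-number selection step can be sketched directly with scikit-learn's Davies-Bouldin score; here plain KMeans on synthetic two-dimensional features stands in for the thesis's fuzzy clustering of satellite pixels, and the data and range of k are assumptions.

```python
# Hedged sketch: choosing the number of clusters with the Davies-Bouldin index.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

rng = np.random.default_rng(1)
features = np.vstack([rng.normal(c, 0.3, size=(100, 2)) for c in (0.0, 2.0, 4.0)])

scores = {}
for k in range(2, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)
    scores[k] = davies_bouldin_score(features, labels)     # lower is better

best_k = min(scores, key=scores.get)
print("Davies-Bouldin scores:", scores, "-> chosen k =", best_k)
```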

    Virtual environment trajectory analysis: a basis for navigational assistance and scene adaptivity

    This paper describes the analysis and clustering of motion trajectories obtained while users navigate within a virtual environment (VE). It presents a neural network simulation that produces a set of five clusters which help to differentiate users on the basis of efficient and inefficient navigational strategies. The accuracy of classification carried out with a self-organising map algorithm was tested and improved to in excess of 85% by using learning vector quantisation. This paper considers how such user classifications could be utilised in the delivery of intelligent navigational support and the dynamic reconfiguration of scenes within such VEs. We explore how such intelligent assistance and system adaptivity could be delivered within a Multi-Agent Systems (MAS) context.
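    A minimal sketch of the learning vector quantisation (LVQ1) refinement step on labelled trajectory features; the two-dimensional synthetic data, the single prototype per class and the learning-rate schedule are assumptions, not the paper's configuration.

```python
# Hedged sketch: LVQ1 update loop for refining a prototype-based classifier.
import numpy as np

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(3, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)                 # efficient vs inefficient navigators

# one prototype per class, initialised at the class mean
protos = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
proto_labels = np.array([0, 1])

lr = 0.1
for epoch in range(20):
    for xi, yi in zip(X, y):
        j = np.argmin(np.linalg.norm(protos - xi, axis=1))    # winning prototype
        step = lr if proto_labels[j] == yi else -lr           # attract or repel
        protos[j] += step * (xi - protos[j])
    lr *= 0.9                                                 # decay learning rate

pred = proto_labels[np.argmin(np.linalg.norm(X[:, None] - protos[None], axis=2), axis=1)]
print("training accuracy:", (pred == y).mean())
```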

    On the Effect of Semantically Enriched Context Models on Software Modularization

    Many of the existing approaches for program comprehension rely on the linguistic information found in source code, such as identifier names and comments. Semantic clustering is one such technique for modularization of the system that relies on the informal semantics of the program, encoded in the vocabulary used in the source code. Treating the source code as a collection of tokens loses the semantic information embedded within the identifiers. We try to overcome this problem by introducing context models for source code identifiers to obtain a semantic kernel, which can be used both for deriving the topics that run through the system and for their clustering. In the first model, we abstract an identifier to its type representation and build on this notion of context to construct a contextual vector representation of the source code. The second notion of context is defined based on the flow of data between identifiers, representing a module as a dependency graph where the nodes correspond to identifiers and the edges represent the data dependencies between pairs of identifiers. We have applied our approach to 10 medium-sized open source Java projects, and show that by introducing contexts for identifiers, the quality of the modularization of the software systems is improved. Both context models give results that are superior to the plain vector representation of documents. In some cases, the authoritativeness of decompositions is improved by 67%. Furthermore, a more detailed evaluation of our approach on JEdit, an open source editor, demonstrates that topics inferred by performing topic analysis on the contextual representations are more meaningful than those obtained from the plain representation of the documents. The proposed approach of introducing a context model for source code identifiers paves the way for building tools that support developers in program comprehension tasks such as application and domain concept location, software modularization and topic analysis.
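    In the spirit of the baseline that the paper improves upon, namely plain vocabulary-based semantic clustering without the proposed context models, the sketch below clusters toy "modules" by their identifier vocabulary using TF-IDF and agglomerative clustering; all file contents and the cluster count are placeholders.

```python
# Hedged sketch: bag-of-identifiers clustering of source "modules".
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

documents = [
    "parse token stream lexer read char",        # toy module vocabularies
    "lexer token buffer scan identifier",
    "render frame draw pixel color buffer",
    "color pixel shader draw texture",
]

tfidf = TfidfVectorizer().fit_transform(documents)          # identifier vocabulary weights
labels = AgglomerativeClustering(n_clusters=2).fit_predict(tfidf.toarray())
print("module clusters:", labels)
```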

    Unsupervised and supervised machine learning for performance improvement of NFT optical transmission

    We apply both unsupervised and supervised machine learning (ML) methods, in particular k-means clustering and the support vector machine (SVM), to improve the performance of an optical communication system based on the nonlinear Fourier transform (NFT). The NFT system employs the continuous part of the NFT spectrum to carry data over distances of up to 1000 km using 16-QAM OFDM modulation. We characterize the performance of the system in terms of the dependence of BER on signal power. We show that the NFT system performance can be improved considerably by means of the ML techniques and that the more advanced SVM method typically outperforms k-means clustering.
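    A hedged sketch of the unsupervised part of the receiver-side processing: k-means symbol decision on a noisy 16-QAM constellation. The additive-noise channel model, noise level and symbol count are illustrative assumptions rather than the NFT system described in the paper.

```python
# Hedged sketch: k-means symbol decision on a noisy 16-QAM constellation.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
levels = np.array([-3.0, -1.0, 1.0, 3.0])
ideal = np.array([[i, q] for i in levels for q in levels])       # 16-QAM grid (I, Q)

symbols = ideal[rng.integers(0, 16, size=2000)]                  # transmitted symbols
received = symbols + rng.normal(scale=0.3, size=symbols.shape)   # simple additive noise

km = KMeans(n_clusters=16, n_init=10, random_state=0).fit(received)
decided = km.cluster_centers_[km.labels_]                        # snap to learned centroids
print("mean decision offset:", np.abs(decided - symbols).mean())
```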