
    A Study on Pattern Classification of Bioinformatics Datasets

    Pattern classification is a supervised technique in which patterns are organized into groups sharing the same set of properties. Classification draws on techniques from applied mathematics, informatics, statistics, computer science and artificial intelligence to solve the classification problem at the attribute level and map inputs to an output space of two or more classes. The Probabilistic Neural Network (PNN) is an effective neural network for pattern classification. It uses training and testing data samples to build a model; however, the network becomes very complex and difficult to handle when the number of training samples is large. Other approaches, such as the K-Nearest Neighbour (KNN) algorithm, have been used to improve classification accuracy and convergence rate. KNN is a supervised classification scheme in which a subset of the dataset is selected and used to classify the remaining samples; its computational cost becomes prohibitive for large datasets. A genetic algorithm can also be used to design a classifier: it evolves a set of lines that divide the samples into different class regions, and after each generation the classification accuracy is evaluated, with evolution continuing until the desired accuracy or the maximum number of generations is reached. In this project, a comparative study of the Probabilistic Neural Network, K-Nearest Neighbour and a genetic-algorithm-based classifier is carried out. These algorithms are tested on instances from the Lung Cancer, Libras Movement, Parkinson's and Iris datasets (taken from the UCI repository and then normalized). The efficiency of the three techniques is compared on the basis of classification accuracy on the test data, convergence time and implementation complexity.
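    To make the comparison concrete, the sketch below shows a k-nearest-neighbour classifier evaluated on the Iris dataset. It is an illustrative example only: scikit-learn, the choice of k, and the normalization step are our assumptions, not details taken from the study.

```python
# Minimal sketch: k-nearest-neighbour classification of the Iris dataset.
# Illustrates the general KNN scheme mentioned in the abstract; it is not the
# authors' implementation, and the library and hyperparameters are assumed.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)

# Normalize features to [0, 1], mirroring the normalization step the abstract mentions.
X = MinMaxScaler().fit_transform(X)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

knn = KNeighborsClassifier(n_neighbors=5)  # k = 5 is an assumed choice
knn.fit(X_train, y_train)

print("Test accuracy:", accuracy_score(y_test, knn.predict(X_test)))
```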

    Stochastic Downsampling for Cost-Adjustable Inference and Improved Regularization in Convolutional Networks

    It is desirable to train convolutional networks (CNNs) to run more efficiently during inference. In many cases, however, the computational budget that the system has for inference cannot be known beforehand during training, or the inference budget depends on changing real-time resource availability. Thus, it is inadequate to train just inference-efficient CNNs, whose inference costs are fixed and cannot adapt to varied inference budgets. We propose a novel approach for cost-adjustable inference in CNNs: Stochastic Downsampling Point (SDPoint). During training, SDPoint applies feature map downsampling at a random point in the layer hierarchy, with a random downsampling ratio. The different stochastic downsampling configurations, known as SDPoint instances (of the same model), have different computational costs while being trained to minimize the same prediction loss. Sharing network parameters across different instances provides a significant regularization boost. During inference, one may handpick an SDPoint instance that best fits the inference budget. The effectiveness of SDPoint, as both a cost-adjustable inference approach and a regularizer, is validated through extensive experiments on image classification.
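    The mechanism can be illustrated with a short sketch in PyTorch. The code below is an assumed simplification, not the paper's reference implementation: during each training forward pass it picks a random point in a small layer stack and a random downsampling ratio, and applies bilinear downsampling to the feature map at that point; at inference time the point and ratio can be fixed to match a given budget.

```python
# Illustrative sketch of stochastic feature-map downsampling during training
# (an assumed simplification of SDPoint, not the authors' reference code).
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

class StochasticDownsampleCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1),
                          nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))
            for c_in, c_out in [(3, 32), (32, 64), (64, 128)]
        ])
        self.head = nn.Linear(128, num_classes)

    def forward(self, x, point=None, ratio=None):
        # Training: sample a random downsampling point and ratio.
        # point == len(self.blocks) means "no downsampling" for this pass.
        if self.training and point is None:
            point = random.randrange(len(self.blocks) + 1)
            ratio = random.choice([0.5, 0.75])
        for i, block in enumerate(self.blocks):
            if point is not None and i == point:
                x = F.interpolate(x, scale_factor=ratio, mode='bilinear',
                                  align_corners=False)
            x = block(x)
        x = F.adaptive_avg_pool2d(x, 1).flatten(1)
        return self.head(x)

model = StochasticDownsampleCNN()
logits = model(torch.randn(2, 3, 32, 32))  # each training call may downsample differently
```

    At inference, calling `model.eval()` and passing a fixed `point` and `ratio` selects one instance whose cost matches the available budget, which is the cost-adjustable behaviour described above.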

    Power Allocation and Cooperative Diversity in Two-Way Non-Regenerative Cognitive Radio Networks

    In this paper, we investigate the performance of a dual-hop block-fading cognitive radio network with underlay spectrum sharing over independent but not necessarily identically distributed (i.n.i.d.) Nakagami-m fading channels. The primary network consists of a source and a destination. The secondary network consists of two source nodes; depending on whether they cooperate through a single relay or through multiple relays with opportunistic relay selection, and on whether the two source nodes suffer from primary-user (PU) interference, two cases are considered, referred to as Scenario (a) and Scenario (b), respectively. For the considered underlay spectrum sharing, the transmit power of the secondary system is constrained by the interference limit imposed at the primary network and by the interference caused by the PU. New analytical results are obtained for the outage capacity (OC) and the average symbol error probability (ASEP). In particular, for Scenario (a), tight lower bounds on the OC and ASEP of the secondary network are derived in closed form, and a closed-form expression for the end-to-end OC is also obtained. For Scenario (b), a tight lower bound on the OC of the secondary network is derived in closed form. All analytical results are corroborated by Monte Carlo simulations.
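    As a rough illustration of the simulation side, the snippet below sketches a Monte Carlo estimate of outage probability for a single Nakagami-m fading link under an underlay-style transmit-power cap. All parameter values, and the reduction to a single hop, are our assumptions; the paper's model (dual-hop relaying, relay selection, PU interference) is considerably more involved.

```python
# Hedged sketch: Monte Carlo outage-probability estimate over one Nakagami-m
# fading link with an underlay transmit-power constraint. Parameter values
# are assumed for illustration only.
import numpy as np

rng = np.random.default_rng(0)

m = 2.0            # Nakagami-m fading parameter (assumed)
omega = 1.0        # average channel power gain (assumed)
P_max = 1.0        # secondary transmitter's own power budget (assumed)
I_limit = 0.5      # interference limit at the primary receiver (assumed)
noise = 0.1        # noise power (assumed)
gamma_th = 1.0     # SNR threshold defining an outage (assumed)
trials = 1_000_000

# For Nakagami-m fading, the channel power gain |h|^2 is Gamma-distributed
# with shape m and scale omega/m.
g_sd = rng.gamma(shape=m, scale=omega / m, size=trials)  # secondary link gain
g_sp = rng.gamma(shape=m, scale=omega / m, size=trials)  # gain toward primary receiver

# Underlay power control: transmit power limited by both P_max and I_limit / g_sp.
P_tx = np.minimum(P_max, I_limit / g_sp)

snr = P_tx * g_sd / noise
outage_prob = np.mean(snr < gamma_th)
print(f"Estimated outage probability: {outage_prob:.4f}")
```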

    Peer to Peer Information Retrieval: An Overview

    Peer-to-peer technology is widely used for file sharing. In the past decade a number of prototype peer-to-peer information retrieval systems have been developed. Unfortunately, none of these have seen widespread real-world adoption and thus, in contrast with file sharing, information retrieval is still dominated by centralised solutions. In this paper we provide an overview of the key challenges for peer-to-peer information retrieval and the work done so far. We want to stimulate and inspire further research to overcome these challenges. This will open the door to the development and large-scale deployment of real-world peer-to-peer information retrieval systems that rival existing centralised client-server solutions in terms of scalability, performance, user satisfaction and freedom.