178 research outputs found

    A brief history of learning classifier systems: from CS-1 to XCS and its variants

    © 2015, Springer-Verlag Berlin Heidelberg. Following the direction set by Wilson’s XCS, modern Learning Classifier Systems can be characterized by their use of rule accuracy as the utility metric for the search algorithm(s) that discover useful rules. For efficiency, this search typically takes place within the restricted space of co-active rules. This paper gives an overview of the evolution of Learning Classifier Systems up to XCS, and then of some subsequent developments of Wilson’s algorithm to different types of learning

    Connectionist Theory Refinement: Genetically Searching the Space of Network Topologies

    An algorithm that learns from a set of examples should ideally be able to exploit the available resources of (a) abundant computing power and (b) domain-specific knowledge to improve its ability to generalize. Connectionist theory-refinement systems, which use background knowledge to select a neural network's topology and initial weights, have proven effective at exploiting domain-specific knowledge; however, most do not exploit available computing power. This weakness arises because they lack the ability to refine the topology of the neural networks they produce, thereby limiting generalization, especially when given impoverished domain theories. We present the REGENT algorithm, which uses (a) domain-specific knowledge to help create an initial population of knowledge-based neural networks and (b) genetic operators of crossover and mutation (specifically designed for knowledge-based networks) to continually search for better network topologies. Experiments on three real-world domains indicate that our new algorithm significantly increases generalization compared to a standard connectionist theory-refinement system, as well as to our previous algorithm for growing knowledge-based networks. Comment: see http://www.jair.org/ for any accompanying file
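    The abstract's core loop — a population of network topologies refined by crossover and mutation under a fitness measure — can be sketched generically. Everything below is an illustrative stand-in, not REGENT itself: the genome (a list of hidden-layer widths), the operators, and especially the `fitness` function (which in REGENT would be the validation accuracy of a trained knowledge-based network, here replaced by a toy score around an assumed "ideal" topology) are all assumptions.

    ```python
    import random

    random.seed(0)  # reproducibility of the sketch

    def fitness(topology):
        # Hypothetical stand-in for validation accuracy of a trained network:
        # topologies closest to an assumed "ideal" [8, 4] layout score best
        # (maximum 1.0). REGENT would train and evaluate each network instead.
        ideal = [8, 4]
        diff = sum(abs(a - b) for a, b in zip(topology, ideal))
        diff += sum(topology[len(ideal):]) + sum(ideal[len(topology):])
        return 1.0 / (1.0 + diff)

    def crossover(a, b):
        # One-point crossover on the lists of hidden-layer widths.
        cut = random.randint(0, min(len(a), len(b)))
        return a[:cut] + b[cut:]

    def mutate(t):
        # Structural mutation: delete, add, or resize a hidden layer.
        t = list(t)
        op = random.random()
        if op < 0.3 and len(t) > 1:
            t.pop(random.randrange(len(t)))                       # delete a layer
        elif op < 0.6:
            t.insert(random.randrange(len(t) + 1), random.randint(1, 16))  # add one
        else:
            i = random.randrange(len(t))
            t[i] = max(1, t[i] + random.choice([-2, -1, 1, 2]))   # resize one
        return t

    def evolve(pop, generations=200):
        half = len(pop) // 2
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            survivors = pop[:half]            # elitist selection: keep top half
            while len(survivors) < len(pop):  # refill with offspring
                a, b = random.sample(survivors[:half], 2)
                survivors.append(mutate(crossover(a, b)))
            pop = survivors
        return max(pop, key=fitness)

    initial = [[random.randint(1, 16)] for _ in range(20)]  # single-layer start
    best = evolve(list(initial))
    ```

    Because selection is elitist, the best topology found can never score worse than the best member of the initial population, which mirrors the paper's point that topology refinement can only help generalization relative to a fixed-topology starting point.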

    A novel non-intrusive objective method to predict voice quality of service in LTE networks.

    This research aimed to introduce a novel approach for non-intrusive objective measurement of voice Quality of Service (QoS) in LTE networks. In achieving this aim, the thesis established a thorough knowledge of how voice traffic is handled in LTE networks; of the LTE network architecture and its similarities to and differences from its predecessors and traditional ground IP networks; and, most importantly, of those QoS-affecting parameters which are exclusive to LTE environments. Mean Opinion Score (MOS) is the scoring system used to measure the QoS of voice traffic, originally intended to be measured subjectively. Subjective QoS measurement methods are costly and time-consuming; therefore, objective methods such as Perceptual Evaluation of Speech Quality (PESQ) were developed to address these limitations. These objective methods correlate highly with subjective MOS scores. However, they either require individual calculation of many network parameters or are intrusive in nature, requiring software access to both the reference signal and the degraded signal for comparison. The current objective methods are therefore unsuitable for real-time measurement and prediction scenarios. A major contribution of the research was identifying LTE-specific QoS-affecting parameters; no previous work combines these parameters to assess their impact on QoS. The experiment was configured in a hardware-in-the-loop environment, a configuration that could serve as a platform for future research requiring simulation of voice traffic in LTE environments. The key contribution of this research is a novel non-intrusive objective method for QoS measurement and prediction using neural networks. A comparative analysis is presented that examines the performance of four neural network algorithms for non-intrusive measurement and prediction of voice quality over LTE networks.
    In conclusion, the Bayesian Regularization algorithm with 4 neurons in the hidden layer and a sigmoid symmetric transfer function was identified as the best solution, with a Mean Square Error (MSE) of 0.001 and a regression value of 0.998 measured on the testing data set.
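    The reported best model (4 hidden neurons with a sigmoid symmetric transfer function, i.e. tanh, trained with Bayesian Regularization) can be sketched as a forward pass. The input features, their units, and the random weights below are illustrative assumptions — the abstract does not list the exact parameter set, and the Bayesian Regularization training procedure itself is not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical inputs: e.g. packet loss ratio, jitter, delay (assumed).
    n_inputs = 3
    n_hidden = 4   # as reported: 4 neurons in the hidden layer

    # Randomly initialised weights stand in for the trained ones.
    W1 = rng.normal(size=(n_hidden, n_inputs))
    b1 = rng.normal(size=n_hidden)
    W2 = rng.normal(size=n_hidden)
    b2 = rng.normal()

    def predict_mos(x):
        """Forward pass: tanh ('sigmoid symmetric') hidden layer, linear
        output, clipped to the 1-5 MOS scale."""
        h = np.tanh(W1 @ x + b1)
        return float(np.clip(W2 @ h + b2, 1.0, 5.0))

    # Assumed example: 2% loss, 15 ms jitter, 80 ms delay.
    mos = predict_mos(np.array([0.02, 15.0, 80.0]))
    ```

    A non-intrusive predictor of this shape needs only measurable network-side parameters at inference time, which is what distinguishes it from intrusive methods such as PESQ that require both the reference and degraded signals.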

    The 11th Conference of PhD Students in Computer Science
