Hybrid Neuro-Fuzzy Classifier Based on the NEFCLASS Model
The paper presents a hybrid neuro-fuzzy classifier based on a modified NEFCLASS model. The presented classifier was compared to popular classifiers: neural networks and k-nearest neighbours. The efficiency of the modifications was compared with the learning methods used in the original NEFCLASS model. The accuracy of the classifier was tested using three datasets from the UCI Machine Learning Repository: Iris, Wine, and Breast Cancer Wisconsin. Moreover, the influence of ensemble classification methods on classification accuracy was presented.
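The abstract above compares a neuro-fuzzy classifier against k-nearest neighbours on UCI datasets. As a minimal illustration of the k-NN baseline (the toy 2-D data below is invented for the sketch, not the Iris/Wine/Breast Cancer sets used in the paper):

```python
# Minimal k-nearest-neighbours sketch on a toy 2-D dataset (hypothetical data,
# not the UCI datasets used in the paper).
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.
    `train` is a list of ((x, y), label) pairs."""
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

train = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"), ((1.0, 1.0), "b"), ((0.9, 1.1), "b")]
print(knn_predict(train, (0.05, 0.1)))  # → a
print(knn_predict(train, (0.95, 1.0)))  # → b
```

On the real UCI datasets one would additionally normalise features and cross-validate the choice of k, as in the comparison the paper describes.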
A comparison of machine learning techniques for survival prediction in breast cancer
Background: The ability to accurately classify cancer patients into risk classes, i.e. to predict the outcome of the pathology on an individual basis, is a key ingredient in making therapeutic decisions. In recent years gene expression data have been successfully used to complement the clinical and histological criteria traditionally used in such prediction. Many "gene expression signatures" have been developed, i.e. sets of genes whose expression values in a tumor can be used to predict the outcome of the pathology. Here we investigate the use of several machine learning techniques to classify breast cancer patients using one such signature, the well-established 70-gene signature.

Results: We show that Genetic Programming performs significantly better than Support Vector Machines, Multilayered Perceptrons and Random Forests in classifying patients from the NKI breast cancer dataset, and comparably to the scoring-based method originally proposed by the authors of the 70-gene signature. Furthermore, Genetic Programming is able to perform automatic feature selection.

Conclusions: Since the performance of Genetic Programming is likely to be improvable compared to the out-of-the-box approach used here, and given the biological insight potentially provided by the Genetic Programming solutions, we conclude that Genetic Programming methods are worth further investigation as a tool for cancer patient classification based on gene expression data.
Evolutionary Design of Neural Architectures -- A Preliminary Taxonomy and Guide to Literature
This report briefly motivates current research on evolutionary design of neural architectures (EDNA) and presents a short overview of major research issues in this area. It also includes a preliminary taxonomy of research on EDNA and an extensive bibliography of publications on this topic. The taxonomy is an attempt to categorize current research on EDNA in terms of major research issues addressed and approaches pursued. It is our hope that this will help identify open research questions as well as promising directions for further research on EDNA. The report also includes an appendix that provides some suggestions for effective use of the electronic version of the bibliography
Distributed classifier based on genetically engineered bacterial cell cultures
We describe a conceptual design of a distributed classifier formed by a
population of genetically engineered microbial cells. The central idea is to
create a complex classifier from a population of weak or simple classifiers. We
create a master population of cells with randomized synthetic biosensor
circuits that have a broad range of sensitivities towards chemical signals of
interest that form the input vectors subject to classification. The randomized
sensitivities are achieved by constructing a library of synthetic gene circuits
with randomized control sequences (e.g. ribosome-binding sites) in the front
element. The training procedure consists in re-shaping of the master population
in such a way that it collectively responds to the "positive" patterns of input
signals by producing above-threshold output (e.g. fluorescent signal), and
below-threshold output in case of the "negative" patterns. The population
re-shaping is achieved by presenting sequential examples and pruning the
population using either graded selection/counterselection or by
fluorescence-activated cell sorting (FACS). We demonstrate the feasibility of
experimental implementation of such system computationally using a realistic
model of the synthetic sensing gene circuits. (Comment: 31 pages, 9 figures)
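The training procedure described above (a population of weak threshold sensors, re-shaped by selection and counterselection on labelled examples) can be sketched in silico. Everything below is an invented toy analogue: a 1-D "chemical signal", uniform random thresholds standing in for randomized ribosome-binding sites, and hard selection pruning:

```python
# Toy in-silico analogue of the population re-shaping described in the abstract
# (illustrative only; signals, thresholds, and labels are invented).
import random

random.seed(0)
# Master population: each "cell" fires iff the input signal exceeds its
# randomized sensitivity threshold.
population = [random.uniform(0.0, 1.0) for _ in range(1000)]

def collective_output(pop, signal):
    """Fraction of cells producing output -- the population-level readout."""
    return sum(1 for t in pop if signal > t) / len(pop)

# Sequential training examples: signals near 0.7-0.8 are "positive",
# signals near 0.2-0.3 are "negative".
for signal, positive in [(0.7, True), (0.3, False), (0.8, True), (0.2, False)]:
    # Selection/counterselection: keep only cells whose firing matches the label.
    population = [t for t in population if (signal > t) == positive]

print(collective_output(population, 0.9))  # → 1.0 (all survivors fire)
print(collective_output(population, 0.1))  # → 0.0 (no survivor fires)
```

After pruning, only thresholds between the negative and positive example levels survive, so the population collectively responds above threshold to positive-like signals and stays silent otherwise, mirroring the FACS-style selection in the abstract.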
Neural networks: from the perceptron to deep nets
Artificial networks have been studied through the prism of statistical
mechanics as disordered systems since the 80s, starting from the simple models
of Hopfield's associative memory and the single-neuron perceptron classifier.
Assuming data is generated by a teacher model, asymptotic generalisation
predictions were originally derived using the replica method and the online
learning dynamics has been described in the large system limit. In this
chapter, we review the key original ideas of this literature along with their
heritage in the ongoing quest to understand the efficiency of modern deep
learning algorithms. One goal of current and future research is to characterize
the bias of the learning algorithms toward well-generalising minima in
complex overparametrized loss landscapes with many solutions perfectly
interpolating the training data. Works on perceptrons, two-layer committee
machines and kernel-like learning machines shed light on these benefits of
overparametrization. Another goal is to understand the advantage of depth while
models now commonly feature tens or hundreds of layers. While replica
computations apparently fall short of describing learning in general deep
neural networks, studies of simplified linear or untrained models, as well as
the derivation of scaling laws, provide the first elements of an answer.
(Comment: Contribution to the book Spin Glass Theory and Far Beyond: Replica
Symmetry Breaking after 40 Years; Chap. 2)
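The single-neuron perceptron classifier that this chapter takes as its starting point can be sketched with the classic perceptron update rule (the toy data and learning rate below are invented for the sketch):

```python
# Minimal single-neuron perceptron of the kind the chapter reviews
# (classic perceptron rule; data and hyperparameters are illustrative).
def train_perceptron(data, epochs=20, lr=1.0):
    """Learn weights w and bias b with the perceptron update rule.
    `data` is a list of (features, label) pairs with label in {-1, +1}."""
    n = len(data[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in data:
            # Update only on misclassified (or boundary) examples.
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Linearly separable toy data: label is the sign of x0 - x1.
data = [((2.0, 0.0), 1), ((3.0, 1.0), 1), ((0.0, 2.0), -1), ((1.0, 3.0), -1)]
w, b = train_perceptron(data)
predict = lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
print([predict(x) for x, _ in data])  # → [1, 1, -1, -1]
```

On linearly separable data this rule converges in finitely many updates; the statistical-mechanics literature the chapter surveys studies exactly such models in the large-system limit.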
WoodFisher: Efficient Second-Order Approximation for Neural Network Compression
Second-order information, in the form of Hessian- or Inverse-Hessian-vector
products, is a fundamental tool for solving optimization problems. Recently,
there has been significant interest in utilizing this information in the
context of deep neural networks; however, relatively little is known about the
quality of existing approximations in this context. Our work examines this
question, identifies issues with existing approaches, and proposes a method
called WoodFisher to compute a faithful and efficient estimate of the inverse
Hessian.
Our main application is to neural network compression, where we build on the
classic Optimal Brain Damage/Surgeon framework. We demonstrate that WoodFisher
significantly outperforms popular state-of-the-art methods for one-shot
pruning. Further, even when iterative, gradual pruning is considered, our
method results in a gain in test accuracy over the state-of-the-art approaches,
for pruning popular neural networks (like ResNet-50, MobileNetV1) trained on
standard image classification datasets such as ImageNet ILSVRC. We examine how
our method can be extended to take into account first-order information, as
well as illustrate its ability to automatically set layer-wise pruning
thresholds and perform compression in the limited-data regime. The code is
available at https://github.com/IST-DASLab/WoodFisher. (Comment: NeurIPS 2020)
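One way to read the abstract's "faithful and efficient estimate of the inverse Hessian" is through the damped empirical Fisher, whose inverse can be built one gradient at a time with the Sherman-Morrison (Woodbury) identity instead of a full matrix inversion. The sketch below illustrates that identity on random stand-in gradients; it is a minimal assumption-laden illustration, not the library's implementation:

```python
# Sketch: inverse of a damped empirical Fisher, F = (1/N) * sum_i g_i g_i^T
# + damp * I, built by rank-1 Sherman-Morrison updates per gradient.
# Gradients here are random stand-ins, not real network gradients.
import numpy as np

def inverse_empirical_fisher(grads, damp=1e-2):
    """Return (F + damp*I)^{-1} without ever forming or inverting F directly."""
    n, d = grads.shape
    inv = np.eye(d) / damp                   # inverse of the damping term alone
    for g in grads:
        v = inv @ g
        inv -= np.outer(v, v) / (n + g @ v)  # Sherman-Morrison rank-1 update
    return inv

rng = np.random.default_rng(0)
G = rng.normal(size=(8, 5))                  # 8 stand-in gradients, dimension 5
inv = inverse_empirical_fisher(G)
F = G.T @ G / len(G) + 1e-2 * np.eye(5)      # direct construction, for checking
print(np.allclose(inv, np.linalg.inv(F)))    # → True
```

Given such an inverse estimate, the classic Optimal Brain Surgeon framework that the paper builds on scores each weight w_q by w_q**2 / (2 * inv[q, q]) and prunes the lowest-scoring ones; the per-gradient update keeps the cost manageable at network scale.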