Machine Learning Approach to Automated Quality Identification of Human Induced Pluripotent Stem Cell Colony Images
The focus of this research is the automated identification of the quality of human induced pluripotent stem cell (iPSC) colony images. iPS cell technology is a contemporary method by which a patient's cells are reprogrammed into stem cells and then differentiated into any desired cell type. iPS cell technology is expected to be used in the future for patient-specific drug screening, disease modeling, and tissue repair, for instance. However, there are technical challenges to overcome before iPS cell technology can be used in practice, and one of them is the quality control of growing iPSC colonies, which is currently done manually but is infeasible for large-scale cultures. The monitoring problem reduces to an image analysis and classification problem. In this paper, we tackle this problem using machine learning methods such as multiclass Support Vector Machines and several baseline methods, together with Scale-Invariant Feature Transform (SIFT) based features. We perform over 80 test arrangements and a thorough parameter value search. The best classification accuracy (62.4%) was obtained with a k-NN classifier, showing improved accuracy compared to earlier studies.
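The best-performing classifier above was k-NN on SIFT-derived feature vectors. As a minimal sketch of the majority-vote rule involved (the toy two-dimensional vectors below stand in for the actual SIFT-based features and are an assumption, not the paper's data):

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Predict each test vector's label by majority vote among its k nearest
    training vectors (Euclidean distance)."""
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)   # distances to all training points
        nearest = y_train[np.argsort(d)[:k]]      # labels of the k closest
        vals, counts = np.unique(nearest, return_counts=True)
        preds.append(vals[np.argmax(counts)])     # majority vote
    return np.array(preds)

# toy 2-D vectors standing in for SIFT-derived image features
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
print(knn_predict(X, y, np.array([[0.05, 0.1], [1.0, 0.9]]), k=3))   # → [0 1]
```

With the grid search the paper describes, k and the distance metric would be tuned over many such "test arrangements".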
On the Bayes-optimality of F-measure maximizers
The F-measure, which was originally introduced in information retrieval,
is nowadays routinely used as a performance metric for problems such as binary
classification, multi-label classification, and structured output prediction.
Optimizing this measure is a statistically and computationally challenging
problem, since no closed-form solution exists. Adopting a decision-theoretic
perspective, this article provides a formal and experimental analysis of
different approaches for maximizing the F-measure. We start with a Bayes-risk
analysis of related loss functions, such as Hamming loss and subset zero-one
loss, showing that optimizing such losses as a surrogate of the F-measure leads
to a high worst-case regret. Subsequently, we perform a similar type of
analysis for F-measure maximizing algorithms, showing that such algorithms are
approximate, while relying on additional assumptions regarding the statistical
distribution of the binary response variables. Furthermore, we present a new
algorithm which is not only computationally efficient but also Bayes-optimal,
regardless of the underlying distribution. To this end, the algorithm requires
only a quadratic (with respect to the number of binary responses) number of
parameters of the joint distribution. We illustrate the practical performance
of all analyzed methods by means of experiments with multi-label classification
problems.
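For intuition on why F-measure maximization has no closed form: the measure couples all predictions through shared precision and recall terms, so common approximate maximizers tune a single decision threshold on predicted scores rather than thresholding each marginal at 0.5. A minimal sketch (the scores and labels are toy values of our own choosing, not from the paper's experiments):

```python
def f1(y_true, y_pred):
    """F-measure: harmonic mean of precision and recall (0 if no positives)."""
    tp = sum(t and p for t, p in zip(y_true, y_pred))
    fp = sum((not t) and p for t, p in zip(y_true, y_pred))
    fn = sum(t and (not p) for t, p in zip(y_true, y_pred))
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

def best_threshold(y_true, scores):
    """Pick the score threshold that maximizes F1 on held-out data — a common
    approximate maximizer; it is not Bayes-optimal in general."""
    return max(sorted(set(scores)),
               key=lambda t: f1(y_true, [s >= t for s in scores]))

y = [1, 0, 1, 1, 0, 0, 0, 0]
s = [0.9, 0.6, 0.55, 0.5, 0.4, 0.3, 0.2, 0.1]
t = best_threshold(y, s)
print(t, f1(y, [v >= t for v in s]))   # here the 0.5 cut happens to be optimal
```

The paper's point is that such threshold-based algorithms rely on distributional assumptions, whereas its proposed algorithm is Bayes-optimal using only quadratically many parameters of the joint distribution.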
Customer selection for direct marketing: bi-objective optimization using support vector machine
A major challenge in direct marketing is to build a customer-selection model that can help achieve a higher response rate and greater profit at the same time. In this study, I adopt a bi-objective optimization (BOO) approach and propose a two-stage method using support vector machine (SVM) and support vector regression (SVR) to maximize response rate and profit simultaneously. To deal with the difficulty of learning models from imbalanced data, the synthetic minority over-sampling technique (SMOTE) is used to generate more balanced datasets. Experiments are conducted on two datasets, a direct marketing dataset and the KDD-98 dataset, to compare the predictive performance of the two-stage BOOSVM with that of other benchmark methods, including logistic regression and the parallel Multi-objective Evolutionary Algorithm (MOEA). The results of decile analysis suggest that the proposed two-stage BOOSVM model with the SMOTE method is more effective and efficient than the competing models in improving response rate and profitability.
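The SMOTE step mentioned above can be sketched in a few lines: each synthetic minority sample is an interpolation between a minority point and one of its nearest minority neighbours. This is a simplified stand-in for the published algorithm (toy data and the function name are our own):

```python
import random
import numpy as np

def smote_like(X_min, n_new, k=2, seed=0):
    """Create n_new synthetic minority samples, each interpolated between a
    random minority point and one of its k nearest minority neighbours
    (the core idea of SMOTE, simplified)."""
    rng = random.Random(seed)
    synth = []
    for _ in range(n_new):
        i = rng.randrange(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nbrs = list(np.argsort(d)[1:k + 1])      # nearest neighbours, excluding i
        j = rng.choice(nbrs)
        lam = rng.random()                       # interpolation factor in [0, 1)
        synth.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(synth)

# three minority points; ask for four synthetic ones
X_minority = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
S = smote_like(X_minority, n_new=4)
```

Because each synthetic point lies on a segment between two real minority points, the oversampled class stays inside its own convex hull rather than duplicating exact copies.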
Using neural networks and support vector machines for default prediction in South Africa
A thesis submitted to the Faculty of Computer Science and Applied Mathematics,
University of Witwatersrand,
in fulfillment of the requirements for the
Master of Science (MSc)
Johannesburg
Feb 2017

This is a thesis on credit risk, and in particular bankruptcy prediction. It investigates
the application of machine learning techniques such as support vector machines and
neural networks for this purpose. This is not a thesis on support vector machines
and neural networks; it simply uses these methods as tools to perform the
analysis.
Neural networks are a type of machine learning algorithm. They are nonlinear
models inspired by the biological networks of neurons found in the human central
nervous system. They involve a cascade of simple nonlinear computations that,
when aggregated, can implement robust and complex nonlinear functions. Neural
networks can approximate most nonlinear functions, making them a quite powerful
class of models.
Support vector machines (SVMs) are a more recent development from the machine
learning community. In machine learning, SVMs are supervised learning algorithms
that analyze data and recognize patterns, used for classification and regression
analysis. An SVM takes a set of input data and predicts, for each given input,
which of two possible classes the input belongs to, making the SVM a
non-probabilistic binary linear classifier. A support vector machine constructs a
hyperplane or set of hyperplanes in a high- or infinite-dimensional space, which
can be used for classification into the two different data classes.
Traditional bankruptcy prediction modelling has been criticised because it makes
certain assumptions about the underlying data. For instance, a frequent requirement
for multivariate analysis is a joint normal distribution and independence of variables.
Support vector machines (and neural networks) are a useful tool for default analysis
because they make far fewer assumptions about the underlying data.
In this framework, support vector machines are used as a classifier to discriminate
between defaulting and non-defaulting companies in a South African context. The
required input data is a set of financial ratios constructed from each company's
historic financial statements. The data is then divided into two groups: companies
that have defaulted and companies that are healthy (non-default). The final data
sample used for this thesis consists of 23 financial ratios from 67 companies listed
on the JSE. Furthermore, for each company the probability of default is predicted.
The results are benchmarked against more classical methods that are commonly used
for bankruptcy prediction, such as linear discriminant analysis and logistic regression.
The results of the support vector machines, neural networks, linear discriminant
analysis and logistic regression are then assessed via their receiver operating
characteristic curves and profitability ratios to determine which model is more
successful at predicting default.
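The receiver operating characteristic assessment mentioned above can be summarized by the area under the curve, which equals the probability that a randomly chosen defaulter is scored above a randomly chosen healthy company. A small sketch with invented scores (not the thesis's models or data):

```python
def roc_auc(y_true, scores):
    """Area under the ROC curve via the Mann-Whitney statistic: the fraction
    of (defaulter, non-defaulter) pairs ranked correctly (ties count half)."""
    pos = [s for s, t in zip(scores, y_true) if t == 1]
    neg = [s for s, t in zip(scores, y_true) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# 1 = defaulted, 0 = healthy; scores are invented model outputs
y = [1, 1, 0, 0, 0]
model_a = [0.9, 0.8, 0.7, 0.3, 0.1]   # ranks both defaulters on top
model_b = [0.9, 0.2, 0.7, 0.3, 0.1]   # misranks one defaulter
```

Here `roc_auc(y, model_a)` is 1.0 (perfect ranking) while `model_b` scores about 0.67, which is how competing default models can be compared independently of any single cutoff.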
Visual Transfer Learning in the Absence of the Source Data
Image recognition has become one of the most popular topics in machine learning. With the development of deep Convolutional Neural Networks (CNNs) and the help of large-scale labeled image databases such as ImageNet, modern image recognition models can achieve performance competitive with human annotation on some general image recognition tasks. Many IT companies have adopted them to improve their vision-related tasks. However, training these large-scale deep neural networks requires thousands or even millions of labeled images, which is an obstacle when applying them to a specific visual task with limited training data. Visual transfer learning is proposed to solve this problem. Visual transfer learning aims at transferring the knowledge from a source visual task to a target visual task. Typically, the target task is related to the source task, and the training data in the target task is relatively small. In visual transfer learning, the majority of existing methods assume that the source data is freely available and use the source data to measure the discrepancy between the source and target task to help the transfer process. However, in many real applications, source data are often subject to legal, technical and contractual constraints between data owners and data customers. Beyond privacy and disclosure obligations, customers are often reluctant to share their data. When operating customer care, collected data may include information on recent technical problems, which is a highly sensitive topic that companies are not willing to share. This scenario, in which the source data is absent, is often called Hypothesis Transfer Learning (HTL). Therefore, these previous methods cannot be applied to many real visual transfer learning problems. In this thesis, we investigate the visual transfer learning problem under the HTL setting.
Instead of using the source data to measure the discrepancy, we use the source model as a proxy to transfer the knowledge from the source task to the target task. Compared to the source data, the well-trained source model is usually freely accessible in many tasks and contains equivalent source knowledge as well. Specifically, in this thesis, we investigate visual transfer learning in two scenarios: domain adaptation and learning new categories. In contrast to previous methods in HTL, our methods can both leverage knowledge from more types of source models and achieve better transfer performance. In chapter 3, we investigate the visual domain adaptation problem under the setting of Hypothesis Transfer Learning. We propose Effective Multiclass Transfer Learning (EMTLe), which can effectively transfer the knowledge when the size of the target set is small. Specifically, EMTLe uses the outputs of the source models as an auxiliary bias to adjust the prediction in the target task. Experimental results show that EMTLe can outperform other baselines under the setting of HTL. In chapter 4, we investigate the semi-supervised domain adaptation scenario under the setting of HTL and propose our framework Generalized Distillation Semi-supervised Domain Adaptation (GDSDA). Specifically, we show that GDSDA can effectively transfer the knowledge using the unlabeled data. We also demonstrate that the imitation parameter, the hyperparameter in GDSDA that balances the knowledge from the source and target task, is important to the transfer performance. Then we propose GDSDA-SVM, which uses SVMs as the base classifier in GDSDA. We show that GDSDA-SVM can determine the imitation parameter in GDSDA autonomously. Compared to previous methods, whose imitation parameter can only be determined by either brute-force search or background knowledge, GDSDA-SVM is more effective in real applications.
In chapter 5, we investigate the problem of fine-tuning a deep CNN to learn new food categories using the large ImageNet database as our source. Without access to the source data, i.e. the ImageNet dataset, we show that by fine-tuning the parameters of the source model with our target food dataset, we can achieve better performance compared to previous methods. To conclude, the main contribution of this thesis is that we investigate the visual transfer learning problem under the HTL setting. We propose several methods to transfer the knowledge from the source task in supervised and semi-supervised learning scenarios. Extensive experimental results show that without access to any source data, our methods can outperform previous work.
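The imitation parameter described above can be illustrated with the generalized-distillation objective it balances: a convex combination of the loss against hard target labels and the loss against the source model's soft predictions. A minimal numpy sketch (the function name and toy numbers are our own, not the thesis's implementation):

```python
import numpy as np

def distillation_loss(p_student, y_hard, p_teacher, lam):
    """Generalized distillation objective: lam weighs the cross-entropy
    against hard labels, (1 - lam) the cross-entropy against the source
    (teacher) model's soft predictions."""
    eps = 1e-12
    ce_hard = -np.mean(np.log(p_student[np.arange(len(y_hard)), y_hard] + eps))
    ce_soft = -np.mean(np.sum(p_teacher * np.log(p_student + eps), axis=1))
    return lam * ce_hard + (1 - lam) * ce_soft

# two samples, two classes: student probabilities, hard labels, teacher probabilities
p_s = np.array([[0.8, 0.2], [0.3, 0.7]])
y = np.array([0, 1])
p_t = np.array([[0.9, 0.1], [0.2, 0.8]])
loss = distillation_loss(p_s, y, p_t, lam=0.5)
```

At `lam=1` the objective ignores the teacher entirely; at `lam=0` it ignores the target labels; GDSDA-SVM's contribution, per the abstract, is choosing this value autonomously rather than by brute-force search.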
Gene set based ensemble methods for cancer classification
Diagnosis of cancer very often depends on conclusions drawn after both clinical and microscopic examinations of tissues to study the manifestation of the disease in order to place tumors in known categories. One factor which determines the categorization of cancer is the tissue from which the tumor originates. Information gathered from clinical exams may be partial or not completely predictive of a specific category of cancer. Further complicating the problem of categorizing various tumors is that the histological classification of the cancer tissue and description of its course of development may be atypical. Gene expression data gleaned from micro-array analysis provides tremendous promise for more accurate cancer diagnosis. One hurdle in the classification of tumors based on gene expression data is that the data space is ultra-dimensional with relatively few points; that is, there are a small number of examples with a large number of genes. A second hurdle is expression bias caused by the correlation of genes. Analysis of subsets of genes, known as gene set analysis, provides a mechanism by which groups of differentially expressed genes can be identified. We propose an ensemble of classifiers whose base classifiers are ℓ1-regularized logistic regression models with restriction of the feature space to biologically relevant genes. Some researchers have already explored the use of ensemble classifiers to classify cancer, but the effect of the underlying base classifiers in conjunction with biologically-derived gene sets on cancer classification has not been explored.
Towards A Robust Arabic Speech Recognition System Based On Reservoir Computing
In this thesis we investigate the potential of developing a speech recognition system based on a recently introduced artificial neural network (ANN) technique, namely Reservoir Computing (RC). This technique has, in theory, a higher capability for modelling dynamic behaviour than feed-forward ANNs, due to the recurrent connections between the nodes in the reservoir layer, which serves as a memory. We conduct this study on the Arabic language (one of the most spoken languages in the world and the official language in 26 countries) because there is a serious gap in the literature on speech recognition systems for Arabic, making the potential impact high. The investigation covers a variety of tasks, including the implementation of the first reservoir-based Arabic speech recognition system. In addition, a thorough evaluation of the developed system is conducted, including several comparisons to baseline models and other state-of-the-art models found in the literature. The impact of feature extraction methods is studied in this work, and a new biologically inspired feature extraction technique, namely the Auditory Nerve feature, is applied to the speech recognition domain. Comparing different feature extraction methods requires access to the original recorded sound, which is not possible with the only publicly accessible Arabic corpus. We have therefore developed the largest public Arabic corpus for isolated words, which contains roughly 10,000 samples. Our investigation has led us to develop two novel approaches based on reservoir computing, ESNSVMs (Echo State Networks with Support Vector Machines) and ESNEKMs (Echo State Networks with Extreme Kernel Machines). These aim to improve the performance of the conventional RC approach by proposing different readout architectures. These two approaches have been compared to the conventional RC approach and other state-of-the-art systems.
Finally, these developed approaches have been evaluated in the presence of different types and levels of noise to examine their resilience to noise, which is crucial for real-world applications.
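The reservoir-computing idea above — a fixed random recurrent layer acting as a memory, with only a linear readout trained — can be sketched as a tiny echo state network learning to recall its input with a one-step delay (the toy task and hyperparameters are our own choices, not the thesis's Arabic speech setup):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 50

# fixed random input and recurrent weights; only the readout is trained
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # spectral radius < 1: echo state property

def run_reservoir(u):
    """Drive the reservoir with the input sequence u and collect its tanh states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
        states.append(x.copy())
    return np.array(states)

# train the linear readout (ridge regression) to recall the input one step back,
# relying on the recurrent connections as the memory
u = rng.uniform(-1, 1, 300)
X, y = run_reservoir(u)[5:], u[4:-1]        # state at time t -> target u[t-1]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
mse = float(np.mean((X @ W_out - y) ** 2))
```

The ESNSVM and ESNEKM variants described in the abstract keep this fixed reservoir but replace the linear ridge readout with SVM or extreme-kernel-machine readouts.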
Effective techniques for handling incomplete data using decision trees
Decision Trees (DTs) have been recognized as one of the most successful formalisms for knowledge representation and reasoning and are currently applied to a variety of data mining and knowledge discovery applications, particularly classification problems. There are several efficient methods to learn a DT from data. However, these methods often rely on the assumption that the data are complete.
In this thesis, some contributions to the field of machine learning and statistics that solve the problem of extracting DTs for learning and classification tasks from incomplete databases are presented. The methodology underlying the thesis blends together well-established statistical theories with the most advanced techniques for machine learning and automated reasoning with uncertainty.
The first contribution is the extensive simulations which study the impact of missing data on predictive accuracy of existing DTs which can cope with missing values, when missing values are in both the training and test sets or when they are in either of the two sets. All simulations are performed under missing completely at random, missing at random and informatively missing mechanisms and for different missing data patterns and proportions.
The next contribution is a simple, novel, yet effective procedure for training and testing decision trees in the presence of missing data. Original and simple splitting criteria for attribute selection in tree building are put forward. The proposed technique is evaluated and validated in empirical tests over many real-world application domains. In this work, the proposed algorithm maintains (and sometimes exceeds) the outstanding accuracy of multiple imputation, especially on datasets containing mixed attributes and purely nominal attributes. The proposed algorithm also greatly improves accuracy for informatively missing data. Another major advantage of this method over multiple imputation is the important saving in computational resources due to its simplicity.
The next contribution is the proposal of three versions of simple probabilistic techniques that can be used for classifying incomplete vectors using decision trees built from complete data. The proposed procedure is superficially similar to that of fractional cases but more effective. The experimental results demonstrate that these approaches can achieve quality comparable to sophisticated algorithms like multiple imputation and are therefore applicable to all kinds of datasets.
Finally, two novel ensemble procedures for handling incomplete training and test data are proposed and discussed. The algorithms combine the two best approaches either with resampling (REMIMIA) or without resampling (EMIMIA) of the training data before growing the decision trees. Experiments in the form of empirical tests are used to evaluate and validate the success of the proposed ensemble methods with respect to individual missing data techniques. EMIMIA attains the highest overall level of prediction accuracy.
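The fractional-cases idea that the probabilistic techniques above build on can be sketched directly: when a test instance lacks the attribute a node splits on, its weight is divided among all branches in proportion to the training data that followed each branch. The toy tree and attribute names below are illustrative, not from the thesis:

```python
def classify(tree, x, weight=1.0):
    """Route an instance through a decision tree. At a split on a missing
    attribute, fan out to every branch weighted by that branch's training
    frequency, then sum the class distributions ('fractional cases')."""
    if "label" in tree:                          # leaf: class distribution
        return {c: weight * p for c, p in tree["label"].items()}
    attr = tree["attr"]
    if x.get(attr) is None:                      # attribute value missing
        out = {}
        for frac, sub in tree["branches"].values():
            for c, w in classify(sub, x, weight * frac).items():
                out[c] = out.get(c, 0.0) + w
        return out
    frac, sub = tree["branches"][x[attr]]
    return classify(sub, x, weight)

# toy stump splitting on "income"; 70% of training cases took the "high" branch
tree = {"attr": "income",
        "branches": {"high": (0.7, {"label": {"default": 0.1, "ok": 0.9}}),
                     "low":  (0.3, {"label": {"default": 0.6, "ok": 0.4}})}}
posterior = classify(tree, {"income": None})     # 0.7 * high + 0.3 * low
```

Here the missing-income instance receives the mixture distribution {default: 0.25, ok: 0.75}; the thesis's three probabilistic variants refine how these branch weights are chosen.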
Statistical Methods to Enhance Clinical Prediction with High-Dimensional Data and Ordinal Response
Technological progress today makes it possible to examine the molecular
configuration of individual cells or entire tissue samples. Such
high-dimensional omics data from molecular biology, produced in large
quantities, can be generated at ever lower cost and are therefore
increasingly used in clinical settings. Personalized diagnosis, or the
prediction of treatment success on the basis of such high-throughput data,
is a modern application of machine learning techniques. In practice,
clinical parameters such as health status or the side effects of a therapy
are often recorded on an ordinal scale (for example good, normal, poor).

It is common to treat classification problems with an ordinally scaled
endpoint like general multi-class problems and thereby ignore the
information contained in the ordering of the classes. However, neglecting
this information can lead to reduced classification performance or even
produce an undesirable unordered classification. Classical approaches that
model an ordinally scaled endpoint directly, such as the cumulative link
model, typically cannot be applied to high-dimensional data.

In this work we present hierarchical twoing (hi2), an algorithm for the
classification of high-dimensional data into ordinally scaled categories.
hi2 harnesses the power of the very well understood binary classification
to classify into ordinal categories as well. An open-source implementation
of hi2 is available online.

In a comparison study on the classification of real as well as simulated
data with an ordinal endpoint, established methods designed specifically
for ordered categories do not generally produce better results than
state-of-the-art non-ordinal classifiers. An algorithm's ability to handle
high-dimensional data dominates classification performance. We show that
our algorithm hi2 consistently achieves good results and in many cases
outperforms the other methods.
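hi2 builds ordinal classification out of well-understood binary classification. One generic way to do this (a cumulative decomposition, sketched here as an illustration rather than hi2's actual hierarchical-twoing splits) trains one binary model per "is the class greater than k?" question and sums the answers:

```python
def fit_ordinal(X, y, n_classes, fit_binary):
    """Reduce ordinal classification to n_classes - 1 binary problems,
    the k-th one answering: is the true class greater than k?"""
    return [fit_binary(X, [int(yi > k) for yi in y]) for k in range(n_classes - 1)]

def predict_ordinal(models, x):
    """Predicted ordinal class = number of 'greater than k' answers that are yes."""
    return sum(m(x) for m in models)

def fit_binary(X, y):
    """Trivial 1-D threshold learner standing in for any binary classifier."""
    ones = [xi for xi, yi in zip(X, y) if yi == 1]
    t = min(ones) if ones else float("inf")
    return lambda x: int(x >= t)

X = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [0, 0, 1, 1, 2, 2]                           # three ordered categories
models = fit_ordinal(X, y, n_classes=3, fit_binary=fit_binary)
print([predict_ordinal(models, x) for x in X])   # → [0, 0, 1, 1, 2, 2]
```

With imperfect binary models the summed answers need not be monotone in k, which is one reason schemes like hi2 organize the binary subproblems more carefully than this naive decomposition.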