Longitudinal performance analysis of machine learning based Android malware detectors
This paper presents a longitudinal study of the performance of machine learning classifiers for Android malware detection. The study is undertaken using features extracted from Android applications first seen between 2012 and 2016. The aim is to investigate the extent of performance decay over time for various machine learning classifiers trained with static features extracted from date-labelled benign and malware application sets. Using date-labelled apps allows for true mimicking of zero-day testing, thus providing a more realistic view of performance than conventional evaluation methods that do not take date of appearance into account. In this study, all the investigated machine learning classifiers showed progressively diminishing performance when tested on sets of samples from a later time period. Overall, it was found that the false positive rate (misclassifying benign samples as malicious) increased more substantially than the fall in the true positive rate (correct classification of malicious apps) when older models were tested on newer app samples.
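The date-aware evaluation protocol described above (train on apps first seen in one period, then test on apps from later periods while tracking true and false positive rates) can be sketched as follows. The data and the fixed-threshold "classifier" are toy stand-ins, not the paper's features or models:

```python
from collections import defaultdict

def tpr_fpr(predictions, labels):
    """True positive rate and false positive rate; label 1 = malware."""
    tp = sum(1 for p, y in zip(predictions, labels) if p == 1 and y == 1)
    fn = sum(1 for p, y in zip(predictions, labels) if p == 0 and y == 1)
    fp = sum(1 for p, y in zip(predictions, labels) if p == 1 and y == 0)
    tn = sum(1 for p, y in zip(predictions, labels) if p == 0 and y == 0)
    return tp / max(tp + fn, 1), fp / max(fp + tn, 1)

# Toy samples: (year first seen, detector score, label); entirely made up.
samples = [
    (2012, 0.9, 1), (2012, 0.2, 0), (2013, 0.8, 1), (2013, 0.3, 0),
    (2014, 0.6, 1), (2014, 0.45, 0), (2015, 0.4, 1), (2015, 0.7, 0),
]
by_year = defaultdict(list)
for year, score, label in samples:
    by_year[year].append((score, label))

# "Train" on the oldest period (here just fixing a decision threshold),
# then test strictly forward in time to mimic zero-day conditions.
threshold = 0.5
for test_year in sorted(y for y in by_year if y > 2012):
    scores, labels = zip(*by_year[test_year])
    preds = [1 if s > threshold else 0 for s in scores]
    tpr, fpr = tpr_fpr(preds, labels)
    print(test_year, round(tpr, 2), round(fpr, 2))
```

With this made-up data the later test years show exactly the pattern the abstract reports: the false positive rate rises as the fixed model ages.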
Reliability measurement without limits
In computational linguistics, a reliability measurement of 0.8 on some agreement statistic is widely thought to guarantee that hand-coded data is fit for purpose, with lower values suspect. We demonstrate that the main use of such data, machine learning, can tolerate data with low reliability as long as any disagreement among human coders looks like random noise. When it does not, however, data can have a reliability of more than 0.8 and still be unsuitable for use: the disagreement may indicate erroneous patterns that machine learning can learn, and evaluation against test data that contain these same erroneous patterns may lead us to draw wrong conclusions about our machine-learning algorithms. Furthermore, lower reliability values still held as acceptable by many researchers, between 0.67 and 0.8, may even yield inflated performance figures in some circumstances. Although this is a common-sense result, it has implications for how we work that are likely to reach beyond the machine-learning applications we discuss. At the very least, computational linguists should look for any patterns in the disagreement among coders and assess what impact they will have.
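The kind of chance-corrected agreement coefficient the abstract refers to can be computed directly. The sketch below uses Cohen's kappa for two coders, which is one common choice but not necessarily the statistic this paper analyses, on made-up labels:

```python
def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(coder_a)
    labels = set(coder_a) | set(coder_b)
    observed = sum(1 for a, b in zip(coder_a, coder_b) if a == b) / n
    # Chance agreement from each coder's marginal label frequencies.
    expected = sum(
        (coder_a.count(lab) / n) * (coder_b.count(lab) / n) for lab in labels
    )
    return (observed - expected) / (1 - expected)

a = ["pos", "pos", "neg", "neg", "pos", "neg"]
b = ["pos", "neg", "neg", "neg", "pos", "pos"]
print(round(cohens_kappa(a, b), 3))  # → 0.333

# The paper's advice: also inspect *where* coders disagree, since
# systematic (non-random) disagreement can mislead learning and evaluation.
disagreements = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
print(disagreements)  # → [1, 5]
```

The single coefficient compresses away the disagreement structure, which is exactly why the abstract argues for examining the disagreements themselves.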
Machine Learning and Clinical Text: Supporting Health Information Flow
Fluent health information flow is critical for clinical decision-making. However, a considerable part of this information is free-form text, and the inability to utilize it creates risks to patient safety and cost-effective hospital administration. Methods for automated processing of clinical text are emerging.
The aim in this doctoral dissertation is to study machine learning and clinical text in order to support health information flow. First, by analyzing the content of authentic patient records, the aim is to specify clinical needs in order to guide the development of machine learning applications. The contributions are a model of the ideal information flow, a model of the problems and challenges in reality, and a road map for the technology development.
Second, by developing applications for practical cases, the aim is to concretize ways to support health information flow.
Altogether five machine learning applications for three practical cases are described: The first two applications are binary classification and regression related to the practical case of topic labeling and relevance ranking. The third and fourth applications are supervised and unsupervised multi-class classification for the practical case of topic segmentation and labeling. These four applications are tested with Finnish intensive care patient records. The fifth application is multi-label classification for the practical task of diagnosis coding. It is tested with English radiology reports. The performance of all these applications is promising.
Third, the aim is to study how the quality of machine learning applications can be reliably evaluated. The associations between performance evaluation measures and methods are addressed, and a new hold-out method is introduced. This method contributes not only to processing time but also to the evaluation diversity and quality.
The main conclusion is that developing machine learning applications for text requires interdisciplinary, international collaboration. Practical cases are very different, and hence the development must begin from genuine user needs and domain expertise. The technological expertise must cover linguistics, machine learning, and information systems. Finally, the methods must be evaluated both statistically and through authentic user feedback.
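For the diagnosis-coding case, where each report carries a set of codes, evaluation needs a multi-label metric. The dissertation's own measures are not specified in this abstract; the sketch below shows one standard option, micro-averaged F1, which pools true and false positives across all codes (the code identifiers are made up):

```python
def micro_f1(gold, predicted):
    """Micro-averaged F1 over per-document sets of labels."""
    tp = sum(len(g & p) for g, p in zip(gold, predicted))
    fp = sum(len(p - g) for g, p in zip(gold, predicted))
    fn = sum(len(g - p) for g, p in zip(gold, predicted))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Each report is assigned a set of diagnosis codes (hypothetical values).
gold = [{"786.2", "518.0"}, {"486"}]
pred = [{"786.2"}, {"486", "518.0"}]
print(round(micro_f1(gold, pred), 3))  # → 0.667
```

Micro-averaging weights frequent codes more heavily; macro-averaging per code is the usual alternative when rare diagnoses matter equally.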
CrypTen: Secure Multi-Party Computation Meets Machine Learning
Secure multi-party computation (MPC) allows parties to perform computations
on data while keeping that data private. This capability has great potential
for machine-learning applications: it facilitates training of machine-learning
models on private data sets owned by different parties, evaluation of one
party's private model using another party's private data, etc. Although a range
of studies implement machine-learning models via secure MPC, such
implementations are not yet mainstream. Adoption of secure MPC is hampered by
the absence of flexible software frameworks that "speak the language" of
machine-learning researchers and engineers. To foster adoption of secure MPC in
machine learning, we present CrypTen: a software framework that exposes popular
secure MPC primitives via abstractions that are common in modern
machine-learning frameworks, such as tensor computations, automatic
differentiation, and modular neural networks. This paper describes the design
of CrypTen and measures its performance on state-of-the-art models for text
classification, speech recognition, and image classification. Our benchmarks
show that CrypTen's GPU support and high-performance communication between (an
arbitrary number of) parties allow it to perform efficient private evaluation
of modern machine-learning models under a semi-honest threat model. For
example, two parties using CrypTen can securely predict phonemes in speech
recordings using Wav2Letter faster than real-time. We hope that CrypTen will
spur adoption of secure MPC in the machine-learning community.
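The core primitive behind frameworks like CrypTen can be illustrated with additive secret sharing, where each party holds a random-looking share and only the sum reveals the value. This is a toy stdlib sketch of the general idea under a semi-honest model, not CrypTen's actual implementation or API:

```python
import random

MODULUS = 2**64  # shares live in a ring of 64-bit integers

def share(value, n_parties=2):
    """Split value into n random shares that sum to it mod MODULUS."""
    shares = [random.randrange(MODULUS) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MODULUS)
    return shares

def reconstruct(shares):
    return sum(shares) % MODULUS

# Each party holds one share of each input; no single share reveals anything.
x_shares = share(7)
y_shares = share(35)
# Addition is local: each party adds its own shares, no communication needed.
z_shares = [(a + b) % MODULUS for a, b in zip(x_shares, y_shares)]
print(reconstruct(z_shares))  # → 42
```

Multiplication, comparisons, and non-linearities are where real protocols (and CrypTen's communication layer) do the heavy lifting; only linear operations are free like this.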
AutoMLBench: A Comprehensive Experimental Evaluation of Automated Machine Learning Frameworks
Nowadays, machine learning is playing a crucial role in harnessing the power
of the massive amounts of data that we are currently producing every day in our
digital world. With the booming demand for machine learning applications, it
has been recognized that the number of knowledgeable data scientists cannot
scale with the growing data volumes and application needs.
In response to this demand, several automated machine learning (AutoML)
techniques and frameworks have been developed to fill the gap of human
expertise by automating the process of building machine learning pipelines. In
this study, we present a comprehensive evaluation and comparison of the
performance characteristics of six popular AutoML frameworks, namely,
Auto-Weka, AutoSKlearn, TPOT, Recipe, ATM, and SmartML across 100 data sets
from established AutoML benchmark suites. Our experimental evaluation considers
several aspects of the comparison, including the performance impact of
design decisions such as time budget, size of search space,
meta-learning, and ensemble construction. The results of our study reveal
various interesting insights that can significantly guide and impact the design
of AutoML frameworks.
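At their core, the frameworks compared above search a space of pipeline and hyperparameter configurations under a time budget. A minimal random-search sketch of that loop is below; the search space, objective, and names are all hypothetical toys, not any framework's actual design:

```python
import random

SEARCH_SPACE = {
    "model": ["knn", "tree", "linear"],
    "regularization": [0.01, 0.1, 1.0],
}

def evaluate(config):
    """Stand-in for cross-validated accuracy of a fitted pipeline."""
    base = {"knn": 0.80, "tree": 0.85, "linear": 0.75}[config["model"]]
    return base - 0.05 * abs(config["regularization"] - 0.1)

def random_search(time_budget=20, seed=0):
    """Sample configs at random, keep the best seen within the budget."""
    rng = random.Random(seed)
    best_config, best_score = None, float("-inf")
    for _ in range(time_budget):  # budget = number of trials here
        config = {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}
        score = evaluate(config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score

config, score = random_search()
print(config, round(score, 3))
```

Real AutoML frameworks replace the random sampler with Bayesian optimization, genetic programming, or meta-learning warm starts, and add ensembling over the evaluated pipelines, which are exactly the design decisions the study measures.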