Packaging and Sharing Machine Learning Models via the Acumos AI Open Platform
Applying Machine Learning (ML) to business applications for automation
usually faces difficulties when integrating diverse ML dependencies and
services, mainly because of the lack of a common ML framework. In most cases,
the ML models are developed for applications targeted at specific
business-domain use cases, leading to duplicated effort and making reuse
impossible. This paper presents Acumos, an open platform capable of packaging
ML models into portable containerized microservices which can be easily shared
via the platform's catalog, and can be integrated into various business
applications. We present a case study of packaging sentiment analysis and
classification ML models via the Acumos platform, permitting easy sharing with
others. We demonstrate that the Acumos platform reduces the technical burden on
application developers when applying machine learning models to their business
applications. Furthermore, the platform allows the reuse of readily available
ML microservices in various business domains.
Comment: ICMLA 2018: International Conference on Machine Learning and Applications
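The core pattern the Acumos abstract describes is wrapping a trained model behind a small containerized service so that business applications call it over HTTP instead of importing its ML dependencies. Below is a minimal, hypothetical sketch of that pattern; the names (`SentimentModel`, the `/predict`-style endpoint) are illustrative and do not reflect Acumos's actual onboarding API, and the "model" is a trivial lexicon classifier standing in for a real one.

```python
# Hypothetical sketch of the microservice-packaging pattern: a model is
# exposed through a small HTTP endpoint so callers never touch its internals.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class SentimentModel:
    """Stand-in for a packaged ML model (here, a trivial lexicon classifier)."""
    POSITIVE = {"good", "great", "excellent", "love"}
    NEGATIVE = {"bad", "poor", "terrible", "hate"}

    def predict(self, text: str) -> str:
        words = set(text.lower().split())
        score = len(words & self.POSITIVE) - len(words & self.NEGATIVE)
        return "positive" if score > 0 else "negative" if score < 0 else "neutral"


MODEL = SentimentModel()


class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read a JSON body like {"text": "..."} and return the model's label.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        label = MODEL.predict(payload.get("text", ""))
        body = json.dumps({"label": label}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)


def serve(port: int = 8080) -> None:
    # In the containerized setting this port is what the image exposes;
    # sharing the image shares the model without sharing its code.
    HTTPServer(("0.0.0.0", port), PredictHandler).serve_forever()
```

Because the container boundary hides the model's framework and dependencies, two models built with incompatible ML stacks can still be composed by the same application, which is the reuse argument the abstract makes.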
ISTHMUS: Secure, Scalable, Real-time and Robust Machine Learning Platform for Healthcare
In recent times, machine learning (ML) and artificial intelligence (AI) based
systems have evolved and scaled across industries such as finance,
retail, insurance, and energy utilities. Among other things, they have been
used to predict patterns of customer behavior, to generate pricing models, and
to predict the return on investments. But the successes in deploying machine
learning models at scale in those industries have not translated into the
healthcare setting. There are multiple reasons why integrating ML models into
healthcare has not been widely successful, but from a technical perspective,
general-purpose commercial machine learning platforms are not a good fit for
healthcare due to complexities in handling data quality issues, mandates to
demonstrate clinical relevance, and a lack of ability to monitor performance in
a highly regulated environment with stringent security and privacy needs. In
this paper, we describe Isthmus, a turnkey, cloud-based platform which
addresses the challenges above and reduces time to market for operationalizing
ML/AI in healthcare. Towards the end, we describe three case studies which shed
light on Isthmus capabilities. These include (1) supporting an end-to-end
lifecycle of a model which predicts trauma survivability at hospital trauma
centers, (2) bringing in and harmonizing data from disparate sources to create
a community data platform for inferring population as well as patient level
insights for Social Determinants of Health (SDoH), and (3) ingesting
live-streaming data from various IoT sensors to build models, which can
leverage real-time and longitudinal information to make advanced time-sensitive
predictions.
Comment: 11 pages, 7 figures. Comments are welcome.
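Case study (3) combines a live window of sensor readings with longitudinal history to make time-sensitive predictions. The sketch below is an illustrative reading of that idea, not Isthmus's actual implementation: a short sliding window captures the real-time signal, while the accumulated history (excluding the live window) provides the longitudinal baseline against which deviations are flagged.

```python
# Illustrative sketch: flag when a short real-time window of readings
# drifts far from a sensor's own longitudinal baseline.
from collections import deque
from statistics import mean, stdev


class StreamingMonitor:
    def __init__(self, window: int = 10, threshold: float = 3.0):
        self.recent = deque(maxlen=window)  # real-time sliding window
        self.history = []                   # longitudinal record for this sensor
        self.threshold = threshold          # alert at this many baseline stdevs

    def ingest(self, reading: float) -> bool:
        """Ingest one reading; return True if the live window is anomalous."""
        self.recent.append(reading)
        self.history.append(reading)
        window = self.recent.maxlen
        past = self.history[:-window]  # baseline excludes the live window
        if len(past) < 30 or len(self.recent) < window:
            return False  # not enough longitudinal context yet
        baseline, spread = mean(past), stdev(past)
        return abs(mean(self.recent) - baseline) > self.threshold * spread
```

A real deployment would replace the in-memory lists with the platform's ingestion and storage layers and a learned model rather than a z-score rule, but the real-time-plus-longitudinal structure is the same.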
Quantifying Transparency of Machine Learning Systems through Analysis of Contributions
Increased adoption and deployment of machine learning (ML) models into
business, healthcare and other organisational processes will result in a
growing disconnect between the engineers and researchers who developed the
models and the models' users and other stakeholders, such as regulators or
auditors. This disconnect is inevitable, as models begin to be used over a
number of years or are shared among third parties through user communities or
via commercial marketplaces, and it will become increasingly difficult for
users to maintain ongoing insight into the suitability of the parties who
created the model, or the data that was used to train it. This could become
problematic, particularly where regulations change and once-acceptable
standards become outdated, or where data sources are discredited, perhaps
judged to be biased or corrupted, either deliberately or unwittingly. In this
paper we present a method for deriving a quantifiable metric that ranks
the transparency of the process pipelines used to generate ML models and
other data assets, so that users, auditors and other stakeholders can gain
confidence that they will be able to validate and trust the data sources and
human contributors in the systems they rely on for their business operations.
The methodology for calculating the transparency metric, and the types of
criteria that could be used to judge the visibility of contributions to
systems, are explained and illustrated through an example scenario.
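One plausible shape for such a metric, sketched purely for illustration (the paper defines its own criteria and aggregation), is to score each contribution to a pipeline against a set of visibility criteria and aggregate the scores into a single value in [0, 1]. The criteria names and weights below are assumptions, not the paper's.

```python
# Illustrative transparency metric: per-contribution visibility scores,
# weighted and averaged across the pipeline. All criteria/weights are
# hypothetical stand-ins for whatever the methodology actually specifies.
from dataclasses import dataclass


@dataclass
class Contribution:
    name: str
    # Each criterion is a visibility judgement in [0, 1].
    contributor_identified: float  # can we tell who produced it?
    provenance_recorded: float     # is the data source documented?
    process_documented: float      # is the transformation step described?


WEIGHTS = {
    "contributor_identified": 0.40,
    "provenance_recorded": 0.35,
    "process_documented": 0.25,
}


def contribution_score(c: Contribution) -> float:
    """Weighted visibility of a single contribution, in [0, 1]."""
    return sum(getattr(c, key) * w for key, w in WEIGHTS.items())


def pipeline_transparency(contributions: list) -> float:
    """Overall transparency = mean contribution score; 1.0 is fully visible."""
    if not contributions:
        return 0.0
    return sum(map(contribution_score, contributions)) / len(contributions)
```

A score computed this way supports exactly the use the abstract describes: two pipelines producing similar models can be ranked by how auditable their data sources and human contributors are, and a pipeline's score degrades when a source is later discredited and its provenance criterion is marked down.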