Machine Learning at Microsoft with ML .NET
Machine Learning is transitioning from an art and science into a technology
available to every developer. In the near future, every application on every
platform will incorporate trained models to encode data-based decisions that
would be impossible for developers to author. This presents a significant
engineering challenge, since currently data science and modeling are largely
decoupled from standard software development processes. This separation makes
incorporating machine learning capabilities inside applications unnecessarily
costly and difficult, and furthermore discourages developers from embracing ML
in the first place. In this paper we present ML .NET, a framework developed at
Microsoft over the last decade in response to the challenge of making it easy
to ship machine learning models in large software applications. We present its
architecture, and illuminate the application demands that shaped it.
Specifically, we introduce DataView, the core data abstraction of ML .NET which
allows it to capture full predictive pipelines efficiently and consistently
across training and inference lifecycles. We close the paper with a
surprisingly favorable performance study of ML .NET compared to more recent
entrants, and a discussion of some lessons learned.
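To make the DataView idea concrete, here is a minimal sketch in Python (this is not the ML .NET API; the class and transform names are invented for illustration): a single chain of lazily-evaluated, composable transforms that is reused unchanged for both training-time featurization and inference, which is what keeps the two lifecycles consistent.

```python
# Illustrative sketch of a DataView-style pipeline (hypothetical names,
# not the real ML .NET API): composable transforms evaluated lazily over
# a stream of rows, shared between training and inference.

class Pipeline:
    def __init__(self, *transforms):
        self.transforms = transforms  # each transform: row (dict) -> row (dict)

    def apply(self, rows):
        # Lazy evaluation: each row streams through every transform in turn,
        # so no intermediate dataset is ever materialized.
        for row in rows:
            for t in self.transforms:
                row = t(row)
            yield row

def lowercase_text(row):
    row["text"] = row["text"].lower()
    return row

def text_length_feature(row):
    row["len"] = len(row["text"])
    return row

pipeline = Pipeline(lowercase_text, text_length_feature)

# The same pipeline object featurizes the training data...
train = list(pipeline.apply([{"text": "Hello ML"}]))
# ...and individual rows at inference time, guaranteeing that training
# and serving see identical preprocessing.
pred_input = next(pipeline.apply(iter([{"text": "New Input"}])))
```

Because the transform chain is captured as one object, serializing it alongside the trained model is what lets a full predictive pipeline ship inside an application.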
Serving deep learning models in a serverless platform
Serverless computing has emerged as a compelling paradigm for the development
and deployment of a wide range of event-based cloud applications. At the same
time, cloud providers and enterprise companies are heavily adopting machine
learning and artificial intelligence to either differentiate themselves or
provide their customers with value-added services. In this work we evaluate the
suitability of a serverless computing environment for the inferencing of large
neural network models. Our experimental evaluations are executed on the AWS
Lambda environment using the MXNet deep learning framework. Our experimental
results show that while the inferencing latency can be within an acceptable
range, longer delays due to cold starts can skew the latency distribution and
hence risk violating more stringent SLAs.
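The cold-start effect the abstract describes can be illustrated with a small simulation (all numbers below are invented for illustration, not AWS measurements): even when only a small fraction of invocations hit a cold container, those invocations dominate the tail percentiles of the latency distribution.

```python
# Toy simulation (hypothetical latencies, not AWS data) of how cold starts
# skew a serverless inference latency distribution toward the tail.
import random

random.seed(0)

def invoke(cold):
    # Assumed costs: ~50 ms for a warm inference; a cold start adds
    # several seconds for container startup plus model loading.
    warm_ms = random.uniform(40, 60)
    cold_penalty_ms = random.uniform(2000, 4000) if cold else 0.0
    return warm_ms + cold_penalty_ms

# Suppose 2% of invocations land on a cold container.
latencies = sorted(invoke(random.random() < 0.02) for _ in range(10_000))

def pct(p):
    """Return the p-th percentile of the observed latencies."""
    return latencies[int(p / 100 * len(latencies)) - 1]

print(f"p50 = {pct(50):7.1f} ms")   # near the warm-path latency
print(f"p99 = {pct(99):7.1f} ms")   # inflated by cold starts: SLA risk
```

The median stays close to the warm-path latency, while the 99th percentile jumps by orders of magnitude, which is exactly why stringent SLAs are at risk even when typical latency looks acceptable.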
To NACK or not to NACK? Negative Acknowledgments in Information-Centric Networking
Information-Centric Networking (ICN) is an internetworking paradigm that
offers an alternative to the current IP-based Internet
architecture. ICN's most distinguishing feature is its emphasis on information
(content) instead of communication endpoints. One important open issue in ICN
is whether negative acknowledgments (NACKs) at the network layer are useful for
notifying downstream nodes about forwarding failures, or requests for incorrect
or non-existent information. In benign settings, NACKs are beneficial for ICN
architectures, such as CCNx and NDN, since they flush state in routers and
notify consumers. In terms of security, NACKs seem useful as they can help
mitigate so-called Interest Flooding attacks. However, as we show in this
paper, network-layer NACKs also have some unpleasant security implications. We
consider several types of NACKs and discuss their security design requirements
and implications. We also demonstrate that providing secure NACKs triggers the
threat of producer-bound flooding attacks. Although we discuss some potential
countermeasures to these attacks, the main conclusion of this paper is that
network-layer NACKs are best avoided, at least for security reasons.
Comment: 10 pages, 7 figures
Aging Gracefully: The PACE Approach to Caring for Frail Elders in the Community
Mountain Empire is one of the newest of more than 100 independent PACE organizations across the nation that serve both as health plans and as medical and long-term service providers to elders—offering meals, checkups, rehabilitation services, home visits, and many other supports that enable enrollees to preserve their independence. The model for PACE dates back to 1971, when a public health dentist and social worker from the San Francisco Public Health Department working in Chinatown-North Beach noticed that as their clients aged, many needed extra support but dreaded moving into nursing homes. They founded On Lok Senior Health Services as an alternative to institutional care that would allow elders to "age in place" in their homes; On Lok is Cantonese for "peaceful, happy abode."
On Lok's founders were particularly concerned about elderly clients who suffered when their various clinicians failed to work together, sometimes leading to complications that necessitated moves into institutional care. They designed On Lok to promote what was then an innovative approach: coordinating care from an interdisciplinary team of professionals who provide all primary care services and oversee specialists' services.
A Medicare-funded demonstration spanning 1979 to 1983 found this approach had many benefits. Care teams were able to prevent or quickly address problems, resulting in better health and quality of life and producing 15 percent lower costs than traditional Medicare. In the decades since, the model has spread slowly, though enrollment has grown nearly 40 percent in the past three years. As of January 2016, there were 118 PACE organizations in 31 states serving some 39,000 elders.
Sentara Healthcare: A Case Study Series on Disruptive Innovation Within Integrated Health Systems
Examines how integration and ties with health plans, physicians, and hospitals helped protect against revenue volatility and enabled experimentation; factors that facilitate integration; innovative practices; lessons learned; and policy implications
Trusted CI Experiences in Cybersecurity and Service to Open Science
This article describes experiences and lessons learned from the Trusted CI
project, funded by the US National Science Foundation to serve the community as
the NSF Cybersecurity Center of Excellence. Trusted CI is an effort to address
cybersecurity for the open science community through a single organization that
provides leadership, training, consulting, and knowledge to that community. The
article describes the experiences and lessons learned of Trusted CI regarding
both cybersecurity for open science and managing the process of providing
centralized services to a broad and diverse community.
Comment: 8 pages, PEARC '19: Practice and Experience in Advanced Research
Computing, July 28-August 1, 2019, Chicago, IL, US
iTeleScope: Intelligent Video Telemetry and Classification in Real-Time using Software Defined Networking
Video continues to dominate network traffic, yet operators today have poor
visibility into the number, duration, and resolutions of the video streams
traversing their domain. Current approaches are inaccurate, expensive, or
unscalable, as they rely on statistical sampling, middle-box hardware, or
packet inspection software. We present iTelescope, the first intelligent,
inexpensive, and scalable SDN-based solution for identifying and classifying
video flows in real-time. Our solution is novel in combining dynamic flow rules
with telemetry and machine learning, and is built on commodity OpenFlow
switches and open-source software. We develop a fully functional system, train
it in the lab using multiple machine learning algorithms, and validate its
performance to show over 95% accuracy in identifying and classifying video
streams from many providers including YouTube and Netflix. Lastly, we conduct
tests to demonstrate its scalability to tens of thousands of concurrent
streams, and deploy it live on a campus network serving several hundred real
users. Our system gives unprecedented fine-grained real-time visibility of
video streaming performance to operators of enterprise and carrier networks at
very low cost.
Comment: 12 pages, 16 figures
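A stripped-down sketch of the general approach (the feature names, thresholds, and example numbers below are invented for illustration; the paper trains real ML models on richer telemetry): coarse per-flow byte counters of the kind commodity OpenFlow switches expose can already separate bursty, high-rate video streams from steady low-rate traffic.

```python
# Hedged illustration of telemetry-based flow classification (hypothetical
# features and thresholds, not the paper's trained models): decide whether
# a flow is video from periodic per-flow byte counts.

def features(byte_counts):
    """Summarize a flow's per-interval byte counts (e.g. polled every 1 s)."""
    mean = sum(byte_counts) / len(byte_counts)
    peak = max(byte_counts)
    # Video streams tend to be high-rate and bursty (chunked segment
    # delivery), giving a large peak-to-mean ratio plus a high mean rate.
    return mean, peak / mean if mean else 0.0

def is_video(byte_counts, rate_thresh=100_000, burst_thresh=2.0):
    mean, burstiness = features(byte_counts)
    return mean > rate_thresh and burstiness > burst_thresh

# Chunked video: idle gaps punctuated by large segment downloads.
video_flow = [0, 900_000, 0, 0, 850_000, 0, 0, 920_000]
# Steady low-rate flow, e.g. a VoIP call.
voip_flow = [12_000] * 8

print(is_video(video_flow))  # True
print(is_video(voip_flow))   # False
```

In the actual system such hand-set thresholds are replaced by trained classifiers, and the dynamic flow rules determine which flows get this fine-grained telemetry in the first place.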
The Group Employed Model as a Foundation for Health Care Delivery Reform
Outlines group employed models, with salaried primary and specialty care physicians and quality of care- and satisfaction-based incentives as high-quality, low-cost alternatives to fee-for-service; elements of success; and implications beyond Medicare
Towards Structured Analysis of Broadcast Badminton Videos
Sports video data is recorded for nearly every major tournament but remains
archived and inaccessible to large scale data mining and analytics. It can only
be viewed sequentially or manually tagged with higher-level labels, which is
time-consuming and prone to errors. In this work, we propose an end-to-end
framework for automatic attribute tagging and analysis of sport videos. We use
commonly available broadcast videos of matches and, unlike previous approaches,
do not rely on special camera setups or additional sensors.
Our focus is on Badminton as the sport of interest. We propose a method to
analyze a large corpus of badminton broadcast videos by segmenting the points
played, tracking and recognizing the players in each point and annotating their
respective badminton strokes. We evaluate the performance on 10 Olympic matches
with 20 players and achieve 95.44% point segmentation accuracy, 97.38% player
detection score (mAP@0.5), 97.98% player identification accuracy, and stroke
segmentation edit scores of 80.48%. We further show that the automatically
annotated videos alone could enable the gameplay analysis and inference by
computing understandable metrics such as player's reaction time, speed, and
footwork around the court, etc.
Comment: 9 pages
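To show how such metrics fall out of the annotations, here is a minimal sketch (the data layout and timestamps are invented; the paper's annotation format is richer): once every stroke in a rally carries a timestamp and a player label, a player's reaction time is just the gap between an opponent's stroke and the reply.

```python
# Illustrative only (hypothetical annotation format): computing a player's
# reaction times from per-rally stroke annotations of (time, player).

strokes = [  # (time_s, player) within one rally, in chronological order
    (0.0, "P1"), (1.1, "P2"), (2.3, "P1"), (3.0, "P2"), (4.4, "P1"),
]

def reaction_times(strokes, player):
    """Seconds from each opponent stroke to this player's reply."""
    return [
        round(t2 - t1, 3)
        for (t1, p1), (t2, p2) in zip(strokes, strokes[1:])
        if p2 == player and p1 != player
    ]

rt = reaction_times(strokes, "P2")
print(rt)                  # [1.1, 0.7]
print(sum(rt) / len(rt))   # mean reaction time for P2
```

Speed and court footwork follow the same pattern, using per-frame player positions from the tracker instead of stroke timestamps.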