Applying machine learning to predict future adherence to physical activity programs.
Background: Identifying individuals who are unlikely to adhere to a physical exercise regime has the potential to improve physical activity interventions. The aim of this paper is to develop and test adherence prediction models using objectively measured physical activity data from the Mobile Phone-Based Physical Activity Education (mPED) trial. To the best of our knowledge, this is the first study to apply machine learning methods to predict exercise relapse using accelerometer-recorded physical activity data.
Methods: We use logistic regression and support vector machine methods to design two versions of a Discontinuation Prediction Score (DiPS), which uses objectively measured past data (e.g., steps and goal achievement) to provide a numerical quantity indicating the likelihood of exercise relapse in the upcoming week. The prediction accuracy of the two versions of DiPS is compared, and numerical simulation is then performed to explore the potential of using DiPS to selectively allocate financial incentives to participants to encourage them to increase physical activity.
Results: We had access to physical activity trial data that were continuously collected every 60 sec every day for 9 months from 210 participants. Using the first 15 weeks of data for training and weeks 16-30 for testing, we show that both versions of DiPS have a test AUC of 0.9 with high sensitivity and specificity in predicting the probability of exercise adherence. Simulation results under different intervention regimes suggest the potential benefit of using DiPS as a score to allocate resources in physical activity intervention programs, reducing costs relative to other allocation schemes.
Conclusions: DiPS is capable of making accurate and robust predictions for future weeks. The most predictive features are steps and physical activity intensity. Furthermore, DiPS scores may offer a promising approach to deciding when or whether to deliver just-in-time messages and step-goal adjustments to improve compliance. Further studies on the use of DiPS in the design of physical activity promotion programs are warranted.
Trial registration: ClinicalTrials.gov NCT01280812. Registered on January 21, 2011.
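A score of this kind (a logistic model over past step counts and goal achievement) can be sketched as follows. The features, coefficients, and synthetic data below are illustrative assumptions, not the trial's actual pipeline or results:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative synthetic data: weekly mean steps and goal-achievement rate.
# Label 1 = participant relapses (stops exercising) in the upcoming week.
n = 400
steps = rng.normal(7000, 2500, n)                  # mean daily steps, past week
goals = rng.uniform(0.0, 1.0, n)                   # fraction of daily goals met
true_logit = -0.0006 * steps - 2.0 * goals + 4.0   # assumed ground truth
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-true_logit))).astype(float)

# Standardize features, then fit logistic regression by gradient descent.
raw = np.column_stack([steps, goals])
mu, sd = raw.mean(axis=0), raw.std(axis=0)
Xb = np.column_stack([np.ones(n), (raw - mu) / sd])  # add intercept column
w = np.zeros(3)
for _ in range(3000):
    p = 1 / (1 + np.exp(-Xb @ w))
    w -= 0.5 * Xb.T @ (p - y) / n                    # log-loss gradient step

def dips(mean_steps, goal_rate):
    """Hypothetical DiPS: estimated probability of relapse next week."""
    z = (np.array([mean_steps, goal_rate]) - mu) / sd
    return float(1 / (1 + np.exp(-(w[0] + z @ w[1:]))))

# Low activity with few goals met should score as higher relapse risk.
print(dips(2000, 0.1) > dips(11000, 0.9))
```

A score like this could then be thresholded to decide which participants receive incentives or just-in-time messages, as in the simulations the abstract describes.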
Instead of Rewriting Foreign Code for Machine Learning, Automatically Synthesize Fast Gradients
Applying differentiable programming techniques and machine learning
algorithms to foreign programs requires developers to either rewrite their code
in a machine learning framework, or otherwise provide derivatives of the
foreign code. This paper presents Enzyme, a high-performance automatic
differentiation (AD) compiler plugin for the LLVM compiler framework capable of
synthesizing gradients of statically analyzable programs expressed in the LLVM
intermediate representation (IR). Enzyme synthesizes gradients for programs
written in any language whose compiler targets LLVM IR including C, C++,
Fortran, Julia, Rust, Swift, MLIR, etc., thereby providing native AD
capabilities in these languages. Unlike traditional source-to-source and
operator-overloading tools, Enzyme performs AD on optimized IR. On a
machine-learning focused benchmark suite including Microsoft's ADBench, AD on
optimized IR achieves a geometric mean speedup of 4.5x over AD on IR before
optimization allowing Enzyme to achieve state-of-the-art performance. Packaging
Enzyme for PyTorch and TensorFlow provides convenient access to gradients of
foreign code with state-of-the-art performance, enabling foreign code to be
directly incorporated into existing machine learning workflows.
Comment: To be published in NeurIPS 2020
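Enzyme itself operates on LLVM IR inside the compiler. As a language-level illustration of the reverse-mode automatic differentiation it synthesizes, here is a minimal tape-based sketch in Python; this toy is not Enzyme's API:

```python
# Minimal reverse-mode automatic differentiation over Python scalars,
# illustrating the kind of gradient computation Enzyme synthesizes at the
# LLVM IR level. Not Enzyme's API.

class Var:
    def __init__(self, value, parents=()):
        self.value = value        # primal value
        self.parents = parents    # pairs (parent Var, local partial)
        self.grad = 0.0           # accumulated adjoint

    def __add__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value + other.value, ((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value * other.value,
                   ((self, other.value), (other, self.value)))

    def backward(self):
        # Topologically order the graph, then propagate adjoints backward.
        order, seen = [], set()
        def visit(v):
            if id(v) not in seen:
                seen.add(id(v))
                for parent, _ in v.parents:
                    visit(parent)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            for parent, local in v.parents:
                parent.grad += v.grad * local

# f(x) = x*x + 3x, so df/dx at x = 2 is 2*2 + 3 = 7.
x = Var(2.0)
y = x * x + x * 3.0
y.backward()
print(y.value, x.grad)   # 10.0 7.0
```

Enzyme's key difference from a tape like this is that differentiation happens after compiler optimization of the IR, which is where the reported 4.5x geometric-mean speedup comes from.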
Generalizing Programs via Subsumption
In this paper we present a class of operators for Machine Learning based on Logic Programming which represents a characterization of the subsumption relation in the following sense: the clause C1 subsumes the clause C2 iff C1 can be reached from C2 by applying these operators. We give a formalization of the closeness among clauses based on these operators and an algorithm to compute it, as well as a bound for a quick estimation. We extend the operators to programs and also obtain a characterization of the subsumption between programs. Finally, a weak metric is presented to compute the closeness among programs based on subsumption.
Ministerio de Ciencia y Tecnología TIC 2000-1368-C03-0; Junta de Andalucía TIC-13
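The underlying subsumption test can be sketched concretely: clause C1 theta-subsumes clause C2 iff some substitution maps every literal of C1 onto a literal of C2. The representation below (literals as predicate/argument tuples, uppercase strings as variables) is an illustrative assumption, not the paper's formalism:

```python
# Minimal theta-subsumption check by backtracking search.
# Variables are uppercase strings; constants are lowercase strings.

def is_var(term):
    return isinstance(term, str) and term[:1].isupper()

def match_literal(lit1, lit2, theta):
    """Try to extend substitution theta so that lit1*theta equals lit2."""
    (p1, args1), (p2, args2) = lit1, lit2
    if p1 != p2 or len(args1) != len(args2):
        return None
    theta = dict(theta)
    for a, b in zip(args1, args2):
        a = theta.get(a, a)          # apply current binding, if any
        if is_var(a):
            theta[a] = b             # bind fresh variable
        elif a != b:
            return None              # constant clash
    return theta

def subsumes(c1, c2, theta=None):
    """True iff clause c1 theta-subsumes clause c2."""
    theta = {} if theta is None else theta
    if not c1:
        return True
    head, rest = c1[0], c1[1:]
    for lit in c2:
        extended = match_literal(head, lit, theta)
        if extended is not None and subsumes(rest, c2, extended):
            return True
    return False

print(subsumes([("p", ("X", "Y"))], [("p", ("a", "b"))]))            # True
print(subsumes([("p", ("X",)), ("q", ("X",))],
               [("p", ("a",)), ("q", ("a",)), ("r", ("b",))]))       # True
print(subsumes([("p", ("X",)), ("q", ("X",))],
               [("p", ("a",)), ("q", ("b",))]))                      # False
```

The paper's operators refine this relation into single steps, so that closeness between clauses can be measured by how many operator applications separate them.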
Artificial Intelligence's Fair Use Crisis
As automation supplants more forms of labor, creative expression still seems like a distinctly human enterprise. This may someday change: by ingesting works of authorship as "training data," computer programs can teach themselves to write natural prose, compose music, and generate movies. Machine learning is an artificial intelligence ("AI") technology with immense potential and a commensurate appetite for copyrighted works. In the United States, the copyright law mechanism most likely to facilitate machine learning's uses of protected data is the fair use doctrine. However, current fair use doctrine threatens either to derail the progress of machine learning or to disenfranchise the human creators whose work makes it possible.
This Article addresses the problem in three Parts: using popular machine learning datasets and research as case studies, Part I describes how programs "learn" from corpora of copyrighted works and catalogs the legal risks of this practice. It concludes that fair use may not protect expressive machine learning applications, including the burgeoning field of natural language generation. Part II explains that applying today's fair use doctrine to expressive machine learning will yield one of two undesirable outcomes: if U.S. courts reject the fair use defense for machine learning, valuable innovation may move to another jurisdiction or halt entirely; alternatively, if courts find the technology to be fair use, sophisticated software may divert rightful earnings from the authors of input data. This dilemma shows that fair use may no longer serve its historical purpose. Traditionally, fair use is understood to benefit the public by fostering expressive activity. Today, the doctrine increasingly serves the economic interests of powerful firms at the expense of disempowered individual rights holders. Finally, in Part III, this Article contemplates changes in doctrine and policy that could address these problems. It concludes that the United States' interest in avoiding both prongs of AI's fair use dilemma offers a novel justification for redistributive measures that could promote social equity alongside technological progress.
A Multi-Engine Approach to Answer Set Programming
Answer Set Programming (ASP) is a truly-declarative programming paradigm
proposed in the area of non-monotonic reasoning and logic programming, that has
been recently employed in many applications. The development of efficient ASP
systems is, thus, crucial. Having in mind the task of improving the solving
methods for ASP, there are two usual ways to reach this goal: extending
state-of-the-art techniques and ASP solvers, or designing a new ASP
solver from scratch. An alternative to these trends is to build on top of
state-of-the-art solvers, and to apply machine learning techniques to
automatically choose the "best" available solver on a per-instance basis.
In this paper we pursue this latter direction. We first define a set of
cheap-to-compute syntactic features that characterize several aspects of ASP
programs. Then, we apply classification methods that, given the features of the
instances in a {\sl training} set and the solvers' performance on these
instances, inductively learn algorithm selection strategies to be applied to a
{\sl test} set. We report the results of a number of experiments considering
solvers and different training and test sets of instances taken from the ones
submitted to the "System Track" of the 3rd ASP Competition. Our analysis shows
that, by applying machine learning techniques to ASP solving, it is possible to
obtain very robust performance: our approach can solve more instances compared
with any solver that entered the 3rd ASP Competition. (To appear in Theory and
Practice of Logic Programming (TPLP).)
Comment: 26 pages, 8 figures
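The per-instance selection strategy can be sketched as follows. The paper applies classification methods; this sketch substitutes a simple nearest-neighbor rule, and the feature vectors and solver runtimes are invented placeholders, not the competition data:

```python
import math

# Illustrative training data: (cheap syntactic feature vector of an ASP
# instance, observed runtime in seconds per solver). All values invented.
train = [
    ((120, 0.8, 3),  {"clasp": 2.1,  "dlv": 9.0, "cmodels": 4.5}),
    ((900, 0.2, 14), {"clasp": 30.0, "dlv": 3.2, "cmodels": 25.0}),
    ((450, 0.5, 7),  {"clasp": 6.0,  "dlv": 7.5, "cmodels": 1.9}),
]

def select_solver(features):
    """Pick the solver that was fastest on the nearest training instance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    nearest = min(train, key=lambda row: dist(row[0], features))
    times = nearest[1]
    return min(times, key=times.get)

print(select_solver((130, 0.7, 4)))   # near the first instance -> "clasp"
print(select_solver((880, 0.3, 12)))  # near the second instance -> "dlv"
```

Because the features are cheap to compute, the selection overhead stays small relative to solving, which is what lets a portfolio of this kind outperform any single solver.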