A comparative study of calibration methods for low-cost ozone sensors in IoT platforms
This paper presents the results of calibrating an Internet of Things platform for measuring tropospheric ozone (O3). The platform, formed by 60 nodes deployed in Italy, Spain, and Austria, comprised 140 metal-oxide O3 sensors, 25 electrochemical O3 sensors, 25 electrochemical NO2 sensors, and 60 temperature and relative humidity sensors. Because ozone is a seasonal pollutant that appears in summer in Europe, the biggest challenge is calibrating the sensors within a short period of time. We compare four calibration methods given a large dataset for model training, and we also study the impact of a limited training dataset on long-range predictions. We show that the difficulty of calibrating these sensor technologies in a real deployment is mainly due to the bias produced when the environmental conditions encountered during prediction differ from those found during the training phase.
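The abstract does not name the four calibration methods, but a common baseline for this sensor class is a multiple linear regression of the reference ozone level on the raw sensor signal plus temperature and relative humidity. The following is a minimal sketch of that baseline on synthetic data; the array names and the short-training/long-prediction split are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# Synthetic stand-ins: raw metal-oxide response, temperature, relative
# humidity, and co-located reference-station O3 for the same timestamps.
rng = np.random.default_rng(0)
n = 1000
raw, temp, rh = rng.normal(size=(3, n))
o3_ref = 2.0 * raw - 0.5 * temp + 0.3 * rh + rng.normal(scale=0.1, size=n)

X = np.column_stack([raw, temp, rh])
train, test = slice(0, 300), slice(300, None)  # short training window, long prediction horizon

model = LinearRegression().fit(X[train], o3_ref[train])
pred = model.predict(X[test])
print("hold-out RMSE:", mean_squared_error(o3_ref[test], pred) ** 0.5)
```

Adding a seasonal drift to the synthetic covariates would reproduce the kind of train/predict bias the paper attributes to changing environmental conditions.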
Introducing a framework to assess newly created questions with Natural Language Processing
Statistical models such as those derived from Item Response Theory (IRT)
enable the assessment of students on a specific subject, which can be useful
for several purposes (e.g., learning path customization, drop-out prediction).
However, the questions have to be assessed as well and, although it is possible
to estimate with IRT the characteristics of questions that have already been
answered by several students, this technique cannot be used on newly generated
questions. In this paper, we propose a framework to train and evaluate models
for estimating the difficulty and discrimination of newly created Multiple
Choice Questions by extracting meaningful features from the text of the
question and of the possible choices. We implement one model using this
framework and test it on a real-world dataset provided by CloudAcademy, showing
that it outperforms previously proposed models, reducing the RMSE by 6.7% for
difficulty estimation and by 10.8% for discrimination estimation. We also
present the results of an ablation study performed to support our choice of
features and to show the effects of different characteristics of the questions'
text on difficulty and discrimination.
Comment: Accepted at the International Conference on Artificial Intelligence in Education.
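As a rough sketch of the framework's idea, one can map question text to features and regress IRT difficulty on them. The toy example below uses TF-IDF features and ridge regression as stand-ins; the paper's actual feature set and models differ.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Toy corpus: question stems concatenated with their answer options, paired
# with difficulty values previously calibrated via IRT on answered questions.
texts = [
    "Which OSI layer handles routing? A) transport B) network C) session D) physical",
    "What does HTTP stand for? A) hypertext transfer protocol B) high throughput C) host D) header",
] * 50
difficulty = [0.8, -0.3] * 50

X_train, X_test, y_train, y_test = train_test_split(texts, difficulty, random_state=0)
vec = TfidfVectorizer(ngram_range=(1, 2))
model = Ridge().fit(vec.fit_transform(X_train), y_train)
pred = model.predict(vec.transform(X_test))
print("difficulty RMSE:", mean_squared_error(y_test, pred) ** 0.5)
```

A second regressor trained on the same features against discrimination values would complete the pair of estimates the paper reports.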
Distill-and-Compare: Auditing Black-Box Models Using Transparent Model Distillation
Black-box risk scoring models permeate our lives, yet are typically
proprietary or opaque. We propose Distill-and-Compare, a model distillation and
comparison approach to audit such models. To gain insight into black-box
models, we treat them as teachers, training transparent student models to mimic
the risk scores assigned by black-box models. We compare the student model
trained with distillation to a second un-distilled transparent model trained on
ground-truth outcomes, and use differences between the two models to gain
insight into the black-box model. Our approach can be applied in a realistic
setting, without probing the black-box model API. We demonstrate the approach
on four public data sets: COMPAS, Stop-and-Frisk, Chicago Police, and Lending
Club. We also propose a statistical test to determine if a data set is missing
key features used to train the black-box model. Our test finds that the
ProPublica data is likely missing key feature(s) used in COMPAS.
Comment: Camera-ready version for AAAI/ACM AIES 2018. Data and pseudocode at
https://github.com/shftan/auditblackbox. Previously titled "Detecting Bias in
Black-Box Models Using Transparent Model Distillation". A short version was
presented at the NIPS 2017 Symposium on Interpretable Machine Learning.
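A minimal sketch of the Distill-and-Compare idea on synthetic data follows; it substitutes shallow decision trees for the transparent model class used in the paper, and a hand-built scoring function for the black box.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
outcome = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=2000) > 0).astype(float)
risk_score = 1 / (1 + np.exp(-(X[:, 0] + X[:, 2])))  # hand-built stand-in for black-box scores

# The student mimics the black-box scores; the un-distilled model is trained
# on ground-truth outcomes. Divergence between the two transparent models
# hints at how the black box weighs features relative to the ground truth.
mimic = DecisionTreeRegressor(max_depth=3).fit(X, risk_score)
baseline = DecisionTreeRegressor(max_depth=3).fit(X, outcome)
print("mimic importances:   ", mimic.feature_importances_.round(2))
print("baseline importances:", baseline.feature_importances_.round(2))
```

Here the mimic model picks up the third feature that the outcome model ignores, which is the kind of discrepancy the audit is designed to surface.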
On the combination of omics data for prediction of binary outcomes
Enrichment of predictive models with new biomolecular markers is an important
task in high-dimensional omic applications. Increasingly, clinical studies
include several sets of such omics markers available for each patient,
measuring different levels of biological variation. As a result, one of the
main challenges in predictive research is the integration of different sources
of omic biomarkers for the prediction of health traits. We review several
approaches for the combination of omic markers in the context of binary outcome
prediction, all based on double cross-validation and regularized regression
models. We evaluate their performance in terms of calibration and
discrimination, and we compare them against predictions from a single omic
source. We illustrate the methods through the analysis of two real
datasets. On the one hand, we consider the combination of two fractions of
proteomic mass spectrometry for the calibration of a diagnostic rule for the
detection of early-stage breast cancer. On the other hand, we consider
transcriptomics and metabolomics as predictors of obesity using data from the
Dietary, Lifestyle, and Genetic determinants of Obesity and Metabolic syndrome
(DILGOM) study, a population-based cohort from Finland.
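The simplest combination strategy of the kind reviewed is to concatenate the omic blocks and fit one regularized regression. The sketch below illustrates that on synthetic data with a ridge-penalized logistic model: the penalty is tuned in an inner cross-validation loop and discrimination (AUC) is scored in an outer loop, in the spirit of double cross-validation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.model_selection import cross_val_score

# Synthetic stand-ins for two omic blocks measured on the same subjects
# (e.g., transcriptomics and metabolomics) and a binary outcome.
rng = np.random.default_rng(0)
n = 200
omics_a = rng.normal(size=(n, 100))
omics_b = rng.normal(size=(n, 40))
y = rng.integers(0, 2, size=n)

# Concatenate the blocks; the inner CV (inside LogisticRegressionCV) tunes
# the ridge penalty, while the outer CV estimates out-of-sample AUC.
X = np.hstack([omics_a, omics_b])
inner = LogisticRegressionCV(penalty="l2", Cs=10, cv=5, max_iter=5000)
auc = cross_val_score(inner, X, y, cv=5, scoring="roc_auc")
print("combined-omics AUC:", auc.mean().round(3))
```

AUC covers the discrimination criterion mentioned in the abstract; assessing calibration would require a separate check, such as a calibration curve on the outer-loop predictions.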
Reliable ABC model choice via random forests
Approximate Bayesian computation (ABC) methods provide an elaborate approach
to Bayesian inference on complex models, including model choice. Both
theoretical arguments and simulation experiments indicate, however, that model
posterior probabilities may be poorly evaluated by standard ABC techniques. We
propose a novel approach based on a machine learning tool named random forests
to conduct selection among the highly complex models covered by ABC algorithms.
We thus modify the way Bayesian model selection is both understood and
operated, in that we rephrase the inferential goal as a classification problem,
first predicting the model that best fits the data with random forests and
postponing the approximation of the posterior probability of the predicted MAP
for a second stage also relying on random forests. Compared with earlier
implementations of ABC model choice, the ABC random forest approach offers
several potential improvements: (i) it often has a larger discriminative power
among the competing models, (ii) it is more robust against the number and
choice of statistics summarizing the data, (iii) the computing effort is
drastically reduced (with a gain in computational efficiency of at least fifty-fold),
and (iv) it includes an approximation of the posterior probability of the
selected model. The use of random forests will undoubtedly extend the range of
dataset sizes and model complexities that ABC can handle. We illustrate
the power of this novel methodology by analyzing controlled experiments as well
as genuine population genetics datasets. The proposed methodologies are
implemented in the R package abcrf, available on CRAN.
Comment: 39 pages, 15 figures, 6 tables.
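A toy illustration of the classification stage follows, with invented summary statistics and two invented candidate models; the second stage, where another forest is used to approximate the posterior probability of the selected model, is omitted here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def summaries(model, size):
    # Invented summary statistics (mean, variance, max) of data simulated
    # under one of two toy candidate models.
    data = rng.normal(size=(size, 50)) if model == 0 else rng.exponential(size=(size, 50))
    return np.column_stack([data.mean(axis=1), data.var(axis=1), data.max(axis=1)])

# Build the ABC reference table and train the classification forest.
X = np.vstack([summaries(0, 5000), summaries(1, 5000)])
y = np.repeat([0, 1], 5000)
rf = RandomForestClassifier(n_estimators=500, oob_score=True).fit(X, y)

observed = summaries(1, 1)  # stand-in for the summaries of the real dataset
print("predicted model (MAP):", rf.predict(observed)[0])
print("prior error rate (out-of-bag):", round(1 - rf.oob_score_, 3))
```

The out-of-bag error plays the role of a prior error rate for the classifier, which is one way the robustness claims about summary-statistic choice can be checked in practice.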