100 research outputs found
A Neural Approach to Ordinal Regression for the Preventive Assessment of Developmental Dyslexia
Developmental Dyslexia (DD) is a learning disability related to the
acquisition of reading skills that affects about 5% of the population. DD can
have an enormous impact on the intellectual and personal development of
affected children, so early detection is key to implementing preventive
strategies for teaching language. Research has shown that there may be
biological underpinnings to DD that affect phoneme processing, and hence signs
of DD may be identifiable before reading ability is acquired, allowing for
early intervention. In this paper we propose a new methodology to assess the
risk of DD before students learn to read. For this purpose, we develop a mixed
neural model that calculates risk levels of dyslexia from tests that can be
completed at the age of 5 years. Our method first trains an auto-encoder, and
then combines the trained encoder with an optimized ordinal regression neural
network devised to ensure consistency of predictions. Our experiments show that
the system can identify unaffected subjects two years before DD risk can be
assessed through tests based mainly on phonological processing, with a
specificity of 0.969 and a correct classification rate of more than 0.92. In addition, the trained encoder
can be used to transform test results into an interpretable subject spatial
distribution that facilitates risk assessment and validates the methodology.
Comment: 12 pages, 4 figures
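
A minimal sketch, in PyTorch, of the two-stage design described above: an auto-encoder is trained first on the test results, and its trained encoder is then reused as the feature extractor for an ordinal-regression head. The layer sizes, the number of risk levels, and the CORAL-style cumulative-logit head are illustrative assumptions, not the authors' exact architecture.

    import torch
    import torch.nn as nn

    class AutoEncoder(nn.Module):
        """Stage 1: learn a low-dimensional representation of the test results."""
        def __init__(self, n_features: int, latent_dim: int = 8):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Linear(n_features, 32), nn.ReLU(),
                nn.Linear(32, latent_dim),
            )
            self.decoder = nn.Sequential(
                nn.Linear(latent_dim, 32), nn.ReLU(),
                nn.Linear(32, n_features),
            )

        def forward(self, x):
            z = self.encoder(x)
            return self.decoder(z), z  # reconstruction and latent code

    class OrdinalRiskHead(nn.Module):
        """Stage 2: ordinal-regression head on top of the trained encoder.

        One shared score plus a learned threshold per level boundary gives
        cumulative logits for P(risk > k); sharing the weight vector follows
        the CORAL recipe for rank-consistent ordinal predictions.
        """
        def __init__(self, latent_dim: int, n_levels: int = 4):
            super().__init__()
            self.score = nn.Linear(latent_dim, 1, bias=False)
            self.thresholds = nn.Parameter(torch.zeros(n_levels - 1))

        def forward(self, z):
            return self.score(z) + self.thresholds  # shape: (batch, n_levels - 1)

    # Training outline (sketch): fit AutoEncoder with an MSE reconstruction loss,
    # then train OrdinalRiskHead on encoder outputs with binary cross-entropy
    # against the cumulative targets [y > 0, y > 1, ...].
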
On the Dark Side of Calibration for Modern Neural Networks
Modern neural networks are highly miscalibrated. This poses a significant
challenge to using deep neural networks (DNNs) reliably in safety-critical
systems. Many recently proposed approaches have demonstrated substantial
progress in improving DNN calibration. However, they hardly touch upon
refinement, which historically has been an essential aspect of calibration.
Refinement indicates separability of a network's correct and incorrect
predictions. This paper presents a theoretically and empirically supported
exposition for reviewing a model's calibration and refinement. First, we show
how expected calibration error (ECE) breaks down into predicted confidence and
refinement. Building on this result, we highlight that regularisation-based
calibration focuses only on naively reducing a model's confidence, which
logically comes at a severe cost to the model's refinement. We support our claims
through rigorous empirical evaluations of many state-of-the-art calibration
approaches on standard datasets. We find that many calibration approaches, such
as label smoothing and mixup, lower the utility of a DNN by
degrading its refinement. Even under natural data shift, this
calibration-refinement trade-off holds for the majority of calibration methods.
These findings call for an urgent retrospective into some popular pathways
taken for modern DNN calibration.
Comment: 15 pages including references and supplementary material
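
As a point of reference for the two quantities contrasted above, the sketch below computes a standard binned ECE together with a simple refinement measure, here taken as the AUROC with which confidence separates correct from incorrect predictions. The binning scheme and the AUROC-based refinement proxy are illustrative choices, not necessarily the exact decomposition derived in the paper.

    import numpy as np
    from sklearn.metrics import roc_auc_score

    def expected_calibration_error(confidences, correct, n_bins=15):
        """Binned ECE: bin-weighted gap between accuracy and mean confidence."""
        edges = np.linspace(0.0, 1.0, n_bins + 1)
        ece = 0.0
        for lo, hi in zip(edges[:-1], edges[1:]):
            in_bin = (confidences > lo) & (confidences <= hi)
            if in_bin.any():
                gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
                ece += in_bin.mean() * gap
        return ece

    def refinement_auroc(confidences, correct):
        """Refinement proxy: how well confidence alone separates correct
        from incorrect predictions (higher is better)."""
        return roc_auc_score(correct, confidences)

    # Usage with softmax outputs `probs` of shape (N, C) and integer labels of shape (N,):
    #   confidences = probs.max(axis=1)
    #   correct = (probs.argmax(axis=1) == labels).astype(float)
    #   print(expected_calibration_error(confidences, correct), refinement_auroc(confidences, correct))
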