Situating multimodal learning analytics
The digital age has introduced a host of new challenges and opportunities for the learning sciences community. These challenges and opportunities are particularly abundant in multimodal learning analytics (MMLA), a research methodology that aims to extend work from Educational Data Mining (EDM) and Learning Analytics (LA) to multimodal learning environments by treating multimodal data. Recognizing the short-term opportunities and long-term challenges will help develop proof cases and identify grand challenges that will propel the field forward. To support the field's growth, we use this paper to describe several ways that MMLA can potentially advance learning sciences research and touch upon key challenges that researchers who utilize MMLA have encountered over the past few years.
Fine-grained traffic state estimation and visualisation
Tools for visualising the current traffic state are used by local authorities for strategic monitoring of the traffic network and by everyday users for planning their journeys. Popular visualisations include those provided by Google Maps and by Inrix. Both employ a traffic-light colour-coding system, in which roads on a map are coloured green if traffic is flowing normally and red or black if there is congestion. New sensor technology, especially from wireless sources, is allowing resolution down to lane level. A case study is reported in which a traffic micro-simulation test bed is used to generate high-resolution estimates. An interactive visualisation of the fine-grained traffic state is presented. The visualisation is demonstrated using Google Earth and affords the user a detailed three-dimensional view of the traffic state down to lane level in real time.
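The traffic-light colour coding described above can be sketched as a simple mapping from an estimated lane speed to a colour. This is a minimal illustrative example, not the paper's method: the function name and the speed-ratio thresholds are assumptions chosen for clarity.

```python
# Hypothetical sketch of a traffic-light colour coding for lane-level
# traffic state. The thresholds (fractions of free-flow speed) are
# illustrative assumptions, not values taken from the paper.

def traffic_colour(speed_kmh: float, free_flow_kmh: float) -> str:
    """Return a colour code for one lane's estimated traffic state."""
    if free_flow_kmh <= 0:
        raise ValueError("free-flow speed must be positive")
    ratio = speed_kmh / free_flow_kmh
    if ratio >= 0.75:    # flowing normally
        return "green"
    elif ratio >= 0.5:   # slowing
        return "amber"
    elif ratio >= 0.25:  # congested
        return "red"
    else:                # severe congestion / near-stationary
        return "black"

# Example: a lane moving at 20 km/h where free flow is 80 km/h
print(traffic_colour(20, 80))  # → red
```

A real system would compute lane speeds from sensor or simulation data and feed the colours into the map renderer (e.g. as styled overlays in Google Earth); the thresholding step itself is this simple.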
Quantifying the knowledge in Deep Neural Networks: an overview
Deep Neural Networks (DNNs) have proven to be extremely effective at learning a wide range of tasks. Due to their complexity and frequently inexplicable internal state, DNNs are difficult to analyze: their black-box nature makes it challenging for humans to comprehend their internal behavior. Several attempts to interpret their operation have been made during the last decade, but analyzing deep neural models from the perspective of the knowledge encoded in their layers is a very promising research direction, which has barely been touched upon. Such a research approach could provide a more accurate insight into a DNN model, its internal state, learning progress, and knowledge storage capabilities. The purpose of this survey is two-fold: a) to review the concept of DNN knowledge quantification and highlight it as an important near-future challenge, and b) to provide a brief account of the scant existing methods attempting to actually quantify DNN knowledge. Although a few such algorithms have been proposed, this is an emerging topic still under investigation.