67,521 research outputs found
Paving the Roadway for Safety of Automated Vehicles: An Empirical Study on Testing Challenges
The technology in the area of automated vehicles is advancing rapidly and
promises many advantages. However, with the recent introduction of
conditionally automated driving, we have also seen accidents. Test protocols
for both conditionally automated (e.g., on highways) and fully automated
vehicles do not exist yet, leaving researchers and practitioners with different
challenges. For instance, current test procedures do not suffice for fully
automated vehicles, which are supposed to be completely in charge of the
driving task and have no driver as a backup. This paper presents current
challenges of testing the functionality and safety of automated vehicles
derived from conducting focus groups and interviews with 26 participants from
five countries having a background related to testing automotive safety-related
topics. We provide an overview of the state of practice in testing active safety
features as well as challenges that need to be addressed in the future to
ensure safety for automated vehicles. The major challenges identified through
the interviews and focus groups, enriched by literature on this topic are
related to 1) virtual testing and simulation, 2) safety, reliability, and
quality, 3) sensors and sensor models, 4) required scenario complexity and
amount of test cases, and 5) handover of responsibility between the driver and
the vehicle.
Comment: 8 pages
DeepGauge: Multi-Granularity Testing Criteria for Deep Learning Systems
Deep learning (DL) defines a new data-driven programming paradigm that
constructs the internal system logic of a crafted neural network through a set
of training data. We have seen wide adoption of DL in many safety-critical
scenarios. However, a plethora of studies have shown that the state-of-the-art
DL systems suffer from various vulnerabilities which can lead to severe
consequences when applied to real-world applications. Currently, the testing
adequacy of a DL system is usually measured by the accuracy of test data.
Considering the limited availability of high-quality test data, good accuracy
on test data can hardly provide confidence in the testing adequacy
and generality of DL systems. Unlike traditional software systems that have
clear and controllable logic and functionality, the lack of interpretability in
a DL system makes system analysis and defect detection difficult, which could
potentially hinder its real-world deployment. In this paper, we propose
DeepGauge, a set of multi-granularity testing criteria for DL systems, which
aims at rendering a multi-faceted portrayal of the testbed. The in-depth
evaluation of our proposed testing criteria is demonstrated on two well-known
datasets, five DL systems, and with four state-of-the-art adversarial attack
techniques against DL. The potential usefulness of DeepGauge sheds light on the
construction of more generic and robust DL systems.
Comment: The 33rd IEEE/ACM International Conference on Automated Software
Engineering (ASE 2018)
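The idea of a coverage-style testing criterion for a DL system can be sketched as follows. This is a minimal illustration of simple activation-threshold neuron coverage, not DeepGauge's actual multi-granularity criteria (which are richer, e.g. k-multisection and boundary variants); the function name, input layout, and threshold are assumptions for illustration.

```python
# Hypothetical sketch of a neuron-activation coverage criterion, in the
# spirit of coverage-based testing for DL systems. The activation data
# and threshold below are made up for illustration.

def neuron_coverage(activations, threshold=0.0):
    """Fraction of neurons activated above `threshold` by at least
    one test input.

    activations: list of per-input activation vectors, one vector per
    test input, all of the same length (one entry per neuron).
    """
    n_neurons = len(activations[0])
    covered = set()
    for vec in activations:
        for i, a in enumerate(vec):
            if a > threshold:
                covered.add(i)  # neuron i fired on some input
    return len(covered) / n_neurons

# Usage: three test inputs over a four-neuron layer.
acts = [
    [0.9, -0.2, 0.0, 0.1],
    [0.1, 0.3, -0.5, 0.0],
    [-0.1, -0.1, -0.1, 0.2],
]
print(neuron_coverage(acts))  # neurons 0, 1, 3 covered -> 0.75
```

A test suite that leaves many neurons uncovered under such a criterion has, by this measure, exercised only part of the network's internal behavior, regardless of its accuracy on the test data.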
Psychometrics in Practice at RCEC
A broad range of topics is dealt with in this volume: from combining the psychometric generalizability and item response theories to ideas for an integrated formative use of data-driven decision making, assessment for learning and diagnostic testing. A number of chapters pay attention to computerized (adaptive) and classification testing. Other chapters treat the quality of testing in a general sense, while for topics like maintaining standards or the testing of writing ability, the quality of testing is dealt with more specifically.
All authors are connected to RCEC as researchers. They present one of their current research topics and provide some insight into the focus of RCEC. The topics were selected and edited so that the book should be of special interest to educational researchers, psychometricians and practitioners in educational assessment.