TASKers: A Whole-System Generator for Benchmarking Real-Time-System Analyses
Implementation-based benchmarking of timing and schedulability analyses requires system code that can be executed on real hardware and has defined properties, for example, known worst-case execution times (WCETs) of tasks. Traditional approaches for creating benchmarks with such characteristics often result in implementations that do not resemble real-world systems, either because work is only simulated by means of busy waiting, or because tasks have no control-flow dependencies between each other. In this paper, we address this problem with TASKers, a generator that constructs realistic benchmark systems with predefined properties. To achieve this, TASKers composes patterns of real-world programs to generate tasks that produce known outputs and exhibit preconfigured WCETs when executed with certain inputs. Using this knowledge during the generation process, TASKers is able to specifically introduce inter-task control-flow dependencies by mapping the output of one task to the input of another.
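The abstract's core idea can be illustrated with a minimal sketch. Everything below is a hypothetical toy model, not TASKers' actual implementation: a real generator would emit task code with preconfigured WCETs, while here a task is just a function with a fully known input-to-output mapping, which is what makes deliberate output-to-input chaining possible.

```python
# Hypothetical sketch: compose tasks whose outputs are known for given
# inputs, then wire one task's output to another's input to create an
# inter-task control-flow dependency. `make_task` and the patterns are
# illustrative names, not part of TASKers.

def make_task(pattern_name, known_io):
    """Build a toy task from a named pattern with a known input->output map."""
    def task(x):
        # A real generator would emit code with a preconfigured WCET;
        # here we just apply the known mapping.
        return known_io[x]
    task.pattern_name = pattern_name
    task.known_io = known_io
    return task

# Two toy tasks with fully known input/output behaviour.
t1 = make_task("pattern_a", {0: 3, 1: 7})
t2 = make_task("pattern_b", {3: "low", 7: "high"})

# Because t1's outputs are known at generation time, t2 can be built so
# that its control flow depends on them: t1's output feeds t2's input.
result = t2(t1(1))
```

The point of the sketch is that knowing each task's exact input/output behaviour is what lets the generator introduce dependencies on purpose rather than discover them after the fact.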
Overcoming non-determinism in testing smart devices: how to build models of device behaviour
Justification of smart instruments has become an important topic in the nuclear industry. In practice, however, the publicly available artefacts are often the only source of information about the device. Therefore, in many cases independent black-box testing may be the only way to increase confidence in the device. In this paper we provide a set of recommendations, which we consider to be best practices for performing black-box assessments. We present our method of testing smart instruments, in which we use only the publicly available artefacts. We present a test harness and describe a method of test automation. We focus on the analysis of test results, which is made particularly complex by the inherent non-determinism in the testing of analogue devices. In the paper we analyse the sources of non-determinism, which may arise, for instance, from inaccuracy in an analogue measurement made by the device when two alternative actions are possible. We propose three alternative ideas on how to build models of device behaviour that can cope with this kind of non-determinism. We compare and contrast these three solutions, and express our recommendations. Finally, we use a case study, in which a black-box assessment of two similar smart instruments is performed, to illustrate the differences between the solutions.
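One way to picture the non-determinism the abstract describes is a decision threshold blurred by measurement inaccuracy: near the threshold, either of two alternative actions may legitimately occur, so a behavioural model must accept both. The sketch below is an assumption-laden illustration of that idea only; the threshold, tolerance, and action names are invented, and the paper's three actual modelling approaches are not reproduced here.

```python
# Hypothetical sketch: model a device whose analogue reading drives a
# trip decision. Within the measurement-inaccuracy band around the
# threshold, both behaviours are treated as acceptable.

THRESHOLD = 10.0   # assumed trip threshold (illustrative value)
TOLERANCE = 0.5    # assumed analogue measurement inaccuracy (illustrative)

def expected_actions(reading):
    """Return the set of actions the model considers correct for a reading."""
    if reading < THRESHOLD - TOLERANCE:
        return {"no_trip"}
    if reading > THRESHOLD + TOLERANCE:
        return {"trip"}
    # Inside the inaccuracy band, either behaviour is acceptable.
    return {"trip", "no_trip"}

def check(reading, observed_action):
    """A test verdict that tolerates the device's legitimate non-determinism."""
    return observed_action in expected_actions(reading)
```

With such a model, a reading of 9.8 passes whether the device trips or not, while tripping at 5.0 is still flagged as a failure, so the harness avoids spurious test failures without masking genuine ones.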
Using LibAnswers in the Archives: A Review and Implementation Report
[Excerpt] The implementation of LibAnswers by the University of Saskatchewan represents the culmination of fundamental changes to the way reference service is delivered in University Archives & Special Collections. In 2013, there was an amalgamation of two units that shared space but were organizationally independent. Previously, e-mail reference was primarily handled by one employee from each unit, with assistance and referrals as needed. With the 2013 amalgamation, the delivery model changed to have all staff members – archivists, librarians, and senior library/archives assistants – take half-day shifts on the reference desk, which would include walk-in traffic, phone calls, and e-mail.
MONROE-Nettest: A Configurable Tool for Dissecting Speed Measurements in Mobile Broadband Networks
As the demand for mobile connectivity continues to grow, there is a strong need to evaluate the performance of Mobile Broadband (MBB) networks. In recent years, mobile "speed", quantified most commonly by data rate, has gained popularity as the widely accepted metric to describe their performance. However, there is a lack of consensus on how mobile speed should be measured. In this paper, we design and implement MONROE-Nettest to dissect mobile speed measurements and investigate the effect of different factors on speed measurements in the complex mobile ecosystem. MONROE-Nettest is built as an Experiment as a Service (EaaS) on top of the MONROE platform, an open dedicated platform for experimentation in operational MBB networks. Using MONROE-Nettest, we conduct a large-scale measurement campaign and quantify the effects of measurement duration, number of TCP flows, and server location on measured downlink data rate in 6 operational MBB networks in Europe. Our results indicate that differences in parameter configuration can significantly affect the measurement results. We provide the complete MONROE-Nettest toolset as open source and our measurements as open data. Comment: 6 pages, 3 figures, submitted to INFOCOM CNERT Workshop 201
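Why parameters such as duration and flow count change the reported "speed" can be seen from the arithmetic alone. The sketch below is a simplified illustration, not MONROE-Nettest code: it assumes speed is reported as total bytes across all TCP flows divided by the measurement window, and the function name and byte counts are invented for the example.

```python
# Hypothetical sketch: a reported data rate is bytes summed over all TCP
# flows divided by the chosen measurement duration, so the configuration
# directly shapes the result.

def downlink_rate_mbps(flow_bytes, duration_s):
    """Aggregate per-flow byte counts into a single data rate in Mbit/s."""
    total_bits = sum(flow_bytes) * 8
    return total_bits / duration_s / 1e6

# Same per-flow throughput, different flow counts, different "speed":
one_flow = downlink_rate_mbps([12_500_000], 10)        # 10 Mbit/s
three_flows = downlink_rate_mbps([12_500_000] * 3, 10)  # 30 Mbit/s
```

Multiple parallel flows typically fill the link better than a single flow, and a short window may be dominated by TCP slow start, which is one reason the paper finds that configuration choices significantly affect measured downlink rates.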
Integration and Visualization Public Health Dashboard: The medi+board Pilot Project
Traditional public health surveillance systems would benefit from integration with knowledge created by new situation-aware real-time signals from social media, online searches, mobile/sensor networks and citizens' participatory surveillance systems. However, the challenge of threat validation, cross-verification and information integration for risk assessment has so far been largely untackled. In this paper, we propose a new system, medi+board, monitoring epidemic intelligence sources and traditional case-based surveillance to better automate early warning, cross-validation of signals for outbreak detection and visualization of results on an interactive dashboard. This enables public health professionals to see all essential information at a glance. Modular and configurable to any 'event' defined by public health experts, medi+board scans multiple data sources, detects changing patterns and uses a configurable analysis module for signal detection to identify a threat. These can be validated by an analysis module and correlated with other sources to assess the reliability of the event, expressed as a reliability coefficient, a real number between zero and one. Events are reported and visualized on the medi+board dashboard, which integrates all information sources and can be navigated by a timescale widget. Simulation with three datasets from the 2009 swine flu pandemic (HPA surveillance, Google news, Twitter) demonstrates the potential of medi+board to automate data processing and visualization to assist public health experts in decision making on control and response measures.
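The reliability coefficient described in the abstract can be pictured as a score in [0, 1] that grows as independent sources corroborate an event. The sketch below is purely illustrative: the source names, weights, and combination rule are assumptions for the example, not the medi+board algorithm.

```python
# Hypothetical sketch: cross-validate an event across sources by summing
# per-source weights, capped at 1.0, to produce a reliability coefficient.
# The weights below are invented for illustration.

SOURCE_WEIGHTS = {"surveillance": 0.5, "news": 0.3, "twitter": 0.2}

def reliability(signalling_sources):
    """Reliability coefficient in [0, 1] for an event seen by these sources."""
    score = sum(SOURCE_WEIGHTS.get(s, 0.0) for s in set(signalling_sources))
    return min(score, 1.0)

# An event seen in case-based surveillance and on Twitter scores roughly 0.7;
# corroboration by all three sources drives the coefficient to its cap.
partial = reliability(["surveillance", "twitter"])
full = reliability(["surveillance", "news", "twitter"])
```

Weighting case-based surveillance above social signals reflects the abstract's framing of traditional surveillance as the anchor that newer real-time signals are validated against.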
Evaluating Security and Usability of Profile Based Challenge Questions Authentication in Online Examinations
© 2014 Ullah et al.; licensee Springer. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. Student authentication in online learning environments is an increasingly challenging issue due to the inherent absence of physical interaction with online users and potential security threats to online examinations. This study is part of ongoing research on student authentication in online examinations evaluating the potential benefits of using challenge questions. The authors developed a Profile Based Authentication Framework (PBAF), which utilises challenge questions for students' authentication in online examinations. This paper examines the findings of an empirical study in which 23 participants used the PBAF, including an abuse case security analysis of the PBAF approach. The overall usability analysis suggests that the PBAF is efficient, effective and usable. However, specific questions need replacement with suitable alternatives due to usability challenges. The results of the current research study suggest that memorability, clarity of questions, syntactic variation and question relevance can cause usability issues leading to authentication failure. A configurable traffic light system was designed and implemented to improve the usability of challenge questions. The security analysis indicates that the PBAF is resistant to informed guessing in general; however, specific questions were identified with security issues. The security analysis identifies challenge questions with potential risks of informed guessing by friends and colleagues. The study was performed with a small number of participants in a simulated online course, and the results need to be verified in a real educational context on a larger sample size.
Improving a tutor’s feedback assessment tool: transforming Open Mentor following two recent deployments
Evidence shows the vital role that the quality of feedback plays in students' performance and in the overall increase of learning opportunities that good feedback creates for students. Based on this evidence, the Open University developed Open Mentor (OM), a system to help tutors enhance their feedback practice. Open Mentor Technology transfer (OMTetra), a JISC-funded project, took OM and deployed it in two Higher Education institutions with the purpose of evaluating its transferability and continuing the development of the tools available to tutors within the system. This paper describes the original OM and the enhancements identified after use and evaluation by tutors at the institutions involved.