4,006 research outputs found

    On Individual Risk

    We survey a variety of possible explications of the term "Individual Risk." These are in turn based on a variety of interpretations of "Probability," including the Classical, Enumerative, Frequency, Formal, Metaphysical, Personal, Propensity, Chance and Logical conceptions of Probability, which we review and compare. We distinguish between "groupist" and "individualist" understandings of Probability, and explore both "group to individual" (G2i) and "individual to group" (i2G) approaches to characterising Individual Risk. Although in the end that concept remains subtle and elusive, some pragmatic suggestions for progress are made. Comment: 31 pages

    Authentication of Students and Students’ Work in E-Learning : Report for the Development Bid of Academic Year 2010/11

    The global e-learning market is projected to reach $107.3 billion by 2015, according to a report by The Global Industry Analyst (Analyst 2010). The popularity and growth of the online programmes within the School of Computer Science are clearly in line with this projection. However, also on the rise are student dishonesty and cheating in the open and virtual environment of e-learning courses (Shepherd 2008). Institutions offering e-learning programmes face the challenge of deterring and detecting these misbehaviours by introducing security mechanisms into current e-learning platforms. In particular, authenticating that a registered student indeed takes an online assessment, e.g., an exam or a piece of coursework, is essential for institutions to give credit to the correct candidate. Authenticating a student ensures that the student is indeed who he says he is. Authenticating a student's work goes one step further, ensuring that an authenticated student did the submitted work himself. This report investigates and compares current techniques and solutions for remotely authenticating distance-learning students and/or their work for e-learning programmes. The report also aims to recommend solutions that fit the UH StudyNet platform. Submitted Version

    Operating System Support for Redundant Multithreading

    Failing hardware is a fact and trends in microprocessor design indicate that the fraction of hardware suffering from permanent and transient faults will continue to increase in future chip generations. Researchers proposed various solutions to this issue with different downsides: Specialized hardware components make hardware more expensive in production and consume additional energy at runtime. Fault-tolerant algorithms and libraries enforce specific programming models on the developer. Compiler-based fault tolerance requires the source code for all applications to be available for recompilation. In this thesis I present ASTEROID, an operating system architecture that integrates applications with different reliability needs. ASTEROID is built on top of the L4/Fiasco.OC microkernel and extends the system with Romain, an operating system service that transparently replicates user applications. Romain supports single- and multi-threaded applications without requiring access to the application's source code. Romain replicates applications and their resources completely and thereby does not rely on hardware extensions, such as ECC-protected memory. In my thesis I describe how to efficiently implement replication as a form of redundant multithreading in software. I develop mechanisms to manage replica resources and to make multi-threaded programs behave deterministically for replication. I furthermore present an approach to handle applications that use shared-memory channels with other programs. My evaluation shows that Romain provides 100% error detection and more than 99.6% error correction for single-bit flips in memory and general-purpose registers. At the same time, Romain's execution time overhead is below 14% for single-threaded applications running in triple-modular redundant mode. 
The last part of my thesis acknowledges that software-implemented fault tolerance methods often rely on the correct functioning of a certain set of hardware and software components, the Reliable Computing Base (RCB). I introduce the concept of the RCB and discuss what constitutes the RCB of the ASTEROID system and of other fault tolerance mechanisms. Thereafter I present three case studies that evaluate approaches to protecting RCB components, aiming to achieve a software stack that is fully protected against hardware errors.
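    The error detection and correction figures above come from replicated execution with majority voting. As a rough illustration only, the following sketch shows the voting idea behind triple-modular redundancy in software; it is a hypothetical toy, not Romain's actual replication mechanism, and the fault-injection example is invented.

```python
# Hypothetical sketch of software-implemented triple-modular redundancy
# (TMR) with majority voting. This only illustrates the detection and
# masking idea; it is not how Romain actually replicates applications.
from collections import Counter

def run_replicated(func, args=(), n_replicas=3):
    """Run `func` in n replicas and vote on the results.

    Returns (result, corrected): the majority result and whether a
    minority replica disagreed, i.e. a fault was detected and masked.
    """
    results = [func(*args) for _ in range(n_replicas)]
    (winner, votes), = Counter(results).most_common(1)
    if votes == n_replicas:
        return winner, False          # all replicas agree
    if votes >= (n_replicas // 2) + 1:
        return winner, True           # fault detected and corrected
    raise RuntimeError("no majority: uncorrectable divergence")

# Example: a single faulty replica is outvoted by the two correct ones.
outputs = iter([4, 4, 5])             # third replica produces a wrong value
result, corrected = run_replicated(lambda: next(outputs))
# result == 4, corrected == True
```

    A real system must also keep replicas deterministic (as the thesis discusses for multi-threaded programs), since voting only works when correct replicas are guaranteed to produce identical outputs.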

    On Statistical Methods for Safety Validation of Automated Vehicles

    Automated vehicles (AVs) are expected to bring safer and more convenient transport in the future. Consequently, before introducing AVs at scale to the general public, the required levels of safety should be demonstrated with evidence. However, statistical evidence generated by brute-force testing with safety drivers in real traffic does not scale well. Therefore, more efficient methods are needed to evaluate whether an AV exhibits acceptable levels of risk. This thesis studies the use of two methods to evaluate the AV's safety performance efficiently. Both methods are based on assessing near-collisions using threat metrics to estimate the frequency of actual collisions. The first method, called subset simulation, is here used to search the scenario parameter space in a simulation environment to estimate the probability of collision for an AV under development. More specifically, this thesis explores how the choice of threat metric, used to guide the search, affects the precision of the failure rate estimation. The results show significant differences between the metrics and that some provide precise and accurate estimates. The second method is based on Extreme Value Theory (EVT), which is used to model the behavior of rare events. In this thesis, near-collision scenarios are identified using threat metrics and then extrapolated to estimate the frequency of actual collisions. The collision frequency estimates from different types of threat metrics are assessed when used with EVT for AV safety validation. Results show that a metric relating to the point where a collision is unavoidable works best and provides credible estimates. In addition, this thesis proposes how EVT and threat metrics can be used as a proactive safety monitor for AVs deployed in real traffic. The concept is evaluated in a fictive development case and compared to a reactive approach of counting the actual events. It is found that the risk exposure of releasing a non-safe function can be significantly reduced by applying the proposed EVT monitor.
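    The EVT extrapolation described above can be sketched with a standard peaks-over-threshold analysis: fit a Generalized Pareto Distribution (GPD) to near-collision severities above a threshold and extrapolate to the collision level. The synthetic data, the threshold of 0.5, and the collision level of 1.0 below are illustrative assumptions, not values from the thesis.

```python
# Minimal peaks-over-threshold sketch of the EVT idea: fit a GPD to
# threat-metric exceedances (near-collisions) and extrapolate the
# probability of reaching the collision level. All numbers are
# illustrative assumptions, not results from the thesis.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
# Synthetic per-event severity, e.g. an inverse time-to-collision metric.
severity = rng.exponential(scale=0.1, size=100_000)

u = 0.5                                   # near-collision threshold (assumed)
exceedances = severity[severity > u] - u
p_u = exceedances.size / severity.size    # empirical rate of exceeding u

# Fit the GPD to the excesses over the threshold.
shape, loc, scale = genpareto.fit(exceedances, floc=0.0)

collision_level = 1.0                     # severity meaning "collision" (assumed)
# P(severity > level) = P(severity > u) * P(excess > level - u)
p_collision = p_u * genpareto.sf(collision_level - u, shape, loc=0.0, scale=scale)
print(f"estimated collision probability per event: {p_collision:.2e}")
```

    The point of the method is visible here: collisions themselves are far too rare to count directly, but the much more frequent near-collisions carry enough tail information to extrapolate a collision frequency estimate.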

    Traffic Optimization in Data Center and Software-Defined Programmable Networks

    The abstract is in the attachment.

    How can we know a self-driving car is safe?

    Self-driving cars promise solutions to some of the hazards of human driving, but there are important questions about the safety of these new technologies. This paper takes a qualitative social science approach to the question ‘how safe is safe enough?’ Drawing on 50 interviews with people developing and researching self-driving cars, I describe two dominant narratives of safety. The first, safety-in-numbers, sees safety as a self-evident property of the technology and offers metrics in an attempt to reassure the public. The second approach, safety-by-design, starts with the challenge of safety assurance and sees the technology as intrinsically problematic. The first approach is concerned only with performance, i.e. what a self-driving system does. The second is also concerned with why systems do what they do and how they should be tested. Using insights from workshops with members of the public, I introduce a further concern that will define trustworthy self-driving cars: the intended and perceived purposes of a system. Engineers’ safety assurances will have their credibility tested in public. ‘How safe is safe enough?’ prompts further questions: ‘safe enough for what?’ and ‘safe enough for whom?’

    From Bugs to Decision Support – Leveraging Historical Issue Reports in Software Evolution

    Software developers in large projects work in complex information landscapes and staying on top of all relevant software artifacts is an acknowledged challenge. As software systems often evolve over many years, a large number of issue reports is typically managed during the lifetime of a system, representing the units of work needed for its improvement, e.g., defects to fix, requested features, or missing documentation. Efficient management of incoming issue reports requires the successful navigation of the information landscape of a project. In this thesis, we address two tasks involved in issue management: Issue Assignment (IA) and Change Impact Analysis (CIA). IA is the early task of allocating an issue report to a development team, and CIA is the subsequent activity of identifying how source code changes affect the existing software artifacts. While IA is fundamental in all large software projects, CIA is particularly important to safety-critical development. Our solution approach, grounded on surveys of industry practice as well as scientific literature, is to support navigation by combining information retrieval and machine learning into Recommendation Systems for Software Engineering (RSSE). While the sheer number of incoming issue reports might challenge the overview of a human developer, our techniques instead benefit from the availability of ever-growing training data. We leverage the volume of issue reports to develop accurate decision support for software evolution. We evaluate our proposals both by deploying an RSSE in two development teams, and by simulation scenarios, i.e., we assess the correctness of the RSSEs' output when replaying the historical inflow of issue reports. In total, more than 60,000 historical issue reports are involved in our studies, originating from the evolution of five proprietary systems for two companies. 
    Our results show that RSSEs for both IA and CIA can help developers navigate large software projects, in terms of locating development teams and software artifacts. Finally, we discuss how to support the transfer of our results to industry, focusing on addressing the context dependency of our tool support by systematically tuning parameters to a specific operational setting.
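    The combination of information retrieval and machine learning for issue assignment can be sketched as text classification over historical reports: vectorize report text with TF-IDF and train a classifier to recommend a team. The toy corpus and team labels below are invented for illustration; the thesis' RSSEs are trained on tens of thousands of historical issue reports.

```python
# Hypothetical sketch of IR/ML-based issue assignment (IA): TF-IDF
# features over issue report text plus a linear classifier that
# recommends a development team. The tiny corpus is invented; a real
# RSSE would train on the project's historical issue inflow.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reports = [
    "NullPointerException in login form validation",
    "UI button misaligned on settings page",
    "Crash when parsing malformed config file",
    "Dark mode colors wrong in toolbar",
]
teams = ["backend", "frontend", "backend", "frontend"]

# TF-IDF vectorization (the IR part) feeding a classifier (the ML part).
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(reports, teams)

new_issue = ["Crash in login validation"]
recommended = model.predict(new_issue)[0]   # team suggested for triage
print(recommended)
```

    The thesis' point about ever-growing training data applies directly: unlike a human triager, such a model benefits from each additional resolved report, and (as the final paragraph notes) its parameters must be tuned to the specific operational setting to transfer well.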