
    Large-Block Modular Addition Checksum Algorithms

    Checksum algorithms are widely used because they provide basic detection of corrupted data with a simple, computationally fast algorithm. This paper describes the benefits of adding increased data block size as a design parameter for modular addition checksums, combined with an empirical approach to modulus selection. A larger processing block size with the right modulus can provide significantly better fault detection performance with no change in the number of bytes used to store the check value. In particular, a large-block dual-sum approach provides Hamming Distance 3-class fault detection performance for many times the data word length capability of the previously studied Fletcher and Adler checksums. Moduli of 253 and 65525 are identified as particularly effective for general-purpose checksum use.
    Comment: 21 pages, 15 figures
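    A minimal sketch of the large-block idea described above, assuming 32-bit accumulation blocks, little-endian byte packing, and modulus 253 for an 8-bit check value (the paper's exact block size and its dual-sum formulation may differ):

        #include <stddef.h>
        #include <stdint.h>

        /* Single-sum modular addition checksum that folds the data in
         * 32-bit blocks rather than single bytes, reducing modulo 253,
         * one of the moduli the paper identifies as effective. */
        uint8_t large_block_checksum_mod253(const uint8_t *data, size_t len)
        {
            uint64_t sum = 0;
            size_t i = 0;

            /* Accumulate four bytes per addition instead of one. */
            for (; i + 4 <= len; i += 4) {
                uint32_t block = (uint32_t)data[i]
                               | (uint32_t)data[i + 1] << 8
                               | (uint32_t)data[i + 2] << 16
                               | (uint32_t)data[i + 3] << 24;
                sum = (sum + block) % 253;
            }
            /* Fold in any trailing bytes one at a time. */
            for (; i < len; i++) {
                sum = (sum + data[i]) % 253;
            }
            return (uint8_t)sum;
        }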

    An Improved Modular Addition Checksum Algorithm

    This paper introduces a checksum algorithm that provides a new point in the performance/complexity/effectiveness checksum tradeoff space. It has better fault detection properties than single-sum and dual-sum modular addition checksums. It is also simpler to compute efficiently than a cyclic redundancy check (CRC) because it exploits commonly available hardware and programming-language support for unsigned integer division. The key idea is to compute a single running sum, but to introduce a left shift by the size (in bits) of the modulus before performing the modular reduction after each addition step. This approach provides a Hamming Distance of 3 for longer data word lengths than dual-sum approaches such as the Fletcher checksum. Moreover, it provides this capability using a single running sum that is only twice the size of the final computed check value, while offering fault detection even better than large-block variants of dual-sum approaches that require larger division operations.
    Comment: 9 pages, 3 figures
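    The shift-then-reduce idea lends itself to a very small sketch. The following assumes modulus 253 with byte-at-a-time processing for an 8-bit check value (the paper evaluates other moduli and block sizes); the running sum is shifted left by 8 bits, the bit width of the modulus, before each reduction:

        #include <stddef.h>
        #include <stdint.h>

        /* Shift-then-reduce running sum: sum = ((sum << 8) + byte) mod 253.
         * A 16-bit running sum (twice the check value size) would suffice;
         * uint32_t is used here only for convenience. */
        uint8_t shifted_sum_checksum(const uint8_t *data, size_t len)
        {
            const uint32_t modulus = 253;   /* 8-bit modulus, assumed here */
            uint32_t sum = 0;

            for (size_t i = 0; i < len; i++) {
                sum = ((sum << 8) + data[i]) % modulus;
            }
            return (uint8_t)sum;
        }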

    Winning the Imitation Game: Setting Safety Expectations for Automated Vehicles

    This article suggests that legislatures amend existing law to create a new legal category of computer driver, allowing a plaintiff to bring a negligence claim against an automated vehicle manufacturer for loss proximately caused by any negligent driving behavior exhibited by the driving automation systems it produced. Creating this new legal category preserves the status quo approach to attribution and allocation of liability, including permitting defendants to take advantage of contributory negligence and comparative fault rules. It also allows the existing structure of liability laws and regulations for motor vehicles to continue functioning, in which the federal government regulates automotive equipment and the state governments regulate drivers, driving, licensing, and registration. The law often needs a statute to address changes in technology for which existing law understandably fails to provide answers. Creating the category of computer driver avoids the conceptual difficulties caused by an uncertain boundary between regulation of equipment and regulation of drivers, the very disruptive situation created by the new technologies of driving automation in which computer drivers replace human drivers. It prevents shifting regulatory responsibility for liability attribution away from state governments and to the federal government when the human driver is replaced by equipment, in the form of certain sophisticated driving automation systems, which we capture with the legal fiction of a computer driver.

    The Awkward Middle for Automated Vehicles: Liability Attribution Rules When Humans and Computers Share Driving Responsibilities

    This Article proposes an architecture of concepts and language for use in a state statute that establishes when a human occupant of an automated vehicle (AV) has contributory negligence for her interactions with a driving automation system. Existing law provides an insufficient basis for addressing the question of liability because a driving automation system intentionally places some burden for safe operation of an AV on a human driver. Without further statutory guidance, leaving resolution to the courts will likely delay legal certainty significantly, creating inefficient and potentially inconsistent results across jurisdictions because of the technological complexity of the area. To provide legal certainty, the recommended approach uses four operational modes: testing, autonomous, supervisory, and conventional. Transition rules for the transfer of responsibility from machine to human clarify at what times a computer driver or human driver has primary responsibility for avoiding or mitigating harm. Importantly, specifying clear parameters for a finding of contributory negligence prevents the complexity of machine/human interactions from creating an overbroad liability shield. Such a shield could deprive deserving plaintiffs of appropriate recoveries when a computer driver exhibits behavior that would be negligent if a human driver were to drive in a similar manner.

    Challenges in Autonomous Vehicle Testing and Validation

    Software testing is all too often simply a bug hunt rather than a well-considered exercise in ensuring quality. A more methodical approach than a simple cycle of system-level test-fail-patch-test will be required to deploy safe autonomous vehicles at scale. The ISO 26262 development V process sets up a framework that ties each type of testing to a corresponding design or requirement document, but presents challenges when adapted to deal with the sorts of novel testing problems that face autonomous vehicles. This paper identifies five major challenge areas in testing according to the V model for autonomous vehicles: driver out of the loop, complex requirements, non-deterministic algorithms, inductive learning algorithms, and fail-operational systems. General solution approaches that seem promising across these different challenge areas include: phased deployment using successively relaxed operational scenarios, use of a monitor/actuator pair architecture to separate the most complex autonomy functions from simpler safety functions, and fault injection as a way to perform more efficient edge case testing. While significant challenges remain in safety-certifying the algorithms that themselves provide high-level autonomy, it seems within reach to instead architect the system and its accompanying design process to employ existing software safety approaches.
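    As a rough illustration of the monitor/actuator pair architecture mentioned above, the sketch below separates a complex, hard-to-certify planner from a simple monitor that enforces a fixed safety envelope; all names and limits here are hypothetical, not taken from the paper:

        #include <stdbool.h>

        typedef struct {
            double steering_deg;   /* commanded steering angle */
            double speed_mps;      /* commanded speed */
        } command_t;

        /* Stub standing in for the complex autonomy function. */
        static command_t planner_propose(void)
        {
            command_t c = { 5.0, 12.0 };
            return c;
        }

        /* Simple, independently verifiable monitor: accepts a command
         * only if it stays inside the safety envelope. */
        static bool monitor_accepts(command_t cmd)
        {
            return cmd.speed_mps >= 0.0 && cmd.speed_mps <= 30.0
                && cmd.steering_deg >= -35.0 && cmd.steering_deg <= 35.0;
        }

        /* Actuation path: pass through accepted commands, otherwise
         * substitute a trivially safe fallback action. */
        static command_t next_actuator_command(void)
        {
            command_t proposed = planner_propose();
            if (monitor_accepts(proposed)) {
                return proposed;
            }
            command_t safe_stop = { 0.0, 0.0 };
            return safe_stop;
        }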