
    A Darwinian approach to control-structure design

    Genetic algorithms (GAs), as introduced by Holland (1975), are one form of directed random search, in which the direction of the search is based on Darwin's 'survival of the fittest' principle. GAs differ radically from the more traditional design optimization techniques. They work with a coding of the design variables rather than with the design variables directly. The search is conducted from a population of designs (i.e., from a large number of points in the design space), whereas traditional algorithms search from a single design point. The GA requires only objective-function information, as opposed to gradient or other auxiliary information. Finally, the GA is based on probabilistic transition rules rather than deterministic rules. These features allow the GA to attack problems with multiple local minima, discontinuous design spaces, and mixed design variables, all in a single, consistent framework.
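    To illustrate the loop the abstract describes (binary coding of design variables, population-based search, fitness-only evaluation, probabilistic selection, crossover, and mutation), here is a minimal sketch in Python. The toy objective, the 16-bit coding, and the parameter values are assumptions made for illustration only; they are not the control-structure design problem treated in the paper.

    import random

    # Minimal GA sketch: binary coding, population search, objective-only
    # fitness, probabilistic operators. All constants are illustrative.
    BITS = 16                      # bits per design variable (the coding, not the variable)
    LO, HI = -5.0, 5.0             # bounds used to decode a bit string into a real value

    def decode(bits):
        """Map a bit string to a real-valued design variable in [LO, HI]."""
        as_int = int("".join(map(str, bits)), 2)
        return LO + (HI - LO) * as_int / (2**BITS - 1)

    def fitness(bits):
        """Uses only objective-function information (no gradients)."""
        x = decode(bits)
        return -(x**2 - 3.0 * x + 2.0)      # maximize the negative of a toy objective

    def select(pop):
        """Probabilistic (tournament) selection: fitter designs survive more often."""
        a, b = random.sample(pop, 2)
        return a if fitness(a) > fitness(b) else b

    def crossover(p1, p2):
        cut = random.randint(1, BITS - 1)
        return p1[:cut] + p2[cut:]

    def mutate(bits, rate=0.02):
        return [b ^ 1 if random.random() < rate else b for b in bits]

    pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(40)]
    for _ in range(100):                    # generations
        pop = [mutate(crossover(select(pop), select(pop))) for _ in pop]

    best = max(pop, key=fitness)
    print(f"best x ~ {decode(best):.4f}, objective ~ {-fitness(best):.4f}")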

    Justice and predictability in torts

    Recent reexaminations of the principles of tort liability have entertained two possible rationales for the fault principle, one "moral" and the other economic. Neither is satisfactory. I propose here a third rationale and show how it suffices to refute at least some of the challenges to the negligence system. The character of this rationale is causal, and the central thesis of this paper is that inasmuch as the tort system should aim to place the costs of accidents on the source of those accidents, we have not yet found an acceptable alternative to the negligence system. This thesis is defended and developed through a reexamination of some recent theories of strict liability and reflection on some of what has been said about the role of causation in torts. A backdrop to the entire discussion is the question of how one might best ensure that potential defendants can predict with reasonable certainty which courses of action will make them liable, should damages ensue.

    ATP: a Datacenter Approximate Transmission Protocol

    Many datacenter applications, such as machine learning and streaming systems, do not need the complete set of data to perform their computation. Current approximate applications in datacenters run on a reliable network layer such as TCP. To improve performance, they either let the sender select a subset of the data to transmit to the receiver, or transmit all the data and let the receiver drop some of it. These approaches are network oblivious and transmit more data than necessary, affecting both application runtime and network bandwidth usage. On the other hand, running approximate applications on a lossy network such as UDP cannot guarantee the accuracy of the applications' computation. We propose to run approximate applications on a lossy network and to allow packet loss in a controlled manner. Specifically, we designed a new network protocol, the Approximate Transmission Protocol (ATP), for datacenter approximate applications. ATP opportunistically exploits as much available network bandwidth as possible while performing a loss-based rate control algorithm to avoid bandwidth waste and retransmission. It also ensures fair bandwidth sharing across flows and improves the performance of accurate applications by leaving more switch buffer space to accurate flows. We evaluated ATP with both simulation and a real implementation, using two macro-benchmarks and two real applications, Apache Kafka and Flink. Our evaluation results show that ATP reduces application runtime by 13.9% to 74.6% compared with a TCP-based solution that drops packets at the sender, and improves accuracy by up to 94.0% compared with UDP.
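    The abstract does not specify ATP's actual rate-control rule, so the following Python sketch only illustrates the general idea of loss-based rate control it mentions: a sender raises its rate while observed loss stays within a tolerated approximation budget and backs off when loss exceeds it. The AIMD-style rule, the constants, and the simulated loss feedback are all hypothetical stand-ins, not ATP's algorithm.

    import random

    # Illustrative loss-based rate controller; all values are assumptions.
    ADD_STEP = 5.0          # Mbps added per interval when loss stays below target
    CUT_FACTOR = 0.7        # multiplicative cut applied when loss exceeds target
    LOSS_TARGET = 0.02      # tolerated fraction of lost packets (approximation budget)

    def observed_loss(rate, capacity=100.0):
        """Stand-in for loss feedback: loss grows once the sending rate
        pushes past the (hypothetical) available bandwidth."""
        overload = max(0.0, rate - capacity) / capacity
        return min(1.0, overload + random.uniform(0.0, 0.01))

    rate = 10.0             # Mbps, starting rate for this flow
    for interval in range(30):
        loss = observed_loss(rate)
        if loss > LOSS_TARGET:
            rate *= CUT_FACTOR          # back off to avoid wasting bandwidth
        else:
            rate += ADD_STEP            # probe for spare capacity
        print(f"interval {interval:2d}: rate ~ {rate:6.1f} Mbps, loss ~ {loss:.3f}")

    In this toy run the rate climbs until it overshoots the assumed capacity, then settles into a probe-and-back-off cycle around it, which is the qualitative behavior a loss-based controller targets.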