    Optimal Dislocation with Persistent Errors in Subquadratic Time

    We study the problem of sorting N elements in the presence of persistent errors in comparisons: in this classical model, each comparison between two elements is wrong independently with some probability p, but repeating the same comparison always gives the same result. The best known algorithms for this problem have running time O(N^2) and achieve an optimal maximum dislocation of O(log N) for constant error probability. Note that no algorithm can achieve dislocation o(log N), regardless of its running time. In this work we present the first subquadratic-time algorithm with optimal maximum dislocation: our algorithm runs in Õ(N^{3/2}) time and guarantees O(log N) maximum dislocation with high probability. Though the first version of our algorithm is randomized, it can be derandomized by extracting the necessary random bits from the results of the comparisons (errors).
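
    A minimal sketch (not the paper's algorithm) of the error model and the dislocation measure used above: the comparator below flips each pair's outcome once and for all with probability p, and max_dislocation reports how far elements end up from their true positions. All names and parameters are illustrative.

        import random
        from functools import cmp_to_key

        def make_persistent_comparator(p, seed=0):
            """Comparator with persistent errors: the outcome for each unordered
            pair is drawn once and then reused on every repetition."""
            rng = random.Random(seed)
            flipped = {}
            def less(a, b):
                key = (min(a, b), max(a, b))
                if key not in flipped:
                    flipped[key] = rng.random() < p
                return (a > b) if flipped[key] else (a < b)
            return less

        def max_dislocation(arr):
            """Largest distance between an element's position in arr and its
            position in the correctly sorted order."""
            rank = {x: i for i, x in enumerate(sorted(arr))}
            return max(abs(i - rank[x]) for i, x in enumerate(arr))

        less = make_persistent_comparator(p=0.05)
        data = random.sample(range(500), 500)
        naive = sorted(data, key=cmp_to_key(lambda a, b: -1 if less(a, b) else 1))
        print("maximum dislocation of a naive noisy sort:", max_dislocation(naive))

    A plain comparison sort offers no dislocation guarantee in this model; the algorithms discussed above are what bring the maximum dislocation down to O(log N).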

    Unobservable Persistent Productivity and Long Term Contracts

    We study the problem of a firm that faces asymmetric information about the productivity of its potential workers. In our framework, a worker’s productivity is either assigned by nature at birth, or determined by an unobservable initial action of the worker that has persistent effects over time. We provide a characterization of the optimal dynamic compensation scheme that attracts only high-productivity workers: consumption, regardless of time period, is ranked according to likelihood ratios of output histories, and the inverse of the marginal utility of consumption satisfies the martingale property derived in Rogerson (1985). However, in the case of i.i.d. output and square-root utility we show that, contrary to the features of the optimal contract for a repeated moral hazard problem, the level and the variance of consumption are negatively correlated, due to the influence of early luck on future compensation. Moreover, in this example long-term inequality is lower under persistent private information.
    Keywords: Mechanism design, Moral hazard, Persistence.
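
    The martingale property mentioned above is often written, with c_t denoting period-t consumption, u the worker's utility and E_t the expectation conditional on the output history up to t (notation assumed here, not taken from the paper), as

        1/u'(c_t) = E_t[ 1/u'(c_{t+1}) ]

    For the square-root utility example, u(c) = sqrt(c) gives u'(c) = 1/(2 sqrt(c)) and hence 1/u'(c) = 2 sqrt(c), so the property specializes to sqrt(c_t) = E_t[ sqrt(c_{t+1}) ]: the square root of consumption is itself a martingale, so a lucky early output history keeps influencing compensation in every later period, consistent with the abstract's remark about early luck.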

    Production Inefficiency in Fed Cattle Marketing and the Value of Sorting Pens into Alternative Marketing Groups Using Ultrasound Technology

    The cattle industry batch-markets animals in pens. Because of this, a single pen can contain both underfed and overfed animals. Thus, there is a production inefficiency associated with batch marketing. We simulate the value of sorting animals, through weight and ultrasound measurements, from original pens into smaller alternative marketing groups. Sorting exploits the production inefficiency and enables cattle feeding enterprises to avoid meat quality discounts, capture premiums, use feed resources more efficiently, and increase returns. The value of sorting is between $15 and $25 per head, with declining marginal returns as the number of sort groups increases.
    Keywords: cattle feeding, production efficiency, simulation, sorting, value-based marketing, ultrasound, Agribusiness, Livestock Production/Industries, Marketing, Research and Development/Tech Change/Emerging Technologies. JEL codes: C15, D21, D23, Q12.
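
    A toy illustration of the mechanism (all parameters hypothetical; the paper's simulation uses actual weight and ultrasound data and grid premiums/discounts): sorting a heterogeneous pen into groups by measured weight shrinks the spread around each group's marketing target, which is where the avoided discounts and feed savings come from.

        import random
        import statistics

        def spread_cost(group):
            """Stand-in for discounts and wasted feed: total deviation of each
            animal from its group's ideal market weight (here, the group mean)."""
            target = statistics.mean(group)
            return sum(abs(w - target) for w in group)

        rng = random.Random(1)
        pen = [rng.gauss(550, 40) for _ in range(200)]   # hypothetical weights in kg

        for n_groups in (1, 2, 4, 8):
            ranked = sorted(pen)
            size = len(ranked) // n_groups
            groups = [ranked[i * size:(i + 1) * size] for i in range(n_groups)]
            print(n_groups, "group(s): total spread cost %.0f" % sum(spread_cost(g) for g in groups))

    The shrinking improvement across 1, 2, 4 and 8 groups mirrors the declining marginal returns noted in the abstract.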

    Longest Increasing Subsequence under Persistent Comparison Errors

    We study the problem of computing a longest increasing subsequence in a sequence S of n distinct elements in the presence of persistent comparison errors. In this model, every comparison between two elements can return the wrong result with some fixed (small) probability p, and comparisons cannot be repeated. Computing the longest increasing subsequence exactly is impossible in this model; therefore, the objective is to identify a subsequence that (i) is indeed increasing and (ii) has a length that approximates the length of the longest increasing subsequence. We present asymptotically tight upper and lower bounds on both the approximation factor and the running time. In particular, we present an algorithm that computes an O(log n)-approximation in time O(n log n), with high probability. This approximation relies on the fact that we can approximately sort n elements in O(n log n) time such that the maximum dislocation of an element is at most O(log n). For the lower bounds, we prove that (i) there is a set of sequences such that, on a sequence picked randomly from this set, every algorithm must return an Ω(log n)-approximation with high probability, and (ii) any O(log n)-approximation algorithm for longest increasing subsequence requires Ω(n log n) comparisons, even in the absence of errors.
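
    For reference, the error-free baseline against which the approximation factor above is measured is the standard O(n log n) patience-sorting computation of the LIS length; the sketch below is that textbook routine, not the paper's error-tolerant algorithm.

        from bisect import bisect_left

        def lis_length(seq):
            """Length of a longest strictly increasing subsequence in O(n log n),
            assuming comparisons are error-free."""
            tails = []   # tails[k] = smallest tail of any increasing subsequence of length k + 1
            for x in seq:
                i = bisect_left(tails, x)
                if i == len(tails):
                    tails.append(x)
                else:
                    tails[i] = x
            return len(tails)

        print(lis_length([3, 1, 4, 1, 5, 9, 2, 6]))   # 4 (for example 1, 4, 5, 9)

    Under persistent comparison errors the comparisons inside such a routine could each be answered incorrectly, which is why only an approximation is achievable, as the abstract explains.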

    Noisy Sorting Without Searching: Data Oblivious Sorting with Comparison Errors

    We provide and study several algorithms for sorting an array of n comparable distinct elements subject to probabilistic comparison errors. In this model, the comparison of two elements returns the wrong answer with a fixed probability, p_e < 1/2, and otherwise returns the correct answer. The dislocation of an element is the distance between its position in a given (current or output) array and its position in a sorted array. There are various algorithms that can be utilized for sorting or near-sorting elements subject to probabilistic comparison errors, but these algorithms are not data oblivious because they all make heavy use of noisy binary searching. In this paper, we provide new methods for sorting with comparison errors that are data oblivious while avoiding the use of noisy binary search methods. In addition, we experimentally compare our algorithms with other sorting algorithms.
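
    The defining feature of a data-oblivious sort is that the sequence of compared positions is fixed in advance and never depends on the data or on the answers received. Odd-even transposition sort is a classical example and is used below only to illustrate the idea; it is not one of the paper's algorithms, and the noisy comparator and parameters are made up.

        import random

        def noisy_less(a, b, p, flipped, rng):
            """Persistent comparison error: each unordered pair is decided once,
            and the (possibly wrong) answer is reused forever after."""
            key = (min(a, b), max(a, b))
            if key not in flipped:
                flipped[key] = rng.random() < p
            return (a > b) if flipped[key] else (a < b)

        def oblivious_sort(values, p=0.05, seed=0):
            """Odd-even transposition sort: n rounds of compare-exchanges at fixed
            positions chosen independently of the data, hence data oblivious."""
            a, flipped, rng = list(values), {}, random.Random(seed)
            n = len(a)
            for rnd in range(n):
                for i in range(rnd % 2, n - 1, 2):
                    if noisy_less(a[i + 1], a[i], p, flipped, rng):   # looks out of order
                        a[i], a[i + 1] = a[i + 1], a[i]
            return a

        data = random.sample(range(300), 300)
        out = oblivious_sort(data)
        rank = {x: i for i, x in enumerate(sorted(out))}
        print("maximum dislocation:", max(abs(i - rank[x]) for i, x in enumerate(out)))

    Because the compare-exchange schedule never branches on comparison outcomes, an observer learns nothing about the data from the access pattern, which is the point of data obliviousness.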

    Instant restore after a media failure

    Media failures usually leave database systems unavailable for several hours until recovery is complete, especially in applications with large devices and high transaction volume. Previous work introduced a technique called single-pass restore, which increases restore bandwidth and thus substantially decreases time to repair. Instant restore goes further: it permits read/write access to any data on a device undergoing restore, even data not yet restored, by restoring individual data segments on demand. Thus, the restore process is guided primarily by the needs of applications, and the observed mean time to repair is effectively reduced from several hours to a few seconds. This paper presents an implementation and evaluation of instant restore. The technique is implemented incrementally on a system starting from the traditional ARIES design for logging and recovery. Experiments show that the transaction latency perceived after a media failure can be cut to less than a second and that the overhead imposed by the technique on normal processing is minimal. The net effect is that a few "nines" of availability are added to the system using simple and low-overhead software techniques.
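
    A minimal sketch of the on-demand idea only (all class and method names are hypothetical): the implementation described in the paper sits inside an ARIES-style logging and recovery system, whereas the toy below merely mimics rebuilding a segment from a backup image plus archived log records the first time a page in it is read.

        class InstantRestoreDevice:
            """Toy model of on-demand restore: the replacement device starts empty,
            and a segment is rebuilt from the backup plus the log archive the first
            time any page in it is read."""
            def __init__(self, backup, log_archive, segment_size=4):
                self.backup = backup              # full backup image: list of pages
                self.log = log_archive            # page number -> redo actions to replay
                self.segment_size = segment_size
                self.pages = {}                   # pages restored so far
                self.restored = set()             # restored segment numbers

            def _restore_segment(self, seg):
                lo = seg * self.segment_size
                for p in range(lo, min(lo + self.segment_size, len(self.backup))):
                    page = self.backup[p]
                    for redo in self.log.get(p, []):      # replay archived log records
                        page = redo(page)
                    self.pages[p] = page
                self.restored.add(seg)

            def read(self, page_no):
                seg = page_no // self.segment_size
                if seg not in self.restored:              # restore lazily, on demand
                    self._restore_segment(seg)
                return self.pages[page_no]

        # Usage: an 8-page backup with one archived update to page 5.
        dev = InstantRestoreDevice(list("abcdefgh"), {5: [lambda page: page.upper()]})
        print(dev.read(5))   # 'F': segment 1 (pages 4-7) is restored on first access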

    Land subsidence over oilfields in the Yellow River Delta

    Subsidence in river deltas is a complex process that has both natural and human causes. Increasing human activities such as aquaculture and petroleum extraction are affecting the Yellow River delta, and one consequence is subsidence. The purpose of this study is to measure the surface displacements in the Yellow River delta region and to investigate the corresponding subsidence source. In this paper, the Stanford Method for Persistent Scatterers (StaMPS) package was employed to process Envisat ASAR images collected between 2007 and 2010. Consistent results between two descending tracks show subsidence, with a mean rate of up to 30 mm/yr in the radar line-of-sight direction, in Gudao Town (an oilfield area), the Gudong oilfield and Xianhe Town, all within the delta, and also show that subsidence is not uniform across the delta. Field investigation shows a connection between areas of non-uniform subsidence and areas of petroleum extraction. In a 9 km² area of the Gudao Oilfield, a poroelastic disk reservoir model is used to model the InSAR-derived displacements. In general, good fits between InSAR observations and modeled displacements are seen. The subsidence observed in the vicinity of the oilfield is thus suggested to be caused by fluid extraction.
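
    As a rough sense of scale (assumptions mine, not the paper's): if the motion were purely vertical and the incidence angle were the nominal ~23° of Envisat ASAR's standard imaging mode, the line-of-sight rate converts to a vertical rate as

        vertical rate ≈ LOS rate / cos(incidence angle) ≈ 30 mm/yr / cos(23°) ≈ 30 / 0.92 ≈ 33 mm/yr

    so the quoted 30 mm/yr line-of-sight rate would correspond to roughly 33 mm/yr of vertical subsidence; any horizontal motion would change this figure.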

    Task-Specific Experience and Task-Specific Talent: Decomposing the Productivity of High School Teachers

    We use administrative panel data to decompose worker performance into components relating to general talent, task-specific talent, general experience, and task-specific experience. We consider the context of high school teachers, in which tasks consist of teaching particular subjects in particular tracks. Using the timing of changes in the subjects and levels to which teachers are assigned as identifying variation, we show that much of the return to teacher experience estimated in the literature is actually subject-specific. By contrast, very little of the variation in the permanent component of productivity among teachers is subject-specific or level-specific. Counterfactual simulations suggest that maximizing the value of task-specific experience could produce nearly costless efficiency gains on the order of 0.02 test-score standard deviations.
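
    A stylized version of such a decomposition (not the authors' specification or data; the panel below is purely synthetic): regress a performance measure on teacher-by-task fixed effects, which absorb both general and task-specific talent, plus experience terms that accumulate overall and within the current task.

        import numpy as np

        # Build a small synthetic teacher-year panel (all numbers hypothetical).
        rng = np.random.default_rng(0)
        n_teachers, n_tasks, n_years = 60, 2, 10

        teacher_id, task_id, gen_exp, task_exp, y = [], [], [], [], []
        gen_talent = rng.normal(0.0, 0.15, n_teachers)
        task_talent = rng.normal(0.0, 0.05, (n_teachers, n_tasks))
        for t in range(n_teachers):
            exp_in_task = np.zeros(n_tasks)
            for year in range(n_years):
                k = int(rng.integers(n_tasks))          # subject/track taught this year
                score = (gen_talent[t] + task_talent[t, k]
                         + 0.03 * year                  # return to general experience
                         + 0.08 * exp_in_task[k]        # return to task-specific experience
                         + rng.normal(0.0, 0.10))
                teacher_id.append(t); task_id.append(k)
                gen_exp.append(year); task_exp.append(exp_in_task[k]); y.append(score)
                exp_in_task[k] += 1

        # OLS with teacher-by-task dummies plus the two experience terms; the two
        # slopes are the experience components of the decomposition.
        cell = np.array(teacher_id) * n_tasks + np.array(task_id)
        dummies = (cell[:, None] == np.arange(n_teachers * n_tasks)).astype(float)
        X = np.column_stack([dummies, gen_exp, task_exp])
        beta, *_ = np.linalg.lstsq(X, np.array(y), rcond=None)
        print("estimated return to general experience:       %.3f" % beta[-2])
        print("estimated return to task-specific experience:  %.3f" % beta[-1])

    The timing of task switches is what separates the two experience terms here, loosely mirroring the identifying variation described in the abstract.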