
    Random Sampling with Removal

    Random sampling is a classical tool in constrained optimization. Under favorable conditions, the optimal solution subject to a small subset of randomly chosen constraints violates only a small subset of the remaining constraints. Here we study the following variant that we call random sampling with removal: suppose that after sampling the subset, we remove a fixed number of constraints from the sample, according to an arbitrary rule. Is it still true that the optimal solution of the reduced sample violates only a small subset of the constraints? The question naturally comes up in situations where the solution subject to the sampled constraints is used as an approximate solution to the original problem. In this case, it makes sense to improve the cost and reduce the volatility of the sample solution by removing some of the constraints that appear most restrictive. At the same time, the approximation quality (measured in terms of violated constraints) should remain high. We study random sampling with removal in a generalized, completely abstract setting where we assign to each subset R of the constraints an arbitrary set V(R) of constraints disjoint from R; in applications, V(R) corresponds to the constraints violated by the optimal solution subject to only the constraints in R. Furthermore, our results are parametrized by the dimension d, i.e., we assume that every set R has a subset B of size at most d with the same set of violated constraints. This is the first time this generalized setting is studied. In this setting, we prove matching upper and lower bounds for the expected number of constraints violated by a random sample, after the removal of k elements. For a large range of values of k, the new upper bounds improve the previously best bounds for LP-type problems, which moreover had only been known in special cases. We show that this bound on special LP-type problems can be derived in the much more general setting of violator spaces, and with very elementary proofs.
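    A toy simulation makes the abstract setting concrete. The sketch below is illustrative only, not code from the paper; all names and parameters are invented. It uses the one-dimensional problem whose optimum over a set R of numeric constraints is max(R), so that the dimension is d = 1, and applies the removal rule "drop the k largest sampled constraints", the adversarial choice for this problem.

```python
import random

# A minimal sketch, assuming the 1-dimensional problem "minimize x subject to
# x >= a_i": the optimum of a set R is max(R), and V(R) is the set of
# constraints outside R that exceed max(R).  Every R has a basis of size 1
# (its maximum), so d = 1.  Removing the k largest sampled constraints is the
# adversarial removal rule here.

def expected_violations(n=1000, r=100, k=3, trials=2000, seed=0):
    rng = random.Random(seed)
    constraints = range(n)                 # n distinct constraint values a_i
    total = 0
    for _ in range(trials):
        sample = rng.sample(constraints, r)
        reduced = sorted(sample)[:-k] if k else sample
        reduced_set = set(reduced)
        opt = max(reduced)                 # optimum of the reduced sample
        # |V(reduced)|: constraints outside the reduced sample that the new
        # optimum violates, including the k removed constraints themselves.
        total += sum(1 for a in constraints if a > opt and a not in reduced_set)
    return total / trials

if __name__ == "__main__":
    # For d = 1, an upper bound of the flavor studied here is roughly
    # (k + 1) * (n - r) / (r + 1) + k; compare the printed estimate against it.
    print(expected_violations())
```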

    Models wagging the dog: are circuits constructed with disparate parameters?

    In a recent article, Prinz, Bucher, and Marder (2004) addressed the fundamental question of whether neural systems are built with a fixed blueprint of tightly controlled parameters or in a way in which properties can vary greatly from one individual to another, using a database modeling approach. Here, we examine their main conclusion, that neural circuits indeed are built with largely varying parameters, in the light of our own experimental and modeling observations. We critically discuss the experimental and theoretical evidence, including the general adequacy of database approaches for questions of this kind, and come to the conclusion that the last word on this fundamental question has not yet been spoken.

    Fast Distributed Algorithms for LP-Type Problems of Bounded Dimension

    In this paper we present various distributed algorithms for LP-type problems in the well-known gossip model. LP-type problems include many important classes of problems such as (integer) linear programming, geometric problems like smallest enclosing ball and polytope distance, and set problems like hitting set and set cover. In the gossip model, a node can only push information to or pull information from nodes chosen uniformly at random. Protocols for the gossip model are usually very practical due to their fast convergence, their simplicity, and their stability under stress and disruptions. Our algorithms are very efficient (logarithmic rounds or better, with just polylogarithmic communication work per node per round) whenever the combinatorial dimension of the given LP-type problem is constant, even if the size of the given LP-type problem is polynomially large in the number of nodes.
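    To illustrate the communication model itself (not the paper's algorithms, which handle general bounded-dimension LP-type problems), the sketch below runs uniform pull gossip on the simplest LP-type problem, of combinatorial dimension 1: every node learns the global minimum of the node values, typically within a small multiple of log2(n) rounds. All names are illustrative.

```python
import random

# A minimal sketch of the gossip model, assuming the simplest LP-type problem
# of combinatorial dimension 1: each node holds a value and the goal is the
# global minimum.  Per round, every node pulls the current best value from one
# node chosen uniformly at random.

def pull_gossip_min(values, seed=0):
    rng = random.Random(seed)
    n = len(values)
    best = list(values)                # each node's current best value
    rounds = 0
    while len(set(best)) > 1:          # run until all nodes agree
        snapshot = best[:]             # synchronous rounds: read old state
        for v in range(n):
            partner = rng.randrange(n)                 # uniformly random node
            best[v] = min(best[v], snapshot[partner])  # pull partner's best
        rounds += 1
    return best[0], rounds

if __name__ == "__main__":
    rng = random.Random(1)
    values = [rng.random() for _ in range(1024)]
    answer, rounds = pull_gossip_min(values)
    print(answer == min(values), rounds)   # True, after O(log n) rounds
```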

    VARIANCE COMPONENTS AND SELECTION FOR FEATHER PECKING BEHAVIOR IN LAYING HENS

    Variance components and selection response for feather pecking behaviour were studied by analysing data from a divergent selection experiment. An investigation showed that a Box-Cox transformation with power λ = -0.2 made the data approximately normally distributed and best fitted by the given model. Variance components and selection response were estimated using Bayesian analysis with the Gibbs sampling technique. The total variation was rather large for the two traits in both the low feather pecking line (LP) and the high feather pecking line (HP). The standard deviation was about three times as large as the mean on the observed scale, and about the same value as the mean on the transformed scale. Based on the mean of the marginal posterior distribution, on the Box-Cox transformed scale, heritability for the number of feather pecking bouts (FP bouts) was 0.174 in line LP and 0.139 in line HP. For the number of feather pecking pecks (FP pecks), heritability was 0.139 in line LP and 0.105 in line HP. No full-sib group effect or observation-pen effect was found for either trait. After 4 generations of selection, the total response for the number of FP bouts on the transformed scale was 58% and 74% of the mean of the first generation in line LP and line HP, respectively; the total response for the number of FP pecks was 47% and 46% of the mean of the first generation in line LP and line HP, respectively. The total response on the original scale was considerably larger in line HP than in line LP. These results show that the heritability for feather pecking behaviour is moderately low but the variation is large, and that genetic improvement of feather pecking behaviour by selection is effective.
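    The transformation step can be reproduced in a few lines. The sketch below is illustrative: it applies a Box-Cox transformation with the fixed power λ = -0.2 to simulated skewed counts, since the study's feather pecking records are not reproduced here, and compares normality before and after.

```python
import numpy as np
from scipy import stats

# A sketch of the transformation step described above: Box-Cox with a fixed
# power, y(lam) = (y**lam - 1) / lam for lam != 0.  The lognormal counts are
# simulated stand-ins for the (unavailable) feather pecking records.

rng = np.random.default_rng(0)
counts = rng.lognormal(mean=1.0, sigma=1.0, size=500)  # skewed, positive data

transformed = stats.boxcox(counts, lmbda=-0.2)  # fixed power, no estimation

# A Shapiro-Wilk statistic closer to 1 indicates closer-to-normal data.
print("raw:         W =", round(stats.shapiro(counts).statistic, 3))
print("transformed: W =", round(stats.shapiro(transformed).statistic, 3))
```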

    Fast and Deterministic Approximations for k-Cut

    In an undirected graph, a k-cut is a set of edges whose removal breaks the graph into at least k connected components. The minimum weight k-cut can be computed in n^O(k) time, but when k is treated as part of the input, computing the minimum weight k-cut is NP-hard [Goldschmidt and Hochbaum, 1994]. For poly(m,n,k)-time algorithms, the best possible approximation factor is essentially 2 under the small set expansion hypothesis [Manurangsi, 2017]. Saran and Vazirani [1995] showed that a (2 - 2/k)-approximately minimum weight k-cut can be computed via O(k) minimum cuts, which implies an Õ(km) randomized running time via the nearly linear time randomized min-cut algorithm of Karger [2000]. Nagamochi and Kamidoi [2007] showed that a (2 - 2/k)-approximately minimum weight k-cut can be computed deterministically in O(mn + n^2 log n) time. These results prompt two basic questions. The first concerns the role of randomization. Is there a deterministic algorithm for 2-approximate k-cuts matching the randomized running time of Õ(km)? The second question qualitatively compares minimum cut to 2-approximate minimum k-cut. Can 2-approximate k-cuts be computed as fast as the minimum cut, in Õ(m) randomized time? We give a deterministic approximation algorithm that computes (2 + ε)-minimum k-cuts in O(m log^3 n / ε^2) time, via a (1 + ε)-approximation for an LP relaxation of k-cut.
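    The Saran-Vazirani style reduction to O(k) minimum cuts can be sketched greedily: repeatedly split the component with the cheapest global minimum cut until k components remain. The version below is illustrative only, using networkx's Stoer-Wagner min cut rather than the paper's LP-based (2 + ε) algorithm.

```python
import networkx as nx

# A sketch of the greedy (2 - 2/k)-approximation via O(k) minimum cuts:
# repeatedly split the component whose global min cut is cheapest until the
# graph has k components.  Illustrative only; the paper's deterministic
# (2 + eps) algorithm works through an LP relaxation instead.

def greedy_k_cut(G, k):
    components = [G.copy()]
    total_weight = 0.0
    while len(components) < k:
        best = None
        for i, C in enumerate(components):
            if C.number_of_nodes() < 2:
                continue                    # nothing left to split here
            w, (side_a, side_b) = nx.stoer_wagner(C, weight="weight")
            if best is None or w < best[0]:
                best = (w, i, side_a, side_b)
        if best is None:
            break                           # fewer than k nodes in total
        w, i, side_a, side_b = best
        C = components.pop(i)
        total_weight += w                   # weight of the edges just cut
        components.append(C.subgraph(side_a).copy())
        components.append(C.subgraph(side_b).copy())
    return total_weight, components

if __name__ == "__main__":
    G = nx.cycle_graph(8)                   # optimal 3-cut weight is 3
    nx.set_edge_attributes(G, 1.0, "weight")
    weight, parts = greedy_k_cut(G, 3)
    print(weight, [sorted(P.nodes) for P in parts])
```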