    Non-Clairvoyant Batch Sets Scheduling: Fairness is Fair enough

    Scheduling questions arise naturally in many areas, including operating system design and compilation. In real-life systems, the characteristics of the jobs (such as release time and processing time) are usually unknown and unpredictable beforehand. The system is typically unaware of the remaining work in each job, or of a job's ability to take advantage of more resources. Following these observations, we adopt the job model of Edmonds et al (2000, 2003), in which jobs go through a sequence of phases; each phase consists of a certain quantity of work and a speed-up function that models how the phase takes advantage of the number of processors it receives. We consider the non-clairvoyant online setting where a collection of jobs arrives at time 0, and we study the setflowtime metric introduced by Robert et al (2007): the goal is to minimize the sum of the completion times of the sets, where a set is completed when all of its jobs are done. If the input consists of a single set of jobs, this is simply the makespan of the jobs; if it consists of a collection of singleton sets, it is simply the flowtime of the jobs. We show that the non-clairvoyant strategy EQUIoEQUI, which evenly splits the available processors among the still unserved sets and then evenly splits each set's share among its still uncompleted jobs, achieves a competitive ratio of (2+√3+o(1)) ln n / ln ln n for setflowtime minimization, and that this is asymptotically optimal (up to a constant factor), where n is the size of the largest set. For makespan minimization, we show that the non-clairvoyant strategy EQUI achieves a competitive ratio of (1+o(1)) ln n / ln ln n, which is again asymptotically optimal.
    Comment: 12 pages, 1 figure
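The two-level equal split at the heart of EQUIoEQUI can be sketched in a few lines; the function name and data layout below are illustrative choices, not from the paper:

```python
# Minimal sketch of the EQUIoEQUI allocation rule: split processors
# evenly among unserved sets (EQUI), then evenly among the still
# uncompleted jobs of each set (EQUI again).
def equi_o_equi(total_processors, sets):
    """sets: list of lists; each inner list holds the still-uncompleted
    jobs of one set (an empty list means the set is already served)."""
    unserved = [s for s in sets if s]
    if not unserved:
        return {}
    per_set = total_processors / len(unserved)   # EQUI across sets
    alloc = {}
    for s in unserved:
        per_job = per_set / len(s)               # EQUI across the set's jobs
        for job in s:
            alloc[job] = per_job
    return alloc

print(equi_o_equi(12, [["a", "b"], ["c"], []]))
# {'a': 3.0, 'b': 3.0, 'c': 6.0}
```

Note how the singleton set {c} receives 6 processors for its one job, while the two jobs of {a, b} share the other 6, matching the set-level fairness the strategy aims for.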

    Rejoinder: Harold Jeffreys's Theory of Probability Revisited

    We are grateful to all discussants of our revisitation for their strong support of our enterprise and for their overall agreement with our perspective. Further discussions with them and other leading statisticians showed that the legacy of Theory of Probability is alive and lasting. [arXiv:0804.3173]
    Comment: Published at http://dx.doi.org/10.1214/09-STS284REJ in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org)

    Generalized Quantum Search with Parallelism

    We generalize Grover's unstructured quantum search algorithm to enable it to use an arbitrary starting superposition and an arbitrary unitary matrix simultaneously. We derive an exact formula for the probability of the generalized Grover's algorithm succeeding after n iterations, and show that the fully generalized formula reduces to the special cases considered by previous authors. We then use the generalized formula to determine the optimal strategy for using the unstructured quantum search algorithm. On average the optimal strategy is about 12% better than the naive use of Grover's algorithm. The speedup obtained is not dramatic, but it illustrates that a hybrid use of quantum and classical computing techniques can yield a performance better than either alone. We extend the analysis to the case of a society of k quantum searches acting in parallel, derive an analytic formula that connects the degree of parallelism with the optimal strategy for k-parallel quantum search, and then derive the formula for the expected speed of k-parallel quantum search.
    Comment: 14 pages, 2 figures
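As a concrete special case of the exact success formula discussed above: for standard Grover search from the uniform superposition over N items with M marked, the success probability after k iterations is sin²((2k+1)θ) where sin θ = √(M/N). A minimal sketch (variable names are ours, and this is the textbook special case, not the paper's generalized formula):

```python
import math

def grover_success(N, M, k):
    """Success probability of standard Grover search after k iterations,
    starting from the uniform superposition (M marked items out of N)."""
    theta = math.asin(math.sqrt(M / N))
    return math.sin((2 * k + 1) * theta) ** 2

def optimal_iterations(N, M):
    """Iteration count that brings (2k+1)*theta closest to pi/2."""
    theta = math.asin(math.sqrt(M / N))
    return round(math.pi / (4 * theta) - 0.5)

N, M = 1024, 1
k = optimal_iterations(N, M)
print(k, grover_success(N, M, k))  # success probability close to 1
```

The "optimal strategy" question studied in the paper arises because running slightly fewer iterations and restarting on failure can have a better expected cost than always running to the peak of this sinusoid.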

    A Fast Gradient Method for Nonnegative Sparse Regression with Self Dictionary

    A nonnegative matrix factorization (NMF) can be computed efficiently under the separability assumption, which asserts that all the columns of the given input data matrix belong to the cone generated by a (small) subset of them. The provably most robust methods to identify these conic basis columns are based on nonnegative sparse regression with self dictionaries, and require the solution of large-scale convex optimization problems. In this paper we study a particular nonnegative sparse regression model with self dictionary. As opposed to previously proposed models, this model yields a smooth optimization problem where the sparsity is enforced through linear constraints. We show that the Euclidean projection onto the polyhedron defined by these constraints can be computed efficiently, and propose a fast gradient method to solve our model. We compare our algorithm with several state-of-the-art methods on synthetic data sets and real-world hyperspectral images.
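The paper's projection is onto the specific polyhedron defined by its linear sparsity constraints; as a stand-in of the same flavour (not the authors' method), here is the classic sort-based Euclidean projection onto the unit simplex, a building block that such projected gradient schemes rely on:

```python
# Euclidean projection of y onto the unit simplex
# {x : x_i >= 0, sum_i x_i = 1}, via the standard O(n log n)
# sort-and-threshold algorithm.
def project_simplex(y):
    u = sorted(y, reverse=True)
    css, theta = 0.0, 0.0
    for i, ui in enumerate(u):
        css += ui
        if ui * (i + 1) > css - 1:       # condition holds up to index rho
            theta = (css - 1) / (i + 1)  # threshold from the largest such i
    return [max(v - theta, 0.0) for v in y]

x = project_simplex([0.5, 1.2, -0.3])
print(x)  # entries are nonnegative and sum to 1
```

Within a fast (Nesterov-accelerated) gradient method, each iteration takes a gradient step on the smooth objective and then applies a projection of this kind to stay feasible.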

    Micromechanical investigation of the influence of defects in high cycle fatigue

    This study aims to analyse the influence of geometrical defects (notches and holes) on the high cycle fatigue behaviour of an electrolytic copper, based on finite element simulations of 2D polycrystalline aggregates. In order to investigate the role of each source of anisotropy on the mechanical response at the grain scale, three different material constitutive models are assigned successively to the grains: isotropic elasticity, cubic elasticity, and crystal plasticity in addition to the cubic elasticity. The significant influence of the elastic anisotropy on the mechanical response of the grains is highlighted. For smooth microstructures, the crystal plasticity has only a slight effect compared with the influence of the cubic elasticity; for notched microstructures, however, the influence of plasticity is no longer negligible. Finally, the predictions of three fatigue criteria are analysed, and their ability to predict the effect of defect size on the fatigue strength is evaluated through a comparison with experimental data from the literature.