
    Cleanroom software development

    The 'cleanroom' software development process is a technical and organizational approach to developing software with certifiable reliability. Key ideas behind the process are well-structured software specifications, randomized testing methods, and the introduction of statistical controls; but the main point is to deny defects entry during the development of the software. This latter point suggests the term 'cleanroom', by analogy with the defect-prevention controls used in the manufacturing of high-technology hardware. In the 'cleanroom', the entire software development process is embedded within a formal statistical design, in contrast to executing selected tests and appealing to the randomness of operational settings for drawing statistical inferences. Instead, random testing is introduced as a part of the statistical design itself, so that when development and testing are completed, statistical inferences can be made about the operation of the system.
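
    The statistical-design idea above lends itself to a small illustration: tests are drawn at random from an operational profile, and a reliability statement is inferred from the outcomes. The sketch below is a minimal Python rendering of that idea, not the cleanroom process itself; the scenario names and probabilities are invented, and the zero-failure bound uses the standard "rule of three" approximation.

```python
import random

# Hypothetical operational profile: usage scenarios and their estimated
# probabilities in the field (names and numbers are illustrative only).
OPERATIONAL_PROFILE = {
    "create_record": 0.50,
    "query_record": 0.35,
    "delete_record": 0.10,
    "bulk_import": 0.05,
}

def draw_test_case(rng: random.Random) -> str:
    """Sample one usage scenario according to the operational profile."""
    scenarios = list(OPERATIONAL_PROFILE)
    weights = [OPERATIONAL_PROFILE[s] for s in scenarios]
    return rng.choices(scenarios, weights=weights, k=1)[0]

def certify(run_test, n_tests: int, seed: int = 0) -> float:
    """Run n statistically selected tests. If all pass, return an
    approximate 95% upper bound on the per-use failure probability
    (the 'rule of three': about 3/n when zero failures are observed)."""
    rng = random.Random(seed)
    for _ in range(n_tests):
        scenario = draw_test_case(rng)
        if not run_test(scenario):
            raise AssertionError(f"failure observed in scenario {scenario!r}")
    return 3.0 / n_tests

if __name__ == "__main__":
    # Stand-in system under test that always succeeds.
    bound = certify(lambda scenario: True, n_tests=1000)
    print(f"95% upper bound on per-use failure probability: {bound:.4f}")
```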

    Spatial Mixing of Coloring Random Graphs

    We study the strong spatial mixing (decay of correlation) property of proper $q$-colorings of the random graph $G(n, d/n)$ with fixed $d$. The strong spatial mixing of coloring and related models has been extensively studied on graphs with bounded maximum degree. However, for typical classes of graphs with bounded average degree, such as $G(n, d/n)$, an easy counterexample shows that colorings do not exhibit strong spatial mixing with high probability. Nevertheless, we show that for $q \ge \alpha d + \beta$ with $\alpha > 2$ and sufficiently large $\beta = O(1)$, with high probability proper $q$-colorings of the random graph $G(n, d/n)$ exhibit strong spatial mixing with respect to an arbitrarily fixed vertex. This is the first strong spatial mixing result for colorings of graphs with unbounded maximum degree. Our analysis of strong spatial mixing establishes a block-wise correlation decay instead of the standard point-wise decay, which may be of interest by itself, especially for graphs with unbounded degree.
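
    As a concrete companion to the abstract, the sketch below samples $G(n, d/n)$ and runs single-site Glauber dynamics on proper colorings, the standard chain whose behavior correlation-decay results inform. It is an empirical probe under an assumption the paper does not need: the number of colors is taken above the realized maximum degree, so that a greedy initial coloring exists and every update has a free color.

```python
import random

def erdos_renyi(n: int, d: float, rng: random.Random):
    """Sample G(n, d/n): each edge present independently with prob. d/n."""
    p = d / n
    adj = [[] for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].append(v)
                adj[v].append(u)
    return adj

def glauber_colorings(adj, q: int, steps: int, rng: random.Random):
    """Glauber dynamics on proper q-colorings: repeatedly pick a vertex
    and recolor it uniformly among colors unused by its neighbours.
    Requires q > maximum degree so every step has a free color."""
    n = len(adj)
    color = [-1] * n
    for v in range(n):                      # greedy initial proper coloring
        used = {color[u] for u in adj[v] if color[u] != -1}
        color[v] = next(c for c in range(q) if c not in used)
    for _ in range(steps):
        v = rng.randrange(n)
        used = {color[u] for u in adj[v]}
        color[v] = rng.choice([c for c in range(q) if c not in used])
    return color

if __name__ == "__main__":
    rng = random.Random(1)
    n, d = 200, 3.0
    adj = erdos_renyi(n, d, rng)
    q = max(len(nb) for nb in adj) + 2      # comfortably above max degree
    color = glauber_colorings(adj, q, steps=50 * n, rng=rng)
    assert all(color[u] != color[v] for u in range(n) for v in adj[u])
    print(f"sampled a proper {q}-coloring of a G({n}, {d}/{n}) instance")
```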

    Rapid Mixing for Lattice Colorings with Fewer Colors

    We provide an optimally mixing Markov chain for 6-colorings of the square lattice on rectangular regions with free, fixed, or toroidal boundary conditions. This implies that the uniform distribution on the set of such colorings has strong spatial mixing, so that the 6-state Potts antiferromagnet has a finite correlation length and a unique Gibbs measure at zero temperature. Four and five are now the only remaining values of $q$ for which it is not known whether there exists a rapidly mixing Markov chain for $q$-colorings of the square lattice. (Appeared in Proc. LATIN 2004; to appear in JSTA.)
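
    A standard way to probe rapid mixing empirically is to run two coupled copies of a chain from different starting colorings with shared randomness and watch for coalescence. The sketch below does this for Metropolis-style recoloring of 6-colorings on a torus; it is a heuristic check under invented parameters (grid size, seed), not the paper's chain or proof, and coalescence of two copies is evidence rather than a guarantee of mixing.

```python
import random

def torus_neighbors(n: int):
    """4-neighbourhoods of the n x n square lattice with toroidal wrap."""
    return {(i, j): [((i + 1) % n, j), ((i - 1) % n, j),
                     (i, (j + 1) % n), (i, (j - 1) % n)]
            for i in range(n) for j in range(n)}

def metropolis_step(coloring, nb, v, c):
    """Propose recoloring vertex v with color c; accept iff still proper."""
    if all(coloring[u] != c for u in nb[v]):
        coloring[v] = c

def coalescence_time(n=8, q=6, seed=0, max_steps=2_000_000):
    """Run two coupled chains (same vertex and color proposals) from
    disjoint stripe colorings; return the step at which they agree."""
    rng = random.Random(seed)
    nb = torus_neighbors(n)            # n must be even for proper stripes
    sites = list(nb)
    x = {(i, j): (i + j) % 2 for (i, j) in sites}      # colors {0, 1}
    y = {(i, j): (i + j) % 2 + 2 for (i, j) in sites}  # colors {2, 3}
    for t in range(1, max_steps + 1):
        v = rng.choice(sites)          # shared randomness drives both copies
        c = rng.randrange(q)
        metropolis_step(x, nb, v, c)
        metropolis_step(y, nb, v, c)
        if x == y:
            return t
    return None

if __name__ == "__main__":
    print("coupled 6-coloring chains coalesced after",
          coalescence_time(), "steps")
```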

    The energy dilemma and its impact on air transportation

    The dimensions of the energy situation are discussed in relation to air travel. Energy conservation, fuel consumption, and combustion efficiency are examined, as well as a proposal for subsonic aircraft using hydrogen fuel.

    Quantum speedup of classical mixing processes

    Most approximation algorithms for #P-complete problems (e.g., evaluating the permanent of a matrix or the volume of a polytope) work by reduction to the problem of approximate sampling from a distribution $\pi$ over a large set $S$. This problem is solved using the Markov chain Monte Carlo method: a sparse, reversible Markov chain $P$ on $S$ with stationary distribution $\pi$ is run to near equilibrium. The running time of this random walk algorithm, the so-called mixing time of $P$, is $O(\delta^{-1} \log 1/\pi_*)$, as shown by Aldous, where $\delta$ is the spectral gap of $P$ and $\pi_*$ is the minimum value of $\pi$. A natural question is whether a speedup of this classical method to $O(\sqrt{\delta^{-1}} \log 1/\pi_*)$, the diameter of the graph underlying $P$, is possible using quantum walks. We provide evidence for this possibility using quantum walks that decohere under repeated randomized measurements. We show: (a) decoherent quantum walks always mix, just like their classical counterparts; (b) the mixing time is a robust quantity, essentially invariant under any smooth form of decoherence; and (c) the mixing time of the decoherent quantum walk on a periodic lattice $\mathbb{Z}_n^d$ is $O(nd \log d)$, which is indeed $O(\sqrt{\delta^{-1}} \log 1/\pi_*)$ and is asymptotically no worse than the diameter of $\mathbb{Z}_n^d$ (the obvious lower bound) up to at most a logarithmic factor.
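
    The two bounds in the abstract can be made concrete numerically. The sketch below builds the lazy random walk on the cycle $\mathbb{Z}_n$ (the $d = 1$ lattice), computes its spectral gap $\delta$ by eigendecomposition, and compares the classical scale $\delta^{-1} \log 1/\pi_*$ with the conjectured quantum scale $\sqrt{\delta^{-1}} \log 1/\pi_*$ and the diameter; the specific sizes are illustrative.

```python
import numpy as np

def lazy_cycle_walk(n: int) -> np.ndarray:
    """Transition matrix of the lazy simple random walk on the cycle Z_n
    (holding probability 1/2 keeps the spectrum nonnegative)."""
    P = np.zeros((n, n))
    for i in range(n):
        P[i, i] = 0.5
        P[i, (i + 1) % n] += 0.25
        P[i, (i - 1) % n] += 0.25
    return P

for n in (64, 256, 1024):
    P = lazy_cycle_walk(n)
    eigs = np.sort(np.linalg.eigvalsh(P))   # P is symmetric here
    delta = 1.0 - eigs[-2]                  # spectral gap
    log_term = np.log(n)                    # pi is uniform, so pi_* = 1/n
    classical = log_term / delta            # O(delta^-1 log 1/pi_*)
    quantum = log_term / np.sqrt(delta)     # conjectured quantum scaling
    print(f"n={n:5d}  gap={delta:.2e}  classical~{classical:.1e}  "
          f"quantum~{quantum:.1e}  diameter={n // 2}")
```

    Since the gap of the lazy cycle walk scales as $\Theta(1/n^2)$, the classical bound grows like $n^2 \log n$ while the square-root speedup tracks the diameter $n/2$ up to the logarithmic factor, matching the abstract's claim for $d = 1$.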

    Technical management techniques for identification and control of industrial safety and pollution hazards

    Constructive recommendations are suggested for pollution problems arising from offshore energy-resource industries on the outer continental shelf. Technical management techniques for pollution identification and control offer possible applications to space engineering and management.

    Ablation debris control by means of closed thick film filtered water immersion

    The performance of debris control for laser ablation by means of open immersion techniques has been shown to be limited by the effects of flow surface ripple on the beam and by the loss of ablation plume pressure through splashing of the immersion fluid. To eliminate these issues, a closed technique has been developed which ensures a controlled geometry for both optical interfaces of the flowing liquid film. This prevents splashing, ensures repeatable machining conditions, and allows the liquid flow velocity to be controlled. To investigate the performance benefits of this closed immersion technique, bisphenol A polycarbonate samples were machined using filtered water at a number of flow velocities. The results demonstrate the efficacy of the closed immersion technique: a 93% decrease in debris is produced when machining under closed filtered-water immersion; the average debris particle size becomes larger, with equal proportions of small and medium-sized debris being produced when laser machining under closed flowing filtered-water immersion; and large debris is displaced further by a given flow velocity than smaller debris, showing that flow turbulence in the duct has more impact on smaller debris. Low flow velocities were found to be less effective than high flow velocities at controlling where laser-ablation debris is deposited, but excessive flow velocities resulted in turbulence-driven deposition. This work is of interest to the laser micromachining community and may aid in the manufacture of 2.5D laser-etched patterns covering large-area wafers, and could be applied to a range of wavelengths and laser types.

    Counting approximately-shortest paths in directed acyclic graphs

    Given a directed acyclic graph with positive edge weights, two vertices s and t, and a threshold weight L, we present a fully polynomial-time approximation scheme for the problem of counting the s-t paths of length at most L. We extend the algorithm to the case of two (or more) instances of the same problem. That is, given two graphs that have the same vertices and edges and differ only in edge weights, and given two threshold weights L_1 and L_2, we show how to approximately count the s-t paths that have length at most L_1 in the first graph and length at most L_2 in the second graph. We believe that our algorithms should find application in counting approximate solutions of related optimization problems, where finding an (optimum) solution can be reduced to the computation of a shortest path in a purpose-built auxiliary graph.
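
    To ground the counting problem, here is an exact dynamic program for the special case of small positive integer weights; it counts s-t paths of weight at most L in O(|E| L) time. The paper's FPTAS addresses the general case, which this sketch does not; one can loosely think of such a scheme as controlling the error incurred when a DP of this shape is run on rounded weights. The graph in the usage example is invented.

```python
from collections import defaultdict

def count_paths_up_to(edges, s, t, L):
    """Exact count of s-t paths of total weight <= L in a DAG with
    positive integer edge weights, via dynamic programming over a
    topological order (Kahn's algorithm)."""
    adj = defaultdict(list)
    indeg = defaultdict(int)
    nodes = set()
    for u, v, w in edges:
        adj[u].append((v, w))
        indeg[v] += 1
        nodes.update((u, v))
    order, queue = [], [u for u in nodes if indeg[u] == 0]
    while queue:
        u = queue.pop()
        order.append(u)
        for v, _ in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    # count[v][l] = number of s-v paths of weight exactly l
    count = {u: [0] * (L + 1) for u in nodes}
    count[s][0] = 1
    for u in order:
        for v, w in adj[u]:
            for l in range(L - w + 1):
                count[v][l + w] += count[u][l]
    return sum(count[t])

# Tiny example: two s-t paths of weight 3, one more of weight 4.
edges = [("s", "a", 1), ("a", "t", 2), ("s", "b", 2), ("b", "t", 1),
         ("s", "t", 4)]
print(count_paths_up_to(edges, "s", "t", 3))  # -> 2
print(count_paths_up_to(edges, "s", "t", 4))  # -> 3
```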

    Fast Genome-Wide QTL Association Mapping on Pedigree and Population Data

    Since most analysis software for genome-wide association studies (GWAS) currently exploits only unrelated individuals, there is a need for efficient applications that can handle general pedigree data or mixtures of both population and pedigree data. Even data sets thought to consist of only unrelated individuals may include cryptic relationships that can lead to false positives if not discovered and controlled for. In addition, family designs possess compelling advantages. They are better equipped to detect rare variants, control for population stratification, and facilitate the study of parent-of-origin effects. Pedigrees selected for extreme trait values often segregate a single gene with strong effect. Finally, many pedigrees are available as an important legacy from the era of linkage analysis. Unfortunately, pedigree likelihoods are notoriously hard to compute. In this paper we re-examine the computational bottlenecks and implement ultra-fast pedigree-based GWAS analysis. Kinship coefficients can either be based on explicitly provided pedigrees or automatically estimated from dense markers. Our strategy (a) works for random sample data, pedigree data, or a mix of both; (b) entails no loss of power; (c) allows for any number of covariate adjustments, including correction for population stratification; (d) allows for testing SNPs under additive, dominant, and recessive models; and (e) accommodates both univariate and multivariate quantitative traits. On a typical personal computer (6 CPU cores at 2.67 GHz), analyzing a univariate HDL (high-density lipoprotein) trait from the San Antonio Family Heart Study (935,392 SNPs on 1357 individuals in 124 pedigrees) takes less than 2 minutes and 1.5 GB of memory. Complete multivariate QTL analysis of the three time points of the longitudinal HDL multivariate trait takes less than 5 minutes and 1.5 GB of memory.
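
    The abstract mentions estimating kinship automatically from dense markers. A common marker-based stand-in is the genetic relationship matrix (GRM) computed from centered genotypes (VanRaden-style); kinship coefficients are then roughly half the GRM entries. The sketch below is one such estimate, not the paper's implementation, and the simulated genotypes are illustrative.

```python
import numpy as np

def grm_from_markers(G: np.ndarray) -> np.ndarray:
    """Estimate a genetic relationship matrix from a genotype matrix G
    (individuals x SNPs, entries 0/1/2 counting minor alleles): center
    by twice the allele frequency and scale by the total expected
    variance. Kinship coefficients are approximately GRM / 2."""
    p = G.mean(axis=0) / 2.0                 # per-SNP allele frequencies
    keep = (p > 0) & (p < 1)                 # drop monomorphic SNPs
    G, p = G[:, keep], p[keep]
    Z = G - 2.0 * p                          # centered genotypes
    denom = 2.0 * np.sum(p * (1.0 - p))
    return (Z @ Z.T) / denom

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    G = rng.binomial(2, 0.3, size=(100, 5000))  # 100 unrelated individuals
    K = grm_from_markers(G)
    print(np.diag(K)[:5])    # ~1 on the diagonal for unrelated samples
    print(K[0, 1:4])         # ~0 off the diagonal
```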