
    Path methods for strong shift equivalence of positive matrices

    In the early 1990s, Kim and Roush developed path methods for establishing strong shift equivalence (SSE) of positive matrices over a dense subring U of the real numbers R. This paper gives a detailed, unified and generalized presentation of these path methods. New arguments which address arbitrary dense subrings U of R are used to show that for any dense subring U of R, positive matrices over U which have just one nonzero eigenvalue and which are strong shift equivalent over U must be strong shift equivalent over U_+. In addition, we show that positive real matrices on a path of shift equivalent positive real matrices are SSE over R_+; that positive rational matrices which are SSE over R_+ must be SSE over Q_+; and that for any dense subring U of R, within the set of positive matrices over U which are conjugate over U to a given matrix, there are only finitely many SSE-U_+ classes.
    Comment: This version adds a 3-part program for studying SSE over the reals. One part is handled by the arXiv post "Strong shift equivalence and algebraic K-theory". This version is the author's version of the paper published in the Kim memorial volume. From that volume, my short life story of Kim (and more) is on my web page http://www.math.umd.edu/~mboyle/papers/index.htm
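    For readers outside symbolic dynamics, these are the standard (Williams) notions the abstract refers to, stated in generic notation; the matrices R, S, the lag ℓ, and the chain A_0, ..., A_n below are generic symbols, not taken from the paper:

```latex
% Standard definitions, stated generically; not quoted from the paper.
% A and B are square matrices with entries in U.
\begin{align*}
\text{elementary SSE over } U:&\quad A = RS \ \text{ and } \ B = SR
    \ \text{ for some matrices } R, S \text{ over } U;\\
\text{SSE over } U:&\quad \text{a finite chain of elementary SSEs }
    A = A_0 \sim A_1 \sim \dots \sim A_n = B;\\
\text{shift equivalence (SE) over } U \text{ with lag } \ell:&\quad
    AR = RB,\quad SA = BS,\quad A^{\ell} = RS,\quad B^{\ell} = SR.
\end{align*}
```

    SSE over U_+ means the same chain condition with every matrix involved having entries in the nonnegative elements U_+ of U, which is the stronger conclusion the results above deliver.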

    Synthetic Biology: Caught Between Property Rights, the Public Domain, and the Commons

    Synthetic biologists aim to make biology a true engineering discipline. In the same way that electrical engineers rely on standard capacitors and resistors, or computer programmers rely on modular blocks of code, synthetic biologists wish to create an array of modular biological parts that can be readily synthesized and mixed together in different combinations. Synthetic biology has already produced important results, including more accurate AIDS tests and the possibility of unlimited supplies of previously scarce drugs for malaria. Proponents hope to use synthetic organisms to produce not only medically relevant chemicals but also a large variety of industrial materials, including ecologically friendly biofuels such as hydrogen and ethanol. The relationship of synthetic biology to intellectual property law has, however, been largely unexplored. Two key issues deserve further attention. First, synthetic biology, which operates at the confluence of biotechnology and computation, presents a particularly revealing example of a difficulty that the law has frequently faced over the last 30 years -- the assimilation of a new technology into the conceptual limits around existing intellectual property rights, with possible damage to both in the process. There is reason to fear that tendencies in the way that the law has handled software on the one hand and biotechnology on the other could come together in a perfect storm that will impede the potential of the technology. Second, synthetic biology raises with remarkable clarity an issue that has seemed of only theoretical interest until now. It points out a tension between different methods of creating openness. On the one hand, we have intellectual property law's insistence that certain types of material remain in the public domain, outside the world of property. On the other, we have the attempt by individuals to use intellectual property rights to create a commons, just as developers of free and open source software use the leverage of software copyrights to impose requirements of openness on future programmers, requirements greater than those attaching to a public domain work.

    Observations of QSO J2233-606 in the Southern Hubble Deep Field

    The Hubble Deep Field South (HDF-S) HST observations are expected to begin in October 1998. We present a composite spectrum of the QSO in the HDF-S field covering UV/optical/near-IR wavelengths, obtained by combining data from the ANU 2.3m Telescope with STIS on the HST. This intermediate resolution spectrum covers the range 1600-10000 Å and allows us to derive some basic information on the intervening absorption systems which will be important in planning future higher resolution studies of this QSO.
    Comment: 9 pages and 2 figures, submitted to ApJ.

    Hardware and software status of QCDOC

    QCDOC is a massively parallel supercomputer whose processing nodes are based on an application-specific integrated circuit (ASIC). This ASIC was custom-designed so that crucial lattice QCD kernels achieve an overall sustained performance of 50% of peak on machines with tens of thousands of nodes. This strong scalability, together with low power consumption and a price/performance ratio of $1 per sustained MFlops, enables QCDOC to attack the most demanding lattice QCD problems. The first ASICs became available in June of 2003, and the testing performed so far has shown all systems functioning according to specification. We review the hardware and software status of QCDOC and present performance figures obtained on real hardware as well as in simulation.
    Comment: Lattice 2003 (machine), 6 pages, 5 figures.
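    As a rough back-of-the-envelope reading of those headline figures (a sketch only; the per-node peak rate below is an assumed placeholder, not a number quoted in the abstract):

```python
# Back-of-the-envelope reading of the quoted QCDOC figures.
# ASSUMPTION: peak_per_node_mflops is a placeholder, not a value from the paper.
peak_per_node_mflops = 1000          # assumed ~1 GFlops peak per ASIC node
sustained_fraction = 0.50            # "overall sustained performance of 50%" (of peak)
nodes = 10_000                       # order of magnitude quoted for large machines
price_per_sustained_mflops = 1.0     # "$1 per sustained MFlops"

sustained_mflops = peak_per_node_mflops * sustained_fraction * nodes
machine_cost = sustained_mflops * price_per_sustained_mflops

print(f"sustained: {sustained_mflops / 1e6:.1f} TFlops, "
      f"implied cost: ${machine_cost:,.0f}")
```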

    Hadronic contribution to the muon g-2: a theoretical determination

    The leading order hadronic contribution to the muon g-2, $a_{\mu}^{HAD}$, is determined entirely from theory using an approach based on Cauchy's theorem in the complex squared energy s-plane. This is possible after fitting the integration kernel in $a_{\mu}^{HAD}$ with a simpler function of $s$. The integral determining $a_{\mu}^{HAD}$ in the light-quark region is then split into a low energy and a high energy part, the latter given by perturbative QCD (PQCD). The low energy integral involving the fit function to the integration kernel is determined by derivatives of the vector correlator at the origin, plus a contour integral around a circle calculable in PQCD. These derivatives are calculated using hadronic models in the light-quark sector. A similar procedure is used in the heavy-quark sector, except that now everything is calculable in PQCD, thus becoming the first entirely theoretical calculation of this contribution. Using the dual resonance model realization of Large $N_c$ QCD to compute the derivatives of the correlator leads to agreement with the experimental value of $a_\mu$. Accuracy, though, is currently limited by the model-dependent calculation of derivatives of the vector correlator at the origin. Future improvements should come from more accurate chiral perturbation theory and/or lattice QCD information on these derivatives, allowing for this method to be used to determine $a_{\mu}^{HAD}$ accurately entirely from theory, independently of any hadronic model.
    Comment: Several additional clarifying paragraphs have been added. 1/N_c corrections have been estimated. No change in the result.
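    In generic form, the objects involved are the dispersive representation of $a_{\mu}^{HAD}$ and the Cauchy-theorem trade that converts the low-energy cut integral into residues at the origin plus an integral over a circle of radius $s_0$. The sketch below uses common conventions for the normalization, the sign, and the contour orientation, and does not reproduce the paper's actual kernel fit:

```latex
% Schematic only; normalization, signs and contour orientation follow common
% conventions, and K_fit stands for a generic fit to the exact kernel K(s).
\begin{align*}
a_{\mu}^{HAD} &= \frac{\alpha^{2}}{3\pi^{2}}
  \int_{s_{th}}^{\infty} \frac{ds}{s}\, K(s)\, R(s),
  \qquad R(s) \propto \operatorname{Im}\Pi(s),\\[4pt]
\frac{1}{\pi}\int_{s_{th}}^{s_{0}} ds\, K_{\mathrm{fit}}(s)\,\operatorname{Im}\Pi(s)
 &= \operatorname*{Res}_{s=0}\big[K_{\mathrm{fit}}(s)\,\Pi(s)\big]
   + \frac{1}{2\pi i}\oint_{|s|=s_{0}} ds\, K_{\mathrm{fit}}(s)\,\Pi(s).
\end{align*}
```

    When $K_{\mathrm{fit}}$ contains inverse powers of $s$, the residue term is what brings in the derivatives of the correlator $\Pi(s)$ at the origin, while the circle integral at $|s| = s_0$ is the piece evaluated in PQCD.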

    High-resolution mapping of cancer cell networks using co-functional interactions.

    Powerful new technologies for perturbing genetic elements have recently expanded the study of genetic interactions in model systems ranging from yeast to human cell lines. However, technical artifacts can confound signal across genetic screens and limit the immense potential of parallel screening approaches. To address this problem, we devised a novel PCA-based method for correcting genome-wide screening data, bolstering the sensitivity and specificity of detection for genetic interactions. Applying this strategy to a set of 436 whole genome CRISPR screens, we report more than 1.5 million pairs of correlated "co-functional" genes that provide finer-scale information about cell compartments, biological pathways, and protein complexes than traditional gene sets. Lastly, we employed a gene community detection approach to implicate core genes for cancer growth and compress signal from functionally related genes in the same community into a single score. This work establishes new algorithms for probing cancer cell networks and motivates the acquisition of further CRISPR screen data across diverse genotypes and cell types to further resolve complex cellular processes.
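    The flavor of the correction step can be illustrated with a minimal sketch of generic PCA-based artifact removal followed by correlation scoring. This is not the authors' pipeline; the matrix sizes, the number of removed components, and the correlation cutoff below are illustrative assumptions.

```python
# Minimal sketch of PCA-based artifact correction for screen data.
# NOT the authors' pipeline: sizes, component count, and cutoff are assumptions.
import numpy as np
from sklearn.decomposition import PCA

def correct_and_correlate(scores: np.ndarray, n_remove: int = 4):
    """scores: genes x screens matrix of fitness/essentiality scores."""
    pca = PCA(n_components=n_remove)
    top = pca.fit_transform(scores)                   # projections onto dominant components
    corrected = scores - pca.inverse_transform(top)   # strip dominant, artifact-like signal
    corr = np.corrcoef(corrected)                     # gene-gene correlation of residual profiles
    return corrected, corr

# Toy usage: random data standing in for genome-wide CRISPR screen scores.
rng = np.random.default_rng(0)
toy_scores = rng.normal(size=(1000, 436))             # 1,000 genes x 436 screens
_, corr = correct_and_correlate(toy_scores)
candidate_pairs = np.argwhere(np.triu(corr, k=1) > 0.5)   # putative "co-functional" pairs
print(f"{len(candidate_pairs)} correlated pairs above the illustrative 0.5 cutoff")
```

    In practice, deciding how many components reflect technical artifacts rather than shared biology is itself a modeling choice, which is part of what makes correction methods of this kind non-trivial.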

    The Major Sources of the Cosmic Reionizing Background at z ~ 6

    In this paper, we address which sources contributed most of the reionizing photons. Our argument assumes that reionization ended around z ~ 6 and that it was a relatively quick process, i.e., that there was a non-negligible fraction of neutral hydrogen in the Universe at somewhat earlier epochs. Starting from our earlier estimate of the luminosity function (LF) of galaxies at z ~ 6, we quantitatively show that the major sources of reionization are most likely galaxies with L < L*. Our approach allows us to put stronger constraints on the LF of galaxies at z ~ 6. To have the Universe completely ionized at this redshift, the faint-end slope of the LF should be steeper than α = -1.6, which is the value measured at lower redshifts (z ~ 3), unless either the normalization (Φ*) of the LF or the clumping factor of the ionized hydrogen has been significantly underestimated. If Φ* is actually lower than what we assumed by a factor of two, a steep slope close to α = -2.0 is required. Our LF predicts a total of 50-80 z ~ 6 galaxies in the HST Ultra Deep Field (UDF) to a depth of AB = 28.4 mag, which can be used to constrain both Φ* and α. We conclude that the least luminous galaxies existing at this redshift should reach as low as some critical luminosity in order to accumulate the entire reionizing photon budget. On the other hand, the existence of significant amounts of neutral hydrogen at slightly earlier epochs, e.g. z ~ 7, requires that the least luminous galaxies should not be fainter than another critical value (i.e., the LF should cut off at this point).
    Comment: ApJL in press (Jan 1, 2004 issue).
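    The photon-budget argument turns on how steeply the LF rises at the faint end. With the standard Schechter parametrization typically used in such studies (written generically below; the cut L_min, the luminosity density ρ, and the incomplete gamma function are bookkeeping symbols, not quantities quoted from the paper), the luminosity density contributed by galaxies brighter than L_min is:

```latex
% Generic Schechter-function bookkeeping, not the paper's specific fit.
\begin{align*}
\phi(L)\,dL &= \phi^{*}\left(\frac{L}{L^{*}}\right)^{\alpha}
  e^{-L/L^{*}}\,\frac{dL}{L^{*}},\\[4pt]
\rho(L_{\min}) &= \int_{L_{\min}}^{\infty} L\,\phi(L)\,dL
  = \phi^{*} L^{*}\,\Gamma\!\left(\alpha + 2,\ \frac{L_{\min}}{L^{*}}\right).
\end{align*}
```

    For α ≤ -2 this integral grows without bound as L_min → 0 (logarithmically at α = -2 itself), while for shallower slopes it converges, which is the sense in which a steeper faint end, a higher Φ*, or a fainter cutoff each raise the available budget of ionizing photons.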