
    Higher su(N) tensor products

    We extend our recent results on ordinary su(N) tensor product multiplicities to higher su(N) tensor products. Particular emphasis is put on four-point couplings, where the tensor product of four highest weight modules is considered. The number of times the singlet occurs in the decomposition is the associated multiplicity. In this framework, ordinary tensor products correspond to three-point couplings. As in that case, the four-point multiplicity may be expressed explicitly as a multiple sum measuring the discretised volume of a convex polytope. This description extends to higher-point couplings as well. We also address the problem of determining when a higher-point coupling exists, i.e., when the associated multiplicity is non-vanishing. The solution is a set of inequalities in the Dynkin labels. Comment: 17 pages, LaTeX
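    To make the notion of a higher-point multiplicity concrete, here is a minimal brute-force sketch restricted to su(2) (an assumption made purely for simplicity; the paper treats general su(N) via polytope volumes): it decomposes a product of highest weight modules with the Clebsch-Gordan series and counts how often the singlet appears, for both a three-point and a four-point coupling.

```python
# A minimal sketch, assuming su(2) rather than general su(N): the multiplicity of a
# higher-point coupling is the number of singlets in the decomposition of the
# tensor product of the participating highest weight modules.
from collections import Counter

def fuse(spins_a, spins_b):
    """Clebsch-Gordan series: decompose the product of two su(2) module lists."""
    out = Counter()
    for ja, ma in spins_a.items():
        for jb, mb in spins_b.items():
            j = abs(ja - jb)
            while j <= ja + jb:
                out[j] += ma * mb
                j += 1
    return out

def singlet_multiplicity(spins):
    """Multiplicity of the singlet (j = 0) in the tensor product of the given spins."""
    product = Counter({spins[0]: 1})
    for j in spins[1:]:
        product = fuse(product, Counter({j: 1}))
    return product[0]

# Three-point coupling (ordinary tensor product) and a four-point coupling of adjoints:
print(singlet_multiplicity([1, 1, 1]))      # 1
print(singlet_multiplicity([1, 1, 1, 1]))   # 3
```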

    Transfer of BECs through discrete breathers in an optical lattice

    We study the stability of a stationary discrete breather (DB) on a nonlinear trimer in the framework of the discrete nonlinear Schrödinger equation (DNLS). In previous theoretical investigations of the dynamics of Bose-Einstein condensates in leaking optical lattices, collisions between a DB and a lattice excitation, e.g. a moving breather (MB) or phonon, were studied. These collisions lead to the transmission of a fraction of the incident (atomic) norm of the MB through the DB, while the DB can be shifted in the direction of the incident lattice excitation. Here we show that there exists a total energy threshold of the trimer, above which the lattice excitation can trigger the destabilization of the DB, and that this is the mechanism leading to the movement of the DB. Furthermore, we give an analytic estimate of an upper bound on the norm that is transmitted through the DB. Our analysis explains the results of the earlier numerical studies and may help to clarify functional operations with BECs in optical lattices such as blocking and filtering coherent (atomic) beams. Comment: 8 pages, 5 figures
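    As a concrete illustration of the setting (not of the paper's stability analysis), the following sketch integrates a DNLS trimer with the norm initially concentrated on one site and monitors the site populations; the hopping constant, nonlinearity and initial state are assumed for illustration.

```python
# A minimal sketch, not the paper's model parameters: a DNLS trimer
#   i dpsi_n/dt = -C (psi_{n+1} + psi_{n-1}) - g |psi_n|^2 psi_n   (open ends),
# integrated to watch how much of the norm initially localised on site 1
# (a breather-like, self-trapped state) leaks to the other sites.
import numpy as np
from scipy.integrate import solve_ivp

C, g = 1.0, 8.0  # hopping and nonlinearity (assumed values)

def rhs(t, psi):
    coupling = np.zeros(3, dtype=complex)
    coupling[0] = psi[1]
    coupling[1] = psi[0] + psi[2]
    coupling[2] = psi[1]
    return -1j * (-C * coupling - g * np.abs(psi) ** 2 * psi)

psi0 = np.array([2.0, 0.1, 0.0], dtype=complex)   # mostly localised on site 1
sol = solve_ivp(rhs, (0.0, 20.0), psi0, max_step=0.01)

norms = np.abs(sol.y) ** 2
print("total norm drift:", norms.sum(axis=0).max() - norms.sum(axis=0).min())
print("final site populations:", norms[:, -1])
```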

    3D simulations of self-propelled, reconstructed jellyfish using vortex methods

    We present simulations of the vortex dynamics associated with the self-propelled motion of jellyfish. The geometry is obtained from image segmentation of video recordings of live jellyfish. The numerical simulations are performed using three-dimensional viscous vortex particle methods with Brinkman penalization to impose the kinematics of the jellyfish motion. We study two types of strokes recorded in the experiment. The first type (stroke A) produces two vortex rings during the stroke: one outside the bell during the power stroke and one inside the bell during the recovery stroke. The second type (stroke B) produces three vortex rings: one ring during the power stroke and two vortex rings during the recovery stroke. Both strokes propel the jellyfish, with stroke B producing the higher velocity. The speed of the jellyfish scales with the square root of the Reynolds number. The simulations are visualized in a fluid dynamics video. Comment: 1 page, 1 figure
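    The following is a minimal sketch of the Brinkman penalization step alone (the full method couples this to a 3D vortex particle solver, which is not reproduced here); the penalization parameter, time step, mask and body velocity are illustrative assumptions.

```python
# A minimal sketch of Brinkman penalization only: inside the body mask chi, the fluid
# velocity is relaxed towards the prescribed body velocity u_body. The parameter
# values, the 1D toy domain and the mask are assumptions for illustration.
import numpy as np

def penalize(u, chi, u_body, lam=1e4, dt=1e-3):
    """Implicit penalization update: u <- (u + lam*dt*chi*u_body) / (1 + lam*dt*chi)."""
    return (u + lam * dt * chi * u_body) / (1.0 + lam * dt * chi)

# Toy 1D example: a "body" occupying the middle of the domain moving at speed 1.
x = np.linspace(0.0, 1.0, 101)
chi = ((x > 0.4) & (x < 0.6)).astype(float)   # characteristic function of the body
u = np.zeros_like(x)                          # fluid initially at rest
u = penalize(u, chi, u_body=1.0)
print(u[x > 0.5][0])  # close to 1 inside the body, unchanged (0) outside
```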

    Genome-wide inference of ancestral recombination graphs

    The complex correlation structure of a collection of orthologous DNA sequences is uniquely captured by the "ancestral recombination graph" (ARG), a complete record of coalescence and recombination events in the history of the sample. However, existing methods for ARG inference are computationally intensive, highly approximate, or limited to small numbers of sequences, and, as a consequence, explicit ARG inference is rarely used in applied population genomics. Here, we introduce a new algorithm for ARG inference that is efficient enough to apply to dozens of complete mammalian genomes. The key idea of our approach is to sample an ARG of n chromosomes conditional on an ARG of n-1 chromosomes, an operation we call "threading." Using techniques based on hidden Markov models, we can perform this threading operation exactly, up to the assumptions of the sequentially Markov coalescent and a discretization of time. An extension allows for threading of subtrees instead of individual sequences. Repeated application of these threading operations results in highly efficient Markov chain Monte Carlo samplers for ARGs. We have implemented these methods in a computer program called ARGweaver. Experiments with simulated data indicate that ARGweaver converges rapidly to the true posterior distribution and is effective in recovering various features of the ARG for dozens of sequences generated under realistic parameters for human populations. In applications of ARGweaver to 54 human genome sequences from Complete Genomics, we find clear signatures of natural selection, including regions of unusually ancient ancestry associated with balancing selection and reductions in allele age at sites under directional selection. Preliminary results also indicate that our methods can be used to gain insight into complex features of human population structure, even with a noninformative prior distribution. Comment: 88 pages, 7 main figures, 22 supplementary figures. This version contains a substantially expanded genomic data analysis
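    The "threading" operation rests on standard hidden-Markov-model machinery. The sketch below shows forward filtering followed by backward sampling on a toy two-state model, the kind of exact posterior path sampling involved; the transition and emission matrices here are illustrative assumptions, not ARGweaver's SMC-derived quantities.

```python
# A minimal sketch of forward filtering / backward sampling: the sampled path stands
# in for the (discretised) coalescence state of the new chromosome at each site.
# The tiny matrices below are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def forward_filter_backward_sample(pi, T, E, obs):
    n_states, n_sites = len(pi), len(obs)
    alpha = np.zeros((n_sites, n_states))
    alpha[0] = pi * E[:, obs[0]]
    alpha[0] /= alpha[0].sum()
    for i in range(1, n_sites):
        alpha[i] = (alpha[i - 1] @ T) * E[:, obs[i]]
        alpha[i] /= alpha[i].sum()
    # Backward sampling of a state path exactly from the joint posterior.
    path = np.empty(n_sites, dtype=int)
    path[-1] = rng.choice(n_states, p=alpha[-1])
    for i in range(n_sites - 2, -1, -1):
        w = alpha[i] * T[:, path[i + 1]]
        path[i] = rng.choice(n_states, p=w / w.sum())
    return path

pi = np.array([0.5, 0.5])                      # two discretised time intervals
T = np.array([[0.95, 0.05], [0.05, 0.95]])     # recombination-driven switching (assumed)
E = np.array([[0.99, 0.01], [0.7, 0.3]])       # mismatches more likely for older coalescence
obs = np.array([0, 0, 1, 1, 0, 1, 0, 0])       # 0 = match, 1 = mismatch per site
print(forward_filter_backward_sample(pi, T, E, obs))
```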

    Leucine supplementation differentially enhances pancreatic cancer growth in lean and overweight mice

    Kristyn A. Liu, Laura M. Lashinger, Audrey J. Rasmussen and Stephen D. Hursting (Department of Nutritional Sciences, University of Texas at Austin, Austin, TX 78723, USA; Department of Molecular Carcinogenesis, University of Texas M.D. Anderson Cancer Center, 1808 Park Road 1c, Smithville, TX 78957, USA). Background: The risk of pancreatic cancer, the 4th deadliest cancer for both men and women in the United States, is increased by obesity. Calorie restriction (CR) is a well-known dietary regimen that prevents or reverses obesity and suppresses tumorigenesis in a variety of animal models, at least in part via inhibition of mammalian target of rapamycin (mTOR) signaling. Branched-chain amino acids (BCAA), especially leucine, activate mTOR and enhance growth and proliferation of myocytes and epithelial cells, which is why leucine is a popular supplement among athletes. Leucine is also increasingly being used as a treatment for pancreatic cancer cachexia, but the effects of leucine supplementation on pancreatic tumor growth have not been elucidated. Results: Supplementation with leucine increased pancreatic tumor growth in both lean (104 ± 17 mm³ versus 46 ± 13 mm³; P < 0.05) and overweight (367 ± 45 mm³ versus 230 ± 39 mm³; P < 0.01) mice, but tumor enhancement was associated with different biological outcomes depending on the diet. In the lean mice, leucine increased phosphorylation of mTOR and downstream effector S6 ribosomal protein, but in the overweight mice, leucine reduced glucose clearance and thus increased the amount of circulating glucose available to the tumor. Conclusion: These findings show that leucine supplementation enhances tumor growth in both lean and overweight mice through diet-dependent effects in a murine model of pancreatic cancer, suggesting caution against the clinical use of leucine supplementation for the purposes of skeletal muscle enhancement in cachectic patients.

    A Girsanov approach to slow parameterizing manifolds in the presence of noise

    We consider a three-dimensional slow-fast system with quadratic nonlinearity and additive noise. The associated deterministic system of this stochastic differential equation (SDE) exhibits a periodic orbit and a slow manifold. The deterministic slow manifold can be viewed as an approximate parameterization of the fast variable of the SDE in terms of the slow variables. In other words, the fast variable of the slow-fast system is approximately "slaved" to the slow variables via the slow manifold. We exploit this fact to obtain a two-dimensional reduced model for the original stochastic system, which results in the Hopf normal form with additive noise. Both the original and the reduced system admit ergodic invariant measures describing their respective long-time behaviour. We show that, for a suitable metric on a subset of the space of all probability measures on phase space, the discrepancy between the marginals along the radial component of both invariant measures can be bounded from above by a constant and a quantity describing the quality of the parameterization. An important technical tool we use to arrive at this result is Girsanov's theorem, which allows us to modify the SDEs in question in a way that preserves transition probabilities. This approach is then also applied to reduced systems obtained through stochastic parameterizing manifolds, which can be viewed as generalized notions of deterministic slow manifolds. Comment: 54 pages, 6 figures
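    As a concrete illustration of the reduced model (not of the Girsanov argument itself), the sketch below runs an Euler-Maruyama simulation of a Hopf normal form with additive noise and estimates the marginal along the radial component; all parameters are assumed for illustration.

```python
# A minimal sketch with illustrative parameters, not those of the paper:
# Euler-Maruyama for the Hopf normal form with additive noise,
#   dz = ((mu + i*gamma) z - |z|^2 z) dt + sigma dW,   z = x + i y,
# the type of two-dimensional reduced model discussed above, followed by an
# estimate of the invariant marginal along the radial component |z|.
import numpy as np

rng = np.random.default_rng(1)
mu, gamma, sigma = 0.5, 1.0, 0.3      # assumed parameters
dt, n_steps = 1e-3, 200_000

noise = sigma * np.sqrt(dt) * (rng.standard_normal(n_steps) + 1j * rng.standard_normal(n_steps))
z = 0.1 + 0.0j
radii = np.empty(n_steps)
for k in range(n_steps):
    z = z + ((mu + 1j * gamma) * z - abs(z) ** 2 * z) * dt + noise[k]
    radii[k] = abs(z)

# Empirical marginal along the radial component, discarding an initial transient.
hist, edges = np.histogram(radii[n_steps // 10:], bins=50, density=True)
print("radial mode near", edges[np.argmax(hist)])   # close to sqrt(mu) when the noise is weak
```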

    Parallel Batch-Dynamic Graph Connectivity

    In this paper, we study batch parallel algorithms for the dynamic connectivity problem, a fundamental problem that has received considerable attention in the sequential setting. The most well known sequential algorithm for dynamic connectivity is the elegant level-set algorithm of Holm, de Lichtenberg and Thorup (HDT), which achieves $O(\log^2 n)$ amortized time per edge insertion or deletion, and $O(\log n / \log\log n)$ time per query. We design a parallel batch-dynamic connectivity algorithm that is work-efficient with respect to the HDT algorithm for small batch sizes, and is asymptotically faster when the average batch size is sufficiently large. Given a sequence of batched updates, where $\Delta$ is the average batch size of all deletions, our algorithm achieves $O(\log n \log(1 + n / \Delta))$ expected amortized work per edge insertion and deletion and $O(\log^3 n)$ depth w.h.p. Our algorithm answers a batch of $k$ connectivity queries in $O(k \log(1 + n/k))$ expected work and $O(\log n)$ depth w.h.p. To the best of our knowledge, our algorithm is the first parallel batch-dynamic algorithm for connectivity. Comment: This is the full version of the paper appearing in the ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), 201
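    For a runnable point of reference, here is a plain sequential union-find baseline handling a batch of insertions and a batch of connectivity queries. It is emphatically not the paper's parallel batch-dynamic algorithm (no deletions, no HDT level structure, no parallelism); it only illustrates the batched interface.

```python
# A minimal sequential sketch of batched insertions and batched connectivity queries
# using union-find. Deletions and the parallel machinery of the paper are omitted.
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

def apply_insert_batch(uf, edges):
    for a, b in edges:
        uf.union(a, b)

def answer_query_batch(uf, queries):
    return [uf.find(a) == uf.find(b) for a, b in queries]

uf = UnionFind(6)
apply_insert_batch(uf, [(0, 1), (1, 2), (3, 4)])
print(answer_query_batch(uf, [(0, 2), (2, 3), (4, 3)]))  # [True, False, True]
```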

    Bent-Double Radio Sources as Probes of Intergalactic Gas

    As the most common environment in the universe, groups of galaxies are likely to contain a significant fraction of the missing baryons in the form of intergalactic gas. The density of this gas is an important factor in whether ram pressure stripping and strangulation affect the evolution of galaxies in these systems. We present a method for measuring the density of intergalactic gas using bent-double radio sources that is independent of temperature, making it complementary to current absorption line measurements. We use this method to probe intergalactic gas in two different environments, inside a small group of galaxies as well as outside of a larger group at a 2 Mpc radius, and measure total gas densities of $4 \pm 1_{-2}^{+6} \times 10^{-3}$ and $9 \pm 3_{-5}^{+10} \times 10^{-4}$ per cubic centimeter (random and systematic errors), respectively. We use X-ray data to place an upper limit of $2 \times 10^6$ K on the temperature of the intragroup gas in the small group. Comment: 6 pages, 1 figure, accepted for publication in Ap
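    The density estimate rests on a ram-pressure bending argument. The sketch below evaluates one standard form of that balance with purely illustrative input values; the jet momentum flux, scale height, curvature radius and galaxy velocity are all assumptions, not the measurements of the paper.

```python
# A minimal sketch of the ram-pressure bending balance often used for bent jets,
#   rho_icm * v_gal^2 / h  ~  rho_jet * v_jet^2 / R,
# solved for the ambient mass density and converted to a particle number density.
# All numerical inputs below are illustrative assumptions.
M_P = 1.67e-24       # proton mass [g]
MU = 0.6             # mean molecular weight of ionised gas (assumed)
KPC = 3.086e21       # cm per kpc

rho_jet_vjet2 = 1e-10        # jet momentum flux density [dyn cm^-2] (assumed)
h = 1.0 * KPC                # jet pressure scale height [cm] (assumed)
R = 30.0 * KPC               # radius of curvature of the bent jet [cm] (assumed)
v_gal = 300e5                # galaxy velocity through the gas [cm s^-1] (assumed)

rho_icm = rho_jet_vjet2 * h / (R * v_gal ** 2)
n_icm = rho_icm / (MU * M_P)
print(f"ambient number density ~ {n_icm:.1e} cm^-3")
```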

    Guidelines for physical weed control research: flame weeding, weed harrowing and intra-row cultivation

    A prerequisite for good research is the use of appropriate methodology. In order to aggregate sound research methodology, this paper presents some tentative guidelines for physical weed control research in general, and flame weeding, weed harrowing and intra-row cultivation in particular. Issues include the adjustment and use of mechanical weeders and other equipment, the recording of impact factors that affect weeding performance, methods to assess effectiveness, the layout of treatment plots, and the conceptual models underlying the experimental designs (e.g. factorial comparison, dose response). First of all, the research aims need to be clearly defined, an appropriate experimental design produced, and statistical methods chosen accordingly. Suggestions on how to do this are given. For assessments, quantitative measures would be ideal, but as they require more resources, visual classification may in some cases be more feasible. The timing of assessment affects the results and their interpretation. When describing the weeds and crops, one should list the crops and the most abundant weed species involved, giving their density and growth stages at the time of treatment. The location of the experimental field, soil type, soil moisture and amount of fertilization should be given, as well as weather conditions at the time of treatment. The researcher should describe the weed control equipment and its adjustments accurately, preferably according to the prevailing practice within the discipline. Things to record include, for example, gas pressure, burner properties, burner cover dimensions and LPG consumption in flame weeding; and speed, angle of tines, number of passes and direction in weed harrowing. The authors hope this paper will increase comparability among experiments, help less experienced scientists to prevent mistakes and essential omissions, and foster the advance of knowledge on non-chemical weed management.
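    As an illustration of the dose-response conceptual model mentioned above, the sketch below fits a four-parameter log-logistic curve to made-up weed-survival data versus flame-weeding LPG dose; the data, parameter names, bounds and starting values are assumptions.

```python
# A minimal sketch of a dose-response fit: a four-parameter log-logistic curve fitted
# to illustrative (made-up) weed survival data versus LPG dose in flame weeding.
import numpy as np
from scipy.optimize import curve_fit

def log_logistic(dose, lower, upper, ed50, slope):
    """Four-parameter log-logistic dose-response curve."""
    return lower + (upper - lower) / (1.0 + (dose / ed50) ** slope)

dose = np.array([5.0, 10.0, 20.0, 40.0, 80.0])          # LPG dose [kg/ha], illustrative
survival = np.array([95.0, 80.0, 45.0, 15.0, 5.0])      # weed survival [%], illustrative

params, _ = curve_fit(
    log_logistic, dose, survival,
    p0=[1.0, 100.0, 20.0, 2.0],
    bounds=([0.0, 50.0, 1.0, 0.5], [20.0, 120.0, 200.0, 10.0]),
)
print("estimated ED50 (dose giving a 50% effect):", round(params[2], 1), "kg/ha")
```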