
    On vanishing of Kronecker coefficients

    We show that the problem of deciding positivity of Kronecker coefficients is NP-hard. Previously, this problem was conjectured to be in P, just as for the Littlewood-Richardson coefficients. Our result establishes in a formal way that Kronecker coefficients are more difficult than Littlewood-Richardson coefficients, unless P=NP. We also show that there exists a #P-formula for a particular subclass of Kronecker coefficients whose positivity is NP-hard to decide. This is evidence that, despite the hardness of the positivity problem, there may well exist a positive combinatorial formula for the Kronecker coefficients. Finding such a formula is a major open problem in representation theory and algebraic combinatorics. Finally, we consider the existence of partition triples $(\lambda, \mu, \pi)$ such that the Kronecker coefficient $k^\lambda_{\mu,\pi} = 0$ but the Kronecker coefficient $k^{l\lambda}_{l\mu, l\pi} > 0$ for some integer $l > 1$. Such "holes" are of great interest as they witness the failure of the saturation property for the Kronecker coefficients, which is still poorly understood. Using insight from computational complexity theory, we turn our hardness proof into a positive result: We show that not only do there exist many such triples, but they can also be found efficiently. Specifically, we show that, for any $0 < \epsilon \leq 1$, there exists $0 < a < 1$ such that, for all $m$, there exist $\Omega(2^{m^a})$ partition triples $(\lambda, \mu, \mu)$ in the Kronecker cone such that: (a) the Kronecker coefficient $k^\lambda_{\mu,\mu}$ is zero, (b) the height of $\mu$ is $m$, (c) the height of $\lambda$ is $\le m^\epsilon$, and (d) $|\lambda| = |\mu| \le m^3$. The proof of the last result illustrates the effectiveness of the explicit proof strategy of GCT.
    Comment: 43 pages, 1 figure
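For small $n$, the dichotomy above can be made concrete: a single Kronecker coefficient is straightforward to compute from symmetric-group characters via $k^\lambda_{\mu,\pi} = \frac{1}{n!}\sum_{g \in S_n} \chi^\lambda(g)\,\chi^\mu(g)\,\chi^\pi(g)$, even though deciding positivity is NP-hard in general, since the partitions may be given in binary and $n$ can be exponential in the input size. A minimal Python sketch (illustrative only, not from the paper; characters are computed with the Murnaghan-Nakayama rule phrased via beta-numbers):

```python
from collections import Counter
from fractions import Fraction
from math import factorial

def partitions(n, max_part=None):
    """Yield all partitions of n as weakly decreasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def mn_char(lam, rho):
    """Irreducible S_n character chi^lam at cycle type rho
    (Murnaghan-Nakayama rule via beta-numbers)."""
    lam = tuple(x for x in lam if x > 0)
    if not rho:
        return 1 if not lam else 0
    k, rest = rho[0], rho[1:]
    m = len(lam)
    betas = [lam[i] + (m - 1 - i) for i in range(m)]  # strictly decreasing
    bset = set(betas)
    total = 0
    for b in betas:
        nb = b - k
        if nb < 0 or nb in bset:
            continue  # no border strip of size k ending at this beta-number
        height = sum(1 for c in betas if nb < c < b)  # rows spanned minus one
        new = sorted((bset - {b}) | {nb}, reverse=True)
        mu = tuple(new[i] - (m - 1 - i) for i in range(m))
        total += (-1) ** height * mn_char(mu, rest)
    return total

def z(rho):
    """Centralizer order of a permutation with cycle type rho."""
    out = 1
    for part, mult in Counter(rho).items():
        out *= part ** mult * factorial(mult)
    return out

def kronecker(lam, mu, pi):
    """Kronecker coefficient k^lam_{mu,pi} = <chi^lam, chi^mu * chi^pi>."""
    n = sum(lam)
    assert sum(mu) == n and sum(pi) == n
    total = Fraction(0)
    for rho in partitions(n):
        total += Fraction(mn_char(lam, rho) * mn_char(mu, rho) * mn_char(pi, rho), z(rho))
    return int(total)  # always a nonnegative integer
```

For example, `kronecker((2,), (1,1), (1,1))` returns 1, reflecting $[1,1] \otimes [1,1] = [2]$ for $S_2$. The running time grows super-polynomially with $n$, consistent with the hardness result: the sum ranges over all cycle types of $S_n$.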

    A Massively Parallel Algorithm for the Approximate Calculation of Inverse p-th Roots of Large Sparse Matrices

    We present the submatrix method, a highly parallelizable method for the approximate calculation of inverse p-th roots of large sparse symmetric matrices, which are required in a variety of scientific applications. We follow the idea of Approximate Computing, accepting imprecision in the final result in order to exploit the sparsity of the input matrix and to allow massively parallel execution. For an n x n matrix, the proposed algorithm allows the calculations to be distributed over n nodes with little communication overhead. The approximate result matrix exhibits the same sparsity pattern as the input matrix, allowing efficient reuse of allocated data structures. We evaluate the algorithm with respect to the error it introduces into calculated results, as well as its performance and scalability. We demonstrate that the error is relatively limited for well-conditioned matrices and that the results remain valuable for error-resilient applications, such as preconditioning, even for ill-conditioned matrices. We discuss the execution time and scaling of the algorithm on a theoretical level and present a distributed implementation of the algorithm using MPI and OpenMP. We demonstrate the scalability of this implementation by running it on a high-performance compute cluster comprising 1024 CPU cores, showing a speedup of 665x compared to single-threaded execution.
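The column-wise structure described above can be sketched in a few lines of Python (an illustration under simplifying assumptions — dense storage, a serial loop, and a symmetric positive-definite input — not the paper's MPI/OpenMP implementation): for each column i, the nonzero pattern of that column selects a principal submatrix, the inverse p-th root of that small dense submatrix is computed exactly via eigendecomposition, and the entries belonging to column i are scattered back into the result.

```python
import numpy as np

def inv_pth_root_dense(M, p):
    """Exact inverse p-th root of a dense SPD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * w ** (-1.0 / p)) @ V.T

def submatrix_inv_pth_root(A, p, tol=1e-12):
    """Approximate inverse p-th root of a sparse SPD matrix A, column by column.
    Each column's dense principal submatrix is built from its sparsity pattern;
    the loop over columns is embarrassingly parallel."""
    n = A.shape[0]
    X = np.zeros_like(A)
    for i in range(n):
        idx = np.flatnonzero(np.abs(A[:, i]) > tol)  # nonzero pattern of column i
        sub = A[np.ix_(idx, idx)]                    # dense principal submatrix
        root = inv_pth_root_dense(sub, p)
        local_i = int(np.where(idx == i)[0][0])
        X[idx, i] = root[:, local_i]                 # scatter result column back
    return X
```

For a block-diagonal matrix the method is exact, since each column's submatrix is the full block containing it; for general sparse matrices the result is an approximation whose quality degrades with conditioning, as the abstract notes. The result inherits the input's sparsity pattern because only pattern positions of each column are ever written.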

    New Risks Ahead: The Eastward Enlargement of the Eurozone

    Eastward enlargement is one of the hot topics in European economics. The accession of central and eastern European countries (CEEC) into the European Union (EU) is accompanied by an extension of the eurozone to this region. This paper surveys likely outcomes and challenges of this specific feature of EU enlargement. Moreover, the article represents the start of an international research project dealing with these questions. Research is structured along different markets. Hence, the impact of an adoption of the euro is analysed for capital and labour markets as well as with respect to exchange rate and monetary policies. Our main position is that the euro has in general beneficial effects for the CEEC and the current EU in all examined markets. However, these benefits evolve mainly in the long run, whereas the short-term costs of adaptation to the new situation may be high. Although we believe that the present value of long-term benefits exceeds these costs, it is by no means clear that policy-makers will share this view. Due to the usual political-economy considerations, the assessment of costs and benefits may differ between politicians and any overall perspective. If official policies become unforeseeable, so will private behaviour. International investors may reverse their capital flows, draining precious liquidity and leading to currency and financial crises, whenever they perceive the authorities’ commitment to EMU as less credible. This article highlights some conceivable mechanisms by which any such crisis could evolve. It thus sets the agenda for further research, mainly with a focus on appropriate policy strategies that keep adaptation costs as low as possible and minimise other external risks without hampering the long-term benefits.
    Keywords: EMU, EU enlargement, monetary integration

    SWEEPFINDER2: Increased sensitivity, robustness, and flexibility

    SweepFinder is a popular program that implements a powerful likelihood-based method for detecting recent positive selection, or selective sweeps. Here, we present SweepFinder2, an extension of SweepFinder with increased sensitivity and robustness to the confounding effects of mutation rate variation and background selection, as well as increased flexibility that enables the user to examine genomic regions in greater detail and to specify a fixed distance between test sites. Moreover, SweepFinder2 enables the use of invariant sites for sweep detection, increasing both its power and precision relative to SweepFinder.
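The flavor of a likelihood-based sweep scan can be conveyed with a toy sketch (purely illustrative — this is a generic multinomial likelihood ratio of each window's site frequency spectrum against the genome-wide background SFS, not SweepFinder2's sweep-specific model of the spatially distorted SFS; all names are invented):

```python
import math
from collections import Counter

def sfs_counts(freqs, n):
    """Counts of derived-allele frequencies 1..n-1 (the unfolded SFS)."""
    c = Counter(freqs)
    return [c.get(i, 0) for i in range(1, n)]

def log_lik(counts, probs):
    """Multinomial log-likelihood of SFS class counts."""
    return sum(k * math.log(p) for k, p in zip(counts, probs) if k)

def clr_scan(freqs, n, window):
    """Per-window score 2*(max local log-lik - log-lik under background SFS).
    freqs: derived-allele count of each SNP, in genomic order;
    n: sample size (number of chromosomes)."""
    background = sfs_counts(freqs, n)
    total = sum(background)
    bg_probs = [(c + 1e-9) / total for c in background]
    scores = []
    for start in range(0, len(freqs), window):  # non-overlapping windows
        local = sfs_counts(freqs[start:start + window], n)
        m = sum(local)
        if m == 0:
            scores.append(0.0)
            continue
        loc_probs = [(c + 1e-9) / m for c in local]  # unconstrained local MLE
        scores.append(2.0 * (log_lik(local, loc_probs) - log_lik(local, bg_probs)))
    return scores
```

A window whose SFS is strongly skewed toward high-frequency derived alleles, as expected near a completed sweep, scores far above windows that match the background spectrum. Real sweep scans replace the unconstrained local alternative with a parametric sweep model, which is what gives SweepFinder-style methods their power.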

    Automated precision alignment of optical components for hydroxide catalysis bonding

    We describe an interferometric system that can measure the alignment and separation of a polished face of an optical component and an adjacent polished surface. Accuracies achieved are ∼ 1 μrad for the relative angles in two orthogonal directions and ∼ 30 μm in separation. We describe the use of this readout system to automate the process of hydroxide catalysis bonding of a fused-silica component to a fused-silica baseplate. The complete alignment and bonding sequence was typically achieved on a timescale of a few minutes, followed by an initial cure of 10 minutes. A series of bonds were performed using two fluids: a simple sodium hydroxide solution, and a sodium hydroxide solution with some sodium silicate solution added. In each case we achieved final bonded component angular alignment within 10 μrad and position in the critical direction within 4 μm of the planned targets. The small movements of the component during the initial bonding and curing phases were monitored. The bonds made using the sodium silicate mixture achieved their final bonded alignment over a period of ∼ 15 hours. Bonds using the simple sodium hydroxide solution achieved their final alignment in a much shorter time of a few minutes. The automated system promises to speed the manufacture of precision-aligned assemblies using hydroxide catalysis bonding by more than an order of magnitude over the more manual approach used to build the optical interferometer at the heart of the recent ESA LISA Pathfinder technology demonstrator mission. This novel approach will be key to the time-efficient and low-risk manufacture of the complex optical systems needed for the forthcoming ESA spaceborne gravitational wave observatory mission, provisionally named LISA.

    Lidar cloud studies for FIRE and ECLIPS

    Optical remote sensing measurements of cirrus cloud properties were collected by one airborne and four ground-based lidar systems over a 32 h period during this case study from the First ISCCP (International Satellite Cloud Climatology Program) Regional Experiment (FIRE) Intensive Field Observation (IFO) program. The lidar systems were variously equipped to collect linear depolarization, intrinsically calibrated backscatter, and Doppler velocity information. The data presented describe the temporal evolution and spatial distribution of cirrus clouds over an area encompassing southern and central Wisconsin. The cirrus cloud types include: dissipating subvisual and thin fibrous cirrus cloud bands, an isolated mesoscale uncinus complex (MUC), a large-scale deep cloud that developed into an organized cirrus structure within the lidar array, and a series of intensifying mesoscale cirrus cloud masses. Although the cirrus frequently developed in the vertical from particle fall-streaks emanating from generating regions at or near cloud tops, glaciating supercooled (-30 to -35 C) altocumulus clouds contributed to the production of ice mass at the base of the deep cirrus cloud, apparently even through riming; other mechanisms involving evaporation, wave motions, and radiative effects are also indicated. The generating regions ranged in scale from approximately 1.0 km cirrus uncinus cells to organized MUC structures up to approximately 120 km across.