
    Galaxy types in the Sloan Digital Sky Survey using supervised artificial neural networks

    Supervised artificial neural networks are used to predict useful properties of galaxies in the Sloan Digital Sky Survey, in this instance morphological classifications, spectral types and redshifts. When the trained networks are given unseen data, correlations between predicted and actual properties are around 0.9, with rms errors of order ten per cent. Thus, given a representative training set, these properties may be reliably estimated, without human intervention, for galaxies in the survey for which there are no spectra.
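A minimal sketch of the supervised approach the abstract describes, using synthetic stand-in data and scikit-learn's MLPRegressor; the actual network architecture, SDSS input features, and target properties used in the paper are not specified here and everything below is an illustrative assumption:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for photometric features (e.g. magnitudes and colours)
# and one target property (e.g. redshift); real inputs would come from SDSS.
X = rng.normal(size=(2000, 5))
y = X @ np.array([0.4, -0.2, 0.1, 0.3, -0.1]) + 0.05 * rng.normal(size=2000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
net.fit(X_train, y_train)

# Correlation between predicted and actual properties on unseen data,
# the figure of merit quoted in the abstract (~0.9 for the real survey).
r = np.corrcoef(net.predict(X_test), y_test)[0, 1]
print(f"correlation on unseen data: {r:.2f}")
```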

    Convergence to equilibrium for the discrete coagulation-fragmentation equations with detailed balance

    Under the condition of detailed balance and some additional restrictions on the size of the coefficients, we identify the equilibrium distribution to which solutions of the discrete coagulation-fragmentation system of equations converge for large times, thus showing that there is a critical mass which marks a change in the behavior of the solutions. This was previously known only for particular cases such as the generalized Becker-Döring equations. Our proof is based on an inequality between the entropy and the entropy production which also gives some information on the rate of convergence to equilibrium for solutions below the critical mass. Comment: 28 pages
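For orientation, the discrete coagulation-fragmentation system and the detailed-balance condition referred to above are conventionally written as follows (standard notation, which may differ from the paper's):

```latex
\frac{dc_i}{dt} \;=\; \frac{1}{2}\sum_{j=1}^{i-1}\bigl(a_{i-j,j}\,c_{i-j}\,c_j - b_{i-j,j}\,c_i\bigr)
\;-\;\sum_{j\ge 1}\bigl(a_{i,j}\,c_i\,c_j - b_{i,j}\,c_{i+j}\bigr), \qquad i \ge 1,
```

where detailed balance requires coefficients $Q_i > 0$ with $a_{i,j}\,Q_i\,Q_j = b_{i,j}\,Q_{i+j}$, so that $c_i = Q_i z^i$ gives a family of equilibria indexed by the fugacity $z$; the critical mass arises from the radius of convergence of the associated power series.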

    ΔπN coupling constant in light cone QCD sum rules

    We employ the light cone QCD sum rules to calculate the ΔπN coupling constant by studying the two point correlation function between the vacuum and the pion state. Our result is consistent with the traditional QCD sum rules calculations and is in agreement with the experimental value. Comment: 8 pages, latex, 2 figures

    On almost randomizing channels with a short Kraus decomposition

    For large d, we study quantum channels on C^d obtained by selecting randomly N independent Kraus operators according to a probability measure mu on the unitary group U(d). When mu is the Haar measure, we show that for N > d/epsilon^2, such a channel is epsilon-randomizing with high probability, which means that it maps every state within distance epsilon/d (in operator norm) of the maximally mixed state. This slightly improves on a result by Hayden, Leung, Shor and Winter by optimizing their discretization argument. Moreover, for general mu, we obtain an epsilon-randomizing channel provided N > d (log d)^6/epsilon^2. For d = 2^k (k qubits), this includes Kraus operators obtained by tensoring k random Pauli matrices. The proof uses recent results on empirical processes in Banach spaces. Comment: We added some background on the geometry of Banach spaces
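The construction in the abstract is easy to simulate numerically: draw N Haar-random unitaries, form the channel with Kraus operators U_i/sqrt(N), and check how close the output is to the maximally mixed state. The sketch below (plain NumPy; the dimensions and N are illustrative choices, not values from the paper) does this for a single pure input state:

```python
import numpy as np

rng = np.random.default_rng(1)

def haar_unitary(d, rng):
    """Haar-random unitary via QR decomposition of a complex Gaussian matrix."""
    z = (rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    phases = np.diag(r) / np.abs(np.diag(r))
    return q * phases  # fix column phases so the distribution is Haar

d, N = 8, 400  # illustrative; the theorem needs N > d/epsilon^2 (Haar case)

# Channel with N Kraus operators U_i / sqrt(N), as in the abstract
kraus = [haar_unitary(d, rng) / np.sqrt(N) for _ in range(N)]

# Apply the channel to the pure state |0><0|
rho = np.zeros((d, d), dtype=complex)
rho[0, 0] = 1.0
out = sum(K @ rho @ K.conj().T for K in kraus)

# Operator-norm distance from the maximally mixed state I/d;
# epsilon-randomizing means this is at most epsilon/d for every input state.
dist = np.linalg.norm(out - np.eye(d) / d, ord=2)
print(f"||Phi(rho) - I/d||_op = {dist:.4f}")
```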

    Market-Based Alternatives for Managing Congestion at New York’s LaGuardia Airport

    We summarize the results of a project that was motivated by the expiration of the “High Density Rule,” which defined the slot controls employed at New York’s LaGuardia Airport for more than 30 years. The scope of the project included the analysis of several administrative measures, congestion pricing options and slot auctions. The research output includes a congestion pricing procedure and also the specification of a slot auction mechanism. The research results are based in part on two strategic simulations. These were multi-day events that included the participation of airport operators, most notably the Port Authority of New York and New Jersey, FAA and DOT executives, airline representatives and other members of the air transportation community. The first simulation placed participants in a stressful, high-congestion future scenario and then allowed them to react and problem-solve under various administrative measures and congestion pricing options. The second simulation was a mock slot auction in which participants bid on LGA arrival and departure slots for fictitious airlines. Keywords: auctions, airport slot auctions, combinatorial auctions

    Chaotic Dynamics in Optimal Monetary Policy

    There is by now a large consensus in modern monetary policy. This consensus has been built upon a dynamic general equilibrium model of optimal monetary policy as developed by, e.g., Goodfriend and King (1997), Clarida et al. (1999), Svensson (1999) and Woodford (2003). In this paper we extend the standard optimal monetary policy model by introducing nonlinearity into the Phillips curve. Under the specific form of nonlinearity proposed in our paper (which allows for convexity and concavity and secures closed form solutions), we show that the introduction of a nonlinear Phillips curve into the structure of the standard model in a discrete time and deterministic framework produces radical changes to the major conclusions regarding stability and the efficiency of monetary policy. We emphasize the following main results: (i) instead of a unique fixed point we end up with multiple equilibria; (ii) instead of saddle-path stability, for different sets of parameter values we may have saddle stability, totally unstable equilibria and chaotic attractors; (iii) for certain degrees of convexity and/or concavity of the Phillips curve, where endogenous fluctuations arise, one obtains various results that seem intuitively correct. Firstly, when the Central Bank pays attention essentially to inflation targeting, the inflation rate has a lower mean and is less volatile; secondly, when the degree of price stickiness is high, the inflation rate displays a larger mean and higher volatility (but this is sensitive to the values given to the parameters of the model); and thirdly, the higher the target value of the output gap chosen by the Central Bank, the higher is the inflation rate and its volatility. Comment: 11 pages
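The hallmark of the chaotic attractors mentioned above is sensitive dependence on initial conditions. The sketch below is NOT the paper's monetary-policy model (whose nonlinear Phillips curve is not reproduced in the abstract); it uses the logistic map as a generic, hypothetical example of how a nonlinearity can turn a deterministic discrete-time rule into a chaotic one:

```python
import numpy as np

def trajectory(x0, r=3.99, steps=60):
    """Iterate the logistic map x -> r*x*(1-x), a standard chaotic example."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return np.array(xs)

# Two trajectories whose initial conditions differ by only 1e-6
a = trajectory(0.400000)
b = trajectory(0.400001)

gap = np.abs(a - b)
print(f"initial gap: {gap[0]:.1e}, largest gap over 60 steps: {gap.max():.3f}")
```

The tiny initial perturbation is amplified to order one within a few dozen iterations, which is the qualitative behaviour that makes policy outcomes unpredictable on a chaotic attractor.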

    Heavy-to-Light Form Factors in the Final Hadron Large Energy Limit of QCD

    We argue that the Large Energy Effective Theory (LEET), originally proposed by Dugan and Grinstein, is applicable to exclusive semileptonic, radiative and rare heavy-to-light transitions in the region where the energy release E is large compared to the strong interaction scale and to the mass of the final hadron, i.e. for q^2 not close to the zero-recoil point. We derive the Effective Lagrangian from the QCD one, and show that in the limit of heavy mass M for the initial hadron and large energy E for the final one, the heavy and light quark fields behave as two-component spinors. Neglecting QCD short-distance corrections, this implies that there are only three form factors describing all the pseudoscalar to pseudoscalar or vector weak current matrix elements. We argue that the dependence of these form factors on M and E should be factorizable, the M-dependence (sqrt(M)) being derived from the usual heavy quark expansion while the E-dependence is controlled by the behaviour of the light-cone distribution amplitude near the end-point u=1. The usual expectation of the (1-u) behaviour leads to a 1/E^2 scaling law, that is a dipole form in q^2. We also show explicitly that in the appropriate limit, the Light-Cone Sum Rule method satisfies our general relations as well as the scaling laws in M and E of the form factors, and obtain very compact and simple expressions for the latter. Finally we note that this formalism gives theoretical support to the quark model-inspired methods existing in the literature. Comment: Latex2e, 25 pages, no figures. Slight changes in the title and the phrasing. Misprint in Eq. (25) corrected. To appear in Phys. Rev.
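The factorized scaling described above can be summarized schematically (normalization constants omitted, and with the final-hadron mass m kept only to define E):

```latex
F(M,E)\;\sim\;\frac{\sqrt{M}}{E^{2}},
\qquad
E \;=\; \frac{M^{2}+m^{2}-q^{2}}{2M}\;\simeq\;\frac{M}{2}\left(1-\frac{q^{2}}{M^{2}}\right),
```

so the stated 1/E^2 behaviour is what produces the dipole form of the form factors in q^2.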

    Parameter estimators of random intersection graphs with thinned communities

    This paper studies a statistical network model generated by a large number of randomly sized overlapping communities, where any pair of nodes sharing a community is linked with probability q via the community. In the special case with q = 1 the model reduces to a random intersection graph, which is known to generate high levels of transitivity also in the sparse context. The parameter q adds a degree of freedom and leads to a parsimonious and analytically tractable network model with tunable density, transitivity, and degree fluctuations. We prove that the parameters of this model can be consistently estimated in the large and sparse limiting regime using moment estimators based on partially observed densities of links, 2-stars, and triangles. Comment: 15 pages
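A small simulation makes the model and the observed statistics concrete. The sketch below generates a toy instance (community sizes and counts are illustrative assumptions, not values from the paper) and computes the raw counts of links, 2-stars, and triangles on which the moment estimators are based:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)

# Toy generator: m randomly sized overlapping communities over n nodes; each
# pair of nodes sharing a community is linked with probability q via that
# community (q as in the abstract; n, m and the size range are assumptions).
n, m, q = 200, 60, 0.7
A = np.zeros((n, n), dtype=int)
for _ in range(m):
    size = rng.integers(2, 8)
    members = rng.choice(n, size=size, replace=False)
    for i, j in combinations(members, 2):
        if rng.random() < q:
            A[i, j] = A[j, i] = 1  # pairs in several communities just stay linked

# Counts of links, 2-stars, and triangles from the adjacency matrix --
# the subgraph statistics the moment estimators are built on.
deg = A.sum(axis=1)
links = int(A.sum()) // 2
two_stars = int((deg * (deg - 1) // 2).sum())
triangles = int(np.trace(np.linalg.matrix_power(A, 3))) // 6
print(links, two_stars, triangles)
```

Dividing each count by the number of node pairs, ordered paths, or node triples turns these into the densities referred to in the abstract.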

    Poverty and access to water in the Senegal River valley

    Water poverty in the Senegal Valley. The hydraulic history of the Senegal River valley is ancient, going back to flood-recession agriculture, but it has undergone a profound transformation since the introduction of irrigation. Can one speak of "water poverty" when two dams now regulate the flow of the Senegal River and ensure a permanent abundance of water? The answer to this question is framed through the links between poverty, access to funding, access to land, and involvement in the management of the resource.