29,498 research outputs found

    On presuppositions in requirements


    Tri-bimaximal Neutrino Mixing from A(4) and \theta_{13} \sim \theta_C

    It is a common belief that, if the Tri-bimaximal mixing (TBM) pattern is explained by vacuum alignment in an A(4) model, only a very small reactor angle, say \theta_{13} \sim \lambda^2_C, where \lambda_C \equiv \theta_C is the Cabibbo angle, can be accommodated. This statement is based on the assumption that all the flavon fields acquire VEVs at a very similar scale and that the departures from exact TBM arise at the same perturbation level. From the experimental point of view, however, a relatively large value \theta_{13} \sim \lambda_C is not yet excluded by present data. In this paper, we propose a Seesaw A(4) model in which the previous assumption can naturally be evaded. The aim is to accommodate \theta_{13} \sim \lambda_C without conflicting with the TBM prediction for \theta_{12}, which is rather close to the observed value (at the \lambda^2_C level). In our model the deviation of the atmospheric angle from maximal is subject to the sum rule \sin^2 \theta_{23} \approx 1/2 + (\sqrt{2}/2) \sin \delta \cos \theta_{13}, which is a next-to-leading-order prediction of our model.
    Comment: 16 pages, revised, typos corrected, references added
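    The atmospheric-angle sum rule quoted above is straightforward to evaluate numerically. The sketch below illustrates the formula only; the function name and sample angles are our choices, not the paper's:

```python
import math

def sin2_theta23(delta, theta13):
    """Evaluate the abstract's next-to-leading-order sum rule:
    sin^2(theta_23) ~ 1/2 + (sqrt(2)/2) * sin(delta) * cos(theta_13).
    Angles are in radians."""
    return 0.5 + (math.sqrt(2) / 2.0) * math.sin(delta) * math.cos(theta13)

# For a vanishing Dirac phase delta, theta_23 remains maximal:
print(sin2_theta23(0.0, 0.2))  # -> 0.5
```

    A nonzero sin(delta) shifts sin^2(theta_23) away from 1/2, which is how the model links the Dirac phase to the atmospheric-angle deviation.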

    Run Generation Revisited: What Goes Up May or May Not Come Down

    In this paper, we revisit the classic problem of run generation. Run generation is the first phase of external-memory sorting, where the objective is to scan through the data, reorder elements using a small buffer of size M, and output runs (contiguously sorted chunks of elements) that are as long as possible. We develop algorithms for minimizing the total number of runs (or equivalently, maximizing the average run length) when the runs are allowed to be sorted or reverse sorted. We study the problem in the online setting, both with and without resource augmentation, and in the offline setting. (1) We analyze alternating-up-down replacement selection (runs alternate between sorted and reverse sorted), which was studied by Knuth as far back as 1963. We show that this simple policy is asymptotically optimal. Specifically, we show that alternating-up-down replacement selection is 2-competitive and no deterministic online algorithm can perform better. (2) We give online algorithms with smaller competitive ratios under resource augmentation. Specifically, we exhibit a deterministic algorithm that, when given a buffer of size 4M, is able to match or beat any optimal algorithm having a buffer of size M. Furthermore, we present a randomized online algorithm that is 7/4-competitive when given a buffer twice the size of the optimal algorithm's. (3) We demonstrate that performance can also be improved with a small amount of foresight. We give an algorithm that is 3/2-competitive given foreknowledge of the next 3M elements of the input stream. For the extreme case where all future elements are known, we design a PTAS for computing the optimal strategy a run generation algorithm must follow. (4) Finally, we present algorithms tailored for nearly sorted inputs, whose optimal solutions are guaranteed to have sufficiently long runs.
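    For readers unfamiliar with the baseline, classic (ascending-only) replacement selection can be sketched as follows. This is a simplified illustration with an explicit in-memory buffer, not the paper's alternating-up-down variant:

```python
import heapq

def replacement_selection(stream, M):
    """Classic replacement selection: produce ascending runs using a
    buffer (min-heap) of at most M elements. Elements too small to
    extend the current run are deferred to the next run."""
    it = iter(stream)
    heap = []
    for _ in range(M):            # fill the buffer
        try:
            heap.append(next(it))
        except StopIteration:
            break
    heapq.heapify(heap)

    runs, current, pending = [], [], []
    while heap:
        smallest = heapq.heappop(heap)
        current.append(smallest)
        try:
            nxt = next(it)
            if nxt >= smallest:
                heapq.heappush(heap, nxt)   # can still join this run
            else:
                pending.append(nxt)         # must wait for the next run
        except StopIteration:
            pass
        if not heap:                        # current run is exhausted
            runs.append(current)
            current = []
            heap = pending
            heapq.heapify(heap)
            pending = []
    if current:
        runs.append(current)
    return runs
```

    On uniformly random input this policy produces runs of expected length about 2M, which is the classical result the paper's 2-competitiveness bound for the alternating variant echoes.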

    Tri-bimaximal Neutrino Mixing and Quark Masses from a Discrete Flavour Symmetry

    We build a supersymmetric model of quark and lepton masses based on the discrete flavour symmetry group T', the double covering of A_4. In the lepton sector our model is practically indistinguishable from recent models based on A_4 and, in particular, it predicts nearly tri-bimaximal mixing, in good agreement with present data. In the quark sector a realistic pattern of masses and mixing angles is obtained by exploiting the doublet representations of T', which are not available in A_4. To this end, the flavour symmetry T' should be broken spontaneously along appropriate directions in flavour space. In this paper we fully discuss the related vacuum alignment problem, both at the leading order and by accounting for small effects coming from higher-order corrections. As a result we get the relations \sqrt{m_d/m_s} \approx |V_{us}| and \sqrt{m_d/m_s} \approx |V_{td}/V_{ts}|.
    Comment: 27 pages, 1 figure; minor correction
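    The first of the quoted relations can be checked against rough present-day numbers. The quark masses and |V_us| below are approximate illustrative values (roughly PDG-like), not the paper's inputs:

```python
import math

def sqrt_mass_ratio(m_d, m_s):
    """Left-hand side of the relation sqrt(m_d/m_s) ~ |V_us|."""
    return math.sqrt(m_d / m_s)

# Approximate quark masses in MeV (illustrative values only):
lhs = sqrt_mass_ratio(4.7, 93.0)
print(f"sqrt(m_d/m_s) = {lhs:.3f}, to be compared with |V_us| = 0.225")
```

    With these rough inputs the two sides agree at the percent level, which is the kind of numerical success the relation is known for.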

    Tri-Bimaximal Lepton Mixing and Leptogenesis

    In models with flavour symmetries added to the gauge group of the Standard Model, the CP-violating asymmetry necessary for leptogenesis may be related to low-energy parameters. A particular case of interest is when the flavour symmetry produces exact Tri-Bimaximal lepton mixing, leading to a vanishing CP-violating asymmetry. In this paper we present a model-independent discussion that confirms that this always occurs for unflavoured leptogenesis in type I see-saw scenarios, noting however that Tri-Bimaximal mixing does not imply a vanishing asymmetry in general scenarios where there is interplay between type I and other see-saws. We also consider a specific model where the exact Tri-Bimaximal mixing is lifted by corrections that can be parametrised by a small number of degrees of freedom, and analyse in detail the existing link between low- and high-energy parameters, focusing on how the deviations from Tri-Bimaximal mixing are connected to the parameters governing leptogenesis.
    Comment: 29 pages, 6 figures; version 2: references added, minor correction

    Lepton Flavour Violation in a Supersymmetric Model with A4 Flavour Symmetry

    We compute the branching ratios for mu -> e gamma, tau -> mu gamma and tau -> e gamma in a supersymmetric model invariant under the flavour symmetry group A4 X Z3 X U(1)_{FN}, in which near tri-bimaximal lepton mixing is naturally predicted. At leading order in the small symmetry breaking parameter u, which is of the same order as the reactor mixing angle theta_{13}, we find that the branching ratios generically scale as u^2. Applying the current bound on the branching ratio of mu -> e gamma shows that small values of u or tan(beta) are preferred in the model for mass parameters m_{SUSY} and m_{1/2} smaller than 1000 GeV. The bound expected from the ongoing MEG experiment will provide a severe constraint on the parameter space of the model, either enforcing u approx 0.01 and small tan(beta), or pushing m_{SUSY} and m_{1/2} above 1000 GeV. In the special case of universal soft supersymmetry breaking terms in the flavon sector a cancellation takes place in the amplitudes and the branching ratios scale as u^4, allowing for smaller slepton masses. The branching ratios for tau -> mu gamma and tau -> e gamma are predicted to be of the same order as the one for mu -> e gamma, which precludes the possibility of observing these tau decays in the near future.
    Comment: 44 pages
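    The quoted scalings are simple power laws in u, which the toy function below makes explicit. The overall normalisation C is arbitrary and not a prediction of the model:

```python
def br_scaling(u, universal=False, C=1.0):
    """Toy scaling of BR(mu -> e gamma) with the symmetry-breaking
    parameter u: BR ~ C * u^2 generically, and BR ~ C * u^4 when the
    soft terms in the flavon sector are universal (per the abstract).
    C is an arbitrary normalisation, not a model prediction."""
    return C * (u ** 4 if universal else u ** 2)

# Halving u suppresses the generic rate by a factor of 4,
# but the universal-case rate by a factor of 16.
```

    This is why the universal case tolerates smaller slepton masses: the same experimental bound on the branching ratio constrains u far more weakly when the rate falls as u^4.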

    Nonadditive entropy for random quantum spin-S chains

    We investigate the scaling of Tsallis entropy in disordered quantum spin-S chains. We show that an extensive scaling occurs for specific values of the entropic index. Those values depend only on the magnitude S of the spins, being directly related to the effective central charge associated with the model.
    Comment: 5 pages, 7 figures. v3: Minor corrections and references updated. Published version
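    For reference, the Tsallis entropy has the standard form S_q = (1 - Σ_i p_i^q)/(q - 1), which reduces to the Shannon entropy in the limit q → 1. A minimal sketch of the definition (not of the paper's spin-chain calculation):

```python
import math

def tsallis_entropy(p, q):
    """Tsallis entropy S_q = (1 - sum_i p_i^q) / (q - 1) of a discrete
    distribution p. As q -> 1 this reduces to the Shannon entropy
    -sum_i p_i * ln(p_i), which we return directly at q = 1."""
    if abs(q - 1.0) < 1e-12:
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

# Fair coin: S_2 = 1/2, while the q -> 1 limit gives ln 2.
print(tsallis_entropy([0.5, 0.5], 2.0))  # -> 0.5
```

    The nonadditivity (for q != 1, S_q of independent systems is not the sum of the parts) is what makes the extensive-scaling result for specific q values nontrivial.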

    Improving Outfit Recommendation with Co-supervision of Fashion Generation

    The task of fashion recommendation involves two main challenges: visual understanding and visual matching. Visual understanding aims to extract effective visual features. Visual matching aims to model a human notion of compatibility to compute a match between fashion items. Most previous studies rely on a recommendation loss alone to guide visual understanding and matching. Although the features captured by these methods describe basic characteristics (e.g., color, texture, shape) of the input items, they are not directly related to the visual signals of the output items (to be recommended). This is problematic because the aesthetic characteristics (e.g., style, design), from which the output items could be directly inferred, are lacking: features are learned under the recommendation loss alone, where the supervision signal is simply whether the given two items match or not. To address this problem, we propose a neural co-supervision learning framework, called the FAshion Recommendation Machine (FARM). FARM improves visual understanding by incorporating the supervision of a generation loss, which we hypothesize to be able to better encode aesthetic information. FARM enhances visual matching by introducing a novel layer-to-layer matching mechanism that fuses aesthetic information more effectively, while avoiding paying too much attention to generation quality at the expense of recommendation performance. Extensive experiments on two publicly available datasets show that FARM outperforms state-of-the-art models on outfit recommendation in terms of AUC and MRR. Detailed analyses of generated and recommended items demonstrate that FARM can encode better features and generate high-quality images as references to improve recommendation performance.
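    A minimal sketch of the two ingredients described above: a layer-wise matching score and a co-supervised objective. The function names, the cosine-similarity choice, and the weight lam are our simplifying assumptions, not FARM's exact architecture:

```python
import math

def layer_to_layer_match(feats_a, feats_b):
    """Toy layer-to-layer matching score: the mean cosine similarity
    between corresponding per-layer feature vectors of two items.
    A simplified stand-in for FARM's matching mechanism."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)
    sims = [cos(a, b) for a, b in zip(feats_a, feats_b)]
    return sum(sims) / len(sims)

def co_supervision_loss(rec_loss, gen_loss, lam=0.1):
    """Joint objective: the recommendation loss plus a down-weighted
    generation loss, so generation supervision informs, but does not
    dominate, the recommendation task."""
    return rec_loss + lam * gen_loss
```

    Down-weighting the generation term reflects the design concern the abstract raises: generation quality should guide feature learning without overriding recommendation performance.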