
    Leptonic CP Violation and Wolfenstein Parametrization for Lepton Mixing

    We investigate the general structure of the lepton mixing matrix resulting from the $SU_F(3)$ gauge family model with an appropriate vacuum structure of $SU_F(3)$ symmetry breaking. It is shown that the lepton mixing matrix can be parametrized by the Wolfenstein parametrization method to characterize its deviation from tri-bimaximal mixing. A general analysis of the allowed leptonic CP-violating phase $\delta_e$ and the leptonic Wolfenstein parameters $\lambda_e$, $A_e$, $\rho_e$ is carried out based on the observed lepton mixing angles. We demonstrate how the leptonic CP violation correlates with the leptonic Wolfenstein parameters. It is found that the phase $\delta_e$ is strongly constrained, and only a large or nearly maximal leptonic CP-violating phase $|\delta_e| \simeq 3\pi/4 \sim \pi/2$ is favorable when $\lambda_e > 0.15$. In particular, when taking $\lambda_e$ to be the Cabibbo angle, $\lambda_e \simeq \lambda \simeq 0.225$, a sensible result for the leptonic Wolfenstein parameters and CP violation is obtained with $A_e = 1.40$, $\rho_e = 0.20$, $\delta_e \sim 101.76^\circ$, which is compatible with the one in the quark sector. An interesting correlation between leptons and quarks is observed, which indicates a possible common origin of masses and mixing for the charged leptons and quarks. Comment: 18 pages, 5 figures, sources of CP-violating phases are clarified, references added
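
    For orientation, the Wolfenstein parametrization referred to above is, in the quark sector, the familiar expansion of the mixing matrix in powers of $\lambda \simeq 0.225$; the leptonic version used in the paper expands around the tri-bimaximal pattern instead of the unit matrix, so the form below is quoted only as a reference point from standard CKM phenomenology, not as the paper's leptonic parametrization:

    $$ V_{\rm CKM} \simeq \begin{pmatrix} 1-\lambda^2/2 & \lambda & A\lambda^3(\rho-i\eta) \\ -\lambda & 1-\lambda^2/2 & A\lambda^2 \\ A\lambda^3(1-\rho-i\eta) & -A\lambda^2 & 1 \end{pmatrix} + \mathcal{O}(\lambda^4). $$

    The quoted best-fit values $\lambda_e \simeq 0.225$, $A_e = 1.40$, $\rho_e = 0.20$ play the analogous roles of $\lambda$, $A$, $\rho$ for the deviation of the lepton mixing matrix from tri-bimaximal mixing.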

    Utopian fantasy

    I design furniture and objects to express my utopian fantasy to people. I hope users can imagine the fantasy through interaction with my furniture and objects. While people are interacting with my works, they become part of the fantasy. My works are the NPCs (nonplayer characters) of a game created by myself, called Utopian Fantasy. My works are creature-istic, anthropomorphic, and always interactive. They are inspired by nature and everyday life. This series of works, created during my time at RISD, expresses my appreciation for the underwater world and the Internet. My designs beg for interaction and play. Wander around one of my pieces for a couple of seconds, test it, play with it, and through interaction my furniture will be happy to share with you everything it knows. My furniture’s functional purpose emerges from play, while visually engaging people nearby. I love adding surprises into my pieces, giving them a sense of humor, a point of interaction and exchange. I want people to say, “Oh, how joyful!”, when they interact with my work.

    $SU(3)_F$ Gauge Family Model and New Symmetry Breaking Scale From FCNC Processes

    Based on the $SU(3)_F$ gauge family symmetry model, which was proposed to explain the observed mass and mixing pattern of neutrinos, we investigate the symmetry breaking, the mixing pattern in the quark and lepton sectors, and the contribution of the new gauge bosons to some flavour changing neutral current (FCNC) processes at low energy. With the current data on the mass differences in the neutral pseudo-scalar $P^0$-$\bar{P}^0$ systems, we find that the $SU(3)_F$ symmetry breaking scale can be as low as 300 TeV and the mass of the lightest gauge boson about 100 TeV. Other FCNC processes, such as the lepton flavour violating decay $\mu^- \rightarrow e^- e^+ e^-$ and the semi-leptonic rare decay $K \rightarrow \pi \bar{\nu}\nu$, receive contributions from the exchange of the new gauge bosons. With the constraints obtained from the $P^0$-$\bar{P}^0$ systems, we estimate that the contribution of the new physics is around $10^{-16}$, far below the current experimental bounds. Comment: 3 figures
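
    As a rough, model-independent guide to how such bounds arise (an order-of-magnitude sketch, not the paper's computation; $g_F$ and $M_F$ denote a generic family gauge coupling and gauge boson mass, $f_P$ and $m_P$ the decay constant and mass of the meson $P$), tree-level exchange of a flavour-changing gauge boson contributes to the neutral-meson mass splitting at the order

    $$ \Delta M_P^{\rm NP} \sim \frac{g_F^2}{M_F^2}\, f_P^2\, m_P \times (\text{mixing and bag factors of order one}). $$

    Requiring this contribution to stay below the measured mass differences bounds $M_F/g_F$ from below, with the precise limit set by the flavour structure of the couplings; in the present model this works out to a symmetry breaking scale that can be as low as about 300 TeV.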

    ICE-Score: Instructing Large Language Models to Evaluate Code

    Recent advancements in the field of natural language generation have facilitated the use of large language models to assess the quality of generated text. Although these models have shown promising results in tasks such as machine translation and summarization, their applicability in code intelligence tasks remains limited without human involvement. The complexity of programming concepts required for such tasks makes it difficult to develop evaluation metrics that align with human judgment. Token-matching-based metrics, such as BLEU, have demonstrated weak correlations with human practitioners in code intelligence tasks. Moreover, utilizing human-written test suites to evaluate functional correctness can be challenging in domains with low resources. To overcome these obstacles, we propose ICE-Score, a new evaluation metric that instructs large language models (LLMs) to perform code assessments. Our metric addresses the limitations of existing approaches by achieving superior correlations with functional correctness and human preferences, without the need for test oracles or references. We evaluate the efficacy of our metric on two different aspects (human preference and execution success) and four programming languages. Our results demonstrate that our metric surpasses state-of-the-art metrics for code generation, delivering high levels of accuracy and consistency across various programming languages and tasks. We also make our evaluation metric and datasets available to the public (https://github.com/terryyz/ice-score), encouraging further research in evaluating code intelligence tasks. Comment: Accepted to Findings of EACL 2024
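
    A minimal sketch of the idea of LLM-instructed code scoring (illustrative only; `ask_llm` is a hypothetical stand-in for an LLM client, and the actual prompts, rubric, and aspects are defined in the ICE-Score repository linked above):

```python
# Toy sketch of LLM-based code assessment in the spirit of ICE-Score.
# `ask_llm` is a hypothetical helper; plug in any instruction-following LLM client.

def ask_llm(prompt: str) -> str:
    """Send a prompt to an instruction-following LLM and return its reply."""
    raise NotImplementedError("wire this up to your LLM client of choice")

def score_code(problem: str, code: str, aspect: str = "functional correctness") -> int:
    """Ask the model to grade a candidate solution on a 0-4 scale for one aspect."""
    prompt = (
        f"You are a strict code reviewer. Task description:\n{problem}\n\n"
        f"Candidate solution:\n{code}\n\n"
        f"Rate the candidate's {aspect} on an integer scale from 0 (useless) "
        f"to 4 (fully satisfies the task). Reply with the number only."
    )
    reply = ask_llm(prompt).strip()
    digits = [int(ch) for ch in reply if ch.isdigit()]
    return digits[0] if digits else 0  # fall back to 0 on a malformed reply
```

    Agreement with human preference or execution success would then be measured over a benchmark, for example with a rank correlation between these scores and the ground-truth labels.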

    Holographic Dark Energy Characterized by the Total Comoving Horizon and Insights to Cosmological Constant and Coincidence Problem

    The observed acceleration of the present universe is shown to be well explained by the holographic dark energy characterized by the total comoving horizon of the universe ($\eta$HDE). It is of interest to notice that the very large primordial part of the comoving horizon generated by the inflation of the early universe makes the $\eta$HDE behave like a cosmological constant. As a consequence, both the fine-tuning problem and the coincidence problem can reasonably be understood with the inflationary universe and the holographic principle. We present a systematic analysis and obtain a consistent cosmological constraint on the $\eta$HDE model based on recent cosmological observations. It is found that the $\eta$HDE model gives the best-fit result $\Omega_{m0}=0.270$ ($\Omega_{de0}=0.730$) and the minimal $\chi^2_{\rm min}=542.915$, which is compatible with $\chi^2_{\Lambda{\rm CDM}}=542.919$ for the $\Lambda$CDM model. Comment: 17 pages, 4 figures, two eqs. (26) and (27) added for the consistent approximate solution of dark energy in the early universe, references added, published version in PR
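
    As a reminder of the quantities involved (a sketch of the usual holographic ansatz, assuming the standard definitions; the paper's own equations are authoritative), the total comoving horizon and the corresponding dark energy density are

    $$ \eta = \int_0^t \frac{dt'}{a(t')} = \int_0^a \frac{da'}{H(a')\,a'^{2}}, \qquad \rho_{de} = \frac{3\,c^2 M_p^2}{\eta^2}, $$

    where $c$ is a dimensionless parameter and $M_p$ the reduced Planck mass. Because inflation makes $\eta$ enormous and nearly constant thereafter, $\rho_{de}$ stays nearly constant as well, which is the sense in which the $\eta$HDE mimics a cosmological constant.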

    A Topology-aware Graph Coarsening Framework for Continual Graph Learning

    Continual learning on graphs tackles the problem of training a graph neural network (GNN) where graph data arrive in a streaming fashion and the model tends to forget knowledge from previous tasks when updating with new data. Traditional continual learning strategies such as Experience Replay can be adapted to streaming graphs; however, these methods often face challenges such as inefficiency in preserving graph topology and inability to capture the correlation between old and new tasks. To address these challenges, we propose TACO, a (t)opology-(a)ware graph (co)arsening and (co)ntinual learning framework that stores information from previous tasks as a reduced graph. At each time period, this reduced graph expands by combining with a new graph and aligning shared nodes, and then it undergoes a "zoom out" process by reduction to maintain a stable size. We design a graph coarsening algorithm based on node representation proximities to efficiently reduce a graph while preserving topological information; a minimal sketch of this idea is given below. We empirically demonstrate that the learning process on the reduced graph can approximate that of the original graph. Our experiments validate the effectiveness of the proposed framework on three real-world datasets using different backbone GNN models.
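
    A minimal sketch of coarsening by node-representation proximity (illustrative only; the names, greedy strategy, and weighting are assumptions, and TACO's actual algorithm, node alignment, and replay strategy are specified in the paper):

```python
import numpy as np

def coarsen_by_proximity(adj: np.ndarray, emb: np.ndarray, ratio: float = 0.5):
    """Greedily merge endpoints of edges whose node embeddings are closest,
    until the number of groups drops to `ratio` of the original node count.
    Returns the coarsened (weighted) adjacency matrix and the node-to-group map."""
    n = adj.shape[0]
    target = max(1, int(n * ratio))
    group = np.arange(n)  # each node starts in its own group

    # Rank candidate merges (existing edges) by embedding distance between endpoints.
    edges = sorted(
        (np.linalg.norm(emb[i] - emb[j]), i, j)
        for i in range(n) for j in range(i + 1, n) if adj[i, j]
    )

    n_groups = n
    for _, i, j in edges:
        if n_groups <= target:
            break
        gi, gj = group[i], group[j]
        if gi != gj:
            group[group == gj] = gi  # merge group gj into group gi
            n_groups -= 1

    # Relabel groups to 0..k-1 and accumulate cross-group edge weights.
    _, group = np.unique(group, return_inverse=True)
    k = group.max() + 1
    coarse = np.zeros((k, k))
    for i in range(n):
        for j in range(n):
            if adj[i, j] and group[i] != group[j]:
                coarse[group[i], group[j]] += adj[i, j]
    return coarse, group
```

    In a continual setting, the reduced graph produced this way would be combined with the next incoming graph, shared nodes aligned, and the result coarsened again to keep the stored graph at a stable size, mirroring the expand-then-"zoom out" cycle described above.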

    Development of Terahertz Quantum Well Photodetector at 3 THz

    This thesis reports a new terahertz (3.22 THz) quantum well photodetector (THz QWP) as well as comprehensive numerical models simulating the device active region and different grating couplers, including a traditional diffraction metal grating and a novel patch antenna structure. Among all terahertz detectors, the terahertz quantum well photodetector (THz QWP) has demonstrated fast optical response and remarkable sensitivity, and it shows great potential for applications in security, bio-medical technology, and space communication. However, due to stringent requirements on experimental conditions and growth quality, THz QWPs absorbing at 3 THz or below have not been much developed yet. Furthermore, a THz QWP working around 3 THz can be combined with one of the strongest terahertz emitters, the terahertz quantum cascade laser (THz QCL), for ultrafast spectroscopy and imaging applications. Recently, THz QWPs integrated with different grating couplers have been explored and are now intensively investigated to improve device temperature performance. In that regard, we aim to improve the simulation model and develop a new THz QWP absorbing at 3 THz. Moreover, numerical COMSOL models are built in this work to analyze the optical properties of a traditional diffraction metal grating coupler and a novel patch antenna coupler. According to the measured current density-voltage (j-V) profiles and absorption spectrum, the proposed THz QWP with a new active region design has a background-limited infrared performance (BLIP) temperature of 10 K and achieves terahertz absorption at a lower frequency, around 3 THz. Results show a peak responsivity of 1.9 A/W, a peak detectivity of $4.63\times 10^{10}$ cm$\,$Hz$^{1/2}$/W, and an absorption range from 94.5 cm$^{-1}$ (2.83 THz) to 142.7 cm$^{-1}$ (4.25 THz). To our knowledge, this THz QWP has the lowest peak absorption frequency (3.22 THz) and is the second one that works near 3 THz. Measured j-V profiles show a size-dependent shift, indicating the existence of sidewall leakage currents, and the comparison between simulated and measured j-V profiles reveals good consistency, especially at higher temperatures. In addition, simulation results on the 1D metal grating coupler and the patch antenna coupler are demonstrated, and they agree well with experimental data from the literature as well as with general rules of thumb.
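
    For readers less used to spectroscopic units, the wavenumber-frequency conversion behind the figures above is simply $\nu = c\,\tilde{k}$; a small illustrative helper (not from the thesis):

```python
C_CM_PER_S = 2.99792458e10  # speed of light in cm/s

def wavenumber_to_thz(k_cm_inv: float) -> float:
    """Convert a wavenumber in cm^-1 to a frequency in THz (nu = c * k)."""
    return C_CM_PER_S * k_cm_inv / 1e12

print(wavenumber_to_thz(94.5))   # ~2.83 THz, the low edge of the reported absorption range
print(wavenumber_to_thz(107.4))  # ~3.22 THz, i.e. the reported peak absorption frequency
```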

    Timing and Congestion Driven Algorithms for FPGA Placement

    Placement is one of the most important steps in physical design for VLSI circuits. For field programmable gate arrays (FPGAs), the placement step determines the location of each logic block. I present novel timing- and congestion-driven placement algorithms for FPGAs with minimal runtime overhead. By predicting the post-routing timing-critical edges and estimating congestion accurately, the algorithm is able to simultaneously reduce the critical path delay and the minimum number of routing tracks. The core of the algorithm consists of a criticality-history record of connection edges and a congestion map. This approach is applied to the 20 largest Microelectronics Center of North Carolina (MCNC) benchmark circuits. Experimental results show that, compared with the state-of-the-art FPGA place and route package, the Versatile Place and Route (VPR) suite, this algorithm yields an average 8.1% reduction (maximum 30.5%) in the critical path delay and a 5% reduction in channel width. Meanwhile, the average runtime of the algorithm is only 2.3X that of VPR.
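
    A minimal sketch of how a timing- and congestion-driven placement cost can combine a criticality-history record with a congestion map (the weights, data structures, and names here are hypothetical; the thesis defines the actual formulation used with VPR):

```python
def placement_cost(edges, crit_history, congestion_map, alpha=0.5, beta=0.5):
    """Toy combined cost for one candidate placement.

    edges          : iterable of (src_block, dst_block, estimated_delay)
    crit_history   : dict mapping an edge (src, dst) to its accumulated timing
                     criticality; edges that were critical after routing in
                     earlier iterations keep a higher weight
    congestion_map : dict mapping a routing tile (x, y) to estimated usage
                     relative to capacity (values above 1 mean over-subscription)
    """
    timing_cost = sum(
        (1.0 + crit_history.get((src, dst), 0.0)) * delay
        for src, dst, delay in edges
    )
    congestion_cost = sum(max(0.0, usage - 1.0) for usage in congestion_map.values())
    return alpha * timing_cost + beta * congestion_cost
```

    An annealing-based placer such as VPR would typically evaluate changes to a cost of this kind incrementally when swapping blocks rather than recomputing it from scratch, which keeps the per-move overhead small.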