
    CP Violation from Flavor Symmetry in a Lepton Quarticity Dark Matter Model

    We propose a simple Δ(27) ⊗ Z_4 model where neutrinos are predicted to be Dirac fermions. The smallness of their masses follows from a type-I seesaw mechanism, and the leptonic CP-violating phase correlates with the pattern of Δ(27) flavor symmetry breaking. The scheme naturally harbors a WIMP dark matter candidate associated with the Dirac nature of neutrinos, in that the same Z_4 lepton number symmetry also ensures dark matter stability.
    Comment: 16 pages, 5 figures, dark matter direct detection constraints updated, conclusions unchanged, published version

    Generalized Bottom-Tau unification, neutrino oscillations and dark matter: predictions from a lepton quarticity flavor approach

    We propose an A_4 extension of the Standard Model with a Lepton Quarticity symmetry correlating dark matter stability with the Dirac nature of neutrinos. The flavor symmetry predicts that (i) there is a generalized bottom-tau mass relation involving all families, (ii) small neutrino masses are induced a la seesaw, (iii) CP must be significantly violated in neutrino oscillations, (iv) the atmospheric angle θ_23 lies in the second octant, and (v) only the normal neutrino mass ordering is realized.
    Comment: 13 pages, 3 figures

    Seesaw roadmap to neutrino mass and dark matter

    We describe the many pathways to generate Majorana and Dirac neutrino masses through generalized dimension-5 operators a la Weinberg. The presence of new scalars beyond the Standard Model Higgs doublet implies new possible field contractions, which are required in the case of Dirac neutrinos. We also note that, in the Dirac neutrino case, the extra symmetries needed to ensure the Dirac nature of neutrinos can also be made responsible for the stability of dark matter.
    Comment: 12 pages, 5 figures, published version
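    For orientation, the dimension-5 Weinberg-type operator referred to above can be written schematically as below. This is standard background rather than a result of the paper; c_5 and Λ are generic placeholders for the coefficient and the new-physics scale, and the SU(2) contractions (which the paper generalizes) are left implicit.

        % Schematic Weinberg operator and the induced Majorana mass scale
        % after electroweak symmetry breaking, with <H> = v.
        \begin{equation}
          \mathcal{O}_5 \;=\; \frac{c_5}{\Lambda}\,(L H)(L H) \;+\; \mathrm{h.c.}
          \qquad\Longrightarrow\qquad
          m_\nu \;\sim\; c_5\,\frac{v^2}{\Lambda}\,.
        \end{equation}

    Dirac masses require different field contractions and additional fields, which is precisely the enlarged set of possibilities the abstract refers to.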

    Online Learning in Neural Machine Translation

    High-quality translations are in high demand these days. Although machine translation offers acceptable performance, it is not sufficient in some cases and human supervision is required. In order to ease the translation task of the human, machine translation systems take part in this process. When a sentence in the source language needs to be translated, it is fed to the system, which outputs a hypothesis translation. The human then corrects this hypothesis (a step also known as post-editing) in order to obtain a high-quality translation. Being able to transfer the knowledge that a human translator exhibits when post-editing a translation to the machine translation system is a desirable feature, as it has been proven that a more accurate machine translation system helps to increase the efficiency of the post-editing process. Because the post-editing scenario requires an already trained system, online learning techniques are suited for this task. In this work, three online learning algorithms have been proposed and applied to a neural machine translation system in a post-editing scenario. They rely on the Passive-Aggressive online learning approach, in which the model is updated after every sample in order to fulfil a correctness criterion while remembering previously learned information. The goal is to adapt and refine an already trained system with new samples on the fly as the post-editing process takes place (hence, the update time must be kept under control). Moreover, these new algorithms are compared with well-established online learning variants of the stochastic gradient descent algorithm. Results show improvements in the translation quality of the system after applying these algorithms, reducing human effort in the post-editing process.
    Cebrián Chuliá, L. (2017). Aprendizaje en línea en traducción automática basada en redes neuronales (TFG). http://hdl.handle.net/10251/86299
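    To make the Passive-Aggressive idea concrete (update just enough on each new sample to satisfy a correctness criterion, while staying close to the current model), here is a minimal sketch of the classic PA-I update for a linear classifier. It only illustrates the underlying principle, not the neural MT formulation of the thesis; the feature vectors, labels and the aggressiveness bound C are assumptions of this toy example.

        import numpy as np

        def pa_update(w, x, y, C=1.0):
            """One Passive-Aggressive (PA-I) step for binary classification.

            w : current weight vector
            x : feature vector of the incoming sample
            y : label in {-1, +1}
            C : aggressiveness bound that caps the step size
            """
            loss = max(0.0, 1.0 - y * np.dot(w, x))   # hinge loss on this sample
            if loss == 0.0:
                return w                              # passive: sample already handled
            tau = min(C, loss / np.dot(x, x))         # smallest step that fixes it
            return w + tau * y * x                    # aggressive: correct it, stay close to w

        # Online loop: the model is refined sample by sample, as in post-editing.
        rng = np.random.default_rng(0)
        w = np.zeros(5)
        for _ in range(200):
            x = rng.normal(size=5)
            y = 1.0 if x[0] + 0.5 * x[1] > 0 else -1.0   # toy ground truth
            w = pa_update(w, x, y)

    The stochastic gradient descent variants used as baselines differ mainly in taking a fixed or scheduled step along the gradient rather than the smallest step that satisfies the per-sample constraint.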

    Learning Probabilistic Finite State Automata For Opponent Modelling

    Artificial Intelligence (AI) is the branch of Computer Science that tries to imbue intelligent behaviour in software systems. In the early years of the field, those systems were limited to big computing units where researchers built expert systems that exhibited some kind of intelligence. But with the advent of different kinds of networks, of which the most prominent is the Internet, the field became interested in Distributed Artificial Intelligence (DAI) as the natural next step. The field thus moved from monolithic software architectures for its AI systems to architectures in which several pieces of software were trying to solve a problem or had interests of their own. Those pieces of software were called Agents, and the architectures that allowed the interoperation of multiple agents were called Multi-Agent Systems (MAS). The agent acts as a metaphor for software systems that are embodied in a given environment and that behave or react intelligently to events in that environment. The AI mainstream was initially interested in systems that could be taught to behave depending on the inputs perceived. However, this rapidly proved ineffective because the human or the expert acted as the knowledge bottleneck for distilling useful and efficient rules. That was in the best cases; in the worst ones, the task of enumerating the rules was difficult or plainly not affordable. This sparked interest in another subfield, Machine Learning, and its counterpart in a MAS, Distributed Machine Learning: if you cannot code all the scenario combinations, code within the agent the rules that allow it to learn from the environment and the actions performed. With this framework in mind, applications are endless. Agents can be used to trade bonds or other financial derivatives without human intervention, or they can be embedded in robotics hardware and learn unseen map configurations in distant locations such as distant planets. Agents are not restricted to interactions with humans or the environment; they can also interact with other agents. For instance, agents can negotiate the quality of service of a channel before establishing a communication, or they can share information about the environment in a cooperative setting such as robot soccer.
    But some shortcomings emerge in a MAS architecture. The one related to this thesis is that partitioning the task at hand into agents usually entails that each agent has less memory or computing power: it is not economically feasible to replicate the big computing unit on each separate agent in the system. We should therefore think of our agents as computationally bounded, that is, as having a limited amount of computing power with which to learn from the environment. This has serious implications for the algorithms commonly used for learning in these settings. The classical approach for learning in a MAS is to use some variation of a Reinforcement Learning (RL) algorithm [BT96, SB98]. The main idea behind those algorithms is that the agent maintains a table with the perceived value of each action/state pair and, through multiple iterations, obtains a set of decision rules that allows it to take the best action for a given environment. This approach has several flaws when the current action depends on a single observation seen in the past (for instance, a warning sign that a robot perceives). Several techniques have been proposed to alleviate those shortcomings. For instance, to avoid the combinatorial explosion of states and actions, instead of storing a table with the value of the pairs, an approximating function such as a neural network can be used. And for events in the past, the state definition of the environment can be extended by creating dummy states that correspond to the N-tuple (state_N, state_{N-1}, ..., state_{N-t}).
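    As a concrete illustration of the tabular value-learning approach described above, here is a minimal Q-learning sketch in which the agent keeps a table of perceived values per state/action pair. The corridor environment, learning rate, discount factor and exploration rate are assumptions of this toy example, not the opponent-modelling setting of the thesis.

        import random
        from collections import defaultdict

        ACTIONS = ["left", "right"]
        ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1   # learning rate, discount, exploration rate

        Q = defaultdict(float)                   # perceived value of each (state, action) pair

        def choose_action(state):
            """Epsilon-greedy choice over the current value table."""
            if random.random() < EPSILON:
                return random.choice(ACTIONS)
            return max(ACTIONS, key=lambda a: Q[(state, a)])

        def q_update(state, action, reward, next_state):
            """One tabular Q-learning step: nudge the stored value toward the target."""
            best_next = max(Q[(next_state, a)] for a in ACTIONS)
            Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

        # Toy run on a hypothetical 4-cell corridor; reaching cell 3 yields reward 1.
        state = 0
        for _ in range(500):
            action = choose_action(state)
            next_state = min(3, state + 1) if action == "right" else max(0, state - 1)
            reward = 1.0 if next_state == 3 else 0.0
            q_update(state, action, reward, next_state)
            state = 0 if next_state == 3 else next_state

    A history-dependent state of the kind mentioned at the end of the abstract can be encoded simply as the tuple of the last t observations, at the cost of a much larger table, which is exactly the combinatorial growth that function approximation is meant to avoid.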

    A vindication of the partnership limited by shares in view of the draft Commercial Code bill

    The partnership limited by shares (sociedad comanditaria por acciones) could serve to ease the financing of SMEs and family businesses, among other functions, but it is underused in Spanish law, because the Ley de Sociedades de Capital regulates it following the Swiss-Italian model rather than the Franco-German one, which is the model that has given the best results. Further errors in its specific regulation are added to this one. Moreover, the Ley del Mercado de Valores does not regulate it as a listed company. The draft bill for a new Commercial Code (Anteproyecto de Ley de Código Mercantil) does not improve its regulation either.

    Dirac Neutrinos and Dark Matter Stability from Lepton Quarticity

    We propose to relate dark matter stability to the possible Dirac nature of neutrinos. The idea is illustrated in a simple scheme where small Dirac neutrino masses arise from a type-I seesaw mechanism as a result of a Z_4 discrete lepton number symmetry. The latter implies the existence of a viable WIMP dark matter candidate, whose stability arises from the same symmetry which ensures the Diracness of neutrinos.
    Comment: 12 pages, 6 figures, Report N IFIC/16-4
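    As background for the mechanism invoked here, the familiar type-I seesaw suppression can be written schematically as below. The symbols are generic placeholders (a Dirac-type mass m_D and a heavy mediator scale M); the Dirac realisation used in this class of models differs in the details of the fields and symmetries, so this is orientation rather than the paper's specific mass formula.

        % Schematic type-I seesaw: light neutrino masses suppressed by the heavy scale.
        \begin{equation}
          m_\nu \;\simeq\; -\, m_D \, M^{-1} \, m_D^{T},
          \qquad m_D \ll M .
        \end{equation}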

    CP Symmetries as Guiding Posts: revamping tri-bi-maximal Mixing. Part II

    In this follow-up of arXiv:1812.04663 we analyze the generalized CP symmetries of the charged lepton mass matrix compatible with the complex version of the Tri-Bi-Maximal (TBM) lepton mixing pattern. These symmetries are used to "revamp" the simplest TBM Ansatz in a systematic way. Our generalized patterns share some of the attractive features of the original TBM matrix and are consistent with current oscillation experiments. We also discuss their phenomenological implications both for upcoming neutrino oscillation and neutrinoless double beta decay experiments.
    Comment: 19 pages, 8 figures. Title changed to match the first part
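    For reference, the real Tri-Bi-Maximal pattern that serves as the starting point is the standard matrix below (up to sign conventions); the complex, "revamped" versions analyzed in the paper deviate from it, so this is background rather than one of the paper's results.

        % Standard Tri-Bi-Maximal lepton mixing matrix:
        % sin^2\theta_{12} = 1/3, sin^2\theta_{23} = 1/2, \theta_{13} = 0.
        \begin{equation}
          U_{\mathrm{TBM}} \;=\;
          \begin{pmatrix}
            \sqrt{2/3} & 1/\sqrt{3} & 0 \\
            -1/\sqrt{6} & 1/\sqrt{3} & -1/\sqrt{2} \\
            -1/\sqrt{6} & 1/\sqrt{3} & \phantom{-}1/\sqrt{2}
          \end{pmatrix}.
        \end{equation}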