
    Weak decays of 4He-Lambda

    We measured the lifetime and the mesonic and non-mesonic decay rates of the 4He-Lambda hypernucleus. The hypernuclei were produced by the reaction 4He(K-,pi-)4He-Lambda, using a 750 MeV/c momentum K- beam on a liquid 4He target. The 4He-Lambda lifetime was directly measured using protons from the Lambda p -> n p non-mesonic decay (also referred to as proton-stimulated decay) and found to be tau = 245 +/- 24 ps. The mesonic decay rates were determined from the observed numbers of pi-'s and pi0's to be Gamma_pi-/Gamma_tot = 0.270 +/- 0.024 and Gamma_pi0/Gamma_tot = 0.564 +/- 0.036, respectively, and the proton- and neutron-stimulated decay rates were extracted as Gamma_p/Gamma_tot = 0.169 +/- 0.019 and Gamma_n/Gamma_tot <= 0.032 (95% CL), respectively. The effects of final-state interactions and possible three-body Lambda N N decay contributions were studied in the context of a simple model of nucleon-stimulated decay. Nucleon-nucleon coincidence events were observed and used in determining the non-mesonic branching fractions. The implications of these results were considered for the empirical Delta I = 1/2 rule and for the decay rates of the 4H-Lambda hypernucleus.
    Comment: 15 pages, 11 figures, published in PRC, revised content to match published version
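As a consistency check, the quoted branching fractions can be summed numerically. The following sketch (plain Python, values taken directly from the abstract; the quadrature error combination assumes independent uncertainties, which the abstract does not state) verifies that the measured channels are compatible with unity once the neutron-stimulated upper limit is included.

```python
import math

# Measured branching fractions (value, 1-sigma uncertainty) from the abstract
gamma_pim = (0.270, 0.024)   # Gamma_pi- / Gamma_tot
gamma_pi0 = (0.564, 0.036)   # Gamma_pi0 / Gamma_tot
gamma_p   = (0.169, 0.019)   # Gamma_p   / Gamma_tot (proton-stimulated)
gamma_n_upper = 0.032        # 95% CL upper limit on Gamma_n / Gamma_tot

# Sum of the measured channels; uncertainty added in quadrature
# (an independence assumption, not stated in the abstract)
total = gamma_pim[0] + gamma_pi0[0] + gamma_p[0]
sigma = math.sqrt(gamma_pim[1]**2 + gamma_pi0[1]**2 + gamma_p[1]**2)

print(f"sum of measured channels = {total:.3f} +/- {sigma:.3f}")
print(f"remaining budget 1 - sum = {1 - total:.3f}; Gamma_n/Gamma_tot <= {gamma_n_upper}")
```

The three measured channels already sum to roughly one within their combined uncertainty, leaving little room for the neutron-stimulated channel, consistent with the small upper limit quoted.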

    Electrical power grid network optimisation by evolutionary computing

    A major factor in the consideration of an electrical power network on the scale of a national grid is the calculation of power flow and, in particular, optimal power flow. This paper considers such a network, in which distributed generation is used, and examines how the network can be optimised in terms of transmission line capacity, in order to obtain optimal or at least high-performing configurations, using multi-objective optimisation by evolutionary computing methods.
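The core idea of multi-objective evolutionary optimisation of line capacities can be sketched in a few lines. Everything below is hypothetical illustration (the toy network, demands, mutation scheme, and objectives are not from the paper): a simple mutate-and-archive loop keeps a Pareto front trading off total installed capacity (a build-cost proxy, minimised) against unserved demand (minimised).

```python
import random

random.seed(0)

# Hypothetical toy stand-in for the paper's setting: choose capacities for
# five transmission lines against fixed per-line peak flows.
N_LINES = 5
DEMAND = [3.0, 2.5, 4.0, 1.5, 3.5]  # hypothetical peak flows per line

def objectives(caps):
    cost = sum(caps)                                      # build-cost proxy
    shed = sum(max(d - c, 0.0) for c, d in zip(caps, DEMAND))  # unserved demand
    return cost, shed

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (both minimised)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# (1+1)-style evolutionary loop maintaining a non-dominated archive
archive = []
parent = [random.uniform(0.0, 5.0) for _ in range(N_LINES)]
for _ in range(2000):
    child = [max(0.0, c + random.gauss(0.0, 0.3)) for c in parent]
    f = objectives(child)
    if not any(dominates(objectives(a), f) for a in archive):
        archive = [a for a in archive if not dominates(f, objectives(a))]
        archive.append(child)
        parent = child

print(f"non-dominated configurations found: {len(archive)}")
```

A production study would use an established algorithm such as NSGA-II and a real power-flow solver in place of the shedding proxy; the archive-of-non-dominated-solutions pattern is the common core.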

    Machine and deep learning applications for improving the measurement of key indicators for financial institutions: stock market volatility and general insurance reserving risk

    This thesis aims to improve the estimation models for financial and actuarial risks through the use of state-of-the-art machine learning and deep learning techniques, so that risk models generate results that better support the decision-making processes of financial institutions. To this end, two objectives are set. First, to bring the most advanced mechanisms of machine and deep learning to the financial and actuarial field; the newest algorithms in this area are widely applied in robotics, autonomous driving and facial recognition, among others. Second, to exploit the high predictive power of the adapted algorithms in order to build more accurate risk models that can therefore generate results that better support the decision-making of financial institutions. Within the universe of financial risk models, this thesis focuses on equity risk models and claims reserving models, and it introduces two models of each kind. Regarding equity risk, the first model stacks algorithms such as neural networks, random forests and multiple additive regression trees with the aim of improving volatility estimation and thereby producing more accurate risk models. The second risk model adapts to the financial and actuarial world the Transformer, a type of neural network that, owing to its high accuracy, has displaced other algorithms in the field of natural language processing. In addition, an extension of this architecture, called Multi-Transformer, is proposed, whose aim is to improve the performance of the original algorithm by ensembling and randomising the attention mechanisms.
As for the two reserving models introduced by this thesis, the first seeks to improve reserve estimation and produce more accurate risk models by stacking machine learning algorithms with reserving models based on Bayesian statistics and Chain Ladder. The second reserving model seeks to improve the results of a widely used model, the Mack model, through the application of recurrent neural networks and residual connections.
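The stacking idea that both the volatility and the reserving models rely on can be illustrated with a deliberately tiny sketch. All data and "base models" below are hypothetical (the thesis stacks neural networks, random forests and boosted trees; here two crude linear predictors stand in), but the mechanism is the same: a meta-level learns how to combine base-model outputs so the blend beats each base model alone.

```python
import random

random.seed(2)

# Hypothetical regression data: y = 2x + 0.5 plus noise
xs = [random.uniform(0.0, 1.0) for _ in range(200)]
ys = [2.0 * x + 0.5 + random.gauss(0.0, 0.05) for x in xs]

base1 = lambda x: 2.2 * x          # base model 1: over-estimates the slope
base2 = lambda x: 1.6 * x + 0.9    # base model 2: wrong intercept

def mse(model):
    return sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

mse1, mse2 = mse(base1), mse(base2)

# Meta-level: grid-search a blending weight w for w*base1 + (1-w)*base2
def blended(w):
    return lambda x: w * base1(x) + (1.0 - w) * base2(x)

best_w = min((i / 100.0 for i in range(101)), key=lambda w: mse(blended(w)))
stacked_mse = mse(blended(best_w))

print(f"base MSEs: {mse1:.4f}, {mse2:.4f}; stacked: {stacked_mse:.4f} (w = {best_w})")
```

Since the grid includes w = 0 and w = 1, the stacked model can never be worse than the better base model on the fitting data; here the interior optimum combines the two complementary errors and does strictly better.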

    Classification hardness for supervised learners on 20 years of intrusion detection data

    This article consolidates analysis of an established intrusion detection dataset (NSL-KDD) and newer ones (ISCXIDS2012, CICIDS2017, CICIDS2018) through the use of supervised machine learning (ML) algorithms. The uniform analysis procedure makes the obtained results directly comparable, and it provides a stronger foundation for conclusions about the efficacy of supervised learners on the main classification task in network security. This research is motivated in part by the lack of adoption of these modern datasets. The work starts with a broad scope, classifying with algorithms from different families on both the established and the new datasets, to expand the existing foundation and reveal the most opportune avenues for further inquiry. After obtaining baseline results, the classification task was made more difficult by reducing the available training data both horizontally and vertically. This data reduction serves as a stress test to verify whether the very high baseline results hold up under increasingly harsh constraints. Ultimately, this work contains the most comprehensive set of results on intrusion detection through supervised machine learning. Researchers working on algorithmic improvements can compare their results to this collection, knowing that all results reported here were gathered through a uniform framework. The work's main contributions are the outstanding classification results on the current state-of-the-art datasets for intrusion detection, and the conclusion that these methods show remarkable resilience in classification performance even when the amount of training data is aggressively reduced.
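The two reduction axes the abstract describes (horizontal = fewer training rows, vertical = fewer features) are simple to express. The sketch below uses a hypothetical random dataset purely to illustrate the operations; the study itself applies them to NSL-KDD and the CIC datasets.

```python
import random

random.seed(1)

# Hypothetical stand-in for an IDS dataset: rows of numeric feature vectors.
n_rows, n_features = 1000, 8
data = [[random.random() for _ in range(n_features)] for _ in range(n_rows)]

def reduce_horizontally(rows, fraction):
    """Keep only a random fraction of the training rows (fewer examples)."""
    k = int(len(rows) * fraction)
    return random.sample(rows, k)

def reduce_vertically(rows, keep_cols):
    """Keep only the selected feature columns (fewer features)."""
    return [[r[c] for c in keep_cols] for r in rows]

small = reduce_horizontally(data, 0.10)        # 10% of the rows
narrow = reduce_vertically(small, [0, 2, 4])   # 3 of the 8 features

print(len(small), "rows x", len(narrow[0]), "features after reduction")
```

A stress test in the abstract's sense would sweep the fraction and the retained-column set downward, retraining the classifier at each step and recording where performance degrades.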

    Measuring the kinetic power of AGN in the radio mode

    (Abridged) We have studied the relationship among nuclear radio and X-ray power, Bondi rate and the kinetic luminosity of sub-Eddington active galactic nucleus (AGN) jets. Besides the recently discovered correlation between jet kinetic and Bondi power, we show that a clear correlation also exists between Eddington-scaled kinetic power and bolometric luminosity, given by: Log(L_kin/L_Edd) = 0.49*Log(L_bol/L_Edd) - 0.78. The measured slope suggests that these objects are in a radiatively inefficient accretion mode, and it has been used to put stringent constraints on the properties of the accretion flow. We found no statistically significant correlation between Bondi power and bolometric AGN luminosity, apart from that induced by their common dependence on L_kin. Analyzing the relation between kinetic power and radio core luminosity, we are then able to determine, statistically, both the probability distribution of the mean jet Lorentz factor, peaking at \Gamma~7, and the intrinsic relation between kinetic and radio core luminosity, which we estimate as: Log(L_kin) = 0.81*Log(L_R) + 11.9, in good agreement with theoretical predictions of synchrotron jet models. With the aid of these findings, quantitative assessments of kinetic feedback from supermassive black holes in the radio mode will be possible based on accurate determinations of the central engine properties alone. As an example, Sgr A* may follow the correlations of radio-mode AGN, based on its observed radiative output and on estimates of the accretion rate both at the Bondi radius and in the inner flow. If this is the case, the SMBH in the Galactic center is the source of ~ 5 times 10^38 ergs/s of mechanical power, equivalent to about 1.5 supernovae every 10^5 years.
    Comment: 13 pages, 6 figures. Accepted for publication in MNRAS
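The two quoted scaling relations are straightforward to apply in practice. The sketch below codes them directly from the abstract; the example input (a radio core at Log L_R = 40, luminosities in erg/s) is hypothetical and chosen only to show usage.

```python
def log_lkin_from_radio(log_lr):
    """Kinetic power from radio core luminosity (log10, erg/s), per the
    abstract's relation: Log(L_kin) = 0.81 * Log(L_R) + 11.9."""
    return 0.81 * log_lr + 11.9

def log_lkin_edd_from_bol(log_lbol_edd):
    """Eddington-scaled kinetic power from Eddington-scaled bolometric
    luminosity: Log(L_kin/L_Edd) = 0.49 * Log(L_bol/L_Edd) - 0.78."""
    return 0.49 * log_lbol_edd - 0.78

# Hypothetical example: a radio core with Log(L_R) = 40.0
log_lkin = log_lkin_from_radio(40.0)
print(f"Log(L_kin) = {log_lkin:.1f}")
```

This is the sense in which the abstract's closing claim works: with the correlations calibrated, an observed radio core luminosity alone yields a statistical estimate of the jet's kinetic feedback power.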