83 research outputs found

    GNA: new framework for statistical data analysis

    Full text link
    We report on the status of GNA, a new framework for fitting large-scale physical models. GNA utilizes the data flow concept, within which a model is represented by a directed acyclic graph. Each node is an operation on an array (matrix multiplication, derivative or cross section calculation, etc.). The framework enables the user to create flexible and efficient large-scale lazily evaluated models, handle large numbers of parameters, propagate parameter uncertainties while taking possible correlations between them into account, fit models, and perform statistical analysis. The main goal of the paper is to give an overview of the main concepts and methods, as well as the reasons behind their design. Detailed technical information is to be published in further works. Comment: 9 pages, 3 figures, CHEP 2018, submitted to EPJ Web of Conferences.
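    The dataflow idea above can be pictured with a minimal sketch: nodes wrap array operations, and results are computed only on demand and cached until an input is invalidated. This is an illustration of the concept only, not GNA's actual API; the class and function names are hypothetical.

        # Minimal lazily evaluated DAG of array operations (illustrative, not GNA's API).
        import numpy as np

        class Node:
            def __init__(self, op, *inputs):
                self.op = op            # function acting on the input arrays
                self.inputs = inputs    # upstream nodes
                self._cache = None      # result, computed lazily

            def invalidate(self):
                self._cache = None      # e.g. after a parameter changes

            def value(self):
                if self._cache is None:                  # evaluate only when needed
                    args = [n.value() for n in self.inputs]
                    self._cache = self.op(*args)
                return self._cache

        # A tiny graph: a matrix-vector product evaluated lazily.
        A  = Node(lambda: np.array([[1.0, 2.0], [3.0, 4.0]]))
        x  = Node(lambda: np.array([1.0, -1.0]))
        Ax = Node(lambda a, b: a @ b, A, x)
        print(Ax.value())   # [-1. -1.]

    A real framework would also propagate invalidation downstream automatically so that only affected nodes are recomputed; the sketch omits this for brevity.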

    Frequency diversity wideband digital receiver and signal processor for solid-state dual-polarimetric weather radars

    Get PDF
    The recent surge in the use of solid-state transmitters for weather radar systems has transformed research in radar meteorology. Solid-state transmitters operate at low peak power, and pulse compression waveforms recover the range resolution that would otherwise be lost. In this research, a novel frequency-diversity wideband waveform is proposed and realized to compensate for the low sensitivity of solid-state radars and to mitigate the blind-range problem associated with longer pulse compression waveforms. Recent developments in computing have made it possible to design wideband digital receivers that process this waveform on Field Programmable Gate Array (FPGA) chips. In signal-processing terms, wideband systems are characterized by a signal bandwidth comparable to the sampled bandwidth; that is, a band of frequencies must be selected and filtered from a comparable spectral window in which the signal may occur. Such a wideband digital receiver opens research opportunities for improved precipitation estimation with higher-frequency systems at X, Ku and Ka bands, satellite-borne radars, and other solid-state ground-based radars.

    This research describes the unique challenges of designing such a multi-channel wideband receiver. The receiver consists of twelve channels that simultaneously downconvert and filter the digitized intermediate-frequency (IF) signal for radar data processing. Product processing for the multi-channel receiver requires a software and network architecture that generates and archives a single meteorological product profile, culled from multi-pulse profiles, at an increased data rate. The receiver also continuously samples the transmit pulse to calibrate receiver gain and transmit power.

    The multi-channel digital receiver has been deployed as a key component of the recently developed National Aeronautics and Space Administration (NASA) Global Precipitation Measurement (GPM) Dual-Frequency Dual-Polarization Doppler Radar (D3R). The D3R is the principal ground validation instrument for the precipitation measurements of the Dual-frequency Precipitation Radar (DPR) onboard the GPM Core Observatory satellite, scheduled for launch in 2014. The D3R employs two widely separated frequencies at Ku- and Ka-band, which together measure precipitation types that demand higher sensitivity, such as light rain, drizzle and snow. This research describes the design space for configuring the digital receiver for the D3R at several processing levels, and it presents analysis and results obtained with the multi-carrier waveforms during the 2012 GPM Cold-Season Precipitation Experiment (GCPEx) campaign in Canada.
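    The core per-channel operation described above (downconvert the digitized IF signal, filter, and reduce the rate) can be sketched in a few lines. This is a generic digital downconversion example with assumed sample rates and filter parameters, not the D3R firmware, which implements twelve such channels on FPGAs.

        # Generic single-channel digital downconversion (DDC): mix the sampled IF
        # signal to baseband, low-pass filter it, and decimate. Illustrative only;
        # the sample rate, IF, cutoff and decimation factor are assumed values.
        import numpy as np
        from scipy.signal import firwin, lfilter

        fs    = 100e6    # ADC sample rate, Hz (assumed)
        f_if  = 25e6     # channel intermediate frequency, Hz (assumed)
        decim = 10       # decimation factor after filtering

        n = np.arange(200_000)
        if_signal = np.cos(2 * np.pi * f_if / fs * n)   # stand-in for the digitized IF input

        lo = np.exp(-2j * np.pi * f_if / fs * n)        # numerically controlled oscillator
        baseband = if_signal * lo                       # mix the channel down to 0 Hz

        taps = firwin(129, cutoff=2e6, fs=fs)           # low-pass FIR channel filter
        filtered = lfilter(taps, 1.0, baseband)         # suppress image and out-of-band energy
        iq = filtered[::decim]                          # decimate to the channel output rate
        print(iq.shape)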

    Effects of gluon kinematics and the Sudakov form factor on the dipole amplitude

    Full text link
    We investigate effects of exact gluon kinematics on the parameters of the Golec-Biernat-Wüsthoff and Bartels-Golec-Biernat-Kowalski saturation models. The resulting fits show some differences, particularly in the normalization of the dipole cross section σ0. The refitted models are used for the dijet production process in DIS to investigate effects of the Sudakov form factor at Electron Ion Collider energies. Comment: 19 pages, 8 figures; v2: references added.
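    For reference, the standard Golec-Biernat-Wüsthoff parameterization of the dipole cross section, whose normalization σ0 (together with x0 and λ) is the quantity refitted above, has the form

        % GBW dipole cross section; \sigma_0, x_0 and \lambda are the fit parameters.
        \sigma_{q\bar{q}}(x, r) = \sigma_0 \left[ 1 - \exp\!\left( -\frac{r^2 Q_s^2(x)}{4} \right) \right],
        \qquad
        Q_s^2(x) = Q_0^2 \left( \frac{x_0}{x} \right)^{\lambda}, \quad Q_0^2 = 1~\mathrm{GeV}^2,

    where r is the transverse dipole size and Qs(x) is the saturation scale.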

    Dispersity-Based Test Case Prioritization

    Get PDF
    Existing test case prioritization (TCP) techniques have limitations when applied to real-world projects, because they require certain information to be available before they can be used. For example, the family of input-based TCP techniques relies on test case values or test script strings; other techniques use test coverage, test history, program structure, or requirements information. Existing techniques also cannot guarantee to always be more effective than random prioritization (RP), which has no preconditions. As a result, RP remains the most applicable and most fundamental TCP technique. In this thesis, we propose a new TCP technique and mainly aim to study the effectiveness, actual execution time for failure detection, efficiency, and applicability of the new approach.
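    The general idea of input-based, distance-driven prioritization can be sketched with a greedy farthest-point ordering over test input strings. This is a generic illustration of a dispersity-style heuristic under an assumed string-distance metric, not necessarily the exact algorithm proposed in the thesis.

        # Greedy dispersity-style ordering: repeatedly pick the test whose input is
        # farthest (by normalized edit similarity) from the tests already chosen.
        # Illustrative sketch only; the distance metric and seeding are assumptions.
        from difflib import SequenceMatcher

        def distance(a: str, b: str) -> float:
            """1 minus the similarity ratio of two test input strings."""
            return 1.0 - SequenceMatcher(None, a, b).ratio()

        def prioritize(tests: list[str]) -> list[str]:
            remaining = list(tests)
            ordered = [remaining.pop(0)]    # seed with the first test (could be random)
            while remaining:
                # Farthest-point rule: maximize the minimum distance to chosen tests.
                best = max(remaining, key=lambda t: min(distance(t, s) for s in ordered))
                remaining.remove(best)
                ordered.append(best)
            return ordered

        print(prioritize(["login(admin)", "login(guest)", "transfer(10, acctA)", "logout()"]))

    The intuition is that spreading early executions over dissimilar inputs tends to expose distinct failures sooner than running many near-identical tests in a row.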

    Constraints on the Dense Matter Equation of State and Neutron Star Properties from NICER's Mass-Radius Estimate of PSR J0740+6620 and Multimessenger Observations

    Get PDF
    In recent years our understanding of the dense matter equation of state (EOS) of neutron stars has improved significantly through analyses of multimessenger data from radio/X-ray pulsars, gravitational wave events, and from nuclear physics constraints. Here we study the additional impact on the EOS of the jointly estimated mass and radius of PSR J0740+6620, presented in Riley et al. (2021) from an analysis of a combined dataset from the X-ray telescopes NICER and XMM-Newton. We employ two different high-density EOS parameterizations: a piecewise-polytropic (PP) model and a model based on the speed of sound in a neutron star (CS). At nuclear densities these are connected to microscopic calculations of neutron matter based on chiral effective field theory interactions. In addition to the new NICER data for this heavy neutron star, we separately study constraints from the radio timing mass measurement of PSR J0740+6620, the gravitational wave events of the binary neutron stars GW190425 and GW170817, and, for the latter, the associated kilonova AT2017gfo. By combining all of these with the NICER mass-radius estimate of PSR J0030+0451, we find the radius of a 1.4 solar mass neutron star to be constrained to the 95% credible ranges 12.33 +0.76/-0.81 km (PP model) and 12.18 +0.56/-0.79 km (CS model). In addition, we explore different chiral effective field theory calculations and show that the new NICER results provide tight constraints on the pressure of neutron star matter at around twice saturation density, which demonstrates the power of these observations to constrain dense matter interactions at intermediate densities.
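    For context, the piecewise-polytropic form referred to above is the standard parameterization in which the pressure follows a polytrope within each density segment, matched continuously at the dividing densities; the adiabatic indices are the fit parameters (the specific segment boundaries and priors used in the paper are not reproduced here).

        % Generic piecewise-polytropic EOS parameterization: within each segment the
        % pressure is a polytrope, with K_{i+1} fixed by continuity of P at \rho_i.
        P(\rho) = K_i \, \rho^{\Gamma_i}, \qquad \rho_{i-1} \le \rho < \rho_i,
        \qquad
        K_{i+1} = \frac{P(\rho_i)}{\rho_i^{\Gamma_{i+1}}}.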

    Deep R Programming

    Full text link
    Deep R Programming is a comprehensive course on one of the most popular languages in data science (statistical computing, graphics, machine learning, data wrangling and analytics). It introduces the base language in depth and is aimed at ambitious students, practitioners, and researchers who would like to become independent users of this powerful environment. This textbook is a non-profit project. Its online and PDF versions are freely available at . This early draft is distributed in the hope that it will be useful. Comment: Draft v0.2.1 (2023-04-27).

    High-precision computation of uniform asymptotic expansions for special functions

    Get PDF
    In this dissertation, we investigate new methods to obtain uniform asymptotic expansions for the numerical evaluation of special functions to high precision. We first present the fundamental theoretical and computational aspects required for the development and, ultimately, the implementation of such methods. Applying some of these methods, we obtain efficient new convergent and uniform expansions for numerically evaluating the confluent hypergeometric functions and the Lerch transcendent at high precision. In addition, we investigate a new scheme of computation for the generalized exponential integral, obtaining one of the fastest and most robust implementations in double-precision floating-point arithmetic. In this work, we aim to combine new developments in asymptotic analysis with fast and effective open-source implementations. These implementations are comparable to, and often faster than, current open-source and commercial state-of-the-art software for the evaluation of special functions.
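    As a point of reference for the functions named above, an arbitrary-precision library such as mpmath can evaluate them directly; the snippet below is only a usage illustration with the stock library, not the implementations developed in the dissertation.

        # Evaluate the special functions mentioned above to 50 significant digits
        # using mpmath (a stand-in; the dissertation provides its own routines).
        import mpmath as mp

        mp.mp.dps = 50                       # working precision: 50 decimal digits

        print(mp.hyp1f1(2.5, 1.5, 100))      # confluent hypergeometric 1F1(a; b; z)
        print(mp.lerchphi(0.5, 2, 3))        # Lerch transcendent Phi(z, s, a)
        print(mp.expint(4, 10))              # generalized exponential integral E_n(x)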
