How to use the Standard Model effective field theory
We present a practical three-step procedure of using the Standard Model
effective field theory (SM EFT) to connect ultraviolet (UV) models of new
physics with weak scale precision observables. With this procedure, one can
interpret precision measurements as constraints on a given UV model. We give a
detailed explanation for calculating the effective action up to one-loop order
in a manifestly gauge covariant fashion. This covariant derivative expansion
method dramatically simplifies the process of matching a UV model with the SM
EFT, and also makes available a universal formalism that is easy to use for a
variety of UV models. A few general aspects of RG running effects and choosing
operator bases are discussed. Finally, we provide mapping results between the
bosonic sector of the SM EFT and a complete set of precision electroweak and
Higgs observables to which present and near future experiments are sensitive.
Many results and tools which should prove useful to those wishing to use the SM
EFT are detailed in several appendices.

Comment: 99 pages, 11 figures. V2: Typos corrected, references added. Fixed a
link to the Mathematica notebook for download. Substantial text changes for
clarification with no change in results. In particular, sections 2.5, 3, and
5 received clarifying edits. Additionally, results from part of appendix A
have been separated out to a new appendix.
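The covariant derivative expansion evaluates the one-loop effective action while keeping gauge covariance manifest at every step. As a hedged sketch of the standard starting point (the notation here is generic rather than the paper's, and signs and normalization depend on conventions), integrating out a heavy field of mass m with a quadratic fluctuation operator gives a functional trace of the form

```latex
\Delta S_{\text{eff}} \;=\; i\, c_s \,\mathrm{Tr}\,\log\!\left(-D^2 + m^2 + U(x)\right),
```

where $D_\mu$ is the gauge covariant derivative, $U(x)$ collects the light-field-dependent mass terms, and $c_s$ is a spin-dependent constant (e.g. $1/2$ for a real scalar). Expanding the logarithm in powers of $D_\mu$ and $U$ then produces the tower of effective operators, with every term gauge covariant by construction.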
Firm size related to export performance
Purpose: This paper provides an overview of exporting firms, focusing on the special case of SMEs in the Republic of Kosovo, based on the relevant academic literature. Empirical evidence reveals that for most SMEs, export activity is positively related to determinants such as the number of employees. To verify whether exporting is the first step in the process of internationalization, the findings of this study are linked with the related literature on exporting. This also offers a deeper understanding of the relation between the variables used in the study and export performance.

Design/Methodology/Approach: The paper used quantitative data and face-to-face interviews with respondents. Descriptive statistics were calculated to give an overview of the distribution, mean, and standard deviation of the dataset. Internal consistency and reliability of the Likert-scale items were assessed with Cronbach's alpha coefficient.

Findings: Firm size was shown to be highly statistically significant and positively related to export activity, indicating the importance of economies of scale in the probability of being engaged in exporting. The results obtained from the survey of SMEs in Kosovo show that the dependency of managers' education and training corresponded with the results obtained when testing the dependency of managers' age and international experience.

Practical Implications: Development policy should target the added value and growth of the competitive competencies of SMEs in both the domestic and external markets, and should take steps to facilitate export promotion activities. The agency for supporting SMEs should coordinate activities to improve conditions for exporting enterprises by providing access to public infrastructure.

Originality/Value: The paper closes with specific recommendations for the management of SMEs and for government institutions to improve export performance.
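The internal-consistency check mentioned in the methodology can be illustrated with a short computation. The formula below is the standard Cronbach's alpha; the score matrix is purely hypothetical and is not the paper's data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k / (k - 1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of respondents' totals
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses: rows are respondents, columns are items.
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
])
alpha = float(cronbach_alpha(scores))
```

Values of alpha above roughly 0.7 are conventionally read as acceptable internal consistency for a scale.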
A Standardised Procedure for Evaluating Creative Systems: Computational Creativity Evaluation Based on What it is to be Creative
Computational creativity is a flourishing research area, with a variety of creative systems being produced and developed. Creativity evaluation has not kept pace with system development, and there is an evident lack of systematic evaluation of the creativity of these systems in the literature. This is partially due to difficulties in defining what it means for a computer to be creative; indeed, there is no consensus on this for human creativity, let alone its computational equivalent. This paper proposes a Standardised Procedure for Evaluating Creative Systems (SPECS). SPECS is a three-step process: stating what it means for a particular computational system to be creative, deriving tests based on these statements, and performing those tests. To assist this process, the paper offers a collection of key components of creativity, identified empirically from discussions of human and computational creativity. Using this approach, the SPECS methodology is demonstrated through a comparative case study evaluating computational creativity systems that improvise music.
Teaching telecommunication standards: bridging the gap between theory and practice
©2017 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

Telecommunication standards have become a reliable mechanism to strengthen collaboration between industry and research institutions to accelerate the evolution of communications systems. Standards are needed to enable cooperation while promoting competition. Within the framework of a standard, the companies involved in the standardization process contribute and agree on appropriate technical specifications to ensure diversity and compatibility, and to facilitate worldwide commercial deployment and evolution. Those parts of the system that can create competitive advantages are intentionally left open in the specifications. Such specifications are extensive, complex, and minimalistic. This makes telecommunication standards education a difficult endeavor, but one much demanded by industry and governments to spur economic growth. This article describes a methodology for teaching wireless communications standards. We define our methodology around six learning stages that assimilate the standardization process, and we identify key learning objectives for each. Enabled by software-defined radio technology, we describe a practical learning environment that facilitates developing many of the needed technical and soft skills without the inherent difficulty and cost associated with radio frequency components and regulation. Using only open source software and commercial off-the-shelf computers, this environment is portable and can easily be recreated at other educational institutions and adapted to their educational needs and constraints. We discuss our and our students' experiences when applying the proposed methodology to 4G LTE standard education at Barcelona Tech.
Dynamic Bayesian Predictive Synthesis in Time Series Forecasting
We discuss model and forecast combination in time series forecasting. A
foundational Bayesian perspective based on agent opinion analysis theory
defines a new framework for density forecast combination, and encompasses
several existing forecast pooling methods. We develop a novel class of dynamic
latent factor models for time series forecast synthesis; simulation-based
computation enables implementation. These models can dynamically adapt to
time-varying biases, miscalibration and inter-dependencies among multiple
models or forecasters. A macroeconomic forecasting study highlights the dynamic
relationships among synthesized forecast densities, as well as the potential
for improved forecast accuracy at multiple horizons.
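One of the simplest existing pooling methods that such a combination framework encompasses is the linear opinion pool. A minimal sketch follows; the forecast means, standard deviations, and weights are invented for illustration, and this is not the paper's dynamic latent factor model:

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of a normal distribution with mean mu and standard deviation sigma."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

# Two hypothetical forecasters' predictive densities for the same quantity,
# each summarized as a normal (mean, standard deviation) pair.
forecasts = [(1.0, 0.5), (1.4, 0.8)]
weights = [0.6, 0.4]   # pool weights; must sum to one

def linear_pool(x):
    """Linear opinion pool: the combined forecast density is the
    weight-mixed sum of the individual forecast densities."""
    return sum(w * normal_pdf(x, m, s) for w, (m, s) in zip(weights, forecasts))
```

Because the weights sum to one, the pooled function is itself a proper density; dynamic schemes like the one in the abstract can be seen as letting such weights (and biases) evolve over time.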
Computational aspects of Bayesian spectral density estimation
Gaussian time-series models are often specified through their spectral
density. Such models present several computational challenges, in particular
because of the non-sparse nature of the covariance matrix. We derive a fast
approximation of the likelihood for such models. We propose to sample from the
approximate posterior (that is, the prior times the approximate likelihood),
and then to recover the exact posterior through importance sampling. We show
that the variance of the importance sampling weights vanishes as the sample
size goes to infinity. We explain why the approximate posterior may typically
be multi-modal, and we derive a Sequential Monte Carlo sampler based on an
annealing sequence in order to sample from that target distribution.
Performance of the overall approach is evaluated on simulated and real
datasets. In addition, for one real world dataset, we provide some numerical
evidence that a Bayesian approach to semi-parametric estimation of spectral
density may provide more reasonable results than its frequentist counterparts.
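The sample-then-reweight recipe (sample from the approximate posterior, then recover the exact posterior by importance sampling) can be sketched on a toy one-dimensional problem. The two log-densities below are stand-ins chosen for simplicity, not the spectral-density likelihoods of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the two posteriors (NOT the paper's likelihoods):
# the approximate posterior is a standard normal, and the "exact"
# posterior is proportional to exp(-x^4 / 4).
def log_approx(x):
    return -0.5 * x ** 2

def log_exact(x):
    return -0.25 * x ** 4

# Step 1: draw from the approximate posterior.
xs = rng.standard_normal(100_000)

# Step 2: self-normalized importance weights from the log-density ratio
# (subtracting the max before exponentiating avoids overflow).
logw = log_exact(xs) - log_approx(xs)
w = np.exp(logw - logw.max())
w /= w.sum()

# Step 3: any exact-posterior expectation becomes a weighted average, e.g. E[x^2].
second_moment = float(np.sum(w * xs ** 2))

# A vanishing weight variance shows up as an effective sample size close to n.
ess = 1.0 / float(np.sum(w ** 2))
```

When the approximation is good, as here, the weights are nearly uniform and the effective sample size stays close to the nominal sample size; this is the practical face of the vanishing-variance result in the abstract.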
Dynamic dependence networks: Financial time series forecasting and portfolio decisions (with discussion)
We discuss Bayesian forecasting of increasingly high-dimensional time series,
a key area of application of stochastic dynamic models in the financial
industry and allied areas of business. Novel state-space models characterizing
sparse patterns of dependence among multiple time series extend existing
multivariate volatility models to enable scaling to higher numbers of
individual time series. The theory of these "dynamic dependence network" models
shows how the individual series can be "decoupled" for sequential analysis, and
then "recoupled" for applied forecasting and decision analysis. Decoupling
allows fast, efficient analysis of each of the series in individual univariate
models that are linked, for later recoupling, through a theoretical
multivariate volatility structure defined by a sparse underlying graphical
model. Computational advances are especially significant in connection with
model uncertainty about the sparsity patterns among series that define this
graphical model; Bayesian model averaging using discounting of historical
information builds substantially on this computational advance. An extensive,
detailed case study showcases the use of these models, and the improvements in
forecasting and financial portfolio investment decisions that are achievable.
Using a long series of daily international currency, stock indices and
commodity prices, the case study includes evaluations of multi-day forecasts
and Bayesian portfolio analysis with a variety of practical utility functions,
as well as comparisons against commodity trading advisor benchmarks.

Comment: 31 pages, 9 figures, 3 tables.
Quantum Monte Carlo for large chemical systems: Implementing efficient strategies for petascale platforms and beyond
Various strategies for efficiently implementing QMC simulations for large
chemical systems are presented. These include: i.) the introduction of an
efficient algorithm to calculate the computationally expensive Slater matrices.
This novel scheme is based on the use of the highly localized character of
atomic Gaussian basis functions (not the molecular orbitals as usually done),
ii.) the possibility of keeping the memory footprint minimal, iii.) the
important enhancement of single-core performance when efficient optimization
tools are employed, and iv.) the definition of a universal, dynamic,
fault-tolerant, and load-balanced computational framework adapted to all kinds
of computational platforms (massively parallel machines, clusters, or
distributed grids). These strategies have been implemented in the QMC=Chem code
developed at Toulouse and illustrated with numerical applications on small
peptides of increasing sizes (158, 434, 1056 and 1731 electrons). Using 10k-80k
computing cores of the Curie machine (GENCI-TGCC-CEA, France) QMC=Chem has been
shown to be capable of running at the petascale level, thus demonstrating that
for this machine a large part of the peak performance can be achieved.
Implementation of large-scale QMC simulations for future exascale platforms
with a comparable level of efficiency is expected to be feasible.
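The locality argument behind point i.) can be illustrated with a toy sketch: when the basis functions are sharply localized atomic Gaussians, most electron-center pairs contribute negligibly and can simply be skipped. Everything below (the one-dimensional geometry, the exponent, the cutoff) is invented for illustration and is not QMC=Chem's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D illustration: atomic Gaussian basis functions sit on a chain of
# atoms, and an electron only "sees" atoms within a distance cutoff.
centers = np.linspace(0.0, 20.0, 21)   # hypothetical atomic positions
alpha = 2.0                            # hypothetical Gaussian exponent
cutoff = 4.0                           # exp(-alpha * d^2) is negligible beyond this

electrons = rng.uniform(0.0, 20.0, size=21)

def slater_matrix(electrons, centers):
    """Fill entry (i, j) = basis function j evaluated at electron i, skipping
    electron-center pairs farther apart than the cutoff: those entries stay
    exactly zero, so most of the matrix is never evaluated."""
    n = len(electrons)
    S = np.zeros((n, n))
    for i, r in enumerate(electrons):
        for j, c in enumerate(centers):
            d = abs(r - c)
            if d < cutoff:
                S[i, j] = np.exp(-alpha * d * d)
    return S

S = slater_matrix(electrons, centers)
sparsity = float(np.mean(S == 0.0))   # fraction of entries skipped
```

In this toy setting well over half the matrix is never touched, and the skipped fraction grows with system size, which is the essence of working with localized atomic Gaussians rather than delocalized molecular orbitals.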