    Vertex reconstruction framework and its implementation for CMS

    The class framework developed for vertex reconstruction in CMS is described. We emphasize how we proceed to develop a flexible, efficient and reliable piece of reconstruction software. We describe the decomposition of the algorithms into logical parts, the mathematical toolkit, and the way vertex reconstruction integrates into the CMS reconstruction project ORCA. We discuss the tools that we have developed for algorithm evaluation and optimization and for code release. Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics (CHEP03), La Jolla, CA, USA, March 2003, 4 pages, LaTeX, no figures. PSN TULT01

    The global covariance matrix of tracks fitted with a Kalman filter and an application in detector alignment

    We present an expression for the covariance matrix of the set of state vectors describing a track fitted with a Kalman filter. We demonstrate that this expression facilitates the use of a Kalman filter track model in a minimum $\chi^2$ algorithm for the alignment of tracking detectors. We also show that it allows vertex constraints to be incorporated into such a procedure without refitting the tracks. Comment: 17 pages, 4 figures
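    The idea of a global covariance over all fitted states can be illustrated on a toy scalar model. This sketch is not the paper's track-fit formulae: it simply uses the fact that, for a linear-Gaussian state-space model, the joint posterior precision of all states is tridiagonal, so inverting it yields the full cross-covariance between states at different steps, which a forward Kalman filter alone does not report. All parameter names here are illustrative.

    ```python
    import numpy as np

    def global_covariance(n, f=1.0, h=1.0, q=0.1, r=0.5, p0=1.0):
        """Joint posterior covariance of all n states of a scalar
        linear-Gaussian model x_{k+1} = f*x_k + w, y_k = h*x_k + v,
        with Var(w)=q, Var(v)=r and prior Var(x_0)=p0 (toy sketch)."""
        J = np.zeros((n, n))                  # posterior precision (tridiagonal)
        for k in range(n):
            J[k, k] += h**2 / r               # measurement term at step k
            if k == 0:
                J[k, k] += 1.0 / p0           # prior on the first state
            if k < n - 1:
                J[k, k] += f**2 / q           # transition term, "from" state
                J[k + 1, k + 1] += 1.0 / q    # transition term, "to" state
                J[k, k + 1] -= f / q
                J[k + 1, k] -= f / q
        return np.linalg.inv(J)               # dense global covariance

    C = global_covariance(5)
    # C[i, j] is Cov(x_i, x_j); the off-diagonal entries are the
    # cross-step covariances the abstract is concerned with.
    ```

    The inverse of a tridiagonal precision is fully dense, which is exactly why cross-step covariances are nontrivial even when the dynamics are Markovian.
    
    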

    Forecasting inflation using dynamic model averaging

    We forecast quarterly US inflation based on the generalized Phillips curve using econometric methods which incorporate dynamic model averaging. These methods not only allow for coefficients to change over time, but also allow for the entire forecasting model to change over time. We find that dynamic model averaging leads to substantial forecasting improvements over simple benchmark regressions and more sophisticated approaches such as those using time-varying coefficient models. We also provide evidence on which sets of predictors are relevant for forecasting in each period.
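    The core recursion in dynamic model averaging is the model-probability update: prior weights are flattened with a forgetting factor and then reweighted by each model's one-step predictive density. A minimal sketch, assuming the predictive densities have already been computed elsewhere (the function name and inputs are illustrative, not the paper's notation):

    ```python
    import numpy as np

    def dma_weights(pred_likelihoods, alpha=0.99):
        """Recursive model-probability update for dynamic model averaging.
        pred_likelihoods[t, k] is model k's one-step predictive density
        of observation t (assumed precomputed); alpha is the forgetting
        factor that lets the model set itself drift over time."""
        T, K = pred_likelihoods.shape
        w = np.full(K, 1.0 / K)               # start from equal weights
        history = np.empty((T, K))
        for t in range(T):
            w = w ** alpha                    # forgetting: flatten old weights
            w /= w.sum()
            w = w * pred_likelihoods[t]       # Bayes update with the new point
            w /= w.sum()
            history[t] = w
        return history

    # toy example: model 0 consistently predicts better than model 1
    pl = np.column_stack([np.full(100, 1.2), np.full(100, 1.0)])
    W = dma_weights(pl)
    ```

    With alpha close to 1 the weights respond slowly, which is what allows the "entire forecasting model" to change gradually rather than switch abruptly.
    
    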

    Bayesian Analysis of Switching ARCH Models

    We consider a time series model with autoregressive conditional heteroskedasticity that is subject to changes in regime. The regimes evolve according to a multistate latent Markov switching process with unknown transition probabilities, and it is the constant in the variance process of the innovations that is subject to regime shifts. The joint estimation of the latent process and all model parameters is performed within a Bayesian framework using Markov chain Monte Carlo simulation. One iteration of the sampler involves first a multi-move step to simulate the latent process from its conditional distribution. The Gibbs sampler can then be used to simulate the parameters, in particular the transition probabilities, for which the full conditional posterior distribution is known. For most parameters, however, the full conditionals do not belong to any well-known family of distributions. The simulations are then based on the Metropolis-Hastings algorithm with carefully chosen proposal densities. We perform model selection with respect to the number of states and the number of autoregressive parameters in the variance process using Bayes factors and model likelihoods. To this end, the model likelihood is estimated by combining the candidate's formula with importance sampling. The usefulness of the sampler is demonstrated by applying it to the dataset previously used by Hamilton and Susmel, who investigated models with switching autoregressive conditional heteroskedasticity using maximum likelihood methods. The paper concludes with some issues related to maximum likelihood methods, to classical model selection, and to potential straightforward extensions of the model presented here.
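    For the parameters whose full conditionals are not of standard form, the abstract relies on random-walk Metropolis-Hastings moves. A generic, self-contained sketch of one such update, applied to a toy target (the target and tuning constants here are illustrative, not the paper's model):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def mh_step(theta, log_target, scale=0.5):
        """One random-walk Metropolis-Hastings update: propose from a
        symmetric Gaussian kernel, accept with probability
        min(1, target(prop)/target(theta))."""
        prop = theta + scale * rng.standard_normal()
        log_ratio = log_target(prop) - log_target(theta)  # symmetric proposal
        if np.log(rng.uniform()) < log_ratio:
            return prop, True                 # move accepted
        return theta, False                   # move rejected, stay put

    # toy target: log-density of N(2, 0.5^2), up to an additive constant
    log_target = lambda x: -0.5 * ((x - 2.0) / 0.5) ** 2

    theta, draws = 0.0, []
    for _ in range(5000):
        theta, _ = mh_step(theta, log_target)
        draws.append(theta)
    ```

    In a full sampler like the one described, a step of this kind is embedded inside each Gibbs sweep, with the proposal scale tuned per parameter to keep acceptance rates reasonable.
    
    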

    Discovering Business Models of Data Marketplaces

    The modern economy relies heavily on data as a resource for advancement and growth. Data marketplaces have attracted increasing attention since they provide possibilities to exchange, trade and access data across organizations. Due to the rapid development of the field, research on the business models of data marketplaces is fragmented. We address this issue in this article by identifying the dimensions and characteristics of data marketplaces from a business model perspective. Following a rigorous process for taxonomy building, we propose a business model taxonomy for data marketplaces. Using evidence collected from a final sample of twenty data marketplaces, we analyze the frequency of specific characteristics of data marketplaces. In addition, we identify four data marketplace business model archetypes. The findings reveal the impact of the structure of data marketplaces as well as the relevance of anonymity and encryption for the identified data marketplace archetypes.

    The Data Product Canvas - A Visual Collaborative Tool for Designing Data-Driven Business Models

    The availability of data sources and advances in analytics and artificial intelligence offer organizations the opportunity to develop new data-driven products, services and business models. However, this process is challenging for traditional organizations, as it requires knowledge and collaboration across several disciplines, such as data science, domain expertise and business strategy. Furthermore, it is challenging to craft a meaningful value proposition based on data, and existing research provides little guidance. To overcome these challenges, we conducted a Design Science Research project to derive requirements from the literature and a case study, develop a collaborative visual tool and evaluate it through several workshops with traditional organizations. This paper presents the Data Product Canvas, a tool connecting data sources with user challenges and wishes through several intermediate steps. Thus, this paper contributes to the scientific body of knowledge on developing data-driven business models, products and services.

    Semiparametric Bayesian inference in smooth coefficient models

    We describe procedures for Bayesian estimation and testing in cross-sectional, panel data and nonlinear smooth coefficient models. The smooth coefficient model is a generalization of the partially linear or additive model wherein coefficients on linear explanatory variables are treated as unknown functions of an observable covariate. In the approach we describe, points on the regression lines are regarded as unknown parameters and priors are placed on differences between adjacent points to introduce the potential for smoothing the curves. The algorithms we describe are quite simple to implement - for example, estimation, testing and smoothing parameter selection can be carried out analytically in the cross-sectional smooth coefficient model. We apply our methods using data from the National Longitudinal Survey of Youth (NLSY). Using the NLSY data we first explore the relationship between ability and log wages and flexibly model how returns to schooling vary with measured cognitive ability. We also examine a model of female labor supply and use this example to illustrate how the described techniques can be applied in nonlinear settings.
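    The key device in the abstract, a prior on differences between adjacent points of the regression curve, has a simple closed-form analogue: with a Gaussian penalty on first differences, the posterior mean of the function values solves a single linear system. A toy sketch under that assumption (the function name and smoothing constant are illustrative, not the paper's notation):

    ```python
    import numpy as np

    def smooth_fit(y, lam=10.0):
        """Posterior mean of function values g at observation points when
        adjacent differences g[i+1]-g[i] receive a Gaussian prior: this
        minimizes ||y - g||^2 + lam * ||D g||^2, i.e. a Bayesian ridge
        on first differences (toy analogue of the smoothing prior)."""
        n = len(y)
        D = np.diff(np.eye(n), axis=0)        # (n-1) x n first-difference matrix
        g = np.linalg.solve(np.eye(n) + lam * D.T @ D, y)
        return g

    # noisy samples of a smooth curve
    x = np.linspace(0, 1, 50)
    y = np.sin(2 * np.pi * x) + 0.3 * np.random.default_rng(1).standard_normal(50)
    g = smooth_fit(y, lam=20.0)
    ```

    Larger `lam` corresponds to a tighter prior on the differences and hence a smoother fitted curve; choosing it is the "smoothing parameter selection" step the abstract says can be handled analytically in the cross-sectional case.
    
    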

    Supporting Data-Driven Business Model Innovations: A structured literature review on tools and methods

    Purpose: This paper synthesizes existing research on tools and methods that support data-driven business model innovation, and maps out relevant directions for future research. Design/methodology/approach: We carried out a structured literature review and collected and analysed 33 publications, a modest sample that reflects the comparatively emergent nature of the field. Findings: Current literature on supporting data-driven business model innovation differs in the types of contribution (taxonomies, patterns, visual tools, methods, IT tools and processes), the types of thinking supported (divergent and convergent) and the elements of the business model that are addressed by the research (value creation, value capturing and value proposition). Research limitations/implications: Our review highlights the following as relevant directions for future research. Firstly, most research focuses on supporting divergent thinking, i.e. ideation; however, convergent thinking, i.e. evaluating, prioritizing and deciding, is also necessary. Secondly, the complete procedure of developing data-driven business models, and the development of chains of tools supporting it, has been under-investigated. Thirdly, scarcely any IT tools specifically support the development of data-driven business models. These avenues also highlight the need to integrate research on the specifics of data in business model innovation with research on innovation management, information systems and business analytics. Originality/value: This paper is the first to synthesize the literature on how to identify and develop data-driven business models, and to map out (interdisciplinary) research directions for the community. Keywords: business model innovation, data-driven business models, research agenda. Article classification: Literature review

    Radiolabelling Pt-based quadruplex DNA binders via click chemistry.

    Guanine-rich sequences of DNA and RNA can fold into intramolecular tetra-helical assemblies known as G-quadruplexes (G4). Their formation in vivo has been associated with a range of biological functions, and therefore they have been identified as potential drug targets. Consequently, a broad range of small molecules have been developed to target G4s. Amongst those are metal complexes with Schiff base ligands. Herein, we report the functionalisation of one of these well-established G4 DNA binders (based on a square planar platinum(II)-salphen complex) with two different radiolabelled complexes. An 111In-conjugate was successfully used to assess its in vivo distribution in a mouse tumour model using single-photon emission computed tomography (SPECT) imaging. These studies highlighted the accumulation of this Pt-salphen-111In conjugate in the tumour.