Cultivating knowledge sharing through the relationship management maturity model
Purpose - The purpose of this paper is to present the development of the relationship management maturity model (RMMM), the output of an initiative aimed at bridging the gap between business units and the IT organisation. It does this by improving and assessing knowledge sharing between business and IT staff in Finco, a large financial services organisation.
Design/methodology/approach - The objectives were achieved by undertaking ethnographic research with the relationship managers (RMs) as they carried out their activities, and by developing the RMMM through visualising the development of a community of practice (CoP) between business and IT.
Findings - The RMMM demonstrates a learning mechanism for bridging the business/IT gap through an interpretive approach to knowledge sharing: it defines knowledge sharing processes between business and IT and specifies the tasks of the relationship managers as facilitators of knowledge sharing.
Research limitations/implications - More research is necessary to determine whether the RMMM is a useful tool on which Finco can base the development of RM over the next few years.
Practical implications - The RMMM acts as a practical knowledge management tool, and will serve as a future reference for the RMs as they attempt to further develop the business/IT relationship.
Originality/value - The findings provide an initial endorsement of the knowledge sharing perspective as a way to understand the business/IT relationship. The RMMM can also be used to identify problematic issues and to develop processes to address them.
Reviewing and extending the five-user assumption: A grounded procedure for interaction evaluation
" © ACM, 2013. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in ACM Transactions on Computer-Human Interaction (TOCHI), {VOL 20, ISS 5, (November 2013)} http://doi.acm.org/10.1145/2506210 "The debate concerning how many participants represents a sufficient number for interaction testing is
well-established and long-running, with prominent contributions arguing that five users provide a good
benchmark when seeking to discover interaction problems. We argue that adoption of five users in this
context is often done with little understanding of the basis for, or implications of, the decision. We present
an analysis of relevant research to clarify the meaning of the five-user assumption and to examine the
way in which the original research that suggested it has been applied. This includes its blind adoption and
application in some studies, and complaints about its inadequacies in others. We argue that the five-user
assumption is often misunderstood, not only in the field of Human-Computer Interaction, but also in fields
such as medical device design and in business and information applications. The analysis that we present
allows us to define a systematic approach for monitoring the sample discovery likelihood, in formative and
summative evaluations, and for gathering information in order to make critical decisions during the
interaction testing, while respecting the aim of the evaluation and the allotted budget. This approach – which
we call the ‘Grounded Procedure’ – is introduced and its value argued.
Funding: the MATCH programme (EPSRC Grants EP/F063822/1 and EP/G012393/1).
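For context, the five-user benchmark is usually traced to a binomial problem-discovery model; a minimal sketch of that model (standard in the literature, not necessarily the exact formulation used in this paper) follows.

% Binomial problem-discovery model behind the five-user benchmark
% (Nielsen & Landauer); p is the probability that a single participant
% uncovers a given problem, n is the number of participants.
\[
  D(n) = 1 - (1 - p)^{n}
\]
% With the commonly quoted average p = 0.31, five users give
% D(5) = 1 - 0.69^5 \approx 0.84, i.e. roughly 85% of problems found,
% which is the usual basis for the five-user assumption.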
The treatment of the infrared region in perturbative QCD
We discuss the contribution coming from the infrared region to NLO matrix
elements and/or coefficient functions of hard QCD processes. Strictly speaking,
this contribution is not known theoretically, since it is beyond perturbative
QCD. For DGLAP evolution, all the infrared contributions are collected in the
phenomenological input parton distribution functions (PDFs) at some relatively
low scale Q_0; these functions are obtained from a fit to the `global' data.
However, dimensional regularization sometimes produces a non-zero result coming
from the infrared region. Instead of this conventional regularization
treatment, we argue that the proper procedure is to first subtract from the NLO
matrix element the contribution already generated at the same order in \alpha_s
by the LO DGLAP splitting function convoluted with the LO matrix element. This
prescription eliminates the logarithmic infrared divergence, giving a
well-defined result which is consistent with the original idea that everything
below Q_0 is collected in the PDF input. We quantify the difference between the
proposed treatment and the conventional approach using low-mass Drell-Yan
production and deep inelastic electron-proton scattering as examples; and
discuss the potential impact on the `global' PDF analyses. We present arguments
to show that the difference cannot be regarded as simply the use of an
alternative factorization scheme.
Comment: 15 pages, 5 figures; title changed, text considerably modified to improve presentation, and discussion section enlarged.
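Schematically, the prescription can be written as below; the notation (C for coefficient functions, P^(0) for the LO DGLAP splitting function, mu_F for the factorization scale) is assumed here for illustration and may differ from the paper's.

% The term subtracted from the NLO matrix element is the O(alpha_s)
% contribution already generated by LO DGLAP evolution of the input
% PDFs between Q_0 and the factorization scale:
\[
  C^{\rm NLO}_{\rm sub}(z)
  = C^{\rm NLO}(z)
  - \frac{\alpha_s}{2\pi}\,
    \ln\frac{\mu_F^2}{Q_0^2}\,
    \bigl[ P^{(0)} \otimes C^{\rm LO} \bigr](z)
\]
% Removing this piece eliminates the logarithmic infrared divergence and
% leaves everything below Q_0 in the input PDFs, as intended.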
Meaningful characterisation of perturbative theoretical uncertainties
We consider the problem of assigning a meaningful degree of belief to
uncertainty estimates of perturbative series. We analyse the assumptions which
are implicit in the conventional estimates made using renormalisation scale
variations. We then formulate a Bayesian model that, given equivalent initial
hypotheses, allows one to characterise a perturbative theoretical uncertainty
in a rigorous way in terms of a credibility interval for the remainder of the
series. We compare its outcome to the conventional uncertainty estimates in the
simple case of the calculation of QCD corrections to the e+e- -> hadrons
process. We find comparable results, but with important conceptual differences.
This work represents a first step in the direction of a more comprehensive and
rigorous handling of theoretical uncertainties in perturbative calculations
used in high energy phenomenology.
Comment: 28 pages, 5 figures. Language modified in order to make it more 'Bayesian'. No change in results. Version published in JHEP.
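A minimal sketch of the kind of Bayesian model involved (the structure below is assumed for illustration; the priors and detailed formulas are spelled out in the paper itself):

% Series truncated at order n, with the remainder dominated by the
% first unknown term:
\[
  S = \sum_{k} c_k\,\alpha_s^{k},
  \qquad
  \Delta_n \simeq c_{n+1}\,\alpha_s^{\,n+1}
\]
% Hypothesis: the coefficients share an unknown common bound \bar{c},
% |c_k| \le \bar{c}, with uninformative priors. Bayes' theorem then
% turns the known coefficients c_1, ..., c_n into a posterior for
% c_{n+1}, and hence into a credibility interval for the remainder
% \Delta_n, replacing the conventional renormalisation-scale variation.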
Theoretical Uncertainties in Electroweak Boson Production Cross Sections at 7, 10, and 14 TeV at the LHC
We present an updated study of the systematic errors in the measurements of
the electroweak boson cross-sections at the LHC for various experimental cuts
for center-of-mass energies of 7, 10 and 14 TeV. The sizes of both the electroweak
and NNLO QCD contributions are estimated, together with the systematic error
from the parton distributions. The effects of new versions of the MSTW, CTEQ,
and NNPDF PDFs are considered.
Comment: PDFLatex with JHEP3.cls. 22 pages, 43 figures. Version 2 adds the CT10W PDF set to the analysis and updates the final systematic error table and conclusions, plus several citations and minor wording changes. Version 3 adds some references on electroweak and mixed QED/QCD corrections. Version 4 adds more references and acknowledgements.
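For reference, the PDF systematic error in studies of this kind is commonly evaluated with the Hessian eigenvector prescription sketched below (a standard formula, assumed here; the paper may use asymmetric variants or, for NNPDF, replica statistics).

% Symmetric Hessian master formula: X_i^{+/-} are the values of the
% observable X computed with the 2N eigenvector sets of a PDF family.
\[
  \Delta X = \frac{1}{2}
  \sqrt{\sum_{i=1}^{N} \bigl( X_i^{+} - X_i^{-} \bigr)^{2}}
\]
% For NNPDF the analogue is the standard deviation of X over the
% Monte Carlo replicas.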
Characterization of high purity germanium point contact detectors with low net impurity concentration
High-purity germanium point-contact detectors have low energy thresholds and excellent energy resolution over a wide energy range, and are thus widely used in nuclear and particle physics. In rare event searches, such as those for neutrinoless double beta decay, the point-contact geometry is of particular importance since it allows for pulse-shape discrimination, and therefore for a significant background reduction. In this paper we investigate the pulse-shape discrimination performance of ultra-high purity germanium point-contact detectors. It is demonstrated that a minimal net impurity concentration is required to meet the pulse-shape performance requirements.
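A common pulse-shape discriminator for point-contact detectors is the A/E parameter, the maximum current amplitude divided by the event energy; the sketch below is a minimal Python illustration, with all names and thresholds hypothetical rather than taken from this paper.

import numpy as np

def a_over_e(charge_waveform, energy, dt=1e-8):
    """A/E pulse-shape parameter for a single event.

    The current pulse is the time derivative of the digitised charge
    waveform; A is its maximum and E the reconstructed energy.
    Single-site (signal-like) events cluster at a characteristic A/E,
    while multi-site (background-like) events sit lower. Smoothing and
    electronics corrections are omitted here.
    """
    current = np.gradient(charge_waveform, dt)  # charge -> current
    return current.max() / energy

def passes_psd(charge_waveform, energy, ae_single_site=1.0, lo=0.95, hi=1.05):
    # Accept events within an (assumed) band around the single-site
    # A/E value, which would be determined from calibration data.
    ae = a_over_e(charge_waveform, energy) / ae_single_site
    return lo < ae < hi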
W boson production at hadron colliders: the lepton charge asymmetry in NNLO QCD
We consider the production of W bosons in hadron collisions, and the
subsequent leptonic decay W->lnu_l. We study the asymmetry between the rapidity
distributions of the charged leptons, and we present its computation up to the
next-to-next-to-leading order (NNLO) in QCD perturbation theory. Our
calculation includes the dependence on the lepton kinematical cuts that are
necessarily applied to select W-> lnu_l events in actual experimental analyses
at hadron colliders. We illustrate the main differences between the W and
lepton charge asymmetry, and we discuss their physical origin and the effect of
the QCD radiative corrections. We show detailed numerical results on the charge
asymmetry in ppbar collisions at the Tevatron, and we discuss the comparison
with some of the available data. Some illustrative results on the lepton charge
asymmetry in pp collisions at LHC energies are presented.
Comment: 37 pages, 21 figures.
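The observable studied here is conventionally defined as follows (the standard definition; the notation is assumed to match the paper's):

% Lepton charge asymmetry as a function of the charged-lepton rapidity:
\[
  A(y_l) =
  \frac{d\sigma(l^{+})/dy_l - d\sigma(l^{-})/dy_l}
       {d\sigma(l^{+})/dy_l + d\sigma(l^{-})/dy_l}
\]
% The W charge asymmetry is defined analogously in terms of the W
% rapidity y_W; the lepton kinematical cuts distort the mapping between
% the two, which is the difference discussed above.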
Evaluation of machine-learning methods for ligand-based virtual screening
Machine-learning methods can be used for virtual screening by analysing the structural characteristics of molecules of known (in)activity, and we here discuss the use of kernel discrimination and naive Bayesian classifier (NBC) methods for this purpose. We report a kernel method that allows the processing of molecules represented by binary, integer and real-valued descriptors, and show that its screening performance differs little from that of a previously described kernel developed specifically for the analysis of binary fingerprint representations of molecular structure. We then evaluate the performance of an NBC when the training set contains only a very few active molecules. In such cases, a simpler approach based on group fusion would appear to provide superior screening performance, especially when structurally heterogeneous datasets are to be processed.
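As a concrete illustration of the NBC approach on binary fingerprints, here is a minimal sketch using synthetic data; the descriptors, weighting scheme and datasets of the paper are not reproduced, and all names below are hypothetical.

import numpy as np
from sklearn.naive_bayes import BernoulliNB

rng = np.random.default_rng(0)

# Synthetic stand-in for 1024-bit structural fingerprints: actives are
# biased towards setting the first 64 bits, mimicking a shared scaffold.
n_bits = 1024
bit_prob = np.where(np.arange(n_bits) < 64, 0.6, 0.1)
actives = (rng.random((50, n_bits)) < bit_prob).astype(int)
inactives = (rng.random((5000, n_bits)) < 0.1).astype(int)

X = np.vstack([actives, inactives])
y = np.array([1] * len(actives) + [0] * len(inactives))

# Bernoulli naive Bayes suits binary descriptors; ranking the screening
# database by P(active | fingerprint) gives the virtual-screening order.
clf = BernoulliNB().fit(X, y)
scores = clf.predict_proba(X)[:, 1]
ranked = np.argsort(scores)[::-1]  # top of this list is screened first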
Subtle changes in the flavour and texture of a drink enhance expectations of satiety
Background: The consumption of liquid calories has been implicated in the development of obesity and weight gain. Energy-containing drinks are often reported to have a weak satiety value: one explanation for this is that because of their fluid texture they are not expected to have much nutritional value. It is important to consider what features of these drinks can be manipulated to enhance their expected satiety value. Two studies investigated the perception of subtle changes in a drink’s viscosity, and the extent to which thick texture and creamy flavour contribute to the generation of satiety expectations. Participants in the first study rated the sensory characteristics of 16 fruit yogurt drinks of increasing viscosity. In study two, a new set of participants evaluated eight versions of the fruit yogurt drink, which varied in thick texture, creamy flavour and energy content, for sensory and hedonic characteristics and satiety expectations.
Results: In study one, participants were able to perceive small changes in drink viscosity that were strongly related to the actual viscosity of the drinks. In study two, the thick versions of the drink were expected to be more filling and have a greater expected satiety value, independent of the drink’s actual energy content. A creamy flavour enhanced the extent to which the drink was expected to be filling, but did not affect its expected satiety.
Conclusions: These results indicate that subtle manipulations of texture and creamy flavour can increase expectations that a fruit yogurt drink will be filling and suppress hunger, irrespective of the drink’s energy content. A thicker texture enhanced expectations of satiety to a greater extent than a creamier flavour, and may be one way to improve the anticipated satiating value of energy-containing beverages.
Constraints for the nuclear parton distributions from Z and W production at the LHC
The LHC is foreseen to bring nuclear collisions to the TeV scale as well,
thereby providing new possibilities for physics studies, in particular those
related to the electroweak sector of the Standard Model. We study here Z
and W production in proton-lead and lead-lead collisions at the LHC,
concentrating on the prospects of testing the factorization and constraining
the nuclear modifications of the parton distribution functions (PDFs).
In particular, we find that the rapidity asymmetries in proton-nucleus collisions,
arising from the differences in the PDFs between the colliding objects, provide
a decisive advantage in comparison to the rapidity-symmetric nucleus-nucleus
case. We comment on how such studies will help to improve our knowledge of the
nuclear PDFs.
Comment: The version accepted for publication in JHEP. New figures have been added, and we also discuss single charged lepton production.
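In standard notation (assumed here, not quoted from the paper), the quantities being constrained are:

% Nuclear modification of the parton distributions: the bound-proton
% PDF is written as the free-proton PDF times a correction factor
% R_i^A to be constrained by data.
\[
  f_i^{A}(x, Q^2) = R_i^{A}(x, Q^2)\, f_i^{p}(x, Q^2)
\]
% In p+Pb collisions the forward (proton-going, y > 0) and backward
% (lead-going, y < 0) regions probe different x in the nucleus, so a
% ratio such as
\[
  Y_{\rm asym}(y) = \frac{d\sigma/dy\,\big|_{+y}}{d\sigma/dy\,\big|_{-y}}
\]
% is sensitive to R_i^A in a way that the rapidity-symmetric Pb+Pb
% case is not.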
