Innovation dynamics and the role of infrastructure
This report shows how the role of the infrastructure – standards, measurement,
accreditation, design and intellectual property – can be integrated into a quantitative
model of the innovation system and used to help explain levels and changes in
labour productivity and growth in turnover and employment. The summary focuses
on the new results from the project, set out in more detail in Sections 5 and 6. The
first two sections of the report provide contextual material on the UK innovation
system, the nature and content of the infrastructure knowledge and the institutions
that provide it.
Mixed modes of innovation, the typology of innovation practices developed and
applied here, comprises six modes derived from many variables taken from the
UK Innovation Survey. These are:
Investing in intangibles
Technology with IP innovating
Using codified knowledge
Wider (managerial) innovating
Market-led innovating
External process modernising.
The composition of the innovation modes, and the approach used to compute them,
are set out in more detail in Section 4. Modes can be thought of as the underlying
processes of innovation: bundles of activities undertaken jointly by firms, whose
working out generates well-known indicators such as new product innovations, R&D
spending and accessing external information, the partial indicators gathered
from the innovation survey itself.
What happened to the knowledge economy? ICT, intangible investment and Britain's productivity record revisited
A major puzzle is that despite the apparent importance of innovation around the "knowledge economy", UK macro performance appears unaffected: investment rates are flat, and productivity has slowed down. We investigate whether measurement issues might account for the puzzle. The standard National Accounts treatment of most spending on "knowledge" or "intangible" assets is as intermediate consumption. Thus it counts as neither GDP nor investment. We ask how treating such spending as investment affects some key macro variables, namely, market sector gross value added (MGVA), business investment, capital and labour shares, growth in labour and total factor productivity, and capital deepening. We find (a) MGVA was understated by about 6% in 1970 and 13% in 2004; (b) instead of the nominal business investment/MGVA ratio falling since 1970, it has been rising; (c) instead of the labour compensation/MGVA ratio being flat since 1970, it has been falling; (d) growth in labour productivity and capital deepening has been understated and growth in total factor productivity overstated; (e) total factor productivity growth has not slowed since 1990 but has been accelerating.
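The accounting logic above can be sketched numerically. This is a minimal illustration with hypothetical figures, not data from the paper: capitalising intangible spending adds it to both value added and investment, which mechanically raises the measured investment ratio and lowers the measured labour share when intangibles grow faster than MGVA.

```python
# Illustrative sketch (hypothetical numbers): treating intangible
# spending as investment rather than intermediate consumption adds it
# to both market-sector gross value added (MGVA) and investment.

def adjust_for_intangibles(mgva, tangible_inv, intangible_spend, labour_comp):
    """Return key ratios after capitalising intangible spending."""
    mgva_adj = mgva + intangible_spend
    inv_adj = tangible_inv + intangible_spend
    return {
        "mgva": mgva_adj,
        "investment_ratio": inv_adj / mgva_adj,
        "labour_share": labour_comp / mgva_adj,
    }

# Hypothetical economy at two dates (billions, current prices).
# Intangible spending is 6% of measured MGVA early on, 13% later,
# echoing the scale of understatement the paper reports.
early = adjust_for_intangibles(mgva=100, tangible_inv=15,
                               intangible_spend=6, labour_comp=70)
late = adjust_for_intangibles(mgva=1000, tangible_inv=120,
                              intangible_spend=130, labour_comp=700)

# With intangibles growing faster than MGVA, the adjusted investment
# ratio rises over time while the adjusted labour share falls.
print(early)
print(late)
```

The point of the sketch is that findings (b) and (c) can arise purely from the revision to the denominator and to measured investment, without any change in the underlying tangible data.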
Quality measurement of semantic standards
The quality of semantic standards is unaddressed in current research, despite an explicit need from standards developers. The business importance is evident, since the quality of a standard affects its diffusion and the interoperability achieved in practice. We design an instrument to measure the quality of semantic standards, intended to contribute to the knowledge domain and to standards developers, and ultimately to lead to improved interoperability. The instrument is designed iteratively through multiple case studies. This paper describes the rationale and research design, as well as the current status and future plans.
A Mixed-Effects Location Scale Model for Dyadic Interactions.
We present a mixed-effects location scale model (MELSM) for examining the daily dynamics of affect in dyads. The MELSM includes person and time-varying variables to predict the location, or individual means, and the scale, or within-person variances. It also incorporates a submodel to account for between-person variances. The dyadic specification can accommodate individual and partner effects in both the location and the scale components, and allows random effects for all location and scale parameters. All covariances among the random effects, within and across the location and the scale, are also estimated. These covariances offer new insights into the interplay of individual mean structures, intra-individual variability, and the influence of partner effects on such factors. To illustrate the model, we use data from 274 couples who provided daily ratings on their positive and negative emotions toward their relationship for up to 90 consecutive days. The model is fit using Hamiltonian Monte Carlo methods, and includes subsets of predictors in order to demonstrate the flexibility of this approach. We conclude with a discussion of the usefulness and the limitations of the MELSM for dyadic research.
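The core idea of a location scale model can be shown with a small simulation. This is a hedged sketch with hypothetical parameter values, not estimates from the study: each person gets a random effect on the mean (location) and a random effect on the log within-person standard deviation (scale), so day-to-day variability itself differs across people.

```python
# Minimal simulation sketch of a mixed-effects location scale model:
# person i draws a location effect (individual mean) and a scale
# effect (individual log-SD). All parameter values are hypothetical.
import math
import random

random.seed(1)

BETA0 = 3.0   # fixed effect: grand mean of daily affect
ETA0 = -0.5   # fixed effect: grand mean of the log within-person SD

def simulate_person(n_days, sd_location=0.4, sd_scale=0.3):
    """Simulate one person's daily affect ratings.

    location_i ~ N(BETA0, sd_location): the individual mean.
    scale_i    ~ N(ETA0, sd_scale):     the individual log-SD, so the
    within-person variance varies across people (the 'scale' part).
    """
    location_i = random.gauss(BETA0, sd_location)
    scale_i = random.gauss(ETA0, sd_scale)
    sigma_i = math.exp(scale_i)  # log link keeps the SD positive
    return [random.gauss(location_i, sigma_i) for _ in range(n_days)]

def mean(xs):
    return sum(xs) / len(xs)

def sd(xs):
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

# Up to 90 days of ratings for two (hypothetical) partners in a dyad:
partner_a = simulate_person(90)
partner_b = simulate_person(90)

# Both the means and the day-to-day variability differ across people:
print(mean(partner_a), sd(partner_a))
print(mean(partner_b), sd(partner_b))
```

The full MELSM adds covariates, partner effects, and correlated random effects across the location and scale submodels; this sketch only shows why modelling the variance, not just the mean, is the distinctive feature.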
Scalar and vector games in the evaluation of social and environmental disclosure and their relationship with market value
This study evaluated the association of social, environmental and socio-environmental disclosure with the market value of Brazilian companies with high environmental impact, based on Game Theory. To perform the analysis, rankings were developed using scalar and vector gaming techniques. After the construction of the rankings, the association between them was verified through Kendall's correlation analysis. The findings indicate a positive association of social, environmental and socio-environmental disclosure with the market value of Brazilian companies with high environmental impact. In addition, the degree of association increased over the investigated periods. This result suggests that the market is increasingly demanding regarding the disclosure of this information, which indicates that disclosing it can bring competitive advantages in terms of market value.
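The ranking-association step can be sketched directly. This is a hedged illustration with hypothetical firms and ranks, not the study's data: Kendall's tau counts concordant versus discordant pairs between two rankings.

```python
# Hedged sketch: Kendall's tau-a between two rankings, the kind of
# association check applied to the scalar- and vector-game rankings.
# The firms and ranks below are hypothetical.
from itertools import combinations

def kendall_tau(rank_x, rank_y):
    """Tau-a: (concordant - discordant) / total pairs.
    Assumes no ties; tied ranks would need the tau-b correction."""
    assert len(rank_x) == len(rank_y)
    concordant = discordant = 0
    for i, j in combinations(range(len(rank_x)), 2):
        dx = rank_x[i] - rank_x[j]
        dy = rank_y[i] - rank_y[j]
        if dx * dy > 0:
            concordant += 1
        elif dx * dy < 0:
            discordant += 1
    n_pairs = len(rank_x) * (len(rank_x) - 1) // 2
    return (concordant - discordant) / n_pairs

# Hypothetical disclosure ranking vs. market-value ranking for 5 firms:
disclosure_rank = [1, 2, 3, 4, 5]
market_value_rank = [2, 1, 3, 5, 4]
print(kendall_tau(disclosure_rank, market_value_rank))  # 0.6
```

A tau near +1 indicates the two rankings largely agree, which is the pattern the study reports strengthening over time.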
A design for testability study on a high performance automatic gain control circuit.
A comprehensive testability study of a commercial automatic gain control circuit is presented, aiming to identify design for testability (DfT) modifications that both reduce production test cost and improve test quality. A fault simulation strategy based on layout-extracted faults has been used to support the study. The paper proposes a number of DfT modifications at the layout, schematic and system levels, together with testability guidelines that may well have generic applicability. Proposals for using the modifications to achieve partial self-test are made, and estimates of achieved fault coverage and quality levels are presented.
Eye-tracking as a measure of cognitive effort for post-editing of machine translation
The three measurements for post-editing effort as proposed by Krings (2001) have been adopted by many researchers in subsequent studies and publications. These measurements comprise temporal effort (the speed or productivity rate of post-editing, often measured in words per second or per minute at the segment level), technical effort (the number of actual edits performed by the post-editor, sometimes approximated using the Translation Edit Rate metric (Snover et al. 2006), again usually at the segment level), and cognitive effort. Cognitive effort has been measured using Think-Aloud Protocols, pause measurement, and, increasingly, eye-tracking. This chapter provides a review of studies of post-editing effort using eye-tracking, noting the influence of publications by Danks et al. (1997), and O’Brien (2006, 2008), before describing a single study in detail.
The detailed study examines whether predicted effort indicators affect post-editing effort; results were previously published as Moorkens et al. (2015). Most of the eye-tracking data analysed were unused in the previous publication.
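Two of Krings's measures can be sketched in code. This is a hedged illustration: temporal effort is words per second, and technical effort is approximated here with a plain word-level edit distance; real TER (Snover et al. 2006) additionally allows block shifts, so this is a simplification, and the example sentences are invented.

```python
# Hedged sketch of two post-editing effort measures: temporal effort
# (words per second) and a crude technical-effort proxy via word-level
# edit distance. TER also permits block shifts; this does not.

def temporal_effort(post_edited_text, seconds):
    """Post-editing speed in words per second for one segment."""
    return len(post_edited_text.split()) / seconds

def word_edit_distance(mt_output, post_edited):
    """Minimum insertions, deletions and substitutions turning the
    MT output into its post-edited version, at the word level."""
    a, b = mt_output.split(), post_edited.split()
    # Dynamic programming: d[i][j] = distance between a[:i] and b[:j]
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(a)][len(b)]

mt = "the cat sat in the mat"   # hypothetical MT output
pe = "the cat sat on the mat"   # hypothetical post-edited segment
edits = word_edit_distance(mt, pe)   # one substitution
rate = edits / len(pe.split())       # TER-like edit rate
print(edits, rate, temporal_effort(pe, 4))
```

Cognitive effort, the chapter's focus, has no such direct formula, which is precisely why proxies such as pauses and eye-tracking fixations are needed.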
From buildings to cities: techniques for the multi-scale analysis of urban form and function
The built environment is a significant factor in many urban processes, yet direct measures of built form are seldom used in geographical studies. Representation and analysis of urban form and function could provide new insights and improve the evidence base for research. So far progress has been slow due to limited data availability, computational demands, and a lack of methods to integrate built environment data with aggregate geographical analysis. Spatial data and computational improvements are overcoming some of these problems, but there remains a need for techniques to process and aggregate urban form data. Here we develop a Built Environment Model of urban function and dwelling type classifications for Greater London, based on detailed topographic and address-based data (sourced from Ordnance Survey MasterMap). The multi-scale approach allows the Built Environment Model to be viewed at fine scales for local planning contexts, and at city-wide scales for aggregate geographical analysis, allowing an improved understanding of urban processes. This flexibility is illustrated with two examples, urban function and residential type analysis, in which both local-scale urban clustering and city-wide trends in density and agglomeration are shown. While we demonstrate the multi-scale Built Environment Model to be a viable approach, a number of accuracy issues are identified, including the limitations of 2D data, inaccuracies in commercial function data and problems with temporal attribution. These limitations currently restrict the more advanced applications of the Built Environment Model.
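The multi-scale idea can be sketched as a simple aggregation. This is a hedged illustration: the building records and function classes below are hypothetical stand-ins, not Ordnance Survey MasterMap fields, and the grids are plain square cells rather than the model's actual zonal geography.

```python
# Hedged sketch of multi-scale aggregation: the same building-level
# function records are counted per grid cell at a fine and a coarse
# resolution. Records and classes here are hypothetical.
from collections import Counter, defaultdict

# Hypothetical building records: (x metres, y metres, function class)
buildings = [
    (120, 340, "residential"),
    (180, 310, "residential"),
    (905, 220, "retail"),
    (930, 260, "office"),
    (950, 240, "retail"),
]

def aggregate(buildings, cell_size):
    """Count building functions per square grid cell of the given
    size; one dataset can be viewed at any chosen resolution."""
    cells = defaultdict(Counter)
    for x, y, function in buildings:
        cell = (int(x // cell_size), int(y // cell_size))
        cells[cell][function] += 1
    return dict(cells)

fine = aggregate(buildings, cell_size=100)     # local-planning view
coarse = aggregate(buildings, cell_size=1000)  # city-wide view
print(fine)
print(coarse)
```

Keeping the building-level records and aggregating on demand is what lets one model serve both local planning contexts and city-wide geographical analysis.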