88 research outputs found
Reach for the stars: disentangling quantity and quality of inventors’ productivity in a multifaceted latent variable model
Star inventors generate superior innovation outcomes. Their capacity to invent high-quality patents might be decisive beyond mere productivity. However, the relationship between quantitative and qualitative dimensions has not been exhaustively investigated. The equal odds baseline (EOB) framework can explicitly model this relationship. This work combines a theoretical model for creative production with recent calls in the patentometrics literature for multifaceted measurement of the ability to create high-quality patents. The EOB is extended and analyzed through structural equation modeling. Specifically, we compared a multifaceted EOB model with a single latent variable for quality, and a two-dimensional model that distinguishes between the technological complexity and the value of invention portfolios. The two-dimensional model had better fit but weaker factor scores (for the “value” latent variable) than the unidimensional model. These findings suggest that both the uni- and the two-dimensional approaches can be directly used to extend research on star inventors, while for practical high-stakes assessments the two-dimensional model would require further improvement.
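The core claim of the equal odds baseline is that the expected number of high-quality outputs ("hits") grows linearly with total output, so the hit ratio is roughly independent of productivity. The abstract's actual analysis uses structural equation modeling; the following is only a minimal simulation sketch of the EOB logic itself, with all quantities (hit rate, productivity distribution) chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative EOB simulation: every patent has the same chance of
# being a "hit", so expected hits grow linearly with total output.
n_inventors = 1000
hit_prob = 0.1                                 # assumed constant hit rate
totals = rng.poisson(lam=20, size=n_inventors) + 1
hits = rng.binomial(totals, hit_prob)

# Under the EOB, the slope of hits on totals recovers the hit rate.
slope = np.polyfit(totals, hits, 1)[0]

# And the quality ratio (hits/total) is roughly uncorrelated with totals.
ratio = hits / totals
r = np.corrcoef(totals, ratio)[0, 1]
```

A near-zero correlation `r` here mirrors the sense in which quantity and quality can behave as separable dimensions, even though each patent's hit chance is identical.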
Scoring divergent thinking tests: A review and systematic framework
Divergent thinking tests are often used in creativity research as measures of creative potential. However, measurement approaches vary across studies to a great extent. One facet of divergent thinking measurement that contributes strongly to differences across studies is the scoring of participants’ responses. Most commonly, responses are scored for fluency, flexibility, and originality. However, even with respect to only one dimension (e.g., originality), scoring decisions vary extensively. In the current work, a systematic framework for practical scoring decisions was developed. Scoring dimensions, instruction-scoring fit, adequacy of responses, objectivity (vs. subjectivity), level of scoring (response vs. ideational pool level), and the method of aggregation were identified as determining factors of divergent thinking test scoring. In addition, recommendations and guidelines for making these decisions and reporting the information in papers are provided.
The machines take over: a comparison of various supervised learning approaches for automated scoring of divergent thinking tasks
Traditionally, researchers employ human raters for scoring responses to creative thinking tasks. Apart from the associated costs, this approach entails two potential risks. First, human raters can be subjective in their scoring behavior (inter-rater variance). Second, individual raters are prone to inconsistent scoring patterns (intra-rater variance). In light of these issues, we present an approach for automated scoring of Divergent Thinking (DT) tasks. We implemented a pipeline aiming to generate accurate rating predictions for DT responses using text mining and machine learning methods. Based on two existing data sets from two different laboratories, we constructed several prediction models incorporating features representing meta information of the response or features engineered from the response’s word embeddings, which were obtained using pre-trained GloVe and Word2Vec word vector spaces. Out of these features, word embeddings and features derived from them proved to be particularly effective. Overall, longer responses tended to achieve higher ratings, as did responses that were semantically distant from the stimulus object. In our comparison of three state-of-the-art machine learning algorithms, Random Forest and XGBoost tended to slightly outperform the Support Vector Regression.
Correction for this article: https://doi.org/10.1002/jocb.62
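The pipeline described above derives features such as response length and embedding-based semantic distance from the stimulus, then feeds them to a regressor. The sketch below illustrates that feature-engineering step with a tiny hand-made embedding table standing in for the pre-trained GloVe/Word2Vec spaces; the words, vectors, and ratings are invented for illustration only, not taken from the paper's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Toy stand-in for a pretrained embedding space; in the actual pipeline
# these vectors would come from GloVe or Word2Vec.
emb = {
    "brick": np.array([1.0, 0.0, 0.2]),
    "build": np.array([0.9, 0.1, 0.1]),
    "paperweight": np.array([0.2, 0.8, 0.3]),
    "spaceship": np.array([0.0, 0.1, 1.0]),
}

def cosine_distance(a, b):
    return 1.0 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def features(stimulus, response_words):
    """Response length and mean semantic distance from the stimulus word."""
    vec = np.mean([emb[w] for w in response_words], axis=0)
    return [len(response_words), cosine_distance(emb[stimulus], vec)]

# Hypothetical responses to the stimulus "brick" with invented human ratings.
X = np.array([
    features("brick", ["build"]),
    features("brick", ["paperweight"]),
    features("brick", ["spaceship"]),
])
y = np.array([1.0, 2.5, 4.0])

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
pred = model.predict(X)
```

Random Forest is shown here because it was one of the better performers in the comparison; XGBoost or Support Vector Regression would slot into the same feature pipeline.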
A New Perspective on the Multidimensionality of Divergent Thinking Tasks
In the presented work, a shift of perspective with respect to the dimensionality of divergent thinking (DT) tasks is introduced, moving from the question of multidimensionality across DT scores (i.e., fluency, flexibility, or originality) to the question of multidimensionality within one holistic score of DT performance (i.e., snapshot ratings of creative quality). We apply IRTree models to test whether unidimensionality assumptions hold, under different task instructions, for snapshot scoring of DT tests across Likert-scale points and varying levels of fluency. It was found that evidence for unidimensionality across scale points was stronger with be-creative instructions as compared to be-fluent instructions, which suggests better psychometric quality of ratings when be-creative instructions are used. In addition, creative quality latent variables pertaining to low-fluency and high-fluency ideational pools shared around 50% of variance, which suggests both strong overlap and evidence for differentiation. The presented approach makes it possible to further examine the psychometric quality of subjective ratings and to address new questions with respect to within-item multidimensionality in DT.
Star inventors: quantity and quality in the EOB model
Star inventors generate superior innovation outcomes. Their capacity to invent high-quality patents might be decisive beyond mere productivity. However, the relationship between quantitative and qualitative dimensions has not been exhaustively investigated. The equal odds baseline (EOB) framework can explicitly model this relationship. This work combines a theoretical model for creative production with recent calls in the patentometrics literature for multifaceted measurement of the ability to create high-quality patents, extending the recent results obtained with forward citations. The results provide evidence in favor of the EOB across all quality indicators and show that inventors’ capacities to create quality patents can be measured to potentially identify star inventors in a way that explicitly takes the intricate relationship between overall productivity and patent quality into account. Furthermore, the rankings of inventors in terms of quantity or quality are overall more dissimilar than similar, confirming that quantity and quality can be measured as orthogonal dimensions. This result is of particular interest for organizations and for society: incentives and compensation schemes that focus only on a quantitative assessment of inventors’ output risk neglecting a relevant part of innovation production.
Researcher Capacity Estimation based on the Q Model: A Generalized Linear Mixed Model Perspective
The following material is made openly available:
1. models_OSF.rda: the fitted model objects of the main analysis.
2. models_pe_OSF.rda: the fitted model objects of the complementary analyses reported in the supplemental file.
3. models_train_OSF.rda: the fitted model objects and data.frame objects used for cross-validation.
4. OSF Zeor_Inflation_Results.docx: Supplemental file in which complementary analyses are described and reported.
5. q_model_OSF.R: R script including all analyses reported in the main paper and supplemental file.
The dataset is also openly available:
https://lu-liu.github.io/hotstreaks/
It was created for the following publication:
Liu, L., Wang, Y., Sinatra, R., Giles, C. L., Song, C., & Wang, D. (2018). Hot streaks in artistic, cultural, and scientific careers. Nature, 559, 396–399. https://doi.org/10.1038/s41586-018-0315-8
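The Q model views the impact of a researcher's paper as the product of a stable personal capacity Q and paper-level chance, which on the log scale becomes an additive model with a per-researcher random intercept. The entry's own analyses are in the R script listed above; the following is only a hedged Python sketch of that random-intercept idea on simulated data, with all parameter values (capacity variance, luck variance, sample sizes) invented for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Simulated Q-model data: log citations = baseline + capacity q_i + luck.
n_authors, papers_each = 50, 20
q = rng.normal(0.0, 0.5, n_authors)              # latent capacity per author
rows = []
for i in range(n_authors):
    for _ in range(papers_each):
        log_c = 1.0 + q[i] + rng.normal(0.0, 1.0)
        rows.append({"author": i, "log_cites": log_c})
df = pd.DataFrame(rows)

# Random-intercept mixed model: the estimated intercept variance
# corresponds to the variance of the capacity term Q.
fit = smf.mixedlm("log_cites ~ 1", df, groups=df["author"]).fit()
```

The fitted group-level variance (`fit.cov_re`) should recover the simulated capacity variance of 0.25, and the per-author random effects play the role of capacity estimates.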