
    From a Mirage to an Oasis: Narcissism, Perceived Creativity, and Creative Performance

    We examine the link between narcissism and creativity at the individual, relational, and group levels of analysis. We find that narcissists are not necessarily more creative than others, but they think they are, and they are adept at convincing others to agree with them. In the first study, narcissism was positively associated with self-rated creativity, despite the fact that blind coders saw no difference between the creative products offered by those low and high on narcissism. In a second study, more narcissistic individuals asked to pitch creative ideas to a target person were judged by the targets as being more creative than were less narcissistic individuals, in part because narcissists were more enthusiastic. Finally, in a study of group creativity, we find evidence of a curvilinear effect: having more narcissists is better for generating creative outcomes (but having too many provides diminishing returns).

    Evaluation of the Land Surface Water Budget in NCEP/NCAR and NCEP/DOE Reanalyses using an Off-line Hydrologic Model

    The ability of the National Centers for Environmental Prediction (NCEP)/National Center for Atmospheric Research (NCAR) reanalysis (NRA1) and the follow-up NCEP/Department of Energy (DOE) reanalysis (NRA2) to reproduce the hydrologic budgets over the Mississippi River basin is evaluated using a macroscale hydrology model. This diagnosis is aided by a relatively unconstrained global climate simulation using the NCEP global spectral model and a more highly constrained regional climate simulation using the NCEP regional spectral model, both employing the same land surface parameterization (LSP) as the reanalyses. The hydrology model is the variable infiltration capacity (VIC) model, which is forced by gridded observed precipitation and temperature. It reproduces observed streamflow and, by closure, is constrained to balance the other terms in the surface water and energy budgets. The VIC-simulated surface fluxes therefore provide a benchmark for evaluating the predictions from the reanalyses and the climate models. The comparisons, conducted for the 10-year period 1988–1997, show the well-known overestimation of summer precipitation in the southeastern Mississippi River basin, a consistent overestimation of evapotranspiration, and an underprediction of snow in NRA1. These biases are generally lower in NRA2, though a large overprediction of snow water equivalent remains. NRA1 is also subject to errors in the surface water budget due to nudging of modeled soil moisture toward an assumed climatology. The nudging and precipitation bias alone do not explain the consistent overprediction of evapotranspiration throughout the basin. Another source of error is the gravitational drainage term in the NCEP LSP, which produces the majority of the model's reported runoff. This may contribute to an overprediction of the persistence of surface water anomalies in much of the basin. Residual evapotranspiration inferred from an atmospheric balance of NRA1, which is more directly related to observed atmospheric variables, matches the VIC prediction much more closely than the coupled models do. However, the persistence of the residual evapotranspiration is much lower than that predicted by the hydrological model or the climate models.
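
    The closure constraint above, in which evapotranspiration is inferred as the residual of the surface water balance, can be illustrated with a minimal sketch. The variable names and the monthly values are illustrative assumptions, not data from the study or from VIC:

        import numpy as np

        # Illustrative monthly, basin-mean water-budget terms (mm); the numbers are
        # made up for demonstration, not taken from the reanalyses or the VIC model.
        precip  = np.array([60.0, 55.0, 80.0, 95.0, 110.0, 90.0])              # P
        runoff  = np.array([20.0, 18.0, 30.0, 35.0,  40.0, 30.0])              # R (streamflow)
        storage = np.array([300.0, 310.0, 318.0, 320.0, 315.0, 300.0, 285.0])  # S at month boundaries

        # Water-balance closure: P - ET - R = dS  =>  ET = P - R - dS
        delta_storage = np.diff(storage)           # change in soil moisture plus snow storage
        et_residual   = precip - runoff - delta_storage

        print("residual evapotranspiration (mm/month):", np.round(et_residual, 1))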

    Techniques of linear prediction, with application to oceanic and atmospheric fields in the tropical Pacific

    The problem of constructing optimal linear prediction models by multivariance regression methods is reviewed. It is well known that as the number of predictors in a model is increased, the skill of the prediction grows, but the statistical significance generally decreases. For predictions using a large number of candidate predictors, strategies are therefore needed to determine optimal prediction models which properly balance the competing requirements of skill and significance. The popular methods of coefficient screening or stepwise regression represent a posteriori predictor selection methods and therefore cannot be used to recover statistically significant models by truncation if the complete model, including all predictors, is statistically insignificant. Higher significance can be achieved only by a priori reduction of the predictor set. To determine the maximum number of predictors which may be meaningfully incorporated in a model, a model hierarchy can be used in which a series of best fit prediction models is constructed for a (prior defined) nested sequence of predictor sets, the sequence being terminated when the significance level either falls below a prescribed limit or reaches a maximum value. The method requires a reliable assessment of model significance. This is characterized by a quadratic statistic which is defined independently of the model skill or artificial skill. As an example, the method is applied to the prediction of sea surface temperature anomalies at Christmas Island (representative of sea surface temperatures in the central equatorial Pacific) and variations of the central and east Pacific Hadley circulation (characterized by the second empirical orthogonal function (EOF) of the meridional component of the trade wind anomaly field) using a general multiple‐time‐lag prediction matrix. The ordering of the predictors is based on an EOF sequence, defined formally as orthogonal variables in the composite space of all (normalized) predictors, irrespective of their different physical dimensions, time lag, and geographic position. The choice of a large set of 20 predictors at 12 time lags yields significant predictability only for forecast periods of 3 to 5 months. However, a prior reduction of the predictor set to 4 predictors at 10 time lags leads to 95% significant predictions with skill values of the order of 0.4 to 0.7 up to 6 or 8 months. For infinitely long time series the construction of optimal prediction models reduces essentially to the problem of linear system identification. However, the model hierarchies normally considered for the simulation of general linear systems differ in structure from the model hierarchies which appear to be most suitable for constructing pure prediction models. Thus the truncation imposed by statistical significance requirements can result in rather different models for the two cases. The relation between optimal prediction models and linear dynamical models is illustrated by the prediction of east‐west sea level changes in the equatorial Pacific from wind field anomalies. It is shown that the optimal empirical prediction is statistically consistent in this case with both the first‐order relaxation and damped oscillator models recently proposed by McWilliams and Gent (but with somewhat different model parameters than suggested by the authors). Thus the data do not allow a distinction between the two physical models; the simplest acceptable model is the first‐order damped response. 
Finally, the problem of estimating forecast skill is discussed. It is usually stated that the forecast skill is smaller than the true skill, which in turn is smaller than the hindcast skill, by an amount which in both cases is approximately equal to the artificial skill. However, this result applies to the mean skills averaged over the ensemble of all possible hindcast data sets, given the true model. Under the more appropriate side condition of a given hindcast data set and an unknown true model, the estimation of the forecast skill represents a problem of statistical inference and is dependent on the assumed prior probability distribution of true models. The Bayesian hypothesis of a uniform prior distribution yields an average forecast skill equal to the hindcast skill, but other (equally acceptable) assumptions yield lower forecast skills more compatible with the usual hindcast-averaged expression.
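
    The predictor-hierarchy idea above can be sketched concretely: fit least-squares prediction models on nested subsets of EOF-ordered predictors and track hindcast skill against a significance estimate as predictors are added. The synthetic data, the names, and the simple F-test used as the significance check are illustrative assumptions, not the quadratic significance statistic defined in the paper:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)

        # Synthetic example: 120 samples, 20 candidate predictors, one predictand;
        # only two predictors actually carry signal.
        n, p = 120, 20
        X = rng.standard_normal((n, p))
        y = 0.6 * X[:, 0] - 0.4 * X[:, 1] + rng.standard_normal(n)

        # A priori ordering of predictors: EOFs (principal components) of the predictor set.
        Xc = X - X.mean(axis=0)
        _, _, vt = np.linalg.svd(Xc, full_matrices=False)
        pcs = Xc @ vt.T                      # EOF amplitude time series, ordered by variance

        # Model hierarchy: best-fit models for a nested sequence of predictor sets.
        for k in (1, 2, 4, 10, 20):
            Z = pcs[:, :k]
            beta, *_ = np.linalg.lstsq(Z, y - y.mean(), rcond=None)
            resid = y - y.mean() - Z @ beta
            skill = 1.0 - resid.var() / y.var()          # hindcast skill (explained variance)
            f_stat = (skill / k) / ((1.0 - skill) / (n - k - 1))
            p_val = stats.f.sf(f_stat, k, n - k - 1)     # crude significance of the k-term model
            print(f"k={k:2d}  hindcast skill={skill:.2f}  p-value={p_val:.3f}")

    As the abstract notes, the hindcast skill can only grow as predictors are added, while the significance of the larger models typically degrades, which is what motivates truncating the hierarchy at an a priori reduced predictor set.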

    Density Forecasting: A Survey

    A density forecast of the realization of a random variable at some future time is an estimate of the probability distribution of the possible future values of that variable. This article presents a selective survey of applications of density forecasting in macroeconomics and finance, and discusses some issues concerning the production, presentation and evaluation of density forecasts.
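
    To make the definition concrete, the sketch below reports a one-step-ahead forecast from an AR(1) model as a full Gaussian predictive distribution rather than a single point value. The model, the simulated data, and the parameter estimates are illustrative assumptions, not an example taken from the survey:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)

        # Simulate an AR(1) series: y_t = 0.7 * y_{t-1} + eps_t, with eps_t ~ N(0, 1).
        n = 500
        y = np.zeros(n)
        for t in range(1, n):
            y[t] = 0.7 * y[t - 1] + rng.standard_normal()

        # Least-squares estimates of the AR coefficient and the innovation scale.
        phi = np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1])
        sigma = (y[1:] - phi * y[:-1]).std(ddof=1)

        # The density forecast is the whole predictive distribution N(phi * y_T, sigma^2);
        # point forecasts and interval forecasts follow from it as summaries.
        density = stats.norm(loc=phi * y[-1], scale=sigma)
        print(f"point forecast: {density.mean():.2f}")
        print(f"90% prediction interval: [{density.ppf(0.05):.2f}, {density.ppf(0.95):.2f}]")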

    Timescales of Massive Human Entrainment

    The past two decades have seen an upsurge of interest in the collective behaviors of complex systems composed of many agents entrained to each other and to external events. In this paper, we extend concepts of entrainment to the dynamics of human collective attention. We conducted a detailed investigation of the unfolding of human entrainment - as expressed by the content and patterns of hundreds of thousands of messages on Twitter - during the 2012 US presidential debates. By time-locking these data sources, we quantify the impact of the unfolding debate on human attention. We show that collective social behavior covaries second-by-second with the interactional dynamics of the debates: a candidate speaking induces rapid increases in mentions of his name on social media and decreases in mentions of the other candidate. Moreover, interruptions by an interlocutor increase the attention received. We also highlight a distinct time scale for the impact of salient moments in the debate: mentions in social media start within 5-10 seconds after the moment, peak at approximately one minute, and slowly decay in a consistent fashion across well-known events during the debates. Finally, we show that public attention after an initial burst slowly decays through the course of the debates. Thus we demonstrate that large-scale human entrainment may hold across a number of distinct scales, in an exquisitely time-locked fashion. The methods and results pave the way for careful study of the dynamics and mechanisms of large-scale human entrainment.
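
    The time-locking step can be sketched minimally: bin tweet timestamps into per-second mention counts and average the counts in a window locked to the onsets of salient debate moments. The data layout, window length, and names below are illustrative assumptions, not the paper's actual pipeline:

        import numpy as np

        rng = np.random.default_rng(2)

        # Illustrative inputs: timestamps (seconds from debate start) of tweets mentioning
        # one candidate, plus onsets of salient debate moments. Both are synthetic here.
        mention_times = np.sort(rng.uniform(0, 5400, size=50_000))   # a fake 90-minute debate
        event_onsets  = np.array([600, 1800, 3000, 4200])            # fake salient moments (s)

        # Per-second mention counts across the debate.
        counts, _ = np.histogram(mention_times, bins=np.arange(0, 5401))

        # Event-locked average: the mention rate in the two minutes after each salient moment.
        window = 120
        locked = np.stack([counts[t:t + window] for t in event_onsets])
        response = locked.mean(axis=0)

        print(f"average response peaks {int(np.argmax(response))} s after the salient moments")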

    The impact of local masses and inertias on the dynamic modelling of flexible manipulators

    After a brief review of the recent literature dealing with flexible multi-body modelling for control design purposes, the paper first describes three different techniques used to build the dynamic model of SECAFLEX, a 2 d.o.f. flexible in-plane manipulator driven by geared DC motors: introduction of local fictitious springs, use of a basis of assumed Euler-Bernoulli cantilever-free modes, and use of 5th-order polynomial modes. This last technique makes it easy to take into account local masses and inertias, which prove important in real-life experiments. Transforming the resulting state-space models into a common modal basis allows a quantitative comparison of the results, while Bode plots of the various transfer functions of interest, relating input torques to in-joint and tip measurements, give more qualitative results. A parametric study of the effect of angular configuration changes and physical parameter modifications (including the effect of rotor inertia) shows that the three techniques give similar results up to the first flexible modes of each link when concentrated masses and inertias are present. From the control point of view, “pathological” cases are exhibited: uncertainty in the phase of the non-collocated transfer functions, and strong dependence of the free modes on the rotor inertia value. Robustness of the control to these kinds of uncertainties therefore appears essential.
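
    The first technique mentioned above, local fictitious springs, can be illustrated with a minimal single-joint sketch: a geared motor and a flexible link modeled as two inertias coupled by a torsional spring, assembled into a state-space model whose torque-to-tip frequency response can then be examined (e.g. as a Bode plot). The numerical values and the single-joint simplification are illustrative assumptions, not SECAFLEX parameters:

        import numpy as np
        from scipy import signal

        # Motor-side inertia Jm and link-side inertia Jl coupled by a fictitious
        # torsional spring k with a small damping c; all values are made up.
        Jm, Jl = 0.05, 0.40        # kg m^2
        k, c = 200.0, 0.5          # N m/rad, N m s/rad

        # States: [theta_m, theta_l, dtheta_m, dtheta_l]; input: motor torque.
        A = np.array([
            [0.0,      0.0,      1.0,      0.0],
            [0.0,      0.0,      0.0,      1.0],
            [-k / Jm,  k / Jm,   -c / Jm,  c / Jm],
            [k / Jl,   -k / Jl,  c / Jl,   -c / Jl],
        ])
        B = np.array([[0.0], [0.0], [1.0 / Jm], [0.0]])
        C_tip = np.array([[0.0, 1.0, 0.0, 0.0]])   # non-collocated output: link ("tip") angle
        D = np.zeros((1, 1))

        sys_tip = signal.StateSpace(A, B, C_tip, D)
        w = np.logspace(0, 3, 400)                       # rad/s
        w, mag_db, phase_deg = signal.bode(sys_tip, w)   # torque -> tip-angle Bode data

        print(f"flexible resonance expected near {np.sqrt(k * (1 / Jm + 1 / Jl)):.1f} rad/s")

    In this lumped view, stiffening the fictitious spring raises the flexible mode, while changing the motor-side inertia Jm shifts the free modes, mirroring the sensitivity to rotor inertia noted in the abstract.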

    Profiling of OCR'ed Historical Texts Revisited

    In the absence of ground truth it is not possible to automatically determine the exact spectrum and occurrences of OCR errors in an OCR'ed text. Yet, for interactive postcorrection of OCR'ed historical printings it is extremely useful to have a statistical profile available that provides an estimate of error classes with associated frequencies, and that points to conjectured errors and suspicious tokens. The method introduced in Reffle (2013) computes such a profile, combining lexica, pattern sets and advanced matching techniques in a specialized Expectation Maximization (EM) procedure. Here we improve this method in three respects: First, the method in Reffle (2013) is not adaptive: user feedback obtained by actual postcorrection steps cannot be used to compute refined profiles. We introduce a variant of the method that is open for adaptivity, taking correction steps of the user into account. This leads to higher precision with respect to recognition of erroneous OCR tokens. Second, during postcorrection often new historical patterns are found. We show that adding new historical patterns to the linguistic background resources leads to a second kind of improvement, enabling even higher precision by telling historical spellings apart from OCR errors. Third, the method in Reffle (2013) does not make any active use of tokens that cannot be interpreted in the underlying channel model. We show that adding these uninterpretable tokens to the set of conjectured errors leads to a significant improvement of the recall for error detection, at the same time improving precision.
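
    The adaptive step can be sketched in a very reduced form: each user correction is aligned against the OCR token, and the character-level substitution patterns it implies are fed back into running pattern-frequency estimates used to profile the remaining text. The alignment, the data structures, and the example tokens below are illustrative assumptions and a drastic simplification of the EM-based profiler described above:

        from collections import Counter
        from difflib import SequenceMatcher

        def extract_patterns(ocr_token: str, corrected: str) -> list[tuple[str, str]]:
            """Character-level (ocr, truth) substitution patterns implied by one correction."""
            patterns = []
            for op, i1, i2, j1, j2 in SequenceMatcher(None, ocr_token, corrected).get_opcodes():
                if op != "equal":
                    patterns.append((ocr_token[i1:i2], corrected[j1:j2]))
            return patterns

        # Running pattern-frequency estimate, refined as postcorrection feedback arrives.
        pattern_counts: Counter = Counter()

        # Illustrative correction steps (OCR token -> user's correction).
        feedback = [("vnd", "und"), ("Hauf", "Haus"), ("vnter", "unter"), ("thcil", "theil")]
        for ocr_tok, truth_tok in feedback:
            pattern_counts.update(extract_patterns(ocr_tok, truth_tok))

        # Patterns seen repeatedly (here "v" -> "u") can be promoted to historical
        # spelling patterns rather than OCR errors when profiling the rest of the text.
        for (ocr_chars, truth_chars), freq in pattern_counts.most_common(3):
            print(f"{ocr_chars!r} -> {truth_chars!r}: {freq}")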