    Diffusion of Lexical Change in Social Media

    Computer-mediated communication is driving fundamental changes in the nature of written language. We investigate these changes by statistical analysis of a dataset comprising 107 million Twitter messages (authored by 2.7 million unique user accounts). Using a latent vector autoregressive model to aggregate across thousands of words, we identify high-level patterns in the diffusion of linguistic change over the United States. Our model is robust to unpredictable changes in Twitter's sampling rate, and provides a probabilistic characterization of the relationship of macro-scale linguistic influence to a set of demographic and geographic predictors. The results of this analysis offer support for prior arguments that focus on geographical proximity and population size. However, demographic similarity -- especially with regard to race -- plays an even more central role, as cities with similar racial demographics are far more likely to share linguistic influence. Rather than moving towards a single unified "netspeak" dialect, language evolution in computer-mediated communication reproduces existing fault lines in spoken American English. (Preprint of PLOS ONE paper from November 2014; PLoS ONE 9(11) e11311.)
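
    A minimal sketch, on synthetic data, of the autoregressive backbone behind such an analysis: fitting an ordinary vector autoregression to weekly word-frequency series for three cities with statsmodels, where off-diagonal lag coefficients play the role of directed city-to-city influence. The paper's latent VAR with demographic predictors is considerably more elaborate; the city names and all numbers below are illustrative, not from the paper.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
n_weeks, cities = 200, ["Atlanta", "Boston", "Chicago"]

# Synthetic weekly log-frequencies of one word, with the first city
# weakly "leading" the other two.
x = np.zeros((n_weeks, 3))
for t in range(1, n_weeks):
    x[t, 0] = 0.8 * x[t - 1, 0] + rng.normal(scale=0.1)
    x[t, 1] = 0.6 * x[t - 1, 1] + 0.3 * x[t - 1, 0] + rng.normal(scale=0.1)
    x[t, 2] = 0.6 * x[t - 1, 2] + 0.2 * x[t - 1, 0] + rng.normal(scale=0.1)

series = pd.DataFrame(x, columns=cities)
result = VAR(series).fit(maxlags=1)

# Entry (i, j) of the lag-1 coefficient matrix estimates the influence
# of city j's past frequency on city i's current one.
print(result.coefs[0])
```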

    Demonstration and validation of Kernel Density Estimation for spatial meta-analyses in cognitive neuroscience using simulated data

    The data presented in this article are related to the research article entitled "Convergence of semantics and emotional expression within the IFG pars orbitalis" (Belyk et al., 2017) [1]. The research article reports a spatial meta-analysis of brain imaging experiments on the perception of semantic compared to emotional communicative signals in humans. This Data in Brief article demonstrates and validates the use of Kernel Density Estimation (KDE) as a novel statistical approach to neuroimaging data. First, we performed a side-by-side comparison of KDE with a previously published meta-analysis that applied activation likelihood estimation, which is the predominant approach to meta-analyses in cognitive neuroscience. Second, we analyzed data simulated with known spatial properties to test the sensitivity of KDE to varying degrees of spatial separation. KDE successfully detected true spatial differences in simulated data and displayed few false positives when no true differences were present. R code to simulate and analyze these data is made publicly available to facilitate the further evaluation of KDE for neuroimaging data and its dissemination to cognitive neuroscientists.
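
    The authors distribute R code for these simulations; purely as an illustration of the comparison idea (a Python sketch, not their implementation), the snippet below contrasts two sets of simulated 3-D activation foci using SciPy's Gaussian KDE. All coordinates, spreads, and sample counts are invented for the demonstration.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

# Simulated peak coordinates (x, y, z in mm) for two sets of "studies",
# with centroids separated by 10 mm along x.
foci_a = rng.normal(loc=[40, 20, 0], scale=6, size=(60, 3))
foci_b = rng.normal(loc=[50, 20, 0], scale=6, size=(60, 3))

# gaussian_kde expects variables in rows, observations in columns.
kde_a = gaussian_kde(foci_a.T)
kde_b = gaussian_kde(foci_b.T)

# Evaluate both density estimates along a line through the centroids
# and locate where each condition's density dominates.
grid = np.column_stack([np.linspace(20, 70, 200),
                        np.full(200, 20.0),
                        np.zeros(200)])
diff = kde_a(grid.T) - kde_b(grid.T)
print("condition A dominates for x <", grid[diff > 0, 0].max())
```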

    Extracting information from S-curves of language change

    It is well accepted that the adoption of innovations is described by S-curves (slow start, accelerating period, and slow end). In this paper, we analyze how much information on the dynamics of innovation spreading can be obtained from a quantitative description of S-curves. We focus on the adoption of linguistic innovations, for which detailed databases of written texts from the last 200 years allow for an unprecedented statistical precision. Combining data analysis with simulations of simple models (e.g., the Bass dynamics on complex networks), we identify signatures of endogenous and exogenous factors in the S-curves of adoption. We propose a measure to quantify the strength of these factors and three different methods to estimate it from S-curves. We obtain cases in which the exogenous factors are dominant (in the adoption of German orthographic reforms and of one irregular verb) and cases in which endogenous factors are dominant (in the adoption of conventions for romanization of Russian names and in the regularization of most studied verbs). These results show that the shape of an S-curve is not universal and contains information on the adoption mechanism. (Published in J. R. Soc. Interface, vol. 11, no. 101 (2014), 1044; DOI: http://dx.doi.org/10.1098/rsif.2014.1044. Comment: 9 pages, 5 figures; Supplementary Material is available at http://dx.doi.org/10.6084/m9.figshare.122178)
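
    One way to make the endogenous/exogenous distinction concrete (a sketch under our own assumptions, not one of the paper's three estimators): in the Bass model, p captures exogenous pressure and q endogenous peer-to-peer spreading, and the model's closed-form adoption curve can be fitted directly to S-curve data.

```python
import numpy as np
from scipy.optimize import curve_fit

def bass(t, p, q):
    """Cumulative adoption fraction under the Bass model."""
    e = np.exp(-(p + q) * t)
    return (1.0 - e) / (1.0 + (q / p) * e)

rng = np.random.default_rng(2)
t = np.linspace(0, 40, 80)  # years since the innovation appeared

# Synthetic noisy S-curve generated with known p and q.
observed = bass(t, 0.01, 0.35) + rng.normal(scale=0.01, size=t.size)

# Bounds keep p, q positive so the closed form stays well defined.
(p, q), _ = curve_fit(bass, t, observed, p0=[0.05, 0.2],
                      bounds=(1e-4, 2.0))
print(f"exogenous p = {p:.3f}, endogenous q = {q:.3f}, q/p = {q/p:.1f}")
```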

    Autoregressive time series prediction by means of fuzzy inference systems using nonparametric residual variance estimation

    We propose an automatic methodology framework for short- and long-term prediction of time series by means of fuzzy inference systems. In this methodology, fuzzy techniques and statistical techniques for nonparametric residual variance estimation are combined in order to build autoregressive predictive models implemented as fuzzy inference systems. Nonparametric residual variance estimation plays a key role in driving the identification and learning procedures. Concrete criteria and procedures within the proposed methodology framework are applied to a number of time series prediction problems. The "learning from examples" method introduced by Wang and Mendel (W&M) is used for identification. The Levenberg–Marquardt (L–M) optimization method is then applied for tuning. The W&M method produces compact and potentially accurate inference systems when applied after a proper variable selection stage. The L–M method yields the best compromise between accuracy and interpretability of results among a set of alternatives. Delta-test-based residual variance estimates are used to select the best subset of inputs to the fuzzy inference systems as well as the number of linguistic labels for the inputs. In experiments on a diverse set of time series prediction benchmarks, the proposed models are compared against least-squares support vector machines (LS-SVM), optimally pruned extreme learning machines (OP-ELM), and k-NN based autoregressors. The advantages of the proposed methodology are shown in terms of linguistic interpretability, generalization capability, and computational cost. Furthermore, the fuzzy models are shown to be consistently more accurate for prediction in the case of time series coming from real-world applications. (Funding: Ministerio de Ciencia e Innovación TEC2008-04920; Junta de Andalucía P08-TIC-03674, IAC07-I-0205:33080, IAC08-II-3347:5626.)
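
    The delta test mentioned above has a compact statement: the residual (noise) variance of y given X is estimated as half the mean squared difference between each output and the output of its nearest neighbour in input space. A minimal sketch on synthetic data (not the authors' implementation):

```python
import numpy as np
from scipy.spatial import cKDTree

def delta_test(X, y):
    """Nonparametric residual variance estimate via nearest neighbours."""
    tree = cKDTree(X)
    # k=2 because the closest point to each sample is the sample itself.
    _, idx = tree.query(X, k=2)
    nn = idx[:, 1]
    return 0.5 * np.mean((y[nn] - y) ** 2)

rng = np.random.default_rng(3)
X = rng.uniform(-3, 3, size=(2000, 2))
noise_sd = 0.2
y = np.sin(X[:, 0]) * np.cos(X[:, 1]) + rng.normal(scale=noise_sd, size=2000)

# Should be close to noise_sd**2 = 0.04 regardless of the smooth part.
print("delta test estimate:", delta_test(X, y))
```

    Used inside a wrapper loop over candidate input subsets, the subset minimizing this estimate is the one whose inputs explain the most structure, which is how the methodology drives variable selection.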

    Optimization of fuzzy analogy in software cost estimation using linguistic variables

    A long-standing objective of the software engineering community has been to develop useful models that explain the software development life cycle and accurately estimate software cost. Although innumerable cost estimation methods exist, analogy-based approaches handle datasets containing categorical variables poorly. Owing to the nature of the software engineering domain, project attributes are often measured in terms of linguistic values such as very low, low, high and very high, and the imprecise nature of such values introduces uncertainty and vagueness into their interpretation. However, no existing method can deal directly with categorical variables and tolerate such imprecision and uncertainty without resorting to classical intervals or numeric-value approximations. In this paper, a new optimization approach based on fuzzy logic, linguistic quantifiers and analogy-based reasoning is proposed to improve effort estimation for software projects whose attributes are described by either numerical or categorical data. The proposed method is validated empirically on the historical NASA dataset. The results, analyzed using the prediction criterion, indicate that the proposed method produces more explainable results than other machine learning methods. (Comment: 14 pages, 8 figures; Journal of Systems and Software, 2011. arXiv admin note: text overlap with arXiv:1112.3877 by another author.)
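
    As a purely hypothetical sketch of the general analogy-with-linguistic-values idea (the paper's actual fuzzification and linguistic quantifiers differ), one can encode labels such as "low" and "high" as triangular fuzzy sets, compare projects by membership overlap, and estimate effort as a similarity-weighted mean over historical projects. All labels, centres, and effort figures below are invented.

```python
import numpy as np

LABELS = ["very low", "low", "high", "very high"]
# Triangular membership centres on a unit scale (an assumption here).
CENTRES = {lab: c for lab, c in zip(LABELS, [0.0, 0.33, 0.67, 1.0])}

def memberships(label, width=0.33):
    """Membership degree of a linguistic label in each fuzzy set."""
    x = CENTRES[label]
    return np.array([max(0.0, 1.0 - abs(x - CENTRES[l]) / width)
                     for l in LABELS])

def similarity(proj_a, proj_b):
    """Mean fuzzy-set overlap across two projects' attributes."""
    return np.mean([np.minimum(memberships(a), memberships(b)).sum()
                    for a, b in zip(proj_a, proj_b)])

# Tiny made-up historical base: (complexity, team experience) -> effort.
history = [(("low", "high"), 120.0),
           (("high", "low"), 310.0),
           (("very high", "low"), 420.0)]

new_project = ("high", "high")
weights = np.array([similarity(new_project, attrs) for attrs, _ in history])
efforts = np.array([e for _, e in history])
print("estimated effort:", (weights @ efforts) / weights.sum())
```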