
    Clustering Memes in Social Media

    The increasing pervasiveness of social media creates new opportunities to study human social behavior, while challenging our capability to analyze its massive data streams. One of the emerging tasks is to distinguish between different kinds of activities, for example engineered misinformation campaigns versus spontaneous communication. Such detection problems require a formal definition of a meme: a unit of information that can spread from person to person through the social network. Once a meme is identified, supervised learning methods can be applied to classify different types of communication. The appropriate granularity of a meme, however, is hardly captured by existing entities such as tags and keywords. Here we present a framework for the novel task of detecting memes by clustering messages from large streams of social data. We evaluate various similarity measures that leverage content, metadata, network features, and their combinations. We also explore the idea of pre-clustering on the basis of existing entities. A systematic evaluation is carried out using a manually curated dataset as ground truth. Our analysis shows that pre-clustering and a combination of heterogeneous features yield the best trade-off between the number of clusters and their quality, demonstrating that a simple combination based on pairwise maximization of similarity is as effective as a non-trivial optimization of parameters. Our approach is fully automatic, unsupervised, and scalable for real-time detection of memes in streaming data. Comment: Proceedings of the 2013 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM'13), 2013
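
    The "pairwise maximization of similarity" mentioned in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, assuming element-wise maximization over per-feature similarity matrices followed by average-linkage clustering; the feature names, the clustering algorithm, and the threshold are illustrative assumptions, not the authors' exact pipeline.

```python
# Hedged sketch: combine several per-feature similarity matrices by taking
# their element-wise maximum, then cluster messages on the combined score.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def combine_similarities(sim_matrices):
    """Element-wise maximum over a list of (n x n) similarity matrices in [0, 1]."""
    return np.maximum.reduce(sim_matrices)

def cluster_messages(sim_matrices, threshold=0.5):
    sim = combine_similarities(sim_matrices)
    dist = 1.0 - sim                          # turn similarity into a distance
    np.fill_diagonal(dist, 0.0)
    condensed = squareform(dist, checks=False)
    Z = linkage(condensed, method="average")  # average-linkage clustering
    return fcluster(Z, t=1.0 - threshold, criterion="distance")

# Toy example: three messages, two feature-specific similarity matrices
content_sim = np.array([[1.0, 0.9, 0.1], [0.9, 1.0, 0.2], [0.1, 0.2, 1.0]])
network_sim = np.array([[1.0, 0.3, 0.1], [0.3, 1.0, 0.1], [0.1, 0.1, 1.0]])
print(cluster_messages([content_sim, network_sim]))  # e.g. [1 1 2]
```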

    A Penalty Method for the Numerical Solution of Hamilton-Jacobi-Bellman (HJB) Equations in Finance

    We present a simple and easy-to-implement method for the numerical solution of a rather general class of Hamilton-Jacobi-Bellman (HJB) equations. In many cases, the considered problems have only a viscosity solution, to which, fortunately, many intuitive (e.g. finite-difference-based) discretisations can be shown to converge. However, especially when using fully implicit time-stepping schemes with their desirable stability properties, one is still faced with the considerable task of solving the resulting nonlinear discrete system. In this paper, we introduce a penalty method which approximates the nonlinear discrete system to first order in the penalty parameter, and we show that an iterative scheme can be used to solve the penalised discrete problem in finitely many steps. We include a number of examples from mathematical finance for which the described approach yields a rigorous numerical scheme, and present numerical results. Comment: 18 Pages, 4 Figures. This updated version has a slightly more detailed introduction. In the current form, the paper will appear in SIAM Journal on Numerical Analysis.
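
    As a rough illustration of the penalty idea, consider the discrete obstacle-type problem min(A v - b, v - g) = 0 (componentwise), a standard instance from mathematical finance such as American option pricing; the abstract's class of HJB equations is more general. The penalised system A v = b + rho * max(g - v, 0) approximates it with an error of order 1/rho, and the sketch below solves the penalised system by an active-set-style penalty iteration that terminates in finitely many steps. This is a hedged sketch of the general technique under these assumptions, not the paper's exact scheme.

```python
# Minimal penalty iteration for the discrete obstacle-type problem
#   min(A v - b, v - g) = 0,
# solved via the penalised equation A v - b - rho * max(g - v, 0) = 0.
import numpy as np

def penalty_iteration(A, b, g, rho=1e6, max_iter=50, tol=1e-10):
    v = np.linalg.solve(A, b)                  # unconstrained starting guess
    for _ in range(max_iter):
        active = (v < g).astype(float)         # components where the penalty is "on"
        # Freeze the active set and solve the resulting linear system:
        # (A + rho * diag(active)) v_new = b + rho * active * g
        M = A + rho * np.diag(active)
        rhs = b + rho * active * g
        v_new = np.linalg.solve(M, rhs)
        if np.max(np.abs(v_new - v)) < tol:    # active set has stabilised
            return v_new
        v = v_new
    return v
```

    Because there are only finitely many possible active sets and the iterates are monotone for suitable A, the loop cannot cycle, which is the mechanism behind termination in finitely many steps.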

    On the relative intensity of Poisson’s spot

    The Fresnel diffraction phenomenon referred to as Poisson’s spot or the spot of Arago has, besides its historical significance, become relevant in a number of fields. Among them are, for example, fundamental tests of the superposition principle in the transition from quantum to classical physics and the search for extra-solar planets using starshades. Poisson’s spot refers to the constructive on-axis wave interference in the shadow of any spherical or circular obstacle. While the spot’s intensity is equal to that of the undisturbed field in the plane-wave picture, in general it depends on a number of factors, namely the size and wavelength of the source, the size and surface corrugation of the diffraction obstacle, and the distances between source, obstacle and detector. The intensity can be calculated by solving the Fresnel–Kirchhoff diffraction integral numerically, which, however, tends to be computationally expensive. We have therefore devised an analytical model for the on-axis intensity of Poisson’s spot relative to the intensity of the undisturbed wave field, and successfully validated it both with a simple light-diffraction setup and with numerical methods. The model will be useful for optimizing future Poisson-spot matter-wave diffraction experiments and for determining under what experimental conditions the spot can be observed.
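
    The plane-wave statement above can be checked with a standard textbook calculation (a generic Fresnel-regime argument, not the authors' more general model). In the Fresnel approximation, the on-axis field behind a circular aperture of radius R at distance b is combined with Babinet's principle:

```latex
% On-axis Fresnel field behind a circular aperture of radius R at distance b
U_{\mathrm{ap}} = U_0\, e^{ikb}\bigl(1 - e^{\,ikR^2/(2b)}\bigr), \qquad k = 2\pi/\lambda
% Babinet's principle: aperture field + disc field = free field
U_{\mathrm{disc}} = U_0\, e^{ikb} - U_{\mathrm{ap}} = U_0\, e^{ikb}\, e^{\,ikR^2/(2b)}
\quad\Longrightarrow\quad |U_{\mathrm{disc}}|^2 = |U_0|^2
```

    So for an ideal opaque disc and a plane wave, the on-axis spot is exactly as bright as the undisturbed wave, independent of R, b and the wavelength; finite source size and edge corrugation, the factors listed above, reduce this ideal value.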

    Penalty Methods for the Solution of Discrete HJB Equations -- Continuous Control and Obstacle Problems

    In this paper, we present a novel penalty approach for the numerical solution of continuously controlled HJB equations and HJB obstacle problems. Our results include estimates of the penalisation error for a class of penalty terms, and we show that variations of Newton's method can be used to obtain globally convergent iterative solvers for the penalised equations. Furthermore, we discuss under what conditions local quadratic convergence of the iterative solvers can be expected. We include numerical results demonstrating the competitiveness of our methods. Comment: 31 Pages, 7 Figures.
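
    The Newton-type solvers mentioned here can be sketched generically (a hedged outline; the specific penalty term below is an assumption and the paper's schemes may differ in detail). Writing the penalised system for an obstacle problem as F_rho(v) = A v - b - rho * max(g - v, 0) = 0, the function is piecewise smooth, and a semismooth Newton step solves a linear system built from a generalized Jacobian:

```latex
J_k = A + \rho\,\mathrm{diag}\!\bigl(\mathbf{1}_{\{v^k < g\}}\bigr),
\qquad
v^{k+1} = v^k - J_k^{-1} F_\rho(v^k)
```

    Near the solution such iterations converge locally quadratically for semismooth penalty terms, and with suitable globalisation (e.g. damping or monotone iteration) one obtains the globally convergent behaviour the abstract refers to.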

    What are the triggers of Asian visitor satisfaction and loyalty in the Korean heritage site?

    Based on complexity theory, this study examines a configurational model that uses motivation antecedents and demographic configurations to explore the causal recipes that lead to high and low levels of Asian visitor satisfaction and loyalty. Data were collected from 183 Chinese and Japanese visitors to the Hanok heritage site in Seoul, South Korea. Asymmetric modeling using fuzzy-set qualitative comparative analysis was applied, and combinations of conditions leading to the desired behavioral outcomes were identified. Hanok experience from the motivation configuration and gender from the demographic configuration emerged as necessary conditions for making visitors satisfied and loyal. Key tenets of complexity theory are supported by the study's findings.
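
    For readers unfamiliar with the method, necessary-condition claims in fuzzy-set QCA rest on standard consistency and coverage measures. The sketch below computes them for toy membership scores; the scores, variable names and the 0.9 consistency benchmark commonly used in the literature are illustrative assumptions, not the study's data.

```python
# Standard fsQCA necessity measures: consistency = sum(min(X, Y)) / sum(Y),
# coverage = sum(min(X, Y)) / sum(X), for fuzzy memberships X (condition)
# and Y (outcome) in [0, 1].
import numpy as np

def necessity_consistency(condition, outcome):
    """How well the outcome is a subset of the condition (necessity test)."""
    return np.minimum(condition, outcome).sum() / outcome.sum()

def necessity_coverage(condition, outcome):
    """Relevance of the necessary condition (guards against trivialness)."""
    return np.minimum(condition, outcome).sum() / condition.sum()

# Toy fuzzy membership scores for five respondents (illustrative only)
hanok_experience = np.array([0.9, 0.8, 0.7, 0.6, 0.4])
satisfaction     = np.array([0.8, 0.7, 0.6, 0.5, 0.3])
print(necessity_consistency(hanok_experience, satisfaction))  # 1.00 here
print(necessity_coverage(hanok_experience, satisfaction))     # ~0.85
```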

    Epitaxy and magnetotransport of Sr_2FeMoO_6 thin films

    Epitaxial thin films of Sr_2FeMoO_6 have been prepared on (100) SrTiO_3 substrates by pulsed-laser deposition. Epitaxial growth is achieved already at a deposition temperature of 320 °C. Depending on the deposition parameters, the films show metallic or semiconducting behavior. At high (low) deposition temperatures the Fe,Mo sublattice has a rock-salt (random) structure. The metallic samples have a large negative magnetoresistance which peaks at the Curie temperature. The magnetic moment was determined to be 4 mu_B per formula unit (f.u.), in agreement with the value expected for an ideal ferrimagnetic arrangement. We found an ordinary Hall coefficient of -6.01x10^{-10} m^3/As at 300 K, corresponding to an electron-like charge-carrier density of 1.3 per Fe,Mo pair. In the semiconducting films the magnetic moment is reduced to 1 mu_B/f.u. due to disorder in the Fe,Mo sublattice. In low fields an anomalous hole-like contribution dominates the Hall voltage; it vanishes at low temperatures for the metallic films only. Comment: Institute of Physics, University of Mainz, Germany, 4 pages, including 5 pictures and 1 Table, submitted to Phys. Rev.
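
    The quoted carrier density per Fe,Mo pair follows from a one-band reading of the ordinary Hall coefficient, n = 1/(|R_H| e). A quick back-of-the-envelope check is sketched below; the Sr_2FeMoO_6 lattice parameters used for the formula-unit volume are typical literature values and are an assumption, since the abstract does not state them.

```python
# One-band Hall estimate: carrier density from R_H, converted to carriers
# per formula unit (one Fe,Mo pair). Lattice parameters a ~ 5.57 A,
# c ~ 7.90 A with 2 f.u. per tetragonal cell are assumed, not from the abstract.
e = 1.602e-19          # elementary charge [C]
R_H = 6.01e-10         # |ordinary Hall coefficient| at 300 K [m^3/(A s)]

n = 1.0 / (R_H * e)    # carrier density [1/m^3]

a, c = 5.57e-10, 7.90e-10      # tetragonal lattice parameters [m]
v_fu = a * a * c / 2.0         # volume per formula unit [m^3]

print(f"n = {n:.2e} m^-3")                           # ~1.0e28 m^-3
print(f"carriers per Fe,Mo pair = {n * v_fu:.2f}")   # ~1.3, as quoted
```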

    Efficient cosmological parameter sampling using sparse grids

    We present a novel method to significantly speed up cosmological parameter sampling. The method relies on constructing an interpolation of the CMB log-likelihood based on sparse grids, which is used as a shortcut for the likelihood evaluation. We obtain excellent results over a large region in parameter space, comprising about 25 log-likelihoods around the peak, and we reproduce the one-dimensional projections of the likelihood almost perfectly. In speed and accuracy, our technique is competitive with existing approaches that accelerate parameter estimation based on polynomial interpolation or neural networks, while having some advantages over them. In our method, there is no danger of creating unphysical wiggles, as can be the case for polynomial fits of high degree. Furthermore, we do not require a long training time as for neural networks; rather, the construction of the interpolation is determined by the time it takes to evaluate the likelihood at the sampling points, which can be parallelised to an arbitrary degree. Our approach is completely general, and it can adaptively exploit the properties of the underlying function. We can thus apply it to any problem where an accurate interpolation of a function is needed. Comment: Submitted to MNRAS, 13 pages, 13 figures.
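
    The shortcut pattern described above, evaluating an interpolant of the log-likelihood instead of calling the full likelihood code, can be sketched in a few lines. In the sketch below, a small full tensor grid and SciPy's RegularGridInterpolator stand in for the paper's sparse-grid construction (which is what keeps the idea tractable in higher dimensions), and a dummy Gaussian replaces the expensive CMB likelihood; all names and parameter ranges are illustrative assumptions.

```python
# Surrogate-likelihood sketch: build an interpolant of the log-likelihood once,
# then query it inside the sampler instead of the expensive evaluation.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def expensive_loglike(omega_m, h):
    """Placeholder for a costly CMB log-likelihood evaluation."""
    return -0.5 * (((omega_m - 0.3) / 0.02) ** 2 + ((h - 0.7) / 0.05) ** 2)

# Build the surrogate once; these evaluations are independent and can be
# run in parallel, as noted in the abstract.
omega_grid = np.linspace(0.25, 0.35, 21)
h_grid = np.linspace(0.6, 0.8, 21)
values = np.array([[expensive_loglike(om, h) for h in h_grid] for om in omega_grid])
surrogate = RegularGridInterpolator((omega_grid, h_grid), values)

# Inside an MCMC loop, the surrogate replaces the expensive call.
print(expensive_loglike(0.31, 0.72), surrogate([[0.31, 0.72]])[0])
```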