
    Managing participation through modal affordances on Twitter

    On Twitter, retweets function as a method of reporting speech and spreading the talk of other users. We propose that changes to the interface and mechanisms of Twitter have led to the coexistence of two complementary forms of retweeting. The Preserving Retweet, enabled by the Twitter interface, directly reports speech and retains attribution to the original author, but it does not allow for any modification or indication of stance. The Adapting Retweet, a user-created norm studied by boyd et al. (2010), allows users to add comments to pre-existing tweets, but at the cost of confusing attribution. Using an updated form of Goffman's participation framework, we analyze the use of these two types of retweets and their impact on attribution.
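The contrast between the two retweet forms can be made concrete with a small data-model sketch (our illustration with hypothetical names; the paper itself proposes no code):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Tweet:
    author: str
    text: str

@dataclass
class PreservingRetweet:
    """Interface-native retweet: original text and author kept verbatim."""
    original: Tweet

    def attributed_author(self) -> str:
        return self.original.author  # attribution is unambiguous

@dataclass
class AdaptingRetweet:
    """User-convention retweet ('RT @user: ...') with an optional comment."""
    original: Tweet
    retweeter: str
    comment: Optional[str] = None

    def rendered(self) -> str:
        prefix = f"{self.comment} " if self.comment else ""
        return f"{prefix}RT @{self.original.author}: {self.original.text}"

# the rendered string interleaves the retweeter's words with the original
# author's, which is exactly where attribution can blur
print(AdaptingRetweet(Tweet("alice", "Big news"), "bob", "Wow").rendered())
```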

    Altmetrics, Legacy Scholarship, and Scholarly Legacy

    When using alternative metrics (altmetrics) to investigate the impact of a scholar’s work, researchers and librarians are typically cautioned that altmetrics will be less useful for older works of scholarship. This is because it is difficult to collect social media and other attention retroactively, and the numbers will be lower if the work was published before social media marketing and promotion were widely accepted in a field. In this article, we argue that altmetrics can provide useful information about older works in the form of documenting renewed attention to past scholarship as part of a scholar’s legacy. Using the altmetrics profile of the late Dr. Thomas E. Starzl, often referred to as “the father of modern transplantation”, we describe two cases where altmetrics provided information about renewed interest in his works: a controversy about race and genetics that shows the ongoing impact of a particular work, and posthumous remembrances by colleagues which reveal his scholarly legacy.

    Social Systems in Virtual Worlds: Building a better looking raid loot system in World of Warcraft using the IAD framework.

    Online multiplayer games and virtual worlds are difficult to design; they contain economies and other complex systems where the decisions of one player can have far-reaching implications for other players. When considering the welfare of players, game designers have a difficult job; they must create systems that optimize outcomes for a body of players who often have different motivations for playing the game (Bartle, 1996; Cummings & Ross, 2011; Yee, 2006). In addition, knowledge of how to create and tune game systems - when it does exist - often remains institutional knowledge. There is no shared theory of how to build and maintain a virtual society. The recent development of telemetry systems has helped to mitigate this issue by providing developers with real-time feedback regarding the state of a population, but telemetry systems are not theoretical. Machine learning does not provide developers with insights into why outcomes occur, or how to develop social systems that are optimal for engagement, enjoyment, or the formation of meaningful relationships. This paper proposes a tool that can provide game developers and researchers with such insights. The Institutional Analysis and Development (IAD) Framework was developed specifically for identifying the universal elements of institutions and draws from decades of research regarding human behavior in various institutional settings (Ostrom, 2005). The IAD framework is not a prescriptive solution detailing how to solve all institutional problems; rather, it is a tool for evaluating the arrangement of different institutional elements. Coupled with various theories of human behavior (game theory, economic theory, social psychological theory), the IAD can provide predictions about player behavior and macro-social outcomes.
When used correctly, it can provide developers with theory that explains why social systems can equilibrate into undesirable outcomes - anti-social behavior or a suboptimal distribution of resources - and identifies institutional designs that can solve social problems. In this paper we examine how the IAD framework can be applied to an existing arrangement, specifically targeting the Looking-for-Raid (LFR) loot system of World of Warcraft. The LFR system, released in November 2011, is a game mechanic that matches players into 25-person groups and delivers them into a raid. Once there, the group navigates through a dungeon filled with enemies, occasionally encountering a powerful enemy that requires coordinated teamwork to defeat. Along the way the group finds a few valuable items, which are distributed by allowing eligible players to enter a lottery. In most cases, the 25 players in the group are strangers. They generally have no knowledge of their fellow group members' playing abilities, and no knowledge about their needs or strategies when it comes to obtaining loot. All of the loot items are tagged for specific classes and roles, allowing only players who fit the prerequisites to enter the lottery; however, players who fit the prerequisites can enter the lottery whether they actually need the item or not. This system relies on a player to honestly report their need of an item; in practice, most players report rampant greed (e.g. players "needing" items that they already possess) and dissatisfaction with the system. Interestingly, most players report honest intentions and a desire for honest behavior. Unfortunately, most players also have expectations that others will be greedy, and most players report changing their own behavior - acting in greedier ways - because of expectations of greed.
One example of this change in behavior can be seen in loot sharing: players bring friends along on the raid, collaborate to "need" on items, and share them with each other to improve the odds of winning a piece of loot. Even though these players are working within the rules of the system to increase their odds of winning, they are also contributing to the very practice that they find undesirable. In this paper, we use two focus-group interviews and surveys distributed to 317 active World of Warcraft players to analyze players' expectations regarding loot in the LFR system as well as their reported behavior in loot situations. We demonstrate that the reported behavior and expectations follow the predictions of the IAD model (with game theory) under the assumptions of players as boundedly rational agents playing an N-player mixed-motive game. We explore the theoretical potential of the IAD for game developers who are attempting to predict the outcomes of social systems within games, and we also propose a few potential solutions involving information/communication channels, social norms with sanctions, and auctions that could shift the equilibrium of the current system to a more optimal outcome.
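The incentive structure described above can be sketched as a toy expected-payoff calculation (our illustration under simplified assumptions, not the authors' model; the need and greed fractions are made up):

```python
def lottery_win_probability(other_rollers: int) -> float:
    """Chance of winning a uniform item lottery against `other_rollers` rivals."""
    return 1.0 / (1 + other_rollers)

n_players = 25
p_need = 0.2                     # assumed fraction who genuinely need the item
for p_greed in (0.0, 0.5, 1.0):  # assumed fraction of non-needers rolling anyway
    expected_rivals = round((n_players - 1) * (p_need + (1 - p_need) * p_greed))
    p_win = lottery_win_probability(expected_rivals)
    print(f"greed={p_greed:.1f}: honest needer wins with p~{p_win:.2f}")
# as expected greed rises, an honest needer's chances fall, which is the
# pressure toward greedy play that the survey respondents describe
```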

    Informing the Digital Archive with Altmetrics

    Altmetrics can be used to understand impact beyond citations, particularly for digitized collections. As cultural institutions look to pursue more active engagement with communities of practice, altmetrics help archivists understand the conversations happening in real time that will allow them to provide access to the most relevant materials. Through the use of case studies, we aim to demonstrate how applying altmetrics while considering the curation of digital collections can allow archivists to stay engaged with target communities outside traditional channels, demonstrating both the applicability of altmetrics to legacy scholarly work and the value of digitization as an access method.

    The Influence of Particle Concentration and Bulk Characteristics on Polarized Oceanographic Lidar Measurements

    Oceanographic lidar measurements of the linear depolarization ratio, δ, contain information on the bulk characteristics of marine particles that could improve our ability to study ocean biogeochemistry. However, a scarcity of information on the polarized light-scattering properties of marine particles and the lack of a framework for separating single and multiple scattering effects on δ have hindered the development of polarization-based retrievals of bulk particle properties. To address these knowledge gaps, we made single scattering measurements of δ for several compositionally and morphologically distinct marine particle assemblages. We then used a bio-optical model to explore the influence of multiple scattering and particle characteristics on lidar measurements of δ made during an expedition to sample a mesoscale coccolithophore bloom. Laboratory measurements of linear depolarization revealed complex dependencies on particle shape, size, and composition that were consistent with scattering simulations for idealized nonspherical particles. Model results suggested that the variability in δ measured during the field expedition was driven predominantly by shifts in particle concentration rather than by their bulk characteristics. However, model estimates of δ improved when calcite particles were represented by a distinct particle class, highlighting the influence of bulk particle properties on δ. To advance polarized lidar retrievals of bulk particle properties and to constrain the uncertainty in satellite lidar retrievals of particulate backscattering, these results point to the need for future efforts to characterize the variability of particulate depolarization in the ocean and to quantify the sensitivity of operational ocean lidar systems to multiple scattering.
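For readers outside the lidar community, the linear depolarization ratio referred to above is conventionally defined from the two received polarization channels (standard lidar usage, not restated in the abstract):

```latex
\delta = \frac{P_{\perp}}{P_{\parallel}}
```

where \(P_{\parallel}\) and \(P_{\perp}\) are the backscattered powers co- and cross-polarized with the transmitted linearly polarized pulse. Homogeneous spheres return \(\delta \approx 0\) in the single-scattering limit, so both particle nonsphericity and multiple scattering act to raise \(\delta\).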

    Cosmic shear requirements on the wavelength-dependence of telescope point spread functions

    Cosmic shear requires high precision measurement of galaxy shapes in the presence of the observational Point Spread Function (PSF) that smears out the image. The PSF must therefore be known for each galaxy to a high accuracy. However, for several reasons, the PSF is usually wavelength dependent; the differences between the spectral energy distributions of the observed objects therefore introduce further complexity. In this paper we investigate the effect of the wavelength-dependence of the PSF, focusing on instruments in which the PSF size is dominated by the diffraction limit of the telescope and which use broad-band filters for shape measurement. We first calculate biases on cosmological parameter estimation from cosmic shear when the stellar PSF is used uncorrected. Using realistic galaxy and star spectral energy distributions and populations and a simple three-component circular PSF, we find that the colour-dependence must be taken into account for the next generation of telescopes. We then consider two different methods for removing the effect: (i) the use of stars of the same colour as the galaxies and (ii) estimation of the galaxy spectral energy distribution using multiple colours and using a telescope model for the PSF. We find that both of these methods correct the effect to levels below the tolerances required for per-cent level measurements of dark energy parameters. Comparison of the two methods favours the template-fitting method because its efficiency is less dependent on galaxy redshift than the broad-band colour method and takes full advantage of deeper photometry. Comment: 10 pages, 8 figures, version accepted for publication in MNRAS.
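The colour-dependence discussed above follows directly from the diffraction limit: the monochromatic PSF width scales roughly as λ/D, so the effective broad-band PSF is an SED-weighted average of the monochromatic widths. A minimal numerical sketch (our construction; the telescope diameter, band, and SED shapes are assumptions, not the paper's setup):

```python
import numpy as np

def effective_psf_fwhm(wavelengths_nm, sed, throughput, diameter_m=1.2):
    """Flux-weighted diffraction-limited FWHM (arcsec) over a filter band."""
    lam = np.asarray(wavelengths_nm) * 1e-9          # to metres
    weight = np.asarray(sed) * np.asarray(throughput)
    fwhm_rad = 1.025 * lam / diameter_m              # Airy-core FWHM
    fwhm_arcsec = fwhm_rad * 206265.0
    return np.average(fwhm_arcsec, weights=weight)

lam = np.linspace(550, 900, 100)       # an assumed broad optical band
flat = np.ones_like(lam)               # flat filter throughput
blue = (lam / 550.0) ** -2             # toy blue, star-like spectrum
red = (lam / 550.0) ** 2               # toy red, galaxy-like spectrum
print(effective_psf_fwhm(lam, blue, flat), effective_psf_fwhm(lam, red, flat))
# a redder source sees a larger effective PSF, so using uncorrected stellar
# PSFs on redder galaxies biases the measured shapes
```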

    Predicting spectral features in galaxy spectra from broad-band photometry

    We explore the prospects of predicting emission line features present in galaxy spectra given broad-band photometry alone. There is a general consensus that colours, and spectral features, most notably the 4000 Å break, can predict many properties of galaxies, including star formation rates, and hence they could infer some of the line properties. We argue that these techniques have great prospects in helping us understand line emission in extragalactic objects and might speed up future galaxy redshift surveys if they are to target emission line objects only. We use two independent methods, Artificial Neural Networks (based on the ANNz code) and Locally Weighted Regression (LWR), to retrieve correlations present in the colour N-dimensional space and to predict the equivalent widths present in the corresponding spectra. We also investigate how well it is possible to separate galaxies with and without lines from broad-band photometry only. We find, unsurprisingly, that recombination lines can be well predicted by galaxy colours. However, among collisional lines some can and some cannot be predicted well from galaxy colours alone, without any further redshift information. We also use our techniques to estimate how much of the information contained in spectral diagnostic diagrams can be recovered from broad-band photometry alone. We find that it is possible to classify AGN and star formation objects relatively well using colours only. We suggest that this technique could be used to considerably improve redshift surveys such as the upcoming FMOS survey and the planned WFMOS survey. Comment: 10 pages, 7 figures, submitted to MNRAS.
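Locally weighted regression, one of the two methods named above, can be sketched in a few lines of numpy (the toy data and bandwidth are our own; this is not the paper's ANNz/LWR pipeline):

```python
import numpy as np

def lwr_predict(X, y, x_query, bandwidth=0.5):
    """Predict y at x_query via a Gaussian-kernel weighted linear fit."""
    d2 = np.sum((X - x_query) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))      # nearby points weigh more
    A = np.hstack([np.ones((len(X), 1)), X])      # design matrix w/ intercept
    sw = np.sqrt(w)[:, None]                      # weighted least squares
    beta, *_ = np.linalg.lstsq(sw * A, sw[:, 0] * y, rcond=None)
    return float(np.concatenate([[1.0], x_query]) @ beta)

rng = np.random.default_rng(0)
colours = rng.uniform(-1, 1, size=(200, 2))       # stand-in galaxy colours
ew = 3.0 * colours[:, 0] - colours[:, 1]          # stand-in equivalent widths
pred = lwr_predict(colours, ew, np.array([0.2, -0.1]))
print(pred)   # the noiseless linear relation is recovered: ~0.7
```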

    Accuracy of photometric redshifts for future weak lensing surveys from space

    Photometric redshifts are a key tool to extract as much information as possible from planned cosmic shear experiments. In this work we aim to test the performance that can be achieved with observations in the near-infrared from space and in the optical from the ground. This is done by performing realistic simulations of multi-band observations of a patch of the sky, and submitting these mock images to software usually applied to real images to extract the photometry and then a redshift estimate for each galaxy. In this way we mimic the most relevant sources of uncertainty present in real data analysis, including blending and light pollution between galaxies. As an example we adopt the infrared setup of the ESA-proposed Euclid mission, while we simulate different observations in the optical, modifying filters, exposure times and seeing values. Finally, we consider directly some future ground-based experiments, such as LSST, Pan-STARRS and DES. The results highlight the importance of u-band observations, especially to discriminate between low (z < 0.5) and high (z ~ 3) redshifts, and the need for good observing sites, with seeing FWHM < 1.0 arcsec. The former of these indications clearly favours the LSST experiment as a counterpart for space observations, while for the other experiments we need to exclude at least 15% of the galaxies to reach a precision in the photo-zs below 0.05. Comment: 11 pages, to be published in MNRAS. Minor changes to match the published version.
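The quoted precision criterion (scatter below 0.05 after excluding at least 15% of galaxies) corresponds to the usual photo-z statistic on the normalized residual dz/(1+z), sketched here with synthetic data (our helper, not the paper's code):

```python
import numpy as np

def photoz_scatter(z_phot, z_spec, reject_fraction=0.15):
    """RMS of dz/(1+z) after discarding the worst `reject_fraction` of galaxies."""
    dz = (np.asarray(z_phot) - np.asarray(z_spec)) / (1.0 + np.asarray(z_spec))
    keep = np.argsort(np.abs(dz))[: int(len(dz) * (1.0 - reject_fraction))]
    return float(np.sqrt(np.mean(dz[keep] ** 2)))

rng = np.random.default_rng(1)
z_true = rng.uniform(0.2, 3.0, 1000)
z_est = z_true + rng.normal(0.0, 0.04, 1000) * (1 + z_true)  # core scatter
z_est[:100] += rng.choice([-1.5, 1.5], 100)                  # catastrophic outliers
# clipping 15% removes the catastrophic failures, leaving only the core scatter
print(photoz_scatter(z_est, z_true))
```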