11 research outputs found
Bit by (Twitch) Bit: "Platform Capture" and the Evolution of Digital Platforms
This article considers the history of donation management tools on the livestreaming platform Twitch. In particular, it details the technical and economic contexts that led to the development of Twitch Bits, a first-party donation management service introduced in 2016. Two contributions to research on the platformization of cultural production are made. One, this article expands the empirical record regarding Twitch by chronicling the role of viewer donations in livestreaming since 2010, as well as the many tools that have facilitated this practice. It is argued that this history traces the complex and co-productive interactions between Twitch as a sociotechnical architecture and a political economy. Two, by considering how the first-party donation tool Twitch Bits has gradually challenged the dominance of the third-party tools that preceded it, this article theorizes the notion of platform capture, a critical rereading of platform envelopment, a popular concept in business studies. Ultimately, it is argued that platform capture demonstrates how platform owners leverage power asymmetries over dependents to aid in their platform's technical evolution.
Recasting Twitch: Livestreaming, Platforms, and New Frontiers in Digital Journalism
Despite Twitch's dominant position in Western livestreaming markets, institutional journalists rarely produce content on the platform. This paper investigates how journalistic practices, cultures, business models, and institutions approach Twitch through three empirical sites: The Washington Post's experimentation with the app, left-leaning political influencer Hasan Piker, and the pro-QAnon 24/7 "news" channel, Patriots' Soapbox. The cases demonstrate how newsmaking on Twitch flouts traditional journalists' ideological and occupational boundaries, exploiting the platform's features and affordances to enroll the audience in a live broadcasting experience.
Pandemic Drugs at Pandemic Speed: Infrastructure for Accelerating COVID-19 Drug Discovery with Hybrid Machine Learning- and Physics-based Simulations on High Performance Computers
The race to meet the challenges of the global pandemic has served as a reminder that the existing drug discovery process is expensive, inefficient, and slow. A major bottleneck lies in screening the vast number of potential small molecules to shortlist lead compounds for antiviral drug development. New opportunities to accelerate drug discovery lie at the interface between machine learning methods, in this case developed for linear accelerators, and physics-based methods. The two in silico methods each have their own advantages and limitations which, interestingly, complement each other. Here, we present an innovative infrastructural development that combines both approaches to accelerate drug discovery. The scale of the potential resulting workflow is such that it depends on supercomputing to achieve extremely high throughput. We have demonstrated the viability of this workflow for the study of inhibitors for four COVID-19 target proteins, as well as our ability to perform the required large-scale calculations to identify lead antiviral compounds through repurposing on a variety of supercomputers.
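The core idea described here, using a cheap approximate scorer to triage a large library before handing a shortlist to an expensive physics-based method, can be sketched in a few lines. This is an illustrative toy, not the authors' pipeline: the two scoring functions below are stand-ins (a real workflow would call an ML surrogate and a docking/MD engine), and all names are hypothetical.

```python
# Illustrative sketch of an ML-prefilter + physics-rescoring funnel.
# surrogate_score and physics_score are toy stand-ins, NOT real models.

def surrogate_score(smiles: str) -> float:
    """Cheap ML stand-in: a toy heuristic (lower = better, as in docking)."""
    return -len(smiles)

def physics_score(smiles: str) -> float:
    """Expensive physics-based stand-in (docking/MD in a real workflow)."""
    return -len(smiles) + smiles.count("N")  # toy refinement of the cheap score

def hybrid_screen(library, keep_fraction=0.01):
    """Score everything cheaply, then rescore only the top fraction."""
    ranked = sorted(library, key=surrogate_score)
    shortlist = ranked[: max(1, int(len(ranked) * keep_fraction))]
    return sorted(shortlist, key=physics_score)

leads = hybrid_screen(
    ["CCO", "CCN", "c1ccccc1", "CC(=O)Nc1ccc(O)cc1"], keep_fraction=0.5
)
```

The complementarity the abstract points to lives in `keep_fraction`: the cheaper and less accurate the surrogate, the larger the fraction that must be passed through to the physics-based stage to avoid losing true hits.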
IMPECCABLE: Integrated Modeling PipelinE for COVID Cure by Assessing Better LEads
The drug discovery process currently employed in the pharmaceutical industry typically requires about 10 years and $2–3 billion to deliver one new drug. This is both too expensive and too slow, especially in emergencies like the COVID-19 pandemic. In silico methodologies need to be improved both to select better lead compounds, so as to improve the efficiency of later stages in the drug discovery protocol, and to identify those lead compounds more quickly. No known methodological approach can deliver this combination of higher quality and speed. Here, we describe an Integrated Modeling PipelinE for COVID Cure by Assessing Better LEads (IMPECCABLE) that employs multiple methodological innovations to overcome this fundamental limitation. We also describe the computational framework that we have developed to support these innovations at scale, and characterize the performance of this framework in terms of throughput, peak performance, and scientific results. We show that individual workflow components deliver 100× to 1000× improvement over traditional methods, and that the integration of methods, supported by scalable infrastructure, speeds up drug discovery by orders of magnitude. IMPECCABLE has screened ≈ 10^11 ligands and has been used to discover a promising drug candidate. These capabilities have been used by the US DOE National Virtual Biotechnology Laboratory and the EU Centre of Excellence in Computational Biomedicine.
THE CONSTRUCTION OF ALTERNATIVE FACTS: DARK PARTICIPATION AND KNOWLEDGE PRODUCTION IN THE QANON CONSPIRACY
QAnon is a right-wing conspiracy theory based on a series of posts ("Drops") made to the imageboard 8chan by "Q", an anonymous poster who claims to be a Trump administration insider and encourages their followers ("Bakers") to conduct research to interpret and find hidden truths ("Bread") behind current events. In this paper, we argue that QAnon Bakers adopt a "scientistic self" by producing and maintaining specific facts and theories that enable the conspiracy's social and political cohesion over time. Rather than dismissing Q researchers' conclusions out of hand, we adopt science studies' symmetry principle to consider the tools and techniques of Baking. We argue that the institutional character of Baking distinguishes QAnon from other online conspiracy communities, which primarily rely on anecdotal evidence or sow doubt in scientific consensuses. Q research, by contrast, is intended to produce certainty through the systematic construction of alternative facts. In making this argument, we share and build upon other scholars' critiques of participatory media. Indeed, we conclude that it is precisely the participatory affordances of the social web that have made QAnon so potent.
The prevalence and impact of psychiatric symptoms in an undiagnosed diseases clinical program.
In 2008, the NIH launched an undiagnosed diseases program to investigate difficult-to-diagnose and, typically, multi-system diseases. The objective of this study was to evaluate the presence of psychiatric symptoms or psychiatric diagnoses in a cohort of patients seeking care at the Emory Special Diagnostic Service clinic. We hypothesized that psychiatric symptoms would be prevalent and associated with trauma exposure, decreased quality of life, and decreased functioning. This is a cross-sectional, retrospective analysis of 247 patients seen between February 7, 2014 and May 31, 2017. The data sources included the Emory Health History Questionnaire (HHQ), which had the work and social adjustment scale and the quality of life enjoyment and satisfaction questionnaire short form (Q-LES-Q) embedded in it; medical records; and the comprehensive standardized special diagnostic clinic forms. Primary outcomes were the presence of any psychiatric symptom, based on report of the symptom on the HHQ or in the medical record, or the presence of a confirmed pre-existing psychiatric disorder. Seventy-two percent of patients had at least one psychiatric symptom, while 24.3% of patients had a pre-existing psychiatric diagnosis. Patients with any psychiatric symptom had significantly diminished Q-LES-Q scores (45.27 ± 18.63) versus patients with no psychiatric symptoms (62.01 ± 21.57; t = 5.60, df = 225, p < 0.0001), and they had significantly greater functional disability. Patients with a psychiatric disorder also had significantly diminished Q-LES-Q scores (45.16 ± 17.28) versus those without a psychiatric diagnosis (51.85 ± 21.54; t = 2.11, df = 225, p = 0.036) but did not have significantly increased functional impairment. Both patients with psychiatric symptoms and those with psychiatric disorders had an increased prevalence of trauma. Psychiatric symptoms are prevalent in patients evaluated for undiagnosed disorders. The presence of any psychiatric symptom, with or without a formal psychiatric diagnosis, significantly decreases quality of life and functioning. This suggests that assessment for psychiatric symptoms should be part of the evaluation of individuals with undiagnosed disorders and may have important diagnostic and treatment implications.
AI-accelerated protein-ligand docking for SARS-CoV-2 is 100-fold faster with no significant change in detection
Protein–ligand docking is a computational method for identifying drug leads. The method is capable of narrowing a vast library of compounds down to a tractable size for downstream simulation or experimental testing, and it is widely used in drug discovery. While there has been progress in accelerating the scoring of compounds with artificial intelligence, few works have bridged these successes back to the virtual screening community in terms of utility and forward-looking development. We demonstrate the power of high-speed ML models by scoring 1 billion molecules in under a day (50,000 predictions per GPU per second). We showcase a docking workflow that uses surrogate AI-based models as a pre-filter to a standard docking workflow. Our workflow is ten times faster at screening a library of compounds than the standard technique, with an error rate of less than 0.01% in detecting the underlying best-scoring 0.1% of compounds. Our analysis of the speedup shows that another order-of-magnitude speedup must come from model accuracy rather than computing speed. To drive another order of magnitude of acceleration, we share a benchmark dataset consisting of 200 million 3D complex structures and 2D structure scores across a consistent set of 13 million "in-stock" molecules over 15 receptors, or binding sites, across the SARS-CoV-2 proteome. We believe this is strong evidence for the community to begin focusing on improving the accuracy of surrogate models, so as to enable screening massive compound libraries 100× or even 1000× faster than current techniques while reducing missed top hits. The technique outlined aims to be a fast drop-in replacement for docking when screening billion-scale molecular libraries.
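The reported throughput figures are internally consistent, and a back-of-the-envelope check (not from the paper itself) makes the "1 billion molecules in under a day" claim concrete: at 50,000 predictions per GPU per second, a billion molecules costs about 20,000 GPU-seconds, i.e. roughly 5.6 GPU-hours, well under a day even on a single accelerator.

```python
# Back-of-the-envelope check of the reported surrogate-model throughput.
molecules = 1_000_000_000   # 1 billion compounds (reported library size)
rate_per_gpu = 50_000       # predictions per GPU per second (reported rate)

gpu_seconds = molecules / rate_per_gpu
gpu_hours = gpu_seconds / 3600

print(f"{gpu_seconds:,.0f} GPU-seconds = {gpu_hours:.1f} GPU-hours")
# → 20,000 GPU-seconds = 5.6 GPU-hours
```

The same arithmetic explains the paper's conclusion about accuracy: once scoring costs only hours of GPU time, further wall-clock gains from faster hardware are marginal, so the next order of magnitude must come from models accurate enough to safely pre-filter a larger share of the library.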