1,425 research outputs found
Dynamics underlying Box-office: Movie Competition on Recommender Systems
We introduce a simple model to study movie competition in recommender
systems. Movies of heterogeneous quality compete against each other through
viewers' reviews and generate rich box-office dynamics. By assuming
mean-field interactions between the competing movies, we show that a runaway
effect of popularity spreading is triggered when a movie's reviews beat the
average review score, leading to box-office hits. The average review score thus
characterizes the critical movie quality necessary for the transition from
box-office bombs to blockbusters. The major factors affecting the critical
review score are examined. By iterating the mean-field dynamical equations, we
obtain qualitative agreement with simulations and real systems in the
dynamical forms of box-office revenue, revealing the significant role of
competition in understanding box-office dynamics.
Comment: 8 pages, 6 figures
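The runaway mechanism can be sketched in a few lines. This is a toy version under stated assumptions, not the paper's exact equations: each movie's audience multiplies by a growth factor proportional to the gap between its quality and the audience-weighted average review score (the mean-field coupling); the quality scale, growth rate, and initial audiences are all hypothetical.

```python
def simulate_competition(qualities, steps=50, rate=0.2):
    """Toy mean-field competition between movies: a movie whose quality
    beats the audience-weighted average review score keeps growing (the
    runaway effect); one below it decays toward a box-office bomb.

    qualities: per-movie quality/review score in [0, 1] (hypothetical scale).
    Returns the final audience size of each movie.
    """
    audience = [1.0] * len(qualities)
    for _ in range(steps):
        total = sum(audience)
        # mean-field coupling: every movie competes against the same
        # audience-weighted average review score
        avg_review = sum(q * a for q, a in zip(qualities, audience)) / total
        for i, q in enumerate(qualities):
            # word-of-mouth growth: positive above the mean, negative below
            audience[i] *= max(0.0, 1.0 + rate * (q - avg_review))
    return audience
```

With two movies of quality 0.9 and 0.5, the first grows without bound while the second shrinks: the critical quality separating the two outcomes is exactly the self-consistent average review score, as the abstract describes.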
Incentivizing High Quality Crowdwork
We study the causal effects of financial incentives on the quality of
crowdwork. We focus on performance-based payments (PBPs), bonus payments
awarded to workers for producing high quality work. We design and run
randomized behavioral experiments on the popular crowdsourcing platform Amazon
Mechanical Turk with the goal of understanding when, where, and why PBPs help,
identifying properties of the payment, payment structure, and the task itself
that make them most effective. We provide examples of tasks for which PBPs do
improve quality. For such tasks, the effectiveness of PBPs is not too sensitive
to the threshold for quality required to receive the bonus, while the magnitude
of the bonus must be large enough to make the reward salient. We also present
examples of tasks for which PBPs do not improve quality. Our results suggest
that for PBPs to improve quality, the task must be effort-responsive: the task
must allow workers to produce higher quality work by exerting more effort. We
also give a simple method to determine if a task is effort-responsive a priori.
Furthermore, our experiments suggest that all payments on Mechanical Turk are,
to some degree, implicitly performance-based in that workers believe their work
may be rejected if their performance is sufficiently poor. Finally, we propose
a new model of worker behavior that extends the standard principal-agent model
from economics to include a worker's subjective beliefs about his likelihood of
being paid, and show that the predictions of this model are in line with our
experimental findings. This model may be useful as a foundation for theoretical
studies of incentives in crowdsourcing markets.
Comment: This is a preprint of an article accepted for publication in WWW
© 2015 International World Wide Web Conference Committee
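The extended principal-agent idea can be illustrated with a minimal sketch. The belief and cost functions below are hypothetical stand-ins, not the paper's calibrated model: the worker picks the effort level that maximizes subjective expected payment minus effort cost.

```python
def optimal_effort(base_pay, bonus, belief, cost, effort_grid):
    """Worker's best response under subjective beliefs: maximize
    belief(e) * (base_pay + bonus) - cost(e) over a grid of effort
    levels. belief(e) is the worker's subjective probability of
    being paid (i.e. not rejected) at effort e; both belief and
    cost are illustrative assumptions."""
    return max(effort_grid,
               key=lambda e: belief(e) * (base_pay + bonus) - cost(e))

grid = [0.0, 0.25, 0.5, 0.75, 1.0]
rising = lambda e: min(1.0, 0.5 + 0.5 * e)  # belief grows with effort
certain = lambda e: 1.0                     # payment seen as guaranteed
quadratic_cost = lambda e: 0.3 * e ** 2

# A rising belief makes even a fixed payment implicitly
# performance-based: effort is positive with no bonus at all.
effort_with_beliefs = optimal_effort(1.0, 0.0, rising, quadratic_cost, grid)
effort_when_certain = optimal_effort(1.0, 0.0, certain, quadratic_cost, grid)
```

This reproduces the abstract's observation that all payments on Mechanical Turk are implicitly performance-based: with `rising` beliefs the worker exerts positive effort even without a bonus, whereas a worker who regards payment as certain exerts none.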
Meta-analysis derived atopic dermatitis (MADAD) transcriptome defines a robust AD signature highlighting the involvement of atherosclerosis and lipid metabolism pathways
BACKGROUND: Atopic dermatitis (AD) is a common inflammatory skin disease with limited treatment options. Several microarray experiments have been conducted on lesional (LS) and non-lesional (NL) AD skin to develop a genomic disease phenotype. Although these experiments have shed light on disease pathology, inter-study comparisons reveal large differences in the resulting sets of differentially expressed genes (DEGs), limiting the utility of direct comparisons across studies. METHODS: We carried out a meta-analysis combining four published AD datasets to define a robust disease profile, termed the meta-analysis derived AD (MADAD) transcriptome. RESULTS: This transcriptome enriches key AD pathways more strongly than the individual studies and associates AD with novel pathways, such as atherosclerosis signaling (IL-37, selectin E/SELE). We identified wide lipid abnormalities and, for the first time in vivo, correlated Th2 immune activation with downregulation of key epidermal lipids (FA2H, FAR2, ELOVL3), emphasizing the role of cytokines in barrier disruption in AD. Key AD “classifier genes” discriminate lesional from non-lesional skin and may be used to evaluate therapeutic responses. CONCLUSIONS: Our meta-analysis provides novel and powerful insights into AD disease pathology and reinforces the concept of AD as a systemic disease. ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (doi:10.1186/s12920-015-0133-x) contains supplementary material, which is available to authorized users.
Estimation of interdomain flexibility of N-terminus of factor H using residual dipolar couplings
Characterization of segmental flexibility is needed to understand the biological mechanisms of the very large category of functionally diverse proteins, exemplified by the regulators of complement activation, that consist of numerous compact modules or domains linked by short, potentially flexible, sequences of amino acid residues. The use of NMR-derived residual dipolar couplings (RDCs), measured in magnetically aligned media, to evaluate interdomain motion is established, but only for two-domain proteins. We focused on the three N-terminal domains (called CCPs or SCRs) of the important complement regulator, human factor H (i.e. FH1-3). These domains cooperate to facilitate cleavage of the key complement activation-specific protein fragment, C3b, forming iC3b, which no longer participates in the complement cascade. We refined a three-dimensional solution structure of recombinant FH1-3 based on nuclear Overhauser effects and RDCs. We then employed a rudimentary series of RDC datasets, collected in media containing magnetically aligned bicelles (disc-like particles formed from phospholipids) under three different conditions, to estimate interdomain motions. This circumvents the requirement of previous approaches for the technically difficult collection of five independent RDC datasets. More than 80% of conformers of this predominantly extended three-domain molecule exhibit flexions of <40°. Such segmental flexibility, together with the local dynamics of the hypervariable loop within domain 3, could facilitate recognition of C3b via initial anchoring and eventual reorganization of the modules to the conformation captured in the previously solved crystal structure of the C3b:FH1-4 complex.
BMQ
BMQ: Boston Medical Quarterly was published from 1950 to 1966 by the Boston University School of Medicine and the Massachusetts Memorial Hospitals.
Analyzing collaborative learning processes automatically
In this article we describe the emerging area of text classification research focused on the problem of collaborative learning process analysis, both from a broad perspective and more specifically in terms of a publicly available tool set called TagHelper tools. Analyzing the variety of pedagogically valuable facets of learners’ interactions is a time-consuming and effortful process. Improving automated analyses of such highly valued processes of collaborative learning by adapting and applying recent text classification technologies would make it a less arduous task to obtain insights from corpus data. This endeavor also holds the potential for enabling substantially improved online instruction, both by providing teachers and facilitators with reports about the groups they are moderating and by triggering context-sensitive collaborative learning support on an as-needed basis. We report on an interdisciplinary research project that has been investigating the effectiveness of applying text classification technology to a large CSCL corpus that has been analyzed by human coders using a theory-based multidimensional coding scheme. We report promising results and include an in-depth discussion of important issues, such as reliability, validity, and efficiency, that should be considered when deciding on the appropriateness of adopting a new technology such as TagHelper tools. One major technical contribution of this work is a demonstration that an important part of making text classification technology effective for this purpose is designing and building linguistic pattern detectors, otherwise known as features, that can be extracted reliably from texts and that have high predictive power for the categories of discourse actions that the CSCL community is interested in.
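The underlying machinery can be sketched with a minimal unigram Naive Bayes coder over hand-labeled discussion turns. This is an illustration of the general approach, not TagHelper's actual feature set or classifier, and the codes and example turns below are invented.

```python
import math
from collections import Counter, defaultdict

def train_coder(examples):
    """examples: (turn_text, code) pairs from a hand-coded corpus."""
    priors = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for text, code in examples:
        priors[code] += 1
        for word in text.lower().split():
            word_counts[code][word] += 1
            vocab.add(word)
    return priors, word_counts, vocab

def code_turn(text, model):
    """Assign the most probable code, using add-one smoothing so that
    unseen words do not zero out a class."""
    priors, word_counts, vocab = model
    total = sum(priors.values())
    def log_score(code):
        denom = sum(word_counts[code].values()) + len(vocab)
        s = math.log(priors[code] / total)
        for word in text.lower().split():
            s += math.log((word_counts[code][word] + 1) / denom)
        return s
    return max(priors, key=log_score)
```

In practice, as the abstract stresses, raw unigrams are rarely enough: the predictive power comes from richer, reliably extractable linguistic pattern detectors used as features in place of plain word counts.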
Environmental screening tools for assessment of infrastructure plans based on biodiversity preservation and global warming (PEIT, Spain).
Most Strategic Environmental Assessment (SEA) research has been concerned with SEA as a procedure, and there have been relatively few developments and tests of analytical methodologies. The first stage of SEA is ‘screening’, the process whereby a decision is taken on whether or not SEA is required for a particular programme or plan. The effectiveness of screening and SEA procedures depends on how well the assessment fits into planning from the early stages of the decision-making process. However, it is difficult to prepare an environmental screening for an infrastructure plan involving a whole country; to be useful, such methodologies must be fast and simple. We have developed two screening tools that make it possible to promptly estimate the overall impact an infrastructure plan might have on biodiversity and global warming for a whole country, in order to generate planning alternatives and to determine whether or not SEA is required for a particular infrastructure plan.
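The kind of fast, simple screening arithmetic described above amounts to a weighted overlay. A sketch follows; the habitat classes, sensitivity weights, and emission factors are illustrative assumptions, not the actual values used for the PEIT assessment.

```python
def biodiversity_score(corridor_segments, sensitivity):
    """Toy biodiversity screen: corridor length (km) crossing each
    habitat class, weighted by that class's sensitivity in [0, 1].
    corridor_segments: iterable of (habitat_class, km) pairs."""
    return sum(km * sensitivity[habitat]
               for habitat, km in corridor_segments)

def warming_score(planned_km, emission_factor):
    """Toy global-warming screen: planned kilometres per transport
    mode times an illustrative per-km emission factor."""
    return sum(km * emission_factor[mode]
               for mode, km in planned_km.items())
```

Scores like these can be computed for each planning alternative in seconds, which is what makes them usable at the screening stage, before any detailed SEA is commissioned.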