
    What's a Name Worth?: Experimental Tests of the Value of Attribution in Intellectual Property

    Despite considerable research suggesting that creators value attribution – i.e., being named as the creator of a work – U.S. intellectual property (IP) law does not provide a right to attribution to the vast majority of creators. On the other side of the Atlantic, however, many European countries give creators, at least in their copyright laws, much stronger rights to attribution. At first blush it may seem that the U.S. has gotten it wrong, and the Europeans have made a better policy choice in providing to creators a right that they value. But for reasons we will explain in this Article, matters are a lot more complicated than that. This Article reports a series of experiments that are the first to attempt to quantitatively measure the value of attribution to creators. In previous research, we have shown that creators of IP are subject to a “creativity effect” that results in them assigning substantially higher value to their works than neoclassical economic theory predicts. The first two experiments reported in this Article suggest a way that the creativity effect may be reduced – creators are willing to sacrifice significant economic payments in favor of receiving attribution for their work. The value to creators of attribution raises the question whether U.S. IP law should be re-structured to provide attribution as a creator’s default right. The third and most important experiment reported here casts doubt on the value of giving creators such a default right, because creators value attribution differently depending on whether the legal rule gives it to them as an initial entitlement or not. When creators are given a right to attribution as a default they value credit four times higher than when attribution is not the default option. Our findings make clear that creators value attribution, and that the prospect of obtaining it can lead to a more efficient level of transacting. 
At the same time, and paradoxically, our findings also suggest that before we restructure American law, which provides no right to attribution for the vast majority of creators, we need to take care, because it is possible, under conditions that we will describe, that providing creators with a default right to attribution will result in less efficient transacting. Finally, our findings have important implications for property theory that are broader than IP law or attribution rights. Our third experiment suggests that a party who enjoys a default legal right as part of her initial complement of rights will tend to treat that legal right in a fashion similar to any other form of initial entitlement, and overvalue it relative to what neoclassical theory would predict. This suggests a principle regarding how to efficiently structure default rules in any setting. All other factors being equal, an efficiently-structured default rule will locate the initial legal entitlement in the party who is either less likely to overvalue the entitlement, or, if overvaluation seems inevitable regardless of where the initial entitlement is placed, is likely to overvalue it less.


    Dynamic telomerase gene suppression via network effects of GSK3 inhibition

    Background: Telomerase controls telomere homeostasis and cell immortality and is a promising anti-cancer target, but few small-molecule telomerase inhibitors have been developed. Reactivated transcription of the catalytic subunit hTERT in cancer cells controls telomerase expression. Better understanding of upstream pathways is critical for effective anti-telomerase therapeutics and may reveal new targets to inhibit hTERT expression. Methodology/Principal Findings: In a focused promoter screen, several GSK3 inhibitors suppressed hTERT reporter activity. GSK3 inhibition using 6-bromoindirubin-3′-oxime suppressed hTERT expression, telomerase activity and telomere length in several cancer cell lines, and growth and hTERT expression in ovarian cancer xenografts. Microarray analysis, network modelling and oligonucleotide binding assays suggested that multiple transcription factors were affected. Extensive remodelling involving Sp1, STAT3, c-Myc, NFκB, and p53 occurred at the endogenous hTERT promoter. RNAi screening of the hTERT promoter revealed multiple kinase genes which affect the hTERT promoter, potentially acting through these factors. Prolonged inhibitor treatments caused dynamic expression both of hTERT and of c-Jun, p53, STAT3, AR and c-Myc. Conclusions/Significance: Our results indicate that GSK3 activates hTERT expression in cancer cells and contributes to telomere length homeostasis. GSK3 inhibition is a clinical strategy for several chronic diseases. These results imply that it may also be useful in cancer therapy. However, the complex network effects we show here have implications for either setting.

    Hypoxia and hypotension in patients intubated by physician staffed helicopter emergency medical services - a prospective observational multi-centre study

    Background: The effective treatment of airway compromise in trauma and non-trauma patients is important. Hypoxia and hypotension are predictors of negative patient outcomes and increased mortality, and may be important quality indicators of care provided by emergency medical services. Excluding cardiac arrests, critical trauma and non-trauma patients remain the two major groups to which helicopter emergency medical services (HEMS) are dispatched. Several studies describe the impact of pre-hospital hypoxia or hypotension on trauma patients, but few studies compare this in trauma and non-trauma patients. The primary aim was to describe the incidence of pre-hospital hypoxia and hypotension in the two groups receiving pre-hospital tracheal intubation (TI) by physician-staffed HEMS. Methods: Data were collected prospectively over a 12-month period, using a uniform Utstein-style airway template. Twenty-one physician-staffed HEMS in Europe and Australia participated. We compared peripheral oxygen saturation and systolic blood pressure before and after definitive airway management. Data were analysed using Cochran–Mantel–Haenszel methods and mixed-effects models. Results: Eight hundred and forty-three trauma patients and 422 non-trauma patients receiving pre-hospital TI were included. Non-trauma patients had significantly lower predicted mean pre-intervention SpO2 compared to trauma patients. Post-intervention and admission SpO2 for the two groups were comparable. However, 3% in both groups were still hypoxic at admission. For hypotension, the differences between the groups were less prominent. However, 9% of trauma and 10% of non-trauma patients were still hypotensive at admission. There was no difference in short-term survival between trauma (97%) and non-trauma patients (95%). Decreased level of consciousness was the most frequent indication for TI, and was associated with increased survival to hospital (cOR 2.8; 95% CI: 1.4–5.4). 
Conclusions: Our results showed that non-trauma patients had a higher incidence of hypoxia before TI than trauma patients, but few were hypoxic at admission. The difference for hypotension was less prominent, but one in ten patients were still hypotensive at admission. Further investigations are needed to identify reversible causes that may be corrected to improve haemodynamics in the pre-hospital setting. We found high survival rates to hospital in both groups, suggesting that physician-staffed HEMS provide high-quality emergency airway management in trauma and non-trauma patients.
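The Cochran–Mantel–Haenszel method named in the Methods pools 2×2 tables across strata into a single chi-square statistic. As a minimal pure-Python sketch, here it is applied to invented counts (the numbers below are illustrative only and are not the study's data):

```python
# Cochran–Mantel–Haenszel chi-square statistic (no continuity correction)
# for a list of stratified 2x2 tables ((a, b), (c, d)). The example
# counts are invented for illustration, NOT taken from the study.

def cmh_statistic(tables):
    num = 0.0   # sum over strata of (observed a - expected a)
    var = 0.0   # sum over strata of the hypergeometric variance of a
    for (a, b), (c, d) in tables:
        n = a + b + c + d
        expected = (a + b) * (a + c) / n
        num += a - expected
        var += (a + b) * (c + d) * (a + c) * (b + d) / (n * n * (n - 1))
    return num * num / var

# e.g. stratum = dispatch site, rows = hypoxic yes/no, cols = outcome
tables = [((10, 5), (3, 12)), ((8, 7), (4, 11))]
print(round(cmh_statistic(tables), 3))  # ≈ 8.03 for these illustrative counts
```

A large statistic (compared against a chi-square distribution with one degree of freedom) indicates an association that persists after stratification.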

    Recent Results from the FASTSUM Collaboration

    The FASTSUM Collaboration has developed a comprehensive research programme in thermal QCD using 2+1 flavour, anisotropic ensembles. In this talk, we summarise some of our recent results, including thermal hadron spectrum calculations using our "Generation 2L" ensembles, which have pion masses of 239(1) MeV. These include open charm mesons and charm baryons. We also summarise our work using the Backus–Gilbert approach to determining the spectral function of the NRQCD bottomonium system. Finally, we review our determination of the interquark potential in the same system, but using our "Generation 2" ensembles, which have heavier pion masses of 384(4) MeV.
    Comment: 9 pages; contribution to the 39th International Symposium on Lattice Field Theory (LATTICE 2022), 8th–13th August 2022, Bonn, Germany.

    The costs and benefits of decentralization and centralization of ant colonies

    A challenge faced by individuals and groups of many species is determining how resources and activities should be spatially distributed: centralized or decentralized. This distribution problem is hard to understand due to the many costs and benefits of each strategy in different settings. Ant colonies face this problem and demonstrate two solutions: 1) centralizing resources in a single nest (monodomy) and 2) decentralizing by spreading resources across many nests (polydomy). Despite the possibilities for using this system to study the centralization/decentralization problem, the trade-offs associated with using either polydomy or monodomy are poorly understood due to a lack of empirical data and cohesive theory. Here, we present a dynamic network model of a population of ant nests which is based on observations of a facultatively polydomous ant species (Formica lugubris). We use the model to test several key hypotheses for costs and benefits of polydomy and monodomy and show that decentralization is advantageous when resource acquisition costs are high, nest size is limited, resources are clustered, and there is a risk of nest destruction, but centralization prevails when resource availability fluctuates and nest size is limited. Our model explains the phylogenetic and ecological diversity of polydomous ants, demonstrates several trade-offs of decentralization and centralization, and provides testable predictions for empirical work on ants and in other systems.
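One trade-off the abstract mentions, nest destruction risk, has a simple probabilistic core that a toy Monte Carlo comparison can illustrate (a sketch under simplified assumptions, not the authors' network model): splitting a fixed resource stock across k nests that are each destroyed independently leaves the expected surviving amount unchanged, but reduces its variance by roughly a factor of k.

```python
import random

def surviving_resources(total, nests, p_destroy, rng):
    """Resources left after each nest is independently destroyed with
    probability p_destroy; the stock is split evenly across nests."""
    per_nest = total / nests
    return sum(per_nest for _ in range(nests) if rng.random() > p_destroy)

def simulate(nests, trials=20000, total=100.0, p=0.2, seed=42):
    """Mean and variance of surviving resources over many trials."""
    rng = random.Random(seed)
    outcomes = [surviving_resources(total, nests, p, rng) for _ in range(trials)]
    mean = sum(outcomes) / trials
    var = sum((x - mean) ** 2 for x in outcomes) / trials
    return mean, var

mono_mean, mono_var = simulate(nests=1)   # monodomy: one nest
poly_mean, poly_var = simulate(nests=8)   # polydomy: eight nests
# Both means are near total * (1 - p) = 80, but the polydomous
# variance is roughly eight times smaller than the monodomous one.
```

This captures only the risk-spreading benefit; the paper's model additionally weighs resource acquisition costs, nest size limits, resource clustering, and fluctuating availability.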

    Towards Fairer Datasets: Filtering and Balancing the Distribution of the People Subtree in the ImageNet Hierarchy

    Computer vision technology is being used by many but remains representative of only a few. People have reported misbehavior of computer vision models, including offensive prediction results and lower performance for underrepresented groups. Current computer vision models are typically developed using datasets consisting of manually annotated images or videos; the data and label distributions in these datasets are critical to the models' behavior. In this paper, we examine ImageNet, a large-scale ontology of images that has spurred the development of many modern computer vision methods. We consider three key factors within the "person" subtree of ImageNet that may lead to problematic behavior in downstream computer vision technology: (1) the stagnant concept vocabulary of WordNet, (2) the attempt at exhaustive illustration of all categories with images, and (3) the inequality of representation in the images within concepts. We seek to illuminate the root causes of these concerns and take the first steps to mitigate them constructively.
    Comment: Accepted to FAT* 2020.
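The filtering and balancing the title refers to can be sketched as a two-step transform over a category hierarchy (a hypothetical sketch: the synset IDs, safety/imageability flags, and counts below are invented, and this is not the authors' actual pipeline):

```python
# Sketch of two mitigation steps: (1) drop categories flagged as unsafe
# or non-imageable, (2) cap each demographic group's image count at the
# smallest group's count so representation is more even. All synset ids,
# flags, and counts are invented for illustration.

def filter_and_balance(synsets, images_per_group, cap=None):
    """Keep only safe, imageable synsets; limit every group's image
    count to the smallest group's count (or an explicit cap)."""
    kept = {s for s, meta in synsets.items()
            if meta["safe"] and meta["imageable"]}
    balanced = {}
    for synset in kept:
        groups = images_per_group[synset]
        limit = cap or min(groups.values())
        balanced[synset] = {g: min(n, limit) for g, n in groups.items()}
    return balanced

synsets = {
    "n10000001": {"safe": True,  "imageable": True},   # hypothetical ids
    "n10000002": {"safe": False, "imageable": True},   # flagged -> dropped
}
images_per_group = {
    "n10000001": {"group_a": 120, "group_b": 40},
    "n10000002": {"group_a": 300, "group_b": 10},
}
result = filter_and_balance(synsets, images_per_group)
# -> {"n10000001": {"group_a": 40, "group_b": 40}}
```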