Preserving and Extending the Commodification of Football Supporter Relations: a Cultural Economy of Supporters Direct
This paper examines the role of Supporters Direct, a sports policy initiative launched by the British Labour government in 2000. The objective of Supporters Direct is to democratise football clubs by intervening in what it views as the unequal relationship between the relatively powerless supporters of football clubs and the private shareholders who have organisational control of them. It aims to achieve this by facilitating mutual forms of ownership and control of clubs via supporters' trusts. Established research concerning Supporters Direct presents the initiative as an inherently progressive development for the football industry. The aim of this paper is to situate the development of Supporters Direct in the wider context of the British Labour government's policy of social inclusion. On the basis of a textual analysis that draws on current literature in the area of culture and economy, with specific reference to processes of commodification, we reveal an alternative view of Supporters Direct. The Supporters Direct initiative, we conclude, is an integral part of a social policy aimed at the preservation and extension of commodified social relations.
Keywords: Supporters Direct; supporters' trusts; social inclusion; mutualism; commodification
A big picture for teaching macroeconomics principles
The economy can be viewed as consisting of four sectors (the goods and services, labor, monetary, and international sectors), with overall equilibrium consisting of simultaneous equilibrium in each of these sectors. Interactions among these markets cause headaches for students. The "big picture" of this paper is a verbal story (supplemented with an oversimplified diagram) that an instructor can tell to exposit this to beginning students. Furthermore, by explaining how these headaches are minimized, the instructor gives students an overview of how a typical macroeconomics principles course is structured.
Keywords: aggregate demand curve
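As a concrete rendering of what "simultaneous equilibrium in each of these sectors" can look like, the sketch below writes down one textbook-style system of equations. The notation is our own assumption (an open-economy IS-LM flavour), not the paper's diagram.

```latex
% One illustrative four-sector system (notation assumed, not the authors'):
\begin{align*}
Y   &= C(Y - T) + I(r) + G + NX(e) && \text{goods and services}\\
Y   &= F(N), \quad W/P = F'(N)     && \text{labor}\\
M/P &= L(Y, r)                     && \text{monetary}\\
0   &= NX(e) + KA(r - r^{*})       && \text{international}
\end{align*}
```

Overall equilibrium is then a configuration (Y, N, r, e, P) that satisfies all four conditions at once; the simultaneity is exactly what generates the headaches the abstract mentions.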
A computational trick for calculating the Blinder-Oaxaca decomposition and its standard error
To compute the Blinder-Oaxaca decomposition and associated standard errors, a practitioner needs to be comfortable with vector and matrix manipulations in software. This paper proposes a computational trick that produces these computations by running an artificial regression.
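For readers who want the target quantity pinned down: the sketch below computes the standard two-fold Blinder-Oaxaca decomposition directly with matrix algebra. The artificial-regression shortcut the paper proposes is not reproduced here (its point is precisely to avoid this kind of hand-rolled computation and to deliver standard errors as a by-product); the function and variable names are ours.

```python
# Minimal sketch of the standard two-fold Blinder-Oaxaca decomposition,
# i.e. the quantity the paper's artificial-regression trick computes
# (together with its standard error). Names and grouping are hypothetical.
import numpy as np

def oaxaca_twofold(X_A, y_A, X_B, y_B):
    """Split mean(y_A) - mean(y_B) into explained and unexplained parts.

    X_A, X_B: (n, k) regressor matrices that include a constant column.
    Group B's coefficients are used to price the endowment differences.
    """
    beta_A, *_ = np.linalg.lstsq(X_A, y_A, rcond=None)
    beta_B, *_ = np.linalg.lstsq(X_B, y_B, rcond=None)
    xbar_A, xbar_B = X_A.mean(axis=0), X_B.mean(axis=0)
    explained = (xbar_A - xbar_B) @ beta_B    # differences in characteristics
    unexplained = xbar_A @ (beta_A - beta_B)  # differences in coefficients
    return explained, unexplained
```

Standard errors for the two components require delta-method algebra over the estimated coefficients and means, which is the vector-and-matrix work the paper's single artificial regression is designed to sidestep.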
Clinical features of varicella-zoster virus infection
Varicella-zoster virus (VZV) is a pathogenic human herpes virus that causes varicella (chickenpox) as a primary infection, following which it becomes latent in peripheral ganglia. Decades later, the virus may reactivate either spontaneously or after a number of triggering factors to cause herpes zoster (shingles). Varicella and its complications are more severe in the immunosuppressed. The most frequent and important complication of VZV reactivation is postherpetic neuralgia, the cause of which is unknown and for which treatment is usually ineffective. Reactivation of VZV may also cause a wide variety of neurological syndromes, the most significant of which is a vasculitis, which is treated with corticosteroids and the antiviral drug acyclovir. Other VZV reactivation complications include an encephalitis, segmental motor weakness and myelopathy, cranial neuropathies, Guillain-Barré syndrome, enteric features, and zoster sine herpete, in which the viral reactivation occurs in the absence of the characteristic dermatomally distributed vesicular rash of herpes zoster. There has also been a recent association of VZV with giant cell arteritis, and this interesting finding needs further corroboration. Vaccination is now available for the prevention of both varicella in children and herpes zoster in older individuals.
More on F versus t tests for unit roots when there is no trend
Rodrigues and Tremayne (2004) interpret a problematic size result in a Monte Carlo study reported in Elder and Kennedy (2001) as arising from Elder and Kennedy's use of an inappropriate testing equation. In expositing their result, Rodrigues and Tremayne inadvertently lead readers to believe that the Elder and Kennedy conclusion is in error. We clarify the Rodrigues and Tremayne contribution, putting the validity of the Elder and Kennedy result in proper perspective and underlining the important role played by the starting value in Monte Carlo analyses.
Environmental policy and time consistency - emissions taxes and emissions trading
The authors examine policy problems related to the use of emissions taxes and emissions trading, two market-based instruments for controlling pollution by getting regulated firms to adopt cleaner technologies. By attaching an explicit price to emissions, these instruments give firms an incentive to continually reduce their volume of emissions. Command-and-control emissions standards create incentives to adopt cleaner technologies only up to the point where the standards are no longer binding (at which point the shadow price on emissions falls to zero). But the ongoing incentives created by the market-based instruments are not necessarily right, either. Time-consistency constraints on the setting of these instruments limit the regulator's ability to set policies that lead to efficiency in adopting technology options. After examining the time-consistency properties of a Pigouvian emissions tax and of emissions trading, the authors find that: 1) If damage is linear, efficiency in adopting technologies involves either universal adoption of the new technology or universal retention of the old technology, depending on the cost of adoption. The first-best tax policy and the first-best permit-supply policy are both time-consistent under these conditions. 2) If damage is strictly convex, efficiency may require partial adoption of the new technology. In this case, the first-best tax policy is not time-consistent, and the tax rate must be adjusted after adoption has taken place (ratcheting). Ratcheting will induce an efficient equilibrium if there is a large number of firms; if there are relatively few firms, ratcheting creates excessive incentives to adopt the new technology. 3) The first-best permit-supply policy is time-consistent if there is a large number of firms. If there are relatively few firms, it may not be time-consistent, and the regulator must ratchet the supply of permits; with this policy, firms have too few incentives to adopt the new technology. The results do not strongly favor one policy instrument over the other, but if the point of an emissions trading program is to increase technological efficiency, the supply of permits must be continually adjusted in response to technological change, even when damage is linear. This continual adjustment is not needed for an emissions tax when damage is linear, which may give emissions taxes an advantage over emissions trading.
Keywords: General Technology; Environmental Economics & Policies; International Terrorism & Counterterrorism; Technology Industry; ICT Policy and Strategies; Carbon Policy and Trading; Energy and Environment
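The ratcheting logic in finding 2) can be compressed into one line. The notation below (aggregate emissions E, damage D(E), tax t) is our own sketch of the argument, not the authors' model:

```latex
\[
  t^{*} = D'(E), \qquad
  \underbrace{D(E) = dE \;\Rightarrow\; t^{*} = d}_{\text{linear damage: time-consistent}}, \qquad
  \underbrace{D'' > 0 \;\Rightarrow\; D'(E_{\mathrm{post}}) < D'(E_{\mathrm{pre}})}_{\text{convex damage: ratcheting}}
\]
```

With convex damage the optimal rate falls once adoption has lowered emissions, so the announced first-best tax is not credible ex ante; forward-looking firms anticipate the ratchet when deciding whether to adopt.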
Equilibrium incentives for adopting cleaner technology under emissions pricing
Policymakers sometimes presume that adopting a less polluting technology necessarily improves welfare. This view is generally mistaken. Adopting a cleaner technology is costly, and this cost must be weighed against the technology's benefits in reduced pollution and reduced abatement costs. The literature to date has not satisfactorily examined whether emissions pricing properly internalizes this tradeoff between costs and benefits. And if the trend toward greater use of economic instruments in environmental policy continues, as is likely, the properties of those instruments must be understood, especially their dynamic efficiency. The authors examine incentives for adopting cleaner technologies in response to Pigouvian emissions pricing in equilibrium (unlike earlier analyses, which, they contend, have been generally incomplete and at times misleading). Their results indicate that emissions pricing under the standard Pigouvian rule leads to efficient equilibrium adoption of technology under certain circumstances. They show that the equilibrium level of adoption of a public innovation is efficient under Pigouvian pricing only if there are enough firms that each firm has a negligible effect on aggregate emissions. When those circumstances are not satisfied, Pigouvian pricing does not induce an efficient (social welfare-maximizing) level of innovation. The potential for inefficiency stems from two problems with the Pigouvian rule. First, the Pigouvian price does not discriminate against each unit of emissions according to its marginal damage. Second, full ratcheting of the emissions price in response to declining marginal damage as firms adopt the cleaner technology is correct ex post but distorts incentives for adopting the technology ex ante. The next natural step for research is to examine second-best pricing policies or multiple-instrument policies. The challenge is to design regulatory policies that go some way toward resolving these problems yet are geared to implementation in real regulatory settings. Clearly, such policies must use more instruments than emissions pricing alone. Direct taxes or subsidies for technological change, together with emissions pricing, should give regulators more scope for creating appropriate dynamic incentives. Such instruments are already widely used: investment tax credits (for environmental research and development), accelerated depreciation (for pollution control equipment), and environmental funds (to subsidize the adoption of pollution control equipment). Such direct incentives could be excessive, however, if emissions pricing is already in place. All incentives should be coordinated.
Keywords: Public Health Promotion; Environmental Economics & Policies; Health Monitoring & Evaluation; General Technology; International Terrorism & Counterterrorism; Carbon Policy and Trading
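The role of the "negligible effect on aggregate emissions" condition can be seen in a back-of-envelope adoption condition (our notation, not the authors'): a firm with fixed adoption cost k and emissions reduction Δe compares k with its tax saving, while society compares k with the damage actually avoided.

```latex
% Our sketch: emissions price t, aggregate emissions E, damage D(E).
\[
  \text{private: } k < t\,\Delta e = D'(E)\,\Delta e,
  \qquad
  \text{social: } k < D(E) - D(E - \Delta e).
\]
% With many firms, \Delta e is negligible relative to E, so
% D(E) - D(E - \Delta e) \approx D'(E)\,\Delta e and the two conditions
% coincide; with few firms the approximation fails and the Pigouvian
% price misstates the damage a single adoption actually avoids.
```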
F versus t tests for unit roots
F tests which test jointly for a unit root and a zero intercept, and so compete against Dickey-Fuller t tests, are shown not to enhance power because they are invariant to the intercept value in the absence of a unit root. Monte Carlo results in the literature that indicate otherwise are shown to have resulted from the use of special starting values. Testing procedures that employ these F tests to enhance power should be revised.
Keywords: Dickey-Fuller
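A small simulation makes the starting-value point concrete. The design below is our own illustration, not the paper's experiment: under a stationary alternative, the joint F statistic is invariant to the intercept when each series starts at its stationary mean, but starting every replication at zero while the mean is far from zero manufactures apparent power. The 5% critical value 4.71 is the approximate Dickey-Fuller Phi_1 value for samples of 100.

```python
# Illustrative Monte Carlo (our design, not the paper's): power of the
# joint F test of (alpha = 0, rho = 1) in y_t = alpha + rho*y_{t-1} + e_t.
import numpy as np

rng = np.random.default_rng(0)
T, RHO, CRIT = 100, 0.9, 4.71  # ~5% Dickey-Fuller Phi_1 critical value

def joint_F(y):
    """F statistic for H0: alpha = 0 and gamma = 0 in
    dy_t = alpha + gamma*y_{t-1} + e_t (gamma = rho - 1)."""
    dy, ylag = np.diff(y), y[:-1]
    X = np.column_stack([np.ones_like(ylag), ylag])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    u = dy - X @ beta
    rss_u, rss_r = u @ u, dy @ dy        # restricted model: dy_t = e_t
    return ((rss_r - rss_u) / 2) / (rss_u / (len(dy) - 2))

for mu in (0.0, 10.0):                   # stationary mean alpha/(1 - rho)
    for start in ("stationary mean", "zero"):
        y0 = mu if start == "stationary mean" else 0.0
        rej = 0
        for _ in range(2000):
            e = rng.standard_normal(T)
            y = np.empty(T + 1)
            y[0] = y0
            for t in range(T):
                y[t + 1] = mu * (1 - RHO) + RHO * y[t] + e[t]
            rej += joint_F(y) > CRIT
        print(f"mu={mu:4.1f}  start={start:15s}  rejection ~ {rej/2000:.3f}")
```

Started at the stationary mean, the rejection rate is essentially the same for mu = 0 and mu = 10, which is the invariance result; the inflated rate for mu = 10 with a zero start shows how special starting values can produce contrary Monte Carlo results.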
Delineating neuroinflammation, parasite CNS invasion, and blood-brain barrier dysfunction in an experimental murine model of human African trypanosomiasis
Although Trypanosoma brucei spp. were first detected by Aldo Castellani in CSF samples taken from sleeping sickness patients over a century ago, there is still a great deal of debate surrounding the timing, route, and effects of transmigration of the parasite from the blood to the CNS. In this investigation, we have applied contrast-enhanced magnetic resonance imaging (MRI) to study the effects of trypanosome infection on the blood-brain barrier (BBB) in the well-established GVR35 mouse model of sleeping sickness. In addition, we have measured the trypanosome load present in the brain using quantitative TaqMan PCR and assessed the severity of the neuroinflammatory reaction at specific time points over the course of the infection.
Contrast-enhanced MRI detected a significant degree of BBB impairment in mice at 14 days following trypanosome infection, which increased in a step-wise fashion as the disease progressed. Parasite DNA was present in the brain tissue on day 7 after infection. This increased significantly in quantity by day 14 post-infection and continued to rise as the infection advanced. A progressive increase in neuroinflammation was detected following trypanosome infection, reaching a significant level of severity on day 14 post-infection and rising further at later time points. In this model, stage-2 disease presents at 21 days post-infection.
The combination of the three methodologies indicates that changes in the CNS become apparent prior to the onset of established stage-2 disease. This could in part account for the difficulties associated with defining specific criteria to distinguish stage-1 and stage-2 infections and highlights the need for improved staging diagnostics.
The challenging problem of disease staging in human African trypanosomiasis (sleeping sickness): a new approach to a circular question
Human African trypanosomiasis (HAT), also known as sleeping sickness, puts millions of people at risk in sub-Saharan Africa and is a neglected parasitic disease that is almost always fatal if untreated or inadequately treated. HAT manifests itself in two stages that are difficult to distinguish clinically. The problem of staging in HAT is extremely important since treatment options, some of which are highly toxic, are directly linked to the disease stage. Several suggested investigations for disease staging have been problematic because of the lack of an existing gold standard with which to compare new clinical staging markers. The somewhat arbitrary current criteria based on the cerebrospinal fluid (CSF) white blood cell (WBC) count have been widely used, but the new potential biomarkers are generally compared with these, thereby making the problem somewhat circular in nature. We propose an alternative "reverse" approach to address this problem, conceptualised as using appropriate statistical methods to test the performance of combinations of established laboratory variables as staging biomarkers to correlate with the CSF WBC/trypanosomes and clinical features of HAT. This approach could lead to the use of established laboratory staging markers, potentially leading to a gold standard for staging and clinical follow-up of HAT.
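One way to make the proposed "reverse" approach concrete is sketched below: score combinations of routine laboratory variables by how well they reproduce the current CSF-based stage assignment (conventionally, CSF WBC > 5 cells/uL or trypanosomes in the CSF). Everything in the sketch is hypothetical: the marker names, the synthetic data, and the choice of logistic regression with cross-validated ROC AUC; it is not the authors' pipeline.

```python
# Hypothetical sketch: rank combinations of lab markers by how well they
# track the current CSF-based staging criterion. Data are synthetic.
from itertools import combinations

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
names = ["csf_protein", "csf_igm", "plasma_crp", "csf_il10"]  # invented
X_all = rng.standard_normal((n, len(names)))                  # synthetic
# Synthetic stand-in for the stage label derived from CSF WBC/trypanosomes.
stage2 = (X_all @ np.array([0.9, 0.7, 0.2, 0.5])
          + rng.standard_normal(n)) > 0

for k in (2, 3):  # every pair and triple of candidate markers
    for idx in combinations(range(len(names)), k):
        auc = cross_val_score(LogisticRegression(), X_all[:, idx], stage2,
                              cv=5, scoring="roc_auc").mean()
        print([names[i] for i in idx], f"cross-validated AUC ~ {auc:.2f}")
```

Combinations that track the existing criterion closely, while also behaving sensibly against clinical features, would be the candidates for the established-marker gold standard the authors envisage.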