Using Excel to generate empirical sampling distributions
Teachers in many introductory statistics courses demonstrate the Central Limit Theorem by using a computer to draw a large number of random samples of size n from a population distribution and plot the resulting empirical sampling distribution of the sample mean. Many computer applications can be used for this (see, for example, the Rice Virtual Lab in Statistics: http://www.ruf.rice.edu/~lane/rvls.html). The effectiveness of such demonstrations has been questioned (see delMas et al., 1999), but in the work presented in this paper we do not rely on sampling distributions to convey or teach statistical concepts; we rely only on the fact that the sampling distribution is independent of the distribution of the population, provided the sample size is sufficiently large. We describe a lesson that starts out with a demonstration of the CLT, but samples from a (finite) population where actual census data is provided; doing this may help students relate to the concepts more easily – they can see the original data as a column of numbers and, if the samples are shown, they can also see random samples being taken. We continue with this theme of sampling from census data to teach the basic ideas of inference, and end with standard resampling/bootstrap procedures. We also demonstrate how Excel can provide a tool for developing learning objects to support the program; a workbook called Sampling.xls is available from www.deakin.edu.au/~rodneyc/PS > Sampling.xls.
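The demonstration the abstract describes – drawing many random samples of size n from a finite population and examining the empirical distribution of the sample mean – can be sketched outside Excel as well. The following Python sketch uses a synthetic skewed population as a stand-in for the census column; the population, sample size, and number of samples are illustrative assumptions, not values from the paper.

```python
import random
import statistics

# Hypothetical finite "census" population: 10,000 right-skewed values
# standing in for the column of census data students would see.
random.seed(42)
population = [random.expovariate(1 / 50) for _ in range(10_000)]

def sampling_distribution(pop, n, num_samples):
    """Draw num_samples random samples of size n (without replacement)
    and return the empirical distribution of the sample mean."""
    return [statistics.mean(random.sample(pop, n)) for _ in range(num_samples)]

means = sampling_distribution(population, n=30, num_samples=2000)

# The CLT predicts the sample means cluster around the population mean,
# with spread roughly sigma / sqrt(n), regardless of the population's skew.
print("population mean:", round(statistics.mean(population), 1))
print("mean of sample means:", round(statistics.mean(means), 1))
print("std. dev. of sample means:", round(statistics.stdev(means), 1))
```

Plotting a histogram of `means` (e.g. with matplotlib) reproduces the roughly normal empirical sampling distribution the lesson demonstrates, even though the population itself is skewed.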
Usage-based pricing of software services under competition
With the emergence of high speed networks, software firms have the ability to deploy 'software as a service' and measure resource usage at the level of individual customers. This enables the implementation of usage-based pricing. We study both fixed and usage-based pricing schemes in a competitive setting where the firm incurs a transaction cost of monitoring usage if it implements usage-based pricing. Offering different pricing schemes helps to differentiate the firms and relax price competition, particularly at higher monitoring costs, even when competing firms offer the same service quality. However, the low-usage customers acquired by offering usage-based pricing are unable to compensate for the monitoring costs incurred. This implies that managers should be cautious about implementing usage-based pricing in a competitive setting.
Pricing Software Upgrades: The Role of Product Improvement & User Costs
The computer software industry is an extreme example of rapid new product introduction. However, many consumers are sophisticated enough to anticipate the availability of upgrades in the future. This creates the possibility that consumers might either postpone purchase or buy early on and never upgrade. In response, many software producers offer special upgrade pricing to old customers in order to mitigate the effects of strategic consumer behavior. We analyze the optimality of upgrade pricing by characterizing the relationship between the magnitude of product improvement and the equilibrium pricing structure, particularly in the context of user upgrade costs. This upgrade cost (such as the cost of upgrading complementary hardware or drivers) is incurred by the user when she buys the new version but is not captured by the upgrade price for the software. Our approach is to formulate a game-theoretic model where consumers can look ahead and anticipate prices and product qualities while the firm can offer special upgrade pricing. We classify upgrades as minor, moderate, or large based on the primitive parameters. We find that at sufficiently large user costs, upgrade pricing is an effective tool for minor and large upgrades but not moderate upgrades. Thus, upgrade pricing is suboptimal for the firm for a middle range of product improvement. User upgrade costs have both direct and indirect effects on the pricing decision. The indirect effect arises because the upgrade cost is a critical factor in determining whether or not all old consumers would upgrade to a new product, and this further alters the product improvement threshold at which special upgrade pricing becomes optimal. Finally, we also analyze the impact of upgrade pricing on the total coverage of the market.
Measuring Agency-Level Results: Lessons Learned from Catholic Relief Services’ Beneficiary and Service Delivery Indicators Initiative
Background: Many NGOs have less success documenting their results at the agency level than at the program or project level. Little has been published on the challenges NGOs face in developing and measuring agency-level results. To address this issue, InterAction, an alliance of NGOs, commissioned a comparative study that drew on the existing grey literature and on case studies and interviews with a sample of 17 InterAction member organizations.
Purpose: This paper builds on that InterAction study by presenting one of the first published case studies of a successful agency-level measurement (ALM) system – Catholic Relief Services’ (CRS’) Beneficiary and Service Delivery Indicators (BSDI) initiative.
Setting: A faith-based multi-national relief and development NGO.
Intervention: N/A
Research Design: A case study approach was used to describe and document the development of the CRS ALM.
Data Collection and Analysis: The information in this study is derived primarily from CRS files and documents. Data reflecting ALM practices in other NGOs were derived from the 17 InterAction member NGOs. Data reflecting the ALM practices developed by specific NGOs and presented in tabular form in the paper were derived from official documents published by those NGOs.
Findings: The authors discuss key lessons for other large and small organizations to consider when developing their own ALM systems.
Keywords: agency-level results (ALR); agency-level measurement (ALM); monitoring, evaluation, accountability, and learning (MEAL); monitoring and evaluation; non-governmental organization (NGO); agency-wide metrics; results-based planning and reporting; Catholic Relief Services (CRS)
Structure and composition of the superconducting phase in alkali iron selenide KxFe2-ySe2
We use neutron diffraction to study the temperature evolution of the average structure and local lattice distortions in insulating and superconducting potassium iron selenide KxFe2-ySe2. In the high-temperature paramagnetic state, both materials have a single phase with crystal structure similar to that of the BaFe2As2 family of iron pnictides. While the insulating KxFe2-ySe2 forms an iron-vacancy-ordered block antiferromagnetic (AF) structure at low temperature, the superconducting compounds spontaneously phase-separate into an insulating part with iron vacancy order and a superconducting phase with chemical composition KxFe2Se2 and the BaFe2As2 structure. Therefore, superconductivity in alkaline iron selenides arises from alkali-deficient KxFe2Se2 in the matrix of the insulating block AF phase.
Comment: 10 pages, 5 figures
Evaluation of Mental Effectiveness Training for people with experience of using mental health services