Wigner Surmise For Domain Systems
In random matrix theory, the spacing distribution functions are
well fitted by the Wigner surmise and its generalizations. In this
approximation the spacing functions are completely described by the behavior of
the exact functions in the limits s → 0 and s → ∞. Most non-equilibrium
systems have no analytical solutions for the spacing distribution and
correlation functions, so we explore the possibility of using the
Wigner surmise approximation in these systems. We find that this approximation
provides a first approach to the statistical behavior of complex systems;
in particular, we use it to obtain an analytical approximation to the
nearest-neighbor distribution of the annihilation random walk.
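For the Gaussian Orthogonal Ensemble, the Wigner surmise takes the form P(s) = (π/2) s exp(−πs²/4) for the unfolded nearest-neighbor spacing s. The following is a minimal numerical sketch (not the paper's calculation, which concerns the annihilation random walk) comparing the surmise with spacings sampled from GOE matrices; the matrix size and trial count are illustrative choices:

```python
import numpy as np

def wigner_surmise(s):
    # GOE Wigner surmise: P(s) = (pi/2) * s * exp(-pi * s^2 / 4)
    return (np.pi / 2.0) * s * np.exp(-np.pi * s**2 / 4.0)

def goe_spacings(n=200, trials=50, seed=0):
    # Sample nearest-neighbor spacings from random real symmetric (GOE)
    # matrices, normalized so the mean spacing is 1 ("unfolded" locally).
    rng = np.random.default_rng(seed)
    spacings = []
    for _ in range(trials):
        a = rng.standard_normal((n, n))
        h = (a + a.T) / 2.0                 # real symmetric matrix
        ev = np.sort(np.linalg.eigvalsh(h))
        bulk = ev[n // 4 : 3 * n // 4]      # keep the bulk of the spectrum
        s = np.diff(bulk)
        spacings.extend(s / s.mean())       # normalize to unit mean spacing
    return np.array(spacings)

s = goe_spacings()
grid = np.linspace(0.0, 4.0, 401)
# The surmise is a normalized density with unit mean, and it captures the
# level repulsion (P(s) -> 0 as s -> 0) seen in the sampled spacings.
print(np.trapz(wigner_surmise(grid), grid))  # ~1 (normalization)
print(s.mean())                              # ~1 (unit mean spacing)
```

The surmise is exact only for 2×2 matrices, but as the abstract notes, it is fixed entirely by the s → 0 and s → ∞ behavior, which is why it transfers plausibly to systems without exact solutions.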
Damaging real lives through obstinacy: re-emphasising why significance testing is wrong
This paper reminds readers of the absurdity of statistical significance testing, despite its continued widespread use as a supposed method for analysing numeric data. There have been complaints about the poor quality of research employing significance tests for a hundred years, and repeated calls for researchers to stop using and reporting them. There have even been attempted bans. Many thousands of papers have now been written, in all areas of research, explaining why significance tests do not work. There are too many for all to be cited here. This paper summarises the logical problems as described in over 100 of these prior pieces. It then presents a series of demonstrations showing that significance tests do not work in practice. In fact, they are more likely to produce a wrong answer than a right one. The confused use of significance testing has practical and damaging consequences for people's lives. Ending the use of significance tests is a pressing ethical issue for research. Anyone knowing the problems, as described over the past hundred years, who continues to teach, use or publish significance tests is acting unethically, and knowingly risking the damage that ensues.
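One standard demonstration of the claim that significant results are often wrong (a sketch of the general argument, not a reproduction of the paper's own demonstrations) simulates a research field where only a minority of tested hypotheses are true. The base rate, effect size, and sample size below are hypothetical choices:

```python
import numpy as np
from math import erf, sqrt

def two_sided_p(x, y):
    # Large-sample z-test for a difference in means with unit variance --
    # a simplification standing in for the usual two-sample t-test.
    z = (x.mean() - y.mean()) / sqrt(1 / len(x) + 1 / len(y))
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))

# Hypothetical rates: 10% of tested hypotheses are true, effect size 0.4,
# n = 50 per group (power roughly 0.5 at alpha = 0.05).
rng = np.random.default_rng(1)
n, alpha, true_rate, effect = 50, 0.05, 0.10, 0.4
true_pos = false_pos = 0
for _ in range(20_000):
    real = rng.random() < true_rate
    x = rng.normal(effect if real else 0.0, 1.0, n)
    y = rng.normal(0.0, 1.0, n)
    if two_sided_p(x, y) < alpha:
        if real:
            true_pos += 1
        else:
            false_pos += 1

fdr = false_pos / (false_pos + true_pos)
print(fdr)  # with these assumed rates, close to half of the
            # "significant" results are false positives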
Speeding Up Computer Simulations: The Transition Observable Method
A method is presented which allows computer simulations of statistical
systems to be sped up by orders of magnitude. This speed-up is
achieved by means of a new observable, while the algorithm of the simulation
remains unchanged. Comment: 20 pages, 6 figures. Submitted to Phys. Rev. E
(August 1999). Replacement due to minor changes.
Inference with interference between units in an fMRI experiment of motor inhibition
An experimental unit is an opportunity to randomly apply or withhold a
treatment. There is interference between units if the application of the
treatment to one unit may also affect other units. In cognitive neuroscience, a
common form of experiment presents a sequence of stimuli or requests for
cognitive activity at random to each experimental subject and measures
biological aspects of brain activity that follow these requests. Each subject
is then many experimental units, and interference between units within an
experimental subject is likely, in part because the stimuli follow one another
quickly and in part because human subjects learn or become experienced or
primed or bored as the experiment proceeds. We use a recent fMRI experiment
concerned with the inhibition of motor activity to illustrate and further
develop recently proposed methodology for inference in the presence of
interference. A simulation evaluates the power of competing procedures.
Comment: Published by the Journal of the American Statistical Association at
http://www.tandfonline.com/doi/full/10.1080/01621459.2012.655954 . The R
package cin (Causal Inference for Neuroscience) implementing the proposed
method is freely available on CRAN at https://CRAN.R-project.org/package=ci
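The interference problem described above can be made concrete with a toy simulation (all numbers hypothetical, and not the paper's method): treating one unit in a sequence also spills onto the next unit, as a stimulus carryover might. Randomization balances the carryover across groups, so the simple difference in means still recovers the direct effect, but it says nothing about the larger effect of treating every unit:

```python
import numpy as np

# Toy carryover model: unit i's outcome depends on its own treatment
# (direct effect) and on the previous unit's treatment (spillover).
rng = np.random.default_rng(2)
n, direct, spill = 10_000, 1.0, 0.5          # hypothetical effect sizes
z = rng.integers(0, 2, n)                    # randomized treatment per unit
carry = np.concatenate(([0], z[:-1]))        # previous unit's treatment
y = direct * z + spill * carry + rng.normal(0.0, 1.0, n)

naive = y[z == 1].mean() - y[z == 0].mean()  # simple difference in means
total = direct + spill                       # treat-all vs treat-none contrast
print(naive)  # close to the direct effect 1.0: randomization balances
              # the carryover, but the estimate misses the spillover 0.5
```

This is why interference changes what a randomized contrast estimates: the difference in means targets the direct effect of one unit's treatment, not the aggregate effect of an entire treatment policy.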
Current trends in the cardiovascular clinical trial arena (I)
The existence of effective therapies for most cardiovascular disease states, coupled with increased requirements that potential benefits of new drugs be evaluated on clinical rather than surrogate endpoints, makes it increasingly difficult to substantiate any incremental improvements in efficacy that these new drugs might offer. Compounding the problem is the highly controversial issue of comparing new agents with placebos rather than active pharmaceuticals in drug efficacy trials. Despite the recent consensus that placebos may be used ethically in well-defined, justifiable circumstances, the problem persists, in part because of increased scrutiny by ethics committees but also because of considerable lingering disagreement regarding the propriety and scientific value of placebo-controlled trials (and trials of antihypertensive drugs in particular). The disagreement also substantially affects the most viable alternative to placebo-controlled trials: actively controlled equivalence/noninferiority trials. To a great extent, this situation was prompted by numerous previous trials of this type that were marked by fundamental methodological flaws and consequent false claims, inconsistencies, and potential harm to patients. As the development and use of generic drugs continue to escalate, along with concurrent pressure to control medical costs by substituting less-expensive therapies for established ones, any claim that a new drug, intervention, or therapy is "equivalent" to another should not be accepted without close scrutiny. Adherence to proper methods in conducting studies of equivalence will help investigators to avoid false claims and inconsistencies. These matters will be addressed in the third article of this three-part series.
Lectures for chemists on statistics. I. Belief, probability, frequency, and statistics: decision making in a floating world