315 research outputs found

    Damaging real lives through obstinacy: re-emphasising why significance testing is wrong

    This paper reminds readers of the absurdity of statistical significance testing, despite its continued widespread use as a supposed method for analysing numeric data. There have been complaints about the poor quality of research employing significance tests for a hundred years, and repeated calls for researchers to stop using and reporting them. There have even been attempted bans. Many thousands of papers have now been written, in all areas of research, explaining why significance tests do not work. There are too many for all to be cited here. This paper summarises the logical problems as described in over 100 of these prior pieces. It then presents a series of demonstrations showing that significance tests do not work in practice. In fact, they are more likely to produce the wrong answer than a right one. The confused use of significance testing has practical and damaging consequences for people's lives. Ending the use of significance tests is a pressing ethical issue for research. Anyone who knows these problems, described for over one hundred years, and who continues to teach, use or publish significance tests is acting unethically, and knowingly risks the damage that ensues.
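    A minimal simulation sketch, not taken from the paper and using an assumed true effect size and sample size, of the kind of in-practice failure such demonstrations point to: the same two-group experiment, replicated with an identical real effect, flips between "significant" and "not significant" from one replication to the next, so the test's verdict reflects sampling noise as much as the effect.

    # Hedged illustration (not the paper's demonstration): replicate the same
    # experiment many times with a fixed true effect and watch the p < 0.05
    # verdict flip across replications.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    true_effect = 0.4        # assumed standardised mean difference
    n_per_group = 30         # assumed, modest sample size
    replications = 20

    for i in range(replications):
        control = rng.normal(0.0, 1.0, n_per_group)
        treated = rng.normal(true_effect, 1.0, n_per_group)
        _, p = stats.ttest_ind(treated, control)
        verdict = "significant" if p < 0.05 else "not significant"
        print(f"replication {i + 1:2d}: p = {p:.3f} -> {verdict}")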

    Symbolic Preconditioning with Taylor Models: Some Examples

    Deterministic global optimization with interval analysis involves using interval enclosures of the ranges of the constraints, objective, and gradient to reject infeasible regions, regions without global optima, and regions without critical points, and using interval Newton methods to converge on optimum-containing regions and to verify global optima.
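    A minimal sketch of the enclosure-based rejection step described above, assuming a toy one-dimensional objective f(x) = x^2 - 2x (not from the paper) and leaving out the interval Newton step: each box gets an interval enclosure of f, boxes whose lower bound exceeds the best upper bound found so far cannot contain the global minimum and are discarded, and the rest are bisected.

    # Interval helpers for the assumed objective f(x) = x^2 - 2x.
    def isub(a, b):            # interval subtraction
        return (a[0] - b[1], a[1] - b[0])

    def isqr(a):               # interval square, correct for intervals containing 0
        lo, hi = a
        if lo >= 0.0: return (lo * lo, hi * hi)
        if hi <= 0.0: return (hi * hi, lo * lo)
        return (0.0, max(lo * lo, hi * hi))

    def f_enclosure(x):        # enclosure of f over the box x = (lo, hi)
        return isub(isqr(x), (2.0 * x[0], 2.0 * x[1]))

    def branch_and_bound(box, tol=1e-4):
        best_upper = f_enclosure(box)[1]          # valid but crude upper bound
        work, kept = [box], []
        while work:
            x = work.pop()
            lo, _ = f_enclosure(x)
            if lo > best_upper:                   # rejection test: no global optimum here
                continue
            m = 0.5 * (x[0] + x[1])
            best_upper = min(best_upper, f_enclosure((m, m))[1])   # midpoint value
            if x[1] - x[0] < tol:
                kept.append(x)                    # small candidate box, keep it
            else:
                work += [(x[0], m), (m, x[1])]    # bisect and examine both halves
        return kept, best_upper

    boxes, bound = branch_and_bound((-3.0, 4.0))
    print(len(boxes), "candidate boxes; upper bound on the minimum:", bound)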

    INTERVAL ANALYSIS: SUBDIVISION DIRECTIONS IN INTERVAL B&B METHODS

    The selection of the subdivision direction is one of the points where the efficiency of the basic branch-and-bound algorithm for unconstrained global optimization can be improved (see Interval analysis: unconstrained and constrained optimization). The traditional approach is to choose for subdivision the direction in which the actual box has the largest width. If the inclusion function φ(x) is the only available information about the problem min φ(x), x ∈ x0, then this is usually the best possible choice. If, however, other information, such as an inclusion of the gradient (∇φ) or even an inclusion of the Hessian (H), is calculated, then a better decision can be made.

    Subdivision directions. All the rules select a direction with a merit function: k := arg max_{i=1,…,n} D(i), (1) where D(i) is determined by the given rule. If several such optimal indices k exist, then the algorithm can choose the smallest one, or it can select an optimal direction randomly.

    Rule A. The first rule was the interval-width oriented rule. This rule chooses the coordinate direction with D(i) := w(xi). (2) This rule is justified by the idea that, if the original interval is subdivided in a uniform way, then the width of the actual subintervals goes to zero most rapidly. The algorithm with Rule A is convergent both with and without the monotonicity test [8]. This rule allows a relatively simple analysis of the subdivision-direction selection in the branch-and-bound algorithm (see Interval analysis: unconstrained and constrained optimization).
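    A minimal sketch, not taken from the entry, of the selection step in (1): each rule supplies a merit function D, the coordinate k maximising D(i) is chosen (smallest index on ties, as the entry permits), and the box is bisected along that coordinate; Rule A simply takes D(i) = w(xi).

    # Boxes are lists of coordinate intervals [(lo_1, hi_1), ..., (lo_n, hi_n)].
    def width(interval):
        lo, hi = interval
        return hi - lo

    def select_direction(box, merit=width):      # equation (1); Rule A by default
        merits = [merit(xi) for xi in box]
        return merits.index(max(merits))         # smallest index wins on ties

    def bisect(box, k):                          # split the box along coordinate k
        lo, hi = box[k]
        mid = 0.5 * (lo + hi)
        return (box[:k] + [(lo, mid)] + box[k + 1:],
                box[:k] + [(mid, hi)] + box[k + 1:])

    box = [(-1.0, 2.0), (0.0, 0.5), (-4.0, 4.0)]
    k = select_direction(box)                    # Rule A picks coordinate 2 (width 8)
    left, right = bisect(box, k)
    print("subdivide along coordinate", k)
    print(left)
    print(right)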