Treatment Switching: statistical and decision making challenges and approaches
Objectives: Treatment switching refers to the situation in a randomised controlled trial where
patients switch from their randomly assigned treatment onto an alternative. Often, switching is from
the control group onto the experimental treatment. In this instance, a standard intention-to-treat
analysis does not identify the true comparative effectiveness of the treatments under investigation.
We aim to describe statistical methods for adjusting for treatment switching in a comprehensible
way for non-statisticians, and to summarise views on these methods expressed by stakeholders at
the 2014 Adelaide International Workshop on Treatment Switching in Clinical Trials.
Methods: We describe three statistical methods used to adjust for treatment switching: marginal
structural models, two-stage adjustment, and rank preserving structural failure time models. We
draw upon discussion heard at the Adelaide International Workshop to explore the views of
stakeholders on the acceptability of these methods.
Results: Stakeholders noted that adjustment methods are based on assumptions, the validity of
which may often be questionable. There was disagreement on the acceptability of adjustment
methods, but consensus that when these are used, they should be justified rigorously. The utility of
adjustment methods depends upon the decision being made and the processes used by the
decision-maker.
Conclusions: Treatment switching makes estimating the true comparative effect of a new treatment
challenging. However, many decision-makers have reservations about adjustment methods. These
reservations, and how they affect the utility of adjustment methods, require further exploration.
Further technical work is required to develop adjustment methods that meet real-world needs and
enhance their acceptability to decision-makers.
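As an illustration of the third method described above, the rank preserving structural failure time model works by mapping each patient's observed time back to a counterfactual "untreated" time and choosing the treatment-effect parameter that re-balances the randomised arms. The following is a minimal sketch, not the estimator used in practice: it assumes no censoring and balances arm means over a grid rather than performing the usual rank-test-based g-estimation, and all function names are ours.

```python
import numpy as np

def counterfactual_time(t_off, t_on, psi):
    """RPSFTM counterfactual time U(psi) = T_off + exp(psi) * T_on,
    where T_on is the time each patient spent on the experimental
    treatment and T_off the time spent off it."""
    return t_off + np.exp(psi) * t_on

def g_estimate_psi(t_off, t_on, arm, psi_grid):
    """Pick the psi at which the mean counterfactual times of the two
    randomised arms are closest -- a crude stand-in for the usual
    rank-test-based g-estimation, for illustration only."""
    best_psi, best_gap = None, np.inf
    for psi in psi_grid:
        u = counterfactual_time(t_off, t_on, psi)
        gap = abs(u[arm == 1].mean() - u[arm == 0].mean())
        if gap < best_gap:
            best_psi, best_gap = psi, gap
    return best_psi
```

Under the model, a negative psi corresponds to a life-extending treatment (observed treated time is inflated by exp(-psi) relative to the counterfactual), and at the true psi the two arms' counterfactual times are exchangeable by randomisation.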
Decision-Making and Statistical Standards
NASA speech on use of statistical methods in governmental decision making
APSS - Software support for decision making in statistical process control
Purpose:
SPC can be defined as a problem-solving process incorporating many separate decisions, including selection of the control chart based on verification of the data assumptions. No existing professional statistical software supports making these decisions in an integrated way.
Methodology/Approach:
There are many excellent professional statistical programs, but none provides a complete methodology for selecting the best control chart. The proposed Excel program APSS (Analysis of the Process Statistical Stability) solves this problem and also offers additional learning functions.
Findings:
The created software links together the separate functions of selected professional statistical programs (verification of data assumptions, control chart construction and interpretation) and supports active learning in this field.
Research Limitation/implication:
The proposed software can be applied to the control charts covered by Statgraphics Centurion and Minitab, but it can readily be modified for other professional statistical software.
Originality/Value of paper:
The paper presents original software created within the research activities of the Department of Quality Management of FMT, VSB-TUO, Czech Republic. The software links together the separate functions of professional statistical software needed for the complete realisation of statistical process control, and it is a powerful tool for active learning of statistical process control tasks.
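The control-chart logic that such software automates can be sketched in a few lines. The following is a generic Shewhart X-bar chart, not the APSS implementation: it estimates process spread by pooling within-subgroup standard deviations (ignoring the usual c4 bias correction) and flags subgroup means outside the three-sigma limits.

```python
import numpy as np

def xbar_chart_limits(subgroups):
    """Shewhart X-bar chart limits from rational subgroups:
    grand mean +/- 3 * sigma_hat / sqrt(n), with sigma_hat taken as
    the average within-subgroup standard deviation (no c4 correction)."""
    n = subgroups.shape[1]
    grand_mean = subgroups.mean(axis=1).mean()
    sigma_hat = subgroups.std(axis=1, ddof=1).mean()
    half_width = 3.0 * sigma_hat / np.sqrt(n)
    return grand_mean - half_width, grand_mean + half_width

def out_of_control(subgroups):
    """Indices of subgroups whose mean falls outside the control limits."""
    lcl, ucl = xbar_chart_limits(subgroups)
    means = subgroups.mean(axis=1)
    return np.where((means < lcl) | (means > ucl))[0]
```

Selecting between this chart and its alternatives (individuals charts, s-charts, charts for non-normal data) is exactly the decision process the abstract says APSS guides the user through.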
Comparison of quantum statistical models: equivalent conditions for sufficiency
A family of probability distributions (i.e. a statistical model) is said to
be sufficient for another, if there exists a transition matrix transforming the
probability distributions in the former to the probability distributions in the
latter. The Blackwell-Sherman-Stein (BSS) theorem provides necessary and
sufficient conditions for one statistical model to be sufficient for another,
by comparing their information values in statistical decision problems. In this
paper we extend the BSS theorem to quantum statistical decision theory, where
statistical models are replaced by families of density matrices defined on
finite-dimensional Hilbert spaces, and transition matrices are replaced by
completely positive, trace-preserving maps (i.e. coarse-grainings). The
framework we propose is suitable for unifying results that previously were
independent, like the BSS theorem for classical statistical models and its
analogue for pairs of bipartite quantum states, recently proved by Shmaya. An
important role in this paper is played by statistical morphisms, namely, affine
maps whose definition generalizes that of coarse-grainings given by Petz and
induces a corresponding criterion for statistical sufficiency that is weaker,
and hence easier to characterize, than Petz's. (Final version to appear in Communications in Mathematical Physics.)
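In the classical setting the sufficiency relation invoked by the BSS theorem is a linear feasibility question: does a column-stochastic matrix exist mapping each distribution of the first model to the corresponding distribution of the second? A small sketch, assuming SciPy's HiGHS-based `linprog` is available; the function name is ours.

```python
import numpy as np
from scipy.optimize import linprog

def is_sufficient(P, Q):
    """Check whether the model with distributions given by the columns
    of P (each an n-vector) is sufficient for the model given by the
    columns of Q (m-vectors): does a column-stochastic T with
    T @ P = Q exist?  Posed as a pure feasibility LP in the entries
    of T (row-major)."""
    n, k = P.shape
    m, k2 = Q.shape
    assert k == k2, "models must be indexed by the same parameter set"
    nvar = m * n
    A_eq, b_eq = [], []
    # T @ p_theta = q_theta for every parameter value theta.
    for t in range(k):
        for i in range(m):
            row = np.zeros(nvar)
            row[i * n:(i + 1) * n] = P[:, t]
            A_eq.append(row)
            b_eq.append(Q[i, t])
    # Each column of T sums to 1 (stochasticity).
    for j in range(n):
        row = np.zeros(nvar)
        row[j::n] = 1.0
        A_eq.append(row)
        b_eq.append(1.0)
    res = linprog(np.zeros(nvar), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, 1)] * nvar, method="highs")
    return res.status == 0  # 0 = feasible optimum found
```

The quantum extension in the paper replaces such transition matrices by completely positive, trace-preserving maps, for which feasibility becomes a semidefinite rather than linear program.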
Self-organization in complex systems as decision making
The idea is advanced that self-organization in complex systems can be treated
as decision making (as it is performed by humans) and, vice versa, decision
making is nothing but a kind of self-organization in the decision maker's nervous
system. A mathematical formulation is suggested based on the definition of
probabilities of system states, whose particular cases characterize the
probabilities of structures, patterns, scenarios, or prospects. In this general
framework, it is shown that the mathematical structures of self-organization
and of decision making are identical. This makes it clear how self-organization
can be seen as an endogenous decision making process and, reciprocally,
decision making occurs via an endogenous self-organization. The approach is
illustrated by phase transitions in large statistical systems, crossovers in
small statistical systems, evolutions and revolutions in social and biological
systems, structural self-organization in dynamical systems, and by the
probabilistic formulation of classical and behavioral decision theories. In all
these cases, self-organization is described as the process of evaluating the
probabilities of macroscopic states or prospects in the search for a state with
the largest probability. The general way of deriving the probability measure
for classical systems is the principle of minimal information, that is, the
conditional entropy maximization under given constraints. Behavioral biases of
decision makers can be characterized in the same way as analogous to quantum
fluctuations in natural systems.
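The "minimal information" step described above — maximising entropy under given constraints — can be sketched for a discrete system with a single mean constraint. The exponential (Boltzmann-like) form of the solution and the one-dimensional search for the Lagrange multiplier are standard; the function names below are ours.

```python
import numpy as np

def maxent_distribution(values, mean_target, beta_lo=-50.0, beta_hi=50.0):
    """Maximum-entropy distribution over discrete 'values' subject to a
    fixed mean: the maximiser has the form p_i proportional to
    exp(-beta * v_i), with the multiplier beta found by bisection so
    that the constraint <v> = mean_target holds."""
    centred = values - values.mean()  # shift for numerical stability

    def mean_at(beta):
        w = np.exp(-beta * centred)
        p = w / w.sum()
        return (p * values).sum()

    lo, hi = beta_lo, beta_hi
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        # mean_at is strictly decreasing in beta
        if mean_at(mid) > mean_target:
            lo = mid
        else:
            hi = mid
    beta = 0.5 * (lo + hi)
    w = np.exp(-beta * centred)
    return w / w.sum()
```

In the paper's language, the resulting probabilities of macroscopic states are what the self-organizing system (or the decision maker) evaluates in the search for the state with the largest probability.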
Robustness
The standard theory of decision making under uncertainty advises the decision maker to form a statistical model linking outcomes to decisions and then to choose the optimal distribution of outcomes. This assumes that the decision maker trusts the model completely. But what should a decision maker do if the model cannot be trusted? Lars Hansen and Thomas Sargent, two leading macroeconomists, push the field forward as they set about answering this question. They adapt robust control techniques and apply them to economics. By using this theory to let decision makers acknowledge misspecification in economic modeling, the authors develop applications to a variety of problems in dynamic macroeconomics. Technical, rigorous, and self-contained, this book will be useful for macroeconomists who seek to improve the robustness of decision-making processes.
Keywords: decision-making, uncertainty, statistical models, control techniques, economic modeling, dynamic macroeconomics, misspecification
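The core of the Hansen-Sargent "multiplier" formulation can be sketched for a finite outcome space: an imagined adversary tilts the decision maker's approximating model exponentially against high-utility outcomes, with a penalty parameter theta governing how much misspecification is feared (theta -> infinity recovers full trust in the model). A minimal sketch under these assumptions, in our notation rather than the book's:

```python
import numpy as np

def worst_case_twist(p, utility, theta):
    """Worst-case model for multiplier preferences: the adversary
    minimises E_q[utility] + theta * KL(q || p) over distributions q;
    the minimiser is the exponential tilt q_i ~ p_i * exp(-u_i / theta)."""
    w = np.asarray(p) * np.exp(-np.asarray(utility) / theta)
    return w / w.sum()

def robust_value(p, utility, theta):
    """Robust evaluation of a decision: worst-case expected utility
    plus the entropy penalty, evaluated at the adversary's minimiser.
    Equals the closed form -theta * log E_p[exp(-utility / theta)]."""
    q = worst_case_twist(p, utility, theta)
    kl = float(np.sum(q * np.log(q / np.asarray(p))))
    return float(np.dot(q, utility) + theta * kl)
```

Ranking decisions by this robust value makes the decision maker act as if outcomes that are bad under the approximating model were more likely than that model says, which is precisely the caution the book formalises.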
