The Impact of Stereoscopic 3-D on Visual Short-Term Memory
Visual short-term memory has been studied extensively; however, nearly all research on this topic has assessed two-dimensional object properties. This is unexpected, given that most individuals perceive the visual environment in three dimensions. In the experiments reported here, I investigate the stimuli necessary to assess visual short-term memory while eliminating potential confounds: the use of verbal memory to encode visual information, and the unintentional use of mental resources directed at irrelevant aspects of the memory task. I assess the impact of the amount of disparity, and of the distribution of elements in depth, on visual short-term memory. Individuals retain simple visual stimuli equivalently whether information is displayed in 2-D or 3-D, regardless of how objects are distributed in 3-D. Conversely, ease of encoding does influence visual short-term memory: tasks that facilitate encoding result in better performance. The experiments reported here show that stereoscopic 3-D does not improve visual short-term memory.
A straightforward meta-analysis approach for oncology phase I dose-finding studies
Early-phase (phase I) clinical studies aim to investigate the safety and the underlying dose-toxicity relationship of a drug or drug combination. While little may yet be known about the compound's properties, it is crucial to consider quantitative information available from any studies previously conducted on the same drug. A meta-analytic approach has the advantage of properly accounting for between-study heterogeneity, and it may be readily extended to prediction or shrinkage applications. Here we propose a simple and robust two-stage approach for the estimation of maximum tolerated dose(s) (MTDs) utilizing penalized logistic regression and Bayesian random-effects meta-analysis methodology. Implementation is facilitated using standard R packages. The properties of the proposed methods are investigated in Monte Carlo simulations, and the approach is motivated and illustrated by two examples from oncology.
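A minimal sketch of how such a two-stage approach could look, assuming a shared dose grid across studies; the data, ridge penalty, and 25% target DLT rate are invented for illustration, and the pooling step is a simplified stand-in for a full Bayesian random-effects meta-analysis:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical per-study DLT data: patients treated and DLTs observed per dose.
studies = [
    {"n": np.array([3, 3, 6, 3]), "tox": np.array([0, 0, 1, 2])},
    {"n": np.array([3, 6, 6, 6]), "tox": np.array([0, 1, 1, 3])},
]
doses = np.array([10.0, 20.0, 40.0, 80.0])  # mg
target = 0.25  # illustrative target DLT probability

def penalized_logistic(n, tox, lam=1.0):
    """Stage 1: ridge-penalized logistic fit of P(DLT) on log-dose,
    stabilizing estimates from sparse phase I data."""
    x = np.log(doses)
    def nll(beta):
        p = 1 / (1 + np.exp(-(beta[0] + beta[1] * x)))
        p = np.clip(p, 1e-9, 1 - 1e-9)
        return -(tox * np.log(p) + (n - tox) * np.log(1 - p)).sum() + lam * (beta**2).sum()
    return minimize(nll, x0=np.zeros(2)).x

# Per-study log-odds of DLT at each dose level.
coefs = np.array([penalized_logistic(s["n"], s["tox"]) for s in studies])
logits = coefs[:, [0]] + coefs[:, [1]] * np.log(doses)

# Stage 2 (simplified): average per-dose log-odds across studies; a full
# analysis would use Bayesian random-effects meta-analysis for heterogeneity.
p_pooled = 1 / (1 + np.exp(-logits.mean(axis=0)))
print("pooled DLT probabilities:", np.round(p_pooled, 3))
print("MTD estimate:", doses[np.argmin(np.abs(p_pooled - target))], "mg")
```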
A Bayesian dose-finding design for drug combination clinical trials based on the logistic model
In early-phase dose-finding cancer studies, the objective is to determine the maximum tolerated dose, defined as the highest dose with an acceptable dose-limiting toxicity rate. Finding this dose for drug-combination trials is complicated by drug-drug interactions, and many trial designs have been proposed to address this issue. These designs rely on complicated statistical models that typically are not familiar to clinicians and are rarely used in practice. The aim of this paper is to propose a Bayesian dose-finding design for drug-combination trials based on standard logistic regression. Under the proposed design, we continuously update the posterior estimates of the model parameters to make the decisions of dose assignment and early stopping. Simulation studies show that the proposed design is competitive and outperforms some existing designs. We also extend our design to handle delayed toxicities.
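To illustrate the continual updating idea, here is a toy grid-posterior version of a two-agent logistic dose-toxicity model; the priors, dose grid, target DLT rate, and observed cohort are placeholder choices, not those of the paper:

```python
import numpy as np

# Hypothetical 3x3 grid of standardized log-doses for the two agents.
d1 = np.log([0.5, 1.0, 2.0])
d2 = np.log([0.5, 1.0, 2.0])
target = 0.30  # illustrative target DLT rate

# Parameter grid for logit P(DLT) = b0 + b1*d1 + b2*d2, with b1, b2 > 0
# so that toxicity is monotone in each agent's dose.
B0, B1, B2 = np.meshgrid(np.linspace(-4, 2, 31),
                         np.linspace(0.05, 3, 30),
                         np.linspace(0.05, 3, 30), indexing="ij")
log_post = -(B0**2 + B1**2 + B2**2) / 8.0  # vague normal priors

def prob(i, j):
    return 1 / (1 + np.exp(-(B0 + B1 * d1[i] + B2 * d2[j])))

def update(log_post, i, j, dlt):
    """Posterior update after observing one patient at combination (i, j)."""
    p = prob(i, j)
    return log_post + np.log(p if dlt else 1 - p)

def next_combo(log_post):
    """Pick the combination whose posterior-mean DLT rate is nearest the
    target. (A real design would also impose escalation restrictions.)"""
    w = np.exp(log_post - log_post.max())
    w /= w.sum()
    gaps = {(i, j): abs((w * prob(i, j)).sum() - target)
            for i in range(3) for j in range(3)}
    return min(gaps, key=gaps.get)

for (i, j), dlt in [((0, 0), 0), ((0, 1), 0), ((1, 1), 1)]:  # toy cohort
    log_post = update(log_post, i, j, dlt)
print("next recommended combination:", next_combo(log_post))
```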
Personalized Dynamic Treatment Regimes in Continuous Time: A Bayesian Approach for Optimizing Clinical Decisions with Timing
Accurate models of clinical actions and their impacts on disease progression are critical for estimating personalized optimal dynamic treatment regimes (DTRs) in medical and health research, especially in managing chronic conditions. Traditional statistical methods for DTRs usually focus on estimating the optimal treatment or dosage at each given medical intervention, but overlook the important question of when the intervention should happen. We fill this gap by developing a two-step Bayesian approach to optimize clinical decisions with timing. In the first step, we build a generative model for a sequence of medical interventions, which are discrete events in continuous time, using a marked temporal point process (MTPP) in which the mark is the assigned treatment or dosage. This clinical action model is then embedded into a Bayesian joint framework whose other components model clinical observations, including longitudinal medical measurements and time-to-event data, conditional on treatment histories. In the second step, we propose a policy gradient method to learn the personalized optimal clinical decision that maximizes patient survival by having the MTPP interact with the model of clinical observations, while accounting for uncertainties in those observations learned from the posterior inference of the Bayesian joint model in the first step. A signature application of the proposed approach is to schedule follow-up visits and assign a dosage at each visit for patients after kidney transplantation. We evaluate our approach against alternative methods on both simulated and real-world datasets. In our experiments, the personalized decisions made by the proposed method are clinically useful: they are interpretable and successfully help improve patient survival.
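As a rough illustration of the first-step ingredient, the sketch below simulates visit times and dosage marks from a self-exciting MTPP via Ogata thinning; the Hawkes-style intensity and all parameters are placeholders, not the model estimated in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def intensity(t, history, mu=0.2, alpha=0.5, beta=1.5):
    """Hawkes-style visit intensity: constant baseline plus exponentially
    decaying excitation from each past visit."""
    return mu + alpha * sum(np.exp(-beta * (t - s)) for s, _ in history)

def simulate_mtpp(horizon=30.0):
    """Ogata thinning: because the intensity only decays between events,
    its current value bounds it until the next visit, so we propose
    candidate times from that bound and accept with ratio lambda/bound."""
    history, t = [], 0.0
    while True:
        bound = intensity(t, history)
        t += rng.exponential(1.0 / bound)
        if t >= horizon:
            return history
        if rng.random() < intensity(t, history) / bound:
            dose = rng.choice([1.0, 2.0, 4.0])  # mark: assigned dosage (toy)
            history.append((t, dose))

for t, dose in simulate_mtpp():
    print(f"visit at day {t:5.1f}, dose {dose} units")
```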
Designing a paediatric study for an antimalarial drug including prior information from adults
The objectives of this study were to design a pharmacokinetic (PK) study using prior information from adults and to evaluate the robustness of the recommended design, through a case study of mefloquine. PK data on adults and children were available from two different randomized studies of the treatment of malaria with the same artesunate-mefloquine combination regimen. A recommended design for pediatric studies of mefloquine was optimized on the basis of a model extrapolated from adult data through the following approach. (i) An adult PK model was built, and parameters were estimated using the stochastic approximation expectation-maximization algorithm. (ii) Pediatric PK parameters were then obtained by adding allometry and maturation to the adult model. (iii) A D-optimal design for children was obtained with PFIM, assuming the extrapolated model. Finally, the robustness of the recommended design was evaluated in terms of the relative bias and relative standard errors (RSEs) of the parameters in a simulation study with four different models, and was compared to the empirical design used for the pediatric study. Combining PK modeling, extrapolation, and design optimization led to a design for children with five sampling times. PK parameters were well estimated by this design, with small RSEs. Although the extrapolated model did not predict the observed mefloquine concentrations in children very accurately, it allowed precise and unbiased estimates across various model assumptions, contrary to the empirical design. Using information from adult studies combined with allometry and maturation can help provide robust designs for pediatric studies.
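Step (ii) typically rests on standard allometric and maturation scaling; a minimal sketch, with the 0.75 exponent, TM50, and Hill coefficient set to commonly used literature-style values rather than the study's mefloquine estimates:

```python
def pediatric_clearance(cl_adult, weight_kg, pma_weeks, tm50=47.7, hill=3.4):
    """Scale an adult clearance to a child: allometric weight scaling with a
    fixed 0.75 exponent, times a sigmoidal maturation function of
    postmenstrual age (PMA, in weeks). TM50 and the Hill coefficient are
    generic placeholder values."""
    allometry = (weight_kg / 70.0) ** 0.75
    maturation = pma_weeks**hill / (pma_weeks**hill + tm50**hill)
    return cl_adult * allometry * maturation

# Example: a ~2-year-old (12 kg, PMA ~144 weeks) vs a 70-kg adult with CL = 2 L/h.
print(f"pediatric CL ~ {pediatric_clearance(2.0, 12.0, 144.0):.2f} L/h")
```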
Value of information methods to design a clinical trial in a small population to optimise a health economic utility function
Background:
Most confirmatory randomised controlled clinical trials (RCTs) are designed with specified power, usually 80% or 90%, for a hypothesis test conducted at a given significance level, usually 2.5% for a one-sided test. Approval of the experimental treatment by regulatory agencies is then based on the result of such a significance test together with other information, balancing the risk of adverse events against the benefit of the treatment to future patients. In the setting of a rare disease, recruiting sufficient patients to achieve conventional error rates for clinically reasonable effect sizes may be infeasible, suggesting that the decision-making process should reflect the size of the target population.
Methods:
We considered the use of a decision-theoretic value of information (VOI) method to obtain the optimal sample size and significance level for confirmatory RCTs in a range of settings. We assume the decision maker represents society. For simplicity we assume the primary endpoint to be normally distributed with unknown mean following some normal prior distribution representing information on the anticipated effectiveness of the therapy available before the trial. The method is illustrated by an application in an RCT in haemophilia A. We explicitly specify the utility in terms of improvement in primary outcome and compare this with the costs of treating patients, both financial and in terms of potential harm, during the trial and in the future.
Results:
The optimal sample size for the clinical trial decreases as the size of the population decreases. For non-zero cost of treating future patients, either monetary or in terms of potential harmful effects, stronger evidence is required for approval as the population size increases, though this is not the case if the costs of treating future patients are ignored.
Conclusions:
Decision-theoretic VOI methods offer a flexible approach with both type I error rate and power (or equivalently trial sample size) depending on the size of the future population for whom the treatment under investigation is intended. This might be particularly suitable for small populations when there is considerable information about the patient population.
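A stylized numerical sketch of the trade-off described here: with a normal prior on the treatment effect, the expected societal utility of a two-arm trial with n patients per arm and one-sided critical value z can be approximated by quadrature over the prior. The population size, costs, and prior below are invented for illustration:

```python
import numpy as np
from scipy.stats import norm

N = 2000                       # future patient population size (illustrative)
sigma = 1.0                    # known outcome standard deviation
cost_per_patient = 0.05        # trial cost per patient, in benefit units
prior_mu, prior_sd = 0.2, 0.3  # prior on the treatment effect mu

def expected_utility(n, z):
    """E[utility] = -trial cost + N * E[mu * P(reject H0 | mu)], the prior
    expectation taken by simple quadrature over a grid of mu values."""
    mus = np.linspace(prior_mu - 5 * prior_sd, prior_mu + 5 * prior_sd, 400)
    w = norm.pdf(mus, prior_mu, prior_sd)
    se = sigma * np.sqrt(2.0 / n)           # SE of mean difference, n per arm
    p_reject = 1 - norm.cdf(z - mus / se)   # one-sided test at critical value z
    dmu = mus[1] - mus[0]
    return -2 * n * cost_per_patient + N * np.sum(w * mus * p_reject) * dmu

# Jointly optimize sample size and critical value over a crude grid.
grid = [(n, z) for n in range(10, 301, 10) for z in np.linspace(0.5, 3.0, 26)]
n_opt, z_opt = max(grid, key=lambda g: expected_utility(*g))
print(f"optimal n per arm = {n_opt}, optimal critical value = {z_opt:.2f}")
```

Re-running with a smaller N shrinks the benefit term relative to the trial cost, so the optimal sample size decreases, in line with the results summarised above.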
Approaches to sample size calculation for clinical trials in rare diseases
We discuss 3 alternative approaches to sample size calculation: traditional sample size calculation based on power to show a statistically significant effect, sample size calculation based on assurance, and sample size calculation based on a decision-theoretic approach. These approaches are compared head-to-head for clinical trial situations in rare diseases. Specifically, we consider 3 case studies of rare diseases (Lyell disease, adult-onset Still disease, and cystic fibrosis) with the aim of planning the sample size for an upcoming clinical trial. We outline in detail the reasonable choice of parameters for these approaches for each of the 3 case studies and calculate sample sizes. We stress that the influence of the input parameters needs to be investigated in all approaches, and recommend investigating different sample size approaches before finally deciding on the trial size. The sample size is highly influenced by the choice of the treatment effect parameter in all approaches, and by the parameter for the additional cost of the new treatment in the decision-theoretic approach. These parameters should therefore be discussed extensively.
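To make the contrast concrete, the sketch below compares power-based and assurance-based sample sizes for a two-arm normal outcome; the effect size, prior, and 80% threshold are placeholder values, not those of the case studies:

```python
import numpy as np
from scipy.stats import norm

alpha, sigma = 0.025, 1.0        # one-sided level; known outcome SD
delta = 0.5                      # assumed effect for the power calculation
prior_mu, prior_sd = 0.5, 0.25   # prior on the effect for the assurance calculation

def power(n, effect):
    """Power of a one-sided two-sample z-test with n patients per arm."""
    se = sigma * np.sqrt(2.0 / n)
    return 1 - norm.cdf(norm.ppf(1 - alpha) - effect / se)

def assurance(n, draws=100_000, seed=0):
    """Unconditional probability of success: power averaged over the prior."""
    effects = np.random.default_rng(seed).normal(prior_mu, prior_sd, draws)
    return power(n, effects).mean()

n_power = next(n for n in range(2, 1000) if power(n, delta) >= 0.80)
n_assurance = next(n for n in range(2, 1000) if assurance(n) >= 0.80)
print(f"power-based n per arm: {n_power}; assurance-based n per arm: {n_assurance}")
```

Because power saturates near 1 for large effects but drops steeply for small ones, averaging over the prior typically pulls the success probability below the power at the prior mean, so the assurance-based sample size exceeds the power-based one here.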
Development of a dose-finding method modelling a toxicity score for phase I clinical trials in oncology
The aim of a phase I oncology trial is to identify, among a finite number of doses and with a small number of patients, a dose with an acceptable safety level to recommend for further evaluation.
Most phase I designs use the Dose-Limiting Toxicity (DLT), a binary endpoint, to assess the level of toxicity. DLT might be an incomplete endpoint for investigating molecularly targeted therapies, as a lot of useful toxicity information is discarded. In this work, we propose a quasi-continuous toxicity score, the Total Toxicity Profile (TTP), to measure quantitatively and comprehensively the overall burden of multiple toxicities. The TTP is defined as the Euclidean norm of the weights of toxicities experienced by a patient, where the weights reflect the relative clinical importance of each type and grade of toxicity. We then propose a dose-finding design, the Quasi-Likelihood Continual Reassessment Method (QLCRM), incorporating the TTP score into the CRM, with a logistic model for the dose-toxicity relationship in a frequentist framework. Using simulations, we compare our design to three existing designs for quasi-continuous toxicity scores: i) the QCRM design, proposed by Yuan et al., with an empiric model for the dose-toxicity relationship in a Bayesian framework; ii) the UA design of Ivanova and Kim, derived from the "up-and-down" methods for the dose-escalation process and using an isotonic regression to estimate the recommended dose at the end of the trial; and iii) the EID design of Chen et al., using isotonic regression both for the dose-escalation process and for the identification of the recommended dose. We also perform a simulation study to evaluate the TTP-driven methods in comparison to the classical DLT-driven CRM, and we evaluate the robustness of these designs in a setting where grades can be misclassified. In the last part of this work, we illustrate the process of building the TTP score and the application of the QLCRM method through the example of a paediatric trial. In this study, we used the Delphi method to elicit the weights and the target toxicity score considered an acceptable toxicity measure. All designs using the TTP score to identify the recommended dose had good performance characteristics for most scenarios, with good overdosing control. For a sample size of 36, the percentage of correct selection for the QLCRM ranged from 80 to 90%, with similar results for the QCRM design. The simulation studies also demonstrate that score-driven designs offer improved performance and robustness compared to conventional DLT-driven designs. In the retrospective application to an erlotinib trial, the consensus weights as well as the target TTP were easily obtained, confirming the feasibility of the process. We suggest guidelines to facilitate this process in a real clinical trial and to promote better practice of the approach. The QLCRM method based on the TTP endpoint combining multiple graded toxicities is an appealing alternative to conventional dose-finding designs, especially in the context of molecularly targeted agents.
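A tiny illustration of the TTP definition, using an invented weight matrix; the real weights would be elicited from clinicians, for example via the Delphi process described above:

```python
import numpy as np

# Invented weights w[toxicity][grade], indexed by grade 0-4; grade 0 has
# weight 0. In practice these reflect the clinical importance of each type
# and grade of toxicity, as elicited from clinicians.
weights = {
    "neutropenia": [0.0, 0.5, 1.0, 1.5, 2.5],
    "diarrhoea":   [0.0, 0.25, 0.5, 1.0, 2.0],
    "neuropathy":  [0.0, 0.5, 1.0, 2.0, 3.0],
}

def ttp(worst_grades):
    """Total Toxicity Profile: Euclidean norm of the weights attached to the
    worst observed grade of each toxicity type."""
    return np.sqrt(sum(weights[tox][g] ** 2 for tox, g in worst_grades.items()))

# Patient with grade-2 neutropenia and grade-3 diarrhoea.
print(f"TTP = {ttp({'neutropenia': 2, 'diarrhoea': 3}):.2f}")
```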
Using dichotomized survival data to construct a prior distribution for a Bayesian seamless Phase II/III clinical trial
Master protocol designs allow for simultaneous comparison of multiple treatments or disease subgroups. Master protocols can also be designed as seamless studies, in which two or more clinical phases are considered within the same trial. They can be divided into two categories: operationally seamless, in which the two phases are separated into two independent studies, and inferentially seamless, in which the interim analysis is considered an adaptation of the study. Bayesian versions of such designs have scarcely been studied. Our aim is to propose and compare Bayesian operationally seamless Phase II/III designs using a binary endpoint for the first stage and a time-to-event endpoint for the second stage. At the end of Phase II, arm selection is based on posterior (futility) and predictive (selection) probabilities. The results of the first phase are then incorporated into prior distributions of a time-to-event model. Simulation studies showed that Bayesian operationally seamless designs can approach their inferentially seamless counterparts, achieving higher simulated power than the frequentist operationally seamless design.
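A compact sketch of interim rules of the kind described here, for a binary Phase II endpoint with a Beta prior; the prior parameters, thresholds, and success criterion are illustrative assumptions:

```python
from scipy.stats import beta, betabinom

a0, b0 = 0.5, 0.5     # Jeffreys prior on the Phase II response rate
p0 = 0.20             # response rate considered uninteresting
futility_cut = 0.10   # drop the arm if P(p > p0 | data) falls below this
n1, n2 = 20, 20       # patients observed so far / still to be enrolled

def interim_decision(responses, needed=8):
    """Futility via the posterior P(p > p0); selection via the predictive
    probability of at least `needed` responses among the next n2 patients."""
    a, b = a0 + responses, b0 + n1 - responses
    post_eff = 1 - beta.cdf(p0, a, b)
    pred = 1 - betabinom.cdf(needed - 1, n2, a, b)
    if post_eff < futility_cut:
        return "drop arm (futility)"
    return f"continue: P(p > p0) = {post_eff:.2f}, predictive prob = {pred:.2f}"

print(interim_decision(responses=6))   # promising arm
print(interim_decision(responses=1))   # likely futile arm
```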
- …