
    The Global Risk Approach Should Be Better Applied in French Hypertensive Patients: A Comparison between Simulation and Observation Studies

    Predicting the public health impact of a preventive strategy provides valuable support for decision-making. International guidelines for hypertension management have introduced the level of absolute cardiovascular risk into the definition of the treatment target population, but the public health impact of implementing such a recommendation has not been measured. We assessed the efficiency of three treatment scenarios, based on historical and current versions of practice guidelines, in a Realistic Virtual Population representative of the French population aged 35 to 64 years: 1) BP≥160/95 mm Hg; 2) BP≥140/90 mm Hg; and 3) BP≥140/90 mm Hg plus increased CVD risk. We compared eligibility under the ESC guidelines with the recently observed proportion of treated patients among hypertensive individuals reported by the Etude Nationale Nutrition Santé (ENNS) survey. Lowering the threshold defining hypertension multiplied the number of eligible individuals by 2.5. Applying the cardiovascular risk rule reduced this number substantially, leaving fewer than 1/4 of hypertensive women under 55 years and fewer than 1/3 of hypertensive men under 45 years of age eligible; this was the most efficient strategy. Compared with the simulated application of the guidelines, men of all ages were undertreated (between 32 and 60%), as were women over 55 years (70%); by contrast, younger women were over-treated (over 200%). Basing the treatment decision on global CVD risk is more efficient than relying on the blood pressure level alone. However, lack of screening rather than guideline application seems to explain the low prescription rates among hypertensive individuals in France. The multidimensional analyses required to obtain these results are possible only with databases at the individual level: realistic virtual populations should become the gold standard for assessing the impact of public health policies at the national level.
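
    The three treatment scenarios can be made concrete with a small simulation sketch. This is only an illustration under invented assumptions: the blood pressure distributions and the high-risk flag below are placeholders, not the Realistic Virtual Population or the ESC risk equations used in the study.

        # Illustrative sketch (not the authors' code): applying the three
        # eligibility scenarios to a simulated population. All distributions
        # and the risk flag are hypothetical placeholders.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 100_000                            # hypothetical population size

        sbp = rng.normal(130, 18, n)           # systolic BP, mm Hg (placeholder)
        dbp = rng.normal(80, 11, n)            # diastolic BP, mm Hg (placeholder)
        high_cvd_risk = rng.random(n) < 0.15   # placeholder for the absolute-risk rule

        scenario_1 = (sbp >= 160) | (dbp >= 95)   # historical threshold
        scenario_2 = (sbp >= 140) | (dbp >= 90)   # current threshold
        scenario_3 = scenario_2 & high_cvd_risk   # current threshold + CVD risk

        for name, eligible in [("BP>=160/95", scenario_1),
                               ("BP>=140/90", scenario_2),
                               ("BP>=140/90 + risk", scenario_3)]:
            print(f"{name}: {eligible.mean():.1%} of the population eligible")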

    Benefits of numerical modeling applied to new drug discovery and development: theoretical approach, simplified model and application to atherosclerosis

    New drug discovery and development is a complex process that requires massive investments over protracted horizons. The cost of failed programs is phenomenal, and the declining R&D productivity of the past two decades suggests that the innovation and development process as practised today is no longer suited to current challenges. The central purpose of this thesis is to lay the theoretical foundations for a new "top-down" approach to target identification, which inverts the traditional process and is construed as an alternative to the current bottom-up approach based on high-throughput screening. In essence, it consists in the design and optimization of predictive in silico models in order to guide the successive phases of development and to predict the outcome of each step as precisely as possible. The approach rests on the design of a model of therapeutic innovation and all of its components, notably the pathophysiological model (simulating the organism and the natural course of the disease), the therapeutic model (simulating the effect of the drug on the disease) and the effect model (which can be predicted from the two previous models and simulates the impact of the drug on the treated population from its effect on each individual of that population). The complete process comprises ten steps: building the pathophysiological model, calibration, validation, identification of potential targets, running the software for sequential modification of the targets, construction of the virtual population (realistic or not), derivation of the effect model needed to compute the number of events avoided (NEE), computation of the NEE for each target, and comparison and selection of the "best" target.
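
    The final steps described above, computing the NEE per candidate target from the effect model and choosing the target with the largest NEE, can be illustrated with a minimal numerical sketch. All distributions, risk reductions and target names below are hypothetical placeholders, not values from the thesis.

        # Minimal illustrative sketch (not the thesis code) of the NEE step: each
        # candidate target yields, for every individual of a virtual population,
        # an event risk without treatment and a risk under treatment; the number
        # of events avoided (NEE) is the sum of individual risk reductions, and
        # the "best" target maximises it.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 50_000                                # hypothetical population size

        r_control = rng.beta(2, 20, n)            # placeholder untreated event risks

        # Placeholder relative risk reductions for three hypothetical targets
        targets = {"target_A": 0.10, "target_B": 0.25, "target_C": 0.18}

        nee_by_target = {}
        for name, rrr in targets.items():
            r_treated = r_control * (1.0 - rrr)   # effect model output per individual
            nee_by_target[name] = np.sum(r_control - r_treated)

        best = max(nee_by_target, key=nee_by_target.get)
        for name, nee in nee_by_target.items():
            print(f"{name}: about {nee:.0f} events avoided")
        print(f"'Best' target under these placeholder assumptions: {best}")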

    Empirical methods for the validation of time-to-event mathematical models taking into account uncertainty and variability: application to EGFR+ lung adenocarcinoma

    Background: Over the past several decades, metrics have been defined to assess the quality of various types of models and to compare their performance depending on their capacity to explain the variance found in real-life data. However, available validation methods are mostly designed for statistical regressions rather than for mechanistic models. To our knowledge, in the latter case there are no consensus standards, for instance for the validation of predictions against real-world data given the variability and uncertainty of the data. In this work, we focus on the prediction of time-to-event curves, using as an application example a mechanistic model of non-small cell lung cancer. We designed four empirical methods to assess both model performance and reliability of predictions: two methods based on bootstrapped versions of non-parametric statistical tests, the log-rank and the combined weighted log-rank (MaxCombo) tests, and two methods based on bootstrapped prediction intervals, referred to here as raw coverage and the juncture metric. We also introduced the notion of observation time uncertainty, to take into account the real-life delay between the moment an event happens and the moment it is observed and reported.
    Results: We highlight the advantages and disadvantages of these methods according to their application context and show that the context of use of the model has an impact on the validation process. The use of several validation metrics revealed the model's limited ability to predict disease evolution in the whole population of mutations at once, whereas it performed better for specific predictions in the target mutation populations. The choice and use of a single metric could have led to an erroneous validation of the model and its context of use.
    Conclusions: This work stresses the importance of choosing metrics judiciously, and shows how a combination of metrics can be more relevant for validating a given model and its predictions within a specific context of use. We also show how the reliability of the results depends both on the metric and on the statistical comparisons, and that the conditions of application and the type of available information need to be taken into account to choose the best validation strategy.
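
    The prediction-interval family of metrics (the "raw coverage" idea) can be sketched as follows. This is a simplified illustration, not the paper's implementation: the observed data and the model-predicted survival curve are synthetic placeholders, and the Kaplan-Meier estimator is a bare-bones version.

        # Illustrative sketch of a bootstrapped coverage-type check: bootstrap the
        # observed time-to-event data, build pointwise percentile intervals of the
        # Kaplan-Meier curve, and measure how often the model-predicted survival
        # curve falls inside them. Data and the predicted curve are synthetic.
        import numpy as np

        def km_survival(times, events, grid):
            """Kaplan-Meier survival probability evaluated on a time grid."""
            order = np.argsort(times)
            t, e = times[order], events[order]
            surv = np.ones_like(grid, dtype=float)
            s, at_risk, j = 1.0, len(t), 0
            for i, g in enumerate(grid):
                while j < len(t) and t[j] <= g:
                    if e[j]:
                        s *= 1.0 - 1.0 / at_risk
                    at_risk -= 1
                    j += 1
                surv[i] = s
            return surv

        rng = np.random.default_rng(2)
        obs_t = rng.exponential(12.0, 300)      # synthetic observed times (months)
        obs_e = rng.random(300) < 0.8           # synthetic event indicators
        grid = np.linspace(0.0, 36.0, 37)
        model_curve = np.exp(-grid / 14.0)      # placeholder model prediction

        n_boot = 500
        boot = np.empty((n_boot, grid.size))
        for b in range(n_boot):
            idx = rng.integers(0, obs_t.size, obs_t.size)
            boot[b] = km_survival(obs_t[idx], obs_e[idx], grid)

        lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
        coverage = np.mean((model_curve >= lo) & (model_curve <= hi))
        print(f"Share of time points inside the 95% bootstrap band: {coverage:.2f}")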

    Bivariate distributions in two virtual populations, simulated with and without taking into account their covariance.

    Contour plots show the distribution of systolic and diastolic blood pressure on the X and Y axes, respectively. Bars beside the plots indicate the colour scales representing the density of individuals at each level of bivariate values. Panel A: taking the covariance into account. Panel B: covariance = 0.
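
    The contrast this figure illustrates, joint sampling with versus without the SBP-DBP covariance, can be reproduced with a short sketch. The means, standard deviations and correlation below are invented for illustration only, not the values used to build the virtual population.

        # Minimal sketch: sample systolic/diastolic blood pressure jointly from a
        # multivariate normal with their covariance (panel A) versus with the
        # covariance set to zero (panel B). All parameters are placeholders.
        import numpy as np

        rng = np.random.default_rng(3)
        mean = [130.0, 80.0]                  # hypothetical SBP/DBP means (mm Hg)
        sd = np.array([18.0, 11.0])           # hypothetical standard deviations
        rho = 0.7                             # hypothetical SBP-DBP correlation

        cov_with = np.array([[sd[0]**2, rho * sd[0] * sd[1]],
                             [rho * sd[0] * sd[1], sd[1]**2]])
        cov_zero = np.diag(sd**2)             # covariance forced to 0

        pop_a = rng.multivariate_normal(mean, cov_with, 100_000)  # panel A
        pop_b = rng.multivariate_normal(mean, cov_zero, 100_000)  # panel B

        # With the covariance, jointly high values (SBP>=140 and DBP>=90) are far
        # more frequent than under independence.
        for name, pop in [("with covariance", pop_a), ("covariance = 0", pop_b)]:
            both_high = np.mean((pop[:, 0] >= 140) & (pop[:, 1] >= 90))
            print(f"{name}: {both_high:.1%} with SBP>=140 and DBP>=90")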

    Comparison of the prevalence of hypertension estimated in the RVP with that reported by ENNS.

    Bars represent the percentage of hypertensive individuals in each age category, separately for both sexes, as estimated in the RVP, the ENNS survey, and the RVP after reconstitution of the data including the proportion of hypertensive subjects taking medication in the MONICA-France cohort. Abbreviations: RVPi, initial estimates from the realistic virtual population; ENNS, Etude Nationale Nutrition Santé; RVPr, reconstituted estimates from the RVP.

    Recommendation for use of blood pressure lowering drugs according to the blood pressure level and the level of risk.

    Schematic guidelines inspired by the table “Management of total CVD risk – blood pressure” in the 2007 ESC guidelines on CVD prevention [13] (http://www.plosone.org/article/info:doi/10.1371/journal.pone.0017508#pone.0017508-Graham1).

    Theoretical eligibility from implementing the guidelines and treatment of hypertension in a real setting.

    Bars show, for each age category in men and women, the proportion of hypertensive individuals eligible or ineligible for treatment according to the ESC guidelines implemented in the RVP, and the proportion of hypertensive subjects in the ENNS survey who are unknown and untreated, known but untreated, or treated. Results are expressed as a percentage of all the individuals belonging to each class.