275 research outputs found

    Chapter 15: Medical Malpractice


    The Science of Endocrine Disruption - Will It Change the Scope of Products Liability Claims?

    The Food Quality Protection Act of 1996 and the 1996 amendments to the Safe Drinking Water Act require the EPA Administrator, in consultation with the Secretary of Health and Human Services, to develop a screening program for endocrine-disrupting effects. This Comment explores the science of endocrine disruptors and examines whether the current science supports a successful products liability claim. The various methods for proving liability, causation, and harm are presented and analyzed against the current science of endocrine disruption. The Comment suggests ways courts might allow plaintiffs to present evidence of causation in suits against companies that add endocrine disruptors to their products. Even so, a plaintiff bringing such an action still faces formidable hurdles.

    Predicting Next Kidney Offer for a Kidney Transplant Candidate Declining Current One

    ABSTRACT: Patients with end-stage kidney disease waiting for a kidney transplant are confronted with a difficult decision when a deceased-donor kidney is offered to them. They can either accept the offer or wait for a potentially better one while remaining on dialysis. The decision involves both patient and physician, who should evaluate the alternatives together to find the best decision for the patient.
This process is called Shared Decision-Making (SDM). On the one hand, around 500 people were waiting for a kidney transplant in the province of Québec in 2017, and 54 died on the waiting list that year; the mean waiting time of patients transplanted in 2017 was 493 days. On the other hand, lower-quality kidneys that could have benefited at-risk patients go unused. For some patients, accepting a lower-quality kidney offers better survival chances than remaining on dialysis, and the longer the time spent on dialysis, the worse the expected outcome of a future transplant. At the same time, some high-priority patients can benefit from waiting for a better offer. Therefore, a methodology and decision-support tools for informed SDM could at once increase the quality and number of transplants, patients' survival and satisfaction, and physicians' confidence in their advice, while decreasing organ wastage and healthcare expenditures for end-stage kidney disease. Yet the mathematical tools that exist to foster SDM are not fully satisfactory: most are designed to give the patient advice, often based on little evidence, rather than to inform them. Our work is part of a larger research project that has been split into two questions the patient would like answered in order to make a decision; we address the second question and assume the first is solved as a black box. 1. What happens if I say yes? How long is the kidney from this specific donor expected to survive in a patient like me? How different is this survival from that of an average donor? 2. What happens if I say no? How long should I expect to wait for another offer? What would its expected quality be? How long would I have to wait for an offer better than the current one?
We consider a general scoring allocation system for deceased donors: offers are made to patients according to their rank on the waiting list, determined by a scoring function at each donor arrival. Consider a patient x receiving a kidney offer y0. Our objective is to predict the time T at which the patient will receive the next offer Y. Given a black box q predicting the quality of a match (for example, survival time), we want to estimate the quality of the next offer, q(x, Y), and the time to the next better offer, (T | q(x, Y) > q(x, y0)). We model the arrival of eligible donors (i.e., compatible donors who would actually be proposed to the patient) as a non-homogeneous Poisson point process with a piecewise-constant rate. In practice, we estimate this rate from donors who arrived during the two years preceding the current offer. For each donor y, we assess whether she would have been eligible for our patient at different points in time in the future (accounting for the evolution of the patient's age and waiting time). In the end, our algorithm predicts the whole distribution of the next offer for a specific patient. This enables us to provide the patient with the expected time to next offer, E(T) (with bootstrap confidence intervals), and t95%, the time by which they will have received an offer with probability 0.95. We validated our algorithm on data provided by Transplant Québec. We showed that comparing the predicted quantiles tα to the observed times to next offer estimates the empirical quantiles over the whole dataset, and that the predicted expected times can be grouped by month and compared to the averaged observed times while accounting for censored values. The best version of the algorithm faithfully predicts the distribution of T on our test set (712 offers: 569 uncensored, 143 censored): observed times fall below the predicted t95% for 94% of observations, with a concordance index of 0.74.
We introduced a measure to detect bad predictions and gauge their magnitude. Finally, we used the well-known Kidney Donor Risk Index to estimate the next offer's expected quality and compare it to the current offer, and we adapted our algorithm to predict the mean time to the next better offer, E(T | q(x, Y) > q(x, y0)). Although we have only applied the algorithm to data from Québec, it extends to any score-based waiting list. It is a highly personalised and interpretable online algorithm; it is not time-consuming, captures long-term trends in donor arrivals, and supports many ways of informing the patient. It currently has limitations: predictions are poor when data are too scarce or for certain types of patients, and the algorithm neglects the risk of death or removal from the list. It is therefore important that the physician confront the results with her expertise and that the approach continue to be developed.
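As a concrete illustration of the arrival model above - eligible-donor arrivals as a Poisson process with a piecewise-constant rate fitted on the two years preceding the offer, from which E(T) and t95% are read off the survival curve - the following sketch shows one way such an estimator could look. It is a minimal sketch under assumed settings: `piecewise_rate`, `next_offer_quantiles`, the number of rate pieces, and the prediction horizon are hypothetical choices, not the thesis's implementation.

```python
import numpy as np

def piecewise_rate(arrival_days, window=730.0, n_pieces=4):
    """Estimate a piecewise-constant Poisson rate (offers/day) from the
    arrival times of eligible donors over the past `window` days."""
    edges = np.linspace(0.0, window, n_pieces + 1)
    counts, _ = np.histogram(arrival_days, bins=edges)
    return edges, counts / np.diff(edges)

def next_offer_quantiles(edges, rates, horizon=2000.0, step=1.0):
    """Waiting time T to the next offer: S(t) = exp(-Lambda(t)), with the
    estimated rates recycled periodically beyond the fitting window.
    Returns the expected wait E(T) and the 95% quantile t95%."""
    t = np.arange(0.0, horizon, step)
    # rate in effect at each future time (cycle through the pieces)
    piece = np.searchsorted(edges, t % edges[-1], side="right") - 1
    lam = rates[np.clip(piece, 0, len(rates) - 1)]
    cum = np.concatenate(([0.0], np.cumsum(lam[:-1]) * step))  # Lambda(t_i)
    surv = np.exp(-cum)                   # P(T > t_i)
    e_t = float(np.sum(surv) * step)      # E[T] ~ integral of S(t) dt
    idx = min(int(np.searchsorted(1.0 - surv, 0.95)), len(t) - 1)
    return e_t, float(t[idx])
```

For a patient seeing roughly one eligible donor every ten days, the sketch returns an expected wait of about ten days and a t95% of about one month; the thesis additionally re-evaluates each donor's eligibility at future dates and bootstraps confidence intervals, which this sketch omits.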

    Life Sciences, Technology, and the Law - Symposium Transcript - March 7, 2003

    Transcript of the Life Sciences, Technology, and the Law Symposium held at the University of Michigan Law School on Friday, March 7, 2003.

    Environmental Effects on Health: Ignorance and Undone Science

    Considerable research shows that environmental exposures can significantly affect people’s health, especially in terms of autoimmune conditions, cancers, and neurological and psychological conditions. Health effects are possible at exposure levels far below those generally considered safe by orthodox health authorities. A prime example is multiple chemical sensitivity (MCS), where sufferers themselves have made clear short-term associations between health effects and low-level environmental exposures. MCS is not clearly definable and significantly overlaps with other, largely unrecognised health conditions, including fibromyalgia (FMS), chronic fatigue syndrome (CFS), electrohypersensitivity syndrome (EHS) and chronic inflammatory response syndrome (CIRS). The orthodox medical diagnostic process is implicated in the production of ignorance about such conditions. Despite the large body of research showing health effects from low-level environmental exposures, there remains much “undone science” in the field - research that could be done but isn’t. Undone science, and the consequent societal ignorance, stems largely from society’s ingrained desire for technological improvement. Industry, responsible for technological developments such as chemical products and radiation devices, has little interest in possible health effects, so expensive scientific research into them is left undone. When subsequent research or firsthand experience of health effects begins to surface, there is ample evidence that the industries responsible for the exposures become active in generating ignorance. Owing to close ties with industry, medical and health systems become complicit in industry’s strategy, and knowledge is manipulated through industry funding of scientific studies, which influences the conclusions of the research. 
Institutional support for industry products, including by regulatory agencies with conflicts of interest, also contributes to this manipulation of knowledge. Common industry strategies for generating ignorance include sowing doubt, shifting blame, exercising power, and deploying industry shills, astroturfing, smear campaigns, media manipulation and fact-checking services. Future generations of children, who inherit contaminants from conception, will be most affected by the gross neglect of these health effects. The carry-through of health effects, and their magnification in subsequent generations, is a tragedy in the making.

    Between Ethical Oversight and State Neutrality: Introducing Controversial Technologies into the Public Healthcare Systems of Germany, Italy and England

    Introducing ethically controversial (bio)technologies into the public healthcare system inevitably provokes societal and legal conflict. While it is often argued that these choices ought to comply with moral standards, the consideration of ethical and religious concerns raises a serious problem of legitimacy. By adopting the position that the state must act in an ethically neutral manner, this book provides a critical legal analysis of the relationship between ethics and law and its implications for the public healthcare system. The ensuing examination combines a comparative, legal-constitutional perspective with the investigation of two case studies: preimplantation genetic diagnosis (PGD) and non-invasive prenatal testing (NIPT).

    Data- og ekspertdreven variabelseleksjon for prediktive modeller i helsevesenet: mot økt tolkbarhet i underbestemte maskinlæringsproblemer (Data- and expert-driven feature selection for predictive models in healthcare: towards increased interpretability in underdetermined machine-learning problems)

    Modern data acquisition techniques in healthcare generate large collections of data from multiple sources, such as novel diagnosis and treatment methodologies. Some concrete examples are electronic healthcare record systems, genomics, and medical images. This leads to situations with often unstructured, high-dimensional heterogeneous patient cohort data where classical statistical methods may not be sufficient for optimal utilization of the data and informed decision-making. Instead, investigating such data structures with modern machine learning techniques promises to improve the understanding of patient health issues and may provide a better platform for informed decision-making by clinicians. Key requirements for this purpose include (a) sufficiently accurate predictions and (b) model interpretability. Achieving both aspects in parallel is difficult, particularly for datasets with few patients, which are common in the healthcare domain. In such cases, machine learning models encounter mathematically underdetermined systems and may overfit easily on the training data. An important approach to overcome this issue is feature selection, i.e., determining a subset of informative features from the original set of features with respect to the target variable. While potentially raising the predictive performance, feature selection fosters model interpretability by identifying a low number of relevant model parameters to better understand the underlying biological processes that lead to health issues. Interpretability requires that feature selection is stable, i.e., small changes in the dataset do not lead to changes in the selected feature set. A concept to address instability is ensemble feature selection, i.e. the process of repeating the feature selection multiple times on subsets of samples of the original dataset and aggregating results in a meta-model. 
This thesis presents two approaches for ensemble feature selection tailored towards high-dimensional data in healthcare: the Repeated Elastic Net Technique for feature selection (RENT) and the User-Guided Bayesian Framework for feature selection (UBayFS). While RENT is purely data-driven and builds upon elastic net regularized models, UBayFS is a general framework for ensembles with the capability to include expert knowledge in the feature selection process via prior weights and side constraints. A case study modeling the overall survival of cancer patients compares these novel feature selectors and demonstrates their potential in clinical practice. Beyond the selection of single features, UBayFS also allows for selecting whole feature groups (feature blocks) acquired from multiple data sources, such as those mentioned above. Importance quantification of such feature blocks plays a key role in tracing information about the target variable back to the acquisition modalities. Information on feature block importance may improve the use of human, technical, and financial resources if systematically integrated into the planning of patient treatment by excluding the acquisition of non-informative features. Since generalizing feature importance measures to block importance is not trivial, this thesis also investigates and compares approaches for feature block importance rankings. This thesis demonstrates that high-dimensional datasets from multiple data sources in the medical domain can be successfully tackled by the presented approaches for feature selection. 
Experimental evaluations demonstrate favorable predictive performance, stability, and interpretability of results, which carries high potential for better data-driven decision support in clinical practice.
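The ensemble mechanism described above - rerunning a base selector on subsamples of the data and keeping only features selected in a large fraction of runs - can be sketched as follows. This is a minimal sketch, not the thesis's code: a univariate correlation screen stands in for the elastic-net models used by RENT, so the vote-aggregation logic is the point, and `ensemble_select` with its thresholds is an illustrative assumption.

```python
import numpy as np

def ensemble_select(X, y, n_runs=50, subsample=0.8, top_k=10, tau=0.6, seed=0):
    """Ensemble feature selection: rerun a base selector on random
    subsamples and keep features chosen in at least `tau` of the runs.
    The base selector here is a simple absolute-correlation screen."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    votes = np.zeros(p)
    for _ in range(n_runs):
        # draw a subsample without replacement and center it
        idx = rng.choice(n, size=int(subsample * n), replace=False)
        ys = y[idx] - y[idx].mean()
        Xc = X[idx] - X[idx].mean(axis=0)
        # |Pearson correlation| of each feature with the target
        denom = Xc.std(axis=0) * ys.std() + 1e-12
        score = np.abs((Xc * ys[:, None]).mean(axis=0) / denom)
        votes[np.argsort(score)[-top_k:]] += 1  # one vote per top-k feature
    # meta-model: keep features that won a tau-fraction of the runs
    return np.flatnonzero(votes / n_runs >= tau)
```

Because aggregation requires a feature to win repeatedly across subsamples, unstable noise features are filtered out even when they top an individual run, which is the stability property the thesis targets.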

    Survey of Developments in North Carolina Law
