
    Robust Cox Regression as an Alternative Method to Estimate Adjusted Relative Risk in Prospective Studies with Common Outcomes

    Objective: To demonstrate the use of robust Cox regression for estimating adjusted relative risks (and confidence intervals) when all participants have an identical follow-up time and a common outcome is investigated. Methods: In this paper, we propose an alternative statistical method, robust Cox regression, to estimate adjusted relative risks in prospective studies. We use simulated cohort data to examine the suitability of robust Cox regression. Results: Robust Cox regression provides estimates equivalent to those of modified Poisson regression: regression coefficients, relative risks, 95% confidence intervals, and P values. It also yields sensible predicted probabilities (bounded by 0 and 1). Unlike modified Poisson regression, robust Cox regression allows for four automatic variable-selection methods, directly computes adjusted relative risks for continuous variables, and can incorporate time-dependent covariates. Conclusion: Given the popularity of Cox regression in the medical and epidemiological literature, we believe that robust Cox regression may gain wider acceptance and application in the future. We recommend robust Cox regression as an alternative analytical tool to modified Poisson regression. In this study we demonstrated its utility for estimating adjusted relative risks for common outcomes in prospective studies with two or three similarly spaced waves of data collection.

    Mathematical modeling in the health risk assessment of air pollution-related disease burden in China: A review

    This review provides an overview of the air pollution-related disease burden in China and a literature review of recent studies that have adopted a mathematical modeling approach to estimate the relative risk (RR) of air pollution-related disease burden. The associations between air pollution and disease burden have been explored in previous studies; it is therefore necessary to quantify the impact of long-term exposure to ambient air pollution using a suitable mathematical model. The most common way of estimating the health risk attributable to air pollution exposure in a population is to employ a concentration-response function, which is often based on the estimation of an RR model. As most regions in China are experiencing rapid urbanization and industrialization, the resulting high ambient air pollution affects more residents and increases the disease burden in the population. The existing RR models, including the integrated exposure-response (IER) model and the global exposure mortality model (GEMM), are critically reviewed to provide an understanding of the current status of mathematical modeling in air pollution-related health risk assessment. The performance of different RR models in estimating disease mortality is also studied and compared in this paper. Furthermore, the limitations of the existing RR models are pointed out and discussed. Consequently, there is a need to develop a more suitable RR model to accurately estimate the disease burden attributable to air pollution in China, which contributes to one of the key steps in the health risk assessment.
By using an updated RR model in the health risk assessment, the estimated mortality risk due to environmental impacts such as air pollution and seasonal temperature variation could provide more realistic and reliable information on the mortality of the region, which would help regional and national policymakers intensify their efforts to improve air quality and manage the air pollution-related disease burden.
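As a sketch of the IER functional form the review discusses, the snippet below evaluates RR(z) = 1 + α(1 − exp(−γ(z − z_cf)^δ)) above a counterfactual concentration z_cf and converts it to a population attributable fraction; the parameter values are purely illustrative, not fitted estimates from any cited study:

```python
import math

def ier_relative_risk(z, alpha, gamma, delta, z_cf):
    """IER-form relative risk as a function of pollutant concentration z
    (e.g. annual-mean PM2.5 in ug/m3). alpha, gamma, delta are
    cause-specific shape parameters; z_cf is the counterfactual
    concentration below which no excess risk is assumed."""
    if z <= z_cf:
        return 1.0
    return 1.0 + alpha * (1.0 - math.exp(-gamma * (z - z_cf) ** delta))

# Illustrative (not fitted) parameters
rr = ier_relative_risk(z=50.0, alpha=1.5, gamma=0.05, delta=1.0, z_cf=5.8)
paf = (rr - 1.0) / rr          # population attributable fraction
attributable = paf * 120_000   # times a hypothetical baseline death count
print(rr, paf, attributable)
```

The attributable-fraction step is the standard bridge from an RR model to the burden (attributable-mortality) estimates the review compares across models.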

    Sex and gender differences and biases in artificial intelligence for biomedicine and healthcare

    Precision Medicine implies a deep understanding of inter-individual differences in health and disease that are due to genetic and environmental factors. To acquire such understanding there is a need for the implementation of different types of technologies based on artificial intelligence (AI) that enable the identification of biomedically relevant patterns, facilitating progress towards individually tailored preventative and therapeutic interventions. Despite the significant scientific advances achieved so far, most of the currently used biomedical AI technologies do not account for bias detection. Furthermore, the design of the majority of algorithms ignores the sex and gender dimension and its contribution to health and disease differences among individuals. Failure to account for these differences will generate sub-optimal results and produce mistakes as well as discriminatory outcomes. In this review we examine the current sex and gender gaps in a subset of biomedical technologies used in relation to Precision Medicine. In addition, we provide recommendations to optimize their utilization to improve the global health and disease landscape and decrease inequalities. This work is written on behalf of the Women’s Brain Project (WBP) (www.womensbrainproject.com/), an international organization advocating for women’s brain and mental health through scientific research, debate and public engagement. The authors would like to gratefully acknowledge Maria Teresa Ferretti and Nicoletta Iacobacci (WBP) for the scientific advice and insightful discussions; Roberto Confalonieri (Alpha Health) for reviewing the manuscript; and the Bioinfo4Women programme of Barcelona Supercomputing Center (BSC) for the support.
This work has been supported by the Spanish Government (SEV 2015–0493) and grant PT17/0009/0001, of the Acción Estratégica en Salud 2013–2016 of the Programa Estatal de Investigación Orientada a los Retos de la Sociedad, funded by the Instituto de Salud Carlos III (ISCIII) and the European Regional Development Fund (ERDF). EG has received funding from the Innovative Medicines Initiative 2 (IMI2) Joint Undertaking under grant agreement No 116030 (TransQST), which is supported by the European Union’s Horizon 2020 research and innovation programme and the European Federation of Pharmaceutical Industries and Associations (EFPIA). Peer Reviewed. Postprint (published version).

    Artificial Intelligence Enabled Project Management: A Systematic Literature Review

    In the Industry 5.0 era, companies are leveraging the potential of cutting-edge technologies such as artificial intelligence for more efficient and green human-centric production. In a similar approach, project management would benefit from artificial intelligence in order to achieve project goals by improving project performance and, consequently, achieving more sustainable success. In this context, this paper examines the role of artificial intelligence in emerging project management through a systematic literature review; the applications of AI techniques in the project management performance domains are presented. The results show that the number of influential publications on artificial intelligence-enabled project management has increased significantly over the last decade. The findings indicate that artificial intelligence, predominantly machine learning, can be considerably useful in the management of construction and IT projects; it is notably encouraging for enhancing the planning, measurement, and uncertainty performance domains by providing promising forecasting and decision-making capabilities.

    Law Informs Code: A Legal Informatics Approach to Aligning Artificial Intelligence with Humans

    We are currently unable to specify human goals and societal values in a way that reliably directs AI behavior. Law-making and legal interpretation form a computational engine that converts opaque human values into legible directives. "Law Informs Code" is the research agenda embedding legal knowledge and reasoning in AI. Similar to how parties to a legal contract cannot foresee every potential contingency of their future relationship, and legislators cannot predict all the circumstances under which their proposed bills will be applied, we cannot ex ante specify rules that provably direct good AI behavior. Legal theory and practice have developed arrays of tools to address these specification problems. For instance, legal standards allow humans to develop shared understandings and adapt them to novel situations. In contrast to more prosaic uses of the law (e.g., as a deterrent of bad behavior through the threat of sanction), when leveraged as an expression of how humans communicate their goals and what society values, Law Informs Code. We describe how data generated by legal processes (methods of law-making, statutory interpretation, contract drafting, applications of legal standards, legal reasoning, etc.) can facilitate the robust specification of inherently vague human goals. This increases human-AI alignment and the local usefulness of AI. Toward society-AI alignment, we present a framework for understanding law as the applied philosophy of multi-agent alignment. Although law is partly a reflection of historically contingent political power, and thus not a perfect aggregation of citizen preferences, if properly parsed, its distillation offers the most legitimate computational comprehension of societal values available. If law eventually informs powerful AI, engaging in the deliberative political process to improve law takes on even more meaning. Comment: Forthcoming in Northwestern Journal of Technology and Intellectual Property, Volume 2

    Predictive Contracting

    This Article examines how contract drafters can use data on contract outcomes to inform contract design. Building on recent developments in contract data collection and analysis, the Article proposes “predictive contracting,” a new method of contracting in which contract drafters can design contracts using a technology system that helps predict the connections between contract terms and outcomes. Predictive contracting will be powered by machine learning and draw on contract data obtained from integrated contract management systems, natural language processing, and computable contracts. The Article makes both theoretical and practical contributions to the contracts literature. On a theoretical level, predictive contracting can lead to greater customization, increased innovation, more complete contract design, more effective balancing of front-end and back-end costs, better risk assessment and allocation, and more accurate term pricing for negotiation. On a practical level, predictive contracting has the potential to significantly alter the role of transactional lawyers by providing them with access to previously unavailable information on the statistical connections between contract terms and outcomes. In addition to these theoretical and practical contributions, the Article also anticipates and addresses limitations and risks of predictive contracting, including technical constraints, concerns regarding data privacy and confidentiality, the regulation of the unauthorized practice of law, and the potential for exacerbating information inequality.

    Parametric Estimation in Competing Risks and Multi-State Models

    Research on Alzheimer's disease typically involves a series of cognitive states. Multi-state models are often used to describe the history of disease evolution. Competing risks models are a sub-category of multi-state models with one starting state and several absorbing states. Analyses of competing risks data in medical papers frequently assume independent risks and evaluate covariate effects on these events by fitting distinct proportional hazards regression models for each event. Jeong and Fine (2007) proposed a parametric proportional sub-distribution hazard (SH) model for cumulative incidence functions (CIF) without assumptions about the dependence among the risks. We modified their model to ensure that the sum of the underlying CIFs never exceeds one, by assuming a proportional SH model for dementia only in the Nun Study. To accommodate left-censored data, we computed the non-parametric MLE of the CIF based on the Expectation-Maximization algorithm. Our proposed parametric model was applied to the Nun Study to investigate the effect of genetics and education on the occurrence of dementia. After including left-censored dementia subjects, the incidence rate of dementia becomes larger than that of death for age < 90, education becomes a significant factor for the incidence of dementia, and the standard errors of the estimates are smaller. A multi-state Markov model is often used to analyze the evolution of cognitive states by assuming time-independent transition intensities. We consider both constant and duration-dependent transition intensities in the BRAiNS data, leading to a mixture of Markov and semi-Markov processes. The joint probability of observing a sequence of the same state until transition in a semi-Markov process was expressed as a product of the overall transition probability and a survival probability, which were modeled simultaneously.
Such modeling leads to different interpretations in the BRAiNS study: in the traditional Markov model, family history, APOE4, and a sex-by-head-injury interaction are significant factors for the transition intensities, whereas in our semi-Markov model these factors are significant predictors of the overall transition probabilities, but none is significant for the duration-time distribution.
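The parametric SH model and the EM-based estimator for left censoring are not reproduced here. As a minimal sketch of the quantity being modeled, the following computes the nonparametric cumulative incidence function (Aalen-Johansen form) for right-censored competing risks data on a toy dataset; by construction the cause-specific CIFs plus the overall survival sum to one, which is the constraint the paper's modification enforces parametrically:

```python
from collections import Counter

def cumulative_incidence(times, causes):
    """Nonparametric cumulative incidence (Aalen-Johansen form) for
    competing risks. times[i]: event/censoring time; causes[i]: 0 if
    censored, k > 0 for an event of cause k. Returns {cause: CIF at
    the last observed time}."""
    n_at_risk = len(times)
    surv = 1.0             # overall survival just before time t
    cif = Counter()
    for t in sorted(set(times)):
        at_t = [causes[i] for i in range(len(times)) if times[i] == t]
        d_total = sum(1 for c in at_t if c != 0)
        for c in at_t:
            if c != 0:
                cif[c] += surv / n_at_risk   # one event of cause c at t
        surv *= 1.0 - d_total / n_at_risk
        n_at_risk -= len(at_t)               # events and censorings leave
    return dict(cif)

# Toy data: causes 1 and 2 compete; the last subject is censored (cause 0)
print(cumulative_incidence([1, 2, 3, 4], [1, 2, 1, 0]))
```

On this toy data the CIF for cause 1 reaches 0.5 and for cause 2 reaches 0.25, with 0.25 of overall survival remaining.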

    Self-Supervised Time-to-Event Modeling with Structured Medical Records

    Time-to-event (TTE) models are used in medicine and other fields for estimating the probability distribution of the time until a specific event occurs. TTE models provide many advantages over classification using fixed time horizons, including naturally handling censored observations, but require more parameters and are challenging to train in settings with limited labeled data. Existing approaches, e.g., proportional hazards or accelerated failure time, employ distributional assumptions to reduce parameters but are vulnerable to model misspecification. In this work, we address these challenges with MOTOR (Many Outcome Time Oriented Representations), a self-supervised model that leverages temporal structure found in collections of timestamped events in electronic health records (EHR) and health insurance claims. MOTOR uses a TTE pretraining objective that predicts the probability distribution of times when events occur, making it well-suited to transfer learning for medical prediction tasks. Having pretrained on EHR and claims data of up to 55M patient records (9B clinical events), we evaluate performance after finetuning on 19 tasks across two datasets. Task-specific models built using MOTOR improve time-dependent C statistics by 4.6% over the state of the art while greatly improving sample efficiency, achieving comparable performance to existing methods using only 5% of the available task data.
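The abstract does not spell out MOTOR's exact pretraining loss. As a generic sketch of the discrete-time TTE likelihood that such objectives typically build on, the following scores one record's per-bin hazard predictions, handling censoring naturally (a censored record only contributes "survived these bins" terms):

```python
import math

def discrete_tte_nll(hazards, event_bin, observed):
    """Negative log-likelihood of one record under a discrete-time TTE model.
    hazards[j]: predicted P(event in bin j | event-free up to bin j).
    event_bin: bin where the event (or censoring) occurred.
    observed: True if the event happened, False if the record is censored."""
    ll = sum(math.log(1.0 - hazards[j]) for j in range(event_bin))  # survived bins
    if observed:
        ll += math.log(hazards[event_bin])        # event occurred in this bin
    else:
        ll += math.log(1.0 - hazards[event_bin])  # still event-free when censored
    return -ll

h = [0.2, 0.4]                       # hypothetical per-bin hazards
print(discrete_tte_nll(h, 1, True))  # event in bin 1
print(discrete_tte_nll(h, 1, False)) # censored in bin 1 (smaller loss)
```

In a pretrained model like the one described, a neural network would emit these per-bin hazards for many event types at once; the scoring rule itself is the standard censoring-aware survival likelihood shown here.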