    Responsible and Regulatory Conform Machine Learning for Medicine: A Survey of Challenges and Solutions

    Machine learning is expected to fuel significant improvements in medical care. To ensure that fundamental principles such as beneficence, respect for human autonomy, prevention of harm, justice, privacy, and transparency are respected, medical machine learning systems must be developed responsibly. Many high-level declarations of ethical principles have been put forth for this purpose, but there is a severe lack of technical guidelines explicating the practical consequences for medical machine learning. Similarly, there is currently considerable uncertainty regarding the exact regulatory requirements placed upon medical machine learning systems. This survey provides an overview of the technical and procedural challenges involved in creating medical machine learning systems responsibly and in conformity with existing regulations, as well as possible solutions to address these challenges. First, a brief review of existing regulations affecting medical machine learning is provided, showing that properties such as safety, robustness, reliability, privacy, security, transparency, explainability, and nondiscrimination are all already demanded by existing laws and regulations, albeit, in many cases, to an uncertain degree. Next, the key technical obstacles to achieving these desirable properties are discussed, along with important techniques to overcome them in the medical context. We find that distribution shift, spurious correlations, model underspecification, uncertainty quantification, and data scarcity represent severe challenges in this setting. Promising solution approaches include the use of large and representative datasets and federated learning as a means to that end, the careful exploitation of domain knowledge, the use of inherently transparent models, comprehensive out-of-distribution model testing and verification, as well as algorithmic impact assessments.
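
    One of the solution approaches named above, federated learning, lets several sites train a shared model on large and representative data without pooling raw patient records. Below is a minimal sketch of federated averaging, assuming a simple least-squares model, synthetic per-site data, and placeholder hyperparameters (none of this is taken from the survey itself): each site fits the current model on its own records, and only the model parameters, never the data, are sent back and averaged.

    import numpy as np

    def local_update(theta, X, y, lr=0.05, epochs=5):
        """One client runs a few epochs of gradient descent on its local data only."""
        for _ in range(epochs):
            grad = X.T @ (X @ theta - y) / len(y)    # least-squares gradient
            theta = theta - lr * grad
        return theta

    def federated_averaging(client_data, rounds=20):
        """The server averages client models each round; raw data never leaves a site."""
        theta = np.zeros(client_data[0][0].shape[1])
        sizes = np.array([len(y) for _, y in client_data], dtype=float)
        for _ in range(rounds):
            client_models = [local_update(theta.copy(), X, y) for X, y in client_data]
            theta = np.average(client_models, axis=0, weights=sizes)  # size-weighted mean
        return theta

    # Usage with synthetic "hospital" datasets (placeholder data):
    rng = np.random.default_rng(0)
    true_theta = rng.normal(size=3)
    clients = []
    for n in (40, 60, 100):
        X = rng.normal(size=(n, 3))
        clients.append((X, X @ true_theta + 0.1 * rng.normal(size=n)))
    theta_hat = federated_averaging(clients)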

    A Differentially Private Weighted Empirical Risk Minimization Procedure and its Application to Outcome Weighted Learning

    It is commonplace to use data containing personal information to build predictive models in the framework of empirical risk minimization (ERM). While these models can be highly accurate in prediction, results obtained from these models with the use of sensitive data may be susceptible to privacy attacks. Differential privacy (DP) is an appealing framework for addressing such data privacy issues by providing mathematically provable bounds on the privacy loss incurred when releasing information from sensitive data. Previous work has primarily concentrated on applying DP to unweighted ERM. We consider an important generalization to weighted ERM (wERM). In wERM, each individual's contribution to the objective function can be assigned varying weights. In this context, we propose the first differentially private wERM algorithm, backed by a rigorous theoretical proof of its DP guarantees under mild regularity conditions. Extending the existing DP-ERM procedures to wERM paves a path to deriving privacy-preserving learning methods for individualized treatment rules, including the popular outcome weighted learning (OWL). We evaluate the performance of the DP-wERM application to OWL in a simulation study and in a real clinical trial of melatonin for sleep health. All empirical results demonstrate the viability of training OWL models via wERM with DP guarantees while maintaining sufficiently useful model performance. Therefore, we recommend practitioners consider implementing the proposed privacy-preserving OWL procedure in real-world scenarios involving sensitive data.
    Comment: 24 pages and 2 figures for the main manuscript, 5 pages and 2 figures for the supplementary material.
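
    As a minimal sketch of the idea, assuming a weighted, L2-regularized logistic loss and output perturbation as the privatization mechanism (the paper's actual algorithm, sensitivity analysis, and noise calibration may differ), the weighted objective and the added Gaussian noise could look as follows; the function name, weights, and sensitivity bound below are illustrative placeholders.

    import numpy as np

    def dp_weighted_logistic_regression(X, y, w, lam=1.0, epsilon=1.0, delta=1e-5,
                                        lr=0.1, steps=500, rng=None):
        """Minimize the weighted, L2-regularized logistic loss, then add Gaussian
        noise to the minimizer (output perturbation). Noise scale is a placeholder."""
        rng = np.random.default_rng() if rng is None else rng
        theta = np.zeros(X.shape[1])
        w = np.asarray(w, dtype=float)
        w = w / w.sum()                                   # normalize weights to sum to one
        for _ in range(steps):
            probs = 1.0 / (1.0 + np.exp(-(X @ theta)))
            grad = X.T @ (w * (probs - y)) + lam * theta  # weighted logistic gradient
            theta -= lr * grad
        # Placeholder L2-sensitivity bound for the weighted objective; a real
        # implementation would use the bound proved in the paper.
        sensitivity = 2.0 * w.max() / lam
        sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
        return theta + rng.normal(0.0, sigma, size=theta.shape)

    # Illustrative call with synthetic data and outcome-style weights:
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 4))
    y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(float)
    w = rng.uniform(0.5, 2.0, size=200)
    theta_priv = dp_weighted_logistic_regression(X, y, w, epsilon=1.0)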

    Countering Expansion and Organization of Terrorism in Cyberspace

    Terrorists use cyberspace and social media technology to create fear and spread violent ideologies, which poses a significant threat to public security. Researchers have documented the importance of applying law and regulation to criminal activities perpetrated with the aid of computers in cyberspace. Using routine activity theory, this study assessed the effectiveness of technological approaches to mitigating the expansion and organization of terrorism in cyberspace. The study aligned with the purpose-area analysis objective of classifying and assessing potential terrorist threats in order to preempt and mitigate attacks. Data collection included content analysis of open-source documents, government threat assessments, legislation, policy papers, and peer-reviewed academic literature, as well as semistructured interviews with fifteen security experts in Nigeria. Yin's recommended analysis process of iterative and repetitive review of materials was applied to the documents and to the interviews with key public- and private-sector individuals to identify key themes in Nigeria's current effort to secure the nation's cyberspace. The key findings were that a new generation of more technologically savvy terrorists is growing, that cybersecurity technologies are faster and more effective tools, and that bilateral and multilateral cooperation is essential to combat the expansion of terrorism in cyberspace. The implementation of recommendations from this study will improve security in cyberspace, thereby contributing to positive social change. The data provided may be useful to stakeholders responsible for national security, counterterrorism, and law enforcement in choosing cybersecurity technologies to confront terrorist expansion and organization in cyberspace.