
    The US Algorithmic Accountability Act of 2022 vs. The EU Artificial Intelligence Act: what can they learn from each other?

    On the whole, the US Algorithmic Accountability Act of 2022 (US AAA) is a pragmatic approach to balancing the benefits and risks of automated decision systems. Yet there is still room for improvement. This commentary highlights how the US AAA can both inform and learn from the European Artificial Intelligence Act (EU AIA).

    The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies

    Artificial intelligence (AI) has the potential to revolutionize the drug discovery process, offering improved efficiency, accuracy, and speed. However, the successful application of AI depends on the availability of high-quality data, on addressing ethical concerns, and on recognizing the limitations of AI-based approaches. In this article, the benefits, challenges, and drawbacks of AI in this field are reviewed, and possible strategies and approaches for overcoming the present obstacles are proposed. The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods, as well as the potential advantages of AI in pharmaceutical research, are also discussed. Overall, this review highlights the potential of AI in drug discovery and provides insights into the challenges and opportunities for realizing its potential in this field. Note from the human authors: This article was created to test the ability of ChatGPT, a chatbot based on the GPT-3.5 language model, to assist human authors in writing review articles. The text generated by the AI following our instructions (see Supporting Information) was used as a starting point, and its ability to automatically generate content was evaluated. After conducting a thorough review, the human authors practically rewrote the manuscript, striving to maintain a balance between the original proposal and scientific criteria. The advantages and limitations of using AI for this purpose are discussed in the last section.
    Comment: 11 pages, 1 figure
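    As a concrete illustration of one strategy the review names, explainable AI, the sketch below computes permutation feature importance for a hypothetical property-prediction model trained on synthetic molecular descriptors. The descriptor names and data are invented for this example and do not come from the article.

    # A minimal, hypothetical sketch of explainable AI for property prediction:
    # permutation importance on synthetic descriptors, not real assay data.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(42)
    descriptors = ["mol_weight", "logP", "h_donors", "h_acceptors"]  # assumed names
    X = rng.normal(size=(300, len(descriptors)))
    # Synthetic target driven mostly by the "logP" column.
    y = 2.0 * X[:, 1] - 0.5 * X[:, 0] + rng.normal(scale=0.1, size=300)

    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
    imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for name, mean in sorted(zip(descriptors, imp.importances_mean),
                             key=lambda t: -t[1]):
        print(f"{name}: {mean:.3f}")  # which descriptor drives the predictions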

    Achieving MAX-MIN Fair Cross-efficiency scores in Data Envelopment Analysis

    Algorithmic decision making is gaining popularity in today's business. The need for fast, accurate, and complex decisions forces decision-makers to rely on algorithms. However, algorithms can create unwanted bias or undesired consequences that could otherwise be averted. In this paper, we propose a MAX-MIN fair cross-efficiency data envelopment analysis (DEA) model that solves the problem of high-variance cross-efficiency scores. The MAX-MIN cross-efficiency procedure is in accordance with John Rawls's theory of justice, allowing efficiency and cross-efficiency estimation such that the greatest benefit of the least-advantaged decision-making unit (DMU) is achieved. The proposed mathematical model is tested on a healthcare-related dataset. The results suggest that the proposed method solves several issues of cross-efficiency scores. First, it enables full rankings by discriminating between the efficiency scores of DMUs. Second, the variance of cross-efficiency scores is reduced. Finally, fairness is introduced through optimization of the minimal efficiency scores.
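    Since the abstract builds on cross-efficiency scores, the sketch below computes standard CCR DEA efficiencies and a cross-efficiency matrix on toy data. It illustrates the quantities the paper starts from, not the authors' MAX-MIN secondary-goal model; the input/output values are invented.

    # A minimal sketch of CCR DEA and cross-efficiency with scipy.optimize.linprog.
    # The paper's MAX-MIN secondary-goal procedure is not reproduced here.
    import numpy as np
    from scipy.optimize import linprog

    # Toy data: 4 DMUs, 2 inputs (X) and 1 output (Y). Hypothetical values.
    X = np.array([[2.0, 3.0], [4.0, 2.0], [3.0, 5.0], [5.0, 4.0]])  # shape (n, m)
    Y = np.array([[1.0], [2.0], [1.5], [2.5]])                      # shape (n, s)
    n, m = X.shape
    s = Y.shape[1]

    weights = []  # optimal (u, v) multipliers for each evaluating DMU
    for o in range(n):
        # Variables z = [u_1..u_s, v_1..v_m]; maximize u . Y[o] <=> minimize -u . Y[o]
        c = np.concatenate([-Y[o], np.zeros(m)])
        A_eq = np.concatenate([np.zeros(s), X[o]]).reshape(1, -1)  # v . X[o] = 1
        b_eq = [1.0]
        A_ub = np.hstack([Y, -X])   # u . Y[j] - v . X[j] <= 0 for every DMU j
        b_ub = np.zeros(n)
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
        weights.append((res.x[:s], res.x[s:]))

    # Cross-efficiency: DMU j evaluated with DMU o's optimal weights.
    E = np.array([[(u @ Y[j]) / (v @ X[j]) for j in range(n)] for (u, v) in weights])
    print("cross-efficiency matrix:\n", np.round(E, 3))
    print("mean cross-efficiency per DMU:", np.round(E.mean(axis=0), 3))
    # The MAX-MIN procedure would pick, among alternative optimal weights, those
    # that maximize the smallest cross-efficiency score (a Rawlsian criterion).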

    Detecting discriminatory risk through data annotation based on Bayesian inferences

    Thanks to the growth of computational power and data availability, research in machine learning has advanced with tremendous rapidity. Nowadays, the majority of automatic decision-making systems are based on data. However, it is well known that machine learning systems can produce problematic results if they are built on partial or incomplete data. In fact, in recent years several studies have found a convergence of issues related to the ethics and transparency of these systems in the process of data collection and how the data are recorded. Although rigorous data collection and analysis are fundamental to model design, this step is still largely overlooked by the machine learning community. For this reason, we propose a method of data annotation based on Bayesian statistical inference that aims to warn about the risk of discriminatory results in a given dataset. In particular, our method aims to deepen knowledge and promote awareness about the sampling practices employed to create the training set, highlighting that the probability of success or failure conditioned on minority membership is determined by the structure of the available data. We empirically test our system on three datasets commonly used by the machine learning community and investigate the risk of racial discrimination.
    Comment: 11 pages, 8 figures
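    To make the conditional probability the authors refer to concrete, the sketch below fits a Beta-Binomial posterior for P(positive outcome | group membership) on synthetic binary data. This is an assumed illustration in the spirit of the method, not the authors' annotation pipeline.

    # A minimal sketch, not the paper's pipeline: Beta-Binomial posteriors for the
    # positive-outcome rate conditioned on group membership, on synthetic data.
    import numpy as np
    from scipy.stats import beta

    rng = np.random.default_rng(0)
    # Hypothetical dataset: group (0 = majority, 1 = minority) and binary outcome,
    # with a disparity built in on purpose.
    group = rng.integers(0, 2, size=1000)
    outcome = rng.binomial(1, np.where(group == 1, 0.35, 0.55))

    for g in (0, 1):
        y = outcome[group == g]
        k, n = y.sum(), y.size
        post = beta(1 + k, 1 + n - k)        # uniform Beta(1, 1) prior
        lo, hi = post.ppf([0.025, 0.975])
        print(f"group {g}: posterior mean {post.mean():.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
    # Clearly separated credible intervals would be annotated as a discriminatory risk.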
