
    Are Clinical Decision Support Systems Compatible with Patient-Centred Care?

    Few, if any, of the Clinical Decision Support Systems developed and reported within the informatics literature incorporate patient preferences in the formal and quantitatively analytic way adopted for evidence. Preferences are assumed to be 'taken into account' by the clinician in the associated clinical encounter. Many CDSS produce management recommendations on the basis of embedded algorithms or expert rules. These are often focused on a single criterion, and the preference trade-offs involved have no empirical basis outside an expert panel. After illustrating these points with the Osteoporosis Adviser CDSS from Iceland, we review an ambitious attempt to address both the monocriterial bias and the lack of empirical preference-sensitivity, in the context of Early Rheumatoid Arthritis. It brings together preference data from a Discrete Choice Experiment and the best available evidence to arrive at the percentage of patients who would prefer each treatment from among the listed options. It is suggested that these percentages could help a GRADE panel determine whether to produce a strong or weak recommendation. However, any such group-average, preference-based recommendations are arguably in breach of both the reasonable patient legal standard for informed consent and simple ethical principles. The answer is not to localise, but to personalise, decisions through the use of preference-sensitive multi-criteria decision support tools engaged with at the point of care.
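    The combination of DCE preference data with evidence data can be made concrete with a small sketch. Assuming the DCE is analysed with a standard multinomial-logit model, each option's predicted preference share follows from the estimated part-worth utilities; the attributes, coefficients, and treatment profiles below are hypothetical illustrations, not figures from the reviewed study.

    ```python
    import math

    # Hypothetical part-worth utilities from a multinomial-logit DCE
    # (illustrative values only, not taken from the reviewed study).
    PART_WORTHS = {
        "benefit_high": 1.2,    # high symptom relief
        "benefit_low": 0.0,     # reference level
        "oral": 0.4,            # oral vs. injected administration
        "injected": 0.0,
        "risk_per_pct": -0.08,  # disutility per % point of side-effect risk
    }

    # Treatment options described by evidence-based attribute profiles.
    TREATMENTS = {
        "drug_A": {"benefit": "benefit_high", "route": "injected", "risk_pct": 10},
        "drug_B": {"benefit": "benefit_low",  "route": "oral",     "risk_pct": 3},
    }

    def utility(profile):
        """Linear-additive utility of one treatment profile."""
        return (PART_WORTHS[profile["benefit"]]
                + PART_WORTHS[profile["route"]]
                + PART_WORTHS["risk_per_pct"] * profile["risk_pct"])

    def preference_shares(treatments):
        """Logit choice shares: exp(U_i) / sum_j exp(U_j)."""
        exp_u = {name: math.exp(utility(p)) for name, p in treatments.items()}
        total = sum(exp_u.values())
        return {name: v / total for name, v in exp_u.items()}

    print(preference_shares(TREATMENTS))
    # -> roughly {'drug_A': 0.56, 'drug_B': 0.44}: the 'percentage of
    #    patients who would prefer' each option, as offered to a GRADE panel.
    ```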

    Ethics of the algorithmic prediction of goal of care preferences: from theory to practice

    Artificial intelligence (AI) systems are quickly gaining ground in healthcare and clinical decision-making. However, it is still unclear in what way AI can or should support decision-making that is based on incapacitated patients’ values and goals of care, which often requires input from clinicians and loved ones. Although the use of algorithms to predict patients’ most likely preferred treatment has been discussed in the medical ethics literature, no example has been realised in clinical practice. This is due, arguably, to the lack of a structured approach to the epistemological, ethical and pragmatic challenges arising from the design and use of such algorithms. The present paper offers a new perspective on the problem by suggesting that preference-predicting AIs be viewed as sociotechnical systems with distinctive life-cycles. We explore how both known and novel challenges map onto the different stages of development, highlighting interdisciplinary strategies for their resolution.

    Building Ethically Bounded AI

    The more AI agents are deployed in scenarios with possibly unexpected situations, the more they need to be flexible, adaptive, and creative in achieving the goal we have given them. Thus, a certain level of freedom to choose the best path to the goal is inherent in making AI robust and flexible enough. At the same time, however, the pervasive deployment of AI in our lives, whether AI is autonomous or collaborating with humans, raises several ethical challenges. AI agents should be aware of and follow appropriate ethical principles, and should thus exhibit properties such as fairness or other virtues. These ethical principles should define the boundaries of AI's freedom and creativity. However, it is still a challenge to understand how to specify and reason with ethical boundaries in AI agents, and how to combine them appropriately with subjective preferences and goal specifications. Some initial attempts employ either a data-driven, example-based approach for both, or a symbolic, rule-based approach for both. We envision a modular approach where any AI technique can be used for any of these essential ingredients in decision making or decision support systems, paired with a contextual approach to define their combination and relative weight. In a world where neither humans nor AI systems work in isolation, but are tightly interconnected, e.g., in the Internet of Things, we also envision a compositional approach to building ethically bounded AI, where the ethical properties of each component can be fruitfully exploited to derive those of the overall system. In this paper we define and motivate the notion of ethically bounded AI, we describe two concrete examples, and we outline some outstanding challenges. (Comment: Published at the AAAI Blue Sky Track; winner of a Blue Sky Award.)
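    A minimal sketch of the modular idea, under the assumption that ethical principles act as hard constraints filtering the feasible actions before subjective preferences rank what remains; the action space, the rule, and the preference scores are invented for illustration, and either module could be swapped for a data-driven one.

    ```python
    from typing import Callable, Iterable

    # Hypothetical action space for a delivery robot (illustrative only).
    ACTIONS = ["cut_through_park", "use_sidewalk", "use_bike_lane"]

    def is_permitted(action: str) -> bool:
        """Symbolic ethical module: pedestrian-only areas are off limits."""
        return action not in {"cut_through_park"}

    def preference_score(action: str) -> float:
        """Subjective preference module; a stand-in for a learned utility."""
        return {"cut_through_park": 0.9, "use_sidewalk": 0.3, "use_bike_lane": 0.6}[action]

    def ethically_bounded_choice(actions: Iterable[str],
                                 permitted: Callable[[str], bool],
                                 score: Callable[[str], float]) -> str:
        """Filter by the ethical module, then maximise preference.

        The two modules are independent, so any AI technique can implement
        either one, in line with the modular approach envisioned above."""
        feasible = [a for a in actions if permitted(a)]
        if not feasible:
            raise ValueError("no ethically permissible action available")
        return max(feasible, key=score)

    print(ethically_bounded_choice(ACTIONS, is_permitted, preference_score))
    # -> 'use_bike_lane': the most preferred among the permitted actions
    ```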

    Modeling Epistemological Principles for Bias Mitigation in AI Systems: An Illustration in Hiring Decisions

    Artificial Intelligence (AI) has been used extensively in automatic decision making in a broad variety of scenarios, ranging from credit ratings for loans to recommendations of movies. Traditional design guidelines for AI models focus essentially on accuracy maximization, but recent work has shown that economically irrational and socially unacceptable scenarios of discrimination and unfairness are likely to arise unless these issues are explicitly addressed. This undesirable behavior has several possible sources, such as biased training datasets whose effects may go undetected in black-box models. After pointing out connections between such bias in AI and the problem of induction, we focus on Popper's contributions after Hume's, which offer a logical theory of preferences: an AI model can be preferred over others on purely rational grounds after one or more attempts at refutation based on accuracy and fairness. Inspired by such epistemological principles, this paper proposes a structured approach to mitigate the discrimination and unfairness caused by bias in AI systems. In the proposed computational framework, models are selected and enhanced after attempts at refutation. To illustrate our discussion, we focus on hiring decision scenarios, where an AI system filters which job applicants should go on to the interview phase.
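    A minimal sketch of the refutation loop under stated assumptions: candidate models play the role of conjectures, and a model is refuted if it fails either an accuracy test or a fairness test (here, a demographic-parity gap on a protected attribute); the thresholds, data layout, and group labels are hypothetical.

    ```python
    from dataclasses import dataclass
    from typing import Callable, List, Sequence

    @dataclass
    class Applicant:
        features: Sequence[float]
        group: str   # protected attribute, e.g. "a" or "b"
        label: int   # 1 = should be invited to interview

    Model = Callable[[Sequence[float]], int]

    def accuracy(model: Model, data: List[Applicant]) -> float:
        return sum(model(x.features) == x.label for x in data) / len(data)

    def parity_gap(model: Model, data: List[Applicant]) -> float:
        """|P(invited | group a) - P(invited | group b)|."""
        rate = {}
        for g in ("a", "b"):
            members = [x for x in data if x.group == g]
            rate[g] = sum(model(x.features) for x in members) / len(members)
        return abs(rate["a"] - rate["b"])

    def surviving_models(candidates: List[Model], data: List[Applicant],
                         min_acc: float = 0.8, max_gap: float = 0.1) -> List[Model]:
        """Popper-style selection: keep only the models that survive both
        attempted refutations; refuted models go back for enhancement."""
        return [m for m in candidates
                if accuracy(m, data) >= min_acc and parity_gap(m, data) <= max_gap]
    ```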

    Designing Normative Theories for Ethical and Legal Reasoning: LogiKEy Framework, Methodology, and Tool Support

    A framework and methodology, termed LogiKEy, for the design and engineering of ethical reasoners, normative theories, and deontic logics is presented. The overall motivation is the development of suitable means for the control and governance of intelligent autonomous systems. LogiKEy's unifying formal framework is based on semantical embeddings of deontic logics, logic combinations, and ethico-legal domain theories in expressive classical higher-order logic (HOL). This meta-logical approach enables the provision of powerful tool support in LogiKEy: off-the-shelf theorem provers and model finders for HOL assist the LogiKEy designer of ethical intelligent agents in flexibly experimenting with underlying logics and their combinations, with ethico-legal domain theories, and with concrete examples, all at the same time. Continuous improvements to these off-the-shelf provers directly improve the reasoning performance available in LogiKEy. Case studies in which the LogiKEy framework and methodology have been applied and tested give evidence that HOL's undecidability often does not hinder efficient experimentation. (Comment: 50 pages; 10 figures.)
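    The flavour of a semantical embedding can be sketched in two lines; the encoding below is the textbook Kripke-style embedding of the standard deontic obligation operator into HOL, given here as a simplified illustration rather than LogiKEy's exact formalisation.

    ```latex
    % Formulas become predicates on possible worlds (type i -> o);
    % obligation quantifies over the worlds ideal relative to w:
    \[
      \mathbf{O}\varphi \;\equiv\; \lambda w.\ \forall v.\ R\,w\,v \rightarrow \varphi\,v
    \]
    % Postulating seriality of R validates the deontic axiom D
    % (nothing is both obligatory and forbidden):
    \[
      \forall w.\ \exists v.\ R\,w\,v
      \qquad\Longrightarrow\qquad
      \vDash\ \mathbf{O}\varphi \rightarrow \neg\,\mathbf{O}\neg\varphi
    \]
    ```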

    ‘Regional production’ and ‘Fairness’ in organic farming: Evidence from a CORE Organic project

    The CORE Organic pilot project ‘Farmer Consumer Partnerships’ aims to strengthen the partnership between producers and consumers through better communication. To achieve this, the project first mapped the concerns of organic stakeholders in relation to a broad range of ethical values, then compared them with the European regulations for organic food, before testing a limited number of communication arguments related to those concerns with consumers. Stakeholders in organic supply chains refer to a broad range of values that include concerns about systems integrity, regional origin, and fairness issues, as well as impacts on the environment, on animals, and on society. Several concerns, including the regional origin of products and fairness issues, are not directly covered by the European Regulation for organic food. Seven different ethical attributes were tested with about 1,200 consumers in relation to product prices and additional premiums by means of an Information-Display-Matrix (IDM) in five European countries. In all countries, the most important ethical attributes to consumers turned out to be 'animal welfare', 'regional production' and 'fair prices for farmers'. It is concluded that communicating the ethical quality of organic products produced in ways that go beyond the requirements of the European Regulation represents an opportunity for differentiation in an increasingly competitive market. Increasing transparency could be a first step in facing the difficulties of defining mandatory standards, particularly regarding ‘fairness’ and ‘local/regional production’.

    Using the stated preference method for the calculation of social discount rate

    The aim of this paper is to build the stated preference method into the social discount rate methodology. The first part of the paper presents the results of a survey of stated time preferences, elicited through pair-choice decision situations for various topics and time horizons. It is assumed that stated time preferences differ from calculated time preferences and that the magnitude of the stated rates depends on the time period and on how much respondents are financially and emotionally involved in the transactions. A significant question remains: how can the gap between the calculations and the survey results be resolved, and how can the actual time preferences of individuals be captured by a social time preference rate? The second part of the paper estimates the social time preference rate for Hungary using the results of the survey, paying special attention to the pure time preference component. The results suggest that the current method of calculating the pure time preference rate does not reflect individuals' real attitudes towards future generations.
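    For orientation, the social time preference rate in this literature is conventionally decomposed Ramsey-style, with the pure time preference component the paper re-estimates appearing as the first term; the formula below is the standard textbook form, not one quoted from the paper.

    ```latex
    % Ramsey-style social time preference rate:
    %   rho : pure time preference rate (the component examined here)
    %   mu  : elasticity of the marginal utility of consumption
    %   g   : expected growth rate of per-capita consumption
    \[
      \mathit{STPR} \;=\; \rho \;+\; \mu \, g
    \]
    ```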

    Trust your instincts: The relationship between intuitive decision making and happiness

    Epstein (1994; 2003) proposed that there are two cognitive information processing systems that operate in parallel: the intuitive thinking style and the rational thinking style. Decisional fit occurs when the preferred thinking style is applied to making a decision, and research has shown that this fit increases the value of the outcome of a decision. Additionally, decisional fit leads to less regret, even when post hoc evaluations show the decision to be incorrect. It has not yet been determined whether decisional fit correlates with greater happiness; hence, the purpose of the current study was to investigate the differences between styles of thinking, styles of decision making, and the impact of decisional fit on happiness scores. Individual differences in thinking and decision style were measured using an online interactive questionnaire (N = 100), and an ANOVA, hierarchical multiple regression, and a series of t-tests were used to investigate the relationship between thinking style, decision style, decisional fit, and happiness, thereby addressing a gap in the existing literature. The major findings of the current study show that intuitive thinking has a strong positive correlation with happiness; that intuitive thinkers are more likely than rational thinkers to use an intuitive decision style; and that when both rational and intuitive thinkers experienced decisional fit, higher ratings of happiness were reported. Explanations and recommendations for future studies are outlined in the discussion.