    AI, Algorithms, and Awful Humans

    A profound shift is occurring in the way many decisions are made, with machines taking greater roles in the decision-making process. Two arguments are often advanced to justify the increasing use of automation and algorithms in decisions. The “Awful Human Argument” asserts that human decision-making is often awful and that machines can decide better than humans. Another argument, the “Better Together Argument,” posits that machines can augment and improve human decision-making. These arguments exert a powerful influence on law and policy. In this Essay, we contend that in the context of making decisions about humans, these arguments are far too optimistic. We argue that machine and human decision-making are not readily compatible, making the integration of human and machine decision-making extremely complicated. It is wrong to view machines as deciding like humans do, only better because they are supposedly cleansed of bias. Machines decide fundamentally differently, and bias often persists. These differences are especially pronounced when decisions require a moral or value judgment or involve human lives and behavior. Making decisions about humans involves special emotional and moral considerations that algorithms are not yet prepared to make—and might never be able to make. Automated decisions often rely too heavily on quantifiable data to the exclusion of qualitative data, resulting in a change to the nature of the decision itself. Whereas certain matters, such as the weather, might be readily reducible to quantifiable data, human lives are far more complex. Human and machine decision-making often do not mix well, and humans often perform badly when reviewing algorithmic output. We contend that algorithmic decision-making is being relied upon too eagerly and with insufficient skepticism. For decisions about humans, there are important considerations that must be better appreciated before these decisions are delegated, in whole or in part, to machines.

    The Prediction Society: Algorithms and the Problems of Forecasting the Future

    Predictions about the future have been made since the earliest days of humankind, but today we are living in a brave new world of prediction. Today’s predictions are produced by machine learning algorithms that analyze massive quantities of personal data, and important decisions about people are increasingly being made based on these predictions. Algorithmic predictions are a type of inference. Many laws struggle to account for inferences, and even when they do, the laws lump all inferences together. But as we argue in this Article, predictions are different from other inferences and raise several unique problems that current law is ill-suited to address. First, algorithmic predictions create a fossilization problem: they reinforce patterns in past data and can further solidify bias and inequality from the past. Second, algorithmic predictions often raise an unfalsifiability problem. Predictions involve an assertion about future events; until those events happen, predictions remain unverifiable, leaving individuals unable to challenge them as false. Third, algorithmic predictions can involve a preemptive intervention problem, where decisions or interventions render it impossible to determine whether the predictions would have come true. Fourth, algorithmic predictions can lead to a self-fulfilling prophecy problem, where they actively shape the future they aim to forecast. More broadly, the rise of algorithmic predictions raises an overarching concern: algorithmic predictions not only forecast the future but also have the power to create and control it. The increasing pervasiveness of decisions based on algorithmic predictions is leading to a prediction society in which individuals’ ability to author their own future is diminished, while the organizations developing and using predictive systems gain greater power to shape the future. Privacy and data protection law do not adequately address algorithmic predictions. Many laws lack a temporal dimension and do not distinguish between predictions about the future and inferences about the past or present, yet predictions about the future involve considerations not implicated by other types of inferences. Many laws provide correction rights and duties of accuracy that are insufficient to address problems arising from predictions, which exist in the twilight between truth and falsehood. Individual rights and anti-discrimination law are also unable to address the unique problems of algorithmic predictions. We argue that the use of algorithmic predictions is a distinct issue warranting different treatment from other types of inference, and we examine the issues laws must consider when addressing the problems of algorithmic predictions.

    Heme breakdown and ischemia/reperfusion injury in grafted liver during living donor liver transplantation

    Living donor liver transplantation (LDLT) requires ischemia/reperfusion (I/R), which can cause early graft injury; however, the detailed mechanism of I/R injury remains unknown. Heme oxygenase-1 (HO-1) is the rate-limiting enzyme in heme catabolism, producing iron, carbon monoxide (CO), and biliverdin IXα. In animals, HO-1 has a protective effect against the oxidative stress associated with I/R injury, but in humans its molecular mechanism and clinical significance remain unclear. We previously demonstrated that exhaled CO levels increase during LDLT and postulated that this may indicate I/R injury. In this study, we elucidate the origin of the increased exhaled CO levels and the role of HO-1 in I/R injury during LDLT. We studied 29 LDLT donors and 29 recipients. Liver biopsies of the grafted liver were taken twice, once before and once after I/R, to investigate HO-1 gene expression by polymerase chain reaction and HO-1 localization by immunohistological staining. Exhaled CO levels and HO-1 gene expression levels significantly increased after I/R, and HO-1 levels significantly increased after I/R in Kupffer cells. Furthermore, we found a significant positive correlation between exhaled CO levels and HO-1 gene expression levels. These results indicate that increased heme breakdown in the grafted liver is the source of the increased exhaled CO levels. We also found a significant relationship between HO-1 gene expression levels and alanine aminotransferase (ALT) levels: the higher the HO-1 gene expression, the higher the ALT levels. These results suggest that HO-1-mediated heme breakdown is caused by I/R during LDLT, as it is associated with increased exhaled CO levels and liver damage.

    Cross-cultural research methods
